Jan 31 06:09:08 localhost kernel: Linux version 5.14.0-665.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026
Jan 31 06:09:08 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 31 06:09:08 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 06:09:08 localhost kernel: BIOS-provided physical RAM map:
Jan 31 06:09:08 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 31 06:09:08 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 31 06:09:08 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 31 06:09:08 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 31 06:09:08 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 31 06:09:08 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 31 06:09:08 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 31 06:09:08 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 31 06:09:08 localhost kernel: NX (Execute Disable) protection: active
Jan 31 06:09:08 localhost kernel: APIC: Static calls initialized
Jan 31 06:09:08 localhost kernel: SMBIOS 2.8 present.
Jan 31 06:09:08 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 31 06:09:08 localhost kernel: Hypervisor detected: KVM
Jan 31 06:09:08 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 31 06:09:08 localhost kernel: kvm-clock: using sched offset of 9129412439 cycles
Jan 31 06:09:08 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 31 06:09:08 localhost kernel: tsc: Detected 2799.998 MHz processor
Jan 31 06:09:08 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 31 06:09:08 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 31 06:09:08 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 31 06:09:08 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 31 06:09:08 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 31 06:09:08 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 31 06:09:08 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 31 06:09:08 localhost kernel: Using GB pages for direct mapping
Jan 31 06:09:08 localhost kernel: RAMDISK: [mem 0x2d410000-0x329fffff]
Jan 31 06:09:08 localhost kernel: ACPI: Early table checksum verification disabled
Jan 31 06:09:08 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 31 06:09:08 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 06:09:08 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 06:09:08 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 06:09:08 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 31 06:09:08 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 06:09:08 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 06:09:08 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 31 06:09:08 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 31 06:09:08 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 31 06:09:08 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 31 06:09:08 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 31 06:09:08 localhost kernel: No NUMA configuration found
Jan 31 06:09:08 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 31 06:09:08 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Jan 31 06:09:08 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 31 06:09:08 localhost kernel: Zone ranges:
Jan 31 06:09:08 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 31 06:09:08 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 31 06:09:08 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 31 06:09:08 localhost kernel:   Device   empty
Jan 31 06:09:08 localhost kernel: Movable zone start for each node
Jan 31 06:09:08 localhost kernel: Early memory node ranges
Jan 31 06:09:08 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 31 06:09:08 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 31 06:09:08 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 31 06:09:08 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 31 06:09:08 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 31 06:09:08 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 31 06:09:08 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 31 06:09:08 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 31 06:09:08 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 31 06:09:08 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 31 06:09:08 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 31 06:09:08 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 31 06:09:08 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 31 06:09:08 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 31 06:09:08 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 31 06:09:08 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 31 06:09:08 localhost kernel: TSC deadline timer available
Jan 31 06:09:08 localhost kernel: CPU topo: Max. logical packages:   8
Jan 31 06:09:08 localhost kernel: CPU topo: Max. logical dies:       8
Jan 31 06:09:08 localhost kernel: CPU topo: Max. dies per package:   1
Jan 31 06:09:08 localhost kernel: CPU topo: Max. threads per core:   1
Jan 31 06:09:08 localhost kernel: CPU topo: Num. cores per package:     1
Jan 31 06:09:08 localhost kernel: CPU topo: Num. threads per package:   1
Jan 31 06:09:08 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 31 06:09:08 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 31 06:09:08 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 31 06:09:08 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 31 06:09:08 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 31 06:09:08 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 31 06:09:08 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 31 06:09:08 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 31 06:09:08 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 31 06:09:08 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 31 06:09:08 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 31 06:09:08 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 31 06:09:08 localhost kernel: Booting paravirtualized kernel on KVM
Jan 31 06:09:08 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 31 06:09:08 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 31 06:09:08 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 31 06:09:08 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 31 06:09:08 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 31 06:09:08 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 31 06:09:08 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 06:09:08 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64", will be passed to user space.
Jan 31 06:09:08 localhost kernel: random: crng init done
Jan 31 06:09:08 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 31 06:09:08 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 31 06:09:08 localhost kernel: Fallback order for Node 0: 0 
Jan 31 06:09:08 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 31 06:09:08 localhost kernel: Policy zone: Normal
Jan 31 06:09:08 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 31 06:09:08 localhost kernel: software IO TLB: area num 8.
Jan 31 06:09:08 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 31 06:09:08 localhost kernel: ftrace: allocating 49438 entries in 194 pages
Jan 31 06:09:08 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 31 06:09:08 localhost kernel: Dynamic Preempt: voluntary
Jan 31 06:09:08 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 31 06:09:08 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 31 06:09:08 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 31 06:09:08 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 31 06:09:08 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 31 06:09:08 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 31 06:09:08 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 31 06:09:08 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 31 06:09:08 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 06:09:08 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 06:09:08 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 06:09:08 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 31 06:09:08 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 31 06:09:08 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 31 06:09:08 localhost kernel: Console: colour VGA+ 80x25
Jan 31 06:09:08 localhost kernel: printk: console [ttyS0] enabled
Jan 31 06:09:08 localhost kernel: ACPI: Core revision 20230331
Jan 31 06:09:08 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 31 06:09:08 localhost kernel: x2apic enabled
Jan 31 06:09:08 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 31 06:09:08 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 31 06:09:08 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 31 06:09:08 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 31 06:09:08 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 31 06:09:08 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 31 06:09:08 localhost kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Jan 31 06:09:08 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 31 06:09:08 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 31 06:09:08 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 31 06:09:08 localhost kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Jan 31 06:09:08 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 31 06:09:08 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 31 06:09:08 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 31 06:09:08 localhost kernel: active return thunk: retbleed_return_thunk
Jan 31 06:09:08 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 31 06:09:08 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 31 06:09:08 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 31 06:09:08 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 31 06:09:08 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 31 06:09:08 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 31 06:09:08 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 31 06:09:08 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 31 06:09:08 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 31 06:09:08 localhost kernel: landlock: Up and running.
Jan 31 06:09:08 localhost kernel: Yama: becoming mindful.
Jan 31 06:09:08 localhost kernel: SELinux:  Initializing.
Jan 31 06:09:08 localhost kernel: LSM support for eBPF active
Jan 31 06:09:08 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 31 06:09:08 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 31 06:09:08 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 31 06:09:08 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 31 06:09:08 localhost kernel: ... version:                0
Jan 31 06:09:08 localhost kernel: ... bit width:              48
Jan 31 06:09:08 localhost kernel: ... generic registers:      6
Jan 31 06:09:08 localhost kernel: ... value mask:             0000ffffffffffff
Jan 31 06:09:08 localhost kernel: ... max period:             00007fffffffffff
Jan 31 06:09:08 localhost kernel: ... fixed-purpose events:   0
Jan 31 06:09:08 localhost kernel: ... event mask:             000000000000003f
Jan 31 06:09:08 localhost kernel: signal: max sigframe size: 1776
Jan 31 06:09:08 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 31 06:09:08 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 31 06:09:08 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 31 06:09:08 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 31 06:09:08 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 31 06:09:08 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 31 06:09:08 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Jan 31 06:09:08 localhost kernel: node 0 deferred pages initialised in 9ms
Jan 31 06:09:08 localhost kernel: Memory: 7763940K/8388068K available (16384K kernel code, 5801K rwdata, 13928K rodata, 4196K init, 7192K bss, 618400K reserved, 0K cma-reserved)
Jan 31 06:09:08 localhost kernel: devtmpfs: initialized
Jan 31 06:09:08 localhost kernel: x86/mm: Memory block size: 128MB
Jan 31 06:09:08 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 31 06:09:08 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 31 06:09:08 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 31 06:09:08 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 31 06:09:08 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 31 06:09:08 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 31 06:09:08 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 31 06:09:08 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 31 06:09:08 localhost kernel: audit: type=2000 audit(1769839747.540:1): state=initialized audit_enabled=0 res=1
Jan 31 06:09:08 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 31 06:09:08 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 31 06:09:08 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 31 06:09:08 localhost kernel: cpuidle: using governor menu
Jan 31 06:09:08 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 31 06:09:08 localhost kernel: PCI: Using configuration type 1 for base access
Jan 31 06:09:08 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 31 06:09:08 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 31 06:09:08 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 31 06:09:08 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 31 06:09:08 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 31 06:09:08 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 31 06:09:08 localhost kernel: Demotion targets for Node 0: null
Jan 31 06:09:08 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 31 06:09:08 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 31 06:09:08 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 31 06:09:08 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 31 06:09:08 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 31 06:09:08 localhost kernel: ACPI: Interpreter enabled
Jan 31 06:09:08 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 31 06:09:08 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 31 06:09:08 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 31 06:09:08 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 31 06:09:08 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 31 06:09:08 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 31 06:09:08 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [3] registered
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [4] registered
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [5] registered
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [6] registered
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [7] registered
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [8] registered
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [9] registered
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [10] registered
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [11] registered
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [12] registered
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [13] registered
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [14] registered
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [15] registered
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [16] registered
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [17] registered
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [18] registered
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [19] registered
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [20] registered
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [21] registered
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [22] registered
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [23] registered
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [24] registered
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [25] registered
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [26] registered
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [27] registered
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [28] registered
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [29] registered
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [30] registered
Jan 31 06:09:08 localhost kernel: acpiphp: Slot [31] registered
Jan 31 06:09:08 localhost kernel: PCI host bridge to bus 0000:00
Jan 31 06:09:08 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 31 06:09:08 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 31 06:09:08 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 31 06:09:08 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 31 06:09:08 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 31 06:09:08 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 31 06:09:08 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 31 06:09:08 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 31 06:09:08 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 31 06:09:08 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 31 06:09:08 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 31 06:09:08 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 31 06:09:08 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 31 06:09:08 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 31 06:09:08 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 31 06:09:08 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 31 06:09:08 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 31 06:09:08 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 31 06:09:08 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 31 06:09:08 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 31 06:09:08 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 31 06:09:08 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 31 06:09:08 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 31 06:09:08 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 31 06:09:08 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 31 06:09:08 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 31 06:09:08 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 31 06:09:08 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 31 06:09:08 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 31 06:09:08 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 31 06:09:08 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 31 06:09:08 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 31 06:09:08 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 31 06:09:08 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 31 06:09:08 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 31 06:09:08 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 31 06:09:08 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 31 06:09:08 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 31 06:09:08 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 31 06:09:08 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 31 06:09:08 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 31 06:09:08 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 31 06:09:08 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 31 06:09:08 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 31 06:09:08 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 31 06:09:08 localhost kernel: iommu: Default domain type: Translated
Jan 31 06:09:08 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 31 06:09:08 localhost kernel: SCSI subsystem initialized
Jan 31 06:09:08 localhost kernel: ACPI: bus type USB registered
Jan 31 06:09:08 localhost kernel: usbcore: registered new interface driver usbfs
Jan 31 06:09:08 localhost kernel: usbcore: registered new interface driver hub
Jan 31 06:09:08 localhost kernel: usbcore: registered new device driver usb
Jan 31 06:09:08 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 31 06:09:08 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 31 06:09:08 localhost kernel: PTP clock support registered
Jan 31 06:09:08 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 31 06:09:08 localhost kernel: NetLabel: Initializing
Jan 31 06:09:08 localhost kernel: NetLabel:  domain hash size = 128
Jan 31 06:09:08 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 31 06:09:08 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 31 06:09:08 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 31 06:09:08 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 31 06:09:08 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 31 06:09:08 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 31 06:09:08 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 31 06:09:08 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 31 06:09:08 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 31 06:09:08 localhost kernel: vgaarb: loaded
Jan 31 06:09:08 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 31 06:09:08 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 31 06:09:08 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 31 06:09:08 localhost kernel: pnp: PnP ACPI init
Jan 31 06:09:08 localhost kernel: pnp 00:03: [dma 2]
Jan 31 06:09:08 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 31 06:09:08 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 31 06:09:08 localhost kernel: NET: Registered PF_INET protocol family
Jan 31 06:09:08 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 31 06:09:08 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 31 06:09:08 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 31 06:09:08 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 31 06:09:08 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 31 06:09:08 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 31 06:09:08 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 31 06:09:08 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 31 06:09:08 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 31 06:09:08 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 31 06:09:08 localhost kernel: NET: Registered PF_XDP protocol family
Jan 31 06:09:08 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 31 06:09:08 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 31 06:09:08 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 31 06:09:08 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 31 06:09:08 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 31 06:09:08 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 31 06:09:08 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 31 06:09:08 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 31 06:09:08 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 35396 usecs
Jan 31 06:09:08 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 31 06:09:08 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 31 06:09:08 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 31 06:09:08 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 31 06:09:08 localhost kernel: ACPI: bus type thunderbolt registered
Jan 31 06:09:08 localhost kernel: Initialise system trusted keyrings
Jan 31 06:09:08 localhost kernel: Key type blacklist registered
Jan 31 06:09:08 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 31 06:09:08 localhost kernel: zbud: loaded
Jan 31 06:09:08 localhost kernel: integrity: Platform Keyring initialized
Jan 31 06:09:08 localhost kernel: integrity: Machine keyring initialized
Jan 31 06:09:08 localhost kernel: Freeing initrd memory: 88000K
Jan 31 06:09:08 localhost kernel: NET: Registered PF_ALG protocol family
Jan 31 06:09:08 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 31 06:09:08 localhost kernel: Key type asymmetric registered
Jan 31 06:09:08 localhost kernel: Asymmetric key parser 'x509' registered
Jan 31 06:09:08 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 31 06:09:08 localhost kernel: io scheduler mq-deadline registered
Jan 31 06:09:08 localhost kernel: io scheduler kyber registered
Jan 31 06:09:08 localhost kernel: io scheduler bfq registered
Jan 31 06:09:08 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 31 06:09:08 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 31 06:09:08 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 31 06:09:08 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 31 06:09:08 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 31 06:09:08 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 31 06:09:08 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 31 06:09:08 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 31 06:09:08 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 31 06:09:08 localhost kernel: Non-volatile memory driver v1.3
Jan 31 06:09:08 localhost kernel: rdac: device handler registered
Jan 31 06:09:08 localhost kernel: hp_sw: device handler registered
Jan 31 06:09:08 localhost kernel: emc: device handler registered
Jan 31 06:09:08 localhost kernel: alua: device handler registered
Jan 31 06:09:08 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 31 06:09:08 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 31 06:09:08 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 31 06:09:08 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 31 06:09:08 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 31 06:09:08 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 31 06:09:08 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 31 06:09:08 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-665.el9.x86_64 uhci_hcd
Jan 31 06:09:08 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 31 06:09:08 localhost kernel: hub 1-0:1.0: USB hub found
Jan 31 06:09:08 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 31 06:09:08 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 31 06:09:08 localhost kernel: usbserial: USB Serial support registered for generic
Jan 31 06:09:08 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 31 06:09:08 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 31 06:09:08 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 31 06:09:08 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 31 06:09:08 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 31 06:09:08 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 31 06:09:08 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-31T06:09:08 UTC (1769839748)
Jan 31 06:09:08 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 31 06:09:08 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 31 06:09:08 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 31 06:09:08 localhost kernel: usbcore: registered new interface driver usbhid
Jan 31 06:09:08 localhost kernel: usbhid: USB HID core driver
Jan 31 06:09:08 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 31 06:09:08 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 31 06:09:08 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 31 06:09:08 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 31 06:09:08 localhost kernel: Initializing XFRM netlink socket
Jan 31 06:09:08 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 31 06:09:08 localhost kernel: Segment Routing with IPv6
Jan 31 06:09:08 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 31 06:09:08 localhost kernel: mpls_gso: MPLS GSO support
Jan 31 06:09:08 localhost kernel: IPI shorthand broadcast: enabled
Jan 31 06:09:08 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 31 06:09:08 localhost kernel: AES CTR mode by8 optimization enabled
Jan 31 06:09:08 localhost kernel: sched_clock: Marking stable (1065002155, 152008594)->(1310852698, -93841949)
Jan 31 06:09:08 localhost kernel: registered taskstats version 1
Jan 31 06:09:08 localhost kernel: Loading compiled-in X.509 certificates
Jan 31 06:09:08 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 31 06:09:08 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 31 06:09:08 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 31 06:09:08 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 31 06:09:08 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 31 06:09:08 localhost kernel: Demotion targets for Node 0: null
Jan 31 06:09:08 localhost kernel: page_owner is disabled
Jan 31 06:09:08 localhost kernel: Key type .fscrypt registered
Jan 31 06:09:08 localhost kernel: Key type fscrypt-provisioning registered
Jan 31 06:09:08 localhost kernel: Key type big_key registered
Jan 31 06:09:08 localhost kernel: Key type encrypted registered
Jan 31 06:09:08 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 31 06:09:08 localhost kernel: Loading compiled-in module X.509 certificates
Jan 31 06:09:08 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 31 06:09:08 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 31 06:09:08 localhost kernel: ima: No architecture policies found
Jan 31 06:09:08 localhost kernel: evm: Initialising EVM extended attributes:
Jan 31 06:09:08 localhost kernel: evm: security.selinux
Jan 31 06:09:08 localhost kernel: evm: security.SMACK64 (disabled)
Jan 31 06:09:08 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 31 06:09:08 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 31 06:09:08 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 31 06:09:08 localhost kernel: evm: security.apparmor (disabled)
Jan 31 06:09:08 localhost kernel: evm: security.ima
Jan 31 06:09:08 localhost kernel: evm: security.capability
Jan 31 06:09:08 localhost kernel: evm: HMAC attrs: 0x1
Jan 31 06:09:08 localhost kernel: Running certificate verification RSA selftest
Jan 31 06:09:08 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 31 06:09:08 localhost kernel: Running certificate verification ECDSA selftest
Jan 31 06:09:08 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 31 06:09:08 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 31 06:09:08 localhost kernel: clk: Disabling unused clocks
Jan 31 06:09:08 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 31 06:09:08 localhost kernel: Freeing unused kernel image (initmem) memory: 4196K
Jan 31 06:09:08 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 31 06:09:08 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 408K
Jan 31 06:09:08 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 31 06:09:08 localhost kernel: Run /init as init process
Jan 31 06:09:08 localhost kernel:   with arguments:
Jan 31 06:09:08 localhost kernel:     /init
Jan 31 06:09:08 localhost kernel:   with environment:
Jan 31 06:09:08 localhost kernel:     HOME=/
Jan 31 06:09:08 localhost kernel:     TERM=linux
Jan 31 06:09:08 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64
Jan 31 06:09:08 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 31 06:09:08 localhost systemd[1]: Detected virtualization kvm.
Jan 31 06:09:08 localhost systemd[1]: Detected architecture x86-64.
Jan 31 06:09:08 localhost systemd[1]: Running in initrd.
Jan 31 06:09:08 localhost systemd[1]: No hostname configured, using default hostname.
Jan 31 06:09:08 localhost systemd[1]: Hostname set to <localhost>.
Jan 31 06:09:08 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 31 06:09:08 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 31 06:09:08 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 31 06:09:08 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 31 06:09:08 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 31 06:09:08 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 31 06:09:08 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 31 06:09:08 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 31 06:09:08 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 31 06:09:08 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 31 06:09:08 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 31 06:09:08 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 31 06:09:08 localhost systemd[1]: Reached target Local File Systems.
Jan 31 06:09:08 localhost systemd[1]: Reached target Path Units.
Jan 31 06:09:08 localhost systemd[1]: Reached target Slice Units.
Jan 31 06:09:08 localhost systemd[1]: Reached target Swaps.
Jan 31 06:09:08 localhost systemd[1]: Reached target Timer Units.
Jan 31 06:09:08 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 31 06:09:08 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 31 06:09:08 localhost systemd[1]: Listening on Journal Socket.
Jan 31 06:09:08 localhost systemd[1]: Listening on udev Control Socket.
Jan 31 06:09:08 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 31 06:09:08 localhost systemd[1]: Reached target Socket Units.
Jan 31 06:09:08 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 31 06:09:08 localhost systemd[1]: Starting Journal Service...
Jan 31 06:09:08 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 31 06:09:08 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 31 06:09:08 localhost systemd[1]: Starting Create System Users...
Jan 31 06:09:08 localhost systemd[1]: Starting Setup Virtual Console...
Jan 31 06:09:08 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 31 06:09:08 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 31 06:09:08 localhost systemd-journald[305]: Journal started
Jan 31 06:09:08 localhost systemd-journald[305]: Runtime Journal (/run/log/journal/d14f084bec774fba801f103494d34b3a) is 8.0M, max 153.6M, 145.6M free.
Jan 31 06:09:08 localhost systemd[1]: Started Journal Service.
Jan 31 06:09:08 localhost systemd-sysusers[309]: Creating group 'users' with GID 100.
Jan 31 06:09:08 localhost systemd-sysusers[309]: Creating group 'dbus' with GID 81.
Jan 31 06:09:08 localhost systemd-sysusers[309]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 31 06:09:08 localhost systemd[1]: Finished Create System Users.
Jan 31 06:09:09 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 31 06:09:09 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 31 06:09:09 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 31 06:09:09 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 31 06:09:09 localhost systemd[1]: Finished Setup Virtual Console.
Jan 31 06:09:09 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 31 06:09:09 localhost systemd[1]: Starting dracut cmdline hook...
Jan 31 06:09:09 localhost dracut-cmdline[326]: dracut-9 dracut-057-102.git20250818.el9
Jan 31 06:09:09 localhost dracut-cmdline[326]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 06:09:09 localhost systemd[1]: Finished dracut cmdline hook.
Jan 31 06:09:09 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 31 06:09:09 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 31 06:09:09 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 31 06:09:09 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 31 06:09:09 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 31 06:09:09 localhost kernel: RPC: Registered udp transport module.
Jan 31 06:09:09 localhost kernel: RPC: Registered tcp transport module.
Jan 31 06:09:09 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 31 06:09:09 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 31 06:09:09 localhost rpc.statd[444]: Version 2.5.4 starting
Jan 31 06:09:09 localhost rpc.statd[444]: Initializing NSM state
Jan 31 06:09:09 localhost rpc.idmapd[449]: Setting log level to 0
Jan 31 06:09:09 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 31 06:09:09 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 31 06:09:09 localhost systemd-udevd[462]: Using default interface naming scheme 'rhel-9.0'.
Jan 31 06:09:09 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 31 06:09:09 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 31 06:09:09 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 31 06:09:09 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 31 06:09:09 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 31 06:09:09 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 31 06:09:09 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 31 06:09:09 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 06:09:09 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 31 06:09:09 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 31 06:09:09 localhost systemd[1]: Reached target Network.
Jan 31 06:09:09 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 31 06:09:09 localhost systemd[1]: Starting dracut initqueue hook...
Jan 31 06:09:09 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 31 06:09:09 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 31 06:09:09 localhost kernel:  vda: vda1
Jan 31 06:09:09 localhost systemd-udevd[500]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 06:09:09 localhost kernel: libata version 3.00 loaded.
Jan 31 06:09:09 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 31 06:09:09 localhost kernel: scsi host0: ata_piix
Jan 31 06:09:09 localhost kernel: scsi host1: ata_piix
Jan 31 06:09:09 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 31 06:09:09 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 31 06:09:09 localhost systemd[1]: Found device /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 31 06:09:09 localhost systemd[1]: Reached target Initrd Root Device.
Jan 31 06:09:09 localhost kernel: ata1: found unknown device (class 0)
Jan 31 06:09:09 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 31 06:09:09 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 31 06:09:09 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 31 06:09:09 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 31 06:09:09 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 31 06:09:09 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 31 06:09:09 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 31 06:09:09 localhost systemd[1]: Reached target System Initialization.
Jan 31 06:09:09 localhost systemd[1]: Reached target Basic System.
Jan 31 06:09:10 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 31 06:09:10 localhost systemd[1]: Finished dracut initqueue hook.
Jan 31 06:09:10 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 31 06:09:10 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 31 06:09:10 localhost systemd[1]: Reached target Remote File Systems.
Jan 31 06:09:10 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 31 06:09:10 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 31 06:09:10 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8...
Jan 31 06:09:10 localhost systemd-fsck[555]: /usr/sbin/fsck.xfs: XFS file system.
Jan 31 06:09:10 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 31 06:09:10 localhost systemd[1]: Mounting /sysroot...
Jan 31 06:09:10 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 31 06:09:10 localhost kernel: XFS (vda1): Mounting V5 Filesystem 822f14ea-6e7e-41df-b0d8-fbe282d9ded8
Jan 31 06:09:10 localhost kernel: XFS (vda1): Ending clean mount
Jan 31 06:09:10 localhost systemd[1]: Mounted /sysroot.
Jan 31 06:09:10 localhost systemd[1]: Reached target Initrd Root File System.
Jan 31 06:09:10 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 31 06:09:10 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 31 06:09:10 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 31 06:09:10 localhost systemd[1]: Reached target Initrd File Systems.
Jan 31 06:09:10 localhost systemd[1]: Reached target Initrd Default Target.
Jan 31 06:09:10 localhost systemd[1]: Starting dracut mount hook...
Jan 31 06:09:10 localhost systemd[1]: Finished dracut mount hook.
Jan 31 06:09:10 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 31 06:09:11 localhost rpc.idmapd[449]: exiting on signal 15
Jan 31 06:09:11 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 31 06:09:11 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 31 06:09:11 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 31 06:09:11 localhost systemd[1]: Stopped target Network.
Jan 31 06:09:11 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 31 06:09:11 localhost systemd[1]: Stopped target Timer Units.
Jan 31 06:09:11 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 31 06:09:11 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 31 06:09:11 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 31 06:09:11 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 31 06:09:11 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 31 06:09:11 localhost systemd[1]: Stopped target Basic System.
Jan 31 06:09:11 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 31 06:09:11 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 31 06:09:11 localhost systemd[1]: Stopped target Path Units.
Jan 31 06:09:11 localhost systemd[1]: Stopped target Remote File Systems.
Jan 31 06:09:11 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 31 06:09:11 localhost systemd[1]: Stopped target Slice Units.
Jan 31 06:09:11 localhost systemd[1]: Stopped target Socket Units.
Jan 31 06:09:11 localhost systemd[1]: Stopped target System Initialization.
Jan 31 06:09:11 localhost systemd[1]: Stopped target Local File Systems.
Jan 31 06:09:11 localhost systemd[1]: Stopped target Swaps.
Jan 31 06:09:11 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 31 06:09:11 localhost systemd[1]: Stopped dracut mount hook.
Jan 31 06:09:11 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 31 06:09:11 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 31 06:09:11 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 31 06:09:11 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 31 06:09:11 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 31 06:09:11 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 31 06:09:11 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 31 06:09:11 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 31 06:09:11 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 31 06:09:11 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 31 06:09:11 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 31 06:09:11 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 31 06:09:11 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 31 06:09:11 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 31 06:09:11 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 31 06:09:11 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 31 06:09:11 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 31 06:09:11 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 31 06:09:11 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 31 06:09:11 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 31 06:09:11 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 31 06:09:11 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 31 06:09:11 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 31 06:09:11 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 31 06:09:11 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 31 06:09:11 localhost systemd[1]: Closed udev Control Socket.
Jan 31 06:09:11 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 31 06:09:11 localhost systemd[1]: Closed udev Kernel Socket.
Jan 31 06:09:11 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 31 06:09:11 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 31 06:09:11 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 31 06:09:11 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 31 06:09:11 localhost systemd[1]: Starting Cleanup udev Database...
Jan 31 06:09:11 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 31 06:09:11 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 31 06:09:11 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 31 06:09:11 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 31 06:09:11 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 31 06:09:11 localhost systemd[1]: Stopped Create System Users.
Jan 31 06:09:11 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 31 06:09:11 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 31 06:09:11 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 31 06:09:11 localhost systemd[1]: Finished Cleanup udev Database.
Jan 31 06:09:11 localhost systemd[1]: Reached target Switch Root.
Jan 31 06:09:11 localhost systemd[1]: Starting Switch Root...
Jan 31 06:09:11 localhost systemd[1]: Switching root.
Jan 31 06:09:11 localhost systemd-journald[305]: Journal stopped
Jan 31 06:09:12 localhost systemd-journald[305]: Received SIGTERM from PID 1 (systemd).
Jan 31 06:09:12 localhost kernel: audit: type=1404 audit(1769839751.651:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 31 06:09:12 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 06:09:12 localhost kernel: SELinux:  policy capability open_perms=1
Jan 31 06:09:12 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 06:09:12 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 31 06:09:12 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 06:09:12 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 06:09:12 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 06:09:12 localhost kernel: audit: type=1403 audit(1769839751.792:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 31 06:09:12 localhost systemd[1]: Successfully loaded SELinux policy in 153.354ms.
Jan 31 06:09:12 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 29.248ms.
Jan 31 06:09:12 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 31 06:09:12 localhost systemd[1]: Detected virtualization kvm.
Jan 31 06:09:12 localhost systemd[1]: Detected architecture x86-64.
Jan 31 06:09:12 localhost systemd-rc-local-generator[637]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:09:12 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 31 06:09:12 localhost systemd[1]: Stopped Switch Root.
Jan 31 06:09:12 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 31 06:09:12 localhost systemd[1]: Created slice Slice /system/getty.
Jan 31 06:09:12 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 31 06:09:12 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 31 06:09:12 localhost systemd[1]: Created slice User and Session Slice.
Jan 31 06:09:12 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 31 06:09:12 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 31 06:09:12 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 31 06:09:12 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 31 06:09:12 localhost systemd[1]: Stopped target Switch Root.
Jan 31 06:09:12 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 31 06:09:12 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 31 06:09:12 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 31 06:09:12 localhost systemd[1]: Reached target Path Units.
Jan 31 06:09:12 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 31 06:09:12 localhost systemd[1]: Reached target Slice Units.
Jan 31 06:09:12 localhost systemd[1]: Reached target Swaps.
Jan 31 06:09:12 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 31 06:09:12 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 31 06:09:12 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 31 06:09:12 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 31 06:09:12 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 31 06:09:12 localhost systemd[1]: Listening on udev Control Socket.
Jan 31 06:09:12 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 31 06:09:12 localhost systemd[1]: Mounting Huge Pages File System...
Jan 31 06:09:12 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 31 06:09:12 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 31 06:09:12 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 31 06:09:12 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 31 06:09:12 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 31 06:09:12 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 31 06:09:12 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 31 06:09:12 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 31 06:09:12 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 31 06:09:12 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 31 06:09:12 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 31 06:09:12 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 31 06:09:12 localhost systemd[1]: Stopped Journal Service.
Jan 31 06:09:12 localhost systemd[1]: Starting Journal Service...
Jan 31 06:09:12 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 31 06:09:12 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 31 06:09:12 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 06:09:12 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 31 06:09:12 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 31 06:09:12 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 31 06:09:12 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 31 06:09:12 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 31 06:09:12 localhost systemd[1]: Mounted Huge Pages File System.
Jan 31 06:09:12 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 31 06:09:12 localhost systemd-journald[678]: Journal started
Jan 31 06:09:12 localhost systemd-journald[678]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 31 06:09:12 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 31 06:09:12 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 31 06:09:12 localhost systemd[1]: Started Journal Service.
Jan 31 06:09:12 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 31 06:09:12 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 31 06:09:12 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 31 06:09:12 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 06:09:12 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 31 06:09:12 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 31 06:09:12 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 31 06:09:12 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 31 06:09:12 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 31 06:09:12 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 31 06:09:12 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 31 06:09:12 localhost kernel: ACPI: bus type drm_connector registered
Jan 31 06:09:12 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 31 06:09:12 localhost kernel: fuse: init (API version 7.37)
Jan 31 06:09:12 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 31 06:09:12 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 31 06:09:12 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 31 06:09:12 localhost systemd[1]: Starting Create System Users...
Jan 31 06:09:12 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 31 06:09:12 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 31 06:09:12 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 31 06:09:12 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 31 06:09:12 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 31 06:09:12 localhost systemd[1]: Mounting FUSE Control File System...
Jan 31 06:09:12 localhost systemd-journald[678]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 31 06:09:12 localhost systemd-journald[678]: Received client request to flush runtime journal.
Jan 31 06:09:12 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 31 06:09:12 localhost systemd[1]: Mounted FUSE Control File System.
Jan 31 06:09:12 localhost systemd[1]: Finished Create System Users.
Jan 31 06:09:12 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 31 06:09:12 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 31 06:09:12 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 31 06:09:12 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 31 06:09:13 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 31 06:09:13 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 31 06:09:13 localhost systemd[1]: Reached target Local File Systems.
Jan 31 06:09:13 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 31 06:09:13 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 31 06:09:13 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 31 06:09:13 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 31 06:09:13 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 31 06:09:13 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 31 06:09:13 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 31 06:09:13 localhost bootctl[696]: Couldn't find EFI system partition, skipping.
Jan 31 06:09:13 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 31 06:09:13 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 31 06:09:13 localhost systemd[1]: Starting Security Auditing Service...
Jan 31 06:09:13 localhost systemd[1]: Starting RPC Bind...
Jan 31 06:09:13 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 31 06:09:13 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 31 06:09:13 localhost auditd[702]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 31 06:09:13 localhost auditd[702]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 31 06:09:13 localhost systemd[1]: Started RPC Bind.
Jan 31 06:09:13 localhost augenrules[707]: /sbin/augenrules: No change
Jan 31 06:09:13 localhost augenrules[722]: No rules
Jan 31 06:09:13 localhost augenrules[722]: enabled 1
Jan 31 06:09:13 localhost augenrules[722]: failure 1
Jan 31 06:09:13 localhost augenrules[722]: pid 702
Jan 31 06:09:13 localhost augenrules[722]: rate_limit 0
Jan 31 06:09:13 localhost augenrules[722]: backlog_limit 8192
Jan 31 06:09:13 localhost augenrules[722]: lost 0
Jan 31 06:09:13 localhost augenrules[722]: backlog 3
Jan 31 06:09:13 localhost augenrules[722]: backlog_wait_time 60000
Jan 31 06:09:13 localhost augenrules[722]: backlog_wait_time_actual 0
Jan 31 06:09:13 localhost systemd[1]: Started Security Auditing Service.
Jan 31 06:09:13 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 31 06:09:13 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 31 06:09:14 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 31 06:09:14 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 31 06:09:14 localhost systemd-udevd[730]: Using default interface naming scheme 'rhel-9.0'.
Jan 31 06:09:14 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 31 06:09:14 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 31 06:09:14 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 06:09:14 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 31 06:09:14 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 31 06:09:14 localhost systemd-udevd[737]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 06:09:14 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 31 06:09:14 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 31 06:09:14 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 31 06:09:14 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 31 06:09:14 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 31 06:09:14 localhost systemd[1]: Starting Update is Completed...
Jan 31 06:09:14 localhost systemd[1]: Finished Update is Completed.
Jan 31 06:09:14 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 31 06:09:14 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 31 06:09:14 localhost kernel: Console: switching to colour dummy device 80x25
Jan 31 06:09:14 localhost systemd[1]: Reached target System Initialization.
Jan 31 06:09:14 localhost systemd[1]: Started dnf makecache --timer.
Jan 31 06:09:14 localhost systemd[1]: Started Daily rotation of log files.
Jan 31 06:09:14 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 31 06:09:14 localhost systemd[1]: Reached target Timer Units.
Jan 31 06:09:14 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 31 06:09:14 localhost kernel: [drm] features: -context_init
Jan 31 06:09:14 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 31 06:09:14 localhost kernel: [drm] number of scanouts: 1
Jan 31 06:09:14 localhost kernel: [drm] number of cap sets: 0
Jan 31 06:09:14 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 31 06:09:14 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 31 06:09:14 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 31 06:09:14 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 31 06:09:14 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 31 06:09:14 localhost systemd[1]: Reached target Socket Units.
Jan 31 06:09:14 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 31 06:09:14 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 06:09:14 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 31 06:09:14 localhost systemd[1]: Reached target Basic System.
Jan 31 06:09:14 localhost kernel: kvm_amd: TSC scaling supported
Jan 31 06:09:14 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 31 06:09:14 localhost kernel: kvm_amd: Nested Paging enabled
Jan 31 06:09:14 localhost kernel: kvm_amd: LBR virtualization supported
Jan 31 06:09:14 localhost dbus-broker-lau[787]: Ready
Jan 31 06:09:14 localhost systemd[1]: Starting NTP client/server...
Jan 31 06:09:14 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 31 06:09:14 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 31 06:09:14 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 31 06:09:14 localhost systemd[1]: Started irqbalance daemon.
Jan 31 06:09:14 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 31 06:09:14 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 06:09:14 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 06:09:14 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 06:09:14 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 31 06:09:14 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 31 06:09:14 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 31 06:09:14 localhost systemd[1]: Starting User Login Management...
Jan 31 06:09:14 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 31 06:09:14 localhost systemd-logind[801]: New seat seat0.
Jan 31 06:09:14 localhost systemd-logind[801]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 31 06:09:14 localhost systemd-logind[801]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 31 06:09:14 localhost systemd[1]: Started User Login Management.
Jan 31 06:09:14 localhost chronyd[829]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 31 06:09:14 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 31 06:09:14 localhost chronyd[829]: Loaded 0 symmetric keys
Jan 31 06:09:14 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 31 06:09:14 localhost chronyd[829]: Using right/UTC timezone to obtain leap second data
Jan 31 06:09:14 localhost chronyd[829]: Loaded seccomp filter (level 2)
Jan 31 06:09:14 localhost systemd[1]: Started NTP client/server.
Jan 31 06:09:15 localhost iptables.init[799]: iptables: Applying firewall rules: [  OK  ]
Jan 31 06:09:15 localhost systemd[1]: Finished IPv4 firewall with iptables.
Jan 31 06:09:15 localhost cloud-init[839]: Cloud-init v. 24.4-8.el9 running 'init-local' at Sat, 31 Jan 2026 06:09:15 +0000. Up 8.42 seconds.
Jan 31 06:09:16 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 31 06:09:16 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 31 06:09:16 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpenfvcnxp.mount: Deactivated successfully.
Jan 31 06:09:16 localhost systemd[1]: Starting Hostname Service...
Jan 31 06:09:16 localhost systemd[1]: Started Hostname Service.
Jan 31 06:09:16 np0005603610.novalocal systemd-hostnamed[853]: Hostname set to <np0005603610.novalocal> (static)
Jan 31 06:09:16 np0005603610.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 31 06:09:16 np0005603610.novalocal systemd[1]: Reached target Preparation for Network.
Jan 31 06:09:16 np0005603610.novalocal systemd[1]: Starting Network Manager...
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.8251] NetworkManager (version 1.54.3-2.el9) is starting... (boot:0cf93558-72ef-4562-a65b-1a5f29acc8ec)
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.8258] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.8622] manager[0x563143259000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.8677] hostname: hostname: using hostnamed
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.8678] hostname: static hostname changed from (none) to "np0005603610.novalocal"
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.8682] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.8805] manager[0x563143259000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.8806] manager[0x563143259000]: rfkill: WWAN hardware radio set enabled
Jan 31 06:09:16 np0005603610.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.8961] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.8962] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.8963] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.8963] manager: Networking is enabled by state file
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.8965] settings: Loaded settings plugin: keyfile (internal)
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9002] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9031] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9045] dhcp: init: Using DHCP client 'internal'
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9052] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9065] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9076] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 06:09:16 np0005603610.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9184] device (lo): Activation: starting connection 'lo' (300d976d-4218-4e05-b077-90422be37423)
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9193] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9195] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 06:09:16 np0005603610.novalocal systemd[1]: Started Network Manager.
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9228] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9233] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9236] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9238] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9241] device (eth0): carrier: link connected
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9246] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 06:09:16 np0005603610.novalocal systemd[1]: Reached target Network.
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9254] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9263] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9268] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9268] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9272] manager: NetworkManager state is now CONNECTING
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9274] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9282] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9285] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 06:09:16 np0005603610.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9317] dhcp4 (eth0): state changed new lease, address=38.129.56.169
Jan 31 06:09:16 np0005603610.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9325] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9348] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 06:09:16 np0005603610.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9587] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9590] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9591] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9596] device (lo): Activation: successful, device activated.
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9601] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9604] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9606] device (eth0): Activation: successful, device activated.
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9611] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 06:09:16 np0005603610.novalocal NetworkManager[857]: <info>  [1769839756.9613] manager: startup complete
Jan 31 06:09:16 np0005603610.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 31 06:09:16 np0005603610.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 31 06:09:16 np0005603610.novalocal systemd[1]: Reached target NFS client services.
Jan 31 06:09:16 np0005603610.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 31 06:09:16 np0005603610.novalocal systemd[1]: Reached target Remote File Systems.
Jan 31 06:09:16 np0005603610.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 06:09:16 np0005603610.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 31 06:09:16 np0005603610.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 31 06:09:17 np0005603610.novalocal cloud-init[921]: Cloud-init v. 24.4-8.el9 running 'init' at Sat, 31 Jan 2026 06:09:17 +0000. Up 9.80 seconds.
Jan 31 06:09:17 np0005603610.novalocal cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 31 06:09:17 np0005603610.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 06:09:17 np0005603610.novalocal cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 31 06:09:17 np0005603610.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 06:09:17 np0005603610.novalocal cloud-init[921]: ci-info: |  eth0  | True |        38.129.56.169         | 255.255.255.0 | global | fa:16:3e:ec:6f:42 |
Jan 31 06:09:17 np0005603610.novalocal cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:feec:6f42/64 |       .       |  link  | fa:16:3e:ec:6f:42 |
Jan 31 06:09:17 np0005603610.novalocal cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 31 06:09:17 np0005603610.novalocal cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 31 06:09:17 np0005603610.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 06:09:17 np0005603610.novalocal cloud-init[921]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++++
Jan 31 06:09:17 np0005603610.novalocal cloud-init[921]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 31 06:09:17 np0005603610.novalocal cloud-init[921]: ci-info: | Route |   Destination   |   Gateway   |     Genmask     | Interface | Flags |
Jan 31 06:09:17 np0005603610.novalocal cloud-init[921]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 31 06:09:17 np0005603610.novalocal cloud-init[921]: ci-info: |   0   |     0.0.0.0     | 38.129.56.1 |     0.0.0.0     |    eth0   |   UG  |
Jan 31 06:09:17 np0005603610.novalocal cloud-init[921]: ci-info: |   1   |   38.129.56.0   |   0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
Jan 31 06:09:17 np0005603610.novalocal cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.129.56.5 | 255.255.255.255 |    eth0   |  UGH  |
Jan 31 06:09:17 np0005603610.novalocal cloud-init[921]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 31 06:09:17 np0005603610.novalocal cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 31 06:09:17 np0005603610.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 06:09:17 np0005603610.novalocal cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 31 06:09:17 np0005603610.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 06:09:17 np0005603610.novalocal cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 31 06:09:17 np0005603610.novalocal cloud-init[921]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 31 06:09:17 np0005603610.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 06:09:21 np0005603610.novalocal chronyd[829]: Selected source 147.189.136.126 (2.centos.pool.ntp.org)
Jan 31 06:09:21 np0005603610.novalocal chronyd[829]: System clock TAI offset set to 37 seconds
Jan 31 06:09:23 np0005603610.novalocal chronyd[829]: Selected source 209.227.173.244 (2.centos.pool.ntp.org)
Jan 31 06:09:24 np0005603610.novalocal useradd[988]: new group: name=cloud-user, GID=1001
Jan 31 06:09:24 np0005603610.novalocal useradd[988]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 31 06:09:24 np0005603610.novalocal useradd[988]: add 'cloud-user' to group 'adm'
Jan 31 06:09:24 np0005603610.novalocal useradd[988]: add 'cloud-user' to group 'systemd-journal'
Jan 31 06:09:24 np0005603610.novalocal useradd[988]: add 'cloud-user' to shadow group 'adm'
Jan 31 06:09:24 np0005603610.novalocal useradd[988]: add 'cloud-user' to shadow group 'systemd-journal'
Jan 31 06:09:25 np0005603610.novalocal irqbalance[800]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 31 06:09:25 np0005603610.novalocal irqbalance[800]: IRQ 25 affinity is now unmanaged
Jan 31 06:09:25 np0005603610.novalocal irqbalance[800]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 31 06:09:25 np0005603610.novalocal irqbalance[800]: IRQ 31 affinity is now unmanaged
Jan 31 06:09:25 np0005603610.novalocal irqbalance[800]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 31 06:09:25 np0005603610.novalocal irqbalance[800]: IRQ 28 affinity is now unmanaged
Jan 31 06:09:25 np0005603610.novalocal irqbalance[800]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 31 06:09:25 np0005603610.novalocal irqbalance[800]: IRQ 32 affinity is now unmanaged
Jan 31 06:09:25 np0005603610.novalocal irqbalance[800]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 31 06:09:25 np0005603610.novalocal irqbalance[800]: IRQ 30 affinity is now unmanaged
Jan 31 06:09:25 np0005603610.novalocal irqbalance[800]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 31 06:09:25 np0005603610.novalocal irqbalance[800]: IRQ 29 affinity is now unmanaged
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: Generating public/private rsa key pair.
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: The key fingerprint is:
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: SHA256:G2qcFYQHbj3l9h9Azj+c7h1IRPxhVCCjmQvIDlVLKek root@np0005603610.novalocal
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: The key's randomart image is:
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: +---[RSA 3072]----+
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: |      o=+.. =.ooo|
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: |     ++=o+ B.+ o |
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: |    ..=o* * +.o .|
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: |     +E  = o.+ o |
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: |      . S . ..*  |
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: |     . + o  .o.o |
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: |      = .    .o. |
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: |     .       . ..|
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: |              . .|
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: +----[SHA256]-----+
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: Generating public/private ecdsa key pair.
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: The key fingerprint is:
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: SHA256:pcc0xaKOaxZ9a8b2l3dN/vgk1dRFni3nby535avOfxo root@np0005603610.novalocal
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: The key's randomart image is:
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: +---[ECDSA 256]---+
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: |           ..  .o|
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: |          ...  .=|
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: |         .+.  ..*|
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: |        .= .   =.|
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: |       +S o     +|
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: |      o o..    .+|
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: |       o o .  E=*|
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: |      +   * . +BX|
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: |     o   + .o==OX|
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: +----[SHA256]-----+
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: Generating public/private ed25519 key pair.
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: The key fingerprint is:
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: SHA256:lStPhI89hSJG4dq8ldFWGIhvPcDW7tJJrcJoNM25Jsg root@np0005603610.novalocal
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: The key's randomart image is:
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: +--[ED25519 256]--+
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: |      o+ o.o.    |
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: |     o. =oo+     |
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: |      +=+=B..    |
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: |     =o.*@=o.    |
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: |   ..oo=S=*+     |
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: |    E +o*+=.     |
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: |     ..o o.      |
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: |                 |
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: |                 |
Jan 31 06:09:26 np0005603610.novalocal cloud-init[921]: +----[SHA256]-----+
Jan 31 06:09:26 np0005603610.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 31 06:09:26 np0005603610.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 31 06:09:26 np0005603610.novalocal systemd[1]: Reached target Network is Online.
Jan 31 06:09:26 np0005603610.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 31 06:09:26 np0005603610.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 31 06:09:26 np0005603610.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 31 06:09:26 np0005603610.novalocal systemd[1]: Starting System Logging Service...
Jan 31 06:09:26 np0005603610.novalocal sm-notify[1004]: Version 2.5.4 starting
Jan 31 06:09:26 np0005603610.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 31 06:09:26 np0005603610.novalocal systemd[1]: Starting Permit User Sessions...
Jan 31 06:09:26 np0005603610.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 31 06:09:26 np0005603610.novalocal systemd[1]: Finished Permit User Sessions.
Jan 31 06:09:26 np0005603610.novalocal sshd[1006]: Server listening on 0.0.0.0 port 22.
Jan 31 06:09:26 np0005603610.novalocal sshd[1006]: Server listening on :: port 22.
Jan 31 06:09:26 np0005603610.novalocal systemd[1]: Started Command Scheduler.
Jan 31 06:09:26 np0005603610.novalocal systemd[1]: Started Getty on tty1.
Jan 31 06:09:26 np0005603610.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 31 06:09:26 np0005603610.novalocal systemd[1]: Reached target Login Prompts.
Jan 31 06:09:26 np0005603610.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 31 06:09:26 np0005603610.novalocal crond[1010]: (CRON) STARTUP (1.5.7)
Jan 31 06:09:26 np0005603610.novalocal crond[1010]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 31 06:09:26 np0005603610.novalocal crond[1010]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 38% if used.)
Jan 31 06:09:26 np0005603610.novalocal crond[1010]: (CRON) INFO (running with inotify support)
Jan 31 06:09:26 np0005603610.novalocal rsyslogd[1005]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1005" x-info="https://www.rsyslog.com"] start
Jan 31 06:09:26 np0005603610.novalocal rsyslogd[1005]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 31 06:09:26 np0005603610.novalocal systemd[1]: Started System Logging Service.
Jan 31 06:09:26 np0005603610.novalocal systemd[1]: Reached target Multi-User System.
Jan 31 06:09:26 np0005603610.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 31 06:09:26 np0005603610.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 31 06:09:26 np0005603610.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 31 06:09:26 np0005603610.novalocal sshd-session[1027]: Unable to negotiate with 38.102.83.114 port 56684: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 31 06:09:26 np0005603610.novalocal sshd-session[1052]: Unable to negotiate with 38.102.83.114 port 56698: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 31 06:09:26 np0005603610.novalocal rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 06:09:26 np0005603610.novalocal sshd-session[1063]: Unable to negotiate with 38.102.83.114 port 56700: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 31 06:09:26 np0005603610.novalocal sshd-session[1011]: Connection closed by 38.102.83.114 port 56676 [preauth]
Jan 31 06:09:26 np0005603610.novalocal sshd-session[1038]: Connection closed by 38.102.83.114 port 56686 [preauth]
Jan 31 06:09:26 np0005603610.novalocal kdumpctl[1017]: kdump: No kdump initial ramdisk found.
Jan 31 06:09:26 np0005603610.novalocal kdumpctl[1017]: kdump: Rebuilding /boot/initramfs-5.14.0-665.el9.x86_64kdump.img
Jan 31 06:09:26 np0005603610.novalocal sshd-session[1086]: Unable to negotiate with 38.102.83.114 port 56710: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 31 06:09:26 np0005603610.novalocal cloud-init[1089]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Sat, 31 Jan 2026 06:09:26 +0000. Up 19.42 seconds.
Jan 31 06:09:26 np0005603610.novalocal sshd-session[1094]: Unable to negotiate with 38.102.83.114 port 56718: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Jan 31 06:09:26 np0005603610.novalocal sshd-session[1073]: Connection closed by 38.102.83.114 port 56702 [preauth]
Jan 31 06:09:27 np0005603610.novalocal sshd-session[1075]: Connection closed by 38.102.83.114 port 56708 [preauth]
Jan 31 06:09:27 np0005603610.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 31 06:09:27 np0005603610.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 31 06:09:27 np0005603610.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 06:09:27 np0005603610.novalocal cloud-init[1246]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Sat, 31 Jan 2026 06:09:27 +0000. Up 19.81 seconds.
Jan 31 06:09:27 np0005603610.novalocal cloud-init[1275]: #############################################################
Jan 31 06:09:27 np0005603610.novalocal cloud-init[1278]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 31 06:09:27 np0005603610.novalocal cloud-init[1282]: 256 SHA256:pcc0xaKOaxZ9a8b2l3dN/vgk1dRFni3nby535avOfxo root@np0005603610.novalocal (ECDSA)
Jan 31 06:09:27 np0005603610.novalocal cloud-init[1284]: 256 SHA256:lStPhI89hSJG4dq8ldFWGIhvPcDW7tJJrcJoNM25Jsg root@np0005603610.novalocal (ED25519)
Jan 31 06:09:27 np0005603610.novalocal cloud-init[1289]: 3072 SHA256:G2qcFYQHbj3l9h9Azj+c7h1IRPxhVCCjmQvIDlVLKek root@np0005603610.novalocal (RSA)
Jan 31 06:09:27 np0005603610.novalocal cloud-init[1290]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 31 06:09:27 np0005603610.novalocal cloud-init[1292]: #############################################################
Jan 31 06:09:27 np0005603610.novalocal cloud-init[1246]: Cloud-init v. 24.4-8.el9 finished at Sat, 31 Jan 2026 06:09:27 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 19.97 seconds
Jan 31 06:09:27 np0005603610.novalocal dracut[1301]: dracut-057-102.git20250818.el9
Jan 31 06:09:27 np0005603610.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 31 06:09:27 np0005603610.novalocal systemd[1]: Reached target Cloud-init target.
Jan 31 06:09:27 np0005603610.novalocal dracut[1303]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-665.el9.x86_64kdump.img 5.14.0-665.el9.x86_64
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 31 06:09:28 np0005603610.novalocal dracut[1303]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: memstrack is not available
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: memstrack is not available
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 31 06:09:29 np0005603610.novalocal dracut[1303]: *** Including module: systemd ***
Jan 31 06:09:30 np0005603610.novalocal dracut[1303]: *** Including module: fips ***
Jan 31 06:09:30 np0005603610.novalocal dracut[1303]: *** Including module: systemd-initrd ***
Jan 31 06:09:30 np0005603610.novalocal dracut[1303]: *** Including module: i18n ***
Jan 31 06:09:30 np0005603610.novalocal dracut[1303]: *** Including module: drm ***
Jan 31 06:09:30 np0005603610.novalocal dracut[1303]: *** Including module: prefixdevname ***
Jan 31 06:09:30 np0005603610.novalocal dracut[1303]: *** Including module: kernel-modules ***
Jan 31 06:09:30 np0005603610.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 31 06:09:31 np0005603610.novalocal dracut[1303]: *** Including module: kernel-modules-extra ***
Jan 31 06:09:31 np0005603610.novalocal dracut[1303]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 31 06:09:31 np0005603610.novalocal dracut[1303]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 31 06:09:31 np0005603610.novalocal dracut[1303]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 31 06:09:31 np0005603610.novalocal dracut[1303]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jan 31 06:09:31 np0005603610.novalocal dracut[1303]: *** Including module: qemu ***
Jan 31 06:09:31 np0005603610.novalocal dracut[1303]: *** Including module: fstab-sys ***
Jan 31 06:09:31 np0005603610.novalocal dracut[1303]: *** Including module: rootfs-block ***
Jan 31 06:09:31 np0005603610.novalocal dracut[1303]: *** Including module: terminfo ***
Jan 31 06:09:31 np0005603610.novalocal dracut[1303]: *** Including module: udev-rules ***
Jan 31 06:09:32 np0005603610.novalocal dracut[1303]: Skipping udev rule: 91-permissions.rules
Jan 31 06:09:32 np0005603610.novalocal dracut[1303]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 31 06:09:32 np0005603610.novalocal dracut[1303]: *** Including module: virtiofs ***
Jan 31 06:09:32 np0005603610.novalocal dracut[1303]: *** Including module: dracut-systemd ***
Jan 31 06:09:32 np0005603610.novalocal dracut[1303]: *** Including module: usrmount ***
Jan 31 06:09:32 np0005603610.novalocal dracut[1303]: *** Including module: base ***
Jan 31 06:09:32 np0005603610.novalocal dracut[1303]: *** Including module: fs-lib ***
Jan 31 06:09:32 np0005603610.novalocal dracut[1303]: *** Including module: kdumpbase ***
Jan 31 06:09:32 np0005603610.novalocal dracut[1303]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 31 06:09:32 np0005603610.novalocal dracut[1303]:   microcode_ctl module: mangling fw_dir
Jan 31 06:09:32 np0005603610.novalocal dracut[1303]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 31 06:09:32 np0005603610.novalocal dracut[1303]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 31 06:09:32 np0005603610.novalocal dracut[1303]:     microcode_ctl: configuration "intel" is ignored
Jan 31 06:09:32 np0005603610.novalocal dracut[1303]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 31 06:09:32 np0005603610.novalocal dracut[1303]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 31 06:09:32 np0005603610.novalocal dracut[1303]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 31 06:09:32 np0005603610.novalocal dracut[1303]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 31 06:09:32 np0005603610.novalocal dracut[1303]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 31 06:09:32 np0005603610.novalocal dracut[1303]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 31 06:09:32 np0005603610.novalocal dracut[1303]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 31 06:09:33 np0005603610.novalocal dracut[1303]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 31 06:09:33 np0005603610.novalocal dracut[1303]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 31 06:09:33 np0005603610.novalocal dracut[1303]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 31 06:09:33 np0005603610.novalocal dracut[1303]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 31 06:09:33 np0005603610.novalocal dracut[1303]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 31 06:09:33 np0005603610.novalocal dracut[1303]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 31 06:09:33 np0005603610.novalocal dracut[1303]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 31 06:09:33 np0005603610.novalocal dracut[1303]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 31 06:09:33 np0005603610.novalocal dracut[1303]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 31 06:09:33 np0005603610.novalocal dracut[1303]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 31 06:09:33 np0005603610.novalocal dracut[1303]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 31 06:09:33 np0005603610.novalocal dracut[1303]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 31 06:09:33 np0005603610.novalocal dracut[1303]: *** Including module: openssl ***
Jan 31 06:09:33 np0005603610.novalocal dracut[1303]: *** Including module: shutdown ***
Jan 31 06:09:33 np0005603610.novalocal dracut[1303]: *** Including module: squash ***
Jan 31 06:09:33 np0005603610.novalocal dracut[1303]: *** Including modules done ***
Jan 31 06:09:33 np0005603610.novalocal dracut[1303]: *** Installing kernel module dependencies ***
Jan 31 06:09:34 np0005603610.novalocal dracut[1303]: *** Installing kernel module dependencies done ***
Jan 31 06:09:34 np0005603610.novalocal dracut[1303]: *** Resolving executable dependencies ***
Jan 31 06:09:35 np0005603610.novalocal irqbalance[800]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 31 06:09:35 np0005603610.novalocal irqbalance[800]: IRQ 27 affinity is now unmanaged
Jan 31 06:09:36 np0005603610.novalocal dracut[1303]: *** Resolving executable dependencies done ***
Jan 31 06:09:36 np0005603610.novalocal dracut[1303]: *** Generating early-microcode cpio image ***
Jan 31 06:09:36 np0005603610.novalocal dracut[1303]: *** Store current command line parameters ***
Jan 31 06:09:36 np0005603610.novalocal dracut[1303]: Stored kernel commandline:
Jan 31 06:09:36 np0005603610.novalocal dracut[1303]: No dracut internal kernel commandline stored in the initramfs
Jan 31 06:09:36 np0005603610.novalocal dracut[1303]: *** Install squash loader ***
Jan 31 06:09:37 np0005603610.novalocal dracut[1303]: *** Squashing the files inside the initramfs ***
Jan 31 06:09:38 np0005603610.novalocal dracut[1303]: *** Squashing the files inside the initramfs done ***
Jan 31 06:09:38 np0005603610.novalocal dracut[1303]: *** Creating image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' ***
Jan 31 06:09:38 np0005603610.novalocal dracut[1303]: *** Hardlinking files ***
Jan 31 06:09:38 np0005603610.novalocal dracut[1303]: Mode:           real
Jan 31 06:09:38 np0005603610.novalocal dracut[1303]: Files:          50
Jan 31 06:09:38 np0005603610.novalocal dracut[1303]: Linked:         0 files
Jan 31 06:09:38 np0005603610.novalocal dracut[1303]: Compared:       0 xattrs
Jan 31 06:09:38 np0005603610.novalocal dracut[1303]: Compared:       0 files
Jan 31 06:09:38 np0005603610.novalocal dracut[1303]: Saved:          0 B
Jan 31 06:09:38 np0005603610.novalocal dracut[1303]: Duration:       0.000311 seconds
Jan 31 06:09:38 np0005603610.novalocal dracut[1303]: *** Hardlinking files done ***
Jan 31 06:09:39 np0005603610.novalocal dracut[1303]: *** Creating initramfs image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' done ***
Jan 31 06:09:41 np0005603610.novalocal kdumpctl[1017]: kdump: kexec: loaded kdump kernel
Jan 31 06:09:41 np0005603610.novalocal kdumpctl[1017]: kdump: Starting kdump: [OK]
Jan 31 06:09:41 np0005603610.novalocal systemd[1]: Finished Crash recovery kernel arming.
Jan 31 06:09:41 np0005603610.novalocal systemd[1]: Startup finished in 1.304s (kernel) + 2.855s (initrd) + 29.392s (userspace) = 33.552s.
Jan 31 06:09:46 np0005603610.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 06:09:53 np0005603610.novalocal sshd-session[4305]: Accepted publickey for zuul from 38.102.83.114 port 52988 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 31 06:09:53 np0005603610.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 31 06:09:53 np0005603610.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 31 06:09:53 np0005603610.novalocal systemd-logind[801]: New session 1 of user zuul.
Jan 31 06:09:53 np0005603610.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 31 06:09:53 np0005603610.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 31 06:09:53 np0005603610.novalocal systemd[4309]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:09:53 np0005603610.novalocal systemd[4309]: Queued start job for default target Main User Target.
Jan 31 06:09:53 np0005603610.novalocal systemd[4309]: Created slice User Application Slice.
Jan 31 06:09:53 np0005603610.novalocal systemd[4309]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 06:09:53 np0005603610.novalocal systemd[4309]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 06:09:53 np0005603610.novalocal systemd[4309]: Reached target Paths.
Jan 31 06:09:53 np0005603610.novalocal systemd[4309]: Reached target Timers.
Jan 31 06:09:53 np0005603610.novalocal systemd[4309]: Starting D-Bus User Message Bus Socket...
Jan 31 06:09:53 np0005603610.novalocal systemd[4309]: Starting Create User's Volatile Files and Directories...
Jan 31 06:09:53 np0005603610.novalocal systemd[4309]: Finished Create User's Volatile Files and Directories.
Jan 31 06:09:53 np0005603610.novalocal systemd[4309]: Listening on D-Bus User Message Bus Socket.
Jan 31 06:09:53 np0005603610.novalocal systemd[4309]: Reached target Sockets.
Jan 31 06:09:53 np0005603610.novalocal systemd[4309]: Reached target Basic System.
Jan 31 06:09:53 np0005603610.novalocal systemd[4309]: Reached target Main User Target.
Jan 31 06:09:53 np0005603610.novalocal systemd[4309]: Startup finished in 212ms.
Jan 31 06:09:53 np0005603610.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 31 06:09:53 np0005603610.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 31 06:09:53 np0005603610.novalocal sshd-session[4305]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:09:54 np0005603610.novalocal python3[4391]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:10:01 np0005603610.novalocal python3[4419]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:10:15 np0005603610.novalocal python3[4477]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:10:16 np0005603610.novalocal python3[4517]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 31 06:10:18 np0005603610.novalocal python3[4543]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDchShM99lH6I0ER4M5bdTKJqBTEwI+oB9SwUKCFnfSFe+YXdwGln/ZQz1oTQoc7uHsosGAjxkBLnzurBq9QuoyCLJfHlIRMt33udq87cbS+4TPUzX86YzbvCdjL2JcQ7HQdT/t4eiTsq/T6rUG6NN8sZSab/kVk1sT3I1DEnUGPGWr5xAUZ/TMosNE9wHhXQsHXN13G6YeYDfG/h+84mm6kTISBC+8M8Ne+jGn4udnhGcj24MjbKqS4l405WKsvB7IHwjnkEFFSQ0MXxcPMC+W1PqE0JQeoE6StfGL1kcrrAyVCz+t2vX8dRWY/nDCcOyEiXPb/tEW8ddykqk/ZgDlBYlNimaPvgLPoGTr6XZHfSGRjrYhiPwQI9xa+AHOZOXJsMdEBoZk1VMty2FQTwJgfV/t7gi5q5lagFfQ44wTy8HBcOiaQ08p2URDYYWoqWLaBV2TDvVmjvuCHKZJiWKb9vRE0G1BBNIVjIvkTPeu+7RayIoYTevDiRJsPJ0pJi0= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:19 np0005603610.novalocal python3[4567]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:19 np0005603610.novalocal python3[4666]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:10:19 np0005603610.novalocal python3[4737]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769839819.239079-254-238635289424989/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=ef956508a7f94c3dbffb4bb08b3ee84f_id_rsa follow=False checksum=38004d5aef71e7771fda9a333b3cee8e58c249ca backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:20 np0005603610.novalocal python3[4860]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:10:20 np0005603610.novalocal python3[4931]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769839820.240448-309-3144472475993/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=ef956508a7f94c3dbffb4bb08b3ee84f_id_rsa.pub follow=False checksum=aabe17515195de3c7c885977f74a37a021ba1e01 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:22 np0005603610.novalocal python3[4979]: ansible-ping Invoked with data=pong
Jan 31 06:10:23 np0005603610.novalocal python3[5003]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:10:25 np0005603610.novalocal python3[5061]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 31 06:10:26 np0005603610.novalocal python3[5093]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:26 np0005603610.novalocal python3[5117]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:27 np0005603610.novalocal python3[5141]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:27 np0005603610.novalocal python3[5165]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:28 np0005603610.novalocal python3[5189]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:28 np0005603610.novalocal python3[5213]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:29 np0005603610.novalocal sudo[5237]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scmqegsoktopfebblfliuolpmensjixk ; /usr/bin/python3'
Jan 31 06:10:29 np0005603610.novalocal sudo[5237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:30 np0005603610.novalocal python3[5239]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:30 np0005603610.novalocal sudo[5237]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:30 np0005603610.novalocal sudo[5315]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktrzntocjftgyphdlcqypubhbzivykhw ; /usr/bin/python3'
Jan 31 06:10:30 np0005603610.novalocal sudo[5315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:30 np0005603610.novalocal python3[5317]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:10:30 np0005603610.novalocal sudo[5315]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:30 np0005603610.novalocal sudo[5388]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpudqcmettfkegqkedmmxzvcarmmhzls ; /usr/bin/python3'
Jan 31 06:10:30 np0005603610.novalocal sudo[5388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:31 np0005603610.novalocal python3[5390]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839830.2772565-34-191021491956603/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:31 np0005603610.novalocal sudo[5388]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:31 np0005603610.novalocal python3[5438]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:32 np0005603610.novalocal python3[5462]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:32 np0005603610.novalocal python3[5486]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:32 np0005603610.novalocal python3[5510]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:32 np0005603610.novalocal python3[5534]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:33 np0005603610.novalocal python3[5558]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:33 np0005603610.novalocal python3[5582]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:34 np0005603610.novalocal python3[5606]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:34 np0005603610.novalocal python3[5630]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:34 np0005603610.novalocal python3[5654]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:34 np0005603610.novalocal python3[5678]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:35 np0005603610.novalocal python3[5702]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:35 np0005603610.novalocal python3[5726]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:35 np0005603610.novalocal python3[5750]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:36 np0005603610.novalocal python3[5774]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:36 np0005603610.novalocal python3[5798]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:36 np0005603610.novalocal python3[5822]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:37 np0005603610.novalocal python3[5846]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:37 np0005603610.novalocal python3[5870]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:37 np0005603610.novalocal python3[5894]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:38 np0005603610.novalocal python3[5918]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:38 np0005603610.novalocal python3[5942]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:38 np0005603610.novalocal python3[5966]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:38 np0005603610.novalocal python3[5990]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:39 np0005603610.novalocal python3[6014]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:39 np0005603610.novalocal python3[6038]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:41 np0005603610.novalocal sudo[6062]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijsjtcjhkajzdwyhtvxnfuavckgstcfm ; /usr/bin/python3'
Jan 31 06:10:41 np0005603610.novalocal sudo[6062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:41 np0005603610.novalocal python3[6064]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 06:10:41 np0005603610.novalocal systemd[1]: Starting Time & Date Service...
Jan 31 06:10:41 np0005603610.novalocal systemd[1]: Started Time & Date Service.
Jan 31 06:10:41 np0005603610.novalocal systemd-timedated[6066]: Changed time zone to 'UTC' (UTC).
Jan 31 06:10:41 np0005603610.novalocal sudo[6062]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:41 np0005603610.novalocal sudo[6093]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygwpoeguvlibmjnmppbmowynhbxkuxsu ; /usr/bin/python3'
Jan 31 06:10:41 np0005603610.novalocal sudo[6093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:41 np0005603610.novalocal python3[6095]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:41 np0005603610.novalocal sudo[6093]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:42 np0005603610.novalocal python3[6171]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:10:42 np0005603610.novalocal python3[6242]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769839842.213702-254-63956311222346/source _original_basename=tmpvfhfzr3p follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:43 np0005603610.novalocal python3[6342]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:10:44 np0005603610.novalocal python3[6413]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769839843.1153605-304-36312529487685/source _original_basename=tmpm8w2yd3j follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:45 np0005603610.novalocal sudo[6513]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cidxcxfosusfxlvsfkkzpegtarhchwxw ; /usr/bin/python3'
Jan 31 06:10:45 np0005603610.novalocal sudo[6513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:45 np0005603610.novalocal python3[6515]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:10:45 np0005603610.novalocal sudo[6513]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:45 np0005603610.novalocal sudo[6586]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hytsqcrcxzefurrrdqmvzfsgvqbofndo ; /usr/bin/python3'
Jan 31 06:10:45 np0005603610.novalocal sudo[6586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:45 np0005603610.novalocal python3[6588]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769839845.218647-384-174996880704845/source _original_basename=tmpqybs7xdp follow=False checksum=001853d31034d3172874f9f777332fa0768b617c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:45 np0005603610.novalocal sudo[6586]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:46 np0005603610.novalocal python3[6636]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:10:46 np0005603610.novalocal python3[6662]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:10:47 np0005603610.novalocal sudo[6740]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfajiqemfhljqkhiujldvmjvlcavosls ; /usr/bin/python3'
Jan 31 06:10:47 np0005603610.novalocal sudo[6740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:47 np0005603610.novalocal python3[6742]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:10:47 np0005603610.novalocal sudo[6740]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:47 np0005603610.novalocal sudo[6813]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvqmbxahzgsschmimmkojholmxnbznfd ; /usr/bin/python3'
Jan 31 06:10:47 np0005603610.novalocal sudo[6813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:48 np0005603610.novalocal python3[6815]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839847.283184-454-239166268545793/source _original_basename=tmpvcx98unn follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:48 np0005603610.novalocal sudo[6813]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:48 np0005603610.novalocal sudo[6864]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svzxtznxkkpwxyqwnoiocjoqtzvttacp ; /usr/bin/python3'
Jan 31 06:10:48 np0005603610.novalocal sudo[6864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:48 np0005603610.novalocal python3[6866]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-a598-46a5-00000000001f-1-compute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:10:48 np0005603610.novalocal sudo[6864]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:49 np0005603610.novalocal python3[6894]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-a598-46a5-000000000020-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 31 06:10:50 np0005603610.novalocal python3[6923]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:11:11 np0005603610.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 06:11:12 np0005603610.novalocal sudo[6949]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fugglnwqfjpsqfeouavwttgixzlhsgsu ; /usr/bin/python3'
Jan 31 06:11:12 np0005603610.novalocal sudo[6949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:13 np0005603610.novalocal python3[6951]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:11:13 np0005603610.novalocal sudo[6949]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:11 np0005603610.novalocal systemd[4309]: Starting Mark boot as successful...
Jan 31 06:12:11 np0005603610.novalocal systemd[4309]: Finished Mark boot as successful.
Jan 31 06:12:13 np0005603610.novalocal sshd-session[4318]: Received disconnect from 38.102.83.114 port 52988:11: disconnected by user
Jan 31 06:12:13 np0005603610.novalocal sshd-session[4318]: Disconnected from user zuul 38.102.83.114 port 52988
Jan 31 06:12:13 np0005603610.novalocal sshd-session[4305]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:12:13 np0005603610.novalocal systemd-logind[801]: Session 1 logged out. Waiting for processes to exit.
Jan 31 06:12:38 np0005603610.novalocal chronyd[829]: Selected source 147.189.136.126 (2.centos.pool.ntp.org)
Jan 31 06:12:45 np0005603610.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 31 06:12:45 np0005603610.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 31 06:12:45 np0005603610.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 31 06:12:45 np0005603610.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 31 06:12:45 np0005603610.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 31 06:12:45 np0005603610.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 31 06:12:45 np0005603610.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 31 06:12:45 np0005603610.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 31 06:12:45 np0005603610.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 31 06:12:45 np0005603610.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 31 06:12:45 np0005603610.novalocal NetworkManager[857]: <info>  [1769839965.3211] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 06:12:45 np0005603610.novalocal systemd-udevd[6953]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 06:12:45 np0005603610.novalocal NetworkManager[857]: <info>  [1769839965.3324] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 06:12:45 np0005603610.novalocal NetworkManager[857]: <info>  [1769839965.3343] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 31 06:12:45 np0005603610.novalocal NetworkManager[857]: <info>  [1769839965.3347] device (eth1): carrier: link connected
Jan 31 06:12:45 np0005603610.novalocal NetworkManager[857]: <info>  [1769839965.3349] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 31 06:12:45 np0005603610.novalocal NetworkManager[857]: <info>  [1769839965.3354] policy: auto-activating connection 'Wired connection 1' (2f00db0b-ca36-316b-8cc5-7f01c76d6ed5)
Jan 31 06:12:45 np0005603610.novalocal NetworkManager[857]: <info>  [1769839965.3357] device (eth1): Activation: starting connection 'Wired connection 1' (2f00db0b-ca36-316b-8cc5-7f01c76d6ed5)
Jan 31 06:12:45 np0005603610.novalocal NetworkManager[857]: <info>  [1769839965.3358] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 06:12:45 np0005603610.novalocal NetworkManager[857]: <info>  [1769839965.3362] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 06:12:45 np0005603610.novalocal NetworkManager[857]: <info>  [1769839965.3365] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 06:12:45 np0005603610.novalocal NetworkManager[857]: <info>  [1769839965.3369] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 06:12:46 np0005603610.novalocal sshd-session[6957]: Accepted publickey for zuul from 38.102.83.114 port 47064 ssh2: RSA SHA256:XB4IpasupLQgCusHkIQqr06rUeufQJTktnyEJKRsUAs
Jan 31 06:12:46 np0005603610.novalocal systemd-logind[801]: New session 3 of user zuul.
Jan 31 06:12:46 np0005603610.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 31 06:12:46 np0005603610.novalocal sshd-session[6957]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:12:47 np0005603610.novalocal python3[6984]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-8b99-369b-0000000001f6-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:12:57 np0005603610.novalocal sudo[7062]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejuncalxhogngqthuhseyhzdosmderrw ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 06:12:57 np0005603610.novalocal sudo[7062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:12:57 np0005603610.novalocal python3[7064]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:12:57 np0005603610.novalocal sudo[7062]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:57 np0005603610.novalocal sudo[7135]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgdomrvtctylijgiyczcwxpwxpzfpvmt ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 06:12:57 np0005603610.novalocal sudo[7135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:12:57 np0005603610.novalocal python3[7137]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769839976.9541595-206-139651596850331/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=0ef70d0372f7e203cca964ab42ac6644076bc6a2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:12:57 np0005603610.novalocal sudo[7135]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:57 np0005603610.novalocal sudo[7185]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljnpdyaaguchylvfvajpqkqsdsdwbfmi ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 06:12:57 np0005603610.novalocal sudo[7185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:12:58 np0005603610.novalocal python3[7187]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 06:12:58 np0005603610.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 31 06:12:58 np0005603610.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 31 06:12:58 np0005603610.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[857]: <info>  [1769839978.2013] caught SIGTERM, shutting down normally.
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[857]: <info>  [1769839978.2019] dhcp4 (eth0): canceled DHCP transaction
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[857]: <info>  [1769839978.2019] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[857]: <info>  [1769839978.2019] dhcp4 (eth0): state changed no lease
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[857]: <info>  [1769839978.2021] manager: NetworkManager state is now CONNECTING
Jan 31 06:12:58 np0005603610.novalocal systemd[1]: Stopping Network Manager...
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[857]: <info>  [1769839978.2241] dhcp4 (eth1): canceled DHCP transaction
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[857]: <info>  [1769839978.2241] dhcp4 (eth1): state changed no lease
Jan 31 06:12:58 np0005603610.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 06:12:58 np0005603610.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[857]: <info>  [1769839978.3859] exiting (success)
Jan 31 06:12:58 np0005603610.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 31 06:12:58 np0005603610.novalocal systemd[1]: Stopped Network Manager.
Jan 31 06:12:58 np0005603610.novalocal systemd[1]: NetworkManager.service: Consumed 1.913s CPU time, 10.1M memory peak.
Jan 31 06:12:58 np0005603610.novalocal systemd[1]: Starting Network Manager...
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4251] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:0cf93558-72ef-4562-a65b-1a5f29acc8ec)
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4253] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4288] manager[0x55dc8702f000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 06:12:58 np0005603610.novalocal systemd[1]: Starting Hostname Service...
Jan 31 06:12:58 np0005603610.novalocal systemd[1]: Started Hostname Service.
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4857] hostname: hostname: using hostnamed
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4857] hostname: static hostname changed from (none) to "np0005603610.novalocal"
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4861] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4866] manager[0x55dc8702f000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4866] manager[0x55dc8702f000]: rfkill: WWAN hardware radio set enabled
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4886] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4887] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4888] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4888] manager: Networking is enabled by state file
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4889] settings: Loaded settings plugin: keyfile (internal)
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4892] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4914] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4920] dhcp: init: Using DHCP client 'internal'
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4922] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4926] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4931] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4937] device (lo): Activation: starting connection 'lo' (300d976d-4218-4e05-b077-90422be37423)
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4941] device (eth0): carrier: link connected
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4943] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4946] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4947] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4952] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4956] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4960] device (eth1): carrier: link connected
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4962] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4965] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (2f00db0b-ca36-316b-8cc5-7f01c76d6ed5) (indicated)
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4965] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4968] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4972] device (eth1): Activation: starting connection 'Wired connection 1' (2f00db0b-ca36-316b-8cc5-7f01c76d6ed5)
Jan 31 06:12:58 np0005603610.novalocal systemd[1]: Started Network Manager.
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4977] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4989] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4992] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4994] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4995] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4997] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.4999] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.5001] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.5003] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.5009] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.5012] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.5018] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.5025] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.5036] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.5039] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.5043] device (lo): Activation: successful, device activated.
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.5062] dhcp4 (eth0): state changed new lease, address=38.129.56.169
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.5067] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 06:12:58 np0005603610.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 31 06:12:58 np0005603610.novalocal sudo[7185]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.6473] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.6519] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.6521] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.6523] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.6525] device (eth0): Activation: successful, device activated.
Jan 31 06:12:58 np0005603610.novalocal NetworkManager[7204]: <info>  [1769839978.6529] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 06:12:58 np0005603610.novalocal python3[7260]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-8b99-369b-0000000000d3-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:13:08 np0005603610.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 06:13:28 np0005603610.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 06:13:43 np0005603610.novalocal NetworkManager[7204]: <info>  [1769840023.4736] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 06:13:43 np0005603610.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 06:13:43 np0005603610.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 06:13:43 np0005603610.novalocal NetworkManager[7204]: <info>  [1769840023.4984] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 06:13:43 np0005603610.novalocal NetworkManager[7204]: <info>  [1769840023.4987] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 06:13:43 np0005603610.novalocal NetworkManager[7204]: <info>  [1769840023.4993] device (eth1): Activation: successful, device activated.
Jan 31 06:13:43 np0005603610.novalocal NetworkManager[7204]: <info>  [1769840023.4999] manager: startup complete
Jan 31 06:13:43 np0005603610.novalocal NetworkManager[7204]: <info>  [1769840023.5001] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 31 06:13:43 np0005603610.novalocal NetworkManager[7204]: <warn>  [1769840023.5005] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 31 06:13:43 np0005603610.novalocal NetworkManager[7204]: <info>  [1769840023.5011] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 31 06:13:43 np0005603610.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 31 06:13:43 np0005603610.novalocal NetworkManager[7204]: <info>  [1769840023.5091] dhcp4 (eth1): canceled DHCP transaction
Jan 31 06:13:43 np0005603610.novalocal NetworkManager[7204]: <info>  [1769840023.5092] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 06:13:43 np0005603610.novalocal NetworkManager[7204]: <info>  [1769840023.5092] dhcp4 (eth1): state changed no lease
Jan 31 06:13:43 np0005603610.novalocal NetworkManager[7204]: <info>  [1769840023.5105] policy: auto-activating connection 'ci-private-network' (63d96858-969a-5d6a-a372-7ecf8e421040)
Jan 31 06:13:43 np0005603610.novalocal NetworkManager[7204]: <info>  [1769840023.5110] device (eth1): Activation: starting connection 'ci-private-network' (63d96858-969a-5d6a-a372-7ecf8e421040)
Jan 31 06:13:43 np0005603610.novalocal NetworkManager[7204]: <info>  [1769840023.5112] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 06:13:43 np0005603610.novalocal NetworkManager[7204]: <info>  [1769840023.5115] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 06:13:43 np0005603610.novalocal NetworkManager[7204]: <info>  [1769840023.5123] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 06:13:43 np0005603610.novalocal NetworkManager[7204]: <info>  [1769840023.5133] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 06:13:43 np0005603610.novalocal NetworkManager[7204]: <info>  [1769840023.5541] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 06:13:43 np0005603610.novalocal NetworkManager[7204]: <info>  [1769840023.5543] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 06:13:43 np0005603610.novalocal NetworkManager[7204]: <info>  [1769840023.5548] device (eth1): Activation: successful, device activated.
Jan 31 06:13:53 np0005603610.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 06:13:58 np0005603610.novalocal sshd-session[6960]: Received disconnect from 38.102.83.114 port 47064:11: disconnected by user
Jan 31 06:13:58 np0005603610.novalocal sshd-session[6960]: Disconnected from user zuul 38.102.83.114 port 47064
Jan 31 06:13:58 np0005603610.novalocal sshd-session[6957]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:13:58 np0005603610.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 31 06:13:58 np0005603610.novalocal systemd[1]: session-3.scope: Consumed 1.240s CPU time.
Jan 31 06:13:58 np0005603610.novalocal systemd-logind[801]: Session 3 logged out. Waiting for processes to exit.
Jan 31 06:13:58 np0005603610.novalocal systemd-logind[801]: Removed session 3.
Jan 31 06:14:13 np0005603610.novalocal sshd-session[7300]: Accepted publickey for zuul from 38.102.83.114 port 49544 ssh2: RSA SHA256:XB4IpasupLQgCusHkIQqr06rUeufQJTktnyEJKRsUAs
Jan 31 06:14:13 np0005603610.novalocal systemd-logind[801]: New session 4 of user zuul.
Jan 31 06:14:13 np0005603610.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 31 06:14:13 np0005603610.novalocal sshd-session[7300]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:14:13 np0005603610.novalocal sudo[7379]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfkyxqwhxstcfrvfzdyyfeyyfgnwlssf ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 06:14:13 np0005603610.novalocal sudo[7379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:14:13 np0005603610.novalocal python3[7381]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:14:13 np0005603610.novalocal sudo[7379]: pam_unix(sudo:session): session closed for user root
Jan 31 06:14:14 np0005603610.novalocal sudo[7452]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmgkoarjetdvobhhryqyrpuvscyhuqhl ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 06:14:14 np0005603610.novalocal sudo[7452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:14:14 np0005603610.novalocal python3[7454]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769840053.6062605-373-52424328964316/source _original_basename=tmpjua46b7g follow=False checksum=9316f16cb173efcd9fb25ed4519cb4ede33986d5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:14:14 np0005603610.novalocal sudo[7452]: pam_unix(sudo:session): session closed for user root
Jan 31 06:14:16 np0005603610.novalocal sshd-session[7303]: Connection closed by 38.102.83.114 port 49544
Jan 31 06:14:16 np0005603610.novalocal sshd-session[7300]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:14:16 np0005603610.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 31 06:14:16 np0005603610.novalocal systemd-logind[801]: Session 4 logged out. Waiting for processes to exit.
Jan 31 06:14:16 np0005603610.novalocal systemd-logind[801]: Removed session 4.
Jan 31 06:15:11 np0005603610.novalocal systemd[4309]: Created slice User Background Tasks Slice.
Jan 31 06:15:11 np0005603610.novalocal systemd[4309]: Starting Cleanup of User's Temporary Files and Directories...
Jan 31 06:15:11 np0005603610.novalocal systemd[4309]: Finished Cleanup of User's Temporary Files and Directories.
Jan 31 06:24:11 np0005603610.novalocal systemd[1]: Starting Cleanup of Temporary Directories...
Jan 31 06:24:11 np0005603610.novalocal systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 31 06:24:11 np0005603610.novalocal systemd[1]: Finished Cleanup of Temporary Directories.
Jan 31 06:24:11 np0005603610.novalocal systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 31 06:27:24 np0005603610.novalocal sshd-session[7490]: Accepted publickey for zuul from 38.102.83.114 port 49060 ssh2: RSA SHA256:XB4IpasupLQgCusHkIQqr06rUeufQJTktnyEJKRsUAs
Jan 31 06:27:24 np0005603610.novalocal systemd-logind[801]: New session 5 of user zuul.
Jan 31 06:27:24 np0005603610.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 31 06:27:24 np0005603610.novalocal sshd-session[7490]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:27:24 np0005603610.novalocal sudo[7517]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwqcfvulgqqvpkrpjfrhouytsyrsrqcw ; /usr/bin/python3'
Jan 31 06:27:24 np0005603610.novalocal sudo[7517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:24 np0005603610.novalocal python3[7519]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-879a-7837-000000000cd6-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:27:24 np0005603610.novalocal sudo[7517]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:25 np0005603610.novalocal sudo[7546]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqdedvdyxrujixxvxsteotykjtsydurf ; /usr/bin/python3'
Jan 31 06:27:25 np0005603610.novalocal sudo[7546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:25 np0005603610.novalocal python3[7548]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:27:25 np0005603610.novalocal sudo[7546]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:25 np0005603610.novalocal sudo[7572]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukyxxoglnzimdtdxoolqantrixrdqffu ; /usr/bin/python3'
Jan 31 06:27:25 np0005603610.novalocal sudo[7572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:25 np0005603610.novalocal python3[7574]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:27:25 np0005603610.novalocal sudo[7572]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:25 np0005603610.novalocal sudo[7598]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heqbffpazmikejkfypjuaydaloortddh ; /usr/bin/python3'
Jan 31 06:27:25 np0005603610.novalocal sudo[7598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:25 np0005603610.novalocal python3[7600]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:27:25 np0005603610.novalocal sudo[7598]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:26 np0005603610.novalocal sudo[7624]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrvxpgzrejwrpymtenmzfvfzypvflnmm ; /usr/bin/python3'
Jan 31 06:27:26 np0005603610.novalocal sudo[7624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:26 np0005603610.novalocal python3[7626]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:27:26 np0005603610.novalocal sudo[7624]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:26 np0005603610.novalocal sudo[7650]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwizdkurxtfisjcxoqnaffxyficfwant ; /usr/bin/python3'
Jan 31 06:27:26 np0005603610.novalocal sudo[7650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:26 np0005603610.novalocal python3[7652]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:27:26 np0005603610.novalocal sudo[7650]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:27 np0005603610.novalocal sudo[7728]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhuqrycxjwvgvueqgzsjwzgatlifqxuo ; /usr/bin/python3'
Jan 31 06:27:27 np0005603610.novalocal sudo[7728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:27 np0005603610.novalocal python3[7730]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:27:27 np0005603610.novalocal sudo[7728]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:27 np0005603610.novalocal sudo[7801]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfibsaavuowcwurpladpfxtphrdgwevt ; /usr/bin/python3'
Jan 31 06:27:27 np0005603610.novalocal sudo[7801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:27 np0005603610.novalocal python3[7803]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769840847.0562506-396-196178873333538/source _original_basename=tmp2uqvm44w follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:27:27 np0005603610.novalocal sudo[7801]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:28 np0005603610.novalocal sudo[7851]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytsrtoavxmydcqedxjzmntkocgxhhaem ; /usr/bin/python3'
Jan 31 06:27:28 np0005603610.novalocal sudo[7851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:29 np0005603610.novalocal python3[7853]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 06:27:29 np0005603610.novalocal systemd[1]: Reloading.
Jan 31 06:27:29 np0005603610.novalocal systemd-rc-local-generator[7872]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:27:29 np0005603610.novalocal sudo[7851]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:30 np0005603610.novalocal sudo[7907]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dneplpfovsfddgozpkbxcwkbothxaesm ; /usr/bin/python3'
Jan 31 06:27:30 np0005603610.novalocal sudo[7907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:30 np0005603610.novalocal python3[7909]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 31 06:27:30 np0005603610.novalocal sudo[7907]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:30 np0005603610.novalocal sudo[7933]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzthcnbzfbhpgejzfsfzztluuzbmfgzg ; /usr/bin/python3'
Jan 31 06:27:30 np0005603610.novalocal sudo[7933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:31 np0005603610.novalocal python3[7935]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:27:31 np0005603610.novalocal sudo[7933]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:31 np0005603610.novalocal sudo[7961]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxjxsxgsrbafvkefqhmepxaepvabphgy ; /usr/bin/python3'
Jan 31 06:27:31 np0005603610.novalocal sudo[7961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:31 np0005603610.novalocal python3[7963]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:27:31 np0005603610.novalocal sudo[7961]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:31 np0005603610.novalocal sudo[7989]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beobaldsmdoovsflaiaqbttshanojatq ; /usr/bin/python3'
Jan 31 06:27:31 np0005603610.novalocal sudo[7989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:31 np0005603610.novalocal python3[7991]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:27:31 np0005603610.novalocal sudo[7989]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:31 np0005603610.novalocal sudo[8017]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frzomevwnkrkfzhdwqifxaqnrtspkfjl ; /usr/bin/python3'
Jan 31 06:27:31 np0005603610.novalocal sudo[8017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:31 np0005603610.novalocal python3[8019]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:27:31 np0005603610.novalocal sudo[8017]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:32 np0005603610.novalocal python3[8046]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-879a-7837-000000000cdd-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:27:33 np0005603610.novalocal python3[8076]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 06:27:35 np0005603610.novalocal sshd-session[7493]: Connection closed by 38.102.83.114 port 49060
Jan 31 06:27:35 np0005603610.novalocal sshd-session[7490]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:27:35 np0005603610.novalocal systemd-logind[801]: Session 5 logged out. Waiting for processes to exit.
Jan 31 06:27:35 np0005603610.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Jan 31 06:27:35 np0005603610.novalocal systemd[1]: session-5.scope: Consumed 3.331s CPU time.
Jan 31 06:27:35 np0005603610.novalocal systemd-logind[801]: Removed session 5.
Jan 31 06:27:37 np0005603610.novalocal sshd-session[8080]: Accepted publickey for zuul from 38.102.83.114 port 52770 ssh2: RSA SHA256:XB4IpasupLQgCusHkIQqr06rUeufQJTktnyEJKRsUAs
Jan 31 06:27:37 np0005603610.novalocal systemd-logind[801]: New session 6 of user zuul.
Jan 31 06:27:37 np0005603610.novalocal systemd[1]: Started Session 6 of User zuul.
Jan 31 06:27:37 np0005603610.novalocal sshd-session[8080]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:27:37 np0005603610.novalocal sudo[8107]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgiisnuivoxcyfsribubinctvtjzoypw ; /usr/bin/python3'
Jan 31 06:27:37 np0005603610.novalocal sudo[8107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:38 np0005603610.novalocal python3[8109]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 06:27:50 np0005603610.novalocal setsebool[8152]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 31 06:27:50 np0005603610.novalocal setsebool[8152]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 31 06:28:04 np0005603610.novalocal kernel: SELinux:  Converting 385 SID table entries...
Jan 31 06:28:04 np0005603610.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 06:28:04 np0005603610.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 31 06:28:04 np0005603610.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 06:28:04 np0005603610.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 31 06:28:04 np0005603610.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 06:28:04 np0005603610.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 06:28:04 np0005603610.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 06:28:18 np0005603610.novalocal kernel: SELinux:  Converting 388 SID table entries...
Jan 31 06:28:18 np0005603610.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 06:28:18 np0005603610.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 31 06:28:18 np0005603610.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 06:28:18 np0005603610.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 31 06:28:18 np0005603610.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 06:28:18 np0005603610.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 06:28:18 np0005603610.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 06:28:38 np0005603610.novalocal dbus-broker-launch[793]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 31 06:28:38 np0005603610.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 06:28:38 np0005603610.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 31 06:28:38 np0005603610.novalocal systemd[1]: Reloading.
Jan 31 06:28:38 np0005603610.novalocal systemd-rc-local-generator[8922]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:28:38 np0005603610.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 06:28:43 np0005603610.novalocal sudo[8107]: pam_unix(sudo:session): session closed for user root
Jan 31 06:28:52 np0005603610.novalocal python3[16718]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ef9-e89a-ca6c-e883-00000000000c-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:28:53 np0005603610.novalocal kernel: evm: overlay not supported
Jan 31 06:28:53 np0005603610.novalocal systemd[4309]: Starting D-Bus User Message Bus...
Jan 31 06:28:53 np0005603610.novalocal dbus-broker-launch[17185]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 31 06:28:53 np0005603610.novalocal dbus-broker-launch[17185]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 31 06:28:53 np0005603610.novalocal systemd[4309]: Started D-Bus User Message Bus.
Jan 31 06:28:53 np0005603610.novalocal dbus-broker-lau[17185]: Ready
Jan 31 06:28:53 np0005603610.novalocal systemd[4309]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 31 06:28:53 np0005603610.novalocal systemd[4309]: Created slice Slice /user.
Jan 31 06:28:53 np0005603610.novalocal systemd[4309]: podman-17085.scope: unit configures an IP firewall, but not running as root.
Jan 31 06:28:53 np0005603610.novalocal systemd[4309]: (This warning is only shown for the first unit using IP firewalling.)
Jan 31 06:28:53 np0005603610.novalocal systemd[4309]: Started podman-17085.scope.
Jan 31 06:28:53 np0005603610.novalocal systemd[4309]: Started podman-pause-001da559.scope.
Jan 31 06:28:56 np0005603610.novalocal sudo[18554]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feoxsqzausnprmgeaadvbwdebqoeuegv ; /usr/bin/python3'
Jan 31 06:28:56 np0005603610.novalocal sudo[18554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:28:56 np0005603610.novalocal python3[18564]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.80:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.80:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:28:56 np0005603610.novalocal python3[18564]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 31 06:28:56 np0005603610.novalocal sudo[18554]: pam_unix(sudo:session): session closed for user root
Jan 31 06:28:56 np0005603610.novalocal sshd-session[8083]: Connection closed by 38.102.83.114 port 52770
Jan 31 06:28:56 np0005603610.novalocal sshd-session[8080]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:28:56 np0005603610.novalocal systemd[1]: session-6.scope: Deactivated successfully.
Jan 31 06:28:56 np0005603610.novalocal systemd[1]: session-6.scope: Consumed 44.915s CPU time.
Jan 31 06:28:56 np0005603610.novalocal systemd-logind[801]: Session 6 logged out. Waiting for processes to exit.
Jan 31 06:28:56 np0005603610.novalocal systemd-logind[801]: Removed session 6.
Jan 31 06:29:20 np0005603610.novalocal systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 06:29:20 np0005603610.novalocal systemd[1]: Finished man-db-cache-update.service.
Jan 31 06:29:20 np0005603610.novalocal systemd[1]: man-db-cache-update.service: Consumed 40.369s CPU time.
Jan 31 06:29:20 np0005603610.novalocal systemd[1]: run-rb42760afd6944f38b0872450eb9a3aeb.service: Deactivated successfully.
Jan 31 06:29:31 np0005603610.novalocal sshd-session[29654]: Unable to negotiate with 38.129.56.250 port 60050: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 31 06:29:31 np0005603610.novalocal sshd-session[29655]: Connection closed by 38.129.56.250 port 60048 [preauth]
Jan 31 06:29:31 np0005603610.novalocal sshd-session[29656]: Connection closed by 38.129.56.250 port 60046 [preauth]
Jan 31 06:29:31 np0005603610.novalocal sshd-session[29658]: Unable to negotiate with 38.129.56.250 port 60062: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 31 06:29:31 np0005603610.novalocal sshd-session[29660]: Unable to negotiate with 38.129.56.250 port 60060: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 31 06:29:36 np0005603610.novalocal sshd-session[29664]: Accepted publickey for zuul from 38.102.83.114 port 52376 ssh2: RSA SHA256:XB4IpasupLQgCusHkIQqr06rUeufQJTktnyEJKRsUAs
Jan 31 06:29:36 np0005603610.novalocal systemd-logind[801]: New session 7 of user zuul.
Jan 31 06:29:36 np0005603610.novalocal systemd[1]: Started Session 7 of User zuul.
Jan 31 06:29:36 np0005603610.novalocal sshd-session[29664]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:29:36 np0005603610.novalocal python3[29691]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL0FMDZV22TCmyOw/BdKuDUy2cGp4BfiRzOwx/JLqMraff8LZ9BOqpkfzg5VkWRHTAeVSl/Uyb0smrpcknokWxg= zuul@np0005603607.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:29:38 np0005603610.novalocal sudo[29715]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mglpamfvhrlcapslvfsndvysfwovvjip ; /usr/bin/python3'
Jan 31 06:29:38 np0005603610.novalocal sudo[29715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:29:38 np0005603610.novalocal python3[29717]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL0FMDZV22TCmyOw/BdKuDUy2cGp4BfiRzOwx/JLqMraff8LZ9BOqpkfzg5VkWRHTAeVSl/Uyb0smrpcknokWxg= zuul@np0005603607.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:29:38 np0005603610.novalocal sudo[29715]: pam_unix(sudo:session): session closed for user root
Jan 31 06:29:39 np0005603610.novalocal sudo[29741]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezyblzuekckgopgyyyzlemiprpyjubtw ; /usr/bin/python3'
Jan 31 06:29:39 np0005603610.novalocal sudo[29741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:29:39 np0005603610.novalocal python3[29743]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005603610.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 31 06:29:39 np0005603610.novalocal useradd[29745]: new group: name=cloud-admin, GID=1002
Jan 31 06:29:39 np0005603610.novalocal useradd[29745]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 31 06:29:40 np0005603610.novalocal sudo[29741]: pam_unix(sudo:session): session closed for user root
Jan 31 06:29:40 np0005603610.novalocal sudo[29775]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpqkrydpopqkohbsessvoxvuyegvdffx ; /usr/bin/python3'
Jan 31 06:29:40 np0005603610.novalocal sudo[29775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:29:41 np0005603610.novalocal python3[29777]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL0FMDZV22TCmyOw/BdKuDUy2cGp4BfiRzOwx/JLqMraff8LZ9BOqpkfzg5VkWRHTAeVSl/Uyb0smrpcknokWxg= zuul@np0005603607.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:29:41 np0005603610.novalocal sudo[29775]: pam_unix(sudo:session): session closed for user root
Jan 31 06:29:41 np0005603610.novalocal sudo[29853]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reupnxhdovhllvydzhtalduajmvxchjv ; /usr/bin/python3'
Jan 31 06:29:41 np0005603610.novalocal sudo[29853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:29:41 np0005603610.novalocal python3[29855]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:29:41 np0005603610.novalocal sudo[29853]: pam_unix(sudo:session): session closed for user root
Jan 31 06:29:42 np0005603610.novalocal sudo[29926]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kszvcrtovtktcikunnuddttjmuuvuzuv ; /usr/bin/python3'
Jan 31 06:29:42 np0005603610.novalocal sudo[29926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:29:42 np0005603610.novalocal python3[29928]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769840981.5817986-170-193536894513609/source _original_basename=tmpgab7bks4 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:29:42 np0005603610.novalocal sudo[29926]: pam_unix(sudo:session): session closed for user root
Jan 31 06:29:42 np0005603610.novalocal sudo[29976]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vceddshcfjpppcrqoktdbyelososcapv ; /usr/bin/python3'
Jan 31 06:29:42 np0005603610.novalocal sudo[29976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:29:43 np0005603610.novalocal python3[29978]: ansible-ansible.builtin.hostname Invoked with name=compute-2 use=systemd
Jan 31 06:29:43 np0005603610.novalocal systemd[1]: Starting Hostname Service...
Jan 31 06:29:43 np0005603610.novalocal systemd[1]: Started Hostname Service.
Jan 31 06:29:43 np0005603610.novalocal systemd-hostnamed[29982]: Changed pretty hostname to 'compute-2'
Jan 31 06:29:43 compute-2 systemd-hostnamed[29982]: Hostname set to <compute-2> (static)
Jan 31 06:29:43 compute-2 NetworkManager[7204]: <info>  [1769840983.2193] hostname: static hostname changed from "np0005603610.novalocal" to "compute-2"
Jan 31 06:29:43 compute-2 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 06:29:43 compute-2 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 06:29:43 compute-2 sudo[29976]: pam_unix(sudo:session): session closed for user root
Jan 31 06:29:43 compute-2 sshd-session[29667]: Connection closed by 38.102.83.114 port 52376
Jan 31 06:29:43 compute-2 sshd-session[29664]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:29:43 compute-2 systemd[1]: session-7.scope: Deactivated successfully.
Jan 31 06:29:43 compute-2 systemd[1]: session-7.scope: Consumed 2.017s CPU time.
Jan 31 06:29:43 compute-2 systemd-logind[801]: Session 7 logged out. Waiting for processes to exit.
Jan 31 06:29:43 compute-2 systemd-logind[801]: Removed session 7.
Jan 31 06:29:53 compute-2 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 06:30:13 compute-2 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 06:34:58 compute-2 sshd-session[30004]: Accepted publickey for zuul from 38.129.56.250 port 34206 ssh2: RSA SHA256:XB4IpasupLQgCusHkIQqr06rUeufQJTktnyEJKRsUAs
Jan 31 06:34:58 compute-2 systemd-logind[801]: New session 8 of user zuul.
Jan 31 06:34:58 compute-2 systemd[1]: Started Session 8 of User zuul.
Jan 31 06:34:58 compute-2 sshd-session[30004]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:34:58 compute-2 python3[30080]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:35:00 compute-2 sudo[30194]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubbjbyjuwxgescjgiwddtrpmnedepxsu ; /usr/bin/python3'
Jan 31 06:35:00 compute-2 sudo[30194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:35:00 compute-2 python3[30196]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:35:00 compute-2 sudo[30194]: pam_unix(sudo:session): session closed for user root
Jan 31 06:35:00 compute-2 sudo[30267]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zohwepeduqmvlalxgbfnialowdawmybi ; /usr/bin/python3'
Jan 31 06:35:00 compute-2 sudo[30267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:35:00 compute-2 python3[30269]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769841300.2926812-34137-159375312384638/source mode=0755 _original_basename=delorean.repo follow=False checksum=cc4ab4695da8ec58c451521a3dd2f41014af145d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:35:00 compute-2 sudo[30267]: pam_unix(sudo:session): session closed for user root
Jan 31 06:35:01 compute-2 sudo[30293]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhkjjtieqbeqvqabivfgqbfinoprhvhl ; /usr/bin/python3'
Jan 31 06:35:01 compute-2 sudo[30293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:35:01 compute-2 python3[30295]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:35:01 compute-2 sudo[30293]: pam_unix(sudo:session): session closed for user root
Jan 31 06:35:01 compute-2 sudo[30366]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntesmppgdfvpnitnzqfdgfwkfjddpfdy ; /usr/bin/python3'
Jan 31 06:35:01 compute-2 sudo[30366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:35:01 compute-2 python3[30368]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769841300.2926812-34137-159375312384638/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:35:01 compute-2 sudo[30366]: pam_unix(sudo:session): session closed for user root
Jan 31 06:35:01 compute-2 sudo[30392]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iswywrjyqovptvoexzyaemtemtxgjigv ; /usr/bin/python3'
Jan 31 06:35:01 compute-2 sudo[30392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:35:01 compute-2 python3[30394]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:35:01 compute-2 sudo[30392]: pam_unix(sudo:session): session closed for user root
Jan 31 06:35:01 compute-2 sudo[30465]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpinlpnqgwjaufetaavfutjkpuubsmxj ; /usr/bin/python3'
Jan 31 06:35:01 compute-2 sudo[30465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:35:01 compute-2 python3[30467]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769841300.2926812-34137-159375312384638/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:35:01 compute-2 sudo[30465]: pam_unix(sudo:session): session closed for user root
Jan 31 06:35:02 compute-2 sudo[30491]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goalmtrvmzyoahvurxofduyiwabmfqkm ; /usr/bin/python3'
Jan 31 06:35:02 compute-2 sudo[30491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:35:02 compute-2 python3[30493]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:35:02 compute-2 sudo[30491]: pam_unix(sudo:session): session closed for user root
Jan 31 06:35:02 compute-2 sudo[30564]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryrugtapddvrrsefldizcvfudfjqqxtu ; /usr/bin/python3'
Jan 31 06:35:02 compute-2 sudo[30564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:35:02 compute-2 python3[30566]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769841300.2926812-34137-159375312384638/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:35:02 compute-2 sudo[30564]: pam_unix(sudo:session): session closed for user root
Jan 31 06:35:02 compute-2 sudo[30590]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baepcsdzphedfmglxwlfkpegllkgkydu ; /usr/bin/python3'
Jan 31 06:35:02 compute-2 sudo[30590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:35:02 compute-2 python3[30592]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:35:02 compute-2 sudo[30590]: pam_unix(sudo:session): session closed for user root
Jan 31 06:35:02 compute-2 sudo[30663]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fumlitudlqmvwoqhoxqzunsodtzwdusf ; /usr/bin/python3'
Jan 31 06:35:02 compute-2 sudo[30663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:35:02 compute-2 python3[30665]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769841300.2926812-34137-159375312384638/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:35:02 compute-2 sudo[30663]: pam_unix(sudo:session): session closed for user root
Jan 31 06:35:02 compute-2 sudo[30689]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbitenjqsoxlqmlzzphqinjmzkblmzry ; /usr/bin/python3'
Jan 31 06:35:02 compute-2 sudo[30689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:35:03 compute-2 python3[30691]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:35:03 compute-2 sudo[30689]: pam_unix(sudo:session): session closed for user root
Jan 31 06:35:03 compute-2 sudo[30762]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zipatbwhxhiyvdrqwdrttcimiutjfber ; /usr/bin/python3'
Jan 31 06:35:03 compute-2 sudo[30762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:35:03 compute-2 python3[30764]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769841300.2926812-34137-159375312384638/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:35:03 compute-2 sudo[30762]: pam_unix(sudo:session): session closed for user root
Jan 31 06:35:03 compute-2 sudo[30788]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dabblbbmtdhhluppzsfngisntsqwxflm ; /usr/bin/python3'
Jan 31 06:35:03 compute-2 sudo[30788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:35:03 compute-2 python3[30790]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:35:03 compute-2 sudo[30788]: pam_unix(sudo:session): session closed for user root
Jan 31 06:35:03 compute-2 sudo[30861]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhiglrhjnoojyfpxuvvgnawtfydaizke ; /usr/bin/python3'
Jan 31 06:35:03 compute-2 sudo[30861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:35:03 compute-2 python3[30863]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769841300.2926812-34137-159375312384638/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=362a603578148d54e8cd25942b88d7f471cc677a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:35:03 compute-2 sudo[30861]: pam_unix(sudo:session): session closed for user root
Jan 31 06:35:15 compute-2 python3[30911]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:40:15 compute-2 sshd-session[30007]: Received disconnect from 38.129.56.250 port 34206:11: disconnected by user
Jan 31 06:40:15 compute-2 sshd-session[30007]: Disconnected from user zuul 38.129.56.250 port 34206
Jan 31 06:40:15 compute-2 sshd-session[30004]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:40:15 compute-2 systemd[1]: session-8.scope: Deactivated successfully.
Jan 31 06:40:15 compute-2 systemd[1]: session-8.scope: Consumed 3.767s CPU time.
Jan 31 06:40:15 compute-2 systemd-logind[801]: Session 8 logged out. Waiting for processes to exit.
Jan 31 06:40:15 compute-2 systemd-logind[801]: Removed session 8.
Jan 31 06:56:05 compute-2 sshd-session[30924]: Accepted publickey for zuul from 192.168.122.30 port 43548 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 06:56:05 compute-2 systemd-logind[801]: New session 9 of user zuul.
Jan 31 06:56:05 compute-2 systemd[1]: Started Session 9 of User zuul.
Jan 31 06:56:05 compute-2 sshd-session[30924]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:56:06 compute-2 python3.9[31077]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:56:07 compute-2 sudo[31256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcthwpidpibshxtpyvobtvauwzbonlzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842567.5340202-59-275042412874362/AnsiballZ_command.py'
Jan 31 06:56:07 compute-2 sudo[31256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:08 compute-2 python3.9[31258]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:56:20 compute-2 sudo[31256]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:20 compute-2 sshd-session[30927]: Connection closed by 192.168.122.30 port 43548
Jan 31 06:56:20 compute-2 sshd-session[30924]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:56:20 compute-2 systemd-logind[801]: Session 9 logged out. Waiting for processes to exit.
Jan 31 06:56:20 compute-2 systemd[1]: session-9.scope: Deactivated successfully.
Jan 31 06:56:20 compute-2 systemd[1]: session-9.scope: Consumed 8.313s CPU time.
Jan 31 06:56:20 compute-2 systemd-logind[801]: Removed session 9.
Jan 31 06:56:40 compute-2 sshd-session[31316]: Accepted publickey for zuul from 192.168.122.30 port 59598 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 06:56:40 compute-2 systemd-logind[801]: New session 10 of user zuul.
Jan 31 06:56:40 compute-2 systemd[1]: Started Session 10 of User zuul.
Jan 31 06:56:40 compute-2 sshd-session[31316]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:56:41 compute-2 python3.9[31469]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 31 06:56:42 compute-2 python3.9[31643]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:56:43 compute-2 sudo[31793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzcxbsupvcjtyffvxcjwyofybyntxzqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842602.8371935-95-17662169349873/AnsiballZ_command.py'
Jan 31 06:56:43 compute-2 sudo[31793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:43 compute-2 python3.9[31795]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:56:43 compute-2 sudo[31793]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:44 compute-2 sudo[31946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiumrklnnpvsmbukinjkufvhlmpzcyim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842603.820949-132-118853810707489/AnsiballZ_stat.py'
Jan 31 06:56:44 compute-2 sudo[31946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:44 compute-2 python3.9[31948]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:56:44 compute-2 sudo[31946]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:44 compute-2 sudo[32098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfrjzpxvauepjztwhqgcblcicjtflubq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842604.541597-155-171463161292429/AnsiballZ_file.py'
Jan 31 06:56:44 compute-2 sudo[32098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:45 compute-2 python3.9[32100]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:56:45 compute-2 sudo[32098]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:45 compute-2 sudo[32250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unbdgnubsoxkwspzcethjnsxjpjtnnwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842605.3335319-180-264876603559985/AnsiballZ_stat.py'
Jan 31 06:56:45 compute-2 sudo[32250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:45 compute-2 python3.9[32252]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:56:45 compute-2 sudo[32250]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:46 compute-2 sudo[32373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilnyqxqxyviwxvtazunqdiciwecxecgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842605.3335319-180-264876603559985/AnsiballZ_copy.py'
Jan 31 06:56:46 compute-2 sudo[32373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:46 compute-2 python3.9[32375]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842605.3335319-180-264876603559985/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:56:46 compute-2 sudo[32373]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:46 compute-2 sudo[32525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eubsdjlpgttueizrjmpobdndjwjpwyye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842606.6776047-224-183044850616932/AnsiballZ_setup.py'
Jan 31 06:56:46 compute-2 sudo[32525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:47 compute-2 python3.9[32527]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:56:47 compute-2 sudo[32525]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:47 compute-2 sudo[32681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itreohrwsmctlqzdiqoddrebpsikcpqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842607.560754-249-248205136727699/AnsiballZ_file.py'
Jan 31 06:56:47 compute-2 sudo[32681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:47 compute-2 python3.9[32683]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:56:47 compute-2 sudo[32681]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:48 compute-2 sudo[32833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mepyapkqcawguojktsrrqqjimjijttcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842608.1645014-276-4069546664210/AnsiballZ_file.py'
Jan 31 06:56:48 compute-2 sudo[32833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:48 compute-2 python3.9[32835]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:56:48 compute-2 sudo[32833]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:49 compute-2 python3.9[32985]: ansible-ansible.builtin.service_facts Invoked
Jan 31 06:56:52 compute-2 python3.9[33238]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:56:53 compute-2 python3.9[33388]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:56:54 compute-2 python3.9[33542]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:56:55 compute-2 sudo[33698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkblpcmbddkvvafmbotexyxamjobwiyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842615.41964-420-230379566471367/AnsiballZ_setup.py'
Jan 31 06:56:55 compute-2 sudo[33698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:55 compute-2 python3.9[33700]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 06:56:56 compute-2 sudo[33698]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:56 compute-2 sudo[33782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-labehclquhfbthrnohybfykmajwacdig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842615.41964-420-230379566471367/AnsiballZ_dnf.py'
Jan 31 06:56:56 compute-2 sudo[33782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:56 compute-2 python3.9[33784]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:57:44 compute-2 sshd[1006]: Timeout before authentication for connection from 180.76.172.156 to 38.129.56.169, pid = 30922
Jan 31 06:57:47 compute-2 systemd[1]: Reloading.
Jan 31 06:57:47 compute-2 systemd-rc-local-generator[33982]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:57:47 compute-2 systemd[1]: Starting dnf makecache...
Jan 31 06:57:47 compute-2 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 31 06:57:47 compute-2 dnf[33993]: Failed determining last makecache time.
Jan 31 06:57:47 compute-2 dnf[33993]: delorean-openstack-barbican-42b4c41831408a8e323 144 kB/s | 3.0 kB     00:00
Jan 31 06:57:47 compute-2 dnf[33993]: delorean-python-glean-642fffe0203a8ffcc2443db52 161 kB/s | 3.0 kB     00:00
Jan 31 06:57:47 compute-2 dnf[33993]: delorean-openstack-cinder-1c00d6490d88e436f26ef 156 kB/s | 3.0 kB     00:00
Jan 31 06:57:47 compute-2 dnf[33993]: delorean-python-stevedore-c4acc5639fd2329372142 165 kB/s | 3.0 kB     00:00
Jan 31 06:57:48 compute-2 dnf[33993]: delorean-python-cloudkitty-tests-tempest-783703 167 kB/s | 3.0 kB     00:00
Jan 31 06:57:48 compute-2 dnf[33993]: delorean-diskimage-builder-61b717cc45660834fe9a 166 kB/s | 3.0 kB     00:00
Jan 31 06:57:48 compute-2 systemd[1]: Reloading.
Jan 31 06:57:48 compute-2 dnf[33993]: delorean-openstack-nova-eaa65f0b85123a4ee343246 171 kB/s | 3.0 kB     00:00
Jan 31 06:57:48 compute-2 dnf[33993]: delorean-python-designate-tests-tempest-347fdbc 164 kB/s | 3.0 kB     00:00
Jan 31 06:57:48 compute-2 dnf[33993]: delorean-openstack-glance-1fd12c29b339f30fe823e 156 kB/s | 3.0 kB     00:00
Jan 31 06:57:48 compute-2 systemd-rc-local-generator[34036]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:57:48 compute-2 dnf[33993]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 140 kB/s | 3.0 kB     00:00
Jan 31 06:57:48 compute-2 dnf[33993]: delorean-openstack-manila-d783d10e75495b73866db 181 kB/s | 3.0 kB     00:00
Jan 31 06:57:48 compute-2 dnf[33993]: delorean-openstack-neutron-95cadbd379667c8520c8 161 kB/s | 3.0 kB     00:00
Jan 31 06:57:48 compute-2 dnf[33993]: delorean-openstack-octavia-5975097dd4b021385178 130 kB/s | 3.0 kB     00:00
Jan 31 06:57:48 compute-2 dnf[33993]: delorean-openstack-watcher-c014f81a8647287f6dcc 127 kB/s | 3.0 kB     00:00
Jan 31 06:57:48 compute-2 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 31 06:57:48 compute-2 dnf[33993]: delorean-python-tcib-78032d201b02cee27e8e644c61 177 kB/s | 3.0 kB     00:00
Jan 31 06:57:48 compute-2 dnf[33993]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 182 kB/s | 3.0 kB     00:00
Jan 31 06:57:48 compute-2 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 31 06:57:48 compute-2 dnf[33993]: delorean-openstack-swift-dc98a8463506ac520c469a 163 kB/s | 3.0 kB     00:00
Jan 31 06:57:48 compute-2 systemd[1]: Reloading.
Jan 31 06:57:48 compute-2 dnf[33993]: delorean-python-tempestconf-8515371b7cceebd4282 151 kB/s | 3.0 kB     00:00
Jan 31 06:57:48 compute-2 dnf[33993]: delorean-openstack-heat-ui-013accbfd179753bc3f0 172 kB/s | 3.0 kB     00:00
Jan 31 06:57:48 compute-2 systemd-rc-local-generator[34078]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:57:48 compute-2 dnf[33993]: CentOS Stream 9 - BaseOS                         61 kB/s | 6.1 kB     00:00
Jan 31 06:57:48 compute-2 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 31 06:57:48 compute-2 dnf[33993]: CentOS Stream 9 - AppStream                      62 kB/s | 6.5 kB     00:00
Jan 31 06:57:48 compute-2 dbus-broker-launch[787]: Noticed file-system modification, trigger reload.
Jan 31 06:57:48 compute-2 dbus-broker-launch[787]: Noticed file-system modification, trigger reload.
Jan 31 06:57:48 compute-2 dbus-broker-launch[787]: Noticed file-system modification, trigger reload.
Jan 31 06:57:48 compute-2 dnf[33993]: CentOS Stream 9 - CRB                            57 kB/s | 6.0 kB     00:00
Jan 31 06:57:48 compute-2 dnf[33993]: CentOS Stream 9 - Extras packages                69 kB/s | 7.3 kB     00:00
Jan 31 06:57:49 compute-2 dnf[33993]: dlrn-antelope-testing                           141 kB/s | 3.0 kB     00:00
Jan 31 06:57:49 compute-2 dnf[33993]: dlrn-antelope-build-deps                        158 kB/s | 3.0 kB     00:00
Jan 31 06:57:49 compute-2 dnf[33993]: centos9-rabbitmq                                118 kB/s | 3.0 kB     00:00
Jan 31 06:57:49 compute-2 dnf[33993]: centos9-storage                                 119 kB/s | 3.0 kB     00:00
Jan 31 06:57:49 compute-2 dnf[33993]: centos9-opstools                                133 kB/s | 3.0 kB     00:00
Jan 31 06:57:49 compute-2 dnf[33993]: NFV SIG OpenvSwitch                             125 kB/s | 3.0 kB     00:00
Jan 31 06:57:49 compute-2 dnf[33993]: repo-setup-centos-appstream                     162 kB/s | 4.4 kB     00:00
Jan 31 06:57:49 compute-2 dnf[33993]: repo-setup-centos-baseos                        156 kB/s | 3.9 kB     00:00
Jan 31 06:57:49 compute-2 dnf[33993]: repo-setup-centos-highavailability              158 kB/s | 3.9 kB     00:00
Jan 31 06:57:49 compute-2 dnf[33993]: repo-setup-centos-powertools                    164 kB/s | 4.3 kB     00:00
Jan 31 06:57:49 compute-2 dnf[33993]: Extra Packages for Enterprise Linux 9 - x86_64   98 kB/s |  31 kB     00:00
Jan 31 06:57:50 compute-2 dnf[33993]: Metadata cache created.
Jan 31 06:57:50 compute-2 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 31 06:57:50 compute-2 systemd[1]: Finished dnf makecache.
Jan 31 06:57:50 compute-2 systemd[1]: dnf-makecache.service: Consumed 2.018s CPU time.
Jan 31 06:58:49 compute-2 kernel: SELinux:  Converting 2727 SID table entries...
Jan 31 06:58:49 compute-2 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 06:58:49 compute-2 kernel: SELinux:  policy capability open_perms=1
Jan 31 06:58:49 compute-2 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 06:58:49 compute-2 kernel: SELinux:  policy capability always_check_network=0
Jan 31 06:58:49 compute-2 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 06:58:49 compute-2 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 06:58:49 compute-2 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 06:58:49 compute-2 dbus-broker-launch[793]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 31 06:58:49 compute-2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 06:58:49 compute-2 systemd[1]: Starting man-db-cache-update.service...
Jan 31 06:58:49 compute-2 systemd[1]: Reloading.
Jan 31 06:58:49 compute-2 systemd-rc-local-generator[34419]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:58:49 compute-2 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 06:58:50 compute-2 sudo[33782]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:50 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 06:58:50 compute-2 systemd[1]: Finished man-db-cache-update.service.
Jan 31 06:58:50 compute-2 systemd[1]: run-r2bf0ff0517294f26bbea6c6e4b156e6e.service: Deactivated successfully.
Jan 31 06:58:51 compute-2 sudo[35334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hirvkfxglbtxvorertlhyywipqrljxks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842731.366101-457-119175873011552/AnsiballZ_command.py'
Jan 31 06:58:51 compute-2 sudo[35334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:51 compute-2 python3.9[35336]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:58:52 compute-2 sudo[35334]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:54 compute-2 sudo[35615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voqfwupspwnhxmrhrdczhdixystburno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842733.52453-481-143934473628888/AnsiballZ_selinux.py'
Jan 31 06:58:54 compute-2 sudo[35615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:54 compute-2 python3.9[35617]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 31 06:58:54 compute-2 sudo[35615]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:55 compute-2 sudo[35767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axbizdwrhxmyggadcwgimjpcakzbjazl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842735.2140858-513-57119178068960/AnsiballZ_command.py'
Jan 31 06:58:55 compute-2 sudo[35767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:55 compute-2 python3.9[35769]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 31 06:58:57 compute-2 sudo[35767]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:58 compute-2 sudo[35920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypdcihdombbewqxyhpziefwnugkdtzvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842738.1192453-536-56058689146659/AnsiballZ_file.py'
Jan 31 06:58:58 compute-2 sudo[35920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:58 compute-2 python3.9[35922]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:58:58 compute-2 sudo[35920]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:59 compute-2 sudo[36072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daovhlcpwdmtnifalsdljhunuqvpygpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842739.2900763-561-264119234972173/AnsiballZ_mount.py'
Jan 31 06:58:59 compute-2 sudo[36072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:59 compute-2 python3.9[36074]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 31 06:58:59 compute-2 sudo[36072]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:01 compute-2 sudo[36224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncgakjczvioxedhdzlgfrhdjennpktys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842741.4646485-645-220245744176360/AnsiballZ_file.py'
Jan 31 06:59:01 compute-2 sudo[36224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:02 compute-2 python3.9[36226]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:59:02 compute-2 sudo[36224]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:06 compute-2 sudo[36376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgzfzcvnsivfmfkejovoejnfgpxwwikn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842746.2842839-669-87853126217008/AnsiballZ_stat.py'
Jan 31 06:59:06 compute-2 sudo[36376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:07 compute-2 python3.9[36378]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:59:07 compute-2 sudo[36376]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:07 compute-2 sudo[36499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mudyovhqukeflpjbzxxsqyqidyherypx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842746.2842839-669-87853126217008/AnsiballZ_copy.py'
Jan 31 06:59:07 compute-2 sudo[36499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:07 compute-2 python3.9[36501]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842746.2842839-669-87853126217008/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=823ddfb9481e8da2761411a2055d0fb6b98e0ac2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:59:07 compute-2 sudo[36499]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:09 compute-2 sudo[36651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmwhudlmsdnxamtbkutvtdhrfmufqrha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842749.5862267-741-226194934643081/AnsiballZ_stat.py'
Jan 31 06:59:09 compute-2 sudo[36651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:10 compute-2 python3.9[36653]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:59:10 compute-2 sudo[36651]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:10 compute-2 sudo[36803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvwfqkueacmpixjrmsevkabslmmcdryd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842750.3474405-765-81575674917933/AnsiballZ_command.py'
Jan 31 06:59:10 compute-2 sudo[36803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:10 compute-2 python3.9[36805]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:59:10 compute-2 sudo[36803]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:11 compute-2 sudo[36956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkwkmwykqqypqwsomglzcztfnxzsfnvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842751.488492-789-247277268899471/AnsiballZ_file.py'
Jan 31 06:59:11 compute-2 sudo[36956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:12 compute-2 python3.9[36958]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:59:12 compute-2 sudo[36956]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:12 compute-2 sudo[37108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnxytqkrctlkckjdpjqstehkiniyyajd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842752.486568-822-206779690892746/AnsiballZ_getent.py'
Jan 31 06:59:12 compute-2 sudo[37108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:13 compute-2 python3.9[37110]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 31 06:59:13 compute-2 sudo[37108]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:13 compute-2 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 06:59:13 compute-2 sudo[37262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnnwygeiimaffdgvzgibzyqqnwkflcfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842753.3844097-845-101795999598632/AnsiballZ_group.py'
Jan 31 06:59:13 compute-2 sudo[37262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:13 compute-2 python3.9[37264]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 06:59:14 compute-2 groupadd[37265]: group added to /etc/group: name=qemu, GID=107
Jan 31 06:59:14 compute-2 groupadd[37265]: group added to /etc/gshadow: name=qemu
Jan 31 06:59:14 compute-2 groupadd[37265]: new group: name=qemu, GID=107
Jan 31 06:59:14 compute-2 sudo[37262]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:14 compute-2 sudo[37420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-komnnrmeweacdnhthyrprvvprhdtuoyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842754.294057-869-249828958563781/AnsiballZ_user.py'
Jan 31 06:59:14 compute-2 sudo[37420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:15 compute-2 python3.9[37422]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 06:59:15 compute-2 useradd[37424]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 31 06:59:15 compute-2 sudo[37420]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:15 compute-2 sudo[37580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-angojlrarqzzllwniyiwlmmixapsfgjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842755.4482226-894-33065295101110/AnsiballZ_getent.py'
Jan 31 06:59:15 compute-2 sudo[37580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:15 compute-2 python3.9[37582]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 31 06:59:15 compute-2 sudo[37580]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:16 compute-2 sudo[37733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxhigahypxxjyyrlqauhmggfiskjmjil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842756.0733337-918-242770721438512/AnsiballZ_group.py'
Jan 31 06:59:16 compute-2 sudo[37733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:16 compute-2 python3.9[37735]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 06:59:16 compute-2 groupadd[37736]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 31 06:59:16 compute-2 groupadd[37736]: group added to /etc/gshadow: name=hugetlbfs
Jan 31 06:59:16 compute-2 groupadd[37736]: new group: name=hugetlbfs, GID=42477
Jan 31 06:59:16 compute-2 sudo[37733]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:17 compute-2 sudo[37891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxrufspyzrwwdxvjoiiarsfqpikjmuir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842756.8749568-944-151226286065991/AnsiballZ_file.py'
Jan 31 06:59:17 compute-2 sudo[37891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:17 compute-2 python3.9[37893]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 31 06:59:17 compute-2 sudo[37891]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:18 compute-2 sudo[38043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttzskoexultfiruvwuowwotvdkvffhvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842757.8695688-977-71883117472255/AnsiballZ_dnf.py'
Jan 31 06:59:18 compute-2 sudo[38043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:18 compute-2 python3.9[38045]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:59:20 compute-2 sudo[38043]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:21 compute-2 sudo[38197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovkijtnofqkkwkcvzgsolopuoltalmzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842761.0742028-1002-136761872938392/AnsiballZ_file.py'
Jan 31 06:59:21 compute-2 sudo[38197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:21 compute-2 python3.9[38199]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:59:21 compute-2 sudo[38197]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:21 compute-2 sudo[38349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlhcbtpvfboogcopdbuuywoscxsfohwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842761.7583528-1026-36228779095687/AnsiballZ_stat.py'
Jan 31 06:59:21 compute-2 sudo[38349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:22 compute-2 python3.9[38351]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:59:22 compute-2 sudo[38349]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:22 compute-2 sudo[38472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxsswsybrvysoxlvhabdqajcbhmvlbvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842761.7583528-1026-36228779095687/AnsiballZ_copy.py'
Jan 31 06:59:22 compute-2 sudo[38472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:22 compute-2 python3.9[38474]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842761.7583528-1026-36228779095687/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:59:22 compute-2 sudo[38472]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:23 compute-2 sudo[38624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcivrfsybezplrvudgtobkkmhqmujmno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842762.9730532-1071-261745877868798/AnsiballZ_systemd.py'
Jan 31 06:59:23 compute-2 sudo[38624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:23 compute-2 python3.9[38626]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 06:59:23 compute-2 systemd[1]: Starting Load Kernel Modules...
Jan 31 06:59:23 compute-2 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 31 06:59:23 compute-2 kernel: Bridge firewalling registered
Jan 31 06:59:23 compute-2 systemd-modules-load[38630]: Inserted module 'br_netfilter'
Jan 31 06:59:23 compute-2 systemd[1]: Finished Load Kernel Modules.
Jan 31 06:59:23 compute-2 sudo[38624]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:24 compute-2 sudo[38783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqrsoqhmkqdovwgpgubrzwsarhtfeuan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842764.2040703-1095-215088958864496/AnsiballZ_stat.py'
Jan 31 06:59:24 compute-2 sudo[38783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:24 compute-2 python3.9[38785]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:59:24 compute-2 sudo[38783]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:24 compute-2 sudo[38906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axojtwmgeayqsjbkdqnefjipuvoaspjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842764.2040703-1095-215088958864496/AnsiballZ_copy.py'
Jan 31 06:59:24 compute-2 sudo[38906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:25 compute-2 python3.9[38908]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842764.2040703-1095-215088958864496/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:59:25 compute-2 sudo[38906]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:25 compute-2 sudo[39058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yegjjpgwokmdwyvhactqzyncittrvzwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842765.6570754-1148-217579692574696/AnsiballZ_dnf.py'
Jan 31 06:59:25 compute-2 sudo[39058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:26 compute-2 python3.9[39060]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:59:29 compute-2 dbus-broker-launch[787]: Noticed file-system modification, trigger reload.
Jan 31 06:59:29 compute-2 dbus-broker-launch[787]: Noticed file-system modification, trigger reload.
Jan 31 06:59:29 compute-2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 06:59:29 compute-2 systemd[1]: Starting man-db-cache-update.service...
Jan 31 06:59:29 compute-2 systemd[1]: Reloading.
Jan 31 06:59:29 compute-2 systemd-rc-local-generator[39115]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:59:29 compute-2 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 06:59:30 compute-2 sudo[39058]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:31 compute-2 python3.9[41655]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:59:32 compute-2 python3.9[42913]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 31 06:59:32 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 06:59:32 compute-2 systemd[1]: Finished man-db-cache-update.service.
Jan 31 06:59:32 compute-2 systemd[1]: man-db-cache-update.service: Consumed 3.449s CPU time.
Jan 31 06:59:32 compute-2 systemd[1]: run-ra1228de02d3342c184133d405d95aaf1.service: Deactivated successfully.
Jan 31 06:59:32 compute-2 python3.9[43112]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:59:33 compute-2 sudo[43262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azyufdbpjwshoskjjdrejcyzerhmfcbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842773.2703066-1265-191327748949529/AnsiballZ_command.py'
Jan 31 06:59:33 compute-2 sudo[43262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:33 compute-2 python3.9[43264]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:59:33 compute-2 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 06:59:34 compute-2 systemd[1]: Starting Authorization Manager...
Jan 31 06:59:34 compute-2 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 06:59:34 compute-2 polkitd[43481]: Started polkitd version 0.117
Jan 31 06:59:34 compute-2 polkitd[43481]: Loading rules from directory /etc/polkit-1/rules.d
Jan 31 06:59:34 compute-2 polkitd[43481]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 31 06:59:34 compute-2 polkitd[43481]: Finished loading, compiling and executing 2 rules
Jan 31 06:59:34 compute-2 systemd[1]: Started Authorization Manager.
Jan 31 06:59:34 compute-2 polkitd[43481]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 31 06:59:34 compute-2 sudo[43262]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:35 compute-2 sudo[43649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-undfcjqendkvkucudfnrikvidvrnypix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842774.9972405-1293-180981714104057/AnsiballZ_systemd.py'
Jan 31 06:59:35 compute-2 sudo[43649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:35 compute-2 python3.9[43651]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:59:35 compute-2 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 31 06:59:35 compute-2 systemd[1]: tuned.service: Deactivated successfully.
Jan 31 06:59:35 compute-2 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 31 06:59:35 compute-2 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 06:59:35 compute-2 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 06:59:35 compute-2 sudo[43649]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:36 compute-2 python3.9[43813]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 31 06:59:40 compute-2 sudo[43963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdjuekmtkzfcpopqyfdcjxqgqxeqrwrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842780.4957974-1464-254285572243530/AnsiballZ_systemd.py'
Jan 31 06:59:40 compute-2 sudo[43963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:40 compute-2 python3.9[43965]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:59:41 compute-2 systemd[1]: Reloading.
Jan 31 06:59:41 compute-2 systemd-rc-local-generator[43990]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:59:41 compute-2 sudo[43963]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:41 compute-2 sudo[44152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmenutdfrojqeykrcedggxvjcsmbyvxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842781.294491-1464-214487619138881/AnsiballZ_systemd.py'
Jan 31 06:59:41 compute-2 sudo[44152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:41 compute-2 python3.9[44154]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:59:41 compute-2 systemd[1]: Reloading.
Jan 31 06:59:41 compute-2 systemd-rc-local-generator[44180]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:59:42 compute-2 sudo[44152]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:42 compute-2 sudo[44341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrvzqsujhzvymmllzwdykrfeaokzzlrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842782.7203197-1511-218193878372794/AnsiballZ_command.py'
Jan 31 06:59:42 compute-2 sudo[44341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:43 compute-2 python3.9[44343]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:59:43 compute-2 sudo[44341]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:43 compute-2 sudo[44494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxbvqxkretaqkuteralyxmomehqldynv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842783.4349928-1536-185693679186408/AnsiballZ_command.py'
Jan 31 06:59:43 compute-2 sudo[44494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:43 compute-2 python3.9[44496]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:59:43 compute-2 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 31 06:59:43 compute-2 sudo[44494]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:44 compute-2 sudo[44647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kunldwifbpxkvrolskqwnefeqlkkdbhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842784.100415-1560-187735830517337/AnsiballZ_command.py'
Jan 31 06:59:44 compute-2 sudo[44647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:44 compute-2 python3.9[44649]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:59:45 compute-2 sudo[44647]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:46 compute-2 sudo[44809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnmpewxkgziwblpsaceyjpimixeeoruy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842786.273296-1584-273051993948337/AnsiballZ_command.py'
Jan 31 06:59:46 compute-2 sudo[44809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:46 compute-2 python3.9[44811]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:59:46 compute-2 sudo[44809]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:47 compute-2 sudo[44962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mabrwqkxlkgcokiewqjgexoehspytrtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842786.9483085-1608-171545064276169/AnsiballZ_systemd.py'
Jan 31 06:59:47 compute-2 sudo[44962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:47 compute-2 python3.9[44964]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 06:59:47 compute-2 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 31 06:59:47 compute-2 systemd[1]: Stopped Apply Kernel Variables.
Jan 31 06:59:47 compute-2 systemd[1]: Stopping Apply Kernel Variables...
Jan 31 06:59:47 compute-2 systemd[1]: Starting Apply Kernel Variables...
Jan 31 06:59:47 compute-2 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 31 06:59:47 compute-2 systemd[1]: Finished Apply Kernel Variables.
Jan 31 06:59:47 compute-2 sudo[44962]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:47 compute-2 sshd-session[31319]: Connection closed by 192.168.122.30 port 59598
Jan 31 06:59:48 compute-2 sshd-session[31316]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:59:48 compute-2 systemd[1]: session-10.scope: Deactivated successfully.
Jan 31 06:59:48 compute-2 systemd[1]: session-10.scope: Consumed 2min 11.320s CPU time.
Jan 31 06:59:48 compute-2 systemd-logind[801]: Session 10 logged out. Waiting for processes to exit.
Jan 31 06:59:48 compute-2 systemd-logind[801]: Removed session 10.
Jan 31 06:59:56 compute-2 sshd-session[44994]: Accepted publickey for zuul from 192.168.122.30 port 44398 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 06:59:56 compute-2 systemd-logind[801]: New session 11 of user zuul.
Jan 31 06:59:56 compute-2 systemd[1]: Started Session 11 of User zuul.
Jan 31 06:59:56 compute-2 sshd-session[44994]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:59:57 compute-2 python3.9[45147]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:59:58 compute-2 sudo[45301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsnqukvjnftxzignvolyqajmekcwztju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842798.0114622-71-84447409155178/AnsiballZ_getent.py'
Jan 31 06:59:58 compute-2 sudo[45301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:58 compute-2 python3.9[45303]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 31 06:59:58 compute-2 sudo[45301]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:59 compute-2 sudo[45454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdsdyzxrrliyxpsfwyfbkvjrydxugrhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842798.7579834-94-12908385344800/AnsiballZ_group.py'
Jan 31 06:59:59 compute-2 sudo[45454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:59 compute-2 python3.9[45456]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 06:59:59 compute-2 groupadd[45457]: group added to /etc/group: name=openvswitch, GID=42476
Jan 31 06:59:59 compute-2 groupadd[45457]: group added to /etc/gshadow: name=openvswitch
Jan 31 06:59:59 compute-2 groupadd[45457]: new group: name=openvswitch, GID=42476
Jan 31 06:59:59 compute-2 sudo[45454]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:00 compute-2 sudo[45612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvaqqxwtejpogbdpxjofdkkfwwujwgjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842799.6064384-118-44901432076601/AnsiballZ_user.py'
Jan 31 07:00:00 compute-2 sudo[45612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:00 compute-2 python3.9[45614]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 07:00:00 compute-2 useradd[45616]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 31 07:00:00 compute-2 useradd[45616]: add 'openvswitch' to group 'hugetlbfs'
Jan 31 07:00:00 compute-2 useradd[45616]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 31 07:00:00 compute-2 sudo[45612]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:01 compute-2 sudo[45772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiachjjuucbevozxibeeosdhzhzdikev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842800.8001842-148-170680739439630/AnsiballZ_setup.py'
Jan 31 07:00:01 compute-2 sudo[45772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:01 compute-2 python3.9[45774]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:00:01 compute-2 sudo[45772]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:01 compute-2 sudo[45856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbnlgtlhkcbgljekwdwtkbofwytfgwsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842800.8001842-148-170680739439630/AnsiballZ_dnf.py'
Jan 31 07:00:01 compute-2 sudo[45856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:02 compute-2 python3.9[45858]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 07:00:02 compute-2 sshd-session[45860]: Connection closed by 92.118.39.56 port 47934
Jan 31 07:00:04 compute-2 sudo[45856]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:05 compute-2 sudo[46021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjmhbdzlxamfyclbctiryribchfeyxim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842804.8931944-190-52241889804699/AnsiballZ_dnf.py'
Jan 31 07:00:05 compute-2 sudo[46021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:05 compute-2 python3.9[46023]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:00:19 compute-2 kernel: SELinux:  Converting 2739 SID table entries...
Jan 31 07:00:19 compute-2 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 07:00:19 compute-2 kernel: SELinux:  policy capability open_perms=1
Jan 31 07:00:19 compute-2 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 07:00:19 compute-2 kernel: SELinux:  policy capability always_check_network=0
Jan 31 07:00:19 compute-2 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 07:00:19 compute-2 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 07:00:19 compute-2 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 07:00:19 compute-2 groupadd[46046]: group added to /etc/group: name=unbound, GID=994
Jan 31 07:00:19 compute-2 groupadd[46046]: group added to /etc/gshadow: name=unbound
Jan 31 07:00:19 compute-2 groupadd[46046]: new group: name=unbound, GID=994
Jan 31 07:00:19 compute-2 useradd[46053]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 31 07:00:19 compute-2 dbus-broker-launch[793]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 31 07:00:19 compute-2 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 31 07:00:20 compute-2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 07:00:20 compute-2 systemd[1]: Starting man-db-cache-update.service...
Jan 31 07:00:20 compute-2 systemd[1]: Reloading.
Jan 31 07:00:20 compute-2 systemd-rc-local-generator[46552]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:00:20 compute-2 systemd-sysv-generator[46556]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:00:21 compute-2 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 07:00:21 compute-2 sudo[46021]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:22 compute-2 sudo[47120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otxufwpoetypvprljcnbucteolrcrdeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842822.0602117-214-139968056224837/AnsiballZ_systemd.py'
Jan 31 07:00:22 compute-2 sudo[47120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:22 compute-2 python3.9[47122]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 07:00:22 compute-2 systemd[1]: Reloading.
Jan 31 07:00:23 compute-2 systemd-rc-local-generator[47147]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:00:23 compute-2 systemd-sysv-generator[47153]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:00:23 compute-2 systemd[1]: Starting Open vSwitch Database Unit...
Jan 31 07:00:23 compute-2 chown[47164]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 31 07:00:23 compute-2 ovs-ctl[47169]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 31 07:00:23 compute-2 ovs-ctl[47169]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 31 07:00:23 compute-2 ovs-ctl[47169]: Starting ovsdb-server [  OK  ]
Jan 31 07:00:23 compute-2 ovs-vsctl[47219]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 31 07:00:23 compute-2 ovs-vsctl[47239]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"c06836a7-1d29-4815-800d-4e6d21a36ae0\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 31 07:00:23 compute-2 ovs-ctl[47169]: Configuring Open vSwitch system IDs [  OK  ]
Jan 31 07:00:23 compute-2 ovs-ctl[47169]: Enabling remote OVSDB managers [  OK  ]
Jan 31 07:00:23 compute-2 systemd[1]: Started Open vSwitch Database Unit.
Jan 31 07:00:23 compute-2 ovs-vsctl[47245]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-2
Jan 31 07:00:23 compute-2 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 31 07:00:23 compute-2 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 31 07:00:23 compute-2 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 31 07:00:23 compute-2 kernel: openvswitch: Open vSwitch switching datapath
Jan 31 07:00:23 compute-2 ovs-ctl[47289]: Inserting openvswitch module [  OK  ]
Jan 31 07:00:24 compute-2 ovs-ctl[47258]: Starting ovs-vswitchd [  OK  ]
Jan 31 07:00:24 compute-2 ovs-vsctl[47309]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-2
Jan 31 07:00:24 compute-2 ovs-ctl[47258]: Enabling remote OVSDB managers [  OK  ]
Jan 31 07:00:24 compute-2 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 31 07:00:24 compute-2 systemd[1]: Starting Open vSwitch...
Jan 31 07:00:24 compute-2 systemd[1]: Finished Open vSwitch.
Jan 31 07:00:24 compute-2 sudo[47120]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:24 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 07:00:24 compute-2 systemd[1]: Finished man-db-cache-update.service.
Jan 31 07:00:24 compute-2 systemd[1]: run-ra2b6644205dd43aab187175b718c2552.service: Deactivated successfully.
Jan 31 07:00:24 compute-2 python3.9[47462]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:00:25 compute-2 sudo[47612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtotovonexaxmyjuwlmnwasxxnerbxnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842825.1484056-268-119609720597667/AnsiballZ_sefcontext.py'
Jan 31 07:00:25 compute-2 sudo[47612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:25 compute-2 python3.9[47614]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 31 07:00:26 compute-2 kernel: SELinux:  Converting 2753 SID table entries...
Jan 31 07:00:27 compute-2 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 07:00:27 compute-2 kernel: SELinux:  policy capability open_perms=1
Jan 31 07:00:27 compute-2 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 07:00:27 compute-2 kernel: SELinux:  policy capability always_check_network=0
Jan 31 07:00:27 compute-2 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 07:00:27 compute-2 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 07:00:27 compute-2 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 07:00:27 compute-2 sudo[47612]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:28 compute-2 python3.9[47769]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:00:28 compute-2 sudo[47925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndyfhvkefrjqasuxwtcbwdtvdiinvpxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842828.509924-322-180464143216979/AnsiballZ_dnf.py'
Jan 31 07:00:28 compute-2 dbus-broker-launch[793]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 31 07:00:28 compute-2 sudo[47925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:28 compute-2 python3.9[47927]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:00:30 compute-2 sudo[47925]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:31 compute-2 sudo[48078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oonxycosxnvmodxirndwjowepkojacre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842830.6534657-346-10582589897348/AnsiballZ_command.py'
Jan 31 07:00:31 compute-2 sudo[48078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:31 compute-2 python3.9[48080]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:00:31 compute-2 sudo[48078]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:32 compute-2 sudo[48365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivhltunannpdvqkxqndgilyqotccqisj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842832.1145365-370-228449086032164/AnsiballZ_file.py'
Jan 31 07:00:32 compute-2 sudo[48365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:32 compute-2 python3.9[48367]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 31 07:00:32 compute-2 sudo[48365]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:33 compute-2 python3.9[48517]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:00:33 compute-2 sudo[48669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kafofhlztwotxbibajnedkpefyobhlli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842833.6745815-418-95517861966382/AnsiballZ_dnf.py'
Jan 31 07:00:33 compute-2 sudo[48669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:34 compute-2 python3.9[48671]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:00:36 compute-2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 07:00:36 compute-2 systemd[1]: Starting man-db-cache-update.service...
Jan 31 07:00:36 compute-2 systemd[1]: Reloading.
Jan 31 07:00:36 compute-2 systemd-sysv-generator[48708]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:00:36 compute-2 systemd-rc-local-generator[48702]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:00:36 compute-2 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 07:00:37 compute-2 sudo[48669]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:37 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 07:00:37 compute-2 systemd[1]: Finished man-db-cache-update.service.
Jan 31 07:00:37 compute-2 systemd[1]: run-r5fcfe03619ee4bafa5fd28d13f668749.service: Deactivated successfully.
Jan 31 07:00:37 compute-2 sudo[48985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpuqktnnfcnejdrmqlyutqlfxrwqdrlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842837.1794927-442-263664979152287/AnsiballZ_systemd.py'
Jan 31 07:00:37 compute-2 sudo[48985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:37 compute-2 python3.9[48987]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:00:37 compute-2 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 31 07:00:37 compute-2 systemd[1]: Stopped Network Manager Wait Online.
Jan 31 07:00:37 compute-2 systemd[1]: Stopping Network Manager Wait Online...
Jan 31 07:00:37 compute-2 systemd[1]: Stopping Network Manager...
Jan 31 07:00:37 compute-2 NetworkManager[7204]: <info>  [1769842837.7364] caught SIGTERM, shutting down normally.
Jan 31 07:00:37 compute-2 NetworkManager[7204]: <info>  [1769842837.7380] dhcp4 (eth0): canceled DHCP transaction
Jan 31 07:00:37 compute-2 NetworkManager[7204]: <info>  [1769842837.7380] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 07:00:37 compute-2 NetworkManager[7204]: <info>  [1769842837.7380] dhcp4 (eth0): state changed no lease
Jan 31 07:00:37 compute-2 NetworkManager[7204]: <info>  [1769842837.7382] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 07:00:37 compute-2 NetworkManager[7204]: <info>  [1769842837.7450] exiting (success)
Jan 31 07:00:37 compute-2 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 07:00:37 compute-2 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 07:00:37 compute-2 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 31 07:00:37 compute-2 systemd[1]: Stopped Network Manager.
Jan 31 07:00:37 compute-2 systemd[1]: NetworkManager.service: Consumed 21.473s CPU time, 4.4M memory peak, read 0B from disk, written 31.0K to disk.
Jan 31 07:00:37 compute-2 systemd[1]: Starting Network Manager...
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.7892] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:0cf93558-72ef-4562-a65b-1a5f29acc8ec)
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.7895] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.7937] manager[0x5571e52be000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 07:00:37 compute-2 systemd[1]: Starting Hostname Service...
Jan 31 07:00:37 compute-2 systemd[1]: Started Hostname Service.
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8533] hostname: hostname: using hostnamed
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8534] hostname: static hostname changed from (none) to "compute-2"
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8539] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8543] manager[0x5571e52be000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8544] manager[0x5571e52be000]: rfkill: WWAN hardware radio set enabled
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8563] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8571] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8572] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8572] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8573] manager: Networking is enabled by state file
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8575] settings: Loaded settings plugin: keyfile (internal)
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8578] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8603] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8609] dhcp: init: Using DHCP client 'internal'
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8611] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8616] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8619] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8627] device (lo): Activation: starting connection 'lo' (300d976d-4218-4e05-b077-90422be37423)
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8633] device (eth0): carrier: link connected
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8636] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8639] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8640] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8644] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8648] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8653] device (eth1): carrier: link connected
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8656] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8659] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (63d96858-969a-5d6a-a372-7ecf8e421040) (indicated)
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8660] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8663] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8668] device (eth1): Activation: starting connection 'ci-private-network' (63d96858-969a-5d6a-a372-7ecf8e421040)
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8673] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 07:00:37 compute-2 systemd[1]: Started Network Manager.
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8686] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8688] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8691] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8693] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8695] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8697] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8698] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8701] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8706] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8708] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8714] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8723] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8732] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8734] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8738] device (lo): Activation: successful, device activated.
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8744] dhcp4 (eth0): state changed new lease, address=38.129.56.169
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8749] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 07:00:37 compute-2 systemd[1]: Starting Network Manager Wait Online...
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8860] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8865] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8870] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8872] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8875] device (eth1): Activation: successful, device activated.
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8882] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8884] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8887] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8889] device (eth0): Activation: successful, device activated.
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8893] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 07:00:37 compute-2 NetworkManager[48999]: <info>  [1769842837.8895] manager: startup complete
Jan 31 07:00:37 compute-2 systemd[1]: Finished Network Manager Wait Online.
Jan 31 07:00:37 compute-2 sudo[48985]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:38 compute-2 sudo[49211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqoyfnahloegsbtqhlbznzutnlmmajmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842838.131879-466-113977432942562/AnsiballZ_dnf.py'
Jan 31 07:00:38 compute-2 sudo[49211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:38 compute-2 python3.9[49213]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:00:44 compute-2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 07:00:44 compute-2 systemd[1]: Starting man-db-cache-update.service...
Jan 31 07:00:44 compute-2 systemd[1]: Reloading.
Jan 31 07:00:44 compute-2 systemd-sysv-generator[49265]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:00:44 compute-2 systemd-rc-local-generator[49260]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:00:44 compute-2 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 07:00:45 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 07:00:45 compute-2 systemd[1]: Finished man-db-cache-update.service.
Jan 31 07:00:45 compute-2 systemd[1]: run-r895605dd05d84d379634dc2690be6150.service: Deactivated successfully.
Jan 31 07:00:45 compute-2 sudo[49211]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:46 compute-2 sudo[49673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahexxpfiicgddohjkfchocfzwwqmcxni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842845.9862847-503-218289515738441/AnsiballZ_stat.py'
Jan 31 07:00:46 compute-2 sudo[49673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:46 compute-2 python3.9[49675]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:00:46 compute-2 sudo[49673]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:47 compute-2 sudo[49825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaoxakklevcoujmkiciciodbgsyawjmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842846.6385856-529-40176031306614/AnsiballZ_ini_file.py'
Jan 31 07:00:47 compute-2 sudo[49825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:47 compute-2 python3.9[49827]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:47 compute-2 sudo[49825]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:47 compute-2 sudo[49979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlahqbizsveuogrylnfjyzqtqzrohwug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842847.556168-559-153243218006622/AnsiballZ_ini_file.py'
Jan 31 07:00:47 compute-2 sudo[49979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:47 compute-2 python3.9[49981]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:47 compute-2 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 07:00:47 compute-2 sudo[49979]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:48 compute-2 sudo[50131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnhfpuptxtjygvxeawuoztffacnacwyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842848.0946522-559-53903003169159/AnsiballZ_ini_file.py'
Jan 31 07:00:48 compute-2 sudo[50131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:48 compute-2 python3.9[50133]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:48 compute-2 sudo[50131]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:49 compute-2 sudo[50283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thlbalakqofaxuwnpfdhxbjhzdkjdayp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842848.7912745-604-122820065475827/AnsiballZ_ini_file.py'
Jan 31 07:00:49 compute-2 sudo[50283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:49 compute-2 python3.9[50285]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:49 compute-2 sudo[50283]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:49 compute-2 sudo[50435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxturdljqexjdsviamyptgvubhkayfbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842849.3535438-604-192594634137998/AnsiballZ_ini_file.py'
Jan 31 07:00:49 compute-2 sudo[50435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:49 compute-2 python3.9[50437]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:49 compute-2 sudo[50435]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:50 compute-2 sudo[50587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqzhkvipcqxwpprigogvyupewnobyiab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842849.9285975-649-128692874400285/AnsiballZ_stat.py'
Jan 31 07:00:50 compute-2 sudo[50587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:50 compute-2 python3.9[50589]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:00:50 compute-2 sudo[50587]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:50 compute-2 sudo[50710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkeqdzhautycyrfysgylipdwafipmauo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842849.9285975-649-128692874400285/AnsiballZ_copy.py'
Jan 31 07:00:50 compute-2 sudo[50710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:51 compute-2 python3.9[50712]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842849.9285975-649-128692874400285/.source _original_basename=.4a6r6szf follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:51 compute-2 sudo[50710]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:51 compute-2 sudo[50862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrxncfclozsgvkxurkvzdwtjhcvlmufj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842851.2591932-695-54506312015995/AnsiballZ_file.py'
Jan 31 07:00:51 compute-2 sudo[50862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:51 compute-2 python3.9[50864]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:51 compute-2 sudo[50862]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:52 compute-2 sudo[51014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijiqnsdwueagouxitzxdhqwjtahqokqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842851.9306905-718-62907266860732/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 31 07:00:52 compute-2 sudo[51014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:52 compute-2 python3.9[51016]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 31 07:00:52 compute-2 sudo[51014]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:52 compute-2 sudo[51166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijnxrfsnzmfftipzkylghyewbphrmvem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842852.71703-745-241474468626494/AnsiballZ_file.py'
Jan 31 07:00:52 compute-2 sudo[51166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:53 compute-2 python3.9[51168]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:53 compute-2 sudo[51166]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:53 compute-2 sudo[51318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjyzpvttuxeqiwshgcqmurtgeyyxtxzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842853.5036-775-30147387871181/AnsiballZ_stat.py'
Jan 31 07:00:53 compute-2 sudo[51318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:54 compute-2 sudo[51318]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:54 compute-2 sudo[51441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vugrpnlbpdwhmqrvweyxfmqijiwdimuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842853.5036-775-30147387871181/AnsiballZ_copy.py'
Jan 31 07:00:54 compute-2 sudo[51441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:54 compute-2 sudo[51441]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:55 compute-2 sudo[51593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtwefoulysdgoooqnikzbeuuznyljrxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842854.733472-820-263590925467442/AnsiballZ_slurp.py'
Jan 31 07:00:55 compute-2 sudo[51593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:55 compute-2 python3.9[51595]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 31 07:00:55 compute-2 sudo[51593]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:56 compute-2 sudo[51768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjocogkeylfleltizbcqekoekbwmiomj ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842855.5781813-847-257893771319257/async_wrapper.py j159202543330 300 /home/zuul/.ansible/tmp/ansible-tmp-1769842855.5781813-847-257893771319257/AnsiballZ_edpm_os_net_config.py _'
Jan 31 07:00:56 compute-2 sudo[51768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:56 compute-2 ansible-async_wrapper.py[51770]: Invoked with j159202543330 300 /home/zuul/.ansible/tmp/ansible-tmp-1769842855.5781813-847-257893771319257/AnsiballZ_edpm_os_net_config.py _
Jan 31 07:00:56 compute-2 ansible-async_wrapper.py[51773]: Starting module and watcher
Jan 31 07:00:56 compute-2 ansible-async_wrapper.py[51773]: Start watching 51774 (300)
Jan 31 07:00:56 compute-2 ansible-async_wrapper.py[51774]: Start module (51774)
Jan 31 07:00:56 compute-2 ansible-async_wrapper.py[51770]: Return async_wrapper task started.
Jan 31 07:00:56 compute-2 sudo[51768]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:56 compute-2 python3.9[51775]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 31 07:00:57 compute-2 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 31 07:00:57 compute-2 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 31 07:00:57 compute-2 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 31 07:00:57 compute-2 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 31 07:00:57 compute-2 kernel: cfg80211: failed to load regulatory.db
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5036] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51776 uid=0 result="success"
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5050] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51776 uid=0 result="success"
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5584] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5587] audit: op="connection-add" uuid="143a8bd8-7b61-4785-954c-55bbe9144f3c" name="br-ex-br" pid=51776 uid=0 result="success"
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5608] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5612] audit: op="connection-add" uuid="811f715f-af65-4177-bebe-9bac21cddc65" name="br-ex-port" pid=51776 uid=0 result="success"
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5625] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5627] audit: op="connection-add" uuid="e9cab5b5-7433-484c-96bc-fbddfe61089d" name="eth1-port" pid=51776 uid=0 result="success"
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5639] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5641] audit: op="connection-add" uuid="979b8e09-db7c-454a-87af-fd8e4da02adf" name="vlan20-port" pid=51776 uid=0 result="success"
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5652] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5655] audit: op="connection-add" uuid="270cb257-72ad-4b58-b5d4-957133c784de" name="vlan21-port" pid=51776 uid=0 result="success"
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5668] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5669] audit: op="connection-add" uuid="67611492-0268-4774-b5ad-50338ef606b0" name="vlan22-port" pid=51776 uid=0 result="success"
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5685] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5686] audit: op="connection-add" uuid="b173ca67-04b0-47c0-9951-44c4a379d500" name="vlan23-port" pid=51776 uid=0 result="success"
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5706] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.timestamp,connection.autoconnect-priority,ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout,802-3-ethernet.mtu" pid=51776 uid=0 result="success"
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5721] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5723] audit: op="connection-add" uuid="0cc8e4ab-a9c3-4766-86dc-29b04bcf2d5c" name="br-ex-if" pid=51776 uid=0 result="success"
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5764] audit: op="connection-update" uuid="63d96858-969a-5d6a-a372-7ecf8e421040" name="ci-private-network" args="ipv4.addresses,ipv4.dns,ipv4.never-default,ipv4.routes,ipv4.method,ipv4.routing-rules,ovs-external-ids.data,connection.timestamp,connection.slave-type,connection.port-type,connection.controller,connection.master,ipv6.addresses,ipv6.dns,ipv6.routes,ipv6.routing-rules,ipv6.method,ipv6.addr-gen-mode,ovs-interface.type" pid=51776 uid=0 result="success"
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5779] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5781] audit: op="connection-add" uuid="b9405ef2-428a-4760-84c2-9faad6c0726e" name="vlan20-if" pid=51776 uid=0 result="success"
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5796] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5798] audit: op="connection-add" uuid="d30bd2bf-8ace-4702-8b63-59ee9a0c9472" name="vlan21-if" pid=51776 uid=0 result="success"
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5812] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5814] audit: op="connection-add" uuid="8570c89a-345c-4bfb-a2a7-abe4810ea04c" name="vlan22-if" pid=51776 uid=0 result="success"
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5828] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5830] audit: op="connection-add" uuid="41beb27c-a16b-4fb7-9081-7253c0ed8b91" name="vlan23-if" pid=51776 uid=0 result="success"
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5842] audit: op="connection-delete" uuid="2f00db0b-ca36-316b-8cc5-7f01c76d6ed5" name="Wired connection 1" pid=51776 uid=0 result="success"
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5853] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <warn>  [1769842858.5856] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5863] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5866] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (143a8bd8-7b61-4785-954c-55bbe9144f3c)
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5867] audit: op="connection-activate" uuid="143a8bd8-7b61-4785-954c-55bbe9144f3c" name="br-ex-br" pid=51776 uid=0 result="success"
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5868] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <warn>  [1769842858.5869] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5875] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5878] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (811f715f-af65-4177-bebe-9bac21cddc65)
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5880] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <warn>  [1769842858.5881] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5885] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5889] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (e9cab5b5-7433-484c-96bc-fbddfe61089d)
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5891] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <warn>  [1769842858.5893] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5898] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5902] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (979b8e09-db7c-454a-87af-fd8e4da02adf)
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5904] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <warn>  [1769842858.5904] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5910] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5914] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (270cb257-72ad-4b58-b5d4-957133c784de)
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5917] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <warn>  [1769842858.5918] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5922] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5925] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (67611492-0268-4774-b5ad-50338ef606b0)
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5927] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <warn>  [1769842858.5927] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5932] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5936] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (b173ca67-04b0-47c0-9951-44c4a379d500)
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5936] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5938] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5939] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5945] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <warn>  [1769842858.5946] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5948] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5952] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (0cc8e4ab-a9c3-4766-86dc-29b04bcf2d5c)
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5952] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5955] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5957] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5958] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5959] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5969] device (eth1): disconnecting for new activation request.
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5970] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5973] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5974] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5975] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5977] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <warn>  [1769842858.5979] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5981] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5985] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (b9405ef2-428a-4760-84c2-9faad6c0726e)
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5986] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5989] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5991] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5993] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5995] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <warn>  [1769842858.5996] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.5998] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6002] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (d30bd2bf-8ace-4702-8b63-59ee9a0c9472)
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6002] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6005] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6007] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6008] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6012] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <warn>  [1769842858.6013] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6016] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6021] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (8570c89a-345c-4bfb-a2a7-abe4810ea04c)
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6022] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6026] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6028] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6030] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6033] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <warn>  [1769842858.6034] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6036] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6040] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (41beb27c-a16b-4fb7-9081-7253c0ed8b91)
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6041] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6044] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6046] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6047] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6049] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6062] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,ipv6.method,ipv6.addr-gen-mode,802-3-ethernet.mtu" pid=51776 uid=0 result="success"
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6064] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6068] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6070] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6078] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6082] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6086] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6090] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6092] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6096] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6100] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 kernel: ovs-system: entered promiscuous mode
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6116] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6120] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 kernel: Timeout policy base is empty
Jan 31 07:00:58 compute-2 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6127] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6135] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6139] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6142] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6147] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6154] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 systemd-udevd[51781]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6168] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6170] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6175] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6179] dhcp4 (eth0): canceled DHCP transaction
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6179] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6179] dhcp4 (eth0): state changed no lease
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6181] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6190] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6193] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51776 uid=0 result="fail" reason="Device is not activated"
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6222] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6230] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6233] dhcp4 (eth0): state changed new lease, address=38.129.56.169
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6239] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 31 07:00:58 compute-2 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6329] device (eth1): disconnecting for new activation request.
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6333] audit: op="connection-activate" uuid="63d96858-969a-5d6a-a372-7ecf8e421040" name="ci-private-network" pid=51776 uid=0 result="success"
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6338] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.6359] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7078] device (eth1): Activation: starting connection 'ci-private-network' (63d96858-969a-5d6a-a372-7ecf8e421040)
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7082] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7083] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7084] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7091] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7092] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7094] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7095] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7096] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51776 uid=0 result="success"
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7104] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7107] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7112] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7115] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7119] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7122] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7125] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7127] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7130] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7134] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7136] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7139] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7142] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7145] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7148] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7151] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7156] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 kernel: br-ex: entered promiscuous mode
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7206] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7217] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7254] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7260] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7267] device (eth1): Activation: successful, device activated.
Jan 31 07:00:58 compute-2 kernel: vlan22: entered promiscuous mode
Jan 31 07:00:58 compute-2 systemd-udevd[51782]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7302] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7319] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7346] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7349] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7356] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 07:00:58 compute-2 kernel: vlan23: entered promiscuous mode
Jan 31 07:00:58 compute-2 kernel: vlan20: entered promiscuous mode
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7431] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7453] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 31 07:00:58 compute-2 kernel: vlan21: entered promiscuous mode
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7490] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7516] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7532] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7550] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7567] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7569] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7571] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7574] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7579] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7584] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7588] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7593] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7596] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7660] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7678] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7698] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7702] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-2 NetworkManager[48999]: <info>  [1769842858.7707] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 07:00:59 compute-2 NetworkManager[48999]: <info>  [1769842859.8855] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51776 uid=0 result="success"
Jan 31 07:00:59 compute-2 sudo[52131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaixrudzyacoexybxtdikwdfqdmmoowi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842859.5143142-847-56471331675932/AnsiballZ_async_status.py'
Jan 31 07:00:59 compute-2 sudo[52131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:00 compute-2 NetworkManager[48999]: <info>  [1769842860.0528] checkpoint[0x5571e5294950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 31 07:01:00 compute-2 NetworkManager[48999]: <info>  [1769842860.0530] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51776 uid=0 result="success"
Jan 31 07:01:00 compute-2 python3.9[52133]: ansible-ansible.legacy.async_status Invoked with jid=j159202543330.51770 mode=status _async_dir=/root/.ansible_async
Jan 31 07:01:00 compute-2 sudo[52131]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:00 compute-2 NetworkManager[48999]: <info>  [1769842860.3504] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51776 uid=0 result="success"
Jan 31 07:01:00 compute-2 NetworkManager[48999]: <info>  [1769842860.3515] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51776 uid=0 result="success"
Jan 31 07:01:00 compute-2 NetworkManager[48999]: <info>  [1769842860.5154] audit: op="networking-control" arg="global-dns-configuration" pid=51776 uid=0 result="success"
Jan 31 07:01:00 compute-2 NetworkManager[48999]: <info>  [1769842860.5181] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 31 07:01:00 compute-2 NetworkManager[48999]: <info>  [1769842860.5213] audit: op="networking-control" arg="global-dns-configuration" pid=51776 uid=0 result="success"
Jan 31 07:01:00 compute-2 NetworkManager[48999]: <info>  [1769842860.5463] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51776 uid=0 result="success"
Jan 31 07:01:00 compute-2 NetworkManager[48999]: <info>  [1769842860.6482] checkpoint[0x5571e5294a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 31 07:01:00 compute-2 NetworkManager[48999]: <info>  [1769842860.6487] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51776 uid=0 result="success"
Jan 31 07:01:00 compute-2 ansible-async_wrapper.py[51774]: Module complete (51774)
Jan 31 07:01:01 compute-2 ansible-async_wrapper.py[51773]: Done in kid B.
Jan 31 07:01:01 compute-2 CROND[52142]: (root) CMD (run-parts /etc/cron.hourly)
Jan 31 07:01:01 compute-2 run-parts[52145]: (/etc/cron.hourly) starting 0anacron
Jan 31 07:01:01 compute-2 anacron[52153]: Anacron started on 2026-01-31
Jan 31 07:01:01 compute-2 anacron[52153]: Will run job `cron.daily' in 48 min.
Jan 31 07:01:01 compute-2 anacron[52153]: Will run job `cron.weekly' in 68 min.
Jan 31 07:01:01 compute-2 anacron[52153]: Will run job `cron.monthly' in 88 min.
Jan 31 07:01:01 compute-2 anacron[52153]: Jobs will be executed sequentially
Jan 31 07:01:01 compute-2 run-parts[52155]: (/etc/cron.hourly) finished 0anacron
Jan 31 07:01:01 compute-2 CROND[52141]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 31 07:01:03 compute-2 sudo[52252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kphlaquztrmavkhqwlwsbidjywbyxlap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842859.5143142-847-56471331675932/AnsiballZ_async_status.py'
Jan 31 07:01:03 compute-2 sudo[52252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:03 compute-2 python3.9[52254]: ansible-ansible.legacy.async_status Invoked with jid=j159202543330.51770 mode=status _async_dir=/root/.ansible_async
Jan 31 07:01:03 compute-2 sudo[52252]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:03 compute-2 sudo[52351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hanjhqqgusuggbafswnyrzgzfmgvdhgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842859.5143142-847-56471331675932/AnsiballZ_async_status.py'
Jan 31 07:01:03 compute-2 sudo[52351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:04 compute-2 python3.9[52353]: ansible-ansible.legacy.async_status Invoked with jid=j159202543330.51770 mode=cleanup _async_dir=/root/.ansible_async
Jan 31 07:01:04 compute-2 sudo[52351]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:04 compute-2 sudo[52504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkbgrlnkjstziozjhjtbaxtdgrmqgtnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842864.3542292-928-214355450263434/AnsiballZ_stat.py'
Jan 31 07:01:04 compute-2 sudo[52504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:04 compute-2 python3.9[52506]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:01:04 compute-2 sudo[52504]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:05 compute-2 sudo[52627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpsigfdhezqqslhqddaeygspgfxyjutu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842864.3542292-928-214355450263434/AnsiballZ_copy.py'
Jan 31 07:01:05 compute-2 sudo[52627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:05 compute-2 python3.9[52629]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842864.3542292-928-214355450263434/.source.returncode _original_basename=.cw_z8xla follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:01:05 compute-2 sudo[52627]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:05 compute-2 sudo[52779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmhdsqmrjybnpzczcocxbasvarlfnbix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842865.554733-976-80002989201564/AnsiballZ_stat.py'
Jan 31 07:01:05 compute-2 sudo[52779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:05 compute-2 python3.9[52781]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:01:05 compute-2 sudo[52779]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:06 compute-2 sudo[52902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvimuhkagmikupjlcxessmkjallaqbjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842865.554733-976-80002989201564/AnsiballZ_copy.py'
Jan 31 07:01:06 compute-2 sudo[52902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:06 compute-2 python3.9[52904]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842865.554733-976-80002989201564/.source.cfg _original_basename=.el_2339d follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:01:06 compute-2 sudo[52902]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:07 compute-2 sudo[53055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkohxgpigthvnslgwwpyqbwjozkflnca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842866.7298312-1021-272218834791319/AnsiballZ_systemd.py'
Jan 31 07:01:07 compute-2 sudo[53055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:07 compute-2 python3.9[53057]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:01:07 compute-2 systemd[1]: Reloading Network Manager...
Jan 31 07:01:07 compute-2 NetworkManager[48999]: <info>  [1769842867.5207] audit: op="reload" arg="0" pid=53061 uid=0 result="success"
Jan 31 07:01:07 compute-2 NetworkManager[48999]: <info>  [1769842867.5215] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 31 07:01:07 compute-2 systemd[1]: Reloaded Network Manager.
Jan 31 07:01:07 compute-2 sudo[53055]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:07 compute-2 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 07:01:07 compute-2 sshd-session[44997]: Connection closed by 192.168.122.30 port 44398
Jan 31 07:01:07 compute-2 sshd-session[44994]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:01:07 compute-2 systemd[1]: session-11.scope: Deactivated successfully.
Jan 31 07:01:07 compute-2 systemd[1]: session-11.scope: Consumed 47.037s CPU time.
Jan 31 07:01:07 compute-2 systemd-logind[801]: Session 11 logged out. Waiting for processes to exit.
Jan 31 07:01:07 compute-2 systemd-logind[801]: Removed session 11.
Jan 31 07:01:13 compute-2 sshd-session[53093]: Accepted publickey for zuul from 192.168.122.30 port 50434 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:01:13 compute-2 systemd-logind[801]: New session 12 of user zuul.
Jan 31 07:01:13 compute-2 systemd[1]: Started Session 12 of User zuul.
Jan 31 07:01:13 compute-2 sshd-session[53093]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:01:14 compute-2 python3.9[53247]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:01:15 compute-2 python3.9[53401]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:01:16 compute-2 python3.9[53595]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:01:17 compute-2 sshd-session[53096]: Connection closed by 192.168.122.30 port 50434
Jan 31 07:01:17 compute-2 sshd-session[53093]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:01:17 compute-2 systemd[1]: session-12.scope: Deactivated successfully.
Jan 31 07:01:17 compute-2 systemd[1]: session-12.scope: Consumed 1.853s CPU time.
Jan 31 07:01:17 compute-2 systemd-logind[801]: Session 12 logged out. Waiting for processes to exit.
Jan 31 07:01:17 compute-2 systemd-logind[801]: Removed session 12.
Jan 31 07:01:17 compute-2 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 07:01:24 compute-2 sshd-session[53624]: Accepted publickey for zuul from 192.168.122.30 port 56814 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:01:24 compute-2 systemd-logind[801]: New session 13 of user zuul.
Jan 31 07:01:24 compute-2 systemd[1]: Started Session 13 of User zuul.
Jan 31 07:01:24 compute-2 sshd-session[53624]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:01:25 compute-2 python3.9[53777]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:01:26 compute-2 python3.9[53931]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:01:26 compute-2 sudo[54086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfuaajbyxgdrrgzkepmpkgkosycnzays ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842886.6066825-82-50558797092128/AnsiballZ_setup.py'
Jan 31 07:01:26 compute-2 sudo[54086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:27 compute-2 python3.9[54088]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:01:27 compute-2 sudo[54086]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:27 compute-2 sudo[54170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aghdngbwpouvtsysdlnyucsdcbcmkhpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842886.6066825-82-50558797092128/AnsiballZ_dnf.py'
Jan 31 07:01:27 compute-2 sudo[54170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:27 compute-2 python3.9[54172]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:01:29 compute-2 sudo[54170]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:29 compute-2 sudo[54324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smimruwyetfxoefmuakbxesuohcvtzrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842889.531512-119-168689517539879/AnsiballZ_setup.py'
Jan 31 07:01:29 compute-2 sudo[54324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:30 compute-2 python3.9[54326]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:01:30 compute-2 sudo[54324]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:31 compute-2 sudo[54519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkyxqcrdjmfsnmuuoarfbyvutisnggvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842890.7405696-151-141202283766529/AnsiballZ_file.py'
Jan 31 07:01:31 compute-2 sudo[54519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:31 compute-2 python3.9[54521]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:01:31 compute-2 sudo[54519]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:31 compute-2 sudo[54671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlozhxdrsezoglpcqptnpszlkunvucvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842891.5987985-175-263959831527713/AnsiballZ_command.py'
Jan 31 07:01:31 compute-2 sudo[54671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:32 compute-2 python3.9[54673]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:01:32 compute-2 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck4109290435-merged.mount: Deactivated successfully.
Jan 31 07:01:32 compute-2 podman[54674]: 2026-01-31 07:01:32.248830057 +0000 UTC m=+0.049812456 system refresh
Jan 31 07:01:32 compute-2 sudo[54671]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:32 compute-2 sudo[54835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reizquywckbdvbnnbgwtizopeptbfols ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842892.474504-199-111714457103379/AnsiballZ_stat.py'
Jan 31 07:01:32 compute-2 sudo[54835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:33 compute-2 python3.9[54837]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:01:33 compute-2 sudo[54835]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:33 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:01:33 compute-2 sudo[54958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfywwauvwblcsdppodbtlxfhnzypjkfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842892.474504-199-111714457103379/AnsiballZ_copy.py'
Jan 31 07:01:33 compute-2 sudo[54958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:33 compute-2 python3.9[54960]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842892.474504-199-111714457103379/.source.json follow=False _original_basename=podman_network_config.j2 checksum=ecfd0c70021b29c1424be3549b8bdda9d53f6f17 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:01:33 compute-2 sudo[54958]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:34 compute-2 sudo[55110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktlhdmmnoxttopeibarivxnmjtdjvswa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842894.1150646-244-267824959919538/AnsiballZ_stat.py'
Jan 31 07:01:34 compute-2 sudo[55110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:34 compute-2 python3.9[55112]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:01:34 compute-2 sudo[55110]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:34 compute-2 sudo[55233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwkxrcwdntwrqqibrwvmvqamjlkihwsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842894.1150646-244-267824959919538/AnsiballZ_copy.py'
Jan 31 07:01:34 compute-2 sudo[55233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:34 compute-2 python3.9[55235]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842894.1150646-244-267824959919538/.source.conf follow=False _original_basename=registries.conf.j2 checksum=a4fd3ca7d18166099562a65af8d6da655db34efc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:01:34 compute-2 sudo[55233]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:35 compute-2 sudo[55385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrisytgifreiozsljcqxnbocsfcwaral ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842895.3996146-292-133046862526393/AnsiballZ_ini_file.py'
Jan 31 07:01:35 compute-2 sudo[55385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:35 compute-2 python3.9[55387]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:01:35 compute-2 sudo[55385]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:36 compute-2 sudo[55537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exkdoggtztikrcuiafbaemglfitgrvld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842896.1120217-292-45567058617766/AnsiballZ_ini_file.py'
Jan 31 07:01:36 compute-2 sudo[55537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:36 compute-2 python3.9[55539]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:01:36 compute-2 sudo[55537]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:36 compute-2 sudo[55689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzjtriohfiuzonwzjnfwnvpigsuycaoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842896.6111393-292-218270555556425/AnsiballZ_ini_file.py'
Jan 31 07:01:36 compute-2 sudo[55689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:36 compute-2 python3.9[55691]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:01:36 compute-2 sudo[55689]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:37 compute-2 sudo[55841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufyevuqmprfxnkdohqwmkgygfjcmqiap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842897.0865345-292-211631663402108/AnsiballZ_ini_file.py'
Jan 31 07:01:37 compute-2 sudo[55841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:37 compute-2 python3.9[55843]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:01:37 compute-2 sudo[55841]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:38 compute-2 sudo[55993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbyasbgzgktayxxnnwnardqqnjnnyxpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842897.973124-385-223677485595937/AnsiballZ_dnf.py'
Jan 31 07:01:38 compute-2 sudo[55993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:38 compute-2 python3.9[55995]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:01:40 compute-2 sudo[55993]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:40 compute-2 sudo[56146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcnxhtawybasqtzsizmdkeomoclhwsdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842900.5184922-418-207036032775336/AnsiballZ_setup.py'
Jan 31 07:01:40 compute-2 sudo[56146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:41 compute-2 python3.9[56148]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:01:41 compute-2 sudo[56146]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:41 compute-2 sudo[56300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oretccuetaakgkuaaktupnzkuqsftjrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842901.2425003-443-272944560124600/AnsiballZ_stat.py'
Jan 31 07:01:41 compute-2 sudo[56300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:41 compute-2 python3.9[56302]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:01:41 compute-2 sudo[56300]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:42 compute-2 sudo[56452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcdcmzyjqvbdthyjqyojajmpenxxehfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842901.9610572-469-156896436219893/AnsiballZ_stat.py'
Jan 31 07:01:42 compute-2 sudo[56452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:42 compute-2 python3.9[56454]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:01:42 compute-2 sudo[56452]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:42 compute-2 sudo[56604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcibfokjrrlylilgequmbgcexmyukmmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842902.7159004-499-47922623410311/AnsiballZ_command.py'
Jan 31 07:01:42 compute-2 sudo[56604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:43 compute-2 python3.9[56606]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:01:43 compute-2 sudo[56604]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:43 compute-2 sudo[56757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxqmxvzgajwhtnntbinspbykzoqbnedg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842903.472963-530-144248402656530/AnsiballZ_service_facts.py'
Jan 31 07:01:43 compute-2 sudo[56757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:44 compute-2 python3.9[56759]: ansible-service_facts Invoked
Jan 31 07:01:44 compute-2 network[56776]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 07:01:44 compute-2 network[56777]: 'network-scripts' will be removed from distribution in near future.
Jan 31 07:01:44 compute-2 network[56778]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 07:01:45 compute-2 irqbalance[800]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 31 07:01:45 compute-2 irqbalance[800]: IRQ 26 affinity is now unmanaged
Jan 31 07:01:45 compute-2 sudo[56757]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:47 compute-2 sudo[57061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wykzgrkdquynznbyhigziysftktlbobw ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769842907.3941047-574-232425342636223/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769842907.3941047-574-232425342636223/args'
Jan 31 07:01:47 compute-2 sudo[57061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:47 compute-2 sudo[57061]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:48 compute-2 sudo[57228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doedhzwogpqywdhnhunqpevtdeqgtldf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842908.0672464-608-69829887040463/AnsiballZ_dnf.py'
Jan 31 07:01:48 compute-2 sudo[57228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:48 compute-2 python3.9[57230]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:01:49 compute-2 sudo[57228]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:51 compute-2 sudo[57381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpvdrxbruwwdzfnybwsatotyplidkxdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842910.5025153-647-147440104850759/AnsiballZ_package_facts.py'
Jan 31 07:01:51 compute-2 sudo[57381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:51 compute-2 python3.9[57383]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 31 07:01:51 compute-2 sudo[57381]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:52 compute-2 sudo[57533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brkweoslkovqxziegsvjwpjdtifqwvtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842912.3900423-677-151712769471387/AnsiballZ_stat.py'
Jan 31 07:01:52 compute-2 sudo[57533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:52 compute-2 python3.9[57535]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:01:52 compute-2 sudo[57533]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:53 compute-2 sudo[57658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvjgkwhzfjnmkotukhqqrjoyuafckcjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842912.3900423-677-151712769471387/AnsiballZ_copy.py'
Jan 31 07:01:53 compute-2 sudo[57658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:53 compute-2 python3.9[57660]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842912.3900423-677-151712769471387/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:01:53 compute-2 sudo[57658]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:53 compute-2 sudo[57812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcncgtrphzjovavwdaprukikchvcbtpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842913.6225224-724-84309945234069/AnsiballZ_stat.py'
Jan 31 07:01:53 compute-2 sudo[57812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:54 compute-2 python3.9[57814]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:01:54 compute-2 sudo[57812]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:54 compute-2 sudo[57937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohskunwriaztcijckmehierhajdmuhfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842913.6225224-724-84309945234069/AnsiballZ_copy.py'
Jan 31 07:01:54 compute-2 sudo[57937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:54 compute-2 python3.9[57939]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842913.6225224-724-84309945234069/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:01:54 compute-2 sudo[57937]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:56 compute-2 sudo[58091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-patdrmygmqpknsutvyaxlnhwafmfkpac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842915.775445-787-45139933270124/AnsiballZ_lineinfile.py'
Jan 31 07:01:56 compute-2 sudo[58091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:56 compute-2 python3.9[58093]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:01:56 compute-2 sudo[58091]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:57 compute-2 sudo[58245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odziqusejzvumhujufftdcfutdugonzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842917.4962647-831-65897109432045/AnsiballZ_setup.py'
Jan 31 07:01:57 compute-2 sudo[58245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:58 compute-2 python3.9[58247]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:01:58 compute-2 sudo[58245]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:58 compute-2 sudo[58329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjjtaunoscropstmiipsaachfeterwdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842917.4962647-831-65897109432045/AnsiballZ_systemd.py'
Jan 31 07:01:58 compute-2 sudo[58329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:59 compute-2 python3.9[58331]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:01:59 compute-2 sudo[58329]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:00 compute-2 sudo[58483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qitcaxvpwhenfynddpoociqzdiqdrblp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842920.1028984-878-65864574195037/AnsiballZ_setup.py'
Jan 31 07:02:00 compute-2 sudo[58483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:00 compute-2 python3.9[58485]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:02:00 compute-2 sudo[58483]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:01 compute-2 sudo[58567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnffzwhjdzlebakygbjuamzcjpxpafxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842920.1028984-878-65864574195037/AnsiballZ_systemd.py'
Jan 31 07:02:01 compute-2 sudo[58567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:01 compute-2 python3.9[58569]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:02:01 compute-2 chronyd[829]: chronyd exiting
Jan 31 07:02:01 compute-2 systemd[1]: Stopping NTP client/server...
Jan 31 07:02:01 compute-2 systemd[1]: chronyd.service: Deactivated successfully.
Jan 31 07:02:01 compute-2 systemd[1]: Stopped NTP client/server.
Jan 31 07:02:01 compute-2 systemd[1]: Starting NTP client/server...
Jan 31 07:02:01 compute-2 chronyd[58578]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 31 07:02:01 compute-2 chronyd[58578]: Frequency -26.730 +/- 0.132 ppm read from /var/lib/chrony/drift
Jan 31 07:02:01 compute-2 chronyd[58578]: Loaded seccomp filter (level 2)
Jan 31 07:02:01 compute-2 systemd[1]: Started NTP client/server.
Jan 31 07:02:01 compute-2 sudo[58567]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:01 compute-2 sshd-session[53627]: Connection closed by 192.168.122.30 port 56814
Jan 31 07:02:01 compute-2 sshd-session[53624]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:02:01 compute-2 systemd[1]: session-13.scope: Deactivated successfully.
Jan 31 07:02:01 compute-2 systemd[1]: session-13.scope: Consumed 21.454s CPU time.
Jan 31 07:02:01 compute-2 systemd-logind[801]: Session 13 logged out. Waiting for processes to exit.
Jan 31 07:02:01 compute-2 systemd-logind[801]: Removed session 13.
Jan 31 07:02:07 compute-2 sshd-session[58604]: Accepted publickey for zuul from 192.168.122.30 port 54820 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:02:07 compute-2 systemd-logind[801]: New session 14 of user zuul.
Jan 31 07:02:07 compute-2 systemd[1]: Started Session 14 of User zuul.
Jan 31 07:02:07 compute-2 sshd-session[58604]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:02:08 compute-2 sudo[58757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpjomvgrcgyjrxvlhznnuakpyiirfamt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842927.7564006-28-141157589361916/AnsiballZ_file.py'
Jan 31 07:02:08 compute-2 sudo[58757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:08 compute-2 python3.9[58759]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:08 compute-2 sudo[58757]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:08 compute-2 sudo[58909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alutudfnpyxnxdeesvgkqgzisouhpewe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842928.5617683-64-85973499222493/AnsiballZ_stat.py'
Jan 31 07:02:08 compute-2 sudo[58909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:09 compute-2 python3.9[58911]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:09 compute-2 sudo[58909]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:09 compute-2 sudo[59032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewhmchktrbdujdykadjpwxckfwqdqzco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842928.5617683-64-85973499222493/AnsiballZ_copy.py'
Jan 31 07:02:09 compute-2 sudo[59032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:09 compute-2 python3.9[59034]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842928.5617683-64-85973499222493/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:09 compute-2 sudo[59032]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:10 compute-2 sshd-session[58607]: Connection closed by 192.168.122.30 port 54820
Jan 31 07:02:10 compute-2 sshd-session[58604]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:02:10 compute-2 systemd[1]: session-14.scope: Deactivated successfully.
Jan 31 07:02:10 compute-2 systemd[1]: session-14.scope: Consumed 1.216s CPU time.
Jan 31 07:02:10 compute-2 systemd-logind[801]: Session 14 logged out. Waiting for processes to exit.
Jan 31 07:02:10 compute-2 systemd-logind[801]: Removed session 14.
Jan 31 07:02:15 compute-2 sshd-session[59059]: Accepted publickey for zuul from 192.168.122.30 port 54828 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:02:15 compute-2 systemd-logind[801]: New session 15 of user zuul.
Jan 31 07:02:15 compute-2 systemd[1]: Started Session 15 of User zuul.
Jan 31 07:02:15 compute-2 sshd-session[59059]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:02:16 compute-2 python3.9[59212]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:02:17 compute-2 sudo[59366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyfeqsnohwguzaqqyujtjsnfhxdpinwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842937.1650486-61-70345808625708/AnsiballZ_file.py'
Jan 31 07:02:17 compute-2 sudo[59366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:17 compute-2 python3.9[59368]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:17 compute-2 sudo[59366]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:18 compute-2 sudo[59541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmlnuzrpsknguoozymybhinaslzazdeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842937.96596-85-32671083950807/AnsiballZ_stat.py'
Jan 31 07:02:18 compute-2 sudo[59541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:18 compute-2 python3.9[59543]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:18 compute-2 sudo[59541]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:19 compute-2 sudo[59664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqrmxmjfryzzktajclxbtfdgluhpghzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842937.96596-85-32671083950807/AnsiballZ_copy.py'
Jan 31 07:02:19 compute-2 sudo[59664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:19 compute-2 python3.9[59666]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769842937.96596-85-32671083950807/.source.json _original_basename=.f20gumsd follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:19 compute-2 sudo[59664]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:19 compute-2 sudo[59816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erzebqlxbgvmbubcvfbjlocdlobxugin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842939.783286-154-134588018460031/AnsiballZ_stat.py'
Jan 31 07:02:19 compute-2 sudo[59816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:20 compute-2 python3.9[59818]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:20 compute-2 sudo[59816]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:20 compute-2 sudo[59939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcxujjlrurnxvqdxhmlucgvjxrhmvzii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842939.783286-154-134588018460031/AnsiballZ_copy.py'
Jan 31 07:02:20 compute-2 sudo[59939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:20 compute-2 python3.9[59941]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842939.783286-154-134588018460031/.source _original_basename=.lp4ci_mx follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:20 compute-2 sudo[59939]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:21 compute-2 sudo[60091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvqyvislxxvdwzdnsdhvzihcgpeotsso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842941.4227216-203-92243447937239/AnsiballZ_file.py'
Jan 31 07:02:21 compute-2 sudo[60091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:21 compute-2 python3.9[60093]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:02:21 compute-2 sudo[60091]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:22 compute-2 sudo[60243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwjwiljyyqbrklczxgqnlsyelbunxhhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842942.0531685-227-33401120224724/AnsiballZ_stat.py'
Jan 31 07:02:22 compute-2 sudo[60243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:22 compute-2 python3.9[60245]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:22 compute-2 sudo[60243]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:22 compute-2 sudo[60366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irnitmrldghnbdfgqkzhtvksghwnihfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842942.0531685-227-33401120224724/AnsiballZ_copy.py'
Jan 31 07:02:22 compute-2 sudo[60366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:22 compute-2 python3.9[60368]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842942.0531685-227-33401120224724/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:02:22 compute-2 sudo[60366]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:23 compute-2 sudo[60518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwfcozsbczijqtxsxzcyxtcjxtgqidme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842943.0825624-227-277775944919369/AnsiballZ_stat.py'
Jan 31 07:02:23 compute-2 sudo[60518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:23 compute-2 python3.9[60520]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:23 compute-2 sudo[60518]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:23 compute-2 sudo[60641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ardwqtlatdsijuaknasjkovyvbwitvlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842943.0825624-227-277775944919369/AnsiballZ_copy.py'
Jan 31 07:02:23 compute-2 sudo[60641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:23 compute-2 python3.9[60643]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842943.0825624-227-277775944919369/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:02:23 compute-2 sudo[60641]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:24 compute-2 sudo[60793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkjcvimqtbnwhzwabyrktxmguytgbqwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842944.263327-314-204306157148821/AnsiballZ_file.py'
Jan 31 07:02:24 compute-2 sudo[60793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:24 compute-2 python3.9[60795]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:24 compute-2 sudo[60793]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:25 compute-2 sudo[60945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqmdizboduofrjehclkswmnbyupqibhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842944.8889558-337-43046636404389/AnsiballZ_stat.py'
Jan 31 07:02:25 compute-2 sudo[60945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:25 compute-2 python3.9[60947]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:25 compute-2 sudo[60945]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:25 compute-2 sudo[61068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aexfevxkccgybqaxfiijhuymcygdtmxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842944.8889558-337-43046636404389/AnsiballZ_copy.py'
Jan 31 07:02:25 compute-2 sudo[61068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:25 compute-2 python3.9[61070]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842944.8889558-337-43046636404389/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:25 compute-2 sudo[61068]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:26 compute-2 sudo[61220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqdgcnuddovzotvgkfbrrbicyrrphrzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842946.1828525-382-17351041790633/AnsiballZ_stat.py'
Jan 31 07:02:26 compute-2 sudo[61220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:26 compute-2 python3.9[61222]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:26 compute-2 sudo[61220]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:26 compute-2 sudo[61343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wisuzyvmzxpvyszttdwrhgppnogkxcev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842946.1828525-382-17351041790633/AnsiballZ_copy.py'
Jan 31 07:02:26 compute-2 sudo[61343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:27 compute-2 python3.9[61345]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842946.1828525-382-17351041790633/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:27 compute-2 sudo[61343]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:27 compute-2 sudo[61495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbjylkyrwyyelygtevptyjbbucyfkjlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842947.3841298-427-188326283538177/AnsiballZ_systemd.py'
Jan 31 07:02:27 compute-2 sudo[61495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:28 compute-2 python3.9[61497]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:02:28 compute-2 systemd[1]: Reloading.
Jan 31 07:02:28 compute-2 systemd-sysv-generator[61527]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:02:28 compute-2 systemd-rc-local-generator[61524]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:02:28 compute-2 systemd[1]: Reloading.
Jan 31 07:02:28 compute-2 systemd-sysv-generator[61562]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:02:28 compute-2 systemd-rc-local-generator[61554]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:02:28 compute-2 systemd[1]: Starting EDPM Container Shutdown...
Jan 31 07:02:28 compute-2 systemd[1]: Finished EDPM Container Shutdown.
Jan 31 07:02:28 compute-2 sudo[61495]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:29 compute-2 sudo[61723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyolrvgtrtlllpvwahwjetnxqtlumkaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842948.904216-453-148407913435088/AnsiballZ_stat.py'
Jan 31 07:02:29 compute-2 sudo[61723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:29 compute-2 python3.9[61725]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:29 compute-2 sudo[61723]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:29 compute-2 sudo[61846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcwjyroyofnbbtfpernwipdvgxzfxiyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842948.904216-453-148407913435088/AnsiballZ_copy.py'
Jan 31 07:02:29 compute-2 sudo[61846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:29 compute-2 python3.9[61848]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842948.904216-453-148407913435088/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:29 compute-2 sudo[61846]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:30 compute-2 sudo[61998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzoteapnnekshtbvmkrbofppflbngqqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842950.1918538-496-223946220499373/AnsiballZ_stat.py'
Jan 31 07:02:30 compute-2 sudo[61998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:30 compute-2 python3.9[62000]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:30 compute-2 sudo[61998]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:30 compute-2 sudo[62121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alxlyjrmpscccpcctfjegmwlhkhvoorh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842950.1918538-496-223946220499373/AnsiballZ_copy.py'
Jan 31 07:02:30 compute-2 sudo[62121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:31 compute-2 python3.9[62123]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842950.1918538-496-223946220499373/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:31 compute-2 sudo[62121]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:31 compute-2 sudo[62273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyqybscaxklhlwmmwfwrhgfoluarvpva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842951.3199449-541-166838154213656/AnsiballZ_systemd.py'
Jan 31 07:02:31 compute-2 sudo[62273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:31 compute-2 python3.9[62275]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:02:31 compute-2 systemd[1]: Reloading.
Jan 31 07:02:31 compute-2 systemd-sysv-generator[62301]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:02:31 compute-2 systemd-rc-local-generator[62298]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:02:32 compute-2 systemd[1]: Reloading.
Jan 31 07:02:32 compute-2 systemd-sysv-generator[62340]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:02:32 compute-2 systemd-rc-local-generator[62337]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:02:32 compute-2 systemd[1]: Starting Create netns directory...
Jan 31 07:02:32 compute-2 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 07:02:32 compute-2 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 07:02:32 compute-2 systemd[1]: Finished Create netns directory.
Jan 31 07:02:32 compute-2 sudo[62273]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:33 compute-2 python3.9[62501]: ansible-ansible.builtin.service_facts Invoked
Jan 31 07:02:33 compute-2 network[62518]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 07:02:33 compute-2 network[62519]: 'network-scripts' will be removed from distribution in near future.
Jan 31 07:02:33 compute-2 network[62520]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 07:02:35 compute-2 sudo[62780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-allmnglopwctewftrfeefanwokwuazlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842955.6932867-589-195879847696554/AnsiballZ_systemd.py'
Jan 31 07:02:35 compute-2 sudo[62780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:36 compute-2 python3.9[62782]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:02:36 compute-2 systemd[1]: Reloading.
Jan 31 07:02:36 compute-2 systemd-rc-local-generator[62803]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:02:36 compute-2 systemd-sysv-generator[62812]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:02:36 compute-2 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 31 07:02:36 compute-2 iptables.init[62822]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 31 07:02:36 compute-2 iptables.init[62822]: iptables: Flushing firewall rules: [  OK  ]
Jan 31 07:02:36 compute-2 systemd[1]: iptables.service: Deactivated successfully.
Jan 31 07:02:36 compute-2 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 31 07:02:36 compute-2 sudo[62780]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:37 compute-2 sudo[63016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cavadfzvpqlzqjmogtkxzqaovpdyxcrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842956.8774328-589-87824964352838/AnsiballZ_systemd.py'
Jan 31 07:02:37 compute-2 sudo[63016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:37 compute-2 python3.9[63018]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:02:37 compute-2 sudo[63016]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:38 compute-2 sudo[63170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqwfnlkffbaclalgzshodqcnrfkhozbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842958.4952924-638-233397931969535/AnsiballZ_systemd.py'
Jan 31 07:02:38 compute-2 sudo[63170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:38 compute-2 python3.9[63172]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:02:39 compute-2 systemd[1]: Reloading.
Jan 31 07:02:39 compute-2 systemd-sysv-generator[63204]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:02:39 compute-2 systemd-rc-local-generator[63200]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:02:39 compute-2 systemd[1]: Starting Netfilter Tables...
Jan 31 07:02:39 compute-2 systemd[1]: Finished Netfilter Tables.
Jan 31 07:02:39 compute-2 sudo[63170]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:39 compute-2 sudo[63362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elpzoaitdjmctswjuothtdgzsgawbkda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842959.5495424-662-41706143007457/AnsiballZ_command.py'
Jan 31 07:02:39 compute-2 sudo[63362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:40 compute-2 python3.9[63364]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:02:40 compute-2 sudo[63362]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:41 compute-2 sudo[63515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxgqneoyobvhqsgrzcubqdmtseyirxft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842960.7381504-704-7262111342617/AnsiballZ_stat.py'
Jan 31 07:02:41 compute-2 sudo[63515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:41 compute-2 python3.9[63517]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:41 compute-2 sudo[63515]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:41 compute-2 sudo[63640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwxmycdxozbrjeeubcsncllcfuylzijg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842960.7381504-704-7262111342617/AnsiballZ_copy.py'
Jan 31 07:02:41 compute-2 sudo[63640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:41 compute-2 python3.9[63642]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842960.7381504-704-7262111342617/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:41 compute-2 sudo[63640]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:42 compute-2 sudo[63793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxdgvkxhmhyeprydssiwceaqcrqtqpcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842961.9042656-749-267438296338964/AnsiballZ_systemd.py'
Jan 31 07:02:42 compute-2 sudo[63793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:42 compute-2 python3.9[63795]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:02:42 compute-2 systemd[1]: Reloading OpenSSH server daemon...
Jan 31 07:02:42 compute-2 sshd[1006]: Received SIGHUP; restarting.
Jan 31 07:02:42 compute-2 systemd[1]: Reloaded OpenSSH server daemon.
Jan 31 07:02:42 compute-2 sshd[1006]: Server listening on 0.0.0.0 port 22.
Jan 31 07:02:42 compute-2 sshd[1006]: Server listening on :: port 22.
Jan 31 07:02:42 compute-2 sudo[63793]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:42 compute-2 sudo[63949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqnmfzsdwtohpruxcbrcxxdprevzileo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842962.6897583-773-109934747760064/AnsiballZ_file.py'
Jan 31 07:02:42 compute-2 sudo[63949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:43 compute-2 python3.9[63951]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:43 compute-2 sudo[63949]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:43 compute-2 sudo[64101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xugufrfgpybppitcnonnezszwjtitotn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842963.377009-797-93533706121156/AnsiballZ_stat.py'
Jan 31 07:02:43 compute-2 sudo[64101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:43 compute-2 python3.9[64103]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:43 compute-2 sudo[64101]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:44 compute-2 sudo[64224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzawbehhwjftoxdeffejzzaqgwdjdvca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842963.377009-797-93533706121156/AnsiballZ_copy.py'
Jan 31 07:02:44 compute-2 sudo[64224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:44 compute-2 python3.9[64226]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842963.377009-797-93533706121156/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:44 compute-2 sudo[64224]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:45 compute-2 sudo[64376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfbcbkzzxbrbeeglrmeomqjeylnzztev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842964.7967424-850-69919844708902/AnsiballZ_timezone.py'
Jan 31 07:02:45 compute-2 sudo[64376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:45 compute-2 python3.9[64378]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 07:02:45 compute-2 systemd[1]: Starting Time & Date Service...
Jan 31 07:02:45 compute-2 systemd[1]: Started Time & Date Service.
Jan 31 07:02:45 compute-2 sudo[64376]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:46 compute-2 sudo[64532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvyuxawxluycgktekdhtzofeiidtuokq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842965.844027-879-6169741107664/AnsiballZ_file.py'
Jan 31 07:02:46 compute-2 sudo[64532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:46 compute-2 python3.9[64534]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:46 compute-2 sudo[64532]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:46 compute-2 sudo[64684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtwxtjafsyjfjicnvovbeixouomkejiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842966.4364107-901-28217638423871/AnsiballZ_stat.py'
Jan 31 07:02:46 compute-2 sudo[64684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:46 compute-2 python3.9[64686]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:46 compute-2 sudo[64684]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:47 compute-2 sudo[64807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axumiksudlucijwnvervygtuftrntyea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842966.4364107-901-28217638423871/AnsiballZ_copy.py'
Jan 31 07:02:47 compute-2 sudo[64807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:47 compute-2 python3.9[64809]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842966.4364107-901-28217638423871/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:47 compute-2 sudo[64807]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:47 compute-2 sudo[64959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssmsyfrddjtnlhmnzprqswjtcogdawmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842967.642853-947-80480248746954/AnsiballZ_stat.py'
Jan 31 07:02:47 compute-2 sudo[64959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:48 compute-2 python3.9[64961]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:48 compute-2 sudo[64959]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:48 compute-2 sudo[65082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qobestclezmvjndhmicemtimrwkyvhym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842967.642853-947-80480248746954/AnsiballZ_copy.py'
Jan 31 07:02:48 compute-2 sudo[65082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:48 compute-2 python3.9[65084]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842967.642853-947-80480248746954/.source.yaml _original_basename=.hbxqt4vq follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:48 compute-2 sudo[65082]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:49 compute-2 sudo[65234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcfjhcrxfhstcxysymelsozwoylvbnht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842968.8931334-992-259983454991966/AnsiballZ_stat.py'
Jan 31 07:02:49 compute-2 sudo[65234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:49 compute-2 python3.9[65236]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:49 compute-2 sudo[65234]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:49 compute-2 sudo[65357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxmhbxkqeoyovzjdmsuyxfifkxhubpvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842968.8931334-992-259983454991966/AnsiballZ_copy.py'
Jan 31 07:02:49 compute-2 sudo[65357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:49 compute-2 python3.9[65359]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842968.8931334-992-259983454991966/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:49 compute-2 sudo[65357]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:50 compute-2 sudo[65509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxmzeqlxjefqpyhtaucsafgkzihlrasz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842970.090864-1037-278811758912363/AnsiballZ_command.py'
Jan 31 07:02:50 compute-2 sudo[65509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:50 compute-2 python3.9[65511]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:02:50 compute-2 sudo[65509]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:51 compute-2 sudo[65662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amutoiebtoljqsdoecdcnodqtqmfiwur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842970.873311-1061-103929770712113/AnsiballZ_command.py'
Jan 31 07:02:51 compute-2 sudo[65662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:51 compute-2 python3.9[65664]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:02:51 compute-2 sudo[65662]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:51 compute-2 sudo[65815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etilfbvwduiuazoacaxyxfkmseyxmckm ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769842971.4746397-1085-70496826620170/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 07:02:51 compute-2 sudo[65815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:52 compute-2 python3[65817]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 07:02:52 compute-2 sudo[65815]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:52 compute-2 sudo[65967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghrkixmkwepjcvafmbewlrjtxtwrpldn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842972.218795-1108-35552000848732/AnsiballZ_stat.py'
Jan 31 07:02:52 compute-2 sudo[65967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:52 compute-2 python3.9[65969]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:52 compute-2 sudo[65967]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:52 compute-2 sudo[66090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssgtlnhhhfvlzkqmyhfaersozdtqgnnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842972.218795-1108-35552000848732/AnsiballZ_copy.py'
Jan 31 07:02:52 compute-2 sudo[66090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:53 compute-2 python3.9[66092]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842972.218795-1108-35552000848732/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:53 compute-2 sudo[66090]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:53 compute-2 sudo[66242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phmrxzppfstrundvywgilvdectodoxzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842973.6863167-1154-266765305155935/AnsiballZ_stat.py'
Jan 31 07:02:53 compute-2 sudo[66242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:54 compute-2 python3.9[66244]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:54 compute-2 sudo[66242]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:54 compute-2 sudo[66365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqlazmcvbzkrsqmexcnmggwdvvuvdete ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842973.6863167-1154-266765305155935/AnsiballZ_copy.py'
Jan 31 07:02:54 compute-2 sudo[66365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:54 compute-2 python3.9[66367]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842973.6863167-1154-266765305155935/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:54 compute-2 sudo[66365]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:55 compute-2 sudo[66517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thtixvlnglmsvxhhtdwnkfsbsbtpwcuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842974.855788-1199-46495036327982/AnsiballZ_stat.py'
Jan 31 07:02:55 compute-2 sudo[66517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:55 compute-2 python3.9[66519]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:55 compute-2 sudo[66517]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:55 compute-2 sudo[66640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbwzzqfqttbjrndtuzofpaogohvtddxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842974.855788-1199-46495036327982/AnsiballZ_copy.py'
Jan 31 07:02:55 compute-2 sudo[66640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:55 compute-2 python3.9[66642]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842974.855788-1199-46495036327982/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:55 compute-2 sudo[66640]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:56 compute-2 sudo[66792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tewpuudbjsishqvqhgstejliezhrinvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842976.0125797-1244-76377195554611/AnsiballZ_stat.py'
Jan 31 07:02:56 compute-2 sudo[66792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:56 compute-2 python3.9[66794]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:56 compute-2 sudo[66792]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:56 compute-2 sudo[66915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxcfzxdivftdzxbyqcdieegxaapjipnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842976.0125797-1244-76377195554611/AnsiballZ_copy.py'
Jan 31 07:02:56 compute-2 sudo[66915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:56 compute-2 python3.9[66917]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842976.0125797-1244-76377195554611/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:56 compute-2 sudo[66915]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:57 compute-2 sudo[67067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trwfjoprwmzocivdmsziuybuiednxtby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842977.202572-1289-180371504027596/AnsiballZ_stat.py'
Jan 31 07:02:57 compute-2 sudo[67067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:57 compute-2 python3.9[67069]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:57 compute-2 sudo[67067]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:57 compute-2 sudo[67190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkschmeaegiuynoicztisbuhhriaourf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842977.202572-1289-180371504027596/AnsiballZ_copy.py'
Jan 31 07:02:57 compute-2 sudo[67190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:58 compute-2 python3.9[67192]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842977.202572-1289-180371504027596/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:58 compute-2 sudo[67190]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:58 compute-2 sudo[67342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywkaqzygkzgbckxzgopndmaiuvizslkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842978.5181026-1334-170072037855227/AnsiballZ_file.py'
Jan 31 07:02:58 compute-2 sudo[67342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:58 compute-2 python3.9[67344]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:59 compute-2 sudo[67342]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:59 compute-2 sudo[67494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejjinzpqcypweedkwnlmkmwnfudeanyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842979.182475-1357-271076360677802/AnsiballZ_command.py'
Jan 31 07:02:59 compute-2 sudo[67494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:59 compute-2 python3.9[67496]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:02:59 compute-2 sudo[67494]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:00 compute-2 sudo[67653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koqbvgeglzvvfvmpcoyzixossttcvltq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842979.8528779-1382-124471837127187/AnsiballZ_blockinfile.py'
Jan 31 07:03:00 compute-2 sudo[67653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:00 compute-2 python3.9[67655]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:00 compute-2 sudo[67653]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:00 compute-2 sudo[67806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpkgcxkdsukflkzqtaadgmkgjiejzdgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842980.7283976-1408-166892274657919/AnsiballZ_file.py'
Jan 31 07:03:00 compute-2 sudo[67806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:01 compute-2 python3.9[67808]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:01 compute-2 sudo[67806]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:01 compute-2 sudo[67958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxcdqpflvtylwjkrvdqbhltfiblepozo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842981.3410833-1408-86257930736850/AnsiballZ_file.py'
Jan 31 07:03:01 compute-2 sudo[67958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:01 compute-2 python3.9[67960]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:01 compute-2 sudo[67958]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:02 compute-2 sudo[68110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsuiyykmvcaehtviyobeczbxwhnmpjxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842981.9743435-1453-194090512787156/AnsiballZ_mount.py'
Jan 31 07:03:02 compute-2 sudo[68110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:02 compute-2 python3.9[68112]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 07:03:02 compute-2 sudo[68110]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:02 compute-2 sudo[68263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivqdyuwfbpjltgggdeqyxdqaxmjwqllh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842982.6944706-1453-94664875100534/AnsiballZ_mount.py'
Jan 31 07:03:02 compute-2 sudo[68263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:03 compute-2 python3.9[68265]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 07:03:03 compute-2 sudo[68263]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:03 compute-2 sshd-session[59062]: Connection closed by 192.168.122.30 port 54828
Jan 31 07:03:03 compute-2 sshd-session[59059]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:03:03 compute-2 systemd[1]: session-15.scope: Deactivated successfully.
Jan 31 07:03:03 compute-2 systemd[1]: session-15.scope: Consumed 27.686s CPU time.
Jan 31 07:03:03 compute-2 systemd-logind[801]: Session 15 logged out. Waiting for processes to exit.
Jan 31 07:03:03 compute-2 systemd-logind[801]: Removed session 15.
Jan 31 07:03:08 compute-2 sshd-session[68291]: Accepted publickey for zuul from 192.168.122.30 port 52308 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:03:08 compute-2 systemd-logind[801]: New session 16 of user zuul.
Jan 31 07:03:08 compute-2 systemd[1]: Started Session 16 of User zuul.
Jan 31 07:03:08 compute-2 sshd-session[68291]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:03:09 compute-2 sudo[68444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdhippcqjahfkhmdozuljqshsluqikve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842988.9003308-25-61320426019103/AnsiballZ_tempfile.py'
Jan 31 07:03:09 compute-2 sudo[68444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:09 compute-2 python3.9[68446]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 31 07:03:09 compute-2 sudo[68444]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:10 compute-2 sudo[68596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwkwnyldlflxttwfswfrtouhholsenwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842989.7458465-61-259421650298370/AnsiballZ_stat.py'
Jan 31 07:03:10 compute-2 sudo[68596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:10 compute-2 python3.9[68598]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:03:10 compute-2 sudo[68596]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:11 compute-2 sudo[68748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkaqtshpzabilcesiofcbxbyzevjjhlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842990.6135478-92-269675461178490/AnsiballZ_setup.py'
Jan 31 07:03:11 compute-2 sudo[68748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:11 compute-2 python3.9[68750]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:03:11 compute-2 sudo[68748]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:12 compute-2 sudo[68900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eguotxesirkoajtktidbdpqhblvonkik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842991.6644852-117-243811966586019/AnsiballZ_blockinfile.py'
Jan 31 07:03:12 compute-2 sudo[68900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:12 compute-2 python3.9[68902]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDlvTGYGifalEmozttYlZ79wRHZPo6p3FfxUn+H8fCt//gLYJvHB9ygqCWO8F06xZhwaSJlU3R5k49AFtcq6rCaf4D9FuDYpYU5B1qGxpqY2S/6r/PmC9TmJJe6DJfuIf95os5YrDLR82BbT8dLFvu76PfZiMt0+kvm9gj1Q6XCUTgIsIvY9pyPySu0V4JDeT8EBgROR7WA5Fev80wO2/RlFXH9xVIupO8rswjwWPuIXoua1w44d35HWWHBdMAFXeZZMopWHWwY+fIlyz4B8y/TWDow7KZxG9GHKZ04e73/RA972Gub2LC0SlBFsBqaSnub8ooOcA3jZ3R2bjHAVkZvLgCK9UFSgwvvfyOWxtkJgj5KalAy9vZeGQ02ndAPNkQ6B1GnnRHaR5yGPG78q9Nd8RDmzhTr1iwnYLHhup04nAUnUDw5ubZFyF9bW1KQWvDv+4cfFeT8mhARMCxu7Imzne5FDq9OZAA9VLfnA26YFT0MpGjGl332cx20iz3Z4IU=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDBM/OyT9HQGjLM76vSXpTFer+lkr//u0v4BsUk+Rcai
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPttGgqMF5HnqNXeajmhgAAhQFj1yReXfFmUGT6cv24PcfDX+VeASpBgDGWJKvbu1EgrSPUu2R8sDzajVI5+ETk=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwK9tbwI1sVhVFn3RGaEAgpi2689y9VdIyBp+cw+RWFupGnK46xr4HB/N67Aw+A+3FJtEl1Zq1cnt3Gy8PYb6XnLd4xH/NFtUI3ukhekrtKvSmysEjpRGIamjt1BkH4Lxh79PNkk13AVMQN92Wo271/fHEvcV7HaC0Q5VypZMd+77ZvI9NuEG1nofpvI8+32YECZBLpoC5KQK7EibqD9MUR2OmapGZhV+5B5jdb0ZvNb966Q0kwAGV8E+xgHSVnh5eCWC8oxgWkycmQd2co9E79fiIHEioABE9aDUGKw0+nsZ7HrvjG/ENeg5C6fjdJE4MsPq3FNHAiTCQPZ7QZgv/CSudt7WYyLTztGL9ksWqaTUeDocKVKPlJlzGrn/TXgMoix8+qbFzxVixIROb2nqElyEy6mo0Xxt2b4aisil9ZQhWVMQY0hGX5vtVv0E6+svzjSTfkyZolbjyRsolJF4pH7+klLEmlWGDlgSoCDZeK/XEi7xq3yaCymuWtX2fAX8=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICzJ5+1VSPloOqHhejNen2lHjfV4Hvj7nbRbNJjS6dtd
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBECc0+1u2G3haTNDUnwK7F3+bqZqLNjR6ayEsOJcH6U6RkqhSd2eAlivxlw9dfPuir2TFrYzGTtSXuJ8iauDAtQ=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDir5Ux7IUuKTsqwrpZRFpieFX7Hi9Bsaw7N3jCiMd+vuHlEKHLX54HbyTIVnox1XbNjeYynLRRz7VKBfder8IEerGmST/uWuX5FOdve7vDdY++9J6qYkj1Gf6v6BGp8BT97bbPdvaQdLP6YS2jFEfOz4s0oJkgr8dsHjPU70e1P0b7vKxqo3z/E/XCe2BUGEv5j/z9GTl2oQ9/KoTvahfr6qfonnQK9E0gsJKDB9S1UPNFkJUxvVPfKfEao207dmT8EmQL2ZdwDwecA2Mg0SneGaNmEFWDW4CWQjdbHuikc3vsZ1do7kzq2+tz+WLEXqdb4Ig4S0OfV/MAcaC/C1DRfZHxZN3vSayrm99nFc8oPaLnRtT8Jz1dVonMOpwLm3xMm6nAeGNTzM0ImTrJTusVmKNRQI3x6VPiEcWdKNvN5sVcrN9uyINDMuzpXIxc1LmpmR/338EfP4HYhfsTqdM0worzzewvh2XhAVxQAiNYRRUbLvR4/EE5SjXTjSA4ID0=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ8oYZpZvdB1n917+wvTxetgtueloCox+7yBQBW8LHZX
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCEH62xmPSqzu7EFth8e8ITel7fLvoU9FKlxQN/eSXzUuR/7sZGPhcgLzjrJmEcn4Za0K2VNu6+z559d/AEJY2U=
                                             create=True mode=0644 path=/tmp/ansible.q01peblb state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:12 compute-2 sudo[68900]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:12 compute-2 sudo[69052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrairomkzetrxdavgkrajrhoivzijrnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842992.370697-140-42343807525717/AnsiballZ_command.py'
Jan 31 07:03:12 compute-2 sudo[69052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:12 compute-2 python3.9[69054]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.q01peblb' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:03:12 compute-2 sudo[69052]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:13 compute-2 sudo[69206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaoixoqkecytcifbavcgehpilbjpsbqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842993.13697-164-247955853218564/AnsiballZ_file.py'
Jan 31 07:03:13 compute-2 sudo[69206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:13 compute-2 python3.9[69208]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.q01peblb state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:13 compute-2 sudo[69206]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:14 compute-2 sshd-session[68294]: Connection closed by 192.168.122.30 port 52308
Jan 31 07:03:14 compute-2 sshd-session[68291]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:03:14 compute-2 systemd[1]: session-16.scope: Deactivated successfully.
Jan 31 07:03:14 compute-2 systemd[1]: session-16.scope: Consumed 2.795s CPU time.
Jan 31 07:03:14 compute-2 systemd-logind[801]: Session 16 logged out. Waiting for processes to exit.
Jan 31 07:03:14 compute-2 systemd-logind[801]: Removed session 16.
Jan 31 07:03:15 compute-2 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 07:03:19 compute-2 sshd-session[69235]: Accepted publickey for zuul from 192.168.122.30 port 53550 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:03:19 compute-2 systemd-logind[801]: New session 17 of user zuul.
Jan 31 07:03:19 compute-2 systemd[1]: Started Session 17 of User zuul.
Jan 31 07:03:19 compute-2 sshd-session[69235]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:03:20 compute-2 python3.9[69388]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:03:21 compute-2 sudo[69542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqndolfnfnxtwgdontspkczjhkvxksuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843000.9806197-58-174980768333512/AnsiballZ_systemd.py'
Jan 31 07:03:21 compute-2 sudo[69542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:21 compute-2 python3.9[69544]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 07:03:21 compute-2 sudo[69542]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:22 compute-2 sudo[69696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adamamotbcykdxijyulzlaqkfasofodt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843002.0408568-82-155474320552825/AnsiballZ_systemd.py'
Jan 31 07:03:22 compute-2 sudo[69696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:22 compute-2 python3.9[69698]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:03:22 compute-2 sudo[69696]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:23 compute-2 sudo[69849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhnwgmosvorcmhivallfmmxyynbgtesn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843002.9195442-110-143913631342962/AnsiballZ_command.py'
Jan 31 07:03:23 compute-2 sudo[69849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:23 compute-2 python3.9[69851]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:03:23 compute-2 sudo[69849]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:24 compute-2 sudo[70002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejkvkiyqdwhppryzxakwskactvqwitwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843003.7219408-133-271425827606926/AnsiballZ_stat.py'
Jan 31 07:03:24 compute-2 sudo[70002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:24 compute-2 python3.9[70004]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:03:24 compute-2 sudo[70002]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:24 compute-2 sudo[70156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfcidliatjarwngmnpxjvlwgmusfoitx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843004.4924085-157-15228771395383/AnsiballZ_command.py'
Jan 31 07:03:24 compute-2 sudo[70156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:24 compute-2 python3.9[70158]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:03:24 compute-2 sudo[70156]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:25 compute-2 sudo[70311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuacyflnhpmnfidxqxnilxhakjbkjqes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843005.1640997-182-212696591384325/AnsiballZ_file.py'
Jan 31 07:03:25 compute-2 sudo[70311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:25 compute-2 python3.9[70313]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:25 compute-2 sudo[70311]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:26 compute-2 sshd-session[69238]: Connection closed by 192.168.122.30 port 53550
Jan 31 07:03:26 compute-2 sshd-session[69235]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:03:26 compute-2 systemd[1]: session-17.scope: Deactivated successfully.
Jan 31 07:03:26 compute-2 systemd[1]: session-17.scope: Consumed 3.450s CPU time.
Jan 31 07:03:26 compute-2 systemd-logind[801]: Session 17 logged out. Waiting for processes to exit.
Jan 31 07:03:26 compute-2 systemd-logind[801]: Removed session 17.
Jan 31 07:03:31 compute-2 sshd-session[70339]: Accepted publickey for zuul from 192.168.122.30 port 52098 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:03:31 compute-2 systemd-logind[801]: New session 18 of user zuul.
Jan 31 07:03:31 compute-2 systemd[1]: Started Session 18 of User zuul.
Jan 31 07:03:31 compute-2 sshd-session[70339]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:03:32 compute-2 python3.9[70492]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:03:33 compute-2 sudo[70646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knoxjkegvmcegzbrnuofplazwaffwqwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843013.2631354-64-258987573499495/AnsiballZ_setup.py'
Jan 31 07:03:33 compute-2 sudo[70646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:33 compute-2 python3.9[70648]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:03:33 compute-2 sudo[70646]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:34 compute-2 sudo[70730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agywztqyayodhvkeuysjqkugavngymji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843013.2631354-64-258987573499495/AnsiballZ_dnf.py'
Jan 31 07:03:34 compute-2 sudo[70730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:34 compute-2 python3.9[70732]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 07:03:36 compute-2 sudo[70730]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:36 compute-2 python3.9[70883]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:03:37 compute-2 python3.9[71034]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 07:03:38 compute-2 python3.9[71184]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:03:38 compute-2 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 07:03:39 compute-2 python3.9[71335]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:03:39 compute-2 sshd-session[70342]: Connection closed by 192.168.122.30 port 52098
Jan 31 07:03:39 compute-2 sshd-session[70339]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:03:39 compute-2 systemd[1]: session-18.scope: Deactivated successfully.
Jan 31 07:03:39 compute-2 systemd[1]: session-18.scope: Consumed 5.088s CPU time.
Jan 31 07:03:39 compute-2 systemd-logind[801]: Session 18 logged out. Waiting for processes to exit.
Jan 31 07:03:39 compute-2 systemd-logind[801]: Removed session 18.
Jan 31 07:03:48 compute-2 sshd-session[71360]: Accepted publickey for zuul from 38.129.56.250 port 33270 ssh2: RSA SHA256:XB4IpasupLQgCusHkIQqr06rUeufQJTktnyEJKRsUAs
Jan 31 07:03:48 compute-2 systemd-logind[801]: New session 19 of user zuul.
Jan 31 07:03:48 compute-2 systemd[1]: Started Session 19 of User zuul.
Jan 31 07:03:48 compute-2 sshd-session[71360]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:03:48 compute-2 sudo[71436]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-susdhheqljbjgamdeiaguhftoasnxiet ; /usr/bin/python3'
Jan 31 07:03:48 compute-2 sudo[71436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:48 compute-2 useradd[71440]: new group: name=ceph-admin, GID=42478
Jan 31 07:03:48 compute-2 useradd[71440]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Jan 31 07:03:48 compute-2 sudo[71436]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:48 compute-2 sudo[71522]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxdlzgphyhoxloyjuncxgztyprodgyip ; /usr/bin/python3'
Jan 31 07:03:48 compute-2 sudo[71522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:48 compute-2 sudo[71522]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:49 compute-2 sudo[71595]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdttzlmeaquiadmovzfkcsowbwblvxlc ; /usr/bin/python3'
Jan 31 07:03:49 compute-2 sudo[71595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:49 compute-2 sudo[71595]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:49 compute-2 sudo[71645]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agcminubwntslxzjtmdmxsabgpmjxadf ; /usr/bin/python3'
Jan 31 07:03:49 compute-2 sudo[71645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:49 compute-2 sudo[71645]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:50 compute-2 sudo[71672]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrftsgzaooinrkypsvihjtmmvzltplqa ; /usr/bin/python3'
Jan 31 07:03:50 compute-2 sudo[71672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:50 compute-2 sudo[71672]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:50 compute-2 sudo[71698]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgcxrphxwpxwqtpvtrhrunpevkyhqcdi ; /usr/bin/python3'
Jan 31 07:03:50 compute-2 sudo[71698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:50 compute-2 sudo[71698]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:50 compute-2 sudo[71724]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxadzatmomwdhrwpmcnpfkdywycuclkn ; /usr/bin/python3'
Jan 31 07:03:50 compute-2 sudo[71724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:51 compute-2 sudo[71724]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:51 compute-2 sudo[71802]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nekbuieywlwvovdgzlirckmdeytewivq ; /usr/bin/python3'
Jan 31 07:03:51 compute-2 sudo[71802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:51 compute-2 sudo[71802]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:51 compute-2 sudo[71875]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-molpvnuelvaaixzdvsmggxybibixedkr ; /usr/bin/python3'
Jan 31 07:03:51 compute-2 sudo[71875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:51 compute-2 sudo[71875]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:52 compute-2 sudo[71977]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtqjcnpgaobtadgdtpuvvvznxulwmjvo ; /usr/bin/python3'
Jan 31 07:03:52 compute-2 sudo[71977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:52 compute-2 sudo[71977]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:52 compute-2 sudo[72050]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcmuytnbtoasftprokkxbtjzrtktyswk ; /usr/bin/python3'
Jan 31 07:03:52 compute-2 sudo[72050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:52 compute-2 sudo[72050]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:53 compute-2 sudo[72100]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srvwtanphjzfeehajrwchxhgnfslufdw ; /usr/bin/python3'
Jan 31 07:03:53 compute-2 sudo[72100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:53 compute-2 python3[72102]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:03:54 compute-2 sudo[72100]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:54 compute-2 sudo[72195]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anhfqeqpwrkorbruozfvbsphpvmbkbtg ; /usr/bin/python3'
Jan 31 07:03:54 compute-2 sudo[72195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:55 compute-2 python3[72197]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 07:03:56 compute-2 sudo[72195]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:57 compute-2 sudo[72222]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfpgmrktchzaeasppgkfdskucdukcafv ; /usr/bin/python3'
Jan 31 07:03:57 compute-2 sudo[72222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:57 compute-2 python3[72224]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 07:03:57 compute-2 sudo[72222]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:57 compute-2 sudo[72248]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihusnebmiwagkmfixchuhwiuxbmnspef ; /usr/bin/python3'
Jan 31 07:03:57 compute-2 sudo[72248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:57 compute-2 python3[72250]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:03:57 compute-2 kernel: loop: module loaded
Jan 31 07:03:57 compute-2 kernel: loop3: detected capacity change from 0 to 14680064
Jan 31 07:03:57 compute-2 sudo[72248]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:57 compute-2 sudo[72282]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyxhnbfvfaeyqjyajoamuowzsxpartgo ; /usr/bin/python3'
Jan 31 07:03:57 compute-2 sudo[72282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:57 compute-2 python3[72284]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:03:58 compute-2 lvm[72287]: PV /dev/loop3 not used.
Jan 31 07:03:58 compute-2 lvm[72296]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 07:03:58 compute-2 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 31 07:03:58 compute-2 sudo[72282]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:58 compute-2 lvm[72298]:   1 logical volume(s) in volume group "ceph_vg0" now active
Jan 31 07:03:58 compute-2 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 31 07:03:58 compute-2 sudo[72374]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgloitajvdcglpzyovpfflsntgsrjmhw ; /usr/bin/python3'
Jan 31 07:03:58 compute-2 sudo[72374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:58 compute-2 python3[72376]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:03:58 compute-2 sudo[72374]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:58 compute-2 sudo[72447]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdjnrmkhhigcowlccvkysmslvpyloqdt ; /usr/bin/python3'
Jan 31 07:03:58 compute-2 sudo[72447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:58 compute-2 python3[72449]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769843038.3653393-37139-148082142659351/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:58 compute-2 sudo[72447]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:59 compute-2 sudo[72497]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqygeutegnhuqnrciijybzfldaxaqera ; /usr/bin/python3'
Jan 31 07:03:59 compute-2 sudo[72497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:59 compute-2 python3[72499]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:04:00 compute-2 systemd[1]: Reloading.
Jan 31 07:04:00 compute-2 systemd-rc-local-generator[72527]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:04:00 compute-2 systemd-sysv-generator[72530]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:04:00 compute-2 systemd[1]: Starting Ceph OSD losetup...
Jan 31 07:04:00 compute-2 bash[72539]: /dev/loop3: [64513]:4355789 (/var/lib/ceph-osd-0.img)
Jan 31 07:04:00 compute-2 systemd[1]: Finished Ceph OSD losetup.
Jan 31 07:04:00 compute-2 lvm[72540]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 07:04:00 compute-2 lvm[72540]: VG ceph_vg0 finished
Jan 31 07:04:00 compute-2 sudo[72497]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:02 compute-2 python3[72564]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:04:06 compute-2 sshd-session[72608]: Invalid user solana from 92.118.39.56 port 58404
Jan 31 07:04:07 compute-2 sshd-session[72608]: Connection closed by invalid user solana 92.118.39.56 port 58404 [preauth]
Jan 31 07:04:10 compute-2 chronyd[58578]: Selected source 23.133.168.244 (pool.ntp.org)
Jan 31 07:05:53 compute-2 sshd-session[72610]: Accepted publickey for ceph-admin from 192.168.122.100 port 39440 ssh2: RSA SHA256:TuH35lNNH2Qzo+bS1OQ3cKDwd7uVGJr8RxC0AbQLNUg
Jan 31 07:05:53 compute-2 systemd[1]: Created slice User Slice of UID 42477.
Jan 31 07:05:53 compute-2 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 31 07:05:53 compute-2 systemd-logind[801]: New session 20 of user ceph-admin.
Jan 31 07:05:53 compute-2 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 31 07:05:53 compute-2 systemd[1]: Starting User Manager for UID 42477...
Jan 31 07:05:53 compute-2 systemd[72614]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:05:53 compute-2 sshd-session[72628]: Accepted publickey for ceph-admin from 192.168.122.100 port 39450 ssh2: RSA SHA256:TuH35lNNH2Qzo+bS1OQ3cKDwd7uVGJr8RxC0AbQLNUg
Jan 31 07:05:53 compute-2 systemd[72614]: Queued start job for default target Main User Target.
Jan 31 07:05:53 compute-2 systemd-logind[801]: New session 22 of user ceph-admin.
Jan 31 07:05:53 compute-2 systemd[72614]: Created slice User Application Slice.
Jan 31 07:05:53 compute-2 systemd[72614]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 07:05:53 compute-2 systemd[72614]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 07:05:53 compute-2 systemd[72614]: Reached target Paths.
Jan 31 07:05:53 compute-2 systemd[72614]: Reached target Timers.
Jan 31 07:05:53 compute-2 systemd[72614]: Starting D-Bus User Message Bus Socket...
Jan 31 07:05:53 compute-2 systemd[72614]: Starting Create User's Volatile Files and Directories...
Jan 31 07:05:53 compute-2 systemd[72614]: Listening on D-Bus User Message Bus Socket.
Jan 31 07:05:53 compute-2 systemd[72614]: Reached target Sockets.
Jan 31 07:05:53 compute-2 systemd[72614]: Finished Create User's Volatile Files and Directories.
Jan 31 07:05:53 compute-2 systemd[72614]: Reached target Basic System.
Jan 31 07:05:53 compute-2 systemd[72614]: Reached target Main User Target.
Jan 31 07:05:53 compute-2 systemd[72614]: Startup finished in 212ms.
Jan 31 07:05:53 compute-2 systemd[1]: Started User Manager for UID 42477.
Jan 31 07:05:53 compute-2 systemd[1]: Started Session 20 of User ceph-admin.
Jan 31 07:05:53 compute-2 systemd[1]: Started Session 22 of User ceph-admin.
Jan 31 07:05:53 compute-2 sshd-session[72610]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:05:53 compute-2 sshd-session[72628]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:05:53 compute-2 sudo[72635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:53 compute-2 sudo[72635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:53 compute-2 sudo[72635]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:53 compute-2 sudo[72660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:05:53 compute-2 sudo[72660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:53 compute-2 sudo[72660]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:54 compute-2 sshd-session[72685]: Accepted publickey for ceph-admin from 192.168.122.100 port 39452 ssh2: RSA SHA256:TuH35lNNH2Qzo+bS1OQ3cKDwd7uVGJr8RxC0AbQLNUg
Jan 31 07:05:54 compute-2 systemd-logind[801]: New session 23 of user ceph-admin.
Jan 31 07:05:54 compute-2 systemd[1]: Started Session 23 of User ceph-admin.
Jan 31 07:05:54 compute-2 sshd-session[72685]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:05:54 compute-2 sudo[72689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:54 compute-2 sudo[72689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:54 compute-2 sudo[72689]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:54 compute-2 sudo[72714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-2
Jan 31 07:05:54 compute-2 sudo[72714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:54 compute-2 sudo[72714]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:54 compute-2 sshd-session[72739]: Accepted publickey for ceph-admin from 192.168.122.100 port 39458 ssh2: RSA SHA256:TuH35lNNH2Qzo+bS1OQ3cKDwd7uVGJr8RxC0AbQLNUg
Jan 31 07:05:54 compute-2 systemd-logind[801]: New session 24 of user ceph-admin.
Jan 31 07:05:54 compute-2 systemd[1]: Started Session 24 of User ceph-admin.
Jan 31 07:05:54 compute-2 sshd-session[72739]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:05:54 compute-2 sudo[72743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:54 compute-2 sudo[72743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:54 compute-2 sudo[72743]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:54 compute-2 sudo[72768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Jan 31 07:05:54 compute-2 sudo[72768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:54 compute-2 sudo[72768]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:54 compute-2 sshd-session[72793]: Accepted publickey for ceph-admin from 192.168.122.100 port 39464 ssh2: RSA SHA256:TuH35lNNH2Qzo+bS1OQ3cKDwd7uVGJr8RxC0AbQLNUg
Jan 31 07:05:54 compute-2 systemd-logind[801]: New session 25 of user ceph-admin.
Jan 31 07:05:54 compute-2 systemd[1]: Started Session 25 of User ceph-admin.
Jan 31 07:05:54 compute-2 sshd-session[72793]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:05:54 compute-2 sudo[72797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:54 compute-2 sudo[72797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:54 compute-2 sudo[72797]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:54 compute-2 sudo[72822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:05:54 compute-2 sudo[72822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:54 compute-2 sudo[72822]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:55 compute-2 sshd-session[72847]: Accepted publickey for ceph-admin from 192.168.122.100 port 39478 ssh2: RSA SHA256:TuH35lNNH2Qzo+bS1OQ3cKDwd7uVGJr8RxC0AbQLNUg
Jan 31 07:05:55 compute-2 systemd-logind[801]: New session 26 of user ceph-admin.
Jan 31 07:05:55 compute-2 systemd[1]: Started Session 26 of User ceph-admin.
Jan 31 07:05:55 compute-2 sshd-session[72847]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:05:55 compute-2 sudo[72851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:55 compute-2 sudo[72851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:55 compute-2 sudo[72851]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:55 compute-2 sudo[72876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:05:55 compute-2 sudo[72876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:55 compute-2 sudo[72876]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:55 compute-2 sshd-session[72901]: Accepted publickey for ceph-admin from 192.168.122.100 port 39480 ssh2: RSA SHA256:TuH35lNNH2Qzo+bS1OQ3cKDwd7uVGJr8RxC0AbQLNUg
Jan 31 07:05:55 compute-2 systemd-logind[801]: New session 27 of user ceph-admin.
Jan 31 07:05:55 compute-2 systemd[1]: Started Session 27 of User ceph-admin.
Jan 31 07:05:55 compute-2 sshd-session[72901]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:05:55 compute-2 sudo[72905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:55 compute-2 sudo[72905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:55 compute-2 sudo[72905]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:55 compute-2 sudo[72930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Jan 31 07:05:55 compute-2 sudo[72930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:55 compute-2 sudo[72930]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:55 compute-2 sshd-session[72955]: Accepted publickey for ceph-admin from 192.168.122.100 port 39482 ssh2: RSA SHA256:TuH35lNNH2Qzo+bS1OQ3cKDwd7uVGJr8RxC0AbQLNUg
Jan 31 07:05:55 compute-2 systemd-logind[801]: New session 28 of user ceph-admin.
Jan 31 07:05:55 compute-2 systemd[1]: Started Session 28 of User ceph-admin.
Jan 31 07:05:55 compute-2 sshd-session[72955]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:05:55 compute-2 sudo[72959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:55 compute-2 sudo[72959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:55 compute-2 sudo[72959]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:56 compute-2 sudo[72984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:05:56 compute-2 sudo[72984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:56 compute-2 sudo[72984]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:56 compute-2 sshd-session[73009]: Accepted publickey for ceph-admin from 192.168.122.100 port 39492 ssh2: RSA SHA256:TuH35lNNH2Qzo+bS1OQ3cKDwd7uVGJr8RxC0AbQLNUg
Jan 31 07:05:56 compute-2 systemd-logind[801]: New session 29 of user ceph-admin.
Jan 31 07:05:56 compute-2 systemd[1]: Started Session 29 of User ceph-admin.
Jan 31 07:05:56 compute-2 sshd-session[73009]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:05:56 compute-2 sudo[73013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:56 compute-2 sudo[73013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:56 compute-2 sudo[73013]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:56 compute-2 sudo[73038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Jan 31 07:05:56 compute-2 sudo[73038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:56 compute-2 sudo[73038]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:56 compute-2 sshd-session[73063]: Accepted publickey for ceph-admin from 192.168.122.100 port 39494 ssh2: RSA SHA256:TuH35lNNH2Qzo+bS1OQ3cKDwd7uVGJr8RxC0AbQLNUg
Jan 31 07:05:56 compute-2 systemd-logind[801]: New session 30 of user ceph-admin.
Jan 31 07:05:56 compute-2 systemd[1]: Started Session 30 of User ceph-admin.
Jan 31 07:05:56 compute-2 sshd-session[73063]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:05:57 compute-2 sshd-session[73090]: Accepted publickey for ceph-admin from 192.168.122.100 port 39508 ssh2: RSA SHA256:TuH35lNNH2Qzo+bS1OQ3cKDwd7uVGJr8RxC0AbQLNUg
Jan 31 07:05:57 compute-2 systemd-logind[801]: New session 31 of user ceph-admin.
Jan 31 07:05:57 compute-2 systemd[1]: Started Session 31 of User ceph-admin.
Jan 31 07:05:57 compute-2 sshd-session[73090]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:05:57 compute-2 sudo[73094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:57 compute-2 sudo[73094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:57 compute-2 sudo[73094]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:57 compute-2 sudo[73119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Jan 31 07:05:57 compute-2 sudo[73119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:57 compute-2 sudo[73119]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:57 compute-2 sshd-session[73144]: Accepted publickey for ceph-admin from 192.168.122.100 port 39524 ssh2: RSA SHA256:TuH35lNNH2Qzo+bS1OQ3cKDwd7uVGJr8RxC0AbQLNUg
Jan 31 07:05:57 compute-2 systemd-logind[801]: New session 32 of user ceph-admin.
Jan 31 07:05:57 compute-2 systemd[1]: Started Session 32 of User ceph-admin.
Jan 31 07:05:57 compute-2 sshd-session[73144]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:05:57 compute-2 sudo[73148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:57 compute-2 sudo[73148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:57 compute-2 sudo[73148]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:57 compute-2 sudo[73173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-2
Jan 31 07:05:57 compute-2 sudo[73173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:57 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:05:57 compute-2 sudo[73173]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:21 compute-2 sshd-session[73219]: Invalid user ubuntu from 92.118.39.56 port 44274
Jan 31 07:06:21 compute-2 sshd-session[73219]: Connection closed by invalid user ubuntu 92.118.39.56 port 44274 [preauth]
Jan 31 07:06:30 compute-2 sudo[73221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:30 compute-2 sudo[73221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:30 compute-2 sudo[73221]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:30 compute-2 sudo[73246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:06:30 compute-2 sudo[73246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:30 compute-2 sudo[73246]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:30 compute-2 sudo[73271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:30 compute-2 sudo[73271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:30 compute-2 sudo[73271]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:30 compute-2 sudo[73296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:06:30 compute-2 sudo[73296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:30 compute-2 sudo[73296]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:30 compute-2 sudo[73321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:30 compute-2 sudo[73321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:30 compute-2 sudo[73321]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:30 compute-2 sudo[73346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 07:06:30 compute-2 sudo[73346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:31 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:06:31 compute-2 sudo[73346]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:31 compute-2 sudo[73391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:31 compute-2 sudo[73391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:31 compute-2 sudo[73391]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:31 compute-2 sudo[73416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:06:31 compute-2 sudo[73416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:31 compute-2 sudo[73416]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:31 compute-2 sudo[73441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:31 compute-2 sudo[73441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:31 compute-2 sudo[73441]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:31 compute-2 sudo[73466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 07:06:31 compute-2 sudo[73466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:31 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:06:31 compute-2 sudo[73466]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:31 compute-2 sudo[73529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:31 compute-2 sudo[73529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:31 compute-2 sudo[73529]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:31 compute-2 sudo[73554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:06:31 compute-2 sudo[73554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:31 compute-2 sudo[73554]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:31 compute-2 sudo[73579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:31 compute-2 sudo[73579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:31 compute-2 sudo[73579]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:31 compute-2 sudo[73604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:06:31 compute-2 sudo[73604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:31 compute-2 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 73641 (sysctl)
Jan 31 07:06:32 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:06:32 compute-2 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 31 07:06:32 compute-2 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 31 07:06:32 compute-2 sudo[73604]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:32 compute-2 sudo[73663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:32 compute-2 sudo[73663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:32 compute-2 sudo[73663]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:32 compute-2 sudo[73688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:06:32 compute-2 sudo[73688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:32 compute-2 sudo[73688]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:32 compute-2 sudo[73713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:32 compute-2 sudo[73713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:32 compute-2 sudo[73713]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:32 compute-2 sudo[73738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 31 07:06:32 compute-2 sudo[73738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:32 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:06:32 compute-2 sudo[73738]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:32 compute-2 sudo[73782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:32 compute-2 sudo[73782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:32 compute-2 sudo[73782]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:32 compute-2 sudo[73807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:06:32 compute-2 sudo[73807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:32 compute-2 sudo[73807]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:32 compute-2 sudo[73832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:32 compute-2 sudo[73832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:32 compute-2 sudo[73832]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:32 compute-2 sudo[73857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- inventory --format=json-pretty --filter-for-batch
Jan 31 07:06:32 compute-2 sudo[73857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:35 compute-2 systemd[1]: var-lib-containers-storage-overlay-compat3232185189-lower\x2dmapped.mount: Deactivated successfully.
Jan 31 07:06:51 compute-2 podman[73917]: 2026-01-31 07:06:51.342996035 +0000 UTC m=+18.327352278 container create 184d39a1b9aa934f77a7d20065eb56995486fb619e75a56b3ad0f85a5a3f4b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Jan 31 07:06:51 compute-2 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck3780426411-merged.mount: Deactivated successfully.
Jan 31 07:06:51 compute-2 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 31 07:06:51 compute-2 systemd[1]: Started libpod-conmon-184d39a1b9aa934f77a7d20065eb56995486fb619e75a56b3ad0f85a5a3f4b8e.scope.
Jan 31 07:06:51 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:06:51 compute-2 podman[73917]: 2026-01-31 07:06:51.420352141 +0000 UTC m=+18.404708414 container init 184d39a1b9aa934f77a7d20065eb56995486fb619e75a56b3ad0f85a5a3f4b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sanderson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:06:51 compute-2 podman[73917]: 2026-01-31 07:06:51.425620653 +0000 UTC m=+18.409976896 container start 184d39a1b9aa934f77a7d20065eb56995486fb619e75a56b3ad0f85a5a3f4b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:06:51 compute-2 podman[73917]: 2026-01-31 07:06:51.330633381 +0000 UTC m=+18.314989644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:06:51 compute-2 podman[73917]: 2026-01-31 07:06:51.429070067 +0000 UTC m=+18.413426310 container attach 184d39a1b9aa934f77a7d20065eb56995486fb619e75a56b3ad0f85a5a3f4b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sanderson, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:06:51 compute-2 agitated_sanderson[73980]: 167 167
Jan 31 07:06:51 compute-2 systemd[1]: libpod-184d39a1b9aa934f77a7d20065eb56995486fb619e75a56b3ad0f85a5a3f4b8e.scope: Deactivated successfully.
Jan 31 07:06:51 compute-2 podman[73917]: 2026-01-31 07:06:51.431793311 +0000 UTC m=+18.416149554 container died 184d39a1b9aa934f77a7d20065eb56995486fb619e75a56b3ad0f85a5a3f4b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 07:06:51 compute-2 systemd[1]: var-lib-containers-storage-overlay-77b84035902fe9a451b6712b9dec6108d98ee444d52d886711d54a02866dcaee-merged.mount: Deactivated successfully.
Jan 31 07:06:51 compute-2 podman[73917]: 2026-01-31 07:06:51.463845089 +0000 UTC m=+18.448201332 container remove 184d39a1b9aa934f77a7d20065eb56995486fb619e75a56b3ad0f85a5a3f4b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sanderson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:06:51 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:06:51 compute-2 systemd[1]: libpod-conmon-184d39a1b9aa934f77a7d20065eb56995486fb619e75a56b3ad0f85a5a3f4b8e.scope: Deactivated successfully.
Jan 31 07:06:51 compute-2 podman[74003]: 2026-01-31 07:06:51.574928628 +0000 UTC m=+0.039124251 container create aebc92c919c24f96761d4307c5ea98ec0e884a3d4f272fb99d1a1c80d077414c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_elbakyan, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 07:06:51 compute-2 systemd[1]: Started libpod-conmon-aebc92c919c24f96761d4307c5ea98ec0e884a3d4f272fb99d1a1c80d077414c.scope.
Jan 31 07:06:51 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:06:51 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fb7329b823a42c16e74258421bde18c58ff7b8cb5c7d8d022da73aec90f1290/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:51 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fb7329b823a42c16e74258421bde18c58ff7b8cb5c7d8d022da73aec90f1290/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:51 compute-2 podman[74003]: 2026-01-31 07:06:51.633992658 +0000 UTC m=+0.098188301 container init aebc92c919c24f96761d4307c5ea98ec0e884a3d4f272fb99d1a1c80d077414c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 07:06:51 compute-2 podman[74003]: 2026-01-31 07:06:51.639559528 +0000 UTC m=+0.103755161 container start aebc92c919c24f96761d4307c5ea98ec0e884a3d4f272fb99d1a1c80d077414c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_elbakyan, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:06:51 compute-2 podman[74003]: 2026-01-31 07:06:51.642656222 +0000 UTC m=+0.106851875 container attach aebc92c919c24f96761d4307c5ea98ec0e884a3d4f272fb99d1a1c80d077414c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_elbakyan, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:06:51 compute-2 podman[74003]: 2026-01-31 07:06:51.559683924 +0000 UTC m=+0.023879577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]: [
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:     {
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:         "available": false,
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:         "ceph_device": false,
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:         "lsm_data": {},
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:         "lvs": [],
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:         "path": "/dev/sr0",
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:         "rejected_reasons": [
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:             "Insufficient space (<5GB)",
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:             "Has a FileSystem"
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:         ],
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:         "sys_api": {
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:             "actuators": null,
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:             "device_nodes": "sr0",
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:             "devname": "sr0",
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:             "human_readable_size": "482.00 KB",
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:             "id_bus": "ata",
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:             "model": "QEMU DVD-ROM",
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:             "nr_requests": "2",
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:             "parent": "/dev/sr0",
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:             "partitions": {},
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:             "path": "/dev/sr0",
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:             "removable": "1",
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:             "rev": "2.5+",
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:             "ro": "0",
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:             "rotational": "1",
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:             "sas_address": "",
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:             "sas_device_handle": "",
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:             "scheduler_mode": "mq-deadline",
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:             "sectors": 0,
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:             "sectorsize": "2048",
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:             "size": 493568.0,
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:             "support_discard": "2048",
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:             "type": "disk",
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:             "vendor": "QEMU"
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:         }
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]:     }
Jan 31 07:06:52 compute-2 infallible_elbakyan[74020]: ]
Jan 31 07:06:52 compute-2 systemd[1]: libpod-aebc92c919c24f96761d4307c5ea98ec0e884a3d4f272fb99d1a1c80d077414c.scope: Deactivated successfully.
Jan 31 07:06:52 compute-2 systemd[1]: libpod-aebc92c919c24f96761d4307c5ea98ec0e884a3d4f272fb99d1a1c80d077414c.scope: Consumed 1.103s CPU time.
Jan 31 07:06:52 compute-2 podman[74003]: 2026-01-31 07:06:52.749102753 +0000 UTC m=+1.213298376 container died aebc92c919c24f96761d4307c5ea98ec0e884a3d4f272fb99d1a1c80d077414c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 07:06:54 compute-2 systemd[1]: var-lib-containers-storage-overlay-5fb7329b823a42c16e74258421bde18c58ff7b8cb5c7d8d022da73aec90f1290-merged.mount: Deactivated successfully.
Jan 31 07:06:54 compute-2 podman[74003]: 2026-01-31 07:06:54.210977181 +0000 UTC m=+2.675172804 container remove aebc92c919c24f96761d4307c5ea98ec0e884a3d4f272fb99d1a1c80d077414c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_elbakyan, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 07:06:54 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:06:54 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:06:54 compute-2 systemd[1]: libpod-conmon-aebc92c919c24f96761d4307c5ea98ec0e884a3d4f272fb99d1a1c80d077414c.scope: Deactivated successfully.
Jan 31 07:06:54 compute-2 sudo[73857]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:54 compute-2 sudo[75098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:54 compute-2 sudo[75098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:54 compute-2 sudo[75098]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:54 compute-2 sudo[75123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 31 07:06:54 compute-2 sudo[75123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:54 compute-2 sudo[75123]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:54 compute-2 sudo[75148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:54 compute-2 sudo[75148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:54 compute-2 sudo[75148]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:54 compute-2 sudo[75173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph
Jan 31 07:06:54 compute-2 sudo[75173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:54 compute-2 sudo[75173]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:54 compute-2 sudo[75198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:54 compute-2 sudo[75198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:54 compute-2 sudo[75198]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:54 compute-2 sudo[75223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.conf.new
Jan 31 07:06:54 compute-2 sudo[75223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:54 compute-2 sudo[75223]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:54 compute-2 sudo[75248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:54 compute-2 sudo[75248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:54 compute-2 sudo[75248]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:54 compute-2 sudo[75273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:06:54 compute-2 sudo[75273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:54 compute-2 sudo[75273]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:54 compute-2 sudo[75298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:54 compute-2 sudo[75298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:54 compute-2 sudo[75298]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:54 compute-2 sudo[75323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.conf.new
Jan 31 07:06:54 compute-2 sudo[75323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:54 compute-2 sudo[75323]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:54 compute-2 sudo[75371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:54 compute-2 sudo[75371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:54 compute-2 sudo[75371]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:54 compute-2 sudo[75396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.conf.new
Jan 31 07:06:54 compute-2 sudo[75396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:54 compute-2 sudo[75396]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:54 compute-2 sudo[75421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:54 compute-2 sudo[75421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:54 compute-2 sudo[75421]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:54 compute-2 sudo[75446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.conf.new
Jan 31 07:06:54 compute-2 sudo[75446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:54 compute-2 sudo[75446]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:54 compute-2 sudo[75471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:54 compute-2 sudo[75471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:54 compute-2 sudo[75471]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:55 compute-2 sudo[75496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 31 07:06:55 compute-2 sudo[75496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:55 compute-2 sudo[75496]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:55 compute-2 sudo[75521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:55 compute-2 sudo[75521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:55 compute-2 sudo[75521]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:55 compute-2 sudo[75546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config
Jan 31 07:06:55 compute-2 sudo[75546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:55 compute-2 sudo[75546]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:55 compute-2 sudo[75571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:55 compute-2 sudo[75571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:55 compute-2 sudo[75571]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:55 compute-2 sudo[75596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config
Jan 31 07:06:55 compute-2 sudo[75596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:55 compute-2 sudo[75596]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:55 compute-2 sudo[75621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:55 compute-2 sudo[75621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:55 compute-2 sudo[75621]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:55 compute-2 sudo[75646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf.new
Jan 31 07:06:55 compute-2 sudo[75646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:55 compute-2 sudo[75646]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:55 compute-2 sudo[75671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:55 compute-2 sudo[75671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:55 compute-2 sudo[75671]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:55 compute-2 sudo[75696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:06:55 compute-2 sudo[75696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:55 compute-2 sudo[75696]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:55 compute-2 sudo[75721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:55 compute-2 sudo[75721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:55 compute-2 sudo[75721]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:55 compute-2 sudo[75746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf.new
Jan 31 07:06:55 compute-2 sudo[75746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:55 compute-2 sudo[75746]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:55 compute-2 sudo[75794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:55 compute-2 sudo[75794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:55 compute-2 sudo[75794]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:55 compute-2 sudo[75819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf.new
Jan 31 07:06:55 compute-2 sudo[75819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:55 compute-2 sudo[75819]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:55 compute-2 sudo[75844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:55 compute-2 sudo[75844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:55 compute-2 sudo[75844]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:55 compute-2 sudo[75869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf.new
Jan 31 07:06:55 compute-2 sudo[75869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:55 compute-2 sudo[75869]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:55 compute-2 sudo[75894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:55 compute-2 sudo[75894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:55 compute-2 sudo[75894]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:55 compute-2 sudo[75919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf.new /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 07:06:55 compute-2 sudo[75919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:55 compute-2 sudo[75919]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:55 compute-2 sudo[75944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:55 compute-2 sudo[75944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:55 compute-2 sudo[75944]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:55 compute-2 sudo[75969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 31 07:06:55 compute-2 sudo[75969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:55 compute-2 sudo[75969]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:55 compute-2 sudo[75994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:55 compute-2 sudo[75994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:55 compute-2 sudo[75994]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:55 compute-2 sudo[76019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph
Jan 31 07:06:55 compute-2 sudo[76019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:55 compute-2 sudo[76019]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:55 compute-2 sudo[76044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:55 compute-2 sudo[76044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:55 compute-2 sudo[76044]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:56 compute-2 sudo[76069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.client.admin.keyring.new
Jan 31 07:06:56 compute-2 sudo[76069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:56 compute-2 sudo[76069]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:56 compute-2 sudo[76094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:56 compute-2 sudo[76094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:56 compute-2 sudo[76094]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:56 compute-2 sudo[76119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:06:56 compute-2 sudo[76119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:56 compute-2 sudo[76119]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:56 compute-2 sudo[76144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:56 compute-2 sudo[76144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:56 compute-2 sudo[76144]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:56 compute-2 sudo[76169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.client.admin.keyring.new
Jan 31 07:06:56 compute-2 sudo[76169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:56 compute-2 sudo[76169]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:56 compute-2 sudo[76217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:56 compute-2 sudo[76217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:56 compute-2 sudo[76217]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:56 compute-2 sudo[76242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.client.admin.keyring.new
Jan 31 07:06:56 compute-2 sudo[76242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:56 compute-2 sudo[76242]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:56 compute-2 sudo[76267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:56 compute-2 sudo[76267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:56 compute-2 sudo[76267]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:56 compute-2 sudo[76292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.client.admin.keyring.new
Jan 31 07:06:56 compute-2 sudo[76292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:56 compute-2 sudo[76292]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:56 compute-2 sudo[76317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:56 compute-2 sudo[76317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:56 compute-2 sudo[76317]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:56 compute-2 sudo[76342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 31 07:06:56 compute-2 sudo[76342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:56 compute-2 sudo[76342]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:56 compute-2 sudo[76367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:56 compute-2 sudo[76367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:56 compute-2 sudo[76367]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:56 compute-2 sudo[76392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config
Jan 31 07:06:56 compute-2 sudo[76392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:56 compute-2 sudo[76392]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:56 compute-2 sudo[76417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:56 compute-2 sudo[76417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:56 compute-2 sudo[76417]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:56 compute-2 sudo[76442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config
Jan 31 07:06:56 compute-2 sudo[76442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:56 compute-2 sudo[76442]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:56 compute-2 sudo[76467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:56 compute-2 sudo[76467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:56 compute-2 sudo[76467]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:56 compute-2 sudo[76492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.client.admin.keyring.new
Jan 31 07:06:56 compute-2 sudo[76492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:56 compute-2 sudo[76492]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:56 compute-2 sudo[76517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:56 compute-2 sudo[76517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:56 compute-2 sudo[76517]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:56 compute-2 sudo[76542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:06:56 compute-2 sudo[76542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:56 compute-2 sudo[76542]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:56 compute-2 sudo[76567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:56 compute-2 sudo[76567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:56 compute-2 sudo[76567]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:56 compute-2 sudo[76592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.client.admin.keyring.new
Jan 31 07:06:56 compute-2 sudo[76592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:56 compute-2 sudo[76592]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:57 compute-2 sudo[76640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:57 compute-2 sudo[76640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:57 compute-2 sudo[76640]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:57 compute-2 sudo[76665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.client.admin.keyring.new
Jan 31 07:06:57 compute-2 sudo[76665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:57 compute-2 sudo[76665]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:57 compute-2 sudo[76690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:57 compute-2 sudo[76690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:57 compute-2 sudo[76690]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:57 compute-2 sudo[76715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.client.admin.keyring.new
Jan 31 07:06:57 compute-2 sudo[76715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:57 compute-2 sudo[76715]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:57 compute-2 sudo[76740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:57 compute-2 sudo[76740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:57 compute-2 sudo[76740]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:57 compute-2 sudo[76765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.client.admin.keyring.new /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.client.admin.keyring
Jan 31 07:06:57 compute-2 sudo[76765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:57 compute-2 sudo[76765]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:57 compute-2 sudo[76790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:57 compute-2 sudo[76790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:57 compute-2 sudo[76790]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:57 compute-2 sudo[76815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:06:57 compute-2 sudo[76815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:57 compute-2 sudo[76815]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:57 compute-2 sudo[76840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:57 compute-2 sudo[76840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:57 compute-2 sudo[76840]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:57 compute-2 sudo[76865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:06:57 compute-2 sudo[76865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:57 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:06:57 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:06:57 compute-2 podman[76930]: 2026-01-31 07:06:57.750785594 +0000 UTC m=+0.037919837 container create 2a4509caa32c4169084e0184e311d551d174b94b58528e9c2b432b4d4f596a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 07:06:57 compute-2 systemd[1]: Started libpod-conmon-2a4509caa32c4169084e0184e311d551d174b94b58528e9c2b432b4d4f596a6f.scope.
Jan 31 07:06:57 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:06:57 compute-2 podman[76930]: 2026-01-31 07:06:57.801161139 +0000 UTC m=+0.088295412 container init 2a4509caa32c4169084e0184e311d551d174b94b58528e9c2b432b4d4f596a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shannon, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:06:57 compute-2 podman[76930]: 2026-01-31 07:06:57.806834612 +0000 UTC m=+0.093968865 container start 2a4509caa32c4169084e0184e311d551d174b94b58528e9c2b432b4d4f596a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 07:06:57 compute-2 podman[76930]: 2026-01-31 07:06:57.809679319 +0000 UTC m=+0.096813652 container attach 2a4509caa32c4169084e0184e311d551d174b94b58528e9c2b432b4d4f596a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:06:57 compute-2 sad_shannon[76946]: 167 167
Jan 31 07:06:57 compute-2 systemd[1]: libpod-2a4509caa32c4169084e0184e311d551d174b94b58528e9c2b432b4d4f596a6f.scope: Deactivated successfully.
Jan 31 07:06:57 compute-2 podman[76930]: 2026-01-31 07:06:57.810752979 +0000 UTC m=+0.097887232 container died 2a4509caa32c4169084e0184e311d551d174b94b58528e9c2b432b4d4f596a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shannon, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:06:57 compute-2 podman[76930]: 2026-01-31 07:06:57.738288276 +0000 UTC m=+0.025422529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:06:57 compute-2 podman[76930]: 2026-01-31 07:06:57.851473801 +0000 UTC m=+0.138608054 container remove 2a4509caa32c4169084e0184e311d551d174b94b58528e9c2b432b4d4f596a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shannon, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:06:57 compute-2 systemd[1]: libpod-conmon-2a4509caa32c4169084e0184e311d551d174b94b58528e9c2b432b4d4f596a6f.scope: Deactivated successfully.
Jan 31 07:06:57 compute-2 podman[76968]: 2026-01-31 07:06:57.905147015 +0000 UTC m=+0.034773192 container create 74fb29425816cc878c61d6dd3b9c2d286ee738dbbbf601b21ade853bb5b3587e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:06:57 compute-2 systemd[1]: Started libpod-conmon-74fb29425816cc878c61d6dd3b9c2d286ee738dbbbf601b21ade853bb5b3587e.scope.
Jan 31 07:06:57 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:06:57 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dd5723fa46967e587a384500ccee2c23396045e553c34be6c346e389729e3b4/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:57 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dd5723fa46967e587a384500ccee2c23396045e553c34be6c346e389729e3b4/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:57 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dd5723fa46967e587a384500ccee2c23396045e553c34be6c346e389729e3b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:57 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dd5723fa46967e587a384500ccee2c23396045e553c34be6c346e389729e3b4/merged/var/lib/ceph/mon/ceph-compute-2 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:57 compute-2 podman[76968]: 2026-01-31 07:06:57.964831593 +0000 UTC m=+0.094457770 container init 74fb29425816cc878c61d6dd3b9c2d286ee738dbbbf601b21ade853bb5b3587e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_diffie, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Jan 31 07:06:57 compute-2 podman[76968]: 2026-01-31 07:06:57.970484126 +0000 UTC m=+0.100110303 container start 74fb29425816cc878c61d6dd3b9c2d286ee738dbbbf601b21ade853bb5b3587e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_diffie, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 07:06:57 compute-2 podman[76968]: 2026-01-31 07:06:57.973806846 +0000 UTC m=+0.103433023 container attach 74fb29425816cc878c61d6dd3b9c2d286ee738dbbbf601b21ade853bb5b3587e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_diffie, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:06:57 compute-2 podman[76968]: 2026-01-31 07:06:57.888177036 +0000 UTC m=+0.017803233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:06:58 compute-2 systemd[1]: libpod-74fb29425816cc878c61d6dd3b9c2d286ee738dbbbf601b21ade853bb5b3587e.scope: Deactivated successfully.
Jan 31 07:06:58 compute-2 podman[76968]: 2026-01-31 07:06:58.030176412 +0000 UTC m=+0.159802589 container died 74fb29425816cc878c61d6dd3b9c2d286ee738dbbbf601b21ade853bb5b3587e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:06:58 compute-2 podman[76968]: 2026-01-31 07:06:58.07143024 +0000 UTC m=+0.201056417 container remove 74fb29425816cc878c61d6dd3b9c2d286ee738dbbbf601b21ade853bb5b3587e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Jan 31 07:06:58 compute-2 systemd[1]: libpod-conmon-74fb29425816cc878c61d6dd3b9c2d286ee738dbbbf601b21ade853bb5b3587e.scope: Deactivated successfully.
Jan 31 07:06:58 compute-2 systemd[1]: Reloading.
Jan 31 07:06:58 compute-2 systemd-sysv-generator[77052]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:06:58 compute-2 systemd-rc-local-generator[77047]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:06:58 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:06:58 compute-2 systemd[1]: Reloading.
Jan 31 07:06:58 compute-2 systemd-sysv-generator[77088]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:06:58 compute-2 systemd-rc-local-generator[77083]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:06:58 compute-2 systemd[1]: Reached target All Ceph clusters and services.
Jan 31 07:06:58 compute-2 systemd[1]: Reloading.
Jan 31 07:06:58 compute-2 systemd-rc-local-generator[77125]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:06:58 compute-2 systemd-sysv-generator[77128]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:06:58 compute-2 systemd[1]: Reached target Ceph cluster f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 07:06:58 compute-2 systemd[1]: Reloading.
Jan 31 07:06:58 compute-2 systemd-rc-local-generator[77166]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:06:58 compute-2 systemd-sysv-generator[77169]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:06:58 compute-2 systemd[1]: Reloading.
Jan 31 07:06:58 compute-2 systemd-rc-local-generator[77202]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:06:58 compute-2 systemd-sysv-generator[77205]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:06:59 compute-2 systemd[1]: Created slice Slice /system/ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 07:06:59 compute-2 systemd[1]: Reached target System Time Set.
Jan 31 07:06:59 compute-2 systemd[1]: Reached target System Time Synchronized.
Jan 31 07:06:59 compute-2 systemd[1]: Starting Ceph mon.compute-2 for f70fcd2a-dcb4-5f89-a4ba-79a09959083b...
Jan 31 07:06:59 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:06:59 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:06:59 compute-2 podman[77262]: 2026-01-31 07:06:59.281841907 +0000 UTC m=+0.035230136 container create 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:06:59 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c225dcc0e11b569034386cf9423fcd39e2046673b1d7cba8ccb3d55db4dd900/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:59 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c225dcc0e11b569034386cf9423fcd39e2046673b1d7cba8ccb3d55db4dd900/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:59 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c225dcc0e11b569034386cf9423fcd39e2046673b1d7cba8ccb3d55db4dd900/merged/var/lib/ceph/mon/ceph-compute-2 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:59 compute-2 podman[77262]: 2026-01-31 07:06:59.333892006 +0000 UTC m=+0.087280255 container init 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:06:59 compute-2 podman[77262]: 2026-01-31 07:06:59.337810643 +0000 UTC m=+0.091198872 container start 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 07:06:59 compute-2 bash[77262]: 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67
Jan 31 07:06:59 compute-2 podman[77262]: 2026-01-31 07:06:59.265790072 +0000 UTC m=+0.019178321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:06:59 compute-2 systemd[1]: Started Ceph mon.compute-2 for f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 07:06:59 compute-2 ceph-mon[77282]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 07:06:59 compute-2 ceph-mon[77282]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pidfile_write: ignore empty --pid-file
Jan 31 07:06:59 compute-2 ceph-mon[77282]: load: jerasure load: lrc 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: RocksDB version: 7.9.2
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: Git sha 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: DB SUMMARY
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: DB Session ID:  XEFS1N7FWZKLAXQ54AIG
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: CURRENT file:  CURRENT
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-2/store.db dir, Total Num: 0, files: 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-2/store.db: 000004.log size: 511 ; 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                         Options.error_if_exists: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                       Options.create_if_missing: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                                     Options.env: 0x559ef1a7ec40
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                                Options.info_log: 0x559ef3d26fc0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                              Options.statistics: (nil)
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                               Options.use_fsync: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                              Options.db_log_dir: 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                                 Options.wal_dir: 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                    Options.write_buffer_manager: 0x559ef3d36b40
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                  Options.unordered_write: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                               Options.row_cache: None
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                              Options.wal_filter: None
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:             Options.two_write_queues: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:             Options.wal_compression: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:             Options.atomic_flush: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:             Options.max_background_jobs: 2
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:             Options.max_background_compactions: -1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:             Options.max_subcompactions: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:             Options.max_total_wal_size: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                          Options.max_open_files: -1
Jan 31 07:06:59 compute-2 sudo[76865]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:       Options.compaction_readahead_size: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: Compression algorithms supported:
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:         kZSTD supported: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:         kXpressCompression supported: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:         kBZip2Compression supported: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:         kLZ4Compression supported: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:         kZlibCompression supported: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:         kSnappyCompression supported: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-2/store.db/MANIFEST-000005
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:           Options.merge_operator: 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:        Options.compaction_filter: None
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559ef3d26c00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559ef3d1f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:        Options.write_buffer_size: 33554432
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:  Options.max_write_buffer_number: 2
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:          Options.compression: NoCompression
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:             Options.num_levels: 7
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-2/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5168d376-4658-426b-b196-7d1a850c0693
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843219378743, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843219380390, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843219380485, "job": 1, "event": "recovery_finished"}
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559ef3d48e00
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: DB pointer 0x559ef3dd2000
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:06:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559ef3d1f1f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 07:06:59 compute-2 ceph-mon[77282]: mon.compute-2 does not exist in monmap, will attempt to join an existing cluster
Jan 31 07:06:59 compute-2 ceph-mon[77282]: using public_addr v2:192.168.122.102:0/0 -> [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]
Jan 31 07:06:59 compute-2 ceph-mon[77282]: starting mon.compute-2 rank -1 at public addrs [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] at bind addrs [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-2 fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:06:59 compute-2 ceph-mon[77282]: mon.compute-2@-1(???) e0 preinit fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:06:59 compute-2 ceph-mon[77282]: mon.compute-2@-1(synchronizing).mds e1 new map
Jan 31 07:06:59 compute-2 ceph-mon[77282]: mon.compute-2@-1(synchronizing).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 31 07:06:59 compute-2 ceph-mon[77282]: mon.compute-2@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 31 07:06:59 compute-2 ceph-mon[77282]: mon.compute-2@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: mon.compute-2@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Jan 31 07:06:59 compute-2 ceph-mon[77282]: mon.compute-2@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Jan 31 07:06:59 compute-2 ceph-mon[77282]: mon.compute-2@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Jan 31 07:06:59 compute-2 ceph-mon[77282]: mon.compute-2@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Jan 31 07:06:59 compute-2 ceph-mon[77282]: mon.compute-2@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Jan 31 07:06:59 compute-2 ceph-mon[77282]: mon.compute-2@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Jan 31 07:06:59 compute-2 ceph-mon[77282]: mon.compute-2@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Jan 31 07:06:59 compute-2 ceph-mon[77282]: mon.compute-2@-1(synchronizing).osd e8 e8: 2 total, 0 up, 2 in
Jan 31 07:06:59 compute-2 ceph-mon[77282]: mon.compute-2@-1(synchronizing).osd e9 e9: 2 total, 1 up, 2 in
Jan 31 07:06:59 compute-2 ceph-mon[77282]: mon.compute-2@-1(synchronizing).osd e10 e10: 2 total, 1 up, 2 in
Jan 31 07:06:59 compute-2 ceph-mon[77282]: mon.compute-2@-1(synchronizing).osd e11 e11: 2 total, 1 up, 2 in
Jan 31 07:06:59 compute-2 ceph-mon[77282]: mon.compute-2@-1(synchronizing).osd e12 e12: 2 total, 2 up, 2 in
Jan 31 07:06:59 compute-2 ceph-mon[77282]: mon.compute-2@-1(synchronizing).osd e13 e13: 2 total, 2 up, 2 in
Jan 31 07:06:59 compute-2 ceph-mon[77282]: mon.compute-2@-1(synchronizing).osd e14 e14: 2 total, 2 up, 2 in
Jan 31 07:06:59 compute-2 ceph-mon[77282]: mon.compute-2@-1(synchronizing).osd e14 crush map has features 3314933000852226048, adjusting msgr requires
Jan 31 07:06:59 compute-2 ceph-mon[77282]: mon.compute-2@-1(synchronizing).osd e14 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 07:06:59 compute-2 ceph-mon[77282]: mon.compute-2@-1(synchronizing).osd e14 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 07:06:59 compute-2 ceph-mon[77282]: mon.compute-2@-1(synchronizing).osd e14 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: Updating compute-1:/etc/ceph/ceph.conf
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: Updating compute-1:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 07:06:59 compute-2 ceph-mon[77282]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: Updating compute-1:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.client.admin.keyring
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: Deploying daemon crash.compute-1 on compute-1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2157664835' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d19aa227-e399-4341-9824-b20a6ddbc903"}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2157664835' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d19aa227-e399-4341-9824-b20a6ddbc903"}]': finished
Jan 31 07:06:59 compute-2 ceph-mon[77282]: osdmap e4: 1 total, 0 up, 1 in
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2276134975' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ea64e94f-e08a-40f8-8a79-f0fb7a401afe"}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2276134975' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ea64e94f-e08a-40f8-8a79-f0fb7a401afe"}]': finished
Jan 31 07:06:59 compute-2 ceph-mon[77282]: osdmap e5: 2 total, 0 up, 2 in
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2504108929' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/779569343' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: Deploying daemon osd.0 on compute-0
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: Deploying daemon osd.1 on compute-1
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='osd.0 [v2:192.168.122.100:6802/3651846438,v1:192.168.122.100:6803/3651846438]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='osd.0 [v2:192.168.122.100:6802/3651846438,v1:192.168.122.100:6803/3651846438]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 31 07:06:59 compute-2 ceph-mon[77282]: osdmap e6: 2 total, 0 up, 2 in
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='osd.0 [v2:192.168.122.100:6802/3651846438,v1:192.168.122.100:6803/3651846438]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='osd.1 [v2:192.168.122.101:6800/2099563795,v1:192.168.122.101:6801/2099563795]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4156760717' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='osd.0 [v2:192.168.122.100:6802/3651846438,v1:192.168.122.100:6803/3651846438]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='osd.1 [v2:192.168.122.101:6800/2099563795,v1:192.168.122.101:6801/2099563795]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 31 07:06:59 compute-2 ceph-mon[77282]: osdmap e7: 2 total, 0 up, 2 in
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='osd.1 [v2:192.168.122.101:6800/2099563795,v1:192.168.122.101:6801/2099563795]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='osd.1 [v2:192.168.122.101:6800/2099563795,v1:192.168.122.101:6801/2099563795]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Jan 31 07:06:59 compute-2 ceph-mon[77282]: osdmap e8: 2 total, 0 up, 2 in
Jan 31 07:06:59 compute-2 ceph-mon[77282]: purged_snaps scrub starts
Jan 31 07:06:59 compute-2 ceph-mon[77282]: purged_snaps scrub ok
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: purged_snaps scrub starts
Jan 31 07:06:59 compute-2 ceph-mon[77282]: purged_snaps scrub ok
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: Adjusting osd_memory_target on compute-0 to 127.9M
Jan 31 07:06:59 compute-2 ceph-mon[77282]: Unable to set osd_memory_target on compute-0 to 134197657: error parsing value: Value '134197657' is below minimum 939524096
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: OSD bench result of 1880.086077 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: osd.0 [v2:192.168.122.100:6802/3651846438,v1:192.168.122.100:6803/3651846438] boot
Jan 31 07:06:59 compute-2 ceph-mon[77282]: osdmap e9: 2 total, 1 up, 2 in
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 31 07:06:59 compute-2 ceph-mon[77282]: osdmap e10: 2 total, 1 up, 2 in
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: Adjusting osd_memory_target on compute-1 to  5247M
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 31 07:06:59 compute-2 ceph-mon[77282]: osdmap e11: 2 total, 1 up, 2 in
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v43: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: OSD bench result of 8461.932853 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 07:06:59 compute-2 ceph-mon[77282]: osd.1 [v2:192.168.122.101:6800/2099563795,v1:192.168.122.101:6801/2099563795] boot
Jan 31 07:06:59 compute-2 ceph-mon[77282]: osdmap e12: 2 total, 2 up, 2 in
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: osdmap e13: 2 total, 2 up, 2 in
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v46: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: mgrmap e9: compute-0.hhuoua(active, since 82s)
Jan 31 07:06:59 compute-2 ceph-mon[77282]: osdmap e14: 2 total, 2 up, 2 in
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v48: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v49: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: Updating compute-2:/etc/ceph/ceph.conf
Jan 31 07:06:59 compute-2 ceph-mon[77282]: Updating compute-2:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 31 07:06:59 compute-2 ceph-mon[77282]: Updating compute-2:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.client.admin.keyring
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:06:59 compute-2 ceph-mon[77282]: Deploying daemon mon.compute-2 on compute-2
Jan 31 07:06:59 compute-2 ceph-mon[77282]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 31 07:06:59 compute-2 ceph-mon[77282]: Cluster is now healthy
Jan 31 07:06:59 compute-2 ceph-mon[77282]: mon.compute-2@-1(synchronizing).paxosservice(auth 1..7) refresh upgraded, format 0 -> 3
Jan 31 07:07:01 compute-2 ceph-mon[77282]: mon.compute-2@-1(probing) e2  my rank is now 1 (was -1)
Jan 31 07:07:01 compute-2 ceph-mon[77282]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Jan 31 07:07:01 compute-2 ceph-mon[77282]: paxos.1).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 31 07:07:01 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:07:02 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 31 07:07:03 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 31 07:07:04 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:07:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e2 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 31 07:07:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e2 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Jan 31 07:07:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:07:04 compute-2 ceph-mon[77282]: mgrc update_daemon_metadata mon.compute-2 metadata {addrs=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-2,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,created_at=2026-01-31T07:06:58.003662Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-2,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864300,os=Linux}
Jan 31 07:07:04 compute-2 ceph-mon[77282]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:04 compute-2 ceph-mon[77282]: Deploying daemon mon.compute-1 on compute-1
Jan 31 07:07:04 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 31 07:07:04 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 07:07:04 compute-2 ceph-mon[77282]: mon.compute-0 calling monitor election
Jan 31 07:07:04 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 07:07:04 compute-2 ceph-mon[77282]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:04 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 07:07:04 compute-2 ceph-mon[77282]: mon.compute-2 calling monitor election
Jan 31 07:07:04 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 07:07:04 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 07:07:04 compute-2 ceph-mon[77282]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:04 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 07:07:04 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 07:07:04 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 07:07:04 compute-2 ceph-mon[77282]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 31 07:07:04 compute-2 ceph-mon[77282]: monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 07:07:04 compute-2 ceph-mon[77282]: fsmap 
Jan 31 07:07:04 compute-2 ceph-mon[77282]: osdmap e14: 2 total, 2 up, 2 in
Jan 31 07:07:04 compute-2 ceph-mon[77282]: mgrmap e9: compute-0.hhuoua(active, since 104s)
Jan 31 07:07:04 compute-2 ceph-mon[77282]: overall HEALTH_OK
Jan 31 07:07:04 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:04 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:04 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:04 compute-2 sudo[77321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:04 compute-2 sudo[77321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:04 compute-2 sudo[77321]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:04 compute-2 sudo[77346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:07:04 compute-2 sudo[77346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:04 compute-2 sudo[77346]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e2 handle_auth_request failed to assign global_id
Jan 31 07:07:04 compute-2 sudo[77371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:04 compute-2 sudo[77371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:04 compute-2 sudo[77371]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:04 compute-2 sudo[77396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:07:04 compute-2 sudo[77396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 31 07:07:05 compute-2 podman[77462]: 2026-01-31 07:07:05.026914115 +0000 UTC m=+0.017653349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:05 compute-2 podman[77462]: 2026-01-31 07:07:05.198269067 +0000 UTC m=+0.189008271 container create 01acacde81aa0154e5e4164c464be579cacf8944b1ca9bc7a26c2c6736295701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 07:07:05 compute-2 ceph-mon[77282]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Jan 31 07:07:05 compute-2 ceph-mon[77282]: paxos.1).electionLogic(10) init, last seen epoch 10
Jan 31 07:07:05 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:07:05 compute-2 systemd[1]: Started libpod-conmon-01acacde81aa0154e5e4164c464be579cacf8944b1ca9bc7a26c2c6736295701.scope.
Jan 31 07:07:05 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:07:05 compute-2 podman[77462]: 2026-01-31 07:07:05.289388605 +0000 UTC m=+0.280127829 container init 01acacde81aa0154e5e4164c464be579cacf8944b1ca9bc7a26c2c6736295701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chatterjee, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Jan 31 07:07:05 compute-2 podman[77462]: 2026-01-31 07:07:05.297136165 +0000 UTC m=+0.287875369 container start 01acacde81aa0154e5e4164c464be579cacf8944b1ca9bc7a26c2c6736295701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chatterjee, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 07:07:05 compute-2 podman[77462]: 2026-01-31 07:07:05.302147701 +0000 UTC m=+0.292886905 container attach 01acacde81aa0154e5e4164c464be579cacf8944b1ca9bc7a26c2c6736295701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 07:07:05 compute-2 compassionate_chatterjee[77478]: 167 167
Jan 31 07:07:05 compute-2 systemd[1]: libpod-01acacde81aa0154e5e4164c464be579cacf8944b1ca9bc7a26c2c6736295701.scope: Deactivated successfully.
Jan 31 07:07:05 compute-2 conmon[77478]: conmon 01acacde81aa0154e5e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-01acacde81aa0154e5e4164c464be579cacf8944b1ca9bc7a26c2c6736295701.scope/container/memory.events
Jan 31 07:07:05 compute-2 podman[77462]: 2026-01-31 07:07:05.305543812 +0000 UTC m=+0.296283096 container died 01acacde81aa0154e5e4164c464be579cacf8944b1ca9bc7a26c2c6736295701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chatterjee, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:07:05 compute-2 systemd[1]: var-lib-containers-storage-overlay-0f0ebbb102fa824561390234fb6d2d8a2c326e5535dfe1d5161deb7f4fb59d7e-merged.mount: Deactivated successfully.
Jan 31 07:07:05 compute-2 podman[77462]: 2026-01-31 07:07:05.356335688 +0000 UTC m=+0.347074932 container remove 01acacde81aa0154e5e4164c464be579cacf8944b1ca9bc7a26c2c6736295701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chatterjee, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 07:07:05 compute-2 systemd[1]: libpod-conmon-01acacde81aa0154e5e4164c464be579cacf8944b1ca9bc7a26c2c6736295701.scope: Deactivated successfully.
Jan 31 07:07:05 compute-2 systemd[1]: Reloading.
Jan 31 07:07:05 compute-2 systemd-rc-local-generator[77521]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:07:05 compute-2 systemd-sysv-generator[77524]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:07:05 compute-2 systemd[1]: Reloading.
Jan 31 07:07:05 compute-2 systemd-rc-local-generator[77560]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:07:05 compute-2 systemd-sysv-generator[77564]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:07:05 compute-2 systemd[1]: Starting Ceph mgr.compute-2.wmgest for f70fcd2a-dcb4-5f89-a4ba-79a09959083b...
Jan 31 07:07:06 compute-2 podman[77617]: 2026-01-31 07:07:06.00099843 +0000 UTC m=+0.035664466 container create 3f61f2a28d52a782b18d66b3516ca27bbc13d94ef2f0754ed4c51e0290b44e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:07:06 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e8821f4f7c094aa6da80e69cbb6df1840df2a1335cef04f7c5fed94a834315d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:06 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e8821f4f7c094aa6da80e69cbb6df1840df2a1335cef04f7c5fed94a834315d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:06 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e8821f4f7c094aa6da80e69cbb6df1840df2a1335cef04f7c5fed94a834315d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:06 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e8821f4f7c094aa6da80e69cbb6df1840df2a1335cef04f7c5fed94a834315d/merged/var/lib/ceph/mgr/ceph-compute-2.wmgest supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:06 compute-2 podman[77617]: 2026-01-31 07:07:06.059221977 +0000 UTC m=+0.093888023 container init 3f61f2a28d52a782b18d66b3516ca27bbc13d94ef2f0754ed4c51e0290b44e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 07:07:06 compute-2 podman[77617]: 2026-01-31 07:07:06.06225441 +0000 UTC m=+0.096920436 container start 3f61f2a28d52a782b18d66b3516ca27bbc13d94ef2f0754ed4c51e0290b44e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 07:07:06 compute-2 bash[77617]: 3f61f2a28d52a782b18d66b3516ca27bbc13d94ef2f0754ed4c51e0290b44e49
Jan 31 07:07:06 compute-2 podman[77617]: 2026-01-31 07:07:05.983121016 +0000 UTC m=+0.017787072 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:06 compute-2 systemd[1]: Started Ceph mgr.compute-2.wmgest for f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 07:07:06 compute-2 sudo[77396]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:06 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:07:06 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:07:06 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:07:07 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:07:09 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:07:09 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:07:09 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:07:10 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:07:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:07:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_auth_request failed to assign global_id
Jan 31 07:07:10 compute-2 ceph-mgr[77635]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 07:07:10 compute-2 ceph-mgr[77635]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Jan 31 07:07:10 compute-2 ceph-mgr[77635]: pidfile_write: ignore empty --pid-file
Jan 31 07:07:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_auth_request failed to assign global_id
Jan 31 07:07:10 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'alerts'
Jan 31 07:07:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 31 07:07:10 compute-2 ceph-mon[77282]: mon.compute-0 calling monitor election
Jan 31 07:07:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 07:07:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 07:07:10 compute-2 ceph-mon[77282]: mon.compute-2 calling monitor election
Jan 31 07:07:10 compute-2 ceph-mon[77282]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 07:07:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 07:07:10 compute-2 ceph-mon[77282]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 07:07:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 07:07:10 compute-2 ceph-mon[77282]: pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 07:07:10 compute-2 ceph-mon[77282]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 07:07:10 compute-2 ceph-mon[77282]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 07:07:10 compute-2 ceph-mon[77282]: fsmap 
Jan 31 07:07:10 compute-2 ceph-mon[77282]: osdmap e14: 2 total, 2 up, 2 in
Jan 31 07:07:10 compute-2 ceph-mon[77282]: mgrmap e9: compute-0.hhuoua(active, since 110s)
Jan 31 07:07:10 compute-2 ceph-mon[77282]: overall HEALTH_OK
Jan 31 07:07:10 compute-2 ceph-mgr[77635]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 31 07:07:10 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'balancer'
Jan 31 07:07:10 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest[77631]: 2026-01-31T07:07:10.962+0000 7f13c6562140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 31 07:07:11 compute-2 ceph-mgr[77635]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 31 07:07:11 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'cephadm'
Jan 31 07:07:11 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest[77631]: 2026-01-31T07:07:11.222+0000 7f13c6562140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 31 07:07:11 compute-2 ceph-mon[77282]: mon.compute-1 calling monitor election
Jan 31 07:07:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1344602823' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 07:07:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.hodsiu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 07:07:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.hodsiu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 31 07:07:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 07:07:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 07:07:12 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e15 e15: 2 total, 2 up, 2 in
Jan 31 07:07:12 compute-2 ceph-mon[77282]: Deploying daemon mgr.compute-1.hodsiu on compute-1
Jan 31 07:07:12 compute-2 ceph-mon[77282]: pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1344602823' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:07:12 compute-2 ceph-mon[77282]: osdmap e15: 2 total, 2 up, 2 in
Jan 31 07:07:13 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'crash'
Jan 31 07:07:13 compute-2 sudo[77671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:13 compute-2 sudo[77671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:13 compute-2 sudo[77671]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:13 compute-2 sudo[77696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:07:13 compute-2 sudo[77696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:13 compute-2 sudo[77696]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:13 compute-2 sudo[77721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:13 compute-2 sudo[77721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:13 compute-2 sudo[77721]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:13 compute-2 ceph-mgr[77635]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 31 07:07:13 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'dashboard'
Jan 31 07:07:13 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest[77631]: 2026-01-31T07:07:13.539+0000 7f13c6562140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 31 07:07:13 compute-2 sudo[77746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:07:13 compute-2 sudo[77746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e16 e16: 2 total, 2 up, 2 in
Jan 31 07:07:13 compute-2 podman[77810]: 2026-01-31 07:07:13.873633369 +0000 UTC m=+0.032720437 container create 6a84380d80a3d0e0ee4fc22f46b59af440f88c4237b9868c3c1901a1d5d94af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 07:07:13 compute-2 systemd[1]: Started libpod-conmon-6a84380d80a3d0e0ee4fc22f46b59af440f88c4237b9868c3c1901a1d5d94af7.scope.
Jan 31 07:07:13 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:07:13 compute-2 podman[77810]: 2026-01-31 07:07:13.943962584 +0000 UTC m=+0.103049652 container init 6a84380d80a3d0e0ee4fc22f46b59af440f88c4237b9868c3c1901a1d5d94af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 07:07:13 compute-2 podman[77810]: 2026-01-31 07:07:13.949435652 +0000 UTC m=+0.108522720 container start 6a84380d80a3d0e0ee4fc22f46b59af440f88c4237b9868c3c1901a1d5d94af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:07:13 compute-2 podman[77810]: 2026-01-31 07:07:13.953103062 +0000 UTC m=+0.112190230 container attach 6a84380d80a3d0e0ee4fc22f46b59af440f88c4237b9868c3c1901a1d5d94af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:07:13 compute-2 dreamy_williamson[77826]: 167 167
Jan 31 07:07:13 compute-2 podman[77810]: 2026-01-31 07:07:13.953706218 +0000 UTC m=+0.112793286 container died 6a84380d80a3d0e0ee4fc22f46b59af440f88c4237b9868c3c1901a1d5d94af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 07:07:13 compute-2 systemd[1]: libpod-6a84380d80a3d0e0ee4fc22f46b59af440f88c4237b9868c3c1901a1d5d94af7.scope: Deactivated successfully.
Jan 31 07:07:13 compute-2 podman[77810]: 2026-01-31 07:07:13.858399456 +0000 UTC m=+0.017486544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:13 compute-2 systemd[1]: var-lib-containers-storage-overlay-bebbf939b77e24375a8512c97f3bb865d8e98047a7e10d2affc2faee9d8125c9-merged.mount: Deactivated successfully.
Jan 31 07:07:13 compute-2 podman[77810]: 2026-01-31 07:07:13.980660038 +0000 UTC m=+0.139747096 container remove 6a84380d80a3d0e0ee4fc22f46b59af440f88c4237b9868c3c1901a1d5d94af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:07:13 compute-2 systemd[1]: libpod-conmon-6a84380d80a3d0e0ee4fc22f46b59af440f88c4237b9868c3c1901a1d5d94af7.scope: Deactivated successfully.
Jan 31 07:07:14 compute-2 systemd[1]: Reloading.
Jan 31 07:07:14 compute-2 systemd-rc-local-generator[77867]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:07:14 compute-2 systemd-sysv-generator[77875]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:07:14 compute-2 systemd[1]: Reloading.
Jan 31 07:07:14 compute-2 systemd-sysv-generator[77914]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:07:14 compute-2 systemd-rc-local-generator[77910]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:07:14 compute-2 ceph-mon[77282]: pgmap v65: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4258413547' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 07:07:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 07:07:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 07:07:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:14 compute-2 ceph-mon[77282]: Deploying daemon crash.compute-2 on compute-2
Jan 31 07:07:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4258413547' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:07:14 compute-2 ceph-mon[77282]: osdmap e16: 2 total, 2 up, 2 in
Jan 31 07:07:14 compute-2 ceph-mon[77282]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 07:07:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e16 _set_new_cache_sizes cache_size:1019935894 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:14 compute-2 systemd[1]: Starting Ceph crash.compute-2 for f70fcd2a-dcb4-5f89-a4ba-79a09959083b...
Jan 31 07:07:14 compute-2 podman[77967]: 2026-01-31 07:07:14.614867017 +0000 UTC m=+0.036739216 container create 7d6c90b37bdf1ca2b6fd92e29946740578ca2bca85f3a9d3d898bc025f7cf04d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:07:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecbaa9387098da6fa6e02736ea7fc61bf7dc1424f8d5ca2f7d46139aa94d6dfa/merged/etc/ceph/ceph.client.crash.compute-2.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecbaa9387098da6fa6e02736ea7fc61bf7dc1424f8d5ca2f7d46139aa94d6dfa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecbaa9387098da6fa6e02736ea7fc61bf7dc1424f8d5ca2f7d46139aa94d6dfa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecbaa9387098da6fa6e02736ea7fc61bf7dc1424f8d5ca2f7d46139aa94d6dfa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:14 compute-2 podman[77967]: 2026-01-31 07:07:14.683888607 +0000 UTC m=+0.105760836 container init 7d6c90b37bdf1ca2b6fd92e29946740578ca2bca85f3a9d3d898bc025f7cf04d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:07:14 compute-2 podman[77967]: 2026-01-31 07:07:14.687833843 +0000 UTC m=+0.109706042 container start 7d6c90b37bdf1ca2b6fd92e29946740578ca2bca85f3a9d3d898bc025f7cf04d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 07:07:14 compute-2 bash[77967]: 7d6c90b37bdf1ca2b6fd92e29946740578ca2bca85f3a9d3d898bc025f7cf04d
Jan 31 07:07:14 compute-2 podman[77967]: 2026-01-31 07:07:14.600811836 +0000 UTC m=+0.022684055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:14 compute-2 systemd[1]: Started Ceph crash.compute-2 for f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 07:07:14 compute-2 sudo[77746]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e17 e17: 2 total, 2 up, 2 in
Jan 31 07:07:14 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-2[77982]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 31 07:07:15 compute-2 sudo[77989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:15 compute-2 sudo[77989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:15 compute-2 sudo[77989]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:15 compute-2 sudo[78014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:07:15 compute-2 sudo[78014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:15 compute-2 sudo[78014]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:15 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-2[77982]: 2026-01-31T07:07:15.093+0000 7ff8c2df0640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 31 07:07:15 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-2[77982]: 2026-01-31T07:07:15.093+0000 7ff8c2df0640 -1 AuthRegistry(0x7ff8bc067150) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 31 07:07:15 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-2[77982]: 2026-01-31T07:07:15.094+0000 7ff8c2df0640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 31 07:07:15 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-2[77982]: 2026-01-31T07:07:15.094+0000 7ff8c2df0640 -1 AuthRegistry(0x7ff8c2def000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 31 07:07:15 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-2[77982]: 2026-01-31T07:07:15.095+0000 7ff8c1366640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 31 07:07:15 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-2[77982]: 2026-01-31T07:07:15.096+0000 7ff8c0b65640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 31 07:07:15 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-2[77982]: 2026-01-31T07:07:15.096+0000 7ff8bbfff640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 31 07:07:15 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-2[77982]: 2026-01-31T07:07:15.096+0000 7ff8c2df0640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 31 07:07:15 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-2[77982]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 31 07:07:15 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-2[77982]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 31 07:07:15 compute-2 sudo[78039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:15 compute-2 sudo[78039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:15 compute-2 sudo[78039]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:15 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'devicehealth'
Jan 31 07:07:15 compute-2 sudo[78074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:07:15 compute-2 sudo[78074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:15 compute-2 ceph-mgr[77635]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 31 07:07:15 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'diskprediction_local'
Jan 31 07:07:15 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest[77631]: 2026-01-31T07:07:15.438+0000 7f13c6562140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 31 07:07:15 compute-2 podman[78138]: 2026-01-31 07:07:15.472633011 +0000 UTC m=+0.034216668 container create 32a4a35c45ab64f936bb5ddf06c4b507b21876e64608e9c0a6f8729ff0e21617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ptolemy, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:07:15 compute-2 systemd[1]: Started libpod-conmon-32a4a35c45ab64f936bb5ddf06c4b507b21876e64608e9c0a6f8729ff0e21617.scope.
Jan 31 07:07:15 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:07:15 compute-2 podman[78138]: 2026-01-31 07:07:15.521899466 +0000 UTC m=+0.083483123 container init 32a4a35c45ab64f936bb5ddf06c4b507b21876e64608e9c0a6f8729ff0e21617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ptolemy, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:07:15 compute-2 podman[78138]: 2026-01-31 07:07:15.526627534 +0000 UTC m=+0.088211191 container start 32a4a35c45ab64f936bb5ddf06c4b507b21876e64608e9c0a6f8729ff0e21617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ptolemy, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 07:07:15 compute-2 podman[78138]: 2026-01-31 07:07:15.529632136 +0000 UTC m=+0.091215843 container attach 32a4a35c45ab64f936bb5ddf06c4b507b21876e64608e9c0a6f8729ff0e21617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ptolemy, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:07:15 compute-2 practical_ptolemy[78154]: 167 167
Jan 31 07:07:15 compute-2 systemd[1]: libpod-32a4a35c45ab64f936bb5ddf06c4b507b21876e64608e9c0a6f8729ff0e21617.scope: Deactivated successfully.
Jan 31 07:07:15 compute-2 podman[78138]: 2026-01-31 07:07:15.530855988 +0000 UTC m=+0.092439645 container died 32a4a35c45ab64f936bb5ddf06c4b507b21876e64608e9c0a6f8729ff0e21617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ptolemy, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 07:07:15 compute-2 systemd[1]: var-lib-containers-storage-overlay-dc3ea89f8a0df2c3acbae5fe931cfefd3fddf37ea92dce01dd42f93c5cac1dc2-merged.mount: Deactivated successfully.
Jan 31 07:07:15 compute-2 podman[78138]: 2026-01-31 07:07:15.456378712 +0000 UTC m=+0.017962399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:15 compute-2 podman[78138]: 2026-01-31 07:07:15.567671965 +0000 UTC m=+0.129255622 container remove 32a4a35c45ab64f936bb5ddf06c4b507b21876e64608e9c0a6f8729ff0e21617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 07:07:15 compute-2 systemd[1]: libpod-conmon-32a4a35c45ab64f936bb5ddf06c4b507b21876e64608e9c0a6f8729ff0e21617.scope: Deactivated successfully.
Jan 31 07:07:15 compute-2 podman[78178]: 2026-01-31 07:07:15.690131783 +0000 UTC m=+0.039648365 container create 8db4802fa30756d3a86a046a3cb031cc8bebdc90a2e58bae2597b620cf8d3a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:07:15 compute-2 systemd[1]: Started libpod-conmon-8db4802fa30756d3a86a046a3cb031cc8bebdc90a2e58bae2597b620cf8d3a3a.scope.
Jan 31 07:07:15 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:07:15 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4297154e356d2b96c4bfbdfaed347b0aa7f477a117baedceaad1955b44fd8816/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:15 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4297154e356d2b96c4bfbdfaed347b0aa7f477a117baedceaad1955b44fd8816/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:15 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4297154e356d2b96c4bfbdfaed347b0aa7f477a117baedceaad1955b44fd8816/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:15 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4297154e356d2b96c4bfbdfaed347b0aa7f477a117baedceaad1955b44fd8816/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:15 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4297154e356d2b96c4bfbdfaed347b0aa7f477a117baedceaad1955b44fd8816/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:15 compute-2 podman[78178]: 2026-01-31 07:07:15.673535113 +0000 UTC m=+0.023051715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:15 compute-2 podman[78178]: 2026-01-31 07:07:15.784681884 +0000 UTC m=+0.134198486 container init 8db4802fa30756d3a86a046a3cb031cc8bebdc90a2e58bae2597b620cf8d3a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 31 07:07:15 compute-2 podman[78178]: 2026-01-31 07:07:15.789558897 +0000 UTC m=+0.139075469 container start 8db4802fa30756d3a86a046a3cb031cc8bebdc90a2e58bae2597b620cf8d3a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moser, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 07:07:15 compute-2 podman[78178]: 2026-01-31 07:07:15.834291357 +0000 UTC m=+0.183807939 container attach 8db4802fa30756d3a86a046a3cb031cc8bebdc90a2e58bae2597b620cf8d3a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moser, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:07:15 compute-2 ceph-mon[77282]: osdmap e17: 2 total, 2 up, 2 in
Jan 31 07:07:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2911166990' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 07:07:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:07:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:07:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:07:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:16 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest[77631]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 31 07:07:16 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest[77631]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 31 07:07:16 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest[77631]:   from numpy import show_config as show_numpy_config
Jan 31 07:07:16 compute-2 ceph-mgr[77635]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 31 07:07:16 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'influx'
Jan 31 07:07:16 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest[77631]: 2026-01-31T07:07:16.028+0000 7f13c6562140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 31 07:07:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e18 e18: 2 total, 2 up, 2 in
Jan 31 07:07:16 compute-2 ceph-mgr[77635]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 31 07:07:16 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest[77631]: 2026-01-31T07:07:16.274+0000 7f13c6562140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 31 07:07:16 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'insights'
Jan 31 07:07:16 compute-2 eloquent_moser[78194]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:07:16 compute-2 eloquent_moser[78194]: --> relative data size: 1.0
Jan 31 07:07:16 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'iostat'
Jan 31 07:07:16 compute-2 eloquent_moser[78194]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:07:16 compute-2 eloquent_moser[78194]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new d108d581-3dd3-4742-941a-f201ff187649
Jan 31 07:07:16 compute-2 ceph-mgr[77635]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 31 07:07:16 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'k8sevents'
Jan 31 07:07:16 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest[77631]: 2026-01-31T07:07:16.810+0000 7f13c6562140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 31 07:07:16 compute-2 ceph-mon[77282]: pgmap v68: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2911166990' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:07:16 compute-2 ceph-mon[77282]: osdmap e18: 2 total, 2 up, 2 in
Jan 31 07:07:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd new", "uuid": "d108d581-3dd3-4742-941a-f201ff187649"} v 0) v1
Jan 31 07:07:16 compute-2 ceph-mon[77282]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/2266493088' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d108d581-3dd3-4742-941a-f201ff187649"}]: dispatch
Jan 31 07:07:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e19 e19: 3 total, 2 up, 3 in
Jan 31 07:07:17 compute-2 eloquent_moser[78194]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:07:17 compute-2 lvm[78241]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 07:07:17 compute-2 lvm[78241]: VG ceph_vg0 finished
Jan 31 07:07:17 compute-2 eloquent_moser[78194]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Jan 31 07:07:17 compute-2 eloquent_moser[78194]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 31 07:07:17 compute-2 eloquent_moser[78194]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 07:07:17 compute-2 eloquent_moser[78194]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Jan 31 07:07:17 compute-2 eloquent_moser[78194]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Jan 31 07:07:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Jan 31 07:07:17 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3536322800' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 31 07:07:17 compute-2 eloquent_moser[78194]:  stderr: got monmap epoch 3
Jan 31 07:07:17 compute-2 eloquent_moser[78194]: --> Creating keyring file for osd.2
Jan 31 07:07:17 compute-2 eloquent_moser[78194]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Jan 31 07:07:17 compute-2 eloquent_moser[78194]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Jan 31 07:07:17 compute-2 eloquent_moser[78194]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid d108d581-3dd3-4742-941a-f201ff187649 --setuser ceph --setgroup ceph
Jan 31 07:07:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2266493088' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d108d581-3dd3-4742-941a-f201ff187649"}]: dispatch
Jan 31 07:07:18 compute-2 ceph-mon[77282]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d108d581-3dd3-4742-941a-f201ff187649"}]: dispatch
Jan 31 07:07:18 compute-2 ceph-mon[77282]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d108d581-3dd3-4742-941a-f201ff187649"}]': finished
Jan 31 07:07:18 compute-2 ceph-mon[77282]: osdmap e19: 3 total, 2 up, 3 in
Jan 31 07:07:18 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:18 compute-2 ceph-mon[77282]: pgmap v71: 4 pgs: 2 active+clean, 2 unknown; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/331184130' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 07:07:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3536322800' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 31 07:07:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e20 e20: 3 total, 2 up, 3 in
Jan 31 07:07:18 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'localpool'
Jan 31 07:07:19 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'mds_autoscaler'
Jan 31 07:07:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e20 _set_new_cache_sizes cache_size:1020053304 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:19 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/331184130' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:07:19 compute-2 ceph-mon[77282]: osdmap e20: 3 total, 2 up, 3 in
Jan 31 07:07:19 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:19 compute-2 ceph-mon[77282]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 07:07:19 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'mirroring'
Jan 31 07:07:20 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'nfs'
Jan 31 07:07:20 compute-2 eloquent_moser[78194]:  stderr: 2026-01-31T07:07:17.676+0000 7f0f772a6740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 31 07:07:20 compute-2 eloquent_moser[78194]:  stderr: 2026-01-31T07:07:17.676+0000 7f0f772a6740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 31 07:07:20 compute-2 eloquent_moser[78194]:  stderr: 2026-01-31T07:07:17.676+0000 7f0f772a6740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 31 07:07:20 compute-2 eloquent_moser[78194]:  stderr: 2026-01-31T07:07:17.676+0000 7f0f772a6740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Jan 31 07:07:20 compute-2 eloquent_moser[78194]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 31 07:07:20 compute-2 eloquent_moser[78194]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 07:07:20 compute-2 eloquent_moser[78194]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 31 07:07:20 compute-2 eloquent_moser[78194]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Jan 31 07:07:20 compute-2 eloquent_moser[78194]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 31 07:07:20 compute-2 eloquent_moser[78194]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 07:07:20 compute-2 eloquent_moser[78194]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 07:07:20 compute-2 eloquent_moser[78194]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 31 07:07:20 compute-2 eloquent_moser[78194]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 31 07:07:20 compute-2 systemd[1]: libpod-8db4802fa30756d3a86a046a3cb031cc8bebdc90a2e58bae2597b620cf8d3a3a.scope: Deactivated successfully.
Jan 31 07:07:20 compute-2 systemd[1]: libpod-8db4802fa30756d3a86a046a3cb031cc8bebdc90a2e58bae2597b620cf8d3a3a.scope: Consumed 2.647s CPU time.
Jan 31 07:07:20 compute-2 podman[78178]: 2026-01-31 07:07:20.521720268 +0000 UTC m=+4.871236860 container died 8db4802fa30756d3a86a046a3cb031cc8bebdc90a2e58bae2597b620cf8d3a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 07:07:20 compute-2 ceph-mgr[77635]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 31 07:07:20 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'orchestrator'
Jan 31 07:07:20 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest[77631]: 2026-01-31T07:07:20.880+0000 7f13c6562140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 31 07:07:21 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e21 e21: 3 total, 2 up, 3 in
Jan 31 07:07:21 compute-2 ceph-mon[77282]: pgmap v73: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 07:07:21 compute-2 systemd[1]: var-lib-containers-storage-overlay-4297154e356d2b96c4bfbdfaed347b0aa7f477a117baedceaad1955b44fd8816-merged.mount: Deactivated successfully.
Jan 31 07:07:21 compute-2 podman[78178]: 2026-01-31 07:07:21.244533427 +0000 UTC m=+5.594050009 container remove 8db4802fa30756d3a86a046a3cb031cc8bebdc90a2e58bae2597b620cf8d3a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moser, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 31 07:07:21 compute-2 sudo[78074]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:21 compute-2 systemd[1]: libpod-conmon-8db4802fa30756d3a86a046a3cb031cc8bebdc90a2e58bae2597b620cf8d3a3a.scope: Deactivated successfully.
Jan 31 07:07:21 compute-2 sudo[79160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:21 compute-2 sudo[79160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:21 compute-2 sudo[79160]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:21 compute-2 sudo[79185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:07:21 compute-2 sudo[79185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:21 compute-2 sudo[79185]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:21 compute-2 sudo[79210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:21 compute-2 sudo[79210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:21 compute-2 sudo[79210]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:21 compute-2 sudo[79235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:07:21 compute-2 sudo[79235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:21 compute-2 ceph-mgr[77635]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 31 07:07:21 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'osd_perf_query'
Jan 31 07:07:21 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest[77631]: 2026-01-31T07:07:21.617+0000 7f13c6562140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 31 07:07:21 compute-2 podman[79298]: 2026-01-31 07:07:21.716398498 +0000 UTC m=+0.036471369 container create 6366eba7b3a11440413eb74780629939c2bd18d11088cde747fb80fc7174255c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_galois, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:07:21 compute-2 systemd[1]: Started libpod-conmon-6366eba7b3a11440413eb74780629939c2bd18d11088cde747fb80fc7174255c.scope.
Jan 31 07:07:21 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:07:21 compute-2 podman[79298]: 2026-01-31 07:07:21.698180645 +0000 UTC m=+0.018253516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:21 compute-2 podman[79298]: 2026-01-31 07:07:21.796537999 +0000 UTC m=+0.116610890 container init 6366eba7b3a11440413eb74780629939c2bd18d11088cde747fb80fc7174255c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_galois, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:07:21 compute-2 podman[79298]: 2026-01-31 07:07:21.801284377 +0000 UTC m=+0.121357248 container start 6366eba7b3a11440413eb74780629939c2bd18d11088cde747fb80fc7174255c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:07:21 compute-2 podman[79298]: 2026-01-31 07:07:21.804182366 +0000 UTC m=+0.124255247 container attach 6366eba7b3a11440413eb74780629939c2bd18d11088cde747fb80fc7174255c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Jan 31 07:07:21 compute-2 unruffled_galois[79314]: 167 167
Jan 31 07:07:21 compute-2 systemd[1]: libpod-6366eba7b3a11440413eb74780629939c2bd18d11088cde747fb80fc7174255c.scope: Deactivated successfully.
Jan 31 07:07:21 compute-2 conmon[79314]: conmon 6366eba7b3a11440413e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6366eba7b3a11440413eb74780629939c2bd18d11088cde747fb80fc7174255c.scope/container/memory.events
Jan 31 07:07:21 compute-2 podman[79298]: 2026-01-31 07:07:21.807518066 +0000 UTC m=+0.127590927 container died 6366eba7b3a11440413eb74780629939c2bd18d11088cde747fb80fc7174255c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_galois, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 07:07:21 compute-2 systemd[1]: var-lib-containers-storage-overlay-1dc3cd9a65ff2bb5a64067b07c57d9fff441c767eb7ff62cadc2b27920de93bb-merged.mount: Deactivated successfully.
Jan 31 07:07:21 compute-2 podman[79298]: 2026-01-31 07:07:21.832261237 +0000 UTC m=+0.152334108 container remove 6366eba7b3a11440413eb74780629939c2bd18d11088cde747fb80fc7174255c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_galois, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:07:21 compute-2 systemd[1]: libpod-conmon-6366eba7b3a11440413eb74780629939c2bd18d11088cde747fb80fc7174255c.scope: Deactivated successfully.
Jan 31 07:07:21 compute-2 ceph-mgr[77635]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 31 07:07:21 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'osd_support'
Jan 31 07:07:21 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest[77631]: 2026-01-31T07:07:21.912+0000 7f13c6562140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 31 07:07:21 compute-2 podman[79337]: 2026-01-31 07:07:21.953519951 +0000 UTC m=+0.035362789 container create 6d6b6ff407d1891fe502443f57d53b6fccea3c24774c768765484234b731da2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mirzakhani, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:07:21 compute-2 systemd[1]: Started libpod-conmon-6d6b6ff407d1891fe502443f57d53b6fccea3c24774c768765484234b731da2c.scope.
Jan 31 07:07:22 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:07:22 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fab326bafa5eab2ae20341787e7ff3e272978c439944ac450a6a8cffc52ef87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:22 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fab326bafa5eab2ae20341787e7ff3e272978c439944ac450a6a8cffc52ef87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:22 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fab326bafa5eab2ae20341787e7ff3e272978c439944ac450a6a8cffc52ef87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:22 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fab326bafa5eab2ae20341787e7ff3e272978c439944ac450a6a8cffc52ef87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:22 compute-2 podman[79337]: 2026-01-31 07:07:22.015476569 +0000 UTC m=+0.097319427 container init 6d6b6ff407d1891fe502443f57d53b6fccea3c24774c768765484234b731da2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:07:22 compute-2 podman[79337]: 2026-01-31 07:07:22.021001349 +0000 UTC m=+0.102844187 container start 6d6b6ff407d1891fe502443f57d53b6fccea3c24774c768765484234b731da2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mirzakhani, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:07:22 compute-2 podman[79337]: 2026-01-31 07:07:22.02362194 +0000 UTC m=+0.105464808 container attach 6d6b6ff407d1891fe502443f57d53b6fccea3c24774c768765484234b731da2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mirzakhani, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:07:22 compute-2 podman[79337]: 2026-01-31 07:07:21.937844606 +0000 UTC m=+0.019687464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:22 compute-2 ceph-mgr[77635]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 31 07:07:22 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest[77631]: 2026-01-31T07:07:22.171+0000 7f13c6562140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 31 07:07:22 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'pg_autoscaler'
Jan 31 07:07:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e22 e22: 3 total, 2 up, 3 in
Jan 31 07:07:22 compute-2 ceph-mgr[77635]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 31 07:07:22 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest[77631]: 2026-01-31T07:07:22.483+0000 7f13c6562140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 31 07:07:22 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'progress'
Jan 31 07:07:22 compute-2 ceph-mon[77282]: osdmap e21: 3 total, 2 up, 3 in
Jan 31 07:07:22 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:22 compute-2 ceph-mon[77282]: pgmap v75: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4225669901' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]: {
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:     "2": [
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:         {
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:             "devices": [
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:                 "/dev/loop3"
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:             ],
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:             "lv_name": "ceph_lv0",
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:             "lv_size": "7511998464",
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=hnTo4t-xHY9-2qrB-rufP-IzRE-AEcn-PfbeB9,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d108d581-3dd3-4742-941a-f201ff187649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:             "lv_uuid": "hnTo4t-xHY9-2qrB-rufP-IzRE-AEcn-PfbeB9",
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:             "name": "ceph_lv0",
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:             "tags": {
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:                 "ceph.block_uuid": "hnTo4t-xHY9-2qrB-rufP-IzRE-AEcn-PfbeB9",
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:                 "ceph.cluster_name": "ceph",
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:                 "ceph.crush_device_class": "",
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:                 "ceph.encrypted": "0",
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:                 "ceph.osd_fsid": "d108d581-3dd3-4742-941a-f201ff187649",
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:                 "ceph.osd_id": "2",
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:                 "ceph.type": "block",
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:                 "ceph.vdo": "0"
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:             },
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:             "type": "block",
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:             "vg_name": "ceph_vg0"
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:         }
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]:     ]
Jan 31 07:07:22 compute-2 affectionate_mirzakhani[79353]: }
Jan 31 07:07:22 compute-2 systemd[1]: libpod-6d6b6ff407d1891fe502443f57d53b6fccea3c24774c768765484234b731da2c.scope: Deactivated successfully.
Jan 31 07:07:22 compute-2 podman[79337]: 2026-01-31 07:07:22.728682438 +0000 UTC m=+0.810525276 container died 6d6b6ff407d1891fe502443f57d53b6fccea3c24774c768765484234b731da2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 07:07:22 compute-2 ceph-mgr[77635]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 31 07:07:22 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest[77631]: 2026-01-31T07:07:22.731+0000 7f13c6562140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 31 07:07:22 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'prometheus'
Jan 31 07:07:22 compute-2 systemd[1]: var-lib-containers-storage-overlay-7fab326bafa5eab2ae20341787e7ff3e272978c439944ac450a6a8cffc52ef87-merged.mount: Deactivated successfully.
Jan 31 07:07:22 compute-2 podman[79337]: 2026-01-31 07:07:22.780688156 +0000 UTC m=+0.862530994 container remove 6d6b6ff407d1891fe502443f57d53b6fccea3c24774c768765484234b731da2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 07:07:22 compute-2 systemd[1]: libpod-conmon-6d6b6ff407d1891fe502443f57d53b6fccea3c24774c768765484234b731da2c.scope: Deactivated successfully.
Jan 31 07:07:22 compute-2 sudo[79235]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:22 compute-2 sudo[79376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:22 compute-2 sudo[79376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:22 compute-2 sudo[79376]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:22 compute-2 sudo[79401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:07:22 compute-2 sudo[79401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:22 compute-2 sudo[79401]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:22 compute-2 sudo[79426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:22 compute-2 sudo[79426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:22 compute-2 sudo[79426]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:23 compute-2 sudo[79451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:07:23 compute-2 sudo[79451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:23 compute-2 podman[79516]: 2026-01-31 07:07:23.335779912 +0000 UTC m=+0.055665139 container create fa451fe05a6ffed035026fedcc8f1dfa85f684fa1268e9aaaf07bc027b3dfa26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pike, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:07:23 compute-2 systemd[1]: Started libpod-conmon-fa451fe05a6ffed035026fedcc8f1dfa85f684fa1268e9aaaf07bc027b3dfa26.scope.
Jan 31 07:07:23 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:07:23 compute-2 podman[79516]: 2026-01-31 07:07:23.297302871 +0000 UTC m=+0.017188108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:23 compute-2 podman[79516]: 2026-01-31 07:07:23.430859348 +0000 UTC m=+0.150744595 container init fa451fe05a6ffed035026fedcc8f1dfa85f684fa1268e9aaaf07bc027b3dfa26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 07:07:23 compute-2 podman[79516]: 2026-01-31 07:07:23.43649613 +0000 UTC m=+0.156381357 container start fa451fe05a6ffed035026fedcc8f1dfa85f684fa1268e9aaaf07bc027b3dfa26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 07:07:23 compute-2 zealous_pike[79533]: 167 167
Jan 31 07:07:23 compute-2 systemd[1]: libpod-fa451fe05a6ffed035026fedcc8f1dfa85f684fa1268e9aaaf07bc027b3dfa26.scope: Deactivated successfully.
Jan 31 07:07:23 compute-2 podman[79516]: 2026-01-31 07:07:23.456334328 +0000 UTC m=+0.176219555 container attach fa451fe05a6ffed035026fedcc8f1dfa85f684fa1268e9aaaf07bc027b3dfa26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pike, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 07:07:23 compute-2 podman[79516]: 2026-01-31 07:07:23.456705278 +0000 UTC m=+0.176590505 container died fa451fe05a6ffed035026fedcc8f1dfa85f684fa1268e9aaaf07bc027b3dfa26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pike, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:07:23 compute-2 systemd[1]: var-lib-containers-storage-overlay-29127b1356ee568564efbdc9ea314bfdb05daf52be8933294c555ae866ba8659-merged.mount: Deactivated successfully.
Jan 31 07:07:23 compute-2 podman[79516]: 2026-01-31 07:07:23.695360293 +0000 UTC m=+0.415245520 container remove fa451fe05a6ffed035026fedcc8f1dfa85f684fa1268e9aaaf07bc027b3dfa26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:07:23 compute-2 systemd[1]: libpod-conmon-fa451fe05a6ffed035026fedcc8f1dfa85f684fa1268e9aaaf07bc027b3dfa26.scope: Deactivated successfully.
Jan 31 07:07:23 compute-2 ceph-mgr[77635]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 31 07:07:23 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'rbd_support'
Jan 31 07:07:23 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest[77631]: 2026-01-31T07:07:23.818+0000 7f13c6562140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 31 07:07:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e23 e23: 3 total, 2 up, 3 in
Jan 31 07:07:23 compute-2 podman[79565]: 2026-01-31 07:07:23.970169646 +0000 UTC m=+0.091750556 container create 354f5ab73f58c4f39c45a27f607223db5a3f12a23d966c237e4dc912f3121cbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 07:07:23 compute-2 podman[79565]: 2026-01-31 07:07:23.894960679 +0000 UTC m=+0.016541609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:24 compute-2 systemd[1]: Started libpod-conmon-354f5ab73f58c4f39c45a27f607223db5a3f12a23d966c237e4dc912f3121cbd.scope.
Jan 31 07:07:24 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:07:24 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9da0d90306778b4dbd446a23b6e79bd4e9cc73f063a4e110fdc174e4eb91777/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:24 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9da0d90306778b4dbd446a23b6e79bd4e9cc73f063a4e110fdc174e4eb91777/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:24 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9da0d90306778b4dbd446a23b6e79bd4e9cc73f063a4e110fdc174e4eb91777/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:24 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9da0d90306778b4dbd446a23b6e79bd4e9cc73f063a4e110fdc174e4eb91777/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:24 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9da0d90306778b4dbd446a23b6e79bd4e9cc73f063a4e110fdc174e4eb91777/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:24 compute-2 podman[79565]: 2026-01-31 07:07:24.105812738 +0000 UTC m=+0.227393678 container init 354f5ab73f58c4f39c45a27f607223db5a3f12a23d966c237e4dc912f3121cbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-2-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:07:24 compute-2 podman[79565]: 2026-01-31 07:07:24.111624496 +0000 UTC m=+0.233205406 container start 354f5ab73f58c4f39c45a27f607223db5a3f12a23d966c237e4dc912f3121cbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-2-activate-test, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:07:24 compute-2 podman[79565]: 2026-01-31 07:07:24.139610163 +0000 UTC m=+0.261191103 container attach 354f5ab73f58c4f39c45a27f607223db5a3f12a23d966c237e4dc912f3121cbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 07:07:24 compute-2 ceph-mgr[77635]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 31 07:07:24 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'restful'
Jan 31 07:07:24 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest[77631]: 2026-01-31T07:07:24.165+0000 7f13c6562140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 31 07:07:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e23 _set_new_cache_sizes cache_size:1020054715 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:24 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-2-activate-test[79581]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Jan 31 07:07:24 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-2-activate-test[79581]:                             [--no-systemd] [--no-tmpfs]
Jan 31 07:07:24 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-2-activate-test[79581]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 31 07:07:24 compute-2 systemd[1]: libpod-354f5ab73f58c4f39c45a27f607223db5a3f12a23d966c237e4dc912f3121cbd.scope: Deactivated successfully.
Jan 31 07:07:24 compute-2 podman[79565]: 2026-01-31 07:07:24.788361368 +0000 UTC m=+0.909942278 container died 354f5ab73f58c4f39c45a27f607223db5a3f12a23d966c237e4dc912f3121cbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 31 07:07:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:07:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4225669901' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:07:24 compute-2 ceph-mon[77282]: osdmap e22: 3 total, 2 up, 3 in
Jan 31 07:07:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 07:07:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 31 07:07:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:07:24 compute-2 systemd[1]: var-lib-containers-storage-overlay-e9da0d90306778b4dbd446a23b6e79bd4e9cc73f063a4e110fdc174e4eb91777-merged.mount: Deactivated successfully.
Jan 31 07:07:24 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'rgw'
Jan 31 07:07:25 compute-2 podman[79565]: 2026-01-31 07:07:25.025392725 +0000 UTC m=+1.146973655 container remove 354f5ab73f58c4f39c45a27f607223db5a3f12a23d966c237e4dc912f3121cbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-2-activate-test, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:07:25 compute-2 systemd[1]: libpod-conmon-354f5ab73f58c4f39c45a27f607223db5a3f12a23d966c237e4dc912f3121cbd.scope: Deactivated successfully.
Jan 31 07:07:25 compute-2 systemd[1]: Reloading.
Jan 31 07:07:25 compute-2 systemd-rc-local-generator[79644]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:07:25 compute-2 systemd-sysv-generator[79647]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:07:25 compute-2 systemd[1]: Reloading.
Jan 31 07:07:25 compute-2 systemd-sysv-generator[79681]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:07:25 compute-2 systemd-rc-local-generator[79676]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:07:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e24 e24: 3 total, 2 up, 3 in
Jan 31 07:07:25 compute-2 systemd[1]: Starting Ceph osd.2 for f70fcd2a-dcb4-5f89-a4ba-79a09959083b...
Jan 31 07:07:25 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest[77631]: 2026-01-31T07:07:25.744+0000 7f13c6562140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 31 07:07:25 compute-2 ceph-mgr[77635]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 31 07:07:25 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'rook'
Jan 31 07:07:25 compute-2 podman[79741]: 2026-01-31 07:07:25.885370779 +0000 UTC m=+0.027262158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:25 compute-2 podman[79741]: 2026-01-31 07:07:25.990353742 +0000 UTC m=+0.132245121 container create 243bac32b1a62738449862b74e2a2510bcbebb22f428c8711400fc50af971d9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-2-activate, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:07:26 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:07:26 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1008542374eeb55a645ce5c2e7674ff9e4002c938ad50ed90d30f1e13c6b7315/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:26 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1008542374eeb55a645ce5c2e7674ff9e4002c938ad50ed90d30f1e13c6b7315/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:26 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1008542374eeb55a645ce5c2e7674ff9e4002c938ad50ed90d30f1e13c6b7315/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:26 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1008542374eeb55a645ce5c2e7674ff9e4002c938ad50ed90d30f1e13c6b7315/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:26 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1008542374eeb55a645ce5c2e7674ff9e4002c938ad50ed90d30f1e13c6b7315/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:26 compute-2 podman[79741]: 2026-01-31 07:07:26.100855863 +0000 UTC m=+0.242747262 container init 243bac32b1a62738449862b74e2a2510bcbebb22f428c8711400fc50af971d9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:07:26 compute-2 podman[79741]: 2026-01-31 07:07:26.105394507 +0000 UTC m=+0.247285886 container start 243bac32b1a62738449862b74e2a2510bcbebb22f428c8711400fc50af971d9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 07:07:26 compute-2 podman[79741]: 2026-01-31 07:07:26.158439363 +0000 UTC m=+0.300330742 container attach 243bac32b1a62738449862b74e2a2510bcbebb22f428c8711400fc50af971d9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:07:26 compute-2 ceph-mon[77282]: Deploying daemon osd.2 on compute-2
Jan 31 07:07:26 compute-2 ceph-mon[77282]: pgmap v77: 6 pgs: 1 unknown, 5 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:26 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:07:26 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:07:26 compute-2 ceph-mon[77282]: osdmap e23: 3 total, 2 up, 3 in
Jan 31 07:07:26 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:26 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 07:07:26 compute-2 ceph-mon[77282]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 07:07:26 compute-2 ceph-mon[77282]: pgmap v79: 37 pgs: 32 unknown, 5 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:26 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:07:26 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:07:26 compute-2 ceph-mon[77282]: osdmap e24: 3 total, 2 up, 3 in
Jan 31 07:07:26 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:26 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-2-activate[79756]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 07:07:26 compute-2 bash[79741]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 07:07:26 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-2-activate[79756]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 31 07:07:26 compute-2 bash[79741]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 31 07:07:26 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-2-activate[79756]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 31 07:07:26 compute-2 bash[79741]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 31 07:07:26 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-2-activate[79756]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 07:07:26 compute-2 bash[79741]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 07:07:27 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-2-activate[79756]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Jan 31 07:07:27 compute-2 bash[79741]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Jan 31 07:07:27 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-2-activate[79756]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 07:07:27 compute-2 bash[79741]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 07:07:27 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-2-activate[79756]: --> ceph-volume raw activate successful for osd ID: 2
Jan 31 07:07:27 compute-2 bash[79741]: --> ceph-volume raw activate successful for osd ID: 2
Jan 31 07:07:27 compute-2 systemd[1]: libpod-243bac32b1a62738449862b74e2a2510bcbebb22f428c8711400fc50af971d9e.scope: Deactivated successfully.
Jan 31 07:07:27 compute-2 podman[79741]: 2026-01-31 07:07:27.043223328 +0000 UTC m=+1.185114717 container died 243bac32b1a62738449862b74e2a2510bcbebb22f428c8711400fc50af971d9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 07:07:27 compute-2 systemd[1]: var-lib-containers-storage-overlay-1008542374eeb55a645ce5c2e7674ff9e4002c938ad50ed90d30f1e13c6b7315-merged.mount: Deactivated successfully.
Jan 31 07:07:27 compute-2 podman[79741]: 2026-01-31 07:07:27.095138133 +0000 UTC m=+1.237029512 container remove 243bac32b1a62738449862b74e2a2510bcbebb22f428c8711400fc50af971d9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:07:27 compute-2 podman[79923]: 2026-01-31 07:07:27.247647593 +0000 UTC m=+0.037565099 container create 3047b440b7237d150e615214bda6a381e3d3502b885b7d32a36574639bb9ef11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:07:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b7dbee72e9061b71b92a0f354c3d5b682fb99f90309856363a03c3370da85ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b7dbee72e9061b71b92a0f354c3d5b682fb99f90309856363a03c3370da85ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b7dbee72e9061b71b92a0f354c3d5b682fb99f90309856363a03c3370da85ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b7dbee72e9061b71b92a0f354c3d5b682fb99f90309856363a03c3370da85ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b7dbee72e9061b71b92a0f354c3d5b682fb99f90309856363a03c3370da85ed/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:27 compute-2 podman[79923]: 2026-01-31 07:07:27.303916256 +0000 UTC m=+0.093833822 container init 3047b440b7237d150e615214bda6a381e3d3502b885b7d32a36574639bb9ef11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 07:07:27 compute-2 podman[79923]: 2026-01-31 07:07:27.308359276 +0000 UTC m=+0.098276792 container start 3047b440b7237d150e615214bda6a381e3d3502b885b7d32a36574639bb9ef11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 07:07:27 compute-2 bash[79923]: 3047b440b7237d150e615214bda6a381e3d3502b885b7d32a36574639bb9ef11
Jan 31 07:07:27 compute-2 podman[79923]: 2026-01-31 07:07:27.233909231 +0000 UTC m=+0.023826757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:27 compute-2 systemd[1]: Started Ceph osd.2 for f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 07:07:27 compute-2 sudo[79451]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:27 compute-2 ceph-osd[79942]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 07:07:27 compute-2 ceph-osd[79942]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Jan 31 07:07:27 compute-2 ceph-osd[79942]: pidfile_write: ignore empty --pid-file
Jan 31 07:07:27 compute-2 ceph-osd[79942]: bdev(0x562db9207c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 07:07:27 compute-2 ceph-osd[79942]: bdev(0x562db9207c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 07:07:27 compute-2 ceph-osd[79942]: bdev(0x562db9207c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:07:27 compute-2 ceph-osd[79942]: bdev(0x562db9207c00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:07:27 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 07:07:27 compute-2 ceph-osd[79942]: bdev(0x562dba013000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 07:07:27 compute-2 ceph-osd[79942]: bdev(0x562dba013000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 07:07:27 compute-2 ceph-osd[79942]: bdev(0x562dba013000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:07:27 compute-2 ceph-osd[79942]: bdev(0x562dba013000 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:07:27 compute-2 ceph-osd[79942]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB
Jan 31 07:07:27 compute-2 ceph-osd[79942]: bdev(0x562dba013000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 07:07:27 compute-2 ceph-osd[79942]: bdev(0x562db9207c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 07:07:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e25 e25: 3 total, 2 up, 3 in
Jan 31 07:07:27 compute-2 ceph-osd[79942]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Jan 31 07:07:27 compute-2 ceph-osd[79942]: load: jerasure load: lrc 
Jan 31 07:07:27 compute-2 ceph-osd[79942]: bdev(0x562dba094c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 07:07:27 compute-2 ceph-osd[79942]: bdev(0x562dba094c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 07:07:27 compute-2 ceph-osd[79942]: bdev(0x562dba094c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:07:27 compute-2 ceph-osd[79942]: bdev(0x562dba094c00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:07:27 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 07:07:27 compute-2 ceph-osd[79942]: bdev(0x562dba094c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 07:07:27 compute-2 ceph-mgr[77635]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 31 07:07:27 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest[77631]: 2026-01-31T07:07:27.996+0000 7f13c6562140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 31 07:07:27 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'selftest'
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bdev(0x562dba094c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bdev(0x562dba094c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bdev(0x562dba094c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bdev(0x562dba094c00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bdev(0x562dba094c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 07:07:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 07:07:28 compute-2 ceph-mon[77282]: 2.1 scrub starts
Jan 31 07:07:28 compute-2 ceph-mon[77282]: 2.1 scrub ok
Jan 31 07:07:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:07:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:07:28 compute-2 ceph-mgr[77635]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 31 07:07:28 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'snap_schedule'
Jan 31 07:07:28 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest[77631]: 2026-01-31T07:07:28.289+0000 7f13c6562140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 31 07:07:28 compute-2 ceph-osd[79942]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 31 07:07:28 compute-2 ceph-osd[79942]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bdev(0x562dba094c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bdev(0x562dba094c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bdev(0x562dba094c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bdev(0x562dba094c00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bdev(0x562dba095400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bdev(0x562dba095400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bdev(0x562dba095400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bdev(0x562dba095400 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bluefs mount
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bluefs mount shared_bdev_used = 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: RocksDB version: 7.9.2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Git sha 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: DB SUMMARY
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: DB Session ID:  LHUQ64055ANK2LVXSIUP
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: CURRENT file:  CURRENT
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                         Options.error_if_exists: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.create_if_missing: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                                     Options.env: 0x562dba097dc0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                                Options.info_log: 0x562db9284ba0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                              Options.statistics: (nil)
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                               Options.use_fsync: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                              Options.db_log_dir: 
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.write_buffer_manager: 0x562dba198460
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.unordered_write: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                               Options.row_cache: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                              Options.wal_filter: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.two_write_queues: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.wal_compression: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.atomic_flush: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.max_background_jobs: 4
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.max_background_compactions: -1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.max_subcompactions: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.max_open_files: -1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Compression algorithms supported:
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         kZSTD supported: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         kXpressCompression supported: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         kBZip2Compression supported: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         kLZ4Compression supported: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         kZlibCompression supported: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         kSnappyCompression supported: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562db9284600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562db927add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.compression: LZ4
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.num_levels: 7
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:           Options.merge_operator: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562db9284600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562db927add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.compression: LZ4
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.num_levels: 7
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:           Options.merge_operator: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562db9284600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562db927add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.compression: LZ4
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.num_levels: 7
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:           Options.merge_operator: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562db9284600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562db927add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.compression: LZ4
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.num_levels: 7
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:           Options.merge_operator: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562db9284600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562db927add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.compression: LZ4
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.num_levels: 7
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:           Options.merge_operator: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562db9284600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562db927add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.compression: LZ4
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.num_levels: 7
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:           Options.merge_operator: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562db9284600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562db927add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.compression: LZ4
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.num_levels: 7
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:           Options.merge_operator: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562db92845c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562db927a430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.compression: LZ4
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.num_levels: 7
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:           Options.merge_operator: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562db92845c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562db927a430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.compression: LZ4
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.num_levels: 7
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:           Options.merge_operator: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562db92845c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562db927a430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.compression: LZ4
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.num_levels: 7
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d74fba04-b3b3-4dd4-900c-bd41bdd6b826
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843248432221, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843248432385, "job": 1, "event": "recovery_finished"}
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: freelist init
Jan 31 07:07:28 compute-2 ceph-osd[79942]: freelist _read_cfg
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bluefs umount
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bdev(0x562dba095400 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 07:07:28 compute-2 ceph-mgr[77635]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 31 07:07:28 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'stats'
Jan 31 07:07:28 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest[77631]: 2026-01-31T07:07:28.547+0000 7f13c6562140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bdev(0x562dba095400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bdev(0x562dba095400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bdev(0x562dba095400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bdev(0x562dba095400 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bluefs mount
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bluefs mount shared_bdev_used = 4718592
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: RocksDB version: 7.9.2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Git sha 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: DB SUMMARY
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: DB Session ID:  LHUQ64055ANK2LVXSIUO
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: CURRENT file:  CURRENT
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                         Options.error_if_exists: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.create_if_missing: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                                     Options.env: 0x562db93d23f0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                                Options.info_log: 0x562db9285860
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                              Options.statistics: (nil)
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                               Options.use_fsync: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                              Options.db_log_dir: 
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.write_buffer_manager: 0x562dba198460
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.unordered_write: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                               Options.row_cache: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                              Options.wal_filter: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.two_write_queues: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.wal_compression: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.atomic_flush: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.max_background_jobs: 4
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.max_background_compactions: -1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.max_subcompactions: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.max_open_files: -1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Compression algorithms supported:
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         kZSTD supported: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         kXpressCompression supported: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         kBZip2Compression supported: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         kLZ4Compression supported: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         kZlibCompression supported: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         kSnappyCompression supported: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562db9261b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562db927b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.compression: LZ4
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.num_levels: 7
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:           Options.merge_operator: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562db9261b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562db927b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.compression: LZ4
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.num_levels: 7
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:           Options.merge_operator: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562db9261b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562db927b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.compression: LZ4
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.num_levels: 7
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:           Options.merge_operator: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562db9261b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562db927b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.compression: LZ4
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.num_levels: 7
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:           Options.merge_operator: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562db9261b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562db927b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.compression: LZ4
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.num_levels: 7
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:           Options.merge_operator: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562db9261b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562db927b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.compression: LZ4
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.num_levels: 7
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:           Options.merge_operator: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562db9261b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562db927b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.compression: LZ4
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.num_levels: 7
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:           Options.merge_operator: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562db9285e20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562db927b4b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.compression: LZ4
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.num_levels: 7
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:           Options.merge_operator: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562db9285e20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562db927b4b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.compression: LZ4
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.num_levels: 7
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:           Options.merge_operator: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562db9285e20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562db927b4b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.compression: LZ4
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.num_levels: 7
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d74fba04-b3b3-4dd4-900c-bd41bdd6b826
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843248698927, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843248702388, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843248, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d74fba04-b3b3-4dd4-900c-bd41bdd6b826", "db_session_id": "LHUQ64055ANK2LVXSIUO", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843248704986, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843248, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d74fba04-b3b3-4dd4-900c-bd41bdd6b826", "db_session_id": "LHUQ64055ANK2LVXSIUO", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843248707796, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843248, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d74fba04-b3b3-4dd4-900c-bd41bdd6b826", "db_session_id": "LHUQ64055ANK2LVXSIUO", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843248709335, "job": 1, "event": "recovery_finished"}
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562db9339c00
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: DB pointer 0x562dba181a00
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Jan 31 07:07:28 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b4b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b4b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b4b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 07:07:28 compute-2 ceph-osd[79942]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 31 07:07:28 compute-2 ceph-osd[79942]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 31 07:07:28 compute-2 ceph-osd[79942]: _get_class not permitted to load lua
Jan 31 07:07:28 compute-2 ceph-osd[79942]: _get_class not permitted to load sdk
Jan 31 07:07:28 compute-2 ceph-osd[79942]: _get_class not permitted to load test_remote_reads
Jan 31 07:07:28 compute-2 ceph-osd[79942]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 31 07:07:28 compute-2 ceph-osd[79942]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 31 07:07:28 compute-2 ceph-osd[79942]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 31 07:07:28 compute-2 ceph-osd[79942]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 31 07:07:28 compute-2 ceph-osd[79942]: osd.2 0 load_pgs
Jan 31 07:07:28 compute-2 ceph-osd[79942]: osd.2 0 load_pgs opened 0 pgs
Jan 31 07:07:28 compute-2 ceph-osd[79942]: osd.2 0 log_to_monitors true
Jan 31 07:07:28 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-2[79938]: 2026-01-31T07:07:28.747+0000 7f3bca169740 -1 osd.2 0 log_to_monitors true
Jan 31 07:07:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Jan 31 07:07:28 compute-2 ceph-mon[77282]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.102:6800/2867694174,v1:192.168.122.102:6801/2867694174]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 31 07:07:28 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'status'
Jan 31 07:07:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e26 e26: 3 total, 2 up, 3 in
Jan 31 07:07:29 compute-2 ceph-mgr[77635]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 31 07:07:29 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'telegraf'
Jan 31 07:07:29 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest[77631]: 2026-01-31T07:07:29.116+0000 7f13c6562140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 31 07:07:29 compute-2 sudo[80375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:29 compute-2 sudo[80375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:29 compute-2 sudo[80375]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:29 compute-2 sudo[80400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:07:29 compute-2 sudo[80400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:29 compute-2 sudo[80400]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:29 compute-2 sudo[80425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:29 compute-2 sudo[80425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:29 compute-2 sudo[80425]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:29 compute-2 ceph-mgr[77635]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 31 07:07:29 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'telemetry'
Jan 31 07:07:29 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest[77631]: 2026-01-31T07:07:29.424+0000 7f13c6562140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 31 07:07:29 compute-2 sudo[80450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:07:29 compute-2 sudo[80450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:29 compute-2 ceph-mon[77282]: pgmap v81: 37 pgs: 1 peering, 32 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:07:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:07:29 compute-2 ceph-mon[77282]: osdmap e25: 3 total, 2 up, 3 in
Jan 31 07:07:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/386957885' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 07:07:29 compute-2 ceph-mon[77282]: from='osd.2 [v2:192.168.122.102:6800/2867694174,v1:192.168.122.102:6801/2867694174]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 31 07:07:29 compute-2 ceph-mon[77282]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 31 07:07:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:07:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:07:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/386957885' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:07:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:29 compute-2 ceph-mon[77282]: osdmap e26: 3 total, 2 up, 3 in
Jan 31 07:07:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:29 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 31 07:07:29 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 31 07:07:29 compute-2 podman[80516]: 2026-01-31 07:07:29.817394688 +0000 UTC m=+0.035747599 container create 04806e5b9ec50160b394d640d0eec0b1a0e36ce3af692488fc9cf19b40605cc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Jan 31 07:07:29 compute-2 systemd[1]: Started libpod-conmon-04806e5b9ec50160b394d640d0eec0b1a0e36ce3af692488fc9cf19b40605cc5.scope.
Jan 31 07:07:29 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:07:29 compute-2 podman[80516]: 2026-01-31 07:07:29.881479973 +0000 UTC m=+0.099832914 container init 04806e5b9ec50160b394d640d0eec0b1a0e36ce3af692488fc9cf19b40605cc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swirles, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:07:29 compute-2 podman[80516]: 2026-01-31 07:07:29.887705312 +0000 UTC m=+0.106058263 container start 04806e5b9ec50160b394d640d0eec0b1a0e36ce3af692488fc9cf19b40605cc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:07:29 compute-2 podman[80516]: 2026-01-31 07:07:29.891574846 +0000 UTC m=+0.109927757 container attach 04806e5b9ec50160b394d640d0eec0b1a0e36ce3af692488fc9cf19b40605cc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swirles, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Jan 31 07:07:29 compute-2 inspiring_swirles[80532]: 167 167
Jan 31 07:07:29 compute-2 systemd[1]: libpod-04806e5b9ec50160b394d640d0eec0b1a0e36ce3af692488fc9cf19b40605cc5.scope: Deactivated successfully.
Jan 31 07:07:29 compute-2 conmon[80532]: conmon 04806e5b9ec50160b394 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-04806e5b9ec50160b394d640d0eec0b1a0e36ce3af692488fc9cf19b40605cc5.scope/container/memory.events
Jan 31 07:07:29 compute-2 podman[80516]: 2026-01-31 07:07:29.895940335 +0000 UTC m=+0.114293246 container died 04806e5b9ec50160b394d640d0eec0b1a0e36ce3af692488fc9cf19b40605cc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swirles, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 07:07:29 compute-2 podman[80516]: 2026-01-31 07:07:29.801501478 +0000 UTC m=+0.019854419 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:29 compute-2 systemd[1]: var-lib-containers-storage-overlay-cc29ab18efd7c164ebd49e42bccf56fe2842b09dbd723b964f00f93a952db2d5-merged.mount: Deactivated successfully.
Jan 31 07:07:29 compute-2 podman[80516]: 2026-01-31 07:07:29.934382826 +0000 UTC m=+0.152735737 container remove 04806e5b9ec50160b394d640d0eec0b1a0e36ce3af692488fc9cf19b40605cc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:07:29 compute-2 systemd[1]: libpod-conmon-04806e5b9ec50160b394d640d0eec0b1a0e36ce3af692488fc9cf19b40605cc5.scope: Deactivated successfully.
Jan 31 07:07:30 compute-2 podman[80555]: 2026-01-31 07:07:30.060666725 +0000 UTC m=+0.041614398 container create 24e9ccee5f2bf9f59d74411aa15005d3f02249adad9f95bd55e29d12dc1a5bf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hellman, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 07:07:30 compute-2 systemd[1]: Started libpod-conmon-24e9ccee5f2bf9f59d74411aa15005d3f02249adad9f95bd55e29d12dc1a5bf2.scope.
Jan 31 07:07:30 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:07:30 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f1ea88aeabfb22a39106556c30f4c400ef07cdcd0a261dbaf954d11666bef56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:30 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f1ea88aeabfb22a39106556c30f4c400ef07cdcd0a261dbaf954d11666bef56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:30 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f1ea88aeabfb22a39106556c30f4c400ef07cdcd0a261dbaf954d11666bef56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:30 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f1ea88aeabfb22a39106556c30f4c400ef07cdcd0a261dbaf954d11666bef56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:30 compute-2 podman[80555]: 2026-01-31 07:07:30.045292328 +0000 UTC m=+0.026240021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:30 compute-2 podman[80555]: 2026-01-31 07:07:30.141588756 +0000 UTC m=+0.122536449 container init 24e9ccee5f2bf9f59d74411aa15005d3f02249adad9f95bd55e29d12dc1a5bf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hellman, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:07:30 compute-2 podman[80555]: 2026-01-31 07:07:30.153066307 +0000 UTC m=+0.134014020 container start 24e9ccee5f2bf9f59d74411aa15005d3f02249adad9f95bd55e29d12dc1a5bf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hellman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 07:07:30 compute-2 podman[80555]: 2026-01-31 07:07:30.159574752 +0000 UTC m=+0.140522555 container attach 24e9ccee5f2bf9f59d74411aa15005d3f02249adad9f95bd55e29d12dc1a5bf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hellman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:07:30 compute-2 ceph-mgr[77635]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 31 07:07:30 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'test_orchestrator'
Jan 31 07:07:30 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest[77631]: 2026-01-31T07:07:30.190+0000 7f13c6562140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 31 07:07:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e27 e27: 3 total, 2 up, 3 in
Jan 31 07:07:30 compute-2 ceph-mon[77282]: pgmap v84: 100 pgs: 1 peering, 94 unknown, 5 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:30 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:07:30 compute-2 ceph-mon[77282]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 07:07:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]} v 0) v1
Jan 31 07:07:30 compute-2 ceph-mon[77282]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.102:6800/2867694174,v1:192.168.122.102:6801/2867694174]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 31 07:07:30 compute-2 mystifying_hellman[80572]: {
Jan 31 07:07:30 compute-2 mystifying_hellman[80572]:     "d108d581-3dd3-4742-941a-f201ff187649": {
Jan 31 07:07:30 compute-2 mystifying_hellman[80572]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:07:30 compute-2 mystifying_hellman[80572]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:07:30 compute-2 mystifying_hellman[80572]:         "osd_id": 2,
Jan 31 07:07:30 compute-2 mystifying_hellman[80572]:         "osd_uuid": "d108d581-3dd3-4742-941a-f201ff187649",
Jan 31 07:07:30 compute-2 mystifying_hellman[80572]:         "type": "bluestore"
Jan 31 07:07:30 compute-2 mystifying_hellman[80572]:     }
Jan 31 07:07:30 compute-2 mystifying_hellman[80572]: }
Jan 31 07:07:30 compute-2 systemd[1]: libpod-24e9ccee5f2bf9f59d74411aa15005d3f02249adad9f95bd55e29d12dc1a5bf2.scope: Deactivated successfully.
Jan 31 07:07:30 compute-2 podman[80555]: 2026-01-31 07:07:30.932811018 +0000 UTC m=+0.913758711 container died 24e9ccee5f2bf9f59d74411aa15005d3f02249adad9f95bd55e29d12dc1a5bf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hellman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 07:07:30 compute-2 systemd[1]: var-lib-containers-storage-overlay-3f1ea88aeabfb22a39106556c30f4c400ef07cdcd0a261dbaf954d11666bef56-merged.mount: Deactivated successfully.
Jan 31 07:07:31 compute-2 podman[80555]: 2026-01-31 07:07:31.001389945 +0000 UTC m=+0.982337618 container remove 24e9ccee5f2bf9f59d74411aa15005d3f02249adad9f95bd55e29d12dc1a5bf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hellman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:07:31 compute-2 ceph-mgr[77635]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 31 07:07:31 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'volumes'
Jan 31 07:07:31 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest[77631]: 2026-01-31T07:07:31.002+0000 7f13c6562140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 31 07:07:31 compute-2 systemd[1]: libpod-conmon-24e9ccee5f2bf9f59d74411aa15005d3f02249adad9f95bd55e29d12dc1a5bf2.scope: Deactivated successfully.
Jan 31 07:07:31 compute-2 sudo[80450]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:31 compute-2 ceph-mgr[77635]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 31 07:07:31 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest[77631]: 2026-01-31T07:07:31.799+0000 7f13c6562140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 31 07:07:31 compute-2 ceph-mgr[77635]: mgr[py] Loading python module 'zabbix'
Jan 31 07:07:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e28 e28: 3 total, 2 up, 3 in
Jan 31 07:07:31 compute-2 ceph-osd[79942]: osd.2 0 done with init, starting boot process
Jan 31 07:07:31 compute-2 ceph-osd[79942]: osd.2 0 start_boot
Jan 31 07:07:31 compute-2 ceph-osd[79942]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 31 07:07:31 compute-2 ceph-osd[79942]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 31 07:07:31 compute-2 ceph-osd[79942]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 31 07:07:31 compute-2 ceph-osd[79942]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 31 07:07:31 compute-2 ceph-osd[79942]: osd.2 0  bench count 12288000 bsize 4 KiB
Jan 31 07:07:31 compute-2 sudo[80605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:31 compute-2 sudo[80605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:31 compute-2 sudo[80605]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:31 compute-2 ceph-mon[77282]: purged_snaps scrub starts
Jan 31 07:07:31 compute-2 ceph-mon[77282]: purged_snaps scrub ok
Jan 31 07:07:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:31 compute-2 ceph-mon[77282]: 2.2 scrub starts
Jan 31 07:07:31 compute-2 ceph-mon[77282]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 31 07:07:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:07:31 compute-2 ceph-mon[77282]: from='osd.2 [v2:192.168.122.102:6800/2867694174,v1:192.168.122.102:6801/2867694174]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 31 07:07:31 compute-2 ceph-mon[77282]: 2.2 scrub ok
Jan 31 07:07:31 compute-2 ceph-mon[77282]: osdmap e27: 3 total, 2 up, 3 in
Jan 31 07:07:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:31 compute-2 ceph-mon[77282]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 31 07:07:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:31 compute-2 ceph-mon[77282]: pgmap v86: 131 pgs: 1 creating+peering, 93 unknown, 37 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:32 compute-2 sudo[80630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:07:32 compute-2 sudo[80630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:32 compute-2 sudo[80630]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:32 compute-2 sudo[80655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:32 compute-2 sudo[80655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:32 compute-2 sudo[80655]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:32 compute-2 ceph-mgr[77635]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 31 07:07:32 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-2-wmgest[77631]: 2026-01-31T07:07:32.081+0000 7f13c6562140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 31 07:07:32 compute-2 ceph-mgr[77635]: ms_deliver_dispatch: unhandled message 0x55f773f451e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Jan 31 07:07:32 compute-2 ceph-mgr[77635]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3465938080
Jan 31 07:07:32 compute-2 sudo[80680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:07:32 compute-2 sudo[80680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:32 compute-2 sudo[80680]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:32 compute-2 sudo[80705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:32 compute-2 sudo[80705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:32 compute-2 sudo[80705]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:32 compute-2 sudo[80730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 07:07:32 compute-2 sudo[80730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:33 compute-2 ceph-mon[77282]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Jan 31 07:07:33 compute-2 ceph-mon[77282]: osdmap e28: 3 total, 2 up, 3 in
Jan 31 07:07:33 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:33 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3259085705' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 31 07:07:33 compute-2 ceph-mon[77282]: Standby manager daemon compute-2.wmgest started
Jan 31 07:07:33 compute-2 ceph-mgr[77635]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3465938080
Jan 31 07:07:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e29 e29: 3 total, 2 up, 3 in
Jan 31 07:07:33 compute-2 podman[80827]: 2026-01-31 07:07:33.264339813 +0000 UTC m=+0.099022751 container exec 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 07:07:33 compute-2 podman[80827]: 2026-01-31 07:07:33.821795876 +0000 UTC m=+0.656478804 container exec_died 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:07:34 compute-2 ceph-mon[77282]: 4.1 scrub starts
Jan 31 07:07:34 compute-2 ceph-mon[77282]: 4.1 scrub ok
Jan 31 07:07:34 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3259085705' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 31 07:07:34 compute-2 ceph-mon[77282]: osdmap e29: 3 total, 2 up, 3 in
Jan 31 07:07:34 compute-2 ceph-mon[77282]: mgrmap e10: compute-0.hhuoua(active, since 2m), standbys: compute-2.wmgest
Jan 31 07:07:34 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:34 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mgr metadata", "who": "compute-2.wmgest", "id": "compute-2.wmgest"}]: dispatch
Jan 31 07:07:34 compute-2 ceph-mon[77282]: pgmap v89: 131 pgs: 1 creating+peering, 62 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:34 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:34 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:34 compute-2 ceph-mon[77282]: 2.3 scrub starts
Jan 31 07:07:34 compute-2 ceph-mon[77282]: 2.3 scrub ok
Jan 31 07:07:34 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:34 compute-2 sudo[80730]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:35 compute-2 sudo[80911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:35 compute-2 sudo[80911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:35 compute-2 sudo[80911]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:35 compute-2 sudo[80936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:07:35 compute-2 sudo[80936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:35 compute-2 sudo[80936]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:35 compute-2 sudo[80961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:35 compute-2 sudo[80961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:35 compute-2 sudo[80961]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:35 compute-2 sudo[80986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:07:35 compute-2 sudo[80986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:35 compute-2 ceph-mon[77282]: 4.2 scrub starts
Jan 31 07:07:35 compute-2 ceph-mon[77282]: 4.2 scrub ok
Jan 31 07:07:35 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:35 compute-2 ceph-mon[77282]: Standby manager daemon compute-1.hodsiu started
Jan 31 07:07:35 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:35 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e30 e30: 3 total, 2 up, 3 in
Jan 31 07:07:36 compute-2 sudo[80986]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:36 compute-2 sudo[81043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:36 compute-2 sudo[81043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:36 compute-2 sudo[81043]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:36 compute-2 sudo[81068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:07:36 compute-2 sudo[81068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:36 compute-2 sudo[81068]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:36 compute-2 sudo[81093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:36 compute-2 sudo[81093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:36 compute-2 sudo[81093]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:36 compute-2 sudo[81118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- inventory --format=json-pretty --filter-for-batch
Jan 31 07:07:36 compute-2 sudo[81118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:36 compute-2 ceph-mon[77282]: pgmap v90: 131 pgs: 1 creating+peering, 62 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/501340508' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 31 07:07:36 compute-2 ceph-mon[77282]: 2.4 deep-scrub starts
Jan 31 07:07:36 compute-2 ceph-mon[77282]: 2.4 deep-scrub ok
Jan 31 07:07:36 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:36 compute-2 ceph-mon[77282]: mgrmap e11: compute-0.hhuoua(active, since 2m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 07:07:36 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mgr metadata", "who": "compute-1.hodsiu", "id": "compute-1.hodsiu"}]: dispatch
Jan 31 07:07:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/501340508' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 31 07:07:36 compute-2 ceph-mon[77282]: osdmap e30: 3 total, 2 up, 3 in
Jan 31 07:07:36 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:37 compute-2 podman[81181]: 2026-01-31 07:07:37.375263857 +0000 UTC m=+0.020893577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:37 compute-2 podman[81181]: 2026-01-31 07:07:37.515073732 +0000 UTC m=+0.160703432 container create d29cdba1b11a0bbd9a86cadd0fc380f719a2b8b08d8de0281035f5994a70305c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_tharp, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Jan 31 07:07:37 compute-2 systemd[1]: Started libpod-conmon-d29cdba1b11a0bbd9a86cadd0fc380f719a2b8b08d8de0281035f5994a70305c.scope.
Jan 31 07:07:37 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:07:37 compute-2 podman[81181]: 2026-01-31 07:07:37.587433382 +0000 UTC m=+0.233063092 container init d29cdba1b11a0bbd9a86cadd0fc380f719a2b8b08d8de0281035f5994a70305c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:07:37 compute-2 podman[81181]: 2026-01-31 07:07:37.599001265 +0000 UTC m=+0.244630965 container start d29cdba1b11a0bbd9a86cadd0fc380f719a2b8b08d8de0281035f5994a70305c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:07:37 compute-2 optimistic_tharp[81197]: 167 167
Jan 31 07:07:37 compute-2 systemd[1]: libpod-d29cdba1b11a0bbd9a86cadd0fc380f719a2b8b08d8de0281035f5994a70305c.scope: Deactivated successfully.
Jan 31 07:07:37 compute-2 podman[81181]: 2026-01-31 07:07:37.605326636 +0000 UTC m=+0.250956326 container attach d29cdba1b11a0bbd9a86cadd0fc380f719a2b8b08d8de0281035f5994a70305c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_tharp, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 07:07:37 compute-2 podman[81181]: 2026-01-31 07:07:37.606135287 +0000 UTC m=+0.251764987 container died d29cdba1b11a0bbd9a86cadd0fc380f719a2b8b08d8de0281035f5994a70305c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_tharp, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:07:37 compute-2 systemd[1]: var-lib-containers-storage-overlay-a40437821ff0f54ef954494592c2328a031d01697a0c026cc1675dbbf3919a7a-merged.mount: Deactivated successfully.
Jan 31 07:07:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e31 e31: 3 total, 2 up, 3 in
Jan 31 07:07:37 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:37 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1530384074' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 31 07:07:37 compute-2 podman[81181]: 2026-01-31 07:07:37.659676687 +0000 UTC m=+0.305306387 container remove d29cdba1b11a0bbd9a86cadd0fc380f719a2b8b08d8de0281035f5994a70305c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_tharp, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:07:37 compute-2 systemd[1]: libpod-conmon-d29cdba1b11a0bbd9a86cadd0fc380f719a2b8b08d8de0281035f5994a70305c.scope: Deactivated successfully.
Jan 31 07:07:37 compute-2 ceph-osd[79942]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 26.038 iops: 6665.722 elapsed_sec: 0.450
Jan 31 07:07:37 compute-2 ceph-osd[79942]: log_channel(cluster) log [WRN] : OSD bench result of 6665.721838 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 07:07:37 compute-2 ceph-osd[79942]: osd.2 0 waiting for initial osdmap
Jan 31 07:07:37 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-2[79938]: 2026-01-31T07:07:37.806+0000 7f3bc6900640 -1 osd.2 0 waiting for initial osdmap
Jan 31 07:07:37 compute-2 ceph-osd[79942]: osd.2 31 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 31 07:07:37 compute-2 ceph-osd[79942]: osd.2 31 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 31 07:07:37 compute-2 ceph-osd[79942]: osd.2 31 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 31 07:07:37 compute-2 ceph-osd[79942]: osd.2 31 check_osdmap_features require_osd_release unknown -> reef
Jan 31 07:07:37 compute-2 ceph-osd[79942]: osd.2 31 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 07:07:37 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-2[79938]: 2026-01-31T07:07:37.984+0000 7f3bc1711640 -1 osd.2 31 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 07:07:37 compute-2 ceph-osd[79942]: osd.2 31 set_numa_affinity not setting numa affinity
Jan 31 07:07:37 compute-2 ceph-osd[79942]: osd.2 31 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Jan 31 07:07:38 compute-2 podman[81220]: 2026-01-31 07:07:38.014038362 +0000 UTC m=+0.055153115 container create b44603beef3e2e211f85c2c26626acd6c30512b168d308fe108f1c8324ba1cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 07:07:38 compute-2 systemd[1]: Started libpod-conmon-b44603beef3e2e211f85c2c26626acd6c30512b168d308fe108f1c8324ba1cdb.scope.
Jan 31 07:07:38 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:07:38 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b03609bdbba71c660426898ca4a2c30f0fe3ae3034bc8f1b89710f0a60adff00/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:38 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b03609bdbba71c660426898ca4a2c30f0fe3ae3034bc8f1b89710f0a60adff00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:38 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b03609bdbba71c660426898ca4a2c30f0fe3ae3034bc8f1b89710f0a60adff00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:38 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b03609bdbba71c660426898ca4a2c30f0fe3ae3034bc8f1b89710f0a60adff00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:38 compute-2 podman[81220]: 2026-01-31 07:07:38.08491095 +0000 UTC m=+0.126025713 container init b44603beef3e2e211f85c2c26626acd6c30512b168d308fe108f1c8324ba1cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jepsen, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 07:07:38 compute-2 podman[81220]: 2026-01-31 07:07:38.091287073 +0000 UTC m=+0.132401826 container start b44603beef3e2e211f85c2c26626acd6c30512b168d308fe108f1c8324ba1cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jepsen, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 07:07:38 compute-2 podman[81220]: 2026-01-31 07:07:37.996399174 +0000 UTC m=+0.037513947 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:38 compute-2 podman[81220]: 2026-01-31 07:07:38.094365967 +0000 UTC m=+0.135480820 container attach b44603beef3e2e211f85c2c26626acd6c30512b168d308fe108f1c8324ba1cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:07:38 compute-2 ceph-mon[77282]: pgmap v92: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:38 compute-2 ceph-mon[77282]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 07:07:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1530384074' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 31 07:07:38 compute-2 ceph-mon[77282]: osdmap e31: 3 total, 2 up, 3 in
Jan 31 07:07:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:38 compute-2 ceph-mon[77282]: OSD bench result of 6665.721838 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 07:07:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e32 e32: 3 total, 3 up, 3 in
Jan 31 07:07:38 compute-2 ceph-osd[79942]: osd.2 32 state: booting -> active
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]: [
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:     {
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:         "available": false,
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:         "ceph_device": false,
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:         "lsm_data": {},
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:         "lvs": [],
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:         "path": "/dev/sr0",
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:         "rejected_reasons": [
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:             "Insufficient space (<5GB)",
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:             "Has a FileSystem"
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:         ],
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:         "sys_api": {
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:             "actuators": null,
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:             "device_nodes": "sr0",
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:             "devname": "sr0",
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:             "human_readable_size": "482.00 KB",
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:             "id_bus": "ata",
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:             "model": "QEMU DVD-ROM",
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:             "nr_requests": "2",
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:             "parent": "/dev/sr0",
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:             "partitions": {},
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:             "path": "/dev/sr0",
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:             "removable": "1",
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:             "rev": "2.5+",
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:             "ro": "0",
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:             "rotational": "1",
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:             "sas_address": "",
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:             "sas_device_handle": "",
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:             "scheduler_mode": "mq-deadline",
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:             "sectors": 0,
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:             "sectorsize": "2048",
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:             "size": 493568.0,
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:             "support_discard": "2048",
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:             "type": "disk",
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:             "vendor": "QEMU"
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:         }
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]:     }
Jan 31 07:07:39 compute-2 elegant_jepsen[81238]: ]
Jan 31 07:07:39 compute-2 systemd[1]: libpod-b44603beef3e2e211f85c2c26626acd6c30512b168d308fe108f1c8324ba1cdb.scope: Deactivated successfully.
Jan 31 07:07:39 compute-2 systemd[1]: libpod-b44603beef3e2e211f85c2c26626acd6c30512b168d308fe108f1c8324ba1cdb.scope: Consumed 1.237s CPU time.
Jan 31 07:07:39 compute-2 podman[81220]: 2026-01-31 07:07:39.35985945 +0000 UTC m=+1.400974233 container died b44603beef3e2e211f85c2c26626acd6c30512b168d308fe108f1c8324ba1cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jepsen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:07:39 compute-2 systemd[1]: var-lib-containers-storage-overlay-b03609bdbba71c660426898ca4a2c30f0fe3ae3034bc8f1b89710f0a60adff00-merged.mount: Deactivated successfully.
Jan 31 07:07:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:39 compute-2 podman[81220]: 2026-01-31 07:07:39.415696362 +0000 UTC m=+1.456811115 container remove b44603beef3e2e211f85c2c26626acd6c30512b168d308fe108f1c8324ba1cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 07:07:39 compute-2 systemd[1]: libpod-conmon-b44603beef3e2e211f85c2c26626acd6c30512b168d308fe108f1c8324ba1cdb.scope: Deactivated successfully.
Jan 31 07:07:39 compute-2 sudo[81118]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:39 compute-2 sudo[82207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:39 compute-2 sudo[82207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:39 compute-2 sudo[82207]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:39 compute-2 sudo[82232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 31 07:07:39 compute-2 sudo[82232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:39 compute-2 sudo[82232]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:39 compute-2 sudo[82257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:39 compute-2 sudo[82257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:39 compute-2 sudo[82257]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2153731720' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 31 07:07:39 compute-2 ceph-mon[77282]: osd.2 [v2:192.168.122.102:6800/2867694174,v1:192.168.122.102:6801/2867694174] boot
Jan 31 07:07:39 compute-2 ceph-mon[77282]: osdmap e32: 3 total, 3 up, 3 in
Jan 31 07:07:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:39 compute-2 ceph-mon[77282]: 2.5 deep-scrub starts
Jan 31 07:07:39 compute-2 ceph-mon[77282]: 2.5 deep-scrub ok
Jan 31 07:07:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Jan 31 07:07:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:07:39 compute-2 sudo[82282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph
Jan 31 07:07:39 compute-2 sudo[82282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:39 compute-2 sudo[82282]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e33 e33: 3 total, 3 up, 3 in
Jan 31 07:07:39 compute-2 sudo[82307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:39 compute-2 sudo[82307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:39 compute-2 sudo[82307]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.18( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.19( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.11( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.17( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.1e( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.16( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.15( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.10( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.14( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.13( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.13( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.15( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.14( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.12( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.17( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.11( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.16( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.9( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.10( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.f( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.8( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.e( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.d( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.b( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.c( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.d( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.a( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.b( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.c( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.1f( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.6( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.a( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.1( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.7( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.3( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.6( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.5( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.12( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.7( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.4( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.2( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.3( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.2( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.4( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.5( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.e( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.f( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.8( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.1( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.9( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.1a( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.1d( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.1c( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.1a( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.1c( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.1b( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.1b( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.18( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[5.19( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.1e( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.1f( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 33 pg[3.1d( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:39 compute-2 sudo[82332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.conf.new
Jan 31 07:07:39 compute-2 sudo[82332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:39 compute-2 sudo[82332]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:39 compute-2 sudo[82357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:39 compute-2 sudo[82357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:39 compute-2 sudo[82357]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:39 compute-2 sudo[82382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:07:39 compute-2 sudo[82382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:39 compute-2 sudo[82382]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:39 compute-2 sudo[82407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:39 compute-2 sudo[82407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:39 compute-2 sudo[82407]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:39 compute-2 sudo[82432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.conf.new
Jan 31 07:07:39 compute-2 sudo[82432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:39 compute-2 sudo[82432]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-2 sudo[82480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:40 compute-2 sudo[82480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-2 sudo[82480]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-2 sudo[82505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.conf.new
Jan 31 07:07:40 compute-2 sudo[82505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-2 sudo[82505]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-2 sudo[82530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:40 compute-2 sudo[82530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-2 sudo[82530]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-2 sudo[82555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.conf.new
Jan 31 07:07:40 compute-2 sudo[82555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-2 sudo[82555]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-2 sudo[82580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:40 compute-2 sudo[82580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-2 sudo[82580]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-2 sudo[82605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 31 07:07:40 compute-2 sudo[82605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-2 sudo[82605]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-2 sudo[82630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:40 compute-2 sudo[82630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-2 sudo[82630]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-2 sudo[82655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config
Jan 31 07:07:40 compute-2 sudo[82655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-2 sudo[82655]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-2 sudo[82680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:40 compute-2 sudo[82680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-2 sudo[82680]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-2 sudo[82705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config
Jan 31 07:07:40 compute-2 sudo[82705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-2 sudo[82705]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-2 sudo[82730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:40 compute-2 sudo[82730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-2 sudo[82730]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-2 sudo[82755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf.new
Jan 31 07:07:40 compute-2 sudo[82755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-2 sudo[82755]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-2 sudo[82780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:40 compute-2 sudo[82780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-2 sudo[82780]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-2 sudo[82805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:07:40 compute-2 sudo[82805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-2 sudo[82805]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-2 sudo[82830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:40 compute-2 sudo[82830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-2 sudo[82830]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-2 sudo[82855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf.new
Jan 31 07:07:40 compute-2 sudo[82855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-2 sudo[82855]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-2 ceph-mon[77282]: pgmap v95: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:40 compute-2 ceph-mon[77282]: Adjusting osd_memory_target on compute-2 to 127.9M
Jan 31 07:07:40 compute-2 ceph-mon[77282]: Unable to set osd_memory_target on compute-2 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Jan 31 07:07:40 compute-2 ceph-mon[77282]: Updating compute-0:/etc/ceph/ceph.conf
Jan 31 07:07:40 compute-2 ceph-mon[77282]: Updating compute-1:/etc/ceph/ceph.conf
Jan 31 07:07:40 compute-2 ceph-mon[77282]: Updating compute-2:/etc/ceph/ceph.conf
Jan 31 07:07:40 compute-2 ceph-mon[77282]: 4.3 scrub starts
Jan 31 07:07:40 compute-2 ceph-mon[77282]: 4.3 scrub ok
Jan 31 07:07:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2153731720' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 31 07:07:40 compute-2 ceph-mon[77282]: osdmap e33: 3 total, 3 up, 3 in
Jan 31 07:07:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e34 e34: 3 total, 3 up, 3 in
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.19( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.1f( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.1e( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.a( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.18( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.c( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.d( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.b( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.a( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.c( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.e( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.8( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.b( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.f( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.9( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.d( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.10( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.16( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.11( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.18( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.17( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.1f( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.19( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.17( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.11( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.16( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.15( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.10( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.13( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.12( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.14( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.15( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.13( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.12( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.0( empty local-lis/les=32/34 n=0 ec=16/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.14( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.6( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.1( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.3( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.7( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.6( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.5( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.0( empty local-lis/les=32/34 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.1( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.7( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.4( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.2( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.5( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.3( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.4( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.2( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.e( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.8( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.f( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.9( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.1a( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.1a( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.1e( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.1c( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.1d( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.1b( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.1d( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[3.1c( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32) [2] r=0 lpr=33 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 34 pg[5.1b( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=0 lpr=33 pi=[20,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:40 compute-2 sudo[82903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:40 compute-2 sudo[82903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-2 sudo[82903]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 31 07:07:40 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 31 07:07:40 compute-2 sudo[82928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf.new
Jan 31 07:07:40 compute-2 sudo[82928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-2 sudo[82928]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-2 sudo[82953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:40 compute-2 sudo[82953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-2 sudo[82953]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-2 sudo[82978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf.new
Jan 31 07:07:40 compute-2 sudo[82978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-2 sudo[82978]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-2 sudo[83003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:40 compute-2 sudo[83003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-2 sudo[83003]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:41 compute-2 sudo[83028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf.new /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 07:07:41 compute-2 sudo[83028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:41 compute-2 sudo[83028]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:41 compute-2 ceph-mon[77282]: Updating compute-2:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 07:07:41 compute-2 ceph-mon[77282]: Updating compute-0:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 07:07:41 compute-2 ceph-mon[77282]: Updating compute-1:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 07:07:41 compute-2 ceph-mon[77282]: 4.4 scrub starts
Jan 31 07:07:41 compute-2 ceph-mon[77282]: 4.4 scrub ok
Jan 31 07:07:41 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4176655533' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 31 07:07:41 compute-2 ceph-mon[77282]: osdmap e34: 3 total, 3 up, 3 in
Jan 31 07:07:41 compute-2 ceph-mon[77282]: 3.18 scrub starts
Jan 31 07:07:41 compute-2 ceph-mon[77282]: 3.18 scrub ok
Jan 31 07:07:41 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:41 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:41 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:41 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:41 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:41 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:41 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:41 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:07:41 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:07:41 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e35 e35: 3 total, 3 up, 3 in
Jan 31 07:07:41 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 31 07:07:41 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 31 07:07:42 compute-2 ceph-mon[77282]: pgmap v98: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:42 compute-2 ceph-mon[77282]: 4.5 scrub starts
Jan 31 07:07:42 compute-2 ceph-mon[77282]: 4.5 scrub ok
Jan 31 07:07:42 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4176655533' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 31 07:07:42 compute-2 ceph-mon[77282]: 2.6 scrub starts
Jan 31 07:07:42 compute-2 ceph-mon[77282]: osdmap e35: 3 total, 3 up, 3 in
Jan 31 07:07:42 compute-2 ceph-mon[77282]: 2.6 scrub ok
Jan 31 07:07:42 compute-2 ceph-mon[77282]: 5.11 scrub starts
Jan 31 07:07:42 compute-2 ceph-mon[77282]: 5.11 scrub ok
Jan 31 07:07:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1645037556' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 31 07:07:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e36 e36: 3 total, 3 up, 3 in
Jan 31 07:07:44 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 31 07:07:44 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 31 07:07:45 compute-2 ceph-mon[77282]: pgmap v100: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:45 compute-2 ceph-mon[77282]: 2.7 scrub starts
Jan 31 07:07:45 compute-2 ceph-mon[77282]: 2.7 scrub ok
Jan 31 07:07:45 compute-2 ceph-mon[77282]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 07:07:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1645037556' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 31 07:07:45 compute-2 ceph-mon[77282]: osdmap e36: 3 total, 3 up, 3 in
Jan 31 07:07:45 compute-2 ceph-mon[77282]: 3.17 scrub starts
Jan 31 07:07:45 compute-2 ceph-mon[77282]: 3.17 scrub ok
Jan 31 07:07:45 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 5.1e deep-scrub starts
Jan 31 07:07:45 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 5.1e deep-scrub ok
Jan 31 07:07:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e37 e37: 3 total, 3 up, 3 in
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.19( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.754692078s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.028501511s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.18( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.754688263s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.028551102s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.1e( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.754674911s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.028541565s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.1f( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.754626274s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.028511047s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.c( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.754758835s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.028648376s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.1f( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.754493713s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.028511047s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.1e( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.754519463s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.028541565s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.c( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.754617691s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.028648376s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.19( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.754485130s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.028501511s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.a( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.754293442s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.028581619s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.a( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.754390717s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.028680801s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.c( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.754404068s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.028717041s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.b( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.754338264s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.028659821s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.a( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.754364967s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.028680801s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.a( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.754266739s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.028581619s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.c( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.754383087s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.028717041s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.b( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.754313469s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.028659821s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.d( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759757042s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034200668s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.d( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759735107s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034200668s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.f( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759569168s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034061432s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.9( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759685516s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034198761s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.f( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759543419s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034061432s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.9( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759658813s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034198761s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.17( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759625435s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034292221s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.10( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759536743s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034212112s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.16( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759541512s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034231186s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.17( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759602547s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034292221s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.16( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759515762s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034231186s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.10( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759509087s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034212112s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.18( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.754467964s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.028551102s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.1e( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.760161400s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.035001755s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.19( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759438515s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034322739s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.1e( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.760131836s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.035001755s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.18( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759324074s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034273148s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.18( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759288788s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034273148s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.19( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759380341s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034322739s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.1f( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759270668s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034324646s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.17( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759255409s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034351349s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.11( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759265900s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034391403s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.17( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759227753s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034351349s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.16( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759262085s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034404755s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.1f( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759185791s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034324646s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.10( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759339333s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034545898s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.11( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759180069s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034391403s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.10( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759286880s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034545898s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.14( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759322166s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034595490s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.16( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759147644s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034404755s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.13( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759385109s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034679413s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.14( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759280205s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034595490s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.13( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759363174s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034679413s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.14( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759237289s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034713745s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.15( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759222984s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034662247s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.12( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759159088s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034662247s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.14( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759213448s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034713745s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.12( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759127617s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034662247s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.15( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759146690s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034662247s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.1( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759107590s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034761429s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.6( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759040833s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034719467s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.7( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759127617s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034816742s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.1( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759075165s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034761429s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.6( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759015083s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034719467s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.7( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759101868s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034816742s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.5( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759054184s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034851074s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.3( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.758986473s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034805298s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.5( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.759028435s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034851074s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.3( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.758960724s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034805298s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.6( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.758938789s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034837723s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.7( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.758803368s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034872055s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.3( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.758838654s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034912109s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.3( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.758798599s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034912109s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.2( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.758799553s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034879684s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.6( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.758885384s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034837723s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.7( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.758741379s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034872055s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.1( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.758690834s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034864426s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.2( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.758699417s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034879684s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.5( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.758627892s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034910202s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.2( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.758650780s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034938812s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.4( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.758626938s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034955978s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.f( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.758640289s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.034990311s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.2( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.758603096s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034938812s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.4( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.758596420s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034955978s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.f( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.758617401s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034990311s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.5( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.758592606s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034910202s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.1c( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.758571625s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.035060883s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.1d( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.758562088s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.035074234s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.1c( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.758554459s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.035060883s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.1d( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.758538246s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.035074234s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.1b( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.758490562s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.035104752s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.1c( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.758460045s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active pruub 28.035100937s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[5.1b( empty local-lis/les=32/34 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.758472443s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.035104752s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.1( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.758618355s) [0] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.034864426s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[3.1c( empty local-lis/les=32/34 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37 pruub=10.758399010s) [1] r=-1 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 28.035100937s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[4.14( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [2] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[4.8( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [2] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[4.15( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [2] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[4.9( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [2] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[4.2( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [2] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[4.1( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [2] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[4.3( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [2] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[4.1d( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [2] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[4.1c( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [2] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[4.6( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [2] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[4.1f( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [2] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[4.19( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [2] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[2.1d( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [2] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[2.1c( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [2] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[2.b( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [2] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[2.a( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [2] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[2.5( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [2] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[2.c( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [2] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[2.d( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [2] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[2.10( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [2] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[2.f( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [2] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[2.15( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [2] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[2.12( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [2] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[2.18( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [2] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[2.13( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [2] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 37 pg[2.1b( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [2] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-2 ceph-mon[77282]: 4.6 scrub starts
Jan 31 07:07:46 compute-2 ceph-mon[77282]: 4.6 scrub ok
Jan 31 07:07:46 compute-2 ceph-mon[77282]: pgmap v102: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:46 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:07:46 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:07:46 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:07:46 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:07:46 compute-2 ceph-mon[77282]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 31 07:07:46 compute-2 ceph-mon[77282]: Cluster is now healthy
Jan 31 07:07:46 compute-2 ceph-mon[77282]: 5.1e deep-scrub starts
Jan 31 07:07:46 compute-2 ceph-mon[77282]: 5.1e deep-scrub ok
Jan 31 07:07:46 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:07:46 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:07:46 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:07:46 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:07:46 compute-2 ceph-mon[77282]: osdmap e37: 3 total, 3 up, 3 in
Jan 31 07:07:47 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e38 e38: 3 total, 3 up, 3 in
Jan 31 07:07:47 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 38 pg[2.b( empty local-lis/les=37/38 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [2] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 38 pg[2.a( empty local-lis/les=37/38 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [2] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 38 pg[2.d( empty local-lis/les=37/38 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [2] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 38 pg[2.c( empty local-lis/les=37/38 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [2] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 38 pg[4.9( empty local-lis/les=37/38 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [2] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 38 pg[4.8( empty local-lis/les=37/38 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [2] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 38 pg[2.10( empty local-lis/les=37/38 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [2] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 38 pg[2.12( empty local-lis/les=37/38 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [2] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 38 pg[2.18( empty local-lis/les=37/38 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [2] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 38 pg[4.1f( empty local-lis/les=37/38 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [2] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 38 pg[4.14( empty local-lis/les=37/38 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [2] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 38 pg[2.13( empty local-lis/les=37/38 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [2] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 38 pg[2.15( empty local-lis/les=37/38 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [2] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 38 pg[4.15( empty local-lis/les=37/38 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [2] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 38 pg[2.f( empty local-lis/les=37/38 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [2] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 38 pg[4.2( empty local-lis/les=37/38 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [2] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 38 pg[4.1( empty local-lis/les=37/38 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [2] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 38 pg[4.6( empty local-lis/les=37/38 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [2] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 38 pg[2.5( empty local-lis/les=37/38 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [2] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 38 pg[4.19( empty local-lis/les=37/38 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [2] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 38 pg[4.3( empty local-lis/les=37/38 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [2] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 38 pg[4.1c( empty local-lis/les=37/38 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [2] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 38 pg[4.1d( empty local-lis/les=37/38 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [2] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 38 pg[2.1b( empty local-lis/les=37/38 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [2] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 38 pg[2.1d( empty local-lis/les=37/38 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [2] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 38 pg[2.1c( empty local-lis/les=37/38 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [2] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-2 ceph-mon[77282]: 4.7 scrub starts
Jan 31 07:07:47 compute-2 ceph-mon[77282]: 4.7 scrub ok
Jan 31 07:07:47 compute-2 ceph-mon[77282]: 2.8 scrub starts
Jan 31 07:07:47 compute-2 ceph-mon[77282]: 2.8 scrub ok
Jan 31 07:07:47 compute-2 ceph-mon[77282]: osdmap e38: 3 total, 3 up, 3 in
Jan 31 07:07:48 compute-2 ceph-mon[77282]: pgmap v105: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:49 compute-2 ceph-mon[77282]: 2.11 scrub starts
Jan 31 07:07:49 compute-2 ceph-mon[77282]: 2.11 scrub ok
Jan 31 07:07:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3678408126' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 31 07:07:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3678408126' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 07:07:49 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Jan 31 07:07:49 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Jan 31 07:07:50 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 31 07:07:50 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 31 07:07:50 compute-2 ceph-mon[77282]: pgmap v106: 131 pgs: 30 activating, 101 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:50 compute-2 ceph-mon[77282]: 4.b scrub starts
Jan 31 07:07:50 compute-2 ceph-mon[77282]: 4.b scrub ok
Jan 31 07:07:50 compute-2 ceph-mon[77282]: 2.14 deep-scrub starts
Jan 31 07:07:50 compute-2 ceph-mon[77282]: 2.14 deep-scrub ok
Jan 31 07:07:50 compute-2 ceph-mon[77282]: 3.15 scrub starts
Jan 31 07:07:50 compute-2 ceph-mon[77282]: 3.15 scrub ok
Jan 31 07:07:50 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:50 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:50 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3677494513' entity='client.admin' 
Jan 31 07:07:50 compute-2 sudo[83053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:50 compute-2 sudo[83053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:50 compute-2 sudo[83053]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:50 compute-2 sudo[83078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:07:50 compute-2 sudo[83078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:50 compute-2 sudo[83078]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:51 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Jan 31 07:07:51 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Jan 31 07:07:51 compute-2 ceph-mon[77282]: 5.13 scrub starts
Jan 31 07:07:51 compute-2 ceph-mon[77282]: 5.13 scrub ok
Jan 31 07:07:51 compute-2 ceph-mon[77282]: Reconfiguring mon.compute-0 (monmap changed)...
Jan 31 07:07:51 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 07:07:51 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 07:07:51 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:51 compute-2 ceph-mon[77282]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 07:07:51 compute-2 ceph-mon[77282]: pgmap v107: 131 pgs: 30 activating, 101 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:51 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:51 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:51 compute-2 ceph-mon[77282]: Reconfiguring mgr.compute-0.hhuoua (monmap changed)...
Jan 31 07:07:51 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hhuoua", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 07:07:51 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 07:07:51 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:51 compute-2 ceph-mon[77282]: Reconfiguring daemon mgr.compute-0.hhuoua on compute-0
Jan 31 07:07:52 compute-2 ceph-mon[77282]: 3.11 scrub starts
Jan 31 07:07:52 compute-2 ceph-mon[77282]: 3.11 scrub ok
Jan 31 07:07:52 compute-2 ceph-mon[77282]: from='client.14310 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:07:52 compute-2 ceph-mon[77282]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 31 07:07:52 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:52 compute-2 ceph-mon[77282]: Saving service ingress.rgw.default spec with placement count:2
Jan 31 07:07:52 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:52 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:52 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:52 compute-2 ceph-mon[77282]: Reconfiguring crash.compute-0 (monmap changed)...
Jan 31 07:07:52 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 07:07:52 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:52 compute-2 ceph-mon[77282]: Reconfiguring daemon crash.compute-0 on compute-0
Jan 31 07:07:54 compute-2 ceph-mon[77282]: 4.f scrub starts
Jan 31 07:07:54 compute-2 ceph-mon[77282]: 4.f scrub ok
Jan 31 07:07:54 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:54 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:54 compute-2 ceph-mon[77282]: Reconfiguring osd.0 (monmap changed)...
Jan 31 07:07:54 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 31 07:07:54 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:54 compute-2 ceph-mon[77282]: Reconfiguring daemon osd.0 on compute-0
Jan 31 07:07:54 compute-2 ceph-mon[77282]: pgmap v108: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:54 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 3.e scrub starts
Jan 31 07:07:54 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 3.e scrub ok
Jan 31 07:07:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).mds e2 new map
Jan 31 07:07:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T07:07:55.106295+0000
                                           modified        2026-01-31T07:07:55.106335+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Jan 31 07:07:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e39 e39: 3 total, 3 up, 3 in
Jan 31 07:07:55 compute-2 ceph-mon[77282]: 4.10 scrub starts
Jan 31 07:07:55 compute-2 ceph-mon[77282]: 4.10 scrub ok
Jan 31 07:07:55 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:55 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:55 compute-2 ceph-mon[77282]: Reconfiguring crash.compute-1 (monmap changed)...
Jan 31 07:07:55 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 07:07:55 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:55 compute-2 ceph-mon[77282]: Reconfiguring daemon crash.compute-1 on compute-1
Jan 31 07:07:55 compute-2 ceph-mon[77282]: 3.e scrub starts
Jan 31 07:07:55 compute-2 ceph-mon[77282]: 3.e scrub ok
Jan 31 07:07:55 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 31 07:07:55 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 31 07:07:55 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 31 07:07:55 compute-2 ceph-mon[77282]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 31 07:07:55 compute-2 ceph-mon[77282]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 31 07:07:55 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 31 07:07:55 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 31 07:07:56 compute-2 ceph-mon[77282]: 4.11 deep-scrub starts
Jan 31 07:07:56 compute-2 ceph-mon[77282]: 4.11 deep-scrub ok
Jan 31 07:07:56 compute-2 ceph-mon[77282]: from='client.14316 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:07:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 31 07:07:56 compute-2 ceph-mon[77282]: pgmap v109: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:56 compute-2 ceph-mon[77282]: osdmap e39: 3 total, 3 up, 3 in
Jan 31 07:07:56 compute-2 ceph-mon[77282]: fsmap cephfs:0
Jan 31 07:07:56 compute-2 ceph-mon[77282]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 31 07:07:56 compute-2 ceph-mon[77282]: 4.12 scrub starts
Jan 31 07:07:56 compute-2 ceph-mon[77282]: 4.12 scrub ok
Jan 31 07:07:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:56 compute-2 ceph-mon[77282]: 5.b scrub starts
Jan 31 07:07:56 compute-2 ceph-mon[77282]: 5.b scrub ok
Jan 31 07:07:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 31 07:07:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:57 compute-2 ceph-mon[77282]: 2.16 deep-scrub starts
Jan 31 07:07:57 compute-2 ceph-mon[77282]: 2.16 deep-scrub ok
Jan 31 07:07:57 compute-2 ceph-mon[77282]: Reconfiguring osd.1 (monmap changed)...
Jan 31 07:07:57 compute-2 ceph-mon[77282]: Reconfiguring daemon osd.1 on compute-1
Jan 31 07:07:57 compute-2 ceph-mon[77282]: 4.16 scrub starts
Jan 31 07:07:57 compute-2 ceph-mon[77282]: 4.16 scrub ok
Jan 31 07:07:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 07:07:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 07:07:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:57 compute-2 sudo[83103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:57 compute-2 sudo[83103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:57 compute-2 sudo[83103]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:57 compute-2 sudo[83128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:07:57 compute-2 sudo[83128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:57 compute-2 sudo[83128]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:57 compute-2 sudo[83153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:57 compute-2 sudo[83153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:57 compute-2 sudo[83153]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:58 compute-2 sudo[83178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:07:58 compute-2 sudo[83178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:58 compute-2 podman[83218]: 2026-01-31 07:07:58.281660688 +0000 UTC m=+0.044895237 container create c1ce93a425439525159b48d1e23ced2f9a4de992bdf7c6f4107ccc868dd837fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ritchie, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 07:07:58 compute-2 systemd[72614]: Starting Mark boot as successful...
Jan 31 07:07:58 compute-2 systemd[1]: Started libpod-conmon-c1ce93a425439525159b48d1e23ced2f9a4de992bdf7c6f4107ccc868dd837fd.scope.
Jan 31 07:07:58 compute-2 systemd[72614]: Finished Mark boot as successful.
Jan 31 07:07:58 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:07:58 compute-2 podman[83218]: 2026-01-31 07:07:58.333729707 +0000 UTC m=+0.096964276 container init c1ce93a425439525159b48d1e23ced2f9a4de992bdf7c6f4107ccc868dd837fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:07:58 compute-2 podman[83218]: 2026-01-31 07:07:58.341097297 +0000 UTC m=+0.104331846 container start c1ce93a425439525159b48d1e23ced2f9a4de992bdf7c6f4107ccc868dd837fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ritchie, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:07:58 compute-2 podman[83218]: 2026-01-31 07:07:58.344481728 +0000 UTC m=+0.107716277 container attach c1ce93a425439525159b48d1e23ced2f9a4de992bdf7c6f4107ccc868dd837fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 07:07:58 compute-2 strange_ritchie[83236]: 167 167
Jan 31 07:07:58 compute-2 systemd[1]: libpod-c1ce93a425439525159b48d1e23ced2f9a4de992bdf7c6f4107ccc868dd837fd.scope: Deactivated successfully.
Jan 31 07:07:58 compute-2 podman[83218]: 2026-01-31 07:07:58.346313698 +0000 UTC m=+0.109548257 container died c1ce93a425439525159b48d1e23ced2f9a4de992bdf7c6f4107ccc868dd837fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 07:07:58 compute-2 podman[83218]: 2026-01-31 07:07:58.262996702 +0000 UTC m=+0.026231271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:58 compute-2 systemd[1]: var-lib-containers-storage-overlay-315fbd29f1a384c7f94c5792ff91163daf58b28f842dba30f979fa56b4612427-merged.mount: Deactivated successfully.
Jan 31 07:07:58 compute-2 podman[83218]: 2026-01-31 07:07:58.377444711 +0000 UTC m=+0.140679260 container remove c1ce93a425439525159b48d1e23ced2f9a4de992bdf7c6f4107ccc868dd837fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ritchie, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 07:07:58 compute-2 systemd[1]: libpod-conmon-c1ce93a425439525159b48d1e23ced2f9a4de992bdf7c6f4107ccc868dd837fd.scope: Deactivated successfully.
Jan 31 07:07:58 compute-2 sudo[83178]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:58 compute-2 sudo[83254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:58 compute-2 sudo[83254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:58 compute-2 sudo[83254]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:58 compute-2 sudo[83279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:07:58 compute-2 sudo[83279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:58 compute-2 sudo[83279]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:58 compute-2 sudo[83304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:58 compute-2 sudo[83304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:58 compute-2 sudo[83304]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:58 compute-2 ceph-mon[77282]: from='client.14322 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:07:58 compute-2 ceph-mon[77282]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 31 07:07:58 compute-2 ceph-mon[77282]: Reconfiguring mon.compute-1 (monmap changed)...
Jan 31 07:07:58 compute-2 ceph-mon[77282]: Reconfiguring daemon mon.compute-1 on compute-1
Jan 31 07:07:58 compute-2 ceph-mon[77282]: pgmap v111: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:58 compute-2 ceph-mon[77282]: 4.17 scrub starts
Jan 31 07:07:58 compute-2 ceph-mon[77282]: 4.17 scrub ok
Jan 31 07:07:58 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:58 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:58 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 07:07:58 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 07:07:58 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:58 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:58 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:58 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.wmgest", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 07:07:58 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 07:07:58 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:58 compute-2 sudo[83329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:07:58 compute-2 sudo[83329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:58 compute-2 podman[83371]: 2026-01-31 07:07:58.844696422 +0000 UTC m=+0.041961117 container create e8121d93643c6b345b56a9870522c89c3c18b4f65b6edd853e55a840ba3eeeef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:07:58 compute-2 systemd[1]: Started libpod-conmon-e8121d93643c6b345b56a9870522c89c3c18b4f65b6edd853e55a840ba3eeeef.scope.
Jan 31 07:07:58 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:07:58 compute-2 podman[83371]: 2026-01-31 07:07:58.914850171 +0000 UTC m=+0.112114886 container init e8121d93643c6b345b56a9870522c89c3c18b4f65b6edd853e55a840ba3eeeef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:07:58 compute-2 podman[83371]: 2026-01-31 07:07:58.919565978 +0000 UTC m=+0.116830673 container start e8121d93643c6b345b56a9870522c89c3c18b4f65b6edd853e55a840ba3eeeef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:07:58 compute-2 flamboyant_volhard[83388]: 167 167
Jan 31 07:07:58 compute-2 systemd[1]: libpod-e8121d93643c6b345b56a9870522c89c3c18b4f65b6edd853e55a840ba3eeeef.scope: Deactivated successfully.
Jan 31 07:07:58 compute-2 podman[83371]: 2026-01-31 07:07:58.923963558 +0000 UTC m=+0.121228283 container attach e8121d93643c6b345b56a9870522c89c3c18b4f65b6edd853e55a840ba3eeeef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 07:07:58 compute-2 podman[83371]: 2026-01-31 07:07:58.924406199 +0000 UTC m=+0.121670894 container died e8121d93643c6b345b56a9870522c89c3c18b4f65b6edd853e55a840ba3eeeef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 07:07:58 compute-2 podman[83371]: 2026-01-31 07:07:58.830763315 +0000 UTC m=+0.028028030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:58 compute-2 systemd[1]: var-lib-containers-storage-overlay-2827c2bd6fe9cb959753b23d438c5e2c3db053fc1a988f36c9f9f71c2ecc3e80-merged.mount: Deactivated successfully.
Jan 31 07:07:58 compute-2 podman[83371]: 2026-01-31 07:07:58.956363185 +0000 UTC m=+0.153627880 container remove e8121d93643c6b345b56a9870522c89c3c18b4f65b6edd853e55a840ba3eeeef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 07:07:58 compute-2 systemd[1]: libpod-conmon-e8121d93643c6b345b56a9870522c89c3c18b4f65b6edd853e55a840ba3eeeef.scope: Deactivated successfully.
Jan 31 07:07:58 compute-2 sudo[83329]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:59 compute-2 sudo[83406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:59 compute-2 sudo[83406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:59 compute-2 sudo[83406]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:59 compute-2 sudo[83431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:07:59 compute-2 sudo[83431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:59 compute-2 sudo[83431]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:59 compute-2 sudo[83456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:59 compute-2 sudo[83456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:59 compute-2 sudo[83456]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:59 compute-2 sudo[83481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 07:07:59 compute-2 sudo[83481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:59 compute-2 podman[83578]: 2026-01-31 07:07:59.595324235 +0000 UTC m=+0.048423653 container exec 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:07:59 compute-2 ceph-mon[77282]: Reconfiguring mon.compute-2 (monmap changed)...
Jan 31 07:07:59 compute-2 ceph-mon[77282]: Reconfiguring daemon mon.compute-2 on compute-2
Jan 31 07:07:59 compute-2 ceph-mon[77282]: 2.17 scrub starts
Jan 31 07:07:59 compute-2 ceph-mon[77282]: 2.17 scrub ok
Jan 31 07:07:59 compute-2 ceph-mon[77282]: Reconfiguring mgr.compute-2.wmgest (monmap changed)...
Jan 31 07:07:59 compute-2 ceph-mon[77282]: Reconfiguring daemon mgr.compute-2.wmgest on compute-2
Jan 31 07:07:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:59 compute-2 podman[83578]: 2026-01-31 07:07:59.722450046 +0000 UTC m=+0.175549444 container exec_died 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 07:07:59 compute-2 sudo[83481]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:00 compute-2 ceph-mon[77282]: pgmap v112: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:00 compute-2 ceph-mon[77282]: 4.1e scrub starts
Jan 31 07:08:00 compute-2 ceph-mon[77282]: 4.1e scrub ok
Jan 31 07:08:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1696344557' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 31 07:08:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1696344557' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 31 07:08:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:08:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:08:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:08:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:01 compute-2 ceph-mon[77282]: 2.1a deep-scrub starts
Jan 31 07:08:01 compute-2 ceph-mon[77282]: 2.1a deep-scrub ok
Jan 31 07:08:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3578243099' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 31 07:08:02 compute-2 ceph-mon[77282]: pgmap v113: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2320746925' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:08:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:04 compute-2 ceph-mon[77282]: pgmap v114: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1362729985' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 31 07:08:05 compute-2 ceph-mon[77282]: 5.1c scrub starts
Jan 31 07:08:05 compute-2 ceph-mon[77282]: 5.1c scrub ok
Jan 31 07:08:05 compute-2 sudo[83665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:05 compute-2 sudo[83665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:05 compute-2 sudo[83665]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:06 compute-2 sudo[83690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:08:06 compute-2 sudo[83690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:06 compute-2 sudo[83690]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:06 compute-2 sudo[83715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:06 compute-2 sudo[83715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:06 compute-2 sudo[83715]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:06 compute-2 sudo[83740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:08:06 compute-2 sudo[83740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:06 compute-2 podman[83804]: 2026-01-31 07:08:06.403638787 +0000 UTC m=+0.048223286 container create d12f386875c6e7964c7452ff14c85479acf78823c465276519139342122960a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hawking, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:08:06 compute-2 systemd[1]: Started libpod-conmon-d12f386875c6e7964c7452ff14c85479acf78823c465276519139342122960a6.scope.
Jan 31 07:08:06 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:08:06 compute-2 podman[83804]: 2026-01-31 07:08:06.478201177 +0000 UTC m=+0.122785706 container init d12f386875c6e7964c7452ff14c85479acf78823c465276519139342122960a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hawking, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 07:08:06 compute-2 podman[83804]: 2026-01-31 07:08:06.483544261 +0000 UTC m=+0.128128760 container start d12f386875c6e7964c7452ff14c85479acf78823c465276519139342122960a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hawking, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:08:06 compute-2 podman[83804]: 2026-01-31 07:08:06.38819469 +0000 UTC m=+0.032779199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:08:06 compute-2 podman[83804]: 2026-01-31 07:08:06.48646434 +0000 UTC m=+0.131048859 container attach d12f386875c6e7964c7452ff14c85479acf78823c465276519139342122960a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 07:08:06 compute-2 sleepy_hawking[83819]: 167 167
Jan 31 07:08:06 compute-2 systemd[1]: libpod-d12f386875c6e7964c7452ff14c85479acf78823c465276519139342122960a6.scope: Deactivated successfully.
Jan 31 07:08:06 compute-2 podman[83804]: 2026-01-31 07:08:06.487420496 +0000 UTC m=+0.132004995 container died d12f386875c6e7964c7452ff14c85479acf78823c465276519139342122960a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 07:08:06 compute-2 systemd[1]: var-lib-containers-storage-overlay-d83755da9366b5e39988bbbf8c2a5c9b4edeeaef8253a71681955d22af9b6ac9-merged.mount: Deactivated successfully.
Jan 31 07:08:06 compute-2 podman[83804]: 2026-01-31 07:08:06.520850851 +0000 UTC m=+0.165435350 container remove d12f386875c6e7964c7452ff14c85479acf78823c465276519139342122960a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 07:08:06 compute-2 systemd[1]: libpod-conmon-d12f386875c6e7964c7452ff14c85479acf78823c465276519139342122960a6.scope: Deactivated successfully.
Jan 31 07:08:06 compute-2 systemd[1]: Reloading.
Jan 31 07:08:06 compute-2 systemd-sysv-generator[83867]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:08:06 compute-2 systemd-rc-local-generator[83864]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:08:06 compute-2 ceph-mon[77282]: pgmap v115: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:06 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:06 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:06 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.kddbks", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 31 07:08:06 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.kddbks", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 07:08:06 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:06 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:06 compute-2 ceph-mon[77282]: 5.19 scrub starts
Jan 31 07:08:06 compute-2 ceph-mon[77282]: 5.19 scrub ok
Jan 31 07:08:06 compute-2 systemd[1]: Reloading.
Jan 31 07:08:06 compute-2 systemd-sysv-generator[83909]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:08:06 compute-2 systemd-rc-local-generator[83902]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:08:07 compute-2 systemd[1]: Starting Ceph rgw.rgw.compute-2.kddbks for f70fcd2a-dcb4-5f89-a4ba-79a09959083b...
Jan 31 07:08:07 compute-2 podman[83965]: 2026-01-31 07:08:07.235536151 +0000 UTC m=+0.046372077 container create a1621e2117a3ca1954845fc6391c11771e9c6fdcef45bd9f6df24b7fcfa2fcaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-rgw-rgw-compute-2-kddbks, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Jan 31 07:08:07 compute-2 podman[83965]: 2026-01-31 07:08:07.220918435 +0000 UTC m=+0.031754381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:08:07 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6630f2f3371fb7e8aa0ebddb00bc5740a68eeea13929d979597ce382a9b38f80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:07 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6630f2f3371fb7e8aa0ebddb00bc5740a68eeea13929d979597ce382a9b38f80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:07 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6630f2f3371fb7e8aa0ebddb00bc5740a68eeea13929d979597ce382a9b38f80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:07 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6630f2f3371fb7e8aa0ebddb00bc5740a68eeea13929d979597ce382a9b38f80/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-2.kddbks supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:07 compute-2 podman[83965]: 2026-01-31 07:08:07.339834025 +0000 UTC m=+0.150669981 container init a1621e2117a3ca1954845fc6391c11771e9c6fdcef45bd9f6df24b7fcfa2fcaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-rgw-rgw-compute-2-kddbks, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 07:08:07 compute-2 podman[83965]: 2026-01-31 07:08:07.344188633 +0000 UTC m=+0.155024559 container start a1621e2117a3ca1954845fc6391c11771e9c6fdcef45bd9f6df24b7fcfa2fcaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-rgw-rgw-compute-2-kddbks, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:08:07 compute-2 bash[83965]: a1621e2117a3ca1954845fc6391c11771e9c6fdcef45bd9f6df24b7fcfa2fcaf
Jan 31 07:08:07 compute-2 systemd[1]: Started Ceph rgw.rgw.compute-2.kddbks for f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 07:08:07 compute-2 sudo[83740]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:07 compute-2 radosgw[83985]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 31 07:08:07 compute-2 radosgw[83985]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Jan 31 07:08:07 compute-2 radosgw[83985]: framework: beast
Jan 31 07:08:07 compute-2 radosgw[83985]: framework conf key: endpoint, val: 192.168.122.102:8082
Jan 31 07:08:07 compute-2 radosgw[83985]: init_numa not setting numa affinity
Jan 31 07:08:07 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Jan 31 07:08:07 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Jan 31 07:08:07 compute-2 ceph-mon[77282]: Deploying daemon rgw.rgw.compute-2.kddbks on compute-2
Jan 31 07:08:07 compute-2 ceph-mon[77282]: from='client.14352 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 07:08:07 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:07 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:07 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:07 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.zjvjex", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 31 07:08:07 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.zjvjex", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 07:08:07 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:07 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e40 e40: 3 total, 3 up, 3 in
Jan 31 07:08:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Jan 31 07:08:08 compute-2 ceph-mon[77282]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/2780650631' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 31 07:08:08 compute-2 ceph-mon[77282]: pgmap v116: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:08 compute-2 ceph-mon[77282]: Deploying daemon rgw.rgw.compute-1.zjvjex on compute-1
Jan 31 07:08:08 compute-2 ceph-mon[77282]: 5.8 scrub starts
Jan 31 07:08:08 compute-2 ceph-mon[77282]: 5.8 scrub ok
Jan 31 07:08:08 compute-2 ceph-mon[77282]: from='client.14361 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 07:08:08 compute-2 ceph-mon[77282]: osdmap e40: 3 total, 3 up, 3 in
Jan 31 07:08:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2780650631' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 31 07:08:08 compute-2 ceph-mon[77282]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 31 07:08:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e41 e41: 3 total, 3 up, 3 in
Jan 31 07:08:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.njduba", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 31 07:08:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.njduba", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 07:08:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:10 compute-2 ceph-mon[77282]: Deploying daemon rgw.rgw.compute-0.njduba on compute-0
Jan 31 07:08:10 compute-2 ceph-mon[77282]: pgmap v118: 132 pgs: 1 unknown, 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:10 compute-2 ceph-mon[77282]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 31 07:08:10 compute-2 ceph-mon[77282]: osdmap e41: 3 total, 3 up, 3 in
Jan 31 07:08:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e42 e42: 3 total, 3 up, 3 in
Jan 31 07:08:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Jan 31 07:08:10 compute-2 ceph-mon[77282]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/2780650631' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 07:08:10 compute-2 sudo[84045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:10 compute-2 sudo[84045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:10 compute-2 sudo[84045]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:11 compute-2 sudo[84070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:08:11 compute-2 sudo[84070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:11 compute-2 sudo[84070]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:11 compute-2 sudo[84095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:11 compute-2 sudo[84095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:11 compute-2 sudo[84095]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:11 compute-2 sudo[84120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:08:11 compute-2 sudo[84120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:11 compute-2 ceph-mon[77282]: from='client.14373 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 07:08:11 compute-2 ceph-mon[77282]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 07:08:11 compute-2 ceph-mon[77282]: osdmap e42: 3 total, 3 up, 3 in
Jan 31 07:08:11 compute-2 ceph-mon[77282]: from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 07:08:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1217523012' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 07:08:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2780650631' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 07:08:11 compute-2 ceph-mon[77282]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 07:08:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:11 compute-2 ceph-mon[77282]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 31 07:08:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.ihffma", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 31 07:08:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.ihffma", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 07:08:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:11 compute-2 podman[84185]: 2026-01-31 07:08:11.362643591 +0000 UTC m=+0.035042069 container create a6c1331d013311c80a82eb8ce65a83d0aed9f351b353720a760f7a50ad1e2fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jackson, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:08:11 compute-2 systemd[1]: Started libpod-conmon-a6c1331d013311c80a82eb8ce65a83d0aed9f351b353720a760f7a50ad1e2fd2.scope.
Jan 31 07:08:11 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:08:11 compute-2 podman[84185]: 2026-01-31 07:08:11.432922534 +0000 UTC m=+0.105321032 container init a6c1331d013311c80a82eb8ce65a83d0aed9f351b353720a760f7a50ad1e2fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jackson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 07:08:11 compute-2 podman[84185]: 2026-01-31 07:08:11.438742811 +0000 UTC m=+0.111141289 container start a6c1331d013311c80a82eb8ce65a83d0aed9f351b353720a760f7a50ad1e2fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 07:08:11 compute-2 podman[84185]: 2026-01-31 07:08:11.442191425 +0000 UTC m=+0.114589903 container attach a6c1331d013311c80a82eb8ce65a83d0aed9f351b353720a760f7a50ad1e2fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jackson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 07:08:11 compute-2 modest_jackson[84202]: 167 167
Jan 31 07:08:11 compute-2 systemd[1]: libpod-a6c1331d013311c80a82eb8ce65a83d0aed9f351b353720a760f7a50ad1e2fd2.scope: Deactivated successfully.
Jan 31 07:08:11 compute-2 podman[84185]: 2026-01-31 07:08:11.346988518 +0000 UTC m=+0.019387016 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:08:11 compute-2 podman[84185]: 2026-01-31 07:08:11.444272181 +0000 UTC m=+0.116670659 container died a6c1331d013311c80a82eb8ce65a83d0aed9f351b353720a760f7a50ad1e2fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 07:08:11 compute-2 systemd[1]: var-lib-containers-storage-overlay-1c60ab5829513a5c125226edd12a92541542cdfaf7adfe8c37e29d9bda628dd0-merged.mount: Deactivated successfully.
Jan 31 07:08:11 compute-2 podman[84185]: 2026-01-31 07:08:11.481235652 +0000 UTC m=+0.153634130 container remove a6c1331d013311c80a82eb8ce65a83d0aed9f351b353720a760f7a50ad1e2fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:08:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e43 e43: 3 total, 3 up, 3 in
Jan 31 07:08:11 compute-2 systemd[1]: libpod-conmon-a6c1331d013311c80a82eb8ce65a83d0aed9f351b353720a760f7a50ad1e2fd2.scope: Deactivated successfully.
Jan 31 07:08:11 compute-2 systemd[1]: Reloading.
Jan 31 07:08:11 compute-2 systemd-rc-local-generator[84248]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:08:11 compute-2 systemd-sysv-generator[84251]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:08:11 compute-2 systemd[1]: Reloading.
Jan 31 07:08:11 compute-2 systemd-rc-local-generator[84288]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:08:11 compute-2 systemd-sysv-generator[84292]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:08:11 compute-2 systemd[1]: Starting Ceph mds.cephfs.compute-2.ihffma for f70fcd2a-dcb4-5f89-a4ba-79a09959083b...
Jan 31 07:08:12 compute-2 podman[84346]: 2026-01-31 07:08:12.178259843 +0000 UTC m=+0.034535235 container create 9715b5b58114900e7188e7c9774110cb5bce3cbc2bd4a7ffa37561e34db573a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mds-cephfs-compute-2-ihffma, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:08:12 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e3f7c802bb1f3b381c58cc25508130ce1cf3117f2ff3996f7391fd35dd6924e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:12 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e3f7c802bb1f3b381c58cc25508130ce1cf3117f2ff3996f7391fd35dd6924e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:12 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e3f7c802bb1f3b381c58cc25508130ce1cf3117f2ff3996f7391fd35dd6924e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:12 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e3f7c802bb1f3b381c58cc25508130ce1cf3117f2ff3996f7391fd35dd6924e/merged/var/lib/ceph/mds/ceph-cephfs.compute-2.ihffma supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:12 compute-2 podman[84346]: 2026-01-31 07:08:12.222038959 +0000 UTC m=+0.078314371 container init 9715b5b58114900e7188e7c9774110cb5bce3cbc2bd4a7ffa37561e34db573a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mds-cephfs-compute-2-ihffma, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:08:12 compute-2 podman[84346]: 2026-01-31 07:08:12.22907039 +0000 UTC m=+0.085345782 container start 9715b5b58114900e7188e7c9774110cb5bce3cbc2bd4a7ffa37561e34db573a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mds-cephfs-compute-2-ihffma, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:08:12 compute-2 bash[84346]: 9715b5b58114900e7188e7c9774110cb5bce3cbc2bd4a7ffa37561e34db573a2
Jan 31 07:08:12 compute-2 podman[84346]: 2026-01-31 07:08:12.161193971 +0000 UTC m=+0.017469373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:08:12 compute-2 systemd[1]: Started Ceph mds.cephfs.compute-2.ihffma for f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 07:08:12 compute-2 sudo[84120]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:12 compute-2 ceph-mds[84366]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 07:08:12 compute-2 ceph-mds[84366]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Jan 31 07:08:12 compute-2 ceph-mds[84366]: main not setting numa affinity
Jan 31 07:08:12 compute-2 ceph-mds[84366]: pidfile_write: ignore empty --pid-file
Jan 31 07:08:12 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mds-cephfs-compute-2-ihffma[84362]: starting mds.cephfs.compute-2.ihffma at 
Jan 31 07:08:12 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma Updating MDS map to version 2 from mon.1
Jan 31 07:08:12 compute-2 ceph-mon[77282]: Deploying daemon mds.cephfs.compute-2.ihffma on compute-2
Jan 31 07:08:12 compute-2 ceph-mon[77282]: pgmap v121: 133 pgs: 2 unknown, 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:12 compute-2 ceph-mon[77282]: from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 31 07:08:12 compute-2 ceph-mon[77282]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 31 07:08:12 compute-2 ceph-mon[77282]: osdmap e43: 3 total, 3 up, 3 in
Jan 31 07:08:12 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:12 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:12 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:12 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.voybui", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 31 07:08:12 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.voybui", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 07:08:12 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:12 compute-2 ceph-mon[77282]: 3.1f scrub starts
Jan 31 07:08:12 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e44 e44: 3 total, 3 up, 3 in
Jan 31 07:08:12 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 31 07:08:12 compute-2 ceph-mon[77282]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/2780650631' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 07:08:12 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 31 07:08:12 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 31 07:08:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).mds e3 new map
Jan 31 07:08:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T07:07:55.106295+0000
                                           modified        2026-01-31T07:07:55.106335+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.ihffma{-1:24157} state up:standby seq 1 addr [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 07:08:13 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma Updating MDS map to version 3 from mon.1
Jan 31 07:08:13 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma Monitors have assigned me to become a standby.
Jan 31 07:08:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).mds e4 new map
Jan 31 07:08:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T07:07:55.106295+0000
                                           modified        2026-01-31T07:08:13.111386+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.ihffma{0:24157} state up:creating seq 1 addr [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Jan 31 07:08:13 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma Updating MDS map to version 4 from mon.1
Jan 31 07:08:13 compute-2 ceph-mds[84366]: mds.0.4 handle_mds_map i am now mds.0.4
Jan 31 07:08:13 compute-2 ceph-mds[84366]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Jan 31 07:08:13 compute-2 ceph-mds[84366]: mds.0.cache creating system inode with ino:0x1
Jan 31 07:08:13 compute-2 ceph-mds[84366]: mds.0.cache creating system inode with ino:0x100
Jan 31 07:08:13 compute-2 ceph-mds[84366]: mds.0.cache creating system inode with ino:0x600
Jan 31 07:08:13 compute-2 ceph-mds[84366]: mds.0.cache creating system inode with ino:0x601
Jan 31 07:08:13 compute-2 ceph-mds[84366]: mds.0.cache creating system inode with ino:0x602
Jan 31 07:08:13 compute-2 ceph-mds[84366]: mds.0.cache creating system inode with ino:0x603
Jan 31 07:08:13 compute-2 ceph-mds[84366]: mds.0.cache creating system inode with ino:0x604
Jan 31 07:08:13 compute-2 ceph-mds[84366]: mds.0.cache creating system inode with ino:0x605
Jan 31 07:08:13 compute-2 ceph-mds[84366]: mds.0.cache creating system inode with ino:0x606
Jan 31 07:08:13 compute-2 ceph-mds[84366]: mds.0.cache creating system inode with ino:0x607
Jan 31 07:08:13 compute-2 ceph-mds[84366]: mds.0.cache creating system inode with ino:0x608
Jan 31 07:08:13 compute-2 ceph-mds[84366]: mds.0.cache creating system inode with ino:0x609
Jan 31 07:08:13 compute-2 ceph-mds[84366]: mds.0.4 creating_done
Jan 31 07:08:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e45 e45: 3 total, 3 up, 3 in
Jan 31 07:08:13 compute-2 ceph-mon[77282]: Deploying daemon mds.cephfs.compute-0.voybui on compute-0
Jan 31 07:08:13 compute-2 ceph-mon[77282]: 3.1f scrub ok
Jan 31 07:08:13 compute-2 ceph-mon[77282]: osdmap e44: 3 total, 3 up, 3 in
Jan 31 07:08:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1455749916' entity='client.rgw.rgw.compute-0.njduba' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 07:08:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2780650631' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 07:08:13 compute-2 ceph-mon[77282]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 07:08:13 compute-2 ceph-mon[77282]: from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 07:08:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1217523012' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 07:08:13 compute-2 ceph-mon[77282]: 5.d scrub starts
Jan 31 07:08:13 compute-2 ceph-mon[77282]: 5.d scrub ok
Jan 31 07:08:13 compute-2 ceph-mon[77282]: from='client.14388 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 07:08:13 compute-2 ceph-mon[77282]: mds.? [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] up:boot
Jan 31 07:08:13 compute-2 ceph-mon[77282]: daemon mds.cephfs.compute-2.ihffma assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 31 07:08:13 compute-2 ceph-mon[77282]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 31 07:08:13 compute-2 ceph-mon[77282]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 31 07:08:13 compute-2 ceph-mon[77282]: fsmap cephfs:0 1 up:standby
Jan 31 07:08:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.ihffma"}]: dispatch
Jan 31 07:08:13 compute-2 ceph-mon[77282]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:creating}
Jan 31 07:08:13 compute-2 ceph-mon[77282]: daemon mds.cephfs.compute-2.ihffma is now active in filesystem cephfs as rank 0
Jan 31 07:08:13 compute-2 ceph-mon[77282]: 5.1d deep-scrub starts
Jan 31 07:08:13 compute-2 ceph-mon[77282]: 5.1d deep-scrub ok
Jan 31 07:08:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).mds e5 new map
Jan 31 07:08:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).mds e5 print_map
                                           e5
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T07:07:55.106295+0000
                                           modified        2026-01-31T07:08:14.127061+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.ihffma{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.voybui{-1:14406} state up:standby seq 1 addr [v2:192.168.122.100:6806/2569432247,v1:192.168.122.100:6807/2569432247] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 07:08:14 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma Updating MDS map to version 5 from mon.1
Jan 31 07:08:14 compute-2 ceph-mds[84366]: mds.0.4 handle_mds_map i am now mds.0.4
Jan 31 07:08:14 compute-2 ceph-mds[84366]: mds.0.4 handle_mds_map state change up:creating --> up:active
Jan 31 07:08:14 compute-2 ceph-mds[84366]: mds.0.4 recovery_done -- successful recovery!
Jan 31 07:08:14 compute-2 ceph-mds[84366]: mds.0.4 active_start
Jan 31 07:08:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).mds e6 new map
Jan 31 07:08:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).mds e6 print_map
                                           e6
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T07:07:55.106295+0000
                                           modified        2026-01-31T07:08:14.127061+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.ihffma{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.voybui{-1:14406} state up:standby seq 1 addr [v2:192.168.122.100:6806/2569432247,v1:192.168.122.100:6807/2569432247] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 07:08:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e46 e46: 3 total, 3 up, 3 in
Jan 31 07:08:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 31 07:08:14 compute-2 ceph-mon[77282]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/616184540' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 07:08:14 compute-2 ceph-mon[77282]: pgmap v124: 134 pgs: 1 unknown, 133 active+clean; 451 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Jan 31 07:08:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1455749916' entity='client.rgw.rgw.compute-0.njduba' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 07:08:14 compute-2 ceph-mon[77282]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 07:08:14 compute-2 ceph-mon[77282]: from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 07:08:14 compute-2 ceph-mon[77282]: osdmap e45: 3 total, 3 up, 3 in
Jan 31 07:08:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.dqeaqy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 31 07:08:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.dqeaqy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 07:08:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:14 compute-2 ceph-mon[77282]: mds.? [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] up:active
Jan 31 07:08:14 compute-2 ceph-mon[77282]: mds.? [v2:192.168.122.100:6806/2569432247,v1:192.168.122.100:6807/2569432247] up:boot
Jan 31 07:08:14 compute-2 ceph-mon[77282]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 1 up:standby
Jan 31 07:08:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.voybui"}]: dispatch
Jan 31 07:08:14 compute-2 ceph-mon[77282]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 1 up:standby
Jan 31 07:08:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/419499773' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 31 07:08:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e47 e47: 3 total, 3 up, 3 in
Jan 31 07:08:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 31 07:08:15 compute-2 ceph-mon[77282]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/616184540' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 07:08:15 compute-2 ceph-mon[77282]: 5.1f scrub starts
Jan 31 07:08:15 compute-2 ceph-mon[77282]: 5.1f scrub ok
Jan 31 07:08:15 compute-2 ceph-mon[77282]: Deploying daemon mds.cephfs.compute-1.dqeaqy on compute-1
Jan 31 07:08:15 compute-2 ceph-mon[77282]: osdmap e46: 3 total, 3 up, 3 in
Jan 31 07:08:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/616184540' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 07:08:15 compute-2 ceph-mon[77282]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 07:08:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1494637509' entity='client.rgw.rgw.compute-0.njduba' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 07:08:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1299930726' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 07:08:15 compute-2 ceph-mon[77282]: from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 07:08:15 compute-2 ceph-mon[77282]: 3.1e scrub starts
Jan 31 07:08:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:15 compute-2 ceph-mon[77282]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 07:08:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1494637509' entity='client.rgw.rgw.compute-0.njduba' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 07:08:15 compute-2 ceph-mon[77282]: from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 07:08:15 compute-2 ceph-mon[77282]: osdmap e47: 3 total, 3 up, 3 in
Jan 31 07:08:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/616184540' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 07:08:15 compute-2 ceph-mon[77282]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 07:08:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1494637509' entity='client.rgw.rgw.compute-0.njduba' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 07:08:15 compute-2 ceph-mon[77282]: from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 07:08:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1299930726' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 07:08:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e48 e48: 3 total, 3 up, 3 in
Jan 31 07:08:16 compute-2 ceph-mon[77282]: pgmap v127: 135 pgs: 2 unknown, 133 active+clean; 451 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Jan 31 07:08:16 compute-2 ceph-mon[77282]: 3.1e scrub ok
Jan 31 07:08:16 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:16 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1513498969' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 31 07:08:16 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:16 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:16 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:16 compute-2 ceph-mon[77282]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 07:08:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1494637509' entity='client.rgw.rgw.compute-0.njduba' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 07:08:16 compute-2 ceph-mon[77282]: from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 07:08:16 compute-2 ceph-mon[77282]: osdmap e48: 3 total, 3 up, 3 in
Jan 31 07:08:16 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:16 compute-2 ceph-mon[77282]: Deploying daemon haproxy.rgw.default.compute-0.cwtxbj on compute-0
Jan 31 07:08:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).mds e7 new map
Jan 31 07:08:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).mds e7 print_map
                                           e7
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T07:07:55.106295+0000
                                           modified        2026-01-31T07:08:14.127061+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.ihffma{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.voybui{-1:14406} state up:standby seq 1 addr [v2:192.168.122.100:6806/2569432247,v1:192.168.122.100:6807/2569432247] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.dqeaqy{-1:24170} state up:standby seq 1 addr [v2:192.168.122.101:6804/858174765,v1:192.168.122.101:6805/858174765] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 07:08:17 compute-2 radosgw[83985]: LDAP not started since no server URIs were provided in the configuration.
Jan 31 07:08:17 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-rgw-rgw-compute-2-kddbks[83981]: 2026-01-31T07:08:17.205+0000 7fa862e53940 -1 LDAP not started since no server URIs were provided in the configuration.
Jan 31 07:08:17 compute-2 radosgw[83985]: framework: beast
Jan 31 07:08:17 compute-2 radosgw[83985]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 31 07:08:17 compute-2 radosgw[83985]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 31 07:08:17 compute-2 radosgw[83985]: starting handler: beast
Jan 31 07:08:17 compute-2 radosgw[83985]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 07:08:17 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 31 07:08:17 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 31 07:08:17 compute-2 radosgw[83985]: mgrc service_daemon_register rgw.24172 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-2,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.102:8082,frontend_type#0=beast,hostname=compute-2,id=rgw.compute-2.kddbks,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864300,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=12135664-ebbb-4151-b9c5-06f6a22e158a,zone_name=default,zonegroup_id=50d9470b-8f69-45ea-ac36-a7be21625514,zonegroup_name=default}
Jan 31 07:08:17 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 31 07:08:17 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 31 07:08:17 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 31 07:08:17 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Jan 31 07:08:17 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Jan 31 07:08:17 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Jan 31 07:08:18 compute-2 ceph-mon[77282]: 5.10 scrub starts
Jan 31 07:08:18 compute-2 ceph-mon[77282]: 5.10 scrub ok
Jan 31 07:08:18 compute-2 ceph-mon[77282]: mds.? [v2:192.168.122.101:6804/858174765,v1:192.168.122.101:6805/858174765] up:boot
Jan 31 07:08:18 compute-2 ceph-mon[77282]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 07:08:18 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.dqeaqy"}]: dispatch
Jan 31 07:08:18 compute-2 ceph-mon[77282]: pgmap v130: 135 pgs: 1 unknown, 134 active+clean; 451 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:18 compute-2 ceph-mon[77282]: 3.4 deep-scrub starts
Jan 31 07:08:18 compute-2 ceph-mon[77282]: 3.4 deep-scrub ok
Jan 31 07:08:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).mds e8 new map
Jan 31 07:08:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).mds e8 print_map
                                           e8
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T07:07:55.106295+0000
                                           modified        2026-01-31T07:08:17.997329+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.ihffma{0:24157} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.voybui{-1:14406} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2569432247,v1:192.168.122.100:6807/2569432247] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.dqeaqy{-1:24170} state up:standby seq 1 addr [v2:192.168.122.101:6804/858174765,v1:192.168.122.101:6805/858174765] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 07:08:18 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma Updating MDS map to version 8 from mon.1
Jan 31 07:08:18 compute-2 ceph-mds[84366]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 31 07:08:18 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mds-cephfs-compute-2-ihffma[84362]: 2026-01-31T07:08:18.121+0000 7f73b5590640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 31 07:08:19 compute-2 ceph-mon[77282]: 3.14 scrub starts
Jan 31 07:08:19 compute-2 ceph-mon[77282]: 3.14 scrub ok
Jan 31 07:08:19 compute-2 ceph-mon[77282]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 31 07:08:19 compute-2 ceph-mon[77282]: Cluster is now healthy
Jan 31 07:08:19 compute-2 ceph-mon[77282]: mds.? [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] up:active
Jan 31 07:08:19 compute-2 ceph-mon[77282]: mds.? [v2:192.168.122.100:6806/2569432247,v1:192.168.122.100:6807/2569432247] up:standby
Jan 31 07:08:19 compute-2 ceph-mon[77282]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 07:08:19 compute-2 ceph-mon[77282]: 5.5 scrub starts
Jan 31 07:08:19 compute-2 ceph-mon[77282]: 5.5 scrub ok
Jan 31 07:08:19 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2513510612' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Jan 31 07:08:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).mds e9 new map
Jan 31 07:08:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).mds e9 print_map
                                           e9
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T07:07:55.106295+0000
                                           modified        2026-01-31T07:08:17.997329+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.ihffma{0:24157} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.voybui{-1:14406} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2569432247,v1:192.168.122.100:6807/2569432247] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.dqeaqy{-1:24170} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/858174765,v1:192.168.122.101:6805/858174765] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 07:08:20 compute-2 ceph-mon[77282]: pgmap v131: 135 pgs: 135 active+clean; 457 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 8.3 KiB/s wr, 130 op/s
Jan 31 07:08:21 compute-2 sudo[84950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:21 compute-2 sudo[84950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:21 compute-2 sudo[84950]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:21 compute-2 sudo[84975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:08:21 compute-2 sudo[84975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:21 compute-2 sudo[84975]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:21 compute-2 sudo[85000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:21 compute-2 sudo[85000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:21 compute-2 sudo[85000]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:21 compute-2 sudo[85025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:08:21 compute-2 sudo[85025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:21 compute-2 ceph-mon[77282]: mds.? [v2:192.168.122.101:6804/858174765,v1:192.168.122.101:6805/858174765] up:standby
Jan 31 07:08:21 compute-2 ceph-mon[77282]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 07:08:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 31 07:08:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:22.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 31 07:08:22 compute-2 ceph-mon[77282]: 3.13 scrub starts
Jan 31 07:08:22 compute-2 ceph-mon[77282]: 3.13 scrub ok
Jan 31 07:08:22 compute-2 ceph-mon[77282]: Deploying daemon haproxy.rgw.default.compute-2.envbir on compute-2
Jan 31 07:08:22 compute-2 ceph-mon[77282]: pgmap v132: 135 pgs: 135 active+clean; 457 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 7.3 KiB/s wr, 114 op/s
Jan 31 07:08:22 compute-2 ceph-mon[77282]: 3.2 deep-scrub starts
Jan 31 07:08:22 compute-2 ceph-mon[77282]: 3.2 deep-scrub ok
Jan 31 07:08:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3900091297' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Jan 31 07:08:22 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Jan 31 07:08:22 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Jan 31 07:08:23 compute-2 ceph-mon[77282]: 3.16 scrub starts
Jan 31 07:08:23 compute-2 ceph-mon[77282]: 3.16 scrub ok
Jan 31 07:08:23 compute-2 ceph-mon[77282]: 3.0 scrub starts
Jan 31 07:08:23 compute-2 ceph-mon[77282]: 3.0 scrub ok
Jan 31 07:08:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:08:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:24.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:08:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:24 compute-2 ceph-mon[77282]: pgmap v133: 135 pgs: 135 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 120 KiB/s rd, 6.0 KiB/s wr, 233 op/s
Jan 31 07:08:25 compute-2 podman[85089]: 2026-01-31 07:08:25.608427053 +0000 UTC m=+3.479808076 container create 1e6a359c78727eecc8db6369ee206f67b8fdd3b5bd75df4c940407ebdac0d2c4 (image=quay.io/ceph/haproxy:2.3, name=ecstatic_tu)
Jan 31 07:08:25 compute-2 systemd[1]: Started libpod-conmon-1e6a359c78727eecc8db6369ee206f67b8fdd3b5bd75df4c940407ebdac0d2c4.scope.
Jan 31 07:08:25 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:08:25 compute-2 podman[85089]: 2026-01-31 07:08:25.653968747 +0000 UTC m=+3.525349780 container init 1e6a359c78727eecc8db6369ee206f67b8fdd3b5bd75df4c940407ebdac0d2c4 (image=quay.io/ceph/haproxy:2.3, name=ecstatic_tu)
Jan 31 07:08:25 compute-2 podman[85089]: 2026-01-31 07:08:25.659069645 +0000 UTC m=+3.530450668 container start 1e6a359c78727eecc8db6369ee206f67b8fdd3b5bd75df4c940407ebdac0d2c4 (image=quay.io/ceph/haproxy:2.3, name=ecstatic_tu)
Jan 31 07:08:25 compute-2 podman[85089]: 2026-01-31 07:08:25.661842891 +0000 UTC m=+3.533223914 container attach 1e6a359c78727eecc8db6369ee206f67b8fdd3b5bd75df4c940407ebdac0d2c4 (image=quay.io/ceph/haproxy:2.3, name=ecstatic_tu)
Jan 31 07:08:25 compute-2 systemd[1]: libpod-1e6a359c78727eecc8db6369ee206f67b8fdd3b5bd75df4c940407ebdac0d2c4.scope: Deactivated successfully.
Jan 31 07:08:25 compute-2 ecstatic_tu[85202]: 0 0
Jan 31 07:08:25 compute-2 conmon[85202]: conmon 1e6a359c78727eecc8db <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1e6a359c78727eecc8db6369ee206f67b8fdd3b5bd75df4c940407ebdac0d2c4.scope/container/memory.events
Jan 31 07:08:25 compute-2 podman[85089]: 2026-01-31 07:08:25.664651117 +0000 UTC m=+3.536032150 container died 1e6a359c78727eecc8db6369ee206f67b8fdd3b5bd75df4c940407ebdac0d2c4 (image=quay.io/ceph/haproxy:2.3, name=ecstatic_tu)
Jan 31 07:08:25 compute-2 podman[85089]: 2026-01-31 07:08:25.588652608 +0000 UTC m=+3.460033671 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 31 07:08:25 compute-2 systemd[1]: var-lib-containers-storage-overlay-769f71b56255eee8f1a6b7a875589452e46df755da3fa614b6d36fac722713b7-merged.mount: Deactivated successfully.
Jan 31 07:08:25 compute-2 podman[85089]: 2026-01-31 07:08:25.708605666 +0000 UTC m=+3.579986709 container remove 1e6a359c78727eecc8db6369ee206f67b8fdd3b5bd75df4c940407ebdac0d2c4 (image=quay.io/ceph/haproxy:2.3, name=ecstatic_tu)
Jan 31 07:08:25 compute-2 systemd[1]: libpod-conmon-1e6a359c78727eecc8db6369ee206f67b8fdd3b5bd75df4c940407ebdac0d2c4.scope: Deactivated successfully.
Jan 31 07:08:25 compute-2 systemd[1]: Reloading.
Jan 31 07:08:25 compute-2 systemd-rc-local-generator[85248]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:08:25 compute-2 systemd-sysv-generator[85253]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:08:26 compute-2 systemd[1]: Reloading.
Jan 31 07:08:26 compute-2 systemd-sysv-generator[85286]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:08:26 compute-2 systemd-rc-local-generator[85283]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:08:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:26.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:26 compute-2 systemd[1]: Starting Ceph haproxy.rgw.default.compute-2.envbir for f70fcd2a-dcb4-5f89-a4ba-79a09959083b...
Jan 31 07:08:26 compute-2 podman[85346]: 2026-01-31 07:08:26.400687114 +0000 UTC m=+0.032603933 container create f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 07:08:26 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a7b7cade69405a6fe2ccd3ccde98ba7fd6a4f019664e952e77097fd6a0c8a4/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:26 compute-2 podman[85346]: 2026-01-31 07:08:26.445916368 +0000 UTC m=+0.077833207 container init f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 07:08:26 compute-2 podman[85346]: 2026-01-31 07:08:26.449903257 +0000 UTC m=+0.081820076 container start f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 07:08:26 compute-2 bash[85346]: f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff
Jan 31 07:08:26 compute-2 podman[85346]: 2026-01-31 07:08:26.385157344 +0000 UTC m=+0.017074183 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 31 07:08:26 compute-2 systemd[1]: Started Ceph haproxy.rgw.default.compute-2.envbir for f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 07:08:26 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir[85361]: [NOTICE] 030/070826 (2) : New worker #1 (4) forked
Jan 31 07:08:26 compute-2 sudo[85025]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:26 compute-2 sudo[85375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:26 compute-2 sudo[85375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:26 compute-2 sudo[85375]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:26 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 31 07:08:26 compute-2 sudo[85400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:08:26 compute-2 sudo[85400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:26 compute-2 sudo[85400]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:26 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 31 07:08:26 compute-2 sudo[85425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:26 compute-2 sudo[85425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:26 compute-2 sudo[85425]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:26 compute-2 sudo[85450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:08:26 compute-2 sudo[85450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:26 compute-2 ceph-mon[77282]: pgmap v134: 135 pgs: 135 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 99 KiB/s rd, 5.0 KiB/s wr, 194 op/s
Jan 31 07:08:26 compute-2 ceph-mon[77282]: 3.6 scrub starts
Jan 31 07:08:26 compute-2 ceph-mon[77282]: 3.6 scrub ok
Jan 31 07:08:26 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:26 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:26 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:26 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:08:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:27.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:08:27 compute-2 ceph-mon[77282]: 5.16 scrub starts
Jan 31 07:08:27 compute-2 ceph-mon[77282]: 5.16 scrub ok
Jan 31 07:08:27 compute-2 ceph-mon[77282]: 5.3 scrub starts
Jan 31 07:08:27 compute-2 ceph-mon[77282]: 5.3 scrub ok
Jan 31 07:08:27 compute-2 ceph-mon[77282]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 31 07:08:27 compute-2 ceph-mon[77282]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 31 07:08:27 compute-2 ceph-mon[77282]: Deploying daemon keepalived.rgw.default.compute-2.faavbs on compute-2
Jan 31 07:08:27 compute-2 ceph-mon[77282]: 5.12 scrub starts
Jan 31 07:08:27 compute-2 ceph-mon[77282]: 5.12 scrub ok
Jan 31 07:08:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:28.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:28 compute-2 ceph-mon[77282]: pgmap v135: 135 pgs: 135 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 91 KiB/s rd, 4.5 KiB/s wr, 176 op/s
Jan 31 07:08:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Jan 31 07:08:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e49 e49: 3 total, 3 up, 3 in
Jan 31 07:08:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:29 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 5.0 deep-scrub starts
Jan 31 07:08:29 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 5.0 deep-scrub ok
Jan 31 07:08:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 31 07:08:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:29.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 31 07:08:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e50 e50: 3 total, 3 up, 3 in
Jan 31 07:08:30 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 31 07:08:30 compute-2 ceph-mon[77282]: osdmap e49: 3 total, 3 up, 3 in
Jan 31 07:08:30 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 07:08:30 compute-2 ceph-mon[77282]: 5.15 scrub starts
Jan 31 07:08:30 compute-2 ceph-mon[77282]: 5.15 scrub ok
Jan 31 07:08:30 compute-2 ceph-mon[77282]: pgmap v137: 135 pgs: 135 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 62 KiB/s rd, 0 B/s wr, 111 op/s
Jan 31 07:08:30 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Jan 31 07:08:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:30.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e51 e51: 3 total, 3 up, 3 in
Jan 31 07:08:31 compute-2 ceph-mon[77282]: 5.0 deep-scrub starts
Jan 31 07:08:31 compute-2 ceph-mon[77282]: 5.0 deep-scrub ok
Jan 31 07:08:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:08:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 31 07:08:31 compute-2 ceph-mon[77282]: osdmap e50: 3 total, 3 up, 3 in
Jan 31 07:08:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 07:08:31 compute-2 podman[85516]: 2026-01-31 07:08:31.551883821 +0000 UTC m=+4.518129338 container create d96e545a26e14027f8955332871313b2ee7070df727fab963dc1bf0253698f20 (image=quay.io/ceph/keepalived:2.2.4, name=fervent_jepsen, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, release=1793, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, vcs-type=git, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived)
Jan 31 07:08:31 compute-2 podman[85516]: 2026-01-31 07:08:31.533712015 +0000 UTC m=+4.499957552 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 31 07:08:31 compute-2 systemd[1]: Started libpod-conmon-d96e545a26e14027f8955332871313b2ee7070df727fab963dc1bf0253698f20.scope.
Jan 31 07:08:31 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:08:31 compute-2 podman[85516]: 2026-01-31 07:08:31.76985046 +0000 UTC m=+4.736095987 container init d96e545a26e14027f8955332871313b2ee7070df727fab963dc1bf0253698f20 (image=quay.io/ceph/keepalived:2.2.4, name=fervent_jepsen, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, release=1793, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, vcs-type=git, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 07:08:31 compute-2 podman[85516]: 2026-01-31 07:08:31.775867158 +0000 UTC m=+4.742112675 container start d96e545a26e14027f8955332871313b2ee7070df727fab963dc1bf0253698f20 (image=quay.io/ceph/keepalived:2.2.4, name=fervent_jepsen, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, version=2.2.4, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, name=keepalived, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, vendor=Red Hat, Inc.)
Jan 31 07:08:31 compute-2 fervent_jepsen[85611]: 0 0
Jan 31 07:08:31 compute-2 systemd[1]: libpod-d96e545a26e14027f8955332871313b2ee7070df727fab963dc1bf0253698f20.scope: Deactivated successfully.
Jan 31 07:08:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:31.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:31 compute-2 podman[85516]: 2026-01-31 07:08:31.906573899 +0000 UTC m=+4.872819446 container attach d96e545a26e14027f8955332871313b2ee7070df727fab963dc1bf0253698f20 (image=quay.io/ceph/keepalived:2.2.4, name=fervent_jepsen, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, release=1793, io.openshift.expose-services=, version=2.2.4, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9)
Jan 31 07:08:31 compute-2 podman[85516]: 2026-01-31 07:08:31.908225217 +0000 UTC m=+4.874470734 container died d96e545a26e14027f8955332871313b2ee7070df727fab963dc1bf0253698f20 (image=quay.io/ceph/keepalived:2.2.4, name=fervent_jepsen, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, version=2.2.4, architecture=x86_64, io.openshift.tags=Ceph keepalived, distribution-scope=public, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container)
Jan 31 07:08:31 compute-2 systemd[1]: var-lib-containers-storage-overlay-c2db4126003bf031bbf60e43074c39136c114c0e4d5f55890a63031e8f954db9-merged.mount: Deactivated successfully.
Jan 31 07:08:31 compute-2 podman[85516]: 2026-01-31 07:08:31.971660501 +0000 UTC m=+4.937906018 container remove d96e545a26e14027f8955332871313b2ee7070df727fab963dc1bf0253698f20 (image=quay.io/ceph/keepalived:2.2.4, name=fervent_jepsen, com.redhat.component=keepalived-container, version=2.2.4, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, name=keepalived, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc.)
Jan 31 07:08:31 compute-2 systemd[1]: libpod-conmon-d96e545a26e14027f8955332871313b2ee7070df727fab963dc1bf0253698f20.scope: Deactivated successfully.
Jan 31 07:08:32 compute-2 systemd[1]: Reloading.
Jan 31 07:08:32 compute-2 systemd-rc-local-generator[85656]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:08:32 compute-2 systemd-sysv-generator[85663]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:08:32 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e52 e52: 3 total, 3 up, 3 in
Jan 31 07:08:32 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:08:32 compute-2 ceph-mon[77282]: osdmap e51: 3 total, 3 up, 3 in
Jan 31 07:08:32 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 07:08:32 compute-2 ceph-mon[77282]: pgmap v140: 150 pgs: 15 unknown, 135 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:32 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:32 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:32 compute-2 ceph-mon[77282]: 3.10 scrub starts
Jan 31 07:08:32 compute-2 ceph-mon[77282]: 3.10 scrub ok
Jan 31 07:08:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 31 07:08:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:32.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 31 07:08:32 compute-2 systemd[1]: Reloading.
Jan 31 07:08:32 compute-2 systemd-sysv-generator[85703]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:08:32 compute-2 systemd-rc-local-generator[85698]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:08:32 compute-2 systemd[1]: Starting Ceph keepalived.rgw.default.compute-2.faavbs for f70fcd2a-dcb4-5f89-a4ba-79a09959083b...
Jan 31 07:08:32 compute-2 podman[85757]: 2026-01-31 07:08:32.877748547 +0000 UTC m=+0.033571613 container create 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, distribution-scope=public, io.openshift.tags=Ceph keepalived, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, release=1793, vcs-type=git)
Jan 31 07:08:32 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e5b6791cc42ec36103a1f01b432e43ec7a5cd456719b177a78102dc8e4500bc/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:32 compute-2 podman[85757]: 2026-01-31 07:08:32.926770545 +0000 UTC m=+0.082593641 container init 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=keepalived, architecture=x86_64, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Jan 31 07:08:32 compute-2 podman[85757]: 2026-01-31 07:08:32.930343531 +0000 UTC m=+0.086166597 container start 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, release=1793, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, distribution-scope=public, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64)
Jan 31 07:08:32 compute-2 bash[85757]: 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68
Jan 31 07:08:32 compute-2 podman[85757]: 2026-01-31 07:08:32.86295858 +0000 UTC m=+0.018781656 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 31 07:08:32 compute-2 systemd[1]: Started Ceph keepalived.rgw.default.compute-2.faavbs for f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 07:08:32 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs[85772]: Sat Jan 31 07:08:32 2026: Starting Keepalived v2.2.4 (08/21,2021)
Jan 31 07:08:32 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs[85772]: Sat Jan 31 07:08:32 2026: Running on Linux 5.14.0-665.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026 (built for Linux 5.14.0)
Jan 31 07:08:32 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs[85772]: Sat Jan 31 07:08:32 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Jan 31 07:08:32 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs[85772]: Sat Jan 31 07:08:32 2026: Configuration file /etc/keepalived/keepalived.conf
Jan 31 07:08:32 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs[85772]: Sat Jan 31 07:08:32 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Jan 31 07:08:32 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs[85772]: Sat Jan 31 07:08:32 2026: Starting VRRP child process, pid=4
Jan 31 07:08:32 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs[85772]: Sat Jan 31 07:08:32 2026: Startup complete
Jan 31 07:08:32 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs[85772]: Sat Jan 31 07:08:32 2026: (VI_0) Entering BACKUP STATE (init)
Jan 31 07:08:32 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs[85772]: Sat Jan 31 07:08:32 2026: VRRP_Script(check_backend) succeeded
Jan 31 07:08:32 compute-2 sudo[85450]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e53 e53: 3 total, 3 up, 3 in
Jan 31 07:08:33 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:08:33 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:08:33 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:08:33 compute-2 ceph-mon[77282]: osdmap e52: 3 total, 3 up, 3 in
Jan 31 07:08:33 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 07:08:33 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:33 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:33 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:33 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:08:33 compute-2 ceph-mon[77282]: osdmap e53: 3 total, 3 up, 3 in
Jan 31 07:08:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:33.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e54 e54: 3 total, 3 up, 3 in
Jan 31 07:08:34 compute-2 ceph-mon[77282]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 31 07:08:34 compute-2 ceph-mon[77282]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 31 07:08:34 compute-2 ceph-mon[77282]: Deploying daemon keepalived.rgw.default.compute-0.rwjfwq on compute-0
Jan 31 07:08:34 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 07:08:34 compute-2 ceph-mon[77282]: pgmap v143: 212 pgs: 1 peering, 62 unknown, 149 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:34 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:34 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:34 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:08:34 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:08:34 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:08:34 compute-2 ceph-mon[77282]: osdmap e54: 3 total, 3 up, 3 in
Jan 31 07:08:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:34.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:34 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Jan 31 07:08:34 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Jan 31 07:08:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e55 e55: 3 total, 3 up, 3 in
Jan 31 07:08:35 compute-2 ceph-mon[77282]: 5.4 scrub starts
Jan 31 07:08:35 compute-2 ceph-mon[77282]: 5.4 scrub ok
Jan 31 07:08:35 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 5.e deep-scrub starts
Jan 31 07:08:35 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 5.e deep-scrub ok
Jan 31 07:08:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 31 07:08:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:35.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 31 07:08:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 31 07:08:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:36.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 31 07:08:36 compute-2 ceph-mon[77282]: osdmap e55: 3 total, 3 up, 3 in
Jan 31 07:08:36 compute-2 ceph-mon[77282]: pgmap v146: 274 pgs: 1 peering, 124 unknown, 149 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:36 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:36 compute-2 ceph-mon[77282]: 5.e deep-scrub starts
Jan 31 07:08:36 compute-2 ceph-mon[77282]: 5.e deep-scrub ok
Jan 31 07:08:36 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e56 e56: 3 total, 3 up, 3 in
Jan 31 07:08:36 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs[85772]: Sat Jan 31 07:08:36 2026: (VI_0) Entering MASTER STATE
Jan 31 07:08:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e57 e57: 3 total, 3 up, 3 in
Jan 31 07:08:37 compute-2 ceph-mon[77282]: 3.7 scrub starts
Jan 31 07:08:37 compute-2 ceph-mon[77282]: 3.7 scrub ok
Jan 31 07:08:37 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:08:37 compute-2 ceph-mon[77282]: osdmap e56: 3 total, 3 up, 3 in
Jan 31 07:08:37 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:37 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:37 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:37 compute-2 sudo[85783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:37 compute-2 sudo[85783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:37 compute-2 sudo[85783]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:37 compute-2 sudo[85808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:37 compute-2 sudo[85808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:37 compute-2 sudo[85808]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:37 compute-2 sudo[85833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:37 compute-2 sudo[85833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:37 compute-2 sudo[85833]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:37 compute-2 sudo[85858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:08:37 compute-2 sudo[85858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:37 compute-2 sudo[85858]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:37 compute-2 sudo[85883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:37 compute-2 sudo[85883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:37 compute-2 sudo[85883]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:37 compute-2 sudo[85908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:08:37 compute-2 sudo[85908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:37 compute-2 sudo[85908]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:37 compute-2 sudo[85933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:37 compute-2 sudo[85933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:37 compute-2 sudo[85933]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:37.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:37 compute-2 sudo[85958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 07:08:37 compute-2 sudo[85958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 31 07:08:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:38.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 31 07:08:38 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Jan 31 07:08:38 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Jan 31 07:08:38 compute-2 podman[86056]: 2026-01-31 07:08:38.74782318 +0000 UTC m=+0.501029042 container exec 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:08:39 compute-2 ceph-mon[77282]: 3.f scrub starts
Jan 31 07:08:39 compute-2 ceph-mon[77282]: 3.f scrub ok
Jan 31 07:08:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:39 compute-2 ceph-mon[77282]: pgmap v149: 305 pgs: 1 peering, 93 unknown, 211 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:39 compute-2 ceph-mon[77282]: osdmap e57: 3 total, 3 up, 3 in
Jan 31 07:08:39 compute-2 podman[86056]: 2026-01-31 07:08:39.545442881 +0000 UTC m=+1.298648733 container exec_died 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 07:08:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 07:08:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:39.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 07:08:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:40.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:40 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Jan 31 07:08:40 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs[85772]: Sat Jan 31 07:08:40 2026: (VI_0) Master received advert from 192.168.122.100 with higher priority 100, ours 90
Jan 31 07:08:40 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs[85772]: Sat Jan 31 07:08:40 2026: (VI_0) Entering BACKUP STATE
Jan 31 07:08:41 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Jan 31 07:08:41 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Jan 31 07:08:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:41.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:42.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:42 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Jan 31 07:08:43 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Jan 31 07:08:43 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Jan 31 07:08:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e58 e58: 3 total, 3 up, 3 in
Jan 31 07:08:43 compute-2 ceph-mon[77282]: 3.1 scrub starts
Jan 31 07:08:43 compute-2 ceph-mon[77282]: 5.9 scrub starts
Jan 31 07:08:43 compute-2 ceph-mon[77282]: 3.1 scrub ok
Jan 31 07:08:43 compute-2 ceph-mon[77282]: 5.9 scrub ok
Jan 31 07:08:43 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:43 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:43 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 31 07:08:43 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:43 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 31 07:08:43 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[8.16( empty local-lis/les=0/0 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[11.13( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[10.12( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[7.1f( empty local-lis/les=0/0 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[8.11( empty local-lis/les=0/0 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[8.2( empty local-lis/les=0/0 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[8.3( empty local-lis/les=0/0 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[6.d( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[10.1( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[6.1( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[8.f( empty local-lis/les=0/0 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[6.7( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[11.a( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[8.9( empty local-lis/les=0/0 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[8.a( empty local-lis/les=0/0 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[7.5( empty local-lis/les=0/0 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[11.e( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[8.d( empty local-lis/les=0/0 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[6.3( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[10.f( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[11.8( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[8.b( empty local-lis/les=0/0 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[6.5( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[7.16( empty local-lis/les=0/0 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[8.1f( empty local-lis/les=0/0 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[7.11( empty local-lis/les=0/0 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[10.10( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[7.1d( empty local-lis/les=0/0 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[10.11( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[11.19( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[8.5( empty local-lis/les=0/0 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[7.a( empty local-lis/les=0/0 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[8.6( empty local-lis/les=0/0 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[10.4( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[6.9( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[11.3( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[6.f( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[10.3( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[11.16( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[8.15( empty local-lis/les=0/0 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[11.17( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[8.c( empty local-lis/les=0/0 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[8.1c( empty local-lis/les=0/0 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[10.1e( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 58 pg[7.14( empty local-lis/les=0/0 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:43.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:44 compute-2 podman[86214]: 2026-01-31 07:08:44.199578735 +0000 UTC m=+0.079671735 container exec f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 07:08:44 compute-2 podman[86235]: 2026-01-31 07:08:44.266268095 +0000 UTC m=+0.051227885 container exec_died f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 07:08:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:44.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:44 compute-2 podman[86214]: 2026-01-31 07:08:44.295243361 +0000 UTC m=+0.175336341 container exec_died f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 07:08:44 compute-2 ceph-mon[77282]: 3.9 scrub starts
Jan 31 07:08:44 compute-2 ceph-mon[77282]: 3.9 scrub ok
Jan 31 07:08:44 compute-2 ceph-mon[77282]: pgmap v150: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:44 compute-2 ceph-mon[77282]: 5.6 scrub starts
Jan 31 07:08:44 compute-2 ceph-mon[77282]: 5.6 scrub ok
Jan 31 07:08:44 compute-2 ceph-mon[77282]: 3.1a scrub starts
Jan 31 07:08:44 compute-2 ceph-mon[77282]: 3.1a scrub ok
Jan 31 07:08:44 compute-2 ceph-mon[77282]: pgmap v151: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:44 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:44 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:44 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 31 07:08:44 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:44 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 31 07:08:44 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:44 compute-2 ceph-mon[77282]: 3.b scrub starts
Jan 31 07:08:44 compute-2 ceph-mon[77282]: 3.b scrub ok
Jan 31 07:08:44 compute-2 ceph-mon[77282]: 5.1a scrub starts
Jan 31 07:08:44 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:44 compute-2 ceph-mon[77282]: 3.1b scrub starts
Jan 31 07:08:44 compute-2 ceph-mon[77282]: 3.1b scrub ok
Jan 31 07:08:44 compute-2 ceph-mon[77282]: 5.1a scrub ok
Jan 31 07:08:44 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:08:44 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:08:44 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 07:08:44 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:08:44 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 07:08:44 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:08:44 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:44 compute-2 ceph-mon[77282]: osdmap e58: 3 total, 3 up, 3 in
Jan 31 07:08:44 compute-2 ceph-mon[77282]: pgmap v153: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:44 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 31 07:08:44 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 31 07:08:44 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:44 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:44 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:44 compute-2 sshd-session[86173]: Invalid user sol from 92.118.39.56 port 58390
Jan 31 07:08:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e59 e59: 3 total, 3 up, 3 in
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[7.1f( empty local-lis/les=58/59 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[8.3( v 41'8 (0'0,41'8] local-lis/les=58/59 n=1 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[11.13( empty local-lis/les=58/59 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[10.1( v 45'48 (0'0,45'48] local-lis/les=58/59 n=1 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[6.d( v 48'39 lc 44'13 (0'0,48'39] local-lis/les=58/59 n=1 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=48'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[6.1( v 48'39 (0'0,48'39] local-lis/les=58/59 n=2 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[8.f( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=58/59 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=41'8 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[6.7( v 48'39 lc 44'21 (0'0,48'39] local-lis/les=58/59 n=1 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=48'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[11.a( empty local-lis/les=58/59 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[8.11( v 41'8 (0'0,41'8] local-lis/les=58/59 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[8.9( v 41'8 (0'0,41'8] local-lis/les=58/59 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[8.a( v 41'8 (0'0,41'8] local-lis/les=58/59 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[7.5( empty local-lis/les=58/59 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[11.e( empty local-lis/les=58/59 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[8.d( v 41'8 (0'0,41'8] local-lis/les=58/59 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[11.8( empty local-lis/les=58/59 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[10.f( v 45'48 (0'0,45'48] local-lis/les=58/59 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[10.12( v 45'48 (0'0,45'48] local-lis/les=58/59 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[6.3( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=58/59 n=2 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=48'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[8.16( v 41'8 (0'0,41'8] local-lis/les=58/59 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[8.b( v 41'8 (0'0,41'8] local-lis/les=58/59 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[7.16( empty local-lis/les=58/59 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[8.1f( v 41'8 (0'0,41'8] local-lis/les=58/59 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[7.1d( empty local-lis/les=58/59 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[11.19( empty local-lis/les=58/59 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[6.5( v 48'39 lc 44'11 (0'0,48'39] local-lis/les=58/59 n=2 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=48'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[8.5( v 41'8 (0'0,41'8] local-lis/les=58/59 n=1 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[7.11( empty local-lis/les=58/59 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[7.a( empty local-lis/les=58/59 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[10.11( v 45'48 (0'0,45'48] local-lis/les=58/59 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[6.b( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=58/59 n=1 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=48'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[10.4( v 45'48 (0'0,45'48] local-lis/les=58/59 n=1 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[8.6( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=58/59 n=1 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=41'8 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[10.3( v 57'51 lc 45'38 (0'0,57'51] local-lis/les=58/59 n=1 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=57'51 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[6.9( v 48'39 (0'0,48'39] local-lis/les=58/59 n=1 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[6.f( v 48'39 lc 44'1 (0'0,48'39] local-lis/les=58/59 n=1 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=48'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[11.16( empty local-lis/les=58/59 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[8.15( v 41'8 (0'0,41'8] local-lis/les=58/59 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[8.1c( v 41'8 (0'0,41'8] local-lis/les=58/59 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[10.10( v 45'48 (0'0,45'48] local-lis/les=58/59 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[8.c( v 41'8 (0'0,41'8] local-lis/les=58/59 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[11.17( empty local-lis/les=58/59 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[10.1e( v 45'48 (0'0,45'48] local-lis/les=58/59 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[11.3( empty local-lis/les=58/59 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[7.14( empty local-lis/les=58/59 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 59 pg[8.2( v 41'8 (0'0,41'8] local-lis/les=58/59 n=1 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [2] r=0 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-2 sshd-session[86173]: Connection closed by invalid user sol 92.118.39.56 port 58390 [preauth]
Jan 31 07:08:44 compute-2 podman[86277]: 2026-01-31 07:08:44.560832577 +0000 UTC m=+0.052975186 container exec 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, release=1793, io.buildah.version=1.28.2, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., name=keepalived, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=2.2.4, architecture=x86_64, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 07:08:44 compute-2 podman[86297]: 2026-01-31 07:08:44.631278907 +0000 UTC m=+0.054896523 container exec_died 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, release=1793, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, vcs-type=git, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc.)
Jan 31 07:08:44 compute-2 podman[86277]: 2026-01-31 07:08:44.699972027 +0000 UTC m=+0.192114606 container exec_died 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, description=keepalived for Ceph, release=1793, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, distribution-scope=public, vcs-type=git, version=2.2.4, architecture=x86_64, io.buildah.version=1.28.2, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Jan 31 07:08:44 compute-2 sudo[85958]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:44 compute-2 sudo[86310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:44 compute-2 sudo[86310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:44 compute-2 sudo[86310]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:44 compute-2 sudo[86335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:08:44 compute-2 sudo[86335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:44 compute-2 sudo[86335]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:45 compute-2 sudo[86361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:45 compute-2 sudo[86361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:45 compute-2 sudo[86361]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:45 compute-2 sudo[86386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:08:45 compute-2 sudo[86386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:45 compute-2 sudo[86386]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:08:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:08:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 07:08:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:08:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 07:08:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:08:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 07:08:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 07:08:45 compute-2 ceph-mon[77282]: osdmap e59: 3 total, 3 up, 3 in
Jan 31 07:08:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e60 e60: 3 total, 3 up, 3 in
Jan 31 07:08:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:45.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:46.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:46 compute-2 ceph-mon[77282]: 5.a scrub starts
Jan 31 07:08:46 compute-2 ceph-mon[77282]: 5.a scrub ok
Jan 31 07:08:46 compute-2 ceph-mon[77282]: pgmap v155: 305 pgs: 46 peering, 259 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:46 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:46 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:08:46 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:46 compute-2 ceph-mon[77282]: osdmap e60: 3 total, 3 up, 3 in
Jan 31 07:08:46 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:08:46 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:08:46 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:46 compute-2 ceph-mon[77282]: 5.17 scrub starts
Jan 31 07:08:46 compute-2 ceph-mon[77282]: 5.17 scrub ok
Jan 31 07:08:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:47 compute-2 ceph-mon[77282]: 5.c scrub starts
Jan 31 07:08:47 compute-2 ceph-mon[77282]: 5.c scrub ok
Jan 31 07:08:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 07:08:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:47.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 07:08:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:48.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:48 compute-2 ceph-mon[77282]: pgmap v157: 305 pgs: 50 peering, 255 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 69 B/s, 0 objects/s recovering
Jan 31 07:08:48 compute-2 ceph-mon[77282]: 5.14 scrub starts
Jan 31 07:08:48 compute-2 ceph-mon[77282]: 5.14 scrub ok
Jan 31 07:08:49 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Jan 31 07:08:49 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Jan 31 07:08:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:49.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:49 compute-2 ceph-mon[77282]: pgmap v158: 305 pgs: 4 peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 284 B/s, 1 keys/s, 2 objects/s recovering
Jan 31 07:08:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:50.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:51 compute-2 ceph-mon[77282]: 3.1d scrub starts
Jan 31 07:08:51 compute-2 ceph-mon[77282]: 3.1d scrub ok
Jan 31 07:08:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:51.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:52.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e61 e61: 3 total, 3 up, 3 in
Jan 31 07:08:52 compute-2 ceph-mon[77282]: pgmap v159: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 216 B/s, 1 keys/s, 2 objects/s recovering
Jan 31 07:08:52 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 31 07:08:52 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 31 07:08:52 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 61 pg[6.f( v 48'39 (0'0,48'39] local-lis/les=58/59 n=1 ec=50/22 lis/c=58/58 les/c/f=59/60/0 sis=61 pruub=15.494431496s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'39 mlcod 48'39 active pruub 99.710464478s@ mbc={255={}}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:52 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 61 pg[6.b( v 48'39 (0'0,48'39] local-lis/les=58/59 n=1 ec=50/22 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=15.494158745s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'39 mlcod 48'39 active pruub 99.710449219s@ mbc={255={}}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:52 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 61 pg[6.f( v 48'39 (0'0,48'39] local-lis/les=58/59 n=1 ec=50/22 lis/c=58/58 les/c/f=59/60/0 sis=61 pruub=15.494076729s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'39 mlcod 0'0 unknown NOTIFY pruub 99.710464478s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:52 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 61 pg[6.b( v 48'39 (0'0,48'39] local-lis/les=58/59 n=1 ec=50/22 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=15.493978500s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'39 mlcod 0'0 unknown NOTIFY pruub 99.710449219s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:52 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 61 pg[6.3( v 48'39 (0'0,48'39] local-lis/les=58/59 n=2 ec=50/22 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=15.492896080s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'39 mlcod 48'39 active pruub 99.709487915s@ mbc={255={}}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:52 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 61 pg[6.3( v 48'39 (0'0,48'39] local-lis/les=58/59 n=2 ec=50/22 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=15.492863655s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'39 mlcod 0'0 unknown NOTIFY pruub 99.709487915s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:52 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 61 pg[6.7( v 48'39 (0'0,48'39] local-lis/les=58/59 n=1 ec=50/22 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=15.492236137s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'39 mlcod 48'39 active pruub 99.709022522s@ mbc={255={}}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:52 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 61 pg[6.7( v 48'39 (0'0,48'39] local-lis/les=58/59 n=1 ec=50/22 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=15.492181778s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'39 mlcod 0'0 unknown NOTIFY pruub 99.709022522s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:53 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 61 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:53 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 61 pg[9.b( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:53 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 61 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:53 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 61 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:53 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 61 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:53 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 61 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:53 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 61 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:53 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 61 pg[9.3( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:53 compute-2 sudo[86445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:53 compute-2 sudo[86445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:53 compute-2 sudo[86445]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:53 compute-2 sudo[86470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:08:53 compute-2 sudo[86470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:53 compute-2 sudo[86470]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:53 compute-2 sudo[86495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:53 compute-2 sudo[86495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:53 compute-2 sudo[86495]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:53 compute-2 sudo[86520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:08:53 compute-2 sudo[86520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:53 compute-2 sudo[86520]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:53 compute-2 sudo[86545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:53 compute-2 sudo[86545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:53 compute-2 sudo[86545]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:53 compute-2 sudo[86570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 07:08:53 compute-2 sudo[86570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:53 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 31 07:08:53 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 31 07:08:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 31 07:08:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:53.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 31 07:08:53 compute-2 podman[86667]: 2026-01-31 07:08:53.872484426 +0000 UTC m=+0.046759533 container exec 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:08:53 compute-2 ceph-mon[77282]: 3.c scrub starts
Jan 31 07:08:53 compute-2 ceph-mon[77282]: 3.c scrub ok
Jan 31 07:08:53 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 07:08:53 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 07:08:53 compute-2 ceph-mon[77282]: osdmap e61: 3 total, 3 up, 3 in
Jan 31 07:08:53 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:53 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:53 compute-2 ceph-mon[77282]: pgmap v161: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 216 B/s, 1 keys/s, 2 objects/s recovering
Jan 31 07:08:53 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 31 07:08:53 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 31 07:08:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e62 e62: 3 total, 3 up, 3 in
Jan 31 07:08:53 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=-1 lpr=62 pi=[54,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:53 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=-1 lpr=62 pi=[54,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:53 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 62 pg[9.b( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=-1 lpr=62 pi=[54,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:53 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=-1 lpr=62 pi=[54,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:53 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 62 pg[9.b( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=-1 lpr=62 pi=[54,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:53 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=-1 lpr=62 pi=[54,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:53 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=-1 lpr=62 pi=[54,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:53 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 62 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=-1 lpr=62 pi=[54,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:53 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=-1 lpr=62 pi=[54,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:53 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 62 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=-1 lpr=62 pi=[54,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:53 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 62 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=-1 lpr=62 pi=[54,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:53 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 62 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=-1 lpr=62 pi=[54,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:53 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 62 pg[9.3( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=-1 lpr=62 pi=[54,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:53 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 62 pg[9.3( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=-1 lpr=62 pi=[54,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:53 compute-2 podman[86667]: 2026-01-31 07:08:53.983404952 +0000 UTC m=+0.157680049 container exec_died 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:08:53 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=-1 lpr=62 pi=[54,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:53 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=-1 lpr=62 pi=[54,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:54.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:54 compute-2 podman[86824]: 2026-01-31 07:08:54.457648162 +0000 UTC m=+0.046940128 container exec f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 07:08:54 compute-2 podman[86824]: 2026-01-31 07:08:54.493483689 +0000 UTC m=+0.082775675 container exec_died f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 07:08:54 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 4.8 deep-scrub starts
Jan 31 07:08:54 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 4.8 deep-scrub ok
Jan 31 07:08:54 compute-2 podman[86894]: 2026-01-31 07:08:54.652379184 +0000 UTC m=+0.044555958 container exec 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, name=keepalived, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, vcs-type=git, release=1793, distribution-scope=public, io.openshift.expose-services=, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived)
Jan 31 07:08:54 compute-2 podman[86916]: 2026-01-31 07:08:54.716191019 +0000 UTC m=+0.048621817 container exec_died 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, vcs-type=git, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20)
Jan 31 07:08:54 compute-2 podman[86894]: 2026-01-31 07:08:54.721157806 +0000 UTC m=+0.113334550 container exec_died 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, architecture=x86_64, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, name=keepalived, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Jan 31 07:08:54 compute-2 sudo[86570]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:54 compute-2 ceph-mon[77282]: 3.8 scrub starts
Jan 31 07:08:54 compute-2 ceph-mon[77282]: 3.8 scrub ok
Jan 31 07:08:54 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 07:08:54 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 07:08:54 compute-2 ceph-mon[77282]: osdmap e62: 3 total, 3 up, 3 in
Jan 31 07:08:54 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:54 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:54 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:54 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e63 e63: 3 total, 3 up, 3 in
Jan 31 07:08:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:55.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:56 compute-2 ceph-mon[77282]: 4.8 deep-scrub starts
Jan 31 07:08:56 compute-2 ceph-mon[77282]: 4.8 deep-scrub ok
Jan 31 07:08:56 compute-2 ceph-mon[77282]: osdmap e63: 3 total, 3 up, 3 in
Jan 31 07:08:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:08:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:08:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:08:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:56 compute-2 ceph-mon[77282]: pgmap v164: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 07:08:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 31 07:08:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 31 07:08:56 compute-2 ceph-mon[77282]: 3.d scrub starts
Jan 31 07:08:56 compute-2 ceph-mon[77282]: 3.d scrub ok
Jan 31 07:08:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e64 e64: 3 total, 3 up, 3 in
Jan 31 07:08:56 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 64 pg[9.7( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:56 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 64 pg[9.7( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:56 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 64 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:56 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 64 pg[9.1b( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:56 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 64 pg[9.1b( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:56 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 64 pg[9.13( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:56 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 64 pg[9.13( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:56 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 64 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:56 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 64 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:56 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 64 pg[9.f( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:56 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 64 pg[9.f( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:56 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 64 pg[9.b( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:56 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 64 pg[9.b( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:56 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 64 pg[6.5( v 48'39 (0'0,48'39] local-lis/les=58/59 n=2 ec=50/22 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=12.394001961s) [1] r=-1 lpr=64 pi=[58,64)/1 crt=48'39 mlcod 48'39 active pruub 99.709861755s@ mbc={255={}}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:56 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 64 pg[6.d( v 48'39 (0'0,48'39] local-lis/les=58/59 n=1 ec=50/22 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=12.393028259s) [1] r=-1 lpr=64 pi=[58,64)/1 crt=48'39 mlcod 48'39 active pruub 99.708908081s@ mbc={255={}}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:56 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 64 pg[6.5( v 48'39 (0'0,48'39] local-lis/les=58/59 n=2 ec=50/22 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=12.393967628s) [1] r=-1 lpr=64 pi=[58,64)/1 crt=48'39 mlcod 0'0 unknown NOTIFY pruub 99.709861755s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:56 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 64 pg[6.d( v 48'39 (0'0,48'39] local-lis/les=58/59 n=1 ec=50/22 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=12.392983437s) [1] r=-1 lpr=64 pi=[58,64)/1 crt=48'39 mlcod 0'0 unknown NOTIFY pruub 99.708908081s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:56 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 64 pg[9.3( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:56 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 64 pg[9.3( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:56 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 64 pg[9.17( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:56 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 64 pg[9.17( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:56 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 64 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:56 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 64 pg[9.5( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:56 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 64 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:56.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:56 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 31 07:08:56 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 31 07:08:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:57 compute-2 ceph-mon[77282]: 3.19 scrub starts
Jan 31 07:08:57 compute-2 ceph-mon[77282]: 3.19 scrub ok
Jan 31 07:08:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 07:08:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 07:08:57 compute-2 ceph-mon[77282]: osdmap e64: 3 total, 3 up, 3 in
Jan 31 07:08:57 compute-2 ceph-mon[77282]: 4.14 scrub starts
Jan 31 07:08:57 compute-2 ceph-mon[77282]: 4.14 scrub ok
Jan 31 07:08:57 compute-2 ceph-mon[77282]: 3.3 scrub starts
Jan 31 07:08:57 compute-2 ceph-mon[77282]: 3.3 scrub ok
Jan 31 07:08:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e65 e65: 3 total, 3 up, 3 in
Jan 31 07:08:57 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 65 pg[9.5( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:57 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 65 pg[9.5( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:57 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 65 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:57 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 65 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:57 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 65 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:57 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 65 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:57 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 65 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:57 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 65 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:57 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 65 pg[9.f( v 48'908 (0'0,48'908] local-lis/les=64/65 n=6 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:57 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 65 pg[9.b( v 48'908 (0'0,48'908] local-lis/les=64/65 n=6 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:57 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 65 pg[9.1b( v 48'908 (0'0,48'908] local-lis/les=64/65 n=5 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:57 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 65 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=64/65 n=5 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:57 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 65 pg[9.13( v 48'908 (0'0,48'908] local-lis/les=64/65 n=5 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:57 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 65 pg[9.7( v 48'908 (0'0,48'908] local-lis/les=64/65 n=6 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:57 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 65 pg[9.3( v 48'908 (0'0,48'908] local-lis/les=64/65 n=6 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:57 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 65 pg[9.17( v 48'908 (0'0,48'908] local-lis/les=64/65 n=5 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:57 compute-2 sudo[86931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:57 compute-2 sudo[86931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:57 compute-2 sudo[86931]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:57 compute-2 sudo[86956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:57 compute-2 sudo[86956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:57 compute-2 sudo[86956]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:57.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:58 compute-2 ceph-mon[77282]: osdmap e65: 3 total, 3 up, 3 in
Jan 31 07:08:58 compute-2 ceph-mon[77282]: pgmap v167: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 159 B/s, 2 keys/s, 1 objects/s recovering
Jan 31 07:08:58 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 31 07:08:58 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 31 07:08:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e66 e66: 3 total, 3 up, 3 in
Jan 31 07:08:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:58.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 07:08:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 07:08:59 compute-2 ceph-mon[77282]: osdmap e66: 3 total, 3 up, 3 in
Jan 31 07:08:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e67 e67: 3 total, 3 up, 3 in
Jan 31 07:08:59 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 67 pg[9.15( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:59 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 67 pg[9.15( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:59 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 67 pg[9.5( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:59 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 67 pg[9.5( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:59 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 67 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:59 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 67 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:59 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 67 pg[9.d( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:59 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 67 pg[9.d( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:08:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:59.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e68 e68: 3 total, 3 up, 3 in
Jan 31 07:09:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:00.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:00 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 68 pg[9.5( v 48'908 (0'0,48'908] local-lis/les=67/68 n=6 ec=54/42 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:00 compute-2 ceph-mon[77282]: 3.12 scrub starts
Jan 31 07:09:00 compute-2 ceph-mon[77282]: 3.12 scrub ok
Jan 31 07:09:00 compute-2 ceph-mon[77282]: osdmap e67: 3 total, 3 up, 3 in
Jan 31 07:09:00 compute-2 ceph-mon[77282]: pgmap v170: 305 pgs: 4 remapped+peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 461 B/s, 2 keys/s, 11 objects/s recovering
Jan 31 07:09:00 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 68 pg[9.d( v 48'908 (0'0,48'908] local-lis/les=67/68 n=6 ec=54/42 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:00 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 68 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=67/68 n=5 ec=54/42 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:00 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 68 pg[9.15( v 48'908 (0'0,48'908] local-lis/les=67/68 n=5 ec=54/42 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:00 compute-2 sudo[86982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:09:00 compute-2 sudo[86982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:09:00 compute-2 sudo[86982]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:00 compute-2 sudo[87007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:09:00 compute-2 sudo[87007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:09:00 compute-2 sudo[87007]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e69 e69: 3 total, 3 up, 3 in
Jan 31 07:09:01 compute-2 ceph-mon[77282]: 2.1f scrub starts
Jan 31 07:09:01 compute-2 ceph-mon[77282]: 2.1f scrub ok
Jan 31 07:09:01 compute-2 ceph-mon[77282]: osdmap e68: 3 total, 3 up, 3 in
Jan 31 07:09:01 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:09:01 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:09:01 compute-2 ceph-mon[77282]: osdmap e69: 3 total, 3 up, 3 in
Jan 31 07:09:01 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 4.15 deep-scrub starts
Jan 31 07:09:01 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 4.15 deep-scrub ok
Jan 31 07:09:01 compute-2 sshd-session[71363]: Received disconnect from 38.129.56.250 port 33270:11: disconnected by user
Jan 31 07:09:01 compute-2 sshd-session[71363]: Disconnected from user zuul 38.129.56.250 port 33270
Jan 31 07:09:01 compute-2 sshd-session[71360]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:09:01 compute-2 systemd[1]: session-19.scope: Deactivated successfully.
Jan 31 07:09:01 compute-2 systemd[1]: session-19.scope: Consumed 7.939s CPU time.
Jan 31 07:09:01 compute-2 systemd-logind[801]: Session 19 logged out. Waiting for processes to exit.
Jan 31 07:09:01 compute-2 systemd-logind[801]: Removed session 19.
Jan 31 07:09:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:01.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:02.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e70 e70: 3 total, 3 up, 3 in
Jan 31 07:09:02 compute-2 ceph-mon[77282]: pgmap v173: 305 pgs: 4 remapped+peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 342 B/s, 10 objects/s recovering
Jan 31 07:09:02 compute-2 ceph-mon[77282]: 4.15 deep-scrub starts
Jan 31 07:09:02 compute-2 ceph-mon[77282]: 4.15 deep-scrub ok
Jan 31 07:09:02 compute-2 ceph-mon[77282]: osdmap e70: 3 total, 3 up, 3 in
Jan 31 07:09:02 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Jan 31 07:09:02 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Jan 31 07:09:03 compute-2 ceph-mon[77282]: 4.1c scrub starts
Jan 31 07:09:03 compute-2 ceph-mon[77282]: 4.1c scrub ok
Jan 31 07:09:03 compute-2 ceph-mon[77282]: 5.7 scrub starts
Jan 31 07:09:03 compute-2 ceph-mon[77282]: 5.7 scrub ok
Jan 31 07:09:03 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Jan 31 07:09:03 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Jan 31 07:09:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:03.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:04.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:04 compute-2 ceph-mon[77282]: pgmap v175: 305 pgs: 4 remapped+peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 38 B/s, 1 objects/s recovering
Jan 31 07:09:04 compute-2 ceph-mon[77282]: 2.9 scrub starts
Jan 31 07:09:04 compute-2 ceph-mon[77282]: 4.1d scrub starts
Jan 31 07:09:04 compute-2 ceph-mon[77282]: 2.9 scrub ok
Jan 31 07:09:04 compute-2 ceph-mon[77282]: 4.1d scrub ok
Jan 31 07:09:05 compute-2 ceph-mon[77282]: 2.1e scrub starts
Jan 31 07:09:05 compute-2 ceph-mon[77282]: 2.1e scrub ok
Jan 31 07:09:05 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 31 07:09:05 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 31 07:09:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e71 e71: 3 total, 3 up, 3 in
Jan 31 07:09:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 31 07:09:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:05.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 31 07:09:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:06.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:06 compute-2 ceph-mon[77282]: pgmap v176: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 125 B/s, 4 objects/s recovering
Jan 31 07:09:06 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 07:09:06 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 07:09:06 compute-2 ceph-mon[77282]: osdmap e71: 3 total, 3 up, 3 in
Jan 31 07:09:06 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Jan 31 07:09:06 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Jan 31 07:09:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e72 e72: 3 total, 3 up, 3 in
Jan 31 07:09:07 compute-2 ceph-mon[77282]: 4.1f scrub starts
Jan 31 07:09:07 compute-2 ceph-mon[77282]: 4.1f scrub ok
Jan 31 07:09:07 compute-2 ceph-mon[77282]: 3.5 scrub starts
Jan 31 07:09:07 compute-2 ceph-mon[77282]: 3.5 scrub ok
Jan 31 07:09:07 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 31 07:09:07 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 31 07:09:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:07.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:07 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=72) [2] r=0 lpr=72 pi=[54,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:07 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=72) [2] r=0 lpr=72 pi=[54,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:08.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:08 compute-2 ceph-mon[77282]: pgmap v178: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 134 B/s, 7 objects/s recovering
Jan 31 07:09:08 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 07:09:08 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 07:09:08 compute-2 ceph-mon[77282]: osdmap e72: 3 total, 3 up, 3 in
Jan 31 07:09:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e73 e73: 3 total, 3 up, 3 in
Jan 31 07:09:08 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 73 pg[9.18( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[54,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:08 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 73 pg[9.18( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[54,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:08 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 73 pg[9.8( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[54,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:08 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 73 pg[9.8( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[54,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:09 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Jan 31 07:09:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e74 e74: 3 total, 3 up, 3 in
Jan 31 07:09:09 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Jan 31 07:09:09 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 74 pg[6.9( v 48'39 (0'0,48'39] local-lis/les=58/59 n=1 ec=50/22 lis/c=58/58 les/c/f=59/59/0 sis=74 pruub=14.769082069s) [0] r=-1 lpr=74 pi=[58,74)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 115.710632324s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:09 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 74 pg[6.9( v 48'39 (0'0,48'39] local-lis/les=58/59 n=1 ec=50/22 lis/c=58/58 les/c/f=59/59/0 sis=74 pruub=14.768900871s) [0] r=-1 lpr=74 pi=[58,74)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 115.710632324s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:09 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 74 pg[9.9( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=74) [2] r=0 lpr=74 pi=[54,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:09 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 74 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=74) [2] r=0 lpr=74 pi=[54,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:09 compute-2 ceph-mon[77282]: 2.e scrub starts
Jan 31 07:09:09 compute-2 ceph-mon[77282]: 2.e scrub ok
Jan 31 07:09:09 compute-2 ceph-mon[77282]: osdmap e73: 3 total, 3 up, 3 in
Jan 31 07:09:09 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 31 07:09:09 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 31 07:09:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e75 e75: 3 total, 3 up, 3 in
Jan 31 07:09:09 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 75 pg[9.9( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[0] r=-1 lpr=75 pi=[54,75)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:09 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 75 pg[9.9( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[0] r=-1 lpr=75 pi=[54,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:09 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 75 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[0] r=-1 lpr=75 pi=[54,75)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:09 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 75 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[0] r=-1 lpr=75 pi=[54,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:09.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 31 07:09:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:10.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 31 07:09:10 compute-2 ceph-mon[77282]: pgmap v181: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 135 B/s, 7 objects/s recovering
Jan 31 07:09:10 compute-2 ceph-mon[77282]: 4.9 scrub starts
Jan 31 07:09:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 07:09:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 07:09:10 compute-2 ceph-mon[77282]: osdmap e74: 3 total, 3 up, 3 in
Jan 31 07:09:10 compute-2 ceph-mon[77282]: 4.9 scrub ok
Jan 31 07:09:10 compute-2 ceph-mon[77282]: osdmap e75: 3 total, 3 up, 3 in
Jan 31 07:09:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e76 e76: 3 total, 3 up, 3 in
Jan 31 07:09:10 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 76 pg[9.18( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=73/54 les/c/f=74/55/0 sis=76) [2] r=0 lpr=76 pi=[54,76)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:10 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 76 pg[9.18( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=73/54 les/c/f=74/55/0 sis=76) [2] r=0 lpr=76 pi=[54,76)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:10 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 76 pg[9.8( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=73/54 les/c/f=74/55/0 sis=76) [2] r=0 lpr=76 pi=[54,76)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:10 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 76 pg[9.8( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=73/54 les/c/f=74/55/0 sis=76) [2] r=0 lpr=76 pi=[54,76)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:11 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Jan 31 07:09:11 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Jan 31 07:09:11 compute-2 ceph-mon[77282]: osdmap e76: 3 total, 3 up, 3 in
Jan 31 07:09:11 compute-2 ceph-mon[77282]: 5.f scrub starts
Jan 31 07:09:11 compute-2 ceph-mon[77282]: 5.f scrub ok
Jan 31 07:09:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 31 07:09:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 31 07:09:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e77 e77: 3 total, 3 up, 3 in
Jan 31 07:09:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:11 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 77 pg[9.9( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=75/54 les/c/f=76/55/0 sis=77) [2] r=0 lpr=77 pi=[54,77)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:11 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 77 pg[9.19( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=75/54 les/c/f=76/55/0 sis=77) [2] r=0 lpr=77 pi=[54,77)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:11 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 77 pg[9.9( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=75/54 les/c/f=76/55/0 sis=77) [2] r=0 lpr=77 pi=[54,77)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:11 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 77 pg[9.19( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=75/54 les/c/f=76/55/0 sis=77) [2] r=0 lpr=77 pi=[54,77)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:11 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 77 pg[9.18( v 48'908 (0'0,48'908] local-lis/les=76/77 n=5 ec=54/42 lis/c=73/54 les/c/f=74/55/0 sis=76) [2] r=0 lpr=76 pi=[54,76)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:11 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 77 pg[9.8( v 48'908 (0'0,48'908] local-lis/les=76/77 n=6 ec=54/42 lis/c=73/54 les/c/f=74/55/0 sis=76) [2] r=0 lpr=76 pi=[54,76)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:11.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:12.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:12 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 2.1d deep-scrub starts
Jan 31 07:09:12 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 2.1d deep-scrub ok
Jan 31 07:09:12 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e78 e78: 3 total, 3 up, 3 in
Jan 31 07:09:12 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 78 pg[9.9( v 48'908 (0'0,48'908] local-lis/les=77/78 n=6 ec=54/42 lis/c=75/54 les/c/f=76/55/0 sis=77) [2] r=0 lpr=77 pi=[54,77)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:12 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 78 pg[9.19( v 48'908 (0'0,48'908] local-lis/les=77/78 n=5 ec=54/42 lis/c=75/54 les/c/f=76/55/0 sis=77) [2] r=0 lpr=77 pi=[54,77)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:12 compute-2 ceph-mon[77282]: pgmap v185: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:12 compute-2 ceph-mon[77282]: 4.19 scrub starts
Jan 31 07:09:12 compute-2 ceph-mon[77282]: 4.19 scrub ok
Jan 31 07:09:12 compute-2 ceph-mon[77282]: 2.19 scrub starts
Jan 31 07:09:12 compute-2 ceph-mon[77282]: 2.19 scrub ok
Jan 31 07:09:12 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 07:09:12 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 07:09:12 compute-2 ceph-mon[77282]: osdmap e77: 3 total, 3 up, 3 in
Jan 31 07:09:13 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 2.1c deep-scrub starts
Jan 31 07:09:13 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 2.1c deep-scrub ok
Jan 31 07:09:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e79 e79: 3 total, 3 up, 3 in
Jan 31 07:09:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:13.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:13 compute-2 ceph-mon[77282]: 2.1d deep-scrub starts
Jan 31 07:09:13 compute-2 ceph-mon[77282]: 2.1d deep-scrub ok
Jan 31 07:09:13 compute-2 ceph-mon[77282]: osdmap e78: 3 total, 3 up, 3 in
Jan 31 07:09:13 compute-2 ceph-mon[77282]: pgmap v188: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 31 07:09:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 31 07:09:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 07:09:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 07:09:13 compute-2 ceph-mon[77282]: osdmap e79: 3 total, 3 up, 3 in
Jan 31 07:09:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:14.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e80 e80: 3 total, 3 up, 3 in
Jan 31 07:09:14 compute-2 ceph-mon[77282]: 2.1c deep-scrub starts
Jan 31 07:09:14 compute-2 ceph-mon[77282]: 6.4 scrub starts
Jan 31 07:09:14 compute-2 ceph-mon[77282]: 2.1c deep-scrub ok
Jan 31 07:09:14 compute-2 ceph-mon[77282]: 6.4 scrub ok
Jan 31 07:09:14 compute-2 ceph-mon[77282]: osdmap e80: 3 total, 3 up, 3 in
Jan 31 07:09:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e81 e81: 3 total, 3 up, 3 in
Jan 31 07:09:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:15.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:15 compute-2 ceph-mon[77282]: pgmap v191: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1023 B/s rd, 0 op/s; 137 B/s, 5 objects/s recovering
Jan 31 07:09:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 31 07:09:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 31 07:09:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 07:09:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 07:09:15 compute-2 ceph-mon[77282]: osdmap e81: 3 total, 3 up, 3 in
Jan 31 07:09:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 31 07:09:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:16.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 31 07:09:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:16 compute-2 ceph-mon[77282]: 5.2 scrub starts
Jan 31 07:09:16 compute-2 ceph-mon[77282]: 5.2 scrub ok
Jan 31 07:09:17 compute-2 sudo[87041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:09:17 compute-2 sudo[87041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:09:17 compute-2 sudo[87041]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:17 compute-2 sudo[87066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:09:17 compute-2 sudo[87066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:09:17 compute-2 sudo[87066]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:17 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 2.b deep-scrub starts
Jan 31 07:09:17 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 2.b deep-scrub ok
Jan 31 07:09:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:17.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:17 compute-2 ceph-mon[77282]: pgmap v193: 305 pgs: 2 peering, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s rd, 226 B/s wr, 1 op/s; 170 B/s, 7 objects/s recovering
Jan 31 07:09:17 compute-2 ceph-mon[77282]: 5.1b scrub starts
Jan 31 07:09:17 compute-2 ceph-mon[77282]: 5.1b scrub ok
Jan 31 07:09:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:18.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:18 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 2.a deep-scrub starts
Jan 31 07:09:18 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 2.a deep-scrub ok
Jan 31 07:09:19 compute-2 ceph-mon[77282]: 6.c deep-scrub starts
Jan 31 07:09:19 compute-2 ceph-mon[77282]: 6.c deep-scrub ok
Jan 31 07:09:19 compute-2 ceph-mon[77282]: 2.b deep-scrub starts
Jan 31 07:09:19 compute-2 ceph-mon[77282]: 2.b deep-scrub ok
Jan 31 07:09:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:19.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:20 compute-2 ceph-mon[77282]: 2.a deep-scrub starts
Jan 31 07:09:20 compute-2 ceph-mon[77282]: 2.a deep-scrub ok
Jan 31 07:09:20 compute-2 ceph-mon[77282]: pgmap v194: 305 pgs: 2 peering, 303 active+clean; 459 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 2.3 KiB/s rd, 2 op/s; 128 B/s, 6 objects/s recovering
Jan 31 07:09:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:20.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:21 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e82 e82: 3 total, 3 up, 3 in
Jan 31 07:09:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 31 07:09:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 31 07:09:21 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 82 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=67/68 n=5 ec=54/42 lis/c=67/67 les/c/f=68/68/0 sis=82 pruub=10.859189034s) [1] r=-1 lpr=82 pi=[67,82)/1 crt=48'908 mlcod 0'0 active pruub 123.769775391s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:21 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 82 pg[9.d( v 48'908 (0'0,48'908] local-lis/les=67/68 n=6 ec=54/42 lis/c=67/67 les/c/f=68/68/0 sis=82 pruub=10.859090805s) [1] r=-1 lpr=82 pi=[67,82)/1 crt=48'908 mlcod 0'0 active pruub 123.769775391s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:21 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 82 pg[9.d( v 48'908 (0'0,48'908] local-lis/les=67/68 n=6 ec=54/42 lis/c=67/67 les/c/f=68/68/0 sis=82 pruub=10.859064102s) [1] r=-1 lpr=82 pi=[67,82)/1 crt=48'908 mlcod 0'0 unknown NOTIFY pruub 123.769775391s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:21 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 82 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=67/68 n=5 ec=54/42 lis/c=67/67 les/c/f=68/68/0 sis=82 pruub=10.859080315s) [1] r=-1 lpr=82 pi=[67,82)/1 crt=48'908 mlcod 0'0 unknown NOTIFY pruub 123.769775391s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:21 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 2.c deep-scrub starts
Jan 31 07:09:21 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 2.c deep-scrub ok
Jan 31 07:09:21 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:21.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:22.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e83 e83: 3 total, 3 up, 3 in
Jan 31 07:09:22 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 83 pg[9.d( v 48'908 (0'0,48'908] local-lis/les=67/68 n=6 ec=54/42 lis/c=67/67 les/c/f=68/68/0 sis=83) [1]/[2] r=0 lpr=83 pi=[67,83)/1 crt=48'908 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:22 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 83 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=67/68 n=5 ec=54/42 lis/c=67/67 les/c/f=68/68/0 sis=83) [1]/[2] r=0 lpr=83 pi=[67,83)/1 crt=48'908 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:22 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 83 pg[9.d( v 48'908 (0'0,48'908] local-lis/les=67/68 n=6 ec=54/42 lis/c=67/67 les/c/f=68/68/0 sis=83) [1]/[2] r=0 lpr=83 pi=[67,83)/1 crt=48'908 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:22 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 83 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=67/68 n=5 ec=54/42 lis/c=67/67 les/c/f=68/68/0 sis=83) [1]/[2] r=0 lpr=83 pi=[67,83)/1 crt=48'908 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:22 compute-2 ceph-mon[77282]: pgmap v195: 305 pgs: 305 active+clean; 459 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 1.9 KiB/s rd, 1 op/s; 102 B/s, 4 objects/s recovering
Jan 31 07:09:22 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 07:09:22 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 07:09:22 compute-2 ceph-mon[77282]: osdmap e82: 3 total, 3 up, 3 in
Jan 31 07:09:22 compute-2 ceph-mon[77282]: 8.1 scrub starts
Jan 31 07:09:22 compute-2 ceph-mon[77282]: 8.1 scrub ok
Jan 31 07:09:22 compute-2 ceph-mon[77282]: 2.c deep-scrub starts
Jan 31 07:09:22 compute-2 ceph-mon[77282]: 2.c deep-scrub ok
Jan 31 07:09:22 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 2.d deep-scrub starts
Jan 31 07:09:22 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 2.d deep-scrub ok
Jan 31 07:09:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e84 e84: 3 total, 3 up, 3 in
Jan 31 07:09:23 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 84 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=83/84 n=5 ec=54/42 lis/c=67/67 les/c/f=68/68/0 sis=83) [1]/[2] async=[1] r=0 lpr=83 pi=[67,83)/1 crt=48'908 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:23 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 84 pg[9.d( v 48'908 (0'0,48'908] local-lis/les=83/84 n=6 ec=54/42 lis/c=67/67 les/c/f=68/68/0 sis=83) [1]/[2] async=[1] r=0 lpr=83 pi=[67,83)/1 crt=48'908 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:23 compute-2 ceph-mon[77282]: osdmap e83: 3 total, 3 up, 3 in
Jan 31 07:09:23 compute-2 ceph-mon[77282]: 2.d deep-scrub starts
Jan 31 07:09:23 compute-2 ceph-mon[77282]: 2.d deep-scrub ok
Jan 31 07:09:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 31 07:09:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 31 07:09:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 07:09:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 07:09:23 compute-2 ceph-mon[77282]: osdmap e84: 3 total, 3 up, 3 in
Jan 31 07:09:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:09:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:23.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:09:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:24.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:24 compute-2 ceph-mon[77282]: pgmap v198: 305 pgs: 305 active+clean; 459 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 946 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Jan 31 07:09:24 compute-2 ceph-mon[77282]: 8.7 scrub starts
Jan 31 07:09:24 compute-2 ceph-mon[77282]: 8.7 scrub ok
Jan 31 07:09:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e85 e85: 3 total, 3 up, 3 in
Jan 31 07:09:24 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 85 pg[9.d( v 48'908 (0'0,48'908] local-lis/les=83/84 n=6 ec=54/42 lis/c=83/67 les/c/f=84/68/0 sis=85 pruub=14.929837227s) [1] async=[1] r=-1 lpr=85 pi=[67,85)/1 crt=48'908 mlcod 48'908 active pruub 130.946304321s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:24 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 85 pg[9.d( v 48'908 (0'0,48'908] local-lis/les=83/84 n=6 ec=54/42 lis/c=83/67 les/c/f=84/68/0 sis=85 pruub=14.929689407s) [1] r=-1 lpr=85 pi=[67,85)/1 crt=48'908 mlcod 0'0 unknown NOTIFY pruub 130.946304321s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:24 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 85 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=83/84 n=5 ec=54/42 lis/c=83/67 les/c/f=84/68/0 sis=85 pruub=14.924436569s) [1] async=[1] r=-1 lpr=85 pi=[67,85)/1 crt=48'908 mlcod 48'908 active pruub 130.941040039s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:24 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 85 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=83/84 n=5 ec=54/42 lis/c=83/67 les/c/f=84/68/0 sis=85 pruub=14.923715591s) [1] r=-1 lpr=85 pi=[67,85)/1 crt=48'908 mlcod 0'0 unknown NOTIFY pruub 130.941040039s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e86 e86: 3 total, 3 up, 3 in
Jan 31 07:09:25 compute-2 ceph-mon[77282]: osdmap e85: 3 total, 3 up, 3 in
Jan 31 07:09:25 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 31 07:09:25 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 31 07:09:25 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 86 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=64/65 n=5 ec=54/42 lis/c=64/64 les/c/f=65/65/0 sis=86 pruub=11.299264908s) [1] r=-1 lpr=86 pi=[64,86)/1 crt=48'908 mlcod 0'0 active pruub 128.343856812s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:25 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 86 pg[9.f( v 48'908 (0'0,48'908] local-lis/les=64/65 n=6 ec=54/42 lis/c=64/64 les/c/f=65/65/0 sis=86 pruub=11.295726776s) [1] r=-1 lpr=86 pi=[64,86)/1 crt=48'908 mlcod 0'0 active pruub 128.340408325s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:25 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 86 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=64/65 n=5 ec=54/42 lis/c=64/64 les/c/f=65/65/0 sis=86 pruub=11.299181938s) [1] r=-1 lpr=86 pi=[64,86)/1 crt=48'908 mlcod 0'0 unknown NOTIFY pruub 128.343856812s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:25 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 86 pg[9.f( v 48'908 (0'0,48'908] local-lis/les=64/65 n=6 ec=54/42 lis/c=64/64 les/c/f=65/65/0 sis=86 pruub=11.295701981s) [1] r=-1 lpr=86 pi=[64,86)/1 crt=48'908 mlcod 0'0 unknown NOTIFY pruub 128.340408325s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:25.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:26.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:26 compute-2 ceph-mon[77282]: pgmap v201: 305 pgs: 2 active+remapped, 303 active+clean; 459 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 3 objects/s recovering
Jan 31 07:09:26 compute-2 ceph-mon[77282]: 5.18 scrub starts
Jan 31 07:09:26 compute-2 ceph-mon[77282]: 5.18 scrub ok
Jan 31 07:09:26 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 07:09:26 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 07:09:26 compute-2 ceph-mon[77282]: osdmap e86: 3 total, 3 up, 3 in
Jan 31 07:09:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e87 e87: 3 total, 3 up, 3 in
Jan 31 07:09:26 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 87 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=64/65 n=5 ec=54/42 lis/c=64/64 les/c/f=65/65/0 sis=87) [1]/[2] r=0 lpr=87 pi=[64,87)/1 crt=48'908 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:26 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 87 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=64/65 n=5 ec=54/42 lis/c=64/64 les/c/f=65/65/0 sis=87) [1]/[2] r=0 lpr=87 pi=[64,87)/1 crt=48'908 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:26 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 87 pg[9.f( v 48'908 (0'0,48'908] local-lis/les=64/65 n=6 ec=54/42 lis/c=64/64 les/c/f=65/65/0 sis=87) [1]/[2] r=0 lpr=87 pi=[64,87)/1 crt=48'908 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:26 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 87 pg[9.f( v 48'908 (0'0,48'908] local-lis/les=64/65 n=6 ec=54/42 lis/c=64/64 les/c/f=65/65/0 sis=87) [1]/[2] r=0 lpr=87 pi=[64,87)/1 crt=48'908 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e88 e88: 3 total, 3 up, 3 in
Jan 31 07:09:27 compute-2 ceph-mon[77282]: 8.e scrub starts
Jan 31 07:09:27 compute-2 ceph-mon[77282]: osdmap e87: 3 total, 3 up, 3 in
Jan 31 07:09:27 compute-2 ceph-mon[77282]: 8.e scrub ok
Jan 31 07:09:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:09:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:27.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:09:27 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 88 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=87/88 n=5 ec=54/42 lis/c=64/64 les/c/f=65/65/0 sis=87) [1]/[2] async=[1] r=0 lpr=87 pi=[64,87)/1 crt=48'908 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:27 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 88 pg[9.f( v 48'908 (0'0,48'908] local-lis/les=87/88 n=6 ec=54/42 lis/c=64/64 les/c/f=65/65/0 sis=87) [1]/[2] async=[1] r=0 lpr=87 pi=[64,87)/1 crt=48'908 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:28.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:28 compute-2 ceph-mon[77282]: pgmap v204: 305 pgs: 2 peering, 303 active+clean; 459 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 3 objects/s recovering
Jan 31 07:09:28 compute-2 ceph-mon[77282]: osdmap e88: 3 total, 3 up, 3 in
Jan 31 07:09:28 compute-2 ceph-mon[77282]: 8.13 scrub starts
Jan 31 07:09:28 compute-2 ceph-mon[77282]: 8.13 scrub ok
Jan 31 07:09:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e89 e89: 3 total, 3 up, 3 in
Jan 31 07:09:28 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 89 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=87/88 n=5 ec=54/42 lis/c=87/64 les/c/f=88/65/0 sis=89 pruub=15.089989662s) [1] async=[1] r=-1 lpr=89 pi=[64,89)/1 crt=48'908 mlcod 48'908 active pruub 135.207305908s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:28 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 89 pg[9.f( v 48'908 (0'0,48'908] local-lis/les=87/88 n=6 ec=54/42 lis/c=87/64 les/c/f=88/65/0 sis=89 pruub=15.092934608s) [1] async=[1] r=-1 lpr=89 pi=[64,89)/1 crt=48'908 mlcod 48'908 active pruub 135.210571289s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:28 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 89 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=87/88 n=5 ec=54/42 lis/c=87/64 les/c/f=88/65/0 sis=89 pruub=15.089619637s) [1] r=-1 lpr=89 pi=[64,89)/1 crt=48'908 mlcod 0'0 unknown NOTIFY pruub 135.207305908s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:28 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 89 pg[9.f( v 48'908 (0'0,48'908] local-lis/les=87/88 n=6 ec=54/42 lis/c=87/64 les/c/f=88/65/0 sis=89 pruub=15.092743874s) [1] r=-1 lpr=89 pi=[64,89)/1 crt=48'908 mlcod 0'0 unknown NOTIFY pruub 135.210571289s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e90 e90: 3 total, 3 up, 3 in
Jan 31 07:09:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:09:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:29.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:09:29 compute-2 ceph-mon[77282]: osdmap e89: 3 total, 3 up, 3 in
Jan 31 07:09:29 compute-2 ceph-mon[77282]: 8.1a scrub starts
Jan 31 07:09:29 compute-2 ceph-mon[77282]: 8.1a scrub ok
Jan 31 07:09:29 compute-2 ceph-mon[77282]: pgmap v207: 305 pgs: 2 peering, 303 active+clean; 458 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 150 B/s, 0 objects/s recovering
Jan 31 07:09:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:30.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:30 compute-2 ceph-mon[77282]: 8.1d scrub starts
Jan 31 07:09:30 compute-2 ceph-mon[77282]: osdmap e90: 3 total, 3 up, 3 in
Jan 31 07:09:30 compute-2 ceph-mon[77282]: 8.1d scrub ok
Jan 31 07:09:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:09:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:31.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:09:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e91 e91: 3 total, 3 up, 3 in
Jan 31 07:09:31 compute-2 ceph-mon[77282]: 3.1c scrub starts
Jan 31 07:09:31 compute-2 ceph-mon[77282]: 3.1c scrub ok
Jan 31 07:09:31 compute-2 ceph-mon[77282]: 8.1e scrub starts
Jan 31 07:09:31 compute-2 ceph-mon[77282]: 8.1e scrub ok
Jan 31 07:09:31 compute-2 ceph-mon[77282]: pgmap v209: 305 pgs: 305 active+clean; 458 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 180 B/s, 3 objects/s recovering
Jan 31 07:09:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 31 07:09:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:32.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:32 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Jan 31 07:09:32 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Jan 31 07:09:32 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e92 e92: 3 total, 3 up, 3 in
Jan 31 07:09:32 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 31 07:09:32 compute-2 ceph-mon[77282]: osdmap e91: 3 total, 3 up, 3 in
Jan 31 07:09:32 compute-2 ceph-mon[77282]: osdmap e92: 3 total, 3 up, 3 in
Jan 31 07:09:33 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 31 07:09:33 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 31 07:09:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:09:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:33.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:09:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e93 e93: 3 total, 3 up, 3 in
Jan 31 07:09:34 compute-2 ceph-mon[77282]: 4.13 scrub starts
Jan 31 07:09:34 compute-2 ceph-mon[77282]: 4.13 scrub ok
Jan 31 07:09:34 compute-2 ceph-mon[77282]: 2.10 scrub starts
Jan 31 07:09:34 compute-2 ceph-mon[77282]: 2.10 scrub ok
Jan 31 07:09:34 compute-2 ceph-mon[77282]: pgmap v212: 305 pgs: 305 active+clean; 460 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 48 B/s, 2 objects/s recovering
Jan 31 07:09:34 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 31 07:09:34 compute-2 ceph-mon[77282]: 4.c scrub starts
Jan 31 07:09:34 compute-2 ceph-mon[77282]: 4.c scrub ok
Jan 31 07:09:34 compute-2 ceph-mon[77282]: 2.15 scrub starts
Jan 31 07:09:34 compute-2 ceph-mon[77282]: 2.15 scrub ok
Jan 31 07:09:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:34.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e94 e94: 3 total, 3 up, 3 in
Jan 31 07:09:35 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 31 07:09:35 compute-2 ceph-mon[77282]: osdmap e93: 3 total, 3 up, 3 in
Jan 31 07:09:35 compute-2 ceph-mon[77282]: osdmap e94: 3 total, 3 up, 3 in
Jan 31 07:09:35 compute-2 sshd-session[87100]: Accepted publickey for zuul from 192.168.122.30 port 44976 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:09:35 compute-2 systemd-logind[801]: New session 33 of user zuul.
Jan 31 07:09:35 compute-2 systemd[1]: Started Session 33 of User zuul.
Jan 31 07:09:35 compute-2 sshd-session[87100]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:09:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e95 e95: 3 total, 3 up, 3 in
Jan 31 07:09:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:35.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:36 compute-2 ceph-mon[77282]: 9.1 scrub starts
Jan 31 07:09:36 compute-2 ceph-mon[77282]: 9.1 scrub ok
Jan 31 07:09:36 compute-2 ceph-mon[77282]: pgmap v215: 305 pgs: 305 active+clean; 460 KiB data, 139 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:36 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 31 07:09:36 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 31 07:09:36 compute-2 ceph-mon[77282]: osdmap e95: 3 total, 3 up, 3 in
Jan 31 07:09:36 compute-2 python3.9[87253]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:09:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:36.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e96 e96: 3 total, 3 up, 3 in
Jan 31 07:09:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:37 compute-2 ceph-mon[77282]: 9.2 scrub starts
Jan 31 07:09:37 compute-2 ceph-mon[77282]: 9.2 scrub ok
Jan 31 07:09:37 compute-2 ceph-mon[77282]: 4.d scrub starts
Jan 31 07:09:37 compute-2 ceph-mon[77282]: 4.d scrub ok
Jan 31 07:09:37 compute-2 ceph-mon[77282]: osdmap e96: 3 total, 3 up, 3 in
Jan 31 07:09:37 compute-2 sudo[87439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:09:37 compute-2 sudo[87439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:09:37 compute-2 sudo[87439]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:37 compute-2 sudo[87491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmihqbniejmmvhjyyrjxgvzldkdtwoxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843377.3971107-59-88258636506308/AnsiballZ_command.py'
Jan 31 07:09:37 compute-2 sudo[87491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:09:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e97 e97: 3 total, 3 up, 3 in
Jan 31 07:09:37 compute-2 sudo[87492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:09:37 compute-2 sudo[87492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:09:37 compute-2 sudo[87492]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:37 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Jan 31 07:09:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:37.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:37 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Jan 31 07:09:37 compute-2 python3.9[87498]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:09:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:09:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:38.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:09:38 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Jan 31 07:09:38 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Jan 31 07:09:38 compute-2 ceph-mon[77282]: pgmap v218: 305 pgs: 1 peering, 304 active+clean; 460 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 07:09:38 compute-2 ceph-mon[77282]: osdmap e97: 3 total, 3 up, 3 in
Jan 31 07:09:38 compute-2 ceph-mon[77282]: 2.12 scrub starts
Jan 31 07:09:38 compute-2 ceph-mon[77282]: 2.12 scrub ok
Jan 31 07:09:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e98 e98: 3 total, 3 up, 3 in
Jan 31 07:09:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:09:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:39.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:09:39 compute-2 ceph-mon[77282]: 9.4 scrub starts
Jan 31 07:09:39 compute-2 ceph-mon[77282]: 9.4 scrub ok
Jan 31 07:09:39 compute-2 ceph-mon[77282]: 2.18 scrub starts
Jan 31 07:09:39 compute-2 ceph-mon[77282]: 2.18 scrub ok
Jan 31 07:09:39 compute-2 ceph-mon[77282]: osdmap e98: 3 total, 3 up, 3 in
Jan 31 07:09:39 compute-2 ceph-mon[77282]: pgmap v221: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 460 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 07:09:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e99 e99: 3 total, 3 up, 3 in
Jan 31 07:09:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:40.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:40 compute-2 ceph-mon[77282]: osdmap e99: 3 total, 3 up, 3 in
Jan 31 07:09:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:09:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:41.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:09:41 compute-2 ceph-mon[77282]: 9.c scrub starts
Jan 31 07:09:41 compute-2 ceph-mon[77282]: 9.c scrub ok
Jan 31 07:09:41 compute-2 ceph-mon[77282]: pgmap v223: 305 pgs: 305 active+clean; 460 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 48 B/s, 1 objects/s recovering
Jan 31 07:09:41 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 31 07:09:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e100 e100: 3 total, 3 up, 3 in
Jan 31 07:09:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:42.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:42 compute-2 ceph-mon[77282]: 4.a scrub starts
Jan 31 07:09:42 compute-2 ceph-mon[77282]: 4.a scrub ok
Jan 31 07:09:42 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 31 07:09:42 compute-2 ceph-mon[77282]: osdmap e100: 3 total, 3 up, 3 in
Jan 31 07:09:42 compute-2 ceph-mon[77282]: 4.e scrub starts
Jan 31 07:09:42 compute-2 ceph-mon[77282]: 4.e scrub ok
Jan 31 07:09:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:43.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:43 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Jan 31 07:09:43 compute-2 ceph-mon[77282]: pgmap v225: 305 pgs: 305 active+clean; 460 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 39 B/s, 1 objects/s recovering
Jan 31 07:09:43 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 31 07:09:43 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Jan 31 07:09:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e101 e101: 3 total, 3 up, 3 in
Jan 31 07:09:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:44.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:44 compute-2 sudo[87491]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:45 compute-2 ceph-mon[77282]: 2.13 scrub starts
Jan 31 07:09:45 compute-2 ceph-mon[77282]: 2.13 scrub ok
Jan 31 07:09:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 31 07:09:45 compute-2 ceph-mon[77282]: osdmap e101: 3 total, 3 up, 3 in
Jan 31 07:09:45 compute-2 sshd-session[87103]: Connection closed by 192.168.122.30 port 44976
Jan 31 07:09:45 compute-2 sshd-session[87100]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:09:45 compute-2 systemd[1]: session-33.scope: Deactivated successfully.
Jan 31 07:09:45 compute-2 systemd[1]: session-33.scope: Consumed 7.903s CPU time.
Jan 31 07:09:45 compute-2 systemd-logind[801]: Session 33 logged out. Waiting for processes to exit.
Jan 31 07:09:45 compute-2 systemd-logind[801]: Removed session 33.
Jan 31 07:09:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:45.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:45 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 2.1b deep-scrub starts
Jan 31 07:09:45 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 2.1b deep-scrub ok
Jan 31 07:09:46 compute-2 ceph-mon[77282]: pgmap v227: 305 pgs: 305 active+clean; 460 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 36 B/s, 1 objects/s recovering
Jan 31 07:09:46 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 31 07:09:46 compute-2 ceph-mon[77282]: 2.1b deep-scrub starts
Jan 31 07:09:46 compute-2 ceph-mon[77282]: 2.1b deep-scrub ok
Jan 31 07:09:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e102 e102: 3 total, 3 up, 3 in
Jan 31 07:09:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 102 pg[9.15( v 48'908 (0'0,48'908] local-lis/les=67/68 n=5 ec=54/42 lis/c=67/67 les/c/f=68/68/0 sis=102 pruub=10.438390732s) [1] r=-1 lpr=102 pi=[67,102)/1 crt=48'908 mlcod 0'0 active pruub 147.770568848s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:46 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 102 pg[9.15( v 48'908 (0'0,48'908] local-lis/les=67/68 n=5 ec=54/42 lis/c=67/67 les/c/f=68/68/0 sis=102 pruub=10.438304901s) [1] r=-1 lpr=102 pi=[67,102)/1 crt=48'908 mlcod 0'0 unknown NOTIFY pruub 147.770568848s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:09:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:46.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:09:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:47 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 31 07:09:47 compute-2 ceph-mon[77282]: osdmap e102: 3 total, 3 up, 3 in
Jan 31 07:09:47 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e103 e103: 3 total, 3 up, 3 in
Jan 31 07:09:47 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 103 pg[9.15( v 48'908 (0'0,48'908] local-lis/les=67/68 n=5 ec=54/42 lis/c=67/67 les/c/f=68/68/0 sis=103) [1]/[2] r=0 lpr=103 pi=[67,103)/1 crt=48'908 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:47 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 103 pg[9.15( v 48'908 (0'0,48'908] local-lis/les=67/68 n=5 ec=54/42 lis/c=67/67 les/c/f=68/68/0 sis=103) [1]/[2] r=0 lpr=103 pi=[67,103)/1 crt=48'908 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:47.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e104 e104: 3 total, 3 up, 3 in
Jan 31 07:09:48 compute-2 ceph-mon[77282]: osdmap e103: 3 total, 3 up, 3 in
Jan 31 07:09:48 compute-2 ceph-mon[77282]: pgmap v230: 305 pgs: 305 active+clean; 460 KiB data, 139 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 31 07:09:48 compute-2 ceph-mon[77282]: 4.1a scrub starts
Jan 31 07:09:48 compute-2 ceph-mon[77282]: 4.1a scrub ok
Jan 31 07:09:48 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 104 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=69/69 les/c/f=70/70/0 sis=104) [2] r=0 lpr=104 pi=[69,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:48 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 104 pg[9.15( v 48'908 (0'0,48'908] local-lis/les=103/104 n=5 ec=54/42 lis/c=67/67 les/c/f=68/68/0 sis=103) [1]/[2] async=[1] r=0 lpr=103 pi=[67,103)/1 crt=48'908 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:09:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:48.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:09:49 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 2.f scrub starts
Jan 31 07:09:49 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 2.f scrub ok
Jan 31 07:09:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 31 07:09:49 compute-2 ceph-mon[77282]: osdmap e104: 3 total, 3 up, 3 in
Jan 31 07:09:49 compute-2 ceph-mon[77282]: 2.f scrub starts
Jan 31 07:09:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e105 e105: 3 total, 3 up, 3 in
Jan 31 07:09:49 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 105 pg[9.15( v 48'908 (0'0,48'908] local-lis/les=103/104 n=5 ec=54/42 lis/c=103/67 les/c/f=104/68/0 sis=105 pruub=15.207094193s) [1] async=[1] r=-1 lpr=105 pi=[67,105)/1 crt=48'908 mlcod 48'908 active pruub 155.615905762s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:49 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 105 pg[9.15( v 48'908 (0'0,48'908] local-lis/les=103/104 n=5 ec=54/42 lis/c=103/67 les/c/f=104/68/0 sis=105 pruub=15.206976891s) [1] r=-1 lpr=105 pi=[67,105)/1 crt=48'908 mlcod 0'0 unknown NOTIFY pruub 155.615905762s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:49 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 105 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=69/69 les/c/f=70/70/0 sis=105) [2]/[1] r=-1 lpr=105 pi=[69,105)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:49 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 105 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=69/69 les/c/f=70/70/0 sis=105) [2]/[1] r=-1 lpr=105 pi=[69,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:09:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:49.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:09:49 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Jan 31 07:09:50 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Jan 31 07:09:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e106 e106: 3 total, 3 up, 3 in
Jan 31 07:09:50 compute-2 ceph-mon[77282]: 2.f scrub ok
Jan 31 07:09:50 compute-2 ceph-mon[77282]: osdmap e105: 3 total, 3 up, 3 in
Jan 31 07:09:50 compute-2 ceph-mon[77282]: pgmap v233: 305 pgs: 1 unknown, 1 active+remapped, 303 active+clean; 460 KiB data, 140 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:50 compute-2 ceph-mon[77282]: 8.16 scrub starts
Jan 31 07:09:50 compute-2 ceph-mon[77282]: 8.16 scrub ok
Jan 31 07:09:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:09:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:50.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:09:50 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 11.13 deep-scrub starts
Jan 31 07:09:51 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 11.13 deep-scrub ok
Jan 31 07:09:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e107 e107: 3 total, 3 up, 3 in
Jan 31 07:09:51 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 107 pg[9.16( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=105/69 les/c/f=106/70/0 sis=107) [2] r=0 lpr=107 pi=[69,107)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:51 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 107 pg[9.16( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=105/69 les/c/f=106/70/0 sis=107) [2] r=0 lpr=107 pi=[69,107)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:51 compute-2 ceph-mon[77282]: 9.14 deep-scrub starts
Jan 31 07:09:51 compute-2 ceph-mon[77282]: 9.14 deep-scrub ok
Jan 31 07:09:51 compute-2 ceph-mon[77282]: osdmap e106: 3 total, 3 up, 3 in
Jan 31 07:09:51 compute-2 ceph-mon[77282]: 11.13 deep-scrub starts
Jan 31 07:09:51 compute-2 ceph-mon[77282]: 11.13 deep-scrub ok
Jan 31 07:09:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:51.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:52 compute-2 ceph-mon[77282]: osdmap e107: 3 total, 3 up, 3 in
Jan 31 07:09:52 compute-2 ceph-mon[77282]: pgmap v236: 305 pgs: 1 active+recovering+remapped, 304 active+clean; 460 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 4/223 objects misplaced (1.794%); 27 B/s, 0 objects/s recovering
Jan 31 07:09:52 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 31 07:09:52 compute-2 ceph-mon[77282]: 4.18 scrub starts
Jan 31 07:09:52 compute-2 ceph-mon[77282]: 4.18 scrub ok
Jan 31 07:09:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e108 e108: 3 total, 3 up, 3 in
Jan 31 07:09:52 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 108 pg[9.16( v 48'908 (0'0,48'908] local-lis/les=107/108 n=5 ec=54/42 lis/c=105/69 les/c/f=106/70/0 sis=107) [2] r=0 lpr=107 pi=[69,107)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:52.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:53 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 31 07:09:53 compute-2 ceph-mon[77282]: osdmap e108: 3 total, 3 up, 3 in
Jan 31 07:09:53 compute-2 ceph-mon[77282]: 4.1b scrub starts
Jan 31 07:09:53 compute-2 ceph-mon[77282]: 4.1b scrub ok
Jan 31 07:09:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:09:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:53.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:09:54 compute-2 ceph-mon[77282]: 9.1c deep-scrub starts
Jan 31 07:09:54 compute-2 ceph-mon[77282]: 9.1c deep-scrub ok
Jan 31 07:09:54 compute-2 ceph-mon[77282]: pgmap v238: 305 pgs: 1 active+recovering+remapped, 304 active+clean; 460 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 4/223 objects misplaced (1.794%); 26 B/s, 0 objects/s recovering
Jan 31 07:09:54 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 31 07:09:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:09:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:54.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:09:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e109 e109: 3 total, 3 up, 3 in
Jan 31 07:09:54 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Jan 31 07:09:55 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Jan 31 07:09:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e110 e110: 3 total, 3 up, 3 in
Jan 31 07:09:55 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 110 pg[9.19( v 48'908 (0'0,48'908] local-lis/les=77/78 n=5 ec=54/42 lis/c=77/77 les/c/f=78/78/0 sis=110 pruub=13.426394463s) [0] r=-1 lpr=110 pi=[77,110)/1 crt=48'908 mlcod 0'0 active pruub 160.091674805s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:55 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 110 pg[9.19( v 48'908 (0'0,48'908] local-lis/les=77/78 n=5 ec=54/42 lis/c=77/77 les/c/f=78/78/0 sis=110 pruub=13.426311493s) [0] r=-1 lpr=110 pi=[77,110)/1 crt=48'908 mlcod 0'0 unknown NOTIFY pruub 160.091674805s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:55 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 31 07:09:55 compute-2 ceph-mon[77282]: osdmap e109: 3 total, 3 up, 3 in
Jan 31 07:09:55 compute-2 ceph-mon[77282]: 7.1f scrub starts
Jan 31 07:09:55 compute-2 ceph-mon[77282]: 7.1f scrub ok
Jan 31 07:09:55 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 31 07:09:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:55.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:55 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Jan 31 07:09:55 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Jan 31 07:09:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:09:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:56.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:09:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e111 e111: 3 total, 3 up, 3 in
Jan 31 07:09:56 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 111 pg[9.19( v 48'908 (0'0,48'908] local-lis/les=77/78 n=5 ec=54/42 lis/c=77/77 les/c/f=78/78/0 sis=111) [0]/[2] r=0 lpr=111 pi=[77,111)/1 crt=48'908 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:56 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 111 pg[9.19( v 48'908 (0'0,48'908] local-lis/les=77/78 n=5 ec=54/42 lis/c=77/77 les/c/f=78/78/0 sis=111) [0]/[2] r=0 lpr=111 pi=[77,111)/1 crt=48'908 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:56 compute-2 ceph-mon[77282]: pgmap v240: 305 pgs: 305 active+clean; 459 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 42 B/s, 1 objects/s recovering
Jan 31 07:09:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 31 07:09:56 compute-2 ceph-mon[77282]: osdmap e110: 3 total, 3 up, 3 in
Jan 31 07:09:56 compute-2 ceph-mon[77282]: 10.12 scrub starts
Jan 31 07:09:56 compute-2 ceph-mon[77282]: 10.12 scrub ok
Jan 31 07:09:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e112 e112: 3 total, 3 up, 3 in
Jan 31 07:09:57 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 112 pg[9.19( v 48'908 (0'0,48'908] local-lis/les=111/112 n=5 ec=54/42 lis/c=77/77 les/c/f=78/78/0 sis=111) [0]/[2] async=[0] r=0 lpr=111 pi=[77,111)/1 crt=48'908 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:57 compute-2 ceph-mon[77282]: 11.2 scrub starts
Jan 31 07:09:57 compute-2 ceph-mon[77282]: 11.2 scrub ok
Jan 31 07:09:57 compute-2 ceph-mon[77282]: 3.a scrub starts
Jan 31 07:09:57 compute-2 ceph-mon[77282]: 3.a scrub ok
Jan 31 07:09:57 compute-2 ceph-mon[77282]: osdmap e111: 3 total, 3 up, 3 in
Jan 31 07:09:57 compute-2 ceph-mon[77282]: 11.6 scrub starts
Jan 31 07:09:57 compute-2 ceph-mon[77282]: 11.6 scrub ok
Jan 31 07:09:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 31 07:09:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 31 07:09:57 compute-2 ceph-mon[77282]: osdmap e112: 3 total, 3 up, 3 in
Jan 31 07:09:57 compute-2 sudo[87585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:09:57 compute-2 sudo[87585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:09:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:57.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:57 compute-2 sudo[87585]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:57 compute-2 sudo[87610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:09:57 compute-2 sudo[87610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:09:57 compute-2 sudo[87610]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:09:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:58.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:09:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e113 e113: 3 total, 3 up, 3 in
Jan 31 07:09:58 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 113 pg[9.19( v 48'908 (0'0,48'908] local-lis/les=111/112 n=5 ec=54/42 lis/c=111/77 les/c/f=112/78/0 sis=113 pruub=14.899805069s) [0] async=[0] r=-1 lpr=113 pi=[77,113)/1 crt=48'908 mlcod 48'908 active pruub 164.724090576s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:58 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 113 pg[9.19( v 48'908 (0'0,48'908] local-lis/les=111/112 n=5 ec=54/42 lis/c=111/77 les/c/f=112/78/0 sis=113 pruub=14.899676323s) [0] r=-1 lpr=113 pi=[77,113)/1 crt=48'908 mlcod 0'0 unknown NOTIFY pruub 164.724090576s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:58 compute-2 ceph-mon[77282]: pgmap v243: 305 pgs: 305 active+clean; 459 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 21 B/s, 0 objects/s recovering
Jan 31 07:09:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e114 e114: 3 total, 3 up, 3 in
Jan 31 07:09:59 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 114 pg[9.1b( v 48'908 (0'0,48'908] local-lis/les=64/65 n=5 ec=54/42 lis/c=64/64 les/c/f=65/65/0 sis=114 pruub=9.512154579s) [0] r=-1 lpr=114 pi=[64,114)/1 crt=48'908 mlcod 0'0 active pruub 160.344696045s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:59 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 114 pg[9.1b( v 48'908 (0'0,48'908] local-lis/les=64/65 n=5 ec=54/42 lis/c=64/64 les/c/f=65/65/0 sis=114 pruub=9.512063980s) [0] r=-1 lpr=114 pi=[64,114)/1 crt=48'908 mlcod 0'0 unknown NOTIFY pruub 160.344696045s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:59 compute-2 ceph-mon[77282]: osdmap e113: 3 total, 3 up, 3 in
Jan 31 07:09:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 31 07:09:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 31 07:09:59 compute-2 ceph-mon[77282]: osdmap e114: 3 total, 3 up, 3 in
Jan 31 07:09:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e115 e115: 3 total, 3 up, 3 in
Jan 31 07:09:59 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 115 pg[9.1b( v 48'908 (0'0,48'908] local-lis/les=64/65 n=5 ec=54/42 lis/c=64/64 les/c/f=65/65/0 sis=115) [0]/[2] r=0 lpr=115 pi=[64,115)/1 crt=48'908 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:59 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 115 pg[9.1b( v 48'908 (0'0,48'908] local-lis/les=64/65 n=5 ec=54/42 lis/c=64/64 les/c/f=65/65/0 sis=115) [0]/[2] r=0 lpr=115 pi=[64,115)/1 crt=48'908 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:09:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:59.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:59 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Jan 31 07:09:59 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Jan 31 07:10:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:10:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:00.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:10:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e116 e116: 3 total, 3 up, 3 in
Jan 31 07:10:00 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 116 pg[9.1b( v 48'908 (0'0,48'908] local-lis/les=115/116 n=5 ec=54/42 lis/c=64/64 les/c/f=65/65/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[64,115)/1 crt=48'908 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:10:00 compute-2 ceph-mon[77282]: pgmap v246: 305 pgs: 305 active+clean; 459 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:00 compute-2 ceph-mon[77282]: osdmap e115: 3 total, 3 up, 3 in
Jan 31 07:10:00 compute-2 ceph-mon[77282]: 8.11 scrub starts
Jan 31 07:10:00 compute-2 ceph-mon[77282]: 8.11 scrub ok
Jan 31 07:10:00 compute-2 ceph-mon[77282]: overall HEALTH_OK
Jan 31 07:10:01 compute-2 sudo[87637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:01 compute-2 sudo[87637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:01 compute-2 sudo[87637]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:01 compute-2 sudo[87664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:10:01 compute-2 sudo[87664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:01 compute-2 sudo[87664]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:01 compute-2 sudo[87689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:01 compute-2 sudo[87689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:01 compute-2 sudo[87689]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:01 compute-2 sshd-session[87662]: Accepted publickey for zuul from 192.168.122.30 port 58478 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:10:01 compute-2 systemd-logind[801]: New session 34 of user zuul.
Jan 31 07:10:01 compute-2 systemd[1]: Started Session 34 of User zuul.
Jan 31 07:10:01 compute-2 sshd-session[87662]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:10:01 compute-2 sudo[87715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 07:10:01 compute-2 sudo[87715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:01 compute-2 podman[87871]: 2026-01-31 07:10:01.534480546 +0000 UTC m=+0.058814757 container exec 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 07:10:01 compute-2 podman[87982]: 2026-01-31 07:10:01.68824219 +0000 UTC m=+0.051742279 container exec_died 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:10:01 compute-2 podman[87871]: 2026-01-31 07:10:01.694690481 +0000 UTC m=+0.219024672 container exec_died 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:10:01 compute-2 python3.9[87981]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 31 07:10:01 compute-2 ceph-mon[77282]: 11.9 scrub starts
Jan 31 07:10:01 compute-2 ceph-mon[77282]: 11.9 scrub ok
Jan 31 07:10:01 compute-2 ceph-mon[77282]: osdmap e116: 3 total, 3 up, 3 in
Jan 31 07:10:01 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 31 07:10:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e117 e117: 3 total, 3 up, 3 in
Jan 31 07:10:01 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 117 pg[9.1b( v 48'908 (0'0,48'908] local-lis/les=115/116 n=5 ec=54/42 lis/c=115/64 les/c/f=116/65/0 sis=117 pruub=14.966401100s) [0] async=[0] r=-1 lpr=117 pi=[64,117)/1 crt=48'908 mlcod 48'908 active pruub 168.071304321s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:10:01 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 117 pg[9.1b( v 48'908 (0'0,48'908] local-lis/les=115/116 n=5 ec=54/42 lis/c=115/64 les/c/f=116/65/0 sis=117 pruub=14.966291428s) [0] r=-1 lpr=117 pi=[64,117)/1 crt=48'908 mlcod 0'0 unknown NOTIFY pruub 168.071304321s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:10:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:01.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:01 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Jan 31 07:10:01 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Jan 31 07:10:02 compute-2 podman[88194]: 2026-01-31 07:10:02.192927795 +0000 UTC m=+0.055409186 container exec f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 07:10:02 compute-2 podman[88194]: 2026-01-31 07:10:02.204310758 +0000 UTC m=+0.066792109 container exec_died f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 07:10:02 compute-2 podman[88261]: 2026-01-31 07:10:02.375722861 +0000 UTC m=+0.044130976 container exec 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, description=keepalived for Ceph, version=2.2.4, io.openshift.tags=Ceph keepalived, release=1793, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 31 07:10:02 compute-2 podman[88261]: 2026-01-31 07:10:02.389336974 +0000 UTC m=+0.057745089 container exec_died 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=2.2.4, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., architecture=x86_64, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph)
Jan 31 07:10:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:10:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:02.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:10:02 compute-2 sudo[87715]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:02 compute-2 sudo[88292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:02 compute-2 sudo[88292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:02 compute-2 sudo[88292]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:02 compute-2 sudo[88341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:10:02 compute-2 sudo[88341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:02 compute-2 sudo[88341]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:02 compute-2 sudo[88389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:02 compute-2 sudo[88389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:02 compute-2 sudo[88389]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:02 compute-2 sudo[88415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:10:02 compute-2 sudo[88415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:02 compute-2 ceph-mon[77282]: 5.1 scrub starts
Jan 31 07:10:02 compute-2 ceph-mon[77282]: 5.1 scrub ok
Jan 31 07:10:02 compute-2 ceph-mon[77282]: pgmap v250: 305 pgs: 305 active+clean; 459 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 31 07:10:02 compute-2 ceph-mon[77282]: osdmap e117: 3 total, 3 up, 3 in
Jan 31 07:10:02 compute-2 ceph-mon[77282]: 8.2 scrub starts
Jan 31 07:10:02 compute-2 ceph-mon[77282]: 8.2 scrub ok
Jan 31 07:10:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:10:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:10:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:10:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:10:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:10:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:10:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e118 e118: 3 total, 3 up, 3 in
Jan 31 07:10:02 compute-2 sudo[88415]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:03 compute-2 python3.9[88489]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:10:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:10:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:03.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:10:03 compute-2 ceph-mon[77282]: 7.1 scrub starts
Jan 31 07:10:03 compute-2 ceph-mon[77282]: 7.1 scrub ok
Jan 31 07:10:03 compute-2 ceph-mon[77282]: osdmap e118: 3 total, 3 up, 3 in
Jan 31 07:10:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:10:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:10:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:10:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:10:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:10:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:10:03 compute-2 ceph-mon[77282]: pgmap v253: 305 pgs: 1 peering, 304 active+clean; 459 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 86 B/s, 3 objects/s recovering
Jan 31 07:10:04 compute-2 sudo[88675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztowlgxpaytzftizurklkiakdlozcvlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843403.7156482-95-111432595364393/AnsiballZ_command.py'
Jan 31 07:10:04 compute-2 sudo[88675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:10:04 compute-2 python3.9[88677]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:10:04 compute-2 sudo[88675]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:04.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:04 compute-2 ceph-mon[77282]: 7.7 scrub starts
Jan 31 07:10:04 compute-2 ceph-mon[77282]: 7.7 scrub ok
Jan 31 07:10:05 compute-2 sudo[88829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfjwclkbubsyggnaeptmeokahndtctnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843404.7698689-132-19192371906277/AnsiballZ_stat.py'
Jan 31 07:10:05 compute-2 sudo[88829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:10:05 compute-2 python3.9[88831]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:10:05 compute-2 sudo[88829]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:05.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:05 compute-2 ceph-mon[77282]: 7.c scrub starts
Jan 31 07:10:05 compute-2 ceph-mon[77282]: 7.c scrub ok
Jan 31 07:10:05 compute-2 ceph-mon[77282]: 11.b scrub starts
Jan 31 07:10:05 compute-2 ceph-mon[77282]: 11.b scrub ok
Jan 31 07:10:05 compute-2 ceph-mon[77282]: pgmap v254: 305 pgs: 1 peering, 304 active+clean; 459 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 59 B/s, 2 objects/s recovering
Jan 31 07:10:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:06.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:06 compute-2 sudo[88983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-larytazztbbgmgxadkqtysaovagtkhon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843405.8490176-164-177395281898869/AnsiballZ_file.py'
Jan 31 07:10:06 compute-2 sudo[88983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:10:06 compute-2 python3.9[88985]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:10:06 compute-2 sudo[88983]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:06 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 10.f deep-scrub starts
Jan 31 07:10:06 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 10.f deep-scrub ok
Jan 31 07:10:07 compute-2 ceph-mon[77282]: 10.f deep-scrub starts
Jan 31 07:10:07 compute-2 ceph-mon[77282]: 10.f deep-scrub ok
Jan 31 07:10:07 compute-2 sudo[89136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehukoqkqrzhfxgdtfkvzabdpbgvhvtgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843407.0193927-191-202045821254048/AnsiballZ_file.py'
Jan 31 07:10:07 compute-2 sudo[89136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:10:07 compute-2 python3.9[89138]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:10:07 compute-2 sudo[89136]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:07.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:08 compute-2 python3.9[89288]: ansible-ansible.builtin.service_facts Invoked
Jan 31 07:10:08 compute-2 ceph-mon[77282]: 11.c scrub starts
Jan 31 07:10:08 compute-2 ceph-mon[77282]: 11.c scrub ok
Jan 31 07:10:08 compute-2 ceph-mon[77282]: pgmap v255: 305 pgs: 1 peering, 304 active+clean; 459 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 50 B/s, 1 objects/s recovering
Jan 31 07:10:08 compute-2 network[89305]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 07:10:08 compute-2 network[89306]: 'network-scripts' will be removed from distribution in near future.
Jan 31 07:10:08 compute-2 network[89307]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 07:10:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:08.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:09 compute-2 sudo[89345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:09 compute-2 sudo[89345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:09 compute-2 sudo[89345]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:09 compute-2 sudo[89374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:10:09 compute-2 sudo[89374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:09 compute-2 sudo[89374]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:09 compute-2 ceph-mon[77282]: 7.d scrub starts
Jan 31 07:10:09 compute-2 ceph-mon[77282]: 7.d scrub ok
Jan 31 07:10:09 compute-2 ceph-mon[77282]: 11.d scrub starts
Jan 31 07:10:09 compute-2 ceph-mon[77282]: 11.d scrub ok
Jan 31 07:10:09 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:10:09 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:10:09 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 31 07:10:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e119 e119: 3 total, 3 up, 3 in
Jan 31 07:10:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:10:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:09.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:10:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:10.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:10 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 119 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=85/85 les/c/f=86/86/0 sis=119) [2] r=0 lpr=119 pi=[85,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:10:10 compute-2 ceph-mon[77282]: pgmap v256: 305 pgs: 305 active+clean; 459 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 1 objects/s recovering
Jan 31 07:10:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 31 07:10:10 compute-2 ceph-mon[77282]: osdmap e119: 3 total, 3 up, 3 in
Jan 31 07:10:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e120 e120: 3 total, 3 up, 3 in
Jan 31 07:10:10 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 120 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=85/85 les/c/f=86/86/0 sis=120) [2]/[1] r=-1 lpr=120 pi=[85,120)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:10:10 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 120 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=85/85 les/c/f=86/86/0 sis=120) [2]/[1] r=-1 lpr=120 pi=[85,120)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:10:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:11 compute-2 ceph-mon[77282]: osdmap e120: 3 total, 3 up, 3 in
Jan 31 07:10:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 31 07:10:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e121 e121: 3 total, 3 up, 3 in
Jan 31 07:10:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:11.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:12 compute-2 python3.9[89619]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:10:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:12.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:12 compute-2 ceph-mon[77282]: 7.12 scrub starts
Jan 31 07:10:12 compute-2 ceph-mon[77282]: 7.12 scrub ok
Jan 31 07:10:12 compute-2 ceph-mon[77282]: pgmap v259: 305 pgs: 305 active+clean; 459 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:12 compute-2 ceph-mon[77282]: 11.10 scrub starts
Jan 31 07:10:12 compute-2 ceph-mon[77282]: 11.10 scrub ok
Jan 31 07:10:12 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 31 07:10:12 compute-2 ceph-mon[77282]: osdmap e121: 3 total, 3 up, 3 in
Jan 31 07:10:12 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e122 e122: 3 total, 3 up, 3 in
Jan 31 07:10:12 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 122 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=120/85 les/c/f=121/86/0 sis=122) [2] r=0 lpr=122 pi=[85,122)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:10:12 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 122 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=120/85 les/c/f=121/86/0 sis=122) [2] r=0 lpr=122 pi=[85,122)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:10:12 compute-2 python3.9[89769]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:10:13 compute-2 ceph-mon[77282]: osdmap e122: 3 total, 3 up, 3 in
Jan 31 07:10:13 compute-2 ceph-mon[77282]: pgmap v262: 305 pgs: 1 unknown, 304 active+clean; 459 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e123 e123: 3 total, 3 up, 3 in
Jan 31 07:10:13 compute-2 ceph-osd[79942]: osd.2 pg_epoch: 123 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=122/123 n=5 ec=54/42 lis/c=120/85 les/c/f=121/86/0 sis=122) [2] r=0 lpr=122 pi=[85,122)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:10:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:13.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:14 compute-2 python3.9[89924]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:10:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:14.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e124 e124: 3 total, 3 up, 3 in
Jan 31 07:10:14 compute-2 ceph-mon[77282]: 7.15 scrub starts
Jan 31 07:10:14 compute-2 ceph-mon[77282]: 7.15 scrub ok
Jan 31 07:10:14 compute-2 ceph-mon[77282]: osdmap e123: 3 total, 3 up, 3 in
Jan 31 07:10:14 compute-2 ceph-mon[77282]: osdmap e124: 3 total, 3 up, 3 in
Jan 31 07:10:15 compute-2 sudo[90081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfycveqopnhtzbqqyehjpupacavljjvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843414.9509294-335-213525247814192/AnsiballZ_setup.py'
Jan 31 07:10:15 compute-2 sudo[90081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:10:15 compute-2 python3.9[90083]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:10:15 compute-2 sudo[90081]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e125 e125: 3 total, 3 up, 3 in
Jan 31 07:10:15 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 6.1 deep-scrub starts
Jan 31 07:10:15 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 6.1 deep-scrub ok
Jan 31 07:10:15 compute-2 ceph-mon[77282]: 7.17 scrub starts
Jan 31 07:10:15 compute-2 ceph-mon[77282]: 7.17 scrub ok
Jan 31 07:10:15 compute-2 ceph-mon[77282]: 11.11 scrub starts
Jan 31 07:10:15 compute-2 ceph-mon[77282]: 11.11 scrub ok
Jan 31 07:10:15 compute-2 ceph-mon[77282]: pgmap v265: 305 pgs: 1 unknown, 304 active+clean; 459 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 0 objects/s recovering
Jan 31 07:10:15 compute-2 ceph-mon[77282]: osdmap e125: 3 total, 3 up, 3 in
Jan 31 07:10:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:10:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:15.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:10:15 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Jan 31 07:10:15 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:10:15.955207) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:10:15 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Jan 31 07:10:15 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843415955301, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7294, "num_deletes": 256, "total_data_size": 13357049, "memory_usage": 13588304, "flush_reason": "Manual Compaction"}
Jan 31 07:10:15 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Jan 31 07:10:16 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843416007155, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 7839399, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 241, "largest_seqno": 7299, "table_properties": {"data_size": 7811094, "index_size": 18590, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8645, "raw_key_size": 81305, "raw_average_key_size": 23, "raw_value_size": 7743864, "raw_average_value_size": 2243, "num_data_blocks": 822, "num_entries": 3451, "num_filter_entries": 3451, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 1769843219, "file_creation_time": 1769843415, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:10:16 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 52040 microseconds, and 17888 cpu microseconds.
Jan 31 07:10:16 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:10:16.007255) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 7839399 bytes OK
Jan 31 07:10:16 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:10:16.007277) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Jan 31 07:10:16 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:10:16.010713) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Jan 31 07:10:16 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:10:16.010734) EVENT_LOG_v1 {"time_micros": 1769843416010728, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 31 07:10:16 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:10:16.010754) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Jan 31 07:10:16 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 13319187, prev total WAL file size 13319187, number of live WAL files 2.
Jan 31 07:10:16 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:10:16 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:10:16.012755) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 31 07:10:16 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Jan 31 07:10:16 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(7655KB) 8(1648B)]
Jan 31 07:10:16 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843416012824, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 7841047, "oldest_snapshot_seqno": -1}
Jan 31 07:10:16 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 3198 keys, 7835615 bytes, temperature: kUnknown
Jan 31 07:10:16 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843416058848, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 7835615, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7808001, "index_size": 18544, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8005, "raw_key_size": 77089, "raw_average_key_size": 24, "raw_value_size": 7743922, "raw_average_value_size": 2421, "num_data_blocks": 822, "num_entries": 3198, "num_filter_entries": 3198, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769843416, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:10:16 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:10:16 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:10:16.059140) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 7835615 bytes
Jan 31 07:10:16 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:10:16.060379) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.0 rd, 169.9 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(7.5, 0.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3456, records dropped: 258 output_compression: NoCompression
Jan 31 07:10:16 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:10:16.060400) EVENT_LOG_v1 {"time_micros": 1769843416060388, "job": 4, "event": "compaction_finished", "compaction_time_micros": 46125, "compaction_time_cpu_micros": 15198, "output_level": 6, "num_output_files": 1, "total_output_size": 7835615, "num_input_records": 3456, "num_output_records": 3198, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:10:16 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:10:16 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843416061088, "job": 4, "event": "table_file_deletion", "file_number": 14}
Jan 31 07:10:16 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:10:16 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843416061135, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 31 07:10:16 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:10:16.012653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:10:16 compute-2 sudo[90166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtmrapwjqnvwldysemqlgqmcdocvrhzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843414.9509294-335-213525247814192/AnsiballZ_dnf.py'
Jan 31 07:10:16 compute-2 sudo[90166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:10:16 compute-2 python3.9[90168]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:10:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:10:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:16.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:10:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:16 compute-2 ceph-mon[77282]: 7.19 scrub starts
Jan 31 07:10:16 compute-2 ceph-mon[77282]: 7.19 scrub ok
Jan 31 07:10:16 compute-2 ceph-mon[77282]: 11.15 scrub starts
Jan 31 07:10:16 compute-2 ceph-mon[77282]: 11.15 scrub ok
Jan 31 07:10:16 compute-2 ceph-mon[77282]: 6.1 deep-scrub starts
Jan 31 07:10:16 compute-2 ceph-mon[77282]: 6.1 deep-scrub ok
Jan 31 07:10:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:10:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:17.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:10:17 compute-2 ceph-mon[77282]: pgmap v267: 305 pgs: 1 unknown, 304 active+clean; 459 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 48 B/s, 1 objects/s recovering
Jan 31 07:10:17 compute-2 sudo[90216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:17 compute-2 sudo[90216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:17 compute-2 sudo[90216]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:18 compute-2 sudo[90241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:18 compute-2 sudo[90241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:18 compute-2 sudo[90241]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:18.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:18 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Jan 31 07:10:18 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Jan 31 07:10:18 compute-2 ceph-mon[77282]: 11.18 scrub starts
Jan 31 07:10:18 compute-2 ceph-mon[77282]: 11.18 scrub ok
Jan 31 07:10:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:19.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e126 e126: 3 total, 3 up, 3 in
Jan 31 07:10:20 compute-2 ceph-mon[77282]: 11.1f deep-scrub starts
Jan 31 07:10:20 compute-2 ceph-mon[77282]: 11.1f deep-scrub ok
Jan 31 07:10:20 compute-2 ceph-mon[77282]: 7.16 scrub starts
Jan 31 07:10:20 compute-2 ceph-mon[77282]: 7.16 scrub ok
Jan 31 07:10:20 compute-2 ceph-mon[77282]: pgmap v268: 305 pgs: 305 active+clean; 459 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 8 op/s; 36 B/s, 2 objects/s recovering
Jan 31 07:10:20 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:10:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:20.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:21 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e127 e127: 3 total, 3 up, 3 in
Jan 31 07:10:21 compute-2 ceph-mon[77282]: 10.13 deep-scrub starts
Jan 31 07:10:21 compute-2 ceph-mon[77282]: 10.13 deep-scrub ok
Jan 31 07:10:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:10:21 compute-2 ceph-mon[77282]: osdmap e126: 3 total, 3 up, 3 in
Jan 31 07:10:21 compute-2 ceph-mon[77282]: 7.1a scrub starts
Jan 31 07:10:21 compute-2 ceph-mon[77282]: 7.1a scrub ok
Jan 31 07:10:21 compute-2 ceph-mon[77282]: osdmap e127: 3 total, 3 up, 3 in
Jan 31 07:10:21 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:21.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e128 e128: 3 total, 3 up, 3 in
Jan 31 07:10:22 compute-2 ceph-mon[77282]: 7.1c scrub starts
Jan 31 07:10:22 compute-2 ceph-mon[77282]: 7.1c scrub ok
Jan 31 07:10:22 compute-2 ceph-mon[77282]: pgmap v271: 305 pgs: 305 active+clean; 459 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 8 op/s; 0 B/s, 1 objects/s recovering
Jan 31 07:10:22 compute-2 ceph-mon[77282]: osdmap e128: 3 total, 3 up, 3 in
Jan 31 07:10:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:22.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e129 e129: 3 total, 3 up, 3 in
Jan 31 07:10:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:23.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 e130: 3 total, 3 up, 3 in
Jan 31 07:10:24 compute-2 ceph-mon[77282]: 10.1b deep-scrub starts
Jan 31 07:10:24 compute-2 ceph-mon[77282]: 10.1b deep-scrub ok
Jan 31 07:10:24 compute-2 ceph-mon[77282]: osdmap e129: 3 total, 3 up, 3 in
Jan 31 07:10:24 compute-2 ceph-mon[77282]: pgmap v274: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:24.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:24 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 7.11 deep-scrub starts
Jan 31 07:10:24 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 7.11 deep-scrub ok
Jan 31 07:10:25 compute-2 ceph-mon[77282]: osdmap e130: 3 total, 3 up, 3 in
Jan 31 07:10:25 compute-2 ceph-mon[77282]: 10.6 scrub starts
Jan 31 07:10:25 compute-2 ceph-mon[77282]: 10.6 scrub ok
Jan 31 07:10:25 compute-2 ceph-mon[77282]: 7.11 deep-scrub starts
Jan 31 07:10:25 compute-2 ceph-mon[77282]: 7.11 deep-scrub ok
Jan 31 07:10:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:25.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:26 compute-2 ceph-mon[77282]: 10.7 scrub starts
Jan 31 07:10:26 compute-2 ceph-mon[77282]: 10.7 scrub ok
Jan 31 07:10:26 compute-2 ceph-mon[77282]: pgmap v276: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:26.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:27 compute-2 ceph-mon[77282]: 10.19 scrub starts
Jan 31 07:10:27 compute-2 ceph-mon[77282]: 10.19 scrub ok
Jan 31 07:10:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:27.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:27 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 31 07:10:27 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 31 07:10:28 compute-2 ceph-mon[77282]: pgmap v277: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:28 compute-2 ceph-mon[77282]: 7.5 scrub starts
Jan 31 07:10:28 compute-2 ceph-mon[77282]: 7.5 scrub ok
Jan 31 07:10:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:28.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:29.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:30 compute-2 ceph-mon[77282]: pgmap v278: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 15 B/s, 0 objects/s recovering
Jan 31 07:10:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:30.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:31 compute-2 ceph-mon[77282]: 10.18 deep-scrub starts
Jan 31 07:10:31 compute-2 ceph-mon[77282]: 10.18 deep-scrub ok
Jan 31 07:10:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:31.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:31 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.f scrub starts
Jan 31 07:10:31 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.f scrub ok
Jan 31 07:10:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:32.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:32 compute-2 ceph-mon[77282]: 10.8 scrub starts
Jan 31 07:10:32 compute-2 ceph-mon[77282]: 10.8 scrub ok
Jan 31 07:10:32 compute-2 ceph-mon[77282]: pgmap v279: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Jan 31 07:10:32 compute-2 ceph-mon[77282]: 8.f scrub starts
Jan 31 07:10:32 compute-2 ceph-mon[77282]: 8.f scrub ok
Jan 31 07:10:33 compute-2 ceph-mon[77282]: 10.5 scrub starts
Jan 31 07:10:33 compute-2 ceph-mon[77282]: 10.5 scrub ok
Jan 31 07:10:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:10:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:33.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:10:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:34.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:34 compute-2 ceph-mon[77282]: 10.9 scrub starts
Jan 31 07:10:34 compute-2 ceph-mon[77282]: 10.9 scrub ok
Jan 31 07:10:34 compute-2 ceph-mon[77282]: pgmap v280: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Jan 31 07:10:34 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Jan 31 07:10:35 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Jan 31 07:10:35 compute-2 ceph-mon[77282]: 10.2 scrub starts
Jan 31 07:10:35 compute-2 ceph-mon[77282]: 10.2 scrub ok
Jan 31 07:10:35 compute-2 ceph-mon[77282]: 8.9 scrub starts
Jan 31 07:10:35 compute-2 ceph-mon[77282]: 8.9 scrub ok
Jan 31 07:10:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:35.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:10:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:36.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:10:36 compute-2 ceph-mon[77282]: pgmap v281: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 1 B/s, 0 objects/s recovering
Jan 31 07:10:36 compute-2 ceph-mon[77282]: 7.1b scrub starts
Jan 31 07:10:36 compute-2 ceph-mon[77282]: 7.1b scrub ok
Jan 31 07:10:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:37.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:38 compute-2 sudo[90345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:38 compute-2 sudo[90345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:38 compute-2 sudo[90345]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:38 compute-2 sudo[90370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:38 compute-2 sudo[90370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:38 compute-2 sudo[90370]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:38.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:38 compute-2 ceph-mon[77282]: 10.a scrub starts
Jan 31 07:10:38 compute-2 ceph-mon[77282]: 10.a scrub ok
Jan 31 07:10:38 compute-2 ceph-mon[77282]: pgmap v282: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 1 B/s, 0 objects/s recovering
Jan 31 07:10:39 compute-2 ceph-mon[77282]: 10.b deep-scrub starts
Jan 31 07:10:39 compute-2 ceph-mon[77282]: 10.b deep-scrub ok
Jan 31 07:10:39 compute-2 ceph-mon[77282]: 10.14 scrub starts
Jan 31 07:10:39 compute-2 ceph-mon[77282]: 10.14 scrub ok
Jan 31 07:10:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:39.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:40.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:40 compute-2 ceph-mon[77282]: pgmap v283: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 1 B/s, 0 objects/s recovering
Jan 31 07:10:40 compute-2 ceph-mon[77282]: 7.e scrub starts
Jan 31 07:10:40 compute-2 ceph-mon[77282]: 7.e scrub ok
Jan 31 07:10:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:41 compute-2 ceph-mon[77282]: pgmap v284: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:41.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:42.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:43 compute-2 ceph-mon[77282]: 10.c scrub starts
Jan 31 07:10:43 compute-2 ceph-mon[77282]: 10.c scrub ok
Jan 31 07:10:43 compute-2 ceph-mon[77282]: 7.f scrub starts
Jan 31 07:10:43 compute-2 ceph-mon[77282]: 7.f scrub ok
Jan 31 07:10:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:43.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:44 compute-2 ceph-mon[77282]: 10.15 scrub starts
Jan 31 07:10:44 compute-2 ceph-mon[77282]: 10.15 scrub ok
Jan 31 07:10:44 compute-2 ceph-mon[77282]: pgmap v285: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:44.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:44 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Jan 31 07:10:44 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Jan 31 07:10:45 compute-2 ceph-mon[77282]: 10.1 scrub starts
Jan 31 07:10:45 compute-2 ceph-mon[77282]: 10.1 scrub ok
Jan 31 07:10:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:10:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:45.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:10:46 compute-2 ceph-mon[77282]: pgmap v286: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:10:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:46.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:10:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:47 compute-2 ceph-mon[77282]: 10.d deep-scrub starts
Jan 31 07:10:47 compute-2 ceph-mon[77282]: 10.d deep-scrub ok
Jan 31 07:10:47 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Jan 31 07:10:47 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Jan 31 07:10:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:10:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:47.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:10:48 compute-2 ceph-mon[77282]: 10.e scrub starts
Jan 31 07:10:48 compute-2 ceph-mon[77282]: 10.e scrub ok
Jan 31 07:10:48 compute-2 ceph-mon[77282]: pgmap v287: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:48 compute-2 ceph-mon[77282]: 8.3 scrub starts
Jan 31 07:10:48 compute-2 ceph-mon[77282]: 8.3 scrub ok
Jan 31 07:10:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:48.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:49.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:50.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:50 compute-2 ceph-mon[77282]: pgmap v288: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:51 compute-2 ceph-mon[77282]: 10.16 deep-scrub starts
Jan 31 07:10:51 compute-2 ceph-mon[77282]: 10.16 deep-scrub ok
Jan 31 07:10:51 compute-2 ceph-mon[77282]: 7.2 scrub starts
Jan 31 07:10:51 compute-2 ceph-mon[77282]: 7.2 scrub ok
Jan 31 07:10:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:51.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:52.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:52 compute-2 ceph-mon[77282]: pgmap v289: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:53 compute-2 ceph-mon[77282]: 7.18 scrub starts
Jan 31 07:10:53 compute-2 ceph-mon[77282]: 7.18 scrub ok
Jan 31 07:10:53 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Jan 31 07:10:53 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Jan 31 07:10:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:53.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:54.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:54 compute-2 ceph-mon[77282]: pgmap v290: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:54 compute-2 ceph-mon[77282]: 7.1d scrub starts
Jan 31 07:10:54 compute-2 ceph-mon[77282]: 7.1d scrub ok
Jan 31 07:10:54 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 11.a scrub starts
Jan 31 07:10:54 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 11.a scrub ok
Jan 31 07:10:55 compute-2 ceph-mon[77282]: 11.a scrub starts
Jan 31 07:10:55 compute-2 ceph-mon[77282]: 11.a scrub ok
Jan 31 07:10:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:10:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:55.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:10:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:56.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:56 compute-2 ceph-mon[77282]: pgmap v291: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:56 compute-2 ceph-mon[77282]: 7.8 scrub starts
Jan 31 07:10:56 compute-2 ceph-mon[77282]: 7.8 scrub ok
Jan 31 07:10:57 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 11.e scrub starts
Jan 31 07:10:57 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 11.e scrub ok
Jan 31 07:10:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:57.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:58 compute-2 sshd-session[90434]: Invalid user sol from 92.118.39.56 port 44282
Jan 31 07:10:58 compute-2 ceph-mon[77282]: 10.17 scrub starts
Jan 31 07:10:58 compute-2 ceph-mon[77282]: 10.17 scrub ok
Jan 31 07:10:58 compute-2 ceph-mon[77282]: pgmap v292: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:58 compute-2 sshd-session[90434]: Connection closed by invalid user sol 92.118.39.56 port 44282 [preauth]
Jan 31 07:10:58 compute-2 sudo[90436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:58 compute-2 sudo[90436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:58 compute-2 sudo[90436]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:58 compute-2 sudo[90461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:58 compute-2 sudo[90461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:58 compute-2 sudo[90461]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:10:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:58.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:10:59 compute-2 ceph-mon[77282]: 11.e scrub starts
Jan 31 07:10:59 compute-2 ceph-mon[77282]: 11.e scrub ok
Jan 31 07:10:59 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Jan 31 07:10:59 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Jan 31 07:10:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:10:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:59.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:00 compute-2 ceph-mon[77282]: 7.3 scrub starts
Jan 31 07:11:00 compute-2 ceph-mon[77282]: 7.3 scrub ok
Jan 31 07:11:00 compute-2 ceph-mon[77282]: 10.1a scrub starts
Jan 31 07:11:00 compute-2 ceph-mon[77282]: 10.1a scrub ok
Jan 31 07:11:00 compute-2 ceph-mon[77282]: pgmap v293: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:00 compute-2 ceph-mon[77282]: 10.11 scrub starts
Jan 31 07:11:00 compute-2 ceph-mon[77282]: 10.11 scrub ok
Jan 31 07:11:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:00.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:01 compute-2 ceph-mon[77282]: 7.b scrub starts
Jan 31 07:11:01 compute-2 ceph-mon[77282]: 7.b scrub ok
Jan 31 07:11:01 compute-2 ceph-mon[77282]: 10.1c scrub starts
Jan 31 07:11:01 compute-2 ceph-mon[77282]: 10.1c scrub ok
Jan 31 07:11:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:01.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:02 compute-2 ceph-mon[77282]: 10.1d scrub starts
Jan 31 07:11:02 compute-2 ceph-mon[77282]: 10.1d scrub ok
Jan 31 07:11:02 compute-2 ceph-mon[77282]: pgmap v294: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:11:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:02.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:11:02 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.a scrub starts
Jan 31 07:11:02 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.a scrub ok
Jan 31 07:11:02 compute-2 sudo[90166]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:03 compute-2 ceph-mon[77282]: 8.a scrub starts
Jan 31 07:11:03 compute-2 ceph-mon[77282]: 8.a scrub ok
Jan 31 07:11:03 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.d scrub starts
Jan 31 07:11:03 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.d scrub ok
Jan 31 07:11:03 compute-2 sudo[90638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxuwufxmhfpuwfzbcqzwaruncaosctrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843463.59577-372-222055205596681/AnsiballZ_command.py'
Jan 31 07:11:03 compute-2 sudo[90638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:03.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:04 compute-2 python3.9[90640]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:11:04 compute-2 ceph-mon[77282]: 7.6 deep-scrub starts
Jan 31 07:11:04 compute-2 ceph-mon[77282]: 7.6 deep-scrub ok
Jan 31 07:11:04 compute-2 ceph-mon[77282]: pgmap v295: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:04 compute-2 ceph-mon[77282]: 8.d scrub starts
Jan 31 07:11:04 compute-2 ceph-mon[77282]: 8.d scrub ok
Jan 31 07:11:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:04.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:04 compute-2 sudo[90638]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:05 compute-2 ceph-mon[77282]: 10.1f scrub starts
Jan 31 07:11:05 compute-2 ceph-mon[77282]: 10.1f scrub ok
Jan 31 07:11:05 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 11.8 deep-scrub starts
Jan 31 07:11:05 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 11.8 deep-scrub ok
Jan 31 07:11:05 compute-2 sudo[90926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irtldapksxiwjumjurhldgjiubrjiaiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843464.9792373-396-264371241504668/AnsiballZ_selinux.py'
Jan 31 07:11:05 compute-2 sudo[90926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:05.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:06 compute-2 python3.9[90928]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 31 07:11:06 compute-2 sudo[90926]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:06 compute-2 ceph-mon[77282]: pgmap v296: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:06 compute-2 ceph-mon[77282]: 11.8 deep-scrub starts
Jan 31 07:11:06 compute-2 ceph-mon[77282]: 11.8 deep-scrub ok
Jan 31 07:11:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:11:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:06.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:11:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:06 compute-2 sudo[91078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rifyjzwyqaauzvuixhlnfitvaodssbyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843466.758991-429-190123555883494/AnsiballZ_command.py'
Jan 31 07:11:06 compute-2 sudo[91078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:07 compute-2 python3.9[91080]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 31 07:11:07 compute-2 sudo[91078]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:07 compute-2 sudo[91231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtmzpqxexikshfwaxbcbzprfknvaflng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843467.3633986-453-263894260719185/AnsiballZ_file.py'
Jan 31 07:11:07 compute-2 sudo[91231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:07 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 7.a scrub starts
Jan 31 07:11:07 compute-2 python3.9[91233]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:11:07 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 7.a scrub ok
Jan 31 07:11:07 compute-2 sudo[91231]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:07.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:08 compute-2 sudo[91383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkqdlyjixxxcixxpulumtsthyvpzlben ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843468.059278-476-156119001524444/AnsiballZ_mount.py'
Jan 31 07:11:08 compute-2 sudo[91383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:08 compute-2 ceph-mon[77282]: 7.9 scrub starts
Jan 31 07:11:08 compute-2 ceph-mon[77282]: 7.9 scrub ok
Jan 31 07:11:08 compute-2 ceph-mon[77282]: 11.1a scrub starts
Jan 31 07:11:08 compute-2 ceph-mon[77282]: 11.1a scrub ok
Jan 31 07:11:08 compute-2 ceph-mon[77282]: pgmap v297: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:08 compute-2 ceph-mon[77282]: 7.a scrub starts
Jan 31 07:11:08 compute-2 ceph-mon[77282]: 7.a scrub ok
Jan 31 07:11:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:08.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:08 compute-2 python3.9[91385]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 31 07:11:08 compute-2 sudo[91383]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:09 compute-2 sudo[91411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:09 compute-2 sudo[91411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:09 compute-2 sudo[91411]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:09 compute-2 sudo[91436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:11:09 compute-2 sudo[91436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:09 compute-2 sudo[91436]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:09 compute-2 sudo[91461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:09 compute-2 sudo[91461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:09 compute-2 sudo[91461]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:09 compute-2 sudo[91486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 07:11:09 compute-2 sudo[91486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:09 compute-2 podman[91582]: 2026-01-31 07:11:09.673466198 +0000 UTC m=+0.052866807 container exec 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:11:09 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Jan 31 07:11:09 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Jan 31 07:11:09 compute-2 ceph-mon[77282]: 7.10 deep-scrub starts
Jan 31 07:11:09 compute-2 ceph-mon[77282]: 7.10 deep-scrub ok
Jan 31 07:11:09 compute-2 ceph-mon[77282]: 8.19 scrub starts
Jan 31 07:11:09 compute-2 ceph-mon[77282]: 8.19 scrub ok
Jan 31 07:11:09 compute-2 podman[91582]: 2026-01-31 07:11:09.792816249 +0000 UTC m=+0.172216858 container exec_died 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 07:11:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:09.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:10 compute-2 podman[91736]: 2026-01-31 07:11:10.254690067 +0000 UTC m=+0.047429224 container exec f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 07:11:10 compute-2 podman[91736]: 2026-01-31 07:11:10.269351046 +0000 UTC m=+0.062090173 container exec_died f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 07:11:10 compute-2 podman[91798]: 2026-01-31 07:11:10.442711674 +0000 UTC m=+0.060349216 container exec 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=keepalived, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, vcs-type=git, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 31 07:11:10 compute-2 podman[91798]: 2026-01-31 07:11:10.482495524 +0000 UTC m=+0.100133056 container exec_died 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, version=2.2.4, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph)
Jan 31 07:11:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:10.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:10 compute-2 sudo[91486]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:10 compute-2 sudo[91831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:10 compute-2 sudo[91831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:10 compute-2 sudo[91831]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:10 compute-2 sudo[91856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:11:10 compute-2 sudo[91856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:10 compute-2 sudo[91856]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:10 compute-2 sudo[91881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:10 compute-2 sudo[91881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:10 compute-2 sudo[91881]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:10 compute-2 sudo[91906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:11:10 compute-2 sudo[91906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:10 compute-2 ceph-mon[77282]: 7.4 scrub starts
Jan 31 07:11:10 compute-2 ceph-mon[77282]: 7.4 scrub ok
Jan 31 07:11:10 compute-2 ceph-mon[77282]: pgmap v298: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:10 compute-2 ceph-mon[77282]: 10.10 scrub starts
Jan 31 07:11:10 compute-2 ceph-mon[77282]: 10.10 scrub ok
Jan 31 07:11:10 compute-2 ceph-mon[77282]: 7.1e deep-scrub starts
Jan 31 07:11:10 compute-2 ceph-mon[77282]: 7.1e deep-scrub ok
Jan 31 07:11:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:11:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:11:11 compute-2 sudo[91906]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:11 compute-2 systemd[72614]: Created slice User Background Tasks Slice.
Jan 31 07:11:11 compute-2 systemd[72614]: Starting Cleanup of User's Temporary Files and Directories...
Jan 31 07:11:11 compute-2 systemd[72614]: Finished Cleanup of User's Temporary Files and Directories.
Jan 31 07:11:11 compute-2 sudo[92089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skunonuikfarjoqfciklfpdlepouskti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843471.2994409-561-193499446381305/AnsiballZ_file.py'
Jan 31 07:11:11 compute-2 sudo[92089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:11 compute-2 python3.9[92091]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:11:11 compute-2 sudo[92089]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:11:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:11.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:11:12 compute-2 ceph-mon[77282]: pgmap v299: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:12.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:12 compute-2 sudo[92241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljuuitfznfonzywtqmxvtvrropgvymaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843472.1262343-585-154033860401926/AnsiballZ_stat.py'
Jan 31 07:11:12 compute-2 sudo[92241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:12 compute-2 python3.9[92243]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:11:12 compute-2 sudo[92241]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:13 compute-2 sudo[92320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoawpxbrevdrlxqtsmshkbkeuviswylm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843472.1262343-585-154033860401926/AnsiballZ_file.py'
Jan 31 07:11:13 compute-2 sudo[92320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:13 compute-2 ceph-mon[77282]: 7.13 deep-scrub starts
Jan 31 07:11:13 compute-2 ceph-mon[77282]: 7.13 deep-scrub ok
Jan 31 07:11:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:11:13 compute-2 ceph-mon[77282]: 11.12 scrub starts
Jan 31 07:11:13 compute-2 ceph-mon[77282]: 11.12 scrub ok
Jan 31 07:11:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:11:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:11:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:11:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:11:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:11:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:11:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:11:13 compute-2 python3.9[92322]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:11:13 compute-2 sudo[92320]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:13.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:14 compute-2 ceph-mon[77282]: 8.12 scrub starts
Jan 31 07:11:14 compute-2 ceph-mon[77282]: 8.12 scrub ok
Jan 31 07:11:14 compute-2 ceph-mon[77282]: pgmap v300: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:14.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:14 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Jan 31 07:11:14 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Jan 31 07:11:14 compute-2 sudo[92472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcyfqifzqzrgxfomwhwasvdxhdlscvig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843474.4567342-648-12090015938377/AnsiballZ_stat.py'
Jan 31 07:11:14 compute-2 sudo[92472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:14 compute-2 python3.9[92474]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:11:14 compute-2 sudo[92472]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:15 compute-2 ceph-mon[77282]: 10.4 scrub starts
Jan 31 07:11:15 compute-2 ceph-mon[77282]: 10.4 scrub ok
Jan 31 07:11:15 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.b scrub starts
Jan 31 07:11:15 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.b scrub ok
Jan 31 07:11:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:15.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:16 compute-2 sudo[92627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjogvhknwzsaxavfmdhuqynvlqyevnuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843475.7701318-688-198651598060496/AnsiballZ_getent.py'
Jan 31 07:11:16 compute-2 sudo[92627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:16 compute-2 ceph-mon[77282]: pgmap v301: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:16 compute-2 ceph-mon[77282]: 8.b scrub starts
Jan 31 07:11:16 compute-2 ceph-mon[77282]: 8.b scrub ok
Jan 31 07:11:16 compute-2 python3.9[92629]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 31 07:11:16 compute-2 sudo[92627]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:16.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:16 compute-2 sudo[92780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxhgrlzuvqdmvceysadwjsvarpbfdzwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843476.7043216-717-91975030957014/AnsiballZ_getent.py'
Jan 31 07:11:16 compute-2 sudo[92780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:17 compute-2 python3.9[92782]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 31 07:11:17 compute-2 sudo[92780]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:17 compute-2 sudo[92934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xglfblgclngsnplpkhwxzqkszbvkzqui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843477.3661323-741-203306172331728/AnsiballZ_group.py'
Jan 31 07:11:17 compute-2 sudo[92934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:17 compute-2 python3.9[92936]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 07:11:17 compute-2 sudo[92934]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:17.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:18 compute-2 sudo[92961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:18 compute-2 sudo[92961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:18 compute-2 sudo[92961]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:18 compute-2 sudo[92986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:11:18 compute-2 sudo[92986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:18 compute-2 sudo[92986]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:18 compute-2 sudo[93063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:18 compute-2 sudo[93063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:18 compute-2 sudo[93063]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:18 compute-2 sudo[93106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:18 compute-2 sudo[93106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:18 compute-2 sudo[93106]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:18 compute-2 sudo[93186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyeehhfqhoezkcclidoafivcrkgbqegi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843478.2588365-767-197918143632825/AnsiballZ_file.py'
Jan 31 07:11:18 compute-2 sudo[93186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:18.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:18 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.1f deep-scrub starts
Jan 31 07:11:18 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.1f deep-scrub ok
Jan 31 07:11:18 compute-2 python3.9[93188]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 31 07:11:18 compute-2 sudo[93186]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:18 compute-2 ceph-mon[77282]: pgmap v302: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:18 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:11:18 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:11:19 compute-2 sudo[93339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uobfeiiahpmtirrwgxxlmpohpmnfhvll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843479.133275-800-218700913958883/AnsiballZ_dnf.py'
Jan 31 07:11:19 compute-2 sudo[93339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:19 compute-2 python3.9[93341]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:11:19 compute-2 ceph-mon[77282]: 8.1f deep-scrub starts
Jan 31 07:11:19 compute-2 ceph-mon[77282]: 8.1f deep-scrub ok
Jan 31 07:11:19 compute-2 ceph-mon[77282]: 6.6 scrub starts
Jan 31 07:11:19 compute-2 ceph-mon[77282]: 6.6 scrub ok
Jan 31 07:11:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:19.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:11:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:20.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:11:20 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Jan 31 07:11:20 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Jan 31 07:11:20 compute-2 ceph-mon[77282]: 8.10 scrub starts
Jan 31 07:11:20 compute-2 ceph-mon[77282]: 8.10 scrub ok
Jan 31 07:11:20 compute-2 ceph-mon[77282]: pgmap v303: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:21 compute-2 sudo[93339]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:21 compute-2 sudo[93493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scabqhzqnukgcgdziphvbrwoamggjhnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843481.198279-825-83755153108475/AnsiballZ_file.py'
Jan 31 07:11:21 compute-2 sudo[93493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:21 compute-2 python3.9[93495]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:11:21 compute-2 sudo[93493]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:21 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Jan 31 07:11:21 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Jan 31 07:11:21 compute-2 ceph-mon[77282]: 10.3 scrub starts
Jan 31 07:11:21 compute-2 ceph-mon[77282]: 10.3 scrub ok
Jan 31 07:11:21 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:22.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:22 compute-2 sudo[93645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmspmfwnjtrpufsdfmoezfexpqrsuwpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843481.8384292-849-63299371098342/AnsiballZ_stat.py'
Jan 31 07:11:22 compute-2 sudo[93645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:22 compute-2 python3.9[93647]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:11:22 compute-2 sudo[93645]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:22.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:22 compute-2 ceph-mon[77282]: pgmap v304: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:22 compute-2 ceph-mon[77282]: 11.19 scrub starts
Jan 31 07:11:22 compute-2 ceph-mon[77282]: 6.9 deep-scrub starts
Jan 31 07:11:22 compute-2 ceph-mon[77282]: 11.19 scrub ok
Jan 31 07:11:22 compute-2 ceph-mon[77282]: 6.9 deep-scrub ok
Jan 31 07:11:23 compute-2 ceph-mon[77282]: 6.b deep-scrub starts
Jan 31 07:11:23 compute-2 ceph-mon[77282]: 6.b deep-scrub ok
Jan 31 07:11:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:24.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:11:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:24.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:11:24 compute-2 ceph-mon[77282]: pgmap v305: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:24 compute-2 ceph-mon[77282]: 6.f scrub starts
Jan 31 07:11:24 compute-2 ceph-mon[77282]: 6.f scrub ok
Jan 31 07:11:25 compute-2 sudo[93725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agtsftsfqillxnqqtlrgiftwyrjbfmeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843481.8384292-849-63299371098342/AnsiballZ_file.py'
Jan 31 07:11:25 compute-2 sudo[93725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:25 compute-2 python3.9[93727]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:11:25 compute-2 sudo[93725]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:25 compute-2 ceph-mon[77282]: 9.19 scrub starts
Jan 31 07:11:25 compute-2 ceph-mon[77282]: 9.19 scrub ok
Jan 31 07:11:25 compute-2 ceph-mon[77282]: pgmap v306: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:25 compute-2 sudo[93877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jolyosvivihfokoktvybtzgwnwwybqsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843485.7681706-888-169569013873167/AnsiballZ_stat.py'
Jan 31 07:11:25 compute-2 sudo[93877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:26.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:26 compute-2 python3.9[93879]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:11:26 compute-2 sudo[93877]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:26 compute-2 sudo[93955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rykiwfaavubmogvbqitwudxzpjenflxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843485.7681706-888-169569013873167/AnsiballZ_file.py'
Jan 31 07:11:26 compute-2 sudo[93955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:26 compute-2 python3.9[93957]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:11:26 compute-2 sudo[93955]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:11:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:26.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:11:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:26 compute-2 ceph-mon[77282]: 11.1d scrub starts
Jan 31 07:11:26 compute-2 ceph-mon[77282]: 11.1d scrub ok
Jan 31 07:11:26 compute-2 ceph-mon[77282]: 9.1a scrub starts
Jan 31 07:11:26 compute-2 ceph-mon[77282]: 9.1a scrub ok
Jan 31 07:11:27 compute-2 sudo[94108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pecmsclpgivwoxucwuxvotcsxyrgfozi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843487.156622-933-160383875681297/AnsiballZ_dnf.py'
Jan 31 07:11:27 compute-2 sudo[94108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:27 compute-2 python3.9[94110]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:11:27 compute-2 ceph-mon[77282]: pgmap v307: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:28.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:28 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Jan 31 07:11:28 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Jan 31 07:11:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:28.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:28 compute-2 sudo[94108]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:28 compute-2 ceph-mon[77282]: 8.18 deep-scrub starts
Jan 31 07:11:28 compute-2 ceph-mon[77282]: 8.18 deep-scrub ok
Jan 31 07:11:29 compute-2 python3.9[94262]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:11:29 compute-2 ceph-mon[77282]: 8.5 scrub starts
Jan 31 07:11:29 compute-2 ceph-mon[77282]: 8.5 scrub ok
Jan 31 07:11:29 compute-2 ceph-mon[77282]: 9.1b scrub starts
Jan 31 07:11:29 compute-2 ceph-mon[77282]: 9.1b scrub ok
Jan 31 07:11:29 compute-2 ceph-mon[77282]: pgmap v308: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:30.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:30.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:30 compute-2 python3.9[94414]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 31 07:11:31 compute-2 ceph-mon[77282]: 11.1c scrub starts
Jan 31 07:11:31 compute-2 ceph-mon[77282]: 11.1c scrub ok
Jan 31 07:11:31 compute-2 python3.9[94565]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:11:31 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Jan 31 07:11:31 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Jan 31 07:11:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:11:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:32.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:11:32 compute-2 ceph-mon[77282]: 11.1e deep-scrub starts
Jan 31 07:11:32 compute-2 ceph-mon[77282]: 11.1e deep-scrub ok
Jan 31 07:11:32 compute-2 ceph-mon[77282]: pgmap v309: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:32 compute-2 sudo[94715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvmtbppzbtwywnqleiukebeeluqgmqsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843491.9176106-1056-242027399148994/AnsiballZ_systemd.py'
Jan 31 07:11:32 compute-2 sudo[94715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:32.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:32 compute-2 python3.9[94717]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:11:32 compute-2 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 31 07:11:32 compute-2 systemd[1]: tuned.service: Deactivated successfully.
Jan 31 07:11:32 compute-2 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 31 07:11:32 compute-2 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 07:11:32 compute-2 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 07:11:32 compute-2 sudo[94715]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:33 compute-2 ceph-mon[77282]: 7.14 scrub starts
Jan 31 07:11:33 compute-2 ceph-mon[77282]: 7.14 scrub ok
Jan 31 07:11:33 compute-2 python3.9[94879]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 31 07:11:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:34.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:34 compute-2 ceph-mon[77282]: 11.1b scrub starts
Jan 31 07:11:34 compute-2 ceph-mon[77282]: 11.1b scrub ok
Jan 31 07:11:34 compute-2 ceph-mon[77282]: pgmap v310: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:34.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:36.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:36 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Jan 31 07:11:36 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Jan 31 07:11:36 compute-2 ceph-mon[77282]: 9.1e scrub starts
Jan 31 07:11:36 compute-2 ceph-mon[77282]: 9.1e scrub ok
Jan 31 07:11:36 compute-2 ceph-mon[77282]: pgmap v311: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:36.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:37 compute-2 ceph-mon[77282]: 8.1b scrub starts
Jan 31 07:11:37 compute-2 ceph-mon[77282]: 8.1b scrub ok
Jan 31 07:11:37 compute-2 ceph-mon[77282]: 8.6 scrub starts
Jan 31 07:11:37 compute-2 ceph-mon[77282]: 8.6 scrub ok
Jan 31 07:11:37 compute-2 sudo[95031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nestfnfweoyptqlfxyxmhjfixhbjmwcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843497.4594085-1227-230977590609462/AnsiballZ_systemd.py'
Jan 31 07:11:37 compute-2 sudo[95031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:37 compute-2 python3.9[95033]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:11:38 compute-2 sudo[95031]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:38.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:38 compute-2 sudo[95185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfwbofsmdcqkhbrumiafbtprvgfiocgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843498.1384752-1227-176485468407353/AnsiballZ_systemd.py'
Jan 31 07:11:38 compute-2 sudo[95185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:38 compute-2 sudo[95188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:38 compute-2 sudo[95188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:38 compute-2 sudo[95188]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:38 compute-2 sudo[95213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:38 compute-2 sudo[95213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:38 compute-2 sudo[95213]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:38 compute-2 ceph-mon[77282]: 9.1f scrub starts
Jan 31 07:11:38 compute-2 ceph-mon[77282]: 9.1f scrub ok
Jan 31 07:11:38 compute-2 ceph-mon[77282]: pgmap v312: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:38.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:38 compute-2 python3.9[95187]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:11:38 compute-2 sudo[95185]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:39 compute-2 sshd-session[87738]: Connection closed by 192.168.122.30 port 58478
Jan 31 07:11:39 compute-2 sshd-session[87662]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:11:39 compute-2 systemd-logind[801]: Session 34 logged out. Waiting for processes to exit.
Jan 31 07:11:39 compute-2 systemd[1]: session-34.scope: Deactivated successfully.
Jan 31 07:11:39 compute-2 systemd[1]: session-34.scope: Consumed 1min 4.430s CPU time.
Jan 31 07:11:39 compute-2 systemd-logind[801]: Removed session 34.
Jan 31 07:11:39 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Jan 31 07:11:39 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Jan 31 07:11:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:40.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:11:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:40.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:11:40 compute-2 ceph-mon[77282]: 8.4 scrub starts
Jan 31 07:11:40 compute-2 ceph-mon[77282]: 8.4 scrub ok
Jan 31 07:11:40 compute-2 ceph-mon[77282]: 11.16 scrub starts
Jan 31 07:11:40 compute-2 ceph-mon[77282]: 11.16 scrub ok
Jan 31 07:11:40 compute-2 ceph-mon[77282]: pgmap v313: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:42.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:11:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:42.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:11:42 compute-2 ceph-mon[77282]: pgmap v314: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:44.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:44.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:44 compute-2 ceph-mon[77282]: pgmap v315: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:44 compute-2 sshd-session[95267]: Accepted publickey for zuul from 192.168.122.30 port 52484 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:11:44 compute-2 systemd-logind[801]: New session 35 of user zuul.
Jan 31 07:11:44 compute-2 systemd[1]: Started Session 35 of User zuul.
Jan 31 07:11:44 compute-2 sshd-session[95267]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:11:45 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Jan 31 07:11:45 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Jan 31 07:11:45 compute-2 python3.9[95421]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:11:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:46.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:11:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:46.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:11:46 compute-2 ceph-mon[77282]: 8.15 scrub starts
Jan 31 07:11:46 compute-2 ceph-mon[77282]: 8.15 scrub ok
Jan 31 07:11:46 compute-2 ceph-mon[77282]: pgmap v316: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:46 compute-2 sudo[95575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhwukfowldkwvyxglpxoqimmceqksqrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843506.5929854-70-198405080172702/AnsiballZ_getent.py'
Jan 31 07:11:46 compute-2 sudo[95575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:47 compute-2 python3.9[95577]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 31 07:11:47 compute-2 sudo[95575]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:47 compute-2 ceph-mon[77282]: 11.5 scrub starts
Jan 31 07:11:47 compute-2 ceph-mon[77282]: 11.5 scrub ok
Jan 31 07:11:47 compute-2 sudo[95729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lserkpmzzdckrgiadfmhxouvdbsizmhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843507.5680351-106-220505096091991/AnsiballZ_setup.py'
Jan 31 07:11:47 compute-2 sudo[95729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:48.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:48 compute-2 python3.9[95731]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:11:48 compute-2 sudo[95729]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:48.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:48 compute-2 sudo[95813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exdkyvjqyjijhdvpzlqughhvglbkpjic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843507.5680351-106-220505096091991/AnsiballZ_dnf.py'
Jan 31 07:11:48 compute-2 sudo[95813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:48 compute-2 ceph-mon[77282]: pgmap v317: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:48 compute-2 python3.9[95815]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 07:11:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:50.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:50 compute-2 sudo[95813]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:50 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.c scrub starts
Jan 31 07:11:50 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.c scrub ok
Jan 31 07:11:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:50.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:50 compute-2 sudo[95967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iufzuqsnxonpeimcpubkouijjdnmppig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843510.5674796-148-264250963722437/AnsiballZ_dnf.py'
Jan 31 07:11:50 compute-2 sudo[95967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:50 compute-2 ceph-mon[77282]: 11.4 scrub starts
Jan 31 07:11:50 compute-2 ceph-mon[77282]: 11.4 scrub ok
Jan 31 07:11:50 compute-2 ceph-mon[77282]: pgmap v318: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:50 compute-2 python3.9[95969]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:11:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:51 compute-2 ceph-mon[77282]: 11.7 scrub starts
Jan 31 07:11:51 compute-2 ceph-mon[77282]: 11.7 scrub ok
Jan 31 07:11:51 compute-2 ceph-mon[77282]: 8.c scrub starts
Jan 31 07:11:51 compute-2 ceph-mon[77282]: 8.c scrub ok
Jan 31 07:11:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:52.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:52 compute-2 sudo[95967]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:52.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:52 compute-2 ceph-mon[77282]: pgmap v319: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:53 compute-2 sudo[96122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukgkukqrepmqtphmnrypuuozyaklodea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843512.6743202-172-205680069678423/AnsiballZ_systemd.py'
Jan 31 07:11:53 compute-2 sudo[96122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:53 compute-2 python3.9[96124]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 07:11:53 compute-2 sudo[96122]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:54.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:54 compute-2 python3.9[96277]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:11:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:54.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:54 compute-2 ceph-mon[77282]: pgmap v320: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:55 compute-2 sudo[96428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgzedswwsflspfdbowqqqistpzcfpals ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843514.6974456-226-155339415604021/AnsiballZ_sefcontext.py'
Jan 31 07:11:55 compute-2 sudo[96428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:55 compute-2 python3.9[96430]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 31 07:11:55 compute-2 sudo[96428]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:55 compute-2 ceph-mon[77282]: pgmap v321: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:56.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:56 compute-2 python3.9[96580]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:11:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:11:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:56.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:11:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:56 compute-2 sudo[96736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdaaokrldotioolfnuwqliakhwmdxfic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843516.780649-280-165634817313687/AnsiballZ_dnf.py'
Jan 31 07:11:56 compute-2 sudo[96736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:57 compute-2 python3.9[96739]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:11:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:58.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:58 compute-2 ceph-mon[77282]: 11.1 scrub starts
Jan 31 07:11:58 compute-2 ceph-mon[77282]: 11.1 scrub ok
Jan 31 07:11:58 compute-2 ceph-mon[77282]: pgmap v322: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:58 compute-2 sudo[96736]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:58 compute-2 sudo[96741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:58 compute-2 sudo[96741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:58 compute-2 sudo[96741]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:11:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:58.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:58 compute-2 sudo[96790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:58 compute-2 sudo[96790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:58 compute-2 sudo[96790]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:59 compute-2 sudo[96941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iapzotmmeoxzfbzpinrtskreqyvcfvpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843518.8186164-304-144214592458054/AnsiballZ_command.py'
Jan 31 07:11:59 compute-2 sudo[96941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:59 compute-2 python3.9[96943]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:11:59 compute-2 ceph-mon[77282]: 11.f scrub starts
Jan 31 07:11:59 compute-2 ceph-mon[77282]: 11.f scrub ok
Jan 31 07:11:59 compute-2 sudo[96941]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:00.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:00 compute-2 ceph-mon[77282]: 8.8 scrub starts
Jan 31 07:12:00 compute-2 ceph-mon[77282]: 8.8 scrub ok
Jan 31 07:12:00 compute-2 ceph-mon[77282]: pgmap v323: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:00.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:00 compute-2 sudo[97228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohltflwoibmjrkooikljsijutqkrtgkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843520.3182752-329-160157498603901/AnsiballZ_file.py'
Jan 31 07:12:00 compute-2 sudo[97228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:00 compute-2 python3.9[97230]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 31 07:12:00 compute-2 sudo[97228]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:01 compute-2 python3.9[97381]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:12:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:02.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:02 compute-2 sudo[97533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liodtdjgoqqzaybqokcyihqojfmcmshj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843521.9165664-376-192147593535040/AnsiballZ_dnf.py'
Jan 31 07:12:02 compute-2 sudo[97533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:02 compute-2 python3.9[97535]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:12:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:12:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:02.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:12:02 compute-2 ceph-mon[77282]: 8.17 scrub starts
Jan 31 07:12:02 compute-2 ceph-mon[77282]: 8.17 scrub ok
Jan 31 07:12:02 compute-2 ceph-mon[77282]: pgmap v324: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:03 compute-2 sudo[97533]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:04.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:04 compute-2 sudo[97687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iquknevksokofogbcmkufwqsldgyaasl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843524.0754492-403-274572357380952/AnsiballZ_dnf.py'
Jan 31 07:12:04 compute-2 sudo[97687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:04 compute-2 python3.9[97689]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:12:04 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Jan 31 07:12:04 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Jan 31 07:12:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:12:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:04.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:12:04 compute-2 ceph-mon[77282]: 11.14 scrub starts
Jan 31 07:12:04 compute-2 ceph-mon[77282]: 11.14 scrub ok
Jan 31 07:12:04 compute-2 ceph-mon[77282]: pgmap v325: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:05 compute-2 ceph-mon[77282]: 11.3 scrub starts
Jan 31 07:12:05 compute-2 ceph-mon[77282]: 11.3 scrub ok
Jan 31 07:12:05 compute-2 sudo[97687]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:06.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:06.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:06 compute-2 sudo[97841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faukaavvrlhzmgsjxxetqlywhzsbheye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843526.5516112-439-35010179583858/AnsiballZ_stat.py'
Jan 31 07:12:06 compute-2 sudo[97841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:06 compute-2 ceph-mon[77282]: pgmap v326: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:06 compute-2 python3.9[97843]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:12:06 compute-2 sudo[97841]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:07 compute-2 sudo[97996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnirheuvbxfnhustaddxyotiddvdezfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843527.1577787-464-227577888954038/AnsiballZ_slurp.py'
Jan 31 07:12:07 compute-2 sudo[97996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:07 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Jan 31 07:12:07 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Jan 31 07:12:07 compute-2 python3.9[97998]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 31 07:12:07 compute-2 sudo[97996]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:08.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:12:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:08.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:12:08 compute-2 ceph-mon[77282]: 8.14 scrub starts
Jan 31 07:12:08 compute-2 ceph-mon[77282]: 8.14 scrub ok
Jan 31 07:12:08 compute-2 ceph-mon[77282]: pgmap v327: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:08 compute-2 ceph-mon[77282]: 11.17 scrub starts
Jan 31 07:12:08 compute-2 ceph-mon[77282]: 11.17 scrub ok
Jan 31 07:12:08 compute-2 sshd-session[95270]: Connection closed by 192.168.122.30 port 52484
Jan 31 07:12:08 compute-2 sshd-session[95267]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:12:08 compute-2 systemd[1]: session-35.scope: Deactivated successfully.
Jan 31 07:12:08 compute-2 systemd[1]: session-35.scope: Consumed 15.751s CPU time.
Jan 31 07:12:08 compute-2 systemd-logind[801]: Session 35 logged out. Waiting for processes to exit.
Jan 31 07:12:08 compute-2 systemd-logind[801]: Removed session 35.
Jan 31 07:12:09 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Jan 31 07:12:09 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Jan 31 07:12:09 compute-2 ceph-mon[77282]: 6.2 scrub starts
Jan 31 07:12:09 compute-2 ceph-mon[77282]: 6.2 scrub ok
Jan 31 07:12:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:10.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:10 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Jan 31 07:12:10 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Jan 31 07:12:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:10.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:10 compute-2 ceph-mon[77282]: 6.a scrub starts
Jan 31 07:12:10 compute-2 ceph-mon[77282]: 6.a scrub ok
Jan 31 07:12:10 compute-2 ceph-mon[77282]: pgmap v328: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:10 compute-2 ceph-mon[77282]: 8.1c scrub starts
Jan 31 07:12:10 compute-2 ceph-mon[77282]: 8.1c scrub ok
Jan 31 07:12:11 compute-2 ceph-mon[77282]: 10.1e scrub starts
Jan 31 07:12:11 compute-2 ceph-mon[77282]: 10.1e scrub ok
Jan 31 07:12:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:12.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:12.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:12 compute-2 ceph-mon[77282]: pgmap v329: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:13 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 9.b scrub starts
Jan 31 07:12:13 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 9.b scrub ok
Jan 31 07:12:13 compute-2 sshd-session[98026]: Accepted publickey for zuul from 192.168.122.30 port 43768 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:12:13 compute-2 systemd-logind[801]: New session 36 of user zuul.
Jan 31 07:12:13 compute-2 systemd[1]: Started Session 36 of User zuul.
Jan 31 07:12:13 compute-2 sshd-session[98026]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:12:13 compute-2 ceph-mon[77282]: 6.3 scrub starts
Jan 31 07:12:13 compute-2 ceph-mon[77282]: 6.3 scrub ok
Jan 31 07:12:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:14.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:14 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Jan 31 07:12:14 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Jan 31 07:12:14 compute-2 python3.9[98179]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:12:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:14.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:14 compute-2 ceph-mon[77282]: pgmap v330: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:14 compute-2 ceph-mon[77282]: 9.b scrub starts
Jan 31 07:12:14 compute-2 ceph-mon[77282]: 9.b scrub ok
Jan 31 07:12:15 compute-2 python3.9[98334]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:12:15 compute-2 ceph-mon[77282]: 9.7 scrub starts
Jan 31 07:12:15 compute-2 ceph-mon[77282]: 9.7 scrub ok
Jan 31 07:12:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:16.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:16 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Jan 31 07:12:16 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Jan 31 07:12:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:12:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:16.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:12:16 compute-2 python3.9[98527]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:12:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:16 compute-2 ceph-mon[77282]: 6.7 scrub starts
Jan 31 07:12:16 compute-2 ceph-mon[77282]: 6.7 scrub ok
Jan 31 07:12:16 compute-2 ceph-mon[77282]: pgmap v331: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:17 compute-2 sshd-session[98029]: Connection closed by 192.168.122.30 port 43768
Jan 31 07:12:17 compute-2 sshd-session[98026]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:12:17 compute-2 systemd[1]: session-36.scope: Deactivated successfully.
Jan 31 07:12:17 compute-2 systemd[1]: session-36.scope: Consumed 1.856s CPU time.
Jan 31 07:12:17 compute-2 systemd-logind[801]: Session 36 logged out. Waiting for processes to exit.
Jan 31 07:12:17 compute-2 systemd-logind[801]: Removed session 36.
Jan 31 07:12:17 compute-2 ceph-mon[77282]: 9.13 scrub starts
Jan 31 07:12:17 compute-2 ceph-mon[77282]: 9.13 scrub ok
Jan 31 07:12:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:18.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:18 compute-2 sudo[98555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:18 compute-2 sudo[98555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:18 compute-2 sudo[98555]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:18 compute-2 sudo[98580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:12:18 compute-2 sudo[98580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:18 compute-2 sudo[98580]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:18 compute-2 sudo[98605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:18 compute-2 sudo[98605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:18 compute-2 sudo[98605]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:18 compute-2 sudo[98630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 07:12:18 compute-2 sudo[98630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:18.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:18 compute-2 sudo[98731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:18 compute-2 sudo[98731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:18 compute-2 sudo[98731]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:18 compute-2 podman[98724]: 2026-01-31 07:12:18.6882412 +0000 UTC m=+0.052872904 container exec 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 07:12:18 compute-2 sudo[98770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:18 compute-2 sudo[98770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:18 compute-2 sudo[98770]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:18 compute-2 podman[98724]: 2026-01-31 07:12:18.781407718 +0000 UTC m=+0.146039402 container exec_died 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:12:18 compute-2 ceph-mon[77282]: 6.5 scrub starts
Jan 31 07:12:18 compute-2 ceph-mon[77282]: 6.5 scrub ok
Jan 31 07:12:18 compute-2 ceph-mon[77282]: pgmap v332: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:19 compute-2 podman[98929]: 2026-01-31 07:12:19.235623884 +0000 UTC m=+0.042990232 container exec f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 07:12:19 compute-2 podman[98929]: 2026-01-31 07:12:19.249583807 +0000 UTC m=+0.056950155 container exec_died f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 07:12:19 compute-2 podman[98993]: 2026-01-31 07:12:19.402377654 +0000 UTC m=+0.041446590 container exec 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, version=2.2.4, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793)
Jan 31 07:12:19 compute-2 podman[98993]: 2026-01-31 07:12:19.417435667 +0000 UTC m=+0.056504573 container exec_died 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, name=keepalived, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, vcs-type=git, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 31 07:12:19 compute-2 sudo[98630]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:19 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Jan 31 07:12:19 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Jan 31 07:12:19 compute-2 sudo[99027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:19 compute-2 sudo[99027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:19 compute-2 sudo[99027]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:19 compute-2 sudo[99052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:12:19 compute-2 sudo[99052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:19 compute-2 sudo[99052]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:19 compute-2 sudo[99077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:19 compute-2 sudo[99077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:19 compute-2 sudo[99077]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:19 compute-2 sudo[99102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:12:19 compute-2 sudo[99102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:19 compute-2 ceph-mon[77282]: 6.d scrub starts
Jan 31 07:12:19 compute-2 ceph-mon[77282]: 6.d scrub ok
Jan 31 07:12:19 compute-2 ceph-mon[77282]: pgmap v333: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:19 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:12:19 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:12:19 compute-2 sudo[99102]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:20.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:20.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:20 compute-2 ceph-mon[77282]: 9.17 scrub starts
Jan 31 07:12:20 compute-2 ceph-mon[77282]: 9.17 scrub ok
Jan 31 07:12:20 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:12:20 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:12:20 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:12:20 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:12:20 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:12:20 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:12:21 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Jan 31 07:12:21 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Jan 31 07:12:21 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:21 compute-2 ceph-mon[77282]: pgmap v334: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:12:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:22.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:12:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:22.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:22 compute-2 ceph-mon[77282]: 9.3 scrub starts
Jan 31 07:12:22 compute-2 ceph-mon[77282]: 9.3 scrub ok
Jan 31 07:12:23 compute-2 sshd-session[99159]: Accepted publickey for zuul from 192.168.122.30 port 38868 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:12:23 compute-2 systemd-logind[801]: New session 37 of user zuul.
Jan 31 07:12:23 compute-2 systemd[1]: Started Session 37 of User zuul.
Jan 31 07:12:23 compute-2 sshd-session[99159]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:12:23 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Jan 31 07:12:23 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Jan 31 07:12:23 compute-2 ceph-mon[77282]: 9.6 scrub starts
Jan 31 07:12:23 compute-2 ceph-mon[77282]: 9.6 scrub ok
Jan 31 07:12:23 compute-2 ceph-mon[77282]: pgmap v335: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:24.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:24 compute-2 python3.9[99312]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:12:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:24.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:24 compute-2 python3.9[99466]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:12:24 compute-2 ceph-mon[77282]: 9.e scrub starts
Jan 31 07:12:24 compute-2 ceph-mon[77282]: 9.e scrub ok
Jan 31 07:12:24 compute-2 ceph-mon[77282]: 9.5 scrub starts
Jan 31 07:12:24 compute-2 ceph-mon[77282]: 9.5 scrub ok
Jan 31 07:12:25 compute-2 sudo[99571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:25 compute-2 sudo[99571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:25 compute-2 sudo[99571]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:25 compute-2 sudo[99620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:12:25 compute-2 sudo[99620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:25 compute-2 sudo[99620]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:25 compute-2 sudo[99669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqdgvgerusgbfquvnfeqdxcqxsgqmirz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843545.4877896-82-84841613769593/AnsiballZ_setup.py'
Jan 31 07:12:25 compute-2 sudo[99669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:26 compute-2 python3.9[99673]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:12:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:26.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:26 compute-2 sudo[99669]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:26 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:12:26 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:12:26 compute-2 ceph-mon[77282]: pgmap v336: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:26 compute-2 sudo[99755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwsgsdvpwdgthjacmioverbenlvlosrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843545.4877896-82-84841613769593/AnsiballZ_dnf.py'
Jan 31 07:12:26 compute-2 sudo[99755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:12:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:26.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:12:26 compute-2 python3.9[99757]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:12:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:28.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:28 compute-2 sudo[99755]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:28 compute-2 ceph-mon[77282]: pgmap v337: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:28 compute-2 sudo[99909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aatsfwegocahzdoznujtvvctodevzfzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843548.4211898-118-95161029507058/AnsiballZ_setup.py'
Jan 31 07:12:28 compute-2 sudo[99909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:28.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:28 compute-2 python3.9[99911]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:12:29 compute-2 sudo[99909]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:29 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Jan 31 07:12:29 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Jan 31 07:12:29 compute-2 sudo[100105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmsxjfyxryidykbxthxrtydpjrdozzdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843549.5861378-151-49605961803659/AnsiballZ_file.py'
Jan 31 07:12:29 compute-2 sudo[100105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:12:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:30.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:12:30 compute-2 python3.9[100107]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:12:30 compute-2 sudo[100105]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:30 compute-2 ceph-mon[77282]: 6.8 scrub starts
Jan 31 07:12:30 compute-2 ceph-mon[77282]: 6.8 scrub ok
Jan 31 07:12:30 compute-2 ceph-mon[77282]: pgmap v338: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:30 compute-2 ceph-mon[77282]: 9.18 scrub starts
Jan 31 07:12:30 compute-2 ceph-mon[77282]: 9.18 scrub ok
Jan 31 07:12:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:30.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:30 compute-2 sudo[100257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xttqfmqzbjasonmqhrsyylypzxmebaja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843550.3875399-176-112174144527037/AnsiballZ_command.py'
Jan 31 07:12:30 compute-2 sudo[100257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:30 compute-2 python3.9[100259]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:12:30 compute-2 sudo[100257]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:31 compute-2 sudo[100423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzlbbqbpfhkbmylrawyxjmcxdgwipgxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843551.3474736-201-225840829846062/AnsiballZ_stat.py'
Jan 31 07:12:31 compute-2 sudo[100423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:31 compute-2 python3.9[100425]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:12:31 compute-2 sudo[100423]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:32 compute-2 sudo[100501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwfgxjbacqkjdcfortdypinlorgmepfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843551.3474736-201-225840829846062/AnsiballZ_file.py'
Jan 31 07:12:32 compute-2 sudo[100501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:32.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:32 compute-2 python3.9[100503]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:12:32 compute-2 sudo[100501]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:32 compute-2 ceph-mon[77282]: 9.a scrub starts
Jan 31 07:12:32 compute-2 ceph-mon[77282]: pgmap v339: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:32 compute-2 ceph-mon[77282]: 9.a scrub ok
Jan 31 07:12:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:32.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:32 compute-2 sudo[100653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nncnujtcmoesbsnagfimvefgyttkmcgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843552.5012934-235-30829537858677/AnsiballZ_stat.py'
Jan 31 07:12:32 compute-2 sudo[100653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:32 compute-2 python3.9[100655]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:12:32 compute-2 sudo[100653]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:33 compute-2 sudo[100732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-burviotchmwxpbgajfcfdtdnppaljtzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843552.5012934-235-30829537858677/AnsiballZ_file.py'
Jan 31 07:12:33 compute-2 sudo[100732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:33 compute-2 python3.9[100734]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:12:33 compute-2 sudo[100732]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:33 compute-2 sudo[100884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fipsmvdvashirmiirxmxyopaocxogcjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843553.623893-274-33643781176142/AnsiballZ_ini_file.py'
Jan 31 07:12:33 compute-2 sudo[100884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:34.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:34 compute-2 python3.9[100886]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:12:34 compute-2 sudo[100884]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:34 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Jan 31 07:12:34 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Jan 31 07:12:34 compute-2 sudo[101036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glkvvuywwpnspcfqiddllrjgleladbxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843554.5721834-274-204378743224038/AnsiballZ_ini_file.py'
Jan 31 07:12:34 compute-2 sudo[101036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:34.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:34 compute-2 ceph-mon[77282]: pgmap v340: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:34 compute-2 python3.9[101038]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:12:34 compute-2 sudo[101036]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:35 compute-2 sudo[101189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rstelwxwewvebqcenvbyemxihkwlspsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843555.0921001-274-104934499367136/AnsiballZ_ini_file.py'
Jan 31 07:12:35 compute-2 sudo[101189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:35 compute-2 python3.9[101191]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:12:35 compute-2 sudo[101189]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:35 compute-2 ceph-mon[77282]: 9.8 scrub starts
Jan 31 07:12:35 compute-2 ceph-mon[77282]: 9.8 scrub ok
Jan 31 07:12:35 compute-2 ceph-mon[77282]: pgmap v341: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:35 compute-2 sudo[101341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvfhctxssgafagrhuvympykgsckcktou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843555.7553835-274-90156844869036/AnsiballZ_ini_file.py'
Jan 31 07:12:36 compute-2 sudo[101341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:36.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:36 compute-2 python3.9[101343]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:12:36 compute-2 sudo[101341]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:36.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:37 compute-2 ceph-mon[77282]: 6.e scrub starts
Jan 31 07:12:37 compute-2 ceph-mon[77282]: 6.e scrub ok
Jan 31 07:12:37 compute-2 sudo[101494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggoblblktibqlvntzeynmqwymiawxdbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843557.0382917-367-247862764462092/AnsiballZ_dnf.py'
Jan 31 07:12:37 compute-2 sudo[101494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:37 compute-2 python3.9[101496]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:12:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:12:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:38.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:12:38 compute-2 ceph-mon[77282]: pgmap v342: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:38 compute-2 sudo[101498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:38 compute-2 sudo[101498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:38 compute-2 sudo[101498]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:38 compute-2 sudo[101523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:38 compute-2 sudo[101523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:38 compute-2 sudo[101523]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:38.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:38 compute-2 sudo[101494]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:12:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:40.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:12:40 compute-2 ceph-mon[77282]: pgmap v343: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:40 compute-2 sudo[101699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmpwoctzmhlddtmtmnchadcirulkhdii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843560.5522099-400-27657730423235/AnsiballZ_setup.py'
Jan 31 07:12:40 compute-2 sudo[101699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:12:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:40.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:12:41 compute-2 python3.9[101701]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:12:41 compute-2 sudo[101699]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:41 compute-2 ceph-mon[77282]: 9.d scrub starts
Jan 31 07:12:41 compute-2 ceph-mon[77282]: 9.d scrub ok
Jan 31 07:12:41 compute-2 sudo[101854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiabydlocwiuuriaguegldwgkspxzupk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843561.2743137-424-120351881772/AnsiballZ_stat.py'
Jan 31 07:12:41 compute-2 sudo[101854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:41 compute-2 python3.9[101856]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:12:41 compute-2 sudo[101854]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:12:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:42.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:12:42 compute-2 sudo[102006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mihfugsgcxynulpvosdhuyyfiahahzpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843561.9629695-452-261191691568215/AnsiballZ_stat.py'
Jan 31 07:12:42 compute-2 sudo[102006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:42 compute-2 python3.9[102008]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:12:42 compute-2 sudo[102006]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:42 compute-2 ceph-mon[77282]: 9.f scrub starts
Jan 31 07:12:42 compute-2 ceph-mon[77282]: 9.f scrub ok
Jan 31 07:12:42 compute-2 ceph-mon[77282]: pgmap v344: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:42 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Jan 31 07:12:42 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Jan 31 07:12:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:42.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:42 compute-2 sudo[102158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwiagwkfzmnkaoocqlrjmudzmhyfyrzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843562.6960964-481-51985629860280/AnsiballZ_command.py'
Jan 31 07:12:42 compute-2 sudo[102158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:43 compute-2 python3.9[102160]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:12:43 compute-2 sudo[102158]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:43 compute-2 ceph-mon[77282]: 9.10 scrub starts
Jan 31 07:12:43 compute-2 ceph-mon[77282]: 9.10 scrub ok
Jan 31 07:12:43 compute-2 ceph-mon[77282]: 9.9 scrub starts
Jan 31 07:12:43 compute-2 ceph-mon[77282]: 9.9 scrub ok
Jan 31 07:12:43 compute-2 sudo[102312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvrzsqkumsfjbcfayfrbltbhekpkfwam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843563.4602118-512-246140705351453/AnsiballZ_service_facts.py'
Jan 31 07:12:43 compute-2 sudo[102312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:44 compute-2 python3.9[102314]: ansible-service_facts Invoked
Jan 31 07:12:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:12:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:44.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:12:44 compute-2 network[102331]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 07:12:44 compute-2 network[102332]: 'network-scripts' will be removed from distribution in near future.
Jan 31 07:12:44 compute-2 network[102333]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 07:12:44 compute-2 ceph-mon[77282]: pgmap v345: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:12:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:44.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:12:45 compute-2 ceph-mon[77282]: 9.11 scrub starts
Jan 31 07:12:45 compute-2 ceph-mon[77282]: 9.11 scrub ok
Jan 31 07:12:46 compute-2 sudo[102312]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:46.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:46 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Jan 31 07:12:46 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Jan 31 07:12:46 compute-2 ceph-mon[77282]: pgmap v346: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:12:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:46.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:12:47 compute-2 ceph-mon[77282]: 9.16 scrub starts
Jan 31 07:12:47 compute-2 ceph-mon[77282]: 9.16 scrub ok
Jan 31 07:12:47 compute-2 ceph-mon[77282]: 9.12 scrub starts
Jan 31 07:12:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:12:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:48.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:12:48 compute-2 sudo[102618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqwfiatvvsnllkunhrqimeganiskhoob ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769843568.2237997-557-18144021867945/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769843568.2237997-557-18144021867945/args'
Jan 31 07:12:48 compute-2 sudo[102618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:48 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Jan 31 07:12:48 compute-2 ceph-osd[79942]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Jan 31 07:12:48 compute-2 sudo[102618]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:48 compute-2 ceph-mon[77282]: 9.12 scrub ok
Jan 31 07:12:48 compute-2 ceph-mon[77282]: pgmap v347: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:48.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:49 compute-2 sudo[102786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmqbecbtzoyklvguxlqhusjbvchkfwas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843569.114768-590-277498292181642/AnsiballZ_dnf.py'
Jan 31 07:12:49 compute-2 sudo[102786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:49 compute-2 python3.9[102788]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:12:49 compute-2 ceph-mon[77282]: 9.1d scrub starts
Jan 31 07:12:49 compute-2 ceph-mon[77282]: 9.1d scrub ok
Jan 31 07:12:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:12:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:50.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:12:50 compute-2 ceph-mon[77282]: pgmap v348: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:50 compute-2 sudo[102786]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:50.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:51 compute-2 ceph-mon[77282]: 9.15 scrub starts
Jan 31 07:12:51 compute-2 ceph-mon[77282]: 9.15 scrub ok
Jan 31 07:12:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:52 compute-2 sudo[102940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bajzqpwvdfpjmkscmqexcshrtlvlmzqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843571.4989088-629-196498529976329/AnsiballZ_package_facts.py'
Jan 31 07:12:52 compute-2 sudo[102940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:12:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:52.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:12:52 compute-2 python3.9[102942]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 31 07:12:52 compute-2 sudo[102940]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:52 compute-2 ceph-mon[77282]: pgmap v349: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:52.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:53 compute-2 sudo[103093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chuecgfayxoxutdiqfaxiyjsanfvehur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843573.4279776-659-93824050818334/AnsiballZ_stat.py'
Jan 31 07:12:53 compute-2 sudo[103093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:53 compute-2 python3.9[103095]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:12:53 compute-2 sudo[103093]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:54 compute-2 sudo[103171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zldhexllmeiacsidstnvhieydomdomsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843573.4279776-659-93824050818334/AnsiballZ_file.py'
Jan 31 07:12:54 compute-2 sudo[103171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:12:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:54.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:12:54 compute-2 python3.9[103173]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:12:54 compute-2 sudo[103171]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:54 compute-2 ceph-mon[77282]: pgmap v350: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:54 compute-2 sudo[103323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbovnhpoafnmgslohkwykuaacjezblvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843574.639219-696-105868265111860/AnsiballZ_stat.py'
Jan 31 07:12:54 compute-2 sudo[103323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:54.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:55 compute-2 python3.9[103325]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:12:55 compute-2 sudo[103323]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:55 compute-2 sudo[103402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkzezgqwmodhoiopbtuxdorncozrvool ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843574.639219-696-105868265111860/AnsiballZ_file.py'
Jan 31 07:12:55 compute-2 sudo[103402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:55 compute-2 python3.9[103404]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:12:55 compute-2 sudo[103402]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:56.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:56 compute-2 ceph-mon[77282]: pgmap v351: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:56.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:57 compute-2 sudo[103555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzqtdocmzsagwbcblwztpkrfdxevexnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843576.6778138-751-119582294313848/AnsiballZ_lineinfile.py'
Jan 31 07:12:57 compute-2 sudo[103555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:57 compute-2 python3.9[103557]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:12:57 compute-2 sudo[103555]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:58.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:58 compute-2 sudo[103707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrdpuqwxvzsumeugleonacvodalqzwks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843578.4771378-794-99373363481335/AnsiballZ_setup.py'
Jan 31 07:12:58 compute-2 sudo[103707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:58 compute-2 ceph-mon[77282]: pgmap v352: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:58 compute-2 sudo[103710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:58 compute-2 sudo[103710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:58 compute-2 sudo[103710]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:12:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:58.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:58 compute-2 python3.9[103709]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:12:58 compute-2 sudo[103735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:58 compute-2 sudo[103735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:58 compute-2 sudo[103735]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:59 compute-2 sudo[103707]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:59 compute-2 sudo[103842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdqplqoyfrjbtxjdiknytcpclietszbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843578.4771378-794-99373363481335/AnsiballZ_systemd.py'
Jan 31 07:12:59 compute-2 sudo[103842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:00 compute-2 python3.9[103844]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:13:00 compute-2 sudo[103842]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:00.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:00 compute-2 ceph-mon[77282]: pgmap v353: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:00.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:01 compute-2 sshd-session[99162]: Connection closed by 192.168.122.30 port 38868
Jan 31 07:13:01 compute-2 sshd-session[99159]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:13:01 compute-2 systemd[1]: session-37.scope: Deactivated successfully.
Jan 31 07:13:01 compute-2 systemd[1]: session-37.scope: Consumed 19.950s CPU time.
Jan 31 07:13:01 compute-2 systemd-logind[801]: Session 37 logged out. Waiting for processes to exit.
Jan 31 07:13:01 compute-2 systemd-logind[801]: Removed session 37.
Jan 31 07:13:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:01 compute-2 ceph-mon[77282]: pgmap v354: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:02.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:02.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:04.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:04 compute-2 ceph-mon[77282]: pgmap v355: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:04.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:06.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:06 compute-2 ceph-mon[77282]: pgmap v356: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:06.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:07 compute-2 sshd-session[103874]: Invalid user sol from 92.118.39.56 port 58380
Jan 31 07:13:07 compute-2 sshd-session[103874]: Connection closed by invalid user sol 92.118.39.56 port 58380 [preauth]
Jan 31 07:13:07 compute-2 sshd-session[103878]: Accepted publickey for zuul from 192.168.122.30 port 47148 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:13:07 compute-2 systemd-logind[801]: New session 38 of user zuul.
Jan 31 07:13:07 compute-2 systemd[1]: Started Session 38 of User zuul.
Jan 31 07:13:07 compute-2 sshd-session[103878]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:13:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:08.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:08 compute-2 sudo[104031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akjglvrvbvgkjoaksofxervfbkositif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843587.894415-28-226153934552193/AnsiballZ_file.py'
Jan 31 07:13:08 compute-2 sudo[104031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:08 compute-2 python3.9[104033]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:08 compute-2 sudo[104031]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:08 compute-2 ceph-mon[77282]: pgmap v357: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:08.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:09 compute-2 sudo[104184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edwklppidvygcjvmsuoylzroqfmddbnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843588.7373865-65-62776390957172/AnsiballZ_stat.py'
Jan 31 07:13:09 compute-2 sudo[104184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:09 compute-2 python3.9[104186]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:09 compute-2 sudo[104184]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:09 compute-2 sudo[104262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvdnposdaylwlqslnmltmjkdofzneljq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843588.7373865-65-62776390957172/AnsiballZ_file.py'
Jan 31 07:13:09 compute-2 sudo[104262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:09 compute-2 python3.9[104264]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:09 compute-2 sudo[104262]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:10 compute-2 sshd-session[103881]: Connection closed by 192.168.122.30 port 47148
Jan 31 07:13:10 compute-2 sshd-session[103878]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:13:10 compute-2 systemd[1]: session-38.scope: Deactivated successfully.
Jan 31 07:13:10 compute-2 systemd[1]: session-38.scope: Consumed 1.306s CPU time.
Jan 31 07:13:10 compute-2 systemd-logind[801]: Session 38 logged out. Waiting for processes to exit.
Jan 31 07:13:10 compute-2 systemd-logind[801]: Removed session 38.
Jan 31 07:13:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:10.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:10 compute-2 ceph-mon[77282]: pgmap v358: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:10.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:12.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:12 compute-2 ceph-mon[77282]: pgmap v359: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:12.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:14.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:14 compute-2 ceph-mon[77282]: pgmap v360: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:14.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:15 compute-2 sshd-session[104293]: Accepted publickey for zuul from 192.168.122.30 port 47156 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:13:15 compute-2 systemd-logind[801]: New session 39 of user zuul.
Jan 31 07:13:15 compute-2 systemd[1]: Started Session 39 of User zuul.
Jan 31 07:13:15 compute-2 sshd-session[104293]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:13:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:16.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:16 compute-2 ceph-mon[77282]: pgmap v361: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:16 compute-2 python3.9[104446]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:13:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:16.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:17 compute-2 sudo[104601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqngiototldwudxqdjmalpyvtyleolnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843597.3220007-61-16091926660784/AnsiballZ_file.py'
Jan 31 07:13:17 compute-2 sudo[104601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:17 compute-2 python3.9[104603]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:17 compute-2 sudo[104601]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:18.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:18 compute-2 sudo[104776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwbocisqjsuhmswcvfjyaqpjuppgehho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843598.1232777-85-238241745841339/AnsiballZ_stat.py'
Jan 31 07:13:18 compute-2 sudo[104776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:18 compute-2 ceph-mon[77282]: pgmap v362: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:18 compute-2 python3.9[104778]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:18 compute-2 sudo[104776]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:18.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:18 compute-2 sudo[104854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmamaepoclojapiqavprsbgqmoqvxexg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843598.1232777-85-238241745841339/AnsiballZ_file.py'
Jan 31 07:13:18 compute-2 sudo[104854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:19 compute-2 sudo[104858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:19 compute-2 sudo[104858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:19 compute-2 sudo[104858]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:19 compute-2 sudo[104883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:19 compute-2 sudo[104883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:19 compute-2 sudo[104883]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:19 compute-2 python3.9[104857]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.trk45ig0 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:19 compute-2 sudo[104854]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:19 compute-2 sudo[105057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esofzoidksfkcgmwvawdowacsugrxorn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843599.7585418-146-22031099125044/AnsiballZ_stat.py'
Jan 31 07:13:19 compute-2 sudo[105057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:20.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:20 compute-2 python3.9[105059]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:20 compute-2 sudo[105057]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:20 compute-2 sudo[105135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrlsprrdnjnqfnexrmxhuhkqvwmgybqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843599.7585418-146-22031099125044/AnsiballZ_file.py'
Jan 31 07:13:20 compute-2 sudo[105135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:20 compute-2 python3.9[105137]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.k1_anp78 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:20 compute-2 sudo[105135]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:20 compute-2 ceph-mon[77282]: pgmap v363: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:20.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:21 compute-2 sudo[105288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-javlnndsaceysqidrikupdfclgotxabs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843600.9326339-185-168037358136282/AnsiballZ_file.py'
Jan 31 07:13:21 compute-2 sudo[105288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:21 compute-2 python3.9[105290]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:13:21 compute-2 sudo[105288]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:21 compute-2 sudo[105440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dukbiqusceiyrvfwldwgkkteakbprnje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843601.5254116-209-240375310193195/AnsiballZ_stat.py'
Jan 31 07:13:21 compute-2 sudo[105440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:21 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:21 compute-2 python3.9[105442]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:21 compute-2 sudo[105440]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:22 compute-2 sudo[105518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-penjagmkzmrjonfedwmpgulxmedapgel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843601.5254116-209-240375310193195/AnsiballZ_file.py'
Jan 31 07:13:22 compute-2 sudo[105518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:13:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:22.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:13:22 compute-2 python3.9[105520]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:13:22 compute-2 sudo[105518]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:22 compute-2 sudo[105670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bflxqejbxulhveabjhdaorxrvjuqjdlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843602.4280524-209-197326756655291/AnsiballZ_stat.py'
Jan 31 07:13:22 compute-2 sudo[105670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:22 compute-2 ceph-mon[77282]: pgmap v364: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:13:22.721154) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843602721287, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2556, "num_deletes": 251, "total_data_size": 5269679, "memory_usage": 5351040, "flush_reason": "Manual Compaction"}
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843602735999, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 3439665, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7304, "largest_seqno": 9855, "table_properties": {"data_size": 3430016, "index_size": 5631, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2949, "raw_key_size": 24839, "raw_average_key_size": 21, "raw_value_size": 3408362, "raw_average_value_size": 2918, "num_data_blocks": 252, "num_entries": 1168, "num_filter_entries": 1168, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843416, "oldest_key_time": 1769843416, "file_creation_time": 1769843602, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 14938 microseconds, and 7820 cpu microseconds.
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:13:22.736103) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 3439665 bytes OK
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:13:22.736126) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:13:22.738353) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:13:22.738421) EVENT_LOG_v1 {"time_micros": 1769843602738408, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:13:22.738449) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 5257956, prev total WAL file size 5257956, number of live WAL files 2.
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:13:22.739259) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(3359KB)], [15(7651KB)]
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843602739329, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 11275280, "oldest_snapshot_seqno": -1}
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 3845 keys, 9678335 bytes, temperature: kUnknown
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843602786987, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 9678335, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9646723, "index_size": 20878, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9669, "raw_key_size": 92752, "raw_average_key_size": 24, "raw_value_size": 9571523, "raw_average_value_size": 2489, "num_data_blocks": 913, "num_entries": 3845, "num_filter_entries": 3845, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769843602, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:13:22.787504) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 9678335 bytes
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:13:22.788980) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 235.0 rd, 201.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.5 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 4366, records dropped: 521 output_compression: NoCompression
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:13:22.789004) EVENT_LOG_v1 {"time_micros": 1769843602788992, "job": 6, "event": "compaction_finished", "compaction_time_micros": 47976, "compaction_time_cpu_micros": 20974, "output_level": 6, "num_output_files": 1, "total_output_size": 9678335, "num_input_records": 4366, "num_output_records": 3845, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843602789636, "job": 6, "event": "table_file_deletion", "file_number": 17}
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843602790337, "job": 6, "event": "table_file_deletion", "file_number": 15}
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:13:22.739179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:13:22.790455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:13:22.790461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:13:22.790463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:13:22.790464) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:13:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:13:22.790466) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:13:22 compute-2 python3.9[105672]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:22 compute-2 sudo[105670]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:22.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:23 compute-2 sudo[105749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfbmwknxlberpuygrfrwbtkclfzegajd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843602.4280524-209-197326756655291/AnsiballZ_file.py'
Jan 31 07:13:23 compute-2 sudo[105749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:23 compute-2 python3.9[105751]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:13:23 compute-2 sudo[105749]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:23 compute-2 sudo[105901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzdfkivcpyglpdjlwcehsbgjdiydrhdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843603.6233156-277-2734622696771/AnsiballZ_file.py'
Jan 31 07:13:23 compute-2 sudo[105901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:23 compute-2 python3.9[105903]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:23 compute-2 sudo[105901]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:24.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:24 compute-2 sudo[106053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynkakjooocbmecrpkjehwnndufuobbcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843604.206019-302-110811347402411/AnsiballZ_stat.py'
Jan 31 07:13:24 compute-2 sudo[106053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:24 compute-2 python3.9[106055]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:24 compute-2 sudo[106053]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:24 compute-2 ceph-mon[77282]: pgmap v365: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:24 compute-2 sudo[106131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkkdertgjpgysvayzxcjddickngaymak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843604.206019-302-110811347402411/AnsiballZ_file.py'
Jan 31 07:13:24 compute-2 sudo[106131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:24 compute-2 python3.9[106133]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:24.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:24 compute-2 sudo[106131]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:25 compute-2 sudo[106284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fadbvscaygnrpogrhycprgatebwkubdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843605.284918-338-219224958853770/AnsiballZ_stat.py'
Jan 31 07:13:25 compute-2 sudo[106284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:25 compute-2 python3.9[106286]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:25 compute-2 sudo[106284]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:25 compute-2 sudo[106309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:25 compute-2 sudo[106309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:25 compute-2 sudo[106309]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:25 compute-2 sudo[106337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:13:25 compute-2 sudo[106337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:25 compute-2 sudo[106337]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:25 compute-2 sudo[106386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:25 compute-2 sudo[106386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:25 compute-2 sudo[106386]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:25 compute-2 sudo[106437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izkxbcnfkvmjwknfaogumoqhdhssyvjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843605.284918-338-219224958853770/AnsiballZ_file.py'
Jan 31 07:13:25 compute-2 sudo[106437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:25 compute-2 sudo[106438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 07:13:25 compute-2 sudo[106438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:26 compute-2 python3.9[106452]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:26 compute-2 sudo[106437]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:26.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:26 compute-2 podman[106560]: 2026-01-31 07:13:26.271478 +0000 UTC m=+0.048847012 container exec 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 07:13:26 compute-2 podman[106560]: 2026-01-31 07:13:26.393277385 +0000 UTC m=+0.170646367 container exec_died 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 07:13:26 compute-2 ceph-mon[77282]: pgmap v366: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:26 compute-2 podman[106769]: 2026-01-31 07:13:26.864764146 +0000 UTC m=+0.049825820 container exec f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 07:13:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:26 compute-2 podman[106769]: 2026-01-31 07:13:26.882368779 +0000 UTC m=+0.067430453 container exec_died f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 07:13:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:26.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:27 compute-2 podman[106883]: 2026-01-31 07:13:27.046640211 +0000 UTC m=+0.049855570 container exec 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, architecture=x86_64, release=1793, vcs-type=git, io.openshift.tags=Ceph keepalived, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., description=keepalived for Ceph, distribution-scope=public, io.openshift.expose-services=)
Jan 31 07:13:27 compute-2 sudo[106923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxyzajgyewljwvczdfhxfkirkqfmxoiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843606.3703475-374-122221194759042/AnsiballZ_systemd.py'
Jan 31 07:13:27 compute-2 sudo[106923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:27 compute-2 podman[106883]: 2026-01-31 07:13:27.064395328 +0000 UTC m=+0.067610667 container exec_died 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, version=2.2.4, description=keepalived for Ceph, name=keepalived, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 31 07:13:27 compute-2 sudo[106438]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:27 compute-2 sudo[106945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:27 compute-2 sudo[106945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:27 compute-2 sudo[106945]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:27 compute-2 sudo[106970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:13:27 compute-2 sudo[106970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:27 compute-2 sudo[106970]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:27 compute-2 sudo[106995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:27 compute-2 sudo[106995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:27 compute-2 sudo[106995]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:27 compute-2 sudo[107020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:13:27 compute-2 sudo[107020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:27 compute-2 python3.9[106931]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:13:27 compute-2 systemd[1]: Reloading.
Jan 31 07:13:27 compute-2 systemd-rc-local-generator[107071]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:13:27 compute-2 systemd-sysv-generator[107076]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:13:27 compute-2 sudo[106923]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:27 compute-2 sudo[107020]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:28 compute-2 sudo[107264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbwriufjwtqxngkggotsxkbsfbcrtniu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843607.8416293-397-252107788524434/AnsiballZ_stat.py'
Jan 31 07:13:28 compute-2 sudo[107264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:28.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:13:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:13:28 compute-2 ceph-mon[77282]: pgmap v367: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:13:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:13:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:13:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:13:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:13:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:13:28 compute-2 python3.9[107266]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:28 compute-2 sudo[107264]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:28 compute-2 sudo[107342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txphqhvjuabynjsduyfnvblpsltddyjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843607.8416293-397-252107788524434/AnsiballZ_file.py'
Jan 31 07:13:28 compute-2 sudo[107342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:28 compute-2 python3.9[107344]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:28 compute-2 sudo[107342]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:28.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:29 compute-2 sudo[107495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daxeylbspldsdwhvmdlowjhyploxgaxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843608.9098527-434-118074742060343/AnsiballZ_stat.py'
Jan 31 07:13:29 compute-2 sudo[107495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:29 compute-2 python3.9[107497]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:29 compute-2 sudo[107495]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:29 compute-2 sudo[107573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzbvraavzsaqutvqfkoiuytavjvzqezo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843608.9098527-434-118074742060343/AnsiballZ_file.py'
Jan 31 07:13:29 compute-2 sudo[107573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:29 compute-2 python3.9[107575]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:29 compute-2 sudo[107573]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:30 compute-2 sudo[107725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntvirdwnvnjdcxtvhnkgkecjsrfwosxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843609.9242275-470-212505978183137/AnsiballZ_systemd.py'
Jan 31 07:13:30 compute-2 sudo[107725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:30.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:30 compute-2 python3.9[107727]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:13:30 compute-2 systemd[1]: Reloading.
Jan 31 07:13:30 compute-2 systemd-sysv-generator[107756]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:13:30 compute-2 systemd-rc-local-generator[107751]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:13:30 compute-2 ceph-mon[77282]: pgmap v368: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:30.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:31 compute-2 systemd[1]: Starting Create netns directory...
Jan 31 07:13:31 compute-2 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 07:13:31 compute-2 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 07:13:31 compute-2 systemd[1]: Finished Create netns directory.
Jan 31 07:13:31 compute-2 sudo[107725]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:32.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:32 compute-2 python3.9[107919]: ansible-ansible.builtin.service_facts Invoked
Jan 31 07:13:32 compute-2 network[107936]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 07:13:32 compute-2 network[107937]: 'network-scripts' will be removed from distribution in near future.
Jan 31 07:13:32 compute-2 network[107938]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 07:13:32 compute-2 ceph-mon[77282]: pgmap v369: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:32.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:33 compute-2 sudo[107991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:33 compute-2 sudo[107991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:33 compute-2 sudo[107991]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:33 compute-2 sudo[108020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:13:33 compute-2 sudo[108020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:33 compute-2 sudo[108020]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:34 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:13:34 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:13:34 compute-2 ceph-mon[77282]: pgmap v370: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:34.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:35.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:36.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:36 compute-2 ceph-mon[77282]: pgmap v371: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:37.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:37 compute-2 sudo[108251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmjzbikcerzdbgoyvjxvkciumoksfxyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843617.5504153-548-42682665431779/AnsiballZ_stat.py'
Jan 31 07:13:37 compute-2 sudo[108251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:37 compute-2 python3.9[108253]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:37 compute-2 sudo[108251]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:38 compute-2 sudo[108329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyzdnvtvodknnwczarlkpclzrzkirqfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843617.5504153-548-42682665431779/AnsiballZ_file.py'
Jan 31 07:13:38 compute-2 sudo[108329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:38.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:38 compute-2 python3.9[108331]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:38 compute-2 sudo[108329]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:38 compute-2 ceph-mon[77282]: pgmap v372: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:39 compute-2 sudo[108482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyvzqbpvbvewahbjfnhmdqzdwwykziyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843618.9037364-587-137100850437755/AnsiballZ_file.py'
Jan 31 07:13:39 compute-2 sudo[108482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:39 compute-2 sudo[108483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:39 compute-2 sudo[108483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:39 compute-2 sudo[108483]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:39 compute-2 sudo[108510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:39 compute-2 sudo[108510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:39 compute-2 sudo[108510]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:39 compute-2 python3.9[108496]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:39.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:39 compute-2 sudo[108482]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:39 compute-2 sudo[108684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqrdalrurfnbhxrudjjbxjzrgezhnocz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843619.5124283-611-114470516497556/AnsiballZ_stat.py'
Jan 31 07:13:39 compute-2 sudo[108684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:39 compute-2 python3.9[108686]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:39 compute-2 sudo[108684]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:40 compute-2 sudo[108762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yguucxcrgzrqykyajkllzutocazdjfyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843619.5124283-611-114470516497556/AnsiballZ_file.py'
Jan 31 07:13:40 compute-2 sudo[108762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:40.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:40 compute-2 python3.9[108764]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:40 compute-2 sudo[108762]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:40 compute-2 ceph-mon[77282]: pgmap v373: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:41 compute-2 sudo[108915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-narawxpirsgsfyrqtpgccmtbtjiqeuwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843620.8138444-656-150003154198265/AnsiballZ_timezone.py'
Jan 31 07:13:41 compute-2 sudo[108915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:41.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:41 compute-2 python3.9[108917]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 07:13:41 compute-2 systemd[1]: Starting Time & Date Service...
Jan 31 07:13:41 compute-2 systemd[1]: Started Time & Date Service.
Jan 31 07:13:41 compute-2 sudo[108915]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:42 compute-2 sudo[109071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxokcixcqadgoivdddlnnimgqklbhapx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843621.87197-683-222563900077181/AnsiballZ_file.py'
Jan 31 07:13:42 compute-2 sudo[109071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:42.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:42 compute-2 python3.9[109073]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:42 compute-2 sudo[109071]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:42 compute-2 sudo[109223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdgiqsbrqrktvfjmatvfrgjrakrsxjpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843622.660017-707-113582257063222/AnsiballZ_stat.py'
Jan 31 07:13:42 compute-2 sudo[109223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:42 compute-2 ceph-mon[77282]: pgmap v374: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:43 compute-2 python3.9[109225]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:43 compute-2 sudo[109223]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:43 compute-2 sudo[109302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iruiuuzargdiftrumgxzhgvipsszeewu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843622.660017-707-113582257063222/AnsiballZ_file.py'
Jan 31 07:13:43 compute-2 sudo[109302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:43.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:43 compute-2 python3.9[109304]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:43 compute-2 sudo[109302]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:44 compute-2 ceph-mon[77282]: pgmap v375: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:44.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:44 compute-2 sudo[109454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftpifatgvjftyzizrtipezzoimcurkgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843623.970693-744-7964349235451/AnsiballZ_stat.py'
Jan 31 07:13:44 compute-2 sudo[109454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:44 compute-2 python3.9[109456]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:44 compute-2 sudo[109454]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:44 compute-2 sudo[109532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-estzgvhomjkoqqtbadmcfvnabulbtzum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843623.970693-744-7964349235451/AnsiballZ_file.py'
Jan 31 07:13:44 compute-2 sudo[109532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:44 compute-2 python3.9[109534]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.yjnru8gm recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:44 compute-2 sudo[109532]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:45.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:45 compute-2 sudo[109685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzbciqrczdwppesjiecvethaxtxzmzwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843625.1488063-779-272175697103258/AnsiballZ_stat.py'
Jan 31 07:13:45 compute-2 sudo[109685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:45 compute-2 python3.9[109687]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:45 compute-2 sudo[109685]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:45 compute-2 sudo[109763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnltqnsihjlnbrwmidqcuptcoigmzgea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843625.1488063-779-272175697103258/AnsiballZ_file.py'
Jan 31 07:13:45 compute-2 sudo[109763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:45 compute-2 python3.9[109765]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:45 compute-2 sudo[109763]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:13:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:46.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:13:46 compute-2 ceph-mon[77282]: pgmap v376: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:46 compute-2 sudo[109915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsrxeqrplygsfwwlmghkziragqksnmqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843626.2992444-818-135863264173632/AnsiballZ_command.py'
Jan 31 07:13:46 compute-2 sudo[109915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:46 compute-2 python3.9[109917]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:13:46 compute-2 sudo[109915]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:47.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:47 compute-2 sudo[110069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzpshwsltpcqbqfrpkvkwjbwobxqvhte ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769843627.1590369-842-254183070360364/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 07:13:47 compute-2 sudo[110069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:47 compute-2 python3[110071]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 07:13:47 compute-2 sudo[110069]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:13:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:48.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:13:48 compute-2 sudo[110221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjcmphserklgtwfmwikxcyogremajkzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843628.0685613-866-185149059861508/AnsiballZ_stat.py'
Jan 31 07:13:48 compute-2 sudo[110221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:48 compute-2 python3.9[110223]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:48 compute-2 sudo[110221]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:48 compute-2 ceph-mon[77282]: pgmap v377: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:48 compute-2 sudo[110299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqfpuanjnapjwoztckmmcnbtvrtgkybr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843628.0685613-866-185149059861508/AnsiballZ_file.py'
Jan 31 07:13:48 compute-2 sudo[110299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:48 compute-2 python3.9[110301]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:48 compute-2 sudo[110299]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:49.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:49 compute-2 sudo[110452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzpppfdetueeutmlysrplyyfakowtmro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843629.201279-902-31575867956176/AnsiballZ_stat.py'
Jan 31 07:13:49 compute-2 sudo[110452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:49 compute-2 python3.9[110454]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:49 compute-2 sudo[110452]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:50 compute-2 sudo[110577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czssjpenjpgqvboekunguffeasjzheyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843629.201279-902-31575867956176/AnsiballZ_copy.py'
Jan 31 07:13:50 compute-2 sudo[110577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:50.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:50 compute-2 python3.9[110579]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843629.201279-902-31575867956176/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:50 compute-2 sudo[110577]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:50 compute-2 ceph-mon[77282]: pgmap v378: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:50 compute-2 sudo[110729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecihnpeafyluccazoiztvmtijpdkedxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843630.5633805-947-218221239507342/AnsiballZ_stat.py'
Jan 31 07:13:50 compute-2 sudo[110729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:51 compute-2 python3.9[110731]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:51 compute-2 sudo[110729]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:51 compute-2 sudo[110808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxzqnslpmephkbajqssqdvcshicknhvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843630.5633805-947-218221239507342/AnsiballZ_file.py'
Jan 31 07:13:51 compute-2 sudo[110808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:51.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:51 compute-2 python3.9[110810]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:51 compute-2 sudo[110808]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:51 compute-2 sudo[110960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nihuupnydudloukmmroxkwfsuzxoohih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843631.635592-983-213054682524743/AnsiballZ_stat.py'
Jan 31 07:13:51 compute-2 sudo[110960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:52 compute-2 python3.9[110962]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:52 compute-2 sudo[110960]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:52.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:52 compute-2 sudo[111038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmutcknqbmxhdvaleeohlsuebqqkvhbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843631.635592-983-213054682524743/AnsiballZ_file.py'
Jan 31 07:13:52 compute-2 sudo[111038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:52 compute-2 python3.9[111040]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:52 compute-2 sudo[111038]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:52 compute-2 ceph-mon[77282]: pgmap v379: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:53 compute-2 sudo[111191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppgnbavlqexejvpnauwixogrbflclgks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843632.815-1019-137095747886928/AnsiballZ_stat.py'
Jan 31 07:13:53 compute-2 sudo[111191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:53 compute-2 python3.9[111193]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:53 compute-2 sudo[111191]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:13:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:53.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:13:53 compute-2 sudo[111269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voafznaenwneuawvmnfkgtxouwlvxxnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843632.815-1019-137095747886928/AnsiballZ_file.py'
Jan 31 07:13:53 compute-2 sudo[111269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:53 compute-2 python3.9[111271]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:53 compute-2 sudo[111269]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:54.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:54 compute-2 sudo[111421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hktfwztpipnbwvqjbamqfzjgeigtbvhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843634.0803425-1058-246729487815815/AnsiballZ_command.py'
Jan 31 07:13:54 compute-2 sudo[111421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:54 compute-2 python3.9[111423]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:13:54 compute-2 sudo[111421]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:54 compute-2 ceph-mon[77282]: pgmap v380: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:55 compute-2 sudo[111577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mulcuagintkydcomgqgsddjuqgoyyoxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843634.7623873-1082-273306781536090/AnsiballZ_blockinfile.py'
Jan 31 07:13:55 compute-2 sudo[111577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:55 compute-2 python3.9[111579]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:55 compute-2 sudo[111577]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:55.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:55 compute-2 sudo[111729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmzzsrucattogthrkrgrdhadpmwkyase ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843635.6572838-1108-8615877151273/AnsiballZ_file.py'
Jan 31 07:13:55 compute-2 sudo[111729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:56 compute-2 python3.9[111731]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:56 compute-2 sudo[111729]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:56.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:56 compute-2 sudo[111881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efwxiutielfmqheuxzahxghgmmlrlwsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843636.1961677-1108-177966714730516/AnsiballZ_file.py'
Jan 31 07:13:56 compute-2 sudo[111881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:56 compute-2 python3.9[111883]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:56 compute-2 sudo[111881]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:56 compute-2 ceph-mon[77282]: pgmap v381: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:57.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:57 compute-2 sudo[112034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwfakgsztegxiunwrwfvsbkhjjxepmzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843637.04417-1154-218850828044899/AnsiballZ_mount.py'
Jan 31 07:13:57 compute-2 sudo[112034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:57 compute-2 python3.9[112036]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 07:13:57 compute-2 sudo[112034]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:57 compute-2 sudo[112186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrbjtfvbnvxjwxnwupvinvmbxrxsgeoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843637.774438-1154-132930889970403/AnsiballZ_mount.py'
Jan 31 07:13:57 compute-2 sudo[112186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:58.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:58 compute-2 python3.9[112188]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 07:13:58 compute-2 sudo[112186]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:58 compute-2 ceph-mon[77282]: pgmap v382: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:58 compute-2 sshd-session[104296]: Connection closed by 192.168.122.30 port 47156
Jan 31 07:13:58 compute-2 sshd-session[104293]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:13:58 compute-2 systemd[1]: session-39.scope: Deactivated successfully.
Jan 31 07:13:58 compute-2 systemd[1]: session-39.scope: Consumed 22.467s CPU time.
Jan 31 07:13:58 compute-2 systemd-logind[801]: Session 39 logged out. Waiting for processes to exit.
Jan 31 07:13:58 compute-2 systemd-logind[801]: Removed session 39.
Jan 31 07:13:59 compute-2 sudo[112214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:59 compute-2 sudo[112214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:59 compute-2 sudo[112214]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:59 compute-2 sudo[112239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:59 compute-2 sudo[112239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:59 compute-2 sudo[112239]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:13:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:59.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:14:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:00.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:14:00 compute-2 ceph-mon[77282]: pgmap v383: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:14:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:01.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:14:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:02.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:02 compute-2 ceph-mon[77282]: pgmap v384: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:03.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:04 compute-2 sshd-session[112266]: Accepted publickey for zuul from 192.168.122.30 port 50656 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:14:04 compute-2 systemd-logind[801]: New session 40 of user zuul.
Jan 31 07:14:04 compute-2 systemd[1]: Started Session 40 of User zuul.
Jan 31 07:14:04 compute-2 sshd-session[112266]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:14:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:04.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:04 compute-2 sudo[112419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhbseuwpqcgyzzzokhdywgjxazaentyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843644.1271634-25-233106248551262/AnsiballZ_tempfile.py'
Jan 31 07:14:04 compute-2 sudo[112419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:04 compute-2 python3.9[112421]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 31 07:14:04 compute-2 sudo[112419]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:04 compute-2 ceph-mon[77282]: pgmap v385: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:05 compute-2 sudo[112572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmrbaroozsfmskdfyfypwfsolfdrdweu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843644.9381716-62-94212736399773/AnsiballZ_stat.py'
Jan 31 07:14:05 compute-2 sudo[112572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:14:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:05.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:14:05 compute-2 python3.9[112574]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:14:05 compute-2 sudo[112572]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:06 compute-2 sudo[112726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dekzptfuqqyfggzepvfmjhzikycidajf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843645.7229679-85-51021405729111/AnsiballZ_slurp.py'
Jan 31 07:14:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:06 compute-2 sudo[112726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:06.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:06 compute-2 python3.9[112728]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 31 07:14:06 compute-2 sudo[112726]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:06 compute-2 sudo[112878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdslgvhqsvwrlhwngatdfsmcsknvjntj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843646.5901299-109-159880137306945/AnsiballZ_stat.py'
Jan 31 07:14:06 compute-2 sudo[112878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:06 compute-2 ceph-mon[77282]: pgmap v386: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:06 compute-2 python3.9[112880]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.vrxf7sdm follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:14:06 compute-2 sudo[112878]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:07.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:07 compute-2 sudo[113004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eatihlqygqktsxfoqziemsjucxuyfytr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843646.5901299-109-159880137306945/AnsiballZ_copy.py'
Jan 31 07:14:07 compute-2 sudo[113004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:07 compute-2 python3.9[113006]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.vrxf7sdm mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843646.5901299-109-159880137306945/.source.vrxf7sdm _original_basename=.wl9c1muu follow=False checksum=a4502e4e8f59847dd2b7c5f9ecd52d55f7558ce1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:14:07 compute-2 sudo[113004]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:08 compute-2 ceph-mon[77282]: pgmap v387: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:08.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:08 compute-2 sudo[113156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uarsbdqhihlgscghxeqokokhesxaiczy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843647.9813778-154-105969538250996/AnsiballZ_setup.py'
Jan 31 07:14:08 compute-2 sudo[113156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:08 compute-2 python3.9[113158]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:14:08 compute-2 sudo[113156]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:09.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:09 compute-2 sudo[113309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnqeplnwzobkebjjyameuuufulmfrsxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843649.346031-179-191166126252416/AnsiballZ_blockinfile.py'
Jan 31 07:14:09 compute-2 sudo[113309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:09 compute-2 python3.9[113311]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwK9tbwI1sVhVFn3RGaEAgpi2689y9VdIyBp+cw+RWFupGnK46xr4HB/N67Aw+A+3FJtEl1Zq1cnt3Gy8PYb6XnLd4xH/NFtUI3ukhekrtKvSmysEjpRGIamjt1BkH4Lxh79PNkk13AVMQN92Wo271/fHEvcV7HaC0Q5VypZMd+77ZvI9NuEG1nofpvI8+32YECZBLpoC5KQK7EibqD9MUR2OmapGZhV+5B5jdb0ZvNb966Q0kwAGV8E+xgHSVnh5eCWC8oxgWkycmQd2co9E79fiIHEioABE9aDUGKw0+nsZ7HrvjG/ENeg5C6fjdJE4MsPq3FNHAiTCQPZ7QZgv/CSudt7WYyLTztGL9ksWqaTUeDocKVKPlJlzGrn/TXgMoix8+qbFzxVixIROb2nqElyEy6mo0Xxt2b4aisil9ZQhWVMQY0hGX5vtVv0E6+svzjSTfkyZolbjyRsolJF4pH7+klLEmlWGDlgSoCDZeK/XEi7xq3yaCymuWtX2fAX8=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICzJ5+1VSPloOqHhejNen2lHjfV4Hvj7nbRbNJjS6dtd
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBECc0+1u2G3haTNDUnwK7F3+bqZqLNjR6ayEsOJcH6U6RkqhSd2eAlivxlw9dfPuir2TFrYzGTtSXuJ8iauDAtQ=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDlvTGYGifalEmozttYlZ79wRHZPo6p3FfxUn+H8fCt//gLYJvHB9ygqCWO8F06xZhwaSJlU3R5k49AFtcq6rCaf4D9FuDYpYU5B1qGxpqY2S/6r/PmC9TmJJe6DJfuIf95os5YrDLR82BbT8dLFvu76PfZiMt0+kvm9gj1Q6XCUTgIsIvY9pyPySu0V4JDeT8EBgROR7WA5Fev80wO2/RlFXH9xVIupO8rswjwWPuIXoua1w44d35HWWHBdMAFXeZZMopWHWwY+fIlyz4B8y/TWDow7KZxG9GHKZ04e73/RA972Gub2LC0SlBFsBqaSnub8ooOcA3jZ3R2bjHAVkZvLgCK9UFSgwvvfyOWxtkJgj5KalAy9vZeGQ02ndAPNkQ6B1GnnRHaR5yGPG78q9Nd8RDmzhTr1iwnYLHhup04nAUnUDw5ubZFyF9bW1KQWvDv+4cfFeT8mhARMCxu7Imzne5FDq9OZAA9VLfnA26YFT0MpGjGl332cx20iz3Z4IU=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDBM/OyT9HQGjLM76vSXpTFer+lkr//u0v4BsUk+Rcai
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPttGgqMF5HnqNXeajmhgAAhQFj1yReXfFmUGT6cv24PcfDX+VeASpBgDGWJKvbu1EgrSPUu2R8sDzajVI5+ETk=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDir5Ux7IUuKTsqwrpZRFpieFX7Hi9Bsaw7N3jCiMd+vuHlEKHLX54HbyTIVnox1XbNjeYynLRRz7VKBfder8IEerGmST/uWuX5FOdve7vDdY++9J6qYkj1Gf6v6BGp8BT97bbPdvaQdLP6YS2jFEfOz4s0oJkgr8dsHjPU70e1P0b7vKxqo3z/E/XCe2BUGEv5j/z9GTl2oQ9/KoTvahfr6qfonnQK9E0gsJKDB9S1UPNFkJUxvVPfKfEao207dmT8EmQL2ZdwDwecA2Mg0SneGaNmEFWDW4CWQjdbHuikc3vsZ1do7kzq2+tz+WLEXqdb4Ig4S0OfV/MAcaC/C1DRfZHxZN3vSayrm99nFc8oPaLnRtT8Jz1dVonMOpwLm3xMm6nAeGNTzM0ImTrJTusVmKNRQI3x6VPiEcWdKNvN5sVcrN9uyINDMuzpXIxc1LmpmR/338EfP4HYhfsTqdM0worzzewvh2XhAVxQAiNYRRUbLvR4/EE5SjXTjSA4ID0=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ8oYZpZvdB1n917+wvTxetgtueloCox+7yBQBW8LHZX
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCEH62xmPSqzu7EFth8e8ITel7fLvoU9FKlxQN/eSXzUuR/7sZGPhcgLzjrJmEcn4Za0K2VNu6+z559d/AEJY2U=
                                              create=True mode=0644 path=/tmp/ansible.vrxf7sdm state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:14:09 compute-2 sudo[113309]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:10.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:10 compute-2 sudo[113461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiaoolfvgdpajbyndwvawrmgtptiaagw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843650.1404917-203-20492629595407/AnsiballZ_command.py'
Jan 31 07:14:10 compute-2 sudo[113461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:10 compute-2 ceph-mon[77282]: pgmap v388: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:10 compute-2 python3.9[113463]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.vrxf7sdm' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:14:10 compute-2 sudo[113461]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:11 compute-2 sudo[113616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpvgvtzyntwguzlogtrepwfsfxekagwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843650.9087427-227-275489677806919/AnsiballZ_file.py'
Jan 31 07:14:11 compute-2 sudo[113616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:11.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:11 compute-2 python3.9[113618]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.vrxf7sdm state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:14:11 compute-2 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 07:14:11 compute-2 sudo[113616]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:11 compute-2 sshd-session[112269]: Connection closed by 192.168.122.30 port 50656
Jan 31 07:14:11 compute-2 sshd-session[112266]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:14:11 compute-2 systemd[1]: session-40.scope: Deactivated successfully.
Jan 31 07:14:11 compute-2 systemd[1]: session-40.scope: Consumed 3.721s CPU time.
Jan 31 07:14:11 compute-2 systemd-logind[801]: Session 40 logged out. Waiting for processes to exit.
Jan 31 07:14:11 compute-2 systemd-logind[801]: Removed session 40.
Jan 31 07:14:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:14:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:12.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:14:12 compute-2 ceph-mon[77282]: pgmap v389: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:13.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:14.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:14 compute-2 ceph-mon[77282]: pgmap v390: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:14:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:15.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:14:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:16.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:17 compute-2 ceph-mon[77282]: pgmap v391: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:17.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:17 compute-2 sshd-session[113649]: Accepted publickey for zuul from 192.168.122.30 port 41118 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:14:17 compute-2 systemd-logind[801]: New session 41 of user zuul.
Jan 31 07:14:17 compute-2 systemd[1]: Started Session 41 of User zuul.
Jan 31 07:14:17 compute-2 sshd-session[113649]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:14:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:18.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:18 compute-2 ceph-mon[77282]: pgmap v392: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:19 compute-2 python3.9[113802]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:14:19 compute-2 sudo[113808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:19 compute-2 sudo[113808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:19 compute-2 sudo[113808]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:19 compute-2 sudo[113833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:19.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:19 compute-2 sudo[113833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:19 compute-2 sudo[113833]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:20.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:20 compute-2 sudo[114007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqglcrshasuhunxzovqrfrfzimhmtjjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843659.6795557-58-170389115535807/AnsiballZ_systemd.py'
Jan 31 07:14:20 compute-2 sudo[114007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:20 compute-2 python3.9[114009]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 07:14:21 compute-2 sudo[114007]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:21.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:21 compute-2 sudo[114162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wldsqtmptrqqoiovjjwvxnvksmvucvbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843661.549465-82-108791052374302/AnsiballZ_systemd.py'
Jan 31 07:14:21 compute-2 sudo[114162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:22.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:22 compute-2 python3.9[114164]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:14:22 compute-2 sudo[114162]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:23 compute-2 sudo[114316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpjtvenguveagjuzlcntmsyqdduhuung ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843662.6427774-110-69837343699438/AnsiballZ_command.py'
Jan 31 07:14:23 compute-2 sudo[114316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:23.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:23 compute-2 python3.9[114318]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:14:23 compute-2 sudo[114316]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:24 compute-2 ceph-mon[77282]: pgmap v393: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:24.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:24 compute-2 sudo[114469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcnlnrrftoprsinzrjuzkuetwennucye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843663.826922-133-149231650993687/AnsiballZ_stat.py'
Jan 31 07:14:24 compute-2 sudo[114469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:24 compute-2 python3.9[114471]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:14:24 compute-2 sudo[114469]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:25 compute-2 ceph-mon[77282]: pgmap v394: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:25 compute-2 ceph-mon[77282]: pgmap v395: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:25 compute-2 sudo[114622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhbhbcvgkgjdvjydumicktikfxdcpghk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843665.0000136-161-268897913894380/AnsiballZ_file.py'
Jan 31 07:14:25 compute-2 sudo[114622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:25.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:25 compute-2 python3.9[114624]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:14:25 compute-2 sudo[114622]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:26 compute-2 sshd-session[113652]: Connection closed by 192.168.122.30 port 41118
Jan 31 07:14:26 compute-2 sshd-session[113649]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:14:26 compute-2 systemd[1]: session-41.scope: Deactivated successfully.
Jan 31 07:14:26 compute-2 systemd[1]: session-41.scope: Consumed 3.045s CPU time.
Jan 31 07:14:26 compute-2 systemd-logind[801]: Session 41 logged out. Waiting for processes to exit.
Jan 31 07:14:26 compute-2 systemd-logind[801]: Removed session 41.
Jan 31 07:14:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:14:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:26.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:14:26 compute-2 ceph-mon[77282]: pgmap v396: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:14:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:27.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:14:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:28.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:28 compute-2 ceph-mon[77282]: pgmap v397: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:29.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:29 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Jan 31 07:14:29 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:14:29.934676) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:14:29 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Jan 31 07:14:29 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843669934794, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 844, "num_deletes": 255, "total_data_size": 1758913, "memory_usage": 1785112, "flush_reason": "Manual Compaction"}
Jan 31 07:14:29 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Jan 31 07:14:29 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843669957398, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 749363, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9860, "largest_seqno": 10699, "table_properties": {"data_size": 746005, "index_size": 1202, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8495, "raw_average_key_size": 19, "raw_value_size": 738934, "raw_average_value_size": 1726, "num_data_blocks": 53, "num_entries": 428, "num_filter_entries": 428, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843603, "oldest_key_time": 1769843603, "file_creation_time": 1769843669, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:14:29 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 22965 microseconds, and 2977 cpu microseconds.
Jan 31 07:14:29 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:14:29 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:14:29.957654) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 749363 bytes OK
Jan 31 07:14:29 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:14:29.957681) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Jan 31 07:14:29 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:14:29.959179) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Jan 31 07:14:29 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:14:29.959203) EVENT_LOG_v1 {"time_micros": 1769843669959188, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:14:29 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:14:29.959220) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:14:29 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1754574, prev total WAL file size 1754574, number of live WAL files 2.
Jan 31 07:14:29 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:14:29 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:14:29.959732) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323536' seq:0, type:0; will stop at (end)
Jan 31 07:14:29 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:14:29 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(731KB)], [18(9451KB)]
Jan 31 07:14:29 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843669959787, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 10427698, "oldest_snapshot_seqno": -1}
Jan 31 07:14:30 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 3775 keys, 7723246 bytes, temperature: kUnknown
Jan 31 07:14:30 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843670015636, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 7723246, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7695130, "index_size": 17574, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 91750, "raw_average_key_size": 24, "raw_value_size": 7624044, "raw_average_value_size": 2019, "num_data_blocks": 769, "num_entries": 3775, "num_filter_entries": 3775, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769843669, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:14:30 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:14:30 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:14:30.015858) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 7723246 bytes
Jan 31 07:14:30 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:14:30.020311) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 186.5 rd, 138.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.2 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(24.2) write-amplify(10.3) OK, records in: 4273, records dropped: 498 output_compression: NoCompression
Jan 31 07:14:30 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:14:30.020340) EVENT_LOG_v1 {"time_micros": 1769843670020327, "job": 8, "event": "compaction_finished", "compaction_time_micros": 55915, "compaction_time_cpu_micros": 18422, "output_level": 6, "num_output_files": 1, "total_output_size": 7723246, "num_input_records": 4273, "num_output_records": 3775, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:14:30 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:14:30 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843670020553, "job": 8, "event": "table_file_deletion", "file_number": 20}
Jan 31 07:14:30 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:14:30 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843670021528, "job": 8, "event": "table_file_deletion", "file_number": 18}
Jan 31 07:14:30 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:14:29.959665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:14:30 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:14:30.021622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:14:30 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:14:30.021629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:14:30 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:14:30.021631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:14:30 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:14:30.021633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:14:30 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:14:30.021635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:14:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:30.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:30 compute-2 ceph-mon[77282]: pgmap v398: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:31.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:31 compute-2 sshd-session[114652]: Accepted publickey for zuul from 192.168.122.30 port 44770 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:14:31 compute-2 systemd-logind[801]: New session 42 of user zuul.
Jan 31 07:14:31 compute-2 systemd[1]: Started Session 42 of User zuul.
Jan 31 07:14:31 compute-2 sshd-session[114652]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:14:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:14:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:32.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:14:32 compute-2 ceph-mon[77282]: pgmap v399: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:32 compute-2 python3.9[114805]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:14:33 compute-2 sudo[114960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckqqjxjvmguxpmebczxlsffahqscwmky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843672.962891-64-31925319804358/AnsiballZ_setup.py'
Jan 31 07:14:33 compute-2 sudo[114960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:33.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:33 compute-2 python3.9[114962]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:14:33 compute-2 sudo[114968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:33 compute-2 sudo[114968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:33 compute-2 sudo[114968]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:33 compute-2 sudo[114993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:14:33 compute-2 sudo[114993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:33 compute-2 sudo[114993]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:33 compute-2 sudo[115018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:33 compute-2 sudo[115018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:33 compute-2 sudo[115018]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:33 compute-2 sudo[115043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:14:33 compute-2 sudo[115043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:33 compute-2 sudo[114960]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:33 compute-2 sudo[115043]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:34 compute-2 sudo[115175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgnyzqpqvuypnwenrisgmpmyfatyncal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843672.962891-64-31925319804358/AnsiballZ_dnf.py'
Jan 31 07:14:34 compute-2 sudo[115175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:34.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:34 compute-2 python3.9[115177]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 07:14:34 compute-2 ceph-mon[77282]: pgmap v400: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:34 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:14:34 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:14:34 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:14:34 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:14:34 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:14:34 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:14:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:35.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:35 compute-2 sudo[115175]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:36.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:36 compute-2 python3.9[115329]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:14:36 compute-2 ceph-mon[77282]: pgmap v401: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:37.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:38 compute-2 python3.9[115481]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 07:14:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:38.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:38 compute-2 python3.9[115631]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:14:38 compute-2 ceph-mon[77282]: pgmap v402: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:39 compute-2 python3.9[115782]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:14:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:39.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:39 compute-2 sudo[115783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:39 compute-2 sudo[115783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:39 compute-2 sudo[115783]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:39 compute-2 sudo[115808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:39 compute-2 sudo[115808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:39 compute-2 sudo[115808]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:39 compute-2 sudo[115857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:39 compute-2 sudo[115857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:39 compute-2 sudo[115857]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:39 compute-2 sudo[115882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:14:39 compute-2 sudo[115882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:39 compute-2 sudo[115882]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:40 compute-2 sshd-session[114655]: Connection closed by 192.168.122.30 port 44770
Jan 31 07:14:40 compute-2 sshd-session[114652]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:14:40 compute-2 systemd[1]: session-42.scope: Deactivated successfully.
Jan 31 07:14:40 compute-2 systemd[1]: session-42.scope: Consumed 5.119s CPU time.
Jan 31 07:14:40 compute-2 systemd-logind[801]: Session 42 logged out. Waiting for processes to exit.
Jan 31 07:14:40 compute-2 systemd-logind[801]: Removed session 42.
Jan 31 07:14:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:40.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:40 compute-2 ceph-mon[77282]: pgmap v403: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:40 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:14:40 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:14:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:41.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:42.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:42 compute-2 ceph-mon[77282]: pgmap v404: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:43.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:44.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:44 compute-2 ceph-mon[77282]: pgmap v405: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:45 compute-2 sshd-session[115911]: Accepted publickey for zuul from 192.168.122.30 port 34654 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:14:45 compute-2 systemd-logind[801]: New session 43 of user zuul.
Jan 31 07:14:45 compute-2 systemd[1]: Started Session 43 of User zuul.
Jan 31 07:14:45 compute-2 sshd-session[115911]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:14:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:45.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:46.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:46 compute-2 python3.9[116064]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:14:46 compute-2 ceph-mon[77282]: pgmap v406: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:47.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:48 compute-2 sudo[116219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jegiiufxwmavencwvmkkblpesbayvydx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843687.6352746-113-224131892016542/AnsiballZ_file.py'
Jan 31 07:14:48 compute-2 sudo[116219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:48 compute-2 python3.9[116221]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:14:48 compute-2 sudo[116219]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:48.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:48 compute-2 ceph-mon[77282]: pgmap v407: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:48 compute-2 sudo[116371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avhdbduqhdnfczzrttyxcsyoywxjyhag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843688.363161-113-37128699164236/AnsiballZ_file.py'
Jan 31 07:14:48 compute-2 sudo[116371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:48 compute-2 python3.9[116373]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:14:48 compute-2 sudo[116371]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:49 compute-2 sudo[116524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djwerrximgkpqubywsbgpnywyrszzsrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843688.9773962-153-121684620971920/AnsiballZ_stat.py'
Jan 31 07:14:49 compute-2 sudo[116524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:49.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:49 compute-2 python3.9[116526]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:14:49 compute-2 sudo[116524]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:50 compute-2 sudo[116647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qctetzsncosfibvrdvbqblsgbfmjjdgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843688.9773962-153-121684620971920/AnsiballZ_copy.py'
Jan 31 07:14:50 compute-2 sudo[116647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:50.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:50 compute-2 python3.9[116649]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843688.9773962-153-121684620971920/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=9477bf0dbad38779954ebb8764316d3295eb82df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:14:50 compute-2 sudo[116647]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:50 compute-2 ceph-mon[77282]: pgmap v408: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:50 compute-2 sudo[116799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcdckzkjjepszysmpahfngyykiswqhbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843690.4230103-153-71636534070372/AnsiballZ_stat.py'
Jan 31 07:14:50 compute-2 sudo[116799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:50 compute-2 python3.9[116801]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:14:50 compute-2 sudo[116799]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:51 compute-2 sudo[116923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubdznqyjewszlrxqczakqfzlroxqautl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843690.4230103-153-71636534070372/AnsiballZ_copy.py'
Jan 31 07:14:51 compute-2 sudo[116923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:51 compute-2 python3.9[116925]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843690.4230103-153-71636534070372/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=4d19f6ebd16a505bd4f1bae6f0d06a9a74ad0f67 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:14:51 compute-2 sudo[116923]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:51.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:51 compute-2 sudo[117075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnxfermapdwiaduxxbyhxfciumqciiyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843691.4464488-153-73545262915117/AnsiballZ_stat.py'
Jan 31 07:14:51 compute-2 sudo[117075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:51 compute-2 python3.9[117077]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:14:51 compute-2 sudo[117075]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:52 compute-2 sudo[117198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jknwxvcpthvpyqfpuneempnqqogmpkad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843691.4464488-153-73545262915117/AnsiballZ_copy.py'
Jan 31 07:14:52 compute-2 sudo[117198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:52.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:52 compute-2 python3.9[117200]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843691.4464488-153-73545262915117/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=b1da45c0851052f7cac2d6a6fa7491dc7bfc654e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:14:52 compute-2 sudo[117198]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:52 compute-2 ceph-mon[77282]: pgmap v409: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:52 compute-2 sudo[117350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfiifbjgpytjipkeiwaqkgpvktsyxskp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843692.4947677-283-241367255289205/AnsiballZ_file.py'
Jan 31 07:14:52 compute-2 sudo[117350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:52 compute-2 python3.9[117352]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:14:52 compute-2 sudo[117350]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:53 compute-2 sudo[117503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiofbytwlvindvvcvdkkamnrncoengac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843693.026122-283-247907622518126/AnsiballZ_file.py'
Jan 31 07:14:53 compute-2 sudo[117503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:53 compute-2 python3.9[117505]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:14:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:53.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:53 compute-2 sudo[117503]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:53 compute-2 sudo[117655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhaqsmnzjjjvxjlvcuggbzxnfuvqsxwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843693.6006033-322-12198586842806/AnsiballZ_stat.py'
Jan 31 07:14:53 compute-2 sudo[117655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:53 compute-2 python3.9[117657]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:14:54 compute-2 sudo[117655]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:54.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:54 compute-2 sudo[117778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghvmslsacidrryekuzagxzogvgdbiknq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843693.6006033-322-12198586842806/AnsiballZ_copy.py'
Jan 31 07:14:54 compute-2 sudo[117778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:54 compute-2 python3.9[117780]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843693.6006033-322-12198586842806/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=5674f02ba3b8f309fb45c1c1e148baa98ef10459 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:14:54 compute-2 sudo[117778]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:54 compute-2 ceph-mon[77282]: pgmap v410: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:54 compute-2 sudo[117930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogyoezxcfbnafvgpjufrhdeqizvcsoxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843694.6450846-322-123198360217975/AnsiballZ_stat.py'
Jan 31 07:14:54 compute-2 sudo[117930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:55 compute-2 python3.9[117932]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:14:55 compute-2 sudo[117930]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:55 compute-2 sudo[118054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnmjqqmngknertgqilxcbvnescendjfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843694.6450846-322-123198360217975/AnsiballZ_copy.py'
Jan 31 07:14:55 compute-2 sudo[118054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:55.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:55 compute-2 python3.9[118056]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843694.6450846-322-123198360217975/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=005e44589b03f310b2e01f05c09d39b290e9f9f5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:14:55 compute-2 sudo[118054]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:56 compute-2 sudo[118206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yojllrfdpqdasdtehypklemqvchnplmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843695.7320993-322-4050509297195/AnsiballZ_stat.py'
Jan 31 07:14:56 compute-2 sudo[118206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:56.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:56 compute-2 python3.9[118208]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:14:56 compute-2 sudo[118206]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:56 compute-2 ceph-mon[77282]: pgmap v411: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:56 compute-2 sudo[118329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etvrnsfidktliuxojywlfksswrjmoajj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843695.7320993-322-4050509297195/AnsiballZ_copy.py'
Jan 31 07:14:56 compute-2 sudo[118329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:56 compute-2 python3.9[118331]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843695.7320993-322-4050509297195/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=fd048da0a75f5db912978a44dd37ec9ef8127e96 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:14:56 compute-2 sudo[118329]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:57 compute-2 sudo[118482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdcelazvbkchzgnkeehnvrfdyzgqhpjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843697.067479-446-146138113650135/AnsiballZ_file.py'
Jan 31 07:14:57 compute-2 sudo[118482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:57.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:57 compute-2 python3.9[118484]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:14:57 compute-2 sudo[118482]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:57 compute-2 sudo[118634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sarxjdvkxydbdrfzwfylytjzbhmcdksa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843697.6522346-446-104721875248293/AnsiballZ_file.py'
Jan 31 07:14:57 compute-2 sudo[118634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:58 compute-2 python3.9[118636]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:14:58 compute-2 sudo[118634]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:58.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:58 compute-2 sudo[118786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vztbgccoyjyzclheashqbeydexzzadfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843698.2207575-491-60917985724414/AnsiballZ_stat.py'
Jan 31 07:14:58 compute-2 sudo[118786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:58 compute-2 python3.9[118788]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:14:58 compute-2 sudo[118786]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:58 compute-2 ceph-mon[77282]: pgmap v412: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:58 compute-2 sudo[118909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obgawmntznhxoitgpfokxdilveeffoky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843698.2207575-491-60917985724414/AnsiballZ_copy.py'
Jan 31 07:14:58 compute-2 sudo[118909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:59 compute-2 python3.9[118911]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843698.2207575-491-60917985724414/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=4ef98072bb4f685b71cbb820266724201fdacdb2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:14:59 compute-2 sudo[118909]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:59 compute-2 sudo[119062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjezggpxwtcloyjsuytyjtowprdgaglf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843699.2193966-491-88490683779696/AnsiballZ_stat.py'
Jan 31 07:14:59 compute-2 sudo[119062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:14:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:59.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:59 compute-2 sudo[119065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:59 compute-2 sudo[119065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:59 compute-2 sudo[119065]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:59 compute-2 python3.9[119064]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:14:59 compute-2 sudo[119090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:59 compute-2 sudo[119062]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:59 compute-2 sudo[119090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:59 compute-2 sudo[119090]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:00 compute-2 sudo[119235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxwweitkvokttqxbhfznafpaflxzhped ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843699.2193966-491-88490683779696/AnsiballZ_copy.py'
Jan 31 07:15:00 compute-2 sudo[119235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:00.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:00 compute-2 python3.9[119237]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843699.2193966-491-88490683779696/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=005e44589b03f310b2e01f05c09d39b290e9f9f5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:00 compute-2 sudo[119235]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:00 compute-2 ceph-mon[77282]: pgmap v413: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:00 compute-2 sudo[119387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkeccszkljieqcrnjcsiqyzdykoufuor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843700.4889326-491-71155749060109/AnsiballZ_stat.py'
Jan 31 07:15:00 compute-2 sudo[119387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:00 compute-2 python3.9[119389]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:00 compute-2 sudo[119387]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:01 compute-2 sudo[119511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etnhkjcxyvdshdwfyckufvzldvuhwosa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843700.4889326-491-71155749060109/AnsiballZ_copy.py'
Jan 31 07:15:01 compute-2 sudo[119511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:01 compute-2 python3.9[119513]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843700.4889326-491-71155749060109/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=ae17ff0e612dad6b2c33607e6aced15614709a34 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:01.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:01 compute-2 sudo[119511]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:02.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:02 compute-2 sudo[119663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcnsyopotydavzmdvhdrszwmhqmoaboh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843702.363559-639-22100755462326/AnsiballZ_file.py'
Jan 31 07:15:02 compute-2 sudo[119663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:02 compute-2 ceph-mon[77282]: pgmap v414: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:02 compute-2 python3.9[119665]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:15:02 compute-2 sudo[119663]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:03 compute-2 sudo[119816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnuuisyxdnldrbjxwburcuvosqzffont ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843703.0079188-662-163077822351430/AnsiballZ_stat.py'
Jan 31 07:15:03 compute-2 sudo[119816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:15:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:03.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:15:03 compute-2 python3.9[119818]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:03 compute-2 sudo[119816]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:04 compute-2 sudo[119939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpdjbyhnililykharqmtyfgdudoyylbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843703.0079188-662-163077822351430/AnsiballZ_copy.py'
Jan 31 07:15:04 compute-2 sudo[119939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:15:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:04.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:15:04 compute-2 python3.9[119941]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843703.0079188-662-163077822351430/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=823ddfb9481e8da2761411a2055d0fb6b98e0ac2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:04 compute-2 sudo[119939]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:04 compute-2 sudo[120091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbxyoxtoznzhhbhxbnpclmuvdbqzsvzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843704.4491515-727-79229462747329/AnsiballZ_file.py'
Jan 31 07:15:04 compute-2 sudo[120091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:04 compute-2 ceph-mon[77282]: pgmap v415: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:04 compute-2 python3.9[120093]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:15:04 compute-2 sudo[120091]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:05 compute-2 sudo[120244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hijgcythfyuvgigfduhmzhxwmvxutevv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843705.0010834-749-170488828158029/AnsiballZ_stat.py'
Jan 31 07:15:05 compute-2 sudo[120244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:05 compute-2 python3.9[120246]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:05 compute-2 sudo[120244]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:05.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:05 compute-2 sudo[120367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgiotpbfcdtgbsmcrycaauweswnlqgek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843705.0010834-749-170488828158029/AnsiballZ_copy.py'
Jan 31 07:15:05 compute-2 sudo[120367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:05 compute-2 python3.9[120369]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843705.0010834-749-170488828158029/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=823ddfb9481e8da2761411a2055d0fb6b98e0ac2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:05 compute-2 sudo[120367]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:06.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:06 compute-2 sudo[120519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhtrhfduhbqfopxekhxumivxvrsedogb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843706.0819368-791-226257922982189/AnsiballZ_file.py'
Jan 31 07:15:06 compute-2 sudo[120519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:06 compute-2 python3.9[120521]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:15:06 compute-2 sudo[120519]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:06 compute-2 ceph-mon[77282]: pgmap v416: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:06 compute-2 sudo[120671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fszmnavztrnqmcrfstdhobcgvqpqicaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843706.6774085-814-198100025485458/AnsiballZ_stat.py'
Jan 31 07:15:06 compute-2 sudo[120671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:07 compute-2 python3.9[120673]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:07 compute-2 sudo[120671]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:07.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:07 compute-2 sudo[120795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qskbkwhcbytlchgfbvdqvqbgfjvszkug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843706.6774085-814-198100025485458/AnsiballZ_copy.py'
Jan 31 07:15:07 compute-2 sudo[120795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:07 compute-2 python3.9[120797]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843706.6774085-814-198100025485458/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=823ddfb9481e8da2761411a2055d0fb6b98e0ac2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:07 compute-2 sudo[120795]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:08 compute-2 sudo[120947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyvtlulykselvuilfnbqnbfmfbsgfdrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843707.9686377-857-244715024905759/AnsiballZ_file.py'
Jan 31 07:15:08 compute-2 sudo[120947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:08.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:08 compute-2 python3.9[120949]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:15:08 compute-2 sudo[120947]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:08 compute-2 ceph-mon[77282]: pgmap v417: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:08 compute-2 sudo[121099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfyqqjkxlembgroandkclaalbwoqvdmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843708.6400697-881-152037011532225/AnsiballZ_stat.py'
Jan 31 07:15:08 compute-2 sudo[121099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:09 compute-2 python3.9[121101]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:09 compute-2 sudo[121099]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:09 compute-2 sudo[121223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgkvwsiaqjrlcmakuisaszegrajdvwid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843708.6400697-881-152037011532225/AnsiballZ_copy.py'
Jan 31 07:15:09 compute-2 sudo[121223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:09.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:09 compute-2 python3.9[121225]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843708.6400697-881-152037011532225/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=823ddfb9481e8da2761411a2055d0fb6b98e0ac2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:09 compute-2 sudo[121223]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:10 compute-2 sudo[121375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsmovgfguhczcjhbzxaiavuwgekystgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843709.8133395-925-125826792331652/AnsiballZ_file.py'
Jan 31 07:15:10 compute-2 sudo[121375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:10 compute-2 python3.9[121377]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:15:10 compute-2 sudo[121375]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:10.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:10 compute-2 sudo[121527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfalbqebrzvotywggixyvuyyviaxuiug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843710.406634-951-44387068767662/AnsiballZ_stat.py'
Jan 31 07:15:10 compute-2 sudo[121527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:10 compute-2 ceph-mon[77282]: pgmap v418: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:10 compute-2 python3.9[121529]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:10 compute-2 sudo[121527]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:11 compute-2 sudo[121651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnkxvicqsdvminljzhdfokbgexpkuczt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843710.406634-951-44387068767662/AnsiballZ_copy.py'
Jan 31 07:15:11 compute-2 sudo[121651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:11 compute-2 python3.9[121653]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843710.406634-951-44387068767662/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=823ddfb9481e8da2761411a2055d0fb6b98e0ac2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:11 compute-2 sudo[121651]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:11.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:11 compute-2 sudo[121803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqxxcryqakagfwbdduqwqyeivuudzuiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843711.5554852-997-54192475611765/AnsiballZ_file.py'
Jan 31 07:15:11 compute-2 sudo[121803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:12 compute-2 python3.9[121805]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:15:12 compute-2 sudo[121803]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:12.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:12 compute-2 sudo[121955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvugdxptniidglxxdacrbmuurapwpdit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843712.2984645-1022-146292441590049/AnsiballZ_stat.py'
Jan 31 07:15:12 compute-2 sudo[121955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:12 compute-2 python3.9[121957]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:12 compute-2 sudo[121955]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:12 compute-2 ceph-mon[77282]: pgmap v419: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:12 compute-2 sudo[122078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezitspibtqxcfnhwbxlwdqtjxgezqkml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843712.2984645-1022-146292441590049/AnsiballZ_copy.py'
Jan 31 07:15:12 compute-2 sudo[122078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:13 compute-2 python3.9[122080]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843712.2984645-1022-146292441590049/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=823ddfb9481e8da2761411a2055d0fb6b98e0ac2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:13 compute-2 sudo[122078]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:13.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:14.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:14 compute-2 ceph-mon[77282]: pgmap v420: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:15.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:16.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:16 compute-2 sshd-session[115914]: Connection closed by 192.168.122.30 port 34654
Jan 31 07:15:16 compute-2 sshd-session[115911]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:15:16 compute-2 systemd[1]: session-43.scope: Deactivated successfully.
Jan 31 07:15:16 compute-2 systemd[1]: session-43.scope: Consumed 16.935s CPU time.
Jan 31 07:15:16 compute-2 systemd-logind[801]: Session 43 logged out. Waiting for processes to exit.
Jan 31 07:15:16 compute-2 systemd-logind[801]: Removed session 43.
Jan 31 07:15:16 compute-2 ceph-mon[77282]: pgmap v421: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:17.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:17 compute-2 ceph-mon[77282]: pgmap v422: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:18.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:19.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:19 compute-2 sshd-session[122109]: Invalid user solv from 92.118.39.56 port 44268
Jan 31 07:15:19 compute-2 sudo[122111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:15:19 compute-2 sudo[122111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:19 compute-2 sudo[122111]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:19 compute-2 sshd-session[122109]: Connection closed by invalid user solv 92.118.39.56 port 44268 [preauth]
Jan 31 07:15:19 compute-2 sudo[122136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:15:19 compute-2 sudo[122136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:19 compute-2 sudo[122136]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:20.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:20 compute-2 ceph-mon[77282]: pgmap v423: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:21.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:21 compute-2 sshd-session[122162]: Accepted publickey for zuul from 192.168.122.30 port 44296 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:15:21 compute-2 systemd-logind[801]: New session 44 of user zuul.
Jan 31 07:15:21 compute-2 systemd[1]: Started Session 44 of User zuul.
Jan 31 07:15:21 compute-2 sshd-session[122162]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:15:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:15:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:22.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:15:22 compute-2 sudo[122315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndgufosbpzbytwydtehyjugugyfmulxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843721.9634192-28-269144219709373/AnsiballZ_file.py'
Jan 31 07:15:22 compute-2 sudo[122315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:22 compute-2 python3.9[122317]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:22 compute-2 sudo[122315]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:22 compute-2 ceph-mon[77282]: pgmap v424: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:23 compute-2 sudo[122468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svpnyirxxfjpvowxnxdmoxlmubgvfyur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843722.8121607-64-50040407540358/AnsiballZ_stat.py'
Jan 31 07:15:23 compute-2 sudo[122468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:23 compute-2 python3.9[122470]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:23 compute-2 sudo[122468]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:15:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:23.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:15:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:23 compute-2 sudo[122591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksqzasclyguktsgphompumjhlalyjile ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843722.8121607-64-50040407540358/AnsiballZ_copy.py'
Jan 31 07:15:23 compute-2 sudo[122591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:24 compute-2 python3.9[122593]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843722.8121607-64-50040407540358/.source.conf _original_basename=ceph.conf follow=False checksum=d315a9cac0e1e65728b0668f9e154f01a66e4c1a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:24 compute-2 sudo[122591]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:24.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:24 compute-2 sudo[122743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdylsyrceiajjkppwbrpkwqjjuyfienz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843724.167883-64-137481769058658/AnsiballZ_stat.py'
Jan 31 07:15:24 compute-2 sudo[122743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:24 compute-2 python3.9[122745]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:24 compute-2 sudo[122743]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:24 compute-2 ceph-mon[77282]: pgmap v425: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:24 compute-2 sudo[122866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awhzboxrupaanzedobvzffbueezkxvpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843724.167883-64-137481769058658/AnsiballZ_copy.py'
Jan 31 07:15:24 compute-2 sudo[122866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:24 compute-2 python3.9[122868]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843724.167883-64-137481769058658/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=d07c30b1acab71467a05fb02d206fcd55de2512c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:24 compute-2 sudo[122866]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:25.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:25 compute-2 sshd-session[122165]: Connection closed by 192.168.122.30 port 44296
Jan 31 07:15:25 compute-2 sshd-session[122162]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:15:25 compute-2 systemd[1]: session-44.scope: Deactivated successfully.
Jan 31 07:15:25 compute-2 systemd[1]: session-44.scope: Consumed 2.058s CPU time.
Jan 31 07:15:25 compute-2 systemd-logind[801]: Session 44 logged out. Waiting for processes to exit.
Jan 31 07:15:25 compute-2 systemd-logind[801]: Removed session 44.
Jan 31 07:15:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:26.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:26 compute-2 ceph-mon[77282]: pgmap v426: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:27.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:28.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:28 compute-2 ceph-mon[77282]: pgmap v427: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:29.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:30.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:30 compute-2 ceph-mon[77282]: pgmap v428: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:30 compute-2 sshd-session[122896]: Accepted publickey for zuul from 192.168.122.30 port 53040 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:15:30 compute-2 systemd-logind[801]: New session 45 of user zuul.
Jan 31 07:15:30 compute-2 systemd[1]: Started Session 45 of User zuul.
Jan 31 07:15:30 compute-2 sshd-session[122896]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:15:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:31.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:31 compute-2 python3.9[123050]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:15:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:32.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:32 compute-2 sudo[123204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozfcllxcgpdcupwqzgowcqjpcdmqgzci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843732.4200842-64-274961933593734/AnsiballZ_file.py'
Jan 31 07:15:32 compute-2 sudo[123204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:32 compute-2 ceph-mon[77282]: pgmap v429: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:32 compute-2 python3.9[123206]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:15:32 compute-2 sudo[123204]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:33 compute-2 sudo[123357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjgnopymagmaujlwobrswfnyzwookxap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843733.0569582-64-66305527090031/AnsiballZ_file.py'
Jan 31 07:15:33 compute-2 sudo[123357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:33 compute-2 python3.9[123359]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:15:33 compute-2 sudo[123357]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:33.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:34 compute-2 python3.9[123509]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:15:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:34.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:34 compute-2 sudo[123659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvegqdvaszwfctmzxhhvsmldnzijdtgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843734.4414203-133-86319664710314/AnsiballZ_seboolean.py'
Jan 31 07:15:34 compute-2 sudo[123659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:34 compute-2 ceph-mon[77282]: pgmap v430: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:34 compute-2 python3.9[123661]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 31 07:15:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:35.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:35 compute-2 sudo[123659]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:15:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:36.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:15:36 compute-2 ceph-mon[77282]: pgmap v431: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:36 compute-2 sudo[123816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nufqttshnbjgoneoshjwwvcsjrottnxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843736.7588117-163-269030161868638/AnsiballZ_setup.py'
Jan 31 07:15:36 compute-2 dbus-broker-launch[793]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 31 07:15:36 compute-2 sudo[123816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:37 compute-2 python3.9[123818]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:15:37 compute-2 sudo[123816]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:37.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:37 compute-2 sudo[123901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jexxsvwztpfotzsrmkxqwhnquvhwmbem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843736.7588117-163-269030161868638/AnsiballZ_dnf.py'
Jan 31 07:15:37 compute-2 sudo[123901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:38 compute-2 python3.9[123903]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:15:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:38.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:38 compute-2 ceph-mon[77282]: pgmap v432: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:39 compute-2 sudo[123901]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:39.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:39 compute-2 sudo[123930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:15:39 compute-2 sudo[123930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:39 compute-2 sudo[123930]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:39 compute-2 sudo[123931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:15:39 compute-2 sudo[123931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:39 compute-2 sudo[123931]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:39 compute-2 sudo[123982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:15:39 compute-2 sudo[123982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:39 compute-2 sudo[123982]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:39 compute-2 sudo[124000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:15:39 compute-2 sudo[124000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:39 compute-2 sudo[124000]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:39 compute-2 sudo[124076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:15:39 compute-2 sudo[124076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:39 compute-2 sudo[124076]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:39 compute-2 sudo[124107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:15:39 compute-2 sudo[124107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:40.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:40 compute-2 sudo[124107]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:40 compute-2 sudo[124237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzogroruopcanqxdupmfhxfnyltnysnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843739.8484025-199-27824672760754/AnsiballZ_systemd.py'
Jan 31 07:15:40 compute-2 sudo[124237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:40 compute-2 python3.9[124239]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 07:15:40 compute-2 sudo[124237]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:40 compute-2 ceph-mon[77282]: pgmap v433: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:40 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:15:40 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:15:41 compute-2 sudo[124393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwzastdseuxlgqahvqpbwdfysitacizv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769843740.9418747-223-3295420805267/AnsiballZ_edpm_nftables_snippet.py'
Jan 31 07:15:41 compute-2 sudo[124393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:15:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:41.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:15:41 compute-2 python3[124395]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 31 07:15:41 compute-2 sudo[124393]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:41 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:15:41 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:15:41 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:15:41 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:15:41 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:15:41 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:15:42 compute-2 sudo[124545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlckyvjkywzhrkydbavthcksrmqrgomg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843741.8148468-250-22360771014562/AnsiballZ_file.py'
Jan 31 07:15:42 compute-2 sudo[124545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:42 compute-2 python3.9[124547]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:42 compute-2 sudo[124545]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:15:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:42.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:15:42 compute-2 sudo[124697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vglrkftnjanrmjncuvcwifjzusppfytg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843742.444055-274-165066624007942/AnsiballZ_stat.py'
Jan 31 07:15:42 compute-2 sudo[124697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:42 compute-2 python3.9[124699]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:42 compute-2 sudo[124697]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:43 compute-2 sudo[124776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihnljzyzwxxiptnwrwydtjilqlpjwgci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843742.444055-274-165066624007942/AnsiballZ_file.py'
Jan 31 07:15:43 compute-2 sudo[124776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:43 compute-2 ceph-mon[77282]: pgmap v434: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:43 compute-2 python3.9[124778]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:43 compute-2 sudo[124776]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:43.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:44 compute-2 sudo[124928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbfvwkeovsuoqnjwkrgpaabpmjiokylc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843743.8200583-310-218952882821047/AnsiballZ_stat.py'
Jan 31 07:15:44 compute-2 sudo[124928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:44 compute-2 python3.9[124930]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:44 compute-2 sudo[124928]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:44 compute-2 ceph-mon[77282]: pgmap v435: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:44.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:44 compute-2 sudo[125006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cstmjebjjecganlbvvlsfkpkvjutrldf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843743.8200583-310-218952882821047/AnsiballZ_file.py'
Jan 31 07:15:44 compute-2 sudo[125006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:44 compute-2 python3.9[125008]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.2h7tbekj recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:44 compute-2 sudo[125006]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:45 compute-2 sudo[125159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjbllkpsqccyqdaxotwqozisbwlaevjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843744.855945-346-211528970685760/AnsiballZ_stat.py'
Jan 31 07:15:45 compute-2 sudo[125159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:45 compute-2 python3.9[125161]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:45 compute-2 sudo[125159]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:45 compute-2 sudo[125237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvxzcktvynwchljqsglqlhxriayrhmam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843744.855945-346-211528970685760/AnsiballZ_file.py'
Jan 31 07:15:45 compute-2 sudo[125237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:45.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:45 compute-2 python3.9[125239]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:45 compute-2 sudo[125237]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:46.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:46 compute-2 sudo[125389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftnihvutbvstjjwhxqwxjoskzmrmjrdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843745.942963-385-267545251643816/AnsiballZ_command.py'
Jan 31 07:15:46 compute-2 sudo[125389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:46 compute-2 python3.9[125391]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:15:46 compute-2 sudo[125389]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:46 compute-2 ceph-mon[77282]: pgmap v436: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:47 compute-2 sudo[125543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjxldiurgegpbsdnwlaarybgqdoajuxl ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769843746.6907947-409-48334098789313/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 07:15:47 compute-2 sudo[125543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:47 compute-2 python3[125545]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 07:15:47 compute-2 sudo[125546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:15:47 compute-2 sudo[125546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:47 compute-2 sudo[125546]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:47 compute-2 sudo[125543]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:47 compute-2 sudo[125571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:15:47 compute-2 sudo[125571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:47 compute-2 sudo[125571]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:47.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:47 compute-2 sudo[125745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfdcgopcwnjcpeaatdxpejaftnywzvzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843747.502625-434-35175519457196/AnsiballZ_stat.py'
Jan 31 07:15:47 compute-2 sudo[125745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:47 compute-2 python3.9[125747]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:47 compute-2 sudo[125745]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:47 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:15:47 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:15:47 compute-2 ceph-mon[77282]: pgmap v437: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:48.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:48 compute-2 sudo[125870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vokxxkxokcnlwyrisbhtajhgwgqgnrfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843747.502625-434-35175519457196/AnsiballZ_copy.py'
Jan 31 07:15:48 compute-2 sudo[125870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:48 compute-2 python3.9[125872]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843747.502625-434-35175519457196/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:48 compute-2 sudo[125870]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:49 compute-2 sudo[126023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gipeuazvjhwvxguhhedvbvydvezpotbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843748.840222-478-215045040181638/AnsiballZ_stat.py'
Jan 31 07:15:49 compute-2 sudo[126023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:49 compute-2 python3.9[126025]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:49 compute-2 sudo[126023]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:49.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:49 compute-2 sudo[126148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trvmxwxjxwbvleffwdgvlyyazjuckpgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843748.840222-478-215045040181638/AnsiballZ_copy.py'
Jan 31 07:15:49 compute-2 sudo[126148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:49 compute-2 python3.9[126150]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843748.840222-478-215045040181638/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:49 compute-2 sudo[126148]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:50 compute-2 sudo[126300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tybopqknrazfttzdpjynjhuliunbcebt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843750.0511615-524-72296733828477/AnsiballZ_stat.py'
Jan 31 07:15:50 compute-2 sudo[126300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:50.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:50 compute-2 python3.9[126302]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:50 compute-2 sudo[126300]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:50 compute-2 ceph-mon[77282]: pgmap v438: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:50 compute-2 sudo[126425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvxbyvpbqwtcyvuvnffkjrhzreexlarb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843750.0511615-524-72296733828477/AnsiballZ_copy.py'
Jan 31 07:15:50 compute-2 sudo[126425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:50 compute-2 python3.9[126427]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843750.0511615-524-72296733828477/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:50 compute-2 sudo[126425]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:51 compute-2 sudo[126578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwjlvmtajsrikjqcwldjfnfabbevbsjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843751.269844-568-129905643776540/AnsiballZ_stat.py'
Jan 31 07:15:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:51.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:51 compute-2 sudo[126578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:51 compute-2 python3.9[126580]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:51 compute-2 sudo[126578]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:51 compute-2 sudo[126703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdeucpgaygoouufhtbgrcpppbdmdvsnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843751.269844-568-129905643776540/AnsiballZ_copy.py'
Jan 31 07:15:51 compute-2 sudo[126703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:52 compute-2 python3.9[126705]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843751.269844-568-129905643776540/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:52 compute-2 sudo[126703]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:52.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:52 compute-2 ceph-mon[77282]: pgmap v439: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:52 compute-2 sudo[126855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umkqannebogakpguzfanpwbwcldpqizz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843752.4555998-614-184273515091824/AnsiballZ_stat.py'
Jan 31 07:15:52 compute-2 sudo[126855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:52 compute-2 python3.9[126857]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:52 compute-2 sudo[126855]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:53 compute-2 sudo[126981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaewwxquhghdorgogdfsqeeaaiujwegw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843752.4555998-614-184273515091824/AnsiballZ_copy.py'
Jan 31 07:15:53 compute-2 sudo[126981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:53 compute-2 python3.9[126983]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843752.4555998-614-184273515091824/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:53 compute-2 sudo[126981]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:53.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:53 compute-2 sudo[127133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfgutyfqphtadbdzldqkjdfdxdmivqnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843753.6699188-658-146766771216301/AnsiballZ_file.py'
Jan 31 07:15:53 compute-2 sudo[127133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:54 compute-2 python3.9[127135]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:54 compute-2 sudo[127133]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:54.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:54 compute-2 sudo[127285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgaupdvbqwncpqnkabwnimwsdekhuupc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843754.2925773-682-213445177157284/AnsiballZ_command.py'
Jan 31 07:15:54 compute-2 sudo[127285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:54 compute-2 ceph-mon[77282]: pgmap v440: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:54 compute-2 python3.9[127287]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:15:54 compute-2 sudo[127285]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:55 compute-2 sudo[127441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pixyqpbmbubxjygpznfkxnluyddcjojt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843754.9452474-707-9389148979377/AnsiballZ_blockinfile.py'
Jan 31 07:15:55 compute-2 sudo[127441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:55 compute-2 python3.9[127443]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:55 compute-2 sudo[127441]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:15:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:55.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:15:55 compute-2 sudo[127593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqupzhrxpkrncsjjngeegczmgjxdmkrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843755.7617667-733-42921848649238/AnsiballZ_command.py'
Jan 31 07:15:55 compute-2 sudo[127593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:56 compute-2 python3.9[127595]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:15:56 compute-2 sudo[127593]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:56.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:56 compute-2 sudo[127746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjhxnpizjghlwnbljrgtinltakntsmre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843756.4230359-757-162054285481152/AnsiballZ_stat.py'
Jan 31 07:15:56 compute-2 sudo[127746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:56 compute-2 ceph-mon[77282]: pgmap v441: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:56 compute-2 python3.9[127748]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:15:56 compute-2 sudo[127746]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:57 compute-2 sudo[127901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gimfgbepprkewjtxuwlqherwvkesrgev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843757.0620465-781-185029828826124/AnsiballZ_command.py'
Jan 31 07:15:57 compute-2 sudo[127901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:57 compute-2 python3.9[127903]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:15:57 compute-2 sudo[127901]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:15:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:57.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:15:57 compute-2 sudo[128056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwwfuhnwdzkrjkdbqtdunbtuqpcfwyvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843757.7082922-805-49521171197884/AnsiballZ_file.py'
Jan 31 07:15:57 compute-2 sudo[128056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:58 compute-2 python3.9[128058]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:58 compute-2 sudo[128056]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:58.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:58 compute-2 ceph-mon[77282]: pgmap v442: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:59 compute-2 python3.9[128209]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:15:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:15:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:59.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:59 compute-2 sudo[128235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:15:59 compute-2 sudo[128235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:59 compute-2 sudo[128235]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:59 compute-2 sudo[128260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:15:59 compute-2 sudo[128260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:59 compute-2 sudo[128260]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:00.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:00 compute-2 sudo[128410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmvonbaetqhzwjzckrgpkpcdaukzytal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843760.2613397-925-113646236443325/AnsiballZ_command.py'
Jan 31 07:16:00 compute-2 sudo[128410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:00 compute-2 python3.9[128412]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-2.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:9e:41:65:cf" external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:16:00 compute-2 ovs-vsctl[128413]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-2.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:9e:41:65:cf external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 31 07:16:00 compute-2 sudo[128410]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:00 compute-2 ceph-mon[77282]: pgmap v443: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:01 compute-2 sudo[128564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rybusdpfnvvbpqlzxtlrtzmbegazbbrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843761.0022538-952-137186799766284/AnsiballZ_command.py'
Jan 31 07:16:01 compute-2 sudo[128564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:01 compute-2 python3.9[128566]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:16:01 compute-2 sudo[128564]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:01.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:01 compute-2 sudo[128719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olzxzccpyuqdpfmlbirfehgtrejpjkrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843761.6292582-977-52587587121460/AnsiballZ_command.py'
Jan 31 07:16:01 compute-2 sudo[128719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:02 compute-2 python3.9[128721]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:16:02 compute-2 ovs-vsctl[128722]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 31 07:16:02 compute-2 sudo[128719]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:02.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:02 compute-2 python3.9[128872]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:16:02 compute-2 ceph-mon[77282]: pgmap v444: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:03 compute-2 sudo[129025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bctbubfsjxxcpxklliwehrsvsyiijlgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843762.8960204-1027-208111182509316/AnsiballZ_file.py'
Jan 31 07:16:03 compute-2 sudo[129025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:03 compute-2 python3.9[129027]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:16:03 compute-2 sudo[129025]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:03.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:03 compute-2 sudo[129177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbjlyloiwhrkbtsieqsxdtunyljjrbxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843763.5338688-1052-67272416626025/AnsiballZ_stat.py'
Jan 31 07:16:03 compute-2 sudo[129177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:03 compute-2 python3.9[129179]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:16:03 compute-2 sudo[129177]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:04 compute-2 sudo[129255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djpyxkqceouxrygcyywludofmymyhstu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843763.5338688-1052-67272416626025/AnsiballZ_file.py'
Jan 31 07:16:04 compute-2 sudo[129255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:04 compute-2 python3.9[129257]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:16:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:04.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:04 compute-2 sudo[129255]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:04 compute-2 sudo[129407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnejjrawthfrtcnzldvvpugxqntmiziu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843764.4242961-1052-100756703060137/AnsiballZ_stat.py'
Jan 31 07:16:04 compute-2 sudo[129407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:04 compute-2 python3.9[129409]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:16:04 compute-2 sudo[129407]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:05 compute-2 sudo[129486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loddgpyosnulhanccpuxdpzprollpuqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843764.4242961-1052-100756703060137/AnsiballZ_file.py'
Jan 31 07:16:05 compute-2 sudo[129486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:05 compute-2 ceph-mon[77282]: pgmap v445: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:05 compute-2 python3.9[129488]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:16:05 compute-2 sudo[129486]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:05.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:05 compute-2 sudo[129638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dncyqryxdoazjrbqknhlrshqfegckkki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843765.5778785-1120-218576812266337/AnsiballZ_file.py'
Jan 31 07:16:05 compute-2 sudo[129638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:05 compute-2 python3.9[129640]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:16:05 compute-2 sudo[129638]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:06 compute-2 ceph-mon[77282]: pgmap v446: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:06.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:06 compute-2 sudo[129790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehrsuvkgnjqtptlncbxtcvvykmpoxauc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843766.2072427-1145-45760190140601/AnsiballZ_stat.py'
Jan 31 07:16:06 compute-2 sudo[129790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:06 compute-2 python3.9[129792]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:16:06 compute-2 sudo[129790]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:06 compute-2 sudo[129868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxwsbcozdpbklrzpttdqxiobycwwamei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843766.2072427-1145-45760190140601/AnsiballZ_file.py'
Jan 31 07:16:06 compute-2 sudo[129868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:06 compute-2 python3.9[129870]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:16:06 compute-2 sudo[129868]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:07 compute-2 sudo[130021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocsdwrtmqconhimnvskmxqjlbmdnwmgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843767.223483-1181-205759741232266/AnsiballZ_stat.py'
Jan 31 07:16:07 compute-2 sudo[130021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:07.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:07 compute-2 python3.9[130023]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:16:07 compute-2 sudo[130021]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:07 compute-2 sudo[130099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvzyqqpomgcokoczbxjdjpqwibpzbvwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843767.223483-1181-205759741232266/AnsiballZ_file.py'
Jan 31 07:16:07 compute-2 sudo[130099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:07 compute-2 python3.9[130101]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:16:07 compute-2 sudo[130099]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:08.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:08 compute-2 sudo[130251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbrbgdsiwwikilxbdjfhxlxqynnnvgjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843768.326184-1216-125386400995619/AnsiballZ_systemd.py'
Jan 31 07:16:08 compute-2 sudo[130251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:08 compute-2 ceph-mon[77282]: pgmap v447: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:08 compute-2 python3.9[130253]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:16:08 compute-2 systemd[1]: Reloading.
Jan 31 07:16:08 compute-2 systemd-rc-local-generator[130275]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:16:08 compute-2 systemd-sysv-generator[130283]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:16:09 compute-2 sudo[130251]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:09.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:09 compute-2 sudo[130443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agwgeofakrlrhmdhygdvylswxbpwdzgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843769.4913425-1241-236904560501944/AnsiballZ_stat.py'
Jan 31 07:16:09 compute-2 sudo[130443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:09 compute-2 python3.9[130445]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:16:09 compute-2 sudo[130443]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:10 compute-2 sudo[130521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usvdcgigfxzykuvuswoqzwhhcatxywjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843769.4913425-1241-236904560501944/AnsiballZ_file.py'
Jan 31 07:16:10 compute-2 sudo[130521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:10 compute-2 python3.9[130523]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:16:10 compute-2 sudo[130521]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:10.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:10 compute-2 ceph-mon[77282]: pgmap v448: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:10 compute-2 sudo[130673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihbjwqxfyvctgjpeobysqeqzrydhwohk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843770.677464-1276-235994944005134/AnsiballZ_stat.py'
Jan 31 07:16:10 compute-2 sudo[130673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:11 compute-2 python3.9[130675]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:16:11 compute-2 sudo[130673]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:11 compute-2 sudo[130752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujajusqgwzvggdzjwpjniqdejlmpcpyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843770.677464-1276-235994944005134/AnsiballZ_file.py'
Jan 31 07:16:11 compute-2 sudo[130752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:11 compute-2 python3.9[130754]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:16:11 compute-2 sudo[130752]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:16:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:11.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:16:11 compute-2 sudo[130904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dolhfpqyoilzwbdtmppfhzfabsjqddzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843771.7403905-1313-132382875935866/AnsiballZ_systemd.py'
Jan 31 07:16:11 compute-2 sudo[130904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:12 compute-2 python3.9[130906]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:16:12 compute-2 systemd[1]: Reloading.
Jan 31 07:16:12 compute-2 systemd-rc-local-generator[130927]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:16:12 compute-2 systemd-sysv-generator[130934]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:16:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:12.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:12 compute-2 systemd[1]: Starting Create netns directory...
Jan 31 07:16:12 compute-2 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 07:16:12 compute-2 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 07:16:12 compute-2 systemd[1]: Finished Create netns directory.
Jan 31 07:16:12 compute-2 sudo[130904]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:12 compute-2 ceph-mon[77282]: pgmap v449: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:13 compute-2 sudo[131099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqddyusjuxdwpyquczbksigojymjnzyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843772.967838-1342-98178313086161/AnsiballZ_file.py'
Jan 31 07:16:13 compute-2 sudo[131099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:13 compute-2 python3.9[131101]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:16:13 compute-2 sudo[131099]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:13.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:13 compute-2 sudo[131251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltuzbgyrypysblhrdhsuuyuksmnyxzcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843773.5889914-1366-271800406603929/AnsiballZ_stat.py'
Jan 31 07:16:13 compute-2 sudo[131251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:13 compute-2 python3.9[131253]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:16:14 compute-2 sudo[131251]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:14 compute-2 sudo[131374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcetuivjwwpbciwbuqznopwdlgodljwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843773.5889914-1366-271800406603929/AnsiballZ_copy.py'
Jan 31 07:16:14 compute-2 sudo[131374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:14.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:14 compute-2 python3.9[131376]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843773.5889914-1366-271800406603929/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:16:14 compute-2 sudo[131374]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:14 compute-2 ceph-mon[77282]: pgmap v450: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:15 compute-2 sudo[131527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seynywwdluectvqvscdjyqlhxbudpcco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843775.028141-1418-61017361409982/AnsiballZ_file.py'
Jan 31 07:16:15 compute-2 sudo[131527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:16:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:15.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:16:15 compute-2 python3.9[131529]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:16:15 compute-2 sudo[131527]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:16 compute-2 ceph-mon[77282]: pgmap v451: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:16 compute-2 sudo[131679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlqafraqctdjjrdlldeqkzllzdvrdzqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843775.9119916-1441-107239122190320/AnsiballZ_file.py'
Jan 31 07:16:16 compute-2 sudo[131679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:16 compute-2 python3.9[131681]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:16:16 compute-2 sudo[131679]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:16.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:16 compute-2 sudo[131831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trwbbgwmuwbstmziouorkkxcddheogai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843776.5870864-1465-119616864247150/AnsiballZ_stat.py'
Jan 31 07:16:16 compute-2 sudo[131831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:16 compute-2 python3.9[131833]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:16:16 compute-2 sudo[131831]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:17 compute-2 sudo[131955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qafwksxwbhdnerbjfjtpuqopvefqjyps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843776.5870864-1465-119616864247150/AnsiballZ_copy.py'
Jan 31 07:16:17 compute-2 sudo[131955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:17 compute-2 python3.9[131957]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843776.5870864-1465-119616864247150/.source.json _original_basename=.l1a5h1ew follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:16:17 compute-2 sudo[131955]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:16:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:17.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:16:18 compute-2 python3.9[132107]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:16:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:18.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:18 compute-2 ceph-mon[77282]: pgmap v452: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:16:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:19.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:16:19 compute-2 sudo[132483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:16:20 compute-2 sudo[132483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:16:20 compute-2 sudo[132483]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:20 compute-2 sudo[132528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:16:20 compute-2 sudo[132577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcqsasmskcmazoauvyeqelxrwccvujyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843779.691426-1585-7536120411770/AnsiballZ_container_config_data.py'
Jan 31 07:16:20 compute-2 sudo[132528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:16:20 compute-2 sudo[132577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:20 compute-2 sudo[132528]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:20 compute-2 python3.9[132581]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 31 07:16:20 compute-2 sudo[132577]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:20.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:20 compute-2 ceph-mon[77282]: pgmap v453: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:21 compute-2 sudo[132732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyflvzzwufzrzdjwgaxxuexqjorqiobf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843780.723486-1618-59730037449097/AnsiballZ_container_config_hash.py'
Jan 31 07:16:21 compute-2 sudo[132732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:21 compute-2 python3.9[132734]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 07:16:21 compute-2 sudo[132732]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:21.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:22 compute-2 sudo[132884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuzbnniebzrvjqpiilkqehsftwklnrmd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769843781.701467-1648-262423560798639/AnsiballZ_edpm_container_manage.py'
Jan 31 07:16:22 compute-2 sudo[132884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:22.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:22 compute-2 python3[132886]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 07:16:22 compute-2 ceph-mon[77282]: pgmap v454: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:23.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:24.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:24 compute-2 ceph-mon[77282]: pgmap v455: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:25.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:26.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:26 compute-2 podman[132898]: 2026-01-31 07:16:26.75741766 +0000 UTC m=+4.293687856 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 07:16:26 compute-2 podman[133017]: 2026-01-31 07:16:26.860643541 +0000 UTC m=+0.039702198 container create 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 07:16:26 compute-2 podman[133017]: 2026-01-31 07:16:26.839219123 +0000 UTC m=+0.018277800 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 07:16:26 compute-2 python3[132886]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 07:16:26 compute-2 sudo[132884]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:27 compute-2 ceph-mon[77282]: pgmap v456: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:16:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:27.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:16:28 compute-2 sudo[133206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miqswthmdmjglfqbisadyozpgxtkgsrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843787.8990283-1672-225192240705773/AnsiballZ_stat.py'
Jan 31 07:16:28 compute-2 sudo[133206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:28 compute-2 ceph-mon[77282]: pgmap v457: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:28 compute-2 python3.9[133208]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:16:28 compute-2 sudo[133206]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:16:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:28.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:16:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:28 compute-2 sudo[133360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crqauzypmjbrynqipyyuhdghjgzvbkop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843788.6236897-1699-166093016415337/AnsiballZ_file.py'
Jan 31 07:16:28 compute-2 sudo[133360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:29 compute-2 python3.9[133362]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:16:29 compute-2 sudo[133360]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:29 compute-2 sudo[133437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvffnnnrfsywzrehaxizjkpvqnqvftmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843788.6236897-1699-166093016415337/AnsiballZ_stat.py'
Jan 31 07:16:29 compute-2 sudo[133437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:29 compute-2 python3.9[133439]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:16:29 compute-2 sudo[133437]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:16:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:29.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:16:29 compute-2 sudo[133588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esjnqtwzatfaqpyldfkunporzqcftjqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843789.4240735-1699-44740557438381/AnsiballZ_copy.py'
Jan 31 07:16:29 compute-2 sudo[133588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:30 compute-2 python3.9[133590]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769843789.4240735-1699-44740557438381/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:16:30 compute-2 sudo[133588]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:30 compute-2 sudo[133664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boxxzryivaltplshmukavugyzeqkueao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843789.4240735-1699-44740557438381/AnsiballZ_systemd.py'
Jan 31 07:16:30 compute-2 sudo[133664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:30.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:30 compute-2 python3.9[133666]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 07:16:30 compute-2 systemd[1]: Reloading.
Jan 31 07:16:30 compute-2 systemd-sysv-generator[133688]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:16:30 compute-2 systemd-rc-local-generator[133684]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:16:30 compute-2 ceph-mon[77282]: pgmap v458: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:30 compute-2 sudo[133664]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:30 compute-2 sudo[133775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofdhtrghbiysfzxlrnpjumfxtfeszqwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843789.4240735-1699-44740557438381/AnsiballZ_systemd.py'
Jan 31 07:16:30 compute-2 sudo[133775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:31 compute-2 python3.9[133777]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:16:31 compute-2 systemd[1]: Reloading.
Jan 31 07:16:31 compute-2 systemd-sysv-generator[133809]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:16:31 compute-2 systemd-rc-local-generator[133805]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:16:31 compute-2 systemd[1]: Starting ovn_controller container...
Jan 31 07:16:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:31.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:31 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:16:31 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557052af2dace1551bdd50cd28761ea2426bb170b6c13bbd2caa30f2febf4a3d/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 31 07:16:31 compute-2 systemd[1]: Started /usr/bin/podman healthcheck run 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb.
Jan 31 07:16:31 compute-2 podman[133819]: 2026-01-31 07:16:31.644042422 +0000 UTC m=+0.094024761 container init 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2)
Jan 31 07:16:31 compute-2 ovn_controller[133834]: + sudo -E kolla_set_configs
Jan 31 07:16:31 compute-2 podman[133819]: 2026-01-31 07:16:31.668346821 +0000 UTC m=+0.118329150 container start 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 31 07:16:31 compute-2 edpm-start-podman-container[133819]: ovn_controller
Jan 31 07:16:31 compute-2 systemd[1]: Created slice User Slice of UID 0.
Jan 31 07:16:31 compute-2 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 31 07:16:31 compute-2 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 31 07:16:31 compute-2 systemd[1]: Starting User Manager for UID 0...
Jan 31 07:16:31 compute-2 edpm-start-podman-container[133818]: Creating additional drop-in dependency for "ovn_controller" (44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb)
Jan 31 07:16:31 compute-2 systemd[133871]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 31 07:16:31 compute-2 systemd[1]: Reloading.
Jan 31 07:16:31 compute-2 podman[133841]: 2026-01-31 07:16:31.757863753 +0000 UTC m=+0.081800043 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller)
Jan 31 07:16:31 compute-2 systemd-rc-local-generator[133921]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:16:31 compute-2 systemd-sysv-generator[133924]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:16:31 compute-2 systemd[133871]: Queued start job for default target Main User Target.
Jan 31 07:16:31 compute-2 systemd[133871]: Created slice User Application Slice.
Jan 31 07:16:31 compute-2 systemd[133871]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 31 07:16:31 compute-2 systemd[133871]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 07:16:31 compute-2 systemd[133871]: Reached target Paths.
Jan 31 07:16:31 compute-2 systemd[133871]: Reached target Timers.
Jan 31 07:16:31 compute-2 systemd[133871]: Starting D-Bus User Message Bus Socket...
Jan 31 07:16:31 compute-2 systemd[133871]: Starting Create User's Volatile Files and Directories...
Jan 31 07:16:31 compute-2 systemd[133871]: Listening on D-Bus User Message Bus Socket.
Jan 31 07:16:31 compute-2 systemd[133871]: Finished Create User's Volatile Files and Directories.
Jan 31 07:16:31 compute-2 systemd[133871]: Reached target Sockets.
Jan 31 07:16:31 compute-2 systemd[133871]: Reached target Basic System.
Jan 31 07:16:31 compute-2 systemd[133871]: Reached target Main User Target.
Jan 31 07:16:31 compute-2 systemd[133871]: Startup finished in 124ms.
Jan 31 07:16:31 compute-2 systemd[1]: Started User Manager for UID 0.
Jan 31 07:16:31 compute-2 systemd[1]: Started ovn_controller container.
Jan 31 07:16:31 compute-2 systemd[1]: 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb-26c5600aedf4260f.service: Main process exited, code=exited, status=1/FAILURE
Jan 31 07:16:31 compute-2 systemd[1]: 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb-26c5600aedf4260f.service: Failed with result 'exit-code'.
Jan 31 07:16:31 compute-2 systemd[1]: Started Session c1 of User root.
Jan 31 07:16:31 compute-2 sudo[133775]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:32 compute-2 ovn_controller[133834]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 07:16:32 compute-2 ovn_controller[133834]: INFO:__main__:Validating config file
Jan 31 07:16:32 compute-2 ovn_controller[133834]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 07:16:32 compute-2 ovn_controller[133834]: INFO:__main__:Writing out command to execute
Jan 31 07:16:32 compute-2 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 31 07:16:32 compute-2 ovn_controller[133834]: ++ cat /run_command
Jan 31 07:16:32 compute-2 ovn_controller[133834]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 31 07:16:32 compute-2 ovn_controller[133834]: + ARGS=
Jan 31 07:16:32 compute-2 ovn_controller[133834]: + sudo kolla_copy_cacerts
Jan 31 07:16:32 compute-2 systemd[1]: Started Session c2 of User root.
Jan 31 07:16:32 compute-2 ovn_controller[133834]: + [[ ! -n '' ]]
Jan 31 07:16:32 compute-2 ovn_controller[133834]: + . kolla_extend_start
Jan 31 07:16:32 compute-2 ovn_controller[133834]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 31 07:16:32 compute-2 ovn_controller[133834]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 31 07:16:32 compute-2 ovn_controller[133834]: + umask 0022
Jan 31 07:16:32 compute-2 ovn_controller[133834]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 31 07:16:32 compute-2 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 31 07:16:32 compute-2 NetworkManager[48999]: <info>  [1769843792.0889] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 31 07:16:32 compute-2 NetworkManager[48999]: <info>  [1769843792.0896] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:16:32 compute-2 NetworkManager[48999]: <warn>  [1769843792.0898] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 07:16:32 compute-2 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 07:16:32 compute-2 NetworkManager[48999]: <info>  [1769843792.0905] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 31 07:16:32 compute-2 NetworkManager[48999]: <info>  [1769843792.0911] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 31 07:16:32 compute-2 NetworkManager[48999]: <info>  [1769843792.0915] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 31 07:16:32 compute-2 kernel: br-int: entered promiscuous mode
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 31 07:16:32 compute-2 systemd-udevd[133967]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 07:16:32 compute-2 ovn_controller[133834]: 2026-01-31T07:16:32Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 07:16:32 compute-2 NetworkManager[48999]: <info>  [1769843792.1221] manager: (ovn-f59cdb-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 31 07:16:32 compute-2 kernel: genev_sys_6081: entered promiscuous mode
Jan 31 07:16:32 compute-2 NetworkManager[48999]: <info>  [1769843792.1377] device (genev_sys_6081): carrier: link connected
Jan 31 07:16:32 compute-2 NetworkManager[48999]: <info>  [1769843792.1382] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Jan 31 07:16:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:32.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:32 compute-2 NetworkManager[48999]: <info>  [1769843792.6321] manager: (ovn-a84029-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Jan 31 07:16:32 compute-2 NetworkManager[48999]: <info>  [1769843792.6528] manager: (ovn-5c3074-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Jan 31 07:16:32 compute-2 ceph-mon[77282]: pgmap v459: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:33 compute-2 python3.9[134098]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 07:16:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:33.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:34 compute-2 sudo[134248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eidpkvtkvwvxsjcablpyaebkfcapwwvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843793.949261-1834-218895517901804/AnsiballZ_stat.py'
Jan 31 07:16:34 compute-2 sudo[134248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:34.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:34 compute-2 python3.9[134250]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:16:34 compute-2 sudo[134248]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:34 compute-2 sudo[134371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvmwnlbxlwjgymbqspbypuzfvbzixlfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843793.949261-1834-218895517901804/AnsiballZ_copy.py'
Jan 31 07:16:34 compute-2 sudo[134371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:34 compute-2 python3.9[134373]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843793.949261-1834-218895517901804/.source.yaml _original_basename=.6dkimk0p follow=False checksum=f6b75a149047666d3f825893ea1fa078d9873798 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:16:34 compute-2 sudo[134371]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:34 compute-2 ceph-mon[77282]: pgmap v460: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:35.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:35 compute-2 sudo[134524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvxjvwdytfyhphbspgkwtccwieghymfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843795.3671556-1880-220360537480004/AnsiballZ_command.py'
Jan 31 07:16:35 compute-2 sudo[134524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:35 compute-2 python3.9[134526]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:16:35 compute-2 ovs-vsctl[134527]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 31 07:16:35 compute-2 sudo[134524]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:36 compute-2 sudo[134677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iabysqfkgtukhmxsjfltsghbumusqmsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843796.0439444-1904-270560594665339/AnsiballZ_command.py'
Jan 31 07:16:36 compute-2 sudo[134677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:36.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:36 compute-2 ceph-mon[77282]: pgmap v461: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:36 compute-2 python3.9[134679]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:16:36 compute-2 ovs-vsctl[134681]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 31 07:16:36 compute-2 sudo[134677]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:37 compute-2 sudo[134833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjuberdosgodpfklmwbgcnaoaangprzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843797.0272453-1946-17512965044910/AnsiballZ_command.py'
Jan 31 07:16:37 compute-2 sudo[134833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:37 compute-2 python3.9[134835]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:16:37 compute-2 ovs-vsctl[134836]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 31 07:16:37 compute-2 sudo[134833]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:37.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:37 compute-2 sshd-session[122899]: Connection closed by 192.168.122.30 port 53040
Jan 31 07:16:37 compute-2 sshd-session[122896]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:16:37 compute-2 systemd[1]: session-45.scope: Deactivated successfully.
Jan 31 07:16:37 compute-2 systemd[1]: session-45.scope: Consumed 44.850s CPU time.
Jan 31 07:16:37 compute-2 systemd-logind[801]: Session 45 logged out. Waiting for processes to exit.
Jan 31 07:16:37 compute-2 systemd-logind[801]: Removed session 45.
Jan 31 07:16:38 compute-2 ceph-mon[77282]: pgmap v462: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:38.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:39.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:40 compute-2 sudo[134862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:16:40 compute-2 sudo[134862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:16:40 compute-2 sudo[134862]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:40 compute-2 sudo[134887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:16:40 compute-2 sudo[134887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:16:40 compute-2 sudo[134887]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:40.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:40 compute-2 ceph-mon[77282]: pgmap v463: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:41.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:42 compute-2 systemd[1]: Stopping User Manager for UID 0...
Jan 31 07:16:42 compute-2 systemd[133871]: Activating special unit Exit the Session...
Jan 31 07:16:42 compute-2 systemd[133871]: Stopped target Main User Target.
Jan 31 07:16:42 compute-2 systemd[133871]: Stopped target Basic System.
Jan 31 07:16:42 compute-2 systemd[133871]: Stopped target Paths.
Jan 31 07:16:42 compute-2 systemd[133871]: Stopped target Sockets.
Jan 31 07:16:42 compute-2 systemd[133871]: Stopped target Timers.
Jan 31 07:16:42 compute-2 systemd[133871]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 07:16:42 compute-2 systemd[133871]: Closed D-Bus User Message Bus Socket.
Jan 31 07:16:42 compute-2 systemd[133871]: Stopped Create User's Volatile Files and Directories.
Jan 31 07:16:42 compute-2 systemd[133871]: Removed slice User Application Slice.
Jan 31 07:16:42 compute-2 systemd[133871]: Reached target Shutdown.
Jan 31 07:16:42 compute-2 systemd[133871]: Finished Exit the Session.
Jan 31 07:16:42 compute-2 systemd[133871]: Reached target Exit the Session.
Jan 31 07:16:42 compute-2 systemd[1]: user@0.service: Deactivated successfully.
Jan 31 07:16:42 compute-2 systemd[1]: Stopped User Manager for UID 0.
Jan 31 07:16:42 compute-2 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 31 07:16:42 compute-2 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 31 07:16:42 compute-2 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 31 07:16:42 compute-2 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 31 07:16:42 compute-2 systemd[1]: Removed slice User Slice of UID 0.
Jan 31 07:16:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:16:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:42.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:16:42 compute-2 ceph-mon[77282]: pgmap v464: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:43.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:43 compute-2 sshd-session[134917]: Accepted publickey for zuul from 192.168.122.30 port 37796 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:16:43 compute-2 systemd-logind[801]: New session 47 of user zuul.
Jan 31 07:16:43 compute-2 systemd[1]: Started Session 47 of User zuul.
Jan 31 07:16:43 compute-2 sshd-session[134917]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:16:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:16:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:44.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:16:44 compute-2 python3.9[135071]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:16:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:45.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:46.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:47 compute-2 sudo[135102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:16:47 compute-2 sudo[135102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:16:47 compute-2 sudo[135102]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:47 compute-2 sudo[135127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:16:47 compute-2 sudo[135127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:16:47 compute-2 sudo[135127]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:47 compute-2 sudo[135152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:16:47 compute-2 sudo[135152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:16:47 compute-2 sudo[135152]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:47 compute-2 sudo[135177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 07:16:47 compute-2 sudo[135177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:16:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:47.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:47 compute-2 sudo[135177]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:48.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:49.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:50.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:50 compute-2 ceph-mon[77282]: pgmap v465: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:51.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:52.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:53.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:16:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:54.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:16:54 compute-2 sudo[135352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkzvxaxdiqeblygxkxxenhsbocaxkicu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843814.2546926-63-61912654768163/AnsiballZ_file.py'
Jan 31 07:16:54 compute-2 sudo[135352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:54 compute-2 python3.9[135354]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:16:54 compute-2 sudo[135352]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:55 compute-2 ceph-mon[77282]: pgmap v466: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:55 compute-2 ceph-mon[77282]: pgmap v467: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:55 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 07:16:55 compute-2 ceph-mon[77282]: pgmap v468: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:55 compute-2 sudo[135505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvbgddmufiwoepxilnqdgmhrjsmyesmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843814.9919357-63-108916931842565/AnsiballZ_file.py'
Jan 31 07:16:55 compute-2 sudo[135505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:55 compute-2 python3.9[135507]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:16:55 compute-2 sudo[135505]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:55.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:55 compute-2 sudo[135657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfagmgrfcrcybgpxspkxtcqzkqjqcgqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843815.5947905-63-123070976761748/AnsiballZ_file.py'
Jan 31 07:16:55 compute-2 sudo[135657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:56 compute-2 python3.9[135659]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:16:56 compute-2 sudo[135657]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:16:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:56.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:16:56 compute-2 sudo[135809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-helbgctengxettgqrtbxzvzpnqpzugmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843816.1559649-63-101988389418397/AnsiballZ_file.py'
Jan 31 07:16:56 compute-2 sudo[135809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:56 compute-2 python3.9[135811]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:16:56 compute-2 sudo[135809]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:56 compute-2 sudo[135961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smbfuuupgfxtlydcdjpyjcordymnwucn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843816.7310033-63-92358494801811/AnsiballZ_file.py'
Jan 31 07:16:56 compute-2 sudo[135961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:57 compute-2 python3.9[135963]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:16:57 compute-2 sudo[135961]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:57.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:58.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:59 compute-2 ceph-mon[77282]: pgmap v469: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:59 compute-2 ceph-mon[77282]: pgmap v470: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:16:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:16:59 compute-2 ceph-mon[77282]: pgmap v471: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:59 compute-2 python3.9[136114]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:16:59 compute-2 sudo[136124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:16:59 compute-2 sudo[136124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:16:59 compute-2 sudo[136124]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:59 compute-2 sudo[136165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:16:59 compute-2 sudo[136165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:16:59 compute-2 sudo[136165]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:59 compute-2 sudo[136190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:16:59 compute-2 sudo[136190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:16:59 compute-2 sudo[136190]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:59 compute-2 sudo[136215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:16:59 compute-2 sudo[136215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:16:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:16:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2126 writes, 11K keys, 2126 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 2126 writes, 2126 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2126 writes, 11K keys, 2126 commit groups, 1.0 writes per commit group, ingest: 22.89 MB, 0.04 MB/s
                                           Interval WAL: 2126 writes, 2126 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    125.3      0.09              0.03         4    0.023       0      0       0.0       0.0
                                             L6      1/0    7.37 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.1    187.8    160.4      0.15              0.05         3    0.050     12K   1277       0.0       0.0
                                            Sum      1/0    7.37 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1    116.6    147.1      0.24              0.08         7    0.035     12K   1277       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1    117.4    148.1      0.24              0.08         6    0.040     12K   1277       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    187.8    160.4      0.15              0.05         3    0.050     12K   1277       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    127.5      0.09              0.03         3    0.030       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.011, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.2 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559ef3d1f1f0#2 capacity: 308.00 MB usage: 990.08 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(46,856.83 KB,0.271671%) FilterBlock(7,41.42 KB,0.0131335%) IndexBlock(7,91.83 KB,0.0291156%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 07:16:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:16:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:59.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:59 compute-2 sudo[136215]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:59 compute-2 sudo[136397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkugcmdrwlumtshsfnigkgydokzovsuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843819.3793633-196-118542511029769/AnsiballZ_seboolean.py'
Jan 31 07:16:59 compute-2 sudo[136397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:59 compute-2 python3.9[136399]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 31 07:17:00 compute-2 sudo[136400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:17:00 compute-2 sudo[136400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:17:00 compute-2 sudo[136400]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:00 compute-2 sudo[136425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:17:00 compute-2 sudo[136425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:17:00 compute-2 sudo[136425]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:00.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:00 compute-2 ceph-mon[77282]: pgmap v472: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:17:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:17:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 07:17:00 compute-2 ceph-mon[77282]: pgmap v473: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 31 07:17:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:17:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:17:00 compute-2 sudo[136397]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:01 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:17:01 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:17:01 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:17:01 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:17:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:01.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:01 compute-2 python3.9[136600]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:17:02 compute-2 ovn_controller[133834]: 2026-01-31T07:17:02Z|00025|memory|INFO|17024 kB peak resident set size after 30.0 seconds
Jan 31 07:17:02 compute-2 ovn_controller[133834]: 2026-01-31T07:17:02Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:2
Jan 31 07:17:02 compute-2 podman[136695]: 2026-01-31 07:17:02.139769777 +0000 UTC m=+0.079150619 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:17:02 compute-2 python3.9[136729]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843820.9510167-219-276614267932901/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:17:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:02.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:02 compute-2 ceph-mon[77282]: pgmap v474: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:03 compute-2 python3.9[136898]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:17:03 compute-2 python3.9[137020]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843822.4983397-264-163711590585253/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:17:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:03.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:17:04 compute-2 ceph-mon[77282]: pgmap v475: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:04 compute-2 sudo[137170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcoyuvgirtltlsqakswdypekfcsoyfoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843824.1114798-315-220163051131083/AnsiballZ_setup.py'
Jan 31 07:17:04 compute-2 sudo[137170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:04.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:04 compute-2 python3.9[137172]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:17:04 compute-2 sudo[137170]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:05 compute-2 sudo[137255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlhksulvyrqtnglpgrgukmdhqnirtpzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843824.1114798-315-220163051131083/AnsiballZ_dnf.py'
Jan 31 07:17:05 compute-2 sudo[137255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:05 compute-2 python3.9[137257]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:17:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:05.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:06.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:06 compute-2 sudo[137255]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:07 compute-2 ceph-mon[77282]: pgmap v476: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:17:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:07.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:17:07 compute-2 sudo[137409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iukawmhfloprxhuorihmljsbsnjugyum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843827.4498813-351-145783279670175/AnsiballZ_systemd.py'
Jan 31 07:17:07 compute-2 sudo[137409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:08 compute-2 python3.9[137411]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 07:17:08 compute-2 sudo[137409]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:08.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:08 compute-2 ceph-mon[77282]: pgmap v477: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:17:08 compute-2 python3.9[137564]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:17:09 compute-2 python3.9[137686]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843828.5329697-375-245787306048601/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:17:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:17:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:09.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:17:09 compute-2 python3.9[137836]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:17:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:10.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:10 compute-2 python3.9[137957]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843829.5692708-375-45425406636484/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:17:10 compute-2 ceph-mon[77282]: pgmap v478: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:17:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:11.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:17:12 compute-2 python3.9[138108]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:17:12 compute-2 ceph-mon[77282]: pgmap v479: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:12.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:12 compute-2 python3.9[138229]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843831.8432124-507-68636722188863/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:17:13 compute-2 python3.9[138380]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:17:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:17:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:13.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:17:13 compute-2 python3.9[138501]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843832.8004541-507-30412982875811/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:17:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:17:14 compute-2 ceph-mon[77282]: pgmap v480: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:17:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:14.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:17:14 compute-2 python3.9[138651]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:17:14 compute-2 sudo[138654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:17:14 compute-2 sudo[138654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:17:14 compute-2 sudo[138654]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:14 compute-2 sudo[138703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:17:14 compute-2 sudo[138703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:17:14 compute-2 sudo[138703]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:15 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-2[77982]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 07:17:15 compute-2 sudo[138854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjickdgzmwippkyajvukoubthynldhap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843834.9923682-622-230449546941563/AnsiballZ_file.py'
Jan 31 07:17:15 compute-2 sudo[138854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:17:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:17:15 compute-2 python3.9[138856]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:17:15 compute-2 sudo[138854]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:15.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:15 compute-2 sudo[139006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymdxvefzsigkabhkhfljijsuktiztllp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843835.6645567-645-59796948782692/AnsiballZ_stat.py'
Jan 31 07:17:15 compute-2 sudo[139006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:16 compute-2 python3.9[139008]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:17:16 compute-2 sudo[139006]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:16.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:16 compute-2 sudo[139084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmsdveomuvuwnygyuanbfxsvdsbnaozw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843835.6645567-645-59796948782692/AnsiballZ_file.py'
Jan 31 07:17:16 compute-2 sudo[139084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:16 compute-2 python3.9[139086]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:17:16 compute-2 sudo[139084]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:16 compute-2 ceph-mon[77282]: pgmap v481: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:16 compute-2 sudo[139236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akoxlomqpruyvafesieesdchplzespfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843836.734767-645-265802063804220/AnsiballZ_stat.py'
Jan 31 07:17:16 compute-2 sudo[139236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:17 compute-2 python3.9[139238]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:17:17 compute-2 sudo[139236]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:17 compute-2 sudo[139315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqmxszsaqcjacxrnycpcglkgychyawra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843836.734767-645-265802063804220/AnsiballZ_file.py'
Jan 31 07:17:17 compute-2 sudo[139315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:17 compute-2 python3.9[139317]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:17:17 compute-2 sudo[139315]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:17.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:18 compute-2 sudo[139467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqmvvwhzlddxewixcwsgqqbybgkcfhfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843837.7505422-715-119787016461524/AnsiballZ_file.py'
Jan 31 07:17:18 compute-2 sudo[139467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:18 compute-2 python3.9[139469]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:18 compute-2 ceph-mon[77282]: pgmap v482: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:18 compute-2 sudo[139467]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 31 07:17:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:18.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 31 07:17:18 compute-2 sudo[139619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqttzmuhrhceiaugybvulblymodefjag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843838.4141402-738-152099270945874/AnsiballZ_stat.py'
Jan 31 07:17:18 compute-2 sudo[139619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:17:18 compute-2 python3.9[139621]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:17:18 compute-2 sudo[139619]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:19 compute-2 sudo[139698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toocyfxcfclfpzhflfblykxftkregtoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843838.4141402-738-152099270945874/AnsiballZ_file.py'
Jan 31 07:17:19 compute-2 sudo[139698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:19 compute-2 python3.9[139700]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:19 compute-2 sudo[139698]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:19.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:19 compute-2 sudo[139850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdcbsifihfwvcjisunatfuxmhtqievkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843839.458264-774-64256218815480/AnsiballZ_stat.py'
Jan 31 07:17:19 compute-2 sudo[139850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:19 compute-2 python3.9[139852]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:17:19 compute-2 sudo[139850]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:20 compute-2 sudo[139928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etyouejvonohfmsboptfifftuibfdrra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843839.458264-774-64256218815480/AnsiballZ_file.py'
Jan 31 07:17:20 compute-2 sudo[139928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:20 compute-2 python3.9[139930]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:20 compute-2 sudo[139928]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:20.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:20 compute-2 sudo[139931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:17:20 compute-2 sudo[139931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:17:20 compute-2 sudo[139931]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:20 compute-2 sudo[139968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:17:20 compute-2 sudo[139968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:17:20 compute-2 sudo[139968]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:20 compute-2 sudo[140130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmcrvvzpwsojgtrcmsrwchkhlqakaesc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843840.5564747-811-36619402916162/AnsiballZ_systemd.py'
Jan 31 07:17:20 compute-2 sudo[140130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:20 compute-2 ceph-mon[77282]: pgmap v483: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:21 compute-2 python3.9[140132]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:17:21 compute-2 systemd[1]: Reloading.
Jan 31 07:17:21 compute-2 systemd-rc-local-generator[140152]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:17:21 compute-2 systemd-sysv-generator[140156]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:17:21 compute-2 sudo[140130]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:21.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:21 compute-2 sudo[140321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuzqrrwlixlizdvrcbdahlmabllokdqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843841.632104-834-146330842648421/AnsiballZ_stat.py'
Jan 31 07:17:21 compute-2 sudo[140321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:22 compute-2 python3.9[140323]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:17:22 compute-2 sudo[140321]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:22 compute-2 sudo[140399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkiakpygnkmdzxhmakjtviwgvqcletco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843841.632104-834-146330842648421/AnsiballZ_file.py'
Jan 31 07:17:22 compute-2 sudo[140399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:22.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:22 compute-2 python3.9[140401]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:22 compute-2 sudo[140399]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:23 compute-2 sudo[140552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frdiafmmuhjbduygxnellvqavmgxlijh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843842.8163128-870-85437356432598/AnsiballZ_stat.py'
Jan 31 07:17:23 compute-2 sudo[140552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:23 compute-2 ceph-mon[77282]: pgmap v484: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:23 compute-2 python3.9[140554]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:17:23 compute-2 sudo[140552]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:23 compute-2 sudo[140630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxucvsspryalpnliavyjooskzcfqmguc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843842.8163128-870-85437356432598/AnsiballZ_file.py'
Jan 31 07:17:23 compute-2 sudo[140630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:23.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:23 compute-2 python3.9[140632]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:23 compute-2 sudo[140630]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:17:24 compute-2 sudo[140782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqzwstpkmendvvgtmffhbwuehghhuwvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843843.876652-907-99983178352295/AnsiballZ_systemd.py'
Jan 31 07:17:24 compute-2 sudo[140782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:17:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:24.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:17:24 compute-2 python3.9[140784]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:17:24 compute-2 systemd[1]: Reloading.
Jan 31 07:17:24 compute-2 ceph-mon[77282]: pgmap v485: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:17:24.579955) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843844580054, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1772, "num_deletes": 252, "total_data_size": 4413792, "memory_usage": 4463056, "flush_reason": "Manual Compaction"}
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843844593824, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 2882773, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10704, "largest_seqno": 12471, "table_properties": {"data_size": 2875389, "index_size": 4391, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15006, "raw_average_key_size": 19, "raw_value_size": 2860522, "raw_average_value_size": 3768, "num_data_blocks": 197, "num_entries": 759, "num_filter_entries": 759, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843670, "oldest_key_time": 1769843670, "file_creation_time": 1769843844, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 13942 microseconds, and 7135 cpu microseconds.
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:17:24.593896) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 2882773 bytes OK
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:17:24.593939) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:17:24.595343) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:17:24.595360) EVENT_LOG_v1 {"time_micros": 1769843844595353, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:17:24.595379) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 4405768, prev total WAL file size 4405768, number of live WAL files 2.
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:17:24.596270) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(2815KB)], [21(7542KB)]
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843844596354, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 10606019, "oldest_snapshot_seqno": -1}
Jan 31 07:17:24 compute-2 systemd-sysv-generator[140814]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:17:24 compute-2 systemd-rc-local-generator[140811]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 4012 keys, 8423858 bytes, temperature: kUnknown
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843844646702, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 8423858, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8394359, "index_size": 18381, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 97419, "raw_average_key_size": 24, "raw_value_size": 8319219, "raw_average_value_size": 2073, "num_data_blocks": 794, "num_entries": 4012, "num_filter_entries": 4012, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769843844, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:17:24.647184) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8423858 bytes
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:17:24.648229) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 209.3 rd, 166.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 7.4 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(6.6) write-amplify(2.9) OK, records in: 4534, records dropped: 522 output_compression: NoCompression
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:17:24.648254) EVENT_LOG_v1 {"time_micros": 1769843844648246, "job": 10, "event": "compaction_finished", "compaction_time_micros": 50682, "compaction_time_cpu_micros": 29861, "output_level": 6, "num_output_files": 1, "total_output_size": 8423858, "num_input_records": 4534, "num_output_records": 4012, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843844648825, "job": 10, "event": "table_file_deletion", "file_number": 23}
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843844649562, "job": 10, "event": "table_file_deletion", "file_number": 21}
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:17:24.596173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:17:24.649690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:17:24.649695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:17:24.649696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:17:24.649699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:17:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:17:24.649701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:17:24 compute-2 systemd[1]: Starting Create netns directory...
Jan 31 07:17:24 compute-2 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 07:17:24 compute-2 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 07:17:24 compute-2 systemd[1]: Finished Create netns directory.
Jan 31 07:17:24 compute-2 sudo[140782]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:25.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:25 compute-2 sudo[140976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrqzssjfqhwdmtfpbwxdzmocwudljcrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843845.408883-936-234516880855965/AnsiballZ_file.py'
Jan 31 07:17:25 compute-2 sudo[140976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:26 compute-2 python3.9[140978]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:17:26 compute-2 sudo[140976]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:26.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:26 compute-2 sudo[141128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jayeglpqdaxwfbuhqfgelaltuvhsnvmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843846.3484786-960-101387629747077/AnsiballZ_stat.py'
Jan 31 07:17:26 compute-2 sudo[141128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:26 compute-2 ceph-mon[77282]: pgmap v486: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:26 compute-2 python3.9[141130]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:17:26 compute-2 sudo[141128]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:27 compute-2 sudo[141252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycxfspfiiyemzorvifsqxwoarnflbthl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843846.3484786-960-101387629747077/AnsiballZ_copy.py'
Jan 31 07:17:27 compute-2 sudo[141252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:27 compute-2 python3.9[141254]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843846.3484786-960-101387629747077/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:17:27 compute-2 sudo[141252]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:27 compute-2 sshd-session[141255]: Invalid user solv from 92.118.39.56 port 58376
Jan 31 07:17:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:27.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:27 compute-2 sshd-session[141255]: Connection closed by invalid user solv 92.118.39.56 port 58376 [preauth]
Jan 31 07:17:28 compute-2 sudo[141406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxhvualtmsfcsotayohisvwgxvpiitul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843847.7819066-1011-256207907942322/AnsiballZ_file.py'
Jan 31 07:17:28 compute-2 sudo[141406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:28 compute-2 python3.9[141408]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:28 compute-2 sudo[141406]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:28.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:28 compute-2 sudo[141558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoobumvantkazmmcfokpmwcstkyrxbcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843848.4462278-1036-259409122627356/AnsiballZ_file.py'
Jan 31 07:17:28 compute-2 sudo[141558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:17:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5570 writes, 24K keys, 5570 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5570 writes, 807 syncs, 6.90 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5570 writes, 24K keys, 5570 commit groups, 1.0 writes per commit group, ingest: 18.68 MB, 0.03 MB/s
                                           Interval WAL: 5570 writes, 807 syncs, 6.90 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 07:17:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:17:28 compute-2 python3.9[141560]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:17:28 compute-2 sudo[141558]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:28 compute-2 ceph-mon[77282]: pgmap v487: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:29 compute-2 sudo[141711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqhzugabkbracwcvdrwnnuslubmgssrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843849.1402977-1059-16571381529515/AnsiballZ_stat.py'
Jan 31 07:17:29 compute-2 sudo[141711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:29 compute-2 python3.9[141713]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:17:29 compute-2 sudo[141711]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:17:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:29.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:17:29 compute-2 sudo[141834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytnxhmenqbzadoyhpwwshpkrfyhfajpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843849.1402977-1059-16571381529515/AnsiballZ_copy.py'
Jan 31 07:17:29 compute-2 sudo[141834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:30 compute-2 python3.9[141836]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843849.1402977-1059-16571381529515/.source.json _original_basename=.ybhij_qr follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:30 compute-2 sudo[141834]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:30 compute-2 ceph-mon[77282]: pgmap v488: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:30.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:30 compute-2 python3.9[141986]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:31.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:32.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:32 compute-2 sudo[142419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgwuegrtluwsqtudnoygqxughhuplouq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843852.1937342-1179-164109623959161/AnsiballZ_container_config_data.py'
Jan 31 07:17:32 compute-2 sudo[142419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:32 compute-2 podman[142382]: 2026-01-31 07:17:32.639306631 +0000 UTC m=+0.099687742 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, 
org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 07:17:32 compute-2 python3.9[142424]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 31 07:17:32 compute-2 sudo[142419]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:32 compute-2 ceph-mon[77282]: pgmap v489: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:33 compute-2 sudo[142589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssctogkzfhestdvonnzenkznbknilfci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843853.2189627-1212-235976033533386/AnsiballZ_container_config_hash.py'
Jan 31 07:17:33 compute-2 sudo[142589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:17:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:33.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:17:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:17:33 compute-2 python3.9[142591]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 07:17:33 compute-2 sudo[142589]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:17:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:34.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:17:34 compute-2 sudo[142741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzkqvjjioueaepzxpkvbtfspbgjxqidz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769843854.3959236-1242-91862511654965/AnsiballZ_edpm_container_manage.py'
Jan 31 07:17:34 compute-2 sudo[142741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:35 compute-2 python3[142743]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 07:17:35 compute-2 ceph-mon[77282]: pgmap v490: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:35.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:36.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:36 compute-2 ceph-mon[77282]: pgmap v491: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:37.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:38 compute-2 ceph-mon[77282]: pgmap v492: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:38.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:17:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:39.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:17:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:40.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:17:40 compute-2 sudo[142825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:17:40 compute-2 sudo[142825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:17:40 compute-2 sudo[142825]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:40 compute-2 sudo[142850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:17:40 compute-2 sudo[142850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:17:40 compute-2 sudo[142850]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:41.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:42.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:42 compute-2 podman[142757]: 2026-01-31 07:17:42.476102709 +0000 UTC m=+7.349097163 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:17:42 compute-2 podman[142936]: 2026-01-31 07:17:42.589832448 +0000 UTC m=+0.020666587 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:17:42 compute-2 podman[142936]: 2026-01-31 07:17:42.839731843 +0000 UTC m=+0.270566002 container create b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 07:17:42 compute-2 python3[142743]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:17:42 compute-2 sudo[142741]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:17:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:43.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:17:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:17:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:44.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:44 compute-2 ceph-mon[77282]: pgmap v493: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:45.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:46.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:47.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:48.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:17:49 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma missed beacon ack from the monitors
Jan 31 07:17:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:49.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:50.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:51.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:52.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:53.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:17:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:54.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:55.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:17:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:56.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:17:57 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma missed beacon ack from the monitors
Jan 31 07:17:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:57.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:58.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:17:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:17:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:59.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:00 compute-2 ceph-mon[77282]: pgmap v494: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:00 compute-2 ceph-mon[77282]: pgmap v495: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:00.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:00 compute-2 sudo[143008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:18:00 compute-2 sudo[143008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:00 compute-2 sudo[143008]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:00 compute-2 sudo[143033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:18:00 compute-2 sudo[143033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:00 compute-2 sudo[143033]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:00 compute-2 sudo[143183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkpajrpniqdpbgppyrfjrllvmfgngver ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843880.6892428-1266-132338185104264/AnsiballZ_stat.py'
Jan 31 07:18:00 compute-2 sudo[143183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:01 compute-2 python3.9[143185]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:18:01 compute-2 sudo[143183]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:01 compute-2 ceph-mon[77282]: pgmap v496: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:01 compute-2 ceph-mon[77282]: pgmap v497: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:01 compute-2 ceph-mon[77282]: pgmap v498: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:01 compute-2 ceph-mon[77282]: pgmap v499: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:01 compute-2 ceph-mon[77282]: pgmap v500: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:01 compute-2 ceph-mon[77282]: pgmap v501: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:01 compute-2 ceph-mon[77282]: pgmap v502: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:01 compute-2 ceph-mon[77282]: pgmap v503: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:18:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:01.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:18:01 compute-2 sudo[143338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwhzwvgivdzmznolpgepyhcsfzzkyxqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843881.7099907-1293-72962040118844/AnsiballZ_file.py'
Jan 31 07:18:01 compute-2 sudo[143338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:02 compute-2 python3.9[143340]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:02 compute-2 sudo[143338]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:02 compute-2 sudo[143414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqgplzhqpwsfhooxepyputnwhqbjzafi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843881.7099907-1293-72962040118844/AnsiballZ_stat.py'
Jan 31 07:18:02 compute-2 sudo[143414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:02.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:02 compute-2 python3.9[143416]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:18:02 compute-2 sudo[143414]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:02 compute-2 ceph-mon[77282]: pgmap v504: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:03 compute-2 sudo[143578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itrgufbzhjofwakqiwaxrhexkryzmepu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843882.6317987-1293-263505573213487/AnsiballZ_copy.py'
Jan 31 07:18:03 compute-2 sudo[143578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:03 compute-2 podman[143539]: 2026-01-31 07:18:03.109397682 +0000 UTC m=+0.106904916 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 07:18:03 compute-2 python3.9[143586]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769843882.6317987-1293-263505573213487/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:03 compute-2 sudo[143578]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:03 compute-2 sudo[143667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjayknggwofpdzjbqfdduroewbdetwwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843882.6317987-1293-263505573213487/AnsiballZ_systemd.py'
Jan 31 07:18:03 compute-2 sudo[143667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:03.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:03 compute-2 python3.9[143669]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 07:18:03 compute-2 systemd[1]: Reloading.
Jan 31 07:18:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:18:03 compute-2 systemd-sysv-generator[143697]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:18:03 compute-2 systemd-rc-local-generator[143694]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:18:04 compute-2 sudo[143667]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:04 compute-2 sudo[143778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leszfkirnohplohbrrtzkbzqlpdapqqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843882.6317987-1293-263505573213487/AnsiballZ_systemd.py'
Jan 31 07:18:04 compute-2 sudo[143778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:04.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:04 compute-2 python3.9[143780]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:18:04 compute-2 systemd[1]: Reloading.
Jan 31 07:18:04 compute-2 systemd-rc-local-generator[143807]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:18:04 compute-2 systemd-sysv-generator[143811]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:18:04 compute-2 systemd[1]: Starting ovn_metadata_agent container...
Jan 31 07:18:04 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:18:04 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e27f565e1b71949bd9d45d1a0f7cef23bdfc336922fb29eb855748db790d4d61/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 31 07:18:04 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e27f565e1b71949bd9d45d1a0f7cef23bdfc336922fb29eb855748db790d4d61/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:18:04 compute-2 systemd[1]: Started /usr/bin/podman healthcheck run b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666.
Jan 31 07:18:04 compute-2 podman[143821]: 2026-01-31 07:18:04.985616937 +0000 UTC m=+0.113193844 container init b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 07:18:04 compute-2 ovn_metadata_agent[143834]: + sudo -E kolla_set_configs
Jan 31 07:18:05 compute-2 podman[143821]: 2026-01-31 07:18:05.019541391 +0000 UTC m=+0.147118268 container start b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:18:05 compute-2 edpm-start-podman-container[143821]: ovn_metadata_agent
Jan 31 07:18:05 compute-2 ovn_metadata_agent[143834]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 07:18:05 compute-2 ovn_metadata_agent[143834]: INFO:__main__:Validating config file
Jan 31 07:18:05 compute-2 ovn_metadata_agent[143834]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 07:18:05 compute-2 ovn_metadata_agent[143834]: INFO:__main__:Copying service configuration files
Jan 31 07:18:05 compute-2 ovn_metadata_agent[143834]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 31 07:18:05 compute-2 ovn_metadata_agent[143834]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 31 07:18:05 compute-2 ovn_metadata_agent[143834]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 31 07:18:05 compute-2 ovn_metadata_agent[143834]: INFO:__main__:Writing out command to execute
Jan 31 07:18:05 compute-2 ovn_metadata_agent[143834]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 31 07:18:05 compute-2 ovn_metadata_agent[143834]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 31 07:18:05 compute-2 ovn_metadata_agent[143834]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 31 07:18:05 compute-2 ovn_metadata_agent[143834]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 31 07:18:05 compute-2 ovn_metadata_agent[143834]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 31 07:18:05 compute-2 ovn_metadata_agent[143834]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 31 07:18:05 compute-2 ovn_metadata_agent[143834]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 31 07:18:05 compute-2 ovn_metadata_agent[143834]: ++ cat /run_command
Jan 31 07:18:05 compute-2 ovn_metadata_agent[143834]: + CMD=neutron-ovn-metadata-agent
Jan 31 07:18:05 compute-2 ovn_metadata_agent[143834]: + ARGS=
Jan 31 07:18:05 compute-2 ovn_metadata_agent[143834]: + sudo kolla_copy_cacerts
Jan 31 07:18:05 compute-2 edpm-start-podman-container[143820]: Creating additional drop-in dependency for "ovn_metadata_agent" (b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666)
Jan 31 07:18:05 compute-2 ovn_metadata_agent[143834]: + [[ ! -n '' ]]
Jan 31 07:18:05 compute-2 ovn_metadata_agent[143834]: + . kolla_extend_start
Jan 31 07:18:05 compute-2 ovn_metadata_agent[143834]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 31 07:18:05 compute-2 ovn_metadata_agent[143834]: Running command: 'neutron-ovn-metadata-agent'
Jan 31 07:18:05 compute-2 ovn_metadata_agent[143834]: + umask 0022
Jan 31 07:18:05 compute-2 ovn_metadata_agent[143834]: + exec neutron-ovn-metadata-agent
Jan 31 07:18:05 compute-2 systemd[1]: Reloading.
Jan 31 07:18:05 compute-2 podman[143843]: 2026-01-31 07:18:05.105593244 +0000 UTC m=+0.074724812 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:18:05 compute-2 systemd-rc-local-generator[143909]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:18:05 compute-2 systemd-sysv-generator[143914]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:18:05 compute-2 systemd[1]: Started ovn_metadata_agent container.
Jan 31 07:18:05 compute-2 sudo[143778]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:05.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:06.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.780 143841 INFO neutron.common.config [-] Logging enabled!
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.780 143841 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.780 143841 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.781 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.781 143841 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.781 143841 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.781 143841 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.781 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.781 143841 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.781 143841 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.781 143841 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.782 143841 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.782 143841 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.782 143841 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.782 143841 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.782 143841 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.782 143841 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.782 143841 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.782 143841 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.782 143841 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.783 143841 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.783 143841 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.783 143841 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.783 143841 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.783 143841 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.783 143841 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.783 143841 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.783 143841 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.783 143841 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.784 143841 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.784 143841 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.784 143841 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.784 143841 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.784 143841 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.784 143841 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.784 143841 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.784 143841 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.785 143841 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.785 143841 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.785 143841 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.785 143841 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.785 143841 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.785 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.785 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.785 143841 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.785 143841 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.786 143841 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.786 143841 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.786 143841 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.786 143841 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.786 143841 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.786 143841 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.786 143841 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.786 143841 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.786 143841 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.786 143841 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.787 143841 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.787 143841 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.787 143841 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.787 143841 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.787 143841 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.787 143841 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.787 143841 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.788 143841 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.788 143841 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.788 143841 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.788 143841 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.788 143841 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.788 143841 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.788 143841 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.788 143841 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.788 143841 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.789 143841 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.789 143841 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.789 143841 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.789 143841 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.789 143841 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.789 143841 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.789 143841 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.790 143841 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.790 143841 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.790 143841 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.790 143841 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.790 143841 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.790 143841 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.790 143841 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.790 143841 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.790 143841 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.791 143841 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.791 143841 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.791 143841 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.791 143841 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.791 143841 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.791 143841 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.791 143841 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.791 143841 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.792 143841 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.792 143841 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.792 143841 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.792 143841 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.792 143841 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.792 143841 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.792 143841 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.792 143841 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.792 143841 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.792 143841 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.793 143841 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.793 143841 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.793 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.793 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.793 143841 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.793 143841 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.793 143841 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.793 143841 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.794 143841 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.794 143841 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.794 143841 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.794 143841 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.794 143841 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.794 143841 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.794 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.794 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.795 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.795 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.795 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.795 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.795 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.795 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.795 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.795 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.796 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.796 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.796 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.796 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.796 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.796 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.796 143841 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.796 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.796 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.797 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.797 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.797 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.797 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.797 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.797 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.797 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.797 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.798 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.798 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.798 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.798 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.798 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.798 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.798 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.798 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.798 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.799 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.799 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.799 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.799 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.799 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.799 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.799 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.799 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.800 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.800 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.800 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.800 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.800 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.800 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.800 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.800 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.801 143841 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.801 143841 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.801 143841 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.801 143841 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.801 143841 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.801 143841 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.801 143841 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.801 143841 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.802 143841 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.802 143841 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.802 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.802 143841 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.802 143841 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.802 143841 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.802 143841 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.802 143841 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.802 143841 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.803 143841 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.803 143841 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.803 143841 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.803 143841 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.803 143841 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.803 143841 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.803 143841 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.803 143841 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.804 143841 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.804 143841 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.804 143841 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.804 143841 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.804 143841 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.804 143841 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.804 143841 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.804 143841 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.805 143841 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.805 143841 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.805 143841 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.805 143841 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.805 143841 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.805 143841 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.805 143841 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.805 143841 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.805 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.806 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.806 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.806 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.806 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.806 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.806 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.806 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.806 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.807 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.807 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.807 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.807 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.807 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.807 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.807 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.807 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.807 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.808 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.808 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.808 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.808 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.808 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.808 143841 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.808 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.808 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.808 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.809 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.809 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.809 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.809 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.809 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.809 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.809 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.809 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.810 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.810 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.810 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.810 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.810 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.810 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.810 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.810 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.811 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.811 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.811 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.811 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.811 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.811 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.811 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.811 143841 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.812 143841 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.812 143841 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.812 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.812 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.812 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.812 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.812 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.812 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.812 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.813 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.813 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.813 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.813 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.813 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.813 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.813 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.813 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.814 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.814 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.814 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.814 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.814 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.814 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.814 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.814 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.815 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.815 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.815 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.815 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.815 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.815 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.815 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.815 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.815 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.816 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.816 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.816 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.816 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.816 143841 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.816 143841 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.827 143841 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.828 143841 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.828 143841 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.828 143841 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.828 143841 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.844 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name c06836a7-1d29-4815-800d-4e6d21a36ae0 (UUID: c06836a7-1d29-4815-800d-4e6d21a36ae0) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.886 143841 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.886 143841 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.886 143841 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.886 143841 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.892 143841 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.897 143841 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.903 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'c06836a7-1d29-4815-800d-4e6d21a36ae0'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], external_ids={}, name=c06836a7-1d29-4815-800d-4e6d21a36ae0, nb_cfg_timestamp=1769843800119, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.904 143841 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f54849dbf70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.905 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.905 143841 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.906 143841 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.906 143841 INFO oslo_service.service [-] Starting 1 workers
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.909 143841 DEBUG oslo_service.service [-] Started child 143948 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.912 143948 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-361439'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.914 143841 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpfre96ghf/privsep.sock']
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.935 143948 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.936 143948 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.936 143948 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.938 143948 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.945 143948 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 31 07:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:06.951 143948 INFO eventlet.wsgi.server [-] (143948) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 31 07:18:07 compute-2 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 31 07:18:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:07.564 143841 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 31 07:18:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:07.565 143841 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpfre96ghf/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 31 07:18:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:07.427 143954 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 31 07:18:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:07.432 143954 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 31 07:18:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:07.436 143954 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 31 07:18:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:07.437 143954 INFO oslo.privsep.daemon [-] privsep daemon running as pid 143954
Jan 31 07:18:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:07.568 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[815989fa-ab10-4676-9bde-26f4d147d502]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:18:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:07.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.056 143954 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.056 143954 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.056 143954 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:18:08 compute-2 ceph-mon[77282]: pgmap v505: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:08.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.504 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[5a5fc367-af34-4c99-ba35-78ca165c1454]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.506 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, column=external_ids, values=({'neutron:ovn-metadata-id': '185ddf36-f9d0-53bb-937f-c40a6f3d6bbf'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:18:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.899 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.941 143841 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.941 143841 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.941 143841 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.942 143841 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.942 143841 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.942 143841 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.942 143841 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.942 143841 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.942 143841 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.943 143841 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.943 143841 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.943 143841 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.943 143841 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.943 143841 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.943 143841 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.944 143841 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.944 143841 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.944 143841 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.944 143841 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.944 143841 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.945 143841 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.945 143841 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.945 143841 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.945 143841 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.945 143841 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.946 143841 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.946 143841 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.946 143841 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.946 143841 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.946 143841 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.947 143841 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.947 143841 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.947 143841 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.947 143841 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.947 143841 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.948 143841 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.948 143841 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.948 143841 DEBUG oslo_service.service [-] host                           = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.948 143841 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.948 143841 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.949 143841 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.949 143841 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.949 143841 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.949 143841 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.949 143841 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.949 143841 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.949 143841 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.950 143841 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.950 143841 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.950 143841 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.950 143841 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.950 143841 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.950 143841 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.950 143841 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.951 143841 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.951 143841 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.951 143841 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.951 143841 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.951 143841 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.951 143841 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.951 143841 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.952 143841 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.952 143841 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.952 143841 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.952 143841 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.952 143841 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.953 143841 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.953 143841 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.953 143841 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.953 143841 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.953 143841 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.953 143841 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.954 143841 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.954 143841 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.954 143841 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.954 143841 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.954 143841 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.954 143841 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.954 143841 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.955 143841 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.955 143841 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.955 143841 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.955 143841 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.955 143841 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.955 143841 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.956 143841 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.956 143841 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.956 143841 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.956 143841 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.956 143841 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.956 143841 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.956 143841 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.957 143841 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.957 143841 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.957 143841 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.957 143841 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.957 143841 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.957 143841 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.957 143841 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.958 143841 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.958 143841 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.958 143841 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.958 143841 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.958 143841 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.958 143841 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.958 143841 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.959 143841 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.959 143841 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.959 143841 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.959 143841 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.959 143841 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.959 143841 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.960 143841 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.960 143841 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.960 143841 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.960 143841 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.960 143841 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.960 143841 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.960 143841 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.961 143841 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.961 143841 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.961 143841 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.961 143841 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.961 143841 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.961 143841 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.962 143841 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.962 143841 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.962 143841 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.962 143841 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.962 143841 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.962 143841 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.962 143841 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.963 143841 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.963 143841 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.963 143841 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.963 143841 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.963 143841 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.963 143841 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.964 143841 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.964 143841 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.964 143841 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.964 143841 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.964 143841 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.964 143841 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.964 143841 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.965 143841 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.965 143841 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.965 143841 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.965 143841 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.965 143841 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.965 143841 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.965 143841 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.966 143841 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.966 143841 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.966 143841 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.966 143841 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.966 143841 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.966 143841 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.966 143841 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.966 143841 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.967 143841 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.967 143841 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.967 143841 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.967 143841 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.967 143841 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.967 143841 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.968 143841 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.968 143841 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.968 143841 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.968 143841 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.968 143841 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.968 143841 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.969 143841 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.969 143841 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.969 143841 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.969 143841 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.969 143841 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.970 143841 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.970 143841 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.970 143841 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.970 143841 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.970 143841 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.970 143841 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.970 143841 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.971 143841 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.971 143841 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.971 143841 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.971 143841 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.971 143841 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.971 143841 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.972 143841 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.972 143841 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.972 143841 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.972 143841 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.972 143841 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.973 143841 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.973 143841 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.973 143841 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.973 143841 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.973 143841 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.974 143841 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.974 143841 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.974 143841 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.974 143841 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.975 143841 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.975 143841 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.975 143841 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.975 143841 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.975 143841 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.975 143841 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.976 143841 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.976 143841 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.976 143841 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.976 143841 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.976 143841 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.977 143841 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.977 143841 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.977 143841 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.977 143841 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.977 143841 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.978 143841 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.978 143841 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.978 143841 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.978 143841 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.978 143841 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.979 143841 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.979 143841 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.979 143841 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.979 143841 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.979 143841 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.980 143841 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.980 143841 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.980 143841 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.980 143841 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.980 143841 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.981 143841 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.981 143841 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.981 143841 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.981 143841 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.981 143841 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.981 143841 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.982 143841 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.982 143841 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.982 143841 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.982 143841 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.982 143841 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.982 143841 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.983 143841 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.983 143841 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.983 143841 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.983 143841 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.983 143841 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.983 143841 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.984 143841 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.984 143841 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.984 143841 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.984 143841 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.984 143841 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.985 143841 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.985 143841 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.985 143841 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.985 143841 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.985 143841 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.986 143841 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.986 143841 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.986 143841 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.986 143841 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.986 143841 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.987 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.987 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.987 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.987 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.987 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.988 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.988 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.988 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.988 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.988 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.989 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.989 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.989 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.989 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.989 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.989 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.990 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.990 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.990 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.990 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.990 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.991 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.991 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.991 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.991 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.991 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.991 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.992 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.992 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.992 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.992 143841 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.992 143841 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.992 143841 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.993 143841 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.993 143841 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:18:08.993 143841 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 07:18:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:09.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:09 compute-2 ceph-mon[77282]: pgmap v506: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:09 compute-2 ceph-mon[77282]: pgmap v507: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:18:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:10.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:18:11 compute-2 python3.9[144085]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 07:18:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:11.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:12 compute-2 sudo[144236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hceahoohairemvevexstydgurcroxaha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843891.799585-1429-23841771662884/AnsiballZ_stat.py'
Jan 31 07:18:12 compute-2 sudo[144236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:12 compute-2 python3.9[144238]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:18:12 compute-2 sudo[144236]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:12.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:12 compute-2 sudo[144361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adtbnturykerrmdhplpaxnckqibdvwyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843891.799585-1429-23841771662884/AnsiballZ_copy.py'
Jan 31 07:18:12 compute-2 sudo[144361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:12 compute-2 python3.9[144363]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843891.799585-1429-23841771662884/.source.yaml _original_basename=.z5wnb0ph follow=False checksum=d444960821d3e2834cd73828669d050a1a289a05 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:12 compute-2 sudo[144361]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:13 compute-2 ceph-mon[77282]: pgmap v508: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:13.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:18:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:14.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:14 compute-2 sudo[144390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:18:14 compute-2 sudo[144390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:14 compute-2 sudo[144390]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:15 compute-2 sudo[144415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:18:15 compute-2 sudo[144415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:15 compute-2 sudo[144415]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:15 compute-2 sudo[144441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:18:15 compute-2 sudo[144441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:15 compute-2 sudo[144441]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:15 compute-2 sudo[144466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:18:15 compute-2 sudo[144466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:15 compute-2 ceph-mon[77282]: pgmap v509: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:15 compute-2 ceph-mon[77282]: pgmap v510: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:15 compute-2 sudo[144466]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:15.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:16.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:16 compute-2 sshd-session[134920]: Connection closed by 192.168.122.30 port 37796
Jan 31 07:18:16 compute-2 sshd-session[134917]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:18:16 compute-2 systemd[1]: session-47.scope: Deactivated successfully.
Jan 31 07:18:16 compute-2 systemd[1]: session-47.scope: Consumed 48.608s CPU time.
Jan 31 07:18:16 compute-2 systemd-logind[801]: Session 47 logged out. Waiting for processes to exit.
Jan 31 07:18:16 compute-2 systemd-logind[801]: Removed session 47.
Jan 31 07:18:17 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 31 07:18:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:17.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:17 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 31 07:18:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:18.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:18 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 07:18:18 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:18:18 compute-2 ceph-mon[77282]: pgmap v511: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:18 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:18:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:18:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:19.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:20.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:20 compute-2 sudo[144525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:18:20 compute-2 sudo[144525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:20 compute-2 sudo[144525]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:20 compute-2 sudo[144550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:18:20 compute-2 sudo[144550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:20 compute-2 sudo[144550]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:21 compute-2 ceph-mon[77282]: pgmap v512: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:18:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:18:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:18:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:18:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:21.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:22 compute-2 sshd-session[144576]: Accepted publickey for zuul from 192.168.122.30 port 49768 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:18:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:22.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:22 compute-2 systemd-logind[801]: New session 48 of user zuul.
Jan 31 07:18:22 compute-2 systemd[1]: Started Session 48 of User zuul.
Jan 31 07:18:22 compute-2 sshd-session[144576]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:18:22 compute-2 ceph-mon[77282]: pgmap v513: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Jan 31 07:18:22 compute-2 ceph-mon[77282]: pgmap v514: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 4 op/s
Jan 31 07:18:23 compute-2 python3.9[144730]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:18:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:23.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:18:24 compute-2 sudo[144884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcjaxbtilobxolmjssjhwxnnxlvyrsqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843904.0340106-65-191060102855978/AnsiballZ_command.py'
Jan 31 07:18:24 compute-2 sudo[144884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:24.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:24 compute-2 python3.9[144886]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:18:24 compute-2 sudo[144884]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:25 compute-2 ceph-mon[77282]: pgmap v515: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 7.4 KiB/s rd, 0 B/s wr, 12 op/s
Jan 31 07:18:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:25.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:26 compute-2 sudo[145050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boltwhisyltawqnccgzbbkubdigqtjzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843905.6835964-97-187347546120422/AnsiballZ_systemd_service.py'
Jan 31 07:18:26 compute-2 sudo[145050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:26.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:26 compute-2 python3.9[145052]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 07:18:26 compute-2 systemd[1]: Reloading.
Jan 31 07:18:26 compute-2 ceph-mon[77282]: pgmap v516: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 0 B/s wr, 30 op/s
Jan 31 07:18:26 compute-2 systemd-rc-local-generator[145074]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:18:26 compute-2 systemd-sysv-generator[145079]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:18:26 compute-2 sudo[145050]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:18:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:27.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:18:27 compute-2 python3.9[145238]: ansible-ansible.builtin.service_facts Invoked
Jan 31 07:18:27 compute-2 network[145255]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 07:18:27 compute-2 network[145256]: 'network-scripts' will be removed from distribution in near future.
Jan 31 07:18:27 compute-2 network[145257]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 07:18:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:28.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:28 compute-2 sudo[145289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:18:28 compute-2 sudo[145289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:28 compute-2 sudo[145289]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:28 compute-2 ceph-mon[77282]: pgmap v517: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 0 B/s wr, 44 op/s
Jan 31 07:18:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:18:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:18:28 compute-2 sudo[145316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:18:28 compute-2 sudo[145316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:28 compute-2 sudo[145316]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:18:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:29.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:30 compute-2 ceph-mon[77282]: pgmap v518: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 64 KiB/s rd, 0 B/s wr, 106 op/s
Jan 31 07:18:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:30.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:30 compute-2 sudo[145568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgmtlupnzjrthzwdsjoxhcidkzhdplfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843910.3300698-155-248467104757934/AnsiballZ_systemd_service.py'
Jan 31 07:18:30 compute-2 sudo[145568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:31 compute-2 python3.9[145570]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:18:31 compute-2 sudo[145568]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:31 compute-2 sudo[145722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjfxzsqntpetafmuewwlnphpkdeyeuwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843911.288064-155-86932295068719/AnsiballZ_systemd_service.py'
Jan 31 07:18:31 compute-2 sudo[145722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:31.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:31 compute-2 python3.9[145724]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:18:31 compute-2 sudo[145722]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:32 compute-2 sudo[145875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzsxclzhrocuvmldztxyhtivoufrzlev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843911.9745393-155-168923076662937/AnsiballZ_systemd_service.py'
Jan 31 07:18:32 compute-2 sudo[145875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:32 compute-2 python3.9[145877]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:18:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:32.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:32 compute-2 sudo[145875]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:32 compute-2 ceph-mon[77282]: pgmap v519: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 82 KiB/s rd, 0 B/s wr, 137 op/s
Jan 31 07:18:32 compute-2 sudo[146028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkycvphwhtwtjjhrcekvycpfwbbmckpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843912.6142619-155-147144749931785/AnsiballZ_systemd_service.py'
Jan 31 07:18:32 compute-2 sudo[146028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:33 compute-2 python3.9[146030]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:18:33 compute-2 sudo[146028]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:33 compute-2 podman[146033]: 2026-01-31 07:18:33.280799243 +0000 UTC m=+0.106267023 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 31 07:18:33 compute-2 sudo[146208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztdfjxmugcvclruipewfbhatncqgnixl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843913.4315705-155-255939763285673/AnsiballZ_systemd_service.py'
Jan 31 07:18:33 compute-2 sudo[146208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:33.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:18:34 compute-2 python3.9[146210]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:18:34 compute-2 sudo[146208]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:34 compute-2 ceph-mon[77282]: pgmap v520: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 98 KiB/s rd, 0 B/s wr, 163 op/s
Jan 31 07:18:34 compute-2 sudo[146361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chfyafbntbtvuatzbjndprbpurxlbvyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843914.1649668-155-204363926734162/AnsiballZ_systemd_service.py'
Jan 31 07:18:34 compute-2 sudo[146361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:34.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:34 compute-2 python3.9[146363]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:18:34 compute-2 sudo[146361]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:35 compute-2 sudo[146515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inswysfaobedkryktjjonztizqgtnlvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843914.8908095-155-77327158082631/AnsiballZ_systemd_service.py'
Jan 31 07:18:35 compute-2 sudo[146515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:35 compute-2 python3.9[146517]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:18:35 compute-2 sudo[146515]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:35 compute-2 podman[146519]: 2026-01-31 07:18:35.583593676 +0000 UTC m=+0.071052377 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 31 07:18:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:35.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:36.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:36 compute-2 ceph-mon[77282]: pgmap v521: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 94 KiB/s rd, 0 B/s wr, 157 op/s
Jan 31 07:18:37 compute-2 sudo[146689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbpqpcjmjldnthaxgcbvcwlthbogkboe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843917.0195003-311-171335065159746/AnsiballZ_file.py'
Jan 31 07:18:37 compute-2 sudo[146689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:37 compute-2 python3.9[146691]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:37 compute-2 sudo[146689]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:37.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:37 compute-2 sudo[146841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwyyugoqmqywgegzjapasdnssiuyfcva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843917.6890302-311-215570555973668/AnsiballZ_file.py'
Jan 31 07:18:37 compute-2 sudo[146841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:38 compute-2 python3.9[146843]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:38 compute-2 sudo[146841]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:38.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:38 compute-2 sudo[146993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-typbyhllmieeopsyygcbflxojaokvlrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843918.2646909-311-200418275780600/AnsiballZ_file.py'
Jan 31 07:18:38 compute-2 sudo[146993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:38 compute-2 python3.9[146995]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:38 compute-2 sudo[146993]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:18:39 compute-2 sudo[147146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpupjwlsdqytzvozrlitjfafnowqxgkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843918.8136227-311-92764378448954/AnsiballZ_file.py'
Jan 31 07:18:39 compute-2 sudo[147146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:39 compute-2 ceph-mon[77282]: pgmap v522: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 84 KiB/s rd, 0 B/s wr, 140 op/s
Jan 31 07:18:39 compute-2 python3.9[147148]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:39 compute-2 sudo[147146]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:39 compute-2 sudo[147298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkerapcxbcwmxbfuppzayjbqbdmyqkiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843919.4013832-311-16191882231697/AnsiballZ_file.py'
Jan 31 07:18:39 compute-2 sudo[147298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:39.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:39 compute-2 python3.9[147300]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:39 compute-2 sudo[147298]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:40 compute-2 ceph-mon[77282]: pgmap v523: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 76 KiB/s rd, 0 B/s wr, 126 op/s
Jan 31 07:18:40 compute-2 sudo[147450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcznwpgfkndveppktwoapwvkdgdmcuxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843920.018121-311-163076729725052/AnsiballZ_file.py'
Jan 31 07:18:40 compute-2 sudo[147450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:40.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:40 compute-2 python3.9[147452]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:40 compute-2 sudo[147450]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:40 compute-2 sudo[147602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exfhxkvxndxgerlakttnoorlmxinyxag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843920.6032195-311-74718084614364/AnsiballZ_file.py'
Jan 31 07:18:40 compute-2 sudo[147602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:40 compute-2 sudo[147605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:18:40 compute-2 sudo[147605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:40 compute-2 sudo[147605]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:40 compute-2 sudo[147630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:18:40 compute-2 sudo[147630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:40 compute-2 sudo[147630]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:40 compute-2 python3.9[147604]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:41 compute-2 sudo[147602]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:41 compute-2 sudo[147805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csvxynvyfgspuaxwafczhsbjwrxbyptq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843921.4661243-460-131051996895102/AnsiballZ_file.py'
Jan 31 07:18:41 compute-2 sudo[147805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:18:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:41.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:18:41 compute-2 python3.9[147807]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:41 compute-2 sudo[147805]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:42 compute-2 sudo[147957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcypxixxabsxaxrxhohshzgdlcjzpmdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843922.0168452-460-120930440368300/AnsiballZ_file.py'
Jan 31 07:18:42 compute-2 sudo[147957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:42 compute-2 python3.9[147959]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:42 compute-2 sudo[147957]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:42.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:42 compute-2 ceph-mon[77282]: pgmap v524: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 39 KiB/s rd, 0 B/s wr, 64 op/s
Jan 31 07:18:42 compute-2 sudo[148109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwzdkhhebgkpqtmpxihllnwpchyvqdza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843922.5053363-460-888668755743/AnsiballZ_file.py'
Jan 31 07:18:42 compute-2 sudo[148109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:42 compute-2 python3.9[148111]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:43 compute-2 sudo[148109]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:43 compute-2 sudo[148262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbmgdpqenhunwyxalawljpeiwuvynrjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843923.1189568-460-144648437444237/AnsiballZ_file.py'
Jan 31 07:18:43 compute-2 sudo[148262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:43 compute-2 python3.9[148264]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:43 compute-2 sudo[148262]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:43.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:18:43 compute-2 sudo[148414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbassgpxytftlbgsrdkrigwvdavbgygo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843923.6318252-460-3799677465233/AnsiballZ_file.py'
Jan 31 07:18:43 compute-2 sudo[148414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:44 compute-2 python3.9[148416]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:44 compute-2 sudo[148414]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:44.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:44 compute-2 sudo[148566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqhmwutaosquvqsafyznppcewwafgmcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843924.3591611-460-240999067104824/AnsiballZ_file.py'
Jan 31 07:18:44 compute-2 sudo[148566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:44 compute-2 python3.9[148568]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:44 compute-2 sudo[148566]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:45 compute-2 sudo[148719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnfxwtqqezlcpxylyelkyeidsxbecmck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843924.8818195-460-100396636121523/AnsiballZ_file.py'
Jan 31 07:18:45 compute-2 sudo[148719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:45 compute-2 ceph-mon[77282]: pgmap v525: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Jan 31 07:18:45 compute-2 python3.9[148721]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:45 compute-2 sudo[148719]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:45.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:46 compute-2 sudo[148871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjhufqlmzmkrwftbdstsurylptjocnlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843925.906551-615-5418436352331/AnsiballZ_command.py'
Jan 31 07:18:46 compute-2 sudo[148871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:46 compute-2 python3.9[148873]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:18:46 compute-2 sudo[148871]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:46.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:46 compute-2 ceph-mon[77282]: pgmap v526: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 3 op/s
Jan 31 07:18:47 compute-2 python3.9[149025]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 07:18:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:47.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:47 compute-2 sudo[149176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewwosxiomcgmjvwfdbnyelklyubotrzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843927.5177186-667-126521368436189/AnsiballZ_systemd_service.py'
Jan 31 07:18:47 compute-2 sudo[149176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:48 compute-2 python3.9[149178]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 07:18:48 compute-2 systemd[1]: Reloading.
Jan 31 07:18:48 compute-2 systemd-rc-local-generator[149199]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:18:48 compute-2 systemd-sysv-generator[149208]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:18:48 compute-2 sudo[149176]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:48.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:18:48 compute-2 sudo[149363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oonilesgkmvqezbrxzvfzgjmyopotfun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843928.5487142-692-151912634940372/AnsiballZ_command.py'
Jan 31 07:18:48 compute-2 sudo[149363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:48 compute-2 ceph-mon[77282]: pgmap v527: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Jan 31 07:18:48 compute-2 python3.9[149365]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:18:49 compute-2 sudo[149363]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:49 compute-2 sudo[149517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udbmjjkiyjidkittguhtywvczvxjeosc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843929.1604636-692-61119364152932/AnsiballZ_command.py'
Jan 31 07:18:49 compute-2 sudo[149517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:49 compute-2 python3.9[149519]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:18:49 compute-2 sudo[149517]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:18:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:49.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:18:50 compute-2 sudo[149670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmpvuestgmkigvjjbmdeqftfbwwzftej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843929.8001842-692-182052354668083/AnsiballZ_command.py'
Jan 31 07:18:50 compute-2 sudo[149670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:50 compute-2 python3.9[149672]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:18:50 compute-2 sudo[149670]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:50.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:50 compute-2 sudo[149823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnllcdvpqfnqwmljrpkvrzkcqbwtslfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843930.3457456-692-18949395383306/AnsiballZ_command.py'
Jan 31 07:18:50 compute-2 sudo[149823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:50 compute-2 python3.9[149825]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:18:50 compute-2 sudo[149823]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:50 compute-2 ceph-mon[77282]: pgmap v528: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:51 compute-2 sudo[149977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubabrztaszibznpeelckradebryvdlsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843930.9512858-692-153268668296360/AnsiballZ_command.py'
Jan 31 07:18:51 compute-2 sudo[149977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:51 compute-2 python3.9[149979]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:18:51 compute-2 sudo[149977]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:51 compute-2 sudo[150130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkcrslamwxxiciibhauekersknriugcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843931.506236-692-190394424211362/AnsiballZ_command.py'
Jan 31 07:18:51 compute-2 sudo[150130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:51.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:51 compute-2 python3.9[150132]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:18:51 compute-2 sudo[150130]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:52 compute-2 sudo[150283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmpurtzypdgkdmjcdfkuwwcbrpzrumpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843932.0246437-692-136189824265440/AnsiballZ_command.py'
Jan 31 07:18:52 compute-2 sudo[150283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:18:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:52.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:18:52 compute-2 python3.9[150285]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:18:52 compute-2 sudo[150283]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:53 compute-2 ceph-mon[77282]: pgmap v529: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:53 compute-2 sudo[150437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbjhdcltxwogyqvmnfbwirdejimelroq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843933.3048675-854-250921561982765/AnsiballZ_getent.py'
Jan 31 07:18:53 compute-2 sudo[150437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:53.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:18:53 compute-2 python3.9[150439]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 31 07:18:53 compute-2 sudo[150437]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:54 compute-2 ceph-mon[77282]: pgmap v530: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:54 compute-2 sudo[150590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpzfozilgucrckiiktyxghmmfbhvnpur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843934.056063-878-5452813937689/AnsiballZ_group.py'
Jan 31 07:18:54 compute-2 sudo[150590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:54.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:54 compute-2 python3.9[150592]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 07:18:54 compute-2 groupadd[150593]: group added to /etc/group: name=libvirt, GID=42473
Jan 31 07:18:54 compute-2 groupadd[150593]: group added to /etc/gshadow: name=libvirt
Jan 31 07:18:54 compute-2 groupadd[150593]: new group: name=libvirt, GID=42473
Jan 31 07:18:54 compute-2 sudo[150590]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:55.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:55 compute-2 sudo[150749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzybcminihgilbalxldfvpphmocgszia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843935.490463-902-48118696082775/AnsiballZ_user.py'
Jan 31 07:18:55 compute-2 sudo[150749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:56 compute-2 python3.9[150751]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 07:18:56 compute-2 useradd[150753]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 31 07:18:56 compute-2 sudo[150749]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:18:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:56.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:18:56 compute-2 ceph-mon[77282]: pgmap v531: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:57 compute-2 sudo[150910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-racdrsagtiomionhcprrklogxbfqnsif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843936.8377757-934-128262440016458/AnsiballZ_setup.py'
Jan 31 07:18:57 compute-2 sudo[150910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:57 compute-2 python3.9[150912]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:18:57 compute-2 sudo[150910]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:57.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:57 compute-2 sudo[150994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbaecncrexomscubvvancbhitecznnjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843936.8377757-934-128262440016458/AnsiballZ_dnf.py'
Jan 31 07:18:57 compute-2 sudo[150994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:58 compute-2 python3.9[150996]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:18:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:58.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:18:58 compute-2 ceph-mon[77282]: pgmap v532: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:18:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:59.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:00.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:00 compute-2 ceph-mon[77282]: pgmap v533: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:01 compute-2 sudo[151008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:19:01 compute-2 sudo[151008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:01 compute-2 sudo[151008]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:01 compute-2 sudo[151034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:19:01 compute-2 sudo[151034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:01 compute-2 sudo[151034]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:01.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:19:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:02.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:19:03 compute-2 ceph-mon[77282]: pgmap v534: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:03.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:19:04 compute-2 podman[151119]: 2026-01-31 07:19:04.194745917 +0000 UTC m=+0.074571131 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true)
Jan 31 07:19:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:04.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:04 compute-2 ceph-mon[77282]: pgmap v535: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:05.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:06 compute-2 podman[151251]: 2026-01-31 07:19:06.164809508 +0000 UTC m=+0.046740338 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127)
Jan 31 07:19:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:06.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:19:06.820 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:19:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:19:06.823 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:19:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:19:06.823 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:19:07 compute-2 ceph-mon[77282]: pgmap v536: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:19:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:07.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:19:08 compute-2 ceph-mon[77282]: pgmap v537: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:19:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:08.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:19:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:19:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:09.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:10.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:10 compute-2 ceph-mon[77282]: pgmap v538: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:19:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:11.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:19:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:12.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:12 compute-2 ceph-mon[77282]: pgmap v539: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:19:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:13.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:14.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:14 compute-2 ceph-mon[77282]: pgmap v540: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:15.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:19:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:16.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:19:16 compute-2 ceph-mon[77282]: pgmap v541: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:17.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:18 compute-2 ceph-mon[77282]: pgmap v542: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:18.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:19:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:19:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:19.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:19:20 compute-2 kernel: SELinux:  Converting 2779 SID table entries...
Jan 31 07:19:20 compute-2 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 07:19:20 compute-2 kernel: SELinux:  policy capability open_perms=1
Jan 31 07:19:20 compute-2 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 07:19:20 compute-2 kernel: SELinux:  policy capability always_check_network=0
Jan 31 07:19:20 compute-2 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 07:19:20 compute-2 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 07:19:20 compute-2 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 07:19:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:20.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:21 compute-2 ceph-mon[77282]: pgmap v543: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:21 compute-2 sudo[151302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:19:21 compute-2 dbus-broker-launch[793]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 31 07:19:21 compute-2 sudo[151302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:21 compute-2 sudo[151302]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:21 compute-2 sudo[151327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:19:21 compute-2 sudo[151327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:21 compute-2 sudo[151327]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:21.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:22 compute-2 ceph-mon[77282]: pgmap v544: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:22.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:19:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:19:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:23.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:19:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:19:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:24.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:19:24 compute-2 ceph-mon[77282]: pgmap v545: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:19:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:25.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:19:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:26.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:26 compute-2 ceph-mon[77282]: pgmap v546: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:27.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:28 compute-2 ceph-mon[77282]: pgmap v547: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:28.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:19:28 compute-2 sudo[151358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:19:28 compute-2 sudo[151358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:28 compute-2 sudo[151358]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:28 compute-2 sudo[151385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:19:28 compute-2 sudo[151385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:28 compute-2 sudo[151385]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:28 compute-2 sudo[151410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:19:28 compute-2 sudo[151410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:28 compute-2 sudo[151410]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:28 compute-2 sudo[151435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:19:28 compute-2 sudo[151435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:29 compute-2 kernel: SELinux:  Converting 2779 SID table entries...
Jan 31 07:19:29 compute-2 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 07:19:29 compute-2 kernel: SELinux:  policy capability open_perms=1
Jan 31 07:19:29 compute-2 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 07:19:29 compute-2 kernel: SELinux:  policy capability always_check_network=0
Jan 31 07:19:29 compute-2 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 07:19:29 compute-2 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 07:19:29 compute-2 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 07:19:29 compute-2 sudo[151435]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:19:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:19:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:19:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:29.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:19:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:19:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:30.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:19:30 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:19:30 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:19:30 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:19:30 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:19:30 compute-2 ceph-mon[77282]: pgmap v548: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:31.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:32.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:32 compute-2 ceph-mon[77282]: pgmap v549: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:19:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:33.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:34 compute-2 ceph-mon[77282]: pgmap v550: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:34.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:35 compute-2 dbus-broker-launch[793]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 31 07:19:35 compute-2 podman[151499]: 2026-01-31 07:19:35.230686223 +0000 UTC m=+0.108019720 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 07:19:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:19:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:35.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:19:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:36.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:36 compute-2 ceph-mon[77282]: pgmap v551: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:37 compute-2 podman[151526]: 2026-01-31 07:19:37.16661886 +0000 UTC m=+0.055118945 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:19:37 compute-2 sudo[151532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:19:37 compute-2 sudo[151532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:37 compute-2 sudo[151532]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:37 compute-2 sudo[151569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:19:37 compute-2 sudo[151569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:37 compute-2 sudo[151569]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:19:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:37.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:19:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:19:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:19:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:19:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:38.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:19:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:19:39 compute-2 ceph-mon[77282]: pgmap v552: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:39.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:40 compute-2 ceph-mon[77282]: pgmap v553: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:40.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:41 compute-2 sudo[152714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:19:41 compute-2 sudo[152714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:41 compute-2 sudo[152714]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:41 compute-2 sudo[152773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:19:41 compute-2 sudo[152773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:41 compute-2 sudo[152773]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:41.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:42.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:42 compute-2 ceph-mon[77282]: pgmap v554: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:19:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:43.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:44.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:44 compute-2 ceph-mon[77282]: pgmap v555: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:45.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:46.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:47 compute-2 ceph-mon[77282]: pgmap v556: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:47.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:48 compute-2 ceph-mon[77282]: pgmap v557: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:48.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:19:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:49.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:50.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:50 compute-2 ceph-mon[77282]: pgmap v558: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:19:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:51.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:19:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:52.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:53 compute-2 ceph-mon[77282]: pgmap v559: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:19:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:19:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:53.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:19:54 compute-2 ceph-mon[77282]: pgmap v560: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:54.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:55.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:56 compute-2 ceph-mon[77282]: pgmap v561: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:56.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:57.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:58.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:58 compute-2 ceph-mon[77282]: pgmap v562: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:19:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:19:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:19:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:59.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:20:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:00.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:00 compute-2 ceph-mon[77282]: pgmap v563: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:00 compute-2 ceph-mon[77282]: overall HEALTH_OK
Jan 31 07:20:01 compute-2 sudo[168525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:20:01 compute-2 sudo[168525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:01 compute-2 sudo[168525]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:01 compute-2 sudo[168550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:20:01 compute-2 sudo[168550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:01 compute-2 sudo[168550]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:20:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:01.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:20:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:02.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:02 compute-2 ceph-mon[77282]: pgmap v564: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:20:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:03.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:20:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:04.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:20:04 compute-2 ceph-mon[77282]: pgmap v565: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:05.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:06 compute-2 podman[168580]: 2026-01-31 07:20:06.278340629 +0000 UTC m=+0.160015160 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 07:20:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:06.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:20:06.822 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:20:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:20:06.826 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:20:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:20:06.826 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:20:06 compute-2 ceph-mon[77282]: pgmap v566: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:07.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:08 compute-2 podman[168607]: 2026-01-31 07:20:08.169818471 +0000 UTC m=+0.048954178 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:20:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:08.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:20:08 compute-2 ceph-mon[77282]: pgmap v567: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:20:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:09.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:20:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:10.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:11 compute-2 ceph-mon[77282]: pgmap v568: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:11 compute-2 kernel: SELinux:  Converting 2780 SID table entries...
Jan 31 07:20:11 compute-2 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 07:20:11 compute-2 kernel: SELinux:  policy capability open_perms=1
Jan 31 07:20:11 compute-2 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 07:20:11 compute-2 kernel: SELinux:  policy capability always_check_network=0
Jan 31 07:20:11 compute-2 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 07:20:11 compute-2 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 07:20:11 compute-2 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 07:20:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:20:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:11.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:20:12 compute-2 ceph-mon[77282]: pgmap v569: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:12 compute-2 groupadd[168641]: group added to /etc/group: name=dnsmasq, GID=993
Jan 31 07:20:12 compute-2 groupadd[168641]: group added to /etc/gshadow: name=dnsmasq
Jan 31 07:20:12 compute-2 groupadd[168641]: new group: name=dnsmasq, GID=993
Jan 31 07:20:12 compute-2 useradd[168648]: new user: name=dnsmasq, UID=992, GID=993, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 31 07:20:12 compute-2 dbus-broker-launch[787]: Noticed file-system modification, trigger reload.
Jan 31 07:20:12 compute-2 dbus-broker-launch[793]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 31 07:20:12 compute-2 dbus-broker-launch[787]: Noticed file-system modification, trigger reload.
Jan 31 07:20:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:12.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:13 compute-2 groupadd[168662]: group added to /etc/group: name=clevis, GID=992
Jan 31 07:20:13 compute-2 groupadd[168662]: group added to /etc/gshadow: name=clevis
Jan 31 07:20:13 compute-2 groupadd[168662]: new group: name=clevis, GID=992
Jan 31 07:20:13 compute-2 useradd[168669]: new user: name=clevis, UID=991, GID=992, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 31 07:20:13 compute-2 usermod[168679]: add 'clevis' to group 'tss'
Jan 31 07:20:13 compute-2 usermod[168679]: add 'clevis' to shadow group 'tss'
Jan 31 07:20:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:20:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:20:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:13.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:20:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:14.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:14 compute-2 ceph-mon[77282]: pgmap v570: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:15 compute-2 polkitd[43481]: Reloading rules
Jan 31 07:20:15 compute-2 polkitd[43481]: Collecting garbage unconditionally...
Jan 31 07:20:15 compute-2 polkitd[43481]: Loading rules from directory /etc/polkit-1/rules.d
Jan 31 07:20:15 compute-2 polkitd[43481]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 31 07:20:15 compute-2 polkitd[43481]: Finished loading, compiling and executing 3 rules
Jan 31 07:20:15 compute-2 polkitd[43481]: Reloading rules
Jan 31 07:20:15 compute-2 polkitd[43481]: Collecting garbage unconditionally...
Jan 31 07:20:15 compute-2 polkitd[43481]: Loading rules from directory /etc/polkit-1/rules.d
Jan 31 07:20:15 compute-2 polkitd[43481]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 31 07:20:15 compute-2 polkitd[43481]: Finished loading, compiling and executing 3 rules
Jan 31 07:20:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:15.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:16 compute-2 groupadd[168870]: group added to /etc/group: name=ceph, GID=167
Jan 31 07:20:16 compute-2 groupadd[168870]: group added to /etc/gshadow: name=ceph
Jan 31 07:20:16 compute-2 groupadd[168870]: new group: name=ceph, GID=167
Jan 31 07:20:16 compute-2 useradd[168876]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Jan 31 07:20:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:16.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:16 compute-2 ceph-mon[77282]: pgmap v571: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:17.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:20:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:18.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:20:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:20:19 compute-2 ceph-mon[77282]: pgmap v572: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:19 compute-2 sshd[1006]: Received signal 15; terminating.
Jan 31 07:20:19 compute-2 systemd[1]: Stopping OpenSSH server daemon...
Jan 31 07:20:19 compute-2 systemd[1]: sshd.service: Deactivated successfully.
Jan 31 07:20:19 compute-2 systemd[1]: Stopped OpenSSH server daemon.
Jan 31 07:20:19 compute-2 systemd[1]: sshd.service: Consumed 1.895s CPU time, read 32.0K from disk, written 20.0K to disk.
Jan 31 07:20:19 compute-2 systemd[1]: Stopped target sshd-keygen.target.
Jan 31 07:20:19 compute-2 systemd[1]: Stopping sshd-keygen.target...
Jan 31 07:20:19 compute-2 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 07:20:19 compute-2 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 07:20:19 compute-2 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 07:20:19 compute-2 systemd[1]: Reached target sshd-keygen.target.
Jan 31 07:20:19 compute-2 systemd[1]: Starting OpenSSH server daemon...
Jan 31 07:20:19 compute-2 sshd[169503]: Server listening on 0.0.0.0 port 22.
Jan 31 07:20:19 compute-2 sshd[169503]: Server listening on :: port 22.
Jan 31 07:20:19 compute-2 systemd[1]: Started OpenSSH server daemon.
Jan 31 07:20:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:19.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:20.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:21 compute-2 ceph-mon[77282]: pgmap v573: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:21 compute-2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 07:20:21 compute-2 systemd[1]: Starting man-db-cache-update.service...
Jan 31 07:20:21 compute-2 systemd[1]: Reloading.
Jan 31 07:20:21 compute-2 systemd-rc-local-generator[169756]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:20:21 compute-2 systemd-sysv-generator[169760]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:20:21 compute-2 sudo[169798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:20:21 compute-2 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 07:20:21 compute-2 sudo[169798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:21 compute-2 sudo[169798]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:21 compute-2 sudo[170008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:20:21 compute-2 sudo[170008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:21 compute-2 sudo[170008]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:21.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:22 compute-2 ceph-mon[77282]: pgmap v574: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:22.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:23 compute-2 sudo[150994]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:20:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:23.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:24.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:24 compute-2 ceph-mon[77282]: pgmap v575: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:25.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:26 compute-2 ceph-mon[77282]: pgmap v576: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:20:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:26.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:20:27 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 07:20:27 compute-2 systemd[1]: Finished man-db-cache-update.service.
Jan 31 07:20:27 compute-2 systemd[1]: man-db-cache-update.service: Consumed 6.746s CPU time.
Jan 31 07:20:27 compute-2 systemd[1]: run-ra50ef205680846d989d92519d0ccb023.service: Deactivated successfully.
Jan 31 07:20:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:27.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:20:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:28.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:20:28 compute-2 ceph-mon[77282]: pgmap v577: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:20:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:29.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:30.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:30 compute-2 ceph-mon[77282]: pgmap v578: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:20:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:31.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:20:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:20:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:32.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:20:32 compute-2 ceph-mon[77282]: pgmap v579: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:20:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:33.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:34.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:34 compute-2 ceph-mon[77282]: pgmap v580: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:35.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:20:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:36.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:20:37 compute-2 podman[178228]: 2026-01-31 07:20:37.232986535 +0000 UTC m=+0.122565911 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 07:20:37 compute-2 sudo[178254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:20:37 compute-2 sudo[178254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:37 compute-2 sudo[178254]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:37 compute-2 sudo[178279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:20:37 compute-2 sudo[178279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:37 compute-2 sudo[178279]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:37 compute-2 sudo[178304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:20:37 compute-2 sudo[178304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:37 compute-2 sudo[178304]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:37 compute-2 sudo[178329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:20:37 compute-2 sudo[178329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:37 compute-2 sudo[178329]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:37.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:20:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:38.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:20:38 compute-2 ceph-mon[77282]: pgmap v581: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:20:39 compute-2 podman[178385]: 2026-01-31 07:20:39.204662817 +0000 UTC m=+0.085744181 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 07:20:39 compute-2 ceph-mon[77282]: pgmap v582: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:20:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:20:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:39.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:40.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:40 compute-2 ceph-mon[77282]: pgmap v583: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:40 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:20:40 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:20:40 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:20:40 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:20:40 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:20:40 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:20:41 compute-2 sudo[178405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:20:41 compute-2 sudo[178405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:41 compute-2 sudo[178405]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:41 compute-2 sudo[178430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:20:41 compute-2 sudo[178430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:41 compute-2 sudo[178430]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:20:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:41.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:20:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:42.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:42 compute-2 ceph-mon[77282]: pgmap v584: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:20:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:20:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:43.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:20:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:44.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:45 compute-2 ceph-mon[77282]: pgmap v585: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:45.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:20:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:46.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:20:46 compute-2 ceph-mon[77282]: pgmap v586: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:47 compute-2 sudo[178533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:20:47 compute-2 sudo[178533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:47 compute-2 sudo[178533]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:47 compute-2 sudo[178631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qacgoldrysqkqodjsdpntkigdgftjcnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844046.972818-971-135385359795403/AnsiballZ_systemd.py'
Jan 31 07:20:47 compute-2 sudo[178585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:20:47 compute-2 sudo[178631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:20:47 compute-2 sudo[178585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:47 compute-2 sudo[178585]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:47 compute-2 python3.9[178634]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 07:20:47 compute-2 systemd[1]: Reloading.
Jan 31 07:20:47 compute-2 systemd-sysv-generator[178670]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:20:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:47 compute-2 systemd-rc-local-generator[178665]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:20:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:47.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:48 compute-2 sudo[178631]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:20:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:20:48 compute-2 ceph-mon[77282]: pgmap v587: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:48 compute-2 sudo[178824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnwmnyudhagcvkpebnjkcofsfnnlieiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844048.2757232-971-115012544202573/AnsiballZ_systemd.py'
Jan 31 07:20:48 compute-2 sudo[178824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:20:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:48.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:20:48 compute-2 python3.9[178826]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 07:20:48 compute-2 systemd[1]: Reloading.
Jan 31 07:20:48 compute-2 systemd-sysv-generator[178856]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:20:48 compute-2 systemd-rc-local-generator[178852]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:20:49 compute-2 sudo[178824]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:49 compute-2 sudo[179015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qymikaagkfgndspwvfxwqlwtyvyyfsyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844049.3466332-971-117059639293799/AnsiballZ_systemd.py'
Jan 31 07:20:49 compute-2 sudo[179015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:20:49 compute-2 python3.9[179017]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 07:20:49 compute-2 systemd[1]: Reloading.
Jan 31 07:20:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:20:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:49.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:20:49 compute-2 systemd-rc-local-generator[179047]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:20:50 compute-2 systemd-sysv-generator[179050]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:20:50 compute-2 sudo[179015]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:50 compute-2 sudo[179205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byfrbrqlvpixqikfmyulssrogjgjqdqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844050.3261132-971-213787075572424/AnsiballZ_systemd.py'
Jan 31 07:20:50 compute-2 sudo[179205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:20:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:50.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:50 compute-2 ceph-mon[77282]: pgmap v588: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:50 compute-2 python3.9[179207]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 07:20:50 compute-2 systemd[1]: Reloading.
Jan 31 07:20:51 compute-2 systemd-sysv-generator[179240]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:20:51 compute-2 systemd-rc-local-generator[179236]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:20:51 compute-2 sudo[179205]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:51 compute-2 sudo[179397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uldxugqucukesmfovvwkqrcmbuxxgcuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844051.4687185-1058-179434469639745/AnsiballZ_systemd.py'
Jan 31 07:20:51 compute-2 sudo[179397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:20:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:20:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:51.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:20:52 compute-2 python3.9[179399]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:20:52 compute-2 systemd[1]: Reloading.
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:20:52.083527) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844052083641, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1922, "num_deletes": 251, "total_data_size": 4895465, "memory_usage": 4964984, "flush_reason": "Manual Compaction"}
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844052146242, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 3212469, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12476, "largest_seqno": 14393, "table_properties": {"data_size": 3204452, "index_size": 4896, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 14766, "raw_average_key_size": 18, "raw_value_size": 3188732, "raw_average_value_size": 3995, "num_data_blocks": 220, "num_entries": 798, "num_filter_entries": 798, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843845, "oldest_key_time": 1769843845, "file_creation_time": 1769844052, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 62818 microseconds, and 6125 cpu microseconds.
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:20:52.146347) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 3212469 bytes OK
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:20:52.146370) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:20:52.162755) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:20:52.162798) EVENT_LOG_v1 {"time_micros": 1769844052162788, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:20:52.162823) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 4886932, prev total WAL file size 4886932, number of live WAL files 2.
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:20:52.163800) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(3137KB)], [24(8226KB)]
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844052163878, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 11636327, "oldest_snapshot_seqno": -1}
Jan 31 07:20:52 compute-2 systemd-sysv-generator[179425]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:20:52 compute-2 systemd-rc-local-generator[179422]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 4293 keys, 11135968 bytes, temperature: kUnknown
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844052282157, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 11135968, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11102048, "index_size": 22098, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10757, "raw_key_size": 104842, "raw_average_key_size": 24, "raw_value_size": 11019350, "raw_average_value_size": 2566, "num_data_blocks": 941, "num_entries": 4293, "num_filter_entries": 4293, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769844052, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:20:52.282703) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 11135968 bytes
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:20:52.283979) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 98.0 rd, 93.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.0 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(7.1) write-amplify(3.5) OK, records in: 4810, records dropped: 517 output_compression: NoCompression
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:20:52.283997) EVENT_LOG_v1 {"time_micros": 1769844052283989, "job": 12, "event": "compaction_finished", "compaction_time_micros": 118678, "compaction_time_cpu_micros": 24283, "output_level": 6, "num_output_files": 1, "total_output_size": 11135968, "num_input_records": 4810, "num_output_records": 4293, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844052284356, "job": 12, "event": "table_file_deletion", "file_number": 26}
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844052284970, "job": 12, "event": "table_file_deletion", "file_number": 24}
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:20:52.163698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:20:52.285098) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:20:52.285108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:20:52.285112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:20:52.285115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:20:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:20:52.285119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:20:52 compute-2 sudo[179397]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:52.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:52 compute-2 sudo[179587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hswlpwypeplllhannubpppmekigrxixk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844052.5005634-1058-44941133750868/AnsiballZ_systemd.py'
Jan 31 07:20:52 compute-2 sudo[179587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:20:52 compute-2 ceph-mon[77282]: pgmap v589: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:53 compute-2 python3.9[179589]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:20:53 compute-2 systemd[1]: Reloading.
Jan 31 07:20:53 compute-2 systemd-rc-local-generator[179615]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:20:53 compute-2 systemd-sysv-generator[179620]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:20:53 compute-2 sudo[179587]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:53 compute-2 sudo[179778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcfeevlmuzeltzxaukbxzaybnsrbbvog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844053.5660002-1058-195360559821314/AnsiballZ_systemd.py'
Jan 31 07:20:53 compute-2 sudo[179778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:20:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:20:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:53.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:54 compute-2 python3.9[179780]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:20:54 compute-2 systemd[1]: Reloading.
Jan 31 07:20:54 compute-2 systemd-rc-local-generator[179807]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:20:54 compute-2 systemd-sysv-generator[179810]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:20:54 compute-2 sudo[179778]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:54.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:54 compute-2 ceph-mon[77282]: pgmap v590: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:54 compute-2 sudo[179968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbbksdfarrktqnyaptqoxrazeiyysklm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844054.6398659-1058-76354994265918/AnsiballZ_systemd.py'
Jan 31 07:20:54 compute-2 sudo[179968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:20:55 compute-2 python3.9[179970]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:20:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:55.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:56 compute-2 sudo[179968]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:56 compute-2 sudo[180124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oriryeetfhturrnyqmavhgytbgjvlwqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844056.3458579-1058-14855942404894/AnsiballZ_systemd.py'
Jan 31 07:20:56 compute-2 sudo[180124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:20:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:20:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:56.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:20:56 compute-2 ceph-mon[77282]: pgmap v591: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:56 compute-2 python3.9[180126]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:20:56 compute-2 systemd[1]: Reloading.
Jan 31 07:20:57 compute-2 systemd-rc-local-generator[180156]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:20:57 compute-2 systemd-sysv-generator[180160]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:20:57 compute-2 sudo[180124]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:57 compute-2 sudo[180315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfdxhbsorgdwrfrqflffwiddlvndhaap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844057.3782833-1166-60632810415193/AnsiballZ_systemd.py'
Jan 31 07:20:57 compute-2 sudo[180315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:20:57 compute-2 python3.9[180317]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 07:20:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:20:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:57.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:20:57 compute-2 systemd[1]: Reloading.
Jan 31 07:20:58 compute-2 systemd-rc-local-generator[180348]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:20:58 compute-2 systemd-sysv-generator[180351]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:20:58 compute-2 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 31 07:20:58 compute-2 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 31 07:20:58 compute-2 sudo[180315]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:58.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:20:58 compute-2 sudo[180508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvdnaxlgkpycrwhpvavdkkdheubbcjyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844058.6228418-1190-68341700790259/AnsiballZ_systemd.py'
Jan 31 07:20:58 compute-2 sudo[180508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:20:59 compute-2 python3.9[180510]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:20:59 compute-2 sudo[180508]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:59 compute-2 sudo[180664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pncxtnxbwsweifwystlcjrpktskklhia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844059.2979946-1190-143517200541732/AnsiballZ_systemd.py'
Jan 31 07:20:59 compute-2 sudo[180664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:20:59 compute-2 ceph-mon[77282]: pgmap v592: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:59 compute-2 python3.9[180666]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:20:59 compute-2 sudo[180664]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:20:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:59.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:00 compute-2 sudo[180819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gybqoocnrlidcwjkgseqkesefrnxwcwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844059.963635-1190-250989400179668/AnsiballZ_systemd.py'
Jan 31 07:21:00 compute-2 sudo[180819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:00 compute-2 python3.9[180821]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:21:00 compute-2 sudo[180819]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:00.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:00 compute-2 sudo[180974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jawbchytlondceumzruxneeojertkrdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844060.6629-1190-87352994555159/AnsiballZ_systemd.py'
Jan 31 07:21:00 compute-2 sudo[180974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:01 compute-2 python3.9[180976]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:21:01 compute-2 sudo[180974]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:01 compute-2 ceph-mon[77282]: pgmap v593: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:01 compute-2 sudo[181130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bexqxzdrpfcvvjlsgrhvzgdwzumssuyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844061.4968054-1190-214071433123446/AnsiballZ_systemd.py'
Jan 31 07:21:01 compute-2 sudo[181130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:01 compute-2 sudo[181133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:21:01 compute-2 sudo[181133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:01 compute-2 sudo[181133]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:01 compute-2 sudo[181158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:21:01 compute-2 sudo[181158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:01 compute-2 sudo[181158]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:01.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:02 compute-2 python3.9[181132]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:21:02 compute-2 sudo[181130]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:02 compute-2 sudo[181335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xorkwbhgmwrerdsuufplguqalnclwmao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844062.1887872-1190-204318187858258/AnsiballZ_systemd.py'
Jan 31 07:21:02 compute-2 sudo[181335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:02 compute-2 ceph-mon[77282]: pgmap v594: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:02.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:02 compute-2 python3.9[181337]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:21:02 compute-2 sudo[181335]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:03 compute-2 sudo[181491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhnylntvnyrmuddfbbulimnrzwpnujwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844062.8902044-1190-209635052124005/AnsiballZ_systemd.py'
Jan 31 07:21:03 compute-2 sudo[181491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:03 compute-2 python3.9[181493]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:21:03 compute-2 sudo[181491]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:03 compute-2 sudo[181646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpqbpujwileoqlverxitvvqfjizhjvgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844063.5623524-1190-274928673182949/AnsiballZ_systemd.py'
Jan 31 07:21:03 compute-2 sudo[181646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:21:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:03.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:04 compute-2 python3.9[181648]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:21:04 compute-2 sudo[181646]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:04 compute-2 sudo[181801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cabdetmaoblprmxknykrdvzwsmlzlhdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844064.2308471-1190-43823169422754/AnsiballZ_systemd.py'
Jan 31 07:21:04 compute-2 sudo[181801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:04.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:04 compute-2 ceph-mon[77282]: pgmap v595: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:04 compute-2 python3.9[181803]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:21:04 compute-2 sudo[181801]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:05 compute-2 sudo[181957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guhhbkwtunfadhhvrqhskfyotsweprsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844065.061547-1190-258994733660553/AnsiballZ_systemd.py'
Jan 31 07:21:05 compute-2 sudo[181957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:05 compute-2 python3.9[181959]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:21:05 compute-2 sudo[181957]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:05.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:06 compute-2 sudo[182112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azlzqxhahzjuaortwfnkigxsvcbxxtpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844065.760969-1190-221761491247516/AnsiballZ_systemd.py'
Jan 31 07:21:06 compute-2 sudo[182112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:06 compute-2 python3.9[182114]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:21:06 compute-2 sudo[182112]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:06.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:06 compute-2 ceph-mon[77282]: pgmap v596: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:21:06.823 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:21:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:21:06.825 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:21:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:21:06.826 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:21:06 compute-2 sudo[182267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nywiqrytabjqhladyfzfrpbxfkggadno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844066.464758-1190-80584291720926/AnsiballZ_systemd.py'
Jan 31 07:21:06 compute-2 sudo[182267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:07 compute-2 python3.9[182269]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:21:07 compute-2 sudo[182267]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:07 compute-2 podman[182274]: 2026-01-31 07:21:07.383368741 +0000 UTC m=+0.108584518 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 07:21:07 compute-2 sudo[182449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvfxyvykhpoomzaujljrmpznimauwgsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844067.4123642-1190-22886189339152/AnsiballZ_systemd.py'
Jan 31 07:21:07 compute-2 sudo[182449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:07.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:07 compute-2 python3.9[182451]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:21:08 compute-2 sudo[182449]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:08 compute-2 sudo[182604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hizccqognpurcxqumjvniwgcddmlcptj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844068.169635-1190-94292238268871/AnsiballZ_systemd.py'
Jan 31 07:21:08 compute-2 sudo[182604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:08.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:08 compute-2 python3.9[182606]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:21:08 compute-2 sudo[182604]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:21:08 compute-2 ceph-mon[77282]: pgmap v597: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:21:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:09.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:21:10 compute-2 podman[182635]: 2026-01-31 07:21:10.170742506 +0000 UTC m=+0.053901944 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 31 07:21:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:10.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:10 compute-2 ceph-mon[77282]: pgmap v598: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:11.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:12 compute-2 sudo[182782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsvffzkxgttkqgrmozfkrnaedbnbrtcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844071.793921-1496-257992072013773/AnsiballZ_file.py'
Jan 31 07:21:12 compute-2 sudo[182782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:12 compute-2 python3.9[182784]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:21:12 compute-2 sudo[182782]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:12.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:12 compute-2 sudo[182934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfbeatlgogqmkphnzgzqftvgaoxtqund ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844072.476772-1496-101721025550539/AnsiballZ_file.py'
Jan 31 07:21:12 compute-2 sudo[182934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:12 compute-2 python3.9[182936]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:21:12 compute-2 sudo[182934]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:12 compute-2 ceph-mon[77282]: pgmap v599: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:13 compute-2 sudo[183087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azvbpersjrquglhkxhkbdwtjuotvsgtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844073.0232823-1496-171677329433273/AnsiballZ_file.py'
Jan 31 07:21:13 compute-2 sudo[183087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:13 compute-2 python3.9[183089]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:21:13 compute-2 sudo[183087]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:13 compute-2 sudo[183239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyjcmwimafbuwvuwcjawgqxrmlsqdnbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844073.5864663-1496-223255431698454/AnsiballZ_file.py'
Jan 31 07:21:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:21:13 compute-2 sudo[183239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:13.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:14 compute-2 python3.9[183241]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:21:14 compute-2 sudo[183239]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:14 compute-2 sudo[183391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdeyviicdpakbehnbpdwrdnexutcfgzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844074.1595318-1496-7277933865969/AnsiballZ_file.py'
Jan 31 07:21:14 compute-2 sudo[183391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:14 compute-2 python3.9[183393]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:21:14 compute-2 sudo[183391]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:14.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:14 compute-2 ceph-mon[77282]: pgmap v600: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:15 compute-2 sudo[183543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugrgjbjhjgttqlwsolkneguafcieguvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844074.8257039-1496-259416643809140/AnsiballZ_file.py'
Jan 31 07:21:15 compute-2 sudo[183543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:15 compute-2 python3.9[183545]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:21:15 compute-2 sudo[183543]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:21:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:15.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:21:16 compute-2 python3.9[183696]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:21:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:21:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:16.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:21:16 compute-2 ceph-mon[77282]: pgmap v601: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:17 compute-2 sudo[183847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfavyzkjaowwdjymicrwvfpxafzymskl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844076.6719909-1648-67596942284961/AnsiballZ_stat.py'
Jan 31 07:21:17 compute-2 sudo[183847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:17 compute-2 python3.9[183849]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:17 compute-2 sudo[183847]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:17 compute-2 sudo[183972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iikeaycwhrdzktbdimlhjriikjqemhvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844076.6719909-1648-67596942284961/AnsiballZ_copy.py'
Jan 31 07:21:17 compute-2 sudo[183972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:17 compute-2 python3.9[183974]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844076.6719909-1648-67596942284961/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:17.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:17 compute-2 sudo[183972]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:18 compute-2 sudo[184124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ledrmfnscnfbmwtkgvntrspyeoqwrjar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844078.1127083-1648-105670281713976/AnsiballZ_stat.py'
Jan 31 07:21:18 compute-2 sudo[184124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:18 compute-2 python3.9[184126]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:18 compute-2 sudo[184124]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:18.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:21:18 compute-2 sudo[184249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwcrdfyvdiupgwmbkuhxtvfxzrpwcgqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844078.1127083-1648-105670281713976/AnsiballZ_copy.py'
Jan 31 07:21:18 compute-2 sudo[184249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:19 compute-2 ceph-mon[77282]: pgmap v602: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:19 compute-2 python3.9[184251]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844078.1127083-1648-105670281713976/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:19 compute-2 sudo[184249]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:19 compute-2 sudo[184402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfaskuppxhxqlbkhbbrfndvapbkmizsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844079.2430553-1648-78346528014846/AnsiballZ_stat.py'
Jan 31 07:21:19 compute-2 sudo[184402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:19 compute-2 python3.9[184404]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:19 compute-2 sudo[184402]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:19.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:20 compute-2 sudo[184527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cainktqfhclazeqysqcgqvedjcxuhnnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844079.2430553-1648-78346528014846/AnsiballZ_copy.py'
Jan 31 07:21:20 compute-2 sudo[184527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:20 compute-2 python3.9[184529]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844079.2430553-1648-78346528014846/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:20 compute-2 sudo[184527]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:20.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:20 compute-2 sudo[184679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmoktocfmuwzrspdxkbkkahcighqyeoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844080.379976-1648-22266940811872/AnsiballZ_stat.py'
Jan 31 07:21:20 compute-2 sudo[184679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:20 compute-2 python3.9[184681]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:20 compute-2 sudo[184679]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:21 compute-2 ceph-mon[77282]: pgmap v603: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:21 compute-2 sudo[184805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnhbfahlcangfjsuifgwrfxpdbyhyguk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844080.379976-1648-22266940811872/AnsiballZ_copy.py'
Jan 31 07:21:21 compute-2 sudo[184805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:21 compute-2 python3.9[184807]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844080.379976-1648-22266940811872/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:21 compute-2 sudo[184805]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:21 compute-2 sudo[184957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyajplycvciftrciziefsgxabqnuzjku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844081.488513-1648-134385414536920/AnsiballZ_stat.py'
Jan 31 07:21:21 compute-2 sudo[184957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:21 compute-2 python3.9[184959]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:21 compute-2 sudo[184957]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:21.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:22 compute-2 sudo[184998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:21:22 compute-2 sudo[184998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:22 compute-2 sudo[184998]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:22 compute-2 sudo[185041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:21:22 compute-2 sudo[185041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:22 compute-2 sudo[185041]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:22 compute-2 sudo[185132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ketoppdrjbrmnenmwqdtodlhnjpzarbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844081.488513-1648-134385414536920/AnsiballZ_copy.py'
Jan 31 07:21:22 compute-2 sudo[185132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:22 compute-2 python3.9[185134]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844081.488513-1648-134385414536920/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:22 compute-2 sudo[185132]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:22.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:22 compute-2 sudo[185284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oprnjwvzvmgfldzphaamfljxrupbwjsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844082.4551284-1648-127730414271958/AnsiballZ_stat.py'
Jan 31 07:21:22 compute-2 sudo[185284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:22 compute-2 python3.9[185286]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:22 compute-2 sudo[185284]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:23 compute-2 ceph-mon[77282]: pgmap v604: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:23 compute-2 sudo[185410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fscnoenakmgubrgfxfdthoioznqwgmmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844082.4551284-1648-127730414271958/AnsiballZ_copy.py'
Jan 31 07:21:23 compute-2 sudo[185410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:23 compute-2 python3.9[185412]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844082.4551284-1648-127730414271958/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:23 compute-2 sudo[185410]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:23 compute-2 sudo[185562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgmaywrgppqzdbngjhlelbeozkuwcrxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844083.5021503-1648-98498370398855/AnsiballZ_stat.py'
Jan 31 07:21:23 compute-2 sudo[185562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:21:23 compute-2 python3.9[185564]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:23 compute-2 sudo[185562]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:23.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:24 compute-2 ceph-mon[77282]: pgmap v605: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:24 compute-2 sudo[185685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvmsxtdkjwdcjsztdbpydhzcfsjbvagf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844083.5021503-1648-98498370398855/AnsiballZ_copy.py'
Jan 31 07:21:24 compute-2 sudo[185685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:24 compute-2 python3.9[185687]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844083.5021503-1648-98498370398855/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:24 compute-2 sudo[185685]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:24.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:24 compute-2 sudo[185837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjbuykmxassaptrhjnlzxgbmnsdskfnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844084.5310974-1648-44514567228160/AnsiballZ_stat.py'
Jan 31 07:21:24 compute-2 sudo[185837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:25 compute-2 python3.9[185839]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:25 compute-2 sudo[185837]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:25 compute-2 sudo[185963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svshanemshbqyostesjeirksghgymmzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844084.5310974-1648-44514567228160/AnsiballZ_copy.py'
Jan 31 07:21:25 compute-2 sudo[185963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:25 compute-2 python3.9[185965]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844084.5310974-1648-44514567228160/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:25 compute-2 sudo[185963]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:25.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:26.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:26 compute-2 ceph-mon[77282]: pgmap v606: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:26 compute-2 sudo[186115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olohbtcudnjerpgdexvhqiovvcjbxpdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844086.7057927-1988-71817223880497/AnsiballZ_command.py'
Jan 31 07:21:26 compute-2 sudo[186115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:27 compute-2 python3.9[186117]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 31 07:21:27 compute-2 sudo[186115]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:27 compute-2 sudo[186269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-indebiyqoicmmjhjjhvmiwsrzwekqrua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844087.5429964-2014-163123868602997/AnsiballZ_file.py'
Jan 31 07:21:27 compute-2 sudo[186269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:27.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:27 compute-2 python3.9[186271]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:28 compute-2 sudo[186269]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:28 compute-2 sudo[186421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqirfawemlppurkhifozjbqxnyhmqzsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844088.1527019-2014-242603148722020/AnsiballZ_file.py'
Jan 31 07:21:28 compute-2 sudo[186421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:21:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:28.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:21:28 compute-2 python3.9[186423]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:28 compute-2 sudo[186421]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:28 compute-2 ceph-mon[77282]: pgmap v607: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:21:29 compute-2 sudo[186573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szpkltamavzwstsrfjjdlsfkfzooazqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844088.8109322-2014-81829295207966/AnsiballZ_file.py'
Jan 31 07:21:29 compute-2 sudo[186573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:29 compute-2 python3.9[186575]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:29 compute-2 sudo[186573]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:29 compute-2 sudo[186726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpdratcrxrzehvsmgwhigvxupybppubr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844089.3944552-2014-247233647361331/AnsiballZ_file.py'
Jan 31 07:21:29 compute-2 sudo[186726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:29 compute-2 python3.9[186728]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:29 compute-2 sudo[186726]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:29.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:30 compute-2 sudo[186878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzwxnzvihotjbtbqvirzwaiuxquvobit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844089.952041-2014-9517433658335/AnsiballZ_file.py'
Jan 31 07:21:30 compute-2 sudo[186878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:30 compute-2 python3.9[186880]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:30 compute-2 sudo[186878]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:30.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:30 compute-2 sudo[187030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpnwkctdtceelksztqzzeesuzvzranbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844090.516311-2014-231026058898901/AnsiballZ_file.py'
Jan 31 07:21:30 compute-2 sudo[187030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:30 compute-2 ceph-mon[77282]: pgmap v608: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:30 compute-2 python3.9[187032]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:30 compute-2 sudo[187030]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:31 compute-2 sudo[187183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrhsyhpmjddxqhojlkdojlsbopukpzor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844091.0503943-2014-50244346120960/AnsiballZ_file.py'
Jan 31 07:21:31 compute-2 sudo[187183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:31 compute-2 python3.9[187185]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:31 compute-2 sudo[187183]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:31 compute-2 sudo[187335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kktrvcpvespcfuyqiaosblgxgfctnvfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844091.5679312-2014-87582960828809/AnsiballZ_file.py'
Jan 31 07:21:31 compute-2 sudo[187335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:31 compute-2 python3.9[187337]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:31.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:31 compute-2 sudo[187335]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:32 compute-2 sudo[187487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cumhbzzkjhejmucquejpqrssvvwdmvow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844092.0998456-2014-124069791804142/AnsiballZ_file.py'
Jan 31 07:21:32 compute-2 sudo[187487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:32 compute-2 python3.9[187489]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:32 compute-2 sudo[187487]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:32.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:32 compute-2 sudo[187639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iakdumlpurcsrrwkxipvqproghsqmsmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844092.6509802-2014-219008711219149/AnsiballZ_file.py'
Jan 31 07:21:32 compute-2 sudo[187639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:32 compute-2 ceph-mon[77282]: pgmap v609: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:33 compute-2 python3.9[187641]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:33 compute-2 sudo[187639]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:33 compute-2 sudo[187792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dunybbkeddtwlzzuhgxinvudnaqkpazo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844093.1880205-2014-123704943651384/AnsiballZ_file.py'
Jan 31 07:21:33 compute-2 sudo[187792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:33 compute-2 python3.9[187794]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:33 compute-2 sudo[187792]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:21:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:21:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:33.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:21:34 compute-2 sudo[187944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etzorwuzhnczyglkramzmavpzpzvtbvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844093.7355382-2014-223663654976649/AnsiballZ_file.py'
Jan 31 07:21:34 compute-2 sudo[187944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:34 compute-2 python3.9[187946]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:34 compute-2 sudo[187944]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:34 compute-2 sudo[188096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqyzdujmrpclioffrayxrcruownsmxin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844094.383521-2014-43263867809117/AnsiballZ_file.py'
Jan 31 07:21:34 compute-2 sudo[188096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:34.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:34 compute-2 python3.9[188098]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:34 compute-2 sudo[188096]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:34 compute-2 ceph-mon[77282]: pgmap v610: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:35 compute-2 sudo[188249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzkpjcworbkhibqrmmgyubxbmrgpusfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844094.9804099-2014-245093577373923/AnsiballZ_file.py'
Jan 31 07:21:35 compute-2 sudo[188249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:35 compute-2 python3.9[188251]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:35 compute-2 sudo[188249]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:21:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:35.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:21:36 compute-2 ceph-mon[77282]: pgmap v611: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:36.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:37.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:38 compute-2 podman[188277]: 2026-01-31 07:21:38.210662653 +0000 UTC m=+0.093062535 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 31 07:21:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:38.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:38 compute-2 ceph-mon[77282]: pgmap v612: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:38 compute-2 sudo[188429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glqjpybeqpumzczvzgaztzjwgqfpdfpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844098.196823-2311-6704276154341/AnsiballZ_stat.py'
Jan 31 07:21:38 compute-2 sudo[188429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:21:38 compute-2 python3.9[188431]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:38 compute-2 sudo[188429]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:39 compute-2 sudo[188553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzifwegqokemsaoifcrfooqbnsnvpnaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844098.196823-2311-6704276154341/AnsiballZ_copy.py'
Jan 31 07:21:39 compute-2 sudo[188553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:39 compute-2 python3.9[188555]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844098.196823-2311-6704276154341/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:39 compute-2 sudo[188553]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:39 compute-2 sudo[188705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkkwjsnxvbscmmccozeyxnhugqvudcjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844099.6627526-2311-250880144163919/AnsiballZ_stat.py'
Jan 31 07:21:39 compute-2 sudo[188705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:21:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:40.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:21:40 compute-2 python3.9[188707]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:40 compute-2 sudo[188705]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:40 compute-2 sudo[188839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmmlfmtoksgignuzpqdqrrcnmvjcvhzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844099.6627526-2311-250880144163919/AnsiballZ_copy.py'
Jan 31 07:21:40 compute-2 sudo[188839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:40 compute-2 podman[188802]: 2026-01-31 07:21:40.57664214 +0000 UTC m=+0.096948490 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 07:21:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:40.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:40 compute-2 python3.9[188845]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844099.6627526-2311-250880144163919/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:40 compute-2 sudo[188839]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:40 compute-2 ceph-mon[77282]: pgmap v613: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:41 compute-2 sudo[189001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypbibdabbeybcugiplqwrimwphhnvgxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844100.895911-2311-51055766217645/AnsiballZ_stat.py'
Jan 31 07:21:41 compute-2 sudo[189001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:41 compute-2 python3.9[189003]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:41 compute-2 sudo[189001]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:41 compute-2 sudo[189124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqdesuqzijlehnmgdtycczxthtdpijjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844100.895911-2311-51055766217645/AnsiballZ_copy.py'
Jan 31 07:21:41 compute-2 sudo[189124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:41 compute-2 python3.9[189126]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844100.895911-2311-51055766217645/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:41 compute-2 sudo[189124]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:21:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:42.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:21:42 compute-2 sudo[189127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:21:42 compute-2 sudo[189127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:42 compute-2 sudo[189127]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:42 compute-2 sudo[189152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:21:42 compute-2 sudo[189152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:42 compute-2 sudo[189152]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:21:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:42.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:21:42 compute-2 ceph-mon[77282]: pgmap v614: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:42 compute-2 sudo[189326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikvamxuriqqhygnpobnlaihlhvmzdujy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844102.6365519-2311-33105377425130/AnsiballZ_stat.py'
Jan 31 07:21:42 compute-2 sudo[189326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:43 compute-2 python3.9[189328]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:43 compute-2 sudo[189326]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:43 compute-2 sudo[189450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjzxgfzkrxwblpodqxeeavnshclbhnnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844102.6365519-2311-33105377425130/AnsiballZ_copy.py'
Jan 31 07:21:43 compute-2 sudo[189450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:43 compute-2 python3.9[189452]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844102.6365519-2311-33105377425130/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:43 compute-2 sudo[189450]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:21:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:44.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:44 compute-2 sudo[189602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvptwkpixuvuqyxoseseowpwmfvuajps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844103.7251568-2311-166258893489692/AnsiballZ_stat.py'
Jan 31 07:21:44 compute-2 sudo[189602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:44 compute-2 python3.9[189604]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:44 compute-2 sudo[189602]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:44 compute-2 sudo[189725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekvrnrxngfiktlsddopcnlnzkjwgijte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844103.7251568-2311-166258893489692/AnsiballZ_copy.py'
Jan 31 07:21:44 compute-2 sudo[189725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:44.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:44 compute-2 python3.9[189727]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844103.7251568-2311-166258893489692/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:44 compute-2 sudo[189725]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:44 compute-2 ceph-mon[77282]: pgmap v615: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:45 compute-2 sudo[189878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mflhjymulgaqdjyumasmlhktcmuvqcmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844104.9375684-2311-143130246216726/AnsiballZ_stat.py'
Jan 31 07:21:45 compute-2 sudo[189878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:45 compute-2 python3.9[189880]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:45 compute-2 sudo[189878]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:45 compute-2 sudo[190001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugiaomfltmmvbfngzlzprgnvsecniemu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844104.9375684-2311-143130246216726/AnsiballZ_copy.py'
Jan 31 07:21:45 compute-2 sudo[190001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:45 compute-2 python3.9[190003]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844104.9375684-2311-143130246216726/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:45 compute-2 sudo[190001]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:46.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:46 compute-2 sudo[190153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ligkeylroqoadmbdvduiskstxtqfecpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844106.058687-2311-229076920644727/AnsiballZ_stat.py'
Jan 31 07:21:46 compute-2 sudo[190153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:46 compute-2 python3.9[190155]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:46 compute-2 sudo[190153]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:46.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:46 compute-2 sudo[190276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ervqjmiouunkjkfvwobppacmewrbmcaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844106.058687-2311-229076920644727/AnsiballZ_copy.py'
Jan 31 07:21:46 compute-2 sudo[190276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:46 compute-2 ceph-mon[77282]: pgmap v616: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:47 compute-2 python3.9[190278]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844106.058687-2311-229076920644727/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:47 compute-2 sudo[190276]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:47 compute-2 sudo[190429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miladbnprhfzmzuixmpswxwzrhisumfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844107.2001762-2311-188382501791086/AnsiballZ_stat.py'
Jan 31 07:21:47 compute-2 sudo[190429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:47 compute-2 sudo[190432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:21:47 compute-2 sudo[190432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:47 compute-2 sudo[190432]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:47 compute-2 sudo[190457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:21:47 compute-2 sudo[190457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:47 compute-2 sudo[190457]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:47 compute-2 python3.9[190431]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:47 compute-2 sudo[190429]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:47 compute-2 sudo[190482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:21:47 compute-2 sudo[190482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:47 compute-2 sudo[190482]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:47 compute-2 sudo[190530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:21:47 compute-2 sudo[190530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:48 compute-2 sudo[190668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlzypxzhedgdhfseqivffdhtwircsery ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844107.2001762-2311-188382501791086/AnsiballZ_copy.py'
Jan 31 07:21:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:48 compute-2 sudo[190668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:48.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:48 compute-2 python3.9[190670]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844107.2001762-2311-188382501791086/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:48 compute-2 sudo[190668]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:48 compute-2 sudo[190530]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:48.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:48 compute-2 sudo[190837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iekycyfpbxsrgqgsawvtoyfubctxhdfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844108.4283712-2311-68638822271674/AnsiballZ_stat.py'
Jan 31 07:21:48 compute-2 sudo[190837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:21:48 compute-2 python3.9[190839]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:48 compute-2 sudo[190837]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:49 compute-2 ceph-mon[77282]: pgmap v617: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:49 compute-2 sudo[190961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjukbbhfgurfsjrqtbuopzivgjcvoink ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844108.4283712-2311-68638822271674/AnsiballZ_copy.py'
Jan 31 07:21:49 compute-2 sudo[190961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:49 compute-2 python3.9[190963]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844108.4283712-2311-68638822271674/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:49 compute-2 sudo[190961]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:50 compute-2 sudo[191113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhwfvioqryvlltucqwunhmlgcriejvcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844109.7279408-2311-241197703117368/AnsiballZ_stat.py'
Jan 31 07:21:50 compute-2 sudo[191113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:21:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:50.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:21:50 compute-2 python3.9[191115]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:50 compute-2 sudo[191113]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:50 compute-2 sudo[191236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpcymbwziagpnlofozeqzagkgutsgfht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844109.7279408-2311-241197703117368/AnsiballZ_copy.py'
Jan 31 07:21:50 compute-2 sudo[191236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:50.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:50 compute-2 python3.9[191238]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844109.7279408-2311-241197703117368/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:50 compute-2 sudo[191236]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:50 compute-2 ceph-mon[77282]: pgmap v618: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:50 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:21:50 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:21:50 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:21:50 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:21:50 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:21:50 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:21:50 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:21:50 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:21:51 compute-2 sudo[191389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwhgfdkvijytykojqibbzkzggmflffuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844110.8881567-2311-8198151108531/AnsiballZ_stat.py'
Jan 31 07:21:51 compute-2 sudo[191389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:51 compute-2 python3.9[191391]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:51 compute-2 sudo[191389]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:51 compute-2 sudo[191512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csepedenmuoiuahmpsptgyeqsicydyhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844110.8881567-2311-8198151108531/AnsiballZ_copy.py'
Jan 31 07:21:51 compute-2 sudo[191512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:52 compute-2 python3.9[191514]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844110.8881567-2311-8198151108531/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:21:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:52.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:21:52 compute-2 sudo[191512]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:52 compute-2 sudo[191664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzmiguytzxvjwvpqggwnszzbmnbiyymf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844112.162871-2311-167207456402262/AnsiballZ_stat.py'
Jan 31 07:21:52 compute-2 sudo[191664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:52 compute-2 python3.9[191666]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:52 compute-2 sudo[191664]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:52.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:52 compute-2 sudo[191787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meorquxafzkwhgmtgpyumvkskcghkexl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844112.162871-2311-167207456402262/AnsiballZ_copy.py'
Jan 31 07:21:52 compute-2 sudo[191787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:52 compute-2 ceph-mon[77282]: pgmap v619: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:52 compute-2 python3.9[191789]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844112.162871-2311-167207456402262/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:52 compute-2 sudo[191787]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:53 compute-2 sudo[191940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-singaqrhsqqzmpxvbhgkjxhewhwfqryi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844113.113601-2311-168956501059126/AnsiballZ_stat.py'
Jan 31 07:21:53 compute-2 sudo[191940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:53 compute-2 python3.9[191942]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:53 compute-2 sudo[191940]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:21:53 compute-2 sudo[192063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmvwcuwwnfuqselfyedrmdbhxflbkxcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844113.113601-2311-168956501059126/AnsiballZ_copy.py'
Jan 31 07:21:54 compute-2 sudo[192063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:21:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:54.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:21:54 compute-2 python3.9[192065]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844113.113601-2311-168956501059126/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:54 compute-2 sudo[192063]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:54 compute-2 sudo[192215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjzbseekpmbugtnwfnvdolzeqncxkopr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844114.3717995-2311-273636426826200/AnsiballZ_stat.py'
Jan 31 07:21:54 compute-2 sudo[192215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:54.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:54 compute-2 python3.9[192217]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:54 compute-2 sudo[192215]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:54 compute-2 ceph-mon[77282]: pgmap v620: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:55 compute-2 sudo[192339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bujdaelmzybgihqrdnulxmcjcwlyuzfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844114.3717995-2311-273636426826200/AnsiballZ_copy.py'
Jan 31 07:21:55 compute-2 sudo[192339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:55 compute-2 python3.9[192341]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844114.3717995-2311-273636426826200/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:55 compute-2 sudo[192339]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:56.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:56.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:56 compute-2 ceph-mon[77282]: pgmap v621: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:57 compute-2 sudo[192493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:21:57 compute-2 sudo[192493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:57 compute-2 sudo[192493]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:57 compute-2 sudo[192518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:21:57 compute-2 sudo[192518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:57 compute-2 sudo[192518]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:57 compute-2 python3.9[192492]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:21:58 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:21:58 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:21:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:21:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:58.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:21:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:21:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:58.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:58 compute-2 sudo[192695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiprrqxgqytiteevqojsfzkrbsfvqkbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844117.7692792-2930-246212867299806/AnsiballZ_seboolean.py'
Jan 31 07:21:58 compute-2 sudo[192695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:21:58 compute-2 python3.9[192697]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 31 07:21:59 compute-2 ceph-mon[77282]: pgmap v622: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:59 compute-2 sudo[192695]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:00.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:00 compute-2 ceph-mon[77282]: pgmap v623: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:00.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:00 compute-2 sudo[192852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzxfkoozorrentwypheuramhqjxrpcgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844120.4855871-2953-151056722560352/AnsiballZ_copy.py'
Jan 31 07:22:00 compute-2 dbus-broker-launch[793]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 31 07:22:00 compute-2 sudo[192852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:00 compute-2 python3.9[192854]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:00 compute-2 sudo[192852]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:01 compute-2 sudo[193005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aobrpweppxxrwexurgnswogweihxmoob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844121.0918028-2953-67649943319884/AnsiballZ_copy.py'
Jan 31 07:22:01 compute-2 sudo[193005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:01 compute-2 python3.9[193007]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:01 compute-2 sudo[193005]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:01 compute-2 sudo[193157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epnpummiaxepphfsbzciqbqjocmpquoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844121.6665397-2953-71447741097503/AnsiballZ_copy.py'
Jan 31 07:22:01 compute-2 sudo[193157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:02.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:02 compute-2 python3.9[193159]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:02 compute-2 sudo[193157]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:02 compute-2 sudo[193196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:22:02 compute-2 sudo[193196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:22:02 compute-2 sudo[193196]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:02 compute-2 sudo[193245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:22:02 compute-2 sudo[193245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:22:02 compute-2 sudo[193245]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:02 compute-2 sudo[193359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbwpqqlcxmyytgtbmrnekpttgrqatfpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844122.2252328-2953-110264143951039/AnsiballZ_copy.py'
Jan 31 07:22:02 compute-2 sudo[193359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:02 compute-2 python3.9[193361]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:02 compute-2 sudo[193359]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:22:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:02.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:22:02 compute-2 ceph-mon[77282]: pgmap v624: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:03 compute-2 sudo[193512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqwtlwbcaguuabejccbcivfnuwhhyzjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844122.8277123-2953-93442076605195/AnsiballZ_copy.py'
Jan 31 07:22:03 compute-2 sudo[193512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:03 compute-2 python3.9[193514]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:03 compute-2 sudo[193512]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:22:04 compute-2 sudo[193664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aktcpqdczmwmrvitlmfeapczacehvauz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844123.765165-3062-24570066526924/AnsiballZ_copy.py'
Jan 31 07:22:04 compute-2 sudo[193664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:04.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:04 compute-2 python3.9[193666]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:04 compute-2 sudo[193664]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:04 compute-2 auditd[702]: Audit daemon rotating log files
Jan 31 07:22:04 compute-2 sudo[193816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqdxrzvuujorfeqhogcvdyxcnlswwqmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844124.4046211-3062-186094418021475/AnsiballZ_copy.py'
Jan 31 07:22:04 compute-2 sudo[193816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:04.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:04 compute-2 ceph-mon[77282]: pgmap v625: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:04 compute-2 python3.9[193818]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:04 compute-2 sudo[193816]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:05 compute-2 sudo[193969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpgsobiznragdwejgnzarwyxjtktvdxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844124.984394-3062-116836577028046/AnsiballZ_copy.py'
Jan 31 07:22:05 compute-2 sudo[193969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:05 compute-2 python3.9[193971]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:05 compute-2 sudo[193969]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:05 compute-2 sudo[194121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xorywihnixfnjvlssuvjxmnywrybvnle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844125.5794234-3062-276766660137788/AnsiballZ_copy.py'
Jan 31 07:22:05 compute-2 sudo[194121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:22:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:06.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:22:06 compute-2 python3.9[194123]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:06 compute-2 sudo[194121]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:06 compute-2 sudo[194273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqzncaswvfgpoyvuxlwvelxhuqecqjdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844126.1977568-3062-228645085059619/AnsiballZ_copy.py'
Jan 31 07:22:06 compute-2 sudo[194273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:06 compute-2 python3.9[194275]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:06 compute-2 sudo[194273]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:06.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:22:06.824 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:22:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:22:06.826 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:22:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:22:06.826 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:22:06 compute-2 ceph-mon[77282]: pgmap v626: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:07 compute-2 sudo[194426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drvhupuvisvmfhorznqsrnsmturoeakq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844127.2955062-3170-116033111188037/AnsiballZ_systemd.py'
Jan 31 07:22:07 compute-2 sudo[194426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:07 compute-2 python3.9[194428]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:22:07 compute-2 systemd[1]: Reloading.
Jan 31 07:22:07 compute-2 systemd-sysv-generator[194456]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:22:07 compute-2 systemd-rc-local-generator[194453]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:22:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:22:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:08.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:22:08 compute-2 systemd[1]: Starting libvirt logging daemon socket...
Jan 31 07:22:08 compute-2 systemd[1]: Listening on libvirt logging daemon socket.
Jan 31 07:22:08 compute-2 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 31 07:22:08 compute-2 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 31 07:22:08 compute-2 systemd[1]: Starting libvirt logging daemon...
Jan 31 07:22:08 compute-2 systemd[1]: Started libvirt logging daemon.
Jan 31 07:22:08 compute-2 sudo[194426]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:08 compute-2 podman[194470]: 2026-01-31 07:22:08.39040088 +0000 UTC m=+0.100079466 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 07:22:08 compute-2 sudo[194646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-undllkjqrlhhyxbdrogpxckpwmppnlvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844128.44648-3170-84940717504434/AnsiballZ_systemd.py'
Jan 31 07:22:08 compute-2 sudo[194646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:08.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:22:08 compute-2 ceph-mon[77282]: pgmap v627: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:08 compute-2 python3.9[194648]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:22:08 compute-2 systemd[1]: Reloading.
Jan 31 07:22:09 compute-2 systemd-rc-local-generator[194674]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:22:09 compute-2 systemd-sysv-generator[194678]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:22:09 compute-2 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 31 07:22:09 compute-2 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 31 07:22:09 compute-2 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 31 07:22:09 compute-2 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 31 07:22:09 compute-2 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 31 07:22:09 compute-2 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 31 07:22:09 compute-2 systemd[1]: Starting libvirt nodedev daemon...
Jan 31 07:22:09 compute-2 systemd[1]: Started libvirt nodedev daemon.
Jan 31 07:22:09 compute-2 sudo[194646]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:09 compute-2 sudo[194863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-texmhsvzvohnjqbvzliurmynpjkesfld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844129.458753-3170-48790052113166/AnsiballZ_systemd.py'
Jan 31 07:22:09 compute-2 sudo[194863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:22:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:10.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:22:10 compute-2 python3.9[194865]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:22:10 compute-2 systemd[1]: Reloading.
Jan 31 07:22:10 compute-2 systemd-rc-local-generator[194891]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:22:10 compute-2 systemd-sysv-generator[194895]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:22:10 compute-2 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 31 07:22:10 compute-2 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 31 07:22:10 compute-2 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 31 07:22:10 compute-2 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 31 07:22:10 compute-2 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 31 07:22:10 compute-2 systemd[1]: Starting libvirt proxy daemon...
Jan 31 07:22:10 compute-2 systemd[1]: Started libvirt proxy daemon.
Jan 31 07:22:10 compute-2 sudo[194863]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:10 compute-2 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 31 07:22:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:10.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:10 compute-2 podman[195024]: 2026-01-31 07:22:10.795622264 +0000 UTC m=+0.065672128 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 07:22:10 compute-2 sudo[195092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brymiqdfcylfenovgcucupstmnntoxfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844130.614011-3170-109456272902862/AnsiballZ_systemd.py'
Jan 31 07:22:10 compute-2 sudo[195092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:10 compute-2 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 31 07:22:10 compute-2 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 31 07:22:10 compute-2 ceph-mon[77282]: pgmap v628: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:11 compute-2 python3.9[195095]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:22:11 compute-2 systemd[1]: Reloading.
Jan 31 07:22:11 compute-2 systemd-sysv-generator[195133]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:22:11 compute-2 systemd-rc-local-generator[195130]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:22:11 compute-2 systemd[1]: Listening on libvirt locking daemon socket.
Jan 31 07:22:11 compute-2 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 31 07:22:11 compute-2 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 31 07:22:11 compute-2 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 31 07:22:11 compute-2 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 31 07:22:11 compute-2 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 31 07:22:11 compute-2 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 31 07:22:11 compute-2 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 31 07:22:11 compute-2 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 31 07:22:11 compute-2 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 31 07:22:11 compute-2 systemd[1]: Starting libvirt QEMU daemon...
Jan 31 07:22:11 compute-2 systemd[1]: Started libvirt QEMU daemon.
Jan 31 07:22:11 compute-2 sudo[195092]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:11 compute-2 setroubleshoot[194901]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 8e441edc-b264-4eab-b515-fcb5b468310e
Jan 31 07:22:11 compute-2 setroubleshoot[194901]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 31 07:22:11 compute-2 setroubleshoot[194901]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 8e441edc-b264-4eab-b515-fcb5b468310e
Jan 31 07:22:11 compute-2 setroubleshoot[194901]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 31 07:22:12 compute-2 sudo[195316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikiobkyfdkttykqhvmyidzltemrbduks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844131.7231271-3170-256763084151346/AnsiballZ_systemd.py'
Jan 31 07:22:12 compute-2 sudo[195316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:12.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:12 compute-2 python3.9[195318]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:22:12 compute-2 systemd[1]: Reloading.
Jan 31 07:22:12 compute-2 systemd-rc-local-generator[195341]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:22:12 compute-2 systemd-sysv-generator[195348]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:22:12 compute-2 systemd[1]: Starting libvirt secret daemon socket...
Jan 31 07:22:12 compute-2 systemd[1]: Listening on libvirt secret daemon socket.
Jan 31 07:22:12 compute-2 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 31 07:22:12 compute-2 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 31 07:22:12 compute-2 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 31 07:22:12 compute-2 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 31 07:22:12 compute-2 systemd[1]: Starting libvirt secret daemon...
Jan 31 07:22:12 compute-2 systemd[1]: Started libvirt secret daemon.
Jan 31 07:22:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:12.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:12 compute-2 sudo[195316]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:12 compute-2 ceph-mon[77282]: pgmap v629: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:13 compute-2 sudo[195528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiavqvvycotkbacdjpfkodyefvichmue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844133.3358116-3281-273360003577143/AnsiballZ_file.py'
Jan 31 07:22:13 compute-2 sudo[195528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:13 compute-2 python3.9[195530]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:13 compute-2 sudo[195528]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:22:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:14.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:14.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:14 compute-2 sudo[195680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seshgbdprwwcqgbhorsyzhrrjceplvmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844134.5138595-3305-146087063035501/AnsiballZ_find.py'
Jan 31 07:22:14 compute-2 sudo[195680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:14 compute-2 ceph-mon[77282]: pgmap v630: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:14 compute-2 python3.9[195682]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 07:22:15 compute-2 sudo[195680]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:15 compute-2 sudo[195833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oukurbehzyuvnjirsghynvcxouyqcdpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844135.1831295-3328-220123603297704/AnsiballZ_command.py'
Jan 31 07:22:15 compute-2 sudo[195833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:15 compute-2 python3.9[195835]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:22:15 compute-2 sudo[195833]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:16.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:16 compute-2 python3.9[195989]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 07:22:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:16.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:17 compute-2 ceph-mon[77282]: pgmap v631: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:17 compute-2 python3.9[196140]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:22:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:18.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:18 compute-2 python3.9[196261]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844137.2370777-3385-236316034987863/.source.xml follow=False _original_basename=secret.xml.j2 checksum=e49dd15d2c7191e2dea7492d81017d486826e706 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:18 compute-2 sudo[196411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vavxaxgsjmtryggjxerzthfzcftapatp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844138.3951244-3431-32940611697239/AnsiballZ_command.py'
Jan 31 07:22:18 compute-2 sudo[196411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:18.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:22:18 compute-2 python3.9[196413]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine f70fcd2a-dcb4-5f89-a4ba-79a09959083b
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:22:18 compute-2 polkitd[43481]: Registered Authentication Agent for unix-process:196415:439136 (system bus name :1.1924 [pkttyagent --process 196415 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 31 07:22:18 compute-2 polkitd[43481]: Unregistered Authentication Agent for unix-process:196415:439136 (system bus name :1.1924, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 31 07:22:18 compute-2 polkitd[43481]: Registered Authentication Agent for unix-process:196414:439136 (system bus name :1.1925 [pkttyagent --process 196414 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 31 07:22:18 compute-2 polkitd[43481]: Unregistered Authentication Agent for unix-process:196414:439136 (system bus name :1.1925, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 31 07:22:19 compute-2 ceph-mon[77282]: pgmap v632: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:19 compute-2 sudo[196411]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:20.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:20 compute-2 python3.9[196576]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:20.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:21 compute-2 ceph-mon[77282]: pgmap v633: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:21 compute-2 sudo[196727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vayseiuwybzekiwtodxkcawvdxjnwjjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844140.85698-3479-184811557190299/AnsiballZ_command.py'
Jan 31 07:22:21 compute-2 sudo[196727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:21 compute-2 sudo[196727]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:21 compute-2 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 31 07:22:21 compute-2 sudo[196880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouqknxbfykmglgdyouyjhnqoqlpmvmvx ; FSID=f70fcd2a-dcb4-5f89-a4ba-79a09959083b KEY=AQBjqX1pAAAAABAAZUDJ8pReeykI0ZmVlnkCdQ== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844141.5432882-3502-144345765040251/AnsiballZ_command.py'
Jan 31 07:22:21 compute-2 sudo[196880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:21 compute-2 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 31 07:22:22 compute-2 polkitd[43481]: Registered Authentication Agent for unix-process:196883:439455 (system bus name :1.1928 [pkttyagent --process 196883 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 31 07:22:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:22:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:22.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:22:22 compute-2 polkitd[43481]: Unregistered Authentication Agent for unix-process:196883:439455 (system bus name :1.1928, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 31 07:22:22 compute-2 ceph-mon[77282]: pgmap v634: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:22 compute-2 sudo[196880]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:22 compute-2 sudo[196913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:22:22 compute-2 sudo[196913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:22:22 compute-2 sudo[196913]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:22 compute-2 sudo[196964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:22:22 compute-2 sudo[196964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:22:22 compute-2 sudo[196964]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:22 compute-2 sudo[197088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfbmgjwkynbyzodskaynyjneunmlseoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844142.3951738-3526-183168102132274/AnsiballZ_copy.py'
Jan 31 07:22:22 compute-2 sudo[197088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:22.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:22 compute-2 python3.9[197090]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:22 compute-2 sudo[197088]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:23 compute-2 sudo[197241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjhaismkymiwropbntesylfnlpzeisxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844143.1249008-3551-111870614958058/AnsiballZ_stat.py'
Jan 31 07:22:23 compute-2 sudo[197241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:23 compute-2 python3.9[197243]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:22:23 compute-2 sudo[197241]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:22:24 compute-2 sudo[197364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prkokvkdqsbbpmtfvhvplehzsdgyjtxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844143.1249008-3551-111870614958058/AnsiballZ_copy.py'
Jan 31 07:22:24 compute-2 sudo[197364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:22:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:24.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:22:24 compute-2 python3.9[197366]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844143.1249008-3551-111870614958058/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:24 compute-2 sudo[197364]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:24.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:24 compute-2 ceph-mon[77282]: pgmap v635: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:24 compute-2 sudo[197516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jclogxxqweltckzvpcafturuiteeyayz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844144.720496-3599-22970421755130/AnsiballZ_file.py'
Jan 31 07:22:25 compute-2 sudo[197516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:25 compute-2 python3.9[197518]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:25 compute-2 sudo[197516]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:25 compute-2 sudo[197669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guyppaisfmvdykiybfdksbocjydlzuni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844145.3976498-3623-46358460308214/AnsiballZ_stat.py'
Jan 31 07:22:25 compute-2 sudo[197669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:25 compute-2 python3.9[197671]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:22:25 compute-2 sudo[197669]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:22:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:26.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:22:26 compute-2 sudo[197747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghbdhunxnzqvffbphejesidhafkoomod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844145.3976498-3623-46358460308214/AnsiballZ_file.py'
Jan 31 07:22:26 compute-2 sudo[197747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:26 compute-2 python3.9[197749]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:26 compute-2 sudo[197747]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:26.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:26 compute-2 ceph-mon[77282]: pgmap v636: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:26 compute-2 sudo[197899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pklgxbbbzeosxgrauzcqnmvacauilpif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844146.6159372-3659-258561511081601/AnsiballZ_stat.py'
Jan 31 07:22:26 compute-2 sudo[197899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:27 compute-2 python3.9[197901]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:22:27 compute-2 sudo[197899]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:27 compute-2 sudo[197978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctulevnmsaavovtfdkvuvvbvsdryddov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844146.6159372-3659-258561511081601/AnsiballZ_file.py'
Jan 31 07:22:27 compute-2 sudo[197978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:27 compute-2 python3.9[197980]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.4b7ktam0 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:27 compute-2 sudo[197978]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:28 compute-2 sudo[198130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzptyewnrchmesgaeznfvtdgujysxqjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844147.8267088-3695-267939818893473/AnsiballZ_stat.py'
Jan 31 07:22:28 compute-2 sudo[198130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:28.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:28 compute-2 python3.9[198132]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:22:28 compute-2 sudo[198130]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:28 compute-2 sudo[198208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uterasivgdpdxgskhdfdhvdmltfyfcdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844147.8267088-3695-267939818893473/AnsiballZ_file.py'
Jan 31 07:22:28 compute-2 sudo[198208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:28.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:28 compute-2 python3.9[198210]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:28 compute-2 sudo[198208]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:22:28 compute-2 ceph-mon[77282]: pgmap v637: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:29 compute-2 sudo[198361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijeukiujbciymvmygulfjpdmwoiovwaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844148.9873095-3733-233281283286724/AnsiballZ_command.py'
Jan 31 07:22:29 compute-2 sudo[198361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:29 compute-2 python3.9[198363]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:22:29 compute-2 sudo[198361]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:22:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:30.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:22:30 compute-2 sudo[198514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwzdbfkwhzxtxuxqjnpbizdpunixavoy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769844149.9745224-3758-249738968093985/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 07:22:30 compute-2 sudo[198514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:30 compute-2 python3[198516]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 07:22:30 compute-2 sudo[198514]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:30.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:30 compute-2 ceph-mon[77282]: pgmap v638: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:31 compute-2 sudo[198667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zegfpkmmavjwaaprugdcfvyihhxhywjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844150.799049-3782-224149851612580/AnsiballZ_stat.py'
Jan 31 07:22:31 compute-2 sudo[198667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:31 compute-2 python3.9[198669]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:22:31 compute-2 sudo[198667]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:31 compute-2 sudo[198745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwyojzsqcknmcxsnkgnapelvlbihtlbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844150.799049-3782-224149851612580/AnsiballZ_file.py'
Jan 31 07:22:31 compute-2 sudo[198745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:31 compute-2 python3.9[198747]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:31 compute-2 sudo[198745]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:22:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:32.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:22:32 compute-2 sudo[198897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfzfyykvigouufbqqjpmywscglqllibm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844152.0168462-3818-229372340244427/AnsiballZ_stat.py'
Jan 31 07:22:32 compute-2 sudo[198897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:32 compute-2 python3.9[198899]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:22:32 compute-2 sudo[198897]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:32.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:32 compute-2 ceph-mon[77282]: pgmap v639: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:33 compute-2 sudo[199022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wucmplikwgxyvpzzwuaagijzszjtfmgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844152.0168462-3818-229372340244427/AnsiballZ_copy.py'
Jan 31 07:22:33 compute-2 sudo[199022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:33 compute-2 python3.9[199025]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844152.0168462-3818-229372340244427/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:33 compute-2 sudo[199022]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:22:33 compute-2 sudo[199175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxryurjkpghwjrwyhnjlbffgvnvufwsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844153.6233468-3862-154887463278130/AnsiballZ_stat.py'
Jan 31 07:22:33 compute-2 sudo[199175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:34 compute-2 python3.9[199177]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:22:34 compute-2 sudo[199175]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:22:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:34.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:22:34 compute-2 ceph-mgr[77635]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3465938080
Jan 31 07:22:34 compute-2 sudo[199253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzrxcksrtemkmugwoulwqdxcwtcinvdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844153.6233468-3862-154887463278130/AnsiballZ_file.py'
Jan 31 07:22:34 compute-2 sudo[199253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:34 compute-2 python3.9[199255]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:34 compute-2 sudo[199253]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:34.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:34 compute-2 ceph-mon[77282]: pgmap v640: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:35 compute-2 sudo[199406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obsfqtgcecuqmlyjtsvqjozrsvmloskn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844154.8344004-3899-139733901556834/AnsiballZ_stat.py'
Jan 31 07:22:35 compute-2 sudo[199406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:35 compute-2 python3.9[199408]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:22:35 compute-2 sudo[199406]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:35 compute-2 sudo[199484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqkjtanexkwgictxybmsamtgunsnkank ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844154.8344004-3899-139733901556834/AnsiballZ_file.py'
Jan 31 07:22:35 compute-2 sudo[199484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:35 compute-2 python3.9[199486]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:35 compute-2 sudo[199484]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:22:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:36.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:22:36 compute-2 sudo[199636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppnoyfytcpvjsgsrzzfxtdgzrkpqarhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844156.0292008-3935-123633908900029/AnsiballZ_stat.py'
Jan 31 07:22:36 compute-2 sudo[199636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:36 compute-2 python3.9[199638]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:22:36 compute-2 sudo[199636]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:36.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:36 compute-2 sudo[199761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwtwgffkxeqdtqzklxiexozwxxycqcpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844156.0292008-3935-123633908900029/AnsiballZ_copy.py'
Jan 31 07:22:36 compute-2 sudo[199761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:36 compute-2 ceph-mon[77282]: pgmap v641: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:37 compute-2 python3.9[199763]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844156.0292008-3935-123633908900029/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:37 compute-2 sudo[199761]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:37 compute-2 sudo[199914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcazhaxyhvihkyjvxiiuffchlighvycm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844157.4794528-3980-246881584585039/AnsiballZ_file.py'
Jan 31 07:22:37 compute-2 sudo[199914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:37 compute-2 python3.9[199916]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:37 compute-2 sudo[199914]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:22:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:38.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:22:38 compute-2 sudo[200074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxziqoenmcjcmfsecaoruptzrvdnsyvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844158.2881355-4004-33324304899998/AnsiballZ_command.py'
Jan 31 07:22:38 compute-2 sudo[200074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:38 compute-2 podman[200040]: 2026-01-31 07:22:38.663174433 +0000 UTC m=+0.129063761 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 07:22:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:38.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:38 compute-2 python3.9[200081]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:22:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:22:38 compute-2 sudo[200074]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:39 compute-2 ceph-mon[77282]: pgmap v642: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:39 compute-2 sudo[200249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iftutpmmhjdwsayygppyjphakccapeuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844159.07582-4028-269066621042195/AnsiballZ_blockinfile.py'
Jan 31 07:22:39 compute-2 sudo[200249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:39 compute-2 python3.9[200251]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:39 compute-2 sudo[200249]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:40.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:40 compute-2 sudo[200401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxdtavxjcssfjlfnefmyhyvypegxeoee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844160.1290026-4055-272413780953919/AnsiballZ_command.py'
Jan 31 07:22:40 compute-2 sudo[200401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:40 compute-2 python3.9[200403]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:22:40 compute-2 sudo[200401]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:22:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:40.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:22:41 compute-2 ceph-mon[77282]: pgmap v643: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:41 compute-2 sudo[200566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpfkfqywytdnzuqrygqfpixuschpihdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844160.8888373-4079-69130750681695/AnsiballZ_stat.py'
Jan 31 07:22:41 compute-2 sudo[200566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:41 compute-2 podman[200529]: 2026-01-31 07:22:41.194596169 +0000 UTC m=+0.072983899 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 07:22:41 compute-2 python3.9[200576]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:22:41 compute-2 sudo[200566]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:42 compute-2 sudo[200728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znoybwacyjluockoakvjwrutnrsbnxxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844161.8290846-4103-262874188960025/AnsiballZ_command.py'
Jan 31 07:22:42 compute-2 sudo[200728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:22:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:42.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:22:42 compute-2 python3.9[200730]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:22:42 compute-2 sudo[200728]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:42 compute-2 sudo[200758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:22:42 compute-2 sudo[200758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:22:42 compute-2 sudo[200758]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:42 compute-2 sudo[200790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:22:42 compute-2 sudo[200790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:22:42 compute-2 sudo[200790]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:42.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:42 compute-2 sudo[200933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbdbsyiuvxbdpiyrjajococphgvjeuos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844162.5977917-4127-20524624096845/AnsiballZ_file.py'
Jan 31 07:22:42 compute-2 sudo[200933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:43 compute-2 python3.9[200935]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:43 compute-2 sudo[200933]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:43 compute-2 ceph-mon[77282]: pgmap v644: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:43 compute-2 sudo[201086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aadmewmdrlwksijzkekcdnvzkpmxnxub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844163.2697086-4151-61861094338105/AnsiballZ_stat.py'
Jan 31 07:22:43 compute-2 sudo[201086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:43 compute-2 python3.9[201088]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:22:43 compute-2 sudo[201086]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:22:43 compute-2 sudo[201209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfxmrawpawbjymuvoeciagrebmlshhbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844163.2697086-4151-61861094338105/AnsiballZ_copy.py'
Jan 31 07:22:43 compute-2 sudo[201209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:44 compute-2 python3.9[201211]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844163.2697086-4151-61861094338105/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:44 compute-2 sudo[201209]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:22:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:44.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:22:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:44.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:44 compute-2 sudo[201361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaemcniomfrqwmogdizminqxurvwajkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844164.6049182-4196-182697262563394/AnsiballZ_stat.py'
Jan 31 07:22:44 compute-2 sudo[201361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:45 compute-2 python3.9[201363]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:22:45 compute-2 sudo[201361]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:45 compute-2 ceph-mon[77282]: pgmap v645: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:45 compute-2 sudo[201485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzbwrenpjpqguklyjfoqvipdfiuwpvgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844164.6049182-4196-182697262563394/AnsiballZ_copy.py'
Jan 31 07:22:45 compute-2 sudo[201485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:45 compute-2 python3.9[201487]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844164.6049182-4196-182697262563394/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:45 compute-2 sudo[201485]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:46 compute-2 ceph-mon[77282]: pgmap v646: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:22:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:46.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:22:46 compute-2 sudo[201637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxlindaxjebqiyrzmkvvhjnxvxgxfudi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844165.9510999-4241-273801843736736/AnsiballZ_stat.py'
Jan 31 07:22:46 compute-2 sudo[201637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:46 compute-2 python3.9[201639]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:22:46 compute-2 sudo[201637]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:22:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:46.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:22:46 compute-2 sudo[201760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kovomdqugvjgbgdvcynznwiygjwfmjud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844165.9510999-4241-273801843736736/AnsiballZ_copy.py'
Jan 31 07:22:46 compute-2 sudo[201760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:46 compute-2 python3.9[201762]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844165.9510999-4241-273801843736736/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:46 compute-2 sudo[201760]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:47 compute-2 sudo[201913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfczwsiyxjikgrfuujlujckxzmrkrnty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844167.241475-4286-168590426486628/AnsiballZ_systemd.py'
Jan 31 07:22:47 compute-2 sudo[201913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:47 compute-2 python3.9[201915]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:22:47 compute-2 systemd[1]: Reloading.
Jan 31 07:22:47 compute-2 systemd-sysv-generator[201947]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:22:47 compute-2 systemd-rc-local-generator[201943]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:22:48 compute-2 systemd[1]: Reached target edpm_libvirt.target.
Jan 31 07:22:48 compute-2 sudo[201913]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:48.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:48 compute-2 sudo[202104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlszghopatlojoennbtpljmmqwsoclls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844168.3925908-4310-133719836997913/AnsiballZ_systemd.py'
Jan 31 07:22:48 compute-2 sudo[202104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:48.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:22:48 compute-2 ceph-mon[77282]: pgmap v647: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:49 compute-2 python3.9[202106]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 07:22:49 compute-2 systemd[1]: Reloading.
Jan 31 07:22:49 compute-2 systemd-rc-local-generator[202124]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:22:49 compute-2 systemd-sysv-generator[202130]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:22:49 compute-2 systemd[1]: Reloading.
Jan 31 07:22:49 compute-2 systemd-sysv-generator[202171]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:22:49 compute-2 systemd-rc-local-generator[202168]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:22:49 compute-2 sudo[202104]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:22:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:50.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:22:50 compute-2 sshd-session[144579]: Connection closed by 192.168.122.30 port 49768
Jan 31 07:22:50 compute-2 sshd-session[144576]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:22:50 compute-2 systemd[1]: session-48.scope: Deactivated successfully.
Jan 31 07:22:50 compute-2 systemd[1]: session-48.scope: Consumed 2min 57.640s CPU time.
Jan 31 07:22:50 compute-2 systemd-logind[801]: Session 48 logged out. Waiting for processes to exit.
Jan 31 07:22:50 compute-2 systemd-logind[801]: Removed session 48.
Jan 31 07:22:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:50.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:50 compute-2 ceph-mon[77282]: pgmap v648: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:22:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:52.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:22:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:52.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:52 compute-2 ceph-mon[77282]: pgmap v649: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:22:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:54.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:54.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:54 compute-2 ceph-mon[77282]: pgmap v650: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:56.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:22:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:56.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:22:57 compute-2 ceph-mon[77282]: pgmap v651: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:57 compute-2 sshd-session[202208]: Accepted publickey for zuul from 192.168.122.30 port 49272 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:22:57 compute-2 systemd-logind[801]: New session 49 of user zuul.
Jan 31 07:22:57 compute-2 systemd[1]: Started Session 49 of User zuul.
Jan 31 07:22:57 compute-2 sshd-session[202208]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:22:57 compute-2 sudo[202210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:22:57 compute-2 sudo[202210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:22:57 compute-2 sudo[202210]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:57 compute-2 sudo[202237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:22:57 compute-2 sudo[202237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:22:57 compute-2 sudo[202237]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:57 compute-2 sudo[202285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:22:57 compute-2 sudo[202285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:22:57 compute-2 sudo[202285]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:57 compute-2 sudo[202339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:22:57 compute-2 sudo[202339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:22:58 compute-2 sudo[202339]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:58.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:58 compute-2 python3.9[202477]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:22:58 compute-2 ceph-mon[77282]: pgmap v652: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:58 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:22:58 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:22:58 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:22:58 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:22:58 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:22:58 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:22:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:22:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:22:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:58.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:22:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:22:59 compute-2 python3.9[202646]: ansible-ansible.builtin.service_facts Invoked
Jan 31 07:22:59 compute-2 network[202663]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 07:22:59 compute-2 network[202664]: 'network-scripts' will be removed from distribution in near future.
Jan 31 07:22:59 compute-2 network[202665]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 07:23:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:00.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:00.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:00 compute-2 ceph-mon[77282]: pgmap v653: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:02.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:02 compute-2 sudo[202811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:23:02 compute-2 sudo[202811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:23:02 compute-2 sudo[202811]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:02 compute-2 sudo[202836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:23:02 compute-2 sudo[202836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:23:02 compute-2 sudo[202836]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:02.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:02 compute-2 ceph-mon[77282]: pgmap v654: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:23:04 compute-2 sudo[202987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bshrfbhgkgybbhpbwkeirjbsulmxngfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844183.8878727-104-140212339684508/AnsiballZ_setup.py'
Jan 31 07:23:04 compute-2 sudo[202987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:23:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:04.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:23:04 compute-2 python3.9[202989]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:23:04 compute-2 sudo[202994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:23:04 compute-2 sudo[202994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:23:04 compute-2 sudo[202994]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:04 compute-2 sudo[203020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:23:04 compute-2 sudo[203020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:23:04 compute-2 sudo[203020]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:04 compute-2 sudo[202987]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:04.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:04 compute-2 ceph-mon[77282]: pgmap v655: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:04 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:23:04 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:23:05 compute-2 sudo[203122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vezmlgvauheeqqjkweyyutzcojoptqrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844183.8878727-104-140212339684508/AnsiballZ_dnf.py'
Jan 31 07:23:05 compute-2 sudo[203122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:05 compute-2 python3.9[203124]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:23:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:06.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:06.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:23:06.825 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:23:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:23:06.828 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:23:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:23:06.828 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:23:06 compute-2 ceph-mon[77282]: pgmap v656: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:08.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:23:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:08.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:23:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:23:08 compute-2 ceph-mon[77282]: pgmap v657: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:09 compute-2 podman[203128]: 2026-01-31 07:23:09.240869145 +0000 UTC m=+0.115538079 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller)
Jan 31 07:23:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:23:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:10.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:23:10 compute-2 sudo[203122]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:10.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:10 compute-2 ceph-mon[77282]: pgmap v658: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:11 compute-2 sudo[203304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfxreqbmglvncngzbdgzdwyemkccwkzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844190.7174804-140-221668851780173/AnsiballZ_stat.py'
Jan 31 07:23:11 compute-2 sudo[203304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:11 compute-2 python3.9[203306]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:23:11 compute-2 sudo[203304]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:23:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:12.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:23:12 compute-2 podman[203406]: 2026-01-31 07:23:12.203206595 +0000 UTC m=+0.080289910 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:23:12 compute-2 sudo[203475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmoiutuupeallnzdbsnmcrldwctwrumm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844191.7801569-170-9008714313781/AnsiballZ_command.py'
Jan 31 07:23:12 compute-2 sudo[203475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:12 compute-2 python3.9[203477]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:23:12 compute-2 sudo[203475]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:12.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:12 compute-2 ceph-mon[77282]: pgmap v659: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:13 compute-2 sudo[203629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hahnkconsufrlulwvpklufjdcjmxhuyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844192.8173168-200-228718331432440/AnsiballZ_stat.py'
Jan 31 07:23:13 compute-2 sudo[203629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:13 compute-2 python3.9[203631]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:23:13 compute-2 sudo[203629]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:13 compute-2 sudo[203781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgqokjwswcgzzxjvdyesdkehkxnxpgdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844193.5265942-224-242916110249339/AnsiballZ_command.py'
Jan 31 07:23:13 compute-2 sudo[203781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:23:14 compute-2 python3.9[203783]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:23:14 compute-2 sudo[203781]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:14.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:14 compute-2 sudo[203934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpphrxwujiqbmpqecejpqixrlvyqgjxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844194.2873147-248-36253230719871/AnsiballZ_stat.py'
Jan 31 07:23:14 compute-2 sudo[203934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:14 compute-2 python3.9[203936]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:23:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:14.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:14 compute-2 sudo[203934]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:14 compute-2 ceph-mon[77282]: pgmap v660: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:15 compute-2 sudo[204058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofruennsfihyvgkyiufdjtjiumsnnsor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844194.2873147-248-36253230719871/AnsiballZ_copy.py'
Jan 31 07:23:15 compute-2 sudo[204058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:15 compute-2 python3.9[204060]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844194.2873147-248-36253230719871/.source.iscsi _original_basename=.96478jjl follow=False checksum=c29ac2867f14bcae991cd66c923659e166eaadd9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:15 compute-2 sudo[204058]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:16 compute-2 sudo[204210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmvzudzcehueqnbxpbrkbdfswctzacij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844195.6416504-293-133420703655993/AnsiballZ_file.py'
Jan 31 07:23:16 compute-2 sudo[204210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:16.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:16 compute-2 python3.9[204212]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:16 compute-2 sudo[204210]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:23:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:16.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:23:16 compute-2 sudo[204362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeyedmcdzgqboyyzddfdqpizhygikxqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844196.521795-316-185794023999934/AnsiballZ_lineinfile.py'
Jan 31 07:23:16 compute-2 sudo[204362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:17 compute-2 ceph-mon[77282]: pgmap v661: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:17 compute-2 python3.9[204364]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:17 compute-2 sudo[204362]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:18 compute-2 sudo[204515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehxxvxzifnwagdpovesytivjdqntxdve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844197.4881349-344-78930390840455/AnsiballZ_systemd_service.py'
Jan 31 07:23:18 compute-2 sudo[204515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:18.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:18 compute-2 python3.9[204517]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:23:18 compute-2 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 31 07:23:18 compute-2 sudo[204515]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:18.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:23:19 compute-2 sudo[204671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zajsswvwnwezzkowtslksyrridjeojbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844198.730899-368-210559859861538/AnsiballZ_systemd_service.py'
Jan 31 07:23:19 compute-2 sudo[204671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:19 compute-2 ceph-mon[77282]: pgmap v662: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:19 compute-2 python3.9[204673]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:23:19 compute-2 systemd[1]: Reloading.
Jan 31 07:23:19 compute-2 systemd-sysv-generator[204707]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:23:19 compute-2 systemd-rc-local-generator[204701]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:23:19 compute-2 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 31 07:23:19 compute-2 systemd[1]: Starting Open-iSCSI...
Jan 31 07:23:19 compute-2 kernel: Loading iSCSI transport class v2.0-870.
Jan 31 07:23:19 compute-2 systemd[1]: Started Open-iSCSI.
Jan 31 07:23:19 compute-2 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 31 07:23:19 compute-2 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 31 07:23:19 compute-2 sudo[204671]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:23:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:20.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:23:20 compute-2 ceph-mon[77282]: pgmap v663: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:20.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:20 compute-2 python3.9[204876]: ansible-ansible.builtin.service_facts Invoked
Jan 31 07:23:20 compute-2 network[204893]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 07:23:20 compute-2 network[204894]: 'network-scripts' will be removed from distribution in near future.
Jan 31 07:23:20 compute-2 network[204895]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:23:22.133779) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844202133898, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1831, "num_deletes": 503, "total_data_size": 3903526, "memory_usage": 3960072, "flush_reason": "Manual Compaction"}
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844202148354, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 1495129, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14399, "largest_seqno": 16224, "table_properties": {"data_size": 1489630, "index_size": 2254, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 16560, "raw_average_key_size": 19, "raw_value_size": 1475728, "raw_average_value_size": 1709, "num_data_blocks": 104, "num_entries": 863, "num_filter_entries": 863, "num_deletions": 503, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844052, "oldest_key_time": 1769844052, "file_creation_time": 1769844202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 14667 microseconds, and 6548 cpu microseconds.
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:23:22.148452) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 1495129 bytes OK
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:23:22.148475) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:23:22.151424) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:23:22.151447) EVENT_LOG_v1 {"time_micros": 1769844202151440, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:23:22.151470) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 3894346, prev total WAL file size 3894346, number of live WAL files 2.
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:23:22.153332) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323535' seq:72057594037927935, type:22 .. '6D67727374617400353038' seq:0, type:0; will stop at (end)
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(1460KB)], [27(10MB)]
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844202153426, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 12631097, "oldest_snapshot_seqno": -1}
Jan 31 07:23:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.003000082s ======
Jan 31 07:23:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:22.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000082s
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 4205 keys, 8058679 bytes, temperature: kUnknown
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844202232889, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 8058679, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8029234, "index_size": 17815, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10565, "raw_key_size": 104136, "raw_average_key_size": 24, "raw_value_size": 7951819, "raw_average_value_size": 1891, "num_data_blocks": 750, "num_entries": 4205, "num_filter_entries": 4205, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769844202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:23:22.233156) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 8058679 bytes
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:23:22.237818) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.9 rd, 101.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 10.6 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(13.8) write-amplify(5.4) OK, records in: 5156, records dropped: 951 output_compression: NoCompression
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:23:22.237838) EVENT_LOG_v1 {"time_micros": 1769844202237828, "job": 14, "event": "compaction_finished", "compaction_time_micros": 79506, "compaction_time_cpu_micros": 33764, "output_level": 6, "num_output_files": 1, "total_output_size": 8058679, "num_input_records": 5156, "num_output_records": 4205, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844202238129, "job": 14, "event": "table_file_deletion", "file_number": 29}
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844202239314, "job": 14, "event": "table_file_deletion", "file_number": 27}
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:23:22.153205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:23:22.239518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:23:22.239530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:23:22.239534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:23:22.239537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:23:22.239540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:23:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:22.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:22 compute-2 ceph-mon[77282]: pgmap v664: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:22 compute-2 sudo[204980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:23:22 compute-2 sudo[204980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:23:22 compute-2 sudo[204980]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:22 compute-2 sudo[205008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:23:22 compute-2 sudo[205008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:23:22 compute-2 sudo[205008]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:23:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:23:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:24.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:23:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:24.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:24 compute-2 ceph-mon[77282]: pgmap v665: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:25 compute-2 sudo[205218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cetamqtqwzfczztamvckgrlvmflescns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844205.1075556-437-8337222219967/AnsiballZ_dnf.py'
Jan 31 07:23:25 compute-2 sudo[205218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:25 compute-2 python3.9[205220]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:23:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:26.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:26.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:26 compute-2 ceph-mon[77282]: pgmap v666: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:28 compute-2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 07:23:28 compute-2 systemd[1]: Starting man-db-cache-update.service...
Jan 31 07:23:28 compute-2 systemd[1]: Reloading.
Jan 31 07:23:28 compute-2 systemd-sysv-generator[205262]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:23:28 compute-2 systemd-rc-local-generator[205257]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:23:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:28.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:28 compute-2 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 07:23:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:28.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:28 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 07:23:28 compute-2 systemd[1]: Finished man-db-cache-update.service.
Jan 31 07:23:28 compute-2 systemd[1]: run-rec584fbb1a8641f195f7b8993949bca5.service: Deactivated successfully.
Jan 31 07:23:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:23:28 compute-2 sudo[205218]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:28 compute-2 ceph-mon[77282]: pgmap v667: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:29 compute-2 sudo[205536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knvwbpfboaoftjckdastrzwmsjjvnwjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844209.1394787-464-63521005952802/AnsiballZ_file.py'
Jan 31 07:23:29 compute-2 sudo[205536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:29 compute-2 python3.9[205538]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 31 07:23:29 compute-2 sudo[205536]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:23:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:30.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:23:30 compute-2 sudo[205688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyqvmnckglhugoowxelgyazxjkhabfol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844210.0284913-488-63530337436917/AnsiballZ_modprobe.py'
Jan 31 07:23:30 compute-2 sudo[205688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:30 compute-2 ceph-mon[77282]: pgmap v668: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:30 compute-2 python3.9[205690]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 31 07:23:30 compute-2 sudo[205688]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:30 compute-2 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 07:23:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:30.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:31 compute-2 sudo[205846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvdtkdspmddsqfwhmvhlkzmyfbjahjth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844210.944468-512-208110449357518/AnsiballZ_stat.py'
Jan 31 07:23:31 compute-2 sudo[205846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:31 compute-2 python3.9[205848]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:23:31 compute-2 sudo[205846]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:31 compute-2 sudo[205969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpccpobcypvowxumvrleelfpxnflluoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844210.944468-512-208110449357518/AnsiballZ_copy.py'
Jan 31 07:23:31 compute-2 sudo[205969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:32 compute-2 python3.9[205971]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844210.944468-512-208110449357518/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:32 compute-2 sudo[205969]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:23:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:32.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:23:32 compute-2 sudo[206121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqmasgcynbxgnoolqxvdmljfgupemxpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844212.331469-559-230021711026136/AnsiballZ_lineinfile.py'
Jan 31 07:23:32 compute-2 sudo[206121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:32 compute-2 python3.9[206123]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:32.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:32 compute-2 sudo[206121]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:33 compute-2 ceph-mon[77282]: pgmap v669: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:33 compute-2 sudo[206274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roblohsudxlvnbphqpjhvhiudhzwkidz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844213.035624-584-139104205265433/AnsiballZ_systemd.py'
Jan 31 07:23:33 compute-2 sudo[206274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:23:34 compute-2 python3.9[206276]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:23:34 compute-2 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 31 07:23:34 compute-2 systemd[1]: Stopped Load Kernel Modules.
Jan 31 07:23:34 compute-2 systemd[1]: Stopping Load Kernel Modules...
Jan 31 07:23:34 compute-2 systemd[1]: Starting Load Kernel Modules...
Jan 31 07:23:34 compute-2 systemd[1]: Finished Load Kernel Modules.
Jan 31 07:23:34 compute-2 sudo[206274]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:34 compute-2 ceph-mon[77282]: pgmap v670: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:23:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:34.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:23:34 compute-2 sudo[206430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcikunkbhwnxhmqhxqclsmakogntfwys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844214.3219094-608-195003593327256/AnsiballZ_command.py'
Jan 31 07:23:34 compute-2 sudo[206430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:34 compute-2 python3.9[206432]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:23:34 compute-2 sudo[206430]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:34.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:35 compute-2 sudo[206584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbbzdmlfflmzyvkcofajkknrinmlwexd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844215.202548-638-193179860551259/AnsiballZ_stat.py'
Jan 31 07:23:35 compute-2 sudo[206584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:35 compute-2 python3.9[206586]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:23:35 compute-2 sudo[206584]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:36.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:36 compute-2 sudo[206736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsaqkdyhihcpqnvidgiqsrrkmwkjlfog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844216.041493-665-36584472650717/AnsiballZ_stat.py'
Jan 31 07:23:36 compute-2 sudo[206736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:36 compute-2 python3.9[206738]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:23:36 compute-2 sudo[206736]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:36.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:36 compute-2 ceph-mon[77282]: pgmap v671: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:36 compute-2 sudo[206859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqcenosqbvmxazlvojboqxjjofnjhuaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844216.041493-665-36584472650717/AnsiballZ_copy.py'
Jan 31 07:23:36 compute-2 sudo[206859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:37 compute-2 python3.9[206861]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844216.041493-665-36584472650717/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:37 compute-2 sudo[206859]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:37 compute-2 sudo[207012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylsbioiwfjfagwqkikfnaewwvqnoigzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844217.3058176-709-257326249762804/AnsiballZ_command.py'
Jan 31 07:23:37 compute-2 sudo[207012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:37 compute-2 python3.9[207014]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:23:37 compute-2 sudo[207012]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:38.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:38 compute-2 sudo[207165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fshxznowjwifgsqlkqhpzmcvrrjtphtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844218.0388334-734-59324652570227/AnsiballZ_lineinfile.py'
Jan 31 07:23:38 compute-2 sudo[207165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:38 compute-2 python3.9[207167]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:38 compute-2 sudo[207165]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:38.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:23:38 compute-2 ceph-mon[77282]: pgmap v672: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:39 compute-2 sudo[207331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmlwderkylvyecallfsfvaaddfyuzzgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844218.714723-758-198748683798852/AnsiballZ_replace.py'
Jan 31 07:23:39 compute-2 sudo[207331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:39 compute-2 podman[207292]: 2026-01-31 07:23:39.445429413 +0000 UTC m=+0.094259983 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 07:23:39 compute-2 python3.9[207341]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:39 compute-2 sudo[207331]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:40 compute-2 sudo[207498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzetzlufowqnerpmhraywdsmspsjfawn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844219.8555176-782-44386529169310/AnsiballZ_replace.py'
Jan 31 07:23:40 compute-2 sudo[207498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:23:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:40.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:23:40 compute-2 python3.9[207500]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:40 compute-2 sudo[207498]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:40.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:40 compute-2 sudo[207650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gojqqkjbtymgefavjtuoomaajifxbfjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844220.6462393-808-54512727186141/AnsiballZ_lineinfile.py'
Jan 31 07:23:40 compute-2 sudo[207650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:41 compute-2 ceph-mon[77282]: pgmap v673: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:41 compute-2 python3.9[207652]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:41 compute-2 sudo[207650]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:41 compute-2 sudo[207803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atrejxigasmjuowvxzxnzoguzmziepnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844221.2830963-808-253331594819643/AnsiballZ_lineinfile.py'
Jan 31 07:23:41 compute-2 sudo[207803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:41 compute-2 python3.9[207805]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:41 compute-2 sudo[207803]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:42 compute-2 sudo[207955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goedakslseorwrujfbsczcrnhdmyacmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844221.93365-808-209314343602022/AnsiballZ_lineinfile.py'
Jan 31 07:23:42 compute-2 sudo[207955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:42.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:42 compute-2 podman[207957]: 2026-01-31 07:23:42.310866636 +0000 UTC m=+0.066082378 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 07:23:42 compute-2 python3.9[207958]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:42 compute-2 sudo[207955]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:42.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:42 compute-2 sudo[208126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnosrrzvdimnyfxmoawegobdqflqdlse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844222.5898314-808-246948330062777/AnsiballZ_lineinfile.py'
Jan 31 07:23:42 compute-2 sudo[208126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:42 compute-2 sudo[208129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:23:42 compute-2 sudo[208129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:23:42 compute-2 sudo[208129]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:43 compute-2 sudo[208154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:23:43 compute-2 sudo[208154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:23:43 compute-2 sudo[208154]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:43 compute-2 python3.9[208128]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:43 compute-2 sudo[208126]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:43 compute-2 ceph-mon[77282]: pgmap v674: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:43 compute-2 sudo[208329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmwtdbxclsptaghgeokgpynvvizatnxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844223.2891016-896-212779624307343/AnsiballZ_stat.py'
Jan 31 07:23:43 compute-2 sudo[208329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:43 compute-2 python3.9[208331]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:23:43 compute-2 sudo[208329]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:23:44 compute-2 sudo[208483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wisfnybdaxsmmmqjnrohhshuuvdlnifq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844223.9688947-920-113508417332411/AnsiballZ_command.py'
Jan 31 07:23:44 compute-2 sudo[208483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:23:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:44.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:23:44 compute-2 python3.9[208485]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:23:44 compute-2 sudo[208483]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:44.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:45 compute-2 sudo[208636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaeexotmejpwrugkqlyrrbqtmeluddyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844224.7831395-946-76249743976249/AnsiballZ_systemd_service.py'
Jan 31 07:23:45 compute-2 sudo[208636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:45 compute-2 ceph-mon[77282]: pgmap v675: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:45 compute-2 python3.9[208638]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:23:45 compute-2 systemd[1]: Listening on multipathd control socket.
Jan 31 07:23:45 compute-2 sudo[208636]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:45 compute-2 sudo[208793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-letvsemgbkkhpoxlqbcudlaccyrsyiva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844225.7444255-970-267491640436299/AnsiballZ_systemd_service.py'
Jan 31 07:23:45 compute-2 sudo[208793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:23:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:46.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:23:46 compute-2 python3.9[208795]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:23:46 compute-2 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 31 07:23:46 compute-2 udevadm[208800]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 31 07:23:46 compute-2 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 31 07:23:46 compute-2 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 31 07:23:46 compute-2 multipathd[208804]: --------start up--------
Jan 31 07:23:46 compute-2 multipathd[208804]: read /etc/multipath.conf
Jan 31 07:23:46 compute-2 multipathd[208804]: path checkers start up
Jan 31 07:23:46 compute-2 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 31 07:23:46 compute-2 sudo[208793]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:46 compute-2 ceph-mon[77282]: pgmap v676: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:23:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:46.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:23:47 compute-2 sudo[208962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnypdxeawkdojwrbxavdmyaifbrgmxic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844227.0858107-1007-270712469929085/AnsiballZ_file.py'
Jan 31 07:23:47 compute-2 sudo[208962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:47 compute-2 python3.9[208964]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 31 07:23:47 compute-2 sudo[208962]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:48 compute-2 sudo[209114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmujkjtakqxyfjugoxysejpsudogbysm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844227.892363-1030-212351062564397/AnsiballZ_modprobe.py'
Jan 31 07:23:48 compute-2 sudo[209114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:48.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:48 compute-2 python3.9[209116]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 31 07:23:48 compute-2 kernel: Key type psk registered
Jan 31 07:23:48 compute-2 sudo[209114]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:48.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:23:48 compute-2 ceph-mon[77282]: pgmap v677: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:48 compute-2 sudo[209275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipayewrlfsqcqzodokpjbrfkprzbjhvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844228.7281978-1055-167589911730313/AnsiballZ_stat.py'
Jan 31 07:23:48 compute-2 sudo[209275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:49 compute-2 python3.9[209277]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:23:49 compute-2 sudo[209275]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:49 compute-2 sudo[209399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnvslhgfjhbsnwhoyuqnmdasvolhxtac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844228.7281978-1055-167589911730313/AnsiballZ_copy.py'
Jan 31 07:23:49 compute-2 sudo[209399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:49 compute-2 python3.9[209401]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844228.7281978-1055-167589911730313/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:49 compute-2 sudo[209399]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:23:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:50.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:23:50 compute-2 sudo[209551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hartmiyojedxvhusfhnnzduuucjbbhex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844230.1515152-1103-192120736107963/AnsiballZ_lineinfile.py'
Jan 31 07:23:50 compute-2 sudo[209551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:50 compute-2 python3.9[209553]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:50 compute-2 sudo[209551]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:50.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:51 compute-2 sudo[209704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcenszlsvkjzpbeffksfnysizhepiegq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844230.8553016-1127-54839194182933/AnsiballZ_systemd.py'
Jan 31 07:23:51 compute-2 sudo[209704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:51 compute-2 ceph-mon[77282]: pgmap v678: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:51 compute-2 python3.9[209706]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:23:51 compute-2 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 31 07:23:51 compute-2 systemd[1]: Stopped Load Kernel Modules.
Jan 31 07:23:51 compute-2 systemd[1]: Stopping Load Kernel Modules...
Jan 31 07:23:51 compute-2 systemd[1]: Starting Load Kernel Modules...
Jan 31 07:23:51 compute-2 systemd[1]: Finished Load Kernel Modules.
Jan 31 07:23:51 compute-2 sudo[209704]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:52.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:52 compute-2 sudo[209860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebvgtepztuibejtxlndrlpuwbiitjixf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844232.209315-1151-273714090954255/AnsiballZ_dnf.py'
Jan 31 07:23:52 compute-2 sudo[209860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:52 compute-2 python3.9[209862]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:23:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:52.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:53 compute-2 ceph-mon[77282]: pgmap v679: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:23:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:23:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:54.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:23:54 compute-2 ceph-mon[77282]: pgmap v680: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:54.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:55 compute-2 systemd[1]: Reloading.
Jan 31 07:23:55 compute-2 systemd-rc-local-generator[209889]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:23:55 compute-2 systemd-sysv-generator[209893]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:23:55 compute-2 systemd[1]: Reloading.
Jan 31 07:23:55 compute-2 systemd-rc-local-generator[209933]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:23:55 compute-2 systemd-sysv-generator[209937]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:23:55 compute-2 systemd-logind[801]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 31 07:23:56 compute-2 systemd-logind[801]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 31 07:23:56 compute-2 lvm[209981]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 07:23:56 compute-2 lvm[209981]: VG ceph_vg0 finished
Jan 31 07:23:56 compute-2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 07:23:56 compute-2 systemd[1]: Starting man-db-cache-update.service...
Jan 31 07:23:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:56.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:56 compute-2 systemd[1]: Reloading.
Jan 31 07:23:56 compute-2 systemd-rc-local-generator[210022]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:23:56 compute-2 systemd-sysv-generator[210026]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:23:56 compute-2 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 07:23:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:56.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:57 compute-2 ceph-mon[77282]: pgmap v681: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:57 compute-2 sudo[209860]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:58.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:58 compute-2 sudo[211330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qonvasehvmisfqgytcavoekrsgxlxcyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844238.2269278-1174-191611729828677/AnsiballZ_systemd_service.py'
Jan 31 07:23:58 compute-2 sudo[211330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:23:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:58.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:23:59 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 07:23:59 compute-2 systemd[1]: Finished man-db-cache-update.service.
Jan 31 07:23:59 compute-2 systemd[1]: man-db-cache-update.service: Consumed 1.361s CPU time.
Jan 31 07:23:59 compute-2 systemd[1]: run-r7f0a43ee829542c296b667c893805878.service: Deactivated successfully.
Jan 31 07:23:59 compute-2 python3.9[211332]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:23:59 compute-2 systemd[1]: Stopping Open-iSCSI...
Jan 31 07:23:59 compute-2 iscsid[204715]: iscsid shutting down.
Jan 31 07:23:59 compute-2 systemd[1]: iscsid.service: Deactivated successfully.
Jan 31 07:23:59 compute-2 systemd[1]: Stopped Open-iSCSI.
Jan 31 07:23:59 compute-2 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 31 07:23:59 compute-2 systemd[1]: Starting Open-iSCSI...
Jan 31 07:23:59 compute-2 systemd[1]: Started Open-iSCSI.
Jan 31 07:23:59 compute-2 sudo[211330]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:59 compute-2 ceph-mon[77282]: pgmap v682: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:59 compute-2 sudo[211488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tavwoerqmseytgamjgoookgdozpxmvdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844239.373523-1199-195728494437497/AnsiballZ_systemd_service.py'
Jan 31 07:23:59 compute-2 sudo[211488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:59 compute-2 python3.9[211490]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:23:59 compute-2 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 31 07:23:59 compute-2 multipathd[208804]: exit (signal)
Jan 31 07:24:00 compute-2 multipathd[208804]: --------shut down-------
Jan 31 07:24:00 compute-2 systemd[1]: multipathd.service: Deactivated successfully.
Jan 31 07:24:00 compute-2 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 31 07:24:00 compute-2 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 31 07:24:00 compute-2 multipathd[211496]: --------start up--------
Jan 31 07:24:00 compute-2 multipathd[211496]: read /etc/multipath.conf
Jan 31 07:24:00 compute-2 multipathd[211496]: path checkers start up
Jan 31 07:24:00 compute-2 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 31 07:24:00 compute-2 sudo[211488]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:00.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:00.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:00 compute-2 python3.9[211653]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:24:01 compute-2 ceph-mon[77282]: pgmap v683: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:01 compute-2 sudo[211808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bivohvyhsmrinifjcgdioaxtktruatno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844241.5176215-1251-176104739957253/AnsiballZ_file.py'
Jan 31 07:24:01 compute-2 sudo[211808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:01 compute-2 python3.9[211810]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:01 compute-2 sudo[211808]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:02.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:02 compute-2 sudo[211960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wswirfhyvvswhaztxdgxgyggwtyaymet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844242.424952-1284-61434767927263/AnsiballZ_systemd_service.py'
Jan 31 07:24:02 compute-2 sudo[211960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:02.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:02 compute-2 python3.9[211962]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 07:24:02 compute-2 systemd[1]: Reloading.
Jan 31 07:24:03 compute-2 systemd-rc-local-generator[211985]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:24:03 compute-2 systemd-sysv-generator[211988]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:24:03 compute-2 sudo[211999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:24:03 compute-2 sudo[211999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:03 compute-2 sudo[211999]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:03 compute-2 sudo[211960]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:03 compute-2 sudo[212025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:24:03 compute-2 sudo[212025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:03 compute-2 ceph-mon[77282]: pgmap v684: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:03 compute-2 sudo[212025]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:03 compute-2 python3.9[212199]: ansible-ansible.builtin.service_facts Invoked
Jan 31 07:24:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:24:03 compute-2 network[212216]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 07:24:03 compute-2 network[212217]: 'network-scripts' will be removed from distribution in near future.
Jan 31 07:24:03 compute-2 network[212218]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 07:24:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:24:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:04.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:24:04 compute-2 sudo[212247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:24:04 compute-2 sudo[212247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:04 compute-2 sudo[212247]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:04 compute-2 sudo[212275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:24:04 compute-2 sudo[212275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:04 compute-2 sudo[212275]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:24:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:04.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:24:04 compute-2 sudo[212304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:24:04 compute-2 sudo[212304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:04 compute-2 sudo[212304]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:04 compute-2 sudo[212332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 07:24:04 compute-2 sudo[212332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:05 compute-2 podman[212452]: 2026-01-31 07:24:05.320492994 +0000 UTC m=+0.069965665 container exec 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:24:05 compute-2 podman[212452]: 2026-01-31 07:24:05.42467429 +0000 UTC m=+0.174146971 container exec_died 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:24:05 compute-2 ceph-mon[77282]: pgmap v685: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:05 compute-2 podman[212655]: 2026-01-31 07:24:05.969388914 +0000 UTC m=+0.077183894 container exec f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 07:24:06 compute-2 podman[212683]: 2026-01-31 07:24:06.038212397 +0000 UTC m=+0.056202057 container exec_died f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 07:24:06 compute-2 podman[212655]: 2026-01-31 07:24:06.088701866 +0000 UTC m=+0.196496856 container exec_died f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 07:24:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:06.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:06 compute-2 podman[212756]: 2026-01-31 07:24:06.434837878 +0000 UTC m=+0.135467428 container exec 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.openshift.expose-services=, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, release=1793, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 31 07:24:06 compute-2 podman[212776]: 2026-01-31 07:24:06.519167778 +0000 UTC m=+0.055834277 container exec_died 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, release=1793, version=2.2.4, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, com.redhat.component=keepalived-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20)
Jan 31 07:24:06 compute-2 podman[212756]: 2026-01-31 07:24:06.575941359 +0000 UTC m=+0.276570939 container exec_died 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, version=2.2.4, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, name=keepalived, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, description=keepalived for Ceph, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 31 07:24:06 compute-2 sudo[212332]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:06 compute-2 ceph-mon[77282]: pgmap v686: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:24:06.826 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:24:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:24:06.829 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:24:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:24:06.830 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:24:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:06.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:07 compute-2 sudo[212789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:24:07 compute-2 sudo[212789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:07 compute-2 sudo[212789]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:07 compute-2 sudo[212814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:24:07 compute-2 sudo[212814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:07 compute-2 sudo[212814]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:07 compute-2 sudo[212839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:24:07 compute-2 sudo[212839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:07 compute-2 sudo[212839]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:07 compute-2 sudo[212865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:24:07 compute-2 sudo[212865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:07 compute-2 sudo[212865]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:07 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:24:07 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:24:07 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:24:07 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:24:07 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:24:07 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:24:07 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:24:07 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:24:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:08.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:08 compute-2 sudo[213046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysfjjidcvbaossrcfhbvqkxnmcxurcsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844248.3779268-1340-92443957090519/AnsiballZ_systemd_service.py'
Jan 31 07:24:08 compute-2 sudo[213046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:24:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:08.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:08 compute-2 python3.9[213048]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:24:08 compute-2 ceph-mon[77282]: pgmap v687: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:08 compute-2 sudo[213046]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:09 compute-2 sudo[213200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zihpbnknzljxqnphdvwlugzadkkyxsxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844249.070969-1340-9797549101432/AnsiballZ_systemd_service.py'
Jan 31 07:24:09 compute-2 sudo[213200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:09 compute-2 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 31 07:24:09 compute-2 python3.9[213202]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:24:09 compute-2 sudo[213200]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:09 compute-2 podman[213205]: 2026-01-31 07:24:09.754399654 +0000 UTC m=+0.125047681 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:24:09 compute-2 sudo[213380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jefyyxjnusuvwawjbbicymddkjmfojlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844249.7702854-1340-120262431015359/AnsiballZ_systemd_service.py'
Jan 31 07:24:09 compute-2 sudo[213380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:24:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:10.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:24:10 compute-2 python3.9[213382]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:24:10 compute-2 sudo[213380]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:10 compute-2 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 31 07:24:10 compute-2 sudo[213534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsfvdkogwachaxeizkgbuvewroahcbzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844250.485868-1340-52573556759044/AnsiballZ_systemd_service.py'
Jan 31 07:24:10 compute-2 sudo[213534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:10.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:10 compute-2 ceph-mon[77282]: pgmap v688: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:11 compute-2 python3.9[213536]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:24:11 compute-2 sudo[213534]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:11 compute-2 sudo[213688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfynnnimnulrkluoyywxpkannncebzls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844251.1808562-1340-81898436075227/AnsiballZ_systemd_service.py'
Jan 31 07:24:11 compute-2 sudo[213688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:11 compute-2 python3.9[213690]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:24:11 compute-2 sudo[213688]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:12 compute-2 sudo[213841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlzvnorcqbihsuteflktcjwhsbdiiebm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844251.8603826-1340-48023185579251/AnsiballZ_systemd_service.py'
Jan 31 07:24:12 compute-2 sudo[213841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:24:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:12.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:24:12 compute-2 python3.9[213843]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:24:12 compute-2 sudo[213841]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:12 compute-2 podman[213845]: 2026-01-31 07:24:12.469673447 +0000 UTC m=+0.071995872 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 31 07:24:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:12.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:12 compute-2 sudo[214013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shatjdtpicafkzsuubaguqfobylsvkkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844252.6160686-1340-267114942071804/AnsiballZ_systemd_service.py'
Jan 31 07:24:12 compute-2 sudo[214013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:13 compute-2 ceph-mon[77282]: pgmap v689: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:13 compute-2 python3.9[214015]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:24:13 compute-2 sudo[214013]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:13 compute-2 sudo[214167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsbvmhnwimajosmdmksmaroabrupfsnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844253.3013644-1340-278774759447922/AnsiballZ_systemd_service.py'
Jan 31 07:24:13 compute-2 sudo[214167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:24:13 compute-2 python3.9[214169]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:24:13 compute-2 sudo[214167]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:14 compute-2 sudo[214195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:24:14 compute-2 sudo[214195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:14 compute-2 sudo[214195]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:14 compute-2 sudo[214220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:24:14 compute-2 sudo[214220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:14 compute-2 sudo[214220]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:24:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:14.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:24:14 compute-2 ceph-mon[77282]: pgmap v690: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:24:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:24:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:14.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:15 compute-2 sudo[214371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-civvakrtcqqilkwolpkaeypkfcxhehbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844255.106467-1518-24310249833427/AnsiballZ_file.py'
Jan 31 07:24:15 compute-2 sudo[214371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:15 compute-2 python3.9[214373]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:15 compute-2 sudo[214371]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:15 compute-2 sudo[214523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrzknpliuvmvmfrzerunbnlssjptmrfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844255.6704297-1518-211260227206859/AnsiballZ_file.py'
Jan 31 07:24:15 compute-2 sudo[214523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:16 compute-2 python3.9[214525]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:16 compute-2 sudo[214523]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:16.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:16 compute-2 sudo[214675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyhaxjmbwddijfxbvogltjxuyhhhlvjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844256.259911-1518-218857387099705/AnsiballZ_file.py'
Jan 31 07:24:16 compute-2 sudo[214675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:16 compute-2 python3.9[214677]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:16 compute-2 sudo[214675]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:16.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:16 compute-2 ceph-mon[77282]: pgmap v691: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:17 compute-2 sudo[214828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wibxpomrjhhwmhxfmcocypbznnpwrxpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844256.8495305-1518-29259253507482/AnsiballZ_file.py'
Jan 31 07:24:17 compute-2 sudo[214828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:17 compute-2 python3.9[214830]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:17 compute-2 sudo[214828]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:17 compute-2 sudo[214980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlvqgkmytzkonpkubqnehfqmplkyzsvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844257.584612-1518-211596469460818/AnsiballZ_file.py'
Jan 31 07:24:17 compute-2 sudo[214980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:18 compute-2 python3.9[214982]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:18 compute-2 sudo[214980]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:24:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:18.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:24:18 compute-2 sudo[215132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfdgtkgbywbzxeeapksqfbdyudbgkdpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844258.1418426-1518-119379403823174/AnsiballZ_file.py'
Jan 31 07:24:18 compute-2 sudo[215132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:18 compute-2 python3.9[215134]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:18 compute-2 sudo[215132]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:24:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:18.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:18 compute-2 ceph-mon[77282]: pgmap v692: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:19 compute-2 sudo[215284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gencarvvwtwzsuqdidqtnvohstrkcahv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844258.7397182-1518-194664613385097/AnsiballZ_file.py'
Jan 31 07:24:19 compute-2 sudo[215284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:19 compute-2 python3.9[215286]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:19 compute-2 sudo[215284]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:19 compute-2 sudo[215437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lziijnnttmxqcxfibqxiyzraapapzozi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844259.337985-1518-210851690633619/AnsiballZ_file.py'
Jan 31 07:24:19 compute-2 sudo[215437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:19 compute-2 python3.9[215439]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:19 compute-2 sudo[215437]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:20.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:20 compute-2 sudo[215589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlahrnpztudijhxdgmowtxxoazrnmjpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844260.377191-1689-199254475063089/AnsiballZ_file.py'
Jan 31 07:24:20 compute-2 sudo[215589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:20 compute-2 python3.9[215591]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:20 compute-2 sudo[215589]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:20.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:20 compute-2 ceph-mon[77282]: pgmap v693: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:21 compute-2 sudo[215742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvmidgkntnnfqdernqnbnrqngjbexhxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844260.99114-1689-277683656585728/AnsiballZ_file.py'
Jan 31 07:24:21 compute-2 sudo[215742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:21 compute-2 python3.9[215744]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:21 compute-2 sudo[215742]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:21 compute-2 sudo[215894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msdaofursfeqezkloskbjegdpzvwcxpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844261.7174354-1689-140339631095699/AnsiballZ_file.py'
Jan 31 07:24:21 compute-2 sudo[215894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:22 compute-2 python3.9[215896]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:22 compute-2 sudo[215894]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:22 compute-2 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 31 07:24:22 compute-2 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 31 07:24:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:22.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:22 compute-2 sudo[216048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-denflgfatctjlzmzkowkrlhdqnqlnddu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844262.2478995-1689-112974677108894/AnsiballZ_file.py'
Jan 31 07:24:22 compute-2 sudo[216048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:22 compute-2 python3.9[216050]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:22 compute-2 sudo[216048]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:22.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:22 compute-2 ceph-mon[77282]: pgmap v694: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:22 compute-2 sudo[216200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thxyuxlvvqxcfopbpwztlqunndyxnapz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844262.7531145-1689-178899233832007/AnsiballZ_file.py'
Jan 31 07:24:22 compute-2 sudo[216200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:23 compute-2 python3.9[216202]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:23 compute-2 sudo[216200]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:23 compute-2 sudo[216279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:24:23 compute-2 sudo[216279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:23 compute-2 sudo[216279]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:23 compute-2 sudo[216328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:24:23 compute-2 sudo[216328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:23 compute-2 sudo[216328]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:23 compute-2 sudo[216403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxsoxcfjpbnxajyxctwibmwadgvcwgfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844263.2525764-1689-244363456134304/AnsiballZ_file.py'
Jan 31 07:24:23 compute-2 sudo[216403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:23 compute-2 python3.9[216405]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:23 compute-2 sudo[216403]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:24:24 compute-2 sudo[216555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqivnuoqnznoncpyuybrhqumjrxygvca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844263.8483243-1689-246305636356036/AnsiballZ_file.py'
Jan 31 07:24:24 compute-2 sudo[216555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:24 compute-2 python3.9[216557]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:24 compute-2 sudo[216555]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:24.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:24 compute-2 sudo[216707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqfplqohnyajsovihzaxzzrjfoqcsfdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844264.356599-1689-98841296313209/AnsiballZ_file.py'
Jan 31 07:24:24 compute-2 sudo[216707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:24 compute-2 python3.9[216709]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:24 compute-2 sudo[216707]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:24.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:24 compute-2 ceph-mon[77282]: pgmap v695: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:25 compute-2 sudo[216860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glzacqfzterkeomwxfvtqmlkxezayxyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844265.3349717-1863-245331025057879/AnsiballZ_command.py'
Jan 31 07:24:25 compute-2 sudo[216860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:25 compute-2 python3.9[216862]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:24:25 compute-2 sudo[216860]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:26.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:26 compute-2 python3.9[217014]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 07:24:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:26.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:27 compute-2 ceph-mon[77282]: pgmap v696: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:27 compute-2 sudo[217165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khfqfqhotvyisceseazpynygfcqaciou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844266.9504514-1917-38145160544320/AnsiballZ_systemd_service.py'
Jan 31 07:24:27 compute-2 sudo[217165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:27 compute-2 python3.9[217167]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 07:24:27 compute-2 systemd[1]: Reloading.
Jan 31 07:24:27 compute-2 systemd-rc-local-generator[217195]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:24:27 compute-2 systemd-sysv-generator[217199]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:24:27 compute-2 sudo[217165]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:28.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:28 compute-2 sudo[217353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uayifniaqaectxnbrngbleueaptanmbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844268.0795515-1941-65276769090261/AnsiballZ_command.py'
Jan 31 07:24:28 compute-2 sudo[217353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:28 compute-2 python3.9[217355]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:24:28 compute-2 sudo[217353]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:24:28 compute-2 sudo[217506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuinduvojpgzxrvnfbimmlndmawxnwje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844268.6559536-1941-260495332649146/AnsiballZ_command.py'
Jan 31 07:24:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:28.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:28 compute-2 sudo[217506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:29 compute-2 python3.9[217508]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:24:29 compute-2 sudo[217506]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:29 compute-2 ceph-mon[77282]: pgmap v697: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:29 compute-2 sudo[217660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpggjvhwugijbtglayyxyefjfvqijfzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844269.1731725-1941-224824431122079/AnsiballZ_command.py'
Jan 31 07:24:29 compute-2 sudo[217660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:29 compute-2 python3.9[217662]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:24:29 compute-2 sudo[217660]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:30 compute-2 sudo[217813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktovawvyvamolpnfaeqmesknvwjybrwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844269.7808943-1941-220058151686868/AnsiballZ_command.py'
Jan 31 07:24:30 compute-2 sudo[217813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:30 compute-2 python3.9[217815]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:24:30 compute-2 sudo[217813]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:30.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:30 compute-2 sudo[217966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arxaatkgjbumqknwonlugqrtwvybliyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844270.335981-1941-45205320235592/AnsiballZ_command.py'
Jan 31 07:24:30 compute-2 sudo[217966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:30 compute-2 python3.9[217968]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:24:30 compute-2 sudo[217966]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:30.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:31 compute-2 ceph-mon[77282]: pgmap v698: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:31 compute-2 sudo[218120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgwemxqlyaovdnjhldlwpbhqzlcvpvww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844270.9642892-1941-92098417334982/AnsiballZ_command.py'
Jan 31 07:24:31 compute-2 sudo[218120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:31 compute-2 python3.9[218122]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:24:31 compute-2 sudo[218120]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:31 compute-2 sudo[218273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usfcgsgeninpkyqiphjqgpqfyhzztqqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844271.4940171-1941-113667113546669/AnsiballZ_command.py'
Jan 31 07:24:31 compute-2 sudo[218273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:31 compute-2 python3.9[218275]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:24:31 compute-2 sudo[218273]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:32 compute-2 sudo[218426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcqvcygwugudqspqpenqcfhybfmbwwml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844272.085363-1941-83069032547095/AnsiballZ_command.py'
Jan 31 07:24:32 compute-2 sudo[218426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:24:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:32.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:24:32 compute-2 python3.9[218428]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:24:32 compute-2 sudo[218426]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:32.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:33 compute-2 ceph-mon[77282]: pgmap v699: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:24:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:24:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:34.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:24:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:34.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:35 compute-2 ceph-mon[77282]: pgmap v700: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:35 compute-2 sudo[218581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsaacayfvvhutjltxyinuaqyclefilkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844275.1662607-2148-121972104417967/AnsiballZ_file.py'
Jan 31 07:24:35 compute-2 sudo[218581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:35 compute-2 python3.9[218583]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:35 compute-2 sudo[218581]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:35 compute-2 sudo[218733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agcbewyealgzjbrjqchkmxeatrzdxavt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844275.7014112-2148-29612766257286/AnsiballZ_file.py'
Jan 31 07:24:35 compute-2 sudo[218733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:36 compute-2 python3.9[218735]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:36 compute-2 sudo[218733]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:24:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:36.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:24:36 compute-2 sudo[218885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edxcquqfpjadsphuhfadcvnveklibwaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844276.2464254-2148-259125835567692/AnsiballZ_file.py'
Jan 31 07:24:36 compute-2 sudo[218885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:36 compute-2 python3.9[218887]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:36 compute-2 sudo[218885]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:36.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:37 compute-2 ceph-mon[77282]: pgmap v701: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:37 compute-2 sudo[219038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvluwfienspjttsgprhbeyxacvhprvqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844277.2842445-2214-94742593426925/AnsiballZ_file.py'
Jan 31 07:24:37 compute-2 sudo[219038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:37 compute-2 python3.9[219040]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:37 compute-2 sudo[219038]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:38 compute-2 sudo[219190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpyfsbvqkwjdslduvwuycclbrtskktgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844277.8551037-2214-273771827600540/AnsiballZ_file.py'
Jan 31 07:24:38 compute-2 sudo[219190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:38 compute-2 python3.9[219192]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:38 compute-2 sudo[219190]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:24:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:38.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:24:38 compute-2 sudo[219342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovjufzsojhjrhtrjiqriykemxugdkxrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844278.387981-2214-88832582222753/AnsiballZ_file.py'
Jan 31 07:24:38 compute-2 sudo[219342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:38 compute-2 python3.9[219344]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:38 compute-2 sudo[219342]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:24:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:38.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:39 compute-2 sudo[219495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eswoxrgasxiisammxtoqwrnhyvylqzhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844278.903239-2214-43877267081634/AnsiballZ_file.py'
Jan 31 07:24:39 compute-2 sudo[219495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:39 compute-2 ceph-mon[77282]: pgmap v702: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:39 compute-2 python3.9[219497]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:39 compute-2 sudo[219495]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:39 compute-2 sudo[219647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lycjhzarttjmhhvrkbmoajmqktrwtsmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844279.4501593-2214-188110017443808/AnsiballZ_file.py'
Jan 31 07:24:39 compute-2 sudo[219647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:39 compute-2 python3.9[219649]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:39 compute-2 sudo[219647]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:40 compute-2 podman[219650]: 2026-01-31 07:24:40.001727899 +0000 UTC m=+0.150484091 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:24:40 compute-2 sudo[219826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzxzxrqzfwbgwlcapiwxzbdotqnoxqmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844280.0068345-2214-139714022745368/AnsiballZ_file.py'
Jan 31 07:24:40 compute-2 sudo[219826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:24:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:40.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:24:40 compute-2 python3.9[219828]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:40 compute-2 sudo[219826]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:40 compute-2 sudo[219978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbxurhheoqcfckuouhbnpckbtihgmcxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844280.5815248-2214-275812479754545/AnsiballZ_file.py'
Jan 31 07:24:40 compute-2 sudo[219978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:40.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:40 compute-2 python3.9[219980]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:41 compute-2 sudo[219978]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:41 compute-2 ceph-mon[77282]: pgmap v703: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:24:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:42.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:24:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:42.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:43 compute-2 podman[220007]: 2026-01-31 07:24:43.220238516 +0000 UTC m=+0.082844820 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 07:24:43 compute-2 ceph-mon[77282]: pgmap v704: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:43 compute-2 sudo[220026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:24:43 compute-2 sudo[220026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:43 compute-2 sudo[220026]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:43 compute-2 sudo[220051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:24:43 compute-2 sudo[220051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:43 compute-2 sudo[220051]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:24:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:24:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:44.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:24:44 compute-2 ceph-mon[77282]: pgmap v705: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:44.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:46.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:46.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:46 compute-2 ceph-mon[77282]: pgmap v706: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:47 compute-2 sudo[220203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lptpkzudeoqkpjzfzxjzmbdhutnfobjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844286.8988514-2539-175273992625823/AnsiballZ_getent.py'
Jan 31 07:24:47 compute-2 sudo[220203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:47 compute-2 python3.9[220205]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 31 07:24:47 compute-2 sudo[220203]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:48 compute-2 sudo[220356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pijcrjpyhnysgnrdgribkqbfdfnqjfpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844287.7520468-2562-60907547789232/AnsiballZ_group.py'
Jan 31 07:24:48 compute-2 sudo[220356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:48.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:48 compute-2 python3.9[220358]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 07:24:48 compute-2 groupadd[220359]: group added to /etc/group: name=nova, GID=42436
Jan 31 07:24:48 compute-2 groupadd[220359]: group added to /etc/gshadow: name=nova
Jan 31 07:24:48 compute-2 groupadd[220359]: new group: name=nova, GID=42436
Jan 31 07:24:48 compute-2 sudo[220356]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:24:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:48.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:48 compute-2 ceph-mon[77282]: pgmap v707: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:49 compute-2 sudo[220515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwcalgwsqeqmcmcfuhdpwegfghoczqcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844288.9681249-2587-217944184996203/AnsiballZ_user.py'
Jan 31 07:24:49 compute-2 sudo[220515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:49 compute-2 python3.9[220517]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 07:24:49 compute-2 useradd[220519]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 31 07:24:49 compute-2 useradd[220519]: add 'nova' to group 'libvirt'
Jan 31 07:24:49 compute-2 useradd[220519]: add 'nova' to shadow group 'libvirt'
Jan 31 07:24:50 compute-2 sudo[220515]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:50.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:50.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:50 compute-2 ceph-mon[77282]: pgmap v708: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:51 compute-2 sshd-session[220550]: Accepted publickey for zuul from 192.168.122.30 port 58248 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:24:51 compute-2 systemd-logind[801]: New session 50 of user zuul.
Jan 31 07:24:51 compute-2 systemd[1]: Started Session 50 of User zuul.
Jan 31 07:24:51 compute-2 sshd-session[220550]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:24:51 compute-2 sshd-session[220553]: Received disconnect from 192.168.122.30 port 58248:11: disconnected by user
Jan 31 07:24:51 compute-2 sshd-session[220553]: Disconnected from user zuul 192.168.122.30 port 58248
Jan 31 07:24:51 compute-2 sshd-session[220550]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:24:51 compute-2 systemd[1]: session-50.scope: Deactivated successfully.
Jan 31 07:24:51 compute-2 systemd-logind[801]: Session 50 logged out. Waiting for processes to exit.
Jan 31 07:24:51 compute-2 systemd-logind[801]: Removed session 50.
Jan 31 07:24:51 compute-2 python3.9[220704]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:24:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:52.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:52 compute-2 python3.9[220825]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844291.4359653-2662-278172430042353/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:52.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:52 compute-2 python3.9[220975]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:24:52 compute-2 ceph-mon[77282]: pgmap v709: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:53 compute-2 python3.9[221052]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:24:53 compute-2 python3.9[221202]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:24:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:54.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:54 compute-2 python3.9[221323]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844293.508361-2662-154550665104563/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:54.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:54 compute-2 python3.9[221473]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:24:55 compute-2 ceph-mon[77282]: pgmap v710: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:55 compute-2 python3.9[221595]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844294.5963771-2662-63050900037457/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=d01cc1b48d783e4ed08d12bb4d0a107aba230a69 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:56 compute-2 python3.9[221745]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:24:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:56.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:56 compute-2 python3.9[221866]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844295.6355531-2662-228869388338579/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:24:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:56.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:24:57 compute-2 ceph-mon[77282]: pgmap v711: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:57 compute-2 python3.9[222016]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:24:57 compute-2 python3.9[222138]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844296.6374743-2662-161698517398375/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:58.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:58 compute-2 sudo[222288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxvuwwnqjzuwcdatvdxllguamitiaqfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844298.3113952-2911-113104516434252/AnsiballZ_file.py'
Jan 31 07:24:58 compute-2 sudo[222288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:58 compute-2 python3.9[222290]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:58 compute-2 sudo[222288]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:24:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:24:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:24:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:58.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:24:59 compute-2 ceph-mon[77282]: pgmap v712: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:59 compute-2 sudo[222441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfecfcmedrasmyriqxwbeddvbsjwigrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844299.050547-2935-93135003795855/AnsiballZ_copy.py'
Jan 31 07:24:59 compute-2 sudo[222441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:59 compute-2 python3.9[222443]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:59 compute-2 sudo[222441]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:59 compute-2 sudo[222593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qffwtesvgchirtudsmpjqcozggyzpyhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844299.7363486-2959-181121176574464/AnsiballZ_stat.py'
Jan 31 07:24:59 compute-2 sudo[222593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:00 compute-2 python3.9[222595]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:25:00 compute-2 sudo[222593]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:00.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:00 compute-2 sudo[222745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmblslfzldjyxqydpexrtvyuvkbvxvrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844300.4227273-2982-26311814238467/AnsiballZ_stat.py'
Jan 31 07:25:00 compute-2 sudo[222745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:00 compute-2 python3.9[222747]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:25:00 compute-2 sudo[222745]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:00.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:01 compute-2 ceph-mon[77282]: pgmap v713: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:01 compute-2 sudo[222869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mauotbiivcarkhupjifzcjyxidbwadbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844300.4227273-2982-26311814238467/AnsiballZ_copy.py'
Jan 31 07:25:01 compute-2 sudo[222869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:01 compute-2 python3.9[222871]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769844300.4227273-2982-26311814238467/.source _original_basename=.zdvjves0 follow=False checksum=868786591676fdd6c6fbb1d60db68baf45c62bd3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 31 07:25:01 compute-2 sudo[222869]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:02 compute-2 python3.9[223023]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:25:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:02.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:02.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:02 compute-2 python3.9[223175]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:25:03 compute-2 ceph-mon[77282]: pgmap v714: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:03 compute-2 python3.9[223297]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844302.5927145-3061-266200892521691/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:25:03 compute-2 sudo[223322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:25:03 compute-2 sudo[223322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:03 compute-2 sudo[223322]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:03 compute-2 sudo[223347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:25:03 compute-2 sudo[223347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:03 compute-2 sudo[223347]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:25:04 compute-2 python3.9[223497]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:25:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:04.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:04 compute-2 python3.9[223618]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844303.780552-3106-264158622667894/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:25:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:04.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:05 compute-2 ceph-mon[77282]: pgmap v715: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:05 compute-2 sudo[223769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyogjfclekaiycntyzyxrwthetgjuwoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844305.2412524-3157-65845424474138/AnsiballZ_container_config_data.py'
Jan 31 07:25:05 compute-2 sudo[223769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:05 compute-2 python3.9[223771]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 31 07:25:05 compute-2 sudo[223769]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:06.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:06 compute-2 sudo[223921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyychqhdawpxnzmbrqmoaqdwkdyqldgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844306.2767966-3190-117574700068345/AnsiballZ_container_config_hash.py'
Jan 31 07:25:06 compute-2 sudo[223921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:25:06.829 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:25:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:25:06.833 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:25:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:25:06.833 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:25:06 compute-2 python3.9[223923]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 07:25:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:06.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:06 compute-2 sudo[223921]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:07 compute-2 ceph-mon[77282]: pgmap v716: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:07 compute-2 sudo[224074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bditymhojqcmtlvycjabfhrreymdvkae ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769844307.4244058-3219-107083222064746/AnsiballZ_edpm_container_manage.py'
Jan 31 07:25:07 compute-2 sudo[224074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:08 compute-2 python3[224076]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 07:25:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:08.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:08 compute-2 ceph-mon[77282]: pgmap v717: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:25:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:08.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:10 compute-2 podman[224114]: 2026-01-31 07:25:10.22089181 +0000 UTC m=+0.105631147 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 07:25:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:10.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:25:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:10.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:25:11 compute-2 ceph-mon[77282]: pgmap v718: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:12.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:12.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:25:14 compute-2 sudo[224184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:25:14 compute-2 sudo[224184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:14 compute-2 sudo[224184]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:14 compute-2 sudo[224209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:25:14 compute-2 sudo[224209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:14 compute-2 sudo[224209]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:14 compute-2 sudo[224234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:25:14 compute-2 sudo[224234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:14 compute-2 sudo[224234]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:14 compute-2 sudo[224259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:25:14 compute-2 sudo[224259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:25:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:14.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:25:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:14.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:15 compute-2 ceph-mon[77282]: pgmap v719: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:16.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:16.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:17 compute-2 ceph-mon[77282]: pgmap v720: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:18 compute-2 podman[224173]: 2026-01-31 07:25:18.110423528 +0000 UTC m=+3.996903839 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:25:18 compute-2 podman[224089]: 2026-01-31 07:25:18.14866581 +0000 UTC m=+9.824509087 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 07:25:18 compute-2 podman[224336]: 2026-01-31 07:25:18.288116876 +0000 UTC m=+0.048011531 container create 72c66274019d26a924cf92a287c36ed331ea1b8d964b35e85b9a5f3eb296f787 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=nova_compute_init, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true)
Jan 31 07:25:18 compute-2 podman[224336]: 2026-01-31 07:25:18.25846544 +0000 UTC m=+0.018360135 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 07:25:18 compute-2 python3[224076]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 31 07:25:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:18.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:18 compute-2 sudo[224259]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:18 compute-2 sudo[224074]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:25:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:18.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:20.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:20 compute-2 ceph-mon[77282]: pgmap v721: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:20.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:22.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:22 compute-2 ceph-mon[77282]: pgmap v722: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:22 compute-2 ceph-mon[77282]: pgmap v723: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:22.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:23 compute-2 sudo[224543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uefqlttajuvyqhiqmdnzifbniclhgpaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844323.308171-3244-54412845973355/AnsiballZ_stat.py'
Jan 31 07:25:23 compute-2 sudo[224543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:23 compute-2 python3.9[224545]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:25:23 compute-2 sudo[224546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:25:23 compute-2 ceph-mon[77282]: pgmap v724: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:25:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:25:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:25:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:25:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:25:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:25:23 compute-2 sudo[224546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:23 compute-2 sudo[224546]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:23 compute-2 sudo[224543]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:23 compute-2 sudo[224572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:25:23 compute-2 sudo[224572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:23 compute-2 sudo[224572]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:25:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:24.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:24 compute-2 ceph-mon[77282]: pgmap v725: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:24 compute-2 sudo[224747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exdfrpyozcenbtvwhzyottreklvtvmln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844324.511723-3280-52852691765398/AnsiballZ_container_config_data.py'
Jan 31 07:25:24 compute-2 sudo[224747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:24.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:24 compute-2 python3.9[224749]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 31 07:25:25 compute-2 sudo[224747]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:25 compute-2 sudo[224900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bufdsfjjgrseveaugxxhdcmpsnqzjkum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844325.4336388-3312-2900827831894/AnsiballZ_container_config_hash.py'
Jan 31 07:25:25 compute-2 sudo[224900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:25 compute-2 python3.9[224902]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 07:25:25 compute-2 sudo[224900]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:25:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:26.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:25:26 compute-2 sudo[225052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lercykhgragloqaheckklqlaglqcliuo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769844326.3125317-3342-201359643272376/AnsiballZ_edpm_container_manage.py'
Jan 31 07:25:26 compute-2 sudo[225052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:26 compute-2 ceph-mon[77282]: pgmap v726: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:26 compute-2 python3[225054]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 07:25:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:26.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:27 compute-2 podman[225093]: 2026-01-31 07:25:27.10191776 +0000 UTC m=+0.063950751 container create 3fc3af851586c6b425fec3344a5cac58df3b0c63302c8497271707d1c9a61daa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:25:27 compute-2 podman[225093]: 2026-01-31 07:25:27.064802339 +0000 UTC m=+0.026835410 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 07:25:27 compute-2 python3[225054]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 31 07:25:27 compute-2 sudo[225052]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:27 compute-2 sudo[225280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmbkhaouqetnvrfhxaifkneldjenhoco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844327.4275255-3367-219325109347410/AnsiballZ_stat.py'
Jan 31 07:25:27 compute-2 sudo[225280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:27 compute-2 python3.9[225282]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:25:27 compute-2 sudo[225280]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:25:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:28.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:25:28 compute-2 sudo[225434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xneqjgzzatplcemhdwjdwgvswnhbaqfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844328.2443655-3394-102132929932308/AnsiballZ_file.py'
Jan 31 07:25:28 compute-2 sudo[225434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:28 compute-2 python3.9[225436]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:25:28 compute-2 sudo[225434]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:25:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:28.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:29 compute-2 ceph-mon[77282]: pgmap v727: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:29 compute-2 sudo[225586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyoxendutdeucpkihmlxrhmtdvaadqwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844328.82208-3394-118298509823969/AnsiballZ_copy.py'
Jan 31 07:25:29 compute-2 sudo[225586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:29 compute-2 python3.9[225588]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769844328.82208-3394-118298509823969/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:25:29 compute-2 sudo[225586]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:29 compute-2 sudo[225592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:25:29 compute-2 sudo[225592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:29 compute-2 sudo[225592]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:29 compute-2 sudo[225637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:25:29 compute-2 sudo[225637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:29 compute-2 sudo[225637]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:29 compute-2 sudo[225712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrilglnbeajltvwfdybiodpzxzbkwqch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844328.82208-3394-118298509823969/AnsiballZ_systemd.py'
Jan 31 07:25:29 compute-2 sudo[225712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:30 compute-2 python3.9[225714]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 07:25:30 compute-2 systemd[1]: Reloading.
Jan 31 07:25:30 compute-2 systemd-rc-local-generator[225732]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:25:30 compute-2 systemd-sysv-generator[225737]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:25:30 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:25:30 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:25:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:25:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:30.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:25:30 compute-2 sudo[225712]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:30 compute-2 sudo[225823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpgvomrvdxrmizmxgehjhxbaiosnucum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844328.82208-3394-118298509823969/AnsiballZ_systemd.py'
Jan 31 07:25:30 compute-2 sudo[225823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:30.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:30 compute-2 python3.9[225825]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:25:31 compute-2 systemd[1]: Reloading.
Jan 31 07:25:31 compute-2 systemd-sysv-generator[225857]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:25:31 compute-2 systemd-rc-local-generator[225854]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:25:31 compute-2 systemd[1]: Starting nova_compute container...
Jan 31 07:25:31 compute-2 ceph-mon[77282]: pgmap v728: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:31 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:25:31 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15eb5035db4a269a76ef9f8c41fda9449763f1d47425882a52d43992938baa22/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:31 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15eb5035db4a269a76ef9f8c41fda9449763f1d47425882a52d43992938baa22/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:31 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15eb5035db4a269a76ef9f8c41fda9449763f1d47425882a52d43992938baa22/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:31 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15eb5035db4a269a76ef9f8c41fda9449763f1d47425882a52d43992938baa22/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:31 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15eb5035db4a269a76ef9f8c41fda9449763f1d47425882a52d43992938baa22/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:31 compute-2 podman[225867]: 2026-01-31 07:25:31.577125657 +0000 UTC m=+0.229973429 container init 3fc3af851586c6b425fec3344a5cac58df3b0c63302c8497271707d1c9a61daa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:25:31 compute-2 podman[225867]: 2026-01-31 07:25:31.583584184 +0000 UTC m=+0.236431946 container start 3fc3af851586c6b425fec3344a5cac58df3b0c63302c8497271707d1c9a61daa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:25:31 compute-2 nova_compute[225883]: + sudo -E kolla_set_configs
Jan 31 07:25:31 compute-2 podman[225867]: nova_compute
Jan 31 07:25:31 compute-2 systemd[1]: Started nova_compute container.
Jan 31 07:25:31 compute-2 sudo[225823]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Validating config file
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Copying service configuration files
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Deleting /etc/ceph
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Creating directory /etc/ceph
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Setting permission for /etc/ceph
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Writing out command to execute
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 07:25:31 compute-2 nova_compute[225883]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 07:25:31 compute-2 nova_compute[225883]: ++ cat /run_command
Jan 31 07:25:31 compute-2 nova_compute[225883]: + CMD=nova-compute
Jan 31 07:25:31 compute-2 nova_compute[225883]: + ARGS=
Jan 31 07:25:31 compute-2 nova_compute[225883]: + sudo kolla_copy_cacerts
Jan 31 07:25:31 compute-2 nova_compute[225883]: + [[ ! -n '' ]]
Jan 31 07:25:31 compute-2 nova_compute[225883]: + . kolla_extend_start
Jan 31 07:25:31 compute-2 nova_compute[225883]: Running command: 'nova-compute'
Jan 31 07:25:31 compute-2 nova_compute[225883]: + echo 'Running command: '\''nova-compute'\'''
Jan 31 07:25:31 compute-2 nova_compute[225883]: + umask 0022
Jan 31 07:25:31 compute-2 nova_compute[225883]: + exec nova-compute
Jan 31 07:25:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:32.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:32 compute-2 ceph-mon[77282]: pgmap v729: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.503337) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332503478, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1412, "num_deletes": 256, "total_data_size": 3356115, "memory_usage": 3404128, "flush_reason": "Manual Compaction"}
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332521778, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 2206040, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16229, "largest_seqno": 17636, "table_properties": {"data_size": 2200047, "index_size": 3320, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 11881, "raw_average_key_size": 18, "raw_value_size": 2188076, "raw_average_value_size": 3445, "num_data_blocks": 150, "num_entries": 635, "num_filter_entries": 635, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844202, "oldest_key_time": 1769844202, "file_creation_time": 1769844332, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 18495 microseconds, and 5192 cpu microseconds.
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.521864) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 2206040 bytes OK
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.521887) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.527927) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.527979) EVENT_LOG_v1 {"time_micros": 1769844332527968, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.528005) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 3349563, prev total WAL file size 3357787, number of live WAL files 2.
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.529645) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(2154KB)], [30(7869KB)]
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332529715, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 10264719, "oldest_snapshot_seqno": -1}
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 4313 keys, 9882552 bytes, temperature: kUnknown
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332593105, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 9882552, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9850896, "index_size": 19770, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10821, "raw_key_size": 107470, "raw_average_key_size": 24, "raw_value_size": 9770001, "raw_average_value_size": 2265, "num_data_blocks": 825, "num_entries": 4313, "num_filter_entries": 4313, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769844332, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.593415) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 9882552 bytes
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.596830) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.6 rd, 155.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 7.7 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(9.1) write-amplify(4.5) OK, records in: 4840, records dropped: 527 output_compression: NoCompression
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.596894) EVENT_LOG_v1 {"time_micros": 1769844332596868, "job": 16, "event": "compaction_finished", "compaction_time_micros": 63517, "compaction_time_cpu_micros": 21508, "output_level": 6, "num_output_files": 1, "total_output_size": 9882552, "num_input_records": 4840, "num_output_records": 4313, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332597382, "job": 16, "event": "table_file_deletion", "file_number": 32}
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332598545, "job": 16, "event": "table_file_deletion", "file_number": 30}
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.529515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.598578) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.598583) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.598584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.598586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.598588) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.599103) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332599131, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 257, "num_deletes": 251, "total_data_size": 21496, "memory_usage": 27752, "flush_reason": "Manual Compaction"}
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332604198, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 13304, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17638, "largest_seqno": 17893, "table_properties": {"data_size": 11551, "index_size": 50, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 645, "raw_key_size": 4640, "raw_average_key_size": 18, "raw_value_size": 8177, "raw_average_value_size": 31, "num_data_blocks": 2, "num_entries": 256, "num_filter_entries": 256, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844332, "oldest_key_time": 1769844332, "file_creation_time": 1769844332, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 5223 microseconds, and 1182 cpu microseconds.
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.604315) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 13304 bytes OK
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.604348) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.605889) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.605919) EVENT_LOG_v1 {"time_micros": 1769844332605908, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.605947) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 19470, prev total WAL file size 19470, number of live WAL files 2.
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.606477) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(12KB)], [33(9650KB)]
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332606536, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 9895856, "oldest_snapshot_seqno": -1}
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 4063 keys, 7831645 bytes, temperature: kUnknown
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332661447, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 7831645, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7803441, "index_size": 16958, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 102970, "raw_average_key_size": 25, "raw_value_size": 7728623, "raw_average_value_size": 1902, "num_data_blocks": 698, "num_entries": 4063, "num_filter_entries": 4063, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769844332, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.661691) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 7831645 bytes
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.663469) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.0 rd, 142.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 9.4 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(1332.5) write-amplify(588.7) OK, records in: 4569, records dropped: 506 output_compression: NoCompression
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.663505) EVENT_LOG_v1 {"time_micros": 1769844332663489, "job": 18, "event": "compaction_finished", "compaction_time_micros": 54986, "compaction_time_cpu_micros": 24652, "output_level": 6, "num_output_files": 1, "total_output_size": 7831645, "num_input_records": 4569, "num_output_records": 4063, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332664281, "job": 18, "event": "table_file_deletion", "file_number": 35}
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332665247, "job": 18, "event": "table_file_deletion", "file_number": 33}
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.606414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.665330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.665334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.665336) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.665338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:25:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:25:32.665340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:25:32 compute-2 python3.9[226044]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:25:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:32.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:33 compute-2 python3.9[226196]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:25:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:25:34 compute-2 nova_compute[225883]: 2026-01-31 07:25:34.298 225887 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 07:25:34 compute-2 nova_compute[225883]: 2026-01-31 07:25:34.298 225887 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 07:25:34 compute-2 nova_compute[225883]: 2026-01-31 07:25:34.298 225887 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 07:25:34 compute-2 nova_compute[225883]: 2026-01-31 07:25:34.299 225887 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 31 07:25:34 compute-2 python3.9[226348]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:25:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:25:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:34.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:25:34 compute-2 nova_compute[225883]: 2026-01-31 07:25:34.447 225887 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:25:34 compute-2 nova_compute[225883]: 2026-01-31 07:25:34.464 225887 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:25:34 compute-2 nova_compute[225883]: 2026-01-31 07:25:34.464 225887 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 31 07:25:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:25:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:34.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:25:34 compute-2 ceph-mon[77282]: pgmap v730: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.158 225887 INFO nova.virt.driver [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 31 07:25:35 compute-2 sudo[226501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kddpeghqoutcpkjdkiulccaatxfytmzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844334.8087165-3573-236844201654959/AnsiballZ_podman_container.py'
Jan 31 07:25:35 compute-2 sudo[226501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.344 225887 INFO nova.compute.provider_config [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.419 225887 DEBUG oslo_concurrency.lockutils [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.420 225887 DEBUG oslo_concurrency.lockutils [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.420 225887 DEBUG oslo_concurrency.lockutils [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.421 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.421 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.422 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.422 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.422 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.422 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.423 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.423 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.424 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.424 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.424 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.424 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.425 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.425 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.425 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.426 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.426 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.426 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.427 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.427 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] console_host                   = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.427 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.428 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.428 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.428 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.429 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.429 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.429 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.430 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.430 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.431 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.431 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.431 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.432 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.432 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.432 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.432 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.433 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.433 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.433 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] host                           = compute-2.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.434 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.434 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.435 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.435 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.435 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.436 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.436 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.436 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.437 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.437 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.437 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.438 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.438 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.438 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.438 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.439 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.439 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.439 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.440 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.440 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.440 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.441 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.441 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.441 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.442 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.442 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.442 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.442 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.443 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.443 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.443 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.444 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.444 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.444 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.445 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.445 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.445 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.446 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.446 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.446 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.447 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.447 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] my_block_storage_ip            = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.447 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] my_ip                          = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.448 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.448 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.448 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.449 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.449 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.449 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.450 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.450 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.450 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.451 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.451 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.451 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.452 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.452 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.452 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.453 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.453 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.453 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.453 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.454 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.454 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.454 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.455 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.455 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.455 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.456 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.456 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.456 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.457 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.457 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.457 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.457 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.458 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.458 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.458 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.459 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.459 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.459 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.460 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.460 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.460 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.461 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.461 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.461 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.462 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.462 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.462 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.463 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.463 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.463 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.463 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.463 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.464 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.464 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.464 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.464 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.464 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.465 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.465 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.465 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.465 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.465 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.466 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.466 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.466 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.466 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.467 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.467 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.467 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.467 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.467 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.468 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.468 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.468 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.468 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.468 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.468 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.469 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.469 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.469 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.469 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.469 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.470 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.470 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.470 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.470 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.470 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.471 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.471 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.471 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.471 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.471 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.471 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.472 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.472 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.472 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.472 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.473 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.473 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.473 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.473 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.473 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.474 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.474 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.474 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.474 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.474 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.475 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.475 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.475 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.475 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.475 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.475 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.476 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.476 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.476 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.476 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.476 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.477 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.477 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.477 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.477 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.478 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.478 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.478 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.478 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.479 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.479 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.479 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.479 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.480 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.480 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.480 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.480 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.481 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.481 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.481 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.481 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.481 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.482 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.482 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.482 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.482 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.482 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.483 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.483 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.483 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.483 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.483 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.484 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.484 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.484 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.484 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.485 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.485 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.485 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.485 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.486 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.486 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.486 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.487 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.487 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.487 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.487 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.488 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.488 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.488 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.488 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.489 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.489 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.489 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.489 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.490 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.490 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.490 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.491 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.491 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.491 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.491 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.492 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.492 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.492 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.492 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.493 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.493 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.494 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.494 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.494 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.495 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.495 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.495 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.496 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.496 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.496 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.497 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.497 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.497 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.497 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.498 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.498 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.498 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.499 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.499 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.499 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.499 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.499 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.499 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.499 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.500 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.500 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.500 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.500 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.500 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.501 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.501 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.501 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.501 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.501 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.501 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.501 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.502 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.502 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.502 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.502 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.502 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.502 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.502 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.503 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.503 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.503 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.503 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.503 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.503 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.503 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.504 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.504 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.504 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.504 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.504 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.504 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.504 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.505 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.505 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.505 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.505 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.505 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.505 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.505 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.506 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.506 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.506 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.506 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.506 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.506 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.506 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.507 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.507 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.507 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.507 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.507 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.507 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.507 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.508 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.508 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.508 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.508 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.508 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.508 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.509 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.509 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.509 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.509 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.510 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.510 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.510 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.511 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.511 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.511 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.511 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.511 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.512 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.512 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.512 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.512 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.513 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.513 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.513 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.513 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.513 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.514 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.514 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.514 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.514 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.514 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.515 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.515 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.515 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.515 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.516 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.516 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.516 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.516 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.516 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.517 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.517 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.517 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.517 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.517 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.517 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.517 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.518 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.518 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.518 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.518 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.518 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.518 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.518 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.519 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.519 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.519 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.519 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.519 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.519 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.519 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.520 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.520 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.520 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.520 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.520 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.520 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.520 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.521 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.521 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.521 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.521 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.521 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.521 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.521 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.522 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.522 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.522 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.522 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.522 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.522 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.522 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.523 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.523 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.523 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.523 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.523 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.523 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.523 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.524 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.524 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.524 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.524 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.524 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.524 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.524 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.525 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.525 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.525 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.525 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.525 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.525 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.526 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.526 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.526 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.526 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.526 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.526 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.526 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.527 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.527 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.527 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.527 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.527 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.527 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.528 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.528 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.528 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.528 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.528 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.528 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.529 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.529 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.529 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 python3.9[226503]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.529 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.529 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.529 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.529 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.530 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.530 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.530 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.530 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.530 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.530 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.530 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.531 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.531 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.531 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.531 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.531 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.531 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.531 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.532 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.532 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.532 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.532 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.532 225887 WARNING oslo_config.cfg [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 31 07:25:35 compute-2 nova_compute[225883]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 31 07:25:35 compute-2 nova_compute[225883]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 31 07:25:35 compute-2 nova_compute[225883]: and ``live_migration_inbound_addr`` respectively.
Jan 31 07:25:35 compute-2 nova_compute[225883]: ).  Its value may be silently ignored in the future.
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.532 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.533 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.533 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.533 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.533 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.533 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.533 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.534 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.534 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.534 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.534 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.534 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.534 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.534 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.535 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.535 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.535 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.535 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.535 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.rbd_secret_uuid        = f70fcd2a-dcb4-5f89-a4ba-79a09959083b log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.535 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.535 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.536 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.536 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.536 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.536 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.536 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.536 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.536 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.537 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.537 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.537 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.537 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.537 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.537 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.537 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.538 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.538 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.538 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.538 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.538 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.538 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.538 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.539 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.539 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.539 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.539 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.539 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.539 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.539 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.540 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.540 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.540 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.540 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.540 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.540 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.541 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.541 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.541 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.541 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.541 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.541 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.541 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.542 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.542 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.542 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.542 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.542 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.542 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.542 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.543 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.543 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.543 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.543 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.543 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.543 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.543 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.543 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.544 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.544 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.544 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.544 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.544 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.544 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.544 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.545 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.545 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.545 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.545 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.545 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.545 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.546 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.546 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.546 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.546 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.546 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.546 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.546 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.547 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.547 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.547 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.547 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.547 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.547 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.547 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.547 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.548 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.548 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.548 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.548 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.548 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.548 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.548 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.549 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.549 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.549 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.549 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.549 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.549 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.550 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.550 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.550 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.550 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.550 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.551 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.551 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.551 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.551 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.551 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.552 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.552 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.552 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.552 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.552 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.553 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.553 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.553 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.553 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.553 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.554 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.554 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.554 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.554 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.554 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.555 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.555 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.555 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.555 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.555 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.555 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.555 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.556 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.556 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.556 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.556 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.556 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.557 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.557 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.557 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.557 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.557 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.557 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.557 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.558 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.558 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.558 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.558 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.558 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.558 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.559 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.559 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.559 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.559 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.559 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.559 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.559 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.560 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.560 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.560 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.560 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.560 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.560 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.560 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.561 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.561 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.561 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.561 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.561 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.561 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.561 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.562 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.562 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.562 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.562 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.562 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.562 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.563 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.563 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.563 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.563 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.563 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.563 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.563 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.564 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.564 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.564 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.564 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.564 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.564 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.564 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.565 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.565 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.565 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.565 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.565 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.565 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.565 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.565 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.566 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.566 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.566 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.566 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.566 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.566 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.566 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.567 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.567 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.567 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.567 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.567 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.567 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.567 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.568 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.568 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.568 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.568 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.568 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.568 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.568 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.568 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.569 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.569 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.569 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.569 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.569 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.569 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.569 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.570 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.570 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.570 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.570 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.570 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.571 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.571 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vnc.server_proxyclient_address = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.571 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.571 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.571 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.572 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.572 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.572 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.572 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.572 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.573 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.573 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.573 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.573 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.573 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.574 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.574 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.574 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.574 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.574 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.575 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.575 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.575 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.575 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.575 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.576 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.576 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.576 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.576 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.576 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.577 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.577 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.577 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.577 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.577 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.578 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.578 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.578 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.578 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.579 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.579 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.579 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.579 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.579 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.580 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.580 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.580 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.580 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.580 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.580 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.581 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.581 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.581 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.581 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.581 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.581 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.581 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.582 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.582 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.582 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.582 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.582 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.582 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.583 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.583 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.583 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.583 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.583 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.584 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.584 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.584 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.584 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.584 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.584 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.585 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.585 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.585 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.585 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.585 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.585 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.585 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.586 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.586 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.586 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.586 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.586 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.586 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.586 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.587 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.587 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.587 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.587 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.587 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.587 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.587 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.588 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.588 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.588 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.588 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.588 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.588 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.588 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.589 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.589 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.589 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.589 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.589 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.589 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.589 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.589 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.590 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.590 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.590 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.590 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.590 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.590 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.590 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.591 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.591 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.591 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.591 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.591 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.591 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.591 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.592 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.592 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.592 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.592 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.592 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.592 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.592 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.593 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.593 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.593 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.593 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.593 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.593 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.593 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.593 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.594 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.594 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.594 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.594 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.594 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.594 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.595 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.595 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.595 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.595 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.595 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.595 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.595 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.596 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.596 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.596 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.596 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.596 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.596 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.596 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.597 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.597 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.597 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.597 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.597 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.597 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.597 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.598 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.598 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.598 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.598 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.598 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.598 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.598 225887 DEBUG oslo_service.service [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.599 225887 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.629 225887 DEBUG nova.virt.libvirt.host [None req-a3d04d79-d686-47c7-a102-4678a32904d9 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.630 225887 DEBUG nova.virt.libvirt.host [None req-a3d04d79-d686-47c7-a102-4678a32904d9 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.630 225887 DEBUG nova.virt.libvirt.host [None req-a3d04d79-d686-47c7-a102-4678a32904d9 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.630 225887 DEBUG nova.virt.libvirt.host [None req-a3d04d79-d686-47c7-a102-4678a32904d9 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 31 07:25:35 compute-2 sudo[226501]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:35 compute-2 systemd[1]: Starting libvirt QEMU daemon...
Jan 31 07:25:35 compute-2 systemd[1]: Started libvirt QEMU daemon.
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.695 225887 DEBUG nova.virt.libvirt.host [None req-a3d04d79-d686-47c7-a102-4678a32904d9 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fd86ff7ebb0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.697 225887 DEBUG nova.virt.libvirt.host [None req-a3d04d79-d686-47c7-a102-4678a32904d9 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fd86ff7ebb0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.697 225887 INFO nova.virt.libvirt.driver [None req-a3d04d79-d686-47c7-a102-4678a32904d9 - - - - - -] Connection event '1' reason 'None'
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.783 225887 WARNING nova.virt.libvirt.driver [None req-a3d04d79-d686-47c7-a102-4678a32904d9 - - - - - -] Cannot update service status on host "compute-2.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-2.ctlplane.example.com could not be found.
Jan 31 07:25:35 compute-2 nova_compute[225883]: 2026-01-31 07:25:35.784 225887 DEBUG nova.virt.libvirt.volume.mount [None req-a3d04d79-d686-47c7-a102-4678a32904d9 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 31 07:25:36 compute-2 sudo[226726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euvzzcwumrpkesxdodmpnepwopamkond ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844335.9049678-3598-160524091253211/AnsiballZ_systemd.py'
Jan 31 07:25:36 compute-2 sudo[226726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:36 compute-2 python3.9[226728]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:25:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:36.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:36 compute-2 systemd[1]: Stopping nova_compute container...
Jan 31 07:25:36 compute-2 nova_compute[225883]: 2026-01-31 07:25:36.518 225887 DEBUG oslo_concurrency.lockutils [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:25:36 compute-2 nova_compute[225883]: 2026-01-31 07:25:36.518 225887 DEBUG oslo_concurrency.lockutils [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:25:36 compute-2 nova_compute[225883]: 2026-01-31 07:25:36.519 225887 DEBUG oslo_concurrency.lockutils [None req-1cfe6e3f-6bf2-4cf2-9b96-ce2236d5cc43 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:25:36 compute-2 virtqemud[226546]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 31 07:25:36 compute-2 virtqemud[226546]: hostname: compute-2
Jan 31 07:25:36 compute-2 virtqemud[226546]: End of file while reading data: Input/output error
Jan 31 07:25:36 compute-2 systemd[1]: libpod-3fc3af851586c6b425fec3344a5cac58df3b0c63302c8497271707d1c9a61daa.scope: Deactivated successfully.
Jan 31 07:25:36 compute-2 systemd[1]: libpod-3fc3af851586c6b425fec3344a5cac58df3b0c63302c8497271707d1c9a61daa.scope: Consumed 3.091s CPU time.
Jan 31 07:25:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:36.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:36 compute-2 podman[226740]: 2026-01-31 07:25:36.940617247 +0000 UTC m=+0.472531390 container died 3fc3af851586c6b425fec3344a5cac58df3b0c63302c8497271707d1c9a61daa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 07:25:37 compute-2 ceph-mon[77282]: pgmap v731: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:37 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3fc3af851586c6b425fec3344a5cac58df3b0c63302c8497271707d1c9a61daa-userdata-shm.mount: Deactivated successfully.
Jan 31 07:25:37 compute-2 systemd[1]: var-lib-containers-storage-overlay-15eb5035db4a269a76ef9f8c41fda9449763f1d47425882a52d43992938baa22-merged.mount: Deactivated successfully.
Jan 31 07:25:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:38.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:25:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:38.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:40.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:40 compute-2 podman[226772]: 2026-01-31 07:25:40.726273322 +0000 UTC m=+0.116495575 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 31 07:25:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:25:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:40.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:25:41 compute-2 ceph-mon[77282]: pgmap v732: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:41 compute-2 podman[226740]: 2026-01-31 07:25:41.227709756 +0000 UTC m=+4.759623849 container cleanup 3fc3af851586c6b425fec3344a5cac58df3b0c63302c8497271707d1c9a61daa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:25:41 compute-2 podman[226740]: nova_compute
Jan 31 07:25:41 compute-2 podman[226800]: nova_compute
Jan 31 07:25:41 compute-2 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 31 07:25:41 compute-2 systemd[1]: Stopped nova_compute container.
Jan 31 07:25:41 compute-2 systemd[1]: Starting nova_compute container...
Jan 31 07:25:41 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:25:41 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15eb5035db4a269a76ef9f8c41fda9449763f1d47425882a52d43992938baa22/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:41 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15eb5035db4a269a76ef9f8c41fda9449763f1d47425882a52d43992938baa22/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:41 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15eb5035db4a269a76ef9f8c41fda9449763f1d47425882a52d43992938baa22/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:41 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15eb5035db4a269a76ef9f8c41fda9449763f1d47425882a52d43992938baa22/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:41 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15eb5035db4a269a76ef9f8c41fda9449763f1d47425882a52d43992938baa22/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:41 compute-2 podman[226813]: 2026-01-31 07:25:41.46994471 +0000 UTC m=+0.170297696 container init 3fc3af851586c6b425fec3344a5cac58df3b0c63302c8497271707d1c9a61daa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 07:25:41 compute-2 podman[226813]: 2026-01-31 07:25:41.477889149 +0000 UTC m=+0.178242105 container start 3fc3af851586c6b425fec3344a5cac58df3b0c63302c8497271707d1c9a61daa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:25:41 compute-2 nova_compute[226829]: + sudo -E kolla_set_configs
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Validating config file
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Copying service configuration files
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Deleting /etc/ceph
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Creating directory /etc/ceph
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Setting permission for /etc/ceph
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Writing out command to execute
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 07:25:41 compute-2 nova_compute[226829]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 07:25:41 compute-2 nova_compute[226829]: ++ cat /run_command
Jan 31 07:25:41 compute-2 nova_compute[226829]: + CMD=nova-compute
Jan 31 07:25:41 compute-2 nova_compute[226829]: + ARGS=
Jan 31 07:25:41 compute-2 nova_compute[226829]: + sudo kolla_copy_cacerts
Jan 31 07:25:41 compute-2 nova_compute[226829]: + [[ ! -n '' ]]
Jan 31 07:25:41 compute-2 nova_compute[226829]: + . kolla_extend_start
Jan 31 07:25:41 compute-2 nova_compute[226829]: Running command: 'nova-compute'
Jan 31 07:25:41 compute-2 nova_compute[226829]: + echo 'Running command: '\''nova-compute'\'''
Jan 31 07:25:41 compute-2 nova_compute[226829]: + umask 0022
Jan 31 07:25:41 compute-2 nova_compute[226829]: + exec nova-compute
Jan 31 07:25:41 compute-2 podman[226813]: nova_compute
Jan 31 07:25:41 compute-2 systemd[1]: Started nova_compute container.
Jan 31 07:25:41 compute-2 sudo[226726]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:42 compute-2 sudo[226990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qefeahnpijfrvbiawmklhimbmjpygquf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844341.8955958-3625-188683603062904/AnsiballZ_podman_container.py'
Jan 31 07:25:42 compute-2 sudo[226990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:42.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:42 compute-2 python3.9[226992]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 31 07:25:42 compute-2 ceph-mon[77282]: pgmap v733: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:42 compute-2 systemd[1]: Started libpod-conmon-72c66274019d26a924cf92a287c36ed331ea1b8d964b35e85b9a5f3eb296f787.scope.
Jan 31 07:25:42 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:25:42 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50b79987b6973eae2e5cb14bf3c0de452187fec02a1d7de5bf55de6d58f4c04a/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:42 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50b79987b6973eae2e5cb14bf3c0de452187fec02a1d7de5bf55de6d58f4c04a/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:42 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50b79987b6973eae2e5cb14bf3c0de452187fec02a1d7de5bf55de6d58f4c04a/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:42 compute-2 podman[227016]: 2026-01-31 07:25:42.8851594 +0000 UTC m=+0.259730256 container init 72c66274019d26a924cf92a287c36ed331ea1b8d964b35e85b9a5f3eb296f787 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 07:25:42 compute-2 podman[227016]: 2026-01-31 07:25:42.891115694 +0000 UTC m=+0.265686490 container start 72c66274019d26a924cf92a287c36ed331ea1b8d964b35e85b9a5f3eb296f787 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 07:25:42 compute-2 python3.9[226992]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 31 07:25:42 compute-2 nova_compute_init[227038]: INFO:nova_statedir:Applying nova statedir ownership
Jan 31 07:25:42 compute-2 nova_compute_init[227038]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 31 07:25:42 compute-2 nova_compute_init[227038]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 31 07:25:42 compute-2 nova_compute_init[227038]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 31 07:25:42 compute-2 nova_compute_init[227038]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 31 07:25:42 compute-2 nova_compute_init[227038]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 31 07:25:42 compute-2 nova_compute_init[227038]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 31 07:25:42 compute-2 nova_compute_init[227038]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 31 07:25:42 compute-2 nova_compute_init[227038]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 31 07:25:42 compute-2 nova_compute_init[227038]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 31 07:25:42 compute-2 nova_compute_init[227038]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 31 07:25:42 compute-2 nova_compute_init[227038]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 31 07:25:42 compute-2 nova_compute_init[227038]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 31 07:25:42 compute-2 nova_compute_init[227038]: INFO:nova_statedir:Nova statedir ownership complete
Jan 31 07:25:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:42.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:42 compute-2 systemd[1]: libpod-72c66274019d26a924cf92a287c36ed331ea1b8d964b35e85b9a5f3eb296f787.scope: Deactivated successfully.
Jan 31 07:25:42 compute-2 podman[227039]: 2026-01-31 07:25:42.974391585 +0000 UTC m=+0.038137490 container died 72c66274019d26a924cf92a287c36ed331ea1b8d964b35e85b9a5f3eb296f787 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 07:25:43 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-72c66274019d26a924cf92a287c36ed331ea1b8d964b35e85b9a5f3eb296f787-userdata-shm.mount: Deactivated successfully.
Jan 31 07:25:43 compute-2 systemd[1]: var-lib-containers-storage-overlay-50b79987b6973eae2e5cb14bf3c0de452187fec02a1d7de5bf55de6d58f4c04a-merged.mount: Deactivated successfully.
Jan 31 07:25:43 compute-2 nova_compute[226829]: 2026-01-31 07:25:43.348 226833 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 07:25:43 compute-2 nova_compute[226829]: 2026-01-31 07:25:43.349 226833 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 07:25:43 compute-2 nova_compute[226829]: 2026-01-31 07:25:43.349 226833 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 07:25:43 compute-2 nova_compute[226829]: 2026-01-31 07:25:43.349 226833 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 31 07:25:43 compute-2 nova_compute[226829]: 2026-01-31 07:25:43.472 226833 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:25:43 compute-2 nova_compute[226829]: 2026-01-31 07:25:43.479 226833 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:25:43 compute-2 nova_compute[226829]: 2026-01-31 07:25:43.479 226833 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 31 07:25:43 compute-2 sudo[226990]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:43 compute-2 podman[227045]: 2026-01-31 07:25:43.599620584 +0000 UTC m=+0.641995491 container cleanup 72c66274019d26a924cf92a287c36ed331ea1b8d964b35e85b9a5f3eb296f787 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 07:25:43 compute-2 systemd[1]: libpod-conmon-72c66274019d26a924cf92a287c36ed331ea1b8d964b35e85b9a5f3eb296f787.scope: Deactivated successfully.
Jan 31 07:25:43 compute-2 ceph-mon[77282]: pgmap v734: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:43 compute-2 sudo[227108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:25:43 compute-2 sudo[227108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:43 compute-2 sudo[227108]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:43 compute-2 sudo[227133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:25:43 compute-2 nova_compute[226829]: 2026-01-31 07:25:43.937 226833 INFO nova.virt.driver [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 31 07:25:43 compute-2 sudo[227133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:43 compute-2 sudo[227133]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:25:44 compute-2 sshd-session[202234]: Connection closed by 192.168.122.30 port 49272
Jan 31 07:25:44 compute-2 sshd-session[202208]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:25:44 compute-2 systemd[1]: session-49.scope: Deactivated successfully.
Jan 31 07:25:44 compute-2 systemd[1]: session-49.scope: Consumed 1min 47.672s CPU time.
Jan 31 07:25:44 compute-2 systemd-logind[801]: Session 49 logged out. Waiting for processes to exit.
Jan 31 07:25:44 compute-2 systemd-logind[801]: Removed session 49.
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.088 226833 INFO nova.compute.provider_config [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.199 226833 DEBUG oslo_concurrency.lockutils [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.200 226833 DEBUG oslo_concurrency.lockutils [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.200 226833 DEBUG oslo_concurrency.lockutils [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.201 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.202 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.202 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.202 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.203 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.203 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.203 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.204 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.204 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.204 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.205 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.205 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.206 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.206 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.206 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.207 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.207 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.207 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.208 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.208 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] console_host                   = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.209 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.209 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.209 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.210 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.210 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.210 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.211 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.211 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.212 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.212 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.212 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.213 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.213 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.214 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.214 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.214 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.215 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.215 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.216 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] host                           = compute-2.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.216 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.216 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.217 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.217 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.218 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.218 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.218 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.219 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.219 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.219 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.220 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.220 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.221 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.221 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.221 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.222 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.222 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.223 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.223 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.223 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.224 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.224 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.224 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.225 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.225 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.225 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.226 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.226 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.226 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.227 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.227 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.228 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.228 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.228 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.229 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.229 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.230 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.230 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.230 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.231 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.231 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.232 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] my_block_storage_ip            = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.232 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] my_ip                          = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.233 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.233 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.233 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.234 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.234 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.234 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.235 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.235 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.236 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.236 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.236 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.237 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.237 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.237 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.238 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.238 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.239 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.239 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.239 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.240 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.240 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.240 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.241 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.241 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.242 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.242 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.242 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.243 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.243 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.244 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.244 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.245 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.245 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.246 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.246 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.246 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.246 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.247 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.247 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.247 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.248 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.248 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.248 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.248 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.249 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.249 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.249 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.249 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.250 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.250 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.250 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.250 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.251 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.251 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.251 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.252 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.252 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.252 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.252 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.253 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.253 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.253 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.254 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.254 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.254 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.255 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.255 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.255 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.256 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.256 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.256 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.256 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.257 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.257 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.257 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.258 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.258 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.258 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.259 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.259 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.259 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.259 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.260 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.260 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.260 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.261 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.261 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.261 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.261 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.262 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.262 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.262 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.263 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.263 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.264 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.264 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.264 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.265 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.265 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.265 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.266 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.266 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.266 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.266 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.267 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.267 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.267 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.268 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.268 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.268 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.268 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.269 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.269 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.269 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.270 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.270 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.270 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.271 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.271 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.271 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.271 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.272 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.272 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.272 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.273 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.273 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.273 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.273 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.274 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.274 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.274 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.275 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.275 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.275 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.275 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.276 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.276 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.276 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.277 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.277 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.277 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.277 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.278 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.278 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.278 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.279 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.279 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.279 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.279 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.280 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.280 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.280 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.280 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.281 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.281 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.281 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.282 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.282 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.282 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.282 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.283 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.283 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.283 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.283 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.283 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.284 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.284 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.284 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.284 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.284 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.284 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.284 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.285 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.285 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.285 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.285 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.285 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.286 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.286 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.286 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.286 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.286 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.287 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.287 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.287 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.287 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.287 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.288 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.288 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.288 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.288 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.288 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.289 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.289 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.289 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.289 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.289 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.290 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.290 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.290 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.290 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.290 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.290 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.291 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.291 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.291 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.291 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.291 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.291 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.292 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.292 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.292 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.292 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.292 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.292 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.292 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.293 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.293 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.293 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.293 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.293 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.293 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.294 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.294 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.294 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.294 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.294 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.294 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.295 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.295 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.295 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.295 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.295 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.295 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.295 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.296 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.296 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.296 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.296 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.296 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.297 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.297 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.297 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.297 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.297 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.298 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.298 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.298 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.298 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.298 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.298 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.299 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.299 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.299 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.299 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.299 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.299 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.300 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.300 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.300 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.300 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.300 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.300 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.300 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.301 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.301 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.301 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.301 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.301 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.301 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.301 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.302 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.302 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.302 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.302 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.302 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.303 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.303 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.303 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.303 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.303 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.303 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.303 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.304 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.304 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.304 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.304 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.304 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.304 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.304 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.305 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.305 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.305 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.305 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.305 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.305 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.305 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.306 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.306 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.306 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.306 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.306 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.306 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.307 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.307 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.307 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.307 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.307 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.307 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.307 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.308 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.308 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.308 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.308 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.308 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.308 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.309 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.309 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.309 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.309 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.309 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.309 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.309 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.310 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.310 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.310 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.310 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.310 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.310 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.310 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.311 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.311 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.311 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.311 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.311 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.311 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.311 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.312 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.312 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.312 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.312 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.312 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.312 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.312 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.313 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.313 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.313 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.313 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.313 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.313 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.313 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.314 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.314 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.314 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.314 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.314 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.314 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.314 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.315 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.315 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.315 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.315 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.315 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.315 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.316 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.316 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.316 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.316 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.316 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.316 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.317 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.317 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.317 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.317 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.317 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.317 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.317 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.318 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.318 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.318 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.318 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.318 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.318 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.318 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.319 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.319 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.319 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.319 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.319 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.319 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.319 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.320 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.320 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.320 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.320 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.320 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.320 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.320 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.321 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.321 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.321 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.321 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.321 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.321 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.321 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.322 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.322 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.322 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.322 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.322 226833 WARNING oslo_config.cfg [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 31 07:25:44 compute-2 nova_compute[226829]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 31 07:25:44 compute-2 nova_compute[226829]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 31 07:25:44 compute-2 nova_compute[226829]: and ``live_migration_inbound_addr`` respectively.
Jan 31 07:25:44 compute-2 nova_compute[226829]: ).  Its value may be silently ignored in the future.
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.322 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.323 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.323 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.323 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.323 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.323 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.323 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.323 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.324 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.324 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.324 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.324 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.324 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.324 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.324 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.325 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.325 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.325 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.325 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.rbd_secret_uuid        = f70fcd2a-dcb4-5f89-a4ba-79a09959083b log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.325 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.325 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.326 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.326 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.326 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.326 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.326 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.326 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.326 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.327 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.327 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.327 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.327 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.327 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.327 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.327 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.328 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.328 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.328 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.328 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.328 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.328 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.328 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.329 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.329 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.329 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.329 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.329 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.329 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.329 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.330 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.330 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.330 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.330 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.330 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.330 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.330 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.331 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.331 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.331 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.331 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.331 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.331 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.331 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.331 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.332 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.332 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.332 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.332 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.332 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.332 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.332 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.333 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.333 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.333 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.333 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.333 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.333 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.333 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.334 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.334 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.334 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.334 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.334 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.334 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.334 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.335 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.335 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.335 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.335 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.335 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.335 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.335 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.336 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.336 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.336 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.336 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.336 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.336 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.336 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.336 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.337 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.337 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.337 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.337 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.337 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.337 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.337 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.338 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.338 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.338 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.338 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.338 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.338 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.338 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.338 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.339 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.339 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.339 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.339 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.339 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.339 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.339 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.340 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.340 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.340 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.340 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.340 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.340 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.340 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.341 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.341 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.341 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.341 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.341 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.341 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.341 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.342 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.342 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.342 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.342 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.342 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.342 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.343 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.343 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.343 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.343 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.343 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.343 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.343 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.344 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.344 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.344 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.344 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.344 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.344 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.344 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.345 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.345 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.345 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.345 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.345 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.345 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.345 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.346 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.346 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.346 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.346 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.346 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.346 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.346 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.347 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.347 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.347 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.347 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.347 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.347 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.347 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.348 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.348 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.348 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.348 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.348 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.348 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.349 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.349 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.349 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.349 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.349 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.349 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.349 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.350 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.350 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.350 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.350 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.350 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.350 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.350 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.351 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.351 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.351 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.351 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.351 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.351 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.352 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.352 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.352 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.352 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.352 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.352 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.352 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.353 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.353 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.353 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.353 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.353 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.354 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.354 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.354 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.354 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.354 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.354 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.354 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.355 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.355 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.355 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.355 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.355 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.355 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.355 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.356 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.356 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.356 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.356 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.356 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.356 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.356 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.357 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.357 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.357 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.357 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.357 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.357 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.357 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.358 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.358 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.358 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.358 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.358 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.358 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.359 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.359 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.359 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.359 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vnc.server_proxyclient_address = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.359 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.359 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.359 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.360 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.360 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.360 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.360 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.360 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.360 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.360 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.361 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.361 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.361 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.361 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.361 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.361 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.361 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.362 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.362 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.362 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.362 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.362 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.362 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.363 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.363 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.363 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.363 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.363 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.363 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.363 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.364 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.364 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.364 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.364 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.364 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.364 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.365 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.365 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.365 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.365 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.365 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.365 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.366 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.366 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.366 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.366 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.366 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.366 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.366 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.367 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.367 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.367 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.367 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.367 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.367 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.368 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.368 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.368 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.368 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.368 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.368 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.368 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.369 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.369 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.369 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.369 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.369 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.369 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.369 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.370 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.370 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.370 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.370 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.370 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.370 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.370 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.371 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.371 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.371 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.371 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.371 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.371 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.372 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.372 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.372 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.372 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.372 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.372 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.372 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.373 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.373 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.373 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.373 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.373 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.373 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.373 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.374 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.374 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.374 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.374 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.374 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.374 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.374 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.375 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.375 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.375 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.375 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.375 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.375 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.375 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.376 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.376 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.376 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.376 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.376 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.376 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.377 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.377 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.377 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.377 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.377 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.377 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.377 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.378 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.378 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.378 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.378 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.378 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.378 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.378 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.379 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.379 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.379 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.379 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.379 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.379 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.380 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.380 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.380 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.380 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.380 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.380 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.380 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.380 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.381 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.381 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.381 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.381 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.381 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.381 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.381 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.382 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.382 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.382 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.382 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.382 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.382 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.382 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.383 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.383 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.383 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.383 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.383 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.383 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.383 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.384 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.384 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.384 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.384 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.384 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.384 226833 DEBUG oslo_service.service [None req-70de08e4-58d1-406e-b9cc-d407592cfe96 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.385 226833 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)
Jan 31 07:25:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:25:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:44.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.584 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.586 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.586 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.587 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.601 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f43a3ca60d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.604 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f43a3ca60d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.605 226833 INFO nova.virt.libvirt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Connection event '1' reason 'None'
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.612 226833 INFO nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Libvirt host capabilities <capabilities>
Jan 31 07:25:44 compute-2 nova_compute[226829]: 
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <host>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <uuid>d14f084b-ec77-4fba-801f-103494d34b3a</uuid>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <cpu>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <arch>x86_64</arch>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model>EPYC-Rome-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <vendor>AMD</vendor>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <microcode version='16777317'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <signature family='23' model='49' stepping='0'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature name='x2apic'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature name='tsc-deadline'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature name='osxsave'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature name='hypervisor'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature name='tsc_adjust'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature name='spec-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature name='stibp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature name='arch-capabilities'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature name='ssbd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature name='cmp_legacy'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature name='topoext'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature name='virt-ssbd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature name='lbrv'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature name='tsc-scale'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature name='vmcb-clean'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature name='pause-filter'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature name='pfthreshold'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature name='svme-addr-chk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature name='rdctl-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature name='skip-l1dfl-vmentry'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature name='mds-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature name='pschange-mc-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <pages unit='KiB' size='4'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <pages unit='KiB' size='2048'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <pages unit='KiB' size='1048576'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </cpu>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <power_management>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <suspend_mem/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </power_management>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <iommu support='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <migration_features>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <live/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <uri_transports>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <uri_transport>tcp</uri_transport>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <uri_transport>rdma</uri_transport>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </uri_transports>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </migration_features>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <topology>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <cells num='1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <cell id='0'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:           <memory unit='KiB'>7864300</memory>
Jan 31 07:25:44 compute-2 nova_compute[226829]:           <pages unit='KiB' size='4'>1966075</pages>
Jan 31 07:25:44 compute-2 nova_compute[226829]:           <pages unit='KiB' size='2048'>0</pages>
Jan 31 07:25:44 compute-2 nova_compute[226829]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 31 07:25:44 compute-2 nova_compute[226829]:           <distances>
Jan 31 07:25:44 compute-2 nova_compute[226829]:             <sibling id='0' value='10'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:           </distances>
Jan 31 07:25:44 compute-2 nova_compute[226829]:           <cpus num='8'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:           </cpus>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         </cell>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </cells>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </topology>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <cache>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </cache>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <secmodel>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model>selinux</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <doi>0</doi>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </secmodel>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <secmodel>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model>dac</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <doi>0</doi>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </secmodel>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   </host>
Jan 31 07:25:44 compute-2 nova_compute[226829]: 
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <guest>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <os_type>hvm</os_type>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <arch name='i686'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <wordsize>32</wordsize>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <domain type='qemu'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <domain type='kvm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </arch>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <features>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <pae/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <nonpae/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <acpi default='on' toggle='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <apic default='on' toggle='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <cpuselection/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <deviceboot/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <disksnapshot default='on' toggle='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <externalSnapshot/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </features>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   </guest>
Jan 31 07:25:44 compute-2 nova_compute[226829]: 
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <guest>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <os_type>hvm</os_type>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <arch name='x86_64'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <wordsize>64</wordsize>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <domain type='qemu'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <domain type='kvm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </arch>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <features>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <acpi default='on' toggle='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <apic default='on' toggle='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <cpuselection/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <deviceboot/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <disksnapshot default='on' toggle='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <externalSnapshot/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </features>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   </guest>
Jan 31 07:25:44 compute-2 nova_compute[226829]: 
Jan 31 07:25:44 compute-2 nova_compute[226829]: </capabilities>
Jan 31 07:25:44 compute-2 nova_compute[226829]: 
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.621 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.672 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 31 07:25:44 compute-2 nova_compute[226829]: <domainCapabilities>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <domain>kvm</domain>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <arch>i686</arch>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <vcpu max='240'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <iothreads supported='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <os supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <enum name='firmware'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <loader supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='type'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>rom</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>pflash</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='readonly'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>yes</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>no</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='secure'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>no</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </loader>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   </os>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <cpu>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <mode name='host-passthrough' supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='hostPassthroughMigratable'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>on</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>off</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </mode>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <mode name='maximum' supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='maximumMigratable'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>on</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>off</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </mode>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <mode name='host-model' supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <vendor>AMD</vendor>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='x2apic'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='hypervisor'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='stibp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='ssbd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='overflow-recov'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='succor'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='ibrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='lbrv'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='tsc-scale'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='flushbyasid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='pause-filter'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='pfthreshold'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='disable' name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </mode>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <mode name='custom' supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell-noTSX'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cascadelake-Server'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='ClearwaterForest'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bhi-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cmpccxadd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ddpd-u'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='intel-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='lam'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchiti'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sha512'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sm3'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sm4'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='ClearwaterForest-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bhi-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cmpccxadd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ddpd-u'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='intel-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='lam'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchiti'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sha512'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sm3'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sm4'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cooperlake'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cooperlake-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cooperlake-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Denverton'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mpx'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Denverton-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mpx'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Denverton-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Denverton-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Dhyana-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Genoa'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amd-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='auto-ibrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='stibp-always-on'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amd-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='auto-ibrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='stibp-always-on'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amd-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='auto-ibrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='perfmon-v2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='stibp-always-on'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Milan'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Milan-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Milan-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amd-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='stibp-always-on'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Milan-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amd-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='stibp-always-on'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Rome'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Rome-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Rome-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Rome-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Turin'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amd-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='auto-ibrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vp2intersect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibpb-brtype'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='perfmon-v2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbpb'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='srso-user-kernel-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='stibp-always-on'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Turin-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amd-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='auto-ibrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vp2intersect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibpb-brtype'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='perfmon-v2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbpb'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='srso-user-kernel-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='stibp-always-on'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-v5'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='GraniteRapids'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchiti'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='GraniteRapids-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchiti'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='GraniteRapids-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10-128'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10-256'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10-512'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchiti'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='GraniteRapids-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10-128'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10-256'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10-512'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchiti'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell-noTSX'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-v5'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-v6'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-v7'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='IvyBridge'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='IvyBridge-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='IvyBridge-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='IvyBridge-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='KnightsMill'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-4fmaps'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-4vnniw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512er'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512pf'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='KnightsMill-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-4fmaps'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-4vnniw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512er'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512pf'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Opteron_G4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fma4'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xop'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Opteron_G4-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fma4'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xop'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Opteron_G5'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fma4'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tbm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xop'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Opteron_G5-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fma4'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tbm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xop'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SapphireRapids'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SapphireRapids-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SapphireRapids-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SapphireRapids-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SapphireRapids-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SierraForest'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cmpccxadd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SierraForest-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cmpccxadd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SierraForest-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cmpccxadd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='intel-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='lam'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SierraForest-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cmpccxadd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='intel-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='lam'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Client'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Client-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Client-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Client-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Client-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server-v5'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Snowridge'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='core-capability'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mpx'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='split-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Snowridge-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='core-capability'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mpx'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='split-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Snowridge-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='core-capability'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='split-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Snowridge-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='core-capability'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='split-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Snowridge-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='athlon'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnow'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnowext'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='athlon-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnow'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnowext'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='core2duo'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='core2duo-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='coreduo'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='coreduo-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='n270'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='n270-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='phenom'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnow'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnowext'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='phenom-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnow'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnowext'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </mode>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <memoryBacking supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <enum name='sourceType'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <value>file</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <value>anonymous</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <value>memfd</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   </memoryBacking>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <disk supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='diskDevice'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>disk</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>cdrom</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>floppy</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>lun</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='bus'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>ide</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>fdc</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>scsi</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>usb</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>sata</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='model'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio-transitional</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio-non-transitional</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <graphics supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='type'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>vnc</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>egl-headless</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>dbus</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </graphics>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <video supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='modelType'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>vga</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>cirrus</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>none</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>bochs</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>ramfb</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </video>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <hostdev supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='mode'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>subsystem</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='startupPolicy'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>default</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>mandatory</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>requisite</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>optional</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='subsysType'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>usb</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>pci</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>scsi</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='capsType'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='pciBackend'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </hostdev>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <rng supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='model'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio-transitional</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio-non-transitional</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='backendModel'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>random</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>egd</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>builtin</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <filesystem supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='driverType'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>path</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>handle</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtiofs</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </filesystem>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <tpm supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='model'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>tpm-tis</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>tpm-crb</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='backendModel'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>emulator</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>external</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='backendVersion'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>2.0</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </tpm>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <redirdev supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='bus'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>usb</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </redirdev>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <channel supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='type'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>pty</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>unix</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </channel>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <crypto supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='model'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='type'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>qemu</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='backendModel'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>builtin</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </crypto>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <interface supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='backendType'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>default</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>passt</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <panic supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='model'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>isa</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>hyperv</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </panic>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <console supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='type'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>null</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>vc</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>pty</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>dev</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>file</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>pipe</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>stdio</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>udp</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>tcp</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>unix</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>qemu-vdagent</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>dbus</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </console>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <features>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <gic supported='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <vmcoreinfo supported='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <genid supported='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <backingStoreInput supported='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <backup supported='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <async-teardown supported='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <s390-pv supported='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <ps2 supported='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <tdx supported='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <sev supported='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <sgx supported='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <hyperv supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='features'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>relaxed</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>vapic</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>spinlocks</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>vpindex</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>runtime</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>synic</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>stimer</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>reset</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>vendor_id</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>frequencies</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>reenlightenment</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>tlbflush</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>ipi</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>avic</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>emsr_bitmap</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>xmm_input</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <defaults>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <spinlocks>4095</spinlocks>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <stimer_direct>on</stimer_direct>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </defaults>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </hyperv>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <launchSecurity supported='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   </features>
Jan 31 07:25:44 compute-2 nova_compute[226829]: </domainCapabilities>
Jan 31 07:25:44 compute-2 nova_compute[226829]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.682 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 31 07:25:44 compute-2 nova_compute[226829]: <domainCapabilities>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <domain>kvm</domain>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <arch>i686</arch>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <vcpu max='4096'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <iothreads supported='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <os supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <enum name='firmware'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <loader supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='type'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>rom</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>pflash</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='readonly'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>yes</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>no</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='secure'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>no</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </loader>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   </os>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <cpu>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <mode name='host-passthrough' supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='hostPassthroughMigratable'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>on</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>off</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </mode>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <mode name='maximum' supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='maximumMigratable'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>on</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>off</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </mode>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <mode name='host-model' supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <vendor>AMD</vendor>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='x2apic'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='hypervisor'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='stibp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='ssbd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='overflow-recov'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='succor'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='ibrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='lbrv'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='tsc-scale'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='flushbyasid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='pause-filter'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='pfthreshold'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='disable' name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </mode>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <mode name='custom' supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell-noTSX'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cascadelake-Server'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='ClearwaterForest'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bhi-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cmpccxadd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ddpd-u'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='intel-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='lam'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchiti'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sha512'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sm3'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sm4'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='ClearwaterForest-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bhi-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cmpccxadd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ddpd-u'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='intel-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='lam'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchiti'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sha512'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sm3'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sm4'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cooperlake'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cooperlake-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cooperlake-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Denverton'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mpx'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Denverton-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mpx'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Denverton-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Denverton-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Dhyana-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Genoa'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amd-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='auto-ibrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='stibp-always-on'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amd-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='auto-ibrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='stibp-always-on'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amd-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='auto-ibrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='perfmon-v2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='stibp-always-on'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Milan'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Milan-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Milan-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amd-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='stibp-always-on'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Milan-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amd-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='stibp-always-on'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Rome'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Rome-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Rome-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Rome-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Turin'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amd-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='auto-ibrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vp2intersect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibpb-brtype'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='perfmon-v2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbpb'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='srso-user-kernel-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='stibp-always-on'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Turin-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amd-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='auto-ibrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vp2intersect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibpb-brtype'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='perfmon-v2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbpb'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='srso-user-kernel-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='stibp-always-on'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-v5'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='GraniteRapids'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchiti'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='GraniteRapids-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchiti'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='GraniteRapids-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10-128'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10-256'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10-512'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchiti'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='GraniteRapids-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10-128'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10-256'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10-512'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchiti'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell-noTSX'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-v5'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-v6'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-v7'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='IvyBridge'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='IvyBridge-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='IvyBridge-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='IvyBridge-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='KnightsMill'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-4fmaps'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-4vnniw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512er'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512pf'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='KnightsMill-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-4fmaps'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-4vnniw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512er'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512pf'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Opteron_G4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fma4'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xop'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Opteron_G4-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fma4'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xop'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Opteron_G5'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fma4'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tbm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xop'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Opteron_G5-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fma4'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tbm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xop'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SapphireRapids'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SapphireRapids-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SapphireRapids-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SapphireRapids-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SapphireRapids-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SierraForest'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cmpccxadd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SierraForest-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cmpccxadd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SierraForest-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cmpccxadd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='intel-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='lam'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SierraForest-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cmpccxadd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='intel-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='lam'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Client'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Client-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Client-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Client-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Client-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server-v5'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Snowridge'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='core-capability'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mpx'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='split-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Snowridge-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='core-capability'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mpx'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='split-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Snowridge-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='core-capability'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='split-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Snowridge-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='core-capability'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='split-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Snowridge-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='athlon'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnow'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnowext'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='athlon-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnow'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnowext'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='core2duo'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='core2duo-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='coreduo'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='coreduo-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='n270'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='n270-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='phenom'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnow'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnowext'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='phenom-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnow'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnowext'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </mode>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <memoryBacking supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <enum name='sourceType'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <value>file</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <value>anonymous</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <value>memfd</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   </memoryBacking>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <disk supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='diskDevice'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>disk</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>cdrom</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>floppy</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>lun</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='bus'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>fdc</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>scsi</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>usb</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>sata</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='model'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio-transitional</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio-non-transitional</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <graphics supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='type'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>vnc</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>egl-headless</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>dbus</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </graphics>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <video supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='modelType'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>vga</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>cirrus</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>none</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>bochs</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>ramfb</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </video>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <hostdev supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='mode'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>subsystem</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='startupPolicy'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>default</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>mandatory</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>requisite</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>optional</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='subsysType'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>usb</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>pci</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>scsi</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='capsType'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='pciBackend'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </hostdev>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <rng supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='model'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio-transitional</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio-non-transitional</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='backendModel'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>random</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>egd</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>builtin</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <filesystem supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='driverType'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>path</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>handle</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtiofs</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </filesystem>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <tpm supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='model'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>tpm-tis</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>tpm-crb</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='backendModel'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>emulator</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>external</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='backendVersion'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>2.0</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </tpm>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <redirdev supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='bus'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>usb</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </redirdev>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <channel supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='type'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>pty</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>unix</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </channel>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <crypto supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='model'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='type'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>qemu</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='backendModel'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>builtin</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </crypto>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <interface supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='backendType'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>default</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>passt</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <panic supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='model'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>isa</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>hyperv</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </panic>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <console supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='type'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>null</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>vc</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>pty</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>dev</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>file</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>pipe</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>stdio</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>udp</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>tcp</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>unix</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>qemu-vdagent</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>dbus</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </console>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <features>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <gic supported='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <vmcoreinfo supported='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <genid supported='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <backingStoreInput supported='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <backup supported='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <async-teardown supported='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <s390-pv supported='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <ps2 supported='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <tdx supported='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <sev supported='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <sgx supported='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <hyperv supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='features'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>relaxed</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>vapic</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>spinlocks</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>vpindex</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>runtime</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>synic</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>stimer</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>reset</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>vendor_id</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>frequencies</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>reenlightenment</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>tlbflush</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>ipi</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>avic</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>emsr_bitmap</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>xmm_input</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <defaults>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <spinlocks>4095</spinlocks>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <stimer_direct>on</stimer_direct>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </defaults>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </hyperv>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <launchSecurity supported='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   </features>
Jan 31 07:25:44 compute-2 nova_compute[226829]: </domainCapabilities>
Jan 31 07:25:44 compute-2 nova_compute[226829]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.756 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.761 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 31 07:25:44 compute-2 nova_compute[226829]: <domainCapabilities>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <domain>kvm</domain>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <arch>x86_64</arch>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <vcpu max='4096'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <iothreads supported='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <os supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <enum name='firmware'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <value>efi</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <loader supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='type'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>rom</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>pflash</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='readonly'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>yes</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>no</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='secure'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>yes</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>no</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </loader>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   </os>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <cpu>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <mode name='host-passthrough' supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='hostPassthroughMigratable'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>on</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>off</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </mode>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <mode name='maximum' supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='maximumMigratable'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>on</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>off</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </mode>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <mode name='host-model' supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <vendor>AMD</vendor>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='x2apic'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='hypervisor'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='stibp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='ssbd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='overflow-recov'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='succor'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='ibrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='lbrv'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='tsc-scale'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='flushbyasid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='pause-filter'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='pfthreshold'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='disable' name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </mode>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <mode name='custom' supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell-noTSX'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cascadelake-Server'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='ClearwaterForest'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bhi-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cmpccxadd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ddpd-u'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='intel-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='lam'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchiti'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sha512'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sm3'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sm4'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='ClearwaterForest-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bhi-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cmpccxadd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ddpd-u'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='intel-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='lam'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchiti'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sha512'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sm3'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sm4'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cooperlake'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cooperlake-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cooperlake-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Denverton'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mpx'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Denverton-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mpx'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Denverton-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Denverton-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Dhyana-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Genoa'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amd-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='auto-ibrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='stibp-always-on'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amd-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='auto-ibrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='stibp-always-on'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amd-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='auto-ibrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='perfmon-v2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='stibp-always-on'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Milan'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Milan-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Milan-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amd-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='stibp-always-on'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Milan-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amd-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='stibp-always-on'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Rome'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Rome-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Rome-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Rome-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Turin'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amd-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='auto-ibrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vp2intersect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibpb-brtype'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='perfmon-v2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbpb'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='srso-user-kernel-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='stibp-always-on'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Turin-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amd-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='auto-ibrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vp2intersect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibpb-brtype'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='perfmon-v2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbpb'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='srso-user-kernel-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='stibp-always-on'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-v5'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='GraniteRapids'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchiti'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='GraniteRapids-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchiti'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='GraniteRapids-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10-128'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10-256'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10-512'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchiti'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='GraniteRapids-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10-128'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10-256'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10-512'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchiti'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell-noTSX'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-v5'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-v6'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-v7'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='IvyBridge'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='IvyBridge-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='IvyBridge-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='IvyBridge-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='KnightsMill'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-4fmaps'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-4vnniw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512er'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512pf'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='KnightsMill-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-4fmaps'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-4vnniw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512er'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512pf'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Opteron_G4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fma4'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xop'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Opteron_G4-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fma4'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xop'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Opteron_G5'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fma4'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tbm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xop'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Opteron_G5-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fma4'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tbm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xop'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SapphireRapids'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SapphireRapids-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SapphireRapids-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SapphireRapids-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SapphireRapids-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SierraForest'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cmpccxadd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SierraForest-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cmpccxadd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SierraForest-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cmpccxadd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='intel-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='lam'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SierraForest-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cmpccxadd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='intel-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='lam'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Client'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Client-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Client-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Client-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Client-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server-v5'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Snowridge'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='core-capability'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mpx'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='split-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Snowridge-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='core-capability'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mpx'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='split-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Snowridge-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='core-capability'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='split-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Snowridge-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='core-capability'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='split-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Snowridge-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='athlon'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnow'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnowext'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='athlon-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnow'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnowext'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='core2duo'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='core2duo-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='coreduo'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='coreduo-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2169502590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 07:25:44 compute-2 ceph-mon[77282]: pgmap v735: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='n270'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='n270-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='phenom'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnow'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnowext'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='phenom-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnow'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnowext'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </mode>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <memoryBacking supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <enum name='sourceType'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <value>file</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <value>anonymous</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <value>memfd</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   </memoryBacking>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <disk supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='diskDevice'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>disk</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>cdrom</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>floppy</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>lun</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='bus'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>fdc</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>scsi</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>usb</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>sata</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='model'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio-transitional</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio-non-transitional</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <graphics supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='type'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>vnc</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>egl-headless</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>dbus</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </graphics>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <video supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='modelType'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>vga</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>cirrus</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>none</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>bochs</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>ramfb</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </video>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <hostdev supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='mode'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>subsystem</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='startupPolicy'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>default</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>mandatory</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>requisite</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>optional</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='subsysType'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>usb</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>pci</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>scsi</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='capsType'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='pciBackend'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </hostdev>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <rng supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='model'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio-transitional</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio-non-transitional</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='backendModel'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>random</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>egd</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>builtin</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <filesystem supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='driverType'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>path</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>handle</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtiofs</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </filesystem>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <tpm supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='model'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>tpm-tis</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>tpm-crb</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='backendModel'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>emulator</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>external</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='backendVersion'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>2.0</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </tpm>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <redirdev supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='bus'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>usb</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </redirdev>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <channel supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='type'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>pty</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>unix</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </channel>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <crypto supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='model'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='type'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>qemu</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='backendModel'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>builtin</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </crypto>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <interface supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='backendType'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>default</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>passt</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <panic supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='model'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>isa</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>hyperv</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </panic>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <console supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='type'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>null</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>vc</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>pty</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>dev</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>file</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>pipe</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>stdio</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>udp</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>tcp</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>unix</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>qemu-vdagent</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>dbus</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </console>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <features>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <gic supported='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <vmcoreinfo supported='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <genid supported='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <backingStoreInput supported='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <backup supported='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <async-teardown supported='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <s390-pv supported='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <ps2 supported='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <tdx supported='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <sev supported='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <sgx supported='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <hyperv supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='features'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>relaxed</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>vapic</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>spinlocks</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>vpindex</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>runtime</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>synic</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>stimer</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>reset</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>vendor_id</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>frequencies</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>reenlightenment</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>tlbflush</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>ipi</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>avic</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>emsr_bitmap</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>xmm_input</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <defaults>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <spinlocks>4095</spinlocks>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <stimer_direct>on</stimer_direct>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </defaults>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </hyperv>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <launchSecurity supported='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   </features>
Jan 31 07:25:44 compute-2 nova_compute[226829]: </domainCapabilities>
Jan 31 07:25:44 compute-2 nova_compute[226829]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.827 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 31 07:25:44 compute-2 nova_compute[226829]: <domainCapabilities>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <domain>kvm</domain>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <arch>x86_64</arch>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <vcpu max='240'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <iothreads supported='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <os supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <enum name='firmware'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <loader supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='type'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>rom</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>pflash</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='readonly'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>yes</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>no</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='secure'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>no</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </loader>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   </os>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <cpu>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <mode name='host-passthrough' supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='hostPassthroughMigratable'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>on</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>off</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </mode>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <mode name='maximum' supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='maximumMigratable'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>on</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>off</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </mode>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <mode name='host-model' supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <vendor>AMD</vendor>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='x2apic'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='hypervisor'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='stibp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='ssbd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='overflow-recov'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='succor'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='ibrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='lbrv'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='tsc-scale'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='flushbyasid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='pause-filter'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='pfthreshold'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <feature policy='disable' name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </mode>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <mode name='custom' supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell-noTSX'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Broadwell-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cascadelake-Server'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='ClearwaterForest'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bhi-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cmpccxadd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ddpd-u'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='intel-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='lam'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchiti'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sha512'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sm3'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sm4'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='ClearwaterForest-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bhi-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cmpccxadd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ddpd-u'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='intel-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='lam'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchiti'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sha512'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sm3'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sm4'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cooperlake'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cooperlake-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Cooperlake-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Denverton'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mpx'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Denverton-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mpx'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Denverton-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Denverton-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Dhyana-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Genoa'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amd-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='auto-ibrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='stibp-always-on'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amd-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='auto-ibrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='stibp-always-on'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amd-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='auto-ibrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='perfmon-v2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='stibp-always-on'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Milan'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Milan-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Milan-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amd-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='stibp-always-on'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Milan-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amd-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='stibp-always-on'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Rome'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Rome-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Rome-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Rome-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Turin'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amd-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='auto-ibrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vp2intersect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibpb-brtype'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='perfmon-v2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbpb'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='srso-user-kernel-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='stibp-always-on'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-Turin-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amd-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='auto-ibrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vp2intersect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibpb-brtype'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='perfmon-v2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbpb'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='srso-user-kernel-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='stibp-always-on'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='EPYC-v5'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='GraniteRapids'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchiti'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='GraniteRapids-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchiti'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='GraniteRapids-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10-128'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10-256'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10-512'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchiti'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='GraniteRapids-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10-128'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10-256'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx10-512'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='prefetchiti'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell-noTSX'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Haswell-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-v5'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-v6'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Icelake-Server-v7'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='IvyBridge'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='IvyBridge-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='IvyBridge-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='IvyBridge-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='KnightsMill'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-4fmaps'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-4vnniw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512er'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512pf'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='KnightsMill-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-4fmaps'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-4vnniw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512er'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512pf'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Opteron_G4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fma4'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xop'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Opteron_G4-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fma4'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xop'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Opteron_G5'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fma4'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tbm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xop'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Opteron_G5-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fma4'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tbm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xop'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SapphireRapids'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SapphireRapids-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SapphireRapids-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SapphireRapids-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SapphireRapids-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='amx-tile'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-bf16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-fp16'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bitalg'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrc'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fzrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='la57'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='taa-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SierraForest'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cmpccxadd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SierraForest-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cmpccxadd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SierraForest-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cmpccxadd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='intel-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='lam'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='SierraForest-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ifma'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cmpccxadd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fbsdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='fsrs'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ibrs-all'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='intel-psfd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='lam'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mcdt-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pbrsb-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='psdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='serialize'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vaes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Client'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Client-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Client-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Client-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Client-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='hle'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='rtm'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Skylake-Server-v5'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512bw'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512cd'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512dq'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512f'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='avx512vl'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='invpcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pcid'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='pku'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Snowridge'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='core-capability'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mpx'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='split-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Snowridge-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='core-capability'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='mpx'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='split-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Snowridge-v2'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='core-capability'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='split-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Snowridge-v3'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='core-capability'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='split-lock-detect'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='Snowridge-v4'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='cldemote'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='erms'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='gfni'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdir64b'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='movdiri'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='xsaves'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 07:25:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='athlon'>
Jan 31 07:25:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnow'/>
Jan 31 07:25:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:44.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnowext'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='athlon-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnow'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnowext'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='core2duo'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='core2duo-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='coreduo'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='coreduo-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='n270'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='n270-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='ss'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='phenom'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnow'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnowext'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <blockers model='phenom-v1'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnow'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <feature name='3dnowext'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </blockers>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </mode>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <memoryBacking supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <enum name='sourceType'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <value>file</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <value>anonymous</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <value>memfd</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   </memoryBacking>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <disk supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='diskDevice'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>disk</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>cdrom</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>floppy</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>lun</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='bus'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>ide</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>fdc</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>scsi</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>usb</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>sata</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='model'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio-transitional</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio-non-transitional</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <graphics supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='type'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>vnc</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>egl-headless</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>dbus</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </graphics>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <video supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='modelType'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>vga</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>cirrus</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>none</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>bochs</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>ramfb</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </video>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <hostdev supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='mode'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>subsystem</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='startupPolicy'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>default</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>mandatory</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>requisite</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>optional</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='subsysType'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>usb</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>pci</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>scsi</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='capsType'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='pciBackend'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </hostdev>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <rng supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='model'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio-transitional</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtio-non-transitional</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='backendModel'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>random</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>egd</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>builtin</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <filesystem supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='driverType'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>path</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>handle</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>virtiofs</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </filesystem>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <tpm supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='model'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>tpm-tis</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>tpm-crb</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='backendModel'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>emulator</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>external</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='backendVersion'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>2.0</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </tpm>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <redirdev supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='bus'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>usb</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </redirdev>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <channel supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='type'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>pty</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>unix</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </channel>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <crypto supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='model'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='type'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>qemu</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='backendModel'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>builtin</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </crypto>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <interface supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='backendType'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>default</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>passt</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <panic supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='model'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>isa</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>hyperv</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </panic>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <console supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='type'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>null</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>vc</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>pty</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>dev</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>file</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>pipe</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>stdio</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>udp</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>tcp</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>unix</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>qemu-vdagent</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>dbus</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </console>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <features>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <gic supported='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <vmcoreinfo supported='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <genid supported='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <backingStoreInput supported='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <backup supported='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <async-teardown supported='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <s390-pv supported='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <ps2 supported='yes'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <tdx supported='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <sev supported='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <sgx supported='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <hyperv supported='yes'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <enum name='features'>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>relaxed</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>vapic</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>spinlocks</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>vpindex</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>runtime</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>synic</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>stimer</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>reset</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>vendor_id</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>frequencies</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>reenlightenment</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>tlbflush</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>ipi</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>avic</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>emsr_bitmap</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <value>xmm_input</value>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </enum>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       <defaults>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <spinlocks>4095</spinlocks>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <stimer_direct>on</stimer_direct>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 07:25:44 compute-2 nova_compute[226829]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 07:25:44 compute-2 nova_compute[226829]:       </defaults>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     </hyperv>
Jan 31 07:25:44 compute-2 nova_compute[226829]:     <launchSecurity supported='no'/>
Jan 31 07:25:44 compute-2 nova_compute[226829]:   </features>
Jan 31 07:25:44 compute-2 nova_compute[226829]: </domainCapabilities>
Jan 31 07:25:44 compute-2 nova_compute[226829]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.891 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.892 226833 INFO nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Secure Boot support detected
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.894 226833 WARNING nova.virt.libvirt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Cannot update service status on host "compute-2.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-2.ctlplane.example.com could not be found.
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.895 226833 DEBUG nova.virt.libvirt.volume.mount [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.897 226833 INFO nova.virt.libvirt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.897 226833 INFO nova.virt.libvirt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.909 226833 DEBUG nova.virt.libvirt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] cpu compare xml: <cpu match="exact">
Jan 31 07:25:44 compute-2 nova_compute[226829]:   <model>Nehalem</model>
Jan 31 07:25:44 compute-2 nova_compute[226829]: </cpu>
Jan 31 07:25:44 compute-2 nova_compute[226829]:  _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
Jan 31 07:25:44 compute-2 nova_compute[226829]: 2026-01-31 07:25:44.911 226833 DEBUG nova.virt.libvirt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 31 07:25:45 compute-2 nova_compute[226829]: 2026-01-31 07:25:45.316 226833 INFO nova.virt.node [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Determined node identity 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc from /var/lib/nova/compute_id
Jan 31 07:25:45 compute-2 nova_compute[226829]: 2026-01-31 07:25:45.539 226833 WARNING nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Compute nodes ['2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc'] for host compute-2.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 31 07:25:45 compute-2 nova_compute[226829]: 2026-01-31 07:25:45.649 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 31 07:25:45 compute-2 nova_compute[226829]: 2026-01-31 07:25:45.735 226833 WARNING nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] No compute node record found for host compute-2.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-2.ctlplane.example.com could not be found.
Jan 31 07:25:45 compute-2 nova_compute[226829]: 2026-01-31 07:25:45.736 226833 DEBUG oslo_concurrency.lockutils [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:25:45 compute-2 nova_compute[226829]: 2026-01-31 07:25:45.736 226833 DEBUG oslo_concurrency.lockutils [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:25:45 compute-2 nova_compute[226829]: 2026-01-31 07:25:45.737 226833 DEBUG oslo_concurrency.lockutils [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:25:45 compute-2 nova_compute[226829]: 2026-01-31 07:25:45.737 226833 DEBUG nova.compute.resource_tracker [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:25:45 compute-2 nova_compute[226829]: 2026-01-31 07:25:45.738 226833 DEBUG oslo_concurrency.processutils [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:25:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/613405914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:25:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:25:46 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/724673685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:25:46 compute-2 nova_compute[226829]: 2026-01-31 07:25:46.367 226833 DEBUG oslo_concurrency.processutils [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:25:46 compute-2 systemd[1]: Starting libvirt nodedev daemon...
Jan 31 07:25:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:25:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:46.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:25:46 compute-2 systemd[1]: Started libvirt nodedev daemon.
Jan 31 07:25:46 compute-2 nova_compute[226829]: 2026-01-31 07:25:46.721 226833 WARNING nova.virt.libvirt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:25:46 compute-2 nova_compute[226829]: 2026-01-31 07:25:46.722 226833 DEBUG nova.compute.resource_tracker [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5300MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:25:46 compute-2 nova_compute[226829]: 2026-01-31 07:25:46.723 226833 DEBUG oslo_concurrency.lockutils [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:25:46 compute-2 nova_compute[226829]: 2026-01-31 07:25:46.723 226833 DEBUG oslo_concurrency.lockutils [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:25:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:46.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:46 compute-2 nova_compute[226829]: 2026-01-31 07:25:46.986 226833 WARNING nova.compute.resource_tracker [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] No compute node record for compute-2.ctlplane.example.com:2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc could not be found.
Jan 31 07:25:47 compute-2 ceph-mon[77282]: pgmap v736: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/724673685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:25:47 compute-2 nova_compute[226829]: 2026-01-31 07:25:47.363 226833 INFO nova.compute.resource_tracker [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Compute node record created for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com with uuid: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc
Jan 31 07:25:47 compute-2 nova_compute[226829]: 2026-01-31 07:25:47.616 226833 DEBUG nova.compute.resource_tracker [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:25:47 compute-2 nova_compute[226829]: 2026-01-31 07:25:47.616 226833 DEBUG nova.compute.resource_tracker [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:25:47 compute-2 nova_compute[226829]: 2026-01-31 07:25:47.759 226833 INFO nova.scheduler.client.report [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [req-714302a5-717a-454e-9f95-ecce0fe226b4] Created resource provider record via placement API for resource provider with UUID 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc and name compute-2.ctlplane.example.com.
Jan 31 07:25:47 compute-2 nova_compute[226829]: 2026-01-31 07:25:47.825 226833 DEBUG oslo_concurrency.processutils [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:25:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2858455899' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:25:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2119535112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:25:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:25:48 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3659799667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:25:48 compute-2 nova_compute[226829]: 2026-01-31 07:25:48.289 226833 DEBUG oslo_concurrency.processutils [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:25:48 compute-2 nova_compute[226829]: 2026-01-31 07:25:48.295 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 31 07:25:48 compute-2 nova_compute[226829]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 31 07:25:48 compute-2 nova_compute[226829]: 2026-01-31 07:25:48.295 226833 INFO nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] kernel doesn't support AMD SEV
Jan 31 07:25:48 compute-2 nova_compute[226829]: 2026-01-31 07:25:48.297 226833 DEBUG nova.compute.provider_tree [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Updating inventory in ProviderTree for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 07:25:48 compute-2 nova_compute[226829]: 2026-01-31 07:25:48.297 226833 DEBUG nova.virt.libvirt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:25:48 compute-2 nova_compute[226829]: 2026-01-31 07:25:48.302 226833 DEBUG nova.virt.libvirt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Libvirt baseline CPU <cpu>
Jan 31 07:25:48 compute-2 nova_compute[226829]:   <arch>x86_64</arch>
Jan 31 07:25:48 compute-2 nova_compute[226829]:   <model>Nehalem</model>
Jan 31 07:25:48 compute-2 nova_compute[226829]:   <vendor>AMD</vendor>
Jan 31 07:25:48 compute-2 nova_compute[226829]:   <topology sockets="8" cores="1" threads="1"/>
Jan 31 07:25:48 compute-2 nova_compute[226829]: </cpu>
Jan 31 07:25:48 compute-2 nova_compute[226829]:  _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537
Jan 31 07:25:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:48.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:48 compute-2 nova_compute[226829]: 2026-01-31 07:25:48.451 226833 DEBUG nova.scheduler.client.report [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Updated inventory for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 31 07:25:48 compute-2 nova_compute[226829]: 2026-01-31 07:25:48.452 226833 DEBUG nova.compute.provider_tree [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Updating resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 31 07:25:48 compute-2 nova_compute[226829]: 2026-01-31 07:25:48.452 226833 DEBUG nova.compute.provider_tree [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Updating inventory in ProviderTree for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 07:25:48 compute-2 nova_compute[226829]: 2026-01-31 07:25:48.551 226833 DEBUG nova.compute.provider_tree [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Updating resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 31 07:25:48 compute-2 nova_compute[226829]: 2026-01-31 07:25:48.596 226833 DEBUG nova.compute.resource_tracker [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:25:48 compute-2 nova_compute[226829]: 2026-01-31 07:25:48.597 226833 DEBUG oslo_concurrency.lockutils [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:25:48 compute-2 nova_compute[226829]: 2026-01-31 07:25:48.597 226833 DEBUG nova.service [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 31 07:25:48 compute-2 nova_compute[226829]: 2026-01-31 07:25:48.741 226833 DEBUG nova.service [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 31 07:25:48 compute-2 nova_compute[226829]: 2026-01-31 07:25:48.742 226833 DEBUG nova.servicegroup.drivers.db [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] DB_Driver: join new ServiceGroup member compute-2.ctlplane.example.com to the compute group, service = <Service: host=compute-2.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 31 07:25:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:48.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:25:49 compute-2 ceph-mon[77282]: pgmap v737: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3659799667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:25:49 compute-2 podman[227252]: 2026-01-31 07:25:49.197892434 +0000 UTC m=+0.076519867 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:25:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:25:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:50.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:25:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:50.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:51 compute-2 ceph-mon[77282]: pgmap v738: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:52.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:25:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:52.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:25:53 compute-2 ceph-mon[77282]: pgmap v739: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:54.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:25:54 compute-2 ceph-mon[77282]: pgmap v740: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:54.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:25:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:56.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:25:56 compute-2 ceph-mon[77282]: pgmap v741: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:56.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:58.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:58 compute-2 ceph-mon[77282]: pgmap v742: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:25:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:25:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:58.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:25:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:26:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:00.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:00 compute-2 nova_compute[226829]: 2026-01-31 07:26:00.745 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:26:00 compute-2 nova_compute[226829]: 2026-01-31 07:26:00.820 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:26:00 compute-2 ceph-mon[77282]: pgmap v743: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:00.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:26:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:02.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:26:02 compute-2 ceph-mon[77282]: pgmap v744: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:02.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:04 compute-2 sudo[227279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:26:04 compute-2 sudo[227279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:04 compute-2 sudo[227279]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:04 compute-2 sudo[227304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:26:04 compute-2 sudo[227304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:04 compute-2 sudo[227304]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:26:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:04.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:04 compute-2 ceph-mon[77282]: pgmap v745: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:04.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:06.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:26:06.829 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:26:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:26:06.831 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:26:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:26:06.832 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:26:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:06.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:06 compute-2 ceph-mon[77282]: pgmap v746: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:08.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:08.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:08 compute-2 ceph-mon[77282]: pgmap v747: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:26:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:26:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:10.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:26:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:10.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:11 compute-2 ceph-mon[77282]: pgmap v748: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:12 compute-2 podman[227333]: 2026-01-31 07:26:12.21909769 +0000 UTC m=+0.095387115 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 07:26:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:26:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:12.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:26:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:12.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:13 compute-2 ceph-mon[77282]: pgmap v749: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:26:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:26:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:14.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:26:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:14.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:15 compute-2 ceph-mon[77282]: pgmap v750: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:16.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:16.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:17 compute-2 ceph-mon[77282]: pgmap v751: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:18.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:26:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:18.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:26:19 compute-2 ceph-mon[77282]: pgmap v752: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:26:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:26:20 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2172451859' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:26:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:26:20 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2172451859' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:26:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2172451859' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:26:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2172451859' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:26:20 compute-2 podman[227365]: 2026-01-31 07:26:20.159216941 +0000 UTC m=+0.048713781 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:26:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:20.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:20.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:21 compute-2 ceph-mon[77282]: pgmap v753: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/571993948' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:26:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/571993948' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:26:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:22.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:22.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:23 compute-2 ceph-mon[77282]: pgmap v754: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:24 compute-2 sudo[227386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:26:24 compute-2 sudo[227386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:24 compute-2 sudo[227386]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:24 compute-2 sudo[227411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:26:24 compute-2 sudo[227411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:24 compute-2 sudo[227411]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:26:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:24.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:24.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:25 compute-2 ceph-mon[77282]: pgmap v755: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:26.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:26.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:27 compute-2 ceph-mon[77282]: pgmap v756: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:28.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:28.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:29 compute-2 ceph-mon[77282]: pgmap v757: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:26:29 compute-2 sudo[227439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:26:29 compute-2 sudo[227439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:29 compute-2 sudo[227439]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:29 compute-2 sudo[227464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:26:29 compute-2 sudo[227464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:29 compute-2 sudo[227464]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:29 compute-2 sudo[227489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:26:29 compute-2 sudo[227489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:29 compute-2 sudo[227489]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:29 compute-2 sudo[227514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:26:29 compute-2 sudo[227514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:30 compute-2 sudo[227514]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:30.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:30.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:31 compute-2 ceph-mon[77282]: pgmap v758: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:26:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:26:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:26:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:26:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:26:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:26:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:26:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:26:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2853494607' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:26:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2853494607' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:26:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:32.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:32.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:33 compute-2 ceph-mon[77282]: pgmap v759: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:26:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:34.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:34.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:35 compute-2 ceph-mon[77282]: pgmap v760: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:36.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:37.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:37 compute-2 ceph-mon[77282]: pgmap v761: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:37 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:26:37 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:26:37 compute-2 sudo[227572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:26:37 compute-2 sudo[227572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:37 compute-2 sudo[227572]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:37 compute-2 sudo[227597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:26:37 compute-2 sudo[227597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:37 compute-2 sudo[227597]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:38.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:39.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:39 compute-2 ceph-mon[77282]: pgmap v762: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:26:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:40.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:26:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:41.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:26:41 compute-2 ceph-mon[77282]: pgmap v763: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:42 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1089613681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:26:42 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/54853499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:26:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:42.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:26:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:43.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:26:43 compute-2 podman[227625]: 2026-01-31 07:26:43.246367924 +0000 UTC m=+0.099815307 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:26:43 compute-2 ceph-mon[77282]: pgmap v764: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1433876801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:26:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2851660928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:26:43 compute-2 nova_compute[226829]: 2026-01-31 07:26:43.491 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:26:43 compute-2 nova_compute[226829]: 2026-01-31 07:26:43.491 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:26:43 compute-2 nova_compute[226829]: 2026-01-31 07:26:43.492 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:26:43 compute-2 nova_compute[226829]: 2026-01-31 07:26:43.492 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:26:43 compute-2 nova_compute[226829]: 2026-01-31 07:26:43.515 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:26:43 compute-2 nova_compute[226829]: 2026-01-31 07:26:43.516 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:26:43 compute-2 nova_compute[226829]: 2026-01-31 07:26:43.516 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:26:43 compute-2 nova_compute[226829]: 2026-01-31 07:26:43.516 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:26:43 compute-2 nova_compute[226829]: 2026-01-31 07:26:43.517 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:26:43 compute-2 nova_compute[226829]: 2026-01-31 07:26:43.517 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:26:43 compute-2 nova_compute[226829]: 2026-01-31 07:26:43.517 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:26:43 compute-2 nova_compute[226829]: 2026-01-31 07:26:43.517 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:26:43 compute-2 nova_compute[226829]: 2026-01-31 07:26:43.517 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:26:43 compute-2 nova_compute[226829]: 2026-01-31 07:26:43.548 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:26:43 compute-2 nova_compute[226829]: 2026-01-31 07:26:43.548 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:26:43 compute-2 nova_compute[226829]: 2026-01-31 07:26:43.549 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:26:43 compute-2 nova_compute[226829]: 2026-01-31 07:26:43.549 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:26:43 compute-2 nova_compute[226829]: 2026-01-31 07:26:43.550 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:26:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:26:43 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2103975054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:26:44 compute-2 nova_compute[226829]: 2026-01-31 07:26:44.014 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:26:44 compute-2 nova_compute[226829]: 2026-01-31 07:26:44.219 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:26:44 compute-2 nova_compute[226829]: 2026-01-31 07:26:44.220 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5324MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:26:44 compute-2 nova_compute[226829]: 2026-01-31 07:26:44.221 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:26:44 compute-2 nova_compute[226829]: 2026-01-31 07:26:44.221 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:26:44 compute-2 nova_compute[226829]: 2026-01-31 07:26:44.308 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:26:44 compute-2 nova_compute[226829]: 2026-01-31 07:26:44.309 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:26:44 compute-2 sudo[227673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:26:44 compute-2 sudo[227673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:44 compute-2 sudo[227673]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:44 compute-2 nova_compute[226829]: 2026-01-31 07:26:44.334 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:26:44 compute-2 sudo[227698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:26:44 compute-2 sudo[227698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:44 compute-2 sudo[227698]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:44 compute-2 ceph-mon[77282]: pgmap v765: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2103975054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:26:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:26:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:26:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:44.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:26:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:26:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/848047206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:26:44 compute-2 nova_compute[226829]: 2026-01-31 07:26:44.768 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:26:44 compute-2 nova_compute[226829]: 2026-01-31 07:26:44.772 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:26:44 compute-2 nova_compute[226829]: 2026-01-31 07:26:44.889 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:26:44 compute-2 nova_compute[226829]: 2026-01-31 07:26:44.891 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:26:44 compute-2 nova_compute[226829]: 2026-01-31 07:26:44.891 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:26:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:45.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/848047206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:26:46 compute-2 ceph-mon[77282]: pgmap v766: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:46.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:26:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:47.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:26:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:48.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:48 compute-2 ceph-mon[77282]: pgmap v767: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:49.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:26:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:50.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:50 compute-2 ceph-mon[77282]: pgmap v768: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:51.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:51 compute-2 podman[227749]: 2026-01-31 07:26:51.187135529 +0000 UTC m=+0.062453024 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 07:26:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:52.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:52 compute-2 ceph-mon[77282]: pgmap v769: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:53.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:26:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:54.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:55.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:55 compute-2 ceph-mon[77282]: pgmap v770: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:26:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:56.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:26:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:57.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:57 compute-2 ceph-mon[77282]: pgmap v771: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:26:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:58.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:26:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:26:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:59.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:59 compute-2 ceph-mon[77282]: pgmap v772: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:26:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3478 writes, 18K keys, 3478 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.03 MB/s
                                           Cumulative WAL: 3478 writes, 3478 syncs, 1.00 writes per sync, written: 0.04 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1352 writes, 6763 keys, 1352 commit groups, 1.0 writes per commit group, ingest: 14.43 MB, 0.02 MB/s
                                           Interval WAL: 1352 writes, 1352 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    100.8      0.21              0.05         9    0.023       0      0       0.0       0.0
                                             L6      1/0    7.47 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    155.9    130.1      0.52              0.19         8    0.065     36K   4300       0.0       0.0
                                            Sum      1/0    7.47 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2    111.4    121.7      0.72              0.24        17    0.043     36K   4300       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.6    108.8    109.0      0.48              0.16        10    0.048     23K   3023       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    155.9    130.1      0.52              0.19         8    0.065     36K   4300       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    101.5      0.21              0.05         8    0.026       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.020, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.09 GB write, 0.07 MB/s write, 0.08 GB read, 0.07 MB/s read, 0.7 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559ef3d1f1f0#2 capacity: 308.00 MB usage: 4.52 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000123 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(246,4.21 MB,1.36728%) FilterBlock(17,106.67 KB,0.033822%) IndexBlock(17,211.67 KB,0.0671139%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 07:26:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:27:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:00.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:01.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:01 compute-2 ceph-mon[77282]: pgmap v773: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:02.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:27:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:03.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:27:03 compute-2 ceph-mon[77282]: pgmap v774: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:04 compute-2 sudo[227775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:27:04 compute-2 sudo[227775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:04 compute-2 sudo[227775]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:27:04 compute-2 sudo[227800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:27:04 compute-2 sudo[227800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:04 compute-2 sudo[227800]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:04.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:05.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:05 compute-2 ceph-mon[77282]: pgmap v775: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:06.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:27:06.830 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:27:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:27:06.831 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:27:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:27:06.831 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:27:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:27:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:07.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:27:07 compute-2 ceph-mon[77282]: pgmap v776: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:08.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:27:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:09.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:27:09 compute-2 ceph-mon[77282]: pgmap v777: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:27:10 compute-2 ceph-mon[77282]: pgmap v778: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:10.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:11.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:12.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:12 compute-2 ceph-mon[77282]: pgmap v779: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:27:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:13.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:27:14 compute-2 podman[227830]: 2026-01-31 07:27:14.231384837 +0000 UTC m=+0.123509316 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:27:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:27:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:14.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:14 compute-2 ceph-mon[77282]: pgmap v780: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:27:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:15.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:27:15 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-2[77982]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 07:27:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:16.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:17 compute-2 ceph-mon[77282]: pgmap v781: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:17.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:18.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:19 compute-2 ceph-mon[77282]: pgmap v782: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:19.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:27:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:20.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:21 compute-2 ceph-mon[77282]: pgmap v783: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:21.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:22 compute-2 podman[227860]: 2026-01-31 07:27:22.185824808 +0000 UTC m=+0.074855961 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 31 07:27:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:22.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:23.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:23 compute-2 ceph-mon[77282]: pgmap v784: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:27:24 compute-2 sudo[227880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:27:24 compute-2 sudo[227880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:24 compute-2 sudo[227880]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:24.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:24 compute-2 sudo[227905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:27:24 compute-2 sudo[227905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:24 compute-2 sudo[227905]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:27:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:25.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:27:25 compute-2 ceph-mon[77282]: pgmap v785: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:27:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:26.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:27:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:27:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:27.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:27:27 compute-2 ceph-mon[77282]: pgmap v786: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:28.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:27:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 6049 writes, 24K keys, 6049 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6049 writes, 1045 syncs, 5.79 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 479 writes, 718 keys, 479 commit groups, 1.0 writes per commit group, ingest: 0.24 MB, 0.00 MB/s
                                           Interval WAL: 479 writes, 238 syncs, 2.01 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 07:27:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:29.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:29 compute-2 ceph-mon[77282]: pgmap v787: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:27:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:30.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:31.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:31 compute-2 ceph-mon[77282]: pgmap v788: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:32.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:27:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:33.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:27:33 compute-2 ceph-mon[77282]: pgmap v789: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:27:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:34.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:35.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:35 compute-2 ceph-mon[77282]: pgmap v790: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:27:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:36.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:27:36 compute-2 ceph-mon[77282]: pgmap v791: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:37.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:37 compute-2 sudo[227937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:27:37 compute-2 sudo[227937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:37 compute-2 sudo[227937]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:37 compute-2 sudo[227962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:27:37 compute-2 sudo[227962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:37 compute-2 sudo[227962]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:37 compute-2 sudo[227987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:27:37 compute-2 sudo[227987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:37 compute-2 sudo[227987]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:37 compute-2 sudo[228012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 07:27:37 compute-2 sudo[228012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:38 compute-2 sudo[228012]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:38 compute-2 sudo[228057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:27:38 compute-2 sudo[228057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:38 compute-2 sudo[228057]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:38 compute-2 sudo[228082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:27:38 compute-2 sudo[228082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:38 compute-2 sudo[228082]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:38 compute-2 sudo[228107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:27:38 compute-2 sudo[228107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:38 compute-2 sudo[228107]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:38 compute-2 sudo[228132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:27:38 compute-2 sudo[228132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:38.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:38 compute-2 sudo[228132]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:39 compute-2 ceph-mon[77282]: pgmap v792: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:27:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 07:27:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:27:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:27:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:27:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:39.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:27:40 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 07:27:40 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:27:40 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:27:40 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:27:40 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:27:40 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:27:40 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:27:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:40.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:27:40.780 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:27:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:27:40.782 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:27:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:27:40.785 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:27:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:41.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:41 compute-2 ceph-mon[77282]: pgmap v793: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:42.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:27:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:43.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:27:43 compute-2 ceph-mon[77282]: pgmap v794: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1464465284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:27:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4160809978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:27:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:27:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:27:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:44.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:27:44 compute-2 sudo[228191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:27:44 compute-2 sudo[228191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:44 compute-2 sudo[228191]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:44 compute-2 sudo[228217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:27:44 compute-2 sudo[228217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:44 compute-2 sudo[228217]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:27:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2038699296' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:27:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:27:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2038699296' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:27:44 compute-2 nova_compute[226829]: 2026-01-31 07:27:44.882 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:27:44 compute-2 nova_compute[226829]: 2026-01-31 07:27:44.883 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:27:44 compute-2 podman[228215]: 2026-01-31 07:27:44.886062065 +0000 UTC m=+0.127238888 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 07:27:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:27:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:45.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:27:45 compute-2 ceph-mon[77282]: pgmap v795: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2038699296' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:27:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2038699296' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:27:45 compute-2 sudo[228268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:27:45 compute-2 sudo[228268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:45 compute-2 sudo[228268]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:45 compute-2 sudo[228293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:27:45 compute-2 sudo[228293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:45 compute-2 sudo[228293]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:46 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:27:46 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:27:46 compute-2 ceph-mon[77282]: pgmap v796: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:46.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:47.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:47 compute-2 nova_compute[226829]: 2026-01-31 07:27:47.102 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:27:47 compute-2 nova_compute[226829]: 2026-01-31 07:27:47.103 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:27:47 compute-2 nova_compute[226829]: 2026-01-31 07:27:47.103 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:27:47 compute-2 nova_compute[226829]: 2026-01-31 07:27:47.123 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:27:47 compute-2 nova_compute[226829]: 2026-01-31 07:27:47.124 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:27:47 compute-2 nova_compute[226829]: 2026-01-31 07:27:47.125 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:27:47 compute-2 nova_compute[226829]: 2026-01-31 07:27:47.125 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:27:47 compute-2 nova_compute[226829]: 2026-01-31 07:27:47.126 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:27:47 compute-2 nova_compute[226829]: 2026-01-31 07:27:47.126 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:27:47 compute-2 nova_compute[226829]: 2026-01-31 07:27:47.127 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:27:47 compute-2 nova_compute[226829]: 2026-01-31 07:27:47.127 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:27:47 compute-2 nova_compute[226829]: 2026-01-31 07:27:47.128 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:27:47 compute-2 nova_compute[226829]: 2026-01-31 07:27:47.154 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:27:47 compute-2 nova_compute[226829]: 2026-01-31 07:27:47.155 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:27:47 compute-2 nova_compute[226829]: 2026-01-31 07:27:47.155 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:27:47 compute-2 nova_compute[226829]: 2026-01-31 07:27:47.156 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:27:47 compute-2 nova_compute[226829]: 2026-01-31 07:27:47.156 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:27:47 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:27:47 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2871053164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:27:47 compute-2 nova_compute[226829]: 2026-01-31 07:27:47.563 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:27:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2871053164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:27:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2323152489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:27:47 compute-2 nova_compute[226829]: 2026-01-31 07:27:47.746 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:27:47 compute-2 nova_compute[226829]: 2026-01-31 07:27:47.748 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5299MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:27:47 compute-2 nova_compute[226829]: 2026-01-31 07:27:47.748 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:27:47 compute-2 nova_compute[226829]: 2026-01-31 07:27:47.748 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:27:47 compute-2 nova_compute[226829]: 2026-01-31 07:27:47.871 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:27:47 compute-2 nova_compute[226829]: 2026-01-31 07:27:47.871 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:27:47 compute-2 nova_compute[226829]: 2026-01-31 07:27:47.901 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:27:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:27:48 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/60847909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:27:48 compute-2 nova_compute[226829]: 2026-01-31 07:27:48.320 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:27:48 compute-2 nova_compute[226829]: 2026-01-31 07:27:48.324 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:27:48 compute-2 nova_compute[226829]: 2026-01-31 07:27:48.514 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:27:48 compute-2 nova_compute[226829]: 2026-01-31 07:27:48.516 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:27:48 compute-2 nova_compute[226829]: 2026-01-31 07:27:48.516 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:27:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/229641050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:27:48 compute-2 ceph-mon[77282]: pgmap v797: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/60847909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:27:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:48.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:49.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:27:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:50.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:50 compute-2 ceph-mon[77282]: pgmap v798: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:51.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:52.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:52 compute-2 ceph-mon[77282]: pgmap v799: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:53.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:53 compute-2 podman[228366]: 2026-01-31 07:27:53.161892323 +0000 UTC m=+0.048833642 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 31 07:27:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:27:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:54.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:55 compute-2 ceph-mon[77282]: pgmap v800: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:55.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:27:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:56.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:27:57 compute-2 ceph-mon[77282]: pgmap v801: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:27:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:57.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:27:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:58.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:27:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:27:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:59.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:27:59 compute-2 ceph-mon[77282]: pgmap v802: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:28:00 compute-2 ceph-mon[77282]: pgmap v803: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:00.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:01.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:02.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:03 compute-2 ceph-mon[77282]: pgmap v804: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:28:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:03.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:28:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:28:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:04.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:04 compute-2 sudo[228391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:28:04 compute-2 sudo[228391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:04 compute-2 sudo[228391]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:04 compute-2 sudo[228416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:28:04 compute-2 sudo[228416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:04 compute-2 sudo[228416]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:05 compute-2 ceph-mon[77282]: pgmap v805: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:05.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:06.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:28:06.830 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:28:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:28:06.831 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:28:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:28:06.831 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:28:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:07.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:07 compute-2 ceph-mon[77282]: pgmap v806: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:08 compute-2 ceph-mon[77282]: pgmap v807: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:08.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:28:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:09.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:28:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:28:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:10.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:10 compute-2 ceph-mon[77282]: pgmap v808: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:11.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:12.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:28:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:13.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:28:13 compute-2 ceph-mon[77282]: pgmap v809: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:14 compute-2 ceph-mon[77282]: pgmap v810: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:28:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:14.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:28:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:15.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:28:15 compute-2 podman[228447]: 2026-01-31 07:28:15.205753984 +0000 UTC m=+0.084834392 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 07:28:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:28:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:16.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:28:17 compute-2 ceph-mon[77282]: pgmap v811: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:28:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:17.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:28:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:28:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:18.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:28:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:28:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:19.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:28:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:28:19 compute-2 ceph-mon[77282]: pgmap v812: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:28:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:20.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:28:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:21.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:21 compute-2 ceph-mon[77282]: pgmap v813: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 0 B/s wr, 13 op/s
Jan 31 07:28:22 compute-2 ceph-mon[77282]: pgmap v814: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Jan 31 07:28:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:22.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:23.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:24 compute-2 podman[228477]: 2026-01-31 07:28:24.16929449 +0000 UTC m=+0.055512344 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 07:28:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:28:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:24.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:25 compute-2 sudo[228496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:28:25 compute-2 sudo[228496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:25 compute-2 sudo[228496]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:25 compute-2 ceph-mon[77282]: pgmap v815: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 31 07:28:25 compute-2 sudo[228521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:28:25 compute-2 sudo[228521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:25 compute-2 sudo[228521]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:28:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:25.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:28:26 compute-2 ceph-mon[77282]: pgmap v816: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 68 KiB/s rd, 0 B/s wr, 114 op/s
Jan 31 07:28:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:26.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:27.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:28:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:28.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:28:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:29.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:29 compute-2 ceph-mon[77282]: pgmap v817: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 94 KiB/s rd, 0 B/s wr, 157 op/s
Jan 31 07:28:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:28:30 compute-2 ceph-mon[77282]: pgmap v818: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Jan 31 07:28:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:30.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:31.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:32.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:28:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:33.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:28:33 compute-2 ceph-mon[77282]: pgmap v819: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 98 KiB/s rd, 0 B/s wr, 164 op/s
Jan 31 07:28:34 compute-2 ceph-mon[77282]: pgmap v820: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 92 KiB/s rd, 0 B/s wr, 154 op/s
Jan 31 07:28:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:28:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:34.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:35.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:36.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:36 compute-2 ceph-mon[77282]: pgmap v821: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 75 KiB/s rd, 0 B/s wr, 124 op/s
Jan 31 07:28:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:37.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:38.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:38 compute-2 ceph-mon[77282]: pgmap v822: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 0 B/s wr, 63 op/s
Jan 31 07:28:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:28:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:39.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:28:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:28:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:28:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:40.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:28:41 compute-2 ceph-mon[77282]: pgmap v823: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Jan 31 07:28:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:41.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:28:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:42.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:28:43 compute-2 ceph-mon[77282]: pgmap v824: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Jan 31 07:28:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:43.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:28:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2522133190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:28:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:28:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:44.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:28:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:28:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:45.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:28:45 compute-2 sudo[228557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:28:45 compute-2 sudo[228557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:45 compute-2 sudo[228557]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:45 compute-2 sudo[228582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:28:45 compute-2 sudo[228582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:45 compute-2 sudo[228582]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:45 compute-2 sudo[228607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:28:45 compute-2 sudo[228607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:45 compute-2 sudo[228607]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:45 compute-2 sudo[228642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:28:45 compute-2 sudo[228642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:45 compute-2 sudo[228642]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:45 compute-2 podman[228631]: 2026-01-31 07:28:45.747838872 +0000 UTC m=+0.098158306 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 07:28:45 compute-2 sudo[228681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:28:45 compute-2 sudo[228681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:45 compute-2 sudo[228681]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:45 compute-2 sudo[228706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:28:45 compute-2 sudo[228706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:45 compute-2 ceph-mon[77282]: pgmap v825: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2535823876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:28:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/840305027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:28:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3089092945' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:28:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3089092945' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:28:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1309724722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:28:46 compute-2 sudo[228706]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:46.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:47.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:47 compute-2 ceph-mon[77282]: pgmap v826: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:47 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 07:28:47 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:28:47 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:28:47 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:28:47 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:28:47 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:28:47 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:28:48 compute-2 ceph-mon[77282]: pgmap v827: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:48 compute-2 nova_compute[226829]: 2026-01-31 07:28:48.519 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:28:48 compute-2 nova_compute[226829]: 2026-01-31 07:28:48.520 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:28:48 compute-2 nova_compute[226829]: 2026-01-31 07:28:48.520 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:28:48 compute-2 nova_compute[226829]: 2026-01-31 07:28:48.521 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:28:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:28:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:48.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:28:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:49.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:49 compute-2 nova_compute[226829]: 2026-01-31 07:28:49.270 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:28:49 compute-2 nova_compute[226829]: 2026-01-31 07:28:49.271 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:28:49 compute-2 nova_compute[226829]: 2026-01-31 07:28:49.272 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:28:49 compute-2 nova_compute[226829]: 2026-01-31 07:28:49.272 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:28:49 compute-2 nova_compute[226829]: 2026-01-31 07:28:49.272 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:28:49 compute-2 nova_compute[226829]: 2026-01-31 07:28:49.273 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:28:49 compute-2 nova_compute[226829]: 2026-01-31 07:28:49.273 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:28:49 compute-2 nova_compute[226829]: 2026-01-31 07:28:49.273 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:28:49 compute-2 nova_compute[226829]: 2026-01-31 07:28:49.273 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:28:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:28:49 compute-2 nova_compute[226829]: 2026-01-31 07:28:49.882 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:28:49 compute-2 nova_compute[226829]: 2026-01-31 07:28:49.883 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:28:49 compute-2 nova_compute[226829]: 2026-01-31 07:28:49.883 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:28:49 compute-2 nova_compute[226829]: 2026-01-31 07:28:49.883 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:28:49 compute-2 nova_compute[226829]: 2026-01-31 07:28:49.884 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:28:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:28:50 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3985746322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:28:50 compute-2 nova_compute[226829]: 2026-01-31 07:28:50.325 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:28:50 compute-2 nova_compute[226829]: 2026-01-31 07:28:50.494 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:28:50 compute-2 nova_compute[226829]: 2026-01-31 07:28:50.496 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5296MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:28:50 compute-2 nova_compute[226829]: 2026-01-31 07:28:50.496 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:28:50 compute-2 nova_compute[226829]: 2026-01-31 07:28:50.497 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:28:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:50.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:51 compute-2 ceph-mon[77282]: pgmap v828: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3985746322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:28:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:51.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:52 compute-2 ceph-mon[77282]: pgmap v829: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:52.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:28:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:53.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:28:54 compute-2 sudo[228790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:28:54 compute-2 sudo[228790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:54 compute-2 sudo[228790]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:54 compute-2 nova_compute[226829]: 2026-01-31 07:28:54.137 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:28:54 compute-2 nova_compute[226829]: 2026-01-31 07:28:54.137 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:28:54 compute-2 sudo[228815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:28:54 compute-2 sudo[228815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:54 compute-2 sudo[228815]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:54 compute-2 nova_compute[226829]: 2026-01-31 07:28:54.163 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:28:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:28:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:28:54 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/82651890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:28:54 compute-2 nova_compute[226829]: 2026-01-31 07:28:54.571 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:28:54 compute-2 nova_compute[226829]: 2026-01-31 07:28:54.577 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:28:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:54.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:54 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:28:54 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:28:54 compute-2 ceph-mon[77282]: pgmap v830: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/82651890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:28:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:28:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:55.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:28:55 compute-2 podman[228863]: 2026-01-31 07:28:55.183216051 +0000 UTC m=+0.067218452 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:28:55 compute-2 nova_compute[226829]: 2026-01-31 07:28:55.409 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:28:55 compute-2 nova_compute[226829]: 2026-01-31 07:28:55.411 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:28:55 compute-2 nova_compute[226829]: 2026-01-31 07:28:55.412 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.915s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:28:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:56.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:57 compute-2 ceph-mon[77282]: pgmap v831: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:57.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:28:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:58.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:28:59 compute-2 ceph-mon[77282]: pgmap v832: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:28:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:59.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:29:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:29:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:00.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:29:01 compute-2 ceph-mon[77282]: pgmap v833: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:01.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:29:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:02.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:29:03 compute-2 ceph-mon[77282]: pgmap v834: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:03.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:29:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:29:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:04.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:29:05 compute-2 ceph-mon[77282]: pgmap v835: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:05.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:05 compute-2 sudo[228887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:29:05 compute-2 sudo[228887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:29:05 compute-2 sudo[228887]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:05 compute-2 sudo[228912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:29:05 compute-2 sudo[228912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:29:05 compute-2 sudo[228912]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:06.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:29:06.831 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:29:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:29:06.832 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:29:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:29:06.832 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:29:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:07.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:07 compute-2 ceph-mon[77282]: pgmap v836: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:08 compute-2 ceph-mon[77282]: pgmap v837: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:08.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:09.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:29:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:10.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:11 compute-2 ceph-mon[77282]: pgmap v838: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:11.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:12.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:13 compute-2 ceph-mon[77282]: pgmap v839: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:13.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:14 compute-2 ceph-mon[77282]: pgmap v840: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:29:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:14.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:15.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:16 compute-2 podman[228942]: 2026-01-31 07:29:16.220412542 +0000 UTC m=+0.111629572 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 07:29:16 compute-2 ceph-mon[77282]: pgmap v841: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:16.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:17.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:29:18.030098) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844558030173, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2354, "num_deletes": 251, "total_data_size": 5959312, "memory_usage": 6038288, "flush_reason": "Manual Compaction"}
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844558054169, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 3902909, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17898, "largest_seqno": 20247, "table_properties": {"data_size": 3893310, "index_size": 6093, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19107, "raw_average_key_size": 20, "raw_value_size": 3874271, "raw_average_value_size": 4073, "num_data_blocks": 272, "num_entries": 951, "num_filter_entries": 951, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844333, "oldest_key_time": 1769844333, "file_creation_time": 1769844558, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 24131 microseconds, and 8412 cpu microseconds.
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:29:18.054226) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 3902909 bytes OK
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:29:18.054244) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:29:18.055433) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:29:18.055446) EVENT_LOG_v1 {"time_micros": 1769844558055441, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:29:18.055464) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 5949015, prev total WAL file size 5949015, number of live WAL files 2.
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:29:18.056374) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(3811KB)], [36(7648KB)]
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844558056414, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 11734554, "oldest_snapshot_seqno": -1}
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 4495 keys, 9656979 bytes, temperature: kUnknown
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844558112107, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 9656979, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9624513, "index_size": 20132, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11269, "raw_key_size": 112399, "raw_average_key_size": 25, "raw_value_size": 9540655, "raw_average_value_size": 2122, "num_data_blocks": 836, "num_entries": 4495, "num_filter_entries": 4495, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769844558, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:29:18.112367) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 9656979 bytes
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:29:18.113464) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 210.3 rd, 173.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 7.5 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(5.5) write-amplify(2.5) OK, records in: 5014, records dropped: 519 output_compression: NoCompression
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:29:18.113489) EVENT_LOG_v1 {"time_micros": 1769844558113479, "job": 20, "event": "compaction_finished", "compaction_time_micros": 55801, "compaction_time_cpu_micros": 22978, "output_level": 6, "num_output_files": 1, "total_output_size": 9656979, "num_input_records": 5014, "num_output_records": 4495, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844558114035, "job": 20, "event": "table_file_deletion", "file_number": 38}
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844558114752, "job": 20, "event": "table_file_deletion", "file_number": 36}
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:29:18.056316) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:29:18.114846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:29:18.114855) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:29:18.114858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:29:18.114861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:29:18 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:29:18.114864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:29:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:18.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:19 compute-2 ceph-mon[77282]: pgmap v842: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:19.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:29:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:20.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:21 compute-2 ceph-mon[77282]: pgmap v843: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:21.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:22 compute-2 ceph-mon[77282]: pgmap v844: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:22.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=404 latency=0.002000054s ======
Jan 31 07:29:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:23.119 +0000] "GET /healthcheck HTTP/1.1" 404 240 - "python-urllib3/1.26.5" - latency=0.002000054s
Jan 31 07:29:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:23.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:29:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:29:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:24.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:29:25 compute-2 ceph-mon[77282]: pgmap v845: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:29:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:25.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:29:25 compute-2 sudo[228973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:29:25 compute-2 sudo[228973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:29:25 compute-2 sudo[228973]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:25 compute-2 sudo[229004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:29:25 compute-2 sudo[229004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:29:25 compute-2 podman[228997]: 2026-01-31 07:29:25.532758343 +0000 UTC m=+0.053410251 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:29:25 compute-2 sudo[229004]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:26.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:27 compute-2 ceph-mon[77282]: pgmap v846: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:27.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e131 e131: 3 total, 3 up, 3 in
Jan 31 07:29:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:28.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:29:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:29.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:29:29 compute-2 ceph-mon[77282]: pgmap v847: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:29 compute-2 ceph-mon[77282]: osdmap e131: 3 total, 3 up, 3 in
Jan 31 07:29:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:29:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e132 e132: 3 total, 3 up, 3 in
Jan 31 07:29:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:30.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:30 compute-2 ceph-mon[77282]: osdmap e132: 3 total, 3 up, 3 in
Jan 31 07:29:30 compute-2 ceph-mon[77282]: pgmap v850: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 127 B/s wr, 0 op/s
Jan 31 07:29:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:31.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:32 compute-2 ceph-mon[77282]: pgmap v851: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Jan 31 07:29:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:29:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:32.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:29:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:33.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e133 e133: 3 total, 3 up, 3 in
Jan 31 07:29:34 compute-2 ceph-mon[77282]: osdmap e133: 3 total, 3 up, 3 in
Jan 31 07:29:34 compute-2 ceph-mon[77282]: pgmap v853: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 1.2 KiB/s wr, 15 op/s
Jan 31 07:29:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:29:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:34.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:35.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:36 compute-2 ceph-mon[77282]: pgmap v854: 305 pgs: 305 active+clean; 21 MiB data, 173 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 2.6 MiB/s wr, 19 op/s
Jan 31 07:29:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:29:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:36.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:29:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:37.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e134 e134: 3 total, 3 up, 3 in
Jan 31 07:29:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:38.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:29:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:39.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:29:39 compute-2 ceph-mon[77282]: pgmap v855: 305 pgs: 305 active+clean; 29 MiB data, 181 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 3.4 MiB/s wr, 31 op/s
Jan 31 07:29:39 compute-2 ceph-mon[77282]: osdmap e134: 3 total, 3 up, 3 in
Jan 31 07:29:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e135 e135: 3 total, 3 up, 3 in
Jan 31 07:29:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:29:40 compute-2 ceph-mon[77282]: osdmap e135: 3 total, 3 up, 3 in
Jan 31 07:29:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:29:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:40.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:29:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:41.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:41 compute-2 ceph-mon[77282]: pgmap v858: 305 pgs: 305 active+clean; 37 MiB data, 189 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 5.2 MiB/s wr, 31 op/s
Jan 31 07:29:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:42.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:43.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:43 compute-2 ceph-mon[77282]: pgmap v859: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 4.9 MiB/s wr, 26 op/s
Jan 31 07:29:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3500024337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:29:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:29:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:29:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:44.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:29:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/451814909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:29:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:29:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:45.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:29:45 compute-2 sudo[229051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:29:45 compute-2 sudo[229051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:29:45 compute-2 sudo[229051]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:45 compute-2 sudo[229076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:29:45 compute-2 sudo[229076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:29:45 compute-2 sudo[229076]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:46 compute-2 ceph-mon[77282]: pgmap v860: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 2.5 MiB/s wr, 25 op/s
Jan 31 07:29:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/848144418' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:29:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/848144418' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:29:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:46.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:47.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:47 compute-2 podman[229102]: 2026-01-31 07:29:47.229302853 +0000 UTC m=+0.116060054 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 07:29:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2552159884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:29:47 compute-2 ceph-mon[77282]: pgmap v861: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 9.6 KiB/s rd, 1.6 MiB/s wr, 14 op/s
Jan 31 07:29:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2138970295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:29:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:48.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:49.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:49 compute-2 ceph-mon[77282]: pgmap v862: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 7.7 KiB/s rd, 1.2 MiB/s wr, 11 op/s
Jan 31 07:29:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:29:50 compute-2 nova_compute[226829]: 2026-01-31 07:29:50.376 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:29:50 compute-2 nova_compute[226829]: 2026-01-31 07:29:50.377 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:29:50 compute-2 nova_compute[226829]: 2026-01-31 07:29:50.402 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:29:50 compute-2 nova_compute[226829]: 2026-01-31 07:29:50.403 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:29:50 compute-2 nova_compute[226829]: 2026-01-31 07:29:50.403 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:29:50 compute-2 nova_compute[226829]: 2026-01-31 07:29:50.422 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:29:50 compute-2 nova_compute[226829]: 2026-01-31 07:29:50.422 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:29:50 compute-2 nova_compute[226829]: 2026-01-31 07:29:50.423 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:29:50 compute-2 nova_compute[226829]: 2026-01-31 07:29:50.423 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:29:50 compute-2 nova_compute[226829]: 2026-01-31 07:29:50.423 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:29:50 compute-2 nova_compute[226829]: 2026-01-31 07:29:50.424 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:29:50 compute-2 nova_compute[226829]: 2026-01-31 07:29:50.424 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:29:50 compute-2 nova_compute[226829]: 2026-01-31 07:29:50.424 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:29:50 compute-2 nova_compute[226829]: 2026-01-31 07:29:50.424 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:29:50 compute-2 nova_compute[226829]: 2026-01-31 07:29:50.456 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:29:50 compute-2 nova_compute[226829]: 2026-01-31 07:29:50.457 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:29:50 compute-2 nova_compute[226829]: 2026-01-31 07:29:50.457 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:29:50 compute-2 nova_compute[226829]: 2026-01-31 07:29:50.457 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:29:50 compute-2 nova_compute[226829]: 2026-01-31 07:29:50.457 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:29:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:50.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:29:50 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/394366881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:29:50 compute-2 nova_compute[226829]: 2026-01-31 07:29:50.910 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:29:51 compute-2 nova_compute[226829]: 2026-01-31 07:29:51.079 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:29:51 compute-2 nova_compute[226829]: 2026-01-31 07:29:51.081 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5310MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:29:51 compute-2 nova_compute[226829]: 2026-01-31 07:29:51.081 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:29:51 compute-2 nova_compute[226829]: 2026-01-31 07:29:51.081 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:29:51 compute-2 nova_compute[226829]: 2026-01-31 07:29:51.169 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:29:51 compute-2 nova_compute[226829]: 2026-01-31 07:29:51.169 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:29:51 compute-2 nova_compute[226829]: 2026-01-31 07:29:51.190 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:29:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:51.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:51 compute-2 ceph-mon[77282]: pgmap v863: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 4.5 KiB/s rd, 420 KiB/s wr, 6 op/s
Jan 31 07:29:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/394366881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:29:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:29:51 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2708000108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:29:51 compute-2 nova_compute[226829]: 2026-01-31 07:29:51.633 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:29:51 compute-2 nova_compute[226829]: 2026-01-31 07:29:51.639 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:29:51 compute-2 nova_compute[226829]: 2026-01-31 07:29:51.700 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:29:51 compute-2 nova_compute[226829]: 2026-01-31 07:29:51.702 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:29:51 compute-2 nova_compute[226829]: 2026-01-31 07:29:51.703 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:29:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2708000108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:29:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:52.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:29:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:53.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:29:53 compute-2 ceph-mon[77282]: pgmap v864: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 4.1 KiB/s rd, 379 KiB/s wr, 5 op/s
Jan 31 07:29:54 compute-2 sudo[229175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:29:54 compute-2 sudo[229175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:29:54 compute-2 sudo[229175]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:54 compute-2 sudo[229200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:29:54 compute-2 sudo[229200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:29:54 compute-2 sudo[229200]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:54 compute-2 sudo[229225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:29:54 compute-2 sudo[229225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:29:54 compute-2 sudo[229225]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:54 compute-2 sudo[229250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:29:54 compute-2 sudo[229250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:29:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:29:54 compute-2 sudo[229250]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:54.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:55.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:55 compute-2 ceph-mon[77282]: pgmap v865: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 4.1 KiB/s rd, 426 B/s wr, 5 op/s
Jan 31 07:29:55 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:29:55 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:29:55 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:29:55 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:29:55 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:29:55 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:29:56 compute-2 podman[229307]: 2026-01-31 07:29:56.215127937 +0000 UTC m=+0.096817967 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:29:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:29:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:56.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:29:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:57.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:58 compute-2 ceph-mon[77282]: pgmap v866: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 31 07:29:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:58.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:29:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:59.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:29:59 compute-2 ceph-mon[77282]: pgmap v867: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:00 compute-2 ceph-mon[77282]: overall HEALTH_OK
Jan 31 07:30:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:00.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:01.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:01 compute-2 ceph-mon[77282]: pgmap v868: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:02.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:03.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:04 compute-2 ceph-mon[77282]: pgmap v869: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:30:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:30:04.730 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:30:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:30:04.734 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:30:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:30:04.737 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:30:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:04.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:05.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:05 compute-2 ceph-mon[77282]: pgmap v870: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:05 compute-2 sudo[229334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:30:05 compute-2 sudo[229334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:30:05 compute-2 sudo[229334]: pam_unix(sudo:session): session closed for user root
Jan 31 07:30:05 compute-2 sudo[229359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:30:05 compute-2 sudo[229359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:30:05 compute-2 sudo[229359]: pam_unix(sudo:session): session closed for user root
Jan 31 07:30:05 compute-2 sudo[229384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:30:05 compute-2 sudo[229384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:30:05 compute-2 sudo[229384]: pam_unix(sudo:session): session closed for user root
Jan 31 07:30:05 compute-2 sudo[229409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:30:05 compute-2 sudo[229409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:30:05 compute-2 sudo[229409]: pam_unix(sudo:session): session closed for user root
Jan 31 07:30:06 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:30:06 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:30:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:30:06.833 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:30:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:30:06.834 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:30:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:30:06.834 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:30:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:06.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:07.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:07 compute-2 ceph-mon[77282]: pgmap v871: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:30:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:08.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:30:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:09.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:30:09 compute-2 ceph-mon[77282]: pgmap v872: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:30:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:10.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:30:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:30:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:11.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:30:11 compute-2 ceph-mon[77282]: pgmap v873: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:30:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:12.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:30:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:13.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:13 compute-2 ceph-mon[77282]: pgmap v874: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:30:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:14.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:15.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:15 compute-2 ceph-mon[77282]: pgmap v875: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:30:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:16.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:30:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:17.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:18 compute-2 podman[229440]: 2026-01-31 07:30:18.197600841 +0000 UTC m=+0.088072498 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:30:18 compute-2 ceph-mon[77282]: pgmap v876: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:18.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:19.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:19 compute-2 ceph-mon[77282]: pgmap v877: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:30:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:30:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:20.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:30:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:21.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:21 compute-2 ceph-mon[77282]: pgmap v878: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:22.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:30:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:23.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:30:24 compute-2 ceph-mon[77282]: pgmap v879: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:30:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:24.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:25.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:25 compute-2 ceph-mon[77282]: pgmap v880: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:25 compute-2 sudo[229470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:30:25 compute-2 sudo[229470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:30:25 compute-2 sudo[229470]: pam_unix(sudo:session): session closed for user root
Jan 31 07:30:25 compute-2 sudo[229495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:30:25 compute-2 sudo[229495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:30:25 compute-2 sudo[229495]: pam_unix(sudo:session): session closed for user root
Jan 31 07:30:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:30:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:26.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:30:27 compute-2 podman[229520]: 2026-01-31 07:30:27.184016059 +0000 UTC m=+0.064509624 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 07:30:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:27.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:27 compute-2 ceph-mon[77282]: pgmap v881: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:28.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:30:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:29.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:30:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:30:29 compute-2 ceph-mon[77282]: pgmap v882: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:30.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/547252698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:30:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:31.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:32 compute-2 ceph-mon[77282]: pgmap v883: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:32.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:33.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:34 compute-2 ceph-mon[77282]: pgmap v884: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:30:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:34.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:35.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:36 compute-2 ceph-mon[77282]: pgmap v885: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 379 KiB/s rd, 0 op/s
Jan 31 07:30:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:36.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:30:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:37.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:30:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e136 e136: 3 total, 3 up, 3 in
Jan 31 07:30:37 compute-2 ceph-mon[77282]: pgmap v886: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1 op/s
Jan 31 07:30:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e137 e137: 3 total, 3 up, 3 in
Jan 31 07:30:38 compute-2 ceph-mon[77282]: osdmap e136: 3 total, 3 up, 3 in
Jan 31 07:30:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:38.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:39.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:30:39 compute-2 ceph-mon[77282]: pgmap v888: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Jan 31 07:30:39 compute-2 ceph-mon[77282]: osdmap e137: 3 total, 3 up, 3 in
Jan 31 07:30:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/461678655' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:30:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2111554275' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:30:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:40.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:41.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:41 compute-2 ceph-mon[77282]: pgmap v890: 305 pgs: 305 active+clean; 53 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 518 KiB/s wr, 27 op/s
Jan 31 07:30:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:30:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:42.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:30:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:30:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:43.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:30:43 compute-2 nova_compute[226829]: 2026-01-31 07:30:43.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:30:43 compute-2 nova_compute[226829]: 2026-01-31 07:30:43.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 07:30:43 compute-2 nova_compute[226829]: 2026-01-31 07:30:43.507 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 07:30:43 compute-2 nova_compute[226829]: 2026-01-31 07:30:43.511 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:30:43 compute-2 nova_compute[226829]: 2026-01-31 07:30:43.512 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 07:30:43 compute-2 nova_compute[226829]: 2026-01-31 07:30:43.541 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:30:44 compute-2 ceph-mon[77282]: pgmap v891: 305 pgs: 305 active+clean; 79 MiB data, 210 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 31 op/s
Jan 31 07:30:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:30:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:44.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:30:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:45.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:30:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/909324454' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:30:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/909324454' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:30:45 compute-2 nova_compute[226829]: 2026-01-31 07:30:45.580 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:30:45 compute-2 nova_compute[226829]: 2026-01-31 07:30:45.581 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:30:46 compute-2 sudo[229550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:30:46 compute-2 sudo[229550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:30:46 compute-2 sudo[229550]: pam_unix(sudo:session): session closed for user root
Jan 31 07:30:46 compute-2 sudo[229575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:30:46 compute-2 sudo[229575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:30:46 compute-2 sudo[229575]: pam_unix(sudo:session): session closed for user root
Jan 31 07:30:46 compute-2 ceph-mon[77282]: pgmap v892: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 578 KiB/s rd, 2.7 MiB/s wr, 76 op/s
Jan 31 07:30:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1515124371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:30:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3350655955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:30:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:46.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:30:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:47.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:30:47 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e138 e138: 3 total, 3 up, 3 in
Jan 31 07:30:48 compute-2 ceph-mon[77282]: pgmap v893: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.4 MiB/s wr, 132 op/s
Jan 31 07:30:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e139 e139: 3 total, 3 up, 3 in
Jan 31 07:30:48 compute-2 nova_compute[226829]: 2026-01-31 07:30:48.485 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:30:48 compute-2 nova_compute[226829]: 2026-01-31 07:30:48.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:30:48 compute-2 nova_compute[226829]: 2026-01-31 07:30:48.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:30:48 compute-2 nova_compute[226829]: 2026-01-31 07:30:48.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:30:48 compute-2 nova_compute[226829]: 2026-01-31 07:30:48.516 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:30:48 compute-2 nova_compute[226829]: 2026-01-31 07:30:48.517 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:30:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:30:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:48.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:30:49 compute-2 ceph-mon[77282]: osdmap e138: 3 total, 3 up, 3 in
Jan 31 07:30:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1880901487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:30:49 compute-2 ceph-mon[77282]: osdmap e139: 3 total, 3 up, 3 in
Jan 31 07:30:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:49.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:49 compute-2 podman[229601]: 2026-01-31 07:30:49.294505603 +0000 UTC m=+0.173879974 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:30:49 compute-2 nova_compute[226829]: 2026-01-31 07:30:49.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:30:49 compute-2 nova_compute[226829]: 2026-01-31 07:30:49.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:30:49 compute-2 nova_compute[226829]: 2026-01-31 07:30:49.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:30:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:30:49 compute-2 nova_compute[226829]: 2026-01-31 07:30:49.528 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:30:49 compute-2 nova_compute[226829]: 2026-01-31 07:30:49.528 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:30:49 compute-2 nova_compute[226829]: 2026-01-31 07:30:49.529 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:30:49 compute-2 nova_compute[226829]: 2026-01-31 07:30:49.529 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:30:49 compute-2 nova_compute[226829]: 2026-01-31 07:30:49.530 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:30:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:30:49 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4030013093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:30:49 compute-2 nova_compute[226829]: 2026-01-31 07:30:49.989 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:30:50 compute-2 ceph-mon[77282]: pgmap v895: 305 pgs: 305 active+clean; 79 MiB data, 214 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 133 op/s
Jan 31 07:30:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2270920724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:30:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4030013093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:30:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3494218237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:30:50 compute-2 nova_compute[226829]: 2026-01-31 07:30:50.136 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:30:50 compute-2 nova_compute[226829]: 2026-01-31 07:30:50.137 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5285MB free_disk=20.968303680419922GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:30:50 compute-2 nova_compute[226829]: 2026-01-31 07:30:50.137 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:30:50 compute-2 nova_compute[226829]: 2026-01-31 07:30:50.138 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:30:50 compute-2 nova_compute[226829]: 2026-01-31 07:30:50.442 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:30:50 compute-2 nova_compute[226829]: 2026-01-31 07:30:50.442 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:30:50 compute-2 nova_compute[226829]: 2026-01-31 07:30:50.575 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing inventories for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 07:30:50 compute-2 nova_compute[226829]: 2026-01-31 07:30:50.700 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating ProviderTree inventory for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 07:30:50 compute-2 nova_compute[226829]: 2026-01-31 07:30:50.701 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating inventory in ProviderTree for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 07:30:50 compute-2 nova_compute[226829]: 2026-01-31 07:30:50.722 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing aggregate associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 07:30:50 compute-2 nova_compute[226829]: 2026-01-31 07:30:50.775 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing trait associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, traits: COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 07:30:50 compute-2 nova_compute[226829]: 2026-01-31 07:30:50.808 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:30:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:50.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:30:51 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/263640405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:30:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:51.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:51 compute-2 nova_compute[226829]: 2026-01-31 07:30:51.266 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:30:51 compute-2 nova_compute[226829]: 2026-01-31 07:30:51.270 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:30:51 compute-2 nova_compute[226829]: 2026-01-31 07:30:51.283 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:30:51 compute-2 nova_compute[226829]: 2026-01-31 07:30:51.285 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:30:51 compute-2 nova_compute[226829]: 2026-01-31 07:30:51.285 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:30:52 compute-2 ceph-mon[77282]: pgmap v897: 305 pgs: 305 active+clean; 67 MiB data, 210 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 123 KiB/s wr, 158 op/s
Jan 31 07:30:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/263640405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:30:52 compute-2 nova_compute[226829]: 2026-01-31 07:30:52.284 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:30:52 compute-2 nova_compute[226829]: 2026-01-31 07:30:52.285 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:30:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:52.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:30:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:53.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:30:54 compute-2 ceph-mon[77282]: pgmap v898: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.7 KiB/s wr, 123 op/s
Jan 31 07:30:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2419106158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:30:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:30:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:54.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:30:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:55.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:30:56 compute-2 ceph-mon[77282]: pgmap v899: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 333 KiB/s rd, 2.5 KiB/s wr, 53 op/s
Jan 31 07:30:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:56.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:30:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:57.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:30:57 compute-2 ceph-mon[77282]: pgmap v900: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 2.4 KiB/s wr, 27 op/s
Jan 31 07:30:58 compute-2 podman[229677]: 2026-01-31 07:30:58.180580251 +0000 UTC m=+0.064382790 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:30:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 e140: 3 total, 3 up, 3 in
Jan 31 07:30:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:58.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:30:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:59.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:30:59 compute-2 ceph-mon[77282]: pgmap v901: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 2.0 KiB/s wr, 29 op/s
Jan 31 07:30:59 compute-2 ceph-mon[77282]: osdmap e140: 3 total, 3 up, 3 in
Jan 31 07:31:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2672843128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:31:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1000464503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:31:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/86164115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:31:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:00.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:31:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:01.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:31:02 compute-2 ceph-mon[77282]: pgmap v903: 305 pgs: 305 active+clean; 68 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 465 KiB/s rd, 1.0 MiB/s wr, 19 op/s
Jan 31 07:31:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:02.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:03.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:04 compute-2 ceph-mon[77282]: pgmap v904: 305 pgs: 305 active+clean; 94 MiB data, 197 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 43 op/s
Jan 31 07:31:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:31:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:04.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:31:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:05.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:31:06 compute-2 sudo[229700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:31:06 compute-2 sudo[229700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:06 compute-2 sudo[229700]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:06 compute-2 sudo[229725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:31:06 compute-2 sudo[229725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:06 compute-2 sudo[229725]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:06 compute-2 sudo[229750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:31:06 compute-2 sudo[229750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:06 compute-2 sudo[229750]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:06 compute-2 sudo[229756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:31:06 compute-2 sudo[229756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:06 compute-2 sudo[229756]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:06 compute-2 ceph-mon[77282]: pgmap v905: 305 pgs: 305 active+clean; 134 MiB data, 228 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.3 MiB/s wr, 108 op/s
Jan 31 07:31:06 compute-2 sudo[229798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:31:06 compute-2 sudo[229798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:06 compute-2 sudo[229808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:31:06 compute-2 sudo[229808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:06 compute-2 sudo[229808]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:06 compute-2 sudo[229798]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:06.834 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:31:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:06.834 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:31:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:06.835 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:31:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:06.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:07 compute-2 ceph-mon[77282]: pgmap v906: 305 pgs: 305 active+clean; 134 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.3 MiB/s wr, 123 op/s
Jan 31 07:31:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:07.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:07 compute-2 nova_compute[226829]: 2026-01-31 07:31:07.801 226833 DEBUG oslo_concurrency.lockutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Acquiring lock "bbbf9310-a6db-4d77-80bc-df8acf10ed4f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:31:07 compute-2 nova_compute[226829]: 2026-01-31 07:31:07.801 226833 DEBUG oslo_concurrency.lockutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "bbbf9310-a6db-4d77-80bc-df8acf10ed4f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:31:07 compute-2 nova_compute[226829]: 2026-01-31 07:31:07.820 226833 DEBUG nova.compute.manager [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:31:07 compute-2 nova_compute[226829]: 2026-01-31 07:31:07.943 226833 DEBUG oslo_concurrency.lockutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:31:07 compute-2 nova_compute[226829]: 2026-01-31 07:31:07.943 226833 DEBUG oslo_concurrency.lockutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:31:07 compute-2 nova_compute[226829]: 2026-01-31 07:31:07.951 226833 DEBUG nova.virt.hardware [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:31:07 compute-2 nova_compute[226829]: 2026-01-31 07:31:07.952 226833 INFO nova.compute.claims [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:31:08 compute-2 nova_compute[226829]: 2026-01-31 07:31:08.079 226833 DEBUG oslo_concurrency.processutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:31:08 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:31:08 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:31:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1388821805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:31:08 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:31:08 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:31:08 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:31:08 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:31:08 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:31:08 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:31:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/23788351' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:31:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:31:08 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3026629658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:31:08 compute-2 nova_compute[226829]: 2026-01-31 07:31:08.534 226833 DEBUG oslo_concurrency.processutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:31:08 compute-2 nova_compute[226829]: 2026-01-31 07:31:08.539 226833 DEBUG nova.compute.provider_tree [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:31:08 compute-2 nova_compute[226829]: 2026-01-31 07:31:08.555 226833 DEBUG nova.scheduler.client.report [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:31:08 compute-2 nova_compute[226829]: 2026-01-31 07:31:08.574 226833 DEBUG oslo_concurrency.lockutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:31:08 compute-2 nova_compute[226829]: 2026-01-31 07:31:08.575 226833 DEBUG nova.compute.manager [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:31:08 compute-2 nova_compute[226829]: 2026-01-31 07:31:08.622 226833 DEBUG nova.compute.manager [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:31:08 compute-2 nova_compute[226829]: 2026-01-31 07:31:08.622 226833 DEBUG nova.network.neutron [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:31:08 compute-2 nova_compute[226829]: 2026-01-31 07:31:08.642 226833 INFO nova.virt.libvirt.driver [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:31:08 compute-2 nova_compute[226829]: 2026-01-31 07:31:08.660 226833 DEBUG nova.compute.manager [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:31:08 compute-2 nova_compute[226829]: 2026-01-31 07:31:08.749 226833 DEBUG nova.compute.manager [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:31:08 compute-2 nova_compute[226829]: 2026-01-31 07:31:08.750 226833 DEBUG nova.virt.libvirt.driver [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:31:08 compute-2 nova_compute[226829]: 2026-01-31 07:31:08.751 226833 INFO nova.virt.libvirt.driver [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Creating image(s)
Jan 31 07:31:08 compute-2 nova_compute[226829]: 2026-01-31 07:31:08.784 226833 DEBUG nova.storage.rbd_utils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] rbd image bbbf9310-a6db-4d77-80bc-df8acf10ed4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:31:08 compute-2 nova_compute[226829]: 2026-01-31 07:31:08.818 226833 DEBUG nova.storage.rbd_utils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] rbd image bbbf9310-a6db-4d77-80bc-df8acf10ed4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:31:08 compute-2 nova_compute[226829]: 2026-01-31 07:31:08.852 226833 DEBUG nova.storage.rbd_utils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] rbd image bbbf9310-a6db-4d77-80bc-df8acf10ed4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:31:08 compute-2 nova_compute[226829]: 2026-01-31 07:31:08.856 226833 DEBUG oslo_concurrency.lockutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:31:08 compute-2 nova_compute[226829]: 2026-01-31 07:31:08.857 226833 DEBUG oslo_concurrency.lockutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:31:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:08.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:09 compute-2 nova_compute[226829]: 2026-01-31 07:31:09.264 226833 DEBUG nova.virt.libvirt.imagebackend [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Image locations are: [{'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/7c23949f-bba8-4466-bb79-caf568852d38/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/7c23949f-bba8-4466-bb79-caf568852d38/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 31 07:31:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:31:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:09.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:31:09 compute-2 nova_compute[226829]: 2026-01-31 07:31:09.380 226833 WARNING oslo_policy.policy [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 31 07:31:09 compute-2 nova_compute[226829]: 2026-01-31 07:31:09.381 226833 WARNING oslo_policy.policy [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 31 07:31:09 compute-2 nova_compute[226829]: 2026-01-31 07:31:09.385 226833 DEBUG nova.policy [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '504b628b5b8b46f9a0fce37f63d8492e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '44b45fc9f5584f0ca482be3aa129958e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:31:09 compute-2 ceph-mon[77282]: pgmap v907: 305 pgs: 305 active+clean; 134 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 4.3 MiB/s wr, 145 op/s
Jan 31 07:31:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3026629658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:31:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:31:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2466196534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:31:10 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:10.717 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:31:10 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:10.717 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:31:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:31:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:10.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:31:11 compute-2 nova_compute[226829]: 2026-01-31 07:31:11.229 226833 DEBUG nova.network.neutron [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Successfully created port: cbc5d20a-d45f-4c15-9dc9-f76e172c59fa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:31:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:11.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:11 compute-2 nova_compute[226829]: 2026-01-31 07:31:11.298 226833 DEBUG oslo_concurrency.processutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:31:11 compute-2 nova_compute[226829]: 2026-01-31 07:31:11.374 226833 DEBUG oslo_concurrency.processutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6.part --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:31:11 compute-2 nova_compute[226829]: 2026-01-31 07:31:11.375 226833 DEBUG nova.virt.images [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] 7c23949f-bba8-4466-bb79-caf568852d38 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 31 07:31:11 compute-2 nova_compute[226829]: 2026-01-31 07:31:11.377 226833 DEBUG nova.privsep.utils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 31 07:31:11 compute-2 nova_compute[226829]: 2026-01-31 07:31:11.378 226833 DEBUG oslo_concurrency.processutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6.part /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:31:11 compute-2 nova_compute[226829]: 2026-01-31 07:31:11.552 226833 DEBUG oslo_concurrency.processutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6.part /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6.converted" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:31:11 compute-2 nova_compute[226829]: 2026-01-31 07:31:11.558 226833 DEBUG oslo_concurrency.processutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:31:11 compute-2 nova_compute[226829]: 2026-01-31 07:31:11.634 226833 DEBUG oslo_concurrency.processutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6.converted --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:31:11 compute-2 nova_compute[226829]: 2026-01-31 07:31:11.635 226833 DEBUG oslo_concurrency.lockutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:31:11 compute-2 nova_compute[226829]: 2026-01-31 07:31:11.672 226833 DEBUG nova.storage.rbd_utils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] rbd image bbbf9310-a6db-4d77-80bc-df8acf10ed4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:31:11 compute-2 nova_compute[226829]: 2026-01-31 07:31:11.676 226833 DEBUG oslo_concurrency.processutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 bbbf9310-a6db-4d77-80bc-df8acf10ed4f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:31:11 compute-2 ceph-mon[77282]: pgmap v908: 305 pgs: 305 active+clean; 107 MiB data, 226 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 131 op/s
Jan 31 07:31:12 compute-2 nova_compute[226829]: 2026-01-31 07:31:12.472 226833 DEBUG oslo_concurrency.processutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 bbbf9310-a6db-4d77-80bc-df8acf10ed4f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.795s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:31:12 compute-2 nova_compute[226829]: 2026-01-31 07:31:12.586 226833 DEBUG nova.storage.rbd_utils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] resizing rbd image bbbf9310-a6db-4d77-80bc-df8acf10ed4f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:31:12 compute-2 nova_compute[226829]: 2026-01-31 07:31:12.963 226833 DEBUG nova.network.neutron [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Successfully updated port: cbc5d20a-d45f-4c15-9dc9-f76e172c59fa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:31:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:12 compute-2 nova_compute[226829]: 2026-01-31 07:31:12.973 226833 DEBUG nova.objects.instance [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lazy-loading 'migration_context' on Instance uuid bbbf9310-a6db-4d77-80bc-df8acf10ed4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:31:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:12.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:12 compute-2 nova_compute[226829]: 2026-01-31 07:31:12.988 226833 DEBUG nova.virt.libvirt.driver [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:31:12 compute-2 nova_compute[226829]: 2026-01-31 07:31:12.989 226833 DEBUG nova.virt.libvirt.driver [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Ensure instance console log exists: /var/lib/nova/instances/bbbf9310-a6db-4d77-80bc-df8acf10ed4f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:31:12 compute-2 nova_compute[226829]: 2026-01-31 07:31:12.990 226833 DEBUG oslo_concurrency.lockutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:31:12 compute-2 nova_compute[226829]: 2026-01-31 07:31:12.990 226833 DEBUG oslo_concurrency.lockutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:31:12 compute-2 nova_compute[226829]: 2026-01-31 07:31:12.991 226833 DEBUG oslo_concurrency.lockutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:31:13 compute-2 nova_compute[226829]: 2026-01-31 07:31:13.008 226833 DEBUG oslo_concurrency.lockutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Acquiring lock "refresh_cache-bbbf9310-a6db-4d77-80bc-df8acf10ed4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:31:13 compute-2 nova_compute[226829]: 2026-01-31 07:31:13.009 226833 DEBUG oslo_concurrency.lockutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Acquired lock "refresh_cache-bbbf9310-a6db-4d77-80bc-df8acf10ed4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:31:13 compute-2 nova_compute[226829]: 2026-01-31 07:31:13.009 226833 DEBUG nova.network.neutron [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:31:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:13.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:13 compute-2 nova_compute[226829]: 2026-01-31 07:31:13.393 226833 DEBUG nova.network.neutron [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:31:13 compute-2 nova_compute[226829]: 2026-01-31 07:31:13.723 226833 DEBUG nova.compute.manager [req-36725a78-a78c-4ab8-9b88-eba4a9e78349 req-84e1c3d9-61ca-48cc-b909-1035bbdf502b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Received event network-changed-cbc5d20a-d45f-4c15-9dc9-f76e172c59fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:31:13 compute-2 nova_compute[226829]: 2026-01-31 07:31:13.724 226833 DEBUG nova.compute.manager [req-36725a78-a78c-4ab8-9b88-eba4a9e78349 req-84e1c3d9-61ca-48cc-b909-1035bbdf502b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Refreshing instance network info cache due to event network-changed-cbc5d20a-d45f-4c15-9dc9-f76e172c59fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:31:13 compute-2 nova_compute[226829]: 2026-01-31 07:31:13.724 226833 DEBUG oslo_concurrency.lockutils [req-36725a78-a78c-4ab8-9b88-eba4a9e78349 req-84e1c3d9-61ca-48cc-b909-1035bbdf502b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-bbbf9310-a6db-4d77-80bc-df8acf10ed4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:31:13 compute-2 ceph-mon[77282]: pgmap v909: 305 pgs: 305 active+clean; 96 MiB data, 223 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.7 MiB/s wr, 140 op/s
Jan 31 07:31:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:31:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:31:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:14.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:31:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:31:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:15.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:31:15 compute-2 sudo[230088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:31:15 compute-2 sudo[230088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:15 compute-2 sudo[230088]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:15 compute-2 ceph-mon[77282]: pgmap v910: 305 pgs: 305 active+clean; 122 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.1 MiB/s wr, 146 op/s
Jan 31 07:31:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:31:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:31:16 compute-2 sudo[230113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:31:16 compute-2 sudo[230113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:16 compute-2 sudo[230113]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:16 compute-2 nova_compute[226829]: 2026-01-31 07:31:16.051 226833 DEBUG nova.network.neutron [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Updating instance_info_cache with network_info: [{"id": "cbc5d20a-d45f-4c15-9dc9-f76e172c59fa", "address": "fa:16:3e:45:d4:ba", "network": {"id": "c63b50b2-1358-4cfc-8846-5b8785b4f656", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-749344601-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44b45fc9f5584f0ca482be3aa129958e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbc5d20a-d4", "ovs_interfaceid": "cbc5d20a-d45f-4c15-9dc9-f76e172c59fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:31:16 compute-2 nova_compute[226829]: 2026-01-31 07:31:16.094 226833 DEBUG oslo_concurrency.lockutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Releasing lock "refresh_cache-bbbf9310-a6db-4d77-80bc-df8acf10ed4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:31:16 compute-2 nova_compute[226829]: 2026-01-31 07:31:16.094 226833 DEBUG nova.compute.manager [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Instance network_info: |[{"id": "cbc5d20a-d45f-4c15-9dc9-f76e172c59fa", "address": "fa:16:3e:45:d4:ba", "network": {"id": "c63b50b2-1358-4cfc-8846-5b8785b4f656", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-749344601-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44b45fc9f5584f0ca482be3aa129958e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbc5d20a-d4", "ovs_interfaceid": "cbc5d20a-d45f-4c15-9dc9-f76e172c59fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:31:16 compute-2 nova_compute[226829]: 2026-01-31 07:31:16.096 226833 DEBUG oslo_concurrency.lockutils [req-36725a78-a78c-4ab8-9b88-eba4a9e78349 req-84e1c3d9-61ca-48cc-b909-1035bbdf502b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-bbbf9310-a6db-4d77-80bc-df8acf10ed4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:31:16 compute-2 nova_compute[226829]: 2026-01-31 07:31:16.097 226833 DEBUG nova.network.neutron [req-36725a78-a78c-4ab8-9b88-eba4a9e78349 req-84e1c3d9-61ca-48cc-b909-1035bbdf502b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Refreshing network info cache for port cbc5d20a-d45f-4c15-9dc9-f76e172c59fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:31:16 compute-2 nova_compute[226829]: 2026-01-31 07:31:16.104 226833 DEBUG nova.virt.libvirt.driver [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Start _get_guest_xml network_info=[{"id": "cbc5d20a-d45f-4c15-9dc9-f76e172c59fa", "address": "fa:16:3e:45:d4:ba", "network": {"id": "c63b50b2-1358-4cfc-8846-5b8785b4f656", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-749344601-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44b45fc9f5584f0ca482be3aa129958e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbc5d20a-d4", "ovs_interfaceid": "cbc5d20a-d45f-4c15-9dc9-f76e172c59fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:31:16 compute-2 nova_compute[226829]: 2026-01-31 07:31:16.113 226833 WARNING nova.virt.libvirt.driver [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:31:16 compute-2 nova_compute[226829]: 2026-01-31 07:31:16.118 226833 DEBUG nova.virt.libvirt.host [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:31:16 compute-2 nova_compute[226829]: 2026-01-31 07:31:16.119 226833 DEBUG nova.virt.libvirt.host [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:31:16 compute-2 nova_compute[226829]: 2026-01-31 07:31:16.122 226833 DEBUG nova.virt.libvirt.host [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:31:16 compute-2 nova_compute[226829]: 2026-01-31 07:31:16.123 226833 DEBUG nova.virt.libvirt.host [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:31:16 compute-2 nova_compute[226829]: 2026-01-31 07:31:16.125 226833 DEBUG nova.virt.libvirt.driver [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:31:16 compute-2 nova_compute[226829]: 2026-01-31 07:31:16.126 226833 DEBUG nova.virt.hardware [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:30:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1540265066',id=5,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_0-193340824',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:31:16 compute-2 nova_compute[226829]: 2026-01-31 07:31:16.127 226833 DEBUG nova.virt.hardware [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:31:16 compute-2 nova_compute[226829]: 2026-01-31 07:31:16.127 226833 DEBUG nova.virt.hardware [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:31:16 compute-2 nova_compute[226829]: 2026-01-31 07:31:16.127 226833 DEBUG nova.virt.hardware [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:31:16 compute-2 nova_compute[226829]: 2026-01-31 07:31:16.128 226833 DEBUG nova.virt.hardware [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:31:16 compute-2 nova_compute[226829]: 2026-01-31 07:31:16.128 226833 DEBUG nova.virt.hardware [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:31:16 compute-2 nova_compute[226829]: 2026-01-31 07:31:16.129 226833 DEBUG nova.virt.hardware [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:31:16 compute-2 nova_compute[226829]: 2026-01-31 07:31:16.129 226833 DEBUG nova.virt.hardware [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:31:16 compute-2 nova_compute[226829]: 2026-01-31 07:31:16.130 226833 DEBUG nova.virt.hardware [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:31:16 compute-2 nova_compute[226829]: 2026-01-31 07:31:16.130 226833 DEBUG nova.virt.hardware [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:31:16 compute-2 nova_compute[226829]: 2026-01-31 07:31:16.130 226833 DEBUG nova.virt.hardware [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:31:16 compute-2 nova_compute[226829]: 2026-01-31 07:31:16.137 226833 DEBUG nova.privsep.utils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 31 07:31:16 compute-2 nova_compute[226829]: 2026-01-31 07:31:16.138 226833 DEBUG oslo_concurrency.processutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:31:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:31:16 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2925801226' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:31:16 compute-2 nova_compute[226829]: 2026-01-31 07:31:16.566 226833 DEBUG oslo_concurrency.processutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:31:16 compute-2 nova_compute[226829]: 2026-01-31 07:31:16.608 226833 DEBUG nova.storage.rbd_utils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] rbd image bbbf9310-a6db-4d77-80bc-df8acf10ed4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:31:16 compute-2 nova_compute[226829]: 2026-01-31 07:31:16.615 226833 DEBUG oslo_concurrency.processutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:31:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:16.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:31:17 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2444794305' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:31:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2925801226' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:31:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:31:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:17.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:31:17 compute-2 nova_compute[226829]: 2026-01-31 07:31:17.703 226833 DEBUG oslo_concurrency.processutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:31:17 compute-2 nova_compute[226829]: 2026-01-31 07:31:17.707 226833 DEBUG nova.virt.libvirt.vif [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:31:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1052945082',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1052945082',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1052945082',id=4,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDxTAQcLwGUMVPW0FQMEh2WkiJiP4UW6jyeXpyYN49oJPb9d8Ollm4RLXjbgyBtbDOEel5iqcdq3OwrhOFKp5uzFUPyPJ5dQoEavamFkLA01W1IRHGw1oCDni1Hm+Ki2Jw==',key_name='tempest-keypair-338823368',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='44b45fc9f5584f0ca482be3aa129958e',ramdisk_id='',reservation_id='r-wh7xg7x6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-581936852',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-581936852-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:31:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='504b628b5b8b46f9a0fce37f63d8492e',uuid=bbbf9310-a6db-4d77-80bc-df8acf10ed4f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cbc5d20a-d45f-4c15-9dc9-f76e172c59fa", "address": "fa:16:3e:45:d4:ba", "network": {"id": "c63b50b2-1358-4cfc-8846-5b8785b4f656", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-749344601-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44b45fc9f5584f0ca482be3aa129958e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbc5d20a-d4", "ovs_interfaceid": "cbc5d20a-d45f-4c15-9dc9-f76e172c59fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:31:17 compute-2 nova_compute[226829]: 2026-01-31 07:31:17.708 226833 DEBUG nova.network.os_vif_util [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Converting VIF {"id": "cbc5d20a-d45f-4c15-9dc9-f76e172c59fa", "address": "fa:16:3e:45:d4:ba", "network": {"id": "c63b50b2-1358-4cfc-8846-5b8785b4f656", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-749344601-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44b45fc9f5584f0ca482be3aa129958e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbc5d20a-d4", "ovs_interfaceid": "cbc5d20a-d45f-4c15-9dc9-f76e172c59fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:31:17 compute-2 nova_compute[226829]: 2026-01-31 07:31:17.709 226833 DEBUG nova.network.os_vif_util [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:d4:ba,bridge_name='br-int',has_traffic_filtering=True,id=cbc5d20a-d45f-4c15-9dc9-f76e172c59fa,network=Network(c63b50b2-1358-4cfc-8846-5b8785b4f656),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbc5d20a-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:31:17 compute-2 nova_compute[226829]: 2026-01-31 07:31:17.713 226833 DEBUG nova.objects.instance [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lazy-loading 'pci_devices' on Instance uuid bbbf9310-a6db-4d77-80bc-df8acf10ed4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:31:17 compute-2 nova_compute[226829]: 2026-01-31 07:31:17.739 226833 DEBUG nova.virt.libvirt.driver [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:31:17 compute-2 nova_compute[226829]:   <uuid>bbbf9310-a6db-4d77-80bc-df8acf10ed4f</uuid>
Jan 31 07:31:17 compute-2 nova_compute[226829]:   <name>instance-00000004</name>
Jan 31 07:31:17 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:31:17 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:31:17 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-1052945082</nova:name>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:31:16</nova:creationTime>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <nova:flavor name="tempest-flavor_with_ephemeral_0-193340824">
Jan 31 07:31:17 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:31:17 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:31:17 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:31:17 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:31:17 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:31:17 compute-2 nova_compute[226829]:         <nova:user uuid="504b628b5b8b46f9a0fce37f63d8492e">tempest-ServersWithSpecificFlavorTestJSON-581936852-project-member</nova:user>
Jan 31 07:31:17 compute-2 nova_compute[226829]:         <nova:project uuid="44b45fc9f5584f0ca482be3aa129958e">tempest-ServersWithSpecificFlavorTestJSON-581936852</nova:project>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 07:31:17 compute-2 nova_compute[226829]:         <nova:port uuid="cbc5d20a-d45f-4c15-9dc9-f76e172c59fa">
Jan 31 07:31:17 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:31:17 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:31:17 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <system>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <entry name="serial">bbbf9310-a6db-4d77-80bc-df8acf10ed4f</entry>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <entry name="uuid">bbbf9310-a6db-4d77-80bc-df8acf10ed4f</entry>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     </system>
Jan 31 07:31:17 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:31:17 compute-2 nova_compute[226829]:   <os>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:   </os>
Jan 31 07:31:17 compute-2 nova_compute[226829]:   <features>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:   </features>
Jan 31 07:31:17 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:31:17 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:31:17 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/bbbf9310-a6db-4d77-80bc-df8acf10ed4f_disk">
Jan 31 07:31:17 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       </source>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:31:17 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/bbbf9310-a6db-4d77-80bc-df8acf10ed4f_disk.config">
Jan 31 07:31:17 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       </source>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:31:17 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:45:d4:ba"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <target dev="tapcbc5d20a-d4"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/bbbf9310-a6db-4d77-80bc-df8acf10ed4f/console.log" append="off"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <video>
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     </video>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:31:17 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:31:17 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:31:17 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:31:17 compute-2 nova_compute[226829]: </domain>
Jan 31 07:31:17 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:31:17 compute-2 nova_compute[226829]: 2026-01-31 07:31:17.740 226833 DEBUG nova.compute.manager [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Preparing to wait for external event network-vif-plugged-cbc5d20a-d45f-4c15-9dc9-f76e172c59fa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:31:17 compute-2 nova_compute[226829]: 2026-01-31 07:31:17.741 226833 DEBUG oslo_concurrency.lockutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Acquiring lock "bbbf9310-a6db-4d77-80bc-df8acf10ed4f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:31:17 compute-2 nova_compute[226829]: 2026-01-31 07:31:17.741 226833 DEBUG oslo_concurrency.lockutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "bbbf9310-a6db-4d77-80bc-df8acf10ed4f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:31:17 compute-2 nova_compute[226829]: 2026-01-31 07:31:17.742 226833 DEBUG oslo_concurrency.lockutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "bbbf9310-a6db-4d77-80bc-df8acf10ed4f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:31:17 compute-2 nova_compute[226829]: 2026-01-31 07:31:17.743 226833 DEBUG nova.virt.libvirt.vif [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:31:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1052945082',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1052945082',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1052945082',id=4,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDxTAQcLwGUMVPW0FQMEh2WkiJiP4UW6jyeXpyYN49oJPb9d8Ollm4RLXjbgyBtbDOEel5iqcdq3OwrhOFKp5uzFUPyPJ5dQoEavamFkLA01W1IRHGw1oCDni1Hm+Ki2Jw==',key_name='tempest-keypair-338823368',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='44b45fc9f5584f0ca482be3aa129958e',ramdisk_id='',reservation_id='r-wh7xg7x6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-581936852',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-581936852-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:31:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='504b628b5b8b46f9a0fce37f63d8492e',uuid=bbbf9310-a6db-4d77-80bc-df8acf10ed4f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cbc5d20a-d45f-4c15-9dc9-f76e172c59fa", "address": "fa:16:3e:45:d4:ba", "network": {"id": "c63b50b2-1358-4cfc-8846-5b8785b4f656", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-749344601-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44b45fc9f5584f0ca482be3aa129958e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbc5d20a-d4", "ovs_interfaceid": "cbc5d20a-d45f-4c15-9dc9-f76e172c59fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:31:17 compute-2 nova_compute[226829]: 2026-01-31 07:31:17.743 226833 DEBUG nova.network.os_vif_util [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Converting VIF {"id": "cbc5d20a-d45f-4c15-9dc9-f76e172c59fa", "address": "fa:16:3e:45:d4:ba", "network": {"id": "c63b50b2-1358-4cfc-8846-5b8785b4f656", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-749344601-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44b45fc9f5584f0ca482be3aa129958e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbc5d20a-d4", "ovs_interfaceid": "cbc5d20a-d45f-4c15-9dc9-f76e172c59fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:31:17 compute-2 nova_compute[226829]: 2026-01-31 07:31:17.745 226833 DEBUG nova.network.os_vif_util [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:d4:ba,bridge_name='br-int',has_traffic_filtering=True,id=cbc5d20a-d45f-4c15-9dc9-f76e172c59fa,network=Network(c63b50b2-1358-4cfc-8846-5b8785b4f656),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbc5d20a-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:31:17 compute-2 nova_compute[226829]: 2026-01-31 07:31:17.745 226833 DEBUG os_vif [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:d4:ba,bridge_name='br-int',has_traffic_filtering=True,id=cbc5d20a-d45f-4c15-9dc9-f76e172c59fa,network=Network(c63b50b2-1358-4cfc-8846-5b8785b4f656),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbc5d20a-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:31:17 compute-2 nova_compute[226829]: 2026-01-31 07:31:17.806 226833 DEBUG ovsdbapp.backend.ovs_idl [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 07:31:17 compute-2 nova_compute[226829]: 2026-01-31 07:31:17.806 226833 DEBUG ovsdbapp.backend.ovs_idl [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 07:31:17 compute-2 nova_compute[226829]: 2026-01-31 07:31:17.807 226833 DEBUG ovsdbapp.backend.ovs_idl [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 07:31:17 compute-2 nova_compute[226829]: 2026-01-31 07:31:17.807 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 07:31:17 compute-2 nova_compute[226829]: 2026-01-31 07:31:17.808 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:17 compute-2 nova_compute[226829]: 2026-01-31 07:31:17.808 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 07:31:17 compute-2 nova_compute[226829]: 2026-01-31 07:31:17.809 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:17 compute-2 nova_compute[226829]: 2026-01-31 07:31:17.811 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:17 compute-2 nova_compute[226829]: 2026-01-31 07:31:17.813 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:17 compute-2 nova_compute[226829]: 2026-01-31 07:31:17.826 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:17 compute-2 nova_compute[226829]: 2026-01-31 07:31:17.826 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:31:17 compute-2 nova_compute[226829]: 2026-01-31 07:31:17.826 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:31:17 compute-2 nova_compute[226829]: 2026-01-31 07:31:17.828 226833 INFO oslo.privsep.daemon [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpudoq0j08/privsep.sock']
Jan 31 07:31:18 compute-2 nova_compute[226829]: 2026-01-31 07:31:18.006 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:18 compute-2 ceph-mon[77282]: pgmap v911: 305 pgs: 305 active+clean; 134 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Jan 31 07:31:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2444794305' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:31:18 compute-2 nova_compute[226829]: 2026-01-31 07:31:18.588 226833 INFO oslo.privsep.daemon [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Spawned new privsep daemon via rootwrap
Jan 31 07:31:18 compute-2 nova_compute[226829]: 2026-01-31 07:31:18.445 230205 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 31 07:31:18 compute-2 nova_compute[226829]: 2026-01-31 07:31:18.451 230205 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 31 07:31:18 compute-2 nova_compute[226829]: 2026-01-31 07:31:18.455 230205 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Jan 31 07:31:18 compute-2 nova_compute[226829]: 2026-01-31 07:31:18.455 230205 INFO oslo.privsep.daemon [-] privsep daemon running as pid 230205
Jan 31 07:31:18 compute-2 nova_compute[226829]: 2026-01-31 07:31:18.850 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:18 compute-2 nova_compute[226829]: 2026-01-31 07:31:18.850 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcbc5d20a-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:31:18 compute-2 nova_compute[226829]: 2026-01-31 07:31:18.851 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcbc5d20a-d4, col_values=(('external_ids', {'iface-id': 'cbc5d20a-d45f-4c15-9dc9-f76e172c59fa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:d4:ba', 'vm-uuid': 'bbbf9310-a6db-4d77-80bc-df8acf10ed4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:31:18 compute-2 nova_compute[226829]: 2026-01-31 07:31:18.853 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:18 compute-2 NetworkManager[48999]: <info>  [1769844678.8565] manager: (tapcbc5d20a-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Jan 31 07:31:18 compute-2 nova_compute[226829]: 2026-01-31 07:31:18.856 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:31:18 compute-2 nova_compute[226829]: 2026-01-31 07:31:18.867 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:18 compute-2 nova_compute[226829]: 2026-01-31 07:31:18.870 226833 INFO os_vif [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:d4:ba,bridge_name='br-int',has_traffic_filtering=True,id=cbc5d20a-d45f-4c15-9dc9-f76e172c59fa,network=Network(c63b50b2-1358-4cfc-8846-5b8785b4f656),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbc5d20a-d4')
Jan 31 07:31:18 compute-2 nova_compute[226829]: 2026-01-31 07:31:18.949 226833 DEBUG nova.virt.libvirt.driver [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:31:18 compute-2 nova_compute[226829]: 2026-01-31 07:31:18.951 226833 DEBUG nova.virt.libvirt.driver [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:31:18 compute-2 nova_compute[226829]: 2026-01-31 07:31:18.952 226833 DEBUG nova.virt.libvirt.driver [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] No VIF found with MAC fa:16:3e:45:d4:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:31:18 compute-2 nova_compute[226829]: 2026-01-31 07:31:18.953 226833 INFO nova.virt.libvirt.driver [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Using config drive
Jan 31 07:31:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:18.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:18 compute-2 nova_compute[226829]: 2026-01-31 07:31:18.988 226833 DEBUG nova.storage.rbd_utils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] rbd image bbbf9310-a6db-4d77-80bc-df8acf10ed4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:31:19 compute-2 ceph-mon[77282]: pgmap v912: 305 pgs: 305 active+clean; 134 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 31 07:31:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:19.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:31:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:19.720 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:31:20 compute-2 nova_compute[226829]: 2026-01-31 07:31:20.111 226833 DEBUG nova.network.neutron [req-36725a78-a78c-4ab8-9b88-eba4a9e78349 req-84e1c3d9-61ca-48cc-b909-1035bbdf502b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Updated VIF entry in instance network info cache for port cbc5d20a-d45f-4c15-9dc9-f76e172c59fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:31:20 compute-2 nova_compute[226829]: 2026-01-31 07:31:20.112 226833 DEBUG nova.network.neutron [req-36725a78-a78c-4ab8-9b88-eba4a9e78349 req-84e1c3d9-61ca-48cc-b909-1035bbdf502b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Updating instance_info_cache with network_info: [{"id": "cbc5d20a-d45f-4c15-9dc9-f76e172c59fa", "address": "fa:16:3e:45:d4:ba", "network": {"id": "c63b50b2-1358-4cfc-8846-5b8785b4f656", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-749344601-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44b45fc9f5584f0ca482be3aa129958e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbc5d20a-d4", "ovs_interfaceid": "cbc5d20a-d45f-4c15-9dc9-f76e172c59fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:31:20 compute-2 nova_compute[226829]: 2026-01-31 07:31:20.158 226833 DEBUG oslo_concurrency.lockutils [req-36725a78-a78c-4ab8-9b88-eba4a9e78349 req-84e1c3d9-61ca-48cc-b909-1035bbdf502b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-bbbf9310-a6db-4d77-80bc-df8acf10ed4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:31:20 compute-2 nova_compute[226829]: 2026-01-31 07:31:20.175 226833 INFO nova.virt.libvirt.driver [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Creating config drive at /var/lib/nova/instances/bbbf9310-a6db-4d77-80bc-df8acf10ed4f/disk.config
Jan 31 07:31:20 compute-2 nova_compute[226829]: 2026-01-31 07:31:20.184 226833 DEBUG oslo_concurrency.processutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bbbf9310-a6db-4d77-80bc-df8acf10ed4f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmptd3svxos execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:31:20 compute-2 podman[230230]: 2026-01-31 07:31:20.223010368 +0000 UTC m=+0.092142113 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 31 07:31:20 compute-2 nova_compute[226829]: 2026-01-31 07:31:20.318 226833 DEBUG oslo_concurrency.processutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bbbf9310-a6db-4d77-80bc-df8acf10ed4f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmptd3svxos" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:31:20 compute-2 nova_compute[226829]: 2026-01-31 07:31:20.349 226833 DEBUG nova.storage.rbd_utils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] rbd image bbbf9310-a6db-4d77-80bc-df8acf10ed4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:31:20 compute-2 nova_compute[226829]: 2026-01-31 07:31:20.352 226833 DEBUG oslo_concurrency.processutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bbbf9310-a6db-4d77-80bc-df8acf10ed4f/disk.config bbbf9310-a6db-4d77-80bc-df8acf10ed4f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:31:20 compute-2 nova_compute[226829]: 2026-01-31 07:31:20.515 226833 DEBUG oslo_concurrency.processutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bbbf9310-a6db-4d77-80bc-df8acf10ed4f/disk.config bbbf9310-a6db-4d77-80bc-df8acf10ed4f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:31:20 compute-2 nova_compute[226829]: 2026-01-31 07:31:20.517 226833 INFO nova.virt.libvirt.driver [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Deleting local config drive /var/lib/nova/instances/bbbf9310-a6db-4d77-80bc-df8acf10ed4f/disk.config because it was imported into RBD.
Jan 31 07:31:20 compute-2 systemd[1]: Starting libvirt secret daemon...
Jan 31 07:31:20 compute-2 systemd[1]: Started libvirt secret daemon.
Jan 31 07:31:20 compute-2 kernel: tun: Universal TUN/TAP device driver, 1.6
Jan 31 07:31:20 compute-2 kernel: tapcbc5d20a-d4: entered promiscuous mode
Jan 31 07:31:20 compute-2 ovn_controller[133834]: 2026-01-31T07:31:20Z|00027|binding|INFO|Claiming lport cbc5d20a-d45f-4c15-9dc9-f76e172c59fa for this chassis.
Jan 31 07:31:20 compute-2 ovn_controller[133834]: 2026-01-31T07:31:20Z|00028|binding|INFO|cbc5d20a-d45f-4c15-9dc9-f76e172c59fa: Claiming fa:16:3e:45:d4:ba 10.100.0.10
Jan 31 07:31:20 compute-2 NetworkManager[48999]: <info>  [1769844680.6239] manager: (tapcbc5d20a-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Jan 31 07:31:20 compute-2 nova_compute[226829]: 2026-01-31 07:31:20.624 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:20 compute-2 systemd-udevd[230331]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:31:20 compute-2 NetworkManager[48999]: <info>  [1769844680.6846] device (tapcbc5d20a-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:31:20 compute-2 nova_compute[226829]: 2026-01-31 07:31:20.682 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:20 compute-2 NetworkManager[48999]: <info>  [1769844680.6862] device (tapcbc5d20a-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:31:20 compute-2 ovn_controller[133834]: 2026-01-31T07:31:20Z|00029|binding|INFO|Setting lport cbc5d20a-d45f-4c15-9dc9-f76e172c59fa ovn-installed in OVS
Jan 31 07:31:20 compute-2 nova_compute[226829]: 2026-01-31 07:31:20.687 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:20 compute-2 systemd-machined[195142]: New machine qemu-1-instance-00000004.
Jan 31 07:31:20 compute-2 systemd[1]: Started Virtual Machine qemu-1-instance-00000004.
Jan 31 07:31:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:20.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:21 compute-2 nova_compute[226829]: 2026-01-31 07:31:21.151 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769844681.1510217, bbbf9310-a6db-4d77-80bc-df8acf10ed4f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:31:21 compute-2 nova_compute[226829]: 2026-01-31 07:31:21.153 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] VM Started (Lifecycle Event)
Jan 31 07:31:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:21.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:21 compute-2 ceph-mon[77282]: pgmap v913: 305 pgs: 305 active+clean; 134 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Jan 31 07:31:22 compute-2 ovn_controller[133834]: 2026-01-31T07:31:22Z|00030|binding|INFO|Setting lport cbc5d20a-d45f-4c15-9dc9-f76e172c59fa up in Southbound
Jan 31 07:31:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:22.805 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:d4:ba 10.100.0.10'], port_security=['fa:16:3e:45:d4:ba 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'bbbf9310-a6db-4d77-80bc-df8acf10ed4f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c63b50b2-1358-4cfc-8846-5b8785b4f656', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '44b45fc9f5584f0ca482be3aa129958e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'afa316ed-24f5-4171-ab84-996dd120cd11', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a2fa1a5-5047-42e0-b079-f093555fa913, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=cbc5d20a-d45f-4c15-9dc9-f76e172c59fa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:31:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:22.807 143841 INFO neutron.agent.ovn.metadata.agent [-] Port cbc5d20a-d45f-4c15-9dc9-f76e172c59fa in datapath c63b50b2-1358-4cfc-8846-5b8785b4f656 bound to our chassis
Jan 31 07:31:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:22.813 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c63b50b2-1358-4cfc-8846-5b8785b4f656
Jan 31 07:31:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:22.815 143841 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpyu3wkd7t/privsep.sock']
Jan 31 07:31:22 compute-2 nova_compute[226829]: 2026-01-31 07:31:22.867 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:31:22 compute-2 nova_compute[226829]: 2026-01-31 07:31:22.873 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769844681.1512835, bbbf9310-a6db-4d77-80bc-df8acf10ed4f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:31:22 compute-2 nova_compute[226829]: 2026-01-31 07:31:22.873 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] VM Paused (Lifecycle Event)
Jan 31 07:31:22 compute-2 nova_compute[226829]: 2026-01-31 07:31:22.926 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:31:22 compute-2 nova_compute[226829]: 2026-01-31 07:31:22.930 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:31:22 compute-2 nova_compute[226829]: 2026-01-31 07:31:22.972 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:31:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:31:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:22.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:31:23 compute-2 nova_compute[226829]: 2026-01-31 07:31:23.008 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:23.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:23.469 143841 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 31 07:31:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:23.470 143841 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpyu3wkd7t/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 31 07:31:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:23.364 230393 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 31 07:31:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:23.368 230393 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 31 07:31:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:23.371 230393 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Jan 31 07:31:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:23.371 230393 INFO oslo.privsep.daemon [-] privsep daemon running as pid 230393
Jan 31 07:31:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:23.475 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8195e7e8-829f-42c2-9297-68c5214daa0b]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:23 compute-2 nova_compute[226829]: 2026-01-31 07:31:23.550 226833 DEBUG nova.compute.manager [req-9dbc4f77-cbf3-494c-8e1d-c06eda1336ff req-a41a8df7-8f3e-4a4b-8906-0b25e5ac830b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Received event network-vif-plugged-cbc5d20a-d45f-4c15-9dc9-f76e172c59fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:31:23 compute-2 nova_compute[226829]: 2026-01-31 07:31:23.550 226833 DEBUG oslo_concurrency.lockutils [req-9dbc4f77-cbf3-494c-8e1d-c06eda1336ff req-a41a8df7-8f3e-4a4b-8906-0b25e5ac830b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "bbbf9310-a6db-4d77-80bc-df8acf10ed4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:31:23 compute-2 nova_compute[226829]: 2026-01-31 07:31:23.551 226833 DEBUG oslo_concurrency.lockutils [req-9dbc4f77-cbf3-494c-8e1d-c06eda1336ff req-a41a8df7-8f3e-4a4b-8906-0b25e5ac830b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bbbf9310-a6db-4d77-80bc-df8acf10ed4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:31:23 compute-2 nova_compute[226829]: 2026-01-31 07:31:23.551 226833 DEBUG oslo_concurrency.lockutils [req-9dbc4f77-cbf3-494c-8e1d-c06eda1336ff req-a41a8df7-8f3e-4a4b-8906-0b25e5ac830b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bbbf9310-a6db-4d77-80bc-df8acf10ed4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:31:23 compute-2 nova_compute[226829]: 2026-01-31 07:31:23.566 226833 DEBUG nova.compute.manager [req-9dbc4f77-cbf3-494c-8e1d-c06eda1336ff req-a41a8df7-8f3e-4a4b-8906-0b25e5ac830b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Processing event network-vif-plugged-cbc5d20a-d45f-4c15-9dc9-f76e172c59fa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:31:23 compute-2 nova_compute[226829]: 2026-01-31 07:31:23.568 226833 DEBUG nova.compute.manager [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:31:23 compute-2 nova_compute[226829]: 2026-01-31 07:31:23.572 226833 DEBUG nova.virt.libvirt.driver [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:31:23 compute-2 nova_compute[226829]: 2026-01-31 07:31:23.587 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769844683.5870605, bbbf9310-a6db-4d77-80bc-df8acf10ed4f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:31:23 compute-2 nova_compute[226829]: 2026-01-31 07:31:23.588 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] VM Resumed (Lifecycle Event)
Jan 31 07:31:23 compute-2 nova_compute[226829]: 2026-01-31 07:31:23.593 226833 INFO nova.virt.libvirt.driver [-] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Instance spawned successfully.
Jan 31 07:31:23 compute-2 nova_compute[226829]: 2026-01-31 07:31:23.593 226833 DEBUG nova.virt.libvirt.driver [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:31:23 compute-2 nova_compute[226829]: 2026-01-31 07:31:23.611 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:31:23 compute-2 nova_compute[226829]: 2026-01-31 07:31:23.616 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:31:23 compute-2 nova_compute[226829]: 2026-01-31 07:31:23.629 226833 DEBUG nova.virt.libvirt.driver [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:31:23 compute-2 nova_compute[226829]: 2026-01-31 07:31:23.630 226833 DEBUG nova.virt.libvirt.driver [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:31:23 compute-2 nova_compute[226829]: 2026-01-31 07:31:23.631 226833 DEBUG nova.virt.libvirt.driver [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:31:23 compute-2 nova_compute[226829]: 2026-01-31 07:31:23.632 226833 DEBUG nova.virt.libvirt.driver [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:31:23 compute-2 nova_compute[226829]: 2026-01-31 07:31:23.633 226833 DEBUG nova.virt.libvirt.driver [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:31:23 compute-2 nova_compute[226829]: 2026-01-31 07:31:23.633 226833 DEBUG nova.virt.libvirt.driver [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:31:23 compute-2 nova_compute[226829]: 2026-01-31 07:31:23.648 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:31:23 compute-2 nova_compute[226829]: 2026-01-31 07:31:23.720 226833 INFO nova.compute.manager [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Took 14.97 seconds to spawn the instance on the hypervisor.
Jan 31 07:31:23 compute-2 nova_compute[226829]: 2026-01-31 07:31:23.721 226833 DEBUG nova.compute.manager [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:31:23 compute-2 nova_compute[226829]: 2026-01-31 07:31:23.856 226833 INFO nova.compute.manager [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Took 15.96 seconds to build instance.
Jan 31 07:31:23 compute-2 ceph-mon[77282]: pgmap v914: 305 pgs: 305 active+clean; 134 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 132 op/s
Jan 31 07:31:23 compute-2 nova_compute[226829]: 2026-01-31 07:31:23.879 226833 DEBUG oslo_concurrency.lockutils [None req-c6f2b3ea-7b1c-45cb-9b75-efd27e5d9df3 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "bbbf9310-a6db-4d77-80bc-df8acf10ed4f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:31:23 compute-2 nova_compute[226829]: 2026-01-31 07:31:23.890 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:24 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:24.031 230393 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:31:24 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:24.031 230393 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:31:24 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:24.031 230393 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:31:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:31:24 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:24.705 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9ba96eb2-34e1-47e6-9f01-4f456c1e600b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:24 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:24.707 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc63b50b2-11 in ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:31:24 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:24.708 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc63b50b2-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:31:24 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:24.708 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2ece7a18-44eb-4514-8450-048f7dcd4737]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:24 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:24.712 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0cc36a3d-626e-4887-9ea0-681bfcf337ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:24 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:24.737 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[5e219011-9830-4610-888d-8762e80aa7df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:24 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:24.749 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3e7e28d9-f874-4eef-ab62-9d34f1692bbb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:24 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:24.751 143841 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp1z3iqxwg/privsep.sock']
Jan 31 07:31:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:31:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:24.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:31:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:25.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:25.473 143841 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 31 07:31:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:25.475 143841 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp1z3iqxwg/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 31 07:31:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:25.327 230408 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 31 07:31:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:25.339 230408 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 31 07:31:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:25.341 230408 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 31 07:31:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:25.342 230408 INFO oslo.privsep.daemon [-] privsep daemon running as pid 230408
Jan 31 07:31:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:25.478 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[8040feeb-db98-4e0e-a261-0711c396775b]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:25 compute-2 nova_compute[226829]: 2026-01-31 07:31:25.898 226833 DEBUG nova.compute.manager [req-827ed3df-da94-4c45-a7e2-6a564c897105 req-71deeb97-2de4-4061-9418-347b730bb4df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Received event network-vif-plugged-cbc5d20a-d45f-4c15-9dc9-f76e172c59fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:31:25 compute-2 nova_compute[226829]: 2026-01-31 07:31:25.898 226833 DEBUG oslo_concurrency.lockutils [req-827ed3df-da94-4c45-a7e2-6a564c897105 req-71deeb97-2de4-4061-9418-347b730bb4df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "bbbf9310-a6db-4d77-80bc-df8acf10ed4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:31:25 compute-2 nova_compute[226829]: 2026-01-31 07:31:25.898 226833 DEBUG oslo_concurrency.lockutils [req-827ed3df-da94-4c45-a7e2-6a564c897105 req-71deeb97-2de4-4061-9418-347b730bb4df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bbbf9310-a6db-4d77-80bc-df8acf10ed4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:31:25 compute-2 nova_compute[226829]: 2026-01-31 07:31:25.899 226833 DEBUG oslo_concurrency.lockutils [req-827ed3df-da94-4c45-a7e2-6a564c897105 req-71deeb97-2de4-4061-9418-347b730bb4df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bbbf9310-a6db-4d77-80bc-df8acf10ed4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:31:25 compute-2 nova_compute[226829]: 2026-01-31 07:31:25.899 226833 DEBUG nova.compute.manager [req-827ed3df-da94-4c45-a7e2-6a564c897105 req-71deeb97-2de4-4061-9418-347b730bb4df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] No waiting events found dispatching network-vif-plugged-cbc5d20a-d45f-4c15-9dc9-f76e172c59fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:31:25 compute-2 nova_compute[226829]: 2026-01-31 07:31:25.899 226833 WARNING nova.compute.manager [req-827ed3df-da94-4c45-a7e2-6a564c897105 req-71deeb97-2de4-4061-9418-347b730bb4df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Received unexpected event network-vif-plugged-cbc5d20a-d45f-4c15-9dc9-f76e172c59fa for instance with vm_state active and task_state None.
Jan 31 07:31:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:25.953 230408 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:31:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:25.955 230408 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:31:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:25.956 230408 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:31:26 compute-2 ceph-mon[77282]: pgmap v915: 305 pgs: 305 active+clean; 134 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 134 op/s
Jan 31 07:31:26 compute-2 sudo[230413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:31:26 compute-2 sudo[230413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:26 compute-2 sudo[230413]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:26 compute-2 sudo[230438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:31:26 compute-2 sudo[230438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:26 compute-2 sudo[230438]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:26.545 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[faba1932-050a-4ae3-b57d-c1e252bc8abb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:26 compute-2 NetworkManager[48999]: <info>  [1769844686.5767] manager: (tapc63b50b2-10): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:26.575 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[383f3664-46d9-45fc-81cd-5290d118e132]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:26 compute-2 nova_compute[226829]: 2026-01-31 07:31:26.587 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:26 compute-2 NetworkManager[48999]: <info>  [1769844686.5884] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/26)
Jan 31 07:31:26 compute-2 NetworkManager[48999]: <info>  [1769844686.5890] device (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:31:26 compute-2 NetworkManager[48999]: <warn>  [1769844686.5892] device (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 07:31:26 compute-2 NetworkManager[48999]: <info>  [1769844686.5903] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/27)
Jan 31 07:31:26 compute-2 NetworkManager[48999]: <info>  [1769844686.5908] device (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:31:26 compute-2 NetworkManager[48999]: <warn>  [1769844686.5908] device (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 07:31:26 compute-2 NetworkManager[48999]: <info>  [1769844686.5921] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Jan 31 07:31:26 compute-2 NetworkManager[48999]: <info>  [1769844686.5929] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Jan 31 07:31:26 compute-2 NetworkManager[48999]: <info>  [1769844686.5934] device (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 31 07:31:26 compute-2 NetworkManager[48999]: <info>  [1769844686.5940] device (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 31 07:31:26 compute-2 systemd-udevd[230470]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:26.615 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[687c4bc9-f03c-4e89-9a6c-06126cecb904]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:26.620 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[e6fa713d-5c84-4578-b68e-991be3567224]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:26 compute-2 NetworkManager[48999]: <info>  [1769844686.6355] device (tapc63b50b2-10): carrier: link connected
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:26.638 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[3e68fe0a-0cfb-4781-a0ae-11c848ee7557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:26.651 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7386bdd6-15bb-4ce4-bacd-d8fa94aec860]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc63b50b2-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:a0:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493910, 'reachable_time': 23366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 230488, 'error': None, 'target': 'ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:26.665 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7960a801-6606-456c-b148-b3b626866e8b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8e:a0ec'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493910, 'tstamp': 493910}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 230489, 'error': None, 'target': 'ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:26 compute-2 nova_compute[226829]: 2026-01-31 07:31:26.677 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:26.680 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e8cc4dce-3d3b-4ce5-9b8d-07cf166053fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc63b50b2-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:a0:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493910, 'reachable_time': 23366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 230490, 'error': None, 'target': 'ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:26 compute-2 nova_compute[226829]: 2026-01-31 07:31:26.711 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:26.714 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ef85f515-993b-4aaa-9e68-63cd98a95fb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:26.750 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[104788ba-c2d8-4d4f-ac12-c69f8948a301]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:26.752 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc63b50b2-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:26.753 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:26.753 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc63b50b2-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:31:26 compute-2 nova_compute[226829]: 2026-01-31 07:31:26.755 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:26 compute-2 kernel: tapc63b50b2-10: entered promiscuous mode
Jan 31 07:31:26 compute-2 NetworkManager[48999]: <info>  [1769844686.7564] manager: (tapc63b50b2-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Jan 31 07:31:26 compute-2 nova_compute[226829]: 2026-01-31 07:31:26.757 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:26.761 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc63b50b2-10, col_values=(('external_ids', {'iface-id': '87211dc4-a75d-4ebb-a1af-acc15276f8a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:31:26 compute-2 nova_compute[226829]: 2026-01-31 07:31:26.762 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:26 compute-2 ovn_controller[133834]: 2026-01-31T07:31:26Z|00031|binding|INFO|Releasing lport 87211dc4-a75d-4ebb-a1af-acc15276f8a6 from this chassis (sb_readonly=0)
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:26.766 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c63b50b2-1358-4cfc-8846-5b8785b4f656.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c63b50b2-1358-4cfc-8846-5b8785b4f656.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:31:26 compute-2 nova_compute[226829]: 2026-01-31 07:31:26.766 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:26.767 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[944ec070-040c-41b6-a808-3271458e8263]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:26.769 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]: global
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-c63b50b2-1358-4cfc-8846-5b8785b4f656
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/c63b50b2-1358-4cfc-8846-5b8785b4f656.pid.haproxy
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID c63b50b2-1358-4cfc-8846-5b8785b4f656
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:31:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:26.770 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656', 'env', 'PROCESS_TAG=haproxy-c63b50b2-1358-4cfc-8846-5b8785b4f656', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c63b50b2-1358-4cfc-8846-5b8785b4f656.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:31:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:26.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:27 compute-2 nova_compute[226829]: 2026-01-31 07:31:27.023 226833 DEBUG nova.compute.manager [req-546a4129-e859-46c4-b386-0da70255fc86 req-b505a3ac-bdc5-44b3-bdc5-893de3089e52 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Received event network-changed-cbc5d20a-d45f-4c15-9dc9-f76e172c59fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:31:27 compute-2 nova_compute[226829]: 2026-01-31 07:31:27.025 226833 DEBUG nova.compute.manager [req-546a4129-e859-46c4-b386-0da70255fc86 req-b505a3ac-bdc5-44b3-bdc5-893de3089e52 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Refreshing instance network info cache due to event network-changed-cbc5d20a-d45f-4c15-9dc9-f76e172c59fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:31:27 compute-2 nova_compute[226829]: 2026-01-31 07:31:27.025 226833 DEBUG oslo_concurrency.lockutils [req-546a4129-e859-46c4-b386-0da70255fc86 req-b505a3ac-bdc5-44b3-bdc5-893de3089e52 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-bbbf9310-a6db-4d77-80bc-df8acf10ed4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:31:27 compute-2 nova_compute[226829]: 2026-01-31 07:31:27.026 226833 DEBUG oslo_concurrency.lockutils [req-546a4129-e859-46c4-b386-0da70255fc86 req-b505a3ac-bdc5-44b3-bdc5-893de3089e52 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-bbbf9310-a6db-4d77-80bc-df8acf10ed4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:31:27 compute-2 nova_compute[226829]: 2026-01-31 07:31:27.026 226833 DEBUG nova.network.neutron [req-546a4129-e859-46c4-b386-0da70255fc86 req-b505a3ac-bdc5-44b3-bdc5-893de3089e52 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Refreshing network info cache for port cbc5d20a-d45f-4c15-9dc9-f76e172c59fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:31:27 compute-2 podman[230519]: 2026-01-31 07:31:27.109710244 +0000 UTC m=+0.060098333 container create 50eb650cce7c8d80ac0354638fa898cf16d862bcc06b49f35382a4719ce65b3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 07:31:27 compute-2 systemd[1]: Started libpod-conmon-50eb650cce7c8d80ac0354638fa898cf16d862bcc06b49f35382a4719ce65b3f.scope.
Jan 31 07:31:27 compute-2 podman[230519]: 2026-01-31 07:31:27.076041759 +0000 UTC m=+0.026429878 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:31:27 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:31:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e3674c72842af02511890942ffba48122ca6bd9e7d71a58bd717c22919fd16a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:31:27 compute-2 podman[230519]: 2026-01-31 07:31:27.188017225 +0000 UTC m=+0.138405344 container init 50eb650cce7c8d80ac0354638fa898cf16d862bcc06b49f35382a4719ce65b3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:31:27 compute-2 podman[230519]: 2026-01-31 07:31:27.193975669 +0000 UTC m=+0.144363748 container start 50eb650cce7c8d80ac0354638fa898cf16d862bcc06b49f35382a4719ce65b3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 07:31:27 compute-2 neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656[230535]: [NOTICE]   (230539) : New worker (230541) forked
Jan 31 07:31:27 compute-2 neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656[230535]: [NOTICE]   (230539) : Loading success.
Jan 31 07:31:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:27.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:28 compute-2 nova_compute[226829]: 2026-01-31 07:31:28.010 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 31 07:31:28 compute-2 ceph-mon[77282]: pgmap v916: 305 pgs: 305 active+clean; 134 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 441 KiB/s wr, 117 op/s
Jan 31 07:31:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/817307480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:31:28 compute-2 nova_compute[226829]: 2026-01-31 07:31:28.818 226833 DEBUG nova.network.neutron [req-546a4129-e859-46c4-b386-0da70255fc86 req-b505a3ac-bdc5-44b3-bdc5-893de3089e52 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Updated VIF entry in instance network info cache for port cbc5d20a-d45f-4c15-9dc9-f76e172c59fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:31:28 compute-2 nova_compute[226829]: 2026-01-31 07:31:28.819 226833 DEBUG nova.network.neutron [req-546a4129-e859-46c4-b386-0da70255fc86 req-b505a3ac-bdc5-44b3-bdc5-893de3089e52 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Updating instance_info_cache with network_info: [{"id": "cbc5d20a-d45f-4c15-9dc9-f76e172c59fa", "address": "fa:16:3e:45:d4:ba", "network": {"id": "c63b50b2-1358-4cfc-8846-5b8785b4f656", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-749344601-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44b45fc9f5584f0ca482be3aa129958e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbc5d20a-d4", "ovs_interfaceid": "cbc5d20a-d45f-4c15-9dc9-f76e172c59fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:31:28 compute-2 nova_compute[226829]: 2026-01-31 07:31:28.857 226833 DEBUG oslo_concurrency.lockutils [req-546a4129-e859-46c4-b386-0da70255fc86 req-b505a3ac-bdc5-44b3-bdc5-893de3089e52 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-bbbf9310-a6db-4d77-80bc-df8acf10ed4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:31:28 compute-2 nova_compute[226829]: 2026-01-31 07:31:28.892 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:28.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:29 compute-2 podman[230550]: 2026-01-31 07:31:29.193863134 +0000 UTC m=+0.083510755 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 07:31:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:29.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:31:30 compute-2 ceph-mon[77282]: pgmap v917: 305 pgs: 305 active+clean; 134 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 15 KiB/s wr, 107 op/s
Jan 31 07:31:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:31.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:31.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:32 compute-2 ceph-mon[77282]: pgmap v918: 305 pgs: 305 active+clean; 171 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 187 op/s
Jan 31 07:31:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:33.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:33 compute-2 nova_compute[226829]: 2026-01-31 07:31:33.013 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:33.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:33 compute-2 ceph-mon[77282]: pgmap v919: 305 pgs: 305 active+clean; 189 MiB data, 304 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 168 op/s
Jan 31 07:31:33 compute-2 nova_compute[226829]: 2026-01-31 07:31:33.893 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/443679632' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:31:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1935318245' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:31:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:31:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:35.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:35.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:35 compute-2 ceph-mon[77282]: pgmap v920: 305 pgs: 305 active+clean; 213 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Jan 31 07:31:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:37.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:31:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:37.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:31:37 compute-2 ovn_controller[133834]: 2026-01-31T07:31:37Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:45:d4:ba 10.100.0.10
Jan 31 07:31:37 compute-2 ovn_controller[133834]: 2026-01-31T07:31:37Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:45:d4:ba 10.100.0.10
Jan 31 07:31:37 compute-2 ceph-mon[77282]: pgmap v921: 305 pgs: 305 active+clean; 217 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.3 MiB/s wr, 154 op/s
Jan 31 07:31:38 compute-2 nova_compute[226829]: 2026-01-31 07:31:38.045 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:38 compute-2 nova_compute[226829]: 2026-01-31 07:31:38.896 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:39.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:39.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:39 compute-2 ceph-mon[77282]: pgmap v922: 305 pgs: 305 active+clean; 227 MiB data, 343 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.9 MiB/s wr, 147 op/s
Jan 31 07:31:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:31:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:41.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:41.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:41 compute-2 ceph-mon[77282]: pgmap v923: 305 pgs: 305 active+clean; 238 MiB data, 357 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.0 MiB/s wr, 198 op/s
Jan 31 07:31:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:31:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:43.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:31:43 compute-2 nova_compute[226829]: 2026-01-31 07:31:43.083 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:43.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:43 compute-2 ceph-mon[77282]: pgmap v924: 305 pgs: 305 active+clean; 247 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 542 KiB/s rd, 4.2 MiB/s wr, 116 op/s
Jan 31 07:31:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3329455590' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:31:43 compute-2 nova_compute[226829]: 2026-01-31 07:31:43.899 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:31:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:31:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:45.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:31:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:45.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:45 compute-2 nova_compute[226829]: 2026-01-31 07:31:45.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:31:45 compute-2 ceph-mon[77282]: pgmap v925: 305 pgs: 305 active+clean; 247 MiB data, 362 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.4 MiB/s wr, 160 op/s
Jan 31 07:31:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2077544154' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:31:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2077544154' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:31:46 compute-2 nova_compute[226829]: 2026-01-31 07:31:46.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:31:46 compute-2 sudo[230580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:31:46 compute-2 sudo[230580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:46 compute-2 sudo[230580]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:46 compute-2 sudo[230605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:31:46 compute-2 sudo[230605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:46 compute-2 sudo[230605]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:46 compute-2 nova_compute[226829]: 2026-01-31 07:31:46.765 226833 DEBUG oslo_concurrency.lockutils [None req-f96d7495-c48d-4555-b7d1-06c2c01eda01 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Acquiring lock "bbbf9310-a6db-4d77-80bc-df8acf10ed4f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:31:46 compute-2 nova_compute[226829]: 2026-01-31 07:31:46.766 226833 DEBUG oslo_concurrency.lockutils [None req-f96d7495-c48d-4555-b7d1-06c2c01eda01 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "bbbf9310-a6db-4d77-80bc-df8acf10ed4f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:31:46 compute-2 nova_compute[226829]: 2026-01-31 07:31:46.766 226833 DEBUG oslo_concurrency.lockutils [None req-f96d7495-c48d-4555-b7d1-06c2c01eda01 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Acquiring lock "bbbf9310-a6db-4d77-80bc-df8acf10ed4f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:31:46 compute-2 nova_compute[226829]: 2026-01-31 07:31:46.767 226833 DEBUG oslo_concurrency.lockutils [None req-f96d7495-c48d-4555-b7d1-06c2c01eda01 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "bbbf9310-a6db-4d77-80bc-df8acf10ed4f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:31:46 compute-2 nova_compute[226829]: 2026-01-31 07:31:46.767 226833 DEBUG oslo_concurrency.lockutils [None req-f96d7495-c48d-4555-b7d1-06c2c01eda01 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "bbbf9310-a6db-4d77-80bc-df8acf10ed4f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:31:46 compute-2 nova_compute[226829]: 2026-01-31 07:31:46.769 226833 INFO nova.compute.manager [None req-f96d7495-c48d-4555-b7d1-06c2c01eda01 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Terminating instance
Jan 31 07:31:46 compute-2 nova_compute[226829]: 2026-01-31 07:31:46.770 226833 DEBUG nova.compute.manager [None req-f96d7495-c48d-4555-b7d1-06c2c01eda01 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:31:46 compute-2 kernel: tapcbc5d20a-d4 (unregistering): left promiscuous mode
Jan 31 07:31:46 compute-2 NetworkManager[48999]: <info>  [1769844706.8308] device (tapcbc5d20a-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:31:46 compute-2 ovn_controller[133834]: 2026-01-31T07:31:46Z|00032|binding|INFO|Releasing lport cbc5d20a-d45f-4c15-9dc9-f76e172c59fa from this chassis (sb_readonly=0)
Jan 31 07:31:46 compute-2 ovn_controller[133834]: 2026-01-31T07:31:46Z|00033|binding|INFO|Setting lport cbc5d20a-d45f-4c15-9dc9-f76e172c59fa down in Southbound
Jan 31 07:31:46 compute-2 nova_compute[226829]: 2026-01-31 07:31:46.868 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:46 compute-2 ovn_controller[133834]: 2026-01-31T07:31:46Z|00034|binding|INFO|Removing iface tapcbc5d20a-d4 ovn-installed in OVS
Jan 31 07:31:46 compute-2 nova_compute[226829]: 2026-01-31 07:31:46.878 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:46 compute-2 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000004.scope: Deactivated successfully.
Jan 31 07:31:46 compute-2 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000004.scope: Consumed 13.948s CPU time.
Jan 31 07:31:46 compute-2 systemd-machined[195142]: Machine qemu-1-instance-00000004 terminated.
Jan 31 07:31:46 compute-2 kernel: tapcbc5d20a-d4: entered promiscuous mode
Jan 31 07:31:46 compute-2 NetworkManager[48999]: <info>  [1769844706.9857] manager: (tapcbc5d20a-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/31)
Jan 31 07:31:46 compute-2 kernel: tapcbc5d20a-d4 (unregistering): left promiscuous mode
Jan 31 07:31:46 compute-2 nova_compute[226829]: 2026-01-31 07:31:46.990 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:46 compute-2 ovn_controller[133834]: 2026-01-31T07:31:46Z|00035|if_status|INFO|Not updating pb chassis for cbc5d20a-d45f-4c15-9dc9-f76e172c59fa now as sb is readonly
Jan 31 07:31:46 compute-2 ovn_controller[133834]: 2026-01-31T07:31:46Z|00036|binding|INFO|Releasing lport cbc5d20a-d45f-4c15-9dc9-f76e172c59fa from this chassis (sb_readonly=1)
Jan 31 07:31:46 compute-2 nova_compute[226829]: 2026-01-31 07:31:46.998 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:46 compute-2 ovn_controller[133834]: 2026-01-31T07:31:46Z|00037|if_status|INFO|Not setting lport cbc5d20a-d45f-4c15-9dc9-f76e172c59fa down as sb is readonly
Jan 31 07:31:47 compute-2 nova_compute[226829]: 2026-01-31 07:31:47.007 226833 INFO nova.virt.libvirt.driver [-] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Instance destroyed successfully.
Jan 31 07:31:47 compute-2 nova_compute[226829]: 2026-01-31 07:31:47.008 226833 DEBUG nova.objects.instance [None req-f96d7495-c48d-4555-b7d1-06c2c01eda01 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lazy-loading 'resources' on Instance uuid bbbf9310-a6db-4d77-80bc-df8acf10ed4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:31:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:47.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:47.030 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:d4:ba 10.100.0.10'], port_security=['fa:16:3e:45:d4:ba 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'bbbf9310-a6db-4d77-80bc-df8acf10ed4f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c63b50b2-1358-4cfc-8846-5b8785b4f656', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '44b45fc9f5584f0ca482be3aa129958e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'afa316ed-24f5-4171-ab84-996dd120cd11', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.207'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a2fa1a5-5047-42e0-b079-f093555fa913, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=cbc5d20a-d45f-4c15-9dc9-f76e172c59fa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:31:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:47.032 143841 INFO neutron.agent.ovn.metadata.agent [-] Port cbc5d20a-d45f-4c15-9dc9-f76e172c59fa in datapath c63b50b2-1358-4cfc-8846-5b8785b4f656 unbound from our chassis
Jan 31 07:31:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:47.036 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c63b50b2-1358-4cfc-8846-5b8785b4f656, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:31:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:47.038 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c9306a94-3ea1-433c-9479-9a96c8eaf46a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:47.039 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656 namespace which is not needed anymore
Jan 31 07:31:47 compute-2 nova_compute[226829]: 2026-01-31 07:31:47.143 226833 DEBUG nova.virt.libvirt.vif [None req-f96d7495-c48d-4555-b7d1-06c2c01eda01 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:31:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1052945082',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1052945082',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1052945082',id=4,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDxTAQcLwGUMVPW0FQMEh2WkiJiP4UW6jyeXpyYN49oJPb9d8Ollm4RLXjbgyBtbDOEel5iqcdq3OwrhOFKp5uzFUPyPJ5dQoEavamFkLA01W1IRHGw1oCDni1Hm+Ki2Jw==',key_name='tempest-keypair-338823368',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:31:23Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='44b45fc9f5584f0ca482be3aa129958e',ramdisk_id='',reservation_id='r-wh7xg7x6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-581936852',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-581936852-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:31:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='504b628b5b8b46f9a0fce37f63d8492e',uuid=bbbf9310-a6db-4d77-80bc-df8acf10ed4f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cbc5d20a-d45f-4c15-9dc9-f76e172c59fa", "address": "fa:16:3e:45:d4:ba", "network": {"id": "c63b50b2-1358-4cfc-8846-5b8785b4f656", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-749344601-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44b45fc9f5584f0ca482be3aa129958e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbc5d20a-d4", "ovs_interfaceid": "cbc5d20a-d45f-4c15-9dc9-f76e172c59fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:31:47 compute-2 nova_compute[226829]: 2026-01-31 07:31:47.143 226833 DEBUG nova.network.os_vif_util [None req-f96d7495-c48d-4555-b7d1-06c2c01eda01 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Converting VIF {"id": "cbc5d20a-d45f-4c15-9dc9-f76e172c59fa", "address": "fa:16:3e:45:d4:ba", "network": {"id": "c63b50b2-1358-4cfc-8846-5b8785b4f656", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-749344601-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44b45fc9f5584f0ca482be3aa129958e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbc5d20a-d4", "ovs_interfaceid": "cbc5d20a-d45f-4c15-9dc9-f76e172c59fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:31:47 compute-2 nova_compute[226829]: 2026-01-31 07:31:47.145 226833 DEBUG nova.network.os_vif_util [None req-f96d7495-c48d-4555-b7d1-06c2c01eda01 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:45:d4:ba,bridge_name='br-int',has_traffic_filtering=True,id=cbc5d20a-d45f-4c15-9dc9-f76e172c59fa,network=Network(c63b50b2-1358-4cfc-8846-5b8785b4f656),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbc5d20a-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:31:47 compute-2 nova_compute[226829]: 2026-01-31 07:31:47.145 226833 DEBUG os_vif [None req-f96d7495-c48d-4555-b7d1-06c2c01eda01 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:d4:ba,bridge_name='br-int',has_traffic_filtering=True,id=cbc5d20a-d45f-4c15-9dc9-f76e172c59fa,network=Network(c63b50b2-1358-4cfc-8846-5b8785b4f656),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbc5d20a-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:31:47 compute-2 nova_compute[226829]: 2026-01-31 07:31:47.151 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:47 compute-2 nova_compute[226829]: 2026-01-31 07:31:47.152 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcbc5d20a-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:31:47 compute-2 nova_compute[226829]: 2026-01-31 07:31:47.155 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:47 compute-2 nova_compute[226829]: 2026-01-31 07:31:47.157 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:31:47 compute-2 nova_compute[226829]: 2026-01-31 07:31:47.166 226833 INFO os_vif [None req-f96d7495-c48d-4555-b7d1-06c2c01eda01 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:d4:ba,bridge_name='br-int',has_traffic_filtering=True,id=cbc5d20a-d45f-4c15-9dc9-f76e172c59fa,network=Network(c63b50b2-1358-4cfc-8846-5b8785b4f656),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbc5d20a-d4')
Jan 31 07:31:47 compute-2 neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656[230535]: [NOTICE]   (230539) : haproxy version is 2.8.14-c23fe91
Jan 31 07:31:47 compute-2 neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656[230535]: [NOTICE]   (230539) : path to executable is /usr/sbin/haproxy
Jan 31 07:31:47 compute-2 neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656[230535]: [WARNING]  (230539) : Exiting Master process...
Jan 31 07:31:47 compute-2 neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656[230535]: [ALERT]    (230539) : Current worker (230541) exited with code 143 (Terminated)
Jan 31 07:31:47 compute-2 neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656[230535]: [WARNING]  (230539) : All workers exited. Exiting... (0)
Jan 31 07:31:47 compute-2 systemd[1]: libpod-50eb650cce7c8d80ac0354638fa898cf16d862bcc06b49f35382a4719ce65b3f.scope: Deactivated successfully.
Jan 31 07:31:47 compute-2 podman[230659]: 2026-01-31 07:31:47.207706896 +0000 UTC m=+0.055807544 container died 50eb650cce7c8d80ac0354638fa898cf16d862bcc06b49f35382a4719ce65b3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 07:31:47 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-50eb650cce7c8d80ac0354638fa898cf16d862bcc06b49f35382a4719ce65b3f-userdata-shm.mount: Deactivated successfully.
Jan 31 07:31:47 compute-2 systemd[1]: var-lib-containers-storage-overlay-6e3674c72842af02511890942ffba48122ca6bd9e7d71a58bd717c22919fd16a-merged.mount: Deactivated successfully.
Jan 31 07:31:47 compute-2 podman[230659]: 2026-01-31 07:31:47.2566168 +0000 UTC m=+0.104717458 container cleanup 50eb650cce7c8d80ac0354638fa898cf16d862bcc06b49f35382a4719ce65b3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 07:31:47 compute-2 systemd[1]: libpod-conmon-50eb650cce7c8d80ac0354638fa898cf16d862bcc06b49f35382a4719ce65b3f.scope: Deactivated successfully.
Jan 31 07:31:47 compute-2 podman[230704]: 2026-01-31 07:31:47.311091727 +0000 UTC m=+0.037429860 container remove 50eb650cce7c8d80ac0354638fa898cf16d862bcc06b49f35382a4719ce65b3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:31:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:47.316 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f9904b02-a2ca-43ba-89e7-50b2dc4c41a1]: (4, ('Sat Jan 31 07:31:47 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656 (50eb650cce7c8d80ac0354638fa898cf16d862bcc06b49f35382a4719ce65b3f)\n50eb650cce7c8d80ac0354638fa898cf16d862bcc06b49f35382a4719ce65b3f\nSat Jan 31 07:31:47 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656 (50eb650cce7c8d80ac0354638fa898cf16d862bcc06b49f35382a4719ce65b3f)\n50eb650cce7c8d80ac0354638fa898cf16d862bcc06b49f35382a4719ce65b3f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:47.317 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[aeac2511-b516-45cf-8e70-08546ae67193]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:47.318 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc63b50b2-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:31:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:31:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:47.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:31:47 compute-2 nova_compute[226829]: 2026-01-31 07:31:47.320 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:47 compute-2 kernel: tapc63b50b2-10: left promiscuous mode
Jan 31 07:31:47 compute-2 nova_compute[226829]: 2026-01-31 07:31:47.329 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:47.332 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1f0fef19-9ba7-4c63-9d7e-c3cb7e3af583]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:47.344 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[756a8ca9-1660-4d9a-981a-2c82df3b476f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:47.346 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1dd9b92f-9df6-4dd1-9e11-c8ed16d58bea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:47.359 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3fd2bdaa-21c9-4bbb-842e-808e52524acd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493901, 'reachable_time': 27913, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 230722, 'error': None, 'target': 'ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:47 compute-2 systemd[1]: run-netns-ovnmeta\x2dc63b50b2\x2d1358\x2d4cfc\x2d8846\x2d5b8785b4f656.mount: Deactivated successfully.
Jan 31 07:31:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:47.375 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:31:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:31:47.379 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[4f5b9bbf-fe85-4faa-924e-a9806170071e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:47 compute-2 ceph-mon[77282]: pgmap v926: 305 pgs: 305 active+clean; 247 MiB data, 362 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Jan 31 07:31:47 compute-2 nova_compute[226829]: 2026-01-31 07:31:47.662 226833 INFO nova.virt.libvirt.driver [None req-f96d7495-c48d-4555-b7d1-06c2c01eda01 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Deleting instance files /var/lib/nova/instances/bbbf9310-a6db-4d77-80bc-df8acf10ed4f_del
Jan 31 07:31:47 compute-2 nova_compute[226829]: 2026-01-31 07:31:47.663 226833 INFO nova.virt.libvirt.driver [None req-f96d7495-c48d-4555-b7d1-06c2c01eda01 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Deletion of /var/lib/nova/instances/bbbf9310-a6db-4d77-80bc-df8acf10ed4f_del complete
Jan 31 07:31:48 compute-2 nova_compute[226829]: 2026-01-31 07:31:48.118 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:31:48.400504) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844708400597, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1816, "num_deletes": 251, "total_data_size": 4117500, "memory_usage": 4179000, "flush_reason": "Manual Compaction"}
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844708410567, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 1666517, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20252, "largest_seqno": 22063, "table_properties": {"data_size": 1660650, "index_size": 2942, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14975, "raw_average_key_size": 20, "raw_value_size": 1647758, "raw_average_value_size": 2282, "num_data_blocks": 131, "num_entries": 722, "num_filter_entries": 722, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844559, "oldest_key_time": 1769844559, "file_creation_time": 1769844708, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 10154 microseconds, and 4878 cpu microseconds.
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:31:48.410650) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 1666517 bytes OK
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:31:48.410673) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:31:48.411932) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:31:48.411957) EVENT_LOG_v1 {"time_micros": 1769844708411948, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:31:48.411984) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 4109265, prev total WAL file size 4109265, number of live WAL files 2.
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:31:48.413311) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353037' seq:72057594037927935, type:22 .. '6D67727374617400373539' seq:0, type:0; will stop at (end)
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(1627KB)], [39(9430KB)]
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844708413403, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 11323496, "oldest_snapshot_seqno": -1}
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 4764 keys, 8492847 bytes, temperature: kUnknown
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844708477480, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 8492847, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8460935, "index_size": 18874, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11973, "raw_key_size": 118484, "raw_average_key_size": 24, "raw_value_size": 8374621, "raw_average_value_size": 1757, "num_data_blocks": 781, "num_entries": 4764, "num_filter_entries": 4764, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769844708, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:31:48.477762) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8492847 bytes
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:31:48.478870) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.5 rd, 132.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 9.2 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(11.9) write-amplify(5.1) OK, records in: 5217, records dropped: 453 output_compression: NoCompression
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:31:48.478891) EVENT_LOG_v1 {"time_micros": 1769844708478881, "job": 22, "event": "compaction_finished", "compaction_time_micros": 64173, "compaction_time_cpu_micros": 43508, "output_level": 6, "num_output_files": 1, "total_output_size": 8492847, "num_input_records": 5217, "num_output_records": 4764, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844708479219, "job": 22, "event": "table_file_deletion", "file_number": 41}
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844708480127, "job": 22, "event": "table_file_deletion", "file_number": 39}
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:31:48.413148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:31:48.480234) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:31:48.480242) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:31:48.480245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:31:48.480248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:31:48 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:31:48.480251) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:31:48 compute-2 nova_compute[226829]: 2026-01-31 07:31:48.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:31:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:49.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:49 compute-2 nova_compute[226829]: 2026-01-31 07:31:49.232 226833 DEBUG nova.virt.libvirt.host [None req-f96d7495-c48d-4555-b7d1-06c2c01eda01 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Jan 31 07:31:49 compute-2 nova_compute[226829]: 2026-01-31 07:31:49.233 226833 INFO nova.virt.libvirt.host [None req-f96d7495-c48d-4555-b7d1-06c2c01eda01 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] UEFI support detected
Jan 31 07:31:49 compute-2 nova_compute[226829]: 2026-01-31 07:31:49.235 226833 INFO nova.compute.manager [None req-f96d7495-c48d-4555-b7d1-06c2c01eda01 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Took 2.47 seconds to destroy the instance on the hypervisor.
Jan 31 07:31:49 compute-2 nova_compute[226829]: 2026-01-31 07:31:49.236 226833 DEBUG oslo.service.loopingcall [None req-f96d7495-c48d-4555-b7d1-06c2c01eda01 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:31:49 compute-2 nova_compute[226829]: 2026-01-31 07:31:49.236 226833 DEBUG nova.compute.manager [-] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:31:49 compute-2 nova_compute[226829]: 2026-01-31 07:31:49.236 226833 DEBUG nova.network.neutron [-] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:31:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:31:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:49.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:31:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:31:49 compute-2 ceph-mon[77282]: pgmap v927: 305 pgs: 305 active+clean; 233 MiB data, 362 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Jan 31 07:31:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:51.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:51 compute-2 podman[230726]: 2026-01-31 07:31:51.257433137 +0000 UTC m=+0.122781934 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 07:31:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:51.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:51 compute-2 ceph-mon[77282]: pgmap v928: 305 pgs: 305 active+clean; 191 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.2 MiB/s wr, 150 op/s
Jan 31 07:31:52 compute-2 nova_compute[226829]: 2026-01-31 07:31:52.156 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:53.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:53 compute-2 nova_compute[226829]: 2026-01-31 07:31:53.121 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:31:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:53.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:31:53 compute-2 ceph-mon[77282]: pgmap v929: 305 pgs: 305 active+clean; 167 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 102 op/s
Jan 31 07:31:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:31:54 compute-2 nova_compute[226829]: 2026-01-31 07:31:54.507 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:31:54 compute-2 nova_compute[226829]: 2026-01-31 07:31:54.508 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:31:54 compute-2 nova_compute[226829]: 2026-01-31 07:31:54.508 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:31:54 compute-2 nova_compute[226829]: 2026-01-31 07:31:54.869 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 31 07:31:54 compute-2 nova_compute[226829]: 2026-01-31 07:31:54.870 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:31:54 compute-2 nova_compute[226829]: 2026-01-31 07:31:54.870 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:31:54 compute-2 nova_compute[226829]: 2026-01-31 07:31:54.871 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:31:54 compute-2 nova_compute[226829]: 2026-01-31 07:31:54.871 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:31:54 compute-2 nova_compute[226829]: 2026-01-31 07:31:54.871 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:31:54 compute-2 nova_compute[226829]: 2026-01-31 07:31:54.872 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:31:54 compute-2 nova_compute[226829]: 2026-01-31 07:31:54.872 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:31:54 compute-2 nova_compute[226829]: 2026-01-31 07:31:54.948 226833 DEBUG nova.compute.manager [req-54d0996e-b2ae-435f-b12d-c454e28b9bc6 req-940c7d5c-0111-4f85-9d83-05258d644376 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Received event network-vif-unplugged-cbc5d20a-d45f-4c15-9dc9-f76e172c59fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:31:54 compute-2 nova_compute[226829]: 2026-01-31 07:31:54.948 226833 DEBUG oslo_concurrency.lockutils [req-54d0996e-b2ae-435f-b12d-c454e28b9bc6 req-940c7d5c-0111-4f85-9d83-05258d644376 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "bbbf9310-a6db-4d77-80bc-df8acf10ed4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:31:54 compute-2 nova_compute[226829]: 2026-01-31 07:31:54.948 226833 DEBUG oslo_concurrency.lockutils [req-54d0996e-b2ae-435f-b12d-c454e28b9bc6 req-940c7d5c-0111-4f85-9d83-05258d644376 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bbbf9310-a6db-4d77-80bc-df8acf10ed4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:31:54 compute-2 nova_compute[226829]: 2026-01-31 07:31:54.949 226833 DEBUG oslo_concurrency.lockutils [req-54d0996e-b2ae-435f-b12d-c454e28b9bc6 req-940c7d5c-0111-4f85-9d83-05258d644376 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bbbf9310-a6db-4d77-80bc-df8acf10ed4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:31:54 compute-2 nova_compute[226829]: 2026-01-31 07:31:54.949 226833 DEBUG nova.compute.manager [req-54d0996e-b2ae-435f-b12d-c454e28b9bc6 req-940c7d5c-0111-4f85-9d83-05258d644376 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] No waiting events found dispatching network-vif-unplugged-cbc5d20a-d45f-4c15-9dc9-f76e172c59fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:31:54 compute-2 nova_compute[226829]: 2026-01-31 07:31:54.949 226833 DEBUG nova.compute.manager [req-54d0996e-b2ae-435f-b12d-c454e28b9bc6 req-940c7d5c-0111-4f85-9d83-05258d644376 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Received event network-vif-unplugged-cbc5d20a-d45f-4c15-9dc9-f76e172c59fa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:31:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:31:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:55.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:31:55 compute-2 nova_compute[226829]: 2026-01-31 07:31:55.106 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:31:55 compute-2 nova_compute[226829]: 2026-01-31 07:31:55.107 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:31:55 compute-2 nova_compute[226829]: 2026-01-31 07:31:55.107 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:31:55 compute-2 nova_compute[226829]: 2026-01-31 07:31:55.108 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:31:55 compute-2 nova_compute[226829]: 2026-01-31 07:31:55.108 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:31:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:31:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:55.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:31:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:31:55 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2422846504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:31:55 compute-2 nova_compute[226829]: 2026-01-31 07:31:55.539 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:31:55 compute-2 nova_compute[226829]: 2026-01-31 07:31:55.692 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:31:55 compute-2 nova_compute[226829]: 2026-01-31 07:31:55.693 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4979MB free_disk=20.897987365722656GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:31:55 compute-2 nova_compute[226829]: 2026-01-31 07:31:55.693 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:31:55 compute-2 nova_compute[226829]: 2026-01-31 07:31:55.693 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:31:55 compute-2 ceph-mon[77282]: pgmap v930: 305 pgs: 305 active+clean; 195 MiB data, 343 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 151 op/s
Jan 31 07:31:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2422846504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:31:56 compute-2 nova_compute[226829]: 2026-01-31 07:31:56.364 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance bbbf9310-a6db-4d77-80bc-df8acf10ed4f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:31:56 compute-2 nova_compute[226829]: 2026-01-31 07:31:56.365 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:31:56 compute-2 nova_compute[226829]: 2026-01-31 07:31:56.365 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:31:56 compute-2 nova_compute[226829]: 2026-01-31 07:31:56.461 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:31:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:31:56 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/350385416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:31:56 compute-2 nova_compute[226829]: 2026-01-31 07:31:56.893 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:31:56 compute-2 nova_compute[226829]: 2026-01-31 07:31:56.899 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating inventory in ProviderTree for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 07:31:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:57.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:57 compute-2 nova_compute[226829]: 2026-01-31 07:31:57.158 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:31:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:57.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:31:57 compute-2 ceph-mon[77282]: pgmap v931: 305 pgs: 305 active+clean; 200 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 280 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Jan 31 07:31:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/350385416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:31:58 compute-2 nova_compute[226829]: 2026-01-31 07:31:58.181 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:31:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:59.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:31:59 compute-2 nova_compute[226829]: 2026-01-31 07:31:59.115 226833 DEBUG nova.compute.manager [req-819e81b5-410b-4b39-87a5-0c544dd42db6 req-e963f4df-a2e4-43ff-a214-08f3cef265a6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Received event network-vif-plugged-cbc5d20a-d45f-4c15-9dc9-f76e172c59fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:31:59 compute-2 nova_compute[226829]: 2026-01-31 07:31:59.116 226833 DEBUG oslo_concurrency.lockutils [req-819e81b5-410b-4b39-87a5-0c544dd42db6 req-e963f4df-a2e4-43ff-a214-08f3cef265a6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "bbbf9310-a6db-4d77-80bc-df8acf10ed4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:31:59 compute-2 nova_compute[226829]: 2026-01-31 07:31:59.116 226833 DEBUG oslo_concurrency.lockutils [req-819e81b5-410b-4b39-87a5-0c544dd42db6 req-e963f4df-a2e4-43ff-a214-08f3cef265a6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bbbf9310-a6db-4d77-80bc-df8acf10ed4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:31:59 compute-2 nova_compute[226829]: 2026-01-31 07:31:59.116 226833 DEBUG oslo_concurrency.lockutils [req-819e81b5-410b-4b39-87a5-0c544dd42db6 req-e963f4df-a2e4-43ff-a214-08f3cef265a6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bbbf9310-a6db-4d77-80bc-df8acf10ed4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:31:59 compute-2 nova_compute[226829]: 2026-01-31 07:31:59.116 226833 DEBUG nova.compute.manager [req-819e81b5-410b-4b39-87a5-0c544dd42db6 req-e963f4df-a2e4-43ff-a214-08f3cef265a6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] No waiting events found dispatching network-vif-plugged-cbc5d20a-d45f-4c15-9dc9-f76e172c59fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:31:59 compute-2 nova_compute[226829]: 2026-01-31 07:31:59.117 226833 WARNING nova.compute.manager [req-819e81b5-410b-4b39-87a5-0c544dd42db6 req-e963f4df-a2e4-43ff-a214-08f3cef265a6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Received unexpected event network-vif-plugged-cbc5d20a-d45f-4c15-9dc9-f76e172c59fa for instance with vm_state active and task_state deleting.
Jan 31 07:31:59 compute-2 nova_compute[226829]: 2026-01-31 07:31:59.157 226833 ERROR nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [req-d79125f9-ee16-43cc-b580-4b3d216e2574] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-d79125f9-ee16-43cc-b580-4b3d216e2574"}]}
Jan 31 07:31:59 compute-2 nova_compute[226829]: 2026-01-31 07:31:59.174 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing inventories for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 07:31:59 compute-2 nova_compute[226829]: 2026-01-31 07:31:59.190 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating ProviderTree inventory for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 07:31:59 compute-2 nova_compute[226829]: 2026-01-31 07:31:59.190 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating inventory in ProviderTree for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 07:31:59 compute-2 nova_compute[226829]: 2026-01-31 07:31:59.204 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing aggregate associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 07:31:59 compute-2 nova_compute[226829]: 2026-01-31 07:31:59.223 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing trait associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, traits: COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 07:31:59 compute-2 nova_compute[226829]: 2026-01-31 07:31:59.263 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:31:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:31:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.010000271s ======
Jan 31 07:31:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:59.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.010000271s
Jan 31 07:31:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:31:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:31:59 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1371840649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:31:59 compute-2 nova_compute[226829]: 2026-01-31 07:31:59.683 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:31:59 compute-2 nova_compute[226829]: 2026-01-31 07:31:59.686 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating inventory in ProviderTree for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 07:31:59 compute-2 ceph-mon[77282]: pgmap v932: 305 pgs: 305 active+clean; 200 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 280 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Jan 31 07:31:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1371840649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:00 compute-2 podman[230826]: 2026-01-31 07:32:00.152676424 +0000 UTC m=+0.041951424 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 07:32:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:01.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:01.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:01 compute-2 ceph-mon[77282]: pgmap v933: 305 pgs: 305 active+clean; 200 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Jan 31 07:32:02 compute-2 nova_compute[226829]: 2026-01-31 07:32:02.006 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844707.0048242, bbbf9310-a6db-4d77-80bc-df8acf10ed4f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:32:02 compute-2 nova_compute[226829]: 2026-01-31 07:32:02.007 226833 INFO nova.compute.manager [-] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] VM Stopped (Lifecycle Event)
Jan 31 07:32:02 compute-2 nova_compute[226829]: 2026-01-31 07:32:02.160 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:03.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:03 compute-2 nova_compute[226829]: 2026-01-31 07:32:03.183 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:32:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:03.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:32:03 compute-2 ceph-mon[77282]: pgmap v934: 305 pgs: 305 active+clean; 200 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 263 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 31 07:32:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:32:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:05.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:32:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:05.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:32:05 compute-2 nova_compute[226829]: 2026-01-31 07:32:05.390 226833 DEBUG nova.compute.manager [None req-1cca18e7-374b-4b88-b309-96205c6dbba7 - - - - - -] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:32:06 compute-2 ceph-mon[77282]: pgmap v935: 305 pgs: 305 active+clean; 200 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 257 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 31 07:32:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2406168050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:06 compute-2 nova_compute[226829]: 2026-01-31 07:32:06.408 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updated inventory for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 31 07:32:06 compute-2 nova_compute[226829]: 2026-01-31 07:32:06.409 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 31 07:32:06 compute-2 nova_compute[226829]: 2026-01-31 07:32:06.409 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating inventory in ProviderTree for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 07:32:06 compute-2 nova_compute[226829]: 2026-01-31 07:32:06.493 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:32:06 compute-2 nova_compute[226829]: 2026-01-31 07:32:06.494 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 10.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:06 compute-2 sudo[230850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:32:06 compute-2 sudo[230850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:06 compute-2 sudo[230850]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:06 compute-2 sudo[230875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:32:06 compute-2 sudo[230875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:06 compute-2 sudo[230875]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:06 compute-2 nova_compute[226829]: 2026-01-31 07:32:06.745 226833 DEBUG nova.network.neutron [-] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:32:06 compute-2 nova_compute[226829]: 2026-01-31 07:32:06.798 226833 INFO nova.compute.manager [-] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Took 17.56 seconds to deallocate network for instance.
Jan 31 07:32:06 compute-2 nova_compute[226829]: 2026-01-31 07:32:06.833 226833 DEBUG nova.compute.manager [req-bd148d7f-64a7-42a9-8f69-dee42ef9882f req-a70d0925-f1b5-4ce4-ab22-45a93ca242f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbbf9310-a6db-4d77-80bc-df8acf10ed4f] Received event network-vif-deleted-cbc5d20a-d45f-4c15-9dc9-f76e172c59fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:32:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:06.836 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:06.836 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:06.837 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:06 compute-2 nova_compute[226829]: 2026-01-31 07:32:06.932 226833 DEBUG oslo_concurrency.lockutils [None req-f96d7495-c48d-4555-b7d1-06c2c01eda01 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:06 compute-2 nova_compute[226829]: 2026-01-31 07:32:06.933 226833 DEBUG oslo_concurrency.lockutils [None req-f96d7495-c48d-4555-b7d1-06c2c01eda01 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:06 compute-2 nova_compute[226829]: 2026-01-31 07:32:06.996 226833 DEBUG oslo_concurrency.processutils [None req-f96d7495-c48d-4555-b7d1-06c2c01eda01 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:32:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:32:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:07.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:32:07 compute-2 nova_compute[226829]: 2026-01-31 07:32:07.162 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3877595757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:07.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:32:07 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4022648652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:07 compute-2 nova_compute[226829]: 2026-01-31 07:32:07.477 226833 DEBUG oslo_concurrency.processutils [None req-f96d7495-c48d-4555-b7d1-06c2c01eda01 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:32:07 compute-2 nova_compute[226829]: 2026-01-31 07:32:07.483 226833 DEBUG nova.compute.provider_tree [None req-f96d7495-c48d-4555-b7d1-06c2c01eda01 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:32:08 compute-2 nova_compute[226829]: 2026-01-31 07:32:08.069 226833 DEBUG nova.scheduler.client.report [None req-f96d7495-c48d-4555-b7d1-06c2c01eda01 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:32:08 compute-2 nova_compute[226829]: 2026-01-31 07:32:08.127 226833 DEBUG oslo_concurrency.lockutils [None req-f96d7495-c48d-4555-b7d1-06c2c01eda01 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:08 compute-2 nova_compute[226829]: 2026-01-31 07:32:08.161 226833 INFO nova.scheduler.client.report [None req-f96d7495-c48d-4555-b7d1-06c2c01eda01 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Deleted allocations for instance bbbf9310-a6db-4d77-80bc-df8acf10ed4f
Jan 31 07:32:08 compute-2 nova_compute[226829]: 2026-01-31 07:32:08.185 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:08 compute-2 ceph-mon[77282]: pgmap v936: 305 pgs: 305 active+clean; 200 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s rd, 81 KiB/s wr, 2 op/s
Jan 31 07:32:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3035829834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4022648652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:08 compute-2 nova_compute[226829]: 2026-01-31 07:32:08.493 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:32:08 compute-2 nova_compute[226829]: 2026-01-31 07:32:08.499 226833 DEBUG oslo_concurrency.lockutils [None req-f96d7495-c48d-4555-b7d1-06c2c01eda01 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "bbbf9310-a6db-4d77-80bc-df8acf10ed4f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 21.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:09.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:32:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:09.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:32:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:32:09 compute-2 nova_compute[226829]: 2026-01-31 07:32:09.657 226833 DEBUG oslo_concurrency.lockutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Acquiring lock "329b7a5b-02d3-4d71-b5a0-de26eba8e0f5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:09 compute-2 nova_compute[226829]: 2026-01-31 07:32:09.658 226833 DEBUG oslo_concurrency.lockutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "329b7a5b-02d3-4d71-b5a0-de26eba8e0f5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:09 compute-2 ceph-mon[77282]: pgmap v937: 305 pgs: 305 active+clean; 200 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s wr, 0 op/s
Jan 31 07:32:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1074906142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:09 compute-2 nova_compute[226829]: 2026-01-31 07:32:09.717 226833 DEBUG nova.compute.manager [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:32:09 compute-2 nova_compute[226829]: 2026-01-31 07:32:09.854 226833 DEBUG oslo_concurrency.lockutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:09 compute-2 nova_compute[226829]: 2026-01-31 07:32:09.854 226833 DEBUG oslo_concurrency.lockutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:09 compute-2 nova_compute[226829]: 2026-01-31 07:32:09.863 226833 DEBUG nova.virt.hardware [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:32:09 compute-2 nova_compute[226829]: 2026-01-31 07:32:09.864 226833 INFO nova.compute.claims [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:32:10 compute-2 nova_compute[226829]: 2026-01-31 07:32:10.118 226833 DEBUG oslo_concurrency.processutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:32:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:32:10 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1413958164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:10 compute-2 nova_compute[226829]: 2026-01-31 07:32:10.544 226833 DEBUG oslo_concurrency.processutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:32:10 compute-2 nova_compute[226829]: 2026-01-31 07:32:10.550 226833 DEBUG nova.compute.provider_tree [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:32:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1413958164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:11.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:11 compute-2 nova_compute[226829]: 2026-01-31 07:32:11.275 226833 DEBUG nova.scheduler.client.report [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:32:11 compute-2 nova_compute[226829]: 2026-01-31 07:32:11.344 226833 DEBUG oslo_concurrency.lockutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.490s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:11 compute-2 nova_compute[226829]: 2026-01-31 07:32:11.345 226833 DEBUG nova.compute.manager [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:32:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:32:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:11.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:32:11 compute-2 nova_compute[226829]: 2026-01-31 07:32:11.570 226833 DEBUG nova.compute.manager [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:32:11 compute-2 nova_compute[226829]: 2026-01-31 07:32:11.570 226833 DEBUG nova.network.neutron [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:32:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:11.668 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:32:11 compute-2 nova_compute[226829]: 2026-01-31 07:32:11.669 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:11.670 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:32:11 compute-2 nova_compute[226829]: 2026-01-31 07:32:11.679 226833 INFO nova.virt.libvirt.driver [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:32:11 compute-2 nova_compute[226829]: 2026-01-31 07:32:11.749 226833 DEBUG nova.compute.manager [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:32:11 compute-2 nova_compute[226829]: 2026-01-31 07:32:11.856 226833 DEBUG nova.policy [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '504b628b5b8b46f9a0fce37f63d8492e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '44b45fc9f5584f0ca482be3aa129958e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:32:11 compute-2 nova_compute[226829]: 2026-01-31 07:32:11.926 226833 DEBUG nova.compute.manager [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:32:11 compute-2 nova_compute[226829]: 2026-01-31 07:32:11.928 226833 DEBUG nova.virt.libvirt.driver [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:32:11 compute-2 nova_compute[226829]: 2026-01-31 07:32:11.928 226833 INFO nova.virt.libvirt.driver [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Creating image(s)
Jan 31 07:32:11 compute-2 nova_compute[226829]: 2026-01-31 07:32:11.955 226833 DEBUG nova.storage.rbd_utils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] rbd image 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:32:11 compute-2 ceph-mon[77282]: pgmap v938: 305 pgs: 305 active+clean; 200 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s wr, 0 op/s
Jan 31 07:32:11 compute-2 nova_compute[226829]: 2026-01-31 07:32:11.996 226833 DEBUG nova.storage.rbd_utils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] rbd image 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:32:12 compute-2 nova_compute[226829]: 2026-01-31 07:32:12.024 226833 DEBUG nova.storage.rbd_utils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] rbd image 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:32:12 compute-2 nova_compute[226829]: 2026-01-31 07:32:12.029 226833 DEBUG oslo_concurrency.processutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:32:12 compute-2 nova_compute[226829]: 2026-01-31 07:32:12.106 226833 DEBUG oslo_concurrency.processutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:32:12 compute-2 nova_compute[226829]: 2026-01-31 07:32:12.107 226833 DEBUG oslo_concurrency.lockutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:12 compute-2 nova_compute[226829]: 2026-01-31 07:32:12.108 226833 DEBUG oslo_concurrency.lockutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:12 compute-2 nova_compute[226829]: 2026-01-31 07:32:12.109 226833 DEBUG oslo_concurrency.lockutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:12 compute-2 nova_compute[226829]: 2026-01-31 07:32:12.134 226833 DEBUG nova.storage.rbd_utils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] rbd image 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:32:12 compute-2 nova_compute[226829]: 2026-01-31 07:32:12.138 226833 DEBUG oslo_concurrency.processutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:32:12 compute-2 nova_compute[226829]: 2026-01-31 07:32:12.163 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:13.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:13 compute-2 nova_compute[226829]: 2026-01-31 07:32:13.187 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:13.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:13 compute-2 nova_compute[226829]: 2026-01-31 07:32:13.431 226833 DEBUG oslo_concurrency.processutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:32:13 compute-2 nova_compute[226829]: 2026-01-31 07:32:13.538 226833 DEBUG nova.storage.rbd_utils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] resizing rbd image 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:32:13 compute-2 nova_compute[226829]: 2026-01-31 07:32:13.621 226833 DEBUG nova.network.neutron [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Successfully created port: e790ca88-3f34-4710-ab34-8a5e963834de _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:32:14 compute-2 nova_compute[226829]: 2026-01-31 07:32:14.023 226833 DEBUG nova.objects.instance [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lazy-loading 'migration_context' on Instance uuid 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:32:14 compute-2 nova_compute[226829]: 2026-01-31 07:32:14.080 226833 DEBUG nova.storage.rbd_utils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] rbd image 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:32:14 compute-2 nova_compute[226829]: 2026-01-31 07:32:14.122 226833 DEBUG nova.storage.rbd_utils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] rbd image 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:32:14 compute-2 nova_compute[226829]: 2026-01-31 07:32:14.127 226833 DEBUG oslo_concurrency.lockutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:14 compute-2 nova_compute[226829]: 2026-01-31 07:32:14.129 226833 DEBUG oslo_concurrency.lockutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:14 compute-2 nova_compute[226829]: 2026-01-31 07:32:14.129 226833 DEBUG oslo_concurrency.processutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:32:14 compute-2 nova_compute[226829]: 2026-01-31 07:32:14.148 226833 DEBUG oslo_concurrency.processutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:32:14 compute-2 nova_compute[226829]: 2026-01-31 07:32:14.149 226833 DEBUG oslo_concurrency.processutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:32:14 compute-2 nova_compute[226829]: 2026-01-31 07:32:14.175 226833 DEBUG oslo_concurrency.processutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:32:14 compute-2 nova_compute[226829]: 2026-01-31 07:32:14.176 226833 DEBUG oslo_concurrency.lockutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:14 compute-2 nova_compute[226829]: 2026-01-31 07:32:14.213 226833 DEBUG nova.storage.rbd_utils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] rbd image 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:32:14 compute-2 nova_compute[226829]: 2026-01-31 07:32:14.218 226833 DEBUG oslo_concurrency.processutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:32:14 compute-2 ceph-mon[77282]: pgmap v939: 305 pgs: 305 active+clean; 200 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s wr, 0 op/s
Jan 31 07:32:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:32:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:32:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:15.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:32:15 compute-2 ceph-mon[77282]: pgmap v940: 305 pgs: 305 active+clean; 215 MiB data, 350 MiB used, 21 GiB / 21 GiB avail; 9.3 KiB/s rd, 466 KiB/s wr, 17 op/s
Jan 31 07:32:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:15.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:15 compute-2 nova_compute[226829]: 2026-01-31 07:32:15.449 226833 DEBUG oslo_concurrency.processutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.231s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:32:15 compute-2 nova_compute[226829]: 2026-01-31 07:32:15.619 226833 DEBUG nova.virt.libvirt.driver [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:32:15 compute-2 nova_compute[226829]: 2026-01-31 07:32:15.620 226833 DEBUG nova.virt.libvirt.driver [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Ensure instance console log exists: /var/lib/nova/instances/329b7a5b-02d3-4d71-b5a0-de26eba8e0f5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:32:15 compute-2 nova_compute[226829]: 2026-01-31 07:32:15.621 226833 DEBUG oslo_concurrency.lockutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:15 compute-2 nova_compute[226829]: 2026-01-31 07:32:15.622 226833 DEBUG oslo_concurrency.lockutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:15 compute-2 nova_compute[226829]: 2026-01-31 07:32:15.622 226833 DEBUG oslo_concurrency.lockutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:16 compute-2 sudo[231246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:32:16 compute-2 sudo[231246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:16 compute-2 sudo[231246]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:16 compute-2 sudo[231271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:32:16 compute-2 sudo[231271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:16 compute-2 sudo[231271]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:16 compute-2 sudo[231296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:32:16 compute-2 sudo[231296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:16 compute-2 sudo[231296]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3231850267' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:32:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3231850267' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:32:16 compute-2 sudo[231321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:32:16 compute-2 sudo[231321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:16 compute-2 nova_compute[226829]: 2026-01-31 07:32:16.305 226833 DEBUG nova.network.neutron [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Successfully updated port: e790ca88-3f34-4710-ab34-8a5e963834de _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:32:16 compute-2 nova_compute[226829]: 2026-01-31 07:32:16.351 226833 DEBUG oslo_concurrency.lockutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Acquiring lock "refresh_cache-329b7a5b-02d3-4d71-b5a0-de26eba8e0f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:32:16 compute-2 nova_compute[226829]: 2026-01-31 07:32:16.351 226833 DEBUG oslo_concurrency.lockutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Acquired lock "refresh_cache-329b7a5b-02d3-4d71-b5a0-de26eba8e0f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:32:16 compute-2 nova_compute[226829]: 2026-01-31 07:32:16.352 226833 DEBUG nova.network.neutron [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:32:16 compute-2 nova_compute[226829]: 2026-01-31 07:32:16.596 226833 DEBUG nova.compute.manager [req-5797dd28-9d25-4d6c-af95-19edb0d3a967 req-238a321b-2bbe-495c-ab50-489a77c94ba9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Received event network-changed-e790ca88-3f34-4710-ab34-8a5e963834de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:32:16 compute-2 nova_compute[226829]: 2026-01-31 07:32:16.597 226833 DEBUG nova.compute.manager [req-5797dd28-9d25-4d6c-af95-19edb0d3a967 req-238a321b-2bbe-495c-ab50-489a77c94ba9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Refreshing instance network info cache due to event network-changed-e790ca88-3f34-4710-ab34-8a5e963834de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:32:16 compute-2 nova_compute[226829]: 2026-01-31 07:32:16.598 226833 DEBUG oslo_concurrency.lockutils [req-5797dd28-9d25-4d6c-af95-19edb0d3a967 req-238a321b-2bbe-495c-ab50-489a77c94ba9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-329b7a5b-02d3-4d71-b5a0-de26eba8e0f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:32:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:16.673 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:32:16 compute-2 nova_compute[226829]: 2026-01-31 07:32:16.684 226833 DEBUG nova.network.neutron [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:32:16 compute-2 sudo[231321]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:32:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:17.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:32:17 compute-2 nova_compute[226829]: 2026-01-31 07:32:17.166 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:17 compute-2 ceph-mon[77282]: pgmap v941: 305 pgs: 305 active+clean; 236 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.4 MiB/s wr, 28 op/s
Jan 31 07:32:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:32:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:17.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:32:18 compute-2 nova_compute[226829]: 2026-01-31 07:32:18.243 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:19.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:19 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:32:19 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:32:19 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:32:19 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:32:19 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:32:19 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:32:19 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:32:19 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:32:19 compute-2 nova_compute[226829]: 2026-01-31 07:32:19.214 226833 DEBUG nova.network.neutron [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Updating instance_info_cache with network_info: [{"id": "e790ca88-3f34-4710-ab34-8a5e963834de", "address": "fa:16:3e:36:4d:f9", "network": {"id": "c63b50b2-1358-4cfc-8846-5b8785b4f656", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-749344601-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44b45fc9f5584f0ca482be3aa129958e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape790ca88-3f", "ovs_interfaceid": "e790ca88-3f34-4710-ab34-8a5e963834de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:32:19 compute-2 nova_compute[226829]: 2026-01-31 07:32:19.261 226833 DEBUG oslo_concurrency.lockutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Releasing lock "refresh_cache-329b7a5b-02d3-4d71-b5a0-de26eba8e0f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:32:19 compute-2 nova_compute[226829]: 2026-01-31 07:32:19.262 226833 DEBUG nova.compute.manager [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Instance network_info: |[{"id": "e790ca88-3f34-4710-ab34-8a5e963834de", "address": "fa:16:3e:36:4d:f9", "network": {"id": "c63b50b2-1358-4cfc-8846-5b8785b4f656", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-749344601-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44b45fc9f5584f0ca482be3aa129958e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape790ca88-3f", "ovs_interfaceid": "e790ca88-3f34-4710-ab34-8a5e963834de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:32:19 compute-2 nova_compute[226829]: 2026-01-31 07:32:19.263 226833 DEBUG oslo_concurrency.lockutils [req-5797dd28-9d25-4d6c-af95-19edb0d3a967 req-238a321b-2bbe-495c-ab50-489a77c94ba9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-329b7a5b-02d3-4d71-b5a0-de26eba8e0f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:32:19 compute-2 nova_compute[226829]: 2026-01-31 07:32:19.263 226833 DEBUG nova.network.neutron [req-5797dd28-9d25-4d6c-af95-19edb0d3a967 req-238a321b-2bbe-495c-ab50-489a77c94ba9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Refreshing network info cache for port e790ca88-3f34-4710-ab34-8a5e963834de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:32:19 compute-2 nova_compute[226829]: 2026-01-31 07:32:19.269 226833 DEBUG nova.virt.libvirt.driver [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Start _get_guest_xml network_info=[{"id": "e790ca88-3f34-4710-ab34-8a5e963834de", "address": "fa:16:3e:36:4d:f9", "network": {"id": "c63b50b2-1358-4cfc-8846-5b8785b4f656", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-749344601-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44b45fc9f5584f0ca482be3aa129958e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape790ca88-3f", "ovs_interfaceid": "e790ca88-3f34-4710-ab34-8a5e963834de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [{'size': 1, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vdb', 'encryption_options': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:32:19 compute-2 nova_compute[226829]: 2026-01-31 07:32:19.275 226833 WARNING nova.virt.libvirt.driver [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:32:19 compute-2 nova_compute[226829]: 2026-01-31 07:32:19.285 226833 DEBUG nova.virt.libvirt.host [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:32:19 compute-2 nova_compute[226829]: 2026-01-31 07:32:19.285 226833 DEBUG nova.virt.libvirt.host [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:32:19 compute-2 nova_compute[226829]: 2026-01-31 07:32:19.295 226833 DEBUG nova.virt.libvirt.host [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:32:19 compute-2 nova_compute[226829]: 2026-01-31 07:32:19.295 226833 DEBUG nova.virt.libvirt.host [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:32:19 compute-2 nova_compute[226829]: 2026-01-31 07:32:19.297 226833 DEBUG nova.virt.libvirt.driver [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:32:19 compute-2 nova_compute[226829]: 2026-01-31 07:32:19.298 226833 DEBUG nova.virt.hardware [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:30:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={hw_rng:allowed='True'},flavorid='478771951',id=4,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_1-602715359',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:32:19 compute-2 nova_compute[226829]: 2026-01-31 07:32:19.299 226833 DEBUG nova.virt.hardware [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:32:19 compute-2 nova_compute[226829]: 2026-01-31 07:32:19.299 226833 DEBUG nova.virt.hardware [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:32:19 compute-2 nova_compute[226829]: 2026-01-31 07:32:19.300 226833 DEBUG nova.virt.hardware [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:32:19 compute-2 nova_compute[226829]: 2026-01-31 07:32:19.300 226833 DEBUG nova.virt.hardware [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:32:19 compute-2 nova_compute[226829]: 2026-01-31 07:32:19.301 226833 DEBUG nova.virt.hardware [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:32:19 compute-2 nova_compute[226829]: 2026-01-31 07:32:19.301 226833 DEBUG nova.virt.hardware [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:32:19 compute-2 nova_compute[226829]: 2026-01-31 07:32:19.302 226833 DEBUG nova.virt.hardware [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:32:19 compute-2 nova_compute[226829]: 2026-01-31 07:32:19.302 226833 DEBUG nova.virt.hardware [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:32:19 compute-2 nova_compute[226829]: 2026-01-31 07:32:19.303 226833 DEBUG nova.virt.hardware [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:32:19 compute-2 nova_compute[226829]: 2026-01-31 07:32:19.303 226833 DEBUG nova.virt.hardware [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:32:19 compute-2 nova_compute[226829]: 2026-01-31 07:32:19.309 226833 DEBUG oslo_concurrency.processutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:32:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:32:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:19.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:32:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:32:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:32:19 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2462985865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:32:19 compute-2 nova_compute[226829]: 2026-01-31 07:32:19.820 226833 DEBUG oslo_concurrency.processutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:32:19 compute-2 nova_compute[226829]: 2026-01-31 07:32:19.822 226833 DEBUG oslo_concurrency.processutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:32:20 compute-2 ceph-mon[77282]: pgmap v942: 305 pgs: 305 active+clean; 230 MiB data, 365 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 31 07:32:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2462985865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:32:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:32:20 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/377370642' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.285 226833 DEBUG oslo_concurrency.processutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.322 226833 DEBUG nova.storage.rbd_utils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] rbd image 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.327 226833 DEBUG oslo_concurrency.processutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:32:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:32:20 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3032247754' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.802 226833 DEBUG oslo_concurrency.processutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.805 226833 DEBUG nova.virt.libvirt.vif [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:32:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-39512070',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-39512070',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-39512070',id=6,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDxTAQcLwGUMVPW0FQMEh2WkiJiP4UW6jyeXpyYN49oJPb9d8Ollm4RLXjbgyBtbDOEel5iqcdq3OwrhOFKp5uzFUPyPJ5dQoEavamFkLA01W1IRHGw1oCDni1Hm+Ki2Jw==',key_name='tempest-keypair-338823368',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='44b45fc9f5584f0ca482be3aa129958e',ramdisk_id='',reservation_id='r-0tibxui6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-581936852',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-581936852-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:32:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='504b628b5b8b46f9a0fce37f63d8492e',uuid=329b7a5b-02d3-4d71-b5a0-de26eba8e0f5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e790ca88-3f34-4710-ab34-8a5e963834de", "address": "fa:16:3e:36:4d:f9", "network": {"id": "c63b50b2-1358-4cfc-8846-5b8785b4f656", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-749344601-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44b45fc9f5584f0ca482be3aa129958e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape790ca88-3f", "ovs_interfaceid": "e790ca88-3f34-4710-ab34-8a5e963834de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.805 226833 DEBUG nova.network.os_vif_util [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Converting VIF {"id": "e790ca88-3f34-4710-ab34-8a5e963834de", "address": "fa:16:3e:36:4d:f9", "network": {"id": "c63b50b2-1358-4cfc-8846-5b8785b4f656", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-749344601-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44b45fc9f5584f0ca482be3aa129958e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape790ca88-3f", "ovs_interfaceid": "e790ca88-3f34-4710-ab34-8a5e963834de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.806 226833 DEBUG nova.network.os_vif_util [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:4d:f9,bridge_name='br-int',has_traffic_filtering=True,id=e790ca88-3f34-4710-ab34-8a5e963834de,network=Network(c63b50b2-1358-4cfc-8846-5b8785b4f656),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape790ca88-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.808 226833 DEBUG nova.objects.instance [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lazy-loading 'pci_devices' on Instance uuid 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.828 226833 DEBUG nova.virt.libvirt.driver [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:32:20 compute-2 nova_compute[226829]:   <uuid>329b7a5b-02d3-4d71-b5a0-de26eba8e0f5</uuid>
Jan 31 07:32:20 compute-2 nova_compute[226829]:   <name>instance-00000006</name>
Jan 31 07:32:20 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:32:20 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:32:20 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-39512070</nova:name>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:32:19</nova:creationTime>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <nova:flavor name="tempest-flavor_with_ephemeral_1-602715359">
Jan 31 07:32:20 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:32:20 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:32:20 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:32:20 compute-2 nova_compute[226829]:         <nova:ephemeral>1</nova:ephemeral>
Jan 31 07:32:20 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:32:20 compute-2 nova_compute[226829]:         <nova:user uuid="504b628b5b8b46f9a0fce37f63d8492e">tempest-ServersWithSpecificFlavorTestJSON-581936852-project-member</nova:user>
Jan 31 07:32:20 compute-2 nova_compute[226829]:         <nova:project uuid="44b45fc9f5584f0ca482be3aa129958e">tempest-ServersWithSpecificFlavorTestJSON-581936852</nova:project>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 07:32:20 compute-2 nova_compute[226829]:         <nova:port uuid="e790ca88-3f34-4710-ab34-8a5e963834de">
Jan 31 07:32:20 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:32:20 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:32:20 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <system>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <entry name="serial">329b7a5b-02d3-4d71-b5a0-de26eba8e0f5</entry>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <entry name="uuid">329b7a5b-02d3-4d71-b5a0-de26eba8e0f5</entry>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     </system>
Jan 31 07:32:20 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:32:20 compute-2 nova_compute[226829]:   <os>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:   </os>
Jan 31 07:32:20 compute-2 nova_compute[226829]:   <features>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:   </features>
Jan 31 07:32:20 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:32:20 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:32:20 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/329b7a5b-02d3-4d71-b5a0-de26eba8e0f5_disk">
Jan 31 07:32:20 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       </source>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:32:20 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/329b7a5b-02d3-4d71-b5a0-de26eba8e0f5_disk.eph0">
Jan 31 07:32:20 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       </source>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:32:20 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <target dev="vdb" bus="virtio"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/329b7a5b-02d3-4d71-b5a0-de26eba8e0f5_disk.config">
Jan 31 07:32:20 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       </source>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:32:20 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:36:4d:f9"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <target dev="tape790ca88-3f"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/329b7a5b-02d3-4d71-b5a0-de26eba8e0f5/console.log" append="off"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <video>
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     </video>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:32:20 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:32:20 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:32:20 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:32:20 compute-2 nova_compute[226829]: </domain>
Jan 31 07:32:20 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.830 226833 DEBUG nova.compute.manager [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Preparing to wait for external event network-vif-plugged-e790ca88-3f34-4710-ab34-8a5e963834de prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.831 226833 DEBUG oslo_concurrency.lockutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Acquiring lock "329b7a5b-02d3-4d71-b5a0-de26eba8e0f5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.831 226833 DEBUG oslo_concurrency.lockutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "329b7a5b-02d3-4d71-b5a0-de26eba8e0f5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.831 226833 DEBUG oslo_concurrency.lockutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "329b7a5b-02d3-4d71-b5a0-de26eba8e0f5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.832 226833 DEBUG nova.virt.libvirt.vif [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:32:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-39512070',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-39512070',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-39512070',id=6,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDxTAQcLwGUMVPW0FQMEh2WkiJiP4UW6jyeXpyYN49oJPb9d8Ollm4RLXjbgyBtbDOEel5iqcdq3OwrhOFKp5uzFUPyPJ5dQoEavamFkLA01W1IRHGw1oCDni1Hm+Ki2Jw==',key_name='tempest-keypair-338823368',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='44b45fc9f5584f0ca482be3aa129958e',ramdisk_id='',reservation_id='r-0tibxui6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-581936852',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-581936852-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:32:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='504b628b5b8b46f9a0fce37f63d8492e',uuid=329b7a5b-02d3-4d71-b5a0-de26eba8e0f5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e790ca88-3f34-4710-ab34-8a5e963834de", "address": "fa:16:3e:36:4d:f9", "network": {"id": "c63b50b2-1358-4cfc-8846-5b8785b4f656", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-749344601-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44b45fc9f5584f0ca482be3aa129958e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape790ca88-3f", "ovs_interfaceid": "e790ca88-3f34-4710-ab34-8a5e963834de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.832 226833 DEBUG nova.network.os_vif_util [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Converting VIF {"id": "e790ca88-3f34-4710-ab34-8a5e963834de", "address": "fa:16:3e:36:4d:f9", "network": {"id": "c63b50b2-1358-4cfc-8846-5b8785b4f656", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-749344601-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44b45fc9f5584f0ca482be3aa129958e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape790ca88-3f", "ovs_interfaceid": "e790ca88-3f34-4710-ab34-8a5e963834de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.833 226833 DEBUG nova.network.os_vif_util [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:4d:f9,bridge_name='br-int',has_traffic_filtering=True,id=e790ca88-3f34-4710-ab34-8a5e963834de,network=Network(c63b50b2-1358-4cfc-8846-5b8785b4f656),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape790ca88-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.833 226833 DEBUG os_vif [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:4d:f9,bridge_name='br-int',has_traffic_filtering=True,id=e790ca88-3f34-4710-ab34-8a5e963834de,network=Network(c63b50b2-1358-4cfc-8846-5b8785b4f656),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape790ca88-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.834 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.835 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.835 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.843 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.843 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape790ca88-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.844 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape790ca88-3f, col_values=(('external_ids', {'iface-id': 'e790ca88-3f34-4710-ab34-8a5e963834de', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:36:4d:f9', 'vm-uuid': '329b7a5b-02d3-4d71-b5a0-de26eba8e0f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.847 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:20 compute-2 NetworkManager[48999]: <info>  [1769844740.8478] manager: (tape790ca88-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.850 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.852 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.853 226833 INFO os_vif [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:4d:f9,bridge_name='br-int',has_traffic_filtering=True,id=e790ca88-3f34-4710-ab34-8a5e963834de,network=Network(c63b50b2-1358-4cfc-8846-5b8785b4f656),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape790ca88-3f')
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.982 226833 DEBUG nova.virt.libvirt.driver [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.982 226833 DEBUG nova.virt.libvirt.driver [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.982 226833 DEBUG nova.virt.libvirt.driver [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.983 226833 DEBUG nova.virt.libvirt.driver [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] No VIF found with MAC fa:16:3e:36:4d:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:32:20 compute-2 nova_compute[226829]: 2026-01-31 07:32:20.984 226833 INFO nova.virt.libvirt.driver [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Using config drive
Jan 31 07:32:21 compute-2 nova_compute[226829]: 2026-01-31 07:32:21.033 226833 DEBUG nova.storage.rbd_utils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] rbd image 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:32:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:32:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:21.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:32:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/377370642' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:32:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3032247754' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:32:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:21.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:22 compute-2 podman[231483]: 2026-01-31 07:32:22.216503383 +0000 UTC m=+0.092994356 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:32:22 compute-2 ceph-mon[77282]: pgmap v943: 305 pgs: 305 active+clean; 198 MiB data, 346 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 1.8 MiB/s wr, 75 op/s
Jan 31 07:32:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2709580168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:22 compute-2 nova_compute[226829]: 2026-01-31 07:32:22.632 226833 INFO nova.virt.libvirt.driver [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Creating config drive at /var/lib/nova/instances/329b7a5b-02d3-4d71-b5a0-de26eba8e0f5/disk.config
Jan 31 07:32:22 compute-2 nova_compute[226829]: 2026-01-31 07:32:22.640 226833 DEBUG oslo_concurrency.processutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/329b7a5b-02d3-4d71-b5a0-de26eba8e0f5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp6d4qb2ot execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:32:22 compute-2 nova_compute[226829]: 2026-01-31 07:32:22.767 226833 DEBUG oslo_concurrency.processutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/329b7a5b-02d3-4d71-b5a0-de26eba8e0f5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp6d4qb2ot" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:32:22 compute-2 nova_compute[226829]: 2026-01-31 07:32:22.802 226833 DEBUG nova.storage.rbd_utils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] rbd image 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:32:22 compute-2 nova_compute[226829]: 2026-01-31 07:32:22.808 226833 DEBUG oslo_concurrency.processutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/329b7a5b-02d3-4d71-b5a0-de26eba8e0f5/disk.config 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:32:23 compute-2 nova_compute[226829]: 2026-01-31 07:32:23.042 226833 DEBUG oslo_concurrency.processutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/329b7a5b-02d3-4d71-b5a0-de26eba8e0f5/disk.config 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.234s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:32:23 compute-2 nova_compute[226829]: 2026-01-31 07:32:23.043 226833 INFO nova.virt.libvirt.driver [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Deleting local config drive /var/lib/nova/instances/329b7a5b-02d3-4d71-b5a0-de26eba8e0f5/disk.config because it was imported into RBD.
Jan 31 07:32:23 compute-2 nova_compute[226829]: 2026-01-31 07:32:23.047 226833 DEBUG nova.network.neutron [req-5797dd28-9d25-4d6c-af95-19edb0d3a967 req-238a321b-2bbe-495c-ab50-489a77c94ba9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Updated VIF entry in instance network info cache for port e790ca88-3f34-4710-ab34-8a5e963834de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:32:23 compute-2 nova_compute[226829]: 2026-01-31 07:32:23.047 226833 DEBUG nova.network.neutron [req-5797dd28-9d25-4d6c-af95-19edb0d3a967 req-238a321b-2bbe-495c-ab50-489a77c94ba9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Updating instance_info_cache with network_info: [{"id": "e790ca88-3f34-4710-ab34-8a5e963834de", "address": "fa:16:3e:36:4d:f9", "network": {"id": "c63b50b2-1358-4cfc-8846-5b8785b4f656", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-749344601-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44b45fc9f5584f0ca482be3aa129958e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape790ca88-3f", "ovs_interfaceid": "e790ca88-3f34-4710-ab34-8a5e963834de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:32:23 compute-2 nova_compute[226829]: 2026-01-31 07:32:23.071 226833 DEBUG oslo_concurrency.lockutils [req-5797dd28-9d25-4d6c-af95-19edb0d3a967 req-238a321b-2bbe-495c-ab50-489a77c94ba9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-329b7a5b-02d3-4d71-b5a0-de26eba8e0f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:32:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:23.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:23 compute-2 kernel: tape790ca88-3f: entered promiscuous mode
Jan 31 07:32:23 compute-2 NetworkManager[48999]: <info>  [1769844743.1049] manager: (tape790ca88-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/33)
Jan 31 07:32:23 compute-2 ovn_controller[133834]: 2026-01-31T07:32:23Z|00038|binding|INFO|Claiming lport e790ca88-3f34-4710-ab34-8a5e963834de for this chassis.
Jan 31 07:32:23 compute-2 ovn_controller[133834]: 2026-01-31T07:32:23Z|00039|binding|INFO|e790ca88-3f34-4710-ab34-8a5e963834de: Claiming fa:16:3e:36:4d:f9 10.100.0.13
Jan 31 07:32:23 compute-2 nova_compute[226829]: 2026-01-31 07:32:23.105 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:23 compute-2 ovn_controller[133834]: 2026-01-31T07:32:23Z|00040|binding|INFO|Setting lport e790ca88-3f34-4710-ab34-8a5e963834de ovn-installed in OVS
Jan 31 07:32:23 compute-2 nova_compute[226829]: 2026-01-31 07:32:23.114 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:23 compute-2 nova_compute[226829]: 2026-01-31 07:32:23.117 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:23 compute-2 systemd-udevd[231564]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:32:23 compute-2 ovn_controller[133834]: 2026-01-31T07:32:23Z|00041|binding|INFO|Setting lport e790ca88-3f34-4710-ab34-8a5e963834de up in Southbound
Jan 31 07:32:23 compute-2 systemd-machined[195142]: New machine qemu-2-instance-00000006.
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:23.135 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:4d:f9 10.100.0.13'], port_security=['fa:16:3e:36:4d:f9 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '329b7a5b-02d3-4d71-b5a0-de26eba8e0f5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c63b50b2-1358-4cfc-8846-5b8785b4f656', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '44b45fc9f5584f0ca482be3aa129958e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'afa316ed-24f5-4171-ab84-996dd120cd11', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a2fa1a5-5047-42e0-b079-f093555fa913, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=e790ca88-3f34-4710-ab34-8a5e963834de) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:23.139 143841 INFO neutron.agent.ovn.metadata.agent [-] Port e790ca88-3f34-4710-ab34-8a5e963834de in datapath c63b50b2-1358-4cfc-8846-5b8785b4f656 bound to our chassis
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:23.143 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c63b50b2-1358-4cfc-8846-5b8785b4f656
Jan 31 07:32:23 compute-2 NetworkManager[48999]: <info>  [1769844743.1477] device (tape790ca88-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:32:23 compute-2 NetworkManager[48999]: <info>  [1769844743.1483] device (tape790ca88-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:32:23 compute-2 systemd[1]: Started Virtual Machine qemu-2-instance-00000006.
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:23.152 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1abbdfc1-a0b4-47cc-bcb3-1f3b198d90f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:23.154 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc63b50b2-11 in ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:23.155 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc63b50b2-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:23.155 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6ca257a3-8958-4ee3-a442-76665a9cae0c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:23.156 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9e6377d5-26a5-4660-885e-7781532f5e9b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:23.167 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[c4edffcf-f5b0-42af-b715-4065673e7982]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:23.186 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[96ac096b-c6db-4479-8f10-fc133bc2e7f4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:23.209 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb96bb0-6de0-43cb-9dd6-7706b91eee27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:23 compute-2 NetworkManager[48999]: <info>  [1769844743.2147] manager: (tapc63b50b2-10): new Veth device (/org/freedesktop/NetworkManager/Devices/34)
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:23.214 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a8279e31-4c63-4bab-a610-53d57215c3d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:23 compute-2 systemd-udevd[231566]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:23.230 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[5a70e7c9-82d0-4f19-a6d5-7fa77cc616c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:23.234 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[1b063357-bc38-4826-909b-b96e15c049f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:23 compute-2 nova_compute[226829]: 2026-01-31 07:32:23.246 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:23 compute-2 NetworkManager[48999]: <info>  [1769844743.2513] device (tapc63b50b2-10): carrier: link connected
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:23.252 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[fb3b106e-5c59-42f5-8481-f26797c067b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:23.265 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d1c1ea78-2c75-4962-b823-6b0a805ed986]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc63b50b2-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:a0:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499572, 'reachable_time': 27693, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 231597, 'error': None, 'target': 'ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:23.276 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5abaca35-d392-4634-b2f2-edc9026c9870]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8e:a0ec'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499572, 'tstamp': 499572}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 231598, 'error': None, 'target': 'ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:23.287 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[446edb5f-626e-4590-8c4b-797c35140d82]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc63b50b2-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:a0:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499572, 'reachable_time': 27693, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 231599, 'error': None, 'target': 'ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:23 compute-2 ceph-mon[77282]: pgmap v944: 305 pgs: 305 active+clean; 169 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 79 op/s
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:23.309 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[bdab37af-4012-4496-b0bf-5eab6c96931b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:23.355 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb42e06-80a0-4aec-a7ab-26de74da9ea3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:23.357 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc63b50b2-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:23.357 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:23.357 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc63b50b2-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:32:23 compute-2 kernel: tapc63b50b2-10: entered promiscuous mode
Jan 31 07:32:23 compute-2 NetworkManager[48999]: <info>  [1769844743.3600] manager: (tapc63b50b2-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Jan 31 07:32:23 compute-2 nova_compute[226829]: 2026-01-31 07:32:23.359 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:23 compute-2 nova_compute[226829]: 2026-01-31 07:32:23.362 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:23.363 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc63b50b2-10, col_values=(('external_ids', {'iface-id': '87211dc4-a75d-4ebb-a1af-acc15276f8a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:32:23 compute-2 ovn_controller[133834]: 2026-01-31T07:32:23Z|00042|binding|INFO|Releasing lport 87211dc4-a75d-4ebb-a1af-acc15276f8a6 from this chassis (sb_readonly=0)
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:23.365 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c63b50b2-1358-4cfc-8846-5b8785b4f656.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c63b50b2-1358-4cfc-8846-5b8785b4f656.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:23.366 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f02f8048-76c6-458c-8ab0-ba78178af3e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:23.367 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: global
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-c63b50b2-1358-4cfc-8846-5b8785b4f656
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/c63b50b2-1358-4cfc-8846-5b8785b4f656.pid.haproxy
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID c63b50b2-1358-4cfc-8846-5b8785b4f656
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:32:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:23.368 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656', 'env', 'PROCESS_TAG=haproxy-c63b50b2-1358-4cfc-8846-5b8785b4f656', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c63b50b2-1358-4cfc-8846-5b8785b4f656.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:32:23 compute-2 nova_compute[226829]: 2026-01-31 07:32:23.371 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:32:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:23.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:32:24 compute-2 podman[231688]: 2026-01-31 07:32:24.013551164 +0000 UTC m=+0.059133565 container create 3e3743aa46579aaf713342897cbb470ac3e6ab42159d4fd504b2478447f65bb1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.038 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769844744.037086, 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.038 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] VM Started (Lifecycle Event)
Jan 31 07:32:24 compute-2 systemd[1]: Started libpod-conmon-3e3743aa46579aaf713342897cbb470ac3e6ab42159d4fd504b2478447f65bb1.scope.
Jan 31 07:32:24 compute-2 podman[231688]: 2026-01-31 07:32:23.984207199 +0000 UTC m=+0.029789820 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:32:24 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:32:24 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/735fcfb965dd3ee01ad58fb318eccdcb727cda5e5128f50e9847c30e1de3b834/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:32:24 compute-2 podman[231688]: 2026-01-31 07:32:24.116099102 +0000 UTC m=+0.161681483 container init 3e3743aa46579aaf713342897cbb470ac3e6ab42159d4fd504b2478447f65bb1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 07:32:24 compute-2 podman[231688]: 2026-01-31 07:32:24.122324903 +0000 UTC m=+0.167907264 container start 3e3743aa46579aaf713342897cbb470ac3e6ab42159d4fd504b2478447f65bb1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.127 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.133 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769844744.037687, 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.133 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] VM Paused (Lifecycle Event)
Jan 31 07:32:24 compute-2 neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656[231706]: [NOTICE]   (231710) : New worker (231712) forked
Jan 31 07:32:24 compute-2 neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656[231706]: [NOTICE]   (231710) : Loading success.
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.181 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.186 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.217 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:32:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.668 226833 DEBUG nova.compute.manager [req-dbf178df-0be9-4769-b3d3-335bdf1e5529 req-3ca65708-9125-489d-8496-216dcfe5dee5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Received event network-vif-plugged-e790ca88-3f34-4710-ab34-8a5e963834de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.669 226833 DEBUG oslo_concurrency.lockutils [req-dbf178df-0be9-4769-b3d3-335bdf1e5529 req-3ca65708-9125-489d-8496-216dcfe5dee5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "329b7a5b-02d3-4d71-b5a0-de26eba8e0f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.670 226833 DEBUG oslo_concurrency.lockutils [req-dbf178df-0be9-4769-b3d3-335bdf1e5529 req-3ca65708-9125-489d-8496-216dcfe5dee5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "329b7a5b-02d3-4d71-b5a0-de26eba8e0f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.670 226833 DEBUG oslo_concurrency.lockutils [req-dbf178df-0be9-4769-b3d3-335bdf1e5529 req-3ca65708-9125-489d-8496-216dcfe5dee5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "329b7a5b-02d3-4d71-b5a0-de26eba8e0f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.670 226833 DEBUG nova.compute.manager [req-dbf178df-0be9-4769-b3d3-335bdf1e5529 req-3ca65708-9125-489d-8496-216dcfe5dee5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Processing event network-vif-plugged-e790ca88-3f34-4710-ab34-8a5e963834de _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.671 226833 DEBUG nova.compute.manager [req-dbf178df-0be9-4769-b3d3-335bdf1e5529 req-3ca65708-9125-489d-8496-216dcfe5dee5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Received event network-vif-plugged-e790ca88-3f34-4710-ab34-8a5e963834de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.671 226833 DEBUG oslo_concurrency.lockutils [req-dbf178df-0be9-4769-b3d3-335bdf1e5529 req-3ca65708-9125-489d-8496-216dcfe5dee5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "329b7a5b-02d3-4d71-b5a0-de26eba8e0f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.671 226833 DEBUG oslo_concurrency.lockutils [req-dbf178df-0be9-4769-b3d3-335bdf1e5529 req-3ca65708-9125-489d-8496-216dcfe5dee5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "329b7a5b-02d3-4d71-b5a0-de26eba8e0f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.671 226833 DEBUG oslo_concurrency.lockutils [req-dbf178df-0be9-4769-b3d3-335bdf1e5529 req-3ca65708-9125-489d-8496-216dcfe5dee5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "329b7a5b-02d3-4d71-b5a0-de26eba8e0f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.671 226833 DEBUG nova.compute.manager [req-dbf178df-0be9-4769-b3d3-335bdf1e5529 req-3ca65708-9125-489d-8496-216dcfe5dee5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] No waiting events found dispatching network-vif-plugged-e790ca88-3f34-4710-ab34-8a5e963834de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.672 226833 WARNING nova.compute.manager [req-dbf178df-0be9-4769-b3d3-335bdf1e5529 req-3ca65708-9125-489d-8496-216dcfe5dee5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Received unexpected event network-vif-plugged-e790ca88-3f34-4710-ab34-8a5e963834de for instance with vm_state building and task_state spawning.
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.672 226833 DEBUG nova.compute.manager [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.676 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769844744.6762025, 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.676 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] VM Resumed (Lifecycle Event)
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.678 226833 DEBUG nova.virt.libvirt.driver [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.682 226833 INFO nova.virt.libvirt.driver [-] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Instance spawned successfully.
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.682 226833 DEBUG nova.virt.libvirt.driver [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.719 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.726 226833 DEBUG nova.virt.libvirt.driver [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.726 226833 DEBUG nova.virt.libvirt.driver [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.727 226833 DEBUG nova.virt.libvirt.driver [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.728 226833 DEBUG nova.virt.libvirt.driver [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.729 226833 DEBUG nova.virt.libvirt.driver [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.730 226833 DEBUG nova.virt.libvirt.driver [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.738 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.772 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.819 226833 INFO nova.compute.manager [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Took 12.89 seconds to spawn the instance on the hypervisor.
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.819 226833 DEBUG nova.compute.manager [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.925 226833 INFO nova.compute.manager [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Took 15.11 seconds to build instance.
Jan 31 07:32:24 compute-2 nova_compute[226829]: 2026-01-31 07:32:24.949 226833 DEBUG oslo_concurrency.lockutils [None req-a9af72a0-5bbc-413d-80ef-b50c389e0691 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "329b7a5b-02d3-4d71-b5a0-de26eba8e0f5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:25 compute-2 sudo[231721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:32:25 compute-2 sudo[231721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:25 compute-2 sudo[231721]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:25.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:25 compute-2 sudo[231746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:32:25 compute-2 sudo[231746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:25 compute-2 sudo[231746]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:25.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:25 compute-2 ceph-mon[77282]: pgmap v945: 305 pgs: 305 active+clean; 169 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 64 KiB/s rd, 1.8 MiB/s wr, 94 op/s
Jan 31 07:32:25 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:32:25 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:32:25 compute-2 nova_compute[226829]: 2026-01-31 07:32:25.847 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:26 compute-2 sudo[231772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:32:26 compute-2 sudo[231772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:26 compute-2 sudo[231772]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:26 compute-2 sudo[231797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:32:26 compute-2 sudo[231797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:26 compute-2 sudo[231797]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:32:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:27.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:32:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:27.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:27 compute-2 ceph-mon[77282]: pgmap v946: 305 pgs: 305 active+clean; 169 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 847 KiB/s rd, 1.3 MiB/s wr, 105 op/s
Jan 31 07:32:28 compute-2 nova_compute[226829]: 2026-01-31 07:32:28.247 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:32:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:29.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:32:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:29.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:32:29 compute-2 ceph-mon[77282]: pgmap v947: 305 pgs: 305 active+clean; 169 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 420 KiB/s wr, 106 op/s
Jan 31 07:32:30 compute-2 nova_compute[226829]: 2026-01-31 07:32:30.850 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:31.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:31 compute-2 podman[231825]: 2026-01-31 07:32:31.224323923 +0000 UTC m=+0.101895200 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:32:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:32:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:31.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:32:31 compute-2 nova_compute[226829]: 2026-01-31 07:32:31.669 226833 DEBUG nova.compute.manager [req-51a373b4-5cba-4cc3-8cbb-cd06a1328a9f req-3b24856e-59c0-4e73-a718-c35bafa84cec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Received event network-changed-e790ca88-3f34-4710-ab34-8a5e963834de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:32:31 compute-2 nova_compute[226829]: 2026-01-31 07:32:31.670 226833 DEBUG nova.compute.manager [req-51a373b4-5cba-4cc3-8cbb-cd06a1328a9f req-3b24856e-59c0-4e73-a718-c35bafa84cec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Refreshing instance network info cache due to event network-changed-e790ca88-3f34-4710-ab34-8a5e963834de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:32:31 compute-2 nova_compute[226829]: 2026-01-31 07:32:31.670 226833 DEBUG oslo_concurrency.lockutils [req-51a373b4-5cba-4cc3-8cbb-cd06a1328a9f req-3b24856e-59c0-4e73-a718-c35bafa84cec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-329b7a5b-02d3-4d71-b5a0-de26eba8e0f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:32:31 compute-2 nova_compute[226829]: 2026-01-31 07:32:31.670 226833 DEBUG oslo_concurrency.lockutils [req-51a373b4-5cba-4cc3-8cbb-cd06a1328a9f req-3b24856e-59c0-4e73-a718-c35bafa84cec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-329b7a5b-02d3-4d71-b5a0-de26eba8e0f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:32:31 compute-2 nova_compute[226829]: 2026-01-31 07:32:31.670 226833 DEBUG nova.network.neutron [req-51a373b4-5cba-4cc3-8cbb-cd06a1328a9f req-3b24856e-59c0-4e73-a718-c35bafa84cec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Refreshing network info cache for port e790ca88-3f34-4710-ab34-8a5e963834de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:32:31 compute-2 ceph-mon[77282]: pgmap v948: 305 pgs: 305 active+clean; 169 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 28 KiB/s wr, 96 op/s
Jan 31 07:32:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:33.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:33 compute-2 nova_compute[226829]: 2026-01-31 07:32:33.289 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:33.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:34 compute-2 ceph-mon[77282]: pgmap v949: 305 pgs: 305 active+clean; 169 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 88 op/s
Jan 31 07:32:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:32:35 compute-2 nova_compute[226829]: 2026-01-31 07:32:35.064 226833 DEBUG nova.network.neutron [req-51a373b4-5cba-4cc3-8cbb-cd06a1328a9f req-3b24856e-59c0-4e73-a718-c35bafa84cec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Updated VIF entry in instance network info cache for port e790ca88-3f34-4710-ab34-8a5e963834de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:32:35 compute-2 nova_compute[226829]: 2026-01-31 07:32:35.066 226833 DEBUG nova.network.neutron [req-51a373b4-5cba-4cc3-8cbb-cd06a1328a9f req-3b24856e-59c0-4e73-a718-c35bafa84cec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Updating instance_info_cache with network_info: [{"id": "e790ca88-3f34-4710-ab34-8a5e963834de", "address": "fa:16:3e:36:4d:f9", "network": {"id": "c63b50b2-1358-4cfc-8846-5b8785b4f656", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-749344601-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44b45fc9f5584f0ca482be3aa129958e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape790ca88-3f", "ovs_interfaceid": "e790ca88-3f34-4710-ab34-8a5e963834de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:32:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:35.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:32:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:35.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:32:35 compute-2 nova_compute[226829]: 2026-01-31 07:32:35.853 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:35 compute-2 nova_compute[226829]: 2026-01-31 07:32:35.875 226833 DEBUG oslo_concurrency.lockutils [req-51a373b4-5cba-4cc3-8cbb-cd06a1328a9f req-3b24856e-59c0-4e73-a718-c35bafa84cec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-329b7a5b-02d3-4d71-b5a0-de26eba8e0f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:32:36 compute-2 ceph-mon[77282]: pgmap v950: 305 pgs: 305 active+clean; 169 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 84 op/s
Jan 31 07:32:36 compute-2 ovn_controller[133834]: 2026-01-31T07:32:36Z|00043|binding|INFO|Releasing lport 87211dc4-a75d-4ebb-a1af-acc15276f8a6 from this chassis (sb_readonly=0)
Jan 31 07:32:36 compute-2 nova_compute[226829]: 2026-01-31 07:32:36.728 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:32:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:37.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:32:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:37.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:37 compute-2 ovn_controller[133834]: 2026-01-31T07:32:37Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:36:4d:f9 10.100.0.13
Jan 31 07:32:37 compute-2 ovn_controller[133834]: 2026-01-31T07:32:37Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:36:4d:f9 10.100.0.13
Jan 31 07:32:38 compute-2 ceph-mon[77282]: pgmap v951: 305 pgs: 305 active+clean; 169 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 KiB/s wr, 69 op/s
Jan 31 07:32:38 compute-2 nova_compute[226829]: 2026-01-31 07:32:38.292 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:32:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:39.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:32:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2490782525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:32:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:39.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:32:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:32:40 compute-2 ceph-mon[77282]: pgmap v952: 305 pgs: 305 active+clean; 173 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 271 KiB/s wr, 47 op/s
Jan 31 07:32:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/529714511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:40 compute-2 nova_compute[226829]: 2026-01-31 07:32:40.855 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:32:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:41.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:32:41 compute-2 ceph-mon[77282]: pgmap v953: 305 pgs: 305 active+clean; 181 MiB data, 331 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.0 MiB/s wr, 72 op/s
Jan 31 07:32:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:32:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:41.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:32:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:32:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:43.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:32:43 compute-2 nova_compute[226829]: 2026-01-31 07:32:43.294 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:32:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:43.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:32:43 compute-2 ceph-mon[77282]: pgmap v954: 305 pgs: 305 active+clean; 198 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 644 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Jan 31 07:32:43 compute-2 ovn_controller[133834]: 2026-01-31T07:32:43Z|00044|binding|INFO|Releasing lport 87211dc4-a75d-4ebb-a1af-acc15276f8a6 from this chassis (sb_readonly=0)
Jan 31 07:32:43 compute-2 nova_compute[226829]: 2026-01-31 07:32:43.896 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:32:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:45.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:45.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:45 compute-2 nova_compute[226829]: 2026-01-31 07:32:45.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:32:45 compute-2 ceph-mon[77282]: pgmap v955: 305 pgs: 305 active+clean; 202 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 31 07:32:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/852389985' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:32:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/852389985' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:45.758377) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844765758436, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 843, "num_deletes": 251, "total_data_size": 1606576, "memory_usage": 1633840, "flush_reason": "Manual Compaction"}
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844765765693, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 1049459, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22068, "largest_seqno": 22906, "table_properties": {"data_size": 1045501, "index_size": 1674, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9031, "raw_average_key_size": 19, "raw_value_size": 1037552, "raw_average_value_size": 2245, "num_data_blocks": 75, "num_entries": 462, "num_filter_entries": 462, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844708, "oldest_key_time": 1769844708, "file_creation_time": 1769844765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 7374 microseconds, and 4142 cpu microseconds.
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:45.765744) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 1049459 bytes OK
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:45.765764) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:45.767917) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:45.767958) EVENT_LOG_v1 {"time_micros": 1769844765767954, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:45.767973) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1602212, prev total WAL file size 1602212, number of live WAL files 2.
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:45.768353) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(1024KB)], [42(8293KB)]
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844765768395, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 9542306, "oldest_snapshot_seqno": -1}
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 4711 keys, 7504661 bytes, temperature: kUnknown
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844765830097, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 7504661, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7474037, "index_size": 17714, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11845, "raw_key_size": 118053, "raw_average_key_size": 25, "raw_value_size": 7389620, "raw_average_value_size": 1568, "num_data_blocks": 726, "num_entries": 4711, "num_filter_entries": 4711, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769844765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:45.830357) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7504661 bytes
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:45.831690) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.4 rd, 121.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 8.1 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(16.2) write-amplify(7.2) OK, records in: 5226, records dropped: 515 output_compression: NoCompression
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:45.831706) EVENT_LOG_v1 {"time_micros": 1769844765831698, "job": 24, "event": "compaction_finished", "compaction_time_micros": 61811, "compaction_time_cpu_micros": 18349, "output_level": 6, "num_output_files": 1, "total_output_size": 7504661, "num_input_records": 5226, "num_output_records": 4711, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844765831936, "job": 24, "event": "table_file_deletion", "file_number": 44}
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844765832702, "job": 24, "event": "table_file_deletion", "file_number": 42}
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:45.768267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:45.832760) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:45.832765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:45.832766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:45.832768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:32:45 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:45.832770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:32:45 compute-2 nova_compute[226829]: 2026-01-31 07:32:45.859 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:46 compute-2 sudo[231850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:32:46 compute-2 sudo[231850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:46 compute-2 sudo[231850]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:47 compute-2 sudo[231875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:32:47 compute-2 sudo[231875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:47 compute-2 sudo[231875]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:32:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:47.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:32:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:47.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:47 compute-2 nova_compute[226829]: 2026-01-31 07:32:47.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:32:47 compute-2 ceph-mon[77282]: pgmap v956: 305 pgs: 305 active+clean; 202 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Jan 31 07:32:48 compute-2 nova_compute[226829]: 2026-01-31 07:32:48.296 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:48 compute-2 nova_compute[226829]: 2026-01-31 07:32:48.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:32:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:49.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.166 226833 DEBUG oslo_concurrency.lockutils [None req-728c1467-59dc-4e3c-aaf9-25f7b0941720 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Acquiring lock "329b7a5b-02d3-4d71-b5a0-de26eba8e0f5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.167 226833 DEBUG oslo_concurrency.lockutils [None req-728c1467-59dc-4e3c-aaf9-25f7b0941720 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "329b7a5b-02d3-4d71-b5a0-de26eba8e0f5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.167 226833 DEBUG oslo_concurrency.lockutils [None req-728c1467-59dc-4e3c-aaf9-25f7b0941720 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Acquiring lock "329b7a5b-02d3-4d71-b5a0-de26eba8e0f5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.168 226833 DEBUG oslo_concurrency.lockutils [None req-728c1467-59dc-4e3c-aaf9-25f7b0941720 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "329b7a5b-02d3-4d71-b5a0-de26eba8e0f5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.168 226833 DEBUG oslo_concurrency.lockutils [None req-728c1467-59dc-4e3c-aaf9-25f7b0941720 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "329b7a5b-02d3-4d71-b5a0-de26eba8e0f5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.170 226833 INFO nova.compute.manager [None req-728c1467-59dc-4e3c-aaf9-25f7b0941720 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Terminating instance
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.172 226833 DEBUG nova.compute.manager [None req-728c1467-59dc-4e3c-aaf9-25f7b0941720 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:32:49 compute-2 kernel: tape790ca88-3f (unregistering): left promiscuous mode
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.261 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:49 compute-2 NetworkManager[48999]: <info>  [1769844769.2629] device (tape790ca88-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:32:49 compute-2 ovn_controller[133834]: 2026-01-31T07:32:49Z|00045|binding|INFO|Releasing lport e790ca88-3f34-4710-ab34-8a5e963834de from this chassis (sb_readonly=0)
Jan 31 07:32:49 compute-2 ovn_controller[133834]: 2026-01-31T07:32:49Z|00046|binding|INFO|Setting lport e790ca88-3f34-4710-ab34-8a5e963834de down in Southbound
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.270 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:49 compute-2 ovn_controller[133834]: 2026-01-31T07:32:49Z|00047|binding|INFO|Removing iface tape790ca88-3f ovn-installed in OVS
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.272 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.277 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:49.299 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:4d:f9 10.100.0.13'], port_security=['fa:16:3e:36:4d:f9 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '329b7a5b-02d3-4d71-b5a0-de26eba8e0f5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c63b50b2-1358-4cfc-8846-5b8785b4f656', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '44b45fc9f5584f0ca482be3aa129958e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'afa316ed-24f5-4171-ab84-996dd120cd11', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.207'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a2fa1a5-5047-42e0-b079-f093555fa913, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=e790ca88-3f34-4710-ab34-8a5e963834de) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:32:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:49.303 143841 INFO neutron.agent.ovn.metadata.agent [-] Port e790ca88-3f34-4710-ab34-8a5e963834de in datapath c63b50b2-1358-4cfc-8846-5b8785b4f656 unbound from our chassis
Jan 31 07:32:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:49.307 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c63b50b2-1358-4cfc-8846-5b8785b4f656, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:32:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:49.311 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[836cffc6-fb70-430a-af55-3ddc27794a6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:49.312 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656 namespace which is not needed anymore
Jan 31 07:32:49 compute-2 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000006.scope: Deactivated successfully.
Jan 31 07:32:49 compute-2 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000006.scope: Consumed 15.159s CPU time.
Jan 31 07:32:49 compute-2 systemd-machined[195142]: Machine qemu-2-instance-00000006 terminated.
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.408 226833 INFO nova.virt.libvirt.driver [-] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Instance destroyed successfully.
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.409 226833 DEBUG nova.objects.instance [None req-728c1467-59dc-4e3c-aaf9-25f7b0941720 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lazy-loading 'resources' on Instance uuid 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.428 226833 DEBUG nova.virt.libvirt.vif [None req-728c1467-59dc-4e3c-aaf9-25f7b0941720 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:32:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-39512070',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-39512070',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-39512070',id=6,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDxTAQcLwGUMVPW0FQMEh2WkiJiP4UW6jyeXpyYN49oJPb9d8Ollm4RLXjbgyBtbDOEel5iqcdq3OwrhOFKp5uzFUPyPJ5dQoEavamFkLA01W1IRHGw1oCDni1Hm+Ki2Jw==',key_name='tempest-keypair-338823368',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:32:24Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='44b45fc9f5584f0ca482be3aa129958e',ramdisk_id='',reservation_id='r-0tibxui6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-581936852',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-581936852-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:32:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='504b628b5b8b46f9a0fce37f63d8492e',uuid=329b7a5b-02d3-4d71-b5a0-de26eba8e0f5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e790ca88-3f34-4710-ab34-8a5e963834de", "address": "fa:16:3e:36:4d:f9", "network": {"id": "c63b50b2-1358-4cfc-8846-5b8785b4f656", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-749344601-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44b45fc9f5584f0ca482be3aa129958e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape790ca88-3f", "ovs_interfaceid": "e790ca88-3f34-4710-ab34-8a5e963834de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.429 226833 DEBUG nova.network.os_vif_util [None req-728c1467-59dc-4e3c-aaf9-25f7b0941720 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Converting VIF {"id": "e790ca88-3f34-4710-ab34-8a5e963834de", "address": "fa:16:3e:36:4d:f9", "network": {"id": "c63b50b2-1358-4cfc-8846-5b8785b4f656", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-749344601-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44b45fc9f5584f0ca482be3aa129958e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape790ca88-3f", "ovs_interfaceid": "e790ca88-3f34-4710-ab34-8a5e963834de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.429 226833 DEBUG nova.network.os_vif_util [None req-728c1467-59dc-4e3c-aaf9-25f7b0941720 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:36:4d:f9,bridge_name='br-int',has_traffic_filtering=True,id=e790ca88-3f34-4710-ab34-8a5e963834de,network=Network(c63b50b2-1358-4cfc-8846-5b8785b4f656),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape790ca88-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.430 226833 DEBUG os_vif [None req-728c1467-59dc-4e3c-aaf9-25f7b0941720 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:4d:f9,bridge_name='br-int',has_traffic_filtering=True,id=e790ca88-3f34-4710-ab34-8a5e963834de,network=Network(c63b50b2-1358-4cfc-8846-5b8785b4f656),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape790ca88-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.433 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.433 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape790ca88-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:32:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:49.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.473 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.475 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.478 226833 INFO os_vif [None req-728c1467-59dc-4e3c-aaf9-25f7b0941720 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:4d:f9,bridge_name='br-int',has_traffic_filtering=True,id=e790ca88-3f34-4710-ab34-8a5e963834de,network=Network(c63b50b2-1358-4cfc-8846-5b8785b4f656),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape790ca88-3f')
Jan 31 07:32:49 compute-2 neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656[231706]: [NOTICE]   (231710) : haproxy version is 2.8.14-c23fe91
Jan 31 07:32:49 compute-2 neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656[231706]: [NOTICE]   (231710) : path to executable is /usr/sbin/haproxy
Jan 31 07:32:49 compute-2 neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656[231706]: [WARNING]  (231710) : Exiting Master process...
Jan 31 07:32:49 compute-2 neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656[231706]: [ALERT]    (231710) : Current worker (231712) exited with code 143 (Terminated)
Jan 31 07:32:49 compute-2 neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656[231706]: [WARNING]  (231710) : All workers exited. Exiting... (0)
Jan 31 07:32:49 compute-2 systemd[1]: libpod-3e3743aa46579aaf713342897cbb470ac3e6ab42159d4fd504b2478447f65bb1.scope: Deactivated successfully.
Jan 31 07:32:49 compute-2 podman[231929]: 2026-01-31 07:32:49.494062687 +0000 UTC m=+0.089468059 container died 3e3743aa46579aaf713342897cbb470ac3e6ab42159d4fd504b2478447f65bb1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:32:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:32:49 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3e3743aa46579aaf713342897cbb470ac3e6ab42159d4fd504b2478447f65bb1-userdata-shm.mount: Deactivated successfully.
Jan 31 07:32:49 compute-2 systemd[1]: var-lib-containers-storage-overlay-735fcfb965dd3ee01ad58fb318eccdcb727cda5e5128f50e9847c30e1de3b834-merged.mount: Deactivated successfully.
Jan 31 07:32:49 compute-2 podman[231929]: 2026-01-31 07:32:49.523746123 +0000 UTC m=+0.119151505 container cleanup 3e3743aa46579aaf713342897cbb470ac3e6ab42159d4fd504b2478447f65bb1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 07:32:49 compute-2 systemd[1]: libpod-conmon-3e3743aa46579aaf713342897cbb470ac3e6ab42159d4fd504b2478447f65bb1.scope: Deactivated successfully.
Jan 31 07:32:49 compute-2 podman[231983]: 2026-01-31 07:32:49.571974048 +0000 UTC m=+0.035148807 container remove 3e3743aa46579aaf713342897cbb470ac3e6ab42159d4fd504b2478447f65bb1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 07:32:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:49.576 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5a9d1c8b-d45f-4653-b1a9-aac0861d238e]: (4, ('Sat Jan 31 07:32:49 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656 (3e3743aa46579aaf713342897cbb470ac3e6ab42159d4fd504b2478447f65bb1)\n3e3743aa46579aaf713342897cbb470ac3e6ab42159d4fd504b2478447f65bb1\nSat Jan 31 07:32:49 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656 (3e3743aa46579aaf713342897cbb470ac3e6ab42159d4fd504b2478447f65bb1)\n3e3743aa46579aaf713342897cbb470ac3e6ab42159d4fd504b2478447f65bb1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:49.578 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2f821b29-9951-4cb7-933d-4793612ba02e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:49.579 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc63b50b2-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.580 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:49 compute-2 kernel: tapc63b50b2-10: left promiscuous mode
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.583 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:49.586 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6caf2023-6cd1-4ec6-920d-817535bb1bf8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.591 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:49.599 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[04d1259d-d8fa-4bbd-b240-1fde5284b492]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:49.600 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[83be29cf-22d7-4692-b121-d17c14b41978]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:49.615 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[536d20e5-c060-4944-b10b-32ed97aaa236]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499568, 'reachable_time': 15372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 231996, 'error': None, 'target': 'ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:49.617 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c63b50b2-1358-4cfc-8846-5b8785b4f656 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:32:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:32:49.618 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[e06cdbec-a4a5-4d27-a47c-e82cecd27950]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:49 compute-2 systemd[1]: run-netns-ovnmeta\x2dc63b50b2\x2d1358\x2d4cfc\x2d8846\x2d5b8785b4f656.mount: Deactivated successfully.
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.747 226833 DEBUG nova.compute.manager [req-39527e18-cb54-4f04-b932-37289d100b5b req-9969b350-5f89-4878-b0bd-df59ee6b6c42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Received event network-vif-unplugged-e790ca88-3f34-4710-ab34-8a5e963834de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.748 226833 DEBUG oslo_concurrency.lockutils [req-39527e18-cb54-4f04-b932-37289d100b5b req-9969b350-5f89-4878-b0bd-df59ee6b6c42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "329b7a5b-02d3-4d71-b5a0-de26eba8e0f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.748 226833 DEBUG oslo_concurrency.lockutils [req-39527e18-cb54-4f04-b932-37289d100b5b req-9969b350-5f89-4878-b0bd-df59ee6b6c42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "329b7a5b-02d3-4d71-b5a0-de26eba8e0f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.748 226833 DEBUG oslo_concurrency.lockutils [req-39527e18-cb54-4f04-b932-37289d100b5b req-9969b350-5f89-4878-b0bd-df59ee6b6c42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "329b7a5b-02d3-4d71-b5a0-de26eba8e0f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.749 226833 DEBUG nova.compute.manager [req-39527e18-cb54-4f04-b932-37289d100b5b req-9969b350-5f89-4878-b0bd-df59ee6b6c42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] No waiting events found dispatching network-vif-unplugged-e790ca88-3f34-4710-ab34-8a5e963834de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:32:49 compute-2 nova_compute[226829]: 2026-01-31 07:32:49.749 226833 DEBUG nova.compute.manager [req-39527e18-cb54-4f04-b932-37289d100b5b req-9969b350-5f89-4878-b0bd-df59ee6b6c42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Received event network-vif-unplugged-e790ca88-3f34-4710-ab34-8a5e963834de for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:32:49 compute-2 ceph-mon[77282]: pgmap v957: 305 pgs: 305 active+clean; 202 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Jan 31 07:32:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1117879748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/467322244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:50 compute-2 nova_compute[226829]: 2026-01-31 07:32:50.173 226833 INFO nova.virt.libvirt.driver [None req-728c1467-59dc-4e3c-aaf9-25f7b0941720 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Deleting instance files /var/lib/nova/instances/329b7a5b-02d3-4d71-b5a0-de26eba8e0f5_del
Jan 31 07:32:50 compute-2 nova_compute[226829]: 2026-01-31 07:32:50.173 226833 INFO nova.virt.libvirt.driver [None req-728c1467-59dc-4e3c-aaf9-25f7b0941720 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Deletion of /var/lib/nova/instances/329b7a5b-02d3-4d71-b5a0-de26eba8e0f5_del complete
Jan 31 07:32:50 compute-2 nova_compute[226829]: 2026-01-31 07:32:50.251 226833 INFO nova.compute.manager [None req-728c1467-59dc-4e3c-aaf9-25f7b0941720 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Took 1.08 seconds to destroy the instance on the hypervisor.
Jan 31 07:32:50 compute-2 nova_compute[226829]: 2026-01-31 07:32:50.252 226833 DEBUG oslo.service.loopingcall [None req-728c1467-59dc-4e3c-aaf9-25f7b0941720 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:32:50 compute-2 nova_compute[226829]: 2026-01-31 07:32:50.252 226833 DEBUG nova.compute.manager [-] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:32:50 compute-2 nova_compute[226829]: 2026-01-31 07:32:50.252 226833 DEBUG nova.network.neutron [-] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:32:50 compute-2 nova_compute[226829]: 2026-01-31 07:32:50.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:32:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:51.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:51.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:51 compute-2 nova_compute[226829]: 2026-01-31 07:32:51.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:32:51 compute-2 nova_compute[226829]: 2026-01-31 07:32:51.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:32:51 compute-2 nova_compute[226829]: 2026-01-31 07:32:51.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:32:51 compute-2 nova_compute[226829]: 2026-01-31 07:32:51.579 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 31 07:32:51 compute-2 nova_compute[226829]: 2026-01-31 07:32:51.580 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:32:51 compute-2 nova_compute[226829]: 2026-01-31 07:32:51.580 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:32:51 compute-2 nova_compute[226829]: 2026-01-31 07:32:51.581 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:32:51 compute-2 nova_compute[226829]: 2026-01-31 07:32:51.581 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:32:51 compute-2 nova_compute[226829]: 2026-01-31 07:32:51.581 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:32:51 compute-2 nova_compute[226829]: 2026-01-31 07:32:51.761 226833 DEBUG nova.network.neutron [-] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:32:51 compute-2 nova_compute[226829]: 2026-01-31 07:32:51.766 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:51 compute-2 nova_compute[226829]: 2026-01-31 07:32:51.766 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:51 compute-2 nova_compute[226829]: 2026-01-31 07:32:51.766 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:51 compute-2 nova_compute[226829]: 2026-01-31 07:32:51.766 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:32:51 compute-2 nova_compute[226829]: 2026-01-31 07:32:51.767 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:32:51 compute-2 nova_compute[226829]: 2026-01-31 07:32:51.799 226833 INFO nova.compute.manager [-] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Took 1.55 seconds to deallocate network for instance.
Jan 31 07:32:51 compute-2 ceph-mon[77282]: pgmap v958: 305 pgs: 305 active+clean; 202 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 380 KiB/s rd, 1.9 MiB/s wr, 67 op/s
Jan 31 07:32:51 compute-2 nova_compute[226829]: 2026-01-31 07:32:51.909 226833 DEBUG oslo_concurrency.lockutils [None req-728c1467-59dc-4e3c-aaf9-25f7b0941720 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:51 compute-2 nova_compute[226829]: 2026-01-31 07:32:51.910 226833 DEBUG oslo_concurrency.lockutils [None req-728c1467-59dc-4e3c-aaf9-25f7b0941720 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:51 compute-2 nova_compute[226829]: 2026-01-31 07:32:51.987 226833 DEBUG oslo_concurrency.processutils [None req-728c1467-59dc-4e3c-aaf9-25f7b0941720 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:32:52 compute-2 nova_compute[226829]: 2026-01-31 07:32:52.077 226833 DEBUG nova.compute.manager [req-c5ecc106-245f-46b5-8e8f-e7901a5f1b7a req-f69d60a4-5689-474c-bd45-c1cfb1a80a21 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Received event network-vif-deleted-e790ca88-3f34-4710-ab34-8a5e963834de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:32:52 compute-2 nova_compute[226829]: 2026-01-31 07:32:52.135 226833 DEBUG nova.compute.manager [req-cedf21af-0229-4ac0-81db-7c402c560a84 req-4bd57ee6-4d17-4014-b8a4-4e3673cf1dfd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Received event network-vif-plugged-e790ca88-3f34-4710-ab34-8a5e963834de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:32:52 compute-2 nova_compute[226829]: 2026-01-31 07:32:52.136 226833 DEBUG oslo_concurrency.lockutils [req-cedf21af-0229-4ac0-81db-7c402c560a84 req-4bd57ee6-4d17-4014-b8a4-4e3673cf1dfd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "329b7a5b-02d3-4d71-b5a0-de26eba8e0f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:52 compute-2 nova_compute[226829]: 2026-01-31 07:32:52.136 226833 DEBUG oslo_concurrency.lockutils [req-cedf21af-0229-4ac0-81db-7c402c560a84 req-4bd57ee6-4d17-4014-b8a4-4e3673cf1dfd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "329b7a5b-02d3-4d71-b5a0-de26eba8e0f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:52 compute-2 nova_compute[226829]: 2026-01-31 07:32:52.137 226833 DEBUG oslo_concurrency.lockutils [req-cedf21af-0229-4ac0-81db-7c402c560a84 req-4bd57ee6-4d17-4014-b8a4-4e3673cf1dfd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "329b7a5b-02d3-4d71-b5a0-de26eba8e0f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:52 compute-2 nova_compute[226829]: 2026-01-31 07:32:52.137 226833 DEBUG nova.compute.manager [req-cedf21af-0229-4ac0-81db-7c402c560a84 req-4bd57ee6-4d17-4014-b8a4-4e3673cf1dfd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] No waiting events found dispatching network-vif-plugged-e790ca88-3f34-4710-ab34-8a5e963834de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:32:52 compute-2 nova_compute[226829]: 2026-01-31 07:32:52.138 226833 WARNING nova.compute.manager [req-cedf21af-0229-4ac0-81db-7c402c560a84 req-4bd57ee6-4d17-4014-b8a4-4e3673cf1dfd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Received unexpected event network-vif-plugged-e790ca88-3f34-4710-ab34-8a5e963834de for instance with vm_state deleted and task_state None.
Jan 31 07:32:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:32:52 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2472366945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:52 compute-2 nova_compute[226829]: 2026-01-31 07:32:52.203 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:32:52 compute-2 nova_compute[226829]: 2026-01-31 07:32:52.386 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:32:52 compute-2 nova_compute[226829]: 2026-01-31 07:32:52.387 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4938MB free_disk=20.897197723388672GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:32:52 compute-2 nova_compute[226829]: 2026-01-31 07:32:52.387 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:32:52 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2348250006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:52 compute-2 nova_compute[226829]: 2026-01-31 07:32:52.448 226833 DEBUG oslo_concurrency.processutils [None req-728c1467-59dc-4e3c-aaf9-25f7b0941720 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:32:52 compute-2 nova_compute[226829]: 2026-01-31 07:32:52.453 226833 DEBUG nova.compute.provider_tree [None req-728c1467-59dc-4e3c-aaf9-25f7b0941720 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:32:52 compute-2 nova_compute[226829]: 2026-01-31 07:32:52.477 226833 DEBUG nova.scheduler.client.report [None req-728c1467-59dc-4e3c-aaf9-25f7b0941720 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:32:52 compute-2 nova_compute[226829]: 2026-01-31 07:32:52.504 226833 DEBUG oslo_concurrency.lockutils [None req-728c1467-59dc-4e3c-aaf9-25f7b0941720 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:52 compute-2 nova_compute[226829]: 2026-01-31 07:32:52.507 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:52 compute-2 nova_compute[226829]: 2026-01-31 07:32:52.566 226833 INFO nova.scheduler.client.report [None req-728c1467-59dc-4e3c-aaf9-25f7b0941720 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Deleted allocations for instance 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5
Jan 31 07:32:52 compute-2 nova_compute[226829]: 2026-01-31 07:32:52.612 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:32:52 compute-2 nova_compute[226829]: 2026-01-31 07:32:52.613 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:32:52 compute-2 nova_compute[226829]: 2026-01-31 07:32:52.643 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:32:52 compute-2 nova_compute[226829]: 2026-01-31 07:32:52.662 226833 DEBUG oslo_concurrency.lockutils [None req-728c1467-59dc-4e3c-aaf9-25f7b0941720 504b628b5b8b46f9a0fce37f63d8492e 44b45fc9f5584f0ca482be3aa129958e - - default default] Lock "329b7a5b-02d3-4d71-b5a0-de26eba8e0f5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.496s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/198377444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2472366945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2348250006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2700459329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:32:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/399060722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:53 compute-2 nova_compute[226829]: 2026-01-31 07:32:53.078 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:32:53 compute-2 nova_compute[226829]: 2026-01-31 07:32:53.084 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:32:53 compute-2 nova_compute[226829]: 2026-01-31 07:32:53.146 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:32:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:53.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:53 compute-2 nova_compute[226829]: 2026-01-31 07:32:53.188 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:32:53 compute-2 nova_compute[226829]: 2026-01-31 07:32:53.189 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:53 compute-2 podman[232066]: 2026-01-31 07:32:53.23001909 +0000 UTC m=+0.101917582 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 31 07:32:53 compute-2 nova_compute[226829]: 2026-01-31 07:32:53.298 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:53.436696) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844773436739, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 332, "num_deletes": 256, "total_data_size": 221867, "memory_usage": 230024, "flush_reason": "Manual Compaction"}
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844773441688, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 146562, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22911, "largest_seqno": 23238, "table_properties": {"data_size": 144471, "index_size": 255, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 4893, "raw_average_key_size": 16, "raw_value_size": 140335, "raw_average_value_size": 483, "num_data_blocks": 12, "num_entries": 290, "num_filter_entries": 290, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844766, "oldest_key_time": 1769844766, "file_creation_time": 1769844773, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 5051 microseconds, and 1221 cpu microseconds.
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:53.441745) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 146562 bytes OK
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:53.441766) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:53.442906) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:53.442926) EVENT_LOG_v1 {"time_micros": 1769844773442919, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:53.442946) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 219515, prev total WAL file size 219515, number of live WAL files 2.
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:53.443394) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(143KB)], [45(7328KB)]
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844773443482, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 7651223, "oldest_snapshot_seqno": -1}
Jan 31 07:32:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:32:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:53.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 4481 keys, 7530903 bytes, temperature: kUnknown
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844773489344, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 7530903, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7501133, "index_size": 17436, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11269, "raw_key_size": 114487, "raw_average_key_size": 25, "raw_value_size": 7420035, "raw_average_value_size": 1655, "num_data_blocks": 710, "num_entries": 4481, "num_filter_entries": 4481, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769844773, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:53.489582) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 7530903 bytes
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:53.491021) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.5 rd, 163.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 7.2 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(103.6) write-amplify(51.4) OK, records in: 5001, records dropped: 520 output_compression: NoCompression
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:53.491066) EVENT_LOG_v1 {"time_micros": 1769844773491056, "job": 26, "event": "compaction_finished", "compaction_time_micros": 45941, "compaction_time_cpu_micros": 27395, "output_level": 6, "num_output_files": 1, "total_output_size": 7530903, "num_input_records": 5001, "num_output_records": 4481, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844773491189, "job": 26, "event": "table_file_deletion", "file_number": 47}
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844773491959, "job": 26, "event": "table_file_deletion", "file_number": 45}
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:53.443249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:53.492115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:53.492123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:53.492128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:53.492132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:32:53 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:32:53.492135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:32:54 compute-2 ceph-mon[77282]: pgmap v959: 305 pgs: 305 active+clean; 164 MiB data, 324 MiB used, 21 GiB / 21 GiB avail; 217 KiB/s rd, 1.1 MiB/s wr, 42 op/s
Jan 31 07:32:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/399060722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:54 compute-2 nova_compute[226829]: 2026-01-31 07:32:54.131 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:54 compute-2 nova_compute[226829]: 2026-01-31 07:32:54.222 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:54 compute-2 nova_compute[226829]: 2026-01-31 07:32:54.473 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:32:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:55.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:32:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:55.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:32:56 compute-2 nova_compute[226829]: 2026-01-31 07:32:56.097 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:32:56 compute-2 ceph-mon[77282]: pgmap v960: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 463 KiB/s rd, 27 KiB/s wr, 51 op/s
Jan 31 07:32:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:57.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:32:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:57.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:32:58 compute-2 ceph-mon[77282]: pgmap v961: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 11 KiB/s wr, 46 op/s
Jan 31 07:32:58 compute-2 nova_compute[226829]: 2026-01-31 07:32:58.300 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:59.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:59 compute-2 ceph-mon[77282]: pgmap v962: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 9.9 KiB/s wr, 49 op/s
Jan 31 07:32:59 compute-2 nova_compute[226829]: 2026-01-31 07:32:59.476 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:32:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:32:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:59.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:32:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:33:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:01.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:33:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:01.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:33:01 compute-2 ceph-mon[77282]: pgmap v963: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 9.9 KiB/s wr, 49 op/s
Jan 31 07:33:02 compute-2 podman[232099]: 2026-01-31 07:33:02.180886457 +0000 UTC m=+0.068092522 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 07:33:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:33:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:03.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:33:03 compute-2 nova_compute[226829]: 2026-01-31 07:33:03.304 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:03.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:03 compute-2 ceph-mon[77282]: pgmap v964: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 9.9 KiB/s wr, 49 op/s
Jan 31 07:33:04 compute-2 nova_compute[226829]: 2026-01-31 07:33:04.407 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844769.4067008, 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:33:04 compute-2 nova_compute[226829]: 2026-01-31 07:33:04.408 226833 INFO nova.compute.manager [-] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] VM Stopped (Lifecycle Event)
Jan 31 07:33:04 compute-2 nova_compute[226829]: 2026-01-31 07:33:04.479 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:33:04 compute-2 nova_compute[226829]: 2026-01-31 07:33:04.583 226833 DEBUG nova.compute.manager [None req-25af8389-8812-45cb-9879-c0224e26cdf1 - - - - - -] [instance: 329b7a5b-02d3-4d71-b5a0-de26eba8e0f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:33:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:33:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:05.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:33:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:33:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:05.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:33:05 compute-2 ceph-mon[77282]: pgmap v965: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 9.4 KiB/s wr, 37 op/s
Jan 31 07:33:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1858440281' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:06.836 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:06.837 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:06.837 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:07 compute-2 sudo[232120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:33:07 compute-2 sudo[232120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:07 compute-2 sudo[232120]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:07 compute-2 sudo[232145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:33:07 compute-2 sudo[232145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:07 compute-2 sudo[232145]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:07.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:07.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:08 compute-2 ceph-mon[77282]: pgmap v966: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 682 B/s wr, 5 op/s
Jan 31 07:33:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/39252063' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/802696960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:08 compute-2 nova_compute[226829]: 2026-01-31 07:33:08.305 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:33:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:09.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:33:09 compute-2 nova_compute[226829]: 2026-01-31 07:33:09.482 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:09.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:33:10 compute-2 ceph-mon[77282]: pgmap v967: 305 pgs: 305 active+clean; 161 MiB data, 327 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 2.1 MiB/s wr, 29 op/s
Jan 31 07:33:10 compute-2 nova_compute[226829]: 2026-01-31 07:33:10.323 226833 DEBUG oslo_concurrency.lockutils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "4a1447d0-8c0f-428b-b815-978ebbd1b48a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:10 compute-2 nova_compute[226829]: 2026-01-31 07:33:10.324 226833 DEBUG oslo_concurrency.lockutils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "4a1447d0-8c0f-428b-b815-978ebbd1b48a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:10 compute-2 nova_compute[226829]: 2026-01-31 07:33:10.359 226833 DEBUG nova.compute.manager [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:33:10 compute-2 nova_compute[226829]: 2026-01-31 07:33:10.461 226833 DEBUG oslo_concurrency.lockutils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:10 compute-2 nova_compute[226829]: 2026-01-31 07:33:10.461 226833 DEBUG oslo_concurrency.lockutils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:10 compute-2 nova_compute[226829]: 2026-01-31 07:33:10.468 226833 DEBUG nova.virt.hardware [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:33:10 compute-2 nova_compute[226829]: 2026-01-31 07:33:10.469 226833 INFO nova.compute.claims [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:33:10 compute-2 nova_compute[226829]: 2026-01-31 07:33:10.652 226833 DEBUG oslo_concurrency.processutils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:33:11 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2606589484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:11 compute-2 nova_compute[226829]: 2026-01-31 07:33:11.066 226833 DEBUG oslo_concurrency.processutils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:11 compute-2 nova_compute[226829]: 2026-01-31 07:33:11.073 226833 DEBUG nova.compute.provider_tree [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:33:11 compute-2 nova_compute[226829]: 2026-01-31 07:33:11.097 226833 DEBUG nova.scheduler.client.report [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:33:11 compute-2 nova_compute[226829]: 2026-01-31 07:33:11.129 226833 DEBUG oslo_concurrency.lockutils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:11 compute-2 nova_compute[226829]: 2026-01-31 07:33:11.130 226833 DEBUG nova.compute.manager [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:33:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:33:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:11.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:33:11 compute-2 nova_compute[226829]: 2026-01-31 07:33:11.192 226833 DEBUG nova.compute.manager [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 31 07:33:11 compute-2 nova_compute[226829]: 2026-01-31 07:33:11.213 226833 INFO nova.virt.libvirt.driver [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:33:11 compute-2 nova_compute[226829]: 2026-01-31 07:33:11.234 226833 DEBUG nova.compute.manager [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:33:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2606589484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:11 compute-2 nova_compute[226829]: 2026-01-31 07:33:11.368 226833 DEBUG nova.compute.manager [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:33:11 compute-2 nova_compute[226829]: 2026-01-31 07:33:11.371 226833 DEBUG nova.virt.libvirt.driver [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:33:11 compute-2 nova_compute[226829]: 2026-01-31 07:33:11.371 226833 INFO nova.virt.libvirt.driver [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Creating image(s)
Jan 31 07:33:11 compute-2 nova_compute[226829]: 2026-01-31 07:33:11.408 226833 DEBUG nova.storage.rbd_utils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] rbd image 4a1447d0-8c0f-428b-b815-978ebbd1b48a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:33:11 compute-2 nova_compute[226829]: 2026-01-31 07:33:11.445 226833 DEBUG nova.storage.rbd_utils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] rbd image 4a1447d0-8c0f-428b-b815-978ebbd1b48a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:33:11 compute-2 nova_compute[226829]: 2026-01-31 07:33:11.483 226833 DEBUG nova.storage.rbd_utils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] rbd image 4a1447d0-8c0f-428b-b815-978ebbd1b48a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:33:11 compute-2 nova_compute[226829]: 2026-01-31 07:33:11.487 226833 DEBUG oslo_concurrency.processutils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:33:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:11.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:33:11 compute-2 nova_compute[226829]: 2026-01-31 07:33:11.585 226833 DEBUG oslo_concurrency.processutils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:11 compute-2 nova_compute[226829]: 2026-01-31 07:33:11.586 226833 DEBUG oslo_concurrency.lockutils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:11 compute-2 nova_compute[226829]: 2026-01-31 07:33:11.586 226833 DEBUG oslo_concurrency.lockutils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:11 compute-2 nova_compute[226829]: 2026-01-31 07:33:11.587 226833 DEBUG oslo_concurrency.lockutils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:11 compute-2 nova_compute[226829]: 2026-01-31 07:33:11.616 226833 DEBUG nova.storage.rbd_utils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] rbd image 4a1447d0-8c0f-428b-b815-978ebbd1b48a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:33:11 compute-2 nova_compute[226829]: 2026-01-31 07:33:11.621 226833 DEBUG oslo_concurrency.processutils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 4a1447d0-8c0f-428b-b815-978ebbd1b48a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:12 compute-2 ceph-mon[77282]: pgmap v968: 305 pgs: 305 active+clean; 181 MiB data, 337 MiB used, 21 GiB / 21 GiB avail; 57 KiB/s rd, 3.0 MiB/s wr, 47 op/s
Jan 31 07:33:12 compute-2 nova_compute[226829]: 2026-01-31 07:33:12.413 226833 DEBUG oslo_concurrency.processutils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 4a1447d0-8c0f-428b-b815-978ebbd1b48a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.792s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:12 compute-2 nova_compute[226829]: 2026-01-31 07:33:12.504 226833 DEBUG nova.storage.rbd_utils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] resizing rbd image 4a1447d0-8c0f-428b-b815-978ebbd1b48a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:33:12 compute-2 nova_compute[226829]: 2026-01-31 07:33:12.749 226833 DEBUG nova.objects.instance [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lazy-loading 'migration_context' on Instance uuid 4a1447d0-8c0f-428b-b815-978ebbd1b48a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:33:12 compute-2 nova_compute[226829]: 2026-01-31 07:33:12.770 226833 DEBUG nova.virt.libvirt.driver [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:33:12 compute-2 nova_compute[226829]: 2026-01-31 07:33:12.771 226833 DEBUG nova.virt.libvirt.driver [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Ensure instance console log exists: /var/lib/nova/instances/4a1447d0-8c0f-428b-b815-978ebbd1b48a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:33:12 compute-2 nova_compute[226829]: 2026-01-31 07:33:12.771 226833 DEBUG oslo_concurrency.lockutils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:12 compute-2 nova_compute[226829]: 2026-01-31 07:33:12.772 226833 DEBUG oslo_concurrency.lockutils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:12 compute-2 nova_compute[226829]: 2026-01-31 07:33:12.772 226833 DEBUG oslo_concurrency.lockutils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:12 compute-2 nova_compute[226829]: 2026-01-31 07:33:12.774 226833 DEBUG nova.virt.libvirt.driver [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:33:12 compute-2 nova_compute[226829]: 2026-01-31 07:33:12.779 226833 WARNING nova.virt.libvirt.driver [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:33:12 compute-2 nova_compute[226829]: 2026-01-31 07:33:12.783 226833 DEBUG nova.virt.libvirt.host [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:33:12 compute-2 nova_compute[226829]: 2026-01-31 07:33:12.784 226833 DEBUG nova.virt.libvirt.host [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:33:12 compute-2 nova_compute[226829]: 2026-01-31 07:33:12.787 226833 DEBUG nova.virt.libvirt.host [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:33:12 compute-2 nova_compute[226829]: 2026-01-31 07:33:12.788 226833 DEBUG nova.virt.libvirt.host [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:33:12 compute-2 nova_compute[226829]: 2026-01-31 07:33:12.789 226833 DEBUG nova.virt.libvirt.driver [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:33:12 compute-2 nova_compute[226829]: 2026-01-31 07:33:12.789 226833 DEBUG nova.virt.hardware [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:33:12 compute-2 nova_compute[226829]: 2026-01-31 07:33:12.790 226833 DEBUG nova.virt.hardware [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:33:12 compute-2 nova_compute[226829]: 2026-01-31 07:33:12.790 226833 DEBUG nova.virt.hardware [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:33:12 compute-2 nova_compute[226829]: 2026-01-31 07:33:12.791 226833 DEBUG nova.virt.hardware [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:33:12 compute-2 nova_compute[226829]: 2026-01-31 07:33:12.791 226833 DEBUG nova.virt.hardware [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:33:12 compute-2 nova_compute[226829]: 2026-01-31 07:33:12.791 226833 DEBUG nova.virt.hardware [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:33:12 compute-2 nova_compute[226829]: 2026-01-31 07:33:12.792 226833 DEBUG nova.virt.hardware [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:33:12 compute-2 nova_compute[226829]: 2026-01-31 07:33:12.792 226833 DEBUG nova.virt.hardware [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:33:12 compute-2 nova_compute[226829]: 2026-01-31 07:33:12.792 226833 DEBUG nova.virt.hardware [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:33:12 compute-2 nova_compute[226829]: 2026-01-31 07:33:12.793 226833 DEBUG nova.virt.hardware [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:33:12 compute-2 nova_compute[226829]: 2026-01-31 07:33:12.793 226833 DEBUG nova.virt.hardware [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:33:12 compute-2 nova_compute[226829]: 2026-01-31 07:33:12.796 226833 DEBUG oslo_concurrency.processutils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:33:13 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1621728046' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:13.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:13 compute-2 nova_compute[226829]: 2026-01-31 07:33:13.189 226833 DEBUG oslo_concurrency.processutils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.392s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:13 compute-2 nova_compute[226829]: 2026-01-31 07:33:13.219 226833 DEBUG nova.storage.rbd_utils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] rbd image 4a1447d0-8c0f-428b-b815-978ebbd1b48a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:33:13 compute-2 nova_compute[226829]: 2026-01-31 07:33:13.223 226833 DEBUG oslo_concurrency.processutils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:13 compute-2 nova_compute[226829]: 2026-01-31 07:33:13.307 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:13 compute-2 ceph-mon[77282]: pgmap v969: 305 pgs: 305 active+clean; 213 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 327 KiB/s rd, 3.6 MiB/s wr, 77 op/s
Jan 31 07:33:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1621728046' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:33:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:13.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:33:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:33:13 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4094781662' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:13 compute-2 nova_compute[226829]: 2026-01-31 07:33:13.662 226833 DEBUG oslo_concurrency.processutils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:13 compute-2 nova_compute[226829]: 2026-01-31 07:33:13.664 226833 DEBUG nova.objects.instance [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lazy-loading 'pci_devices' on Instance uuid 4a1447d0-8c0f-428b-b815-978ebbd1b48a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:33:13 compute-2 nova_compute[226829]: 2026-01-31 07:33:13.676 226833 DEBUG nova.virt.libvirt.driver [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:33:13 compute-2 nova_compute[226829]:   <uuid>4a1447d0-8c0f-428b-b815-978ebbd1b48a</uuid>
Jan 31 07:33:13 compute-2 nova_compute[226829]:   <name>instance-00000008</name>
Jan 31 07:33:13 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:33:13 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:33:13 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:33:13 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:       <nova:name>tempest-AutoAllocateNetworkTest-server-2026269817</nova:name>
Jan 31 07:33:13 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:33:12</nova:creationTime>
Jan 31 07:33:13 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:33:13 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:33:13 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:33:13 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:33:13 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:33:13 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:33:13 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:33:13 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:33:13 compute-2 nova_compute[226829]:         <nova:user uuid="38001f2ce5654228b098939fd9619d3e">tempest-AutoAllocateNetworkTest-1036306085-project-member</nova:user>
Jan 31 07:33:13 compute-2 nova_compute[226829]:         <nova:project uuid="ca84ce9280d74b4588f89bf679f563fa">tempest-AutoAllocateNetworkTest-1036306085</nova:project>
Jan 31 07:33:13 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:33:13 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:       <nova:ports/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:33:13 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:33:13 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <system>
Jan 31 07:33:13 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:33:13 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:33:13 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:33:13 compute-2 nova_compute[226829]:       <entry name="serial">4a1447d0-8c0f-428b-b815-978ebbd1b48a</entry>
Jan 31 07:33:13 compute-2 nova_compute[226829]:       <entry name="uuid">4a1447d0-8c0f-428b-b815-978ebbd1b48a</entry>
Jan 31 07:33:13 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     </system>
Jan 31 07:33:13 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:33:13 compute-2 nova_compute[226829]:   <os>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:   </os>
Jan 31 07:33:13 compute-2 nova_compute[226829]:   <features>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:   </features>
Jan 31 07:33:13 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:33:13 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:33:13 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:33:13 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/4a1447d0-8c0f-428b-b815-978ebbd1b48a_disk">
Jan 31 07:33:13 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:       </source>
Jan 31 07:33:13 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:33:13 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:33:13 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:33:13 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/4a1447d0-8c0f-428b-b815-978ebbd1b48a_disk.config">
Jan 31 07:33:13 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:       </source>
Jan 31 07:33:13 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:33:13 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:33:13 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:33:13 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/4a1447d0-8c0f-428b-b815-978ebbd1b48a/console.log" append="off"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <video>
Jan 31 07:33:13 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     </video>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:33:13 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:33:13 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:33:13 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:33:13 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:33:13 compute-2 nova_compute[226829]: </domain>
Jan 31 07:33:13 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:33:13 compute-2 nova_compute[226829]: 2026-01-31 07:33:13.728 226833 DEBUG nova.virt.libvirt.driver [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:33:13 compute-2 nova_compute[226829]: 2026-01-31 07:33:13.728 226833 DEBUG nova.virt.libvirt.driver [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:33:13 compute-2 nova_compute[226829]: 2026-01-31 07:33:13.729 226833 INFO nova.virt.libvirt.driver [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Using config drive
Jan 31 07:33:13 compute-2 nova_compute[226829]: 2026-01-31 07:33:13.755 226833 DEBUG nova.storage.rbd_utils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] rbd image 4a1447d0-8c0f-428b-b815-978ebbd1b48a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:33:14 compute-2 nova_compute[226829]: 2026-01-31 07:33:14.014 226833 INFO nova.virt.libvirt.driver [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Creating config drive at /var/lib/nova/instances/4a1447d0-8c0f-428b-b815-978ebbd1b48a/disk.config
Jan 31 07:33:14 compute-2 nova_compute[226829]: 2026-01-31 07:33:14.020 226833 DEBUG oslo_concurrency.processutils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4a1447d0-8c0f-428b-b815-978ebbd1b48a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpy1nppp8e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:14.021 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:33:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:14.023 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:33:14 compute-2 nova_compute[226829]: 2026-01-31 07:33:14.034 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:14 compute-2 nova_compute[226829]: 2026-01-31 07:33:14.141 226833 DEBUG oslo_concurrency.processutils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4a1447d0-8c0f-428b-b815-978ebbd1b48a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpy1nppp8e" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:14 compute-2 nova_compute[226829]: 2026-01-31 07:33:14.176 226833 DEBUG nova.storage.rbd_utils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] rbd image 4a1447d0-8c0f-428b-b815-978ebbd1b48a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:33:14 compute-2 nova_compute[226829]: 2026-01-31 07:33:14.181 226833 DEBUG oslo_concurrency.processutils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4a1447d0-8c0f-428b-b815-978ebbd1b48a/disk.config 4a1447d0-8c0f-428b-b815-978ebbd1b48a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4094781662' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4174104288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:14 compute-2 nova_compute[226829]: 2026-01-31 07:33:14.475 226833 DEBUG oslo_concurrency.processutils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4a1447d0-8c0f-428b-b815-978ebbd1b48a/disk.config 4a1447d0-8c0f-428b-b815-978ebbd1b48a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:14 compute-2 nova_compute[226829]: 2026-01-31 07:33:14.476 226833 INFO nova.virt.libvirt.driver [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Deleting local config drive /var/lib/nova/instances/4a1447d0-8c0f-428b-b815-978ebbd1b48a/disk.config because it was imported into RBD.
Jan 31 07:33:14 compute-2 nova_compute[226829]: 2026-01-31 07:33:14.484 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:33:14 compute-2 systemd-machined[195142]: New machine qemu-3-instance-00000008.
Jan 31 07:33:14 compute-2 systemd[1]: Started Virtual Machine qemu-3-instance-00000008.
Jan 31 07:33:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:33:14 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/624948120' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:33:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:15.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:33:15 compute-2 nova_compute[226829]: 2026-01-31 07:33:15.357 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769844795.3571115, 4a1447d0-8c0f-428b-b815-978ebbd1b48a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:33:15 compute-2 nova_compute[226829]: 2026-01-31 07:33:15.359 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] VM Resumed (Lifecycle Event)
Jan 31 07:33:15 compute-2 nova_compute[226829]: 2026-01-31 07:33:15.363 226833 DEBUG nova.compute.manager [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:33:15 compute-2 nova_compute[226829]: 2026-01-31 07:33:15.364 226833 DEBUG nova.virt.libvirt.driver [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:33:15 compute-2 nova_compute[226829]: 2026-01-31 07:33:15.368 226833 INFO nova.virt.libvirt.driver [-] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Instance spawned successfully.
Jan 31 07:33:15 compute-2 nova_compute[226829]: 2026-01-31 07:33:15.368 226833 DEBUG nova.virt.libvirt.driver [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:33:15 compute-2 nova_compute[226829]: 2026-01-31 07:33:15.412 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:33:15 compute-2 nova_compute[226829]: 2026-01-31 07:33:15.420 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:33:15 compute-2 nova_compute[226829]: 2026-01-31 07:33:15.426 226833 DEBUG nova.virt.libvirt.driver [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:33:15 compute-2 nova_compute[226829]: 2026-01-31 07:33:15.427 226833 DEBUG nova.virt.libvirt.driver [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:33:15 compute-2 nova_compute[226829]: 2026-01-31 07:33:15.428 226833 DEBUG nova.virt.libvirt.driver [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:33:15 compute-2 nova_compute[226829]: 2026-01-31 07:33:15.429 226833 DEBUG nova.virt.libvirt.driver [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:33:15 compute-2 nova_compute[226829]: 2026-01-31 07:33:15.430 226833 DEBUG nova.virt.libvirt.driver [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:33:15 compute-2 nova_compute[226829]: 2026-01-31 07:33:15.431 226833 DEBUG nova.virt.libvirt.driver [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:33:15 compute-2 ceph-mon[77282]: pgmap v970: 305 pgs: 305 active+clean; 251 MiB data, 357 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.9 MiB/s wr, 159 op/s
Jan 31 07:33:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/624948120' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3348589074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:15 compute-2 nova_compute[226829]: 2026-01-31 07:33:15.495 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:33:15 compute-2 nova_compute[226829]: 2026-01-31 07:33:15.496 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769844795.3577795, 4a1447d0-8c0f-428b-b815-978ebbd1b48a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:33:15 compute-2 nova_compute[226829]: 2026-01-31 07:33:15.496 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] VM Started (Lifecycle Event)
Jan 31 07:33:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:33:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:15.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:33:15 compute-2 nova_compute[226829]: 2026-01-31 07:33:15.550 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:33:15 compute-2 nova_compute[226829]: 2026-01-31 07:33:15.558 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:33:15 compute-2 nova_compute[226829]: 2026-01-31 07:33:15.568 226833 INFO nova.compute.manager [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Took 4.20 seconds to spawn the instance on the hypervisor.
Jan 31 07:33:15 compute-2 nova_compute[226829]: 2026-01-31 07:33:15.568 226833 DEBUG nova.compute.manager [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:33:15 compute-2 nova_compute[226829]: 2026-01-31 07:33:15.600 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:33:15 compute-2 nova_compute[226829]: 2026-01-31 07:33:15.645 226833 INFO nova.compute.manager [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Took 5.22 seconds to build instance.
Jan 31 07:33:15 compute-2 nova_compute[226829]: 2026-01-31 07:33:15.673 226833 DEBUG oslo_concurrency.lockutils [None req-f4db339e-46b8-456f-abc8-23c61a115ed4 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "4a1447d0-8c0f-428b-b815-978ebbd1b48a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:17.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:17 compute-2 ceph-mon[77282]: pgmap v971: 305 pgs: 305 active+clean; 260 MiB data, 363 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 166 op/s
Jan 31 07:33:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2707124294' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2950240011' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:17.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:18 compute-2 nova_compute[226829]: 2026-01-31 07:33:18.309 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2389247019' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:33:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:19.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:33:19 compute-2 nova_compute[226829]: 2026-01-31 07:33:19.487 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:33:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:19.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:33:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:33:19 compute-2 ceph-mon[77282]: pgmap v972: 305 pgs: 305 active+clean; 271 MiB data, 369 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.6 MiB/s wr, 182 op/s
Jan 31 07:33:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:21.031 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:21 compute-2 nova_compute[226829]: 2026-01-31 07:33:21.091 226833 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:21 compute-2 nova_compute[226829]: 2026-01-31 07:33:21.093 226833 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:21 compute-2 nova_compute[226829]: 2026-01-31 07:33:21.115 226833 DEBUG nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:33:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:21.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:21 compute-2 nova_compute[226829]: 2026-01-31 07:33:21.239 226833 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:21 compute-2 nova_compute[226829]: 2026-01-31 07:33:21.240 226833 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:21 compute-2 nova_compute[226829]: 2026-01-31 07:33:21.247 226833 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:33:21 compute-2 nova_compute[226829]: 2026-01-31 07:33:21.247 226833 INFO nova.compute.claims [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:33:21 compute-2 nova_compute[226829]: 2026-01-31 07:33:21.471 226833 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:33:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:21.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:33:21 compute-2 ceph-mon[77282]: pgmap v973: 305 pgs: 305 active+clean; 295 MiB data, 380 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.4 MiB/s wr, 228 op/s
Jan 31 07:33:21 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:33:21 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2670094037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:21 compute-2 nova_compute[226829]: 2026-01-31 07:33:21.935 226833 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:21 compute-2 nova_compute[226829]: 2026-01-31 07:33:21.943 226833 DEBUG nova.compute.provider_tree [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:33:21 compute-2 nova_compute[226829]: 2026-01-31 07:33:21.962 226833 DEBUG nova.scheduler.client.report [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:33:21 compute-2 nova_compute[226829]: 2026-01-31 07:33:21.995 226833 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:21 compute-2 nova_compute[226829]: 2026-01-31 07:33:21.996 226833 DEBUG nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:33:22 compute-2 nova_compute[226829]: 2026-01-31 07:33:22.060 226833 DEBUG nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:33:22 compute-2 nova_compute[226829]: 2026-01-31 07:33:22.060 226833 DEBUG nova.network.neutron [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:33:22 compute-2 nova_compute[226829]: 2026-01-31 07:33:22.095 226833 INFO nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:33:22 compute-2 nova_compute[226829]: 2026-01-31 07:33:22.125 226833 DEBUG nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:33:22 compute-2 nova_compute[226829]: 2026-01-31 07:33:22.271 226833 DEBUG nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:33:22 compute-2 nova_compute[226829]: 2026-01-31 07:33:22.274 226833 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:33:22 compute-2 nova_compute[226829]: 2026-01-31 07:33:22.275 226833 INFO nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Creating image(s)
Jan 31 07:33:22 compute-2 nova_compute[226829]: 2026-01-31 07:33:22.318 226833 DEBUG nova.storage.rbd_utils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] rbd image 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:33:22 compute-2 nova_compute[226829]: 2026-01-31 07:33:22.358 226833 DEBUG nova.storage.rbd_utils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] rbd image 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:33:22 compute-2 nova_compute[226829]: 2026-01-31 07:33:22.394 226833 DEBUG nova.storage.rbd_utils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] rbd image 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:33:22 compute-2 nova_compute[226829]: 2026-01-31 07:33:22.398 226833 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:22 compute-2 nova_compute[226829]: 2026-01-31 07:33:22.441 226833 DEBUG nova.network.neutron [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Automatically allocating a network for project ca84ce9280d74b4588f89bf679f563fa. _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2460
Jan 31 07:33:22 compute-2 nova_compute[226829]: 2026-01-31 07:33:22.458 226833 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:22 compute-2 nova_compute[226829]: 2026-01-31 07:33:22.460 226833 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:22 compute-2 nova_compute[226829]: 2026-01-31 07:33:22.461 226833 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:22 compute-2 nova_compute[226829]: 2026-01-31 07:33:22.461 226833 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:22 compute-2 nova_compute[226829]: 2026-01-31 07:33:22.497 226833 DEBUG nova.storage.rbd_utils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] rbd image 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:33:22 compute-2 nova_compute[226829]: 2026-01-31 07:33:22.501 226833 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2670094037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1758955280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3803425268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:23 compute-2 nova_compute[226829]: 2026-01-31 07:33:23.008 226833 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:33:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:23.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:33:23 compute-2 nova_compute[226829]: 2026-01-31 07:33:23.442 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:23 compute-2 nova_compute[226829]: 2026-01-31 07:33:23.453 226833 DEBUG nova.storage.rbd_utils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] resizing rbd image 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:33:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:33:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:23.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:33:23 compute-2 ceph-mon[77282]: pgmap v974: 305 pgs: 305 active+clean; 315 MiB data, 399 MiB used, 21 GiB / 21 GiB avail; 5.0 MiB/s rd, 5.2 MiB/s wr, 292 op/s
Jan 31 07:33:24 compute-2 nova_compute[226829]: 2026-01-31 07:33:24.066 226833 DEBUG nova.objects.instance [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lazy-loading 'migration_context' on Instance uuid 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:33:24 compute-2 nova_compute[226829]: 2026-01-31 07:33:24.099 226833 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:33:24 compute-2 nova_compute[226829]: 2026-01-31 07:33:24.100 226833 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Ensure instance console log exists: /var/lib/nova/instances/4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:33:24 compute-2 nova_compute[226829]: 2026-01-31 07:33:24.101 226833 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:24 compute-2 nova_compute[226829]: 2026-01-31 07:33:24.102 226833 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:24 compute-2 nova_compute[226829]: 2026-01-31 07:33:24.102 226833 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:24 compute-2 podman[232732]: 2026-01-31 07:33:24.229851073 +0000 UTC m=+0.107914133 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:33:24 compute-2 nova_compute[226829]: 2026-01-31 07:33:24.490 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:33:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:25.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:25 compute-2 sudo[232759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:33:25 compute-2 sudo[232759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:25 compute-2 sudo[232759]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:25 compute-2 sudo[232784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:33:25 compute-2 sudo[232784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:25 compute-2 sudo[232784]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:25 compute-2 sudo[232809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:33:25 compute-2 sudo[232809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:25 compute-2 sudo[232809]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:25 compute-2 sudo[232834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:33:25 compute-2 sudo[232834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:33:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:25.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:33:25 compute-2 ceph-mon[77282]: pgmap v975: 305 pgs: 305 active+clean; 380 MiB data, 429 MiB used, 21 GiB / 21 GiB avail; 7.6 MiB/s rd, 7.1 MiB/s wr, 404 op/s
Jan 31 07:33:25 compute-2 sudo[232834]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:26 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:33:26 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:33:26 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:33:26 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:33:26 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:33:26 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:33:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1099092097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:27 compute-2 sudo[232892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:33:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:27.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:27 compute-2 sudo[232892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:27 compute-2 sudo[232892]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:27 compute-2 sudo[232917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:33:27 compute-2 sudo[232917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:27 compute-2 sudo[232917]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:27.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:28 compute-2 ceph-mon[77282]: pgmap v976: 305 pgs: 305 active+clean; 424 MiB data, 455 MiB used, 21 GiB / 21 GiB avail; 6.1 MiB/s rd, 8.3 MiB/s wr, 374 op/s
Jan 31 07:33:28 compute-2 nova_compute[226829]: 2026-01-31 07:33:28.315 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:29.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:33:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:29 compute-2 nova_compute[226829]: 2026-01-31 07:33:29.542 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:33:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:29.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:33:30 compute-2 ceph-mon[77282]: pgmap v977: 305 pgs: 305 active+clean; 459 MiB data, 469 MiB used, 21 GiB / 21 GiB avail; 6.2 MiB/s rd, 9.8 MiB/s wr, 421 op/s
Jan 31 07:33:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3738968175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2346403018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:33:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:31.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:33:31 compute-2 ceph-mon[77282]: pgmap v978: 305 pgs: 305 active+clean; 435 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 5.9 MiB/s rd, 11 MiB/s wr, 444 op/s
Jan 31 07:33:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:31.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:33 compute-2 podman[232945]: 2026-01-31 07:33:33.219170057 +0000 UTC m=+0.079752785 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:33:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:33.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:33 compute-2 nova_compute[226829]: 2026-01-31 07:33:33.317 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:33.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:33 compute-2 sudo[232965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:33:33 compute-2 sudo[232965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:33 compute-2 sudo[232965]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:33 compute-2 ceph-mon[77282]: pgmap v979: 305 pgs: 305 active+clean; 409 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 4.9 MiB/s rd, 10 MiB/s wr, 402 op/s
Jan 31 07:33:33 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:33:33 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:33:33 compute-2 sudo[232990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:33:33 compute-2 sudo[232990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:33 compute-2 sudo[232990]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:33:34 compute-2 nova_compute[226829]: 2026-01-31 07:33:34.544 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:35.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:33:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:35.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:33:35 compute-2 ceph-mon[77282]: pgmap v980: 305 pgs: 305 active+clean; 389 MiB data, 440 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 9.2 MiB/s wr, 339 op/s
Jan 31 07:33:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:37.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:37.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:37 compute-2 ceph-mon[77282]: pgmap v981: 305 pgs: 305 active+clean; 405 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 619 KiB/s rd, 7.6 MiB/s wr, 233 op/s
Jan 31 07:33:37 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/882411958' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:38 compute-2 nova_compute[226829]: 2026-01-31 07:33:38.319 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3081057397' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:39.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:33:39 compute-2 nova_compute[226829]: 2026-01-31 07:33:39.547 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:39.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:39 compute-2 ceph-mon[77282]: pgmap v982: 305 pgs: 305 active+clean; 417 MiB data, 458 MiB used, 21 GiB / 21 GiB avail; 667 KiB/s rd, 5.7 MiB/s wr, 204 op/s
Jan 31 07:33:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:33:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:41.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:33:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:33:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:41.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:33:41 compute-2 ceph-mon[77282]: pgmap v983: 305 pgs: 305 active+clean; 418 MiB data, 458 MiB used, 21 GiB / 21 GiB avail; 601 KiB/s rd, 3.7 MiB/s wr, 150 op/s
Jan 31 07:33:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/67064187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:33:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:43.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:33:43 compute-2 nova_compute[226829]: 2026-01-31 07:33:43.320 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:43.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:44 compute-2 ceph-mon[77282]: pgmap v984: 305 pgs: 305 active+clean; 418 MiB data, 458 MiB used, 21 GiB / 21 GiB avail; 525 KiB/s rd, 2.4 MiB/s wr, 109 op/s
Jan 31 07:33:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2891413516' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:33:44 compute-2 nova_compute[226829]: 2026-01-31 07:33:44.550 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2106868559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3741976636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3958700523' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:33:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3958700523' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:33:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1044616484' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:33:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:45.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:33:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:33:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:45.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:33:46 compute-2 ceph-mon[77282]: pgmap v985: 305 pgs: 305 active+clean; 433 MiB data, 469 MiB used, 21 GiB / 21 GiB avail; 395 KiB/s rd, 3.0 MiB/s wr, 102 op/s
Jan 31 07:33:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:33:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:47.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:33:47 compute-2 sudo[233022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:33:47 compute-2 sudo[233022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:47 compute-2 sudo[233022]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:47 compute-2 sudo[233047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:33:47 compute-2 sudo[233047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:47 compute-2 sudo[233047]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:47 compute-2 nova_compute[226829]: 2026-01-31 07:33:47.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:33:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:47.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:48 compute-2 ceph-mon[77282]: pgmap v986: 305 pgs: 305 active+clean; 446 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 347 KiB/s rd, 2.8 MiB/s wr, 96 op/s
Jan 31 07:33:48 compute-2 nova_compute[226829]: 2026-01-31 07:33:48.322 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:48 compute-2 nova_compute[226829]: 2026-01-31 07:33:48.524 226833 DEBUG nova.network.neutron [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Automatically allocated network: {'id': '3741a1e4-1d7f-4ca0-b02c-790a05701782', 'name': 'auto_allocated_network', 'tenant_id': 'ca84ce9280d74b4588f89bf679f563fa', 'admin_state_up': True, 'mtu': 1442, 'status': 'ACTIVE', 'subnets': ['023e1bb7-94fd-4d3e-985f-de3a59c69198', 'c3df2e1c-5e37-495f-93dc-b811f92707e7'], 'shared': False, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'description': '', 'qos_policy_id': None, 'port_security_enabled': True, 'dns_domain': '', 'l2_adjacency': True, 'tags': [], 'created_at': '2026-01-31T07:33:22Z', 'updated_at': '2026-01-31T07:33:31Z', 'revision_number': 4, 'project_id': 'ca84ce9280d74b4588f89bf679f563fa'} _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2478
Jan 31 07:33:48 compute-2 nova_compute[226829]: 2026-01-31 07:33:48.528 226833 DEBUG nova.policy [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '38001f2ce5654228b098939fd9619d3e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca84ce9280d74b4588f89bf679f563fa', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:33:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:33:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:49.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:33:49 compute-2 nova_compute[226829]: 2026-01-31 07:33:49.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:33:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:33:49 compute-2 nova_compute[226829]: 2026-01-31 07:33:49.552 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:33:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:49.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:33:49 compute-2 nova_compute[226829]: 2026-01-31 07:33:49.919 226833 DEBUG nova.network.neutron [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Successfully created port: 4ba2a715-8194-4398-8d19-a3eb77acd9c6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:33:50 compute-2 ceph-mon[77282]: pgmap v987: 305 pgs: 305 active+clean; 464 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.3 MiB/s wr, 94 op/s
Jan 31 07:33:50 compute-2 nova_compute[226829]: 2026-01-31 07:33:50.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:33:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3297960115' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:33:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:51.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:33:51 compute-2 nova_compute[226829]: 2026-01-31 07:33:51.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:33:51 compute-2 nova_compute[226829]: 2026-01-31 07:33:51.502 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:33:51 compute-2 nova_compute[226829]: 2026-01-31 07:33:51.503 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:33:51 compute-2 nova_compute[226829]: 2026-01-31 07:33:51.503 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:33:51 compute-2 nova_compute[226829]: 2026-01-31 07:33:51.523 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 07:33:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:51.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:51 compute-2 nova_compute[226829]: 2026-01-31 07:33:51.627 226833 DEBUG nova.network.neutron [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Successfully updated port: 4ba2a715-8194-4398-8d19-a3eb77acd9c6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:33:51 compute-2 nova_compute[226829]: 2026-01-31 07:33:51.644 226833 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "refresh_cache-4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:33:51 compute-2 nova_compute[226829]: 2026-01-31 07:33:51.645 226833 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquired lock "refresh_cache-4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:33:51 compute-2 nova_compute[226829]: 2026-01-31 07:33:51.645 226833 DEBUG nova.network.neutron [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:33:51 compute-2 nova_compute[226829]: 2026-01-31 07:33:51.955 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-4a1447d0-8c0f-428b-b815-978ebbd1b48a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:33:51 compute-2 nova_compute[226829]: 2026-01-31 07:33:51.955 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-4a1447d0-8c0f-428b-b815-978ebbd1b48a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:33:51 compute-2 nova_compute[226829]: 2026-01-31 07:33:51.955 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 07:33:51 compute-2 nova_compute[226829]: 2026-01-31 07:33:51.956 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4a1447d0-8c0f-428b-b815-978ebbd1b48a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:33:52 compute-2 nova_compute[226829]: 2026-01-31 07:33:52.031 226833 DEBUG nova.network.neutron [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:33:52 compute-2 ceph-mon[77282]: pgmap v988: 305 pgs: 305 active+clean; 465 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 116 op/s
Jan 31 07:33:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2733010016' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2439060104' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3493479111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:53 compute-2 nova_compute[226829]: 2026-01-31 07:33:53.064 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:33:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/214933026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:53 compute-2 nova_compute[226829]: 2026-01-31 07:33:53.220 226833 DEBUG nova.compute.manager [req-558e1b0e-c372-43de-b25d-e1cdf678567f req-8ac44a93-0e2d-4e24-9145-ce2fcb9576d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Received event network-changed-4ba2a715-8194-4398-8d19-a3eb77acd9c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:33:53 compute-2 nova_compute[226829]: 2026-01-31 07:33:53.220 226833 DEBUG nova.compute.manager [req-558e1b0e-c372-43de-b25d-e1cdf678567f req-8ac44a93-0e2d-4e24-9145-ce2fcb9576d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Refreshing instance network info cache due to event network-changed-4ba2a715-8194-4398-8d19-a3eb77acd9c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:33:53 compute-2 nova_compute[226829]: 2026-01-31 07:33:53.221 226833 DEBUG oslo_concurrency.lockutils [req-558e1b0e-c372-43de-b25d-e1cdf678567f req-8ac44a93-0e2d-4e24-9145-ce2fcb9576d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:33:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:53.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:53 compute-2 nova_compute[226829]: 2026-01-31 07:33:53.324 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:33:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:53.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:33:53 compute-2 nova_compute[226829]: 2026-01-31 07:33:53.661 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:33:53 compute-2 nova_compute[226829]: 2026-01-31 07:33:53.677 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-4a1447d0-8c0f-428b-b815-978ebbd1b48a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:33:53 compute-2 nova_compute[226829]: 2026-01-31 07:33:53.677 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 07:33:53 compute-2 nova_compute[226829]: 2026-01-31 07:33:53.677 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:33:53 compute-2 nova_compute[226829]: 2026-01-31 07:33:53.678 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:33:53 compute-2 nova_compute[226829]: 2026-01-31 07:33:53.678 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:33:53 compute-2 nova_compute[226829]: 2026-01-31 07:33:53.678 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:33:53 compute-2 nova_compute[226829]: 2026-01-31 07:33:53.701 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:53 compute-2 nova_compute[226829]: 2026-01-31 07:33:53.701 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:53 compute-2 nova_compute[226829]: 2026-01-31 07:33:53.702 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:53 compute-2 nova_compute[226829]: 2026-01-31 07:33:53.702 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:33:53 compute-2 nova_compute[226829]: 2026-01-31 07:33:53.703 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:33:54 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/73668542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:54 compute-2 ceph-mon[77282]: pgmap v989: 305 pgs: 305 active+clean; 465 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Jan 31 07:33:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1928453779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.290 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.347 226833 DEBUG nova.network.neutron [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Updating instance_info_cache with network_info: [{"id": "4ba2a715-8194-4398-8d19-a3eb77acd9c6", "address": "fa:16:3e:83:5a:ee", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.40", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::206", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ba2a715-81", "ovs_interfaceid": "4ba2a715-8194-4398-8d19-a3eb77acd9c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.368 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.368 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.369 226833 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Releasing lock "refresh_cache-4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.369 226833 DEBUG nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Instance network_info: |[{"id": "4ba2a715-8194-4398-8d19-a3eb77acd9c6", "address": "fa:16:3e:83:5a:ee", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.40", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::206", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ba2a715-81", "ovs_interfaceid": "4ba2a715-8194-4398-8d19-a3eb77acd9c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.370 226833 DEBUG oslo_concurrency.lockutils [req-558e1b0e-c372-43de-b25d-e1cdf678567f req-8ac44a93-0e2d-4e24-9145-ce2fcb9576d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.370 226833 DEBUG nova.network.neutron [req-558e1b0e-c372-43de-b25d-e1cdf678567f req-8ac44a93-0e2d-4e24-9145-ce2fcb9576d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Refreshing network info cache for port 4ba2a715-8194-4398-8d19-a3eb77acd9c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.372 226833 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Start _get_guest_xml network_info=[{"id": "4ba2a715-8194-4398-8d19-a3eb77acd9c6", "address": "fa:16:3e:83:5a:ee", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.40", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::206", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ba2a715-81", "ovs_interfaceid": "4ba2a715-8194-4398-8d19-a3eb77acd9c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.378 226833 WARNING nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.382 226833 DEBUG nova.virt.libvirt.host [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.383 226833 DEBUG nova.virt.libvirt.host [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.385 226833 DEBUG nova.virt.libvirt.host [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.386 226833 DEBUG nova.virt.libvirt.host [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.387 226833 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.387 226833 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.387 226833 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.387 226833 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.388 226833 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.388 226833 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.388 226833 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.388 226833 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.388 226833 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.389 226833 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.389 226833 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.389 226833 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.392 226833 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:54 compute-2 podman[233098]: 2026-01-31 07:33:54.403950371 +0000 UTC m=+0.072119324 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 31 07:33:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.548 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.549 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4785MB free_disk=20.813865661621094GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.550 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.550 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.553 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.640 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 4a1447d0-8c0f-428b-b815-978ebbd1b48a actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.641 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.641 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.641 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.718 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:33:54 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2596142935' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.820 226833 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.844 226833 DEBUG nova.storage.rbd_utils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] rbd image 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:33:54 compute-2 nova_compute[226829]: 2026-01-31 07:33:54.848 226833 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:33:55 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4293098453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.209 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.214 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.234 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:33:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:55.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:33:55 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4007716689' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.270 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.271 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.280 226833 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.281 226833 DEBUG nova.virt.libvirt.vif [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:33:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-2133368505-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2133368505-1',id=11,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca84ce9280d74b4588f89bf679f563fa',ramdisk_id='',reservation_id='r-xsv2fd0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-1036306085',owner_user_name='tempest-AutoAllocateNetworkTest-1036306085-project-member'},tag
s=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:33:22Z,user_data=None,user_id='38001f2ce5654228b098939fd9619d3e',uuid=4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4ba2a715-8194-4398-8d19-a3eb77acd9c6", "address": "fa:16:3e:83:5a:ee", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.40", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::206", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ba2a715-81", "ovs_interfaceid": "4ba2a715-8194-4398-8d19-a3eb77acd9c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.282 226833 DEBUG nova.network.os_vif_util [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Converting VIF {"id": "4ba2a715-8194-4398-8d19-a3eb77acd9c6", "address": "fa:16:3e:83:5a:ee", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.40", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::206", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ba2a715-81", "ovs_interfaceid": "4ba2a715-8194-4398-8d19-a3eb77acd9c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.283 226833 DEBUG nova.network.os_vif_util [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:5a:ee,bridge_name='br-int',has_traffic_filtering=True,id=4ba2a715-8194-4398-8d19-a3eb77acd9c6,network=Network(3741a1e4-1d7f-4ca0-b02c-790a05701782),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ba2a715-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.285 226833 DEBUG nova.objects.instance [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lazy-loading 'pci_devices' on Instance uuid 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.298 226833 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:33:55 compute-2 nova_compute[226829]:   <uuid>4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8</uuid>
Jan 31 07:33:55 compute-2 nova_compute[226829]:   <name>instance-0000000b</name>
Jan 31 07:33:55 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:33:55 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:33:55 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <nova:name>tempest-tempest.common.compute-instance-2133368505-1</nova:name>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:33:54</nova:creationTime>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:33:55 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:33:55 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:33:55 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:33:55 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:33:55 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:33:55 compute-2 nova_compute[226829]:         <nova:user uuid="38001f2ce5654228b098939fd9619d3e">tempest-AutoAllocateNetworkTest-1036306085-project-member</nova:user>
Jan 31 07:33:55 compute-2 nova_compute[226829]:         <nova:project uuid="ca84ce9280d74b4588f89bf679f563fa">tempest-AutoAllocateNetworkTest-1036306085</nova:project>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 07:33:55 compute-2 nova_compute[226829]:         <nova:port uuid="4ba2a715-8194-4398-8d19-a3eb77acd9c6">
Jan 31 07:33:55 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.1.0.40" ipVersion="4"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="fdfe:381f:8400::206" ipVersion="6"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:33:55 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:33:55 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <system>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <entry name="serial">4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8</entry>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <entry name="uuid">4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8</entry>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     </system>
Jan 31 07:33:55 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:33:55 compute-2 nova_compute[226829]:   <os>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:   </os>
Jan 31 07:33:55 compute-2 nova_compute[226829]:   <features>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:   </features>
Jan 31 07:33:55 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:33:55 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:33:55 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8_disk">
Jan 31 07:33:55 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       </source>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:33:55 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8_disk.config">
Jan 31 07:33:55 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       </source>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:33:55 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:83:5a:ee"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <target dev="tap4ba2a715-81"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8/console.log" append="off"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <video>
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     </video>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:33:55 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:33:55 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:33:55 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:33:55 compute-2 nova_compute[226829]: </domain>
Jan 31 07:33:55 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.299 226833 DEBUG nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Preparing to wait for external event network-vif-plugged-4ba2a715-8194-4398-8d19-a3eb77acd9c6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.299 226833 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.300 226833 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.300 226833 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.300 226833 DEBUG nova.virt.libvirt.vif [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:33:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-2133368505-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2133368505-1',id=11,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca84ce9280d74b4588f89bf679f563fa',ramdisk_id='',reservation_id='r-xsv2fd0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-1036306085',owner_user_name='tempest-AutoAllocateNetworkTest-1036306085-project-me
mber'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:33:22Z,user_data=None,user_id='38001f2ce5654228b098939fd9619d3e',uuid=4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4ba2a715-8194-4398-8d19-a3eb77acd9c6", "address": "fa:16:3e:83:5a:ee", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.40", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::206", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ba2a715-81", "ovs_interfaceid": "4ba2a715-8194-4398-8d19-a3eb77acd9c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.301 226833 DEBUG nova.network.os_vif_util [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Converting VIF {"id": "4ba2a715-8194-4398-8d19-a3eb77acd9c6", "address": "fa:16:3e:83:5a:ee", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.40", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::206", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ba2a715-81", "ovs_interfaceid": "4ba2a715-8194-4398-8d19-a3eb77acd9c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.301 226833 DEBUG nova.network.os_vif_util [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:5a:ee,bridge_name='br-int',has_traffic_filtering=True,id=4ba2a715-8194-4398-8d19-a3eb77acd9c6,network=Network(3741a1e4-1d7f-4ca0-b02c-790a05701782),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ba2a715-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.302 226833 DEBUG os_vif [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:5a:ee,bridge_name='br-int',has_traffic_filtering=True,id=4ba2a715-8194-4398-8d19-a3eb77acd9c6,network=Network(3741a1e4-1d7f-4ca0-b02c-790a05701782),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ba2a715-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.302 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.303 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.303 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:33:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/73668542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3030788466' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:55 compute-2 ceph-mon[77282]: pgmap v990: 305 pgs: 305 active+clean; 465 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Jan 31 07:33:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2596142935' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2053054528' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4293098453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4007716689' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.310 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.311 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ba2a715-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.311 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4ba2a715-81, col_values=(('external_ids', {'iface-id': '4ba2a715-8194-4398-8d19-a3eb77acd9c6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:83:5a:ee', 'vm-uuid': '4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.365 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:55 compute-2 NetworkManager[48999]: <info>  [1769844835.3679] manager: (tap4ba2a715-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.368 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.370 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.372 226833 INFO os_vif [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:5a:ee,bridge_name='br-int',has_traffic_filtering=True,id=4ba2a715-8194-4398-8d19-a3eb77acd9c6,network=Network(3741a1e4-1d7f-4ca0-b02c-790a05701782),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ba2a715-81')
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.434 226833 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.434 226833 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.435 226833 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] No VIF found with MAC fa:16:3e:83:5a:ee, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.435 226833 INFO nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Using config drive
Jan 31 07:33:55 compute-2 nova_compute[226829]: 2026-01-31 07:33:55.465 226833 DEBUG nova.storage.rbd_utils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] rbd image 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:33:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:33:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:55.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:33:56 compute-2 nova_compute[226829]: 2026-01-31 07:33:56.270 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:33:56 compute-2 nova_compute[226829]: 2026-01-31 07:33:56.271 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:33:56 compute-2 nova_compute[226829]: 2026-01-31 07:33:56.465 226833 INFO nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Creating config drive at /var/lib/nova/instances/4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8/disk.config
Jan 31 07:33:56 compute-2 nova_compute[226829]: 2026-01-31 07:33:56.472 226833 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpth0dg_c9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:56 compute-2 nova_compute[226829]: 2026-01-31 07:33:56.601 226833 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpth0dg_c9" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:56 compute-2 nova_compute[226829]: 2026-01-31 07:33:56.641 226833 DEBUG nova.storage.rbd_utils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] rbd image 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:33:56 compute-2 nova_compute[226829]: 2026-01-31 07:33:56.645 226833 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8/disk.config 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:56 compute-2 nova_compute[226829]: 2026-01-31 07:33:56.942 226833 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8/disk.config 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.297s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:56 compute-2 nova_compute[226829]: 2026-01-31 07:33:56.943 226833 INFO nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Deleting local config drive /var/lib/nova/instances/4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8/disk.config because it was imported into RBD.
Jan 31 07:33:56 compute-2 kernel: tap4ba2a715-81: entered promiscuous mode
Jan 31 07:33:57 compute-2 nova_compute[226829]: 2026-01-31 07:33:56.998 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:57 compute-2 NetworkManager[48999]: <info>  [1769844837.0003] manager: (tap4ba2a715-81): new Tun device (/org/freedesktop/NetworkManager/Devices/37)
Jan 31 07:33:57 compute-2 ovn_controller[133834]: 2026-01-31T07:33:56Z|00048|binding|INFO|Claiming lport 4ba2a715-8194-4398-8d19-a3eb77acd9c6 for this chassis.
Jan 31 07:33:57 compute-2 ovn_controller[133834]: 2026-01-31T07:33:56Z|00049|binding|INFO|4ba2a715-8194-4398-8d19-a3eb77acd9c6: Claiming fa:16:3e:83:5a:ee 10.1.0.40 fdfe:381f:8400::206
Jan 31 07:33:57 compute-2 nova_compute[226829]: 2026-01-31 07:33:57.013 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:57 compute-2 systemd-machined[195142]: New machine qemu-4-instance-0000000b.
Jan 31 07:33:57 compute-2 systemd[1]: Started Virtual Machine qemu-4-instance-0000000b.
Jan 31 07:33:57 compute-2 ovn_controller[133834]: 2026-01-31T07:33:57Z|00050|binding|INFO|Setting lport 4ba2a715-8194-4398-8d19-a3eb77acd9c6 ovn-installed in OVS
Jan 31 07:33:57 compute-2 systemd-udevd[233283]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:33:57 compute-2 nova_compute[226829]: 2026-01-31 07:33:57.051 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:57 compute-2 NetworkManager[48999]: <info>  [1769844837.0672] device (tap4ba2a715-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:33:57 compute-2 NetworkManager[48999]: <info>  [1769844837.0683] device (tap4ba2a715-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:33:57 compute-2 ovn_controller[133834]: 2026-01-31T07:33:57Z|00051|binding|INFO|Setting lport 4ba2a715-8194-4398-8d19-a3eb77acd9c6 up in Southbound
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:57.091 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:5a:ee 10.1.0.40 fdfe:381f:8400::206'], port_security=['fa:16:3e:83:5a:ee 10.1.0.40 fdfe:381f:8400::206'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.40/26 fdfe:381f:8400::206/64', 'neutron:device_id': '4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3741a1e4-1d7f-4ca0-b02c-790a05701782', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca84ce9280d74b4588f89bf679f563fa', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4c568fd1-1fe0-466e-9918-da0b2ee34267', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5e73fc0-b2f3-4e49-a543-f424bee97362, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=4ba2a715-8194-4398-8d19-a3eb77acd9c6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:57.094 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 4ba2a715-8194-4398-8d19-a3eb77acd9c6 in datapath 3741a1e4-1d7f-4ca0-b02c-790a05701782 bound to our chassis
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:57.098 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3741a1e4-1d7f-4ca0-b02c-790a05701782
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:57.113 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ce4cb8a3-ac11-41e4-8d44-0d4834b0ebe6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:57.115 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3741a1e4-11 in ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:57.117 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3741a1e4-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:57.117 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[42fa7fd5-c49d-475b-9091-e6707aae6613]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:57.118 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ef0d0f20-4fcb-42e4-a6a1-6ffd0783426b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:57.135 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[d616be63-33db-4214-ab51-22845f5018a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:57.162 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5e0eca26-6afe-42d6-9726-aaacde91a5fc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:57.185 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[f82239f8-007c-42b4-b57e-2738403ff7bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:57.190 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[dc14bae4-6420-43b7-9210-10fabf8e0476]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-2 NetworkManager[48999]: <info>  [1769844837.1908] manager: (tap3741a1e4-10): new Veth device (/org/freedesktop/NetworkManager/Devices/38)
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:57.209 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[1bb00db7-ccc4-425a-877e-edd715bb27f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:57.211 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[45cbb4b6-89aa-4564-8413-e6a7aaab1666]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-2 NetworkManager[48999]: <info>  [1769844837.2242] device (tap3741a1e4-10): carrier: link connected
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:57.226 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[2eea9cf3-3a9d-4368-ae1d-fcc0285eeff8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:57.237 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ae4bcc28-12b8-4bb8-acae-e5d55cf7ade8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3741a1e4-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:7f:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508969, 'reachable_time': 20932, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 233318, 'error': None, 'target': 'ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:57.246 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cd3886f8-2b9d-4951-8b88-55ec7cf9aff9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef5:7f1a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508969, 'tstamp': 508969}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 233319, 'error': None, 'target': 'ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:57.254 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[586de959-3223-4883-84f2-f77fca18802b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3741a1e4-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:7f:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508969, 'reachable_time': 20932, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 233320, 'error': None, 'target': 'ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:57.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:57.273 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[75ce015c-f1fd-4c8b-9213-9203aeb1e584]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:57.307 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[34e0a349-d34e-4852-b848-79df72460d69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:57.309 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3741a1e4-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:57.309 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:57.310 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3741a1e4-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:57 compute-2 nova_compute[226829]: 2026-01-31 07:33:57.312 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:57 compute-2 NetworkManager[48999]: <info>  [1769844837.3127] manager: (tap3741a1e4-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Jan 31 07:33:57 compute-2 kernel: tap3741a1e4-10: entered promiscuous mode
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:57.316 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3741a1e4-10, col_values=(('external_ids', {'iface-id': 'b790fc33-bc7c-415a-abab-507204f5d28b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:57 compute-2 ovn_controller[133834]: 2026-01-31T07:33:57Z|00052|binding|INFO|Releasing lport b790fc33-bc7c-415a-abab-507204f5d28b from this chassis (sb_readonly=0)
Jan 31 07:33:57 compute-2 nova_compute[226829]: 2026-01-31 07:33:57.318 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:57.322 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3741a1e4-1d7f-4ca0-b02c-790a05701782.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3741a1e4-1d7f-4ca0-b02c-790a05701782.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:33:57 compute-2 nova_compute[226829]: 2026-01-31 07:33:57.323 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:57.323 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[310e0b8e-8a1c-4514-8544-b0b62b93e948]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:57.324 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: global
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-3741a1e4-1d7f-4ca0-b02c-790a05701782
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/3741a1e4-1d7f-4ca0-b02c-790a05701782.pid.haproxy
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 3741a1e4-1d7f-4ca0-b02c-790a05701782
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:33:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:33:57.325 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782', 'env', 'PROCESS_TAG=haproxy-3741a1e4-1d7f-4ca0-b02c-790a05701782', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3741a1e4-1d7f-4ca0-b02c-790a05701782.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:33:57 compute-2 nova_compute[226829]: 2026-01-31 07:33:57.409 226833 DEBUG nova.compute.manager [req-8f48262a-1b82-4ba5-8f9f-d12d0e744fa5 req-8d878fcb-c878-4c81-9df6-57edd5a232f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Received event network-vif-plugged-4ba2a715-8194-4398-8d19-a3eb77acd9c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:33:57 compute-2 nova_compute[226829]: 2026-01-31 07:33:57.410 226833 DEBUG oslo_concurrency.lockutils [req-8f48262a-1b82-4ba5-8f9f-d12d0e744fa5 req-8d878fcb-c878-4c81-9df6-57edd5a232f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:57 compute-2 nova_compute[226829]: 2026-01-31 07:33:57.410 226833 DEBUG oslo_concurrency.lockutils [req-8f48262a-1b82-4ba5-8f9f-d12d0e744fa5 req-8d878fcb-c878-4c81-9df6-57edd5a232f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:57 compute-2 nova_compute[226829]: 2026-01-31 07:33:57.411 226833 DEBUG oslo_concurrency.lockutils [req-8f48262a-1b82-4ba5-8f9f-d12d0e744fa5 req-8d878fcb-c878-4c81-9df6-57edd5a232f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:57 compute-2 nova_compute[226829]: 2026-01-31 07:33:57.412 226833 DEBUG nova.compute.manager [req-8f48262a-1b82-4ba5-8f9f-d12d0e744fa5 req-8d878fcb-c878-4c81-9df6-57edd5a232f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Processing event network-vif-plugged-4ba2a715-8194-4398-8d19-a3eb77acd9c6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:33:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.005000131s ======
Jan 31 07:33:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:57.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000131s
Jan 31 07:33:57 compute-2 podman[233351]: 2026-01-31 07:33:57.648367664 +0000 UTC m=+0.051627621 container create 3037f89e2bb9d6b4b5e554d3edf3da099e14f6d454eb21bfd710de0895e76c16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:33:57 compute-2 systemd[1]: Started libpod-conmon-3037f89e2bb9d6b4b5e554d3edf3da099e14f6d454eb21bfd710de0895e76c16.scope.
Jan 31 07:33:57 compute-2 ceph-mon[77282]: pgmap v991: 305 pgs: 305 active+clean; 465 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 963 KiB/s wr, 155 op/s
Jan 31 07:33:57 compute-2 podman[233351]: 2026-01-31 07:33:57.61729583 +0000 UTC m=+0.020555787 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:33:57 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:33:57 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557884c87db9081ad269a4c0bc7f6bff2e2448661847900f33f3cede518c5281/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:33:57 compute-2 podman[233351]: 2026-01-31 07:33:57.730573484 +0000 UTC m=+0.133833521 container init 3037f89e2bb9d6b4b5e554d3edf3da099e14f6d454eb21bfd710de0895e76c16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 07:33:57 compute-2 podman[233351]: 2026-01-31 07:33:57.738461283 +0000 UTC m=+0.141721280 container start 3037f89e2bb9d6b4b5e554d3edf3da099e14f6d454eb21bfd710de0895e76c16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 07:33:57 compute-2 nova_compute[226829]: 2026-01-31 07:33:57.776 226833 DEBUG nova.network.neutron [req-558e1b0e-c372-43de-b25d-e1cdf678567f req-8ac44a93-0e2d-4e24-9145-ce2fcb9576d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Updated VIF entry in instance network info cache for port 4ba2a715-8194-4398-8d19-a3eb77acd9c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:33:57 compute-2 nova_compute[226829]: 2026-01-31 07:33:57.778 226833 DEBUG nova.network.neutron [req-558e1b0e-c372-43de-b25d-e1cdf678567f req-8ac44a93-0e2d-4e24-9145-ce2fcb9576d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Updating instance_info_cache with network_info: [{"id": "4ba2a715-8194-4398-8d19-a3eb77acd9c6", "address": "fa:16:3e:83:5a:ee", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.40", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::206", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ba2a715-81", "ovs_interfaceid": "4ba2a715-8194-4398-8d19-a3eb77acd9c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:33:57 compute-2 neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782[233382]: [NOTICE]   (233403) : New worker (233409) forked
Jan 31 07:33:57 compute-2 neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782[233382]: [NOTICE]   (233403) : Loading success.
Jan 31 07:33:57 compute-2 nova_compute[226829]: 2026-01-31 07:33:57.861 226833 DEBUG oslo_concurrency.lockutils [req-558e1b0e-c372-43de-b25d-e1cdf678567f req-8ac44a93-0e2d-4e24-9145-ce2fcb9576d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:33:57 compute-2 nova_compute[226829]: 2026-01-31 07:33:57.902 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769844837.9019477, 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:33:57 compute-2 nova_compute[226829]: 2026-01-31 07:33:57.903 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] VM Started (Lifecycle Event)
Jan 31 07:33:57 compute-2 nova_compute[226829]: 2026-01-31 07:33:57.905 226833 DEBUG nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:33:57 compute-2 nova_compute[226829]: 2026-01-31 07:33:57.910 226833 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:33:57 compute-2 nova_compute[226829]: 2026-01-31 07:33:57.915 226833 INFO nova.virt.libvirt.driver [-] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Instance spawned successfully.
Jan 31 07:33:57 compute-2 nova_compute[226829]: 2026-01-31 07:33:57.916 226833 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:33:58 compute-2 nova_compute[226829]: 2026-01-31 07:33:58.128 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:33:58 compute-2 nova_compute[226829]: 2026-01-31 07:33:58.132 226833 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:33:58 compute-2 nova_compute[226829]: 2026-01-31 07:33:58.132 226833 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:33:58 compute-2 nova_compute[226829]: 2026-01-31 07:33:58.132 226833 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:33:58 compute-2 nova_compute[226829]: 2026-01-31 07:33:58.133 226833 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:33:58 compute-2 nova_compute[226829]: 2026-01-31 07:33:58.133 226833 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:33:58 compute-2 nova_compute[226829]: 2026-01-31 07:33:58.133 226833 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:33:58 compute-2 nova_compute[226829]: 2026-01-31 07:33:58.137 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:33:58 compute-2 nova_compute[226829]: 2026-01-31 07:33:58.182 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:33:58 compute-2 nova_compute[226829]: 2026-01-31 07:33:58.183 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769844837.903189, 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:33:58 compute-2 nova_compute[226829]: 2026-01-31 07:33:58.183 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] VM Paused (Lifecycle Event)
Jan 31 07:33:58 compute-2 nova_compute[226829]: 2026-01-31 07:33:58.220 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:33:58 compute-2 nova_compute[226829]: 2026-01-31 07:33:58.224 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769844837.9083762, 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:33:58 compute-2 nova_compute[226829]: 2026-01-31 07:33:58.224 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] VM Resumed (Lifecycle Event)
Jan 31 07:33:58 compute-2 nova_compute[226829]: 2026-01-31 07:33:58.235 226833 INFO nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Took 35.96 seconds to spawn the instance on the hypervisor.
Jan 31 07:33:58 compute-2 nova_compute[226829]: 2026-01-31 07:33:58.235 226833 DEBUG nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:33:58 compute-2 nova_compute[226829]: 2026-01-31 07:33:58.244 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:33:58 compute-2 nova_compute[226829]: 2026-01-31 07:33:58.247 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:33:58 compute-2 nova_compute[226829]: 2026-01-31 07:33:58.330 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:58 compute-2 nova_compute[226829]: 2026-01-31 07:33:58.333 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:33:58 compute-2 nova_compute[226829]: 2026-01-31 07:33:58.403 226833 INFO nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Took 37.19 seconds to build instance.
Jan 31 07:33:58 compute-2 nova_compute[226829]: 2026-01-31 07:33:58.423 226833 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 37.330s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:33:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:59.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:33:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:33:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:33:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:59.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:59 compute-2 nova_compute[226829]: 2026-01-31 07:33:59.580 226833 DEBUG nova.compute.manager [req-ffce8ac2-8e2f-44dc-8252-ceb672be3204 req-714ad401-6ae7-4e17-8701-810457bc08fe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Received event network-vif-plugged-4ba2a715-8194-4398-8d19-a3eb77acd9c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:33:59 compute-2 nova_compute[226829]: 2026-01-31 07:33:59.581 226833 DEBUG oslo_concurrency.lockutils [req-ffce8ac2-8e2f-44dc-8252-ceb672be3204 req-714ad401-6ae7-4e17-8701-810457bc08fe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:59 compute-2 nova_compute[226829]: 2026-01-31 07:33:59.581 226833 DEBUG oslo_concurrency.lockutils [req-ffce8ac2-8e2f-44dc-8252-ceb672be3204 req-714ad401-6ae7-4e17-8701-810457bc08fe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:59 compute-2 nova_compute[226829]: 2026-01-31 07:33:59.581 226833 DEBUG oslo_concurrency.lockutils [req-ffce8ac2-8e2f-44dc-8252-ceb672be3204 req-714ad401-6ae7-4e17-8701-810457bc08fe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:59 compute-2 nova_compute[226829]: 2026-01-31 07:33:59.582 226833 DEBUG nova.compute.manager [req-ffce8ac2-8e2f-44dc-8252-ceb672be3204 req-714ad401-6ae7-4e17-8701-810457bc08fe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] No waiting events found dispatching network-vif-plugged-4ba2a715-8194-4398-8d19-a3eb77acd9c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:33:59 compute-2 nova_compute[226829]: 2026-01-31 07:33:59.582 226833 WARNING nova.compute.manager [req-ffce8ac2-8e2f-44dc-8252-ceb672be3204 req-714ad401-6ae7-4e17-8701-810457bc08fe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Received unexpected event network-vif-plugged-4ba2a715-8194-4398-8d19-a3eb77acd9c6 for instance with vm_state active and task_state None.
Jan 31 07:33:59 compute-2 ceph-mon[77282]: pgmap v992: 305 pgs: 305 active+clean; 469 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.0 MiB/s wr, 153 op/s
Jan 31 07:34:00 compute-2 nova_compute[226829]: 2026-01-31 07:34:00.364 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:34:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:01.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:34:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:34:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:01.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:34:01 compute-2 ceph-mon[77282]: pgmap v993: 305 pgs: 305 active+clean; 494 MiB data, 501 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.4 MiB/s wr, 219 op/s
Jan 31 07:34:02 compute-2 nova_compute[226829]: 2026-01-31 07:34:02.439 226833 DEBUG oslo_concurrency.lockutils [None req-286465e6-87ac-4af4-8039-56e1c16257ac 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:02 compute-2 nova_compute[226829]: 2026-01-31 07:34:02.440 226833 DEBUG oslo_concurrency.lockutils [None req-286465e6-87ac-4af4-8039-56e1c16257ac 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:02 compute-2 nova_compute[226829]: 2026-01-31 07:34:02.440 226833 DEBUG oslo_concurrency.lockutils [None req-286465e6-87ac-4af4-8039-56e1c16257ac 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:02 compute-2 nova_compute[226829]: 2026-01-31 07:34:02.440 226833 DEBUG oslo_concurrency.lockutils [None req-286465e6-87ac-4af4-8039-56e1c16257ac 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:02 compute-2 nova_compute[226829]: 2026-01-31 07:34:02.440 226833 DEBUG oslo_concurrency.lockutils [None req-286465e6-87ac-4af4-8039-56e1c16257ac 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:02 compute-2 nova_compute[226829]: 2026-01-31 07:34:02.441 226833 INFO nova.compute.manager [None req-286465e6-87ac-4af4-8039-56e1c16257ac 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Terminating instance
Jan 31 07:34:02 compute-2 nova_compute[226829]: 2026-01-31 07:34:02.442 226833 DEBUG nova.compute.manager [None req-286465e6-87ac-4af4-8039-56e1c16257ac 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:34:02 compute-2 kernel: tap4ba2a715-81 (unregistering): left promiscuous mode
Jan 31 07:34:02 compute-2 NetworkManager[48999]: <info>  [1769844842.5206] device (tap4ba2a715-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:34:02 compute-2 nova_compute[226829]: 2026-01-31 07:34:02.527 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:02 compute-2 ovn_controller[133834]: 2026-01-31T07:34:02Z|00053|binding|INFO|Releasing lport 4ba2a715-8194-4398-8d19-a3eb77acd9c6 from this chassis (sb_readonly=0)
Jan 31 07:34:02 compute-2 ovn_controller[133834]: 2026-01-31T07:34:02Z|00054|binding|INFO|Setting lport 4ba2a715-8194-4398-8d19-a3eb77acd9c6 down in Southbound
Jan 31 07:34:02 compute-2 ovn_controller[133834]: 2026-01-31T07:34:02Z|00055|binding|INFO|Removing iface tap4ba2a715-81 ovn-installed in OVS
Jan 31 07:34:02 compute-2 nova_compute[226829]: 2026-01-31 07:34:02.529 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:34:02.535 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:5a:ee 10.1.0.40 fdfe:381f:8400::206'], port_security=['fa:16:3e:83:5a:ee 10.1.0.40 fdfe:381f:8400::206'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.40/26 fdfe:381f:8400::206/64', 'neutron:device_id': '4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3741a1e4-1d7f-4ca0-b02c-790a05701782', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca84ce9280d74b4588f89bf679f563fa', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4c568fd1-1fe0-466e-9918-da0b2ee34267', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5e73fc0-b2f3-4e49-a543-f424bee97362, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=4ba2a715-8194-4398-8d19-a3eb77acd9c6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:34:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:34:02.537 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 4ba2a715-8194-4398-8d19-a3eb77acd9c6 in datapath 3741a1e4-1d7f-4ca0-b02c-790a05701782 unbound from our chassis
Jan 31 07:34:02 compute-2 nova_compute[226829]: 2026-01-31 07:34:02.539 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:34:02.540 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3741a1e4-1d7f-4ca0-b02c-790a05701782, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:34:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:34:02.541 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6eebe8a3-7b73-402f-8058-6c4236f79a00]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:34:02.542 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782 namespace which is not needed anymore
Jan 31 07:34:02 compute-2 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Jan 31 07:34:02 compute-2 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000b.scope: Consumed 5.299s CPU time.
Jan 31 07:34:02 compute-2 systemd-machined[195142]: Machine qemu-4-instance-0000000b terminated.
Jan 31 07:34:02 compute-2 neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782[233382]: [NOTICE]   (233403) : haproxy version is 2.8.14-c23fe91
Jan 31 07:34:02 compute-2 neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782[233382]: [NOTICE]   (233403) : path to executable is /usr/sbin/haproxy
Jan 31 07:34:02 compute-2 neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782[233382]: [WARNING]  (233403) : Exiting Master process...
Jan 31 07:34:02 compute-2 neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782[233382]: [WARNING]  (233403) : Exiting Master process...
Jan 31 07:34:02 compute-2 nova_compute[226829]: 2026-01-31 07:34:02.670 226833 INFO nova.virt.libvirt.driver [-] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Instance destroyed successfully.
Jan 31 07:34:02 compute-2 nova_compute[226829]: 2026-01-31 07:34:02.671 226833 DEBUG nova.objects.instance [None req-286465e6-87ac-4af4-8039-56e1c16257ac 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lazy-loading 'resources' on Instance uuid 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:34:02 compute-2 neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782[233382]: [ALERT]    (233403) : Current worker (233409) exited with code 143 (Terminated)
Jan 31 07:34:02 compute-2 neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782[233382]: [WARNING]  (233403) : All workers exited. Exiting... (0)
Jan 31 07:34:02 compute-2 systemd[1]: libpod-3037f89e2bb9d6b4b5e554d3edf3da099e14f6d454eb21bfd710de0895e76c16.scope: Deactivated successfully.
Jan 31 07:34:02 compute-2 podman[233450]: 2026-01-31 07:34:02.682462655 +0000 UTC m=+0.052897274 container died 3037f89e2bb9d6b4b5e554d3edf3da099e14f6d454eb21bfd710de0895e76c16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 07:34:02 compute-2 nova_compute[226829]: 2026-01-31 07:34:02.685 226833 DEBUG nova.virt.libvirt.vif [None req-286465e6-87ac-4af4-8039-56e1c16257ac 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:33:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-2133368505-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2133368505-1',id=11,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:33:58Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca84ce9280d74b4588f89bf679f563fa',ramdisk_id='',reservation_id='r-xsv2fd0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AutoAllocateNetworkTest-1036306085',owner_user_name='tempest-AutoAllocateNetworkTest-1036306085-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:33:58Z,user_data=None,user_id='38001f2ce5654228b098939fd9619d3e',uuid=4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4ba2a715-8194-4398-8d19-a3eb77acd9c6", "address": "fa:16:3e:83:5a:ee", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.40", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::206", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ba2a715-81", "ovs_interfaceid": "4ba2a715-8194-4398-8d19-a3eb77acd9c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:34:02 compute-2 nova_compute[226829]: 2026-01-31 07:34:02.687 226833 DEBUG nova.network.os_vif_util [None req-286465e6-87ac-4af4-8039-56e1c16257ac 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Converting VIF {"id": "4ba2a715-8194-4398-8d19-a3eb77acd9c6", "address": "fa:16:3e:83:5a:ee", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.40", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::206", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ba2a715-81", "ovs_interfaceid": "4ba2a715-8194-4398-8d19-a3eb77acd9c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:34:02 compute-2 nova_compute[226829]: 2026-01-31 07:34:02.688 226833 DEBUG nova.network.os_vif_util [None req-286465e6-87ac-4af4-8039-56e1c16257ac 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:5a:ee,bridge_name='br-int',has_traffic_filtering=True,id=4ba2a715-8194-4398-8d19-a3eb77acd9c6,network=Network(3741a1e4-1d7f-4ca0-b02c-790a05701782),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ba2a715-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:34:02 compute-2 nova_compute[226829]: 2026-01-31 07:34:02.689 226833 DEBUG os_vif [None req-286465e6-87ac-4af4-8039-56e1c16257ac 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:5a:ee,bridge_name='br-int',has_traffic_filtering=True,id=4ba2a715-8194-4398-8d19-a3eb77acd9c6,network=Network(3741a1e4-1d7f-4ca0-b02c-790a05701782),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ba2a715-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:34:02 compute-2 nova_compute[226829]: 2026-01-31 07:34:02.691 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:02 compute-2 nova_compute[226829]: 2026-01-31 07:34:02.691 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ba2a715-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:34:02 compute-2 nova_compute[226829]: 2026-01-31 07:34:02.696 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:34:02 compute-2 nova_compute[226829]: 2026-01-31 07:34:02.698 226833 INFO os_vif [None req-286465e6-87ac-4af4-8039-56e1c16257ac 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:5a:ee,bridge_name='br-int',has_traffic_filtering=True,id=4ba2a715-8194-4398-8d19-a3eb77acd9c6,network=Network(3741a1e4-1d7f-4ca0-b02c-790a05701782),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ba2a715-81')
Jan 31 07:34:02 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3037f89e2bb9d6b4b5e554d3edf3da099e14f6d454eb21bfd710de0895e76c16-userdata-shm.mount: Deactivated successfully.
Jan 31 07:34:02 compute-2 systemd[1]: var-lib-containers-storage-overlay-557884c87db9081ad269a4c0bc7f6bff2e2448661847900f33f3cede518c5281-merged.mount: Deactivated successfully.
Jan 31 07:34:02 compute-2 podman[233450]: 2026-01-31 07:34:02.720868754 +0000 UTC m=+0.091303373 container cleanup 3037f89e2bb9d6b4b5e554d3edf3da099e14f6d454eb21bfd710de0895e76c16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:34:02 compute-2 systemd[1]: libpod-conmon-3037f89e2bb9d6b4b5e554d3edf3da099e14f6d454eb21bfd710de0895e76c16.scope: Deactivated successfully.
Jan 31 07:34:02 compute-2 podman[233504]: 2026-01-31 07:34:02.78596475 +0000 UTC m=+0.044066130 container remove 3037f89e2bb9d6b4b5e554d3edf3da099e14f6d454eb21bfd710de0895e76c16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:34:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:34:02.790 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a55de2ce-54d0-4a0b-a6c4-848a968d7fea]: (4, ('Sat Jan 31 07:34:02 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782 (3037f89e2bb9d6b4b5e554d3edf3da099e14f6d454eb21bfd710de0895e76c16)\n3037f89e2bb9d6b4b5e554d3edf3da099e14f6d454eb21bfd710de0895e76c16\nSat Jan 31 07:34:02 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782 (3037f89e2bb9d6b4b5e554d3edf3da099e14f6d454eb21bfd710de0895e76c16)\n3037f89e2bb9d6b4b5e554d3edf3da099e14f6d454eb21bfd710de0895e76c16\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:34:02.791 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4857f9da-b10f-423a-8abd-4f8eb23a717c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:34:02.792 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3741a1e4-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:34:02 compute-2 nova_compute[226829]: 2026-01-31 07:34:02.794 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:02 compute-2 kernel: tap3741a1e4-10: left promiscuous mode
Jan 31 07:34:02 compute-2 nova_compute[226829]: 2026-01-31 07:34:02.800 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:02 compute-2 nova_compute[226829]: 2026-01-31 07:34:02.801 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:34:02.803 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b48d0e64-13e5-4c7d-b66f-e73f99e975c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:34:02.820 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5ba9551e-42e9-46f1-8862-e3e0326ae3ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:34:02.821 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ee19fca1-a60b-42da-8787-bee1b59742b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:34:02.835 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b5ff1d6c-c2b5-469e-af2f-a09205c3692c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508965, 'reachable_time': 32866, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 233523, 'error': None, 'target': 'ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:02 compute-2 systemd[1]: run-netns-ovnmeta\x2d3741a1e4\x2d1d7f\x2d4ca0\x2db02c\x2d790a05701782.mount: Deactivated successfully.
Jan 31 07:34:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:34:02.838 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:34:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:34:02.838 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[6e5ef8db-4215-48c4-aa65-bf20fe00e05e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:03.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:03 compute-2 nova_compute[226829]: 2026-01-31 07:34:03.319 226833 DEBUG nova.compute.manager [req-dd9cd701-ec70-4508-95c9-9c28050ead3b req-2e58ca2d-73ec-46c5-80b0-aa9cae13ad6e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Received event network-vif-unplugged-4ba2a715-8194-4398-8d19-a3eb77acd9c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:34:03 compute-2 nova_compute[226829]: 2026-01-31 07:34:03.319 226833 DEBUG oslo_concurrency.lockutils [req-dd9cd701-ec70-4508-95c9-9c28050ead3b req-2e58ca2d-73ec-46c5-80b0-aa9cae13ad6e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:03 compute-2 nova_compute[226829]: 2026-01-31 07:34:03.320 226833 DEBUG oslo_concurrency.lockutils [req-dd9cd701-ec70-4508-95c9-9c28050ead3b req-2e58ca2d-73ec-46c5-80b0-aa9cae13ad6e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:03 compute-2 nova_compute[226829]: 2026-01-31 07:34:03.320 226833 DEBUG oslo_concurrency.lockutils [req-dd9cd701-ec70-4508-95c9-9c28050ead3b req-2e58ca2d-73ec-46c5-80b0-aa9cae13ad6e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:03 compute-2 nova_compute[226829]: 2026-01-31 07:34:03.321 226833 DEBUG nova.compute.manager [req-dd9cd701-ec70-4508-95c9-9c28050ead3b req-2e58ca2d-73ec-46c5-80b0-aa9cae13ad6e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] No waiting events found dispatching network-vif-unplugged-4ba2a715-8194-4398-8d19-a3eb77acd9c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:34:03 compute-2 nova_compute[226829]: 2026-01-31 07:34:03.321 226833 DEBUG nova.compute.manager [req-dd9cd701-ec70-4508-95c9-9c28050ead3b req-2e58ca2d-73ec-46c5-80b0-aa9cae13ad6e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Received event network-vif-unplugged-4ba2a715-8194-4398-8d19-a3eb77acd9c6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:34:03 compute-2 nova_compute[226829]: 2026-01-31 07:34:03.331 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:03.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:03 compute-2 nova_compute[226829]: 2026-01-31 07:34:03.776 226833 INFO nova.virt.libvirt.driver [None req-286465e6-87ac-4af4-8039-56e1c16257ac 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Deleting instance files /var/lib/nova/instances/4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8_del
Jan 31 07:34:03 compute-2 nova_compute[226829]: 2026-01-31 07:34:03.777 226833 INFO nova.virt.libvirt.driver [None req-286465e6-87ac-4af4-8039-56e1c16257ac 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Deletion of /var/lib/nova/instances/4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8_del complete
Jan 31 07:34:03 compute-2 ceph-mon[77282]: pgmap v994: 305 pgs: 305 active+clean; 522 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 3.8 MiB/s wr, 275 op/s
Jan 31 07:34:03 compute-2 nova_compute[226829]: 2026-01-31 07:34:03.871 226833 INFO nova.compute.manager [None req-286465e6-87ac-4af4-8039-56e1c16257ac 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Took 1.43 seconds to destroy the instance on the hypervisor.
Jan 31 07:34:03 compute-2 nova_compute[226829]: 2026-01-31 07:34:03.872 226833 DEBUG oslo.service.loopingcall [None req-286465e6-87ac-4af4-8039-56e1c16257ac 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:34:03 compute-2 nova_compute[226829]: 2026-01-31 07:34:03.873 226833 DEBUG nova.compute.manager [-] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:34:03 compute-2 nova_compute[226829]: 2026-01-31 07:34:03.873 226833 DEBUG nova.network.neutron [-] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:34:04 compute-2 podman[233526]: 2026-01-31 07:34:04.195020893 +0000 UTC m=+0.074176039 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:34:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:34:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2122459944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:05 compute-2 nova_compute[226829]: 2026-01-31 07:34:05.050 226833 DEBUG nova.network.neutron [-] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:34:05 compute-2 nova_compute[226829]: 2026-01-31 07:34:05.190 226833 DEBUG nova.compute.manager [req-052f80a7-23a5-4d74-8af2-5d427fba4759 req-d6503b31-a2f3-4d3b-9440-fb6d9bd75933 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Received event network-vif-deleted-4ba2a715-8194-4398-8d19-a3eb77acd9c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:34:05 compute-2 nova_compute[226829]: 2026-01-31 07:34:05.191 226833 INFO nova.compute.manager [req-052f80a7-23a5-4d74-8af2-5d427fba4759 req-d6503b31-a2f3-4d3b-9440-fb6d9bd75933 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Neutron deleted interface 4ba2a715-8194-4398-8d19-a3eb77acd9c6; detaching it from the instance and deleting it from the info cache
Jan 31 07:34:05 compute-2 nova_compute[226829]: 2026-01-31 07:34:05.192 226833 DEBUG nova.network.neutron [req-052f80a7-23a5-4d74-8af2-5d427fba4759 req-d6503b31-a2f3-4d3b-9440-fb6d9bd75933 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:34:05 compute-2 nova_compute[226829]: 2026-01-31 07:34:05.212 226833 INFO nova.compute.manager [-] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Took 1.34 seconds to deallocate network for instance.
Jan 31 07:34:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:34:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:05.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:34:05 compute-2 nova_compute[226829]: 2026-01-31 07:34:05.378 226833 DEBUG nova.compute.manager [req-052f80a7-23a5-4d74-8af2-5d427fba4759 req-d6503b31-a2f3-4d3b-9440-fb6d9bd75933 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Detach interface failed, port_id=4ba2a715-8194-4398-8d19-a3eb77acd9c6, reason: Instance 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 07:34:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:34:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:05.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:34:05 compute-2 nova_compute[226829]: 2026-01-31 07:34:05.602 226833 DEBUG oslo_concurrency.lockutils [None req-286465e6-87ac-4af4-8039-56e1c16257ac 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:05 compute-2 nova_compute[226829]: 2026-01-31 07:34:05.603 226833 DEBUG oslo_concurrency.lockutils [None req-286465e6-87ac-4af4-8039-56e1c16257ac 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:05 compute-2 nova_compute[226829]: 2026-01-31 07:34:05.670 226833 DEBUG oslo_concurrency.processutils [None req-286465e6-87ac-4af4-8039-56e1c16257ac 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:34:05 compute-2 ceph-mon[77282]: pgmap v995: 305 pgs: 305 active+clean; 508 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.3 MiB/s wr, 288 op/s
Jan 31 07:34:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4057250205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:05 compute-2 nova_compute[226829]: 2026-01-31 07:34:05.900 226833 DEBUG nova.compute.manager [req-ad2599ed-ca03-4058-9758-a5d666bdd20f req-405f81ec-2dc8-4f55-97bb-2873a95ae8d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Received event network-vif-plugged-4ba2a715-8194-4398-8d19-a3eb77acd9c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:34:05 compute-2 nova_compute[226829]: 2026-01-31 07:34:05.901 226833 DEBUG oslo_concurrency.lockutils [req-ad2599ed-ca03-4058-9758-a5d666bdd20f req-405f81ec-2dc8-4f55-97bb-2873a95ae8d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:05 compute-2 nova_compute[226829]: 2026-01-31 07:34:05.901 226833 DEBUG oslo_concurrency.lockutils [req-ad2599ed-ca03-4058-9758-a5d666bdd20f req-405f81ec-2dc8-4f55-97bb-2873a95ae8d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:05 compute-2 nova_compute[226829]: 2026-01-31 07:34:05.902 226833 DEBUG oslo_concurrency.lockutils [req-ad2599ed-ca03-4058-9758-a5d666bdd20f req-405f81ec-2dc8-4f55-97bb-2873a95ae8d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:05 compute-2 nova_compute[226829]: 2026-01-31 07:34:05.902 226833 DEBUG nova.compute.manager [req-ad2599ed-ca03-4058-9758-a5d666bdd20f req-405f81ec-2dc8-4f55-97bb-2873a95ae8d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] No waiting events found dispatching network-vif-plugged-4ba2a715-8194-4398-8d19-a3eb77acd9c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:34:05 compute-2 nova_compute[226829]: 2026-01-31 07:34:05.902 226833 WARNING nova.compute.manager [req-ad2599ed-ca03-4058-9758-a5d666bdd20f req-405f81ec-2dc8-4f55-97bb-2873a95ae8d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Received unexpected event network-vif-plugged-4ba2a715-8194-4398-8d19-a3eb77acd9c6 for instance with vm_state deleted and task_state None.
Jan 31 07:34:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:34:06 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3395463091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:06 compute-2 nova_compute[226829]: 2026-01-31 07:34:06.140 226833 DEBUG oslo_concurrency.processutils [None req-286465e6-87ac-4af4-8039-56e1c16257ac 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:34:06 compute-2 nova_compute[226829]: 2026-01-31 07:34:06.146 226833 DEBUG nova.compute.provider_tree [None req-286465e6-87ac-4af4-8039-56e1c16257ac 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:34:06 compute-2 nova_compute[226829]: 2026-01-31 07:34:06.402 226833 DEBUG nova.scheduler.client.report [None req-286465e6-87ac-4af4-8039-56e1c16257ac 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:34:06 compute-2 nova_compute[226829]: 2026-01-31 07:34:06.549 226833 DEBUG oslo_concurrency.lockutils [None req-286465e6-87ac-4af4-8039-56e1c16257ac 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:06 compute-2 nova_compute[226829]: 2026-01-31 07:34:06.589 226833 INFO nova.scheduler.client.report [None req-286465e6-87ac-4af4-8039-56e1c16257ac 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Deleted allocations for instance 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8
Jan 31 07:34:06 compute-2 nova_compute[226829]: 2026-01-31 07:34:06.700 226833 DEBUG oslo_concurrency.lockutils [None req-286465e6-87ac-4af4-8039-56e1c16257ac 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.260s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:34:06.838 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:34:06.839 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:34:06.839 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3395463091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:07.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:07 compute-2 sudo[233570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:34:07 compute-2 sudo[233570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:07 compute-2 sudo[233570]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:07 compute-2 sudo[233595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:34:07 compute-2 sudo[233595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:07 compute-2 sudo[233595]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:34:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:07.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:34:07 compute-2 nova_compute[226829]: 2026-01-31 07:34:07.749 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:07 compute-2 ceph-mon[77282]: pgmap v996: 305 pgs: 305 active+clean; 495 MiB data, 519 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.3 MiB/s wr, 304 op/s
Jan 31 07:34:08 compute-2 nova_compute[226829]: 2026-01-31 07:34:08.334 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e141 e141: 3 total, 3 up, 3 in
Jan 31 07:34:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:09.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:34:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:09.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:34:09 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3166809655' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:34:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:34:09 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3166809655' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:34:10 compute-2 ceph-mon[77282]: pgmap v997: 305 pgs: 305 active+clean; 484 MiB data, 513 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.3 MiB/s wr, 311 op/s
Jan 31 07:34:10 compute-2 ceph-mon[77282]: osdmap e141: 3 total, 3 up, 3 in
Jan 31 07:34:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1687764905' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:34:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3166809655' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:34:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3166809655' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:34:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:11.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1560336513' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:34:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1505288083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:11.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:12 compute-2 ceph-mon[77282]: pgmap v999: 305 pgs: 305 active+clean; 474 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.3 MiB/s wr, 277 op/s
Jan 31 07:34:12 compute-2 nova_compute[226829]: 2026-01-31 07:34:12.752 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:13.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:13 compute-2 nova_compute[226829]: 2026-01-31 07:34:13.388 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:34:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:13.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:34:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e142 e142: 3 total, 3 up, 3 in
Jan 31 07:34:14 compute-2 ceph-mon[77282]: pgmap v1000: 305 pgs: 305 active+clean; 428 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 685 KiB/s wr, 185 op/s
Jan 31 07:34:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:34:15 compute-2 ceph-mon[77282]: osdmap e142: 3 total, 3 up, 3 in
Jan 31 07:34:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1040447222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2027725969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:15.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:34:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:15.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:34:16 compute-2 ceph-mon[77282]: pgmap v1002: 305 pgs: 305 active+clean; 259 MiB data, 387 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 12 KiB/s wr, 262 op/s
Jan 31 07:34:16 compute-2 nova_compute[226829]: 2026-01-31 07:34:16.722 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:34:16.721 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:34:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:34:16.724 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:34:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:34:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:17.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:34:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3529144254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:34:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:17.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:34:17 compute-2 nova_compute[226829]: 2026-01-31 07:34:17.668 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844842.6670632, 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:34:17 compute-2 nova_compute[226829]: 2026-01-31 07:34:17.669 226833 INFO nova.compute.manager [-] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] VM Stopped (Lifecycle Event)
Jan 31 07:34:17 compute-2 nova_compute[226829]: 2026-01-31 07:34:17.690 226833 DEBUG nova.compute.manager [None req-9d99c898-dd49-40b8-8f11-33d6e1c9da0c - - - - - -] [instance: 4bf22b10-4c79-4ec2-8ec0-34c2cf9265b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:34:17 compute-2 nova_compute[226829]: 2026-01-31 07:34:17.754 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:18 compute-2 nova_compute[226829]: 2026-01-31 07:34:18.391 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:18 compute-2 ceph-mon[77282]: pgmap v1003: 305 pgs: 305 active+clean; 217 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 7.1 KiB/s wr, 300 op/s
Jan 31 07:34:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:19.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:34:19 compute-2 ceph-mon[77282]: pgmap v1004: 305 pgs: 305 active+clean; 200 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.5 KiB/s wr, 257 op/s
Jan 31 07:34:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:19.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:34:19.727 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:34:19 compute-2 nova_compute[226829]: 2026-01-31 07:34:19.994 226833 DEBUG oslo_concurrency.lockutils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "34693a4b-4cec-41ed-a872-facd378ad627" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:19 compute-2 nova_compute[226829]: 2026-01-31 07:34:19.995 226833 DEBUG oslo_concurrency.lockutils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "34693a4b-4cec-41ed-a872-facd378ad627" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:20 compute-2 nova_compute[226829]: 2026-01-31 07:34:20.012 226833 DEBUG nova.compute.manager [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:34:20 compute-2 nova_compute[226829]: 2026-01-31 07:34:20.144 226833 DEBUG oslo_concurrency.lockutils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:20 compute-2 nova_compute[226829]: 2026-01-31 07:34:20.145 226833 DEBUG oslo_concurrency.lockutils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:20 compute-2 nova_compute[226829]: 2026-01-31 07:34:20.154 226833 DEBUG nova.virt.hardware [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:34:20 compute-2 nova_compute[226829]: 2026-01-31 07:34:20.155 226833 INFO nova.compute.claims [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:34:20 compute-2 nova_compute[226829]: 2026-01-31 07:34:20.316 226833 DEBUG oslo_concurrency.processutils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:34:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:34:20 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2465457816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:20 compute-2 nova_compute[226829]: 2026-01-31 07:34:20.825 226833 DEBUG oslo_concurrency.processutils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:34:20 compute-2 nova_compute[226829]: 2026-01-31 07:34:20.832 226833 DEBUG nova.compute.provider_tree [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:34:20 compute-2 nova_compute[226829]: 2026-01-31 07:34:20.885 226833 DEBUG nova.scheduler.client.report [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:34:20 compute-2 nova_compute[226829]: 2026-01-31 07:34:20.923 226833 DEBUG oslo_concurrency.lockutils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:20 compute-2 nova_compute[226829]: 2026-01-31 07:34:20.924 226833 DEBUG nova.compute.manager [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:34:20 compute-2 nova_compute[226829]: 2026-01-31 07:34:20.984 226833 DEBUG nova.compute.manager [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:34:20 compute-2 nova_compute[226829]: 2026-01-31 07:34:20.984 226833 DEBUG nova.network.neutron [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:34:21 compute-2 nova_compute[226829]: 2026-01-31 07:34:21.004 226833 INFO nova.virt.libvirt.driver [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:34:21 compute-2 nova_compute[226829]: 2026-01-31 07:34:21.029 226833 DEBUG nova.compute.manager [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:34:21 compute-2 nova_compute[226829]: 2026-01-31 07:34:21.122 226833 DEBUG nova.compute.manager [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:34:21 compute-2 nova_compute[226829]: 2026-01-31 07:34:21.124 226833 DEBUG nova.virt.libvirt.driver [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:34:21 compute-2 nova_compute[226829]: 2026-01-31 07:34:21.124 226833 INFO nova.virt.libvirt.driver [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Creating image(s)
Jan 31 07:34:21 compute-2 nova_compute[226829]: 2026-01-31 07:34:21.159 226833 DEBUG nova.storage.rbd_utils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rbd image 34693a4b-4cec-41ed-a872-facd378ad627_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:34:21 compute-2 nova_compute[226829]: 2026-01-31 07:34:21.189 226833 DEBUG nova.storage.rbd_utils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rbd image 34693a4b-4cec-41ed-a872-facd378ad627_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:34:21 compute-2 nova_compute[226829]: 2026-01-31 07:34:21.217 226833 DEBUG nova.storage.rbd_utils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rbd image 34693a4b-4cec-41ed-a872-facd378ad627_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:34:21 compute-2 nova_compute[226829]: 2026-01-31 07:34:21.220 226833 DEBUG oslo_concurrency.processutils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:34:21 compute-2 nova_compute[226829]: 2026-01-31 07:34:21.271 226833 DEBUG oslo_concurrency.processutils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:34:21 compute-2 nova_compute[226829]: 2026-01-31 07:34:21.273 226833 DEBUG oslo_concurrency.lockutils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:21 compute-2 nova_compute[226829]: 2026-01-31 07:34:21.273 226833 DEBUG oslo_concurrency.lockutils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:21 compute-2 nova_compute[226829]: 2026-01-31 07:34:21.274 226833 DEBUG oslo_concurrency.lockutils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:34:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:21.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:34:21 compute-2 nova_compute[226829]: 2026-01-31 07:34:21.310 226833 DEBUG nova.storage.rbd_utils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rbd image 34693a4b-4cec-41ed-a872-facd378ad627_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:34:21 compute-2 nova_compute[226829]: 2026-01-31 07:34:21.314 226833 DEBUG oslo_concurrency.processutils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 34693a4b-4cec-41ed-a872-facd378ad627_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:34:21 compute-2 nova_compute[226829]: 2026-01-31 07:34:21.468 226833 DEBUG nova.network.neutron [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 31 07:34:21 compute-2 nova_compute[226829]: 2026-01-31 07:34:21.468 226833 DEBUG nova.compute.manager [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:34:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:34:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:21.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:34:21 compute-2 ceph-mon[77282]: pgmap v1005: 305 pgs: 305 active+clean; 200 MiB data, 356 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.8 KiB/s wr, 209 op/s
Jan 31 07:34:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2465457816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:21 compute-2 nova_compute[226829]: 2026-01-31 07:34:21.926 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:22 compute-2 nova_compute[226829]: 2026-01-31 07:34:22.093 226833 DEBUG oslo_concurrency.processutils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 34693a4b-4cec-41ed-a872-facd378ad627_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.780s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:34:22 compute-2 nova_compute[226829]: 2026-01-31 07:34:22.182 226833 DEBUG nova.storage.rbd_utils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] resizing rbd image 34693a4b-4cec-41ed-a872-facd378ad627_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:34:22 compute-2 nova_compute[226829]: 2026-01-31 07:34:22.603 226833 DEBUG nova.objects.instance [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'migration_context' on Instance uuid 34693a4b-4cec-41ed-a872-facd378ad627 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:34:22 compute-2 nova_compute[226829]: 2026-01-31 07:34:22.681 226833 DEBUG nova.virt.libvirt.driver [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:34:22 compute-2 nova_compute[226829]: 2026-01-31 07:34:22.681 226833 DEBUG nova.virt.libvirt.driver [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Ensure instance console log exists: /var/lib/nova/instances/34693a4b-4cec-41ed-a872-facd378ad627/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:34:22 compute-2 nova_compute[226829]: 2026-01-31 07:34:22.682 226833 DEBUG oslo_concurrency.lockutils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:22 compute-2 nova_compute[226829]: 2026-01-31 07:34:22.682 226833 DEBUG oslo_concurrency.lockutils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:22 compute-2 nova_compute[226829]: 2026-01-31 07:34:22.682 226833 DEBUG oslo_concurrency.lockutils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:22 compute-2 nova_compute[226829]: 2026-01-31 07:34:22.684 226833 DEBUG nova.virt.libvirt.driver [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:34:22 compute-2 nova_compute[226829]: 2026-01-31 07:34:22.688 226833 WARNING nova.virt.libvirt.driver [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:34:22 compute-2 nova_compute[226829]: 2026-01-31 07:34:22.693 226833 DEBUG nova.virt.libvirt.host [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:34:22 compute-2 nova_compute[226829]: 2026-01-31 07:34:22.694 226833 DEBUG nova.virt.libvirt.host [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:34:22 compute-2 nova_compute[226829]: 2026-01-31 07:34:22.697 226833 DEBUG nova.virt.libvirt.host [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:34:22 compute-2 nova_compute[226829]: 2026-01-31 07:34:22.698 226833 DEBUG nova.virt.libvirt.host [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:34:22 compute-2 nova_compute[226829]: 2026-01-31 07:34:22.700 226833 DEBUG nova.virt.libvirt.driver [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:34:22 compute-2 nova_compute[226829]: 2026-01-31 07:34:22.701 226833 DEBUG nova.virt.hardware [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:34:22 compute-2 nova_compute[226829]: 2026-01-31 07:34:22.702 226833 DEBUG nova.virt.hardware [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:34:22 compute-2 nova_compute[226829]: 2026-01-31 07:34:22.703 226833 DEBUG nova.virt.hardware [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:34:22 compute-2 nova_compute[226829]: 2026-01-31 07:34:22.704 226833 DEBUG nova.virt.hardware [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:34:22 compute-2 nova_compute[226829]: 2026-01-31 07:34:22.704 226833 DEBUG nova.virt.hardware [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:34:22 compute-2 nova_compute[226829]: 2026-01-31 07:34:22.705 226833 DEBUG nova.virt.hardware [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:34:22 compute-2 nova_compute[226829]: 2026-01-31 07:34:22.706 226833 DEBUG nova.virt.hardware [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:34:22 compute-2 nova_compute[226829]: 2026-01-31 07:34:22.706 226833 DEBUG nova.virt.hardware [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:34:22 compute-2 nova_compute[226829]: 2026-01-31 07:34:22.707 226833 DEBUG nova.virt.hardware [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:34:22 compute-2 nova_compute[226829]: 2026-01-31 07:34:22.707 226833 DEBUG nova.virt.hardware [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:34:22 compute-2 nova_compute[226829]: 2026-01-31 07:34:22.708 226833 DEBUG nova.virt.hardware [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:34:22 compute-2 nova_compute[226829]: 2026-01-31 07:34:22.713 226833 DEBUG oslo_concurrency.processutils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:34:22 compute-2 nova_compute[226829]: 2026-01-31 07:34:22.756 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:34:23 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1668458985' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:34:23 compute-2 nova_compute[226829]: 2026-01-31 07:34:23.192 226833 DEBUG oslo_concurrency.processutils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:34:23 compute-2 nova_compute[226829]: 2026-01-31 07:34:23.222 226833 DEBUG nova.storage.rbd_utils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rbd image 34693a4b-4cec-41ed-a872-facd378ad627_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:34:23 compute-2 nova_compute[226829]: 2026-01-31 07:34:23.226 226833 DEBUG oslo_concurrency.processutils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:34:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:34:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:23.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:34:23 compute-2 nova_compute[226829]: 2026-01-31 07:34:23.393 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e143 e143: 3 total, 3 up, 3 in
Jan 31 07:34:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:23.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:34:23 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3704704504' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:34:23 compute-2 nova_compute[226829]: 2026-01-31 07:34:23.678 226833 DEBUG oslo_concurrency.processutils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:34:23 compute-2 nova_compute[226829]: 2026-01-31 07:34:23.679 226833 DEBUG nova.objects.instance [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'pci_devices' on Instance uuid 34693a4b-4cec-41ed-a872-facd378ad627 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:34:23 compute-2 nova_compute[226829]: 2026-01-31 07:34:23.706 226833 DEBUG nova.virt.libvirt.driver [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:34:23 compute-2 nova_compute[226829]:   <uuid>34693a4b-4cec-41ed-a872-facd378ad627</uuid>
Jan 31 07:34:23 compute-2 nova_compute[226829]:   <name>instance-0000000f</name>
Jan 31 07:34:23 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:34:23 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:34:23 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:34:23 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:       <nova:name>tempest-MigrationsAdminTest-server-919055966</nova:name>
Jan 31 07:34:23 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:34:22</nova:creationTime>
Jan 31 07:34:23 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:34:23 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:34:23 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:34:23 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:34:23 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:34:23 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:34:23 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:34:23 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:34:23 compute-2 nova_compute[226829]:         <nova:user uuid="71f887fd92fb486a959e5ca100cb1e10">tempest-MigrationsAdminTest-137263588-project-member</nova:user>
Jan 31 07:34:23 compute-2 nova_compute[226829]:         <nova:project uuid="7c1ddd67115f4f7bab056dbb2f270ccc">tempest-MigrationsAdminTest-137263588</nova:project>
Jan 31 07:34:23 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:34:23 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:       <nova:ports/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:34:23 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:34:23 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <system>
Jan 31 07:34:23 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:34:23 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:34:23 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:34:23 compute-2 nova_compute[226829]:       <entry name="serial">34693a4b-4cec-41ed-a872-facd378ad627</entry>
Jan 31 07:34:23 compute-2 nova_compute[226829]:       <entry name="uuid">34693a4b-4cec-41ed-a872-facd378ad627</entry>
Jan 31 07:34:23 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     </system>
Jan 31 07:34:23 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:34:23 compute-2 nova_compute[226829]:   <os>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:   </os>
Jan 31 07:34:23 compute-2 nova_compute[226829]:   <features>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:   </features>
Jan 31 07:34:23 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:34:23 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:34:23 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:34:23 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/34693a4b-4cec-41ed-a872-facd378ad627_disk">
Jan 31 07:34:23 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:       </source>
Jan 31 07:34:23 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:34:23 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:34:23 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:34:23 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/34693a4b-4cec-41ed-a872-facd378ad627_disk.config">
Jan 31 07:34:23 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:       </source>
Jan 31 07:34:23 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:34:23 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:34:23 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:34:23 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/34693a4b-4cec-41ed-a872-facd378ad627/console.log" append="off"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <video>
Jan 31 07:34:23 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     </video>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:34:23 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:34:23 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:34:23 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:34:23 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:34:23 compute-2 nova_compute[226829]: </domain>
Jan 31 07:34:23 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:34:23 compute-2 nova_compute[226829]: 2026-01-31 07:34:23.760 226833 DEBUG nova.virt.libvirt.driver [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:34:23 compute-2 nova_compute[226829]: 2026-01-31 07:34:23.760 226833 DEBUG nova.virt.libvirt.driver [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:34:23 compute-2 nova_compute[226829]: 2026-01-31 07:34:23.761 226833 INFO nova.virt.libvirt.driver [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Using config drive
Jan 31 07:34:23 compute-2 nova_compute[226829]: 2026-01-31 07:34:23.786 226833 DEBUG nova.storage.rbd_utils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rbd image 34693a4b-4cec-41ed-a872-facd378ad627_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:34:24 compute-2 nova_compute[226829]: 2026-01-31 07:34:24.021 226833 INFO nova.virt.libvirt.driver [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Creating config drive at /var/lib/nova/instances/34693a4b-4cec-41ed-a872-facd378ad627/disk.config
Jan 31 07:34:24 compute-2 nova_compute[226829]: 2026-01-31 07:34:24.026 226833 DEBUG oslo_concurrency.processutils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/34693a4b-4cec-41ed-a872-facd378ad627/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpgcw9xxll execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:34:24 compute-2 ceph-mon[77282]: pgmap v1006: 305 pgs: 305 active+clean; 200 MiB data, 356 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.7 KiB/s wr, 179 op/s
Jan 31 07:34:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1668458985' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:34:24 compute-2 ceph-mon[77282]: osdmap e143: 3 total, 3 up, 3 in
Jan 31 07:34:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3704704504' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:34:24 compute-2 nova_compute[226829]: 2026-01-31 07:34:24.144 226833 DEBUG oslo_concurrency.processutils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/34693a4b-4cec-41ed-a872-facd378ad627/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpgcw9xxll" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:34:24 compute-2 nova_compute[226829]: 2026-01-31 07:34:24.165 226833 DEBUG nova.storage.rbd_utils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rbd image 34693a4b-4cec-41ed-a872-facd378ad627_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:34:24 compute-2 nova_compute[226829]: 2026-01-31 07:34:24.167 226833 DEBUG oslo_concurrency.processutils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/34693a4b-4cec-41ed-a872-facd378ad627/disk.config 34693a4b-4cec-41ed-a872-facd378ad627_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:34:24 compute-2 nova_compute[226829]: 2026-01-31 07:34:24.320 226833 DEBUG oslo_concurrency.lockutils [None req-d395a2e7-29de-467c-9a70-53670079f15a 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "4a1447d0-8c0f-428b-b815-978ebbd1b48a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:24 compute-2 nova_compute[226829]: 2026-01-31 07:34:24.321 226833 DEBUG oslo_concurrency.lockutils [None req-d395a2e7-29de-467c-9a70-53670079f15a 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "4a1447d0-8c0f-428b-b815-978ebbd1b48a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:24 compute-2 nova_compute[226829]: 2026-01-31 07:34:24.321 226833 DEBUG oslo_concurrency.lockutils [None req-d395a2e7-29de-467c-9a70-53670079f15a 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "4a1447d0-8c0f-428b-b815-978ebbd1b48a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:24 compute-2 nova_compute[226829]: 2026-01-31 07:34:24.322 226833 DEBUG oslo_concurrency.lockutils [None req-d395a2e7-29de-467c-9a70-53670079f15a 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "4a1447d0-8c0f-428b-b815-978ebbd1b48a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:24 compute-2 nova_compute[226829]: 2026-01-31 07:34:24.322 226833 DEBUG oslo_concurrency.lockutils [None req-d395a2e7-29de-467c-9a70-53670079f15a 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "4a1447d0-8c0f-428b-b815-978ebbd1b48a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:24 compute-2 nova_compute[226829]: 2026-01-31 07:34:24.325 226833 INFO nova.compute.manager [None req-d395a2e7-29de-467c-9a70-53670079f15a 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Terminating instance
Jan 31 07:34:24 compute-2 nova_compute[226829]: 2026-01-31 07:34:24.326 226833 DEBUG oslo_concurrency.lockutils [None req-d395a2e7-29de-467c-9a70-53670079f15a 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "refresh_cache-4a1447d0-8c0f-428b-b815-978ebbd1b48a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:34:24 compute-2 nova_compute[226829]: 2026-01-31 07:34:24.327 226833 DEBUG oslo_concurrency.lockutils [None req-d395a2e7-29de-467c-9a70-53670079f15a 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquired lock "refresh_cache-4a1447d0-8c0f-428b-b815-978ebbd1b48a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:34:24 compute-2 nova_compute[226829]: 2026-01-31 07:34:24.327 226833 DEBUG nova.network.neutron [None req-d395a2e7-29de-467c-9a70-53670079f15a 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:34:24 compute-2 nova_compute[226829]: 2026-01-31 07:34:24.537 226833 DEBUG nova.network.neutron [None req-d395a2e7-29de-467c-9a70-53670079f15a 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:34:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:34:24 compute-2 nova_compute[226829]: 2026-01-31 07:34:24.652 226833 DEBUG oslo_concurrency.processutils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/34693a4b-4cec-41ed-a872-facd378ad627/disk.config 34693a4b-4cec-41ed-a872-facd378ad627_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:34:24 compute-2 nova_compute[226829]: 2026-01-31 07:34:24.653 226833 INFO nova.virt.libvirt.driver [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Deleting local config drive /var/lib/nova/instances/34693a4b-4cec-41ed-a872-facd378ad627/disk.config because it was imported into RBD.
Jan 31 07:34:24 compute-2 systemd-machined[195142]: New machine qemu-5-instance-0000000f.
Jan 31 07:34:24 compute-2 systemd[1]: Started Virtual Machine qemu-5-instance-0000000f.
Jan 31 07:34:24 compute-2 podman[233943]: 2026-01-31 07:34:24.809440506 +0000 UTC m=+0.094538667 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 07:34:24 compute-2 nova_compute[226829]: 2026-01-31 07:34:24.908 226833 DEBUG nova.network.neutron [None req-d395a2e7-29de-467c-9a70-53670079f15a 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:34:24 compute-2 nova_compute[226829]: 2026-01-31 07:34:24.926 226833 DEBUG oslo_concurrency.lockutils [None req-d395a2e7-29de-467c-9a70-53670079f15a 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Releasing lock "refresh_cache-4a1447d0-8c0f-428b-b815-978ebbd1b48a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:34:24 compute-2 nova_compute[226829]: 2026-01-31 07:34:24.927 226833 DEBUG nova.compute.manager [None req-d395a2e7-29de-467c-9a70-53670079f15a 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:34:25 compute-2 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000008.scope: Deactivated successfully.
Jan 31 07:34:25 compute-2 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000008.scope: Consumed 15.895s CPU time.
Jan 31 07:34:25 compute-2 systemd-machined[195142]: Machine qemu-3-instance-00000008 terminated.
Jan 31 07:34:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:25.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:25 compute-2 nova_compute[226829]: 2026-01-31 07:34:25.344 226833 INFO nova.virt.libvirt.driver [-] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Instance destroyed successfully.
Jan 31 07:34:25 compute-2 nova_compute[226829]: 2026-01-31 07:34:25.345 226833 DEBUG nova.objects.instance [None req-d395a2e7-29de-467c-9a70-53670079f15a 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lazy-loading 'resources' on Instance uuid 4a1447d0-8c0f-428b-b815-978ebbd1b48a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:34:25 compute-2 nova_compute[226829]: 2026-01-31 07:34:25.360 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769844865.3598247, 34693a4b-4cec-41ed-a872-facd378ad627 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:34:25 compute-2 nova_compute[226829]: 2026-01-31 07:34:25.360 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] VM Resumed (Lifecycle Event)
Jan 31 07:34:25 compute-2 nova_compute[226829]: 2026-01-31 07:34:25.363 226833 DEBUG nova.compute.manager [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:34:25 compute-2 nova_compute[226829]: 2026-01-31 07:34:25.363 226833 DEBUG nova.virt.libvirt.driver [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:34:25 compute-2 nova_compute[226829]: 2026-01-31 07:34:25.366 226833 INFO nova.virt.libvirt.driver [-] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Instance spawned successfully.
Jan 31 07:34:25 compute-2 nova_compute[226829]: 2026-01-31 07:34:25.366 226833 DEBUG nova.virt.libvirt.driver [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:34:25 compute-2 nova_compute[226829]: 2026-01-31 07:34:25.405 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:34:25 compute-2 nova_compute[226829]: 2026-01-31 07:34:25.411 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:34:25 compute-2 nova_compute[226829]: 2026-01-31 07:34:25.414 226833 DEBUG nova.virt.libvirt.driver [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:34:25 compute-2 nova_compute[226829]: 2026-01-31 07:34:25.414 226833 DEBUG nova.virt.libvirt.driver [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:34:25 compute-2 nova_compute[226829]: 2026-01-31 07:34:25.415 226833 DEBUG nova.virt.libvirt.driver [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:34:25 compute-2 nova_compute[226829]: 2026-01-31 07:34:25.415 226833 DEBUG nova.virt.libvirt.driver [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:34:25 compute-2 nova_compute[226829]: 2026-01-31 07:34:25.416 226833 DEBUG nova.virt.libvirt.driver [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:34:25 compute-2 nova_compute[226829]: 2026-01-31 07:34:25.416 226833 DEBUG nova.virt.libvirt.driver [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:34:25 compute-2 nova_compute[226829]: 2026-01-31 07:34:25.429 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:34:25 compute-2 nova_compute[226829]: 2026-01-31 07:34:25.430 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769844865.3612764, 34693a4b-4cec-41ed-a872-facd378ad627 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:34:25 compute-2 nova_compute[226829]: 2026-01-31 07:34:25.430 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] VM Started (Lifecycle Event)
Jan 31 07:34:25 compute-2 nova_compute[226829]: 2026-01-31 07:34:25.521 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:34:25 compute-2 nova_compute[226829]: 2026-01-31 07:34:25.524 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:34:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:25.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:25 compute-2 nova_compute[226829]: 2026-01-31 07:34:25.712 226833 INFO nova.compute.manager [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Took 4.59 seconds to spawn the instance on the hypervisor.
Jan 31 07:34:25 compute-2 nova_compute[226829]: 2026-01-31 07:34:25.712 226833 DEBUG nova.compute.manager [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:34:25 compute-2 nova_compute[226829]: 2026-01-31 07:34:25.717 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:34:26 compute-2 nova_compute[226829]: 2026-01-31 07:34:26.032 226833 INFO nova.compute.manager [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Took 5.93 seconds to build instance.
Jan 31 07:34:26 compute-2 nova_compute[226829]: 2026-01-31 07:34:26.463 226833 DEBUG oslo_concurrency.lockutils [None req-6c84aa6a-5a39-43a9-9349-673e3ca9ec30 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "34693a4b-4cec-41ed-a872-facd378ad627" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.468s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:26 compute-2 ceph-mon[77282]: pgmap v1008: 305 pgs: 305 active+clean; 239 MiB data, 372 MiB used, 21 GiB / 21 GiB avail; 870 KiB/s rd, 2.0 MiB/s wr, 103 op/s
Jan 31 07:34:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:34:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:27.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:34:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:27.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:27 compute-2 sudo[234046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:34:27 compute-2 sudo[234046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:27 compute-2 sudo[234046]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:27 compute-2 sudo[234071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:34:27 compute-2 sudo[234071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:27 compute-2 sudo[234071]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:27 compute-2 nova_compute[226829]: 2026-01-31 07:34:27.760 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:27 compute-2 ceph-mon[77282]: pgmap v1009: 305 pgs: 305 active+clean; 246 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 536 KiB/s rd, 2.1 MiB/s wr, 97 op/s
Jan 31 07:34:28 compute-2 nova_compute[226829]: 2026-01-31 07:34:28.395 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:29.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:34:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:29.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:30 compute-2 ceph-mon[77282]: pgmap v1010: 305 pgs: 305 active+clean; 234 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 133 op/s
Jan 31 07:34:30 compute-2 nova_compute[226829]: 2026-01-31 07:34:30.855 226833 DEBUG oslo_concurrency.lockutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "refresh_cache-34693a4b-4cec-41ed-a872-facd378ad627" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:34:30 compute-2 nova_compute[226829]: 2026-01-31 07:34:30.856 226833 DEBUG oslo_concurrency.lockutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquired lock "refresh_cache-34693a4b-4cec-41ed-a872-facd378ad627" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:34:30 compute-2 nova_compute[226829]: 2026-01-31 07:34:30.857 226833 DEBUG nova.network.neutron [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:34:31 compute-2 nova_compute[226829]: 2026-01-31 07:34:31.076 226833 DEBUG nova.network.neutron [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:34:31 compute-2 nova_compute[226829]: 2026-01-31 07:34:31.315 226833 DEBUG nova.network.neutron [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:34:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:31.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:31 compute-2 nova_compute[226829]: 2026-01-31 07:34:31.343 226833 DEBUG oslo_concurrency.lockutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Releasing lock "refresh_cache-34693a4b-4cec-41ed-a872-facd378ad627" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:34:31 compute-2 nova_compute[226829]: 2026-01-31 07:34:31.481 226833 DEBUG nova.virt.libvirt.driver [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Jan 31 07:34:31 compute-2 nova_compute[226829]: 2026-01-31 07:34:31.482 226833 DEBUG nova.virt.libvirt.volume.remotefs [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Creating file /var/lib/nova/instances/34693a4b-4cec-41ed-a872-facd378ad627/0f2a095e83054cfc8e14fcc36e127028.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Jan 31 07:34:31 compute-2 nova_compute[226829]: 2026-01-31 07:34:31.482 226833 DEBUG oslo_concurrency.processutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/34693a4b-4cec-41ed-a872-facd378ad627/0f2a095e83054cfc8e14fcc36e127028.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:34:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:31.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:31 compute-2 ceph-mon[77282]: pgmap v1011: 305 pgs: 305 active+clean; 189 MiB data, 347 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 189 op/s
Jan 31 07:34:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1600340645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:32 compute-2 nova_compute[226829]: 2026-01-31 07:34:32.029 226833 DEBUG oslo_concurrency.processutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/34693a4b-4cec-41ed-a872-facd378ad627/0f2a095e83054cfc8e14fcc36e127028.tmp" returned: 1 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:34:32 compute-2 nova_compute[226829]: 2026-01-31 07:34:32.030 226833 DEBUG oslo_concurrency.processutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/34693a4b-4cec-41ed-a872-facd378ad627/0f2a095e83054cfc8e14fcc36e127028.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 31 07:34:32 compute-2 nova_compute[226829]: 2026-01-31 07:34:32.030 226833 DEBUG nova.virt.libvirt.volume.remotefs [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Creating directory /var/lib/nova/instances/34693a4b-4cec-41ed-a872-facd378ad627 on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Jan 31 07:34:32 compute-2 nova_compute[226829]: 2026-01-31 07:34:32.030 226833 DEBUG oslo_concurrency.processutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/34693a4b-4cec-41ed-a872-facd378ad627 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:34:32 compute-2 nova_compute[226829]: 2026-01-31 07:34:32.218 226833 INFO nova.virt.libvirt.driver [None req-d395a2e7-29de-467c-9a70-53670079f15a 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Deleting instance files /var/lib/nova/instances/4a1447d0-8c0f-428b-b815-978ebbd1b48a_del
Jan 31 07:34:32 compute-2 nova_compute[226829]: 2026-01-31 07:34:32.219 226833 INFO nova.virt.libvirt.driver [None req-d395a2e7-29de-467c-9a70-53670079f15a 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Deletion of /var/lib/nova/instances/4a1447d0-8c0f-428b-b815-978ebbd1b48a_del complete
Jan 31 07:34:32 compute-2 nova_compute[226829]: 2026-01-31 07:34:32.224 226833 DEBUG oslo_concurrency.processutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/34693a4b-4cec-41ed-a872-facd378ad627" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:34:32 compute-2 nova_compute[226829]: 2026-01-31 07:34:32.227 226833 DEBUG nova.virt.libvirt.driver [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 07:34:32 compute-2 nova_compute[226829]: 2026-01-31 07:34:32.268 226833 INFO nova.compute.manager [None req-d395a2e7-29de-467c-9a70-53670079f15a 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Took 7.34 seconds to destroy the instance on the hypervisor.
Jan 31 07:34:32 compute-2 nova_compute[226829]: 2026-01-31 07:34:32.269 226833 DEBUG oslo.service.loopingcall [None req-d395a2e7-29de-467c-9a70-53670079f15a 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:34:32 compute-2 nova_compute[226829]: 2026-01-31 07:34:32.269 226833 DEBUG nova.compute.manager [-] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:34:32 compute-2 nova_compute[226829]: 2026-01-31 07:34:32.270 226833 DEBUG nova.network.neutron [-] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:34:32 compute-2 nova_compute[226829]: 2026-01-31 07:34:32.531 226833 DEBUG nova.network.neutron [-] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:34:32 compute-2 nova_compute[226829]: 2026-01-31 07:34:32.763 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:32 compute-2 nova_compute[226829]: 2026-01-31 07:34:32.972 226833 DEBUG nova.network.neutron [-] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:34:32 compute-2 nova_compute[226829]: 2026-01-31 07:34:32.996 226833 INFO nova.compute.manager [-] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Took 0.73 seconds to deallocate network for instance.
Jan 31 07:34:33 compute-2 nova_compute[226829]: 2026-01-31 07:34:33.049 226833 DEBUG oslo_concurrency.lockutils [None req-d395a2e7-29de-467c-9a70-53670079f15a 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:33 compute-2 nova_compute[226829]: 2026-01-31 07:34:33.049 226833 DEBUG oslo_concurrency.lockutils [None req-d395a2e7-29de-467c-9a70-53670079f15a 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:33 compute-2 nova_compute[226829]: 2026-01-31 07:34:33.129 226833 DEBUG oslo_concurrency.processutils [None req-d395a2e7-29de-467c-9a70-53670079f15a 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:34:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:34:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:33.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:34:33 compute-2 nova_compute[226829]: 2026-01-31 07:34:33.445 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:34:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:33.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:34:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:34:33 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1570788477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:33 compute-2 nova_compute[226829]: 2026-01-31 07:34:33.669 226833 DEBUG oslo_concurrency.processutils [None req-d395a2e7-29de-467c-9a70-53670079f15a 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:34:33 compute-2 nova_compute[226829]: 2026-01-31 07:34:33.674 226833 DEBUG nova.compute.provider_tree [None req-d395a2e7-29de-467c-9a70-53670079f15a 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:34:33 compute-2 nova_compute[226829]: 2026-01-31 07:34:33.687 226833 DEBUG nova.scheduler.client.report [None req-d395a2e7-29de-467c-9a70-53670079f15a 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:34:33 compute-2 nova_compute[226829]: 2026-01-31 07:34:33.705 226833 DEBUG oslo_concurrency.lockutils [None req-d395a2e7-29de-467c-9a70-53670079f15a 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:33 compute-2 nova_compute[226829]: 2026-01-31 07:34:33.733 226833 INFO nova.scheduler.client.report [None req-d395a2e7-29de-467c-9a70-53670079f15a 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Deleted allocations for instance 4a1447d0-8c0f-428b-b815-978ebbd1b48a
Jan 31 07:34:33 compute-2 sudo[234125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:34:33 compute-2 nova_compute[226829]: 2026-01-31 07:34:33.830 226833 DEBUG oslo_concurrency.lockutils [None req-d395a2e7-29de-467c-9a70-53670079f15a 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "4a1447d0-8c0f-428b-b815-978ebbd1b48a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:33 compute-2 sudo[234125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:33 compute-2 sudo[234125]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:33 compute-2 sudo[234150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:34:33 compute-2 sudo[234150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:33 compute-2 sudo[234150]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:33 compute-2 sudo[234175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:34:33 compute-2 sudo[234175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:33 compute-2 sudo[234175]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:33 compute-2 sudo[234200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 07:34:33 compute-2 sudo[234200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:34 compute-2 podman[234268]: 2026-01-31 07:34:34.396854788 +0000 UTC m=+0.062954121 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 07:34:34 compute-2 podman[234317]: 2026-01-31 07:34:34.519906732 +0000 UTC m=+0.078774770 container exec 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 07:34:34 compute-2 ceph-mon[77282]: pgmap v1012: 305 pgs: 305 active+clean; 175 MiB data, 339 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.2 MiB/s wr, 207 op/s
Jan 31 07:34:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1570788477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:34:34 compute-2 podman[234317]: 2026-01-31 07:34:34.67555581 +0000 UTC m=+0.234423828 container exec_died 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 07:34:35 compute-2 podman[234475]: 2026-01-31 07:34:35.271972189 +0000 UTC m=+0.059020576 container exec f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 07:34:35 compute-2 podman[234475]: 2026-01-31 07:34:35.313630134 +0000 UTC m=+0.100678511 container exec_died f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 07:34:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:35.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:35 compute-2 podman[234541]: 2026-01-31 07:34:35.511439251 +0000 UTC m=+0.060740552 container exec 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, vcs-type=git, io.openshift.expose-services=, release=1793)
Jan 31 07:34:35 compute-2 podman[234541]: 2026-01-31 07:34:35.522361541 +0000 UTC m=+0.071662872 container exec_died 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, build-date=2023-02-22T09:23:20, version=2.2.4, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, distribution-scope=public)
Jan 31 07:34:35 compute-2 sudo[234200]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:35.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:35 compute-2 ceph-mon[77282]: pgmap v1013: 305 pgs: 305 active+clean; 169 MiB data, 327 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.6 MiB/s wr, 176 op/s
Jan 31 07:34:35 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:34:35 compute-2 sudo[234574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:34:35 compute-2 sudo[234574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:35 compute-2 sudo[234574]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:35 compute-2 sudo[234599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:34:35 compute-2 sudo[234599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:35 compute-2 sudo[234599]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:35 compute-2 sudo[234624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:34:35 compute-2 sudo[234624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:35 compute-2 sudo[234624]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:35 compute-2 sudo[234649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:34:35 compute-2 sudo[234649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:36 compute-2 sudo[234649]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:36 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:34:36 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:34:36 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:34:36 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:34:36 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:34:36 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:34:36 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:34:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:37.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:34:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:37.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:34:37 compute-2 nova_compute[226829]: 2026-01-31 07:34:37.765 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:37 compute-2 ceph-mon[77282]: pgmap v1014: 305 pgs: 305 active+clean; 169 MiB data, 327 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 148 KiB/s wr, 132 op/s
Jan 31 07:34:38 compute-2 nova_compute[226829]: 2026-01-31 07:34:38.447 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:39.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:34:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:39.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:40 compute-2 ceph-mon[77282]: pgmap v1015: 305 pgs: 305 active+clean; 177 MiB data, 333 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 523 KiB/s wr, 114 op/s
Jan 31 07:34:40 compute-2 nova_compute[226829]: 2026-01-31 07:34:40.343 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844865.3420882, 4a1447d0-8c0f-428b-b815-978ebbd1b48a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:34:40 compute-2 nova_compute[226829]: 2026-01-31 07:34:40.344 226833 INFO nova.compute.manager [-] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] VM Stopped (Lifecycle Event)
Jan 31 07:34:40 compute-2 nova_compute[226829]: 2026-01-31 07:34:40.632 226833 DEBUG nova.compute.manager [None req-0d4ff51a-aaba-4119-b4fe-8ede69dcde15 - - - - - -] [instance: 4a1447d0-8c0f-428b-b815-978ebbd1b48a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:34:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:41.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:41.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:41 compute-2 ceph-mon[77282]: pgmap v1016: 305 pgs: 305 active+clean; 184 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.4 MiB/s wr, 108 op/s
Jan 31 07:34:42 compute-2 nova_compute[226829]: 2026-01-31 07:34:42.275 226833 DEBUG nova.virt.libvirt.driver [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 31 07:34:42 compute-2 nova_compute[226829]: 2026-01-31 07:34:42.769 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:43.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:43 compute-2 nova_compute[226829]: 2026-01-31 07:34:43.449 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:43.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:44 compute-2 ceph-mon[77282]: pgmap v1017: 305 pgs: 305 active+clean; 198 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 31 07:34:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:34:45 compute-2 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Jan 31 07:34:45 compute-2 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000f.scope: Consumed 13.006s CPU time.
Jan 31 07:34:45 compute-2 systemd-machined[195142]: Machine qemu-5-instance-0000000f terminated.
Jan 31 07:34:45 compute-2 nova_compute[226829]: 2026-01-31 07:34:45.290 226833 INFO nova.virt.libvirt.driver [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Instance shutdown successfully after 13 seconds.
Jan 31 07:34:45 compute-2 nova_compute[226829]: 2026-01-31 07:34:45.299 226833 INFO nova.virt.libvirt.driver [-] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Instance destroyed successfully.
Jan 31 07:34:45 compute-2 nova_compute[226829]: 2026-01-31 07:34:45.304 226833 DEBUG nova.virt.libvirt.driver [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:34:45 compute-2 nova_compute[226829]: 2026-01-31 07:34:45.305 226833 DEBUG nova.virt.libvirt.driver [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:34:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2215771126' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:34:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2215771126' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:34:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:45.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:45 compute-2 nova_compute[226829]: 2026-01-31 07:34:45.442 226833 DEBUG oslo_concurrency.lockutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:34:45 compute-2 nova_compute[226829]: 2026-01-31 07:34:45.443 226833 DEBUG oslo_concurrency.lockutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:34:45 compute-2 nova_compute[226829]: 2026-01-31 07:34:45.466 226833 INFO nova.compute.rpcapi [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66
Jan 31 07:34:45 compute-2 nova_compute[226829]: 2026-01-31 07:34:45.467 226833 DEBUG oslo_concurrency.lockutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:34:45 compute-2 nova_compute[226829]: 2026-01-31 07:34:45.496 226833 DEBUG oslo_concurrency.lockutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "34693a4b-4cec-41ed-a872-facd378ad627-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:45 compute-2 nova_compute[226829]: 2026-01-31 07:34:45.497 226833 DEBUG oslo_concurrency.lockutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "34693a4b-4cec-41ed-a872-facd378ad627-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:45 compute-2 nova_compute[226829]: 2026-01-31 07:34:45.497 226833 DEBUG oslo_concurrency.lockutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "34693a4b-4cec-41ed-a872-facd378ad627-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:45.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:46 compute-2 ceph-mon[77282]: pgmap v1018: 305 pgs: 305 active+clean; 202 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 230 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 07:34:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:47.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:34:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:47.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:34:47 compute-2 sudo[234714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:34:47 compute-2 sudo[234714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:47 compute-2 sudo[234714]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:47 compute-2 nova_compute[226829]: 2026-01-31 07:34:47.771 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:47 compute-2 sudo[234739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:34:47 compute-2 sudo[234739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:47 compute-2 sudo[234739]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:47 compute-2 sudo[234746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:34:47 compute-2 sudo[234746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:47 compute-2 sudo[234746]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:47 compute-2 sudo[234789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:34:47 compute-2 sudo[234789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:47 compute-2 sudo[234789]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e144 e144: 3 total, 3 up, 3 in
Jan 31 07:34:48 compute-2 ceph-mon[77282]: pgmap v1019: 305 pgs: 305 active+clean; 202 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 230 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 07:34:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:34:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:34:48 compute-2 nova_compute[226829]: 2026-01-31 07:34:48.452 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:48 compute-2 nova_compute[226829]: 2026-01-31 07:34:48.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:34:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:49.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:49 compute-2 nova_compute[226829]: 2026-01-31 07:34:49.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:34:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:34:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:34:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:49.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:34:49 compute-2 ceph-mon[77282]: osdmap e144: 3 total, 3 up, 3 in
Jan 31 07:34:49 compute-2 ceph-mon[77282]: pgmap v1021: 305 pgs: 305 active+clean; 202 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 251 KiB/s rd, 2.0 MiB/s wr, 64 op/s
Jan 31 07:34:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3806352133' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:34:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2794457443' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:34:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:51.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:51.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:51 compute-2 ceph-mon[77282]: pgmap v1022: 305 pgs: 305 active+clean; 202 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 92 KiB/s rd, 916 KiB/s wr, 33 op/s
Jan 31 07:34:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1232797084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:52 compute-2 nova_compute[226829]: 2026-01-31 07:34:52.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:34:52 compute-2 nova_compute[226829]: 2026-01-31 07:34:52.492 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:34:52 compute-2 nova_compute[226829]: 2026-01-31 07:34:52.492 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:34:52 compute-2 nova_compute[226829]: 2026-01-31 07:34:52.691 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:34:52 compute-2 nova_compute[226829]: 2026-01-31 07:34:52.691 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:34:52 compute-2 nova_compute[226829]: 2026-01-31 07:34:52.773 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2232137789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1433445899' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:53.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:53 compute-2 nova_compute[226829]: 2026-01-31 07:34:53.454 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:53 compute-2 nova_compute[226829]: 2026-01-31 07:34:53.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:34:53 compute-2 nova_compute[226829]: 2026-01-31 07:34:53.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:34:53 compute-2 nova_compute[226829]: 2026-01-31 07:34:53.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:34:53 compute-2 nova_compute[226829]: 2026-01-31 07:34:53.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:34:53 compute-2 nova_compute[226829]: 2026-01-31 07:34:53.525 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:53 compute-2 nova_compute[226829]: 2026-01-31 07:34:53.526 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:53 compute-2 nova_compute[226829]: 2026-01-31 07:34:53.526 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:53 compute-2 nova_compute[226829]: 2026-01-31 07:34:53.526 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:34:53 compute-2 nova_compute[226829]: 2026-01-31 07:34:53.527 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:34:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:34:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:53.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:34:54 compute-2 nova_compute[226829]: 2026-01-31 07:34:54.036 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:34:54 compute-2 nova_compute[226829]: 2026-01-31 07:34:54.114 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:34:54 compute-2 nova_compute[226829]: 2026-01-31 07:34:54.115 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:34:54 compute-2 ceph-mon[77282]: pgmap v1023: 305 pgs: 305 active+clean; 202 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 73 KiB/s rd, 105 KiB/s wr, 30 op/s
Jan 31 07:34:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2380325277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1992311293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:54 compute-2 nova_compute[226829]: 2026-01-31 07:34:54.275 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:34:54 compute-2 nova_compute[226829]: 2026-01-31 07:34:54.276 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4884MB free_disk=20.897262573242188GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:34:54 compute-2 nova_compute[226829]: 2026-01-31 07:34:54.277 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:54 compute-2 nova_compute[226829]: 2026-01-31 07:34:54.277 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:54 compute-2 nova_compute[226829]: 2026-01-31 07:34:54.348 226833 DEBUG oslo_concurrency.lockutils [None req-d9cdd3c4-e286-42d6-a8a5-2657d944c7c3 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "34693a4b-4cec-41ed-a872-facd378ad627" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:54 compute-2 nova_compute[226829]: 2026-01-31 07:34:54.349 226833 DEBUG oslo_concurrency.lockutils [None req-d9cdd3c4-e286-42d6-a8a5-2657d944c7c3 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "34693a4b-4cec-41ed-a872-facd378ad627" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:54 compute-2 nova_compute[226829]: 2026-01-31 07:34:54.350 226833 DEBUG nova.compute.manager [None req-d9cdd3c4-e286-42d6-a8a5-2657d944c7c3 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Going to confirm migration 5 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Jan 31 07:34:54 compute-2 nova_compute[226829]: 2026-01-31 07:34:54.352 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Migration for instance 34693a4b-4cec-41ed-a872-facd378ad627 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Jan 31 07:34:54 compute-2 nova_compute[226829]: 2026-01-31 07:34:54.389 226833 INFO nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Updating resource usage from migration d2cd5cab-5814-4f5a-8fff-f5e2478f3078
Jan 31 07:34:54 compute-2 nova_compute[226829]: 2026-01-31 07:34:54.390 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Starting to track outgoing migration d2cd5cab-5814-4f5a-8fff-f5e2478f3078 with flavor fea01737-128b-41fa-a695-aaaa6e96e4b2 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444
Jan 31 07:34:54 compute-2 nova_compute[226829]: 2026-01-31 07:34:54.432 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Migration d2cd5cab-5814-4f5a-8fff-f5e2478f3078 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Jan 31 07:34:54 compute-2 nova_compute[226829]: 2026-01-31 07:34:54.432 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:34:54 compute-2 nova_compute[226829]: 2026-01-31 07:34:54.433 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:34:54 compute-2 nova_compute[226829]: 2026-01-31 07:34:54.470 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:34:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:34:54 compute-2 nova_compute[226829]: 2026-01-31 07:34:54.717 226833 DEBUG oslo_concurrency.lockutils [None req-d9cdd3c4-e286-42d6-a8a5-2657d944c7c3 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "refresh_cache-34693a4b-4cec-41ed-a872-facd378ad627" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:34:54 compute-2 nova_compute[226829]: 2026-01-31 07:34:54.718 226833 DEBUG oslo_concurrency.lockutils [None req-d9cdd3c4-e286-42d6-a8a5-2657d944c7c3 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquired lock "refresh_cache-34693a4b-4cec-41ed-a872-facd378ad627" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:34:54 compute-2 nova_compute[226829]: 2026-01-31 07:34:54.719 226833 DEBUG nova.network.neutron [None req-d9cdd3c4-e286-42d6-a8a5-2657d944c7c3 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:34:54 compute-2 nova_compute[226829]: 2026-01-31 07:34:54.720 226833 DEBUG nova.objects.instance [None req-d9cdd3c4-e286-42d6-a8a5-2657d944c7c3 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'info_cache' on Instance uuid 34693a4b-4cec-41ed-a872-facd378ad627 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:34:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:34:54 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1768258591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:54 compute-2 nova_compute[226829]: 2026-01-31 07:34:54.921 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:34:54 compute-2 nova_compute[226829]: 2026-01-31 07:34:54.930 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:34:55 compute-2 nova_compute[226829]: 2026-01-31 07:34:55.032 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:34:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1768258591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:55 compute-2 nova_compute[226829]: 2026-01-31 07:34:55.199 226833 DEBUG nova.network.neutron [None req-d9cdd3c4-e286-42d6-a8a5-2657d944c7c3 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:34:55 compute-2 nova_compute[226829]: 2026-01-31 07:34:55.228 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:34:55 compute-2 nova_compute[226829]: 2026-01-31 07:34:55.228 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:55 compute-2 podman[234862]: 2026-01-31 07:34:55.254799585 +0000 UTC m=+0.132868295 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:34:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:34:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:55.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:34:55 compute-2 nova_compute[226829]: 2026-01-31 07:34:55.601 226833 DEBUG nova.network.neutron [None req-d9cdd3c4-e286-42d6-a8a5-2657d944c7c3 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:34:55 compute-2 nova_compute[226829]: 2026-01-31 07:34:55.644 226833 DEBUG oslo_concurrency.lockutils [None req-d9cdd3c4-e286-42d6-a8a5-2657d944c7c3 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Releasing lock "refresh_cache-34693a4b-4cec-41ed-a872-facd378ad627" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:34:55 compute-2 nova_compute[226829]: 2026-01-31 07:34:55.645 226833 DEBUG nova.objects.instance [None req-d9cdd3c4-e286-42d6-a8a5-2657d944c7c3 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'migration_context' on Instance uuid 34693a4b-4cec-41ed-a872-facd378ad627 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:34:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:34:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:55.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:34:55 compute-2 nova_compute[226829]: 2026-01-31 07:34:55.980 226833 DEBUG nova.storage.rbd_utils [None req-d9cdd3c4-e286-42d6-a8a5-2657d944c7c3 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] removing snapshot(nova-resize) on rbd image(34693a4b-4cec-41ed-a872-facd378ad627_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 07:34:56 compute-2 ceph-mon[77282]: pgmap v1024: 305 pgs: 305 active+clean; 202 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 921 B/s wr, 86 op/s
Jan 31 07:34:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e145 e145: 3 total, 3 up, 3 in
Jan 31 07:34:56 compute-2 nova_compute[226829]: 2026-01-31 07:34:56.284 226833 DEBUG oslo_concurrency.lockutils [None req-d9cdd3c4-e286-42d6-a8a5-2657d944c7c3 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:56 compute-2 nova_compute[226829]: 2026-01-31 07:34:56.284 226833 DEBUG oslo_concurrency.lockutils [None req-d9cdd3c4-e286-42d6-a8a5-2657d944c7c3 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:56 compute-2 nova_compute[226829]: 2026-01-31 07:34:56.354 226833 DEBUG oslo_concurrency.processutils [None req-d9cdd3c4-e286-42d6-a8a5-2657d944c7c3 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:34:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:34:56 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2027388659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:56 compute-2 nova_compute[226829]: 2026-01-31 07:34:56.822 226833 DEBUG oslo_concurrency.processutils [None req-d9cdd3c4-e286-42d6-a8a5-2657d944c7c3 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:34:56 compute-2 nova_compute[226829]: 2026-01-31 07:34:56.830 226833 DEBUG nova.compute.provider_tree [None req-d9cdd3c4-e286-42d6-a8a5-2657d944c7c3 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:34:56 compute-2 nova_compute[226829]: 2026-01-31 07:34:56.853 226833 DEBUG nova.scheduler.client.report [None req-d9cdd3c4-e286-42d6-a8a5-2657d944c7c3 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:34:56 compute-2 nova_compute[226829]: 2026-01-31 07:34:56.901 226833 DEBUG oslo_concurrency.lockutils [None req-d9cdd3c4-e286-42d6-a8a5-2657d944c7c3 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:57 compute-2 nova_compute[226829]: 2026-01-31 07:34:57.017 226833 INFO nova.scheduler.client.report [None req-d9cdd3c4-e286-42d6-a8a5-2657d944c7c3 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Deleted allocation for migration d2cd5cab-5814-4f5a-8fff-f5e2478f3078
Jan 31 07:34:57 compute-2 nova_compute[226829]: 2026-01-31 07:34:57.188 226833 DEBUG oslo_concurrency.lockutils [None req-d9cdd3c4-e286-42d6-a8a5-2657d944c7c3 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "34693a4b-4cec-41ed-a872-facd378ad627" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 2.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:57 compute-2 nova_compute[226829]: 2026-01-31 07:34:57.228 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:34:57 compute-2 nova_compute[226829]: 2026-01-31 07:34:57.229 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:34:57 compute-2 ceph-mon[77282]: osdmap e145: 3 total, 3 up, 3 in
Jan 31 07:34:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2027388659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:57.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:57 compute-2 nova_compute[226829]: 2026-01-31 07:34:57.774 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:57.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:58 compute-2 ovn_controller[133834]: 2026-01-31T07:34:58Z|00056|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Jan 31 07:34:58 compute-2 ceph-mon[77282]: pgmap v1026: 305 pgs: 305 active+clean; 202 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.0 KiB/s wr, 107 op/s
Jan 31 07:34:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/586764486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:58 compute-2 nova_compute[226829]: 2026-01-31 07:34:58.456 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:59.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:59 compute-2 ceph-mon[77282]: pgmap v1027: 305 pgs: 305 active+clean; 202 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.3 KiB/s wr, 107 op/s
Jan 31 07:34:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:34:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:34:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:59.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:00 compute-2 nova_compute[226829]: 2026-01-31 07:35:00.254 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844885.2527285, 34693a4b-4cec-41ed-a872-facd378ad627 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:35:00 compute-2 nova_compute[226829]: 2026-01-31 07:35:00.254 226833 INFO nova.compute.manager [-] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] VM Stopped (Lifecycle Event)
Jan 31 07:35:00 compute-2 nova_compute[226829]: 2026-01-31 07:35:00.290 226833 DEBUG nova.compute.manager [None req-4a4bb9f6-d3e0-4ec4-bb07-e064abcbd5c9 - - - - - -] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:35:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4086487062' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:01.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:01 compute-2 ceph-mon[77282]: pgmap v1028: 305 pgs: 305 active+clean; 217 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 804 KiB/s wr, 121 op/s
Jan 31 07:35:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2055595108' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:35:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:01.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:35:02 compute-2 nova_compute[226829]: 2026-01-31 07:35:02.776 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:03.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:03 compute-2 nova_compute[226829]: 2026-01-31 07:35:03.478 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e146 e146: 3 total, 3 up, 3 in
Jan 31 07:35:03 compute-2 ceph-mon[77282]: pgmap v1029: 305 pgs: 305 active+clean; 225 MiB data, 356 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.1 MiB/s wr, 130 op/s
Jan 31 07:35:03 compute-2 ceph-mon[77282]: osdmap e146: 3 total, 3 up, 3 in
Jan 31 07:35:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:03.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:35:05 compute-2 podman[234952]: 2026-01-31 07:35:05.162319976 +0000 UTC m=+0.049788302 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:35:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:05.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:05 compute-2 nova_compute[226829]: 2026-01-31 07:35:05.694 226833 DEBUG oslo_concurrency.lockutils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "871711de-f993-4592-83a2-a36c4039786d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:05 compute-2 nova_compute[226829]: 2026-01-31 07:35:05.695 226833 DEBUG oslo_concurrency.lockutils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "871711de-f993-4592-83a2-a36c4039786d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:05 compute-2 nova_compute[226829]: 2026-01-31 07:35:05.715 226833 DEBUG nova.compute.manager [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:35:05 compute-2 nova_compute[226829]: 2026-01-31 07:35:05.808 226833 DEBUG oslo_concurrency.lockutils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:05 compute-2 ceph-mon[77282]: pgmap v1031: 305 pgs: 305 active+clean; 248 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.6 MiB/s wr, 134 op/s
Jan 31 07:35:05 compute-2 nova_compute[226829]: 2026-01-31 07:35:05.809 226833 DEBUG oslo_concurrency.lockutils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:05 compute-2 nova_compute[226829]: 2026-01-31 07:35:05.818 226833 DEBUG nova.virt.hardware [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:35:05 compute-2 nova_compute[226829]: 2026-01-31 07:35:05.818 226833 INFO nova.compute.claims [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:35:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:35:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:05.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:35:05 compute-2 nova_compute[226829]: 2026-01-31 07:35:05.975 226833 DEBUG oslo_concurrency.processutils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:35:06 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1665908925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:06 compute-2 nova_compute[226829]: 2026-01-31 07:35:06.368 226833 DEBUG oslo_concurrency.processutils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.392s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:06 compute-2 nova_compute[226829]: 2026-01-31 07:35:06.375 226833 DEBUG nova.compute.provider_tree [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:35:06 compute-2 nova_compute[226829]: 2026-01-31 07:35:06.394 226833 DEBUG nova.scheduler.client.report [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:35:06 compute-2 nova_compute[226829]: 2026-01-31 07:35:06.423 226833 DEBUG oslo_concurrency.lockutils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:06 compute-2 nova_compute[226829]: 2026-01-31 07:35:06.424 226833 DEBUG nova.compute.manager [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:35:06 compute-2 nova_compute[226829]: 2026-01-31 07:35:06.476 226833 DEBUG nova.compute.manager [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:35:06 compute-2 nova_compute[226829]: 2026-01-31 07:35:06.477 226833 DEBUG nova.network.neutron [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:35:06 compute-2 nova_compute[226829]: 2026-01-31 07:35:06.504 226833 INFO nova.virt.libvirt.driver [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:35:06 compute-2 nova_compute[226829]: 2026-01-31 07:35:06.529 226833 DEBUG nova.compute.manager [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:35:06 compute-2 nova_compute[226829]: 2026-01-31 07:35:06.657 226833 DEBUG nova.compute.manager [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:35:06 compute-2 nova_compute[226829]: 2026-01-31 07:35:06.660 226833 DEBUG nova.virt.libvirt.driver [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:35:06 compute-2 nova_compute[226829]: 2026-01-31 07:35:06.661 226833 INFO nova.virt.libvirt.driver [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Creating image(s)
Jan 31 07:35:06 compute-2 nova_compute[226829]: 2026-01-31 07:35:06.700 226833 DEBUG nova.storage.rbd_utils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rbd image 871711de-f993-4592-83a2-a36c4039786d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:06 compute-2 nova_compute[226829]: 2026-01-31 07:35:06.741 226833 DEBUG nova.storage.rbd_utils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rbd image 871711de-f993-4592-83a2-a36c4039786d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:06 compute-2 nova_compute[226829]: 2026-01-31 07:35:06.773 226833 DEBUG nova.storage.rbd_utils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rbd image 871711de-f993-4592-83a2-a36c4039786d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:06 compute-2 nova_compute[226829]: 2026-01-31 07:35:06.778 226833 DEBUG oslo_concurrency.processutils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:06 compute-2 nova_compute[226829]: 2026-01-31 07:35:06.843 226833 DEBUG oslo_concurrency.processutils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:06 compute-2 nova_compute[226829]: 2026-01-31 07:35:06.844 226833 DEBUG oslo_concurrency.lockutils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:35:06.842 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:06 compute-2 nova_compute[226829]: 2026-01-31 07:35:06.845 226833 DEBUG oslo_concurrency.lockutils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:06 compute-2 nova_compute[226829]: 2026-01-31 07:35:06.845 226833 DEBUG oslo_concurrency.lockutils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:35:06.845 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:35:06.846 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:06 compute-2 nova_compute[226829]: 2026-01-31 07:35:06.871 226833 DEBUG nova.storage.rbd_utils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rbd image 871711de-f993-4592-83a2-a36c4039786d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:06 compute-2 nova_compute[226829]: 2026-01-31 07:35:06.875 226833 DEBUG oslo_concurrency.processutils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 871711de-f993-4592-83a2-a36c4039786d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1665908925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:07 compute-2 nova_compute[226829]: 2026-01-31 07:35:07.081 226833 DEBUG nova.network.neutron [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 31 07:35:07 compute-2 nova_compute[226829]: 2026-01-31 07:35:07.082 226833 DEBUG nova.compute.manager [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:35:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:07.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:07 compute-2 nova_compute[226829]: 2026-01-31 07:35:07.777 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:07 compute-2 nova_compute[226829]: 2026-01-31 07:35:07.843 226833 DEBUG oslo_concurrency.processutils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 871711de-f993-4592-83a2-a36c4039786d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.968s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:07 compute-2 nova_compute[226829]: 2026-01-31 07:35:07.926 226833 DEBUG nova.storage.rbd_utils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] resizing rbd image 871711de-f993-4592-83a2-a36c4039786d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:35:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:07.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:07 compute-2 sudo[235123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:35:07 compute-2 sudo[235123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:07 compute-2 sudo[235123]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:08 compute-2 sudo[235168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:35:08 compute-2 sudo[235168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:08 compute-2 sudo[235168]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:08 compute-2 ceph-mon[77282]: pgmap v1032: 305 pgs: 305 active+clean; 248 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Jan 31 07:35:08 compute-2 nova_compute[226829]: 2026-01-31 07:35:08.484 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:08 compute-2 nova_compute[226829]: 2026-01-31 07:35:08.956 226833 DEBUG nova.objects.instance [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'migration_context' on Instance uuid 871711de-f993-4592-83a2-a36c4039786d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:35:09 compute-2 nova_compute[226829]: 2026-01-31 07:35:09.018 226833 DEBUG nova.virt.libvirt.driver [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:35:09 compute-2 nova_compute[226829]: 2026-01-31 07:35:09.019 226833 DEBUG nova.virt.libvirt.driver [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Ensure instance console log exists: /var/lib/nova/instances/871711de-f993-4592-83a2-a36c4039786d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:35:09 compute-2 nova_compute[226829]: 2026-01-31 07:35:09.019 226833 DEBUG oslo_concurrency.lockutils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:09 compute-2 nova_compute[226829]: 2026-01-31 07:35:09.020 226833 DEBUG oslo_concurrency.lockutils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:09 compute-2 nova_compute[226829]: 2026-01-31 07:35:09.020 226833 DEBUG oslo_concurrency.lockutils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:09 compute-2 nova_compute[226829]: 2026-01-31 07:35:09.023 226833 DEBUG nova.virt.libvirt.driver [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:35:09 compute-2 nova_compute[226829]: 2026-01-31 07:35:09.028 226833 WARNING nova.virt.libvirt.driver [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:35:09 compute-2 nova_compute[226829]: 2026-01-31 07:35:09.065 226833 DEBUG nova.virt.libvirt.host [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:35:09 compute-2 nova_compute[226829]: 2026-01-31 07:35:09.066 226833 DEBUG nova.virt.libvirt.host [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:35:09 compute-2 nova_compute[226829]: 2026-01-31 07:35:09.108 226833 DEBUG nova.virt.libvirt.host [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:35:09 compute-2 nova_compute[226829]: 2026-01-31 07:35:09.109 226833 DEBUG nova.virt.libvirt.host [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:35:09 compute-2 nova_compute[226829]: 2026-01-31 07:35:09.111 226833 DEBUG nova.virt.libvirt.driver [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:35:09 compute-2 nova_compute[226829]: 2026-01-31 07:35:09.112 226833 DEBUG nova.virt.hardware [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:35:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='702f80f9-864a-4ea0-a349-0b64802d04ab',id=28,is_public=True,memory_mb=128,name='tempest-test_resize_flavor_-1842587455',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:35:09 compute-2 nova_compute[226829]: 2026-01-31 07:35:09.113 226833 DEBUG nova.virt.hardware [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:35:09 compute-2 nova_compute[226829]: 2026-01-31 07:35:09.113 226833 DEBUG nova.virt.hardware [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:35:09 compute-2 nova_compute[226829]: 2026-01-31 07:35:09.114 226833 DEBUG nova.virt.hardware [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:35:09 compute-2 nova_compute[226829]: 2026-01-31 07:35:09.114 226833 DEBUG nova.virt.hardware [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:35:09 compute-2 nova_compute[226829]: 2026-01-31 07:35:09.114 226833 DEBUG nova.virt.hardware [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:35:09 compute-2 nova_compute[226829]: 2026-01-31 07:35:09.115 226833 DEBUG nova.virt.hardware [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:35:09 compute-2 nova_compute[226829]: 2026-01-31 07:35:09.115 226833 DEBUG nova.virt.hardware [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:35:09 compute-2 nova_compute[226829]: 2026-01-31 07:35:09.116 226833 DEBUG nova.virt.hardware [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:35:09 compute-2 nova_compute[226829]: 2026-01-31 07:35:09.116 226833 DEBUG nova.virt.hardware [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:35:09 compute-2 nova_compute[226829]: 2026-01-31 07:35:09.117 226833 DEBUG nova.virt.hardware [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:35:09 compute-2 nova_compute[226829]: 2026-01-31 07:35:09.122 226833 DEBUG oslo_concurrency.processutils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1369829920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:09.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:35:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:35:09 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1004872550' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:09 compute-2 nova_compute[226829]: 2026-01-31 07:35:09.611 226833 DEBUG oslo_concurrency.processutils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:09 compute-2 nova_compute[226829]: 2026-01-31 07:35:09.647 226833 DEBUG nova.storage.rbd_utils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rbd image 871711de-f993-4592-83a2-a36c4039786d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:09 compute-2 nova_compute[226829]: 2026-01-31 07:35:09.651 226833 DEBUG oslo_concurrency.processutils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:09.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:35:10 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2775679440' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:10 compute-2 nova_compute[226829]: 2026-01-31 07:35:10.182 226833 DEBUG oslo_concurrency.processutils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:10 compute-2 nova_compute[226829]: 2026-01-31 07:35:10.185 226833 DEBUG nova.objects.instance [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'pci_devices' on Instance uuid 871711de-f993-4592-83a2-a36c4039786d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:35:10 compute-2 nova_compute[226829]: 2026-01-31 07:35:10.212 226833 DEBUG nova.virt.libvirt.driver [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:35:10 compute-2 nova_compute[226829]:   <uuid>871711de-f993-4592-83a2-a36c4039786d</uuid>
Jan 31 07:35:10 compute-2 nova_compute[226829]:   <name>instance-00000011</name>
Jan 31 07:35:10 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:35:10 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:35:10 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:35:10 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:       <nova:name>tempest-MigrationsAdminTest-server-1825538190</nova:name>
Jan 31 07:35:10 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:35:09</nova:creationTime>
Jan 31 07:35:10 compute-2 nova_compute[226829]:       <nova:flavor name="tempest-test_resize_flavor_-1842587455">
Jan 31 07:35:10 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:35:10 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:35:10 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:35:10 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:35:10 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:35:10 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:35:10 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:35:10 compute-2 nova_compute[226829]:         <nova:user uuid="71f887fd92fb486a959e5ca100cb1e10">tempest-MigrationsAdminTest-137263588-project-member</nova:user>
Jan 31 07:35:10 compute-2 nova_compute[226829]:         <nova:project uuid="7c1ddd67115f4f7bab056dbb2f270ccc">tempest-MigrationsAdminTest-137263588</nova:project>
Jan 31 07:35:10 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:35:10 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:       <nova:ports/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:35:10 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:35:10 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <system>
Jan 31 07:35:10 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:35:10 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:35:10 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:35:10 compute-2 nova_compute[226829]:       <entry name="serial">871711de-f993-4592-83a2-a36c4039786d</entry>
Jan 31 07:35:10 compute-2 nova_compute[226829]:       <entry name="uuid">871711de-f993-4592-83a2-a36c4039786d</entry>
Jan 31 07:35:10 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     </system>
Jan 31 07:35:10 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:35:10 compute-2 nova_compute[226829]:   <os>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:   </os>
Jan 31 07:35:10 compute-2 nova_compute[226829]:   <features>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:   </features>
Jan 31 07:35:10 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:35:10 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:35:10 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:35:10 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/871711de-f993-4592-83a2-a36c4039786d_disk">
Jan 31 07:35:10 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:       </source>
Jan 31 07:35:10 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:35:10 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:35:10 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:35:10 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/871711de-f993-4592-83a2-a36c4039786d_disk.config">
Jan 31 07:35:10 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:       </source>
Jan 31 07:35:10 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:35:10 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:35:10 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:35:10 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/871711de-f993-4592-83a2-a36c4039786d/console.log" append="off"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <video>
Jan 31 07:35:10 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     </video>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:35:10 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:35:10 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:35:10 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:35:10 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:35:10 compute-2 nova_compute[226829]: </domain>
Jan 31 07:35:10 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:35:10 compute-2 nova_compute[226829]: 2026-01-31 07:35:10.268 226833 DEBUG nova.virt.libvirt.driver [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:35:10 compute-2 nova_compute[226829]: 2026-01-31 07:35:10.269 226833 DEBUG nova.virt.libvirt.driver [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:35:10 compute-2 nova_compute[226829]: 2026-01-31 07:35:10.270 226833 INFO nova.virt.libvirt.driver [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Using config drive
Jan 31 07:35:10 compute-2 ceph-mon[77282]: pgmap v1033: 305 pgs: 305 active+clean; 264 MiB data, 380 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 182 op/s
Jan 31 07:35:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1004872550' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2775679440' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:10 compute-2 nova_compute[226829]: 2026-01-31 07:35:10.303 226833 DEBUG nova.storage.rbd_utils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rbd image 871711de-f993-4592-83a2-a36c4039786d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:11 compute-2 nova_compute[226829]: 2026-01-31 07:35:11.099 226833 INFO nova.virt.libvirt.driver [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Creating config drive at /var/lib/nova/instances/871711de-f993-4592-83a2-a36c4039786d/disk.config
Jan 31 07:35:11 compute-2 nova_compute[226829]: 2026-01-31 07:35:11.106 226833 DEBUG oslo_concurrency.processutils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/871711de-f993-4592-83a2-a36c4039786d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpi3_hbcgr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:11 compute-2 nova_compute[226829]: 2026-01-31 07:35:11.246 226833 DEBUG oslo_concurrency.processutils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/871711de-f993-4592-83a2-a36c4039786d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpi3_hbcgr" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:11 compute-2 nova_compute[226829]: 2026-01-31 07:35:11.288 226833 DEBUG nova.storage.rbd_utils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rbd image 871711de-f993-4592-83a2-a36c4039786d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:11 compute-2 nova_compute[226829]: 2026-01-31 07:35:11.293 226833 DEBUG oslo_concurrency.processutils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/871711de-f993-4592-83a2-a36c4039786d/disk.config 871711de-f993-4592-83a2-a36c4039786d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:11.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:11 compute-2 ceph-mon[77282]: pgmap v1034: 305 pgs: 305 active+clean; 281 MiB data, 390 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.6 MiB/s wr, 170 op/s
Jan 31 07:35:11 compute-2 nova_compute[226829]: 2026-01-31 07:35:11.875 226833 DEBUG oslo_concurrency.processutils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/871711de-f993-4592-83a2-a36c4039786d/disk.config 871711de-f993-4592-83a2-a36c4039786d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:11 compute-2 nova_compute[226829]: 2026-01-31 07:35:11.877 226833 INFO nova.virt.libvirt.driver [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Deleting local config drive /var/lib/nova/instances/871711de-f993-4592-83a2-a36c4039786d/disk.config because it was imported into RBD.
Jan 31 07:35:11 compute-2 systemd-machined[195142]: New machine qemu-6-instance-00000011.
Jan 31 07:35:11 compute-2 systemd[1]: Started Virtual Machine qemu-6-instance-00000011.
Jan 31 07:35:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:35:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:11.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:35:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2981617375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2042362911' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:12 compute-2 nova_compute[226829]: 2026-01-31 07:35:12.458 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769844912.4581978, 871711de-f993-4592-83a2-a36c4039786d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:35:12 compute-2 nova_compute[226829]: 2026-01-31 07:35:12.459 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] VM Resumed (Lifecycle Event)
Jan 31 07:35:12 compute-2 nova_compute[226829]: 2026-01-31 07:35:12.462 226833 DEBUG nova.compute.manager [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:35:12 compute-2 nova_compute[226829]: 2026-01-31 07:35:12.463 226833 DEBUG nova.virt.libvirt.driver [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:35:12 compute-2 nova_compute[226829]: 2026-01-31 07:35:12.466 226833 INFO nova.virt.libvirt.driver [-] [instance: 871711de-f993-4592-83a2-a36c4039786d] Instance spawned successfully.
Jan 31 07:35:12 compute-2 nova_compute[226829]: 2026-01-31 07:35:12.466 226833 DEBUG nova.virt.libvirt.driver [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:35:12 compute-2 nova_compute[226829]: 2026-01-31 07:35:12.547 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:35:12 compute-2 nova_compute[226829]: 2026-01-31 07:35:12.554 226833 DEBUG nova.virt.libvirt.driver [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:12 compute-2 nova_compute[226829]: 2026-01-31 07:35:12.555 226833 DEBUG nova.virt.libvirt.driver [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:12 compute-2 nova_compute[226829]: 2026-01-31 07:35:12.556 226833 DEBUG nova.virt.libvirt.driver [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:12 compute-2 nova_compute[226829]: 2026-01-31 07:35:12.557 226833 DEBUG nova.virt.libvirt.driver [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:12 compute-2 nova_compute[226829]: 2026-01-31 07:35:12.558 226833 DEBUG nova.virt.libvirt.driver [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:12 compute-2 nova_compute[226829]: 2026-01-31 07:35:12.559 226833 DEBUG nova.virt.libvirt.driver [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:12 compute-2 nova_compute[226829]: 2026-01-31 07:35:12.565 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:35:12 compute-2 nova_compute[226829]: 2026-01-31 07:35:12.630 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:35:12 compute-2 nova_compute[226829]: 2026-01-31 07:35:12.631 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769844912.4616218, 871711de-f993-4592-83a2-a36c4039786d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:35:12 compute-2 nova_compute[226829]: 2026-01-31 07:35:12.632 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] VM Started (Lifecycle Event)
Jan 31 07:35:12 compute-2 nova_compute[226829]: 2026-01-31 07:35:12.659 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:35:12 compute-2 nova_compute[226829]: 2026-01-31 07:35:12.662 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:35:12 compute-2 nova_compute[226829]: 2026-01-31 07:35:12.687 226833 INFO nova.compute.manager [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Took 6.03 seconds to spawn the instance on the hypervisor.
Jan 31 07:35:12 compute-2 nova_compute[226829]: 2026-01-31 07:35:12.687 226833 DEBUG nova.compute.manager [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:35:12 compute-2 nova_compute[226829]: 2026-01-31 07:35:12.688 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:35:12 compute-2 nova_compute[226829]: 2026-01-31 07:35:12.743 226833 INFO nova.compute.manager [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Took 6.98 seconds to build instance.
Jan 31 07:35:12 compute-2 nova_compute[226829]: 2026-01-31 07:35:12.762 226833 DEBUG oslo_concurrency.lockutils [None req-457aa703-06d2-4bd5-9ff5-1d5417ea564f 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "871711de-f993-4592-83a2-a36c4039786d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:12 compute-2 nova_compute[226829]: 2026-01-31 07:35:12.781 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:13.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:13 compute-2 ceph-mon[77282]: pgmap v1035: 305 pgs: 305 active+clean; 311 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.8 MiB/s wr, 198 op/s
Jan 31 07:35:13 compute-2 nova_compute[226829]: 2026-01-31 07:35:13.541 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:13.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:35:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:15.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:15.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:15 compute-2 ceph-mon[77282]: pgmap v1036: 305 pgs: 305 active+clean; 343 MiB data, 421 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 4.1 MiB/s wr, 233 op/s
Jan 31 07:35:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:35:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:17.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:35:17 compute-2 nova_compute[226829]: 2026-01-31 07:35:17.783 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:17.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:18 compute-2 ceph-mon[77282]: pgmap v1037: 305 pgs: 305 active+clean; 357 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.8 MiB/s wr, 227 op/s
Jan 31 07:35:18 compute-2 nova_compute[226829]: 2026-01-31 07:35:18.581 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:19.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:19 compute-2 ceph-mon[77282]: pgmap v1038: 305 pgs: 305 active+clean; 361 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.4 MiB/s wr, 253 op/s
Jan 31 07:35:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:35:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:35:19.808 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:35:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:35:19.809 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:35:19 compute-2 nova_compute[226829]: 2026-01-31 07:35:19.810 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:19 compute-2 nova_compute[226829]: 2026-01-31 07:35:19.963 226833 DEBUG oslo_concurrency.lockutils [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "refresh_cache-871711de-f993-4592-83a2-a36c4039786d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:35:19 compute-2 nova_compute[226829]: 2026-01-31 07:35:19.963 226833 DEBUG oslo_concurrency.lockutils [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquired lock "refresh_cache-871711de-f993-4592-83a2-a36c4039786d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:35:19 compute-2 nova_compute[226829]: 2026-01-31 07:35:19.964 226833 DEBUG nova.network.neutron [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:35:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:35:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:19.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:35:20 compute-2 nova_compute[226829]: 2026-01-31 07:35:20.235 226833 DEBUG nova.network.neutron [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:35:20 compute-2 nova_compute[226829]: 2026-01-31 07:35:20.472 226833 DEBUG nova.network.neutron [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:35:20 compute-2 nova_compute[226829]: 2026-01-31 07:35:20.502 226833 DEBUG oslo_concurrency.lockutils [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Releasing lock "refresh_cache-871711de-f993-4592-83a2-a36c4039786d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:35:20 compute-2 nova_compute[226829]: 2026-01-31 07:35:20.652 226833 DEBUG nova.virt.libvirt.driver [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Jan 31 07:35:20 compute-2 nova_compute[226829]: 2026-01-31 07:35:20.653 226833 DEBUG nova.virt.libvirt.volume.remotefs [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Creating file /var/lib/nova/instances/871711de-f993-4592-83a2-a36c4039786d/c1e642cfb0ff4a07aeec1a9e854928e7.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Jan 31 07:35:20 compute-2 nova_compute[226829]: 2026-01-31 07:35:20.653 226833 DEBUG oslo_concurrency.processutils [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/871711de-f993-4592-83a2-a36c4039786d/c1e642cfb0ff4a07aeec1a9e854928e7.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1939639995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:35:20.813 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:35:21 compute-2 nova_compute[226829]: 2026-01-31 07:35:21.143 226833 DEBUG oslo_concurrency.processutils [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/871711de-f993-4592-83a2-a36c4039786d/c1e642cfb0ff4a07aeec1a9e854928e7.tmp" returned: 1 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:21 compute-2 nova_compute[226829]: 2026-01-31 07:35:21.144 226833 DEBUG oslo_concurrency.processutils [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/871711de-f993-4592-83a2-a36c4039786d/c1e642cfb0ff4a07aeec1a9e854928e7.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 31 07:35:21 compute-2 nova_compute[226829]: 2026-01-31 07:35:21.145 226833 DEBUG nova.virt.libvirt.volume.remotefs [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Creating directory /var/lib/nova/instances/871711de-f993-4592-83a2-a36c4039786d on remote host 192.168.122.101 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Jan 31 07:35:21 compute-2 nova_compute[226829]: 2026-01-31 07:35:21.145 226833 DEBUG oslo_concurrency.processutils [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/871711de-f993-4592-83a2-a36c4039786d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:21 compute-2 nova_compute[226829]: 2026-01-31 07:35:21.336 226833 DEBUG oslo_concurrency.processutils [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/871711de-f993-4592-83a2-a36c4039786d" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:21 compute-2 nova_compute[226829]: 2026-01-31 07:35:21.341 226833 DEBUG nova.virt.libvirt.driver [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 07:35:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:35:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:21.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:35:21 compute-2 ceph-mon[77282]: pgmap v1039: 305 pgs: 305 active+clean; 345 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 5.2 MiB/s wr, 252 op/s
Jan 31 07:35:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:35:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:21.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:35:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3973763842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:22 compute-2 nova_compute[226829]: 2026-01-31 07:35:22.786 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:23.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:23 compute-2 nova_compute[226829]: 2026-01-31 07:35:23.584 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:23 compute-2 ceph-mon[77282]: pgmap v1040: 305 pgs: 305 active+clean; 309 MiB data, 412 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.6 MiB/s wr, 282 op/s
Jan 31 07:35:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/577500255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:35:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:23.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:35:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:35:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:25.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:25 compute-2 ceph-mon[77282]: pgmap v1041: 305 pgs: 305 active+clean; 260 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.3 MiB/s wr, 270 op/s
Jan 31 07:35:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:35:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:25.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:35:26 compute-2 podman[235399]: 2026-01-31 07:35:26.232730156 +0000 UTC m=+0.106370102 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:35:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:27.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:27 compute-2 nova_compute[226829]: 2026-01-31 07:35:27.788 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:27 compute-2 ceph-mon[77282]: pgmap v1042: 305 pgs: 305 active+clean; 264 MiB data, 386 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.3 MiB/s wr, 238 op/s
Jan 31 07:35:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:27.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:28 compute-2 sudo[235424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:35:28 compute-2 sudo[235424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:28 compute-2 sudo[235424]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:28 compute-2 sudo[235449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:35:28 compute-2 sudo[235449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:28 compute-2 sudo[235449]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:28 compute-2 nova_compute[226829]: 2026-01-31 07:35:28.587 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:35:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:29.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:35:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:35:29 compute-2 ceph-mon[77282]: pgmap v1043: 305 pgs: 305 active+clean; 276 MiB data, 395 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.0 MiB/s wr, 200 op/s
Jan 31 07:35:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3479424062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:35:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:29.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:35:31 compute-2 nova_compute[226829]: 2026-01-31 07:35:31.394 226833 DEBUG nova.virt.libvirt.driver [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 31 07:35:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:31.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:31 compute-2 ceph-mon[77282]: pgmap v1044: 305 pgs: 305 active+clean; 280 MiB data, 403 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.4 MiB/s wr, 157 op/s
Jan 31 07:35:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:35:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:32.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:35:32 compute-2 nova_compute[226829]: 2026-01-31 07:35:32.790 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:33.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:33 compute-2 nova_compute[226829]: 2026-01-31 07:35:33.589 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:33 compute-2 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000011.scope: Deactivated successfully.
Jan 31 07:35:33 compute-2 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000011.scope: Consumed 13.863s CPU time.
Jan 31 07:35:33 compute-2 systemd-machined[195142]: Machine qemu-6-instance-00000011 terminated.
Jan 31 07:35:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:34.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:34 compute-2 ceph-mon[77282]: pgmap v1045: 305 pgs: 305 active+clean; 301 MiB data, 412 MiB used, 21 GiB / 21 GiB avail; 377 KiB/s rd, 2.8 MiB/s wr, 117 op/s
Jan 31 07:35:34 compute-2 nova_compute[226829]: 2026-01-31 07:35:34.407 226833 INFO nova.virt.libvirt.driver [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Instance shutdown successfully after 13 seconds.
Jan 31 07:35:34 compute-2 nova_compute[226829]: 2026-01-31 07:35:34.412 226833 INFO nova.virt.libvirt.driver [-] [instance: 871711de-f993-4592-83a2-a36c4039786d] Instance destroyed successfully.
Jan 31 07:35:34 compute-2 nova_compute[226829]: 2026-01-31 07:35:34.416 226833 DEBUG nova.virt.libvirt.driver [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:35:34 compute-2 nova_compute[226829]: 2026-01-31 07:35:34.417 226833 DEBUG nova.virt.libvirt.driver [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:35:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:35:35 compute-2 nova_compute[226829]: 2026-01-31 07:35:35.357 226833 DEBUG oslo_concurrency.lockutils [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "871711de-f993-4592-83a2-a36c4039786d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:35 compute-2 nova_compute[226829]: 2026-01-31 07:35:35.358 226833 DEBUG oslo_concurrency.lockutils [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "871711de-f993-4592-83a2-a36c4039786d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:35 compute-2 nova_compute[226829]: 2026-01-31 07:35:35.359 226833 DEBUG oslo_concurrency.lockutils [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "871711de-f993-4592-83a2-a36c4039786d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:35:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:35.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:35:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3916158877' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:35:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:36.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:35:36 compute-2 podman[235482]: 2026-01-31 07:35:36.171666753 +0000 UTC m=+0.056969507 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 07:35:36 compute-2 ceph-mon[77282]: pgmap v1046: 305 pgs: 305 active+clean; 329 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 349 KiB/s rd, 3.9 MiB/s wr, 110 op/s
Jan 31 07:35:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/749401607' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:37.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:37 compute-2 ceph-mon[77282]: pgmap v1047: 305 pgs: 305 active+clean; 329 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 318 KiB/s rd, 3.0 MiB/s wr, 86 op/s
Jan 31 07:35:37 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3628664984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e147 e147: 3 total, 3 up, 3 in
Jan 31 07:35:37 compute-2 nova_compute[226829]: 2026-01-31 07:35:37.791 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:35:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:38.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:35:38 compute-2 ceph-mon[77282]: osdmap e147: 3 total, 3 up, 3 in
Jan 31 07:35:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1976072540' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:38 compute-2 nova_compute[226829]: 2026-01-31 07:35:38.608 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:38 compute-2 nova_compute[226829]: 2026-01-31 07:35:38.708 226833 DEBUG oslo_concurrency.lockutils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Acquiring lock "1c49c3f4-bab2-45dc-8352-02a4a3b9c372" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:38 compute-2 nova_compute[226829]: 2026-01-31 07:35:38.709 226833 DEBUG oslo_concurrency.lockutils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Lock "1c49c3f4-bab2-45dc-8352-02a4a3b9c372" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:38 compute-2 nova_compute[226829]: 2026-01-31 07:35:38.727 226833 DEBUG nova.compute.manager [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:35:38 compute-2 nova_compute[226829]: 2026-01-31 07:35:38.820 226833 DEBUG oslo_concurrency.lockutils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:38 compute-2 nova_compute[226829]: 2026-01-31 07:35:38.821 226833 DEBUG oslo_concurrency.lockutils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:38 compute-2 nova_compute[226829]: 2026-01-31 07:35:38.833 226833 DEBUG nova.virt.hardware [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:35:38 compute-2 nova_compute[226829]: 2026-01-31 07:35:38.834 226833 INFO nova.compute.claims [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:35:38 compute-2 nova_compute[226829]: 2026-01-31 07:35:38.969 226833 DEBUG oslo_concurrency.processutils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:35:39 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2133135503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:39.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:39 compute-2 nova_compute[226829]: 2026-01-31 07:35:39.431 226833 DEBUG oslo_concurrency.processutils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:39 compute-2 nova_compute[226829]: 2026-01-31 07:35:39.439 226833 DEBUG nova.compute.provider_tree [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:35:39 compute-2 nova_compute[226829]: 2026-01-31 07:35:39.466 226833 DEBUG nova.scheduler.client.report [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:35:39 compute-2 nova_compute[226829]: 2026-01-31 07:35:39.496 226833 DEBUG oslo_concurrency.lockutils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:39 compute-2 nova_compute[226829]: 2026-01-31 07:35:39.497 226833 DEBUG nova.compute.manager [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:35:39 compute-2 ceph-mon[77282]: pgmap v1049: 305 pgs: 305 active+clean; 329 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 110 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Jan 31 07:35:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/628810261' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3834992716' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/120825983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2133135503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:35:39 compute-2 nova_compute[226829]: 2026-01-31 07:35:39.574 226833 DEBUG nova.compute.manager [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:35:39 compute-2 nova_compute[226829]: 2026-01-31 07:35:39.575 226833 DEBUG nova.network.neutron [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:35:39 compute-2 nova_compute[226829]: 2026-01-31 07:35:39.607 226833 INFO nova.virt.libvirt.driver [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:35:39 compute-2 nova_compute[226829]: 2026-01-31 07:35:39.639 226833 DEBUG nova.compute.manager [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:35:39 compute-2 nova_compute[226829]: 2026-01-31 07:35:39.764 226833 DEBUG nova.compute.manager [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:35:39 compute-2 nova_compute[226829]: 2026-01-31 07:35:39.767 226833 DEBUG nova.virt.libvirt.driver [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:35:39 compute-2 nova_compute[226829]: 2026-01-31 07:35:39.768 226833 INFO nova.virt.libvirt.driver [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Creating image(s)
Jan 31 07:35:39 compute-2 nova_compute[226829]: 2026-01-31 07:35:39.802 226833 DEBUG nova.storage.rbd_utils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] rbd image 1c49c3f4-bab2-45dc-8352-02a4a3b9c372_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:39 compute-2 nova_compute[226829]: 2026-01-31 07:35:39.831 226833 DEBUG nova.storage.rbd_utils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] rbd image 1c49c3f4-bab2-45dc-8352-02a4a3b9c372_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:39 compute-2 nova_compute[226829]: 2026-01-31 07:35:39.859 226833 DEBUG nova.storage.rbd_utils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] rbd image 1c49c3f4-bab2-45dc-8352-02a4a3b9c372_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:39 compute-2 nova_compute[226829]: 2026-01-31 07:35:39.863 226833 DEBUG oslo_concurrency.processutils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:39 compute-2 nova_compute[226829]: 2026-01-31 07:35:39.940 226833 DEBUG oslo_concurrency.processutils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:39 compute-2 nova_compute[226829]: 2026-01-31 07:35:39.941 226833 DEBUG oslo_concurrency.lockutils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:39 compute-2 nova_compute[226829]: 2026-01-31 07:35:39.942 226833 DEBUG oslo_concurrency.lockutils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:39 compute-2 nova_compute[226829]: 2026-01-31 07:35:39.942 226833 DEBUG oslo_concurrency.lockutils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:39 compute-2 nova_compute[226829]: 2026-01-31 07:35:39.968 226833 DEBUG nova.storage.rbd_utils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] rbd image 1c49c3f4-bab2-45dc-8352-02a4a3b9c372_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:39 compute-2 nova_compute[226829]: 2026-01-31 07:35:39.972 226833 DEBUG oslo_concurrency.processutils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 1c49c3f4-bab2-45dc-8352-02a4a3b9c372_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:35:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:40.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.020 226833 DEBUG nova.network.neutron [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.021 226833 DEBUG nova.compute.manager [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.384 226833 DEBUG oslo_concurrency.processutils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 1c49c3f4-bab2-45dc-8352-02a4a3b9c372_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.450 226833 DEBUG nova.storage.rbd_utils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] resizing rbd image 1c49c3f4-bab2-45dc-8352-02a4a3b9c372_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.538 226833 DEBUG nova.objects.instance [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Lazy-loading 'migration_context' on Instance uuid 1c49c3f4-bab2-45dc-8352-02a4a3b9c372 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.573 226833 DEBUG nova.virt.libvirt.driver [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.573 226833 DEBUG nova.virt.libvirt.driver [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Ensure instance console log exists: /var/lib/nova/instances/1c49c3f4-bab2-45dc-8352-02a4a3b9c372/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.574 226833 DEBUG oslo_concurrency.lockutils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.574 226833 DEBUG oslo_concurrency.lockutils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.575 226833 DEBUG oslo_concurrency.lockutils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.577 226833 DEBUG nova.virt.libvirt.driver [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.583 226833 WARNING nova.virt.libvirt.driver [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.588 226833 DEBUG nova.virt.libvirt.host [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.589 226833 DEBUG nova.virt.libvirt.host [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.593 226833 DEBUG nova.virt.libvirt.host [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.594 226833 DEBUG nova.virt.libvirt.host [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.596 226833 DEBUG nova.virt.libvirt.driver [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.596 226833 DEBUG nova.virt.hardware [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.597 226833 DEBUG nova.virt.hardware [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.597 226833 DEBUG nova.virt.hardware [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.597 226833 DEBUG nova.virt.hardware [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.598 226833 DEBUG nova.virt.hardware [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.598 226833 DEBUG nova.virt.hardware [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.598 226833 DEBUG nova.virt.hardware [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.599 226833 DEBUG nova.virt.hardware [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.599 226833 DEBUG nova.virt.hardware [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.599 226833 DEBUG nova.virt.hardware [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.599 226833 DEBUG nova.virt.hardware [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:35:40 compute-2 nova_compute[226829]: 2026-01-31 07:35:40.604 226833 DEBUG oslo_concurrency.processutils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:35:41 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/935936433' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:41 compute-2 nova_compute[226829]: 2026-01-31 07:35:41.034 226833 DEBUG oslo_concurrency.processutils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:41 compute-2 nova_compute[226829]: 2026-01-31 07:35:41.066 226833 DEBUG nova.storage.rbd_utils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] rbd image 1c49c3f4-bab2-45dc-8352-02a4a3b9c372_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:41 compute-2 nova_compute[226829]: 2026-01-31 07:35:41.070 226833 DEBUG oslo_concurrency.processutils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:41.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:35:41 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4081352400' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:41 compute-2 nova_compute[226829]: 2026-01-31 07:35:41.506 226833 DEBUG oslo_concurrency.processutils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:41 compute-2 nova_compute[226829]: 2026-01-31 07:35:41.508 226833 DEBUG nova.objects.instance [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Lazy-loading 'pci_devices' on Instance uuid 1c49c3f4-bab2-45dc-8352-02a4a3b9c372 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:35:41 compute-2 nova_compute[226829]: 2026-01-31 07:35:41.586 226833 DEBUG nova.virt.libvirt.driver [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:35:41 compute-2 nova_compute[226829]:   <uuid>1c49c3f4-bab2-45dc-8352-02a4a3b9c372</uuid>
Jan 31 07:35:41 compute-2 nova_compute[226829]:   <name>instance-00000015</name>
Jan 31 07:35:41 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:35:41 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:35:41 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:35:41 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:       <nova:name>tempest-ServerExternalEventsTest-server-829553464</nova:name>
Jan 31 07:35:41 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:35:40</nova:creationTime>
Jan 31 07:35:41 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:35:41 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:35:41 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:35:41 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:35:41 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:35:41 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:35:41 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:35:41 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:35:41 compute-2 nova_compute[226829]:         <nova:user uuid="68453ccd293b4153b94a2fd75baa2dfc">tempest-ServerExternalEventsTest-775108062-project-member</nova:user>
Jan 31 07:35:41 compute-2 nova_compute[226829]:         <nova:project uuid="be48b50673d940d1955ee07f4627026a">tempest-ServerExternalEventsTest-775108062</nova:project>
Jan 31 07:35:41 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:35:41 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:       <nova:ports/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:35:41 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:35:41 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <system>
Jan 31 07:35:41 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:35:41 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:35:41 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:35:41 compute-2 nova_compute[226829]:       <entry name="serial">1c49c3f4-bab2-45dc-8352-02a4a3b9c372</entry>
Jan 31 07:35:41 compute-2 nova_compute[226829]:       <entry name="uuid">1c49c3f4-bab2-45dc-8352-02a4a3b9c372</entry>
Jan 31 07:35:41 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     </system>
Jan 31 07:35:41 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:35:41 compute-2 nova_compute[226829]:   <os>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:   </os>
Jan 31 07:35:41 compute-2 nova_compute[226829]:   <features>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:   </features>
Jan 31 07:35:41 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:35:41 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:35:41 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:35:41 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/1c49c3f4-bab2-45dc-8352-02a4a3b9c372_disk">
Jan 31 07:35:41 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:       </source>
Jan 31 07:35:41 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:35:41 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:35:41 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:35:41 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/1c49c3f4-bab2-45dc-8352-02a4a3b9c372_disk.config">
Jan 31 07:35:41 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:       </source>
Jan 31 07:35:41 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:35:41 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:35:41 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:35:41 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/1c49c3f4-bab2-45dc-8352-02a4a3b9c372/console.log" append="off"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <video>
Jan 31 07:35:41 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     </video>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:35:41 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:35:41 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:35:41 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:35:41 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:35:41 compute-2 nova_compute[226829]: </domain>
Jan 31 07:35:41 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:35:41 compute-2 nova_compute[226829]: 2026-01-31 07:35:41.638 226833 DEBUG nova.virt.libvirt.driver [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:35:41 compute-2 nova_compute[226829]: 2026-01-31 07:35:41.639 226833 DEBUG nova.virt.libvirt.driver [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:35:41 compute-2 nova_compute[226829]: 2026-01-31 07:35:41.639 226833 INFO nova.virt.libvirt.driver [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Using config drive
Jan 31 07:35:41 compute-2 nova_compute[226829]: 2026-01-31 07:35:41.672 226833 DEBUG nova.storage.rbd_utils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] rbd image 1c49c3f4-bab2-45dc-8352-02a4a3b9c372_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:41 compute-2 ceph-mon[77282]: pgmap v1050: 305 pgs: 305 active+clean; 349 MiB data, 455 MiB used, 21 GiB / 21 GiB avail; 650 KiB/s rd, 3.5 MiB/s wr, 110 op/s
Jan 31 07:35:41 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/935936433' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:41 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4081352400' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:41 compute-2 nova_compute[226829]: 2026-01-31 07:35:41.896 226833 INFO nova.virt.libvirt.driver [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Creating config drive at /var/lib/nova/instances/1c49c3f4-bab2-45dc-8352-02a4a3b9c372/disk.config
Jan 31 07:35:41 compute-2 nova_compute[226829]: 2026-01-31 07:35:41.900 226833 DEBUG oslo_concurrency.processutils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1c49c3f4-bab2-45dc-8352-02a4a3b9c372/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpi83pgsir execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:35:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:42.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:35:42 compute-2 nova_compute[226829]: 2026-01-31 07:35:42.022 226833 DEBUG oslo_concurrency.processutils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1c49c3f4-bab2-45dc-8352-02a4a3b9c372/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpi83pgsir" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:42 compute-2 nova_compute[226829]: 2026-01-31 07:35:42.064 226833 DEBUG nova.storage.rbd_utils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] rbd image 1c49c3f4-bab2-45dc-8352-02a4a3b9c372_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:42 compute-2 nova_compute[226829]: 2026-01-31 07:35:42.068 226833 DEBUG oslo_concurrency.processutils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1c49c3f4-bab2-45dc-8352-02a4a3b9c372/disk.config 1c49c3f4-bab2-45dc-8352-02a4a3b9c372_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:42 compute-2 nova_compute[226829]: 2026-01-31 07:35:42.793 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:42 compute-2 nova_compute[226829]: 2026-01-31 07:35:42.956 226833 DEBUG oslo_concurrency.processutils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1c49c3f4-bab2-45dc-8352-02a4a3b9c372/disk.config 1c49c3f4-bab2-45dc-8352-02a4a3b9c372_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.888s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:42 compute-2 nova_compute[226829]: 2026-01-31 07:35:42.957 226833 INFO nova.virt.libvirt.driver [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Deleting local config drive /var/lib/nova/instances/1c49c3f4-bab2-45dc-8352-02a4a3b9c372/disk.config because it was imported into RBD.
Jan 31 07:35:43 compute-2 systemd-machined[195142]: New machine qemu-7-instance-00000015.
Jan 31 07:35:43 compute-2 systemd[1]: Started Virtual Machine qemu-7-instance-00000015.
Jan 31 07:35:43 compute-2 nova_compute[226829]: 2026-01-31 07:35:43.312 226833 INFO nova.compute.manager [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Swapping old allocation on dict_keys(['2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc']) held by migration 7ac5adc1-a6e9-43a5-8509-2086500fde0f for instance
Jan 31 07:35:43 compute-2 nova_compute[226829]: 2026-01-31 07:35:43.355 226833 DEBUG nova.scheduler.client.report [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Overwriting current allocation {'allocations': {'09a2f316-8f9d-47b2-922f-864a1d14c517': {'resources': {'VCPU': 1, 'MEMORY_MB': 192, 'DISK_GB': 1}, 'generation': 24}}, 'project_id': '7c1ddd67115f4f7bab056dbb2f270ccc', 'user_id': '71f887fd92fb486a959e5ca100cb1e10', 'consumer_generation': 1} on consumer 871711de-f993-4592-83a2-a36c4039786d move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
Jan 31 07:35:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:43.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:43 compute-2 nova_compute[226829]: 2026-01-31 07:35:43.598 226833 DEBUG oslo_concurrency.lockutils [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "refresh_cache-871711de-f993-4592-83a2-a36c4039786d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:35:43 compute-2 nova_compute[226829]: 2026-01-31 07:35:43.599 226833 DEBUG oslo_concurrency.lockutils [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquired lock "refresh_cache-871711de-f993-4592-83a2-a36c4039786d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:35:43 compute-2 nova_compute[226829]: 2026-01-31 07:35:43.599 226833 DEBUG nova.network.neutron [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:35:43 compute-2 nova_compute[226829]: 2026-01-31 07:35:43.609 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:43 compute-2 nova_compute[226829]: 2026-01-31 07:35:43.743 226833 DEBUG nova.network.neutron [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.007 226833 DEBUG nova.network.neutron [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:35:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:35:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:44.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.022 226833 DEBUG oslo_concurrency.lockutils [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Releasing lock "refresh_cache-871711de-f993-4592-83a2-a36c4039786d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.023 226833 DEBUG nova.virt.libvirt.driver [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843
Jan 31 07:35:44 compute-2 ceph-mon[77282]: pgmap v1051: 305 pgs: 305 active+clean; 378 MiB data, 466 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.7 MiB/s wr, 153 op/s
Jan 31 07:35:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2520892971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.213 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769844944.1849308, 1c49c3f4-bab2-45dc-8352-02a4a3b9c372 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.214 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] VM Resumed (Lifecycle Event)
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.215 226833 DEBUG nova.compute.manager [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.216 226833 DEBUG nova.virt.libvirt.driver [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.216 226833 DEBUG nova.storage.rbd_utils [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rolling back rbd image(871711de-f993-4592-83a2-a36c4039786d_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.220 226833 INFO nova.virt.libvirt.driver [-] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Instance spawned successfully.
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.221 226833 DEBUG nova.virt.libvirt.driver [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.238 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.241 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.248 226833 DEBUG nova.virt.libvirt.driver [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.249 226833 DEBUG nova.virt.libvirt.driver [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.249 226833 DEBUG nova.virt.libvirt.driver [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.249 226833 DEBUG nova.virt.libvirt.driver [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.250 226833 DEBUG nova.virt.libvirt.driver [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.250 226833 DEBUG nova.virt.libvirt.driver [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.287 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.287 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769844944.1857126, 1c49c3f4-bab2-45dc-8352-02a4a3b9c372 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.288 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] VM Started (Lifecycle Event)
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.319 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.322 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.334 226833 INFO nova.compute.manager [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Took 4.57 seconds to spawn the instance on the hypervisor.
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.335 226833 DEBUG nova.compute.manager [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.358 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.390 226833 INFO nova.compute.manager [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Took 5.60 seconds to build instance.
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.405 226833 DEBUG oslo_concurrency.lockutils [None req-fc2fc8de-47c5-44c8-8c09-a7790a8cfaaf 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Lock "1c49c3f4-bab2-45dc-8352-02a4a3b9c372" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:35:44 compute-2 nova_compute[226829]: 2026-01-31 07:35:44.633 226833 DEBUG nova.storage.rbd_utils [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] removing snapshot(nova-resize) on rbd image(871711de-f993-4592-83a2-a36c4039786d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 07:35:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:35:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3352251804' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:35:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:35:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3352251804' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:35:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3352251804' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:35:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3352251804' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:35:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e148 e148: 3 total, 3 up, 3 in
Jan 31 07:35:45 compute-2 nova_compute[226829]: 2026-01-31 07:35:45.290 226833 DEBUG nova.virt.libvirt.driver [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:35:45 compute-2 nova_compute[226829]: 2026-01-31 07:35:45.292 226833 WARNING nova.virt.libvirt.driver [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:35:45 compute-2 nova_compute[226829]: 2026-01-31 07:35:45.297 226833 DEBUG nova.virt.libvirt.host [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:35:45 compute-2 nova_compute[226829]: 2026-01-31 07:35:45.297 226833 DEBUG nova.virt.libvirt.host [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:35:45 compute-2 nova_compute[226829]: 2026-01-31 07:35:45.301 226833 DEBUG nova.virt.libvirt.host [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:35:45 compute-2 nova_compute[226829]: 2026-01-31 07:35:45.301 226833 DEBUG nova.virt.libvirt.host [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:35:45 compute-2 nova_compute[226829]: 2026-01-31 07:35:45.302 226833 DEBUG nova.virt.libvirt.driver [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:35:45 compute-2 nova_compute[226829]: 2026-01-31 07:35:45.302 226833 DEBUG nova.virt.hardware [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:35:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='702f80f9-864a-4ea0-a349-0b64802d04ab',id=28,is_public=True,memory_mb=128,name='tempest-test_resize_flavor_-1842587455',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:35:45 compute-2 nova_compute[226829]: 2026-01-31 07:35:45.303 226833 DEBUG nova.virt.hardware [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:35:45 compute-2 nova_compute[226829]: 2026-01-31 07:35:45.303 226833 DEBUG nova.virt.hardware [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:35:45 compute-2 nova_compute[226829]: 2026-01-31 07:35:45.303 226833 DEBUG nova.virt.hardware [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:35:45 compute-2 nova_compute[226829]: 2026-01-31 07:35:45.303 226833 DEBUG nova.virt.hardware [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:35:45 compute-2 nova_compute[226829]: 2026-01-31 07:35:45.303 226833 DEBUG nova.virt.hardware [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:35:45 compute-2 nova_compute[226829]: 2026-01-31 07:35:45.304 226833 DEBUG nova.virt.hardware [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:35:45 compute-2 nova_compute[226829]: 2026-01-31 07:35:45.304 226833 DEBUG nova.virt.hardware [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:35:45 compute-2 nova_compute[226829]: 2026-01-31 07:35:45.304 226833 DEBUG nova.virt.hardware [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:35:45 compute-2 nova_compute[226829]: 2026-01-31 07:35:45.305 226833 DEBUG nova.virt.hardware [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:35:45 compute-2 nova_compute[226829]: 2026-01-31 07:35:45.305 226833 DEBUG nova.virt.hardware [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:35:45 compute-2 nova_compute[226829]: 2026-01-31 07:35:45.305 226833 DEBUG nova.objects.instance [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'vcpu_model' on Instance uuid 871711de-f993-4592-83a2-a36c4039786d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:35:45 compute-2 nova_compute[226829]: 2026-01-31 07:35:45.336 226833 DEBUG oslo_concurrency.processutils [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:45.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:35:45 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/128629596' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:45 compute-2 nova_compute[226829]: 2026-01-31 07:35:45.820 226833 DEBUG oslo_concurrency.processutils [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:45 compute-2 nova_compute[226829]: 2026-01-31 07:35:45.859 226833 DEBUG oslo_concurrency.processutils [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:35:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:46.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:35:46 compute-2 nova_compute[226829]: 2026-01-31 07:35:46.134 226833 DEBUG nova.compute.manager [None req-bb37cc4c-716b-463d-ba56-16e5f6147b67 c4dea6982d1143478b34c77cdceb634f 03e0ea82a6324e9cadfab5161ce66dbb - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:35:46 compute-2 nova_compute[226829]: 2026-01-31 07:35:46.135 226833 DEBUG nova.compute.manager [None req-bb37cc4c-716b-463d-ba56-16e5f6147b67 c4dea6982d1143478b34c77cdceb634f 03e0ea82a6324e9cadfab5161ce66dbb - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:35:46 compute-2 nova_compute[226829]: 2026-01-31 07:35:46.135 226833 DEBUG oslo_concurrency.lockutils [None req-bb37cc4c-716b-463d-ba56-16e5f6147b67 c4dea6982d1143478b34c77cdceb634f 03e0ea82a6324e9cadfab5161ce66dbb - - default default] Acquiring lock "refresh_cache-1c49c3f4-bab2-45dc-8352-02a4a3b9c372" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:35:46 compute-2 nova_compute[226829]: 2026-01-31 07:35:46.135 226833 DEBUG oslo_concurrency.lockutils [None req-bb37cc4c-716b-463d-ba56-16e5f6147b67 c4dea6982d1143478b34c77cdceb634f 03e0ea82a6324e9cadfab5161ce66dbb - - default default] Acquired lock "refresh_cache-1c49c3f4-bab2-45dc-8352-02a4a3b9c372" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:35:46 compute-2 nova_compute[226829]: 2026-01-31 07:35:46.135 226833 DEBUG nova.network.neutron [None req-bb37cc4c-716b-463d-ba56-16e5f6147b67 c4dea6982d1143478b34c77cdceb634f 03e0ea82a6324e9cadfab5161ce66dbb - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:35:46 compute-2 ceph-mon[77282]: pgmap v1052: 305 pgs: 305 active+clean; 402 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 5.7 MiB/s rd, 4.3 MiB/s wr, 330 op/s
Jan 31 07:35:46 compute-2 ceph-mon[77282]: osdmap e148: 3 total, 3 up, 3 in
Jan 31 07:35:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/128629596' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4105849663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:46 compute-2 nova_compute[226829]: 2026-01-31 07:35:46.294 226833 DEBUG oslo_concurrency.lockutils [None req-2d00c38a-326c-40e1-b2e9-683006168ff4 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Acquiring lock "1c49c3f4-bab2-45dc-8352-02a4a3b9c372" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:46 compute-2 nova_compute[226829]: 2026-01-31 07:35:46.295 226833 DEBUG oslo_concurrency.lockutils [None req-2d00c38a-326c-40e1-b2e9-683006168ff4 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Lock "1c49c3f4-bab2-45dc-8352-02a4a3b9c372" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:46 compute-2 nova_compute[226829]: 2026-01-31 07:35:46.296 226833 DEBUG oslo_concurrency.lockutils [None req-2d00c38a-326c-40e1-b2e9-683006168ff4 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Acquiring lock "1c49c3f4-bab2-45dc-8352-02a4a3b9c372-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:46 compute-2 nova_compute[226829]: 2026-01-31 07:35:46.296 226833 DEBUG oslo_concurrency.lockutils [None req-2d00c38a-326c-40e1-b2e9-683006168ff4 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Lock "1c49c3f4-bab2-45dc-8352-02a4a3b9c372-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:46 compute-2 nova_compute[226829]: 2026-01-31 07:35:46.297 226833 DEBUG oslo_concurrency.lockutils [None req-2d00c38a-326c-40e1-b2e9-683006168ff4 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Lock "1c49c3f4-bab2-45dc-8352-02a4a3b9c372-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:46 compute-2 nova_compute[226829]: 2026-01-31 07:35:46.299 226833 INFO nova.compute.manager [None req-2d00c38a-326c-40e1-b2e9-683006168ff4 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Terminating instance
Jan 31 07:35:46 compute-2 nova_compute[226829]: 2026-01-31 07:35:46.300 226833 DEBUG oslo_concurrency.lockutils [None req-2d00c38a-326c-40e1-b2e9-683006168ff4 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Acquiring lock "refresh_cache-1c49c3f4-bab2-45dc-8352-02a4a3b9c372" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:35:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:35:46 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1401127398' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:46 compute-2 nova_compute[226829]: 2026-01-31 07:35:46.324 226833 DEBUG oslo_concurrency.processutils [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:46 compute-2 nova_compute[226829]: 2026-01-31 07:35:46.326 226833 DEBUG nova.virt.libvirt.driver [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:35:46 compute-2 nova_compute[226829]:   <uuid>871711de-f993-4592-83a2-a36c4039786d</uuid>
Jan 31 07:35:46 compute-2 nova_compute[226829]:   <name>instance-00000011</name>
Jan 31 07:35:46 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:35:46 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:35:46 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:35:46 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:       <nova:name>tempest-MigrationsAdminTest-server-1825538190</nova:name>
Jan 31 07:35:46 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:35:45</nova:creationTime>
Jan 31 07:35:46 compute-2 nova_compute[226829]:       <nova:flavor name="tempest-test_resize_flavor_-1842587455">
Jan 31 07:35:46 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:35:46 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:35:46 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:35:46 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:35:46 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:35:46 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:35:46 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:35:46 compute-2 nova_compute[226829]:         <nova:user uuid="71f887fd92fb486a959e5ca100cb1e10">tempest-MigrationsAdminTest-137263588-project-member</nova:user>
Jan 31 07:35:46 compute-2 nova_compute[226829]:         <nova:project uuid="7c1ddd67115f4f7bab056dbb2f270ccc">tempest-MigrationsAdminTest-137263588</nova:project>
Jan 31 07:35:46 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:35:46 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:       <nova:ports/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:35:46 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:35:46 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <system>
Jan 31 07:35:46 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:35:46 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:35:46 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:35:46 compute-2 nova_compute[226829]:       <entry name="serial">871711de-f993-4592-83a2-a36c4039786d</entry>
Jan 31 07:35:46 compute-2 nova_compute[226829]:       <entry name="uuid">871711de-f993-4592-83a2-a36c4039786d</entry>
Jan 31 07:35:46 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     </system>
Jan 31 07:35:46 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:35:46 compute-2 nova_compute[226829]:   <os>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:   </os>
Jan 31 07:35:46 compute-2 nova_compute[226829]:   <features>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:   </features>
Jan 31 07:35:46 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:35:46 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:35:46 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:35:46 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/871711de-f993-4592-83a2-a36c4039786d_disk">
Jan 31 07:35:46 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:       </source>
Jan 31 07:35:46 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:35:46 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:35:46 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:35:46 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/871711de-f993-4592-83a2-a36c4039786d_disk.config">
Jan 31 07:35:46 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:       </source>
Jan 31 07:35:46 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:35:46 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:35:46 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:35:46 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/871711de-f993-4592-83a2-a36c4039786d/console.log" append="off"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <video>
Jan 31 07:35:46 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     </video>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <input type="keyboard" bus="usb"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:35:46 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:35:46 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:35:46 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:35:46 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:35:46 compute-2 nova_compute[226829]: </domain>
Jan 31 07:35:46 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:35:46 compute-2 nova_compute[226829]: 2026-01-31 07:35:46.336 226833 DEBUG nova.network.neutron [None req-bb37cc4c-716b-463d-ba56-16e5f6147b67 c4dea6982d1143478b34c77cdceb634f 03e0ea82a6324e9cadfab5161ce66dbb - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:35:46 compute-2 systemd-machined[195142]: New machine qemu-8-instance-00000011.
Jan 31 07:35:46 compute-2 systemd[1]: Started Virtual Machine qemu-8-instance-00000011.
Jan 31 07:35:46 compute-2 nova_compute[226829]: 2026-01-31 07:35:46.791 226833 DEBUG nova.network.neutron [None req-bb37cc4c-716b-463d-ba56-16e5f6147b67 c4dea6982d1143478b34c77cdceb634f 03e0ea82a6324e9cadfab5161ce66dbb - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:35:46 compute-2 nova_compute[226829]: 2026-01-31 07:35:46.811 226833 DEBUG oslo_concurrency.lockutils [None req-bb37cc4c-716b-463d-ba56-16e5f6147b67 c4dea6982d1143478b34c77cdceb634f 03e0ea82a6324e9cadfab5161ce66dbb - - default default] Releasing lock "refresh_cache-1c49c3f4-bab2-45dc-8352-02a4a3b9c372" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:35:46 compute-2 nova_compute[226829]: 2026-01-31 07:35:46.812 226833 DEBUG oslo_concurrency.lockutils [None req-2d00c38a-326c-40e1-b2e9-683006168ff4 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Acquired lock "refresh_cache-1c49c3f4-bab2-45dc-8352-02a4a3b9c372" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:35:46 compute-2 nova_compute[226829]: 2026-01-31 07:35:46.812 226833 DEBUG nova.network.neutron [None req-2d00c38a-326c-40e1-b2e9-683006168ff4 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:35:46 compute-2 nova_compute[226829]: 2026-01-31 07:35:46.958 226833 DEBUG nova.network.neutron [None req-2d00c38a-326c-40e1-b2e9-683006168ff4 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:35:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1401127398' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:47 compute-2 nova_compute[226829]: 2026-01-31 07:35:47.215 226833 DEBUG nova.network.neutron [None req-2d00c38a-326c-40e1-b2e9-683006168ff4 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:35:47 compute-2 nova_compute[226829]: 2026-01-31 07:35:47.232 226833 DEBUG oslo_concurrency.lockutils [None req-2d00c38a-326c-40e1-b2e9-683006168ff4 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Releasing lock "refresh_cache-1c49c3f4-bab2-45dc-8352-02a4a3b9c372" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:35:47 compute-2 nova_compute[226829]: 2026-01-31 07:35:47.233 226833 DEBUG nova.compute.manager [None req-2d00c38a-326c-40e1-b2e9-683006168ff4 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:35:47 compute-2 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000015.scope: Deactivated successfully.
Jan 31 07:35:47 compute-2 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000015.scope: Consumed 3.579s CPU time.
Jan 31 07:35:47 compute-2 systemd-machined[195142]: Machine qemu-7-instance-00000015 terminated.
Jan 31 07:35:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:35:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:47.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:35:47 compute-2 nova_compute[226829]: 2026-01-31 07:35:47.457 226833 INFO nova.virt.libvirt.driver [-] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Instance destroyed successfully.
Jan 31 07:35:47 compute-2 nova_compute[226829]: 2026-01-31 07:35:47.458 226833 DEBUG nova.objects.instance [None req-2d00c38a-326c-40e1-b2e9-683006168ff4 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Lazy-loading 'resources' on Instance uuid 1c49c3f4-bab2-45dc-8352-02a4a3b9c372 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:35:47 compute-2 nova_compute[226829]: 2026-01-31 07:35:47.652 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Removed pending event for 871711de-f993-4592-83a2-a36c4039786d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 07:35:47 compute-2 nova_compute[226829]: 2026-01-31 07:35:47.652 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769844947.651721, 871711de-f993-4592-83a2-a36c4039786d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:35:47 compute-2 nova_compute[226829]: 2026-01-31 07:35:47.652 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] VM Resumed (Lifecycle Event)
Jan 31 07:35:47 compute-2 nova_compute[226829]: 2026-01-31 07:35:47.655 226833 DEBUG nova.compute.manager [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:35:47 compute-2 nova_compute[226829]: 2026-01-31 07:35:47.658 226833 INFO nova.virt.libvirt.driver [-] [instance: 871711de-f993-4592-83a2-a36c4039786d] Instance running successfully.
Jan 31 07:35:47 compute-2 nova_compute[226829]: 2026-01-31 07:35:47.659 226833 DEBUG nova.virt.libvirt.driver [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Jan 31 07:35:47 compute-2 nova_compute[226829]: 2026-01-31 07:35:47.680 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:35:47 compute-2 nova_compute[226829]: 2026-01-31 07:35:47.694 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:35:47 compute-2 nova_compute[226829]: 2026-01-31 07:35:47.739 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Jan 31 07:35:47 compute-2 nova_compute[226829]: 2026-01-31 07:35:47.739 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769844947.6547291, 871711de-f993-4592-83a2-a36c4039786d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:35:47 compute-2 nova_compute[226829]: 2026-01-31 07:35:47.740 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] VM Started (Lifecycle Event)
Jan 31 07:35:47 compute-2 nova_compute[226829]: 2026-01-31 07:35:47.750 226833 INFO nova.compute.manager [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Updating instance to original state: 'active'
Jan 31 07:35:47 compute-2 nova_compute[226829]: 2026-01-31 07:35:47.774 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:35:47 compute-2 nova_compute[226829]: 2026-01-31 07:35:47.777 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:35:47 compute-2 nova_compute[226829]: 2026-01-31 07:35:47.796 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:47 compute-2 nova_compute[226829]: 2026-01-31 07:35:47.803 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Jan 31 07:35:47 compute-2 sudo[236070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:35:47 compute-2 sudo[236070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:47 compute-2 sudo[236070]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:47 compute-2 nova_compute[226829]: 2026-01-31 07:35:47.928 226833 INFO nova.virt.libvirt.driver [None req-2d00c38a-326c-40e1-b2e9-683006168ff4 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Deleting instance files /var/lib/nova/instances/1c49c3f4-bab2-45dc-8352-02a4a3b9c372_del
Jan 31 07:35:47 compute-2 nova_compute[226829]: 2026-01-31 07:35:47.930 226833 INFO nova.virt.libvirt.driver [None req-2d00c38a-326c-40e1-b2e9-683006168ff4 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Deletion of /var/lib/nova/instances/1c49c3f4-bab2-45dc-8352-02a4a3b9c372_del complete
Jan 31 07:35:47 compute-2 sudo[236095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:35:47 compute-2 sudo[236095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:47 compute-2 sudo[236095]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:47 compute-2 nova_compute[226829]: 2026-01-31 07:35:47.975 226833 INFO nova.compute.manager [None req-2d00c38a-326c-40e1-b2e9-683006168ff4 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Took 0.74 seconds to destroy the instance on the hypervisor.
Jan 31 07:35:47 compute-2 nova_compute[226829]: 2026-01-31 07:35:47.975 226833 DEBUG oslo.service.loopingcall [None req-2d00c38a-326c-40e1-b2e9-683006168ff4 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:35:47 compute-2 nova_compute[226829]: 2026-01-31 07:35:47.976 226833 DEBUG nova.compute.manager [-] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:35:47 compute-2 nova_compute[226829]: 2026-01-31 07:35:47.976 226833 DEBUG nova.network.neutron [-] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:35:47 compute-2 sudo[236120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:35:47 compute-2 sudo[236120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:47 compute-2 sudo[236120]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:48.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:48 compute-2 sudo[236145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:35:48 compute-2 sudo[236145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:48 compute-2 ceph-mon[77282]: pgmap v1054: 305 pgs: 305 active+clean; 388 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 8.6 MiB/s rd, 4.7 MiB/s wr, 452 op/s
Jan 31 07:35:48 compute-2 nova_compute[226829]: 2026-01-31 07:35:48.247 226833 DEBUG nova.network.neutron [-] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:35:48 compute-2 sudo[236183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:35:48 compute-2 sudo[236183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:48 compute-2 sudo[236183]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:48 compute-2 nova_compute[226829]: 2026-01-31 07:35:48.264 226833 DEBUG nova.network.neutron [-] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:35:48 compute-2 nova_compute[226829]: 2026-01-31 07:35:48.281 226833 INFO nova.compute.manager [-] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Took 0.31 seconds to deallocate network for instance.
Jan 31 07:35:48 compute-2 sudo[236208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:35:48 compute-2 sudo[236208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:48 compute-2 sudo[236208]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:48 compute-2 nova_compute[226829]: 2026-01-31 07:35:48.328 226833 DEBUG oslo_concurrency.lockutils [None req-2d00c38a-326c-40e1-b2e9-683006168ff4 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:48 compute-2 nova_compute[226829]: 2026-01-31 07:35:48.329 226833 DEBUG oslo_concurrency.lockutils [None req-2d00c38a-326c-40e1-b2e9-683006168ff4 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:48 compute-2 nova_compute[226829]: 2026-01-31 07:35:48.393 226833 DEBUG oslo_concurrency.processutils [None req-2d00c38a-326c-40e1-b2e9-683006168ff4 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:48 compute-2 sudo[236145]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:48 compute-2 nova_compute[226829]: 2026-01-31 07:35:48.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:35:48 compute-2 nova_compute[226829]: 2026-01-31 07:35:48.490 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 07:35:48 compute-2 nova_compute[226829]: 2026-01-31 07:35:48.612 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:35:48 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1412638885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:48 compute-2 nova_compute[226829]: 2026-01-31 07:35:48.831 226833 DEBUG oslo_concurrency.processutils [None req-2d00c38a-326c-40e1-b2e9-683006168ff4 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:48 compute-2 nova_compute[226829]: 2026-01-31 07:35:48.838 226833 DEBUG nova.compute.provider_tree [None req-2d00c38a-326c-40e1-b2e9-683006168ff4 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:35:48 compute-2 nova_compute[226829]: 2026-01-31 07:35:48.853 226833 DEBUG nova.scheduler.client.report [None req-2d00c38a-326c-40e1-b2e9-683006168ff4 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:35:48 compute-2 nova_compute[226829]: 2026-01-31 07:35:48.874 226833 DEBUG oslo_concurrency.lockutils [None req-2d00c38a-326c-40e1-b2e9-683006168ff4 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:48 compute-2 nova_compute[226829]: 2026-01-31 07:35:48.898 226833 INFO nova.scheduler.client.report [None req-2d00c38a-326c-40e1-b2e9-683006168ff4 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Deleted allocations for instance 1c49c3f4-bab2-45dc-8352-02a4a3b9c372
Jan 31 07:35:48 compute-2 nova_compute[226829]: 2026-01-31 07:35:48.976 226833 DEBUG oslo_concurrency.lockutils [None req-2d00c38a-326c-40e1-b2e9-683006168ff4 68453ccd293b4153b94a2fd75baa2dfc be48b50673d940d1955ee07f4627026a - - default default] Lock "1c49c3f4-bab2-45dc-8352-02a4a3b9c372" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1412638885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:35:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:35:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:35:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:35:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:49.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:49 compute-2 nova_compute[226829]: 2026-01-31 07:35:49.524 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:35:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:35:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:50.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:50 compute-2 ceph-mon[77282]: pgmap v1055: 305 pgs: 305 active+clean; 369 MiB data, 477 MiB used, 21 GiB / 21 GiB avail; 8.4 MiB/s rd, 4.3 MiB/s wr, 424 op/s
Jan 31 07:35:50 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:35:50 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:35:50 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:35:50 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:35:50 compute-2 nova_compute[226829]: 2026-01-31 07:35:50.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:35:51 compute-2 ceph-mon[77282]: pgmap v1056: 305 pgs: 305 active+clean; 344 MiB data, 455 MiB used, 21 GiB / 21 GiB avail; 9.5 MiB/s rd, 3.0 MiB/s wr, 481 op/s
Jan 31 07:35:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:51.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:51 compute-2 nova_compute[226829]: 2026-01-31 07:35:51.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:35:51 compute-2 nova_compute[226829]: 2026-01-31 07:35:51.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 07:35:51 compute-2 nova_compute[226829]: 2026-01-31 07:35:51.508 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 07:35:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:52.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:52 compute-2 nova_compute[226829]: 2026-01-31 07:35:52.798 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:53.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:53 compute-2 nova_compute[226829]: 2026-01-31 07:35:53.503 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:35:53 compute-2 nova_compute[226829]: 2026-01-31 07:35:53.504 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:35:53 compute-2 ceph-mon[77282]: pgmap v1057: 305 pgs: 305 active+clean; 334 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 9.4 MiB/s rd, 2.7 MiB/s wr, 471 op/s
Jan 31 07:35:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1651880508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:53 compute-2 nova_compute[226829]: 2026-01-31 07:35:53.616 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e149 e149: 3 total, 3 up, 3 in
Jan 31 07:35:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:54.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:54 compute-2 nova_compute[226829]: 2026-01-31 07:35:54.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:35:54 compute-2 nova_compute[226829]: 2026-01-31 07:35:54.507 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:35:54 compute-2 nova_compute[226829]: 2026-01-31 07:35:54.507 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:35:54 compute-2 nova_compute[226829]: 2026-01-31 07:35:54.507 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:35:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:35:54 compute-2 ceph-mon[77282]: osdmap e149: 3 total, 3 up, 3 in
Jan 31 07:35:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/209638890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:54 compute-2 nova_compute[226829]: 2026-01-31 07:35:54.710 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-871711de-f993-4592-83a2-a36c4039786d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:35:54 compute-2 nova_compute[226829]: 2026-01-31 07:35:54.710 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-871711de-f993-4592-83a2-a36c4039786d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:35:54 compute-2 nova_compute[226829]: 2026-01-31 07:35:54.711 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 07:35:54 compute-2 nova_compute[226829]: 2026-01-31 07:35:54.711 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 871711de-f993-4592-83a2-a36c4039786d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:35:54 compute-2 sudo[236275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:35:54 compute-2 sudo[236275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:54 compute-2 sudo[236275]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:55 compute-2 sudo[236300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:35:55 compute-2 sudo[236300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:55 compute-2 sudo[236300]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:55 compute-2 nova_compute[226829]: 2026-01-31 07:35:55.281 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:35:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:55 compute-2 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 07:35:55 compute-2 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 07:35:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:55.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:55 compute-2 nova_compute[226829]: 2026-01-31 07:35:55.669 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:35:55 compute-2 nova_compute[226829]: 2026-01-31 07:35:55.691 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-871711de-f993-4592-83a2-a36c4039786d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:35:55 compute-2 nova_compute[226829]: 2026-01-31 07:35:55.691 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 07:35:55 compute-2 nova_compute[226829]: 2026-01-31 07:35:55.692 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:35:55 compute-2 nova_compute[226829]: 2026-01-31 07:35:55.692 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:35:55 compute-2 nova_compute[226829]: 2026-01-31 07:35:55.693 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:35:55 compute-2 nova_compute[226829]: 2026-01-31 07:35:55.693 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:35:55 compute-2 nova_compute[226829]: 2026-01-31 07:35:55.721 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:55 compute-2 nova_compute[226829]: 2026-01-31 07:35:55.722 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:55 compute-2 nova_compute[226829]: 2026-01-31 07:35:55.722 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:55 compute-2 nova_compute[226829]: 2026-01-31 07:35:55.722 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:35:55 compute-2 nova_compute[226829]: 2026-01-31 07:35:55.723 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:55 compute-2 ceph-mon[77282]: pgmap v1059: 305 pgs: 305 active+clean; 355 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 6.6 MiB/s rd, 2.7 MiB/s wr, 352 op/s
Jan 31 07:35:55 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:35:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2736452277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:55 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:35:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/694674579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3262193131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:56.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:56 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 31 07:35:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:35:56 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/781511916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:56 compute-2 nova_compute[226829]: 2026-01-31 07:35:56.146 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:56 compute-2 nova_compute[226829]: 2026-01-31 07:35:56.344 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:35:56 compute-2 nova_compute[226829]: 2026-01-31 07:35:56.345 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:35:56 compute-2 nova_compute[226829]: 2026-01-31 07:35:56.517 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:35:56 compute-2 nova_compute[226829]: 2026-01-31 07:35:56.518 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4677MB free_disk=20.806453704833984GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:35:56 compute-2 nova_compute[226829]: 2026-01-31 07:35:56.518 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:56 compute-2 nova_compute[226829]: 2026-01-31 07:35:56.519 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:56 compute-2 nova_compute[226829]: 2026-01-31 07:35:56.744 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 871711de-f993-4592-83a2-a36c4039786d actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:35:56 compute-2 nova_compute[226829]: 2026-01-31 07:35:56.745 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:35:56 compute-2 nova_compute[226829]: 2026-01-31 07:35:56.745 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:35:56 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/781511916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:56 compute-2 nova_compute[226829]: 2026-01-31 07:35:56.926 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:57 compute-2 podman[236370]: 2026-01-31 07:35:57.227857927 +0000 UTC m=+0.103119378 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 07:35:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:35:57 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1877029634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:57 compute-2 nova_compute[226829]: 2026-01-31 07:35:57.403 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:57 compute-2 nova_compute[226829]: 2026-01-31 07:35:57.409 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:35:57 compute-2 nova_compute[226829]: 2026-01-31 07:35:57.429 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:35:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:57.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:57 compute-2 nova_compute[226829]: 2026-01-31 07:35:57.457 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:35:57 compute-2 nova_compute[226829]: 2026-01-31 07:35:57.458 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.939s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:57 compute-2 nova_compute[226829]: 2026-01-31 07:35:57.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:35:57 compute-2 nova_compute[226829]: 2026-01-31 07:35:57.492 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:35:57 compute-2 nova_compute[226829]: 2026-01-31 07:35:57.801 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:57 compute-2 ceph-mon[77282]: pgmap v1060: 305 pgs: 305 active+clean; 362 MiB data, 467 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.6 MiB/s wr, 272 op/s
Jan 31 07:35:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1150089042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3645022635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2250225815' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1877029634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:35:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:58.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:35:58 compute-2 nova_compute[226829]: 2026-01-31 07:35:58.615 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1301746315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3732196518' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:35:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:59.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:35:59 compute-2 nova_compute[226829]: 2026-01-31 07:35:59.680 226833 DEBUG oslo_concurrency.lockutils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Acquiring lock "c8abb380-680c-41b2-8156-a5e2b5b96f42" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:59 compute-2 nova_compute[226829]: 2026-01-31 07:35:59.680 226833 DEBUG oslo_concurrency.lockutils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lock "c8abb380-680c-41b2-8156-a5e2b5b96f42" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:59 compute-2 nova_compute[226829]: 2026-01-31 07:35:59.701 226833 DEBUG nova.compute.manager [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:35:59 compute-2 nova_compute[226829]: 2026-01-31 07:35:59.762 226833 DEBUG oslo_concurrency.lockutils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:59 compute-2 nova_compute[226829]: 2026-01-31 07:35:59.763 226833 DEBUG oslo_concurrency.lockutils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:59 compute-2 nova_compute[226829]: 2026-01-31 07:35:59.770 226833 DEBUG nova.virt.hardware [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:35:59 compute-2 nova_compute[226829]: 2026-01-31 07:35:59.771 226833 INFO nova.compute.claims [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:35:59 compute-2 ceph-mon[77282]: pgmap v1061: 305 pgs: 305 active+clean; 371 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.0 MiB/s wr, 250 op/s
Jan 31 07:35:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1681356475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1061276305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:35:59.880573) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844959880653, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 2380, "num_deletes": 254, "total_data_size": 5327081, "memory_usage": 5404608, "flush_reason": "Manual Compaction"}
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844959896708, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 3479806, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23243, "largest_seqno": 25618, "table_properties": {"data_size": 3470339, "index_size": 5833, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21135, "raw_average_key_size": 20, "raw_value_size": 3450801, "raw_average_value_size": 3393, "num_data_blocks": 257, "num_entries": 1017, "num_filter_entries": 1017, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844773, "oldest_key_time": 1769844773, "file_creation_time": 1769844959, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 16198 microseconds, and 7573 cpu microseconds.
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:35:59.896777) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 3479806 bytes OK
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:35:59.896800) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:35:59.899042) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:35:59.899058) EVENT_LOG_v1 {"time_micros": 1769844959899052, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:35:59.899077) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 5316419, prev total WAL file size 5316419, number of live WAL files 2.
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:35:59.899954) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(3398KB)], [48(7354KB)]
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844959900064, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 11010709, "oldest_snapshot_seqno": -1}
Jan 31 07:35:59 compute-2 nova_compute[226829]: 2026-01-31 07:35:59.912 226833 DEBUG oslo_concurrency.processutils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 4973 keys, 8996070 bytes, temperature: kUnknown
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844959971722, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 8996070, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8962019, "index_size": 20501, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 125756, "raw_average_key_size": 25, "raw_value_size": 8871423, "raw_average_value_size": 1783, "num_data_blocks": 837, "num_entries": 4973, "num_filter_entries": 4973, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769844959, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:35:59.971986) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 8996070 bytes
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:35:59.976431) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.5 rd, 125.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.2 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(5.7) write-amplify(2.6) OK, records in: 5498, records dropped: 525 output_compression: NoCompression
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:35:59.976566) EVENT_LOG_v1 {"time_micros": 1769844959976489, "job": 28, "event": "compaction_finished", "compaction_time_micros": 71748, "compaction_time_cpu_micros": 26863, "output_level": 6, "num_output_files": 1, "total_output_size": 8996070, "num_input_records": 5498, "num_output_records": 4973, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844959977518, "job": 28, "event": "table_file_deletion", "file_number": 50}
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844959978996, "job": 28, "event": "table_file_deletion", "file_number": 48}
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:35:59.899821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:35:59.979068) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:35:59.979074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:35:59.979075) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:35:59.979076) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:35:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:35:59.979078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:36:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:00.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:36:00 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1477228523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.342 226833 DEBUG oslo_concurrency.processutils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.347 226833 DEBUG nova.compute.provider_tree [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.389 226833 DEBUG nova.scheduler.client.report [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.441 226833 DEBUG oslo_concurrency.lockutils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.441 226833 DEBUG nova.compute.manager [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.497 226833 DEBUG nova.compute.manager [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.520 226833 INFO nova.virt.libvirt.driver [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.540 226833 DEBUG nova.compute.manager [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.637 226833 DEBUG nova.compute.manager [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.639 226833 DEBUG nova.virt.libvirt.driver [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.640 226833 INFO nova.virt.libvirt.driver [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Creating image(s)
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.688 226833 DEBUG nova.storage.rbd_utils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] rbd image c8abb380-680c-41b2-8156-a5e2b5b96f42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.715 226833 DEBUG nova.storage.rbd_utils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] rbd image c8abb380-680c-41b2-8156-a5e2b5b96f42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.741 226833 DEBUG nova.storage.rbd_utils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] rbd image c8abb380-680c-41b2-8156-a5e2b5b96f42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.744 226833 DEBUG oslo_concurrency.processutils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.761 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.790 226833 WARNING nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] While synchronizing instance power states, found 2 instances in the database and 1 instances on the hypervisor.
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.790 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Triggering sync for uuid 871711de-f993-4592-83a2-a36c4039786d _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.790 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Triggering sync for uuid c8abb380-680c-41b2-8156-a5e2b5b96f42 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.791 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "871711de-f993-4592-83a2-a36c4039786d" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.791 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "871711de-f993-4592-83a2-a36c4039786d" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.791 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "c8abb380-680c-41b2-8156-a5e2b5b96f42" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.795 226833 DEBUG oslo_concurrency.processutils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.795 226833 DEBUG oslo_concurrency.lockutils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.796 226833 DEBUG oslo_concurrency.lockutils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.796 226833 DEBUG oslo_concurrency.lockutils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.821 226833 DEBUG nova.storage.rbd_utils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] rbd image c8abb380-680c-41b2-8156-a5e2b5b96f42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.825 226833 DEBUG oslo_concurrency.processutils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 c8abb380-680c-41b2-8156-a5e2b5b96f42_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:00 compute-2 nova_compute[226829]: 2026-01-31 07:36:00.847 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "871711de-f993-4592-83a2-a36c4039786d" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1477228523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:01 compute-2 nova_compute[226829]: 2026-01-31 07:36:01.226 226833 DEBUG oslo_concurrency.processutils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 c8abb380-680c-41b2-8156-a5e2b5b96f42_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:01 compute-2 nova_compute[226829]: 2026-01-31 07:36:01.303 226833 DEBUG nova.storage.rbd_utils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] resizing rbd image c8abb380-680c-41b2-8156-a5e2b5b96f42_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:36:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:01.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:01 compute-2 nova_compute[226829]: 2026-01-31 07:36:01.751 226833 DEBUG nova.objects.instance [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lazy-loading 'migration_context' on Instance uuid c8abb380-680c-41b2-8156-a5e2b5b96f42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:01 compute-2 nova_compute[226829]: 2026-01-31 07:36:01.778 226833 DEBUG nova.virt.libvirt.driver [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:36:01 compute-2 nova_compute[226829]: 2026-01-31 07:36:01.778 226833 DEBUG nova.virt.libvirt.driver [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Ensure instance console log exists: /var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:36:01 compute-2 nova_compute[226829]: 2026-01-31 07:36:01.779 226833 DEBUG oslo_concurrency.lockutils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:01 compute-2 nova_compute[226829]: 2026-01-31 07:36:01.780 226833 DEBUG oslo_concurrency.lockutils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:01 compute-2 nova_compute[226829]: 2026-01-31 07:36:01.780 226833 DEBUG oslo_concurrency.lockutils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:01 compute-2 nova_compute[226829]: 2026-01-31 07:36:01.783 226833 DEBUG nova.virt.libvirt.driver [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:36:01 compute-2 nova_compute[226829]: 2026-01-31 07:36:01.789 226833 WARNING nova.virt.libvirt.driver [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:36:01 compute-2 nova_compute[226829]: 2026-01-31 07:36:01.794 226833 DEBUG nova.virt.libvirt.host [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:36:01 compute-2 nova_compute[226829]: 2026-01-31 07:36:01.796 226833 DEBUG nova.virt.libvirt.host [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:36:01 compute-2 nova_compute[226829]: 2026-01-31 07:36:01.800 226833 DEBUG nova.virt.libvirt.host [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:36:01 compute-2 nova_compute[226829]: 2026-01-31 07:36:01.800 226833 DEBUG nova.virt.libvirt.host [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:36:01 compute-2 nova_compute[226829]: 2026-01-31 07:36:01.802 226833 DEBUG nova.virt.libvirt.driver [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:36:01 compute-2 nova_compute[226829]: 2026-01-31 07:36:01.803 226833 DEBUG nova.virt.hardware [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:36:01 compute-2 nova_compute[226829]: 2026-01-31 07:36:01.803 226833 DEBUG nova.virt.hardware [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:36:01 compute-2 nova_compute[226829]: 2026-01-31 07:36:01.804 226833 DEBUG nova.virt.hardware [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:36:01 compute-2 nova_compute[226829]: 2026-01-31 07:36:01.805 226833 DEBUG nova.virt.hardware [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:36:01 compute-2 nova_compute[226829]: 2026-01-31 07:36:01.805 226833 DEBUG nova.virt.hardware [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:36:01 compute-2 nova_compute[226829]: 2026-01-31 07:36:01.805 226833 DEBUG nova.virt.hardware [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:36:01 compute-2 nova_compute[226829]: 2026-01-31 07:36:01.806 226833 DEBUG nova.virt.hardware [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:36:01 compute-2 nova_compute[226829]: 2026-01-31 07:36:01.806 226833 DEBUG nova.virt.hardware [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:36:01 compute-2 nova_compute[226829]: 2026-01-31 07:36:01.807 226833 DEBUG nova.virt.hardware [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:36:01 compute-2 nova_compute[226829]: 2026-01-31 07:36:01.807 226833 DEBUG nova.virt.hardware [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:36:01 compute-2 nova_compute[226829]: 2026-01-31 07:36:01.808 226833 DEBUG nova.virt.hardware [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:36:01 compute-2 nova_compute[226829]: 2026-01-31 07:36:01.813 226833 DEBUG oslo_concurrency.processutils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:01 compute-2 ceph-mon[77282]: pgmap v1062: 305 pgs: 305 active+clean; 417 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.3 MiB/s wr, 217 op/s
Jan 31 07:36:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/200743085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:02.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:36:02 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3216309909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:02 compute-2 nova_compute[226829]: 2026-01-31 07:36:02.311 226833 DEBUG oslo_concurrency.processutils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:02 compute-2 nova_compute[226829]: 2026-01-31 07:36:02.337 226833 DEBUG nova.storage.rbd_utils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] rbd image c8abb380-680c-41b2-8156-a5e2b5b96f42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:02 compute-2 nova_compute[226829]: 2026-01-31 07:36:02.340 226833 DEBUG oslo_concurrency.processutils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:02 compute-2 nova_compute[226829]: 2026-01-31 07:36:02.454 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844947.4529235, 1c49c3f4-bab2-45dc-8352-02a4a3b9c372 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:36:02 compute-2 nova_compute[226829]: 2026-01-31 07:36:02.454 226833 INFO nova.compute.manager [-] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] VM Stopped (Lifecycle Event)
Jan 31 07:36:02 compute-2 nova_compute[226829]: 2026-01-31 07:36:02.661 226833 DEBUG nova.compute.manager [None req-873bd996-9551-41f6-9ada-941a89a8149d - - - - - -] [instance: 1c49c3f4-bab2-45dc-8352-02a4a3b9c372] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:36:02 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2003429390' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:02 compute-2 nova_compute[226829]: 2026-01-31 07:36:02.853 226833 DEBUG oslo_concurrency.processutils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:02 compute-2 nova_compute[226829]: 2026-01-31 07:36:02.856 226833 DEBUG nova.objects.instance [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lazy-loading 'pci_devices' on Instance uuid c8abb380-680c-41b2-8156-a5e2b5b96f42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:02 compute-2 nova_compute[226829]: 2026-01-31 07:36:02.858 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3216309909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2003429390' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:03 compute-2 nova_compute[226829]: 2026-01-31 07:36:03.262 226833 DEBUG nova.virt.libvirt.driver [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:36:03 compute-2 nova_compute[226829]:   <uuid>c8abb380-680c-41b2-8156-a5e2b5b96f42</uuid>
Jan 31 07:36:03 compute-2 nova_compute[226829]:   <name>instance-00000018</name>
Jan 31 07:36:03 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:36:03 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:36:03 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:36:03 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:       <nova:name>tempest-ServersAdmin275Test-server-287964715</nova:name>
Jan 31 07:36:03 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:36:01</nova:creationTime>
Jan 31 07:36:03 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:36:03 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:36:03 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:36:03 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:36:03 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:36:03 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:36:03 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:36:03 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:36:03 compute-2 nova_compute[226829]:         <nova:user uuid="b9769a806c6f4874ab462acac1b4bfcc">tempest-ServersAdmin275Test-732223889-project-member</nova:user>
Jan 31 07:36:03 compute-2 nova_compute[226829]:         <nova:project uuid="74b1ee3fc09b40608d0892724c5ddba4">tempest-ServersAdmin275Test-732223889</nova:project>
Jan 31 07:36:03 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:36:03 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:       <nova:ports/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:36:03 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:36:03 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <system>
Jan 31 07:36:03 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:36:03 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:36:03 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:36:03 compute-2 nova_compute[226829]:       <entry name="serial">c8abb380-680c-41b2-8156-a5e2b5b96f42</entry>
Jan 31 07:36:03 compute-2 nova_compute[226829]:       <entry name="uuid">c8abb380-680c-41b2-8156-a5e2b5b96f42</entry>
Jan 31 07:36:03 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     </system>
Jan 31 07:36:03 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:36:03 compute-2 nova_compute[226829]:   <os>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:   </os>
Jan 31 07:36:03 compute-2 nova_compute[226829]:   <features>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:   </features>
Jan 31 07:36:03 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:36:03 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:36:03 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:36:03 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/c8abb380-680c-41b2-8156-a5e2b5b96f42_disk">
Jan 31 07:36:03 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:       </source>
Jan 31 07:36:03 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:36:03 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:36:03 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:36:03 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/c8abb380-680c-41b2-8156-a5e2b5b96f42_disk.config">
Jan 31 07:36:03 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:       </source>
Jan 31 07:36:03 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:36:03 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:36:03 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:36:03 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42/console.log" append="off"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <video>
Jan 31 07:36:03 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     </video>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:36:03 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:36:03 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:36:03 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:36:03 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:36:03 compute-2 nova_compute[226829]: </domain>
Jan 31 07:36:03 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:36:03 compute-2 nova_compute[226829]: 2026-01-31 07:36:03.354 226833 DEBUG nova.virt.libvirt.driver [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:36:03 compute-2 nova_compute[226829]: 2026-01-31 07:36:03.355 226833 DEBUG nova.virt.libvirt.driver [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:36:03 compute-2 nova_compute[226829]: 2026-01-31 07:36:03.355 226833 INFO nova.virt.libvirt.driver [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Using config drive
Jan 31 07:36:03 compute-2 nova_compute[226829]: 2026-01-31 07:36:03.386 226833 DEBUG nova.storage.rbd_utils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] rbd image c8abb380-680c-41b2-8156-a5e2b5b96f42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:03.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:03 compute-2 nova_compute[226829]: 2026-01-31 07:36:03.579 226833 INFO nova.virt.libvirt.driver [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Creating config drive at /var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42/disk.config
Jan 31 07:36:03 compute-2 nova_compute[226829]: 2026-01-31 07:36:03.588 226833 DEBUG oslo_concurrency.processutils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmptn1gqpbz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:03 compute-2 nova_compute[226829]: 2026-01-31 07:36:03.618 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:03 compute-2 ceph-mon[77282]: pgmap v1063: 305 pgs: 305 active+clean; 451 MiB data, 528 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.7 MiB/s wr, 232 op/s
Jan 31 07:36:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:04.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:04 compute-2 nova_compute[226829]: 2026-01-31 07:36:04.419 226833 DEBUG oslo_concurrency.processutils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmptn1gqpbz" returned: 0 in 0.831s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:04 compute-2 nova_compute[226829]: 2026-01-31 07:36:04.463 226833 DEBUG nova.storage.rbd_utils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] rbd image c8abb380-680c-41b2-8156-a5e2b5b96f42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:04 compute-2 nova_compute[226829]: 2026-01-31 07:36:04.468 226833 DEBUG oslo_concurrency.processutils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42/disk.config c8abb380-680c-41b2-8156-a5e2b5b96f42_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:36:04 compute-2 nova_compute[226829]: 2026-01-31 07:36:04.761 226833 DEBUG oslo_concurrency.processutils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42/disk.config c8abb380-680c-41b2-8156-a5e2b5b96f42_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:04 compute-2 nova_compute[226829]: 2026-01-31 07:36:04.762 226833 INFO nova.virt.libvirt.driver [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Deleting local config drive /var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42/disk.config because it was imported into RBD.
Jan 31 07:36:04 compute-2 systemd-machined[195142]: New machine qemu-9-instance-00000018.
Jan 31 07:36:04 compute-2 systemd[1]: Started Virtual Machine qemu-9-instance-00000018.
Jan 31 07:36:05 compute-2 nova_compute[226829]: 2026-01-31 07:36:05.293 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769844965.2930195, c8abb380-680c-41b2-8156-a5e2b5b96f42 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:36:05 compute-2 nova_compute[226829]: 2026-01-31 07:36:05.295 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] VM Resumed (Lifecycle Event)
Jan 31 07:36:05 compute-2 nova_compute[226829]: 2026-01-31 07:36:05.297 226833 DEBUG nova.compute.manager [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:36:05 compute-2 nova_compute[226829]: 2026-01-31 07:36:05.297 226833 DEBUG nova.virt.libvirt.driver [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:36:05 compute-2 nova_compute[226829]: 2026-01-31 07:36:05.301 226833 INFO nova.virt.libvirt.driver [-] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Instance spawned successfully.
Jan 31 07:36:05 compute-2 nova_compute[226829]: 2026-01-31 07:36:05.302 226833 DEBUG nova.virt.libvirt.driver [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:36:05 compute-2 nova_compute[226829]: 2026-01-31 07:36:05.327 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:05 compute-2 nova_compute[226829]: 2026-01-31 07:36:05.333 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:36:05 compute-2 nova_compute[226829]: 2026-01-31 07:36:05.337 226833 DEBUG nova.virt.libvirt.driver [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:05 compute-2 nova_compute[226829]: 2026-01-31 07:36:05.337 226833 DEBUG nova.virt.libvirt.driver [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:05 compute-2 nova_compute[226829]: 2026-01-31 07:36:05.338 226833 DEBUG nova.virt.libvirt.driver [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:05 compute-2 nova_compute[226829]: 2026-01-31 07:36:05.338 226833 DEBUG nova.virt.libvirt.driver [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:05 compute-2 nova_compute[226829]: 2026-01-31 07:36:05.339 226833 DEBUG nova.virt.libvirt.driver [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:05 compute-2 nova_compute[226829]: 2026-01-31 07:36:05.339 226833 DEBUG nova.virt.libvirt.driver [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:05 compute-2 nova_compute[226829]: 2026-01-31 07:36:05.375 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:36:05 compute-2 nova_compute[226829]: 2026-01-31 07:36:05.376 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769844965.2946918, c8abb380-680c-41b2-8156-a5e2b5b96f42 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:36:05 compute-2 nova_compute[226829]: 2026-01-31 07:36:05.376 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] VM Started (Lifecycle Event)
Jan 31 07:36:05 compute-2 nova_compute[226829]: 2026-01-31 07:36:05.410 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:05 compute-2 nova_compute[226829]: 2026-01-31 07:36:05.413 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:36:05 compute-2 nova_compute[226829]: 2026-01-31 07:36:05.435 226833 INFO nova.compute.manager [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Took 4.80 seconds to spawn the instance on the hypervisor.
Jan 31 07:36:05 compute-2 nova_compute[226829]: 2026-01-31 07:36:05.435 226833 DEBUG nova.compute.manager [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:05 compute-2 nova_compute[226829]: 2026-01-31 07:36:05.447 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:36:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:05.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:05 compute-2 nova_compute[226829]: 2026-01-31 07:36:05.560 226833 INFO nova.compute.manager [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Took 5.81 seconds to build instance.
Jan 31 07:36:05 compute-2 nova_compute[226829]: 2026-01-31 07:36:05.583 226833 DEBUG oslo_concurrency.lockutils [None req-9d1a1115-4bc8-4699-8ffd-af91c11a6e0a b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lock "c8abb380-680c-41b2-8156-a5e2b5b96f42" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.903s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:05 compute-2 nova_compute[226829]: 2026-01-31 07:36:05.584 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "c8abb380-680c-41b2-8156-a5e2b5b96f42" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 4.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:05 compute-2 nova_compute[226829]: 2026-01-31 07:36:05.584 226833 INFO nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:36:05 compute-2 nova_compute[226829]: 2026-01-31 07:36:05.585 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "c8abb380-680c-41b2-8156-a5e2b5b96f42" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:05 compute-2 ceph-mon[77282]: pgmap v1064: 305 pgs: 305 active+clean; 484 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 5.7 MiB/s wr, 329 op/s
Jan 31 07:36:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1924817307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:36:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:06.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:36:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:06.842 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:06.844 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:06.844 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2434256068' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:07 compute-2 podman[236768]: 2026-01-31 07:36:07.181091756 +0000 UTC m=+0.062795028 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 07:36:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:07.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:07 compute-2 nova_compute[226829]: 2026-01-31 07:36:07.861 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:36:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:08.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:36:08 compute-2 ceph-mon[77282]: pgmap v1065: 305 pgs: 305 active+clean; 501 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.4 MiB/s wr, 293 op/s
Jan 31 07:36:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2234691212' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:08 compute-2 nova_compute[226829]: 2026-01-31 07:36:08.172 226833 INFO nova.compute.manager [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Rebuilding instance
Jan 31 07:36:08 compute-2 sudo[236788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:36:08 compute-2 sudo[236788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:08 compute-2 sudo[236788]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:08 compute-2 sudo[236813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:36:08 compute-2 sudo[236813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:08 compute-2 sudo[236813]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:08 compute-2 nova_compute[226829]: 2026-01-31 07:36:08.583 226833 DEBUG nova.objects.instance [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lazy-loading 'trusted_certs' on Instance uuid c8abb380-680c-41b2-8156-a5e2b5b96f42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:08 compute-2 nova_compute[226829]: 2026-01-31 07:36:08.602 226833 DEBUG nova.compute.manager [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:08 compute-2 nova_compute[226829]: 2026-01-31 07:36:08.656 226833 DEBUG nova.objects.instance [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lazy-loading 'pci_requests' on Instance uuid c8abb380-680c-41b2-8156-a5e2b5b96f42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:08 compute-2 nova_compute[226829]: 2026-01-31 07:36:08.667 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:08 compute-2 nova_compute[226829]: 2026-01-31 07:36:08.671 226833 DEBUG nova.objects.instance [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lazy-loading 'pci_devices' on Instance uuid c8abb380-680c-41b2-8156-a5e2b5b96f42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:08 compute-2 nova_compute[226829]: 2026-01-31 07:36:08.683 226833 DEBUG nova.objects.instance [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lazy-loading 'resources' on Instance uuid c8abb380-680c-41b2-8156-a5e2b5b96f42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:08 compute-2 nova_compute[226829]: 2026-01-31 07:36:08.698 226833 DEBUG nova.objects.instance [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lazy-loading 'migration_context' on Instance uuid c8abb380-680c-41b2-8156-a5e2b5b96f42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:08 compute-2 nova_compute[226829]: 2026-01-31 07:36:08.710 226833 DEBUG nova.objects.instance [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 31 07:36:08 compute-2 nova_compute[226829]: 2026-01-31 07:36:08.714 226833 DEBUG nova.virt.libvirt.driver [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 07:36:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:09.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:36:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:36:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:10.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:36:10 compute-2 ceph-mon[77282]: pgmap v1066: 305 pgs: 305 active+clean; 510 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 5.6 MiB/s wr, 314 op/s
Jan 31 07:36:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:11.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:12.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:12 compute-2 ceph-mon[77282]: pgmap v1067: 305 pgs: 305 active+clean; 548 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 6.6 MiB/s wr, 347 op/s
Jan 31 07:36:12 compute-2 nova_compute[226829]: 2026-01-31 07:36:12.863 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:13.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:13 compute-2 nova_compute[226829]: 2026-01-31 07:36:13.655 226833 DEBUG oslo_concurrency.lockutils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Acquiring lock "71265e55-f168-471c-80bc-80b49177a637" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:13 compute-2 nova_compute[226829]: 2026-01-31 07:36:13.656 226833 DEBUG oslo_concurrency.lockutils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:13 compute-2 nova_compute[226829]: 2026-01-31 07:36:13.669 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:13 compute-2 nova_compute[226829]: 2026-01-31 07:36:13.671 226833 DEBUG nova.compute.manager [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:36:13 compute-2 nova_compute[226829]: 2026-01-31 07:36:13.870 226833 DEBUG oslo_concurrency.lockutils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:13 compute-2 nova_compute[226829]: 2026-01-31 07:36:13.871 226833 DEBUG oslo_concurrency.lockutils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:13 compute-2 nova_compute[226829]: 2026-01-31 07:36:13.876 226833 DEBUG nova.virt.hardware [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:36:13 compute-2 nova_compute[226829]: 2026-01-31 07:36:13.877 226833 INFO nova.compute.claims [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:36:14 compute-2 nova_compute[226829]: 2026-01-31 07:36:14.026 226833 DEBUG oslo_concurrency.processutils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:14.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:14 compute-2 ceph-mon[77282]: pgmap v1068: 305 pgs: 305 active+clean; 579 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 6.2 MiB/s wr, 360 op/s
Jan 31 07:36:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:36:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:36:14 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/125616842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:14 compute-2 nova_compute[226829]: 2026-01-31 07:36:14.595 226833 DEBUG oslo_concurrency.processutils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:14 compute-2 nova_compute[226829]: 2026-01-31 07:36:14.599 226833 DEBUG nova.compute.provider_tree [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:36:14 compute-2 nova_compute[226829]: 2026-01-31 07:36:14.615 226833 DEBUG nova.scheduler.client.report [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:36:14 compute-2 nova_compute[226829]: 2026-01-31 07:36:14.641 226833 DEBUG oslo_concurrency.lockutils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:14 compute-2 nova_compute[226829]: 2026-01-31 07:36:14.642 226833 DEBUG nova.compute.manager [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:36:14 compute-2 nova_compute[226829]: 2026-01-31 07:36:14.705 226833 DEBUG nova.compute.manager [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:36:14 compute-2 nova_compute[226829]: 2026-01-31 07:36:14.706 226833 DEBUG nova.network.neutron [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:36:14 compute-2 nova_compute[226829]: 2026-01-31 07:36:14.736 226833 INFO nova.virt.libvirt.driver [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:36:14 compute-2 nova_compute[226829]: 2026-01-31 07:36:14.819 226833 DEBUG nova.compute.manager [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:36:15 compute-2 nova_compute[226829]: 2026-01-31 07:36:15.013 226833 INFO nova.virt.block_device [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Booting with volume 21ac0cb5-f889-4135-9b17-5debc0b9246e at /dev/vda
Jan 31 07:36:15 compute-2 nova_compute[226829]: 2026-01-31 07:36:15.293 226833 DEBUG os_brick.utils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 07:36:15 compute-2 nova_compute[226829]: 2026-01-31 07:36:15.295 226833 INFO oslo.privsep.daemon [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmp4fj5w8c3/privsep.sock']
Jan 31 07:36:15 compute-2 nova_compute[226829]: 2026-01-31 07:36:15.314 226833 DEBUG nova.policy [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea44c45fe7df4f36b5c722fbfc214f2e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '29d136be5e384689a95acd607131dfd0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:36:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/125616842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:15.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:36:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:16.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.100 226833 DEBUG nova.network.neutron [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Successfully created port: 84bfd4cb-8188-4fde-bca2-fdb0a732119f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:36:16 compute-2 ceph-mon[77282]: pgmap v1069: 305 pgs: 305 active+clean; 640 MiB data, 633 MiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 8.8 MiB/s wr, 425 op/s
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.573 226833 INFO oslo.privsep.daemon [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Spawned new privsep daemon via rootwrap
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.432 236868 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.441 236868 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.445 236868 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.446 236868 INFO oslo.privsep.daemon [-] privsep daemon running as pid 236868
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.578 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[497f05ec-433c-438e-aa61-ad480326d9ed]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.755 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.793 236868 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.793 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[150ed4c3-6ddc-4e43-a26e-99c15e296b9a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.795 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.801 236868 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.801 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[bd251e4f-d14f-4294-937a-8a343485af35]: (4, ('InitiatorName=iqn.1994-05.com.redhat:70a4e945afb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.804 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.813 236868 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.814 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[01c15328-ea13-4e43-9610-b61e5007b327]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.817 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[800f33e7-f1c8-4178-868e-1b0c80f53c7a]: (4, 'd14f084b-ec77-4fba-801f-103494d34b3a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.818 226833 DEBUG oslo_concurrency.processutils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.846 226833 DEBUG oslo_concurrency.processutils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.850 226833 DEBUG os_brick.initiator.connectors.lightos [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.851 226833 DEBUG os_brick.initiator.connectors.lightos [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.852 226833 DEBUG os_brick.initiator.connectors.lightos [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.852 226833 DEBUG os_brick.utils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] <== get_connector_properties: return (1558ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:70a4e945afb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'd14f084b-ec77-4fba-801f-103494d34b3a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.853 226833 DEBUG nova.virt.block_device [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Updating existing volume attachment record: 07354d43-185b-4fd2-8dc3-63e0a2cce98a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.874 226833 DEBUG nova.network.neutron [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Successfully updated port: 84bfd4cb-8188-4fde-bca2-fdb0a732119f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.897 226833 DEBUG oslo_concurrency.lockutils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Acquiring lock "refresh_cache-71265e55-f168-471c-80bc-80b49177a637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.898 226833 DEBUG oslo_concurrency.lockutils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Acquired lock "refresh_cache-71265e55-f168-471c-80bc-80b49177a637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.898 226833 DEBUG nova.network.neutron [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.978 226833 DEBUG nova.compute.manager [req-e0f3805c-2736-458e-9849-ee46ff348143 req-3384988a-e57e-4033-994b-f6f9fdb1d9f9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-changed-84bfd4cb-8188-4fde-bca2-fdb0a732119f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.979 226833 DEBUG nova.compute.manager [req-e0f3805c-2736-458e-9849-ee46ff348143 req-3384988a-e57e-4033-994b-f6f9fdb1d9f9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Refreshing instance network info cache due to event network-changed-84bfd4cb-8188-4fde-bca2-fdb0a732119f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:36:16 compute-2 nova_compute[226829]: 2026-01-31 07:36:16.980 226833 DEBUG oslo_concurrency.lockutils [req-e0f3805c-2736-458e-9849-ee46ff348143 req-3384988a-e57e-4033-994b-f6f9fdb1d9f9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-71265e55-f168-471c-80bc-80b49177a637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.044 226833 DEBUG nova.network.neutron [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:36:17 compute-2 ceph-mon[77282]: pgmap v1070: 305 pgs: 305 active+clean; 659 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 8.8 MiB/s wr, 327 op/s
Jan 31 07:36:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:17.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:36:17 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1923683185' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.865 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.888 226833 DEBUG nova.network.neutron [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Updating instance_info_cache with network_info: [{"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.919 226833 DEBUG oslo_concurrency.lockutils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Releasing lock "refresh_cache-71265e55-f168-471c-80bc-80b49177a637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.919 226833 DEBUG nova.compute.manager [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Instance network_info: |[{"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.920 226833 DEBUG oslo_concurrency.lockutils [req-e0f3805c-2736-458e-9849-ee46ff348143 req-3384988a-e57e-4033-994b-f6f9fdb1d9f9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-71265e55-f168-471c-80bc-80b49177a637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.920 226833 DEBUG nova.network.neutron [req-e0f3805c-2736-458e-9849-ee46ff348143 req-3384988a-e57e-4033-994b-f6f9fdb1d9f9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Refreshing network info cache for port 84bfd4cb-8188-4fde-bca2-fdb0a732119f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.931 226833 DEBUG nova.compute.manager [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.933 226833 DEBUG nova.virt.libvirt.driver [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.933 226833 INFO nova.virt.libvirt.driver [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Creating image(s)
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.934 226833 DEBUG nova.virt.libvirt.driver [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.934 226833 DEBUG nova.virt.libvirt.driver [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Ensure instance console log exists: /var/lib/nova/instances/71265e55-f168-471c-80bc-80b49177a637/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.934 226833 DEBUG oslo_concurrency.lockutils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.935 226833 DEBUG oslo_concurrency.lockutils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.935 226833 DEBUG oslo_concurrency.lockutils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.938 226833 DEBUG nova.virt.libvirt.driver [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Start _get_guest_xml network_info=[{"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'delete_on_termination': True, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-21ac0cb5-f889-4135-9b17-5debc0b9246e', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '21ac0cb5-f889-4135-9b17-5debc0b9246e', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '71265e55-f168-471c-80bc-80b49177a637', 'attached_at': '', 'detached_at': '', 'volume_id': '21ac0cb5-f889-4135-9b17-5debc0b9246e', 'serial': '21ac0cb5-f889-4135-9b17-5debc0b9246e'}, 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'attachment_id': '07354d43-185b-4fd2-8dc3-63e0a2cce98a', 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.947 226833 WARNING nova.virt.libvirt.driver [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.953 226833 DEBUG nova.virt.libvirt.host [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.953 226833 DEBUG nova.virt.libvirt.host [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.957 226833 DEBUG nova.virt.libvirt.host [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.957 226833 DEBUG nova.virt.libvirt.host [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.959 226833 DEBUG nova.virt.libvirt.driver [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.959 226833 DEBUG nova.virt.hardware [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.959 226833 DEBUG nova.virt.hardware [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.960 226833 DEBUG nova.virt.hardware [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.960 226833 DEBUG nova.virt.hardware [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.960 226833 DEBUG nova.virt.hardware [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.961 226833 DEBUG nova.virt.hardware [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.961 226833 DEBUG nova.virt.hardware [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.961 226833 DEBUG nova.virt.hardware [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.962 226833 DEBUG nova.virt.hardware [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.962 226833 DEBUG nova.virt.hardware [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.962 226833 DEBUG nova.virt.hardware [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.992 226833 DEBUG nova.storage.rbd_utils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] rbd image 71265e55-f168-471c-80bc-80b49177a637_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:17 compute-2 nova_compute[226829]: 2026-01-31 07:36:17.995 226833 DEBUG oslo_concurrency.processutils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:18.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:36:18 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/54137756' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.394 226833 DEBUG oslo_concurrency.processutils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.395 226833 DEBUG oslo_concurrency.lockutils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.396 226833 DEBUG oslo_concurrency.lockutils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.398 226833 DEBUG oslo_concurrency.lockutils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.467 226833 DEBUG nova.virt.libvirt.vif [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:36:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-596053430',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-596053430',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='29d136be5e384689a95acd607131dfd0',ramdisk_id='',reservation_id='r-w7qr5y3u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1421195096',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1421195096-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:36:14Z,user_data=None,user_id='ea44c45fe7df4f36b5c722fbfc214f2e',uuid=71265e55-f168-471c-80bc-80b49177a637,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.467 226833 DEBUG nova.network.os_vif_util [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Converting VIF {"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.469 226833 DEBUG nova.network.os_vif_util [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:c1:cc,bridge_name='br-int',has_traffic_filtering=True,id=84bfd4cb-8188-4fde-bca2-fdb0a732119f,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84bfd4cb-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.471 226833 DEBUG nova.objects.instance [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 71265e55-f168-471c-80bc-80b49177a637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1923683185' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1305993749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/54137756' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.523 226833 DEBUG nova.virt.libvirt.driver [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:36:18 compute-2 nova_compute[226829]:   <uuid>71265e55-f168-471c-80bc-80b49177a637</uuid>
Jan 31 07:36:18 compute-2 nova_compute[226829]:   <name>instance-0000001a</name>
Jan 31 07:36:18 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:36:18 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:36:18 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <nova:name>tempest-LiveAutoBlockMigrationV225Test-server-596053430</nova:name>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:36:17</nova:creationTime>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:36:18 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:36:18 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:36:18 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:36:18 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:36:18 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:36:18 compute-2 nova_compute[226829]:         <nova:user uuid="ea44c45fe7df4f36b5c722fbfc214f2e">tempest-LiveAutoBlockMigrationV225Test-1421195096-project-member</nova:user>
Jan 31 07:36:18 compute-2 nova_compute[226829]:         <nova:project uuid="29d136be5e384689a95acd607131dfd0">tempest-LiveAutoBlockMigrationV225Test-1421195096</nova:project>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 07:36:18 compute-2 nova_compute[226829]:         <nova:port uuid="84bfd4cb-8188-4fde-bca2-fdb0a732119f">
Jan 31 07:36:18 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:36:18 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:36:18 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <system>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <entry name="serial">71265e55-f168-471c-80bc-80b49177a637</entry>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <entry name="uuid">71265e55-f168-471c-80bc-80b49177a637</entry>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     </system>
Jan 31 07:36:18 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:36:18 compute-2 nova_compute[226829]:   <os>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:   </os>
Jan 31 07:36:18 compute-2 nova_compute[226829]:   <features>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:   </features>
Jan 31 07:36:18 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:36:18 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:36:18 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/71265e55-f168-471c-80bc-80b49177a637_disk.config">
Jan 31 07:36:18 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       </source>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:36:18 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <source protocol="rbd" name="volumes/volume-21ac0cb5-f889-4135-9b17-5debc0b9246e">
Jan 31 07:36:18 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       </source>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:36:18 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <serial>21ac0cb5-f889-4135-9b17-5debc0b9246e</serial>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:eb:c1:cc"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <target dev="tap84bfd4cb-81"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/71265e55-f168-471c-80bc-80b49177a637/console.log" append="off"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <video>
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     </video>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:36:18 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:36:18 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:36:18 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:36:18 compute-2 nova_compute[226829]: </domain>
Jan 31 07:36:18 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.524 226833 DEBUG nova.compute.manager [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Preparing to wait for external event network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.525 226833 DEBUG oslo_concurrency.lockutils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Acquiring lock "71265e55-f168-471c-80bc-80b49177a637-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.526 226833 DEBUG oslo_concurrency.lockutils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.526 226833 DEBUG oslo_concurrency.lockutils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.528 226833 DEBUG nova.virt.libvirt.vif [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:36:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-596053430',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-596053430',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='29d136be5e384689a95acd607131dfd0',ramdisk_id='',reservation_id='r-w7qr5y3u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1421195096',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1421195096-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:36:14Z,user_data=None,user_id='ea44c45fe7df4f36b5c722fbfc214f2e',uuid=71265e55-f168-471c-80bc-80b49177a637,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.528 226833 DEBUG nova.network.os_vif_util [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Converting VIF {"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.529 226833 DEBUG nova.network.os_vif_util [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:c1:cc,bridge_name='br-int',has_traffic_filtering=True,id=84bfd4cb-8188-4fde-bca2-fdb0a732119f,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84bfd4cb-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.550 226833 DEBUG os_vif [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:c1:cc,bridge_name='br-int',has_traffic_filtering=True,id=84bfd4cb-8188-4fde-bca2-fdb0a732119f,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84bfd4cb-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.555 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.556 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.558 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.570 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.571 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84bfd4cb-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.572 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap84bfd4cb-81, col_values=(('external_ids', {'iface-id': '84bfd4cb-8188-4fde-bca2-fdb0a732119f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:c1:cc', 'vm-uuid': '71265e55-f168-471c-80bc-80b49177a637'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.574 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.577 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:36:18 compute-2 NetworkManager[48999]: <info>  [1769844978.5777] manager: (tap84bfd4cb-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.581 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.582 226833 INFO os_vif [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:c1:cc,bridge_name='br-int',has_traffic_filtering=True,id=84bfd4cb-8188-4fde-bca2-fdb0a732119f,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84bfd4cb-81')
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.639 226833 DEBUG nova.virt.libvirt.driver [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.639 226833 DEBUG nova.virt.libvirt.driver [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.640 226833 DEBUG nova.virt.libvirt.driver [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] No VIF found with MAC fa:16:3e:eb:c1:cc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.641 226833 INFO nova.virt.libvirt.driver [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Using config drive
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.665 226833 DEBUG nova.storage.rbd_utils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] rbd image 71265e55-f168-471c-80bc-80b49177a637_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.670 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:18 compute-2 nova_compute[226829]: 2026-01-31 07:36:18.760 226833 DEBUG nova.virt.libvirt.driver [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 31 07:36:19 compute-2 nova_compute[226829]: 2026-01-31 07:36:19.004 226833 INFO nova.virt.libvirt.driver [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Creating config drive at /var/lib/nova/instances/71265e55-f168-471c-80bc-80b49177a637/disk.config
Jan 31 07:36:19 compute-2 nova_compute[226829]: 2026-01-31 07:36:19.007 226833 DEBUG oslo_concurrency.processutils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/71265e55-f168-471c-80bc-80b49177a637/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpvu52iqzq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:19 compute-2 nova_compute[226829]: 2026-01-31 07:36:19.124 226833 DEBUG oslo_concurrency.processutils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/71265e55-f168-471c-80bc-80b49177a637/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpvu52iqzq" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:19 compute-2 nova_compute[226829]: 2026-01-31 07:36:19.158 226833 DEBUG nova.storage.rbd_utils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] rbd image 71265e55-f168-471c-80bc-80b49177a637_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:19 compute-2 nova_compute[226829]: 2026-01-31 07:36:19.163 226833 DEBUG oslo_concurrency.processutils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/71265e55-f168-471c-80bc-80b49177a637/disk.config 71265e55-f168-471c-80bc-80b49177a637_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:19 compute-2 nova_compute[226829]: 2026-01-31 07:36:19.347 226833 DEBUG oslo_concurrency.processutils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/71265e55-f168-471c-80bc-80b49177a637/disk.config 71265e55-f168-471c-80bc-80b49177a637_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:19 compute-2 nova_compute[226829]: 2026-01-31 07:36:19.349 226833 INFO nova.virt.libvirt.driver [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Deleting local config drive /var/lib/nova/instances/71265e55-f168-471c-80bc-80b49177a637/disk.config because it was imported into RBD.
Jan 31 07:36:19 compute-2 nova_compute[226829]: 2026-01-31 07:36:19.357 226833 DEBUG nova.network.neutron [req-e0f3805c-2736-458e-9849-ee46ff348143 req-3384988a-e57e-4033-994b-f6f9fdb1d9f9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Updated VIF entry in instance network info cache for port 84bfd4cb-8188-4fde-bca2-fdb0a732119f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:36:19 compute-2 nova_compute[226829]: 2026-01-31 07:36:19.358 226833 DEBUG nova.network.neutron [req-e0f3805c-2736-458e-9849-ee46ff348143 req-3384988a-e57e-4033-994b-f6f9fdb1d9f9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Updating instance_info_cache with network_info: [{"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:36:19 compute-2 nova_compute[226829]: 2026-01-31 07:36:19.378 226833 DEBUG oslo_concurrency.lockutils [req-e0f3805c-2736-458e-9849-ee46ff348143 req-3384988a-e57e-4033-994b-f6f9fdb1d9f9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-71265e55-f168-471c-80bc-80b49177a637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:36:19 compute-2 kernel: tap84bfd4cb-81: entered promiscuous mode
Jan 31 07:36:19 compute-2 ovn_controller[133834]: 2026-01-31T07:36:19Z|00057|binding|INFO|Claiming lport 84bfd4cb-8188-4fde-bca2-fdb0a732119f for this chassis.
Jan 31 07:36:19 compute-2 ovn_controller[133834]: 2026-01-31T07:36:19Z|00058|binding|INFO|84bfd4cb-8188-4fde-bca2-fdb0a732119f: Claiming fa:16:3e:eb:c1:cc 10.100.0.8
Jan 31 07:36:19 compute-2 NetworkManager[48999]: <info>  [1769844979.3984] manager: (tap84bfd4cb-81): new Tun device (/org/freedesktop/NetworkManager/Devices/41)
Jan 31 07:36:19 compute-2 nova_compute[226829]: 2026-01-31 07:36:19.401 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:19 compute-2 nova_compute[226829]: 2026-01-31 07:36:19.404 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:19.410 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:c1:cc 10.100.0.8'], port_security=['fa:16:3e:eb:c1:cc 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '71265e55-f168-471c-80bc-80b49177a637', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29d136be5e384689a95acd607131dfd0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd466f767-252b-4335-8dfa-0f3f94d2209b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c6f42f93-f94d-4170-883d-d45cddf5fdad, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=84bfd4cb-8188-4fde-bca2-fdb0a732119f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:19.413 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 84bfd4cb-8188-4fde-bca2-fdb0a732119f in datapath e4d1862b-2abc-4d60-bc48-19a5318038f4 bound to our chassis
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:19.416 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e4d1862b-2abc-4d60-bc48-19a5318038f4
Jan 31 07:36:19 compute-2 systemd-machined[195142]: New machine qemu-10-instance-0000001a.
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:19.426 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[79fb0fe6-ecc4-4fff-8340-bdb48536c760]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:19.428 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape4d1862b-21 in ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:36:19 compute-2 systemd[1]: Started Virtual Machine qemu-10-instance-0000001a.
Jan 31 07:36:19 compute-2 ovn_controller[133834]: 2026-01-31T07:36:19Z|00059|binding|INFO|Setting lport 84bfd4cb-8188-4fde-bca2-fdb0a732119f ovn-installed in OVS
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:19.431 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape4d1862b-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:19.431 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[fba3655a-5e4b-4d24-bdaf-850da8630457]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:19 compute-2 ovn_controller[133834]: 2026-01-31T07:36:19Z|00060|binding|INFO|Setting lport 84bfd4cb-8188-4fde-bca2-fdb0a732119f up in Southbound
Jan 31 07:36:19 compute-2 nova_compute[226829]: 2026-01-31 07:36:19.433 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:19.433 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cc59dcec-918f-4d0e-a059-be3000d0fa88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:19 compute-2 systemd-udevd[236993]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:19.448 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[d471baf5-d8ae-4a43-91b5-c005eb3b6c10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:19 compute-2 NetworkManager[48999]: <info>  [1769844979.4612] device (tap84bfd4cb-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:36:19 compute-2 NetworkManager[48999]: <info>  [1769844979.4626] device (tap84bfd4cb-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:19.470 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6640de67-d5dd-4c00-9419-3731de603e7c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:19.498 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[8a84a0ad-d3c3-4e48-b9d8-5027b2b0009e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e150 e150: 3 total, 3 up, 3 in
Jan 31 07:36:19 compute-2 NetworkManager[48999]: <info>  [1769844979.5054] manager: (tape4d1862b-20): new Veth device (/org/freedesktop/NetworkManager/Devices/42)
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:19.504 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5a253a0f-72e9-43f2-8070-a5531a7a023f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:19 compute-2 ceph-mon[77282]: pgmap v1071: 305 pgs: 305 active+clean; 657 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 8.1 MiB/s wr, 321 op/s
Jan 31 07:36:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:19.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:19.532 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[0f3635c7-e212-44c4-b214-c00005d41b1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:19.535 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[dce509f3-60b6-413f-aa6a-907c55b49e0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:19 compute-2 NetworkManager[48999]: <info>  [1769844979.5573] device (tape4d1862b-20): carrier: link connected
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:19.561 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[5fbf442e-4faa-4fab-bd4e-739b302aa155]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:19.574 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9f30382e-7924-4698-bb16-02ddcfe45313]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4d1862b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:38:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 523203, 'reachable_time': 26156, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237025, 'error': None, 'target': 'ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:19.585 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[55de7ea6-9aa5-42db-91df-5cd756718301]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe73:3856'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 523203, 'tstamp': 523203}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237026, 'error': None, 'target': 'ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:19.599 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[02d848df-a68c-46a3-ab77-86b509177b24]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4d1862b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:38:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 523203, 'reachable_time': 26156, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 237027, 'error': None, 'target': 'ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:19.628 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f503defa-35fc-437a-8451-70204656ff1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:19.716 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[43af7c4d-ea16-4d05-be72-f1681ed20d9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:19.725 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4d1862b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:19.726 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:19.727 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4d1862b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:36:19 compute-2 nova_compute[226829]: 2026-01-31 07:36:19.731 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:19 compute-2 NetworkManager[48999]: <info>  [1769844979.7316] manager: (tape4d1862b-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Jan 31 07:36:19 compute-2 kernel: tape4d1862b-20: entered promiscuous mode
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:19.736 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape4d1862b-20, col_values=(('external_ids', {'iface-id': '632f26c5-40a9-4337-84da-ea4b4bbdf89c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:36:19 compute-2 ovn_controller[133834]: 2026-01-31T07:36:19Z|00061|binding|INFO|Releasing lport 632f26c5-40a9-4337-84da-ea4b4bbdf89c from this chassis (sb_readonly=0)
Jan 31 07:36:19 compute-2 nova_compute[226829]: 2026-01-31 07:36:19.737 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:19 compute-2 nova_compute[226829]: 2026-01-31 07:36:19.743 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:19.744 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e4d1862b-2abc-4d60-bc48-19a5318038f4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e4d1862b-2abc-4d60-bc48-19a5318038f4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:19.745 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[68ef4d6b-d247-4972-932f-51c7c89ef22d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:19.746 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: global
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-e4d1862b-2abc-4d60-bc48-19a5318038f4
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/e4d1862b-2abc-4d60-bc48-19a5318038f4.pid.haproxy
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID e4d1862b-2abc-4d60-bc48-19a5318038f4
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:36:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:19.747 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'env', 'PROCESS_TAG=haproxy-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e4d1862b-2abc-4d60-bc48-19a5318038f4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:36:19 compute-2 nova_compute[226829]: 2026-01-31 07:36:19.838 226833 DEBUG nova.compute.manager [req-cee60c8e-4222-4f6a-b7fa-4b0fcbae34e6 req-884652b7-db66-434d-b52a-6d72766ac676 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:36:19 compute-2 nova_compute[226829]: 2026-01-31 07:36:19.838 226833 DEBUG oslo_concurrency.lockutils [req-cee60c8e-4222-4f6a-b7fa-4b0fcbae34e6 req-884652b7-db66-434d-b52a-6d72766ac676 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "71265e55-f168-471c-80bc-80b49177a637-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:19 compute-2 nova_compute[226829]: 2026-01-31 07:36:19.839 226833 DEBUG oslo_concurrency.lockutils [req-cee60c8e-4222-4f6a-b7fa-4b0fcbae34e6 req-884652b7-db66-434d-b52a-6d72766ac676 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:19 compute-2 nova_compute[226829]: 2026-01-31 07:36:19.839 226833 DEBUG oslo_concurrency.lockutils [req-cee60c8e-4222-4f6a-b7fa-4b0fcbae34e6 req-884652b7-db66-434d-b52a-6d72766ac676 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:19 compute-2 nova_compute[226829]: 2026-01-31 07:36:19.839 226833 DEBUG nova.compute.manager [req-cee60c8e-4222-4f6a-b7fa-4b0fcbae34e6 req-884652b7-db66-434d-b52a-6d72766ac676 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Processing event network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:36:20 compute-2 nova_compute[226829]: 2026-01-31 07:36:20.045 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:20.045 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:36:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:36:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:20.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:36:20 compute-2 nova_compute[226829]: 2026-01-31 07:36:20.166 226833 DEBUG nova.compute.manager [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:36:20 compute-2 nova_compute[226829]: 2026-01-31 07:36:20.167 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769844980.1657364, 71265e55-f168-471c-80bc-80b49177a637 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:36:20 compute-2 nova_compute[226829]: 2026-01-31 07:36:20.168 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] VM Started (Lifecycle Event)
Jan 31 07:36:20 compute-2 nova_compute[226829]: 2026-01-31 07:36:20.171 226833 DEBUG nova.virt.libvirt.driver [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:36:20 compute-2 nova_compute[226829]: 2026-01-31 07:36:20.178 226833 INFO nova.virt.libvirt.driver [-] [instance: 71265e55-f168-471c-80bc-80b49177a637] Instance spawned successfully.
Jan 31 07:36:20 compute-2 nova_compute[226829]: 2026-01-31 07:36:20.178 226833 DEBUG nova.virt.libvirt.driver [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:36:20 compute-2 podman[237099]: 2026-01-31 07:36:20.1833831 +0000 UTC m=+0.081163988 container create 7174892c07aa21bd41712bf29e7b3b762ad2bc7c28ff8ad89193759b202e933c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 07:36:20 compute-2 nova_compute[226829]: 2026-01-31 07:36:20.189 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:20 compute-2 nova_compute[226829]: 2026-01-31 07:36:20.200 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:36:20 compute-2 nova_compute[226829]: 2026-01-31 07:36:20.203 226833 DEBUG nova.virt.libvirt.driver [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:20 compute-2 nova_compute[226829]: 2026-01-31 07:36:20.204 226833 DEBUG nova.virt.libvirt.driver [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:20 compute-2 nova_compute[226829]: 2026-01-31 07:36:20.204 226833 DEBUG nova.virt.libvirt.driver [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:20 compute-2 nova_compute[226829]: 2026-01-31 07:36:20.209 226833 DEBUG nova.virt.libvirt.driver [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:20 compute-2 nova_compute[226829]: 2026-01-31 07:36:20.209 226833 DEBUG nova.virt.libvirt.driver [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:20 compute-2 nova_compute[226829]: 2026-01-31 07:36:20.209 226833 DEBUG nova.virt.libvirt.driver [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:20 compute-2 nova_compute[226829]: 2026-01-31 07:36:20.217 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:36:20 compute-2 nova_compute[226829]: 2026-01-31 07:36:20.217 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769844980.1670759, 71265e55-f168-471c-80bc-80b49177a637 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:36:20 compute-2 systemd[1]: Started libpod-conmon-7174892c07aa21bd41712bf29e7b3b762ad2bc7c28ff8ad89193759b202e933c.scope.
Jan 31 07:36:20 compute-2 nova_compute[226829]: 2026-01-31 07:36:20.218 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] VM Paused (Lifecycle Event)
Jan 31 07:36:20 compute-2 podman[237099]: 2026-01-31 07:36:20.130814234 +0000 UTC m=+0.028595182 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:36:20 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:36:20 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8547daaccd6fb35e223a2715a66345ced6f91c1167d3667d8ee1287d1961b3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:36:20 compute-2 nova_compute[226829]: 2026-01-31 07:36:20.240 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:20 compute-2 nova_compute[226829]: 2026-01-31 07:36:20.243 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769844980.1704516, 71265e55-f168-471c-80bc-80b49177a637 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:36:20 compute-2 nova_compute[226829]: 2026-01-31 07:36:20.243 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] VM Resumed (Lifecycle Event)
Jan 31 07:36:20 compute-2 podman[237099]: 2026-01-31 07:36:20.246813063 +0000 UTC m=+0.144593941 container init 7174892c07aa21bd41712bf29e7b3b762ad2bc7c28ff8ad89193759b202e933c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 07:36:20 compute-2 podman[237099]: 2026-01-31 07:36:20.252982102 +0000 UTC m=+0.150762960 container start 7174892c07aa21bd41712bf29e7b3b762ad2bc7c28ff8ad89193759b202e933c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 07:36:20 compute-2 nova_compute[226829]: 2026-01-31 07:36:20.270 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:20 compute-2 nova_compute[226829]: 2026-01-31 07:36:20.276 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:36:20 compute-2 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[237117]: [NOTICE]   (237121) : New worker (237123) forked
Jan 31 07:36:20 compute-2 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[237117]: [NOTICE]   (237121) : Loading success.
Jan 31 07:36:20 compute-2 nova_compute[226829]: 2026-01-31 07:36:20.288 226833 INFO nova.compute.manager [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Took 2.36 seconds to spawn the instance on the hypervisor.
Jan 31 07:36:20 compute-2 nova_compute[226829]: 2026-01-31 07:36:20.289 226833 DEBUG nova.compute.manager [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:20 compute-2 nova_compute[226829]: 2026-01-31 07:36:20.300 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:36:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:20.303 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:36:20 compute-2 nova_compute[226829]: 2026-01-31 07:36:20.350 226833 INFO nova.compute.manager [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Took 6.54 seconds to build instance.
Jan 31 07:36:20 compute-2 nova_compute[226829]: 2026-01-31 07:36:20.367 226833 DEBUG oslo_concurrency.lockutils [None req-211196e5-43cf-4b7e-9496-06a53583c56f ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:20 compute-2 ceph-mon[77282]: osdmap e150: 3 total, 3 up, 3 in
Jan 31 07:36:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1679574900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:21 compute-2 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000018.scope: Deactivated successfully.
Jan 31 07:36:21 compute-2 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000018.scope: Consumed 13.561s CPU time.
Jan 31 07:36:21 compute-2 systemd-machined[195142]: Machine qemu-9-instance-00000018 terminated.
Jan 31 07:36:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:21.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:21 compute-2 ceph-mon[77282]: pgmap v1073: 305 pgs: 305 active+clean; 651 MiB data, 666 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 9.1 MiB/s wr, 346 op/s
Jan 31 07:36:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4154236657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:21 compute-2 nova_compute[226829]: 2026-01-31 07:36:21.775 226833 INFO nova.virt.libvirt.driver [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Instance shutdown successfully after 13 seconds.
Jan 31 07:36:21 compute-2 nova_compute[226829]: 2026-01-31 07:36:21.781 226833 INFO nova.virt.libvirt.driver [-] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Instance destroyed successfully.
Jan 31 07:36:21 compute-2 nova_compute[226829]: 2026-01-31 07:36:21.787 226833 INFO nova.virt.libvirt.driver [-] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Instance destroyed successfully.
Jan 31 07:36:21 compute-2 nova_compute[226829]: 2026-01-31 07:36:21.939 226833 DEBUG nova.compute.manager [req-2b08f416-6625-42fa-8b91-4e49207261ba req-15592473-5953-47e9-92c0-0a32ba903541 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:36:21 compute-2 nova_compute[226829]: 2026-01-31 07:36:21.941 226833 DEBUG oslo_concurrency.lockutils [req-2b08f416-6625-42fa-8b91-4e49207261ba req-15592473-5953-47e9-92c0-0a32ba903541 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "71265e55-f168-471c-80bc-80b49177a637-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:21 compute-2 nova_compute[226829]: 2026-01-31 07:36:21.942 226833 DEBUG oslo_concurrency.lockutils [req-2b08f416-6625-42fa-8b91-4e49207261ba req-15592473-5953-47e9-92c0-0a32ba903541 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:21 compute-2 nova_compute[226829]: 2026-01-31 07:36:21.943 226833 DEBUG oslo_concurrency.lockutils [req-2b08f416-6625-42fa-8b91-4e49207261ba req-15592473-5953-47e9-92c0-0a32ba903541 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:21 compute-2 nova_compute[226829]: 2026-01-31 07:36:21.944 226833 DEBUG nova.compute.manager [req-2b08f416-6625-42fa-8b91-4e49207261ba req-15592473-5953-47e9-92c0-0a32ba903541 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] No waiting events found dispatching network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:36:21 compute-2 nova_compute[226829]: 2026-01-31 07:36:21.944 226833 WARNING nova.compute.manager [req-2b08f416-6625-42fa-8b91-4e49207261ba req-15592473-5953-47e9-92c0-0a32ba903541 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received unexpected event network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f for instance with vm_state active and task_state None.
Jan 31 07:36:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.003000080s ======
Jan 31 07:36:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:22.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Jan 31 07:36:22 compute-2 nova_compute[226829]: 2026-01-31 07:36:22.286 226833 INFO nova.virt.libvirt.driver [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Deleting instance files /var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42_del
Jan 31 07:36:22 compute-2 nova_compute[226829]: 2026-01-31 07:36:22.286 226833 INFO nova.virt.libvirt.driver [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Deletion of /var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42_del complete
Jan 31 07:36:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3505978811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:22 compute-2 nova_compute[226829]: 2026-01-31 07:36:22.655 226833 DEBUG nova.virt.libvirt.driver [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:36:22 compute-2 nova_compute[226829]: 2026-01-31 07:36:22.657 226833 INFO nova.virt.libvirt.driver [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Creating image(s)
Jan 31 07:36:22 compute-2 nova_compute[226829]: 2026-01-31 07:36:22.684 226833 DEBUG nova.storage.rbd_utils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] rbd image c8abb380-680c-41b2-8156-a5e2b5b96f42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:22 compute-2 nova_compute[226829]: 2026-01-31 07:36:22.712 226833 DEBUG nova.storage.rbd_utils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] rbd image c8abb380-680c-41b2-8156-a5e2b5b96f42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:22 compute-2 nova_compute[226829]: 2026-01-31 07:36:22.739 226833 DEBUG nova.storage.rbd_utils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] rbd image c8abb380-680c-41b2-8156-a5e2b5b96f42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:22 compute-2 nova_compute[226829]: 2026-01-31 07:36:22.743 226833 DEBUG oslo_concurrency.lockutils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Acquiring lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:22 compute-2 nova_compute[226829]: 2026-01-31 07:36:22.744 226833 DEBUG oslo_concurrency.lockutils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:23 compute-2 nova_compute[226829]: 2026-01-31 07:36:23.423 226833 DEBUG nova.virt.libvirt.imagebackend [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Image locations are: [{'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/40cf2ff3-f7ff-4843-b4ab-b7dcc843006f/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/40cf2ff3-f7ff-4843-b4ab-b7dcc843006f/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 31 07:36:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:23.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:23 compute-2 nova_compute[226829]: 2026-01-31 07:36:23.575 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:23 compute-2 ceph-mon[77282]: pgmap v1074: 305 pgs: 305 active+clean; 624 MiB data, 685 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 8.3 MiB/s wr, 323 op/s
Jan 31 07:36:23 compute-2 nova_compute[226829]: 2026-01-31 07:36:23.672 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:36:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:24.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:36:24 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:24.305 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:36:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:36:24 compute-2 nova_compute[226829]: 2026-01-31 07:36:24.744 226833 DEBUG oslo_concurrency.processutils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:24 compute-2 nova_compute[226829]: 2026-01-31 07:36:24.821 226833 DEBUG oslo_concurrency.processutils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c.part --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:24 compute-2 nova_compute[226829]: 2026-01-31 07:36:24.822 226833 DEBUG nova.virt.images [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] 40cf2ff3-f7ff-4843-b4ab-b7dcc843006f was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 31 07:36:24 compute-2 nova_compute[226829]: 2026-01-31 07:36:24.824 226833 DEBUG nova.privsep.utils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 31 07:36:24 compute-2 nova_compute[226829]: 2026-01-31 07:36:24.825 226833 DEBUG oslo_concurrency.processutils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c.part /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.058 226833 DEBUG oslo_concurrency.processutils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c.part /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c.converted" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.064 226833 DEBUG oslo_concurrency.processutils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.142 226833 DEBUG oslo_concurrency.processutils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c.converted --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.144 226833 DEBUG oslo_concurrency.lockutils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.399s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.173 226833 DEBUG nova.storage.rbd_utils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] rbd image c8abb380-680c-41b2-8156-a5e2b5b96f42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.177 226833 DEBUG oslo_concurrency.processutils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c c8abb380-680c-41b2-8156-a5e2b5b96f42_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:25.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.533 226833 DEBUG oslo_concurrency.processutils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c c8abb380-680c-41b2-8156-a5e2b5b96f42_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.624 226833 DEBUG nova.storage.rbd_utils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] resizing rbd image c8abb380-680c-41b2-8156-a5e2b5b96f42_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:36:25 compute-2 ceph-mon[77282]: pgmap v1075: 305 pgs: 305 active+clean; 516 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.0 MiB/s wr, 358 op/s
Jan 31 07:36:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1987441858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.739 226833 DEBUG nova.virt.libvirt.driver [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.740 226833 DEBUG nova.virt.libvirt.driver [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Ensure instance console log exists: /var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.741 226833 DEBUG oslo_concurrency.lockutils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.741 226833 DEBUG oslo_concurrency.lockutils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.741 226833 DEBUG oslo_concurrency.lockutils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.743 226833 DEBUG nova.virt.libvirt.driver [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:34Z,direct_url=<?>,disk_format='qcow2',id=40cf2ff3-f7ff-4843-b4ab-b7dcc843006f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.746 226833 WARNING nova.virt.libvirt.driver [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.751 226833 DEBUG nova.virt.libvirt.host [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.752 226833 DEBUG nova.virt.libvirt.host [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.756 226833 DEBUG nova.virt.libvirt.host [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.756 226833 DEBUG nova.virt.libvirt.host [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.758 226833 DEBUG nova.virt.libvirt.driver [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.758 226833 DEBUG nova.virt.hardware [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:34Z,direct_url=<?>,disk_format='qcow2',id=40cf2ff3-f7ff-4843-b4ab-b7dcc843006f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.758 226833 DEBUG nova.virt.hardware [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.759 226833 DEBUG nova.virt.hardware [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.759 226833 DEBUG nova.virt.hardware [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.759 226833 DEBUG nova.virt.hardware [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.759 226833 DEBUG nova.virt.hardware [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.760 226833 DEBUG nova.virt.hardware [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.760 226833 DEBUG nova.virt.hardware [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.760 226833 DEBUG nova.virt.hardware [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.761 226833 DEBUG nova.virt.hardware [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.761 226833 DEBUG nova.virt.hardware [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.761 226833 DEBUG nova.objects.instance [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c8abb380-680c-41b2-8156-a5e2b5b96f42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:25 compute-2 nova_compute[226829]: 2026-01-31 07:36:25.791 226833 DEBUG oslo_concurrency.processutils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:36:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:26.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:36:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:36:26 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2460872254' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:26 compute-2 nova_compute[226829]: 2026-01-31 07:36:26.202 226833 DEBUG oslo_concurrency.processutils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:26 compute-2 nova_compute[226829]: 2026-01-31 07:36:26.234 226833 DEBUG nova.storage.rbd_utils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] rbd image c8abb380-680c-41b2-8156-a5e2b5b96f42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:26 compute-2 nova_compute[226829]: 2026-01-31 07:36:26.239 226833 DEBUG oslo_concurrency.processutils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:36:26 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3819849537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:26 compute-2 nova_compute[226829]: 2026-01-31 07:36:26.628 226833 DEBUG oslo_concurrency.processutils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.390s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:26 compute-2 nova_compute[226829]: 2026-01-31 07:36:26.634 226833 DEBUG nova.virt.libvirt.driver [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:36:26 compute-2 nova_compute[226829]:   <uuid>c8abb380-680c-41b2-8156-a5e2b5b96f42</uuid>
Jan 31 07:36:26 compute-2 nova_compute[226829]:   <name>instance-00000018</name>
Jan 31 07:36:26 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:36:26 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:36:26 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:36:26 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:       <nova:name>tempest-ServersAdmin275Test-server-287964715</nova:name>
Jan 31 07:36:26 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:36:25</nova:creationTime>
Jan 31 07:36:26 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:36:26 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:36:26 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:36:26 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:36:26 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:36:26 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:36:26 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:36:26 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:36:26 compute-2 nova_compute[226829]:         <nova:user uuid="b9769a806c6f4874ab462acac1b4bfcc">tempest-ServersAdmin275Test-732223889-project-member</nova:user>
Jan 31 07:36:26 compute-2 nova_compute[226829]:         <nova:project uuid="74b1ee3fc09b40608d0892724c5ddba4">tempest-ServersAdmin275Test-732223889</nova:project>
Jan 31 07:36:26 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:36:26 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="40cf2ff3-f7ff-4843-b4ab-b7dcc843006f"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:       <nova:ports/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:36:26 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:36:26 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <system>
Jan 31 07:36:26 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:36:26 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:36:26 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:36:26 compute-2 nova_compute[226829]:       <entry name="serial">c8abb380-680c-41b2-8156-a5e2b5b96f42</entry>
Jan 31 07:36:26 compute-2 nova_compute[226829]:       <entry name="uuid">c8abb380-680c-41b2-8156-a5e2b5b96f42</entry>
Jan 31 07:36:26 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     </system>
Jan 31 07:36:26 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:36:26 compute-2 nova_compute[226829]:   <os>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:   </os>
Jan 31 07:36:26 compute-2 nova_compute[226829]:   <features>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:   </features>
Jan 31 07:36:26 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:36:26 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:36:26 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:36:26 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/c8abb380-680c-41b2-8156-a5e2b5b96f42_disk">
Jan 31 07:36:26 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:       </source>
Jan 31 07:36:26 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:36:26 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:36:26 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:36:26 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/c8abb380-680c-41b2-8156-a5e2b5b96f42_disk.config">
Jan 31 07:36:26 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:       </source>
Jan 31 07:36:26 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:36:26 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:36:26 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:36:26 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42/console.log" append="off"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <video>
Jan 31 07:36:26 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     </video>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:36:26 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:36:26 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:36:26 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:36:26 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:36:26 compute-2 nova_compute[226829]: </domain>
Jan 31 07:36:26 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:36:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2460872254' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3819849537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:26 compute-2 nova_compute[226829]: 2026-01-31 07:36:26.793 226833 DEBUG nova.virt.libvirt.driver [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:36:26 compute-2 nova_compute[226829]: 2026-01-31 07:36:26.794 226833 DEBUG nova.virt.libvirt.driver [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:36:26 compute-2 nova_compute[226829]: 2026-01-31 07:36:26.795 226833 INFO nova.virt.libvirt.driver [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Using config drive
Jan 31 07:36:26 compute-2 nova_compute[226829]: 2026-01-31 07:36:26.827 226833 DEBUG nova.storage.rbd_utils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] rbd image c8abb380-680c-41b2-8156-a5e2b5b96f42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:26 compute-2 nova_compute[226829]: 2026-01-31 07:36:26.850 226833 DEBUG nova.objects.instance [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lazy-loading 'ec2_ids' on Instance uuid c8abb380-680c-41b2-8156-a5e2b5b96f42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:26 compute-2 nova_compute[226829]: 2026-01-31 07:36:26.941 226833 DEBUG nova.objects.instance [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lazy-loading 'keypairs' on Instance uuid c8abb380-680c-41b2-8156-a5e2b5b96f42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:27 compute-2 nova_compute[226829]: 2026-01-31 07:36:27.071 226833 DEBUG nova.virt.libvirt.driver [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Check if temp file /var/lib/nova/instances/tmp7yca7bub exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Jan 31 07:36:27 compute-2 nova_compute[226829]: 2026-01-31 07:36:27.072 226833 DEBUG nova.compute.manager [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp7yca7bub',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='71265e55-f168-471c-80bc-80b49177a637',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Jan 31 07:36:27 compute-2 nova_compute[226829]: 2026-01-31 07:36:27.283 226833 INFO nova.virt.libvirt.driver [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Creating config drive at /var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42/disk.config
Jan 31 07:36:27 compute-2 nova_compute[226829]: 2026-01-31 07:36:27.288 226833 DEBUG oslo_concurrency.processutils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpu62e28w2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:27 compute-2 nova_compute[226829]: 2026-01-31 07:36:27.417 226833 DEBUG oslo_concurrency.processutils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpu62e28w2" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:27 compute-2 nova_compute[226829]: 2026-01-31 07:36:27.457 226833 DEBUG nova.storage.rbd_utils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] rbd image c8abb380-680c-41b2-8156-a5e2b5b96f42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:27 compute-2 nova_compute[226829]: 2026-01-31 07:36:27.462 226833 DEBUG oslo_concurrency.processutils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42/disk.config c8abb380-680c-41b2-8156-a5e2b5b96f42_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:27.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:27 compute-2 nova_compute[226829]: 2026-01-31 07:36:27.658 226833 DEBUG oslo_concurrency.processutils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42/disk.config c8abb380-680c-41b2-8156-a5e2b5b96f42_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:27 compute-2 nova_compute[226829]: 2026-01-31 07:36:27.659 226833 INFO nova.virt.libvirt.driver [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Deleting local config drive /var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42/disk.config because it was imported into RBD.
Jan 31 07:36:27 compute-2 systemd-machined[195142]: New machine qemu-11-instance-00000018.
Jan 31 07:36:27 compute-2 systemd[1]: Started Virtual Machine qemu-11-instance-00000018.
Jan 31 07:36:27 compute-2 ceph-mon[77282]: pgmap v1076: 305 pgs: 305 active+clean; 498 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 3.1 MiB/s wr, 399 op/s
Jan 31 07:36:27 compute-2 podman[237464]: 2026-01-31 07:36:27.841038153 +0000 UTC m=+0.105884714 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:36:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e151 e151: 3 total, 3 up, 3 in
Jan 31 07:36:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:28.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:28 compute-2 sudo[237536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:36:28 compute-2 sudo[237536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:28 compute-2 sudo[237536]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:28 compute-2 sudo[237565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:36:28 compute-2 sudo[237565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:28 compute-2 sudo[237565]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:28 compute-2 nova_compute[226829]: 2026-01-31 07:36:28.577 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:28 compute-2 nova_compute[226829]: 2026-01-31 07:36:28.621 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Removed pending event for c8abb380-680c-41b2-8156-a5e2b5b96f42 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 07:36:28 compute-2 nova_compute[226829]: 2026-01-31 07:36:28.622 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769844988.6213143, c8abb380-680c-41b2-8156-a5e2b5b96f42 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:36:28 compute-2 nova_compute[226829]: 2026-01-31 07:36:28.622 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] VM Resumed (Lifecycle Event)
Jan 31 07:36:28 compute-2 nova_compute[226829]: 2026-01-31 07:36:28.629 226833 DEBUG nova.compute.manager [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:36:28 compute-2 nova_compute[226829]: 2026-01-31 07:36:28.629 226833 DEBUG nova.virt.libvirt.driver [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:36:28 compute-2 nova_compute[226829]: 2026-01-31 07:36:28.632 226833 INFO nova.virt.libvirt.driver [-] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Instance spawned successfully.
Jan 31 07:36:28 compute-2 nova_compute[226829]: 2026-01-31 07:36:28.632 226833 DEBUG nova.virt.libvirt.driver [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:36:28 compute-2 nova_compute[226829]: 2026-01-31 07:36:28.651 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:28 compute-2 nova_compute[226829]: 2026-01-31 07:36:28.654 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:36:28 compute-2 nova_compute[226829]: 2026-01-31 07:36:28.662 226833 DEBUG nova.virt.libvirt.driver [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:28 compute-2 nova_compute[226829]: 2026-01-31 07:36:28.662 226833 DEBUG nova.virt.libvirt.driver [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:28 compute-2 nova_compute[226829]: 2026-01-31 07:36:28.663 226833 DEBUG nova.virt.libvirt.driver [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:28 compute-2 nova_compute[226829]: 2026-01-31 07:36:28.663 226833 DEBUG nova.virt.libvirt.driver [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:28 compute-2 nova_compute[226829]: 2026-01-31 07:36:28.664 226833 DEBUG nova.virt.libvirt.driver [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:28 compute-2 nova_compute[226829]: 2026-01-31 07:36:28.664 226833 DEBUG nova.virt.libvirt.driver [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:28 compute-2 nova_compute[226829]: 2026-01-31 07:36:28.672 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 31 07:36:28 compute-2 nova_compute[226829]: 2026-01-31 07:36:28.672 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769844988.6282973, c8abb380-680c-41b2-8156-a5e2b5b96f42 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:36:28 compute-2 nova_compute[226829]: 2026-01-31 07:36:28.672 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] VM Started (Lifecycle Event)
Jan 31 07:36:28 compute-2 nova_compute[226829]: 2026-01-31 07:36:28.673 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:28 compute-2 nova_compute[226829]: 2026-01-31 07:36:28.694 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:28 compute-2 nova_compute[226829]: 2026-01-31 07:36:28.696 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:36:28 compute-2 nova_compute[226829]: 2026-01-31 07:36:28.720 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 31 07:36:28 compute-2 nova_compute[226829]: 2026-01-31 07:36:28.730 226833 DEBUG nova.compute.manager [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:28 compute-2 nova_compute[226829]: 2026-01-31 07:36:28.810 226833 DEBUG oslo_concurrency.lockutils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:28 compute-2 nova_compute[226829]: 2026-01-31 07:36:28.811 226833 DEBUG oslo_concurrency.lockutils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:28 compute-2 nova_compute[226829]: 2026-01-31 07:36:28.811 226833 DEBUG nova.objects.instance [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 31 07:36:28 compute-2 nova_compute[226829]: 2026-01-31 07:36:28.867 226833 DEBUG oslo_concurrency.lockutils [None req-eb39b294-8e76-4bbd-81e2-75f0bfccc016 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:28 compute-2 ceph-mon[77282]: osdmap e151: 3 total, 3 up, 3 in
Jan 31 07:36:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:29.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:36:29 compute-2 ceph-mon[77282]: pgmap v1078: 305 pgs: 305 active+clean; 510 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 2.4 MiB/s wr, 380 op/s
Jan 31 07:36:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2498875659' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4127953444' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:36:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:30.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:36:30 compute-2 nova_compute[226829]: 2026-01-31 07:36:30.971 226833 INFO nova.compute.manager [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Rebuilding instance
Jan 31 07:36:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:36:31 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3923018290' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:36:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:31.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:36:31 compute-2 nova_compute[226829]: 2026-01-31 07:36:31.671 226833 DEBUG nova.objects.instance [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Lazy-loading 'trusted_certs' on Instance uuid c8abb380-680c-41b2-8156-a5e2b5b96f42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:31 compute-2 nova_compute[226829]: 2026-01-31 07:36:31.688 226833 DEBUG nova.compute.manager [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:31 compute-2 nova_compute[226829]: 2026-01-31 07:36:31.737 226833 DEBUG nova.objects.instance [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Lazy-loading 'pci_requests' on Instance uuid c8abb380-680c-41b2-8156-a5e2b5b96f42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:31 compute-2 nova_compute[226829]: 2026-01-31 07:36:31.748 226833 DEBUG nova.objects.instance [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Lazy-loading 'pci_devices' on Instance uuid c8abb380-680c-41b2-8156-a5e2b5b96f42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:31 compute-2 nova_compute[226829]: 2026-01-31 07:36:31.759 226833 DEBUG nova.objects.instance [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Lazy-loading 'resources' on Instance uuid c8abb380-680c-41b2-8156-a5e2b5b96f42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:31 compute-2 nova_compute[226829]: 2026-01-31 07:36:31.777 226833 DEBUG nova.objects.instance [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Lazy-loading 'migration_context' on Instance uuid c8abb380-680c-41b2-8156-a5e2b5b96f42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:31 compute-2 nova_compute[226829]: 2026-01-31 07:36:31.843 226833 DEBUG nova.objects.instance [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 31 07:36:31 compute-2 nova_compute[226829]: 2026-01-31 07:36:31.849 226833 DEBUG nova.virt.libvirt.driver [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 07:36:32 compute-2 ceph-mon[77282]: pgmap v1079: 305 pgs: 305 active+clean; 536 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 3.2 MiB/s wr, 382 op/s
Jan 31 07:36:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3923018290' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:36:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:32.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:36:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:33.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:33 compute-2 nova_compute[226829]: 2026-01-31 07:36:33.580 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:33 compute-2 ovn_controller[133834]: 2026-01-31T07:36:33Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:eb:c1:cc 10.100.0.8
Jan 31 07:36:33 compute-2 ovn_controller[133834]: 2026-01-31T07:36:33Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:eb:c1:cc 10.100.0.8
Jan 31 07:36:33 compute-2 nova_compute[226829]: 2026-01-31 07:36:33.677 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:34 compute-2 ceph-mon[77282]: pgmap v1080: 305 pgs: 305 active+clean; 536 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 2.2 MiB/s wr, 340 op/s
Jan 31 07:36:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:34.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:34 compute-2 nova_compute[226829]: 2026-01-31 07:36:34.423 226833 DEBUG nova.compute.manager [req-65e5d450-97b4-4087-ad2e-26ad345766da req-f5bb9408-7c40-4935-b030-c1c1da6df5a9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-vif-unplugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:36:34 compute-2 nova_compute[226829]: 2026-01-31 07:36:34.424 226833 DEBUG oslo_concurrency.lockutils [req-65e5d450-97b4-4087-ad2e-26ad345766da req-f5bb9408-7c40-4935-b030-c1c1da6df5a9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "71265e55-f168-471c-80bc-80b49177a637-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:34 compute-2 nova_compute[226829]: 2026-01-31 07:36:34.425 226833 DEBUG oslo_concurrency.lockutils [req-65e5d450-97b4-4087-ad2e-26ad345766da req-f5bb9408-7c40-4935-b030-c1c1da6df5a9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:34 compute-2 nova_compute[226829]: 2026-01-31 07:36:34.426 226833 DEBUG oslo_concurrency.lockutils [req-65e5d450-97b4-4087-ad2e-26ad345766da req-f5bb9408-7c40-4935-b030-c1c1da6df5a9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:34 compute-2 nova_compute[226829]: 2026-01-31 07:36:34.427 226833 DEBUG nova.compute.manager [req-65e5d450-97b4-4087-ad2e-26ad345766da req-f5bb9408-7c40-4935-b030-c1c1da6df5a9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] No waiting events found dispatching network-vif-unplugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:36:34 compute-2 nova_compute[226829]: 2026-01-31 07:36:34.428 226833 DEBUG nova.compute.manager [req-65e5d450-97b4-4087-ad2e-26ad345766da req-f5bb9408-7c40-4935-b030-c1c1da6df5a9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-vif-unplugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:36:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:36:34 compute-2 nova_compute[226829]: 2026-01-31 07:36:34.599 226833 INFO nova.compute.manager [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Took 6.74 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Jan 31 07:36:34 compute-2 nova_compute[226829]: 2026-01-31 07:36:34.600 226833 DEBUG nova.compute.manager [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:36:34 compute-2 nova_compute[226829]: 2026-01-31 07:36:34.618 226833 DEBUG nova.compute.manager [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp7yca7bub',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='71265e55-f168-471c-80bc-80b49177a637',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(73112d7f-1d39-4e75-8fd6-e97fb4741daf),old_vol_attachment_ids={21ac0cb5-f889-4135-9b17-5debc0b9246e='07354d43-185b-4fd2-8dc3-63e0a2cce98a'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Jan 31 07:36:34 compute-2 nova_compute[226829]: 2026-01-31 07:36:34.627 226833 DEBUG nova.objects.instance [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lazy-loading 'migration_context' on Instance uuid 71265e55-f168-471c-80bc-80b49177a637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:34 compute-2 nova_compute[226829]: 2026-01-31 07:36:34.631 226833 DEBUG nova.virt.libvirt.driver [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Jan 31 07:36:34 compute-2 nova_compute[226829]: 2026-01-31 07:36:34.634 226833 DEBUG nova.virt.libvirt.driver [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Jan 31 07:36:34 compute-2 nova_compute[226829]: 2026-01-31 07:36:34.634 226833 DEBUG nova.virt.libvirt.driver [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Jan 31 07:36:34 compute-2 nova_compute[226829]: 2026-01-31 07:36:34.656 226833 DEBUG nova.virt.libvirt.migration [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Find same serial number: pos=1, serial=21ac0cb5-f889-4135-9b17-5debc0b9246e _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242
Jan 31 07:36:34 compute-2 nova_compute[226829]: 2026-01-31 07:36:34.657 226833 DEBUG nova.virt.libvirt.vif [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:36:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-596053430',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-596053430',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:36:20Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='29d136be5e384689a95acd607131dfd0',ramdisk_id='',reservation_id='r-w7qr5y3u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1421195096',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1421195096-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:36:20Z,user_data=None,user_id='ea44c45fe7df4f36b5c722fbfc214f2e',uuid=71265e55-f168-471c-80bc-80b49177a637,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:36:34 compute-2 nova_compute[226829]: 2026-01-31 07:36:34.657 226833 DEBUG nova.network.os_vif_util [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Converting VIF {"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:36:34 compute-2 nova_compute[226829]: 2026-01-31 07:36:34.659 226833 DEBUG nova.network.os_vif_util [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:c1:cc,bridge_name='br-int',has_traffic_filtering=True,id=84bfd4cb-8188-4fde-bca2-fdb0a732119f,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84bfd4cb-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:36:34 compute-2 nova_compute[226829]: 2026-01-31 07:36:34.660 226833 DEBUG nova.virt.libvirt.migration [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Updating guest XML with vif config: <interface type="ethernet">
Jan 31 07:36:34 compute-2 nova_compute[226829]:   <mac address="fa:16:3e:eb:c1:cc"/>
Jan 31 07:36:34 compute-2 nova_compute[226829]:   <model type="virtio"/>
Jan 31 07:36:34 compute-2 nova_compute[226829]:   <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:36:34 compute-2 nova_compute[226829]:   <mtu size="1442"/>
Jan 31 07:36:34 compute-2 nova_compute[226829]:   <target dev="tap84bfd4cb-81"/>
Jan 31 07:36:34 compute-2 nova_compute[226829]: </interface>
Jan 31 07:36:34 compute-2 nova_compute[226829]:  _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Jan 31 07:36:34 compute-2 nova_compute[226829]: 2026-01-31 07:36:34.662 226833 DEBUG nova.virt.libvirt.driver [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Jan 31 07:36:35 compute-2 nova_compute[226829]: 2026-01-31 07:36:35.137 226833 DEBUG nova.virt.libvirt.migration [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Jan 31 07:36:35 compute-2 nova_compute[226829]: 2026-01-31 07:36:35.138 226833 INFO nova.virt.libvirt.migration [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Increasing downtime to 50 ms after 0 sec elapsed time
Jan 31 07:36:35 compute-2 nova_compute[226829]: 2026-01-31 07:36:35.292 226833 INFO nova.virt.libvirt.driver [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Jan 31 07:36:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:35.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:35 compute-2 nova_compute[226829]: 2026-01-31 07:36:35.795 226833 DEBUG nova.virt.libvirt.migration [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Jan 31 07:36:35 compute-2 nova_compute[226829]: 2026-01-31 07:36:35.796 226833 DEBUG nova.virt.libvirt.migration [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Jan 31 07:36:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:36.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:36 compute-2 ceph-mon[77282]: pgmap v1081: 305 pgs: 305 active+clean; 544 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 3.4 MiB/s wr, 296 op/s
Jan 31 07:36:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/267814869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:36 compute-2 nova_compute[226829]: 2026-01-31 07:36:36.299 226833 DEBUG nova.virt.libvirt.migration [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Jan 31 07:36:36 compute-2 nova_compute[226829]: 2026-01-31 07:36:36.299 226833 DEBUG nova.virt.libvirt.migration [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Jan 31 07:36:36 compute-2 nova_compute[226829]: 2026-01-31 07:36:36.538 226833 DEBUG nova.compute.manager [req-2474c132-9879-47cb-a63f-3d8052b773a1 req-f53710c4-74b0-4b28-aa24-aeae4cd10a5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:36:36 compute-2 nova_compute[226829]: 2026-01-31 07:36:36.538 226833 DEBUG oslo_concurrency.lockutils [req-2474c132-9879-47cb-a63f-3d8052b773a1 req-f53710c4-74b0-4b28-aa24-aeae4cd10a5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "71265e55-f168-471c-80bc-80b49177a637-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:36 compute-2 nova_compute[226829]: 2026-01-31 07:36:36.539 226833 DEBUG oslo_concurrency.lockutils [req-2474c132-9879-47cb-a63f-3d8052b773a1 req-f53710c4-74b0-4b28-aa24-aeae4cd10a5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:36 compute-2 nova_compute[226829]: 2026-01-31 07:36:36.539 226833 DEBUG oslo_concurrency.lockutils [req-2474c132-9879-47cb-a63f-3d8052b773a1 req-f53710c4-74b0-4b28-aa24-aeae4cd10a5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:36 compute-2 nova_compute[226829]: 2026-01-31 07:36:36.540 226833 DEBUG nova.compute.manager [req-2474c132-9879-47cb-a63f-3d8052b773a1 req-f53710c4-74b0-4b28-aa24-aeae4cd10a5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] No waiting events found dispatching network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:36:36 compute-2 nova_compute[226829]: 2026-01-31 07:36:36.540 226833 WARNING nova.compute.manager [req-2474c132-9879-47cb-a63f-3d8052b773a1 req-f53710c4-74b0-4b28-aa24-aeae4cd10a5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received unexpected event network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f for instance with vm_state active and task_state migrating.
Jan 31 07:36:36 compute-2 nova_compute[226829]: 2026-01-31 07:36:36.541 226833 DEBUG nova.compute.manager [req-2474c132-9879-47cb-a63f-3d8052b773a1 req-f53710c4-74b0-4b28-aa24-aeae4cd10a5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-changed-84bfd4cb-8188-4fde-bca2-fdb0a732119f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:36:36 compute-2 nova_compute[226829]: 2026-01-31 07:36:36.541 226833 DEBUG nova.compute.manager [req-2474c132-9879-47cb-a63f-3d8052b773a1 req-f53710c4-74b0-4b28-aa24-aeae4cd10a5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Refreshing instance network info cache due to event network-changed-84bfd4cb-8188-4fde-bca2-fdb0a732119f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:36:36 compute-2 nova_compute[226829]: 2026-01-31 07:36:36.542 226833 DEBUG oslo_concurrency.lockutils [req-2474c132-9879-47cb-a63f-3d8052b773a1 req-f53710c4-74b0-4b28-aa24-aeae4cd10a5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-71265e55-f168-471c-80bc-80b49177a637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:36:36 compute-2 nova_compute[226829]: 2026-01-31 07:36:36.542 226833 DEBUG oslo_concurrency.lockutils [req-2474c132-9879-47cb-a63f-3d8052b773a1 req-f53710c4-74b0-4b28-aa24-aeae4cd10a5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-71265e55-f168-471c-80bc-80b49177a637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:36:36 compute-2 nova_compute[226829]: 2026-01-31 07:36:36.543 226833 DEBUG nova.network.neutron [req-2474c132-9879-47cb-a63f-3d8052b773a1 req-f53710c4-74b0-4b28-aa24-aeae4cd10a5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Refreshing network info cache for port 84bfd4cb-8188-4fde-bca2-fdb0a732119f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:36:36 compute-2 nova_compute[226829]: 2026-01-31 07:36:36.802 226833 DEBUG nova.virt.libvirt.migration [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Jan 31 07:36:36 compute-2 nova_compute[226829]: 2026-01-31 07:36:36.803 226833 DEBUG nova.virt.libvirt.migration [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Jan 31 07:36:37 compute-2 nova_compute[226829]: 2026-01-31 07:36:37.307 226833 DEBUG nova.virt.libvirt.migration [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Jan 31 07:36:37 compute-2 nova_compute[226829]: 2026-01-31 07:36:37.307 226833 DEBUG nova.virt.libvirt.migration [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Jan 31 07:36:37 compute-2 ceph-mon[77282]: pgmap v1082: 305 pgs: 305 active+clean; 566 MiB data, 633 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 4.2 MiB/s wr, 275 op/s
Jan 31 07:36:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:36:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:37.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:36:37 compute-2 nova_compute[226829]: 2026-01-31 07:36:37.644 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769844997.6436296, 71265e55-f168-471c-80bc-80b49177a637 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:36:37 compute-2 nova_compute[226829]: 2026-01-31 07:36:37.644 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] VM Paused (Lifecycle Event)
Jan 31 07:36:37 compute-2 nova_compute[226829]: 2026-01-31 07:36:37.663 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:37 compute-2 nova_compute[226829]: 2026-01-31 07:36:37.667 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:36:37 compute-2 nova_compute[226829]: 2026-01-31 07:36:37.690 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] During sync_power_state the instance has a pending task (migrating). Skip.
Jan 31 07:36:37 compute-2 kernel: tap84bfd4cb-81 (unregistering): left promiscuous mode
Jan 31 07:36:37 compute-2 NetworkManager[48999]: <info>  [1769844997.8572] device (tap84bfd4cb-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:36:37 compute-2 nova_compute[226829]: 2026-01-31 07:36:37.900 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:37 compute-2 ovn_controller[133834]: 2026-01-31T07:36:37Z|00062|binding|INFO|Releasing lport 84bfd4cb-8188-4fde-bca2-fdb0a732119f from this chassis (sb_readonly=0)
Jan 31 07:36:37 compute-2 ovn_controller[133834]: 2026-01-31T07:36:37Z|00063|binding|INFO|Setting lport 84bfd4cb-8188-4fde-bca2-fdb0a732119f down in Southbound
Jan 31 07:36:37 compute-2 ovn_controller[133834]: 2026-01-31T07:36:37Z|00064|binding|INFO|Removing iface tap84bfd4cb-81 ovn-installed in OVS
Jan 31 07:36:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:37.905 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:c1:cc 10.100.0.8'], port_security=['fa:16:3e:eb:c1:cc 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'a8402939-fce1-46a9-9749-88c4c6334003'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '71265e55-f168-471c-80bc-80b49177a637', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29d136be5e384689a95acd607131dfd0', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'd466f767-252b-4335-8dfa-0f3f94d2209b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c6f42f93-f94d-4170-883d-d45cddf5fdad, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=84bfd4cb-8188-4fde-bca2-fdb0a732119f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:36:37 compute-2 nova_compute[226829]: 2026-01-31 07:36:37.908 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:37.910 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 84bfd4cb-8188-4fde-bca2-fdb0a732119f in datapath e4d1862b-2abc-4d60-bc48-19a5318038f4 unbound from our chassis
Jan 31 07:36:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:37.912 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e4d1862b-2abc-4d60-bc48-19a5318038f4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:36:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:37.914 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cfa6ebc2-28e5-4793-9e8f-e22e2c4bfbda]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:37.919 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4 namespace which is not needed anymore
Jan 31 07:36:37 compute-2 nova_compute[226829]: 2026-01-31 07:36:37.919 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:37 compute-2 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Jan 31 07:36:37 compute-2 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000001a.scope: Consumed 14.608s CPU time.
Jan 31 07:36:37 compute-2 systemd-machined[195142]: Machine qemu-10-instance-0000001a terminated.
Jan 31 07:36:37 compute-2 virtqemud[226546]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volumes/volume-21ac0cb5-f889-4135-9b17-5debc0b9246e: No such file or directory
Jan 31 07:36:37 compute-2 virtqemud[226546]: Unable to get XATTR trusted.libvirt.security.ref_dac on volumes/volume-21ac0cb5-f889-4135-9b17-5debc0b9246e: No such file or directory
Jan 31 07:36:37 compute-2 kernel: tap84bfd4cb-81: entered promiscuous mode
Jan 31 07:36:37 compute-2 kernel: tap84bfd4cb-81 (unregistering): left promiscuous mode
Jan 31 07:36:37 compute-2 NetworkManager[48999]: <info>  [1769844997.9809] manager: (tap84bfd4cb-81): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Jan 31 07:36:37 compute-2 nova_compute[226829]: 2026-01-31 07:36:37.985 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:37 compute-2 ovn_controller[133834]: 2026-01-31T07:36:37Z|00065|binding|INFO|Claiming lport 84bfd4cb-8188-4fde-bca2-fdb0a732119f for this chassis.
Jan 31 07:36:37 compute-2 ovn_controller[133834]: 2026-01-31T07:36:37Z|00066|binding|INFO|84bfd4cb-8188-4fde-bca2-fdb0a732119f: Claiming fa:16:3e:eb:c1:cc 10.100.0.8
Jan 31 07:36:37 compute-2 nova_compute[226829]: 2026-01-31 07:36:37.995 226833 DEBUG nova.virt.libvirt.guest [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Jan 31 07:36:37 compute-2 nova_compute[226829]: 2026-01-31 07:36:37.996 226833 INFO nova.virt.libvirt.driver [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Migration operation has completed
Jan 31 07:36:37 compute-2 nova_compute[226829]: 2026-01-31 07:36:37.996 226833 INFO nova.compute.manager [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] _post_live_migration() is started..
Jan 31 07:36:37 compute-2 nova_compute[226829]: 2026-01-31 07:36:37.997 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:37 compute-2 ovn_controller[133834]: 2026-01-31T07:36:37Z|00067|binding|INFO|Releasing lport 84bfd4cb-8188-4fde-bca2-fdb0a732119f from this chassis (sb_readonly=0)
Jan 31 07:36:37 compute-2 podman[237601]: 2026-01-31 07:36:37.998015406 +0000 UTC m=+0.124464271 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:36:37 compute-2 nova_compute[226829]: 2026-01-31 07:36:37.998 226833 DEBUG nova.virt.libvirt.driver [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Jan 31 07:36:37 compute-2 nova_compute[226829]: 2026-01-31 07:36:37.999 226833 DEBUG nova.virt.libvirt.driver [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Jan 31 07:36:38 compute-2 nova_compute[226829]: 2026-01-31 07:36:37.999 226833 DEBUG nova.virt.libvirt.driver [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Jan 31 07:36:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:38.000 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:c1:cc 10.100.0.8'], port_security=['fa:16:3e:eb:c1:cc 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'a8402939-fce1-46a9-9749-88c4c6334003'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '71265e55-f168-471c-80bc-80b49177a637', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29d136be5e384689a95acd607131dfd0', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'd466f767-252b-4335-8dfa-0f3f94d2209b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c6f42f93-f94d-4170-883d-d45cddf5fdad, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=84bfd4cb-8188-4fde-bca2-fdb0a732119f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:36:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:38.006 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:c1:cc 10.100.0.8'], port_security=['fa:16:3e:eb:c1:cc 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'a8402939-fce1-46a9-9749-88c4c6334003'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '71265e55-f168-471c-80bc-80b49177a637', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29d136be5e384689a95acd607131dfd0', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'd466f767-252b-4335-8dfa-0f3f94d2209b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c6f42f93-f94d-4170-883d-d45cddf5fdad, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=84bfd4cb-8188-4fde-bca2-fdb0a732119f) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:36:38 compute-2 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[237117]: [NOTICE]   (237121) : haproxy version is 2.8.14-c23fe91
Jan 31 07:36:38 compute-2 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[237117]: [NOTICE]   (237121) : path to executable is /usr/sbin/haproxy
Jan 31 07:36:38 compute-2 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[237117]: [WARNING]  (237121) : Exiting Master process...
Jan 31 07:36:38 compute-2 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[237117]: [ALERT]    (237121) : Current worker (237123) exited with code 143 (Terminated)
Jan 31 07:36:38 compute-2 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[237117]: [WARNING]  (237121) : All workers exited. Exiting... (0)
Jan 31 07:36:38 compute-2 systemd[1]: libpod-7174892c07aa21bd41712bf29e7b3b762ad2bc7c28ff8ad89193759b202e933c.scope: Deactivated successfully.
Jan 31 07:36:38 compute-2 podman[237649]: 2026-01-31 07:36:38.071791522 +0000 UTC m=+0.048036124 container died 7174892c07aa21bd41712bf29e7b3b762ad2bc7c28ff8ad89193759b202e933c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:36:38 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7174892c07aa21bd41712bf29e7b3b762ad2bc7c28ff8ad89193759b202e933c-userdata-shm.mount: Deactivated successfully.
Jan 31 07:36:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:36:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:38.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:36:38 compute-2 systemd[1]: var-lib-containers-storage-overlay-fa8547daaccd6fb35e223a2715a66345ced6f91c1167d3667d8ee1287d1961b3-merged.mount: Deactivated successfully.
Jan 31 07:36:38 compute-2 podman[237649]: 2026-01-31 07:36:38.114051327 +0000 UTC m=+0.090295899 container cleanup 7174892c07aa21bd41712bf29e7b3b762ad2bc7c28ff8ad89193759b202e933c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 07:36:38 compute-2 systemd[1]: libpod-conmon-7174892c07aa21bd41712bf29e7b3b762ad2bc7c28ff8ad89193759b202e933c.scope: Deactivated successfully.
Jan 31 07:36:38 compute-2 podman[237680]: 2026-01-31 07:36:38.172260766 +0000 UTC m=+0.042609394 container remove 7174892c07aa21bd41712bf29e7b3b762ad2bc7c28ff8ad89193759b202e933c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 07:36:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:38.184 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a63cace3-1989-4b17-a869-4a6f7361100d]: (4, ('Sat Jan 31 07:36:38 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4 (7174892c07aa21bd41712bf29e7b3b762ad2bc7c28ff8ad89193759b202e933c)\n7174892c07aa21bd41712bf29e7b3b762ad2bc7c28ff8ad89193759b202e933c\nSat Jan 31 07:36:38 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4 (7174892c07aa21bd41712bf29e7b3b762ad2bc7c28ff8ad89193759b202e933c)\n7174892c07aa21bd41712bf29e7b3b762ad2bc7c28ff8ad89193759b202e933c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:38.186 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[bc30c524-fba6-44f7-89fb-6dfbe373076a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:38.187 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4d1862b-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:36:38 compute-2 kernel: tape4d1862b-20: left promiscuous mode
Jan 31 07:36:38 compute-2 nova_compute[226829]: 2026-01-31 07:36:38.189 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:38 compute-2 nova_compute[226829]: 2026-01-31 07:36:38.203 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:38.207 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a67c7a-2956-4371-be50-3c451905dd7e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:38.228 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[612e0549-e265-462d-88e7-1191255c7a15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:38.230 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a53441-3e8d-4308-9622-616010b73e52]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:38.247 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2c5b807c-1a89-4da2-8cbf-239cb8d390af]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 523196, 'reachable_time': 37039, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237699, 'error': None, 'target': 'ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:38 compute-2 systemd[1]: run-netns-ovnmeta\x2de4d1862b\x2d2abc\x2d4d60\x2dbc48\x2d19a5318038f4.mount: Deactivated successfully.
Jan 31 07:36:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:38.254 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:36:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:38.255 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[ca4dd88e-a0f7-44ec-820b-d57e4fd39873]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:38.256 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 84bfd4cb-8188-4fde-bca2-fdb0a732119f in datapath e4d1862b-2abc-4d60-bc48-19a5318038f4 unbound from our chassis
Jan 31 07:36:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:38.258 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e4d1862b-2abc-4d60-bc48-19a5318038f4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:36:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:38.259 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[fe8a2095-7be3-46fb-9810-0591b7650ee1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:38.260 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 84bfd4cb-8188-4fde-bca2-fdb0a732119f in datapath e4d1862b-2abc-4d60-bc48-19a5318038f4 unbound from our chassis
Jan 31 07:36:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:38.262 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e4d1862b-2abc-4d60-bc48-19a5318038f4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:36:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:38.263 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[902497a1-f71f-4a6d-b737-e5d8644c339f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:38 compute-2 nova_compute[226829]: 2026-01-31 07:36:38.453 226833 DEBUG nova.compute.manager [req-2fc362b5-1070-4840-9dcc-efa727c513fe req-b7005ea6-c042-4d5b-934f-7fe9aaca953b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-vif-unplugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:36:38 compute-2 nova_compute[226829]: 2026-01-31 07:36:38.454 226833 DEBUG oslo_concurrency.lockutils [req-2fc362b5-1070-4840-9dcc-efa727c513fe req-b7005ea6-c042-4d5b-934f-7fe9aaca953b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "71265e55-f168-471c-80bc-80b49177a637-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:38 compute-2 nova_compute[226829]: 2026-01-31 07:36:38.455 226833 DEBUG oslo_concurrency.lockutils [req-2fc362b5-1070-4840-9dcc-efa727c513fe req-b7005ea6-c042-4d5b-934f-7fe9aaca953b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:38 compute-2 nova_compute[226829]: 2026-01-31 07:36:38.455 226833 DEBUG oslo_concurrency.lockutils [req-2fc362b5-1070-4840-9dcc-efa727c513fe req-b7005ea6-c042-4d5b-934f-7fe9aaca953b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:38 compute-2 nova_compute[226829]: 2026-01-31 07:36:38.456 226833 DEBUG nova.compute.manager [req-2fc362b5-1070-4840-9dcc-efa727c513fe req-b7005ea6-c042-4d5b-934f-7fe9aaca953b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] No waiting events found dispatching network-vif-unplugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:36:38 compute-2 nova_compute[226829]: 2026-01-31 07:36:38.456 226833 DEBUG nova.compute.manager [req-2fc362b5-1070-4840-9dcc-efa727c513fe req-b7005ea6-c042-4d5b-934f-7fe9aaca953b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-vif-unplugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:36:38 compute-2 nova_compute[226829]: 2026-01-31 07:36:38.583 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e152 e152: 3 total, 3 up, 3 in
Jan 31 07:36:38 compute-2 nova_compute[226829]: 2026-01-31 07:36:38.677 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:36:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:39.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:36:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:36:39 compute-2 ceph-mon[77282]: pgmap v1083: 305 pgs: 305 active+clean; 569 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 3.4 MiB/s wr, 255 op/s
Jan 31 07:36:39 compute-2 ceph-mon[77282]: osdmap e152: 3 total, 3 up, 3 in
Jan 31 07:36:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2898010128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1723160795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:39 compute-2 nova_compute[226829]: 2026-01-31 07:36:39.754 226833 DEBUG nova.compute.manager [req-b63accf1-19fa-422a-9ad1-a1ad473e3429 req-1666f04a-5f25-49a1-a440-50680482231f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-vif-unplugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:36:39 compute-2 nova_compute[226829]: 2026-01-31 07:36:39.754 226833 DEBUG oslo_concurrency.lockutils [req-b63accf1-19fa-422a-9ad1-a1ad473e3429 req-1666f04a-5f25-49a1-a440-50680482231f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "71265e55-f168-471c-80bc-80b49177a637-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:39 compute-2 nova_compute[226829]: 2026-01-31 07:36:39.755 226833 DEBUG oslo_concurrency.lockutils [req-b63accf1-19fa-422a-9ad1-a1ad473e3429 req-1666f04a-5f25-49a1-a440-50680482231f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:39 compute-2 nova_compute[226829]: 2026-01-31 07:36:39.755 226833 DEBUG oslo_concurrency.lockutils [req-b63accf1-19fa-422a-9ad1-a1ad473e3429 req-1666f04a-5f25-49a1-a440-50680482231f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:39 compute-2 nova_compute[226829]: 2026-01-31 07:36:39.755 226833 DEBUG nova.compute.manager [req-b63accf1-19fa-422a-9ad1-a1ad473e3429 req-1666f04a-5f25-49a1-a440-50680482231f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] No waiting events found dispatching network-vif-unplugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:36:39 compute-2 nova_compute[226829]: 2026-01-31 07:36:39.756 226833 DEBUG nova.compute.manager [req-b63accf1-19fa-422a-9ad1-a1ad473e3429 req-1666f04a-5f25-49a1-a440-50680482231f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-vif-unplugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:36:39 compute-2 nova_compute[226829]: 2026-01-31 07:36:39.811 226833 DEBUG nova.network.neutron [req-2474c132-9879-47cb-a63f-3d8052b773a1 req-f53710c4-74b0-4b28-aa24-aeae4cd10a5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Updated VIF entry in instance network info cache for port 84bfd4cb-8188-4fde-bca2-fdb0a732119f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:36:39 compute-2 nova_compute[226829]: 2026-01-31 07:36:39.812 226833 DEBUG nova.network.neutron [req-2474c132-9879-47cb-a63f-3d8052b773a1 req-f53710c4-74b0-4b28-aa24-aeae4cd10a5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Updating instance_info_cache with network_info: [{"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:36:39 compute-2 nova_compute[226829]: 2026-01-31 07:36:39.817 226833 DEBUG nova.network.neutron [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Activated binding for port 84bfd4cb-8188-4fde-bca2-fdb0a732119f and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Jan 31 07:36:39 compute-2 nova_compute[226829]: 2026-01-31 07:36:39.818 226833 DEBUG nova.compute.manager [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Jan 31 07:36:39 compute-2 nova_compute[226829]: 2026-01-31 07:36:39.819 226833 DEBUG nova.virt.libvirt.vif [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:36:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-596053430',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-596053430',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:36:20Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='29d136be5e384689a95acd607131dfd0',ramdisk_id='',reservation_id='r-w7qr5y3u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1421195096',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1421195096-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:36:26Z,user_data=None,user_id='ea44c45fe7df4f36b5c722fbfc214f2e',uuid=71265e55-f168-471c-80bc-80b49177a637,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:36:39 compute-2 nova_compute[226829]: 2026-01-31 07:36:39.819 226833 DEBUG nova.network.os_vif_util [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Converting VIF {"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:36:39 compute-2 nova_compute[226829]: 2026-01-31 07:36:39.820 226833 DEBUG nova.network.os_vif_util [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:c1:cc,bridge_name='br-int',has_traffic_filtering=True,id=84bfd4cb-8188-4fde-bca2-fdb0a732119f,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84bfd4cb-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:36:39 compute-2 nova_compute[226829]: 2026-01-31 07:36:39.821 226833 DEBUG os_vif [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:c1:cc,bridge_name='br-int',has_traffic_filtering=True,id=84bfd4cb-8188-4fde-bca2-fdb0a732119f,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84bfd4cb-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:36:39 compute-2 nova_compute[226829]: 2026-01-31 07:36:39.826 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:39 compute-2 nova_compute[226829]: 2026-01-31 07:36:39.827 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84bfd4cb-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:36:39 compute-2 nova_compute[226829]: 2026-01-31 07:36:39.829 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:39 compute-2 nova_compute[226829]: 2026-01-31 07:36:39.833 226833 DEBUG oslo_concurrency.lockutils [req-2474c132-9879-47cb-a63f-3d8052b773a1 req-f53710c4-74b0-4b28-aa24-aeae4cd10a5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-71265e55-f168-471c-80bc-80b49177a637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:36:39 compute-2 nova_compute[226829]: 2026-01-31 07:36:39.834 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:36:39 compute-2 nova_compute[226829]: 2026-01-31 07:36:39.840 226833 INFO os_vif [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:c1:cc,bridge_name='br-int',has_traffic_filtering=True,id=84bfd4cb-8188-4fde-bca2-fdb0a732119f,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84bfd4cb-81')
Jan 31 07:36:39 compute-2 nova_compute[226829]: 2026-01-31 07:36:39.841 226833 DEBUG oslo_concurrency.lockutils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:39 compute-2 nova_compute[226829]: 2026-01-31 07:36:39.841 226833 DEBUG oslo_concurrency.lockutils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:39 compute-2 nova_compute[226829]: 2026-01-31 07:36:39.841 226833 DEBUG oslo_concurrency.lockutils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:39 compute-2 nova_compute[226829]: 2026-01-31 07:36:39.842 226833 DEBUG nova.compute.manager [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Jan 31 07:36:39 compute-2 nova_compute[226829]: 2026-01-31 07:36:39.842 226833 INFO nova.virt.libvirt.driver [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Deleting instance files /var/lib/nova/instances/71265e55-f168-471c-80bc-80b49177a637_del
Jan 31 07:36:39 compute-2 nova_compute[226829]: 2026-01-31 07:36:39.842 226833 INFO nova.virt.libvirt.driver [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Deletion of /var/lib/nova/instances/71265e55-f168-471c-80bc-80b49177a637_del complete
Jan 31 07:36:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:36:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:40.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.687 226833 DEBUG oslo_concurrency.lockutils [None req-0420c642-141d-4f6d-ba34-c6c63e61721a 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "871711de-f993-4592-83a2-a36c4039786d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.688 226833 DEBUG oslo_concurrency.lockutils [None req-0420c642-141d-4f6d-ba34-c6c63e61721a 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "871711de-f993-4592-83a2-a36c4039786d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.689 226833 DEBUG oslo_concurrency.lockutils [None req-0420c642-141d-4f6d-ba34-c6c63e61721a 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "871711de-f993-4592-83a2-a36c4039786d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.689 226833 DEBUG oslo_concurrency.lockutils [None req-0420c642-141d-4f6d-ba34-c6c63e61721a 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "871711de-f993-4592-83a2-a36c4039786d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.689 226833 DEBUG oslo_concurrency.lockutils [None req-0420c642-141d-4f6d-ba34-c6c63e61721a 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "871711de-f993-4592-83a2-a36c4039786d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.691 226833 INFO nova.compute.manager [None req-0420c642-141d-4f6d-ba34-c6c63e61721a 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Terminating instance
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.693 226833 DEBUG oslo_concurrency.lockutils [None req-0420c642-141d-4f6d-ba34-c6c63e61721a 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "refresh_cache-871711de-f993-4592-83a2-a36c4039786d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.693 226833 DEBUG oslo_concurrency.lockutils [None req-0420c642-141d-4f6d-ba34-c6c63e61721a 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquired lock "refresh_cache-871711de-f993-4592-83a2-a36c4039786d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.693 226833 DEBUG nova.network.neutron [None req-0420c642-141d-4f6d-ba34-c6c63e61721a 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:36:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1358593603' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.769 226833 DEBUG nova.compute.manager [req-a6bc87dd-af1e-444f-9521-94db031714db req-a93c522c-c527-4f80-9b9e-81c4ed6701f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.770 226833 DEBUG oslo_concurrency.lockutils [req-a6bc87dd-af1e-444f-9521-94db031714db req-a93c522c-c527-4f80-9b9e-81c4ed6701f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "71265e55-f168-471c-80bc-80b49177a637-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.771 226833 DEBUG oslo_concurrency.lockutils [req-a6bc87dd-af1e-444f-9521-94db031714db req-a93c522c-c527-4f80-9b9e-81c4ed6701f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.771 226833 DEBUG oslo_concurrency.lockutils [req-a6bc87dd-af1e-444f-9521-94db031714db req-a93c522c-c527-4f80-9b9e-81c4ed6701f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.772 226833 DEBUG nova.compute.manager [req-a6bc87dd-af1e-444f-9521-94db031714db req-a93c522c-c527-4f80-9b9e-81c4ed6701f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] No waiting events found dispatching network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.772 226833 WARNING nova.compute.manager [req-a6bc87dd-af1e-444f-9521-94db031714db req-a93c522c-c527-4f80-9b9e-81c4ed6701f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received unexpected event network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f for instance with vm_state active and task_state migrating.
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.773 226833 DEBUG nova.compute.manager [req-a6bc87dd-af1e-444f-9521-94db031714db req-a93c522c-c527-4f80-9b9e-81c4ed6701f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.773 226833 DEBUG oslo_concurrency.lockutils [req-a6bc87dd-af1e-444f-9521-94db031714db req-a93c522c-c527-4f80-9b9e-81c4ed6701f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "71265e55-f168-471c-80bc-80b49177a637-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.774 226833 DEBUG oslo_concurrency.lockutils [req-a6bc87dd-af1e-444f-9521-94db031714db req-a93c522c-c527-4f80-9b9e-81c4ed6701f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.774 226833 DEBUG oslo_concurrency.lockutils [req-a6bc87dd-af1e-444f-9521-94db031714db req-a93c522c-c527-4f80-9b9e-81c4ed6701f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.775 226833 DEBUG nova.compute.manager [req-a6bc87dd-af1e-444f-9521-94db031714db req-a93c522c-c527-4f80-9b9e-81c4ed6701f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] No waiting events found dispatching network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.775 226833 WARNING nova.compute.manager [req-a6bc87dd-af1e-444f-9521-94db031714db req-a93c522c-c527-4f80-9b9e-81c4ed6701f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received unexpected event network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f for instance with vm_state active and task_state migrating.
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.776 226833 DEBUG nova.compute.manager [req-a6bc87dd-af1e-444f-9521-94db031714db req-a93c522c-c527-4f80-9b9e-81c4ed6701f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.776 226833 DEBUG oslo_concurrency.lockutils [req-a6bc87dd-af1e-444f-9521-94db031714db req-a93c522c-c527-4f80-9b9e-81c4ed6701f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "71265e55-f168-471c-80bc-80b49177a637-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.776 226833 DEBUG oslo_concurrency.lockutils [req-a6bc87dd-af1e-444f-9521-94db031714db req-a93c522c-c527-4f80-9b9e-81c4ed6701f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.777 226833 DEBUG oslo_concurrency.lockutils [req-a6bc87dd-af1e-444f-9521-94db031714db req-a93c522c-c527-4f80-9b9e-81c4ed6701f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.777 226833 DEBUG nova.compute.manager [req-a6bc87dd-af1e-444f-9521-94db031714db req-a93c522c-c527-4f80-9b9e-81c4ed6701f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] No waiting events found dispatching network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.778 226833 WARNING nova.compute.manager [req-a6bc87dd-af1e-444f-9521-94db031714db req-a93c522c-c527-4f80-9b9e-81c4ed6701f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received unexpected event network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f for instance with vm_state active and task_state migrating.
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.778 226833 DEBUG nova.compute.manager [req-a6bc87dd-af1e-444f-9521-94db031714db req-a93c522c-c527-4f80-9b9e-81c4ed6701f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.779 226833 DEBUG oslo_concurrency.lockutils [req-a6bc87dd-af1e-444f-9521-94db031714db req-a93c522c-c527-4f80-9b9e-81c4ed6701f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "71265e55-f168-471c-80bc-80b49177a637-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.779 226833 DEBUG oslo_concurrency.lockutils [req-a6bc87dd-af1e-444f-9521-94db031714db req-a93c522c-c527-4f80-9b9e-81c4ed6701f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.780 226833 DEBUG oslo_concurrency.lockutils [req-a6bc87dd-af1e-444f-9521-94db031714db req-a93c522c-c527-4f80-9b9e-81c4ed6701f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.780 226833 DEBUG nova.compute.manager [req-a6bc87dd-af1e-444f-9521-94db031714db req-a93c522c-c527-4f80-9b9e-81c4ed6701f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] No waiting events found dispatching network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:36:40 compute-2 nova_compute[226829]: 2026-01-31 07:36:40.780 226833 WARNING nova.compute.manager [req-a6bc87dd-af1e-444f-9521-94db031714db req-a93c522c-c527-4f80-9b9e-81c4ed6701f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received unexpected event network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f for instance with vm_state active and task_state migrating.
Jan 31 07:36:41 compute-2 nova_compute[226829]: 2026-01-31 07:36:41.295 226833 DEBUG nova.network.neutron [None req-0420c642-141d-4f6d-ba34-c6c63e61721a 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:36:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:41.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:41 compute-2 nova_compute[226829]: 2026-01-31 07:36:41.654 226833 DEBUG nova.network.neutron [None req-0420c642-141d-4f6d-ba34-c6c63e61721a 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:36:41 compute-2 nova_compute[226829]: 2026-01-31 07:36:41.675 226833 DEBUG oslo_concurrency.lockutils [None req-0420c642-141d-4f6d-ba34-c6c63e61721a 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Releasing lock "refresh_cache-871711de-f993-4592-83a2-a36c4039786d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:36:41 compute-2 nova_compute[226829]: 2026-01-31 07:36:41.676 226833 DEBUG nova.compute.manager [None req-0420c642-141d-4f6d-ba34-c6c63e61721a 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:36:41 compute-2 ceph-mon[77282]: pgmap v1085: 305 pgs: 305 active+clean; 544 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.2 MiB/s wr, 283 op/s
Jan 31 07:36:41 compute-2 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000011.scope: Deactivated successfully.
Jan 31 07:36:41 compute-2 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000011.scope: Consumed 15.698s CPU time.
Jan 31 07:36:41 compute-2 systemd-machined[195142]: Machine qemu-8-instance-00000011 terminated.
Jan 31 07:36:41 compute-2 nova_compute[226829]: 2026-01-31 07:36:41.928 226833 DEBUG nova.virt.libvirt.driver [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 31 07:36:42 compute-2 nova_compute[226829]: 2026-01-31 07:36:42.101 226833 INFO nova.virt.libvirt.driver [-] [instance: 871711de-f993-4592-83a2-a36c4039786d] Instance destroyed successfully.
Jan 31 07:36:42 compute-2 nova_compute[226829]: 2026-01-31 07:36:42.102 226833 DEBUG nova.objects.instance [None req-0420c642-141d-4f6d-ba34-c6c63e61721a 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'resources' on Instance uuid 871711de-f993-4592-83a2-a36c4039786d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:42.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:43 compute-2 nova_compute[226829]: 2026-01-31 07:36:43.338 226833 INFO nova.virt.libvirt.driver [None req-0420c642-141d-4f6d-ba34-c6c63e61721a 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Deleting instance files /var/lib/nova/instances/871711de-f993-4592-83a2-a36c4039786d_del
Jan 31 07:36:43 compute-2 nova_compute[226829]: 2026-01-31 07:36:43.338 226833 INFO nova.virt.libvirt.driver [None req-0420c642-141d-4f6d-ba34-c6c63e61721a 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Deletion of /var/lib/nova/instances/871711de-f993-4592-83a2-a36c4039786d_del complete
Jan 31 07:36:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:36:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:43.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:36:43 compute-2 nova_compute[226829]: 2026-01-31 07:36:43.581 226833 INFO nova.compute.manager [None req-0420c642-141d-4f6d-ba34-c6c63e61721a 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Took 1.90 seconds to destroy the instance on the hypervisor.
Jan 31 07:36:43 compute-2 nova_compute[226829]: 2026-01-31 07:36:43.582 226833 DEBUG oslo.service.loopingcall [None req-0420c642-141d-4f6d-ba34-c6c63e61721a 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:36:43 compute-2 nova_compute[226829]: 2026-01-31 07:36:43.583 226833 DEBUG nova.compute.manager [-] [instance: 871711de-f993-4592-83a2-a36c4039786d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:36:43 compute-2 nova_compute[226829]: 2026-01-31 07:36:43.583 226833 DEBUG nova.network.neutron [-] [instance: 871711de-f993-4592-83a2-a36c4039786d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:36:43 compute-2 nova_compute[226829]: 2026-01-31 07:36:43.680 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:43 compute-2 nova_compute[226829]: 2026-01-31 07:36:43.707 226833 DEBUG nova.network.neutron [-] [instance: 871711de-f993-4592-83a2-a36c4039786d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:36:43 compute-2 nova_compute[226829]: 2026-01-31 07:36:43.833 226833 DEBUG nova.network.neutron [-] [instance: 871711de-f993-4592-83a2-a36c4039786d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:36:43 compute-2 ceph-mon[77282]: pgmap v1086: 305 pgs: 305 active+clean; 540 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 5.2 MiB/s wr, 331 op/s
Jan 31 07:36:43 compute-2 nova_compute[226829]: 2026-01-31 07:36:43.954 226833 INFO nova.compute.manager [-] [instance: 871711de-f993-4592-83a2-a36c4039786d] Took 0.37 seconds to deallocate network for instance.
Jan 31 07:36:44 compute-2 nova_compute[226829]: 2026-01-31 07:36:44.043 226833 DEBUG oslo_concurrency.lockutils [None req-0420c642-141d-4f6d-ba34-c6c63e61721a 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:44 compute-2 nova_compute[226829]: 2026-01-31 07:36:44.043 226833 DEBUG oslo_concurrency.lockutils [None req-0420c642-141d-4f6d-ba34-c6c63e61721a 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:44.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:44 compute-2 nova_compute[226829]: 2026-01-31 07:36:44.138 226833 DEBUG oslo_concurrency.processutils [None req-0420c642-141d-4f6d-ba34-c6c63e61721a 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:36:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2658974630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:44 compute-2 nova_compute[226829]: 2026-01-31 07:36:44.563 226833 DEBUG oslo_concurrency.processutils [None req-0420c642-141d-4f6d-ba34-c6c63e61721a 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:36:44 compute-2 nova_compute[226829]: 2026-01-31 07:36:44.573 226833 DEBUG nova.compute.provider_tree [None req-0420c642-141d-4f6d-ba34-c6c63e61721a 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:36:44 compute-2 nova_compute[226829]: 2026-01-31 07:36:44.612 226833 DEBUG nova.scheduler.client.report [None req-0420c642-141d-4f6d-ba34-c6c63e61721a 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:36:44 compute-2 nova_compute[226829]: 2026-01-31 07:36:44.643 226833 DEBUG oslo_concurrency.lockutils [None req-0420c642-141d-4f6d-ba34-c6c63e61721a 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:44 compute-2 nova_compute[226829]: 2026-01-31 07:36:44.678 226833 INFO nova.scheduler.client.report [None req-0420c642-141d-4f6d-ba34-c6c63e61721a 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Deleted allocations for instance 871711de-f993-4592-83a2-a36c4039786d
Jan 31 07:36:44 compute-2 nova_compute[226829]: 2026-01-31 07:36:44.763 226833 DEBUG oslo_concurrency.lockutils [None req-0420c642-141d-4f6d-ba34-c6c63e61721a 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "871711de-f993-4592-83a2-a36c4039786d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:44 compute-2 nova_compute[226829]: 2026-01-31 07:36:44.766 226833 DEBUG oslo_concurrency.lockutils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Acquiring lock "71265e55-f168-471c-80bc-80b49177a637-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:44 compute-2 nova_compute[226829]: 2026-01-31 07:36:44.766 226833 DEBUG oslo_concurrency.lockutils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:44 compute-2 nova_compute[226829]: 2026-01-31 07:36:44.767 226833 DEBUG oslo_concurrency.lockutils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:44 compute-2 nova_compute[226829]: 2026-01-31 07:36:44.818 226833 DEBUG oslo_concurrency.lockutils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:44 compute-2 nova_compute[226829]: 2026-01-31 07:36:44.819 226833 DEBUG oslo_concurrency.lockutils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:44 compute-2 nova_compute[226829]: 2026-01-31 07:36:44.819 226833 DEBUG oslo_concurrency.lockutils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:44 compute-2 nova_compute[226829]: 2026-01-31 07:36:44.820 226833 DEBUG nova.compute.resource_tracker [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:36:44 compute-2 nova_compute[226829]: 2026-01-31 07:36:44.820 226833 DEBUG oslo_concurrency.processutils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:44 compute-2 nova_compute[226829]: 2026-01-31 07:36:44.840 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2658974630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1356891215' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:36:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1356891215' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:36:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:36:45 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3469651264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:45.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:45 compute-2 nova_compute[226829]: 2026-01-31 07:36:45.568 226833 DEBUG oslo_concurrency.processutils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.748s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:45 compute-2 nova_compute[226829]: 2026-01-31 07:36:45.642 226833 DEBUG nova.virt.libvirt.driver [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:36:45 compute-2 nova_compute[226829]: 2026-01-31 07:36:45.643 226833 DEBUG nova.virt.libvirt.driver [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:36:45 compute-2 nova_compute[226829]: 2026-01-31 07:36:45.855 226833 WARNING nova.virt.libvirt.driver [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:36:45 compute-2 nova_compute[226829]: 2026-01-31 07:36:45.857 226833 DEBUG nova.compute.resource_tracker [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4591MB free_disk=20.764923095703125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:36:45 compute-2 nova_compute[226829]: 2026-01-31 07:36:45.857 226833 DEBUG oslo_concurrency.lockutils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:45 compute-2 nova_compute[226829]: 2026-01-31 07:36:45.858 226833 DEBUG oslo_concurrency.lockutils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:45 compute-2 nova_compute[226829]: 2026-01-31 07:36:45.939 226833 DEBUG nova.compute.resource_tracker [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Migration for instance 71265e55-f168-471c-80bc-80b49177a637 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Jan 31 07:36:45 compute-2 nova_compute[226829]: 2026-01-31 07:36:45.964 226833 DEBUG nova.compute.resource_tracker [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Jan 31 07:36:45 compute-2 nova_compute[226829]: 2026-01-31 07:36:45.982 226833 DEBUG nova.compute.resource_tracker [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Instance c8abb380-680c-41b2-8156-a5e2b5b96f42 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:36:45 compute-2 nova_compute[226829]: 2026-01-31 07:36:45.983 226833 DEBUG nova.compute.resource_tracker [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Migration 73112d7f-1d39-4e75-8fd6-e97fb4741daf is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Jan 31 07:36:45 compute-2 nova_compute[226829]: 2026-01-31 07:36:45.983 226833 DEBUG nova.compute.resource_tracker [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:36:45 compute-2 nova_compute[226829]: 2026-01-31 07:36:45.984 226833 DEBUG nova.compute.resource_tracker [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:36:46 compute-2 nova_compute[226829]: 2026-01-31 07:36:46.050 226833 DEBUG oslo_concurrency.processutils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:36:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:46.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:36:46 compute-2 ceph-mon[77282]: pgmap v1087: 305 pgs: 305 active+clean; 521 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.3 MiB/s wr, 292 op/s
Jan 31 07:36:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3469651264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3146995631' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:46 compute-2 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000018.scope: Deactivated successfully.
Jan 31 07:36:46 compute-2 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000018.scope: Consumed 14.612s CPU time.
Jan 31 07:36:46 compute-2 systemd-machined[195142]: Machine qemu-11-instance-00000018 terminated.
Jan 31 07:36:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:36:46 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/567785149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:46 compute-2 nova_compute[226829]: 2026-01-31 07:36:46.512 226833 DEBUG oslo_concurrency.processutils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:46 compute-2 nova_compute[226829]: 2026-01-31 07:36:46.520 226833 DEBUG nova.compute.provider_tree [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:36:46 compute-2 nova_compute[226829]: 2026-01-31 07:36:46.538 226833 DEBUG nova.scheduler.client.report [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:36:46 compute-2 nova_compute[226829]: 2026-01-31 07:36:46.566 226833 DEBUG nova.compute.resource_tracker [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:36:46 compute-2 nova_compute[226829]: 2026-01-31 07:36:46.567 226833 DEBUG oslo_concurrency.lockutils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:46 compute-2 nova_compute[226829]: 2026-01-31 07:36:46.576 226833 INFO nova.compute.manager [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Jan 31 07:36:46 compute-2 nova_compute[226829]: 2026-01-31 07:36:46.667 226833 INFO nova.scheduler.client.report [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Deleted allocation for migration 73112d7f-1d39-4e75-8fd6-e97fb4741daf
Jan 31 07:36:46 compute-2 nova_compute[226829]: 2026-01-31 07:36:46.667 226833 DEBUG nova.virt.libvirt.driver [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Jan 31 07:36:46 compute-2 nova_compute[226829]: 2026-01-31 07:36:46.825 226833 DEBUG nova.virt.libvirt.driver [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Creating tmpfile /var/lib/nova/instances/tmp0wegpadh to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Jan 31 07:36:46 compute-2 nova_compute[226829]: 2026-01-31 07:36:46.953 226833 INFO nova.virt.libvirt.driver [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Instance shutdown successfully after 15 seconds.
Jan 31 07:36:46 compute-2 nova_compute[226829]: 2026-01-31 07:36:46.956 226833 DEBUG nova.compute.manager [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp0wegpadh',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Jan 31 07:36:46 compute-2 nova_compute[226829]: 2026-01-31 07:36:46.968 226833 INFO nova.virt.libvirt.driver [-] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Instance destroyed successfully.
Jan 31 07:36:46 compute-2 nova_compute[226829]: 2026-01-31 07:36:46.980 226833 INFO nova.virt.libvirt.driver [-] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Instance destroyed successfully.
Jan 31 07:36:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1771992844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/567785149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:36:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:47.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:36:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:36:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:48.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:36:48 compute-2 ceph-mon[77282]: pgmap v1088: 305 pgs: 305 active+clean; 516 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.8 MiB/s wr, 274 op/s
Jan 31 07:36:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4120442856' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2233105137' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:48 compute-2 nova_compute[226829]: 2026-01-31 07:36:48.550 226833 INFO nova.virt.libvirt.driver [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Deleting instance files /var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42_del
Jan 31 07:36:48 compute-2 nova_compute[226829]: 2026-01-31 07:36:48.551 226833 INFO nova.virt.libvirt.driver [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Deletion of /var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42_del complete
Jan 31 07:36:48 compute-2 sudo[237816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:36:48 compute-2 sudo[237816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:48 compute-2 sudo[237816]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:48 compute-2 nova_compute[226829]: 2026-01-31 07:36:48.699 226833 DEBUG nova.virt.libvirt.driver [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:36:48 compute-2 nova_compute[226829]: 2026-01-31 07:36:48.700 226833 INFO nova.virt.libvirt.driver [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Creating image(s)
Jan 31 07:36:48 compute-2 nova_compute[226829]: 2026-01-31 07:36:48.749 226833 DEBUG nova.storage.rbd_utils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] rbd image c8abb380-680c-41b2-8156-a5e2b5b96f42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:48 compute-2 sudo[237841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:36:48 compute-2 sudo[237841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:48 compute-2 nova_compute[226829]: 2026-01-31 07:36:48.784 226833 DEBUG nova.storage.rbd_utils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] rbd image c8abb380-680c-41b2-8156-a5e2b5b96f42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:48 compute-2 sudo[237841]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:48 compute-2 nova_compute[226829]: 2026-01-31 07:36:48.826 226833 DEBUG nova.storage.rbd_utils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] rbd image c8abb380-680c-41b2-8156-a5e2b5b96f42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:48 compute-2 nova_compute[226829]: 2026-01-31 07:36:48.831 226833 DEBUG oslo_concurrency.processutils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:48 compute-2 nova_compute[226829]: 2026-01-31 07:36:48.848 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:48 compute-2 nova_compute[226829]: 2026-01-31 07:36:48.905 226833 DEBUG oslo_concurrency.processutils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:48 compute-2 nova_compute[226829]: 2026-01-31 07:36:48.906 226833 DEBUG oslo_concurrency.lockutils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:48 compute-2 nova_compute[226829]: 2026-01-31 07:36:48.907 226833 DEBUG oslo_concurrency.lockutils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:48 compute-2 nova_compute[226829]: 2026-01-31 07:36:48.907 226833 DEBUG oslo_concurrency.lockutils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:48 compute-2 nova_compute[226829]: 2026-01-31 07:36:48.966 226833 DEBUG nova.storage.rbd_utils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] rbd image c8abb380-680c-41b2-8156-a5e2b5b96f42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:48 compute-2 nova_compute[226829]: 2026-01-31 07:36:48.971 226833 DEBUG oslo_concurrency.processutils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 c8abb380-680c-41b2-8156-a5e2b5b96f42_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.290 226833 DEBUG oslo_concurrency.processutils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 c8abb380-680c-41b2-8156-a5e2b5b96f42_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.383 226833 DEBUG nova.storage.rbd_utils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] resizing rbd image c8abb380-680c-41b2-8156-a5e2b5b96f42_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:36:49 compute-2 ceph-mon[77282]: pgmap v1089: 305 pgs: 305 active+clean; 479 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.2 MiB/s wr, 276 op/s
Jan 31 07:36:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3986344607' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/569068459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.495 226833 DEBUG nova.virt.libvirt.driver [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.495 226833 DEBUG nova.virt.libvirt.driver [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Ensure instance console log exists: /var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.496 226833 DEBUG oslo_concurrency.lockutils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.496 226833 DEBUG oslo_concurrency.lockutils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.497 226833 DEBUG oslo_concurrency.lockutils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.499 226833 DEBUG nova.virt.libvirt.driver [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.503 226833 WARNING nova.virt.libvirt.driver [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.508 226833 DEBUG nova.virt.libvirt.host [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.509 226833 DEBUG nova.virt.libvirt.host [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.512 226833 DEBUG nova.virt.libvirt.host [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.513 226833 DEBUG nova.virt.libvirt.host [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.514 226833 DEBUG nova.virt.libvirt.driver [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.514 226833 DEBUG nova.virt.hardware [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.515 226833 DEBUG nova.virt.hardware [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.515 226833 DEBUG nova.virt.hardware [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.516 226833 DEBUG nova.virt.hardware [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.516 226833 DEBUG nova.virt.hardware [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.516 226833 DEBUG nova.virt.hardware [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.516 226833 DEBUG nova.virt.hardware [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.517 226833 DEBUG nova.virt.hardware [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.517 226833 DEBUG nova.virt.hardware [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.517 226833 DEBUG nova.virt.hardware [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.518 226833 DEBUG nova.virt.hardware [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.518 226833 DEBUG nova.objects.instance [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Lazy-loading 'vcpu_model' on Instance uuid c8abb380-680c-41b2-8156-a5e2b5b96f42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.520 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:36:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:49.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.567 226833 DEBUG oslo_concurrency.processutils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.765 226833 DEBUG nova.compute.manager [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp0wegpadh',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='71265e55-f168-471c-80bc-80b49177a637',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.817 226833 DEBUG oslo_concurrency.lockutils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Acquiring lock "refresh_cache-71265e55-f168-471c-80bc-80b49177a637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.817 226833 DEBUG oslo_concurrency.lockutils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Acquired lock "refresh_cache-71265e55-f168-471c-80bc-80b49177a637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.818 226833 DEBUG nova.network.neutron [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:36:49 compute-2 nova_compute[226829]: 2026-01-31 07:36:49.843 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:36:49 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/926484333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:50 compute-2 nova_compute[226829]: 2026-01-31 07:36:50.008 226833 DEBUG oslo_concurrency.processutils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:50 compute-2 nova_compute[226829]: 2026-01-31 07:36:50.043 226833 DEBUG nova.storage.rbd_utils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] rbd image c8abb380-680c-41b2-8156-a5e2b5b96f42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:50 compute-2 nova_compute[226829]: 2026-01-31 07:36:50.048 226833 DEBUG oslo_concurrency.processutils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:50.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/926484333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:36:50 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3894641582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:50 compute-2 nova_compute[226829]: 2026-01-31 07:36:50.471 226833 DEBUG oslo_concurrency.processutils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:50 compute-2 nova_compute[226829]: 2026-01-31 07:36:50.475 226833 DEBUG nova.virt.libvirt.driver [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:36:50 compute-2 nova_compute[226829]:   <uuid>c8abb380-680c-41b2-8156-a5e2b5b96f42</uuid>
Jan 31 07:36:50 compute-2 nova_compute[226829]:   <name>instance-00000018</name>
Jan 31 07:36:50 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:36:50 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:36:50 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:36:50 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:       <nova:name>tempest-ServersAdmin275Test-server-287964715</nova:name>
Jan 31 07:36:50 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:36:49</nova:creationTime>
Jan 31 07:36:50 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:36:50 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:36:50 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:36:50 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:36:50 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:36:50 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:36:50 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:36:50 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:36:50 compute-2 nova_compute[226829]:         <nova:user uuid="b9769a806c6f4874ab462acac1b4bfcc">tempest-ServersAdmin275Test-732223889-project-member</nova:user>
Jan 31 07:36:50 compute-2 nova_compute[226829]:         <nova:project uuid="74b1ee3fc09b40608d0892724c5ddba4">tempest-ServersAdmin275Test-732223889</nova:project>
Jan 31 07:36:50 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:36:50 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:       <nova:ports/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:36:50 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:36:50 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <system>
Jan 31 07:36:50 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:36:50 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:36:50 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:36:50 compute-2 nova_compute[226829]:       <entry name="serial">c8abb380-680c-41b2-8156-a5e2b5b96f42</entry>
Jan 31 07:36:50 compute-2 nova_compute[226829]:       <entry name="uuid">c8abb380-680c-41b2-8156-a5e2b5b96f42</entry>
Jan 31 07:36:50 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     </system>
Jan 31 07:36:50 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:36:50 compute-2 nova_compute[226829]:   <os>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:   </os>
Jan 31 07:36:50 compute-2 nova_compute[226829]:   <features>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:   </features>
Jan 31 07:36:50 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:36:50 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:36:50 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:36:50 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/c8abb380-680c-41b2-8156-a5e2b5b96f42_disk">
Jan 31 07:36:50 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:       </source>
Jan 31 07:36:50 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:36:50 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:36:50 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:36:50 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/c8abb380-680c-41b2-8156-a5e2b5b96f42_disk.config">
Jan 31 07:36:50 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:       </source>
Jan 31 07:36:50 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:36:50 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:36:50 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:36:50 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42/console.log" append="off"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <video>
Jan 31 07:36:50 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     </video>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:36:50 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:36:50 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:36:50 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:36:50 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:36:50 compute-2 nova_compute[226829]: </domain>
Jan 31 07:36:50 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:36:50 compute-2 nova_compute[226829]: 2026-01-31 07:36:50.556 226833 DEBUG nova.virt.libvirt.driver [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:36:50 compute-2 nova_compute[226829]: 2026-01-31 07:36:50.556 226833 DEBUG nova.virt.libvirt.driver [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:36:50 compute-2 nova_compute[226829]: 2026-01-31 07:36:50.557 226833 INFO nova.virt.libvirt.driver [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Using config drive
Jan 31 07:36:50 compute-2 nova_compute[226829]: 2026-01-31 07:36:50.596 226833 DEBUG nova.storage.rbd_utils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] rbd image c8abb380-680c-41b2-8156-a5e2b5b96f42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:50 compute-2 nova_compute[226829]: 2026-01-31 07:36:50.636 226833 DEBUG nova.objects.instance [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Lazy-loading 'ec2_ids' on Instance uuid c8abb380-680c-41b2-8156-a5e2b5b96f42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:50 compute-2 nova_compute[226829]: 2026-01-31 07:36:50.679 226833 DEBUG nova.objects.instance [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Lazy-loading 'keypairs' on Instance uuid c8abb380-680c-41b2-8156-a5e2b5b96f42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:51 compute-2 nova_compute[226829]: 2026-01-31 07:36:51.332 226833 INFO nova.virt.libvirt.driver [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Creating config drive at /var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42/disk.config
Jan 31 07:36:51 compute-2 nova_compute[226829]: 2026-01-31 07:36:51.338 226833 DEBUG oslo_concurrency.processutils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp61fdds73 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3894641582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:51 compute-2 ceph-mon[77282]: pgmap v1090: 305 pgs: 305 active+clean; 463 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 8.1 MiB/s wr, 351 op/s
Jan 31 07:36:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/339851413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:51 compute-2 nova_compute[226829]: 2026-01-31 07:36:51.467 226833 DEBUG oslo_concurrency.processutils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp61fdds73" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:51 compute-2 nova_compute[226829]: 2026-01-31 07:36:51.507 226833 DEBUG nova.storage.rbd_utils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] rbd image c8abb380-680c-41b2-8156-a5e2b5b96f42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:51 compute-2 nova_compute[226829]: 2026-01-31 07:36:51.512 226833 DEBUG oslo_concurrency.processutils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42/disk.config c8abb380-680c-41b2-8156-a5e2b5b96f42_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:51.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:51 compute-2 nova_compute[226829]: 2026-01-31 07:36:51.786 226833 DEBUG oslo_concurrency.processutils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42/disk.config c8abb380-680c-41b2-8156-a5e2b5b96f42_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.274s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:51 compute-2 nova_compute[226829]: 2026-01-31 07:36:51.787 226833 INFO nova.virt.libvirt.driver [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Deleting local config drive /var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42/disk.config because it was imported into RBD.
Jan 31 07:36:51 compute-2 systemd-machined[195142]: New machine qemu-12-instance-00000018.
Jan 31 07:36:51 compute-2 systemd[1]: Started Virtual Machine qemu-12-instance-00000018.
Jan 31 07:36:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:52.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.364 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Removed pending event for c8abb380-680c-41b2-8156-a5e2b5b96f42 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.367 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845012.3638783, c8abb380-680c-41b2-8156-a5e2b5b96f42 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.367 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] VM Resumed (Lifecycle Event)
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.373 226833 DEBUG nova.compute.manager [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.374 226833 DEBUG nova.virt.libvirt.driver [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.382 226833 INFO nova.virt.libvirt.driver [-] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Instance spawned successfully.
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.384 226833 DEBUG nova.virt.libvirt.driver [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.391 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.397 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.411 226833 DEBUG nova.virt.libvirt.driver [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.412 226833 DEBUG nova.virt.libvirt.driver [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.413 226833 DEBUG nova.virt.libvirt.driver [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.414 226833 DEBUG nova.virt.libvirt.driver [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.415 226833 DEBUG nova.virt.libvirt.driver [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.416 226833 DEBUG nova.virt.libvirt.driver [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.423 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.423 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845012.3656743, c8abb380-680c-41b2-8156-a5e2b5b96f42 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.424 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] VM Started (Lifecycle Event)
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.462 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.467 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.502 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.517 226833 DEBUG nova.compute.manager [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.591 226833 DEBUG oslo_concurrency.lockutils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.592 226833 DEBUG oslo_concurrency.lockutils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.592 226833 DEBUG nova.objects.instance [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.656 226833 DEBUG oslo_concurrency.lockutils [None req-e056437b-ce73-4c13-b550-ae1eee8aa0d7 0e29105f1ef2472eaf65de6288d98848 e55550048fad4c1394daa948984ef12b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.868 226833 DEBUG nova.network.neutron [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Updating instance_info_cache with network_info: [{"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.897 226833 DEBUG oslo_concurrency.lockutils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Releasing lock "refresh_cache-71265e55-f168-471c-80bc-80b49177a637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.900 226833 DEBUG os_brick.utils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.903 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.918 236868 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.918 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[6d50e81b-2982-497c-97e0-8969e48ff5fc]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.920 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.929 236868 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.929 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[fea86515-bf14-4689-a80e-e965fae4d810]: (4, ('InitiatorName=iqn.1994-05.com.redhat:70a4e945afb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.931 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.940 236868 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.940 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[a49c74ef-3b0f-4461-a6d6-fbf1598a6652]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.942 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[8c9f32f0-3230-41a0-b987-33b372c16106]: (4, 'd14f084b-ec77-4fba-801f-103494d34b3a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.943 226833 DEBUG oslo_concurrency.processutils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.963 226833 DEBUG oslo_concurrency.processutils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] CMD "nvme version" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.968 226833 DEBUG os_brick.initiator.connectors.lightos [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.969 226833 DEBUG os_brick.initiator.connectors.lightos [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.970 226833 DEBUG os_brick.initiator.connectors.lightos [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 07:36:52 compute-2 nova_compute[226829]: 2026-01-31 07:36:52.972 226833 DEBUG os_brick.utils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] <== get_connector_properties: return (71ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:70a4e945afb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'd14f084b-ec77-4fba-801f-103494d34b3a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 07:36:53 compute-2 nova_compute[226829]: 2026-01-31 07:36:53.001 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844997.9976575, 71265e55-f168-471c-80bc-80b49177a637 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:36:53 compute-2 nova_compute[226829]: 2026-01-31 07:36:53.002 226833 INFO nova.compute.manager [-] [instance: 71265e55-f168-471c-80bc-80b49177a637] VM Stopped (Lifecycle Event)
Jan 31 07:36:53 compute-2 nova_compute[226829]: 2026-01-31 07:36:53.023 226833 DEBUG nova.compute.manager [None req-7b12bc8d-155f-40d4-b6ed-445e187b7245 - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:53.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:53 compute-2 ceph-mon[77282]: pgmap v1091: 305 pgs: 305 active+clean; 466 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 7.9 MiB/s wr, 355 op/s
Jan 31 07:36:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3600492494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:53 compute-2 nova_compute[226829]: 2026-01-31 07:36:53.725 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:53 compute-2 nova_compute[226829]: 2026-01-31 07:36:53.850 226833 DEBUG oslo_concurrency.lockutils [None req-20b468c7-328e-4fec-aec4-3b336d7c5189 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Acquiring lock "c8abb380-680c-41b2-8156-a5e2b5b96f42" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:53 compute-2 nova_compute[226829]: 2026-01-31 07:36:53.851 226833 DEBUG oslo_concurrency.lockutils [None req-20b468c7-328e-4fec-aec4-3b336d7c5189 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lock "c8abb380-680c-41b2-8156-a5e2b5b96f42" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:53 compute-2 nova_compute[226829]: 2026-01-31 07:36:53.851 226833 DEBUG oslo_concurrency.lockutils [None req-20b468c7-328e-4fec-aec4-3b336d7c5189 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Acquiring lock "c8abb380-680c-41b2-8156-a5e2b5b96f42-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:53 compute-2 nova_compute[226829]: 2026-01-31 07:36:53.851 226833 DEBUG oslo_concurrency.lockutils [None req-20b468c7-328e-4fec-aec4-3b336d7c5189 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lock "c8abb380-680c-41b2-8156-a5e2b5b96f42-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:53 compute-2 nova_compute[226829]: 2026-01-31 07:36:53.852 226833 DEBUG oslo_concurrency.lockutils [None req-20b468c7-328e-4fec-aec4-3b336d7c5189 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lock "c8abb380-680c-41b2-8156-a5e2b5b96f42-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:53 compute-2 nova_compute[226829]: 2026-01-31 07:36:53.853 226833 INFO nova.compute.manager [None req-20b468c7-328e-4fec-aec4-3b336d7c5189 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Terminating instance
Jan 31 07:36:53 compute-2 nova_compute[226829]: 2026-01-31 07:36:53.854 226833 DEBUG oslo_concurrency.lockutils [None req-20b468c7-328e-4fec-aec4-3b336d7c5189 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Acquiring lock "refresh_cache-c8abb380-680c-41b2-8156-a5e2b5b96f42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:36:53 compute-2 nova_compute[226829]: 2026-01-31 07:36:53.854 226833 DEBUG oslo_concurrency.lockutils [None req-20b468c7-328e-4fec-aec4-3b336d7c5189 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Acquired lock "refresh_cache-c8abb380-680c-41b2-8156-a5e2b5b96f42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:36:53 compute-2 nova_compute[226829]: 2026-01-31 07:36:53.855 226833 DEBUG nova.network.neutron [None req-20b468c7-328e-4fec-aec4-3b336d7c5189 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:36:54 compute-2 nova_compute[226829]: 2026-01-31 07:36:54.032 226833 DEBUG nova.network.neutron [None req-20b468c7-328e-4fec-aec4-3b336d7c5189 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:36:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:36:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:54.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:36:54 compute-2 nova_compute[226829]: 2026-01-31 07:36:54.280 226833 DEBUG nova.virt.libvirt.driver [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp0wegpadh',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='71265e55-f168-471c-80bc-80b49177a637',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={21ac0cb5-f889-4135-9b17-5debc0b9246e='813cfeab-a1f5-44af-8928-fed3ed24d043'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Jan 31 07:36:54 compute-2 nova_compute[226829]: 2026-01-31 07:36:54.281 226833 DEBUG nova.virt.libvirt.driver [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Creating instance directory: /var/lib/nova/instances/71265e55-f168-471c-80bc-80b49177a637 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Jan 31 07:36:54 compute-2 nova_compute[226829]: 2026-01-31 07:36:54.282 226833 DEBUG nova.virt.libvirt.driver [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Ensure instance console log exists: /var/lib/nova/instances/71265e55-f168-471c-80bc-80b49177a637/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:36:54 compute-2 nova_compute[226829]: 2026-01-31 07:36:54.284 226833 DEBUG nova.virt.libvirt.driver [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901
Jan 31 07:36:54 compute-2 nova_compute[226829]: 2026-01-31 07:36:54.288 226833 DEBUG nova.virt.libvirt.driver [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Jan 31 07:36:54 compute-2 nova_compute[226829]: 2026-01-31 07:36:54.290 226833 DEBUG nova.virt.libvirt.vif [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T07:36:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-596053430',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-596053430',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:36:20Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='29d136be5e384689a95acd607131dfd0',ramdisk_id='',reservation_id='r-w7qr5y3u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-142
1195096',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1421195096-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:36:44Z,user_data=None,user_id='ea44c45fe7df4f36b5c722fbfc214f2e',uuid=71265e55-f168-471c-80bc-80b49177a637,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:36:54 compute-2 nova_compute[226829]: 2026-01-31 07:36:54.291 226833 DEBUG nova.network.os_vif_util [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Converting VIF {"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:36:54 compute-2 nova_compute[226829]: 2026-01-31 07:36:54.293 226833 DEBUG nova.network.os_vif_util [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:eb:c1:cc,bridge_name='br-int',has_traffic_filtering=True,id=84bfd4cb-8188-4fde-bca2-fdb0a732119f,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84bfd4cb-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:36:54 compute-2 nova_compute[226829]: 2026-01-31 07:36:54.294 226833 DEBUG os_vif [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:c1:cc,bridge_name='br-int',has_traffic_filtering=True,id=84bfd4cb-8188-4fde-bca2-fdb0a732119f,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84bfd4cb-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:36:54 compute-2 nova_compute[226829]: 2026-01-31 07:36:54.296 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:54 compute-2 nova_compute[226829]: 2026-01-31 07:36:54.297 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:36:54 compute-2 nova_compute[226829]: 2026-01-31 07:36:54.298 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:36:54 compute-2 nova_compute[226829]: 2026-01-31 07:36:54.308 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:54 compute-2 nova_compute[226829]: 2026-01-31 07:36:54.309 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84bfd4cb-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:36:54 compute-2 nova_compute[226829]: 2026-01-31 07:36:54.309 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap84bfd4cb-81, col_values=(('external_ids', {'iface-id': '84bfd4cb-8188-4fde-bca2-fdb0a732119f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:c1:cc', 'vm-uuid': '71265e55-f168-471c-80bc-80b49177a637'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:36:54 compute-2 NetworkManager[48999]: <info>  [1769845014.3128] manager: (tap84bfd4cb-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Jan 31 07:36:54 compute-2 nova_compute[226829]: 2026-01-31 07:36:54.315 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:36:54 compute-2 nova_compute[226829]: 2026-01-31 07:36:54.319 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:54 compute-2 nova_compute[226829]: 2026-01-31 07:36:54.320 226833 INFO os_vif [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:c1:cc,bridge_name='br-int',has_traffic_filtering=True,id=84bfd4cb-8188-4fde-bca2-fdb0a732119f,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84bfd4cb-81')
Jan 31 07:36:54 compute-2 nova_compute[226829]: 2026-01-31 07:36:54.323 226833 DEBUG nova.virt.libvirt.driver [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Jan 31 07:36:54 compute-2 nova_compute[226829]: 2026-01-31 07:36:54.323 226833 DEBUG nova.compute.manager [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp0wegpadh',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='71265e55-f168-471c-80bc-80b49177a637',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={21ac0cb5-f889-4135-9b17-5debc0b9246e='813cfeab-a1f5-44af-8928-fed3ed24d043'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Jan 31 07:36:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:36:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/483490000' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/654419686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:54 compute-2 nova_compute[226829]: 2026-01-31 07:36:54.648 226833 DEBUG nova.network.neutron [None req-20b468c7-328e-4fec-aec4-3b336d7c5189 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:36:54 compute-2 nova_compute[226829]: 2026-01-31 07:36:54.663 226833 DEBUG oslo_concurrency.lockutils [None req-20b468c7-328e-4fec-aec4-3b336d7c5189 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Releasing lock "refresh_cache-c8abb380-680c-41b2-8156-a5e2b5b96f42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:36:54 compute-2 nova_compute[226829]: 2026-01-31 07:36:54.664 226833 DEBUG nova.compute.manager [None req-20b468c7-328e-4fec-aec4-3b336d7c5189 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:36:54 compute-2 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000018.scope: Deactivated successfully.
Jan 31 07:36:54 compute-2 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000018.scope: Consumed 2.957s CPU time.
Jan 31 07:36:54 compute-2 systemd-machined[195142]: Machine qemu-12-instance-00000018 terminated.
Jan 31 07:36:54 compute-2 nova_compute[226829]: 2026-01-31 07:36:54.885 226833 INFO nova.virt.libvirt.driver [-] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Instance destroyed successfully.
Jan 31 07:36:54 compute-2 nova_compute[226829]: 2026-01-31 07:36:54.886 226833 DEBUG nova.objects.instance [None req-20b468c7-328e-4fec-aec4-3b336d7c5189 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lazy-loading 'resources' on Instance uuid c8abb380-680c-41b2-8156-a5e2b5b96f42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:55 compute-2 sudo[238243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:36:55 compute-2 sudo[238243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:55 compute-2 sudo[238243]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:55 compute-2 sudo[238269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:36:55 compute-2 sudo[238269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:55 compute-2 sudo[238269]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:55 compute-2 sudo[238295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:36:55 compute-2 sudo[238295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:55 compute-2 sudo[238295]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:55 compute-2 sudo[238320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:36:55 compute-2 sudo[238320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:55 compute-2 nova_compute[226829]: 2026-01-31 07:36:55.384 226833 INFO nova.virt.libvirt.driver [None req-20b468c7-328e-4fec-aec4-3b336d7c5189 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Deleting instance files /var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42_del
Jan 31 07:36:55 compute-2 nova_compute[226829]: 2026-01-31 07:36:55.386 226833 INFO nova.virt.libvirt.driver [None req-20b468c7-328e-4fec-aec4-3b336d7c5189 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Deletion of /var/lib/nova/instances/c8abb380-680c-41b2-8156-a5e2b5b96f42_del complete
Jan 31 07:36:55 compute-2 nova_compute[226829]: 2026-01-31 07:36:55.460 226833 INFO nova.compute.manager [None req-20b468c7-328e-4fec-aec4-3b336d7c5189 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Took 0.80 seconds to destroy the instance on the hypervisor.
Jan 31 07:36:55 compute-2 nova_compute[226829]: 2026-01-31 07:36:55.460 226833 DEBUG oslo.service.loopingcall [None req-20b468c7-328e-4fec-aec4-3b336d7c5189 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:36:55 compute-2 nova_compute[226829]: 2026-01-31 07:36:55.461 226833 DEBUG nova.compute.manager [-] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:36:55 compute-2 nova_compute[226829]: 2026-01-31 07:36:55.461 226833 DEBUG nova.network.neutron [-] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:36:55 compute-2 nova_compute[226829]: 2026-01-31 07:36:55.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:36:55 compute-2 nova_compute[226829]: 2026-01-31 07:36:55.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:36:55 compute-2 nova_compute[226829]: 2026-01-31 07:36:55.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:36:55 compute-2 nova_compute[226829]: 2026-01-31 07:36:55.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:36:55 compute-2 nova_compute[226829]: 2026-01-31 07:36:55.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:36:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:36:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:55.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:36:55 compute-2 ceph-mon[77282]: pgmap v1092: 305 pgs: 305 active+clean; 454 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 8.0 MiB/s wr, 439 op/s
Jan 31 07:36:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1667545813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/360608350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:55 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:55 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:55 compute-2 nova_compute[226829]: 2026-01-31 07:36:55.667 226833 DEBUG nova.network.neutron [-] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:36:55 compute-2 nova_compute[226829]: 2026-01-31 07:36:55.680 226833 DEBUG nova.network.neutron [-] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:36:55 compute-2 nova_compute[226829]: 2026-01-31 07:36:55.697 226833 INFO nova.compute.manager [-] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Took 0.24 seconds to deallocate network for instance.
Jan 31 07:36:55 compute-2 sudo[238320]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:55 compute-2 nova_compute[226829]: 2026-01-31 07:36:55.773 226833 DEBUG oslo_concurrency.lockutils [None req-20b468c7-328e-4fec-aec4-3b336d7c5189 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:55 compute-2 nova_compute[226829]: 2026-01-31 07:36:55.773 226833 DEBUG oslo_concurrency.lockutils [None req-20b468c7-328e-4fec-aec4-3b336d7c5189 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:55 compute-2 nova_compute[226829]: 2026-01-31 07:36:55.882 226833 DEBUG oslo_concurrency.processutils [None req-20b468c7-328e-4fec-aec4-3b336d7c5189 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:55 compute-2 sudo[238378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:36:55 compute-2 sudo[238378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:55 compute-2 sudo[238378]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:55 compute-2 sudo[238404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:36:55 compute-2 sudo[238404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:55 compute-2 sudo[238404]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:56 compute-2 sudo[238429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:36:56 compute-2 sudo[238429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:56 compute-2 sudo[238429]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:56 compute-2 nova_compute[226829]: 2026-01-31 07:36:56.022 226833 DEBUG nova.network.neutron [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Port 84bfd4cb-8188-4fde-bca2-fdb0a732119f updated with migration profile {'os_vif_delegation': True, 'migrating_to': 'compute-2.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Jan 31 07:36:56 compute-2 sudo[238455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 31 07:36:56 compute-2 sudo[238455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:36:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:56.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:36:56 compute-2 sudo[238455]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:36:56 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1402203378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:56 compute-2 nova_compute[226829]: 2026-01-31 07:36:56.341 226833 DEBUG oslo_concurrency.processutils [None req-20b468c7-328e-4fec-aec4-3b336d7c5189 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:56 compute-2 nova_compute[226829]: 2026-01-31 07:36:56.345 226833 DEBUG nova.compute.provider_tree [None req-20b468c7-328e-4fec-aec4-3b336d7c5189 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:36:56 compute-2 nova_compute[226829]: 2026-01-31 07:36:56.360 226833 DEBUG nova.scheduler.client.report [None req-20b468c7-328e-4fec-aec4-3b336d7c5189 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:36:56 compute-2 nova_compute[226829]: 2026-01-31 07:36:56.380 226833 DEBUG oslo_concurrency.lockutils [None req-20b468c7-328e-4fec-aec4-3b336d7c5189 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:56 compute-2 nova_compute[226829]: 2026-01-31 07:36:56.395 226833 DEBUG nova.compute.manager [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp0wegpadh',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='71265e55-f168-471c-80bc-80b49177a637',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={21ac0cb5-f889-4135-9b17-5debc0b9246e='813cfeab-a1f5-44af-8928-fed3ed24d043'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Jan 31 07:36:56 compute-2 nova_compute[226829]: 2026-01-31 07:36:56.404 226833 INFO nova.scheduler.client.report [None req-20b468c7-328e-4fec-aec4-3b336d7c5189 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Deleted allocations for instance c8abb380-680c-41b2-8156-a5e2b5b96f42
Jan 31 07:36:56 compute-2 nova_compute[226829]: 2026-01-31 07:36:56.480 226833 DEBUG oslo_concurrency.lockutils [None req-20b468c7-328e-4fec-aec4-3b336d7c5189 b9769a806c6f4874ab462acac1b4bfcc 74b1ee3fc09b40608d0892724c5ddba4 - - default default] Lock "c8abb380-680c-41b2-8156-a5e2b5b96f42" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:56 compute-2 nova_compute[226829]: 2026-01-31 07:36:56.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:36:56 compute-2 nova_compute[226829]: 2026-01-31 07:36:56.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:36:56 compute-2 nova_compute[226829]: 2026-01-31 07:36:56.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:36:56 compute-2 nova_compute[226829]: 2026-01-31 07:36:56.510 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:36:56 compute-2 systemd[1]: Starting libvirt proxy daemon...
Jan 31 07:36:56 compute-2 nova_compute[226829]: 2026-01-31 07:36:56.510 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:36:56 compute-2 nova_compute[226829]: 2026-01-31 07:36:56.532 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:56 compute-2 nova_compute[226829]: 2026-01-31 07:36:56.533 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:56 compute-2 nova_compute[226829]: 2026-01-31 07:36:56.533 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:56 compute-2 nova_compute[226829]: 2026-01-31 07:36:56.534 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:36:56 compute-2 nova_compute[226829]: 2026-01-31 07:36:56.534 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:56 compute-2 systemd[1]: Started libvirt proxy daemon.
Jan 31 07:36:56 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2607464021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:56 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1402203378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:56 compute-2 kernel: tap84bfd4cb-81: entered promiscuous mode
Jan 31 07:36:56 compute-2 ovn_controller[133834]: 2026-01-31T07:36:56Z|00068|binding|INFO|Claiming lport 84bfd4cb-8188-4fde-bca2-fdb0a732119f for this additional chassis.
Jan 31 07:36:56 compute-2 ovn_controller[133834]: 2026-01-31T07:36:56Z|00069|binding|INFO|84bfd4cb-8188-4fde-bca2-fdb0a732119f: Claiming fa:16:3e:eb:c1:cc 10.100.0.8
Jan 31 07:36:56 compute-2 NetworkManager[48999]: <info>  [1769845016.6722] manager: (tap84bfd4cb-81): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Jan 31 07:36:56 compute-2 nova_compute[226829]: 2026-01-31 07:36:56.673 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:56 compute-2 nova_compute[226829]: 2026-01-31 07:36:56.678 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:56 compute-2 nova_compute[226829]: 2026-01-31 07:36:56.680 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:56 compute-2 ovn_controller[133834]: 2026-01-31T07:36:56Z|00070|binding|INFO|Setting lport 84bfd4cb-8188-4fde-bca2-fdb0a732119f ovn-installed in OVS
Jan 31 07:36:56 compute-2 nova_compute[226829]: 2026-01-31 07:36:56.682 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:56 compute-2 nova_compute[226829]: 2026-01-31 07:36:56.686 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:56 compute-2 systemd-machined[195142]: New machine qemu-13-instance-0000001a.
Jan 31 07:36:56 compute-2 systemd[1]: Started Virtual Machine qemu-13-instance-0000001a.
Jan 31 07:36:56 compute-2 systemd-udevd[238573]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:36:56 compute-2 NetworkManager[48999]: <info>  [1769845016.7341] device (tap84bfd4cb-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:36:56 compute-2 NetworkManager[48999]: <info>  [1769845016.7350] device (tap84bfd4cb-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:36:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:36:56 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1944305336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:57 compute-2 nova_compute[226829]: 2026-01-31 07:36:56.998 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:57 compute-2 nova_compute[226829]: 2026-01-31 07:36:57.098 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845002.0972416, 871711de-f993-4592-83a2-a36c4039786d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:36:57 compute-2 nova_compute[226829]: 2026-01-31 07:36:57.099 226833 INFO nova.compute.manager [-] [instance: 871711de-f993-4592-83a2-a36c4039786d] VM Stopped (Lifecycle Event)
Jan 31 07:36:57 compute-2 nova_compute[226829]: 2026-01-31 07:36:57.115 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:36:57 compute-2 nova_compute[226829]: 2026-01-31 07:36:57.115 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:36:57 compute-2 nova_compute[226829]: 2026-01-31 07:36:57.132 226833 DEBUG nova.compute.manager [None req-a9685bd1-b110-44da-88bd-a1f29bcb6e55 - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:57 compute-2 nova_compute[226829]: 2026-01-31 07:36:57.291 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:36:57 compute-2 nova_compute[226829]: 2026-01-31 07:36:57.291 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4755MB free_disk=20.820262908935547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:36:57 compute-2 nova_compute[226829]: 2026-01-31 07:36:57.292 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:57 compute-2 nova_compute[226829]: 2026-01-31 07:36:57.292 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:57 compute-2 nova_compute[226829]: 2026-01-31 07:36:57.331 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Migration for instance 71265e55-f168-471c-80bc-80b49177a637 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Jan 31 07:36:57 compute-2 nova_compute[226829]: 2026-01-31 07:36:57.353 226833 INFO nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] Updating resource usage from migration 4a1b0c4a-1bfe-448c-bdfa-5cee17568893
Jan 31 07:36:57 compute-2 nova_compute[226829]: 2026-01-31 07:36:57.354 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] Starting to track incoming migration 4a1b0c4a-1bfe-448c-bdfa-5cee17568893 with flavor fea01737-128b-41fa-a695-aaaa6e96e4b2 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 31 07:36:57 compute-2 nova_compute[226829]: 2026-01-31 07:36:57.393 226833 WARNING nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 71265e55-f168-471c-80bc-80b49177a637 has been moved to another host compute-1.ctlplane.example.com(compute-1.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}.
Jan 31 07:36:57 compute-2 nova_compute[226829]: 2026-01-31 07:36:57.393 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:36:57 compute-2 nova_compute[226829]: 2026-01-31 07:36:57.393 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:36:57 compute-2 nova_compute[226829]: 2026-01-31 07:36:57.468 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:57.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:57 compute-2 nova_compute[226829]: 2026-01-31 07:36:57.622 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845017.622423, 71265e55-f168-471c-80bc-80b49177a637 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:36:57 compute-2 nova_compute[226829]: 2026-01-31 07:36:57.623 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] VM Started (Lifecycle Event)
Jan 31 07:36:57 compute-2 nova_compute[226829]: 2026-01-31 07:36:57.674 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:57 compute-2 ceph-mon[77282]: pgmap v1093: 305 pgs: 305 active+clean; 409 MiB data, 544 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 7.3 MiB/s wr, 456 op/s
Jan 31 07:36:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1944305336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:36:57 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3246726691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:57 compute-2 nova_compute[226829]: 2026-01-31 07:36:57.916 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:57 compute-2 nova_compute[226829]: 2026-01-31 07:36:57.920 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:36:58 compute-2 nova_compute[226829]: 2026-01-31 07:36:58.035 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:36:58 compute-2 nova_compute[226829]: 2026-01-31 07:36:58.068 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845018.0684657, 71265e55-f168-471c-80bc-80b49177a637 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:36:58 compute-2 nova_compute[226829]: 2026-01-31 07:36:58.069 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] VM Resumed (Lifecycle Event)
Jan 31 07:36:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:58.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:58 compute-2 nova_compute[226829]: 2026-01-31 07:36:58.168 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:58 compute-2 nova_compute[226829]: 2026-01-31 07:36:58.172 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:36:58 compute-2 nova_compute[226829]: 2026-01-31 07:36:58.228 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:36:58 compute-2 nova_compute[226829]: 2026-01-31 07:36:58.228 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.936s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:58 compute-2 podman[238649]: 2026-01-31 07:36:58.248746056 +0000 UTC m=+0.120781250 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127)
Jan 31 07:36:58 compute-2 nova_compute[226829]: 2026-01-31 07:36:58.266 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-2.ctlplane.example.com
Jan 31 07:36:58 compute-2 nova_compute[226829]: 2026-01-31 07:36:58.728 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3246726691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:59 compute-2 nova_compute[226829]: 2026-01-31 07:36:59.079 226833 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "a8d72f45-583b-4dea-959f-508e0e8273d7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:59 compute-2 nova_compute[226829]: 2026-01-31 07:36:59.081 226833 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "a8d72f45-583b-4dea-959f-508e0e8273d7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:59 compute-2 nova_compute[226829]: 2026-01-31 07:36:59.105 226833 DEBUG nova.compute.manager [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:36:59 compute-2 nova_compute[226829]: 2026-01-31 07:36:59.176 226833 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:59 compute-2 nova_compute[226829]: 2026-01-31 07:36:59.177 226833 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:59 compute-2 nova_compute[226829]: 2026-01-31 07:36:59.186 226833 DEBUG nova.virt.hardware [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:36:59 compute-2 nova_compute[226829]: 2026-01-31 07:36:59.186 226833 INFO nova.compute.claims [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:36:59 compute-2 nova_compute[226829]: 2026-01-31 07:36:59.281 226833 DEBUG nova.scheduler.client.report [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Refreshing inventories for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 07:36:59 compute-2 nova_compute[226829]: 2026-01-31 07:36:59.300 226833 DEBUG nova.scheduler.client.report [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Updating ProviderTree inventory for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 07:36:59 compute-2 nova_compute[226829]: 2026-01-31 07:36:59.301 226833 DEBUG nova.compute.provider_tree [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Updating inventory in ProviderTree for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 07:36:59 compute-2 nova_compute[226829]: 2026-01-31 07:36:59.324 226833 DEBUG nova.scheduler.client.report [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Refreshing aggregate associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 07:36:59 compute-2 nova_compute[226829]: 2026-01-31 07:36:59.328 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:59 compute-2 nova_compute[226829]: 2026-01-31 07:36:59.358 226833 DEBUG nova.scheduler.client.report [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Refreshing trait associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, traits: COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 07:36:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:36:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 5067 writes, 26K keys, 5067 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s
                                           Cumulative WAL: 5067 writes, 5067 syncs, 1.00 writes per sync, written: 0.05 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1589 writes, 7638 keys, 1589 commit groups, 1.0 writes per commit group, ingest: 15.95 MB, 0.03 MB/s
                                           Interval WAL: 1589 writes, 1589 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    113.5      0.27              0.08        14    0.019       0      0       0.0       0.0
                                             L6      1/0    8.58 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5    158.6    131.6      0.82              0.33        13    0.063     61K   6832       0.0       0.0
                                            Sum      1/0    8.58 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.5    119.2    127.1      1.09              0.41        27    0.040     61K   6832       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.1    134.9    138.0      0.36              0.17        10    0.036     25K   2532       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    158.6    131.6      0.82              0.33        13    0.063     61K   6832       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    114.2      0.27              0.08        13    0.021       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.030, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.13 GB write, 0.08 MB/s write, 0.13 GB read, 0.07 MB/s read, 1.1 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559ef3d1f1f0#2 capacity: 304.00 MB usage: 12.19 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000153 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(704,11.69 MB,3.84423%) FilterBlock(27,177.36 KB,0.0569745%) IndexBlock(27,333.45 KB,0.107118%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 07:36:59 compute-2 nova_compute[226829]: 2026-01-31 07:36:59.428 226833 DEBUG oslo_concurrency.processutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:36:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:36:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:59.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:59 compute-2 ovn_controller[133834]: 2026-01-31T07:36:59Z|00071|binding|INFO|Claiming lport 84bfd4cb-8188-4fde-bca2-fdb0a732119f for this chassis.
Jan 31 07:36:59 compute-2 ovn_controller[133834]: 2026-01-31T07:36:59Z|00072|binding|INFO|84bfd4cb-8188-4fde-bca2-fdb0a732119f: Claiming fa:16:3e:eb:c1:cc 10.100.0.8
Jan 31 07:36:59 compute-2 ovn_controller[133834]: 2026-01-31T07:36:59Z|00073|binding|INFO|Setting lport 84bfd4cb-8188-4fde-bca2-fdb0a732119f up in Southbound
Jan 31 07:36:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:59.728 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:c1:cc 10.100.0.8'], port_security=['fa:16:3e:eb:c1:cc 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '71265e55-f168-471c-80bc-80b49177a637', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29d136be5e384689a95acd607131dfd0', 'neutron:revision_number': '21', 'neutron:security_group_ids': 'd466f767-252b-4335-8dfa-0f3f94d2209b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c6f42f93-f94d-4170-883d-d45cddf5fdad, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=84bfd4cb-8188-4fde-bca2-fdb0a732119f) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:36:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:59.732 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 84bfd4cb-8188-4fde-bca2-fdb0a732119f in datapath e4d1862b-2abc-4d60-bc48-19a5318038f4 bound to our chassis
Jan 31 07:36:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:59.735 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e4d1862b-2abc-4d60-bc48-19a5318038f4
Jan 31 07:36:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:59.748 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b1ec25f8-0099-4fcd-a3f4-97b0f0cfb4df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:59.749 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape4d1862b-21 in ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:36:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:59.751 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape4d1862b-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:36:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:59.751 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ee6c9969-9613-4338-9226-7d74c0e00c90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:59.752 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b4ea10c0-b7b6-4cc0-9051-b7398cbefc5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:59.772 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[cb1bf148-ff87-4126-a5e0-9c0013cd21f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:59.782 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[fa48d251-383f-4fad-a13b-0a8a42065d31]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:59.812 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[940f3966-7963-44cd-86fb-19e7fc8749a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:59.817 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[fd22c135-ee70-46a8-ad55-82ec6face390]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:59 compute-2 NetworkManager[48999]: <info>  [1769845019.8306] manager: (tape4d1862b-20): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Jan 31 07:36:59 compute-2 ceph-mon[77282]: pgmap v1094: 305 pgs: 305 active+clean; 388 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 7.5 MiB/s wr, 476 op/s
Jan 31 07:36:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:36:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:36:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:36:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:36:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:36:59 compute-2 systemd-udevd[238704]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:36:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:59.878 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[4b6a808f-e530-4164-92fd-8aabd1770faf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:59.882 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[6ad2a263-6d74-4ece-8425-6965f69c822b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:59 compute-2 NetworkManager[48999]: <info>  [1769845019.9047] device (tape4d1862b-20): carrier: link connected
Jan 31 07:36:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:59.908 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[4aa1e1cd-89e7-4faa-a685-837437f4257b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:36:59 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1256924332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:59.926 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f77c1570-ea3d-4a97-934a-e772ef15f0ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4d1862b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:38:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527237, 'reachable_time': 22529, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238724, 'error': None, 'target': 'ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:59 compute-2 nova_compute[226829]: 2026-01-31 07:36:59.929 226833 DEBUG oslo_concurrency.processutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:59 compute-2 nova_compute[226829]: 2026-01-31 07:36:59.936 226833 DEBUG nova.compute.provider_tree [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:36:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:59.940 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f3aaf7fd-3cbc-431f-a739-51f72bc88fce]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe73:3856'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527237, 'tstamp': 527237}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238726, 'error': None, 'target': 'ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:59.953 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[19fcfe3e-3e19-415d-8083-f931bcdaafdf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4d1862b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:38:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527237, 'reachable_time': 22529, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 238727, 'error': None, 'target': 'ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:59 compute-2 nova_compute[226829]: 2026-01-31 07:36:59.961 226833 DEBUG nova.scheduler.client.report [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:36:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:36:59.978 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ec66e6fb-ac81-418e-b6f3-7cfb4a998d28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:36:59 compute-2 nova_compute[226829]: 2026-01-31 07:36:59.993 226833 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.024 226833 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "d9753a5d-0d10-4222-ad74-25f55ff7027f" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.025 226833 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "d9753a5d-0d10-4222-ad74-25f55ff7027f" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:00.028 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[340d67b3-c229-4e8f-9be3-56d47ae21b32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:00.030 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4d1862b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:00.031 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:00.031 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4d1862b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.033 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:00 compute-2 NetworkManager[48999]: <info>  [1769845020.0345] manager: (tape4d1862b-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Jan 31 07:37:00 compute-2 kernel: tape4d1862b-20: entered promiscuous mode
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:00.037 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape4d1862b-20, col_values=(('external_ids', {'iface-id': '632f26c5-40a9-4337-84da-ea4b4bbdf89c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:37:00 compute-2 ovn_controller[133834]: 2026-01-31T07:37:00Z|00074|binding|INFO|Releasing lport 632f26c5-40a9-4337-84da-ea4b4bbdf89c from this chassis (sb_readonly=0)
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.040 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:00.040 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e4d1862b-2abc-4d60-bc48-19a5318038f4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e4d1862b-2abc-4d60-bc48-19a5318038f4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:00.041 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cddf5f81-913f-488b-a2ab-0bb15eec28a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:00.042 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]: global
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-e4d1862b-2abc-4d60-bc48-19a5318038f4
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/e4d1862b-2abc-4d60-bc48-19a5318038f4.pid.haproxy
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID e4d1862b-2abc-4d60-bc48-19a5318038f4
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:37:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:00.043 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'env', 'PROCESS_TAG=haproxy-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e4d1862b-2abc-4d60-bc48-19a5318038f4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.045 226833 DEBUG nova.compute.manager [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] No node specified, defaulting to compute-2.ctlplane.example.com _get_nodename /usr/lib/python3.9/site-packages/nova/compute/manager.py:10505
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.047 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.089 226833 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "d9753a5d-0d10-4222-ad74-25f55ff7027f" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.090 226833 DEBUG nova.compute.manager [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:37:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:00.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.162 226833 DEBUG nova.compute.manager [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.163 226833 DEBUG nova.network.neutron [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.188 226833 INFO nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.208 226833 INFO nova.compute.manager [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Post operation of migration started
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.214 226833 DEBUG nova.compute.manager [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.342 226833 DEBUG nova.compute.manager [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.344 226833 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.345 226833 INFO nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Creating image(s)
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.374 226833 DEBUG nova.storage.rbd_utils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image a8d72f45-583b-4dea-959f-508e0e8273d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.418 226833 DEBUG nova.storage.rbd_utils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image a8d72f45-583b-4dea-959f-508e0e8273d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.451 226833 DEBUG nova.storage.rbd_utils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image a8d72f45-583b-4dea-959f-508e0e8273d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.455 226833 DEBUG oslo_concurrency.processutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:00 compute-2 podman[238775]: 2026-01-31 07:37:00.469821214 +0000 UTC m=+0.093201427 container create 5641bb5d717c680fe42fe5146beeb6c9a23d38ad5868acd30c863be9218dc39d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:37:00 compute-2 systemd[1]: Started libpod-conmon-5641bb5d717c680fe42fe5146beeb6c9a23d38ad5868acd30c863be9218dc39d.scope.
Jan 31 07:37:00 compute-2 podman[238775]: 2026-01-31 07:37:00.424216829 +0000 UTC m=+0.047597102 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.526 226833 DEBUG oslo_concurrency.processutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.528 226833 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.529 226833 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.529 226833 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:00 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:37:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96f67d0e1c4fc2f4ea3bfd30881c5eeb25dc2cd852b609b7300d1b1961d67675/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:37:00 compute-2 podman[238775]: 2026-01-31 07:37:00.557368026 +0000 UTC m=+0.180748269 container init 5641bb5d717c680fe42fe5146beeb6c9a23d38ad5868acd30c863be9218dc39d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 07:37:00 compute-2 podman[238775]: 2026-01-31 07:37:00.562593499 +0000 UTC m=+0.185973702 container start 5641bb5d717c680fe42fe5146beeb6c9a23d38ad5868acd30c863be9218dc39d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.571 226833 DEBUG nova.storage.rbd_utils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image a8d72f45-583b-4dea-959f-508e0e8273d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.576 226833 DEBUG oslo_concurrency.processutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 a8d72f45-583b-4dea-959f-508e0e8273d7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:00 compute-2 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[238830]: [NOTICE]   (238851) : New worker (238857) forked
Jan 31 07:37:00 compute-2 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[238830]: [NOTICE]   (238851) : Loading success.
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.822 226833 DEBUG nova.network.neutron [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.823 226833 DEBUG nova.compute.manager [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.899 226833 DEBUG oslo_concurrency.lockutils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Acquiring lock "refresh_cache-71265e55-f168-471c-80bc-80b49177a637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.900 226833 DEBUG oslo_concurrency.lockutils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Acquired lock "refresh_cache-71265e55-f168-471c-80bc-80b49177a637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.900 226833 DEBUG nova.network.neutron [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:37:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1256924332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/351743600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:00 compute-2 nova_compute[226829]: 2026-01-31 07:37:00.918 226833 DEBUG oslo_concurrency.processutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 a8d72f45-583b-4dea-959f-508e0e8273d7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.031 226833 DEBUG nova.storage.rbd_utils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] resizing rbd image a8d72f45-583b-4dea-959f-508e0e8273d7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.194 226833 DEBUG nova.objects.instance [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lazy-loading 'migration_context' on Instance uuid a8d72f45-583b-4dea-959f-508e0e8273d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.206 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.213 226833 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.213 226833 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Ensure instance console log exists: /var/lib/nova/instances/a8d72f45-583b-4dea-959f-508e0e8273d7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.213 226833 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.214 226833 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.214 226833 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.216 226833 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.222 226833 WARNING nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.228 226833 DEBUG nova.virt.libvirt.host [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.228 226833 DEBUG nova.virt.libvirt.host [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.231 226833 DEBUG nova.virt.libvirt.host [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.232 226833 DEBUG nova.virt.libvirt.host [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.233 226833 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.234 226833 DEBUG nova.virt.hardware [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.234 226833 DEBUG nova.virt.hardware [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.234 226833 DEBUG nova.virt.hardware [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.235 226833 DEBUG nova.virt.hardware [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.235 226833 DEBUG nova.virt.hardware [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.236 226833 DEBUG nova.virt.hardware [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.236 226833 DEBUG nova.virt.hardware [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.236 226833 DEBUG nova.virt.hardware [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.237 226833 DEBUG nova.virt.hardware [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.237 226833 DEBUG nova.virt.hardware [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.237 226833 DEBUG nova.virt.hardware [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.241 226833 DEBUG oslo_concurrency.processutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.005000135s ======
Jan 31 07:37:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:01.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000135s
Jan 31 07:37:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:37:01 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4239516949' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.711 226833 DEBUG oslo_concurrency.processutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.742 226833 DEBUG nova.storage.rbd_utils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image a8d72f45-583b-4dea-959f-508e0e8273d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:37:01 compute-2 nova_compute[226829]: 2026-01-31 07:37:01.748 226833 DEBUG oslo_concurrency.processutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:01 compute-2 ceph-mon[77282]: pgmap v1095: 305 pgs: 305 active+clean; 372 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 7.1 MiB/s wr, 472 op/s
Jan 31 07:37:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3581421991' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:37:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4239516949' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:37:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:02.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:37:02 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3213225161' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:37:02 compute-2 nova_compute[226829]: 2026-01-31 07:37:02.186 226833 DEBUG oslo_concurrency.processutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:02 compute-2 nova_compute[226829]: 2026-01-31 07:37:02.188 226833 DEBUG nova.objects.instance [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lazy-loading 'pci_devices' on Instance uuid a8d72f45-583b-4dea-959f-508e0e8273d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:37:02 compute-2 nova_compute[226829]: 2026-01-31 07:37:02.203 226833 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:37:02 compute-2 nova_compute[226829]:   <uuid>a8d72f45-583b-4dea-959f-508e0e8273d7</uuid>
Jan 31 07:37:02 compute-2 nova_compute[226829]:   <name>instance-0000001e</name>
Jan 31 07:37:02 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:37:02 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:37:02 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:37:02 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:       <nova:name>tempest-ServersOnMultiNodesTest-server-459196044-1</nova:name>
Jan 31 07:37:02 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:37:01</nova:creationTime>
Jan 31 07:37:02 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:37:02 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:37:02 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:37:02 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:37:02 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:37:02 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:37:02 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:37:02 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:37:02 compute-2 nova_compute[226829]:         <nova:user uuid="741e8133b32342e083b6dd5f0e316abf">tempest-ServersOnMultiNodesTest-1893229170-project-member</nova:user>
Jan 31 07:37:02 compute-2 nova_compute[226829]:         <nova:project uuid="b2c9f3f1d94b49ae835ac14aae70bd73">tempest-ServersOnMultiNodesTest-1893229170</nova:project>
Jan 31 07:37:02 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:37:02 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:       <nova:ports/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:37:02 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:37:02 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <system>
Jan 31 07:37:02 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:37:02 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:37:02 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:37:02 compute-2 nova_compute[226829]:       <entry name="serial">a8d72f45-583b-4dea-959f-508e0e8273d7</entry>
Jan 31 07:37:02 compute-2 nova_compute[226829]:       <entry name="uuid">a8d72f45-583b-4dea-959f-508e0e8273d7</entry>
Jan 31 07:37:02 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     </system>
Jan 31 07:37:02 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:37:02 compute-2 nova_compute[226829]:   <os>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:   </os>
Jan 31 07:37:02 compute-2 nova_compute[226829]:   <features>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:   </features>
Jan 31 07:37:02 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:37:02 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:37:02 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:37:02 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/a8d72f45-583b-4dea-959f-508e0e8273d7_disk">
Jan 31 07:37:02 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:       </source>
Jan 31 07:37:02 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:37:02 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:37:02 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:37:02 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/a8d72f45-583b-4dea-959f-508e0e8273d7_disk.config">
Jan 31 07:37:02 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:       </source>
Jan 31 07:37:02 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:37:02 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:37:02 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:37:02 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/a8d72f45-583b-4dea-959f-508e0e8273d7/console.log" append="off"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <video>
Jan 31 07:37:02 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     </video>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:37:02 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:37:02 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:37:02 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:37:02 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:37:02 compute-2 nova_compute[226829]: </domain>
Jan 31 07:37:02 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:37:02 compute-2 nova_compute[226829]: 2026-01-31 07:37:02.253 226833 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:37:02 compute-2 nova_compute[226829]: 2026-01-31 07:37:02.254 226833 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:37:02 compute-2 nova_compute[226829]: 2026-01-31 07:37:02.254 226833 INFO nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Using config drive
Jan 31 07:37:02 compute-2 nova_compute[226829]: 2026-01-31 07:37:02.281 226833 DEBUG nova.storage.rbd_utils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image a8d72f45-583b-4dea-959f-508e0e8273d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:37:02 compute-2 nova_compute[226829]: 2026-01-31 07:37:02.445 226833 INFO nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Creating config drive at /var/lib/nova/instances/a8d72f45-583b-4dea-959f-508e0e8273d7/disk.config
Jan 31 07:37:02 compute-2 nova_compute[226829]: 2026-01-31 07:37:02.449 226833 DEBUG oslo_concurrency.processutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a8d72f45-583b-4dea-959f-508e0e8273d7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpyjvuzxc0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:02 compute-2 nova_compute[226829]: 2026-01-31 07:37:02.576 226833 DEBUG oslo_concurrency.processutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a8d72f45-583b-4dea-959f-508e0e8273d7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpyjvuzxc0" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:02 compute-2 nova_compute[226829]: 2026-01-31 07:37:02.611 226833 DEBUG nova.storage.rbd_utils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image a8d72f45-583b-4dea-959f-508e0e8273d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:37:02 compute-2 nova_compute[226829]: 2026-01-31 07:37:02.615 226833 DEBUG oslo_concurrency.processutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a8d72f45-583b-4dea-959f-508e0e8273d7/disk.config a8d72f45-583b-4dea-959f-508e0e8273d7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:02 compute-2 nova_compute[226829]: 2026-01-31 07:37:02.946 226833 DEBUG nova.network.neutron [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Updating instance_info_cache with network_info: [{"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:37:03 compute-2 nova_compute[226829]: 2026-01-31 07:37:03.098 226833 DEBUG oslo_concurrency.lockutils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Releasing lock "refresh_cache-71265e55-f168-471c-80bc-80b49177a637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:37:03 compute-2 nova_compute[226829]: 2026-01-31 07:37:03.119 226833 DEBUG oslo_concurrency.lockutils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:03 compute-2 nova_compute[226829]: 2026-01-31 07:37:03.119 226833 DEBUG oslo_concurrency.lockutils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:03 compute-2 nova_compute[226829]: 2026-01-31 07:37:03.120 226833 DEBUG oslo_concurrency.lockutils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:03 compute-2 nova_compute[226829]: 2026-01-31 07:37:03.126 226833 INFO nova.virt.libvirt.driver [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Jan 31 07:37:03 compute-2 virtqemud[226546]: Domain id=13 name='instance-0000001a' uuid=71265e55-f168-471c-80bc-80b49177a637 is tainted: custom-monitor
Jan 31 07:37:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2162880359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:37:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3213225161' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:37:03 compute-2 nova_compute[226829]: 2026-01-31 07:37:03.315 226833 DEBUG oslo_concurrency.processutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a8d72f45-583b-4dea-959f-508e0e8273d7/disk.config a8d72f45-583b-4dea-959f-508e0e8273d7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.700s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:03 compute-2 nova_compute[226829]: 2026-01-31 07:37:03.316 226833 INFO nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Deleting local config drive /var/lib/nova/instances/a8d72f45-583b-4dea-959f-508e0e8273d7/disk.config because it was imported into RBD.
Jan 31 07:37:03 compute-2 systemd-machined[195142]: New machine qemu-14-instance-0000001e.
Jan 31 07:37:03 compute-2 systemd[1]: Started Virtual Machine qemu-14-instance-0000001e.
Jan 31 07:37:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:03.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:03 compute-2 nova_compute[226829]: 2026-01-31 07:37:03.732 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:37:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:04.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:37:04 compute-2 nova_compute[226829]: 2026-01-31 07:37:04.133 226833 INFO nova.virt.libvirt.driver [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Jan 31 07:37:04 compute-2 nova_compute[226829]: 2026-01-31 07:37:04.138 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845024.1365476, a8d72f45-583b-4dea-959f-508e0e8273d7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:37:04 compute-2 nova_compute[226829]: 2026-01-31 07:37:04.138 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] VM Resumed (Lifecycle Event)
Jan 31 07:37:04 compute-2 nova_compute[226829]: 2026-01-31 07:37:04.141 226833 DEBUG nova.compute.manager [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:37:04 compute-2 nova_compute[226829]: 2026-01-31 07:37:04.141 226833 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:37:04 compute-2 nova_compute[226829]: 2026-01-31 07:37:04.147 226833 INFO nova.virt.libvirt.driver [-] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Instance spawned successfully.
Jan 31 07:37:04 compute-2 nova_compute[226829]: 2026-01-31 07:37:04.148 226833 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:37:04 compute-2 ceph-mon[77282]: pgmap v1096: 305 pgs: 305 active+clean; 419 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.1 MiB/s wr, 390 op/s
Jan 31 07:37:04 compute-2 nova_compute[226829]: 2026-01-31 07:37:04.330 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:04 compute-2 nova_compute[226829]: 2026-01-31 07:37:04.517 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:37:04 compute-2 nova_compute[226829]: 2026-01-31 07:37:04.523 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:37:04 compute-2 nova_compute[226829]: 2026-01-31 07:37:04.544 226833 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:37:04 compute-2 nova_compute[226829]: 2026-01-31 07:37:04.544 226833 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:37:04 compute-2 nova_compute[226829]: 2026-01-31 07:37:04.545 226833 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:37:04 compute-2 nova_compute[226829]: 2026-01-31 07:37:04.546 226833 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:37:04 compute-2 nova_compute[226829]: 2026-01-31 07:37:04.547 226833 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:37:04 compute-2 nova_compute[226829]: 2026-01-31 07:37:04.547 226833 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:37:04 compute-2 nova_compute[226829]: 2026-01-31 07:37:04.554 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:37:04 compute-2 nova_compute[226829]: 2026-01-31 07:37:04.555 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845024.1401274, a8d72f45-583b-4dea-959f-508e0e8273d7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:37:04 compute-2 nova_compute[226829]: 2026-01-31 07:37:04.555 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] VM Started (Lifecycle Event)
Jan 31 07:37:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:37:04 compute-2 nova_compute[226829]: 2026-01-31 07:37:04.600 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:37:04 compute-2 nova_compute[226829]: 2026-01-31 07:37:04.605 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:37:04 compute-2 nova_compute[226829]: 2026-01-31 07:37:04.712 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:37:04 compute-2 nova_compute[226829]: 2026-01-31 07:37:04.730 226833 INFO nova.compute.manager [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Took 4.39 seconds to spawn the instance on the hypervisor.
Jan 31 07:37:04 compute-2 nova_compute[226829]: 2026-01-31 07:37:04.731 226833 DEBUG nova.compute.manager [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:37:04 compute-2 nova_compute[226829]: 2026-01-31 07:37:04.938 226833 INFO nova.compute.manager [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Took 5.79 seconds to build instance.
Jan 31 07:37:05 compute-2 nova_compute[226829]: 2026-01-31 07:37:05.060 226833 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "a8d72f45-583b-4dea-959f-508e0e8273d7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:05 compute-2 nova_compute[226829]: 2026-01-31 07:37:05.145 226833 INFO nova.virt.libvirt.driver [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Jan 31 07:37:05 compute-2 nova_compute[226829]: 2026-01-31 07:37:05.151 226833 DEBUG nova.compute.manager [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:37:05 compute-2 nova_compute[226829]: 2026-01-31 07:37:05.290 226833 DEBUG nova.objects.instance [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 31 07:37:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:05.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:06.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:06 compute-2 sudo[239137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:37:06 compute-2 sudo[239137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:37:06 compute-2 sudo[239137]: pam_unix(sudo:session): session closed for user root
Jan 31 07:37:06 compute-2 ceph-mon[77282]: pgmap v1097: 305 pgs: 305 active+clean; 473 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 6.6 MiB/s wr, 405 op/s
Jan 31 07:37:06 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:37:06 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:37:06 compute-2 sudo[239162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:37:06 compute-2 sudo[239162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:37:06 compute-2 sudo[239162]: pam_unix(sudo:session): session closed for user root
Jan 31 07:37:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:06.844 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:06.845 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:06.846 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3541215565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:07.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:07 compute-2 ovn_controller[133834]: 2026-01-31T07:37:07Z|00075|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Jan 31 07:37:07 compute-2 nova_compute[226829]: 2026-01-31 07:37:07.961 226833 DEBUG oslo_concurrency.lockutils [None req-1975dd6e-a183-48fa-acb9-05c241b3679a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "a8d72f45-583b-4dea-959f-508e0e8273d7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:07 compute-2 nova_compute[226829]: 2026-01-31 07:37:07.961 226833 DEBUG oslo_concurrency.lockutils [None req-1975dd6e-a183-48fa-acb9-05c241b3679a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "a8d72f45-583b-4dea-959f-508e0e8273d7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:07 compute-2 nova_compute[226829]: 2026-01-31 07:37:07.961 226833 DEBUG oslo_concurrency.lockutils [None req-1975dd6e-a183-48fa-acb9-05c241b3679a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "a8d72f45-583b-4dea-959f-508e0e8273d7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:07 compute-2 nova_compute[226829]: 2026-01-31 07:37:07.962 226833 DEBUG oslo_concurrency.lockutils [None req-1975dd6e-a183-48fa-acb9-05c241b3679a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "a8d72f45-583b-4dea-959f-508e0e8273d7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:07 compute-2 nova_compute[226829]: 2026-01-31 07:37:07.962 226833 DEBUG oslo_concurrency.lockutils [None req-1975dd6e-a183-48fa-acb9-05c241b3679a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "a8d72f45-583b-4dea-959f-508e0e8273d7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:07 compute-2 nova_compute[226829]: 2026-01-31 07:37:07.963 226833 INFO nova.compute.manager [None req-1975dd6e-a183-48fa-acb9-05c241b3679a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Terminating instance
Jan 31 07:37:07 compute-2 nova_compute[226829]: 2026-01-31 07:37:07.964 226833 DEBUG oslo_concurrency.lockutils [None req-1975dd6e-a183-48fa-acb9-05c241b3679a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "refresh_cache-a8d72f45-583b-4dea-959f-508e0e8273d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:37:07 compute-2 nova_compute[226829]: 2026-01-31 07:37:07.964 226833 DEBUG oslo_concurrency.lockutils [None req-1975dd6e-a183-48fa-acb9-05c241b3679a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquired lock "refresh_cache-a8d72f45-583b-4dea-959f-508e0e8273d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:37:07 compute-2 nova_compute[226829]: 2026-01-31 07:37:07.964 226833 DEBUG nova.network.neutron [None req-1975dd6e-a183-48fa-acb9-05c241b3679a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:37:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:37:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:08.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:37:08 compute-2 nova_compute[226829]: 2026-01-31 07:37:08.149 226833 DEBUG nova.network.neutron [None req-1975dd6e-a183-48fa-acb9-05c241b3679a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:37:08 compute-2 podman[239188]: 2026-01-31 07:37:08.181412141 +0000 UTC m=+0.058216272 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 07:37:08 compute-2 ceph-mon[77282]: pgmap v1098: 305 pgs: 305 active+clean; 509 MiB data, 612 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 8.1 MiB/s wr, 349 op/s
Jan 31 07:37:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/741240861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:08 compute-2 nova_compute[226829]: 2026-01-31 07:37:08.596 226833 DEBUG nova.network.neutron [None req-1975dd6e-a183-48fa-acb9-05c241b3679a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:37:08 compute-2 nova_compute[226829]: 2026-01-31 07:37:08.615 226833 DEBUG oslo_concurrency.lockutils [None req-1975dd6e-a183-48fa-acb9-05c241b3679a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Releasing lock "refresh_cache-a8d72f45-583b-4dea-959f-508e0e8273d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:37:08 compute-2 nova_compute[226829]: 2026-01-31 07:37:08.615 226833 DEBUG nova.compute.manager [None req-1975dd6e-a183-48fa-acb9-05c241b3679a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:37:08 compute-2 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Jan 31 07:37:08 compute-2 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001e.scope: Consumed 5.417s CPU time.
Jan 31 07:37:08 compute-2 systemd-machined[195142]: Machine qemu-14-instance-0000001e terminated.
Jan 31 07:37:08 compute-2 nova_compute[226829]: 2026-01-31 07:37:08.735 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:08 compute-2 nova_compute[226829]: 2026-01-31 07:37:08.843 226833 INFO nova.virt.libvirt.driver [-] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Instance destroyed successfully.
Jan 31 07:37:08 compute-2 nova_compute[226829]: 2026-01-31 07:37:08.844 226833 DEBUG nova.objects.instance [None req-1975dd6e-a183-48fa-acb9-05c241b3679a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lazy-loading 'resources' on Instance uuid a8d72f45-583b-4dea-959f-508e0e8273d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:37:08 compute-2 sudo[239209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:37:08 compute-2 sudo[239209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:37:08 compute-2 sudo[239209]: pam_unix(sudo:session): session closed for user root
Jan 31 07:37:08 compute-2 sudo[239236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:37:08 compute-2 sudo[239236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:37:08 compute-2 sudo[239236]: pam_unix(sudo:session): session closed for user root
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.005 226833 DEBUG oslo_concurrency.lockutils [None req-6ffdcb32-c382-473f-9ea1-5413baa6e5d2 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Acquiring lock "71265e55-f168-471c-80bc-80b49177a637" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.006 226833 DEBUG oslo_concurrency.lockutils [None req-6ffdcb32-c382-473f-9ea1-5413baa6e5d2 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.007 226833 DEBUG oslo_concurrency.lockutils [None req-6ffdcb32-c382-473f-9ea1-5413baa6e5d2 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Acquiring lock "71265e55-f168-471c-80bc-80b49177a637-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.007 226833 DEBUG oslo_concurrency.lockutils [None req-6ffdcb32-c382-473f-9ea1-5413baa6e5d2 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.008 226833 DEBUG oslo_concurrency.lockutils [None req-6ffdcb32-c382-473f-9ea1-5413baa6e5d2 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.009 226833 INFO nova.compute.manager [None req-6ffdcb32-c382-473f-9ea1-5413baa6e5d2 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Terminating instance
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.011 226833 DEBUG nova.compute.manager [None req-6ffdcb32-c382-473f-9ea1-5413baa6e5d2 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:37:09 compute-2 kernel: tap84bfd4cb-81 (unregistering): left promiscuous mode
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.183 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:09 compute-2 NetworkManager[48999]: <info>  [1769845029.1865] device (tap84bfd4cb-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.195 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:09 compute-2 ovn_controller[133834]: 2026-01-31T07:37:09Z|00076|binding|INFO|Releasing lport 84bfd4cb-8188-4fde-bca2-fdb0a732119f from this chassis (sb_readonly=0)
Jan 31 07:37:09 compute-2 ovn_controller[133834]: 2026-01-31T07:37:09Z|00077|binding|INFO|Setting lport 84bfd4cb-8188-4fde-bca2-fdb0a732119f down in Southbound
Jan 31 07:37:09 compute-2 ovn_controller[133834]: 2026-01-31T07:37:09Z|00078|binding|INFO|Removing iface tap84bfd4cb-81 ovn-installed in OVS
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.199 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.211 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:09 compute-2 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Jan 31 07:37:09 compute-2 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000001a.scope: Consumed 1.763s CPU time.
Jan 31 07:37:09 compute-2 systemd-machined[195142]: Machine qemu-13-instance-0000001a terminated.
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.332 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:09.353 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:c1:cc 10.100.0.8'], port_security=['fa:16:3e:eb:c1:cc 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '71265e55-f168-471c-80bc-80b49177a637', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29d136be5e384689a95acd607131dfd0', 'neutron:revision_number': '23', 'neutron:security_group_ids': 'd466f767-252b-4335-8dfa-0f3f94d2209b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c6f42f93-f94d-4170-883d-d45cddf5fdad, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=84bfd4cb-8188-4fde-bca2-fdb0a732119f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:37:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:09.355 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 84bfd4cb-8188-4fde-bca2-fdb0a732119f in datapath e4d1862b-2abc-4d60-bc48-19a5318038f4 unbound from our chassis
Jan 31 07:37:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:09.358 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e4d1862b-2abc-4d60-bc48-19a5318038f4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:37:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:09.359 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0504af78-8f25-4d82-8335-f4776c99edf5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:09.360 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4 namespace which is not needed anymore
Jan 31 07:37:09 compute-2 kernel: tap84bfd4cb-81: entered promiscuous mode
Jan 31 07:37:09 compute-2 kernel: tap84bfd4cb-81 (unregistering): left promiscuous mode
Jan 31 07:37:09 compute-2 NetworkManager[48999]: <info>  [1769845029.4321] manager: (tap84bfd4cb-81): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Jan 31 07:37:09 compute-2 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[238830]: [NOTICE]   (238851) : haproxy version is 2.8.14-c23fe91
Jan 31 07:37:09 compute-2 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[238830]: [NOTICE]   (238851) : path to executable is /usr/sbin/haproxy
Jan 31 07:37:09 compute-2 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[238830]: [WARNING]  (238851) : Exiting Master process...
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.486 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:09 compute-2 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[238830]: [ALERT]    (238851) : Current worker (238857) exited with code 143 (Terminated)
Jan 31 07:37:09 compute-2 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[238830]: [WARNING]  (238851) : All workers exited. Exiting... (0)
Jan 31 07:37:09 compute-2 systemd[1]: libpod-5641bb5d717c680fe42fe5146beeb6c9a23d38ad5868acd30c863be9218dc39d.scope: Deactivated successfully.
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.495 226833 INFO nova.virt.libvirt.driver [None req-1975dd6e-a183-48fa-acb9-05c241b3679a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Deleting instance files /var/lib/nova/instances/a8d72f45-583b-4dea-959f-508e0e8273d7_del
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.496 226833 INFO nova.virt.libvirt.driver [None req-1975dd6e-a183-48fa-acb9-05c241b3679a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Deletion of /var/lib/nova/instances/a8d72f45-583b-4dea-959f-508e0e8273d7_del complete
Jan 31 07:37:09 compute-2 podman[239304]: 2026-01-31 07:37:09.497216737 +0000 UTC m=+0.061700137 container died 5641bb5d717c680fe42fe5146beeb6c9a23d38ad5868acd30c863be9218dc39d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.503 226833 INFO nova.virt.libvirt.driver [-] [instance: 71265e55-f168-471c-80bc-80b49177a637] Instance destroyed successfully.
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.503 226833 DEBUG nova.objects.instance [None req-6ffdcb32-c382-473f-9ea1-5413baa6e5d2 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lazy-loading 'resources' on Instance uuid 71265e55-f168-471c-80bc-80b49177a637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:37:09 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5641bb5d717c680fe42fe5146beeb6c9a23d38ad5868acd30c863be9218dc39d-userdata-shm.mount: Deactivated successfully.
Jan 31 07:37:09 compute-2 systemd[1]: var-lib-containers-storage-overlay-96f67d0e1c4fc2f4ea3bfd30881c5eeb25dc2cd852b609b7300d1b1961d67675-merged.mount: Deactivated successfully.
Jan 31 07:37:09 compute-2 podman[239304]: 2026-01-31 07:37:09.536460019 +0000 UTC m=+0.100943449 container cleanup 5641bb5d717c680fe42fe5146beeb6c9a23d38ad5868acd30c863be9218dc39d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:37:09 compute-2 systemd[1]: libpod-conmon-5641bb5d717c680fe42fe5146beeb6c9a23d38ad5868acd30c863be9218dc39d.scope: Deactivated successfully.
Jan 31 07:37:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:37:09 compute-2 podman[239337]: 2026-01-31 07:37:09.589630712 +0000 UTC m=+0.037985439 container remove 5641bb5d717c680fe42fe5146beeb6c9a23d38ad5868acd30c863be9218dc39d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 07:37:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:09.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:09.594 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[85ae44e7-490b-4f88-8b96-ee59ea537b7f]: (4, ('Sat Jan 31 07:37:09 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4 (5641bb5d717c680fe42fe5146beeb6c9a23d38ad5868acd30c863be9218dc39d)\n5641bb5d717c680fe42fe5146beeb6c9a23d38ad5868acd30c863be9218dc39d\nSat Jan 31 07:37:09 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4 (5641bb5d717c680fe42fe5146beeb6c9a23d38ad5868acd30c863be9218dc39d)\n5641bb5d717c680fe42fe5146beeb6c9a23d38ad5868acd30c863be9218dc39d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:09.596 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[74b2f450-1667-4918-8f60-adae16dc51ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:09.597 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4d1862b-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.600 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:09 compute-2 kernel: tape4d1862b-20: left promiscuous mode
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.619 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:09.624 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[252486dd-2fd2-443f-952b-02226b443613]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:09.636 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6e2e6b59-2de5-45b7-8e56-e31c38bcb177]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:09.637 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e3efd8-8afa-44b6-9ec3-8540d30d849c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:09.654 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[07f9278c-6797-4c1c-9552-b8cce12b4bc4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527228, 'reachable_time': 32166, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239359, 'error': None, 'target': 'ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:09.658 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:37:09 compute-2 systemd[1]: run-netns-ovnmeta\x2de4d1862b\x2d2abc\x2d4d60\x2dbc48\x2d19a5318038f4.mount: Deactivated successfully.
Jan 31 07:37:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:09.659 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[ac1cacb7-75a0-4caf-a4b9-f19ef4fb1db6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.881 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845014.8802903, c8abb380-680c-41b2-8156-a5e2b5b96f42 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.882 226833 INFO nova.compute.manager [-] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] VM Stopped (Lifecycle Event)
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.925 226833 DEBUG nova.virt.libvirt.vif [None req-6ffdcb32-c382-473f-9ea1-5413baa6e5d2 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T07:36:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-596053430',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-596053430',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:36:20Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='29d136be5e384689a95acd607131dfd0',ramdisk_id='',reservation_id='r-w7qr5y3u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1421195096',owner_u
ser_name='tempest-LiveAutoBlockMigrationV225Test-1421195096-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:37:05Z,user_data=None,user_id='ea44c45fe7df4f36b5c722fbfc214f2e',uuid=71265e55-f168-471c-80bc-80b49177a637,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.926 226833 DEBUG nova.network.os_vif_util [None req-6ffdcb32-c382-473f-9ea1-5413baa6e5d2 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Converting VIF {"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.926 226833 DEBUG nova.network.os_vif_util [None req-6ffdcb32-c382-473f-9ea1-5413baa6e5d2 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:eb:c1:cc,bridge_name='br-int',has_traffic_filtering=True,id=84bfd4cb-8188-4fde-bca2-fdb0a732119f,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84bfd4cb-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.927 226833 DEBUG os_vif [None req-6ffdcb32-c382-473f-9ea1-5413baa6e5d2 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:c1:cc,bridge_name='br-int',has_traffic_filtering=True,id=84bfd4cb-8188-4fde-bca2-fdb0a732119f,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84bfd4cb-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.929 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.929 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84bfd4cb-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.931 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.932 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.935 226833 INFO os_vif [None req-6ffdcb32-c382-473f-9ea1-5413baa6e5d2 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:c1:cc,bridge_name='br-int',has_traffic_filtering=True,id=84bfd4cb-8188-4fde-bca2-fdb0a732119f,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84bfd4cb-81')
Jan 31 07:37:09 compute-2 nova_compute[226829]: 2026-01-31 07:37:09.997 226833 DEBUG nova.compute.manager [None req-8396aa75-d0fc-4208-ae17-46b99532d0f4 - - - - - -] [instance: c8abb380-680c-41b2-8156-a5e2b5b96f42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:37:10 compute-2 nova_compute[226829]: 2026-01-31 07:37:10.026 226833 INFO nova.compute.manager [None req-1975dd6e-a183-48fa-acb9-05c241b3679a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Took 1.41 seconds to destroy the instance on the hypervisor.
Jan 31 07:37:10 compute-2 nova_compute[226829]: 2026-01-31 07:37:10.028 226833 DEBUG oslo.service.loopingcall [None req-1975dd6e-a183-48fa-acb9-05c241b3679a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:37:10 compute-2 nova_compute[226829]: 2026-01-31 07:37:10.028 226833 DEBUG nova.compute.manager [-] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:37:10 compute-2 nova_compute[226829]: 2026-01-31 07:37:10.029 226833 DEBUG nova.network.neutron [-] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:37:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:37:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:10.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:37:10 compute-2 nova_compute[226829]: 2026-01-31 07:37:10.204 226833 INFO nova.virt.libvirt.driver [None req-6ffdcb32-c382-473f-9ea1-5413baa6e5d2 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Deleting instance files /var/lib/nova/instances/71265e55-f168-471c-80bc-80b49177a637_del
Jan 31 07:37:10 compute-2 nova_compute[226829]: 2026-01-31 07:37:10.204 226833 INFO nova.virt.libvirt.driver [None req-6ffdcb32-c382-473f-9ea1-5413baa6e5d2 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Deletion of /var/lib/nova/instances/71265e55-f168-471c-80bc-80b49177a637_del complete
Jan 31 07:37:10 compute-2 nova_compute[226829]: 2026-01-31 07:37:10.272 226833 INFO nova.compute.manager [None req-6ffdcb32-c382-473f-9ea1-5413baa6e5d2 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Took 1.26 seconds to destroy the instance on the hypervisor.
Jan 31 07:37:10 compute-2 nova_compute[226829]: 2026-01-31 07:37:10.272 226833 DEBUG oslo.service.loopingcall [None req-6ffdcb32-c382-473f-9ea1-5413baa6e5d2 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:37:10 compute-2 nova_compute[226829]: 2026-01-31 07:37:10.272 226833 DEBUG nova.compute.manager [-] [instance: 71265e55-f168-471c-80bc-80b49177a637] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:37:10 compute-2 nova_compute[226829]: 2026-01-31 07:37:10.273 226833 DEBUG nova.network.neutron [-] [instance: 71265e55-f168-471c-80bc-80b49177a637] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:37:10 compute-2 nova_compute[226829]: 2026-01-31 07:37:10.279 226833 DEBUG nova.compute.manager [req-1ce06a87-3c63-439b-ac20-ccc23361dc30 req-e17addfc-9aee-48a3-9e29-8da30eb2fb05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-vif-unplugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:37:10 compute-2 nova_compute[226829]: 2026-01-31 07:37:10.279 226833 DEBUG oslo_concurrency.lockutils [req-1ce06a87-3c63-439b-ac20-ccc23361dc30 req-e17addfc-9aee-48a3-9e29-8da30eb2fb05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "71265e55-f168-471c-80bc-80b49177a637-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:10 compute-2 nova_compute[226829]: 2026-01-31 07:37:10.280 226833 DEBUG oslo_concurrency.lockutils [req-1ce06a87-3c63-439b-ac20-ccc23361dc30 req-e17addfc-9aee-48a3-9e29-8da30eb2fb05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:10 compute-2 nova_compute[226829]: 2026-01-31 07:37:10.280 226833 DEBUG oslo_concurrency.lockutils [req-1ce06a87-3c63-439b-ac20-ccc23361dc30 req-e17addfc-9aee-48a3-9e29-8da30eb2fb05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:10 compute-2 nova_compute[226829]: 2026-01-31 07:37:10.280 226833 DEBUG nova.compute.manager [req-1ce06a87-3c63-439b-ac20-ccc23361dc30 req-e17addfc-9aee-48a3-9e29-8da30eb2fb05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] No waiting events found dispatching network-vif-unplugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:37:10 compute-2 nova_compute[226829]: 2026-01-31 07:37:10.281 226833 DEBUG nova.compute.manager [req-1ce06a87-3c63-439b-ac20-ccc23361dc30 req-e17addfc-9aee-48a3-9e29-8da30eb2fb05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-vif-unplugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:37:10 compute-2 ceph-mon[77282]: pgmap v1099: 305 pgs: 305 active+clean; 524 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 8.7 MiB/s wr, 350 op/s
Jan 31 07:37:10 compute-2 nova_compute[226829]: 2026-01-31 07:37:10.341 226833 DEBUG nova.network.neutron [-] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:37:10 compute-2 nova_compute[226829]: 2026-01-31 07:37:10.358 226833 DEBUG nova.network.neutron [-] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:37:10 compute-2 nova_compute[226829]: 2026-01-31 07:37:10.373 226833 INFO nova.compute.manager [-] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Took 0.34 seconds to deallocate network for instance.
Jan 31 07:37:10 compute-2 nova_compute[226829]: 2026-01-31 07:37:10.427 226833 DEBUG oslo_concurrency.lockutils [None req-1975dd6e-a183-48fa-acb9-05c241b3679a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:10 compute-2 nova_compute[226829]: 2026-01-31 07:37:10.427 226833 DEBUG oslo_concurrency.lockutils [None req-1975dd6e-a183-48fa-acb9-05c241b3679a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:10 compute-2 nova_compute[226829]: 2026-01-31 07:37:10.537 226833 DEBUG oslo_concurrency.processutils [None req-1975dd6e-a183-48fa-acb9-05c241b3679a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:37:10 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2091659878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:10 compute-2 nova_compute[226829]: 2026-01-31 07:37:10.990 226833 DEBUG oslo_concurrency.processutils [None req-1975dd6e-a183-48fa-acb9-05c241b3679a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:10 compute-2 nova_compute[226829]: 2026-01-31 07:37:10.997 226833 DEBUG nova.compute.provider_tree [None req-1975dd6e-a183-48fa-acb9-05c241b3679a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:37:11 compute-2 nova_compute[226829]: 2026-01-31 07:37:11.016 226833 DEBUG nova.scheduler.client.report [None req-1975dd6e-a183-48fa-acb9-05c241b3679a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:37:11 compute-2 nova_compute[226829]: 2026-01-31 07:37:11.038 226833 DEBUG oslo_concurrency.lockutils [None req-1975dd6e-a183-48fa-acb9-05c241b3679a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:11 compute-2 nova_compute[226829]: 2026-01-31 07:37:11.066 226833 INFO nova.scheduler.client.report [None req-1975dd6e-a183-48fa-acb9-05c241b3679a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Deleted allocations for instance a8d72f45-583b-4dea-959f-508e0e8273d7
Jan 31 07:37:11 compute-2 nova_compute[226829]: 2026-01-31 07:37:11.139 226833 DEBUG oslo_concurrency.lockutils [None req-1975dd6e-a183-48fa-acb9-05c241b3679a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "a8d72f45-583b-4dea-959f-508e0e8273d7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:11 compute-2 nova_compute[226829]: 2026-01-31 07:37:11.165 226833 DEBUG nova.network.neutron [-] [instance: 71265e55-f168-471c-80bc-80b49177a637] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:37:11 compute-2 nova_compute[226829]: 2026-01-31 07:37:11.213 226833 INFO nova.compute.manager [-] [instance: 71265e55-f168-471c-80bc-80b49177a637] Took 0.94 seconds to deallocate network for instance.
Jan 31 07:37:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2091659878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1151136867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:37:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:11.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:37:11 compute-2 nova_compute[226829]: 2026-01-31 07:37:11.802 226833 INFO nova.compute.manager [None req-6ffdcb32-c382-473f-9ea1-5413baa6e5d2 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Took 0.59 seconds to detach 1 volumes for instance.
Jan 31 07:37:11 compute-2 nova_compute[226829]: 2026-01-31 07:37:11.803 226833 DEBUG nova.compute.manager [None req-6ffdcb32-c382-473f-9ea1-5413baa6e5d2 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Deleting volume: 21ac0cb5-f889-4135-9b17-5debc0b9246e _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Jan 31 07:37:12 compute-2 nova_compute[226829]: 2026-01-31 07:37:12.034 226833 DEBUG oslo_concurrency.lockutils [None req-6ffdcb32-c382-473f-9ea1-5413baa6e5d2 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:12 compute-2 nova_compute[226829]: 2026-01-31 07:37:12.034 226833 DEBUG oslo_concurrency.lockutils [None req-6ffdcb32-c382-473f-9ea1-5413baa6e5d2 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:12 compute-2 nova_compute[226829]: 2026-01-31 07:37:12.040 226833 DEBUG oslo_concurrency.lockutils [None req-6ffdcb32-c382-473f-9ea1-5413baa6e5d2 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:12 compute-2 nova_compute[226829]: 2026-01-31 07:37:12.075 226833 INFO nova.scheduler.client.report [None req-6ffdcb32-c382-473f-9ea1-5413baa6e5d2 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Deleted allocations for instance 71265e55-f168-471c-80bc-80b49177a637
Jan 31 07:37:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:12.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:12 compute-2 nova_compute[226829]: 2026-01-31 07:37:12.145 226833 DEBUG oslo_concurrency.lockutils [None req-6ffdcb32-c382-473f-9ea1-5413baa6e5d2 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:12 compute-2 ceph-mon[77282]: pgmap v1100: 305 pgs: 305 active+clean; 496 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 7.8 MiB/s wr, 366 op/s
Jan 31 07:37:12 compute-2 nova_compute[226829]: 2026-01-31 07:37:12.464 226833 DEBUG nova.compute.manager [req-e324cedd-7af0-40e8-ad0b-b2806f81b890 req-8d595641-ef6f-4c60-87c9-5287f0b3985f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:37:12 compute-2 nova_compute[226829]: 2026-01-31 07:37:12.465 226833 DEBUG oslo_concurrency.lockutils [req-e324cedd-7af0-40e8-ad0b-b2806f81b890 req-8d595641-ef6f-4c60-87c9-5287f0b3985f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "71265e55-f168-471c-80bc-80b49177a637-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:12 compute-2 nova_compute[226829]: 2026-01-31 07:37:12.465 226833 DEBUG oslo_concurrency.lockutils [req-e324cedd-7af0-40e8-ad0b-b2806f81b890 req-8d595641-ef6f-4c60-87c9-5287f0b3985f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:12 compute-2 nova_compute[226829]: 2026-01-31 07:37:12.466 226833 DEBUG oslo_concurrency.lockutils [req-e324cedd-7af0-40e8-ad0b-b2806f81b890 req-8d595641-ef6f-4c60-87c9-5287f0b3985f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:12 compute-2 nova_compute[226829]: 2026-01-31 07:37:12.466 226833 DEBUG nova.compute.manager [req-e324cedd-7af0-40e8-ad0b-b2806f81b890 req-8d595641-ef6f-4c60-87c9-5287f0b3985f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] No waiting events found dispatching network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:37:12 compute-2 nova_compute[226829]: 2026-01-31 07:37:12.467 226833 WARNING nova.compute.manager [req-e324cedd-7af0-40e8-ad0b-b2806f81b890 req-8d595641-ef6f-4c60-87c9-5287f0b3985f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received unexpected event network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f for instance with vm_state deleted and task_state None.
Jan 31 07:37:12 compute-2 nova_compute[226829]: 2026-01-31 07:37:12.467 226833 DEBUG nova.compute.manager [req-e324cedd-7af0-40e8-ad0b-b2806f81b890 req-8d595641-ef6f-4c60-87c9-5287f0b3985f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-vif-deleted-84bfd4cb-8188-4fde-bca2-fdb0a732119f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:37:12 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:37:12 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2138114906' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:37:12 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:37:12 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2138114906' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:37:13 compute-2 ceph-mon[77282]: pgmap v1101: 305 pgs: 305 active+clean; 437 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 7.8 MiB/s wr, 390 op/s
Jan 31 07:37:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2138114906' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:37:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2138114906' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:37:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:13.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:13 compute-2 nova_compute[226829]: 2026-01-31 07:37:13.737 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:14.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:37:14 compute-2 nova_compute[226829]: 2026-01-31 07:37:14.931 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:15 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-2[77982]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 07:37:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:15.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:15 compute-2 ceph-mon[77282]: pgmap v1102: 305 pgs: 305 active+clean; 292 MiB data, 529 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 6.1 MiB/s wr, 411 op/s
Jan 31 07:37:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3545881058' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4023877950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:16.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:37:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:17.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:37:17 compute-2 ceph-mon[77282]: pgmap v1103: 305 pgs: 305 active+clean; 181 MiB data, 466 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.4 MiB/s wr, 377 op/s
Jan 31 07:37:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2312642852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:18.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:18 compute-2 nova_compute[226829]: 2026-01-31 07:37:18.739 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:37:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:19.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e153 e153: 3 total, 3 up, 3 in
Jan 31 07:37:19 compute-2 ceph-mon[77282]: pgmap v1104: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.0 MiB/s wr, 306 op/s
Jan 31 07:37:19 compute-2 nova_compute[226829]: 2026-01-31 07:37:19.933 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:20.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:20 compute-2 ceph-mon[77282]: osdmap e153: 3 total, 3 up, 3 in
Jan 31 07:37:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3163764212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:37:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:21.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:37:22 compute-2 ceph-mon[77282]: pgmap v1106: 305 pgs: 305 active+clean; 67 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 123 KiB/s rd, 21 KiB/s wr, 180 op/s
Jan 31 07:37:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:22.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:22.243 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:37:22 compute-2 nova_compute[226829]: 2026-01-31 07:37:22.243 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:22.244 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:37:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:22.245 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:37:23 compute-2 nova_compute[226829]: 2026-01-31 07:37:23.163 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e154 e154: 3 total, 3 up, 3 in
Jan 31 07:37:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:23.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:23 compute-2 nova_compute[226829]: 2026-01-31 07:37:23.741 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:23 compute-2 nova_compute[226829]: 2026-01-31 07:37:23.840 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845028.836326, a8d72f45-583b-4dea-959f-508e0e8273d7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:37:23 compute-2 nova_compute[226829]: 2026-01-31 07:37:23.840 226833 INFO nova.compute.manager [-] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] VM Stopped (Lifecycle Event)
Jan 31 07:37:23 compute-2 nova_compute[226829]: 2026-01-31 07:37:23.890 226833 DEBUG nova.compute.manager [None req-50c4d7a0-ccbc-4104-aeeb-bbbfffa4d7c3 - - - - - -] [instance: a8d72f45-583b-4dea-959f-508e0e8273d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:37:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:37:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:24.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:37:24 compute-2 ceph-mon[77282]: pgmap v1107: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 106 KiB/s rd, 20 KiB/s wr, 158 op/s
Jan 31 07:37:24 compute-2 ceph-mon[77282]: osdmap e154: 3 total, 3 up, 3 in
Jan 31 07:37:24 compute-2 nova_compute[226829]: 2026-01-31 07:37:24.501 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845029.4947722, 71265e55-f168-471c-80bc-80b49177a637 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:37:24 compute-2 nova_compute[226829]: 2026-01-31 07:37:24.501 226833 INFO nova.compute.manager [-] [instance: 71265e55-f168-471c-80bc-80b49177a637] VM Stopped (Lifecycle Event)
Jan 31 07:37:24 compute-2 nova_compute[226829]: 2026-01-31 07:37:24.530 226833 DEBUG nova.compute.manager [None req-7dc1474b-646a-478a-8e84-7dbbb83f6640 - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:37:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:37:24 compute-2 nova_compute[226829]: 2026-01-31 07:37:24.935 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:25 compute-2 ceph-mon[77282]: pgmap v1109: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 72 KiB/s rd, 5.7 KiB/s wr, 105 op/s
Jan 31 07:37:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:37:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:25.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:37:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:26.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:37:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:27.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:37:27 compute-2 ceph-mon[77282]: pgmap v1110: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 4.7 KiB/s wr, 83 op/s
Jan 31 07:37:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:28.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:37:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 14K writes, 61K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s
                                           Cumulative WAL: 14K writes, 4437 syncs, 3.32 writes per sync, written: 0.05 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8690 writes, 36K keys, 8690 commit groups, 1.0 writes per commit group, ingest: 35.69 MB, 0.06 MB/s
                                           Interval WAL: 8690 writes, 3392 syncs, 2.56 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 07:37:28 compute-2 nova_compute[226829]: 2026-01-31 07:37:28.743 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:29 compute-2 sudo[239410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:37:29 compute-2 sudo[239410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:37:29 compute-2 sudo[239410]: pam_unix(sudo:session): session closed for user root
Jan 31 07:37:29 compute-2 sudo[239441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:37:29 compute-2 sudo[239441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:37:29 compute-2 sudo[239441]: pam_unix(sudo:session): session closed for user root
Jan 31 07:37:29 compute-2 podman[239434]: 2026-01-31 07:37:29.155482413 +0000 UTC m=+0.114776686 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:37:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:37:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:29.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:29 compute-2 ceph-mon[77282]: pgmap v1111: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 53 KiB/s rd, 4.4 KiB/s wr, 74 op/s
Jan 31 07:37:29 compute-2 nova_compute[226829]: 2026-01-31 07:37:29.937 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:30.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:31.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:31 compute-2 ceph-mon[77282]: pgmap v1112: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 46 KiB/s rd, 3.0 KiB/s wr, 62 op/s
Jan 31 07:37:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:37:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:32.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:37:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:33.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e155 e155: 3 total, 3 up, 3 in
Jan 31 07:37:33 compute-2 ceph-mon[77282]: pgmap v1113: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 2.4 KiB/s wr, 38 op/s
Jan 31 07:37:33 compute-2 ceph-mon[77282]: osdmap e155: 3 total, 3 up, 3 in
Jan 31 07:37:33 compute-2 nova_compute[226829]: 2026-01-31 07:37:33.746 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:37:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:34.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:37:34 compute-2 ceph-mgr[77635]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3465938080
Jan 31 07:37:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:37:34 compute-2 nova_compute[226829]: 2026-01-31 07:37:34.939 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:35.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:35 compute-2 ceph-mon[77282]: pgmap v1115: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 5.2 KiB/s rd, 716 B/s wr, 7 op/s
Jan 31 07:37:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:37:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:36.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:37:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:37:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:37.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:37:37 compute-2 ceph-mon[77282]: pgmap v1116: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 4.6 KiB/s rd, 716 B/s wr, 6 op/s
Jan 31 07:37:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:38.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:38 compute-2 nova_compute[226829]: 2026-01-31 07:37:38.747 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:39 compute-2 podman[239492]: 2026-01-31 07:37:39.199388175 +0000 UTC m=+0.078866366 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 07:37:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:37:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:39.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:39 compute-2 ceph-mon[77282]: pgmap v1117: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 409 B/s wr, 5 op/s
Jan 31 07:37:39 compute-2 nova_compute[226829]: 2026-01-31 07:37:39.942 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:40.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:41.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:41 compute-2 ceph-mon[77282]: pgmap v1118: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:37:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:42.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:37:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:43.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:37:43 compute-2 nova_compute[226829]: 2026-01-31 07:37:43.750 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:43 compute-2 ceph-mon[77282]: pgmap v1119: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:37:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:44.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:37:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3203814439' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:37:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3203814439' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:37:44 compute-2 nova_compute[226829]: 2026-01-31 07:37:44.944 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:37:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:45.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:37:45 compute-2 ceph-mon[77282]: pgmap v1120: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:37:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:46.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:37:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:47.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:37:47 compute-2 ceph-mon[77282]: pgmap v1121: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:37:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2405675773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:48.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:48 compute-2 nova_compute[226829]: 2026-01-31 07:37:48.752 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2122428780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4058046253' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:37:49 compute-2 sudo[239516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:37:49 compute-2 sudo[239516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:37:49 compute-2 sudo[239516]: pam_unix(sudo:session): session closed for user root
Jan 31 07:37:49 compute-2 sudo[239542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:37:49 compute-2 sudo[239542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:37:49 compute-2 sudo[239542]: pam_unix(sudo:session): session closed for user root
Jan 31 07:37:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:37:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:49.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:49 compute-2 nova_compute[226829]: 2026-01-31 07:37:49.946 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:50 compute-2 ceph-mon[77282]: pgmap v1122: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:37:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3753196070' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:37:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:50.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:50 compute-2 nova_compute[226829]: 2026-01-31 07:37:50.371 226833 DEBUG oslo_concurrency.lockutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "ae28510e-134b-4be3-b0b5-1eed1b4da893" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:50 compute-2 nova_compute[226829]: 2026-01-31 07:37:50.372 226833 DEBUG oslo_concurrency.lockutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "ae28510e-134b-4be3-b0b5-1eed1b4da893" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:50 compute-2 nova_compute[226829]: 2026-01-31 07:37:50.393 226833 DEBUG nova.compute.manager [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:37:50 compute-2 nova_compute[226829]: 2026-01-31 07:37:50.515 226833 DEBUG oslo_concurrency.lockutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:50 compute-2 nova_compute[226829]: 2026-01-31 07:37:50.515 226833 DEBUG oslo_concurrency.lockutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:50 compute-2 nova_compute[226829]: 2026-01-31 07:37:50.523 226833 DEBUG nova.virt.hardware [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:37:50 compute-2 nova_compute[226829]: 2026-01-31 07:37:50.524 226833 INFO nova.compute.claims [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:37:50 compute-2 nova_compute[226829]: 2026-01-31 07:37:50.851 226833 DEBUG oslo_concurrency.processutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:37:51 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1392450364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:51 compute-2 nova_compute[226829]: 2026-01-31 07:37:51.331 226833 DEBUG oslo_concurrency.processutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:51 compute-2 nova_compute[226829]: 2026-01-31 07:37:51.337 226833 DEBUG nova.compute.provider_tree [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:37:51 compute-2 nova_compute[226829]: 2026-01-31 07:37:51.354 226833 DEBUG nova.scheduler.client.report [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:37:51 compute-2 nova_compute[226829]: 2026-01-31 07:37:51.383 226833 DEBUG oslo_concurrency.lockutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:51 compute-2 nova_compute[226829]: 2026-01-31 07:37:51.383 226833 DEBUG nova.compute.manager [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:37:51 compute-2 nova_compute[226829]: 2026-01-31 07:37:51.458 226833 DEBUG nova.compute.manager [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:37:51 compute-2 nova_compute[226829]: 2026-01-31 07:37:51.459 226833 DEBUG nova.network.neutron [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:37:51 compute-2 nova_compute[226829]: 2026-01-31 07:37:51.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:37:51 compute-2 nova_compute[226829]: 2026-01-31 07:37:51.504 226833 INFO nova.virt.libvirt.driver [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:37:51 compute-2 nova_compute[226829]: 2026-01-31 07:37:51.525 226833 DEBUG nova.compute.manager [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:37:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:37:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:51.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:37:51 compute-2 nova_compute[226829]: 2026-01-31 07:37:51.666 226833 DEBUG nova.compute.manager [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:37:51 compute-2 nova_compute[226829]: 2026-01-31 07:37:51.668 226833 DEBUG nova.virt.libvirt.driver [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:37:51 compute-2 nova_compute[226829]: 2026-01-31 07:37:51.668 226833 INFO nova.virt.libvirt.driver [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Creating image(s)
Jan 31 07:37:51 compute-2 nova_compute[226829]: 2026-01-31 07:37:51.707 226833 DEBUG nova.storage.rbd_utils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image ae28510e-134b-4be3-b0b5-1eed1b4da893_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:37:51 compute-2 nova_compute[226829]: 2026-01-31 07:37:51.747 226833 DEBUG nova.storage.rbd_utils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image ae28510e-134b-4be3-b0b5-1eed1b4da893_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:37:51 compute-2 nova_compute[226829]: 2026-01-31 07:37:51.783 226833 DEBUG nova.storage.rbd_utils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image ae28510e-134b-4be3-b0b5-1eed1b4da893_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:37:51 compute-2 nova_compute[226829]: 2026-01-31 07:37:51.788 226833 DEBUG oslo_concurrency.processutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:51 compute-2 nova_compute[226829]: 2026-01-31 07:37:51.809 226833 DEBUG nova.policy [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8a44db09acbd4aeb990147dc979f0bfd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b0554655ad0a48c8bf0551298dd31919', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:37:51 compute-2 nova_compute[226829]: 2026-01-31 07:37:51.864 226833 DEBUG oslo_concurrency.processutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:51 compute-2 nova_compute[226829]: 2026-01-31 07:37:51.865 226833 DEBUG oslo_concurrency.lockutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:51 compute-2 nova_compute[226829]: 2026-01-31 07:37:51.866 226833 DEBUG oslo_concurrency.lockutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:51 compute-2 nova_compute[226829]: 2026-01-31 07:37:51.866 226833 DEBUG oslo_concurrency.lockutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:51 compute-2 nova_compute[226829]: 2026-01-31 07:37:51.891 226833 DEBUG nova.storage.rbd_utils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image ae28510e-134b-4be3-b0b5-1eed1b4da893_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:37:51 compute-2 nova_compute[226829]: 2026-01-31 07:37:51.894 226833 DEBUG oslo_concurrency.processutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 ae28510e-134b-4be3-b0b5-1eed1b4da893_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:52 compute-2 ceph-mon[77282]: pgmap v1123: 305 pgs: 305 active+clean; 104 MiB data, 358 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 2.2 MiB/s wr, 28 op/s
Jan 31 07:37:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1392450364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:37:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:52.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:37:52 compute-2 nova_compute[226829]: 2026-01-31 07:37:52.193 226833 DEBUG oslo_concurrency.processutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 ae28510e-134b-4be3-b0b5-1eed1b4da893_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:52 compute-2 nova_compute[226829]: 2026-01-31 07:37:52.284 226833 DEBUG nova.storage.rbd_utils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] resizing rbd image ae28510e-134b-4be3-b0b5-1eed1b4da893_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:37:52 compute-2 nova_compute[226829]: 2026-01-31 07:37:52.411 226833 DEBUG nova.objects.instance [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lazy-loading 'migration_context' on Instance uuid ae28510e-134b-4be3-b0b5-1eed1b4da893 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:37:52 compute-2 nova_compute[226829]: 2026-01-31 07:37:52.434 226833 DEBUG nova.virt.libvirt.driver [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:37:52 compute-2 nova_compute[226829]: 2026-01-31 07:37:52.435 226833 DEBUG nova.virt.libvirt.driver [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Ensure instance console log exists: /var/lib/nova/instances/ae28510e-134b-4be3-b0b5-1eed1b4da893/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:37:52 compute-2 nova_compute[226829]: 2026-01-31 07:37:52.435 226833 DEBUG oslo_concurrency.lockutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:52 compute-2 nova_compute[226829]: 2026-01-31 07:37:52.436 226833 DEBUG oslo_concurrency.lockutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:52 compute-2 nova_compute[226829]: 2026-01-31 07:37:52.436 226833 DEBUG oslo_concurrency.lockutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:52 compute-2 nova_compute[226829]: 2026-01-31 07:37:52.705 226833 DEBUG nova.network.neutron [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Successfully created port: 34b1d0d0-9a83-46fb-8663-0daad4c676e3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:37:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2146859592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2987293693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:37:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1404436523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:53 compute-2 nova_compute[226829]: 2026-01-31 07:37:53.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:37:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:53.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:53 compute-2 nova_compute[226829]: 2026-01-31 07:37:53.696 226833 DEBUG nova.network.neutron [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Successfully updated port: 34b1d0d0-9a83-46fb-8663-0daad4c676e3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:37:53 compute-2 nova_compute[226829]: 2026-01-31 07:37:53.716 226833 DEBUG oslo_concurrency.lockutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "refresh_cache-ae28510e-134b-4be3-b0b5-1eed1b4da893" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:37:53 compute-2 nova_compute[226829]: 2026-01-31 07:37:53.717 226833 DEBUG oslo_concurrency.lockutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquired lock "refresh_cache-ae28510e-134b-4be3-b0b5-1eed1b4da893" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:37:53 compute-2 nova_compute[226829]: 2026-01-31 07:37:53.717 226833 DEBUG nova.network.neutron [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:37:53 compute-2 nova_compute[226829]: 2026-01-31 07:37:53.754 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:53 compute-2 nova_compute[226829]: 2026-01-31 07:37:53.901 226833 DEBUG nova.compute.manager [req-581d92ee-5dcc-4a46-b479-03a418588753 req-895b7020-951b-4f30-881e-86fca2e68a06 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Received event network-changed-34b1d0d0-9a83-46fb-8663-0daad4c676e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:37:53 compute-2 nova_compute[226829]: 2026-01-31 07:37:53.902 226833 DEBUG nova.compute.manager [req-581d92ee-5dcc-4a46-b479-03a418588753 req-895b7020-951b-4f30-881e-86fca2e68a06 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Refreshing instance network info cache due to event network-changed-34b1d0d0-9a83-46fb-8663-0daad4c676e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:37:53 compute-2 nova_compute[226829]: 2026-01-31 07:37:53.902 226833 DEBUG oslo_concurrency.lockutils [req-581d92ee-5dcc-4a46-b479-03a418588753 req-895b7020-951b-4f30-881e-86fca2e68a06 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-ae28510e-134b-4be3-b0b5-1eed1b4da893" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:37:54 compute-2 nova_compute[226829]: 2026-01-31 07:37:54.077 226833 DEBUG nova.network.neutron [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:37:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:37:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:54.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:37:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:37:54 compute-2 ceph-mon[77282]: pgmap v1124: 305 pgs: 305 active+clean; 143 MiB data, 380 MiB used, 21 GiB / 21 GiB avail; 449 KiB/s rd, 4.0 MiB/s wr, 69 op/s
Jan 31 07:37:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/343125461' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:37:54 compute-2 nova_compute[226829]: 2026-01-31 07:37:54.948 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:55 compute-2 nova_compute[226829]: 2026-01-31 07:37:55.150 226833 DEBUG nova.network.neutron [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Updating instance_info_cache with network_info: [{"id": "34b1d0d0-9a83-46fb-8663-0daad4c676e3", "address": "fa:16:3e:ca:60:aa", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34b1d0d0-9a", "ovs_interfaceid": "34b1d0d0-9a83-46fb-8663-0daad4c676e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:37:55 compute-2 nova_compute[226829]: 2026-01-31 07:37:55.421 226833 DEBUG oslo_concurrency.lockutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Releasing lock "refresh_cache-ae28510e-134b-4be3-b0b5-1eed1b4da893" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:37:55 compute-2 nova_compute[226829]: 2026-01-31 07:37:55.422 226833 DEBUG nova.compute.manager [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Instance network_info: |[{"id": "34b1d0d0-9a83-46fb-8663-0daad4c676e3", "address": "fa:16:3e:ca:60:aa", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34b1d0d0-9a", "ovs_interfaceid": "34b1d0d0-9a83-46fb-8663-0daad4c676e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:37:55 compute-2 nova_compute[226829]: 2026-01-31 07:37:55.422 226833 DEBUG oslo_concurrency.lockutils [req-581d92ee-5dcc-4a46-b479-03a418588753 req-895b7020-951b-4f30-881e-86fca2e68a06 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-ae28510e-134b-4be3-b0b5-1eed1b4da893" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:37:55 compute-2 nova_compute[226829]: 2026-01-31 07:37:55.422 226833 DEBUG nova.network.neutron [req-581d92ee-5dcc-4a46-b479-03a418588753 req-895b7020-951b-4f30-881e-86fca2e68a06 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Refreshing network info cache for port 34b1d0d0-9a83-46fb-8663-0daad4c676e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:37:55 compute-2 nova_compute[226829]: 2026-01-31 07:37:55.425 226833 DEBUG nova.virt.libvirt.driver [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Start _get_guest_xml network_info=[{"id": "34b1d0d0-9a83-46fb-8663-0daad4c676e3", "address": "fa:16:3e:ca:60:aa", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34b1d0d0-9a", "ovs_interfaceid": "34b1d0d0-9a83-46fb-8663-0daad4c676e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:37:55 compute-2 nova_compute[226829]: 2026-01-31 07:37:55.429 226833 WARNING nova.virt.libvirt.driver [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:37:55 compute-2 nova_compute[226829]: 2026-01-31 07:37:55.435 226833 DEBUG nova.virt.libvirt.host [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:37:55 compute-2 nova_compute[226829]: 2026-01-31 07:37:55.436 226833 DEBUG nova.virt.libvirt.host [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:37:55 compute-2 nova_compute[226829]: 2026-01-31 07:37:55.438 226833 DEBUG nova.virt.libvirt.host [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:37:55 compute-2 nova_compute[226829]: 2026-01-31 07:37:55.439 226833 DEBUG nova.virt.libvirt.host [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:37:55 compute-2 nova_compute[226829]: 2026-01-31 07:37:55.440 226833 DEBUG nova.virt.libvirt.driver [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:37:55 compute-2 nova_compute[226829]: 2026-01-31 07:37:55.440 226833 DEBUG nova.virt.hardware [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:37:55 compute-2 nova_compute[226829]: 2026-01-31 07:37:55.441 226833 DEBUG nova.virt.hardware [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:37:55 compute-2 nova_compute[226829]: 2026-01-31 07:37:55.441 226833 DEBUG nova.virt.hardware [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:37:55 compute-2 nova_compute[226829]: 2026-01-31 07:37:55.442 226833 DEBUG nova.virt.hardware [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:37:55 compute-2 nova_compute[226829]: 2026-01-31 07:37:55.442 226833 DEBUG nova.virt.hardware [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:37:55 compute-2 nova_compute[226829]: 2026-01-31 07:37:55.442 226833 DEBUG nova.virt.hardware [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:37:55 compute-2 nova_compute[226829]: 2026-01-31 07:37:55.443 226833 DEBUG nova.virt.hardware [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:37:55 compute-2 nova_compute[226829]: 2026-01-31 07:37:55.443 226833 DEBUG nova.virt.hardware [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:37:55 compute-2 nova_compute[226829]: 2026-01-31 07:37:55.443 226833 DEBUG nova.virt.hardware [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:37:55 compute-2 nova_compute[226829]: 2026-01-31 07:37:55.444 226833 DEBUG nova.virt.hardware [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:37:55 compute-2 nova_compute[226829]: 2026-01-31 07:37:55.444 226833 DEBUG nova.virt.hardware [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:37:55 compute-2 nova_compute[226829]: 2026-01-31 07:37:55.448 226833 DEBUG oslo_concurrency.processutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:55 compute-2 nova_compute[226829]: 2026-01-31 07:37:55.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:37:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:55.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:55 compute-2 ceph-mon[77282]: pgmap v1125: 305 pgs: 305 active+clean; 133 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.9 MiB/s wr, 147 op/s
Jan 31 07:37:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:37:55 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/348254901' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:37:55 compute-2 nova_compute[226829]: 2026-01-31 07:37:55.891 226833 DEBUG oslo_concurrency.processutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:55 compute-2 nova_compute[226829]: 2026-01-31 07:37:55.928 226833 DEBUG nova.storage.rbd_utils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image ae28510e-134b-4be3-b0b5-1eed1b4da893_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:37:55 compute-2 nova_compute[226829]: 2026-01-31 07:37:55.933 226833 DEBUG oslo_concurrency.processutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:37:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:56.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:37:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:37:56 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/567436037' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.355 226833 DEBUG oslo_concurrency.processutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.358 226833 DEBUG nova.virt.libvirt.vif [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:37:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-263437114',display_name='tempest-ServersAdminTestJSON-server-263437114',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-263437114',id=34,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0554655ad0a48c8bf0551298dd31919',ramdisk_id='',reservation_id='r-4q08ca15',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1156607975',owner_user_name='tempest-ServersAdminTestJSON-1156607975-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:37:51Z,user_data=None,user_id='8a44db09acbd4aeb990147dc979f0bfd',uuid=ae28510e-134b-4be3-b0b5-1eed1b4da893,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "34b1d0d0-9a83-46fb-8663-0daad4c676e3", "address": "fa:16:3e:ca:60:aa", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34b1d0d0-9a", "ovs_interfaceid": "34b1d0d0-9a83-46fb-8663-0daad4c676e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.359 226833 DEBUG nova.network.os_vif_util [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converting VIF {"id": "34b1d0d0-9a83-46fb-8663-0daad4c676e3", "address": "fa:16:3e:ca:60:aa", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34b1d0d0-9a", "ovs_interfaceid": "34b1d0d0-9a83-46fb-8663-0daad4c676e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.361 226833 DEBUG nova.network.os_vif_util [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:60:aa,bridge_name='br-int',has_traffic_filtering=True,id=34b1d0d0-9a83-46fb-8663-0daad4c676e3,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34b1d0d0-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.363 226833 DEBUG nova.objects.instance [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lazy-loading 'pci_devices' on Instance uuid ae28510e-134b-4be3-b0b5-1eed1b4da893 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.378 226833 DEBUG nova.virt.libvirt.driver [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:37:56 compute-2 nova_compute[226829]:   <uuid>ae28510e-134b-4be3-b0b5-1eed1b4da893</uuid>
Jan 31 07:37:56 compute-2 nova_compute[226829]:   <name>instance-00000022</name>
Jan 31 07:37:56 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:37:56 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:37:56 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <nova:name>tempest-ServersAdminTestJSON-server-263437114</nova:name>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:37:55</nova:creationTime>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:37:56 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:37:56 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:37:56 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:37:56 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:37:56 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:37:56 compute-2 nova_compute[226829]:         <nova:user uuid="8a44db09acbd4aeb990147dc979f0bfd">tempest-ServersAdminTestJSON-1156607975-project-member</nova:user>
Jan 31 07:37:56 compute-2 nova_compute[226829]:         <nova:project uuid="b0554655ad0a48c8bf0551298dd31919">tempest-ServersAdminTestJSON-1156607975</nova:project>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 07:37:56 compute-2 nova_compute[226829]:         <nova:port uuid="34b1d0d0-9a83-46fb-8663-0daad4c676e3">
Jan 31 07:37:56 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:37:56 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:37:56 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <system>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <entry name="serial">ae28510e-134b-4be3-b0b5-1eed1b4da893</entry>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <entry name="uuid">ae28510e-134b-4be3-b0b5-1eed1b4da893</entry>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     </system>
Jan 31 07:37:56 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:37:56 compute-2 nova_compute[226829]:   <os>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:   </os>
Jan 31 07:37:56 compute-2 nova_compute[226829]:   <features>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:   </features>
Jan 31 07:37:56 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:37:56 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:37:56 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/ae28510e-134b-4be3-b0b5-1eed1b4da893_disk">
Jan 31 07:37:56 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       </source>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:37:56 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/ae28510e-134b-4be3-b0b5-1eed1b4da893_disk.config">
Jan 31 07:37:56 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       </source>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:37:56 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:ca:60:aa"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <target dev="tap34b1d0d0-9a"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/ae28510e-134b-4be3-b0b5-1eed1b4da893/console.log" append="off"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <video>
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     </video>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:37:56 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:37:56 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:37:56 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:37:56 compute-2 nova_compute[226829]: </domain>
Jan 31 07:37:56 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.380 226833 DEBUG nova.compute.manager [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Preparing to wait for external event network-vif-plugged-34b1d0d0-9a83-46fb-8663-0daad4c676e3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.381 226833 DEBUG oslo_concurrency.lockutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "ae28510e-134b-4be3-b0b5-1eed1b4da893-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.381 226833 DEBUG oslo_concurrency.lockutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "ae28510e-134b-4be3-b0b5-1eed1b4da893-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.382 226833 DEBUG oslo_concurrency.lockutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "ae28510e-134b-4be3-b0b5-1eed1b4da893-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.383 226833 DEBUG nova.virt.libvirt.vif [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:37:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-263437114',display_name='tempest-ServersAdminTestJSON-server-263437114',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-263437114',id=34,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0554655ad0a48c8bf0551298dd31919',ramdisk_id='',reservation_id='r-4q08ca15',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1156607975',owner_user_name='tempest-ServersAdminTestJSON-1156607975-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:37:51Z,user_data=None,user_id='8a44db09acbd4aeb990147dc979f0bfd',uuid=ae28510e-134b-4be3-b0b5-1eed1b4da893,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "34b1d0d0-9a83-46fb-8663-0daad4c676e3", "address": "fa:16:3e:ca:60:aa", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34b1d0d0-9a", "ovs_interfaceid": "34b1d0d0-9a83-46fb-8663-0daad4c676e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.384 226833 DEBUG nova.network.os_vif_util [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converting VIF {"id": "34b1d0d0-9a83-46fb-8663-0daad4c676e3", "address": "fa:16:3e:ca:60:aa", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34b1d0d0-9a", "ovs_interfaceid": "34b1d0d0-9a83-46fb-8663-0daad4c676e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.385 226833 DEBUG nova.network.os_vif_util [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:60:aa,bridge_name='br-int',has_traffic_filtering=True,id=34b1d0d0-9a83-46fb-8663-0daad4c676e3,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34b1d0d0-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.385 226833 DEBUG os_vif [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:60:aa,bridge_name='br-int',has_traffic_filtering=True,id=34b1d0d0-9a83-46fb-8663-0daad4c676e3,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34b1d0d0-9a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.387 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.388 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.388 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.396 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.397 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34b1d0d0-9a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.398 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap34b1d0d0-9a, col_values=(('external_ids', {'iface-id': '34b1d0d0-9a83-46fb-8663-0daad4c676e3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ca:60:aa', 'vm-uuid': 'ae28510e-134b-4be3-b0b5-1eed1b4da893'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:37:56 compute-2 NetworkManager[48999]: <info>  [1769845076.4026] manager: (tap34b1d0d0-9a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.404 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.409 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.409 226833 INFO os_vif [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:60:aa,bridge_name='br-int',has_traffic_filtering=True,id=34b1d0d0-9a83-46fb-8663-0daad4c676e3,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34b1d0d0-9a')
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.490 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.517 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.518 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.518 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.526 226833 DEBUG nova.virt.libvirt.driver [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.527 226833 DEBUG nova.virt.libvirt.driver [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.527 226833 DEBUG nova.virt.libvirt.driver [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] No VIF found with MAC fa:16:3e:ca:60:aa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.527 226833 INFO nova.virt.libvirt.driver [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Using config drive
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.550 226833 DEBUG nova.storage.rbd_utils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image ae28510e-134b-4be3-b0b5-1eed1b4da893_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.556 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.557 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.557 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.557 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.557 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:56 compute-2 ovn_controller[133834]: 2026-01-31T07:37:56Z|00079|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 31 07:37:56 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/348254901' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:37:56 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4241698449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:56 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/567436037' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.989 226833 INFO nova.virt.libvirt.driver [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Creating config drive at /var/lib/nova/instances/ae28510e-134b-4be3-b0b5-1eed1b4da893/disk.config
Jan 31 07:37:56 compute-2 nova_compute[226829]: 2026-01-31 07:37:56.996 226833 DEBUG oslo_concurrency.processutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ae28510e-134b-4be3-b0b5-1eed1b4da893/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpahuultmf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:37:57 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3136175005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:57 compute-2 nova_compute[226829]: 2026-01-31 07:37:57.055 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:57 compute-2 nova_compute[226829]: 2026-01-31 07:37:57.115 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000022 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:37:57 compute-2 nova_compute[226829]: 2026-01-31 07:37:57.116 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000022 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:37:57 compute-2 nova_compute[226829]: 2026-01-31 07:37:57.126 226833 DEBUG oslo_concurrency.processutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ae28510e-134b-4be3-b0b5-1eed1b4da893/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpahuultmf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:57 compute-2 nova_compute[226829]: 2026-01-31 07:37:57.166 226833 DEBUG nova.storage.rbd_utils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image ae28510e-134b-4be3-b0b5-1eed1b4da893_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:37:57 compute-2 nova_compute[226829]: 2026-01-31 07:37:57.171 226833 DEBUG oslo_concurrency.processutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ae28510e-134b-4be3-b0b5-1eed1b4da893/disk.config ae28510e-134b-4be3-b0b5-1eed1b4da893_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:57 compute-2 nova_compute[226829]: 2026-01-31 07:37:57.192 226833 DEBUG nova.network.neutron [req-581d92ee-5dcc-4a46-b479-03a418588753 req-895b7020-951b-4f30-881e-86fca2e68a06 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Updated VIF entry in instance network info cache for port 34b1d0d0-9a83-46fb-8663-0daad4c676e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:37:57 compute-2 nova_compute[226829]: 2026-01-31 07:37:57.193 226833 DEBUG nova.network.neutron [req-581d92ee-5dcc-4a46-b479-03a418588753 req-895b7020-951b-4f30-881e-86fca2e68a06 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Updating instance_info_cache with network_info: [{"id": "34b1d0d0-9a83-46fb-8663-0daad4c676e3", "address": "fa:16:3e:ca:60:aa", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34b1d0d0-9a", "ovs_interfaceid": "34b1d0d0-9a83-46fb-8663-0daad4c676e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:37:57 compute-2 nova_compute[226829]: 2026-01-31 07:37:57.214 226833 DEBUG oslo_concurrency.lockutils [req-581d92ee-5dcc-4a46-b479-03a418588753 req-895b7020-951b-4f30-881e-86fca2e68a06 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-ae28510e-134b-4be3-b0b5-1eed1b4da893" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:37:57 compute-2 nova_compute[226829]: 2026-01-31 07:37:57.400 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:37:57 compute-2 nova_compute[226829]: 2026-01-31 07:37:57.402 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4792MB free_disk=20.942790985107422GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:37:57 compute-2 nova_compute[226829]: 2026-01-31 07:37:57.402 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:57 compute-2 nova_compute[226829]: 2026-01-31 07:37:57.402 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:57 compute-2 nova_compute[226829]: 2026-01-31 07:37:57.536 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance ae28510e-134b-4be3-b0b5-1eed1b4da893 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:37:57 compute-2 nova_compute[226829]: 2026-01-31 07:37:57.536 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:37:57 compute-2 nova_compute[226829]: 2026-01-31 07:37:57.537 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:37:57 compute-2 nova_compute[226829]: 2026-01-31 07:37:57.622 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:37:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:57.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:37:57 compute-2 nova_compute[226829]: 2026-01-31 07:37:57.710 226833 DEBUG oslo_concurrency.processutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ae28510e-134b-4be3-b0b5-1eed1b4da893/disk.config ae28510e-134b-4be3-b0b5-1eed1b4da893_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:57 compute-2 nova_compute[226829]: 2026-01-31 07:37:57.712 226833 INFO nova.virt.libvirt.driver [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Deleting local config drive /var/lib/nova/instances/ae28510e-134b-4be3-b0b5-1eed1b4da893/disk.config because it was imported into RBD.
Jan 31 07:37:57 compute-2 ceph-mon[77282]: pgmap v1126: 305 pgs: 305 active+clean; 142 MiB data, 397 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 167 op/s
Jan 31 07:37:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3136175005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4084864772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:57 compute-2 kernel: tap34b1d0d0-9a: entered promiscuous mode
Jan 31 07:37:57 compute-2 ovn_controller[133834]: 2026-01-31T07:37:57Z|00080|binding|INFO|Claiming lport 34b1d0d0-9a83-46fb-8663-0daad4c676e3 for this chassis.
Jan 31 07:37:57 compute-2 ovn_controller[133834]: 2026-01-31T07:37:57Z|00081|binding|INFO|34b1d0d0-9a83-46fb-8663-0daad4c676e3: Claiming fa:16:3e:ca:60:aa 10.100.0.8
Jan 31 07:37:57 compute-2 nova_compute[226829]: 2026-01-31 07:37:57.775 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:57 compute-2 NetworkManager[48999]: <info>  [1769845077.7773] manager: (tap34b1d0d0-9a): new Tun device (/org/freedesktop/NetworkManager/Devices/51)
Jan 31 07:37:57 compute-2 nova_compute[226829]: 2026-01-31 07:37:57.779 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:57 compute-2 nova_compute[226829]: 2026-01-31 07:37:57.784 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:57.796 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:60:aa 10.100.0.8'], port_security=['fa:16:3e:ca:60:aa 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ae28510e-134b-4be3-b0b5-1eed1b4da893', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0554655ad0a48c8bf0551298dd31919', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b25af7c2-ecfe-428a-9b4f-51874d47219e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3f23c6f-f389-4487-9d19-0cf4a6c28cbc, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=34b1d0d0-9a83-46fb-8663-0daad4c676e3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:37:57 compute-2 systemd-udevd[239929]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:37:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:57.802 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 34b1d0d0-9a83-46fb-8663-0daad4c676e3 in datapath 8c92e27e-f16c-4df2-a299-60ef2ca44f53 bound to our chassis
Jan 31 07:37:57 compute-2 systemd-machined[195142]: New machine qemu-15-instance-00000022.
Jan 31 07:37:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:57.805 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c92e27e-f16c-4df2-a299-60ef2ca44f53
Jan 31 07:37:57 compute-2 NetworkManager[48999]: <info>  [1769845077.8132] device (tap34b1d0d0-9a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:37:57 compute-2 NetworkManager[48999]: <info>  [1769845077.8136] device (tap34b1d0d0-9a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:37:57 compute-2 systemd[1]: Started Virtual Machine qemu-15-instance-00000022.
Jan 31 07:37:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:57.815 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[994c6106-a203-4803-afce-59342839ecc1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:57.816 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8c92e27e-f1 in ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:37:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:57.820 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8c92e27e-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:37:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:57.820 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4e8fd0d9-25d1-426c-a87d-898dbced708a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:57.821 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[05f4eb3f-7897-4a0d-9ddd-89a81b66b15c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:57 compute-2 ovn_controller[133834]: 2026-01-31T07:37:57Z|00082|binding|INFO|Setting lport 34b1d0d0-9a83-46fb-8663-0daad4c676e3 ovn-installed in OVS
Jan 31 07:37:57 compute-2 ovn_controller[133834]: 2026-01-31T07:37:57Z|00083|binding|INFO|Setting lport 34b1d0d0-9a83-46fb-8663-0daad4c676e3 up in Southbound
Jan 31 07:37:57 compute-2 nova_compute[226829]: 2026-01-31 07:37:57.855 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:57.854 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[ed8d1795-d628-42b5-b80d-125ea3839231]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:57.868 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f7559d36-abfa-4c8d-bcd5-f3fe6ab18069]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:57.903 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[c6c9214c-28ed-4aed-ad83-f88c84eccbc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:57.913 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d027f6dc-49d2-4da0-b462-9b7b7192f7e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:57 compute-2 NetworkManager[48999]: <info>  [1769845077.9153] manager: (tap8c92e27e-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/52)
Jan 31 07:37:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:57.949 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2a5e29-4948-4eff-a7df-3963bd14fa4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:57.953 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[dfde7cde-37fd-4d04-a98c-58e246800f9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:57 compute-2 NetworkManager[48999]: <info>  [1769845077.9768] device (tap8c92e27e-f0): carrier: link connected
Jan 31 07:37:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:57.981 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[9230385d-bc1a-4c1b-a82f-a9a69acab459]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:58.002 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[dfa38e5d-27ed-4c82-b7d5-d00d2435eb07]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c92e27e-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:62:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533045, 'reachable_time': 28058, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239970, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:58.025 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1f6a8111-086b-4bd0-b7ff-ac056601d1fc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fede:629b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 533045, 'tstamp': 533045}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239972, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:58.045 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0044f525-61a4-47ae-9227-ce2019970061]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c92e27e-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:62:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533045, 'reachable_time': 28058, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 239973, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:58.077 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4f502e81-32ce-4b20-b903-8449a7a2d508]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:37:58 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1574188447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.130 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:58.131 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1b9b78e2-448f-44ac-bf1e-ad5d89952e60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:58.133 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c92e27e-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:58.134 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:58.135 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c92e27e-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:37:58 compute-2 kernel: tap8c92e27e-f0: entered promiscuous mode
Jan 31 07:37:58 compute-2 NetworkManager[48999]: <info>  [1769845078.1405] manager: (tap8c92e27e-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:58.140 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c92e27e-f0, col_values=(('external_ids', {'iface-id': 'b682c189-93d2-4c14-8b2a-bafbda6df8a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:37:58 compute-2 ovn_controller[133834]: 2026-01-31T07:37:58Z|00084|binding|INFO|Releasing lport b682c189-93d2-4c14-8b2a-bafbda6df8a4 from this chassis (sb_readonly=0)
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.142 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.148 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.153 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:58.155 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8c92e27e-f16c-4df2-a299-60ef2ca44f53.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8c92e27e-f16c-4df2-a299-60ef2ca44f53.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:58.156 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[81bad2b1-0a55-4675-9910-9e6d18505a7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:58.157 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]: global
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-8c92e27e-f16c-4df2-a299-60ef2ca44f53
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/8c92e27e-f16c-4df2-a299-60ef2ca44f53.pid.haproxy
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 8c92e27e-f16c-4df2-a299-60ef2ca44f53
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:37:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:37:58.158 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'env', 'PROCESS_TAG=haproxy-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8c92e27e-f16c-4df2-a299-60ef2ca44f53.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.165 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:37:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:37:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:58.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.201 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.203 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.413 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845078.4129393, ae28510e-134b-4be3-b0b5-1eed1b4da893 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.414 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] VM Started (Lifecycle Event)
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.438 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.442 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845078.413139, ae28510e-134b-4be3-b0b5-1eed1b4da893 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.442 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] VM Paused (Lifecycle Event)
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.494 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.499 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.547 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:37:58 compute-2 podman[240049]: 2026-01-31 07:37:58.5775122 +0000 UTC m=+0.067492625 container create 0ffcf2636ac16db5bf63214bbf18a370be8cdf6cf66a20f95b65064eca9c38a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:37:58 compute-2 systemd[1]: Started libpod-conmon-0ffcf2636ac16db5bf63214bbf18a370be8cdf6cf66a20f95b65064eca9c38a0.scope.
Jan 31 07:37:58 compute-2 podman[240049]: 2026-01-31 07:37:58.54384701 +0000 UTC m=+0.033827455 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:37:58 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:37:58 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7155d948b40ed956dfeaff1435c1e770dc2223bbbb10bd820c41a6201511310f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:37:58 compute-2 podman[240049]: 2026-01-31 07:37:58.667336965 +0000 UTC m=+0.157317430 container init 0ffcf2636ac16db5bf63214bbf18a370be8cdf6cf66a20f95b65064eca9c38a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 07:37:58 compute-2 podman[240049]: 2026-01-31 07:37:58.674385957 +0000 UTC m=+0.164366402 container start 0ffcf2636ac16db5bf63214bbf18a370be8cdf6cf66a20f95b65064eca9c38a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 07:37:58 compute-2 neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53[240064]: [NOTICE]   (240068) : New worker (240070) forked
Jan 31 07:37:58 compute-2 neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53[240064]: [NOTICE]   (240068) : Loading success.
Jan 31 07:37:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3113696239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1574188447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3287800186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.757 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.806 226833 DEBUG nova.compute.manager [req-7c16dc8c-de7b-49e5-b677-86c22cc98b91 req-4e38d35c-f1de-412f-91b7-75303f643e0b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Received event network-vif-plugged-34b1d0d0-9a83-46fb-8663-0daad4c676e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.807 226833 DEBUG oslo_concurrency.lockutils [req-7c16dc8c-de7b-49e5-b677-86c22cc98b91 req-4e38d35c-f1de-412f-91b7-75303f643e0b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ae28510e-134b-4be3-b0b5-1eed1b4da893-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.807 226833 DEBUG oslo_concurrency.lockutils [req-7c16dc8c-de7b-49e5-b677-86c22cc98b91 req-4e38d35c-f1de-412f-91b7-75303f643e0b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ae28510e-134b-4be3-b0b5-1eed1b4da893-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.808 226833 DEBUG oslo_concurrency.lockutils [req-7c16dc8c-de7b-49e5-b677-86c22cc98b91 req-4e38d35c-f1de-412f-91b7-75303f643e0b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ae28510e-134b-4be3-b0b5-1eed1b4da893-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.808 226833 DEBUG nova.compute.manager [req-7c16dc8c-de7b-49e5-b677-86c22cc98b91 req-4e38d35c-f1de-412f-91b7-75303f643e0b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Processing event network-vif-plugged-34b1d0d0-9a83-46fb-8663-0daad4c676e3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.810 226833 DEBUG nova.compute.manager [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.815 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845078.8149028, ae28510e-134b-4be3-b0b5-1eed1b4da893 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.815 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] VM Resumed (Lifecycle Event)
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.818 226833 DEBUG nova.virt.libvirt.driver [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.822 226833 INFO nova.virt.libvirt.driver [-] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Instance spawned successfully.
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.823 226833 DEBUG nova.virt.libvirt.driver [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.843 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.850 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.854 226833 DEBUG nova.virt.libvirt.driver [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.854 226833 DEBUG nova.virt.libvirt.driver [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.855 226833 DEBUG nova.virt.libvirt.driver [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.855 226833 DEBUG nova.virt.libvirt.driver [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.856 226833 DEBUG nova.virt.libvirt.driver [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.856 226833 DEBUG nova.virt.libvirt.driver [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.890 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.930 226833 INFO nova.compute.manager [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Took 7.26 seconds to spawn the instance on the hypervisor.
Jan 31 07:37:58 compute-2 nova_compute[226829]: 2026-01-31 07:37:58.931 226833 DEBUG nova.compute.manager [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:37:59 compute-2 nova_compute[226829]: 2026-01-31 07:37:59.036 226833 INFO nova.compute.manager [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Took 8.58 seconds to build instance.
Jan 31 07:37:59 compute-2 nova_compute[226829]: 2026-01-31 07:37:59.052 226833 DEBUG oslo_concurrency.lockutils [None req-2adabfe8-ea0b-4573-a928-82f068c08b1c 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "ae28510e-134b-4be3-b0b5-1eed1b4da893" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:59 compute-2 nova_compute[226829]: 2026-01-31 07:37:59.173 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:37:59 compute-2 nova_compute[226829]: 2026-01-31 07:37:59.174 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:37:59 compute-2 nova_compute[226829]: 2026-01-31 07:37:59.198 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:37:59 compute-2 nova_compute[226829]: 2026-01-31 07:37:59.205 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:37:59 compute-2 nova_compute[226829]: 2026-01-31 07:37:59.205 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:37:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:37:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:37:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:59.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:59 compute-2 ceph-mon[77282]: pgmap v1127: 305 pgs: 305 active+clean; 134 MiB data, 393 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.3 MiB/s wr, 212 op/s
Jan 31 07:38:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:00.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:00 compute-2 podman[240080]: 2026-01-31 07:38:00.221549304 +0000 UTC m=+0.102694296 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2)
Jan 31 07:38:01 compute-2 nova_compute[226829]: 2026-01-31 07:38:01.042 226833 DEBUG nova.compute.manager [req-22aa2e0a-da5a-44f0-b630-644c0cd59eb4 req-32d08c0a-ec85-42b0-a75b-ad862cdcd349 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Received event network-vif-plugged-34b1d0d0-9a83-46fb-8663-0daad4c676e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:38:01 compute-2 nova_compute[226829]: 2026-01-31 07:38:01.042 226833 DEBUG oslo_concurrency.lockutils [req-22aa2e0a-da5a-44f0-b630-644c0cd59eb4 req-32d08c0a-ec85-42b0-a75b-ad862cdcd349 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ae28510e-134b-4be3-b0b5-1eed1b4da893-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:01 compute-2 nova_compute[226829]: 2026-01-31 07:38:01.043 226833 DEBUG oslo_concurrency.lockutils [req-22aa2e0a-da5a-44f0-b630-644c0cd59eb4 req-32d08c0a-ec85-42b0-a75b-ad862cdcd349 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ae28510e-134b-4be3-b0b5-1eed1b4da893-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:01 compute-2 nova_compute[226829]: 2026-01-31 07:38:01.043 226833 DEBUG oslo_concurrency.lockutils [req-22aa2e0a-da5a-44f0-b630-644c0cd59eb4 req-32d08c0a-ec85-42b0-a75b-ad862cdcd349 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ae28510e-134b-4be3-b0b5-1eed1b4da893-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:01 compute-2 nova_compute[226829]: 2026-01-31 07:38:01.044 226833 DEBUG nova.compute.manager [req-22aa2e0a-da5a-44f0-b630-644c0cd59eb4 req-32d08c0a-ec85-42b0-a75b-ad862cdcd349 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] No waiting events found dispatching network-vif-plugged-34b1d0d0-9a83-46fb-8663-0daad4c676e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:38:01 compute-2 nova_compute[226829]: 2026-01-31 07:38:01.044 226833 WARNING nova.compute.manager [req-22aa2e0a-da5a-44f0-b630-644c0cd59eb4 req-32d08c0a-ec85-42b0-a75b-ad862cdcd349 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Received unexpected event network-vif-plugged-34b1d0d0-9a83-46fb-8663-0daad4c676e3 for instance with vm_state active and task_state None.
Jan 31 07:38:01 compute-2 nova_compute[226829]: 2026-01-31 07:38:01.402 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:01 compute-2 nova_compute[226829]: 2026-01-31 07:38:01.495 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:38:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:38:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:01.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:38:01 compute-2 ceph-mon[77282]: pgmap v1128: 305 pgs: 305 active+clean; 148 MiB data, 384 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.8 MiB/s wr, 289 op/s
Jan 31 07:38:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:02.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:03.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:03 compute-2 nova_compute[226829]: 2026-01-31 07:38:03.760 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:03 compute-2 ceph-mon[77282]: pgmap v1129: 305 pgs: 305 active+clean; 157 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.1 MiB/s wr, 279 op/s
Jan 31 07:38:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:04.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:38:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3435878554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:38:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1164294150' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:38:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:05.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:05 compute-2 ceph-mon[77282]: pgmap v1130: 305 pgs: 305 active+clean; 181 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 5.4 MiB/s rd, 3.1 MiB/s wr, 285 op/s
Jan 31 07:38:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4156463602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:06.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:06 compute-2 nova_compute[226829]: 2026-01-31 07:38:06.405 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:06 compute-2 sudo[240110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:38:06 compute-2 sudo[240110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:06 compute-2 sudo[240110]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:06 compute-2 sudo[240135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:38:06 compute-2 sudo[240135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:06 compute-2 sudo[240135]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:06 compute-2 sudo[240160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:38:06 compute-2 sudo[240160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:06 compute-2 sudo[240160]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:06 compute-2 sudo[240185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 07:38:06 compute-2 sudo[240185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:38:06.845 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:38:06.847 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:38:06.848 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:06 compute-2 sudo[240185]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:07 compute-2 sudo[240230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:38:07 compute-2 sudo[240230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:07 compute-2 sudo[240230]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:07 compute-2 sudo[240255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:38:07 compute-2 sudo[240255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:07 compute-2 sudo[240255]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:07 compute-2 sudo[240281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:38:07 compute-2 sudo[240281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:07 compute-2 sudo[240281]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:07 compute-2 sudo[240306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:38:07 compute-2 sudo[240306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:07 compute-2 sudo[240306]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:07.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:07 compute-2 ceph-mon[77282]: pgmap v1131: 305 pgs: 305 active+clean; 181 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 207 op/s
Jan 31 07:38:07 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:38:07 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:38:07 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 07:38:07 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:38:07 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:38:07 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 07:38:07 compute-2 sudo[240362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:38:07 compute-2 sudo[240362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:07 compute-2 sudo[240362]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:07 compute-2 sudo[240387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:38:07 compute-2 sudo[240387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:07 compute-2 sudo[240387]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:07 compute-2 sudo[240412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:38:07 compute-2 sudo[240412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:07 compute-2 sudo[240412]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:08 compute-2 sudo[240437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- inventory --format=json-pretty --filter-for-batch
Jan 31 07:38:08 compute-2 sudo[240437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:08.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:08 compute-2 podman[240501]: 2026-01-31 07:38:08.529651176 +0000 UTC m=+0.075007240 container create 0e2df1a5f87847b695d4e6c9954bbe9a149b65d62f2f68927e851a415071ca62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_bartik, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:38:08 compute-2 systemd[1]: Started libpod-conmon-0e2df1a5f87847b695d4e6c9954bbe9a149b65d62f2f68927e851a415071ca62.scope.
Jan 31 07:38:08 compute-2 podman[240501]: 2026-01-31 07:38:08.490360923 +0000 UTC m=+0.035717037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:38:08 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:38:08 compute-2 podman[240501]: 2026-01-31 07:38:08.630955744 +0000 UTC m=+0.176311808 container init 0e2df1a5f87847b695d4e6c9954bbe9a149b65d62f2f68927e851a415071ca62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_bartik, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 07:38:08 compute-2 podman[240501]: 2026-01-31 07:38:08.640558527 +0000 UTC m=+0.185914591 container start 0e2df1a5f87847b695d4e6c9954bbe9a149b65d62f2f68927e851a415071ca62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_bartik, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 07:38:08 compute-2 podman[240501]: 2026-01-31 07:38:08.646834598 +0000 UTC m=+0.192190662 container attach 0e2df1a5f87847b695d4e6c9954bbe9a149b65d62f2f68927e851a415071ca62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Jan 31 07:38:08 compute-2 serene_bartik[240518]: 167 167
Jan 31 07:38:08 compute-2 systemd[1]: libpod-0e2df1a5f87847b695d4e6c9954bbe9a149b65d62f2f68927e851a415071ca62.scope: Deactivated successfully.
Jan 31 07:38:08 compute-2 podman[240501]: 2026-01-31 07:38:08.653327135 +0000 UTC m=+0.198683169 container died 0e2df1a5f87847b695d4e6c9954bbe9a149b65d62f2f68927e851a415071ca62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 07:38:08 compute-2 systemd[1]: var-lib-containers-storage-overlay-7da2bf9e0cbdb54eed60cc088c71653ae2190faa9544cbccdfe4c44124e1b90a-merged.mount: Deactivated successfully.
Jan 31 07:38:08 compute-2 podman[240501]: 2026-01-31 07:38:08.714314821 +0000 UTC m=+0.259670875 container remove 0e2df1a5f87847b695d4e6c9954bbe9a149b65d62f2f68927e851a415071ca62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_bartik, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 07:38:08 compute-2 systemd[1]: libpod-conmon-0e2df1a5f87847b695d4e6c9954bbe9a149b65d62f2f68927e851a415071ca62.scope: Deactivated successfully.
Jan 31 07:38:08 compute-2 nova_compute[226829]: 2026-01-31 07:38:08.761 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:08 compute-2 podman[240544]: 2026-01-31 07:38:08.885141958 +0000 UTC m=+0.056498324 container create 86f4cb1c77861bafb6a5d818eafd5d3aa88d8135293f3371c15e940388a3d709 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 31 07:38:08 compute-2 systemd[1]: Started libpod-conmon-86f4cb1c77861bafb6a5d818eafd5d3aa88d8135293f3371c15e940388a3d709.scope.
Jan 31 07:38:08 compute-2 podman[240544]: 2026-01-31 07:38:08.864250957 +0000 UTC m=+0.035607413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:38:08 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:38:08 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f9848a99a8ce32d8e2673180837dc2e45c24f94d1f3e0d4bea9e28c1727628/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:38:08 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f9848a99a8ce32d8e2673180837dc2e45c24f94d1f3e0d4bea9e28c1727628/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:38:08 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f9848a99a8ce32d8e2673180837dc2e45c24f94d1f3e0d4bea9e28c1727628/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:38:08 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f9848a99a8ce32d8e2673180837dc2e45c24f94d1f3e0d4bea9e28c1727628/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:38:08 compute-2 podman[240544]: 2026-01-31 07:38:08.987696641 +0000 UTC m=+0.159053027 container init 86f4cb1c77861bafb6a5d818eafd5d3aa88d8135293f3371c15e940388a3d709 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:38:08 compute-2 podman[240544]: 2026-01-31 07:38:08.99756176 +0000 UTC m=+0.168918156 container start 86f4cb1c77861bafb6a5d818eafd5d3aa88d8135293f3371c15e940388a3d709 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 07:38:09 compute-2 podman[240544]: 2026-01-31 07:38:09.037243193 +0000 UTC m=+0.208599579 container attach 86f4cb1c77861bafb6a5d818eafd5d3aa88d8135293f3371c15e940388a3d709 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:38:09 compute-2 sudo[240567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:38:09 compute-2 sudo[240567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:09 compute-2 sudo[240567]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:09 compute-2 sudo[240593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:38:09 compute-2 sudo[240593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:09 compute-2 sudo[240593]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:09 compute-2 podman[240591]: 2026-01-31 07:38:09.469830822 +0000 UTC m=+0.110808579 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 07:38:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:38:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:38:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:09.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:38:09 compute-2 ceph-mon[77282]: pgmap v1132: 305 pgs: 305 active+clean; 186 MiB data, 401 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 196 op/s
Jan 31 07:38:10 compute-2 clever_keldysh[240561]: [
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:     {
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:         "available": false,
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:         "ceph_device": false,
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:         "lsm_data": {},
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:         "lvs": [],
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:         "path": "/dev/sr0",
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:         "rejected_reasons": [
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:             "Insufficient space (<5GB)",
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:             "Has a FileSystem"
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:         ],
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:         "sys_api": {
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:             "actuators": null,
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:             "device_nodes": "sr0",
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:             "devname": "sr0",
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:             "human_readable_size": "482.00 KB",
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:             "id_bus": "ata",
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:             "model": "QEMU DVD-ROM",
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:             "nr_requests": "2",
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:             "parent": "/dev/sr0",
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:             "partitions": {},
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:             "path": "/dev/sr0",
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:             "removable": "1",
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:             "rev": "2.5+",
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:             "ro": "0",
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:             "rotational": "1",
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:             "sas_address": "",
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:             "sas_device_handle": "",
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:             "scheduler_mode": "mq-deadline",
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:             "sectors": 0,
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:             "sectorsize": "2048",
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:             "size": 493568.0,
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:             "support_discard": "2048",
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:             "type": "disk",
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:             "vendor": "QEMU"
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:         }
Jan 31 07:38:10 compute-2 clever_keldysh[240561]:     }
Jan 31 07:38:10 compute-2 clever_keldysh[240561]: ]
Jan 31 07:38:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:10.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:10 compute-2 systemd[1]: libpod-86f4cb1c77861bafb6a5d818eafd5d3aa88d8135293f3371c15e940388a3d709.scope: Deactivated successfully.
Jan 31 07:38:10 compute-2 systemd[1]: libpod-86f4cb1c77861bafb6a5d818eafd5d3aa88d8135293f3371c15e940388a3d709.scope: Consumed 1.202s CPU time.
Jan 31 07:38:10 compute-2 podman[240544]: 2026-01-31 07:38:10.224720715 +0000 UTC m=+1.396077071 container died 86f4cb1c77861bafb6a5d818eafd5d3aa88d8135293f3371c15e940388a3d709 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 31 07:38:10 compute-2 systemd[1]: var-lib-containers-storage-overlay-11f9848a99a8ce32d8e2673180837dc2e45c24f94d1f3e0d4bea9e28c1727628-merged.mount: Deactivated successfully.
Jan 31 07:38:10 compute-2 podman[240544]: 2026-01-31 07:38:10.276773377 +0000 UTC m=+1.448129743 container remove 86f4cb1c77861bafb6a5d818eafd5d3aa88d8135293f3371c15e940388a3d709 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 07:38:10 compute-2 systemd[1]: libpod-conmon-86f4cb1c77861bafb6a5d818eafd5d3aa88d8135293f3371c15e940388a3d709.scope: Deactivated successfully.
Jan 31 07:38:10 compute-2 sudo[240437]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:38:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:38:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:38:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:38:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:38:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:38:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:38:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:38:11 compute-2 ceph-mon[77282]: pgmap v1133: 305 pgs: 305 active+clean; 259 MiB data, 451 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 6.4 MiB/s wr, 290 op/s
Jan 31 07:38:11 compute-2 nova_compute[226829]: 2026-01-31 07:38:11.408 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:38:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:11.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:38:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:38:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:12.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:38:12 compute-2 ovn_controller[133834]: 2026-01-31T07:38:12Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ca:60:aa 10.100.0.8
Jan 31 07:38:12 compute-2 ovn_controller[133834]: 2026-01-31T07:38:12Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ca:60:aa 10.100.0.8
Jan 31 07:38:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:38:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:13.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:38:13 compute-2 nova_compute[226829]: 2026-01-31 07:38:13.764 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:13 compute-2 ceph-mon[77282]: pgmap v1134: 305 pgs: 305 active+clean; 277 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 6.4 MiB/s wr, 260 op/s
Jan 31 07:38:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:38:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:14.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:38:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:38:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:15.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:15 compute-2 ceph-mon[77282]: pgmap v1135: 305 pgs: 305 active+clean; 291 MiB data, 488 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 6.9 MiB/s wr, 267 op/s
Jan 31 07:38:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:38:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:16.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:38:16 compute-2 nova_compute[226829]: 2026-01-31 07:38:16.412 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:16 compute-2 sudo[241768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:38:16 compute-2 sudo[241768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:16 compute-2 sudo[241768]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:16 compute-2 sudo[241793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:38:16 compute-2 sudo[241793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:16 compute-2 sudo[241793]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3568645214' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:38:16 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:38:16 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:38:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4036725515' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:38:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e156 e156: 3 total, 3 up, 3 in
Jan 31 07:38:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:17.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:18.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e157 e157: 3 total, 3 up, 3 in
Jan 31 07:38:18 compute-2 ceph-mon[77282]: pgmap v1136: 305 pgs: 305 active+clean; 293 MiB data, 489 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.0 MiB/s wr, 227 op/s
Jan 31 07:38:18 compute-2 ceph-mon[77282]: osdmap e156: 3 total, 3 up, 3 in
Jan 31 07:38:18 compute-2 nova_compute[226829]: 2026-01-31 07:38:18.770 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:19 compute-2 ceph-mon[77282]: osdmap e157: 3 total, 3 up, 3 in
Jan 31 07:38:19 compute-2 ceph-mon[77282]: pgmap v1139: 305 pgs: 305 active+clean; 293 MiB data, 489 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 122 op/s
Jan 31 07:38:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e158 e158: 3 total, 3 up, 3 in
Jan 31 07:38:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:38:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:38:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:19.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:38:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:20.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:20 compute-2 ceph-mon[77282]: osdmap e158: 3 total, 3 up, 3 in
Jan 31 07:38:21 compute-2 nova_compute[226829]: 2026-01-31 07:38:21.414 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:21 compute-2 ceph-mon[77282]: pgmap v1141: 305 pgs: 305 active+clean; 322 MiB data, 489 MiB used, 21 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.8 MiB/s wr, 344 op/s
Jan 31 07:38:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:38:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:21.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:38:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.004000108s ======
Jan 31 07:38:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:22.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000108s
Jan 31 07:38:23 compute-2 ceph-mon[77282]: pgmap v1142: 305 pgs: 305 active+clean; 339 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 6.5 MiB/s rd, 3.6 MiB/s wr, 474 op/s
Jan 31 07:38:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e159 e159: 3 total, 3 up, 3 in
Jan 31 07:38:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:38:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:23.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:38:23 compute-2 nova_compute[226829]: 2026-01-31 07:38:23.774 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:38:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:24.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:38:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:38:24 compute-2 ceph-mon[77282]: osdmap e159: 3 total, 3 up, 3 in
Jan 31 07:38:24 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:38:24.956 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:38:24 compute-2 nova_compute[226829]: 2026-01-31 07:38:24.956 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:24 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:38:24.957 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:38:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:25.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:25 compute-2 ceph-mon[77282]: pgmap v1144: 305 pgs: 305 active+clean; 332 MiB data, 516 MiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 3.5 MiB/s wr, 545 op/s
Jan 31 07:38:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3430571829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:38:25.960 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:38:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:38:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:26.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:38:26 compute-2 nova_compute[226829]: 2026-01-31 07:38:26.416 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:27.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:27 compute-2 ceph-mon[77282]: pgmap v1145: 305 pgs: 305 active+clean; 325 MiB data, 501 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.0 MiB/s wr, 497 op/s
Jan 31 07:38:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4091500420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:28.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e160 e160: 3 total, 3 up, 3 in
Jan 31 07:38:28 compute-2 nova_compute[226829]: 2026-01-31 07:38:28.776 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:29 compute-2 sudo[241825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:38:29 compute-2 sudo[241825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:29 compute-2 sudo[241825]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:29 compute-2 sudo[241851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:38:29 compute-2 sudo[241851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:29 compute-2 sudo[241851]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:38:29 compute-2 ceph-mon[77282]: pgmap v1146: 305 pgs: 305 active+clean; 302 MiB data, 490 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.9 MiB/s wr, 426 op/s
Jan 31 07:38:29 compute-2 ceph-mon[77282]: osdmap e160: 3 total, 3 up, 3 in
Jan 31 07:38:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:38:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:29.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:38:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:38:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:30.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:38:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2789639205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:31 compute-2 podman[241876]: 2026-01-31 07:38:31.239321489 +0000 UTC m=+0.125735796 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:38:31 compute-2 nova_compute[226829]: 2026-01-31 07:38:31.418 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:31 compute-2 ceph-mon[77282]: pgmap v1148: 305 pgs: 305 active+clean; 307 MiB data, 508 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.3 MiB/s wr, 231 op/s
Jan 31 07:38:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2176314634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:38:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:31.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:32.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/392069509' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:38:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e161 e161: 3 total, 3 up, 3 in
Jan 31 07:38:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:33.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:33 compute-2 nova_compute[226829]: 2026-01-31 07:38:33.778 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:33 compute-2 ceph-mon[77282]: pgmap v1149: 305 pgs: 305 active+clean; 323 MiB data, 515 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.6 MiB/s wr, 230 op/s
Jan 31 07:38:33 compute-2 ceph-mon[77282]: osdmap e161: 3 total, 3 up, 3 in
Jan 31 07:38:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:34.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:38:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:38:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:35.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:38:35 compute-2 ceph-mon[77282]: pgmap v1151: 305 pgs: 305 active+clean; 352 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 531 KiB/s rd, 6.4 MiB/s wr, 183 op/s
Jan 31 07:38:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:38:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:36.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:38:36 compute-2 nova_compute[226829]: 2026-01-31 07:38:36.422 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:37.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:37 compute-2 ceph-mon[77282]: pgmap v1152: 305 pgs: 305 active+clean; 372 MiB data, 541 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 7.2 MiB/s wr, 226 op/s
Jan 31 07:38:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:38:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:38.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:38:38 compute-2 nova_compute[226829]: 2026-01-31 07:38:38.781 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:38:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:38:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:39.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:38:39 compute-2 ceph-mon[77282]: pgmap v1153: 305 pgs: 305 active+clean; 372 MiB data, 541 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.8 MiB/s wr, 187 op/s
Jan 31 07:38:40 compute-2 podman[241908]: 2026-01-31 07:38:40.175117839 +0000 UTC m=+0.061508261 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 07:38:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:40.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:41 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3320545784' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:38:41 compute-2 nova_compute[226829]: 2026-01-31 07:38:41.424 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:38:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:41.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:38:42 compute-2 ceph-mon[77282]: pgmap v1154: 305 pgs: 305 active+clean; 372 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.4 MiB/s wr, 170 op/s
Jan 31 07:38:42 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2154855559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:38:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:42.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:43.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:43 compute-2 nova_compute[226829]: 2026-01-31 07:38:43.783 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:44 compute-2 ceph-mon[77282]: pgmap v1155: 305 pgs: 305 active+clean; 372 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.7 MiB/s wr, 150 op/s
Jan 31 07:38:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:38:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:44.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:38:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:38:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3533633623' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:38:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3533633623' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:38:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:45.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:46 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Jan 31 07:38:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:38:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:46.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:38:46 compute-2 ceph-mon[77282]: pgmap v1156: 305 pgs: 305 active+clean; 372 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.5 MiB/s wr, 147 op/s
Jan 31 07:38:46 compute-2 nova_compute[226829]: 2026-01-31 07:38:46.426 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:38:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:47.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:38:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:38:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:48.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:38:48 compute-2 ceph-mon[77282]: pgmap v1157: 305 pgs: 305 active+clean; 372 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 642 KiB/s wr, 109 op/s
Jan 31 07:38:48 compute-2 nova_compute[226829]: 2026-01-31 07:38:48.785 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e162 e162: 3 total, 3 up, 3 in
Jan 31 07:38:49 compute-2 ceph-mon[77282]: pgmap v1158: 305 pgs: 305 active+clean; 374 MiB data, 543 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 126 KiB/s wr, 93 op/s
Jan 31 07:38:49 compute-2 sudo[241932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:38:49 compute-2 sudo[241932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:49 compute-2 sudo[241932]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:38:49 compute-2 sudo[241957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:38:49 compute-2 sudo[241957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:49 compute-2 sudo[241957]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:49.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:50.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:50 compute-2 ceph-mon[77282]: osdmap e162: 3 total, 3 up, 3 in
Jan 31 07:38:51 compute-2 nova_compute[226829]: 2026-01-31 07:38:51.429 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:51 compute-2 ceph-mon[77282]: pgmap v1160: 305 pgs: 305 active+clean; 399 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 3.8 MiB/s wr, 199 op/s
Jan 31 07:38:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:38:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:51.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:38:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:38:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:52.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:38:52 compute-2 nova_compute[226829]: 2026-01-31 07:38:52.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:38:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3921444299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:53 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Jan 31 07:38:53 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:38:53.707310) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:38:53 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Jan 31 07:38:53 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845133707507, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2391, "num_deletes": 259, "total_data_size": 5459440, "memory_usage": 5541200, "flush_reason": "Manual Compaction"}
Jan 31 07:38:53 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Jan 31 07:38:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:53.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:53 compute-2 nova_compute[226829]: 2026-01-31 07:38:53.788 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:53 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845133859957, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 3527382, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25623, "largest_seqno": 28009, "table_properties": {"data_size": 3517726, "index_size": 6022, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20633, "raw_average_key_size": 20, "raw_value_size": 3497941, "raw_average_value_size": 3439, "num_data_blocks": 265, "num_entries": 1017, "num_filter_entries": 1017, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844960, "oldest_key_time": 1769844960, "file_creation_time": 1769845133, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:38:53 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 152775 microseconds, and 12476 cpu microseconds.
Jan 31 07:38:53 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:38:53 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:38:53.860085) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 3527382 bytes OK
Jan 31 07:38:53 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:38:53.860122) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Jan 31 07:38:53 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:38:53.883383) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Jan 31 07:38:53 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:38:53.883417) EVENT_LOG_v1 {"time_micros": 1769845133883406, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:38:53 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:38:53.883443) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:38:53 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 5448720, prev total WAL file size 5450683, number of live WAL files 2.
Jan 31 07:38:53 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:38:53 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:38:53.885167) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Jan 31 07:38:53 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:38:53 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(3444KB)], [51(8785KB)]
Jan 31 07:38:53 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845133885284, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 12523452, "oldest_snapshot_seqno": -1}
Jan 31 07:38:53 compute-2 ceph-mon[77282]: pgmap v1161: 305 pgs: 305 active+clean; 375 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.6 MiB/s wr, 211 op/s
Jan 31 07:38:54 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 5453 keys, 12404208 bytes, temperature: kUnknown
Jan 31 07:38:54 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845134118059, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 12404208, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12363413, "index_size": 26029, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 137210, "raw_average_key_size": 25, "raw_value_size": 12261089, "raw_average_value_size": 2248, "num_data_blocks": 1075, "num_entries": 5453, "num_filter_entries": 5453, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769845133, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:38:54 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:38:54 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:38:54.118789) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 12404208 bytes
Jan 31 07:38:54 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:38:54.125279) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 53.7 rd, 53.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 8.6 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(7.1) write-amplify(3.5) OK, records in: 5990, records dropped: 537 output_compression: NoCompression
Jan 31 07:38:54 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:38:54.125302) EVENT_LOG_v1 {"time_micros": 1769845134125291, "job": 30, "event": "compaction_finished", "compaction_time_micros": 233308, "compaction_time_cpu_micros": 41040, "output_level": 6, "num_output_files": 1, "total_output_size": 12404208, "num_input_records": 5990, "num_output_records": 5453, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:38:54 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:38:54 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845134125851, "job": 30, "event": "table_file_deletion", "file_number": 53}
Jan 31 07:38:54 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:38:54 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845134127070, "job": 30, "event": "table_file_deletion", "file_number": 51}
Jan 31 07:38:54 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:38:53.884944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:38:54 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:38:54.127115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:38:54 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:38:54.127121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:38:54 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:38:54.127123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:38:54 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:38:54.127125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:38:54 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:38:54.127127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:38:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:38:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:54.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:38:54 compute-2 nova_compute[226829]: 2026-01-31 07:38:54.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:38:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:38:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1990709756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:55 compute-2 nova_compute[226829]: 2026-01-31 07:38:55.112 226833 DEBUG oslo_concurrency.lockutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Acquiring lock "4987b961-5362-48cd-80ca-ff6a201327e6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:55 compute-2 nova_compute[226829]: 2026-01-31 07:38:55.113 226833 DEBUG oslo_concurrency.lockutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Lock "4987b961-5362-48cd-80ca-ff6a201327e6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:55 compute-2 nova_compute[226829]: 2026-01-31 07:38:55.377 226833 DEBUG nova.compute.manager [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:38:55 compute-2 nova_compute[226829]: 2026-01-31 07:38:55.697 226833 DEBUG oslo_concurrency.lockutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:55 compute-2 nova_compute[226829]: 2026-01-31 07:38:55.698 226833 DEBUG oslo_concurrency.lockutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:55 compute-2 nova_compute[226829]: 2026-01-31 07:38:55.707 226833 DEBUG nova.virt.hardware [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:38:55 compute-2 nova_compute[226829]: 2026-01-31 07:38:55.707 226833 INFO nova.compute.claims [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:38:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:38:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:55.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:38:55 compute-2 nova_compute[226829]: 2026-01-31 07:38:55.943 226833 DEBUG oslo_concurrency.processutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:38:56 compute-2 ceph-mon[77282]: pgmap v1162: 305 pgs: 305 active+clean; 346 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 4.6 MiB/s wr, 211 op/s
Jan 31 07:38:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:56.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:38:56 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/78430908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:56 compute-2 nova_compute[226829]: 2026-01-31 07:38:56.432 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:56 compute-2 nova_compute[226829]: 2026-01-31 07:38:56.440 226833 DEBUG oslo_concurrency.processutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:38:56 compute-2 nova_compute[226829]: 2026-01-31 07:38:56.447 226833 DEBUG nova.compute.provider_tree [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:38:56 compute-2 nova_compute[226829]: 2026-01-31 07:38:56.481 226833 DEBUG nova.scheduler.client.report [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:38:56 compute-2 nova_compute[226829]: 2026-01-31 07:38:56.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:38:56 compute-2 nova_compute[226829]: 2026-01-31 07:38:56.490 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:38:56 compute-2 nova_compute[226829]: 2026-01-31 07:38:56.535 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:56 compute-2 nova_compute[226829]: 2026-01-31 07:38:56.564 226833 DEBUG oslo_concurrency.lockutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:56 compute-2 nova_compute[226829]: 2026-01-31 07:38:56.566 226833 DEBUG nova.compute.manager [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:38:56 compute-2 nova_compute[226829]: 2026-01-31 07:38:56.569 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:56 compute-2 nova_compute[226829]: 2026-01-31 07:38:56.570 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:56 compute-2 nova_compute[226829]: 2026-01-31 07:38:56.570 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:38:56 compute-2 nova_compute[226829]: 2026-01-31 07:38:56.571 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:38:56 compute-2 nova_compute[226829]: 2026-01-31 07:38:56.687 226833 DEBUG nova.compute.manager [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:38:56 compute-2 nova_compute[226829]: 2026-01-31 07:38:56.688 226833 DEBUG nova.network.neutron [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:38:56 compute-2 nova_compute[226829]: 2026-01-31 07:38:56.754 226833 INFO nova.virt.libvirt.driver [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:38:56 compute-2 nova_compute[226829]: 2026-01-31 07:38:56.813 226833 DEBUG nova.compute.manager [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:38:56 compute-2 nova_compute[226829]: 2026-01-31 07:38:56.936 226833 DEBUG nova.policy [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '73c9db412cc647958ba8093d8f187dce', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4511f016f1e44a299447f7fe1ad1a7ab', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:38:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:38:56 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/880117336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:56 compute-2 nova_compute[226829]: 2026-01-31 07:38:56.988 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:38:57 compute-2 nova_compute[226829]: 2026-01-31 07:38:57.090 226833 DEBUG nova.compute.manager [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:38:57 compute-2 nova_compute[226829]: 2026-01-31 07:38:57.092 226833 DEBUG nova.virt.libvirt.driver [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:38:57 compute-2 nova_compute[226829]: 2026-01-31 07:38:57.093 226833 INFO nova.virt.libvirt.driver [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Creating image(s)
Jan 31 07:38:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/78430908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3481407038' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:38:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/880117336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:57 compute-2 nova_compute[226829]: 2026-01-31 07:38:57.251 226833 DEBUG nova.storage.rbd_utils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] rbd image 4987b961-5362-48cd-80ca-ff6a201327e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:38:57 compute-2 nova_compute[226829]: 2026-01-31 07:38:57.306 226833 DEBUG nova.storage.rbd_utils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] rbd image 4987b961-5362-48cd-80ca-ff6a201327e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:38:57 compute-2 nova_compute[226829]: 2026-01-31 07:38:57.349 226833 DEBUG nova.storage.rbd_utils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] rbd image 4987b961-5362-48cd-80ca-ff6a201327e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:38:57 compute-2 nova_compute[226829]: 2026-01-31 07:38:57.355 226833 DEBUG oslo_concurrency.processutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:38:57 compute-2 nova_compute[226829]: 2026-01-31 07:38:57.434 226833 DEBUG oslo_concurrency.processutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:38:57 compute-2 nova_compute[226829]: 2026-01-31 07:38:57.436 226833 DEBUG oslo_concurrency.lockutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:57 compute-2 nova_compute[226829]: 2026-01-31 07:38:57.437 226833 DEBUG oslo_concurrency.lockutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:57 compute-2 nova_compute[226829]: 2026-01-31 07:38:57.438 226833 DEBUG oslo_concurrency.lockutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:57 compute-2 nova_compute[226829]: 2026-01-31 07:38:57.481 226833 DEBUG nova.storage.rbd_utils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] rbd image 4987b961-5362-48cd-80ca-ff6a201327e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:38:57 compute-2 nova_compute[226829]: 2026-01-31 07:38:57.487 226833 DEBUG oslo_concurrency.processutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 4987b961-5362-48cd-80ca-ff6a201327e6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:38:57 compute-2 nova_compute[226829]: 2026-01-31 07:38:57.519 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000022 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:38:57 compute-2 nova_compute[226829]: 2026-01-31 07:38:57.520 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000022 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:38:57 compute-2 nova_compute[226829]: 2026-01-31 07:38:57.705 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:38:57 compute-2 nova_compute[226829]: 2026-01-31 07:38:57.707 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4599MB free_disk=20.830982208251953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:38:57 compute-2 nova_compute[226829]: 2026-01-31 07:38:57.707 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:57 compute-2 nova_compute[226829]: 2026-01-31 07:38:57.707 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:57.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:57 compute-2 nova_compute[226829]: 2026-01-31 07:38:57.982 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance ae28510e-134b-4be3-b0b5-1eed1b4da893 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:38:57 compute-2 nova_compute[226829]: 2026-01-31 07:38:57.983 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 4987b961-5362-48cd-80ca-ff6a201327e6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:38:57 compute-2 nova_compute[226829]: 2026-01-31 07:38:57.983 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:38:57 compute-2 nova_compute[226829]: 2026-01-31 07:38:57.983 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:38:58 compute-2 nova_compute[226829]: 2026-01-31 07:38:58.066 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:38:58 compute-2 ceph-mon[77282]: pgmap v1163: 305 pgs: 305 active+clean; 359 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.5 MiB/s wr, 204 op/s
Jan 31 07:38:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4008658442' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:38:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:38:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:58.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:38:58 compute-2 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 31 07:38:58 compute-2 nova_compute[226829]: 2026-01-31 07:38:58.551 226833 DEBUG nova.network.neutron [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Successfully created port: 045ce842-8d48-43b3-8c03-27d4ea7b65cc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:38:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:38:58 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2250944479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:58 compute-2 nova_compute[226829]: 2026-01-31 07:38:58.705 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:38:58 compute-2 nova_compute[226829]: 2026-01-31 07:38:58.711 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:38:58 compute-2 nova_compute[226829]: 2026-01-31 07:38:58.746 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:38:58 compute-2 nova_compute[226829]: 2026-01-31 07:38:58.792 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:58 compute-2 nova_compute[226829]: 2026-01-31 07:38:58.818 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:38:58 compute-2 nova_compute[226829]: 2026-01-31 07:38:58.819 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:59 compute-2 nova_compute[226829]: 2026-01-31 07:38:59.461 226833 DEBUG nova.network.neutron [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Successfully updated port: 045ce842-8d48-43b3-8c03-27d4ea7b65cc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:38:59 compute-2 nova_compute[226829]: 2026-01-31 07:38:59.522 226833 DEBUG oslo_concurrency.lockutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Acquiring lock "refresh_cache-4987b961-5362-48cd-80ca-ff6a201327e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:38:59 compute-2 nova_compute[226829]: 2026-01-31 07:38:59.522 226833 DEBUG oslo_concurrency.lockutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Acquired lock "refresh_cache-4987b961-5362-48cd-80ca-ff6a201327e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:38:59 compute-2 nova_compute[226829]: 2026-01-31 07:38:59.522 226833 DEBUG nova.network.neutron [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:38:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2784302991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2250944479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/833713787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:59 compute-2 nova_compute[226829]: 2026-01-31 07:38:59.693 226833 DEBUG nova.compute.manager [req-29ee0d99-2125-4250-851f-148f85f749e3 req-752efcae-926a-49a0-aabd-524f68f1ebde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Received event network-changed-045ce842-8d48-43b3-8c03-27d4ea7b65cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:38:59 compute-2 nova_compute[226829]: 2026-01-31 07:38:59.694 226833 DEBUG nova.compute.manager [req-29ee0d99-2125-4250-851f-148f85f749e3 req-752efcae-926a-49a0-aabd-524f68f1ebde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Refreshing instance network info cache due to event network-changed-045ce842-8d48-43b3-8c03-27d4ea7b65cc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:38:59 compute-2 nova_compute[226829]: 2026-01-31 07:38:59.695 226833 DEBUG oslo_concurrency.lockutils [req-29ee0d99-2125-4250-851f-148f85f749e3 req-752efcae-926a-49a0-aabd-524f68f1ebde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-4987b961-5362-48cd-80ca-ff6a201327e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:38:59 compute-2 nova_compute[226829]: 2026-01-31 07:38:59.717 226833 DEBUG nova.network.neutron [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:38:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:38:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:38:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:59.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:38:59 compute-2 nova_compute[226829]: 2026-01-31 07:38:59.815 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:38:59 compute-2 nova_compute[226829]: 2026-01-31 07:38:59.815 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:38:59 compute-2 nova_compute[226829]: 2026-01-31 07:38:59.816 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:38:59 compute-2 nova_compute[226829]: 2026-01-31 07:38:59.816 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:38:59 compute-2 nova_compute[226829]: 2026-01-31 07:38:59.844 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 07:38:59 compute-2 nova_compute[226829]: 2026-01-31 07:38:59.990 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-ae28510e-134b-4be3-b0b5-1eed1b4da893" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:38:59 compute-2 nova_compute[226829]: 2026-01-31 07:38:59.991 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-ae28510e-134b-4be3-b0b5-1eed1b4da893" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:38:59 compute-2 nova_compute[226829]: 2026-01-31 07:38:59.991 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 07:38:59 compute-2 nova_compute[226829]: 2026-01-31 07:38:59.991 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid ae28510e-134b-4be3-b0b5-1eed1b4da893 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:39:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:39:00.117762) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845140117843, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 330, "num_deletes": 251, "total_data_size": 183050, "memory_usage": 189720, "flush_reason": "Manual Compaction"}
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Jan 31 07:39:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:00.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845140287378, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 120120, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28014, "largest_seqno": 28339, "table_properties": {"data_size": 118076, "index_size": 208, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5360, "raw_average_key_size": 18, "raw_value_size": 114012, "raw_average_value_size": 395, "num_data_blocks": 9, "num_entries": 288, "num_filter_entries": 288, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845133, "oldest_key_time": 1769845133, "file_creation_time": 1769845140, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 169733 microseconds, and 1982 cpu microseconds.
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:39:00.287496) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 120120 bytes OK
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:39:00.287532) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:39:00.348934) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:39:00.348990) EVENT_LOG_v1 {"time_micros": 1769845140348976, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:39:00.349020) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 180717, prev total WAL file size 180717, number of live WAL files 2.
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:39:00.349826) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(117KB)], [54(11MB)]
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845140349932, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 12524328, "oldest_snapshot_seqno": -1}
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 5231 keys, 10608901 bytes, temperature: kUnknown
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845140529265, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 10608901, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10571264, "index_size": 23447, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13125, "raw_key_size": 133316, "raw_average_key_size": 25, "raw_value_size": 10474384, "raw_average_value_size": 2002, "num_data_blocks": 960, "num_entries": 5231, "num_filter_entries": 5231, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769845140, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:39:00.529617) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 10608901 bytes
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:39:00.557477) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 69.8 rd, 59.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 11.8 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(192.6) write-amplify(88.3) OK, records in: 5741, records dropped: 510 output_compression: NoCompression
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:39:00.557528) EVENT_LOG_v1 {"time_micros": 1769845140557508, "job": 32, "event": "compaction_finished", "compaction_time_micros": 179443, "compaction_time_cpu_micros": 41342, "output_level": 6, "num_output_files": 1, "total_output_size": 10608901, "num_input_records": 5741, "num_output_records": 5231, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845140557769, "job": 32, "event": "table_file_deletion", "file_number": 56}
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845140559658, "job": 32, "event": "table_file_deletion", "file_number": 54}
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:39:00.349637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:39:00.559774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:39:00.559784) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:39:00.559788) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:39:00.559793) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:39:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:39:00.559797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:39:00 compute-2 ceph-mon[77282]: pgmap v1164: 305 pgs: 305 active+clean; 397 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 7.4 MiB/s wr, 200 op/s
Jan 31 07:39:00 compute-2 nova_compute[226829]: 2026-01-31 07:39:00.770 226833 DEBUG oslo_concurrency.processutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 4987b961-5362-48cd-80ca-ff6a201327e6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:39:00 compute-2 nova_compute[226829]: 2026-01-31 07:39:00.865 226833 DEBUG nova.storage.rbd_utils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] resizing rbd image 4987b961-5362-48cd-80ca-ff6a201327e6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:39:01 compute-2 nova_compute[226829]: 2026-01-31 07:39:01.004 226833 DEBUG nova.network.neutron [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Updating instance_info_cache with network_info: [{"id": "045ce842-8d48-43b3-8c03-27d4ea7b65cc", "address": "fa:16:3e:90:a5:31", "network": {"id": "470ea8b7-e52d-4600-9179-eb48dda5f49d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1196752311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4511f016f1e44a299447f7fe1ad1a7ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap045ce842-8d", "ovs_interfaceid": "045ce842-8d48-43b3-8c03-27d4ea7b65cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:39:01 compute-2 nova_compute[226829]: 2026-01-31 07:39:01.195 226833 DEBUG oslo_concurrency.lockutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Releasing lock "refresh_cache-4987b961-5362-48cd-80ca-ff6a201327e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:39:01 compute-2 nova_compute[226829]: 2026-01-31 07:39:01.195 226833 DEBUG nova.compute.manager [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Instance network_info: |[{"id": "045ce842-8d48-43b3-8c03-27d4ea7b65cc", "address": "fa:16:3e:90:a5:31", "network": {"id": "470ea8b7-e52d-4600-9179-eb48dda5f49d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1196752311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4511f016f1e44a299447f7fe1ad1a7ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap045ce842-8d", "ovs_interfaceid": "045ce842-8d48-43b3-8c03-27d4ea7b65cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:39:01 compute-2 nova_compute[226829]: 2026-01-31 07:39:01.196 226833 DEBUG oslo_concurrency.lockutils [req-29ee0d99-2125-4250-851f-148f85f749e3 req-752efcae-926a-49a0-aabd-524f68f1ebde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-4987b961-5362-48cd-80ca-ff6a201327e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:39:01 compute-2 nova_compute[226829]: 2026-01-31 07:39:01.196 226833 DEBUG nova.network.neutron [req-29ee0d99-2125-4250-851f-148f85f749e3 req-752efcae-926a-49a0-aabd-524f68f1ebde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Refreshing network info cache for port 045ce842-8d48-43b3-8c03-27d4ea7b65cc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:39:01 compute-2 nova_compute[226829]: 2026-01-31 07:39:01.436 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:01 compute-2 nova_compute[226829]: 2026-01-31 07:39:01.760 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Updating instance_info_cache with network_info: [{"id": "34b1d0d0-9a83-46fb-8663-0daad4c676e3", "address": "fa:16:3e:ca:60:aa", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34b1d0d0-9a", "ovs_interfaceid": "34b1d0d0-9a83-46fb-8663-0daad4c676e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:39:01 compute-2 ceph-mon[77282]: pgmap v1165: 305 pgs: 305 active+clean; 448 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 7.5 MiB/s wr, 194 op/s
Jan 31 07:39:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2116205950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:39:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:01.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:39:01 compute-2 nova_compute[226829]: 2026-01-31 07:39:01.791 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-ae28510e-134b-4be3-b0b5-1eed1b4da893" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:39:01 compute-2 nova_compute[226829]: 2026-01-31 07:39:01.791 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 07:39:01 compute-2 nova_compute[226829]: 2026-01-31 07:39:01.791 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:39:01 compute-2 nova_compute[226829]: 2026-01-31 07:39:01.792 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:39:01 compute-2 nova_compute[226829]: 2026-01-31 07:39:01.792 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:39:01 compute-2 nova_compute[226829]: 2026-01-31 07:39:01.869 226833 DEBUG nova.objects.instance [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Lazy-loading 'migration_context' on Instance uuid 4987b961-5362-48cd-80ca-ff6a201327e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:39:02 compute-2 nova_compute[226829]: 2026-01-31 07:39:02.170 226833 DEBUG nova.virt.libvirt.driver [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:39:02 compute-2 nova_compute[226829]: 2026-01-31 07:39:02.171 226833 DEBUG nova.virt.libvirt.driver [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Ensure instance console log exists: /var/lib/nova/instances/4987b961-5362-48cd-80ca-ff6a201327e6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:39:02 compute-2 nova_compute[226829]: 2026-01-31 07:39:02.172 226833 DEBUG oslo_concurrency.lockutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:02 compute-2 nova_compute[226829]: 2026-01-31 07:39:02.173 226833 DEBUG oslo_concurrency.lockutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:02 compute-2 nova_compute[226829]: 2026-01-31 07:39:02.173 226833 DEBUG oslo_concurrency.lockutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:02 compute-2 nova_compute[226829]: 2026-01-31 07:39:02.178 226833 DEBUG nova.virt.libvirt.driver [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Start _get_guest_xml network_info=[{"id": "045ce842-8d48-43b3-8c03-27d4ea7b65cc", "address": "fa:16:3e:90:a5:31", "network": {"id": "470ea8b7-e52d-4600-9179-eb48dda5f49d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1196752311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4511f016f1e44a299447f7fe1ad1a7ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap045ce842-8d", "ovs_interfaceid": "045ce842-8d48-43b3-8c03-27d4ea7b65cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:39:02 compute-2 nova_compute[226829]: 2026-01-31 07:39:02.186 226833 WARNING nova.virt.libvirt.driver [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:39:02 compute-2 nova_compute[226829]: 2026-01-31 07:39:02.193 226833 DEBUG nova.virt.libvirt.host [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:39:02 compute-2 nova_compute[226829]: 2026-01-31 07:39:02.193 226833 DEBUG nova.virt.libvirt.host [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:39:02 compute-2 nova_compute[226829]: 2026-01-31 07:39:02.197 226833 DEBUG nova.virt.libvirt.host [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:39:02 compute-2 nova_compute[226829]: 2026-01-31 07:39:02.198 226833 DEBUG nova.virt.libvirt.host [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:39:02 compute-2 nova_compute[226829]: 2026-01-31 07:39:02.200 226833 DEBUG nova.virt.libvirt.driver [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:39:02 compute-2 nova_compute[226829]: 2026-01-31 07:39:02.200 226833 DEBUG nova.virt.hardware [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:39:02 compute-2 nova_compute[226829]: 2026-01-31 07:39:02.201 226833 DEBUG nova.virt.hardware [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:39:02 compute-2 nova_compute[226829]: 2026-01-31 07:39:02.202 226833 DEBUG nova.virt.hardware [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:39:02 compute-2 nova_compute[226829]: 2026-01-31 07:39:02.202 226833 DEBUG nova.virt.hardware [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:39:02 compute-2 nova_compute[226829]: 2026-01-31 07:39:02.202 226833 DEBUG nova.virt.hardware [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:39:02 compute-2 nova_compute[226829]: 2026-01-31 07:39:02.203 226833 DEBUG nova.virt.hardware [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:39:02 compute-2 nova_compute[226829]: 2026-01-31 07:39:02.203 226833 DEBUG nova.virt.hardware [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:39:02 compute-2 nova_compute[226829]: 2026-01-31 07:39:02.204 226833 DEBUG nova.virt.hardware [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:39:02 compute-2 nova_compute[226829]: 2026-01-31 07:39:02.204 226833 DEBUG nova.virt.hardware [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:39:02 compute-2 nova_compute[226829]: 2026-01-31 07:39:02.204 226833 DEBUG nova.virt.hardware [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:39:02 compute-2 nova_compute[226829]: 2026-01-31 07:39:02.205 226833 DEBUG nova.virt.hardware [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:39:02 compute-2 nova_compute[226829]: 2026-01-31 07:39:02.210 226833 DEBUG oslo_concurrency.processutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:39:02 compute-2 podman[242222]: 2026-01-31 07:39:02.250082024 +0000 UTC m=+0.114595122 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:39:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:39:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:02.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:39:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:39:02 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2014041756' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:39:02 compute-2 nova_compute[226829]: 2026-01-31 07:39:02.713 226833 DEBUG oslo_concurrency.processutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:39:02 compute-2 nova_compute[226829]: 2026-01-31 07:39:02.752 226833 DEBUG nova.storage.rbd_utils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] rbd image 4987b961-5362-48cd-80ca-ff6a201327e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:39:02 compute-2 nova_compute[226829]: 2026-01-31 07:39:02.758 226833 DEBUG oslo_concurrency.processutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:39:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:39:03 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1255292434' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.206 226833 DEBUG oslo_concurrency.processutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.209 226833 DEBUG nova.virt.libvirt.vif [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:38:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1787237336',display_name='tempest-ImagesOneServerTestJSON-server-1787237336',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1787237336',id=39,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4511f016f1e44a299447f7fe1ad1a7ab',ramdisk_id='',reservation_id='r-jte6xd6g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-626457921',owner_user_name='tempest-ImagesOneServerTestJSON-626457921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:38:56Z,user_data=None,user_id='73c9db412cc647958ba8093d8f187dce',uuid=4987b961-5362-48cd-80ca-ff6a201327e6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "045ce842-8d48-43b3-8c03-27d4ea7b65cc", "address": "fa:16:3e:90:a5:31", "network": {"id": "470ea8b7-e52d-4600-9179-eb48dda5f49d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1196752311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4511f016f1e44a299447f7fe1ad1a7ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap045ce842-8d", "ovs_interfaceid": "045ce842-8d48-43b3-8c03-27d4ea7b65cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.210 226833 DEBUG nova.network.os_vif_util [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Converting VIF {"id": "045ce842-8d48-43b3-8c03-27d4ea7b65cc", "address": "fa:16:3e:90:a5:31", "network": {"id": "470ea8b7-e52d-4600-9179-eb48dda5f49d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1196752311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4511f016f1e44a299447f7fe1ad1a7ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap045ce842-8d", "ovs_interfaceid": "045ce842-8d48-43b3-8c03-27d4ea7b65cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.212 226833 DEBUG nova.network.os_vif_util [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:a5:31,bridge_name='br-int',has_traffic_filtering=True,id=045ce842-8d48-43b3-8c03-27d4ea7b65cc,network=Network(470ea8b7-e52d-4600-9179-eb48dda5f49d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap045ce842-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.214 226833 DEBUG nova.objects.instance [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Lazy-loading 'pci_devices' on Instance uuid 4987b961-5362-48cd-80ca-ff6a201327e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.241 226833 DEBUG nova.virt.libvirt.driver [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:39:03 compute-2 nova_compute[226829]:   <uuid>4987b961-5362-48cd-80ca-ff6a201327e6</uuid>
Jan 31 07:39:03 compute-2 nova_compute[226829]:   <name>instance-00000027</name>
Jan 31 07:39:03 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:39:03 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:39:03 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <nova:name>tempest-ImagesOneServerTestJSON-server-1787237336</nova:name>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:39:02</nova:creationTime>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:39:03 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:39:03 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:39:03 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:39:03 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:39:03 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:39:03 compute-2 nova_compute[226829]:         <nova:user uuid="73c9db412cc647958ba8093d8f187dce">tempest-ImagesOneServerTestJSON-626457921-project-member</nova:user>
Jan 31 07:39:03 compute-2 nova_compute[226829]:         <nova:project uuid="4511f016f1e44a299447f7fe1ad1a7ab">tempest-ImagesOneServerTestJSON-626457921</nova:project>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 07:39:03 compute-2 nova_compute[226829]:         <nova:port uuid="045ce842-8d48-43b3-8c03-27d4ea7b65cc">
Jan 31 07:39:03 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:39:03 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:39:03 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <system>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <entry name="serial">4987b961-5362-48cd-80ca-ff6a201327e6</entry>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <entry name="uuid">4987b961-5362-48cd-80ca-ff6a201327e6</entry>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     </system>
Jan 31 07:39:03 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:39:03 compute-2 nova_compute[226829]:   <os>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:   </os>
Jan 31 07:39:03 compute-2 nova_compute[226829]:   <features>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:   </features>
Jan 31 07:39:03 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:39:03 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:39:03 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/4987b961-5362-48cd-80ca-ff6a201327e6_disk">
Jan 31 07:39:03 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       </source>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:39:03 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/4987b961-5362-48cd-80ca-ff6a201327e6_disk.config">
Jan 31 07:39:03 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       </source>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:39:03 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:90:a5:31"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <target dev="tap045ce842-8d"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/4987b961-5362-48cd-80ca-ff6a201327e6/console.log" append="off"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <video>
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     </video>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:39:03 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:39:03 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:39:03 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:39:03 compute-2 nova_compute[226829]: </domain>
Jan 31 07:39:03 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.242 226833 DEBUG nova.compute.manager [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Preparing to wait for external event network-vif-plugged-045ce842-8d48-43b3-8c03-27d4ea7b65cc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.242 226833 DEBUG oslo_concurrency.lockutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Acquiring lock "4987b961-5362-48cd-80ca-ff6a201327e6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.243 226833 DEBUG oslo_concurrency.lockutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Lock "4987b961-5362-48cd-80ca-ff6a201327e6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.243 226833 DEBUG oslo_concurrency.lockutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Lock "4987b961-5362-48cd-80ca-ff6a201327e6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.244 226833 DEBUG nova.virt.libvirt.vif [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:38:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1787237336',display_name='tempest-ImagesOneServerTestJSON-server-1787237336',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1787237336',id=39,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4511f016f1e44a299447f7fe1ad1a7ab',ramdisk_id='',reservation_id='r-jte6xd6g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-626457921',owner_user_name='tempest-ImagesOneServerTestJSON-626457921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:38:56Z,user_data=None,user_id='73c9db412cc647958ba8093d8f187dce',uuid=4987b961-5362-48cd-80ca-ff6a201327e6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "045ce842-8d48-43b3-8c03-27d4ea7b65cc", "address": "fa:16:3e:90:a5:31", "network": {"id": "470ea8b7-e52d-4600-9179-eb48dda5f49d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1196752311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4511f016f1e44a299447f7fe1ad1a7ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap045ce842-8d", "ovs_interfaceid": "045ce842-8d48-43b3-8c03-27d4ea7b65cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.245 226833 DEBUG nova.network.os_vif_util [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Converting VIF {"id": "045ce842-8d48-43b3-8c03-27d4ea7b65cc", "address": "fa:16:3e:90:a5:31", "network": {"id": "470ea8b7-e52d-4600-9179-eb48dda5f49d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1196752311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4511f016f1e44a299447f7fe1ad1a7ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap045ce842-8d", "ovs_interfaceid": "045ce842-8d48-43b3-8c03-27d4ea7b65cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.245 226833 DEBUG nova.network.os_vif_util [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:a5:31,bridge_name='br-int',has_traffic_filtering=True,id=045ce842-8d48-43b3-8c03-27d4ea7b65cc,network=Network(470ea8b7-e52d-4600-9179-eb48dda5f49d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap045ce842-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.246 226833 DEBUG os_vif [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:a5:31,bridge_name='br-int',has_traffic_filtering=True,id=045ce842-8d48-43b3-8c03-27d4ea7b65cc,network=Network(470ea8b7-e52d-4600-9179-eb48dda5f49d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap045ce842-8d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.247 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.248 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.249 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.257 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.258 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap045ce842-8d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.258 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap045ce842-8d, col_values=(('external_ids', {'iface-id': '045ce842-8d48-43b3-8c03-27d4ea7b65cc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:90:a5:31', 'vm-uuid': '4987b961-5362-48cd-80ca-ff6a201327e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.260 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:03 compute-2 NetworkManager[48999]: <info>  [1769845143.2638] manager: (tap045ce842-8d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.264 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.267 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.269 226833 INFO os_vif [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:a5:31,bridge_name='br-int',has_traffic_filtering=True,id=045ce842-8d48-43b3-8c03-27d4ea7b65cc,network=Network(470ea8b7-e52d-4600-9179-eb48dda5f49d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap045ce842-8d')
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.315 226833 DEBUG nova.network.neutron [req-29ee0d99-2125-4250-851f-148f85f749e3 req-752efcae-926a-49a0-aabd-524f68f1ebde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Updated VIF entry in instance network info cache for port 045ce842-8d48-43b3-8c03-27d4ea7b65cc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.315 226833 DEBUG nova.network.neutron [req-29ee0d99-2125-4250-851f-148f85f749e3 req-752efcae-926a-49a0-aabd-524f68f1ebde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Updating instance_info_cache with network_info: [{"id": "045ce842-8d48-43b3-8c03-27d4ea7b65cc", "address": "fa:16:3e:90:a5:31", "network": {"id": "470ea8b7-e52d-4600-9179-eb48dda5f49d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1196752311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4511f016f1e44a299447f7fe1ad1a7ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap045ce842-8d", "ovs_interfaceid": "045ce842-8d48-43b3-8c03-27d4ea7b65cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.354 226833 DEBUG oslo_concurrency.lockutils [req-29ee0d99-2125-4250-851f-148f85f749e3 req-752efcae-926a-49a0-aabd-524f68f1ebde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-4987b961-5362-48cd-80ca-ff6a201327e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.641 226833 DEBUG nova.virt.libvirt.driver [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.642 226833 DEBUG nova.virt.libvirt.driver [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.645 226833 DEBUG nova.virt.libvirt.driver [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] No VIF found with MAC fa:16:3e:90:a5:31, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.646 226833 INFO nova.virt.libvirt.driver [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Using config drive
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.686 226833 DEBUG nova.storage.rbd_utils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] rbd image 4987b961-5362-48cd-80ca-ff6a201327e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:39:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:03.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:03 compute-2 nova_compute[226829]: 2026-01-31 07:39:03.794 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:03 compute-2 ceph-mon[77282]: pgmap v1166: 305 pgs: 305 active+clean; 461 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.9 MiB/s wr, 180 op/s
Jan 31 07:39:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2014041756' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:39:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1255292434' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:39:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:04.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:04 compute-2 nova_compute[226829]: 2026-01-31 07:39:04.832 226833 INFO nova.virt.libvirt.driver [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Creating config drive at /var/lib/nova/instances/4987b961-5362-48cd-80ca-ff6a201327e6/disk.config
Jan 31 07:39:04 compute-2 nova_compute[226829]: 2026-01-31 07:39:04.837 226833 DEBUG oslo_concurrency.processutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4987b961-5362-48cd-80ca-ff6a201327e6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpi2x8v4w4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:39:04 compute-2 nova_compute[226829]: 2026-01-31 07:39:04.968 226833 DEBUG oslo_concurrency.processutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4987b961-5362-48cd-80ca-ff6a201327e6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpi2x8v4w4" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:39:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:39:05 compute-2 nova_compute[226829]: 2026-01-31 07:39:05.011 226833 DEBUG nova.storage.rbd_utils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] rbd image 4987b961-5362-48cd-80ca-ff6a201327e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:39:05 compute-2 nova_compute[226829]: 2026-01-31 07:39:05.016 226833 DEBUG oslo_concurrency.processutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4987b961-5362-48cd-80ca-ff6a201327e6/disk.config 4987b961-5362-48cd-80ca-ff6a201327e6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:39:05 compute-2 nova_compute[226829]: 2026-01-31 07:39:05.606 226833 DEBUG oslo_concurrency.processutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4987b961-5362-48cd-80ca-ff6a201327e6/disk.config 4987b961-5362-48cd-80ca-ff6a201327e6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:39:05 compute-2 nova_compute[226829]: 2026-01-31 07:39:05.607 226833 INFO nova.virt.libvirt.driver [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Deleting local config drive /var/lib/nova/instances/4987b961-5362-48cd-80ca-ff6a201327e6/disk.config because it was imported into RBD.
Jan 31 07:39:05 compute-2 kernel: tap045ce842-8d: entered promiscuous mode
Jan 31 07:39:05 compute-2 ovn_controller[133834]: 2026-01-31T07:39:05Z|00085|binding|INFO|Claiming lport 045ce842-8d48-43b3-8c03-27d4ea7b65cc for this chassis.
Jan 31 07:39:05 compute-2 ovn_controller[133834]: 2026-01-31T07:39:05Z|00086|binding|INFO|045ce842-8d48-43b3-8c03-27d4ea7b65cc: Claiming fa:16:3e:90:a5:31 10.100.0.4
Jan 31 07:39:05 compute-2 nova_compute[226829]: 2026-01-31 07:39:05.671 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:05 compute-2 NetworkManager[48999]: <info>  [1769845145.6753] manager: (tap045ce842-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Jan 31 07:39:05 compute-2 nova_compute[226829]: 2026-01-31 07:39:05.678 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:05.685 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:a5:31 10.100.0.4'], port_security=['fa:16:3e:90:a5:31 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '4987b961-5362-48cd-80ca-ff6a201327e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-470ea8b7-e52d-4600-9179-eb48dda5f49d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4511f016f1e44a299447f7fe1ad1a7ab', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7ee39d04-816f-491f-9ed7-7ff642bb6345', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f7ba8b7-dbd3-416d-b95c-c4b4bad6c78b, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=045ce842-8d48-43b3-8c03-27d4ea7b65cc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:05.689 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 045ce842-8d48-43b3-8c03-27d4ea7b65cc in datapath 470ea8b7-e52d-4600-9179-eb48dda5f49d bound to our chassis
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:05.694 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 470ea8b7-e52d-4600-9179-eb48dda5f49d
Jan 31 07:39:05 compute-2 systemd-machined[195142]: New machine qemu-16-instance-00000027.
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:05.709 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2656dcbb-4c31-4adc-84ed-1fa79f7c27a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:05.711 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap470ea8b7-e1 in ovnmeta-470ea8b7-e52d-4600-9179-eb48dda5f49d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:05.713 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap470ea8b7-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:05.713 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c6ff0ef1-2654-4dba-8565-fffbe38d2236]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:05.714 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ba3a13-efa9-4638-8f7e-bad3cce454a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:05 compute-2 systemd[1]: Started Virtual Machine qemu-16-instance-00000027.
Jan 31 07:39:05 compute-2 ovn_controller[133834]: 2026-01-31T07:39:05Z|00087|binding|INFO|Setting lport 045ce842-8d48-43b3-8c03-27d4ea7b65cc ovn-installed in OVS
Jan 31 07:39:05 compute-2 ovn_controller[133834]: 2026-01-31T07:39:05Z|00088|binding|INFO|Setting lport 045ce842-8d48-43b3-8c03-27d4ea7b65cc up in Southbound
Jan 31 07:39:05 compute-2 nova_compute[226829]: 2026-01-31 07:39:05.720 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:05 compute-2 nova_compute[226829]: 2026-01-31 07:39:05.723 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:05 compute-2 systemd-udevd[242388]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:05.731 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[9ab61192-b97a-4355-826d-25fef6b7684c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:05 compute-2 NetworkManager[48999]: <info>  [1769845145.7422] device (tap045ce842-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:39:05 compute-2 NetworkManager[48999]: <info>  [1769845145.7432] device (tap045ce842-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:05.746 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[09581381-036e-469d-a36f-f9798406fa57]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:05.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:05.779 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[652114a1-4481-409c-810c-9ded61d3131e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:05 compute-2 systemd-udevd[242391]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:05.785 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4d4a5201-bed8-42ae-8177-adc404bfe50b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:05 compute-2 NetworkManager[48999]: <info>  [1769845145.7867] manager: (tap470ea8b7-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/56)
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:05.809 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[f10d9e7d-b604-4627-b4f6-9dc91bfe5d7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:05.812 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[39a1bcd3-c5a9-4cff-a30e-a45e168a88ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:05 compute-2 NetworkManager[48999]: <info>  [1769845145.8302] device (tap470ea8b7-e0): carrier: link connected
Jan 31 07:39:05 compute-2 ceph-mon[77282]: pgmap v1167: 305 pgs: 305 active+clean; 461 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.2 MiB/s wr, 170 op/s
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:05.835 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[86398624-9dc1-489b-8de4-c0757d44f974]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:05.849 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[499295cd-c720-45f1-a63f-bfda11bd905c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap470ea8b7-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:e0:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539830, 'reachable_time': 21737, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242419, 'error': None, 'target': 'ovnmeta-470ea8b7-e52d-4600-9179-eb48dda5f49d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:05.863 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ce42f3e2-30c9-4110-8dc8-3d97b2beda85]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb8:e066'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 539830, 'tstamp': 539830}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242420, 'error': None, 'target': 'ovnmeta-470ea8b7-e52d-4600-9179-eb48dda5f49d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:05.877 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[72835ae7-5066-4dee-a4cb-013cb13dc102]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap470ea8b7-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:e0:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539830, 'reachable_time': 21737, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 242421, 'error': None, 'target': 'ovnmeta-470ea8b7-e52d-4600-9179-eb48dda5f49d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:05.904 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cf2dfc54-5932-4c51-b51c-bf0346061f8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:05.953 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[06e4b0e5-3bd1-47ea-aa19-51b0535b24b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:05.956 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap470ea8b7-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:05.956 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:05.957 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap470ea8b7-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:39:05 compute-2 NetworkManager[48999]: <info>  [1769845145.9604] manager: (tap470ea8b7-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Jan 31 07:39:05 compute-2 nova_compute[226829]: 2026-01-31 07:39:05.959 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:05 compute-2 kernel: tap470ea8b7-e0: entered promiscuous mode
Jan 31 07:39:05 compute-2 nova_compute[226829]: 2026-01-31 07:39:05.963 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:05.968 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap470ea8b7-e0, col_values=(('external_ids', {'iface-id': 'cccc4219-9be6-4cfe-91bf-d4c0c9c79649'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:39:05 compute-2 nova_compute[226829]: 2026-01-31 07:39:05.969 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:05 compute-2 ovn_controller[133834]: 2026-01-31T07:39:05Z|00089|binding|INFO|Releasing lport cccc4219-9be6-4cfe-91bf-d4c0c9c79649 from this chassis (sb_readonly=0)
Jan 31 07:39:05 compute-2 nova_compute[226829]: 2026-01-31 07:39:05.970 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:05.973 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/470ea8b7-e52d-4600-9179-eb48dda5f49d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/470ea8b7-e52d-4600-9179-eb48dda5f49d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:05.975 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1f141ca3-406c-4a82-82bd-e5e338ca4733]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:05 compute-2 nova_compute[226829]: 2026-01-31 07:39:05.976 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:05.977 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: global
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-470ea8b7-e52d-4600-9179-eb48dda5f49d
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/470ea8b7-e52d-4600-9179-eb48dda5f49d.pid.haproxy
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 470ea8b7-e52d-4600-9179-eb48dda5f49d
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:39:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:05.978 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-470ea8b7-e52d-4600-9179-eb48dda5f49d', 'env', 'PROCESS_TAG=haproxy-470ea8b7-e52d-4600-9179-eb48dda5f49d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/470ea8b7-e52d-4600-9179-eb48dda5f49d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:39:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:39:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:06.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:39:06 compute-2 nova_compute[226829]: 2026-01-31 07:39:06.345 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845146.3447685, 4987b961-5362-48cd-80ca-ff6a201327e6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:39:06 compute-2 nova_compute[226829]: 2026-01-31 07:39:06.347 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] VM Started (Lifecycle Event)
Jan 31 07:39:06 compute-2 nova_compute[226829]: 2026-01-31 07:39:06.368 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:39:06 compute-2 nova_compute[226829]: 2026-01-31 07:39:06.372 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845146.3458586, 4987b961-5362-48cd-80ca-ff6a201327e6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:39:06 compute-2 nova_compute[226829]: 2026-01-31 07:39:06.372 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] VM Paused (Lifecycle Event)
Jan 31 07:39:06 compute-2 nova_compute[226829]: 2026-01-31 07:39:06.391 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:39:06 compute-2 nova_compute[226829]: 2026-01-31 07:39:06.395 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:39:06 compute-2 nova_compute[226829]: 2026-01-31 07:39:06.422 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:39:06 compute-2 podman[242495]: 2026-01-31 07:39:06.354108373 +0000 UTC m=+0.027223734 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:39:06 compute-2 podman[242495]: 2026-01-31 07:39:06.832628367 +0000 UTC m=+0.505743698 container create a622a520cf5349a2edc4489424bfedc8082f9a88d795bfba6834f005f521c649 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-470ea8b7-e52d-4600-9179-eb48dda5f49d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 07:39:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:06.845 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:06.846 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:06.847 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:06 compute-2 systemd[1]: Started libpod-conmon-a622a520cf5349a2edc4489424bfedc8082f9a88d795bfba6834f005f521c649.scope.
Jan 31 07:39:06 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:39:06 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0106ff1424b6df86554ae3ffcf92df08368760d066e6df418db74f4bff002cf0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:39:06 compute-2 podman[242495]: 2026-01-31 07:39:06.996787772 +0000 UTC m=+0.669903063 container init a622a520cf5349a2edc4489424bfedc8082f9a88d795bfba6834f005f521c649 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-470ea8b7-e52d-4600-9179-eb48dda5f49d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:39:07 compute-2 podman[242495]: 2026-01-31 07:39:07.004285756 +0000 UTC m=+0.677401077 container start a622a520cf5349a2edc4489424bfedc8082f9a88d795bfba6834f005f521c649 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-470ea8b7-e52d-4600-9179-eb48dda5f49d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:39:07 compute-2 neutron-haproxy-ovnmeta-470ea8b7-e52d-4600-9179-eb48dda5f49d[242511]: [NOTICE]   (242515) : New worker (242517) forked
Jan 31 07:39:07 compute-2 neutron-haproxy-ovnmeta-470ea8b7-e52d-4600-9179-eb48dda5f49d[242511]: [NOTICE]   (242515) : Loading success.
Jan 31 07:39:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:07.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:07 compute-2 ceph-mon[77282]: pgmap v1168: 305 pgs: 305 active+clean; 496 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 6.3 MiB/s wr, 229 op/s
Jan 31 07:39:08 compute-2 nova_compute[226829]: 2026-01-31 07:39:08.161 226833 DEBUG nova.compute.manager [req-597b154c-8f0a-40b5-8c3d-ee1045a7b44e req-8f42cf50-33ee-4d42-b151-719b09fc2469 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Received event network-vif-plugged-045ce842-8d48-43b3-8c03-27d4ea7b65cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:39:08 compute-2 nova_compute[226829]: 2026-01-31 07:39:08.162 226833 DEBUG oslo_concurrency.lockutils [req-597b154c-8f0a-40b5-8c3d-ee1045a7b44e req-8f42cf50-33ee-4d42-b151-719b09fc2469 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4987b961-5362-48cd-80ca-ff6a201327e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:08 compute-2 nova_compute[226829]: 2026-01-31 07:39:08.162 226833 DEBUG oslo_concurrency.lockutils [req-597b154c-8f0a-40b5-8c3d-ee1045a7b44e req-8f42cf50-33ee-4d42-b151-719b09fc2469 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4987b961-5362-48cd-80ca-ff6a201327e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:08 compute-2 nova_compute[226829]: 2026-01-31 07:39:08.163 226833 DEBUG oslo_concurrency.lockutils [req-597b154c-8f0a-40b5-8c3d-ee1045a7b44e req-8f42cf50-33ee-4d42-b151-719b09fc2469 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4987b961-5362-48cd-80ca-ff6a201327e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:08 compute-2 nova_compute[226829]: 2026-01-31 07:39:08.163 226833 DEBUG nova.compute.manager [req-597b154c-8f0a-40b5-8c3d-ee1045a7b44e req-8f42cf50-33ee-4d42-b151-719b09fc2469 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Processing event network-vif-plugged-045ce842-8d48-43b3-8c03-27d4ea7b65cc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:39:08 compute-2 nova_compute[226829]: 2026-01-31 07:39:08.165 226833 DEBUG nova.compute.manager [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:39:08 compute-2 nova_compute[226829]: 2026-01-31 07:39:08.169 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845148.169324, 4987b961-5362-48cd-80ca-ff6a201327e6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:39:08 compute-2 nova_compute[226829]: 2026-01-31 07:39:08.170 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] VM Resumed (Lifecycle Event)
Jan 31 07:39:08 compute-2 nova_compute[226829]: 2026-01-31 07:39:08.173 226833 DEBUG nova.virt.libvirt.driver [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:39:08 compute-2 nova_compute[226829]: 2026-01-31 07:39:08.180 226833 INFO nova.virt.libvirt.driver [-] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Instance spawned successfully.
Jan 31 07:39:08 compute-2 nova_compute[226829]: 2026-01-31 07:39:08.181 226833 DEBUG nova.virt.libvirt.driver [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:39:08 compute-2 nova_compute[226829]: 2026-01-31 07:39:08.260 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:08.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:08 compute-2 nova_compute[226829]: 2026-01-31 07:39:08.377 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:39:08 compute-2 nova_compute[226829]: 2026-01-31 07:39:08.384 226833 DEBUG nova.virt.libvirt.driver [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:39:08 compute-2 nova_compute[226829]: 2026-01-31 07:39:08.385 226833 DEBUG nova.virt.libvirt.driver [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:39:08 compute-2 nova_compute[226829]: 2026-01-31 07:39:08.386 226833 DEBUG nova.virt.libvirt.driver [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:39:08 compute-2 nova_compute[226829]: 2026-01-31 07:39:08.387 226833 DEBUG nova.virt.libvirt.driver [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:39:08 compute-2 nova_compute[226829]: 2026-01-31 07:39:08.387 226833 DEBUG nova.virt.libvirt.driver [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:39:08 compute-2 nova_compute[226829]: 2026-01-31 07:39:08.388 226833 DEBUG nova.virt.libvirt.driver [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:39:08 compute-2 nova_compute[226829]: 2026-01-31 07:39:08.395 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:39:08 compute-2 nova_compute[226829]: 2026-01-31 07:39:08.439 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:39:08 compute-2 nova_compute[226829]: 2026-01-31 07:39:08.576 226833 INFO nova.compute.manager [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Took 11.49 seconds to spawn the instance on the hypervisor.
Jan 31 07:39:08 compute-2 nova_compute[226829]: 2026-01-31 07:39:08.577 226833 DEBUG nova.compute.manager [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:39:08 compute-2 nova_compute[226829]: 2026-01-31 07:39:08.798 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:08 compute-2 nova_compute[226829]: 2026-01-31 07:39:08.877 226833 INFO nova.compute.manager [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Took 13.21 seconds to build instance.
Jan 31 07:39:08 compute-2 nova_compute[226829]: 2026-01-31 07:39:08.991 226833 DEBUG oslo_concurrency.lockutils [None req-3ba554f6-f341-46a9-9ed2-e3d1cf8ac267 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Lock "4987b961-5362-48cd-80ca-ff6a201327e6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3330843390' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:39:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:09.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:09 compute-2 sudo[242528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:39:09 compute-2 sudo[242528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:09 compute-2 sudo[242528]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:09 compute-2 sudo[242553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:39:09 compute-2 sudo[242553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:09 compute-2 sudo[242553]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:39:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:10.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:10 compute-2 ceph-mon[77282]: pgmap v1169: 305 pgs: 305 active+clean; 518 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 6.7 MiB/s wr, 223 op/s
Jan 31 07:39:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/640104396' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:39:10 compute-2 nova_compute[226829]: 2026-01-31 07:39:10.648 226833 DEBUG nova.compute.manager [req-801d2802-9f58-427d-a610-043f59091bab req-acf3403a-e22f-4db1-b9ef-af7b31295039 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Received event network-vif-plugged-045ce842-8d48-43b3-8c03-27d4ea7b65cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:39:10 compute-2 nova_compute[226829]: 2026-01-31 07:39:10.649 226833 DEBUG oslo_concurrency.lockutils [req-801d2802-9f58-427d-a610-043f59091bab req-acf3403a-e22f-4db1-b9ef-af7b31295039 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4987b961-5362-48cd-80ca-ff6a201327e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:10 compute-2 nova_compute[226829]: 2026-01-31 07:39:10.650 226833 DEBUG oslo_concurrency.lockutils [req-801d2802-9f58-427d-a610-043f59091bab req-acf3403a-e22f-4db1-b9ef-af7b31295039 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4987b961-5362-48cd-80ca-ff6a201327e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:10 compute-2 nova_compute[226829]: 2026-01-31 07:39:10.650 226833 DEBUG oslo_concurrency.lockutils [req-801d2802-9f58-427d-a610-043f59091bab req-acf3403a-e22f-4db1-b9ef-af7b31295039 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4987b961-5362-48cd-80ca-ff6a201327e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:10 compute-2 nova_compute[226829]: 2026-01-31 07:39:10.651 226833 DEBUG nova.compute.manager [req-801d2802-9f58-427d-a610-043f59091bab req-acf3403a-e22f-4db1-b9ef-af7b31295039 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] No waiting events found dispatching network-vif-plugged-045ce842-8d48-43b3-8c03-27d4ea7b65cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:39:10 compute-2 nova_compute[226829]: 2026-01-31 07:39:10.651 226833 WARNING nova.compute.manager [req-801d2802-9f58-427d-a610-043f59091bab req-acf3403a-e22f-4db1-b9ef-af7b31295039 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Received unexpected event network-vif-plugged-045ce842-8d48-43b3-8c03-27d4ea7b65cc for instance with vm_state active and task_state None.
Jan 31 07:39:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e163 e163: 3 total, 3 up, 3 in
Jan 31 07:39:11 compute-2 nova_compute[226829]: 2026-01-31 07:39:11.187 226833 DEBUG nova.compute.manager [None req-d872e32d-296f-4a4c-9fdd-58fdb2dac160 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:39:11 compute-2 podman[242578]: 2026-01-31 07:39:11.20471025 +0000 UTC m=+0.081722674 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 31 07:39:11 compute-2 nova_compute[226829]: 2026-01-31 07:39:11.231 226833 INFO nova.compute.manager [None req-d872e32d-296f-4a4c-9fdd-58fdb2dac160 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] instance snapshotting
Jan 31 07:39:11 compute-2 nova_compute[226829]: 2026-01-31 07:39:11.424 226833 INFO nova.virt.libvirt.driver [None req-d872e32d-296f-4a4c-9fdd-58fdb2dac160 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Beginning live snapshot process
Jan 31 07:39:11 compute-2 nova_compute[226829]: 2026-01-31 07:39:11.640 226833 DEBUG nova.virt.libvirt.imagebackend [None req-d872e32d-296f-4a4c-9fdd-58fdb2dac160 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 31 07:39:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:11.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:11 compute-2 nova_compute[226829]: 2026-01-31 07:39:11.969 226833 DEBUG nova.storage.rbd_utils [None req-d872e32d-296f-4a4c-9fdd-58fdb2dac160 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] creating snapshot(eeb358208b93453796160749e7383be7) on rbd image(4987b961-5362-48cd-80ca-ff6a201327e6_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 07:39:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:39:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:12.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:39:12 compute-2 ceph-mon[77282]: pgmap v1170: 305 pgs: 305 active+clean; 518 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.1 MiB/s wr, 268 op/s
Jan 31 07:39:12 compute-2 ceph-mon[77282]: osdmap e163: 3 total, 3 up, 3 in
Jan 31 07:39:13 compute-2 nova_compute[226829]: 2026-01-31 07:39:13.262 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e164 e164: 3 total, 3 up, 3 in
Jan 31 07:39:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:13.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:13 compute-2 nova_compute[226829]: 2026-01-31 07:39:13.799 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:14 compute-2 ceph-mon[77282]: pgmap v1172: 305 pgs: 305 active+clean; 526 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.9 MiB/s wr, 214 op/s
Jan 31 07:39:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:14.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:39:15 compute-2 nova_compute[226829]: 2026-01-31 07:39:15.009 226833 DEBUG nova.storage.rbd_utils [None req-d872e32d-296f-4a4c-9fdd-58fdb2dac160 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] cloning vms/4987b961-5362-48cd-80ca-ff6a201327e6_disk@eeb358208b93453796160749e7383be7 to images/4cb863eb-873b-4a83-aef8-5fea8d796cf2 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 07:39:15 compute-2 ceph-mon[77282]: osdmap e164: 3 total, 3 up, 3 in
Jan 31 07:39:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e165 e165: 3 total, 3 up, 3 in
Jan 31 07:39:15 compute-2 nova_compute[226829]: 2026-01-31 07:39:15.244 226833 DEBUG nova.storage.rbd_utils [None req-d872e32d-296f-4a4c-9fdd-58fdb2dac160 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] flattening images/4cb863eb-873b-4a83-aef8-5fea8d796cf2 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 31 07:39:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:39:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:15.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:39:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:16.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:16 compute-2 ceph-mon[77282]: pgmap v1174: 305 pgs: 305 active+clean; 526 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.3 MiB/s wr, 164 op/s
Jan 31 07:39:16 compute-2 ceph-mon[77282]: osdmap e165: 3 total, 3 up, 3 in
Jan 31 07:39:16 compute-2 sudo[242721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:39:16 compute-2 sudo[242721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:16 compute-2 sudo[242721]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:16 compute-2 sudo[242746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:39:16 compute-2 sudo[242746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:16 compute-2 sudo[242746]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:16 compute-2 sudo[242775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:39:16 compute-2 sudo[242775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:16 compute-2 sudo[242775]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:16 compute-2 sudo[242800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:39:16 compute-2 sudo[242800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:17 compute-2 nova_compute[226829]: 2026-01-31 07:39:17.213 226833 DEBUG nova.storage.rbd_utils [None req-d872e32d-296f-4a4c-9fdd-58fdb2dac160 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] removing snapshot(eeb358208b93453796160749e7383be7) on rbd image(4987b961-5362-48cd-80ca-ff6a201327e6_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 07:39:17 compute-2 sudo[242800]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:17.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e166 e166: 3 total, 3 up, 3 in
Jan 31 07:39:17 compute-2 ceph-mon[77282]: pgmap v1176: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 633 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 12 MiB/s wr, 263 op/s
Jan 31 07:39:18 compute-2 nova_compute[226829]: 2026-01-31 07:39:18.264 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:18.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:18 compute-2 nova_compute[226829]: 2026-01-31 07:39:18.802 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:19 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 07:39:19 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:39:19 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:39:19 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:39:19 compute-2 ceph-mon[77282]: osdmap e166: 3 total, 3 up, 3 in
Jan 31 07:39:19 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:39:19 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:39:19 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:39:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e167 e167: 3 total, 3 up, 3 in
Jan 31 07:39:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:39:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:19.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:39:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:39:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:20.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:21.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:39:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:22.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:39:23 compute-2 nova_compute[226829]: 2026-01-31 07:39:23.266 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:39:23 compute-2 nova_compute[226829]: 2026-01-31 07:39:23.807 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:23.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:39:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:24.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:25 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma missed beacon ack from the monitors
Jan 31 07:39:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:25.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:26.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:27 compute-2 ceph-mon[77282]: pgmap v1178: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 660 MiB data, 754 MiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 12 MiB/s wr, 388 op/s
Jan 31 07:39:27 compute-2 ceph-mon[77282]: osdmap e167: 3 total, 3 up, 3 in
Jan 31 07:39:27 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_commit, latency = 7.545313358s
Jan 31 07:39:27 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_sync, latency = 7.545313358s
Jan 31 07:39:27 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.659042835s, txc = 0x562dbac62300
Jan 31 07:39:27 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.626890659s, txc = 0x562dbb9cf200
Jan 31 07:39:27 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.603488445s, txc = 0x562dbba6c000
Jan 31 07:39:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).paxos(paxos updating c 2260..2786) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 2.488163233s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Jan 31 07:39:27 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2[77278]: 2026-01-31T07:39:27.669+0000 7fa4f938c640 -1 mon.compute-2@1(peon).paxos(paxos updating c 2260..2786) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 2.488163233s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Jan 31 07:39:27 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.560798645s, txc = 0x562dbb48ec00
Jan 31 07:39:27 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.976703644s, txc = 0x562dba835800
Jan 31 07:39:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:39:27 compute-2 ceph-mon[77282]: pgmap v1180: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 676 MiB data, 763 MiB used, 20 GiB / 21 GiB avail; 16 MiB/s rd, 14 MiB/s wr, 500 op/s
Jan 31 07:39:27 compute-2 nova_compute[226829]: 2026-01-31 07:39:27.699 226833 DEBUG nova.storage.rbd_utils [None req-d872e32d-296f-4a4c-9fdd-58fdb2dac160 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] creating snapshot(snap) on rbd image(4cb863eb-873b-4a83-aef8-5fea8d796cf2) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 07:39:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:39:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:27.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:39:28 compute-2 sudo[242883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:39:28 compute-2 sudo[242883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:28 compute-2 sudo[242883]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:28 compute-2 sudo[242908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:39:28 compute-2 sudo[242908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:28 compute-2 sudo[242908]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:28 compute-2 nova_compute[226829]: 2026-01-31 07:39:28.269 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:39:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:28.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:39:28 compute-2 ceph-mon[77282]: pgmap v1181: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 676 MiB data, 763 MiB used, 20 GiB / 21 GiB avail; 9.7 MiB/s rd, 8.1 MiB/s wr, 322 op/s
Jan 31 07:39:28 compute-2 ceph-mon[77282]: pgmap v1182: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 676 MiB data, 763 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 2.7 MiB/s wr, 230 op/s
Jan 31 07:39:28 compute-2 ceph-mon[77282]: pgmap v1183: 305 pgs: 305 active+clean; 676 MiB data, 763 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.0 MiB/s wr, 99 op/s
Jan 31 07:39:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:39:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:39:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e168 e168: 3 total, 3 up, 3 in
Jan 31 07:39:28 compute-2 nova_compute[226829]: 2026-01-31 07:39:28.806 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:29.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:29 compute-2 sudo[242935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:39:29 compute-2 sudo[242935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:29 compute-2 sudo[242935]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:29 compute-2 ceph-mon[77282]: pgmap v1184: 305 pgs: 305 active+clean; 676 MiB data, 763 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 929 KiB/s wr, 88 op/s
Jan 31 07:39:29 compute-2 ceph-mon[77282]: osdmap e168: 3 total, 3 up, 3 in
Jan 31 07:39:29 compute-2 sudo[242960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:39:29 compute-2 sudo[242960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:29 compute-2 sudo[242960]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:30.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:30 compute-2 ovn_controller[133834]: 2026-01-31T07:39:30Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:90:a5:31 10.100.0.4
Jan 31 07:39:30 compute-2 ovn_controller[133834]: 2026-01-31T07:39:30Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:90:a5:31 10.100.0.4
Jan 31 07:39:31 compute-2 nova_compute[226829]: 2026-01-31 07:39:31.149 226833 INFO nova.virt.libvirt.driver [None req-d872e32d-296f-4a4c-9fdd-58fdb2dac160 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Snapshot image upload complete
Jan 31 07:39:31 compute-2 nova_compute[226829]: 2026-01-31 07:39:31.151 226833 INFO nova.compute.manager [None req-d872e32d-296f-4a4c-9fdd-58fdb2dac160 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Took 19.92 seconds to snapshot the instance on the hypervisor.
Jan 31 07:39:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:31.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:39:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:32.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:39:32 compute-2 ceph-mon[77282]: pgmap v1186: 305 pgs: 305 active+clean; 614 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 591 KiB/s rd, 3.9 MiB/s wr, 164 op/s
Jan 31 07:39:32 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:39:33 compute-2 podman[242986]: 2026-01-31 07:39:33.225783341 +0000 UTC m=+0.106095670 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, 
org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 07:39:33 compute-2 nova_compute[226829]: 2026-01-31 07:39:33.271 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:33 compute-2 nova_compute[226829]: 2026-01-31 07:39:33.810 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:33.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e169 e169: 3 total, 3 up, 3 in
Jan 31 07:39:34 compute-2 ceph-mon[77282]: pgmap v1187: 305 pgs: 305 active+clean; 578 MiB data, 760 MiB used, 20 GiB / 21 GiB avail; 797 KiB/s rd, 5.5 MiB/s wr, 205 op/s
Jan 31 07:39:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:34.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1659632397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1211290531' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:39:35 compute-2 ceph-mon[77282]: osdmap e169: 3 total, 3 up, 3 in
Jan 31 07:39:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2681325381' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:39:35 compute-2 ovn_controller[133834]: 2026-01-31T07:39:35Z|00090|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Jan 31 07:39:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:35.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:36.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:36 compute-2 ceph-mon[77282]: pgmap v1189: 305 pgs: 305 active+clean; 578 MiB data, 760 MiB used, 20 GiB / 21 GiB avail; 935 KiB/s rd, 6.9 MiB/s wr, 236 op/s
Jan 31 07:39:36 compute-2 nova_compute[226829]: 2026-01-31 07:39:36.670 226833 DEBUG nova.compute.manager [None req-88ff93bf-b82b-4fc0-bfb6-26a979b836c8 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:39:37 compute-2 nova_compute[226829]: 2026-01-31 07:39:37.368 226833 INFO nova.compute.manager [None req-88ff93bf-b82b-4fc0-bfb6-26a979b836c8 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] instance snapshotting
Jan 31 07:39:37 compute-2 ceph-mon[77282]: pgmap v1190: 305 pgs: 305 active+clean; 545 MiB data, 737 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 9.5 MiB/s wr, 367 op/s
Jan 31 07:39:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:39:37 compute-2 nova_compute[226829]: 2026-01-31 07:39:37.726 226833 INFO nova.virt.libvirt.driver [None req-88ff93bf-b82b-4fc0-bfb6-26a979b836c8 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Beginning live snapshot process
Jan 31 07:39:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:37.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:38 compute-2 nova_compute[226829]: 2026-01-31 07:39:38.274 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:38.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:38 compute-2 nova_compute[226829]: 2026-01-31 07:39:38.353 226833 DEBUG nova.virt.libvirt.imagebackend [None req-88ff93bf-b82b-4fc0-bfb6-26a979b836c8 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 31 07:39:38 compute-2 nova_compute[226829]: 2026-01-31 07:39:38.690 226833 DEBUG nova.storage.rbd_utils [None req-88ff93bf-b82b-4fc0-bfb6-26a979b836c8 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] creating snapshot(f6672851f6904864bf43a2e7f04fe989) on rbd image(4987b961-5362-48cd-80ca-ff6a201327e6_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 07:39:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e170 e170: 3 total, 3 up, 3 in
Jan 31 07:39:39 compute-2 nova_compute[226829]: 2026-01-31 07:39:39.092 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:39.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:39 compute-2 ceph-mon[77282]: pgmap v1191: 305 pgs: 305 active+clean; 519 MiB data, 731 MiB used, 20 GiB / 21 GiB avail; 936 KiB/s rd, 8.4 MiB/s wr, 315 op/s
Jan 31 07:39:39 compute-2 ceph-mon[77282]: osdmap e170: 3 total, 3 up, 3 in
Jan 31 07:39:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e171 e171: 3 total, 3 up, 3 in
Jan 31 07:39:40 compute-2 nova_compute[226829]: 2026-01-31 07:39:40.168 226833 DEBUG nova.storage.rbd_utils [None req-88ff93bf-b82b-4fc0-bfb6-26a979b836c8 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] cloning vms/4987b961-5362-48cd-80ca-ff6a201327e6_disk@f6672851f6904864bf43a2e7f04fe989 to images/eec3485d-bba1-4018-9e35-7f9f59c9e839 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 07:39:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:39:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:40.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:39:40 compute-2 nova_compute[226829]: 2026-01-31 07:39:40.552 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:40.551 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:39:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:40.556 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:39:40 compute-2 nova_compute[226829]: 2026-01-31 07:39:40.797 226833 DEBUG nova.storage.rbd_utils [None req-88ff93bf-b82b-4fc0-bfb6-26a979b836c8 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] flattening images/eec3485d-bba1-4018-9e35-7f9f59c9e839 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 31 07:39:41 compute-2 ceph-mon[77282]: osdmap e171: 3 total, 3 up, 3 in
Jan 31 07:39:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:41.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:42 compute-2 ceph-mon[77282]: pgmap v1194: 305 pgs: 305 active+clean; 551 MiB data, 716 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 6.0 MiB/s wr, 363 op/s
Jan 31 07:39:42 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3874652538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:42 compute-2 podman[243124]: 2026-01-31 07:39:42.202327494 +0000 UTC m=+0.072763858 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:39:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:42.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:42 compute-2 nova_compute[226829]: 2026-01-31 07:39:42.608 226833 DEBUG nova.storage.rbd_utils [None req-88ff93bf-b82b-4fc0-bfb6-26a979b836c8 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] removing snapshot(f6672851f6904864bf43a2e7f04fe989) on rbd image(4987b961-5362-48cd-80ca-ff6a201327e6_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 07:39:42 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:39:43 compute-2 nova_compute[226829]: 2026-01-31 07:39:43.276 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e172 e172: 3 total, 3 up, 3 in
Jan 31 07:39:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2104040955' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:39:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1549460415' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:39:43 compute-2 nova_compute[226829]: 2026-01-31 07:39:43.575 226833 DEBUG nova.storage.rbd_utils [None req-88ff93bf-b82b-4fc0-bfb6-26a979b836c8 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] creating snapshot(snap) on rbd image(eec3485d-bba1-4018-9e35-7f9f59c9e839) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 07:39:43 compute-2 nova_compute[226829]: 2026-01-31 07:39:43.813 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:39:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:43.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:39:44 compute-2 ceph-mon[77282]: pgmap v1195: 305 pgs: 305 active+clean; 581 MiB data, 736 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 7.3 MiB/s wr, 358 op/s
Jan 31 07:39:44 compute-2 ceph-mon[77282]: osdmap e172: 3 total, 3 up, 3 in
Jan 31 07:39:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e173 e173: 3 total, 3 up, 3 in
Jan 31 07:39:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:44.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:45 compute-2 ceph-mon[77282]: osdmap e173: 3 total, 3 up, 3 in
Jan 31 07:39:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1935598199' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:39:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1935598199' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:39:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:39:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:45.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:39:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:46.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:46 compute-2 ceph-mon[77282]: pgmap v1198: 305 pgs: 305 active+clean; 581 MiB data, 736 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 5.3 MiB/s wr, 282 op/s
Jan 31 07:39:46 compute-2 nova_compute[226829]: 2026-01-31 07:39:46.445 226833 INFO nova.virt.libvirt.driver [None req-88ff93bf-b82b-4fc0-bfb6-26a979b836c8 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Snapshot image upload complete
Jan 31 07:39:46 compute-2 nova_compute[226829]: 2026-01-31 07:39:46.446 226833 INFO nova.compute.manager [None req-88ff93bf-b82b-4fc0-bfb6-26a979b836c8 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Took 9.08 seconds to snapshot the instance on the hypervisor.
Jan 31 07:39:47 compute-2 ceph-mon[77282]: pgmap v1199: 305 pgs: 305 active+clean; 560 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 9.2 MiB/s rd, 6.9 MiB/s wr, 266 op/s
Jan 31 07:39:47 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:39:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:47.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:48 compute-2 nova_compute[226829]: 2026-01-31 07:39:48.278 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:39:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:48.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:39:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1664142076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:48.559 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:39:48 compute-2 nova_compute[226829]: 2026-01-31 07:39:48.849 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e174 e174: 3 total, 3 up, 3 in
Jan 31 07:39:49 compute-2 ceph-mon[77282]: pgmap v1200: 305 pgs: 305 active+clean; 551 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 5.8 MiB/s wr, 191 op/s
Jan 31 07:39:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:39:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:49.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:39:50 compute-2 sudo[243183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:39:50 compute-2 sudo[243183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:50 compute-2 sudo[243183]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:50 compute-2 sudo[243208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:39:50 compute-2 sudo[243208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:50 compute-2 sudo[243208]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:50.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:50 compute-2 ceph-mon[77282]: osdmap e174: 3 total, 3 up, 3 in
Jan 31 07:39:51 compute-2 ceph-mon[77282]: pgmap v1202: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 545 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 6.6 MiB/s wr, 330 op/s
Jan 31 07:39:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:39:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:51.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:39:51 compute-2 nova_compute[226829]: 2026-01-31 07:39:51.880 226833 DEBUG oslo_concurrency.lockutils [None req-c5873054-a169-4a80-8085-ee4bdfd883f3 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Acquiring lock "4987b961-5362-48cd-80ca-ff6a201327e6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:51 compute-2 nova_compute[226829]: 2026-01-31 07:39:51.880 226833 DEBUG oslo_concurrency.lockutils [None req-c5873054-a169-4a80-8085-ee4bdfd883f3 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Lock "4987b961-5362-48cd-80ca-ff6a201327e6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:51 compute-2 nova_compute[226829]: 2026-01-31 07:39:51.881 226833 DEBUG oslo_concurrency.lockutils [None req-c5873054-a169-4a80-8085-ee4bdfd883f3 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Acquiring lock "4987b961-5362-48cd-80ca-ff6a201327e6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:51 compute-2 nova_compute[226829]: 2026-01-31 07:39:51.881 226833 DEBUG oslo_concurrency.lockutils [None req-c5873054-a169-4a80-8085-ee4bdfd883f3 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Lock "4987b961-5362-48cd-80ca-ff6a201327e6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:51 compute-2 nova_compute[226829]: 2026-01-31 07:39:51.881 226833 DEBUG oslo_concurrency.lockutils [None req-c5873054-a169-4a80-8085-ee4bdfd883f3 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Lock "4987b961-5362-48cd-80ca-ff6a201327e6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:51 compute-2 nova_compute[226829]: 2026-01-31 07:39:51.882 226833 INFO nova.compute.manager [None req-c5873054-a169-4a80-8085-ee4bdfd883f3 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Terminating instance
Jan 31 07:39:51 compute-2 nova_compute[226829]: 2026-01-31 07:39:51.883 226833 DEBUG nova.compute.manager [None req-c5873054-a169-4a80-8085-ee4bdfd883f3 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:39:52 compute-2 kernel: tap045ce842-8d (unregistering): left promiscuous mode
Jan 31 07:39:52 compute-2 NetworkManager[48999]: <info>  [1769845192.1538] device (tap045ce842-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:39:52 compute-2 ovn_controller[133834]: 2026-01-31T07:39:52Z|00091|binding|INFO|Releasing lport 045ce842-8d48-43b3-8c03-27d4ea7b65cc from this chassis (sb_readonly=0)
Jan 31 07:39:52 compute-2 ovn_controller[133834]: 2026-01-31T07:39:52Z|00092|binding|INFO|Setting lport 045ce842-8d48-43b3-8c03-27d4ea7b65cc down in Southbound
Jan 31 07:39:52 compute-2 nova_compute[226829]: 2026-01-31 07:39:52.159 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:52 compute-2 ovn_controller[133834]: 2026-01-31T07:39:52Z|00093|binding|INFO|Removing iface tap045ce842-8d ovn-installed in OVS
Jan 31 07:39:52 compute-2 nova_compute[226829]: 2026-01-31 07:39:52.162 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:52 compute-2 nova_compute[226829]: 2026-01-31 07:39:52.167 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:52 compute-2 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000027.scope: Deactivated successfully.
Jan 31 07:39:52 compute-2 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000027.scope: Consumed 15.426s CPU time.
Jan 31 07:39:52 compute-2 systemd-machined[195142]: Machine qemu-16-instance-00000027 terminated.
Jan 31 07:39:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:52.267 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:a5:31 10.100.0.4'], port_security=['fa:16:3e:90:a5:31 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '4987b961-5362-48cd-80ca-ff6a201327e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-470ea8b7-e52d-4600-9179-eb48dda5f49d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4511f016f1e44a299447f7fe1ad1a7ab', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7ee39d04-816f-491f-9ed7-7ff642bb6345', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f7ba8b7-dbd3-416d-b95c-c4b4bad6c78b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=045ce842-8d48-43b3-8c03-27d4ea7b65cc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:39:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:52.270 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 045ce842-8d48-43b3-8c03-27d4ea7b65cc in datapath 470ea8b7-e52d-4600-9179-eb48dda5f49d unbound from our chassis
Jan 31 07:39:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:52.274 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 470ea8b7-e52d-4600-9179-eb48dda5f49d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:39:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:52.279 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6bd83e-37b8-457a-b117-861cae87b0ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:52.280 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-470ea8b7-e52d-4600-9179-eb48dda5f49d namespace which is not needed anymore
Jan 31 07:39:52 compute-2 nova_compute[226829]: 2026-01-31 07:39:52.317 226833 INFO nova.virt.libvirt.driver [-] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Instance destroyed successfully.
Jan 31 07:39:52 compute-2 nova_compute[226829]: 2026-01-31 07:39:52.318 226833 DEBUG nova.objects.instance [None req-c5873054-a169-4a80-8085-ee4bdfd883f3 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Lazy-loading 'resources' on Instance uuid 4987b961-5362-48cd-80ca-ff6a201327e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:39:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:52.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:52 compute-2 neutron-haproxy-ovnmeta-470ea8b7-e52d-4600-9179-eb48dda5f49d[242511]: [NOTICE]   (242515) : haproxy version is 2.8.14-c23fe91
Jan 31 07:39:52 compute-2 neutron-haproxy-ovnmeta-470ea8b7-e52d-4600-9179-eb48dda5f49d[242511]: [NOTICE]   (242515) : path to executable is /usr/sbin/haproxy
Jan 31 07:39:52 compute-2 neutron-haproxy-ovnmeta-470ea8b7-e52d-4600-9179-eb48dda5f49d[242511]: [WARNING]  (242515) : Exiting Master process...
Jan 31 07:39:52 compute-2 neutron-haproxy-ovnmeta-470ea8b7-e52d-4600-9179-eb48dda5f49d[242511]: [ALERT]    (242515) : Current worker (242517) exited with code 143 (Terminated)
Jan 31 07:39:52 compute-2 neutron-haproxy-ovnmeta-470ea8b7-e52d-4600-9179-eb48dda5f49d[242511]: [WARNING]  (242515) : All workers exited. Exiting... (0)
Jan 31 07:39:52 compute-2 systemd[1]: libpod-a622a520cf5349a2edc4489424bfedc8082f9a88d795bfba6834f005f521c649.scope: Deactivated successfully.
Jan 31 07:39:52 compute-2 podman[243270]: 2026-01-31 07:39:52.418298635 +0000 UTC m=+0.059891885 container died a622a520cf5349a2edc4489424bfedc8082f9a88d795bfba6834f005f521c649 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-470ea8b7-e52d-4600-9179-eb48dda5f49d, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 07:39:52 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a622a520cf5349a2edc4489424bfedc8082f9a88d795bfba6834f005f521c649-userdata-shm.mount: Deactivated successfully.
Jan 31 07:39:52 compute-2 systemd[1]: var-lib-containers-storage-overlay-0106ff1424b6df86554ae3ffcf92df08368760d066e6df418db74f4bff002cf0-merged.mount: Deactivated successfully.
Jan 31 07:39:52 compute-2 nova_compute[226829]: 2026-01-31 07:39:52.467 226833 DEBUG nova.virt.libvirt.vif [None req-c5873054-a169-4a80-8085-ee4bdfd883f3 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:38:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1787237336',display_name='tempest-ImagesOneServerTestJSON-server-1787237336',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1787237336',id=39,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:39:08Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4511f016f1e44a299447f7fe1ad1a7ab',ramdisk_id='',reservation_id='r-jte6xd6g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-626457921',owner_user_name='tempest-ImagesOneServerTestJSON-626457921-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:39:46Z,user_data=None,user_id='73c9db412cc647958ba8093d8f187dce',uuid=4987b961-5362-48cd-80ca-ff6a201327e6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "045ce842-8d48-43b3-8c03-27d4ea7b65cc", "address": "fa:16:3e:90:a5:31", "network": {"id": "470ea8b7-e52d-4600-9179-eb48dda5f49d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1196752311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4511f016f1e44a299447f7fe1ad1a7ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap045ce842-8d", "ovs_interfaceid": "045ce842-8d48-43b3-8c03-27d4ea7b65cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:39:52 compute-2 nova_compute[226829]: 2026-01-31 07:39:52.468 226833 DEBUG nova.network.os_vif_util [None req-c5873054-a169-4a80-8085-ee4bdfd883f3 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Converting VIF {"id": "045ce842-8d48-43b3-8c03-27d4ea7b65cc", "address": "fa:16:3e:90:a5:31", "network": {"id": "470ea8b7-e52d-4600-9179-eb48dda5f49d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1196752311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4511f016f1e44a299447f7fe1ad1a7ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap045ce842-8d", "ovs_interfaceid": "045ce842-8d48-43b3-8c03-27d4ea7b65cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:39:52 compute-2 nova_compute[226829]: 2026-01-31 07:39:52.470 226833 DEBUG nova.network.os_vif_util [None req-c5873054-a169-4a80-8085-ee4bdfd883f3 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:a5:31,bridge_name='br-int',has_traffic_filtering=True,id=045ce842-8d48-43b3-8c03-27d4ea7b65cc,network=Network(470ea8b7-e52d-4600-9179-eb48dda5f49d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap045ce842-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:39:52 compute-2 nova_compute[226829]: 2026-01-31 07:39:52.471 226833 DEBUG os_vif [None req-c5873054-a169-4a80-8085-ee4bdfd883f3 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:a5:31,bridge_name='br-int',has_traffic_filtering=True,id=045ce842-8d48-43b3-8c03-27d4ea7b65cc,network=Network(470ea8b7-e52d-4600-9179-eb48dda5f49d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap045ce842-8d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:39:52 compute-2 nova_compute[226829]: 2026-01-31 07:39:52.478 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:52 compute-2 nova_compute[226829]: 2026-01-31 07:39:52.479 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap045ce842-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:39:52 compute-2 nova_compute[226829]: 2026-01-31 07:39:52.481 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:52 compute-2 nova_compute[226829]: 2026-01-31 07:39:52.482 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:52 compute-2 nova_compute[226829]: 2026-01-31 07:39:52.487 226833 INFO os_vif [None req-c5873054-a169-4a80-8085-ee4bdfd883f3 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:a5:31,bridge_name='br-int',has_traffic_filtering=True,id=045ce842-8d48-43b3-8c03-27d4ea7b65cc,network=Network(470ea8b7-e52d-4600-9179-eb48dda5f49d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap045ce842-8d')
Jan 31 07:39:52 compute-2 podman[243270]: 2026-01-31 07:39:52.501523623 +0000 UTC m=+0.143116863 container cleanup a622a520cf5349a2edc4489424bfedc8082f9a88d795bfba6834f005f521c649 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-470ea8b7-e52d-4600-9179-eb48dda5f49d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 07:39:52 compute-2 systemd[1]: libpod-conmon-a622a520cf5349a2edc4489424bfedc8082f9a88d795bfba6834f005f521c649.scope: Deactivated successfully.
Jan 31 07:39:52 compute-2 podman[243316]: 2026-01-31 07:39:52.631910715 +0000 UTC m=+0.108413855 container remove a622a520cf5349a2edc4489424bfedc8082f9a88d795bfba6834f005f521c649 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-470ea8b7-e52d-4600-9179-eb48dda5f49d, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:39:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:52.635 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[906e3947-4914-4d06-829f-707c8fe5b6fc]: (4, ('Sat Jan 31 07:39:52 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-470ea8b7-e52d-4600-9179-eb48dda5f49d (a622a520cf5349a2edc4489424bfedc8082f9a88d795bfba6834f005f521c649)\na622a520cf5349a2edc4489424bfedc8082f9a88d795bfba6834f005f521c649\nSat Jan 31 07:39:52 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-470ea8b7-e52d-4600-9179-eb48dda5f49d (a622a520cf5349a2edc4489424bfedc8082f9a88d795bfba6834f005f521c649)\na622a520cf5349a2edc4489424bfedc8082f9a88d795bfba6834f005f521c649\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:52.637 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e74ff910-9ccb-46a8-951a-52b3e074bbfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:52.638 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap470ea8b7-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:39:52 compute-2 nova_compute[226829]: 2026-01-31 07:39:52.639 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:52 compute-2 kernel: tap470ea8b7-e0: left promiscuous mode
Jan 31 07:39:52 compute-2 nova_compute[226829]: 2026-01-31 07:39:52.641 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:52 compute-2 nova_compute[226829]: 2026-01-31 07:39:52.644 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:52.647 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ac8e15b5-7b98-4af6-94fb-007347fbd6aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:52.662 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ff4840a1-ffff-496a-acb5-ccd8578bafa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:52.664 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1ee17f39-812c-4526-8bbf-017b967e3ca2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:52.678 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0c0c17dc-7c96-4ab2-b5d3-10ade6a6541f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539824, 'reachable_time': 43854, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243335, 'error': None, 'target': 'ovnmeta-470ea8b7-e52d-4600-9179-eb48dda5f49d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:52.682 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-470ea8b7-e52d-4600-9179-eb48dda5f49d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:39:52 compute-2 systemd[1]: run-netns-ovnmeta\x2d470ea8b7\x2de52d\x2d4600\x2d9179\x2deb48dda5f49d.mount: Deactivated successfully.
Jan 31 07:39:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:52.682 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[a01c198c-a6a2-4b42-902a-b6e17acae1d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:52 compute-2 nova_compute[226829]: 2026-01-31 07:39:52.686 226833 DEBUG nova.compute.manager [req-b60d0828-5e0a-4fb6-ab0b-c1236472dd71 req-dea0841f-131a-45f9-8d85-20a17ff5d8a2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Received event network-vif-unplugged-045ce842-8d48-43b3-8c03-27d4ea7b65cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:39:52 compute-2 nova_compute[226829]: 2026-01-31 07:39:52.687 226833 DEBUG oslo_concurrency.lockutils [req-b60d0828-5e0a-4fb6-ab0b-c1236472dd71 req-dea0841f-131a-45f9-8d85-20a17ff5d8a2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4987b961-5362-48cd-80ca-ff6a201327e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:52 compute-2 nova_compute[226829]: 2026-01-31 07:39:52.687 226833 DEBUG oslo_concurrency.lockutils [req-b60d0828-5e0a-4fb6-ab0b-c1236472dd71 req-dea0841f-131a-45f9-8d85-20a17ff5d8a2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4987b961-5362-48cd-80ca-ff6a201327e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:52 compute-2 nova_compute[226829]: 2026-01-31 07:39:52.687 226833 DEBUG oslo_concurrency.lockutils [req-b60d0828-5e0a-4fb6-ab0b-c1236472dd71 req-dea0841f-131a-45f9-8d85-20a17ff5d8a2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4987b961-5362-48cd-80ca-ff6a201327e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:52 compute-2 nova_compute[226829]: 2026-01-31 07:39:52.688 226833 DEBUG nova.compute.manager [req-b60d0828-5e0a-4fb6-ab0b-c1236472dd71 req-dea0841f-131a-45f9-8d85-20a17ff5d8a2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] No waiting events found dispatching network-vif-unplugged-045ce842-8d48-43b3-8c03-27d4ea7b65cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:39:52 compute-2 nova_compute[226829]: 2026-01-31 07:39:52.688 226833 DEBUG nova.compute.manager [req-b60d0828-5e0a-4fb6-ab0b-c1236472dd71 req-dea0841f-131a-45f9-8d85-20a17ff5d8a2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Received event network-vif-unplugged-045ce842-8d48-43b3-8c03-27d4ea7b65cc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:39:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:39:53 compute-2 nova_compute[226829]: 2026-01-31 07:39:53.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:39:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:53.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:53 compute-2 nova_compute[226829]: 2026-01-31 07:39:53.930 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:54 compute-2 ceph-mon[77282]: pgmap v1203: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 514 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 6.4 MiB/s wr, 348 op/s
Jan 31 07:39:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2608596935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e175 e175: 3 total, 3 up, 3 in
Jan 31 07:39:54 compute-2 nova_compute[226829]: 2026-01-31 07:39:54.062 226833 INFO nova.virt.libvirt.driver [None req-c5873054-a169-4a80-8085-ee4bdfd883f3 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Deleting instance files /var/lib/nova/instances/4987b961-5362-48cd-80ca-ff6a201327e6_del
Jan 31 07:39:54 compute-2 nova_compute[226829]: 2026-01-31 07:39:54.063 226833 INFO nova.virt.libvirt.driver [None req-c5873054-a169-4a80-8085-ee4bdfd883f3 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Deletion of /var/lib/nova/instances/4987b961-5362-48cd-80ca-ff6a201327e6_del complete
Jan 31 07:39:54 compute-2 nova_compute[226829]: 2026-01-31 07:39:54.209 226833 INFO nova.compute.manager [None req-c5873054-a169-4a80-8085-ee4bdfd883f3 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Took 2.33 seconds to destroy the instance on the hypervisor.
Jan 31 07:39:54 compute-2 nova_compute[226829]: 2026-01-31 07:39:54.210 226833 DEBUG oslo.service.loopingcall [None req-c5873054-a169-4a80-8085-ee4bdfd883f3 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:39:54 compute-2 nova_compute[226829]: 2026-01-31 07:39:54.210 226833 DEBUG nova.compute.manager [-] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:39:54 compute-2 nova_compute[226829]: 2026-01-31 07:39:54.210 226833 DEBUG nova.network.neutron [-] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:39:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:54.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:39:54 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1480405234' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:39:54 compute-2 nova_compute[226829]: 2026-01-31 07:39:54.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:39:54 compute-2 nova_compute[226829]: 2026-01-31 07:39:54.639 226833 DEBUG oslo_concurrency.lockutils [None req-01b805e1-fb01-4407-94cd-d0956f050937 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "ae28510e-134b-4be3-b0b5-1eed1b4da893" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:54 compute-2 nova_compute[226829]: 2026-01-31 07:39:54.639 226833 DEBUG oslo_concurrency.lockutils [None req-01b805e1-fb01-4407-94cd-d0956f050937 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "ae28510e-134b-4be3-b0b5-1eed1b4da893" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:54 compute-2 nova_compute[226829]: 2026-01-31 07:39:54.640 226833 DEBUG oslo_concurrency.lockutils [None req-01b805e1-fb01-4407-94cd-d0956f050937 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "ae28510e-134b-4be3-b0b5-1eed1b4da893-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:54 compute-2 nova_compute[226829]: 2026-01-31 07:39:54.640 226833 DEBUG oslo_concurrency.lockutils [None req-01b805e1-fb01-4407-94cd-d0956f050937 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "ae28510e-134b-4be3-b0b5-1eed1b4da893-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:54 compute-2 nova_compute[226829]: 2026-01-31 07:39:54.640 226833 DEBUG oslo_concurrency.lockutils [None req-01b805e1-fb01-4407-94cd-d0956f050937 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "ae28510e-134b-4be3-b0b5-1eed1b4da893-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:54 compute-2 nova_compute[226829]: 2026-01-31 07:39:54.641 226833 INFO nova.compute.manager [None req-01b805e1-fb01-4407-94cd-d0956f050937 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Terminating instance
Jan 31 07:39:54 compute-2 nova_compute[226829]: 2026-01-31 07:39:54.642 226833 DEBUG nova.compute.manager [None req-01b805e1-fb01-4407-94cd-d0956f050937 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:39:54 compute-2 nova_compute[226829]: 2026-01-31 07:39:54.795 226833 DEBUG nova.compute.manager [req-f3f14142-10f8-4b0a-a9f9-553a299e3996 req-64b1ee39-5087-4ec6-9f76-c8728947c9e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Received event network-vif-plugged-045ce842-8d48-43b3-8c03-27d4ea7b65cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:39:54 compute-2 nova_compute[226829]: 2026-01-31 07:39:54.796 226833 DEBUG oslo_concurrency.lockutils [req-f3f14142-10f8-4b0a-a9f9-553a299e3996 req-64b1ee39-5087-4ec6-9f76-c8728947c9e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4987b961-5362-48cd-80ca-ff6a201327e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:54 compute-2 nova_compute[226829]: 2026-01-31 07:39:54.796 226833 DEBUG oslo_concurrency.lockutils [req-f3f14142-10f8-4b0a-a9f9-553a299e3996 req-64b1ee39-5087-4ec6-9f76-c8728947c9e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4987b961-5362-48cd-80ca-ff6a201327e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:54 compute-2 nova_compute[226829]: 2026-01-31 07:39:54.797 226833 DEBUG oslo_concurrency.lockutils [req-f3f14142-10f8-4b0a-a9f9-553a299e3996 req-64b1ee39-5087-4ec6-9f76-c8728947c9e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4987b961-5362-48cd-80ca-ff6a201327e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:54 compute-2 nova_compute[226829]: 2026-01-31 07:39:54.797 226833 DEBUG nova.compute.manager [req-f3f14142-10f8-4b0a-a9f9-553a299e3996 req-64b1ee39-5087-4ec6-9f76-c8728947c9e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] No waiting events found dispatching network-vif-plugged-045ce842-8d48-43b3-8c03-27d4ea7b65cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:39:54 compute-2 nova_compute[226829]: 2026-01-31 07:39:54.797 226833 WARNING nova.compute.manager [req-f3f14142-10f8-4b0a-a9f9-553a299e3996 req-64b1ee39-5087-4ec6-9f76-c8728947c9e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Received unexpected event network-vif-plugged-045ce842-8d48-43b3-8c03-27d4ea7b65cc for instance with vm_state active and task_state deleting.
Jan 31 07:39:54 compute-2 kernel: tap34b1d0d0-9a (unregistering): left promiscuous mode
Jan 31 07:39:54 compute-2 NetworkManager[48999]: <info>  [1769845194.8264] device (tap34b1d0d0-9a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:39:54 compute-2 ovn_controller[133834]: 2026-01-31T07:39:54Z|00094|binding|INFO|Releasing lport 34b1d0d0-9a83-46fb-8663-0daad4c676e3 from this chassis (sb_readonly=0)
Jan 31 07:39:54 compute-2 ovn_controller[133834]: 2026-01-31T07:39:54Z|00095|binding|INFO|Setting lport 34b1d0d0-9a83-46fb-8663-0daad4c676e3 down in Southbound
Jan 31 07:39:54 compute-2 ovn_controller[133834]: 2026-01-31T07:39:54Z|00096|binding|INFO|Removing iface tap34b1d0d0-9a ovn-installed in OVS
Jan 31 07:39:54 compute-2 nova_compute[226829]: 2026-01-31 07:39:54.868 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:54.877 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:60:aa 10.100.0.8'], port_security=['fa:16:3e:ca:60:aa 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ae28510e-134b-4be3-b0b5-1eed1b4da893', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0554655ad0a48c8bf0551298dd31919', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b25af7c2-ecfe-428a-9b4f-51874d47219e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3f23c6f-f389-4487-9d19-0cf4a6c28cbc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=34b1d0d0-9a83-46fb-8663-0daad4c676e3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:39:54 compute-2 nova_compute[226829]: 2026-01-31 07:39:54.878 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:54.880 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 34b1d0d0-9a83-46fb-8663-0daad4c676e3 in datapath 8c92e27e-f16c-4df2-a299-60ef2ca44f53 unbound from our chassis
Jan 31 07:39:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:54.885 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8c92e27e-f16c-4df2-a299-60ef2ca44f53, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:39:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:54.886 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee68658-526f-4b08-9f4c-b73a1b92e5de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:54.887 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53 namespace which is not needed anymore
Jan 31 07:39:54 compute-2 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d00000022.scope: Deactivated successfully.
Jan 31 07:39:54 compute-2 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d00000022.scope: Consumed 17.847s CPU time.
Jan 31 07:39:54 compute-2 systemd-machined[195142]: Machine qemu-15-instance-00000022 terminated.
Jan 31 07:39:54 compute-2 nova_compute[226829]: 2026-01-31 07:39:54.945 226833 DEBUG nova.network.neutron [-] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:39:54 compute-2 nova_compute[226829]: 2026-01-31 07:39:54.975 226833 INFO nova.compute.manager [-] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Took 0.76 seconds to deallocate network for instance.
Jan 31 07:39:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2782953610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:55 compute-2 ceph-mon[77282]: osdmap e175: 3 total, 3 up, 3 in
Jan 31 07:39:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1480405234' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:39:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/579142581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:55 compute-2 neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53[240064]: [NOTICE]   (240068) : haproxy version is 2.8.14-c23fe91
Jan 31 07:39:55 compute-2 neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53[240064]: [NOTICE]   (240068) : path to executable is /usr/sbin/haproxy
Jan 31 07:39:55 compute-2 neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53[240064]: [WARNING]  (240068) : Exiting Master process...
Jan 31 07:39:55 compute-2 neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53[240064]: [ALERT]    (240068) : Current worker (240070) exited with code 143 (Terminated)
Jan 31 07:39:55 compute-2 neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53[240064]: [WARNING]  (240068) : All workers exited. Exiting... (0)
Jan 31 07:39:55 compute-2 systemd[1]: libpod-0ffcf2636ac16db5bf63214bbf18a370be8cdf6cf66a20f95b65064eca9c38a0.scope: Deactivated successfully.
Jan 31 07:39:55 compute-2 NetworkManager[48999]: <info>  [1769845195.0652] manager: (tap34b1d0d0-9a): new Tun device (/org/freedesktop/NetworkManager/Devices/58)
Jan 31 07:39:55 compute-2 podman[243360]: 2026-01-31 07:39:55.068168493 +0000 UTC m=+0.063498272 container died 0ffcf2636ac16db5bf63214bbf18a370be8cdf6cf66a20f95b65064eca9c38a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.087 226833 INFO nova.virt.libvirt.driver [-] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Instance destroyed successfully.
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.087 226833 DEBUG nova.objects.instance [None req-01b805e1-fb01-4407-94cd-d0956f050937 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lazy-loading 'resources' on Instance uuid ae28510e-134b-4be3-b0b5-1eed1b4da893 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:39:55 compute-2 systemd[1]: var-lib-containers-storage-overlay-7155d948b40ed956dfeaff1435c1e770dc2223bbbb10bd820c41a6201511310f-merged.mount: Deactivated successfully.
Jan 31 07:39:55 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0ffcf2636ac16db5bf63214bbf18a370be8cdf6cf66a20f95b65064eca9c38a0-userdata-shm.mount: Deactivated successfully.
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.109 226833 DEBUG oslo_concurrency.lockutils [None req-c5873054-a169-4a80-8085-ee4bdfd883f3 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.110 226833 DEBUG oslo_concurrency.lockutils [None req-c5873054-a169-4a80-8085-ee4bdfd883f3 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:55 compute-2 podman[243360]: 2026-01-31 07:39:55.111539494 +0000 UTC m=+0.106869223 container cleanup 0ffcf2636ac16db5bf63214bbf18a370be8cdf6cf66a20f95b65064eca9c38a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:39:55 compute-2 systemd[1]: libpod-conmon-0ffcf2636ac16db5bf63214bbf18a370be8cdf6cf66a20f95b65064eca9c38a0.scope: Deactivated successfully.
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.138 226833 DEBUG nova.virt.libvirt.vif [None req-01b805e1-fb01-4407-94cd-d0956f050937 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:37:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-263437114',display_name='tempest-ServersAdminTestJSON-server-263437114',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-263437114',id=34,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:37:58Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b0554655ad0a48c8bf0551298dd31919',ramdisk_id='',reservation_id='r-4q08ca15',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1156607975',owner_user_name='tempest-ServersAdminTestJSON-1156607975-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:37:58Z,user_data=None,user_id='8a44db09acbd4aeb990147dc979f0bfd',uuid=ae28510e-134b-4be3-b0b5-1eed1b4da893,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "34b1d0d0-9a83-46fb-8663-0daad4c676e3", "address": "fa:16:3e:ca:60:aa", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34b1d0d0-9a", "ovs_interfaceid": "34b1d0d0-9a83-46fb-8663-0daad4c676e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.139 226833 DEBUG nova.network.os_vif_util [None req-01b805e1-fb01-4407-94cd-d0956f050937 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converting VIF {"id": "34b1d0d0-9a83-46fb-8663-0daad4c676e3", "address": "fa:16:3e:ca:60:aa", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34b1d0d0-9a", "ovs_interfaceid": "34b1d0d0-9a83-46fb-8663-0daad4c676e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.139 226833 DEBUG nova.network.os_vif_util [None req-01b805e1-fb01-4407-94cd-d0956f050937 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ca:60:aa,bridge_name='br-int',has_traffic_filtering=True,id=34b1d0d0-9a83-46fb-8663-0daad4c676e3,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34b1d0d0-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.140 226833 DEBUG os_vif [None req-01b805e1-fb01-4407-94cd-d0956f050937 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ca:60:aa,bridge_name='br-int',has_traffic_filtering=True,id=34b1d0d0-9a83-46fb-8663-0daad4c676e3,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34b1d0d0-9a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.142 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.143 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34b1d0d0-9a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.145 226833 DEBUG nova.compute.manager [req-04610e25-96a4-4bdf-ae2b-e01fa8e28d27 req-62d07733-35dd-42dd-81a2-4a5389d688bb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Received event network-vif-unplugged-34b1d0d0-9a83-46fb-8663-0daad4c676e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.146 226833 DEBUG oslo_concurrency.lockutils [req-04610e25-96a4-4bdf-ae2b-e01fa8e28d27 req-62d07733-35dd-42dd-81a2-4a5389d688bb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ae28510e-134b-4be3-b0b5-1eed1b4da893-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.146 226833 DEBUG oslo_concurrency.lockutils [req-04610e25-96a4-4bdf-ae2b-e01fa8e28d27 req-62d07733-35dd-42dd-81a2-4a5389d688bb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ae28510e-134b-4be3-b0b5-1eed1b4da893-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.146 226833 DEBUG oslo_concurrency.lockutils [req-04610e25-96a4-4bdf-ae2b-e01fa8e28d27 req-62d07733-35dd-42dd-81a2-4a5389d688bb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ae28510e-134b-4be3-b0b5-1eed1b4da893-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.146 226833 DEBUG nova.compute.manager [req-04610e25-96a4-4bdf-ae2b-e01fa8e28d27 req-62d07733-35dd-42dd-81a2-4a5389d688bb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] No waiting events found dispatching network-vif-unplugged-34b1d0d0-9a83-46fb-8663-0daad4c676e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.146 226833 DEBUG nova.compute.manager [req-04610e25-96a4-4bdf-ae2b-e01fa8e28d27 req-62d07733-35dd-42dd-81a2-4a5389d688bb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Received event network-vif-unplugged-34b1d0d0-9a83-46fb-8663-0daad4c676e3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.147 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.148 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.150 226833 INFO os_vif [None req-01b805e1-fb01-4407-94cd-d0956f050937 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ca:60:aa,bridge_name='br-int',has_traffic_filtering=True,id=34b1d0d0-9a83-46fb-8663-0daad4c676e3,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34b1d0d0-9a')
Jan 31 07:39:55 compute-2 podman[243398]: 2026-01-31 07:39:55.195287667 +0000 UTC m=+0.064372055 container remove 0ffcf2636ac16db5bf63214bbf18a370be8cdf6cf66a20f95b65064eca9c38a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 07:39:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:55.200 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[dcd9d754-d552-44c9-aedb-28f1b8cba2e3]: (4, ('Sat Jan 31 07:39:54 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53 (0ffcf2636ac16db5bf63214bbf18a370be8cdf6cf66a20f95b65064eca9c38a0)\n0ffcf2636ac16db5bf63214bbf18a370be8cdf6cf66a20f95b65064eca9c38a0\nSat Jan 31 07:39:55 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53 (0ffcf2636ac16db5bf63214bbf18a370be8cdf6cf66a20f95b65064eca9c38a0)\n0ffcf2636ac16db5bf63214bbf18a370be8cdf6cf66a20f95b65064eca9c38a0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:55.202 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6c180d8c-183a-483f-9b69-1320eedaacb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:55.203 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c92e27e-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.204 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:55 compute-2 kernel: tap8c92e27e-f0: left promiscuous mode
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.207 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:55.210 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c279068c-5b83-49b6-a135-09e9ace8d827]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.211 226833 DEBUG oslo_concurrency.processutils [None req-c5873054-a169-4a80-8085-ee4bdfd883f3 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:39:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:55.227 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e0a5696c-2d86-4d6a-8bc1-9c2c4b5f3574]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:55.229 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[216b6247-efdb-4ddc-883f-d930f082bbed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.237 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:55.242 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[58c185f3-b07b-42fc-8c2e-c758346cb9bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533037, 'reachable_time': 42125, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243435, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:55 compute-2 systemd[1]: run-netns-ovnmeta\x2d8c92e27e\x2df16c\x2d4df2\x2da299\x2d60ef2ca44f53.mount: Deactivated successfully.
Jan 31 07:39:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:55.246 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:39:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:39:55.246 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[2aeec36a-f888-4f6b-aa05-573669cf4f0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:39:55 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3209765064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.655 226833 DEBUG oslo_concurrency.processutils [None req-c5873054-a169-4a80-8085-ee4bdfd883f3 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.661 226833 DEBUG nova.compute.provider_tree [None req-c5873054-a169-4a80-8085-ee4bdfd883f3 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.675 226833 DEBUG nova.scheduler.client.report [None req-c5873054-a169-4a80-8085-ee4bdfd883f3 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.703 226833 DEBUG oslo_concurrency.lockutils [None req-c5873054-a169-4a80-8085-ee4bdfd883f3 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.711 226833 INFO nova.virt.libvirt.driver [None req-01b805e1-fb01-4407-94cd-d0956f050937 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Deleting instance files /var/lib/nova/instances/ae28510e-134b-4be3-b0b5-1eed1b4da893_del
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.711 226833 INFO nova.virt.libvirt.driver [None req-01b805e1-fb01-4407-94cd-d0956f050937 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Deletion of /var/lib/nova/instances/ae28510e-134b-4be3-b0b5-1eed1b4da893_del complete
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.724 226833 INFO nova.scheduler.client.report [None req-c5873054-a169-4a80-8085-ee4bdfd883f3 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Deleted allocations for instance 4987b961-5362-48cd-80ca-ff6a201327e6
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.794 226833 INFO nova.compute.manager [None req-01b805e1-fb01-4407-94cd-d0956f050937 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Took 1.15 seconds to destroy the instance on the hypervisor.
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.795 226833 DEBUG oslo.service.loopingcall [None req-01b805e1-fb01-4407-94cd-d0956f050937 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.795 226833 DEBUG nova.compute.manager [-] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.795 226833 DEBUG nova.network.neutron [-] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:39:55 compute-2 nova_compute[226829]: 2026-01-31 07:39:55.802 226833 DEBUG oslo_concurrency.lockutils [None req-c5873054-a169-4a80-8085-ee4bdfd883f3 73c9db412cc647958ba8093d8f187dce 4511f016f1e44a299447f7fe1ad1a7ab - - default default] Lock "4987b961-5362-48cd-80ca-ff6a201327e6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.922s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:55.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:56 compute-2 ceph-mon[77282]: pgmap v1205: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 514 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.2 MiB/s wr, 253 op/s
Jan 31 07:39:56 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3209765064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:56 compute-2 nova_compute[226829]: 2026-01-31 07:39:56.054 226833 DEBUG nova.compute.manager [req-0e9cf891-d6e3-4c8f-a9eb-49bde098cda7 req-cb09f34f-becc-4c6d-bec3-158a69da6c15 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Received event network-vif-deleted-045ce842-8d48-43b3-8c03-27d4ea7b65cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:39:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e176 e176: 3 total, 3 up, 3 in
Jan 31 07:39:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:39:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:56.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:39:56 compute-2 nova_compute[226829]: 2026-01-31 07:39:56.958 226833 DEBUG nova.network.neutron [-] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:39:56 compute-2 nova_compute[226829]: 2026-01-31 07:39:56.979 226833 INFO nova.compute.manager [-] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Took 1.18 seconds to deallocate network for instance.
Jan 31 07:39:57 compute-2 nova_compute[226829]: 2026-01-31 07:39:57.042 226833 DEBUG oslo_concurrency.lockutils [None req-01b805e1-fb01-4407-94cd-d0956f050937 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:57 compute-2 nova_compute[226829]: 2026-01-31 07:39:57.043 226833 DEBUG oslo_concurrency.lockutils [None req-01b805e1-fb01-4407-94cd-d0956f050937 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:57 compute-2 ceph-mon[77282]: osdmap e176: 3 total, 3 up, 3 in
Jan 31 07:39:57 compute-2 nova_compute[226829]: 2026-01-31 07:39:57.147 226833 DEBUG oslo_concurrency.processutils [None req-01b805e1-fb01-4407-94cd-d0956f050937 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:39:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e177 e177: 3 total, 3 up, 3 in
Jan 31 07:39:57 compute-2 nova_compute[226829]: 2026-01-31 07:39:57.225 226833 DEBUG nova.compute.manager [req-d83b08b5-b597-49ae-a46a-65e39558d667 req-0823851a-494b-4b3d-97f8-bf2815225ad3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Received event network-vif-plugged-34b1d0d0-9a83-46fb-8663-0daad4c676e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:39:57 compute-2 nova_compute[226829]: 2026-01-31 07:39:57.225 226833 DEBUG oslo_concurrency.lockutils [req-d83b08b5-b597-49ae-a46a-65e39558d667 req-0823851a-494b-4b3d-97f8-bf2815225ad3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ae28510e-134b-4be3-b0b5-1eed1b4da893-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:57 compute-2 nova_compute[226829]: 2026-01-31 07:39:57.226 226833 DEBUG oslo_concurrency.lockutils [req-d83b08b5-b597-49ae-a46a-65e39558d667 req-0823851a-494b-4b3d-97f8-bf2815225ad3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ae28510e-134b-4be3-b0b5-1eed1b4da893-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:57 compute-2 nova_compute[226829]: 2026-01-31 07:39:57.226 226833 DEBUG oslo_concurrency.lockutils [req-d83b08b5-b597-49ae-a46a-65e39558d667 req-0823851a-494b-4b3d-97f8-bf2815225ad3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ae28510e-134b-4be3-b0b5-1eed1b4da893-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:57 compute-2 nova_compute[226829]: 2026-01-31 07:39:57.227 226833 DEBUG nova.compute.manager [req-d83b08b5-b597-49ae-a46a-65e39558d667 req-0823851a-494b-4b3d-97f8-bf2815225ad3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] No waiting events found dispatching network-vif-plugged-34b1d0d0-9a83-46fb-8663-0daad4c676e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:39:57 compute-2 nova_compute[226829]: 2026-01-31 07:39:57.227 226833 WARNING nova.compute.manager [req-d83b08b5-b597-49ae-a46a-65e39558d667 req-0823851a-494b-4b3d-97f8-bf2815225ad3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Received unexpected event network-vif-plugged-34b1d0d0-9a83-46fb-8663-0daad4c676e3 for instance with vm_state deleted and task_state None.
Jan 31 07:39:57 compute-2 nova_compute[226829]: 2026-01-31 07:39:57.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:39:57 compute-2 nova_compute[226829]: 2026-01-31 07:39:57.490 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:39:57 compute-2 nova_compute[226829]: 2026-01-31 07:39:57.545 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:39:57 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/92165256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:57 compute-2 nova_compute[226829]: 2026-01-31 07:39:57.588 226833 DEBUG oslo_concurrency.processutils [None req-01b805e1-fb01-4407-94cd-d0956f050937 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:39:57 compute-2 nova_compute[226829]: 2026-01-31 07:39:57.594 226833 DEBUG nova.compute.provider_tree [None req-01b805e1-fb01-4407-94cd-d0956f050937 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:39:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:39:57 compute-2 nova_compute[226829]: 2026-01-31 07:39:57.820 226833 DEBUG nova.scheduler.client.report [None req-01b805e1-fb01-4407-94cd-d0956f050937 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:39:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:39:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:57.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:39:58 compute-2 nova_compute[226829]: 2026-01-31 07:39:58.033 226833 DEBUG oslo_concurrency.lockutils [None req-01b805e1-fb01-4407-94cd-d0956f050937 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:58 compute-2 nova_compute[226829]: 2026-01-31 07:39:58.038 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.493s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:58 compute-2 nova_compute[226829]: 2026-01-31 07:39:58.038 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:58 compute-2 nova_compute[226829]: 2026-01-31 07:39:58.039 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:39:58 compute-2 nova_compute[226829]: 2026-01-31 07:39:58.039 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:39:58 compute-2 nova_compute[226829]: 2026-01-31 07:39:58.192 226833 INFO nova.scheduler.client.report [None req-01b805e1-fb01-4407-94cd-d0956f050937 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Deleted allocations for instance ae28510e-134b-4be3-b0b5-1eed1b4da893
Jan 31 07:39:58 compute-2 ceph-mon[77282]: pgmap v1207: 305 pgs: 305 active+clean; 335 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 363 op/s
Jan 31 07:39:58 compute-2 ceph-mon[77282]: osdmap e177: 3 total, 3 up, 3 in
Jan 31 07:39:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/92165256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:58 compute-2 nova_compute[226829]: 2026-01-31 07:39:58.291 226833 DEBUG nova.compute.manager [req-32031141-fd35-4920-9cb7-71f215618f38 req-63ae74e6-23c2-4ee0-93a6-4a4135b16038 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Received event network-vif-deleted-34b1d0d0-9a83-46fb-8663-0daad4c676e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:39:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e178 e178: 3 total, 3 up, 3 in
Jan 31 07:39:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:39:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:58.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:39:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:39:58 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2259836008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:58 compute-2 nova_compute[226829]: 2026-01-31 07:39:58.531 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:39:58 compute-2 nova_compute[226829]: 2026-01-31 07:39:58.681 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:39:58 compute-2 nova_compute[226829]: 2026-01-31 07:39:58.683 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4675MB free_disk=20.83910369873047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:39:58 compute-2 nova_compute[226829]: 2026-01-31 07:39:58.683 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:58 compute-2 nova_compute[226829]: 2026-01-31 07:39:58.684 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:58 compute-2 nova_compute[226829]: 2026-01-31 07:39:58.733 226833 DEBUG oslo_concurrency.lockutils [None req-01b805e1-fb01-4407-94cd-d0956f050937 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "ae28510e-134b-4be3-b0b5-1eed1b4da893" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e179 e179: 3 total, 3 up, 3 in
Jan 31 07:39:58 compute-2 nova_compute[226829]: 2026-01-31 07:39:58.933 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:58 compute-2 nova_compute[226829]: 2026-01-31 07:39:58.961 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:39:58 compute-2 nova_compute[226829]: 2026-01-31 07:39:58.961 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:39:58 compute-2 nova_compute[226829]: 2026-01-31 07:39:58.984 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:39:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2244405028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:59 compute-2 ceph-mon[77282]: osdmap e178: 3 total, 3 up, 3 in
Jan 31 07:39:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2259836008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:59 compute-2 ceph-mon[77282]: osdmap e179: 3 total, 3 up, 3 in
Jan 31 07:39:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:39:59 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2270609077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:59 compute-2 nova_compute[226829]: 2026-01-31 07:39:59.486 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:39:59 compute-2 nova_compute[226829]: 2026-01-31 07:39:59.496 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:39:59 compute-2 nova_compute[226829]: 2026-01-31 07:39:59.688 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:39:59 compute-2 nova_compute[226829]: 2026-01-31 07:39:59.735 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:59 compute-2 nova_compute[226829]: 2026-01-31 07:39:59.860 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:39:59 compute-2 nova_compute[226829]: 2026-01-31 07:39:59.861 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.177s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:39:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:59.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:00 compute-2 nova_compute[226829]: 2026-01-31 07:40:00.144 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:00.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:00 compute-2 ceph-mon[77282]: pgmap v1210: 305 pgs: 305 active+clean; 310 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 504 KiB/s wr, 216 op/s
Jan 31 07:40:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2270609077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1501468041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:00 compute-2 ceph-mon[77282]: overall HEALTH_OK
Jan 31 07:40:01 compute-2 ceph-mon[77282]: pgmap v1212: 305 pgs: 305 active+clean; 293 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.7 MiB/s wr, 271 op/s
Jan 31 07:40:01 compute-2 nova_compute[226829]: 2026-01-31 07:40:01.856 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:40:01 compute-2 nova_compute[226829]: 2026-01-31 07:40:01.858 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:40:01 compute-2 nova_compute[226829]: 2026-01-31 07:40:01.858 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:40:01 compute-2 nova_compute[226829]: 2026-01-31 07:40:01.859 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:40:01 compute-2 nova_compute[226829]: 2026-01-31 07:40:01.927 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:40:01 compute-2 nova_compute[226829]: 2026-01-31 07:40:01.928 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:40:01 compute-2 nova_compute[226829]: 2026-01-31 07:40:01.928 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:40:01 compute-2 nova_compute[226829]: 2026-01-31 07:40:01.929 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:40:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:01.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:40:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:02.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:40:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:40:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e180 e180: 3 total, 3 up, 3 in
Jan 31 07:40:03 compute-2 nova_compute[226829]: 2026-01-31 07:40:03.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:40:03 compute-2 nova_compute[226829]: 2026-01-31 07:40:03.538 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:40:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e181 e181: 3 total, 3 up, 3 in
Jan 31 07:40:03 compute-2 ceph-mon[77282]: pgmap v1213: 305 pgs: 305 active+clean; 256 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 239 op/s
Jan 31 07:40:03 compute-2 ceph-mon[77282]: osdmap e180: 3 total, 3 up, 3 in
Jan 31 07:40:03 compute-2 ceph-mon[77282]: osdmap e181: 3 total, 3 up, 3 in
Jan 31 07:40:03 compute-2 nova_compute[226829]: 2026-01-31 07:40:03.935 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:03.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:04 compute-2 podman[243534]: 2026-01-31 07:40:04.206737264 +0000 UTC m=+0.083379124 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:40:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:40:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:04.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:40:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/287017321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:05 compute-2 nova_compute[226829]: 2026-01-31 07:40:05.147 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:05 compute-2 ceph-mon[77282]: pgmap v1216: 305 pgs: 305 active+clean; 256 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.2 MiB/s wr, 191 op/s
Jan 31 07:40:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1676011270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:05.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:06.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:06.847 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:06.850 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:06.850 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/979781824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e182 e182: 3 total, 3 up, 3 in
Jan 31 07:40:07 compute-2 nova_compute[226829]: 2026-01-31 07:40:07.316 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845192.3155832, 4987b961-5362-48cd-80ca-ff6a201327e6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:40:07 compute-2 nova_compute[226829]: 2026-01-31 07:40:07.317 226833 INFO nova.compute.manager [-] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] VM Stopped (Lifecycle Event)
Jan 31 07:40:07 compute-2 nova_compute[226829]: 2026-01-31 07:40:07.504 226833 DEBUG nova.compute.manager [None req-0d322b5e-ae98-4497-b72f-4965b6354705 - - - - - -] [instance: 4987b961-5362-48cd-80ca-ff6a201327e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:40:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:40:07 compute-2 ceph-mon[77282]: pgmap v1217: 305 pgs: 305 active+clean; 80 MiB data, 457 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 264 op/s
Jan 31 07:40:07 compute-2 ceph-mon[77282]: osdmap e182: 3 total, 3 up, 3 in
Jan 31 07:40:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1190711260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:40:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:07.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:40:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:08.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:08 compute-2 nova_compute[226829]: 2026-01-31 07:40:08.937 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:09 compute-2 ceph-mon[77282]: pgmap v1219: 305 pgs: 305 active+clean; 54 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 112 KiB/s rd, 9.3 KiB/s wr, 167 op/s
Jan 31 07:40:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:09.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:10 compute-2 nova_compute[226829]: 2026-01-31 07:40:10.084 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845195.082261, ae28510e-134b-4be3-b0b5-1eed1b4da893 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:40:10 compute-2 nova_compute[226829]: 2026-01-31 07:40:10.084 226833 INFO nova.compute.manager [-] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] VM Stopped (Lifecycle Event)
Jan 31 07:40:10 compute-2 nova_compute[226829]: 2026-01-31 07:40:10.117 226833 DEBUG nova.compute.manager [None req-b53f334d-a9fa-409d-94cc-8155ffec6b33 - - - - - -] [instance: ae28510e-134b-4be3-b0b5-1eed1b4da893] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:40:10 compute-2 nova_compute[226829]: 2026-01-31 07:40:10.150 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:10 compute-2 sudo[243563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:40:10 compute-2 sudo[243563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:10 compute-2 sudo[243563]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:10 compute-2 sudo[243588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:40:10 compute-2 sudo[243588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:10 compute-2 sudo[243588]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:10.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/694355676' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:40:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/694355676' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:40:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:11.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:12 compute-2 ceph-mon[77282]: pgmap v1220: 305 pgs: 305 active+clean; 53 MiB data, 426 MiB used, 21 GiB / 21 GiB avail; 111 KiB/s rd, 202 KiB/s wr, 163 op/s
Jan 31 07:40:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:40:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:12.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:40:12 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:40:13 compute-2 podman[243614]: 2026-01-31 07:40:13.18783153 +0000 UTC m=+0.068548027 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:40:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e183 e183: 3 total, 3 up, 3 in
Jan 31 07:40:13 compute-2 nova_compute[226829]: 2026-01-31 07:40:13.940 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:13.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:14 compute-2 ceph-mon[77282]: pgmap v1221: 305 pgs: 305 active+clean; 68 MiB data, 433 MiB used, 21 GiB / 21 GiB avail; 110 KiB/s rd, 968 KiB/s wr, 162 op/s
Jan 31 07:40:14 compute-2 ceph-mon[77282]: osdmap e183: 3 total, 3 up, 3 in
Jan 31 07:40:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:40:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:14.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:40:15 compute-2 nova_compute[226829]: 2026-01-31 07:40:15.153 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/735678213' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:40:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:15.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:16.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:16 compute-2 ceph-mon[77282]: pgmap v1223: 305 pgs: 305 active+clean; 68 MiB data, 433 MiB used, 21 GiB / 21 GiB avail; 42 KiB/s rd, 1.0 MiB/s wr, 62 op/s
Jan 31 07:40:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4167689424' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:40:17 compute-2 ceph-mon[77282]: pgmap v1224: 305 pgs: 305 active+clean; 88 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 55 KiB/s rd, 2.2 MiB/s wr, 80 op/s
Jan 31 07:40:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:40:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:17.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:18.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:18 compute-2 nova_compute[226829]: 2026-01-31 07:40:18.941 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:19 compute-2 ceph-mon[77282]: pgmap v1225: 305 pgs: 305 active+clean; 88 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 50 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 31 07:40:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:19.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:20 compute-2 nova_compute[226829]: 2026-01-31 07:40:20.156 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:20.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:21 compute-2 ceph-mon[77282]: pgmap v1226: 305 pgs: 305 active+clean; 88 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.0 MiB/s wr, 109 op/s
Jan 31 07:40:21 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e184 e184: 3 total, 3 up, 3 in
Jan 31 07:40:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:21.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:22.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:40:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e185 e185: 3 total, 3 up, 3 in
Jan 31 07:40:22 compute-2 ceph-mon[77282]: osdmap e184: 3 total, 3 up, 3 in
Jan 31 07:40:23 compute-2 ceph-mon[77282]: pgmap v1228: 305 pgs: 305 active+clean; 107 MiB data, 453 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.2 MiB/s wr, 143 op/s
Jan 31 07:40:23 compute-2 ceph-mon[77282]: osdmap e185: 3 total, 3 up, 3 in
Jan 31 07:40:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e186 e186: 3 total, 3 up, 3 in
Jan 31 07:40:23 compute-2 nova_compute[226829]: 2026-01-31 07:40:23.944 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:23.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:40:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:24.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:40:25 compute-2 ceph-mon[77282]: osdmap e186: 3 total, 3 up, 3 in
Jan 31 07:40:25 compute-2 nova_compute[226829]: 2026-01-31 07:40:25.161 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:25.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:26 compute-2 ceph-mon[77282]: pgmap v1231: 305 pgs: 305 active+clean; 107 MiB data, 453 MiB used, 21 GiB / 21 GiB avail; 5.0 MiB/s rd, 1.1 MiB/s wr, 163 op/s
Jan 31 07:40:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3020357619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:26.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:40:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:40:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:27.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:40:28 compute-2 sudo[243642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:40:28 compute-2 sudo[243642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:28 compute-2 sudo[243642]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:28 compute-2 ceph-mon[77282]: pgmap v1232: 305 pgs: 305 active+clean; 134 MiB data, 468 MiB used, 21 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.5 MiB/s wr, 142 op/s
Jan 31 07:40:28 compute-2 sudo[243667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:40:28 compute-2 sudo[243667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:28 compute-2 sudo[243667]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:28 compute-2 sudo[243692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:40:28 compute-2 sudo[243692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:28 compute-2 sudo[243692]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:28.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:28 compute-2 sudo[243717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:40:28 compute-2 sudo[243717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:28 compute-2 sudo[243717]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:28 compute-2 nova_compute[226829]: 2026-01-31 07:40:28.947 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:40:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:40:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:40:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:40:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:40:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:40:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:29.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:30 compute-2 nova_compute[226829]: 2026-01-31 07:40:30.164 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:40:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:30.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:40:30 compute-2 sudo[243776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:40:30 compute-2 sudo[243776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:30 compute-2 sudo[243776]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:30 compute-2 sudo[243801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:40:30 compute-2 sudo[243801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:30 compute-2 sudo[243801]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:31 compute-2 nova_compute[226829]: 2026-01-31 07:40:31.858 226833 DEBUG oslo_concurrency.lockutils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "87f440bf-fb27-4f54-91e4-20ac4a817803" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:31 compute-2 nova_compute[226829]: 2026-01-31 07:40:31.859 226833 DEBUG oslo_concurrency.lockutils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "87f440bf-fb27-4f54-91e4-20ac4a817803" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:31 compute-2 nova_compute[226829]: 2026-01-31 07:40:31.895 226833 DEBUG nova.compute.manager [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:40:31 compute-2 nova_compute[226829]: 2026-01-31 07:40:31.984 226833 DEBUG oslo_concurrency.lockutils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:31 compute-2 nova_compute[226829]: 2026-01-31 07:40:31.985 226833 DEBUG oslo_concurrency.lockutils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:31.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:31 compute-2 nova_compute[226829]: 2026-01-31 07:40:31.995 226833 DEBUG nova.virt.hardware [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:40:31 compute-2 nova_compute[226829]: 2026-01-31 07:40:31.996 226833 INFO nova.compute.claims [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:40:32 compute-2 nova_compute[226829]: 2026-01-31 07:40:32.115 226833 DEBUG oslo_concurrency.processutils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:40:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:40:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:32.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:40:32 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:40:32 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2010659746' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:32 compute-2 nova_compute[226829]: 2026-01-31 07:40:32.620 226833 DEBUG oslo_concurrency.processutils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:40:32 compute-2 nova_compute[226829]: 2026-01-31 07:40:32.626 226833 DEBUG nova.compute.provider_tree [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:40:32 compute-2 nova_compute[226829]: 2026-01-31 07:40:32.644 226833 DEBUG nova.scheduler.client.report [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:40:32 compute-2 nova_compute[226829]: 2026-01-31 07:40:32.681 226833 DEBUG oslo_concurrency.lockutils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:32 compute-2 nova_compute[226829]: 2026-01-31 07:40:32.683 226833 DEBUG nova.compute.manager [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:40:32 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:40:32 compute-2 nova_compute[226829]: 2026-01-31 07:40:32.825 226833 DEBUG nova.compute.manager [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:40:32 compute-2 nova_compute[226829]: 2026-01-31 07:40:32.826 226833 DEBUG nova.network.neutron [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:40:32 compute-2 nova_compute[226829]: 2026-01-31 07:40:32.859 226833 INFO nova.virt.libvirt.driver [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:40:32 compute-2 nova_compute[226829]: 2026-01-31 07:40:32.881 226833 DEBUG nova.compute.manager [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:40:32 compute-2 nova_compute[226829]: 2026-01-31 07:40:32.977 226833 DEBUG nova.compute.manager [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:40:32 compute-2 nova_compute[226829]: 2026-01-31 07:40:32.978 226833 DEBUG nova.virt.libvirt.driver [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:40:32 compute-2 nova_compute[226829]: 2026-01-31 07:40:32.979 226833 INFO nova.virt.libvirt.driver [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Creating image(s)
Jan 31 07:40:33 compute-2 nova_compute[226829]: 2026-01-31 07:40:33.007 226833 DEBUG nova.storage.rbd_utils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image 87f440bf-fb27-4f54-91e4-20ac4a817803_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:40:33 compute-2 nova_compute[226829]: 2026-01-31 07:40:33.037 226833 DEBUG nova.storage.rbd_utils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image 87f440bf-fb27-4f54-91e4-20ac4a817803_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:40:33 compute-2 nova_compute[226829]: 2026-01-31 07:40:33.065 226833 DEBUG nova.storage.rbd_utils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image 87f440bf-fb27-4f54-91e4-20ac4a817803_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:40:33 compute-2 nova_compute[226829]: 2026-01-31 07:40:33.069 226833 DEBUG oslo_concurrency.lockutils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "e7d1a2b2e5df064bf93a8974968c95faa19a3d64" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:33 compute-2 nova_compute[226829]: 2026-01-31 07:40:33.070 226833 DEBUG oslo_concurrency.lockutils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "e7d1a2b2e5df064bf93a8974968c95faa19a3d64" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:33 compute-2 nova_compute[226829]: 2026-01-31 07:40:33.081 226833 DEBUG oslo_concurrency.lockutils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Acquiring lock "49c2b2d1-3230-4f75-bc49-86230accc637" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:33 compute-2 nova_compute[226829]: 2026-01-31 07:40:33.082 226833 DEBUG oslo_concurrency.lockutils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lock "49c2b2d1-3230-4f75-bc49-86230accc637" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:33 compute-2 nova_compute[226829]: 2026-01-31 07:40:33.116 226833 DEBUG nova.compute.manager [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:40:33 compute-2 nova_compute[226829]: 2026-01-31 07:40:33.145 226833 DEBUG nova.policy [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '533eaca1e9c4430dabe2b0a39039ca65', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b3e3e6f216d24c1f9f68777cfb63dbf8', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:40:33 compute-2 nova_compute[226829]: 2026-01-31 07:40:33.249 226833 DEBUG oslo_concurrency.lockutils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:33 compute-2 nova_compute[226829]: 2026-01-31 07:40:33.250 226833 DEBUG oslo_concurrency.lockutils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:33 compute-2 nova_compute[226829]: 2026-01-31 07:40:33.255 226833 DEBUG nova.virt.hardware [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:40:33 compute-2 nova_compute[226829]: 2026-01-31 07:40:33.256 226833 INFO nova.compute.claims [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:40:33 compute-2 nova_compute[226829]: 2026-01-31 07:40:33.363 226833 DEBUG oslo_concurrency.processutils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:40:33 compute-2 nova_compute[226829]: 2026-01-31 07:40:33.422 226833 DEBUG nova.virt.libvirt.imagebackend [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Image locations are: [{'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/1c1b7167-294e-43fd-b811-31bea4078f3d/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/1c1b7167-294e-43fd-b811-31bea4078f3d/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 31 07:40:33 compute-2 nova_compute[226829]: 2026-01-31 07:40:33.482 226833 DEBUG nova.virt.libvirt.imagebackend [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Selected location: {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/1c1b7167-294e-43fd-b811-31bea4078f3d/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 31 07:40:33 compute-2 nova_compute[226829]: 2026-01-31 07:40:33.483 226833 DEBUG nova.storage.rbd_utils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] cloning images/1c1b7167-294e-43fd-b811-31bea4078f3d@snap to None/87f440bf-fb27-4f54-91e4-20ac4a817803_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 07:40:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:40:33 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/24656606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:33 compute-2 nova_compute[226829]: 2026-01-31 07:40:33.765 226833 DEBUG oslo_concurrency.processutils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:40:33 compute-2 nova_compute[226829]: 2026-01-31 07:40:33.770 226833 DEBUG nova.compute.provider_tree [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:40:33 compute-2 nova_compute[226829]: 2026-01-31 07:40:33.797 226833 DEBUG nova.scheduler.client.report [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:40:33 compute-2 nova_compute[226829]: 2026-01-31 07:40:33.823 226833 DEBUG oslo_concurrency.lockutils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:33 compute-2 nova_compute[226829]: 2026-01-31 07:40:33.824 226833 DEBUG nova.compute.manager [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:40:33 compute-2 nova_compute[226829]: 2026-01-31 07:40:33.875 226833 DEBUG nova.compute.manager [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 31 07:40:33 compute-2 nova_compute[226829]: 2026-01-31 07:40:33.918 226833 INFO nova.virt.libvirt.driver [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:40:33 compute-2 nova_compute[226829]: 2026-01-31 07:40:33.949 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:40:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:33.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:40:34 compute-2 nova_compute[226829]: 2026-01-31 07:40:34.008 226833 DEBUG nova.compute.manager [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:40:34 compute-2 nova_compute[226829]: 2026-01-31 07:40:34.302 226833 DEBUG nova.network.neutron [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Successfully created port: cfae2a9b-ad02-4548-9536-75c968915dac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:40:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:40:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:34.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:40:34 compute-2 nova_compute[226829]: 2026-01-31 07:40:34.467 226833 DEBUG nova.compute.manager [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:40:34 compute-2 nova_compute[226829]: 2026-01-31 07:40:34.468 226833 DEBUG nova.virt.libvirt.driver [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:40:34 compute-2 nova_compute[226829]: 2026-01-31 07:40:34.469 226833 INFO nova.virt.libvirt.driver [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Creating image(s)
Jan 31 07:40:34 compute-2 nova_compute[226829]: 2026-01-31 07:40:34.498 226833 DEBUG nova.storage.rbd_utils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] rbd image 49c2b2d1-3230-4f75-bc49-86230accc637_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:40:34 compute-2 nova_compute[226829]: 2026-01-31 07:40:34.532 226833 DEBUG nova.storage.rbd_utils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] rbd image 49c2b2d1-3230-4f75-bc49-86230accc637_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:40:34 compute-2 nova_compute[226829]: 2026-01-31 07:40:34.564 226833 DEBUG nova.storage.rbd_utils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] rbd image 49c2b2d1-3230-4f75-bc49-86230accc637_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:40:34 compute-2 nova_compute[226829]: 2026-01-31 07:40:34.567 226833 DEBUG oslo_concurrency.processutils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:40:34 compute-2 nova_compute[226829]: 2026-01-31 07:40:34.619 226833 DEBUG oslo_concurrency.processutils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:40:34 compute-2 nova_compute[226829]: 2026-01-31 07:40:34.620 226833 DEBUG oslo_concurrency.lockutils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:34 compute-2 nova_compute[226829]: 2026-01-31 07:40:34.621 226833 DEBUG oslo_concurrency.lockutils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:34 compute-2 nova_compute[226829]: 2026-01-31 07:40:34.621 226833 DEBUG oslo_concurrency.lockutils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:34 compute-2 nova_compute[226829]: 2026-01-31 07:40:34.653 226833 DEBUG nova.storage.rbd_utils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] rbd image 49c2b2d1-3230-4f75-bc49-86230accc637_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:40:34 compute-2 nova_compute[226829]: 2026-01-31 07:40:34.658 226833 DEBUG oslo_concurrency.processutils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 49c2b2d1-3230-4f75-bc49-86230accc637_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:40:35 compute-2 nova_compute[226829]: 2026-01-31 07:40:35.167 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:35 compute-2 podman[244083]: 2026-01-31 07:40:35.204223072 +0000 UTC m=+0.083732113 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 07:40:35 compute-2 nova_compute[226829]: 2026-01-31 07:40:35.847 226833 DEBUG nova.network.neutron [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Successfully updated port: cfae2a9b-ad02-4548-9536-75c968915dac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:40:35 compute-2 nova_compute[226829]: 2026-01-31 07:40:35.895 226833 DEBUG oslo_concurrency.lockutils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "refresh_cache-87f440bf-fb27-4f54-91e4-20ac4a817803" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:40:35 compute-2 nova_compute[226829]: 2026-01-31 07:40:35.895 226833 DEBUG oslo_concurrency.lockutils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquired lock "refresh_cache-87f440bf-fb27-4f54-91e4-20ac4a817803" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:40:35 compute-2 nova_compute[226829]: 2026-01-31 07:40:35.895 226833 DEBUG nova.network.neutron [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:40:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:35.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:36 compute-2 nova_compute[226829]: 2026-01-31 07:40:36.201 226833 DEBUG nova.compute.manager [req-f93931e8-400d-4576-a773-8309548d2baf req-a5b1a0c0-b2c5-404c-92f2-3a7c655b9d61 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Received event network-changed-cfae2a9b-ad02-4548-9536-75c968915dac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:40:36 compute-2 nova_compute[226829]: 2026-01-31 07:40:36.201 226833 DEBUG nova.compute.manager [req-f93931e8-400d-4576-a773-8309548d2baf req-a5b1a0c0-b2c5-404c-92f2-3a7c655b9d61 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Refreshing instance network info cache due to event network-changed-cfae2a9b-ad02-4548-9536-75c968915dac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:40:36 compute-2 nova_compute[226829]: 2026-01-31 07:40:36.201 226833 DEBUG oslo_concurrency.lockutils [req-f93931e8-400d-4576-a773-8309548d2baf req-a5b1a0c0-b2c5-404c-92f2-3a7c655b9d61 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-87f440bf-fb27-4f54-91e4-20ac4a817803" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:40:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:36.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:36 compute-2 nova_compute[226829]: 2026-01-31 07:40:36.548 226833 DEBUG nova.network.neutron [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:40:37 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma missed beacon ack from the monitors
Jan 31 07:40:37 compute-2 nova_compute[226829]: 2026-01-31 07:40:37.682 226833 DEBUG nova.network.neutron [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Updating instance_info_cache with network_info: [{"id": "cfae2a9b-ad02-4548-9536-75c968915dac", "address": "fa:16:3e:25:b1:e4", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfae2a9b-ad", "ovs_interfaceid": "cfae2a9b-ad02-4548-9536-75c968915dac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:40:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:40:37 compute-2 nova_compute[226829]: 2026-01-31 07:40:37.805 226833 DEBUG oslo_concurrency.lockutils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Releasing lock "refresh_cache-87f440bf-fb27-4f54-91e4-20ac4a817803" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:40:37 compute-2 nova_compute[226829]: 2026-01-31 07:40:37.806 226833 DEBUG nova.compute.manager [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Instance network_info: |[{"id": "cfae2a9b-ad02-4548-9536-75c968915dac", "address": "fa:16:3e:25:b1:e4", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfae2a9b-ad", "ovs_interfaceid": "cfae2a9b-ad02-4548-9536-75c968915dac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:40:37 compute-2 nova_compute[226829]: 2026-01-31 07:40:37.807 226833 DEBUG oslo_concurrency.lockutils [req-f93931e8-400d-4576-a773-8309548d2baf req-a5b1a0c0-b2c5-404c-92f2-3a7c655b9d61 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-87f440bf-fb27-4f54-91e4-20ac4a817803" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:40:37 compute-2 nova_compute[226829]: 2026-01-31 07:40:37.807 226833 DEBUG nova.network.neutron [req-f93931e8-400d-4576-a773-8309548d2baf req-a5b1a0c0-b2c5-404c-92f2-3a7c655b9d61 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Refreshing network info cache for port cfae2a9b-ad02-4548-9536-75c968915dac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:40:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:38.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:38.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:38 compute-2 nova_compute[226829]: 2026-01-31 07:40:38.952 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:40.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:40 compute-2 nova_compute[226829]: 2026-01-31 07:40:40.175 226833 DEBUG nova.network.neutron [req-f93931e8-400d-4576-a773-8309548d2baf req-a5b1a0c0-b2c5-404c-92f2-3a7c655b9d61 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Updated VIF entry in instance network info cache for port cfae2a9b-ad02-4548-9536-75c968915dac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:40:40 compute-2 nova_compute[226829]: 2026-01-31 07:40:40.176 226833 DEBUG nova.network.neutron [req-f93931e8-400d-4576-a773-8309548d2baf req-a5b1a0c0-b2c5-404c-92f2-3a7c655b9d61 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Updating instance_info_cache with network_info: [{"id": "cfae2a9b-ad02-4548-9536-75c968915dac", "address": "fa:16:3e:25:b1:e4", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfae2a9b-ad", "ovs_interfaceid": "cfae2a9b-ad02-4548-9536-75c968915dac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:40:40 compute-2 nova_compute[226829]: 2026-01-31 07:40:40.216 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:40 compute-2 nova_compute[226829]: 2026-01-31 07:40:40.302 226833 DEBUG oslo_concurrency.lockutils [req-f93931e8-400d-4576-a773-8309548d2baf req-a5b1a0c0-b2c5-404c-92f2-3a7c655b9d61 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-87f440bf-fb27-4f54-91e4-20ac4a817803" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:40:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:40.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:40.868 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:40:40 compute-2 nova_compute[226829]: 2026-01-31 07:40:40.869 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:40.871 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:40:41 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma missed beacon ack from the monitors
Jan 31 07:40:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:42.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:42.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).paxos(paxos updating c 2260..2883) lease_timeout -- calling new election
Jan 31 07:40:43 compute-2 ceph-mon[77282]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Jan 31 07:40:43 compute-2 ceph-mon[77282]: paxos.1).electionLogic(14) init, last seen epoch 14
Jan 31 07:40:43 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_commit, latency = 10.169338226s
Jan 31 07:40:43 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_sync, latency = 10.169338226s
Jan 31 07:40:43 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.169607162s, txc = 0x562dbb768f00
Jan 31 07:40:43 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:40:43 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.975427628s, txc = 0x562dbac44c00
Jan 31 07:40:43 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.062172890s, txc = 0x562dbb9ce000
Jan 31 07:40:43 compute-2 nova_compute[226829]: 2026-01-31 07:40:43.953 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:44.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:44 compute-2 podman[244122]: 2026-01-31 07:40:44.149771443 +0000 UTC m=+0.041000958 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:40:44 compute-2 nova_compute[226829]: 2026-01-31 07:40:44.159 226833 DEBUG oslo_concurrency.processutils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 49c2b2d1-3230-4f75-bc49-86230accc637_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 9.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:40:44 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:44 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:44.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:44 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:44 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:45 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:45 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma missed beacon ack from the monitors
Jan 31 07:40:45 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:45 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:45 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:45 compute-2 nova_compute[226829]: 2026-01-31 07:40:45.620 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:45 compute-2 nova_compute[226829]: 2026-01-31 07:40:45.629 226833 DEBUG oslo_concurrency.lockutils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "e7d1a2b2e5df064bf93a8974968c95faa19a3d64" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 12.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:45 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:45 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:45 compute-2 sudo[244177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:40:45 compute-2 sudo[244177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:45 compute-2 sudo[244177]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:45 compute-2 sudo[244202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:40:45 compute-2 sudo[244202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:45 compute-2 sudo[244202]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:46.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:46 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:46 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:46 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:46 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:46 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:46 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:46 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.303 226833 DEBUG nova.storage.rbd_utils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] resizing rbd image 49c2b2d1-3230-4f75-bc49-86230accc637_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:40:46 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:46 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:46 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:46 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:46.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.416 226833 DEBUG nova.objects.instance [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lazy-loading 'migration_context' on Instance uuid 87f440bf-fb27-4f54-91e4-20ac4a817803 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:40:46 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:46 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.483 226833 DEBUG nova.virt.libvirt.driver [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.484 226833 DEBUG nova.virt.libvirt.driver [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Ensure instance console log exists: /var/lib/nova/instances/87f440bf-fb27-4f54-91e4-20ac4a817803/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.485 226833 DEBUG oslo_concurrency.lockutils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.485 226833 DEBUG oslo_concurrency.lockutils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.486 226833 DEBUG oslo_concurrency.lockutils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.490 226833 DEBUG nova.virt.libvirt.driver [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Start _get_guest_xml network_info=[{"id": "cfae2a9b-ad02-4548-9536-75c968915dac", "address": "fa:16:3e:25:b1:e4", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfae2a9b-ad", "ovs_interfaceid": "cfae2a9b-ad02-4548-9536-75c968915dac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-31T07:40:20Z,direct_url=<?>,disk_format='raw',id=1c1b7167-294e-43fd-b811-31bea4078f3d,min_disk=1,min_ram=0,name='tempest-test-snap-1998051882',owner='b3e3e6f216d24c1f9f68777cfb63dbf8',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-31T07:40:25Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '1c1b7167-294e-43fd-b811-31bea4078f3d'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.497 226833 DEBUG nova.objects.instance [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lazy-loading 'migration_context' on Instance uuid 49c2b2d1-3230-4f75-bc49-86230accc637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.500 226833 WARNING nova.virt.libvirt.driver [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.509 226833 DEBUG nova.virt.libvirt.host [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.509 226833 DEBUG nova.virt.libvirt.host [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.513 226833 DEBUG nova.virt.libvirt.host [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.513 226833 DEBUG nova.virt.libvirt.host [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.514 226833 DEBUG nova.virt.libvirt.driver [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.514 226833 DEBUG nova.virt.hardware [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-31T07:40:20Z,direct_url=<?>,disk_format='raw',id=1c1b7167-294e-43fd-b811-31bea4078f3d,min_disk=1,min_ram=0,name='tempest-test-snap-1998051882',owner='b3e3e6f216d24c1f9f68777cfb63dbf8',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-31T07:40:25Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.515 226833 DEBUG nova.virt.hardware [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.515 226833 DEBUG nova.virt.hardware [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.515 226833 DEBUG nova.virt.hardware [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.516 226833 DEBUG nova.virt.hardware [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.516 226833 DEBUG nova.virt.hardware [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.516 226833 DEBUG nova.virt.hardware [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.516 226833 DEBUG nova.virt.hardware [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.517 226833 DEBUG nova.virt.hardware [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.517 226833 DEBUG nova.virt.hardware [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.517 226833 DEBUG nova.virt.hardware [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.520 226833 DEBUG oslo_concurrency.processutils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.546 226833 DEBUG nova.virt.libvirt.driver [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.547 226833 DEBUG nova.virt.libvirt.driver [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Ensure instance console log exists: /var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.548 226833 DEBUG oslo_concurrency.lockutils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.548 226833 DEBUG oslo_concurrency.lockutils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.548 226833 DEBUG oslo_concurrency.lockutils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.550 226833 DEBUG nova.virt.libvirt.driver [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.555 226833 WARNING nova.virt.libvirt.driver [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.561 226833 DEBUG nova.virt.libvirt.host [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.562 226833 DEBUG nova.virt.libvirt.host [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.565 226833 DEBUG nova.virt.libvirt.host [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.565 226833 DEBUG nova.virt.libvirt.host [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.567 226833 DEBUG nova.virt.libvirt.driver [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.567 226833 DEBUG nova.virt.hardware [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.568 226833 DEBUG nova.virt.hardware [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.568 226833 DEBUG nova.virt.hardware [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.568 226833 DEBUG nova.virt.hardware [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.568 226833 DEBUG nova.virt.hardware [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.569 226833 DEBUG nova.virt.hardware [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.569 226833 DEBUG nova.virt.hardware [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.569 226833 DEBUG nova.virt.hardware [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.570 226833 DEBUG nova.virt.hardware [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.570 226833 DEBUG nova.virt.hardware [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.570 226833 DEBUG nova.virt.hardware [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.573 226833 DEBUG oslo_concurrency.processutils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:40:46 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:46 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:46 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:46 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:46 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:46 compute-2 nova_compute[226829]: 2026-01-31 07:40:46.998 226833 DEBUG oslo_concurrency.processutils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:40:47 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:47 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.044 226833 DEBUG nova.storage.rbd_utils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image 87f440bf-fb27-4f54-91e4-20ac4a817803_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.048 226833 DEBUG oslo_concurrency.processutils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.062 226833 DEBUG oslo_concurrency.processutils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:40:47 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:47 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.101 226833 DEBUG nova.storage.rbd_utils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] rbd image 49c2b2d1-3230-4f75-bc49-86230accc637_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.106 226833 DEBUG oslo_concurrency.processutils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:40:47 compute-2 ovn_controller[133834]: 2026-01-31T07:40:47Z|00097|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 31 07:40:47 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:47 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:47 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:47 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:47 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma MDS connection to Monitors appears to be laggy; 17.9645s since last acked beacon
Jan 31 07:40:47 compute-2 ceph-mds[84366]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.454 226833 DEBUG oslo_concurrency.processutils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.456 226833 DEBUG nova.virt.libvirt.vif [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:40:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-57654757',display_name='tempest-ImagesTestJSON-server-57654757',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-57654757',id=44,image_ref='1c1b7167-294e-43fd-b811-31bea4078f3d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b3e3e6f216d24c1f9f68777cfb63dbf8',ramdisk_id='',reservation_id='r-0inkhv66',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='314f0738-9cae-4fe8-8b90-3d18f72488ef',image_min_disk='1',image_min_ram='0',image_owner_id='b3e3e6f216d24c1f9f68777cfb63dbf8',image_owner_project_name='tempest-ImagesTestJSON-533495031',image_owner_user_name='tempest-ImagesTestJSON-533495031-project-member',image_user_id='533eaca1e9c4430dabe2b0a39039ca65',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-533495031',owner_user_name='tempest-ImagesTestJSON-533495031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:40:32Z,user_data=None,user_id='533eaca1e9c4430dabe2b0a39039ca65',uuid=87f440bf-fb27-4f54-91e4-20ac4a817803,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cfae2a9b-ad02-4548-9536-75c968915dac", "address": "fa:16:3e:25:b1:e4", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfae2a9b-ad", "ovs_interfaceid": "cfae2a9b-ad02-4548-9536-75c968915dac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.456 226833 DEBUG nova.network.os_vif_util [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converting VIF {"id": "cfae2a9b-ad02-4548-9536-75c968915dac", "address": "fa:16:3e:25:b1:e4", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfae2a9b-ad", "ovs_interfaceid": "cfae2a9b-ad02-4548-9536-75c968915dac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.458 226833 DEBUG nova.network.os_vif_util [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:b1:e4,bridge_name='br-int',has_traffic_filtering=True,id=cfae2a9b-ad02-4548-9536-75c968915dac,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfae2a9b-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.459 226833 DEBUG nova.objects.instance [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 87f440bf-fb27-4f54-91e4-20ac4a817803 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.506 226833 DEBUG nova.virt.libvirt.driver [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:40:47 compute-2 nova_compute[226829]:   <uuid>87f440bf-fb27-4f54-91e4-20ac4a817803</uuid>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   <name>instance-0000002c</name>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <nova:name>tempest-ImagesTestJSON-server-57654757</nova:name>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:40:46</nova:creationTime>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <nova:user uuid="533eaca1e9c4430dabe2b0a39039ca65">tempest-ImagesTestJSON-533495031-project-member</nova:user>
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <nova:project uuid="b3e3e6f216d24c1f9f68777cfb63dbf8">tempest-ImagesTestJSON-533495031</nova:project>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="1c1b7167-294e-43fd-b811-31bea4078f3d"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <nova:port uuid="cfae2a9b-ad02-4548-9536-75c968915dac">
Jan 31 07:40:47 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <system>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <entry name="serial">87f440bf-fb27-4f54-91e4-20ac4a817803</entry>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <entry name="uuid">87f440bf-fb27-4f54-91e4-20ac4a817803</entry>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     </system>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   <os>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   </os>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   <features>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   </features>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/87f440bf-fb27-4f54-91e4-20ac4a817803_disk">
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       </source>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/87f440bf-fb27-4f54-91e4-20ac4a817803_disk.config">
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       </source>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:25:b1:e4"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <target dev="tapcfae2a9b-ad"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/87f440bf-fb27-4f54-91e4-20ac4a817803/console.log" append="off"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <video>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     </video>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <input type="keyboard" bus="usb"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:40:47 compute-2 nova_compute[226829]: </domain>
Jan 31 07:40:47 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.508 226833 DEBUG nova.compute.manager [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Preparing to wait for external event network-vif-plugged-cfae2a9b-ad02-4548-9536-75c968915dac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.509 226833 DEBUG oslo_concurrency.lockutils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "87f440bf-fb27-4f54-91e4-20ac4a817803-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.510 226833 DEBUG oslo_concurrency.lockutils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "87f440bf-fb27-4f54-91e4-20ac4a817803-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.510 226833 DEBUG oslo_concurrency.lockutils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "87f440bf-fb27-4f54-91e4-20ac4a817803-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.512 226833 DEBUG nova.virt.libvirt.vif [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:40:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-57654757',display_name='tempest-ImagesTestJSON-server-57654757',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-57654757',id=44,image_ref='1c1b7167-294e-43fd-b811-31bea4078f3d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b3e3e6f216d24c1f9f68777cfb63dbf8',ramdisk_id='',reservation_id='r-0inkhv66',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_m
odel='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='314f0738-9cae-4fe8-8b90-3d18f72488ef',image_min_disk='1',image_min_ram='0',image_owner_id='b3e3e6f216d24c1f9f68777cfb63dbf8',image_owner_project_name='tempest-ImagesTestJSON-533495031',image_owner_user_name='tempest-ImagesTestJSON-533495031-project-member',image_user_id='533eaca1e9c4430dabe2b0a39039ca65',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-533495031',owner_user_name='tempest-ImagesTestJSON-533495031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:40:32Z,user_data=None,user_id='533eaca1e9c4430dabe2b0a39039ca65',uuid=87f440bf-fb27-4f54-91e4-20ac4a817803,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cfae2a9b-ad02-4548-9536-75c968915dac", "address": "fa:16:3e:25:b1:e4", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfae2a9b-ad", "ovs_interfaceid": "cfae2a9b-ad02-4548-9536-75c968915dac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.513 226833 DEBUG nova.network.os_vif_util [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converting VIF {"id": "cfae2a9b-ad02-4548-9536-75c968915dac", "address": "fa:16:3e:25:b1:e4", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfae2a9b-ad", "ovs_interfaceid": "cfae2a9b-ad02-4548-9536-75c968915dac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.514 226833 DEBUG nova.network.os_vif_util [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:b1:e4,bridge_name='br-int',has_traffic_filtering=True,id=cfae2a9b-ad02-4548-9536-75c968915dac,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfae2a9b-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.515 226833 DEBUG os_vif [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:b1:e4,bridge_name='br-int',has_traffic_filtering=True,id=cfae2a9b-ad02-4548-9536-75c968915dac,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfae2a9b-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.516 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.517 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.518 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.522 226833 DEBUG oslo_concurrency.processutils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.524 226833 DEBUG nova.objects.instance [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lazy-loading 'pci_devices' on Instance uuid 49c2b2d1-3230-4f75-bc49-86230accc637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.525 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.526 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfae2a9b-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.527 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcfae2a9b-ad, col_values=(('external_ids', {'iface-id': 'cfae2a9b-ad02-4548-9536-75c968915dac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:25:b1:e4', 'vm-uuid': '87f440bf-fb27-4f54-91e4-20ac4a817803'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.528 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.530 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:40:47 compute-2 NetworkManager[48999]: <info>  [1769845247.5306] manager: (tapcfae2a9b-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.538 226833 DEBUG nova.virt.libvirt.driver [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:40:47 compute-2 nova_compute[226829]:   <uuid>49c2b2d1-3230-4f75-bc49-86230accc637</uuid>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   <name>instance-0000002d</name>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <nova:name>tempest-UnshelveToHostMultiNodesTest-server-1773899943</nova:name>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:40:46</nova:creationTime>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <nova:user uuid="cda91adb5caf4eeb81b5a934ccbb1a1e">tempest-UnshelveToHostMultiNodesTest-877324354-project-member</nova:user>
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <nova:project uuid="37a878bbb1224cfeabcbe629345fc85d">tempest-UnshelveToHostMultiNodesTest-877324354</nova:project>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <nova:ports/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <system>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <entry name="serial">49c2b2d1-3230-4f75-bc49-86230accc637</entry>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <entry name="uuid">49c2b2d1-3230-4f75-bc49-86230accc637</entry>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     </system>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   <os>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   </os>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   <features>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   </features>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/49c2b2d1-3230-4f75-bc49-86230accc637_disk">
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       </source>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/49c2b2d1-3230-4f75-bc49-86230accc637_disk.config">
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       </source>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:40:47 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637/console.log" append="off"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <video>
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     </video>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:40:47 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:40:47 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:40:47 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:40:47 compute-2 nova_compute[226829]: </domain>
Jan 31 07:40:47 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.539 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.540 226833 INFO os_vif [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:b1:e4,bridge_name='br-int',has_traffic_filtering=True,id=cfae2a9b-ad02-4548-9536-75c968915dac,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfae2a9b-ad')
Jan 31 07:40:47 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.605 226833 DEBUG nova.virt.libvirt.driver [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.606 226833 DEBUG nova.virt.libvirt.driver [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.606 226833 DEBUG nova.virt.libvirt.driver [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] No VIF found with MAC fa:16:3e:25:b1:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.607 226833 INFO nova.virt.libvirt.driver [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Using config drive
Jan 31 07:40:47 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:47 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.652 226833 DEBUG nova.storage.rbd_utils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image 87f440bf-fb27-4f54-91e4-20ac4a817803_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.662 226833 DEBUG nova.virt.libvirt.driver [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.662 226833 DEBUG nova.virt.libvirt.driver [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.663 226833 INFO nova.virt.libvirt.driver [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Using config drive
Jan 31 07:40:47 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:47 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:47 compute-2 nova_compute[226829]: 2026-01-31 07:40:47.705 226833 DEBUG nova.storage.rbd_utils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] rbd image 49c2b2d1-3230-4f75-bc49-86230accc637_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:40:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:40:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:48.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:40:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:48.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:48 compute-2 nova_compute[226829]: 2026-01-31 07:40:48.523 226833 INFO nova.virt.libvirt.driver [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Creating config drive at /var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637/disk.config
Jan 31 07:40:48 compute-2 nova_compute[226829]: 2026-01-31 07:40:48.527 226833 DEBUG oslo_concurrency.processutils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpaym4dpwo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:40:48 compute-2 nova_compute[226829]: 2026-01-31 07:40:48.658 226833 DEBUG oslo_concurrency.processutils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpaym4dpwo" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:40:48 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:48 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:48 compute-2 nova_compute[226829]: 2026-01-31 07:40:48.696 226833 DEBUG nova.storage.rbd_utils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] rbd image 49c2b2d1-3230-4f75-bc49-86230accc637_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:40:48 compute-2 nova_compute[226829]: 2026-01-31 07:40:48.701 226833 DEBUG oslo_concurrency.processutils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637/disk.config 49c2b2d1-3230-4f75-bc49-86230accc637_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:40:48 compute-2 ceph-mon[77282]: paxos.1).electionLogic(15) init, last seen epoch 15, mid-election, bumping
Jan 31 07:40:48 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:40:48 compute-2 ceph-mon[77282]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Jan 31 07:40:48 compute-2 ceph-mon[77282]: paxos.1).electionLogic(19) init, last seen epoch 19, mid-election, bumping
Jan 31 07:40:48 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:48 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:40:48 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:40:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e187 e187: 3 total, 3 up, 3 in
Jan 31 07:40:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2581716136' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:40:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:40:48 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma  MDS is no longer laggy
Jan 31 07:40:48 compute-2 nova_compute[226829]: 2026-01-31 07:40:48.955 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:49 compute-2 nova_compute[226829]: 2026-01-31 07:40:49.238 226833 DEBUG oslo_concurrency.processutils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637/disk.config 49c2b2d1-3230-4f75-bc49-86230accc637_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:40:49 compute-2 nova_compute[226829]: 2026-01-31 07:40:49.239 226833 INFO nova.virt.libvirt.driver [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Deleting local config drive /var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637/disk.config because it was imported into RBD.
Jan 31 07:40:49 compute-2 systemd-machined[195142]: New machine qemu-17-instance-0000002d.
Jan 31 07:40:49 compute-2 systemd[1]: Started Virtual Machine qemu-17-instance-0000002d.
Jan 31 07:40:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2010659746' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/24656606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:49 compute-2 ceph-mon[77282]: mon.compute-2 calling monitor election
Jan 31 07:40:49 compute-2 ceph-mon[77282]: pgmap v1244: 305 pgs: 305 active+clean; 259 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 77 KiB/s rd, 2.2 MiB/s wr, 61 op/s
Jan 31 07:40:49 compute-2 ceph-mon[77282]: mon.compute-0 calling monitor election
Jan 31 07:40:49 compute-2 ceph-mon[77282]: mon.compute-2 calling monitor election
Jan 31 07:40:49 compute-2 ceph-mon[77282]: mon.compute-1 calling monitor election
Jan 31 07:40:49 compute-2 ceph-mon[77282]: mon.compute-0 calling monitor election
Jan 31 07:40:49 compute-2 ceph-mon[77282]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 07:40:49 compute-2 ceph-mon[77282]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 07:40:49 compute-2 ceph-mon[77282]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 07:40:49 compute-2 ceph-mon[77282]: osdmap e187: 3 total, 3 up, 3 in
Jan 31 07:40:49 compute-2 ceph-mon[77282]: mgrmap e11: compute-0.hhuoua(active, since 35m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 07:40:49 compute-2 ceph-mon[77282]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-1)
Jan 31 07:40:49 compute-2 ceph-mon[77282]: Cluster is now healthy
Jan 31 07:40:49 compute-2 ceph-mon[77282]: overall HEALTH_OK
Jan 31 07:40:49 compute-2 nova_compute[226829]: 2026-01-31 07:40:49.579 226833 INFO nova.virt.libvirt.driver [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Creating config drive at /var/lib/nova/instances/87f440bf-fb27-4f54-91e4-20ac4a817803/disk.config
Jan 31 07:40:49 compute-2 nova_compute[226829]: 2026-01-31 07:40:49.583 226833 DEBUG oslo_concurrency.processutils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/87f440bf-fb27-4f54-91e4-20ac4a817803/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpaqmseue1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:40:49 compute-2 nova_compute[226829]: 2026-01-31 07:40:49.711 226833 DEBUG oslo_concurrency.processutils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/87f440bf-fb27-4f54-91e4-20ac4a817803/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpaqmseue1" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:40:49 compute-2 nova_compute[226829]: 2026-01-31 07:40:49.745 226833 DEBUG nova.storage.rbd_utils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image 87f440bf-fb27-4f54-91e4-20ac4a817803_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:40:49 compute-2 nova_compute[226829]: 2026-01-31 07:40:49.749 226833 DEBUG oslo_concurrency.processutils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/87f440bf-fb27-4f54-91e4-20ac4a817803/disk.config 87f440bf-fb27-4f54-91e4-20ac4a817803_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:40:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:49.874 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:40:49 compute-2 nova_compute[226829]: 2026-01-31 07:40:49.950 226833 DEBUG oslo_concurrency.processutils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/87f440bf-fb27-4f54-91e4-20ac4a817803/disk.config 87f440bf-fb27-4f54-91e4-20ac4a817803_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:40:49 compute-2 nova_compute[226829]: 2026-01-31 07:40:49.952 226833 INFO nova.virt.libvirt.driver [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Deleting local config drive /var/lib/nova/instances/87f440bf-fb27-4f54-91e4-20ac4a817803/disk.config because it was imported into RBD.
Jan 31 07:40:49 compute-2 kernel: tapcfae2a9b-ad: entered promiscuous mode
Jan 31 07:40:49 compute-2 systemd-udevd[244618]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:40:49 compute-2 NetworkManager[48999]: <info>  [1769845249.9970] manager: (tapcfae2a9b-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Jan 31 07:40:49 compute-2 ovn_controller[133834]: 2026-01-31T07:40:49Z|00098|binding|INFO|Claiming lport cfae2a9b-ad02-4548-9536-75c968915dac for this chassis.
Jan 31 07:40:49 compute-2 ovn_controller[133834]: 2026-01-31T07:40:49Z|00099|binding|INFO|cfae2a9b-ad02-4548-9536-75c968915dac: Claiming fa:16:3e:25:b1:e4 10.100.0.4
Jan 31 07:40:49 compute-2 nova_compute[226829]: 2026-01-31 07:40:49.996 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.000 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.005 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845250.0046391, 49c2b2d1-3230-4f75-bc49-86230accc637 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.005 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] VM Resumed (Lifecycle Event)
Jan 31 07:40:50 compute-2 NetworkManager[48999]: <info>  [1769845250.0089] device (tapcfae2a9b-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:40:50 compute-2 NetworkManager[48999]: <info>  [1769845250.0097] device (tapcfae2a9b-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.012 226833 DEBUG nova.compute.manager [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.012 226833 DEBUG nova.virt.libvirt.driver [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.017 226833 INFO nova.virt.libvirt.driver [-] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Instance spawned successfully.
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.017 226833 DEBUG nova.virt.libvirt.driver [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:40:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:50.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:50 compute-2 systemd-machined[195142]: New machine qemu-18-instance-0000002c.
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:50.030 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:b1:e4 10.100.0.4'], port_security=['fa:16:3e:25:b1:e4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '87f440bf-fb27-4f54-91e4-20ac4a817803', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cffffabd-62a6-4362-9315-bd726adce623', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3e3e6f216d24c1f9f68777cfb63dbf8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd60d680e-d6aa-48ac-a8a2-519ea9a8ff01', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9a503d6-c9cb-4329-87a2-a939359a3572, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=cfae2a9b-ad02-4548-9536-75c968915dac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:50.032 143841 INFO neutron.agent.ovn.metadata.agent [-] Port cfae2a9b-ad02-4548-9536-75c968915dac in datapath cffffabd-62a6-4362-9315-bd726adce623 bound to our chassis
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:50.034 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cffffabd-62a6-4362-9315-bd726adce623
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.035 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.041 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:50 compute-2 ovn_controller[133834]: 2026-01-31T07:40:50Z|00100|binding|INFO|Setting lport cfae2a9b-ad02-4548-9536-75c968915dac ovn-installed in OVS
Jan 31 07:40:50 compute-2 ovn_controller[133834]: 2026-01-31T07:40:50Z|00101|binding|INFO|Setting lport cfae2a9b-ad02-4548-9536-75c968915dac up in Southbound
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.045 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:50 compute-2 systemd[1]: Started Virtual Machine qemu-18-instance-0000002c.
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:50.047 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[000c3a64-5db4-44bc-a304-b2d5f395dee6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:50.049 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcffffabd-61 in ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:50.051 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcffffabd-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:50.052 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d6a60a2a-4f34-4244-aded-9144eda25ca0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:50.054 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[df404e23-e335-4318-9be1-b633c4ab880d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.054 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.058 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:50.068 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[cd6dfc43-22eb-47e6-9a58-5798dcc0b55b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.080 226833 DEBUG nova.virt.libvirt.driver [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.081 226833 DEBUG nova.virt.libvirt.driver [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.081 226833 DEBUG nova.virt.libvirt.driver [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.082 226833 DEBUG nova.virt.libvirt.driver [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.082 226833 DEBUG nova.virt.libvirt.driver [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.083 226833 DEBUG nova.virt.libvirt.driver [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:50.086 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[caec867c-0c02-486a-bf8d-bf7bfef91354]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.096 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.099 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845250.0057871, 49c2b2d1-3230-4f75-bc49-86230accc637 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.099 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] VM Started (Lifecycle Event)
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:50.110 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[f7869d1b-c3e2-43f8-8a6e-eb96de4ff65f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:50 compute-2 NetworkManager[48999]: <info>  [1769845250.1201] manager: (tapcffffabd-60): new Veth device (/org/freedesktop/NetworkManager/Devices/61)
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:50.121 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4ae279f4-4f8a-4df8-b96a-f61b26b94e19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.141 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:50.146 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[9d699f92-0c25-45dd-8716-2d20d7b0c1c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.146 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:50.149 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[6fba0079-64b7-4db6-8570-c4ac16722e68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:50 compute-2 NetworkManager[48999]: <info>  [1769845250.1659] device (tapcffffabd-60): carrier: link connected
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:50.167 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[6f58a6fa-a876-4db0-b33b-d67df89c5c80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.179 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:50.178 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3e928ebf-5c76-49ac-b0fd-1ca4b3c177ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcffffabd-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:96:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550263, 'reachable_time': 34543, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244666, 'error': None, 'target': 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:50.190 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9781a801-8e8e-4578-9e60-df18d1b66610]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:96c1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 550263, 'tstamp': 550263}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244667, 'error': None, 'target': 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.197 226833 INFO nova.compute.manager [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Took 15.73 seconds to spawn the instance on the hypervisor.
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.197 226833 DEBUG nova.compute.manager [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:50.199 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d28c0fd6-def4-4410-97d4-2b14f492b1af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcffffabd-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:96:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550263, 'reachable_time': 34543, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 244668, 'error': None, 'target': 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:50.220 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e3355a0c-17c6-47e1-88d7-99c52dac5840]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:50.265 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[58e3ac43-caa8-4479-be79-ad8a8ad50018]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:50.267 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcffffabd-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:50.267 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:50.267 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcffffabd-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.269 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:50 compute-2 NetworkManager[48999]: <info>  [1769845250.2699] manager: (tapcffffabd-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Jan 31 07:40:50 compute-2 kernel: tapcffffabd-60: entered promiscuous mode
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.272 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:50.273 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcffffabd-60, col_values=(('external_ids', {'iface-id': '549e70cf-ed02-45f9-9021-3a04088f580f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.274 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.276 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:50 compute-2 ovn_controller[133834]: 2026-01-31T07:40:50Z|00102|binding|INFO|Releasing lport 549e70cf-ed02-45f9-9021-3a04088f580f from this chassis (sb_readonly=0)
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:50.277 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cffffabd-62a6-4362-9315-bd726adce623.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cffffabd-62a6-4362-9315-bd726adce623.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:50.278 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c7173e56-6bfb-4f59-8402-7b55c7d30142]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:50.279 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: global
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-cffffabd-62a6-4362-9315-bd726adce623
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/cffffabd-62a6-4362-9315-bd726adce623.pid.haproxy
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID cffffabd-62a6-4362-9315-bd726adce623
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:40:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:50.280 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'env', 'PROCESS_TAG=haproxy-cffffabd-62a6-4362-9315-bd726adce623', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cffffabd-62a6-4362-9315-bd726adce623.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.285 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.288 226833 INFO nova.compute.manager [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Took 17.09 seconds to build instance.
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.332 226833 DEBUG oslo_concurrency.lockutils [None req-55267499-f36e-4690-889e-c02445a5df82 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lock "49c2b2d1-3230-4f75-bc49-86230accc637" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.251s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:50.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:50 compute-2 sudo[244715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:40:50 compute-2 sudo[244715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:50 compute-2 sudo[244715]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:50 compute-2 sudo[244744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:40:50 compute-2 sudo[244744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:50 compute-2 sudo[244744]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.619 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845250.619085, 87f440bf-fb27-4f54-91e4-20ac4a817803 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.621 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] VM Started (Lifecycle Event)
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.656 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.661 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845250.619583, 87f440bf-fb27-4f54-91e4-20ac4a817803 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.661 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] VM Paused (Lifecycle Event)
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.691 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.695 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:40:50 compute-2 podman[244791]: 2026-01-31 07:40:50.71364905 +0000 UTC m=+0.069855982 container create 8dd294defe41c58462136cab220fe05fa6c4bfe57636bbfd8c0d87c0c0f60aef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:40:50 compute-2 systemd[1]: Started libpod-conmon-8dd294defe41c58462136cab220fe05fa6c4bfe57636bbfd8c0d87c0c0f60aef.scope.
Jan 31 07:40:50 compute-2 nova_compute[226829]: 2026-01-31 07:40:50.760 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:40:50 compute-2 podman[244791]: 2026-01-31 07:40:50.667420992 +0000 UTC m=+0.023627954 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:40:50 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:40:50 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26b3d225c96836031efbc62555eea5055512ccdbe66d98b3d647d784b987f1b0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:40:50 compute-2 podman[244791]: 2026-01-31 07:40:50.794826854 +0000 UTC m=+0.151033806 container init 8dd294defe41c58462136cab220fe05fa6c4bfe57636bbfd8c0d87c0c0f60aef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 07:40:50 compute-2 podman[244791]: 2026-01-31 07:40:50.799696434 +0000 UTC m=+0.155903366 container start 8dd294defe41c58462136cab220fe05fa6c4bfe57636bbfd8c0d87c0c0f60aef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:40:50 compute-2 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[244806]: [NOTICE]   (244810) : New worker (244812) forked
Jan 31 07:40:50 compute-2 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[244806]: [NOTICE]   (244810) : Loading success.
Jan 31 07:40:51 compute-2 ceph-mon[77282]: pgmap v1245: 305 pgs: 305 active+clean; 260 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 109 KiB/s rd, 2.3 MiB/s wr, 90 op/s
Jan 31 07:40:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:52.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:40:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:52.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:40:52 compute-2 nova_compute[226829]: 2026-01-31 07:40:52.531 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:53 compute-2 nova_compute[226829]: 2026-01-31 07:40:53.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:40:53 compute-2 nova_compute[226829]: 2026-01-31 07:40:53.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:40:53 compute-2 nova_compute[226829]: 2026-01-31 07:40:53.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 07:40:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:40:53 compute-2 nova_compute[226829]: 2026-01-31 07:40:53.957 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:53 compute-2 ceph-mon[77282]: pgmap v1246: 305 pgs: 305 active+clean; 260 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 136 op/s
Jan 31 07:40:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:40:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:54.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:40:54 compute-2 nova_compute[226829]: 2026-01-31 07:40:54.050 226833 DEBUG oslo_concurrency.lockutils [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Acquiring lock "49c2b2d1-3230-4f75-bc49-86230accc637" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:54 compute-2 nova_compute[226829]: 2026-01-31 07:40:54.051 226833 DEBUG oslo_concurrency.lockutils [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lock "49c2b2d1-3230-4f75-bc49-86230accc637" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:54 compute-2 nova_compute[226829]: 2026-01-31 07:40:54.052 226833 INFO nova.compute.manager [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Shelving
Jan 31 07:40:54 compute-2 nova_compute[226829]: 2026-01-31 07:40:54.076 226833 DEBUG nova.virt.libvirt.driver [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 07:40:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:54.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:54 compute-2 nova_compute[226829]: 2026-01-31 07:40:54.974 226833 DEBUG nova.compute.manager [req-b786713b-2649-4129-bec8-d4514018f27d req-c4dfbbab-3038-44d8-a1df-a5ba1011d2e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Received event network-vif-plugged-cfae2a9b-ad02-4548-9536-75c968915dac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:40:54 compute-2 nova_compute[226829]: 2026-01-31 07:40:54.975 226833 DEBUG oslo_concurrency.lockutils [req-b786713b-2649-4129-bec8-d4514018f27d req-c4dfbbab-3038-44d8-a1df-a5ba1011d2e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "87f440bf-fb27-4f54-91e4-20ac4a817803-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:54 compute-2 nova_compute[226829]: 2026-01-31 07:40:54.976 226833 DEBUG oslo_concurrency.lockutils [req-b786713b-2649-4129-bec8-d4514018f27d req-c4dfbbab-3038-44d8-a1df-a5ba1011d2e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "87f440bf-fb27-4f54-91e4-20ac4a817803-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:54 compute-2 nova_compute[226829]: 2026-01-31 07:40:54.976 226833 DEBUG oslo_concurrency.lockutils [req-b786713b-2649-4129-bec8-d4514018f27d req-c4dfbbab-3038-44d8-a1df-a5ba1011d2e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "87f440bf-fb27-4f54-91e4-20ac4a817803-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:54 compute-2 nova_compute[226829]: 2026-01-31 07:40:54.976 226833 DEBUG nova.compute.manager [req-b786713b-2649-4129-bec8-d4514018f27d req-c4dfbbab-3038-44d8-a1df-a5ba1011d2e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Processing event network-vif-plugged-cfae2a9b-ad02-4548-9536-75c968915dac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:40:54 compute-2 nova_compute[226829]: 2026-01-31 07:40:54.976 226833 DEBUG nova.compute.manager [req-b786713b-2649-4129-bec8-d4514018f27d req-c4dfbbab-3038-44d8-a1df-a5ba1011d2e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Received event network-vif-plugged-cfae2a9b-ad02-4548-9536-75c968915dac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:40:54 compute-2 nova_compute[226829]: 2026-01-31 07:40:54.977 226833 DEBUG oslo_concurrency.lockutils [req-b786713b-2649-4129-bec8-d4514018f27d req-c4dfbbab-3038-44d8-a1df-a5ba1011d2e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "87f440bf-fb27-4f54-91e4-20ac4a817803-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:54 compute-2 nova_compute[226829]: 2026-01-31 07:40:54.977 226833 DEBUG oslo_concurrency.lockutils [req-b786713b-2649-4129-bec8-d4514018f27d req-c4dfbbab-3038-44d8-a1df-a5ba1011d2e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "87f440bf-fb27-4f54-91e4-20ac4a817803-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:54 compute-2 nova_compute[226829]: 2026-01-31 07:40:54.978 226833 DEBUG oslo_concurrency.lockutils [req-b786713b-2649-4129-bec8-d4514018f27d req-c4dfbbab-3038-44d8-a1df-a5ba1011d2e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "87f440bf-fb27-4f54-91e4-20ac4a817803-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:54 compute-2 nova_compute[226829]: 2026-01-31 07:40:54.978 226833 DEBUG nova.compute.manager [req-b786713b-2649-4129-bec8-d4514018f27d req-c4dfbbab-3038-44d8-a1df-a5ba1011d2e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] No waiting events found dispatching network-vif-plugged-cfae2a9b-ad02-4548-9536-75c968915dac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:40:54 compute-2 nova_compute[226829]: 2026-01-31 07:40:54.978 226833 WARNING nova.compute.manager [req-b786713b-2649-4129-bec8-d4514018f27d req-c4dfbbab-3038-44d8-a1df-a5ba1011d2e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Received unexpected event network-vif-plugged-cfae2a9b-ad02-4548-9536-75c968915dac for instance with vm_state building and task_state spawning.
Jan 31 07:40:54 compute-2 nova_compute[226829]: 2026-01-31 07:40:54.979 226833 DEBUG nova.compute.manager [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:40:54 compute-2 nova_compute[226829]: 2026-01-31 07:40:54.983 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845254.9831758, 87f440bf-fb27-4f54-91e4-20ac4a817803 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:40:54 compute-2 nova_compute[226829]: 2026-01-31 07:40:54.984 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] VM Resumed (Lifecycle Event)
Jan 31 07:40:54 compute-2 nova_compute[226829]: 2026-01-31 07:40:54.985 226833 DEBUG nova.virt.libvirt.driver [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:40:54 compute-2 nova_compute[226829]: 2026-01-31 07:40:54.988 226833 INFO nova.virt.libvirt.driver [-] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Instance spawned successfully.
Jan 31 07:40:54 compute-2 nova_compute[226829]: 2026-01-31 07:40:54.988 226833 INFO nova.compute.manager [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Took 22.01 seconds to spawn the instance on the hypervisor.
Jan 31 07:40:54 compute-2 nova_compute[226829]: 2026-01-31 07:40:54.988 226833 DEBUG nova.compute.manager [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:40:55 compute-2 nova_compute[226829]: 2026-01-31 07:40:55.008 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:40:55 compute-2 nova_compute[226829]: 2026-01-31 07:40:55.032 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:40:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1081832384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:55 compute-2 nova_compute[226829]: 2026-01-31 07:40:55.068 226833 INFO nova.compute.manager [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Took 23.12 seconds to build instance.
Jan 31 07:40:55 compute-2 nova_compute[226829]: 2026-01-31 07:40:55.088 226833 DEBUG oslo_concurrency.lockutils [None req-8cae354d-72e2-4ced-8fd5-e0da911fb5a5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "87f440bf-fb27-4f54-91e4-20ac4a817803" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 23.229s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:55 compute-2 nova_compute[226829]: 2026-01-31 07:40:55.552 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:40:55 compute-2 nova_compute[226829]: 2026-01-31 07:40:55.553 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:40:55 compute-2 nova_compute[226829]: 2026-01-31 07:40:55.553 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 07:40:55 compute-2 nova_compute[226829]: 2026-01-31 07:40:55.606 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 07:40:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 31 07:40:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:56.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 31 07:40:56 compute-2 ceph-mon[77282]: pgmap v1247: 305 pgs: 305 active+clean; 260 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 136 op/s
Jan 31 07:40:56 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3409420697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:56.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:56 compute-2 nova_compute[226829]: 2026-01-31 07:40:56.590 226833 DEBUG oslo_concurrency.lockutils [None req-9fae8c70-0b0f-4163-a325-4232bb8a04cb 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "87f440bf-fb27-4f54-91e4-20ac4a817803" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:56 compute-2 nova_compute[226829]: 2026-01-31 07:40:56.591 226833 DEBUG oslo_concurrency.lockutils [None req-9fae8c70-0b0f-4163-a325-4232bb8a04cb 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "87f440bf-fb27-4f54-91e4-20ac4a817803" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:56 compute-2 nova_compute[226829]: 2026-01-31 07:40:56.591 226833 DEBUG oslo_concurrency.lockutils [None req-9fae8c70-0b0f-4163-a325-4232bb8a04cb 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "87f440bf-fb27-4f54-91e4-20ac4a817803-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:56 compute-2 nova_compute[226829]: 2026-01-31 07:40:56.592 226833 DEBUG oslo_concurrency.lockutils [None req-9fae8c70-0b0f-4163-a325-4232bb8a04cb 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "87f440bf-fb27-4f54-91e4-20ac4a817803-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:56 compute-2 nova_compute[226829]: 2026-01-31 07:40:56.592 226833 DEBUG oslo_concurrency.lockutils [None req-9fae8c70-0b0f-4163-a325-4232bb8a04cb 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "87f440bf-fb27-4f54-91e4-20ac4a817803-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:56 compute-2 nova_compute[226829]: 2026-01-31 07:40:56.593 226833 INFO nova.compute.manager [None req-9fae8c70-0b0f-4163-a325-4232bb8a04cb 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Terminating instance
Jan 31 07:40:56 compute-2 nova_compute[226829]: 2026-01-31 07:40:56.594 226833 DEBUG nova.compute.manager [None req-9fae8c70-0b0f-4163-a325-4232bb8a04cb 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:40:56 compute-2 kernel: tapcfae2a9b-ad (unregistering): left promiscuous mode
Jan 31 07:40:56 compute-2 NetworkManager[48999]: <info>  [1769845256.6417] device (tapcfae2a9b-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:40:56 compute-2 ovn_controller[133834]: 2026-01-31T07:40:56Z|00103|binding|INFO|Releasing lport cfae2a9b-ad02-4548-9536-75c968915dac from this chassis (sb_readonly=0)
Jan 31 07:40:56 compute-2 nova_compute[226829]: 2026-01-31 07:40:56.643 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:56 compute-2 ovn_controller[133834]: 2026-01-31T07:40:56Z|00104|binding|INFO|Setting lport cfae2a9b-ad02-4548-9536-75c968915dac down in Southbound
Jan 31 07:40:56 compute-2 ovn_controller[133834]: 2026-01-31T07:40:56Z|00105|binding|INFO|Removing iface tapcfae2a9b-ad ovn-installed in OVS
Jan 31 07:40:56 compute-2 nova_compute[226829]: 2026-01-31 07:40:56.655 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:56.656 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:b1:e4 10.100.0.4'], port_security=['fa:16:3e:25:b1:e4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '87f440bf-fb27-4f54-91e4-20ac4a817803', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cffffabd-62a6-4362-9315-bd726adce623', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3e3e6f216d24c1f9f68777cfb63dbf8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd60d680e-d6aa-48ac-a8a2-519ea9a8ff01', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9a503d6-c9cb-4329-87a2-a939359a3572, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=cfae2a9b-ad02-4548-9536-75c968915dac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:40:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:56.658 143841 INFO neutron.agent.ovn.metadata.agent [-] Port cfae2a9b-ad02-4548-9536-75c968915dac in datapath cffffabd-62a6-4362-9315-bd726adce623 unbound from our chassis
Jan 31 07:40:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:56.660 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cffffabd-62a6-4362-9315-bd726adce623, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:40:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:56.661 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[46a540a9-816f-48cc-9f81-e9e178a9ce8e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:56.662 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 namespace which is not needed anymore
Jan 31 07:40:56 compute-2 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d0000002c.scope: Deactivated successfully.
Jan 31 07:40:56 compute-2 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d0000002c.scope: Consumed 2.371s CPU time.
Jan 31 07:40:56 compute-2 systemd-machined[195142]: Machine qemu-18-instance-0000002c terminated.
Jan 31 07:40:56 compute-2 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[244806]: [NOTICE]   (244810) : haproxy version is 2.8.14-c23fe91
Jan 31 07:40:56 compute-2 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[244806]: [NOTICE]   (244810) : path to executable is /usr/sbin/haproxy
Jan 31 07:40:56 compute-2 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[244806]: [WARNING]  (244810) : Exiting Master process...
Jan 31 07:40:56 compute-2 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[244806]: [ALERT]    (244810) : Current worker (244812) exited with code 143 (Terminated)
Jan 31 07:40:56 compute-2 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[244806]: [WARNING]  (244810) : All workers exited. Exiting... (0)
Jan 31 07:40:56 compute-2 systemd[1]: libpod-8dd294defe41c58462136cab220fe05fa6c4bfe57636bbfd8c0d87c0c0f60aef.scope: Deactivated successfully.
Jan 31 07:40:56 compute-2 podman[244848]: 2026-01-31 07:40:56.782489101 +0000 UTC m=+0.045977083 container died 8dd294defe41c58462136cab220fe05fa6c4bfe57636bbfd8c0d87c0c0f60aef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 07:40:56 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8dd294defe41c58462136cab220fe05fa6c4bfe57636bbfd8c0d87c0c0f60aef-userdata-shm.mount: Deactivated successfully.
Jan 31 07:40:56 compute-2 systemd[1]: var-lib-containers-storage-overlay-26b3d225c96836031efbc62555eea5055512ccdbe66d98b3d647d784b987f1b0-merged.mount: Deactivated successfully.
Jan 31 07:40:56 compute-2 podman[244848]: 2026-01-31 07:40:56.831444802 +0000 UTC m=+0.094932794 container cleanup 8dd294defe41c58462136cab220fe05fa6c4bfe57636bbfd8c0d87c0c0f60aef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 07:40:56 compute-2 nova_compute[226829]: 2026-01-31 07:40:56.833 226833 INFO nova.virt.libvirt.driver [-] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Instance destroyed successfully.
Jan 31 07:40:56 compute-2 nova_compute[226829]: 2026-01-31 07:40:56.834 226833 DEBUG nova.objects.instance [None req-9fae8c70-0b0f-4163-a325-4232bb8a04cb 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lazy-loading 'resources' on Instance uuid 87f440bf-fb27-4f54-91e4-20ac4a817803 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:40:56 compute-2 systemd[1]: libpod-conmon-8dd294defe41c58462136cab220fe05fa6c4bfe57636bbfd8c0d87c0c0f60aef.scope: Deactivated successfully.
Jan 31 07:40:56 compute-2 nova_compute[226829]: 2026-01-31 07:40:56.897 226833 DEBUG nova.virt.libvirt.vif [None req-9fae8c70-0b0f-4163-a325-4232bb8a04cb 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:40:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-57654757',display_name='tempest-ImagesTestJSON-server-57654757',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-57654757',id=44,image_ref='1c1b7167-294e-43fd-b811-31bea4078f3d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:40:54Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b3e3e6f216d24c1f9f68777cfb63dbf8',ramdisk_id='',reservation_id='r-0inkhv66',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='314f0738-9cae-4fe8-8b90-3d18f72488ef',image_min_disk='1',image_min_ram='0',image_owner_id='b3e3e6f216d24c1f9f68777cfb63dbf8',image_owner_project_name='tempest-ImagesTestJSON-533495031',image_owner_user_name='tempest-ImagesTestJSON-533495031-project-member',image_user_id='533eaca1e9c4430dabe2b0a39039ca65',owner_project_name='tempest-ImagesTestJSON-533495031',owner_user_name='tempest-ImagesTestJSON-533495031-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:40:55Z,user_data=None,user_id='533eaca1e9c4430dabe2b0a39039ca65',uuid=87f440bf-fb27-4f54-91e4-20ac4a817803,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cfae2a9b-ad02-4548-9536-75c968915dac", "address": "fa:16:3e:25:b1:e4", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfae2a9b-ad", "ovs_interfaceid": "cfae2a9b-ad02-4548-9536-75c968915dac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:40:56 compute-2 nova_compute[226829]: 2026-01-31 07:40:56.899 226833 DEBUG nova.network.os_vif_util [None req-9fae8c70-0b0f-4163-a325-4232bb8a04cb 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converting VIF {"id": "cfae2a9b-ad02-4548-9536-75c968915dac", "address": "fa:16:3e:25:b1:e4", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfae2a9b-ad", "ovs_interfaceid": "cfae2a9b-ad02-4548-9536-75c968915dac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:40:56 compute-2 nova_compute[226829]: 2026-01-31 07:40:56.900 226833 DEBUG nova.network.os_vif_util [None req-9fae8c70-0b0f-4163-a325-4232bb8a04cb 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:b1:e4,bridge_name='br-int',has_traffic_filtering=True,id=cfae2a9b-ad02-4548-9536-75c968915dac,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfae2a9b-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:40:56 compute-2 nova_compute[226829]: 2026-01-31 07:40:56.901 226833 DEBUG os_vif [None req-9fae8c70-0b0f-4163-a325-4232bb8a04cb 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:b1:e4,bridge_name='br-int',has_traffic_filtering=True,id=cfae2a9b-ad02-4548-9536-75c968915dac,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfae2a9b-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:40:56 compute-2 podman[244888]: 2026-01-31 07:40:56.908241449 +0000 UTC m=+0.053649499 container remove 8dd294defe41c58462136cab220fe05fa6c4bfe57636bbfd8c0d87c0c0f60aef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 07:40:56 compute-2 nova_compute[226829]: 2026-01-31 07:40:56.912 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:56 compute-2 nova_compute[226829]: 2026-01-31 07:40:56.913 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfae2a9b-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:40:56 compute-2 nova_compute[226829]: 2026-01-31 07:40:56.915 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:56 compute-2 nova_compute[226829]: 2026-01-31 07:40:56.918 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:40:56 compute-2 nova_compute[226829]: 2026-01-31 07:40:56.920 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:56.921 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e1fef292-ea72-4681-a51e-7eb31a6f1a75]: (4, ('Sat Jan 31 07:40:56 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 (8dd294defe41c58462136cab220fe05fa6c4bfe57636bbfd8c0d87c0c0f60aef)\n8dd294defe41c58462136cab220fe05fa6c4bfe57636bbfd8c0d87c0c0f60aef\nSat Jan 31 07:40:56 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 (8dd294defe41c58462136cab220fe05fa6c4bfe57636bbfd8c0d87c0c0f60aef)\n8dd294defe41c58462136cab220fe05fa6c4bfe57636bbfd8c0d87c0c0f60aef\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:56 compute-2 nova_compute[226829]: 2026-01-31 07:40:56.924 226833 INFO os_vif [None req-9fae8c70-0b0f-4163-a325-4232bb8a04cb 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:b1:e4,bridge_name='br-int',has_traffic_filtering=True,id=cfae2a9b-ad02-4548-9536-75c968915dac,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfae2a9b-ad')
Jan 31 07:40:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:56.923 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4d9a17f9-dbd9-4281-8dea-ef336a3a6fdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:56.926 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcffffabd-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:40:56 compute-2 kernel: tapcffffabd-60: left promiscuous mode
Jan 31 07:40:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:56.943 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[02dca50e-fa3f-497d-916a-5021a07acc7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:56 compute-2 nova_compute[226829]: 2026-01-31 07:40:56.944 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:56.957 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c158f9fb-5327-433d-9eb0-ec00e76c4425]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:56.958 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2605d87b-f789-4829-a7a1-f4b3ae498abb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:56.973 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7501c34f-4b81-4f1f-86a5-e676de167c73]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550258, 'reachable_time': 24159, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244918, 'error': None, 'target': 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:56 compute-2 systemd[1]: run-netns-ovnmeta\x2dcffffabd\x2d62a6\x2d4362\x2d9315\x2dbd726adce623.mount: Deactivated successfully.
Jan 31 07:40:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:56.976 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:40:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:40:56.976 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[9059c5fe-899c-486a-b599-ad1769cf395f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:57 compute-2 nova_compute[226829]: 2026-01-31 07:40:57.343 226833 DEBUG nova.compute.manager [req-788a10b3-0c69-40a2-bf64-0eec4cc1d40d req-c934ce05-9739-4ef3-a9e9-ec452434f262 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Received event network-vif-unplugged-cfae2a9b-ad02-4548-9536-75c968915dac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:40:57 compute-2 nova_compute[226829]: 2026-01-31 07:40:57.344 226833 DEBUG oslo_concurrency.lockutils [req-788a10b3-0c69-40a2-bf64-0eec4cc1d40d req-c934ce05-9739-4ef3-a9e9-ec452434f262 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "87f440bf-fb27-4f54-91e4-20ac4a817803-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:57 compute-2 nova_compute[226829]: 2026-01-31 07:40:57.344 226833 DEBUG oslo_concurrency.lockutils [req-788a10b3-0c69-40a2-bf64-0eec4cc1d40d req-c934ce05-9739-4ef3-a9e9-ec452434f262 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "87f440bf-fb27-4f54-91e4-20ac4a817803-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:57 compute-2 nova_compute[226829]: 2026-01-31 07:40:57.345 226833 DEBUG oslo_concurrency.lockutils [req-788a10b3-0c69-40a2-bf64-0eec4cc1d40d req-c934ce05-9739-4ef3-a9e9-ec452434f262 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "87f440bf-fb27-4f54-91e4-20ac4a817803-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:57 compute-2 nova_compute[226829]: 2026-01-31 07:40:57.345 226833 DEBUG nova.compute.manager [req-788a10b3-0c69-40a2-bf64-0eec4cc1d40d req-c934ce05-9739-4ef3-a9e9-ec452434f262 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] No waiting events found dispatching network-vif-unplugged-cfae2a9b-ad02-4548-9536-75c968915dac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:40:57 compute-2 nova_compute[226829]: 2026-01-31 07:40:57.345 226833 DEBUG nova.compute.manager [req-788a10b3-0c69-40a2-bf64-0eec4cc1d40d req-c934ce05-9739-4ef3-a9e9-ec452434f262 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Received event network-vif-unplugged-cfae2a9b-ad02-4548-9536-75c968915dac for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:40:57 compute-2 nova_compute[226829]: 2026-01-31 07:40:57.468 226833 INFO nova.virt.libvirt.driver [None req-9fae8c70-0b0f-4163-a325-4232bb8a04cb 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Deleting instance files /var/lib/nova/instances/87f440bf-fb27-4f54-91e4-20ac4a817803_del
Jan 31 07:40:57 compute-2 nova_compute[226829]: 2026-01-31 07:40:57.469 226833 INFO nova.virt.libvirt.driver [None req-9fae8c70-0b0f-4163-a325-4232bb8a04cb 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Deletion of /var/lib/nova/instances/87f440bf-fb27-4f54-91e4-20ac4a817803_del complete
Jan 31 07:40:57 compute-2 nova_compute[226829]: 2026-01-31 07:40:57.542 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:40:57 compute-2 nova_compute[226829]: 2026-01-31 07:40:57.797 226833 INFO nova.compute.manager [None req-9fae8c70-0b0f-4163-a325-4232bb8a04cb 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Took 1.20 seconds to destroy the instance on the hypervisor.
Jan 31 07:40:57 compute-2 nova_compute[226829]: 2026-01-31 07:40:57.797 226833 DEBUG oslo.service.loopingcall [None req-9fae8c70-0b0f-4163-a325-4232bb8a04cb 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:40:57 compute-2 nova_compute[226829]: 2026-01-31 07:40:57.797 226833 DEBUG nova.compute.manager [-] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:40:57 compute-2 nova_compute[226829]: 2026-01-31 07:40:57.798 226833 DEBUG nova.network.neutron [-] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:40:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:58.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:58 compute-2 ceph-mon[77282]: pgmap v1248: 305 pgs: 305 active+clean; 260 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 1.4 MiB/s wr, 225 op/s
Jan 31 07:40:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:40:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:58.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:58 compute-2 nova_compute[226829]: 2026-01-31 07:40:58.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:40:58 compute-2 nova_compute[226829]: 2026-01-31 07:40:58.525 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:58 compute-2 nova_compute[226829]: 2026-01-31 07:40:58.526 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:58 compute-2 nova_compute[226829]: 2026-01-31 07:40:58.526 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:58 compute-2 nova_compute[226829]: 2026-01-31 07:40:58.526 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:40:58 compute-2 nova_compute[226829]: 2026-01-31 07:40:58.527 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:40:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:40:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:40:58 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1164369848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:58 compute-2 nova_compute[226829]: 2026-01-31 07:40:58.952 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:40:58 compute-2 nova_compute[226829]: 2026-01-31 07:40:58.959 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:59 compute-2 nova_compute[226829]: 2026-01-31 07:40:59.067 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:40:59 compute-2 nova_compute[226829]: 2026-01-31 07:40:59.068 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:40:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1164369848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:59 compute-2 nova_compute[226829]: 2026-01-31 07:40:59.189 226833 DEBUG nova.network.neutron [-] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:40:59 compute-2 nova_compute[226829]: 2026-01-31 07:40:59.209 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:40:59 compute-2 nova_compute[226829]: 2026-01-31 07:40:59.210 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4532MB free_disk=20.90084457397461GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:40:59 compute-2 nova_compute[226829]: 2026-01-31 07:40:59.210 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:59 compute-2 nova_compute[226829]: 2026-01-31 07:40:59.211 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:59 compute-2 nova_compute[226829]: 2026-01-31 07:40:59.237 226833 INFO nova.compute.manager [-] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Took 1.44 seconds to deallocate network for instance.
Jan 31 07:40:59 compute-2 nova_compute[226829]: 2026-01-31 07:40:59.267 226833 DEBUG nova.compute.manager [req-ce43ac06-2f77-4f51-a592-b85888ca48bc req-5ec18173-e084-42f7-8dc6-d6eff335751f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Received event network-vif-deleted-cfae2a9b-ad02-4548-9536-75c968915dac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:40:59 compute-2 nova_compute[226829]: 2026-01-31 07:40:59.299 226833 DEBUG oslo_concurrency.lockutils [None req-9fae8c70-0b0f-4163-a325-4232bb8a04cb 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:59 compute-2 nova_compute[226829]: 2026-01-31 07:40:59.336 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 87f440bf-fb27-4f54-91e4-20ac4a817803 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:40:59 compute-2 nova_compute[226829]: 2026-01-31 07:40:59.337 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 49c2b2d1-3230-4f75-bc49-86230accc637 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:40:59 compute-2 nova_compute[226829]: 2026-01-31 07:40:59.337 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:40:59 compute-2 nova_compute[226829]: 2026-01-31 07:40:59.338 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:40:59 compute-2 nova_compute[226829]: 2026-01-31 07:40:59.501 226833 DEBUG nova.compute.manager [req-ae11eec7-f86a-407a-b319-1dcf9e28224e req-45075c98-1a59-4885-8559-9b9e48f88add 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Received event network-vif-plugged-cfae2a9b-ad02-4548-9536-75c968915dac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:40:59 compute-2 nova_compute[226829]: 2026-01-31 07:40:59.501 226833 DEBUG oslo_concurrency.lockutils [req-ae11eec7-f86a-407a-b319-1dcf9e28224e req-45075c98-1a59-4885-8559-9b9e48f88add 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "87f440bf-fb27-4f54-91e4-20ac4a817803-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:59 compute-2 nova_compute[226829]: 2026-01-31 07:40:59.501 226833 DEBUG oslo_concurrency.lockutils [req-ae11eec7-f86a-407a-b319-1dcf9e28224e req-45075c98-1a59-4885-8559-9b9e48f88add 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "87f440bf-fb27-4f54-91e4-20ac4a817803-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:59 compute-2 nova_compute[226829]: 2026-01-31 07:40:59.502 226833 DEBUG oslo_concurrency.lockutils [req-ae11eec7-f86a-407a-b319-1dcf9e28224e req-45075c98-1a59-4885-8559-9b9e48f88add 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "87f440bf-fb27-4f54-91e4-20ac4a817803-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:59 compute-2 nova_compute[226829]: 2026-01-31 07:40:59.502 226833 DEBUG nova.compute.manager [req-ae11eec7-f86a-407a-b319-1dcf9e28224e req-45075c98-1a59-4885-8559-9b9e48f88add 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] No waiting events found dispatching network-vif-plugged-cfae2a9b-ad02-4548-9536-75c968915dac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:40:59 compute-2 nova_compute[226829]: 2026-01-31 07:40:59.502 226833 WARNING nova.compute.manager [req-ae11eec7-f86a-407a-b319-1dcf9e28224e req-45075c98-1a59-4885-8559-9b9e48f88add 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Received unexpected event network-vif-plugged-cfae2a9b-ad02-4548-9536-75c968915dac for instance with vm_state deleted and task_state None.
Jan 31 07:40:59 compute-2 nova_compute[226829]: 2026-01-31 07:40:59.569 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:40:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:40:59 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3007936790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:59 compute-2 nova_compute[226829]: 2026-01-31 07:40:59.975 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:40:59 compute-2 nova_compute[226829]: 2026-01-31 07:40:59.982 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:41:00 compute-2 nova_compute[226829]: 2026-01-31 07:41:00.005 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:41:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:00.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:00 compute-2 nova_compute[226829]: 2026-01-31 07:41:00.085 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:41:00 compute-2 nova_compute[226829]: 2026-01-31 07:41:00.085 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:00 compute-2 nova_compute[226829]: 2026-01-31 07:41:00.086 226833 DEBUG oslo_concurrency.lockutils [None req-9fae8c70-0b0f-4163-a325-4232bb8a04cb 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:00 compute-2 ceph-mon[77282]: pgmap v1249: 305 pgs: 305 active+clean; 260 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 648 KiB/s wr, 216 op/s
Jan 31 07:41:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3550577645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3007936790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:00 compute-2 nova_compute[226829]: 2026-01-31 07:41:00.170 226833 DEBUG oslo_concurrency.processutils [None req-9fae8c70-0b0f-4163-a325-4232bb8a04cb 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:41:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:00.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:41:00 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1330953625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:00 compute-2 nova_compute[226829]: 2026-01-31 07:41:00.680 226833 DEBUG oslo_concurrency.processutils [None req-9fae8c70-0b0f-4163-a325-4232bb8a04cb 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:41:00 compute-2 nova_compute[226829]: 2026-01-31 07:41:00.687 226833 DEBUG nova.compute.provider_tree [None req-9fae8c70-0b0f-4163-a325-4232bb8a04cb 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:41:00 compute-2 nova_compute[226829]: 2026-01-31 07:41:00.708 226833 DEBUG nova.scheduler.client.report [None req-9fae8c70-0b0f-4163-a325-4232bb8a04cb 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:41:00 compute-2 nova_compute[226829]: 2026-01-31 07:41:00.736 226833 DEBUG oslo_concurrency.lockutils [None req-9fae8c70-0b0f-4163-a325-4232bb8a04cb 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:00 compute-2 nova_compute[226829]: 2026-01-31 07:41:00.760 226833 INFO nova.scheduler.client.report [None req-9fae8c70-0b0f-4163-a325-4232bb8a04cb 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Deleted allocations for instance 87f440bf-fb27-4f54-91e4-20ac4a817803
Jan 31 07:41:00 compute-2 nova_compute[226829]: 2026-01-31 07:41:00.860 226833 DEBUG oslo_concurrency.lockutils [None req-9fae8c70-0b0f-4163-a325-4232bb8a04cb 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "87f440bf-fb27-4f54-91e4-20ac4a817803" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.269s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:01 compute-2 nova_compute[226829]: 2026-01-31 07:41:01.081 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:41:01 compute-2 nova_compute[226829]: 2026-01-31 07:41:01.081 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:41:01 compute-2 nova_compute[226829]: 2026-01-31 07:41:01.082 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:41:01 compute-2 nova_compute[226829]: 2026-01-31 07:41:01.082 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:41:01 compute-2 nova_compute[226829]: 2026-01-31 07:41:01.098 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-49c2b2d1-3230-4f75-bc49-86230accc637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:41:01 compute-2 nova_compute[226829]: 2026-01-31 07:41:01.098 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-49c2b2d1-3230-4f75-bc49-86230accc637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:41:01 compute-2 nova_compute[226829]: 2026-01-31 07:41:01.098 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 07:41:01 compute-2 nova_compute[226829]: 2026-01-31 07:41:01.099 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 49c2b2d1-3230-4f75-bc49-86230accc637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:41:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2575247536' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1330953625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:01 compute-2 nova_compute[226829]: 2026-01-31 07:41:01.526 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:41:01 compute-2 nova_compute[226829]: 2026-01-31 07:41:01.836 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:41:01 compute-2 nova_compute[226829]: 2026-01-31 07:41:01.871 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-49c2b2d1-3230-4f75-bc49-86230accc637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:41:01 compute-2 nova_compute[226829]: 2026-01-31 07:41:01.871 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 07:41:01 compute-2 nova_compute[226829]: 2026-01-31 07:41:01.872 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:41:01 compute-2 nova_compute[226829]: 2026-01-31 07:41:01.872 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:41:01 compute-2 nova_compute[226829]: 2026-01-31 07:41:01.872 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:41:01 compute-2 nova_compute[226829]: 2026-01-31 07:41:01.917 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:41:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:02.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:41:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e188 e188: 3 total, 3 up, 3 in
Jan 31 07:41:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:02.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:02 compute-2 ceph-mon[77282]: pgmap v1250: 305 pgs: 305 active+clean; 260 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 50 KiB/s wr, 247 op/s
Jan 31 07:41:03 compute-2 ceph-mon[77282]: osdmap e188: 3 total, 3 up, 3 in
Jan 31 07:41:03 compute-2 ceph-mon[77282]: pgmap v1252: 305 pgs: 305 active+clean; 279 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 1.7 MiB/s wr, 255 op/s
Jan 31 07:41:03 compute-2 nova_compute[226829]: 2026-01-31 07:41:03.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:41:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:41:03 compute-2 nova_compute[226829]: 2026-01-31 07:41:03.960 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:41:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:04.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:41:04 compute-2 nova_compute[226829]: 2026-01-31 07:41:04.122 226833 DEBUG nova.virt.libvirt.driver [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 31 07:41:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:41:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:04.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:41:05 compute-2 ceph-mon[77282]: pgmap v1253: 305 pgs: 305 active+clean; 279 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 1.7 MiB/s wr, 255 op/s
Jan 31 07:41:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:06.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:06 compute-2 podman[244995]: 2026-01-31 07:41:06.197703111 +0000 UTC m=+0.076960763 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:41:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:41:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:06.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:41:06 compute-2 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Jan 31 07:41:06 compute-2 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000002d.scope: Consumed 13.745s CPU time.
Jan 31 07:41:06 compute-2 systemd-machined[195142]: Machine qemu-17-instance-0000002d terminated.
Jan 31 07:41:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:06.848 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:06.850 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:06.851 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:06 compute-2 nova_compute[226829]: 2026-01-31 07:41:06.938 226833 DEBUG oslo_concurrency.lockutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Acquiring lock "c030025f-5967-4922-a748-2f999d0645b1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:06 compute-2 nova_compute[226829]: 2026-01-31 07:41:06.939 226833 DEBUG oslo_concurrency.lockutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "c030025f-5967-4922-a748-2f999d0645b1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:06 compute-2 nova_compute[226829]: 2026-01-31 07:41:06.957 226833 DEBUG nova.compute.manager [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:41:06 compute-2 nova_compute[226829]: 2026-01-31 07:41:06.968 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:07 compute-2 nova_compute[226829]: 2026-01-31 07:41:07.040 226833 DEBUG oslo_concurrency.lockutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:07 compute-2 nova_compute[226829]: 2026-01-31 07:41:07.040 226833 DEBUG oslo_concurrency.lockutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:07 compute-2 nova_compute[226829]: 2026-01-31 07:41:07.047 226833 DEBUG nova.virt.hardware [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:41:07 compute-2 nova_compute[226829]: 2026-01-31 07:41:07.048 226833 INFO nova.compute.claims [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:41:07 compute-2 nova_compute[226829]: 2026-01-31 07:41:07.136 226833 INFO nova.virt.libvirt.driver [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Instance shutdown successfully after 13 seconds.
Jan 31 07:41:07 compute-2 nova_compute[226829]: 2026-01-31 07:41:07.141 226833 INFO nova.virt.libvirt.driver [-] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Instance destroyed successfully.
Jan 31 07:41:07 compute-2 nova_compute[226829]: 2026-01-31 07:41:07.141 226833 DEBUG nova.objects.instance [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lazy-loading 'numa_topology' on Instance uuid 49c2b2d1-3230-4f75-bc49-86230accc637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:41:07 compute-2 nova_compute[226829]: 2026-01-31 07:41:07.213 226833 DEBUG oslo_concurrency.processutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:41:07 compute-2 nova_compute[226829]: 2026-01-31 07:41:07.401 226833 INFO nova.virt.libvirt.driver [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Beginning cold snapshot process
Jan 31 07:41:07 compute-2 nova_compute[226829]: 2026-01-31 07:41:07.579 226833 DEBUG nova.virt.libvirt.imagebackend [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 31 07:41:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:41:07 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3200938819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:07 compute-2 nova_compute[226829]: 2026-01-31 07:41:07.661 226833 DEBUG oslo_concurrency.processutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:41:07 compute-2 nova_compute[226829]: 2026-01-31 07:41:07.669 226833 DEBUG nova.compute.provider_tree [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:41:07 compute-2 nova_compute[226829]: 2026-01-31 07:41:07.688 226833 DEBUG nova.scheduler.client.report [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:41:07 compute-2 nova_compute[226829]: 2026-01-31 07:41:07.718 226833 DEBUG oslo_concurrency.lockutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:07 compute-2 nova_compute[226829]: 2026-01-31 07:41:07.719 226833 DEBUG nova.compute.manager [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:41:07 compute-2 nova_compute[226829]: 2026-01-31 07:41:07.774 226833 DEBUG nova.compute.manager [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:41:07 compute-2 nova_compute[226829]: 2026-01-31 07:41:07.774 226833 DEBUG nova.network.neutron [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:41:07 compute-2 nova_compute[226829]: 2026-01-31 07:41:07.805 226833 INFO nova.virt.libvirt.driver [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:41:07 compute-2 nova_compute[226829]: 2026-01-31 07:41:07.836 226833 DEBUG nova.compute.manager [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:41:07 compute-2 nova_compute[226829]: 2026-01-31 07:41:07.844 226833 DEBUG nova.storage.rbd_utils [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] creating snapshot(d9fda835f4aa4947ad48a7be0b247c42) on rbd image(49c2b2d1-3230-4f75-bc49-86230accc637_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 07:41:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:41:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:08.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:41:08 compute-2 nova_compute[226829]: 2026-01-31 07:41:08.084 226833 DEBUG nova.policy [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b873da8845e6461088fcff99c5c140b1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '016f45da455049d7aad578f0a534a0f2', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:41:08 compute-2 ceph-mon[77282]: pgmap v1254: 305 pgs: 305 active+clean; 296 MiB data, 570 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.0 MiB/s wr, 246 op/s
Jan 31 07:41:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3200938819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e189 e189: 3 total, 3 up, 3 in
Jan 31 07:41:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:41:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:08.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:41:08 compute-2 nova_compute[226829]: 2026-01-31 07:41:08.427 226833 DEBUG nova.storage.rbd_utils [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] cloning vms/49c2b2d1-3230-4f75-bc49-86230accc637_disk@d9fda835f4aa4947ad48a7be0b247c42 to images/41472c33-f8e9-4285-8ebd-4297b1fe1775 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 07:41:08 compute-2 nova_compute[226829]: 2026-01-31 07:41:08.620 226833 DEBUG nova.storage.rbd_utils [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] flattening images/41472c33-f8e9-4285-8ebd-4297b1fe1775 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 31 07:41:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:41:08 compute-2 nova_compute[226829]: 2026-01-31 07:41:08.971 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:09 compute-2 ceph-mon[77282]: osdmap e189: 3 total, 3 up, 3 in
Jan 31 07:41:09 compute-2 nova_compute[226829]: 2026-01-31 07:41:09.421 226833 DEBUG nova.compute.manager [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:41:09 compute-2 nova_compute[226829]: 2026-01-31 07:41:09.424 226833 DEBUG nova.virt.libvirt.driver [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:41:09 compute-2 nova_compute[226829]: 2026-01-31 07:41:09.425 226833 INFO nova.virt.libvirt.driver [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Creating image(s)
Jan 31 07:41:09 compute-2 nova_compute[226829]: 2026-01-31 07:41:09.467 226833 DEBUG nova.storage.rbd_utils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] rbd image c030025f-5967-4922-a748-2f999d0645b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:41:09 compute-2 nova_compute[226829]: 2026-01-31 07:41:09.504 226833 DEBUG nova.storage.rbd_utils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] rbd image c030025f-5967-4922-a748-2f999d0645b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:41:09 compute-2 nova_compute[226829]: 2026-01-31 07:41:09.551 226833 DEBUG nova.storage.rbd_utils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] rbd image c030025f-5967-4922-a748-2f999d0645b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:41:09 compute-2 nova_compute[226829]: 2026-01-31 07:41:09.558 226833 DEBUG oslo_concurrency.processutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:41:09 compute-2 nova_compute[226829]: 2026-01-31 07:41:09.581 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:41:09 compute-2 nova_compute[226829]: 2026-01-31 07:41:09.644 226833 DEBUG oslo_concurrency.processutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:41:09 compute-2 nova_compute[226829]: 2026-01-31 07:41:09.645 226833 DEBUG oslo_concurrency.lockutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:09 compute-2 nova_compute[226829]: 2026-01-31 07:41:09.646 226833 DEBUG oslo_concurrency.lockutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:09 compute-2 nova_compute[226829]: 2026-01-31 07:41:09.646 226833 DEBUG oslo_concurrency.lockutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:09 compute-2 nova_compute[226829]: 2026-01-31 07:41:09.676 226833 DEBUG nova.storage.rbd_utils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] rbd image c030025f-5967-4922-a748-2f999d0645b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:41:09 compute-2 nova_compute[226829]: 2026-01-31 07:41:09.680 226833 DEBUG oslo_concurrency.processutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 c030025f-5967-4922-a748-2f999d0645b1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:41:09 compute-2 nova_compute[226829]: 2026-01-31 07:41:09.726 226833 DEBUG nova.storage.rbd_utils [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] removing snapshot(d9fda835f4aa4947ad48a7be0b247c42) on rbd image(49c2b2d1-3230-4f75-bc49-86230accc637_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 07:41:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:41:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:10.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:41:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:10.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:10 compute-2 sudo[245267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:41:10 compute-2 sudo[245267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:10 compute-2 sudo[245267]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:10 compute-2 sudo[245292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:41:10 compute-2 sudo[245292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:10 compute-2 sudo[245292]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e190 e190: 3 total, 3 up, 3 in
Jan 31 07:41:10 compute-2 ceph-mon[77282]: pgmap v1256: 305 pgs: 305 active+clean; 255 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 938 KiB/s rd, 6.4 MiB/s wr, 224 op/s
Jan 31 07:41:11 compute-2 nova_compute[226829]: 2026-01-31 07:41:11.166 226833 DEBUG oslo_concurrency.processutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 c030025f-5967-4922-a748-2f999d0645b1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:41:11 compute-2 nova_compute[226829]: 2026-01-31 07:41:11.211 226833 DEBUG nova.storage.rbd_utils [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] creating snapshot(snap) on rbd image(41472c33-f8e9-4285-8ebd-4297b1fe1775) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 07:41:11 compute-2 nova_compute[226829]: 2026-01-31 07:41:11.317 226833 DEBUG nova.storage.rbd_utils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] resizing rbd image c030025f-5967-4922-a748-2f999d0645b1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:41:11 compute-2 nova_compute[226829]: 2026-01-31 07:41:11.519 226833 DEBUG nova.objects.instance [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lazy-loading 'migration_context' on Instance uuid c030025f-5967-4922-a748-2f999d0645b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:41:11 compute-2 nova_compute[226829]: 2026-01-31 07:41:11.833 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845256.831471, 87f440bf-fb27-4f54-91e4-20ac4a817803 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:41:11 compute-2 nova_compute[226829]: 2026-01-31 07:41:11.834 226833 INFO nova.compute.manager [-] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] VM Stopped (Lifecycle Event)
Jan 31 07:41:11 compute-2 nova_compute[226829]: 2026-01-31 07:41:11.926 226833 DEBUG nova.virt.libvirt.driver [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:41:11 compute-2 nova_compute[226829]: 2026-01-31 07:41:11.927 226833 DEBUG nova.virt.libvirt.driver [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Ensure instance console log exists: /var/lib/nova/instances/c030025f-5967-4922-a748-2f999d0645b1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:41:11 compute-2 nova_compute[226829]: 2026-01-31 07:41:11.928 226833 DEBUG oslo_concurrency.lockutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:11 compute-2 nova_compute[226829]: 2026-01-31 07:41:11.929 226833 DEBUG oslo_concurrency.lockutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:11 compute-2 nova_compute[226829]: 2026-01-31 07:41:11.929 226833 DEBUG oslo_concurrency.lockutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:11 compute-2 nova_compute[226829]: 2026-01-31 07:41:11.937 226833 DEBUG nova.compute.manager [None req-94730345-3a71-4de6-b123-a85550b0a73d - - - - - -] [instance: 87f440bf-fb27-4f54-91e4-20ac4a817803] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:41:11 compute-2 nova_compute[226829]: 2026-01-31 07:41:11.972 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:41:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:12.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:41:12 compute-2 ceph-mon[77282]: pgmap v1257: 305 pgs: 305 active+clean; 258 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 9.7 MiB/s wr, 326 op/s
Jan 31 07:41:12 compute-2 ceph-mon[77282]: osdmap e190: 3 total, 3 up, 3 in
Jan 31 07:41:12 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e191 e191: 3 total, 3 up, 3 in
Jan 31 07:41:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:12.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:12 compute-2 nova_compute[226829]: 2026-01-31 07:41:12.645 226833 DEBUG nova.network.neutron [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Successfully created port: 462010b9-29e6-472e-ba82-5e6c54eec345 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:41:13 compute-2 ceph-mon[77282]: osdmap e191: 3 total, 3 up, 3 in
Jan 31 07:41:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2069176226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2175813159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:41:13 compute-2 nova_compute[226829]: 2026-01-31 07:41:13.813 226833 DEBUG nova.network.neutron [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Successfully updated port: 462010b9-29e6-472e-ba82-5e6c54eec345 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:41:13 compute-2 nova_compute[226829]: 2026-01-31 07:41:13.834 226833 DEBUG oslo_concurrency.lockutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Acquiring lock "refresh_cache-c030025f-5967-4922-a748-2f999d0645b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:41:13 compute-2 nova_compute[226829]: 2026-01-31 07:41:13.834 226833 DEBUG oslo_concurrency.lockutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Acquired lock "refresh_cache-c030025f-5967-4922-a748-2f999d0645b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:41:13 compute-2 nova_compute[226829]: 2026-01-31 07:41:13.834 226833 DEBUG nova.network.neutron [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:41:13 compute-2 nova_compute[226829]: 2026-01-31 07:41:13.976 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:14 compute-2 nova_compute[226829]: 2026-01-31 07:41:14.007 226833 DEBUG nova.network.neutron [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:41:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:41:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:14.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:41:14 compute-2 nova_compute[226829]: 2026-01-31 07:41:14.077 226833 DEBUG nova.compute.manager [req-44a69fbd-479e-408c-b4b6-3dae4aa0622f req-82ece888-b325-4e27-90c5-f5b5a54c328f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Received event network-changed-462010b9-29e6-472e-ba82-5e6c54eec345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:41:14 compute-2 nova_compute[226829]: 2026-01-31 07:41:14.078 226833 DEBUG nova.compute.manager [req-44a69fbd-479e-408c-b4b6-3dae4aa0622f req-82ece888-b325-4e27-90c5-f5b5a54c328f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Refreshing instance network info cache due to event network-changed-462010b9-29e6-472e-ba82-5e6c54eec345. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:41:14 compute-2 nova_compute[226829]: 2026-01-31 07:41:14.078 226833 DEBUG oslo_concurrency.lockutils [req-44a69fbd-479e-408c-b4b6-3dae4aa0622f req-82ece888-b325-4e27-90c5-f5b5a54c328f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-c030025f-5967-4922-a748-2f999d0645b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:41:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:41:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:14.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:41:14 compute-2 ceph-mon[77282]: pgmap v1260: 305 pgs: 305 active+clean; 302 MiB data, 555 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 9.5 MiB/s wr, 257 op/s
Jan 31 07:41:14 compute-2 nova_compute[226829]: 2026-01-31 07:41:14.748 226833 INFO nova.virt.libvirt.driver [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Snapshot image upload complete
Jan 31 07:41:14 compute-2 nova_compute[226829]: 2026-01-31 07:41:14.749 226833 DEBUG nova.compute.manager [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:41:14 compute-2 nova_compute[226829]: 2026-01-31 07:41:14.853 226833 INFO nova.compute.manager [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Shelve offloading
Jan 31 07:41:14 compute-2 nova_compute[226829]: 2026-01-31 07:41:14.861 226833 INFO nova.virt.libvirt.driver [-] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Instance destroyed successfully.
Jan 31 07:41:14 compute-2 nova_compute[226829]: 2026-01-31 07:41:14.862 226833 DEBUG nova.compute.manager [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:41:14 compute-2 nova_compute[226829]: 2026-01-31 07:41:14.864 226833 DEBUG oslo_concurrency.lockutils [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Acquiring lock "refresh_cache-49c2b2d1-3230-4f75-bc49-86230accc637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:41:14 compute-2 nova_compute[226829]: 2026-01-31 07:41:14.864 226833 DEBUG oslo_concurrency.lockutils [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Acquired lock "refresh_cache-49c2b2d1-3230-4f75-bc49-86230accc637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:41:14 compute-2 nova_compute[226829]: 2026-01-31 07:41:14.864 226833 DEBUG nova.network.neutron [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:41:14 compute-2 nova_compute[226829]: 2026-01-31 07:41:14.934 226833 DEBUG nova.network.neutron [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Updating instance_info_cache with network_info: [{"id": "462010b9-29e6-472e-ba82-5e6c54eec345", "address": "fa:16:3e:89:2b:d4", "network": {"id": "bff39063-463a-42de-b52a-d9ff7905f368", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-2000138722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "016f45da455049d7aad578f0a534a0f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap462010b9-29", "ovs_interfaceid": "462010b9-29e6-472e-ba82-5e6c54eec345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.011 226833 DEBUG oslo_concurrency.lockutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Releasing lock "refresh_cache-c030025f-5967-4922-a748-2f999d0645b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.011 226833 DEBUG nova.compute.manager [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Instance network_info: |[{"id": "462010b9-29e6-472e-ba82-5e6c54eec345", "address": "fa:16:3e:89:2b:d4", "network": {"id": "bff39063-463a-42de-b52a-d9ff7905f368", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-2000138722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "016f45da455049d7aad578f0a534a0f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap462010b9-29", "ovs_interfaceid": "462010b9-29e6-472e-ba82-5e6c54eec345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.012 226833 DEBUG oslo_concurrency.lockutils [req-44a69fbd-479e-408c-b4b6-3dae4aa0622f req-82ece888-b325-4e27-90c5-f5b5a54c328f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-c030025f-5967-4922-a748-2f999d0645b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.012 226833 DEBUG nova.network.neutron [req-44a69fbd-479e-408c-b4b6-3dae4aa0622f req-82ece888-b325-4e27-90c5-f5b5a54c328f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Refreshing network info cache for port 462010b9-29e6-472e-ba82-5e6c54eec345 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.016 226833 DEBUG nova.virt.libvirt.driver [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Start _get_guest_xml network_info=[{"id": "462010b9-29e6-472e-ba82-5e6c54eec345", "address": "fa:16:3e:89:2b:d4", "network": {"id": "bff39063-463a-42de-b52a-d9ff7905f368", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-2000138722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "016f45da455049d7aad578f0a534a0f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap462010b9-29", "ovs_interfaceid": "462010b9-29e6-472e-ba82-5e6c54eec345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.021 226833 WARNING nova.virt.libvirt.driver [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.030 226833 DEBUG nova.virt.libvirt.host [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.030 226833 DEBUG nova.virt.libvirt.host [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.039 226833 DEBUG nova.virt.libvirt.host [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.040 226833 DEBUG nova.virt.libvirt.host [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.041 226833 DEBUG nova.virt.libvirt.driver [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.041 226833 DEBUG nova.virt.hardware [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.042 226833 DEBUG nova.virt.hardware [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.042 226833 DEBUG nova.virt.hardware [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.042 226833 DEBUG nova.virt.hardware [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.042 226833 DEBUG nova.virt.hardware [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.043 226833 DEBUG nova.virt.hardware [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.043 226833 DEBUG nova.virt.hardware [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.043 226833 DEBUG nova.virt.hardware [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.043 226833 DEBUG nova.virt.hardware [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.044 226833 DEBUG nova.virt.hardware [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.044 226833 DEBUG nova.virt.hardware [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.047 226833 DEBUG oslo_concurrency.processutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.135 226833 DEBUG nova.network.neutron [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:41:15 compute-2 podman[245410]: 2026-01-31 07:41:15.191181175 +0000 UTC m=+0.068696291 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:41:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:41:15 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3783863931' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.478 226833 DEBUG oslo_concurrency.processutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.505 226833 DEBUG nova.storage.rbd_utils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] rbd image c030025f-5967-4922-a748-2f999d0645b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.509 226833 DEBUG oslo_concurrency.processutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.668 226833 DEBUG nova.network.neutron [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.760 226833 DEBUG oslo_concurrency.lockutils [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Releasing lock "refresh_cache-49c2b2d1-3230-4f75-bc49-86230accc637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:41:15 compute-2 ceph-mon[77282]: pgmap v1261: 305 pgs: 305 active+clean; 302 MiB data, 555 MiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 8.7 MiB/s wr, 207 op/s
Jan 31 07:41:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3783863931' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.768 226833 INFO nova.virt.libvirt.driver [-] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Instance destroyed successfully.
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.769 226833 DEBUG nova.objects.instance [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lazy-loading 'resources' on Instance uuid 49c2b2d1-3230-4f75-bc49-86230accc637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:41:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:41:15 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1684801822' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.941 226833 DEBUG oslo_concurrency.processutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.943 226833 DEBUG nova.virt.libvirt.vif [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:41:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-276353670',display_name='tempest-VolumesAdminNegativeTest-server-276353670',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-276353670',id=46,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG45txBttIRVAaDalRus4g+sGRVlJS+MTqAQB06P99OWE1GwBlFSi+Nkbfr4Wi+1f75znP6mwmSrCTMSKLDST0/MC4PLC/3COJBj4gWDT+jq3RFY7K+3lw1lw7j4siFQEA==',key_name='tempest-keypair-294079093',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='016f45da455049d7aad578f0a534a0f2',ramdisk_id='',reservation_id='r-468sha1f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAdminNegativeTest-1814342344',owner_user_name='tempest-VolumesAdminNegativeTest-1814342344-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:41:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b873da8845e6461088fcff99c5c140b1',uuid=c030025f-5967-4922-a748-2f999d0645b1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "462010b9-29e6-472e-ba82-5e6c54eec345", "address": "fa:16:3e:89:2b:d4", "network": {"id": "bff39063-463a-42de-b52a-d9ff7905f368", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-2000138722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "016f45da455049d7aad578f0a534a0f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap462010b9-29", "ovs_interfaceid": "462010b9-29e6-472e-ba82-5e6c54eec345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.943 226833 DEBUG nova.network.os_vif_util [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Converting VIF {"id": "462010b9-29e6-472e-ba82-5e6c54eec345", "address": "fa:16:3e:89:2b:d4", "network": {"id": "bff39063-463a-42de-b52a-d9ff7905f368", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-2000138722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "016f45da455049d7aad578f0a534a0f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap462010b9-29", "ovs_interfaceid": "462010b9-29e6-472e-ba82-5e6c54eec345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.944 226833 DEBUG nova.network.os_vif_util [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:2b:d4,bridge_name='br-int',has_traffic_filtering=True,id=462010b9-29e6-472e-ba82-5e6c54eec345,network=Network(bff39063-463a-42de-b52a-d9ff7905f368),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap462010b9-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.946 226833 DEBUG nova.objects.instance [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lazy-loading 'pci_devices' on Instance uuid c030025f-5967-4922-a748-2f999d0645b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.981 226833 DEBUG nova.virt.libvirt.driver [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:41:15 compute-2 nova_compute[226829]:   <uuid>c030025f-5967-4922-a748-2f999d0645b1</uuid>
Jan 31 07:41:15 compute-2 nova_compute[226829]:   <name>instance-0000002e</name>
Jan 31 07:41:15 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:41:15 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:41:15 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <nova:name>tempest-VolumesAdminNegativeTest-server-276353670</nova:name>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:41:15</nova:creationTime>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:41:15 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:41:15 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:41:15 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:41:15 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:41:15 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:41:15 compute-2 nova_compute[226829]:         <nova:user uuid="b873da8845e6461088fcff99c5c140b1">tempest-VolumesAdminNegativeTest-1814342344-project-member</nova:user>
Jan 31 07:41:15 compute-2 nova_compute[226829]:         <nova:project uuid="016f45da455049d7aad578f0a534a0f2">tempest-VolumesAdminNegativeTest-1814342344</nova:project>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 07:41:15 compute-2 nova_compute[226829]:         <nova:port uuid="462010b9-29e6-472e-ba82-5e6c54eec345">
Jan 31 07:41:15 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:41:15 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:41:15 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <system>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <entry name="serial">c030025f-5967-4922-a748-2f999d0645b1</entry>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <entry name="uuid">c030025f-5967-4922-a748-2f999d0645b1</entry>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     </system>
Jan 31 07:41:15 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:41:15 compute-2 nova_compute[226829]:   <os>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:   </os>
Jan 31 07:41:15 compute-2 nova_compute[226829]:   <features>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:   </features>
Jan 31 07:41:15 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:41:15 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:41:15 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/c030025f-5967-4922-a748-2f999d0645b1_disk">
Jan 31 07:41:15 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       </source>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:41:15 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/c030025f-5967-4922-a748-2f999d0645b1_disk.config">
Jan 31 07:41:15 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       </source>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:41:15 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:89:2b:d4"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <target dev="tap462010b9-29"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/c030025f-5967-4922-a748-2f999d0645b1/console.log" append="off"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <video>
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     </video>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:41:15 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:41:15 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:41:15 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:41:15 compute-2 nova_compute[226829]: </domain>
Jan 31 07:41:15 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.983 226833 DEBUG nova.compute.manager [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Preparing to wait for external event network-vif-plugged-462010b9-29e6-472e-ba82-5e6c54eec345 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.983 226833 DEBUG oslo_concurrency.lockutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Acquiring lock "c030025f-5967-4922-a748-2f999d0645b1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.984 226833 DEBUG oslo_concurrency.lockutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "c030025f-5967-4922-a748-2f999d0645b1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.984 226833 DEBUG oslo_concurrency.lockutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "c030025f-5967-4922-a748-2f999d0645b1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.985 226833 DEBUG nova.virt.libvirt.vif [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:41:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-276353670',display_name='tempest-VolumesAdminNegativeTest-server-276353670',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-276353670',id=46,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG45txBttIRVAaDalRus4g+sGRVlJS+MTqAQB06P99OWE1GwBlFSi+Nkbfr4Wi+1f75znP6mwmSrCTMSKLDST0/MC4PLC/3COJBj4gWDT+jq3RFY7K+3lw1lw7j4siFQEA==',key_name='tempest-keypair-294079093',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='016f45da455049d7aad578f0a534a0f2',ramdisk_id='',reservation_id='r-468sha1f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAdminNegativeTest-1814342344',owner_user_name='tempest-VolumesAdminNegativeTest-1814342344-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:41:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b873da8845e6461088fcff99c5c140b1',uuid=c030025f-5967-4922-a748-2f999d0645b1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "462010b9-29e6-472e-ba82-5e6c54eec345", "address": "fa:16:3e:89:2b:d4", "network": {"id": "bff39063-463a-42de-b52a-d9ff7905f368", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-2000138722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "016f45da455049d7aad578f0a534a0f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap462010b9-29", "ovs_interfaceid": "462010b9-29e6-472e-ba82-5e6c54eec345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.985 226833 DEBUG nova.network.os_vif_util [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Converting VIF {"id": "462010b9-29e6-472e-ba82-5e6c54eec345", "address": "fa:16:3e:89:2b:d4", "network": {"id": "bff39063-463a-42de-b52a-d9ff7905f368", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-2000138722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "016f45da455049d7aad578f0a534a0f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap462010b9-29", "ovs_interfaceid": "462010b9-29e6-472e-ba82-5e6c54eec345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.986 226833 DEBUG nova.network.os_vif_util [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:2b:d4,bridge_name='br-int',has_traffic_filtering=True,id=462010b9-29e6-472e-ba82-5e6c54eec345,network=Network(bff39063-463a-42de-b52a-d9ff7905f368),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap462010b9-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.986 226833 DEBUG os_vif [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:2b:d4,bridge_name='br-int',has_traffic_filtering=True,id=462010b9-29e6-472e-ba82-5e6c54eec345,network=Network(bff39063-463a-42de-b52a-d9ff7905f368),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap462010b9-29') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.992 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.992 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:41:15 compute-2 nova_compute[226829]: 2026-01-31 07:41:15.993 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:41:16 compute-2 nova_compute[226829]: 2026-01-31 07:41:16.000 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:16 compute-2 nova_compute[226829]: 2026-01-31 07:41:16.000 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap462010b9-29, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:41:16 compute-2 nova_compute[226829]: 2026-01-31 07:41:16.000 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap462010b9-29, col_values=(('external_ids', {'iface-id': '462010b9-29e6-472e-ba82-5e6c54eec345', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:2b:d4', 'vm-uuid': 'c030025f-5967-4922-a748-2f999d0645b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:41:16 compute-2 nova_compute[226829]: 2026-01-31 07:41:16.004 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:41:16 compute-2 NetworkManager[48999]: <info>  [1769845276.0046] manager: (tap462010b9-29): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Jan 31 07:41:16 compute-2 nova_compute[226829]: 2026-01-31 07:41:16.012 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:16 compute-2 nova_compute[226829]: 2026-01-31 07:41:16.015 226833 INFO os_vif [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:2b:d4,bridge_name='br-int',has_traffic_filtering=True,id=462010b9-29e6-472e-ba82-5e6c54eec345,network=Network(bff39063-463a-42de-b52a-d9ff7905f368),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap462010b9-29')
Jan 31 07:41:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:16.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:16 compute-2 nova_compute[226829]: 2026-01-31 07:41:16.178 226833 DEBUG nova.virt.libvirt.driver [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:41:16 compute-2 nova_compute[226829]: 2026-01-31 07:41:16.179 226833 DEBUG nova.virt.libvirt.driver [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:41:16 compute-2 nova_compute[226829]: 2026-01-31 07:41:16.179 226833 DEBUG nova.virt.libvirt.driver [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] No VIF found with MAC fa:16:3e:89:2b:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:41:16 compute-2 nova_compute[226829]: 2026-01-31 07:41:16.179 226833 INFO nova.virt.libvirt.driver [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Using config drive
Jan 31 07:41:16 compute-2 nova_compute[226829]: 2026-01-31 07:41:16.211 226833 DEBUG nova.storage.rbd_utils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] rbd image c030025f-5967-4922-a748-2f999d0645b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:41:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:16.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:16 compute-2 nova_compute[226829]: 2026-01-31 07:41:16.586 226833 DEBUG nova.network.neutron [req-44a69fbd-479e-408c-b4b6-3dae4aa0622f req-82ece888-b325-4e27-90c5-f5b5a54c328f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Updated VIF entry in instance network info cache for port 462010b9-29e6-472e-ba82-5e6c54eec345. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:41:16 compute-2 nova_compute[226829]: 2026-01-31 07:41:16.587 226833 DEBUG nova.network.neutron [req-44a69fbd-479e-408c-b4b6-3dae4aa0622f req-82ece888-b325-4e27-90c5-f5b5a54c328f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Updating instance_info_cache with network_info: [{"id": "462010b9-29e6-472e-ba82-5e6c54eec345", "address": "fa:16:3e:89:2b:d4", "network": {"id": "bff39063-463a-42de-b52a-d9ff7905f368", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-2000138722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "016f45da455049d7aad578f0a534a0f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap462010b9-29", "ovs_interfaceid": "462010b9-29e6-472e-ba82-5e6c54eec345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:41:16 compute-2 nova_compute[226829]: 2026-01-31 07:41:16.638 226833 DEBUG oslo_concurrency.lockutils [req-44a69fbd-479e-408c-b4b6-3dae4aa0622f req-82ece888-b325-4e27-90c5-f5b5a54c328f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-c030025f-5967-4922-a748-2f999d0645b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:41:16 compute-2 nova_compute[226829]: 2026-01-31 07:41:16.724 226833 INFO nova.virt.libvirt.driver [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Creating config drive at /var/lib/nova/instances/c030025f-5967-4922-a748-2f999d0645b1/disk.config
Jan 31 07:41:16 compute-2 nova_compute[226829]: 2026-01-31 07:41:16.730 226833 DEBUG oslo_concurrency.processutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c030025f-5967-4922-a748-2f999d0645b1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpafi_jgvx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:41:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1684801822' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:41:16 compute-2 nova_compute[226829]: 2026-01-31 07:41:16.866 226833 DEBUG oslo_concurrency.processutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c030025f-5967-4922-a748-2f999d0645b1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpafi_jgvx" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:41:16 compute-2 nova_compute[226829]: 2026-01-31 07:41:16.901 226833 DEBUG nova.storage.rbd_utils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] rbd image c030025f-5967-4922-a748-2f999d0645b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:41:16 compute-2 nova_compute[226829]: 2026-01-31 07:41:16.905 226833 DEBUG oslo_concurrency.processutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c030025f-5967-4922-a748-2f999d0645b1/disk.config c030025f-5967-4922-a748-2f999d0645b1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:41:17 compute-2 nova_compute[226829]: 2026-01-31 07:41:17.001 226833 INFO nova.virt.libvirt.driver [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Deleting instance files /var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637_del
Jan 31 07:41:17 compute-2 nova_compute[226829]: 2026-01-31 07:41:17.004 226833 INFO nova.virt.libvirt.driver [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Deletion of /var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637_del complete
Jan 31 07:41:17 compute-2 nova_compute[226829]: 2026-01-31 07:41:17.137 226833 DEBUG oslo_concurrency.processutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c030025f-5967-4922-a748-2f999d0645b1/disk.config c030025f-5967-4922-a748-2f999d0645b1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.232s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:41:17 compute-2 nova_compute[226829]: 2026-01-31 07:41:17.138 226833 INFO nova.virt.libvirt.driver [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Deleting local config drive /var/lib/nova/instances/c030025f-5967-4922-a748-2f999d0645b1/disk.config because it was imported into RBD.
Jan 31 07:41:17 compute-2 kernel: tap462010b9-29: entered promiscuous mode
Jan 31 07:41:17 compute-2 NetworkManager[48999]: <info>  [1769845277.1809] manager: (tap462010b9-29): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Jan 31 07:41:17 compute-2 ovn_controller[133834]: 2026-01-31T07:41:17Z|00106|binding|INFO|Claiming lport 462010b9-29e6-472e-ba82-5e6c54eec345 for this chassis.
Jan 31 07:41:17 compute-2 ovn_controller[133834]: 2026-01-31T07:41:17Z|00107|binding|INFO|462010b9-29e6-472e-ba82-5e6c54eec345: Claiming fa:16:3e:89:2b:d4 10.100.0.8
Jan 31 07:41:17 compute-2 nova_compute[226829]: 2026-01-31 07:41:17.182 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:17 compute-2 systemd-machined[195142]: New machine qemu-19-instance-0000002e.
Jan 31 07:41:17 compute-2 systemd[1]: Started Virtual Machine qemu-19-instance-0000002e.
Jan 31 07:41:17 compute-2 systemd-udevd[245585]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:41:17 compute-2 nova_compute[226829]: 2026-01-31 07:41:17.236 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:17 compute-2 NetworkManager[48999]: <info>  [1769845277.2417] device (tap462010b9-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:41:17 compute-2 NetworkManager[48999]: <info>  [1769845277.2424] device (tap462010b9-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:41:17 compute-2 ovn_controller[133834]: 2026-01-31T07:41:17Z|00108|binding|INFO|Setting lport 462010b9-29e6-472e-ba82-5e6c54eec345 ovn-installed in OVS
Jan 31 07:41:17 compute-2 nova_compute[226829]: 2026-01-31 07:41:17.246 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:17 compute-2 ovn_controller[133834]: 2026-01-31T07:41:17Z|00109|binding|INFO|Setting lport 462010b9-29e6-472e-ba82-5e6c54eec345 up in Southbound
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:17.283 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:2b:d4 10.100.0.8'], port_security=['fa:16:3e:89:2b:d4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c030025f-5967-4922-a748-2f999d0645b1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bff39063-463a-42de-b52a-d9ff7905f368', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '016f45da455049d7aad578f0a534a0f2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '41805443-972e-49c5-b4cc-3b8c00ce4178', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64424fb0-e7eb-43b1-ba76-473962c1b131, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=462010b9-29e6-472e-ba82-5e6c54eec345) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:17.285 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 462010b9-29e6-472e-ba82-5e6c54eec345 in datapath bff39063-463a-42de-b52a-d9ff7905f368 bound to our chassis
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:17.286 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bff39063-463a-42de-b52a-d9ff7905f368
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:17.297 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[19bd7161-6091-42d7-9bd0-05b5bac0e480]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:17.298 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbff39063-41 in ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:17.299 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbff39063-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:17.299 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb573e2-71f2-4a61-bdce-946334988dcd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:17.300 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[94d90257-90fe-430b-937a-4dd8aff11555]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:17.311 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[75453b37-b9cd-42e6-9637-29a4bd825810]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:17 compute-2 nova_compute[226829]: 2026-01-31 07:41:17.315 226833 INFO nova.scheduler.client.report [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Deleted allocations for instance 49c2b2d1-3230-4f75-bc49-86230accc637
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:17.322 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[889717a0-c86b-4dc6-90b9-3a15dea54502]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:17.346 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[2181b783-a8cb-4146-8602-62f2bdf1fd9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:17 compute-2 NetworkManager[48999]: <info>  [1769845277.3521] manager: (tapbff39063-40): new Veth device (/org/freedesktop/NetworkManager/Devices/65)
Jan 31 07:41:17 compute-2 systemd-udevd[245588]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:17.353 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2d3ecf66-7675-4038-86da-b07f33b5eeb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:17.375 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[f88d618b-a865-4694-8267-8c01aefe35ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:17.379 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[aafaec95-d10a-4165-9d55-3d13790c423b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:17 compute-2 NetworkManager[48999]: <info>  [1769845277.3945] device (tapbff39063-40): carrier: link connected
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:17.397 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[d9edebda-99fe-402f-a40b-3f46b25d7351]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:17.411 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[669a6851-49e4-4493-abe8-1027de1add24]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbff39063-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5a:f4:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 552986, 'reachable_time': 35435, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245618, 'error': None, 'target': 'ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:17.422 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[61826933-1641-47de-bebf-179187a4a78b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5a:f47e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 552986, 'tstamp': 552986}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245619, 'error': None, 'target': 'ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:17.435 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cbb6bba2-50ba-4fd3-b7d5-19339481dfdf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbff39063-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5a:f4:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 552986, 'reachable_time': 35435, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 245620, 'error': None, 'target': 'ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:17.457 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[39a6efa4-21c2-4433-b8c2-871b7d91aa88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:17 compute-2 nova_compute[226829]: 2026-01-31 07:41:17.469 226833 DEBUG oslo_concurrency.lockutils [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:17 compute-2 nova_compute[226829]: 2026-01-31 07:41:17.470 226833 DEBUG oslo_concurrency.lockutils [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:17.500 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[75b26e6f-236c-4e93-8965-1a19d31d4056]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:17.501 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbff39063-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:17.502 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:17.502 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbff39063-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:41:17 compute-2 nova_compute[226829]: 2026-01-31 07:41:17.504 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:17 compute-2 NetworkManager[48999]: <info>  [1769845277.5045] manager: (tapbff39063-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Jan 31 07:41:17 compute-2 kernel: tapbff39063-40: entered promiscuous mode
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:17.508 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbff39063-40, col_values=(('external_ids', {'iface-id': 'b96729ad-cf99-4a3d-b17d-6bdafe5723db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:41:17 compute-2 ovn_controller[133834]: 2026-01-31T07:41:17Z|00110|binding|INFO|Releasing lport b96729ad-cf99-4a3d-b17d-6bdafe5723db from this chassis (sb_readonly=0)
Jan 31 07:41:17 compute-2 nova_compute[226829]: 2026-01-31 07:41:17.510 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:17.510 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bff39063-463a-42de-b52a-d9ff7905f368.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bff39063-463a-42de-b52a-d9ff7905f368.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:17.511 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[07a1a087-e208-493d-9339-292744da4902]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:17.511 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: global
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-bff39063-463a-42de-b52a-d9ff7905f368
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/bff39063-463a-42de-b52a-d9ff7905f368.pid.haproxy
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID bff39063-463a-42de-b52a-d9ff7905f368
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:41:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:17.512 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368', 'env', 'PROCESS_TAG=haproxy-bff39063-463a-42de-b52a-d9ff7905f368', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bff39063-463a-42de-b52a-d9ff7905f368.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:41:17 compute-2 nova_compute[226829]: 2026-01-31 07:41:17.515 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:17 compute-2 nova_compute[226829]: 2026-01-31 07:41:17.532 226833 DEBUG oslo_concurrency.processutils [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:41:17 compute-2 nova_compute[226829]: 2026-01-31 07:41:17.826 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845277.8262804, c030025f-5967-4922-a748-2f999d0645b1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:41:17 compute-2 nova_compute[226829]: 2026-01-31 07:41:17.827 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c030025f-5967-4922-a748-2f999d0645b1] VM Started (Lifecycle Event)
Jan 31 07:41:17 compute-2 ceph-mon[77282]: pgmap v1262: 305 pgs: 305 active+clean; 368 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 11 MiB/s wr, 241 op/s
Jan 31 07:41:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2673872824' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:41:17 compute-2 podman[245714]: 2026-01-31 07:41:17.917580572 +0000 UTC m=+0.080560529 container create 2e586d93422f0d064a8988b622c765a375485f7405cc58ee69ad44614ed1fca7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 07:41:17 compute-2 systemd[1]: Started libpod-conmon-2e586d93422f0d064a8988b622c765a375485f7405cc58ee69ad44614ed1fca7.scope.
Jan 31 07:41:17 compute-2 podman[245714]: 2026-01-31 07:41:17.854474312 +0000 UTC m=+0.017454299 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:41:17 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:41:17 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04288f51c6bf900c6ef617dd879e6dfb21e962313cc4ba50d0bb3a9c39b84cf9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:41:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:41:17 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2857570762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:17 compute-2 podman[245714]: 2026-01-31 07:41:17.996020692 +0000 UTC m=+0.159000669 container init 2e586d93422f0d064a8988b622c765a375485f7405cc58ee69ad44614ed1fca7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:41:17 compute-2 nova_compute[226829]: 2026-01-31 07:41:17.995 226833 DEBUG oslo_concurrency.processutils [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:41:18 compute-2 podman[245714]: 2026-01-31 07:41:18.001605812 +0000 UTC m=+0.164585769 container start 2e586d93422f0d064a8988b622c765a375485f7405cc58ee69ad44614ed1fca7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.001 226833 DEBUG nova.compute.provider_tree [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.005 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c030025f-5967-4922-a748-2f999d0645b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.008 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845277.8275073, c030025f-5967-4922-a748-2f999d0645b1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.008 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c030025f-5967-4922-a748-2f999d0645b1] VM Paused (Lifecycle Event)
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.017 226833 DEBUG nova.compute.manager [req-003954c8-bb14-4616-bd74-c0e86cf28fbf req-daa132e8-5e43-4c52-b614-310636407c6c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Received event network-vif-plugged-462010b9-29e6-472e-ba82-5e6c54eec345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.018 226833 DEBUG oslo_concurrency.lockutils [req-003954c8-bb14-4616-bd74-c0e86cf28fbf req-daa132e8-5e43-4c52-b614-310636407c6c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c030025f-5967-4922-a748-2f999d0645b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.018 226833 DEBUG oslo_concurrency.lockutils [req-003954c8-bb14-4616-bd74-c0e86cf28fbf req-daa132e8-5e43-4c52-b614-310636407c6c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c030025f-5967-4922-a748-2f999d0645b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.018 226833 DEBUG oslo_concurrency.lockutils [req-003954c8-bb14-4616-bd74-c0e86cf28fbf req-daa132e8-5e43-4c52-b614-310636407c6c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c030025f-5967-4922-a748-2f999d0645b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:18 compute-2 neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368[245729]: [NOTICE]   (245737) : New worker (245739) forked
Jan 31 07:41:18 compute-2 neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368[245729]: [NOTICE]   (245737) : Loading success.
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.019 226833 DEBUG nova.compute.manager [req-003954c8-bb14-4616-bd74-c0e86cf28fbf req-daa132e8-5e43-4c52-b614-310636407c6c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Processing event network-vif-plugged-462010b9-29e6-472e-ba82-5e6c54eec345 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.019 226833 DEBUG nova.compute.manager [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.048 226833 DEBUG nova.virt.libvirt.driver [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.050 226833 DEBUG nova.scheduler.client.report [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.054 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c030025f-5967-4922-a748-2f999d0645b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.056 226833 INFO nova.virt.libvirt.driver [-] [instance: c030025f-5967-4922-a748-2f999d0645b1] Instance spawned successfully.
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.057 226833 DEBUG nova.virt.libvirt.driver [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.059 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845278.0371826, c030025f-5967-4922-a748-2f999d0645b1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.059 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c030025f-5967-4922-a748-2f999d0645b1] VM Resumed (Lifecycle Event)
Jan 31 07:41:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:18.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.112 226833 DEBUG oslo_concurrency.lockutils [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.119 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c030025f-5967-4922-a748-2f999d0645b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.124 226833 DEBUG nova.virt.libvirt.driver [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.125 226833 DEBUG nova.virt.libvirt.driver [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.126 226833 DEBUG nova.virt.libvirt.driver [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.126 226833 DEBUG nova.virt.libvirt.driver [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.127 226833 DEBUG nova.virt.libvirt.driver [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.127 226833 DEBUG nova.virt.libvirt.driver [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.132 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c030025f-5967-4922-a748-2f999d0645b1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.245 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c030025f-5967-4922-a748-2f999d0645b1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.296 226833 DEBUG oslo_concurrency.lockutils [None req-afeb1b58-5b75-4151-8473-f51248c84ac6 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lock "49c2b2d1-3230-4f75-bc49-86230accc637" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 24.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.378 226833 INFO nova.compute.manager [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Took 8.96 seconds to spawn the instance on the hypervisor.
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.378 226833 DEBUG nova.compute.manager [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:41:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:41:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:18.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.574 226833 INFO nova.compute.manager [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Took 11.56 seconds to build instance.
Jan 31 07:41:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.708 226833 DEBUG oslo_concurrency.lockutils [None req-1bda9590-01ef-446f-aa1e-02ac9237b8c7 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "c030025f-5967-4922-a748-2f999d0645b1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3454434337' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:41:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2857570762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:18 compute-2 nova_compute[226829]: 2026-01-31 07:41:18.977 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:19 compute-2 ceph-mon[77282]: pgmap v1263: 305 pgs: 305 active+clean; 330 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 7.3 MiB/s wr, 147 op/s
Jan 31 07:41:19 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3533032308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:20.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:20.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:20 compute-2 nova_compute[226829]: 2026-01-31 07:41:20.694 226833 DEBUG nova.compute.manager [req-ac696385-ab0d-4cf0-9a42-012475f4d0d5 req-f0bd7d02-5138-4f07-991f-6fbb254d8ad1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Received event network-vif-plugged-462010b9-29e6-472e-ba82-5e6c54eec345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:41:20 compute-2 nova_compute[226829]: 2026-01-31 07:41:20.695 226833 DEBUG oslo_concurrency.lockutils [req-ac696385-ab0d-4cf0-9a42-012475f4d0d5 req-f0bd7d02-5138-4f07-991f-6fbb254d8ad1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c030025f-5967-4922-a748-2f999d0645b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:20 compute-2 nova_compute[226829]: 2026-01-31 07:41:20.695 226833 DEBUG oslo_concurrency.lockutils [req-ac696385-ab0d-4cf0-9a42-012475f4d0d5 req-f0bd7d02-5138-4f07-991f-6fbb254d8ad1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c030025f-5967-4922-a748-2f999d0645b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:20 compute-2 nova_compute[226829]: 2026-01-31 07:41:20.695 226833 DEBUG oslo_concurrency.lockutils [req-ac696385-ab0d-4cf0-9a42-012475f4d0d5 req-f0bd7d02-5138-4f07-991f-6fbb254d8ad1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c030025f-5967-4922-a748-2f999d0645b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:20 compute-2 nova_compute[226829]: 2026-01-31 07:41:20.695 226833 DEBUG nova.compute.manager [req-ac696385-ab0d-4cf0-9a42-012475f4d0d5 req-f0bd7d02-5138-4f07-991f-6fbb254d8ad1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] No waiting events found dispatching network-vif-plugged-462010b9-29e6-472e-ba82-5e6c54eec345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:41:20 compute-2 nova_compute[226829]: 2026-01-31 07:41:20.696 226833 WARNING nova.compute.manager [req-ac696385-ab0d-4cf0-9a42-012475f4d0d5 req-f0bd7d02-5138-4f07-991f-6fbb254d8ad1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Received unexpected event network-vif-plugged-462010b9-29e6-472e-ba82-5e6c54eec345 for instance with vm_state active and task_state None.
Jan 31 07:41:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e192 e192: 3 total, 3 up, 3 in
Jan 31 07:41:21 compute-2 nova_compute[226829]: 2026-01-31 07:41:21.002 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:21 compute-2 nova_compute[226829]: 2026-01-31 07:41:21.316 226833 DEBUG oslo_concurrency.lockutils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Acquiring lock "49c2b2d1-3230-4f75-bc49-86230accc637" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:21 compute-2 nova_compute[226829]: 2026-01-31 07:41:21.316 226833 DEBUG oslo_concurrency.lockutils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lock "49c2b2d1-3230-4f75-bc49-86230accc637" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:21 compute-2 nova_compute[226829]: 2026-01-31 07:41:21.317 226833 INFO nova.compute.manager [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Unshelving
Jan 31 07:41:21 compute-2 nova_compute[226829]: 2026-01-31 07:41:21.696 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845266.695202, 49c2b2d1-3230-4f75-bc49-86230accc637 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:41:21 compute-2 nova_compute[226829]: 2026-01-31 07:41:21.697 226833 INFO nova.compute.manager [-] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] VM Stopped (Lifecycle Event)
Jan 31 07:41:21 compute-2 nova_compute[226829]: 2026-01-31 07:41:21.902 226833 DEBUG oslo_concurrency.lockutils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:21 compute-2 nova_compute[226829]: 2026-01-31 07:41:21.903 226833 DEBUG oslo_concurrency.lockutils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:21 compute-2 nova_compute[226829]: 2026-01-31 07:41:21.908 226833 DEBUG nova.compute.manager [None req-e8ea9bc6-4c44-4325-ae96-bc758bce69b7 - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:41:21 compute-2 nova_compute[226829]: 2026-01-31 07:41:21.909 226833 DEBUG nova.objects.instance [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lazy-loading 'pci_requests' on Instance uuid 49c2b2d1-3230-4f75-bc49-86230accc637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:41:21 compute-2 nova_compute[226829]: 2026-01-31 07:41:21.952 226833 DEBUG nova.objects.instance [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lazy-loading 'numa_topology' on Instance uuid 49c2b2d1-3230-4f75-bc49-86230accc637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:41:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:22.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:22 compute-2 nova_compute[226829]: 2026-01-31 07:41:22.189 226833 DEBUG nova.virt.hardware [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:41:22 compute-2 nova_compute[226829]: 2026-01-31 07:41:22.190 226833 INFO nova.compute.claims [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:41:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:22.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:22 compute-2 nova_compute[226829]: 2026-01-31 07:41:22.503 226833 DEBUG oslo_concurrency.processutils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:41:22 compute-2 ceph-mon[77282]: pgmap v1264: 305 pgs: 305 active+clean; 213 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 6.0 MiB/s wr, 242 op/s
Jan 31 07:41:22 compute-2 ceph-mon[77282]: osdmap e192: 3 total, 3 up, 3 in
Jan 31 07:41:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:41:23 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3141007097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:23 compute-2 nova_compute[226829]: 2026-01-31 07:41:23.056 226833 DEBUG oslo_concurrency.processutils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:41:23 compute-2 nova_compute[226829]: 2026-01-31 07:41:23.062 226833 DEBUG nova.compute.provider_tree [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:41:23 compute-2 nova_compute[226829]: 2026-01-31 07:41:23.193 226833 DEBUG nova.scheduler.client.report [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:41:23 compute-2 nova_compute[226829]: 2026-01-31 07:41:23.326 226833 DEBUG oslo_concurrency.lockutils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:41:23 compute-2 ceph-mon[77282]: pgmap v1266: 305 pgs: 305 active+clean; 213 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.4 MiB/s wr, 232 op/s
Jan 31 07:41:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3141007097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:23 compute-2 nova_compute[226829]: 2026-01-31 07:41:23.979 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:24.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:24.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:24 compute-2 nova_compute[226829]: 2026-01-31 07:41:24.597 226833 DEBUG oslo_concurrency.lockutils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Acquiring lock "refresh_cache-49c2b2d1-3230-4f75-bc49-86230accc637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:41:24 compute-2 nova_compute[226829]: 2026-01-31 07:41:24.597 226833 DEBUG oslo_concurrency.lockutils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Acquired lock "refresh_cache-49c2b2d1-3230-4f75-bc49-86230accc637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:41:24 compute-2 nova_compute[226829]: 2026-01-31 07:41:24.597 226833 DEBUG nova.network.neutron [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:41:24 compute-2 nova_compute[226829]: 2026-01-31 07:41:24.869 226833 DEBUG nova.network.neutron [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:41:24 compute-2 nova_compute[226829]: 2026-01-31 07:41:24.996 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:24 compute-2 NetworkManager[48999]: <info>  [1769845284.9982] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Jan 31 07:41:25 compute-2 NetworkManager[48999]: <info>  [1769845285.0001] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Jan 31 07:41:25 compute-2 nova_compute[226829]: 2026-01-31 07:41:25.037 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:25 compute-2 ovn_controller[133834]: 2026-01-31T07:41:25Z|00111|binding|INFO|Releasing lport b96729ad-cf99-4a3d-b17d-6bdafe5723db from this chassis (sb_readonly=0)
Jan 31 07:41:25 compute-2 nova_compute[226829]: 2026-01-31 07:41:25.054 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:25 compute-2 nova_compute[226829]: 2026-01-31 07:41:25.174 226833 DEBUG nova.network.neutron [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:41:25 compute-2 nova_compute[226829]: 2026-01-31 07:41:25.415 226833 DEBUG oslo_concurrency.lockutils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Releasing lock "refresh_cache-49c2b2d1-3230-4f75-bc49-86230accc637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:41:25 compute-2 nova_compute[226829]: 2026-01-31 07:41:25.416 226833 DEBUG nova.virt.libvirt.driver [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:41:25 compute-2 nova_compute[226829]: 2026-01-31 07:41:25.417 226833 INFO nova.virt.libvirt.driver [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Creating image(s)
Jan 31 07:41:25 compute-2 nova_compute[226829]: 2026-01-31 07:41:25.448 226833 DEBUG nova.storage.rbd_utils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] rbd image 49c2b2d1-3230-4f75-bc49-86230accc637_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:41:25 compute-2 nova_compute[226829]: 2026-01-31 07:41:25.452 226833 DEBUG nova.objects.instance [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 49c2b2d1-3230-4f75-bc49-86230accc637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:41:25 compute-2 nova_compute[226829]: 2026-01-31 07:41:25.702 226833 DEBUG nova.storage.rbd_utils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] rbd image 49c2b2d1-3230-4f75-bc49-86230accc637_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:41:25 compute-2 nova_compute[226829]: 2026-01-31 07:41:25.730 226833 DEBUG nova.storage.rbd_utils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] rbd image 49c2b2d1-3230-4f75-bc49-86230accc637_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:41:25 compute-2 nova_compute[226829]: 2026-01-31 07:41:25.735 226833 DEBUG oslo_concurrency.lockutils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Acquiring lock "0338696b5f68edbc9d1130fcaa58a534f080f6cc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:25 compute-2 nova_compute[226829]: 2026-01-31 07:41:25.736 226833 DEBUG oslo_concurrency.lockutils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lock "0338696b5f68edbc9d1130fcaa58a534f080f6cc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:25 compute-2 ceph-mon[77282]: pgmap v1267: 305 pgs: 305 active+clean; 213 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.4 MiB/s wr, 232 op/s
Jan 31 07:41:26 compute-2 nova_compute[226829]: 2026-01-31 07:41:26.004 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:26 compute-2 nova_compute[226829]: 2026-01-31 07:41:26.021 226833 DEBUG nova.virt.libvirt.imagebackend [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Image locations are: [{'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/41472c33-f8e9-4285-8ebd-4297b1fe1775/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/41472c33-f8e9-4285-8ebd-4297b1fe1775/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 31 07:41:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:26.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:26 compute-2 nova_compute[226829]: 2026-01-31 07:41:26.083 226833 DEBUG nova.virt.libvirt.imagebackend [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Selected location: {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/41472c33-f8e9-4285-8ebd-4297b1fe1775/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 31 07:41:26 compute-2 nova_compute[226829]: 2026-01-31 07:41:26.083 226833 DEBUG nova.storage.rbd_utils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] cloning images/41472c33-f8e9-4285-8ebd-4297b1fe1775@snap to None/49c2b2d1-3230-4f75-bc49-86230accc637_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 07:41:26 compute-2 nova_compute[226829]: 2026-01-31 07:41:26.220 226833 DEBUG nova.compute.manager [req-0a9c675c-cfe8-47a6-82cc-0731ecdedad2 req-0ffc7247-f4bd-4630-a9ed-65e7e6c0514a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Received event network-changed-462010b9-29e6-472e-ba82-5e6c54eec345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:41:26 compute-2 nova_compute[226829]: 2026-01-31 07:41:26.220 226833 DEBUG nova.compute.manager [req-0a9c675c-cfe8-47a6-82cc-0731ecdedad2 req-0ffc7247-f4bd-4630-a9ed-65e7e6c0514a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Refreshing instance network info cache due to event network-changed-462010b9-29e6-472e-ba82-5e6c54eec345. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:41:26 compute-2 nova_compute[226829]: 2026-01-31 07:41:26.221 226833 DEBUG oslo_concurrency.lockutils [req-0a9c675c-cfe8-47a6-82cc-0731ecdedad2 req-0ffc7247-f4bd-4630-a9ed-65e7e6c0514a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-c030025f-5967-4922-a748-2f999d0645b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:41:26 compute-2 nova_compute[226829]: 2026-01-31 07:41:26.221 226833 DEBUG oslo_concurrency.lockutils [req-0a9c675c-cfe8-47a6-82cc-0731ecdedad2 req-0ffc7247-f4bd-4630-a9ed-65e7e6c0514a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-c030025f-5967-4922-a748-2f999d0645b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:41:26 compute-2 nova_compute[226829]: 2026-01-31 07:41:26.221 226833 DEBUG nova.network.neutron [req-0a9c675c-cfe8-47a6-82cc-0731ecdedad2 req-0ffc7247-f4bd-4630-a9ed-65e7e6c0514a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Refreshing network info cache for port 462010b9-29e6-472e-ba82-5e6c54eec345 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:41:26 compute-2 nova_compute[226829]: 2026-01-31 07:41:26.229 226833 DEBUG oslo_concurrency.lockutils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lock "0338696b5f68edbc9d1130fcaa58a534f080f6cc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.493s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:26 compute-2 nova_compute[226829]: 2026-01-31 07:41:26.375 226833 DEBUG nova.objects.instance [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lazy-loading 'migration_context' on Instance uuid 49c2b2d1-3230-4f75-bc49-86230accc637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:41:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:26.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:26 compute-2 nova_compute[226829]: 2026-01-31 07:41:26.556 226833 DEBUG nova.storage.rbd_utils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] flattening vms/49c2b2d1-3230-4f75-bc49-86230accc637_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 31 07:41:26 compute-2 nova_compute[226829]: 2026-01-31 07:41:26.983 226833 DEBUG nova.virt.libvirt.driver [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Image rbd:vms/49c2b2d1-3230-4f75-bc49-86230accc637_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Jan 31 07:41:26 compute-2 nova_compute[226829]: 2026-01-31 07:41:26.985 226833 DEBUG nova.virt.libvirt.driver [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:41:26 compute-2 nova_compute[226829]: 2026-01-31 07:41:26.986 226833 DEBUG nova.virt.libvirt.driver [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Ensure instance console log exists: /var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:41:26 compute-2 nova_compute[226829]: 2026-01-31 07:41:26.988 226833 DEBUG oslo_concurrency.lockutils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:26 compute-2 nova_compute[226829]: 2026-01-31 07:41:26.988 226833 DEBUG oslo_concurrency.lockutils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:26 compute-2 nova_compute[226829]: 2026-01-31 07:41:26.988 226833 DEBUG oslo_concurrency.lockutils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:26 compute-2 nova_compute[226829]: 2026-01-31 07:41:26.990 226833 DEBUG nova.virt.libvirt.driver [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-31T07:40:53Z,direct_url=<?>,disk_format='raw',id=41472c33-f8e9-4285-8ebd-4297b1fe1775,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1773899943-shelved',owner='37a878bbb1224cfeabcbe629345fc85d',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-31T07:41:14Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:41:26 compute-2 nova_compute[226829]: 2026-01-31 07:41:26.993 226833 WARNING nova.virt.libvirt.driver [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:41:26 compute-2 nova_compute[226829]: 2026-01-31 07:41:26.998 226833 DEBUG nova.virt.libvirt.host [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:41:26 compute-2 nova_compute[226829]: 2026-01-31 07:41:26.999 226833 DEBUG nova.virt.libvirt.host [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:41:27 compute-2 nova_compute[226829]: 2026-01-31 07:41:27.001 226833 DEBUG nova.virt.libvirt.host [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:41:27 compute-2 nova_compute[226829]: 2026-01-31 07:41:27.002 226833 DEBUG nova.virt.libvirt.host [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:41:27 compute-2 nova_compute[226829]: 2026-01-31 07:41:27.003 226833 DEBUG nova.virt.libvirt.driver [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:41:27 compute-2 nova_compute[226829]: 2026-01-31 07:41:27.003 226833 DEBUG nova.virt.hardware [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-31T07:40:53Z,direct_url=<?>,disk_format='raw',id=41472c33-f8e9-4285-8ebd-4297b1fe1775,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1773899943-shelved',owner='37a878bbb1224cfeabcbe629345fc85d',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-31T07:41:14Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:41:27 compute-2 nova_compute[226829]: 2026-01-31 07:41:27.004 226833 DEBUG nova.virt.hardware [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:41:27 compute-2 nova_compute[226829]: 2026-01-31 07:41:27.004 226833 DEBUG nova.virt.hardware [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:41:27 compute-2 nova_compute[226829]: 2026-01-31 07:41:27.004 226833 DEBUG nova.virt.hardware [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:41:27 compute-2 nova_compute[226829]: 2026-01-31 07:41:27.004 226833 DEBUG nova.virt.hardware [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:41:27 compute-2 nova_compute[226829]: 2026-01-31 07:41:27.005 226833 DEBUG nova.virt.hardware [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:41:27 compute-2 nova_compute[226829]: 2026-01-31 07:41:27.005 226833 DEBUG nova.virt.hardware [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:41:27 compute-2 nova_compute[226829]: 2026-01-31 07:41:27.005 226833 DEBUG nova.virt.hardware [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:41:27 compute-2 nova_compute[226829]: 2026-01-31 07:41:27.006 226833 DEBUG nova.virt.hardware [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:41:27 compute-2 nova_compute[226829]: 2026-01-31 07:41:27.006 226833 DEBUG nova.virt.hardware [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:41:27 compute-2 nova_compute[226829]: 2026-01-31 07:41:27.006 226833 DEBUG nova.virt.hardware [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:41:27 compute-2 nova_compute[226829]: 2026-01-31 07:41:27.006 226833 DEBUG nova.objects.instance [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 49c2b2d1-3230-4f75-bc49-86230accc637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:41:27 compute-2 nova_compute[226829]: 2026-01-31 07:41:27.027 226833 DEBUG oslo_concurrency.processutils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:41:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:41:27 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2654768386' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:41:27 compute-2 nova_compute[226829]: 2026-01-31 07:41:27.435 226833 DEBUG oslo_concurrency.processutils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:41:27 compute-2 nova_compute[226829]: 2026-01-31 07:41:27.466 226833 DEBUG nova.storage.rbd_utils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] rbd image 49c2b2d1-3230-4f75-bc49-86230accc637_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:41:27 compute-2 nova_compute[226829]: 2026-01-31 07:41:27.470 226833 DEBUG oslo_concurrency.processutils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:41:27 compute-2 nova_compute[226829]: 2026-01-31 07:41:27.599 226833 DEBUG nova.network.neutron [req-0a9c675c-cfe8-47a6-82cc-0731ecdedad2 req-0ffc7247-f4bd-4630-a9ed-65e7e6c0514a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Updated VIF entry in instance network info cache for port 462010b9-29e6-472e-ba82-5e6c54eec345. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:41:27 compute-2 nova_compute[226829]: 2026-01-31 07:41:27.600 226833 DEBUG nova.network.neutron [req-0a9c675c-cfe8-47a6-82cc-0731ecdedad2 req-0ffc7247-f4bd-4630-a9ed-65e7e6c0514a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Updating instance_info_cache with network_info: [{"id": "462010b9-29e6-472e-ba82-5e6c54eec345", "address": "fa:16:3e:89:2b:d4", "network": {"id": "bff39063-463a-42de-b52a-d9ff7905f368", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-2000138722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "016f45da455049d7aad578f0a534a0f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap462010b9-29", "ovs_interfaceid": "462010b9-29e6-472e-ba82-5e6c54eec345", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:41:27 compute-2 nova_compute[226829]: 2026-01-31 07:41:27.700 226833 DEBUG oslo_concurrency.lockutils [req-0a9c675c-cfe8-47a6-82cc-0731ecdedad2 req-0ffc7247-f4bd-4630-a9ed-65e7e6c0514a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-c030025f-5967-4922-a748-2f999d0645b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:41:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e193 e193: 3 total, 3 up, 3 in
Jan 31 07:41:27 compute-2 ceph-mon[77282]: pgmap v1268: 305 pgs: 305 active+clean; 213 MiB data, 515 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 86 KiB/s wr, 251 op/s
Jan 31 07:41:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2654768386' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:41:27 compute-2 nova_compute[226829]: 2026-01-31 07:41:27.956 226833 DEBUG oslo_concurrency.processutils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:41:27 compute-2 nova_compute[226829]: 2026-01-31 07:41:27.958 226833 DEBUG nova.objects.instance [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lazy-loading 'pci_devices' on Instance uuid 49c2b2d1-3230-4f75-bc49-86230accc637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:41:27 compute-2 nova_compute[226829]: 2026-01-31 07:41:27.993 226833 DEBUG nova.virt.libvirt.driver [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:41:27 compute-2 nova_compute[226829]:   <uuid>49c2b2d1-3230-4f75-bc49-86230accc637</uuid>
Jan 31 07:41:27 compute-2 nova_compute[226829]:   <name>instance-0000002d</name>
Jan 31 07:41:27 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:41:27 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:41:27 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:41:27 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:       <nova:name>tempest-UnshelveToHostMultiNodesTest-server-1773899943</nova:name>
Jan 31 07:41:27 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:41:26</nova:creationTime>
Jan 31 07:41:27 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:41:27 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:41:27 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:41:27 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:41:27 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:41:27 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:41:27 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:41:27 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:41:27 compute-2 nova_compute[226829]:         <nova:user uuid="cda91adb5caf4eeb81b5a934ccbb1a1e">tempest-UnshelveToHostMultiNodesTest-877324354-project-member</nova:user>
Jan 31 07:41:27 compute-2 nova_compute[226829]:         <nova:project uuid="37a878bbb1224cfeabcbe629345fc85d">tempest-UnshelveToHostMultiNodesTest-877324354</nova:project>
Jan 31 07:41:27 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:41:27 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="41472c33-f8e9-4285-8ebd-4297b1fe1775"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:       <nova:ports/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:41:27 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:41:27 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <system>
Jan 31 07:41:27 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:41:27 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:41:27 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:41:27 compute-2 nova_compute[226829]:       <entry name="serial">49c2b2d1-3230-4f75-bc49-86230accc637</entry>
Jan 31 07:41:27 compute-2 nova_compute[226829]:       <entry name="uuid">49c2b2d1-3230-4f75-bc49-86230accc637</entry>
Jan 31 07:41:27 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     </system>
Jan 31 07:41:27 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:41:27 compute-2 nova_compute[226829]:   <os>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:   </os>
Jan 31 07:41:27 compute-2 nova_compute[226829]:   <features>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:   </features>
Jan 31 07:41:27 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:41:27 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:41:27 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:41:27 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/49c2b2d1-3230-4f75-bc49-86230accc637_disk">
Jan 31 07:41:27 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:       </source>
Jan 31 07:41:27 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:41:27 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:41:27 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:41:27 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/49c2b2d1-3230-4f75-bc49-86230accc637_disk.config">
Jan 31 07:41:27 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:       </source>
Jan 31 07:41:27 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:41:27 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:41:27 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:41:27 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637/console.log" append="off"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <video>
Jan 31 07:41:27 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     </video>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <input type="keyboard" bus="usb"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:41:27 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:41:27 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:41:27 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:41:27 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:41:27 compute-2 nova_compute[226829]: </domain>
Jan 31 07:41:27 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:41:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:28.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:28 compute-2 nova_compute[226829]: 2026-01-31 07:41:28.083 226833 DEBUG nova.virt.libvirt.driver [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:41:28 compute-2 nova_compute[226829]: 2026-01-31 07:41:28.084 226833 DEBUG nova.virt.libvirt.driver [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:41:28 compute-2 nova_compute[226829]: 2026-01-31 07:41:28.084 226833 INFO nova.virt.libvirt.driver [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Using config drive
Jan 31 07:41:28 compute-2 nova_compute[226829]: 2026-01-31 07:41:28.116 226833 DEBUG nova.storage.rbd_utils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] rbd image 49c2b2d1-3230-4f75-bc49-86230accc637_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:41:28 compute-2 nova_compute[226829]: 2026-01-31 07:41:28.226 226833 DEBUG nova.objects.instance [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lazy-loading 'ec2_ids' on Instance uuid 49c2b2d1-3230-4f75-bc49-86230accc637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:41:28 compute-2 ovn_controller[133834]: 2026-01-31T07:41:28Z|00112|binding|INFO|Releasing lport b96729ad-cf99-4a3d-b17d-6bdafe5723db from this chassis (sb_readonly=0)
Jan 31 07:41:28 compute-2 nova_compute[226829]: 2026-01-31 07:41:28.309 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:28 compute-2 nova_compute[226829]: 2026-01-31 07:41:28.347 226833 DEBUG nova.objects.instance [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lazy-loading 'keypairs' on Instance uuid 49c2b2d1-3230-4f75-bc49-86230accc637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:41:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:28.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:41:28 compute-2 nova_compute[226829]: 2026-01-31 07:41:28.902 226833 INFO nova.virt.libvirt.driver [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Creating config drive at /var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637/disk.config
Jan 31 07:41:28 compute-2 nova_compute[226829]: 2026-01-31 07:41:28.906 226833 DEBUG oslo_concurrency.processutils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpjvjy1wdv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:41:28 compute-2 ceph-mon[77282]: osdmap e193: 3 total, 3 up, 3 in
Jan 31 07:41:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/628568958' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:41:28 compute-2 nova_compute[226829]: 2026-01-31 07:41:28.982 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:29 compute-2 nova_compute[226829]: 2026-01-31 07:41:29.035 226833 DEBUG oslo_concurrency.processutils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpjvjy1wdv" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:41:29 compute-2 nova_compute[226829]: 2026-01-31 07:41:29.077 226833 DEBUG nova.storage.rbd_utils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] rbd image 49c2b2d1-3230-4f75-bc49-86230accc637_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:41:29 compute-2 nova_compute[226829]: 2026-01-31 07:41:29.084 226833 DEBUG oslo_concurrency.processutils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637/disk.config 49c2b2d1-3230-4f75-bc49-86230accc637_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:41:29 compute-2 nova_compute[226829]: 2026-01-31 07:41:29.383 226833 DEBUG oslo_concurrency.processutils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637/disk.config 49c2b2d1-3230-4f75-bc49-86230accc637_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:41:29 compute-2 nova_compute[226829]: 2026-01-31 07:41:29.384 226833 INFO nova.virt.libvirt.driver [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Deleting local config drive /var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637/disk.config because it was imported into RBD.
Jan 31 07:41:29 compute-2 systemd-machined[195142]: New machine qemu-20-instance-0000002d.
Jan 31 07:41:29 compute-2 systemd[1]: Started Virtual Machine qemu-20-instance-0000002d.
Jan 31 07:41:29 compute-2 nova_compute[226829]: 2026-01-31 07:41:29.967 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845289.9664183, 49c2b2d1-3230-4f75-bc49-86230accc637 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:41:29 compute-2 nova_compute[226829]: 2026-01-31 07:41:29.969 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] VM Resumed (Lifecycle Event)
Jan 31 07:41:29 compute-2 nova_compute[226829]: 2026-01-31 07:41:29.975 226833 DEBUG nova.compute.manager [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:41:29 compute-2 nova_compute[226829]: 2026-01-31 07:41:29.976 226833 DEBUG nova.virt.libvirt.driver [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:41:29 compute-2 nova_compute[226829]: 2026-01-31 07:41:29.983 226833 INFO nova.virt.libvirt.driver [-] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Instance spawned successfully.
Jan 31 07:41:29 compute-2 nova_compute[226829]: 2026-01-31 07:41:29.995 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:41:30 compute-2 nova_compute[226829]: 2026-01-31 07:41:30.000 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:41:30 compute-2 ceph-mon[77282]: pgmap v1270: 305 pgs: 305 active+clean; 243 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.6 MiB/s wr, 149 op/s
Jan 31 07:41:30 compute-2 nova_compute[226829]: 2026-01-31 07:41:30.024 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:41:30 compute-2 nova_compute[226829]: 2026-01-31 07:41:30.024 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845289.967282, 49c2b2d1-3230-4f75-bc49-86230accc637 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:41:30 compute-2 nova_compute[226829]: 2026-01-31 07:41:30.024 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] VM Started (Lifecycle Event)
Jan 31 07:41:30 compute-2 nova_compute[226829]: 2026-01-31 07:41:30.043 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:41:30 compute-2 nova_compute[226829]: 2026-01-31 07:41:30.048 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:41:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e194 e194: 3 total, 3 up, 3 in
Jan 31 07:41:30 compute-2 nova_compute[226829]: 2026-01-31 07:41:30.066 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:41:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:41:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:30.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:41:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:30.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:30 compute-2 sudo[246169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:41:30 compute-2 sudo[246169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:30 compute-2 sudo[246169]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:30 compute-2 sudo[246194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:41:30 compute-2 sudo[246194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:30 compute-2 sudo[246194]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:31 compute-2 nova_compute[226829]: 2026-01-31 07:41:31.007 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e195 e195: 3 total, 3 up, 3 in
Jan 31 07:41:31 compute-2 ceph-mon[77282]: osdmap e194: 3 total, 3 up, 3 in
Jan 31 07:41:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:32.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:32 compute-2 nova_compute[226829]: 2026-01-31 07:41:32.306 226833 DEBUG nova.compute.manager [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:41:32 compute-2 nova_compute[226829]: 2026-01-31 07:41:32.397 226833 DEBUG oslo_concurrency.lockutils [None req-249e551b-e4dd-4c58-bb48-f77f85cb38c2 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lock "49c2b2d1-3230-4f75-bc49-86230accc637" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 11.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:32.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:32 compute-2 ceph-mon[77282]: pgmap v1272: 305 pgs: 305 active+clean; 325 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 7.8 MiB/s wr, 325 op/s
Jan 31 07:41:32 compute-2 ceph-mon[77282]: osdmap e195: 3 total, 3 up, 3 in
Jan 31 07:41:32 compute-2 ovn_controller[133834]: 2026-01-31T07:41:32Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:89:2b:d4 10.100.0.8
Jan 31 07:41:32 compute-2 ovn_controller[133834]: 2026-01-31T07:41:32Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:89:2b:d4 10.100.0.8
Jan 31 07:41:32 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e196 e196: 3 total, 3 up, 3 in
Jan 31 07:41:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:41:33 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Jan 31 07:41:33 compute-2 ceph-mon[77282]: pgmap v1274: 305 pgs: 305 active+clean; 324 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 14 MiB/s wr, 385 op/s
Jan 31 07:41:33 compute-2 ceph-mon[77282]: osdmap e196: 3 total, 3 up, 3 in
Jan 31 07:41:33 compute-2 nova_compute[226829]: 2026-01-31 07:41:33.985 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:34.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:34.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:36 compute-2 nova_compute[226829]: 2026-01-31 07:41:36.010 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:36.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:36.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:36 compute-2 ceph-mon[77282]: pgmap v1276: 305 pgs: 305 active+clean; 324 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 10 MiB/s wr, 372 op/s
Jan 31 07:41:37 compute-2 podman[246222]: 2026-01-31 07:41:37.19611593 +0000 UTC m=+0.080883637 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, config_id=ovn_controller)
Jan 31 07:41:37 compute-2 nova_compute[226829]: 2026-01-31 07:41:37.539 226833 DEBUG oslo_concurrency.lockutils [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Acquiring lock "49c2b2d1-3230-4f75-bc49-86230accc637" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:37 compute-2 nova_compute[226829]: 2026-01-31 07:41:37.540 226833 DEBUG oslo_concurrency.lockutils [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lock "49c2b2d1-3230-4f75-bc49-86230accc637" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:37 compute-2 nova_compute[226829]: 2026-01-31 07:41:37.540 226833 INFO nova.compute.manager [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Shelving
Jan 31 07:41:37 compute-2 ceph-mon[77282]: pgmap v1277: 305 pgs: 305 active+clean; 268 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 9.4 MiB/s wr, 467 op/s
Jan 31 07:41:37 compute-2 nova_compute[226829]: 2026-01-31 07:41:37.593 226833 DEBUG nova.virt.libvirt.driver [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 07:41:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:38.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:38.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:41:38 compute-2 nova_compute[226829]: 2026-01-31 07:41:38.987 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:39 compute-2 ceph-mon[77282]: pgmap v1278: 305 pgs: 305 active+clean; 261 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 6.5 MiB/s wr, 269 op/s
Jan 31 07:41:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:40.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:41:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:40.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:41:41 compute-2 nova_compute[226829]: 2026-01-31 07:41:41.013 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e197 e197: 3 total, 3 up, 3 in
Jan 31 07:41:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:42.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:42 compute-2 ceph-mon[77282]: pgmap v1279: 305 pgs: 305 active+clean; 211 MiB data, 570 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.8 MiB/s wr, 271 op/s
Jan 31 07:41:42 compute-2 ceph-mon[77282]: osdmap e197: 3 total, 3 up, 3 in
Jan 31 07:41:42 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2739341122' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:42.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:43 compute-2 nova_compute[226829]: 2026-01-31 07:41:43.224 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:43.226 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:41:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:43.230 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:41:43 compute-2 ceph-mon[77282]: pgmap v1281: 305 pgs: 305 active+clean; 200 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.5 MiB/s wr, 222 op/s
Jan 31 07:41:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:41:43 compute-2 nova_compute[226829]: 2026-01-31 07:41:43.990 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:41:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:44.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:41:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:44.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:44 compute-2 nova_compute[226829]: 2026-01-31 07:41:44.474 226833 DEBUG oslo_concurrency.lockutils [None req-a232f7ac-4637-4ed6-9b84-1d0e46e5755d b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Acquiring lock "c030025f-5967-4922-a748-2f999d0645b1" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:44 compute-2 nova_compute[226829]: 2026-01-31 07:41:44.475 226833 DEBUG oslo_concurrency.lockutils [None req-a232f7ac-4637-4ed6-9b84-1d0e46e5755d b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "c030025f-5967-4922-a748-2f999d0645b1" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:44 compute-2 nova_compute[226829]: 2026-01-31 07:41:44.528 226833 DEBUG nova.objects.instance [None req-a232f7ac-4637-4ed6-9b84-1d0e46e5755d b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lazy-loading 'flavor' on Instance uuid c030025f-5967-4922-a748-2f999d0645b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:41:44 compute-2 nova_compute[226829]: 2026-01-31 07:41:44.623 226833 DEBUG oslo_concurrency.lockutils [None req-a232f7ac-4637-4ed6-9b84-1d0e46e5755d b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "c030025f-5967-4922-a748-2f999d0645b1" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:45 compute-2 nova_compute[226829]: 2026-01-31 07:41:45.228 226833 DEBUG oslo_concurrency.lockutils [None req-a232f7ac-4637-4ed6-9b84-1d0e46e5755d b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Acquiring lock "c030025f-5967-4922-a748-2f999d0645b1" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:45 compute-2 nova_compute[226829]: 2026-01-31 07:41:45.229 226833 DEBUG oslo_concurrency.lockutils [None req-a232f7ac-4637-4ed6-9b84-1d0e46e5755d b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "c030025f-5967-4922-a748-2f999d0645b1" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:45 compute-2 nova_compute[226829]: 2026-01-31 07:41:45.230 226833 INFO nova.compute.manager [None req-a232f7ac-4637-4ed6-9b84-1d0e46e5755d b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Attaching volume 23d2de7d-d18d-40c1-8f76-18391958864a to /dev/vdb
Jan 31 07:41:45 compute-2 nova_compute[226829]: 2026-01-31 07:41:45.345 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:45 compute-2 nova_compute[226829]: 2026-01-31 07:41:45.403 226833 DEBUG os_brick.utils [None req-a232f7ac-4637-4ed6-9b84-1d0e46e5755d b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 07:41:45 compute-2 nova_compute[226829]: 2026-01-31 07:41:45.407 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:41:45 compute-2 nova_compute[226829]: 2026-01-31 07:41:45.424 236868 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:41:45 compute-2 nova_compute[226829]: 2026-01-31 07:41:45.425 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[8bd8243d-80b7-4d4b-a171-c81de206ab68]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:45 compute-2 nova_compute[226829]: 2026-01-31 07:41:45.427 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:41:45 compute-2 nova_compute[226829]: 2026-01-31 07:41:45.435 236868 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:41:45 compute-2 nova_compute[226829]: 2026-01-31 07:41:45.436 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[7b1d0596-819e-48bf-bae8-2caa68c87046]: (4, ('InitiatorName=iqn.1994-05.com.redhat:70a4e945afb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:45 compute-2 nova_compute[226829]: 2026-01-31 07:41:45.437 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:41:45 compute-2 nova_compute[226829]: 2026-01-31 07:41:45.445 236868 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:41:45 compute-2 nova_compute[226829]: 2026-01-31 07:41:45.446 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[9a38015f-8a06-4272-af61-cd8073b4b108]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:45 compute-2 nova_compute[226829]: 2026-01-31 07:41:45.447 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b25663-a37c-40b4-a5c8-a98ee6254d7f]: (4, 'd14f084b-ec77-4fba-801f-103494d34b3a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:45 compute-2 nova_compute[226829]: 2026-01-31 07:41:45.448 226833 DEBUG oslo_concurrency.processutils [None req-a232f7ac-4637-4ed6-9b84-1d0e46e5755d b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:41:45 compute-2 nova_compute[226829]: 2026-01-31 07:41:45.470 226833 DEBUG oslo_concurrency.processutils [None req-a232f7ac-4637-4ed6-9b84-1d0e46e5755d b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] CMD "nvme version" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:41:45 compute-2 nova_compute[226829]: 2026-01-31 07:41:45.473 226833 DEBUG os_brick.initiator.connectors.lightos [None req-a232f7ac-4637-4ed6-9b84-1d0e46e5755d b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 07:41:45 compute-2 nova_compute[226829]: 2026-01-31 07:41:45.473 226833 DEBUG os_brick.initiator.connectors.lightos [None req-a232f7ac-4637-4ed6-9b84-1d0e46e5755d b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 07:41:45 compute-2 nova_compute[226829]: 2026-01-31 07:41:45.474 226833 DEBUG os_brick.initiator.connectors.lightos [None req-a232f7ac-4637-4ed6-9b84-1d0e46e5755d b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 07:41:45 compute-2 nova_compute[226829]: 2026-01-31 07:41:45.474 226833 DEBUG os_brick.utils [None req-a232f7ac-4637-4ed6-9b84-1d0e46e5755d b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] <== get_connector_properties: return (70ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:70a4e945afb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'd14f084b-ec77-4fba-801f-103494d34b3a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 07:41:45 compute-2 nova_compute[226829]: 2026-01-31 07:41:45.475 226833 DEBUG nova.virt.block_device [None req-a232f7ac-4637-4ed6-9b84-1d0e46e5755d b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Updating existing volume attachment record: 6446c35a-18be-4e82-954d-47b753e36829 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 07:41:45 compute-2 ceph-mon[77282]: pgmap v1282: 305 pgs: 305 active+clean; 200 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.4 MiB/s wr, 218 op/s
Jan 31 07:41:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1538759407' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:41:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1538759407' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:41:46 compute-2 nova_compute[226829]: 2026-01-31 07:41:46.015 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:46 compute-2 sudo[246260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:41:46 compute-2 sudo[246260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:46 compute-2 sudo[246260]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:41:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:46.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:41:46 compute-2 podman[246284]: 2026-01-31 07:41:46.139906735 +0000 UTC m=+0.064382515 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:41:46 compute-2 sudo[246292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:41:46 compute-2 sudo[246292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:46 compute-2 sudo[246292]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:46 compute-2 sudo[246330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:41:46 compute-2 sudo[246330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:46 compute-2 sudo[246330]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:46 compute-2 sudo[246355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:41:46 compute-2 sudo[246355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:41:46 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/851585430' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:41:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:41:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:46.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:41:46 compute-2 nova_compute[226829]: 2026-01-31 07:41:46.485 226833 DEBUG nova.objects.instance [None req-a232f7ac-4637-4ed6-9b84-1d0e46e5755d b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lazy-loading 'flavor' on Instance uuid c030025f-5967-4922-a748-2f999d0645b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:41:46 compute-2 nova_compute[226829]: 2026-01-31 07:41:46.624 226833 DEBUG nova.virt.libvirt.driver [None req-a232f7ac-4637-4ed6-9b84-1d0e46e5755d b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Attempting to attach volume 23d2de7d-d18d-40c1-8f76-18391958864a with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 31 07:41:46 compute-2 nova_compute[226829]: 2026-01-31 07:41:46.628 226833 DEBUG nova.virt.libvirt.guest [None req-a232f7ac-4637-4ed6-9b84-1d0e46e5755d b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 07:41:46 compute-2 nova_compute[226829]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 07:41:46 compute-2 nova_compute[226829]:   <source protocol="rbd" name="volumes/volume-23d2de7d-d18d-40c1-8f76-18391958864a">
Jan 31 07:41:46 compute-2 nova_compute[226829]:     <host name="192.168.122.100" port="6789"/>
Jan 31 07:41:46 compute-2 nova_compute[226829]:     <host name="192.168.122.102" port="6789"/>
Jan 31 07:41:46 compute-2 nova_compute[226829]:     <host name="192.168.122.101" port="6789"/>
Jan 31 07:41:46 compute-2 nova_compute[226829]:   </source>
Jan 31 07:41:46 compute-2 nova_compute[226829]:   <auth username="openstack">
Jan 31 07:41:46 compute-2 nova_compute[226829]:     <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:41:46 compute-2 nova_compute[226829]:   </auth>
Jan 31 07:41:46 compute-2 nova_compute[226829]:   <target dev="vdb" bus="virtio"/>
Jan 31 07:41:46 compute-2 nova_compute[226829]:   <serial>23d2de7d-d18d-40c1-8f76-18391958864a</serial>
Jan 31 07:41:46 compute-2 nova_compute[226829]: </disk>
Jan 31 07:41:46 compute-2 nova_compute[226829]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 31 07:41:46 compute-2 sudo[246355]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/851585430' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:41:47 compute-2 nova_compute[226829]: 2026-01-31 07:41:47.351 226833 DEBUG nova.virt.libvirt.driver [None req-a232f7ac-4637-4ed6-9b84-1d0e46e5755d b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:41:47 compute-2 nova_compute[226829]: 2026-01-31 07:41:47.351 226833 DEBUG nova.virt.libvirt.driver [None req-a232f7ac-4637-4ed6-9b84-1d0e46e5755d b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:41:47 compute-2 nova_compute[226829]: 2026-01-31 07:41:47.352 226833 DEBUG nova.virt.libvirt.driver [None req-a232f7ac-4637-4ed6-9b84-1d0e46e5755d b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:41:47 compute-2 nova_compute[226829]: 2026-01-31 07:41:47.352 226833 DEBUG nova.virt.libvirt.driver [None req-a232f7ac-4637-4ed6-9b84-1d0e46e5755d b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] No VIF found with MAC fa:16:3e:89:2b:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:41:47 compute-2 nova_compute[226829]: 2026-01-31 07:41:47.649 226833 DEBUG nova.virt.libvirt.driver [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 31 07:41:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:48.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:48 compute-2 ceph-mon[77282]: pgmap v1283: 305 pgs: 305 active+clean; 200 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 592 KiB/s rd, 1.0 MiB/s wr, 97 op/s
Jan 31 07:41:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:41:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:41:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:41:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:48.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:41:48 compute-2 nova_compute[226829]: 2026-01-31 07:41:48.487 226833 DEBUG oslo_concurrency.lockutils [None req-a232f7ac-4637-4ed6-9b84-1d0e46e5755d b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "c030025f-5967-4922-a748-2f999d0645b1" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.258s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:41:48 compute-2 nova_compute[226829]: 2026-01-31 07:41:48.993 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:41:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:41:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:41:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:41:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:41:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:41:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:41:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:50.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:41:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:41:50.234 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:41:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:41:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:50.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:41:50 compute-2 ceph-mon[77282]: pgmap v1284: 305 pgs: 305 active+clean; 200 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 695 KiB/s rd, 272 KiB/s wr, 102 op/s
Jan 31 07:41:51 compute-2 nova_compute[226829]: 2026-01-31 07:41:51.018 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:51 compute-2 sudo[246433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:41:51 compute-2 sudo[246433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:51 compute-2 sudo[246433]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:51 compute-2 sudo[246458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:41:51 compute-2 sudo[246458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:51 compute-2 sudo[246458]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:51 compute-2 nova_compute[226829]: 2026-01-31 07:41:51.674 226833 INFO nova.virt.libvirt.driver [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Instance shutdown successfully after 14 seconds.
Jan 31 07:41:51 compute-2 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Jan 31 07:41:51 compute-2 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d0000002d.scope: Consumed 14.175s CPU time.
Jan 31 07:41:51 compute-2 systemd-machined[195142]: Machine qemu-20-instance-0000002d terminated.
Jan 31 07:41:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:52.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:52 compute-2 nova_compute[226829]: 2026-01-31 07:41:52.224 226833 INFO nova.virt.libvirt.driver [-] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Instance destroyed successfully.
Jan 31 07:41:52 compute-2 nova_compute[226829]: 2026-01-31 07:41:52.226 226833 DEBUG nova.objects.instance [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lazy-loading 'numa_topology' on Instance uuid 49c2b2d1-3230-4f75-bc49-86230accc637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:41:52 compute-2 ceph-mon[77282]: pgmap v1285: 305 pgs: 305 active+clean; 202 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 638 KiB/s rd, 46 KiB/s wr, 64 op/s
Jan 31 07:41:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:41:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:52.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:41:53 compute-2 nova_compute[226829]: 2026-01-31 07:41:53.160 226833 INFO nova.virt.libvirt.driver [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Beginning cold snapshot process
Jan 31 07:41:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:41:53 compute-2 nova_compute[226829]: 2026-01-31 07:41:53.808 226833 DEBUG oslo_concurrency.lockutils [None req-fee21920-4fa9-4f99-9b59-4272ddbcce56 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Acquiring lock "c030025f-5967-4922-a748-2f999d0645b1" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:53 compute-2 nova_compute[226829]: 2026-01-31 07:41:53.809 226833 DEBUG oslo_concurrency.lockutils [None req-fee21920-4fa9-4f99-9b59-4272ddbcce56 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "c030025f-5967-4922-a748-2f999d0645b1" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:53 compute-2 nova_compute[226829]: 2026-01-31 07:41:53.996 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:54 compute-2 ceph-mon[77282]: pgmap v1286: 305 pgs: 305 active+clean; 202 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 559 KiB/s rd, 46 KiB/s wr, 57 op/s
Jan 31 07:41:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:54.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:54 compute-2 ovn_controller[133834]: 2026-01-31T07:41:54Z|00113|binding|INFO|Releasing lport b96729ad-cf99-4a3d-b17d-6bdafe5723db from this chassis (sb_readonly=0)
Jan 31 07:41:54 compute-2 nova_compute[226829]: 2026-01-31 07:41:54.219 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:54 compute-2 nova_compute[226829]: 2026-01-31 07:41:54.306 226833 INFO nova.compute.manager [None req-fee21920-4fa9-4f99-9b59-4272ddbcce56 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Detaching volume 23d2de7d-d18d-40c1-8f76-18391958864a
Jan 31 07:41:54 compute-2 nova_compute[226829]: 2026-01-31 07:41:54.360 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:41:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:54.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:41:54 compute-2 nova_compute[226829]: 2026-01-31 07:41:54.997 226833 DEBUG nova.virt.libvirt.imagebackend [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 31 07:41:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/294200435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:55 compute-2 nova_compute[226829]: 2026-01-31 07:41:55.317 226833 INFO nova.virt.block_device [None req-fee21920-4fa9-4f99-9b59-4272ddbcce56 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Attempting to driver detach volume 23d2de7d-d18d-40c1-8f76-18391958864a from mountpoint /dev/vdb
Jan 31 07:41:55 compute-2 nova_compute[226829]: 2026-01-31 07:41:55.327 226833 DEBUG nova.virt.libvirt.driver [None req-fee21920-4fa9-4f99-9b59-4272ddbcce56 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Attempting to detach device vdb from instance c030025f-5967-4922-a748-2f999d0645b1 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 07:41:55 compute-2 nova_compute[226829]: 2026-01-31 07:41:55.328 226833 DEBUG nova.virt.libvirt.guest [None req-fee21920-4fa9-4f99-9b59-4272ddbcce56 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 07:41:55 compute-2 nova_compute[226829]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 07:41:55 compute-2 nova_compute[226829]:   <source protocol="rbd" name="volumes/volume-23d2de7d-d18d-40c1-8f76-18391958864a">
Jan 31 07:41:55 compute-2 nova_compute[226829]:     <host name="192.168.122.100" port="6789"/>
Jan 31 07:41:55 compute-2 nova_compute[226829]:     <host name="192.168.122.102" port="6789"/>
Jan 31 07:41:55 compute-2 nova_compute[226829]:     <host name="192.168.122.101" port="6789"/>
Jan 31 07:41:55 compute-2 nova_compute[226829]:   </source>
Jan 31 07:41:55 compute-2 nova_compute[226829]:   <target dev="vdb" bus="virtio"/>
Jan 31 07:41:55 compute-2 nova_compute[226829]:   <serial>23d2de7d-d18d-40c1-8f76-18391958864a</serial>
Jan 31 07:41:55 compute-2 nova_compute[226829]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 07:41:55 compute-2 nova_compute[226829]: </disk>
Jan 31 07:41:55 compute-2 nova_compute[226829]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 07:41:55 compute-2 nova_compute[226829]: 2026-01-31 07:41:55.405 226833 INFO nova.virt.libvirt.driver [None req-fee21920-4fa9-4f99-9b59-4272ddbcce56 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Successfully detached device vdb from instance c030025f-5967-4922-a748-2f999d0645b1 from the persistent domain config.
Jan 31 07:41:55 compute-2 nova_compute[226829]: 2026-01-31 07:41:55.406 226833 DEBUG nova.virt.libvirt.driver [None req-fee21920-4fa9-4f99-9b59-4272ddbcce56 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance c030025f-5967-4922-a748-2f999d0645b1 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 07:41:55 compute-2 nova_compute[226829]: 2026-01-31 07:41:55.406 226833 DEBUG nova.virt.libvirt.guest [None req-fee21920-4fa9-4f99-9b59-4272ddbcce56 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 07:41:55 compute-2 nova_compute[226829]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 07:41:55 compute-2 nova_compute[226829]:   <source protocol="rbd" name="volumes/volume-23d2de7d-d18d-40c1-8f76-18391958864a">
Jan 31 07:41:55 compute-2 nova_compute[226829]:     <host name="192.168.122.100" port="6789"/>
Jan 31 07:41:55 compute-2 nova_compute[226829]:     <host name="192.168.122.102" port="6789"/>
Jan 31 07:41:55 compute-2 nova_compute[226829]:     <host name="192.168.122.101" port="6789"/>
Jan 31 07:41:55 compute-2 nova_compute[226829]:   </source>
Jan 31 07:41:55 compute-2 nova_compute[226829]:   <target dev="vdb" bus="virtio"/>
Jan 31 07:41:55 compute-2 nova_compute[226829]:   <serial>23d2de7d-d18d-40c1-8f76-18391958864a</serial>
Jan 31 07:41:55 compute-2 nova_compute[226829]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 07:41:55 compute-2 nova_compute[226829]: </disk>
Jan 31 07:41:55 compute-2 nova_compute[226829]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 07:41:55 compute-2 nova_compute[226829]: 2026-01-31 07:41:55.474 226833 DEBUG nova.storage.rbd_utils [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] creating snapshot(c66450f717144b4ba5f01c484f14c28e) on rbd image(49c2b2d1-3230-4f75-bc49-86230accc637_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 07:41:55 compute-2 nova_compute[226829]: 2026-01-31 07:41:55.704 226833 DEBUG nova.virt.libvirt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Received event <DeviceRemovedEvent: 1769845315.7035294, c030025f-5967-4922-a748-2f999d0645b1 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 07:41:55 compute-2 nova_compute[226829]: 2026-01-31 07:41:55.706 226833 DEBUG nova.virt.libvirt.driver [None req-fee21920-4fa9-4f99-9b59-4272ddbcce56 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance c030025f-5967-4922-a748-2f999d0645b1 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 07:41:55 compute-2 nova_compute[226829]: 2026-01-31 07:41:55.708 226833 INFO nova.virt.libvirt.driver [None req-fee21920-4fa9-4f99-9b59-4272ddbcce56 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Successfully detached device vdb from instance c030025f-5967-4922-a748-2f999d0645b1 from the live domain config.
Jan 31 07:41:56 compute-2 nova_compute[226829]: 2026-01-31 07:41:56.003 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:41:56 compute-2 nova_compute[226829]: 2026-01-31 07:41:56.021 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:41:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:56.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:41:56 compute-2 ceph-mon[77282]: pgmap v1287: 305 pgs: 305 active+clean; 202 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 402 KiB/s rd, 35 KiB/s wr, 43 op/s
Jan 31 07:41:56 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/570766913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:41:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:56.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:41:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e198 e198: 3 total, 3 up, 3 in
Jan 31 07:41:56 compute-2 nova_compute[226829]: 2026-01-31 07:41:56.861 226833 DEBUG nova.objects.instance [None req-fee21920-4fa9-4f99-9b59-4272ddbcce56 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lazy-loading 'flavor' on Instance uuid c030025f-5967-4922-a748-2f999d0645b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:41:57 compute-2 nova_compute[226829]: 2026-01-31 07:41:57.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:41:57 compute-2 nova_compute[226829]: 2026-01-31 07:41:57.753 226833 DEBUG nova.storage.rbd_utils [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] cloning vms/49c2b2d1-3230-4f75-bc49-86230accc637_disk@c66450f717144b4ba5f01c484f14c28e to images/0b8514e3-4251-4bd6-a806-b742c737781f clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 07:41:57 compute-2 sudo[246551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:41:57 compute-2 sudo[246551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:57 compute-2 sudo[246551]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:57 compute-2 sudo[246576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:41:57 compute-2 sudo[246576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:57 compute-2 sudo[246576]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:57 compute-2 ceph-mon[77282]: pgmap v1288: 305 pgs: 305 active+clean; 202 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 404 KiB/s rd, 35 KiB/s wr, 46 op/s
Jan 31 07:41:57 compute-2 ceph-mon[77282]: osdmap e198: 3 total, 3 up, 3 in
Jan 31 07:41:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:41:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:41:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:41:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:58.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:41:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:41:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:41:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:58.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:41:58 compute-2 nova_compute[226829]: 2026-01-31 07:41:58.759 226833 DEBUG nova.storage.rbd_utils [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] flattening images/0b8514e3-4251-4bd6-a806-b742c737781f flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 31 07:41:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:41:58 compute-2 nova_compute[226829]: 2026-01-31 07:41:58.998 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:59 compute-2 nova_compute[226829]: 2026-01-31 07:41:59.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:41:59 compute-2 nova_compute[226829]: 2026-01-31 07:41:59.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:41:59 compute-2 nova_compute[226829]: 2026-01-31 07:41:59.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:42:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:00.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:00 compute-2 ceph-mon[77282]: pgmap v1290: 305 pgs: 305 active+clean; 202 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 8.5 KiB/s rd, 24 KiB/s wr, 9 op/s
Jan 31 07:42:00 compute-2 nova_compute[226829]: 2026-01-31 07:42:00.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:42:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:00.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:01 compute-2 nova_compute[226829]: 2026-01-31 07:42:01.023 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:01 compute-2 nova_compute[226829]: 2026-01-31 07:42:01.254 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:01 compute-2 nova_compute[226829]: 2026-01-31 07:42:01.255 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:01 compute-2 nova_compute[226829]: 2026-01-31 07:42:01.255 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:01 compute-2 nova_compute[226829]: 2026-01-31 07:42:01.255 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:42:01 compute-2 nova_compute[226829]: 2026-01-31 07:42:01.256 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:42:01 compute-2 nova_compute[226829]: 2026-01-31 07:42:01.328 226833 DEBUG oslo_concurrency.lockutils [None req-fee21920-4fa9-4f99-9b59-4272ddbcce56 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "c030025f-5967-4922-a748-2f999d0645b1" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 7.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:01 compute-2 nova_compute[226829]: 2026-01-31 07:42:01.507 226833 DEBUG nova.storage.rbd_utils [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] removing snapshot(c66450f717144b4ba5f01c484f14c28e) on rbd image(49c2b2d1-3230-4f75-bc49-86230accc637_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 07:42:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:42:01 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2218541968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:01 compute-2 nova_compute[226829]: 2026-01-31 07:42:01.735 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:42:01 compute-2 nova_compute[226829]: 2026-01-31 07:42:01.996 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:42:01 compute-2 nova_compute[226829]: 2026-01-31 07:42:01.996 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:42:02 compute-2 nova_compute[226829]: 2026-01-31 07:42:02.001 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:42:02 compute-2 nova_compute[226829]: 2026-01-31 07:42:02.001 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:42:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:02.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:02 compute-2 nova_compute[226829]: 2026-01-31 07:42:02.242 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:42:02 compute-2 nova_compute[226829]: 2026-01-31 07:42:02.244 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4406MB free_disk=20.89716339111328GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:42:02 compute-2 nova_compute[226829]: 2026-01-31 07:42:02.244 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:02 compute-2 nova_compute[226829]: 2026-01-31 07:42:02.245 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:02 compute-2 ceph-mon[77282]: pgmap v1291: 305 pgs: 305 active+clean; 209 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 560 KiB/s wr, 33 op/s
Jan 31 07:42:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1991141632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2218541968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e199 e199: 3 total, 3 up, 3 in
Jan 31 07:42:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:42:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:02.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:42:02 compute-2 nova_compute[226829]: 2026-01-31 07:42:02.576 226833 DEBUG nova.storage.rbd_utils [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] creating snapshot(snap) on rbd image(0b8514e3-4251-4bd6-a806-b742c737781f) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 07:42:02 compute-2 nova_compute[226829]: 2026-01-31 07:42:02.678 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance c030025f-5967-4922-a748-2f999d0645b1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:42:02 compute-2 nova_compute[226829]: 2026-01-31 07:42:02.680 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 49c2b2d1-3230-4f75-bc49-86230accc637 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:42:02 compute-2 nova_compute[226829]: 2026-01-31 07:42:02.680 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:42:02 compute-2 nova_compute[226829]: 2026-01-31 07:42:02.681 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:42:02 compute-2 nova_compute[226829]: 2026-01-31 07:42:02.701 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing inventories for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 07:42:02 compute-2 nova_compute[226829]: 2026-01-31 07:42:02.719 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating ProviderTree inventory for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 07:42:02 compute-2 nova_compute[226829]: 2026-01-31 07:42:02.720 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating inventory in ProviderTree for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 07:42:02 compute-2 nova_compute[226829]: 2026-01-31 07:42:02.737 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing aggregate associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 07:42:02 compute-2 nova_compute[226829]: 2026-01-31 07:42:02.769 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing trait associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, traits: COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 07:42:02 compute-2 nova_compute[226829]: 2026-01-31 07:42:02.876 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:42:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:42:03 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1371050203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:03 compute-2 nova_compute[226829]: 2026-01-31 07:42:03.333 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:42:03 compute-2 nova_compute[226829]: 2026-01-31 07:42:03.340 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:42:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e200 e200: 3 total, 3 up, 3 in
Jan 31 07:42:03 compute-2 ceph-mon[77282]: osdmap e199: 3 total, 3 up, 3 in
Jan 31 07:42:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3473502299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:03 compute-2 ceph-mon[77282]: pgmap v1293: 305 pgs: 305 active+clean; 235 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 2.5 MiB/s wr, 57 op/s
Jan 31 07:42:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1371050203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:03 compute-2 nova_compute[226829]: 2026-01-31 07:42:03.657 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:42:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:42:03 compute-2 nova_compute[226829]: 2026-01-31 07:42:03.952 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:42:03 compute-2 nova_compute[226829]: 2026-01-31 07:42:03.953 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:04 compute-2 nova_compute[226829]: 2026-01-31 07:42:04.000 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:04.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:04.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:04 compute-2 ceph-mon[77282]: osdmap e200: 3 total, 3 up, 3 in
Jan 31 07:42:04 compute-2 nova_compute[226829]: 2026-01-31 07:42:04.953 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:42:04 compute-2 nova_compute[226829]: 2026-01-31 07:42:04.954 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:42:04 compute-2 nova_compute[226829]: 2026-01-31 07:42:04.954 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:42:05 compute-2 nova_compute[226829]: 2026-01-31 07:42:05.094 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-49c2b2d1-3230-4f75-bc49-86230accc637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:42:05 compute-2 nova_compute[226829]: 2026-01-31 07:42:05.094 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-49c2b2d1-3230-4f75-bc49-86230accc637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:42:05 compute-2 nova_compute[226829]: 2026-01-31 07:42:05.095 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 07:42:05 compute-2 nova_compute[226829]: 2026-01-31 07:42:05.095 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 49c2b2d1-3230-4f75-bc49-86230accc637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:42:05 compute-2 nova_compute[226829]: 2026-01-31 07:42:05.620 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:42:06 compute-2 nova_compute[226829]: 2026-01-31 07:42:06.025 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:06 compute-2 ceph-mon[77282]: pgmap v1295: 305 pgs: 305 active+clean; 235 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 2.5 MiB/s wr, 53 op/s
Jan 31 07:42:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:06.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:42:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:06.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:42:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:06.849 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:06.850 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:06.851 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:07 compute-2 nova_compute[226829]: 2026-01-31 07:42:07.222 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845312.220473, 49c2b2d1-3230-4f75-bc49-86230accc637 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:42:07 compute-2 nova_compute[226829]: 2026-01-31 07:42:07.223 226833 INFO nova.compute.manager [-] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] VM Stopped (Lifecycle Event)
Jan 31 07:42:07 compute-2 nova_compute[226829]: 2026-01-31 07:42:07.278 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:42:07 compute-2 nova_compute[226829]: 2026-01-31 07:42:07.283 226833 DEBUG nova.compute.manager [None req-18321846-2aa4-427f-8c6b-2cd59538a92a - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:42:07 compute-2 nova_compute[226829]: 2026-01-31 07:42:07.286 226833 DEBUG nova.compute.manager [None req-18321846-2aa4-427f-8c6b-2cd59538a92a - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: shelving_image_uploading, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:42:07 compute-2 nova_compute[226829]: 2026-01-31 07:42:07.496 226833 INFO nova.compute.manager [None req-18321846-2aa4-427f-8c6b-2cd59538a92a - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] During sync_power_state the instance has a pending task (shelving_image_uploading). Skip.
Jan 31 07:42:07 compute-2 nova_compute[226829]: 2026-01-31 07:42:07.497 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-49c2b2d1-3230-4f75-bc49-86230accc637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:42:07 compute-2 nova_compute[226829]: 2026-01-31 07:42:07.497 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 07:42:07 compute-2 nova_compute[226829]: 2026-01-31 07:42:07.497 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:42:07 compute-2 nova_compute[226829]: 2026-01-31 07:42:07.498 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:42:07 compute-2 nova_compute[226829]: 2026-01-31 07:42:07.498 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:42:07 compute-2 ceph-mon[77282]: pgmap v1296: 305 pgs: 305 active+clean; 283 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 105 op/s
Jan 31 07:42:07 compute-2 nova_compute[226829]: 2026-01-31 07:42:07.842 226833 INFO nova.virt.libvirt.driver [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Snapshot image upload complete
Jan 31 07:42:07 compute-2 nova_compute[226829]: 2026-01-31 07:42:07.842 226833 DEBUG nova.compute.manager [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:42:08 compute-2 nova_compute[226829]: 2026-01-31 07:42:08.028 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:42:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:08.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:08 compute-2 podman[246734]: 2026-01-31 07:42:08.264377051 +0000 UTC m=+0.134613736 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:42:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:42:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:08.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:42:08 compute-2 nova_compute[226829]: 2026-01-31 07:42:08.550 226833 INFO nova.compute.manager [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Shelve offloading
Jan 31 07:42:08 compute-2 nova_compute[226829]: 2026-01-31 07:42:08.559 226833 INFO nova.virt.libvirt.driver [-] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Instance destroyed successfully.
Jan 31 07:42:08 compute-2 nova_compute[226829]: 2026-01-31 07:42:08.561 226833 DEBUG nova.compute.manager [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:42:08 compute-2 nova_compute[226829]: 2026-01-31 07:42:08.563 226833 DEBUG oslo_concurrency.lockutils [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Acquiring lock "refresh_cache-49c2b2d1-3230-4f75-bc49-86230accc637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:42:08 compute-2 nova_compute[226829]: 2026-01-31 07:42:08.564 226833 DEBUG oslo_concurrency.lockutils [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Acquired lock "refresh_cache-49c2b2d1-3230-4f75-bc49-86230accc637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:42:08 compute-2 nova_compute[226829]: 2026-01-31 07:42:08.564 226833 DEBUG nova.network.neutron [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:42:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:42:08 compute-2 nova_compute[226829]: 2026-01-31 07:42:08.968 226833 DEBUG nova.network.neutron [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:42:09 compute-2 nova_compute[226829]: 2026-01-31 07:42:09.004 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:09 compute-2 nova_compute[226829]: 2026-01-31 07:42:09.646 226833 DEBUG nova.network.neutron [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:42:09 compute-2 nova_compute[226829]: 2026-01-31 07:42:09.794 226833 DEBUG oslo_concurrency.lockutils [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Releasing lock "refresh_cache-49c2b2d1-3230-4f75-bc49-86230accc637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:42:09 compute-2 nova_compute[226829]: 2026-01-31 07:42:09.808 226833 INFO nova.virt.libvirt.driver [-] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Instance destroyed successfully.
Jan 31 07:42:09 compute-2 nova_compute[226829]: 2026-01-31 07:42:09.809 226833 DEBUG nova.objects.instance [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lazy-loading 'resources' on Instance uuid 49c2b2d1-3230-4f75-bc49-86230accc637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:42:09 compute-2 ceph-mon[77282]: pgmap v1297: 305 pgs: 305 active+clean; 283 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.1 MiB/s wr, 89 op/s
Jan 31 07:42:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:42:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:10.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:42:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:42:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:10.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:42:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3679202319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:11 compute-2 nova_compute[226829]: 2026-01-31 07:42:11.028 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e201 e201: 3 total, 3 up, 3 in
Jan 31 07:42:11 compute-2 sudo[246781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:42:11 compute-2 sudo[246781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:42:11 compute-2 sudo[246781]: pam_unix(sudo:session): session closed for user root
Jan 31 07:42:11 compute-2 sudo[246807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:42:11 compute-2 sudo[246807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:42:11 compute-2 sudo[246807]: pam_unix(sudo:session): session closed for user root
Jan 31 07:42:12 compute-2 ceph-mon[77282]: pgmap v1298: 305 pgs: 305 active+clean; 265 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.0 MiB/s wr, 103 op/s
Jan 31 07:42:12 compute-2 ceph-mon[77282]: osdmap e201: 3 total, 3 up, 3 in
Jan 31 07:42:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:12.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:12.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:12 compute-2 nova_compute[226829]: 2026-01-31 07:42:12.844 226833 INFO nova.virt.libvirt.driver [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Deleting instance files /var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637_del
Jan 31 07:42:12 compute-2 nova_compute[226829]: 2026-01-31 07:42:12.845 226833 INFO nova.virt.libvirt.driver [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Deletion of /var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637_del complete
Jan 31 07:42:13 compute-2 nova_compute[226829]: 2026-01-31 07:42:13.166 226833 INFO nova.scheduler.client.report [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Deleted allocations for instance 49c2b2d1-3230-4f75-bc49-86230accc637
Jan 31 07:42:13 compute-2 nova_compute[226829]: 2026-01-31 07:42:13.451 226833 DEBUG oslo_concurrency.lockutils [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:13 compute-2 nova_compute[226829]: 2026-01-31 07:42:13.452 226833 DEBUG oslo_concurrency.lockutils [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:13 compute-2 nova_compute[226829]: 2026-01-31 07:42:13.547 226833 DEBUG oslo_concurrency.processutils [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:42:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:42:14 compute-2 nova_compute[226829]: 2026-01-31 07:42:14.007 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:42:14 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1072138478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:14 compute-2 nova_compute[226829]: 2026-01-31 07:42:14.055 226833 DEBUG oslo_concurrency.processutils [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:42:14 compute-2 nova_compute[226829]: 2026-01-31 07:42:14.059 226833 DEBUG nova.compute.provider_tree [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:42:14 compute-2 nova_compute[226829]: 2026-01-31 07:42:14.169 226833 DEBUG nova.scheduler.client.report [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:42:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:14.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:14 compute-2 ceph-mon[77282]: pgmap v1300: 305 pgs: 305 active+clean; 246 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 184 KiB/s rd, 3.5 MiB/s wr, 79 op/s
Jan 31 07:42:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1072138478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:14 compute-2 nova_compute[226829]: 2026-01-31 07:42:14.322 226833 DEBUG oslo_concurrency.lockutils [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:14 compute-2 nova_compute[226829]: 2026-01-31 07:42:14.478 226833 DEBUG oslo_concurrency.lockutils [None req-322df7b3-efea-4b6f-85e4-b627a43eacb5 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lock "49c2b2d1-3230-4f75-bc49-86230accc637" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 36.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:42:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:14.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:42:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1190926015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:42:15.634700) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845335634786, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2886, "num_deletes": 517, "total_data_size": 5684888, "memory_usage": 5766592, "flush_reason": "Manual Compaction"}
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845335651780, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 3344978, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28344, "largest_seqno": 31225, "table_properties": {"data_size": 3333983, "index_size": 6529, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 28427, "raw_average_key_size": 21, "raw_value_size": 3309124, "raw_average_value_size": 2456, "num_data_blocks": 282, "num_entries": 1347, "num_filter_entries": 1347, "num_deletions": 517, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845140, "oldest_key_time": 1769845140, "file_creation_time": 1769845335, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 17147 microseconds, and 6237 cpu microseconds.
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:42:15.651848) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 3344978 bytes OK
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:42:15.651870) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:42:15.654459) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:42:15.654497) EVENT_LOG_v1 {"time_micros": 1769845335654486, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:42:15.654518) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 5670937, prev total WAL file size 5670937, number of live WAL files 2.
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:42:15.655474) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(3266KB)], [57(10MB)]
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845335655559, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 13953879, "oldest_snapshot_seqno": -1}
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 5555 keys, 8660282 bytes, temperature: kUnknown
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845335706171, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 8660282, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8623086, "index_size": 22238, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13893, "raw_key_size": 141936, "raw_average_key_size": 25, "raw_value_size": 8522996, "raw_average_value_size": 1534, "num_data_blocks": 895, "num_entries": 5555, "num_filter_entries": 5555, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769845335, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:42:15.706427) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 8660282 bytes
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:42:15.708299) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 275.3 rd, 170.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.1 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(6.8) write-amplify(2.6) OK, records in: 6578, records dropped: 1023 output_compression: NoCompression
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:42:15.708320) EVENT_LOG_v1 {"time_micros": 1769845335708311, "job": 34, "event": "compaction_finished", "compaction_time_micros": 50684, "compaction_time_cpu_micros": 26837, "output_level": 6, "num_output_files": 1, "total_output_size": 8660282, "num_input_records": 6578, "num_output_records": 5555, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845335708735, "job": 34, "event": "table_file_deletion", "file_number": 59}
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845335709722, "job": 34, "event": "table_file_deletion", "file_number": 57}
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:42:15.655311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:42:15.709797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:42:15.709805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:42:15.709809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:42:15.709812) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:42:15 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:42:15.709815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:42:16 compute-2 nova_compute[226829]: 2026-01-31 07:42:16.031 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:16.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:42:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:16.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:42:16 compute-2 ceph-mon[77282]: pgmap v1301: 305 pgs: 305 active+clean; 246 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 169 KiB/s rd, 3.2 MiB/s wr, 72 op/s
Jan 31 07:42:17 compute-2 podman[246858]: 2026-01-31 07:42:17.168015392 +0000 UTC m=+0.050141103 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 07:42:17 compute-2 ceph-mon[77282]: pgmap v1302: 305 pgs: 305 active+clean; 248 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Jan 31 07:42:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:18.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:18 compute-2 nova_compute[226829]: 2026-01-31 07:42:18.476 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:42:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:18.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:42:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:42:19 compute-2 nova_compute[226829]: 2026-01-31 07:42:19.008 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:19 compute-2 ceph-mon[77282]: pgmap v1303: 305 pgs: 305 active+clean; 248 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 31 07:42:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:20.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:20.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:21 compute-2 nova_compute[226829]: 2026-01-31 07:42:21.034 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4050484851' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:42:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3812285109' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:42:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3650789939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:22.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:22 compute-2 ceph-mon[77282]: pgmap v1304: 305 pgs: 305 active+clean; 279 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 3.5 MiB/s wr, 59 op/s
Jan 31 07:42:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:22.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2103826521' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:42:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:42:24 compute-2 nova_compute[226829]: 2026-01-31 07:42:24.010 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:42:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:24.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:42:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:24.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:24 compute-2 ceph-mon[77282]: pgmap v1305: 305 pgs: 305 active+clean; 294 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 3.7 MiB/s wr, 74 op/s
Jan 31 07:42:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3775045493' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:42:25 compute-2 ceph-mon[77282]: pgmap v1306: 305 pgs: 305 active+clean; 294 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 3.1 MiB/s wr, 71 op/s
Jan 31 07:42:26 compute-2 nova_compute[226829]: 2026-01-31 07:42:26.036 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:26 compute-2 nova_compute[226829]: 2026-01-31 07:42:26.167 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:26.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:26.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4197615563' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:42:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1043671265' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:42:27 compute-2 ceph-mon[77282]: pgmap v1307: 305 pgs: 305 active+clean; 356 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 6.4 MiB/s wr, 132 op/s
Jan 31 07:42:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:28.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:28.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:42:29 compute-2 nova_compute[226829]: 2026-01-31 07:42:29.012 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:29 compute-2 ceph-mon[77282]: pgmap v1308: 305 pgs: 305 active+clean; 365 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 5.4 MiB/s wr, 147 op/s
Jan 31 07:42:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2678096094' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:42:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2678096094' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:42:29 compute-2 nova_compute[226829]: 2026-01-31 07:42:29.933 226833 DEBUG oslo_concurrency.lockutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Acquiring lock "91efd25f-cb58-4eb3-8456-0ceb4efa4ede" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:29 compute-2 nova_compute[226829]: 2026-01-31 07:42:29.933 226833 DEBUG oslo_concurrency.lockutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Lock "91efd25f-cb58-4eb3-8456-0ceb4efa4ede" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:29 compute-2 nova_compute[226829]: 2026-01-31 07:42:29.971 226833 DEBUG nova.compute.manager [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:42:30 compute-2 nova_compute[226829]: 2026-01-31 07:42:30.044 226833 DEBUG oslo_concurrency.lockutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:30 compute-2 nova_compute[226829]: 2026-01-31 07:42:30.045 226833 DEBUG oslo_concurrency.lockutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:30 compute-2 nova_compute[226829]: 2026-01-31 07:42:30.051 226833 DEBUG nova.virt.hardware [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:42:30 compute-2 nova_compute[226829]: 2026-01-31 07:42:30.052 226833 INFO nova.compute.claims [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:42:30 compute-2 nova_compute[226829]: 2026-01-31 07:42:30.164 226833 DEBUG oslo_concurrency.processutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:42:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:30.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:30.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:42:30 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1929837302' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:30 compute-2 nova_compute[226829]: 2026-01-31 07:42:30.608 226833 DEBUG oslo_concurrency.processutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:42:30 compute-2 nova_compute[226829]: 2026-01-31 07:42:30.616 226833 DEBUG nova.compute.provider_tree [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:42:30 compute-2 nova_compute[226829]: 2026-01-31 07:42:30.632 226833 DEBUG nova.scheduler.client.report [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:42:30 compute-2 nova_compute[226829]: 2026-01-31 07:42:30.660 226833 DEBUG oslo_concurrency.lockutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:30 compute-2 nova_compute[226829]: 2026-01-31 07:42:30.662 226833 DEBUG nova.compute.manager [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:42:30 compute-2 nova_compute[226829]: 2026-01-31 07:42:30.718 226833 DEBUG nova.compute.manager [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:42:30 compute-2 nova_compute[226829]: 2026-01-31 07:42:30.719 226833 DEBUG nova.network.neutron [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:42:30 compute-2 nova_compute[226829]: 2026-01-31 07:42:30.753 226833 INFO nova.virt.libvirt.driver [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:42:30 compute-2 nova_compute[226829]: 2026-01-31 07:42:30.792 226833 DEBUG nova.compute.manager [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:42:31 compute-2 nova_compute[226829]: 2026-01-31 07:42:31.039 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e202 e202: 3 total, 3 up, 3 in
Jan 31 07:42:31 compute-2 nova_compute[226829]: 2026-01-31 07:42:31.095 226833 DEBUG nova.compute.manager [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:42:31 compute-2 nova_compute[226829]: 2026-01-31 07:42:31.097 226833 DEBUG nova.virt.libvirt.driver [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:42:31 compute-2 nova_compute[226829]: 2026-01-31 07:42:31.097 226833 INFO nova.virt.libvirt.driver [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Creating image(s)
Jan 31 07:42:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1929837302' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:31 compute-2 sudo[246922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:42:31 compute-2 sudo[246922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:42:31 compute-2 sudo[246922]: pam_unix(sudo:session): session closed for user root
Jan 31 07:42:31 compute-2 sudo[246947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:42:31 compute-2 sudo[246947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:42:31 compute-2 sudo[246947]: pam_unix(sudo:session): session closed for user root
Jan 31 07:42:31 compute-2 nova_compute[226829]: 2026-01-31 07:42:31.430 226833 DEBUG nova.storage.rbd_utils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] rbd image 91efd25f-cb58-4eb3-8456-0ceb4efa4ede_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:42:31 compute-2 nova_compute[226829]: 2026-01-31 07:42:31.470 226833 DEBUG nova.storage.rbd_utils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] rbd image 91efd25f-cb58-4eb3-8456-0ceb4efa4ede_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:42:31 compute-2 nova_compute[226829]: 2026-01-31 07:42:31.498 226833 DEBUG nova.storage.rbd_utils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] rbd image 91efd25f-cb58-4eb3-8456-0ceb4efa4ede_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:42:31 compute-2 nova_compute[226829]: 2026-01-31 07:42:31.501 226833 DEBUG oslo_concurrency.processutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:42:31 compute-2 nova_compute[226829]: 2026-01-31 07:42:31.585 226833 DEBUG oslo_concurrency.processutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:42:31 compute-2 nova_compute[226829]: 2026-01-31 07:42:31.586 226833 DEBUG oslo_concurrency.lockutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:31 compute-2 nova_compute[226829]: 2026-01-31 07:42:31.586 226833 DEBUG oslo_concurrency.lockutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:31 compute-2 nova_compute[226829]: 2026-01-31 07:42:31.587 226833 DEBUG oslo_concurrency.lockutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:31 compute-2 nova_compute[226829]: 2026-01-31 07:42:31.609 226833 DEBUG nova.storage.rbd_utils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] rbd image 91efd25f-cb58-4eb3-8456-0ceb4efa4ede_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:42:31 compute-2 nova_compute[226829]: 2026-01-31 07:42:31.612 226833 DEBUG oslo_concurrency.processutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 91efd25f-cb58-4eb3-8456-0ceb4efa4ede_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:42:31 compute-2 nova_compute[226829]: 2026-01-31 07:42:31.783 226833 DEBUG nova.policy [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b8744f8bd69646dc9ff12e21bf869893', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8f1b80664b714f54baa0c16dfadacca0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:42:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:32.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:32 compute-2 ceph-mon[77282]: pgmap v1309: 305 pgs: 305 active+clean; 376 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 5.7 MiB/s wr, 247 op/s
Jan 31 07:42:32 compute-2 ceph-mon[77282]: osdmap e202: 3 total, 3 up, 3 in
Jan 31 07:42:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:42:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:32.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:42:33 compute-2 nova_compute[226829]: 2026-01-31 07:42:33.231 226833 DEBUG oslo_concurrency.processutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 91efd25f-cb58-4eb3-8456-0ceb4efa4ede_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:42:33 compute-2 nova_compute[226829]: 2026-01-31 07:42:33.307 226833 DEBUG nova.storage.rbd_utils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] resizing rbd image 91efd25f-cb58-4eb3-8456-0ceb4efa4ede_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:42:33 compute-2 nova_compute[226829]: 2026-01-31 07:42:33.630 226833 DEBUG nova.network.neutron [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Successfully created port: 4e8fc55d-0d40-4084-86b9-4babe0a2f675 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:42:33 compute-2 nova_compute[226829]: 2026-01-31 07:42:33.765 226833 DEBUG nova.objects.instance [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Lazy-loading 'migration_context' on Instance uuid 91efd25f-cb58-4eb3-8456-0ceb4efa4ede obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:42:33 compute-2 nova_compute[226829]: 2026-01-31 07:42:33.782 226833 DEBUG nova.virt.libvirt.driver [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:42:33 compute-2 nova_compute[226829]: 2026-01-31 07:42:33.783 226833 DEBUG nova.virt.libvirt.driver [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Ensure instance console log exists: /var/lib/nova/instances/91efd25f-cb58-4eb3-8456-0ceb4efa4ede/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:42:33 compute-2 nova_compute[226829]: 2026-01-31 07:42:33.783 226833 DEBUG oslo_concurrency.lockutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:33 compute-2 nova_compute[226829]: 2026-01-31 07:42:33.784 226833 DEBUG oslo_concurrency.lockutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:33 compute-2 nova_compute[226829]: 2026-01-31 07:42:33.784 226833 DEBUG oslo_concurrency.lockutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:42:34 compute-2 nova_compute[226829]: 2026-01-31 07:42:34.014 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:42:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:34.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:42:34 compute-2 ceph-mon[77282]: pgmap v1311: 305 pgs: 305 active+clean; 377 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 9.6 MiB/s rd, 4.7 MiB/s wr, 321 op/s
Jan 31 07:42:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:34.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:35 compute-2 nova_compute[226829]: 2026-01-31 07:42:35.070 226833 DEBUG nova.network.neutron [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Successfully updated port: 4e8fc55d-0d40-4084-86b9-4babe0a2f675 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:42:35 compute-2 nova_compute[226829]: 2026-01-31 07:42:35.099 226833 DEBUG oslo_concurrency.lockutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Acquiring lock "refresh_cache-91efd25f-cb58-4eb3-8456-0ceb4efa4ede" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:42:35 compute-2 nova_compute[226829]: 2026-01-31 07:42:35.100 226833 DEBUG oslo_concurrency.lockutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Acquired lock "refresh_cache-91efd25f-cb58-4eb3-8456-0ceb4efa4ede" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:42:35 compute-2 nova_compute[226829]: 2026-01-31 07:42:35.100 226833 DEBUG nova.network.neutron [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:42:35 compute-2 nova_compute[226829]: 2026-01-31 07:42:35.272 226833 DEBUG nova.compute.manager [req-69b01c41-793b-416c-9101-f9745c03d6bc req-d32d7f55-60a2-425b-a35b-d5183feffefa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Received event network-changed-4e8fc55d-0d40-4084-86b9-4babe0a2f675 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:42:35 compute-2 nova_compute[226829]: 2026-01-31 07:42:35.272 226833 DEBUG nova.compute.manager [req-69b01c41-793b-416c-9101-f9745c03d6bc req-d32d7f55-60a2-425b-a35b-d5183feffefa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Refreshing instance network info cache due to event network-changed-4e8fc55d-0d40-4084-86b9-4babe0a2f675. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:42:35 compute-2 nova_compute[226829]: 2026-01-31 07:42:35.273 226833 DEBUG oslo_concurrency.lockutils [req-69b01c41-793b-416c-9101-f9745c03d6bc req-d32d7f55-60a2-425b-a35b-d5183feffefa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-91efd25f-cb58-4eb3-8456-0ceb4efa4ede" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:42:35 compute-2 ceph-mon[77282]: pgmap v1312: 305 pgs: 305 active+clean; 377 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 9.6 MiB/s rd, 4.7 MiB/s wr, 321 op/s
Jan 31 07:42:35 compute-2 nova_compute[226829]: 2026-01-31 07:42:35.726 226833 DEBUG nova.network.neutron [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:42:36 compute-2 nova_compute[226829]: 2026-01-31 07:42:36.041 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:42:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:36.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:42:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:36.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:37 compute-2 nova_compute[226829]: 2026-01-31 07:42:37.879 226833 DEBUG nova.network.neutron [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Updating instance_info_cache with network_info: [{"id": "4e8fc55d-0d40-4084-86b9-4babe0a2f675", "address": "fa:16:3e:0e:c8:b1", "network": {"id": "f35578fe-26be-4a64-914c-888b4c19b570", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-106164737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f1b80664b714f54baa0c16dfadacca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e8fc55d-0d", "ovs_interfaceid": "4e8fc55d-0d40-4084-86b9-4babe0a2f675", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:42:37 compute-2 nova_compute[226829]: 2026-01-31 07:42:37.979 226833 DEBUG oslo_concurrency.lockutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Releasing lock "refresh_cache-91efd25f-cb58-4eb3-8456-0ceb4efa4ede" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:42:37 compute-2 nova_compute[226829]: 2026-01-31 07:42:37.980 226833 DEBUG nova.compute.manager [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Instance network_info: |[{"id": "4e8fc55d-0d40-4084-86b9-4babe0a2f675", "address": "fa:16:3e:0e:c8:b1", "network": {"id": "f35578fe-26be-4a64-914c-888b4c19b570", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-106164737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f1b80664b714f54baa0c16dfadacca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e8fc55d-0d", "ovs_interfaceid": "4e8fc55d-0d40-4084-86b9-4babe0a2f675", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:42:37 compute-2 nova_compute[226829]: 2026-01-31 07:42:37.980 226833 DEBUG oslo_concurrency.lockutils [req-69b01c41-793b-416c-9101-f9745c03d6bc req-d32d7f55-60a2-425b-a35b-d5183feffefa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-91efd25f-cb58-4eb3-8456-0ceb4efa4ede" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:42:37 compute-2 nova_compute[226829]: 2026-01-31 07:42:37.980 226833 DEBUG nova.network.neutron [req-69b01c41-793b-416c-9101-f9745c03d6bc req-d32d7f55-60a2-425b-a35b-d5183feffefa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Refreshing network info cache for port 4e8fc55d-0d40-4084-86b9-4babe0a2f675 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:42:37 compute-2 nova_compute[226829]: 2026-01-31 07:42:37.983 226833 DEBUG nova.virt.libvirt.driver [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Start _get_guest_xml network_info=[{"id": "4e8fc55d-0d40-4084-86b9-4babe0a2f675", "address": "fa:16:3e:0e:c8:b1", "network": {"id": "f35578fe-26be-4a64-914c-888b4c19b570", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-106164737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f1b80664b714f54baa0c16dfadacca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e8fc55d-0d", "ovs_interfaceid": "4e8fc55d-0d40-4084-86b9-4babe0a2f675", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:42:37 compute-2 nova_compute[226829]: 2026-01-31 07:42:37.987 226833 WARNING nova.virt.libvirt.driver [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:42:37 compute-2 nova_compute[226829]: 2026-01-31 07:42:37.992 226833 DEBUG nova.virt.libvirt.host [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:42:37 compute-2 nova_compute[226829]: 2026-01-31 07:42:37.992 226833 DEBUG nova.virt.libvirt.host [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:42:37 compute-2 nova_compute[226829]: 2026-01-31 07:42:37.998 226833 DEBUG nova.virt.libvirt.host [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:42:37 compute-2 nova_compute[226829]: 2026-01-31 07:42:37.998 226833 DEBUG nova.virt.libvirt.host [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:42:38 compute-2 nova_compute[226829]: 2026-01-31 07:42:37.999 226833 DEBUG nova.virt.libvirt.driver [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:42:38 compute-2 nova_compute[226829]: 2026-01-31 07:42:38.000 226833 DEBUG nova.virt.hardware [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:42:38 compute-2 nova_compute[226829]: 2026-01-31 07:42:38.000 226833 DEBUG nova.virt.hardware [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:42:38 compute-2 nova_compute[226829]: 2026-01-31 07:42:38.000 226833 DEBUG nova.virt.hardware [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:42:38 compute-2 nova_compute[226829]: 2026-01-31 07:42:38.001 226833 DEBUG nova.virt.hardware [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:42:38 compute-2 nova_compute[226829]: 2026-01-31 07:42:38.001 226833 DEBUG nova.virt.hardware [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:42:38 compute-2 nova_compute[226829]: 2026-01-31 07:42:38.001 226833 DEBUG nova.virt.hardware [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:42:38 compute-2 nova_compute[226829]: 2026-01-31 07:42:38.001 226833 DEBUG nova.virt.hardware [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:42:38 compute-2 nova_compute[226829]: 2026-01-31 07:42:38.001 226833 DEBUG nova.virt.hardware [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:42:38 compute-2 nova_compute[226829]: 2026-01-31 07:42:38.002 226833 DEBUG nova.virt.hardware [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:42:38 compute-2 nova_compute[226829]: 2026-01-31 07:42:38.002 226833 DEBUG nova.virt.hardware [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:42:38 compute-2 nova_compute[226829]: 2026-01-31 07:42:38.002 226833 DEBUG nova.virt.hardware [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:42:38 compute-2 nova_compute[226829]: 2026-01-31 07:42:38.005 226833 DEBUG oslo_concurrency.processutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:42:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:42:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:38.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:42:38 compute-2 ceph-mon[77282]: pgmap v1313: 305 pgs: 305 active+clean; 328 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 2.6 MiB/s wr, 387 op/s
Jan 31 07:42:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1630656851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:42:38 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1164054309' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:42:38 compute-2 nova_compute[226829]: 2026-01-31 07:42:38.481 226833 DEBUG oslo_concurrency.processutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:42:38 compute-2 nova_compute[226829]: 2026-01-31 07:42:38.507 226833 DEBUG nova.storage.rbd_utils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] rbd image 91efd25f-cb58-4eb3-8456-0ceb4efa4ede_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:42:38 compute-2 nova_compute[226829]: 2026-01-31 07:42:38.511 226833 DEBUG oslo_concurrency.processutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:42:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:38.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:42:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:42:38 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4228686676' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:42:38 compute-2 nova_compute[226829]: 2026-01-31 07:42:38.917 226833 DEBUG oslo_concurrency.processutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:42:38 compute-2 nova_compute[226829]: 2026-01-31 07:42:38.920 226833 DEBUG nova.virt.libvirt.vif [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:42:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-2026127611',display_name='tempest-ImagesNegativeTestJSON-server-2026127611',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-2026127611',id=50,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8f1b80664b714f54baa0c16dfadacca0',ramdisk_id='',reservation_id='r-ttwcwdow',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-1541003784',owner_user_name='tempest-ImagesNegativeTest
JSON-1541003784-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:42:30Z,user_data=None,user_id='b8744f8bd69646dc9ff12e21bf869893',uuid=91efd25f-cb58-4eb3-8456-0ceb4efa4ede,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4e8fc55d-0d40-4084-86b9-4babe0a2f675", "address": "fa:16:3e:0e:c8:b1", "network": {"id": "f35578fe-26be-4a64-914c-888b4c19b570", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-106164737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f1b80664b714f54baa0c16dfadacca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e8fc55d-0d", "ovs_interfaceid": "4e8fc55d-0d40-4084-86b9-4babe0a2f675", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:42:38 compute-2 nova_compute[226829]: 2026-01-31 07:42:38.921 226833 DEBUG nova.network.os_vif_util [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Converting VIF {"id": "4e8fc55d-0d40-4084-86b9-4babe0a2f675", "address": "fa:16:3e:0e:c8:b1", "network": {"id": "f35578fe-26be-4a64-914c-888b4c19b570", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-106164737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f1b80664b714f54baa0c16dfadacca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e8fc55d-0d", "ovs_interfaceid": "4e8fc55d-0d40-4084-86b9-4babe0a2f675", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:42:38 compute-2 nova_compute[226829]: 2026-01-31 07:42:38.923 226833 DEBUG nova.network.os_vif_util [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:c8:b1,bridge_name='br-int',has_traffic_filtering=True,id=4e8fc55d-0d40-4084-86b9-4babe0a2f675,network=Network(f35578fe-26be-4a64-914c-888b4c19b570),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e8fc55d-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:42:38 compute-2 nova_compute[226829]: 2026-01-31 07:42:38.925 226833 DEBUG nova.objects.instance [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 91efd25f-cb58-4eb3-8456-0ceb4efa4ede obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:42:39 compute-2 nova_compute[226829]: 2026-01-31 07:42:39.011 226833 DEBUG nova.virt.libvirt.driver [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:42:39 compute-2 nova_compute[226829]:   <uuid>91efd25f-cb58-4eb3-8456-0ceb4efa4ede</uuid>
Jan 31 07:42:39 compute-2 nova_compute[226829]:   <name>instance-00000032</name>
Jan 31 07:42:39 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:42:39 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:42:39 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <nova:name>tempest-ImagesNegativeTestJSON-server-2026127611</nova:name>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:42:37</nova:creationTime>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:42:39 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:42:39 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:42:39 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:42:39 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:42:39 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:42:39 compute-2 nova_compute[226829]:         <nova:user uuid="b8744f8bd69646dc9ff12e21bf869893">tempest-ImagesNegativeTestJSON-1541003784-project-member</nova:user>
Jan 31 07:42:39 compute-2 nova_compute[226829]:         <nova:project uuid="8f1b80664b714f54baa0c16dfadacca0">tempest-ImagesNegativeTestJSON-1541003784</nova:project>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 07:42:39 compute-2 nova_compute[226829]:         <nova:port uuid="4e8fc55d-0d40-4084-86b9-4babe0a2f675">
Jan 31 07:42:39 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:42:39 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:42:39 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <system>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <entry name="serial">91efd25f-cb58-4eb3-8456-0ceb4efa4ede</entry>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <entry name="uuid">91efd25f-cb58-4eb3-8456-0ceb4efa4ede</entry>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     </system>
Jan 31 07:42:39 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:42:39 compute-2 nova_compute[226829]:   <os>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:   </os>
Jan 31 07:42:39 compute-2 nova_compute[226829]:   <features>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:   </features>
Jan 31 07:42:39 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:42:39 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:42:39 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/91efd25f-cb58-4eb3-8456-0ceb4efa4ede_disk">
Jan 31 07:42:39 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       </source>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:42:39 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/91efd25f-cb58-4eb3-8456-0ceb4efa4ede_disk.config">
Jan 31 07:42:39 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       </source>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:42:39 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:0e:c8:b1"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <target dev="tap4e8fc55d-0d"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/91efd25f-cb58-4eb3-8456-0ceb4efa4ede/console.log" append="off"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <video>
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     </video>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:42:39 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:42:39 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:42:39 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:42:39 compute-2 nova_compute[226829]: </domain>
Jan 31 07:42:39 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:42:39 compute-2 nova_compute[226829]: 2026-01-31 07:42:39.012 226833 DEBUG nova.compute.manager [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Preparing to wait for external event network-vif-plugged-4e8fc55d-0d40-4084-86b9-4babe0a2f675 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:42:39 compute-2 nova_compute[226829]: 2026-01-31 07:42:39.013 226833 DEBUG oslo_concurrency.lockutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Acquiring lock "91efd25f-cb58-4eb3-8456-0ceb4efa4ede-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:39 compute-2 nova_compute[226829]: 2026-01-31 07:42:39.013 226833 DEBUG oslo_concurrency.lockutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Lock "91efd25f-cb58-4eb3-8456-0ceb4efa4ede-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:39 compute-2 nova_compute[226829]: 2026-01-31 07:42:39.013 226833 DEBUG oslo_concurrency.lockutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Lock "91efd25f-cb58-4eb3-8456-0ceb4efa4ede-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:39 compute-2 nova_compute[226829]: 2026-01-31 07:42:39.014 226833 DEBUG nova.virt.libvirt.vif [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:42:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-2026127611',display_name='tempest-ImagesNegativeTestJSON-server-2026127611',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-2026127611',id=50,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8f1b80664b714f54baa0c16dfadacca0',ramdisk_id='',reservation_id='r-ttwcwdow',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-1541003784',owner_user_name='tempest-ImagesNegativeTestJSON-1541003784-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:42:30Z,user_data=None,user_id='b8744f8bd69646dc9ff12e21bf869893',uuid=91efd25f-cb58-4eb3-8456-0ceb4efa4ede,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4e8fc55d-0d40-4084-86b9-4babe0a2f675", "address": "fa:16:3e:0e:c8:b1", "network": {"id": "f35578fe-26be-4a64-914c-888b4c19b570", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-106164737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f1b80664b714f54baa0c16dfadacca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e8fc55d-0d", "ovs_interfaceid": "4e8fc55d-0d40-4084-86b9-4babe0a2f675", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:42:39 compute-2 nova_compute[226829]: 2026-01-31 07:42:39.015 226833 DEBUG nova.network.os_vif_util [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Converting VIF {"id": "4e8fc55d-0d40-4084-86b9-4babe0a2f675", "address": "fa:16:3e:0e:c8:b1", "network": {"id": "f35578fe-26be-4a64-914c-888b4c19b570", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-106164737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f1b80664b714f54baa0c16dfadacca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e8fc55d-0d", "ovs_interfaceid": "4e8fc55d-0d40-4084-86b9-4babe0a2f675", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:42:39 compute-2 nova_compute[226829]: 2026-01-31 07:42:39.015 226833 DEBUG nova.network.os_vif_util [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:c8:b1,bridge_name='br-int',has_traffic_filtering=True,id=4e8fc55d-0d40-4084-86b9-4babe0a2f675,network=Network(f35578fe-26be-4a64-914c-888b4c19b570),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e8fc55d-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:42:39 compute-2 nova_compute[226829]: 2026-01-31 07:42:39.016 226833 DEBUG os_vif [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:c8:b1,bridge_name='br-int',has_traffic_filtering=True,id=4e8fc55d-0d40-4084-86b9-4babe0a2f675,network=Network(f35578fe-26be-4a64-914c-888b4c19b570),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e8fc55d-0d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:42:39 compute-2 nova_compute[226829]: 2026-01-31 07:42:39.017 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:39 compute-2 nova_compute[226829]: 2026-01-31 07:42:39.019 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:42:39 compute-2 nova_compute[226829]: 2026-01-31 07:42:39.019 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:42:39 compute-2 nova_compute[226829]: 2026-01-31 07:42:39.019 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:39 compute-2 nova_compute[226829]: 2026-01-31 07:42:39.025 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:39 compute-2 nova_compute[226829]: 2026-01-31 07:42:39.025 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4e8fc55d-0d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:42:39 compute-2 nova_compute[226829]: 2026-01-31 07:42:39.026 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4e8fc55d-0d, col_values=(('external_ids', {'iface-id': '4e8fc55d-0d40-4084-86b9-4babe0a2f675', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0e:c8:b1', 'vm-uuid': '91efd25f-cb58-4eb3-8456-0ceb4efa4ede'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:42:39 compute-2 nova_compute[226829]: 2026-01-31 07:42:39.028 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:42:39 compute-2 NetworkManager[48999]: <info>  [1769845359.0287] manager: (tap4e8fc55d-0d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Jan 31 07:42:39 compute-2 nova_compute[226829]: 2026-01-31 07:42:39.033 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:39 compute-2 nova_compute[226829]: 2026-01-31 07:42:39.034 226833 INFO os_vif [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:c8:b1,bridge_name='br-int',has_traffic_filtering=True,id=4e8fc55d-0d40-4084-86b9-4babe0a2f675,network=Network(f35578fe-26be-4a64-914c-888b4c19b570),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e8fc55d-0d')
Jan 31 07:42:39 compute-2 podman[247191]: 2026-01-31 07:42:39.245870359 +0000 UTC m=+0.111651881 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Jan 31 07:42:39 compute-2 nova_compute[226829]: 2026-01-31 07:42:39.262 226833 DEBUG nova.virt.libvirt.driver [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:42:39 compute-2 nova_compute[226829]: 2026-01-31 07:42:39.263 226833 DEBUG nova.virt.libvirt.driver [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:42:39 compute-2 nova_compute[226829]: 2026-01-31 07:42:39.263 226833 DEBUG nova.virt.libvirt.driver [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] No VIF found with MAC fa:16:3e:0e:c8:b1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:42:39 compute-2 nova_compute[226829]: 2026-01-31 07:42:39.264 226833 INFO nova.virt.libvirt.driver [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Using config drive
Jan 31 07:42:39 compute-2 nova_compute[226829]: 2026-01-31 07:42:39.362 226833 DEBUG nova.storage.rbd_utils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] rbd image 91efd25f-cb58-4eb3-8456-0ceb4efa4ede_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:42:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1164054309' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:42:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4228686676' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:42:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:40.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:42:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:40.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:42:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e203 e203: 3 total, 3 up, 3 in
Jan 31 07:42:41 compute-2 ceph-mon[77282]: pgmap v1314: 305 pgs: 305 active+clean; 295 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 2.5 MiB/s wr, 324 op/s
Jan 31 07:42:41 compute-2 ceph-mon[77282]: osdmap e203: 3 total, 3 up, 3 in
Jan 31 07:42:41 compute-2 nova_compute[226829]: 2026-01-31 07:42:41.694 226833 INFO nova.virt.libvirt.driver [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Creating config drive at /var/lib/nova/instances/91efd25f-cb58-4eb3-8456-0ceb4efa4ede/disk.config
Jan 31 07:42:41 compute-2 nova_compute[226829]: 2026-01-31 07:42:41.703 226833 DEBUG oslo_concurrency.processutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/91efd25f-cb58-4eb3-8456-0ceb4efa4ede/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbf8gvmth execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:42:41 compute-2 nova_compute[226829]: 2026-01-31 07:42:41.731 226833 DEBUG nova.network.neutron [req-69b01c41-793b-416c-9101-f9745c03d6bc req-d32d7f55-60a2-425b-a35b-d5183feffefa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Updated VIF entry in instance network info cache for port 4e8fc55d-0d40-4084-86b9-4babe0a2f675. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:42:41 compute-2 nova_compute[226829]: 2026-01-31 07:42:41.732 226833 DEBUG nova.network.neutron [req-69b01c41-793b-416c-9101-f9745c03d6bc req-d32d7f55-60a2-425b-a35b-d5183feffefa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Updating instance_info_cache with network_info: [{"id": "4e8fc55d-0d40-4084-86b9-4babe0a2f675", "address": "fa:16:3e:0e:c8:b1", "network": {"id": "f35578fe-26be-4a64-914c-888b4c19b570", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-106164737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f1b80664b714f54baa0c16dfadacca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e8fc55d-0d", "ovs_interfaceid": "4e8fc55d-0d40-4084-86b9-4babe0a2f675", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:42:41 compute-2 nova_compute[226829]: 2026-01-31 07:42:41.832 226833 DEBUG oslo_concurrency.lockutils [req-69b01c41-793b-416c-9101-f9745c03d6bc req-d32d7f55-60a2-425b-a35b-d5183feffefa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-91efd25f-cb58-4eb3-8456-0ceb4efa4ede" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:42:41 compute-2 nova_compute[226829]: 2026-01-31 07:42:41.840 226833 DEBUG oslo_concurrency.processutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/91efd25f-cb58-4eb3-8456-0ceb4efa4ede/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbf8gvmth" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:42:41 compute-2 nova_compute[226829]: 2026-01-31 07:42:41.894 226833 DEBUG nova.storage.rbd_utils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] rbd image 91efd25f-cb58-4eb3-8456-0ceb4efa4ede_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:42:41 compute-2 nova_compute[226829]: 2026-01-31 07:42:41.901 226833 DEBUG oslo_concurrency.processutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/91efd25f-cb58-4eb3-8456-0ceb4efa4ede/disk.config 91efd25f-cb58-4eb3-8456-0ceb4efa4ede_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:42:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:42.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:42 compute-2 ceph-mon[77282]: pgmap v1315: 305 pgs: 305 active+clean; 303 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.0 MiB/s wr, 229 op/s
Jan 31 07:42:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:42.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:43 compute-2 ceph-mon[77282]: pgmap v1317: 305 pgs: 305 active+clean; 307 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.5 MiB/s wr, 202 op/s
Jan 31 07:42:43 compute-2 nova_compute[226829]: 2026-01-31 07:42:43.699 226833 DEBUG oslo_concurrency.processutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/91efd25f-cb58-4eb3-8456-0ceb4efa4ede/disk.config 91efd25f-cb58-4eb3-8456-0ceb4efa4ede_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.798s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:42:43 compute-2 nova_compute[226829]: 2026-01-31 07:42:43.700 226833 INFO nova.virt.libvirt.driver [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Deleting local config drive /var/lib/nova/instances/91efd25f-cb58-4eb3-8456-0ceb4efa4ede/disk.config because it was imported into RBD.
Jan 31 07:42:43 compute-2 kernel: tap4e8fc55d-0d: entered promiscuous mode
Jan 31 07:42:43 compute-2 NetworkManager[48999]: <info>  [1769845363.7620] manager: (tap4e8fc55d-0d): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Jan 31 07:42:43 compute-2 ovn_controller[133834]: 2026-01-31T07:42:43Z|00114|binding|INFO|Claiming lport 4e8fc55d-0d40-4084-86b9-4babe0a2f675 for this chassis.
Jan 31 07:42:43 compute-2 ovn_controller[133834]: 2026-01-31T07:42:43Z|00115|binding|INFO|4e8fc55d-0d40-4084-86b9-4babe0a2f675: Claiming fa:16:3e:0e:c8:b1 10.100.0.7
Jan 31 07:42:43 compute-2 nova_compute[226829]: 2026-01-31 07:42:43.762 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:43 compute-2 ovn_controller[133834]: 2026-01-31T07:42:43Z|00116|binding|INFO|Setting lport 4e8fc55d-0d40-4084-86b9-4babe0a2f675 ovn-installed in OVS
Jan 31 07:42:43 compute-2 nova_compute[226829]: 2026-01-31 07:42:43.774 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:43 compute-2 nova_compute[226829]: 2026-01-31 07:42:43.776 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:43 compute-2 systemd-udevd[247291]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:42:43 compute-2 systemd-machined[195142]: New machine qemu-21-instance-00000032.
Jan 31 07:42:43 compute-2 NetworkManager[48999]: <info>  [1769845363.8051] device (tap4e8fc55d-0d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:42:43 compute-2 NetworkManager[48999]: <info>  [1769845363.8057] device (tap4e8fc55d-0d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:42:43 compute-2 systemd[1]: Started Virtual Machine qemu-21-instance-00000032.
Jan 31 07:42:43 compute-2 ovn_controller[133834]: 2026-01-31T07:42:43Z|00117|binding|INFO|Setting lport 4e8fc55d-0d40-4084-86b9-4babe0a2f675 up in Southbound
Jan 31 07:42:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:43.810 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:c8:b1 10.100.0.7'], port_security=['fa:16:3e:0e:c8:b1 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '91efd25f-cb58-4eb3-8456-0ceb4efa4ede', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f35578fe-26be-4a64-914c-888b4c19b570', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8f1b80664b714f54baa0c16dfadacca0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '50392d96-5467-43de-8e8a-8de829be13b2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a1c1b0ac-6ffe-4744-9389-3d88aa7d5f11, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=4e8fc55d-0d40-4084-86b9-4babe0a2f675) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:42:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:43.814 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 4e8fc55d-0d40-4084-86b9-4babe0a2f675 in datapath f35578fe-26be-4a64-914c-888b4c19b570 bound to our chassis
Jan 31 07:42:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:43.817 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f35578fe-26be-4a64-914c-888b4c19b570
Jan 31 07:42:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:43.828 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[de5eb8f3-3754-4276-9d3d-81f795d4d392]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:43.830 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf35578fe-21 in ovnmeta-f35578fe-26be-4a64-914c-888b4c19b570 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:42:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:43.833 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf35578fe-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:42:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:43.833 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[659c10a7-bedb-4be4-b512-3bfddaf68b04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:43.834 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[948292de-adf4-4608-97c8-fe65dc0e699a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:42:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:43.847 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[0dfb2e5d-ca0d-4b9b-9afe-0d6817ac215c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:43.857 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[13fa69e2-d7d2-485b-9f89-4161a69323ea]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:43.881 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[5c505d17-b953-44fd-b3ff-f9a68a77913d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:43.887 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0403494c-4340-4cd5-bd7e-b71ef59361e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:43 compute-2 NetworkManager[48999]: <info>  [1769845363.8886] manager: (tapf35578fe-20): new Veth device (/org/freedesktop/NetworkManager/Devices/71)
Jan 31 07:42:43 compute-2 systemd-udevd[247294]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:42:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:43.912 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[9e0cc465-fda1-4b5a-84be-edfe39ff55cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:43.914 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[7fa94d08-c0b3-4a6f-921c-91c78f727b48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:43 compute-2 NetworkManager[48999]: <info>  [1769845363.9307] device (tapf35578fe-20): carrier: link connected
Jan 31 07:42:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:43.935 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[1d5643b8-e790-4bd3-85db-076ed306da89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:43.951 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c6d89e67-e1dd-4d24-a948-fb1ddcbe0879]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf35578fe-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:27:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561640, 'reachable_time': 24769, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247325, 'error': None, 'target': 'ovnmeta-f35578fe-26be-4a64-914c-888b4c19b570', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:43.966 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ed291e-0907-4a99-a9f6-db48e19c901e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedb:27c0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561640, 'tstamp': 561640}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247326, 'error': None, 'target': 'ovnmeta-f35578fe-26be-4a64-914c-888b4c19b570', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:43.979 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6ebe5b-965a-46cb-b6bf-63f06f6e8e73]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf35578fe-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:27:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561640, 'reachable_time': 24769, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 247327, 'error': None, 'target': 'ovnmeta-f35578fe-26be-4a64-914c-888b4c19b570', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:42:43 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3227965996' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:42:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:42:43 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3227965996' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:44.013 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[971c2eb0-7019-45ee-9698-69eb4260e57c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.018 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.029 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:44.063 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4bf37499-3b49-4c24-b330-ca2adb682a68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:44.065 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf35578fe-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:44.065 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:44.066 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf35578fe-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:42:44 compute-2 kernel: tapf35578fe-20: entered promiscuous mode
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.067 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:44 compute-2 NetworkManager[48999]: <info>  [1769845364.0682] manager: (tapf35578fe-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:44.072 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf35578fe-20, col_values=(('external_ids', {'iface-id': '6abc4ff5-0c2c-42c0-8f82-24a3399a0af2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.073 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.074 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:44 compute-2 ovn_controller[133834]: 2026-01-31T07:42:44Z|00118|binding|INFO|Releasing lport 6abc4ff5-0c2c-42c0-8f82-24a3399a0af2 from this chassis (sb_readonly=0)
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.076 226833 DEBUG oslo_concurrency.lockutils [None req-98cb9ef4-039d-4957-af17-deb4ae9c5282 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Acquiring lock "c030025f-5967-4922-a748-2f999d0645b1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.077 226833 DEBUG oslo_concurrency.lockutils [None req-98cb9ef4-039d-4957-af17-deb4ae9c5282 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "c030025f-5967-4922-a748-2f999d0645b1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.077 226833 DEBUG oslo_concurrency.lockutils [None req-98cb9ef4-039d-4957-af17-deb4ae9c5282 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Acquiring lock "c030025f-5967-4922-a748-2f999d0645b1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.078 226833 DEBUG oslo_concurrency.lockutils [None req-98cb9ef4-039d-4957-af17-deb4ae9c5282 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "c030025f-5967-4922-a748-2f999d0645b1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.078 226833 DEBUG oslo_concurrency.lockutils [None req-98cb9ef4-039d-4957-af17-deb4ae9c5282 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "c030025f-5967-4922-a748-2f999d0645b1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.080 226833 INFO nova.compute.manager [None req-98cb9ef4-039d-4957-af17-deb4ae9c5282 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Terminating instance
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.081 226833 DEBUG nova.compute.manager [None req-98cb9ef4-039d-4957-af17-deb4ae9c5282 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.082 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:44.086 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f35578fe-26be-4a64-914c-888b4c19b570.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f35578fe-26be-4a64-914c-888b4c19b570.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:44.088 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c0877f7c-09ea-4b4f-bc9c-545dc274d7c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:44.091 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]: global
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-f35578fe-26be-4a64-914c-888b4c19b570
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/f35578fe-26be-4a64-914c-888b4c19b570.pid.haproxy
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID f35578fe-26be-4a64-914c-888b4c19b570
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:44.092 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f35578fe-26be-4a64-914c-888b4c19b570', 'env', 'PROCESS_TAG=haproxy-f35578fe-26be-4a64-914c-888b4c19b570', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f35578fe-26be-4a64-914c-888b4c19b570.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:42:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:44.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:44 compute-2 kernel: tap462010b9-29 (unregistering): left promiscuous mode
Jan 31 07:42:44 compute-2 NetworkManager[48999]: <info>  [1769845364.4173] device (tap462010b9-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.423 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:44 compute-2 ovn_controller[133834]: 2026-01-31T07:42:44Z|00119|binding|INFO|Releasing lport 462010b9-29e6-472e-ba82-5e6c54eec345 from this chassis (sb_readonly=0)
Jan 31 07:42:44 compute-2 ovn_controller[133834]: 2026-01-31T07:42:44Z|00120|binding|INFO|Setting lport 462010b9-29e6-472e-ba82-5e6c54eec345 down in Southbound
Jan 31 07:42:44 compute-2 ovn_controller[133834]: 2026-01-31T07:42:44Z|00121|binding|INFO|Removing iface tap462010b9-29 ovn-installed in OVS
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.427 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.433 226833 DEBUG nova.compute.manager [req-568e1e38-216b-45e9-bbee-f38846128489 req-ecf81a65-4786-4ca4-bd52-adb9f68a6e39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Received event network-vif-plugged-4e8fc55d-0d40-4084-86b9-4babe0a2f675 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.433 226833 DEBUG oslo_concurrency.lockutils [req-568e1e38-216b-45e9-bbee-f38846128489 req-ecf81a65-4786-4ca4-bd52-adb9f68a6e39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "91efd25f-cb58-4eb3-8456-0ceb4efa4ede-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.434 226833 DEBUG oslo_concurrency.lockutils [req-568e1e38-216b-45e9-bbee-f38846128489 req-ecf81a65-4786-4ca4-bd52-adb9f68a6e39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "91efd25f-cb58-4eb3-8456-0ceb4efa4ede-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.434 226833 DEBUG oslo_concurrency.lockutils [req-568e1e38-216b-45e9-bbee-f38846128489 req-ecf81a65-4786-4ca4-bd52-adb9f68a6e39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "91efd25f-cb58-4eb3-8456-0ceb4efa4ede-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.434 226833 DEBUG nova.compute.manager [req-568e1e38-216b-45e9-bbee-f38846128489 req-ecf81a65-4786-4ca4-bd52-adb9f68a6e39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Processing event network-vif-plugged-4e8fc55d-0d40-4084-86b9-4babe0a2f675 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.438 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:44.437 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:2b:d4 10.100.0.8'], port_security=['fa:16:3e:89:2b:d4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c030025f-5967-4922-a748-2f999d0645b1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bff39063-463a-42de-b52a-d9ff7905f368', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '016f45da455049d7aad578f0a534a0f2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '41805443-972e-49c5-b4cc-3b8c00ce4178', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.174'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64424fb0-e7eb-43b1-ba76-473962c1b131, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=462010b9-29e6-472e-ba82-5e6c54eec345) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:42:44 compute-2 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d0000002e.scope: Deactivated successfully.
Jan 31 07:42:44 compute-2 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d0000002e.scope: Consumed 17.404s CPU time.
Jan 31 07:42:44 compute-2 systemd-machined[195142]: Machine qemu-19-instance-0000002e terminated.
Jan 31 07:42:44 compute-2 NetworkManager[48999]: <info>  [1769845364.4991] manager: (tap462010b9-29): new Tun device (/org/freedesktop/NetworkManager/Devices/73)
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.500 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.503 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.517 226833 INFO nova.virt.libvirt.driver [-] [instance: c030025f-5967-4922-a748-2f999d0645b1] Instance destroyed successfully.
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.518 226833 DEBUG nova.objects.instance [None req-98cb9ef4-039d-4957-af17-deb4ae9c5282 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lazy-loading 'resources' on Instance uuid c030025f-5967-4922-a748-2f999d0645b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:42:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:44.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.552 226833 DEBUG nova.virt.libvirt.vif [None req-98cb9ef4-039d-4957-af17-deb4ae9c5282 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:41:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-276353670',display_name='tempest-VolumesAdminNegativeTest-server-276353670',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-276353670',id=46,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG45txBttIRVAaDalRus4g+sGRVlJS+MTqAQB06P99OWE1GwBlFSi+Nkbfr4Wi+1f75znP6mwmSrCTMSKLDST0/MC4PLC/3COJBj4gWDT+jq3RFY7K+3lw1lw7j4siFQEA==',key_name='tempest-keypair-294079093',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:41:18Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='016f45da455049d7aad578f0a534a0f2',ramdisk_id='',reservation_id='r-468sha1f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesAdminNegativeTest-1814342344',owner_user_name='tempest-VolumesAdminNegativeTest-1814342344-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:41:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b873da8845e6461088fcff99c5c140b1',uuid=c030025f-5967-4922-a748-2f999d0645b1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "462010b9-29e6-472e-ba82-5e6c54eec345", "address": "fa:16:3e:89:2b:d4", "network": {"id": "bff39063-463a-42de-b52a-d9ff7905f368", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-2000138722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "016f45da455049d7aad578f0a534a0f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap462010b9-29", "ovs_interfaceid": "462010b9-29e6-472e-ba82-5e6c54eec345", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.553 226833 DEBUG nova.network.os_vif_util [None req-98cb9ef4-039d-4957-af17-deb4ae9c5282 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Converting VIF {"id": "462010b9-29e6-472e-ba82-5e6c54eec345", "address": "fa:16:3e:89:2b:d4", "network": {"id": "bff39063-463a-42de-b52a-d9ff7905f368", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-2000138722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "016f45da455049d7aad578f0a534a0f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap462010b9-29", "ovs_interfaceid": "462010b9-29e6-472e-ba82-5e6c54eec345", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.554 226833 DEBUG nova.network.os_vif_util [None req-98cb9ef4-039d-4957-af17-deb4ae9c5282 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:89:2b:d4,bridge_name='br-int',has_traffic_filtering=True,id=462010b9-29e6-472e-ba82-5e6c54eec345,network=Network(bff39063-463a-42de-b52a-d9ff7905f368),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap462010b9-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.554 226833 DEBUG os_vif [None req-98cb9ef4-039d-4957-af17-deb4ae9c5282 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:2b:d4,bridge_name='br-int',has_traffic_filtering=True,id=462010b9-29e6-472e-ba82-5e6c54eec345,network=Network(bff39063-463a-42de-b52a-d9ff7905f368),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap462010b9-29') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.557 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.558 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap462010b9-29, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.559 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.561 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845364.560741, 91efd25f-cb58-4eb3-8456-0ceb4efa4ede => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.561 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] VM Started (Lifecycle Event)
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.562 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.564 226833 DEBUG nova.compute.manager [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.565 226833 INFO os_vif [None req-98cb9ef4-039d-4957-af17-deb4ae9c5282 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:2b:d4,bridge_name='br-int',has_traffic_filtering=True,id=462010b9-29e6-472e-ba82-5e6c54eec345,network=Network(bff39063-463a-42de-b52a-d9ff7905f368),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap462010b9-29')
Jan 31 07:42:44 compute-2 podman[247398]: 2026-01-31 07:42:44.485569397 +0000 UTC m=+0.023974783 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.590 226833 DEBUG nova.virt.libvirt.driver [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.594 226833 INFO nova.virt.libvirt.driver [-] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Instance spawned successfully.
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.594 226833 DEBUG nova.virt.libvirt.driver [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.617 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.622 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.632 226833 DEBUG nova.virt.libvirt.driver [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.633 226833 DEBUG nova.virt.libvirt.driver [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.633 226833 DEBUG nova.virt.libvirt.driver [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.633 226833 DEBUG nova.virt.libvirt.driver [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.634 226833 DEBUG nova.virt.libvirt.driver [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.634 226833 DEBUG nova.virt.libvirt.driver [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.667 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.667 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845364.5608084, 91efd25f-cb58-4eb3-8456-0ceb4efa4ede => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.667 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] VM Paused (Lifecycle Event)
Jan 31 07:42:44 compute-2 podman[247398]: 2026-01-31 07:42:44.683822586 +0000 UTC m=+0.222227972 container create b051dd65f9c5a76d7849d6d494cb0591824113acd0191c182cad6b21f3b208d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f35578fe-26be-4a64-914c-888b4c19b570, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.696 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.700 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845364.567539, 91efd25f-cb58-4eb3-8456-0ceb4efa4ede => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.700 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] VM Resumed (Lifecycle Event)
Jan 31 07:42:44 compute-2 systemd[1]: Started libpod-conmon-b051dd65f9c5a76d7849d6d494cb0591824113acd0191c182cad6b21f3b208d7.scope.
Jan 31 07:42:44 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:42:44 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2492bce5b27076971868b47fe31dd596e55c1e18ec19d64363e48d6060455ff3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.829 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:42:44 compute-2 podman[247398]: 2026-01-31 07:42:44.834729217 +0000 UTC m=+0.373134653 container init b051dd65f9c5a76d7849d6d494cb0591824113acd0191c182cad6b21f3b208d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f35578fe-26be-4a64-914c-888b4c19b570, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.838 226833 DEBUG nova.compute.manager [req-40ab4b44-78df-4018-8cfd-74f8a9206883 req-559ddd76-63a3-4de4-bf85-c67becf0275a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Received event network-vif-unplugged-462010b9-29e6-472e-ba82-5e6c54eec345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.839 226833 DEBUG oslo_concurrency.lockutils [req-40ab4b44-78df-4018-8cfd-74f8a9206883 req-559ddd76-63a3-4de4-bf85-c67becf0275a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c030025f-5967-4922-a748-2f999d0645b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.840 226833 DEBUG oslo_concurrency.lockutils [req-40ab4b44-78df-4018-8cfd-74f8a9206883 req-559ddd76-63a3-4de4-bf85-c67becf0275a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c030025f-5967-4922-a748-2f999d0645b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.840 226833 DEBUG oslo_concurrency.lockutils [req-40ab4b44-78df-4018-8cfd-74f8a9206883 req-559ddd76-63a3-4de4-bf85-c67becf0275a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c030025f-5967-4922-a748-2f999d0645b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.841 226833 DEBUG nova.compute.manager [req-40ab4b44-78df-4018-8cfd-74f8a9206883 req-559ddd76-63a3-4de4-bf85-c67becf0275a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] No waiting events found dispatching network-vif-unplugged-462010b9-29e6-472e-ba82-5e6c54eec345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.841 226833 DEBUG nova.compute.manager [req-40ab4b44-78df-4018-8cfd-74f8a9206883 req-559ddd76-63a3-4de4-bf85-c67becf0275a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Received event network-vif-unplugged-462010b9-29e6-472e-ba82-5e6c54eec345 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:42:44 compute-2 podman[247398]: 2026-01-31 07:42:44.842350991 +0000 UTC m=+0.380756377 container start b051dd65f9c5a76d7849d6d494cb0591824113acd0191c182cad6b21f3b208d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f35578fe-26be-4a64-914c-888b4c19b570, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.843 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:42:44 compute-2 neutron-haproxy-ovnmeta-f35578fe-26be-4a64-914c-888b4c19b570[247449]: [NOTICE]   (247453) : New worker (247455) forked
Jan 31 07:42:44 compute-2 neutron-haproxy-ovnmeta-f35578fe-26be-4a64-914c-888b4c19b570[247449]: [NOTICE]   (247453) : Loading success.
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.932 226833 INFO nova.compute.manager [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Took 13.84 seconds to spawn the instance on the hypervisor.
Jan 31 07:42:44 compute-2 nova_compute[226829]: 2026-01-31 07:42:44.933 226833 DEBUG nova.compute.manager [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:42:45 compute-2 nova_compute[226829]: 2026-01-31 07:42:45.285 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:42:45 compute-2 nova_compute[226829]: 2026-01-31 07:42:45.371 226833 INFO nova.compute.manager [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Took 15.36 seconds to build instance.
Jan 31 07:42:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:45.527 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 462010b9-29e6-472e-ba82-5e6c54eec345 in datapath bff39063-463a-42de-b52a-d9ff7905f368 unbound from our chassis
Jan 31 07:42:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:45.529 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bff39063-463a-42de-b52a-d9ff7905f368, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:42:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:45.530 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f38a380d-100f-4dfd-9484-5e06b89ca619]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:45.530 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368 namespace which is not needed anymore
Jan 31 07:42:45 compute-2 nova_compute[226829]: 2026-01-31 07:42:45.565 226833 DEBUG oslo_concurrency.lockutils [None req-747cf06d-97f2-41e3-9196-29056d2db7b9 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Lock "91efd25f-cb58-4eb3-8456-0ceb4efa4ede" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3227965996' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:42:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3227965996' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:42:45 compute-2 neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368[245729]: [NOTICE]   (245737) : haproxy version is 2.8.14-c23fe91
Jan 31 07:42:45 compute-2 neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368[245729]: [NOTICE]   (245737) : path to executable is /usr/sbin/haproxy
Jan 31 07:42:45 compute-2 neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368[245729]: [WARNING]  (245737) : Exiting Master process...
Jan 31 07:42:45 compute-2 neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368[245729]: [WARNING]  (245737) : Exiting Master process...
Jan 31 07:42:45 compute-2 neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368[245729]: [ALERT]    (245737) : Current worker (245739) exited with code 143 (Terminated)
Jan 31 07:42:45 compute-2 neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368[245729]: [WARNING]  (245737) : All workers exited. Exiting... (0)
Jan 31 07:42:45 compute-2 systemd[1]: libpod-2e586d93422f0d064a8988b622c765a375485f7405cc58ee69ad44614ed1fca7.scope: Deactivated successfully.
Jan 31 07:42:45 compute-2 podman[247483]: 2026-01-31 07:42:45.883174602 +0000 UTC m=+0.277843451 container died 2e586d93422f0d064a8988b622c765a375485f7405cc58ee69ad44614ed1fca7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 07:42:46 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2e586d93422f0d064a8988b622c765a375485f7405cc58ee69ad44614ed1fca7-userdata-shm.mount: Deactivated successfully.
Jan 31 07:42:46 compute-2 systemd[1]: var-lib-containers-storage-overlay-04288f51c6bf900c6ef617dd879e6dfb21e962313cc4ba50d0bb3a9c39b84cf9-merged.mount: Deactivated successfully.
Jan 31 07:42:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:46.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:46 compute-2 podman[247483]: 2026-01-31 07:42:46.526563781 +0000 UTC m=+0.921232590 container cleanup 2e586d93422f0d064a8988b622c765a375485f7405cc58ee69ad44614ed1fca7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:42:46 compute-2 systemd[1]: libpod-conmon-2e586d93422f0d064a8988b622c765a375485f7405cc58ee69ad44614ed1fca7.scope: Deactivated successfully.
Jan 31 07:42:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:42:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:46.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:42:46 compute-2 ceph-mon[77282]: pgmap v1318: 305 pgs: 305 active+clean; 307 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.5 MiB/s wr, 202 op/s
Jan 31 07:42:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3784314569' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:42:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3784314569' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:42:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3880658820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:46 compute-2 nova_compute[226829]: 2026-01-31 07:42:46.914 226833 DEBUG nova.compute.manager [req-26bcf112-2a75-48b7-8931-e9ceccafe821 req-fba97e05-d157-4d71-b59b-1af544497881 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Received event network-vif-plugged-4e8fc55d-0d40-4084-86b9-4babe0a2f675 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:42:46 compute-2 nova_compute[226829]: 2026-01-31 07:42:46.915 226833 DEBUG oslo_concurrency.lockutils [req-26bcf112-2a75-48b7-8931-e9ceccafe821 req-fba97e05-d157-4d71-b59b-1af544497881 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "91efd25f-cb58-4eb3-8456-0ceb4efa4ede-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:46 compute-2 nova_compute[226829]: 2026-01-31 07:42:46.915 226833 DEBUG oslo_concurrency.lockutils [req-26bcf112-2a75-48b7-8931-e9ceccafe821 req-fba97e05-d157-4d71-b59b-1af544497881 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "91efd25f-cb58-4eb3-8456-0ceb4efa4ede-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:46 compute-2 nova_compute[226829]: 2026-01-31 07:42:46.916 226833 DEBUG oslo_concurrency.lockutils [req-26bcf112-2a75-48b7-8931-e9ceccafe821 req-fba97e05-d157-4d71-b59b-1af544497881 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "91efd25f-cb58-4eb3-8456-0ceb4efa4ede-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:46 compute-2 nova_compute[226829]: 2026-01-31 07:42:46.916 226833 DEBUG nova.compute.manager [req-26bcf112-2a75-48b7-8931-e9ceccafe821 req-fba97e05-d157-4d71-b59b-1af544497881 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] No waiting events found dispatching network-vif-plugged-4e8fc55d-0d40-4084-86b9-4babe0a2f675 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:42:46 compute-2 nova_compute[226829]: 2026-01-31 07:42:46.916 226833 WARNING nova.compute.manager [req-26bcf112-2a75-48b7-8931-e9ceccafe821 req-fba97e05-d157-4d71-b59b-1af544497881 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Received unexpected event network-vif-plugged-4e8fc55d-0d40-4084-86b9-4babe0a2f675 for instance with vm_state active and task_state deleting.
Jan 31 07:42:46 compute-2 nova_compute[226829]: 2026-01-31 07:42:46.919 226833 DEBUG oslo_concurrency.lockutils [None req-e794d8f3-fa21-4b3a-b21c-146a6f2857c8 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Acquiring lock "91efd25f-cb58-4eb3-8456-0ceb4efa4ede" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:46 compute-2 nova_compute[226829]: 2026-01-31 07:42:46.919 226833 DEBUG oslo_concurrency.lockutils [None req-e794d8f3-fa21-4b3a-b21c-146a6f2857c8 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Lock "91efd25f-cb58-4eb3-8456-0ceb4efa4ede" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:46 compute-2 nova_compute[226829]: 2026-01-31 07:42:46.920 226833 DEBUG oslo_concurrency.lockutils [None req-e794d8f3-fa21-4b3a-b21c-146a6f2857c8 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Acquiring lock "91efd25f-cb58-4eb3-8456-0ceb4efa4ede-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:46 compute-2 nova_compute[226829]: 2026-01-31 07:42:46.920 226833 DEBUG oslo_concurrency.lockutils [None req-e794d8f3-fa21-4b3a-b21c-146a6f2857c8 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Lock "91efd25f-cb58-4eb3-8456-0ceb4efa4ede-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:46 compute-2 nova_compute[226829]: 2026-01-31 07:42:46.921 226833 DEBUG oslo_concurrency.lockutils [None req-e794d8f3-fa21-4b3a-b21c-146a6f2857c8 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Lock "91efd25f-cb58-4eb3-8456-0ceb4efa4ede-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:46 compute-2 nova_compute[226829]: 2026-01-31 07:42:46.923 226833 INFO nova.compute.manager [None req-e794d8f3-fa21-4b3a-b21c-146a6f2857c8 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Terminating instance
Jan 31 07:42:46 compute-2 nova_compute[226829]: 2026-01-31 07:42:46.924 226833 DEBUG nova.compute.manager [None req-e794d8f3-fa21-4b3a-b21c-146a6f2857c8 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:42:46 compute-2 podman[247515]: 2026-01-31 07:42:46.948508919 +0000 UTC m=+0.398841090 container remove 2e586d93422f0d064a8988b622c765a375485f7405cc58ee69ad44614ed1fca7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 07:42:46 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:46.952 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d0fb6808-3106-422c-b611-c55182152043]: (4, ('Sat Jan 31 07:42:45 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368 (2e586d93422f0d064a8988b622c765a375485f7405cc58ee69ad44614ed1fca7)\n2e586d93422f0d064a8988b622c765a375485f7405cc58ee69ad44614ed1fca7\nSat Jan 31 07:42:46 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368 (2e586d93422f0d064a8988b622c765a375485f7405cc58ee69ad44614ed1fca7)\n2e586d93422f0d064a8988b622c765a375485f7405cc58ee69ad44614ed1fca7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:46 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:46.955 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6b849fb1-331d-46c5-bcdf-8921f467b66e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:46 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:46.956 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbff39063-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:42:46 compute-2 nova_compute[226829]: 2026-01-31 07:42:46.961 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:46 compute-2 kernel: tapbff39063-40: left promiscuous mode
Jan 31 07:42:46 compute-2 nova_compute[226829]: 2026-01-31 07:42:46.974 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:46 compute-2 kernel: tap4e8fc55d-0d (unregistering): left promiscuous mode
Jan 31 07:42:46 compute-2 NetworkManager[48999]: <info>  [1769845366.9803] device (tap4e8fc55d-0d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:42:46 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:46.979 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cf7e4fbb-2451-42b4-9491-6b5441a56a8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:46 compute-2 ovn_controller[133834]: 2026-01-31T07:42:46Z|00122|binding|INFO|Releasing lport 4e8fc55d-0d40-4084-86b9-4babe0a2f675 from this chassis (sb_readonly=0)
Jan 31 07:42:46 compute-2 ovn_controller[133834]: 2026-01-31T07:42:46Z|00123|binding|INFO|Setting lport 4e8fc55d-0d40-4084-86b9-4babe0a2f675 down in Southbound
Jan 31 07:42:46 compute-2 nova_compute[226829]: 2026-01-31 07:42:46.990 226833 DEBUG nova.compute.manager [req-bba0cb86-df6c-4d6f-9290-e745ccaae18f req-6635e8f1-bf34-445d-835a-d426f541d93b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Received event network-vif-plugged-462010b9-29e6-472e-ba82-5e6c54eec345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:42:46 compute-2 nova_compute[226829]: 2026-01-31 07:42:46.991 226833 DEBUG oslo_concurrency.lockutils [req-bba0cb86-df6c-4d6f-9290-e745ccaae18f req-6635e8f1-bf34-445d-835a-d426f541d93b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c030025f-5967-4922-a748-2f999d0645b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:46 compute-2 nova_compute[226829]: 2026-01-31 07:42:46.991 226833 DEBUG oslo_concurrency.lockutils [req-bba0cb86-df6c-4d6f-9290-e745ccaae18f req-6635e8f1-bf34-445d-835a-d426f541d93b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c030025f-5967-4922-a748-2f999d0645b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:46 compute-2 nova_compute[226829]: 2026-01-31 07:42:46.991 226833 DEBUG oslo_concurrency.lockutils [req-bba0cb86-df6c-4d6f-9290-e745ccaae18f req-6635e8f1-bf34-445d-835a-d426f541d93b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c030025f-5967-4922-a748-2f999d0645b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:46 compute-2 ovn_controller[133834]: 2026-01-31T07:42:46Z|00124|binding|INFO|Removing iface tap4e8fc55d-0d ovn-installed in OVS
Jan 31 07:42:46 compute-2 nova_compute[226829]: 2026-01-31 07:42:46.992 226833 DEBUG nova.compute.manager [req-bba0cb86-df6c-4d6f-9290-e745ccaae18f req-6635e8f1-bf34-445d-835a-d426f541d93b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] No waiting events found dispatching network-vif-plugged-462010b9-29e6-472e-ba82-5e6c54eec345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:42:46 compute-2 nova_compute[226829]: 2026-01-31 07:42:46.992 226833 WARNING nova.compute.manager [req-bba0cb86-df6c-4d6f-9290-e745ccaae18f req-6635e8f1-bf34-445d-835a-d426f541d93b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Received unexpected event network-vif-plugged-462010b9-29e6-472e-ba82-5e6c54eec345 for instance with vm_state active and task_state deleting.
Jan 31 07:42:46 compute-2 nova_compute[226829]: 2026-01-31 07:42:46.992 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:46 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:46.995 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc7b247-f560-4566-b780-37178cb70b77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:46 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:46.997 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[89ec5580-18f9-4dba-80e9-097d46f8d8da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:47 compute-2 nova_compute[226829]: 2026-01-31 07:42:47.002 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:47.009 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[511a96ca-2b63-4a42-af99-f4bbef74dfb0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 552981, 'reachable_time': 40488, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247529, 'error': None, 'target': 'ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:47 compute-2 systemd[1]: run-netns-ovnmeta\x2dbff39063\x2d463a\x2d42de\x2db52a\x2dd9ff7905f368.mount: Deactivated successfully.
Jan 31 07:42:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:47.014 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:42:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:47.014 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[f1c3b6e8-a423-4524-abb4-260fa6003223]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:47.019 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:c8:b1 10.100.0.7'], port_security=['fa:16:3e:0e:c8:b1 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '91efd25f-cb58-4eb3-8456-0ceb4efa4ede', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f35578fe-26be-4a64-914c-888b4c19b570', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8f1b80664b714f54baa0c16dfadacca0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '50392d96-5467-43de-8e8a-8de829be13b2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a1c1b0ac-6ffe-4744-9389-3d88aa7d5f11, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=4e8fc55d-0d40-4084-86b9-4babe0a2f675) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:42:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:47.020 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 4e8fc55d-0d40-4084-86b9-4babe0a2f675 in datapath f35578fe-26be-4a64-914c-888b4c19b570 unbound from our chassis
Jan 31 07:42:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:47.022 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f35578fe-26be-4a64-914c-888b4c19b570, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:42:47 compute-2 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000032.scope: Deactivated successfully.
Jan 31 07:42:47 compute-2 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000032.scope: Consumed 3.051s CPU time.
Jan 31 07:42:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:47.023 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff9fddb-e841-4a73-9d64-bf92cda5e852]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:47.024 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f35578fe-26be-4a64-914c-888b4c19b570 namespace which is not needed anymore
Jan 31 07:42:47 compute-2 systemd-machined[195142]: Machine qemu-21-instance-00000032 terminated.
Jan 31 07:42:47 compute-2 kernel: tap4e8fc55d-0d: entered promiscuous mode
Jan 31 07:42:47 compute-2 kernel: tap4e8fc55d-0d (unregistering): left promiscuous mode
Jan 31 07:42:47 compute-2 NetworkManager[48999]: <info>  [1769845367.1544] manager: (tap4e8fc55d-0d): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Jan 31 07:42:47 compute-2 nova_compute[226829]: 2026-01-31 07:42:47.155 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:47 compute-2 ovn_controller[133834]: 2026-01-31T07:42:47Z|00125|binding|INFO|Claiming lport 4e8fc55d-0d40-4084-86b9-4babe0a2f675 for this chassis.
Jan 31 07:42:47 compute-2 ovn_controller[133834]: 2026-01-31T07:42:47Z|00126|binding|INFO|4e8fc55d-0d40-4084-86b9-4babe0a2f675: Claiming fa:16:3e:0e:c8:b1 10.100.0.7
Jan 31 07:42:47 compute-2 nova_compute[226829]: 2026-01-31 07:42:47.165 226833 INFO nova.virt.libvirt.driver [-] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Instance destroyed successfully.
Jan 31 07:42:47 compute-2 nova_compute[226829]: 2026-01-31 07:42:47.165 226833 DEBUG nova.objects.instance [None req-e794d8f3-fa21-4b3a-b21c-146a6f2857c8 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Lazy-loading 'resources' on Instance uuid 91efd25f-cb58-4eb3-8456-0ceb4efa4ede obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:42:47 compute-2 ovn_controller[133834]: 2026-01-31T07:42:47Z|00127|if_status|INFO|Dropped 2 log messages in last 660 seconds (most recently, 660 seconds ago) due to excessive rate
Jan 31 07:42:47 compute-2 ovn_controller[133834]: 2026-01-31T07:42:47Z|00128|if_status|INFO|Not setting lport 4e8fc55d-0d40-4084-86b9-4babe0a2f675 down as sb is readonly
Jan 31 07:42:47 compute-2 nova_compute[226829]: 2026-01-31 07:42:47.167 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:47 compute-2 nova_compute[226829]: 2026-01-31 07:42:47.169 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:47 compute-2 ovn_controller[133834]: 2026-01-31T07:42:47Z|00129|binding|INFO|Releasing lport 4e8fc55d-0d40-4084-86b9-4babe0a2f675 from this chassis (sb_readonly=0)
Jan 31 07:42:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:47.216 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:c8:b1 10.100.0.7'], port_security=['fa:16:3e:0e:c8:b1 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '91efd25f-cb58-4eb3-8456-0ceb4efa4ede', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f35578fe-26be-4a64-914c-888b4c19b570', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8f1b80664b714f54baa0c16dfadacca0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '50392d96-5467-43de-8e8a-8de829be13b2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a1c1b0ac-6ffe-4744-9389-3d88aa7d5f11, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=4e8fc55d-0d40-4084-86b9-4babe0a2f675) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:42:47 compute-2 nova_compute[226829]: 2026-01-31 07:42:47.222 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:47 compute-2 neutron-haproxy-ovnmeta-f35578fe-26be-4a64-914c-888b4c19b570[247449]: [NOTICE]   (247453) : haproxy version is 2.8.14-c23fe91
Jan 31 07:42:47 compute-2 neutron-haproxy-ovnmeta-f35578fe-26be-4a64-914c-888b4c19b570[247449]: [NOTICE]   (247453) : path to executable is /usr/sbin/haproxy
Jan 31 07:42:47 compute-2 neutron-haproxy-ovnmeta-f35578fe-26be-4a64-914c-888b4c19b570[247449]: [WARNING]  (247453) : Exiting Master process...
Jan 31 07:42:47 compute-2 neutron-haproxy-ovnmeta-f35578fe-26be-4a64-914c-888b4c19b570[247449]: [WARNING]  (247453) : Exiting Master process...
Jan 31 07:42:47 compute-2 neutron-haproxy-ovnmeta-f35578fe-26be-4a64-914c-888b4c19b570[247449]: [ALERT]    (247453) : Current worker (247455) exited with code 143 (Terminated)
Jan 31 07:42:47 compute-2 neutron-haproxy-ovnmeta-f35578fe-26be-4a64-914c-888b4c19b570[247449]: [WARNING]  (247453) : All workers exited. Exiting... (0)
Jan 31 07:42:47 compute-2 systemd[1]: libpod-b051dd65f9c5a76d7849d6d494cb0591824113acd0191c182cad6b21f3b208d7.scope: Deactivated successfully.
Jan 31 07:42:47 compute-2 podman[247552]: 2026-01-31 07:42:47.237762555 +0000 UTC m=+0.149553436 container died b051dd65f9c5a76d7849d6d494cb0591824113acd0191c182cad6b21f3b208d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f35578fe-26be-4a64-914c-888b4c19b570, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 07:42:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:47.246 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:c8:b1 10.100.0.7'], port_security=['fa:16:3e:0e:c8:b1 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '91efd25f-cb58-4eb3-8456-0ceb4efa4ede', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f35578fe-26be-4a64-914c-888b4c19b570', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8f1b80664b714f54baa0c16dfadacca0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '50392d96-5467-43de-8e8a-8de829be13b2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a1c1b0ac-6ffe-4744-9389-3d88aa7d5f11, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=4e8fc55d-0d40-4084-86b9-4babe0a2f675) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:42:47 compute-2 nova_compute[226829]: 2026-01-31 07:42:47.253 226833 DEBUG nova.virt.libvirt.vif [None req-e794d8f3-fa21-4b3a-b21c-146a6f2857c8 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:42:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-2026127611',display_name='tempest-ImagesNegativeTestJSON-server-2026127611',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-2026127611',id=50,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:42:44Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8f1b80664b714f54baa0c16dfadacca0',ramdisk_id='',reservation_id='r-ttwcwdow',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesNegativeTestJSON-1541003784',owner_user_name='tempest-ImagesNegativeTestJSON-1541003784-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:42:45Z,user_data=None,user_id='b8744f8bd69646dc9ff12e21bf869893',uuid=91efd25f-cb58-4eb3-8456-0ceb4efa4ede,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4e8fc55d-0d40-4084-86b9-4babe0a2f675", "address": "fa:16:3e:0e:c8:b1", "network": {"id": "f35578fe-26be-4a64-914c-888b4c19b570", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-106164737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f1b80664b714f54baa0c16dfadacca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e8fc55d-0d", "ovs_interfaceid": "4e8fc55d-0d40-4084-86b9-4babe0a2f675", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:42:47 compute-2 nova_compute[226829]: 2026-01-31 07:42:47.253 226833 DEBUG nova.network.os_vif_util [None req-e794d8f3-fa21-4b3a-b21c-146a6f2857c8 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Converting VIF {"id": "4e8fc55d-0d40-4084-86b9-4babe0a2f675", "address": "fa:16:3e:0e:c8:b1", "network": {"id": "f35578fe-26be-4a64-914c-888b4c19b570", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-106164737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f1b80664b714f54baa0c16dfadacca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e8fc55d-0d", "ovs_interfaceid": "4e8fc55d-0d40-4084-86b9-4babe0a2f675", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:42:47 compute-2 nova_compute[226829]: 2026-01-31 07:42:47.254 226833 DEBUG nova.network.os_vif_util [None req-e794d8f3-fa21-4b3a-b21c-146a6f2857c8 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:c8:b1,bridge_name='br-int',has_traffic_filtering=True,id=4e8fc55d-0d40-4084-86b9-4babe0a2f675,network=Network(f35578fe-26be-4a64-914c-888b4c19b570),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e8fc55d-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:42:47 compute-2 nova_compute[226829]: 2026-01-31 07:42:47.255 226833 DEBUG os_vif [None req-e794d8f3-fa21-4b3a-b21c-146a6f2857c8 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:c8:b1,bridge_name='br-int',has_traffic_filtering=True,id=4e8fc55d-0d40-4084-86b9-4babe0a2f675,network=Network(f35578fe-26be-4a64-914c-888b4c19b570),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e8fc55d-0d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:42:47 compute-2 nova_compute[226829]: 2026-01-31 07:42:47.256 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:47 compute-2 nova_compute[226829]: 2026-01-31 07:42:47.256 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4e8fc55d-0d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:42:47 compute-2 nova_compute[226829]: 2026-01-31 07:42:47.259 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:47 compute-2 nova_compute[226829]: 2026-01-31 07:42:47.261 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:47 compute-2 nova_compute[226829]: 2026-01-31 07:42:47.264 226833 INFO os_vif [None req-e794d8f3-fa21-4b3a-b21c-146a6f2857c8 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:c8:b1,bridge_name='br-int',has_traffic_filtering=True,id=4e8fc55d-0d40-4084-86b9-4babe0a2f675,network=Network(f35578fe-26be-4a64-914c-888b4c19b570),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e8fc55d-0d')
Jan 31 07:42:47 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b051dd65f9c5a76d7849d6d494cb0591824113acd0191c182cad6b21f3b208d7-userdata-shm.mount: Deactivated successfully.
Jan 31 07:42:47 compute-2 systemd[1]: var-lib-containers-storage-overlay-2492bce5b27076971868b47fe31dd596e55c1e18ec19d64363e48d6060455ff3-merged.mount: Deactivated successfully.
Jan 31 07:42:47 compute-2 podman[247552]: 2026-01-31 07:42:47.64350835 +0000 UTC m=+0.555299271 container cleanup b051dd65f9c5a76d7849d6d494cb0591824113acd0191c182cad6b21f3b208d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f35578fe-26be-4a64-914c-888b4c19b570, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:42:47 compute-2 podman[247574]: 2026-01-31 07:42:47.667838732 +0000 UTC m=+0.454832900 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:42:47 compute-2 ceph-mon[77282]: pgmap v1319: 305 pgs: 305 active+clean; 239 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 682 KiB/s rd, 2.8 MiB/s wr, 140 op/s
Jan 31 07:42:48 compute-2 podman[247629]: 2026-01-31 07:42:48.052546213 +0000 UTC m=+0.389126241 container remove b051dd65f9c5a76d7849d6d494cb0591824113acd0191c182cad6b21f3b208d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f35578fe-26be-4a64-914c-888b4c19b570, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 07:42:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:48.059 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad9fb99-1a54-489b-9a7b-b3aef106c4de]: (4, ('Sat Jan 31 07:42:47 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f35578fe-26be-4a64-914c-888b4c19b570 (b051dd65f9c5a76d7849d6d494cb0591824113acd0191c182cad6b21f3b208d7)\nb051dd65f9c5a76d7849d6d494cb0591824113acd0191c182cad6b21f3b208d7\nSat Jan 31 07:42:47 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f35578fe-26be-4a64-914c-888b4c19b570 (b051dd65f9c5a76d7849d6d494cb0591824113acd0191c182cad6b21f3b208d7)\nb051dd65f9c5a76d7849d6d494cb0591824113acd0191c182cad6b21f3b208d7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:48.062 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[23735307-a211-4894-8add-541e0d79b837]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:48.064 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf35578fe-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:42:48 compute-2 nova_compute[226829]: 2026-01-31 07:42:48.067 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:48 compute-2 kernel: tapf35578fe-20: left promiscuous mode
Jan 31 07:42:48 compute-2 nova_compute[226829]: 2026-01-31 07:42:48.078 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:48.084 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6f400162-c8e8-4dc8-acff-8f5fac785cb7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:48 compute-2 systemd[1]: libpod-conmon-b051dd65f9c5a76d7849d6d494cb0591824113acd0191c182cad6b21f3b208d7.scope: Deactivated successfully.
Jan 31 07:42:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:48.104 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d76ea6e8-013d-47c4-a7df-16f73ca72103]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:48.105 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e927bc25-619a-41e5-8a90-1eedc5285a5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:48.120 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[529d6055-987f-44df-972d-08f0bd1af398]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561635, 'reachable_time': 23690, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247645, 'error': None, 'target': 'ovnmeta-f35578fe-26be-4a64-914c-888b4c19b570', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:48.123 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f35578fe-26be-4a64-914c-888b4c19b570 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:42:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:48.123 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[afba7ef7-75c5-40e8-bbeb-b9eb7e3e23c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:48 compute-2 systemd[1]: run-netns-ovnmeta\x2df35578fe\x2d26be\x2d4a64\x2d914c\x2d888b4c19b570.mount: Deactivated successfully.
Jan 31 07:42:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:48.124 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 4e8fc55d-0d40-4084-86b9-4babe0a2f675 in datapath f35578fe-26be-4a64-914c-888b4c19b570 unbound from our chassis
Jan 31 07:42:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:48.126 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f35578fe-26be-4a64-914c-888b4c19b570, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:42:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:48.127 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[683ccc91-ac4c-4193-b2cb-8abca4e43966]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:48.128 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 4e8fc55d-0d40-4084-86b9-4babe0a2f675 in datapath f35578fe-26be-4a64-914c-888b4c19b570 unbound from our chassis
Jan 31 07:42:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:48.130 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f35578fe-26be-4a64-914c-888b4c19b570, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:42:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:48.131 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9585907d-e1ab-4734-b843-40bee513f67e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:42:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:48.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:42:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:42:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:48.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:42:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:42:49 compute-2 nova_compute[226829]: 2026-01-31 07:42:49.022 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:49 compute-2 nova_compute[226829]: 2026-01-31 07:42:49.166 226833 INFO nova.virt.libvirt.driver [None req-98cb9ef4-039d-4957-af17-deb4ae9c5282 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Deleting instance files /var/lib/nova/instances/c030025f-5967-4922-a748-2f999d0645b1_del
Jan 31 07:42:49 compute-2 nova_compute[226829]: 2026-01-31 07:42:49.168 226833 INFO nova.virt.libvirt.driver [None req-98cb9ef4-039d-4957-af17-deb4ae9c5282 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Deletion of /var/lib/nova/instances/c030025f-5967-4922-a748-2f999d0645b1_del complete
Jan 31 07:42:49 compute-2 nova_compute[226829]: 2026-01-31 07:42:49.176 226833 DEBUG nova.compute.manager [req-81da8f80-f8ed-464c-b190-eb4365e34c47 req-eeb96244-fff7-49fc-8a6e-ba47785444a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Received event network-vif-unplugged-4e8fc55d-0d40-4084-86b9-4babe0a2f675 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:42:49 compute-2 nova_compute[226829]: 2026-01-31 07:42:49.177 226833 DEBUG oslo_concurrency.lockutils [req-81da8f80-f8ed-464c-b190-eb4365e34c47 req-eeb96244-fff7-49fc-8a6e-ba47785444a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "91efd25f-cb58-4eb3-8456-0ceb4efa4ede-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:49 compute-2 nova_compute[226829]: 2026-01-31 07:42:49.177 226833 DEBUG oslo_concurrency.lockutils [req-81da8f80-f8ed-464c-b190-eb4365e34c47 req-eeb96244-fff7-49fc-8a6e-ba47785444a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "91efd25f-cb58-4eb3-8456-0ceb4efa4ede-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:49 compute-2 nova_compute[226829]: 2026-01-31 07:42:49.177 226833 DEBUG oslo_concurrency.lockutils [req-81da8f80-f8ed-464c-b190-eb4365e34c47 req-eeb96244-fff7-49fc-8a6e-ba47785444a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "91efd25f-cb58-4eb3-8456-0ceb4efa4ede-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:49 compute-2 nova_compute[226829]: 2026-01-31 07:42:49.178 226833 DEBUG nova.compute.manager [req-81da8f80-f8ed-464c-b190-eb4365e34c47 req-eeb96244-fff7-49fc-8a6e-ba47785444a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] No waiting events found dispatching network-vif-unplugged-4e8fc55d-0d40-4084-86b9-4babe0a2f675 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:42:49 compute-2 nova_compute[226829]: 2026-01-31 07:42:49.178 226833 DEBUG nova.compute.manager [req-81da8f80-f8ed-464c-b190-eb4365e34c47 req-eeb96244-fff7-49fc-8a6e-ba47785444a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Received event network-vif-unplugged-4e8fc55d-0d40-4084-86b9-4babe0a2f675 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:42:49 compute-2 nova_compute[226829]: 2026-01-31 07:42:49.477 226833 INFO nova.compute.manager [None req-98cb9ef4-039d-4957-af17-deb4ae9c5282 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Took 5.39 seconds to destroy the instance on the hypervisor.
Jan 31 07:42:49 compute-2 nova_compute[226829]: 2026-01-31 07:42:49.478 226833 DEBUG oslo.service.loopingcall [None req-98cb9ef4-039d-4957-af17-deb4ae9c5282 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:42:49 compute-2 nova_compute[226829]: 2026-01-31 07:42:49.479 226833 DEBUG nova.compute.manager [-] [instance: c030025f-5967-4922-a748-2f999d0645b1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:42:49 compute-2 nova_compute[226829]: 2026-01-31 07:42:49.480 226833 DEBUG nova.network.neutron [-] [instance: c030025f-5967-4922-a748-2f999d0645b1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:42:50 compute-2 ceph-mon[77282]: pgmap v1320: 305 pgs: 305 active+clean; 214 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.5 MiB/s wr, 160 op/s
Jan 31 07:42:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:50.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:50.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:51 compute-2 nova_compute[226829]: 2026-01-31 07:42:51.263 226833 INFO nova.virt.libvirt.driver [None req-e794d8f3-fa21-4b3a-b21c-146a6f2857c8 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Deleting instance files /var/lib/nova/instances/91efd25f-cb58-4eb3-8456-0ceb4efa4ede_del
Jan 31 07:42:51 compute-2 nova_compute[226829]: 2026-01-31 07:42:51.265 226833 INFO nova.virt.libvirt.driver [None req-e794d8f3-fa21-4b3a-b21c-146a6f2857c8 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Deletion of /var/lib/nova/instances/91efd25f-cb58-4eb3-8456-0ceb4efa4ede_del complete
Jan 31 07:42:51 compute-2 nova_compute[226829]: 2026-01-31 07:42:51.310 226833 DEBUG nova.compute.manager [req-1a2ecbfc-89e1-4f82-90bc-1794653a15ee req-8def8333-52d3-4072-9d88-c09b0558fabf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Received event network-vif-plugged-4e8fc55d-0d40-4084-86b9-4babe0a2f675 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:42:51 compute-2 nova_compute[226829]: 2026-01-31 07:42:51.311 226833 DEBUG oslo_concurrency.lockutils [req-1a2ecbfc-89e1-4f82-90bc-1794653a15ee req-8def8333-52d3-4072-9d88-c09b0558fabf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "91efd25f-cb58-4eb3-8456-0ceb4efa4ede-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:51 compute-2 nova_compute[226829]: 2026-01-31 07:42:51.311 226833 DEBUG oslo_concurrency.lockutils [req-1a2ecbfc-89e1-4f82-90bc-1794653a15ee req-8def8333-52d3-4072-9d88-c09b0558fabf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "91efd25f-cb58-4eb3-8456-0ceb4efa4ede-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:51 compute-2 nova_compute[226829]: 2026-01-31 07:42:51.312 226833 DEBUG oslo_concurrency.lockutils [req-1a2ecbfc-89e1-4f82-90bc-1794653a15ee req-8def8333-52d3-4072-9d88-c09b0558fabf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "91efd25f-cb58-4eb3-8456-0ceb4efa4ede-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:51 compute-2 nova_compute[226829]: 2026-01-31 07:42:51.312 226833 DEBUG nova.compute.manager [req-1a2ecbfc-89e1-4f82-90bc-1794653a15ee req-8def8333-52d3-4072-9d88-c09b0558fabf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] No waiting events found dispatching network-vif-plugged-4e8fc55d-0d40-4084-86b9-4babe0a2f675 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:42:51 compute-2 nova_compute[226829]: 2026-01-31 07:42:51.312 226833 WARNING nova.compute.manager [req-1a2ecbfc-89e1-4f82-90bc-1794653a15ee req-8def8333-52d3-4072-9d88-c09b0558fabf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Received unexpected event network-vif-plugged-4e8fc55d-0d40-4084-86b9-4babe0a2f675 for instance with vm_state active and task_state deleting.
Jan 31 07:42:51 compute-2 nova_compute[226829]: 2026-01-31 07:42:51.368 226833 INFO nova.compute.manager [None req-e794d8f3-fa21-4b3a-b21c-146a6f2857c8 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Took 4.44 seconds to destroy the instance on the hypervisor.
Jan 31 07:42:51 compute-2 nova_compute[226829]: 2026-01-31 07:42:51.369 226833 DEBUG oslo.service.loopingcall [None req-e794d8f3-fa21-4b3a-b21c-146a6f2857c8 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:42:51 compute-2 nova_compute[226829]: 2026-01-31 07:42:51.369 226833 DEBUG nova.compute.manager [-] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:42:51 compute-2 nova_compute[226829]: 2026-01-31 07:42:51.369 226833 DEBUG nova.network.neutron [-] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:42:51 compute-2 sudo[247650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:42:51 compute-2 sudo[247650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:42:51 compute-2 sudo[247650]: pam_unix(sudo:session): session closed for user root
Jan 31 07:42:51 compute-2 sudo[247675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:42:51 compute-2 sudo[247675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:42:51 compute-2 sudo[247675]: pam_unix(sudo:session): session closed for user root
Jan 31 07:42:52 compute-2 nova_compute[226829]: 2026-01-31 07:42:52.260 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:42:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:52.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:42:52 compute-2 ceph-mon[77282]: pgmap v1321: 305 pgs: 305 active+clean; 126 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.7 MiB/s wr, 251 op/s
Jan 31 07:42:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:52.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:52 compute-2 nova_compute[226829]: 2026-01-31 07:42:52.643 226833 DEBUG nova.network.neutron [-] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:42:52 compute-2 nova_compute[226829]: 2026-01-31 07:42:52.680 226833 INFO nova.compute.manager [-] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Took 1.31 seconds to deallocate network for instance.
Jan 31 07:42:52 compute-2 nova_compute[226829]: 2026-01-31 07:42:52.774 226833 DEBUG nova.network.neutron [-] [instance: c030025f-5967-4922-a748-2f999d0645b1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:42:52 compute-2 nova_compute[226829]: 2026-01-31 07:42:52.781 226833 DEBUG nova.compute.manager [req-b3b10dea-11ef-4e23-89fa-4cd2d0716998 req-ad70f522-4705-4dda-ae2a-f612ccba6fd3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Received event network-vif-deleted-4e8fc55d-0d40-4084-86b9-4babe0a2f675 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:42:52 compute-2 nova_compute[226829]: 2026-01-31 07:42:52.793 226833 DEBUG oslo_concurrency.lockutils [None req-e794d8f3-fa21-4b3a-b21c-146a6f2857c8 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:52 compute-2 nova_compute[226829]: 2026-01-31 07:42:52.794 226833 DEBUG oslo_concurrency.lockutils [None req-e794d8f3-fa21-4b3a-b21c-146a6f2857c8 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:52 compute-2 nova_compute[226829]: 2026-01-31 07:42:52.844 226833 INFO nova.compute.manager [-] [instance: c030025f-5967-4922-a748-2f999d0645b1] Took 3.36 seconds to deallocate network for instance.
Jan 31 07:42:52 compute-2 nova_compute[226829]: 2026-01-31 07:42:52.909 226833 DEBUG oslo_concurrency.lockutils [None req-98cb9ef4-039d-4957-af17-deb4ae9c5282 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:52 compute-2 nova_compute[226829]: 2026-01-31 07:42:52.932 226833 DEBUG oslo_concurrency.processutils [None req-e794d8f3-fa21-4b3a-b21c-146a6f2857c8 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:42:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:42:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/969411879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:53 compute-2 nova_compute[226829]: 2026-01-31 07:42:53.345 226833 DEBUG oslo_concurrency.processutils [None req-e794d8f3-fa21-4b3a-b21c-146a6f2857c8 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:42:53 compute-2 nova_compute[226829]: 2026-01-31 07:42:53.354 226833 DEBUG nova.compute.provider_tree [None req-e794d8f3-fa21-4b3a-b21c-146a6f2857c8 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:42:53 compute-2 nova_compute[226829]: 2026-01-31 07:42:53.377 226833 DEBUG nova.scheduler.client.report [None req-e794d8f3-fa21-4b3a-b21c-146a6f2857c8 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:42:53 compute-2 nova_compute[226829]: 2026-01-31 07:42:53.411 226833 DEBUG oslo_concurrency.lockutils [None req-e794d8f3-fa21-4b3a-b21c-146a6f2857c8 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:53 compute-2 nova_compute[226829]: 2026-01-31 07:42:53.414 226833 DEBUG oslo_concurrency.lockutils [None req-98cb9ef4-039d-4957-af17-deb4ae9c5282 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:53 compute-2 nova_compute[226829]: 2026-01-31 07:42:53.460 226833 DEBUG nova.compute.manager [req-f025eb47-7435-4bd2-9bc3-896946882c25 req-809936dc-ff7f-467f-93c7-a76e52b55e7f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c030025f-5967-4922-a748-2f999d0645b1] Received event network-vif-deleted-462010b9-29e6-472e-ba82-5e6c54eec345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:42:53 compute-2 nova_compute[226829]: 2026-01-31 07:42:53.500 226833 INFO nova.scheduler.client.report [None req-e794d8f3-fa21-4b3a-b21c-146a6f2857c8 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Deleted allocations for instance 91efd25f-cb58-4eb3-8456-0ceb4efa4ede
Jan 31 07:42:53 compute-2 nova_compute[226829]: 2026-01-31 07:42:53.557 226833 DEBUG oslo_concurrency.processutils [None req-98cb9ef4-039d-4957-af17-deb4ae9c5282 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:42:53 compute-2 nova_compute[226829]: 2026-01-31 07:42:53.633 226833 DEBUG oslo_concurrency.lockutils [None req-e794d8f3-fa21-4b3a-b21c-146a6f2857c8 b8744f8bd69646dc9ff12e21bf869893 8f1b80664b714f54baa0c16dfadacca0 - - default default] Lock "91efd25f-cb58-4eb3-8456-0ceb4efa4ede" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:42:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:53.880 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:42:53 compute-2 nova_compute[226829]: 2026-01-31 07:42:53.880 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:42:53.882 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:42:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:42:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1960182664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:53 compute-2 nova_compute[226829]: 2026-01-31 07:42:53.988 226833 DEBUG oslo_concurrency.processutils [None req-98cb9ef4-039d-4957-af17-deb4ae9c5282 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:42:53 compute-2 nova_compute[226829]: 2026-01-31 07:42:53.995 226833 DEBUG nova.compute.provider_tree [None req-98cb9ef4-039d-4957-af17-deb4ae9c5282 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:42:54 compute-2 nova_compute[226829]: 2026-01-31 07:42:54.017 226833 DEBUG nova.scheduler.client.report [None req-98cb9ef4-039d-4957-af17-deb4ae9c5282 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:42:54 compute-2 nova_compute[226829]: 2026-01-31 07:42:54.022 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:54 compute-2 nova_compute[226829]: 2026-01-31 07:42:54.046 226833 DEBUG oslo_concurrency.lockutils [None req-98cb9ef4-039d-4957-af17-deb4ae9c5282 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:54.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:54 compute-2 nova_compute[226829]: 2026-01-31 07:42:54.282 226833 INFO nova.scheduler.client.report [None req-98cb9ef4-039d-4957-af17-deb4ae9c5282 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Deleted allocations for instance c030025f-5967-4922-a748-2f999d0645b1
Jan 31 07:42:54 compute-2 ceph-mon[77282]: pgmap v1322: 305 pgs: 305 active+clean; 121 MiB data, 537 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.4 MiB/s wr, 216 op/s
Jan 31 07:42:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/969411879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1960182664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:54 compute-2 nova_compute[226829]: 2026-01-31 07:42:54.368 226833 DEBUG oslo_concurrency.lockutils [None req-98cb9ef4-039d-4957-af17-deb4ae9c5282 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "c030025f-5967-4922-a748-2f999d0645b1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:54.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:55 compute-2 ceph-mon[77282]: pgmap v1323: 305 pgs: 305 active+clean; 121 MiB data, 537 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 133 KiB/s wr, 185 op/s
Jan 31 07:42:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:56.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:56.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:56 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2725692687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:57 compute-2 nova_compute[226829]: 2026-01-31 07:42:57.264 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:57 compute-2 nova_compute[226829]: 2026-01-31 07:42:57.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:42:57 compute-2 sudo[247747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:42:57 compute-2 sudo[247747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:42:57 compute-2 sudo[247747]: pam_unix(sudo:session): session closed for user root
Jan 31 07:42:58 compute-2 sudo[247772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:42:58 compute-2 sudo[247772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:42:58 compute-2 sudo[247772]: pam_unix(sudo:session): session closed for user root
Jan 31 07:42:58 compute-2 sudo[247797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:42:58 compute-2 sudo[247797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:42:58 compute-2 sudo[247797]: pam_unix(sudo:session): session closed for user root
Jan 31 07:42:58 compute-2 sudo[247822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:42:58 compute-2 sudo[247822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:42:58 compute-2 ceph-mon[77282]: pgmap v1324: 305 pgs: 305 active+clean; 73 MiB data, 513 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 133 KiB/s wr, 201 op/s
Jan 31 07:42:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2509227913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3908076386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:58.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:58 compute-2 sudo[247822]: pam_unix(sudo:session): session closed for user root
Jan 31 07:42:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:42:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:42:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:58.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:42:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:42:59 compute-2 nova_compute[226829]: 2026-01-31 07:42:59.024 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:59 compute-2 nova_compute[226829]: 2026-01-31 07:42:59.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:42:59 compute-2 nova_compute[226829]: 2026-01-31 07:42:59.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:42:59 compute-2 nova_compute[226829]: 2026-01-31 07:42:59.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:42:59 compute-2 nova_compute[226829]: 2026-01-31 07:42:59.513 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845364.5129147, c030025f-5967-4922-a748-2f999d0645b1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:42:59 compute-2 nova_compute[226829]: 2026-01-31 07:42:59.514 226833 INFO nova.compute.manager [-] [instance: c030025f-5967-4922-a748-2f999d0645b1] VM Stopped (Lifecycle Event)
Jan 31 07:42:59 compute-2 nova_compute[226829]: 2026-01-31 07:42:59.608 226833 DEBUG nova.compute.manager [None req-5cc8c7fe-7452-4adb-b508-32ef30311e0c - - - - - -] [instance: c030025f-5967-4922-a748-2f999d0645b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:43:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:00.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:00 compute-2 ceph-mon[77282]: pgmap v1325: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 53 KiB/s wr, 148 op/s
Jan 31 07:43:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:43:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:43:00 compute-2 nova_compute[226829]: 2026-01-31 07:43:00.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:43:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:43:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:00.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:43:00 compute-2 nova_compute[226829]: 2026-01-31 07:43:00.589 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:00 compute-2 nova_compute[226829]: 2026-01-31 07:43:00.690 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:01 compute-2 nova_compute[226829]: 2026-01-31 07:43:01.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:43:01 compute-2 nova_compute[226829]: 2026-01-31 07:43:01.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:43:01 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:43:01 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:43:01 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:43:01 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:43:01 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:43:01 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:43:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/204450215' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:43:01 compute-2 nova_compute[226829]: 2026-01-31 07:43:01.809 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:43:01 compute-2 nova_compute[226829]: 2026-01-31 07:43:01.811 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:43:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:43:01.884 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:43:01 compute-2 nova_compute[226829]: 2026-01-31 07:43:01.972 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:43:01 compute-2 nova_compute[226829]: 2026-01-31 07:43:01.972 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:43:01 compute-2 nova_compute[226829]: 2026-01-31 07:43:01.973 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:43:01 compute-2 nova_compute[226829]: 2026-01-31 07:43:01.973 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:43:01 compute-2 nova_compute[226829]: 2026-01-31 07:43:01.974 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:43:02 compute-2 nova_compute[226829]: 2026-01-31 07:43:02.164 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845367.1624086, 91efd25f-cb58-4eb3-8456-0ceb4efa4ede => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:43:02 compute-2 nova_compute[226829]: 2026-01-31 07:43:02.166 226833 INFO nova.compute.manager [-] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] VM Stopped (Lifecycle Event)
Jan 31 07:43:02 compute-2 nova_compute[226829]: 2026-01-31 07:43:02.211 226833 DEBUG nova.compute.manager [None req-f72897cc-69af-48f2-bd46-5e606532f801 - - - - - -] [instance: 91efd25f-cb58-4eb3-8456-0ceb4efa4ede] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:43:02 compute-2 nova_compute[226829]: 2026-01-31 07:43:02.267 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:02.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:43:02 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1720823205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:43:02 compute-2 nova_compute[226829]: 2026-01-31 07:43:02.418 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:43:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:02.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:02 compute-2 ceph-mon[77282]: pgmap v1326: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 35 KiB/s wr, 127 op/s
Jan 31 07:43:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1316177051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:43:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1720823205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:43:02 compute-2 nova_compute[226829]: 2026-01-31 07:43:02.655 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:43:02 compute-2 nova_compute[226829]: 2026-01-31 07:43:02.657 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4618MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:43:02 compute-2 nova_compute[226829]: 2026-01-31 07:43:02.657 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:43:02 compute-2 nova_compute[226829]: 2026-01-31 07:43:02.658 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:43:03 compute-2 nova_compute[226829]: 2026-01-31 07:43:03.081 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:43:03 compute-2 nova_compute[226829]: 2026-01-31 07:43:03.082 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:43:03 compute-2 nova_compute[226829]: 2026-01-31 07:43:03.253 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:43:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:43:03 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1847506575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:43:03 compute-2 ceph-mon[77282]: pgmap v1327: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 1.5 KiB/s wr, 30 op/s
Jan 31 07:43:03 compute-2 nova_compute[226829]: 2026-01-31 07:43:03.704 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:43:03 compute-2 nova_compute[226829]: 2026-01-31 07:43:03.710 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:43:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:43:04 compute-2 nova_compute[226829]: 2026-01-31 07:43:04.026 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:04.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:04 compute-2 nova_compute[226829]: 2026-01-31 07:43:04.328 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:43:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:04.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1847506575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:43:04 compute-2 nova_compute[226829]: 2026-01-31 07:43:04.903 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:43:04 compute-2 nova_compute[226829]: 2026-01-31 07:43:04.903 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:43:05 compute-2 ceph-mon[77282]: pgmap v1328: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Jan 31 07:43:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:06.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:43:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:06.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:43:06 compute-2 nova_compute[226829]: 2026-01-31 07:43:06.581 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:43:06 compute-2 nova_compute[226829]: 2026-01-31 07:43:06.582 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:43:06 compute-2 nova_compute[226829]: 2026-01-31 07:43:06.582 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:43:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:43:06.851 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:43:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:43:06.851 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:43:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:43:06.852 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:43:07 compute-2 nova_compute[226829]: 2026-01-31 07:43:07.270 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:07 compute-2 ceph-mon[77282]: pgmap v1329: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Jan 31 07:43:07 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:43:07 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:43:07 compute-2 sudo[247930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:43:07 compute-2 sudo[247930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:07 compute-2 sudo[247930]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:07 compute-2 sudo[247955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:43:07 compute-2 sudo[247955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:07 compute-2 sudo[247955]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:43:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:08.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:43:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:08.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:43:09 compute-2 nova_compute[226829]: 2026-01-31 07:43:09.029 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:10 compute-2 ceph-mon[77282]: pgmap v1330: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 7.6 KiB/s rd, 682 B/s wr, 12 op/s
Jan 31 07:43:10 compute-2 podman[247981]: 2026-01-31 07:43:10.194202269 +0000 UTC m=+0.081792521 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 07:43:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:43:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:10.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:43:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:10.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:11 compute-2 sudo[248010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:43:11 compute-2 sudo[248010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:11 compute-2 sudo[248010]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:11 compute-2 sudo[248035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:43:11 compute-2 sudo[248035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:11 compute-2 sudo[248035]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:12 compute-2 nova_compute[226829]: 2026-01-31 07:43:12.273 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:43:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:12.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:43:12 compute-2 ceph-mon[77282]: pgmap v1331: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:43:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:12.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:43:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:43:14 compute-2 nova_compute[226829]: 2026-01-31 07:43:14.031 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:14.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:14 compute-2 ceph-mon[77282]: pgmap v1332: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:14.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:15 compute-2 ceph-mon[77282]: pgmap v1333: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:16.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:16.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:17 compute-2 nova_compute[226829]: 2026-01-31 07:43:17.277 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:18 compute-2 podman[248063]: 2026-01-31 07:43:18.16877191 +0000 UTC m=+0.058631321 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 07:43:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:18.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:43:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:18.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:43:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:43:19 compute-2 nova_compute[226829]: 2026-01-31 07:43:19.033 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:20.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:20.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:22 compute-2 nova_compute[226829]: 2026-01-31 07:43:22.279 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:22.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:43:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:22.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:43:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:43:24 compute-2 nova_compute[226829]: 2026-01-31 07:43:24.035 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:24.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:43:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:24.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:43:25 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma missed beacon ack from the monitors
Jan 31 07:43:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:26.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:43:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:26.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:43:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).paxos(paxos active c 2511..3089) lease_timeout -- calling new election
Jan 31 07:43:27 compute-2 nova_compute[226829]: 2026-01-31 07:43:27.282 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:27 compute-2 ceph-mon[77282]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Jan 31 07:43:27 compute-2 ceph-mon[77282]: paxos.1).electionLogic(24) init, last seen epoch 24
Jan 31 07:43:27 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:43:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:28.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:28.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:28 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:43:29 compute-2 nova_compute[226829]: 2026-01-31 07:43:29.037 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:29 compute-2 ceph-mon[77282]: pgmap v1334: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:29 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma missed beacon ack from the monitors
Jan 31 07:43:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:43:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:43:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:30.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:43:30 compute-2 ceph-mon[77282]: pgmap v1335: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:30 compute-2 ceph-mon[77282]: pgmap v1336: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:30 compute-2 ceph-mon[77282]: pgmap v1337: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:30 compute-2 ceph-mon[77282]: pgmap v1338: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:30 compute-2 ceph-mon[77282]: pgmap v1339: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:30 compute-2 ceph-mon[77282]: mon.compute-1 calling monitor election
Jan 31 07:43:30 compute-2 ceph-mon[77282]: mon.compute-2 calling monitor election
Jan 31 07:43:30 compute-2 ceph-mon[77282]: pgmap v1340: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:30 compute-2 ceph-mon[77282]: mon.compute-0 calling monitor election
Jan 31 07:43:30 compute-2 ceph-mon[77282]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 07:43:30 compute-2 ceph-mon[77282]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 07:43:30 compute-2 ceph-mon[77282]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 07:43:30 compute-2 ceph-mon[77282]: osdmap e203: 3 total, 3 up, 3 in
Jan 31 07:43:30 compute-2 ceph-mon[77282]: mgrmap e11: compute-0.hhuoua(active, since 38m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 07:43:30 compute-2 ceph-mon[77282]: overall HEALTH_OK
Jan 31 07:43:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:30.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:31 compute-2 sudo[248090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:43:31 compute-2 ceph-mon[77282]: pgmap v1341: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:31 compute-2 sudo[248090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:31 compute-2 sudo[248090]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:31 compute-2 sudo[248115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:43:31 compute-2 sudo[248115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:31 compute-2 sudo[248115]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:32 compute-2 nova_compute[226829]: 2026-01-31 07:43:32.293 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:43:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:32.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:43:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:43:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:32.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:43:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:43:34 compute-2 nova_compute[226829]: 2026-01-31 07:43:34.039 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:34 compute-2 ceph-mon[77282]: pgmap v1342: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:34.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:34.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:36.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:36.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:36 compute-2 ceph-mon[77282]: pgmap v1343: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:37 compute-2 nova_compute[226829]: 2026-01-31 07:43:37.296 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:37 compute-2 ceph-mon[77282]: pgmap v1344: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:38.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:38.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:43:39 compute-2 nova_compute[226829]: 2026-01-31 07:43:39.039 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:39 compute-2 ceph-mon[77282]: pgmap v1345: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:40.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:43:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:40.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:43:41 compute-2 podman[248144]: 2026-01-31 07:43:41.287372735 +0000 UTC m=+0.177288299 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20260127, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible)
Jan 31 07:43:42 compute-2 ceph-mon[77282]: pgmap v1346: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:42 compute-2 nova_compute[226829]: 2026-01-31 07:43:42.348 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:42.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:42.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:43:44 compute-2 nova_compute[226829]: 2026-01-31 07:43:44.040 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:44 compute-2 ceph-mon[77282]: pgmap v1347: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:44.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:43:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:44.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:43:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3345716201' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:43:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3345716201' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:43:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:43:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:46.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:43:46 compute-2 ceph-mon[77282]: pgmap v1348: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:43:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:46.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:43:47 compute-2 nova_compute[226829]: 2026-01-31 07:43:47.353 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:47 compute-2 ovn_controller[133834]: 2026-01-31T07:43:47Z|00130|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 31 07:43:47 compute-2 ceph-mon[77282]: pgmap v1349: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:48 compute-2 nova_compute[226829]: 2026-01-31 07:43:48.015 226833 DEBUG oslo_concurrency.lockutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquiring lock "0de34c85-05be-461e-ae22-ac3816f1018d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:43:48 compute-2 nova_compute[226829]: 2026-01-31 07:43:48.015 226833 DEBUG oslo_concurrency.lockutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "0de34c85-05be-461e-ae22-ac3816f1018d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:43:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:48.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:48 compute-2 nova_compute[226829]: 2026-01-31 07:43:48.424 226833 DEBUG nova.compute.manager [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:43:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:48.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:43:48 compute-2 nova_compute[226829]: 2026-01-31 07:43:48.938 226833 DEBUG oslo_concurrency.lockutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:43:48 compute-2 nova_compute[226829]: 2026-01-31 07:43:48.939 226833 DEBUG oslo_concurrency.lockutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:43:48 compute-2 nova_compute[226829]: 2026-01-31 07:43:48.951 226833 DEBUG nova.virt.hardware [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:43:48 compute-2 nova_compute[226829]: 2026-01-31 07:43:48.952 226833 INFO nova.compute.claims [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:43:49 compute-2 nova_compute[226829]: 2026-01-31 07:43:49.042 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:49 compute-2 podman[248173]: 2026-01-31 07:43:49.210191301 +0000 UTC m=+0.096430813 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:43:49 compute-2 nova_compute[226829]: 2026-01-31 07:43:49.580 226833 DEBUG oslo_concurrency.processutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:43:50 compute-2 ceph-mon[77282]: pgmap v1350: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:43:50 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2752754670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:43:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:50 compute-2 nova_compute[226829]: 2026-01-31 07:43:50.374 226833 DEBUG oslo_concurrency.processutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.793s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:43:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:43:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:50.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:43:50 compute-2 nova_compute[226829]: 2026-01-31 07:43:50.381 226833 DEBUG nova.compute.provider_tree [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:43:50 compute-2 nova_compute[226829]: 2026-01-31 07:43:50.492 226833 DEBUG nova.scheduler.client.report [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:43:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:50.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:50 compute-2 nova_compute[226829]: 2026-01-31 07:43:50.796 226833 DEBUG oslo_concurrency.lockutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:43:50 compute-2 nova_compute[226829]: 2026-01-31 07:43:50.797 226833 DEBUG nova.compute.manager [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:43:51 compute-2 nova_compute[226829]: 2026-01-31 07:43:51.203 226833 DEBUG nova.compute.manager [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:43:51 compute-2 nova_compute[226829]: 2026-01-31 07:43:51.204 226833 DEBUG nova.network.neutron [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:43:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2752754670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:43:51 compute-2 nova_compute[226829]: 2026-01-31 07:43:51.540 226833 INFO nova.virt.libvirt.driver [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:43:51 compute-2 nova_compute[226829]: 2026-01-31 07:43:51.649 226833 DEBUG nova.compute.manager [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:43:51 compute-2 nova_compute[226829]: 2026-01-31 07:43:51.668 226833 DEBUG nova.policy [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4cdbfeb437d54df89a0fb0f6621b8fdc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9a5c5f11e8f24f898d16bceb9925aaa0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:43:51 compute-2 sudo[248216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:43:51 compute-2 sudo[248216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:51 compute-2 sudo[248216]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:51 compute-2 sudo[248241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:43:51 compute-2 sudo[248241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:51 compute-2 sudo[248241]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:51 compute-2 nova_compute[226829]: 2026-01-31 07:43:51.942 226833 DEBUG nova.compute.manager [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:43:51 compute-2 nova_compute[226829]: 2026-01-31 07:43:51.943 226833 DEBUG nova.virt.libvirt.driver [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:43:51 compute-2 nova_compute[226829]: 2026-01-31 07:43:51.944 226833 INFO nova.virt.libvirt.driver [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Creating image(s)
Jan 31 07:43:51 compute-2 nova_compute[226829]: 2026-01-31 07:43:51.974 226833 DEBUG nova.storage.rbd_utils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] rbd image 0de34c85-05be-461e-ae22-ac3816f1018d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:43:52 compute-2 nova_compute[226829]: 2026-01-31 07:43:52.006 226833 DEBUG nova.storage.rbd_utils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] rbd image 0de34c85-05be-461e-ae22-ac3816f1018d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:43:52 compute-2 nova_compute[226829]: 2026-01-31 07:43:52.038 226833 DEBUG nova.storage.rbd_utils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] rbd image 0de34c85-05be-461e-ae22-ac3816f1018d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:43:52 compute-2 nova_compute[226829]: 2026-01-31 07:43:52.042 226833 DEBUG oslo_concurrency.processutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:43:52 compute-2 nova_compute[226829]: 2026-01-31 07:43:52.091 226833 DEBUG oslo_concurrency.processutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:43:52 compute-2 nova_compute[226829]: 2026-01-31 07:43:52.092 226833 DEBUG oslo_concurrency.lockutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:43:52 compute-2 nova_compute[226829]: 2026-01-31 07:43:52.093 226833 DEBUG oslo_concurrency.lockutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:43:52 compute-2 nova_compute[226829]: 2026-01-31 07:43:52.093 226833 DEBUG oslo_concurrency.lockutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:43:52 compute-2 nova_compute[226829]: 2026-01-31 07:43:52.119 226833 DEBUG nova.storage.rbd_utils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] rbd image 0de34c85-05be-461e-ae22-ac3816f1018d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:43:52 compute-2 nova_compute[226829]: 2026-01-31 07:43:52.123 226833 DEBUG oslo_concurrency.processutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 0de34c85-05be-461e-ae22-ac3816f1018d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:43:52 compute-2 nova_compute[226829]: 2026-01-31 07:43:52.359 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:52.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:52 compute-2 ceph-mon[77282]: pgmap v1351: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 170 B/s wr, 2 op/s
Jan 31 07:43:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:52.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:43:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:54.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:54 compute-2 nova_compute[226829]: 2026-01-31 07:43:54.796 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:54.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:55 compute-2 ceph-mon[77282]: pgmap v1352: 305 pgs: 305 active+clean; 41 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 2.4 KiB/s rd, 341 B/s wr, 3 op/s
Jan 31 07:43:55 compute-2 nova_compute[226829]: 2026-01-31 07:43:55.903 226833 DEBUG oslo_concurrency.processutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 0de34c85-05be-461e-ae22-ac3816f1018d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.780s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:43:55 compute-2 nova_compute[226829]: 2026-01-31 07:43:55.969 226833 DEBUG nova.storage.rbd_utils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] resizing rbd image 0de34c85-05be-461e-ae22-ac3816f1018d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:43:56 compute-2 ceph-mon[77282]: pgmap v1353: 305 pgs: 305 active+clean; 41 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 2.4 KiB/s rd, 341 B/s wr, 3 op/s
Jan 31 07:43:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:56.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:56.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:57 compute-2 nova_compute[226829]: 2026-01-31 07:43:57.066 226833 DEBUG nova.objects.instance [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lazy-loading 'migration_context' on Instance uuid 0de34c85-05be-461e-ae22-ac3816f1018d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:43:57 compute-2 nova_compute[226829]: 2026-01-31 07:43:57.363 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/334924200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:43:57 compute-2 nova_compute[226829]: 2026-01-31 07:43:57.501 226833 DEBUG nova.virt.libvirt.driver [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:43:57 compute-2 nova_compute[226829]: 2026-01-31 07:43:57.502 226833 DEBUG nova.virt.libvirt.driver [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Ensure instance console log exists: /var/lib/nova/instances/0de34c85-05be-461e-ae22-ac3816f1018d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:43:57 compute-2 nova_compute[226829]: 2026-01-31 07:43:57.503 226833 DEBUG oslo_concurrency.lockutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:43:57 compute-2 nova_compute[226829]: 2026-01-31 07:43:57.504 226833 DEBUG oslo_concurrency.lockutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:43:57 compute-2 nova_compute[226829]: 2026-01-31 07:43:57.504 226833 DEBUG oslo_concurrency.lockutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:43:57 compute-2 nova_compute[226829]: 2026-01-31 07:43:57.587 226833 DEBUG nova.network.neutron [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Successfully created port: 3c1679bd-49ed-45ef-8e6f-9f10415f8f33 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:43:58 compute-2 ceph-osd[79942]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 31 07:43:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:58.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:58 compute-2 ceph-mon[77282]: pgmap v1354: 305 pgs: 305 active+clean; 70 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 885 KiB/s wr, 19 op/s
Jan 31 07:43:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:43:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:58.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:43:59 compute-2 nova_compute[226829]: 2026-01-31 07:43:59.045 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:59 compute-2 nova_compute[226829]: 2026-01-31 07:43:59.214 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:43:59.212 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:43:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:43:59.217 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:43:59 compute-2 nova_compute[226829]: 2026-01-31 07:43:59.485 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:43:59 compute-2 nova_compute[226829]: 2026-01-31 07:43:59.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:44:00 compute-2 nova_compute[226829]: 2026-01-31 07:44:00.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:44:00 compute-2 nova_compute[226829]: 2026-01-31 07:44:00.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:44:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:00.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:44:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:00.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:44:01 compute-2 nova_compute[226829]: 2026-01-31 07:44:01.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:44:01 compute-2 nova_compute[226829]: 2026-01-31 07:44:01.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:44:01 compute-2 nova_compute[226829]: 2026-01-31 07:44:01.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:44:02 compute-2 nova_compute[226829]: 2026-01-31 07:44:02.015 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 07:44:02 compute-2 nova_compute[226829]: 2026-01-31 07:44:02.015 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:44:02 compute-2 nova_compute[226829]: 2026-01-31 07:44:02.016 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:44:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:02.221 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:44:02 compute-2 nova_compute[226829]: 2026-01-31 07:44:02.366 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:02 compute-2 nova_compute[226829]: 2026-01-31 07:44:02.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:44:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:02.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:02 compute-2 nova_compute[226829]: 2026-01-31 07:44:02.640 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:02 compute-2 nova_compute[226829]: 2026-01-31 07:44:02.640 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:02 compute-2 nova_compute[226829]: 2026-01-31 07:44:02.641 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:02 compute-2 nova_compute[226829]: 2026-01-31 07:44:02.641 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:44:02 compute-2 nova_compute[226829]: 2026-01-31 07:44:02.641 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:44:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:02.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:02 compute-2 nova_compute[226829]: 2026-01-31 07:44:02.922 226833 DEBUG nova.network.neutron [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Successfully updated port: 3c1679bd-49ed-45ef-8e6f-9f10415f8f33 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:44:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:44:03 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/85849488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:03 compute-2 nova_compute[226829]: 2026-01-31 07:44:03.117 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:44:03 compute-2 nova_compute[226829]: 2026-01-31 07:44:03.212 226833 DEBUG oslo_concurrency.lockutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquiring lock "refresh_cache-0de34c85-05be-461e-ae22-ac3816f1018d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:44:03 compute-2 nova_compute[226829]: 2026-01-31 07:44:03.213 226833 DEBUG oslo_concurrency.lockutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquired lock "refresh_cache-0de34c85-05be-461e-ae22-ac3816f1018d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:44:03 compute-2 nova_compute[226829]: 2026-01-31 07:44:03.213 226833 DEBUG nova.network.neutron [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:44:03 compute-2 nova_compute[226829]: 2026-01-31 07:44:03.218 226833 DEBUG nova.compute.manager [req-76de7dd5-3832-4b09-aa96-65b21aa9fc41 req-cd23db1c-2b40-4706-a8e4-97cd31aea162 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Received event network-changed-3c1679bd-49ed-45ef-8e6f-9f10415f8f33 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:44:03 compute-2 nova_compute[226829]: 2026-01-31 07:44:03.219 226833 DEBUG nova.compute.manager [req-76de7dd5-3832-4b09-aa96-65b21aa9fc41 req-cd23db1c-2b40-4706-a8e4-97cd31aea162 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Refreshing instance network info cache due to event network-changed-3c1679bd-49ed-45ef-8e6f-9f10415f8f33. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:44:03 compute-2 nova_compute[226829]: 2026-01-31 07:44:03.219 226833 DEBUG oslo_concurrency.lockutils [req-76de7dd5-3832-4b09-aa96-65b21aa9fc41 req-cd23db1c-2b40-4706-a8e4-97cd31aea162 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-0de34c85-05be-461e-ae22-ac3816f1018d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:44:03 compute-2 nova_compute[226829]: 2026-01-31 07:44:03.284 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:44:03 compute-2 nova_compute[226829]: 2026-01-31 07:44:03.285 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4629MB free_disk=20.969539642333984GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:44:03 compute-2 nova_compute[226829]: 2026-01-31 07:44:03.285 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:03 compute-2 nova_compute[226829]: 2026-01-31 07:44:03.286 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:03 compute-2 nova_compute[226829]: 2026-01-31 07:44:03.640 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 0de34c85-05be-461e-ae22-ac3816f1018d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:44:03 compute-2 nova_compute[226829]: 2026-01-31 07:44:03.641 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:44:03 compute-2 nova_compute[226829]: 2026-01-31 07:44:03.641 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:44:03 compute-2 nova_compute[226829]: 2026-01-31 07:44:03.703 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:44:03 compute-2 nova_compute[226829]: 2026-01-31 07:44:03.794 226833 DEBUG nova.network.neutron [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:44:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:44:04 compute-2 nova_compute[226829]: 2026-01-31 07:44:04.047 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:44:04 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/991046489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:04 compute-2 nova_compute[226829]: 2026-01-31 07:44:04.177 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:44:04 compute-2 nova_compute[226829]: 2026-01-31 07:44:04.183 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:44:04 compute-2 nova_compute[226829]: 2026-01-31 07:44:04.247 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:44:04 compute-2 nova_compute[226829]: 2026-01-31 07:44:04.429 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:44:04 compute-2 nova_compute[226829]: 2026-01-31 07:44:04.430 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:04.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:04.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:05 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma missed beacon ack from the monitors
Jan 31 07:44:06 compute-2 nova_compute[226829]: 2026-01-31 07:44:06.431 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:44:06 compute-2 nova_compute[226829]: 2026-01-31 07:44:06.431 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:44:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:06.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:06 compute-2 nova_compute[226829]: 2026-01-31 07:44:06.804 226833 DEBUG nova.network.neutron [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Updating instance_info_cache with network_info: [{"id": "3c1679bd-49ed-45ef-8e6f-9f10415f8f33", "address": "fa:16:3e:4f:a4:b8", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c1679bd-49", "ovs_interfaceid": "3c1679bd-49ed-45ef-8e6f-9f10415f8f33", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:44:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:06.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:06.851 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:06.852 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:06.852 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:06 compute-2 nova_compute[226829]: 2026-01-31 07:44:06.893 226833 DEBUG oslo_concurrency.lockutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Releasing lock "refresh_cache-0de34c85-05be-461e-ae22-ac3816f1018d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:44:06 compute-2 nova_compute[226829]: 2026-01-31 07:44:06.893 226833 DEBUG nova.compute.manager [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Instance network_info: |[{"id": "3c1679bd-49ed-45ef-8e6f-9f10415f8f33", "address": "fa:16:3e:4f:a4:b8", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c1679bd-49", "ovs_interfaceid": "3c1679bd-49ed-45ef-8e6f-9f10415f8f33", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:44:06 compute-2 nova_compute[226829]: 2026-01-31 07:44:06.893 226833 DEBUG oslo_concurrency.lockutils [req-76de7dd5-3832-4b09-aa96-65b21aa9fc41 req-cd23db1c-2b40-4706-a8e4-97cd31aea162 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-0de34c85-05be-461e-ae22-ac3816f1018d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:44:06 compute-2 nova_compute[226829]: 2026-01-31 07:44:06.894 226833 DEBUG nova.network.neutron [req-76de7dd5-3832-4b09-aa96-65b21aa9fc41 req-cd23db1c-2b40-4706-a8e4-97cd31aea162 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Refreshing network info cache for port 3c1679bd-49ed-45ef-8e6f-9f10415f8f33 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:44:06 compute-2 nova_compute[226829]: 2026-01-31 07:44:06.896 226833 DEBUG nova.virt.libvirt.driver [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Start _get_guest_xml network_info=[{"id": "3c1679bd-49ed-45ef-8e6f-9f10415f8f33", "address": "fa:16:3e:4f:a4:b8", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c1679bd-49", "ovs_interfaceid": "3c1679bd-49ed-45ef-8e6f-9f10415f8f33", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:44:06 compute-2 nova_compute[226829]: 2026-01-31 07:44:06.900 226833 WARNING nova.virt.libvirt.driver [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:44:06 compute-2 nova_compute[226829]: 2026-01-31 07:44:06.904 226833 DEBUG nova.virt.libvirt.host [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:44:06 compute-2 nova_compute[226829]: 2026-01-31 07:44:06.905 226833 DEBUG nova.virt.libvirt.host [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:44:06 compute-2 nova_compute[226829]: 2026-01-31 07:44:06.907 226833 DEBUG nova.virt.libvirt.host [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:44:06 compute-2 nova_compute[226829]: 2026-01-31 07:44:06.907 226833 DEBUG nova.virt.libvirt.host [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:44:06 compute-2 nova_compute[226829]: 2026-01-31 07:44:06.908 226833 DEBUG nova.virt.libvirt.driver [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:44:06 compute-2 nova_compute[226829]: 2026-01-31 07:44:06.909 226833 DEBUG nova.virt.hardware [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:44:06 compute-2 nova_compute[226829]: 2026-01-31 07:44:06.909 226833 DEBUG nova.virt.hardware [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:44:06 compute-2 nova_compute[226829]: 2026-01-31 07:44:06.909 226833 DEBUG nova.virt.hardware [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:44:06 compute-2 nova_compute[226829]: 2026-01-31 07:44:06.909 226833 DEBUG nova.virt.hardware [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:44:06 compute-2 nova_compute[226829]: 2026-01-31 07:44:06.910 226833 DEBUG nova.virt.hardware [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:44:06 compute-2 nova_compute[226829]: 2026-01-31 07:44:06.910 226833 DEBUG nova.virt.hardware [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:44:06 compute-2 nova_compute[226829]: 2026-01-31 07:44:06.910 226833 DEBUG nova.virt.hardware [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:44:06 compute-2 nova_compute[226829]: 2026-01-31 07:44:06.910 226833 DEBUG nova.virt.hardware [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:44:06 compute-2 nova_compute[226829]: 2026-01-31 07:44:06.910 226833 DEBUG nova.virt.hardware [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:44:06 compute-2 nova_compute[226829]: 2026-01-31 07:44:06.910 226833 DEBUG nova.virt.hardware [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:44:06 compute-2 nova_compute[226829]: 2026-01-31 07:44:06.911 226833 DEBUG nova.virt.hardware [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:44:06 compute-2 nova_compute[226829]: 2026-01-31 07:44:06.913 226833 DEBUG oslo_concurrency.processutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:44:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:44:07 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2181903236' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:44:07 compute-2 nova_compute[226829]: 2026-01-31 07:44:07.429 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:07 compute-2 nova_compute[226829]: 2026-01-31 07:44:07.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:44:07 compute-2 sudo[248505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:44:07 compute-2 sudo[248505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:07 compute-2 sudo[248505]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:08 compute-2 sudo[248530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:44:08 compute-2 sudo[248530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:08 compute-2 sudo[248530]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:08 compute-2 sudo[248555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:44:08 compute-2 sudo[248555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:08 compute-2 sudo[248555]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:08 compute-2 sudo[248580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:44:08 compute-2 sudo[248580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:08 compute-2 sudo[248580]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:08.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:08 compute-2 ceph-mon[77282]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Jan 31 07:44:08 compute-2 ceph-mon[77282]: paxos.1).electionLogic(29) init, last seen epoch 29, mid-election, bumping
Jan 31 07:44:08 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:44:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:08.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:09 compute-2 nova_compute[226829]: 2026-01-31 07:44:09.050 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:09 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma missed beacon ack from the monitors
Jan 31 07:44:10 compute-2 nova_compute[226829]: 2026-01-31 07:44:10.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:44:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:10.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:44:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:10.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:44:11 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:44:11 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:44:11 compute-2 nova_compute[226829]: 2026-01-31 07:44:11.400 226833 DEBUG nova.network.neutron [req-76de7dd5-3832-4b09-aa96-65b21aa9fc41 req-cd23db1c-2b40-4706-a8e4-97cd31aea162 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Updated VIF entry in instance network info cache for port 3c1679bd-49ed-45ef-8e6f-9f10415f8f33. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:44:11 compute-2 nova_compute[226829]: 2026-01-31 07:44:11.401 226833 DEBUG nova.network.neutron [req-76de7dd5-3832-4b09-aa96-65b21aa9fc41 req-cd23db1c-2b40-4706-a8e4-97cd31aea162 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Updating instance_info_cache with network_info: [{"id": "3c1679bd-49ed-45ef-8e6f-9f10415f8f33", "address": "fa:16:3e:4f:a4:b8", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c1679bd-49", "ovs_interfaceid": "3c1679bd-49ed-45ef-8e6f-9f10415f8f33", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:44:11 compute-2 nova_compute[226829]: 2026-01-31 07:44:11.465 226833 DEBUG oslo_concurrency.lockutils [req-76de7dd5-3832-4b09-aa96-65b21aa9fc41 req-cd23db1c-2b40-4706-a8e4-97cd31aea162 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-0de34c85-05be-461e-ae22-ac3816f1018d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:44:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/914263149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:11 compute-2 ceph-mon[77282]: pgmap v1355: 305 pgs: 305 active+clean; 73 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.0 MiB/s wr, 19 op/s
Jan 31 07:44:11 compute-2 sudo[248638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:44:12 compute-2 sudo[248638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:12 compute-2 sudo[248638]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:12 compute-2 sudo[248669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:44:12 compute-2 sudo[248669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:12 compute-2 sudo[248669]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:12 compute-2 podman[248662]: 2026-01-31 07:44:12.069077463 +0000 UTC m=+0.061995601 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 07:44:12 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:44:12 compute-2 nova_compute[226829]: 2026-01-31 07:44:12.150 226833 DEBUG oslo_concurrency.processutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.237s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:44:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:12.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:12 compute-2 nova_compute[226829]: 2026-01-31 07:44:12.697 226833 DEBUG nova.storage.rbd_utils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] rbd image 0de34c85-05be-461e-ae22-ac3816f1018d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:44:12 compute-2 nova_compute[226829]: 2026-01-31 07:44:12.701 226833 DEBUG oslo_concurrency.processutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:44:12 compute-2 nova_compute[226829]: 2026-01-31 07:44:12.717 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:12.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:13 compute-2 ceph-mon[77282]: pgmap v1356: 305 pgs: 305 active+clean; 88 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 31 07:44:13 compute-2 ceph-mon[77282]: pgmap v1357: 305 pgs: 305 active+clean; 88 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 31 07:44:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/85849488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/991046489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:13 compute-2 ceph-mon[77282]: pgmap v1358: 305 pgs: 305 active+clean; 88 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 31 07:44:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3402555874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:13 compute-2 ceph-mon[77282]: pgmap v1359: 305 pgs: 305 active+clean; 88 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 31 07:44:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2181903236' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:44:13 compute-2 ceph-mon[77282]: mon.compute-1 calling monitor election
Jan 31 07:44:13 compute-2 ceph-mon[77282]: mon.compute-2 calling monitor election
Jan 31 07:44:13 compute-2 ceph-mon[77282]: pgmap v1360: 305 pgs: 305 active+clean; 88 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 930 KiB/s wr, 16 op/s
Jan 31 07:44:13 compute-2 ceph-mon[77282]: pgmap v1361: 305 pgs: 305 active+clean; 88 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 754 KiB/s wr, 16 op/s
Jan 31 07:44:13 compute-2 ceph-mon[77282]: mon.compute-0 calling monitor election
Jan 31 07:44:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:44:13 compute-2 ceph-mon[77282]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 07:44:13 compute-2 ceph-mon[77282]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 07:44:13 compute-2 ceph-mon[77282]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 07:44:13 compute-2 ceph-mon[77282]: osdmap e203: 3 total, 3 up, 3 in
Jan 31 07:44:13 compute-2 ceph-mon[77282]: mgrmap e11: compute-0.hhuoua(active, since 38m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 07:44:13 compute-2 ceph-mon[77282]: overall HEALTH_OK
Jan 31 07:44:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:44:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3792388513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2815318421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:44:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:44:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:44:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:44:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:44:13 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1285714422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:44:13 compute-2 nova_compute[226829]: 2026-01-31 07:44:13.462 226833 DEBUG oslo_concurrency.processutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.761s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:44:13 compute-2 nova_compute[226829]: 2026-01-31 07:44:13.466 226833 DEBUG nova.virt.libvirt.vif [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:43:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1415676217',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1415676217',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1415676217',id=51,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9a5c5f11e8f24f898d16bceb9925aaa0',ramdisk_id='',reservation_id='r-3aspo384',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-536491326',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-536491326-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:43:51Z,user_data=None,user_id='4cdbfeb437d54df89a0fb0f6621b8fdc',uuid=0de34c85-05be-461e-ae22-ac3816f1018d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c1679bd-49ed-45ef-8e6f-9f10415f8f33", "address": "fa:16:3e:4f:a4:b8", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c1679bd-49", "ovs_interfaceid": "3c1679bd-49ed-45ef-8e6f-9f10415f8f33", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:44:13 compute-2 nova_compute[226829]: 2026-01-31 07:44:13.467 226833 DEBUG nova.network.os_vif_util [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Converting VIF {"id": "3c1679bd-49ed-45ef-8e6f-9f10415f8f33", "address": "fa:16:3e:4f:a4:b8", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c1679bd-49", "ovs_interfaceid": "3c1679bd-49ed-45ef-8e6f-9f10415f8f33", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:44:13 compute-2 nova_compute[226829]: 2026-01-31 07:44:13.468 226833 DEBUG nova.network.os_vif_util [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:a4:b8,bridge_name='br-int',has_traffic_filtering=True,id=3c1679bd-49ed-45ef-8e6f-9f10415f8f33,network=Network(7130ed58-0d3f-4534-9498-e2d59204c82c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c1679bd-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:44:13 compute-2 nova_compute[226829]: 2026-01-31 07:44:13.471 226833 DEBUG nova.objects.instance [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0de34c85-05be-461e-ae22-ac3816f1018d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:44:13 compute-2 nova_compute[226829]: 2026-01-31 07:44:13.508 226833 DEBUG nova.virt.libvirt.driver [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:44:13 compute-2 nova_compute[226829]:   <uuid>0de34c85-05be-461e-ae22-ac3816f1018d</uuid>
Jan 31 07:44:13 compute-2 nova_compute[226829]:   <name>instance-00000033</name>
Jan 31 07:44:13 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:44:13 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:44:13 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1415676217</nova:name>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:44:06</nova:creationTime>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:44:13 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:44:13 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:44:13 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:44:13 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:44:13 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:44:13 compute-2 nova_compute[226829]:         <nova:user uuid="4cdbfeb437d54df89a0fb0f6621b8fdc">tempest-ImagesOneServerNegativeTestJSON-536491326-project-member</nova:user>
Jan 31 07:44:13 compute-2 nova_compute[226829]:         <nova:project uuid="9a5c5f11e8f24f898d16bceb9925aaa0">tempest-ImagesOneServerNegativeTestJSON-536491326</nova:project>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 07:44:13 compute-2 nova_compute[226829]:         <nova:port uuid="3c1679bd-49ed-45ef-8e6f-9f10415f8f33">
Jan 31 07:44:13 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:44:13 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:44:13 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <system>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <entry name="serial">0de34c85-05be-461e-ae22-ac3816f1018d</entry>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <entry name="uuid">0de34c85-05be-461e-ae22-ac3816f1018d</entry>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     </system>
Jan 31 07:44:13 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:44:13 compute-2 nova_compute[226829]:   <os>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:   </os>
Jan 31 07:44:13 compute-2 nova_compute[226829]:   <features>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:   </features>
Jan 31 07:44:13 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:44:13 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:44:13 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/0de34c85-05be-461e-ae22-ac3816f1018d_disk">
Jan 31 07:44:13 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       </source>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:44:13 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/0de34c85-05be-461e-ae22-ac3816f1018d_disk.config">
Jan 31 07:44:13 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       </source>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:44:13 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:4f:a4:b8"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <target dev="tap3c1679bd-49"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/0de34c85-05be-461e-ae22-ac3816f1018d/console.log" append="off"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <video>
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     </video>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:44:13 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:44:13 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:44:13 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:44:13 compute-2 nova_compute[226829]: </domain>
Jan 31 07:44:13 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:44:13 compute-2 nova_compute[226829]: 2026-01-31 07:44:13.509 226833 DEBUG nova.compute.manager [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Preparing to wait for external event network-vif-plugged-3c1679bd-49ed-45ef-8e6f-9f10415f8f33 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:44:13 compute-2 nova_compute[226829]: 2026-01-31 07:44:13.510 226833 DEBUG oslo_concurrency.lockutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquiring lock "0de34c85-05be-461e-ae22-ac3816f1018d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:13 compute-2 nova_compute[226829]: 2026-01-31 07:44:13.510 226833 DEBUG oslo_concurrency.lockutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "0de34c85-05be-461e-ae22-ac3816f1018d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:13 compute-2 nova_compute[226829]: 2026-01-31 07:44:13.511 226833 DEBUG oslo_concurrency.lockutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "0de34c85-05be-461e-ae22-ac3816f1018d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:13 compute-2 nova_compute[226829]: 2026-01-31 07:44:13.512 226833 DEBUG nova.virt.libvirt.vif [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:43:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1415676217',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1415676217',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1415676217',id=51,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9a5c5f11e8f24f898d16bceb9925aaa0',ramdisk_id='',reservation_id='r-3aspo384',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-536491326',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-536491326-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:43:51Z,user_data=None,user_id='4cdbfeb437d54df89a0fb0f6621b8fdc',uuid=0de34c85-05be-461e-ae22-ac3816f1018d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c1679bd-49ed-45ef-8e6f-9f10415f8f33", "address": "fa:16:3e:4f:a4:b8", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c1679bd-49", "ovs_interfaceid": "3c1679bd-49ed-45ef-8e6f-9f10415f8f33", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:44:13 compute-2 nova_compute[226829]: 2026-01-31 07:44:13.513 226833 DEBUG nova.network.os_vif_util [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Converting VIF {"id": "3c1679bd-49ed-45ef-8e6f-9f10415f8f33", "address": "fa:16:3e:4f:a4:b8", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c1679bd-49", "ovs_interfaceid": "3c1679bd-49ed-45ef-8e6f-9f10415f8f33", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:44:13 compute-2 nova_compute[226829]: 2026-01-31 07:44:13.514 226833 DEBUG nova.network.os_vif_util [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:a4:b8,bridge_name='br-int',has_traffic_filtering=True,id=3c1679bd-49ed-45ef-8e6f-9f10415f8f33,network=Network(7130ed58-0d3f-4534-9498-e2d59204c82c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c1679bd-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:44:13 compute-2 nova_compute[226829]: 2026-01-31 07:44:13.515 226833 DEBUG os_vif [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:a4:b8,bridge_name='br-int',has_traffic_filtering=True,id=3c1679bd-49ed-45ef-8e6f-9f10415f8f33,network=Network(7130ed58-0d3f-4534-9498-e2d59204c82c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c1679bd-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:44:13 compute-2 nova_compute[226829]: 2026-01-31 07:44:13.516 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:13 compute-2 nova_compute[226829]: 2026-01-31 07:44:13.517 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:44:13 compute-2 nova_compute[226829]: 2026-01-31 07:44:13.518 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:44:13 compute-2 nova_compute[226829]: 2026-01-31 07:44:13.529 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:13 compute-2 nova_compute[226829]: 2026-01-31 07:44:13.530 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c1679bd-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:44:13 compute-2 nova_compute[226829]: 2026-01-31 07:44:13.531 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3c1679bd-49, col_values=(('external_ids', {'iface-id': '3c1679bd-49ed-45ef-8e6f-9f10415f8f33', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4f:a4:b8', 'vm-uuid': '0de34c85-05be-461e-ae22-ac3816f1018d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:44:13 compute-2 nova_compute[226829]: 2026-01-31 07:44:13.534 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:13 compute-2 nova_compute[226829]: 2026-01-31 07:44:13.536 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:44:13 compute-2 NetworkManager[48999]: <info>  [1769845453.5377] manager: (tap3c1679bd-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Jan 31 07:44:13 compute-2 nova_compute[226829]: 2026-01-31 07:44:13.543 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:13 compute-2 nova_compute[226829]: 2026-01-31 07:44:13.546 226833 INFO os_vif [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:a4:b8,bridge_name='br-int',has_traffic_filtering=True,id=3c1679bd-49ed-45ef-8e6f-9f10415f8f33,network=Network(7130ed58-0d3f-4534-9498-e2d59204c82c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c1679bd-49')
Jan 31 07:44:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:44:14 compute-2 nova_compute[226829]: 2026-01-31 07:44:14.051 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:14 compute-2 nova_compute[226829]: 2026-01-31 07:44:14.062 226833 DEBUG nova.virt.libvirt.driver [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:44:14 compute-2 nova_compute[226829]: 2026-01-31 07:44:14.063 226833 DEBUG nova.virt.libvirt.driver [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:44:14 compute-2 nova_compute[226829]: 2026-01-31 07:44:14.063 226833 DEBUG nova.virt.libvirt.driver [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] No VIF found with MAC fa:16:3e:4f:a4:b8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:44:14 compute-2 nova_compute[226829]: 2026-01-31 07:44:14.064 226833 INFO nova.virt.libvirt.driver [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Using config drive
Jan 31 07:44:14 compute-2 nova_compute[226829]: 2026-01-31 07:44:14.131 226833 DEBUG nova.storage.rbd_utils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] rbd image 0de34c85-05be-461e-ae22-ac3816f1018d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:44:14 compute-2 ceph-mon[77282]: pgmap v1362: 305 pgs: 305 active+clean; 93 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 8.4 KiB/s rd, 177 KiB/s wr, 13 op/s
Jan 31 07:44:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1285714422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:44:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:44:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:14.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:44:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:14.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:14 compute-2 nova_compute[226829]: 2026-01-31 07:44:14.924 226833 INFO nova.virt.libvirt.driver [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Creating config drive at /var/lib/nova/instances/0de34c85-05be-461e-ae22-ac3816f1018d/disk.config
Jan 31 07:44:14 compute-2 nova_compute[226829]: 2026-01-31 07:44:14.929 226833 DEBUG oslo_concurrency.processutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0de34c85-05be-461e-ae22-ac3816f1018d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp2zzu90qa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:44:15 compute-2 nova_compute[226829]: 2026-01-31 07:44:15.063 226833 DEBUG oslo_concurrency.processutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0de34c85-05be-461e-ae22-ac3816f1018d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp2zzu90qa" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:44:15 compute-2 nova_compute[226829]: 2026-01-31 07:44:15.097 226833 DEBUG nova.storage.rbd_utils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] rbd image 0de34c85-05be-461e-ae22-ac3816f1018d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:44:15 compute-2 nova_compute[226829]: 2026-01-31 07:44:15.103 226833 DEBUG oslo_concurrency.processutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0de34c85-05be-461e-ae22-ac3816f1018d/disk.config 0de34c85-05be-461e-ae22-ac3816f1018d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:44:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3142601042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:44:15 compute-2 nova_compute[226829]: 2026-01-31 07:44:15.585 226833 DEBUG oslo_concurrency.processutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0de34c85-05be-461e-ae22-ac3816f1018d/disk.config 0de34c85-05be-461e-ae22-ac3816f1018d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:44:15 compute-2 nova_compute[226829]: 2026-01-31 07:44:15.586 226833 INFO nova.virt.libvirt.driver [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Deleting local config drive /var/lib/nova/instances/0de34c85-05be-461e-ae22-ac3816f1018d/disk.config because it was imported into RBD.
Jan 31 07:44:15 compute-2 kernel: tap3c1679bd-49: entered promiscuous mode
Jan 31 07:44:15 compute-2 ovn_controller[133834]: 2026-01-31T07:44:15Z|00131|binding|INFO|Claiming lport 3c1679bd-49ed-45ef-8e6f-9f10415f8f33 for this chassis.
Jan 31 07:44:15 compute-2 ovn_controller[133834]: 2026-01-31T07:44:15Z|00132|binding|INFO|3c1679bd-49ed-45ef-8e6f-9f10415f8f33: Claiming fa:16:3e:4f:a4:b8 10.100.0.10
Jan 31 07:44:15 compute-2 NetworkManager[48999]: <info>  [1769845455.6734] manager: (tap3c1679bd-49): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Jan 31 07:44:15 compute-2 nova_compute[226829]: 2026-01-31 07:44:15.672 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:15 compute-2 nova_compute[226829]: 2026-01-31 07:44:15.674 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:15 compute-2 nova_compute[226829]: 2026-01-31 07:44:15.679 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:15 compute-2 systemd-udevd[248831]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:44:15 compute-2 systemd-machined[195142]: New machine qemu-22-instance-00000033.
Jan 31 07:44:15 compute-2 ovn_controller[133834]: 2026-01-31T07:44:15Z|00133|binding|INFO|Setting lport 3c1679bd-49ed-45ef-8e6f-9f10415f8f33 ovn-installed in OVS
Jan 31 07:44:15 compute-2 nova_compute[226829]: 2026-01-31 07:44:15.714 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:15 compute-2 systemd[1]: Started Virtual Machine qemu-22-instance-00000033.
Jan 31 07:44:15 compute-2 NetworkManager[48999]: <info>  [1769845455.7219] device (tap3c1679bd-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:44:15 compute-2 NetworkManager[48999]: <info>  [1769845455.7267] device (tap3c1679bd-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:44:15 compute-2 ovn_controller[133834]: 2026-01-31T07:44:15Z|00134|binding|INFO|Setting lport 3c1679bd-49ed-45ef-8e6f-9f10415f8f33 up in Southbound
Jan 31 07:44:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:15.765 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:a4:b8 10.100.0.10'], port_security=['fa:16:3e:4f:a4:b8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '0de34c85-05be-461e-ae22-ac3816f1018d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7130ed58-0d3f-4534-9498-e2d59204c82c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9a5c5f11e8f24f898d16bceb9925aaa0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9d514bed-4c59-42dc-a403-a5a9a9cfa795', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0203aeab-482d-423f-9cbc-afbc1fe3631d, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=3c1679bd-49ed-45ef-8e6f-9f10415f8f33) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:44:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:15.767 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 3c1679bd-49ed-45ef-8e6f-9f10415f8f33 in datapath 7130ed58-0d3f-4534-9498-e2d59204c82c bound to our chassis
Jan 31 07:44:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:15.771 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7130ed58-0d3f-4534-9498-e2d59204c82c
Jan 31 07:44:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:15.788 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[25e615b9-6e20-4994-a26b-3ae2cb78b8ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:15.790 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7130ed58-01 in ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:44:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:15.794 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7130ed58-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:44:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:15.794 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[70a5b493-4ef0-4be5-9dfe-682a3dfeb0aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:15.796 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5e37e38f-87c7-4e47-98be-08db3a6582e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:15.813 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[e1294596-3360-4062-8f2b-aa66923e4c6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:15.828 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e2f9bf5c-e213-45ac-9880-529ddb58fc4d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:15.862 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[2887be23-9ae1-4623-ba81-45eec7b8f031]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:15 compute-2 NetworkManager[48999]: <info>  [1769845455.8712] manager: (tap7130ed58-00): new Veth device (/org/freedesktop/NetworkManager/Devices/77)
Jan 31 07:44:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:15.869 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3489432e-1c5f-482a-929c-ee0a779e3856]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:15.900 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[7dea4b65-2c6f-45ae-b464-e4f66c33ee46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:15.905 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[a019f2a2-fbd4-4b6b-bcd1-a32f4b79c0b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:15 compute-2 NetworkManager[48999]: <info>  [1769845455.9302] device (tap7130ed58-00): carrier: link connected
Jan 31 07:44:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:15.935 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[7552b281-ce5b-4e89-9156-7466cdd9222a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:15.955 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1d49462b-b58b-4c85-8463-b3320ff51487]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7130ed58-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:8a:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570840, 'reachable_time': 42364, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248864, 'error': None, 'target': 'ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:15.972 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[88fba500-8a7f-42c9-967f-c556eb77f2cd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:8a47'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570840, 'tstamp': 570840}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248865, 'error': None, 'target': 'ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:15.990 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e9e642dc-94c6-4d6f-b75c-186a4b50f57c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7130ed58-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:8a:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570840, 'reachable_time': 42364, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 248866, 'error': None, 'target': 'ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:16.020 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f4035d5b-903b-412e-afff-03d8d34104c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:16.072 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0043e652-31bc-4ce2-9088-4644a26cbb63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:16.075 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7130ed58-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:16.075 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:16.076 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7130ed58-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:44:16 compute-2 nova_compute[226829]: 2026-01-31 07:44:16.078 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:16 compute-2 kernel: tap7130ed58-00: entered promiscuous mode
Jan 31 07:44:16 compute-2 NetworkManager[48999]: <info>  [1769845456.0799] manager: (tap7130ed58-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:16.083 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7130ed58-00, col_values=(('external_ids', {'iface-id': '498a34f8-98b0-44b9-8d4d-24ad7111bb4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:44:16 compute-2 ovn_controller[133834]: 2026-01-31T07:44:16Z|00135|binding|INFO|Releasing lport 498a34f8-98b0-44b9-8d4d-24ad7111bb4f from this chassis (sb_readonly=0)
Jan 31 07:44:16 compute-2 nova_compute[226829]: 2026-01-31 07:44:16.094 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:16.096 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7130ed58-0d3f-4534-9498-e2d59204c82c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7130ed58-0d3f-4534-9498-e2d59204c82c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:16.098 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9e4f3ddd-bcba-4c3a-a74c-e4db699d1fb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:16.099 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]: global
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-7130ed58-0d3f-4534-9498-e2d59204c82c
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/7130ed58-0d3f-4534-9498-e2d59204c82c.pid.haproxy
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 7130ed58-0d3f-4534-9498-e2d59204c82c
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:44:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:16.100 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c', 'env', 'PROCESS_TAG=haproxy-7130ed58-0d3f-4534-9498-e2d59204c82c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7130ed58-0d3f-4534-9498-e2d59204c82c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:44:16 compute-2 nova_compute[226829]: 2026-01-31 07:44:16.226 226833 DEBUG nova.compute.manager [req-5c302337-7b2a-4330-9620-14ca7654db98 req-ba69e4b1-fefa-4a4d-9293-384a475d0494 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Received event network-vif-plugged-3c1679bd-49ed-45ef-8e6f-9f10415f8f33 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:44:16 compute-2 nova_compute[226829]: 2026-01-31 07:44:16.227 226833 DEBUG oslo_concurrency.lockutils [req-5c302337-7b2a-4330-9620-14ca7654db98 req-ba69e4b1-fefa-4a4d-9293-384a475d0494 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0de34c85-05be-461e-ae22-ac3816f1018d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:16 compute-2 nova_compute[226829]: 2026-01-31 07:44:16.228 226833 DEBUG oslo_concurrency.lockutils [req-5c302337-7b2a-4330-9620-14ca7654db98 req-ba69e4b1-fefa-4a4d-9293-384a475d0494 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0de34c85-05be-461e-ae22-ac3816f1018d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:16 compute-2 nova_compute[226829]: 2026-01-31 07:44:16.228 226833 DEBUG oslo_concurrency.lockutils [req-5c302337-7b2a-4330-9620-14ca7654db98 req-ba69e4b1-fefa-4a4d-9293-384a475d0494 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0de34c85-05be-461e-ae22-ac3816f1018d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:16 compute-2 nova_compute[226829]: 2026-01-31 07:44:16.229 226833 DEBUG nova.compute.manager [req-5c302337-7b2a-4330-9620-14ca7654db98 req-ba69e4b1-fefa-4a4d-9293-384a475d0494 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Processing event network-vif-plugged-3c1679bd-49ed-45ef-8e6f-9f10415f8f33 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:44:16 compute-2 ceph-mon[77282]: pgmap v1363: 305 pgs: 305 active+clean; 93 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 6.3 KiB/s rd, 176 KiB/s wr, 10 op/s
Jan 31 07:44:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1037439228' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:44:16 compute-2 podman[248905]: 2026-01-31 07:44:16.450341074 +0000 UTC m=+0.023496457 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:44:16 compute-2 podman[248905]: 2026-01-31 07:44:16.606696601 +0000 UTC m=+0.179851994 container create 19d15f6c5b5a040d94876e001441bf4fb9bc005a604d038f20f3726b3622a8e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:44:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:44:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:16.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:44:16 compute-2 systemd[1]: Started libpod-conmon-19d15f6c5b5a040d94876e001441bf4fb9bc005a604d038f20f3726b3622a8e4.scope.
Jan 31 07:44:16 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:44:16 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40de429f1fbe313a65cc0afb4bcb4ba7ab156a9159671bdcb0267f96a94813d0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:44:16 compute-2 podman[248905]: 2026-01-31 07:44:16.702049484 +0000 UTC m=+0.275204857 container init 19d15f6c5b5a040d94876e001441bf4fb9bc005a604d038f20f3726b3622a8e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Jan 31 07:44:16 compute-2 podman[248905]: 2026-01-31 07:44:16.707918762 +0000 UTC m=+0.281074125 container start 19d15f6c5b5a040d94876e001441bf4fb9bc005a604d038f20f3726b3622a8e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:44:16 compute-2 neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c[248949]: [NOTICE]   (248956) : New worker (248960) forked
Jan 31 07:44:16 compute-2 neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c[248949]: [NOTICE]   (248956) : Loading success.
Jan 31 07:44:16 compute-2 nova_compute[226829]: 2026-01-31 07:44:16.829 226833 DEBUG nova.compute.manager [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:44:16 compute-2 nova_compute[226829]: 2026-01-31 07:44:16.830 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845456.8289514, 0de34c85-05be-461e-ae22-ac3816f1018d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:44:16 compute-2 nova_compute[226829]: 2026-01-31 07:44:16.831 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] VM Started (Lifecycle Event)
Jan 31 07:44:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:44:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:16.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:44:16 compute-2 nova_compute[226829]: 2026-01-31 07:44:16.834 226833 DEBUG nova.virt.libvirt.driver [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:44:16 compute-2 nova_compute[226829]: 2026-01-31 07:44:16.838 226833 INFO nova.virt.libvirt.driver [-] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Instance spawned successfully.
Jan 31 07:44:16 compute-2 nova_compute[226829]: 2026-01-31 07:44:16.839 226833 DEBUG nova.virt.libvirt.driver [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:44:16 compute-2 nova_compute[226829]: 2026-01-31 07:44:16.894 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:44:16 compute-2 nova_compute[226829]: 2026-01-31 07:44:16.899 226833 DEBUG nova.virt.libvirt.driver [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:44:16 compute-2 nova_compute[226829]: 2026-01-31 07:44:16.899 226833 DEBUG nova.virt.libvirt.driver [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:44:16 compute-2 nova_compute[226829]: 2026-01-31 07:44:16.900 226833 DEBUG nova.virt.libvirt.driver [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:44:16 compute-2 nova_compute[226829]: 2026-01-31 07:44:16.900 226833 DEBUG nova.virt.libvirt.driver [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:44:16 compute-2 nova_compute[226829]: 2026-01-31 07:44:16.901 226833 DEBUG nova.virt.libvirt.driver [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:44:16 compute-2 nova_compute[226829]: 2026-01-31 07:44:16.901 226833 DEBUG nova.virt.libvirt.driver [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:44:16 compute-2 nova_compute[226829]: 2026-01-31 07:44:16.906 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:44:16 compute-2 nova_compute[226829]: 2026-01-31 07:44:16.983 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:44:16 compute-2 nova_compute[226829]: 2026-01-31 07:44:16.984 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845456.8301218, 0de34c85-05be-461e-ae22-ac3816f1018d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:44:16 compute-2 nova_compute[226829]: 2026-01-31 07:44:16.984 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] VM Paused (Lifecycle Event)
Jan 31 07:44:17 compute-2 nova_compute[226829]: 2026-01-31 07:44:17.019 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:44:17 compute-2 nova_compute[226829]: 2026-01-31 07:44:17.024 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845456.8337631, 0de34c85-05be-461e-ae22-ac3816f1018d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:44:17 compute-2 nova_compute[226829]: 2026-01-31 07:44:17.024 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] VM Resumed (Lifecycle Event)
Jan 31 07:44:17 compute-2 nova_compute[226829]: 2026-01-31 07:44:17.078 226833 INFO nova.compute.manager [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Took 25.14 seconds to spawn the instance on the hypervisor.
Jan 31 07:44:17 compute-2 nova_compute[226829]: 2026-01-31 07:44:17.079 226833 DEBUG nova.compute.manager [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:44:17 compute-2 nova_compute[226829]: 2026-01-31 07:44:17.082 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:44:17 compute-2 nova_compute[226829]: 2026-01-31 07:44:17.092 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:44:17 compute-2 nova_compute[226829]: 2026-01-31 07:44:17.139 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:44:17 compute-2 nova_compute[226829]: 2026-01-31 07:44:17.189 226833 INFO nova.compute.manager [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Took 28.30 seconds to build instance.
Jan 31 07:44:17 compute-2 nova_compute[226829]: 2026-01-31 07:44:17.286 226833 DEBUG oslo_concurrency.lockutils [None req-a90b6f90-e16a-4b35-9e6a-92efc16279e2 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "0de34c85-05be-461e-ae22-ac3816f1018d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 29.270s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:17 compute-2 ceph-mon[77282]: pgmap v1364: 305 pgs: 305 active+clean; 134 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 07:44:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2187030413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:18 compute-2 nova_compute[226829]: 2026-01-31 07:44:18.424 226833 DEBUG nova.compute.manager [req-3067b057-7a25-44b1-acae-2dc7e72d4d51 req-b8853c08-0116-4a72-ad49-73b3d40e2068 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Received event network-vif-plugged-3c1679bd-49ed-45ef-8e6f-9f10415f8f33 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:44:18 compute-2 nova_compute[226829]: 2026-01-31 07:44:18.425 226833 DEBUG oslo_concurrency.lockutils [req-3067b057-7a25-44b1-acae-2dc7e72d4d51 req-b8853c08-0116-4a72-ad49-73b3d40e2068 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0de34c85-05be-461e-ae22-ac3816f1018d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:18 compute-2 nova_compute[226829]: 2026-01-31 07:44:18.426 226833 DEBUG oslo_concurrency.lockutils [req-3067b057-7a25-44b1-acae-2dc7e72d4d51 req-b8853c08-0116-4a72-ad49-73b3d40e2068 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0de34c85-05be-461e-ae22-ac3816f1018d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:18 compute-2 nova_compute[226829]: 2026-01-31 07:44:18.427 226833 DEBUG oslo_concurrency.lockutils [req-3067b057-7a25-44b1-acae-2dc7e72d4d51 req-b8853c08-0116-4a72-ad49-73b3d40e2068 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0de34c85-05be-461e-ae22-ac3816f1018d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:18 compute-2 nova_compute[226829]: 2026-01-31 07:44:18.427 226833 DEBUG nova.compute.manager [req-3067b057-7a25-44b1-acae-2dc7e72d4d51 req-b8853c08-0116-4a72-ad49-73b3d40e2068 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] No waiting events found dispatching network-vif-plugged-3c1679bd-49ed-45ef-8e6f-9f10415f8f33 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:44:18 compute-2 nova_compute[226829]: 2026-01-31 07:44:18.428 226833 WARNING nova.compute.manager [req-3067b057-7a25-44b1-acae-2dc7e72d4d51 req-b8853c08-0116-4a72-ad49-73b3d40e2068 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Received unexpected event network-vif-plugged-3c1679bd-49ed-45ef-8e6f-9f10415f8f33 for instance with vm_state active and task_state None.
Jan 31 07:44:18 compute-2 nova_compute[226829]: 2026-01-31 07:44:18.534 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:18.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:44:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:18.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:44:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:44:19 compute-2 nova_compute[226829]: 2026-01-31 07:44:19.063 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:20 compute-2 podman[248972]: 2026-01-31 07:44:20.198434173 +0000 UTC m=+0.075807285 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127)
Jan 31 07:44:20 compute-2 ceph-mon[77282]: pgmap v1365: 305 pgs: 305 active+clean; 134 MiB data, 533 MiB used, 20 GiB / 21 GiB avail; 73 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 31 07:44:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:44:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:20.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:44:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:44:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:20.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:44:20 compute-2 sudo[248990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:44:20 compute-2 sudo[248990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:20 compute-2 sudo[248990]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:20 compute-2 sudo[249015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:44:20 compute-2 sudo[249015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:20 compute-2 sudo[249015]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:44:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:44:21 compute-2 ceph-mon[77282]: pgmap v1366: 305 pgs: 305 active+clean; 166 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.8 MiB/s wr, 189 op/s
Jan 31 07:44:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:22.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2053522155' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:44:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:22.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:23 compute-2 nova_compute[226829]: 2026-01-31 07:44:23.537 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2050046995' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:44:23 compute-2 ceph-mon[77282]: pgmap v1367: 305 pgs: 305 active+clean; 175 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 207 op/s
Jan 31 07:44:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:44:24 compute-2 nova_compute[226829]: 2026-01-31 07:44:24.064 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:44:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:24.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:44:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:24.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:25 compute-2 ceph-mon[77282]: pgmap v1368: 305 pgs: 305 active+clean; 175 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.4 MiB/s wr, 196 op/s
Jan 31 07:44:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3591188333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2346548695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:26.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:26.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:28 compute-2 ceph-mon[77282]: pgmap v1369: 305 pgs: 305 active+clean; 134 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.4 MiB/s wr, 226 op/s
Jan 31 07:44:28 compute-2 nova_compute[226829]: 2026-01-31 07:44:28.540 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:44:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:28.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:44:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:28.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:44:29 compute-2 nova_compute[226829]: 2026-01-31 07:44:29.063 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:30 compute-2 ceph-mon[77282]: pgmap v1370: 305 pgs: 305 active+clean; 145 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.3 MiB/s wr, 248 op/s
Jan 31 07:44:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:44:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:30.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:44:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:44:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:30.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:44:31 compute-2 ovn_controller[133834]: 2026-01-31T07:44:31Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4f:a4:b8 10.100.0.10
Jan 31 07:44:31 compute-2 ovn_controller[133834]: 2026-01-31T07:44:31Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4f:a4:b8 10.100.0.10
Jan 31 07:44:31 compute-2 ceph-mon[77282]: pgmap v1371: 305 pgs: 305 active+clean; 192 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.9 MiB/s wr, 304 op/s
Jan 31 07:44:32 compute-2 sudo[249048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:44:32 compute-2 sudo[249048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:32 compute-2 sudo[249048]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:32 compute-2 sudo[249073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:44:32 compute-2 sudo[249073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:32 compute-2 sudo[249073]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:32.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:44:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:32.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:44:33 compute-2 nova_compute[226829]: 2026-01-31 07:44:33.395 226833 DEBUG nova.compute.manager [None req-de447ad1-7f32-412b-ab0b-37e220394d25 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:44:33 compute-2 nova_compute[226829]: 2026-01-31 07:44:33.542 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:33 compute-2 nova_compute[226829]: 2026-01-31 07:44:33.831 226833 INFO nova.compute.manager [None req-de447ad1-7f32-412b-ab0b-37e220394d25 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] instance snapshotting
Jan 31 07:44:33 compute-2 ceph-mon[77282]: pgmap v1372: 305 pgs: 305 active+clean; 200 MiB data, 576 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.2 MiB/s wr, 161 op/s
Jan 31 07:44:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:44:34 compute-2 nova_compute[226829]: 2026-01-31 07:44:34.065 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:44:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:34.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:44:34 compute-2 nova_compute[226829]: 2026-01-31 07:44:34.846 226833 INFO nova.virt.libvirt.driver [None req-de447ad1-7f32-412b-ab0b-37e220394d25 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Beginning live snapshot process
Jan 31 07:44:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:34.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:35 compute-2 nova_compute[226829]: 2026-01-31 07:44:35.652 226833 DEBUG nova.virt.libvirt.imagebackend [None req-de447ad1-7f32-412b-ab0b-37e220394d25 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 31 07:44:35 compute-2 ceph-mon[77282]: pgmap v1373: 305 pgs: 305 active+clean; 200 MiB data, 576 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 143 op/s
Jan 31 07:44:36 compute-2 nova_compute[226829]: 2026-01-31 07:44:36.439 226833 DEBUG nova.storage.rbd_utils [None req-de447ad1-7f32-412b-ab0b-37e220394d25 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] creating snapshot(f4383d4db5924752acf3650c3456d698) on rbd image(0de34c85-05be-461e-ae22-ac3816f1018d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 07:44:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:44:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:36.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:44:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:36.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e204 e204: 3 total, 3 up, 3 in
Jan 31 07:44:36 compute-2 nova_compute[226829]: 2026-01-31 07:44:36.944 226833 DEBUG nova.storage.rbd_utils [None req-de447ad1-7f32-412b-ab0b-37e220394d25 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] cloning vms/0de34c85-05be-461e-ae22-ac3816f1018d_disk@f4383d4db5924752acf3650c3456d698 to images/ed46ac7b-1d36-47b6-9e31-f3feaccdcefd clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 07:44:37 compute-2 nova_compute[226829]: 2026-01-31 07:44:37.096 226833 DEBUG nova.storage.rbd_utils [None req-de447ad1-7f32-412b-ab0b-37e220394d25 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] flattening images/ed46ac7b-1d36-47b6-9e31-f3feaccdcefd flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 31 07:44:37 compute-2 nova_compute[226829]: 2026-01-31 07:44:37.690 226833 DEBUG nova.storage.rbd_utils [None req-de447ad1-7f32-412b-ab0b-37e220394d25 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] removing snapshot(f4383d4db5924752acf3650c3456d698) on rbd image(0de34c85-05be-461e-ae22-ac3816f1018d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 07:44:37 compute-2 ceph-mon[77282]: pgmap v1374: 305 pgs: 305 active+clean; 213 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 186 op/s
Jan 31 07:44:37 compute-2 ceph-mon[77282]: osdmap e204: 3 total, 3 up, 3 in
Jan 31 07:44:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e205 e205: 3 total, 3 up, 3 in
Jan 31 07:44:38 compute-2 nova_compute[226829]: 2026-01-31 07:44:38.079 226833 DEBUG nova.storage.rbd_utils [None req-de447ad1-7f32-412b-ab0b-37e220394d25 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] creating snapshot(snap) on rbd image(ed46ac7b-1d36-47b6-9e31-f3feaccdcefd) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 07:44:38 compute-2 nova_compute[226829]: 2026-01-31 07:44:38.544 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:44:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:38.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:44:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:38.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:44:39 compute-2 nova_compute[226829]: 2026-01-31 07:44:39.067 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:39 compute-2 ceph-mon[77282]: osdmap e205: 3 total, 3 up, 3 in
Jan 31 07:44:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e206 e206: 3 total, 3 up, 3 in
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver [None req-de447ad1-7f32-412b-ab0b-37e220394d25 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image ed46ac7b-1d36-47b6-9e31-f3feaccdcefd could not be found.
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID ed46ac7b-1d36-47b6-9e31-f3feaccdcefd
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver 
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver 
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image ed46ac7b-1d36-47b6-9e31-f3feaccdcefd could not be found.
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.138 226833 ERROR nova.virt.libvirt.driver 
Jan 31 07:44:40 compute-2 nova_compute[226829]: 2026-01-31 07:44:40.221 226833 DEBUG nova.storage.rbd_utils [None req-de447ad1-7f32-412b-ab0b-37e220394d25 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] removing snapshot(snap) on rbd image(ed46ac7b-1d36-47b6-9e31-f3feaccdcefd) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 07:44:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:44:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:40.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:44:40 compute-2 ceph-mon[77282]: pgmap v1377: 305 pgs: 305 active+clean; 238 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.2 MiB/s wr, 121 op/s
Jan 31 07:44:40 compute-2 ceph-mon[77282]: osdmap e206: 3 total, 3 up, 3 in
Jan 31 07:44:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2609713837' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:44:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e207 e207: 3 total, 3 up, 3 in
Jan 31 07:44:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:44:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:40.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:44:41 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4087877508' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:44:41 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1775213986' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:41 compute-2 ceph-mon[77282]: osdmap e207: 3 total, 3 up, 3 in
Jan 31 07:44:41 compute-2 ceph-mon[77282]: pgmap v1380: 305 pgs: 305 active+clean; 313 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 17 MiB/s wr, 346 op/s
Jan 31 07:44:42 compute-2 podman[249281]: 2026-01-31 07:44:42.259836443 +0000 UTC m=+0.133366424 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 07:44:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:42.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:42.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:43 compute-2 nova_compute[226829]: 2026-01-31 07:44:43.548 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:43 compute-2 ceph-mon[77282]: pgmap v1381: 305 pgs: 305 active+clean; 301 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 8.7 MiB/s rd, 12 MiB/s wr, 301 op/s
Jan 31 07:44:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:44:44 compute-2 nova_compute[226829]: 2026-01-31 07:44:44.069 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:44:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:44.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:44:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:44.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1878130030' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:44:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1878130030' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:44:45 compute-2 nova_compute[226829]: 2026-01-31 07:44:45.191 226833 WARNING nova.compute.manager [None req-de447ad1-7f32-412b-ab0b-37e220394d25 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Image not found during snapshot: nova.exception.ImageNotFound: Image ed46ac7b-1d36-47b6-9e31-f3feaccdcefd could not be found.
Jan 31 07:44:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e208 e208: 3 total, 3 up, 3 in
Jan 31 07:44:46 compute-2 ceph-mon[77282]: pgmap v1382: 305 pgs: 305 active+clean; 301 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 8.5 MiB/s wr, 220 op/s
Jan 31 07:44:46 compute-2 ceph-mon[77282]: osdmap e208: 3 total, 3 up, 3 in
Jan 31 07:44:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:44:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:46.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:44:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:46.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:47 compute-2 nova_compute[226829]: 2026-01-31 07:44:47.217 226833 DEBUG oslo_concurrency.lockutils [None req-291e6aa4-f660-4f98-a4dd-59d4ccad68ad 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquiring lock "0de34c85-05be-461e-ae22-ac3816f1018d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:47 compute-2 nova_compute[226829]: 2026-01-31 07:44:47.218 226833 DEBUG oslo_concurrency.lockutils [None req-291e6aa4-f660-4f98-a4dd-59d4ccad68ad 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "0de34c85-05be-461e-ae22-ac3816f1018d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:47 compute-2 nova_compute[226829]: 2026-01-31 07:44:47.218 226833 DEBUG oslo_concurrency.lockutils [None req-291e6aa4-f660-4f98-a4dd-59d4ccad68ad 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquiring lock "0de34c85-05be-461e-ae22-ac3816f1018d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:47 compute-2 nova_compute[226829]: 2026-01-31 07:44:47.218 226833 DEBUG oslo_concurrency.lockutils [None req-291e6aa4-f660-4f98-a4dd-59d4ccad68ad 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "0de34c85-05be-461e-ae22-ac3816f1018d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:47 compute-2 nova_compute[226829]: 2026-01-31 07:44:47.219 226833 DEBUG oslo_concurrency.lockutils [None req-291e6aa4-f660-4f98-a4dd-59d4ccad68ad 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "0de34c85-05be-461e-ae22-ac3816f1018d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:47 compute-2 nova_compute[226829]: 2026-01-31 07:44:47.221 226833 INFO nova.compute.manager [None req-291e6aa4-f660-4f98-a4dd-59d4ccad68ad 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Terminating instance
Jan 31 07:44:47 compute-2 nova_compute[226829]: 2026-01-31 07:44:47.222 226833 DEBUG nova.compute.manager [None req-291e6aa4-f660-4f98-a4dd-59d4ccad68ad 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:44:47 compute-2 kernel: tap3c1679bd-49 (unregistering): left promiscuous mode
Jan 31 07:44:47 compute-2 NetworkManager[48999]: <info>  [1769845487.2976] device (tap3c1679bd-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:44:47 compute-2 nova_compute[226829]: 2026-01-31 07:44:47.307 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:47 compute-2 ovn_controller[133834]: 2026-01-31T07:44:47Z|00136|binding|INFO|Releasing lport 3c1679bd-49ed-45ef-8e6f-9f10415f8f33 from this chassis (sb_readonly=0)
Jan 31 07:44:47 compute-2 ovn_controller[133834]: 2026-01-31T07:44:47Z|00137|binding|INFO|Setting lport 3c1679bd-49ed-45ef-8e6f-9f10415f8f33 down in Southbound
Jan 31 07:44:47 compute-2 ovn_controller[133834]: 2026-01-31T07:44:47Z|00138|binding|INFO|Removing iface tap3c1679bd-49 ovn-installed in OVS
Jan 31 07:44:47 compute-2 nova_compute[226829]: 2026-01-31 07:44:47.309 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:47 compute-2 nova_compute[226829]: 2026-01-31 07:44:47.317 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:47.323 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:a4:b8 10.100.0.10'], port_security=['fa:16:3e:4f:a4:b8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '0de34c85-05be-461e-ae22-ac3816f1018d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7130ed58-0d3f-4534-9498-e2d59204c82c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9a5c5f11e8f24f898d16bceb9925aaa0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9d514bed-4c59-42dc-a403-a5a9a9cfa795', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0203aeab-482d-423f-9cbc-afbc1fe3631d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=3c1679bd-49ed-45ef-8e6f-9f10415f8f33) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:44:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:47.329 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 3c1679bd-49ed-45ef-8e6f-9f10415f8f33 in datapath 7130ed58-0d3f-4534-9498-e2d59204c82c unbound from our chassis
Jan 31 07:44:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:47.335 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7130ed58-0d3f-4534-9498-e2d59204c82c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:44:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:47.339 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5f062be1-1180-4df9-b1e8-fbd159b3ebc7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:47.341 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c namespace which is not needed anymore
Jan 31 07:44:47 compute-2 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000033.scope: Deactivated successfully.
Jan 31 07:44:47 compute-2 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000033.scope: Consumed 14.452s CPU time.
Jan 31 07:44:47 compute-2 systemd-machined[195142]: Machine qemu-22-instance-00000033 terminated.
Jan 31 07:44:47 compute-2 nova_compute[226829]: 2026-01-31 07:44:47.466 226833 INFO nova.virt.libvirt.driver [-] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Instance destroyed successfully.
Jan 31 07:44:47 compute-2 nova_compute[226829]: 2026-01-31 07:44:47.467 226833 DEBUG nova.objects.instance [None req-291e6aa4-f660-4f98-a4dd-59d4ccad68ad 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lazy-loading 'resources' on Instance uuid 0de34c85-05be-461e-ae22-ac3816f1018d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:44:47 compute-2 neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c[248949]: [NOTICE]   (248956) : haproxy version is 2.8.14-c23fe91
Jan 31 07:44:47 compute-2 neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c[248949]: [NOTICE]   (248956) : path to executable is /usr/sbin/haproxy
Jan 31 07:44:47 compute-2 neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c[248949]: [WARNING]  (248956) : Exiting Master process...
Jan 31 07:44:47 compute-2 neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c[248949]: [ALERT]    (248956) : Current worker (248960) exited with code 143 (Terminated)
Jan 31 07:44:47 compute-2 neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c[248949]: [WARNING]  (248956) : All workers exited. Exiting... (0)
Jan 31 07:44:47 compute-2 systemd[1]: libpod-19d15f6c5b5a040d94876e001441bf4fb9bc005a604d038f20f3726b3622a8e4.scope: Deactivated successfully.
Jan 31 07:44:47 compute-2 podman[249336]: 2026-01-31 07:44:47.496740455 +0000 UTC m=+0.056914622 container died 19d15f6c5b5a040d94876e001441bf4fb9bc005a604d038f20f3726b3622a8e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:44:47 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-19d15f6c5b5a040d94876e001441bf4fb9bc005a604d038f20f3726b3622a8e4-userdata-shm.mount: Deactivated successfully.
Jan 31 07:44:47 compute-2 systemd[1]: var-lib-containers-storage-overlay-40de429f1fbe313a65cc0afb4bcb4ba7ab156a9159671bdcb0267f96a94813d0-merged.mount: Deactivated successfully.
Jan 31 07:44:47 compute-2 podman[249336]: 2026-01-31 07:44:47.536959495 +0000 UTC m=+0.097133682 container cleanup 19d15f6c5b5a040d94876e001441bf4fb9bc005a604d038f20f3726b3622a8e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 07:44:47 compute-2 systemd[1]: libpod-conmon-19d15f6c5b5a040d94876e001441bf4fb9bc005a604d038f20f3726b3622a8e4.scope: Deactivated successfully.
Jan 31 07:44:47 compute-2 nova_compute[226829]: 2026-01-31 07:44:47.573 226833 DEBUG nova.virt.libvirt.vif [None req-291e6aa4-f660-4f98-a4dd-59d4ccad68ad 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:43:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1415676217',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1415676217',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1415676217',id=51,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:44:17Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9a5c5f11e8f24f898d16bceb9925aaa0',ramdisk_id='',reservation_id='r-3aspo384',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-536491326',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-536491326-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:44:44Z,user_data=None,user_id='4cdbfeb437d54df89a0fb0f6621b8fdc',uuid=0de34c85-05be-461e-ae22-ac3816f1018d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3c1679bd-49ed-45ef-8e6f-9f10415f8f33", "address": "fa:16:3e:4f:a4:b8", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c1679bd-49", "ovs_interfaceid": "3c1679bd-49ed-45ef-8e6f-9f10415f8f33", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:44:47 compute-2 nova_compute[226829]: 2026-01-31 07:44:47.574 226833 DEBUG nova.network.os_vif_util [None req-291e6aa4-f660-4f98-a4dd-59d4ccad68ad 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Converting VIF {"id": "3c1679bd-49ed-45ef-8e6f-9f10415f8f33", "address": "fa:16:3e:4f:a4:b8", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c1679bd-49", "ovs_interfaceid": "3c1679bd-49ed-45ef-8e6f-9f10415f8f33", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:44:47 compute-2 nova_compute[226829]: 2026-01-31 07:44:47.575 226833 DEBUG nova.network.os_vif_util [None req-291e6aa4-f660-4f98-a4dd-59d4ccad68ad 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:a4:b8,bridge_name='br-int',has_traffic_filtering=True,id=3c1679bd-49ed-45ef-8e6f-9f10415f8f33,network=Network(7130ed58-0d3f-4534-9498-e2d59204c82c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c1679bd-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:44:47 compute-2 nova_compute[226829]: 2026-01-31 07:44:47.576 226833 DEBUG os_vif [None req-291e6aa4-f660-4f98-a4dd-59d4ccad68ad 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:a4:b8,bridge_name='br-int',has_traffic_filtering=True,id=3c1679bd-49ed-45ef-8e6f-9f10415f8f33,network=Network(7130ed58-0d3f-4534-9498-e2d59204c82c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c1679bd-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:44:47 compute-2 nova_compute[226829]: 2026-01-31 07:44:47.579 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:47 compute-2 nova_compute[226829]: 2026-01-31 07:44:47.579 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c1679bd-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:44:47 compute-2 nova_compute[226829]: 2026-01-31 07:44:47.581 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:47 compute-2 nova_compute[226829]: 2026-01-31 07:44:47.583 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:44:47 compute-2 nova_compute[226829]: 2026-01-31 07:44:47.589 226833 INFO os_vif [None req-291e6aa4-f660-4f98-a4dd-59d4ccad68ad 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:a4:b8,bridge_name='br-int',has_traffic_filtering=True,id=3c1679bd-49ed-45ef-8e6f-9f10415f8f33,network=Network(7130ed58-0d3f-4534-9498-e2d59204c82c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c1679bd-49')
Jan 31 07:44:47 compute-2 podman[249377]: 2026-01-31 07:44:47.602285745 +0000 UTC m=+0.047754196 container remove 19d15f6c5b5a040d94876e001441bf4fb9bc005a604d038f20f3726b3622a8e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 07:44:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:47.609 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[17b0ba63-d8ca-4618-ba70-7adaeb6313f0]: (4, ('Sat Jan 31 07:44:47 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c (19d15f6c5b5a040d94876e001441bf4fb9bc005a604d038f20f3726b3622a8e4)\n19d15f6c5b5a040d94876e001441bf4fb9bc005a604d038f20f3726b3622a8e4\nSat Jan 31 07:44:47 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c (19d15f6c5b5a040d94876e001441bf4fb9bc005a604d038f20f3726b3622a8e4)\n19d15f6c5b5a040d94876e001441bf4fb9bc005a604d038f20f3726b3622a8e4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:47.612 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a081bc9a-52b0-498e-8f18-fcc0408fda64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:47.614 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7130ed58-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:44:47 compute-2 nova_compute[226829]: 2026-01-31 07:44:47.616 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:47 compute-2 kernel: tap7130ed58-00: left promiscuous mode
Jan 31 07:44:47 compute-2 nova_compute[226829]: 2026-01-31 07:44:47.622 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:47.627 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[85b490f2-1f96-4d2c-ab4a-85385863e2bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:47.649 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8a6f0262-d353-4af6-8807-fd6ceaec04ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:47.651 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3739d026-e3ae-46a5-b3d5-f269216146ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:47.664 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[883492ba-b8d0-43af-b16f-dd85ecb5367e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570833, 'reachable_time': 36415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249410, 'error': None, 'target': 'ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:47 compute-2 systemd[1]: run-netns-ovnmeta\x2d7130ed58\x2d0d3f\x2d4534\x2d9498\x2de2d59204c82c.mount: Deactivated successfully.
Jan 31 07:44:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:47.670 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:44:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:44:47.671 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[1a05ca43-346f-45dd-906f-9f04db76d98a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:48 compute-2 ceph-mon[77282]: pgmap v1384: 305 pgs: 305 active+clean; 293 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 11 MiB/s wr, 289 op/s
Jan 31 07:44:48 compute-2 nova_compute[226829]: 2026-01-31 07:44:48.585 226833 INFO nova.virt.libvirt.driver [None req-291e6aa4-f660-4f98-a4dd-59d4ccad68ad 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Deleting instance files /var/lib/nova/instances/0de34c85-05be-461e-ae22-ac3816f1018d_del
Jan 31 07:44:48 compute-2 nova_compute[226829]: 2026-01-31 07:44:48.586 226833 INFO nova.virt.libvirt.driver [None req-291e6aa4-f660-4f98-a4dd-59d4ccad68ad 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Deletion of /var/lib/nova/instances/0de34c85-05be-461e-ae22-ac3816f1018d_del complete
Jan 31 07:44:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:48.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:48 compute-2 nova_compute[226829]: 2026-01-31 07:44:48.747 226833 INFO nova.compute.manager [None req-291e6aa4-f660-4f98-a4dd-59d4ccad68ad 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Took 1.52 seconds to destroy the instance on the hypervisor.
Jan 31 07:44:48 compute-2 nova_compute[226829]: 2026-01-31 07:44:48.747 226833 DEBUG oslo.service.loopingcall [None req-291e6aa4-f660-4f98-a4dd-59d4ccad68ad 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:44:48 compute-2 nova_compute[226829]: 2026-01-31 07:44:48.747 226833 DEBUG nova.compute.manager [-] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:44:48 compute-2 nova_compute[226829]: 2026-01-31 07:44:48.747 226833 DEBUG nova.network.neutron [-] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:44:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:48.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:44:49 compute-2 nova_compute[226829]: 2026-01-31 07:44:49.073 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:50 compute-2 ceph-mon[77282]: pgmap v1385: 305 pgs: 305 active+clean; 293 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 430 KiB/s rd, 3.4 MiB/s wr, 133 op/s
Jan 31 07:44:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:50.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:50.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:51 compute-2 nova_compute[226829]: 2026-01-31 07:44:51.023 226833 DEBUG nova.compute.manager [req-a50a424f-b536-4ba7-a835-102cd3da31e7 req-acf8971b-51a7-4776-a350-7d7af8649c77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Received event network-vif-unplugged-3c1679bd-49ed-45ef-8e6f-9f10415f8f33 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:44:51 compute-2 nova_compute[226829]: 2026-01-31 07:44:51.024 226833 DEBUG oslo_concurrency.lockutils [req-a50a424f-b536-4ba7-a835-102cd3da31e7 req-acf8971b-51a7-4776-a350-7d7af8649c77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0de34c85-05be-461e-ae22-ac3816f1018d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:51 compute-2 nova_compute[226829]: 2026-01-31 07:44:51.024 226833 DEBUG oslo_concurrency.lockutils [req-a50a424f-b536-4ba7-a835-102cd3da31e7 req-acf8971b-51a7-4776-a350-7d7af8649c77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0de34c85-05be-461e-ae22-ac3816f1018d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:51 compute-2 nova_compute[226829]: 2026-01-31 07:44:51.024 226833 DEBUG oslo_concurrency.lockutils [req-a50a424f-b536-4ba7-a835-102cd3da31e7 req-acf8971b-51a7-4776-a350-7d7af8649c77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0de34c85-05be-461e-ae22-ac3816f1018d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:51 compute-2 nova_compute[226829]: 2026-01-31 07:44:51.025 226833 DEBUG nova.compute.manager [req-a50a424f-b536-4ba7-a835-102cd3da31e7 req-acf8971b-51a7-4776-a350-7d7af8649c77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] No waiting events found dispatching network-vif-unplugged-3c1679bd-49ed-45ef-8e6f-9f10415f8f33 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:44:51 compute-2 nova_compute[226829]: 2026-01-31 07:44:51.025 226833 DEBUG nova.compute.manager [req-a50a424f-b536-4ba7-a835-102cd3da31e7 req-acf8971b-51a7-4776-a350-7d7af8649c77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Received event network-vif-unplugged-3c1679bd-49ed-45ef-8e6f-9f10415f8f33 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:44:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e209 e209: 3 total, 3 up, 3 in
Jan 31 07:44:51 compute-2 podman[249413]: 2026-01-31 07:44:51.189062524 +0000 UTC m=+0.070179183 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 07:44:51 compute-2 nova_compute[226829]: 2026-01-31 07:44:51.853 226833 DEBUG nova.network.neutron [-] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:44:52 compute-2 nova_compute[226829]: 2026-01-31 07:44:52.006 226833 INFO nova.compute.manager [-] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Took 3.26 seconds to deallocate network for instance.
Jan 31 07:44:52 compute-2 nova_compute[226829]: 2026-01-31 07:44:52.089 226833 DEBUG oslo_concurrency.lockutils [None req-291e6aa4-f660-4f98-a4dd-59d4ccad68ad 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:52 compute-2 nova_compute[226829]: 2026-01-31 07:44:52.089 226833 DEBUG oslo_concurrency.lockutils [None req-291e6aa4-f660-4f98-a4dd-59d4ccad68ad 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:52 compute-2 nova_compute[226829]: 2026-01-31 07:44:52.225 226833 DEBUG oslo_concurrency.processutils [None req-291e6aa4-f660-4f98-a4dd-59d4ccad68ad 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:44:52 compute-2 sudo[249434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:44:52 compute-2 sudo[249434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:52 compute-2 sudo[249434]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:52 compute-2 sudo[249460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:44:52 compute-2 sudo[249460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:52 compute-2 sudo[249460]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:52 compute-2 ceph-mon[77282]: pgmap v1386: 305 pgs: 305 active+clean; 225 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 197 op/s
Jan 31 07:44:52 compute-2 ceph-mon[77282]: osdmap e209: 3 total, 3 up, 3 in
Jan 31 07:44:52 compute-2 nova_compute[226829]: 2026-01-31 07:44:52.583 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:44:52 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2855609894' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:52 compute-2 nova_compute[226829]: 2026-01-31 07:44:52.651 226833 DEBUG oslo_concurrency.processutils [None req-291e6aa4-f660-4f98-a4dd-59d4ccad68ad 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:44:52 compute-2 nova_compute[226829]: 2026-01-31 07:44:52.658 226833 DEBUG nova.compute.provider_tree [None req-291e6aa4-f660-4f98-a4dd-59d4ccad68ad 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:44:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:44:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:52.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:44:52 compute-2 nova_compute[226829]: 2026-01-31 07:44:52.721 226833 DEBUG nova.scheduler.client.report [None req-291e6aa4-f660-4f98-a4dd-59d4ccad68ad 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:44:52 compute-2 nova_compute[226829]: 2026-01-31 07:44:52.797 226833 DEBUG oslo_concurrency.lockutils [None req-291e6aa4-f660-4f98-a4dd-59d4ccad68ad 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:44:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:52.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:44:52 compute-2 nova_compute[226829]: 2026-01-31 07:44:52.932 226833 INFO nova.scheduler.client.report [None req-291e6aa4-f660-4f98-a4dd-59d4ccad68ad 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Deleted allocations for instance 0de34c85-05be-461e-ae22-ac3816f1018d
Jan 31 07:44:53 compute-2 nova_compute[226829]: 2026-01-31 07:44:53.148 226833 DEBUG oslo_concurrency.lockutils [None req-291e6aa4-f660-4f98-a4dd-59d4ccad68ad 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "0de34c85-05be-461e-ae22-ac3816f1018d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.930s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2855609894' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:53 compute-2 nova_compute[226829]: 2026-01-31 07:44:53.778 226833 DEBUG nova.compute.manager [req-6158566c-34ee-404b-bbd3-6630da8b73d1 req-3dd64f7b-a7f4-4e99-a6a7-a72847c29624 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Received event network-vif-plugged-3c1679bd-49ed-45ef-8e6f-9f10415f8f33 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:44:53 compute-2 nova_compute[226829]: 2026-01-31 07:44:53.779 226833 DEBUG oslo_concurrency.lockutils [req-6158566c-34ee-404b-bbd3-6630da8b73d1 req-3dd64f7b-a7f4-4e99-a6a7-a72847c29624 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0de34c85-05be-461e-ae22-ac3816f1018d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:53 compute-2 nova_compute[226829]: 2026-01-31 07:44:53.779 226833 DEBUG oslo_concurrency.lockutils [req-6158566c-34ee-404b-bbd3-6630da8b73d1 req-3dd64f7b-a7f4-4e99-a6a7-a72847c29624 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0de34c85-05be-461e-ae22-ac3816f1018d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:53 compute-2 nova_compute[226829]: 2026-01-31 07:44:53.780 226833 DEBUG oslo_concurrency.lockutils [req-6158566c-34ee-404b-bbd3-6630da8b73d1 req-3dd64f7b-a7f4-4e99-a6a7-a72847c29624 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0de34c85-05be-461e-ae22-ac3816f1018d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:53 compute-2 nova_compute[226829]: 2026-01-31 07:44:53.780 226833 DEBUG nova.compute.manager [req-6158566c-34ee-404b-bbd3-6630da8b73d1 req-3dd64f7b-a7f4-4e99-a6a7-a72847c29624 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] No waiting events found dispatching network-vif-plugged-3c1679bd-49ed-45ef-8e6f-9f10415f8f33 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:44:53 compute-2 nova_compute[226829]: 2026-01-31 07:44:53.781 226833 WARNING nova.compute.manager [req-6158566c-34ee-404b-bbd3-6630da8b73d1 req-3dd64f7b-a7f4-4e99-a6a7-a72847c29624 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Received unexpected event network-vif-plugged-3c1679bd-49ed-45ef-8e6f-9f10415f8f33 for instance with vm_state deleted and task_state None.
Jan 31 07:44:53 compute-2 nova_compute[226829]: 2026-01-31 07:44:53.781 226833 DEBUG nova.compute.manager [req-6158566c-34ee-404b-bbd3-6630da8b73d1 req-3dd64f7b-a7f4-4e99-a6a7-a72847c29624 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Received event network-vif-deleted-3c1679bd-49ed-45ef-8e6f-9f10415f8f33 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:44:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:44:54 compute-2 nova_compute[226829]: 2026-01-31 07:44:54.074 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:54 compute-2 nova_compute[226829]: 2026-01-31 07:44:54.226 226833 DEBUG oslo_concurrency.lockutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Acquiring lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:54 compute-2 nova_compute[226829]: 2026-01-31 07:44:54.227 226833 DEBUG oslo_concurrency.lockutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:54 compute-2 nova_compute[226829]: 2026-01-31 07:44:54.301 226833 DEBUG nova.compute.manager [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:44:54 compute-2 ceph-mon[77282]: pgmap v1388: 305 pgs: 305 active+clean; 213 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.6 MiB/s wr, 221 op/s
Jan 31 07:44:54 compute-2 nova_compute[226829]: 2026-01-31 07:44:54.461 226833 DEBUG oslo_concurrency.lockutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:54 compute-2 nova_compute[226829]: 2026-01-31 07:44:54.462 226833 DEBUG oslo_concurrency.lockutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:54 compute-2 nova_compute[226829]: 2026-01-31 07:44:54.481 226833 DEBUG nova.virt.hardware [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:44:54 compute-2 nova_compute[226829]: 2026-01-31 07:44:54.481 226833 INFO nova.compute.claims [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:44:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:54.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:54.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:54 compute-2 nova_compute[226829]: 2026-01-31 07:44:54.990 226833 DEBUG oslo_concurrency.processutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:44:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:44:55 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1424588764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:55 compute-2 nova_compute[226829]: 2026-01-31 07:44:55.441 226833 DEBUG oslo_concurrency.processutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:44:55 compute-2 nova_compute[226829]: 2026-01-31 07:44:55.446 226833 DEBUG nova.compute.provider_tree [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:44:55 compute-2 nova_compute[226829]: 2026-01-31 07:44:55.520 226833 DEBUG nova.scheduler.client.report [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:44:55 compute-2 nova_compute[226829]: 2026-01-31 07:44:55.602 226833 DEBUG oslo_concurrency.lockutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:55 compute-2 nova_compute[226829]: 2026-01-31 07:44:55.603 226833 DEBUG nova.compute.manager [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:44:55 compute-2 nova_compute[226829]: 2026-01-31 07:44:55.693 226833 DEBUG nova.compute.manager [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:44:55 compute-2 nova_compute[226829]: 2026-01-31 07:44:55.694 226833 DEBUG nova.network.neutron [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:44:55 compute-2 nova_compute[226829]: 2026-01-31 07:44:55.728 226833 INFO nova.virt.libvirt.driver [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:44:55 compute-2 nova_compute[226829]: 2026-01-31 07:44:55.864 226833 DEBUG nova.compute.manager [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:44:56 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1208867897' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:44:56 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/884181136' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:44:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:56.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:56.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:57 compute-2 nova_compute[226829]: 2026-01-31 07:44:57.106 226833 DEBUG nova.policy [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b4865905ed4e4262a2242d3f323d4314', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9ddf930129cf4e0395f8c5e70fd9eda8', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:44:57 compute-2 ceph-mon[77282]: pgmap v1389: 305 pgs: 305 active+clean; 213 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 19 KiB/s wr, 137 op/s
Jan 31 07:44:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1424588764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:57 compute-2 ceph-mon[77282]: pgmap v1390: 305 pgs: 305 active+clean; 213 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 119 op/s
Jan 31 07:44:57 compute-2 nova_compute[226829]: 2026-01-31 07:44:57.587 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:57 compute-2 nova_compute[226829]: 2026-01-31 07:44:57.679 226833 DEBUG nova.compute.manager [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:44:57 compute-2 nova_compute[226829]: 2026-01-31 07:44:57.680 226833 DEBUG nova.virt.libvirt.driver [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:44:57 compute-2 nova_compute[226829]: 2026-01-31 07:44:57.680 226833 INFO nova.virt.libvirt.driver [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Creating image(s)
Jan 31 07:44:57 compute-2 nova_compute[226829]: 2026-01-31 07:44:57.705 226833 DEBUG nova.storage.rbd_utils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] rbd image e7df3fd9-ff03-4b35-930a-330e9dff6d0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:44:57 compute-2 nova_compute[226829]: 2026-01-31 07:44:57.736 226833 DEBUG nova.storage.rbd_utils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] rbd image e7df3fd9-ff03-4b35-930a-330e9dff6d0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:44:57 compute-2 nova_compute[226829]: 2026-01-31 07:44:57.764 226833 DEBUG nova.storage.rbd_utils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] rbd image e7df3fd9-ff03-4b35-930a-330e9dff6d0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:44:57 compute-2 nova_compute[226829]: 2026-01-31 07:44:57.768 226833 DEBUG oslo_concurrency.processutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:44:57 compute-2 nova_compute[226829]: 2026-01-31 07:44:57.832 226833 DEBUG oslo_concurrency.processutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:44:57 compute-2 nova_compute[226829]: 2026-01-31 07:44:57.833 226833 DEBUG oslo_concurrency.lockutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:57 compute-2 nova_compute[226829]: 2026-01-31 07:44:57.833 226833 DEBUG oslo_concurrency.lockutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:57 compute-2 nova_compute[226829]: 2026-01-31 07:44:57.834 226833 DEBUG oslo_concurrency.lockutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:57 compute-2 nova_compute[226829]: 2026-01-31 07:44:57.861 226833 DEBUG nova.storage.rbd_utils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] rbd image e7df3fd9-ff03-4b35-930a-330e9dff6d0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:44:57 compute-2 nova_compute[226829]: 2026-01-31 07:44:57.864 226833 DEBUG oslo_concurrency.processutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 e7df3fd9-ff03-4b35-930a-330e9dff6d0e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:44:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:44:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:58.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:44:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:44:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:58.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:44:59 compute-2 nova_compute[226829]: 2026-01-31 07:44:59.076 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:59 compute-2 nova_compute[226829]: 2026-01-31 07:44:59.182 226833 DEBUG nova.network.neutron [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Successfully created port: 3196ca0e-ce5d-4fbf-9341-3c29ba2d513c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:44:59 compute-2 nova_compute[226829]: 2026-01-31 07:44:59.518 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:44:59 compute-2 nova_compute[226829]: 2026-01-31 07:44:59.984 226833 DEBUG oslo_concurrency.processutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 e7df3fd9-ff03-4b35-930a-330e9dff6d0e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:00 compute-2 nova_compute[226829]: 2026-01-31 07:45:00.115 226833 DEBUG nova.storage.rbd_utils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] resizing rbd image e7df3fd9-ff03-4b35-930a-330e9dff6d0e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:45:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:00.151 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:45:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:00.152 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:45:00 compute-2 ceph-mon[77282]: pgmap v1391: 305 pgs: 305 active+clean; 213 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 110 op/s
Jan 31 07:45:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2076917329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:45:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:45:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:00.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:45:00 compute-2 nova_compute[226829]: 2026-01-31 07:45:00.830 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:00 compute-2 nova_compute[226829]: 2026-01-31 07:45:00.833 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:45:00 compute-2 nova_compute[226829]: 2026-01-31 07:45:00.834 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:45:00 compute-2 nova_compute[226829]: 2026-01-31 07:45:00.835 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:45:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:00.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:01 compute-2 nova_compute[226829]: 2026-01-31 07:45:01.468 226833 DEBUG nova.objects.instance [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lazy-loading 'migration_context' on Instance uuid e7df3fd9-ff03-4b35-930a-330e9dff6d0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:45:01 compute-2 nova_compute[226829]: 2026-01-31 07:45:01.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:45:01 compute-2 nova_compute[226829]: 2026-01-31 07:45:01.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:45:01 compute-2 nova_compute[226829]: 2026-01-31 07:45:01.490 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:45:01 compute-2 nova_compute[226829]: 2026-01-31 07:45:01.566 226833 DEBUG nova.virt.libvirt.driver [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:45:01 compute-2 nova_compute[226829]: 2026-01-31 07:45:01.567 226833 DEBUG nova.virt.libvirt.driver [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Ensure instance console log exists: /var/lib/nova/instances/e7df3fd9-ff03-4b35-930a-330e9dff6d0e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:45:01 compute-2 nova_compute[226829]: 2026-01-31 07:45:01.567 226833 DEBUG oslo_concurrency.lockutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:01 compute-2 nova_compute[226829]: 2026-01-31 07:45:01.567 226833 DEBUG oslo_concurrency.lockutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:01 compute-2 nova_compute[226829]: 2026-01-31 07:45:01.568 226833 DEBUG oslo_concurrency.lockutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:01 compute-2 nova_compute[226829]: 2026-01-31 07:45:01.579 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 07:45:01 compute-2 nova_compute[226829]: 2026-01-31 07:45:01.580 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:45:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1610713272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:45:02 compute-2 ceph-mon[77282]: pgmap v1392: 305 pgs: 305 active+clean; 260 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.1 MiB/s wr, 96 op/s
Jan 31 07:45:02 compute-2 nova_compute[226829]: 2026-01-31 07:45:02.464 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845487.4633849, 0de34c85-05be-461e-ae22-ac3816f1018d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:45:02 compute-2 nova_compute[226829]: 2026-01-31 07:45:02.465 226833 INFO nova.compute.manager [-] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] VM Stopped (Lifecycle Event)
Jan 31 07:45:02 compute-2 nova_compute[226829]: 2026-01-31 07:45:02.561 226833 DEBUG nova.compute.manager [None req-5378dc5f-8f62-407f-be1e-61fdee74e238 - - - - - -] [instance: 0de34c85-05be-461e-ae22-ac3816f1018d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:45:02 compute-2 nova_compute[226829]: 2026-01-31 07:45:02.590 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:02 compute-2 nova_compute[226829]: 2026-01-31 07:45:02.668 226833 DEBUG nova.network.neutron [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Successfully updated port: 3196ca0e-ce5d-4fbf-9341-3c29ba2d513c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:45:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:45:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:02.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:45:02 compute-2 nova_compute[226829]: 2026-01-31 07:45:02.785 226833 DEBUG oslo_concurrency.lockutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Acquiring lock "refresh_cache-e7df3fd9-ff03-4b35-930a-330e9dff6d0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:45:02 compute-2 nova_compute[226829]: 2026-01-31 07:45:02.785 226833 DEBUG oslo_concurrency.lockutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Acquired lock "refresh_cache-e7df3fd9-ff03-4b35-930a-330e9dff6d0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:45:02 compute-2 nova_compute[226829]: 2026-01-31 07:45:02.785 226833 DEBUG nova.network.neutron [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:45:02 compute-2 nova_compute[226829]: 2026-01-31 07:45:02.894 226833 DEBUG nova.compute.manager [req-2965a141-92d4-47dd-a82e-90229cee3999 req-49b843cb-5964-4d03-9555-78c27dd7c683 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Received event network-changed-3196ca0e-ce5d-4fbf-9341-3c29ba2d513c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:45:02 compute-2 nova_compute[226829]: 2026-01-31 07:45:02.894 226833 DEBUG nova.compute.manager [req-2965a141-92d4-47dd-a82e-90229cee3999 req-49b843cb-5964-4d03-9555-78c27dd7c683 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Refreshing instance network info cache due to event network-changed-3196ca0e-ce5d-4fbf-9341-3c29ba2d513c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:45:02 compute-2 nova_compute[226829]: 2026-01-31 07:45:02.895 226833 DEBUG oslo_concurrency.lockutils [req-2965a141-92d4-47dd-a82e-90229cee3999 req-49b843cb-5964-4d03-9555-78c27dd7c683 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-e7df3fd9-ff03-4b35-930a-330e9dff6d0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:45:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:02.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:03 compute-2 nova_compute[226829]: 2026-01-31 07:45:03.275 226833 DEBUG nova.network.neutron [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:45:03 compute-2 nova_compute[226829]: 2026-01-31 07:45:03.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:45:03 compute-2 nova_compute[226829]: 2026-01-31 07:45:03.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:45:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:45:04 compute-2 nova_compute[226829]: 2026-01-31 07:45:04.078 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:04 compute-2 ceph-mon[77282]: pgmap v1393: 305 pgs: 305 active+clean; 276 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 111 op/s
Jan 31 07:45:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/7205415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:45:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:04.155 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:45:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:04.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:04 compute-2 nova_compute[226829]: 2026-01-31 07:45:04.911 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:04 compute-2 nova_compute[226829]: 2026-01-31 07:45:04.911 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:04 compute-2 nova_compute[226829]: 2026-01-31 07:45:04.912 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:04 compute-2 nova_compute[226829]: 2026-01-31 07:45:04.912 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:45:04 compute-2 nova_compute[226829]: 2026-01-31 07:45:04.913 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:45:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:04.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:45:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:45:05 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2026738879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:45:05 compute-2 nova_compute[226829]: 2026-01-31 07:45:05.376 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:05 compute-2 nova_compute[226829]: 2026-01-31 07:45:05.564 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:45:05 compute-2 nova_compute[226829]: 2026-01-31 07:45:05.566 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4583MB free_disk=20.860008239746094GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:45:05 compute-2 nova_compute[226829]: 2026-01-31 07:45:05.566 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:05 compute-2 nova_compute[226829]: 2026-01-31 07:45:05.566 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:06 compute-2 ceph-mon[77282]: pgmap v1394: 305 pgs: 305 active+clean; 276 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 1016 KiB/s rd, 3.5 MiB/s wr, 93 op/s
Jan 31 07:45:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2026738879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:45:06 compute-2 nova_compute[226829]: 2026-01-31 07:45:06.406 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance e7df3fd9-ff03-4b35-930a-330e9dff6d0e actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:45:06 compute-2 nova_compute[226829]: 2026-01-31 07:45:06.407 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:45:06 compute-2 nova_compute[226829]: 2026-01-31 07:45:06.407 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:45:06 compute-2 nova_compute[226829]: 2026-01-31 07:45:06.493 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:06.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:06.853 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:06.854 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:06.855 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:45:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:06.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:45:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:45:06 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3352389906' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:45:06 compute-2 nova_compute[226829]: 2026-01-31 07:45:06.961 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:06 compute-2 nova_compute[226829]: 2026-01-31 07:45:06.968 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:45:07 compute-2 nova_compute[226829]: 2026-01-31 07:45:07.089 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:45:07 compute-2 nova_compute[226829]: 2026-01-31 07:45:07.219 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:45:07 compute-2 nova_compute[226829]: 2026-01-31 07:45:07.219 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/877251149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:45:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3352389906' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:45:07 compute-2 nova_compute[226829]: 2026-01-31 07:45:07.567 226833 DEBUG nova.network.neutron [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Updating instance_info_cache with network_info: [{"id": "3196ca0e-ce5d-4fbf-9341-3c29ba2d513c", "address": "fa:16:3e:3b:47:4e", "network": {"id": "d52bfdcb-a5f3-4946-8fca-4e9f67091fc3", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-952450093-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ddf930129cf4e0395f8c5e70fd9eda8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3196ca0e-ce", "ovs_interfaceid": "3196ca0e-ce5d-4fbf-9341-3c29ba2d513c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:45:07 compute-2 nova_compute[226829]: 2026-01-31 07:45:07.593 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:07 compute-2 nova_compute[226829]: 2026-01-31 07:45:07.649 226833 DEBUG oslo_concurrency.lockutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Releasing lock "refresh_cache-e7df3fd9-ff03-4b35-930a-330e9dff6d0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:45:07 compute-2 nova_compute[226829]: 2026-01-31 07:45:07.650 226833 DEBUG nova.compute.manager [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Instance network_info: |[{"id": "3196ca0e-ce5d-4fbf-9341-3c29ba2d513c", "address": "fa:16:3e:3b:47:4e", "network": {"id": "d52bfdcb-a5f3-4946-8fca-4e9f67091fc3", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-952450093-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ddf930129cf4e0395f8c5e70fd9eda8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3196ca0e-ce", "ovs_interfaceid": "3196ca0e-ce5d-4fbf-9341-3c29ba2d513c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:45:07 compute-2 nova_compute[226829]: 2026-01-31 07:45:07.651 226833 DEBUG oslo_concurrency.lockutils [req-2965a141-92d4-47dd-a82e-90229cee3999 req-49b843cb-5964-4d03-9555-78c27dd7c683 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-e7df3fd9-ff03-4b35-930a-330e9dff6d0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:45:07 compute-2 nova_compute[226829]: 2026-01-31 07:45:07.652 226833 DEBUG nova.network.neutron [req-2965a141-92d4-47dd-a82e-90229cee3999 req-49b843cb-5964-4d03-9555-78c27dd7c683 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Refreshing network info cache for port 3196ca0e-ce5d-4fbf-9341-3c29ba2d513c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:45:07 compute-2 nova_compute[226829]: 2026-01-31 07:45:07.657 226833 DEBUG nova.virt.libvirt.driver [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Start _get_guest_xml network_info=[{"id": "3196ca0e-ce5d-4fbf-9341-3c29ba2d513c", "address": "fa:16:3e:3b:47:4e", "network": {"id": "d52bfdcb-a5f3-4946-8fca-4e9f67091fc3", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-952450093-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ddf930129cf4e0395f8c5e70fd9eda8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3196ca0e-ce", "ovs_interfaceid": "3196ca0e-ce5d-4fbf-9341-3c29ba2d513c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:45:07 compute-2 nova_compute[226829]: 2026-01-31 07:45:07.666 226833 WARNING nova.virt.libvirt.driver [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:45:07 compute-2 nova_compute[226829]: 2026-01-31 07:45:07.672 226833 DEBUG nova.virt.libvirt.host [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:45:07 compute-2 nova_compute[226829]: 2026-01-31 07:45:07.673 226833 DEBUG nova.virt.libvirt.host [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:45:07 compute-2 nova_compute[226829]: 2026-01-31 07:45:07.683 226833 DEBUG nova.virt.libvirt.host [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:45:07 compute-2 nova_compute[226829]: 2026-01-31 07:45:07.684 226833 DEBUG nova.virt.libvirt.host [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:45:07 compute-2 nova_compute[226829]: 2026-01-31 07:45:07.685 226833 DEBUG nova.virt.libvirt.driver [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:45:07 compute-2 nova_compute[226829]: 2026-01-31 07:45:07.685 226833 DEBUG nova.virt.hardware [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:45:07 compute-2 nova_compute[226829]: 2026-01-31 07:45:07.686 226833 DEBUG nova.virt.hardware [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:45:07 compute-2 nova_compute[226829]: 2026-01-31 07:45:07.686 226833 DEBUG nova.virt.hardware [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:45:07 compute-2 nova_compute[226829]: 2026-01-31 07:45:07.686 226833 DEBUG nova.virt.hardware [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:45:07 compute-2 nova_compute[226829]: 2026-01-31 07:45:07.686 226833 DEBUG nova.virt.hardware [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:45:07 compute-2 nova_compute[226829]: 2026-01-31 07:45:07.686 226833 DEBUG nova.virt.hardware [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:45:07 compute-2 nova_compute[226829]: 2026-01-31 07:45:07.686 226833 DEBUG nova.virt.hardware [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:45:07 compute-2 nova_compute[226829]: 2026-01-31 07:45:07.687 226833 DEBUG nova.virt.hardware [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:45:07 compute-2 nova_compute[226829]: 2026-01-31 07:45:07.687 226833 DEBUG nova.virt.hardware [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:45:07 compute-2 nova_compute[226829]: 2026-01-31 07:45:07.687 226833 DEBUG nova.virt.hardware [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:45:07 compute-2 nova_compute[226829]: 2026-01-31 07:45:07.687 226833 DEBUG nova.virt.hardware [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:45:07 compute-2 nova_compute[226829]: 2026-01-31 07:45:07.690 226833 DEBUG oslo_concurrency.processutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:45:08 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/528887485' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:08 compute-2 nova_compute[226829]: 2026-01-31 07:45:08.116 226833 DEBUG oslo_concurrency.processutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:08 compute-2 nova_compute[226829]: 2026-01-31 07:45:08.158 226833 DEBUG nova.storage.rbd_utils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] rbd image e7df3fd9-ff03-4b35-930a-330e9dff6d0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:45:08 compute-2 nova_compute[226829]: 2026-01-31 07:45:08.162 226833 DEBUG oslo_concurrency.processutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:08 compute-2 ceph-mon[77282]: pgmap v1395: 305 pgs: 305 active+clean; 292 MiB data, 662 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 150 op/s
Jan 31 07:45:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4064234750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:45:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/528887485' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:45:08 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3176177341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:45:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:08.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:45:08 compute-2 nova_compute[226829]: 2026-01-31 07:45:08.711 226833 DEBUG oslo_concurrency.processutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:08 compute-2 nova_compute[226829]: 2026-01-31 07:45:08.714 226833 DEBUG nova.virt.libvirt.vif [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:44:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-795324657',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-795324657',id=56,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP/jDytd8Ym5FxcVX16m0xs7mZjnpulUTkIvV8si73F9lzKYe980w/3RbovGTB1QQOm/Ss45P0fTDRJRtI1toiRP5c4zSltvuzCoq9BdQDxvme5rWNAqRGyanoC79C91qw==',key_name='tempest-keypair-1249601107',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9ddf930129cf4e0395f8c5e70fd9eda8',ramdisk_id='',reservation_id='r-sdbgddgh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-600318888',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-600318888-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:44:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4865905ed4e4262a2242d3f323d4314',uuid=e7df3fd9-ff03-4b35-930a-330e9dff6d0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3196ca0e-ce5d-4fbf-9341-3c29ba2d513c", "address": "fa:16:3e:3b:47:4e", "network": {"id": "d52bfdcb-a5f3-4946-8fca-4e9f67091fc3", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-952450093-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ddf930129cf4e0395f8c5e70fd9eda8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3196ca0e-ce", "ovs_interfaceid": "3196ca0e-ce5d-4fbf-9341-3c29ba2d513c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:45:08 compute-2 nova_compute[226829]: 2026-01-31 07:45:08.715 226833 DEBUG nova.network.os_vif_util [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Converting VIF {"id": "3196ca0e-ce5d-4fbf-9341-3c29ba2d513c", "address": "fa:16:3e:3b:47:4e", "network": {"id": "d52bfdcb-a5f3-4946-8fca-4e9f67091fc3", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-952450093-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ddf930129cf4e0395f8c5e70fd9eda8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3196ca0e-ce", "ovs_interfaceid": "3196ca0e-ce5d-4fbf-9341-3c29ba2d513c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:45:08 compute-2 nova_compute[226829]: 2026-01-31 07:45:08.716 226833 DEBUG nova.network.os_vif_util [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:47:4e,bridge_name='br-int',has_traffic_filtering=True,id=3196ca0e-ce5d-4fbf-9341-3c29ba2d513c,network=Network(d52bfdcb-a5f3-4946-8fca-4e9f67091fc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3196ca0e-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:45:08 compute-2 nova_compute[226829]: 2026-01-31 07:45:08.719 226833 DEBUG nova.objects.instance [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lazy-loading 'pci_devices' on Instance uuid e7df3fd9-ff03-4b35-930a-330e9dff6d0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:45:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:08.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:08 compute-2 nova_compute[226829]: 2026-01-31 07:45:08.961 226833 DEBUG nova.virt.libvirt.driver [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:45:08 compute-2 nova_compute[226829]:   <uuid>e7df3fd9-ff03-4b35-930a-330e9dff6d0e</uuid>
Jan 31 07:45:08 compute-2 nova_compute[226829]:   <name>instance-00000038</name>
Jan 31 07:45:08 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:45:08 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:45:08 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <nova:name>tempest-UpdateMultiattachVolumeNegativeTest-server-795324657</nova:name>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:45:07</nova:creationTime>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:45:08 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:45:08 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:45:08 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:45:08 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:45:08 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:45:08 compute-2 nova_compute[226829]:         <nova:user uuid="b4865905ed4e4262a2242d3f323d4314">tempest-UpdateMultiattachVolumeNegativeTest-600318888-project-member</nova:user>
Jan 31 07:45:08 compute-2 nova_compute[226829]:         <nova:project uuid="9ddf930129cf4e0395f8c5e70fd9eda8">tempest-UpdateMultiattachVolumeNegativeTest-600318888</nova:project>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 07:45:08 compute-2 nova_compute[226829]:         <nova:port uuid="3196ca0e-ce5d-4fbf-9341-3c29ba2d513c">
Jan 31 07:45:08 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:45:08 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:45:08 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <system>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <entry name="serial">e7df3fd9-ff03-4b35-930a-330e9dff6d0e</entry>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <entry name="uuid">e7df3fd9-ff03-4b35-930a-330e9dff6d0e</entry>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     </system>
Jan 31 07:45:08 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:45:08 compute-2 nova_compute[226829]:   <os>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:   </os>
Jan 31 07:45:08 compute-2 nova_compute[226829]:   <features>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:   </features>
Jan 31 07:45:08 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:45:08 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:45:08 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/e7df3fd9-ff03-4b35-930a-330e9dff6d0e_disk">
Jan 31 07:45:08 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       </source>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:45:08 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/e7df3fd9-ff03-4b35-930a-330e9dff6d0e_disk.config">
Jan 31 07:45:08 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       </source>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:45:08 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:3b:47:4e"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <target dev="tap3196ca0e-ce"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/e7df3fd9-ff03-4b35-930a-330e9dff6d0e/console.log" append="off"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <video>
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     </video>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:45:08 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:45:08 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:45:08 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:45:08 compute-2 nova_compute[226829]: </domain>
Jan 31 07:45:08 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:45:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:45:08 compute-2 nova_compute[226829]: 2026-01-31 07:45:08.963 226833 DEBUG nova.compute.manager [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Preparing to wait for external event network-vif-plugged-3196ca0e-ce5d-4fbf-9341-3c29ba2d513c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:45:08 compute-2 nova_compute[226829]: 2026-01-31 07:45:08.964 226833 DEBUG oslo_concurrency.lockutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Acquiring lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:08 compute-2 nova_compute[226829]: 2026-01-31 07:45:08.964 226833 DEBUG oslo_concurrency.lockutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:08 compute-2 nova_compute[226829]: 2026-01-31 07:45:08.964 226833 DEBUG oslo_concurrency.lockutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:08 compute-2 nova_compute[226829]: 2026-01-31 07:45:08.965 226833 DEBUG nova.virt.libvirt.vif [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:44:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-795324657',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-795324657',id=56,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP/jDytd8Ym5FxcVX16m0xs7mZjnpulUTkIvV8si73F9lzKYe980w/3RbovGTB1QQOm/Ss45P0fTDRJRtI1toiRP5c4zSltvuzCoq9BdQDxvme5rWNAqRGyanoC79C91qw==',key_name='tempest-keypair-1249601107',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9ddf930129cf4e0395f8c5e70fd9eda8',ramdisk_id='',reservation_id='r-sdbgddgh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-600318888',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-600318888-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:44:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4865905ed4e4262a2242d3f323d4314',uuid=e7df3fd9-ff03-4b35-930a-330e9dff6d0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3196ca0e-ce5d-4fbf-9341-3c29ba2d513c", "address": "fa:16:3e:3b:47:4e", "network": {"id": "d52bfdcb-a5f3-4946-8fca-4e9f67091fc3", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-952450093-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ddf930129cf4e0395f8c5e70fd9eda8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3196ca0e-ce", "ovs_interfaceid": "3196ca0e-ce5d-4fbf-9341-3c29ba2d513c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:45:08 compute-2 nova_compute[226829]: 2026-01-31 07:45:08.965 226833 DEBUG nova.network.os_vif_util [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Converting VIF {"id": "3196ca0e-ce5d-4fbf-9341-3c29ba2d513c", "address": "fa:16:3e:3b:47:4e", "network": {"id": "d52bfdcb-a5f3-4946-8fca-4e9f67091fc3", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-952450093-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ddf930129cf4e0395f8c5e70fd9eda8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3196ca0e-ce", "ovs_interfaceid": "3196ca0e-ce5d-4fbf-9341-3c29ba2d513c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:45:08 compute-2 nova_compute[226829]: 2026-01-31 07:45:08.966 226833 DEBUG nova.network.os_vif_util [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:47:4e,bridge_name='br-int',has_traffic_filtering=True,id=3196ca0e-ce5d-4fbf-9341-3c29ba2d513c,network=Network(d52bfdcb-a5f3-4946-8fca-4e9f67091fc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3196ca0e-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:45:08 compute-2 nova_compute[226829]: 2026-01-31 07:45:08.966 226833 DEBUG os_vif [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:47:4e,bridge_name='br-int',has_traffic_filtering=True,id=3196ca0e-ce5d-4fbf-9341-3c29ba2d513c,network=Network(d52bfdcb-a5f3-4946-8fca-4e9f67091fc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3196ca0e-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:45:08 compute-2 nova_compute[226829]: 2026-01-31 07:45:08.967 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:08 compute-2 nova_compute[226829]: 2026-01-31 07:45:08.968 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:45:08 compute-2 nova_compute[226829]: 2026-01-31 07:45:08.968 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:45:08 compute-2 nova_compute[226829]: 2026-01-31 07:45:08.970 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:08 compute-2 nova_compute[226829]: 2026-01-31 07:45:08.971 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3196ca0e-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:45:08 compute-2 nova_compute[226829]: 2026-01-31 07:45:08.971 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3196ca0e-ce, col_values=(('external_ids', {'iface-id': '3196ca0e-ce5d-4fbf-9341-3c29ba2d513c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3b:47:4e', 'vm-uuid': 'e7df3fd9-ff03-4b35-930a-330e9dff6d0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:45:08 compute-2 nova_compute[226829]: 2026-01-31 07:45:08.973 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:08 compute-2 NetworkManager[48999]: <info>  [1769845508.9755] manager: (tap3196ca0e-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Jan 31 07:45:08 compute-2 nova_compute[226829]: 2026-01-31 07:45:08.976 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:45:08 compute-2 nova_compute[226829]: 2026-01-31 07:45:08.984 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:08 compute-2 nova_compute[226829]: 2026-01-31 07:45:08.987 226833 INFO os_vif [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:47:4e,bridge_name='br-int',has_traffic_filtering=True,id=3196ca0e-ce5d-4fbf-9341-3c29ba2d513c,network=Network(d52bfdcb-a5f3-4946-8fca-4e9f67091fc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3196ca0e-ce')
Jan 31 07:45:09 compute-2 nova_compute[226829]: 2026-01-31 07:45:09.128 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:09 compute-2 nova_compute[226829]: 2026-01-31 07:45:09.143 226833 DEBUG nova.virt.libvirt.driver [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:45:09 compute-2 nova_compute[226829]: 2026-01-31 07:45:09.144 226833 DEBUG nova.virt.libvirt.driver [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:45:09 compute-2 nova_compute[226829]: 2026-01-31 07:45:09.144 226833 DEBUG nova.virt.libvirt.driver [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] No VIF found with MAC fa:16:3e:3b:47:4e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:45:09 compute-2 nova_compute[226829]: 2026-01-31 07:45:09.145 226833 INFO nova.virt.libvirt.driver [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Using config drive
Jan 31 07:45:09 compute-2 nova_compute[226829]: 2026-01-31 07:45:09.178 226833 DEBUG nova.storage.rbd_utils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] rbd image e7df3fd9-ff03-4b35-930a-330e9dff6d0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:45:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3176177341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:09 compute-2 ceph-mon[77282]: pgmap v1396: 305 pgs: 305 active+clean; 300 MiB data, 662 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.3 MiB/s wr, 164 op/s
Jan 31 07:45:10 compute-2 nova_compute[226829]: 2026-01-31 07:45:10.219 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:45:10 compute-2 nova_compute[226829]: 2026-01-31 07:45:10.220 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:45:10 compute-2 nova_compute[226829]: 2026-01-31 07:45:10.221 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:45:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:10.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1027205063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:45:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:10.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:11 compute-2 nova_compute[226829]: 2026-01-31 07:45:11.047 226833 INFO nova.virt.libvirt.driver [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Creating config drive at /var/lib/nova/instances/e7df3fd9-ff03-4b35-930a-330e9dff6d0e/disk.config
Jan 31 07:45:11 compute-2 nova_compute[226829]: 2026-01-31 07:45:11.052 226833 DEBUG oslo_concurrency.processutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e7df3fd9-ff03-4b35-930a-330e9dff6d0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpff9dc2du execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:11 compute-2 nova_compute[226829]: 2026-01-31 07:45:11.178 226833 DEBUG oslo_concurrency.processutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e7df3fd9-ff03-4b35-930a-330e9dff6d0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpff9dc2du" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:11 compute-2 nova_compute[226829]: 2026-01-31 07:45:11.219 226833 DEBUG nova.storage.rbd_utils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] rbd image e7df3fd9-ff03-4b35-930a-330e9dff6d0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:45:11 compute-2 nova_compute[226829]: 2026-01-31 07:45:11.223 226833 DEBUG oslo_concurrency.processutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e7df3fd9-ff03-4b35-930a-330e9dff6d0e/disk.config e7df3fd9-ff03-4b35-930a-330e9dff6d0e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:45:11 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3572805502' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:11 compute-2 nova_compute[226829]: 2026-01-31 07:45:11.500 226833 DEBUG oslo_concurrency.processutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e7df3fd9-ff03-4b35-930a-330e9dff6d0e/disk.config e7df3fd9-ff03-4b35-930a-330e9dff6d0e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:11 compute-2 nova_compute[226829]: 2026-01-31 07:45:11.501 226833 INFO nova.virt.libvirt.driver [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Deleting local config drive /var/lib/nova/instances/e7df3fd9-ff03-4b35-930a-330e9dff6d0e/disk.config because it was imported into RBD.
Jan 31 07:45:11 compute-2 kernel: tap3196ca0e-ce: entered promiscuous mode
Jan 31 07:45:11 compute-2 nova_compute[226829]: 2026-01-31 07:45:11.557 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:11 compute-2 ovn_controller[133834]: 2026-01-31T07:45:11Z|00139|binding|INFO|Claiming lport 3196ca0e-ce5d-4fbf-9341-3c29ba2d513c for this chassis.
Jan 31 07:45:11 compute-2 ovn_controller[133834]: 2026-01-31T07:45:11Z|00140|binding|INFO|3196ca0e-ce5d-4fbf-9341-3c29ba2d513c: Claiming fa:16:3e:3b:47:4e 10.100.0.8
Jan 31 07:45:11 compute-2 NetworkManager[48999]: <info>  [1769845511.5601] manager: (tap3196ca0e-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/80)
Jan 31 07:45:11 compute-2 nova_compute[226829]: 2026-01-31 07:45:11.559 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:11 compute-2 nova_compute[226829]: 2026-01-31 07:45:11.566 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:11 compute-2 nova_compute[226829]: 2026-01-31 07:45:11.578 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:11 compute-2 systemd-udevd[249884]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:45:11 compute-2 NetworkManager[48999]: <info>  [1769845511.5946] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Jan 31 07:45:11 compute-2 nova_compute[226829]: 2026-01-31 07:45:11.593 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:11 compute-2 NetworkManager[48999]: <info>  [1769845511.5955] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Jan 31 07:45:11 compute-2 systemd-machined[195142]: New machine qemu-23-instance-00000038.
Jan 31 07:45:11 compute-2 NetworkManager[48999]: <info>  [1769845511.6078] device (tap3196ca0e-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:45:11 compute-2 NetworkManager[48999]: <info>  [1769845511.6089] device (tap3196ca0e-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:11.618 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:47:4e 10.100.0.8'], port_security=['fa:16:3e:3b:47:4e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e7df3fd9-ff03-4b35-930a-330e9dff6d0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9ddf930129cf4e0395f8c5e70fd9eda8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'be9fd1f2-df08-4f20-8be4-2f77d359418d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=530610e1-1646-4c1c-9b6d-a046ad77685d, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=3196ca0e-ce5d-4fbf-9341-3c29ba2d513c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:11.621 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 3196ca0e-ce5d-4fbf-9341-3c29ba2d513c in datapath d52bfdcb-a5f3-4946-8fca-4e9f67091fc3 bound to our chassis
Jan 31 07:45:11 compute-2 systemd[1]: Started Virtual Machine qemu-23-instance-00000038.
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:11.626 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d52bfdcb-a5f3-4946-8fca-4e9f67091fc3
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:11.638 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[bb3fd3ab-397a-4041-b0f7-7acff5e5dae1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:11.641 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd52bfdcb-a1 in ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:11.644 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd52bfdcb-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:11.644 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f139a885-d770-4910-a664-5312b2839900]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:11.645 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[45e7b114-f3f8-46d8-acec-028cfcde5444]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:11.658 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[a046a4a4-a644-4604-8223-8f1a6314fa23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:11 compute-2 nova_compute[226829]: 2026-01-31 07:45:11.661 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:11.669 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[65ab6268-7506-4d65-9983-e54e96f13f62]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:11 compute-2 nova_compute[226829]: 2026-01-31 07:45:11.687 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:11 compute-2 ovn_controller[133834]: 2026-01-31T07:45:11Z|00141|binding|INFO|Setting lport 3196ca0e-ce5d-4fbf-9341-3c29ba2d513c ovn-installed in OVS
Jan 31 07:45:11 compute-2 ovn_controller[133834]: 2026-01-31T07:45:11Z|00142|binding|INFO|Setting lport 3196ca0e-ce5d-4fbf-9341-3c29ba2d513c up in Southbound
Jan 31 07:45:11 compute-2 nova_compute[226829]: 2026-01-31 07:45:11.692 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:11.694 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[77c9915d-463f-47f7-b3e5-514b37001b6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:11.699 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ae62f9db-ea41-4a60-a334-97fc9e6b1f9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:11 compute-2 NetworkManager[48999]: <info>  [1769845511.7007] manager: (tapd52bfdcb-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/83)
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:11.724 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[c5a9d250-17bd-4aed-8ecd-a3b58fdd6059]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:11.727 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[18ec28c6-5391-4eb5-995c-ebd334980185]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:11 compute-2 NetworkManager[48999]: <info>  [1769845511.7427] device (tapd52bfdcb-a0): carrier: link connected
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:11.746 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[1c163e90-7f7e-456a-ae00-ed356c259f52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:11.761 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9e54c257-0080-4154-8c26-60f32e8e71ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd52bfdcb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:e6:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576421, 'reachable_time': 25212, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249917, 'error': None, 'target': 'ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:11 compute-2 nova_compute[226829]: 2026-01-31 07:45:11.764 226833 DEBUG nova.network.neutron [req-2965a141-92d4-47dd-a82e-90229cee3999 req-49b843cb-5964-4d03-9555-78c27dd7c683 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Updated VIF entry in instance network info cache for port 3196ca0e-ce5d-4fbf-9341-3c29ba2d513c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:45:11 compute-2 nova_compute[226829]: 2026-01-31 07:45:11.765 226833 DEBUG nova.network.neutron [req-2965a141-92d4-47dd-a82e-90229cee3999 req-49b843cb-5964-4d03-9555-78c27dd7c683 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Updating instance_info_cache with network_info: [{"id": "3196ca0e-ce5d-4fbf-9341-3c29ba2d513c", "address": "fa:16:3e:3b:47:4e", "network": {"id": "d52bfdcb-a5f3-4946-8fca-4e9f67091fc3", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-952450093-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ddf930129cf4e0395f8c5e70fd9eda8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3196ca0e-ce", "ovs_interfaceid": "3196ca0e-ce5d-4fbf-9341-3c29ba2d513c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:11.777 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6bab4b4c-e00a-45f5-bc02-00d7144d00f1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe16:e6a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576421, 'tstamp': 576421}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249918, 'error': None, 'target': 'ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:11.792 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ebee12e3-0af8-4e27-91af-a8b86225ffff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd52bfdcb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:e6:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576421, 'reachable_time': 25212, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249919, 'error': None, 'target': 'ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:11 compute-2 nova_compute[226829]: 2026-01-31 07:45:11.796 226833 DEBUG oslo_concurrency.lockutils [req-2965a141-92d4-47dd-a82e-90229cee3999 req-49b843cb-5964-4d03-9555-78c27dd7c683 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-e7df3fd9-ff03-4b35-930a-330e9dff6d0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:11.824 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[70514eca-9807-45f1-8b79-ddf516b3b016]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:11.875 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ac63c06b-b093-4540-9b5d-726bf661abae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:11.877 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd52bfdcb-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:11.877 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:11.878 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd52bfdcb-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:45:11 compute-2 nova_compute[226829]: 2026-01-31 07:45:11.880 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:11 compute-2 NetworkManager[48999]: <info>  [1769845511.8816] manager: (tapd52bfdcb-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Jan 31 07:45:11 compute-2 kernel: tapd52bfdcb-a0: entered promiscuous mode
Jan 31 07:45:11 compute-2 nova_compute[226829]: 2026-01-31 07:45:11.884 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:11.885 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd52bfdcb-a0, col_values=(('external_ids', {'iface-id': '81353d2a-c386-4c97-aadf-683c4d8daa27'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:45:11 compute-2 nova_compute[226829]: 2026-01-31 07:45:11.886 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:11 compute-2 ovn_controller[133834]: 2026-01-31T07:45:11Z|00143|binding|INFO|Releasing lport 81353d2a-c386-4c97-aadf-683c4d8daa27 from this chassis (sb_readonly=0)
Jan 31 07:45:11 compute-2 nova_compute[226829]: 2026-01-31 07:45:11.900 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:11.901 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d52bfdcb-a5f3-4946-8fca-4e9f67091fc3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d52bfdcb-a5f3-4946-8fca-4e9f67091fc3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:11.902 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a3342e7c-6303-4a53-86d3-425a87d5cd66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:11.903 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: global
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/d52bfdcb-a5f3-4946-8fca-4e9f67091fc3.pid.haproxy
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID d52bfdcb-a5f3-4946-8fca-4e9f67091fc3
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:45:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:11.905 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3', 'env', 'PROCESS_TAG=haproxy-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d52bfdcb-a5f3-4946-8fca-4e9f67091fc3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:45:12 compute-2 ceph-mon[77282]: pgmap v1397: 305 pgs: 305 active+clean; 339 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.7 MiB/s wr, 181 op/s
Jan 31 07:45:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3572805502' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:12 compute-2 nova_compute[226829]: 2026-01-31 07:45:12.243 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845512.2425404, e7df3fd9-ff03-4b35-930a-330e9dff6d0e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:45:12 compute-2 nova_compute[226829]: 2026-01-31 07:45:12.243 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] VM Started (Lifecycle Event)
Jan 31 07:45:12 compute-2 nova_compute[226829]: 2026-01-31 07:45:12.279 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:45:12 compute-2 nova_compute[226829]: 2026-01-31 07:45:12.284 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845512.2438514, e7df3fd9-ff03-4b35-930a-330e9dff6d0e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:45:12 compute-2 nova_compute[226829]: 2026-01-31 07:45:12.284 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] VM Paused (Lifecycle Event)
Jan 31 07:45:12 compute-2 nova_compute[226829]: 2026-01-31 07:45:12.331 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:45:12 compute-2 nova_compute[226829]: 2026-01-31 07:45:12.337 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:45:12 compute-2 podman[249992]: 2026-01-31 07:45:12.350980824 +0000 UTC m=+0.075310382 container create 6d2f93b036f29d7f052ea68d603cde097020d445768d810d32c743cfe2a3f485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 07:45:12 compute-2 sudo[250003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:45:12 compute-2 systemd[1]: Started libpod-conmon-6d2f93b036f29d7f052ea68d603cde097020d445768d810d32c743cfe2a3f485.scope.
Jan 31 07:45:12 compute-2 sudo[250003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:12 compute-2 sudo[250003]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:12 compute-2 podman[249992]: 2026-01-31 07:45:12.31245085 +0000 UTC m=+0.036780468 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:45:12 compute-2 nova_compute[226829]: 2026-01-31 07:45:12.404 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:45:12 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:45:12 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a18f6d5b0a8cf22bc0d34fa9fbf9b8bc02368bdb9867166c13b7db905d2408aa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:45:12 compute-2 podman[249992]: 2026-01-31 07:45:12.432889372 +0000 UTC m=+0.157218990 container init 6d2f93b036f29d7f052ea68d603cde097020d445768d810d32c743cfe2a3f485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 07:45:12 compute-2 podman[249992]: 2026-01-31 07:45:12.449194434 +0000 UTC m=+0.173524002 container start 6d2f93b036f29d7f052ea68d603cde097020d445768d810d32c743cfe2a3f485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:45:12 compute-2 nova_compute[226829]: 2026-01-31 07:45:12.469 226833 DEBUG nova.compute.manager [req-8eeaf4d5-b5e6-450e-a6ec-b0f7f41eedf1 req-e5019c7e-6320-456c-b82e-6d984d2ec6ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Received event network-vif-plugged-3196ca0e-ce5d-4fbf-9341-3c29ba2d513c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:45:12 compute-2 nova_compute[226829]: 2026-01-31 07:45:12.469 226833 DEBUG oslo_concurrency.lockutils [req-8eeaf4d5-b5e6-450e-a6ec-b0f7f41eedf1 req-e5019c7e-6320-456c-b82e-6d984d2ec6ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:12 compute-2 nova_compute[226829]: 2026-01-31 07:45:12.470 226833 DEBUG oslo_concurrency.lockutils [req-8eeaf4d5-b5e6-450e-a6ec-b0f7f41eedf1 req-e5019c7e-6320-456c-b82e-6d984d2ec6ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:12 compute-2 nova_compute[226829]: 2026-01-31 07:45:12.470 226833 DEBUG oslo_concurrency.lockutils [req-8eeaf4d5-b5e6-450e-a6ec-b0f7f41eedf1 req-e5019c7e-6320-456c-b82e-6d984d2ec6ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:12 compute-2 sudo[250041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:45:12 compute-2 nova_compute[226829]: 2026-01-31 07:45:12.470 226833 DEBUG nova.compute.manager [req-8eeaf4d5-b5e6-450e-a6ec-b0f7f41eedf1 req-e5019c7e-6320-456c-b82e-6d984d2ec6ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Processing event network-vif-plugged-3196ca0e-ce5d-4fbf-9341-3c29ba2d513c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:45:12 compute-2 nova_compute[226829]: 2026-01-31 07:45:12.471 226833 DEBUG nova.compute.manager [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:45:12 compute-2 sudo[250041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:12 compute-2 nova_compute[226829]: 2026-01-31 07:45:12.476 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845512.4761188, e7df3fd9-ff03-4b35-930a-330e9dff6d0e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:45:12 compute-2 nova_compute[226829]: 2026-01-31 07:45:12.476 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] VM Resumed (Lifecycle Event)
Jan 31 07:45:12 compute-2 sudo[250041]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:12 compute-2 nova_compute[226829]: 2026-01-31 07:45:12.479 226833 DEBUG nova.virt.libvirt.driver [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:45:12 compute-2 nova_compute[226829]: 2026-01-31 07:45:12.482 226833 INFO nova.virt.libvirt.driver [-] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Instance spawned successfully.
Jan 31 07:45:12 compute-2 nova_compute[226829]: 2026-01-31 07:45:12.483 226833 DEBUG nova.virt.libvirt.driver [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:45:12 compute-2 neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3[250032]: [NOTICE]   (250069) : New worker (250082) forked
Jan 31 07:45:12 compute-2 neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3[250032]: [NOTICE]   (250069) : Loading success.
Jan 31 07:45:12 compute-2 podman[250027]: 2026-01-31 07:45:12.518254706 +0000 UTC m=+0.120888466 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:45:12 compute-2 nova_compute[226829]: 2026-01-31 07:45:12.539 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:45:12 compute-2 nova_compute[226829]: 2026-01-31 07:45:12.544 226833 DEBUG nova.virt.libvirt.driver [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:45:12 compute-2 nova_compute[226829]: 2026-01-31 07:45:12.545 226833 DEBUG nova.virt.libvirt.driver [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:45:12 compute-2 nova_compute[226829]: 2026-01-31 07:45:12.545 226833 DEBUG nova.virt.libvirt.driver [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:45:12 compute-2 nova_compute[226829]: 2026-01-31 07:45:12.546 226833 DEBUG nova.virt.libvirt.driver [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:45:12 compute-2 nova_compute[226829]: 2026-01-31 07:45:12.546 226833 DEBUG nova.virt.libvirt.driver [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:45:12 compute-2 nova_compute[226829]: 2026-01-31 07:45:12.547 226833 DEBUG nova.virt.libvirt.driver [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:45:12 compute-2 nova_compute[226829]: 2026-01-31 07:45:12.550 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:45:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:12.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:45:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:12.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:45:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:45:13 compute-2 nova_compute[226829]: 2026-01-31 07:45:13.973 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:14 compute-2 ceph-mon[77282]: pgmap v1398: 305 pgs: 305 active+clean; 358 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.0 MiB/s wr, 149 op/s
Jan 31 07:45:14 compute-2 nova_compute[226829]: 2026-01-31 07:45:14.067 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:45:14 compute-2 nova_compute[226829]: 2026-01-31 07:45:14.106 226833 INFO nova.compute.manager [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Took 16.43 seconds to spawn the instance on the hypervisor.
Jan 31 07:45:14 compute-2 nova_compute[226829]: 2026-01-31 07:45:14.107 226833 DEBUG nova.compute.manager [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:45:14 compute-2 nova_compute[226829]: 2026-01-31 07:45:14.183 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:14 compute-2 nova_compute[226829]: 2026-01-31 07:45:14.229 226833 INFO nova.compute.manager [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Took 19.81 seconds to build instance.
Jan 31 07:45:14 compute-2 nova_compute[226829]: 2026-01-31 07:45:14.291 226833 DEBUG oslo_concurrency.lockutils [None req-ec22f3b6-8442-4eb9-99d8-d520fac96a43 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:14 compute-2 nova_compute[226829]: 2026-01-31 07:45:14.697 226833 DEBUG nova.compute.manager [req-dc3356ae-a886-4046-96b2-6eb527755172 req-f791f05b-2a89-49fa-a0e3-70ad0283a829 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Received event network-vif-plugged-3196ca0e-ce5d-4fbf-9341-3c29ba2d513c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:45:14 compute-2 nova_compute[226829]: 2026-01-31 07:45:14.697 226833 DEBUG oslo_concurrency.lockutils [req-dc3356ae-a886-4046-96b2-6eb527755172 req-f791f05b-2a89-49fa-a0e3-70ad0283a829 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:14 compute-2 nova_compute[226829]: 2026-01-31 07:45:14.698 226833 DEBUG oslo_concurrency.lockutils [req-dc3356ae-a886-4046-96b2-6eb527755172 req-f791f05b-2a89-49fa-a0e3-70ad0283a829 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:14 compute-2 nova_compute[226829]: 2026-01-31 07:45:14.698 226833 DEBUG oslo_concurrency.lockutils [req-dc3356ae-a886-4046-96b2-6eb527755172 req-f791f05b-2a89-49fa-a0e3-70ad0283a829 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:14 compute-2 nova_compute[226829]: 2026-01-31 07:45:14.698 226833 DEBUG nova.compute.manager [req-dc3356ae-a886-4046-96b2-6eb527755172 req-f791f05b-2a89-49fa-a0e3-70ad0283a829 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] No waiting events found dispatching network-vif-plugged-3196ca0e-ce5d-4fbf-9341-3c29ba2d513c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:45:14 compute-2 nova_compute[226829]: 2026-01-31 07:45:14.699 226833 WARNING nova.compute.manager [req-dc3356ae-a886-4046-96b2-6eb527755172 req-f791f05b-2a89-49fa-a0e3-70ad0283a829 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Received unexpected event network-vif-plugged-3196ca0e-ce5d-4fbf-9341-3c29ba2d513c for instance with vm_state active and task_state None.
Jan 31 07:45:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:14.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:45:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:14.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:45:16 compute-2 ceph-mon[77282]: pgmap v1399: 305 pgs: 305 active+clean; 358 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.1 MiB/s wr, 120 op/s
Jan 31 07:45:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:45:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:16.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:45:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:16.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:18 compute-2 ceph-mon[77282]: pgmap v1400: 305 pgs: 305 active+clean; 418 MiB data, 730 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 6.1 MiB/s wr, 243 op/s
Jan 31 07:45:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:45:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:18.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:45:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:18.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:45:19 compute-2 nova_compute[226829]: 2026-01-31 07:45:19.008 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:19 compute-2 nova_compute[226829]: 2026-01-31 07:45:19.187 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:45:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:20.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:45:20 compute-2 ceph-mon[77282]: pgmap v1401: 305 pgs: 305 active+clean; 411 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.7 MiB/s wr, 210 op/s
Jan 31 07:45:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3042256607' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1545433916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:20.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:21 compute-2 sudo[250099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:45:21 compute-2 sudo[250099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:21 compute-2 sudo[250099]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:21 compute-2 sudo[250124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:45:21 compute-2 sudo[250124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:21 compute-2 sudo[250124]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:21 compute-2 sudo[250149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:45:21 compute-2 sudo[250149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:21 compute-2 sudo[250149]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:21 compute-2 podman[250174]: 2026-01-31 07:45:21.263628863 +0000 UTC m=+0.042458641 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 07:45:21 compute-2 sudo[250180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 07:45:21 compute-2 sudo[250180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:21 compute-2 ceph-mon[77282]: pgmap v1402: 305 pgs: 305 active+clean; 339 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.3 MiB/s wr, 223 op/s
Jan 31 07:45:21 compute-2 podman[250292]: 2026-01-31 07:45:21.788541834 +0000 UTC m=+0.074530330 container exec 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:45:21 compute-2 podman[250292]: 2026-01-31 07:45:21.918393641 +0000 UTC m=+0.204382037 container exec_died 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:45:22 compute-2 podman[250448]: 2026-01-31 07:45:22.565854531 +0000 UTC m=+0.076527714 container exec f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 07:45:22 compute-2 podman[250448]: 2026-01-31 07:45:22.576962952 +0000 UTC m=+0.087636125 container exec_died f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 07:45:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:22.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:22 compute-2 podman[250510]: 2026-01-31 07:45:22.908402552 +0000 UTC m=+0.142282146 container exec 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, release=1793, architecture=x86_64, io.buildah.version=1.28.2, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, description=keepalived for Ceph, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Jan 31 07:45:22 compute-2 podman[250510]: 2026-01-31 07:45:22.948488608 +0000 UTC m=+0.182368132 container exec_died 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, release=1793, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, architecture=x86_64, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, distribution-scope=public, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, name=keepalived, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 07:45:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:22.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:22 compute-2 sudo[250180]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:23 compute-2 sudo[250542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:45:23 compute-2 sudo[250542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:23 compute-2 sudo[250542]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:23 compute-2 sudo[250567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:45:23 compute-2 sudo[250567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:23 compute-2 sudo[250567]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:23 compute-2 sudo[250592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:45:23 compute-2 nova_compute[226829]: 2026-01-31 07:45:23.206 226833 DEBUG nova.compute.manager [req-d09fa02e-f663-4aa4-b660-69039a106946 req-199289d5-0581-4fec-ab4f-840edc43ee4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Received event network-changed-3196ca0e-ce5d-4fbf-9341-3c29ba2d513c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:45:23 compute-2 nova_compute[226829]: 2026-01-31 07:45:23.207 226833 DEBUG nova.compute.manager [req-d09fa02e-f663-4aa4-b660-69039a106946 req-199289d5-0581-4fec-ab4f-840edc43ee4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Refreshing instance network info cache due to event network-changed-3196ca0e-ce5d-4fbf-9341-3c29ba2d513c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:45:23 compute-2 nova_compute[226829]: 2026-01-31 07:45:23.208 226833 DEBUG oslo_concurrency.lockutils [req-d09fa02e-f663-4aa4-b660-69039a106946 req-199289d5-0581-4fec-ab4f-840edc43ee4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-e7df3fd9-ff03-4b35-930a-330e9dff6d0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:45:23 compute-2 nova_compute[226829]: 2026-01-31 07:45:23.208 226833 DEBUG oslo_concurrency.lockutils [req-d09fa02e-f663-4aa4-b660-69039a106946 req-199289d5-0581-4fec-ab4f-840edc43ee4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-e7df3fd9-ff03-4b35-930a-330e9dff6d0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:45:23 compute-2 sudo[250592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:23 compute-2 nova_compute[226829]: 2026-01-31 07:45:23.209 226833 DEBUG nova.network.neutron [req-d09fa02e-f663-4aa4-b660-69039a106946 req-199289d5-0581-4fec-ab4f-840edc43ee4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Refreshing network info cache for port 3196ca0e-ce5d-4fbf-9341-3c29ba2d513c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:45:23 compute-2 sudo[250592]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:23 compute-2 sudo[250618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:45:23 compute-2 sudo[250618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:23 compute-2 sudo[250618]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:45:24 compute-2 nova_compute[226829]: 2026-01-31 07:45:24.063 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:24 compute-2 nova_compute[226829]: 2026-01-31 07:45:24.189 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:24 compute-2 ceph-mon[77282]: pgmap v1403: 305 pgs: 305 active+clean; 339 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 207 op/s
Jan 31 07:45:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/408421975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:45:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:45:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:45:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:45:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:45:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:45:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:24.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:45:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:45:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:24.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:45:25 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:45:25 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:45:25 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:45:25 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:45:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2415221445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1950759889' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:26 compute-2 ceph-mon[77282]: pgmap v1404: 305 pgs: 305 active+clean; 339 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.0 MiB/s wr, 176 op/s
Jan 31 07:45:26 compute-2 ovn_controller[133834]: 2026-01-31T07:45:26Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3b:47:4e 10.100.0.8
Jan 31 07:45:26 compute-2 ovn_controller[133834]: 2026-01-31T07:45:26Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3b:47:4e 10.100.0.8
Jan 31 07:45:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:26.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:26.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:27 compute-2 nova_compute[226829]: 2026-01-31 07:45:27.136 226833 DEBUG nova.network.neutron [req-d09fa02e-f663-4aa4-b660-69039a106946 req-199289d5-0581-4fec-ab4f-840edc43ee4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Updated VIF entry in instance network info cache for port 3196ca0e-ce5d-4fbf-9341-3c29ba2d513c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:45:27 compute-2 nova_compute[226829]: 2026-01-31 07:45:27.137 226833 DEBUG nova.network.neutron [req-d09fa02e-f663-4aa4-b660-69039a106946 req-199289d5-0581-4fec-ab4f-840edc43ee4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Updating instance_info_cache with network_info: [{"id": "3196ca0e-ce5d-4fbf-9341-3c29ba2d513c", "address": "fa:16:3e:3b:47:4e", "network": {"id": "d52bfdcb-a5f3-4946-8fca-4e9f67091fc3", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-952450093-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ddf930129cf4e0395f8c5e70fd9eda8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3196ca0e-ce", "ovs_interfaceid": "3196ca0e-ce5d-4fbf-9341-3c29ba2d513c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:45:27 compute-2 nova_compute[226829]: 2026-01-31 07:45:27.177 226833 DEBUG oslo_concurrency.lockutils [req-d09fa02e-f663-4aa4-b660-69039a106946 req-199289d5-0581-4fec-ab4f-840edc43ee4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-e7df3fd9-ff03-4b35-930a-330e9dff6d0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:45:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:45:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:28.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:45:28 compute-2 ceph-mon[77282]: pgmap v1405: 305 pgs: 305 active+clean; 347 MiB data, 688 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 4.1 MiB/s wr, 248 op/s
Jan 31 07:45:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:45:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:28.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:29 compute-2 nova_compute[226829]: 2026-01-31 07:45:29.065 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:29 compute-2 nova_compute[226829]: 2026-01-31 07:45:29.191 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:30 compute-2 ceph-mon[77282]: pgmap v1406: 305 pgs: 305 active+clean; 353 MiB data, 692 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.4 MiB/s wr, 159 op/s
Jan 31 07:45:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:45:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:30.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:45:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:30.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:31 compute-2 ceph-mon[77282]: pgmap v1407: 305 pgs: 305 active+clean; 370 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.1 MiB/s wr, 218 op/s
Jan 31 07:45:32 compute-2 sudo[250679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:45:32 compute-2 sudo[250679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:32 compute-2 sudo[250679]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:32 compute-2 sudo[250704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:45:32 compute-2 sudo[250704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:32 compute-2 sudo[250704]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:32.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:45:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:32.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:45:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:45:34 compute-2 nova_compute[226829]: 2026-01-31 07:45:34.112 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:34 compute-2 ceph-mon[77282]: pgmap v1408: 305 pgs: 305 active+clean; 372 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 214 op/s
Jan 31 07:45:34 compute-2 nova_compute[226829]: 2026-01-31 07:45:34.195 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:34.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:45:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:34.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:45:35 compute-2 sudo[250731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:45:35 compute-2 sudo[250731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:35 compute-2 sudo[250731]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:35 compute-2 sudo[250756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:45:35 compute-2 sudo[250756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:35 compute-2 sudo[250756]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:36 compute-2 ceph-mon[77282]: pgmap v1409: 305 pgs: 305 active+clean; 372 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 213 op/s
Jan 31 07:45:36 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:45:36 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:45:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:45:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:36.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:45:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e210 e210: 3 total, 3 up, 3 in
Jan 31 07:45:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:45:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:36.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:45:37 compute-2 nova_compute[226829]: 2026-01-31 07:45:37.292 226833 DEBUG oslo_concurrency.lockutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:37 compute-2 nova_compute[226829]: 2026-01-31 07:45:37.293 226833 DEBUG oslo_concurrency.lockutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:37 compute-2 nova_compute[226829]: 2026-01-31 07:45:37.328 226833 DEBUG nova.compute.manager [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:45:37 compute-2 nova_compute[226829]: 2026-01-31 07:45:37.701 226833 DEBUG oslo_concurrency.lockutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:37 compute-2 nova_compute[226829]: 2026-01-31 07:45:37.702 226833 DEBUG oslo_concurrency.lockutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:37 compute-2 nova_compute[226829]: 2026-01-31 07:45:37.712 226833 DEBUG nova.virt.hardware [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:45:37 compute-2 nova_compute[226829]: 2026-01-31 07:45:37.713 226833 INFO nova.compute.claims [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:45:37 compute-2 nova_compute[226829]: 2026-01-31 07:45:37.911 226833 DEBUG oslo_concurrency.processutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:37 compute-2 ceph-mon[77282]: pgmap v1410: 305 pgs: 305 active+clean; 381 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.1 MiB/s wr, 237 op/s
Jan 31 07:45:37 compute-2 ceph-mon[77282]: osdmap e210: 3 total, 3 up, 3 in
Jan 31 07:45:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:45:38 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3713299769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:45:38 compute-2 nova_compute[226829]: 2026-01-31 07:45:38.375 226833 DEBUG oslo_concurrency.processutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:38 compute-2 nova_compute[226829]: 2026-01-31 07:45:38.383 226833 DEBUG nova.compute.provider_tree [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:45:38 compute-2 nova_compute[226829]: 2026-01-31 07:45:38.448 226833 DEBUG nova.scheduler.client.report [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:45:38 compute-2 nova_compute[226829]: 2026-01-31 07:45:38.526 226833 DEBUG oslo_concurrency.lockutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:38 compute-2 nova_compute[226829]: 2026-01-31 07:45:38.527 226833 DEBUG nova.compute.manager [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:45:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:45:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:38.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:45:38 compute-2 nova_compute[226829]: 2026-01-31 07:45:38.912 226833 DEBUG nova.compute.manager [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:45:38 compute-2 nova_compute[226829]: 2026-01-31 07:45:38.913 226833 DEBUG nova.network.neutron [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:45:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:45:38 compute-2 nova_compute[226829]: 2026-01-31 07:45:38.973 226833 INFO nova.virt.libvirt.driver [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:45:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:38.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:39 compute-2 nova_compute[226829]: 2026-01-31 07:45:39.026 226833 DEBUG nova.compute.manager [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:45:39 compute-2 nova_compute[226829]: 2026-01-31 07:45:39.116 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:39 compute-2 nova_compute[226829]: 2026-01-31 07:45:39.165 226833 DEBUG nova.compute.manager [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:45:39 compute-2 nova_compute[226829]: 2026-01-31 07:45:39.166 226833 DEBUG nova.virt.libvirt.driver [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:45:39 compute-2 nova_compute[226829]: 2026-01-31 07:45:39.167 226833 INFO nova.virt.libvirt.driver [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Creating image(s)
Jan 31 07:45:39 compute-2 nova_compute[226829]: 2026-01-31 07:45:39.198 226833 DEBUG nova.storage.rbd_utils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:45:39 compute-2 nova_compute[226829]: 2026-01-31 07:45:39.227 226833 DEBUG nova.storage.rbd_utils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:45:39 compute-2 nova_compute[226829]: 2026-01-31 07:45:39.257 226833 DEBUG nova.storage.rbd_utils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:45:39 compute-2 nova_compute[226829]: 2026-01-31 07:45:39.262 226833 DEBUG oslo_concurrency.processutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:39 compute-2 nova_compute[226829]: 2026-01-31 07:45:39.279 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:39 compute-2 nova_compute[226829]: 2026-01-31 07:45:39.284 226833 DEBUG nova.policy [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '97abab8eb79247cd89fb2ebff295b890', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f299640bb1f64e5fa12b23955e5a2127', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:45:39 compute-2 nova_compute[226829]: 2026-01-31 07:45:39.315 226833 DEBUG oslo_concurrency.processutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:39 compute-2 nova_compute[226829]: 2026-01-31 07:45:39.316 226833 DEBUG oslo_concurrency.lockutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:39 compute-2 nova_compute[226829]: 2026-01-31 07:45:39.316 226833 DEBUG oslo_concurrency.lockutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:39 compute-2 nova_compute[226829]: 2026-01-31 07:45:39.317 226833 DEBUG oslo_concurrency.lockutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:39 compute-2 nova_compute[226829]: 2026-01-31 07:45:39.344 226833 DEBUG nova.storage.rbd_utils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:45:39 compute-2 nova_compute[226829]: 2026-01-31 07:45:39.348 226833 DEBUG oslo_concurrency.processutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3713299769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:45:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:40.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:45:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:40.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:45:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:45:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:42.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:45:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:45:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:42.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:45:43 compute-2 ceph-mon[77282]: pgmap v1412: 305 pgs: 305 active+clean; 385 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.3 MiB/s wr, 164 op/s
Jan 31 07:45:43 compute-2 podman[250900]: 2026-01-31 07:45:43.282242416 +0000 UTC m=+0.154311911 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:45:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:45:44 compute-2 nova_compute[226829]: 2026-01-31 07:45:44.119 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:44 compute-2 nova_compute[226829]: 2026-01-31 07:45:44.199 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:44 compute-2 ceph-mon[77282]: pgmap v1413: 305 pgs: 305 active+clean; 398 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.6 MiB/s wr, 127 op/s
Jan 31 07:45:44 compute-2 ceph-mon[77282]: pgmap v1414: 305 pgs: 305 active+clean; 427 MiB data, 771 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.8 MiB/s wr, 116 op/s
Jan 31 07:45:44 compute-2 nova_compute[226829]: 2026-01-31 07:45:44.457 226833 DEBUG oslo_concurrency.processutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:44 compute-2 nova_compute[226829]: 2026-01-31 07:45:44.578 226833 DEBUG nova.storage.rbd_utils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] resizing rbd image ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:45:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:44.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:45:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2852159801' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:44.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:45 compute-2 nova_compute[226829]: 2026-01-31 07:45:45.223 226833 DEBUG nova.objects.instance [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'migration_context' on Instance uuid ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:45:45 compute-2 nova_compute[226829]: 2026-01-31 07:45:45.260 226833 DEBUG nova.virt.libvirt.driver [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:45:45 compute-2 nova_compute[226829]: 2026-01-31 07:45:45.261 226833 DEBUG nova.virt.libvirt.driver [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Ensure instance console log exists: /var/lib/nova/instances/ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:45:45 compute-2 nova_compute[226829]: 2026-01-31 07:45:45.262 226833 DEBUG oslo_concurrency.lockutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:45 compute-2 nova_compute[226829]: 2026-01-31 07:45:45.263 226833 DEBUG oslo_concurrency.lockutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:45 compute-2 nova_compute[226829]: 2026-01-31 07:45:45.263 226833 DEBUG oslo_concurrency.lockutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:45 compute-2 nova_compute[226829]: 2026-01-31 07:45:45.404 226833 DEBUG nova.network.neutron [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Successfully created port: e24f0391-b643-4a28-a184-a94f3b8aac45 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:45:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2852159801' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3117920125' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:45:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3117920125' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:45:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e211 e211: 3 total, 3 up, 3 in
Jan 31 07:45:46 compute-2 ceph-mon[77282]: pgmap v1415: 305 pgs: 305 active+clean; 427 MiB data, 771 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.8 MiB/s wr, 116 op/s
Jan 31 07:45:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:45:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:46.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:45:46 compute-2 nova_compute[226829]: 2026-01-31 07:45:46.970 226833 DEBUG nova.network.neutron [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Successfully updated port: e24f0391-b643-4a28-a184-a94f3b8aac45 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:45:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:45:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:46.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:45:47 compute-2 nova_compute[226829]: 2026-01-31 07:45:47.003 226833 DEBUG oslo_concurrency.lockutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "refresh_cache-ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:45:47 compute-2 nova_compute[226829]: 2026-01-31 07:45:47.003 226833 DEBUG oslo_concurrency.lockutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquired lock "refresh_cache-ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:45:47 compute-2 nova_compute[226829]: 2026-01-31 07:45:47.003 226833 DEBUG nova.network.neutron [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:45:47 compute-2 nova_compute[226829]: 2026-01-31 07:45:47.249 226833 DEBUG oslo_concurrency.lockutils [None req-22780929-a018-4e83-be8f-cd43fae25059 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Acquiring lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:47 compute-2 nova_compute[226829]: 2026-01-31 07:45:47.250 226833 DEBUG oslo_concurrency.lockutils [None req-22780929-a018-4e83-be8f-cd43fae25059 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:47 compute-2 nova_compute[226829]: 2026-01-31 07:45:47.270 226833 DEBUG nova.objects.instance [None req-22780929-a018-4e83-be8f-cd43fae25059 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lazy-loading 'flavor' on Instance uuid e7df3fd9-ff03-4b35-930a-330e9dff6d0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:45:47 compute-2 nova_compute[226829]: 2026-01-31 07:45:47.326 226833 DEBUG oslo_concurrency.lockutils [None req-22780929-a018-4e83-be8f-cd43fae25059 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:47 compute-2 nova_compute[226829]: 2026-01-31 07:45:47.452 226833 DEBUG nova.network.neutron [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:45:47 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e212 e212: 3 total, 3 up, 3 in
Jan 31 07:45:47 compute-2 ceph-mon[77282]: osdmap e211: 3 total, 3 up, 3 in
Jan 31 07:45:47 compute-2 ceph-mon[77282]: pgmap v1417: 305 pgs: 305 active+clean; 508 MiB data, 810 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 7.2 MiB/s wr, 170 op/s
Jan 31 07:45:48 compute-2 nova_compute[226829]: 2026-01-31 07:45:48.103 226833 DEBUG nova.compute.manager [req-590fe775-9496-4620-aec1-ace778ee5e2a req-e14f2fd3-5e5d-4b53-8498-bee84864516a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Received event network-changed-e24f0391-b643-4a28-a184-a94f3b8aac45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:45:48 compute-2 nova_compute[226829]: 2026-01-31 07:45:48.104 226833 DEBUG nova.compute.manager [req-590fe775-9496-4620-aec1-ace778ee5e2a req-e14f2fd3-5e5d-4b53-8498-bee84864516a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Refreshing instance network info cache due to event network-changed-e24f0391-b643-4a28-a184-a94f3b8aac45. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:45:48 compute-2 nova_compute[226829]: 2026-01-31 07:45:48.105 226833 DEBUG oslo_concurrency.lockutils [req-590fe775-9496-4620-aec1-ace778ee5e2a req-e14f2fd3-5e5d-4b53-8498-bee84864516a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:45:48 compute-2 nova_compute[226829]: 2026-01-31 07:45:48.463 226833 DEBUG oslo_concurrency.lockutils [None req-22780929-a018-4e83-be8f-cd43fae25059 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Acquiring lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:48 compute-2 nova_compute[226829]: 2026-01-31 07:45:48.464 226833 DEBUG oslo_concurrency.lockutils [None req-22780929-a018-4e83-be8f-cd43fae25059 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:48 compute-2 nova_compute[226829]: 2026-01-31 07:45:48.465 226833 INFO nova.compute.manager [None req-22780929-a018-4e83-be8f-cd43fae25059 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Attaching volume d1aca578-ec32-4cbf-a124-22c55e831394 to /dev/vdb
Jan 31 07:45:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:45:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:48.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:45:48 compute-2 ceph-mon[77282]: osdmap e212: 3 total, 3 up, 3 in
Jan 31 07:45:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:45:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:49.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:49 compute-2 nova_compute[226829]: 2026-01-31 07:45:49.123 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:49 compute-2 nova_compute[226829]: 2026-01-31 07:45:49.181 226833 DEBUG os_brick.utils [None req-22780929-a018-4e83-be8f-cd43fae25059 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 07:45:49 compute-2 nova_compute[226829]: 2026-01-31 07:45:49.185 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:49 compute-2 nova_compute[226829]: 2026-01-31 07:45:49.198 236868 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:49 compute-2 nova_compute[226829]: 2026-01-31 07:45:49.199 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[33a14fb7-ec5b-4af0-a0f9-fa9005b805b9]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:49 compute-2 nova_compute[226829]: 2026-01-31 07:45:49.201 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:49 compute-2 nova_compute[226829]: 2026-01-31 07:45:49.201 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:49 compute-2 nova_compute[226829]: 2026-01-31 07:45:49.211 236868 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:49 compute-2 nova_compute[226829]: 2026-01-31 07:45:49.211 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[8ca73575-3baf-45dd-b014-54d751fc6c3a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:70a4e945afb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:49 compute-2 nova_compute[226829]: 2026-01-31 07:45:49.214 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:49 compute-2 nova_compute[226829]: 2026-01-31 07:45:49.221 236868 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:49 compute-2 nova_compute[226829]: 2026-01-31 07:45:49.221 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[32f78ddc-3873-4bba-aca1-e7974f55dd62]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:49 compute-2 nova_compute[226829]: 2026-01-31 07:45:49.223 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[d23c5f59-007a-4742-ae86-f8397cdbeb76]: (4, 'd14f084b-ec77-4fba-801f-103494d34b3a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:49 compute-2 nova_compute[226829]: 2026-01-31 07:45:49.224 226833 DEBUG oslo_concurrency.processutils [None req-22780929-a018-4e83-be8f-cd43fae25059 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:49 compute-2 nova_compute[226829]: 2026-01-31 07:45:49.250 226833 DEBUG oslo_concurrency.processutils [None req-22780929-a018-4e83-be8f-cd43fae25059 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:49 compute-2 nova_compute[226829]: 2026-01-31 07:45:49.254 226833 DEBUG os_brick.initiator.connectors.lightos [None req-22780929-a018-4e83-be8f-cd43fae25059 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 07:45:49 compute-2 nova_compute[226829]: 2026-01-31 07:45:49.254 226833 DEBUG os_brick.initiator.connectors.lightos [None req-22780929-a018-4e83-be8f-cd43fae25059 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 07:45:49 compute-2 nova_compute[226829]: 2026-01-31 07:45:49.255 226833 DEBUG os_brick.initiator.connectors.lightos [None req-22780929-a018-4e83-be8f-cd43fae25059 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 07:45:49 compute-2 nova_compute[226829]: 2026-01-31 07:45:49.256 226833 DEBUG os_brick.utils [None req-22780929-a018-4e83-be8f-cd43fae25059 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] <== get_connector_properties: return (73ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:70a4e945afb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'd14f084b-ec77-4fba-801f-103494d34b3a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 07:45:49 compute-2 nova_compute[226829]: 2026-01-31 07:45:49.256 226833 DEBUG nova.virt.block_device [None req-22780929-a018-4e83-be8f-cd43fae25059 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Updating existing volume attachment record: 0c8e863f-aae8-4385-bd78-857678107448 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 07:45:50 compute-2 ceph-mon[77282]: pgmap v1419: 305 pgs: 305 active+clean; 525 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 8.4 MiB/s wr, 164 op/s
Jan 31 07:45:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:45:50 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/773784101' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:50 compute-2 nova_compute[226829]: 2026-01-31 07:45:50.456 226833 DEBUG nova.objects.instance [None req-22780929-a018-4e83-be8f-cd43fae25059 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lazy-loading 'flavor' on Instance uuid e7df3fd9-ff03-4b35-930a-330e9dff6d0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:45:50 compute-2 nova_compute[226829]: 2026-01-31 07:45:50.499 226833 DEBUG nova.virt.libvirt.driver [None req-22780929-a018-4e83-be8f-cd43fae25059 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Attempting to attach volume d1aca578-ec32-4cbf-a124-22c55e831394 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 31 07:45:50 compute-2 nova_compute[226829]: 2026-01-31 07:45:50.504 226833 DEBUG nova.virt.libvirt.guest [None req-22780929-a018-4e83-be8f-cd43fae25059 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 07:45:50 compute-2 nova_compute[226829]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 07:45:50 compute-2 nova_compute[226829]:   <source protocol="rbd" name="volumes/volume-d1aca578-ec32-4cbf-a124-22c55e831394">
Jan 31 07:45:50 compute-2 nova_compute[226829]:     <host name="192.168.122.100" port="6789"/>
Jan 31 07:45:50 compute-2 nova_compute[226829]:     <host name="192.168.122.102" port="6789"/>
Jan 31 07:45:50 compute-2 nova_compute[226829]:     <host name="192.168.122.101" port="6789"/>
Jan 31 07:45:50 compute-2 nova_compute[226829]:   </source>
Jan 31 07:45:50 compute-2 nova_compute[226829]:   <auth username="openstack">
Jan 31 07:45:50 compute-2 nova_compute[226829]:     <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:45:50 compute-2 nova_compute[226829]:   </auth>
Jan 31 07:45:50 compute-2 nova_compute[226829]:   <target dev="vdb" bus="virtio"/>
Jan 31 07:45:50 compute-2 nova_compute[226829]:   <serial>d1aca578-ec32-4cbf-a124-22c55e831394</serial>
Jan 31 07:45:50 compute-2 nova_compute[226829]:   <shareable/>
Jan 31 07:45:50 compute-2 nova_compute[226829]: </disk>
Jan 31 07:45:50 compute-2 nova_compute[226829]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 31 07:45:50 compute-2 nova_compute[226829]: 2026-01-31 07:45:50.663 226833 DEBUG nova.virt.libvirt.driver [None req-22780929-a018-4e83-be8f-cd43fae25059 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:45:50 compute-2 nova_compute[226829]: 2026-01-31 07:45:50.664 226833 DEBUG nova.virt.libvirt.driver [None req-22780929-a018-4e83-be8f-cd43fae25059 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:45:50 compute-2 nova_compute[226829]: 2026-01-31 07:45:50.665 226833 DEBUG nova.virt.libvirt.driver [None req-22780929-a018-4e83-be8f-cd43fae25059 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:45:50 compute-2 nova_compute[226829]: 2026-01-31 07:45:50.665 226833 DEBUG nova.virt.libvirt.driver [None req-22780929-a018-4e83-be8f-cd43fae25059 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] No VIF found with MAC fa:16:3e:3b:47:4e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:45:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:50.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:45:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:51.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:45:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/773784101' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e213 e213: 3 total, 3 up, 3 in
Jan 31 07:45:51 compute-2 nova_compute[226829]: 2026-01-31 07:45:51.498 226833 DEBUG oslo_concurrency.lockutils [None req-22780929-a018-4e83-be8f-cd43fae25059 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:52 compute-2 podman[251031]: 2026-01-31 07:45:52.185072831 +0000 UTC m=+0.064644002 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 07:45:52 compute-2 nova_compute[226829]: 2026-01-31 07:45:52.298 226833 DEBUG nova.network.neutron [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Updating instance_info_cache with network_info: [{"id": "e24f0391-b643-4a28-a184-a94f3b8aac45", "address": "fa:16:3e:76:45:c4", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape24f0391-b6", "ovs_interfaceid": "e24f0391-b643-4a28-a184-a94f3b8aac45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:45:52 compute-2 nova_compute[226829]: 2026-01-31 07:45:52.372 226833 DEBUG oslo_concurrency.lockutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Releasing lock "refresh_cache-ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:45:52 compute-2 nova_compute[226829]: 2026-01-31 07:45:52.372 226833 DEBUG nova.compute.manager [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Instance network_info: |[{"id": "e24f0391-b643-4a28-a184-a94f3b8aac45", "address": "fa:16:3e:76:45:c4", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape24f0391-b6", "ovs_interfaceid": "e24f0391-b643-4a28-a184-a94f3b8aac45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:45:52 compute-2 nova_compute[226829]: 2026-01-31 07:45:52.372 226833 DEBUG oslo_concurrency.lockutils [req-590fe775-9496-4620-aec1-ace778ee5e2a req-e14f2fd3-5e5d-4b53-8498-bee84864516a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:45:52 compute-2 nova_compute[226829]: 2026-01-31 07:45:52.373 226833 DEBUG nova.network.neutron [req-590fe775-9496-4620-aec1-ace778ee5e2a req-e14f2fd3-5e5d-4b53-8498-bee84864516a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Refreshing network info cache for port e24f0391-b643-4a28-a184-a94f3b8aac45 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:45:52 compute-2 nova_compute[226829]: 2026-01-31 07:45:52.375 226833 DEBUG nova.virt.libvirt.driver [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Start _get_guest_xml network_info=[{"id": "e24f0391-b643-4a28-a184-a94f3b8aac45", "address": "fa:16:3e:76:45:c4", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape24f0391-b6", "ovs_interfaceid": "e24f0391-b643-4a28-a184-a94f3b8aac45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:45:52 compute-2 nova_compute[226829]: 2026-01-31 07:45:52.379 226833 WARNING nova.virt.libvirt.driver [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:45:52 compute-2 nova_compute[226829]: 2026-01-31 07:45:52.386 226833 DEBUG nova.virt.libvirt.host [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:45:52 compute-2 nova_compute[226829]: 2026-01-31 07:45:52.386 226833 DEBUG nova.virt.libvirt.host [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:45:52 compute-2 nova_compute[226829]: 2026-01-31 07:45:52.390 226833 DEBUG nova.virt.libvirt.host [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:45:52 compute-2 nova_compute[226829]: 2026-01-31 07:45:52.390 226833 DEBUG nova.virt.libvirt.host [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:45:52 compute-2 nova_compute[226829]: 2026-01-31 07:45:52.391 226833 DEBUG nova.virt.libvirt.driver [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:45:52 compute-2 nova_compute[226829]: 2026-01-31 07:45:52.391 226833 DEBUG nova.virt.hardware [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:45:52 compute-2 nova_compute[226829]: 2026-01-31 07:45:52.392 226833 DEBUG nova.virt.hardware [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:45:52 compute-2 nova_compute[226829]: 2026-01-31 07:45:52.392 226833 DEBUG nova.virt.hardware [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:45:52 compute-2 nova_compute[226829]: 2026-01-31 07:45:52.392 226833 DEBUG nova.virt.hardware [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:45:52 compute-2 nova_compute[226829]: 2026-01-31 07:45:52.392 226833 DEBUG nova.virt.hardware [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:45:52 compute-2 nova_compute[226829]: 2026-01-31 07:45:52.392 226833 DEBUG nova.virt.hardware [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:45:52 compute-2 nova_compute[226829]: 2026-01-31 07:45:52.392 226833 DEBUG nova.virt.hardware [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:45:52 compute-2 nova_compute[226829]: 2026-01-31 07:45:52.393 226833 DEBUG nova.virt.hardware [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:45:52 compute-2 nova_compute[226829]: 2026-01-31 07:45:52.393 226833 DEBUG nova.virt.hardware [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:45:52 compute-2 nova_compute[226829]: 2026-01-31 07:45:52.393 226833 DEBUG nova.virt.hardware [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:45:52 compute-2 nova_compute[226829]: 2026-01-31 07:45:52.393 226833 DEBUG nova.virt.hardware [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:45:52 compute-2 nova_compute[226829]: 2026-01-31 07:45:52.395 226833 DEBUG oslo_concurrency.processutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:52 compute-2 ceph-mon[77282]: pgmap v1420: 305 pgs: 305 active+clean; 528 MiB data, 823 MiB used, 20 GiB / 21 GiB avail; 552 KiB/s rd, 7.0 MiB/s wr, 184 op/s
Jan 31 07:45:52 compute-2 ceph-mon[77282]: osdmap e213: 3 total, 3 up, 3 in
Jan 31 07:45:52 compute-2 sudo[251070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:45:52 compute-2 sudo[251070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:52 compute-2 sudo[251070]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:52 compute-2 sudo[251095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:45:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:45:52 compute-2 sudo[251095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:52.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:45:52 compute-2 sudo[251095]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:45:52 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/107301' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:52 compute-2 nova_compute[226829]: 2026-01-31 07:45:52.859 226833 DEBUG oslo_concurrency.processutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:52 compute-2 nova_compute[226829]: 2026-01-31 07:45:52.888 226833 DEBUG nova.storage.rbd_utils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:45:52 compute-2 nova_compute[226829]: 2026-01-31 07:45:52.895 226833 DEBUG oslo_concurrency.processutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:53.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:45:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1776851338' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:53 compute-2 nova_compute[226829]: 2026-01-31 07:45:53.298 226833 DEBUG oslo_concurrency.processutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:53 compute-2 nova_compute[226829]: 2026-01-31 07:45:53.301 226833 DEBUG nova.virt.libvirt.vif [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:45:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-568739362',display_name='tempest-DeleteServersTestJSON-server-568739362',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-568739362',id=59,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f299640bb1f64e5fa12b23955e5a2127',ramdisk_id='',reservation_id='r-fi96aic9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2031597701',owner_user_name='tempest-DeleteServersTestJSON-2031597701-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:45:39Z,user_data=None,user_id='97abab8eb79247cd89fb2ebff295b890',uuid=ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e24f0391-b643-4a28-a184-a94f3b8aac45", "address": "fa:16:3e:76:45:c4", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape24f0391-b6", "ovs_interfaceid": "e24f0391-b643-4a28-a184-a94f3b8aac45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:45:53 compute-2 nova_compute[226829]: 2026-01-31 07:45:53.301 226833 DEBUG nova.network.os_vif_util [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converting VIF {"id": "e24f0391-b643-4a28-a184-a94f3b8aac45", "address": "fa:16:3e:76:45:c4", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape24f0391-b6", "ovs_interfaceid": "e24f0391-b643-4a28-a184-a94f3b8aac45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:45:53 compute-2 nova_compute[226829]: 2026-01-31 07:45:53.303 226833 DEBUG nova.network.os_vif_util [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:45:c4,bridge_name='br-int',has_traffic_filtering=True,id=e24f0391-b643-4a28-a184-a94f3b8aac45,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape24f0391-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:45:53 compute-2 nova_compute[226829]: 2026-01-31 07:45:53.305 226833 DEBUG nova.objects.instance [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'pci_devices' on Instance uuid ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:45:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/107301' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1776851338' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:53 compute-2 nova_compute[226829]: 2026-01-31 07:45:53.590 226833 DEBUG nova.virt.libvirt.driver [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:45:53 compute-2 nova_compute[226829]:   <uuid>ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4</uuid>
Jan 31 07:45:53 compute-2 nova_compute[226829]:   <name>instance-0000003b</name>
Jan 31 07:45:53 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:45:53 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:45:53 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <nova:name>tempest-DeleteServersTestJSON-server-568739362</nova:name>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:45:52</nova:creationTime>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:45:53 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:45:53 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:45:53 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:45:53 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:45:53 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:45:53 compute-2 nova_compute[226829]:         <nova:user uuid="97abab8eb79247cd89fb2ebff295b890">tempest-DeleteServersTestJSON-2031597701-project-member</nova:user>
Jan 31 07:45:53 compute-2 nova_compute[226829]:         <nova:project uuid="f299640bb1f64e5fa12b23955e5a2127">tempest-DeleteServersTestJSON-2031597701</nova:project>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 07:45:53 compute-2 nova_compute[226829]:         <nova:port uuid="e24f0391-b643-4a28-a184-a94f3b8aac45">
Jan 31 07:45:53 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:45:53 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:45:53 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <system>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <entry name="serial">ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4</entry>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <entry name="uuid">ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4</entry>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     </system>
Jan 31 07:45:53 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:45:53 compute-2 nova_compute[226829]:   <os>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:   </os>
Jan 31 07:45:53 compute-2 nova_compute[226829]:   <features>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:   </features>
Jan 31 07:45:53 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:45:53 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:45:53 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4_disk">
Jan 31 07:45:53 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       </source>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:45:53 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4_disk.config">
Jan 31 07:45:53 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       </source>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:45:53 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:76:45:c4"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <target dev="tape24f0391-b6"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4/console.log" append="off"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <video>
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     </video>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:45:53 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:45:53 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:45:53 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:45:53 compute-2 nova_compute[226829]: </domain>
Jan 31 07:45:53 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:45:53 compute-2 nova_compute[226829]: 2026-01-31 07:45:53.591 226833 DEBUG nova.compute.manager [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Preparing to wait for external event network-vif-plugged-e24f0391-b643-4a28-a184-a94f3b8aac45 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:45:53 compute-2 nova_compute[226829]: 2026-01-31 07:45:53.591 226833 DEBUG oslo_concurrency.lockutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:53 compute-2 nova_compute[226829]: 2026-01-31 07:45:53.592 226833 DEBUG oslo_concurrency.lockutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:53 compute-2 nova_compute[226829]: 2026-01-31 07:45:53.592 226833 DEBUG oslo_concurrency.lockutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:53 compute-2 nova_compute[226829]: 2026-01-31 07:45:53.593 226833 DEBUG nova.virt.libvirt.vif [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:45:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-568739362',display_name='tempest-DeleteServersTestJSON-server-568739362',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-568739362',id=59,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f299640bb1f64e5fa12b23955e5a2127',ramdisk_id='',reservation_id='r-fi96aic9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2031597701',owner_user_name='tempest-DeleteServersTestJSON-2031597701-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:45:39Z,user_data=None,user_id='97abab8eb79247cd89fb2ebff295b890',uuid=ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e24f0391-b643-4a28-a184-a94f3b8aac45", "address": "fa:16:3e:76:45:c4", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape24f0391-b6", "ovs_interfaceid": "e24f0391-b643-4a28-a184-a94f3b8aac45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:45:53 compute-2 nova_compute[226829]: 2026-01-31 07:45:53.593 226833 DEBUG nova.network.os_vif_util [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converting VIF {"id": "e24f0391-b643-4a28-a184-a94f3b8aac45", "address": "fa:16:3e:76:45:c4", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape24f0391-b6", "ovs_interfaceid": "e24f0391-b643-4a28-a184-a94f3b8aac45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:45:53 compute-2 nova_compute[226829]: 2026-01-31 07:45:53.594 226833 DEBUG nova.network.os_vif_util [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:45:c4,bridge_name='br-int',has_traffic_filtering=True,id=e24f0391-b643-4a28-a184-a94f3b8aac45,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape24f0391-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:45:53 compute-2 nova_compute[226829]: 2026-01-31 07:45:53.594 226833 DEBUG os_vif [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:45:c4,bridge_name='br-int',has_traffic_filtering=True,id=e24f0391-b643-4a28-a184-a94f3b8aac45,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape24f0391-b6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:45:53 compute-2 nova_compute[226829]: 2026-01-31 07:45:53.595 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:53 compute-2 nova_compute[226829]: 2026-01-31 07:45:53.595 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:45:53 compute-2 nova_compute[226829]: 2026-01-31 07:45:53.596 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:45:53 compute-2 nova_compute[226829]: 2026-01-31 07:45:53.602 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:53 compute-2 nova_compute[226829]: 2026-01-31 07:45:53.603 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape24f0391-b6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:45:53 compute-2 nova_compute[226829]: 2026-01-31 07:45:53.604 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape24f0391-b6, col_values=(('external_ids', {'iface-id': 'e24f0391-b643-4a28-a184-a94f3b8aac45', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:45:c4', 'vm-uuid': 'ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:45:53 compute-2 nova_compute[226829]: 2026-01-31 07:45:53.605 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:53 compute-2 NetworkManager[48999]: <info>  [1769845553.6072] manager: (tape24f0391-b6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Jan 31 07:45:53 compute-2 nova_compute[226829]: 2026-01-31 07:45:53.609 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:45:53 compute-2 nova_compute[226829]: 2026-01-31 07:45:53.611 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:53 compute-2 nova_compute[226829]: 2026-01-31 07:45:53.612 226833 INFO os_vif [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:45:c4,bridge_name='br-int',has_traffic_filtering=True,id=e24f0391-b643-4a28-a184-a94f3b8aac45,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape24f0391-b6')
Jan 31 07:45:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:45:54 compute-2 nova_compute[226829]: 2026-01-31 07:45:54.124 226833 DEBUG nova.virt.libvirt.driver [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:45:54 compute-2 nova_compute[226829]: 2026-01-31 07:45:54.125 226833 DEBUG nova.virt.libvirt.driver [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:45:54 compute-2 nova_compute[226829]: 2026-01-31 07:45:54.125 226833 DEBUG nova.virt.libvirt.driver [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No VIF found with MAC fa:16:3e:76:45:c4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:45:54 compute-2 nova_compute[226829]: 2026-01-31 07:45:54.126 226833 INFO nova.virt.libvirt.driver [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Using config drive
Jan 31 07:45:54 compute-2 nova_compute[226829]: 2026-01-31 07:45:54.161 226833 DEBUG nova.storage.rbd_utils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:45:54 compute-2 nova_compute[226829]: 2026-01-31 07:45:54.203 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:54 compute-2 ceph-mon[77282]: pgmap v1422: 305 pgs: 305 active+clean; 506 MiB data, 823 MiB used, 20 GiB / 21 GiB avail; 524 KiB/s rd, 1.8 MiB/s wr, 153 op/s
Jan 31 07:45:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:54.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:45:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:55.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:45:55 compute-2 nova_compute[226829]: 2026-01-31 07:45:55.378 226833 INFO nova.virt.libvirt.driver [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Creating config drive at /var/lib/nova/instances/ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4/disk.config
Jan 31 07:45:55 compute-2 nova_compute[226829]: 2026-01-31 07:45:55.382 226833 DEBUG oslo_concurrency.processutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp3tcsnc_f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:55 compute-2 nova_compute[226829]: 2026-01-31 07:45:55.510 226833 DEBUG oslo_concurrency.processutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp3tcsnc_f" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:55 compute-2 nova_compute[226829]: 2026-01-31 07:45:55.558 226833 DEBUG nova.storage.rbd_utils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:45:55 compute-2 nova_compute[226829]: 2026-01-31 07:45:55.563 226833 DEBUG oslo_concurrency.processutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4/disk.config ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:55 compute-2 ceph-mon[77282]: pgmap v1423: 305 pgs: 305 active+clean; 506 MiB data, 823 MiB used, 20 GiB / 21 GiB avail; 409 KiB/s rd, 1.4 MiB/s wr, 119 op/s
Jan 31 07:45:55 compute-2 nova_compute[226829]: 2026-01-31 07:45:55.975 226833 DEBUG oslo_concurrency.processutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4/disk.config ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:55 compute-2 nova_compute[226829]: 2026-01-31 07:45:55.977 226833 INFO nova.virt.libvirt.driver [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Deleting local config drive /var/lib/nova/instances/ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4/disk.config because it was imported into RBD.
Jan 31 07:45:56 compute-2 kernel: tape24f0391-b6: entered promiscuous mode
Jan 31 07:45:56 compute-2 NetworkManager[48999]: <info>  [1769845556.0294] manager: (tape24f0391-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/86)
Jan 31 07:45:56 compute-2 ovn_controller[133834]: 2026-01-31T07:45:56Z|00144|binding|INFO|Claiming lport e24f0391-b643-4a28-a184-a94f3b8aac45 for this chassis.
Jan 31 07:45:56 compute-2 ovn_controller[133834]: 2026-01-31T07:45:56Z|00145|binding|INFO|e24f0391-b643-4a28-a184-a94f3b8aac45: Claiming fa:16:3e:76:45:c4 10.100.0.4
Jan 31 07:45:56 compute-2 nova_compute[226829]: 2026-01-31 07:45:56.031 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:56 compute-2 nova_compute[226829]: 2026-01-31 07:45:56.033 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:56 compute-2 ovn_controller[133834]: 2026-01-31T07:45:56Z|00146|binding|INFO|Setting lport e24f0391-b643-4a28-a184-a94f3b8aac45 ovn-installed in OVS
Jan 31 07:45:56 compute-2 nova_compute[226829]: 2026-01-31 07:45:56.041 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:56 compute-2 nova_compute[226829]: 2026-01-31 07:45:56.042 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:56 compute-2 ovn_controller[133834]: 2026-01-31T07:45:56Z|00147|binding|INFO|Setting lport e24f0391-b643-4a28-a184-a94f3b8aac45 up in Southbound
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:56.047 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:45:c4 10.100.0.4'], port_security=['fa:16:3e:76:45:c4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60244e92-1524-47f0-8207-05d0104afa47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f299640bb1f64e5fa12b23955e5a2127', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0f055dcb-9af1-4cf7-aa74-c819da93756e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70b36ac0-62cb-4e29-924b-93c5ad906bc9, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=e24f0391-b643-4a28-a184-a94f3b8aac45) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:56.050 143841 INFO neutron.agent.ovn.metadata.agent [-] Port e24f0391-b643-4a28-a184-a94f3b8aac45 in datapath 60244e92-1524-47f0-8207-05d0104afa47 bound to our chassis
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:56.053 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60244e92-1524-47f0-8207-05d0104afa47
Jan 31 07:45:56 compute-2 systemd-machined[195142]: New machine qemu-24-instance-0000003b.
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:56.079 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[55000773-04d2-456d-a602-b1a5c7434dc7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:56.081 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap60244e92-11 in ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:56.084 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap60244e92-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:56.084 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1ce64644-858f-4ce3-82db-0e86b5d96b6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:56.086 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[27681eb7-f4b2-4b46-a195-e5592ffd8357]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:56 compute-2 systemd[1]: Started Virtual Machine qemu-24-instance-0000003b.
Jan 31 07:45:56 compute-2 systemd-udevd[251241]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:56.102 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[a06c6f14-1f90-4107-a2c2-f382dd68f7dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:56 compute-2 NetworkManager[48999]: <info>  [1769845556.1101] device (tape24f0391-b6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:45:56 compute-2 NetworkManager[48999]: <info>  [1769845556.1113] device (tape24f0391-b6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:56.113 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[72e11a86-4fa5-4204-a938-f4a75ac1411e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:56.143 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[938a39e1-c6b1-47e2-b33f-1d8c58be3dbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:56 compute-2 NetworkManager[48999]: <info>  [1769845556.1505] manager: (tap60244e92-10): new Veth device (/org/freedesktop/NetworkManager/Devices/87)
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:56.149 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f5b4b16a-54df-494f-83c9-54e13d8c26d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:56.173 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[9c7ed47d-9793-4e40-b2f9-9c89566de15d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:56.176 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[1c984ccd-eae0-4b82-8be4-e6658510c76d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:56 compute-2 NetworkManager[48999]: <info>  [1769845556.1933] device (tap60244e92-10): carrier: link connected
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:56.197 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[0a04ec4a-b379-4340-8a97-8896b00d6fb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:56.214 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[267cf319-f1c9-4c30-98e6-9aaa9bd0b4aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60244e92-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:59:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580866, 'reachable_time': 24234, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251272, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:56.229 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ee947caa-2b3b-4b36-a64e-88ee8f34673e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4b:59f9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580866, 'tstamp': 580866}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251273, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:56.244 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c2916977-f36a-4c61-a06f-37d1a65a06f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60244e92-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:59:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580866, 'reachable_time': 24234, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251274, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:56.270 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[87a22f2e-60c6-4d54-833f-e6430de087e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:56.319 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[61687375-2661-4135-b428-957dfe302064]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:56.322 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60244e92-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:56.322 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:56.323 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60244e92-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:45:56 compute-2 kernel: tap60244e92-10: entered promiscuous mode
Jan 31 07:45:56 compute-2 nova_compute[226829]: 2026-01-31 07:45:56.325 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:56 compute-2 NetworkManager[48999]: <info>  [1769845556.3299] manager: (tap60244e92-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:56.335 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60244e92-10, col_values=(('external_ids', {'iface-id': '4e20d708-9f46-41f3-86c0-9ab3849f8392'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:45:56 compute-2 nova_compute[226829]: 2026-01-31 07:45:56.337 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:56 compute-2 ovn_controller[133834]: 2026-01-31T07:45:56Z|00148|binding|INFO|Releasing lport 4e20d708-9f46-41f3-86c0-9ab3849f8392 from this chassis (sb_readonly=0)
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:56.343 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/60244e92-1524-47f0-8207-05d0104afa47.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/60244e92-1524-47f0-8207-05d0104afa47.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:45:56 compute-2 nova_compute[226829]: 2026-01-31 07:45:56.345 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:56.346 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[71a45350-e25e-44af-b63d-71590d4114e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:56.348 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: global
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-60244e92-1524-47f0-8207-05d0104afa47
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/60244e92-1524-47f0-8207-05d0104afa47.pid.haproxy
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 60244e92-1524-47f0-8207-05d0104afa47
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:45:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:45:56.349 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'env', 'PROCESS_TAG=haproxy-60244e92-1524-47f0-8207-05d0104afa47', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/60244e92-1524-47f0-8207-05d0104afa47.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:45:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e214 e214: 3 total, 3 up, 3 in
Jan 31 07:45:56 compute-2 podman[251331]: 2026-01-31 07:45:56.718833643 +0000 UTC m=+0.058213389 container create 90216e89c0cf1971f98544af3a48ac0f8b2635ca4a8577e6f219197d8a137c5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:45:56 compute-2 systemd[1]: Started libpod-conmon-90216e89c0cf1971f98544af3a48ac0f8b2635ca4a8577e6f219197d8a137c5d.scope.
Jan 31 07:45:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:56.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:56 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:45:56 compute-2 podman[251331]: 2026-01-31 07:45:56.68735231 +0000 UTC m=+0.026732076 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:45:56 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a70fc3911f5d1eb214d6d03cc0cb74e384b98e7624eb8b18f93c906b347b542/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:45:56 compute-2 nova_compute[226829]: 2026-01-31 07:45:56.831 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845556.8304775, ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:45:56 compute-2 nova_compute[226829]: 2026-01-31 07:45:56.831 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] VM Started (Lifecycle Event)
Jan 31 07:45:56 compute-2 podman[251331]: 2026-01-31 07:45:56.8339171 +0000 UTC m=+0.173296866 container init 90216e89c0cf1971f98544af3a48ac0f8b2635ca4a8577e6f219197d8a137c5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 07:45:56 compute-2 podman[251331]: 2026-01-31 07:45:56.839000588 +0000 UTC m=+0.178380334 container start 90216e89c0cf1971f98544af3a48ac0f8b2635ca4a8577e6f219197d8a137c5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 07:45:56 compute-2 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[251363]: [NOTICE]   (251367) : New worker (251369) forked
Jan 31 07:45:56 compute-2 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[251363]: [NOTICE]   (251367) : Loading success.
Jan 31 07:45:56 compute-2 nova_compute[226829]: 2026-01-31 07:45:56.904 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:45:56 compute-2 nova_compute[226829]: 2026-01-31 07:45:56.909 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845556.8308532, ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:45:56 compute-2 nova_compute[226829]: 2026-01-31 07:45:56.909 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] VM Paused (Lifecycle Event)
Jan 31 07:45:56 compute-2 nova_compute[226829]: 2026-01-31 07:45:56.954 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:45:56 compute-2 nova_compute[226829]: 2026-01-31 07:45:56.958 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:45:56 compute-2 nova_compute[226829]: 2026-01-31 07:45:56.997 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:45:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:45:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:57.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:45:57 compute-2 nova_compute[226829]: 2026-01-31 07:45:57.244 226833 DEBUG nova.network.neutron [req-590fe775-9496-4620-aec1-ace778ee5e2a req-e14f2fd3-5e5d-4b53-8498-bee84864516a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Updated VIF entry in instance network info cache for port e24f0391-b643-4a28-a184-a94f3b8aac45. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:45:57 compute-2 nova_compute[226829]: 2026-01-31 07:45:57.245 226833 DEBUG nova.network.neutron [req-590fe775-9496-4620-aec1-ace778ee5e2a req-e14f2fd3-5e5d-4b53-8498-bee84864516a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Updating instance_info_cache with network_info: [{"id": "e24f0391-b643-4a28-a184-a94f3b8aac45", "address": "fa:16:3e:76:45:c4", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape24f0391-b6", "ovs_interfaceid": "e24f0391-b643-4a28-a184-a94f3b8aac45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:45:57 compute-2 nova_compute[226829]: 2026-01-31 07:45:57.265 226833 DEBUG oslo_concurrency.lockutils [req-590fe775-9496-4620-aec1-ace778ee5e2a req-e14f2fd3-5e5d-4b53-8498-bee84864516a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:45:57.393524) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845557393662, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2758, "num_deletes": 511, "total_data_size": 5744426, "memory_usage": 5817240, "flush_reason": "Manual Compaction"}
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Jan 31 07:45:57 compute-2 ceph-mon[77282]: osdmap e214: 3 total, 3 up, 3 in
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845557415261, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 3746998, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31230, "largest_seqno": 33983, "table_properties": {"data_size": 3736143, "index_size": 6453, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3333, "raw_key_size": 26265, "raw_average_key_size": 20, "raw_value_size": 3712069, "raw_average_value_size": 2842, "num_data_blocks": 280, "num_entries": 1306, "num_filter_entries": 1306, "num_deletions": 511, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845336, "oldest_key_time": 1769845336, "file_creation_time": 1769845557, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 21786 microseconds, and 7505 cpu microseconds.
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:45:57.415319) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 3746998 bytes OK
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:45:57.415338) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:45:57.419534) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:45:57.419549) EVENT_LOG_v1 {"time_micros": 1769845557419543, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:45:57.419566) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 5731389, prev total WAL file size 5731389, number of live WAL files 2.
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:45:57.420464) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(3659KB)], [60(8457KB)]
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845557420568, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 12407280, "oldest_snapshot_seqno": -1}
Jan 31 07:45:57 compute-2 nova_compute[226829]: 2026-01-31 07:45:57.476 226833 DEBUG nova.compute.manager [req-7dd86621-a3fd-4e8e-9778-3ea3ef6f542b req-d5671751-24d5-4826-af37-5fd414183247 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Received event network-vif-plugged-e24f0391-b643-4a28-a184-a94f3b8aac45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:45:57 compute-2 nova_compute[226829]: 2026-01-31 07:45:57.477 226833 DEBUG oslo_concurrency.lockutils [req-7dd86621-a3fd-4e8e-9778-3ea3ef6f542b req-d5671751-24d5-4826-af37-5fd414183247 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:57 compute-2 nova_compute[226829]: 2026-01-31 07:45:57.477 226833 DEBUG oslo_concurrency.lockutils [req-7dd86621-a3fd-4e8e-9778-3ea3ef6f542b req-d5671751-24d5-4826-af37-5fd414183247 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:57 compute-2 nova_compute[226829]: 2026-01-31 07:45:57.477 226833 DEBUG oslo_concurrency.lockutils [req-7dd86621-a3fd-4e8e-9778-3ea3ef6f542b req-d5671751-24d5-4826-af37-5fd414183247 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:57 compute-2 nova_compute[226829]: 2026-01-31 07:45:57.477 226833 DEBUG nova.compute.manager [req-7dd86621-a3fd-4e8e-9778-3ea3ef6f542b req-d5671751-24d5-4826-af37-5fd414183247 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Processing event network-vif-plugged-e24f0391-b643-4a28-a184-a94f3b8aac45 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:45:57 compute-2 nova_compute[226829]: 2026-01-31 07:45:57.478 226833 DEBUG nova.compute.manager [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:45:57 compute-2 nova_compute[226829]: 2026-01-31 07:45:57.483 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845557.4833627, ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:45:57 compute-2 nova_compute[226829]: 2026-01-31 07:45:57.484 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] VM Resumed (Lifecycle Event)
Jan 31 07:45:57 compute-2 nova_compute[226829]: 2026-01-31 07:45:57.486 226833 DEBUG nova.virt.libvirt.driver [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:45:57 compute-2 nova_compute[226829]: 2026-01-31 07:45:57.488 226833 INFO nova.virt.libvirt.driver [-] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Instance spawned successfully.
Jan 31 07:45:57 compute-2 nova_compute[226829]: 2026-01-31 07:45:57.489 226833 DEBUG nova.virt.libvirt.driver [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 5818 keys, 10384993 bytes, temperature: kUnknown
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845557503529, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 10384993, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10343857, "index_size": 25474, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14597, "raw_key_size": 149869, "raw_average_key_size": 25, "raw_value_size": 10237011, "raw_average_value_size": 1759, "num_data_blocks": 1025, "num_entries": 5818, "num_filter_entries": 5818, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769845557, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:45:57.503869) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 10384993 bytes
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:45:57.506263) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.4 rd, 125.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 8.3 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 6861, records dropped: 1043 output_compression: NoCompression
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:45:57.506283) EVENT_LOG_v1 {"time_micros": 1769845557506273, "job": 36, "event": "compaction_finished", "compaction_time_micros": 83037, "compaction_time_cpu_micros": 38986, "output_level": 6, "num_output_files": 1, "total_output_size": 10384993, "num_input_records": 6861, "num_output_records": 5818, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845557506874, "job": 36, "event": "table_file_deletion", "file_number": 62}
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845557508104, "job": 36, "event": "table_file_deletion", "file_number": 60}
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:45:57.420277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:45:57.508208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:45:57.508217) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:45:57.508220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:45:57.508223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:45:57 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:45:57.508226) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:45:57 compute-2 nova_compute[226829]: 2026-01-31 07:45:57.529 226833 DEBUG nova.virt.libvirt.driver [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:45:57 compute-2 nova_compute[226829]: 2026-01-31 07:45:57.529 226833 DEBUG nova.virt.libvirt.driver [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:45:57 compute-2 nova_compute[226829]: 2026-01-31 07:45:57.530 226833 DEBUG nova.virt.libvirt.driver [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:45:57 compute-2 nova_compute[226829]: 2026-01-31 07:45:57.530 226833 DEBUG nova.virt.libvirt.driver [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:45:57 compute-2 nova_compute[226829]: 2026-01-31 07:45:57.531 226833 DEBUG nova.virt.libvirt.driver [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:45:57 compute-2 nova_compute[226829]: 2026-01-31 07:45:57.531 226833 DEBUG nova.virt.libvirt.driver [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:45:57 compute-2 nova_compute[226829]: 2026-01-31 07:45:57.573 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:45:57 compute-2 nova_compute[226829]: 2026-01-31 07:45:57.576 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:45:57 compute-2 nova_compute[226829]: 2026-01-31 07:45:57.641 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:45:57 compute-2 nova_compute[226829]: 2026-01-31 07:45:57.663 226833 INFO nova.compute.manager [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Took 18.50 seconds to spawn the instance on the hypervisor.
Jan 31 07:45:57 compute-2 nova_compute[226829]: 2026-01-31 07:45:57.664 226833 DEBUG nova.compute.manager [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:45:57 compute-2 nova_compute[226829]: 2026-01-31 07:45:57.778 226833 INFO nova.compute.manager [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Took 20.17 seconds to build instance.
Jan 31 07:45:57 compute-2 nova_compute[226829]: 2026-01-31 07:45:57.856 226833 DEBUG oslo_concurrency.lockutils [None req-c3830a9f-4d69-4444-9b4d-93e1cad7d1d2 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:58 compute-2 ceph-mon[77282]: pgmap v1425: 305 pgs: 305 active+clean; 484 MiB data, 816 MiB used, 20 GiB / 21 GiB avail; 294 KiB/s rd, 199 KiB/s wr, 88 op/s
Jan 31 07:45:58 compute-2 nova_compute[226829]: 2026-01-31 07:45:58.606 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:45:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:58.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:45:58 compute-2 nova_compute[226829]: 2026-01-31 07:45:58.849 226833 DEBUG oslo_concurrency.lockutils [None req-bbab14cd-2e55-41fd-be3d-fc8f45e418cb b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Acquiring lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:58 compute-2 nova_compute[226829]: 2026-01-31 07:45:58.850 226833 DEBUG oslo_concurrency.lockutils [None req-bbab14cd-2e55-41fd-be3d-fc8f45e418cb b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:58 compute-2 nova_compute[226829]: 2026-01-31 07:45:58.891 226833 INFO nova.compute.manager [None req-bbab14cd-2e55-41fd-be3d-fc8f45e418cb b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Detaching volume d1aca578-ec32-4cbf-a124-22c55e831394
Jan 31 07:45:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:45:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:45:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:45:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:59.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:45:59 compute-2 nova_compute[226829]: 2026-01-31 07:45:59.236 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:59 compute-2 nova_compute[226829]: 2026-01-31 07:45:59.304 226833 INFO nova.virt.block_device [None req-bbab14cd-2e55-41fd-be3d-fc8f45e418cb b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Attempting to driver detach volume d1aca578-ec32-4cbf-a124-22c55e831394 from mountpoint /dev/vdb
Jan 31 07:45:59 compute-2 nova_compute[226829]: 2026-01-31 07:45:59.314 226833 DEBUG nova.virt.libvirt.driver [None req-bbab14cd-2e55-41fd-be3d-fc8f45e418cb b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Attempting to detach device vdb from instance e7df3fd9-ff03-4b35-930a-330e9dff6d0e from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 07:45:59 compute-2 nova_compute[226829]: 2026-01-31 07:45:59.314 226833 DEBUG nova.virt.libvirt.guest [None req-bbab14cd-2e55-41fd-be3d-fc8f45e418cb b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 07:45:59 compute-2 nova_compute[226829]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 07:45:59 compute-2 nova_compute[226829]:   <source protocol="rbd" name="volumes/volume-d1aca578-ec32-4cbf-a124-22c55e831394">
Jan 31 07:45:59 compute-2 nova_compute[226829]:     <host name="192.168.122.100" port="6789"/>
Jan 31 07:45:59 compute-2 nova_compute[226829]:     <host name="192.168.122.102" port="6789"/>
Jan 31 07:45:59 compute-2 nova_compute[226829]:     <host name="192.168.122.101" port="6789"/>
Jan 31 07:45:59 compute-2 nova_compute[226829]:   </source>
Jan 31 07:45:59 compute-2 nova_compute[226829]:   <target dev="vdb" bus="virtio"/>
Jan 31 07:45:59 compute-2 nova_compute[226829]:   <serial>d1aca578-ec32-4cbf-a124-22c55e831394</serial>
Jan 31 07:45:59 compute-2 nova_compute[226829]:   <shareable/>
Jan 31 07:45:59 compute-2 nova_compute[226829]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 07:45:59 compute-2 nova_compute[226829]: </disk>
Jan 31 07:45:59 compute-2 nova_compute[226829]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 07:45:59 compute-2 nova_compute[226829]: 2026-01-31 07:45:59.321 226833 INFO nova.virt.libvirt.driver [None req-bbab14cd-2e55-41fd-be3d-fc8f45e418cb b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Successfully detached device vdb from instance e7df3fd9-ff03-4b35-930a-330e9dff6d0e from the persistent domain config.
Jan 31 07:45:59 compute-2 nova_compute[226829]: 2026-01-31 07:45:59.321 226833 DEBUG nova.virt.libvirt.driver [None req-bbab14cd-2e55-41fd-be3d-fc8f45e418cb b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance e7df3fd9-ff03-4b35-930a-330e9dff6d0e from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 07:45:59 compute-2 nova_compute[226829]: 2026-01-31 07:45:59.322 226833 DEBUG nova.virt.libvirt.guest [None req-bbab14cd-2e55-41fd-be3d-fc8f45e418cb b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 07:45:59 compute-2 nova_compute[226829]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 07:45:59 compute-2 nova_compute[226829]:   <source protocol="rbd" name="volumes/volume-d1aca578-ec32-4cbf-a124-22c55e831394">
Jan 31 07:45:59 compute-2 nova_compute[226829]:     <host name="192.168.122.100" port="6789"/>
Jan 31 07:45:59 compute-2 nova_compute[226829]:     <host name="192.168.122.102" port="6789"/>
Jan 31 07:45:59 compute-2 nova_compute[226829]:     <host name="192.168.122.101" port="6789"/>
Jan 31 07:45:59 compute-2 nova_compute[226829]:   </source>
Jan 31 07:45:59 compute-2 nova_compute[226829]:   <target dev="vdb" bus="virtio"/>
Jan 31 07:45:59 compute-2 nova_compute[226829]:   <serial>d1aca578-ec32-4cbf-a124-22c55e831394</serial>
Jan 31 07:45:59 compute-2 nova_compute[226829]:   <shareable/>
Jan 31 07:45:59 compute-2 nova_compute[226829]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 07:45:59 compute-2 nova_compute[226829]: </disk>
Jan 31 07:45:59 compute-2 nova_compute[226829]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 07:45:59 compute-2 nova_compute[226829]: 2026-01-31 07:45:59.434 226833 DEBUG nova.virt.libvirt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Received event <DeviceRemovedEvent: 1769845559.434541, e7df3fd9-ff03-4b35-930a-330e9dff6d0e => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 07:45:59 compute-2 nova_compute[226829]: 2026-01-31 07:45:59.436 226833 DEBUG nova.virt.libvirt.driver [None req-bbab14cd-2e55-41fd-be3d-fc8f45e418cb b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance e7df3fd9-ff03-4b35-930a-330e9dff6d0e _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 07:45:59 compute-2 nova_compute[226829]: 2026-01-31 07:45:59.438 226833 INFO nova.virt.libvirt.driver [None req-bbab14cd-2e55-41fd-be3d-fc8f45e418cb b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Successfully detached device vdb from instance e7df3fd9-ff03-4b35-930a-330e9dff6d0e from the live domain config.
Jan 31 07:45:59 compute-2 nova_compute[226829]: 2026-01-31 07:45:59.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:45:59 compute-2 nova_compute[226829]: 2026-01-31 07:45:59.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 07:45:59 compute-2 nova_compute[226829]: 2026-01-31 07:45:59.531 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 07:45:59 compute-2 nova_compute[226829]: 2026-01-31 07:45:59.917 226833 DEBUG nova.compute.manager [req-42d213b2-2710-477e-9226-d5a29d1188fa req-ebc80cc1-bab1-4e68-a6dc-6f85cc5f8d14 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Received event network-vif-plugged-e24f0391-b643-4a28-a184-a94f3b8aac45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:45:59 compute-2 nova_compute[226829]: 2026-01-31 07:45:59.918 226833 DEBUG oslo_concurrency.lockutils [req-42d213b2-2710-477e-9226-d5a29d1188fa req-ebc80cc1-bab1-4e68-a6dc-6f85cc5f8d14 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:59 compute-2 nova_compute[226829]: 2026-01-31 07:45:59.919 226833 DEBUG oslo_concurrency.lockutils [req-42d213b2-2710-477e-9226-d5a29d1188fa req-ebc80cc1-bab1-4e68-a6dc-6f85cc5f8d14 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:59 compute-2 nova_compute[226829]: 2026-01-31 07:45:59.919 226833 DEBUG oslo_concurrency.lockutils [req-42d213b2-2710-477e-9226-d5a29d1188fa req-ebc80cc1-bab1-4e68-a6dc-6f85cc5f8d14 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:59 compute-2 nova_compute[226829]: 2026-01-31 07:45:59.920 226833 DEBUG nova.compute.manager [req-42d213b2-2710-477e-9226-d5a29d1188fa req-ebc80cc1-bab1-4e68-a6dc-6f85cc5f8d14 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] No waiting events found dispatching network-vif-plugged-e24f0391-b643-4a28-a184-a94f3b8aac45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:45:59 compute-2 nova_compute[226829]: 2026-01-31 07:45:59.921 226833 WARNING nova.compute.manager [req-42d213b2-2710-477e-9226-d5a29d1188fa req-ebc80cc1-bab1-4e68-a6dc-6f85cc5f8d14 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Received unexpected event network-vif-plugged-e24f0391-b643-4a28-a184-a94f3b8aac45 for instance with vm_state active and task_state None.
Jan 31 07:46:00 compute-2 nova_compute[226829]: 2026-01-31 07:46:00.277 226833 INFO nova.compute.manager [None req-b96466dd-0233-4445-adbf-436c4fd7b3dc 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Pausing
Jan 31 07:46:00 compute-2 nova_compute[226829]: 2026-01-31 07:46:00.278 226833 DEBUG nova.objects.instance [None req-b96466dd-0233-4445-adbf-436c4fd7b3dc 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'flavor' on Instance uuid ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:46:00 compute-2 nova_compute[226829]: 2026-01-31 07:46:00.320 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845560.3205924, ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:46:00 compute-2 nova_compute[226829]: 2026-01-31 07:46:00.321 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] VM Paused (Lifecycle Event)
Jan 31 07:46:00 compute-2 nova_compute[226829]: 2026-01-31 07:46:00.324 226833 DEBUG nova.compute.manager [None req-b96466dd-0233-4445-adbf-436c4fd7b3dc 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:46:00 compute-2 nova_compute[226829]: 2026-01-31 07:46:00.382 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:46:00 compute-2 nova_compute[226829]: 2026-01-31 07:46:00.386 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:46:00 compute-2 nova_compute[226829]: 2026-01-31 07:46:00.466 226833 DEBUG nova.objects.instance [None req-bbab14cd-2e55-41fd-be3d-fc8f45e418cb b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lazy-loading 'flavor' on Instance uuid e7df3fd9-ff03-4b35-930a-330e9dff6d0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:46:00 compute-2 ceph-mon[77282]: pgmap v1426: 305 pgs: 305 active+clean; 484 MiB data, 804 MiB used, 20 GiB / 21 GiB avail; 90 KiB/s rd, 30 KiB/s wr, 51 op/s
Jan 31 07:46:00 compute-2 nova_compute[226829]: 2026-01-31 07:46:00.531 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:46:00 compute-2 nova_compute[226829]: 2026-01-31 07:46:00.532 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:46:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:00.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:00 compute-2 nova_compute[226829]: 2026-01-31 07:46:00.781 226833 DEBUG oslo_concurrency.lockutils [None req-bbab14cd-2e55-41fd-be3d-fc8f45e418cb b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:01.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e215 e215: 3 total, 3 up, 3 in
Jan 31 07:46:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3982097981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:01 compute-2 ceph-mon[77282]: osdmap e215: 3 total, 3 up, 3 in
Jan 31 07:46:01 compute-2 nova_compute[226829]: 2026-01-31 07:46:01.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:46:01 compute-2 nova_compute[226829]: 2026-01-31 07:46:01.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:46:02 compute-2 nova_compute[226829]: 2026-01-31 07:46:02.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:46:02 compute-2 nova_compute[226829]: 2026-01-31 07:46:02.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:46:02 compute-2 nova_compute[226829]: 2026-01-31 07:46:02.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:46:02 compute-2 ceph-mon[77282]: pgmap v1427: 305 pgs: 305 active+clean; 429 MiB data, 775 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 58 KiB/s wr, 147 op/s
Jan 31 07:46:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1332833272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:02.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:03 compute-2 nova_compute[226829]: 2026-01-31 07:46:03.000 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-e7df3fd9-ff03-4b35-930a-330e9dff6d0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:46:03 compute-2 nova_compute[226829]: 2026-01-31 07:46:03.000 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-e7df3fd9-ff03-4b35-930a-330e9dff6d0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:46:03 compute-2 nova_compute[226829]: 2026-01-31 07:46:03.001 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 07:46:03 compute-2 nova_compute[226829]: 2026-01-31 07:46:03.001 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid e7df3fd9-ff03-4b35-930a-330e9dff6d0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:46:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:03.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:03 compute-2 nova_compute[226829]: 2026-01-31 07:46:03.402 226833 DEBUG oslo_concurrency.lockutils [None req-bb39799b-08dd-419d-a090-93b1906819ea 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:03 compute-2 nova_compute[226829]: 2026-01-31 07:46:03.402 226833 DEBUG oslo_concurrency.lockutils [None req-bb39799b-08dd-419d-a090-93b1906819ea 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:03 compute-2 nova_compute[226829]: 2026-01-31 07:46:03.403 226833 DEBUG oslo_concurrency.lockutils [None req-bb39799b-08dd-419d-a090-93b1906819ea 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:03 compute-2 nova_compute[226829]: 2026-01-31 07:46:03.403 226833 DEBUG oslo_concurrency.lockutils [None req-bb39799b-08dd-419d-a090-93b1906819ea 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:03 compute-2 nova_compute[226829]: 2026-01-31 07:46:03.403 226833 DEBUG oslo_concurrency.lockutils [None req-bb39799b-08dd-419d-a090-93b1906819ea 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:03 compute-2 nova_compute[226829]: 2026-01-31 07:46:03.405 226833 INFO nova.compute.manager [None req-bb39799b-08dd-419d-a090-93b1906819ea 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Terminating instance
Jan 31 07:46:03 compute-2 nova_compute[226829]: 2026-01-31 07:46:03.407 226833 DEBUG nova.compute.manager [None req-bb39799b-08dd-419d-a090-93b1906819ea 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:46:03 compute-2 nova_compute[226829]: 2026-01-31 07:46:03.608 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2722501756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:03 compute-2 ceph-mon[77282]: pgmap v1429: 305 pgs: 305 active+clean; 405 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 44 KiB/s wr, 170 op/s
Jan 31 07:46:03 compute-2 kernel: tape24f0391-b6 (unregistering): left promiscuous mode
Jan 31 07:46:03 compute-2 NetworkManager[48999]: <info>  [1769845563.6951] device (tape24f0391-b6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:46:03 compute-2 ovn_controller[133834]: 2026-01-31T07:46:03Z|00149|binding|INFO|Releasing lport e24f0391-b643-4a28-a184-a94f3b8aac45 from this chassis (sb_readonly=0)
Jan 31 07:46:03 compute-2 ovn_controller[133834]: 2026-01-31T07:46:03Z|00150|binding|INFO|Setting lport e24f0391-b643-4a28-a184-a94f3b8aac45 down in Southbound
Jan 31 07:46:03 compute-2 nova_compute[226829]: 2026-01-31 07:46:03.709 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:03 compute-2 ovn_controller[133834]: 2026-01-31T07:46:03Z|00151|binding|INFO|Removing iface tape24f0391-b6 ovn-installed in OVS
Jan 31 07:46:03 compute-2 nova_compute[226829]: 2026-01-31 07:46:03.713 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:03 compute-2 nova_compute[226829]: 2026-01-31 07:46:03.720 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:03.726 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:45:c4 10.100.0.4'], port_security=['fa:16:3e:76:45:c4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60244e92-1524-47f0-8207-05d0104afa47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f299640bb1f64e5fa12b23955e5a2127', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0f055dcb-9af1-4cf7-aa74-c819da93756e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70b36ac0-62cb-4e29-924b-93c5ad906bc9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=e24f0391-b643-4a28-a184-a94f3b8aac45) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:46:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:03.728 143841 INFO neutron.agent.ovn.metadata.agent [-] Port e24f0391-b643-4a28-a184-a94f3b8aac45 in datapath 60244e92-1524-47f0-8207-05d0104afa47 unbound from our chassis
Jan 31 07:46:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:03.731 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 60244e92-1524-47f0-8207-05d0104afa47, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:46:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:03.732 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[31998ecb-5ccf-45b2-87b2-173f81f128af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:03.733 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 namespace which is not needed anymore
Jan 31 07:46:03 compute-2 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Jan 31 07:46:03 compute-2 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d0000003b.scope: Consumed 3.686s CPU time.
Jan 31 07:46:03 compute-2 systemd-machined[195142]: Machine qemu-24-instance-0000003b terminated.
Jan 31 07:46:03 compute-2 nova_compute[226829]: 2026-01-31 07:46:03.838 226833 INFO nova.virt.libvirt.driver [-] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Instance destroyed successfully.
Jan 31 07:46:03 compute-2 nova_compute[226829]: 2026-01-31 07:46:03.839 226833 DEBUG nova.objects.instance [None req-bb39799b-08dd-419d-a090-93b1906819ea 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'resources' on Instance uuid ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:46:03 compute-2 nova_compute[226829]: 2026-01-31 07:46:03.860 226833 DEBUG nova.virt.libvirt.vif [None req-bb39799b-08dd-419d-a090-93b1906819ea 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:45:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-568739362',display_name='tempest-DeleteServersTestJSON-server-568739362',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-568739362',id=59,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:45:57Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='f299640bb1f64e5fa12b23955e5a2127',ramdisk_id='',reservation_id='r-fi96aic9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-2031597701',owner_user_name='tempest-DeleteServersTestJSON-2031597701-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:46:00Z,user_data=None,user_id='97abab8eb79247cd89fb2ebff295b890',uuid=ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "e24f0391-b643-4a28-a184-a94f3b8aac45", "address": "fa:16:3e:76:45:c4", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape24f0391-b6", "ovs_interfaceid": "e24f0391-b643-4a28-a184-a94f3b8aac45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:46:03 compute-2 nova_compute[226829]: 2026-01-31 07:46:03.860 226833 DEBUG nova.network.os_vif_util [None req-bb39799b-08dd-419d-a090-93b1906819ea 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converting VIF {"id": "e24f0391-b643-4a28-a184-a94f3b8aac45", "address": "fa:16:3e:76:45:c4", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape24f0391-b6", "ovs_interfaceid": "e24f0391-b643-4a28-a184-a94f3b8aac45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:46:03 compute-2 nova_compute[226829]: 2026-01-31 07:46:03.861 226833 DEBUG nova.network.os_vif_util [None req-bb39799b-08dd-419d-a090-93b1906819ea 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:45:c4,bridge_name='br-int',has_traffic_filtering=True,id=e24f0391-b643-4a28-a184-a94f3b8aac45,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape24f0391-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:46:03 compute-2 nova_compute[226829]: 2026-01-31 07:46:03.861 226833 DEBUG os_vif [None req-bb39799b-08dd-419d-a090-93b1906819ea 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:45:c4,bridge_name='br-int',has_traffic_filtering=True,id=e24f0391-b643-4a28-a184-a94f3b8aac45,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape24f0391-b6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:46:03 compute-2 nova_compute[226829]: 2026-01-31 07:46:03.863 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:03 compute-2 nova_compute[226829]: 2026-01-31 07:46:03.864 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape24f0391-b6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:46:03 compute-2 nova_compute[226829]: 2026-01-31 07:46:03.926 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:03 compute-2 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[251363]: [NOTICE]   (251367) : haproxy version is 2.8.14-c23fe91
Jan 31 07:46:03 compute-2 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[251363]: [NOTICE]   (251367) : path to executable is /usr/sbin/haproxy
Jan 31 07:46:03 compute-2 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[251363]: [WARNING]  (251367) : Exiting Master process...
Jan 31 07:46:03 compute-2 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[251363]: [ALERT]    (251367) : Current worker (251369) exited with code 143 (Terminated)
Jan 31 07:46:03 compute-2 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[251363]: [WARNING]  (251367) : All workers exited. Exiting... (0)
Jan 31 07:46:03 compute-2 nova_compute[226829]: 2026-01-31 07:46:03.928 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:46:03 compute-2 systemd[1]: libpod-90216e89c0cf1971f98544af3a48ac0f8b2635ca4a8577e6f219197d8a137c5d.scope: Deactivated successfully.
Jan 31 07:46:03 compute-2 nova_compute[226829]: 2026-01-31 07:46:03.932 226833 INFO os_vif [None req-bb39799b-08dd-419d-a090-93b1906819ea 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:45:c4,bridge_name='br-int',has_traffic_filtering=True,id=e24f0391-b643-4a28-a184-a94f3b8aac45,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape24f0391-b6')
Jan 31 07:46:03 compute-2 podman[251407]: 2026-01-31 07:46:03.938306567 +0000 UTC m=+0.132404848 container died 90216e89c0cf1971f98544af3a48ac0f8b2635ca4a8577e6f219197d8a137c5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 07:46:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:46:04 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-90216e89c0cf1971f98544af3a48ac0f8b2635ca4a8577e6f219197d8a137c5d-userdata-shm.mount: Deactivated successfully.
Jan 31 07:46:04 compute-2 systemd[1]: var-lib-containers-storage-overlay-4a70fc3911f5d1eb214d6d03cc0cb74e384b98e7624eb8b18f93c906b347b542-merged.mount: Deactivated successfully.
Jan 31 07:46:04 compute-2 nova_compute[226829]: 2026-01-31 07:46:04.229 226833 DEBUG nova.compute.manager [req-39c2f07e-c33e-4371-9640-6633b528d532 req-8c57f930-ec8e-4abf-b893-428a893f6a49 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Received event network-vif-unplugged-e24f0391-b643-4a28-a184-a94f3b8aac45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:46:04 compute-2 nova_compute[226829]: 2026-01-31 07:46:04.230 226833 DEBUG oslo_concurrency.lockutils [req-39c2f07e-c33e-4371-9640-6633b528d532 req-8c57f930-ec8e-4abf-b893-428a893f6a49 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:04 compute-2 nova_compute[226829]: 2026-01-31 07:46:04.230 226833 DEBUG oslo_concurrency.lockutils [req-39c2f07e-c33e-4371-9640-6633b528d532 req-8c57f930-ec8e-4abf-b893-428a893f6a49 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:04 compute-2 nova_compute[226829]: 2026-01-31 07:46:04.231 226833 DEBUG oslo_concurrency.lockutils [req-39c2f07e-c33e-4371-9640-6633b528d532 req-8c57f930-ec8e-4abf-b893-428a893f6a49 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:04 compute-2 nova_compute[226829]: 2026-01-31 07:46:04.231 226833 DEBUG nova.compute.manager [req-39c2f07e-c33e-4371-9640-6633b528d532 req-8c57f930-ec8e-4abf-b893-428a893f6a49 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] No waiting events found dispatching network-vif-unplugged-e24f0391-b643-4a28-a184-a94f3b8aac45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:46:04 compute-2 nova_compute[226829]: 2026-01-31 07:46:04.231 226833 DEBUG nova.compute.manager [req-39c2f07e-c33e-4371-9640-6633b528d532 req-8c57f930-ec8e-4abf-b893-428a893f6a49 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Received event network-vif-unplugged-e24f0391-b643-4a28-a184-a94f3b8aac45 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:46:04 compute-2 nova_compute[226829]: 2026-01-31 07:46:04.238 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:04 compute-2 podman[251407]: 2026-01-31 07:46:04.379240782 +0000 UTC m=+0.573339103 container cleanup 90216e89c0cf1971f98544af3a48ac0f8b2635ca4a8577e6f219197d8a137c5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 07:46:04 compute-2 systemd[1]: libpod-conmon-90216e89c0cf1971f98544af3a48ac0f8b2635ca4a8577e6f219197d8a137c5d.scope: Deactivated successfully.
Jan 31 07:46:04 compute-2 podman[251464]: 2026-01-31 07:46:04.618116244 +0000 UTC m=+0.208905951 container remove 90216e89c0cf1971f98544af3a48ac0f8b2635ca4a8577e6f219197d8a137c5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:46:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:04.634 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9d8ffa34-2d99-441a-96b3-f8d845811212]: (4, ('Sat Jan 31 07:46:03 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 (90216e89c0cf1971f98544af3a48ac0f8b2635ca4a8577e6f219197d8a137c5d)\n90216e89c0cf1971f98544af3a48ac0f8b2635ca4a8577e6f219197d8a137c5d\nSat Jan 31 07:46:04 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 (90216e89c0cf1971f98544af3a48ac0f8b2635ca4a8577e6f219197d8a137c5d)\n90216e89c0cf1971f98544af3a48ac0f8b2635ca4a8577e6f219197d8a137c5d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:04.637 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9139fc3a-cdbd-4070-8665-76167e2ef414]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:04.638 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60244e92-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:46:04 compute-2 kernel: tap60244e92-10: left promiscuous mode
Jan 31 07:46:04 compute-2 nova_compute[226829]: 2026-01-31 07:46:04.640 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:04.645 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e17242f7-f55b-4d1a-8cd4-b4a0ae442633]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:04 compute-2 nova_compute[226829]: 2026-01-31 07:46:04.647 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:04.665 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[14df4cfe-6f5a-4176-98cc-25f0bf335571]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:04.667 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0c0667a6-4cb4-4118-b6d2-2c1fd2cdb2d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:04.681 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f191211f-f124-4a4a-9b55-16ef6d1abcea]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580861, 'reachable_time': 38083, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251480, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:04 compute-2 systemd[1]: run-netns-ovnmeta\x2d60244e92\x2d1524\x2d47f0\x2d8207\x2d05d0104afa47.mount: Deactivated successfully.
Jan 31 07:46:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:04.684 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:46:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:04.685 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[9749f862-a795-4401-b78b-7c150dfe6867]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:46:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:04.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:46:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:05.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:06 compute-2 ceph-mon[77282]: pgmap v1430: 305 pgs: 305 active+clean; 405 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 40 KiB/s wr, 150 op/s
Jan 31 07:46:06 compute-2 nova_compute[226829]: 2026-01-31 07:46:06.442 226833 DEBUG nova.compute.manager [req-a9cb1cd3-70ec-4e2e-a445-06a53a3e6cfa req-13bdf5d6-c460-413d-aabb-201171d109ef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Received event network-vif-plugged-e24f0391-b643-4a28-a184-a94f3b8aac45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:46:06 compute-2 nova_compute[226829]: 2026-01-31 07:46:06.443 226833 DEBUG oslo_concurrency.lockutils [req-a9cb1cd3-70ec-4e2e-a445-06a53a3e6cfa req-13bdf5d6-c460-413d-aabb-201171d109ef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:06 compute-2 nova_compute[226829]: 2026-01-31 07:46:06.443 226833 DEBUG oslo_concurrency.lockutils [req-a9cb1cd3-70ec-4e2e-a445-06a53a3e6cfa req-13bdf5d6-c460-413d-aabb-201171d109ef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:06 compute-2 nova_compute[226829]: 2026-01-31 07:46:06.443 226833 DEBUG oslo_concurrency.lockutils [req-a9cb1cd3-70ec-4e2e-a445-06a53a3e6cfa req-13bdf5d6-c460-413d-aabb-201171d109ef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:06 compute-2 nova_compute[226829]: 2026-01-31 07:46:06.443 226833 DEBUG nova.compute.manager [req-a9cb1cd3-70ec-4e2e-a445-06a53a3e6cfa req-13bdf5d6-c460-413d-aabb-201171d109ef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] No waiting events found dispatching network-vif-plugged-e24f0391-b643-4a28-a184-a94f3b8aac45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:46:06 compute-2 nova_compute[226829]: 2026-01-31 07:46:06.443 226833 WARNING nova.compute.manager [req-a9cb1cd3-70ec-4e2e-a445-06a53a3e6cfa req-13bdf5d6-c460-413d-aabb-201171d109ef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Received unexpected event network-vif-plugged-e24f0391-b643-4a28-a184-a94f3b8aac45 for instance with vm_state paused and task_state deleting.
Jan 31 07:46:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:46:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:06.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:46:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:06.854 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:06.855 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:06.855 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:07.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:07 compute-2 nova_compute[226829]: 2026-01-31 07:46:07.370 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Updating instance_info_cache with network_info: [{"id": "3196ca0e-ce5d-4fbf-9341-3c29ba2d513c", "address": "fa:16:3e:3b:47:4e", "network": {"id": "d52bfdcb-a5f3-4946-8fca-4e9f67091fc3", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-952450093-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ddf930129cf4e0395f8c5e70fd9eda8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3196ca0e-ce", "ovs_interfaceid": "3196ca0e-ce5d-4fbf-9341-3c29ba2d513c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:46:07 compute-2 nova_compute[226829]: 2026-01-31 07:46:07.395 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-e7df3fd9-ff03-4b35-930a-330e9dff6d0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:46:07 compute-2 nova_compute[226829]: 2026-01-31 07:46:07.396 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 07:46:07 compute-2 nova_compute[226829]: 2026-01-31 07:46:07.396 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:46:07 compute-2 nova_compute[226829]: 2026-01-31 07:46:07.397 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:46:07 compute-2 nova_compute[226829]: 2026-01-31 07:46:07.431 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:07 compute-2 nova_compute[226829]: 2026-01-31 07:46:07.431 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:07 compute-2 nova_compute[226829]: 2026-01-31 07:46:07.432 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:07 compute-2 nova_compute[226829]: 2026-01-31 07:46:07.432 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:46:07 compute-2 nova_compute[226829]: 2026-01-31 07:46:07.433 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:46:07 compute-2 nova_compute[226829]: 2026-01-31 07:46:07.854 226833 INFO nova.virt.libvirt.driver [None req-bb39799b-08dd-419d-a090-93b1906819ea 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Deleting instance files /var/lib/nova/instances/ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4_del
Jan 31 07:46:07 compute-2 nova_compute[226829]: 2026-01-31 07:46:07.855 226833 INFO nova.virt.libvirt.driver [None req-bb39799b-08dd-419d-a090-93b1906819ea 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Deletion of /var/lib/nova/instances/ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4_del complete
Jan 31 07:46:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:46:07 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4285144845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:07 compute-2 nova_compute[226829]: 2026-01-31 07:46:07.876 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:46:07 compute-2 nova_compute[226829]: 2026-01-31 07:46:07.955 226833 INFO nova.compute.manager [None req-bb39799b-08dd-419d-a090-93b1906819ea 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Took 4.55 seconds to destroy the instance on the hypervisor.
Jan 31 07:46:07 compute-2 nova_compute[226829]: 2026-01-31 07:46:07.956 226833 DEBUG oslo.service.loopingcall [None req-bb39799b-08dd-419d-a090-93b1906819ea 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:46:07 compute-2 nova_compute[226829]: 2026-01-31 07:46:07.956 226833 DEBUG nova.compute.manager [-] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:46:07 compute-2 nova_compute[226829]: 2026-01-31 07:46:07.957 226833 DEBUG nova.network.neutron [-] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.005 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Error from libvirt while getting description of instance-0000003b: [Error Code 42] Domain not found: no domain with matching uuid 'ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4' (instance-0000003b): libvirt.libvirtError: Domain not found: no domain with matching uuid 'ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4' (instance-0000003b)
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.010 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000038 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.010 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000038 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.018 226833 DEBUG oslo_concurrency.lockutils [None req-3ce9a830-80c6-48f1-80bf-3fcb99be42c6 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Acquiring lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.019 226833 DEBUG oslo_concurrency.lockutils [None req-3ce9a830-80c6-48f1-80bf-3fcb99be42c6 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.019 226833 DEBUG oslo_concurrency.lockutils [None req-3ce9a830-80c6-48f1-80bf-3fcb99be42c6 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Acquiring lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.020 226833 DEBUG oslo_concurrency.lockutils [None req-3ce9a830-80c6-48f1-80bf-3fcb99be42c6 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.020 226833 DEBUG oslo_concurrency.lockutils [None req-3ce9a830-80c6-48f1-80bf-3fcb99be42c6 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.021 226833 INFO nova.compute.manager [None req-3ce9a830-80c6-48f1-80bf-3fcb99be42c6 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Terminating instance
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.022 226833 DEBUG nova.compute.manager [None req-3ce9a830-80c6-48f1-80bf-3fcb99be42c6 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:46:08 compute-2 kernel: tap3196ca0e-ce (unregistering): left promiscuous mode
Jan 31 07:46:08 compute-2 NetworkManager[48999]: <info>  [1769845568.1113] device (tap3196ca0e-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.115 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:08 compute-2 ovn_controller[133834]: 2026-01-31T07:46:08Z|00152|binding|INFO|Releasing lport 3196ca0e-ce5d-4fbf-9341-3c29ba2d513c from this chassis (sb_readonly=0)
Jan 31 07:46:08 compute-2 ovn_controller[133834]: 2026-01-31T07:46:08Z|00153|binding|INFO|Setting lport 3196ca0e-ce5d-4fbf-9341-3c29ba2d513c down in Southbound
Jan 31 07:46:08 compute-2 ovn_controller[133834]: 2026-01-31T07:46:08Z|00154|binding|INFO|Removing iface tap3196ca0e-ce ovn-installed in OVS
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.125 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:08.129 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:47:4e 10.100.0.8'], port_security=['fa:16:3e:3b:47:4e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e7df3fd9-ff03-4b35-930a-330e9dff6d0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9ddf930129cf4e0395f8c5e70fd9eda8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'be9fd1f2-df08-4f20-8be4-2f77d359418d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=530610e1-1646-4c1c-9b6d-a046ad77685d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=3196ca0e-ce5d-4fbf-9341-3c29ba2d513c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:46:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:08.130 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 3196ca0e-ce5d-4fbf-9341-3c29ba2d513c in datapath d52bfdcb-a5f3-4946-8fca-4e9f67091fc3 unbound from our chassis
Jan 31 07:46:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:08.132 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d52bfdcb-a5f3-4946-8fca-4e9f67091fc3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:46:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:08.133 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5ac4aa39-5f52-4981-b3a7-d846a6d7f8b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:08.133 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3 namespace which is not needed anymore
Jan 31 07:46:08 compute-2 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000038.scope: Deactivated successfully.
Jan 31 07:46:08 compute-2 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000038.scope: Consumed 16.405s CPU time.
Jan 31 07:46:08 compute-2 systemd-machined[195142]: Machine qemu-23-instance-00000038 terminated.
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.211 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.212 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4372MB free_disk=20.82366943359375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.212 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.212 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:08 compute-2 ceph-mon[77282]: pgmap v1431: 305 pgs: 305 active+clean; 339 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 35 KiB/s wr, 149 op/s
Jan 31 07:46:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4285144845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3296735725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.251 226833 INFO nova.virt.libvirt.driver [-] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Instance destroyed successfully.
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.252 226833 DEBUG nova.objects.instance [None req-3ce9a830-80c6-48f1-80bf-3fcb99be42c6 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lazy-loading 'resources' on Instance uuid e7df3fd9-ff03-4b35-930a-330e9dff6d0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.312 226833 DEBUG nova.virt.libvirt.vif [None req-3ce9a830-80c6-48f1-80bf-3fcb99be42c6 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:44:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-795324657',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-795324657',id=56,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP/jDytd8Ym5FxcVX16m0xs7mZjnpulUTkIvV8si73F9lzKYe980w/3RbovGTB1QQOm/Ss45P0fTDRJRtI1toiRP5c4zSltvuzCoq9BdQDxvme5rWNAqRGyanoC79C91qw==',key_name='tempest-keypair-1249601107',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:45:14Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9ddf930129cf4e0395f8c5e70fd9eda8',ramdisk_id='',reservation_id='r-sdbgddgh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-600318888',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-600318888-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:45:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4865905ed4e4262a2242d3f323d4314',uuid=e7df3fd9-ff03-4b35-930a-330e9dff6d0e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3196ca0e-ce5d-4fbf-9341-3c29ba2d513c", "address": "fa:16:3e:3b:47:4e", "network": {"id": "d52bfdcb-a5f3-4946-8fca-4e9f67091fc3", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-952450093-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ddf930129cf4e0395f8c5e70fd9eda8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3196ca0e-ce", "ovs_interfaceid": "3196ca0e-ce5d-4fbf-9341-3c29ba2d513c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.312 226833 DEBUG nova.network.os_vif_util [None req-3ce9a830-80c6-48f1-80bf-3fcb99be42c6 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Converting VIF {"id": "3196ca0e-ce5d-4fbf-9341-3c29ba2d513c", "address": "fa:16:3e:3b:47:4e", "network": {"id": "d52bfdcb-a5f3-4946-8fca-4e9f67091fc3", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-952450093-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ddf930129cf4e0395f8c5e70fd9eda8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3196ca0e-ce", "ovs_interfaceid": "3196ca0e-ce5d-4fbf-9341-3c29ba2d513c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.313 226833 DEBUG nova.network.os_vif_util [None req-3ce9a830-80c6-48f1-80bf-3fcb99be42c6 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3b:47:4e,bridge_name='br-int',has_traffic_filtering=True,id=3196ca0e-ce5d-4fbf-9341-3c29ba2d513c,network=Network(d52bfdcb-a5f3-4946-8fca-4e9f67091fc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3196ca0e-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.313 226833 DEBUG os_vif [None req-3ce9a830-80c6-48f1-80bf-3fcb99be42c6 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3b:47:4e,bridge_name='br-int',has_traffic_filtering=True,id=3196ca0e-ce5d-4fbf-9341-3c29ba2d513c,network=Network(d52bfdcb-a5f3-4946-8fca-4e9f67091fc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3196ca0e-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.315 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.315 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3196ca0e-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.317 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:08 compute-2 neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3[250032]: [NOTICE]   (250069) : haproxy version is 2.8.14-c23fe91
Jan 31 07:46:08 compute-2 neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3[250032]: [NOTICE]   (250069) : path to executable is /usr/sbin/haproxy
Jan 31 07:46:08 compute-2 neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3[250032]: [WARNING]  (250069) : Exiting Master process...
Jan 31 07:46:08 compute-2 neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3[250032]: [WARNING]  (250069) : Exiting Master process...
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.320 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:08 compute-2 neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3[250032]: [ALERT]    (250069) : Current worker (250082) exited with code 143 (Terminated)
Jan 31 07:46:08 compute-2 neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3[250032]: [WARNING]  (250069) : All workers exited. Exiting... (0)
Jan 31 07:46:08 compute-2 systemd[1]: libpod-6d2f93b036f29d7f052ea68d603cde097020d445768d810d32c743cfe2a3f485.scope: Deactivated successfully.
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.325 226833 INFO os_vif [None req-3ce9a830-80c6-48f1-80bf-3fcb99be42c6 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3b:47:4e,bridge_name='br-int',has_traffic_filtering=True,id=3196ca0e-ce5d-4fbf-9341-3c29ba2d513c,network=Network(d52bfdcb-a5f3-4946-8fca-4e9f67091fc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3196ca0e-ce')
Jan 31 07:46:08 compute-2 podman[251533]: 2026-01-31 07:46:08.330129125 +0000 UTC m=+0.110453384 container died 6d2f93b036f29d7f052ea68d603cde097020d445768d810d32c743cfe2a3f485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 31 07:46:08 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6d2f93b036f29d7f052ea68d603cde097020d445768d810d32c743cfe2a3f485-userdata-shm.mount: Deactivated successfully.
Jan 31 07:46:08 compute-2 systemd[1]: var-lib-containers-storage-overlay-a18f6d5b0a8cf22bc0d34fa9fbf9b8bc02368bdb9867166c13b7db905d2408aa-merged.mount: Deactivated successfully.
Jan 31 07:46:08 compute-2 podman[251533]: 2026-01-31 07:46:08.531114569 +0000 UTC m=+0.311438828 container cleanup 6d2f93b036f29d7f052ea68d603cde097020d445768d810d32c743cfe2a3f485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:46:08 compute-2 systemd[1]: libpod-conmon-6d2f93b036f29d7f052ea68d603cde097020d445768d810d32c743cfe2a3f485.scope: Deactivated successfully.
Jan 31 07:46:08 compute-2 podman[251592]: 2026-01-31 07:46:08.651445429 +0000 UTC m=+0.100147163 container remove 6d2f93b036f29d7f052ea68d603cde097020d445768d810d32c743cfe2a3f485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Jan 31 07:46:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:08.660 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b4d19bd0-8290-4bc5-959e-7c13a03d157c]: (4, ('Sat Jan 31 07:46:08 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3 (6d2f93b036f29d7f052ea68d603cde097020d445768d810d32c743cfe2a3f485)\n6d2f93b036f29d7f052ea68d603cde097020d445768d810d32c743cfe2a3f485\nSat Jan 31 07:46:08 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3 (6d2f93b036f29d7f052ea68d603cde097020d445768d810d32c743cfe2a3f485)\n6d2f93b036f29d7f052ea68d603cde097020d445768d810d32c743cfe2a3f485\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:08.661 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[59a52bb1-59cd-42b8-9b65-37abfa80f7a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:08.662 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd52bfdcb-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.664 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:08 compute-2 kernel: tapd52bfdcb-a0: left promiscuous mode
Jan 31 07:46:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:08.668 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.674 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:08.676 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[81fcdfae-2ff9-4fc9-9c57-ca38e498ad35]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:08.690 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3e53ef93-9e88-413b-9835-2f9e6dde8829]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:08.692 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e0a772a0-4fee-4c4e-89db-fea203ac9ac1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.699 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance e7df3fd9-ff03-4b35-930a-330e9dff6d0e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.699 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.700 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:46:08 compute-2 nova_compute[226829]: 2026-01-31 07:46:08.700 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:46:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:08.707 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[fdca079c-8e51-4f08-9170-e0663ca24369]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576416, 'reachable_time': 39575, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251607, 'error': None, 'target': 'ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:08 compute-2 systemd[1]: run-netns-ovnmeta\x2dd52bfdcb\x2da5f3\x2d4946\x2d8fca\x2d4e9f67091fc3.mount: Deactivated successfully.
Jan 31 07:46:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:08.710 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:46:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:08.710 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[fd02c6a3-b57e-4134-adda-1901215b4231]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:08.711 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:46:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:08.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:46:09 compute-2 nova_compute[226829]: 2026-01-31 07:46:09.043 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:46:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:09.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:09 compute-2 nova_compute[226829]: 2026-01-31 07:46:09.240 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:46:09 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2220353050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:09 compute-2 nova_compute[226829]: 2026-01-31 07:46:09.513 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:46:09 compute-2 nova_compute[226829]: 2026-01-31 07:46:09.519 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:46:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2543815248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:09 compute-2 nova_compute[226829]: 2026-01-31 07:46:09.554 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:46:09 compute-2 nova_compute[226829]: 2026-01-31 07:46:09.628 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:46:09 compute-2 nova_compute[226829]: 2026-01-31 07:46:09.629 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.417s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:09 compute-2 nova_compute[226829]: 2026-01-31 07:46:09.630 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:46:09 compute-2 nova_compute[226829]: 2026-01-31 07:46:09.630 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 07:46:10 compute-2 nova_compute[226829]: 2026-01-31 07:46:10.376 226833 DEBUG nova.compute.manager [req-c5c4fab7-22e6-4ea2-afe4-9d987d27acb8 req-c21d09c6-5072-4792-9c4f-3781bbdf24dc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Received event network-vif-unplugged-3196ca0e-ce5d-4fbf-9341-3c29ba2d513c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:46:10 compute-2 nova_compute[226829]: 2026-01-31 07:46:10.376 226833 DEBUG oslo_concurrency.lockutils [req-c5c4fab7-22e6-4ea2-afe4-9d987d27acb8 req-c21d09c6-5072-4792-9c4f-3781bbdf24dc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:10 compute-2 nova_compute[226829]: 2026-01-31 07:46:10.377 226833 DEBUG oslo_concurrency.lockutils [req-c5c4fab7-22e6-4ea2-afe4-9d987d27acb8 req-c21d09c6-5072-4792-9c4f-3781bbdf24dc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:10 compute-2 nova_compute[226829]: 2026-01-31 07:46:10.377 226833 DEBUG oslo_concurrency.lockutils [req-c5c4fab7-22e6-4ea2-afe4-9d987d27acb8 req-c21d09c6-5072-4792-9c4f-3781bbdf24dc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:10 compute-2 nova_compute[226829]: 2026-01-31 07:46:10.377 226833 DEBUG nova.compute.manager [req-c5c4fab7-22e6-4ea2-afe4-9d987d27acb8 req-c21d09c6-5072-4792-9c4f-3781bbdf24dc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] No waiting events found dispatching network-vif-unplugged-3196ca0e-ce5d-4fbf-9341-3c29ba2d513c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:46:10 compute-2 nova_compute[226829]: 2026-01-31 07:46:10.378 226833 DEBUG nova.compute.manager [req-c5c4fab7-22e6-4ea2-afe4-9d987d27acb8 req-c21d09c6-5072-4792-9c4f-3781bbdf24dc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Received event network-vif-unplugged-3196ca0e-ce5d-4fbf-9341-3c29ba2d513c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:46:10 compute-2 ceph-mon[77282]: pgmap v1432: 305 pgs: 305 active+clean; 311 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 34 KiB/s wr, 162 op/s
Jan 31 07:46:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2220353050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/254862565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:10 compute-2 nova_compute[226829]: 2026-01-31 07:46:10.600 226833 DEBUG nova.network.neutron [-] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:46:10 compute-2 nova_compute[226829]: 2026-01-31 07:46:10.619 226833 INFO nova.virt.libvirt.driver [None req-3ce9a830-80c6-48f1-80bf-3fcb99be42c6 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Deleting instance files /var/lib/nova/instances/e7df3fd9-ff03-4b35-930a-330e9dff6d0e_del
Jan 31 07:46:10 compute-2 nova_compute[226829]: 2026-01-31 07:46:10.620 226833 INFO nova.virt.libvirt.driver [None req-3ce9a830-80c6-48f1-80bf-3fcb99be42c6 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Deletion of /var/lib/nova/instances/e7df3fd9-ff03-4b35-930a-330e9dff6d0e_del complete
Jan 31 07:46:10 compute-2 nova_compute[226829]: 2026-01-31 07:46:10.652 226833 INFO nova.compute.manager [-] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Took 2.70 seconds to deallocate network for instance.
Jan 31 07:46:10 compute-2 nova_compute[226829]: 2026-01-31 07:46:10.728 226833 DEBUG oslo_concurrency.lockutils [None req-bb39799b-08dd-419d-a090-93b1906819ea 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:10 compute-2 nova_compute[226829]: 2026-01-31 07:46:10.728 226833 DEBUG oslo_concurrency.lockutils [None req-bb39799b-08dd-419d-a090-93b1906819ea 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:10 compute-2 nova_compute[226829]: 2026-01-31 07:46:10.735 226833 DEBUG nova.compute.manager [req-bd30616c-3b7f-4aa0-8d1f-ae002cc677ee req-c041988e-d3e6-4344-8ac9-a5f885928913 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Received event network-vif-deleted-e24f0391-b643-4a28-a184-a94f3b8aac45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:46:10 compute-2 nova_compute[226829]: 2026-01-31 07:46:10.752 226833 INFO nova.compute.manager [None req-3ce9a830-80c6-48f1-80bf-3fcb99be42c6 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Took 2.73 seconds to destroy the instance on the hypervisor.
Jan 31 07:46:10 compute-2 nova_compute[226829]: 2026-01-31 07:46:10.752 226833 DEBUG oslo.service.loopingcall [None req-3ce9a830-80c6-48f1-80bf-3fcb99be42c6 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:46:10 compute-2 nova_compute[226829]: 2026-01-31 07:46:10.752 226833 DEBUG nova.compute.manager [-] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:46:10 compute-2 nova_compute[226829]: 2026-01-31 07:46:10.753 226833 DEBUG nova.network.neutron [-] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:46:10 compute-2 nova_compute[226829]: 2026-01-31 07:46:10.755 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:46:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:10.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:10 compute-2 nova_compute[226829]: 2026-01-31 07:46:10.811 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:46:10 compute-2 nova_compute[226829]: 2026-01-31 07:46:10.812 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:46:10 compute-2 nova_compute[226829]: 2026-01-31 07:46:10.812 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:46:10 compute-2 nova_compute[226829]: 2026-01-31 07:46:10.909 226833 DEBUG oslo_concurrency.processutils [None req-bb39799b-08dd-419d-a090-93b1906819ea 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:46:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:11.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:46:11 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1753264854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:11 compute-2 nova_compute[226829]: 2026-01-31 07:46:11.325 226833 DEBUG oslo_concurrency.processutils [None req-bb39799b-08dd-419d-a090-93b1906819ea 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:46:11 compute-2 nova_compute[226829]: 2026-01-31 07:46:11.331 226833 DEBUG nova.compute.provider_tree [None req-bb39799b-08dd-419d-a090-93b1906819ea 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:46:11 compute-2 nova_compute[226829]: 2026-01-31 07:46:11.450 226833 DEBUG nova.scheduler.client.report [None req-bb39799b-08dd-419d-a090-93b1906819ea 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:46:11 compute-2 nova_compute[226829]: 2026-01-31 07:46:11.476 226833 DEBUG oslo_concurrency.lockutils [None req-bb39799b-08dd-419d-a090-93b1906819ea 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:11 compute-2 nova_compute[226829]: 2026-01-31 07:46:11.548 226833 INFO nova.scheduler.client.report [None req-bb39799b-08dd-419d-a090-93b1906819ea 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Deleted allocations for instance ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4
Jan 31 07:46:11 compute-2 nova_compute[226829]: 2026-01-31 07:46:11.700 226833 DEBUG oslo_concurrency.lockutils [None req-bb39799b-08dd-419d-a090-93b1906819ea 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.298s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:11 compute-2 ceph-mon[77282]: pgmap v1433: 305 pgs: 305 active+clean; 217 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 455 KiB/s rd, 5.1 KiB/s wr, 106 op/s
Jan 31 07:46:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1753264854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:12 compute-2 nova_compute[226829]: 2026-01-31 07:46:12.540 226833 DEBUG nova.compute.manager [req-4661b9a9-b883-48df-a172-dcd60089b3bd req-797e1c6b-9a8c-4bce-aca5-351f809cb56d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Received event network-vif-plugged-3196ca0e-ce5d-4fbf-9341-3c29ba2d513c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:46:12 compute-2 nova_compute[226829]: 2026-01-31 07:46:12.540 226833 DEBUG oslo_concurrency.lockutils [req-4661b9a9-b883-48df-a172-dcd60089b3bd req-797e1c6b-9a8c-4bce-aca5-351f809cb56d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:12 compute-2 nova_compute[226829]: 2026-01-31 07:46:12.540 226833 DEBUG oslo_concurrency.lockutils [req-4661b9a9-b883-48df-a172-dcd60089b3bd req-797e1c6b-9a8c-4bce-aca5-351f809cb56d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:12 compute-2 nova_compute[226829]: 2026-01-31 07:46:12.540 226833 DEBUG oslo_concurrency.lockutils [req-4661b9a9-b883-48df-a172-dcd60089b3bd req-797e1c6b-9a8c-4bce-aca5-351f809cb56d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:12 compute-2 nova_compute[226829]: 2026-01-31 07:46:12.541 226833 DEBUG nova.compute.manager [req-4661b9a9-b883-48df-a172-dcd60089b3bd req-797e1c6b-9a8c-4bce-aca5-351f809cb56d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] No waiting events found dispatching network-vif-plugged-3196ca0e-ce5d-4fbf-9341-3c29ba2d513c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:46:12 compute-2 nova_compute[226829]: 2026-01-31 07:46:12.541 226833 WARNING nova.compute.manager [req-4661b9a9-b883-48df-a172-dcd60089b3bd req-797e1c6b-9a8c-4bce-aca5-351f809cb56d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Received unexpected event network-vif-plugged-3196ca0e-ce5d-4fbf-9341-3c29ba2d513c for instance with vm_state active and task_state deleting.
Jan 31 07:46:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:12.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:12 compute-2 sudo[251655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:46:12 compute-2 sudo[251655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:12 compute-2 sudo[251655]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:12 compute-2 sudo[251680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:46:12 compute-2 sudo[251680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:12 compute-2 sudo[251680]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:13.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3818271398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:13 compute-2 nova_compute[226829]: 2026-01-31 07:46:13.320 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:13 compute-2 nova_compute[226829]: 2026-01-31 07:46:13.454 226833 DEBUG nova.network.neutron [-] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:46:13 compute-2 nova_compute[226829]: 2026-01-31 07:46:13.519 226833 INFO nova.compute.manager [-] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Took 2.77 seconds to deallocate network for instance.
Jan 31 07:46:13 compute-2 nova_compute[226829]: 2026-01-31 07:46:13.609 226833 DEBUG nova.compute.manager [req-e38fb476-e6be-4ff8-b870-30bc54ec8c25 req-a0d07a80-1566-416b-97e6-c14f880d3503 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Received event network-vif-deleted-3196ca0e-ce5d-4fbf-9341-3c29ba2d513c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:46:13 compute-2 nova_compute[226829]: 2026-01-31 07:46:13.656 226833 DEBUG oslo_concurrency.lockutils [None req-3ce9a830-80c6-48f1-80bf-3fcb99be42c6 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:13 compute-2 nova_compute[226829]: 2026-01-31 07:46:13.657 226833 DEBUG oslo_concurrency.lockutils [None req-3ce9a830-80c6-48f1-80bf-3fcb99be42c6 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:13.712 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:46:13 compute-2 nova_compute[226829]: 2026-01-31 07:46:13.752 226833 DEBUG oslo_concurrency.processutils [None req-3ce9a830-80c6-48f1-80bf-3fcb99be42c6 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:46:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:46:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:46:14 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/352187108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:14 compute-2 nova_compute[226829]: 2026-01-31 07:46:14.206 226833 DEBUG oslo_concurrency.processutils [None req-3ce9a830-80c6-48f1-80bf-3fcb99be42c6 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:46:14 compute-2 nova_compute[226829]: 2026-01-31 07:46:14.211 226833 DEBUG nova.compute.provider_tree [None req-3ce9a830-80c6-48f1-80bf-3fcb99be42c6 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:46:14 compute-2 podman[251726]: 2026-01-31 07:46:14.21246869 +0000 UTC m=+0.099466945 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:46:14 compute-2 nova_compute[226829]: 2026-01-31 07:46:14.240 226833 DEBUG nova.scheduler.client.report [None req-3ce9a830-80c6-48f1-80bf-3fcb99be42c6 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:46:14 compute-2 nova_compute[226829]: 2026-01-31 07:46:14.246 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:14 compute-2 nova_compute[226829]: 2026-01-31 07:46:14.313 226833 DEBUG oslo_concurrency.lockutils [None req-3ce9a830-80c6-48f1-80bf-3fcb99be42c6 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:14 compute-2 ceph-mon[77282]: pgmap v1434: 305 pgs: 305 active+clean; 200 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 406 KiB/s rd, 4.8 KiB/s wr, 105 op/s
Jan 31 07:46:14 compute-2 nova_compute[226829]: 2026-01-31 07:46:14.470 226833 INFO nova.scheduler.client.report [None req-3ce9a830-80c6-48f1-80bf-3fcb99be42c6 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Deleted allocations for instance e7df3fd9-ff03-4b35-930a-330e9dff6d0e
Jan 31 07:46:14 compute-2 nova_compute[226829]: 2026-01-31 07:46:14.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:46:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:14.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:14 compute-2 nova_compute[226829]: 2026-01-31 07:46:14.866 226833 DEBUG oslo_concurrency.lockutils [None req-3ce9a830-80c6-48f1-80bf-3fcb99be42c6 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "e7df3fd9-ff03-4b35-930a-330e9dff6d0e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:46:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:15.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:46:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/352187108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:15 compute-2 nova_compute[226829]: 2026-01-31 07:46:15.981 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:46:16 compute-2 ceph-mon[77282]: pgmap v1435: 305 pgs: 305 active+clean; 200 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 3.8 KiB/s wr, 78 op/s
Jan 31 07:46:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:46:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:16.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:46:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:46:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:17.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:46:17 compute-2 nova_compute[226829]: 2026-01-31 07:46:17.482 226833 DEBUG oslo_concurrency.lockutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "070c0648-719f-49ef-9721-ce18b2d03fd3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:17 compute-2 nova_compute[226829]: 2026-01-31 07:46:17.483 226833 DEBUG oslo_concurrency.lockutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "070c0648-719f-49ef-9721-ce18b2d03fd3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:17 compute-2 nova_compute[226829]: 2026-01-31 07:46:17.511 226833 DEBUG nova.compute.manager [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:46:17 compute-2 nova_compute[226829]: 2026-01-31 07:46:17.616 226833 DEBUG oslo_concurrency.lockutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:17 compute-2 nova_compute[226829]: 2026-01-31 07:46:17.617 226833 DEBUG oslo_concurrency.lockutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:17 compute-2 nova_compute[226829]: 2026-01-31 07:46:17.625 226833 DEBUG nova.virt.hardware [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:46:17 compute-2 nova_compute[226829]: 2026-01-31 07:46:17.625 226833 INFO nova.compute.claims [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:46:17 compute-2 nova_compute[226829]: 2026-01-31 07:46:17.777 226833 DEBUG oslo_concurrency.processutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:46:18 compute-2 ceph-mon[77282]: pgmap v1436: 305 pgs: 305 active+clean; 232 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 1.1 MiB/s wr, 91 op/s
Jan 31 07:46:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:46:18 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3675761999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:18 compute-2 nova_compute[226829]: 2026-01-31 07:46:18.231 226833 DEBUG oslo_concurrency.processutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:46:18 compute-2 nova_compute[226829]: 2026-01-31 07:46:18.238 226833 DEBUG nova.compute.provider_tree [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:46:18 compute-2 nova_compute[226829]: 2026-01-31 07:46:18.270 226833 DEBUG nova.scheduler.client.report [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:46:18 compute-2 nova_compute[226829]: 2026-01-31 07:46:18.324 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:18 compute-2 nova_compute[226829]: 2026-01-31 07:46:18.343 226833 DEBUG oslo_concurrency.lockutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:18 compute-2 nova_compute[226829]: 2026-01-31 07:46:18.345 226833 DEBUG nova.compute.manager [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:46:18 compute-2 nova_compute[226829]: 2026-01-31 07:46:18.456 226833 DEBUG nova.compute.manager [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:46:18 compute-2 nova_compute[226829]: 2026-01-31 07:46:18.457 226833 DEBUG nova.network.neutron [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:46:18 compute-2 nova_compute[226829]: 2026-01-31 07:46:18.487 226833 INFO nova.virt.libvirt.driver [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:46:18 compute-2 nova_compute[226829]: 2026-01-31 07:46:18.528 226833 DEBUG nova.compute.manager [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:46:18 compute-2 nova_compute[226829]: 2026-01-31 07:46:18.740 226833 DEBUG nova.compute.manager [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:46:18 compute-2 nova_compute[226829]: 2026-01-31 07:46:18.742 226833 DEBUG nova.virt.libvirt.driver [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:46:18 compute-2 nova_compute[226829]: 2026-01-31 07:46:18.742 226833 INFO nova.virt.libvirt.driver [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Creating image(s)
Jan 31 07:46:18 compute-2 nova_compute[226829]: 2026-01-31 07:46:18.777 226833 DEBUG nova.storage.rbd_utils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 070c0648-719f-49ef-9721-ce18b2d03fd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:46:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:46:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:18.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:46:18 compute-2 nova_compute[226829]: 2026-01-31 07:46:18.816 226833 DEBUG nova.storage.rbd_utils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 070c0648-719f-49ef-9721-ce18b2d03fd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:46:18 compute-2 nova_compute[226829]: 2026-01-31 07:46:18.851 226833 DEBUG nova.storage.rbd_utils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 070c0648-719f-49ef-9721-ce18b2d03fd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:46:18 compute-2 nova_compute[226829]: 2026-01-31 07:46:18.856 226833 DEBUG oslo_concurrency.processutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:46:18 compute-2 nova_compute[226829]: 2026-01-31 07:46:18.875 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845563.8360462, ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:46:18 compute-2 nova_compute[226829]: 2026-01-31 07:46:18.876 226833 INFO nova.compute.manager [-] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] VM Stopped (Lifecycle Event)
Jan 31 07:46:18 compute-2 nova_compute[226829]: 2026-01-31 07:46:18.911 226833 DEBUG nova.compute.manager [None req-533d2adb-8e27-46c1-9a54-acc91a81f7dc - - - - - -] [instance: ddb3e699-e03b-48f7-a9e9-4d245dfc3ac4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:46:18 compute-2 nova_compute[226829]: 2026-01-31 07:46:18.928 226833 DEBUG oslo_concurrency.processutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:46:18 compute-2 nova_compute[226829]: 2026-01-31 07:46:18.929 226833 DEBUG oslo_concurrency.lockutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:18 compute-2 nova_compute[226829]: 2026-01-31 07:46:18.929 226833 DEBUG oslo_concurrency.lockutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:18 compute-2 nova_compute[226829]: 2026-01-31 07:46:18.930 226833 DEBUG oslo_concurrency.lockutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:18 compute-2 nova_compute[226829]: 2026-01-31 07:46:18.956 226833 DEBUG nova.storage.rbd_utils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 070c0648-719f-49ef-9721-ce18b2d03fd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:46:18 compute-2 nova_compute[226829]: 2026-01-31 07:46:18.959 226833 DEBUG oslo_concurrency.processutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 070c0648-719f-49ef-9721-ce18b2d03fd3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:46:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:46:18 compute-2 nova_compute[226829]: 2026-01-31 07:46:18.980 226833 DEBUG nova.policy [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '97abab8eb79247cd89fb2ebff295b890', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f299640bb1f64e5fa12b23955e5a2127', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:46:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:19.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:19 compute-2 nova_compute[226829]: 2026-01-31 07:46:19.245 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:19 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3675761999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:20 compute-2 nova_compute[226829]: 2026-01-31 07:46:20.220 226833 DEBUG oslo_concurrency.processutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 070c0648-719f-49ef-9721-ce18b2d03fd3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:46:20 compute-2 nova_compute[226829]: 2026-01-31 07:46:20.301 226833 DEBUG nova.storage.rbd_utils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] resizing rbd image 070c0648-719f-49ef-9721-ce18b2d03fd3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:46:20 compute-2 ceph-mon[77282]: pgmap v1437: 305 pgs: 305 active+clean; 246 MiB data, 662 MiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Jan 31 07:46:20 compute-2 nova_compute[226829]: 2026-01-31 07:46:20.590 226833 DEBUG nova.objects.instance [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'migration_context' on Instance uuid 070c0648-719f-49ef-9721-ce18b2d03fd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:46:20 compute-2 nova_compute[226829]: 2026-01-31 07:46:20.690 226833 DEBUG nova.virt.libvirt.driver [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:46:20 compute-2 nova_compute[226829]: 2026-01-31 07:46:20.691 226833 DEBUG nova.virt.libvirt.driver [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Ensure instance console log exists: /var/lib/nova/instances/070c0648-719f-49ef-9721-ce18b2d03fd3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:46:20 compute-2 nova_compute[226829]: 2026-01-31 07:46:20.691 226833 DEBUG oslo_concurrency.lockutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:20 compute-2 nova_compute[226829]: 2026-01-31 07:46:20.692 226833 DEBUG oslo_concurrency.lockutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:20 compute-2 nova_compute[226829]: 2026-01-31 07:46:20.693 226833 DEBUG oslo_concurrency.lockutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:20.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:20 compute-2 nova_compute[226829]: 2026-01-31 07:46:20.874 226833 DEBUG nova.network.neutron [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Successfully created port: f67f1507-621e-4954-99cd-71bf5dd626d8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:46:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:21.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3836211079' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:46:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:22.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:22 compute-2 ceph-mon[77282]: pgmap v1438: 305 pgs: 305 active+clean; 231 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 3.2 MiB/s wr, 107 op/s
Jan 31 07:46:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1517547197' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:46:22 compute-2 nova_compute[226829]: 2026-01-31 07:46:22.955 226833 DEBUG nova.network.neutron [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Successfully updated port: f67f1507-621e-4954-99cd-71bf5dd626d8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:46:22 compute-2 nova_compute[226829]: 2026-01-31 07:46:22.983 226833 DEBUG oslo_concurrency.lockutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "refresh_cache-070c0648-719f-49ef-9721-ce18b2d03fd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:46:22 compute-2 nova_compute[226829]: 2026-01-31 07:46:22.984 226833 DEBUG oslo_concurrency.lockutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquired lock "refresh_cache-070c0648-719f-49ef-9721-ce18b2d03fd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:46:22 compute-2 nova_compute[226829]: 2026-01-31 07:46:22.984 226833 DEBUG nova.network.neutron [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:46:23 compute-2 nova_compute[226829]: 2026-01-31 07:46:23.063 226833 DEBUG nova.compute.manager [req-2bfa415a-bc15-4530-857a-08865e177b81 req-6dd9129b-1236-493f-889c-5b74e88367c3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Received event network-changed-f67f1507-621e-4954-99cd-71bf5dd626d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:46:23 compute-2 nova_compute[226829]: 2026-01-31 07:46:23.063 226833 DEBUG nova.compute.manager [req-2bfa415a-bc15-4530-857a-08865e177b81 req-6dd9129b-1236-493f-889c-5b74e88367c3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Refreshing instance network info cache due to event network-changed-f67f1507-621e-4954-99cd-71bf5dd626d8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:46:23 compute-2 nova_compute[226829]: 2026-01-31 07:46:23.063 226833 DEBUG oslo_concurrency.lockutils [req-2bfa415a-bc15-4530-857a-08865e177b81 req-6dd9129b-1236-493f-889c-5b74e88367c3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-070c0648-719f-49ef-9721-ce18b2d03fd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:46:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:23.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:23 compute-2 podman[251947]: 2026-01-31 07:46:23.197203003 +0000 UTC m=+0.076485314 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 07:46:23 compute-2 nova_compute[226829]: 2026-01-31 07:46:23.250 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845568.2483108, e7df3fd9-ff03-4b35-930a-330e9dff6d0e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:46:23 compute-2 nova_compute[226829]: 2026-01-31 07:46:23.250 226833 INFO nova.compute.manager [-] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] VM Stopped (Lifecycle Event)
Jan 31 07:46:23 compute-2 nova_compute[226829]: 2026-01-31 07:46:23.304 226833 DEBUG nova.compute.manager [None req-cca01134-cf69-4565-8932-94baa16c099e - - - - - -] [instance: e7df3fd9-ff03-4b35-930a-330e9dff6d0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:46:23 compute-2 nova_compute[226829]: 2026-01-31 07:46:23.322 226833 DEBUG nova.network.neutron [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:46:23 compute-2 nova_compute[226829]: 2026-01-31 07:46:23.355 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:46:24 compute-2 nova_compute[226829]: 2026-01-31 07:46:24.246 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:24 compute-2 ceph-mon[77282]: pgmap v1439: 305 pgs: 305 active+clean; 213 MiB data, 638 MiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 3.5 MiB/s wr, 84 op/s
Jan 31 07:46:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1870794508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:24 compute-2 nova_compute[226829]: 2026-01-31 07:46:24.771 226833 DEBUG nova.network.neutron [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Updating instance_info_cache with network_info: [{"id": "f67f1507-621e-4954-99cd-71bf5dd626d8", "address": "fa:16:3e:2a:0b:11", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67f1507-62", "ovs_interfaceid": "f67f1507-621e-4954-99cd-71bf5dd626d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:46:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:46:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:24.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:46:24 compute-2 nova_compute[226829]: 2026-01-31 07:46:24.993 226833 DEBUG oslo_concurrency.lockutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Releasing lock "refresh_cache-070c0648-719f-49ef-9721-ce18b2d03fd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:46:24 compute-2 nova_compute[226829]: 2026-01-31 07:46:24.993 226833 DEBUG nova.compute.manager [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Instance network_info: |[{"id": "f67f1507-621e-4954-99cd-71bf5dd626d8", "address": "fa:16:3e:2a:0b:11", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67f1507-62", "ovs_interfaceid": "f67f1507-621e-4954-99cd-71bf5dd626d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:46:24 compute-2 nova_compute[226829]: 2026-01-31 07:46:24.993 226833 DEBUG oslo_concurrency.lockutils [req-2bfa415a-bc15-4530-857a-08865e177b81 req-6dd9129b-1236-493f-889c-5b74e88367c3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-070c0648-719f-49ef-9721-ce18b2d03fd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:46:24 compute-2 nova_compute[226829]: 2026-01-31 07:46:24.994 226833 DEBUG nova.network.neutron [req-2bfa415a-bc15-4530-857a-08865e177b81 req-6dd9129b-1236-493f-889c-5b74e88367c3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Refreshing network info cache for port f67f1507-621e-4954-99cd-71bf5dd626d8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:46:24 compute-2 nova_compute[226829]: 2026-01-31 07:46:24.996 226833 DEBUG nova.virt.libvirt.driver [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Start _get_guest_xml network_info=[{"id": "f67f1507-621e-4954-99cd-71bf5dd626d8", "address": "fa:16:3e:2a:0b:11", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67f1507-62", "ovs_interfaceid": "f67f1507-621e-4954-99cd-71bf5dd626d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.002 226833 WARNING nova.virt.libvirt.driver [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.012 226833 DEBUG nova.virt.libvirt.host [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.012 226833 DEBUG nova.virt.libvirt.host [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.016 226833 DEBUG nova.virt.libvirt.host [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.017 226833 DEBUG nova.virt.libvirt.host [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.018 226833 DEBUG nova.virt.libvirt.driver [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.018 226833 DEBUG nova.virt.hardware [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.019 226833 DEBUG nova.virt.hardware [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.019 226833 DEBUG nova.virt.hardware [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.019 226833 DEBUG nova.virt.hardware [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.019 226833 DEBUG nova.virt.hardware [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.020 226833 DEBUG nova.virt.hardware [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.020 226833 DEBUG nova.virt.hardware [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.020 226833 DEBUG nova.virt.hardware [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.020 226833 DEBUG nova.virt.hardware [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.020 226833 DEBUG nova.virt.hardware [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.021 226833 DEBUG nova.virt.hardware [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.023 226833 DEBUG oslo_concurrency.processutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:46:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:25.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:46:25 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/356577153' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.420 226833 DEBUG oslo_concurrency.processutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.460 226833 DEBUG nova.storage.rbd_utils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 070c0648-719f-49ef-9721-ce18b2d03fd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.465 226833 DEBUG oslo_concurrency.processutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:46:25 compute-2 ceph-mon[77282]: pgmap v1440: 305 pgs: 305 active+clean; 213 MiB data, 638 MiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 3.5 MiB/s wr, 73 op/s
Jan 31 07:46:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/356577153' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:46:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:46:25 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2705593188' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.936 226833 DEBUG oslo_concurrency.processutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.938 226833 DEBUG nova.virt.libvirt.vif [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:46:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-216539408',display_name='tempest-DeleteServersTestJSON-server-216539408',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-216539408',id=61,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f299640bb1f64e5fa12b23955e5a2127',ramdisk_id='',reservation_id='r-mlh1ptlq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2031597701',owner_user_name='tempest-DeleteServersTestJSON-203
1597701-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:46:18Z,user_data=None,user_id='97abab8eb79247cd89fb2ebff295b890',uuid=070c0648-719f-49ef-9721-ce18b2d03fd3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f67f1507-621e-4954-99cd-71bf5dd626d8", "address": "fa:16:3e:2a:0b:11", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67f1507-62", "ovs_interfaceid": "f67f1507-621e-4954-99cd-71bf5dd626d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.939 226833 DEBUG nova.network.os_vif_util [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converting VIF {"id": "f67f1507-621e-4954-99cd-71bf5dd626d8", "address": "fa:16:3e:2a:0b:11", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67f1507-62", "ovs_interfaceid": "f67f1507-621e-4954-99cd-71bf5dd626d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.940 226833 DEBUG nova.network.os_vif_util [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:0b:11,bridge_name='br-int',has_traffic_filtering=True,id=f67f1507-621e-4954-99cd-71bf5dd626d8,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf67f1507-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.942 226833 DEBUG nova.objects.instance [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'pci_devices' on Instance uuid 070c0648-719f-49ef-9721-ce18b2d03fd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.962 226833 DEBUG nova.virt.libvirt.driver [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:46:25 compute-2 nova_compute[226829]:   <uuid>070c0648-719f-49ef-9721-ce18b2d03fd3</uuid>
Jan 31 07:46:25 compute-2 nova_compute[226829]:   <name>instance-0000003d</name>
Jan 31 07:46:25 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:46:25 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:46:25 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <nova:name>tempest-DeleteServersTestJSON-server-216539408</nova:name>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:46:25</nova:creationTime>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:46:25 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:46:25 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:46:25 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:46:25 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:46:25 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:46:25 compute-2 nova_compute[226829]:         <nova:user uuid="97abab8eb79247cd89fb2ebff295b890">tempest-DeleteServersTestJSON-2031597701-project-member</nova:user>
Jan 31 07:46:25 compute-2 nova_compute[226829]:         <nova:project uuid="f299640bb1f64e5fa12b23955e5a2127">tempest-DeleteServersTestJSON-2031597701</nova:project>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 07:46:25 compute-2 nova_compute[226829]:         <nova:port uuid="f67f1507-621e-4954-99cd-71bf5dd626d8">
Jan 31 07:46:25 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:46:25 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:46:25 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <system>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <entry name="serial">070c0648-719f-49ef-9721-ce18b2d03fd3</entry>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <entry name="uuid">070c0648-719f-49ef-9721-ce18b2d03fd3</entry>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     </system>
Jan 31 07:46:25 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:46:25 compute-2 nova_compute[226829]:   <os>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:   </os>
Jan 31 07:46:25 compute-2 nova_compute[226829]:   <features>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:   </features>
Jan 31 07:46:25 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:46:25 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:46:25 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/070c0648-719f-49ef-9721-ce18b2d03fd3_disk">
Jan 31 07:46:25 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       </source>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:46:25 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/070c0648-719f-49ef-9721-ce18b2d03fd3_disk.config">
Jan 31 07:46:25 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       </source>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:46:25 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:2a:0b:11"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <target dev="tapf67f1507-62"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/070c0648-719f-49ef-9721-ce18b2d03fd3/console.log" append="off"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <video>
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     </video>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:46:25 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:46:25 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:46:25 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:46:25 compute-2 nova_compute[226829]: </domain>
Jan 31 07:46:25 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.964 226833 DEBUG nova.compute.manager [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Preparing to wait for external event network-vif-plugged-f67f1507-621e-4954-99cd-71bf5dd626d8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.964 226833 DEBUG oslo_concurrency.lockutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "070c0648-719f-49ef-9721-ce18b2d03fd3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.964 226833 DEBUG oslo_concurrency.lockutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "070c0648-719f-49ef-9721-ce18b2d03fd3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.965 226833 DEBUG oslo_concurrency.lockutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "070c0648-719f-49ef-9721-ce18b2d03fd3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.966 226833 DEBUG nova.virt.libvirt.vif [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:46:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-216539408',display_name='tempest-DeleteServersTestJSON-server-216539408',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-216539408',id=61,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f299640bb1f64e5fa12b23955e5a2127',ramdisk_id='',reservation_id='r-mlh1ptlq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2031597701',owner_user_name='tempest-DeleteServersTestJSON-2031597701-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:46:18Z,user_data=None,user_id='97abab8eb79247cd89fb2ebff295b890',uuid=070c0648-719f-49ef-9721-ce18b2d03fd3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f67f1507-621e-4954-99cd-71bf5dd626d8", "address": "fa:16:3e:2a:0b:11", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67f1507-62", "ovs_interfaceid": "f67f1507-621e-4954-99cd-71bf5dd626d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.966 226833 DEBUG nova.network.os_vif_util [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converting VIF {"id": "f67f1507-621e-4954-99cd-71bf5dd626d8", "address": "fa:16:3e:2a:0b:11", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67f1507-62", "ovs_interfaceid": "f67f1507-621e-4954-99cd-71bf5dd626d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.967 226833 DEBUG nova.network.os_vif_util [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:0b:11,bridge_name='br-int',has_traffic_filtering=True,id=f67f1507-621e-4954-99cd-71bf5dd626d8,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf67f1507-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.967 226833 DEBUG os_vif [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:0b:11,bridge_name='br-int',has_traffic_filtering=True,id=f67f1507-621e-4954-99cd-71bf5dd626d8,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf67f1507-62') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.968 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.969 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.969 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.974 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.974 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf67f1507-62, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.975 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf67f1507-62, col_values=(('external_ids', {'iface-id': 'f67f1507-621e-4954-99cd-71bf5dd626d8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2a:0b:11', 'vm-uuid': '070c0648-719f-49ef-9721-ce18b2d03fd3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:46:25 compute-2 NetworkManager[48999]: <info>  [1769845585.9782] manager: (tapf67f1507-62): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/89)
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.979 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.982 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:25 compute-2 nova_compute[226829]: 2026-01-31 07:46:25.983 226833 INFO os_vif [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:0b:11,bridge_name='br-int',has_traffic_filtering=True,id=f67f1507-621e-4954-99cd-71bf5dd626d8,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf67f1507-62')
Jan 31 07:46:26 compute-2 nova_compute[226829]: 2026-01-31 07:46:26.084 226833 DEBUG nova.virt.libvirt.driver [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:46:26 compute-2 nova_compute[226829]: 2026-01-31 07:46:26.085 226833 DEBUG nova.virt.libvirt.driver [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:46:26 compute-2 nova_compute[226829]: 2026-01-31 07:46:26.085 226833 DEBUG nova.virt.libvirt.driver [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No VIF found with MAC fa:16:3e:2a:0b:11, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:46:26 compute-2 nova_compute[226829]: 2026-01-31 07:46:26.085 226833 INFO nova.virt.libvirt.driver [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Using config drive
Jan 31 07:46:26 compute-2 nova_compute[226829]: 2026-01-31 07:46:26.117 226833 DEBUG nova.storage.rbd_utils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 070c0648-719f-49ef-9721-ce18b2d03fd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:46:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:26.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2705593188' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:46:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:27.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:27 compute-2 nova_compute[226829]: 2026-01-31 07:46:27.357 226833 INFO nova.virt.libvirt.driver [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Creating config drive at /var/lib/nova/instances/070c0648-719f-49ef-9721-ce18b2d03fd3/disk.config
Jan 31 07:46:27 compute-2 nova_compute[226829]: 2026-01-31 07:46:27.364 226833 DEBUG oslo_concurrency.processutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/070c0648-719f-49ef-9721-ce18b2d03fd3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp7_baucbi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:46:27 compute-2 nova_compute[226829]: 2026-01-31 07:46:27.500 226833 DEBUG oslo_concurrency.processutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/070c0648-719f-49ef-9721-ce18b2d03fd3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp7_baucbi" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:46:27 compute-2 nova_compute[226829]: 2026-01-31 07:46:27.531 226833 DEBUG nova.storage.rbd_utils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 070c0648-719f-49ef-9721-ce18b2d03fd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:46:27 compute-2 nova_compute[226829]: 2026-01-31 07:46:27.534 226833 DEBUG oslo_concurrency.processutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/070c0648-719f-49ef-9721-ce18b2d03fd3/disk.config 070c0648-719f-49ef-9721-ce18b2d03fd3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:46:27 compute-2 nova_compute[226829]: 2026-01-31 07:46:27.737 226833 DEBUG oslo_concurrency.processutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/070c0648-719f-49ef-9721-ce18b2d03fd3/disk.config 070c0648-719f-49ef-9721-ce18b2d03fd3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:46:27 compute-2 nova_compute[226829]: 2026-01-31 07:46:27.737 226833 INFO nova.virt.libvirt.driver [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Deleting local config drive /var/lib/nova/instances/070c0648-719f-49ef-9721-ce18b2d03fd3/disk.config because it was imported into RBD.
Jan 31 07:46:27 compute-2 kernel: tapf67f1507-62: entered promiscuous mode
Jan 31 07:46:27 compute-2 NetworkManager[48999]: <info>  [1769845587.7816] manager: (tapf67f1507-62): new Tun device (/org/freedesktop/NetworkManager/Devices/90)
Jan 31 07:46:27 compute-2 ovn_controller[133834]: 2026-01-31T07:46:27Z|00155|binding|INFO|Claiming lport f67f1507-621e-4954-99cd-71bf5dd626d8 for this chassis.
Jan 31 07:46:27 compute-2 ovn_controller[133834]: 2026-01-31T07:46:27Z|00156|binding|INFO|f67f1507-621e-4954-99cd-71bf5dd626d8: Claiming fa:16:3e:2a:0b:11 10.100.0.3
Jan 31 07:46:27 compute-2 nova_compute[226829]: 2026-01-31 07:46:27.783 226833 DEBUG nova.network.neutron [req-2bfa415a-bc15-4530-857a-08865e177b81 req-6dd9129b-1236-493f-889c-5b74e88367c3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Updated VIF entry in instance network info cache for port f67f1507-621e-4954-99cd-71bf5dd626d8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:46:27 compute-2 nova_compute[226829]: 2026-01-31 07:46:27.784 226833 DEBUG nova.network.neutron [req-2bfa415a-bc15-4530-857a-08865e177b81 req-6dd9129b-1236-493f-889c-5b74e88367c3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Updating instance_info_cache with network_info: [{"id": "f67f1507-621e-4954-99cd-71bf5dd626d8", "address": "fa:16:3e:2a:0b:11", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67f1507-62", "ovs_interfaceid": "f67f1507-621e-4954-99cd-71bf5dd626d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:46:27 compute-2 nova_compute[226829]: 2026-01-31 07:46:27.787 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:27 compute-2 ovn_controller[133834]: 2026-01-31T07:46:27Z|00157|binding|INFO|Setting lport f67f1507-621e-4954-99cd-71bf5dd626d8 ovn-installed in OVS
Jan 31 07:46:27 compute-2 nova_compute[226829]: 2026-01-31 07:46:27.864 226833 DEBUG oslo_concurrency.lockutils [req-2bfa415a-bc15-4530-857a-08865e177b81 req-6dd9129b-1236-493f-889c-5b74e88367c3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-070c0648-719f-49ef-9721-ce18b2d03fd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:46:27 compute-2 nova_compute[226829]: 2026-01-31 07:46:27.865 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:27 compute-2 ovn_controller[133834]: 2026-01-31T07:46:27Z|00158|binding|INFO|Setting lport f67f1507-621e-4954-99cd-71bf5dd626d8 up in Southbound
Jan 31 07:46:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:27.867 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:0b:11 10.100.0.3'], port_security=['fa:16:3e:2a:0b:11 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '070c0648-719f-49ef-9721-ce18b2d03fd3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60244e92-1524-47f0-8207-05d0104afa47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f299640bb1f64e5fa12b23955e5a2127', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0f055dcb-9af1-4cf7-aa74-c819da93756e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70b36ac0-62cb-4e29-924b-93c5ad906bc9, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=f67f1507-621e-4954-99cd-71bf5dd626d8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:46:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:27.871 143841 INFO neutron.agent.ovn.metadata.agent [-] Port f67f1507-621e-4954-99cd-71bf5dd626d8 in datapath 60244e92-1524-47f0-8207-05d0104afa47 bound to our chassis
Jan 31 07:46:27 compute-2 systemd-machined[195142]: New machine qemu-25-instance-0000003d.
Jan 31 07:46:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:27.876 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60244e92-1524-47f0-8207-05d0104afa47
Jan 31 07:46:27 compute-2 systemd[1]: Started Virtual Machine qemu-25-instance-0000003d.
Jan 31 07:46:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:27.886 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ab0368ca-e78d-4910-b50c-98a921333e3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:27.887 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap60244e92-11 in ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:46:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:27.889 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap60244e92-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:46:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:27.890 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1ae5d20c-ba01-4685-a67b-b154c436b03e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:27 compute-2 systemd-udevd[252106]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:46:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:27.892 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[886103dd-3f14-4deb-a0fc-78da826e51a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:27 compute-2 NetworkManager[48999]: <info>  [1769845587.9001] device (tapf67f1507-62): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:46:27 compute-2 NetworkManager[48999]: <info>  [1769845587.9007] device (tapf67f1507-62): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:46:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:27.904 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[8592634c-1731-499b-907e-8b8febae7cfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:27.914 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e8e1c955-490e-4da9-8796-67f0242ce03a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:27.935 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[e89849a1-d869-42a4-952c-974828c0eade]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:27 compute-2 NetworkManager[48999]: <info>  [1769845587.9428] manager: (tap60244e92-10): new Veth device (/org/freedesktop/NetworkManager/Devices/91)
Jan 31 07:46:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:27.941 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9d4ea633-fd82-4256-93c1-d483276e1d3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:27.967 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[eac1d66c-c140-4ba1-9459-63dd8ba1c1d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:27.970 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[6293b487-b8bc-4e9b-8608-096d00cfdf40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:27 compute-2 NetworkManager[48999]: <info>  [1769845587.9861] device (tap60244e92-10): carrier: link connected
Jan 31 07:46:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:27.989 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[6b19b077-befd-4540-93d9-b23760237ac4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:28.000 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[64165507-d8fe-4961-883a-512d525dc00c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60244e92-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:59:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584045, 'reachable_time': 43779, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252138, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:28.010 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e141e361-d6ce-49b5-9aeb-025c1b1e8acb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4b:59f9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 584045, 'tstamp': 584045}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252139, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:28.031 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[423c42d1-67a9-425f-b3a3-d504cfd607e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60244e92-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:59:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584045, 'reachable_time': 43779, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252140, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:28.052 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[da26d496-7c53-4d9c-af33-8b635d488fda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:28.092 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a3387625-df44-4c9d-a112-2d99652058e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:28.094 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60244e92-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:28.094 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:28.094 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60244e92-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:46:28 compute-2 nova_compute[226829]: 2026-01-31 07:46:28.096 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:28 compute-2 kernel: tap60244e92-10: entered promiscuous mode
Jan 31 07:46:28 compute-2 NetworkManager[48999]: <info>  [1769845588.0973] manager: (tap60244e92-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Jan 31 07:46:28 compute-2 nova_compute[226829]: 2026-01-31 07:46:28.102 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:28.103 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60244e92-10, col_values=(('external_ids', {'iface-id': '4e20d708-9f46-41f3-86c0-9ab3849f8392'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:46:28 compute-2 nova_compute[226829]: 2026-01-31 07:46:28.104 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:28 compute-2 ovn_controller[133834]: 2026-01-31T07:46:28Z|00159|binding|INFO|Releasing lport 4e20d708-9f46-41f3-86c0-9ab3849f8392 from this chassis (sb_readonly=0)
Jan 31 07:46:28 compute-2 nova_compute[226829]: 2026-01-31 07:46:28.105 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:28.106 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/60244e92-1524-47f0-8207-05d0104afa47.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/60244e92-1524-47f0-8207-05d0104afa47.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:28.107 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d417b4bc-7a99-440a-8cee-c9e2836bd7e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:28.107 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]: global
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-60244e92-1524-47f0-8207-05d0104afa47
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/60244e92-1524-47f0-8207-05d0104afa47.pid.haproxy
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 60244e92-1524-47f0-8207-05d0104afa47
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:46:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:28.108 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'env', 'PROCESS_TAG=haproxy-60244e92-1524-47f0-8207-05d0104afa47', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/60244e92-1524-47f0-8207-05d0104afa47.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:46:28 compute-2 nova_compute[226829]: 2026-01-31 07:46:28.114 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:28 compute-2 ceph-mon[77282]: pgmap v1441: 305 pgs: 305 active+clean; 213 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 55 KiB/s rd, 3.6 MiB/s wr, 84 op/s
Jan 31 07:46:28 compute-2 podman[252213]: 2026-01-31 07:46:28.441566955 +0000 UTC m=+0.053932393 container create 6bc6c70c39a40ae7e002eb1b38b6b80c4b555be05136a0fadc3b23f3e2137e7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 07:46:28 compute-2 systemd[1]: Started libpod-conmon-6bc6c70c39a40ae7e002eb1b38b6b80c4b555be05136a0fadc3b23f3e2137e7e.scope.
Jan 31 07:46:28 compute-2 nova_compute[226829]: 2026-01-31 07:46:28.492 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845588.4908803, 070c0648-719f-49ef-9721-ce18b2d03fd3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:46:28 compute-2 nova_compute[226829]: 2026-01-31 07:46:28.494 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] VM Started (Lifecycle Event)
Jan 31 07:46:28 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:46:28 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f9a45ab794e130d16bcc148c587d9353a135e33e75d97852ce3c0bcef57a5ac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:46:28 compute-2 podman[252213]: 2026-01-31 07:46:28.412386333 +0000 UTC m=+0.024751751 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:46:28 compute-2 podman[252213]: 2026-01-31 07:46:28.515708493 +0000 UTC m=+0.128073981 container init 6bc6c70c39a40ae7e002eb1b38b6b80c4b555be05136a0fadc3b23f3e2137e7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 07:46:28 compute-2 podman[252213]: 2026-01-31 07:46:28.52153437 +0000 UTC m=+0.133899778 container start 6bc6c70c39a40ae7e002eb1b38b6b80c4b555be05136a0fadc3b23f3e2137e7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:46:28 compute-2 nova_compute[226829]: 2026-01-31 07:46:28.543 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:46:28 compute-2 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[252229]: [NOTICE]   (252233) : New worker (252235) forked
Jan 31 07:46:28 compute-2 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[252229]: [NOTICE]   (252233) : Loading success.
Jan 31 07:46:28 compute-2 nova_compute[226829]: 2026-01-31 07:46:28.549 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845588.4912508, 070c0648-719f-49ef-9721-ce18b2d03fd3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:46:28 compute-2 nova_compute[226829]: 2026-01-31 07:46:28.550 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] VM Paused (Lifecycle Event)
Jan 31 07:46:28 compute-2 nova_compute[226829]: 2026-01-31 07:46:28.574 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:46:28 compute-2 nova_compute[226829]: 2026-01-31 07:46:28.580 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:46:28 compute-2 nova_compute[226829]: 2026-01-31 07:46:28.607 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:46:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:28.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:46:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:29.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.248 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.323 226833 DEBUG nova.compute.manager [req-e1761246-7740-47db-8c1a-fb8cf5bdb379 req-6ad833a5-0e4a-45fb-865d-8d97a427785d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Received event network-vif-plugged-f67f1507-621e-4954-99cd-71bf5dd626d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.324 226833 DEBUG oslo_concurrency.lockutils [req-e1761246-7740-47db-8c1a-fb8cf5bdb379 req-6ad833a5-0e4a-45fb-865d-8d97a427785d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "070c0648-719f-49ef-9721-ce18b2d03fd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.324 226833 DEBUG oslo_concurrency.lockutils [req-e1761246-7740-47db-8c1a-fb8cf5bdb379 req-6ad833a5-0e4a-45fb-865d-8d97a427785d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "070c0648-719f-49ef-9721-ce18b2d03fd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.324 226833 DEBUG oslo_concurrency.lockutils [req-e1761246-7740-47db-8c1a-fb8cf5bdb379 req-6ad833a5-0e4a-45fb-865d-8d97a427785d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "070c0648-719f-49ef-9721-ce18b2d03fd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.325 226833 DEBUG nova.compute.manager [req-e1761246-7740-47db-8c1a-fb8cf5bdb379 req-6ad833a5-0e4a-45fb-865d-8d97a427785d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Processing event network-vif-plugged-f67f1507-621e-4954-99cd-71bf5dd626d8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.325 226833 DEBUG nova.compute.manager [req-e1761246-7740-47db-8c1a-fb8cf5bdb379 req-6ad833a5-0e4a-45fb-865d-8d97a427785d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Received event network-vif-plugged-f67f1507-621e-4954-99cd-71bf5dd626d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.325 226833 DEBUG oslo_concurrency.lockutils [req-e1761246-7740-47db-8c1a-fb8cf5bdb379 req-6ad833a5-0e4a-45fb-865d-8d97a427785d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "070c0648-719f-49ef-9721-ce18b2d03fd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.325 226833 DEBUG oslo_concurrency.lockutils [req-e1761246-7740-47db-8c1a-fb8cf5bdb379 req-6ad833a5-0e4a-45fb-865d-8d97a427785d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "070c0648-719f-49ef-9721-ce18b2d03fd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.326 226833 DEBUG oslo_concurrency.lockutils [req-e1761246-7740-47db-8c1a-fb8cf5bdb379 req-6ad833a5-0e4a-45fb-865d-8d97a427785d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "070c0648-719f-49ef-9721-ce18b2d03fd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.326 226833 DEBUG nova.compute.manager [req-e1761246-7740-47db-8c1a-fb8cf5bdb379 req-6ad833a5-0e4a-45fb-865d-8d97a427785d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] No waiting events found dispatching network-vif-plugged-f67f1507-621e-4954-99cd-71bf5dd626d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.326 226833 WARNING nova.compute.manager [req-e1761246-7740-47db-8c1a-fb8cf5bdb379 req-6ad833a5-0e4a-45fb-865d-8d97a427785d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Received unexpected event network-vif-plugged-f67f1507-621e-4954-99cd-71bf5dd626d8 for instance with vm_state building and task_state spawning.
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.327 226833 DEBUG nova.compute.manager [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.332 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845589.3318105, 070c0648-719f-49ef-9721-ce18b2d03fd3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.332 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] VM Resumed (Lifecycle Event)
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.334 226833 DEBUG nova.virt.libvirt.driver [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.338 226833 INFO nova.virt.libvirt.driver [-] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Instance spawned successfully.
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.338 226833 DEBUG nova.virt.libvirt.driver [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:46:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2055626247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.438 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.442 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.452 226833 DEBUG nova.virt.libvirt.driver [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.452 226833 DEBUG nova.virt.libvirt.driver [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.453 226833 DEBUG nova.virt.libvirt.driver [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.453 226833 DEBUG nova.virt.libvirt.driver [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.454 226833 DEBUG nova.virt.libvirt.driver [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.454 226833 DEBUG nova.virt.libvirt.driver [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.617 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.743 226833 INFO nova.compute.manager [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Took 11.00 seconds to spawn the instance on the hypervisor.
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.744 226833 DEBUG nova.compute.manager [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:46:29 compute-2 nova_compute[226829]: 2026-01-31 07:46:29.889 226833 INFO nova.compute.manager [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Took 12.30 seconds to build instance.
Jan 31 07:46:30 compute-2 nova_compute[226829]: 2026-01-31 07:46:30.001 226833 DEBUG oslo_concurrency.lockutils [None req-9ec9e001-b929-4381-a467-fb0443e7c6ed 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "070c0648-719f-49ef-9721-ce18b2d03fd3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.518s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:30 compute-2 ceph-mon[77282]: pgmap v1442: 305 pgs: 305 active+clean; 195 MiB data, 631 MiB used, 20 GiB / 21 GiB avail; 69 KiB/s rd, 2.5 MiB/s wr, 99 op/s
Jan 31 07:46:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:30.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:30 compute-2 nova_compute[226829]: 2026-01-31 07:46:30.978 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:31.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/50917927' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:46:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/50917927' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:46:32 compute-2 nova_compute[226829]: 2026-01-31 07:46:32.616 226833 DEBUG oslo_concurrency.lockutils [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "070c0648-719f-49ef-9721-ce18b2d03fd3" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:32 compute-2 nova_compute[226829]: 2026-01-31 07:46:32.618 226833 DEBUG oslo_concurrency.lockutils [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "070c0648-719f-49ef-9721-ce18b2d03fd3" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:32 compute-2 nova_compute[226829]: 2026-01-31 07:46:32.619 226833 INFO nova.compute.manager [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Shelving
Jan 31 07:46:32 compute-2 nova_compute[226829]: 2026-01-31 07:46:32.668 226833 DEBUG nova.virt.libvirt.driver [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 07:46:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:46:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:32.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:46:32 compute-2 ceph-mon[77282]: pgmap v1443: 305 pgs: 305 active+clean; 134 MiB data, 596 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 181 op/s
Jan 31 07:46:32 compute-2 sudo[252247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:46:32 compute-2 sudo[252247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:32 compute-2 sudo[252247]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:33 compute-2 sudo[252272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:46:33 compute-2 sudo[252272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:33 compute-2 sudo[252272]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:33.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:46:34 compute-2 nova_compute[226829]: 2026-01-31 07:46:34.249 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:34 compute-2 ceph-mon[77282]: pgmap v1444: 305 pgs: 305 active+clean; 134 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 423 KiB/s wr, 192 op/s
Jan 31 07:46:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:46:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:34.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:46:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:35.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:35 compute-2 sudo[252299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:46:35 compute-2 sudo[252299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:35 compute-2 sudo[252299]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:35 compute-2 sudo[252324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:46:35 compute-2 sudo[252324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:35 compute-2 sudo[252324]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:35 compute-2 ovn_controller[133834]: 2026-01-31T07:46:35Z|00160|binding|INFO|Releasing lport 4e20d708-9f46-41f3-86c0-9ab3849f8392 from this chassis (sb_readonly=0)
Jan 31 07:46:35 compute-2 nova_compute[226829]: 2026-01-31 07:46:35.891 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:35 compute-2 sudo[252349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:46:35 compute-2 sudo[252349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:35 compute-2 sudo[252349]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:35 compute-2 nova_compute[226829]: 2026-01-31 07:46:35.980 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:35 compute-2 ovn_controller[133834]: 2026-01-31T07:46:35Z|00161|binding|INFO|Releasing lport 4e20d708-9f46-41f3-86c0-9ab3849f8392 from this chassis (sb_readonly=0)
Jan 31 07:46:35 compute-2 nova_compute[226829]: 2026-01-31 07:46:35.996 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:36 compute-2 sudo[252374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:46:36 compute-2 sudo[252374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:36 compute-2 ceph-mon[77282]: pgmap v1445: 305 pgs: 305 active+clean; 134 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 26 KiB/s wr, 180 op/s
Jan 31 07:46:36 compute-2 sudo[252374]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:36.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:46:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:37.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:46:37 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:46:37 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:46:37 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:46:37 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:46:37 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:46:37 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:46:37 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/713672979' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:46:37 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/713672979' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:46:38 compute-2 ceph-mon[77282]: pgmap v1446: 305 pgs: 305 active+clean; 102 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 28 KiB/s wr, 216 op/s
Jan 31 07:46:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:38.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:46:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:46:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:39.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:46:39 compute-2 nova_compute[226829]: 2026-01-31 07:46:39.252 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:40 compute-2 ceph-mon[77282]: pgmap v1447: 305 pgs: 305 active+clean; 88 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 15 KiB/s wr, 218 op/s
Jan 31 07:46:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:46:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:40.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:46:41 compute-2 nova_compute[226829]: 2026-01-31 07:46:41.003 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:46:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:41.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:46:42 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1379675515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:42 compute-2 ceph-mon[77282]: pgmap v1448: 305 pgs: 305 active+clean; 88 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 15 KiB/s wr, 193 op/s
Jan 31 07:46:42 compute-2 nova_compute[226829]: 2026-01-31 07:46:42.724 226833 DEBUG nova.virt.libvirt.driver [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 31 07:46:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:42.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:46:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:43.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:46:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:46:44 compute-2 nova_compute[226829]: 2026-01-31 07:46:44.255 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:44 compute-2 ceph-mon[77282]: pgmap v1449: 305 pgs: 305 active+clean; 88 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 148 KiB/s wr, 100 op/s
Jan 31 07:46:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:46:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:44.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:46:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:46:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:45.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:46:45 compute-2 podman[252436]: 2026-01-31 07:46:45.275571388 +0000 UTC m=+0.134814843 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 07:46:45 compute-2 ceph-mon[77282]: pgmap v1450: 305 pgs: 305 active+clean; 88 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 558 KiB/s rd, 148 KiB/s wr, 54 op/s
Jan 31 07:46:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/861024559' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:46:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/861024559' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:46:46 compute-2 nova_compute[226829]: 2026-01-31 07:46:46.008 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:46 compute-2 sudo[252464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:46:46 compute-2 sudo[252464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:46 compute-2 sudo[252464]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:46:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:46.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:46:46 compute-2 sudo[252489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:46:46 compute-2 sudo[252489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:46 compute-2 sudo[252489]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:47.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:47 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:46:47 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:46:48 compute-2 ovn_controller[133834]: 2026-01-31T07:46:48Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2a:0b:11 10.100.0.3
Jan 31 07:46:48 compute-2 ovn_controller[133834]: 2026-01-31T07:46:48Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2a:0b:11 10.100.0.3
Jan 31 07:46:48 compute-2 ceph-mon[77282]: pgmap v1451: 305 pgs: 305 active+clean; 108 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 585 KiB/s rd, 1.9 MiB/s wr, 73 op/s
Jan 31 07:46:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:48.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:46:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:49.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:49 compute-2 nova_compute[226829]: 2026-01-31 07:46:49.258 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:50 compute-2 ceph-mon[77282]: pgmap v1452: 305 pgs: 305 active+clean; 109 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 95 KiB/s rd, 2.0 MiB/s wr, 42 op/s
Jan 31 07:46:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:46:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:50.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:46:51 compute-2 nova_compute[226829]: 2026-01-31 07:46:51.013 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:46:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:51.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:46:52 compute-2 ceph-mon[77282]: pgmap v1453: 305 pgs: 305 active+clean; 121 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Jan 31 07:46:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:52.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:53.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:53 compute-2 sudo[252517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:46:53 compute-2 sudo[252517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:53 compute-2 sudo[252517]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:53 compute-2 sudo[252542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:46:53 compute-2 sudo[252542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:53 compute-2 sudo[252542]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:53 compute-2 nova_compute[226829]: 2026-01-31 07:46:53.782 226833 DEBUG nova.virt.libvirt.driver [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 31 07:46:53 compute-2 ceph-mon[77282]: pgmap v1454: 305 pgs: 305 active+clean; 121 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 293 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 31 07:46:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:46:54 compute-2 podman[252568]: 2026-01-31 07:46:54.20007855 +0000 UTC m=+0.076160005 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:46:54 compute-2 nova_compute[226829]: 2026-01-31 07:46:54.261 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:46:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:54.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:46:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:55.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:56 compute-2 ceph-mon[77282]: pgmap v1455: 305 pgs: 305 active+clean; 121 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 292 KiB/s rd, 2.0 MiB/s wr, 56 op/s
Jan 31 07:46:56 compute-2 nova_compute[226829]: 2026-01-31 07:46:56.016 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:56 compute-2 nova_compute[226829]: 2026-01-31 07:46:56.796 226833 INFO nova.virt.libvirt.driver [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Instance shutdown successfully after 24 seconds.
Jan 31 07:46:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:46:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:56.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:46:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:46:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:57.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:46:57 compute-2 kernel: tapf67f1507-62 (unregistering): left promiscuous mode
Jan 31 07:46:57 compute-2 nova_compute[226829]: 2026-01-31 07:46:57.514 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:57 compute-2 NetworkManager[48999]: <info>  [1769845617.5163] device (tapf67f1507-62): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:46:57 compute-2 ovn_controller[133834]: 2026-01-31T07:46:57Z|00162|binding|INFO|Releasing lport f67f1507-621e-4954-99cd-71bf5dd626d8 from this chassis (sb_readonly=0)
Jan 31 07:46:57 compute-2 ovn_controller[133834]: 2026-01-31T07:46:57Z|00163|binding|INFO|Setting lport f67f1507-621e-4954-99cd-71bf5dd626d8 down in Southbound
Jan 31 07:46:57 compute-2 nova_compute[226829]: 2026-01-31 07:46:57.525 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:57 compute-2 ovn_controller[133834]: 2026-01-31T07:46:57Z|00164|binding|INFO|Removing iface tapf67f1507-62 ovn-installed in OVS
Jan 31 07:46:57 compute-2 nova_compute[226829]: 2026-01-31 07:46:57.528 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:57 compute-2 nova_compute[226829]: 2026-01-31 07:46:57.536 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:57 compute-2 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Jan 31 07:46:57 compute-2 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d0000003d.scope: Consumed 14.497s CPU time.
Jan 31 07:46:57 compute-2 systemd-machined[195142]: Machine qemu-25-instance-0000003d terminated.
Jan 31 07:46:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:57.577 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:0b:11 10.100.0.3'], port_security=['fa:16:3e:2a:0b:11 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '070c0648-719f-49ef-9721-ce18b2d03fd3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60244e92-1524-47f0-8207-05d0104afa47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f299640bb1f64e5fa12b23955e5a2127', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0f055dcb-9af1-4cf7-aa74-c819da93756e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70b36ac0-62cb-4e29-924b-93c5ad906bc9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=f67f1507-621e-4954-99cd-71bf5dd626d8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:46:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:57.580 143841 INFO neutron.agent.ovn.metadata.agent [-] Port f67f1507-621e-4954-99cd-71bf5dd626d8 in datapath 60244e92-1524-47f0-8207-05d0104afa47 unbound from our chassis
Jan 31 07:46:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:57.582 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 60244e92-1524-47f0-8207-05d0104afa47, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:46:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:57.585 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7e31e62b-bb9c-4e80-bd44-034162563c3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:57.586 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 namespace which is not needed anymore
Jan 31 07:46:57 compute-2 nova_compute[226829]: 2026-01-31 07:46:57.625 226833 INFO nova.virt.libvirt.driver [-] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Instance destroyed successfully.
Jan 31 07:46:57 compute-2 nova_compute[226829]: 2026-01-31 07:46:57.626 226833 DEBUG nova.objects.instance [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'numa_topology' on Instance uuid 070c0648-719f-49ef-9721-ce18b2d03fd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:46:57 compute-2 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[252229]: [NOTICE]   (252233) : haproxy version is 2.8.14-c23fe91
Jan 31 07:46:57 compute-2 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[252229]: [NOTICE]   (252233) : path to executable is /usr/sbin/haproxy
Jan 31 07:46:57 compute-2 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[252229]: [WARNING]  (252233) : Exiting Master process...
Jan 31 07:46:57 compute-2 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[252229]: [ALERT]    (252233) : Current worker (252235) exited with code 143 (Terminated)
Jan 31 07:46:57 compute-2 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[252229]: [WARNING]  (252233) : All workers exited. Exiting... (0)
Jan 31 07:46:57 compute-2 systemd[1]: libpod-6bc6c70c39a40ae7e002eb1b38b6b80c4b555be05136a0fadc3b23f3e2137e7e.scope: Deactivated successfully.
Jan 31 07:46:57 compute-2 podman[252625]: 2026-01-31 07:46:57.783806785 +0000 UTC m=+0.117799742 container died 6bc6c70c39a40ae7e002eb1b38b6b80c4b555be05136a0fadc3b23f3e2137e7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:46:58 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6bc6c70c39a40ae7e002eb1b38b6b80c4b555be05136a0fadc3b23f3e2137e7e-userdata-shm.mount: Deactivated successfully.
Jan 31 07:46:58 compute-2 systemd[1]: var-lib-containers-storage-overlay-1f9a45ab794e130d16bcc148c587d9353a135e33e75d97852ce3c0bcef57a5ac-merged.mount: Deactivated successfully.
Jan 31 07:46:58 compute-2 ceph-mon[77282]: pgmap v1456: 305 pgs: 305 active+clean; 121 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 293 KiB/s rd, 2.0 MiB/s wr, 58 op/s
Jan 31 07:46:58 compute-2 podman[252625]: 2026-01-31 07:46:58.460167567 +0000 UTC m=+0.794160514 container cleanup 6bc6c70c39a40ae7e002eb1b38b6b80c4b555be05136a0fadc3b23f3e2137e7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:46:58 compute-2 systemd[1]: libpod-conmon-6bc6c70c39a40ae7e002eb1b38b6b80c4b555be05136a0fadc3b23f3e2137e7e.scope: Deactivated successfully.
Jan 31 07:46:58 compute-2 podman[252656]: 2026-01-31 07:46:58.846283298 +0000 UTC m=+0.365430301 container remove 6bc6c70c39a40ae7e002eb1b38b6b80c4b555be05136a0fadc3b23f3e2137e7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Jan 31 07:46:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:46:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:58.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:46:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:58.851 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[50df7ee9-1eeb-4f6b-9547-9333e596aa66]: (4, ('Sat Jan 31 07:46:57 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 (6bc6c70c39a40ae7e002eb1b38b6b80c4b555be05136a0fadc3b23f3e2137e7e)\n6bc6c70c39a40ae7e002eb1b38b6b80c4b555be05136a0fadc3b23f3e2137e7e\nSat Jan 31 07:46:58 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 (6bc6c70c39a40ae7e002eb1b38b6b80c4b555be05136a0fadc3b23f3e2137e7e)\n6bc6c70c39a40ae7e002eb1b38b6b80c4b555be05136a0fadc3b23f3e2137e7e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:58.854 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cdbd474c-36b0-4a09-87e5-3f01bd4f67fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:58.856 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60244e92-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:46:58 compute-2 nova_compute[226829]: 2026-01-31 07:46:58.859 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:58 compute-2 kernel: tap60244e92-10: left promiscuous mode
Jan 31 07:46:58 compute-2 nova_compute[226829]: 2026-01-31 07:46:58.874 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:58.879 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9fb68d72-bfe0-4a64-83ae-c1f76d030a23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:58.903 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4b7cb797-6173-476c-b6bf-d664bb8a0d7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:58.904 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4502a22f-9b2f-438d-bbb2-9086d0a8a0ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:58.923 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d03136e3-0692-4767-b839-70d5fc641b4a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584040, 'reachable_time': 24207, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252676, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:58 compute-2 systemd[1]: run-netns-ovnmeta\x2d60244e92\x2d1524\x2d47f0\x2d8207\x2d05d0104afa47.mount: Deactivated successfully.
Jan 31 07:46:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:58.932 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:46:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:46:58.932 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[f1eddd75-42bf-4f70-8aa6-b70707959289]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:46:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:46:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:46:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:59.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:46:59 compute-2 nova_compute[226829]: 2026-01-31 07:46:59.263 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:46:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6728 writes, 34K keys, 6728 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s
                                           Cumulative WAL: 6727 writes, 6727 syncs, 1.00 writes per sync, written: 0.07 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1661 writes, 8298 keys, 1661 commit groups, 1.0 writes per commit group, ingest: 16.13 MB, 0.03 MB/s
                                           Interval WAL: 1660 writes, 1660 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     64.7      0.63              0.11        18    0.035       0      0       0.0       0.0
                                             L6      1/0    9.90 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.6    131.0    108.3      1.36              0.48        17    0.080     87K   9945       0.0       0.0
                                            Sum      1/0    9.90 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.6     89.5     94.5      1.99              0.59        35    0.057     87K   9945       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.9     54.0     55.5      0.91              0.18         8    0.113     25K   3113       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    131.0    108.3      1.36              0.48        17    0.080     87K   9945       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     64.9      0.63              0.11        17    0.037       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.040, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.18 GB write, 0.08 MB/s write, 0.17 GB read, 0.07 MB/s read, 2.0 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559ef3d1f1f0#2 capacity: 304.00 MB usage: 19.60 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000132 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1141,18.91 MB,6.22037%) FilterBlock(35,250.23 KB,0.0803847%) IndexBlock(35,453.28 KB,0.145611%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 07:46:59 compute-2 nova_compute[226829]: 2026-01-31 07:46:59.738 226833 INFO nova.virt.libvirt.driver [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Beginning cold snapshot process
Jan 31 07:47:00 compute-2 nova_compute[226829]: 2026-01-31 07:47:00.115 226833 DEBUG nova.compute.manager [req-f2495e84-1949-4367-81bb-bc97c4a42a24 req-23a15207-6bbb-441d-9117-29874269559f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Received event network-vif-unplugged-f67f1507-621e-4954-99cd-71bf5dd626d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:47:00 compute-2 nova_compute[226829]: 2026-01-31 07:47:00.116 226833 DEBUG oslo_concurrency.lockutils [req-f2495e84-1949-4367-81bb-bc97c4a42a24 req-23a15207-6bbb-441d-9117-29874269559f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "070c0648-719f-49ef-9721-ce18b2d03fd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:47:00 compute-2 nova_compute[226829]: 2026-01-31 07:47:00.117 226833 DEBUG oslo_concurrency.lockutils [req-f2495e84-1949-4367-81bb-bc97c4a42a24 req-23a15207-6bbb-441d-9117-29874269559f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "070c0648-719f-49ef-9721-ce18b2d03fd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:47:00 compute-2 nova_compute[226829]: 2026-01-31 07:47:00.117 226833 DEBUG oslo_concurrency.lockutils [req-f2495e84-1949-4367-81bb-bc97c4a42a24 req-23a15207-6bbb-441d-9117-29874269559f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "070c0648-719f-49ef-9721-ce18b2d03fd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:47:00 compute-2 nova_compute[226829]: 2026-01-31 07:47:00.118 226833 DEBUG nova.compute.manager [req-f2495e84-1949-4367-81bb-bc97c4a42a24 req-23a15207-6bbb-441d-9117-29874269559f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] No waiting events found dispatching network-vif-unplugged-f67f1507-621e-4954-99cd-71bf5dd626d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:47:00 compute-2 nova_compute[226829]: 2026-01-31 07:47:00.119 226833 WARNING nova.compute.manager [req-f2495e84-1949-4367-81bb-bc97c4a42a24 req-23a15207-6bbb-441d-9117-29874269559f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Received unexpected event network-vif-unplugged-f67f1507-621e-4954-99cd-71bf5dd626d8 for instance with vm_state active and task_state shelving.
Jan 31 07:47:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e216 e216: 3 total, 3 up, 3 in
Jan 31 07:47:00 compute-2 ceph-mon[77282]: pgmap v1457: 305 pgs: 305 active+clean; 121 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 267 KiB/s rd, 211 KiB/s wr, 40 op/s
Jan 31 07:47:00 compute-2 nova_compute[226829]: 2026-01-31 07:47:00.427 226833 DEBUG nova.virt.libvirt.imagebackend [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 31 07:47:00 compute-2 nova_compute[226829]: 2026-01-31 07:47:00.516 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:47:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:47:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:00.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:47:01 compute-2 nova_compute[226829]: 2026-01-31 07:47:01.021 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:47:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:01.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:47:01 compute-2 nova_compute[226829]: 2026-01-31 07:47:01.225 226833 DEBUG nova.storage.rbd_utils [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] creating snapshot(3d332fd47a6344abac7e81fb985adcc0) on rbd image(070c0648-719f-49ef-9721-ce18b2d03fd3_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 07:47:01 compute-2 ceph-mon[77282]: osdmap e216: 3 total, 3 up, 3 in
Jan 31 07:47:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e217 e217: 3 total, 3 up, 3 in
Jan 31 07:47:01 compute-2 nova_compute[226829]: 2026-01-31 07:47:01.506 226833 DEBUG nova.storage.rbd_utils [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] cloning vms/070c0648-719f-49ef-9721-ce18b2d03fd3_disk@3d332fd47a6344abac7e81fb985adcc0 to images/b009eb64-1e75-45c8-99c5-fb9d2b66fdcb clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 07:47:01 compute-2 nova_compute[226829]: 2026-01-31 07:47:01.643 226833 DEBUG nova.storage.rbd_utils [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] flattening images/b009eb64-1e75-45c8-99c5-fb9d2b66fdcb flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 31 07:47:02 compute-2 nova_compute[226829]: 2026-01-31 07:47:02.366 226833 DEBUG nova.compute.manager [req-43827d27-dcd2-49d6-b2da-8310aad1d807 req-7f050107-919f-47b6-a0fc-1223fe48c7c1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Received event network-vif-plugged-f67f1507-621e-4954-99cd-71bf5dd626d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:47:02 compute-2 nova_compute[226829]: 2026-01-31 07:47:02.367 226833 DEBUG oslo_concurrency.lockutils [req-43827d27-dcd2-49d6-b2da-8310aad1d807 req-7f050107-919f-47b6-a0fc-1223fe48c7c1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "070c0648-719f-49ef-9721-ce18b2d03fd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:47:02 compute-2 nova_compute[226829]: 2026-01-31 07:47:02.369 226833 DEBUG oslo_concurrency.lockutils [req-43827d27-dcd2-49d6-b2da-8310aad1d807 req-7f050107-919f-47b6-a0fc-1223fe48c7c1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "070c0648-719f-49ef-9721-ce18b2d03fd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:47:02 compute-2 nova_compute[226829]: 2026-01-31 07:47:02.369 226833 DEBUG oslo_concurrency.lockutils [req-43827d27-dcd2-49d6-b2da-8310aad1d807 req-7f050107-919f-47b6-a0fc-1223fe48c7c1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "070c0648-719f-49ef-9721-ce18b2d03fd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:47:02 compute-2 nova_compute[226829]: 2026-01-31 07:47:02.369 226833 DEBUG nova.compute.manager [req-43827d27-dcd2-49d6-b2da-8310aad1d807 req-7f050107-919f-47b6-a0fc-1223fe48c7c1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] No waiting events found dispatching network-vif-plugged-f67f1507-621e-4954-99cd-71bf5dd626d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:47:02 compute-2 nova_compute[226829]: 2026-01-31 07:47:02.370 226833 WARNING nova.compute.manager [req-43827d27-dcd2-49d6-b2da-8310aad1d807 req-7f050107-919f-47b6-a0fc-1223fe48c7c1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Received unexpected event network-vif-plugged-f67f1507-621e-4954-99cd-71bf5dd626d8 for instance with vm_state active and task_state shelving_image_uploading.
Jan 31 07:47:02 compute-2 nova_compute[226829]: 2026-01-31 07:47:02.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:47:02 compute-2 ceph-mon[77282]: pgmap v1459: 305 pgs: 305 active+clean; 121 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 36 KiB/s wr, 13 op/s
Jan 31 07:47:02 compute-2 ceph-mon[77282]: osdmap e217: 3 total, 3 up, 3 in
Jan 31 07:47:02 compute-2 nova_compute[226829]: 2026-01-31 07:47:02.608 226833 DEBUG nova.storage.rbd_utils [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] removing snapshot(3d332fd47a6344abac7e81fb985adcc0) on rbd image(070c0648-719f-49ef-9721-ce18b2d03fd3_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 07:47:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:02.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:03.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:03 compute-2 nova_compute[226829]: 2026-01-31 07:47:03.540 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:47:03 compute-2 nova_compute[226829]: 2026-01-31 07:47:03.541 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:47:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e218 e218: 3 total, 3 up, 3 in
Jan 31 07:47:03 compute-2 ceph-mon[77282]: pgmap v1461: 305 pgs: 305 active+clean; 145 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 938 KiB/s rd, 1.7 MiB/s wr, 29 op/s
Jan 31 07:47:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/677423198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:47:03 compute-2 nova_compute[226829]: 2026-01-31 07:47:03.646 226833 DEBUG nova.storage.rbd_utils [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] creating snapshot(snap) on rbd image(b009eb64-1e75-45c8-99c5-fb9d2b66fdcb) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 07:47:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:47:04 compute-2 nova_compute[226829]: 2026-01-31 07:47:04.265 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:04 compute-2 nova_compute[226829]: 2026-01-31 07:47:04.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:47:04 compute-2 nova_compute[226829]: 2026-01-31 07:47:04.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:47:04 compute-2 ceph-mon[77282]: osdmap e218: 3 total, 3 up, 3 in
Jan 31 07:47:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/446125117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:47:04 compute-2 nova_compute[226829]: 2026-01-31 07:47:04.697 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:47:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e219 e219: 3 total, 3 up, 3 in
Jan 31 07:47:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:04.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:47:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:05.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:47:05 compute-2 nova_compute[226829]: 2026-01-31 07:47:05.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:47:05 compute-2 nova_compute[226829]: 2026-01-31 07:47:05.539 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:47:05 compute-2 nova_compute[226829]: 2026-01-31 07:47:05.539 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:47:05 compute-2 nova_compute[226829]: 2026-01-31 07:47:05.540 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:47:05 compute-2 nova_compute[226829]: 2026-01-31 07:47:05.540 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:47:05 compute-2 nova_compute[226829]: 2026-01-31 07:47:05.541 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:47:05 compute-2 ceph-mon[77282]: osdmap e219: 3 total, 3 up, 3 in
Jan 31 07:47:05 compute-2 ceph-mon[77282]: pgmap v1464: 305 pgs: 305 active+clean; 145 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.1 MiB/s wr, 41 op/s
Jan 31 07:47:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2829126141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:47:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:47:06 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3416463148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:47:06 compute-2 nova_compute[226829]: 2026-01-31 07:47:06.019 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:47:06 compute-2 nova_compute[226829]: 2026-01-31 07:47:06.025 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:06 compute-2 nova_compute[226829]: 2026-01-31 07:47:06.238 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:47:06 compute-2 nova_compute[226829]: 2026-01-31 07:47:06.238 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:47:06 compute-2 nova_compute[226829]: 2026-01-31 07:47:06.362 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:47:06 compute-2 nova_compute[226829]: 2026-01-31 07:47:06.363 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4595MB free_disk=20.942768096923828GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:47:06 compute-2 nova_compute[226829]: 2026-01-31 07:47:06.363 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:47:06 compute-2 nova_compute[226829]: 2026-01-31 07:47:06.364 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:47:06 compute-2 nova_compute[226829]: 2026-01-31 07:47:06.843 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 070c0648-719f-49ef-9721-ce18b2d03fd3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:47:06 compute-2 nova_compute[226829]: 2026-01-31 07:47:06.844 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:47:06 compute-2 nova_compute[226829]: 2026-01-31 07:47:06.844 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:47:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:47:06.856 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:47:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:47:06.856 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:47:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:47:06.856 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:47:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:06.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:06 compute-2 nova_compute[226829]: 2026-01-31 07:47:06.874 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing inventories for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 07:47:06 compute-2 nova_compute[226829]: 2026-01-31 07:47:06.915 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating ProviderTree inventory for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 07:47:06 compute-2 nova_compute[226829]: 2026-01-31 07:47:06.915 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating inventory in ProviderTree for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 07:47:06 compute-2 nova_compute[226829]: 2026-01-31 07:47:06.937 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing aggregate associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 07:47:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3416463148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:47:06 compute-2 nova_compute[226829]: 2026-01-31 07:47:06.965 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing trait associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, traits: COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 07:47:07 compute-2 nova_compute[226829]: 2026-01-31 07:47:07.049 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:47:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:07.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:47:07 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/542345774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:47:07 compute-2 nova_compute[226829]: 2026-01-31 07:47:07.500 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:47:07 compute-2 nova_compute[226829]: 2026-01-31 07:47:07.505 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:47:07 compute-2 nova_compute[226829]: 2026-01-31 07:47:07.579 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:47:08 compute-2 nova_compute[226829]: 2026-01-31 07:47:08.043 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:47:08 compute-2 nova_compute[226829]: 2026-01-31 07:47:08.044 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:47:08 compute-2 ceph-mon[77282]: pgmap v1465: 305 pgs: 305 active+clean; 200 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 186 op/s
Jan 31 07:47:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1325665326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:47:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/542345774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:47:08 compute-2 nova_compute[226829]: 2026-01-31 07:47:08.555 226833 INFO nova.virt.libvirt.driver [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Snapshot image upload complete
Jan 31 07:47:08 compute-2 nova_compute[226829]: 2026-01-31 07:47:08.555 226833 DEBUG nova.compute.manager [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:47:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:08.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:47:09 compute-2 nova_compute[226829]: 2026-01-31 07:47:09.045 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:47:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e220 e220: 3 total, 3 up, 3 in
Jan 31 07:47:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:09.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/991230648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:47:09 compute-2 nova_compute[226829]: 2026-01-31 07:47:09.206 226833 INFO nova.compute.manager [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Shelve offloading
Jan 31 07:47:09 compute-2 nova_compute[226829]: 2026-01-31 07:47:09.218 226833 INFO nova.virt.libvirt.driver [-] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Instance destroyed successfully.
Jan 31 07:47:09 compute-2 nova_compute[226829]: 2026-01-31 07:47:09.218 226833 DEBUG nova.compute.manager [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:47:09 compute-2 nova_compute[226829]: 2026-01-31 07:47:09.223 226833 DEBUG oslo_concurrency.lockutils [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "refresh_cache-070c0648-719f-49ef-9721-ce18b2d03fd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:47:09 compute-2 nova_compute[226829]: 2026-01-31 07:47:09.223 226833 DEBUG oslo_concurrency.lockutils [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquired lock "refresh_cache-070c0648-719f-49ef-9721-ce18b2d03fd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:47:09 compute-2 nova_compute[226829]: 2026-01-31 07:47:09.224 226833 DEBUG nova.network.neutron [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:47:09 compute-2 nova_compute[226829]: 2026-01-31 07:47:09.268 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:10 compute-2 ceph-mon[77282]: pgmap v1466: 305 pgs: 305 active+clean; 209 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 6.7 MiB/s wr, 176 op/s
Jan 31 07:47:10 compute-2 ceph-mon[77282]: osdmap e220: 3 total, 3 up, 3 in
Jan 31 07:47:10 compute-2 nova_compute[226829]: 2026-01-31 07:47:10.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:47:10 compute-2 nova_compute[226829]: 2026-01-31 07:47:10.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:47:10 compute-2 nova_compute[226829]: 2026-01-31 07:47:10.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:47:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:10.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:11 compute-2 nova_compute[226829]: 2026-01-31 07:47:11.067 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:11.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e221 e221: 3 total, 3 up, 3 in
Jan 31 07:47:12 compute-2 nova_compute[226829]: 2026-01-31 07:47:12.625 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845617.6233346, 070c0648-719f-49ef-9721-ce18b2d03fd3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:47:12 compute-2 nova_compute[226829]: 2026-01-31 07:47:12.625 226833 INFO nova.compute.manager [-] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] VM Stopped (Lifecycle Event)
Jan 31 07:47:12 compute-2 ceph-mon[77282]: pgmap v1468: 305 pgs: 305 active+clean; 246 MiB data, 669 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 7.5 MiB/s wr, 221 op/s
Jan 31 07:47:12 compute-2 ceph-mon[77282]: osdmap e221: 3 total, 3 up, 3 in
Jan 31 07:47:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:12.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:13 compute-2 nova_compute[226829]: 2026-01-31 07:47:13.098 226833 DEBUG nova.compute.manager [None req-8eda7b3a-640b-4e88-a963-1b92a8d2b22e - - - - - -] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:47:13 compute-2 nova_compute[226829]: 2026-01-31 07:47:13.103 226833 DEBUG nova.compute.manager [None req-8eda7b3a-640b-4e88-a963-1b92a8d2b22e - - - - - -] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:47:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:13.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:13 compute-2 nova_compute[226829]: 2026-01-31 07:47:13.229 226833 INFO nova.compute.manager [None req-8eda7b3a-640b-4e88-a963-1b92a8d2b22e - - - - - -] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] During sync_power_state the instance has a pending task (shelving_offloading). Skip.
Jan 31 07:47:13 compute-2 sudo[252871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:47:13 compute-2 sudo[252871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:13 compute-2 sudo[252871]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:13 compute-2 sudo[252896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:47:13 compute-2 sudo[252896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:13 compute-2 sudo[252896]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:13 compute-2 ceph-mon[77282]: pgmap v1470: 305 pgs: 305 active+clean; 246 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 6.8 MiB/s wr, 201 op/s
Jan 31 07:47:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:47:14 compute-2 nova_compute[226829]: 2026-01-31 07:47:14.007 226833 DEBUG nova.network.neutron [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Updating instance_info_cache with network_info: [{"id": "f67f1507-621e-4954-99cd-71bf5dd626d8", "address": "fa:16:3e:2a:0b:11", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67f1507-62", "ovs_interfaceid": "f67f1507-621e-4954-99cd-71bf5dd626d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:47:14 compute-2 nova_compute[226829]: 2026-01-31 07:47:14.041 226833 DEBUG oslo_concurrency.lockutils [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Releasing lock "refresh_cache-070c0648-719f-49ef-9721-ce18b2d03fd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:47:14 compute-2 nova_compute[226829]: 2026-01-31 07:47:14.270 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:14.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:15.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:15 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-2[77982]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 07:47:16 compute-2 ceph-mon[77282]: pgmap v1471: 305 pgs: 305 active+clean; 246 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 2.7 MiB/s wr, 83 op/s
Jan 31 07:47:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/944526431' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:47:16 compute-2 nova_compute[226829]: 2026-01-31 07:47:16.070 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:16 compute-2 podman[252922]: 2026-01-31 07:47:16.227223006 +0000 UTC m=+0.104956844 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:47:16 compute-2 nova_compute[226829]: 2026-01-31 07:47:16.703 226833 INFO nova.virt.libvirt.driver [-] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Instance destroyed successfully.
Jan 31 07:47:16 compute-2 nova_compute[226829]: 2026-01-31 07:47:16.703 226833 DEBUG nova.objects.instance [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'resources' on Instance uuid 070c0648-719f-49ef-9721-ce18b2d03fd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:47:16 compute-2 nova_compute[226829]: 2026-01-31 07:47:16.786 226833 DEBUG nova.virt.libvirt.vif [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:46:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-216539408',display_name='tempest-DeleteServersTestJSON-server-216539408',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-216539408',id=61,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:46:29Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f299640bb1f64e5fa12b23955e5a2127',ramdisk_id='',reservation_id='r-mlh1ptlq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-2031597701',owner_user_name='tempest-DeleteServersTestJSON-2031597701-project-member',shelved_at='2026-01-31T07:47:08.555834',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='b009eb64-1e75-45c8-99c5-fb9d2b66fdcb'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:47:00Z,user_data=None,user_id='97abab8eb79247cd89fb2ebff295b890',uuid=070c0648-719f-49ef-9721-ce18b2d03fd3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "f67f1507-621e-4954-99cd-71bf5dd626d8", "address": "fa:16:3e:2a:0b:11", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67f1507-62", "ovs_interfaceid": "f67f1507-621e-4954-99cd-71bf5dd626d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:47:16 compute-2 nova_compute[226829]: 2026-01-31 07:47:16.787 226833 DEBUG nova.network.os_vif_util [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converting VIF {"id": "f67f1507-621e-4954-99cd-71bf5dd626d8", "address": "fa:16:3e:2a:0b:11", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67f1507-62", "ovs_interfaceid": "f67f1507-621e-4954-99cd-71bf5dd626d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:47:16 compute-2 nova_compute[226829]: 2026-01-31 07:47:16.789 226833 DEBUG nova.network.os_vif_util [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:0b:11,bridge_name='br-int',has_traffic_filtering=True,id=f67f1507-621e-4954-99cd-71bf5dd626d8,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf67f1507-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:47:16 compute-2 nova_compute[226829]: 2026-01-31 07:47:16.789 226833 DEBUG os_vif [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:0b:11,bridge_name='br-int',has_traffic_filtering=True,id=f67f1507-621e-4954-99cd-71bf5dd626d8,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf67f1507-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:47:16 compute-2 nova_compute[226829]: 2026-01-31 07:47:16.795 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:16 compute-2 nova_compute[226829]: 2026-01-31 07:47:16.796 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf67f1507-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:47:16 compute-2 nova_compute[226829]: 2026-01-31 07:47:16.798 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:16 compute-2 nova_compute[226829]: 2026-01-31 07:47:16.802 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:47:16 compute-2 nova_compute[226829]: 2026-01-31 07:47:16.809 226833 INFO os_vif [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:0b:11,bridge_name='br-int',has_traffic_filtering=True,id=f67f1507-621e-4954-99cd-71bf5dd626d8,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf67f1507-62')
Jan 31 07:47:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:16.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2113549329' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:47:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2462762902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:47:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:17.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:17 compute-2 nova_compute[226829]: 2026-01-31 07:47:17.171 226833 DEBUG nova.compute.manager [req-849bdab2-baa2-4e5b-8238-1f70b5ff5d9a req-9706b9cc-8473-4ae6-aedc-7e0bf57c278d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Received event network-changed-f67f1507-621e-4954-99cd-71bf5dd626d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:47:17 compute-2 nova_compute[226829]: 2026-01-31 07:47:17.172 226833 DEBUG nova.compute.manager [req-849bdab2-baa2-4e5b-8238-1f70b5ff5d9a req-9706b9cc-8473-4ae6-aedc-7e0bf57c278d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Refreshing instance network info cache due to event network-changed-f67f1507-621e-4954-99cd-71bf5dd626d8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:47:17 compute-2 nova_compute[226829]: 2026-01-31 07:47:17.172 226833 DEBUG oslo_concurrency.lockutils [req-849bdab2-baa2-4e5b-8238-1f70b5ff5d9a req-9706b9cc-8473-4ae6-aedc-7e0bf57c278d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-070c0648-719f-49ef-9721-ce18b2d03fd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:47:17 compute-2 nova_compute[226829]: 2026-01-31 07:47:17.173 226833 DEBUG oslo_concurrency.lockutils [req-849bdab2-baa2-4e5b-8238-1f70b5ff5d9a req-9706b9cc-8473-4ae6-aedc-7e0bf57c278d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-070c0648-719f-49ef-9721-ce18b2d03fd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:47:17 compute-2 nova_compute[226829]: 2026-01-31 07:47:17.173 226833 DEBUG nova.network.neutron [req-849bdab2-baa2-4e5b-8238-1f70b5ff5d9a req-9706b9cc-8473-4ae6-aedc-7e0bf57c278d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Refreshing network info cache for port f67f1507-621e-4954-99cd-71bf5dd626d8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:47:17 compute-2 nova_compute[226829]: 2026-01-31 07:47:17.864 226833 INFO nova.virt.libvirt.driver [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Deleting instance files /var/lib/nova/instances/070c0648-719f-49ef-9721-ce18b2d03fd3_del
Jan 31 07:47:17 compute-2 nova_compute[226829]: 2026-01-31 07:47:17.865 226833 INFO nova.virt.libvirt.driver [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Deletion of /var/lib/nova/instances/070c0648-719f-49ef-9721-ce18b2d03fd3_del complete
Jan 31 07:47:18 compute-2 ceph-mon[77282]: pgmap v1472: 305 pgs: 305 active+clean; 246 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 2.2 MiB/s wr, 59 op/s
Jan 31 07:47:18 compute-2 nova_compute[226829]: 2026-01-31 07:47:18.362 226833 DEBUG oslo_concurrency.lockutils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Acquiring lock "1605ed48-f361-4895-81e2-915a59f90ec9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:47:18 compute-2 nova_compute[226829]: 2026-01-31 07:47:18.363 226833 DEBUG oslo_concurrency.lockutils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lock "1605ed48-f361-4895-81e2-915a59f90ec9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:47:18 compute-2 nova_compute[226829]: 2026-01-31 07:47:18.369 226833 INFO nova.scheduler.client.report [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Deleted allocations for instance 070c0648-719f-49ef-9721-ce18b2d03fd3
Jan 31 07:47:18 compute-2 nova_compute[226829]: 2026-01-31 07:47:18.390 226833 DEBUG nova.compute.manager [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:47:18 compute-2 nova_compute[226829]: 2026-01-31 07:47:18.458 226833 DEBUG oslo_concurrency.lockutils [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:47:18 compute-2 nova_compute[226829]: 2026-01-31 07:47:18.459 226833 DEBUG oslo_concurrency.lockutils [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:47:18 compute-2 nova_compute[226829]: 2026-01-31 07:47:18.562 226833 DEBUG oslo_concurrency.processutils [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:47:18 compute-2 nova_compute[226829]: 2026-01-31 07:47:18.634 226833 DEBUG oslo_concurrency.lockutils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:47:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:47:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:18.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:47:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:47:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:47:19 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2986912803' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:47:19 compute-2 nova_compute[226829]: 2026-01-31 07:47:19.035 226833 DEBUG oslo_concurrency.processutils [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:47:19 compute-2 nova_compute[226829]: 2026-01-31 07:47:19.041 226833 DEBUG nova.compute.provider_tree [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:47:19 compute-2 nova_compute[226829]: 2026-01-31 07:47:19.148 226833 DEBUG nova.scheduler.client.report [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:47:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:19.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:19 compute-2 nova_compute[226829]: 2026-01-31 07:47:19.274 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:19 compute-2 nova_compute[226829]: 2026-01-31 07:47:19.287 226833 DEBUG oslo_concurrency.lockutils [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:47:19 compute-2 nova_compute[226829]: 2026-01-31 07:47:19.292 226833 DEBUG oslo_concurrency.lockutils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:47:19 compute-2 nova_compute[226829]: 2026-01-31 07:47:19.301 226833 DEBUG nova.virt.hardware [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:47:19 compute-2 nova_compute[226829]: 2026-01-31 07:47:19.302 226833 INFO nova.compute.claims [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:47:19 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2986912803' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:47:19 compute-2 nova_compute[226829]: 2026-01-31 07:47:19.474 226833 DEBUG oslo_concurrency.lockutils [None req-0f683434-2169-447a-b221-1160eda7b99a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "070c0648-719f-49ef-9721-ce18b2d03fd3" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 46.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:47:19 compute-2 nova_compute[226829]: 2026-01-31 07:47:19.816 226833 DEBUG oslo_concurrency.processutils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:47:20 compute-2 nova_compute[226829]: 2026-01-31 07:47:20.049 226833 DEBUG nova.network.neutron [req-849bdab2-baa2-4e5b-8238-1f70b5ff5d9a req-9706b9cc-8473-4ae6-aedc-7e0bf57c278d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Updated VIF entry in instance network info cache for port f67f1507-621e-4954-99cd-71bf5dd626d8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:47:20 compute-2 nova_compute[226829]: 2026-01-31 07:47:20.050 226833 DEBUG nova.network.neutron [req-849bdab2-baa2-4e5b-8238-1f70b5ff5d9a req-9706b9cc-8473-4ae6-aedc-7e0bf57c278d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 070c0648-719f-49ef-9721-ce18b2d03fd3] Updating instance_info_cache with network_info: [{"id": "f67f1507-621e-4954-99cd-71bf5dd626d8", "address": "fa:16:3e:2a:0b:11", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": null, "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapf67f1507-62", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:47:20 compute-2 nova_compute[226829]: 2026-01-31 07:47:20.213 226833 DEBUG oslo_concurrency.lockutils [req-849bdab2-baa2-4e5b-8238-1f70b5ff5d9a req-9706b9cc-8473-4ae6-aedc-7e0bf57c278d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-070c0648-719f-49ef-9721-ce18b2d03fd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:47:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:47:20 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1947828328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:47:20 compute-2 nova_compute[226829]: 2026-01-31 07:47:20.264 226833 DEBUG oslo_concurrency.processutils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:47:20 compute-2 nova_compute[226829]: 2026-01-31 07:47:20.270 226833 DEBUG nova.compute.provider_tree [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:47:20 compute-2 ceph-mon[77282]: pgmap v1473: 305 pgs: 305 active+clean; 237 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Jan 31 07:47:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2357984663' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:47:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3624245809' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:47:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1947828328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:47:20 compute-2 nova_compute[226829]: 2026-01-31 07:47:20.699 226833 DEBUG nova.scheduler.client.report [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:47:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:47:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:20.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:47:20 compute-2 nova_compute[226829]: 2026-01-31 07:47:20.917 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:47:20.917 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:47:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:47:20.922 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:47:20 compute-2 nova_compute[226829]: 2026-01-31 07:47:20.925 226833 DEBUG oslo_concurrency.lockutils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:47:20 compute-2 nova_compute[226829]: 2026-01-31 07:47:20.926 226833 DEBUG nova.compute.manager [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:47:21 compute-2 nova_compute[226829]: 2026-01-31 07:47:21.114 226833 DEBUG nova.compute.manager [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:47:21 compute-2 nova_compute[226829]: 2026-01-31 07:47:21.115 226833 DEBUG nova.network.neutron [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:47:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:21.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:21 compute-2 nova_compute[226829]: 2026-01-31 07:47:21.195 226833 INFO nova.virt.libvirt.driver [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:47:21 compute-2 nova_compute[226829]: 2026-01-31 07:47:21.283 226833 DEBUG nova.compute.manager [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:47:21 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e222 e222: 3 total, 3 up, 3 in
Jan 31 07:47:21 compute-2 nova_compute[226829]: 2026-01-31 07:47:21.637 226833 DEBUG nova.compute.manager [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:47:21 compute-2 nova_compute[226829]: 2026-01-31 07:47:21.638 226833 DEBUG nova.virt.libvirt.driver [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:47:21 compute-2 nova_compute[226829]: 2026-01-31 07:47:21.638 226833 INFO nova.virt.libvirt.driver [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Creating image(s)
Jan 31 07:47:21 compute-2 nova_compute[226829]: 2026-01-31 07:47:21.666 226833 DEBUG nova.storage.rbd_utils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] rbd image 1605ed48-f361-4895-81e2-915a59f90ec9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:47:21 compute-2 nova_compute[226829]: 2026-01-31 07:47:21.699 226833 DEBUG nova.storage.rbd_utils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] rbd image 1605ed48-f361-4895-81e2-915a59f90ec9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:47:21 compute-2 nova_compute[226829]: 2026-01-31 07:47:21.727 226833 DEBUG nova.storage.rbd_utils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] rbd image 1605ed48-f361-4895-81e2-915a59f90ec9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:47:21 compute-2 nova_compute[226829]: 2026-01-31 07:47:21.732 226833 DEBUG oslo_concurrency.processutils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:47:21 compute-2 nova_compute[226829]: 2026-01-31 07:47:21.755 226833 DEBUG nova.network.neutron [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 31 07:47:21 compute-2 nova_compute[226829]: 2026-01-31 07:47:21.755 226833 DEBUG nova.compute.manager [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:47:21 compute-2 nova_compute[226829]: 2026-01-31 07:47:21.799 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:21 compute-2 nova_compute[226829]: 2026-01-31 07:47:21.807 226833 DEBUG oslo_concurrency.processutils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:47:21 compute-2 nova_compute[226829]: 2026-01-31 07:47:21.808 226833 DEBUG oslo_concurrency.lockutils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:47:21 compute-2 nova_compute[226829]: 2026-01-31 07:47:21.808 226833 DEBUG oslo_concurrency.lockutils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:47:21 compute-2 nova_compute[226829]: 2026-01-31 07:47:21.808 226833 DEBUG oslo_concurrency.lockutils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:47:21 compute-2 nova_compute[226829]: 2026-01-31 07:47:21.829 226833 DEBUG nova.storage.rbd_utils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] rbd image 1605ed48-f361-4895-81e2-915a59f90ec9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:47:21 compute-2 nova_compute[226829]: 2026-01-31 07:47:21.833 226833 DEBUG oslo_concurrency.processutils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 1605ed48-f361-4895-81e2-915a59f90ec9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:47:22 compute-2 ceph-mon[77282]: pgmap v1474: 305 pgs: 305 active+clean; 209 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 2.1 MiB/s wr, 75 op/s
Jan 31 07:47:22 compute-2 ceph-mon[77282]: osdmap e222: 3 total, 3 up, 3 in
Jan 31 07:47:22 compute-2 nova_compute[226829]: 2026-01-31 07:47:22.851 226833 DEBUG oslo_concurrency.processutils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 1605ed48-f361-4895-81e2-915a59f90ec9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:47:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:22.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:22 compute-2 nova_compute[226829]: 2026-01-31 07:47:22.918 226833 DEBUG nova.storage.rbd_utils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] resizing rbd image 1605ed48-f361-4895-81e2-915a59f90ec9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:47:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:23.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:23 compute-2 nova_compute[226829]: 2026-01-31 07:47:23.283 226833 DEBUG nova.objects.instance [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lazy-loading 'migration_context' on Instance uuid 1605ed48-f361-4895-81e2-915a59f90ec9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:47:23 compute-2 nova_compute[226829]: 2026-01-31 07:47:23.302 226833 DEBUG nova.virt.libvirt.driver [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:47:23 compute-2 nova_compute[226829]: 2026-01-31 07:47:23.303 226833 DEBUG nova.virt.libvirt.driver [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Ensure instance console log exists: /var/lib/nova/instances/1605ed48-f361-4895-81e2-915a59f90ec9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:47:23 compute-2 nova_compute[226829]: 2026-01-31 07:47:23.304 226833 DEBUG oslo_concurrency.lockutils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:47:23 compute-2 nova_compute[226829]: 2026-01-31 07:47:23.304 226833 DEBUG oslo_concurrency.lockutils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:47:23 compute-2 nova_compute[226829]: 2026-01-31 07:47:23.305 226833 DEBUG oslo_concurrency.lockutils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:47:23 compute-2 nova_compute[226829]: 2026-01-31 07:47:23.308 226833 DEBUG nova.virt.libvirt.driver [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:47:23 compute-2 nova_compute[226829]: 2026-01-31 07:47:23.315 226833 WARNING nova.virt.libvirt.driver [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:47:23 compute-2 nova_compute[226829]: 2026-01-31 07:47:23.322 226833 DEBUG nova.virt.libvirt.host [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:47:23 compute-2 nova_compute[226829]: 2026-01-31 07:47:23.323 226833 DEBUG nova.virt.libvirt.host [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:47:23 compute-2 nova_compute[226829]: 2026-01-31 07:47:23.328 226833 DEBUG nova.virt.libvirt.host [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:47:23 compute-2 nova_compute[226829]: 2026-01-31 07:47:23.328 226833 DEBUG nova.virt.libvirt.host [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:47:23 compute-2 nova_compute[226829]: 2026-01-31 07:47:23.331 226833 DEBUG nova.virt.libvirt.driver [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:47:23 compute-2 nova_compute[226829]: 2026-01-31 07:47:23.331 226833 DEBUG nova.virt.hardware [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:47:23 compute-2 nova_compute[226829]: 2026-01-31 07:47:23.332 226833 DEBUG nova.virt.hardware [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:47:23 compute-2 nova_compute[226829]: 2026-01-31 07:47:23.333 226833 DEBUG nova.virt.hardware [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:47:23 compute-2 nova_compute[226829]: 2026-01-31 07:47:23.333 226833 DEBUG nova.virt.hardware [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:47:23 compute-2 nova_compute[226829]: 2026-01-31 07:47:23.334 226833 DEBUG nova.virt.hardware [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:47:23 compute-2 nova_compute[226829]: 2026-01-31 07:47:23.334 226833 DEBUG nova.virt.hardware [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:47:23 compute-2 nova_compute[226829]: 2026-01-31 07:47:23.335 226833 DEBUG nova.virt.hardware [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:47:23 compute-2 nova_compute[226829]: 2026-01-31 07:47:23.335 226833 DEBUG nova.virt.hardware [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:47:23 compute-2 nova_compute[226829]: 2026-01-31 07:47:23.335 226833 DEBUG nova.virt.hardware [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:47:23 compute-2 nova_compute[226829]: 2026-01-31 07:47:23.336 226833 DEBUG nova.virt.hardware [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:47:23 compute-2 nova_compute[226829]: 2026-01-31 07:47:23.336 226833 DEBUG nova.virt.hardware [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:47:23 compute-2 nova_compute[226829]: 2026-01-31 07:47:23.341 226833 DEBUG oslo_concurrency.processutils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:47:23 compute-2 ceph-mon[77282]: pgmap v1476: 305 pgs: 305 active+clean; 210 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 449 KiB/s rd, 3.4 MiB/s wr, 106 op/s
Jan 31 07:47:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:47:23 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3540240285' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:47:23 compute-2 nova_compute[226829]: 2026-01-31 07:47:23.804 226833 DEBUG oslo_concurrency.processutils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:47:23 compute-2 nova_compute[226829]: 2026-01-31 07:47:23.834 226833 DEBUG nova.storage.rbd_utils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] rbd image 1605ed48-f361-4895-81e2-915a59f90ec9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:47:23 compute-2 nova_compute[226829]: 2026-01-31 07:47:23.838 226833 DEBUG oslo_concurrency.processutils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:47:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:47:24 compute-2 nova_compute[226829]: 2026-01-31 07:47:24.275 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:47:24 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3620649895' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:47:24 compute-2 nova_compute[226829]: 2026-01-31 07:47:24.312 226833 DEBUG oslo_concurrency.processutils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:47:24 compute-2 nova_compute[226829]: 2026-01-31 07:47:24.313 226833 DEBUG nova.objects.instance [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lazy-loading 'pci_devices' on Instance uuid 1605ed48-f361-4895-81e2-915a59f90ec9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:47:24 compute-2 nova_compute[226829]: 2026-01-31 07:47:24.350 226833 DEBUG nova.virt.libvirt.driver [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:47:24 compute-2 nova_compute[226829]:   <uuid>1605ed48-f361-4895-81e2-915a59f90ec9</uuid>
Jan 31 07:47:24 compute-2 nova_compute[226829]:   <name>instance-00000040</name>
Jan 31 07:47:24 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:47:24 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:47:24 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:47:24 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:       <nova:name>tempest-ListImageFiltersTestJSON-server-1985325628</nova:name>
Jan 31 07:47:24 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:47:23</nova:creationTime>
Jan 31 07:47:24 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:47:24 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:47:24 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:47:24 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:47:24 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:47:24 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:47:24 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:47:24 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:47:24 compute-2 nova_compute[226829]:         <nova:user uuid="aeb97fa2b1284c3faf0028734652a72c">tempest-ListImageFiltersTestJSON-1086962866-project-member</nova:user>
Jan 31 07:47:24 compute-2 nova_compute[226829]:         <nova:project uuid="d8503864fef643f698a175cc6364101c">tempest-ListImageFiltersTestJSON-1086962866</nova:project>
Jan 31 07:47:24 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:47:24 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:       <nova:ports/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:47:24 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:47:24 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <system>
Jan 31 07:47:24 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:47:24 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:47:24 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:47:24 compute-2 nova_compute[226829]:       <entry name="serial">1605ed48-f361-4895-81e2-915a59f90ec9</entry>
Jan 31 07:47:24 compute-2 nova_compute[226829]:       <entry name="uuid">1605ed48-f361-4895-81e2-915a59f90ec9</entry>
Jan 31 07:47:24 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     </system>
Jan 31 07:47:24 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:47:24 compute-2 nova_compute[226829]:   <os>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:   </os>
Jan 31 07:47:24 compute-2 nova_compute[226829]:   <features>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:   </features>
Jan 31 07:47:24 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:47:24 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:47:24 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:47:24 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/1605ed48-f361-4895-81e2-915a59f90ec9_disk">
Jan 31 07:47:24 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:       </source>
Jan 31 07:47:24 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:47:24 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:47:24 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:47:24 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/1605ed48-f361-4895-81e2-915a59f90ec9_disk.config">
Jan 31 07:47:24 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:       </source>
Jan 31 07:47:24 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:47:24 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:47:24 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:47:24 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/1605ed48-f361-4895-81e2-915a59f90ec9/console.log" append="off"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <video>
Jan 31 07:47:24 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     </video>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:47:24 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:47:24 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:47:24 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:47:24 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:47:24 compute-2 nova_compute[226829]: </domain>
Jan 31 07:47:24 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:47:24 compute-2 nova_compute[226829]: 2026-01-31 07:47:24.429 226833 DEBUG nova.virt.libvirt.driver [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:47:24 compute-2 nova_compute[226829]: 2026-01-31 07:47:24.430 226833 DEBUG nova.virt.libvirt.driver [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:47:24 compute-2 nova_compute[226829]: 2026-01-31 07:47:24.430 226833 INFO nova.virt.libvirt.driver [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Using config drive
Jan 31 07:47:24 compute-2 nova_compute[226829]: 2026-01-31 07:47:24.455 226833 DEBUG nova.storage.rbd_utils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] rbd image 1605ed48-f361-4895-81e2-915a59f90ec9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:47:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:24.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:24 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:47:24.925 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:47:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3540240285' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:47:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3620649895' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:47:25 compute-2 podman[253263]: 2026-01-31 07:47:25.182288146 +0000 UTC m=+0.068048404 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 31 07:47:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:25.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:25 compute-2 nova_compute[226829]: 2026-01-31 07:47:25.420 226833 INFO nova.virt.libvirt.driver [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Creating config drive at /var/lib/nova/instances/1605ed48-f361-4895-81e2-915a59f90ec9/disk.config
Jan 31 07:47:25 compute-2 nova_compute[226829]: 2026-01-31 07:47:25.425 226833 DEBUG oslo_concurrency.processutils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1605ed48-f361-4895-81e2-915a59f90ec9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp9nkt331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:47:25 compute-2 nova_compute[226829]: 2026-01-31 07:47:25.558 226833 DEBUG oslo_concurrency.processutils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1605ed48-f361-4895-81e2-915a59f90ec9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp9nkt331s" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:47:25 compute-2 nova_compute[226829]: 2026-01-31 07:47:25.589 226833 DEBUG nova.storage.rbd_utils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] rbd image 1605ed48-f361-4895-81e2-915a59f90ec9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:47:25 compute-2 nova_compute[226829]: 2026-01-31 07:47:25.593 226833 DEBUG oslo_concurrency.processutils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1605ed48-f361-4895-81e2-915a59f90ec9/disk.config 1605ed48-f361-4895-81e2-915a59f90ec9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:47:26 compute-2 nova_compute[226829]: 2026-01-31 07:47:26.802 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:26.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:26 compute-2 ceph-mon[77282]: pgmap v1477: 305 pgs: 305 active+clean; 210 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 449 KiB/s rd, 3.4 MiB/s wr, 106 op/s
Jan 31 07:47:27 compute-2 nova_compute[226829]: 2026-01-31 07:47:27.169 226833 DEBUG oslo_concurrency.processutils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1605ed48-f361-4895-81e2-915a59f90ec9/disk.config 1605ed48-f361-4895-81e2-915a59f90ec9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:47:27 compute-2 nova_compute[226829]: 2026-01-31 07:47:27.170 226833 INFO nova.virt.libvirt.driver [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Deleting local config drive /var/lib/nova/instances/1605ed48-f361-4895-81e2-915a59f90ec9/disk.config because it was imported into RBD.
Jan 31 07:47:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:47:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:27.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:47:27 compute-2 systemd-machined[195142]: New machine qemu-26-instance-00000040.
Jan 31 07:47:27 compute-2 systemd[1]: Started Virtual Machine qemu-26-instance-00000040.
Jan 31 07:47:27 compute-2 nova_compute[226829]: 2026-01-31 07:47:27.713 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845647.7122235, 1605ed48-f361-4895-81e2-915a59f90ec9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:47:27 compute-2 nova_compute[226829]: 2026-01-31 07:47:27.714 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] VM Resumed (Lifecycle Event)
Jan 31 07:47:27 compute-2 nova_compute[226829]: 2026-01-31 07:47:27.717 226833 DEBUG nova.compute.manager [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:47:27 compute-2 nova_compute[226829]: 2026-01-31 07:47:27.718 226833 DEBUG nova.virt.libvirt.driver [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:47:27 compute-2 nova_compute[226829]: 2026-01-31 07:47:27.721 226833 INFO nova.virt.libvirt.driver [-] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Instance spawned successfully.
Jan 31 07:47:27 compute-2 nova_compute[226829]: 2026-01-31 07:47:27.722 226833 DEBUG nova.virt.libvirt.driver [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:47:27 compute-2 nova_compute[226829]: 2026-01-31 07:47:27.815 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:47:27 compute-2 nova_compute[226829]: 2026-01-31 07:47:27.820 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:47:27 compute-2 nova_compute[226829]: 2026-01-31 07:47:27.861 226833 DEBUG nova.virt.libvirt.driver [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:47:27 compute-2 nova_compute[226829]: 2026-01-31 07:47:27.861 226833 DEBUG nova.virt.libvirt.driver [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:47:27 compute-2 nova_compute[226829]: 2026-01-31 07:47:27.861 226833 DEBUG nova.virt.libvirt.driver [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:47:27 compute-2 nova_compute[226829]: 2026-01-31 07:47:27.862 226833 DEBUG nova.virt.libvirt.driver [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:47:27 compute-2 nova_compute[226829]: 2026-01-31 07:47:27.862 226833 DEBUG nova.virt.libvirt.driver [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:47:27 compute-2 nova_compute[226829]: 2026-01-31 07:47:27.862 226833 DEBUG nova.virt.libvirt.driver [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:47:27 compute-2 nova_compute[226829]: 2026-01-31 07:47:27.892 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:47:27 compute-2 nova_compute[226829]: 2026-01-31 07:47:27.892 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845647.7136858, 1605ed48-f361-4895-81e2-915a59f90ec9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:47:27 compute-2 nova_compute[226829]: 2026-01-31 07:47:27.892 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] VM Started (Lifecycle Event)
Jan 31 07:47:28 compute-2 nova_compute[226829]: 2026-01-31 07:47:28.096 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:47:28 compute-2 nova_compute[226829]: 2026-01-31 07:47:28.100 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:47:28 compute-2 nova_compute[226829]: 2026-01-31 07:47:28.135 226833 INFO nova.compute.manager [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Took 6.50 seconds to spawn the instance on the hypervisor.
Jan 31 07:47:28 compute-2 nova_compute[226829]: 2026-01-31 07:47:28.136 226833 DEBUG nova.compute.manager [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:47:28 compute-2 nova_compute[226829]: 2026-01-31 07:47:28.264 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:47:28 compute-2 nova_compute[226829]: 2026-01-31 07:47:28.456 226833 INFO nova.compute.manager [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Took 9.87 seconds to build instance.
Jan 31 07:47:28 compute-2 ceph-mon[77282]: pgmap v1478: 305 pgs: 305 active+clean; 181 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.3 MiB/s wr, 301 op/s
Jan 31 07:47:28 compute-2 nova_compute[226829]: 2026-01-31 07:47:28.656 226833 DEBUG oslo_concurrency.lockutils [None req-99450d31-a92b-46c4-8e67-79743c4163ab aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lock "1605ed48-f361-4895-81e2-915a59f90ec9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.293s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:47:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:47:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 25K writes, 104K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.04 MB/s
                                           Cumulative WAL: 25K writes, 8267 syncs, 3.07 writes per sync, written: 0.10 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 43K keys, 10K commit groups, 1.0 writes per commit group, ingest: 45.97 MB, 0.08 MB/s
                                           Interval WAL: 10K writes, 3830 syncs, 2.78 writes per sync, written: 0.04 GB, 0.08 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 07:47:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:28.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:47:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:29.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:29 compute-2 nova_compute[226829]: 2026-01-31 07:47:29.278 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:29 compute-2 ceph-mon[77282]: pgmap v1479: 305 pgs: 305 active+clean; 181 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.3 MiB/s wr, 300 op/s
Jan 31 07:47:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:47:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:30.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:47:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:31.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e223 e223: 3 total, 3 up, 3 in
Jan 31 07:47:31 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Jan 31 07:47:31 compute-2 nova_compute[226829]: 2026-01-31 07:47:31.805 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:32 compute-2 ceph-mon[77282]: pgmap v1480: 305 pgs: 305 active+clean; 181 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.2 MiB/s wr, 269 op/s
Jan 31 07:47:32 compute-2 ceph-mon[77282]: osdmap e223: 3 total, 3 up, 3 in
Jan 31 07:47:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:32.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:47:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:33.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:47:33 compute-2 sudo[253385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:47:33 compute-2 sudo[253385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:33 compute-2 sudo[253385]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:33 compute-2 sudo[253410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:47:33 compute-2 sudo[253410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:33 compute-2 sudo[253410]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:33 compute-2 ceph-mon[77282]: pgmap v1482: 305 pgs: 305 active+clean; 183 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 1.1 MiB/s wr, 289 op/s
Jan 31 07:47:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:47:34 compute-2 nova_compute[226829]: 2026-01-31 07:47:34.279 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:34.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:47:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:35.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:47:35 compute-2 ceph-mon[77282]: pgmap v1483: 305 pgs: 305 active+clean; 183 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 1.1 MiB/s wr, 289 op/s
Jan 31 07:47:36 compute-2 nova_compute[226829]: 2026-01-31 07:47:36.808 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:36.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e224 e224: 3 total, 3 up, 3 in
Jan 31 07:47:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:47:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:37.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:47:38 compute-2 ceph-mon[77282]: pgmap v1484: 305 pgs: 305 active+clean; 229 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.1 MiB/s wr, 222 op/s
Jan 31 07:47:38 compute-2 ceph-mon[77282]: osdmap e224: 3 total, 3 up, 3 in
Jan 31 07:47:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:47:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:38.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:47:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:47:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:39.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:39 compute-2 nova_compute[226829]: 2026-01-31 07:47:39.282 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:40 compute-2 ceph-mon[77282]: pgmap v1486: 305 pgs: 305 active+clean; 246 MiB data, 689 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.4 MiB/s wr, 244 op/s
Jan 31 07:47:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:40.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:47:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:41.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:47:41 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4165145417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:47:41 compute-2 nova_compute[226829]: 2026-01-31 07:47:41.811 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:42.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:42 compute-2 ceph-mon[77282]: pgmap v1487: 305 pgs: 305 active+clean; 332 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 12 MiB/s wr, 277 op/s
Jan 31 07:47:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e225 e225: 3 total, 3 up, 3 in
Jan 31 07:47:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:47:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:43.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:47:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:47:44 compute-2 nova_compute[226829]: 2026-01-31 07:47:44.301 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:44 compute-2 ceph-mon[77282]: pgmap v1488: 305 pgs: 305 active+clean; 346 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 12 MiB/s wr, 253 op/s
Jan 31 07:47:44 compute-2 ceph-mon[77282]: osdmap e225: 3 total, 3 up, 3 in
Jan 31 07:47:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e226 e226: 3 total, 3 up, 3 in
Jan 31 07:47:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:47:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:44.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:47:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:47:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/325263316' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:47:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:47:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/325263316' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:47:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:47:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:45.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:47:45 compute-2 ceph-mon[77282]: osdmap e226: 3 total, 3 up, 3 in
Jan 31 07:47:45 compute-2 ceph-mon[77282]: pgmap v1491: 305 pgs: 305 active+clean; 346 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 10 MiB/s wr, 159 op/s
Jan 31 07:47:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/325263316' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:47:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/325263316' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:47:46 compute-2 nova_compute[226829]: 2026-01-31 07:47:46.813 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:47:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:46.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:47:46 compute-2 sudo[253441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:47:46 compute-2 sudo[253441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:46 compute-2 sudo[253441]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:46 compute-2 sudo[253467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:47:46 compute-2 sudo[253467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:46 compute-2 sudo[253467]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:47 compute-2 sudo[253506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:47:47 compute-2 sudo[253506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:47 compute-2 sudo[253506]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:47 compute-2 podman[253465]: 2026-01-31 07:47:47.04471304 +0000 UTC m=+0.085314662 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, managed_by=edpm_ansible)
Jan 31 07:47:47 compute-2 sudo[253542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:47:47 compute-2 sudo[253542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:47.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:47 compute-2 sudo[253542]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:48 compute-2 ceph-mon[77282]: pgmap v1492: 305 pgs: 305 active+clean; 365 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 9.8 MiB/s wr, 226 op/s
Jan 31 07:47:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:47:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:47:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:47:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:47:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:47:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:47:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:47:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:47:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:48.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:47:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:49.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:49 compute-2 nova_compute[226829]: 2026-01-31 07:47:49.304 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:50 compute-2 ceph-mon[77282]: pgmap v1493: 305 pgs: 305 active+clean; 388 MiB data, 774 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.5 MiB/s wr, 160 op/s
Jan 31 07:47:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:50.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:47:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:51.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:47:51 compute-2 nova_compute[226829]: 2026-01-31 07:47:51.286 226833 DEBUG nova.compute.manager [None req-5d24e897-cf33-40b5-9500-aea1c27c9bcc aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:47:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e227 e227: 3 total, 3 up, 3 in
Jan 31 07:47:51 compute-2 nova_compute[226829]: 2026-01-31 07:47:51.415 226833 INFO nova.compute.manager [None req-5d24e897-cf33-40b5-9500-aea1c27c9bcc aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] instance snapshotting
Jan 31 07:47:51 compute-2 nova_compute[226829]: 2026-01-31 07:47:51.815 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:51 compute-2 nova_compute[226829]: 2026-01-31 07:47:51.956 226833 INFO nova.virt.libvirt.driver [None req-5d24e897-cf33-40b5-9500-aea1c27c9bcc aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Beginning live snapshot process
Jan 31 07:47:52 compute-2 nova_compute[226829]: 2026-01-31 07:47:52.398 226833 DEBUG nova.virt.libvirt.imagebackend [None req-5d24e897-cf33-40b5-9500-aea1c27c9bcc aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 31 07:47:52 compute-2 ceph-mon[77282]: pgmap v1494: 305 pgs: 305 active+clean; 405 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 393 KiB/s rd, 2.9 MiB/s wr, 136 op/s
Jan 31 07:47:52 compute-2 ceph-mon[77282]: osdmap e227: 3 total, 3 up, 3 in
Jan 31 07:47:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:47:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:52.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:47:52 compute-2 nova_compute[226829]: 2026-01-31 07:47:52.963 226833 DEBUG nova.storage.rbd_utils [None req-5d24e897-cf33-40b5-9500-aea1c27c9bcc aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] creating snapshot(c0ef358efbdc42668be00878820461d8) on rbd image(1605ed48-f361-4895-81e2-915a59f90ec9_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 07:47:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:53.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e228 e228: 3 total, 3 up, 3 in
Jan 31 07:47:53 compute-2 ceph-mon[77282]: pgmap v1496: 305 pgs: 305 active+clean; 405 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 377 KiB/s rd, 2.8 MiB/s wr, 131 op/s
Jan 31 07:47:53 compute-2 sudo[253654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:47:53 compute-2 sudo[253654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:47:53 compute-2 sudo[253654]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:54 compute-2 nova_compute[226829]: 2026-01-31 07:47:54.049 226833 DEBUG nova.storage.rbd_utils [None req-5d24e897-cf33-40b5-9500-aea1c27c9bcc aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] cloning vms/1605ed48-f361-4895-81e2-915a59f90ec9_disk@c0ef358efbdc42668be00878820461d8 to images/2afd45fd-a049-4e1d-819a-36a36d22f6a2 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 07:47:54 compute-2 sudo[253679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:47:54 compute-2 sudo[253679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:54 compute-2 sudo[253679]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:54 compute-2 nova_compute[226829]: 2026-01-31 07:47:54.183 226833 DEBUG nova.storage.rbd_utils [None req-5d24e897-cf33-40b5-9500-aea1c27c9bcc aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] flattening images/2afd45fd-a049-4e1d-819a-36a36d22f6a2 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 31 07:47:54 compute-2 nova_compute[226829]: 2026-01-31 07:47:54.306 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:54 compute-2 sudo[253758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:47:54 compute-2 sudo[253758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:54 compute-2 sudo[253758]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:54 compute-2 sudo[253783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:47:54 compute-2 sudo[253783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:54 compute-2 sudo[253783]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:54 compute-2 nova_compute[226829]: 2026-01-31 07:47:54.797 226833 DEBUG nova.storage.rbd_utils [None req-5d24e897-cf33-40b5-9500-aea1c27c9bcc aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] removing snapshot(c0ef358efbdc42668be00878820461d8) on rbd image(1605ed48-f361-4895-81e2-915a59f90ec9_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 07:47:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:47:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:54.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:47:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/376835805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:47:54 compute-2 ceph-mon[77282]: osdmap e228: 3 total, 3 up, 3 in
Jan 31 07:47:54 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:47:54 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:47:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3959903741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:47:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e229 e229: 3 total, 3 up, 3 in
Jan 31 07:47:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:47:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:55.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:47:55 compute-2 nova_compute[226829]: 2026-01-31 07:47:55.259 226833 DEBUG nova.storage.rbd_utils [None req-5d24e897-cf33-40b5-9500-aea1c27c9bcc aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] creating snapshot(snap) on rbd image(2afd45fd-a049-4e1d-819a-36a36d22f6a2) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 07:47:56 compute-2 podman[253845]: 2026-01-31 07:47:56.177936135 +0000 UTC m=+0.061482117 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent)
Jan 31 07:47:56 compute-2 ceph-mon[77282]: pgmap v1498: 305 pgs: 305 active+clean; 405 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 81 KiB/s rd, 1.9 MiB/s wr, 48 op/s
Jan 31 07:47:56 compute-2 ceph-mon[77282]: osdmap e229: 3 total, 3 up, 3 in
Jan 31 07:47:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e230 e230: 3 total, 3 up, 3 in
Jan 31 07:47:56 compute-2 nova_compute[226829]: 2026-01-31 07:47:56.817 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:56.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:57.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:57 compute-2 ceph-mon[77282]: osdmap e230: 3 total, 3 up, 3 in
Jan 31 07:47:58 compute-2 ceph-mon[77282]: pgmap v1501: 305 pgs: 305 active+clean; 462 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.4 MiB/s wr, 128 op/s
Jan 31 07:47:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:47:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:58.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:47:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:47:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:47:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:47:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:59.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:47:59 compute-2 nova_compute[226829]: 2026-01-31 07:47:59.308 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:59 compute-2 nova_compute[226829]: 2026-01-31 07:47:59.327 226833 INFO nova.virt.libvirt.driver [None req-5d24e897-cf33-40b5-9500-aea1c27c9bcc aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Snapshot image upload complete
Jan 31 07:47:59 compute-2 nova_compute[226829]: 2026-01-31 07:47:59.328 226833 INFO nova.compute.manager [None req-5d24e897-cf33-40b5-9500-aea1c27c9bcc aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Took 7.91 seconds to snapshot the instance on the hypervisor.
Jan 31 07:48:00 compute-2 ceph-mon[77282]: pgmap v1502: 305 pgs: 305 active+clean; 484 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 7.7 MiB/s wr, 151 op/s
Jan 31 07:48:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:00.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:48:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:01.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:48:01 compute-2 nova_compute[226829]: 2026-01-31 07:48:01.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:48:01 compute-2 nova_compute[226829]: 2026-01-31 07:48:01.855 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:02 compute-2 ceph-mon[77282]: pgmap v1503: 305 pgs: 305 active+clean; 484 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 6.7 MiB/s wr, 162 op/s
Jan 31 07:48:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:02.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:48:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:03.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:48:03 compute-2 nova_compute[226829]: 2026-01-31 07:48:03.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:48:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:48:04 compute-2 nova_compute[226829]: 2026-01-31 07:48:04.310 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:04 compute-2 nova_compute[226829]: 2026-01-31 07:48:04.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:48:04 compute-2 nova_compute[226829]: 2026-01-31 07:48:04.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:48:04 compute-2 ceph-mon[77282]: pgmap v1504: 305 pgs: 305 active+clean; 484 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 5.8 MiB/s wr, 178 op/s
Jan 31 07:48:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e231 e231: 3 total, 3 up, 3 in
Jan 31 07:48:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:04.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:05.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:05 compute-2 ceph-mon[77282]: osdmap e231: 3 total, 3 up, 3 in
Jan 31 07:48:05 compute-2 ceph-mon[77282]: pgmap v1506: 305 pgs: 305 active+clean; 484 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.0 MiB/s wr, 84 op/s
Jan 31 07:48:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1783155386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:06 compute-2 nova_compute[226829]: 2026-01-31 07:48:06.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:48:06 compute-2 nova_compute[226829]: 2026-01-31 07:48:06.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:48:06 compute-2 nova_compute[226829]: 2026-01-31 07:48:06.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:48:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e232 e232: 3 total, 3 up, 3 in
Jan 31 07:48:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1085148854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:06 compute-2 ceph-mon[77282]: osdmap e232: 3 total, 3 up, 3 in
Jan 31 07:48:06 compute-2 nova_compute[226829]: 2026-01-31 07:48:06.857 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:48:06.857 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:48:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:48:06.858 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:48:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:48:06.858 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:48:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:06.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:07 compute-2 nova_compute[226829]: 2026-01-31 07:48:07.041 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-1605ed48-f361-4895-81e2-915a59f90ec9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:48:07 compute-2 nova_compute[226829]: 2026-01-31 07:48:07.041 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-1605ed48-f361-4895-81e2-915a59f90ec9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:48:07 compute-2 nova_compute[226829]: 2026-01-31 07:48:07.042 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 07:48:07 compute-2 nova_compute[226829]: 2026-01-31 07:48:07.042 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1605ed48-f361-4895-81e2-915a59f90ec9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:48:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:07.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e233 e233: 3 total, 3 up, 3 in
Jan 31 07:48:07 compute-2 nova_compute[226829]: 2026-01-31 07:48:07.909 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:48:07 compute-2 ceph-mon[77282]: pgmap v1508: 305 pgs: 305 active+clean; 484 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 153 op/s
Jan 31 07:48:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:08.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:48:09 compute-2 ceph-mon[77282]: osdmap e233: 3 total, 3 up, 3 in
Jan 31 07:48:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e234 e234: 3 total, 3 up, 3 in
Jan 31 07:48:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:09.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:09 compute-2 nova_compute[226829]: 2026-01-31 07:48:09.312 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:10 compute-2 ceph-mon[77282]: pgmap v1510: 305 pgs: 305 active+clean; 513 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.0 MiB/s wr, 180 op/s
Jan 31 07:48:10 compute-2 ceph-mon[77282]: osdmap e234: 3 total, 3 up, 3 in
Jan 31 07:48:10 compute-2 nova_compute[226829]: 2026-01-31 07:48:10.494 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:48:10 compute-2 nova_compute[226829]: 2026-01-31 07:48:10.559 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-1605ed48-f361-4895-81e2-915a59f90ec9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:48:10 compute-2 nova_compute[226829]: 2026-01-31 07:48:10.560 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 07:48:10 compute-2 nova_compute[226829]: 2026-01-31 07:48:10.562 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:48:10 compute-2 nova_compute[226829]: 2026-01-31 07:48:10.563 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:48:10 compute-2 nova_compute[226829]: 2026-01-31 07:48:10.622 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:48:10 compute-2 nova_compute[226829]: 2026-01-31 07:48:10.622 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:48:10 compute-2 nova_compute[226829]: 2026-01-31 07:48:10.623 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:48:10 compute-2 nova_compute[226829]: 2026-01-31 07:48:10.623 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:48:10 compute-2 nova_compute[226829]: 2026-01-31 07:48:10.624 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:48:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:48:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:10.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:48:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:48:11 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/369527385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:11 compute-2 nova_compute[226829]: 2026-01-31 07:48:11.092 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:48:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/369527385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:11 compute-2 nova_compute[226829]: 2026-01-31 07:48:11.200 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:48:11 compute-2 nova_compute[226829]: 2026-01-31 07:48:11.201 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:48:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:48:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:11.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:48:11 compute-2 nova_compute[226829]: 2026-01-31 07:48:11.364 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:48:11 compute-2 nova_compute[226829]: 2026-01-31 07:48:11.366 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4298MB free_disk=20.830928802490234GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:48:11 compute-2 nova_compute[226829]: 2026-01-31 07:48:11.366 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:48:11 compute-2 nova_compute[226829]: 2026-01-31 07:48:11.367 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:48:11 compute-2 nova_compute[226829]: 2026-01-31 07:48:11.506 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 1605ed48-f361-4895-81e2-915a59f90ec9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:48:11 compute-2 nova_compute[226829]: 2026-01-31 07:48:11.507 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:48:11 compute-2 nova_compute[226829]: 2026-01-31 07:48:11.507 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:48:11 compute-2 nova_compute[226829]: 2026-01-31 07:48:11.577 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:48:11 compute-2 nova_compute[226829]: 2026-01-31 07:48:11.859 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:12 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:48:12 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3531660523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:12 compute-2 nova_compute[226829]: 2026-01-31 07:48:12.038 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:48:12 compute-2 nova_compute[226829]: 2026-01-31 07:48:12.045 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:48:12 compute-2 nova_compute[226829]: 2026-01-31 07:48:12.155 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:48:12 compute-2 nova_compute[226829]: 2026-01-31 07:48:12.191 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:48:12 compute-2 nova_compute[226829]: 2026-01-31 07:48:12.192 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:48:12 compute-2 ceph-mon[77282]: pgmap v1512: 305 pgs: 305 active+clean; 504 MiB data, 845 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 7.8 MiB/s wr, 280 op/s
Jan 31 07:48:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4272989584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4063594915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3531660523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:12.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:13 compute-2 nova_compute[226829]: 2026-01-31 07:48:13.118 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:48:13 compute-2 nova_compute[226829]: 2026-01-31 07:48:13.155 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:48:13 compute-2 nova_compute[226829]: 2026-01-31 07:48:13.155 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:48:13 compute-2 nova_compute[226829]: 2026-01-31 07:48:13.155 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:48:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:48:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:13.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:48:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3177206803' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:48:14 compute-2 sudo[253918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:48:14 compute-2 sudo[253918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:14 compute-2 sudo[253918]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:14 compute-2 sudo[253943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:48:14 compute-2 sudo[253943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:14 compute-2 sudo[253943]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:14 compute-2 nova_compute[226829]: 2026-01-31 07:48:14.314 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:14 compute-2 ceph-mon[77282]: pgmap v1513: 305 pgs: 305 active+clean; 484 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 7.4 MiB/s wr, 199 op/s
Jan 31 07:48:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:14.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:15.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:16 compute-2 ceph-mon[77282]: pgmap v1514: 305 pgs: 305 active+clean; 484 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 158 op/s
Jan 31 07:48:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e235 e235: 3 total, 3 up, 3 in
Jan 31 07:48:16 compute-2 nova_compute[226829]: 2026-01-31 07:48:16.861 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:48:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:16.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:48:17 compute-2 podman[253969]: 2026-01-31 07:48:17.188871845 +0000 UTC m=+0.073339298 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 07:48:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:17.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:17 compute-2 ceph-mon[77282]: osdmap e235: 3 total, 3 up, 3 in
Jan 31 07:48:18 compute-2 ceph-mon[77282]: pgmap v1516: 305 pgs: 305 active+clean; 512 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.7 MiB/s wr, 185 op/s
Jan 31 07:48:18 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 31 07:48:18 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 31 07:48:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:48:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:18.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:48:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:48:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:48:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:19.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:48:19 compute-2 nova_compute[226829]: 2026-01-31 07:48:19.316 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:19 compute-2 ceph-mon[77282]: pgmap v1517: 305 pgs: 305 active+clean; 516 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 5.6 MiB/s wr, 171 op/s
Jan 31 07:48:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:20.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:21 compute-2 nova_compute[226829]: 2026-01-31 07:48:21.253 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:48:21.253 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:48:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:48:21.255 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:48:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:21.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:21 compute-2 nova_compute[226829]: 2026-01-31 07:48:21.865 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:21 compute-2 ceph-mon[77282]: pgmap v1518: 305 pgs: 305 active+clean; 516 MiB data, 855 MiB used, 20 GiB / 21 GiB avail; 453 KiB/s rd, 2.6 MiB/s wr, 169 op/s
Jan 31 07:48:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:48:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:22.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:48:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:48:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:23.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:48:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:48:24 compute-2 ceph-mon[77282]: pgmap v1519: 305 pgs: 305 active+clean; 516 MiB data, 855 MiB used, 20 GiB / 21 GiB avail; 481 KiB/s rd, 2.6 MiB/s wr, 227 op/s
Jan 31 07:48:24 compute-2 nova_compute[226829]: 2026-01-31 07:48:24.318 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:24.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:25.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:26 compute-2 ceph-mon[77282]: pgmap v1520: 305 pgs: 305 active+clean; 516 MiB data, 855 MiB used, 20 GiB / 21 GiB avail; 481 KiB/s rd, 2.6 MiB/s wr, 227 op/s
Jan 31 07:48:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e236 e236: 3 total, 3 up, 3 in
Jan 31 07:48:26 compute-2 nova_compute[226829]: 2026-01-31 07:48:26.867 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:48:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:26.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:48:27 compute-2 ceph-mon[77282]: osdmap e236: 3 total, 3 up, 3 in
Jan 31 07:48:27 compute-2 podman[254000]: 2026-01-31 07:48:27.155773998 +0000 UTC m=+0.041495410 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:48:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:27.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e237 e237: 3 total, 3 up, 3 in
Jan 31 07:48:28 compute-2 ceph-mon[77282]: pgmap v1522: 305 pgs: 305 active+clean; 516 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 191 KiB/s rd, 900 KiB/s wr, 235 op/s
Jan 31 07:48:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:28.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:48:29 compute-2 ceph-mon[77282]: osdmap e237: 3 total, 3 up, 3 in
Jan 31 07:48:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:29.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:29 compute-2 nova_compute[226829]: 2026-01-31 07:48:29.321 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:48:30.259 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:48:30 compute-2 ceph-mon[77282]: pgmap v1524: 305 pgs: 305 active+clean; 487 MiB data, 858 MiB used, 20 GiB / 21 GiB avail; 142 KiB/s rd, 45 KiB/s wr, 232 op/s
Jan 31 07:48:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:48:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:30.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:48:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:31.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:31 compute-2 ceph-mon[77282]: pgmap v1525: 305 pgs: 305 active+clean; 380 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 92 KiB/s rd, 57 KiB/s wr, 151 op/s
Jan 31 07:48:31 compute-2 nova_compute[226829]: 2026-01-31 07:48:31.870 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:32 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e238 e238: 3 total, 3 up, 3 in
Jan 31 07:48:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:48:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:32.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:48:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:33.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:33 compute-2 ceph-mon[77282]: osdmap e238: 3 total, 3 up, 3 in
Jan 31 07:48:33 compute-2 ceph-mon[77282]: pgmap v1527: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 334 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 18 KiB/s wr, 99 op/s
Jan 31 07:48:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1982302776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:48:34 compute-2 sudo[254024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:48:34 compute-2 sudo[254024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:34 compute-2 sudo[254024]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:34 compute-2 nova_compute[226829]: 2026-01-31 07:48:34.323 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:34 compute-2 sudo[254049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:48:34 compute-2 sudo[254049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:34 compute-2 sudo[254049]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:34 compute-2 nova_compute[226829]: 2026-01-31 07:48:34.536 226833 DEBUG oslo_concurrency.lockutils [None req-6062f908-4c3d-4acc-9d12-5898e5e7a388 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Acquiring lock "1605ed48-f361-4895-81e2-915a59f90ec9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:48:34 compute-2 nova_compute[226829]: 2026-01-31 07:48:34.537 226833 DEBUG oslo_concurrency.lockutils [None req-6062f908-4c3d-4acc-9d12-5898e5e7a388 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lock "1605ed48-f361-4895-81e2-915a59f90ec9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:48:34 compute-2 nova_compute[226829]: 2026-01-31 07:48:34.537 226833 DEBUG oslo_concurrency.lockutils [None req-6062f908-4c3d-4acc-9d12-5898e5e7a388 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Acquiring lock "1605ed48-f361-4895-81e2-915a59f90ec9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:48:34 compute-2 nova_compute[226829]: 2026-01-31 07:48:34.538 226833 DEBUG oslo_concurrency.lockutils [None req-6062f908-4c3d-4acc-9d12-5898e5e7a388 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lock "1605ed48-f361-4895-81e2-915a59f90ec9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:48:34 compute-2 nova_compute[226829]: 2026-01-31 07:48:34.538 226833 DEBUG oslo_concurrency.lockutils [None req-6062f908-4c3d-4acc-9d12-5898e5e7a388 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lock "1605ed48-f361-4895-81e2-915a59f90ec9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:48:34 compute-2 nova_compute[226829]: 2026-01-31 07:48:34.540 226833 INFO nova.compute.manager [None req-6062f908-4c3d-4acc-9d12-5898e5e7a388 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Terminating instance
Jan 31 07:48:34 compute-2 nova_compute[226829]: 2026-01-31 07:48:34.541 226833 DEBUG oslo_concurrency.lockutils [None req-6062f908-4c3d-4acc-9d12-5898e5e7a388 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Acquiring lock "refresh_cache-1605ed48-f361-4895-81e2-915a59f90ec9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:48:34 compute-2 nova_compute[226829]: 2026-01-31 07:48:34.541 226833 DEBUG oslo_concurrency.lockutils [None req-6062f908-4c3d-4acc-9d12-5898e5e7a388 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Acquired lock "refresh_cache-1605ed48-f361-4895-81e2-915a59f90ec9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:48:34 compute-2 nova_compute[226829]: 2026-01-31 07:48:34.541 226833 DEBUG nova.network.neutron [None req-6062f908-4c3d-4acc-9d12-5898e5e7a388 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:48:34 compute-2 nova_compute[226829]: 2026-01-31 07:48:34.765 226833 DEBUG nova.network.neutron [None req-6062f908-4c3d-4acc-9d12-5898e5e7a388 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:48:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:34.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:35 compute-2 nova_compute[226829]: 2026-01-31 07:48:35.153 226833 DEBUG nova.network.neutron [None req-6062f908-4c3d-4acc-9d12-5898e5e7a388 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:48:35 compute-2 nova_compute[226829]: 2026-01-31 07:48:35.172 226833 DEBUG oslo_concurrency.lockutils [None req-6062f908-4c3d-4acc-9d12-5898e5e7a388 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Releasing lock "refresh_cache-1605ed48-f361-4895-81e2-915a59f90ec9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:48:35 compute-2 nova_compute[226829]: 2026-01-31 07:48:35.172 226833 DEBUG nova.compute.manager [None req-6062f908-4c3d-4acc-9d12-5898e5e7a388 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:48:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:35.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:35 compute-2 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000040.scope: Deactivated successfully.
Jan 31 07:48:35 compute-2 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000040.scope: Consumed 15.854s CPU time.
Jan 31 07:48:35 compute-2 systemd-machined[195142]: Machine qemu-26-instance-00000040 terminated.
Jan 31 07:48:35 compute-2 nova_compute[226829]: 2026-01-31 07:48:35.793 226833 INFO nova.virt.libvirt.driver [-] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Instance destroyed successfully.
Jan 31 07:48:35 compute-2 nova_compute[226829]: 2026-01-31 07:48:35.795 226833 DEBUG nova.objects.instance [None req-6062f908-4c3d-4acc-9d12-5898e5e7a388 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lazy-loading 'resources' on Instance uuid 1605ed48-f361-4895-81e2-915a59f90ec9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:48:36 compute-2 ceph-mon[77282]: pgmap v1528: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 334 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 55 KiB/s rd, 15 KiB/s wr, 84 op/s
Jan 31 07:48:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e239 e239: 3 total, 3 up, 3 in
Jan 31 07:48:36 compute-2 nova_compute[226829]: 2026-01-31 07:48:36.872 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:48:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:36.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:48:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:48:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:37.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:48:37 compute-2 ceph-mon[77282]: osdmap e239: 3 total, 3 up, 3 in
Jan 31 07:48:37 compute-2 ceph-mon[77282]: pgmap v1530: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 211 MiB data, 662 MiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 15 KiB/s wr, 100 op/s
Jan 31 07:48:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:38.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:48:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:39.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:39 compute-2 nova_compute[226829]: 2026-01-31 07:48:39.325 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:40 compute-2 ceph-mon[77282]: pgmap v1531: 305 pgs: 305 active+clean; 200 MiB data, 661 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 3.7 KiB/s wr, 81 op/s
Jan 31 07:48:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:48:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:40.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:48:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:41.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:41 compute-2 nova_compute[226829]: 2026-01-31 07:48:41.553 226833 INFO nova.virt.libvirt.driver [None req-6062f908-4c3d-4acc-9d12-5898e5e7a388 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Deleting instance files /var/lib/nova/instances/1605ed48-f361-4895-81e2-915a59f90ec9_del
Jan 31 07:48:41 compute-2 nova_compute[226829]: 2026-01-31 07:48:41.555 226833 INFO nova.virt.libvirt.driver [None req-6062f908-4c3d-4acc-9d12-5898e5e7a388 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Deletion of /var/lib/nova/instances/1605ed48-f361-4895-81e2-915a59f90ec9_del complete
Jan 31 07:48:41 compute-2 nova_compute[226829]: 2026-01-31 07:48:41.633 226833 INFO nova.compute.manager [None req-6062f908-4c3d-4acc-9d12-5898e5e7a388 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Took 6.46 seconds to destroy the instance on the hypervisor.
Jan 31 07:48:41 compute-2 nova_compute[226829]: 2026-01-31 07:48:41.634 226833 DEBUG oslo.service.loopingcall [None req-6062f908-4c3d-4acc-9d12-5898e5e7a388 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:48:41 compute-2 nova_compute[226829]: 2026-01-31 07:48:41.634 226833 DEBUG nova.compute.manager [-] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:48:41 compute-2 nova_compute[226829]: 2026-01-31 07:48:41.635 226833 DEBUG nova.network.neutron [-] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:48:41 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3633645125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e240 e240: 3 total, 3 up, 3 in
Jan 31 07:48:41 compute-2 nova_compute[226829]: 2026-01-31 07:48:41.874 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:41 compute-2 nova_compute[226829]: 2026-01-31 07:48:41.942 226833 DEBUG nova.network.neutron [-] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:48:41 compute-2 nova_compute[226829]: 2026-01-31 07:48:41.957 226833 DEBUG nova.network.neutron [-] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:48:41 compute-2 nova_compute[226829]: 2026-01-31 07:48:41.977 226833 INFO nova.compute.manager [-] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Took 0.34 seconds to deallocate network for instance.
Jan 31 07:48:42 compute-2 nova_compute[226829]: 2026-01-31 07:48:42.040 226833 DEBUG oslo_concurrency.lockutils [None req-6062f908-4c3d-4acc-9d12-5898e5e7a388 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:48:42 compute-2 nova_compute[226829]: 2026-01-31 07:48:42.041 226833 DEBUG oslo_concurrency.lockutils [None req-6062f908-4c3d-4acc-9d12-5898e5e7a388 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:48:42 compute-2 nova_compute[226829]: 2026-01-31 07:48:42.109 226833 DEBUG oslo_concurrency.processutils [None req-6062f908-4c3d-4acc-9d12-5898e5e7a388 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:48:42 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:48:42 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3212683337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:42 compute-2 nova_compute[226829]: 2026-01-31 07:48:42.563 226833 DEBUG oslo_concurrency.processutils [None req-6062f908-4c3d-4acc-9d12-5898e5e7a388 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:48:42 compute-2 nova_compute[226829]: 2026-01-31 07:48:42.571 226833 DEBUG nova.compute.provider_tree [None req-6062f908-4c3d-4acc-9d12-5898e5e7a388 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:48:42 compute-2 nova_compute[226829]: 2026-01-31 07:48:42.601 226833 DEBUG nova.scheduler.client.report [None req-6062f908-4c3d-4acc-9d12-5898e5e7a388 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:48:42 compute-2 nova_compute[226829]: 2026-01-31 07:48:42.636 226833 DEBUG oslo_concurrency.lockutils [None req-6062f908-4c3d-4acc-9d12-5898e5e7a388 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:48:42 compute-2 nova_compute[226829]: 2026-01-31 07:48:42.674 226833 INFO nova.scheduler.client.report [None req-6062f908-4c3d-4acc-9d12-5898e5e7a388 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Deleted allocations for instance 1605ed48-f361-4895-81e2-915a59f90ec9
Jan 31 07:48:42 compute-2 nova_compute[226829]: 2026-01-31 07:48:42.741 226833 DEBUG oslo_concurrency.lockutils [None req-6062f908-4c3d-4acc-9d12-5898e5e7a388 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lock "1605ed48-f361-4895-81e2-915a59f90ec9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:48:42 compute-2 ceph-mon[77282]: pgmap v1532: 305 pgs: 305 active+clean; 148 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 71 KiB/s rd, 4.3 KiB/s wr, 104 op/s
Jan 31 07:48:42 compute-2 ceph-mon[77282]: osdmap e240: 3 total, 3 up, 3 in
Jan 31 07:48:42 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3212683337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:48:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:42.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:48:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:48:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:43.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:48:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:48:44 compute-2 ceph-mon[77282]: pgmap v1534: 305 pgs: 305 active+clean; 121 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 4.9 KiB/s wr, 99 op/s
Jan 31 07:48:44 compute-2 nova_compute[226829]: 2026-01-31 07:48:44.329 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:44.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:45.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/626385488' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:48:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/626385488' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:48:46 compute-2 nova_compute[226829]: 2026-01-31 07:48:46.876 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:46 compute-2 ceph-mon[77282]: pgmap v1535: 305 pgs: 305 active+clean; 121 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 2.5 KiB/s wr, 41 op/s
Jan 31 07:48:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4148921643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:48:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:46.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:48:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:47.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:48:48 compute-2 ceph-mon[77282]: pgmap v1536: 305 pgs: 305 active+clean; 153 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 1.6 MiB/s wr, 72 op/s
Jan 31 07:48:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3370960855' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:48:48 compute-2 podman[254125]: 2026-01-31 07:48:48.234077541 +0000 UTC m=+0.111278491 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller)
Jan 31 07:48:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:48.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:48:49 compute-2 nova_compute[226829]: 2026-01-31 07:48:49.330 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:49.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:50 compute-2 ceph-mon[77282]: pgmap v1537: 305 pgs: 305 active+clean; 154 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 31 07:48:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3811372512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:50 compute-2 nova_compute[226829]: 2026-01-31 07:48:50.792 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845715.7908144, 1605ed48-f361-4895-81e2-915a59f90ec9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:48:50 compute-2 nova_compute[226829]: 2026-01-31 07:48:50.793 226833 INFO nova.compute.manager [-] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] VM Stopped (Lifecycle Event)
Jan 31 07:48:50 compute-2 nova_compute[226829]: 2026-01-31 07:48:50.822 226833 DEBUG nova.compute.manager [None req-b35f3e4c-c446-4455-bed2-eb53c0faffab - - - - - -] [instance: 1605ed48-f361-4895-81e2-915a59f90ec9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:48:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:50.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:48:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:51.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:48:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e241 e241: 3 total, 3 up, 3 in
Jan 31 07:48:51 compute-2 nova_compute[226829]: 2026-01-31 07:48:51.921 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:52 compute-2 ceph-mon[77282]: pgmap v1538: 305 pgs: 305 active+clean; 88 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Jan 31 07:48:52 compute-2 ceph-mon[77282]: osdmap e241: 3 total, 3 up, 3 in
Jan 31 07:48:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e242 e242: 3 total, 3 up, 3 in
Jan 31 07:48:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:48:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:52.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:48:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:48:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:53.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:48:53 compute-2 ceph-mon[77282]: osdmap e242: 3 total, 3 up, 3 in
Jan 31 07:48:53 compute-2 ceph-mon[77282]: pgmap v1541: 305 pgs: 305 active+clean; 88 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 170 KiB/s rd, 2.7 MiB/s wr, 102 op/s
Jan 31 07:48:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:48:54 compute-2 nova_compute[226829]: 2026-01-31 07:48:54.332 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:54 compute-2 sudo[254155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:48:54 compute-2 sudo[254155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:54 compute-2 sudo[254155]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:54 compute-2 sudo[254180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:48:54 compute-2 sudo[254180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:54 compute-2 sudo[254180]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:54 compute-2 sudo[254205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:48:54 compute-2 sudo[254205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:54 compute-2 sudo[254205]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:54 compute-2 sudo[254230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:48:54 compute-2 sudo[254230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:54 compute-2 sudo[254230]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:54 compute-2 sudo[254255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:48:54 compute-2 sudo[254255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:54 compute-2 sudo[254255]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:54 compute-2 sudo[254280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 07:48:54 compute-2 sudo[254280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e243 e243: 3 total, 3 up, 3 in
Jan 31 07:48:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:55.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:55 compute-2 sudo[254280]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:48:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:55.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:48:55 compute-2 sudo[254326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:48:55 compute-2 sudo[254326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:55 compute-2 sudo[254326]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:55 compute-2 sudo[254351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:48:55 compute-2 sudo[254351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:55 compute-2 sudo[254351]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:55 compute-2 sudo[254376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:48:55 compute-2 sudo[254376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:55 compute-2 sudo[254376]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:55 compute-2 sudo[254401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:48:55 compute-2 sudo[254401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:55 compute-2 sudo[254401]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:56 compute-2 ceph-mon[77282]: osdmap e243: 3 total, 3 up, 3 in
Jan 31 07:48:56 compute-2 ceph-mon[77282]: pgmap v1543: 305 pgs: 305 active+clean; 88 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 184 KiB/s rd, 4.0 KiB/s wr, 71 op/s
Jan 31 07:48:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:48:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:48:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:48:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:48:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 07:48:56 compute-2 nova_compute[226829]: 2026-01-31 07:48:56.924 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:48:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:57.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:48:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 07:48:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:48:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:48:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:48:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:48:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:48:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:48:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:57.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:58 compute-2 podman[254458]: 2026-01-31 07:48:58.18514554 +0000 UTC m=+0.064387257 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 07:48:58 compute-2 ceph-mon[77282]: pgmap v1544: 305 pgs: 305 active+clean; 88 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 28 KiB/s wr, 193 op/s
Jan 31 07:48:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:48:58.747 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:48:58 compute-2 nova_compute[226829]: 2026-01-31 07:48:58.747 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:48:58.749 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:48:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:59.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:48:59 compute-2 nova_compute[226829]: 2026-01-31 07:48:59.334 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:48:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:59.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:59 compute-2 ceph-mon[77282]: pgmap v1545: 305 pgs: 305 active+clean; 82 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 24 KiB/s wr, 203 op/s
Jan 31 07:49:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1907091529' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:00.859445) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845740859583, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2534, "num_deletes": 262, "total_data_size": 5638714, "memory_usage": 5725760, "flush_reason": "Manual Compaction"}
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845740886625, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 3691359, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33988, "largest_seqno": 36517, "table_properties": {"data_size": 3680901, "index_size": 6760, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 22153, "raw_average_key_size": 21, "raw_value_size": 3659861, "raw_average_value_size": 3492, "num_data_blocks": 291, "num_entries": 1048, "num_filter_entries": 1048, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845558, "oldest_key_time": 1769845558, "file_creation_time": 1769845740, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 27233 microseconds, and 12514 cpu microseconds.
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:00.886679) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 3691359 bytes OK
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:00.886698) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:00.888952) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:00.888978) EVENT_LOG_v1 {"time_micros": 1769845740888971, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:00.889002) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 5627518, prev total WAL file size 5627518, number of live WAL files 2.
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:00.890370) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(3604KB)], [63(10141KB)]
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845740890446, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 14076352, "oldest_snapshot_seqno": -1}
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 6329 keys, 12109979 bytes, temperature: kUnknown
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845740951162, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 12109979, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12063841, "index_size": 29196, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15877, "raw_key_size": 161578, "raw_average_key_size": 25, "raw_value_size": 11946465, "raw_average_value_size": 1887, "num_data_blocks": 1174, "num_entries": 6329, "num_filter_entries": 6329, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769845740, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:00.951421) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 12109979 bytes
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:00.952590) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 231.4 rd, 199.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 9.9 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(7.1) write-amplify(3.3) OK, records in: 6866, records dropped: 537 output_compression: NoCompression
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:00.952609) EVENT_LOG_v1 {"time_micros": 1769845740952600, "job": 38, "event": "compaction_finished", "compaction_time_micros": 60822, "compaction_time_cpu_micros": 27555, "output_level": 6, "num_output_files": 1, "total_output_size": 12109979, "num_input_records": 6866, "num_output_records": 6329, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845740953052, "job": 38, "event": "table_file_deletion", "file_number": 65}
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845740953896, "job": 38, "event": "table_file_deletion", "file_number": 63}
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:00.890255) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:00.954111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:00.954121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:00.954127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:00.954132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:49:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:00.954136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:49:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:49:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:01.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:49:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:01.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:01 compute-2 nova_compute[226829]: 2026-01-31 07:49:01.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:49:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e244 e244: 3 total, 3 up, 3 in
Jan 31 07:49:01 compute-2 ceph-mon[77282]: pgmap v1546: 305 pgs: 305 active+clean; 49 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 24 KiB/s wr, 223 op/s
Jan 31 07:49:01 compute-2 ceph-mon[77282]: osdmap e244: 3 total, 3 up, 3 in
Jan 31 07:49:01 compute-2 anacron[52153]: Job `cron.daily' started
Jan 31 07:49:01 compute-2 anacron[52153]: Job `cron.daily' terminated
Jan 31 07:49:01 compute-2 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 07:49:01 compute-2 nova_compute[226829]: 2026-01-31 07:49:01.925 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:01 compute-2 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 07:49:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e245 e245: 3 total, 3 up, 3 in
Jan 31 07:49:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:03.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:03.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:03 compute-2 nova_compute[226829]: 2026-01-31 07:49:03.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:49:03 compute-2 sudo[254483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:49:03 compute-2 sudo[254483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:49:03 compute-2 sudo[254483]: pam_unix(sudo:session): session closed for user root
Jan 31 07:49:03 compute-2 sudo[254508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:49:03 compute-2 sudo[254508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:49:03 compute-2 sudo[254508]: pam_unix(sudo:session): session closed for user root
Jan 31 07:49:03 compute-2 ceph-mon[77282]: osdmap e245: 3 total, 3 up, 3 in
Jan 31 07:49:03 compute-2 ceph-mon[77282]: pgmap v1549: 305 pgs: 305 active+clean; 41 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 25 KiB/s wr, 234 op/s
Jan 31 07:49:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:49:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:49:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:49:04 compute-2 nova_compute[226829]: 2026-01-31 07:49:04.376 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:04 compute-2 nova_compute[226829]: 2026-01-31 07:49:04.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:49:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:05.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1096458877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:49:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:05.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:49:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:05.752 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:49:06 compute-2 ceph-mon[77282]: pgmap v1550: 305 pgs: 305 active+clean; 41 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 595 KiB/s rd, 4.5 KiB/s wr, 95 op/s
Jan 31 07:49:06 compute-2 nova_compute[226829]: 2026-01-31 07:49:06.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:49:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:06.858 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:49:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:06.859 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:49:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:06.859 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:49:06 compute-2 nova_compute[226829]: 2026-01-31 07:49:06.927 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:49:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:07.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:49:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/968428789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:49:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:07.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:49:08 compute-2 ceph-mon[77282]: pgmap v1551: 305 pgs: 305 active+clean; 56 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 735 KiB/s wr, 70 op/s
Jan 31 07:49:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2089316534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2812782918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:08 compute-2 nova_compute[226829]: 2026-01-31 07:49:08.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:49:08 compute-2 nova_compute[226829]: 2026-01-31 07:49:08.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:49:08 compute-2 nova_compute[226829]: 2026-01-31 07:49:08.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:49:08 compute-2 nova_compute[226829]: 2026-01-31 07:49:08.521 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:49:08 compute-2 nova_compute[226829]: 2026-01-31 07:49:08.522 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:49:08 compute-2 nova_compute[226829]: 2026-01-31 07:49:08.523 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:49:08 compute-2 nova_compute[226829]: 2026-01-31 07:49:08.551 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:49:08 compute-2 nova_compute[226829]: 2026-01-31 07:49:08.551 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:49:08 compute-2 nova_compute[226829]: 2026-01-31 07:49:08.552 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:49:08 compute-2 nova_compute[226829]: 2026-01-31 07:49:08.552 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:49:08 compute-2 nova_compute[226829]: 2026-01-31 07:49:08.553 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:49:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:09.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:49:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:49:09 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2827750949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:09 compute-2 nova_compute[226829]: 2026-01-31 07:49:09.105 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:49:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2410917404' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2827750949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:09 compute-2 nova_compute[226829]: 2026-01-31 07:49:09.295 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:49:09 compute-2 nova_compute[226829]: 2026-01-31 07:49:09.297 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4585MB free_disk=20.975112915039062GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:49:09 compute-2 nova_compute[226829]: 2026-01-31 07:49:09.297 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:49:09 compute-2 nova_compute[226829]: 2026-01-31 07:49:09.297 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:49:09 compute-2 nova_compute[226829]: 2026-01-31 07:49:09.361 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:49:09 compute-2 nova_compute[226829]: 2026-01-31 07:49:09.362 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:49:09 compute-2 nova_compute[226829]: 2026-01-31 07:49:09.378 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:49:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:09.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:09 compute-2 nova_compute[226829]: 2026-01-31 07:49:09.397 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:49:09 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2125014550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:09 compute-2 nova_compute[226829]: 2026-01-31 07:49:09.803 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:49:09 compute-2 nova_compute[226829]: 2026-01-31 07:49:09.810 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:49:09 compute-2 nova_compute[226829]: 2026-01-31 07:49:09.833 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:49:09 compute-2 nova_compute[226829]: 2026-01-31 07:49:09.858 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:49:09 compute-2 nova_compute[226829]: 2026-01-31 07:49:09.859 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:49:10 compute-2 ceph-mon[77282]: pgmap v1552: 305 pgs: 305 active+clean; 64 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 838 KiB/s wr, 35 op/s
Jan 31 07:49:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3542332933' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:49:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2125014550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/937353425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:49:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:11.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:11.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0.
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:11.540631) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845751540706, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 423, "num_deletes": 252, "total_data_size": 446390, "memory_usage": 455200, "flush_reason": "Manual Compaction"}
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845751545566, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 293730, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36522, "largest_seqno": 36940, "table_properties": {"data_size": 291266, "index_size": 564, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6848, "raw_average_key_size": 21, "raw_value_size": 286145, "raw_average_value_size": 877, "num_data_blocks": 24, "num_entries": 326, "num_filter_entries": 326, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845741, "oldest_key_time": 1769845741, "file_creation_time": 1769845751, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 4997 microseconds, and 1835 cpu microseconds.
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:11.545629) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 293730 bytes OK
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:11.545651) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:11.547637) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:11.547659) EVENT_LOG_v1 {"time_micros": 1769845751547652, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:11.547682) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 443681, prev total WAL file size 443681, number of live WAL files 2.
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:11.548236) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303038' seq:72057594037927935, type:22 .. '6D6772737461740031323539' seq:0, type:0; will stop at (end)
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(286KB)], [66(11MB)]
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845751548281, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 12403709, "oldest_snapshot_seqno": -1}
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 6135 keys, 8560303 bytes, temperature: kUnknown
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845751603772, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 8560303, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8520146, "index_size": 23727, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15365, "raw_key_size": 157730, "raw_average_key_size": 25, "raw_value_size": 8410710, "raw_average_value_size": 1370, "num_data_blocks": 946, "num_entries": 6135, "num_filter_entries": 6135, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769845751, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:11.604173) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 8560303 bytes
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:11.605462) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 223.0 rd, 153.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.5 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(71.4) write-amplify(29.1) OK, records in: 6655, records dropped: 520 output_compression: NoCompression
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:11.605493) EVENT_LOG_v1 {"time_micros": 1769845751605478, "job": 40, "event": "compaction_finished", "compaction_time_micros": 55627, "compaction_time_cpu_micros": 32182, "output_level": 6, "num_output_files": 1, "total_output_size": 8560303, "num_input_records": 6655, "num_output_records": 6135, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845751605848, "job": 40, "event": "table_file_deletion", "file_number": 68}
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845751607877, "job": 40, "event": "table_file_deletion", "file_number": 66}
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:11.548158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:11.607937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:11.607943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:11.607946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:11.607948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:49:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:49:11.607951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:49:11 compute-2 nova_compute[226829]: 2026-01-31 07:49:11.930 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:12 compute-2 ceph-mon[77282]: pgmap v1553: 305 pgs: 305 active+clean; 88 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 2.3 MiB/s wr, 55 op/s
Jan 31 07:49:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:49:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:13.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:49:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:13.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:13 compute-2 nova_compute[226829]: 2026-01-31 07:49:13.826 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:49:13 compute-2 nova_compute[226829]: 2026-01-31 07:49:13.827 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:49:13 compute-2 nova_compute[226829]: 2026-01-31 07:49:13.828 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:49:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:49:14 compute-2 nova_compute[226829]: 2026-01-31 07:49:14.379 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:14 compute-2 sudo[254583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:49:14 compute-2 sudo[254583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:49:14 compute-2 sudo[254583]: pam_unix(sudo:session): session closed for user root
Jan 31 07:49:14 compute-2 ceph-mon[77282]: pgmap v1554: 305 pgs: 305 active+clean; 88 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 272 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Jan 31 07:49:14 compute-2 sudo[254608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:49:14 compute-2 sudo[254608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:49:14 compute-2 sudo[254608]: pam_unix(sudo:session): session closed for user root
Jan 31 07:49:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:15.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:15.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:15 compute-2 ceph-mon[77282]: pgmap v1555: 305 pgs: 305 active+clean; 88 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 227 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Jan 31 07:49:16 compute-2 nova_compute[226829]: 2026-01-31 07:49:16.933 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:49:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:17.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:49:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:17.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:18 compute-2 ceph-mon[77282]: pgmap v1556: 305 pgs: 305 active+clean; 88 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Jan 31 07:49:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:49:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:49:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:19.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:49:19 compute-2 podman[254635]: 2026-01-31 07:49:19.218963664 +0000 UTC m=+0.090213284 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 07:49:19 compute-2 nova_compute[226829]: 2026-01-31 07:49:19.380 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:19.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:19 compute-2 nova_compute[226829]: 2026-01-31 07:49:19.754 226833 DEBUG nova.compute.manager [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Jan 31 07:49:19 compute-2 ceph-mon[77282]: pgmap v1557: 305 pgs: 305 active+clean; 88 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 96 op/s
Jan 31 07:49:20 compute-2 nova_compute[226829]: 2026-01-31 07:49:20.042 226833 DEBUG oslo_concurrency.lockutils [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:49:20 compute-2 nova_compute[226829]: 2026-01-31 07:49:20.043 226833 DEBUG oslo_concurrency.lockutils [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:49:20 compute-2 nova_compute[226829]: 2026-01-31 07:49:20.251 226833 DEBUG nova.objects.instance [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'pci_requests' on Instance uuid 4a932d78-e0fe-4c23-a756-916144472a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:49:20 compute-2 nova_compute[226829]: 2026-01-31 07:49:20.735 226833 DEBUG nova.virt.hardware [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:49:20 compute-2 nova_compute[226829]: 2026-01-31 07:49:20.735 226833 INFO nova.compute.claims [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:49:20 compute-2 nova_compute[226829]: 2026-01-31 07:49:20.736 226833 DEBUG nova.objects.instance [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'resources' on Instance uuid 4a932d78-e0fe-4c23-a756-916144472a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:49:20 compute-2 nova_compute[226829]: 2026-01-31 07:49:20.858 226833 DEBUG nova.objects.instance [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4a932d78-e0fe-4c23-a756-916144472a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:49:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:49:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:21.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:49:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:21.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:21 compute-2 nova_compute[226829]: 2026-01-31 07:49:21.732 226833 INFO nova.compute.resource_tracker [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Updating resource usage from migration af6e1651-e4b2-4410-bf6e-1d810c0a085c
Jan 31 07:49:21 compute-2 nova_compute[226829]: 2026-01-31 07:49:21.733 226833 DEBUG nova.compute.resource_tracker [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Starting to track incoming migration af6e1651-e4b2-4410-bf6e-1d810c0a085c with flavor e3bd1dad-95f3-4ed9-94b4-27245cd798b5 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 31 07:49:21 compute-2 nova_compute[226829]: 2026-01-31 07:49:21.935 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:21 compute-2 ceph-mon[77282]: pgmap v1558: 305 pgs: 305 active+clean; 88 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 89 op/s
Jan 31 07:49:22 compute-2 nova_compute[226829]: 2026-01-31 07:49:22.232 226833 DEBUG oslo_concurrency.processutils [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:49:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:49:22 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2455197800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:22 compute-2 nova_compute[226829]: 2026-01-31 07:49:22.631 226833 DEBUG oslo_concurrency.processutils [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:49:22 compute-2 nova_compute[226829]: 2026-01-31 07:49:22.635 226833 DEBUG nova.compute.provider_tree [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:49:22 compute-2 nova_compute[226829]: 2026-01-31 07:49:22.835 226833 DEBUG nova.scheduler.client.report [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:49:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2455197800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:23.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:23 compute-2 nova_compute[226829]: 2026-01-31 07:49:23.171 226833 DEBUG oslo_concurrency.lockutils [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 3.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:49:23 compute-2 nova_compute[226829]: 2026-01-31 07:49:23.172 226833 INFO nova.compute.manager [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Migrating
Jan 31 07:49:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:49:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:23.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:49:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:49:24 compute-2 ceph-mon[77282]: pgmap v1559: 305 pgs: 305 active+clean; 88 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 07:49:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e246 e246: 3 total, 3 up, 3 in
Jan 31 07:49:24 compute-2 nova_compute[226829]: 2026-01-31 07:49:24.382 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:25.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:25 compute-2 ceph-mon[77282]: osdmap e246: 3 total, 3 up, 3 in
Jan 31 07:49:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:25.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:26 compute-2 ceph-mon[77282]: pgmap v1561: 305 pgs: 305 active+clean; 88 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 78 op/s
Jan 31 07:49:26 compute-2 nova_compute[226829]: 2026-01-31 07:49:26.938 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:27.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:27.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:28 compute-2 sshd-session[254688]: Accepted publickey for nova from 192.168.122.101 port 58934 ssh2: ECDSA SHA256:x674mWemszn5UyYA1PQSm9fK8+OEaBfRnNSUktYnOE0
Jan 31 07:49:28 compute-2 systemd[1]: Created slice User Slice of UID 42436.
Jan 31 07:49:28 compute-2 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 31 07:49:28 compute-2 systemd-logind[801]: New session 51 of user nova.
Jan 31 07:49:28 compute-2 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 31 07:49:28 compute-2 ceph-mon[77282]: pgmap v1562: 305 pgs: 305 active+clean; 106 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 320 KiB/s rd, 1.4 MiB/s wr, 65 op/s
Jan 31 07:49:28 compute-2 systemd[1]: Starting User Manager for UID 42436...
Jan 31 07:49:28 compute-2 podman[254690]: 2026-01-31 07:49:28.535920565 +0000 UTC m=+0.053001710 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:49:28 compute-2 systemd[254709]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 31 07:49:28 compute-2 systemd[254709]: Queued start job for default target Main User Target.
Jan 31 07:49:28 compute-2 systemd[254709]: Created slice User Application Slice.
Jan 31 07:49:28 compute-2 systemd[254709]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 07:49:28 compute-2 systemd[254709]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 07:49:28 compute-2 systemd[254709]: Reached target Paths.
Jan 31 07:49:28 compute-2 systemd[254709]: Reached target Timers.
Jan 31 07:49:28 compute-2 systemd[254709]: Starting D-Bus User Message Bus Socket...
Jan 31 07:49:28 compute-2 systemd[254709]: Starting Create User's Volatile Files and Directories...
Jan 31 07:49:28 compute-2 systemd[254709]: Finished Create User's Volatile Files and Directories.
Jan 31 07:49:28 compute-2 systemd[254709]: Listening on D-Bus User Message Bus Socket.
Jan 31 07:49:28 compute-2 systemd[254709]: Reached target Sockets.
Jan 31 07:49:28 compute-2 systemd[254709]: Reached target Basic System.
Jan 31 07:49:28 compute-2 systemd[254709]: Reached target Main User Target.
Jan 31 07:49:28 compute-2 systemd[254709]: Startup finished in 150ms.
Jan 31 07:49:28 compute-2 systemd[1]: Started User Manager for UID 42436.
Jan 31 07:49:28 compute-2 systemd[1]: Started Session 51 of User nova.
Jan 31 07:49:28 compute-2 sshd-session[254688]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 31 07:49:28 compute-2 sshd-session[254724]: Received disconnect from 192.168.122.101 port 58934:11: disconnected by user
Jan 31 07:49:28 compute-2 sshd-session[254724]: Disconnected from user nova 192.168.122.101 port 58934
Jan 31 07:49:28 compute-2 sshd-session[254688]: pam_unix(sshd:session): session closed for user nova
Jan 31 07:49:28 compute-2 systemd[1]: session-51.scope: Deactivated successfully.
Jan 31 07:49:28 compute-2 systemd-logind[801]: Session 51 logged out. Waiting for processes to exit.
Jan 31 07:49:28 compute-2 systemd-logind[801]: Removed session 51.
Jan 31 07:49:28 compute-2 sshd-session[254726]: Accepted publickey for nova from 192.168.122.101 port 58940 ssh2: ECDSA SHA256:x674mWemszn5UyYA1PQSm9fK8+OEaBfRnNSUktYnOE0
Jan 31 07:49:28 compute-2 systemd-logind[801]: New session 53 of user nova.
Jan 31 07:49:28 compute-2 systemd[1]: Started Session 53 of User nova.
Jan 31 07:49:28 compute-2 sshd-session[254726]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 31 07:49:28 compute-2 sshd-session[254729]: Received disconnect from 192.168.122.101 port 58940:11: disconnected by user
Jan 31 07:49:28 compute-2 sshd-session[254729]: Disconnected from user nova 192.168.122.101 port 58940
Jan 31 07:49:29 compute-2 sshd-session[254726]: pam_unix(sshd:session): session closed for user nova
Jan 31 07:49:29 compute-2 systemd[1]: session-53.scope: Deactivated successfully.
Jan 31 07:49:29 compute-2 systemd-logind[801]: Session 53 logged out. Waiting for processes to exit.
Jan 31 07:49:29 compute-2 systemd-logind[801]: Removed session 53.
Jan 31 07:49:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:49:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:49:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:29.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:49:29 compute-2 nova_compute[226829]: 2026-01-31 07:49:29.384 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:29.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:29 compute-2 ceph-mon[77282]: pgmap v1563: 305 pgs: 305 active+clean; 121 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 407 KiB/s rd, 2.6 MiB/s wr, 99 op/s
Jan 31 07:49:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:49:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:31.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:49:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:49:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:31.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:49:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e247 e247: 3 total, 3 up, 3 in
Jan 31 07:49:31 compute-2 nova_compute[226829]: 2026-01-31 07:49:31.940 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:32 compute-2 ceph-mon[77282]: pgmap v1564: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 408 KiB/s rd, 2.6 MiB/s wr, 101 op/s
Jan 31 07:49:32 compute-2 ceph-mon[77282]: osdmap e247: 3 total, 3 up, 3 in
Jan 31 07:49:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:33.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:33 compute-2 nova_compute[226829]: 2026-01-31 07:49:33.128 226833 DEBUG nova.compute.manager [req-c8dc6299-8fc8-4b0b-b482-02fc0981b98b req-99cfee2c-61c2-41e4-853f-03325c1dd09a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Received event network-vif-unplugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:49:33 compute-2 nova_compute[226829]: 2026-01-31 07:49:33.128 226833 DEBUG oslo_concurrency.lockutils [req-c8dc6299-8fc8-4b0b-b482-02fc0981b98b req-99cfee2c-61c2-41e4-853f-03325c1dd09a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4a932d78-e0fe-4c23-a756-916144472a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:49:33 compute-2 nova_compute[226829]: 2026-01-31 07:49:33.129 226833 DEBUG oslo_concurrency.lockutils [req-c8dc6299-8fc8-4b0b-b482-02fc0981b98b req-99cfee2c-61c2-41e4-853f-03325c1dd09a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:49:33 compute-2 nova_compute[226829]: 2026-01-31 07:49:33.129 226833 DEBUG oslo_concurrency.lockutils [req-c8dc6299-8fc8-4b0b-b482-02fc0981b98b req-99cfee2c-61c2-41e4-853f-03325c1dd09a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:49:33 compute-2 nova_compute[226829]: 2026-01-31 07:49:33.129 226833 DEBUG nova.compute.manager [req-c8dc6299-8fc8-4b0b-b482-02fc0981b98b req-99cfee2c-61c2-41e4-853f-03325c1dd09a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] No waiting events found dispatching network-vif-unplugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:49:33 compute-2 nova_compute[226829]: 2026-01-31 07:49:33.129 226833 WARNING nova.compute.manager [req-c8dc6299-8fc8-4b0b-b482-02fc0981b98b req-99cfee2c-61c2-41e4-853f-03325c1dd09a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Received unexpected event network-vif-unplugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 for instance with vm_state active and task_state resize_migrated.
Jan 31 07:49:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:33.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:33 compute-2 nova_compute[226829]: 2026-01-31 07:49:33.501 226833 INFO nova.network.neutron [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Updating port b1f53ff3-6ad6-4599-8b83-463239ecd8c8 with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 31 07:49:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:49:34 compute-2 nova_compute[226829]: 2026-01-31 07:49:34.386 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:34 compute-2 ceph-mon[77282]: pgmap v1566: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 444 KiB/s rd, 2.9 MiB/s wr, 111 op/s
Jan 31 07:49:34 compute-2 sudo[254735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:49:34 compute-2 sudo[254735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:49:34 compute-2 sudo[254735]: pam_unix(sudo:session): session closed for user root
Jan 31 07:49:34 compute-2 sudo[254760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:49:34 compute-2 sudo[254760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:49:34 compute-2 sudo[254760]: pam_unix(sudo:session): session closed for user root
Jan 31 07:49:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:49:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:35.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:49:35 compute-2 nova_compute[226829]: 2026-01-31 07:49:35.111 226833 DEBUG oslo_concurrency.lockutils [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "refresh_cache-4a932d78-e0fe-4c23-a756-916144472a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:49:35 compute-2 nova_compute[226829]: 2026-01-31 07:49:35.112 226833 DEBUG oslo_concurrency.lockutils [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquired lock "refresh_cache-4a932d78-e0fe-4c23-a756-916144472a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:49:35 compute-2 nova_compute[226829]: 2026-01-31 07:49:35.112 226833 DEBUG nova.network.neutron [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:49:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:35.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:35 compute-2 nova_compute[226829]: 2026-01-31 07:49:35.586 226833 DEBUG nova.compute.manager [req-b1c3ef87-be34-4c2f-afb5-d033dbebad7d req-431d0c50-cf46-4cab-a474-9526278376b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Received event network-vif-plugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:49:35 compute-2 nova_compute[226829]: 2026-01-31 07:49:35.586 226833 DEBUG oslo_concurrency.lockutils [req-b1c3ef87-be34-4c2f-afb5-d033dbebad7d req-431d0c50-cf46-4cab-a474-9526278376b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4a932d78-e0fe-4c23-a756-916144472a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:49:35 compute-2 nova_compute[226829]: 2026-01-31 07:49:35.586 226833 DEBUG oslo_concurrency.lockutils [req-b1c3ef87-be34-4c2f-afb5-d033dbebad7d req-431d0c50-cf46-4cab-a474-9526278376b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:49:35 compute-2 nova_compute[226829]: 2026-01-31 07:49:35.586 226833 DEBUG oslo_concurrency.lockutils [req-b1c3ef87-be34-4c2f-afb5-d033dbebad7d req-431d0c50-cf46-4cab-a474-9526278376b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:49:35 compute-2 nova_compute[226829]: 2026-01-31 07:49:35.587 226833 DEBUG nova.compute.manager [req-b1c3ef87-be34-4c2f-afb5-d033dbebad7d req-431d0c50-cf46-4cab-a474-9526278376b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] No waiting events found dispatching network-vif-plugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:49:35 compute-2 nova_compute[226829]: 2026-01-31 07:49:35.587 226833 WARNING nova.compute.manager [req-b1c3ef87-be34-4c2f-afb5-d033dbebad7d req-431d0c50-cf46-4cab-a474-9526278376b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Received unexpected event network-vif-plugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 for instance with vm_state active and task_state resize_migrated.
Jan 31 07:49:35 compute-2 nova_compute[226829]: 2026-01-31 07:49:35.587 226833 DEBUG nova.compute.manager [req-b1c3ef87-be34-4c2f-afb5-d033dbebad7d req-431d0c50-cf46-4cab-a474-9526278376b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Received event network-changed-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:49:35 compute-2 nova_compute[226829]: 2026-01-31 07:49:35.587 226833 DEBUG nova.compute.manager [req-b1c3ef87-be34-4c2f-afb5-d033dbebad7d req-431d0c50-cf46-4cab-a474-9526278376b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Refreshing instance network info cache due to event network-changed-b1f53ff3-6ad6-4599-8b83-463239ecd8c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:49:35 compute-2 nova_compute[226829]: 2026-01-31 07:49:35.587 226833 DEBUG oslo_concurrency.lockutils [req-b1c3ef87-be34-4c2f-afb5-d033dbebad7d req-431d0c50-cf46-4cab-a474-9526278376b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-4a932d78-e0fe-4c23-a756-916144472a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:49:36 compute-2 ceph-mon[77282]: pgmap v1567: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 394 KiB/s rd, 2.6 MiB/s wr, 99 op/s
Jan 31 07:49:36 compute-2 nova_compute[226829]: 2026-01-31 07:49:36.943 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:37.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:49:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:37.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:49:37 compute-2 nova_compute[226829]: 2026-01-31 07:49:37.586 226833 DEBUG nova.network.neutron [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Updating instance_info_cache with network_info: [{"id": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "address": "fa:16:3e:e2:12:46", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1f53ff3-6a", "ovs_interfaceid": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:49:37 compute-2 nova_compute[226829]: 2026-01-31 07:49:37.609 226833 DEBUG oslo_concurrency.lockutils [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Releasing lock "refresh_cache-4a932d78-e0fe-4c23-a756-916144472a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:49:37 compute-2 nova_compute[226829]: 2026-01-31 07:49:37.612 226833 DEBUG oslo_concurrency.lockutils [req-b1c3ef87-be34-4c2f-afb5-d033dbebad7d req-431d0c50-cf46-4cab-a474-9526278376b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-4a932d78-e0fe-4c23-a756-916144472a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:49:37 compute-2 nova_compute[226829]: 2026-01-31 07:49:37.613 226833 DEBUG nova.network.neutron [req-b1c3ef87-be34-4c2f-afb5-d033dbebad7d req-431d0c50-cf46-4cab-a474-9526278376b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Refreshing network info cache for port b1f53ff3-6ad6-4599-8b83-463239ecd8c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:49:37 compute-2 ceph-mon[77282]: pgmap v1568: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 92 KiB/s rd, 1.2 MiB/s wr, 38 op/s
Jan 31 07:49:37 compute-2 nova_compute[226829]: 2026-01-31 07:49:37.982 226833 DEBUG nova.virt.libvirt.driver [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Jan 31 07:49:37 compute-2 nova_compute[226829]: 2026-01-31 07:49:37.984 226833 DEBUG nova.virt.libvirt.driver [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 31 07:49:37 compute-2 nova_compute[226829]: 2026-01-31 07:49:37.984 226833 INFO nova.virt.libvirt.driver [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Creating image(s)
Jan 31 07:49:38 compute-2 nova_compute[226829]: 2026-01-31 07:49:38.030 226833 DEBUG nova.storage.rbd_utils [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] creating snapshot(nova-resize) on rbd image(4a932d78-e0fe-4c23-a756-916144472a64_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 07:49:38 compute-2 nova_compute[226829]: 2026-01-31 07:49:38.729 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:38.729 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:49:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:38.731 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:49:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e248 e248: 3 total, 3 up, 3 in
Jan 31 07:49:38 compute-2 nova_compute[226829]: 2026-01-31 07:49:38.847 226833 DEBUG nova.objects.instance [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 4a932d78-e0fe-4c23-a756-916144472a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:49:38 compute-2 nova_compute[226829]: 2026-01-31 07:49:38.969 226833 DEBUG nova.virt.libvirt.driver [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 07:49:38 compute-2 nova_compute[226829]: 2026-01-31 07:49:38.969 226833 DEBUG nova.virt.libvirt.driver [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Ensure instance console log exists: /var/lib/nova/instances/4a932d78-e0fe-4c23-a756-916144472a64/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:49:38 compute-2 nova_compute[226829]: 2026-01-31 07:49:38.970 226833 DEBUG oslo_concurrency.lockutils [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:49:38 compute-2 nova_compute[226829]: 2026-01-31 07:49:38.970 226833 DEBUG oslo_concurrency.lockutils [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:49:38 compute-2 nova_compute[226829]: 2026-01-31 07:49:38.971 226833 DEBUG oslo_concurrency.lockutils [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:49:38 compute-2 nova_compute[226829]: 2026-01-31 07:49:38.974 226833 DEBUG nova.virt.libvirt.driver [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Start _get_guest_xml network_info=[{"id": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "address": "fa:16:3e:e2:12:46", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-1527532153-network", "vif_mac": "fa:16:3e:e2:12:46"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1f53ff3-6a", "ovs_interfaceid": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:49:38 compute-2 nova_compute[226829]: 2026-01-31 07:49:38.979 226833 WARNING nova.virt.libvirt.driver [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:49:38 compute-2 nova_compute[226829]: 2026-01-31 07:49:38.985 226833 DEBUG nova.virt.libvirt.host [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:49:38 compute-2 nova_compute[226829]: 2026-01-31 07:49:38.986 226833 DEBUG nova.virt.libvirt.host [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:49:38 compute-2 nova_compute[226829]: 2026-01-31 07:49:38.989 226833 DEBUG nova.virt.libvirt.host [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:49:38 compute-2 nova_compute[226829]: 2026-01-31 07:49:38.990 226833 DEBUG nova.virt.libvirt.host [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:49:38 compute-2 nova_compute[226829]: 2026-01-31 07:49:38.991 226833 DEBUG nova.virt.libvirt.driver [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:49:38 compute-2 nova_compute[226829]: 2026-01-31 07:49:38.991 226833 DEBUG nova.virt.hardware [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e3bd1dad-95f3-4ed9-94b4-27245cd798b5',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:49:38 compute-2 nova_compute[226829]: 2026-01-31 07:49:38.992 226833 DEBUG nova.virt.hardware [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:49:38 compute-2 nova_compute[226829]: 2026-01-31 07:49:38.992 226833 DEBUG nova.virt.hardware [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:49:38 compute-2 nova_compute[226829]: 2026-01-31 07:49:38.992 226833 DEBUG nova.virt.hardware [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:49:38 compute-2 nova_compute[226829]: 2026-01-31 07:49:38.993 226833 DEBUG nova.virt.hardware [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:49:38 compute-2 nova_compute[226829]: 2026-01-31 07:49:38.993 226833 DEBUG nova.virt.hardware [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:49:38 compute-2 nova_compute[226829]: 2026-01-31 07:49:38.993 226833 DEBUG nova.virt.hardware [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:49:38 compute-2 nova_compute[226829]: 2026-01-31 07:49:38.994 226833 DEBUG nova.virt.hardware [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:49:38 compute-2 nova_compute[226829]: 2026-01-31 07:49:38.994 226833 DEBUG nova.virt.hardware [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:49:38 compute-2 nova_compute[226829]: 2026-01-31 07:49:38.994 226833 DEBUG nova.virt.hardware [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:49:38 compute-2 nova_compute[226829]: 2026-01-31 07:49:38.994 226833 DEBUG nova.virt.hardware [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:49:38 compute-2 nova_compute[226829]: 2026-01-31 07:49:38.995 226833 DEBUG nova.objects.instance [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4a932d78-e0fe-4c23-a756-916144472a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:49:39 compute-2 nova_compute[226829]: 2026-01-31 07:49:39.020 226833 DEBUG oslo_concurrency.processutils [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:49:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:49:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:49:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:39.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:49:39 compute-2 systemd[1]: Stopping User Manager for UID 42436...
Jan 31 07:49:39 compute-2 systemd[254709]: Activating special unit Exit the Session...
Jan 31 07:49:39 compute-2 systemd[254709]: Stopped target Main User Target.
Jan 31 07:49:39 compute-2 systemd[254709]: Stopped target Basic System.
Jan 31 07:49:39 compute-2 systemd[254709]: Stopped target Paths.
Jan 31 07:49:39 compute-2 systemd[254709]: Stopped target Sockets.
Jan 31 07:49:39 compute-2 systemd[254709]: Stopped target Timers.
Jan 31 07:49:39 compute-2 systemd[254709]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 07:49:39 compute-2 systemd[254709]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 07:49:39 compute-2 systemd[254709]: Closed D-Bus User Message Bus Socket.
Jan 31 07:49:39 compute-2 systemd[254709]: Stopped Create User's Volatile Files and Directories.
Jan 31 07:49:39 compute-2 systemd[254709]: Removed slice User Application Slice.
Jan 31 07:49:39 compute-2 systemd[254709]: Reached target Shutdown.
Jan 31 07:49:39 compute-2 systemd[254709]: Finished Exit the Session.
Jan 31 07:49:39 compute-2 systemd[254709]: Reached target Exit the Session.
Jan 31 07:49:39 compute-2 systemd[1]: user@42436.service: Deactivated successfully.
Jan 31 07:49:39 compute-2 systemd[1]: Stopped User Manager for UID 42436.
Jan 31 07:49:39 compute-2 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 31 07:49:39 compute-2 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 31 07:49:39 compute-2 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 31 07:49:39 compute-2 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 31 07:49:39 compute-2 systemd[1]: Removed slice User Slice of UID 42436.
Jan 31 07:49:39 compute-2 nova_compute[226829]: 2026-01-31 07:49:39.389 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:39.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:49:39 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3117650854' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:49:39 compute-2 nova_compute[226829]: 2026-01-31 07:49:39.533 226833 DEBUG oslo_concurrency.processutils [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:49:39 compute-2 nova_compute[226829]: 2026-01-31 07:49:39.567 226833 DEBUG oslo_concurrency.processutils [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:49:39 compute-2 ceph-mon[77282]: osdmap e248: 3 total, 3 up, 3 in
Jan 31 07:49:39 compute-2 ceph-mon[77282]: pgmap v1570: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 37 KiB/s wr, 3 op/s
Jan 31 07:49:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3117650854' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:49:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:49:39 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2044729548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:49:39 compute-2 nova_compute[226829]: 2026-01-31 07:49:39.970 226833 DEBUG oslo_concurrency.processutils [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:49:39 compute-2 nova_compute[226829]: 2026-01-31 07:49:39.972 226833 DEBUG nova.virt.libvirt.vif [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:49:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1393865902',display_name='tempest-DeleteServersTestJSON-server-1393865902',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1393865902',id=67,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:49:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f299640bb1f64e5fa12b23955e5a2127',ramdisk_id='',reservation_id='r-4aq9tghe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-2031597701',owner_user_name='tempest-DeleteServersTestJSON-2031597701-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:49:32Z,user_data=None,user_id='97abab8eb79247cd89fb2ebff295b890',uuid=4a932d78-e0fe-4c23-a756-916144472a64,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "address": "fa:16:3e:e2:12:46", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-1527532153-network", "vif_mac": "fa:16:3e:e2:12:46"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1f53ff3-6a", "ovs_interfaceid": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:49:39 compute-2 nova_compute[226829]: 2026-01-31 07:49:39.973 226833 DEBUG nova.network.os_vif_util [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converting VIF {"id": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "address": "fa:16:3e:e2:12:46", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-1527532153-network", "vif_mac": "fa:16:3e:e2:12:46"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1f53ff3-6a", "ovs_interfaceid": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:49:39 compute-2 nova_compute[226829]: 2026-01-31 07:49:39.974 226833 DEBUG nova.network.os_vif_util [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:12:46,bridge_name='br-int',has_traffic_filtering=True,id=b1f53ff3-6ad6-4599-8b83-463239ecd8c8,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1f53ff3-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:49:39 compute-2 nova_compute[226829]: 2026-01-31 07:49:39.978 226833 DEBUG nova.virt.libvirt.driver [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:49:39 compute-2 nova_compute[226829]:   <uuid>4a932d78-e0fe-4c23-a756-916144472a64</uuid>
Jan 31 07:49:39 compute-2 nova_compute[226829]:   <name>instance-00000043</name>
Jan 31 07:49:39 compute-2 nova_compute[226829]:   <memory>196608</memory>
Jan 31 07:49:39 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:49:39 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <nova:name>tempest-DeleteServersTestJSON-server-1393865902</nova:name>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:49:38</nova:creationTime>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <nova:flavor name="m1.micro">
Jan 31 07:49:39 compute-2 nova_compute[226829]:         <nova:memory>192</nova:memory>
Jan 31 07:49:39 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:49:39 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:49:39 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:49:39 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:49:39 compute-2 nova_compute[226829]:         <nova:user uuid="97abab8eb79247cd89fb2ebff295b890">tempest-DeleteServersTestJSON-2031597701-project-member</nova:user>
Jan 31 07:49:39 compute-2 nova_compute[226829]:         <nova:project uuid="f299640bb1f64e5fa12b23955e5a2127">tempest-DeleteServersTestJSON-2031597701</nova:project>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 07:49:39 compute-2 nova_compute[226829]:         <nova:port uuid="b1f53ff3-6ad6-4599-8b83-463239ecd8c8">
Jan 31 07:49:39 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:49:39 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:49:39 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <system>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <entry name="serial">4a932d78-e0fe-4c23-a756-916144472a64</entry>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <entry name="uuid">4a932d78-e0fe-4c23-a756-916144472a64</entry>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     </system>
Jan 31 07:49:39 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:49:39 compute-2 nova_compute[226829]:   <os>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:   </os>
Jan 31 07:49:39 compute-2 nova_compute[226829]:   <features>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:   </features>
Jan 31 07:49:39 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:49:39 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:49:39 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/4a932d78-e0fe-4c23-a756-916144472a64_disk">
Jan 31 07:49:39 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       </source>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:49:39 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/4a932d78-e0fe-4c23-a756-916144472a64_disk.config">
Jan 31 07:49:39 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       </source>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:49:39 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:e2:12:46"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <target dev="tapb1f53ff3-6a"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/4a932d78-e0fe-4c23-a756-916144472a64/console.log" append="off"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <video>
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     </video>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:49:39 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:49:39 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:49:39 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:49:39 compute-2 nova_compute[226829]: </domain>
Jan 31 07:49:39 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:49:39 compute-2 nova_compute[226829]: 2026-01-31 07:49:39.980 226833 DEBUG nova.virt.libvirt.vif [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:49:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1393865902',display_name='tempest-DeleteServersTestJSON-server-1393865902',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1393865902',id=67,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:49:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f299640bb1f64e5fa12b23955e5a2127',ramdisk_id='',reservation_id='r-4aq9tghe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_v
ideo_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-2031597701',owner_user_name='tempest-DeleteServersTestJSON-2031597701-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:49:32Z,user_data=None,user_id='97abab8eb79247cd89fb2ebff295b890',uuid=4a932d78-e0fe-4c23-a756-916144472a64,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "address": "fa:16:3e:e2:12:46", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-1527532153-network", "vif_mac": "fa:16:3e:e2:12:46"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1f53ff3-6a", "ovs_interfaceid": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:49:39 compute-2 nova_compute[226829]: 2026-01-31 07:49:39.980 226833 DEBUG nova.network.os_vif_util [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converting VIF {"id": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "address": "fa:16:3e:e2:12:46", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-1527532153-network", "vif_mac": "fa:16:3e:e2:12:46"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1f53ff3-6a", "ovs_interfaceid": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:49:39 compute-2 nova_compute[226829]: 2026-01-31 07:49:39.981 226833 DEBUG nova.network.os_vif_util [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:12:46,bridge_name='br-int',has_traffic_filtering=True,id=b1f53ff3-6ad6-4599-8b83-463239ecd8c8,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1f53ff3-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:49:39 compute-2 nova_compute[226829]: 2026-01-31 07:49:39.981 226833 DEBUG os_vif [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:12:46,bridge_name='br-int',has_traffic_filtering=True,id=b1f53ff3-6ad6-4599-8b83-463239ecd8c8,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1f53ff3-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:49:39 compute-2 nova_compute[226829]: 2026-01-31 07:49:39.982 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:39 compute-2 nova_compute[226829]: 2026-01-31 07:49:39.982 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:49:39 compute-2 nova_compute[226829]: 2026-01-31 07:49:39.983 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:49:39 compute-2 nova_compute[226829]: 2026-01-31 07:49:39.988 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:39 compute-2 nova_compute[226829]: 2026-01-31 07:49:39.989 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1f53ff3-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:49:39 compute-2 nova_compute[226829]: 2026-01-31 07:49:39.989 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb1f53ff3-6a, col_values=(('external_ids', {'iface-id': 'b1f53ff3-6ad6-4599-8b83-463239ecd8c8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:12:46', 'vm-uuid': '4a932d78-e0fe-4c23-a756-916144472a64'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:49:39 compute-2 nova_compute[226829]: 2026-01-31 07:49:39.991 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:39 compute-2 nova_compute[226829]: 2026-01-31 07:49:39.993 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:49:39 compute-2 NetworkManager[48999]: <info>  [1769845779.9938] manager: (tapb1f53ff3-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Jan 31 07:49:39 compute-2 nova_compute[226829]: 2026-01-31 07:49:39.997 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:39 compute-2 nova_compute[226829]: 2026-01-31 07:49:39.998 226833 INFO os_vif [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:12:46,bridge_name='br-int',has_traffic_filtering=True,id=b1f53ff3-6ad6-4599-8b83-463239ecd8c8,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1f53ff3-6a')
Jan 31 07:49:40 compute-2 nova_compute[226829]: 2026-01-31 07:49:40.043 226833 DEBUG nova.virt.libvirt.driver [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:49:40 compute-2 nova_compute[226829]: 2026-01-31 07:49:40.044 226833 DEBUG nova.virt.libvirt.driver [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:49:40 compute-2 nova_compute[226829]: 2026-01-31 07:49:40.045 226833 DEBUG nova.virt.libvirt.driver [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No VIF found with MAC fa:16:3e:e2:12:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:49:40 compute-2 nova_compute[226829]: 2026-01-31 07:49:40.045 226833 INFO nova.virt.libvirt.driver [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Using config drive
Jan 31 07:49:40 compute-2 kernel: tapb1f53ff3-6a: entered promiscuous mode
Jan 31 07:49:40 compute-2 NetworkManager[48999]: <info>  [1769845780.1203] manager: (tapb1f53ff3-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/94)
Jan 31 07:49:40 compute-2 ovn_controller[133834]: 2026-01-31T07:49:40Z|00165|binding|INFO|Claiming lport b1f53ff3-6ad6-4599-8b83-463239ecd8c8 for this chassis.
Jan 31 07:49:40 compute-2 nova_compute[226829]: 2026-01-31 07:49:40.123 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:40 compute-2 ovn_controller[133834]: 2026-01-31T07:49:40Z|00166|binding|INFO|b1f53ff3-6ad6-4599-8b83-463239ecd8c8: Claiming fa:16:3e:e2:12:46 10.100.0.12
Jan 31 07:49:40 compute-2 nova_compute[226829]: 2026-01-31 07:49:40.125 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:40 compute-2 ovn_controller[133834]: 2026-01-31T07:49:40Z|00167|binding|INFO|Setting lport b1f53ff3-6ad6-4599-8b83-463239ecd8c8 ovn-installed in OVS
Jan 31 07:49:40 compute-2 nova_compute[226829]: 2026-01-31 07:49:40.129 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:40 compute-2 ovn_controller[133834]: 2026-01-31T07:49:40Z|00168|binding|INFO|Setting lport b1f53ff3-6ad6-4599-8b83-463239ecd8c8 up in Southbound
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:40.130 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:12:46 10.100.0.12'], port_security=['fa:16:3e:e2:12:46 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4a932d78-e0fe-4c23-a756-916144472a64', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60244e92-1524-47f0-8207-05d0104afa47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f299640bb1f64e5fa12b23955e5a2127', 'neutron:revision_number': '6', 'neutron:security_group_ids': '0f055dcb-9af1-4cf7-aa74-c819da93756e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70b36ac0-62cb-4e29-924b-93c5ad906bc9, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=b1f53ff3-6ad6-4599-8b83-463239ecd8c8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:40.132 143841 INFO neutron.agent.ovn.metadata.agent [-] Port b1f53ff3-6ad6-4599-8b83-463239ecd8c8 in datapath 60244e92-1524-47f0-8207-05d0104afa47 bound to our chassis
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:40.133 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60244e92-1524-47f0-8207-05d0104afa47
Jan 31 07:49:40 compute-2 nova_compute[226829]: 2026-01-31 07:49:40.134 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:40 compute-2 nova_compute[226829]: 2026-01-31 07:49:40.139 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:40.149 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3202e254-ac46-4ca0-916f-d5938bca76d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:40.151 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap60244e92-11 in ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:49:40 compute-2 systemd-machined[195142]: New machine qemu-27-instance-00000043.
Jan 31 07:49:40 compute-2 systemd-udevd[254956]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:40.155 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap60244e92-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:40.155 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5176610f-50f4-4fa5-8610-97417d80cefc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:40.156 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4a2e2fd2-f8a7-4077-8105-e90ce4a569bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:40 compute-2 NetworkManager[48999]: <info>  [1769845780.1642] device (tapb1f53ff3-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:49:40 compute-2 systemd[1]: Started Virtual Machine qemu-27-instance-00000043.
Jan 31 07:49:40 compute-2 NetworkManager[48999]: <info>  [1769845780.1646] device (tapb1f53ff3-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:40.169 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[ae17b4e6-12b1-4a9d-bfef-62d5369ffa4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:40.182 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4892cab0-d9a3-46d3-8d20-788695938fd1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:40.205 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[6089e81a-d8e8-41a9-9217-9e6bd73a4af9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:40 compute-2 NetworkManager[48999]: <info>  [1769845780.2117] manager: (tap60244e92-10): new Veth device (/org/freedesktop/NetworkManager/Devices/95)
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:40.211 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed42646-ab93-4143-98ed-1bf03e057cb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:40.250 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[0530f5fb-56a8-4602-93a4-875adb111927]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:40.253 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[00b13ca4-1229-464d-b33f-4bcfac8c867b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:40 compute-2 NetworkManager[48999]: <info>  [1769845780.2724] device (tap60244e92-10): carrier: link connected
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:40.274 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[38cb6edc-f3b1-4f5a-81ec-e1afd4c0b831]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:40.286 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[16c12b38-a9b2-4fc9-b476-32ddf726cbaf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60244e92-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:59:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603274, 'reachable_time': 22262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254988, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:40.297 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[48b5a28d-d298-4bd9-9826-b78132cab694]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4b:59f9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603274, 'tstamp': 603274}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254989, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:40.310 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[90e2abfd-f291-48fc-bb3b-46c9aa13b4fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60244e92-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:59:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603274, 'reachable_time': 22262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254990, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:40.336 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[98bffa27-60ed-42b2-9d68-9ce481c69409]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:40.376 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[dc365249-a1e6-44d4-a197-1de2e4e34321]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:40.378 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60244e92-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:40.378 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:40.379 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60244e92-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:49:40 compute-2 kernel: tap60244e92-10: entered promiscuous mode
Jan 31 07:49:40 compute-2 NetworkManager[48999]: <info>  [1769845780.3827] manager: (tap60244e92-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Jan 31 07:49:40 compute-2 nova_compute[226829]: 2026-01-31 07:49:40.381 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:40 compute-2 nova_compute[226829]: 2026-01-31 07:49:40.384 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:40.387 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60244e92-10, col_values=(('external_ids', {'iface-id': '4e20d708-9f46-41f3-86c0-9ab3849f8392'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:49:40 compute-2 nova_compute[226829]: 2026-01-31 07:49:40.388 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:40 compute-2 ovn_controller[133834]: 2026-01-31T07:49:40Z|00169|binding|INFO|Releasing lport 4e20d708-9f46-41f3-86c0-9ab3849f8392 from this chassis (sb_readonly=0)
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:40.390 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/60244e92-1524-47f0-8207-05d0104afa47.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/60244e92-1524-47f0-8207-05d0104afa47.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:40.391 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7078328c-faae-4b02-81f9-52c60526bc83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:40.392 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: global
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-60244e92-1524-47f0-8207-05d0104afa47
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/60244e92-1524-47f0-8207-05d0104afa47.pid.haproxy
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 60244e92-1524-47f0-8207-05d0104afa47
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:49:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:40.393 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'env', 'PROCESS_TAG=haproxy-60244e92-1524-47f0-8207-05d0104afa47', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/60244e92-1524-47f0-8207-05d0104afa47.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:49:40 compute-2 nova_compute[226829]: 2026-01-31 07:49:40.396 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:40 compute-2 nova_compute[226829]: 2026-01-31 07:49:40.559 226833 DEBUG nova.compute.manager [req-0a492273-9d9d-462e-926d-aedf6b39756e req-2d5db2d1-a7d0-4819-b47f-7053a3dfb86b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Received event network-vif-plugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:49:40 compute-2 nova_compute[226829]: 2026-01-31 07:49:40.567 226833 DEBUG oslo_concurrency.lockutils [req-0a492273-9d9d-462e-926d-aedf6b39756e req-2d5db2d1-a7d0-4819-b47f-7053a3dfb86b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4a932d78-e0fe-4c23-a756-916144472a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:49:40 compute-2 nova_compute[226829]: 2026-01-31 07:49:40.568 226833 DEBUG oslo_concurrency.lockutils [req-0a492273-9d9d-462e-926d-aedf6b39756e req-2d5db2d1-a7d0-4819-b47f-7053a3dfb86b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:49:40 compute-2 nova_compute[226829]: 2026-01-31 07:49:40.569 226833 DEBUG oslo_concurrency.lockutils [req-0a492273-9d9d-462e-926d-aedf6b39756e req-2d5db2d1-a7d0-4819-b47f-7053a3dfb86b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:49:40 compute-2 nova_compute[226829]: 2026-01-31 07:49:40.569 226833 DEBUG nova.compute.manager [req-0a492273-9d9d-462e-926d-aedf6b39756e req-2d5db2d1-a7d0-4819-b47f-7053a3dfb86b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] No waiting events found dispatching network-vif-plugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:49:40 compute-2 nova_compute[226829]: 2026-01-31 07:49:40.570 226833 WARNING nova.compute.manager [req-0a492273-9d9d-462e-926d-aedf6b39756e req-2d5db2d1-a7d0-4819-b47f-7053a3dfb86b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Received unexpected event network-vif-plugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 for instance with vm_state active and task_state resize_finish.
Jan 31 07:49:40 compute-2 podman[255023]: 2026-01-31 07:49:40.765578889 +0000 UTC m=+0.054698386 container create b1d8d1be2924ee3d6a854c9e387a660d0727febd4cb0eaed60faedc7fdf8be91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127)
Jan 31 07:49:40 compute-2 systemd[1]: Started libpod-conmon-b1d8d1be2924ee3d6a854c9e387a660d0727febd4cb0eaed60faedc7fdf8be91.scope.
Jan 31 07:49:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2044729548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:49:40 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:49:40 compute-2 podman[255023]: 2026-01-31 07:49:40.739459816 +0000 UTC m=+0.028579333 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:49:40 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0380bef3b4a4bb8299879715250ca33c061280dd8e9cc9945a5425859d2c0f3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:49:40 compute-2 podman[255023]: 2026-01-31 07:49:40.851226288 +0000 UTC m=+0.140345825 container init b1d8d1be2924ee3d6a854c9e387a660d0727febd4cb0eaed60faedc7fdf8be91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Jan 31 07:49:40 compute-2 podman[255023]: 2026-01-31 07:49:40.856168122 +0000 UTC m=+0.145287629 container start b1d8d1be2924ee3d6a854c9e387a660d0727febd4cb0eaed60faedc7fdf8be91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 07:49:40 compute-2 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[255038]: [NOTICE]   (255049) : New worker (255060) forked
Jan 31 07:49:40 compute-2 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[255038]: [NOTICE]   (255049) : Loading success.
Jan 31 07:49:41 compute-2 nova_compute[226829]: 2026-01-31 07:49:41.022 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845781.022167, 4a932d78-e0fe-4c23-a756-916144472a64 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:49:41 compute-2 nova_compute[226829]: 2026-01-31 07:49:41.022 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] VM Resumed (Lifecycle Event)
Jan 31 07:49:41 compute-2 nova_compute[226829]: 2026-01-31 07:49:41.025 226833 DEBUG nova.compute.manager [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:49:41 compute-2 nova_compute[226829]: 2026-01-31 07:49:41.029 226833 INFO nova.virt.libvirt.driver [-] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Instance running successfully.
Jan 31 07:49:41 compute-2 virtqemud[226546]: argument unsupported: QEMU guest agent is not configured
Jan 31 07:49:41 compute-2 nova_compute[226829]: 2026-01-31 07:49:41.031 226833 DEBUG nova.virt.libvirt.guest [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 31 07:49:41 compute-2 nova_compute[226829]: 2026-01-31 07:49:41.032 226833 DEBUG nova.virt.libvirt.driver [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Jan 31 07:49:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:41.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:41 compute-2 nova_compute[226829]: 2026-01-31 07:49:41.064 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:49:41 compute-2 nova_compute[226829]: 2026-01-31 07:49:41.067 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:49:41 compute-2 nova_compute[226829]: 2026-01-31 07:49:41.076 226833 DEBUG nova.network.neutron [req-b1c3ef87-be34-4c2f-afb5-d033dbebad7d req-431d0c50-cf46-4cab-a474-9526278376b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Updated VIF entry in instance network info cache for port b1f53ff3-6ad6-4599-8b83-463239ecd8c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:49:41 compute-2 nova_compute[226829]: 2026-01-31 07:49:41.076 226833 DEBUG nova.network.neutron [req-b1c3ef87-be34-4c2f-afb5-d033dbebad7d req-431d0c50-cf46-4cab-a474-9526278376b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Updating instance_info_cache with network_info: [{"id": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "address": "fa:16:3e:e2:12:46", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1f53ff3-6a", "ovs_interfaceid": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:49:41 compute-2 nova_compute[226829]: 2026-01-31 07:49:41.108 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 31 07:49:41 compute-2 nova_compute[226829]: 2026-01-31 07:49:41.108 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845781.0246854, 4a932d78-e0fe-4c23-a756-916144472a64 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:49:41 compute-2 nova_compute[226829]: 2026-01-31 07:49:41.109 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] VM Started (Lifecycle Event)
Jan 31 07:49:41 compute-2 nova_compute[226829]: 2026-01-31 07:49:41.115 226833 DEBUG oslo_concurrency.lockutils [req-b1c3ef87-be34-4c2f-afb5-d033dbebad7d req-431d0c50-cf46-4cab-a474-9526278376b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-4a932d78-e0fe-4c23-a756-916144472a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:49:41 compute-2 nova_compute[226829]: 2026-01-31 07:49:41.141 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:49:41 compute-2 nova_compute[226829]: 2026-01-31 07:49:41.145 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:49:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:41.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:41 compute-2 ceph-mon[77282]: pgmap v1571: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 32 KiB/s wr, 25 op/s
Jan 31 07:49:42 compute-2 nova_compute[226829]: 2026-01-31 07:49:42.750 226833 DEBUG nova.compute.manager [req-aba1c92c-1ba1-4e6b-a11e-84b5766c40ba req-7530c002-b45e-4c62-8808-e8d412d4cbe8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Received event network-vif-plugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:49:42 compute-2 nova_compute[226829]: 2026-01-31 07:49:42.752 226833 DEBUG oslo_concurrency.lockutils [req-aba1c92c-1ba1-4e6b-a11e-84b5766c40ba req-7530c002-b45e-4c62-8808-e8d412d4cbe8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4a932d78-e0fe-4c23-a756-916144472a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:49:42 compute-2 nova_compute[226829]: 2026-01-31 07:49:42.753 226833 DEBUG oslo_concurrency.lockutils [req-aba1c92c-1ba1-4e6b-a11e-84b5766c40ba req-7530c002-b45e-4c62-8808-e8d412d4cbe8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:49:42 compute-2 nova_compute[226829]: 2026-01-31 07:49:42.754 226833 DEBUG oslo_concurrency.lockutils [req-aba1c92c-1ba1-4e6b-a11e-84b5766c40ba req-7530c002-b45e-4c62-8808-e8d412d4cbe8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:49:42 compute-2 nova_compute[226829]: 2026-01-31 07:49:42.754 226833 DEBUG nova.compute.manager [req-aba1c92c-1ba1-4e6b-a11e-84b5766c40ba req-7530c002-b45e-4c62-8808-e8d412d4cbe8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] No waiting events found dispatching network-vif-plugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:49:42 compute-2 nova_compute[226829]: 2026-01-31 07:49:42.755 226833 WARNING nova.compute.manager [req-aba1c92c-1ba1-4e6b-a11e-84b5766c40ba req-7530c002-b45e-4c62-8808-e8d412d4cbe8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Received unexpected event network-vif-plugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 for instance with vm_state resized and task_state deleting.
Jan 31 07:49:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:49:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:43.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:49:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:43.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:43.735 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:49:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:49:44 compute-2 ceph-mon[77282]: pgmap v1572: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 270 KiB/s rd, 409 B/s wr, 31 op/s
Jan 31 07:49:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2465379160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:44 compute-2 nova_compute[226829]: 2026-01-31 07:49:44.390 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:49:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2397668109' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:49:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:49:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2397668109' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:49:44 compute-2 nova_compute[226829]: 2026-01-31 07:49:44.991 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:49:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:45.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:49:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2397668109' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:49:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2397668109' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:49:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:49:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:45.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:49:46 compute-2 ceph-mon[77282]: pgmap v1573: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 270 KiB/s rd, 409 B/s wr, 31 op/s
Jan 31 07:49:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e249 e249: 3 total, 3 up, 3 in
Jan 31 07:49:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:49:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:47.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:49:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:49:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:47.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:49:47 compute-2 ceph-mon[77282]: osdmap e249: 3 total, 3 up, 3 in
Jan 31 07:49:47 compute-2 ceph-mon[77282]: pgmap v1575: 305 pgs: 305 active+clean; 158 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.3 MiB/s wr, 158 op/s
Jan 31 07:49:48 compute-2 nova_compute[226829]: 2026-01-31 07:49:48.658 226833 DEBUG oslo_concurrency.lockutils [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "4a932d78-e0fe-4c23-a756-916144472a64" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:49:48 compute-2 nova_compute[226829]: 2026-01-31 07:49:48.659 226833 DEBUG oslo_concurrency.lockutils [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:49:48 compute-2 nova_compute[226829]: 2026-01-31 07:49:48.659 226833 DEBUG oslo_concurrency.lockutils [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "4a932d78-e0fe-4c23-a756-916144472a64-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:49:48 compute-2 nova_compute[226829]: 2026-01-31 07:49:48.660 226833 DEBUG oslo_concurrency.lockutils [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:49:48 compute-2 nova_compute[226829]: 2026-01-31 07:49:48.660 226833 DEBUG oslo_concurrency.lockutils [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:49:48 compute-2 nova_compute[226829]: 2026-01-31 07:49:48.662 226833 INFO nova.compute.manager [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Terminating instance
Jan 31 07:49:48 compute-2 nova_compute[226829]: 2026-01-31 07:49:48.662 226833 DEBUG nova.compute.manager [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:49:48 compute-2 kernel: tapb1f53ff3-6a (unregistering): left promiscuous mode
Jan 31 07:49:48 compute-2 NetworkManager[48999]: <info>  [1769845788.7418] device (tapb1f53ff3-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:49:48 compute-2 nova_compute[226829]: 2026-01-31 07:49:48.749 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:48 compute-2 ovn_controller[133834]: 2026-01-31T07:49:48Z|00170|binding|INFO|Releasing lport b1f53ff3-6ad6-4599-8b83-463239ecd8c8 from this chassis (sb_readonly=0)
Jan 31 07:49:48 compute-2 ovn_controller[133834]: 2026-01-31T07:49:48Z|00171|binding|INFO|Setting lport b1f53ff3-6ad6-4599-8b83-463239ecd8c8 down in Southbound
Jan 31 07:49:48 compute-2 ovn_controller[133834]: 2026-01-31T07:49:48Z|00172|binding|INFO|Removing iface tapb1f53ff3-6a ovn-installed in OVS
Jan 31 07:49:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:48.757 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:12:46 10.100.0.12'], port_security=['fa:16:3e:e2:12:46 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4a932d78-e0fe-4c23-a756-916144472a64', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60244e92-1524-47f0-8207-05d0104afa47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f299640bb1f64e5fa12b23955e5a2127', 'neutron:revision_number': '8', 'neutron:security_group_ids': '0f055dcb-9af1-4cf7-aa74-c819da93756e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70b36ac0-62cb-4e29-924b-93c5ad906bc9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=b1f53ff3-6ad6-4599-8b83-463239ecd8c8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:49:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:48.759 143841 INFO neutron.agent.ovn.metadata.agent [-] Port b1f53ff3-6ad6-4599-8b83-463239ecd8c8 in datapath 60244e92-1524-47f0-8207-05d0104afa47 unbound from our chassis
Jan 31 07:49:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:48.761 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 60244e92-1524-47f0-8207-05d0104afa47, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:49:48 compute-2 nova_compute[226829]: 2026-01-31 07:49:48.761 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:48.762 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[552c6263-1a36-47ea-8408-173ffd46e927]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:48.763 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 namespace which is not needed anymore
Jan 31 07:49:48 compute-2 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000043.scope: Deactivated successfully.
Jan 31 07:49:48 compute-2 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000043.scope: Consumed 8.761s CPU time.
Jan 31 07:49:48 compute-2 systemd-machined[195142]: Machine qemu-27-instance-00000043 terminated.
Jan 31 07:49:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4009705974' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:49:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3597531801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2233301675' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:49:48 compute-2 nova_compute[226829]: 2026-01-31 07:49:48.893 226833 INFO nova.virt.libvirt.driver [-] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Instance destroyed successfully.
Jan 31 07:49:48 compute-2 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[255038]: [NOTICE]   (255049) : haproxy version is 2.8.14-c23fe91
Jan 31 07:49:48 compute-2 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[255038]: [NOTICE]   (255049) : path to executable is /usr/sbin/haproxy
Jan 31 07:49:48 compute-2 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[255038]: [WARNING]  (255049) : Exiting Master process...
Jan 31 07:49:48 compute-2 nova_compute[226829]: 2026-01-31 07:49:48.894 226833 DEBUG nova.objects.instance [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'resources' on Instance uuid 4a932d78-e0fe-4c23-a756-916144472a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:49:48 compute-2 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[255038]: [ALERT]    (255049) : Current worker (255060) exited with code 143 (Terminated)
Jan 31 07:49:48 compute-2 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[255038]: [WARNING]  (255049) : All workers exited. Exiting... (0)
Jan 31 07:49:48 compute-2 systemd[1]: libpod-b1d8d1be2924ee3d6a854c9e387a660d0727febd4cb0eaed60faedc7fdf8be91.scope: Deactivated successfully.
Jan 31 07:49:48 compute-2 podman[255122]: 2026-01-31 07:49:48.90630347 +0000 UTC m=+0.064830529 container died b1d8d1be2924ee3d6a854c9e387a660d0727febd4cb0eaed60faedc7fdf8be91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 07:49:48 compute-2 nova_compute[226829]: 2026-01-31 07:49:48.915 226833 DEBUG nova.virt.libvirt.vif [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:49:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1393865902',display_name='tempest-DeleteServersTestJSON-server-1393865902',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1393865902',id=67,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:49:41Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f299640bb1f64e5fa12b23955e5a2127',ramdisk_id='',reservation_id='r-4aq9tghe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-2031597701',owner_user_name='tempest-DeleteServersTestJSON-2031597701-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:49:41Z,user_data=None,user_id='97abab8eb79247cd89fb2ebff295b890',uuid=4a932d78-e0fe-4c23-a756-916144472a64,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "address": "fa:16:3e:e2:12:46", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1f53ff3-6a", "ovs_interfaceid": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:49:48 compute-2 nova_compute[226829]: 2026-01-31 07:49:48.916 226833 DEBUG nova.network.os_vif_util [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converting VIF {"id": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "address": "fa:16:3e:e2:12:46", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1f53ff3-6a", "ovs_interfaceid": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:49:48 compute-2 nova_compute[226829]: 2026-01-31 07:49:48.917 226833 DEBUG nova.network.os_vif_util [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:12:46,bridge_name='br-int',has_traffic_filtering=True,id=b1f53ff3-6ad6-4599-8b83-463239ecd8c8,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1f53ff3-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:49:48 compute-2 nova_compute[226829]: 2026-01-31 07:49:48.917 226833 DEBUG os_vif [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:12:46,bridge_name='br-int',has_traffic_filtering=True,id=b1f53ff3-6ad6-4599-8b83-463239ecd8c8,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1f53ff3-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:49:48 compute-2 nova_compute[226829]: 2026-01-31 07:49:48.919 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:48 compute-2 nova_compute[226829]: 2026-01-31 07:49:48.919 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1f53ff3-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:49:48 compute-2 nova_compute[226829]: 2026-01-31 07:49:48.920 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:48 compute-2 nova_compute[226829]: 2026-01-31 07:49:48.923 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:49:48 compute-2 nova_compute[226829]: 2026-01-31 07:49:48.926 226833 INFO os_vif [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:12:46,bridge_name='br-int',has_traffic_filtering=True,id=b1f53ff3-6ad6-4599-8b83-463239ecd8c8,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1f53ff3-6a')
Jan 31 07:49:48 compute-2 nova_compute[226829]: 2026-01-31 07:49:48.992 226833 DEBUG nova.compute.manager [req-7bb418ac-9efa-4171-8763-ee7398352e32 req-4c7a2f2a-1ae8-4a52-8669-96b95ca15378 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Received event network-vif-unplugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:49:48 compute-2 nova_compute[226829]: 2026-01-31 07:49:48.994 226833 DEBUG oslo_concurrency.lockutils [req-7bb418ac-9efa-4171-8763-ee7398352e32 req-4c7a2f2a-1ae8-4a52-8669-96b95ca15378 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4a932d78-e0fe-4c23-a756-916144472a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:49:48 compute-2 nova_compute[226829]: 2026-01-31 07:49:48.994 226833 DEBUG oslo_concurrency.lockutils [req-7bb418ac-9efa-4171-8763-ee7398352e32 req-4c7a2f2a-1ae8-4a52-8669-96b95ca15378 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:49:48 compute-2 nova_compute[226829]: 2026-01-31 07:49:48.995 226833 DEBUG oslo_concurrency.lockutils [req-7bb418ac-9efa-4171-8763-ee7398352e32 req-4c7a2f2a-1ae8-4a52-8669-96b95ca15378 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:49:48 compute-2 nova_compute[226829]: 2026-01-31 07:49:48.995 226833 DEBUG nova.compute.manager [req-7bb418ac-9efa-4171-8763-ee7398352e32 req-4c7a2f2a-1ae8-4a52-8669-96b95ca15378 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] No waiting events found dispatching network-vif-unplugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:49:48 compute-2 nova_compute[226829]: 2026-01-31 07:49:48.995 226833 WARNING nova.compute.manager [req-7bb418ac-9efa-4171-8763-ee7398352e32 req-4c7a2f2a-1ae8-4a52-8669-96b95ca15378 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Received unexpected event network-vif-unplugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 for instance with vm_state active and task_state None.
Jan 31 07:49:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:49:49 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b1d8d1be2924ee3d6a854c9e387a660d0727febd4cb0eaed60faedc7fdf8be91-userdata-shm.mount: Deactivated successfully.
Jan 31 07:49:49 compute-2 systemd[1]: var-lib-containers-storage-overlay-c0380bef3b4a4bb8299879715250ca33c061280dd8e9cc9945a5425859d2c0f3-merged.mount: Deactivated successfully.
Jan 31 07:49:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:49:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:49.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:49:49 compute-2 podman[255122]: 2026-01-31 07:49:49.304158976 +0000 UTC m=+0.462686005 container cleanup b1d8d1be2924ee3d6a854c9e387a660d0727febd4cb0eaed60faedc7fdf8be91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:49:49 compute-2 systemd[1]: libpod-conmon-b1d8d1be2924ee3d6a854c9e387a660d0727febd4cb0eaed60faedc7fdf8be91.scope: Deactivated successfully.
Jan 31 07:49:49 compute-2 nova_compute[226829]: 2026-01-31 07:49:49.446 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:49.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:49 compute-2 podman[255184]: 2026-01-31 07:49:49.825956595 +0000 UTC m=+0.501761340 container remove b1d8d1be2924ee3d6a854c9e387a660d0727febd4cb0eaed60faedc7fdf8be91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 07:49:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:49.830 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cc87ccc4-01b1-469c-b1b0-d0cfe2613352]: (4, ('Sat Jan 31 07:49:48 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 (b1d8d1be2924ee3d6a854c9e387a660d0727febd4cb0eaed60faedc7fdf8be91)\nb1d8d1be2924ee3d6a854c9e387a660d0727febd4cb0eaed60faedc7fdf8be91\nSat Jan 31 07:49:49 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 (b1d8d1be2924ee3d6a854c9e387a660d0727febd4cb0eaed60faedc7fdf8be91)\nb1d8d1be2924ee3d6a854c9e387a660d0727febd4cb0eaed60faedc7fdf8be91\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:49.832 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3563e937-211e-4bfe-a60d-23c578ed3c67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:49.832 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60244e92-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:49:49 compute-2 nova_compute[226829]: 2026-01-31 07:49:49.834 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:49 compute-2 kernel: tap60244e92-10: left promiscuous mode
Jan 31 07:49:49 compute-2 nova_compute[226829]: 2026-01-31 07:49:49.836 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:49.838 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1b012ffc-0ea9-4eff-9555-e05fb2ae9d62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:49 compute-2 nova_compute[226829]: 2026-01-31 07:49:49.840 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:49.851 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[04474159-d5cc-4664-948d-e9f8cce548e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:49.852 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6eda9b1c-9058-4207-a5df-2abac1e37ceb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:49.862 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[600377f9-9eb0-489c-b32b-6bdb91e7cf47]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603267, 'reachable_time': 24839, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255211, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:49 compute-2 systemd[1]: run-netns-ovnmeta\x2d60244e92\x2d1524\x2d47f0\x2d8207\x2d05d0104afa47.mount: Deactivated successfully.
Jan 31 07:49:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:49.865 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:49:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:49:49.865 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[4169fa0f-f7f9-4257-b73d-e6fcaa947c55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:49 compute-2 podman[255185]: 2026-01-31 07:49:49.919734634 +0000 UTC m=+0.582559328 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible)
Jan 31 07:49:50 compute-2 ceph-mon[77282]: pgmap v1576: 305 pgs: 305 active+clean; 161 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 131 op/s
Jan 31 07:49:51 compute-2 nova_compute[226829]: 2026-01-31 07:49:51.019 226833 INFO nova.virt.libvirt.driver [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Deleting instance files /var/lib/nova/instances/4a932d78-e0fe-4c23-a756-916144472a64_del
Jan 31 07:49:51 compute-2 nova_compute[226829]: 2026-01-31 07:49:51.020 226833 INFO nova.virt.libvirt.driver [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Deletion of /var/lib/nova/instances/4a932d78-e0fe-4c23-a756-916144472a64_del complete
Jan 31 07:49:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:51.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:51 compute-2 nova_compute[226829]: 2026-01-31 07:49:51.109 226833 INFO nova.compute.manager [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Took 2.45 seconds to destroy the instance on the hypervisor.
Jan 31 07:49:51 compute-2 nova_compute[226829]: 2026-01-31 07:49:51.110 226833 DEBUG oslo.service.loopingcall [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:49:51 compute-2 nova_compute[226829]: 2026-01-31 07:49:51.110 226833 DEBUG nova.compute.manager [-] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:49:51 compute-2 nova_compute[226829]: 2026-01-31 07:49:51.111 226833 DEBUG nova.network.neutron [-] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:49:51 compute-2 nova_compute[226829]: 2026-01-31 07:49:51.244 226833 DEBUG nova.compute.manager [req-64126417-115c-446f-8cf2-e4686d5a70c5 req-e99122c8-8dde-44dd-9d75-d35f0e7cca8c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Received event network-vif-plugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:49:51 compute-2 nova_compute[226829]: 2026-01-31 07:49:51.245 226833 DEBUG oslo_concurrency.lockutils [req-64126417-115c-446f-8cf2-e4686d5a70c5 req-e99122c8-8dde-44dd-9d75-d35f0e7cca8c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4a932d78-e0fe-4c23-a756-916144472a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:49:51 compute-2 nova_compute[226829]: 2026-01-31 07:49:51.246 226833 DEBUG oslo_concurrency.lockutils [req-64126417-115c-446f-8cf2-e4686d5a70c5 req-e99122c8-8dde-44dd-9d75-d35f0e7cca8c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:49:51 compute-2 nova_compute[226829]: 2026-01-31 07:49:51.246 226833 DEBUG oslo_concurrency.lockutils [req-64126417-115c-446f-8cf2-e4686d5a70c5 req-e99122c8-8dde-44dd-9d75-d35f0e7cca8c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:49:51 compute-2 nova_compute[226829]: 2026-01-31 07:49:51.247 226833 DEBUG nova.compute.manager [req-64126417-115c-446f-8cf2-e4686d5a70c5 req-e99122c8-8dde-44dd-9d75-d35f0e7cca8c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] No waiting events found dispatching network-vif-plugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:49:51 compute-2 nova_compute[226829]: 2026-01-31 07:49:51.247 226833 WARNING nova.compute.manager [req-64126417-115c-446f-8cf2-e4686d5a70c5 req-e99122c8-8dde-44dd-9d75-d35f0e7cca8c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Received unexpected event network-vif-plugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 for instance with vm_state active and task_state None.
Jan 31 07:49:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:49:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:51.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:49:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 e250: 3 total, 3 up, 3 in
Jan 31 07:49:51 compute-2 nova_compute[226829]: 2026-01-31 07:49:51.912 226833 DEBUG nova.network.neutron [-] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:49:51 compute-2 nova_compute[226829]: 2026-01-31 07:49:51.934 226833 INFO nova.compute.manager [-] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Took 0.82 seconds to deallocate network for instance.
Jan 31 07:49:51 compute-2 nova_compute[226829]: 2026-01-31 07:49:51.993 226833 DEBUG nova.compute.manager [req-3eb94646-04d4-4fee-80ed-e859809a168e req-84186210-51c8-456e-9470-0c1a447ccae7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Received event network-vif-deleted-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:49:51 compute-2 nova_compute[226829]: 2026-01-31 07:49:51.996 226833 DEBUG oslo_concurrency.lockutils [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:49:51 compute-2 nova_compute[226829]: 2026-01-31 07:49:51.997 226833 DEBUG oslo_concurrency.lockutils [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:49:52 compute-2 nova_compute[226829]: 2026-01-31 07:49:52.003 226833 DEBUG oslo_concurrency.lockutils [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:49:52 compute-2 nova_compute[226829]: 2026-01-31 07:49:52.045 226833 INFO nova.scheduler.client.report [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Deleted allocations for instance 4a932d78-e0fe-4c23-a756-916144472a64
Jan 31 07:49:52 compute-2 ceph-mon[77282]: pgmap v1577: 305 pgs: 305 active+clean; 137 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 142 op/s
Jan 31 07:49:52 compute-2 ceph-mon[77282]: osdmap e250: 3 total, 3 up, 3 in
Jan 31 07:49:52 compute-2 nova_compute[226829]: 2026-01-31 07:49:52.316 226833 DEBUG oslo_concurrency.lockutils [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:49:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:53.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:53.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:53 compute-2 nova_compute[226829]: 2026-01-31 07:49:53.921 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:49:54 compute-2 ceph-mon[77282]: pgmap v1579: 305 pgs: 305 active+clean; 114 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 171 op/s
Jan 31 07:49:54 compute-2 nova_compute[226829]: 2026-01-31 07:49:54.449 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:54 compute-2 sudo[255229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:49:54 compute-2 sudo[255229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:49:54 compute-2 sudo[255229]: pam_unix(sudo:session): session closed for user root
Jan 31 07:49:54 compute-2 sudo[255254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:49:54 compute-2 sudo[255254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:49:54 compute-2 sudo[255254]: pam_unix(sudo:session): session closed for user root
Jan 31 07:49:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:49:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:55.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:49:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1330305516' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:49:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1330305516' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:49:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:55.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:56 compute-2 ceph-mon[77282]: pgmap v1580: 305 pgs: 305 active+clean; 114 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 101 KiB/s rd, 294 KiB/s wr, 47 op/s
Jan 31 07:49:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:57.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:49:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:57.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:49:58 compute-2 ceph-mon[77282]: pgmap v1581: 305 pgs: 305 active+clean; 88 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 256 KiB/s wr, 140 op/s
Jan 31 07:49:58 compute-2 nova_compute[226829]: 2026-01-31 07:49:58.924 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:49:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:49:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:59.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:49:59 compute-2 podman[255281]: 2026-01-31 07:49:59.164692006 +0000 UTC m=+0.050312047 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 07:49:59 compute-2 nova_compute[226829]: 2026-01-31 07:49:59.450 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:49:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:49:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:59.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:50:00 compute-2 ceph-mon[77282]: pgmap v1582: 305 pgs: 305 active+clean; 88 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 227 KiB/s wr, 152 op/s
Jan 31 07:50:00 compute-2 ceph-mon[77282]: overall HEALTH_OK
Jan 31 07:50:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:50:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:01.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:50:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:01.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:02 compute-2 ceph-mon[77282]: pgmap v1583: 305 pgs: 305 active+clean; 88 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 120 op/s
Jan 31 07:50:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:03.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:03 compute-2 nova_compute[226829]: 2026-01-31 07:50:03.158 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:03.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:03 compute-2 nova_compute[226829]: 2026-01-31 07:50:03.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:50:03 compute-2 sudo[255303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:50:03 compute-2 sudo[255303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:03 compute-2 sudo[255303]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:03 compute-2 nova_compute[226829]: 2026-01-31 07:50:03.892 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845788.8908927, 4a932d78-e0fe-4c23-a756-916144472a64 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:50:03 compute-2 nova_compute[226829]: 2026-01-31 07:50:03.892 226833 INFO nova.compute.manager [-] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] VM Stopped (Lifecycle Event)
Jan 31 07:50:03 compute-2 nova_compute[226829]: 2026-01-31 07:50:03.919 226833 DEBUG nova.compute.manager [None req-7b9db14a-347b-447a-a9ea-066c43ad4d63 - - - - - -] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:50:03 compute-2 nova_compute[226829]: 2026-01-31 07:50:03.925 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:03 compute-2 sudo[255328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:50:03 compute-2 sudo[255328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:03 compute-2 sudo[255328]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:03 compute-2 sudo[255353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:50:03 compute-2 sudo[255353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:03 compute-2 sudo[255353]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:50:04 compute-2 sudo[255378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:50:04 compute-2 sudo[255378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:04 compute-2 sudo[255378]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:04 compute-2 ceph-mon[77282]: pgmap v1584: 305 pgs: 305 active+clean; 88 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 105 op/s
Jan 31 07:50:04 compute-2 nova_compute[226829]: 2026-01-31 07:50:04.452 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:04 compute-2 nova_compute[226829]: 2026-01-31 07:50:04.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:50:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:05.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:05 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 07:50:05 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:50:05 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:50:05 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:50:05 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:50:05 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:50:05 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:50:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:05.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:05 compute-2 nova_compute[226829]: 2026-01-31 07:50:05.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:50:06 compute-2 ceph-mon[77282]: pgmap v1585: 305 pgs: 305 active+clean; 88 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 97 op/s
Jan 31 07:50:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:06.859 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:06.861 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:06.862 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:07.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:07.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:07 compute-2 nova_compute[226829]: 2026-01-31 07:50:07.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:50:07 compute-2 ceph-mon[77282]: pgmap v1586: 305 pgs: 305 active+clean; 116 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 154 op/s
Jan 31 07:50:08 compute-2 nova_compute[226829]: 2026-01-31 07:50:08.927 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:50:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:09.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:09 compute-2 nova_compute[226829]: 2026-01-31 07:50:09.454 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:50:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:09.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:50:09 compute-2 nova_compute[226829]: 2026-01-31 07:50:09.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:50:09 compute-2 nova_compute[226829]: 2026-01-31 07:50:09.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:50:09 compute-2 nova_compute[226829]: 2026-01-31 07:50:09.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:50:09 compute-2 nova_compute[226829]: 2026-01-31 07:50:09.523 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:50:09 compute-2 nova_compute[226829]: 2026-01-31 07:50:09.524 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:50:10 compute-2 ceph-mon[77282]: pgmap v1587: 305 pgs: 305 active+clean; 119 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 338 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 31 07:50:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2475019247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3085610044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:10 compute-2 nova_compute[226829]: 2026-01-31 07:50:10.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:50:10 compute-2 nova_compute[226829]: 2026-01-31 07:50:10.513 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:10 compute-2 nova_compute[226829]: 2026-01-31 07:50:10.513 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:10 compute-2 nova_compute[226829]: 2026-01-31 07:50:10.514 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:10 compute-2 nova_compute[226829]: 2026-01-31 07:50:10.514 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:50:10 compute-2 nova_compute[226829]: 2026-01-31 07:50:10.515 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:50:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:50:10 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/37391559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:11 compute-2 nova_compute[226829]: 2026-01-31 07:50:11.012 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:50:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:50:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:11.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:50:11 compute-2 nova_compute[226829]: 2026-01-31 07:50:11.178 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:50:11 compute-2 nova_compute[226829]: 2026-01-31 07:50:11.179 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4557MB free_disk=20.942955017089844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:50:11 compute-2 nova_compute[226829]: 2026-01-31 07:50:11.179 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:11 compute-2 nova_compute[226829]: 2026-01-31 07:50:11.179 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/37391559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:50:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:50:11 compute-2 nova_compute[226829]: 2026-01-31 07:50:11.250 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:50:11 compute-2 nova_compute[226829]: 2026-01-31 07:50:11.250 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:50:11 compute-2 nova_compute[226829]: 2026-01-31 07:50:11.271 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:50:11 compute-2 sudo[255463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:50:11 compute-2 sudo[255463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:11 compute-2 sudo[255463]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:11 compute-2 sudo[255498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:50:11 compute-2 sudo[255498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:11 compute-2 sudo[255498]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:11.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:50:11 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/889192927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:11 compute-2 nova_compute[226829]: 2026-01-31 07:50:11.702 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:50:11 compute-2 nova_compute[226829]: 2026-01-31 07:50:11.710 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:50:11 compute-2 nova_compute[226829]: 2026-01-31 07:50:11.734 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:50:11 compute-2 nova_compute[226829]: 2026-01-31 07:50:11.764 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:50:11 compute-2 nova_compute[226829]: 2026-01-31 07:50:11.764 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:12 compute-2 ceph-mon[77282]: pgmap v1588: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 07:50:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1573730805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/889192927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1809094251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:13.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:13.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:13 compute-2 nova_compute[226829]: 2026-01-31 07:50:13.929 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:50:14 compute-2 ceph-mon[77282]: pgmap v1589: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 07:50:14 compute-2 nova_compute[226829]: 2026-01-31 07:50:14.456 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:14 compute-2 nova_compute[226829]: 2026-01-31 07:50:14.764 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:50:14 compute-2 sudo[255535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:50:14 compute-2 sudo[255535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:14 compute-2 sudo[255535]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:15 compute-2 sudo[255560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:50:15 compute-2 sudo[255560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:15 compute-2 sudo[255560]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:50:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:15.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:50:15 compute-2 nova_compute[226829]: 2026-01-31 07:50:15.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:50:15 compute-2 nova_compute[226829]: 2026-01-31 07:50:15.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:50:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:15.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:16 compute-2 ceph-mon[77282]: pgmap v1590: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 07:50:16 compute-2 nova_compute[226829]: 2026-01-31 07:50:16.870 226833 DEBUG oslo_concurrency.lockutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Acquiring lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:16 compute-2 nova_compute[226829]: 2026-01-31 07:50:16.871 226833 DEBUG oslo_concurrency.lockutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:16 compute-2 nova_compute[226829]: 2026-01-31 07:50:16.900 226833 DEBUG nova.compute.manager [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:50:16 compute-2 nova_compute[226829]: 2026-01-31 07:50:16.980 226833 DEBUG oslo_concurrency.lockutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:16 compute-2 nova_compute[226829]: 2026-01-31 07:50:16.981 226833 DEBUG oslo_concurrency.lockutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:16 compute-2 nova_compute[226829]: 2026-01-31 07:50:16.986 226833 DEBUG nova.virt.hardware [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:50:16 compute-2 nova_compute[226829]: 2026-01-31 07:50:16.987 226833 INFO nova.compute.claims [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:50:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:17.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:17 compute-2 nova_compute[226829]: 2026-01-31 07:50:17.113 226833 DEBUG oslo_concurrency.processutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:50:17 compute-2 nova_compute[226829]: 2026-01-31 07:50:17.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:50:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:17.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:50:17 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3679847673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:17 compute-2 nova_compute[226829]: 2026-01-31 07:50:17.600 226833 DEBUG oslo_concurrency.processutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:50:17 compute-2 nova_compute[226829]: 2026-01-31 07:50:17.605 226833 DEBUG nova.compute.provider_tree [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:50:17 compute-2 nova_compute[226829]: 2026-01-31 07:50:17.624 226833 DEBUG nova.scheduler.client.report [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:50:17 compute-2 nova_compute[226829]: 2026-01-31 07:50:17.647 226833 DEBUG oslo_concurrency.lockutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:17 compute-2 nova_compute[226829]: 2026-01-31 07:50:17.648 226833 DEBUG nova.compute.manager [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:50:17 compute-2 nova_compute[226829]: 2026-01-31 07:50:17.705 226833 DEBUG nova.compute.manager [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:50:17 compute-2 nova_compute[226829]: 2026-01-31 07:50:17.706 226833 DEBUG nova.network.neutron [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:50:17 compute-2 nova_compute[226829]: 2026-01-31 07:50:17.733 226833 INFO nova.virt.libvirt.driver [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:50:17 compute-2 nova_compute[226829]: 2026-01-31 07:50:17.761 226833 DEBUG nova.compute.manager [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:50:17 compute-2 nova_compute[226829]: 2026-01-31 07:50:17.903 226833 DEBUG nova.compute.manager [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:50:17 compute-2 nova_compute[226829]: 2026-01-31 07:50:17.906 226833 DEBUG nova.virt.libvirt.driver [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:50:17 compute-2 nova_compute[226829]: 2026-01-31 07:50:17.907 226833 INFO nova.virt.libvirt.driver [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Creating image(s)
Jan 31 07:50:17 compute-2 nova_compute[226829]: 2026-01-31 07:50:17.945 226833 DEBUG nova.storage.rbd_utils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] rbd image 045df8b7-b820-4d9e-98c8-0fdacc84b4b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:50:17 compute-2 nova_compute[226829]: 2026-01-31 07:50:17.976 226833 DEBUG nova.storage.rbd_utils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] rbd image 045df8b7-b820-4d9e-98c8-0fdacc84b4b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:50:18 compute-2 nova_compute[226829]: 2026-01-31 07:50:18.005 226833 DEBUG nova.storage.rbd_utils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] rbd image 045df8b7-b820-4d9e-98c8-0fdacc84b4b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:50:18 compute-2 nova_compute[226829]: 2026-01-31 07:50:18.008 226833 DEBUG oslo_concurrency.processutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:50:18 compute-2 nova_compute[226829]: 2026-01-31 07:50:18.065 226833 DEBUG oslo_concurrency.processutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:50:18 compute-2 nova_compute[226829]: 2026-01-31 07:50:18.065 226833 DEBUG oslo_concurrency.lockutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:18 compute-2 nova_compute[226829]: 2026-01-31 07:50:18.066 226833 DEBUG oslo_concurrency.lockutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:18 compute-2 nova_compute[226829]: 2026-01-31 07:50:18.066 226833 DEBUG oslo_concurrency.lockutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:18 compute-2 nova_compute[226829]: 2026-01-31 07:50:18.093 226833 DEBUG nova.storage.rbd_utils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] rbd image 045df8b7-b820-4d9e-98c8-0fdacc84b4b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:50:18 compute-2 nova_compute[226829]: 2026-01-31 07:50:18.097 226833 DEBUG oslo_concurrency.processutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 045df8b7-b820-4d9e-98c8-0fdacc84b4b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:50:18 compute-2 nova_compute[226829]: 2026-01-31 07:50:18.123 226833 DEBUG nova.policy [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '07aa1b5aaea444449f8ef00dfe56e8eb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2a52e7591d7f4e068e3f9fa0e4e288d5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:50:18 compute-2 ceph-mon[77282]: pgmap v1591: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 07:50:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3679847673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:18 compute-2 nova_compute[226829]: 2026-01-31 07:50:18.837 226833 DEBUG oslo_concurrency.processutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 045df8b7-b820-4d9e-98c8-0fdacc84b4b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.740s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:50:18 compute-2 nova_compute[226829]: 2026-01-31 07:50:18.907 226833 DEBUG nova.storage.rbd_utils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] resizing rbd image 045df8b7-b820-4d9e-98c8-0fdacc84b4b9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:50:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:50:19 compute-2 nova_compute[226829]: 2026-01-31 07:50:19.061 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:19.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:19 compute-2 nova_compute[226829]: 2026-01-31 07:50:19.450 226833 DEBUG nova.network.neutron [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Successfully created port: ab21ca46-ceec-496e-859e-91c639339de6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:50:19 compute-2 nova_compute[226829]: 2026-01-31 07:50:19.459 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:19.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:19 compute-2 nova_compute[226829]: 2026-01-31 07:50:19.509 226833 DEBUG nova.objects.instance [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lazy-loading 'migration_context' on Instance uuid 045df8b7-b820-4d9e-98c8-0fdacc84b4b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:50:19 compute-2 nova_compute[226829]: 2026-01-31 07:50:19.548 226833 DEBUG nova.virt.libvirt.driver [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:50:19 compute-2 nova_compute[226829]: 2026-01-31 07:50:19.549 226833 DEBUG nova.virt.libvirt.driver [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Ensure instance console log exists: /var/lib/nova/instances/045df8b7-b820-4d9e-98c8-0fdacc84b4b9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:50:19 compute-2 nova_compute[226829]: 2026-01-31 07:50:19.549 226833 DEBUG oslo_concurrency.lockutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:19 compute-2 nova_compute[226829]: 2026-01-31 07:50:19.550 226833 DEBUG oslo_concurrency.lockutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:19 compute-2 nova_compute[226829]: 2026-01-31 07:50:19.550 226833 DEBUG oslo_concurrency.lockutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:19 compute-2 ceph-mon[77282]: pgmap v1592: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 158 KiB/s wr, 5 op/s
Jan 31 07:50:20 compute-2 podman[255776]: 2026-01-31 07:50:20.197978654 +0000 UTC m=+0.085538358 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 07:50:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:50:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:21.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:50:21 compute-2 nova_compute[226829]: 2026-01-31 07:50:21.257 226833 DEBUG nova.network.neutron [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Successfully updated port: ab21ca46-ceec-496e-859e-91c639339de6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:50:21 compute-2 nova_compute[226829]: 2026-01-31 07:50:21.282 226833 DEBUG oslo_concurrency.lockutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Acquiring lock "refresh_cache-045df8b7-b820-4d9e-98c8-0fdacc84b4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:50:21 compute-2 nova_compute[226829]: 2026-01-31 07:50:21.282 226833 DEBUG oslo_concurrency.lockutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Acquired lock "refresh_cache-045df8b7-b820-4d9e-98c8-0fdacc84b4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:50:21 compute-2 nova_compute[226829]: 2026-01-31 07:50:21.283 226833 DEBUG nova.network.neutron [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:50:21 compute-2 nova_compute[226829]: 2026-01-31 07:50:21.432 226833 DEBUG nova.compute.manager [req-2e5c63ac-e52d-4c89-a927-2d8735be3ef1 req-57f0014f-591e-40ec-86cf-5bf60567f7f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Received event network-changed-ab21ca46-ceec-496e-859e-91c639339de6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:50:21 compute-2 nova_compute[226829]: 2026-01-31 07:50:21.433 226833 DEBUG nova.compute.manager [req-2e5c63ac-e52d-4c89-a927-2d8735be3ef1 req-57f0014f-591e-40ec-86cf-5bf60567f7f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Refreshing instance network info cache due to event network-changed-ab21ca46-ceec-496e-859e-91c639339de6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:50:21 compute-2 nova_compute[226829]: 2026-01-31 07:50:21.433 226833 DEBUG oslo_concurrency.lockutils [req-2e5c63ac-e52d-4c89-a927-2d8735be3ef1 req-57f0014f-591e-40ec-86cf-5bf60567f7f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-045df8b7-b820-4d9e-98c8-0fdacc84b4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:50:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:50:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:21.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:50:21 compute-2 nova_compute[226829]: 2026-01-31 07:50:21.586 226833 DEBUG nova.network.neutron [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:50:22 compute-2 ceph-mon[77282]: pgmap v1593: 305 pgs: 305 active+clean; 159 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 1.5 MiB/s wr, 29 op/s
Jan 31 07:50:22 compute-2 nova_compute[226829]: 2026-01-31 07:50:22.773 226833 DEBUG nova.network.neutron [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Updating instance_info_cache with network_info: [{"id": "ab21ca46-ceec-496e-859e-91c639339de6", "address": "fa:16:3e:54:f9:35", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab21ca46-ce", "ovs_interfaceid": "ab21ca46-ceec-496e-859e-91c639339de6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:50:22 compute-2 nova_compute[226829]: 2026-01-31 07:50:22.794 226833 DEBUG oslo_concurrency.lockutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Releasing lock "refresh_cache-045df8b7-b820-4d9e-98c8-0fdacc84b4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:50:22 compute-2 nova_compute[226829]: 2026-01-31 07:50:22.795 226833 DEBUG nova.compute.manager [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Instance network_info: |[{"id": "ab21ca46-ceec-496e-859e-91c639339de6", "address": "fa:16:3e:54:f9:35", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab21ca46-ce", "ovs_interfaceid": "ab21ca46-ceec-496e-859e-91c639339de6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:50:22 compute-2 nova_compute[226829]: 2026-01-31 07:50:22.795 226833 DEBUG oslo_concurrency.lockutils [req-2e5c63ac-e52d-4c89-a927-2d8735be3ef1 req-57f0014f-591e-40ec-86cf-5bf60567f7f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-045df8b7-b820-4d9e-98c8-0fdacc84b4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:50:22 compute-2 nova_compute[226829]: 2026-01-31 07:50:22.796 226833 DEBUG nova.network.neutron [req-2e5c63ac-e52d-4c89-a927-2d8735be3ef1 req-57f0014f-591e-40ec-86cf-5bf60567f7f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Refreshing network info cache for port ab21ca46-ceec-496e-859e-91c639339de6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:50:22 compute-2 nova_compute[226829]: 2026-01-31 07:50:22.798 226833 DEBUG nova.virt.libvirt.driver [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Start _get_guest_xml network_info=[{"id": "ab21ca46-ceec-496e-859e-91c639339de6", "address": "fa:16:3e:54:f9:35", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab21ca46-ce", "ovs_interfaceid": "ab21ca46-ceec-496e-859e-91c639339de6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:50:22 compute-2 nova_compute[226829]: 2026-01-31 07:50:22.803 226833 WARNING nova.virt.libvirt.driver [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:50:22 compute-2 nova_compute[226829]: 2026-01-31 07:50:22.809 226833 DEBUG nova.virt.libvirt.host [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:50:22 compute-2 nova_compute[226829]: 2026-01-31 07:50:22.809 226833 DEBUG nova.virt.libvirt.host [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:50:22 compute-2 nova_compute[226829]: 2026-01-31 07:50:22.815 226833 DEBUG nova.virt.libvirt.host [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:50:22 compute-2 nova_compute[226829]: 2026-01-31 07:50:22.815 226833 DEBUG nova.virt.libvirt.host [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:50:22 compute-2 nova_compute[226829]: 2026-01-31 07:50:22.816 226833 DEBUG nova.virt.libvirt.driver [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:50:22 compute-2 nova_compute[226829]: 2026-01-31 07:50:22.817 226833 DEBUG nova.virt.hardware [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:50:22 compute-2 nova_compute[226829]: 2026-01-31 07:50:22.817 226833 DEBUG nova.virt.hardware [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:50:22 compute-2 nova_compute[226829]: 2026-01-31 07:50:22.817 226833 DEBUG nova.virt.hardware [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:50:22 compute-2 nova_compute[226829]: 2026-01-31 07:50:22.817 226833 DEBUG nova.virt.hardware [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:50:22 compute-2 nova_compute[226829]: 2026-01-31 07:50:22.818 226833 DEBUG nova.virt.hardware [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:50:22 compute-2 nova_compute[226829]: 2026-01-31 07:50:22.818 226833 DEBUG nova.virt.hardware [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:50:22 compute-2 nova_compute[226829]: 2026-01-31 07:50:22.818 226833 DEBUG nova.virt.hardware [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:50:22 compute-2 nova_compute[226829]: 2026-01-31 07:50:22.818 226833 DEBUG nova.virt.hardware [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:50:22 compute-2 nova_compute[226829]: 2026-01-31 07:50:22.819 226833 DEBUG nova.virt.hardware [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:50:22 compute-2 nova_compute[226829]: 2026-01-31 07:50:22.819 226833 DEBUG nova.virt.hardware [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:50:22 compute-2 nova_compute[226829]: 2026-01-31 07:50:22.819 226833 DEBUG nova.virt.hardware [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:50:22 compute-2 nova_compute[226829]: 2026-01-31 07:50:22.821 226833 DEBUG oslo_concurrency.processutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:50:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:50:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:23.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:50:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:50:23 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3912991515' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.260 226833 DEBUG oslo_concurrency.processutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.286 226833 DEBUG nova.storage.rbd_utils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] rbd image 045df8b7-b820-4d9e-98c8-0fdacc84b4b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.289 226833 DEBUG oslo_concurrency.processutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:50:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:23.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:50:23 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/147065805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.723 226833 DEBUG oslo_concurrency.processutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.726 226833 DEBUG nova.virt.libvirt.vif [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:50:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-2100114645',display_name='tempest-SecurityGroupsTestJSON-server-2100114645',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-2100114645',id=69,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2a52e7591d7f4e068e3f9fa0e4e288d5',ramdisk_id='',reservation_id='r-tq9hwigs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-31421501',owner_user_name='tempest-SecurityGroupsTestJSON-31421501-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:50:17Z,user_data=None,user_id='07aa1b5aaea444449f8ef00dfe56e8eb',uuid=045df8b7-b820-4d9e-98c8-0fdacc84b4b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab21ca46-ceec-496e-859e-91c639339de6", "address": "fa:16:3e:54:f9:35", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab21ca46-ce", "ovs_interfaceid": "ab21ca46-ceec-496e-859e-91c639339de6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.727 226833 DEBUG nova.network.os_vif_util [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Converting VIF {"id": "ab21ca46-ceec-496e-859e-91c639339de6", "address": "fa:16:3e:54:f9:35", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab21ca46-ce", "ovs_interfaceid": "ab21ca46-ceec-496e-859e-91c639339de6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.729 226833 DEBUG nova.network.os_vif_util [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:f9:35,bridge_name='br-int',has_traffic_filtering=True,id=ab21ca46-ceec-496e-859e-91c639339de6,network=Network(abea69d7-4daf-4b3f-9f7f-2b06f416400d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab21ca46-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.731 226833 DEBUG nova.objects.instance [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 045df8b7-b820-4d9e-98c8-0fdacc84b4b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.761 226833 DEBUG nova.virt.libvirt.driver [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:50:23 compute-2 nova_compute[226829]:   <uuid>045df8b7-b820-4d9e-98c8-0fdacc84b4b9</uuid>
Jan 31 07:50:23 compute-2 nova_compute[226829]:   <name>instance-00000045</name>
Jan 31 07:50:23 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:50:23 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:50:23 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <nova:name>tempest-SecurityGroupsTestJSON-server-2100114645</nova:name>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:50:22</nova:creationTime>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:50:23 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:50:23 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:50:23 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:50:23 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:50:23 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:50:23 compute-2 nova_compute[226829]:         <nova:user uuid="07aa1b5aaea444449f8ef00dfe56e8eb">tempest-SecurityGroupsTestJSON-31421501-project-member</nova:user>
Jan 31 07:50:23 compute-2 nova_compute[226829]:         <nova:project uuid="2a52e7591d7f4e068e3f9fa0e4e288d5">tempest-SecurityGroupsTestJSON-31421501</nova:project>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 07:50:23 compute-2 nova_compute[226829]:         <nova:port uuid="ab21ca46-ceec-496e-859e-91c639339de6">
Jan 31 07:50:23 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:50:23 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:50:23 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <system>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <entry name="serial">045df8b7-b820-4d9e-98c8-0fdacc84b4b9</entry>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <entry name="uuid">045df8b7-b820-4d9e-98c8-0fdacc84b4b9</entry>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     </system>
Jan 31 07:50:23 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:50:23 compute-2 nova_compute[226829]:   <os>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:   </os>
Jan 31 07:50:23 compute-2 nova_compute[226829]:   <features>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:   </features>
Jan 31 07:50:23 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:50:23 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:50:23 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/045df8b7-b820-4d9e-98c8-0fdacc84b4b9_disk">
Jan 31 07:50:23 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       </source>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:50:23 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/045df8b7-b820-4d9e-98c8-0fdacc84b4b9_disk.config">
Jan 31 07:50:23 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       </source>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:50:23 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:54:f9:35"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <target dev="tapab21ca46-ce"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/045df8b7-b820-4d9e-98c8-0fdacc84b4b9/console.log" append="off"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <video>
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     </video>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:50:23 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:50:23 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:50:23 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:50:23 compute-2 nova_compute[226829]: </domain>
Jan 31 07:50:23 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.763 226833 DEBUG nova.compute.manager [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Preparing to wait for external event network-vif-plugged-ab21ca46-ceec-496e-859e-91c639339de6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.764 226833 DEBUG oslo_concurrency.lockutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Acquiring lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.765 226833 DEBUG oslo_concurrency.lockutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.765 226833 DEBUG oslo_concurrency.lockutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.766 226833 DEBUG nova.virt.libvirt.vif [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:50:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-2100114645',display_name='tempest-SecurityGroupsTestJSON-server-2100114645',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-2100114645',id=69,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2a52e7591d7f4e068e3f9fa0e4e288d5',ramdisk_id='',reservation_id='r-tq9hwigs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-31421501',owner_user_name='tempest-SecurityGroupsTestJSON-31421501-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:50:17Z,user_data=None,user_id='07aa1b5aaea444449f8ef00dfe56e8eb',uuid=045df8b7-b820-4d9e-98c8-0fdacc84b4b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab21ca46-ceec-496e-859e-91c639339de6", "address": "fa:16:3e:54:f9:35", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab21ca46-ce", "ovs_interfaceid": "ab21ca46-ceec-496e-859e-91c639339de6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.767 226833 DEBUG nova.network.os_vif_util [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Converting VIF {"id": "ab21ca46-ceec-496e-859e-91c639339de6", "address": "fa:16:3e:54:f9:35", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab21ca46-ce", "ovs_interfaceid": "ab21ca46-ceec-496e-859e-91c639339de6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.768 226833 DEBUG nova.network.os_vif_util [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:f9:35,bridge_name='br-int',has_traffic_filtering=True,id=ab21ca46-ceec-496e-859e-91c639339de6,network=Network(abea69d7-4daf-4b3f-9f7f-2b06f416400d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab21ca46-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.769 226833 DEBUG os_vif [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:f9:35,bridge_name='br-int',has_traffic_filtering=True,id=ab21ca46-ceec-496e-859e-91c639339de6,network=Network(abea69d7-4daf-4b3f-9f7f-2b06f416400d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab21ca46-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.770 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.771 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.772 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.784 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.785 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab21ca46-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.786 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapab21ca46-ce, col_values=(('external_ids', {'iface-id': 'ab21ca46-ceec-496e-859e-91c639339de6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:f9:35', 'vm-uuid': '045df8b7-b820-4d9e-98c8-0fdacc84b4b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.788 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:23 compute-2 NetworkManager[48999]: <info>  [1769845823.7895] manager: (tapab21ca46-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.791 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.798 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.799 226833 INFO os_vif [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:f9:35,bridge_name='br-int',has_traffic_filtering=True,id=ab21ca46-ceec-496e-859e-91c639339de6,network=Network(abea69d7-4daf-4b3f-9f7f-2b06f416400d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab21ca46-ce')
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.887 226833 DEBUG nova.virt.libvirt.driver [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.888 226833 DEBUG nova.virt.libvirt.driver [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.888 226833 DEBUG nova.virt.libvirt.driver [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] No VIF found with MAC fa:16:3e:54:f9:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.889 226833 INFO nova.virt.libvirt.driver [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Using config drive
Jan 31 07:50:23 compute-2 nova_compute[226829]: 2026-01-31 07:50:23.926 226833 DEBUG nova.storage.rbd_utils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] rbd image 045df8b7-b820-4d9e-98c8-0fdacc84b4b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:50:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:50:24 compute-2 ceph-mon[77282]: pgmap v1594: 305 pgs: 305 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 07:50:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3912991515' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:50:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/147065805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:50:24 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:24.380 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:50:24 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:24.384 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:50:24 compute-2 nova_compute[226829]: 2026-01-31 07:50:24.414 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:24 compute-2 nova_compute[226829]: 2026-01-31 07:50:24.461 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:24 compute-2 nova_compute[226829]: 2026-01-31 07:50:24.596 226833 INFO nova.virt.libvirt.driver [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Creating config drive at /var/lib/nova/instances/045df8b7-b820-4d9e-98c8-0fdacc84b4b9/disk.config
Jan 31 07:50:24 compute-2 nova_compute[226829]: 2026-01-31 07:50:24.600 226833 DEBUG oslo_concurrency.processutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/045df8b7-b820-4d9e-98c8-0fdacc84b4b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpwsawv8wh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:50:24 compute-2 nova_compute[226829]: 2026-01-31 07:50:24.728 226833 DEBUG oslo_concurrency.processutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/045df8b7-b820-4d9e-98c8-0fdacc84b4b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpwsawv8wh" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:50:24 compute-2 nova_compute[226829]: 2026-01-31 07:50:24.755 226833 DEBUG nova.storage.rbd_utils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] rbd image 045df8b7-b820-4d9e-98c8-0fdacc84b4b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:50:24 compute-2 nova_compute[226829]: 2026-01-31 07:50:24.759 226833 DEBUG oslo_concurrency.processutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/045df8b7-b820-4d9e-98c8-0fdacc84b4b9/disk.config 045df8b7-b820-4d9e-98c8-0fdacc84b4b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:50:24 compute-2 nova_compute[226829]: 2026-01-31 07:50:24.939 226833 DEBUG oslo_concurrency.processutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/045df8b7-b820-4d9e-98c8-0fdacc84b4b9/disk.config 045df8b7-b820-4d9e-98c8-0fdacc84b4b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:50:24 compute-2 nova_compute[226829]: 2026-01-31 07:50:24.940 226833 INFO nova.virt.libvirt.driver [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Deleting local config drive /var/lib/nova/instances/045df8b7-b820-4d9e-98c8-0fdacc84b4b9/disk.config because it was imported into RBD.
Jan 31 07:50:24 compute-2 kernel: tapab21ca46-ce: entered promiscuous mode
Jan 31 07:50:24 compute-2 NetworkManager[48999]: <info>  [1769845824.9905] manager: (tapab21ca46-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/98)
Jan 31 07:50:24 compute-2 ovn_controller[133834]: 2026-01-31T07:50:24Z|00173|binding|INFO|Claiming lport ab21ca46-ceec-496e-859e-91c639339de6 for this chassis.
Jan 31 07:50:24 compute-2 ovn_controller[133834]: 2026-01-31T07:50:24Z|00174|binding|INFO|ab21ca46-ceec-496e-859e-91c639339de6: Claiming fa:16:3e:54:f9:35 10.100.0.8
Jan 31 07:50:24 compute-2 nova_compute[226829]: 2026-01-31 07:50:24.992 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:25 compute-2 nova_compute[226829]: 2026-01-31 07:50:25.015 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:25 compute-2 ovn_controller[133834]: 2026-01-31T07:50:25Z|00175|binding|INFO|Setting lport ab21ca46-ceec-496e-859e-91c639339de6 ovn-installed in OVS
Jan 31 07:50:25 compute-2 nova_compute[226829]: 2026-01-31 07:50:25.020 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:25 compute-2 systemd-machined[195142]: New machine qemu-28-instance-00000045.
Jan 31 07:50:25 compute-2 ovn_controller[133834]: 2026-01-31T07:50:25Z|00176|binding|INFO|Setting lport ab21ca46-ceec-496e-859e-91c639339de6 up in Southbound
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:25.026 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:f9:35 10.100.0.8'], port_security=['fa:16:3e:54:f9:35 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '045df8b7-b820-4d9e-98c8-0fdacc84b4b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abea69d7-4daf-4b3f-9f7f-2b06f416400d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a52e7591d7f4e068e3f9fa0e4e288d5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd6aa0181-fdb2-4d3e-a8a9-d361e28d0bfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2b3302f-04ed-4085-954c-3fd369ef549b, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=ab21ca46-ceec-496e-859e-91c639339de6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:25.027 143841 INFO neutron.agent.ovn.metadata.agent [-] Port ab21ca46-ceec-496e-859e-91c639339de6 in datapath abea69d7-4daf-4b3f-9f7f-2b06f416400d bound to our chassis
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:25.029 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abea69d7-4daf-4b3f-9f7f-2b06f416400d
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:25.038 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1278f8c4-e9f2-416a-9bed-8d6866289c97]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:25.039 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapabea69d7-41 in ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:50:25 compute-2 systemd[1]: Started Virtual Machine qemu-28-instance-00000045.
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:25.041 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapabea69d7-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:25.041 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7e644a93-fcd9-47c9-aa5d-65c849a2d199]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:25.042 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7ac7464b-3ef8-4170-add9-dabe903c4447]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:25 compute-2 systemd-udevd[255942]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:25.052 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[c7f6df3a-e0d3-4966-b9d4-8f30df54d22c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:25 compute-2 NetworkManager[48999]: <info>  [1769845825.0621] device (tapab21ca46-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:50:25 compute-2 NetworkManager[48999]: <info>  [1769845825.0628] device (tapab21ca46-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:50:25 compute-2 nova_compute[226829]: 2026-01-31 07:50:25.063 226833 DEBUG nova.network.neutron [req-2e5c63ac-e52d-4c89-a927-2d8735be3ef1 req-57f0014f-591e-40ec-86cf-5bf60567f7f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Updated VIF entry in instance network info cache for port ab21ca46-ceec-496e-859e-91c639339de6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:50:25 compute-2 nova_compute[226829]: 2026-01-31 07:50:25.063 226833 DEBUG nova.network.neutron [req-2e5c63ac-e52d-4c89-a927-2d8735be3ef1 req-57f0014f-591e-40ec-86cf-5bf60567f7f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Updating instance_info_cache with network_info: [{"id": "ab21ca46-ceec-496e-859e-91c639339de6", "address": "fa:16:3e:54:f9:35", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab21ca46-ce", "ovs_interfaceid": "ab21ca46-ceec-496e-859e-91c639339de6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:25.064 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[dc910b5a-0deb-41a7-be45-3ef0200ba794]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:25.088 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[1cc7535c-958d-4ea0-b150-f00350d1f7e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:25.093 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b0fd7185-e6bf-4642-8a37-fa9ea804c35b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:25 compute-2 NetworkManager[48999]: <info>  [1769845825.0944] manager: (tapabea69d7-40): new Veth device (/org/freedesktop/NetworkManager/Devices/99)
Jan 31 07:50:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:25.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:25.123 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[11b1e2b9-dec1-4ce5-862d-05527e7f7797]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:25.126 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[f8c8d417-f1c5-4210-90b4-14ff5c13a6df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:25 compute-2 NetworkManager[48999]: <info>  [1769845825.1448] device (tapabea69d7-40): carrier: link connected
Jan 31 07:50:25 compute-2 nova_compute[226829]: 2026-01-31 07:50:25.148 226833 DEBUG oslo_concurrency.lockutils [req-2e5c63ac-e52d-4c89-a927-2d8735be3ef1 req-57f0014f-591e-40ec-86cf-5bf60567f7f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-045df8b7-b820-4d9e-98c8-0fdacc84b4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:25.150 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[016640d0-3e41-465c-8718-dc5bdd9a4b92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:25.168 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cf4db4bb-9294-46a4-8b08-6a0c265ed8be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabea69d7-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:68:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607761, 'reachable_time': 42821, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255973, 'error': None, 'target': 'ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:25.181 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[12ef098b-2498-4160-9826-0b90fd8531bf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5b:68f2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607761, 'tstamp': 607761}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255974, 'error': None, 'target': 'ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:25.193 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7798eb69-31f1-46e0-bc53-f845f265be33]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabea69d7-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:68:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607761, 'reachable_time': 42821, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255975, 'error': None, 'target': 'ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:25.213 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[892cfbc4-9183-4905-b7be-b9f4ab0fa0ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:25.264 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d02c9b60-3c95-4435-9a23-2fd27bdaa5f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:25.267 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabea69d7-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:25.268 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:25.269 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabea69d7-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:50:25 compute-2 nova_compute[226829]: 2026-01-31 07:50:25.272 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:25 compute-2 NetworkManager[48999]: <info>  [1769845825.2729] manager: (tapabea69d7-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Jan 31 07:50:25 compute-2 kernel: tapabea69d7-40: entered promiscuous mode
Jan 31 07:50:25 compute-2 nova_compute[226829]: 2026-01-31 07:50:25.275 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:25.277 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabea69d7-40, col_values=(('external_ids', {'iface-id': '38a02e8d-e8fe-4f86-adeb-b79ce221c983'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:50:25 compute-2 nova_compute[226829]: 2026-01-31 07:50:25.279 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:25 compute-2 nova_compute[226829]: 2026-01-31 07:50:25.280 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:25 compute-2 ovn_controller[133834]: 2026-01-31T07:50:25Z|00177|binding|INFO|Releasing lport 38a02e8d-e8fe-4f86-adeb-b79ce221c983 from this chassis (sb_readonly=0)
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:25.281 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/abea69d7-4daf-4b3f-9f7f-2b06f416400d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/abea69d7-4daf-4b3f-9f7f-2b06f416400d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:25.282 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[bedec255-e8f2-4929-8f8a-e58b3f1609fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:25.283 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: global
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-abea69d7-4daf-4b3f-9f7f-2b06f416400d
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/abea69d7-4daf-4b3f-9f7f-2b06f416400d.pid.haproxy
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID abea69d7-4daf-4b3f-9f7f-2b06f416400d
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:50:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:25.284 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d', 'env', 'PROCESS_TAG=haproxy-abea69d7-4daf-4b3f-9f7f-2b06f416400d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/abea69d7-4daf-4b3f-9f7f-2b06f416400d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:50:25 compute-2 nova_compute[226829]: 2026-01-31 07:50:25.286 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:50:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:25.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:50:25 compute-2 podman[256044]: 2026-01-31 07:50:25.612140012 +0000 UTC m=+0.049727732 container create cfd415dee460226957ab0f36a88a79aad5849791022609c827523ab6325afa07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:50:25 compute-2 systemd[1]: Started libpod-conmon-cfd415dee460226957ab0f36a88a79aad5849791022609c827523ab6325afa07.scope.
Jan 31 07:50:25 compute-2 nova_compute[226829]: 2026-01-31 07:50:25.661 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845825.661182, 045df8b7-b820-4d9e-98c8-0fdacc84b4b9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:50:25 compute-2 nova_compute[226829]: 2026-01-31 07:50:25.662 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] VM Started (Lifecycle Event)
Jan 31 07:50:25 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:50:25 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e748053e20b4d21ab825a42187fed54f87f207dbce40776a4ba26f9ec97a3016/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:50:25 compute-2 podman[256044]: 2026-01-31 07:50:25.581905127 +0000 UTC m=+0.019492867 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:50:25 compute-2 podman[256044]: 2026-01-31 07:50:25.680387642 +0000 UTC m=+0.117975382 container init cfd415dee460226957ab0f36a88a79aad5849791022609c827523ab6325afa07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 07:50:25 compute-2 podman[256044]: 2026-01-31 07:50:25.684227595 +0000 UTC m=+0.121815325 container start cfd415dee460226957ab0f36a88a79aad5849791022609c827523ab6325afa07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:50:25 compute-2 nova_compute[226829]: 2026-01-31 07:50:25.693 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:50:25 compute-2 nova_compute[226829]: 2026-01-31 07:50:25.697 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845825.661959, 045df8b7-b820-4d9e-98c8-0fdacc84b4b9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:50:25 compute-2 nova_compute[226829]: 2026-01-31 07:50:25.697 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] VM Paused (Lifecycle Event)
Jan 31 07:50:25 compute-2 neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d[256064]: [NOTICE]   (256068) : New worker (256070) forked
Jan 31 07:50:25 compute-2 neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d[256064]: [NOTICE]   (256068) : Loading success.
Jan 31 07:50:25 compute-2 nova_compute[226829]: 2026-01-31 07:50:25.744 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:50:25 compute-2 nova_compute[226829]: 2026-01-31 07:50:25.747 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:50:25 compute-2 nova_compute[226829]: 2026-01-31 07:50:25.767 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:50:26 compute-2 ceph-mon[77282]: pgmap v1595: 305 pgs: 305 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 07:50:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:27.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:27 compute-2 nova_compute[226829]: 2026-01-31 07:50:27.302 226833 DEBUG nova.compute.manager [req-55de674a-138e-4107-9520-3f318f4e38ca req-e6ffcce2-9e83-4e80-a692-a42d1f7b425e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Received event network-vif-plugged-ab21ca46-ceec-496e-859e-91c639339de6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:50:27 compute-2 nova_compute[226829]: 2026-01-31 07:50:27.302 226833 DEBUG oslo_concurrency.lockutils [req-55de674a-138e-4107-9520-3f318f4e38ca req-e6ffcce2-9e83-4e80-a692-a42d1f7b425e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:27 compute-2 nova_compute[226829]: 2026-01-31 07:50:27.302 226833 DEBUG oslo_concurrency.lockutils [req-55de674a-138e-4107-9520-3f318f4e38ca req-e6ffcce2-9e83-4e80-a692-a42d1f7b425e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:27 compute-2 nova_compute[226829]: 2026-01-31 07:50:27.303 226833 DEBUG oslo_concurrency.lockutils [req-55de674a-138e-4107-9520-3f318f4e38ca req-e6ffcce2-9e83-4e80-a692-a42d1f7b425e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:27 compute-2 nova_compute[226829]: 2026-01-31 07:50:27.303 226833 DEBUG nova.compute.manager [req-55de674a-138e-4107-9520-3f318f4e38ca req-e6ffcce2-9e83-4e80-a692-a42d1f7b425e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Processing event network-vif-plugged-ab21ca46-ceec-496e-859e-91c639339de6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:50:27 compute-2 nova_compute[226829]: 2026-01-31 07:50:27.303 226833 DEBUG nova.compute.manager [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:50:27 compute-2 nova_compute[226829]: 2026-01-31 07:50:27.306 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845827.3068452, 045df8b7-b820-4d9e-98c8-0fdacc84b4b9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:50:27 compute-2 nova_compute[226829]: 2026-01-31 07:50:27.307 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] VM Resumed (Lifecycle Event)
Jan 31 07:50:27 compute-2 nova_compute[226829]: 2026-01-31 07:50:27.308 226833 DEBUG nova.virt.libvirt.driver [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:50:27 compute-2 nova_compute[226829]: 2026-01-31 07:50:27.311 226833 INFO nova.virt.libvirt.driver [-] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Instance spawned successfully.
Jan 31 07:50:27 compute-2 nova_compute[226829]: 2026-01-31 07:50:27.312 226833 DEBUG nova.virt.libvirt.driver [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:50:27 compute-2 nova_compute[226829]: 2026-01-31 07:50:27.329 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:50:27 compute-2 nova_compute[226829]: 2026-01-31 07:50:27.334 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:50:27 compute-2 nova_compute[226829]: 2026-01-31 07:50:27.338 226833 DEBUG nova.virt.libvirt.driver [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:50:27 compute-2 nova_compute[226829]: 2026-01-31 07:50:27.338 226833 DEBUG nova.virt.libvirt.driver [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:50:27 compute-2 nova_compute[226829]: 2026-01-31 07:50:27.338 226833 DEBUG nova.virt.libvirt.driver [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:50:27 compute-2 nova_compute[226829]: 2026-01-31 07:50:27.339 226833 DEBUG nova.virt.libvirt.driver [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:50:27 compute-2 nova_compute[226829]: 2026-01-31 07:50:27.339 226833 DEBUG nova.virt.libvirt.driver [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:50:27 compute-2 nova_compute[226829]: 2026-01-31 07:50:27.340 226833 DEBUG nova.virt.libvirt.driver [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:50:27 compute-2 nova_compute[226829]: 2026-01-31 07:50:27.366 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:50:27 compute-2 nova_compute[226829]: 2026-01-31 07:50:27.400 226833 INFO nova.compute.manager [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Took 9.50 seconds to spawn the instance on the hypervisor.
Jan 31 07:50:27 compute-2 nova_compute[226829]: 2026-01-31 07:50:27.400 226833 DEBUG nova.compute.manager [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:50:27 compute-2 nova_compute[226829]: 2026-01-31 07:50:27.490 226833 INFO nova.compute.manager [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Took 10.53 seconds to build instance.
Jan 31 07:50:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:27.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:27 compute-2 nova_compute[226829]: 2026-01-31 07:50:27.532 226833 DEBUG oslo_concurrency.lockutils [None req-1ea71aba-0e12-45e8-bffa-58e7b76d2661 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:28 compute-2 ceph-mon[77282]: pgmap v1596: 305 pgs: 305 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 31 07:50:28 compute-2 nova_compute[226829]: 2026-01-31 07:50:28.791 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:50:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:29.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:29 compute-2 nova_compute[226829]: 2026-01-31 07:50:29.449 226833 DEBUG nova.compute.manager [req-c3f4de64-fc7d-4c46-a9ee-55120c663820 req-6363b7ff-5a91-4df8-8568-f6d5cbb0273f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Received event network-vif-plugged-ab21ca46-ceec-496e-859e-91c639339de6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:50:29 compute-2 nova_compute[226829]: 2026-01-31 07:50:29.450 226833 DEBUG oslo_concurrency.lockutils [req-c3f4de64-fc7d-4c46-a9ee-55120c663820 req-6363b7ff-5a91-4df8-8568-f6d5cbb0273f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:29 compute-2 nova_compute[226829]: 2026-01-31 07:50:29.450 226833 DEBUG oslo_concurrency.lockutils [req-c3f4de64-fc7d-4c46-a9ee-55120c663820 req-6363b7ff-5a91-4df8-8568-f6d5cbb0273f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:29 compute-2 nova_compute[226829]: 2026-01-31 07:50:29.450 226833 DEBUG oslo_concurrency.lockutils [req-c3f4de64-fc7d-4c46-a9ee-55120c663820 req-6363b7ff-5a91-4df8-8568-f6d5cbb0273f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:29 compute-2 nova_compute[226829]: 2026-01-31 07:50:29.451 226833 DEBUG nova.compute.manager [req-c3f4de64-fc7d-4c46-a9ee-55120c663820 req-6363b7ff-5a91-4df8-8568-f6d5cbb0273f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] No waiting events found dispatching network-vif-plugged-ab21ca46-ceec-496e-859e-91c639339de6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:50:29 compute-2 nova_compute[226829]: 2026-01-31 07:50:29.451 226833 WARNING nova.compute.manager [req-c3f4de64-fc7d-4c46-a9ee-55120c663820 req-6363b7ff-5a91-4df8-8568-f6d5cbb0273f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Received unexpected event network-vif-plugged-ab21ca46-ceec-496e-859e-91c639339de6 for instance with vm_state active and task_state None.
Jan 31 07:50:29 compute-2 nova_compute[226829]: 2026-01-31 07:50:29.463 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:29.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:30 compute-2 podman[256082]: 2026-01-31 07:50:30.174779578 +0000 UTC m=+0.055629721 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 07:50:30 compute-2 nova_compute[226829]: 2026-01-31 07:50:30.263 226833 DEBUG nova.compute.manager [req-d5b9eca0-8901-4ce8-bb58-dd57ffa77a65 req-e1478778-5807-4824-ad0f-600847668d2d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Received event network-changed-ab21ca46-ceec-496e-859e-91c639339de6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:50:30 compute-2 nova_compute[226829]: 2026-01-31 07:50:30.263 226833 DEBUG nova.compute.manager [req-d5b9eca0-8901-4ce8-bb58-dd57ffa77a65 req-e1478778-5807-4824-ad0f-600847668d2d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Refreshing instance network info cache due to event network-changed-ab21ca46-ceec-496e-859e-91c639339de6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:50:30 compute-2 nova_compute[226829]: 2026-01-31 07:50:30.263 226833 DEBUG oslo_concurrency.lockutils [req-d5b9eca0-8901-4ce8-bb58-dd57ffa77a65 req-e1478778-5807-4824-ad0f-600847668d2d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-045df8b7-b820-4d9e-98c8-0fdacc84b4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:50:30 compute-2 nova_compute[226829]: 2026-01-31 07:50:30.264 226833 DEBUG oslo_concurrency.lockutils [req-d5b9eca0-8901-4ce8-bb58-dd57ffa77a65 req-e1478778-5807-4824-ad0f-600847668d2d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-045df8b7-b820-4d9e-98c8-0fdacc84b4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:50:30 compute-2 nova_compute[226829]: 2026-01-31 07:50:30.264 226833 DEBUG nova.network.neutron [req-d5b9eca0-8901-4ce8-bb58-dd57ffa77a65 req-e1478778-5807-4824-ad0f-600847668d2d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Refreshing network info cache for port ab21ca46-ceec-496e-859e-91c639339de6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:50:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:30.387 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:50:30 compute-2 ceph-mon[77282]: pgmap v1597: 305 pgs: 305 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 31 07:50:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:31.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:31 compute-2 nova_compute[226829]: 2026-01-31 07:50:31.366 226833 DEBUG oslo_concurrency.lockutils [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Acquiring lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:31 compute-2 nova_compute[226829]: 2026-01-31 07:50:31.368 226833 DEBUG oslo_concurrency.lockutils [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:31 compute-2 nova_compute[226829]: 2026-01-31 07:50:31.369 226833 INFO nova.compute.manager [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Rebooting instance
Jan 31 07:50:31 compute-2 nova_compute[226829]: 2026-01-31 07:50:31.436 226833 DEBUG oslo_concurrency.lockutils [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Acquiring lock "refresh_cache-045df8b7-b820-4d9e-98c8-0fdacc84b4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:50:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:50:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:31.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:50:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4063991903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:32 compute-2 nova_compute[226829]: 2026-01-31 07:50:32.016 226833 DEBUG nova.network.neutron [req-d5b9eca0-8901-4ce8-bb58-dd57ffa77a65 req-e1478778-5807-4824-ad0f-600847668d2d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Updated VIF entry in instance network info cache for port ab21ca46-ceec-496e-859e-91c639339de6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:50:32 compute-2 nova_compute[226829]: 2026-01-31 07:50:32.017 226833 DEBUG nova.network.neutron [req-d5b9eca0-8901-4ce8-bb58-dd57ffa77a65 req-e1478778-5807-4824-ad0f-600847668d2d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Updating instance_info_cache with network_info: [{"id": "ab21ca46-ceec-496e-859e-91c639339de6", "address": "fa:16:3e:54:f9:35", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab21ca46-ce", "ovs_interfaceid": "ab21ca46-ceec-496e-859e-91c639339de6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:50:32 compute-2 nova_compute[226829]: 2026-01-31 07:50:32.107 226833 DEBUG oslo_concurrency.lockutils [req-d5b9eca0-8901-4ce8-bb58-dd57ffa77a65 req-e1478778-5807-4824-ad0f-600847668d2d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-045df8b7-b820-4d9e-98c8-0fdacc84b4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:50:32 compute-2 nova_compute[226829]: 2026-01-31 07:50:32.108 226833 DEBUG oslo_concurrency.lockutils [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Acquired lock "refresh_cache-045df8b7-b820-4d9e-98c8-0fdacc84b4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:50:32 compute-2 nova_compute[226829]: 2026-01-31 07:50:32.109 226833 DEBUG nova.network.neutron [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:50:32 compute-2 ceph-mon[77282]: pgmap v1598: 305 pgs: 305 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Jan 31 07:50:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:33.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:50:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:33.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:50:33 compute-2 nova_compute[226829]: 2026-01-31 07:50:33.797 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:33 compute-2 ceph-mon[77282]: pgmap v1599: 305 pgs: 305 active+clean; 182 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 780 KiB/s wr, 75 op/s
Jan 31 07:50:33 compute-2 nova_compute[226829]: 2026-01-31 07:50:33.961 226833 DEBUG nova.network.neutron [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Updating instance_info_cache with network_info: [{"id": "ab21ca46-ceec-496e-859e-91c639339de6", "address": "fa:16:3e:54:f9:35", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab21ca46-ce", "ovs_interfaceid": "ab21ca46-ceec-496e-859e-91c639339de6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:50:33 compute-2 nova_compute[226829]: 2026-01-31 07:50:33.977 226833 DEBUG oslo_concurrency.lockutils [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Releasing lock "refresh_cache-045df8b7-b820-4d9e-98c8-0fdacc84b4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:50:33 compute-2 nova_compute[226829]: 2026-01-31 07:50:33.980 226833 DEBUG nova.compute.manager [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:50:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:50:34 compute-2 kernel: tapab21ca46-ce (unregistering): left promiscuous mode
Jan 31 07:50:34 compute-2 NetworkManager[48999]: <info>  [1769845834.1973] device (tapab21ca46-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.246 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:34 compute-2 ovn_controller[133834]: 2026-01-31T07:50:34Z|00178|binding|INFO|Releasing lport ab21ca46-ceec-496e-859e-91c639339de6 from this chassis (sb_readonly=0)
Jan 31 07:50:34 compute-2 ovn_controller[133834]: 2026-01-31T07:50:34Z|00179|binding|INFO|Setting lport ab21ca46-ceec-496e-859e-91c639339de6 down in Southbound
Jan 31 07:50:34 compute-2 ovn_controller[133834]: 2026-01-31T07:50:34Z|00180|binding|INFO|Removing iface tapab21ca46-ce ovn-installed in OVS
Jan 31 07:50:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:34.252 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:f9:35 10.100.0.8'], port_security=['fa:16:3e:54:f9:35 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '045df8b7-b820-4d9e-98c8-0fdacc84b4b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abea69d7-4daf-4b3f-9f7f-2b06f416400d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a52e7591d7f4e068e3f9fa0e4e288d5', 'neutron:revision_number': '5', 'neutron:security_group_ids': '3ac4692f-5623-43e8-ae72-2833dcd8e7a5 d6aa0181-fdb2-4d3e-a8a9-d361e28d0bfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2b3302f-04ed-4085-954c-3fd369ef549b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=ab21ca46-ceec-496e-859e-91c639339de6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:50:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:34.254 143841 INFO neutron.agent.ovn.metadata.agent [-] Port ab21ca46-ceec-496e-859e-91c639339de6 in datapath abea69d7-4daf-4b3f-9f7f-2b06f416400d unbound from our chassis
Jan 31 07:50:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:34.256 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network abea69d7-4daf-4b3f-9f7f-2b06f416400d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.258 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:34.258 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[471f7a23-1066-427e-b57b-8b1ffeb7706d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:34.259 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d namespace which is not needed anymore
Jan 31 07:50:34 compute-2 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000045.scope: Deactivated successfully.
Jan 31 07:50:34 compute-2 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000045.scope: Consumed 7.615s CPU time.
Jan 31 07:50:34 compute-2 systemd-machined[195142]: Machine qemu-28-instance-00000045 terminated.
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.318 226833 INFO nova.virt.libvirt.driver [-] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Instance destroyed successfully.
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.319 226833 DEBUG nova.objects.instance [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lazy-loading 'resources' on Instance uuid 045df8b7-b820-4d9e-98c8-0fdacc84b4b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.333 226833 DEBUG nova.virt.libvirt.vif [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:50:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-2100114645',display_name='tempest-SecurityGroupsTestJSON-server-2100114645',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-2100114645',id=69,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:50:27Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2a52e7591d7f4e068e3f9fa0e4e288d5',ramdisk_id='',reservation_id='r-tq9hwigs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-31421501',owner_user_name='tempest-SecurityGroupsTestJSON-31421501-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:50:34Z,user_data=None,user_id='07aa1b5aaea444449f8ef00dfe56e8eb',uuid=045df8b7-b820-4d9e-98c8-0fdacc84b4b9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ab21ca46-ceec-496e-859e-91c639339de6", "address": "fa:16:3e:54:f9:35", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab21ca46-ce", "ovs_interfaceid": "ab21ca46-ceec-496e-859e-91c639339de6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.334 226833 DEBUG nova.network.os_vif_util [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Converting VIF {"id": "ab21ca46-ceec-496e-859e-91c639339de6", "address": "fa:16:3e:54:f9:35", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab21ca46-ce", "ovs_interfaceid": "ab21ca46-ceec-496e-859e-91c639339de6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.335 226833 DEBUG nova.network.os_vif_util [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:f9:35,bridge_name='br-int',has_traffic_filtering=True,id=ab21ca46-ceec-496e-859e-91c639339de6,network=Network(abea69d7-4daf-4b3f-9f7f-2b06f416400d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab21ca46-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.335 226833 DEBUG os_vif [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:f9:35,bridge_name='br-int',has_traffic_filtering=True,id=ab21ca46-ceec-496e-859e-91c639339de6,network=Network(abea69d7-4daf-4b3f-9f7f-2b06f416400d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab21ca46-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.338 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.339 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab21ca46-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.340 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.343 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.345 226833 INFO os_vif [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:f9:35,bridge_name='br-int',has_traffic_filtering=True,id=ab21ca46-ceec-496e-859e-91c639339de6,network=Network(abea69d7-4daf-4b3f-9f7f-2b06f416400d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab21ca46-ce')
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.353 226833 DEBUG nova.virt.libvirt.driver [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Start _get_guest_xml network_info=[{"id": "ab21ca46-ceec-496e-859e-91c639339de6", "address": "fa:16:3e:54:f9:35", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab21ca46-ce", "ovs_interfaceid": "ab21ca46-ceec-496e-859e-91c639339de6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.358 226833 WARNING nova.virt.libvirt.driver [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.363 226833 DEBUG nova.virt.libvirt.host [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.364 226833 DEBUG nova.virt.libvirt.host [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.366 226833 DEBUG nova.virt.libvirt.host [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.367 226833 DEBUG nova.virt.libvirt.host [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.368 226833 DEBUG nova.virt.libvirt.driver [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.369 226833 DEBUG nova.virt.hardware [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.369 226833 DEBUG nova.virt.hardware [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.370 226833 DEBUG nova.virt.hardware [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.370 226833 DEBUG nova.virt.hardware [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.371 226833 DEBUG nova.virt.hardware [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.371 226833 DEBUG nova.virt.hardware [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.371 226833 DEBUG nova.virt.hardware [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.371 226833 DEBUG nova.virt.hardware [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.372 226833 DEBUG nova.virt.hardware [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.372 226833 DEBUG nova.virt.hardware [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.372 226833 DEBUG nova.virt.hardware [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.373 226833 DEBUG nova.objects.instance [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 045df8b7-b820-4d9e-98c8-0fdacc84b4b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.392 226833 DEBUG oslo_concurrency.processutils [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:50:34 compute-2 neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d[256064]: [NOTICE]   (256068) : haproxy version is 2.8.14-c23fe91
Jan 31 07:50:34 compute-2 neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d[256064]: [NOTICE]   (256068) : path to executable is /usr/sbin/haproxy
Jan 31 07:50:34 compute-2 neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d[256064]: [WARNING]  (256068) : Exiting Master process...
Jan 31 07:50:34 compute-2 neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d[256064]: [WARNING]  (256068) : Exiting Master process...
Jan 31 07:50:34 compute-2 neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d[256064]: [ALERT]    (256068) : Current worker (256070) exited with code 143 (Terminated)
Jan 31 07:50:34 compute-2 neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d[256064]: [WARNING]  (256068) : All workers exited. Exiting... (0)
Jan 31 07:50:34 compute-2 systemd[1]: libpod-cfd415dee460226957ab0f36a88a79aad5849791022609c827523ab6325afa07.scope: Deactivated successfully.
Jan 31 07:50:34 compute-2 podman[256137]: 2026-01-31 07:50:34.4538278 +0000 UTC m=+0.118014323 container died cfd415dee460226957ab0f36a88a79aad5849791022609c827523ab6325afa07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.466 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.500 226833 DEBUG nova.compute.manager [req-9528e4cf-5896-4907-93a1-ee6ea60f8a5c req-eab31379-e8ae-4df2-9d41-f0fe85398e96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Received event network-vif-unplugged-ab21ca46-ceec-496e-859e-91c639339de6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.501 226833 DEBUG oslo_concurrency.lockutils [req-9528e4cf-5896-4907-93a1-ee6ea60f8a5c req-eab31379-e8ae-4df2-9d41-f0fe85398e96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.502 226833 DEBUG oslo_concurrency.lockutils [req-9528e4cf-5896-4907-93a1-ee6ea60f8a5c req-eab31379-e8ae-4df2-9d41-f0fe85398e96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.502 226833 DEBUG oslo_concurrency.lockutils [req-9528e4cf-5896-4907-93a1-ee6ea60f8a5c req-eab31379-e8ae-4df2-9d41-f0fe85398e96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.502 226833 DEBUG nova.compute.manager [req-9528e4cf-5896-4907-93a1-ee6ea60f8a5c req-eab31379-e8ae-4df2-9d41-f0fe85398e96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] No waiting events found dispatching network-vif-unplugged-ab21ca46-ceec-496e-859e-91c639339de6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.503 226833 WARNING nova.compute.manager [req-9528e4cf-5896-4907-93a1-ee6ea60f8a5c req-eab31379-e8ae-4df2-9d41-f0fe85398e96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Received unexpected event network-vif-unplugged-ab21ca46-ceec-496e-859e-91c639339de6 for instance with vm_state active and task_state reboot_started_hard.
Jan 31 07:50:34 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cfd415dee460226957ab0f36a88a79aad5849791022609c827523ab6325afa07-userdata-shm.mount: Deactivated successfully.
Jan 31 07:50:34 compute-2 systemd[1]: var-lib-containers-storage-overlay-e748053e20b4d21ab825a42187fed54f87f207dbce40776a4ba26f9ec97a3016-merged.mount: Deactivated successfully.
Jan 31 07:50:34 compute-2 podman[256137]: 2026-01-31 07:50:34.769639625 +0000 UTC m=+0.433826148 container cleanup cfd415dee460226957ab0f36a88a79aad5849791022609c827523ab6325afa07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 07:50:34 compute-2 systemd[1]: libpod-conmon-cfd415dee460226957ab0f36a88a79aad5849791022609c827523ab6325afa07.scope: Deactivated successfully.
Jan 31 07:50:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:50:34 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3424192575' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.817 226833 DEBUG oslo_concurrency.processutils [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:50:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3424192575' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:50:34 compute-2 podman[256188]: 2026-01-31 07:50:34.85699735 +0000 UTC m=+0.061613792 container remove cfd415dee460226957ab0f36a88a79aad5849791022609c827523ab6325afa07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 07:50:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:34.861 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9fdfe14d-891b-4205-bc2a-e79014283cca]: (4, ('Sat Jan 31 07:50:34 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d (cfd415dee460226957ab0f36a88a79aad5849791022609c827523ab6325afa07)\ncfd415dee460226957ab0f36a88a79aad5849791022609c827523ab6325afa07\nSat Jan 31 07:50:34 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d (cfd415dee460226957ab0f36a88a79aad5849791022609c827523ab6325afa07)\ncfd415dee460226957ab0f36a88a79aad5849791022609c827523ab6325afa07\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:34.863 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[15e40808-01b3-4223-ac2d-17c4e602fde6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.863 226833 DEBUG oslo_concurrency.processutils [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:50:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:34.864 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabea69d7-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:50:34 compute-2 kernel: tapabea69d7-40: left promiscuous mode
Jan 31 07:50:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:34.876 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2b5ccf4e-3b8e-4abc-945d-16ff011aa671]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:34 compute-2 nova_compute[226829]: 2026-01-31 07:50:34.883 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:34.896 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[bf722e94-d74b-4fbb-a8e1-5fe563f9babd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:34.898 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c807e402-b45c-4815-b012-2ce0c3957378]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:34.909 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c87b35a1-65a9-4b6b-9974-7a643d309a0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607755, 'reachable_time': 22707, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256224, 'error': None, 'target': 'ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:34 compute-2 systemd[1]: run-netns-ovnmeta\x2dabea69d7\x2d4daf\x2d4b3f\x2d9f7f\x2d2b06f416400d.mount: Deactivated successfully.
Jan 31 07:50:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:34.913 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:50:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:34.914 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[88c0d652-b59d-4f78-ab67-767291cd05ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:35 compute-2 sudo[256244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:50:35 compute-2 sudo[256244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:35 compute-2 sudo[256244]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:35 compute-2 sudo[256269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:50:35 compute-2 sudo[256269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:35 compute-2 sudo[256269]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:35.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:50:35 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1386995953' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:50:35 compute-2 nova_compute[226829]: 2026-01-31 07:50:35.300 226833 DEBUG oslo_concurrency.processutils [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:50:35 compute-2 nova_compute[226829]: 2026-01-31 07:50:35.302 226833 DEBUG nova.virt.libvirt.vif [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:50:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-2100114645',display_name='tempest-SecurityGroupsTestJSON-server-2100114645',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-2100114645',id=69,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:50:27Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2a52e7591d7f4e068e3f9fa0e4e288d5',ramdisk_id='',reservation_id='r-tq9hwigs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-31421501',owner_user_name='tempest-SecurityGroupsTestJSON-31421501-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:50:34Z,user_data=None,user_id='07aa1b5aaea444449f8ef00dfe56e8eb',uuid=045df8b7-b820-4d9e-98c8-0fdacc84b4b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ab21ca46-ceec-496e-859e-91c639339de6", "address": "fa:16:3e:54:f9:35", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab21ca46-ce", "ovs_interfaceid": "ab21ca46-ceec-496e-859e-91c639339de6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:50:35 compute-2 nova_compute[226829]: 2026-01-31 07:50:35.302 226833 DEBUG nova.network.os_vif_util [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Converting VIF {"id": "ab21ca46-ceec-496e-859e-91c639339de6", "address": "fa:16:3e:54:f9:35", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab21ca46-ce", "ovs_interfaceid": "ab21ca46-ceec-496e-859e-91c639339de6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:50:35 compute-2 nova_compute[226829]: 2026-01-31 07:50:35.304 226833 DEBUG nova.network.os_vif_util [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:f9:35,bridge_name='br-int',has_traffic_filtering=True,id=ab21ca46-ceec-496e-859e-91c639339de6,network=Network(abea69d7-4daf-4b3f-9f7f-2b06f416400d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab21ca46-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:50:35 compute-2 nova_compute[226829]: 2026-01-31 07:50:35.306 226833 DEBUG nova.objects.instance [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 045df8b7-b820-4d9e-98c8-0fdacc84b4b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:50:35 compute-2 nova_compute[226829]: 2026-01-31 07:50:35.330 226833 DEBUG nova.virt.libvirt.driver [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:50:35 compute-2 nova_compute[226829]:   <uuid>045df8b7-b820-4d9e-98c8-0fdacc84b4b9</uuid>
Jan 31 07:50:35 compute-2 nova_compute[226829]:   <name>instance-00000045</name>
Jan 31 07:50:35 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:50:35 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:50:35 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <nova:name>tempest-SecurityGroupsTestJSON-server-2100114645</nova:name>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:50:34</nova:creationTime>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:50:35 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:50:35 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:50:35 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:50:35 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:50:35 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:50:35 compute-2 nova_compute[226829]:         <nova:user uuid="07aa1b5aaea444449f8ef00dfe56e8eb">tempest-SecurityGroupsTestJSON-31421501-project-member</nova:user>
Jan 31 07:50:35 compute-2 nova_compute[226829]:         <nova:project uuid="2a52e7591d7f4e068e3f9fa0e4e288d5">tempest-SecurityGroupsTestJSON-31421501</nova:project>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 07:50:35 compute-2 nova_compute[226829]:         <nova:port uuid="ab21ca46-ceec-496e-859e-91c639339de6">
Jan 31 07:50:35 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:50:35 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:50:35 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <system>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <entry name="serial">045df8b7-b820-4d9e-98c8-0fdacc84b4b9</entry>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <entry name="uuid">045df8b7-b820-4d9e-98c8-0fdacc84b4b9</entry>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     </system>
Jan 31 07:50:35 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:50:35 compute-2 nova_compute[226829]:   <os>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:   </os>
Jan 31 07:50:35 compute-2 nova_compute[226829]:   <features>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:   </features>
Jan 31 07:50:35 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:50:35 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:50:35 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/045df8b7-b820-4d9e-98c8-0fdacc84b4b9_disk">
Jan 31 07:50:35 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       </source>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:50:35 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/045df8b7-b820-4d9e-98c8-0fdacc84b4b9_disk.config">
Jan 31 07:50:35 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       </source>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:50:35 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:54:f9:35"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <target dev="tapab21ca46-ce"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/045df8b7-b820-4d9e-98c8-0fdacc84b4b9/console.log" append="off"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <video>
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     </video>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <input type="keyboard" bus="usb"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:50:35 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:50:35 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:50:35 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:50:35 compute-2 nova_compute[226829]: </domain>
Jan 31 07:50:35 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:50:35 compute-2 nova_compute[226829]: 2026-01-31 07:50:35.332 226833 DEBUG nova.virt.libvirt.driver [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:50:35 compute-2 nova_compute[226829]: 2026-01-31 07:50:35.332 226833 DEBUG nova.virt.libvirt.driver [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:50:35 compute-2 nova_compute[226829]: 2026-01-31 07:50:35.334 226833 DEBUG nova.virt.libvirt.vif [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:50:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-2100114645',display_name='tempest-SecurityGroupsTestJSON-server-2100114645',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-2100114645',id=69,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:50:27Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='2a52e7591d7f4e068e3f9fa0e4e288d5',ramdisk_id='',reservation_id='r-tq9hwigs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-31421501',owner_user_name='tempest-SecurityGroupsTestJSON-31421501-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:50:34Z,user_data=None,user_id='07aa1b5aaea444449f8ef00dfe56e8eb',uuid=045df8b7-b820-4d9e-98c8-0fdacc84b4b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ab21ca46-ceec-496e-859e-91c639339de6", "address": "fa:16:3e:54:f9:35", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab21ca46-ce", "ovs_interfaceid": "ab21ca46-ceec-496e-859e-91c639339de6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:50:35 compute-2 nova_compute[226829]: 2026-01-31 07:50:35.334 226833 DEBUG nova.network.os_vif_util [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Converting VIF {"id": "ab21ca46-ceec-496e-859e-91c639339de6", "address": "fa:16:3e:54:f9:35", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab21ca46-ce", "ovs_interfaceid": "ab21ca46-ceec-496e-859e-91c639339de6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:50:35 compute-2 nova_compute[226829]: 2026-01-31 07:50:35.335 226833 DEBUG nova.network.os_vif_util [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:f9:35,bridge_name='br-int',has_traffic_filtering=True,id=ab21ca46-ceec-496e-859e-91c639339de6,network=Network(abea69d7-4daf-4b3f-9f7f-2b06f416400d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab21ca46-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:50:35 compute-2 nova_compute[226829]: 2026-01-31 07:50:35.336 226833 DEBUG os_vif [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:f9:35,bridge_name='br-int',has_traffic_filtering=True,id=ab21ca46-ceec-496e-859e-91c639339de6,network=Network(abea69d7-4daf-4b3f-9f7f-2b06f416400d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab21ca46-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:50:35 compute-2 nova_compute[226829]: 2026-01-31 07:50:35.336 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:35 compute-2 nova_compute[226829]: 2026-01-31 07:50:35.337 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:50:35 compute-2 nova_compute[226829]: 2026-01-31 07:50:35.338 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:50:35 compute-2 nova_compute[226829]: 2026-01-31 07:50:35.341 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:35 compute-2 nova_compute[226829]: 2026-01-31 07:50:35.341 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab21ca46-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:50:35 compute-2 nova_compute[226829]: 2026-01-31 07:50:35.342 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapab21ca46-ce, col_values=(('external_ids', {'iface-id': 'ab21ca46-ceec-496e-859e-91c639339de6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:f9:35', 'vm-uuid': '045df8b7-b820-4d9e-98c8-0fdacc84b4b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:50:35 compute-2 nova_compute[226829]: 2026-01-31 07:50:35.344 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:35 compute-2 NetworkManager[48999]: <info>  [1769845835.3456] manager: (tapab21ca46-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/101)
Jan 31 07:50:35 compute-2 nova_compute[226829]: 2026-01-31 07:50:35.346 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:50:35 compute-2 nova_compute[226829]: 2026-01-31 07:50:35.349 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:35 compute-2 nova_compute[226829]: 2026-01-31 07:50:35.350 226833 INFO os_vif [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:f9:35,bridge_name='br-int',has_traffic_filtering=True,id=ab21ca46-ceec-496e-859e-91c639339de6,network=Network(abea69d7-4daf-4b3f-9f7f-2b06f416400d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab21ca46-ce')
Jan 31 07:50:35 compute-2 kernel: tapab21ca46-ce: entered promiscuous mode
Jan 31 07:50:35 compute-2 systemd-udevd[256107]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:50:35 compute-2 nova_compute[226829]: 2026-01-31 07:50:35.410 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:35 compute-2 ovn_controller[133834]: 2026-01-31T07:50:35Z|00181|binding|INFO|Claiming lport ab21ca46-ceec-496e-859e-91c639339de6 for this chassis.
Jan 31 07:50:35 compute-2 ovn_controller[133834]: 2026-01-31T07:50:35Z|00182|binding|INFO|ab21ca46-ceec-496e-859e-91c639339de6: Claiming fa:16:3e:54:f9:35 10.100.0.8
Jan 31 07:50:35 compute-2 NetworkManager[48999]: <info>  [1769845835.4122] manager: (tapab21ca46-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/102)
Jan 31 07:50:35 compute-2 ovn_controller[133834]: 2026-01-31T07:50:35Z|00183|binding|INFO|Setting lport ab21ca46-ceec-496e-859e-91c639339de6 ovn-installed in OVS
Jan 31 07:50:35 compute-2 nova_compute[226829]: 2026-01-31 07:50:35.419 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:35 compute-2 ovn_controller[133834]: 2026-01-31T07:50:35Z|00184|binding|INFO|Setting lport ab21ca46-ceec-496e-859e-91c639339de6 up in Southbound
Jan 31 07:50:35 compute-2 NetworkManager[48999]: <info>  [1769845835.4229] device (tapab21ca46-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:50:35 compute-2 NetworkManager[48999]: <info>  [1769845835.4235] device (tapab21ca46-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:35.421 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:f9:35 10.100.0.8'], port_security=['fa:16:3e:54:f9:35 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '045df8b7-b820-4d9e-98c8-0fdacc84b4b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abea69d7-4daf-4b3f-9f7f-2b06f416400d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a52e7591d7f4e068e3f9fa0e4e288d5', 'neutron:revision_number': '6', 'neutron:security_group_ids': '3ac4692f-5623-43e8-ae72-2833dcd8e7a5 d6aa0181-fdb2-4d3e-a8a9-d361e28d0bfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2b3302f-04ed-4085-954c-3fd369ef549b, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=ab21ca46-ceec-496e-859e-91c639339de6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:35.422 143841 INFO neutron.agent.ovn.metadata.agent [-] Port ab21ca46-ceec-496e-859e-91c639339de6 in datapath abea69d7-4daf-4b3f-9f7f-2b06f416400d bound to our chassis
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:35.424 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abea69d7-4daf-4b3f-9f7f-2b06f416400d
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:35.431 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[bab8c16a-176f-47d6-bbbe-47752203eee1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:35.431 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapabea69d7-41 in ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:50:35 compute-2 systemd-machined[195142]: New machine qemu-29-instance-00000045.
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:35.434 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapabea69d7-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:35.434 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6704b6dc-8487-4ac2-bcd5-a0e205542d06]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:35.435 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[425c5f5c-fc43-4f37-86bd-0c6841276759]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:35.443 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[b195d9d9-869b-475b-8116-711eca2d30b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:35 compute-2 systemd[1]: Started Virtual Machine qemu-29-instance-00000045.
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:35.451 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[14c1450c-9d4a-4c0d-93ae-6ff3cdc6e309]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:35.472 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[5a7d895c-2e7d-4db9-9485-88a28c5d19f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:35 compute-2 NetworkManager[48999]: <info>  [1769845835.4761] manager: (tapabea69d7-40): new Veth device (/org/freedesktop/NetworkManager/Devices/103)
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:35.476 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[424a79c1-b691-4bd4-a038-c92fc7548e70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:35.501 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[8682cd7e-2c0c-4c21-ae31-e428d55e5aab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:35.505 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[54d42b92-337d-4e1e-a5bc-7aeae2353a71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:35 compute-2 NetworkManager[48999]: <info>  [1769845835.5195] device (tapabea69d7-40): carrier: link connected
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:35.522 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[e563d296-7068-42b6-a0b0-0cba25b5ffc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:35.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:35.536 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4cbbbf66-2237-4197-bb96-315a2f67300a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabea69d7-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:68:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 608799, 'reachable_time': 28327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256341, 'error': None, 'target': 'ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:35.547 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7370ae5d-5a56-4679-880d-f47521c53ad3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5b:68f2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 608799, 'tstamp': 608799}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256342, 'error': None, 'target': 'ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:35.559 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0a78fd38-cef5-4825-9777-285bc9ab3a1d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabea69d7-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:68:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 608799, 'reachable_time': 28327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256343, 'error': None, 'target': 'ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:35.578 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[46f6b575-a222-4a3f-8760-2e5f5e5b9228]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:35.615 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0cea4798-e52b-4053-9eec-da0ab7f39ee1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:35.616 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabea69d7-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:35.617 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:35.617 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabea69d7-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:50:35 compute-2 kernel: tapabea69d7-40: entered promiscuous mode
Jan 31 07:50:35 compute-2 nova_compute[226829]: 2026-01-31 07:50:35.619 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:35 compute-2 NetworkManager[48999]: <info>  [1769845835.6201] manager: (tapabea69d7-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:35.623 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabea69d7-40, col_values=(('external_ids', {'iface-id': '38a02e8d-e8fe-4f86-adeb-b79ce221c983'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:50:35 compute-2 nova_compute[226829]: 2026-01-31 07:50:35.624 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:35 compute-2 ovn_controller[133834]: 2026-01-31T07:50:35Z|00185|binding|INFO|Releasing lport 38a02e8d-e8fe-4f86-adeb-b79ce221c983 from this chassis (sb_readonly=0)
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:35.626 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/abea69d7-4daf-4b3f-9f7f-2b06f416400d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/abea69d7-4daf-4b3f-9f7f-2b06f416400d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:50:35 compute-2 nova_compute[226829]: 2026-01-31 07:50:35.626 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:35.627 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[91e9b0e5-1df0-4ac6-8940-cf64ac84f067]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:35.628 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: global
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-abea69d7-4daf-4b3f-9f7f-2b06f416400d
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/abea69d7-4daf-4b3f-9f7f-2b06f416400d.pid.haproxy
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID abea69d7-4daf-4b3f-9f7f-2b06f416400d
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:35.629 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d', 'env', 'PROCESS_TAG=haproxy-abea69d7-4daf-4b3f-9f7f-2b06f416400d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/abea69d7-4daf-4b3f-9f7f-2b06f416400d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:50:35 compute-2 nova_compute[226829]: 2026-01-31 07:50:35.631 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:35 compute-2 ceph-mon[77282]: pgmap v1600: 305 pgs: 305 active+clean; 182 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 481 KiB/s wr, 74 op/s
Jan 31 07:50:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1386995953' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:50:35 compute-2 podman[256409]: 2026-01-31 07:50:35.979463583 +0000 UTC m=+0.049316310 container create 2857ad5cae3e243d141db4fbd5fe655a30e1c8fb4dcb19dea79de3fe4f042e6d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 07:50:36 compute-2 systemd[1]: Started libpod-conmon-2857ad5cae3e243d141db4fbd5fe655a30e1c8fb4dcb19dea79de3fe4f042e6d.scope.
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.017 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Removed pending event for 045df8b7-b820-4d9e-98c8-0fdacc84b4b9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.017 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845836.0163717, 045df8b7-b820-4d9e-98c8-0fdacc84b4b9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.017 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] VM Resumed (Lifecycle Event)
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.020 226833 DEBUG nova.compute.manager [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:50:36 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.023 226833 INFO nova.virt.libvirt.driver [-] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Instance rebooted successfully.
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.023 226833 DEBUG nova.compute.manager [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:50:36 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ebbd3393eaf505b65620fdc9d0b4e680d34e76ecc23c4ca4f424a7106a14312/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:50:36 compute-2 podman[256409]: 2026-01-31 07:50:36.0371983 +0000 UTC m=+0.107051057 container init 2857ad5cae3e243d141db4fbd5fe655a30e1c8fb4dcb19dea79de3fe4f042e6d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 07:50:36 compute-2 podman[256409]: 2026-01-31 07:50:36.04090909 +0000 UTC m=+0.110761827 container start 2857ad5cae3e243d141db4fbd5fe655a30e1c8fb4dcb19dea79de3fe4f042e6d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 07:50:36 compute-2 podman[256409]: 2026-01-31 07:50:35.953263687 +0000 UTC m=+0.023116434 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:50:36 compute-2 neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d[256432]: [NOTICE]   (256436) : New worker (256438) forked
Jan 31 07:50:36 compute-2 neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d[256432]: [NOTICE]   (256436) : Loading success.
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.086 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.090 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.124 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.124 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845836.0193586, 045df8b7-b820-4d9e-98c8-0fdacc84b4b9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.125 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] VM Started (Lifecycle Event)
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.136 226833 DEBUG oslo_concurrency.lockutils [None req-f64afa92-bf4d-407e-b52a-246da0595eba 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.144 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.147 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.650 226833 DEBUG nova.compute.manager [req-03911868-1cd2-4284-9ff0-d0a237e74e68 req-95a10c0c-4210-4d29-987a-d407a6c3edc1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Received event network-vif-plugged-ab21ca46-ceec-496e-859e-91c639339de6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.651 226833 DEBUG oslo_concurrency.lockutils [req-03911868-1cd2-4284-9ff0-d0a237e74e68 req-95a10c0c-4210-4d29-987a-d407a6c3edc1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.651 226833 DEBUG oslo_concurrency.lockutils [req-03911868-1cd2-4284-9ff0-d0a237e74e68 req-95a10c0c-4210-4d29-987a-d407a6c3edc1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.652 226833 DEBUG oslo_concurrency.lockutils [req-03911868-1cd2-4284-9ff0-d0a237e74e68 req-95a10c0c-4210-4d29-987a-d407a6c3edc1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.652 226833 DEBUG nova.compute.manager [req-03911868-1cd2-4284-9ff0-d0a237e74e68 req-95a10c0c-4210-4d29-987a-d407a6c3edc1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] No waiting events found dispatching network-vif-plugged-ab21ca46-ceec-496e-859e-91c639339de6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.653 226833 WARNING nova.compute.manager [req-03911868-1cd2-4284-9ff0-d0a237e74e68 req-95a10c0c-4210-4d29-987a-d407a6c3edc1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Received unexpected event network-vif-plugged-ab21ca46-ceec-496e-859e-91c639339de6 for instance with vm_state active and task_state None.
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.654 226833 DEBUG nova.compute.manager [req-03911868-1cd2-4284-9ff0-d0a237e74e68 req-95a10c0c-4210-4d29-987a-d407a6c3edc1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Received event network-vif-plugged-ab21ca46-ceec-496e-859e-91c639339de6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.654 226833 DEBUG oslo_concurrency.lockutils [req-03911868-1cd2-4284-9ff0-d0a237e74e68 req-95a10c0c-4210-4d29-987a-d407a6c3edc1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.655 226833 DEBUG oslo_concurrency.lockutils [req-03911868-1cd2-4284-9ff0-d0a237e74e68 req-95a10c0c-4210-4d29-987a-d407a6c3edc1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.655 226833 DEBUG oslo_concurrency.lockutils [req-03911868-1cd2-4284-9ff0-d0a237e74e68 req-95a10c0c-4210-4d29-987a-d407a6c3edc1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.655 226833 DEBUG nova.compute.manager [req-03911868-1cd2-4284-9ff0-d0a237e74e68 req-95a10c0c-4210-4d29-987a-d407a6c3edc1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] No waiting events found dispatching network-vif-plugged-ab21ca46-ceec-496e-859e-91c639339de6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.656 226833 WARNING nova.compute.manager [req-03911868-1cd2-4284-9ff0-d0a237e74e68 req-95a10c0c-4210-4d29-987a-d407a6c3edc1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Received unexpected event network-vif-plugged-ab21ca46-ceec-496e-859e-91c639339de6 for instance with vm_state active and task_state None.
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.656 226833 DEBUG nova.compute.manager [req-03911868-1cd2-4284-9ff0-d0a237e74e68 req-95a10c0c-4210-4d29-987a-d407a6c3edc1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Received event network-vif-plugged-ab21ca46-ceec-496e-859e-91c639339de6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.657 226833 DEBUG oslo_concurrency.lockutils [req-03911868-1cd2-4284-9ff0-d0a237e74e68 req-95a10c0c-4210-4d29-987a-d407a6c3edc1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.657 226833 DEBUG oslo_concurrency.lockutils [req-03911868-1cd2-4284-9ff0-d0a237e74e68 req-95a10c0c-4210-4d29-987a-d407a6c3edc1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.658 226833 DEBUG oslo_concurrency.lockutils [req-03911868-1cd2-4284-9ff0-d0a237e74e68 req-95a10c0c-4210-4d29-987a-d407a6c3edc1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.658 226833 DEBUG nova.compute.manager [req-03911868-1cd2-4284-9ff0-d0a237e74e68 req-95a10c0c-4210-4d29-987a-d407a6c3edc1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] No waiting events found dispatching network-vif-plugged-ab21ca46-ceec-496e-859e-91c639339de6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:50:36 compute-2 nova_compute[226829]: 2026-01-31 07:50:36.659 226833 WARNING nova.compute.manager [req-03911868-1cd2-4284-9ff0-d0a237e74e68 req-95a10c0c-4210-4d29-987a-d407a6c3edc1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Received unexpected event network-vif-plugged-ab21ca46-ceec-496e-859e-91c639339de6 for instance with vm_state active and task_state None.
Jan 31 07:50:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:37.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:50:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:37.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:50:38 compute-2 ceph-mon[77282]: pgmap v1601: 305 pgs: 305 active+clean; 213 MiB data, 645 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Jan 31 07:50:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2732920842' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:50:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2809114543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:50:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:50:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:39.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:39 compute-2 nova_compute[226829]: 2026-01-31 07:50:39.250 226833 DEBUG nova.compute.manager [req-207655e8-463d-4f0a-9611-b3a962b60d2a req-019e75f6-dd6d-4ce9-b078-0e2e5fe6e730 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Received event network-changed-ab21ca46-ceec-496e-859e-91c639339de6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:50:39 compute-2 nova_compute[226829]: 2026-01-31 07:50:39.251 226833 DEBUG nova.compute.manager [req-207655e8-463d-4f0a-9611-b3a962b60d2a req-019e75f6-dd6d-4ce9-b078-0e2e5fe6e730 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Refreshing instance network info cache due to event network-changed-ab21ca46-ceec-496e-859e-91c639339de6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:50:39 compute-2 nova_compute[226829]: 2026-01-31 07:50:39.251 226833 DEBUG oslo_concurrency.lockutils [req-207655e8-463d-4f0a-9611-b3a962b60d2a req-019e75f6-dd6d-4ce9-b078-0e2e5fe6e730 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-045df8b7-b820-4d9e-98c8-0fdacc84b4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:50:39 compute-2 nova_compute[226829]: 2026-01-31 07:50:39.251 226833 DEBUG oslo_concurrency.lockutils [req-207655e8-463d-4f0a-9611-b3a962b60d2a req-019e75f6-dd6d-4ce9-b078-0e2e5fe6e730 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-045df8b7-b820-4d9e-98c8-0fdacc84b4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:50:39 compute-2 nova_compute[226829]: 2026-01-31 07:50:39.252 226833 DEBUG nova.network.neutron [req-207655e8-463d-4f0a-9611-b3a962b60d2a req-019e75f6-dd6d-4ce9-b078-0e2e5fe6e730 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Refreshing network info cache for port ab21ca46-ceec-496e-859e-91c639339de6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:50:39 compute-2 nova_compute[226829]: 2026-01-31 07:50:39.466 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:50:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:39.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:50:39 compute-2 ceph-mon[77282]: pgmap v1602: 305 pgs: 305 active+clean; 213 MiB data, 645 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 113 op/s
Jan 31 07:50:39 compute-2 nova_compute[226829]: 2026-01-31 07:50:39.957 226833 DEBUG oslo_concurrency.lockutils [None req-e0c7ff46-607f-49ed-86ec-e4364bfdcd70 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Acquiring lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:39 compute-2 nova_compute[226829]: 2026-01-31 07:50:39.958 226833 DEBUG oslo_concurrency.lockutils [None req-e0c7ff46-607f-49ed-86ec-e4364bfdcd70 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:39 compute-2 nova_compute[226829]: 2026-01-31 07:50:39.958 226833 DEBUG oslo_concurrency.lockutils [None req-e0c7ff46-607f-49ed-86ec-e4364bfdcd70 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Acquiring lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:39 compute-2 nova_compute[226829]: 2026-01-31 07:50:39.958 226833 DEBUG oslo_concurrency.lockutils [None req-e0c7ff46-607f-49ed-86ec-e4364bfdcd70 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:39 compute-2 nova_compute[226829]: 2026-01-31 07:50:39.958 226833 DEBUG oslo_concurrency.lockutils [None req-e0c7ff46-607f-49ed-86ec-e4364bfdcd70 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:39 compute-2 nova_compute[226829]: 2026-01-31 07:50:39.960 226833 INFO nova.compute.manager [None req-e0c7ff46-607f-49ed-86ec-e4364bfdcd70 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Terminating instance
Jan 31 07:50:39 compute-2 nova_compute[226829]: 2026-01-31 07:50:39.961 226833 DEBUG nova.compute.manager [None req-e0c7ff46-607f-49ed-86ec-e4364bfdcd70 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:50:40 compute-2 kernel: tapab21ca46-ce (unregistering): left promiscuous mode
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.005 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:40 compute-2 NetworkManager[48999]: <info>  [1769845840.0070] device (tapab21ca46-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.009 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:40 compute-2 ovn_controller[133834]: 2026-01-31T07:50:40Z|00186|binding|INFO|Releasing lport ab21ca46-ceec-496e-859e-91c639339de6 from this chassis (sb_readonly=0)
Jan 31 07:50:40 compute-2 ovn_controller[133834]: 2026-01-31T07:50:40Z|00187|binding|INFO|Setting lport ab21ca46-ceec-496e-859e-91c639339de6 down in Southbound
Jan 31 07:50:40 compute-2 ovn_controller[133834]: 2026-01-31T07:50:40Z|00188|binding|INFO|Removing iface tapab21ca46-ce ovn-installed in OVS
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.011 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.017 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:40.030 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:f9:35 10.100.0.8'], port_security=['fa:16:3e:54:f9:35 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '045df8b7-b820-4d9e-98c8-0fdacc84b4b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abea69d7-4daf-4b3f-9f7f-2b06f416400d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a52e7591d7f4e068e3f9fa0e4e288d5', 'neutron:revision_number': '8', 'neutron:security_group_ids': '39385760-df78-4885-aee0-e6b1b202d23f 3ac4692f-5623-43e8-ae72-2833dcd8e7a5 d6aa0181-fdb2-4d3e-a8a9-d361e28d0bfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2b3302f-04ed-4085-954c-3fd369ef549b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=ab21ca46-ceec-496e-859e-91c639339de6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:50:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:40.033 143841 INFO neutron.agent.ovn.metadata.agent [-] Port ab21ca46-ceec-496e-859e-91c639339de6 in datapath abea69d7-4daf-4b3f-9f7f-2b06f416400d unbound from our chassis
Jan 31 07:50:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:40.038 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network abea69d7-4daf-4b3f-9f7f-2b06f416400d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:50:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:40.039 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[bc0be640-f21a-4760-87d2-472732788fb1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:40.040 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d namespace which is not needed anymore
Jan 31 07:50:40 compute-2 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000045.scope: Deactivated successfully.
Jan 31 07:50:40 compute-2 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000045.scope: Consumed 4.744s CPU time.
Jan 31 07:50:40 compute-2 systemd-machined[195142]: Machine qemu-29-instance-00000045 terminated.
Jan 31 07:50:40 compute-2 neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d[256432]: [NOTICE]   (256436) : haproxy version is 2.8.14-c23fe91
Jan 31 07:50:40 compute-2 neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d[256432]: [NOTICE]   (256436) : path to executable is /usr/sbin/haproxy
Jan 31 07:50:40 compute-2 neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d[256432]: [WARNING]  (256436) : Exiting Master process...
Jan 31 07:50:40 compute-2 neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d[256432]: [ALERT]    (256436) : Current worker (256438) exited with code 143 (Terminated)
Jan 31 07:50:40 compute-2 neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d[256432]: [WARNING]  (256436) : All workers exited. Exiting... (0)
Jan 31 07:50:40 compute-2 systemd[1]: libpod-2857ad5cae3e243d141db4fbd5fe655a30e1c8fb4dcb19dea79de3fe4f042e6d.scope: Deactivated successfully.
Jan 31 07:50:40 compute-2 podman[256473]: 2026-01-31 07:50:40.163288237 +0000 UTC m=+0.044271435 container died 2857ad5cae3e243d141db4fbd5fe655a30e1c8fb4dcb19dea79de3fe4f042e6d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:50:40 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2857ad5cae3e243d141db4fbd5fe655a30e1c8fb4dcb19dea79de3fe4f042e6d-userdata-shm.mount: Deactivated successfully.
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.195 226833 INFO nova.virt.libvirt.driver [-] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Instance destroyed successfully.
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.196 226833 DEBUG nova.objects.instance [None req-e0c7ff46-607f-49ed-86ec-e4364bfdcd70 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lazy-loading 'resources' on Instance uuid 045df8b7-b820-4d9e-98c8-0fdacc84b4b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:50:40 compute-2 systemd[1]: var-lib-containers-storage-overlay-7ebbd3393eaf505b65620fdc9d0b4e680d34e76ecc23c4ca4f424a7106a14312-merged.mount: Deactivated successfully.
Jan 31 07:50:40 compute-2 podman[256473]: 2026-01-31 07:50:40.207469349 +0000 UTC m=+0.088452517 container cleanup 2857ad5cae3e243d141db4fbd5fe655a30e1c8fb4dcb19dea79de3fe4f042e6d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 07:50:40 compute-2 systemd[1]: libpod-conmon-2857ad5cae3e243d141db4fbd5fe655a30e1c8fb4dcb19dea79de3fe4f042e6d.scope: Deactivated successfully.
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.214 226833 DEBUG nova.virt.libvirt.vif [None req-e0c7ff46-607f-49ed-86ec-e4364bfdcd70 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:50:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-2100114645',display_name='tempest-SecurityGroupsTestJSON-server-2100114645',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-2100114645',id=69,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:50:27Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2a52e7591d7f4e068e3f9fa0e4e288d5',ramdisk_id='',reservation_id='r-tq9hwigs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-31421501',owner_user_name='tempest-SecurityGroupsTestJSON-31421501-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:50:36Z,user_data=None,user_id='07aa1b5aaea444449f8ef00dfe56e8eb',uuid=045df8b7-b820-4d9e-98c8-0fdacc84b4b9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ab21ca46-ceec-496e-859e-91c639339de6", "address": "fa:16:3e:54:f9:35", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab21ca46-ce", "ovs_interfaceid": "ab21ca46-ceec-496e-859e-91c639339de6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.215 226833 DEBUG nova.network.os_vif_util [None req-e0c7ff46-607f-49ed-86ec-e4364bfdcd70 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Converting VIF {"id": "ab21ca46-ceec-496e-859e-91c639339de6", "address": "fa:16:3e:54:f9:35", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab21ca46-ce", "ovs_interfaceid": "ab21ca46-ceec-496e-859e-91c639339de6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.215 226833 DEBUG nova.network.os_vif_util [None req-e0c7ff46-607f-49ed-86ec-e4364bfdcd70 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:f9:35,bridge_name='br-int',has_traffic_filtering=True,id=ab21ca46-ceec-496e-859e-91c639339de6,network=Network(abea69d7-4daf-4b3f-9f7f-2b06f416400d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab21ca46-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.216 226833 DEBUG os_vif [None req-e0c7ff46-607f-49ed-86ec-e4364bfdcd70 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:f9:35,bridge_name='br-int',has_traffic_filtering=True,id=ab21ca46-ceec-496e-859e-91c639339de6,network=Network(abea69d7-4daf-4b3f-9f7f-2b06f416400d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab21ca46-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.217 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.217 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab21ca46-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.218 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.220 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.222 226833 INFO os_vif [None req-e0c7ff46-607f-49ed-86ec-e4364bfdcd70 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:f9:35,bridge_name='br-int',has_traffic_filtering=True,id=ab21ca46-ceec-496e-859e-91c639339de6,network=Network(abea69d7-4daf-4b3f-9f7f-2b06f416400d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab21ca46-ce')
Jan 31 07:50:40 compute-2 podman[256513]: 2026-01-31 07:50:40.272589954 +0000 UTC m=+0.043637658 container remove 2857ad5cae3e243d141db4fbd5fe655a30e1c8fb4dcb19dea79de3fe4f042e6d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 07:50:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:40.276 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[98a33a6a-078b-41a7-aec3-1a243044e853]: (4, ('Sat Jan 31 07:50:40 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d (2857ad5cae3e243d141db4fbd5fe655a30e1c8fb4dcb19dea79de3fe4f042e6d)\n2857ad5cae3e243d141db4fbd5fe655a30e1c8fb4dcb19dea79de3fe4f042e6d\nSat Jan 31 07:50:40 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d (2857ad5cae3e243d141db4fbd5fe655a30e1c8fb4dcb19dea79de3fe4f042e6d)\n2857ad5cae3e243d141db4fbd5fe655a30e1c8fb4dcb19dea79de3fe4f042e6d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:40.278 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c43f91d3-4b18-4b41-b048-c32b7b557b82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:40.279 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabea69d7-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.280 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:40 compute-2 kernel: tapabea69d7-40: left promiscuous mode
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.284 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:40.286 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6451925f-d8ac-4e54-b5a5-e7a76af86e97]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:40.299 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[eddcd55e-b46a-4566-baa2-624c07c4aabc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:40.300 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5e9d06e0-a5a0-4887-a32e-6f7bd2a93059]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:40.309 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b82af9d5-1df6-437b-bfa8-7d4ed099986c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 608794, 'reachable_time': 44536, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256546, 'error': None, 'target': 'ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:40 compute-2 systemd[1]: run-netns-ovnmeta\x2dabea69d7\x2d4daf\x2d4b3f\x2d9f7f\x2d2b06f416400d.mount: Deactivated successfully.
Jan 31 07:50:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:40.312 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:50:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:40.313 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[772d7ea5-0021-46ab-9ec8-b05d1d9e8a64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.747 226833 INFO nova.virt.libvirt.driver [None req-e0c7ff46-607f-49ed-86ec-e4364bfdcd70 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Deleting instance files /var/lib/nova/instances/045df8b7-b820-4d9e-98c8-0fdacc84b4b9_del
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.748 226833 INFO nova.virt.libvirt.driver [None req-e0c7ff46-607f-49ed-86ec-e4364bfdcd70 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Deletion of /var/lib/nova/instances/045df8b7-b820-4d9e-98c8-0fdacc84b4b9_del complete
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.789 226833 DEBUG nova.compute.manager [req-88d4abb8-85bf-4b22-8ad8-dc329cbdc482 req-9eb9ae15-3936-4cce-9256-b6aa4e3c8344 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Received event network-vif-unplugged-ab21ca46-ceec-496e-859e-91c639339de6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.790 226833 DEBUG oslo_concurrency.lockutils [req-88d4abb8-85bf-4b22-8ad8-dc329cbdc482 req-9eb9ae15-3936-4cce-9256-b6aa4e3c8344 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.791 226833 DEBUG oslo_concurrency.lockutils [req-88d4abb8-85bf-4b22-8ad8-dc329cbdc482 req-9eb9ae15-3936-4cce-9256-b6aa4e3c8344 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.791 226833 DEBUG oslo_concurrency.lockutils [req-88d4abb8-85bf-4b22-8ad8-dc329cbdc482 req-9eb9ae15-3936-4cce-9256-b6aa4e3c8344 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.791 226833 DEBUG nova.compute.manager [req-88d4abb8-85bf-4b22-8ad8-dc329cbdc482 req-9eb9ae15-3936-4cce-9256-b6aa4e3c8344 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] No waiting events found dispatching network-vif-unplugged-ab21ca46-ceec-496e-859e-91c639339de6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.792 226833 DEBUG nova.compute.manager [req-88d4abb8-85bf-4b22-8ad8-dc329cbdc482 req-9eb9ae15-3936-4cce-9256-b6aa4e3c8344 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Received event network-vif-unplugged-ab21ca46-ceec-496e-859e-91c639339de6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.854 226833 INFO nova.compute.manager [None req-e0c7ff46-607f-49ed-86ec-e4364bfdcd70 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Took 0.89 seconds to destroy the instance on the hypervisor.
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.854 226833 DEBUG oslo.service.loopingcall [None req-e0c7ff46-607f-49ed-86ec-e4364bfdcd70 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.855 226833 DEBUG nova.compute.manager [-] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:50:40 compute-2 nova_compute[226829]: 2026-01-31 07:50:40.855 226833 DEBUG nova.network.neutron [-] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:50:41 compute-2 nova_compute[226829]: 2026-01-31 07:50:41.122 226833 DEBUG nova.network.neutron [req-207655e8-463d-4f0a-9611-b3a962b60d2a req-019e75f6-dd6d-4ce9-b078-0e2e5fe6e730 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Updated VIF entry in instance network info cache for port ab21ca46-ceec-496e-859e-91c639339de6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:50:41 compute-2 nova_compute[226829]: 2026-01-31 07:50:41.123 226833 DEBUG nova.network.neutron [req-207655e8-463d-4f0a-9611-b3a962b60d2a req-019e75f6-dd6d-4ce9-b078-0e2e5fe6e730 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Updating instance_info_cache with network_info: [{"id": "ab21ca46-ceec-496e-859e-91c639339de6", "address": "fa:16:3e:54:f9:35", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab21ca46-ce", "ovs_interfaceid": "ab21ca46-ceec-496e-859e-91c639339de6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:50:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:41.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:41 compute-2 nova_compute[226829]: 2026-01-31 07:50:41.157 226833 DEBUG oslo_concurrency.lockutils [req-207655e8-463d-4f0a-9611-b3a962b60d2a req-019e75f6-dd6d-4ce9-b078-0e2e5fe6e730 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-045df8b7-b820-4d9e-98c8-0fdacc84b4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:50:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:50:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:41.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:50:41 compute-2 nova_compute[226829]: 2026-01-31 07:50:41.995 226833 DEBUG nova.network.neutron [-] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:50:42 compute-2 nova_compute[226829]: 2026-01-31 07:50:42.114 226833 INFO nova.compute.manager [-] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Took 1.26 seconds to deallocate network for instance.
Jan 31 07:50:42 compute-2 nova_compute[226829]: 2026-01-31 07:50:42.156 226833 DEBUG oslo_concurrency.lockutils [None req-e0c7ff46-607f-49ed-86ec-e4364bfdcd70 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:42 compute-2 nova_compute[226829]: 2026-01-31 07:50:42.156 226833 DEBUG oslo_concurrency.lockutils [None req-e0c7ff46-607f-49ed-86ec-e4364bfdcd70 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:42 compute-2 ceph-mon[77282]: pgmap v1603: 305 pgs: 305 active+clean; 212 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 2.4 MiB/s wr, 205 op/s
Jan 31 07:50:42 compute-2 nova_compute[226829]: 2026-01-31 07:50:42.234 226833 DEBUG oslo_concurrency.processutils [None req-e0c7ff46-607f-49ed-86ec-e4364bfdcd70 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:50:42 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:50:42 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3365562488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:42 compute-2 nova_compute[226829]: 2026-01-31 07:50:42.684 226833 DEBUG oslo_concurrency.processutils [None req-e0c7ff46-607f-49ed-86ec-e4364bfdcd70 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:50:42 compute-2 nova_compute[226829]: 2026-01-31 07:50:42.691 226833 DEBUG nova.compute.provider_tree [None req-e0c7ff46-607f-49ed-86ec-e4364bfdcd70 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:50:42 compute-2 nova_compute[226829]: 2026-01-31 07:50:42.721 226833 DEBUG nova.scheduler.client.report [None req-e0c7ff46-607f-49ed-86ec-e4364bfdcd70 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:50:42 compute-2 nova_compute[226829]: 2026-01-31 07:50:42.771 226833 DEBUG oslo_concurrency.lockutils [None req-e0c7ff46-607f-49ed-86ec-e4364bfdcd70 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:42 compute-2 nova_compute[226829]: 2026-01-31 07:50:42.848 226833 INFO nova.scheduler.client.report [None req-e0c7ff46-607f-49ed-86ec-e4364bfdcd70 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Deleted allocations for instance 045df8b7-b820-4d9e-98c8-0fdacc84b4b9
Jan 31 07:50:42 compute-2 nova_compute[226829]: 2026-01-31 07:50:42.909 226833 DEBUG nova.compute.manager [req-3d1252fd-dc1e-496a-a1fa-9c36ee515b51 req-ff303770-3a0d-4067-ab32-974c16a409d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Received event network-vif-plugged-ab21ca46-ceec-496e-859e-91c639339de6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:50:42 compute-2 nova_compute[226829]: 2026-01-31 07:50:42.910 226833 DEBUG oslo_concurrency.lockutils [req-3d1252fd-dc1e-496a-a1fa-9c36ee515b51 req-ff303770-3a0d-4067-ab32-974c16a409d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:42 compute-2 nova_compute[226829]: 2026-01-31 07:50:42.910 226833 DEBUG oslo_concurrency.lockutils [req-3d1252fd-dc1e-496a-a1fa-9c36ee515b51 req-ff303770-3a0d-4067-ab32-974c16a409d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:42 compute-2 nova_compute[226829]: 2026-01-31 07:50:42.911 226833 DEBUG oslo_concurrency.lockutils [req-3d1252fd-dc1e-496a-a1fa-9c36ee515b51 req-ff303770-3a0d-4067-ab32-974c16a409d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:42 compute-2 nova_compute[226829]: 2026-01-31 07:50:42.911 226833 DEBUG nova.compute.manager [req-3d1252fd-dc1e-496a-a1fa-9c36ee515b51 req-ff303770-3a0d-4067-ab32-974c16a409d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] No waiting events found dispatching network-vif-plugged-ab21ca46-ceec-496e-859e-91c639339de6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:50:42 compute-2 nova_compute[226829]: 2026-01-31 07:50:42.911 226833 WARNING nova.compute.manager [req-3d1252fd-dc1e-496a-a1fa-9c36ee515b51 req-ff303770-3a0d-4067-ab32-974c16a409d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Received unexpected event network-vif-plugged-ab21ca46-ceec-496e-859e-91c639339de6 for instance with vm_state deleted and task_state None.
Jan 31 07:50:42 compute-2 nova_compute[226829]: 2026-01-31 07:50:42.912 226833 DEBUG nova.compute.manager [req-3d1252fd-dc1e-496a-a1fa-9c36ee515b51 req-ff303770-3a0d-4067-ab32-974c16a409d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Received event network-vif-deleted-ab21ca46-ceec-496e-859e-91c639339de6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:50:43 compute-2 nova_compute[226829]: 2026-01-31 07:50:43.022 226833 DEBUG oslo_concurrency.lockutils [None req-e0c7ff46-607f-49ed-86ec-e4364bfdcd70 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "045df8b7-b820-4d9e-98c8-0fdacc84b4b9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:43.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3365562488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:43.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:44 compute-2 nova_compute[226829]: 2026-01-31 07:50:44.015 226833 DEBUG oslo_concurrency.lockutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Acquiring lock "3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:44 compute-2 nova_compute[226829]: 2026-01-31 07:50:44.015 226833 DEBUG oslo_concurrency.lockutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Lock "3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:44 compute-2 nova_compute[226829]: 2026-01-31 07:50:44.051 226833 DEBUG nova.compute.manager [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:50:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:50:44 compute-2 ceph-mon[77282]: pgmap v1604: 305 pgs: 305 active+clean; 226 MiB data, 680 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.5 MiB/s wr, 167 op/s
Jan 31 07:50:44 compute-2 nova_compute[226829]: 2026-01-31 07:50:44.233 226833 DEBUG oslo_concurrency.lockutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:44 compute-2 nova_compute[226829]: 2026-01-31 07:50:44.235 226833 DEBUG oslo_concurrency.lockutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:44 compute-2 nova_compute[226829]: 2026-01-31 07:50:44.244 226833 DEBUG nova.virt.hardware [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:50:44 compute-2 nova_compute[226829]: 2026-01-31 07:50:44.244 226833 INFO nova.compute.claims [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:50:44 compute-2 nova_compute[226829]: 2026-01-31 07:50:44.432 226833 DEBUG oslo_concurrency.processutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:50:44 compute-2 nova_compute[226829]: 2026-01-31 07:50:44.467 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:50:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/145280449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:44 compute-2 nova_compute[226829]: 2026-01-31 07:50:44.851 226833 DEBUG oslo_concurrency.processutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:50:44 compute-2 nova_compute[226829]: 2026-01-31 07:50:44.856 226833 DEBUG nova.compute.provider_tree [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:50:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:50:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2515165356' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:50:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:50:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2515165356' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:50:44 compute-2 nova_compute[226829]: 2026-01-31 07:50:44.954 226833 DEBUG nova.scheduler.client.report [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:50:45 compute-2 nova_compute[226829]: 2026-01-31 07:50:45.117 226833 DEBUG oslo_concurrency.lockutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:45 compute-2 nova_compute[226829]: 2026-01-31 07:50:45.117 226833 DEBUG nova.compute.manager [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:50:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:50:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:45.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:50:45 compute-2 nova_compute[226829]: 2026-01-31 07:50:45.219 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/145280449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2515165356' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:50:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2515165356' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:50:45 compute-2 nova_compute[226829]: 2026-01-31 07:50:45.283 226833 DEBUG nova.compute.manager [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:50:45 compute-2 nova_compute[226829]: 2026-01-31 07:50:45.283 226833 DEBUG nova.network.neutron [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:50:45 compute-2 nova_compute[226829]: 2026-01-31 07:50:45.315 226833 INFO nova.virt.libvirt.driver [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:50:45 compute-2 nova_compute[226829]: 2026-01-31 07:50:45.345 226833 DEBUG nova.compute.manager [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:50:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:50:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:45.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:50:45 compute-2 nova_compute[226829]: 2026-01-31 07:50:45.580 226833 DEBUG nova.compute.manager [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:50:45 compute-2 nova_compute[226829]: 2026-01-31 07:50:45.582 226833 DEBUG nova.virt.libvirt.driver [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:50:45 compute-2 nova_compute[226829]: 2026-01-31 07:50:45.582 226833 INFO nova.virt.libvirt.driver [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Creating image(s)
Jan 31 07:50:45 compute-2 nova_compute[226829]: 2026-01-31 07:50:45.614 226833 DEBUG nova.storage.rbd_utils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] rbd image 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:50:45 compute-2 nova_compute[226829]: 2026-01-31 07:50:45.640 226833 DEBUG nova.storage.rbd_utils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] rbd image 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:50:45 compute-2 nova_compute[226829]: 2026-01-31 07:50:45.673 226833 DEBUG nova.storage.rbd_utils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] rbd image 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:50:45 compute-2 nova_compute[226829]: 2026-01-31 07:50:45.677 226833 DEBUG oslo_concurrency.processutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:50:45 compute-2 nova_compute[226829]: 2026-01-31 07:50:45.735 226833 DEBUG oslo_concurrency.processutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:50:45 compute-2 nova_compute[226829]: 2026-01-31 07:50:45.736 226833 DEBUG oslo_concurrency.lockutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:45 compute-2 nova_compute[226829]: 2026-01-31 07:50:45.737 226833 DEBUG oslo_concurrency.lockutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:45 compute-2 nova_compute[226829]: 2026-01-31 07:50:45.737 226833 DEBUG oslo_concurrency.lockutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:45 compute-2 nova_compute[226829]: 2026-01-31 07:50:45.765 226833 DEBUG nova.storage.rbd_utils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] rbd image 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:50:45 compute-2 nova_compute[226829]: 2026-01-31 07:50:45.769 226833 DEBUG oslo_concurrency.processutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:50:45 compute-2 nova_compute[226829]: 2026-01-31 07:50:45.917 226833 DEBUG nova.policy [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'da7f93fef8fa4e0d8682702e040a7476', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c8a037791bb04bafaec8d4639d3907ae', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:50:46 compute-2 nova_compute[226829]: 2026-01-31 07:50:46.057 226833 DEBUG oslo_concurrency.processutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:50:46 compute-2 nova_compute[226829]: 2026-01-31 07:50:46.137 226833 DEBUG nova.storage.rbd_utils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] resizing rbd image 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:50:46 compute-2 ceph-mon[77282]: pgmap v1605: 305 pgs: 305 active+clean; 226 MiB data, 680 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.0 MiB/s wr, 156 op/s
Jan 31 07:50:46 compute-2 nova_compute[226829]: 2026-01-31 07:50:46.268 226833 DEBUG nova.objects.instance [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Lazy-loading 'migration_context' on Instance uuid 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:50:46 compute-2 nova_compute[226829]: 2026-01-31 07:50:46.285 226833 DEBUG nova.virt.libvirt.driver [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:50:46 compute-2 nova_compute[226829]: 2026-01-31 07:50:46.285 226833 DEBUG nova.virt.libvirt.driver [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Ensure instance console log exists: /var/lib/nova/instances/3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:50:46 compute-2 nova_compute[226829]: 2026-01-31 07:50:46.286 226833 DEBUG oslo_concurrency.lockutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:46 compute-2 nova_compute[226829]: 2026-01-31 07:50:46.286 226833 DEBUG oslo_concurrency.lockutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:46 compute-2 nova_compute[226829]: 2026-01-31 07:50:46.287 226833 DEBUG oslo_concurrency.lockutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:46 compute-2 nova_compute[226829]: 2026-01-31 07:50:46.849 226833 DEBUG nova.network.neutron [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Successfully created port: ca7b65d5-b18e-4cc6-a3f3-eb754739df9a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:50:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:47.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:47.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:48 compute-2 ceph-mon[77282]: pgmap v1606: 305 pgs: 305 active+clean; 213 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.1 MiB/s wr, 219 op/s
Jan 31 07:50:48 compute-2 nova_compute[226829]: 2026-01-31 07:50:48.356 226833 DEBUG nova.network.neutron [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Successfully updated port: ca7b65d5-b18e-4cc6-a3f3-eb754739df9a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:50:48 compute-2 nova_compute[226829]: 2026-01-31 07:50:48.372 226833 DEBUG oslo_concurrency.lockutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Acquiring lock "refresh_cache-3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:50:48 compute-2 nova_compute[226829]: 2026-01-31 07:50:48.373 226833 DEBUG oslo_concurrency.lockutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Acquired lock "refresh_cache-3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:50:48 compute-2 nova_compute[226829]: 2026-01-31 07:50:48.373 226833 DEBUG nova.network.neutron [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:50:48 compute-2 nova_compute[226829]: 2026-01-31 07:50:48.465 226833 DEBUG nova.compute.manager [req-3c87d940-50a4-4eb9-964a-375cc704b543 req-a97d06e7-0e0c-489b-b775-2f1747a7b636 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Received event network-changed-ca7b65d5-b18e-4cc6-a3f3-eb754739df9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:50:48 compute-2 nova_compute[226829]: 2026-01-31 07:50:48.466 226833 DEBUG nova.compute.manager [req-3c87d940-50a4-4eb9-964a-375cc704b543 req-a97d06e7-0e0c-489b-b775-2f1747a7b636 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Refreshing instance network info cache due to event network-changed-ca7b65d5-b18e-4cc6-a3f3-eb754739df9a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:50:48 compute-2 nova_compute[226829]: 2026-01-31 07:50:48.466 226833 DEBUG oslo_concurrency.lockutils [req-3c87d940-50a4-4eb9-964a-375cc704b543 req-a97d06e7-0e0c-489b-b775-2f1747a7b636 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:50:48 compute-2 nova_compute[226829]: 2026-01-31 07:50:48.599 226833 DEBUG nova.network.neutron [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:50:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:50:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:49.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:49 compute-2 nova_compute[226829]: 2026-01-31 07:50:49.318 226833 DEBUG nova.network.neutron [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Updating instance_info_cache with network_info: [{"id": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "address": "fa:16:3e:40:b4:8f", "network": {"id": "ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-269022987-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8a037791bb04bafaec8d4639d3907ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca7b65d5-b1", "ovs_interfaceid": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:50:49 compute-2 nova_compute[226829]: 2026-01-31 07:50:49.347 226833 DEBUG oslo_concurrency.lockutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Releasing lock "refresh_cache-3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:50:49 compute-2 nova_compute[226829]: 2026-01-31 07:50:49.347 226833 DEBUG nova.compute.manager [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Instance network_info: |[{"id": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "address": "fa:16:3e:40:b4:8f", "network": {"id": "ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-269022987-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8a037791bb04bafaec8d4639d3907ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca7b65d5-b1", "ovs_interfaceid": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:50:49 compute-2 nova_compute[226829]: 2026-01-31 07:50:49.348 226833 DEBUG oslo_concurrency.lockutils [req-3c87d940-50a4-4eb9-964a-375cc704b543 req-a97d06e7-0e0c-489b-b775-2f1747a7b636 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:50:49 compute-2 nova_compute[226829]: 2026-01-31 07:50:49.348 226833 DEBUG nova.network.neutron [req-3c87d940-50a4-4eb9-964a-375cc704b543 req-a97d06e7-0e0c-489b-b775-2f1747a7b636 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Refreshing network info cache for port ca7b65d5-b18e-4cc6-a3f3-eb754739df9a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:50:49 compute-2 nova_compute[226829]: 2026-01-31 07:50:49.354 226833 DEBUG nova.virt.libvirt.driver [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Start _get_guest_xml network_info=[{"id": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "address": "fa:16:3e:40:b4:8f", "network": {"id": "ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-269022987-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8a037791bb04bafaec8d4639d3907ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca7b65d5-b1", "ovs_interfaceid": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:50:49 compute-2 nova_compute[226829]: 2026-01-31 07:50:49.360 226833 WARNING nova.virt.libvirt.driver [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:50:49 compute-2 nova_compute[226829]: 2026-01-31 07:50:49.364 226833 DEBUG nova.virt.libvirt.host [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:50:49 compute-2 nova_compute[226829]: 2026-01-31 07:50:49.365 226833 DEBUG nova.virt.libvirt.host [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:50:49 compute-2 nova_compute[226829]: 2026-01-31 07:50:49.373 226833 DEBUG nova.virt.libvirt.host [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:50:49 compute-2 nova_compute[226829]: 2026-01-31 07:50:49.374 226833 DEBUG nova.virt.libvirt.host [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:50:49 compute-2 nova_compute[226829]: 2026-01-31 07:50:49.376 226833 DEBUG nova.virt.libvirt.driver [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:50:49 compute-2 nova_compute[226829]: 2026-01-31 07:50:49.376 226833 DEBUG nova.virt.hardware [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:50:49 compute-2 nova_compute[226829]: 2026-01-31 07:50:49.377 226833 DEBUG nova.virt.hardware [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:50:49 compute-2 nova_compute[226829]: 2026-01-31 07:50:49.378 226833 DEBUG nova.virt.hardware [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:50:49 compute-2 nova_compute[226829]: 2026-01-31 07:50:49.378 226833 DEBUG nova.virt.hardware [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:50:49 compute-2 nova_compute[226829]: 2026-01-31 07:50:49.378 226833 DEBUG nova.virt.hardware [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:50:49 compute-2 nova_compute[226829]: 2026-01-31 07:50:49.379 226833 DEBUG nova.virt.hardware [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:50:49 compute-2 nova_compute[226829]: 2026-01-31 07:50:49.379 226833 DEBUG nova.virt.hardware [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:50:49 compute-2 nova_compute[226829]: 2026-01-31 07:50:49.380 226833 DEBUG nova.virt.hardware [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:50:49 compute-2 nova_compute[226829]: 2026-01-31 07:50:49.380 226833 DEBUG nova.virt.hardware [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:50:49 compute-2 nova_compute[226829]: 2026-01-31 07:50:49.380 226833 DEBUG nova.virt.hardware [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:50:49 compute-2 nova_compute[226829]: 2026-01-31 07:50:49.381 226833 DEBUG nova.virt.hardware [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:50:49 compute-2 nova_compute[226829]: 2026-01-31 07:50:49.387 226833 DEBUG oslo_concurrency.processutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:50:49 compute-2 nova_compute[226829]: 2026-01-31 07:50:49.470 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:50:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:49.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:50:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:50:49 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2419056380' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:50:49 compute-2 nova_compute[226829]: 2026-01-31 07:50:49.832 226833 DEBUG oslo_concurrency.processutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:50:49 compute-2 nova_compute[226829]: 2026-01-31 07:50:49.861 226833 DEBUG nova.storage.rbd_utils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] rbd image 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:50:49 compute-2 nova_compute[226829]: 2026-01-31 07:50:49.866 226833 DEBUG oslo_concurrency.processutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.222 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:50:50 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4286215794' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.269 226833 DEBUG oslo_concurrency.processutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.270 226833 DEBUG nova.virt.libvirt.vif [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:50:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1646636956',display_name='tempest-AttachInterfacesUnderV243Test-server-1646636956',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1646636956',id=71,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIGmrBUXVGXTIX8YzH1r6kil/nhA8jwNHTY37N+fAuBFkccTNb/7NpE1FwQotASjyjUj5eOwfrH+FZgf7m53XPnmLYDq2d+B8HMrfzRss1ABP0DnA52Zl+YJSa7ShFLA9g==',key_name='tempest-keypair-1969491340',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8a037791bb04bafaec8d4639d3907ae',ramdisk_id='',reservation_id='r-7fo7300w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-490839015',owner_user_name='tempest-AttachInterfacesUnderV243Test-490839015-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:50:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='da7f93fef8fa4e0d8682702e040a7476',uuid=3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "address": "fa:16:3e:40:b4:8f", "network": {"id": "ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-269022987-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8a037791bb04bafaec8d4639d3907ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca7b65d5-b1", "ovs_interfaceid": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.270 226833 DEBUG nova.network.os_vif_util [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Converting VIF {"id": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "address": "fa:16:3e:40:b4:8f", "network": {"id": "ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-269022987-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8a037791bb04bafaec8d4639d3907ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca7b65d5-b1", "ovs_interfaceid": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.271 226833 DEBUG nova.network.os_vif_util [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:b4:8f,bridge_name='br-int',has_traffic_filtering=True,id=ca7b65d5-b18e-4cc6-a3f3-eb754739df9a,network=Network(ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca7b65d5-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.272 226833 DEBUG nova.objects.instance [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Lazy-loading 'pci_devices' on Instance uuid 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:50:50 compute-2 ceph-mon[77282]: pgmap v1607: 305 pgs: 305 active+clean; 234 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 225 op/s
Jan 31 07:50:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2419056380' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:50:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4286215794' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.582 226833 DEBUG nova.virt.libvirt.driver [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:50:50 compute-2 nova_compute[226829]:   <uuid>3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba</uuid>
Jan 31 07:50:50 compute-2 nova_compute[226829]:   <name>instance-00000047</name>
Jan 31 07:50:50 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:50:50 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:50:50 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-1646636956</nova:name>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:50:49</nova:creationTime>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:50:50 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:50:50 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:50:50 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:50:50 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:50:50 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:50:50 compute-2 nova_compute[226829]:         <nova:user uuid="da7f93fef8fa4e0d8682702e040a7476">tempest-AttachInterfacesUnderV243Test-490839015-project-member</nova:user>
Jan 31 07:50:50 compute-2 nova_compute[226829]:         <nova:project uuid="c8a037791bb04bafaec8d4639d3907ae">tempest-AttachInterfacesUnderV243Test-490839015</nova:project>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 07:50:50 compute-2 nova_compute[226829]:         <nova:port uuid="ca7b65d5-b18e-4cc6-a3f3-eb754739df9a">
Jan 31 07:50:50 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:50:50 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:50:50 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <system>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <entry name="serial">3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba</entry>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <entry name="uuid">3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba</entry>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     </system>
Jan 31 07:50:50 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:50:50 compute-2 nova_compute[226829]:   <os>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:   </os>
Jan 31 07:50:50 compute-2 nova_compute[226829]:   <features>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:   </features>
Jan 31 07:50:50 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:50:50 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:50:50 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba_disk">
Jan 31 07:50:50 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       </source>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:50:50 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba_disk.config">
Jan 31 07:50:50 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       </source>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:50:50 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:40:b4:8f"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <target dev="tapca7b65d5-b1"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba/console.log" append="off"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <video>
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     </video>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:50:50 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:50:50 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:50:50 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:50:50 compute-2 nova_compute[226829]: </domain>
Jan 31 07:50:50 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.583 226833 DEBUG nova.compute.manager [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Preparing to wait for external event network-vif-plugged-ca7b65d5-b18e-4cc6-a3f3-eb754739df9a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.584 226833 DEBUG oslo_concurrency.lockutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Acquiring lock "3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.584 226833 DEBUG oslo_concurrency.lockutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Lock "3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.585 226833 DEBUG oslo_concurrency.lockutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Lock "3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.586 226833 DEBUG nova.virt.libvirt.vif [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:50:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1646636956',display_name='tempest-AttachInterfacesUnderV243Test-server-1646636956',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1646636956',id=71,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIGmrBUXVGXTIX8YzH1r6kil/nhA8jwNHTY37N+fAuBFkccTNb/7NpE1FwQotASjyjUj5eOwfrH+FZgf7m53XPnmLYDq2d+B8HMrfzRss1ABP0DnA52Zl+YJSa7ShFLA9g==',key_name='tempest-keypair-1969491340',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8a037791bb04bafaec8d4639d3907ae',ramdisk_id='',reservation_id='r-7fo7300w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-490839015',owner_user_name='tempest-AttachInterfacesUnderV243Test-490839015-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:50:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='da7f93fef8fa4e0d8682702e040a7476',uuid=3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "address": "fa:16:3e:40:b4:8f", "network": {"id": "ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-269022987-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8a037791bb04bafaec8d4639d3907ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca7b65d5-b1", "ovs_interfaceid": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.587 226833 DEBUG nova.network.os_vif_util [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Converting VIF {"id": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "address": "fa:16:3e:40:b4:8f", "network": {"id": "ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-269022987-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8a037791bb04bafaec8d4639d3907ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca7b65d5-b1", "ovs_interfaceid": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.588 226833 DEBUG nova.network.os_vif_util [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:b4:8f,bridge_name='br-int',has_traffic_filtering=True,id=ca7b65d5-b18e-4cc6-a3f3-eb754739df9a,network=Network(ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca7b65d5-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.589 226833 DEBUG os_vif [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:b4:8f,bridge_name='br-int',has_traffic_filtering=True,id=ca7b65d5-b18e-4cc6-a3f3-eb754739df9a,network=Network(ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca7b65d5-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.590 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.591 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.592 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.596 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.597 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca7b65d5-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.599 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapca7b65d5-b1, col_values=(('external_ids', {'iface-id': 'ca7b65d5-b18e-4cc6-a3f3-eb754739df9a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:40:b4:8f', 'vm-uuid': '3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.601 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:50 compute-2 NetworkManager[48999]: <info>  [1769845850.6021] manager: (tapca7b65d5-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/105)
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.605 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.606 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.607 226833 INFO os_vif [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:b4:8f,bridge_name='br-int',has_traffic_filtering=True,id=ca7b65d5-b18e-4cc6-a3f3-eb754739df9a,network=Network(ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca7b65d5-b1')
Jan 31 07:50:50 compute-2 podman[256828]: 2026-01-31 07:50:50.701324342 +0000 UTC m=+0.066613477 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.866 226833 DEBUG nova.virt.libvirt.driver [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.867 226833 DEBUG nova.virt.libvirt.driver [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.867 226833 DEBUG nova.virt.libvirt.driver [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] No VIF found with MAC fa:16:3e:40:b4:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.868 226833 INFO nova.virt.libvirt.driver [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Using config drive
Jan 31 07:50:50 compute-2 nova_compute[226829]: 2026-01-31 07:50:50.898 226833 DEBUG nova.storage.rbd_utils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] rbd image 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:50:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:51.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:50:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:51.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:50:51 compute-2 nova_compute[226829]: 2026-01-31 07:50:51.638 226833 DEBUG nova.network.neutron [req-3c87d940-50a4-4eb9-964a-375cc704b543 req-a97d06e7-0e0c-489b-b775-2f1747a7b636 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Updated VIF entry in instance network info cache for port ca7b65d5-b18e-4cc6-a3f3-eb754739df9a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:50:51 compute-2 nova_compute[226829]: 2026-01-31 07:50:51.639 226833 DEBUG nova.network.neutron [req-3c87d940-50a4-4eb9-964a-375cc704b543 req-a97d06e7-0e0c-489b-b775-2f1747a7b636 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Updating instance_info_cache with network_info: [{"id": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "address": "fa:16:3e:40:b4:8f", "network": {"id": "ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-269022987-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8a037791bb04bafaec8d4639d3907ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca7b65d5-b1", "ovs_interfaceid": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:50:51 compute-2 nova_compute[226829]: 2026-01-31 07:50:51.704 226833 DEBUG oslo_concurrency.lockutils [req-3c87d940-50a4-4eb9-964a-375cc704b543 req-a97d06e7-0e0c-489b-b775-2f1747a7b636 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:50:52 compute-2 nova_compute[226829]: 2026-01-31 07:50:52.098 226833 INFO nova.virt.libvirt.driver [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Creating config drive at /var/lib/nova/instances/3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba/disk.config
Jan 31 07:50:52 compute-2 nova_compute[226829]: 2026-01-31 07:50:52.103 226833 DEBUG oslo_concurrency.processutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpzig9mu6q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:50:52 compute-2 nova_compute[226829]: 2026-01-31 07:50:52.225 226833 DEBUG oslo_concurrency.processutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpzig9mu6q" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:50:52 compute-2 nova_compute[226829]: 2026-01-31 07:50:52.252 226833 DEBUG nova.storage.rbd_utils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] rbd image 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:50:52 compute-2 nova_compute[226829]: 2026-01-31 07:50:52.256 226833 DEBUG oslo_concurrency.processutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba/disk.config 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:50:52 compute-2 ceph-mon[77282]: pgmap v1608: 305 pgs: 305 active+clean; 187 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 249 op/s
Jan 31 07:50:52 compute-2 nova_compute[226829]: 2026-01-31 07:50:52.677 226833 DEBUG oslo_concurrency.processutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba/disk.config 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:50:52 compute-2 nova_compute[226829]: 2026-01-31 07:50:52.678 226833 INFO nova.virt.libvirt.driver [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Deleting local config drive /var/lib/nova/instances/3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba/disk.config because it was imported into RBD.
Jan 31 07:50:52 compute-2 kernel: tapca7b65d5-b1: entered promiscuous mode
Jan 31 07:50:52 compute-2 NetworkManager[48999]: <info>  [1769845852.7198] manager: (tapca7b65d5-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/106)
Jan 31 07:50:52 compute-2 ovn_controller[133834]: 2026-01-31T07:50:52Z|00189|binding|INFO|Claiming lport ca7b65d5-b18e-4cc6-a3f3-eb754739df9a for this chassis.
Jan 31 07:50:52 compute-2 ovn_controller[133834]: 2026-01-31T07:50:52Z|00190|binding|INFO|ca7b65d5-b18e-4cc6-a3f3-eb754739df9a: Claiming fa:16:3e:40:b4:8f 10.100.0.9
Jan 31 07:50:52 compute-2 nova_compute[226829]: 2026-01-31 07:50:52.721 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:52 compute-2 nova_compute[226829]: 2026-01-31 07:50:52.723 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:52 compute-2 nova_compute[226829]: 2026-01-31 07:50:52.725 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:52.737 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:b4:8f 10.100.0.9'], port_security=['fa:16:3e:40:b4:8f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8a037791bb04bafaec8d4639d3907ae', 'neutron:revision_number': '2', 'neutron:security_group_ids': '06254fea-c724-468b-99a0-468bca1b7ad6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d3b22ca-7707-4658-b5d3-4f2316452c5f, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=ca7b65d5-b18e-4cc6-a3f3-eb754739df9a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:52.739 143841 INFO neutron.agent.ovn.metadata.agent [-] Port ca7b65d5-b18e-4cc6-a3f3-eb754739df9a in datapath ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1 bound to our chassis
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:52.741 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1
Jan 31 07:50:52 compute-2 systemd-udevd[256926]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:50:52 compute-2 systemd-machined[195142]: New machine qemu-30-instance-00000047.
Jan 31 07:50:52 compute-2 nova_compute[226829]: 2026-01-31 07:50:52.746 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:52 compute-2 ovn_controller[133834]: 2026-01-31T07:50:52Z|00191|binding|INFO|Setting lport ca7b65d5-b18e-4cc6-a3f3-eb754739df9a ovn-installed in OVS
Jan 31 07:50:52 compute-2 ovn_controller[133834]: 2026-01-31T07:50:52Z|00192|binding|INFO|Setting lport ca7b65d5-b18e-4cc6-a3f3-eb754739df9a up in Southbound
Jan 31 07:50:52 compute-2 nova_compute[226829]: 2026-01-31 07:50:52.749 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:52.749 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a696f378-7e79-4bd9-af2c-1809ed318fb6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:52.750 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapccdc763c-41 in ovnmeta-ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:52.752 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapccdc763c-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:52.752 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f450ec31-c74b-4afd-80c2-4efe744dda55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:52.752 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9129c479-bf6d-41de-ad83-4e986dfc5116]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:52 compute-2 NetworkManager[48999]: <info>  [1769845852.7567] device (tapca7b65d5-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:50:52 compute-2 NetworkManager[48999]: <info>  [1769845852.7572] device (tapca7b65d5-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:50:52 compute-2 systemd[1]: Started Virtual Machine qemu-30-instance-00000047.
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:52.764 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[4aa3793d-0966-4b66-9627-5c5e175417f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:52.775 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b26ccb63-7530-4d38-ba61-be1b4384327d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:52.795 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[47fadbe2-649c-47b0-922a-0a7283328e8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:52 compute-2 NetworkManager[48999]: <info>  [1769845852.8156] manager: (tapccdc763c-40): new Veth device (/org/freedesktop/NetworkManager/Devices/107)
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:52.815 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[bd662c94-442f-4b11-b59c-f820150795e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:52 compute-2 nova_compute[226829]: 2026-01-31 07:50:52.829 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:52.837 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[bd8762ad-26db-4cc2-a9a2-f29041ef8bc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:52.839 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[4d9cdb1b-e86b-4d4f-9dc9-2669543c5e46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:52 compute-2 NetworkManager[48999]: <info>  [1769845852.8516] device (tapccdc763c-40): carrier: link connected
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:52.854 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[5d23e088-e851-424b-aed5-a465bf33e3a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:52.865 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8f43a367-b436-46f6-ad49-551eb78dbd73]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapccdc763c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:1a:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610532, 'reachable_time': 40844, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256959, 'error': None, 'target': 'ovnmeta-ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:52.873 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[379afd61-fbaf-48c7-a22e-0e6a5cd5ac29]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:1a53'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 610532, 'tstamp': 610532}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256960, 'error': None, 'target': 'ovnmeta-ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:52.885 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d6463357-c281-43de-adbc-2ef1af1f9074]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapccdc763c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:1a:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610532, 'reachable_time': 40844, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256961, 'error': None, 'target': 'ovnmeta-ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:52.903 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c7946f94-7af7-4369-b334-de28dcc40570]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:52.943 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[eab7c0ab-1158-476e-8854-b03aa67b92e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:52.944 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapccdc763c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:52.944 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:52.945 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapccdc763c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:50:52 compute-2 nova_compute[226829]: 2026-01-31 07:50:52.946 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:52 compute-2 NetworkManager[48999]: <info>  [1769845852.9473] manager: (tapccdc763c-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Jan 31 07:50:52 compute-2 kernel: tapccdc763c-40: entered promiscuous mode
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:52.949 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapccdc763c-40, col_values=(('external_ids', {'iface-id': 'baa67f49-7ef9-41cc-bae4-66786d0756ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:50:52 compute-2 nova_compute[226829]: 2026-01-31 07:50:52.950 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:52 compute-2 ovn_controller[133834]: 2026-01-31T07:50:52Z|00193|binding|INFO|Releasing lport baa67f49-7ef9-41cc-bae4-66786d0756ef from this chassis (sb_readonly=0)
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:52.952 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:52.952 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[aea95e6f-f214-40d1-8bf2-379b2ade388e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:52.953 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: global
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1.pid.haproxy
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:50:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:50:52.954 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1', 'env', 'PROCESS_TAG=haproxy-ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:50:52 compute-2 nova_compute[226829]: 2026-01-31 07:50:52.955 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.097 226833 DEBUG nova.compute.manager [req-fd88c0c0-be3c-4b37-94c4-344d6341ad67 req-8fcb1736-6fbe-4543-aa39-d3ba1db287ea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Received event network-vif-plugged-ca7b65d5-b18e-4cc6-a3f3-eb754739df9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.098 226833 DEBUG oslo_concurrency.lockutils [req-fd88c0c0-be3c-4b37-94c4-344d6341ad67 req-8fcb1736-6fbe-4543-aa39-d3ba1db287ea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.098 226833 DEBUG oslo_concurrency.lockutils [req-fd88c0c0-be3c-4b37-94c4-344d6341ad67 req-8fcb1736-6fbe-4543-aa39-d3ba1db287ea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.098 226833 DEBUG oslo_concurrency.lockutils [req-fd88c0c0-be3c-4b37-94c4-344d6341ad67 req-8fcb1736-6fbe-4543-aa39-d3ba1db287ea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.098 226833 DEBUG nova.compute.manager [req-fd88c0c0-be3c-4b37-94c4-344d6341ad67 req-8fcb1736-6fbe-4543-aa39-d3ba1db287ea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Processing event network-vif-plugged-ca7b65d5-b18e-4cc6-a3f3-eb754739df9a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:50:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:53.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:53 compute-2 podman[257011]: 2026-01-31 07:50:53.253800232 +0000 UTC m=+0.026809705 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.406 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845853.406411, 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.407 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] VM Started (Lifecycle Event)
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.409 226833 DEBUG nova.compute.manager [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.412 226833 DEBUG nova.virt.libvirt.driver [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.415 226833 INFO nova.virt.libvirt.driver [-] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Instance spawned successfully.
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.415 226833 DEBUG nova.virt.libvirt.driver [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.474 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.477 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.487 226833 DEBUG nova.virt.libvirt.driver [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.488 226833 DEBUG nova.virt.libvirt.driver [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.488 226833 DEBUG nova.virt.libvirt.driver [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.488 226833 DEBUG nova.virt.libvirt.driver [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.489 226833 DEBUG nova.virt.libvirt.driver [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.489 226833 DEBUG nova.virt.libvirt.driver [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:50:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:53.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.607 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.608 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845853.4072464, 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.608 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] VM Paused (Lifecycle Event)
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.655 226833 INFO nova.compute.manager [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Took 8.07 seconds to spawn the instance on the hypervisor.
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.656 226833 DEBUG nova.compute.manager [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.753 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.756 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845853.411877, 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.756 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] VM Resumed (Lifecycle Event)
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.796 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.798 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.819 226833 INFO nova.compute.manager [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Took 9.61 seconds to build instance.
Jan 31 07:50:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4106909107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:53 compute-2 nova_compute[226829]: 2026-01-31 07:50:53.986 226833 DEBUG oslo_concurrency.lockutils [None req-93d3754d-8395-4e7c-b62c-260cd6cb234e da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Lock "3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.971s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:54 compute-2 nova_compute[226829]: 2026-01-31 07:50:54.472 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:50:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:55.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:50:55 compute-2 nova_compute[226829]: 2026-01-31 07:50:55.193 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845840.1924307, 045df8b7-b820-4d9e-98c8-0fdacc84b4b9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:50:55 compute-2 nova_compute[226829]: 2026-01-31 07:50:55.194 226833 INFO nova.compute.manager [-] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] VM Stopped (Lifecycle Event)
Jan 31 07:50:55 compute-2 sudo[257048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:50:55 compute-2 sudo[257048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:55 compute-2 sudo[257048]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:55 compute-2 sudo[257074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:50:55 compute-2 sudo[257074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:55 compute-2 sudo[257074]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:55 compute-2 nova_compute[226829]: 2026-01-31 07:50:55.395 226833 DEBUG nova.compute.manager [req-230e1425-4122-471f-a445-84399356b789 req-be2452eb-ebe8-4beb-9b25-9f0434836d24 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Received event network-vif-plugged-ca7b65d5-b18e-4cc6-a3f3-eb754739df9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:50:55 compute-2 nova_compute[226829]: 2026-01-31 07:50:55.395 226833 DEBUG oslo_concurrency.lockutils [req-230e1425-4122-471f-a445-84399356b789 req-be2452eb-ebe8-4beb-9b25-9f0434836d24 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:55 compute-2 nova_compute[226829]: 2026-01-31 07:50:55.396 226833 DEBUG oslo_concurrency.lockutils [req-230e1425-4122-471f-a445-84399356b789 req-be2452eb-ebe8-4beb-9b25-9f0434836d24 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:55 compute-2 nova_compute[226829]: 2026-01-31 07:50:55.396 226833 DEBUG oslo_concurrency.lockutils [req-230e1425-4122-471f-a445-84399356b789 req-be2452eb-ebe8-4beb-9b25-9f0434836d24 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:55 compute-2 nova_compute[226829]: 2026-01-31 07:50:55.397 226833 DEBUG nova.compute.manager [req-230e1425-4122-471f-a445-84399356b789 req-be2452eb-ebe8-4beb-9b25-9f0434836d24 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] No waiting events found dispatching network-vif-plugged-ca7b65d5-b18e-4cc6-a3f3-eb754739df9a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:50:55 compute-2 nova_compute[226829]: 2026-01-31 07:50:55.397 226833 WARNING nova.compute.manager [req-230e1425-4122-471f-a445-84399356b789 req-be2452eb-ebe8-4beb-9b25-9f0434836d24 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Received unexpected event network-vif-plugged-ca7b65d5-b18e-4cc6-a3f3-eb754739df9a for instance with vm_state active and task_state None.
Jan 31 07:50:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:50:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:55.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:50:55 compute-2 nova_compute[226829]: 2026-01-31 07:50:55.592 226833 DEBUG nova.compute.manager [None req-f8d127a3-cea0-4f10-a89e-600f80f39b54 - - - - - -] [instance: 045df8b7-b820-4d9e-98c8-0fdacc84b4b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:50:55 compute-2 nova_compute[226829]: 2026-01-31 07:50:55.602 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:57.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:57.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:58 compute-2 NetworkManager[48999]: <info>  [1769845858.8600] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Jan 31 07:50:58 compute-2 nova_compute[226829]: 2026-01-31 07:50:58.859 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:58 compute-2 NetworkManager[48999]: <info>  [1769845858.8609] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Jan 31 07:50:58 compute-2 nova_compute[226829]: 2026-01-31 07:50:58.948 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:58 compute-2 ovn_controller[133834]: 2026-01-31T07:50:58Z|00194|binding|INFO|Releasing lport baa67f49-7ef9-41cc-bae4-66786d0756ef from this chassis (sb_readonly=0)
Jan 31 07:50:58 compute-2 nova_compute[226829]: 2026-01-31 07:50:58.978 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:50:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:59.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:50:59 compute-2 nova_compute[226829]: 2026-01-31 07:50:59.474 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:50:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:59.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:59 compute-2 nova_compute[226829]: 2026-01-31 07:50:59.625 226833 DEBUG nova.compute.manager [req-19911040-f85f-4125-b7c4-a8b918260a11 req-6ae835bf-7e6f-4a38-a42e-7cbf5b5562cf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Received event network-changed-ca7b65d5-b18e-4cc6-a3f3-eb754739df9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:50:59 compute-2 nova_compute[226829]: 2026-01-31 07:50:59.626 226833 DEBUG nova.compute.manager [req-19911040-f85f-4125-b7c4-a8b918260a11 req-6ae835bf-7e6f-4a38-a42e-7cbf5b5562cf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Refreshing instance network info cache due to event network-changed-ca7b65d5-b18e-4cc6-a3f3-eb754739df9a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:50:59 compute-2 nova_compute[226829]: 2026-01-31 07:50:59.627 226833 DEBUG oslo_concurrency.lockutils [req-19911040-f85f-4125-b7c4-a8b918260a11 req-6ae835bf-7e6f-4a38-a42e-7cbf5b5562cf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:50:59 compute-2 nova_compute[226829]: 2026-01-31 07:50:59.627 226833 DEBUG oslo_concurrency.lockutils [req-19911040-f85f-4125-b7c4-a8b918260a11 req-6ae835bf-7e6f-4a38-a42e-7cbf5b5562cf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:50:59 compute-2 nova_compute[226829]: 2026-01-31 07:50:59.627 226833 DEBUG nova.network.neutron [req-19911040-f85f-4125-b7c4-a8b918260a11 req-6ae835bf-7e6f-4a38-a42e-7cbf5b5562cf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Refreshing network info cache for port ca7b65d5-b18e-4cc6-a3f3-eb754739df9a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:51:00 compute-2 nova_compute[226829]: 2026-01-31 07:51:00.605 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:01.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:01 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma missed beacon ack from the monitors
Jan 31 07:51:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:51:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:01.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:51:02 compute-2 nova_compute[226829]: 2026-01-31 07:51:02.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:51:02 compute-2 nova_compute[226829]: 2026-01-31 07:51:02.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 07:51:02 compute-2 nova_compute[226829]: 2026-01-31 07:51:02.830 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 07:51:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:51:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:03.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:51:03 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for submit_transact, latency = 9.222604752s
Jan 31 07:51:03 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for throttle_transact, latency = 9.222396851s
Jan 31 07:51:03 compute-2 podman[257102]: 2026-01-31 07:51:03.271729524 +0000 UTC m=+2.145361414 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 07:51:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:51:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:03.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:51:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).paxos(paxos updating c 3013..3655) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 4.962079048s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Jan 31 07:51:03 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2[77278]: 2026-01-31T07:51:03.664+0000 7fa4f938c640 -1 mon.compute-2@1(peon).paxos(paxos updating c 3013..3655) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 4.962079048s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Jan 31 07:51:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:51:03 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_flush, latency = 9.352479935s
Jan 31 07:51:03 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_sync, latency = 9.809533119s
Jan 31 07:51:03 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.810427666s, txc = 0x562dbaadc900
Jan 31 07:51:03 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:51:04 compute-2 podman[257011]: 2026-01-31 07:51:04.005411726 +0000 UTC m=+10.778421219 container create b7503ee4135cdd559dacf4bce87fcf49a357bb8742df33a0a547c043f4775eeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.116278648s, txc = 0x562dbc7ba300
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.114465714s, txc = 0x562dbac1fb00
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.113340378s, txc = 0x562dbb769200
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.112468719s, txc = 0x562dbac44600
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.111641884s, txc = 0x562dbbb16600
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.111897469s, txc = 0x562dbc80cc00
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.111433983s, txc = 0x562dbab14f00
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.111383438s, txc = 0x562dbac44000
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.111024857s, txc = 0x562dbab7f800
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.111057281s, txc = 0x562dbaa89b00
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.110769272s, txc = 0x562dbaa89200
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.110516548s, txc = 0x562dbaa03500
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.110047340s, txc = 0x562dbac45200
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.109246254s, txc = 0x562dbac62600
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.109183311s, txc = 0x562dbaba6000
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.109073639s, txc = 0x562dbaa03b00
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.108950615s, txc = 0x562dbab7e600
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.108909607s, txc = 0x562dbabfef00
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.108842850s, txc = 0x562dbaba7800
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.108740807s, txc = 0x562dbba6d500
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.108415604s, txc = 0x562dbad67800
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.108130455s, txc = 0x562dbaa88c00
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.107435226s, txc = 0x562dbaadd800
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.106311798s, txc = 0x562dbb9fb500
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.105959892s, txc = 0x562dbac63200
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.105927467s, txc = 0x562dbab14600
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.105697632s, txc = 0x562dbbb16300
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.105454445s, txc = 0x562dbac44900
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.105272293s, txc = 0x562dbaa03800
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.105082512s, txc = 0x562dbb768900
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.091160774s, txc = 0x562dba835200
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.066860199s, txc = 0x562dbaba6900
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.004306793s, txc = 0x562dbc7bdb00
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.988657951s, txc = 0x562dbb6ae000
Jan 31 07:51:04 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.988422394s, txc = 0x562dbba6d200
Jan 31 07:51:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:51:04 compute-2 systemd[1]: Started libpod-conmon-b7503ee4135cdd559dacf4bce87fcf49a357bb8742df33a0a547c043f4775eeb.scope.
Jan 31 07:51:04 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:51:04 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/096265f06a526e9366d136315dec532bfb6622210b5ac0202294e2fba2c19fd7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:51:04 compute-2 podman[257011]: 2026-01-31 07:51:04.367449437 +0000 UTC m=+11.140458920 container init b7503ee4135cdd559dacf4bce87fcf49a357bb8742df33a0a547c043f4775eeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 07:51:04 compute-2 podman[257011]: 2026-01-31 07:51:04.373920421 +0000 UTC m=+11.146929884 container start b7503ee4135cdd559dacf4bce87fcf49a357bb8742df33a0a547c043f4775eeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 07:51:04 compute-2 neutron-haproxy-ovnmeta-ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1[257126]: [NOTICE]   (257130) : New worker (257132) forked
Jan 31 07:51:04 compute-2 neutron-haproxy-ovnmeta-ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1[257126]: [NOTICE]   (257130) : Loading success.
Jan 31 07:51:04 compute-2 nova_compute[226829]: 2026-01-31 07:51:04.476 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:04 compute-2 nova_compute[226829]: 2026-01-31 07:51:04.699 226833 DEBUG nova.network.neutron [req-19911040-f85f-4125-b7c4-a8b918260a11 req-6ae835bf-7e6f-4a38-a42e-7cbf5b5562cf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Updated VIF entry in instance network info cache for port ca7b65d5-b18e-4cc6-a3f3-eb754739df9a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:51:04 compute-2 nova_compute[226829]: 2026-01-31 07:51:04.700 226833 DEBUG nova.network.neutron [req-19911040-f85f-4125-b7c4-a8b918260a11 req-6ae835bf-7e6f-4a38-a42e-7cbf5b5562cf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Updating instance_info_cache with network_info: [{"id": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "address": "fa:16:3e:40:b4:8f", "network": {"id": "ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-269022987-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8a037791bb04bafaec8d4639d3907ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca7b65d5-b1", "ovs_interfaceid": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:51:04 compute-2 nova_compute[226829]: 2026-01-31 07:51:04.830 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:51:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:05.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:05 compute-2 ceph-mon[77282]: pgmap v1610: 305 pgs: 305 active+clean; 181 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 139 op/s
Jan 31 07:51:05 compute-2 ceph-mon[77282]: pgmap v1611: 305 pgs: 305 active+clean; 184 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 151 op/s
Jan 31 07:51:05 compute-2 ceph-mon[77282]: pgmap v1612: 305 pgs: 305 active+clean; 193 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 919 KiB/s rd, 2.7 MiB/s wr, 105 op/s
Jan 31 07:51:05 compute-2 ceph-mon[77282]: pgmap v1613: 305 pgs: 305 active+clean; 193 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 289 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Jan 31 07:51:05 compute-2 ceph-mon[77282]: pgmap v1614: 305 pgs: 305 active+clean; 193 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 260 KiB/s rd, 916 KiB/s wr, 29 op/s
Jan 31 07:51:05 compute-2 ceph-mon[77282]: mon.compute-0 calling monitor election
Jan 31 07:51:05 compute-2 ceph-mon[77282]: mon.compute-1 calling monitor election
Jan 31 07:51:05 compute-2 ceph-mon[77282]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 07:51:05 compute-2 ceph-mon[77282]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 07:51:05 compute-2 ceph-mon[77282]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 07:51:05 compute-2 ceph-mon[77282]: osdmap e250: 3 total, 3 up, 3 in
Jan 31 07:51:05 compute-2 ceph-mon[77282]: mgrmap e11: compute-0.hhuoua(active, since 45m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 07:51:05 compute-2 ceph-mon[77282]: overall HEALTH_OK
Jan 31 07:51:05 compute-2 nova_compute[226829]: 2026-01-31 07:51:05.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:51:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:05.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:05 compute-2 nova_compute[226829]: 2026-01-31 07:51:05.608 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:05 compute-2 nova_compute[226829]: 2026-01-31 07:51:05.892 226833 DEBUG oslo_concurrency.lockutils [req-19911040-f85f-4125-b7c4-a8b918260a11 req-6ae835bf-7e6f-4a38-a42e-7cbf5b5562cf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:51:06 compute-2 ceph-mon[77282]: pgmap v1615: 305 pgs: 305 active+clean; 194 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 320 KiB/s rd, 1001 KiB/s wr, 38 op/s
Jan 31 07:51:06 compute-2 nova_compute[226829]: 2026-01-31 07:51:06.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:51:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:51:06.860 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:51:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:51:06.863 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:51:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:51:06.863 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:51:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:07.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:51:07.398 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:51:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:51:07.399 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:51:07 compute-2 nova_compute[226829]: 2026-01-31 07:51:07.403 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:07 compute-2 nova_compute[226829]: 2026-01-31 07:51:07.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:51:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:07.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:08 compute-2 ceph-mon[77282]: pgmap v1616: 305 pgs: 305 active+clean; 194 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 556 KiB/s rd, 1002 KiB/s wr, 49 op/s
Jan 31 07:51:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:51:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:51:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:09.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:51:09 compute-2 ovn_controller[133834]: 2026-01-31T07:51:09Z|00195|binding|INFO|Releasing lport baa67f49-7ef9-41cc-bae4-66786d0756ef from this chassis (sb_readonly=0)
Jan 31 07:51:09 compute-2 nova_compute[226829]: 2026-01-31 07:51:09.259 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:09 compute-2 nova_compute[226829]: 2026-01-31 07:51:09.478 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:09.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:10 compute-2 nova_compute[226829]: 2026-01-31 07:51:10.611 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:10 compute-2 ceph-mon[77282]: pgmap v1617: 305 pgs: 305 active+clean; 211 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 125 op/s
Jan 31 07:51:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3836526511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:51:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:11.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:11 compute-2 nova_compute[226829]: 2026-01-31 07:51:11.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:51:11 compute-2 nova_compute[226829]: 2026-01-31 07:51:11.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:51:11 compute-2 nova_compute[226829]: 2026-01-31 07:51:11.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:51:11 compute-2 sudo[257145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:51:11 compute-2 sudo[257145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:11 compute-2 sudo[257145]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:11 compute-2 sudo[257170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:51:11 compute-2 sudo[257170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:11 compute-2 sudo[257170]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:11.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:11 compute-2 sudo[257195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:51:11 compute-2 sudo[257195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:11 compute-2 sudo[257195]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:11 compute-2 sudo[257220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:51:11 compute-2 sudo[257220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:11 compute-2 ceph-mon[77282]: pgmap v1618: 305 pgs: 305 active+clean; 213 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 114 op/s
Jan 31 07:51:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2883700870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:51:12 compute-2 sudo[257220]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:12 compute-2 nova_compute[226829]: 2026-01-31 07:51:12.229 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:51:12 compute-2 nova_compute[226829]: 2026-01-31 07:51:12.230 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:51:12 compute-2 nova_compute[226829]: 2026-01-31 07:51:12.230 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 07:51:12 compute-2 nova_compute[226829]: 2026-01-31 07:51:12.230 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:51:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:51:12.402 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:51:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:51:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:51:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:51:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:51:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:51:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:51:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:51:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:13.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:51:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:13.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:51:14 compute-2 ceph-mon[77282]: pgmap v1619: 305 pgs: 305 active+clean; 213 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 114 op/s
Jan 31 07:51:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3780460574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:51:14 compute-2 nova_compute[226829]: 2026-01-31 07:51:14.482 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:51:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:15.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:51:15 compute-2 sudo[257279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:51:15 compute-2 sudo[257279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:15 compute-2 sudo[257279]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:15 compute-2 sudo[257304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:51:15 compute-2 sudo[257304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:15 compute-2 sudo[257304]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:15.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:15 compute-2 nova_compute[226829]: 2026-01-31 07:51:15.613 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:15 compute-2 nova_compute[226829]: 2026-01-31 07:51:15.774 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Updating instance_info_cache with network_info: [{"id": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "address": "fa:16:3e:40:b4:8f", "network": {"id": "ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-269022987-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8a037791bb04bafaec8d4639d3907ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca7b65d5-b1", "ovs_interfaceid": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:51:16 compute-2 nova_compute[226829]: 2026-01-31 07:51:16.230 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:51:16 compute-2 nova_compute[226829]: 2026-01-31 07:51:16.231 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 07:51:16 compute-2 nova_compute[226829]: 2026-01-31 07:51:16.232 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:51:16 compute-2 nova_compute[226829]: 2026-01-31 07:51:16.233 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:51:16 compute-2 nova_compute[226829]: 2026-01-31 07:51:16.234 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:51:16 compute-2 ceph-mon[77282]: pgmap v1620: 305 pgs: 305 active+clean; 213 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 114 op/s
Jan 31 07:51:17 compute-2 nova_compute[226829]: 2026-01-31 07:51:17.144 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:51:17 compute-2 nova_compute[226829]: 2026-01-31 07:51:17.144 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:51:17 compute-2 nova_compute[226829]: 2026-01-31 07:51:17.144 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:51:17 compute-2 nova_compute[226829]: 2026-01-31 07:51:17.144 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:51:17 compute-2 nova_compute[226829]: 2026-01-31 07:51:17.145 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:51:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:17.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:51:17 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2162422567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:51:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:17.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:17 compute-2 nova_compute[226829]: 2026-01-31 07:51:17.616 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:51:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2099143682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:51:17 compute-2 ceph-mon[77282]: pgmap v1621: 305 pgs: 305 active+clean; 213 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 104 op/s
Jan 31 07:51:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2162422567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:51:18 compute-2 nova_compute[226829]: 2026-01-31 07:51:18.245 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:51:18 compute-2 nova_compute[226829]: 2026-01-31 07:51:18.246 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:51:18 compute-2 nova_compute[226829]: 2026-01-31 07:51:18.424 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:51:18 compute-2 nova_compute[226829]: 2026-01-31 07:51:18.426 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4356MB free_disk=20.921798706054688GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:51:18 compute-2 nova_compute[226829]: 2026-01-31 07:51:18.426 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:51:18 compute-2 nova_compute[226829]: 2026-01-31 07:51:18.426 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:51:18 compute-2 ovn_controller[133834]: 2026-01-31T07:51:18Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:40:b4:8f 10.100.0.9
Jan 31 07:51:18 compute-2 ovn_controller[133834]: 2026-01-31T07:51:18Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:40:b4:8f 10.100.0.9
Jan 31 07:51:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:51:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:19.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:19 compute-2 nova_compute[226829]: 2026-01-31 07:51:19.523 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:51:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:19.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:51:19 compute-2 nova_compute[226829]: 2026-01-31 07:51:19.707 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:51:19 compute-2 nova_compute[226829]: 2026-01-31 07:51:19.708 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:51:19 compute-2 nova_compute[226829]: 2026-01-31 07:51:19.708 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:51:19 compute-2 nova_compute[226829]: 2026-01-31 07:51:19.834 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:51:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:51:20 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3621442984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:51:20 compute-2 nova_compute[226829]: 2026-01-31 07:51:20.272 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:51:20 compute-2 nova_compute[226829]: 2026-01-31 07:51:20.277 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:51:20 compute-2 nova_compute[226829]: 2026-01-31 07:51:20.639 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:20 compute-2 ceph-mon[77282]: pgmap v1622: 305 pgs: 305 active+clean; 220 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 114 op/s
Jan 31 07:51:20 compute-2 nova_compute[226829]: 2026-01-31 07:51:20.907 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:51:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:21.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:21 compute-2 podman[257376]: 2026-01-31 07:51:21.240911926 +0000 UTC m=+0.114515109 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:51:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:21.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:21 compute-2 nova_compute[226829]: 2026-01-31 07:51:21.756 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:51:21 compute-2 nova_compute[226829]: 2026-01-31 07:51:21.757 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.330s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:51:21 compute-2 nova_compute[226829]: 2026-01-31 07:51:21.757 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:51:21 compute-2 nova_compute[226829]: 2026-01-31 07:51:21.757 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 07:51:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3621442984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:51:22 compute-2 ceph-mon[77282]: pgmap v1623: 305 pgs: 305 active+clean; 239 MiB data, 704 MiB used, 20 GiB / 21 GiB avail; 77 KiB/s rd, 2.0 MiB/s wr, 33 op/s
Jan 31 07:51:22 compute-2 nova_compute[226829]: 2026-01-31 07:51:22.070 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:51:22 compute-2 nova_compute[226829]: 2026-01-31 07:51:22.070 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:51:22 compute-2 sudo[257405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:51:22 compute-2 sudo[257405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:22 compute-2 sudo[257405]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:22 compute-2 sudo[257430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:51:22 compute-2 sudo[257430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:22 compute-2 sudo[257430]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:23.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:51:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:51:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:23.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:51:24 compute-2 nova_compute[226829]: 2026-01-31 07:51:24.525 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:24 compute-2 ceph-mon[77282]: pgmap v1624: 305 pgs: 305 active+clean; 239 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 181 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Jan 31 07:51:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1587214139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:51:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:25.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:25 compute-2 nova_compute[226829]: 2026-01-31 07:51:25.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:51:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:25.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:25 compute-2 nova_compute[226829]: 2026-01-31 07:51:25.642 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:25 compute-2 ceph-mon[77282]: pgmap v1625: 305 pgs: 305 active+clean; 241 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 220 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Jan 31 07:51:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:51:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:27.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:51:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:51:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:27.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:51:28 compute-2 ceph-mon[77282]: pgmap v1626: 305 pgs: 305 active+clean; 242 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 237 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Jan 31 07:51:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1851526475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:51:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:51:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:29.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:29 compute-2 nova_compute[226829]: 2026-01-31 07:51:29.527 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:29.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:30 compute-2 ceph-mon[77282]: pgmap v1627: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 254 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 31 07:51:30 compute-2 nova_compute[226829]: 2026-01-31 07:51:30.644 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:31.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:31.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:32 compute-2 ceph-mon[77282]: pgmap v1628: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 217 KiB/s rd, 987 KiB/s wr, 41 op/s
Jan 31 07:51:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3089878778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:51:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:33.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:33.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:51:33 compute-2 ceph-mon[77282]: pgmap v1629: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 192 KiB/s rd, 158 KiB/s wr, 35 op/s
Jan 31 07:51:34 compute-2 podman[257462]: 2026-01-31 07:51:34.209209865 +0000 UTC m=+0.091874427 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 07:51:34 compute-2 nova_compute[226829]: 2026-01-31 07:51:34.530 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:35.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:35 compute-2 sudo[257483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:51:35 compute-2 sudo[257483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:35 compute-2 sudo[257483]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:35 compute-2 sudo[257508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:51:35 compute-2 sudo[257508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:35 compute-2 sudo[257508]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:35.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:35 compute-2 nova_compute[226829]: 2026-01-31 07:51:35.647 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:36 compute-2 ceph-mon[77282]: pgmap v1630: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 73 KiB/s rd, 73 KiB/s wr, 8 op/s
Jan 31 07:51:36 compute-2 nova_compute[226829]: 2026-01-31 07:51:36.688 226833 DEBUG nova.objects.instance [None req-ac8eaa6e-e9bc-4f83-8972-3a4ed2371879 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Lazy-loading 'flavor' on Instance uuid 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:51:37 compute-2 nova_compute[226829]: 2026-01-31 07:51:37.158 226833 DEBUG oslo_concurrency.lockutils [None req-ac8eaa6e-e9bc-4f83-8972-3a4ed2371879 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Acquiring lock "refresh_cache-3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:51:37 compute-2 nova_compute[226829]: 2026-01-31 07:51:37.159 226833 DEBUG oslo_concurrency.lockutils [None req-ac8eaa6e-e9bc-4f83-8972-3a4ed2371879 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Acquired lock "refresh_cache-3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:51:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:37.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:37 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/132925161' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:51:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:37.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:38 compute-2 ceph-mon[77282]: pgmap v1631: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 29 KiB/s wr, 6 op/s
Jan 31 07:51:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:51:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:51:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:39.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:51:39 compute-2 nova_compute[226829]: 2026-01-31 07:51:39.417 226833 DEBUG nova.network.neutron [None req-ac8eaa6e-e9bc-4f83-8972-3a4ed2371879 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:51:39 compute-2 nova_compute[226829]: 2026-01-31 07:51:39.533 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:39.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:39 compute-2 ceph-mon[77282]: pgmap v1632: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 22 KiB/s wr, 5 op/s
Jan 31 07:51:39 compute-2 nova_compute[226829]: 2026-01-31 07:51:39.804 226833 DEBUG nova.compute.manager [req-73655984-934c-4c3f-82d8-6303cc53c082 req-e8c82098-da04-4922-a0b9-1e5f3f337f56 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Received event network-changed-ca7b65d5-b18e-4cc6-a3f3-eb754739df9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:51:39 compute-2 nova_compute[226829]: 2026-01-31 07:51:39.804 226833 DEBUG nova.compute.manager [req-73655984-934c-4c3f-82d8-6303cc53c082 req-e8c82098-da04-4922-a0b9-1e5f3f337f56 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Refreshing instance network info cache due to event network-changed-ca7b65d5-b18e-4cc6-a3f3-eb754739df9a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:51:39 compute-2 nova_compute[226829]: 2026-01-31 07:51:39.804 226833 DEBUG oslo_concurrency.lockutils [req-73655984-934c-4c3f-82d8-6303cc53c082 req-e8c82098-da04-4922-a0b9-1e5f3f337f56 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:51:40 compute-2 nova_compute[226829]: 2026-01-31 07:51:40.650 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:41.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:41.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:42 compute-2 ceph-mon[77282]: pgmap v1633: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 16 KiB/s wr, 1 op/s
Jan 31 07:51:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:43.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:43.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:51:44 compute-2 nova_compute[226829]: 2026-01-31 07:51:44.150 226833 DEBUG nova.network.neutron [None req-ac8eaa6e-e9bc-4f83-8972-3a4ed2371879 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Updating instance_info_cache with network_info: [{"id": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "address": "fa:16:3e:40:b4:8f", "network": {"id": "ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-269022987-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8a037791bb04bafaec8d4639d3907ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca7b65d5-b1", "ovs_interfaceid": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:51:44 compute-2 nova_compute[226829]: 2026-01-31 07:51:44.586 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:44 compute-2 nova_compute[226829]: 2026-01-31 07:51:44.742 226833 DEBUG oslo_concurrency.lockutils [None req-ac8eaa6e-e9bc-4f83-8972-3a4ed2371879 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Releasing lock "refresh_cache-3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:51:44 compute-2 nova_compute[226829]: 2026-01-31 07:51:44.743 226833 DEBUG nova.compute.manager [None req-ac8eaa6e-e9bc-4f83-8972-3a4ed2371879 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Jan 31 07:51:44 compute-2 nova_compute[226829]: 2026-01-31 07:51:44.743 226833 DEBUG nova.compute.manager [None req-ac8eaa6e-e9bc-4f83-8972-3a4ed2371879 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] network_info to inject: |[{"id": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "address": "fa:16:3e:40:b4:8f", "network": {"id": "ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-269022987-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8a037791bb04bafaec8d4639d3907ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca7b65d5-b1", "ovs_interfaceid": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Jan 31 07:51:44 compute-2 nova_compute[226829]: 2026-01-31 07:51:44.745 226833 DEBUG oslo_concurrency.lockutils [req-73655984-934c-4c3f-82d8-6303cc53c082 req-e8c82098-da04-4922-a0b9-1e5f3f337f56 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:51:44 compute-2 nova_compute[226829]: 2026-01-31 07:51:44.746 226833 DEBUG nova.network.neutron [req-73655984-934c-4c3f-82d8-6303cc53c082 req-e8c82098-da04-4922-a0b9-1e5f3f337f56 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Refreshing network info cache for port ca7b65d5-b18e-4cc6-a3f3-eb754739df9a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:51:44 compute-2 ceph-mon[77282]: pgmap v1634: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 15 KiB/s wr, 1 op/s
Jan 31 07:51:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:51:45 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1397894410' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:51:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:51:45 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1397894410' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:51:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:45.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:45 compute-2 nova_compute[226829]: 2026-01-31 07:51:45.285 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:51:45.286 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:51:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:51:45.299 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:51:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:45.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:45 compute-2 nova_compute[226829]: 2026-01-31 07:51:45.688 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:46 compute-2 ceph-mon[77282]: pgmap v1635: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 15 KiB/s wr, 0 op/s
Jan 31 07:51:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1397894410' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:51:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1397894410' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:51:46 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:51:46.303 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:51:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:47.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:47.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:47 compute-2 nova_compute[226829]: 2026-01-31 07:51:47.998 226833 DEBUG nova.objects.instance [None req-8cba0fbe-5edd-48d8-9337-d8a8e4584a34 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Lazy-loading 'flavor' on Instance uuid 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:51:48 compute-2 nova_compute[226829]: 2026-01-31 07:51:48.129 226833 DEBUG oslo_concurrency.lockutils [None req-8cba0fbe-5edd-48d8-9337-d8a8e4584a34 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Acquiring lock "refresh_cache-3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:51:48 compute-2 ceph-mon[77282]: pgmap v1636: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 4.1 KiB/s wr, 0 op/s
Jan 31 07:51:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:51:49 compute-2 nova_compute[226829]: 2026-01-31 07:51:49.061 226833 DEBUG nova.network.neutron [req-73655984-934c-4c3f-82d8-6303cc53c082 req-e8c82098-da04-4922-a0b9-1e5f3f337f56 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Updated VIF entry in instance network info cache for port ca7b65d5-b18e-4cc6-a3f3-eb754739df9a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:51:49 compute-2 nova_compute[226829]: 2026-01-31 07:51:49.061 226833 DEBUG nova.network.neutron [req-73655984-934c-4c3f-82d8-6303cc53c082 req-e8c82098-da04-4922-a0b9-1e5f3f337f56 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Updating instance_info_cache with network_info: [{"id": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "address": "fa:16:3e:40:b4:8f", "network": {"id": "ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-269022987-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8a037791bb04bafaec8d4639d3907ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca7b65d5-b1", "ovs_interfaceid": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:51:49 compute-2 nova_compute[226829]: 2026-01-31 07:51:49.129 226833 DEBUG oslo_concurrency.lockutils [req-73655984-934c-4c3f-82d8-6303cc53c082 req-e8c82098-da04-4922-a0b9-1e5f3f337f56 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:51:49 compute-2 nova_compute[226829]: 2026-01-31 07:51:49.129 226833 DEBUG oslo_concurrency.lockutils [None req-8cba0fbe-5edd-48d8-9337-d8a8e4584a34 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Acquired lock "refresh_cache-3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:51:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:49.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:49 compute-2 nova_compute[226829]: 2026-01-31 07:51:49.589 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:49.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:50 compute-2 ceph-mon[77282]: pgmap v1637: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 5.1 KiB/s wr, 0 op/s
Jan 31 07:51:50 compute-2 nova_compute[226829]: 2026-01-31 07:51:50.691 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:50 compute-2 nova_compute[226829]: 2026-01-31 07:51:50.912 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:51 compute-2 nova_compute[226829]: 2026-01-31 07:51:51.057 226833 DEBUG nova.network.neutron [None req-8cba0fbe-5edd-48d8-9337-d8a8e4584a34 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:51:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:51.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:51 compute-2 nova_compute[226829]: 2026-01-31 07:51:51.589 226833 DEBUG nova.compute.manager [req-ff5fd88f-335e-4b76-b5b1-aa1806773714 req-9c2ec11d-7026-4d4f-9384-e9480c8923a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Received event network-changed-ca7b65d5-b18e-4cc6-a3f3-eb754739df9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:51:51 compute-2 nova_compute[226829]: 2026-01-31 07:51:51.589 226833 DEBUG nova.compute.manager [req-ff5fd88f-335e-4b76-b5b1-aa1806773714 req-9c2ec11d-7026-4d4f-9384-e9480c8923a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Refreshing instance network info cache due to event network-changed-ca7b65d5-b18e-4cc6-a3f3-eb754739df9a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:51:51 compute-2 nova_compute[226829]: 2026-01-31 07:51:51.589 226833 DEBUG oslo_concurrency.lockutils [req-ff5fd88f-335e-4b76-b5b1-aa1806773714 req-9c2ec11d-7026-4d4f-9384-e9480c8923a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:51:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:51.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:51 compute-2 ceph-mon[77282]: pgmap v1638: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 31 07:51:52 compute-2 podman[257541]: 2026-01-31 07:51:52.191892214 +0000 UTC m=+0.070577834 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller)
Jan 31 07:51:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:53.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:53 compute-2 nova_compute[226829]: 2026-01-31 07:51:53.670 226833 DEBUG nova.network.neutron [None req-8cba0fbe-5edd-48d8-9337-d8a8e4584a34 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Updating instance_info_cache with network_info: [{"id": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "address": "fa:16:3e:40:b4:8f", "network": {"id": "ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-269022987-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8a037791bb04bafaec8d4639d3907ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca7b65d5-b1", "ovs_interfaceid": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:51:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:53.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:53 compute-2 nova_compute[226829]: 2026-01-31 07:51:53.779 226833 DEBUG oslo_concurrency.lockutils [None req-8cba0fbe-5edd-48d8-9337-d8a8e4584a34 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Releasing lock "refresh_cache-3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:51:53 compute-2 nova_compute[226829]: 2026-01-31 07:51:53.780 226833 DEBUG nova.compute.manager [None req-8cba0fbe-5edd-48d8-9337-d8a8e4584a34 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Jan 31 07:51:53 compute-2 nova_compute[226829]: 2026-01-31 07:51:53.780 226833 DEBUG nova.compute.manager [None req-8cba0fbe-5edd-48d8-9337-d8a8e4584a34 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] network_info to inject: |[{"id": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "address": "fa:16:3e:40:b4:8f", "network": {"id": "ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-269022987-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8a037791bb04bafaec8d4639d3907ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca7b65d5-b1", "ovs_interfaceid": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Jan 31 07:51:53 compute-2 nova_compute[226829]: 2026-01-31 07:51:53.786 226833 DEBUG oslo_concurrency.lockutils [req-ff5fd88f-335e-4b76-b5b1-aa1806773714 req-9c2ec11d-7026-4d4f-9384-e9480c8923a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:51:53 compute-2 nova_compute[226829]: 2026-01-31 07:51:53.786 226833 DEBUG nova.network.neutron [req-ff5fd88f-335e-4b76-b5b1-aa1806773714 req-9c2ec11d-7026-4d4f-9384-e9480c8923a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Refreshing network info cache for port ca7b65d5-b18e-4cc6-a3f3-eb754739df9a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:51:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:51:54 compute-2 nova_compute[226829]: 2026-01-31 07:51:54.591 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:54 compute-2 nova_compute[226829]: 2026-01-31 07:51:54.835 226833 DEBUG oslo_concurrency.lockutils [None req-192884fc-df89-41d4-881d-d4f2d3fc2595 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Acquiring lock "3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:51:54 compute-2 nova_compute[226829]: 2026-01-31 07:51:54.836 226833 DEBUG oslo_concurrency.lockutils [None req-192884fc-df89-41d4-881d-d4f2d3fc2595 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Lock "3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:51:54 compute-2 nova_compute[226829]: 2026-01-31 07:51:54.836 226833 DEBUG oslo_concurrency.lockutils [None req-192884fc-df89-41d4-881d-d4f2d3fc2595 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Acquiring lock "3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:51:54 compute-2 nova_compute[226829]: 2026-01-31 07:51:54.837 226833 DEBUG oslo_concurrency.lockutils [None req-192884fc-df89-41d4-881d-d4f2d3fc2595 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Lock "3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:51:54 compute-2 nova_compute[226829]: 2026-01-31 07:51:54.837 226833 DEBUG oslo_concurrency.lockutils [None req-192884fc-df89-41d4-881d-d4f2d3fc2595 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Lock "3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:51:54 compute-2 nova_compute[226829]: 2026-01-31 07:51:54.840 226833 INFO nova.compute.manager [None req-192884fc-df89-41d4-881d-d4f2d3fc2595 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Terminating instance
Jan 31 07:51:54 compute-2 nova_compute[226829]: 2026-01-31 07:51:54.842 226833 DEBUG nova.compute.manager [None req-192884fc-df89-41d4-881d-d4f2d3fc2595 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:51:55 compute-2 ceph-mon[77282]: pgmap v1639: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 31 07:51:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:55.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:55 compute-2 kernel: tapca7b65d5-b1 (unregistering): left promiscuous mode
Jan 31 07:51:55 compute-2 NetworkManager[48999]: <info>  [1769845915.6238] device (tapca7b65d5-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:51:55 compute-2 ovn_controller[133834]: 2026-01-31T07:51:55Z|00196|binding|INFO|Releasing lport ca7b65d5-b18e-4cc6-a3f3-eb754739df9a from this chassis (sb_readonly=0)
Jan 31 07:51:55 compute-2 ovn_controller[133834]: 2026-01-31T07:51:55Z|00197|binding|INFO|Setting lport ca7b65d5-b18e-4cc6-a3f3-eb754739df9a down in Southbound
Jan 31 07:51:55 compute-2 nova_compute[226829]: 2026-01-31 07:51:55.630 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:55 compute-2 ovn_controller[133834]: 2026-01-31T07:51:55Z|00198|binding|INFO|Removing iface tapca7b65d5-b1 ovn-installed in OVS
Jan 31 07:51:55 compute-2 nova_compute[226829]: 2026-01-31 07:51:55.633 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:55 compute-2 nova_compute[226829]: 2026-01-31 07:51:55.645 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:55 compute-2 sudo[257571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:51:55 compute-2 sudo[257571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:55 compute-2 sudo[257571]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:55 compute-2 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000047.scope: Deactivated successfully.
Jan 31 07:51:55 compute-2 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000047.scope: Consumed 25.610s CPU time.
Jan 31 07:51:55 compute-2 systemd-machined[195142]: Machine qemu-30-instance-00000047 terminated.
Jan 31 07:51:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:55.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:55 compute-2 nova_compute[226829]: 2026-01-31 07:51:55.693 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:55 compute-2 sudo[257599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:51:55 compute-2 sudo[257599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:55 compute-2 sudo[257599]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:55 compute-2 nova_compute[226829]: 2026-01-31 07:51:55.713 226833 DEBUG nova.network.neutron [req-ff5fd88f-335e-4b76-b5b1-aa1806773714 req-9c2ec11d-7026-4d4f-9384-e9480c8923a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Updated VIF entry in instance network info cache for port ca7b65d5-b18e-4cc6-a3f3-eb754739df9a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:51:55 compute-2 nova_compute[226829]: 2026-01-31 07:51:55.714 226833 DEBUG nova.network.neutron [req-ff5fd88f-335e-4b76-b5b1-aa1806773714 req-9c2ec11d-7026-4d4f-9384-e9480c8923a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Updating instance_info_cache with network_info: [{"id": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "address": "fa:16:3e:40:b4:8f", "network": {"id": "ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-269022987-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8a037791bb04bafaec8d4639d3907ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca7b65d5-b1", "ovs_interfaceid": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:51:55 compute-2 nova_compute[226829]: 2026-01-31 07:51:55.893 226833 INFO nova.virt.libvirt.driver [-] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Instance destroyed successfully.
Jan 31 07:51:55 compute-2 nova_compute[226829]: 2026-01-31 07:51:55.894 226833 DEBUG nova.objects.instance [None req-192884fc-df89-41d4-881d-d4f2d3fc2595 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Lazy-loading 'resources' on Instance uuid 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:51:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:51:55.972 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:b4:8f 10.100.0.9'], port_security=['fa:16:3e:40:b4:8f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8a037791bb04bafaec8d4639d3907ae', 'neutron:revision_number': '6', 'neutron:security_group_ids': '06254fea-c724-468b-99a0-468bca1b7ad6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d3b22ca-7707-4658-b5d3-4f2316452c5f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=ca7b65d5-b18e-4cc6-a3f3-eb754739df9a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:51:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:51:55.975 143841 INFO neutron.agent.ovn.metadata.agent [-] Port ca7b65d5-b18e-4cc6-a3f3-eb754739df9a in datapath ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1 unbound from our chassis
Jan 31 07:51:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:51:55.979 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:51:55 compute-2 nova_compute[226829]: 2026-01-31 07:51:55.980 226833 DEBUG oslo_concurrency.lockutils [req-ff5fd88f-335e-4b76-b5b1-aa1806773714 req-9c2ec11d-7026-4d4f-9384-e9480c8923a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:51:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:51:55.983 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[70c8342e-9202-42a3-a726-1315d1da2fb0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:51:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:51:55.985 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1 namespace which is not needed anymore
Jan 31 07:51:56 compute-2 nova_compute[226829]: 2026-01-31 07:51:56.117 226833 DEBUG nova.virt.libvirt.vif [None req-192884fc-df89-41d4-881d-d4f2d3fc2595 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:50:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1646636956',display_name='tempest-AttachInterfacesUnderV243Test-server-1646636956',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1646636956',id=71,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIGmrBUXVGXTIX8YzH1r6kil/nhA8jwNHTY37N+fAuBFkccTNb/7NpE1FwQotASjyjUj5eOwfrH+FZgf7m53XPnmLYDq2d+B8HMrfzRss1ABP0DnA52Zl+YJSa7ShFLA9g==',key_name='tempest-keypair-1969491340',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:50:53Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c8a037791bb04bafaec8d4639d3907ae',ramdisk_id='',reservation_id='r-7fo7300w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-490839015',owner_user_name='tempest-AttachInterfacesUnderV243Test-490839015-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:51:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='da7f93fef8fa4e0d8682702e040a7476',uuid=3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "address": "fa:16:3e:40:b4:8f", "network": {"id": "ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-269022987-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8a037791bb04bafaec8d4639d3907ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca7b65d5-b1", "ovs_interfaceid": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:51:56 compute-2 nova_compute[226829]: 2026-01-31 07:51:56.118 226833 DEBUG nova.network.os_vif_util [None req-192884fc-df89-41d4-881d-d4f2d3fc2595 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Converting VIF {"id": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "address": "fa:16:3e:40:b4:8f", "network": {"id": "ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-269022987-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8a037791bb04bafaec8d4639d3907ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca7b65d5-b1", "ovs_interfaceid": "ca7b65d5-b18e-4cc6-a3f3-eb754739df9a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:51:56 compute-2 nova_compute[226829]: 2026-01-31 07:51:56.119 226833 DEBUG nova.network.os_vif_util [None req-192884fc-df89-41d4-881d-d4f2d3fc2595 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:40:b4:8f,bridge_name='br-int',has_traffic_filtering=True,id=ca7b65d5-b18e-4cc6-a3f3-eb754739df9a,network=Network(ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca7b65d5-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:51:56 compute-2 nova_compute[226829]: 2026-01-31 07:51:56.121 226833 DEBUG os_vif [None req-192884fc-df89-41d4-881d-d4f2d3fc2595 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:b4:8f,bridge_name='br-int',has_traffic_filtering=True,id=ca7b65d5-b18e-4cc6-a3f3-eb754739df9a,network=Network(ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca7b65d5-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:51:56 compute-2 nova_compute[226829]: 2026-01-31 07:51:56.126 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:56 compute-2 nova_compute[226829]: 2026-01-31 07:51:56.127 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca7b65d5-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:51:56 compute-2 nova_compute[226829]: 2026-01-31 07:51:56.165 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:56 compute-2 nova_compute[226829]: 2026-01-31 07:51:56.167 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:56 compute-2 nova_compute[226829]: 2026-01-31 07:51:56.173 226833 INFO os_vif [None req-192884fc-df89-41d4-881d-d4f2d3fc2595 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:b4:8f,bridge_name='br-int',has_traffic_filtering=True,id=ca7b65d5-b18e-4cc6-a3f3-eb754739df9a,network=Network(ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca7b65d5-b1')
Jan 31 07:51:56 compute-2 neutron-haproxy-ovnmeta-ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1[257126]: [NOTICE]   (257130) : haproxy version is 2.8.14-c23fe91
Jan 31 07:51:56 compute-2 neutron-haproxy-ovnmeta-ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1[257126]: [NOTICE]   (257130) : path to executable is /usr/sbin/haproxy
Jan 31 07:51:56 compute-2 neutron-haproxy-ovnmeta-ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1[257126]: [WARNING]  (257130) : Exiting Master process...
Jan 31 07:51:56 compute-2 neutron-haproxy-ovnmeta-ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1[257126]: [WARNING]  (257130) : Exiting Master process...
Jan 31 07:51:56 compute-2 neutron-haproxy-ovnmeta-ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1[257126]: [ALERT]    (257130) : Current worker (257132) exited with code 143 (Terminated)
Jan 31 07:51:56 compute-2 neutron-haproxy-ovnmeta-ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1[257126]: [WARNING]  (257130) : All workers exited. Exiting... (0)
Jan 31 07:51:56 compute-2 systemd[1]: libpod-b7503ee4135cdd559dacf4bce87fcf49a357bb8742df33a0a547c043f4775eeb.scope: Deactivated successfully.
Jan 31 07:51:56 compute-2 podman[257654]: 2026-01-31 07:51:56.20850082 +0000 UTC m=+0.101013645 container died b7503ee4135cdd559dacf4bce87fcf49a357bb8742df33a0a547c043f4775eeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 07:51:56 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b7503ee4135cdd559dacf4bce87fcf49a357bb8742df33a0a547c043f4775eeb-userdata-shm.mount: Deactivated successfully.
Jan 31 07:51:56 compute-2 systemd[1]: var-lib-containers-storage-overlay-096265f06a526e9366d136315dec532bfb6622210b5ac0202294e2fba2c19fd7-merged.mount: Deactivated successfully.
Jan 31 07:51:56 compute-2 podman[257654]: 2026-01-31 07:51:56.254526211 +0000 UTC m=+0.147039046 container cleanup b7503ee4135cdd559dacf4bce87fcf49a357bb8742df33a0a547c043f4775eeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 07:51:56 compute-2 systemd[1]: libpod-conmon-b7503ee4135cdd559dacf4bce87fcf49a357bb8742df33a0a547c043f4775eeb.scope: Deactivated successfully.
Jan 31 07:51:56 compute-2 ceph-mon[77282]: pgmap v1640: 305 pgs: 305 active+clean; 210 MiB data, 683 MiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 2.4 KiB/s wr, 10 op/s
Jan 31 07:51:56 compute-2 podman[257700]: 2026-01-31 07:51:56.32901269 +0000 UTC m=+0.052247740 container remove b7503ee4135cdd559dacf4bce87fcf49a357bb8742df33a0a547c043f4775eeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:51:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:51:56.335 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[db952cea-1ecb-487c-b668-8bc623655ee5]: (4, ('Sat Jan 31 07:51:56 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1 (b7503ee4135cdd559dacf4bce87fcf49a357bb8742df33a0a547c043f4775eeb)\nb7503ee4135cdd559dacf4bce87fcf49a357bb8742df33a0a547c043f4775eeb\nSat Jan 31 07:51:56 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1 (b7503ee4135cdd559dacf4bce87fcf49a357bb8742df33a0a547c043f4775eeb)\nb7503ee4135cdd559dacf4bce87fcf49a357bb8742df33a0a547c043f4775eeb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:51:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:51:56.336 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[02db99f3-2454-4cf6-8233-e84ed2048a76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:51:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:51:56.338 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapccdc763c-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:51:56 compute-2 nova_compute[226829]: 2026-01-31 07:51:56.340 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:56 compute-2 kernel: tapccdc763c-40: left promiscuous mode
Jan 31 07:51:56 compute-2 nova_compute[226829]: 2026-01-31 07:51:56.350 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:51:56.354 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4b00e46b-363a-4ba4-8dc9-b7a95fe0fc98]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:51:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:51:56.370 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[047aa91d-ab0d-41e4-97d3-5deec98e5d19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:51:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:51:56.371 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0484ab72-ef83-437c-b011-840e7a9ef014]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:51:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:51:56.384 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[65925163-daa2-44d4-bff3-5c9fd32435fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610526, 'reachable_time': 43919, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257717, 'error': None, 'target': 'ovnmeta-ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:51:56 compute-2 systemd[1]: run-netns-ovnmeta\x2dccdc763c\x2d44a6\x2d4e0e\x2da5d2\x2dd2aa5e9b17d1.mount: Deactivated successfully.
Jan 31 07:51:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:51:56.389 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ccdc763c-44a6-4e0e-a5d2-d2aa5e9b17d1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:51:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:51:56.390 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[c1a5949e-6f81-42e2-ba02-b7a8fc2c62ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:51:56 compute-2 nova_compute[226829]: 2026-01-31 07:51:56.498 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:57.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:57.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:58 compute-2 nova_compute[226829]: 2026-01-31 07:51:58.038 226833 DEBUG nova.compute.manager [req-a1b8befc-30ba-4db7-b1dd-597b1fb15e73 req-c2bb4c0f-a160-4bfd-8f4b-a503349a17c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Received event network-vif-unplugged-ca7b65d5-b18e-4cc6-a3f3-eb754739df9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:51:58 compute-2 nova_compute[226829]: 2026-01-31 07:51:58.039 226833 DEBUG oslo_concurrency.lockutils [req-a1b8befc-30ba-4db7-b1dd-597b1fb15e73 req-c2bb4c0f-a160-4bfd-8f4b-a503349a17c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:51:58 compute-2 nova_compute[226829]: 2026-01-31 07:51:58.040 226833 DEBUG oslo_concurrency.lockutils [req-a1b8befc-30ba-4db7-b1dd-597b1fb15e73 req-c2bb4c0f-a160-4bfd-8f4b-a503349a17c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:51:58 compute-2 nova_compute[226829]: 2026-01-31 07:51:58.040 226833 DEBUG oslo_concurrency.lockutils [req-a1b8befc-30ba-4db7-b1dd-597b1fb15e73 req-c2bb4c0f-a160-4bfd-8f4b-a503349a17c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:51:58 compute-2 nova_compute[226829]: 2026-01-31 07:51:58.040 226833 DEBUG nova.compute.manager [req-a1b8befc-30ba-4db7-b1dd-597b1fb15e73 req-c2bb4c0f-a160-4bfd-8f4b-a503349a17c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] No waiting events found dispatching network-vif-unplugged-ca7b65d5-b18e-4cc6-a3f3-eb754739df9a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:51:58 compute-2 nova_compute[226829]: 2026-01-31 07:51:58.041 226833 DEBUG nova.compute.manager [req-a1b8befc-30ba-4db7-b1dd-597b1fb15e73 req-c2bb4c0f-a160-4bfd-8f4b-a503349a17c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Received event network-vif-unplugged-ca7b65d5-b18e-4cc6-a3f3-eb754739df9a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:51:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:51:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:59.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:59 compute-2 nova_compute[226829]: 2026-01-31 07:51:59.594 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:51:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:51:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:59.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:52:00 compute-2 nova_compute[226829]: 2026-01-31 07:52:00.256 226833 DEBUG nova.compute.manager [req-28f67de5-8f6f-4d59-af1e-7363047d4b19 req-549ec245-6f93-4eea-ad10-4eb019e2e2cf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Received event network-vif-plugged-ca7b65d5-b18e-4cc6-a3f3-eb754739df9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:00 compute-2 nova_compute[226829]: 2026-01-31 07:52:00.256 226833 DEBUG oslo_concurrency.lockutils [req-28f67de5-8f6f-4d59-af1e-7363047d4b19 req-549ec245-6f93-4eea-ad10-4eb019e2e2cf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:00 compute-2 nova_compute[226829]: 2026-01-31 07:52:00.256 226833 DEBUG oslo_concurrency.lockutils [req-28f67de5-8f6f-4d59-af1e-7363047d4b19 req-549ec245-6f93-4eea-ad10-4eb019e2e2cf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:00 compute-2 nova_compute[226829]: 2026-01-31 07:52:00.257 226833 DEBUG oslo_concurrency.lockutils [req-28f67de5-8f6f-4d59-af1e-7363047d4b19 req-549ec245-6f93-4eea-ad10-4eb019e2e2cf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:00 compute-2 nova_compute[226829]: 2026-01-31 07:52:00.257 226833 DEBUG nova.compute.manager [req-28f67de5-8f6f-4d59-af1e-7363047d4b19 req-549ec245-6f93-4eea-ad10-4eb019e2e2cf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] No waiting events found dispatching network-vif-plugged-ca7b65d5-b18e-4cc6-a3f3-eb754739df9a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:52:00 compute-2 nova_compute[226829]: 2026-01-31 07:52:00.257 226833 WARNING nova.compute.manager [req-28f67de5-8f6f-4d59-af1e-7363047d4b19 req-549ec245-6f93-4eea-ad10-4eb019e2e2cf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Received unexpected event network-vif-plugged-ca7b65d5-b18e-4cc6-a3f3-eb754739df9a for instance with vm_state active and task_state deleting.
Jan 31 07:52:01 compute-2 nova_compute[226829]: 2026-01-31 07:52:01.167 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:52:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:01.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:52:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:01.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:03.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:52:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:03.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:52:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:52:04 compute-2 nova_compute[226829]: 2026-01-31 07:52:04.596 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:05 compute-2 podman[257722]: 2026-01-31 07:52:05.17372042 +0000 UTC m=+0.058922089 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 07:52:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:05.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:05 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma missed beacon ack from the monitors
Jan 31 07:52:05 compute-2 nova_compute[226829]: 2026-01-31 07:52:05.563 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:52:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:05.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:06 compute-2 nova_compute[226829]: 2026-01-31 07:52:06.169 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:06 compute-2 ceph-mon[77282]: pgmap v1641: 305 pgs: 305 active+clean; 194 MiB data, 675 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 3.0 KiB/s wr, 15 op/s
Jan 31 07:52:06 compute-2 nova_compute[226829]: 2026-01-31 07:52:06.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:52:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:06.861 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:06.862 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:06.862 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:52:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:07.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:52:07 compute-2 ceph-mon[77282]: pgmap v1642: 305 pgs: 305 active+clean; 167 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 4.0 KiB/s wr, 34 op/s
Jan 31 07:52:07 compute-2 ceph-mon[77282]: pgmap v1643: 305 pgs: 305 active+clean; 167 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 3.3 KiB/s wr, 35 op/s
Jan 31 07:52:07 compute-2 ceph-mon[77282]: pgmap v1644: 305 pgs: 305 active+clean; 167 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 3.3 KiB/s wr, 35 op/s
Jan 31 07:52:07 compute-2 ceph-mon[77282]: pgmap v1645: 305 pgs: 305 active+clean; 167 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 3.3 KiB/s wr, 35 op/s
Jan 31 07:52:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:52:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:07.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:52:08 compute-2 nova_compute[226829]: 2026-01-31 07:52:08.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:52:08 compute-2 ceph-mon[77282]: pgmap v1646: 305 pgs: 305 active+clean; 167 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.9 KiB/s wr, 25 op/s
Jan 31 07:52:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:52:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:09.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:09 compute-2 nova_compute[226829]: 2026-01-31 07:52:09.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:52:09 compute-2 nova_compute[226829]: 2026-01-31 07:52:09.598 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:09.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:09 compute-2 ceph-mon[77282]: pgmap v1647: 305 pgs: 305 active+clean; 135 MiB data, 638 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.5 KiB/s wr, 25 op/s
Jan 31 07:52:10 compute-2 nova_compute[226829]: 2026-01-31 07:52:10.204 226833 INFO nova.virt.libvirt.driver [None req-192884fc-df89-41d4-881d-d4f2d3fc2595 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Deleting instance files /var/lib/nova/instances/3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba_del
Jan 31 07:52:10 compute-2 nova_compute[226829]: 2026-01-31 07:52:10.205 226833 INFO nova.virt.libvirt.driver [None req-192884fc-df89-41d4-881d-d4f2d3fc2595 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Deletion of /var/lib/nova/instances/3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba_del complete
Jan 31 07:52:10 compute-2 nova_compute[226829]: 2026-01-31 07:52:10.750 226833 INFO nova.compute.manager [None req-192884fc-df89-41d4-881d-d4f2d3fc2595 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Took 15.91 seconds to destroy the instance on the hypervisor.
Jan 31 07:52:10 compute-2 nova_compute[226829]: 2026-01-31 07:52:10.751 226833 DEBUG oslo.service.loopingcall [None req-192884fc-df89-41d4-881d-d4f2d3fc2595 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:52:10 compute-2 nova_compute[226829]: 2026-01-31 07:52:10.751 226833 DEBUG nova.compute.manager [-] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:52:10 compute-2 nova_compute[226829]: 2026-01-31 07:52:10.751 226833 DEBUG nova.network.neutron [-] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:52:10 compute-2 nova_compute[226829]: 2026-01-31 07:52:10.890 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845915.8885107, 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:52:10 compute-2 nova_compute[226829]: 2026-01-31 07:52:10.891 226833 INFO nova.compute.manager [-] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] VM Stopped (Lifecycle Event)
Jan 31 07:52:11 compute-2 nova_compute[226829]: 2026-01-31 07:52:11.196 226833 DEBUG nova.compute.manager [None req-87ea8a57-0fd8-4d0a-ba44-7eda24cf4036 - - - - - -] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:52:11 compute-2 nova_compute[226829]: 2026-01-31 07:52:11.218 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:11.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:11.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:12 compute-2 nova_compute[226829]: 2026-01-31 07:52:12.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:52:12 compute-2 nova_compute[226829]: 2026-01-31 07:52:12.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:52:12 compute-2 nova_compute[226829]: 2026-01-31 07:52:12.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:52:12 compute-2 ceph-mon[77282]: pgmap v1648: 305 pgs: 305 active+clean; 113 MiB data, 631 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 938 B/s wr, 16 op/s
Jan 31 07:52:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2746839228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:52:12 compute-2 nova_compute[226829]: 2026-01-31 07:52:12.631 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 31 07:52:12 compute-2 nova_compute[226829]: 2026-01-31 07:52:12.632 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:52:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:13.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:13 compute-2 nova_compute[226829]: 2026-01-31 07:52:13.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:52:13 compute-2 nova_compute[226829]: 2026-01-31 07:52:13.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:52:13 compute-2 nova_compute[226829]: 2026-01-31 07:52:13.677 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:13 compute-2 nova_compute[226829]: 2026-01-31 07:52:13.677 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:13 compute-2 nova_compute[226829]: 2026-01-31 07:52:13.678 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:13 compute-2 nova_compute[226829]: 2026-01-31 07:52:13.678 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:52:13 compute-2 nova_compute[226829]: 2026-01-31 07:52:13.678 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:52:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:13.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:52:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2790364297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:52:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2589437897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:52:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/882165218' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:52:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:52:14 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3392995350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:52:14 compute-2 nova_compute[226829]: 2026-01-31 07:52:14.131 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:52:14 compute-2 nova_compute[226829]: 2026-01-31 07:52:14.314 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:52:14 compute-2 nova_compute[226829]: 2026-01-31 07:52:14.316 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4577MB free_disk=20.98541259765625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:52:14 compute-2 nova_compute[226829]: 2026-01-31 07:52:14.316 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:14 compute-2 nova_compute[226829]: 2026-01-31 07:52:14.316 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:14 compute-2 nova_compute[226829]: 2026-01-31 07:52:14.600 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:14 compute-2 nova_compute[226829]: 2026-01-31 07:52:14.993 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:52:14 compute-2 nova_compute[226829]: 2026-01-31 07:52:14.993 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:52:14 compute-2 nova_compute[226829]: 2026-01-31 07:52:14.994 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:52:15 compute-2 nova_compute[226829]: 2026-01-31 07:52:15.016 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing inventories for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 07:52:15 compute-2 nova_compute[226829]: 2026-01-31 07:52:15.035 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating ProviderTree inventory for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 07:52:15 compute-2 nova_compute[226829]: 2026-01-31 07:52:15.036 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating inventory in ProviderTree for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 07:52:15 compute-2 nova_compute[226829]: 2026-01-31 07:52:15.054 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing aggregate associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 07:52:15 compute-2 nova_compute[226829]: 2026-01-31 07:52:15.074 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing trait associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, traits: COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 07:52:15 compute-2 nova_compute[226829]: 2026-01-31 07:52:15.128 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:52:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:52:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:15.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:52:15 compute-2 ceph-mon[77282]: pgmap v1649: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 767 B/s wr, 17 op/s
Jan 31 07:52:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3392995350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:52:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4102396379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:52:15 compute-2 nova_compute[226829]: 2026-01-31 07:52:15.340 226833 DEBUG nova.compute.manager [req-4fce8229-df96-4f39-8cab-c01594e2987f req-b1e76cff-42bd-45bc-bb52-42c936dcc8f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Received event network-vif-deleted-ca7b65d5-b18e-4cc6-a3f3-eb754739df9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:15 compute-2 nova_compute[226829]: 2026-01-31 07:52:15.341 226833 INFO nova.compute.manager [req-4fce8229-df96-4f39-8cab-c01594e2987f req-b1e76cff-42bd-45bc-bb52-42c936dcc8f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Neutron deleted interface ca7b65d5-b18e-4cc6-a3f3-eb754739df9a; detaching it from the instance and deleting it from the info cache
Jan 31 07:52:15 compute-2 nova_compute[226829]: 2026-01-31 07:52:15.342 226833 DEBUG nova.network.neutron [req-4fce8229-df96-4f39-8cab-c01594e2987f req-b1e76cff-42bd-45bc-bb52-42c936dcc8f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:52:15 compute-2 nova_compute[226829]: 2026-01-31 07:52:15.346 226833 DEBUG nova.network.neutron [-] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:52:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:52:15 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/53417800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:52:15 compute-2 nova_compute[226829]: 2026-01-31 07:52:15.565 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:52:15 compute-2 nova_compute[226829]: 2026-01-31 07:52:15.569 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:52:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:15.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:15 compute-2 sudo[257793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:52:15 compute-2 sudo[257793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:15 compute-2 sudo[257793]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:15 compute-2 sudo[257818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:52:15 compute-2 sudo[257818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:15 compute-2 sudo[257818]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:15 compute-2 nova_compute[226829]: 2026-01-31 07:52:15.923 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:52:16 compute-2 nova_compute[226829]: 2026-01-31 07:52:16.013 226833 INFO nova.compute.manager [-] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Took 5.26 seconds to deallocate network for instance.
Jan 31 07:52:16 compute-2 nova_compute[226829]: 2026-01-31 07:52:16.028 226833 DEBUG nova.compute.manager [req-4fce8229-df96-4f39-8cab-c01594e2987f req-b1e76cff-42bd-45bc-bb52-42c936dcc8f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba] Detach interface failed, port_id=ca7b65d5-b18e-4cc6-a3f3-eb754739df9a, reason: Instance 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 07:52:16 compute-2 nova_compute[226829]: 2026-01-31 07:52:16.122 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:52:16 compute-2 nova_compute[226829]: 2026-01-31 07:52:16.122 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:16 compute-2 nova_compute[226829]: 2026-01-31 07:52:16.221 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:16 compute-2 nova_compute[226829]: 2026-01-31 07:52:16.289 226833 DEBUG oslo_concurrency.lockutils [None req-192884fc-df89-41d4-881d-d4f2d3fc2595 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:16 compute-2 nova_compute[226829]: 2026-01-31 07:52:16.290 226833 DEBUG oslo_concurrency.lockutils [None req-192884fc-df89-41d4-881d-d4f2d3fc2595 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:16 compute-2 nova_compute[226829]: 2026-01-31 07:52:16.344 226833 DEBUG oslo_concurrency.processutils [None req-192884fc-df89-41d4-881d-d4f2d3fc2595 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:52:16 compute-2 ceph-mon[77282]: pgmap v1650: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 1.1 KiB/s wr, 20 op/s
Jan 31 07:52:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/53417800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:52:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:52:16 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1494411292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:52:16 compute-2 nova_compute[226829]: 2026-01-31 07:52:16.796 226833 DEBUG oslo_concurrency.processutils [None req-192884fc-df89-41d4-881d-d4f2d3fc2595 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:52:16 compute-2 nova_compute[226829]: 2026-01-31 07:52:16.801 226833 DEBUG nova.compute.provider_tree [None req-192884fc-df89-41d4-881d-d4f2d3fc2595 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:52:16 compute-2 nova_compute[226829]: 2026-01-31 07:52:16.845 226833 DEBUG nova.scheduler.client.report [None req-192884fc-df89-41d4-881d-d4f2d3fc2595 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:52:16 compute-2 nova_compute[226829]: 2026-01-31 07:52:16.884 226833 DEBUG oslo_concurrency.lockutils [None req-192884fc-df89-41d4-881d-d4f2d3fc2595 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:16 compute-2 nova_compute[226829]: 2026-01-31 07:52:16.914 226833 INFO nova.scheduler.client.report [None req-192884fc-df89-41d4-881d-d4f2d3fc2595 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Deleted allocations for instance 3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba
Jan 31 07:52:17 compute-2 nova_compute[226829]: 2026-01-31 07:52:17.006 226833 DEBUG oslo_concurrency.lockutils [None req-192884fc-df89-41d4-881d-d4f2d3fc2595 da7f93fef8fa4e0d8682702e040a7476 c8a037791bb04bafaec8d4639d3907ae - - default default] Lock "3a82c4c6-bbe7-47ab-8cad-b31ce179b9ba" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 22.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:17.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:17.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1494411292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:52:18 compute-2 ceph-mon[77282]: pgmap v1651: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.1 KiB/s wr, 20 op/s
Jan 31 07:52:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:52:19 compute-2 nova_compute[226829]: 2026-01-31 07:52:19.123 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:52:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:19.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:19 compute-2 nova_compute[226829]: 2026-01-31 07:52:19.403 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:52:19 compute-2 nova_compute[226829]: 2026-01-31 07:52:19.403 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:52:19 compute-2 nova_compute[226829]: 2026-01-31 07:52:19.403 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:52:19 compute-2 nova_compute[226829]: 2026-01-31 07:52:19.602 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:19.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:20 compute-2 ceph-mon[77282]: pgmap v1652: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 07:52:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3340991667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:52:21 compute-2 nova_compute[226829]: 2026-01-31 07:52:21.224 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:21.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:21.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:21 compute-2 ceph-mon[77282]: pgmap v1653: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 22 op/s
Jan 31 07:52:22 compute-2 sudo[257868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:52:22 compute-2 sudo[257868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:22 compute-2 sudo[257868]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:22 compute-2 sudo[257900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:52:22 compute-2 sudo[257900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:22 compute-2 sudo[257900]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:22 compute-2 podman[257892]: 2026-01-31 07:52:22.707967345 +0000 UTC m=+0.086491553 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 31 07:52:22 compute-2 sudo[257938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:52:22 compute-2 sudo[257938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:22 compute-2 sudo[257938]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:22 compute-2 sudo[257970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:52:22 compute-2 sudo[257970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:23 compute-2 sudo[257970]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:52:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:23.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:52:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:52:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:23.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:52:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:52:24 compute-2 ceph-mon[77282]: pgmap v1654: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 16 KiB/s wr, 14 op/s
Jan 31 07:52:24 compute-2 nova_compute[226829]: 2026-01-31 07:52:24.605 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:24 compute-2 nova_compute[226829]: 2026-01-31 07:52:24.916 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:25 compute-2 nova_compute[226829]: 2026-01-31 07:52:25.025 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:52:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:25.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:52:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:25.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:25 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:52:25 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:52:25 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:52:25 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:52:26 compute-2 nova_compute[226829]: 2026-01-31 07:52:26.227 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:52:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:27.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:52:27 compute-2 ceph-mon[77282]: pgmap v1655: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 16 KiB/s wr, 20 op/s
Jan 31 07:52:27 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:52:27 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:52:27 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:52:27 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:52:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:27.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:28 compute-2 ceph-mon[77282]: pgmap v1656: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 16 KiB/s wr, 32 op/s
Jan 31 07:52:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:52:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:29.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:29 compute-2 nova_compute[226829]: 2026-01-31 07:52:29.607 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:29.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:30 compute-2 ceph-mon[77282]: pgmap v1657: 305 pgs: 305 active+clean; 98 MiB data, 625 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 729 KiB/s wr, 80 op/s
Jan 31 07:52:30 compute-2 nova_compute[226829]: 2026-01-31 07:52:30.995 226833 DEBUG oslo_concurrency.lockutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "66066b76-4c92-4b20-ba23-c3002693dc10" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:30 compute-2 nova_compute[226829]: 2026-01-31 07:52:30.996 226833 DEBUG oslo_concurrency.lockutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:31 compute-2 nova_compute[226829]: 2026-01-31 07:52:31.036 226833 DEBUG nova.compute.manager [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:52:31 compute-2 nova_compute[226829]: 2026-01-31 07:52:31.172 226833 DEBUG oslo_concurrency.lockutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:31 compute-2 nova_compute[226829]: 2026-01-31 07:52:31.173 226833 DEBUG oslo_concurrency.lockutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:31 compute-2 nova_compute[226829]: 2026-01-31 07:52:31.180 226833 DEBUG nova.virt.hardware [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:52:31 compute-2 nova_compute[226829]: 2026-01-31 07:52:31.180 226833 INFO nova.compute.claims [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:52:31 compute-2 nova_compute[226829]: 2026-01-31 07:52:31.230 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:31 compute-2 nova_compute[226829]: 2026-01-31 07:52:31.236 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:31.237 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:52:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:31.239 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:52:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:31.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:31 compute-2 nova_compute[226829]: 2026-01-31 07:52:31.354 226833 DEBUG oslo_concurrency.processutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:52:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:52:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:31.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:52:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:52:31 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2436687015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:52:31 compute-2 nova_compute[226829]: 2026-01-31 07:52:31.764 226833 DEBUG oslo_concurrency.processutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:52:31 compute-2 nova_compute[226829]: 2026-01-31 07:52:31.771 226833 DEBUG nova.compute.provider_tree [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:52:31 compute-2 nova_compute[226829]: 2026-01-31 07:52:31.798 226833 DEBUG nova.scheduler.client.report [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:52:31 compute-2 nova_compute[226829]: 2026-01-31 07:52:31.830 226833 DEBUG oslo_concurrency.lockutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:31 compute-2 nova_compute[226829]: 2026-01-31 07:52:31.831 226833 DEBUG nova.compute.manager [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:52:31 compute-2 nova_compute[226829]: 2026-01-31 07:52:31.904 226833 DEBUG nova.compute.manager [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:52:31 compute-2 nova_compute[226829]: 2026-01-31 07:52:31.904 226833 DEBUG nova.network.neutron [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:52:31 compute-2 nova_compute[226829]: 2026-01-31 07:52:31.928 226833 INFO nova.virt.libvirt.driver [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:52:31 compute-2 nova_compute[226829]: 2026-01-31 07:52:31.946 226833 DEBUG nova.compute.manager [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:52:32 compute-2 nova_compute[226829]: 2026-01-31 07:52:32.056 226833 DEBUG nova.compute.manager [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:52:32 compute-2 nova_compute[226829]: 2026-01-31 07:52:32.057 226833 DEBUG nova.virt.libvirt.driver [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:52:32 compute-2 nova_compute[226829]: 2026-01-31 07:52:32.057 226833 INFO nova.virt.libvirt.driver [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Creating image(s)
Jan 31 07:52:32 compute-2 nova_compute[226829]: 2026-01-31 07:52:32.081 226833 DEBUG nova.storage.rbd_utils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] rbd image 66066b76-4c92-4b20-ba23-c3002693dc10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:52:32 compute-2 nova_compute[226829]: 2026-01-31 07:52:32.107 226833 DEBUG nova.storage.rbd_utils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] rbd image 66066b76-4c92-4b20-ba23-c3002693dc10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:52:32 compute-2 nova_compute[226829]: 2026-01-31 07:52:32.131 226833 DEBUG nova.storage.rbd_utils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] rbd image 66066b76-4c92-4b20-ba23-c3002693dc10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:52:32 compute-2 nova_compute[226829]: 2026-01-31 07:52:32.134 226833 DEBUG oslo_concurrency.processutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:52:32 compute-2 nova_compute[226829]: 2026-01-31 07:52:32.194 226833 DEBUG oslo_concurrency.processutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:52:32 compute-2 nova_compute[226829]: 2026-01-31 07:52:32.195 226833 DEBUG oslo_concurrency.lockutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:32 compute-2 nova_compute[226829]: 2026-01-31 07:52:32.195 226833 DEBUG oslo_concurrency.lockutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:32 compute-2 nova_compute[226829]: 2026-01-31 07:52:32.196 226833 DEBUG oslo_concurrency.lockutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:32 compute-2 nova_compute[226829]: 2026-01-31 07:52:32.217 226833 DEBUG nova.storage.rbd_utils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] rbd image 66066b76-4c92-4b20-ba23-c3002693dc10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:52:32 compute-2 nova_compute[226829]: 2026-01-31 07:52:32.220 226833 DEBUG oslo_concurrency.processutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 66066b76-4c92-4b20-ba23-c3002693dc10_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:52:32 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:32.243 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:32 compute-2 ceph-mon[77282]: pgmap v1658: 305 pgs: 305 active+clean; 115 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 109 op/s
Jan 31 07:52:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2436687015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:52:32 compute-2 nova_compute[226829]: 2026-01-31 07:52:32.432 226833 DEBUG nova.policy [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1e1c5eef3d264666bc90735dd338d82a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a8cbd6cc22654dfab04487522a63426c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:52:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:33.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:33 compute-2 nova_compute[226829]: 2026-01-31 07:52:33.328 226833 DEBUG oslo_concurrency.processutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 66066b76-4c92-4b20-ba23-c3002693dc10_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:52:33 compute-2 nova_compute[226829]: 2026-01-31 07:52:33.407 226833 DEBUG nova.storage.rbd_utils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] resizing rbd image 66066b76-4c92-4b20-ba23-c3002693dc10_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:52:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:33.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:33 compute-2 nova_compute[226829]: 2026-01-31 07:52:33.765 226833 DEBUG nova.objects.instance [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lazy-loading 'migration_context' on Instance uuid 66066b76-4c92-4b20-ba23-c3002693dc10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:52:33 compute-2 nova_compute[226829]: 2026-01-31 07:52:33.804 226833 DEBUG nova.virt.libvirt.driver [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:52:33 compute-2 nova_compute[226829]: 2026-01-31 07:52:33.805 226833 DEBUG nova.virt.libvirt.driver [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Ensure instance console log exists: /var/lib/nova/instances/66066b76-4c92-4b20-ba23-c3002693dc10/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:52:33 compute-2 nova_compute[226829]: 2026-01-31 07:52:33.805 226833 DEBUG oslo_concurrency.lockutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:33 compute-2 nova_compute[226829]: 2026-01-31 07:52:33.806 226833 DEBUG oslo_concurrency.lockutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:33 compute-2 nova_compute[226829]: 2026-01-31 07:52:33.806 226833 DEBUG oslo_concurrency.lockutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:52:33 compute-2 nova_compute[226829]: 2026-01-31 07:52:33.982 226833 DEBUG nova.network.neutron [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Successfully created port: db982ab1-0c3e-4386-804d-c70f4b91053a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:52:34 compute-2 ceph-mgr[77635]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3465938080
Jan 31 07:52:34 compute-2 nova_compute[226829]: 2026-01-31 07:52:34.609 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:34 compute-2 ceph-mon[77282]: pgmap v1659: 305 pgs: 305 active+clean; 134 MiB data, 638 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Jan 31 07:52:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:35.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:52:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:35.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:52:35 compute-2 ceph-mon[77282]: pgmap v1660: 305 pgs: 305 active+clean; 162 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.0 MiB/s wr, 124 op/s
Jan 31 07:52:35 compute-2 sudo[258224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:52:35 compute-2 sudo[258224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:35 compute-2 sudo[258224]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:36 compute-2 sudo[258255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:52:36 compute-2 sudo[258255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:36 compute-2 sudo[258255]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:36 compute-2 podman[258248]: 2026-01-31 07:52:36.042931482 +0000 UTC m=+0.068588649 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent)
Jan 31 07:52:36 compute-2 nova_compute[226829]: 2026-01-31 07:52:36.233 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0.
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:36.822807) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845956823496, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2185, "num_deletes": 257, "total_data_size": 5074291, "memory_usage": 5142480, "flush_reason": "Manual Compaction"}
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845956846567, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 3324359, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36945, "largest_seqno": 39125, "table_properties": {"data_size": 3315458, "index_size": 5459, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19024, "raw_average_key_size": 20, "raw_value_size": 3297377, "raw_average_value_size": 3553, "num_data_blocks": 238, "num_entries": 928, "num_filter_entries": 928, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845752, "oldest_key_time": 1769845752, "file_creation_time": 1769845956, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 23858 microseconds, and 13717 cpu microseconds.
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:36.846661) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 3324359 bytes OK
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:36.846687) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:36.848566) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:36.848590) EVENT_LOG_v1 {"time_micros": 1769845956848582, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:36.848615) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 5064631, prev total WAL file size 5064631, number of live WAL files 2.
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:36.849930) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303035' seq:72057594037927935, type:22 .. '6C6F676D0031323536' seq:0, type:0; will stop at (end)
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(3246KB)], [69(8359KB)]
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845956850072, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 11884662, "oldest_snapshot_seqno": -1}
Jan 31 07:52:36 compute-2 nova_compute[226829]: 2026-01-31 07:52:36.872 226833 DEBUG nova.network.neutron [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Successfully updated port: db982ab1-0c3e-4386-804d-c70f4b91053a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:52:36 compute-2 nova_compute[226829]: 2026-01-31 07:52:36.890 226833 DEBUG oslo_concurrency.lockutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:52:36 compute-2 nova_compute[226829]: 2026-01-31 07:52:36.890 226833 DEBUG oslo_concurrency.lockutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquired lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:52:36 compute-2 nova_compute[226829]: 2026-01-31 07:52:36.891 226833 DEBUG nova.network.neutron [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 6527 keys, 11725120 bytes, temperature: kUnknown
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845956956814, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 11725120, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11679196, "index_size": 28534, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 167115, "raw_average_key_size": 25, "raw_value_size": 11559826, "raw_average_value_size": 1771, "num_data_blocks": 1149, "num_entries": 6527, "num_filter_entries": 6527, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769845956, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:36.957123) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 11725120 bytes
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:36.958503) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.2 rd, 109.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.2 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(7.1) write-amplify(3.5) OK, records in: 7063, records dropped: 536 output_compression: NoCompression
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:36.958566) EVENT_LOG_v1 {"time_micros": 1769845956958525, "job": 42, "event": "compaction_finished", "compaction_time_micros": 106829, "compaction_time_cpu_micros": 51844, "output_level": 6, "num_output_files": 1, "total_output_size": 11725120, "num_input_records": 7063, "num_output_records": 6527, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845956959641, "job": 42, "event": "table_file_deletion", "file_number": 71}
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845956961829, "job": 42, "event": "table_file_deletion", "file_number": 69}
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:36.849816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:36.961925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:36.961932) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:36.961935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:36.961938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:52:36 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:36.961941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:52:36 compute-2 nova_compute[226829]: 2026-01-31 07:52:36.984 226833 DEBUG nova.compute.manager [req-8ae6f756-d07c-470e-853d-566b8e5f372b req-ff949c6b-bdc9-4774-96ad-4df9d2eaff20 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received event network-changed-db982ab1-0c3e-4386-804d-c70f4b91053a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:36 compute-2 nova_compute[226829]: 2026-01-31 07:52:36.985 226833 DEBUG nova.compute.manager [req-8ae6f756-d07c-470e-853d-566b8e5f372b req-ff949c6b-bdc9-4774-96ad-4df9d2eaff20 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Refreshing instance network info cache due to event network-changed-db982ab1-0c3e-4386-804d-c70f4b91053a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:52:36 compute-2 nova_compute[226829]: 2026-01-31 07:52:36.985 226833 DEBUG oslo_concurrency.lockutils [req-8ae6f756-d07c-470e-853d-566b8e5f372b req-ff949c6b-bdc9-4774-96ad-4df9d2eaff20 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:52:37 compute-2 nova_compute[226829]: 2026-01-31 07:52:37.040 226833 DEBUG nova.network.neutron [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:52:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:37.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:37 compute-2 sudo[258293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:52:37 compute-2 sudo[258293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:37 compute-2 sudo[258293]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:37.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:37 compute-2 sudo[258318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:52:37 compute-2 sudo[258318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:37 compute-2 sudo[258318]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:38 compute-2 nova_compute[226829]: 2026-01-31 07:52:38.140 226833 DEBUG nova.network.neutron [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Updating instance_info_cache with network_info: [{"id": "db982ab1-0c3e-4386-804d-c70f4b91053a", "address": "fa:16:3e:97:6f:85", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb982ab1-0c", "ovs_interfaceid": "db982ab1-0c3e-4386-804d-c70f4b91053a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:52:38 compute-2 nova_compute[226829]: 2026-01-31 07:52:38.160 226833 DEBUG oslo_concurrency.lockutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Releasing lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:52:38 compute-2 nova_compute[226829]: 2026-01-31 07:52:38.160 226833 DEBUG nova.compute.manager [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Instance network_info: |[{"id": "db982ab1-0c3e-4386-804d-c70f4b91053a", "address": "fa:16:3e:97:6f:85", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb982ab1-0c", "ovs_interfaceid": "db982ab1-0c3e-4386-804d-c70f4b91053a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:52:38 compute-2 nova_compute[226829]: 2026-01-31 07:52:38.161 226833 DEBUG oslo_concurrency.lockutils [req-8ae6f756-d07c-470e-853d-566b8e5f372b req-ff949c6b-bdc9-4774-96ad-4df9d2eaff20 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:52:38 compute-2 nova_compute[226829]: 2026-01-31 07:52:38.161 226833 DEBUG nova.network.neutron [req-8ae6f756-d07c-470e-853d-566b8e5f372b req-ff949c6b-bdc9-4774-96ad-4df9d2eaff20 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Refreshing network info cache for port db982ab1-0c3e-4386-804d-c70f4b91053a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:52:38 compute-2 nova_compute[226829]: 2026-01-31 07:52:38.163 226833 DEBUG nova.virt.libvirt.driver [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Start _get_guest_xml network_info=[{"id": "db982ab1-0c3e-4386-804d-c70f4b91053a", "address": "fa:16:3e:97:6f:85", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb982ab1-0c", "ovs_interfaceid": "db982ab1-0c3e-4386-804d-c70f4b91053a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:52:38 compute-2 nova_compute[226829]: 2026-01-31 07:52:38.168 226833 WARNING nova.virt.libvirt.driver [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:52:38 compute-2 nova_compute[226829]: 2026-01-31 07:52:38.172 226833 DEBUG nova.virt.libvirt.host [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:52:38 compute-2 nova_compute[226829]: 2026-01-31 07:52:38.173 226833 DEBUG nova.virt.libvirt.host [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:52:38 compute-2 nova_compute[226829]: 2026-01-31 07:52:38.178 226833 DEBUG nova.virt.libvirt.host [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:52:38 compute-2 nova_compute[226829]: 2026-01-31 07:52:38.178 226833 DEBUG nova.virt.libvirt.host [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:52:38 compute-2 nova_compute[226829]: 2026-01-31 07:52:38.179 226833 DEBUG nova.virt.libvirt.driver [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:52:38 compute-2 nova_compute[226829]: 2026-01-31 07:52:38.179 226833 DEBUG nova.virt.hardware [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:52:38 compute-2 nova_compute[226829]: 2026-01-31 07:52:38.180 226833 DEBUG nova.virt.hardware [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:52:38 compute-2 nova_compute[226829]: 2026-01-31 07:52:38.180 226833 DEBUG nova.virt.hardware [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:52:38 compute-2 nova_compute[226829]: 2026-01-31 07:52:38.180 226833 DEBUG nova.virt.hardware [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:52:38 compute-2 nova_compute[226829]: 2026-01-31 07:52:38.180 226833 DEBUG nova.virt.hardware [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:52:38 compute-2 nova_compute[226829]: 2026-01-31 07:52:38.181 226833 DEBUG nova.virt.hardware [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:52:38 compute-2 nova_compute[226829]: 2026-01-31 07:52:38.181 226833 DEBUG nova.virt.hardware [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:52:38 compute-2 nova_compute[226829]: 2026-01-31 07:52:38.181 226833 DEBUG nova.virt.hardware [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:52:38 compute-2 nova_compute[226829]: 2026-01-31 07:52:38.181 226833 DEBUG nova.virt.hardware [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:52:38 compute-2 nova_compute[226829]: 2026-01-31 07:52:38.181 226833 DEBUG nova.virt.hardware [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:52:38 compute-2 nova_compute[226829]: 2026-01-31 07:52:38.182 226833 DEBUG nova.virt.hardware [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:52:38 compute-2 nova_compute[226829]: 2026-01-31 07:52:38.185 226833 DEBUG oslo_concurrency.processutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:52:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:52:38 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2918745298' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:52:38 compute-2 nova_compute[226829]: 2026-01-31 07:52:38.678 226833 DEBUG oslo_concurrency.processutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:52:38 compute-2 nova_compute[226829]: 2026-01-31 07:52:38.707 226833 DEBUG nova.storage.rbd_utils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] rbd image 66066b76-4c92-4b20-ba23-c3002693dc10_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:52:38 compute-2 nova_compute[226829]: 2026-01-31 07:52:38.711 226833 DEBUG oslo_concurrency.processutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:52:38 compute-2 ceph-mon[77282]: pgmap v1661: 305 pgs: 305 active+clean; 173 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.4 MiB/s wr, 129 op/s
Jan 31 07:52:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:52:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:52:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1357689423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:52:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:52:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:52:39 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4230265387' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.143 226833 DEBUG oslo_concurrency.processutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.145 226833 DEBUG nova.virt.libvirt.vif [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:52:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-867555935',display_name='tempest-AttachInterfacesTestJSON-server-867555935',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-867555935',id=73,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPX6GTdGdGcfJav9VD2vKr9cDmWhHD/qSDkhR2o41JGZ+VcYlHVD/4Hcc3zTsC+yiuWl8z4VSKy7Wi9c75E6Y4p1/iIqUHnIU8ZYXlvl82jhWPIfK08UcCczSxHzDyzJ+g==',key_name='tempest-keypair-461427290',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-tpxrd0ai',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:52:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=66066b76-4c92-4b20-ba23-c3002693dc10,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "db982ab1-0c3e-4386-804d-c70f4b91053a", "address": "fa:16:3e:97:6f:85", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb982ab1-0c", "ovs_interfaceid": "db982ab1-0c3e-4386-804d-c70f4b91053a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.145 226833 DEBUG nova.network.os_vif_util [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "db982ab1-0c3e-4386-804d-c70f4b91053a", "address": "fa:16:3e:97:6f:85", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb982ab1-0c", "ovs_interfaceid": "db982ab1-0c3e-4386-804d-c70f4b91053a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.146 226833 DEBUG nova.network.os_vif_util [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:6f:85,bridge_name='br-int',has_traffic_filtering=True,id=db982ab1-0c3e-4386-804d-c70f4b91053a,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb982ab1-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.148 226833 DEBUG nova.objects.instance [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lazy-loading 'pci_devices' on Instance uuid 66066b76-4c92-4b20-ba23-c3002693dc10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.169 226833 DEBUG nova.virt.libvirt.driver [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:52:39 compute-2 nova_compute[226829]:   <uuid>66066b76-4c92-4b20-ba23-c3002693dc10</uuid>
Jan 31 07:52:39 compute-2 nova_compute[226829]:   <name>instance-00000049</name>
Jan 31 07:52:39 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:52:39 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:52:39 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <nova:name>tempest-AttachInterfacesTestJSON-server-867555935</nova:name>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:52:38</nova:creationTime>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:52:39 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:52:39 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:52:39 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:52:39 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:52:39 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:52:39 compute-2 nova_compute[226829]:         <nova:user uuid="1e1c5eef3d264666bc90735dd338d82a">tempest-AttachInterfacesTestJSON-668371425-project-member</nova:user>
Jan 31 07:52:39 compute-2 nova_compute[226829]:         <nova:project uuid="a8cbd6cc22654dfab04487522a63426c">tempest-AttachInterfacesTestJSON-668371425</nova:project>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 07:52:39 compute-2 nova_compute[226829]:         <nova:port uuid="db982ab1-0c3e-4386-804d-c70f4b91053a">
Jan 31 07:52:39 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:52:39 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:52:39 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <system>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <entry name="serial">66066b76-4c92-4b20-ba23-c3002693dc10</entry>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <entry name="uuid">66066b76-4c92-4b20-ba23-c3002693dc10</entry>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     </system>
Jan 31 07:52:39 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:52:39 compute-2 nova_compute[226829]:   <os>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:   </os>
Jan 31 07:52:39 compute-2 nova_compute[226829]:   <features>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:   </features>
Jan 31 07:52:39 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:52:39 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:52:39 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/66066b76-4c92-4b20-ba23-c3002693dc10_disk">
Jan 31 07:52:39 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       </source>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:52:39 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/66066b76-4c92-4b20-ba23-c3002693dc10_disk.config">
Jan 31 07:52:39 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       </source>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:52:39 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:97:6f:85"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <target dev="tapdb982ab1-0c"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/66066b76-4c92-4b20-ba23-c3002693dc10/console.log" append="off"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <video>
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     </video>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:52:39 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:52:39 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:52:39 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:52:39 compute-2 nova_compute[226829]: </domain>
Jan 31 07:52:39 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.171 226833 DEBUG nova.compute.manager [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Preparing to wait for external event network-vif-plugged-db982ab1-0c3e-4386-804d-c70f4b91053a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.171 226833 DEBUG oslo_concurrency.lockutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.171 226833 DEBUG oslo_concurrency.lockutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.172 226833 DEBUG oslo_concurrency.lockutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.173 226833 DEBUG nova.virt.libvirt.vif [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:52:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-867555935',display_name='tempest-AttachInterfacesTestJSON-server-867555935',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-867555935',id=73,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPX6GTdGdGcfJav9VD2vKr9cDmWhHD/qSDkhR2o41JGZ+VcYlHVD/4Hcc3zTsC+yiuWl8z4VSKy7Wi9c75E6Y4p1/iIqUHnIU8ZYXlvl82jhWPIfK08UcCczSxHzDyzJ+g==',key_name='tempest-keypair-461427290',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-tpxrd0ai',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:52:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=66066b76-4c92-4b20-ba23-c3002693dc10,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "db982ab1-0c3e-4386-804d-c70f4b91053a", "address": "fa:16:3e:97:6f:85", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb982ab1-0c", "ovs_interfaceid": "db982ab1-0c3e-4386-804d-c70f4b91053a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.173 226833 DEBUG nova.network.os_vif_util [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "db982ab1-0c3e-4386-804d-c70f4b91053a", "address": "fa:16:3e:97:6f:85", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb982ab1-0c", "ovs_interfaceid": "db982ab1-0c3e-4386-804d-c70f4b91053a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.174 226833 DEBUG nova.network.os_vif_util [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:6f:85,bridge_name='br-int',has_traffic_filtering=True,id=db982ab1-0c3e-4386-804d-c70f4b91053a,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb982ab1-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.174 226833 DEBUG os_vif [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:6f:85,bridge_name='br-int',has_traffic_filtering=True,id=db982ab1-0c3e-4386-804d-c70f4b91053a,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb982ab1-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.175 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.175 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.176 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.183 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.184 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb982ab1-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.185 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdb982ab1-0c, col_values=(('external_ids', {'iface-id': 'db982ab1-0c3e-4386-804d-c70f4b91053a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:97:6f:85', 'vm-uuid': '66066b76-4c92-4b20-ba23-c3002693dc10'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.186 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:39 compute-2 NetworkManager[48999]: <info>  [1769845959.1875] manager: (tapdb982ab1-0c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.191 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.194 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.197 226833 INFO os_vif [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:6f:85,bridge_name='br-int',has_traffic_filtering=True,id=db982ab1-0c3e-4386-804d-c70f4b91053a,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb982ab1-0c')
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.249 226833 DEBUG nova.virt.libvirt.driver [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.249 226833 DEBUG nova.virt.libvirt.driver [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.250 226833 DEBUG nova.virt.libvirt.driver [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No VIF found with MAC fa:16:3e:97:6f:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.251 226833 INFO nova.virt.libvirt.driver [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Using config drive
Jan 31 07:52:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:39.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.283 226833 DEBUG nova.storage.rbd_utils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] rbd image 66066b76-4c92-4b20-ba23-c3002693dc10_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.597 226833 INFO nova.virt.libvirt.driver [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Creating config drive at /var/lib/nova/instances/66066b76-4c92-4b20-ba23-c3002693dc10/disk.config
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.601 226833 DEBUG oslo_concurrency.processutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/66066b76-4c92-4b20-ba23-c3002693dc10/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbnbr9nkl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.627 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.736 226833 DEBUG oslo_concurrency.processutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/66066b76-4c92-4b20-ba23-c3002693dc10/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbnbr9nkl" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:52:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:39.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.766 226833 DEBUG nova.storage.rbd_utils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] rbd image 66066b76-4c92-4b20-ba23-c3002693dc10_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.770 226833 DEBUG oslo_concurrency.processutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/66066b76-4c92-4b20-ba23-c3002693dc10/disk.config 66066b76-4c92-4b20-ba23-c3002693dc10_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:52:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2918745298' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:52:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2684906466' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:52:39 compute-2 ceph-mon[77282]: pgmap v1662: 305 pgs: 305 active+clean; 180 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 116 op/s
Jan 31 07:52:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4230265387' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.963 226833 DEBUG oslo_concurrency.processutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/66066b76-4c92-4b20-ba23-c3002693dc10/disk.config 66066b76-4c92-4b20-ba23-c3002693dc10_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:52:39 compute-2 nova_compute[226829]: 2026-01-31 07:52:39.964 226833 INFO nova.virt.libvirt.driver [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Deleting local config drive /var/lib/nova/instances/66066b76-4c92-4b20-ba23-c3002693dc10/disk.config because it was imported into RBD.
Jan 31 07:52:40 compute-2 kernel: tapdb982ab1-0c: entered promiscuous mode
Jan 31 07:52:40 compute-2 NetworkManager[48999]: <info>  [1769845960.0235] manager: (tapdb982ab1-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/112)
Jan 31 07:52:40 compute-2 ovn_controller[133834]: 2026-01-31T07:52:40Z|00199|binding|INFO|Claiming lport db982ab1-0c3e-4386-804d-c70f4b91053a for this chassis.
Jan 31 07:52:40 compute-2 ovn_controller[133834]: 2026-01-31T07:52:40Z|00200|binding|INFO|db982ab1-0c3e-4386-804d-c70f4b91053a: Claiming fa:16:3e:97:6f:85 10.100.0.10
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.022 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.026 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.030 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:40 compute-2 systemd-udevd[258475]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:52:40 compute-2 ovn_controller[133834]: 2026-01-31T07:52:40Z|00201|binding|INFO|Setting lport db982ab1-0c3e-4386-804d-c70f4b91053a ovn-installed in OVS
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.070 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:40 compute-2 NetworkManager[48999]: <info>  [1769845960.0836] device (tapdb982ab1-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:52:40 compute-2 NetworkManager[48999]: <info>  [1769845960.0848] device (tapdb982ab1-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:52:40 compute-2 ovn_controller[133834]: 2026-01-31T07:52:40Z|00202|binding|INFO|Setting lport db982ab1-0c3e-4386-804d-c70f4b91053a up in Southbound
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:40.129 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:6f:85 10.100.0.10'], port_security=['fa:16:3e:97:6f:85 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '66066b76-4c92-4b20-ba23-c3002693dc10', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-485494d9-5360-41c3-a10e-ef5098af0809', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8cbd6cc22654dfab04487522a63426c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45f21322-780d-4bfb-8db8-cc783d02479e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44651228-c761-41fc-b495-d4156af21548, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=db982ab1-0c3e-4386-804d-c70f4b91053a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:40.131 143841 INFO neutron.agent.ovn.metadata.agent [-] Port db982ab1-0c3e-4386-804d-c70f4b91053a in datapath 485494d9-5360-41c3-a10e-ef5098af0809 bound to our chassis
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:40.135 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 485494d9-5360-41c3-a10e-ef5098af0809
Jan 31 07:52:40 compute-2 systemd-machined[195142]: New machine qemu-31-instance-00000049.
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:40.149 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[da846c64-4c37-419d-b661-e297ebd0e155]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:40.151 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap485494d9-51 in ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:40.155 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap485494d9-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:40.155 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d1a8f324-cd24-486e-95f9-8da051760b03]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:40.157 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[fc634655-3bb8-4db8-b8fa-000575daaa07]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:40 compute-2 systemd[1]: Started Virtual Machine qemu-31-instance-00000049.
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:40.166 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[b629196f-a8ad-4e1b-aa9f-0a28bbe2b96f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:40.178 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[13f31533-69d7-4f4a-885b-ed8f2b38fdc6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:40.207 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[76c22e64-bc14-4e13-95e8-4404c63ed50d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:40 compute-2 NetworkManager[48999]: <info>  [1769845960.2159] manager: (tap485494d9-50): new Veth device (/org/freedesktop/NetworkManager/Devices/113)
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:40.214 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a38e5842-b114-4b7a-8777-e6d4858933da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:40.244 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[97ff2d12-4a5e-47ac-81dc-8f0c27070c61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:40.248 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[cfe81845-9bea-4ae3-965a-f1d62431e7cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:40 compute-2 NetworkManager[48999]: <info>  [1769845960.2686] device (tap485494d9-50): carrier: link connected
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:40.273 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff7f8a6-b331-4e19-9303-438c127cd108]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:40.287 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[fd47f93d-31b4-427c-9f1b-8728005b96d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap485494d9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:4b:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621274, 'reachable_time': 37963, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258511, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:40.298 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3ca24832-5420-4891-9f68-32d03eefba2b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:4b05'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 621274, 'tstamp': 621274}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258512, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:40.316 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e3361acc-3e16-4ea4-8315-007c3dfe2581]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap485494d9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:4b:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621274, 'reachable_time': 37963, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258513, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:40.338 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[562de10f-350e-426f-9c48-48fd2c2bd4ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:40.386 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[577d1449-8193-4394-93c2-f258296beed1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:40.388 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap485494d9-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:40.388 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:40.389 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap485494d9-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:40 compute-2 NetworkManager[48999]: <info>  [1769845960.3921] manager: (tap485494d9-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.391 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:40 compute-2 kernel: tap485494d9-50: entered promiscuous mode
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:40.395 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap485494d9-50, col_values=(('external_ids', {'iface-id': 'cbb039b2-15df-45bd-8800-81213ecc7011'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:40 compute-2 ovn_controller[133834]: 2026-01-31T07:52:40Z|00203|binding|INFO|Releasing lport cbb039b2-15df-45bd-8800-81213ecc7011 from this chassis (sb_readonly=0)
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.403 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:40.405 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/485494d9-5360-41c3-a10e-ef5098af0809.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/485494d9-5360-41c3-a10e-ef5098af0809.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:40.406 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0d129b6e-45b4-4404-89b8-146120ea1465]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:40.407 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: global
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-485494d9-5360-41c3-a10e-ef5098af0809
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/485494d9-5360-41c3-a10e-ef5098af0809.pid.haproxy
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 485494d9-5360-41c3-a10e-ef5098af0809
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:52:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:52:40.408 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'env', 'PROCESS_TAG=haproxy-485494d9-5360-41c3-a10e-ef5098af0809', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/485494d9-5360-41c3-a10e-ef5098af0809.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.457 226833 DEBUG nova.compute.manager [req-680d1772-5bd5-43a6-94c5-b45af4c2e03c req-9b6bc918-f0f0-480f-93fe-3fa173deaee4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received event network-vif-plugged-db982ab1-0c3e-4386-804d-c70f4b91053a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.458 226833 DEBUG oslo_concurrency.lockutils [req-680d1772-5bd5-43a6-94c5-b45af4c2e03c req-9b6bc918-f0f0-480f-93fe-3fa173deaee4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.458 226833 DEBUG oslo_concurrency.lockutils [req-680d1772-5bd5-43a6-94c5-b45af4c2e03c req-9b6bc918-f0f0-480f-93fe-3fa173deaee4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.458 226833 DEBUG oslo_concurrency.lockutils [req-680d1772-5bd5-43a6-94c5-b45af4c2e03c req-9b6bc918-f0f0-480f-93fe-3fa173deaee4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.458 226833 DEBUG nova.compute.manager [req-680d1772-5bd5-43a6-94c5-b45af4c2e03c req-9b6bc918-f0f0-480f-93fe-3fa173deaee4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Processing event network-vif-plugged-db982ab1-0c3e-4386-804d-c70f4b91053a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.599 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845960.5985188, 66066b76-4c92-4b20-ba23-c3002693dc10 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.600 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] VM Started (Lifecycle Event)
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.602 226833 DEBUG nova.compute.manager [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.613 226833 DEBUG nova.virt.libvirt.driver [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.618 226833 INFO nova.virt.libvirt.driver [-] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Instance spawned successfully.
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.618 226833 DEBUG nova.virt.libvirt.driver [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.662 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.667 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.768 226833 DEBUG nova.virt.libvirt.driver [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.769 226833 DEBUG nova.virt.libvirt.driver [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.769 226833 DEBUG nova.virt.libvirt.driver [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.769 226833 DEBUG nova.virt.libvirt.driver [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.770 226833 DEBUG nova.virt.libvirt.driver [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.770 226833 DEBUG nova.virt.libvirt.driver [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.777 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.777 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845960.5987725, 66066b76-4c92-4b20-ba23-c3002693dc10 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.777 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] VM Paused (Lifecycle Event)
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.854 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.859 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769845960.6053648, 66066b76-4c92-4b20-ba23-c3002693dc10 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.859 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] VM Resumed (Lifecycle Event)
Jan 31 07:52:40 compute-2 podman[258588]: 2026-01-31 07:52:40.765606155 +0000 UTC m=+0.023784973 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.982 226833 DEBUG nova.network.neutron [req-8ae6f756-d07c-470e-853d-566b8e5f372b req-ff949c6b-bdc9-4774-96ad-4df9d2eaff20 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Updated VIF entry in instance network info cache for port db982ab1-0c3e-4386-804d-c70f4b91053a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:52:40 compute-2 nova_compute[226829]: 2026-01-31 07:52:40.982 226833 DEBUG nova.network.neutron [req-8ae6f756-d07c-470e-853d-566b8e5f372b req-ff949c6b-bdc9-4774-96ad-4df9d2eaff20 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Updating instance_info_cache with network_info: [{"id": "db982ab1-0c3e-4386-804d-c70f4b91053a", "address": "fa:16:3e:97:6f:85", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb982ab1-0c", "ovs_interfaceid": "db982ab1-0c3e-4386-804d-c70f4b91053a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:52:41 compute-2 podman[258588]: 2026-01-31 07:52:41.200449009 +0000 UTC m=+0.458627807 container create b2722ee8c7bce2995a517ab6a506ea14bff98b6adf207dc351614a09f0957c5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 07:52:41 compute-2 systemd[1]: Started libpod-conmon-b2722ee8c7bce2995a517ab6a506ea14bff98b6adf207dc351614a09f0957c5f.scope.
Jan 31 07:52:41 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:52:41 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54392a5d5512b2fe4755d1134ce72e1a2e62bf8f4d5db39067ad58ab513140c1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:52:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:41.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:41 compute-2 podman[258588]: 2026-01-31 07:52:41.287616949 +0000 UTC m=+0.545795777 container init b2722ee8c7bce2995a517ab6a506ea14bff98b6adf207dc351614a09f0957c5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 07:52:41 compute-2 podman[258588]: 2026-01-31 07:52:41.29244527 +0000 UTC m=+0.550624068 container start b2722ee8c7bce2995a517ab6a506ea14bff98b6adf207dc351614a09f0957c5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127)
Jan 31 07:52:41 compute-2 neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809[258604]: [NOTICE]   (258608) : New worker (258610) forked
Jan 31 07:52:41 compute-2 neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809[258604]: [NOTICE]   (258608) : Loading success.
Jan 31 07:52:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:41.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:41 compute-2 nova_compute[226829]: 2026-01-31 07:52:41.796 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:52:41 compute-2 nova_compute[226829]: 2026-01-31 07:52:41.797 226833 DEBUG oslo_concurrency.lockutils [req-8ae6f756-d07c-470e-853d-566b8e5f372b req-ff949c6b-bdc9-4774-96ad-4df9d2eaff20 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:52:41 compute-2 nova_compute[226829]: 2026-01-31 07:52:41.801 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:52:41 compute-2 nova_compute[226829]: 2026-01-31 07:52:41.808 226833 INFO nova.compute.manager [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Took 9.75 seconds to spawn the instance on the hypervisor.
Jan 31 07:52:41 compute-2 nova_compute[226829]: 2026-01-31 07:52:41.809 226833 DEBUG nova.compute.manager [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:52:41 compute-2 nova_compute[226829]: 2026-01-31 07:52:41.821 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:52:41 compute-2 nova_compute[226829]: 2026-01-31 07:52:41.892 226833 INFO nova.compute.manager [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Took 10.75 seconds to build instance.
Jan 31 07:52:41 compute-2 nova_compute[226829]: 2026-01-31 07:52:41.907 226833 DEBUG oslo_concurrency.lockutils [None req-0270f26f-85a1-4277-b9cb-fbe3e72f7f0c 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:42 compute-2 ceph-mon[77282]: pgmap v1663: 305 pgs: 305 active+clean; 192 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 753 KiB/s rd, 3.6 MiB/s wr, 77 op/s
Jan 31 07:52:42 compute-2 nova_compute[226829]: 2026-01-31 07:52:42.668 226833 DEBUG nova.compute.manager [req-5a3a1214-1561-456a-a639-ea10438becb8 req-d94584e3-09b6-4ab4-8f7a-518c8fa096b9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received event network-vif-plugged-db982ab1-0c3e-4386-804d-c70f4b91053a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:42 compute-2 nova_compute[226829]: 2026-01-31 07:52:42.669 226833 DEBUG oslo_concurrency.lockutils [req-5a3a1214-1561-456a-a639-ea10438becb8 req-d94584e3-09b6-4ab4-8f7a-518c8fa096b9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:42 compute-2 nova_compute[226829]: 2026-01-31 07:52:42.669 226833 DEBUG oslo_concurrency.lockutils [req-5a3a1214-1561-456a-a639-ea10438becb8 req-d94584e3-09b6-4ab4-8f7a-518c8fa096b9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:42 compute-2 nova_compute[226829]: 2026-01-31 07:52:42.669 226833 DEBUG oslo_concurrency.lockutils [req-5a3a1214-1561-456a-a639-ea10438becb8 req-d94584e3-09b6-4ab4-8f7a-518c8fa096b9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:42 compute-2 nova_compute[226829]: 2026-01-31 07:52:42.670 226833 DEBUG nova.compute.manager [req-5a3a1214-1561-456a-a639-ea10438becb8 req-d94584e3-09b6-4ab4-8f7a-518c8fa096b9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] No waiting events found dispatching network-vif-plugged-db982ab1-0c3e-4386-804d-c70f4b91053a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:52:42 compute-2 nova_compute[226829]: 2026-01-31 07:52:42.670 226833 WARNING nova.compute.manager [req-5a3a1214-1561-456a-a639-ea10438becb8 req-d94584e3-09b6-4ab4-8f7a-518c8fa096b9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received unexpected event network-vif-plugged-db982ab1-0c3e-4386-804d-c70f4b91053a for instance with vm_state active and task_state None.
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0.
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:43.269353) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845963269474, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 336, "num_deletes": 251, "total_data_size": 292649, "memory_usage": 300328, "flush_reason": "Manual Compaction"}
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845963275144, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 193425, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39130, "largest_seqno": 39461, "table_properties": {"data_size": 191219, "index_size": 370, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5457, "raw_average_key_size": 18, "raw_value_size": 186894, "raw_average_value_size": 644, "num_data_blocks": 16, "num_entries": 290, "num_filter_entries": 290, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845957, "oldest_key_time": 1769845957, "file_creation_time": 1769845963, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 5958 microseconds, and 1277 cpu microseconds.
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:52:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:43.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:43.275306) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 193425 bytes OK
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:43.275348) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:43.278362) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:43.278448) EVENT_LOG_v1 {"time_micros": 1769845963278408, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:43.278470) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 290298, prev total WAL file size 290298, number of live WAL files 2.
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:43.279278) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(188KB)], [72(11MB)]
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845963279585, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 11918545, "oldest_snapshot_seqno": -1}
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 6304 keys, 9986671 bytes, temperature: kUnknown
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845963354787, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 9986671, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9943759, "index_size": 26034, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15813, "raw_key_size": 163185, "raw_average_key_size": 25, "raw_value_size": 9829808, "raw_average_value_size": 1559, "num_data_blocks": 1037, "num_entries": 6304, "num_filter_entries": 6304, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769845963, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:43.355079) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 9986671 bytes
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:43.361498) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.3 rd, 132.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 11.2 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(113.2) write-amplify(51.6) OK, records in: 6817, records dropped: 513 output_compression: NoCompression
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:43.361536) EVENT_LOG_v1 {"time_micros": 1769845963361523, "job": 44, "event": "compaction_finished", "compaction_time_micros": 75276, "compaction_time_cpu_micros": 24739, "output_level": 6, "num_output_files": 1, "total_output_size": 9986671, "num_input_records": 6817, "num_output_records": 6304, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845963361878, "job": 44, "event": "table_file_deletion", "file_number": 74}
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845963363318, "job": 44, "event": "table_file_deletion", "file_number": 72}
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:43.279168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:43.363540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:43.363546) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:43.363548) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:43.363549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:52:43 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:52:43.363551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:52:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:43.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:52:44 compute-2 nova_compute[226829]: 2026-01-31 07:52:44.187 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:44 compute-2 ceph-mon[77282]: pgmap v1664: 305 pgs: 305 active+clean; 205 MiB data, 696 MiB used, 20 GiB / 21 GiB avail; 452 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 31 07:52:44 compute-2 nova_compute[226829]: 2026-01-31 07:52:44.613 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:44 compute-2 nova_compute[226829]: 2026-01-31 07:52:44.852 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:44 compute-2 NetworkManager[48999]: <info>  [1769845964.8539] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Jan 31 07:52:44 compute-2 NetworkManager[48999]: <info>  [1769845964.8552] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Jan 31 07:52:44 compute-2 nova_compute[226829]: 2026-01-31 07:52:44.905 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:44 compute-2 ovn_controller[133834]: 2026-01-31T07:52:44Z|00204|binding|INFO|Releasing lport cbb039b2-15df-45bd-8800-81213ecc7011 from this chassis (sb_readonly=0)
Jan 31 07:52:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:52:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1330100280' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:52:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:52:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1330100280' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:52:44 compute-2 nova_compute[226829]: 2026-01-31 07:52:44.926 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:45 compute-2 nova_compute[226829]: 2026-01-31 07:52:45.277 226833 DEBUG nova.compute.manager [req-ef10d17c-9c50-484c-a377-7346b3068bf7 req-a67f68b9-5acf-41c8-b0f6-315ed272aeb3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received event network-changed-db982ab1-0c3e-4386-804d-c70f4b91053a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:45 compute-2 nova_compute[226829]: 2026-01-31 07:52:45.277 226833 DEBUG nova.compute.manager [req-ef10d17c-9c50-484c-a377-7346b3068bf7 req-a67f68b9-5acf-41c8-b0f6-315ed272aeb3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Refreshing instance network info cache due to event network-changed-db982ab1-0c3e-4386-804d-c70f4b91053a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:52:45 compute-2 nova_compute[226829]: 2026-01-31 07:52:45.278 226833 DEBUG oslo_concurrency.lockutils [req-ef10d17c-9c50-484c-a377-7346b3068bf7 req-a67f68b9-5acf-41c8-b0f6-315ed272aeb3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:52:45 compute-2 nova_compute[226829]: 2026-01-31 07:52:45.278 226833 DEBUG oslo_concurrency.lockutils [req-ef10d17c-9c50-484c-a377-7346b3068bf7 req-a67f68b9-5acf-41c8-b0f6-315ed272aeb3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:52:45 compute-2 nova_compute[226829]: 2026-01-31 07:52:45.278 226833 DEBUG nova.network.neutron [req-ef10d17c-9c50-484c-a377-7346b3068bf7 req-a67f68b9-5acf-41c8-b0f6-315ed272aeb3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Refreshing network info cache for port db982ab1-0c3e-4386-804d-c70f4b91053a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:52:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:45.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1330100280' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:52:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1330100280' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:52:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2159163348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:52:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:45.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:46 compute-2 ceph-mon[77282]: pgmap v1665: 305 pgs: 305 active+clean; 214 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 155 op/s
Jan 31 07:52:46 compute-2 nova_compute[226829]: 2026-01-31 07:52:46.658 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:52:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:47.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:52:47 compute-2 nova_compute[226829]: 2026-01-31 07:52:47.542 226833 DEBUG nova.network.neutron [req-ef10d17c-9c50-484c-a377-7346b3068bf7 req-a67f68b9-5acf-41c8-b0f6-315ed272aeb3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Updated VIF entry in instance network info cache for port db982ab1-0c3e-4386-804d-c70f4b91053a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:52:47 compute-2 nova_compute[226829]: 2026-01-31 07:52:47.543 226833 DEBUG nova.network.neutron [req-ef10d17c-9c50-484c-a377-7346b3068bf7 req-a67f68b9-5acf-41c8-b0f6-315ed272aeb3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Updating instance_info_cache with network_info: [{"id": "db982ab1-0c3e-4386-804d-c70f4b91053a", "address": "fa:16:3e:97:6f:85", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb982ab1-0c", "ovs_interfaceid": "db982ab1-0c3e-4386-804d-c70f4b91053a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:52:47 compute-2 nova_compute[226829]: 2026-01-31 07:52:47.599 226833 DEBUG oslo_concurrency.lockutils [req-ef10d17c-9c50-484c-a377-7346b3068bf7 req-a67f68b9-5acf-41c8-b0f6-315ed272aeb3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:52:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:47.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:48 compute-2 ceph-mon[77282]: pgmap v1666: 305 pgs: 305 active+clean; 214 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 153 op/s
Jan 31 07:52:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:52:49 compute-2 nova_compute[226829]: 2026-01-31 07:52:49.190 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:49.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:49 compute-2 nova_compute[226829]: 2026-01-31 07:52:49.615 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:49.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:50 compute-2 ceph-mon[77282]: pgmap v1667: 305 pgs: 305 active+clean; 214 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 146 op/s
Jan 31 07:52:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:51.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:51.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:51 compute-2 ceph-mon[77282]: pgmap v1668: 305 pgs: 305 active+clean; 214 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 149 op/s
Jan 31 07:52:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3956740207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:52:53 compute-2 podman[258625]: 2026-01-31 07:52:53.200770891 +0000 UTC m=+0.083121672 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 31 07:52:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:53.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:53.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:52:54 compute-2 nova_compute[226829]: 2026-01-31 07:52:54.194 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:54 compute-2 ceph-mon[77282]: pgmap v1669: 305 pgs: 305 active+clean; 214 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.4 MiB/s wr, 161 op/s
Jan 31 07:52:54 compute-2 nova_compute[226829]: 2026-01-31 07:52:54.617 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:55.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:52:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:55.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:52:55 compute-2 ceph-mon[77282]: pgmap v1670: 305 pgs: 305 active+clean; 218 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.0 MiB/s wr, 143 op/s
Jan 31 07:52:56 compute-2 sudo[258655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:52:56 compute-2 sudo[258655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:56 compute-2 sudo[258655]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:56 compute-2 sudo[258680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:52:56 compute-2 sudo[258680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:56 compute-2 sudo[258680]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:56 compute-2 ovn_controller[133834]: 2026-01-31T07:52:56Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:97:6f:85 10.100.0.10
Jan 31 07:52:56 compute-2 ovn_controller[133834]: 2026-01-31T07:52:56Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:97:6f:85 10.100.0.10
Jan 31 07:52:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:57.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:57.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:58 compute-2 ceph-mon[77282]: pgmap v1671: 305 pgs: 305 active+clean; 240 MiB data, 764 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.2 MiB/s wr, 131 op/s
Jan 31 07:52:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1258146455' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:52:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:52:59 compute-2 nova_compute[226829]: 2026-01-31 07:52:59.196 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:59.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:59 compute-2 nova_compute[226829]: 2026-01-31 07:52:59.620 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:52:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:59.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1360236136' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:52:59 compute-2 ceph-mon[77282]: pgmap v1672: 305 pgs: 305 active+clean; 286 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 158 op/s
Jan 31 07:53:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:53:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:01.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:53:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:53:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:01.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:53:02 compute-2 ceph-mon[77282]: pgmap v1673: 305 pgs: 305 active+clean; 292 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 178 op/s
Jan 31 07:53:02 compute-2 nova_compute[226829]: 2026-01-31 07:53:02.841 226833 DEBUG oslo_concurrency.lockutils [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "interface-66066b76-4c92-4b20-ba23-c3002693dc10-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:53:02 compute-2 nova_compute[226829]: 2026-01-31 07:53:02.841 226833 DEBUG oslo_concurrency.lockutils [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "interface-66066b76-4c92-4b20-ba23-c3002693dc10-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:53:02 compute-2 nova_compute[226829]: 2026-01-31 07:53:02.842 226833 DEBUG nova.objects.instance [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lazy-loading 'flavor' on Instance uuid 66066b76-4c92-4b20-ba23-c3002693dc10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:53:02 compute-2 nova_compute[226829]: 2026-01-31 07:53:02.875 226833 DEBUG nova.objects.instance [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lazy-loading 'pci_requests' on Instance uuid 66066b76-4c92-4b20-ba23-c3002693dc10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:53:02 compute-2 nova_compute[226829]: 2026-01-31 07:53:02.891 226833 DEBUG nova.network.neutron [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:53:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:03.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:03 compute-2 nova_compute[226829]: 2026-01-31 07:53:03.312 226833 DEBUG nova.policy [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1e1c5eef3d264666bc90735dd338d82a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a8cbd6cc22654dfab04487522a63426c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:53:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:03.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:53:04 compute-2 nova_compute[226829]: 2026-01-31 07:53:04.198 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:04 compute-2 ceph-mon[77282]: pgmap v1674: 305 pgs: 305 active+clean; 293 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.9 MiB/s wr, 183 op/s
Jan 31 07:53:04 compute-2 nova_compute[226829]: 2026-01-31 07:53:04.621 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:04 compute-2 nova_compute[226829]: 2026-01-31 07:53:04.704 226833 DEBUG nova.network.neutron [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Successfully created port: bf305380-6000-4cea-86cf-123bf60de12a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:53:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:53:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:05.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:53:05 compute-2 nova_compute[226829]: 2026-01-31 07:53:05.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:53:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:05.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:06 compute-2 podman[258710]: 2026-01-31 07:53:06.174904209 +0000 UTC m=+0.057965144 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127)
Jan 31 07:53:06 compute-2 nova_compute[226829]: 2026-01-31 07:53:06.284 226833 DEBUG nova.network.neutron [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Successfully updated port: bf305380-6000-4cea-86cf-123bf60de12a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:53:06 compute-2 nova_compute[226829]: 2026-01-31 07:53:06.299 226833 DEBUG oslo_concurrency.lockutils [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:53:06 compute-2 nova_compute[226829]: 2026-01-31 07:53:06.299 226833 DEBUG oslo_concurrency.lockutils [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquired lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:53:06 compute-2 nova_compute[226829]: 2026-01-31 07:53:06.300 226833 DEBUG nova.network.neutron [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:53:06 compute-2 ceph-mon[77282]: pgmap v1675: 305 pgs: 305 active+clean; 296 MiB data, 787 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.1 MiB/s wr, 203 op/s
Jan 31 07:53:06 compute-2 nova_compute[226829]: 2026-01-31 07:53:06.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:53:06 compute-2 nova_compute[226829]: 2026-01-31 07:53:06.530 226833 WARNING nova.network.neutron [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] 485494d9-5360-41c3-a10e-ef5098af0809 already exists in list: networks containing: ['485494d9-5360-41c3-a10e-ef5098af0809']. ignoring it
Jan 31 07:53:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:06.863 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:53:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:06.864 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:53:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:06.864 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:53:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:07.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:53:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:07.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:53:07 compute-2 nova_compute[226829]: 2026-01-31 07:53:07.874 226833 DEBUG nova.compute.manager [req-85af8bf4-fa23-4661-9d67-828626aca27d req-2dfa1421-679a-48ff-88b9-9b6a75f24c43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received event network-changed-bf305380-6000-4cea-86cf-123bf60de12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:53:07 compute-2 nova_compute[226829]: 2026-01-31 07:53:07.875 226833 DEBUG nova.compute.manager [req-85af8bf4-fa23-4661-9d67-828626aca27d req-2dfa1421-679a-48ff-88b9-9b6a75f24c43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Refreshing instance network info cache due to event network-changed-bf305380-6000-4cea-86cf-123bf60de12a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:53:07 compute-2 nova_compute[226829]: 2026-01-31 07:53:07.875 226833 DEBUG oslo_concurrency.lockutils [req-85af8bf4-fa23-4661-9d67-828626aca27d req-2dfa1421-679a-48ff-88b9-9b6a75f24c43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:53:08 compute-2 ceph-mon[77282]: pgmap v1676: 305 pgs: 305 active+clean; 310 MiB data, 799 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.7 MiB/s wr, 234 op/s
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.604 226833 DEBUG nova.network.neutron [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Updating instance_info_cache with network_info: [{"id": "db982ab1-0c3e-4386-804d-c70f4b91053a", "address": "fa:16:3e:97:6f:85", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb982ab1-0c", "ovs_interfaceid": "db982ab1-0c3e-4386-804d-c70f4b91053a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bf305380-6000-4cea-86cf-123bf60de12a", "address": "fa:16:3e:39:34:eb", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf305380-60", "ovs_interfaceid": "bf305380-6000-4cea-86cf-123bf60de12a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.629 226833 DEBUG oslo_concurrency.lockutils [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Releasing lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.631 226833 DEBUG oslo_concurrency.lockutils [req-85af8bf4-fa23-4661-9d67-828626aca27d req-2dfa1421-679a-48ff-88b9-9b6a75f24c43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.631 226833 DEBUG nova.network.neutron [req-85af8bf4-fa23-4661-9d67-828626aca27d req-2dfa1421-679a-48ff-88b9-9b6a75f24c43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Refreshing network info cache for port bf305380-6000-4cea-86cf-123bf60de12a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.635 226833 DEBUG nova.virt.libvirt.vif [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:52:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-867555935',display_name='tempest-AttachInterfacesTestJSON-server-867555935',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-867555935',id=73,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPX6GTdGdGcfJav9VD2vKr9cDmWhHD/qSDkhR2o41JGZ+VcYlHVD/4Hcc3zTsC+yiuWl8z4VSKy7Wi9c75E6Y4p1/iIqUHnIU8ZYXlvl82jhWPIfK08UcCczSxHzDyzJ+g==',key_name='tempest-keypair-461427290',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:52:41Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-tpxrd0ai',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:52:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=66066b76-4c92-4b20-ba23-c3002693dc10,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf305380-6000-4cea-86cf-123bf60de12a", "address": "fa:16:3e:39:34:eb", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf305380-60", "ovs_interfaceid": "bf305380-6000-4cea-86cf-123bf60de12a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.636 226833 DEBUG nova.network.os_vif_util [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "bf305380-6000-4cea-86cf-123bf60de12a", "address": "fa:16:3e:39:34:eb", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf305380-60", "ovs_interfaceid": "bf305380-6000-4cea-86cf-123bf60de12a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.637 226833 DEBUG nova.network.os_vif_util [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:34:eb,bridge_name='br-int',has_traffic_filtering=True,id=bf305380-6000-4cea-86cf-123bf60de12a,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf305380-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.637 226833 DEBUG os_vif [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:34:eb,bridge_name='br-int',has_traffic_filtering=True,id=bf305380-6000-4cea-86cf-123bf60de12a,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf305380-60') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.638 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.639 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.639 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.668 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.668 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf305380-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.669 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf305380-60, col_values=(('external_ids', {'iface-id': 'bf305380-6000-4cea-86cf-123bf60de12a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:34:eb', 'vm-uuid': '66066b76-4c92-4b20-ba23-c3002693dc10'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:08 compute-2 NetworkManager[48999]: <info>  [1769845988.6718] manager: (tapbf305380-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/117)
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.671 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.674 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.676 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.678 226833 INFO os_vif [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:34:eb,bridge_name='br-int',has_traffic_filtering=True,id=bf305380-6000-4cea-86cf-123bf60de12a,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf305380-60')
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.680 226833 DEBUG nova.virt.libvirt.vif [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:52:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-867555935',display_name='tempest-AttachInterfacesTestJSON-server-867555935',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-867555935',id=73,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPX6GTdGdGcfJav9VD2vKr9cDmWhHD/qSDkhR2o41JGZ+VcYlHVD/4Hcc3zTsC+yiuWl8z4VSKy7Wi9c75E6Y4p1/iIqUHnIU8ZYXlvl82jhWPIfK08UcCczSxHzDyzJ+g==',key_name='tempest-keypair-461427290',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:52:41Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-tpxrd0ai',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:52:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=66066b76-4c92-4b20-ba23-c3002693dc10,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf305380-6000-4cea-86cf-123bf60de12a", "address": "fa:16:3e:39:34:eb", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf305380-60", "ovs_interfaceid": "bf305380-6000-4cea-86cf-123bf60de12a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.680 226833 DEBUG nova.network.os_vif_util [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "bf305380-6000-4cea-86cf-123bf60de12a", "address": "fa:16:3e:39:34:eb", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf305380-60", "ovs_interfaceid": "bf305380-6000-4cea-86cf-123bf60de12a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.682 226833 DEBUG nova.network.os_vif_util [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:34:eb,bridge_name='br-int',has_traffic_filtering=True,id=bf305380-6000-4cea-86cf-123bf60de12a,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf305380-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.686 226833 DEBUG nova.virt.libvirt.guest [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] attach device xml: <interface type="ethernet">
Jan 31 07:53:08 compute-2 nova_compute[226829]:   <mac address="fa:16:3e:39:34:eb"/>
Jan 31 07:53:08 compute-2 nova_compute[226829]:   <model type="virtio"/>
Jan 31 07:53:08 compute-2 nova_compute[226829]:   <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:53:08 compute-2 nova_compute[226829]:   <mtu size="1442"/>
Jan 31 07:53:08 compute-2 nova_compute[226829]:   <target dev="tapbf305380-60"/>
Jan 31 07:53:08 compute-2 nova_compute[226829]: </interface>
Jan 31 07:53:08 compute-2 nova_compute[226829]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 31 07:53:08 compute-2 kernel: tapbf305380-60: entered promiscuous mode
Jan 31 07:53:08 compute-2 NetworkManager[48999]: <info>  [1769845988.6996] manager: (tapbf305380-60): new Tun device (/org/freedesktop/NetworkManager/Devices/118)
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.699 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:08 compute-2 ovn_controller[133834]: 2026-01-31T07:53:08Z|00205|binding|INFO|Claiming lport bf305380-6000-4cea-86cf-123bf60de12a for this chassis.
Jan 31 07:53:08 compute-2 ovn_controller[133834]: 2026-01-31T07:53:08Z|00206|binding|INFO|bf305380-6000-4cea-86cf-123bf60de12a: Claiming fa:16:3e:39:34:eb 10.100.0.8
Jan 31 07:53:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:08.706 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:34:eb 10.100.0.8'], port_security=['fa:16:3e:39:34:eb 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '66066b76-4c92-4b20-ba23-c3002693dc10', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-485494d9-5360-41c3-a10e-ef5098af0809', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8cbd6cc22654dfab04487522a63426c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c7e4e426-0b76-41c4-8dcb-ea007c31db76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44651228-c761-41fc-b495-d4156af21548, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=bf305380-6000-4cea-86cf-123bf60de12a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:53:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:08.708 143841 INFO neutron.agent.ovn.metadata.agent [-] Port bf305380-6000-4cea-86cf-123bf60de12a in datapath 485494d9-5360-41c3-a10e-ef5098af0809 bound to our chassis
Jan 31 07:53:08 compute-2 ovn_controller[133834]: 2026-01-31T07:53:08Z|00207|binding|INFO|Setting lport bf305380-6000-4cea-86cf-123bf60de12a ovn-installed in OVS
Jan 31 07:53:08 compute-2 ovn_controller[133834]: 2026-01-31T07:53:08Z|00208|binding|INFO|Setting lport bf305380-6000-4cea-86cf-123bf60de12a up in Southbound
Jan 31 07:53:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:08.710 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 485494d9-5360-41c3-a10e-ef5098af0809
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.710 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.713 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:08.724 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4f8c5b50-aa5d-446e-8fb0-e609b0f6847b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:08 compute-2 systemd-udevd[258737]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:53:08 compute-2 NetworkManager[48999]: <info>  [1769845988.7430] device (tapbf305380-60): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:53:08 compute-2 NetworkManager[48999]: <info>  [1769845988.7440] device (tapbf305380-60): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:53:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:08.750 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[9672ce6e-c9db-4f13-9754-b745941fe718]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:08.755 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[edb99b4b-0f4d-4c04-87e6-3558845c2bc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:08.774 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[0a63176f-c3b9-44b2-b745-07f315f85070]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:08.787 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff2551a-f639-493b-a458-a224e1035f2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap485494d9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:4b:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621274, 'reachable_time': 37963, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258744, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:08.799 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[da73f669-ac46-4e73-bb5c-7a2ac6d14f7e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap485494d9-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 621283, 'tstamp': 621283}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258745, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap485494d9-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 621285, 'tstamp': 621285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258745, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:08.800 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap485494d9-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.802 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:08.803 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap485494d9-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:08.803 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:53:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:08.804 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap485494d9-50, col_values=(('external_ids', {'iface-id': 'cbb039b2-15df-45bd-8800-81213ecc7011'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:08.804 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:53:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.908 226833 DEBUG nova.virt.libvirt.driver [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.909 226833 DEBUG nova.virt.libvirt.driver [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.909 226833 DEBUG nova.virt.libvirt.driver [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No VIF found with MAC fa:16:3e:97:6f:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:53:08 compute-2 nova_compute[226829]: 2026-01-31 07:53:08.910 226833 DEBUG nova.virt.libvirt.driver [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No VIF found with MAC fa:16:3e:39:34:eb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:53:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:53:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:09.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:53:09 compute-2 nova_compute[226829]: 2026-01-31 07:53:09.373 226833 DEBUG nova.virt.libvirt.guest [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:53:09 compute-2 nova_compute[226829]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:53:09 compute-2 nova_compute[226829]:   <nova:name>tempest-AttachInterfacesTestJSON-server-867555935</nova:name>
Jan 31 07:53:09 compute-2 nova_compute[226829]:   <nova:creationTime>2026-01-31 07:53:09</nova:creationTime>
Jan 31 07:53:09 compute-2 nova_compute[226829]:   <nova:flavor name="m1.nano">
Jan 31 07:53:09 compute-2 nova_compute[226829]:     <nova:memory>128</nova:memory>
Jan 31 07:53:09 compute-2 nova_compute[226829]:     <nova:disk>1</nova:disk>
Jan 31 07:53:09 compute-2 nova_compute[226829]:     <nova:swap>0</nova:swap>
Jan 31 07:53:09 compute-2 nova_compute[226829]:     <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:53:09 compute-2 nova_compute[226829]:     <nova:vcpus>1</nova:vcpus>
Jan 31 07:53:09 compute-2 nova_compute[226829]:   </nova:flavor>
Jan 31 07:53:09 compute-2 nova_compute[226829]:   <nova:owner>
Jan 31 07:53:09 compute-2 nova_compute[226829]:     <nova:user uuid="1e1c5eef3d264666bc90735dd338d82a">tempest-AttachInterfacesTestJSON-668371425-project-member</nova:user>
Jan 31 07:53:09 compute-2 nova_compute[226829]:     <nova:project uuid="a8cbd6cc22654dfab04487522a63426c">tempest-AttachInterfacesTestJSON-668371425</nova:project>
Jan 31 07:53:09 compute-2 nova_compute[226829]:   </nova:owner>
Jan 31 07:53:09 compute-2 nova_compute[226829]:   <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:53:09 compute-2 nova_compute[226829]:   <nova:ports>
Jan 31 07:53:09 compute-2 nova_compute[226829]:     <nova:port uuid="db982ab1-0c3e-4386-804d-c70f4b91053a">
Jan 31 07:53:09 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 07:53:09 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:09 compute-2 nova_compute[226829]:     <nova:port uuid="bf305380-6000-4cea-86cf-123bf60de12a">
Jan 31 07:53:09 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 07:53:09 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:09 compute-2 nova_compute[226829]:   </nova:ports>
Jan 31 07:53:09 compute-2 nova_compute[226829]: </nova:instance>
Jan 31 07:53:09 compute-2 nova_compute[226829]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 31 07:53:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3758324070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:53:09 compute-2 nova_compute[226829]: 2026-01-31 07:53:09.623 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:09 compute-2 nova_compute[226829]: 2026-01-31 07:53:09.712 226833 DEBUG oslo_concurrency.lockutils [None req-cd58419a-7829-45f3-ad66-4a53f50ca6f8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "interface-66066b76-4c92-4b20-ba23-c3002693dc10-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:53:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:53:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:09.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:53:10 compute-2 nova_compute[226829]: 2026-01-31 07:53:10.635 226833 DEBUG nova.compute.manager [req-616daa3b-f9e6-4e01-b54a-750cb54b656a req-70bcb7b9-8f6a-44e4-bb7f-8704242b9e66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received event network-vif-plugged-bf305380-6000-4cea-86cf-123bf60de12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:53:10 compute-2 nova_compute[226829]: 2026-01-31 07:53:10.636 226833 DEBUG oslo_concurrency.lockutils [req-616daa3b-f9e6-4e01-b54a-750cb54b656a req-70bcb7b9-8f6a-44e4-bb7f-8704242b9e66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:53:10 compute-2 nova_compute[226829]: 2026-01-31 07:53:10.637 226833 DEBUG oslo_concurrency.lockutils [req-616daa3b-f9e6-4e01-b54a-750cb54b656a req-70bcb7b9-8f6a-44e4-bb7f-8704242b9e66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:53:10 compute-2 nova_compute[226829]: 2026-01-31 07:53:10.637 226833 DEBUG oslo_concurrency.lockutils [req-616daa3b-f9e6-4e01-b54a-750cb54b656a req-70bcb7b9-8f6a-44e4-bb7f-8704242b9e66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:53:10 compute-2 nova_compute[226829]: 2026-01-31 07:53:10.637 226833 DEBUG nova.compute.manager [req-616daa3b-f9e6-4e01-b54a-750cb54b656a req-70bcb7b9-8f6a-44e4-bb7f-8704242b9e66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] No waiting events found dispatching network-vif-plugged-bf305380-6000-4cea-86cf-123bf60de12a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:53:10 compute-2 nova_compute[226829]: 2026-01-31 07:53:10.638 226833 WARNING nova.compute.manager [req-616daa3b-f9e6-4e01-b54a-750cb54b656a req-70bcb7b9-8f6a-44e4-bb7f-8704242b9e66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received unexpected event network-vif-plugged-bf305380-6000-4cea-86cf-123bf60de12a for instance with vm_state active and task_state None.
Jan 31 07:53:10 compute-2 ceph-mon[77282]: pgmap v1677: 305 pgs: 305 active+clean; 326 MiB data, 810 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.9 MiB/s wr, 206 op/s
Jan 31 07:53:11 compute-2 nova_compute[226829]: 2026-01-31 07:53:11.100 226833 DEBUG oslo_concurrency.lockutils [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "interface-66066b76-4c92-4b20-ba23-c3002693dc10-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:53:11 compute-2 nova_compute[226829]: 2026-01-31 07:53:11.100 226833 DEBUG oslo_concurrency.lockutils [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "interface-66066b76-4c92-4b20-ba23-c3002693dc10-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:53:11 compute-2 nova_compute[226829]: 2026-01-31 07:53:11.101 226833 DEBUG nova.objects.instance [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lazy-loading 'flavor' on Instance uuid 66066b76-4c92-4b20-ba23-c3002693dc10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:53:11 compute-2 nova_compute[226829]: 2026-01-31 07:53:11.244 226833 DEBUG nova.network.neutron [req-85af8bf4-fa23-4661-9d67-828626aca27d req-2dfa1421-679a-48ff-88b9-9b6a75f24c43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Updated VIF entry in instance network info cache for port bf305380-6000-4cea-86cf-123bf60de12a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:53:11 compute-2 nova_compute[226829]: 2026-01-31 07:53:11.245 226833 DEBUG nova.network.neutron [req-85af8bf4-fa23-4661-9d67-828626aca27d req-2dfa1421-679a-48ff-88b9-9b6a75f24c43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Updating instance_info_cache with network_info: [{"id": "db982ab1-0c3e-4386-804d-c70f4b91053a", "address": "fa:16:3e:97:6f:85", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb982ab1-0c", "ovs_interfaceid": "db982ab1-0c3e-4386-804d-c70f4b91053a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bf305380-6000-4cea-86cf-123bf60de12a", "address": "fa:16:3e:39:34:eb", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf305380-60", "ovs_interfaceid": "bf305380-6000-4cea-86cf-123bf60de12a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:53:11 compute-2 nova_compute[226829]: 2026-01-31 07:53:11.265 226833 DEBUG oslo_concurrency.lockutils [req-85af8bf4-fa23-4661-9d67-828626aca27d req-2dfa1421-679a-48ff-88b9-9b6a75f24c43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:53:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:11.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:11 compute-2 ovn_controller[133834]: 2026-01-31T07:53:11Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:39:34:eb 10.100.0.8
Jan 31 07:53:11 compute-2 ovn_controller[133834]: 2026-01-31T07:53:11Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:39:34:eb 10.100.0.8
Jan 31 07:53:11 compute-2 nova_compute[226829]: 2026-01-31 07:53:11.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:53:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:53:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:11.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:53:12 compute-2 ceph-mon[77282]: pgmap v1678: 305 pgs: 305 active+clean; 326 MiB data, 810 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 164 op/s
Jan 31 07:53:12 compute-2 nova_compute[226829]: 2026-01-31 07:53:12.739 226833 DEBUG nova.objects.instance [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lazy-loading 'pci_requests' on Instance uuid 66066b76-4c92-4b20-ba23-c3002693dc10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:53:12 compute-2 nova_compute[226829]: 2026-01-31 07:53:12.755 226833 DEBUG nova.network.neutron [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:53:13 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Jan 31 07:53:13 compute-2 nova_compute[226829]: 2026-01-31 07:53:13.281 226833 DEBUG nova.compute.manager [req-0a7de1a6-23e3-4627-aa0b-010d8c17cf5b req-d2a87d9f-8973-4c39-9924-b6d4fc90cd94 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received event network-vif-plugged-bf305380-6000-4cea-86cf-123bf60de12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:53:13 compute-2 nova_compute[226829]: 2026-01-31 07:53:13.282 226833 DEBUG oslo_concurrency.lockutils [req-0a7de1a6-23e3-4627-aa0b-010d8c17cf5b req-d2a87d9f-8973-4c39-9924-b6d4fc90cd94 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:53:13 compute-2 nova_compute[226829]: 2026-01-31 07:53:13.282 226833 DEBUG oslo_concurrency.lockutils [req-0a7de1a6-23e3-4627-aa0b-010d8c17cf5b req-d2a87d9f-8973-4c39-9924-b6d4fc90cd94 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:53:13 compute-2 nova_compute[226829]: 2026-01-31 07:53:13.282 226833 DEBUG oslo_concurrency.lockutils [req-0a7de1a6-23e3-4627-aa0b-010d8c17cf5b req-d2a87d9f-8973-4c39-9924-b6d4fc90cd94 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:53:13 compute-2 nova_compute[226829]: 2026-01-31 07:53:13.283 226833 DEBUG nova.compute.manager [req-0a7de1a6-23e3-4627-aa0b-010d8c17cf5b req-d2a87d9f-8973-4c39-9924-b6d4fc90cd94 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] No waiting events found dispatching network-vif-plugged-bf305380-6000-4cea-86cf-123bf60de12a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:53:13 compute-2 nova_compute[226829]: 2026-01-31 07:53:13.283 226833 WARNING nova.compute.manager [req-0a7de1a6-23e3-4627-aa0b-010d8c17cf5b req-d2a87d9f-8973-4c39-9924-b6d4fc90cd94 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received unexpected event network-vif-plugged-bf305380-6000-4cea-86cf-123bf60de12a for instance with vm_state active and task_state None.
Jan 31 07:53:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:13.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:13 compute-2 nova_compute[226829]: 2026-01-31 07:53:13.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:53:13 compute-2 nova_compute[226829]: 2026-01-31 07:53:13.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:53:13 compute-2 nova_compute[226829]: 2026-01-31 07:53:13.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:53:13 compute-2 nova_compute[226829]: 2026-01-31 07:53:13.603 226833 DEBUG nova.policy [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1e1c5eef3d264666bc90735dd338d82a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a8cbd6cc22654dfab04487522a63426c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:53:13 compute-2 nova_compute[226829]: 2026-01-31 07:53:13.673 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:13 compute-2 nova_compute[226829]: 2026-01-31 07:53:13.682 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:53:13 compute-2 nova_compute[226829]: 2026-01-31 07:53:13.682 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:53:13 compute-2 nova_compute[226829]: 2026-01-31 07:53:13.682 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 07:53:13 compute-2 nova_compute[226829]: 2026-01-31 07:53:13.683 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 66066b76-4c92-4b20-ba23-c3002693dc10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:53:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:13.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:53:14 compute-2 ceph-mon[77282]: pgmap v1679: 305 pgs: 305 active+clean; 326 MiB data, 810 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 142 op/s
Jan 31 07:53:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3915967884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:53:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/817034885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:53:14 compute-2 nova_compute[226829]: 2026-01-31 07:53:14.625 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:14 compute-2 nova_compute[226829]: 2026-01-31 07:53:14.952 226833 DEBUG nova.network.neutron [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Successfully created port: 6cd8ab35-cc78-4cb7-b84e-09ec579e9f19 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:53:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:15.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4026578386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:53:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2769371964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:53:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:15.706 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:53:15 compute-2 nova_compute[226829]: 2026-01-31 07:53:15.707 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:15.709 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:53:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:53:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:15.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:53:15 compute-2 nova_compute[226829]: 2026-01-31 07:53:15.847 226833 DEBUG nova.network.neutron [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Successfully updated port: 6cd8ab35-cc78-4cb7-b84e-09ec579e9f19 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:53:15 compute-2 nova_compute[226829]: 2026-01-31 07:53:15.874 226833 DEBUG oslo_concurrency.lockutils [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:53:16 compute-2 sudo[258750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:53:16 compute-2 sudo[258750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:16 compute-2 sudo[258750]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:16 compute-2 sudo[258775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:53:16 compute-2 sudo[258775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:16 compute-2 sudo[258775]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:16 compute-2 nova_compute[226829]: 2026-01-31 07:53:16.465 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Updating instance_info_cache with network_info: [{"id": "db982ab1-0c3e-4386-804d-c70f4b91053a", "address": "fa:16:3e:97:6f:85", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb982ab1-0c", "ovs_interfaceid": "db982ab1-0c3e-4386-804d-c70f4b91053a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bf305380-6000-4cea-86cf-123bf60de12a", "address": "fa:16:3e:39:34:eb", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf305380-60", "ovs_interfaceid": "bf305380-6000-4cea-86cf-123bf60de12a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:53:16 compute-2 nova_compute[226829]: 2026-01-31 07:53:16.507 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:53:16 compute-2 nova_compute[226829]: 2026-01-31 07:53:16.508 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 07:53:16 compute-2 nova_compute[226829]: 2026-01-31 07:53:16.508 226833 DEBUG oslo_concurrency.lockutils [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquired lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:53:16 compute-2 nova_compute[226829]: 2026-01-31 07:53:16.509 226833 DEBUG nova.network.neutron [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:53:16 compute-2 nova_compute[226829]: 2026-01-31 07:53:16.511 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:53:16 compute-2 nova_compute[226829]: 2026-01-31 07:53:16.512 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:53:16 compute-2 nova_compute[226829]: 2026-01-31 07:53:16.512 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:53:16 compute-2 nova_compute[226829]: 2026-01-31 07:53:16.553 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:53:16 compute-2 nova_compute[226829]: 2026-01-31 07:53:16.553 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:53:16 compute-2 nova_compute[226829]: 2026-01-31 07:53:16.554 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:53:16 compute-2 nova_compute[226829]: 2026-01-31 07:53:16.554 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:53:16 compute-2 nova_compute[226829]: 2026-01-31 07:53:16.554 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:53:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:16.712 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:16 compute-2 nova_compute[226829]: 2026-01-31 07:53:16.719 226833 WARNING nova.network.neutron [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] 485494d9-5360-41c3-a10e-ef5098af0809 already exists in list: networks containing: ['485494d9-5360-41c3-a10e-ef5098af0809']. ignoring it
Jan 31 07:53:16 compute-2 nova_compute[226829]: 2026-01-31 07:53:16.720 226833 WARNING nova.network.neutron [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] 485494d9-5360-41c3-a10e-ef5098af0809 already exists in list: networks containing: ['485494d9-5360-41c3-a10e-ef5098af0809']. ignoring it
Jan 31 07:53:16 compute-2 ceph-mon[77282]: pgmap v1680: 305 pgs: 305 active+clean; 339 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.2 MiB/s wr, 143 op/s
Jan 31 07:53:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:53:17 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2029330616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:53:17 compute-2 nova_compute[226829]: 2026-01-31 07:53:17.136 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:53:17 compute-2 nova_compute[226829]: 2026-01-31 07:53:17.235 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000049 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:53:17 compute-2 nova_compute[226829]: 2026-01-31 07:53:17.236 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000049 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:53:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:17.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:17 compute-2 nova_compute[226829]: 2026-01-31 07:53:17.408 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:53:17 compute-2 nova_compute[226829]: 2026-01-31 07:53:17.410 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4332MB free_disk=20.90988540649414GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:53:17 compute-2 nova_compute[226829]: 2026-01-31 07:53:17.410 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:53:17 compute-2 nova_compute[226829]: 2026-01-31 07:53:17.410 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:53:17 compute-2 nova_compute[226829]: 2026-01-31 07:53:17.530 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 66066b76-4c92-4b20-ba23-c3002693dc10 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:53:17 compute-2 nova_compute[226829]: 2026-01-31 07:53:17.531 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:53:17 compute-2 nova_compute[226829]: 2026-01-31 07:53:17.531 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:53:17 compute-2 nova_compute[226829]: 2026-01-31 07:53:17.584 226833 DEBUG nova.compute.manager [req-98806c41-2da8-4b69-bc4e-5d8397643455 req-ec18a03c-05ce-4259-8b09-5fe96f56fb7b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received event network-changed-6cd8ab35-cc78-4cb7-b84e-09ec579e9f19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:53:17 compute-2 nova_compute[226829]: 2026-01-31 07:53:17.584 226833 DEBUG nova.compute.manager [req-98806c41-2da8-4b69-bc4e-5d8397643455 req-ec18a03c-05ce-4259-8b09-5fe96f56fb7b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Refreshing instance network info cache due to event network-changed-6cd8ab35-cc78-4cb7-b84e-09ec579e9f19. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:53:17 compute-2 nova_compute[226829]: 2026-01-31 07:53:17.585 226833 DEBUG oslo_concurrency.lockutils [req-98806c41-2da8-4b69-bc4e-5d8397643455 req-ec18a03c-05ce-4259-8b09-5fe96f56fb7b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:53:17 compute-2 nova_compute[226829]: 2026-01-31 07:53:17.593 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:53:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:17.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:18 compute-2 ceph-mon[77282]: pgmap v1681: 305 pgs: 305 active+clean; 326 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.2 MiB/s wr, 135 op/s
Jan 31 07:53:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2029330616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:53:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2228712792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:53:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:53:18 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1793600199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:53:18 compute-2 nova_compute[226829]: 2026-01-31 07:53:18.098 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:53:18 compute-2 nova_compute[226829]: 2026-01-31 07:53:18.105 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:53:18 compute-2 nova_compute[226829]: 2026-01-31 07:53:18.153 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:53:18 compute-2 nova_compute[226829]: 2026-01-31 07:53:18.271 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:53:18 compute-2 nova_compute[226829]: 2026-01-31 07:53:18.271 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:53:18 compute-2 nova_compute[226829]: 2026-01-31 07:53:18.677 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:53:19 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1793600199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:53:19 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3412492290' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:53:19 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3412492290' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:53:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:19.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:19 compute-2 nova_compute[226829]: 2026-01-31 07:53:19.653 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:53:19 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/556546885' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:53:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:53:19 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/556546885' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:53:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:53:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:19.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:53:20 compute-2 ceph-mon[77282]: pgmap v1682: 305 pgs: 305 active+clean; 278 MiB data, 822 MiB used, 20 GiB / 21 GiB avail; 165 KiB/s rd, 2.9 MiB/s wr, 97 op/s
Jan 31 07:53:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/556546885' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:53:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/556546885' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.376 226833 DEBUG nova.network.neutron [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Updating instance_info_cache with network_info: [{"id": "db982ab1-0c3e-4386-804d-c70f4b91053a", "address": "fa:16:3e:97:6f:85", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb982ab1-0c", "ovs_interfaceid": "db982ab1-0c3e-4386-804d-c70f4b91053a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bf305380-6000-4cea-86cf-123bf60de12a", "address": "fa:16:3e:39:34:eb", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf305380-60", "ovs_interfaceid": "bf305380-6000-4cea-86cf-123bf60de12a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6cd8ab35-cc78-4cb7-b84e-09ec579e9f19", "address": "fa:16:3e:ea:ea:c9", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd8ab35-cc", "ovs_interfaceid": "6cd8ab35-cc78-4cb7-b84e-09ec579e9f19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.417 226833 DEBUG oslo_concurrency.lockutils [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Releasing lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.420 226833 DEBUG oslo_concurrency.lockutils [req-98806c41-2da8-4b69-bc4e-5d8397643455 req-ec18a03c-05ce-4259-8b09-5fe96f56fb7b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.421 226833 DEBUG nova.network.neutron [req-98806c41-2da8-4b69-bc4e-5d8397643455 req-ec18a03c-05ce-4259-8b09-5fe96f56fb7b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Refreshing network info cache for port 6cd8ab35-cc78-4cb7-b84e-09ec579e9f19 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.427 226833 DEBUG nova.virt.libvirt.vif [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:52:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-867555935',display_name='tempest-AttachInterfacesTestJSON-server-867555935',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-867555935',id=73,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPX6GTdGdGcfJav9VD2vKr9cDmWhHD/qSDkhR2o41JGZ+VcYlHVD/4Hcc3zTsC+yiuWl8z4VSKy7Wi9c75E6Y4p1/iIqUHnIU8ZYXlvl82jhWPIfK08UcCczSxHzDyzJ+g==',key_name='tempest-keypair-461427290',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:52:41Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-tpxrd0ai',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:52:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=66066b76-4c92-4b20-ba23-c3002693dc10,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6cd8ab35-cc78-4cb7-b84e-09ec579e9f19", "address": "fa:16:3e:ea:ea:c9", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd8ab35-cc", "ovs_interfaceid": "6cd8ab35-cc78-4cb7-b84e-09ec579e9f19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.428 226833 DEBUG nova.network.os_vif_util [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "6cd8ab35-cc78-4cb7-b84e-09ec579e9f19", "address": "fa:16:3e:ea:ea:c9", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd8ab35-cc", "ovs_interfaceid": "6cd8ab35-cc78-4cb7-b84e-09ec579e9f19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.429 226833 DEBUG nova.network.os_vif_util [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:ea:c9,bridge_name='br-int',has_traffic_filtering=True,id=6cd8ab35-cc78-4cb7-b84e-09ec579e9f19,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cd8ab35-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.430 226833 DEBUG os_vif [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:ea:c9,bridge_name='br-int',has_traffic_filtering=True,id=6cd8ab35-cc78-4cb7-b84e-09ec579e9f19,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cd8ab35-cc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.431 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.432 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.433 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.442 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.443 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6cd8ab35-cc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.443 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6cd8ab35-cc, col_values=(('external_ids', {'iface-id': '6cd8ab35-cc78-4cb7-b84e-09ec579e9f19', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ea:ea:c9', 'vm-uuid': '66066b76-4c92-4b20-ba23-c3002693dc10'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:20 compute-2 NetworkManager[48999]: <info>  [1769846000.4466] manager: (tap6cd8ab35-cc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/119)
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.445 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.448 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.454 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.455 226833 INFO os_vif [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:ea:c9,bridge_name='br-int',has_traffic_filtering=True,id=6cd8ab35-cc78-4cb7-b84e-09ec579e9f19,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cd8ab35-cc')
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.456 226833 DEBUG nova.virt.libvirt.vif [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:52:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-867555935',display_name='tempest-AttachInterfacesTestJSON-server-867555935',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-867555935',id=73,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPX6GTdGdGcfJav9VD2vKr9cDmWhHD/qSDkhR2o41JGZ+VcYlHVD/4Hcc3zTsC+yiuWl8z4VSKy7Wi9c75E6Y4p1/iIqUHnIU8ZYXlvl82jhWPIfK08UcCczSxHzDyzJ+g==',key_name='tempest-keypair-461427290',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:52:41Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-tpxrd0ai',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:52:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=66066b76-4c92-4b20-ba23-c3002693dc10,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6cd8ab35-cc78-4cb7-b84e-09ec579e9f19", "address": "fa:16:3e:ea:ea:c9", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd8ab35-cc", "ovs_interfaceid": "6cd8ab35-cc78-4cb7-b84e-09ec579e9f19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.457 226833 DEBUG nova.network.os_vif_util [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "6cd8ab35-cc78-4cb7-b84e-09ec579e9f19", "address": "fa:16:3e:ea:ea:c9", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd8ab35-cc", "ovs_interfaceid": "6cd8ab35-cc78-4cb7-b84e-09ec579e9f19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.457 226833 DEBUG nova.network.os_vif_util [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:ea:c9,bridge_name='br-int',has_traffic_filtering=True,id=6cd8ab35-cc78-4cb7-b84e-09ec579e9f19,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cd8ab35-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.460 226833 DEBUG nova.virt.libvirt.guest [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] attach device xml: <interface type="ethernet">
Jan 31 07:53:20 compute-2 nova_compute[226829]:   <mac address="fa:16:3e:ea:ea:c9"/>
Jan 31 07:53:20 compute-2 nova_compute[226829]:   <model type="virtio"/>
Jan 31 07:53:20 compute-2 nova_compute[226829]:   <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:53:20 compute-2 nova_compute[226829]:   <mtu size="1442"/>
Jan 31 07:53:20 compute-2 nova_compute[226829]:   <target dev="tap6cd8ab35-cc"/>
Jan 31 07:53:20 compute-2 nova_compute[226829]: </interface>
Jan 31 07:53:20 compute-2 nova_compute[226829]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 31 07:53:20 compute-2 NetworkManager[48999]: <info>  [1769846000.4709] manager: (tap6cd8ab35-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/120)
Jan 31 07:53:20 compute-2 kernel: tap6cd8ab35-cc: entered promiscuous mode
Jan 31 07:53:20 compute-2 ovn_controller[133834]: 2026-01-31T07:53:20Z|00209|binding|INFO|Claiming lport 6cd8ab35-cc78-4cb7-b84e-09ec579e9f19 for this chassis.
Jan 31 07:53:20 compute-2 ovn_controller[133834]: 2026-01-31T07:53:20Z|00210|binding|INFO|6cd8ab35-cc78-4cb7-b84e-09ec579e9f19: Claiming fa:16:3e:ea:ea:c9 10.100.0.3
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.473 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:20 compute-2 ovn_controller[133834]: 2026-01-31T07:53:20Z|00211|binding|INFO|Setting lport 6cd8ab35-cc78-4cb7-b84e-09ec579e9f19 ovn-installed in OVS
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.483 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:20 compute-2 ovn_controller[133834]: 2026-01-31T07:53:20Z|00212|binding|INFO|Setting lport 6cd8ab35-cc78-4cb7-b84e-09ec579e9f19 up in Southbound
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.487 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:20.485 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:ea:c9 10.100.0.3'], port_security=['fa:16:3e:ea:ea:c9 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '66066b76-4c92-4b20-ba23-c3002693dc10', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-485494d9-5360-41c3-a10e-ef5098af0809', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8cbd6cc22654dfab04487522a63426c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c7e4e426-0b76-41c4-8dcb-ea007c31db76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44651228-c761-41fc-b495-d4156af21548, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=6cd8ab35-cc78-4cb7-b84e-09ec579e9f19) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:53:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:20.488 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 6cd8ab35-cc78-4cb7-b84e-09ec579e9f19 in datapath 485494d9-5360-41c3-a10e-ef5098af0809 bound to our chassis
Jan 31 07:53:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:20.491 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 485494d9-5360-41c3-a10e-ef5098af0809
Jan 31 07:53:20 compute-2 systemd-udevd[258854]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:53:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:20.507 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2301fae2-bea8-452e-87b9-815f1569a53f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:20 compute-2 NetworkManager[48999]: <info>  [1769846000.5146] device (tap6cd8ab35-cc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:53:20 compute-2 NetworkManager[48999]: <info>  [1769846000.5157] device (tap6cd8ab35-cc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:53:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:20.534 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[a8843667-a8aa-47dc-940b-b18ef7d03b01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:20.539 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[31114905-7469-49a4-88db-3fa8254144e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:20.563 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[2bd05be9-6298-4dc0-9b8b-b3a592b5b0ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:20.575 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ed6db199-0271-43a3-b2b0-4631005de908]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap485494d9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:4b:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621274, 'reachable_time': 37963, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258861, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:20.591 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8f434f95-a684-4b86-97ac-2fdf66eec85b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap485494d9-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 621283, 'tstamp': 621283}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258862, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap485494d9-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 621285, 'tstamp': 621285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258862, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:20.593 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap485494d9-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:20.596 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap485494d9-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.595 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:20.596 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:53:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:20.597 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap485494d9-50, col_values=(('external_ids', {'iface-id': 'cbb039b2-15df-45bd-8800-81213ecc7011'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:20.597 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.637 226833 DEBUG nova.virt.libvirt.driver [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.637 226833 DEBUG nova.virt.libvirt.driver [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.638 226833 DEBUG nova.virt.libvirt.driver [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No VIF found with MAC fa:16:3e:97:6f:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.638 226833 DEBUG nova.virt.libvirt.driver [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No VIF found with MAC fa:16:3e:39:34:eb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.639 226833 DEBUG nova.virt.libvirt.driver [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No VIF found with MAC fa:16:3e:ea:ea:c9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.668 226833 DEBUG nova.virt.libvirt.guest [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:53:20 compute-2 nova_compute[226829]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:53:20 compute-2 nova_compute[226829]:   <nova:name>tempest-AttachInterfacesTestJSON-server-867555935</nova:name>
Jan 31 07:53:20 compute-2 nova_compute[226829]:   <nova:creationTime>2026-01-31 07:53:20</nova:creationTime>
Jan 31 07:53:20 compute-2 nova_compute[226829]:   <nova:flavor name="m1.nano">
Jan 31 07:53:20 compute-2 nova_compute[226829]:     <nova:memory>128</nova:memory>
Jan 31 07:53:20 compute-2 nova_compute[226829]:     <nova:disk>1</nova:disk>
Jan 31 07:53:20 compute-2 nova_compute[226829]:     <nova:swap>0</nova:swap>
Jan 31 07:53:20 compute-2 nova_compute[226829]:     <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:53:20 compute-2 nova_compute[226829]:     <nova:vcpus>1</nova:vcpus>
Jan 31 07:53:20 compute-2 nova_compute[226829]:   </nova:flavor>
Jan 31 07:53:20 compute-2 nova_compute[226829]:   <nova:owner>
Jan 31 07:53:20 compute-2 nova_compute[226829]:     <nova:user uuid="1e1c5eef3d264666bc90735dd338d82a">tempest-AttachInterfacesTestJSON-668371425-project-member</nova:user>
Jan 31 07:53:20 compute-2 nova_compute[226829]:     <nova:project uuid="a8cbd6cc22654dfab04487522a63426c">tempest-AttachInterfacesTestJSON-668371425</nova:project>
Jan 31 07:53:20 compute-2 nova_compute[226829]:   </nova:owner>
Jan 31 07:53:20 compute-2 nova_compute[226829]:   <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:53:20 compute-2 nova_compute[226829]:   <nova:ports>
Jan 31 07:53:20 compute-2 nova_compute[226829]:     <nova:port uuid="db982ab1-0c3e-4386-804d-c70f4b91053a">
Jan 31 07:53:20 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 07:53:20 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:20 compute-2 nova_compute[226829]:     <nova:port uuid="bf305380-6000-4cea-86cf-123bf60de12a">
Jan 31 07:53:20 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 07:53:20 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:20 compute-2 nova_compute[226829]:     <nova:port uuid="6cd8ab35-cc78-4cb7-b84e-09ec579e9f19">
Jan 31 07:53:20 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 07:53:20 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:20 compute-2 nova_compute[226829]:   </nova:ports>
Jan 31 07:53:20 compute-2 nova_compute[226829]: </nova:instance>
Jan 31 07:53:20 compute-2 nova_compute[226829]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 31 07:53:20 compute-2 nova_compute[226829]: 2026-01-31 07:53:20.700 226833 DEBUG oslo_concurrency.lockutils [None req-9fbe4809-d227-480d-a691-d4e7af4ea8d5 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "interface-66066b76-4c92-4b20-ba23-c3002693dc10-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 9.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:53:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1717920373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:53:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2781719887' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:53:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2781719887' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:53:21 compute-2 nova_compute[226829]: 2026-01-31 07:53:21.284 226833 DEBUG nova.compute.manager [req-48cdb7d1-bbcd-4a41-b3f1-9d1b00c07920 req-1666163f-f527-4453-8e2d-71323446f4a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received event network-vif-plugged-6cd8ab35-cc78-4cb7-b84e-09ec579e9f19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:53:21 compute-2 nova_compute[226829]: 2026-01-31 07:53:21.285 226833 DEBUG oslo_concurrency.lockutils [req-48cdb7d1-bbcd-4a41-b3f1-9d1b00c07920 req-1666163f-f527-4453-8e2d-71323446f4a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:53:21 compute-2 nova_compute[226829]: 2026-01-31 07:53:21.285 226833 DEBUG oslo_concurrency.lockutils [req-48cdb7d1-bbcd-4a41-b3f1-9d1b00c07920 req-1666163f-f527-4453-8e2d-71323446f4a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:53:21 compute-2 nova_compute[226829]: 2026-01-31 07:53:21.286 226833 DEBUG oslo_concurrency.lockutils [req-48cdb7d1-bbcd-4a41-b3f1-9d1b00c07920 req-1666163f-f527-4453-8e2d-71323446f4a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:53:21 compute-2 nova_compute[226829]: 2026-01-31 07:53:21.286 226833 DEBUG nova.compute.manager [req-48cdb7d1-bbcd-4a41-b3f1-9d1b00c07920 req-1666163f-f527-4453-8e2d-71323446f4a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] No waiting events found dispatching network-vif-plugged-6cd8ab35-cc78-4cb7-b84e-09ec579e9f19 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:53:21 compute-2 nova_compute[226829]: 2026-01-31 07:53:21.287 226833 WARNING nova.compute.manager [req-48cdb7d1-bbcd-4a41-b3f1-9d1b00c07920 req-1666163f-f527-4453-8e2d-71323446f4a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received unexpected event network-vif-plugged-6cd8ab35-cc78-4cb7-b84e-09ec579e9f19 for instance with vm_state active and task_state None.
Jan 31 07:53:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:21.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:53:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:21.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:53:21 compute-2 ovn_controller[133834]: 2026-01-31T07:53:21Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ea:ea:c9 10.100.0.3
Jan 31 07:53:21 compute-2 ovn_controller[133834]: 2026-01-31T07:53:21Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ea:ea:c9 10.100.0.3
Jan 31 07:53:22 compute-2 nova_compute[226829]: 2026-01-31 07:53:22.247 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:53:22 compute-2 nova_compute[226829]: 2026-01-31 07:53:22.247 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:53:22 compute-2 ceph-mon[77282]: pgmap v1683: 305 pgs: 305 active+clean; 248 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 74 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Jan 31 07:53:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2052022062' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:53:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2052022062' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:53:22 compute-2 nova_compute[226829]: 2026-01-31 07:53:22.350 226833 DEBUG nova.network.neutron [req-98806c41-2da8-4b69-bc4e-5d8397643455 req-ec18a03c-05ce-4259-8b09-5fe96f56fb7b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Updated VIF entry in instance network info cache for port 6cd8ab35-cc78-4cb7-b84e-09ec579e9f19. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:53:22 compute-2 nova_compute[226829]: 2026-01-31 07:53:22.350 226833 DEBUG nova.network.neutron [req-98806c41-2da8-4b69-bc4e-5d8397643455 req-ec18a03c-05ce-4259-8b09-5fe96f56fb7b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Updating instance_info_cache with network_info: [{"id": "db982ab1-0c3e-4386-804d-c70f4b91053a", "address": "fa:16:3e:97:6f:85", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb982ab1-0c", "ovs_interfaceid": "db982ab1-0c3e-4386-804d-c70f4b91053a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bf305380-6000-4cea-86cf-123bf60de12a", "address": "fa:16:3e:39:34:eb", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf305380-60", "ovs_interfaceid": "bf305380-6000-4cea-86cf-123bf60de12a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6cd8ab35-cc78-4cb7-b84e-09ec579e9f19", "address": "fa:16:3e:ea:ea:c9", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd8ab35-cc", "ovs_interfaceid": "6cd8ab35-cc78-4cb7-b84e-09ec579e9f19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:53:22 compute-2 nova_compute[226829]: 2026-01-31 07:53:22.369 226833 DEBUG oslo_concurrency.lockutils [req-98806c41-2da8-4b69-bc4e-5d8397643455 req-ec18a03c-05ce-4259-8b09-5fe96f56fb7b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:53:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.005000133s ======
Jan 31 07:53:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:23.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000133s
Jan 31 07:53:23 compute-2 nova_compute[226829]: 2026-01-31 07:53:23.661 226833 DEBUG nova.compute.manager [req-f7b9b5ea-353c-4018-9cdc-8190f7c84e3c req-efab82e8-fffd-4beb-93b4-d95a60587c21 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received event network-vif-plugged-6cd8ab35-cc78-4cb7-b84e-09ec579e9f19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:53:23 compute-2 nova_compute[226829]: 2026-01-31 07:53:23.662 226833 DEBUG oslo_concurrency.lockutils [req-f7b9b5ea-353c-4018-9cdc-8190f7c84e3c req-efab82e8-fffd-4beb-93b4-d95a60587c21 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:53:23 compute-2 nova_compute[226829]: 2026-01-31 07:53:23.662 226833 DEBUG oslo_concurrency.lockutils [req-f7b9b5ea-353c-4018-9cdc-8190f7c84e3c req-efab82e8-fffd-4beb-93b4-d95a60587c21 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:53:23 compute-2 nova_compute[226829]: 2026-01-31 07:53:23.663 226833 DEBUG oslo_concurrency.lockutils [req-f7b9b5ea-353c-4018-9cdc-8190f7c84e3c req-efab82e8-fffd-4beb-93b4-d95a60587c21 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:53:23 compute-2 nova_compute[226829]: 2026-01-31 07:53:23.663 226833 DEBUG nova.compute.manager [req-f7b9b5ea-353c-4018-9cdc-8190f7c84e3c req-efab82e8-fffd-4beb-93b4-d95a60587c21 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] No waiting events found dispatching network-vif-plugged-6cd8ab35-cc78-4cb7-b84e-09ec579e9f19 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:53:23 compute-2 nova_compute[226829]: 2026-01-31 07:53:23.664 226833 WARNING nova.compute.manager [req-f7b9b5ea-353c-4018-9cdc-8190f7c84e3c req-efab82e8-fffd-4beb-93b4-d95a60587c21 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received unexpected event network-vif-plugged-6cd8ab35-cc78-4cb7-b84e-09ec579e9f19 for instance with vm_state active and task_state None.
Jan 31 07:53:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:23.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:53:24 compute-2 podman[258865]: 2026-01-31 07:53:24.198702795 +0000 UTC m=+0.080589274 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 07:53:24 compute-2 ceph-mon[77282]: pgmap v1684: 305 pgs: 305 active+clean; 182 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 84 KiB/s rd, 2.1 MiB/s wr, 103 op/s
Jan 31 07:53:24 compute-2 nova_compute[226829]: 2026-01-31 07:53:24.656 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:25 compute-2 nova_compute[226829]: 2026-01-31 07:53:25.267 226833 DEBUG oslo_concurrency.lockutils [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "interface-66066b76-4c92-4b20-ba23-c3002693dc10-863150d4-3984-41d0-a375-230157e3a474" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:53:25 compute-2 nova_compute[226829]: 2026-01-31 07:53:25.267 226833 DEBUG oslo_concurrency.lockutils [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "interface-66066b76-4c92-4b20-ba23-c3002693dc10-863150d4-3984-41d0-a375-230157e3a474" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:53:25 compute-2 nova_compute[226829]: 2026-01-31 07:53:25.268 226833 DEBUG nova.objects.instance [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lazy-loading 'flavor' on Instance uuid 66066b76-4c92-4b20-ba23-c3002693dc10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:53:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:25.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:25 compute-2 nova_compute[226829]: 2026-01-31 07:53:25.447 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:53:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:25.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:53:26 compute-2 nova_compute[226829]: 2026-01-31 07:53:26.574 226833 DEBUG nova.objects.instance [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lazy-loading 'pci_requests' on Instance uuid 66066b76-4c92-4b20-ba23-c3002693dc10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:53:26 compute-2 nova_compute[226829]: 2026-01-31 07:53:26.618 226833 DEBUG nova.network.neutron [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:53:26 compute-2 ceph-mon[77282]: pgmap v1685: 305 pgs: 305 active+clean; 147 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 79 KiB/s rd, 2.0 MiB/s wr, 114 op/s
Jan 31 07:53:27 compute-2 nova_compute[226829]: 2026-01-31 07:53:27.104 226833 DEBUG nova.policy [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1e1c5eef3d264666bc90735dd338d82a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a8cbd6cc22654dfab04487522a63426c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:53:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:27.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:27 compute-2 ovn_controller[133834]: 2026-01-31T07:53:27Z|00213|binding|INFO|Releasing lport cbb039b2-15df-45bd-8800-81213ecc7011 from this chassis (sb_readonly=0)
Jan 31 07:53:27 compute-2 nova_compute[226829]: 2026-01-31 07:53:27.451 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:27.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:28 compute-2 ceph-mon[77282]: pgmap v1686: 305 pgs: 305 active+clean; 121 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 1.1 MiB/s wr, 115 op/s
Jan 31 07:53:28 compute-2 ovn_controller[133834]: 2026-01-31T07:53:28Z|00214|binding|INFO|Releasing lport cbb039b2-15df-45bd-8800-81213ecc7011 from this chassis (sb_readonly=0)
Jan 31 07:53:28 compute-2 nova_compute[226829]: 2026-01-31 07:53:28.151 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:28 compute-2 nova_compute[226829]: 2026-01-31 07:53:28.189 226833 DEBUG nova.network.neutron [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Successfully updated port: 863150d4-3984-41d0-a375-230157e3a474 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:53:28 compute-2 nova_compute[226829]: 2026-01-31 07:53:28.235 226833 DEBUG oslo_concurrency.lockutils [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:53:28 compute-2 nova_compute[226829]: 2026-01-31 07:53:28.236 226833 DEBUG oslo_concurrency.lockutils [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquired lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:53:28 compute-2 nova_compute[226829]: 2026-01-31 07:53:28.236 226833 DEBUG nova.network.neutron [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:53:28 compute-2 nova_compute[226829]: 2026-01-31 07:53:28.364 226833 DEBUG nova.compute.manager [req-f721cd4b-842d-4052-b4aa-1e6e8c85c42d req-381b6cdf-5cb0-4b22-b950-541aef8bd790 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received event network-changed-863150d4-3984-41d0-a375-230157e3a474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:53:28 compute-2 nova_compute[226829]: 2026-01-31 07:53:28.365 226833 DEBUG nova.compute.manager [req-f721cd4b-842d-4052-b4aa-1e6e8c85c42d req-381b6cdf-5cb0-4b22-b950-541aef8bd790 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Refreshing instance network info cache due to event network-changed-863150d4-3984-41d0-a375-230157e3a474. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:53:28 compute-2 nova_compute[226829]: 2026-01-31 07:53:28.366 226833 DEBUG oslo_concurrency.lockutils [req-f721cd4b-842d-4052-b4aa-1e6e8c85c42d req-381b6cdf-5cb0-4b22-b950-541aef8bd790 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:53:28 compute-2 nova_compute[226829]: 2026-01-31 07:53:28.590 226833 WARNING nova.network.neutron [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] 485494d9-5360-41c3-a10e-ef5098af0809 already exists in list: networks containing: ['485494d9-5360-41c3-a10e-ef5098af0809']. ignoring it
Jan 31 07:53:28 compute-2 nova_compute[226829]: 2026-01-31 07:53:28.590 226833 WARNING nova.network.neutron [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] 485494d9-5360-41c3-a10e-ef5098af0809 already exists in list: networks containing: ['485494d9-5360-41c3-a10e-ef5098af0809']. ignoring it
Jan 31 07:53:28 compute-2 nova_compute[226829]: 2026-01-31 07:53:28.591 226833 WARNING nova.network.neutron [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] 485494d9-5360-41c3-a10e-ef5098af0809 already exists in list: networks containing: ['485494d9-5360-41c3-a10e-ef5098af0809']. ignoring it
Jan 31 07:53:28 compute-2 ovn_controller[133834]: 2026-01-31T07:53:28Z|00215|binding|INFO|Releasing lport cbb039b2-15df-45bd-8800-81213ecc7011 from this chassis (sb_readonly=0)
Jan 31 07:53:28 compute-2 nova_compute[226829]: 2026-01-31 07:53:28.658 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:53:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:29.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:29 compute-2 nova_compute[226829]: 2026-01-31 07:53:29.691 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:29.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:30 compute-2 ceph-mon[77282]: pgmap v1687: 305 pgs: 305 active+clean; 121 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 777 KiB/s wr, 82 op/s
Jan 31 07:53:30 compute-2 nova_compute[226829]: 2026-01-31 07:53:30.449 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:31.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:53:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:31.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:53:32 compute-2 ceph-mon[77282]: pgmap v1688: 305 pgs: 305 active+clean; 121 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 9.2 KiB/s wr, 57 op/s
Jan 31 07:53:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:33.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:53:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:33.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:53:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:53:34 compute-2 ceph-mon[77282]: pgmap v1689: 305 pgs: 305 active+clean; 121 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 7.9 KiB/s wr, 43 op/s
Jan 31 07:53:34 compute-2 nova_compute[226829]: 2026-01-31 07:53:34.693 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:35.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:35 compute-2 nova_compute[226829]: 2026-01-31 07:53:35.451 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:35.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:36 compute-2 sudo[258898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:53:36 compute-2 sudo[258898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:36 compute-2 sudo[258898]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:36 compute-2 podman[258922]: 2026-01-31 07:53:36.449679434 +0000 UTC m=+0.050942715 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 07:53:36 compute-2 sudo[258934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:53:36 compute-2 sudo[258934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:36 compute-2 sudo[258934]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:36 compute-2 ovn_controller[133834]: 2026-01-31T07:53:36Z|00216|binding|INFO|Releasing lport cbb039b2-15df-45bd-8800-81213ecc7011 from this chassis (sb_readonly=0)
Jan 31 07:53:37 compute-2 nova_compute[226829]: 2026-01-31 07:53:37.005 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:37.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:37 compute-2 ceph-mon[77282]: pgmap v1690: 305 pgs: 305 active+clean; 121 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 11 KiB/s wr, 27 op/s
Jan 31 07:53:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:37.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:37 compute-2 sudo[258966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:53:37 compute-2 sudo[258966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:37 compute-2 sudo[258966]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:37 compute-2 sudo[258991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:53:37 compute-2 sudo[258991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:37 compute-2 sudo[258991]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:37 compute-2 sudo[259016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:53:37 compute-2 sudo[259016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:37 compute-2 sudo[259016]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:38 compute-2 sudo[259041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:53:38 compute-2 sudo[259041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:38 compute-2 sudo[259041]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:38 compute-2 ceph-mon[77282]: pgmap v1691: 305 pgs: 305 active+clean; 121 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 9.2 KiB/s rd, 8.3 KiB/s wr, 13 op/s
Jan 31 07:53:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:53:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:39.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:39 compute-2 nova_compute[226829]: 2026-01-31 07:53:39.769 226833 DEBUG nova.network.neutron [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Updating instance_info_cache with network_info: [{"id": "db982ab1-0c3e-4386-804d-c70f4b91053a", "address": "fa:16:3e:97:6f:85", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb982ab1-0c", "ovs_interfaceid": "db982ab1-0c3e-4386-804d-c70f4b91053a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bf305380-6000-4cea-86cf-123bf60de12a", "address": "fa:16:3e:39:34:eb", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf305380-60", "ovs_interfaceid": "bf305380-6000-4cea-86cf-123bf60de12a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6cd8ab35-cc78-4cb7-b84e-09ec579e9f19", "address": "fa:16:3e:ea:ea:c9", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd8ab35-cc", "ovs_interfaceid": "6cd8ab35-cc78-4cb7-b84e-09ec579e9f19", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "863150d4-3984-41d0-a375-230157e3a474", "address": "fa:16:3e:a3:ce:a8", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap863150d4-39", "ovs_interfaceid": "863150d4-3984-41d0-a375-230157e3a474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:53:39 compute-2 nova_compute[226829]: 2026-01-31 07:53:39.771 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:39 compute-2 nova_compute[226829]: 2026-01-31 07:53:39.829 226833 DEBUG oslo_concurrency.lockutils [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Releasing lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:53:39 compute-2 nova_compute[226829]: 2026-01-31 07:53:39.830 226833 DEBUG oslo_concurrency.lockutils [req-f721cd4b-842d-4052-b4aa-1e6e8c85c42d req-381b6cdf-5cb0-4b22-b950-541aef8bd790 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:53:39 compute-2 nova_compute[226829]: 2026-01-31 07:53:39.831 226833 DEBUG nova.network.neutron [req-f721cd4b-842d-4052-b4aa-1e6e8c85c42d req-381b6cdf-5cb0-4b22-b950-541aef8bd790 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Refreshing network info cache for port 863150d4-3984-41d0-a375-230157e3a474 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:53:39 compute-2 nova_compute[226829]: 2026-01-31 07:53:39.834 226833 DEBUG nova.virt.libvirt.vif [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:52:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-867555935',display_name='tempest-AttachInterfacesTestJSON-server-867555935',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-867555935',id=73,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPX6GTdGdGcfJav9VD2vKr9cDmWhHD/qSDkhR2o41JGZ+VcYlHVD/4Hcc3zTsC+yiuWl8z4VSKy7Wi9c75E6Y4p1/iIqUHnIU8ZYXlvl82jhWPIfK08UcCczSxHzDyzJ+g==',key_name='tempest-keypair-461427290',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:52:41Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-tpxrd0ai',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:52:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=66066b76-4c92-4b20-ba23-c3002693dc10,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "863150d4-3984-41d0-a375-230157e3a474", "address": "fa:16:3e:a3:ce:a8", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap863150d4-39", "ovs_interfaceid": "863150d4-3984-41d0-a375-230157e3a474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:53:39 compute-2 nova_compute[226829]: 2026-01-31 07:53:39.834 226833 DEBUG nova.network.os_vif_util [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "863150d4-3984-41d0-a375-230157e3a474", "address": "fa:16:3e:a3:ce:a8", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap863150d4-39", "ovs_interfaceid": "863150d4-3984-41d0-a375-230157e3a474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:53:39 compute-2 nova_compute[226829]: 2026-01-31 07:53:39.835 226833 DEBUG nova.network.os_vif_util [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:ce:a8,bridge_name='br-int',has_traffic_filtering=True,id=863150d4-3984-41d0-a375-230157e3a474,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap863150d4-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:53:39 compute-2 nova_compute[226829]: 2026-01-31 07:53:39.836 226833 DEBUG os_vif [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:ce:a8,bridge_name='br-int',has_traffic_filtering=True,id=863150d4-3984-41d0-a375-230157e3a474,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap863150d4-39') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:53:39 compute-2 nova_compute[226829]: 2026-01-31 07:53:39.836 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:39 compute-2 nova_compute[226829]: 2026-01-31 07:53:39.837 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:39 compute-2 nova_compute[226829]: 2026-01-31 07:53:39.837 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:53:39 compute-2 nova_compute[226829]: 2026-01-31 07:53:39.840 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:39 compute-2 nova_compute[226829]: 2026-01-31 07:53:39.840 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap863150d4-39, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:39 compute-2 nova_compute[226829]: 2026-01-31 07:53:39.841 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap863150d4-39, col_values=(('external_ids', {'iface-id': '863150d4-3984-41d0-a375-230157e3a474', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a3:ce:a8', 'vm-uuid': '66066b76-4c92-4b20-ba23-c3002693dc10'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:39 compute-2 nova_compute[226829]: 2026-01-31 07:53:39.842 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:39 compute-2 NetworkManager[48999]: <info>  [1769846019.8436] manager: (tap863150d4-39): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/121)
Jan 31 07:53:39 compute-2 nova_compute[226829]: 2026-01-31 07:53:39.845 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:53:39 compute-2 nova_compute[226829]: 2026-01-31 07:53:39.850 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:39 compute-2 nova_compute[226829]: 2026-01-31 07:53:39.851 226833 INFO os_vif [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:ce:a8,bridge_name='br-int',has_traffic_filtering=True,id=863150d4-3984-41d0-a375-230157e3a474,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap863150d4-39')
Jan 31 07:53:39 compute-2 nova_compute[226829]: 2026-01-31 07:53:39.852 226833 DEBUG nova.virt.libvirt.vif [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:52:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-867555935',display_name='tempest-AttachInterfacesTestJSON-server-867555935',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-867555935',id=73,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPX6GTdGdGcfJav9VD2vKr9cDmWhHD/qSDkhR2o41JGZ+VcYlHVD/4Hcc3zTsC+yiuWl8z4VSKy7Wi9c75E6Y4p1/iIqUHnIU8ZYXlvl82jhWPIfK08UcCczSxHzDyzJ+g==',key_name='tempest-keypair-461427290',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:52:41Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-tpxrd0ai',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:52:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=66066b76-4c92-4b20-ba23-c3002693dc10,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "863150d4-3984-41d0-a375-230157e3a474", "address": "fa:16:3e:a3:ce:a8", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap863150d4-39", "ovs_interfaceid": "863150d4-3984-41d0-a375-230157e3a474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:53:39 compute-2 nova_compute[226829]: 2026-01-31 07:53:39.852 226833 DEBUG nova.network.os_vif_util [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "863150d4-3984-41d0-a375-230157e3a474", "address": "fa:16:3e:a3:ce:a8", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap863150d4-39", "ovs_interfaceid": "863150d4-3984-41d0-a375-230157e3a474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:53:39 compute-2 nova_compute[226829]: 2026-01-31 07:53:39.853 226833 DEBUG nova.network.os_vif_util [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:ce:a8,bridge_name='br-int',has_traffic_filtering=True,id=863150d4-3984-41d0-a375-230157e3a474,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap863150d4-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:53:39 compute-2 nova_compute[226829]: 2026-01-31 07:53:39.855 226833 DEBUG nova.virt.libvirt.guest [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] attach device xml: <interface type="ethernet">
Jan 31 07:53:39 compute-2 nova_compute[226829]:   <mac address="fa:16:3e:a3:ce:a8"/>
Jan 31 07:53:39 compute-2 nova_compute[226829]:   <model type="virtio"/>
Jan 31 07:53:39 compute-2 nova_compute[226829]:   <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:53:39 compute-2 nova_compute[226829]:   <mtu size="1442"/>
Jan 31 07:53:39 compute-2 nova_compute[226829]:   <target dev="tap863150d4-39"/>
Jan 31 07:53:39 compute-2 nova_compute[226829]: </interface>
Jan 31 07:53:39 compute-2 nova_compute[226829]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 31 07:53:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:53:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:39.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:53:39 compute-2 kernel: tap863150d4-39: entered promiscuous mode
Jan 31 07:53:39 compute-2 NetworkManager[48999]: <info>  [1769846019.8713] manager: (tap863150d4-39): new Tun device (/org/freedesktop/NetworkManager/Devices/122)
Jan 31 07:53:39 compute-2 nova_compute[226829]: 2026-01-31 07:53:39.872 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:39 compute-2 ovn_controller[133834]: 2026-01-31T07:53:39Z|00217|binding|INFO|Claiming lport 863150d4-3984-41d0-a375-230157e3a474 for this chassis.
Jan 31 07:53:39 compute-2 ovn_controller[133834]: 2026-01-31T07:53:39Z|00218|binding|INFO|863150d4-3984-41d0-a375-230157e3a474: Claiming fa:16:3e:a3:ce:a8 10.100.0.13
Jan 31 07:53:39 compute-2 ovn_controller[133834]: 2026-01-31T07:53:39Z|00219|binding|INFO|Setting lport 863150d4-3984-41d0-a375-230157e3a474 ovn-installed in OVS
Jan 31 07:53:39 compute-2 nova_compute[226829]: 2026-01-31 07:53:39.879 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:39 compute-2 nova_compute[226829]: 2026-01-31 07:53:39.884 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:39 compute-2 systemd-udevd[259102]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:53:39 compute-2 ovn_controller[133834]: 2026-01-31T07:53:39Z|00220|binding|INFO|Setting lport 863150d4-3984-41d0-a375-230157e3a474 up in Southbound
Jan 31 07:53:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:39.899 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:ce:a8 10.100.0.13'], port_security=['fa:16:3e:a3:ce:a8 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1306725410', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '66066b76-4c92-4b20-ba23-c3002693dc10', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-485494d9-5360-41c3-a10e-ef5098af0809', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1306725410', 'neutron:project_id': 'a8cbd6cc22654dfab04487522a63426c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c7e4e426-0b76-41c4-8dcb-ea007c31db76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44651228-c761-41fc-b495-d4156af21548, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=863150d4-3984-41d0-a375-230157e3a474) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:53:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:39.900 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 863150d4-3984-41d0-a375-230157e3a474 in datapath 485494d9-5360-41c3-a10e-ef5098af0809 bound to our chassis
Jan 31 07:53:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:39.902 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 485494d9-5360-41c3-a10e-ef5098af0809
Jan 31 07:53:39 compute-2 NetworkManager[48999]: <info>  [1769846019.9067] device (tap863150d4-39): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:53:39 compute-2 NetworkManager[48999]: <info>  [1769846019.9074] device (tap863150d4-39): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:53:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:39.913 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ffdb376d-ef73-4f23-8de9-f5f695dd2e4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:39.933 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[c59896dc-a743-473c-b8c8-0556a1530913]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:39.937 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[0fdfb2da-10c6-492e-a88a-677a8077aa7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:39.955 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[3bdcd9bc-82de-4d23-bf52-7eb593a4f6c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:39.968 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[015e648a-0e35-4d5f-b125-aedeacedb153]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap485494d9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:4b:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621274, 'reachable_time': 37963, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259110, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:39.977 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9a50bfb8-629f-41d7-ac4d-b5ddbf1c013e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap485494d9-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 621283, 'tstamp': 621283}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259111, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap485494d9-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 621285, 'tstamp': 621285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259111, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:39.979 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap485494d9-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:39 compute-2 nova_compute[226829]: 2026-01-31 07:53:39.980 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:39.981 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap485494d9-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:39.981 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:53:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:39.982 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap485494d9-50, col_values=(('external_ids', {'iface-id': 'cbb039b2-15df-45bd-8800-81213ecc7011'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:39.982 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:53:40 compute-2 nova_compute[226829]: 2026-01-31 07:53:40.382 226833 DEBUG nova.virt.libvirt.driver [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:53:40 compute-2 nova_compute[226829]: 2026-01-31 07:53:40.383 226833 DEBUG nova.virt.libvirt.driver [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:53:40 compute-2 nova_compute[226829]: 2026-01-31 07:53:40.383 226833 DEBUG nova.virt.libvirt.driver [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No VIF found with MAC fa:16:3e:97:6f:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:53:40 compute-2 nova_compute[226829]: 2026-01-31 07:53:40.383 226833 DEBUG nova.virt.libvirt.driver [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No VIF found with MAC fa:16:3e:39:34:eb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:53:40 compute-2 nova_compute[226829]: 2026-01-31 07:53:40.383 226833 DEBUG nova.virt.libvirt.driver [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No VIF found with MAC fa:16:3e:ea:ea:c9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:53:40 compute-2 nova_compute[226829]: 2026-01-31 07:53:40.383 226833 DEBUG nova.virt.libvirt.driver [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No VIF found with MAC fa:16:3e:a3:ce:a8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:53:40 compute-2 ceph-mon[77282]: pgmap v1692: 305 pgs: 305 active+clean; 121 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 7.7 KiB/s wr, 1 op/s
Jan 31 07:53:40 compute-2 nova_compute[226829]: 2026-01-31 07:53:40.474 226833 DEBUG nova.virt.libvirt.guest [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:53:40 compute-2 nova_compute[226829]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:53:40 compute-2 nova_compute[226829]:   <nova:name>tempest-AttachInterfacesTestJSON-server-867555935</nova:name>
Jan 31 07:53:40 compute-2 nova_compute[226829]:   <nova:creationTime>2026-01-31 07:53:40</nova:creationTime>
Jan 31 07:53:40 compute-2 nova_compute[226829]:   <nova:flavor name="m1.nano">
Jan 31 07:53:40 compute-2 nova_compute[226829]:     <nova:memory>128</nova:memory>
Jan 31 07:53:40 compute-2 nova_compute[226829]:     <nova:disk>1</nova:disk>
Jan 31 07:53:40 compute-2 nova_compute[226829]:     <nova:swap>0</nova:swap>
Jan 31 07:53:40 compute-2 nova_compute[226829]:     <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:53:40 compute-2 nova_compute[226829]:     <nova:vcpus>1</nova:vcpus>
Jan 31 07:53:40 compute-2 nova_compute[226829]:   </nova:flavor>
Jan 31 07:53:40 compute-2 nova_compute[226829]:   <nova:owner>
Jan 31 07:53:40 compute-2 nova_compute[226829]:     <nova:user uuid="1e1c5eef3d264666bc90735dd338d82a">tempest-AttachInterfacesTestJSON-668371425-project-member</nova:user>
Jan 31 07:53:40 compute-2 nova_compute[226829]:     <nova:project uuid="a8cbd6cc22654dfab04487522a63426c">tempest-AttachInterfacesTestJSON-668371425</nova:project>
Jan 31 07:53:40 compute-2 nova_compute[226829]:   </nova:owner>
Jan 31 07:53:40 compute-2 nova_compute[226829]:   <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:53:40 compute-2 nova_compute[226829]:   <nova:ports>
Jan 31 07:53:40 compute-2 nova_compute[226829]:     <nova:port uuid="db982ab1-0c3e-4386-804d-c70f4b91053a">
Jan 31 07:53:40 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 07:53:40 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:40 compute-2 nova_compute[226829]:     <nova:port uuid="bf305380-6000-4cea-86cf-123bf60de12a">
Jan 31 07:53:40 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 07:53:40 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:40 compute-2 nova_compute[226829]:     <nova:port uuid="6cd8ab35-cc78-4cb7-b84e-09ec579e9f19">
Jan 31 07:53:40 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 07:53:40 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:40 compute-2 nova_compute[226829]:     <nova:port uuid="863150d4-3984-41d0-a375-230157e3a474">
Jan 31 07:53:40 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 07:53:40 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:40 compute-2 nova_compute[226829]:   </nova:ports>
Jan 31 07:53:40 compute-2 nova_compute[226829]: </nova:instance>
Jan 31 07:53:40 compute-2 nova_compute[226829]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 31 07:53:40 compute-2 nova_compute[226829]: 2026-01-31 07:53:40.623 226833 DEBUG oslo_concurrency.lockutils [None req-1d89f036-5864-4103-9b1b-ceea06ba8567 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "interface-66066b76-4c92-4b20-ba23-c3002693dc10-863150d4-3984-41d0-a375-230157e3a474" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 15.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:53:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:41.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:41 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:53:41 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:53:41 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:53:41 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:53:41 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:53:41 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:53:41 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:53:41 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:53:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:41.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:42 compute-2 ceph-mon[77282]: pgmap v1693: 305 pgs: 305 active+clean; 121 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 5.1 KiB/s wr, 0 op/s
Jan 31 07:53:42 compute-2 ovn_controller[133834]: 2026-01-31T07:53:42Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a3:ce:a8 10.100.0.13
Jan 31 07:53:42 compute-2 ovn_controller[133834]: 2026-01-31T07:53:42Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a3:ce:a8 10.100.0.13
Jan 31 07:53:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:43.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:53:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:53:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:43.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:53:44 compute-2 ceph-mon[77282]: pgmap v1694: 305 pgs: 305 active+clean; 121 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 3.7 KiB/s wr, 0 op/s
Jan 31 07:53:44 compute-2 nova_compute[226829]: 2026-01-31 07:53:44.772 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:44 compute-2 nova_compute[226829]: 2026-01-31 07:53:44.842 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:45.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:45 compute-2 nova_compute[226829]: 2026-01-31 07:53:45.437 226833 DEBUG nova.compute.manager [req-b0b1c249-9e57-4820-acf2-6e29d92678ee req-7f8d3eac-9f2d-42a7-925b-120fad43208b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received event network-vif-plugged-863150d4-3984-41d0-a375-230157e3a474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:53:45 compute-2 nova_compute[226829]: 2026-01-31 07:53:45.437 226833 DEBUG oslo_concurrency.lockutils [req-b0b1c249-9e57-4820-acf2-6e29d92678ee req-7f8d3eac-9f2d-42a7-925b-120fad43208b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:53:45 compute-2 nova_compute[226829]: 2026-01-31 07:53:45.437 226833 DEBUG oslo_concurrency.lockutils [req-b0b1c249-9e57-4820-acf2-6e29d92678ee req-7f8d3eac-9f2d-42a7-925b-120fad43208b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:53:45 compute-2 nova_compute[226829]: 2026-01-31 07:53:45.437 226833 DEBUG oslo_concurrency.lockutils [req-b0b1c249-9e57-4820-acf2-6e29d92678ee req-7f8d3eac-9f2d-42a7-925b-120fad43208b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:53:45 compute-2 nova_compute[226829]: 2026-01-31 07:53:45.438 226833 DEBUG nova.compute.manager [req-b0b1c249-9e57-4820-acf2-6e29d92678ee req-7f8d3eac-9f2d-42a7-925b-120fad43208b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] No waiting events found dispatching network-vif-plugged-863150d4-3984-41d0-a375-230157e3a474 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:53:45 compute-2 nova_compute[226829]: 2026-01-31 07:53:45.438 226833 WARNING nova.compute.manager [req-b0b1c249-9e57-4820-acf2-6e29d92678ee req-7f8d3eac-9f2d-42a7-925b-120fad43208b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received unexpected event network-vif-plugged-863150d4-3984-41d0-a375-230157e3a474 for instance with vm_state active and task_state None.
Jan 31 07:53:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/454254492' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:53:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/454254492' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:53:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:45.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:46 compute-2 nova_compute[226829]: 2026-01-31 07:53:46.387 226833 DEBUG nova.network.neutron [req-f721cd4b-842d-4052-b4aa-1e6e8c85c42d req-381b6cdf-5cb0-4b22-b950-541aef8bd790 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Updated VIF entry in instance network info cache for port 863150d4-3984-41d0-a375-230157e3a474. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:53:46 compute-2 nova_compute[226829]: 2026-01-31 07:53:46.388 226833 DEBUG nova.network.neutron [req-f721cd4b-842d-4052-b4aa-1e6e8c85c42d req-381b6cdf-5cb0-4b22-b950-541aef8bd790 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Updating instance_info_cache with network_info: [{"id": "db982ab1-0c3e-4386-804d-c70f4b91053a", "address": "fa:16:3e:97:6f:85", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb982ab1-0c", "ovs_interfaceid": "db982ab1-0c3e-4386-804d-c70f4b91053a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bf305380-6000-4cea-86cf-123bf60de12a", "address": "fa:16:3e:39:34:eb", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf305380-60", "ovs_interfaceid": "bf305380-6000-4cea-86cf-123bf60de12a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6cd8ab35-cc78-4cb7-b84e-09ec579e9f19", "address": "fa:16:3e:ea:ea:c9", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd8ab35-cc", "ovs_interfaceid": "6cd8ab35-cc78-4cb7-b84e-09ec579e9f19", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "863150d4-3984-41d0-a375-230157e3a474", "address": "fa:16:3e:a3:ce:a8", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap863150d4-39", "ovs_interfaceid": "863150d4-3984-41d0-a375-230157e3a474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:53:46 compute-2 ceph-mon[77282]: pgmap v1695: 305 pgs: 305 active+clean; 121 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 4.7 KiB/s wr, 0 op/s
Jan 31 07:53:46 compute-2 nova_compute[226829]: 2026-01-31 07:53:46.772 226833 DEBUG oslo_concurrency.lockutils [req-f721cd4b-842d-4052-b4aa-1e6e8c85c42d req-381b6cdf-5cb0-4b22-b950-541aef8bd790 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:53:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:53:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:47.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:53:47 compute-2 nova_compute[226829]: 2026-01-31 07:53:47.711 226833 DEBUG oslo_concurrency.lockutils [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "interface-66066b76-4c92-4b20-ba23-c3002693dc10-bf305380-6000-4cea-86cf-123bf60de12a" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:53:47 compute-2 nova_compute[226829]: 2026-01-31 07:53:47.712 226833 DEBUG oslo_concurrency.lockutils [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "interface-66066b76-4c92-4b20-ba23-c3002693dc10-bf305380-6000-4cea-86cf-123bf60de12a" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:53:47 compute-2 sudo[259116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:53:47 compute-2 sudo[259116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:47 compute-2 sudo[259116]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:47 compute-2 sudo[259141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:53:47 compute-2 sudo[259141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:47 compute-2 sudo[259141]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:47.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.031 226833 DEBUG nova.objects.instance [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lazy-loading 'flavor' on Instance uuid 66066b76-4c92-4b20-ba23-c3002693dc10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.050 226833 DEBUG nova.compute.manager [req-a8bc22ab-7178-48ae-8314-9ba7bca9477f req-683b137b-1d40-4211-8ec2-510fa7f814d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received event network-vif-plugged-863150d4-3984-41d0-a375-230157e3a474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.050 226833 DEBUG oslo_concurrency.lockutils [req-a8bc22ab-7178-48ae-8314-9ba7bca9477f req-683b137b-1d40-4211-8ec2-510fa7f814d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.051 226833 DEBUG oslo_concurrency.lockutils [req-a8bc22ab-7178-48ae-8314-9ba7bca9477f req-683b137b-1d40-4211-8ec2-510fa7f814d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.051 226833 DEBUG oslo_concurrency.lockutils [req-a8bc22ab-7178-48ae-8314-9ba7bca9477f req-683b137b-1d40-4211-8ec2-510fa7f814d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.051 226833 DEBUG nova.compute.manager [req-a8bc22ab-7178-48ae-8314-9ba7bca9477f req-683b137b-1d40-4211-8ec2-510fa7f814d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] No waiting events found dispatching network-vif-plugged-863150d4-3984-41d0-a375-230157e3a474 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.052 226833 WARNING nova.compute.manager [req-a8bc22ab-7178-48ae-8314-9ba7bca9477f req-683b137b-1d40-4211-8ec2-510fa7f814d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received unexpected event network-vif-plugged-863150d4-3984-41d0-a375-230157e3a474 for instance with vm_state active and task_state None.
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.089 226833 DEBUG nova.virt.libvirt.vif [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:52:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-867555935',display_name='tempest-AttachInterfacesTestJSON-server-867555935',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-867555935',id=73,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPX6GTdGdGcfJav9VD2vKr9cDmWhHD/qSDkhR2o41JGZ+VcYlHVD/4Hcc3zTsC+yiuWl8z4VSKy7Wi9c75E6Y4p1/iIqUHnIU8ZYXlvl82jhWPIfK08UcCczSxHzDyzJ+g==',key_name='tempest-keypair-461427290',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:52:41Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-tpxrd0ai',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:52:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=66066b76-4c92-4b20-ba23-c3002693dc10,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf305380-6000-4cea-86cf-123bf60de12a", "address": "fa:16:3e:39:34:eb", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf305380-60", "ovs_interfaceid": "bf305380-6000-4cea-86cf-123bf60de12a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.089 226833 DEBUG nova.network.os_vif_util [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "bf305380-6000-4cea-86cf-123bf60de12a", "address": "fa:16:3e:39:34:eb", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf305380-60", "ovs_interfaceid": "bf305380-6000-4cea-86cf-123bf60de12a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.090 226833 DEBUG nova.network.os_vif_util [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:39:34:eb,bridge_name='br-int',has_traffic_filtering=True,id=bf305380-6000-4cea-86cf-123bf60de12a,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf305380-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.093 226833 DEBUG nova.virt.libvirt.guest [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:39:34:eb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf305380-60"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.095 226833 DEBUG nova.virt.libvirt.guest [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:39:34:eb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf305380-60"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.098 226833 DEBUG nova.virt.libvirt.driver [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Attempting to detach device tapbf305380-60 from instance 66066b76-4c92-4b20-ba23-c3002693dc10 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.099 226833 DEBUG nova.virt.libvirt.guest [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] detach device xml: <interface type="ethernet">
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <mac address="fa:16:3e:39:34:eb"/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <model type="virtio"/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <mtu size="1442"/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <target dev="tapbf305380-60"/>
Jan 31 07:53:48 compute-2 nova_compute[226829]: </interface>
Jan 31 07:53:48 compute-2 nova_compute[226829]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.208 226833 DEBUG nova.virt.libvirt.guest [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:39:34:eb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf305380-60"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.211 226833 DEBUG nova.virt.libvirt.guest [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:39:34:eb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf305380-60"/></interface>not found in domain: <domain type='kvm' id='31'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <name>instance-00000049</name>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <uuid>66066b76-4c92-4b20-ba23-c3002693dc10</uuid>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <nova:name>tempest-AttachInterfacesTestJSON-server-867555935</nova:name>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <nova:creationTime>2026-01-31 07:53:40</nova:creationTime>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <nova:flavor name="m1.nano">
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:memory>128</nova:memory>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:disk>1</nova:disk>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:swap>0</nova:swap>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:vcpus>1</nova:vcpus>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </nova:flavor>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <nova:owner>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:user uuid="1e1c5eef3d264666bc90735dd338d82a">tempest-AttachInterfacesTestJSON-668371425-project-member</nova:user>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:project uuid="a8cbd6cc22654dfab04487522a63426c">tempest-AttachInterfacesTestJSON-668371425</nova:project>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </nova:owner>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <nova:ports>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:port uuid="db982ab1-0c3e-4386-804d-c70f4b91053a">
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:port uuid="bf305380-6000-4cea-86cf-123bf60de12a">
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:port uuid="6cd8ab35-cc78-4cb7-b84e-09ec579e9f19">
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:port uuid="863150d4-3984-41d0-a375-230157e3a474">
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </nova:ports>
Jan 31 07:53:48 compute-2 nova_compute[226829]: </nova:instance>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <memory unit='KiB'>131072</memory>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <vcpu placement='static'>1</vcpu>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <resource>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <partition>/machine</partition>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </resource>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <sysinfo type='smbios'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <system>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <entry name='manufacturer'>RDO</entry>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <entry name='product'>OpenStack Compute</entry>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <entry name='serial'>66066b76-4c92-4b20-ba23-c3002693dc10</entry>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <entry name='uuid'>66066b76-4c92-4b20-ba23-c3002693dc10</entry>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <entry name='family'>Virtual Machine</entry>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </system>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <os>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <boot dev='hd'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <smbios mode='sysinfo'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </os>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <features>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <vmcoreinfo state='on'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </features>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <cpu mode='custom' match='exact' check='full'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <model fallback='forbid'>Nehalem</model>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <feature policy='require' name='x2apic'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <feature policy='require' name='hypervisor'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <feature policy='require' name='vme'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <clock offset='utc'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <timer name='pit' tickpolicy='delay'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <timer name='hpet' present='no'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <on_poweroff>destroy</on_poweroff>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <on_reboot>restart</on_reboot>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <on_crash>destroy</on_crash>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <disk type='network' device='disk'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <driver name='qemu' type='raw' cache='none'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <auth username='openstack'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:         <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <source protocol='rbd' name='vms/66066b76-4c92-4b20-ba23-c3002693dc10_disk' index='2'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:         <host name='192.168.122.100' port='6789'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:         <host name='192.168.122.102' port='6789'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:         <host name='192.168.122.101' port='6789'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       </source>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target dev='vda' bus='virtio'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='virtio-disk0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <disk type='network' device='cdrom'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <driver name='qemu' type='raw' cache='none'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <auth username='openstack'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:         <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <source protocol='rbd' name='vms/66066b76-4c92-4b20-ba23-c3002693dc10_disk.config' index='1'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:         <host name='192.168.122.100' port='6789'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:         <host name='192.168.122.102' port='6789'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:         <host name='192.168.122.101' port='6789'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       </source>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target dev='sda' bus='sata'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <readonly/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='sata0-0-0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='0' model='pcie-root'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pcie.0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='1' port='0x10'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.1'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='2' port='0x11'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.2'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='3' port='0x12'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.3'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='4' port='0x13'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.4'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='5' port='0x14'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.5'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='6' port='0x15'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.6'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='7' port='0x16'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.7'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='8' port='0x17'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.8'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='9' port='0x18'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.9'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='10' port='0x19'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.10'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='11' port='0x1a'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.11'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='12' port='0x1b'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.12'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='13' port='0x1c'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.13'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='14' port='0x1d'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.14'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='15' port='0x1e'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.15'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='16' port='0x1f'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.16'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='17' port='0x20'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.17'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='18' port='0x21'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.18'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='19' port='0x22'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.19'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='20' port='0x23'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.20'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='21' port='0x24'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.21'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='22' port='0x25'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.22'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='23' port='0x26'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.23'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='24' port='0x27'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.24'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='25' port='0x28'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.25'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-pci-bridge'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.26'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='usb'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='sata' index='0'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='ide'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <interface type='ethernet'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <mac address='fa:16:3e:97:6f:85'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target dev='tapdb982ab1-0c'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model type='virtio'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <driver name='vhost' rx_queue_size='512'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <mtu size='1442'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='net0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <interface type='ethernet'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <mac address='fa:16:3e:39:34:eb'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target dev='tapbf305380-60'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model type='virtio'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <driver name='vhost' rx_queue_size='512'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <mtu size='1442'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='net1'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <interface type='ethernet'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <mac address='fa:16:3e:ea:ea:c9'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target dev='tap6cd8ab35-cc'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model type='virtio'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <driver name='vhost' rx_queue_size='512'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <mtu size='1442'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='net2'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <interface type='ethernet'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <mac address='fa:16:3e:a3:ce:a8'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target dev='tap863150d4-39'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model type='virtio'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <driver name='vhost' rx_queue_size='512'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <mtu size='1442'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='net3'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <serial type='pty'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <source path='/dev/pts/0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <log file='/var/lib/nova/instances/66066b76-4c92-4b20-ba23-c3002693dc10/console.log' append='off'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target type='isa-serial' port='0'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:         <model name='isa-serial'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       </target>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='serial0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <console type='pty' tty='/dev/pts/0'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <source path='/dev/pts/0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <log file='/var/lib/nova/instances/66066b76-4c92-4b20-ba23-c3002693dc10/console.log' append='off'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target type='serial' port='0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='serial0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </console>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <input type='tablet' bus='usb'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='input0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='usb' bus='0' port='1'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </input>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <input type='mouse' bus='ps2'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='input1'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </input>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <input type='keyboard' bus='ps2'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='input2'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </input>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <listen type='address' address='::0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </graphics>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <audio id='1' type='none'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <video>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model type='virtio' heads='1' primary='yes'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='video0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </video>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <watchdog model='itco' action='reset'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='watchdog0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </watchdog>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <memballoon model='virtio'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <stats period='10'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='balloon0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <rng model='virtio'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <backend model='random'>/dev/urandom</backend>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='rng0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <label>system_u:system_r:svirt_t:s0:c380,c790</label>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c380,c790</imagelabel>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </seclabel>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <label>+107:+107</label>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <imagelabel>+107:+107</imagelabel>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </seclabel>
Jan 31 07:53:48 compute-2 nova_compute[226829]: </domain>
Jan 31 07:53:48 compute-2 nova_compute[226829]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.213 226833 INFO nova.virt.libvirt.driver [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully detached device tapbf305380-60 from instance 66066b76-4c92-4b20-ba23-c3002693dc10 from the persistent domain config.
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.213 226833 DEBUG nova.virt.libvirt.driver [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] (1/8): Attempting to detach device tapbf305380-60 with device alias net1 from instance 66066b76-4c92-4b20-ba23-c3002693dc10 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.214 226833 DEBUG nova.virt.libvirt.guest [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] detach device xml: <interface type="ethernet">
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <mac address="fa:16:3e:39:34:eb"/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <model type="virtio"/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <mtu size="1442"/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <target dev="tapbf305380-60"/>
Jan 31 07:53:48 compute-2 nova_compute[226829]: </interface>
Jan 31 07:53:48 compute-2 nova_compute[226829]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 07:53:48 compute-2 kernel: tapbf305380-60 (unregistering): left promiscuous mode
Jan 31 07:53:48 compute-2 NetworkManager[48999]: <info>  [1769846028.3252] device (tapbf305380-60): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:53:48 compute-2 ovn_controller[133834]: 2026-01-31T07:53:48Z|00221|binding|INFO|Releasing lport bf305380-6000-4cea-86cf-123bf60de12a from this chassis (sb_readonly=0)
Jan 31 07:53:48 compute-2 ovn_controller[133834]: 2026-01-31T07:53:48Z|00222|binding|INFO|Setting lport bf305380-6000-4cea-86cf-123bf60de12a down in Southbound
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.336 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:48 compute-2 ovn_controller[133834]: 2026-01-31T07:53:48Z|00223|binding|INFO|Removing iface tapbf305380-60 ovn-installed in OVS
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.341 226833 DEBUG nova.virt.libvirt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Received event <DeviceRemovedEvent: 1769846028.3411677, 66066b76-4c92-4b20-ba23-c3002693dc10 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.344 226833 DEBUG nova.virt.libvirt.driver [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Start waiting for the detach event from libvirt for device tapbf305380-60 with device alias net1 for instance 66066b76-4c92-4b20-ba23-c3002693dc10 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.345 226833 DEBUG nova.virt.libvirt.guest [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:39:34:eb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf305380-60"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.346 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.353 226833 DEBUG nova.virt.libvirt.guest [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:39:34:eb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf305380-60"/></interface>not found in domain: <domain type='kvm' id='31'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <name>instance-00000049</name>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <uuid>66066b76-4c92-4b20-ba23-c3002693dc10</uuid>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <nova:name>tempest-AttachInterfacesTestJSON-server-867555935</nova:name>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <nova:creationTime>2026-01-31 07:53:40</nova:creationTime>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <nova:flavor name="m1.nano">
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:memory>128</nova:memory>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:disk>1</nova:disk>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:swap>0</nova:swap>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:vcpus>1</nova:vcpus>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </nova:flavor>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <nova:owner>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:user uuid="1e1c5eef3d264666bc90735dd338d82a">tempest-AttachInterfacesTestJSON-668371425-project-member</nova:user>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:project uuid="a8cbd6cc22654dfab04487522a63426c">tempest-AttachInterfacesTestJSON-668371425</nova:project>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </nova:owner>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <nova:ports>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:port uuid="db982ab1-0c3e-4386-804d-c70f4b91053a">
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:port uuid="bf305380-6000-4cea-86cf-123bf60de12a">
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:port uuid="6cd8ab35-cc78-4cb7-b84e-09ec579e9f19">
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:port uuid="863150d4-3984-41d0-a375-230157e3a474">
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </nova:ports>
Jan 31 07:53:48 compute-2 nova_compute[226829]: </nova:instance>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <memory unit='KiB'>131072</memory>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <vcpu placement='static'>1</vcpu>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <resource>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <partition>/machine</partition>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </resource>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <sysinfo type='smbios'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <system>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <entry name='manufacturer'>RDO</entry>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <entry name='product'>OpenStack Compute</entry>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <entry name='serial'>66066b76-4c92-4b20-ba23-c3002693dc10</entry>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <entry name='uuid'>66066b76-4c92-4b20-ba23-c3002693dc10</entry>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <entry name='family'>Virtual Machine</entry>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </system>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <os>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <boot dev='hd'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <smbios mode='sysinfo'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </os>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <features>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <vmcoreinfo state='on'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </features>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <cpu mode='custom' match='exact' check='full'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <model fallback='forbid'>Nehalem</model>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <feature policy='require' name='x2apic'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <feature policy='require' name='hypervisor'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <feature policy='require' name='vme'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <clock offset='utc'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <timer name='pit' tickpolicy='delay'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <timer name='hpet' present='no'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <on_poweroff>destroy</on_poweroff>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <on_reboot>restart</on_reboot>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <on_crash>destroy</on_crash>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <disk type='network' device='disk'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <driver name='qemu' type='raw' cache='none'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <auth username='openstack'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:         <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <source protocol='rbd' name='vms/66066b76-4c92-4b20-ba23-c3002693dc10_disk' index='2'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:         <host name='192.168.122.100' port='6789'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:         <host name='192.168.122.102' port='6789'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:         <host name='192.168.122.101' port='6789'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       </source>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target dev='vda' bus='virtio'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='virtio-disk0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <disk type='network' device='cdrom'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <driver name='qemu' type='raw' cache='none'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <auth username='openstack'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:         <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <source protocol='rbd' name='vms/66066b76-4c92-4b20-ba23-c3002693dc10_disk.config' index='1'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:         <host name='192.168.122.100' port='6789'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:         <host name='192.168.122.102' port='6789'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:         <host name='192.168.122.101' port='6789'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       </source>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target dev='sda' bus='sata'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <readonly/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='sata0-0-0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='0' model='pcie-root'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pcie.0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='1' port='0x10'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.1'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='2' port='0x11'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.2'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='3' port='0x12'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.3'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='4' port='0x13'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.4'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='5' port='0x14'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.5'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='6' port='0x15'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.6'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='7' port='0x16'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.7'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='8' port='0x17'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.8'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='9' port='0x18'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.9'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='10' port='0x19'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.10'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='11' port='0x1a'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.11'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='12' port='0x1b'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.12'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='13' port='0x1c'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.13'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='14' port='0x1d'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.14'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='15' port='0x1e'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.15'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='16' port='0x1f'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.16'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='17' port='0x20'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.17'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='18' port='0x21'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.18'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='19' port='0x22'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.19'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='20' port='0x23'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.20'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='21' port='0x24'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.21'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='22' port='0x25'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.22'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='23' port='0x26'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.23'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='24' port='0x27'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.24'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target chassis='25' port='0x28'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.25'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model name='pcie-pci-bridge'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='pci.26'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='usb'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <controller type='sata' index='0'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='ide'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <interface type='ethernet'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <mac address='fa:16:3e:97:6f:85'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target dev='tapdb982ab1-0c'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model type='virtio'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <driver name='vhost' rx_queue_size='512'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <mtu size='1442'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='net0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <interface type='ethernet'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <mac address='fa:16:3e:ea:ea:c9'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target dev='tap6cd8ab35-cc'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model type='virtio'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <driver name='vhost' rx_queue_size='512'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <mtu size='1442'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='net2'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <interface type='ethernet'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <mac address='fa:16:3e:a3:ce:a8'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target dev='tap863150d4-39'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model type='virtio'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <driver name='vhost' rx_queue_size='512'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <mtu size='1442'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='net3'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <serial type='pty'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <source path='/dev/pts/0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <log file='/var/lib/nova/instances/66066b76-4c92-4b20-ba23-c3002693dc10/console.log' append='off'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target type='isa-serial' port='0'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:         <model name='isa-serial'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       </target>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='serial0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <console type='pty' tty='/dev/pts/0'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <source path='/dev/pts/0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <log file='/var/lib/nova/instances/66066b76-4c92-4b20-ba23-c3002693dc10/console.log' append='off'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <target type='serial' port='0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='serial0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </console>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <input type='tablet' bus='usb'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='input0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='usb' bus='0' port='1'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </input>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <input type='mouse' bus='ps2'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='input1'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </input>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <input type='keyboard' bus='ps2'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='input2'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </input>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <listen type='address' address='::0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </graphics>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <audio id='1' type='none'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <video>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <model type='virtio' heads='1' primary='yes'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='video0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </video>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <watchdog model='itco' action='reset'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='watchdog0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </watchdog>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <memballoon model='virtio'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <stats period='10'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='balloon0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <rng model='virtio'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <backend model='random'>/dev/urandom</backend>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <alias name='rng0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <label>system_u:system_r:svirt_t:s0:c380,c790</label>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c380,c790</imagelabel>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </seclabel>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <label>+107:+107</label>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <imagelabel>+107:+107</imagelabel>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </seclabel>
Jan 31 07:53:48 compute-2 nova_compute[226829]: </domain>
Jan 31 07:53:48 compute-2 nova_compute[226829]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.356 226833 INFO nova.virt.libvirt.driver [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully detached device tapbf305380-60 from instance 66066b76-4c92-4b20-ba23-c3002693dc10 from the live domain config.
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.357 226833 DEBUG nova.virt.libvirt.vif [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:52:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-867555935',display_name='tempest-AttachInterfacesTestJSON-server-867555935',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-867555935',id=73,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPX6GTdGdGcfJav9VD2vKr9cDmWhHD/qSDkhR2o41JGZ+VcYlHVD/4Hcc3zTsC+yiuWl8z4VSKy7Wi9c75E6Y4p1/iIqUHnIU8ZYXlvl82jhWPIfK08UcCczSxHzDyzJ+g==',key_name='tempest-keypair-461427290',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:52:41Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-tpxrd0ai',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:52:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=66066b76-4c92-4b20-ba23-c3002693dc10,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf305380-6000-4cea-86cf-123bf60de12a", "address": "fa:16:3e:39:34:eb", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf305380-60", "ovs_interfaceid": "bf305380-6000-4cea-86cf-123bf60de12a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.358 226833 DEBUG nova.network.os_vif_util [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "bf305380-6000-4cea-86cf-123bf60de12a", "address": "fa:16:3e:39:34:eb", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf305380-60", "ovs_interfaceid": "bf305380-6000-4cea-86cf-123bf60de12a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.359 226833 DEBUG nova.network.os_vif_util [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:39:34:eb,bridge_name='br-int',has_traffic_filtering=True,id=bf305380-6000-4cea-86cf-123bf60de12a,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf305380-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.359 226833 DEBUG os_vif [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:34:eb,bridge_name='br-int',has_traffic_filtering=True,id=bf305380-6000-4cea-86cf-123bf60de12a,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf305380-60') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.363 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.364 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf305380-60, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.366 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.368 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.369 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.373 226833 INFO os_vif [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:34:eb,bridge_name='br-int',has_traffic_filtering=True,id=bf305380-6000-4cea-86cf-123bf60de12a,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf305380-60')
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.374 226833 DEBUG nova.virt.libvirt.guest [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <nova:name>tempest-AttachInterfacesTestJSON-server-867555935</nova:name>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <nova:creationTime>2026-01-31 07:53:48</nova:creationTime>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <nova:flavor name="m1.nano">
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:memory>128</nova:memory>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:disk>1</nova:disk>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:swap>0</nova:swap>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:vcpus>1</nova:vcpus>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </nova:flavor>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <nova:owner>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:user uuid="1e1c5eef3d264666bc90735dd338d82a">tempest-AttachInterfacesTestJSON-668371425-project-member</nova:user>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:project uuid="a8cbd6cc22654dfab04487522a63426c">tempest-AttachInterfacesTestJSON-668371425</nova:project>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </nova:owner>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   <nova:ports>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:port uuid="db982ab1-0c3e-4386-804d-c70f4b91053a">
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:port uuid="6cd8ab35-cc78-4cb7-b84e-09ec579e9f19">
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     <nova:port uuid="863150d4-3984-41d0-a375-230157e3a474">
Jan 31 07:53:48 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 07:53:48 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:48 compute-2 nova_compute[226829]:   </nova:ports>
Jan 31 07:53:48 compute-2 nova_compute[226829]: </nova:instance>
Jan 31 07:53:48 compute-2 nova_compute[226829]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 31 07:53:48 compute-2 ceph-mon[77282]: pgmap v1696: 305 pgs: 305 active+clean; 121 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 1.3 KiB/s wr, 0 op/s
Jan 31 07:53:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:53:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:53:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:48.517 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:34:eb 10.100.0.8'], port_security=['fa:16:3e:39:34:eb 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '66066b76-4c92-4b20-ba23-c3002693dc10', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-485494d9-5360-41c3-a10e-ef5098af0809', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8cbd6cc22654dfab04487522a63426c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c7e4e426-0b76-41c4-8dcb-ea007c31db76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44651228-c761-41fc-b495-d4156af21548, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=bf305380-6000-4cea-86cf-123bf60de12a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:53:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:48.518 143841 INFO neutron.agent.ovn.metadata.agent [-] Port bf305380-6000-4cea-86cf-123bf60de12a in datapath 485494d9-5360-41c3-a10e-ef5098af0809 unbound from our chassis
Jan 31 07:53:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:48.520 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 485494d9-5360-41c3-a10e-ef5098af0809
Jan 31 07:53:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:48.534 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6e0af83a-0c58-477c-bbc7-41feb34bf58e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:48.559 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[08413849-4d33-42df-970c-056d70c74d15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:48.563 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[d20bf0a7-355a-42c3-8646-6a66adc51606]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:48.589 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[5fdbc51b-0270-498e-a6fe-54ead6ae0524]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:48.601 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[464fd00f-9335-485c-a52f-e33ad151357d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap485494d9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:4b:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 11, 'rx_bytes': 826, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 11, 'rx_bytes': 826, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621274, 'reachable_time': 37963, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259178, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:48.615 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e8990e0f-c1f0-4a17-8abb-77f1a5606bd5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap485494d9-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 621283, 'tstamp': 621283}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259179, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap485494d9-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 621285, 'tstamp': 621285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259179, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:48.617 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap485494d9-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.618 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:48 compute-2 nova_compute[226829]: 2026-01-31 07:53:48.619 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:48.619 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap485494d9-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:48.620 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:53:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:48.620 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap485494d9-50, col_values=(('external_ids', {'iface-id': 'cbb039b2-15df-45bd-8800-81213ecc7011'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:48.620 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:53:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:53:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:49.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:49 compute-2 nova_compute[226829]: 2026-01-31 07:53:49.774 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:49.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:50 compute-2 ceph-mon[77282]: pgmap v1697: 305 pgs: 305 active+clean; 121 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 31 07:53:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:51.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:51 compute-2 nova_compute[226829]: 2026-01-31 07:53:51.477 226833 DEBUG nova.compute.manager [req-b4a6c8b5-cc29-4f01-814b-26bb19c67bcc req-25334035-6e14-42c4-adf4-660b37f51f5c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received event network-vif-unplugged-bf305380-6000-4cea-86cf-123bf60de12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:53:51 compute-2 nova_compute[226829]: 2026-01-31 07:53:51.477 226833 DEBUG oslo_concurrency.lockutils [req-b4a6c8b5-cc29-4f01-814b-26bb19c67bcc req-25334035-6e14-42c4-adf4-660b37f51f5c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:53:51 compute-2 nova_compute[226829]: 2026-01-31 07:53:51.478 226833 DEBUG oslo_concurrency.lockutils [req-b4a6c8b5-cc29-4f01-814b-26bb19c67bcc req-25334035-6e14-42c4-adf4-660b37f51f5c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:53:51 compute-2 nova_compute[226829]: 2026-01-31 07:53:51.478 226833 DEBUG oslo_concurrency.lockutils [req-b4a6c8b5-cc29-4f01-814b-26bb19c67bcc req-25334035-6e14-42c4-adf4-660b37f51f5c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:53:51 compute-2 nova_compute[226829]: 2026-01-31 07:53:51.478 226833 DEBUG nova.compute.manager [req-b4a6c8b5-cc29-4f01-814b-26bb19c67bcc req-25334035-6e14-42c4-adf4-660b37f51f5c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] No waiting events found dispatching network-vif-unplugged-bf305380-6000-4cea-86cf-123bf60de12a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:53:51 compute-2 nova_compute[226829]: 2026-01-31 07:53:51.478 226833 WARNING nova.compute.manager [req-b4a6c8b5-cc29-4f01-814b-26bb19c67bcc req-25334035-6e14-42c4-adf4-660b37f51f5c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received unexpected event network-vif-unplugged-bf305380-6000-4cea-86cf-123bf60de12a for instance with vm_state active and task_state None.
Jan 31 07:53:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:51 compute-2 nova_compute[226829]: 2026-01-31 07:53:51.884 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:51.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:52 compute-2 ceph-mon[77282]: pgmap v1698: 305 pgs: 305 active+clean; 121 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 31 07:53:52 compute-2 nova_compute[226829]: 2026-01-31 07:53:52.582 226833 DEBUG oslo_concurrency.lockutils [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:53:52 compute-2 nova_compute[226829]: 2026-01-31 07:53:52.583 226833 DEBUG oslo_concurrency.lockutils [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquired lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:53:52 compute-2 nova_compute[226829]: 2026-01-31 07:53:52.583 226833 DEBUG nova.network.neutron [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:53:52 compute-2 nova_compute[226829]: 2026-01-31 07:53:52.686 226833 DEBUG nova.compute.manager [req-2cfa1713-3f6e-40fc-af4d-881a2d96bbdc req-48ecf095-7a07-4256-a92e-d1b885ef3617 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received event network-vif-deleted-bf305380-6000-4cea-86cf-123bf60de12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:53:52 compute-2 nova_compute[226829]: 2026-01-31 07:53:52.686 226833 INFO nova.compute.manager [req-2cfa1713-3f6e-40fc-af4d-881a2d96bbdc req-48ecf095-7a07-4256-a92e-d1b885ef3617 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Neutron deleted interface bf305380-6000-4cea-86cf-123bf60de12a; detaching it from the instance and deleting it from the info cache
Jan 31 07:53:52 compute-2 nova_compute[226829]: 2026-01-31 07:53:52.687 226833 DEBUG nova.network.neutron [req-2cfa1713-3f6e-40fc-af4d-881a2d96bbdc req-48ecf095-7a07-4256-a92e-d1b885ef3617 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Updating instance_info_cache with network_info: [{"id": "db982ab1-0c3e-4386-804d-c70f4b91053a", "address": "fa:16:3e:97:6f:85", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb982ab1-0c", "ovs_interfaceid": "db982ab1-0c3e-4386-804d-c70f4b91053a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6cd8ab35-cc78-4cb7-b84e-09ec579e9f19", "address": "fa:16:3e:ea:ea:c9", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd8ab35-cc", "ovs_interfaceid": "6cd8ab35-cc78-4cb7-b84e-09ec579e9f19", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "863150d4-3984-41d0-a375-230157e3a474", "address": "fa:16:3e:a3:ce:a8", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap863150d4-39", "ovs_interfaceid": "863150d4-3984-41d0-a375-230157e3a474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:53:52 compute-2 nova_compute[226829]: 2026-01-31 07:53:52.883 226833 DEBUG nova.objects.instance [req-2cfa1713-3f6e-40fc-af4d-881a2d96bbdc req-48ecf095-7a07-4256-a92e-d1b885ef3617 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lazy-loading 'system_metadata' on Instance uuid 66066b76-4c92-4b20-ba23-c3002693dc10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:53:52 compute-2 nova_compute[226829]: 2026-01-31 07:53:52.967 226833 DEBUG nova.objects.instance [req-2cfa1713-3f6e-40fc-af4d-881a2d96bbdc req-48ecf095-7a07-4256-a92e-d1b885ef3617 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lazy-loading 'flavor' on Instance uuid 66066b76-4c92-4b20-ba23-c3002693dc10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.013 226833 DEBUG nova.virt.libvirt.vif [req-2cfa1713-3f6e-40fc-af4d-881a2d96bbdc req-48ecf095-7a07-4256-a92e-d1b885ef3617 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:52:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-867555935',display_name='tempest-AttachInterfacesTestJSON-server-867555935',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-867555935',id=73,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPX6GTdGdGcfJav9VD2vKr9cDmWhHD/qSDkhR2o41JGZ+VcYlHVD/4Hcc3zTsC+yiuWl8z4VSKy7Wi9c75E6Y4p1/iIqUHnIU8ZYXlvl82jhWPIfK08UcCczSxHzDyzJ+g==',key_name='tempest-keypair-461427290',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:52:41Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-tpxrd0ai',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:52:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=66066b76-4c92-4b20-ba23-c3002693dc10,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf305380-6000-4cea-86cf-123bf60de12a", "address": "fa:16:3e:39:34:eb", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf305380-60", "ovs_interfaceid": "bf305380-6000-4cea-86cf-123bf60de12a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.014 226833 DEBUG nova.network.os_vif_util [req-2cfa1713-3f6e-40fc-af4d-881a2d96bbdc req-48ecf095-7a07-4256-a92e-d1b885ef3617 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Converting VIF {"id": "bf305380-6000-4cea-86cf-123bf60de12a", "address": "fa:16:3e:39:34:eb", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf305380-60", "ovs_interfaceid": "bf305380-6000-4cea-86cf-123bf60de12a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.015 226833 DEBUG nova.network.os_vif_util [req-2cfa1713-3f6e-40fc-af4d-881a2d96bbdc req-48ecf095-7a07-4256-a92e-d1b885ef3617 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:39:34:eb,bridge_name='br-int',has_traffic_filtering=True,id=bf305380-6000-4cea-86cf-123bf60de12a,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf305380-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.018 226833 DEBUG nova.virt.libvirt.guest [req-2cfa1713-3f6e-40fc-af4d-881a2d96bbdc req-48ecf095-7a07-4256-a92e-d1b885ef3617 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:39:34:eb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf305380-60"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.022 226833 DEBUG nova.virt.libvirt.guest [req-2cfa1713-3f6e-40fc-af4d-881a2d96bbdc req-48ecf095-7a07-4256-a92e-d1b885ef3617 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:39:34:eb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf305380-60"/></interface>not found in domain: <domain type='kvm' id='31'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <name>instance-00000049</name>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <uuid>66066b76-4c92-4b20-ba23-c3002693dc10</uuid>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <nova:name>tempest-AttachInterfacesTestJSON-server-867555935</nova:name>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <nova:creationTime>2026-01-31 07:53:48</nova:creationTime>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <nova:flavor name="m1.nano">
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:memory>128</nova:memory>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:disk>1</nova:disk>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:swap>0</nova:swap>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:vcpus>1</nova:vcpus>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </nova:flavor>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <nova:owner>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:user uuid="1e1c5eef3d264666bc90735dd338d82a">tempest-AttachInterfacesTestJSON-668371425-project-member</nova:user>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:project uuid="a8cbd6cc22654dfab04487522a63426c">tempest-AttachInterfacesTestJSON-668371425</nova:project>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </nova:owner>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <nova:ports>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:port uuid="db982ab1-0c3e-4386-804d-c70f4b91053a">
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:port uuid="6cd8ab35-cc78-4cb7-b84e-09ec579e9f19">
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:port uuid="863150d4-3984-41d0-a375-230157e3a474">
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </nova:ports>
Jan 31 07:53:53 compute-2 nova_compute[226829]: </nova:instance>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <memory unit='KiB'>131072</memory>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <vcpu placement='static'>1</vcpu>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <resource>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <partition>/machine</partition>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </resource>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <sysinfo type='smbios'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <system>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <entry name='manufacturer'>RDO</entry>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <entry name='product'>OpenStack Compute</entry>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <entry name='serial'>66066b76-4c92-4b20-ba23-c3002693dc10</entry>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <entry name='uuid'>66066b76-4c92-4b20-ba23-c3002693dc10</entry>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <entry name='family'>Virtual Machine</entry>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </system>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <os>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <boot dev='hd'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <smbios mode='sysinfo'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </os>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <features>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <vmcoreinfo state='on'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </features>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <cpu mode='custom' match='exact' check='full'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <model fallback='forbid'>Nehalem</model>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <feature policy='require' name='x2apic'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <feature policy='require' name='hypervisor'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <feature policy='require' name='vme'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <clock offset='utc'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <timer name='pit' tickpolicy='delay'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <timer name='hpet' present='no'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <on_poweroff>destroy</on_poweroff>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <on_reboot>restart</on_reboot>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <on_crash>destroy</on_crash>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <disk type='network' device='disk'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <driver name='qemu' type='raw' cache='none'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <auth username='openstack'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:         <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <source protocol='rbd' name='vms/66066b76-4c92-4b20-ba23-c3002693dc10_disk' index='2'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:         <host name='192.168.122.100' port='6789'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:         <host name='192.168.122.102' port='6789'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:         <host name='192.168.122.101' port='6789'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       </source>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target dev='vda' bus='virtio'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='virtio-disk0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <disk type='network' device='cdrom'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <driver name='qemu' type='raw' cache='none'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <auth username='openstack'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:         <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <source protocol='rbd' name='vms/66066b76-4c92-4b20-ba23-c3002693dc10_disk.config' index='1'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:         <host name='192.168.122.100' port='6789'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:         <host name='192.168.122.102' port='6789'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:         <host name='192.168.122.101' port='6789'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       </source>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target dev='sda' bus='sata'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <readonly/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='sata0-0-0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='0' model='pcie-root'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pcie.0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='1' port='0x10'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.1'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='2' port='0x11'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.2'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='3' port='0x12'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.3'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='4' port='0x13'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.4'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='5' port='0x14'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.5'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='6' port='0x15'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.6'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='7' port='0x16'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.7'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='8' port='0x17'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.8'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='9' port='0x18'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.9'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='10' port='0x19'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.10'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='11' port='0x1a'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.11'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='12' port='0x1b'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.12'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='13' port='0x1c'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.13'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='14' port='0x1d'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.14'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='15' port='0x1e'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.15'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='16' port='0x1f'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.16'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='17' port='0x20'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.17'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='18' port='0x21'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.18'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='19' port='0x22'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.19'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='20' port='0x23'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.20'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='21' port='0x24'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.21'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='22' port='0x25'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.22'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='23' port='0x26'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.23'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='24' port='0x27'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.24'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='25' port='0x28'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.25'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-pci-bridge'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.26'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='usb'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='sata' index='0'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='ide'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <interface type='ethernet'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <mac address='fa:16:3e:97:6f:85'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target dev='tapdb982ab1-0c'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model type='virtio'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <driver name='vhost' rx_queue_size='512'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <mtu size='1442'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='net0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <interface type='ethernet'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <mac address='fa:16:3e:ea:ea:c9'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target dev='tap6cd8ab35-cc'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model type='virtio'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <driver name='vhost' rx_queue_size='512'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <mtu size='1442'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='net2'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <interface type='ethernet'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <mac address='fa:16:3e:a3:ce:a8'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target dev='tap863150d4-39'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model type='virtio'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <driver name='vhost' rx_queue_size='512'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <mtu size='1442'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='net3'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <serial type='pty'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <source path='/dev/pts/0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <log file='/var/lib/nova/instances/66066b76-4c92-4b20-ba23-c3002693dc10/console.log' append='off'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target type='isa-serial' port='0'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:         <model name='isa-serial'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       </target>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='serial0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <console type='pty' tty='/dev/pts/0'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <source path='/dev/pts/0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <log file='/var/lib/nova/instances/66066b76-4c92-4b20-ba23-c3002693dc10/console.log' append='off'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target type='serial' port='0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='serial0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </console>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <input type='tablet' bus='usb'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='input0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='usb' bus='0' port='1'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </input>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <input type='mouse' bus='ps2'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='input1'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </input>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <input type='keyboard' bus='ps2'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='input2'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </input>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <listen type='address' address='::0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </graphics>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <audio id='1' type='none'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <video>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model type='virtio' heads='1' primary='yes'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='video0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </video>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <watchdog model='itco' action='reset'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='watchdog0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </watchdog>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <memballoon model='virtio'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <stats period='10'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='balloon0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <rng model='virtio'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <backend model='random'>/dev/urandom</backend>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='rng0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <label>system_u:system_r:svirt_t:s0:c380,c790</label>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c380,c790</imagelabel>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </seclabel>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <label>+107:+107</label>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <imagelabel>+107:+107</imagelabel>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </seclabel>
Jan 31 07:53:53 compute-2 nova_compute[226829]: </domain>
Jan 31 07:53:53 compute-2 nova_compute[226829]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.024 226833 DEBUG nova.virt.libvirt.guest [req-2cfa1713-3f6e-40fc-af4d-881a2d96bbdc req-48ecf095-7a07-4256-a92e-d1b885ef3617 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:39:34:eb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf305380-60"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.028 226833 DEBUG nova.virt.libvirt.guest [req-2cfa1713-3f6e-40fc-af4d-881a2d96bbdc req-48ecf095-7a07-4256-a92e-d1b885ef3617 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:39:34:eb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf305380-60"/></interface>not found in domain: <domain type='kvm' id='31'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <name>instance-00000049</name>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <uuid>66066b76-4c92-4b20-ba23-c3002693dc10</uuid>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <nova:name>tempest-AttachInterfacesTestJSON-server-867555935</nova:name>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <nova:creationTime>2026-01-31 07:53:48</nova:creationTime>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <nova:flavor name="m1.nano">
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:memory>128</nova:memory>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:disk>1</nova:disk>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:swap>0</nova:swap>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:vcpus>1</nova:vcpus>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </nova:flavor>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <nova:owner>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:user uuid="1e1c5eef3d264666bc90735dd338d82a">tempest-AttachInterfacesTestJSON-668371425-project-member</nova:user>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:project uuid="a8cbd6cc22654dfab04487522a63426c">tempest-AttachInterfacesTestJSON-668371425</nova:project>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </nova:owner>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <nova:ports>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:port uuid="db982ab1-0c3e-4386-804d-c70f4b91053a">
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:port uuid="6cd8ab35-cc78-4cb7-b84e-09ec579e9f19">
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:port uuid="863150d4-3984-41d0-a375-230157e3a474">
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </nova:ports>
Jan 31 07:53:53 compute-2 nova_compute[226829]: </nova:instance>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <memory unit='KiB'>131072</memory>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <vcpu placement='static'>1</vcpu>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <resource>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <partition>/machine</partition>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </resource>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <sysinfo type='smbios'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <system>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <entry name='manufacturer'>RDO</entry>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <entry name='product'>OpenStack Compute</entry>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <entry name='serial'>66066b76-4c92-4b20-ba23-c3002693dc10</entry>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <entry name='uuid'>66066b76-4c92-4b20-ba23-c3002693dc10</entry>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <entry name='family'>Virtual Machine</entry>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </system>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <os>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <boot dev='hd'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <smbios mode='sysinfo'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </os>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <features>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <vmcoreinfo state='on'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </features>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <cpu mode='custom' match='exact' check='full'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <model fallback='forbid'>Nehalem</model>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <feature policy='require' name='x2apic'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <feature policy='require' name='hypervisor'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <feature policy='require' name='vme'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <clock offset='utc'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <timer name='pit' tickpolicy='delay'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <timer name='hpet' present='no'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <on_poweroff>destroy</on_poweroff>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <on_reboot>restart</on_reboot>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <on_crash>destroy</on_crash>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <disk type='network' device='disk'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <driver name='qemu' type='raw' cache='none'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <auth username='openstack'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:         <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <source protocol='rbd' name='vms/66066b76-4c92-4b20-ba23-c3002693dc10_disk' index='2'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:         <host name='192.168.122.100' port='6789'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:         <host name='192.168.122.102' port='6789'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:         <host name='192.168.122.101' port='6789'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       </source>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target dev='vda' bus='virtio'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='virtio-disk0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <disk type='network' device='cdrom'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <driver name='qemu' type='raw' cache='none'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <auth username='openstack'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:         <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <source protocol='rbd' name='vms/66066b76-4c92-4b20-ba23-c3002693dc10_disk.config' index='1'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:         <host name='192.168.122.100' port='6789'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:         <host name='192.168.122.102' port='6789'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:         <host name='192.168.122.101' port='6789'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       </source>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target dev='sda' bus='sata'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <readonly/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='sata0-0-0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='0' model='pcie-root'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pcie.0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='1' port='0x10'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.1'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='2' port='0x11'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.2'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='3' port='0x12'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.3'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='4' port='0x13'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.4'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='5' port='0x14'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.5'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='6' port='0x15'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.6'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='7' port='0x16'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.7'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='8' port='0x17'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.8'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='9' port='0x18'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.9'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='10' port='0x19'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.10'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='11' port='0x1a'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.11'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='12' port='0x1b'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.12'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='13' port='0x1c'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.13'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='14' port='0x1d'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.14'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='15' port='0x1e'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.15'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='16' port='0x1f'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.16'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='17' port='0x20'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.17'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='18' port='0x21'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.18'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='19' port='0x22'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.19'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='20' port='0x23'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.20'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='21' port='0x24'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.21'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='22' port='0x25'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.22'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='23' port='0x26'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.23'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='24' port='0x27'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.24'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target chassis='25' port='0x28'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.25'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model name='pcie-pci-bridge'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='pci.26'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='usb'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <controller type='sata' index='0'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='ide'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <interface type='ethernet'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <mac address='fa:16:3e:97:6f:85'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target dev='tapdb982ab1-0c'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model type='virtio'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <driver name='vhost' rx_queue_size='512'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <mtu size='1442'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='net0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <interface type='ethernet'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <mac address='fa:16:3e:ea:ea:c9'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target dev='tap6cd8ab35-cc'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model type='virtio'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <driver name='vhost' rx_queue_size='512'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <mtu size='1442'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='net2'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <interface type='ethernet'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <mac address='fa:16:3e:a3:ce:a8'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target dev='tap863150d4-39'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model type='virtio'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <driver name='vhost' rx_queue_size='512'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <mtu size='1442'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='net3'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <serial type='pty'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <source path='/dev/pts/0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <log file='/var/lib/nova/instances/66066b76-4c92-4b20-ba23-c3002693dc10/console.log' append='off'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target type='isa-serial' port='0'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:         <model name='isa-serial'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       </target>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='serial0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <console type='pty' tty='/dev/pts/0'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <source path='/dev/pts/0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <log file='/var/lib/nova/instances/66066b76-4c92-4b20-ba23-c3002693dc10/console.log' append='off'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <target type='serial' port='0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='serial0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </console>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <input type='tablet' bus='usb'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='input0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='usb' bus='0' port='1'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </input>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <input type='mouse' bus='ps2'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='input1'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </input>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <input type='keyboard' bus='ps2'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='input2'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </input>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <listen type='address' address='::0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </graphics>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <audio id='1' type='none'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <video>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <model type='virtio' heads='1' primary='yes'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='video0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </video>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <watchdog model='itco' action='reset'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='watchdog0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </watchdog>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <memballoon model='virtio'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <stats period='10'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='balloon0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <rng model='virtio'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <backend model='random'>/dev/urandom</backend>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <alias name='rng0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <label>system_u:system_r:svirt_t:s0:c380,c790</label>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c380,c790</imagelabel>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </seclabel>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <label>+107:+107</label>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <imagelabel>+107:+107</imagelabel>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </seclabel>
Jan 31 07:53:53 compute-2 nova_compute[226829]: </domain>
Jan 31 07:53:53 compute-2 nova_compute[226829]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.029 226833 WARNING nova.virt.libvirt.driver [req-2cfa1713-3f6e-40fc-af4d-881a2d96bbdc req-48ecf095-7a07-4256-a92e-d1b885ef3617 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Detaching interface fa:16:3e:39:34:eb failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapbf305380-60' not found.
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.030 226833 DEBUG nova.virt.libvirt.vif [req-2cfa1713-3f6e-40fc-af4d-881a2d96bbdc req-48ecf095-7a07-4256-a92e-d1b885ef3617 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:52:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-867555935',display_name='tempest-AttachInterfacesTestJSON-server-867555935',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-867555935',id=73,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPX6GTdGdGcfJav9VD2vKr9cDmWhHD/qSDkhR2o41JGZ+VcYlHVD/4Hcc3zTsC+yiuWl8z4VSKy7Wi9c75E6Y4p1/iIqUHnIU8ZYXlvl82jhWPIfK08UcCczSxHzDyzJ+g==',key_name='tempest-keypair-461427290',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:52:41Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-tpxrd0ai',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:52:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=66066b76-4c92-4b20-ba23-c3002693dc10,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf305380-6000-4cea-86cf-123bf60de12a", "address": "fa:16:3e:39:34:eb", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf305380-60", "ovs_interfaceid": "bf305380-6000-4cea-86cf-123bf60de12a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.030 226833 DEBUG nova.network.os_vif_util [req-2cfa1713-3f6e-40fc-af4d-881a2d96bbdc req-48ecf095-7a07-4256-a92e-d1b885ef3617 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Converting VIF {"id": "bf305380-6000-4cea-86cf-123bf60de12a", "address": "fa:16:3e:39:34:eb", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf305380-60", "ovs_interfaceid": "bf305380-6000-4cea-86cf-123bf60de12a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.030 226833 DEBUG nova.network.os_vif_util [req-2cfa1713-3f6e-40fc-af4d-881a2d96bbdc req-48ecf095-7a07-4256-a92e-d1b885ef3617 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:39:34:eb,bridge_name='br-int',has_traffic_filtering=True,id=bf305380-6000-4cea-86cf-123bf60de12a,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf305380-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.031 226833 DEBUG os_vif [req-2cfa1713-3f6e-40fc-af4d-881a2d96bbdc req-48ecf095-7a07-4256-a92e-d1b885ef3617 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:34:eb,bridge_name='br-int',has_traffic_filtering=True,id=bf305380-6000-4cea-86cf-123bf60de12a,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf305380-60') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.032 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.032 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf305380-60, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.033 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.035 226833 INFO os_vif [req-2cfa1713-3f6e-40fc-af4d-881a2d96bbdc req-48ecf095-7a07-4256-a92e-d1b885ef3617 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:34:eb,bridge_name='br-int',has_traffic_filtering=True,id=bf305380-6000-4cea-86cf-123bf60de12a,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf305380-60')
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.035 226833 DEBUG nova.virt.libvirt.guest [req-2cfa1713-3f6e-40fc-af4d-881a2d96bbdc req-48ecf095-7a07-4256-a92e-d1b885ef3617 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <nova:name>tempest-AttachInterfacesTestJSON-server-867555935</nova:name>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <nova:creationTime>2026-01-31 07:53:53</nova:creationTime>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <nova:flavor name="m1.nano">
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:memory>128</nova:memory>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:disk>1</nova:disk>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:swap>0</nova:swap>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:vcpus>1</nova:vcpus>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </nova:flavor>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <nova:owner>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:user uuid="1e1c5eef3d264666bc90735dd338d82a">tempest-AttachInterfacesTestJSON-668371425-project-member</nova:user>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:project uuid="a8cbd6cc22654dfab04487522a63426c">tempest-AttachInterfacesTestJSON-668371425</nova:project>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </nova:owner>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   <nova:ports>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:port uuid="db982ab1-0c3e-4386-804d-c70f4b91053a">
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:port uuid="6cd8ab35-cc78-4cb7-b84e-09ec579e9f19">
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     <nova:port uuid="863150d4-3984-41d0-a375-230157e3a474">
Jan 31 07:53:53 compute-2 nova_compute[226829]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 07:53:53 compute-2 nova_compute[226829]:     </nova:port>
Jan 31 07:53:53 compute-2 nova_compute[226829]:   </nova:ports>
Jan 31 07:53:53 compute-2 nova_compute[226829]: </nova:instance>
Jan 31 07:53:53 compute-2 nova_compute[226829]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 31 07:53:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:53.171 143841 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 3a03a3ae-6728-4007-aef9-f3411f379157 with type ""
Jan 31 07:53:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:53.172 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:ce:a8 10.100.0.13'], port_security=['fa:16:3e:a3:ce:a8 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1306725410', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '66066b76-4c92-4b20-ba23-c3002693dc10', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-485494d9-5360-41c3-a10e-ef5098af0809', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1306725410', 'neutron:project_id': 'a8cbd6cc22654dfab04487522a63426c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c7e4e426-0b76-41c4-8dcb-ea007c31db76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44651228-c761-41fc-b495-d4156af21548, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=863150d4-3984-41d0-a375-230157e3a474) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:53:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:53.173 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 863150d4-3984-41d0-a375-230157e3a474 in datapath 485494d9-5360-41c3-a10e-ef5098af0809 unbound from our chassis
Jan 31 07:53:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:53.174 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 485494d9-5360-41c3-a10e-ef5098af0809
Jan 31 07:53:53 compute-2 ovn_controller[133834]: 2026-01-31T07:53:53Z|00224|binding|INFO|Removing iface tap863150d4-39 ovn-installed in OVS
Jan 31 07:53:53 compute-2 ovn_controller[133834]: 2026-01-31T07:53:53Z|00225|binding|INFO|Removing lport 863150d4-3984-41d0-a375-230157e3a474 ovn-installed in OVS
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.181 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.186 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:53.188 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[91c94888-bd6b-4563-9188-2f4e3ab05ac8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:53.212 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[ccec22e0-fb46-457b-9ea1-eda88d6e3723]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:53.215 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[dd5749dc-f26e-41c8-8edd-624a4b951380]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:53.235 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[16d77e64-3e9f-4039-a689-8550eaf360a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:53.246 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d3bef244-9db3-435e-80e6-267a6cb870f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap485494d9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:4b:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 13, 'rx_bytes': 868, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 13, 'rx_bytes': 868, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621274, 'reachable_time': 37963, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259188, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:53.260 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f5a8a25f-f4f2-49ba-b56e-e7b4b6827593]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap485494d9-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 621283, 'tstamp': 621283}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259189, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap485494d9-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 621285, 'tstamp': 621285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259189, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:53.262 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap485494d9-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.263 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.264 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:53.265 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap485494d9-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:53.265 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:53:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:53.266 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap485494d9-50, col_values=(('external_ids', {'iface-id': 'cbb039b2-15df-45bd-8800-81213ecc7011'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:53.266 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:53:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:53.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.366 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:53:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:53.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.928 226833 DEBUG nova.compute.manager [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received event network-vif-plugged-bf305380-6000-4cea-86cf-123bf60de12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.929 226833 DEBUG oslo_concurrency.lockutils [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.929 226833 DEBUG oslo_concurrency.lockutils [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.930 226833 DEBUG oslo_concurrency.lockutils [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.931 226833 DEBUG nova.compute.manager [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] No waiting events found dispatching network-vif-plugged-bf305380-6000-4cea-86cf-123bf60de12a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.931 226833 WARNING nova.compute.manager [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received unexpected event network-vif-plugged-bf305380-6000-4cea-86cf-123bf60de12a for instance with vm_state active and task_state None.
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.931 226833 DEBUG nova.compute.manager [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received event network-vif-deleted-863150d4-3984-41d0-a375-230157e3a474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.932 226833 INFO nova.compute.manager [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Neutron deleted interface 863150d4-3984-41d0-a375-230157e3a474; detaching it from the instance and deleting it from the info cache
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.932 226833 DEBUG nova.network.neutron [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Updating instance_info_cache with network_info: [{"id": "db982ab1-0c3e-4386-804d-c70f4b91053a", "address": "fa:16:3e:97:6f:85", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb982ab1-0c", "ovs_interfaceid": "db982ab1-0c3e-4386-804d-c70f4b91053a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6cd8ab35-cc78-4cb7-b84e-09ec579e9f19", "address": "fa:16:3e:ea:ea:c9", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd8ab35-cc", "ovs_interfaceid": "6cd8ab35-cc78-4cb7-b84e-09ec579e9f19", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:53:53 compute-2 nova_compute[226829]: 2026-01-31 07:53:53.988 226833 DEBUG nova.objects.instance [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lazy-loading 'system_metadata' on Instance uuid 66066b76-4c92-4b20-ba23-c3002693dc10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:53:54 compute-2 nova_compute[226829]: 2026-01-31 07:53:54.067 226833 DEBUG nova.objects.instance [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lazy-loading 'flavor' on Instance uuid 66066b76-4c92-4b20-ba23-c3002693dc10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:53:54 compute-2 nova_compute[226829]: 2026-01-31 07:53:54.069 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:54 compute-2 nova_compute[226829]: 2026-01-31 07:53:54.130 226833 DEBUG nova.virt.libvirt.vif [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:52:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-867555935',display_name='tempest-AttachInterfacesTestJSON-server-867555935',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-867555935',id=73,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPX6GTdGdGcfJav9VD2vKr9cDmWhHD/qSDkhR2o41JGZ+VcYlHVD/4Hcc3zTsC+yiuWl8z4VSKy7Wi9c75E6Y4p1/iIqUHnIU8ZYXlvl82jhWPIfK08UcCczSxHzDyzJ+g==',key_name='tempest-keypair-461427290',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:52:41Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-tpxrd0ai',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:52:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=66066b76-4c92-4b20-ba23-c3002693dc10,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "863150d4-3984-41d0-a375-230157e3a474", "address": "fa:16:3e:a3:ce:a8", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap863150d4-39", "ovs_interfaceid": "863150d4-3984-41d0-a375-230157e3a474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:53:54 compute-2 nova_compute[226829]: 2026-01-31 07:53:54.130 226833 DEBUG nova.network.os_vif_util [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Converting VIF {"id": "863150d4-3984-41d0-a375-230157e3a474", "address": "fa:16:3e:a3:ce:a8", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap863150d4-39", "ovs_interfaceid": "863150d4-3984-41d0-a375-230157e3a474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:53:54 compute-2 nova_compute[226829]: 2026-01-31 07:53:54.131 226833 DEBUG nova.network.os_vif_util [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:ce:a8,bridge_name='br-int',has_traffic_filtering=True,id=863150d4-3984-41d0-a375-230157e3a474,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap863150d4-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:53:54 compute-2 nova_compute[226829]: 2026-01-31 07:53:54.136 226833 DEBUG nova.virt.libvirt.guest [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a3:ce:a8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap863150d4-39"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 31 07:53:54 compute-2 nova_compute[226829]: 2026-01-31 07:53:54.138 226833 DEBUG nova.virt.libvirt.guest [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a3:ce:a8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap863150d4-39"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 31 07:53:54 compute-2 nova_compute[226829]: 2026-01-31 07:53:54.140 226833 DEBUG nova.virt.libvirt.driver [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Attempting to detach device tap863150d4-39 from instance 66066b76-4c92-4b20-ba23-c3002693dc10 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 07:53:54 compute-2 nova_compute[226829]: 2026-01-31 07:53:54.140 226833 DEBUG nova.virt.libvirt.guest [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] detach device xml: <interface type="ethernet">
Jan 31 07:53:54 compute-2 nova_compute[226829]:   <mac address="fa:16:3e:a3:ce:a8"/>
Jan 31 07:53:54 compute-2 nova_compute[226829]:   <model type="virtio"/>
Jan 31 07:53:54 compute-2 nova_compute[226829]:   <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:53:54 compute-2 nova_compute[226829]:   <mtu size="1442"/>
Jan 31 07:53:54 compute-2 nova_compute[226829]:   <target dev="tap863150d4-39"/>
Jan 31 07:53:54 compute-2 nova_compute[226829]: </interface>
Jan 31 07:53:54 compute-2 nova_compute[226829]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 07:53:54 compute-2 nova_compute[226829]: 2026-01-31 07:53:54.143 226833 DEBUG oslo_concurrency.lockutils [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "66066b76-4c92-4b20-ba23-c3002693dc10" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:53:54 compute-2 nova_compute[226829]: 2026-01-31 07:53:54.143 226833 DEBUG oslo_concurrency.lockutils [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:53:54 compute-2 nova_compute[226829]: 2026-01-31 07:53:54.143 226833 DEBUG oslo_concurrency.lockutils [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:53:54 compute-2 nova_compute[226829]: 2026-01-31 07:53:54.144 226833 DEBUG oslo_concurrency.lockutils [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:53:54 compute-2 nova_compute[226829]: 2026-01-31 07:53:54.144 226833 DEBUG oslo_concurrency.lockutils [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:53:54 compute-2 nova_compute[226829]: 2026-01-31 07:53:54.145 226833 INFO nova.compute.manager [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Terminating instance
Jan 31 07:53:54 compute-2 nova_compute[226829]: 2026-01-31 07:53:54.146 226833 DEBUG nova.compute.manager [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:53:54 compute-2 nova_compute[226829]: 2026-01-31 07:53:54.206 226833 DEBUG nova.virt.libvirt.guest [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a3:ce:a8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap863150d4-39"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 31 07:53:54 compute-2 ceph-mon[77282]: pgmap v1699: 305 pgs: 305 active+clean; 121 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 31 07:53:54 compute-2 nova_compute[226829]: 2026-01-31 07:53:54.776 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:54 compute-2 kernel: tapdb982ab1-0c (unregistering): left promiscuous mode
Jan 31 07:53:54 compute-2 NetworkManager[48999]: <info>  [1769846034.9232] device (tapdb982ab1-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:53:54 compute-2 ovn_controller[133834]: 2026-01-31T07:53:54Z|00226|binding|INFO|Releasing lport db982ab1-0c3e-4386-804d-c70f4b91053a from this chassis (sb_readonly=0)
Jan 31 07:53:54 compute-2 ovn_controller[133834]: 2026-01-31T07:53:54Z|00227|binding|INFO|Setting lport db982ab1-0c3e-4386-804d-c70f4b91053a down in Southbound
Jan 31 07:53:54 compute-2 nova_compute[226829]: 2026-01-31 07:53:54.928 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:54 compute-2 ovn_controller[133834]: 2026-01-31T07:53:54Z|00228|binding|INFO|Removing iface tapdb982ab1-0c ovn-installed in OVS
Jan 31 07:53:54 compute-2 nova_compute[226829]: 2026-01-31 07:53:54.929 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:54 compute-2 kernel: tap6cd8ab35-cc (unregistering): left promiscuous mode
Jan 31 07:53:54 compute-2 nova_compute[226829]: 2026-01-31 07:53:54.941 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:54 compute-2 NetworkManager[48999]: <info>  [1769846034.9480] device (tap6cd8ab35-cc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:53:54 compute-2 ovn_controller[133834]: 2026-01-31T07:53:54Z|00229|binding|INFO|Releasing lport 6cd8ab35-cc78-4cb7-b84e-09ec579e9f19 from this chassis (sb_readonly=1)
Jan 31 07:53:54 compute-2 ovn_controller[133834]: 2026-01-31T07:53:54Z|00230|binding|INFO|Removing iface tap6cd8ab35-cc ovn-installed in OVS
Jan 31 07:53:54 compute-2 ovn_controller[133834]: 2026-01-31T07:53:54Z|00231|if_status|INFO|Dropped 2 log messages in last 668 seconds (most recently, 668 seconds ago) due to excessive rate
Jan 31 07:53:54 compute-2 ovn_controller[133834]: 2026-01-31T07:53:54Z|00232|if_status|INFO|Not setting lport 6cd8ab35-cc78-4cb7-b84e-09ec579e9f19 down as sb is readonly
Jan 31 07:53:54 compute-2 nova_compute[226829]: 2026-01-31 07:53:54.954 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:54 compute-2 nova_compute[226829]: 2026-01-31 07:53:54.957 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:54 compute-2 kernel: tap863150d4-39 (unregistering): left promiscuous mode
Jan 31 07:53:54 compute-2 NetworkManager[48999]: <info>  [1769846034.9693] device (tap863150d4-39): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:53:54 compute-2 nova_compute[226829]: 2026-01-31 07:53:54.972 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:54 compute-2 nova_compute[226829]: 2026-01-31 07:53:54.980 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:55 compute-2 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000049.scope: Deactivated successfully.
Jan 31 07:53:55 compute-2 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000049.scope: Consumed 16.123s CPU time.
Jan 31 07:53:55 compute-2 systemd-machined[195142]: Machine qemu-31-instance-00000049 terminated.
Jan 31 07:53:55 compute-2 podman[259193]: 2026-01-31 07:53:55.064110696 +0000 UTC m=+0.103021279 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 07:53:55 compute-2 ovn_controller[133834]: 2026-01-31T07:53:55Z|00233|binding|INFO|Setting lport 6cd8ab35-cc78-4cb7-b84e-09ec579e9f19 down in Southbound
Jan 31 07:53:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:55.132 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:6f:85 10.100.0.10'], port_security=['fa:16:3e:97:6f:85 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '66066b76-4c92-4b20-ba23-c3002693dc10', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-485494d9-5360-41c3-a10e-ef5098af0809', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8cbd6cc22654dfab04487522a63426c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45f21322-780d-4bfb-8db8-cc783d02479e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.184'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44651228-c761-41fc-b495-d4156af21548, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=db982ab1-0c3e-4386-804d-c70f4b91053a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:53:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:55.133 143841 INFO neutron.agent.ovn.metadata.agent [-] Port db982ab1-0c3e-4386-804d-c70f4b91053a in datapath 485494d9-5360-41c3-a10e-ef5098af0809 unbound from our chassis
Jan 31 07:53:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:55.134 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 485494d9-5360-41c3-a10e-ef5098af0809
Jan 31 07:53:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:55.145 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:ea:c9 10.100.0.3'], port_security=['fa:16:3e:ea:ea:c9 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '66066b76-4c92-4b20-ba23-c3002693dc10', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-485494d9-5360-41c3-a10e-ef5098af0809', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8cbd6cc22654dfab04487522a63426c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c7e4e426-0b76-41c4-8dcb-ea007c31db76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44651228-c761-41fc-b495-d4156af21548, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=6cd8ab35-cc78-4cb7-b84e-09ec579e9f19) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:53:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:55.152 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3d3eba2d-28a8-4cfa-a5f7-0b7404f53d3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:55.176 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[176e3737-5237-4851-a03b-966d375c222c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:55.179 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[cafba0dd-e832-4887-a501-15dd47b6e6b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:55.196 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[f06191cc-0f04-4718-a72f-7ea75a636eca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:55.209 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[46751d12-dd0a-4c44-93ab-b8725f6b6ffd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap485494d9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:4b:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 15, 'rx_bytes': 868, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 15, 'rx_bytes': 868, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621274, 'reachable_time': 37963, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259241, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:55 compute-2 NetworkManager[48999]: <info>  [1769846035.2199] manager: (tapdb982ab1-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/123)
Jan 31 07:53:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:55.223 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8babb4b9-4fbb-4e36-8c0d-cba22e8a1e10]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap485494d9-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 621283, 'tstamp': 621283}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259243, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap485494d9-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 621285, 'tstamp': 621285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259243, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:55.225 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap485494d9-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.227 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:55 compute-2 NetworkManager[48999]: <info>  [1769846035.2365] manager: (tap6cd8ab35-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/124)
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.240 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:55.241 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap485494d9-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:55.242 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:53:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:55.242 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap485494d9-50, col_values=(('external_ids', {'iface-id': 'cbb039b2-15df-45bd-8800-81213ecc7011'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:55.242 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:53:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:55.243 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 6cd8ab35-cc78-4cb7-b84e-09ec579e9f19 in datapath 485494d9-5360-41c3-a10e-ef5098af0809 unbound from our chassis
Jan 31 07:53:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:55.246 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 485494d9-5360-41c3-a10e-ef5098af0809, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:53:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:55.247 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1c97cc24-7032-452d-9964-c9b1b7cdbfa5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:55.248 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809 namespace which is not needed anymore
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.262 226833 DEBUG nova.virt.libvirt.guest [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:a3:ce:a8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap863150d4-39"/></interface>not found in domain: <domain type='kvm'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:   <name>instance-00000049</name>
Jan 31 07:53:55 compute-2 nova_compute[226829]:   <uuid>66066b76-4c92-4b20-ba23-c3002693dc10</uuid>
Jan 31 07:53:55 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <nova:name>tempest-AttachInterfacesTestJSON-server-867555935</nova:name>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:52:38</nova:creationTime>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:53:55 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:53:55 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:53:55 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:53:55 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:53:55 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:53:55 compute-2 nova_compute[226829]:         <nova:user uuid="1e1c5eef3d264666bc90735dd338d82a">tempest-AttachInterfacesTestJSON-668371425-project-member</nova:user>
Jan 31 07:53:55 compute-2 nova_compute[226829]:         <nova:project uuid="a8cbd6cc22654dfab04487522a63426c">tempest-AttachInterfacesTestJSON-668371425</nova:project>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 07:53:55 compute-2 nova_compute[226829]:         <nova:port uuid="db982ab1-0c3e-4386-804d-c70f4b91053a">
Jan 31 07:53:55 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:53:55 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:53:55 compute-2 nova_compute[226829]:   <memory unit='KiB'>131072</memory>
Jan 31 07:53:55 compute-2 nova_compute[226829]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 07:53:55 compute-2 nova_compute[226829]:   <vcpu placement='static'>1</vcpu>
Jan 31 07:53:55 compute-2 nova_compute[226829]:   <sysinfo type='smbios'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <system>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <entry name='manufacturer'>RDO</entry>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <entry name='product'>OpenStack Compute</entry>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <entry name='serial'>66066b76-4c92-4b20-ba23-c3002693dc10</entry>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <entry name='uuid'>66066b76-4c92-4b20-ba23-c3002693dc10</entry>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <entry name='family'>Virtual Machine</entry>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </system>
Jan 31 07:53:55 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:53:55 compute-2 nova_compute[226829]:   <os>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <boot dev='hd'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <smbios mode='sysinfo'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:   </os>
Jan 31 07:53:55 compute-2 nova_compute[226829]:   <features>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <vmcoreinfo state='on'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:   </features>
Jan 31 07:53:55 compute-2 nova_compute[226829]:   <cpu mode='custom' match='exact' check='partial'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <model fallback='allow'>Nehalem</model>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:53:55 compute-2 nova_compute[226829]:   <clock offset='utc'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <timer name='pit' tickpolicy='delay'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <timer name='hpet' present='no'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:53:55 compute-2 nova_compute[226829]:   <on_poweroff>destroy</on_poweroff>
Jan 31 07:53:55 compute-2 nova_compute[226829]:   <on_reboot>restart</on_reboot>
Jan 31 07:53:55 compute-2 nova_compute[226829]:   <on_crash>destroy</on_crash>
Jan 31 07:53:55 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <disk type='network' device='disk'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <driver name='qemu' type='raw' cache='none'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <auth username='openstack'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:         <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <source protocol='rbd' name='vms/66066b76-4c92-4b20-ba23-c3002693dc10_disk'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:         <host name='192.168.122.100' port='6789'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:         <host name='192.168.122.102' port='6789'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:         <host name='192.168.122.101' port='6789'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       </source>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target dev='vda' bus='virtio'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <disk type='network' device='cdrom'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <driver name='qemu' type='raw' cache='none'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <auth username='openstack'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:         <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <source protocol='rbd' name='vms/66066b76-4c92-4b20-ba23-c3002693dc10_disk.config'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:         <host name='192.168.122.100' port='6789'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:         <host name='192.168.122.102' port='6789'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:         <host name='192.168.122.101' port='6789'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       </source>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target dev='sda' bus='sata'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <readonly/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='pci' index='0' model='pcie-root'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target chassis='1' port='0x10'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target chassis='2' port='0x11'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target chassis='3' port='0x12'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target chassis='4' port='0x13'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target chassis='5' port='0x14'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target chassis='6' port='0x15'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target chassis='7' port='0x16'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target chassis='8' port='0x17'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target chassis='9' port='0x18'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target chassis='10' port='0x19'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target chassis='11' port='0x1a'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target chassis='12' port='0x1b'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target chassis='13' port='0x1c'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target chassis='14' port='0x1d'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target chassis='15' port='0x1e'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target chassis='16' port='0x1f'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target chassis='17' port='0x20'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target chassis='18' port='0x21'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target chassis='19' port='0x22'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target chassis='20' port='0x23'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target chassis='21' port='0x24'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target chassis='22' port='0x25'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target chassis='23' port='0x26'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target chassis='24' port='0x27'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model name='pcie-root-port'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target chassis='25' port='0x28'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model name='pcie-pci-bridge'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <controller type='sata' index='0'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </controller>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <interface type='ethernet'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <mac address='fa:16:3e:97:6f:85'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target dev='tapdb982ab1-0c'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model type='virtio'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <driver name='vhost' rx_queue_size='512'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <mtu size='1442'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <interface type='ethernet'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <mac address='fa:16:3e:ea:ea:c9'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target dev='tap6cd8ab35-cc'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model type='virtio'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <driver name='vhost' rx_queue_size='512'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <mtu size='1442'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <serial type='pty'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <log file='/var/lib/nova/instances/66066b76-4c92-4b20-ba23-c3002693dc10/console.log' append='off'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target type='isa-serial' port='0'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:         <model name='isa-serial'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       </target>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <console type='pty'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <log file='/var/lib/nova/instances/66066b76-4c92-4b20-ba23-c3002693dc10/console.log' append='off'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <target type='serial' port='0'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </console>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <input type='tablet' bus='usb'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='usb' bus='0' port='1'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </input>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <input type='mouse' bus='ps2'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <input type='keyboard' bus='ps2'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <graphics type='vnc' port='-1' autoport='yes' listen='::0'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <listen type='address' address='::0'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </graphics>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <audio id='1' type='none'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <video>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <model type='virtio' heads='1' primary='yes'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </video>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <watchdog model='itco' action='reset'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <memballoon model='virtio'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <stats period='10'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     <rng model='virtio'>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <backend model='random'>/dev/urandom</backend>
Jan 31 07:53:55 compute-2 nova_compute[226829]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:53:55 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:53:55 compute-2 nova_compute[226829]: </domain>
Jan 31 07:53:55 compute-2 nova_compute[226829]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.263 226833 INFO nova.virt.libvirt.driver [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Successfully detached device tap863150d4-39 from instance 66066b76-4c92-4b20-ba23-c3002693dc10 from the persistent domain config.
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.263 226833 DEBUG nova.virt.libvirt.driver [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] (1/8): Attempting to detach device tap863150d4-39 with device alias net3 from instance 66066b76-4c92-4b20-ba23-c3002693dc10 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.263 226833 DEBUG nova.virt.libvirt.guest [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] detach device xml: <interface type="ethernet">
Jan 31 07:53:55 compute-2 nova_compute[226829]:   <mac address="fa:16:3e:a3:ce:a8"/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:   <model type="virtio"/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:   <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:   <mtu size="1442"/>
Jan 31 07:53:55 compute-2 nova_compute[226829]:   <target dev="tap863150d4-39"/>
Jan 31 07:53:55 compute-2 nova_compute[226829]: </interface>
Jan 31 07:53:55 compute-2 nova_compute[226829]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.266 226833 DEBUG nova.virt.libvirt.driver [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Libvirt returned error while detaching device tap863150d4-39 from instance 66066b76-4c92-4b20-ba23-c3002693dc10. Libvirt error code: 55, error message: Requested operation is not valid: domain is not running. _detach_sync /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2667
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.266 226833 WARNING nova.virt.libvirt.driver [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Unexpected libvirt error while detaching device tap863150d4-39 from instance 66066b76-4c92-4b20-ba23-c3002693dc10: Requested operation is not valid: domain is not running: libvirt.libvirtError: Requested operation is not valid: domain is not running
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.267 226833 DEBUG nova.virt.libvirt.vif [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:52:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-867555935',display_name='tempest-AttachInterfacesTestJSON-server-867555935',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-867555935',id=73,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPX6GTdGdGcfJav9VD2vKr9cDmWhHD/qSDkhR2o41JGZ+VcYlHVD/4Hcc3zTsC+yiuWl8z4VSKy7Wi9c75E6Y4p1/iIqUHnIU8ZYXlvl82jhWPIfK08UcCczSxHzDyzJ+g==',key_name='tempest-keypair-461427290',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:52:41Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-tpxrd0ai',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:52:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=66066b76-4c92-4b20-ba23-c3002693dc10,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "863150d4-3984-41d0-a375-230157e3a474", "address": "fa:16:3e:a3:ce:a8", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap863150d4-39", "ovs_interfaceid": "863150d4-3984-41d0-a375-230157e3a474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.267 226833 DEBUG nova.network.os_vif_util [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Converting VIF {"id": "863150d4-3984-41d0-a375-230157e3a474", "address": "fa:16:3e:a3:ce:a8", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap863150d4-39", "ovs_interfaceid": "863150d4-3984-41d0-a375-230157e3a474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.268 226833 DEBUG nova.network.os_vif_util [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:ce:a8,bridge_name='br-int',has_traffic_filtering=True,id=863150d4-3984-41d0-a375-230157e3a474,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap863150d4-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.268 226833 DEBUG os_vif [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:ce:a8,bridge_name='br-int',has_traffic_filtering=True,id=863150d4-3984-41d0-a375-230157e3a474,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap863150d4-39') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.270 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.270 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap863150d4-39, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.272 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.273 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.273 226833 INFO nova.virt.libvirt.driver [-] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Instance destroyed successfully.
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.273 226833 DEBUG nova.objects.instance [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lazy-loading 'resources' on Instance uuid 66066b76-4c92-4b20-ba23-c3002693dc10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.277 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.279 226833 INFO os_vif [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:ce:a8,bridge_name='br-int',has_traffic_filtering=True,id=863150d4-3984-41d0-a375-230157e3a474,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap863150d4-39')
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server [req-ff0edeb3-f947-478b-a8f7-0b163eb1059b req-217bedd4-4aee-45be-9efc-26b2a82d19ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Exception during message handling: libvirt.libvirtError: Requested operation is not valid: domain is not running
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server     self.force_reraise()
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server     raise self.value
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 11064, in external_instance_event
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server     self._process_instance_vif_deleted_event(context,
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10871, in _process_instance_vif_deleted_event
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server     self.driver.detach_interface(context, instance, vif)
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2943, in detach_interface
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server     self._detach_with_retry(
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2473, in _detach_with_retry
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server     self._detach_from_live_with_retry(
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2529, in _detach_from_live_with_retry
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server     self._detach_from_live_and_wait_for_event(
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2591, in _detach_from_live_and_wait_for_event
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server     self._detach_sync(
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2663, in _detach_sync
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server     guest.detach_device(dev, persistent=persistent, live=live)
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py", line 466, in detach_device
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server     self._domain.detachDeviceFlags(device_xml, flags=flags)
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 193, in doit
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server     result = proxy_call(self._autowrap, f, *args, **kwargs)
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 151, in proxy_call
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server     rv = execute(f, *args, **kwargs)
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 132, in execute
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server     six.reraise(c, e, tb)
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/six.py", line 709, in reraise
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server     raise value
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 86, in tworker
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server     rv = meth(*args, **kwargs)
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server   File "/usr/lib64/python3.9/site-packages/libvirt.py", line 1605, in detachDeviceFlags
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server     raise libvirtError('virDomainDetachDeviceFlags() failed')
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server libvirt.libvirtError: Requested operation is not valid: domain is not running
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.282 226833 ERROR oslo_messaging.rpc.server 
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.329 226833 DEBUG nova.virt.libvirt.vif [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:52:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-867555935',display_name='tempest-AttachInterfacesTestJSON-server-867555935',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-867555935',id=73,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPX6GTdGdGcfJav9VD2vKr9cDmWhHD/qSDkhR2o41JGZ+VcYlHVD/4Hcc3zTsC+yiuWl8z4VSKy7Wi9c75E6Y4p1/iIqUHnIU8ZYXlvl82jhWPIfK08UcCczSxHzDyzJ+g==',key_name='tempest-keypair-461427290',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:52:41Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-tpxrd0ai',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:52:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=66066b76-4c92-4b20-ba23-c3002693dc10,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "db982ab1-0c3e-4386-804d-c70f4b91053a", "address": "fa:16:3e:97:6f:85", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb982ab1-0c", "ovs_interfaceid": "db982ab1-0c3e-4386-804d-c70f4b91053a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.330 226833 DEBUG nova.network.os_vif_util [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "db982ab1-0c3e-4386-804d-c70f4b91053a", "address": "fa:16:3e:97:6f:85", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb982ab1-0c", "ovs_interfaceid": "db982ab1-0c3e-4386-804d-c70f4b91053a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.330 226833 DEBUG nova.network.os_vif_util [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:6f:85,bridge_name='br-int',has_traffic_filtering=True,id=db982ab1-0c3e-4386-804d-c70f4b91053a,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb982ab1-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.331 226833 DEBUG os_vif [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:6f:85,bridge_name='br-int',has_traffic_filtering=True,id=db982ab1-0c3e-4386-804d-c70f4b91053a,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb982ab1-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.332 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.332 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb982ab1-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.334 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.335 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.338 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.340 226833 INFO os_vif [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:6f:85,bridge_name='br-int',has_traffic_filtering=True,id=db982ab1-0c3e-4386-804d-c70f4b91053a,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb982ab1-0c')
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.341 226833 DEBUG nova.virt.libvirt.vif [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:52:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-867555935',display_name='tempest-AttachInterfacesTestJSON-server-867555935',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-867555935',id=73,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPX6GTdGdGcfJav9VD2vKr9cDmWhHD/qSDkhR2o41JGZ+VcYlHVD/4Hcc3zTsC+yiuWl8z4VSKy7Wi9c75E6Y4p1/iIqUHnIU8ZYXlvl82jhWPIfK08UcCczSxHzDyzJ+g==',key_name='tempest-keypair-461427290',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:52:41Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-tpxrd0ai',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:52:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=66066b76-4c92-4b20-ba23-c3002693dc10,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6cd8ab35-cc78-4cb7-b84e-09ec579e9f19", "address": "fa:16:3e:ea:ea:c9", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd8ab35-cc", "ovs_interfaceid": "6cd8ab35-cc78-4cb7-b84e-09ec579e9f19", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.341 226833 DEBUG nova.network.os_vif_util [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "6cd8ab35-cc78-4cb7-b84e-09ec579e9f19", "address": "fa:16:3e:ea:ea:c9", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd8ab35-cc", "ovs_interfaceid": "6cd8ab35-cc78-4cb7-b84e-09ec579e9f19", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.342 226833 DEBUG nova.network.os_vif_util [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ea:ea:c9,bridge_name='br-int',has_traffic_filtering=True,id=6cd8ab35-cc78-4cb7-b84e-09ec579e9f19,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cd8ab35-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.343 226833 DEBUG os_vif [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ea:ea:c9,bridge_name='br-int',has_traffic_filtering=True,id=6cd8ab35-cc78-4cb7-b84e-09ec579e9f19,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cd8ab35-cc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:53:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.353 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:55.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.354 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6cd8ab35-cc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.356 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.357 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.359 226833 INFO os_vif [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ea:ea:c9,bridge_name='br-int',has_traffic_filtering=True,id=6cd8ab35-cc78-4cb7-b84e-09ec579e9f19,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cd8ab35-cc')
Jan 31 07:53:55 compute-2 neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809[258604]: [NOTICE]   (258608) : haproxy version is 2.8.14-c23fe91
Jan 31 07:53:55 compute-2 neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809[258604]: [NOTICE]   (258608) : path to executable is /usr/sbin/haproxy
Jan 31 07:53:55 compute-2 neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809[258604]: [ALERT]    (258608) : Current worker (258610) exited with code 143 (Terminated)
Jan 31 07:53:55 compute-2 neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809[258604]: [WARNING]  (258608) : All workers exited. Exiting... (0)
Jan 31 07:53:55 compute-2 systemd[1]: libpod-b2722ee8c7bce2995a517ab6a506ea14bff98b6adf207dc351614a09f0957c5f.scope: Deactivated successfully.
Jan 31 07:53:55 compute-2 podman[259301]: 2026-01-31 07:53:55.40026427 +0000 UTC m=+0.072674721 container died b2722ee8c7bce2995a517ab6a506ea14bff98b6adf207dc351614a09f0957c5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:53:55 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b2722ee8c7bce2995a517ab6a506ea14bff98b6adf207dc351614a09f0957c5f-userdata-shm.mount: Deactivated successfully.
Jan 31 07:53:55 compute-2 systemd[1]: var-lib-containers-storage-overlay-54392a5d5512b2fe4755d1134ce72e1a2e62bf8f4d5db39067ad58ab513140c1-merged.mount: Deactivated successfully.
Jan 31 07:53:55 compute-2 podman[259301]: 2026-01-31 07:53:55.437357089 +0000 UTC m=+0.109767540 container cleanup b2722ee8c7bce2995a517ab6a506ea14bff98b6adf207dc351614a09f0957c5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 07:53:55 compute-2 systemd[1]: libpod-conmon-b2722ee8c7bce2995a517ab6a506ea14bff98b6adf207dc351614a09f0957c5f.scope: Deactivated successfully.
Jan 31 07:53:55 compute-2 podman[259352]: 2026-01-31 07:53:55.497011997 +0000 UTC m=+0.042887696 container remove b2722ee8c7bce2995a517ab6a506ea14bff98b6adf207dc351614a09f0957c5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 07:53:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:55.501 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[327d4ecd-6216-4dc0-a63b-beff8e9556b2]: (4, ('Sat Jan 31 07:53:55 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809 (b2722ee8c7bce2995a517ab6a506ea14bff98b6adf207dc351614a09f0957c5f)\nb2722ee8c7bce2995a517ab6a506ea14bff98b6adf207dc351614a09f0957c5f\nSat Jan 31 07:53:55 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809 (b2722ee8c7bce2995a517ab6a506ea14bff98b6adf207dc351614a09f0957c5f)\nb2722ee8c7bce2995a517ab6a506ea14bff98b6adf207dc351614a09f0957c5f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:55.503 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d1eef2a8-6231-40f8-b6fe-15ea85e22aa6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:55.504 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap485494d9-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.506 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:55 compute-2 kernel: tap485494d9-50: left promiscuous mode
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.508 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:55.510 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ce7ab86d-cd6d-49f9-a980-56f5f0f04896]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.512 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:55.523 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[406509be-b8d1-43ae-a738-b4f8e15f37eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:55.525 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7517f34a-92c2-4a9f-9ae5-ff9ad3968c03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:55.536 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3566e9fe-ef77-4b1e-8c95-e5273567d69b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621267, 'reachable_time': 36171, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259367, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:55 compute-2 systemd[1]: run-netns-ovnmeta\x2d485494d9\x2d5360\x2d41c3\x2da10e\x2def5098af0809.mount: Deactivated successfully.
Jan 31 07:53:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:55.540 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:53:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:55.542 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[8e0e2600-f797-48c8-b553-21056d81bcd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.878 226833 INFO nova.virt.libvirt.driver [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Deleting instance files /var/lib/nova/instances/66066b76-4c92-4b20-ba23-c3002693dc10_del
Jan 31 07:53:55 compute-2 nova_compute[226829]: 2026-01-31 07:53:55.879 226833 INFO nova.virt.libvirt.driver [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Deletion of /var/lib/nova/instances/66066b76-4c92-4b20-ba23-c3002693dc10_del complete
Jan 31 07:53:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:55.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:56 compute-2 nova_compute[226829]: 2026-01-31 07:53:56.438 226833 INFO nova.compute.manager [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Took 2.29 seconds to destroy the instance on the hypervisor.
Jan 31 07:53:56 compute-2 nova_compute[226829]: 2026-01-31 07:53:56.439 226833 DEBUG oslo.service.loopingcall [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:53:56 compute-2 nova_compute[226829]: 2026-01-31 07:53:56.439 226833 DEBUG nova.compute.manager [-] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:53:56 compute-2 nova_compute[226829]: 2026-01-31 07:53:56.439 226833 DEBUG nova.network.neutron [-] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:53:56 compute-2 ceph-mon[77282]: pgmap v1700: 305 pgs: 305 active+clean; 121 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 5.0 KiB/s wr, 0 op/s
Jan 31 07:53:56 compute-2 sudo[259369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:53:56 compute-2 sudo[259369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:56 compute-2 sudo[259369]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:56 compute-2 sudo[259394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:53:56 compute-2 sudo[259394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:56 compute-2 sudo[259394]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:57 compute-2 nova_compute[226829]: 2026-01-31 07:53:57.057 226833 DEBUG nova.compute.manager [req-16bd209b-d780-4e2a-8915-9da7b916f7e8 req-5f15f414-6337-49dc-8ed1-a8d7c4982dcf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received event network-vif-unplugged-6cd8ab35-cc78-4cb7-b84e-09ec579e9f19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:53:57 compute-2 nova_compute[226829]: 2026-01-31 07:53:57.058 226833 DEBUG oslo_concurrency.lockutils [req-16bd209b-d780-4e2a-8915-9da7b916f7e8 req-5f15f414-6337-49dc-8ed1-a8d7c4982dcf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:53:57 compute-2 nova_compute[226829]: 2026-01-31 07:53:57.058 226833 DEBUG oslo_concurrency.lockutils [req-16bd209b-d780-4e2a-8915-9da7b916f7e8 req-5f15f414-6337-49dc-8ed1-a8d7c4982dcf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:53:57 compute-2 nova_compute[226829]: 2026-01-31 07:53:57.059 226833 DEBUG oslo_concurrency.lockutils [req-16bd209b-d780-4e2a-8915-9da7b916f7e8 req-5f15f414-6337-49dc-8ed1-a8d7c4982dcf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:53:57 compute-2 nova_compute[226829]: 2026-01-31 07:53:57.059 226833 DEBUG nova.compute.manager [req-16bd209b-d780-4e2a-8915-9da7b916f7e8 req-5f15f414-6337-49dc-8ed1-a8d7c4982dcf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] No waiting events found dispatching network-vif-unplugged-6cd8ab35-cc78-4cb7-b84e-09ec579e9f19 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:53:57 compute-2 nova_compute[226829]: 2026-01-31 07:53:57.059 226833 DEBUG nova.compute.manager [req-16bd209b-d780-4e2a-8915-9da7b916f7e8 req-5f15f414-6337-49dc-8ed1-a8d7c4982dcf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received event network-vif-unplugged-6cd8ab35-cc78-4cb7-b84e-09ec579e9f19 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:53:57 compute-2 nova_compute[226829]: 2026-01-31 07:53:57.146 226833 INFO nova.network.neutron [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Port bf305380-6000-4cea-86cf-123bf60de12a from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 31 07:53:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:57.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:57.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:57.925 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:53:57 compute-2 nova_compute[226829]: 2026-01-31 07:53:57.925 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:53:57.926 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:53:58 compute-2 ceph-mon[77282]: pgmap v1701: 305 pgs: 305 active+clean; 94 MiB data, 711 MiB used, 20 GiB / 21 GiB avail; 3.4 KiB/s rd, 5.0 KiB/s wr, 6 op/s
Jan 31 07:53:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:53:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:59.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:59 compute-2 nova_compute[226829]: 2026-01-31 07:53:59.385 226833 DEBUG nova.compute.manager [req-7d21e0d7-7c7f-446d-b6fc-8387a6ab2076 req-0ccdc3e7-91c2-47a0-a421-205e89c5cc05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received event network-vif-plugged-6cd8ab35-cc78-4cb7-b84e-09ec579e9f19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:53:59 compute-2 nova_compute[226829]: 2026-01-31 07:53:59.386 226833 DEBUG oslo_concurrency.lockutils [req-7d21e0d7-7c7f-446d-b6fc-8387a6ab2076 req-0ccdc3e7-91c2-47a0-a421-205e89c5cc05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:53:59 compute-2 nova_compute[226829]: 2026-01-31 07:53:59.386 226833 DEBUG oslo_concurrency.lockutils [req-7d21e0d7-7c7f-446d-b6fc-8387a6ab2076 req-0ccdc3e7-91c2-47a0-a421-205e89c5cc05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:53:59 compute-2 nova_compute[226829]: 2026-01-31 07:53:59.386 226833 DEBUG oslo_concurrency.lockutils [req-7d21e0d7-7c7f-446d-b6fc-8387a6ab2076 req-0ccdc3e7-91c2-47a0-a421-205e89c5cc05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:53:59 compute-2 nova_compute[226829]: 2026-01-31 07:53:59.386 226833 DEBUG nova.compute.manager [req-7d21e0d7-7c7f-446d-b6fc-8387a6ab2076 req-0ccdc3e7-91c2-47a0-a421-205e89c5cc05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] No waiting events found dispatching network-vif-plugged-6cd8ab35-cc78-4cb7-b84e-09ec579e9f19 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:53:59 compute-2 nova_compute[226829]: 2026-01-31 07:53:59.386 226833 WARNING nova.compute.manager [req-7d21e0d7-7c7f-446d-b6fc-8387a6ab2076 req-0ccdc3e7-91c2-47a0-a421-205e89c5cc05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received unexpected event network-vif-plugged-6cd8ab35-cc78-4cb7-b84e-09ec579e9f19 for instance with vm_state active and task_state deleting.
Jan 31 07:53:59 compute-2 nova_compute[226829]: 2026-01-31 07:53:59.777 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:53:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:59.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:00 compute-2 nova_compute[226829]: 2026-01-31 07:54:00.219 226833 DEBUG nova.compute.manager [req-71718a25-9c1c-4b8b-bc8d-cbc98126c6b7 req-6e2a8e09-3e18-456f-902c-7519a0815e7e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received event network-vif-deleted-db982ab1-0c3e-4386-804d-c70f4b91053a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:54:00 compute-2 nova_compute[226829]: 2026-01-31 07:54:00.220 226833 INFO nova.compute.manager [req-71718a25-9c1c-4b8b-bc8d-cbc98126c6b7 req-6e2a8e09-3e18-456f-902c-7519a0815e7e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Neutron deleted interface db982ab1-0c3e-4386-804d-c70f4b91053a; detaching it from the instance and deleting it from the info cache
Jan 31 07:54:00 compute-2 nova_compute[226829]: 2026-01-31 07:54:00.220 226833 DEBUG nova.network.neutron [req-71718a25-9c1c-4b8b-bc8d-cbc98126c6b7 req-6e2a8e09-3e18-456f-902c-7519a0815e7e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Updating instance_info_cache with network_info: [{"id": "6cd8ab35-cc78-4cb7-b84e-09ec579e9f19", "address": "fa:16:3e:ea:ea:c9", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd8ab35-cc", "ovs_interfaceid": "6cd8ab35-cc78-4cb7-b84e-09ec579e9f19", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:54:00 compute-2 nova_compute[226829]: 2026-01-31 07:54:00.310 226833 DEBUG nova.compute.manager [req-71718a25-9c1c-4b8b-bc8d-cbc98126c6b7 req-6e2a8e09-3e18-456f-902c-7519a0815e7e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Detach interface failed, port_id=db982ab1-0c3e-4386-804d-c70f4b91053a, reason: Instance 66066b76-4c92-4b20-ba23-c3002693dc10 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 07:54:00 compute-2 nova_compute[226829]: 2026-01-31 07:54:00.356 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:00 compute-2 ceph-mon[77282]: pgmap v1702: 305 pgs: 305 active+clean; 42 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 5.0 KiB/s wr, 25 op/s
Jan 31 07:54:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:01.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:01 compute-2 nova_compute[226829]: 2026-01-31 07:54:01.438 226833 DEBUG nova.network.neutron [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Updating instance_info_cache with network_info: [{"id": "db982ab1-0c3e-4386-804d-c70f4b91053a", "address": "fa:16:3e:97:6f:85", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb982ab1-0c", "ovs_interfaceid": "db982ab1-0c3e-4386-804d-c70f4b91053a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6cd8ab35-cc78-4cb7-b84e-09ec579e9f19", "address": "fa:16:3e:ea:ea:c9", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd8ab35-cc", "ovs_interfaceid": "6cd8ab35-cc78-4cb7-b84e-09ec579e9f19", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "863150d4-3984-41d0-a375-230157e3a474", "address": "fa:16:3e:a3:ce:a8", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap863150d4-39", "ovs_interfaceid": "863150d4-3984-41d0-a375-230157e3a474", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:54:01 compute-2 nova_compute[226829]: 2026-01-31 07:54:01.676 226833 DEBUG oslo_concurrency.lockutils [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Releasing lock "refresh_cache-66066b76-4c92-4b20-ba23-c3002693dc10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:54:01 compute-2 nova_compute[226829]: 2026-01-31 07:54:01.726 226833 DEBUG nova.network.neutron [-] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:54:01 compute-2 nova_compute[226829]: 2026-01-31 07:54:01.734 226833 DEBUG oslo_concurrency.lockutils [None req-9b629fe3-e40c-41b7-b657-c775c5bfa822 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "interface-66066b76-4c92-4b20-ba23-c3002693dc10-bf305380-6000-4cea-86cf-123bf60de12a" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 14.023s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:54:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:54:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:01.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:54:01 compute-2 nova_compute[226829]: 2026-01-31 07:54:01.932 226833 INFO nova.compute.manager [-] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Took 5.49 seconds to deallocate network for instance.
Jan 31 07:54:02 compute-2 nova_compute[226829]: 2026-01-31 07:54:02.010 226833 DEBUG oslo_concurrency.lockutils [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:54:02 compute-2 nova_compute[226829]: 2026-01-31 07:54:02.010 226833 DEBUG oslo_concurrency.lockutils [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:54:02 compute-2 nova_compute[226829]: 2026-01-31 07:54:02.447 226833 DEBUG oslo_concurrency.processutils [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:54:02 compute-2 ceph-mon[77282]: pgmap v1703: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 4.5 KiB/s wr, 27 op/s
Jan 31 07:54:02 compute-2 nova_compute[226829]: 2026-01-31 07:54:02.854 226833 DEBUG nova.compute.manager [req-b7c75c31-7ecd-4842-8edc-93c6922db3ea req-55a1fe86-6772-4385-9fdc-0d5cfd695025 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Received event network-vif-deleted-6cd8ab35-cc78-4cb7-b84e-09ec579e9f19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:54:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:54:02 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/320822183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:54:02 compute-2 nova_compute[226829]: 2026-01-31 07:54:02.874 226833 DEBUG oslo_concurrency.processutils [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:54:02 compute-2 nova_compute[226829]: 2026-01-31 07:54:02.881 226833 DEBUG nova.compute.provider_tree [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:54:02 compute-2 nova_compute[226829]: 2026-01-31 07:54:02.997 226833 DEBUG nova.scheduler.client.report [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:54:03 compute-2 nova_compute[226829]: 2026-01-31 07:54:03.003 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:03 compute-2 nova_compute[226829]: 2026-01-31 07:54:03.060 226833 DEBUG oslo_concurrency.lockutils [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:54:03 compute-2 nova_compute[226829]: 2026-01-31 07:54:03.203 226833 INFO nova.scheduler.client.report [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Deleted allocations for instance 66066b76-4c92-4b20-ba23-c3002693dc10
Jan 31 07:54:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:03.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:03 compute-2 nova_compute[226829]: 2026-01-31 07:54:03.405 226833 DEBUG oslo_concurrency.lockutils [None req-7248c0e8-c45e-4aec-9ca6-8e4ea99d95bd 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "66066b76-4c92-4b20-ba23-c3002693dc10" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:54:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/320822183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:54:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:54:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:03.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:03 compute-2 nova_compute[226829]: 2026-01-31 07:54:03.967 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:04 compute-2 ceph-mon[77282]: pgmap v1704: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 4.5 KiB/s wr, 27 op/s
Jan 31 07:54:04 compute-2 nova_compute[226829]: 2026-01-31 07:54:04.779 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:05 compute-2 nova_compute[226829]: 2026-01-31 07:54:05.358 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:05.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:05.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:05 compute-2 ceph-mon[77282]: pgmap v1705: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 4.5 KiB/s wr, 27 op/s
Jan 31 07:54:06 compute-2 nova_compute[226829]: 2026-01-31 07:54:06.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:54:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:54:06.864 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:54:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:54:06.864 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:54:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:54:06.865 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:54:07 compute-2 podman[259446]: 2026-01-31 07:54:07.164906006 +0000 UTC m=+0.051481639 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:54:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:07.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:07 compute-2 nova_compute[226829]: 2026-01-31 07:54:07.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:54:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:07.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:54:07.929 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:54:08 compute-2 ceph-mon[77282]: pgmap v1706: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Jan 31 07:54:08 compute-2 nova_compute[226829]: 2026-01-31 07:54:08.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:54:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:54:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:54:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:09.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:54:09 compute-2 nova_compute[226829]: 2026-01-31 07:54:09.781 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:54:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:09.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:54:10 compute-2 nova_compute[226829]: 2026-01-31 07:54:10.265 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846035.261913, 66066b76-4c92-4b20-ba23-c3002693dc10 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:54:10 compute-2 nova_compute[226829]: 2026-01-31 07:54:10.265 226833 INFO nova.compute.manager [-] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] VM Stopped (Lifecycle Event)
Jan 31 07:54:10 compute-2 nova_compute[226829]: 2026-01-31 07:54:10.360 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:10 compute-2 ceph-mon[77282]: pgmap v1707: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 511 B/s wr, 21 op/s
Jan 31 07:54:10 compute-2 nova_compute[226829]: 2026-01-31 07:54:10.611 226833 DEBUG nova.compute.manager [None req-ee74ccf5-5590-4acf-9f85-1c1aa7aaa00c - - - - - -] [instance: 66066b76-4c92-4b20-ba23-c3002693dc10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:54:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:11.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:54:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:11.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:54:12 compute-2 nova_compute[226829]: 2026-01-31 07:54:12.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:54:12 compute-2 ceph-mon[77282]: pgmap v1708: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 1.6 KiB/s rd, 511 B/s wr, 2 op/s
Jan 31 07:54:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:54:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:13.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:54:13 compute-2 nova_compute[226829]: 2026-01-31 07:54:13.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:54:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:54:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:54:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:13.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:54:14 compute-2 nova_compute[226829]: 2026-01-31 07:54:14.334 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:14 compute-2 nova_compute[226829]: 2026-01-31 07:54:14.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:54:14 compute-2 nova_compute[226829]: 2026-01-31 07:54:14.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:54:14 compute-2 nova_compute[226829]: 2026-01-31 07:54:14.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:54:14 compute-2 ceph-mon[77282]: pgmap v1709: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/895703752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:54:14 compute-2 nova_compute[226829]: 2026-01-31 07:54:14.836 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:15 compute-2 nova_compute[226829]: 2026-01-31 07:54:15.088 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:54:15 compute-2 nova_compute[226829]: 2026-01-31 07:54:15.088 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:54:15 compute-2 nova_compute[226829]: 2026-01-31 07:54:15.363 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:15.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:54:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:15.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:54:15 compute-2 ceph-mon[77282]: pgmap v1710: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:16 compute-2 nova_compute[226829]: 2026-01-31 07:54:16.303 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:54:16 compute-2 nova_compute[226829]: 2026-01-31 07:54:16.304 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:54:16 compute-2 nova_compute[226829]: 2026-01-31 07:54:16.304 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:54:16 compute-2 nova_compute[226829]: 2026-01-31 07:54:16.304 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:54:16 compute-2 nova_compute[226829]: 2026-01-31 07:54:16.305 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:54:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:54:16 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/823784031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:54:16 compute-2 nova_compute[226829]: 2026-01-31 07:54:16.724 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:54:16 compute-2 sudo[259494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:54:16 compute-2 sudo[259494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:54:16 compute-2 sudo[259494]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:16 compute-2 sudo[259519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:54:16 compute-2 sudo[259519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:54:16 compute-2 sudo[259519]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:16 compute-2 nova_compute[226829]: 2026-01-31 07:54:16.904 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:54:16 compute-2 nova_compute[226829]: 2026-01-31 07:54:16.906 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4572MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:54:16 compute-2 nova_compute[226829]: 2026-01-31 07:54:16.906 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:54:16 compute-2 nova_compute[226829]: 2026-01-31 07:54:16.907 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:54:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2400977389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:54:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/823784031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:54:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:54:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:17.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:54:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:17.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:18 compute-2 ceph-mon[77282]: pgmap v1711: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:54:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:19.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:19 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3230444530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:54:19 compute-2 nova_compute[226829]: 2026-01-31 07:54:19.837 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:54:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:19.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:54:20 compute-2 nova_compute[226829]: 2026-01-31 07:54:20.032 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:54:20 compute-2 nova_compute[226829]: 2026-01-31 07:54:20.033 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:54:20 compute-2 nova_compute[226829]: 2026-01-31 07:54:20.365 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:20 compute-2 nova_compute[226829]: 2026-01-31 07:54:20.602 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:54:20 compute-2 ceph-mon[77282]: pgmap v1712: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:21 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:54:21 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3604665836' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:54:21 compute-2 nova_compute[226829]: 2026-01-31 07:54:21.026 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:54:21 compute-2 nova_compute[226829]: 2026-01-31 07:54:21.033 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:54:21 compute-2 nova_compute[226829]: 2026-01-31 07:54:21.157 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:54:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 31 07:54:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:21.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 31 07:54:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:54:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:21.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:54:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3604665836' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:54:21 compute-2 ceph-mon[77282]: pgmap v1713: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3039010121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:54:22 compute-2 nova_compute[226829]: 2026-01-31 07:54:22.060 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:54:22 compute-2 nova_compute[226829]: 2026-01-31 07:54:22.061 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 5.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:54:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:23.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:54:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:54:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:23.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:54:24 compute-2 ceph-mon[77282]: pgmap v1714: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:24 compute-2 nova_compute[226829]: 2026-01-31 07:54:24.460 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:54:24 compute-2 nova_compute[226829]: 2026-01-31 07:54:24.664 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:54:24 compute-2 nova_compute[226829]: 2026-01-31 07:54:24.664 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:54:24 compute-2 nova_compute[226829]: 2026-01-31 07:54:24.665 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:54:24 compute-2 nova_compute[226829]: 2026-01-31 07:54:24.841 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:25 compute-2 podman[259570]: 2026-01-31 07:54:25.197309334 +0000 UTC m=+0.089988317 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 07:54:25 compute-2 nova_compute[226829]: 2026-01-31 07:54:25.367 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:25.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:25.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:26 compute-2 ceph-mon[77282]: pgmap v1715: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:27.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:54:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:27.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:54:28 compute-2 ceph-mon[77282]: pgmap v1716: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:54:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:54:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:29.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:54:29 compute-2 nova_compute[226829]: 2026-01-31 07:54:29.844 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:29.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:30 compute-2 nova_compute[226829]: 2026-01-31 07:54:30.371 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:30 compute-2 ceph-mon[77282]: pgmap v1717: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:31.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:54:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:31.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:54:32 compute-2 ceph-mon[77282]: pgmap v1718: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:54:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:33.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:54:33 compute-2 ceph-mon[77282]: pgmap v1719: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:54:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:54:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:33.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:54:34 compute-2 nova_compute[226829]: 2026-01-31 07:54:34.846 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:35 compute-2 nova_compute[226829]: 2026-01-31 07:54:35.373 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:35.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:54:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:35.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:54:36 compute-2 ceph-mon[77282]: pgmap v1720: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:36 compute-2 sudo[259605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:54:36 compute-2 sudo[259605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:54:36 compute-2 sudo[259605]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:36 compute-2 sudo[259630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:54:36 compute-2 sudo[259630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:54:36 compute-2 sudo[259630]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:37.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:37.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:38 compute-2 podman[259656]: 2026-01-31 07:54:38.165280146 +0000 UTC m=+0.054751698 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:54:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:54:39 compute-2 ceph-mon[77282]: pgmap v1721: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:39.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:39 compute-2 nova_compute[226829]: 2026-01-31 07:54:39.848 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:39.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:40 compute-2 nova_compute[226829]: 2026-01-31 07:54:40.375 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:40 compute-2 ceph-mon[77282]: pgmap v1722: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:41.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:41.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:43 compute-2 ceph-mon[77282]: pgmap v1723: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:43.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:54:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:43.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:44 compute-2 ceph-mon[77282]: pgmap v1724: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:44 compute-2 nova_compute[226829]: 2026-01-31 07:54:44.850 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:45 compute-2 nova_compute[226829]: 2026-01-31 07:54:45.377 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:45.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2192602938' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:54:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2192602938' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:54:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:54:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:45.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:54:46 compute-2 ceph-mon[77282]: pgmap v1725: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:47.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:47 compute-2 nova_compute[226829]: 2026-01-31 07:54:47.835 226833 DEBUG oslo_concurrency.lockutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Acquiring lock "a6d52544-dac0-441a-a99f-b0d23283f733" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:54:47 compute-2 nova_compute[226829]: 2026-01-31 07:54:47.835 226833 DEBUG oslo_concurrency.lockutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Lock "a6d52544-dac0-441a-a99f-b0d23283f733" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:54:47 compute-2 sudo[259680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:54:47 compute-2 sudo[259680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:54:47 compute-2 sudo[259680]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:47 compute-2 sudo[259705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:54:47 compute-2 sudo[259705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:54:47 compute-2 sudo[259705]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:54:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:47.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:54:47 compute-2 sudo[259730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:54:48 compute-2 sudo[259730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:54:48 compute-2 sudo[259730]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:48 compute-2 sudo[259755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:54:48 compute-2 sudo[259755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:54:48 compute-2 ceph-mon[77282]: pgmap v1726: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:48 compute-2 nova_compute[226829]: 2026-01-31 07:54:48.265 226833 DEBUG nova.compute.manager [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:54:48 compute-2 sudo[259755]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:54:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:54:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:54:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:54:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:54:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:54:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:54:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:49.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:49 compute-2 nova_compute[226829]: 2026-01-31 07:54:49.474 226833 DEBUG oslo_concurrency.lockutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:54:49 compute-2 nova_compute[226829]: 2026-01-31 07:54:49.475 226833 DEBUG oslo_concurrency.lockutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:54:49 compute-2 nova_compute[226829]: 2026-01-31 07:54:49.482 226833 DEBUG nova.virt.hardware [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:54:49 compute-2 nova_compute[226829]: 2026-01-31 07:54:49.483 226833 INFO nova.compute.claims [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:54:49 compute-2 nova_compute[226829]: 2026-01-31 07:54:49.852 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:49.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:50 compute-2 ceph-mon[77282]: pgmap v1727: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:50 compute-2 nova_compute[226829]: 2026-01-31 07:54:50.379 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:50 compute-2 nova_compute[226829]: 2026-01-31 07:54:50.493 226833 DEBUG oslo_concurrency.processutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:54:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:54:50 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3774768180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:54:50 compute-2 nova_compute[226829]: 2026-01-31 07:54:50.893 226833 DEBUG oslo_concurrency.processutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:54:50 compute-2 nova_compute[226829]: 2026-01-31 07:54:50.898 226833 DEBUG nova.compute.provider_tree [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.039 226833 DEBUG nova.scheduler.client.report [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.097 226833 DEBUG oslo_concurrency.lockutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Acquiring lock "ebe88ddf-b955-4cc2-9685-826d10d55955" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.098 226833 DEBUG oslo_concurrency.lockutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.165 226833 DEBUG oslo_concurrency.lockutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.166 226833 DEBUG nova.compute.manager [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.206 226833 DEBUG nova.compute.manager [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.317 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.333 226833 DEBUG nova.compute.manager [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.334 226833 DEBUG nova.network.neutron [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.368 226833 INFO nova.virt.libvirt.driver [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.374 226833 DEBUG oslo_concurrency.lockutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.375 226833 DEBUG oslo_concurrency.lockutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.382 226833 DEBUG nova.virt.hardware [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.382 226833 INFO nova.compute.claims [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:54:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:51.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.423 226833 DEBUG nova.compute.manager [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.625 226833 DEBUG nova.compute.manager [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.627 226833 DEBUG nova.virt.libvirt.driver [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.627 226833 INFO nova.virt.libvirt.driver [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Creating image(s)
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.650 226833 DEBUG nova.storage.rbd_utils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] rbd image a6d52544-dac0-441a-a99f-b0d23283f733_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.675 226833 DEBUG nova.storage.rbd_utils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] rbd image a6d52544-dac0-441a-a99f-b0d23283f733_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.699 226833 DEBUG nova.storage.rbd_utils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] rbd image a6d52544-dac0-441a-a99f-b0d23283f733_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.703 226833 DEBUG oslo_concurrency.processutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.740 226833 DEBUG oslo_concurrency.processutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.757 226833 DEBUG oslo_concurrency.processutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.758 226833 DEBUG oslo_concurrency.lockutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.759 226833 DEBUG oslo_concurrency.lockutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.759 226833 DEBUG oslo_concurrency.lockutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.785 226833 DEBUG nova.storage.rbd_utils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] rbd image a6d52544-dac0-441a-a99f-b0d23283f733_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.788 226833 DEBUG oslo_concurrency.processutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 a6d52544-dac0-441a-a99f-b0d23283f733_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:54:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3774768180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:54:51 compute-2 nova_compute[226829]: 2026-01-31 07:54:51.976 226833 DEBUG nova.policy [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '41f8b35d7c8b43d9b3583e6e2b1385cb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9bc07928b022430ba8bcc450bbb5c7f5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:54:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:51.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:54:52 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/762709688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:54:52 compute-2 nova_compute[226829]: 2026-01-31 07:54:52.125 226833 DEBUG oslo_concurrency.processutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.385s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:54:52 compute-2 nova_compute[226829]: 2026-01-31 07:54:52.130 226833 DEBUG nova.compute.provider_tree [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:54:52 compute-2 nova_compute[226829]: 2026-01-31 07:54:52.167 226833 DEBUG nova.scheduler.client.report [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:54:52 compute-2 nova_compute[226829]: 2026-01-31 07:54:52.204 226833 DEBUG oslo_concurrency.lockutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:54:52 compute-2 nova_compute[226829]: 2026-01-31 07:54:52.205 226833 DEBUG nova.compute.manager [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:54:52 compute-2 nova_compute[226829]: 2026-01-31 07:54:52.275 226833 DEBUG nova.compute.manager [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:54:52 compute-2 nova_compute[226829]: 2026-01-31 07:54:52.276 226833 DEBUG nova.network.neutron [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:54:52 compute-2 nova_compute[226829]: 2026-01-31 07:54:52.303 226833 INFO nova.virt.libvirt.driver [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:54:52 compute-2 nova_compute[226829]: 2026-01-31 07:54:52.337 226833 DEBUG nova.compute.manager [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:54:52 compute-2 nova_compute[226829]: 2026-01-31 07:54:52.446 226833 DEBUG nova.compute.manager [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:54:52 compute-2 nova_compute[226829]: 2026-01-31 07:54:52.447 226833 DEBUG nova.virt.libvirt.driver [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:54:52 compute-2 nova_compute[226829]: 2026-01-31 07:54:52.448 226833 INFO nova.virt.libvirt.driver [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Creating image(s)
Jan 31 07:54:52 compute-2 nova_compute[226829]: 2026-01-31 07:54:52.480 226833 DEBUG nova.storage.rbd_utils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] rbd image ebe88ddf-b955-4cc2-9685-826d10d55955_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:54:52 compute-2 nova_compute[226829]: 2026-01-31 07:54:52.513 226833 DEBUG nova.storage.rbd_utils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] rbd image ebe88ddf-b955-4cc2-9685-826d10d55955_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:54:52 compute-2 nova_compute[226829]: 2026-01-31 07:54:52.546 226833 DEBUG nova.storage.rbd_utils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] rbd image ebe88ddf-b955-4cc2-9685-826d10d55955_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:54:52 compute-2 nova_compute[226829]: 2026-01-31 07:54:52.549 226833 DEBUG oslo_concurrency.processutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:54:52 compute-2 nova_compute[226829]: 2026-01-31 07:54:52.576 226833 DEBUG nova.policy [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '56eecf4373334b18a454186e0c54e924', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56ce08a86486427fbebbfbd075cdb404', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:54:52 compute-2 nova_compute[226829]: 2026-01-31 07:54:52.619 226833 DEBUG oslo_concurrency.processutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:54:52 compute-2 nova_compute[226829]: 2026-01-31 07:54:52.620 226833 DEBUG oslo_concurrency.lockutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:54:52 compute-2 nova_compute[226829]: 2026-01-31 07:54:52.621 226833 DEBUG oslo_concurrency.lockutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:54:52 compute-2 nova_compute[226829]: 2026-01-31 07:54:52.621 226833 DEBUG oslo_concurrency.lockutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:54:52 compute-2 nova_compute[226829]: 2026-01-31 07:54:52.649 226833 DEBUG nova.storage.rbd_utils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] rbd image ebe88ddf-b955-4cc2-9685-826d10d55955_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:54:52 compute-2 nova_compute[226829]: 2026-01-31 07:54:52.653 226833 DEBUG oslo_concurrency.processutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 ebe88ddf-b955-4cc2-9685-826d10d55955_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:54:52 compute-2 nova_compute[226829]: 2026-01-31 07:54:52.718 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:54:52.717 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:54:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:54:52.721 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:54:53 compute-2 ceph-mon[77282]: pgmap v1728: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1505041532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:54:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/762709688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:54:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:53.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:53 compute-2 nova_compute[226829]: 2026-01-31 07:54:53.506 226833 DEBUG nova.network.neutron [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Successfully created port: 7b8f6af4-25f0-4345-98f9-b6f4345e7a4c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:54:53 compute-2 nova_compute[226829]: 2026-01-31 07:54:53.572 226833 DEBUG oslo_concurrency.processutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 a6d52544-dac0-441a-a99f-b0d23283f733_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.784s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:54:53 compute-2 nova_compute[226829]: 2026-01-31 07:54:53.656 226833 DEBUG nova.storage.rbd_utils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] resizing rbd image a6d52544-dac0-441a-a99f-b0d23283f733_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:54:53 compute-2 nova_compute[226829]: 2026-01-31 07:54:53.827 226833 DEBUG nova.network.neutron [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Successfully created port: 17255b28-10be-4d25-bba8-0292dfa7ad68 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:54:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:54:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:54:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:53.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:54:54 compute-2 nova_compute[226829]: 2026-01-31 07:54:54.634 226833 DEBUG oslo_concurrency.processutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 ebe88ddf-b955-4cc2-9685-826d10d55955_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.981s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:54:54 compute-2 ceph-mon[77282]: pgmap v1729: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:54 compute-2 nova_compute[226829]: 2026-01-31 07:54:54.697 226833 DEBUG nova.storage.rbd_utils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] resizing rbd image ebe88ddf-b955-4cc2-9685-826d10d55955_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:54:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:54:54.725 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:54:54 compute-2 nova_compute[226829]: 2026-01-31 07:54:54.854 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:54 compute-2 nova_compute[226829]: 2026-01-31 07:54:54.932 226833 DEBUG nova.objects.instance [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Lazy-loading 'migration_context' on Instance uuid a6d52544-dac0-441a-a99f-b0d23283f733 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:54:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:55.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:55 compute-2 nova_compute[226829]: 2026-01-31 07:54:55.883 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:55 compute-2 nova_compute[226829]: 2026-01-31 07:54:55.887 226833 DEBUG nova.virt.libvirt.driver [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:54:55 compute-2 nova_compute[226829]: 2026-01-31 07:54:55.888 226833 DEBUG nova.virt.libvirt.driver [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Ensure instance console log exists: /var/lib/nova/instances/a6d52544-dac0-441a-a99f-b0d23283f733/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:54:55 compute-2 nova_compute[226829]: 2026-01-31 07:54:55.888 226833 DEBUG oslo_concurrency.lockutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:54:55 compute-2 nova_compute[226829]: 2026-01-31 07:54:55.889 226833 DEBUG oslo_concurrency.lockutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:54:55 compute-2 nova_compute[226829]: 2026-01-31 07:54:55.889 226833 DEBUG oslo_concurrency.lockutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:54:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:54:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:55.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:54:56 compute-2 nova_compute[226829]: 2026-01-31 07:54:56.068 226833 DEBUG nova.network.neutron [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Successfully created port: 9f745c59-fd7c-4cd5-9eda-e439c116fbd8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:54:56 compute-2 ceph-mon[77282]: pgmap v1730: 305 pgs: 305 active+clean; 85 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 8.4 KiB/s rd, 1.8 MiB/s wr, 16 op/s
Jan 31 07:54:56 compute-2 nova_compute[226829]: 2026-01-31 07:54:56.129 226833 DEBUG nova.objects.instance [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lazy-loading 'migration_context' on Instance uuid ebe88ddf-b955-4cc2-9685-826d10d55955 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:54:56 compute-2 nova_compute[226829]: 2026-01-31 07:54:56.152 226833 DEBUG nova.virt.libvirt.driver [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:54:56 compute-2 nova_compute[226829]: 2026-01-31 07:54:56.152 226833 DEBUG nova.virt.libvirt.driver [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Ensure instance console log exists: /var/lib/nova/instances/ebe88ddf-b955-4cc2-9685-826d10d55955/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:54:56 compute-2 nova_compute[226829]: 2026-01-31 07:54:56.153 226833 DEBUG oslo_concurrency.lockutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:54:56 compute-2 nova_compute[226829]: 2026-01-31 07:54:56.153 226833 DEBUG oslo_concurrency.lockutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:54:56 compute-2 nova_compute[226829]: 2026-01-31 07:54:56.153 226833 DEBUG oslo_concurrency.lockutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:54:56 compute-2 podman[260174]: 2026-01-31 07:54:56.193060859 +0000 UTC m=+0.078928349 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:54:56 compute-2 nova_compute[226829]: 2026-01-31 07:54:56.836 226833 DEBUG nova.network.neutron [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Successfully updated port: 7b8f6af4-25f0-4345-98f9-b6f4345e7a4c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:54:56 compute-2 nova_compute[226829]: 2026-01-31 07:54:56.902 226833 DEBUG oslo_concurrency.lockutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Acquiring lock "refresh_cache-a6d52544-dac0-441a-a99f-b0d23283f733" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:54:56 compute-2 nova_compute[226829]: 2026-01-31 07:54:56.903 226833 DEBUG oslo_concurrency.lockutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Acquired lock "refresh_cache-a6d52544-dac0-441a-a99f-b0d23283f733" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:54:56 compute-2 nova_compute[226829]: 2026-01-31 07:54:56.903 226833 DEBUG nova.network.neutron [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:54:57 compute-2 nova_compute[226829]: 2026-01-31 07:54:57.104 226833 DEBUG nova.compute.manager [req-e076c446-6acc-4c02-93d7-129b3ed8ec38 req-13571e78-2b9d-4b3e-ba2a-21e1d86d9fa6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Received event network-changed-7b8f6af4-25f0-4345-98f9-b6f4345e7a4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:54:57 compute-2 nova_compute[226829]: 2026-01-31 07:54:57.105 226833 DEBUG nova.compute.manager [req-e076c446-6acc-4c02-93d7-129b3ed8ec38 req-13571e78-2b9d-4b3e-ba2a-21e1d86d9fa6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Refreshing instance network info cache due to event network-changed-7b8f6af4-25f0-4345-98f9-b6f4345e7a4c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:54:57 compute-2 nova_compute[226829]: 2026-01-31 07:54:57.106 226833 DEBUG oslo_concurrency.lockutils [req-e076c446-6acc-4c02-93d7-129b3ed8ec38 req-13571e78-2b9d-4b3e-ba2a-21e1d86d9fa6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-a6d52544-dac0-441a-a99f-b0d23283f733" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:54:57 compute-2 sudo[260219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:54:57 compute-2 sudo[260219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:54:57 compute-2 sudo[260219]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:57 compute-2 sudo[260244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:54:57 compute-2 sudo[260244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:54:57 compute-2 sudo[260244]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:57 compute-2 nova_compute[226829]: 2026-01-31 07:54:57.367 226833 DEBUG nova.network.neutron [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:54:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:57.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:58.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:58 compute-2 ceph-mon[77282]: pgmap v1731: 305 pgs: 305 active+clean; 113 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 2.9 MiB/s wr, 44 op/s
Jan 31 07:54:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:54:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:54:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:59.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:59 compute-2 nova_compute[226829]: 2026-01-31 07:54:59.890 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:00.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:00 compute-2 nova_compute[226829]: 2026-01-31 07:55:00.225 226833 DEBUG nova.network.neutron [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Successfully created port: bf5f6a49-45dc-420a-8c08-bbf112a81186 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:55:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1307736074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:55:00 compute-2 ceph-mon[77282]: pgmap v1732: 305 pgs: 305 active+clean; 180 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 5.3 MiB/s wr, 80 op/s
Jan 31 07:55:00 compute-2 nova_compute[226829]: 2026-01-31 07:55:00.886 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:01.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:01 compute-2 nova_compute[226829]: 2026-01-31 07:55:01.564 226833 DEBUG nova.network.neutron [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Updating instance_info_cache with network_info: [{"id": "7b8f6af4-25f0-4345-98f9-b6f4345e7a4c", "address": "fa:16:3e:19:0e:2f", "network": {"id": "45bd414f-24de-4143-b7e8-6a584f757e03", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1647349453-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc07928b022430ba8bcc450bbb5c7f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b8f6af4-25", "ovs_interfaceid": "7b8f6af4-25f0-4345-98f9-b6f4345e7a4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:55:01 compute-2 nova_compute[226829]: 2026-01-31 07:55:01.920 226833 DEBUG oslo_concurrency.lockutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Releasing lock "refresh_cache-a6d52544-dac0-441a-a99f-b0d23283f733" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:55:01 compute-2 nova_compute[226829]: 2026-01-31 07:55:01.921 226833 DEBUG nova.compute.manager [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Instance network_info: |[{"id": "7b8f6af4-25f0-4345-98f9-b6f4345e7a4c", "address": "fa:16:3e:19:0e:2f", "network": {"id": "45bd414f-24de-4143-b7e8-6a584f757e03", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1647349453-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc07928b022430ba8bcc450bbb5c7f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b8f6af4-25", "ovs_interfaceid": "7b8f6af4-25f0-4345-98f9-b6f4345e7a4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:55:01 compute-2 nova_compute[226829]: 2026-01-31 07:55:01.921 226833 DEBUG oslo_concurrency.lockutils [req-e076c446-6acc-4c02-93d7-129b3ed8ec38 req-13571e78-2b9d-4b3e-ba2a-21e1d86d9fa6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-a6d52544-dac0-441a-a99f-b0d23283f733" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:55:01 compute-2 nova_compute[226829]: 2026-01-31 07:55:01.922 226833 DEBUG nova.network.neutron [req-e076c446-6acc-4c02-93d7-129b3ed8ec38 req-13571e78-2b9d-4b3e-ba2a-21e1d86d9fa6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Refreshing network info cache for port 7b8f6af4-25f0-4345-98f9-b6f4345e7a4c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:55:01 compute-2 nova_compute[226829]: 2026-01-31 07:55:01.925 226833 DEBUG nova.virt.libvirt.driver [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Start _get_guest_xml network_info=[{"id": "7b8f6af4-25f0-4345-98f9-b6f4345e7a4c", "address": "fa:16:3e:19:0e:2f", "network": {"id": "45bd414f-24de-4143-b7e8-6a584f757e03", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1647349453-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc07928b022430ba8bcc450bbb5c7f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b8f6af4-25", "ovs_interfaceid": "7b8f6af4-25f0-4345-98f9-b6f4345e7a4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:55:01 compute-2 nova_compute[226829]: 2026-01-31 07:55:01.929 226833 WARNING nova.virt.libvirt.driver [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:55:01 compute-2 nova_compute[226829]: 2026-01-31 07:55:01.935 226833 DEBUG nova.virt.libvirt.host [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:55:01 compute-2 nova_compute[226829]: 2026-01-31 07:55:01.936 226833 DEBUG nova.virt.libvirt.host [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:55:01 compute-2 nova_compute[226829]: 2026-01-31 07:55:01.939 226833 DEBUG nova.virt.libvirt.host [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:55:01 compute-2 nova_compute[226829]: 2026-01-31 07:55:01.939 226833 DEBUG nova.virt.libvirt.host [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:55:01 compute-2 nova_compute[226829]: 2026-01-31 07:55:01.941 226833 DEBUG nova.virt.libvirt.driver [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:55:01 compute-2 nova_compute[226829]: 2026-01-31 07:55:01.941 226833 DEBUG nova.virt.hardware [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:55:01 compute-2 nova_compute[226829]: 2026-01-31 07:55:01.941 226833 DEBUG nova.virt.hardware [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:55:01 compute-2 nova_compute[226829]: 2026-01-31 07:55:01.941 226833 DEBUG nova.virt.hardware [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:55:01 compute-2 nova_compute[226829]: 2026-01-31 07:55:01.942 226833 DEBUG nova.virt.hardware [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:55:01 compute-2 nova_compute[226829]: 2026-01-31 07:55:01.942 226833 DEBUG nova.virt.hardware [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:55:01 compute-2 nova_compute[226829]: 2026-01-31 07:55:01.942 226833 DEBUG nova.virt.hardware [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:55:01 compute-2 nova_compute[226829]: 2026-01-31 07:55:01.942 226833 DEBUG nova.virt.hardware [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:55:01 compute-2 nova_compute[226829]: 2026-01-31 07:55:01.942 226833 DEBUG nova.virt.hardware [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:55:01 compute-2 nova_compute[226829]: 2026-01-31 07:55:01.942 226833 DEBUG nova.virt.hardware [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:55:01 compute-2 nova_compute[226829]: 2026-01-31 07:55:01.942 226833 DEBUG nova.virt.hardware [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:55:01 compute-2 nova_compute[226829]: 2026-01-31 07:55:01.943 226833 DEBUG nova.virt.hardware [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:55:01 compute-2 nova_compute[226829]: 2026-01-31 07:55:01.946 226833 DEBUG oslo_concurrency.processutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:55:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:02.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:55:02 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2952688819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:55:03 compute-2 nova_compute[226829]: 2026-01-31 07:55:03.141 226833 DEBUG oslo_concurrency.processutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:55:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:03.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/263086270' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:55:03 compute-2 ceph-mon[77282]: pgmap v1733: 305 pgs: 305 active+clean; 180 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 5.3 MiB/s wr, 81 op/s
Jan 31 07:55:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3259012295' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:55:03 compute-2 nova_compute[226829]: 2026-01-31 07:55:03.714 226833 DEBUG nova.storage.rbd_utils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] rbd image a6d52544-dac0-441a-a99f-b0d23283f733_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:55:03 compute-2 nova_compute[226829]: 2026-01-31 07:55:03.718 226833 DEBUG oslo_concurrency.processutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:55:03 compute-2 nova_compute[226829]: 2026-01-31 07:55:03.738 226833 DEBUG nova.network.neutron [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Successfully updated port: 17255b28-10be-4d25-bba8-0292dfa7ad68 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:55:03 compute-2 nova_compute[226829]: 2026-01-31 07:55:03.796 226833 DEBUG nova.compute.manager [req-9fc3e584-13d0-4909-933a-bec658403c27 req-7ed0b892-ee63-4b25-b7d1-efb9922dcc28 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Received event network-changed-17255b28-10be-4d25-bba8-0292dfa7ad68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:03 compute-2 nova_compute[226829]: 2026-01-31 07:55:03.796 226833 DEBUG nova.compute.manager [req-9fc3e584-13d0-4909-933a-bec658403c27 req-7ed0b892-ee63-4b25-b7d1-efb9922dcc28 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Refreshing instance network info cache due to event network-changed-17255b28-10be-4d25-bba8-0292dfa7ad68. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:55:03 compute-2 nova_compute[226829]: 2026-01-31 07:55:03.796 226833 DEBUG oslo_concurrency.lockutils [req-9fc3e584-13d0-4909-933a-bec658403c27 req-7ed0b892-ee63-4b25-b7d1-efb9922dcc28 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-ebe88ddf-b955-4cc2-9685-826d10d55955" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:55:03 compute-2 nova_compute[226829]: 2026-01-31 07:55:03.797 226833 DEBUG oslo_concurrency.lockutils [req-9fc3e584-13d0-4909-933a-bec658403c27 req-7ed0b892-ee63-4b25-b7d1-efb9922dcc28 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-ebe88ddf-b955-4cc2-9685-826d10d55955" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:55:03 compute-2 nova_compute[226829]: 2026-01-31 07:55:03.797 226833 DEBUG nova.network.neutron [req-9fc3e584-13d0-4909-933a-bec658403c27 req-7ed0b892-ee63-4b25-b7d1-efb9922dcc28 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Refreshing network info cache for port 17255b28-10be-4d25-bba8-0292dfa7ad68 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:55:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:55:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:55:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:04.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:55:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:55:04 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1662721803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.168 226833 DEBUG oslo_concurrency.processutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.169 226833 DEBUG nova.virt.libvirt.vif [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=2.2.2.2,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:54:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='guest-instance-1.domain.com',display_name='guest-instance-1.domain.com',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='guest-instance-1-domain-com',id=76,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFaSVVOkqAy4m6ftfVfCN7yyXKwsbbmBkSuPCnHAQrSoEw66tu1puYtXnpD/5EDlHpoXDfR9WrQMHLeQOsJa0XkGcyRGLK347cubHzHBRMaY57fpNViMHaMBgY16/9IaoA==',key_name='tempest-keypair-318815734',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9bc07928b022430ba8bcc450bbb5c7f5',ramdisk_id='',reservation_id='r-oxii7bzm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestFqdnHostnames-1984852953',owner_user_name='tempest-ServersTestFqdnHostnames-1984852953-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:54:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='41f8b35d7c8b43d9b3583e6e2b1385cb',uuid=a6d52544-dac0-441a-a99f-b0d23283f733,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7b8f6af4-25f0-4345-98f9-b6f4345e7a4c", "address": "fa:16:3e:19:0e:2f", "network": {"id": "45bd414f-24de-4143-b7e8-6a584f757e03", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1647349453-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc07928b022430ba8bcc450bbb5c7f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b8f6af4-25", "ovs_interfaceid": "7b8f6af4-25f0-4345-98f9-b6f4345e7a4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.170 226833 DEBUG nova.network.os_vif_util [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Converting VIF {"id": "7b8f6af4-25f0-4345-98f9-b6f4345e7a4c", "address": "fa:16:3e:19:0e:2f", "network": {"id": "45bd414f-24de-4143-b7e8-6a584f757e03", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1647349453-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc07928b022430ba8bcc450bbb5c7f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b8f6af4-25", "ovs_interfaceid": "7b8f6af4-25f0-4345-98f9-b6f4345e7a4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.171 226833 DEBUG nova.network.os_vif_util [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:0e:2f,bridge_name='br-int',has_traffic_filtering=True,id=7b8f6af4-25f0-4345-98f9-b6f4345e7a4c,network=Network(45bd414f-24de-4143-b7e8-6a584f757e03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b8f6af4-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.173 226833 DEBUG nova.objects.instance [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Lazy-loading 'pci_devices' on Instance uuid a6d52544-dac0-441a-a99f-b0d23283f733 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.206 226833 DEBUG nova.virt.libvirt.driver [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:55:04 compute-2 nova_compute[226829]:   <uuid>a6d52544-dac0-441a-a99f-b0d23283f733</uuid>
Jan 31 07:55:04 compute-2 nova_compute[226829]:   <name>instance-0000004c</name>
Jan 31 07:55:04 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:55:04 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:55:04 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <nova:name>guest-instance-1.domain.com</nova:name>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:55:01</nova:creationTime>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:55:04 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:55:04 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:55:04 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:55:04 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:55:04 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:55:04 compute-2 nova_compute[226829]:         <nova:user uuid="41f8b35d7c8b43d9b3583e6e2b1385cb">tempest-ServersTestFqdnHostnames-1984852953-project-member</nova:user>
Jan 31 07:55:04 compute-2 nova_compute[226829]:         <nova:project uuid="9bc07928b022430ba8bcc450bbb5c7f5">tempest-ServersTestFqdnHostnames-1984852953</nova:project>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 07:55:04 compute-2 nova_compute[226829]:         <nova:port uuid="7b8f6af4-25f0-4345-98f9-b6f4345e7a4c">
Jan 31 07:55:04 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:55:04 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:55:04 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <system>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <entry name="serial">a6d52544-dac0-441a-a99f-b0d23283f733</entry>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <entry name="uuid">a6d52544-dac0-441a-a99f-b0d23283f733</entry>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     </system>
Jan 31 07:55:04 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:55:04 compute-2 nova_compute[226829]:   <os>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:   </os>
Jan 31 07:55:04 compute-2 nova_compute[226829]:   <features>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:   </features>
Jan 31 07:55:04 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:55:04 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:55:04 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/a6d52544-dac0-441a-a99f-b0d23283f733_disk">
Jan 31 07:55:04 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       </source>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:55:04 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/a6d52544-dac0-441a-a99f-b0d23283f733_disk.config">
Jan 31 07:55:04 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       </source>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:55:04 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:19:0e:2f"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <target dev="tap7b8f6af4-25"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/a6d52544-dac0-441a-a99f-b0d23283f733/console.log" append="off"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <video>
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     </video>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:55:04 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:55:04 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:55:04 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:55:04 compute-2 nova_compute[226829]: </domain>
Jan 31 07:55:04 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.207 226833 DEBUG nova.compute.manager [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Preparing to wait for external event network-vif-plugged-7b8f6af4-25f0-4345-98f9-b6f4345e7a4c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.208 226833 DEBUG oslo_concurrency.lockutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Acquiring lock "a6d52544-dac0-441a-a99f-b0d23283f733-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.208 226833 DEBUG oslo_concurrency.lockutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Lock "a6d52544-dac0-441a-a99f-b0d23283f733-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.208 226833 DEBUG oslo_concurrency.lockutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Lock "a6d52544-dac0-441a-a99f-b0d23283f733-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.209 226833 DEBUG nova.virt.libvirt.vif [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=2.2.2.2,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:54:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='guest-instance-1.domain.com',display_name='guest-instance-1.domain.com',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='guest-instance-1-domain-com',id=76,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFaSVVOkqAy4m6ftfVfCN7yyXKwsbbmBkSuPCnHAQrSoEw66tu1puYtXnpD/5EDlHpoXDfR9WrQMHLeQOsJa0XkGcyRGLK347cubHzHBRMaY57fpNViMHaMBgY16/9IaoA==',key_name='tempest-keypair-318815734',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9bc07928b022430ba8bcc450bbb5c7f5',ramdisk_id='',reservation_id='r-oxii7bzm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestFqdnHostnames-1984852953',owner_user_name='tempest-ServersTestFqdnHostnames-1984852953-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:54:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='41f8b35d7c8b43d9b3583e6e2b1385cb',uuid=a6d52544-dac0-441a-a99f-b0d23283f733,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7b8f6af4-25f0-4345-98f9-b6f4345e7a4c", "address": "fa:16:3e:19:0e:2f", "network": {"id": "45bd414f-24de-4143-b7e8-6a584f757e03", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1647349453-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc07928b022430ba8bcc450bbb5c7f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b8f6af4-25", "ovs_interfaceid": "7b8f6af4-25f0-4345-98f9-b6f4345e7a4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.209 226833 DEBUG nova.network.os_vif_util [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Converting VIF {"id": "7b8f6af4-25f0-4345-98f9-b6f4345e7a4c", "address": "fa:16:3e:19:0e:2f", "network": {"id": "45bd414f-24de-4143-b7e8-6a584f757e03", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1647349453-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc07928b022430ba8bcc450bbb5c7f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b8f6af4-25", "ovs_interfaceid": "7b8f6af4-25f0-4345-98f9-b6f4345e7a4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.210 226833 DEBUG nova.network.os_vif_util [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:0e:2f,bridge_name='br-int',has_traffic_filtering=True,id=7b8f6af4-25f0-4345-98f9-b6f4345e7a4c,network=Network(45bd414f-24de-4143-b7e8-6a584f757e03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b8f6af4-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.210 226833 DEBUG os_vif [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:0e:2f,bridge_name='br-int',has_traffic_filtering=True,id=7b8f6af4-25f0-4345-98f9-b6f4345e7a4c,network=Network(45bd414f-24de-4143-b7e8-6a584f757e03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b8f6af4-25') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.211 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.211 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.211 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.217 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.218 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b8f6af4-25, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.218 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7b8f6af4-25, col_values=(('external_ids', {'iface-id': '7b8f6af4-25f0-4345-98f9-b6f4345e7a4c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:0e:2f', 'vm-uuid': 'a6d52544-dac0-441a-a99f-b0d23283f733'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:04 compute-2 NetworkManager[48999]: <info>  [1769846104.2424] manager: (tap7b8f6af4-25): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/125)
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.241 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.245 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.248 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.248 226833 INFO os_vif [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:0e:2f,bridge_name='br-int',has_traffic_filtering=True,id=7b8f6af4-25f0-4345-98f9-b6f4345e7a4c,network=Network(45bd414f-24de-4143-b7e8-6a584f757e03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b8f6af4-25')
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.355 226833 DEBUG nova.virt.libvirt.driver [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.356 226833 DEBUG nova.virt.libvirt.driver [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.356 226833 DEBUG nova.virt.libvirt.driver [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] No VIF found with MAC fa:16:3e:19:0e:2f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.356 226833 INFO nova.virt.libvirt.driver [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Using config drive
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.384 226833 DEBUG nova.storage.rbd_utils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] rbd image a6d52544-dac0-441a-a99f-b0d23283f733_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.591 226833 DEBUG nova.network.neutron [req-9fc3e584-13d0-4909-933a-bec658403c27 req-7ed0b892-ee63-4b25-b7d1-efb9922dcc28 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:55:04 compute-2 nova_compute[226829]: 2026-01-31 07:55:04.892 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2952688819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:55:05 compute-2 ceph-mon[77282]: pgmap v1734: 305 pgs: 305 active+clean; 185 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 5.6 MiB/s wr, 94 op/s
Jan 31 07:55:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:05.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:05 compute-2 nova_compute[226829]: 2026-01-31 07:55:05.485 226833 INFO nova.virt.libvirt.driver [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Creating config drive at /var/lib/nova/instances/a6d52544-dac0-441a-a99f-b0d23283f733/disk.config
Jan 31 07:55:05 compute-2 nova_compute[226829]: 2026-01-31 07:55:05.491 226833 DEBUG oslo_concurrency.processutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a6d52544-dac0-441a-a99f-b0d23283f733/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmphswvd1sy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:55:05 compute-2 nova_compute[226829]: 2026-01-31 07:55:05.617 226833 DEBUG oslo_concurrency.processutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a6d52544-dac0-441a-a99f-b0d23283f733/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmphswvd1sy" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:55:05 compute-2 nova_compute[226829]: 2026-01-31 07:55:05.647 226833 DEBUG nova.storage.rbd_utils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] rbd image a6d52544-dac0-441a-a99f-b0d23283f733_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:55:05 compute-2 nova_compute[226829]: 2026-01-31 07:55:05.651 226833 DEBUG oslo_concurrency.processutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a6d52544-dac0-441a-a99f-b0d23283f733/disk.config a6d52544-dac0-441a-a99f-b0d23283f733_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:55:05 compute-2 nova_compute[226829]: 2026-01-31 07:55:05.981 226833 DEBUG nova.network.neutron [req-9fc3e584-13d0-4909-933a-bec658403c27 req-7ed0b892-ee63-4b25-b7d1-efb9922dcc28 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:55:06 compute-2 nova_compute[226829]: 2026-01-31 07:55:06.000 226833 DEBUG oslo_concurrency.lockutils [req-9fc3e584-13d0-4909-933a-bec658403c27 req-7ed0b892-ee63-4b25-b7d1-efb9922dcc28 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-ebe88ddf-b955-4cc2-9685-826d10d55955" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:55:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:06.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:06 compute-2 nova_compute[226829]: 2026-01-31 07:55:06.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:55:06 compute-2 nova_compute[226829]: 2026-01-31 07:55:06.849 226833 DEBUG nova.network.neutron [req-e076c446-6acc-4c02-93d7-129b3ed8ec38 req-13571e78-2b9d-4b3e-ba2a-21e1d86d9fa6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Updated VIF entry in instance network info cache for port 7b8f6af4-25f0-4345-98f9-b6f4345e7a4c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:55:06 compute-2 nova_compute[226829]: 2026-01-31 07:55:06.850 226833 DEBUG nova.network.neutron [req-e076c446-6acc-4c02-93d7-129b3ed8ec38 req-13571e78-2b9d-4b3e-ba2a-21e1d86d9fa6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Updating instance_info_cache with network_info: [{"id": "7b8f6af4-25f0-4345-98f9-b6f4345e7a4c", "address": "fa:16:3e:19:0e:2f", "network": {"id": "45bd414f-24de-4143-b7e8-6a584f757e03", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1647349453-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc07928b022430ba8bcc450bbb5c7f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b8f6af4-25", "ovs_interfaceid": "7b8f6af4-25f0-4345-98f9-b6f4345e7a4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:55:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:06.865 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:06.866 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:06.866 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:06 compute-2 nova_compute[226829]: 2026-01-31 07:55:06.927 226833 DEBUG oslo_concurrency.lockutils [req-e076c446-6acc-4c02-93d7-129b3ed8ec38 req-13571e78-2b9d-4b3e-ba2a-21e1d86d9fa6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-a6d52544-dac0-441a-a99f-b0d23283f733" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:55:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1662721803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:55:06 compute-2 ceph-mon[77282]: pgmap v1735: 305 pgs: 305 active+clean; 202 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 6.2 MiB/s wr, 95 op/s
Jan 31 07:55:07 compute-2 nova_compute[226829]: 2026-01-31 07:55:07.151 226833 DEBUG nova.network.neutron [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Successfully updated port: 9f745c59-fd7c-4cd5-9eda-e439c116fbd8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:55:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:07.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:07 compute-2 nova_compute[226829]: 2026-01-31 07:55:07.442 226833 DEBUG oslo_concurrency.processutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a6d52544-dac0-441a-a99f-b0d23283f733/disk.config a6d52544-dac0-441a-a99f-b0d23283f733_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.792s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:55:07 compute-2 nova_compute[226829]: 2026-01-31 07:55:07.443 226833 INFO nova.virt.libvirt.driver [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Deleting local config drive /var/lib/nova/instances/a6d52544-dac0-441a-a99f-b0d23283f733/disk.config because it was imported into RBD.
Jan 31 07:55:07 compute-2 kernel: tap7b8f6af4-25: entered promiscuous mode
Jan 31 07:55:07 compute-2 NetworkManager[48999]: <info>  [1769846107.5037] manager: (tap7b8f6af4-25): new Tun device (/org/freedesktop/NetworkManager/Devices/126)
Jan 31 07:55:07 compute-2 ovn_controller[133834]: 2026-01-31T07:55:07Z|00234|binding|INFO|Claiming lport 7b8f6af4-25f0-4345-98f9-b6f4345e7a4c for this chassis.
Jan 31 07:55:07 compute-2 ovn_controller[133834]: 2026-01-31T07:55:07Z|00235|binding|INFO|7b8f6af4-25f0-4345-98f9-b6f4345e7a4c: Claiming fa:16:3e:19:0e:2f 10.100.0.12
Jan 31 07:55:07 compute-2 nova_compute[226829]: 2026-01-31 07:55:07.505 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:07 compute-2 ovn_controller[133834]: 2026-01-31T07:55:07Z|00236|binding|INFO|Setting lport 7b8f6af4-25f0-4345-98f9-b6f4345e7a4c ovn-installed in OVS
Jan 31 07:55:07 compute-2 nova_compute[226829]: 2026-01-31 07:55:07.515 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:07 compute-2 nova_compute[226829]: 2026-01-31 07:55:07.519 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:07 compute-2 systemd-machined[195142]: New machine qemu-32-instance-0000004c.
Jan 31 07:55:07 compute-2 systemd-udevd[260410]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:55:07 compute-2 ovn_controller[133834]: 2026-01-31T07:55:07Z|00237|binding|INFO|Setting lport 7b8f6af4-25f0-4345-98f9-b6f4345e7a4c up in Southbound
Jan 31 07:55:07 compute-2 systemd[1]: Started Virtual Machine qemu-32-instance-0000004c.
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:07.553 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:0e:2f 10.100.0.12'], port_security=['fa:16:3e:19:0e:2f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a6d52544-dac0-441a-a99f-b0d23283f733', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45bd414f-24de-4143-b7e8-6a584f757e03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9bc07928b022430ba8bcc450bbb5c7f5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '456504e2-b7ec-4864-92ee-93c1963edbdb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99232924-38ab-446c-9a0d-35433e368ebf, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=7b8f6af4-25f0-4345-98f9-b6f4345e7a4c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:07.554 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 7b8f6af4-25f0-4345-98f9-b6f4345e7a4c in datapath 45bd414f-24de-4143-b7e8-6a584f757e03 bound to our chassis
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:07.555 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 45bd414f-24de-4143-b7e8-6a584f757e03
Jan 31 07:55:07 compute-2 NetworkManager[48999]: <info>  [1769846107.5601] device (tap7b8f6af4-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:55:07 compute-2 NetworkManager[48999]: <info>  [1769846107.5616] device (tap7b8f6af4-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:07.566 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b1e62c1a-abbc-4abe-b22f-b15db09622ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:07.567 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap45bd414f-21 in ovnmeta-45bd414f-24de-4143-b7e8-6a584f757e03 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:07.569 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap45bd414f-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:07.569 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d3b658f0-cb0c-4d40-9b43-c396cb5e01c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:07.570 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ce27c5f8-16d3-43be-9dba-dcb49c512b42]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:07.584 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[2169fe11-8c88-4795-996c-0baed62175a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:07.599 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a4fe66f9-f872-4381-b7d6-374b6885dd2f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:07.630 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[de6959f0-4a2d-4334-8e8f-1003456e6ce6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:07.635 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[79b297d4-9a3a-42a1-9bb4-367e2e358a17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:07 compute-2 NetworkManager[48999]: <info>  [1769846107.6368] manager: (tap45bd414f-20): new Veth device (/org/freedesktop/NetworkManager/Devices/127)
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:07.657 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[f73c5c25-6000-48f9-9d03-772c1046c8da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:07.660 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[a0781ac2-64c7-4342-b6ec-505683ff908a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:07 compute-2 NetworkManager[48999]: <info>  [1769846107.6786] device (tap45bd414f-20): carrier: link connected
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:07.684 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[92e04bb3-0c55-45de-a9c2-f08e34784b41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:07.698 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[45b54f1c-d046-4acc-81f8-f7f0136f1845]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45bd414f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:5d:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636015, 'reachable_time': 22205, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260443, 'error': None, 'target': 'ovnmeta-45bd414f-24de-4143-b7e8-6a584f757e03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:07.708 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8a90f204-6c4a-4916-9e9e-9a32bd10be85]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe06:5d92'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 636015, 'tstamp': 636015}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260444, 'error': None, 'target': 'ovnmeta-45bd414f-24de-4143-b7e8-6a584f757e03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:07.720 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f4240768-a9aa-4764-8854-ee89d0f94a33]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45bd414f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:5d:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636015, 'reachable_time': 22205, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260445, 'error': None, 'target': 'ovnmeta-45bd414f-24de-4143-b7e8-6a584f757e03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:07.743 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[46e5f212-a33d-4447-aba2-a93825278b58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:07.783 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e513f79c-ada7-46ef-95bc-f59ddf5852f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:07.785 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45bd414f-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:07.785 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:07.786 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap45bd414f-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:07 compute-2 nova_compute[226829]: 2026-01-31 07:55:07.801 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:07 compute-2 kernel: tap45bd414f-20: entered promiscuous mode
Jan 31 07:55:07 compute-2 NetworkManager[48999]: <info>  [1769846107.8017] manager: (tap45bd414f-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:07.807 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap45bd414f-20, col_values=(('external_ids', {'iface-id': '91286c77-1df8-46ba-83bb-01dd873639c4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:07 compute-2 nova_compute[226829]: 2026-01-31 07:55:07.809 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:07 compute-2 ovn_controller[133834]: 2026-01-31T07:55:07Z|00238|binding|INFO|Releasing lport 91286c77-1df8-46ba-83bb-01dd873639c4 from this chassis (sb_readonly=0)
Jan 31 07:55:07 compute-2 nova_compute[226829]: 2026-01-31 07:55:07.811 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:07.813 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/45bd414f-24de-4143-b7e8-6a584f757e03.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/45bd414f-24de-4143-b7e8-6a584f757e03.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:07.815 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[116e839c-9277-4f67-bcca-f4d363d25719]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:07 compute-2 nova_compute[226829]: 2026-01-31 07:55:07.816 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:07.817 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: global
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-45bd414f-24de-4143-b7e8-6a584f757e03
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/45bd414f-24de-4143-b7e8-6a584f757e03.pid.haproxy
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 45bd414f-24de-4143-b7e8-6a584f757e03
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:55:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:07.818 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-45bd414f-24de-4143-b7e8-6a584f757e03', 'env', 'PROCESS_TAG=haproxy-45bd414f-24de-4143-b7e8-6a584f757e03', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/45bd414f-24de-4143-b7e8-6a584f757e03.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:55:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:55:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:08.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:55:08 compute-2 sudo[260472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:55:08 compute-2 sudo[260472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:55:08 compute-2 sudo[260472]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.181 226833 DEBUG nova.compute.manager [req-e22041f7-2d06-462a-afd4-66da977e0bbb req-bdc6bdb4-85ca-4533-822e-7e0240480ee3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Received event network-changed-9f745c59-fd7c-4cd5-9eda-e439c116fbd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.183 226833 DEBUG nova.compute.manager [req-e22041f7-2d06-462a-afd4-66da977e0bbb req-bdc6bdb4-85ca-4533-822e-7e0240480ee3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Refreshing instance network info cache due to event network-changed-9f745c59-fd7c-4cd5-9eda-e439c116fbd8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.184 226833 DEBUG oslo_concurrency.lockutils [req-e22041f7-2d06-462a-afd4-66da977e0bbb req-bdc6bdb4-85ca-4533-822e-7e0240480ee3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-ebe88ddf-b955-4cc2-9685-826d10d55955" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.184 226833 DEBUG oslo_concurrency.lockutils [req-e22041f7-2d06-462a-afd4-66da977e0bbb req-bdc6bdb4-85ca-4533-822e-7e0240480ee3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-ebe88ddf-b955-4cc2-9685-826d10d55955" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.184 226833 DEBUG nova.network.neutron [req-e22041f7-2d06-462a-afd4-66da977e0bbb req-bdc6bdb4-85ca-4533-822e-7e0240480ee3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Refreshing network info cache for port 9f745c59-fd7c-4cd5-9eda-e439c116fbd8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:55:08 compute-2 sudo[260513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:55:08 compute-2 sudo[260513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:55:08 compute-2 sudo[260513]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:08 compute-2 podman[260481]: 2026-01-31 07:55:08.128417479 +0000 UTC m=+0.020129303 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:55:08 compute-2 podman[260481]: 2026-01-31 07:55:08.224551252 +0000 UTC m=+0.116263046 container create 6d4556b886a77d83d935c8dd8650a09fac628a0ec09c86635a5aadeaa4ef5077 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45bd414f-24de-4143-b7e8-6a584f757e03, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 07:55:08 compute-2 systemd[1]: Started libpod-conmon-6d4556b886a77d83d935c8dd8650a09fac628a0ec09c86635a5aadeaa4ef5077.scope.
Jan 31 07:55:08 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:55:08 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78b85a24104f5221451f27965d1e702a37f093ebdf0626cb2ffd6a7d89a2ad0c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:55:08 compute-2 podman[260481]: 2026-01-31 07:55:08.306655805 +0000 UTC m=+0.198367599 container init 6d4556b886a77d83d935c8dd8650a09fac628a0ec09c86635a5aadeaa4ef5077 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45bd414f-24de-4143-b7e8-6a584f757e03, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 07:55:08 compute-2 podman[260537]: 2026-01-31 07:55:08.307978292 +0000 UTC m=+0.108748994 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 07:55:08 compute-2 podman[260481]: 2026-01-31 07:55:08.312495083 +0000 UTC m=+0.204206867 container start 6d4556b886a77d83d935c8dd8650a09fac628a0ec09c86635a5aadeaa4ef5077 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45bd414f-24de-4143-b7e8-6a584f757e03, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:55:08 compute-2 neutron-haproxy-ovnmeta-45bd414f-24de-4143-b7e8-6a584f757e03[260558]: [NOTICE]   (260564) : New worker (260566) forked
Jan 31 07:55:08 compute-2 neutron-haproxy-ovnmeta-45bd414f-24de-4143-b7e8-6a584f757e03[260558]: [NOTICE]   (260564) : Loading success.
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.421 226833 DEBUG nova.compute.manager [req-91f3fa02-e422-401b-bc21-fc47c605e731 req-26d9509e-1d31-40ce-b5a2-c7b6fd55f9a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Received event network-vif-plugged-7b8f6af4-25f0-4345-98f9-b6f4345e7a4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.422 226833 DEBUG oslo_concurrency.lockutils [req-91f3fa02-e422-401b-bc21-fc47c605e731 req-26d9509e-1d31-40ce-b5a2-c7b6fd55f9a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a6d52544-dac0-441a-a99f-b0d23283f733-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.422 226833 DEBUG oslo_concurrency.lockutils [req-91f3fa02-e422-401b-bc21-fc47c605e731 req-26d9509e-1d31-40ce-b5a2-c7b6fd55f9a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a6d52544-dac0-441a-a99f-b0d23283f733-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.423 226833 DEBUG oslo_concurrency.lockutils [req-91f3fa02-e422-401b-bc21-fc47c605e731 req-26d9509e-1d31-40ce-b5a2-c7b6fd55f9a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a6d52544-dac0-441a-a99f-b0d23283f733-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.423 226833 DEBUG nova.compute.manager [req-91f3fa02-e422-401b-bc21-fc47c605e731 req-26d9509e-1d31-40ce-b5a2-c7b6fd55f9a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Processing event network-vif-plugged-7b8f6af4-25f0-4345-98f9-b6f4345e7a4c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.746 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846108.745623, a6d52544-dac0-441a-a99f-b0d23283f733 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.748 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] VM Started (Lifecycle Event)
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.750 226833 DEBUG nova.compute.manager [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.753 226833 DEBUG nova.virt.libvirt.driver [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.756 226833 INFO nova.virt.libvirt.driver [-] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Instance spawned successfully.
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.756 226833 DEBUG nova.virt.libvirt.driver [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.811 226833 DEBUG nova.network.neutron [req-e22041f7-2d06-462a-afd4-66da977e0bbb req-bdc6bdb4-85ca-4533-822e-7e0240480ee3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.839 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.843 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.846 226833 DEBUG nova.virt.libvirt.driver [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.846 226833 DEBUG nova.virt.libvirt.driver [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.847 226833 DEBUG nova.virt.libvirt.driver [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.847 226833 DEBUG nova.virt.libvirt.driver [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.848 226833 DEBUG nova.virt.libvirt.driver [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.848 226833 DEBUG nova.virt.libvirt.driver [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:55:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.927 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.928 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846108.745797, a6d52544-dac0-441a-a99f-b0d23283f733 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:55:08 compute-2 nova_compute[226829]: 2026-01-31 07:55:08.929 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] VM Paused (Lifecycle Event)
Jan 31 07:55:08 compute-2 ceph-mon[77282]: pgmap v1736: 305 pgs: 305 active+clean; 223 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 5.2 MiB/s wr, 80 op/s
Jan 31 07:55:08 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:55:08 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:55:09 compute-2 nova_compute[226829]: 2026-01-31 07:55:09.012 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:55:09 compute-2 nova_compute[226829]: 2026-01-31 07:55:09.015 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846108.7528455, a6d52544-dac0-441a-a99f-b0d23283f733 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:55:09 compute-2 nova_compute[226829]: 2026-01-31 07:55:09.015 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] VM Resumed (Lifecycle Event)
Jan 31 07:55:09 compute-2 nova_compute[226829]: 2026-01-31 07:55:09.075 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:55:09 compute-2 nova_compute[226829]: 2026-01-31 07:55:09.079 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:55:09 compute-2 nova_compute[226829]: 2026-01-31 07:55:09.086 226833 INFO nova.compute.manager [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Took 17.46 seconds to spawn the instance on the hypervisor.
Jan 31 07:55:09 compute-2 nova_compute[226829]: 2026-01-31 07:55:09.087 226833 DEBUG nova.compute.manager [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:55:09 compute-2 nova_compute[226829]: 2026-01-31 07:55:09.130 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:55:09 compute-2 nova_compute[226829]: 2026-01-31 07:55:09.222 226833 INFO nova.compute.manager [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Took 19.78 seconds to build instance.
Jan 31 07:55:09 compute-2 nova_compute[226829]: 2026-01-31 07:55:09.241 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:09 compute-2 nova_compute[226829]: 2026-01-31 07:55:09.249 226833 DEBUG oslo_concurrency.lockutils [None req-1c94c0b0-5109-426a-9d7e-ba5743ad6fb6 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Lock "a6d52544-dac0-441a-a99f-b0d23283f733" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.413s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:09.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:09 compute-2 nova_compute[226829]: 2026-01-31 07:55:09.894 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:10.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:10 compute-2 ceph-mon[77282]: pgmap v1737: 305 pgs: 305 active+clean; 227 MiB data, 755 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 4.2 MiB/s wr, 66 op/s
Jan 31 07:55:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/788326176' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:55:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1759519715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:55:10 compute-2 nova_compute[226829]: 2026-01-31 07:55:10.469 226833 DEBUG nova.network.neutron [req-e22041f7-2d06-462a-afd4-66da977e0bbb req-bdc6bdb4-85ca-4533-822e-7e0240480ee3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:55:10 compute-2 nova_compute[226829]: 2026-01-31 07:55:10.543 226833 DEBUG oslo_concurrency.lockutils [req-e22041f7-2d06-462a-afd4-66da977e0bbb req-bdc6bdb4-85ca-4533-822e-7e0240480ee3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-ebe88ddf-b955-4cc2-9685-826d10d55955" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:55:10 compute-2 nova_compute[226829]: 2026-01-31 07:55:10.700 226833 DEBUG nova.compute.manager [req-2cb80b00-b386-462c-b41f-e1d0b5773299 req-c770b57d-c96f-4b61-a71e-4222c63719fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Received event network-vif-plugged-7b8f6af4-25f0-4345-98f9-b6f4345e7a4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:10 compute-2 nova_compute[226829]: 2026-01-31 07:55:10.701 226833 DEBUG oslo_concurrency.lockutils [req-2cb80b00-b386-462c-b41f-e1d0b5773299 req-c770b57d-c96f-4b61-a71e-4222c63719fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a6d52544-dac0-441a-a99f-b0d23283f733-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:10 compute-2 nova_compute[226829]: 2026-01-31 07:55:10.701 226833 DEBUG oslo_concurrency.lockutils [req-2cb80b00-b386-462c-b41f-e1d0b5773299 req-c770b57d-c96f-4b61-a71e-4222c63719fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a6d52544-dac0-441a-a99f-b0d23283f733-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:10 compute-2 nova_compute[226829]: 2026-01-31 07:55:10.702 226833 DEBUG oslo_concurrency.lockutils [req-2cb80b00-b386-462c-b41f-e1d0b5773299 req-c770b57d-c96f-4b61-a71e-4222c63719fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a6d52544-dac0-441a-a99f-b0d23283f733-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:10 compute-2 nova_compute[226829]: 2026-01-31 07:55:10.702 226833 DEBUG nova.compute.manager [req-2cb80b00-b386-462c-b41f-e1d0b5773299 req-c770b57d-c96f-4b61-a71e-4222c63719fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] No waiting events found dispatching network-vif-plugged-7b8f6af4-25f0-4345-98f9-b6f4345e7a4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:55:10 compute-2 nova_compute[226829]: 2026-01-31 07:55:10.702 226833 WARNING nova.compute.manager [req-2cb80b00-b386-462c-b41f-e1d0b5773299 req-c770b57d-c96f-4b61-a71e-4222c63719fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Received unexpected event network-vif-plugged-7b8f6af4-25f0-4345-98f9-b6f4345e7a4c for instance with vm_state active and task_state None.
Jan 31 07:55:11 compute-2 nova_compute[226829]: 2026-01-31 07:55:11.300 226833 DEBUG nova.network.neutron [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Successfully updated port: bf5f6a49-45dc-420a-8c08-bbf112a81186 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:55:11 compute-2 nova_compute[226829]: 2026-01-31 07:55:11.325 226833 DEBUG oslo_concurrency.lockutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Acquiring lock "refresh_cache-ebe88ddf-b955-4cc2-9685-826d10d55955" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:55:11 compute-2 nova_compute[226829]: 2026-01-31 07:55:11.325 226833 DEBUG oslo_concurrency.lockutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Acquired lock "refresh_cache-ebe88ddf-b955-4cc2-9685-826d10d55955" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:55:11 compute-2 nova_compute[226829]: 2026-01-31 07:55:11.326 226833 DEBUG nova.network.neutron [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:55:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:11.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:11 compute-2 nova_compute[226829]: 2026-01-31 07:55:11.732 226833 DEBUG nova.network.neutron [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:55:11 compute-2 nova_compute[226829]: 2026-01-31 07:55:11.806 226833 DEBUG nova.compute.manager [req-d61df46f-de05-41ae-ac97-1ba403dba64e req-b9eebc88-59f7-4612-8750-4a1e1f98cdee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Received event network-changed-bf5f6a49-45dc-420a-8c08-bbf112a81186 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:11 compute-2 nova_compute[226829]: 2026-01-31 07:55:11.807 226833 DEBUG nova.compute.manager [req-d61df46f-de05-41ae-ac97-1ba403dba64e req-b9eebc88-59f7-4612-8750-4a1e1f98cdee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Refreshing instance network info cache due to event network-changed-bf5f6a49-45dc-420a-8c08-bbf112a81186. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:55:11 compute-2 nova_compute[226829]: 2026-01-31 07:55:11.807 226833 DEBUG oslo_concurrency.lockutils [req-d61df46f-de05-41ae-ac97-1ba403dba64e req-b9eebc88-59f7-4612-8750-4a1e1f98cdee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-ebe88ddf-b955-4cc2-9685-826d10d55955" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:55:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:12.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:12 compute-2 ceph-mon[77282]: pgmap v1738: 305 pgs: 305 active+clean; 227 MiB data, 755 MiB used, 20 GiB / 21 GiB avail; 382 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Jan 31 07:55:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:13.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:13 compute-2 nova_compute[226829]: 2026-01-31 07:55:13.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:55:13 compute-2 nova_compute[226829]: 2026-01-31 07:55:13.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:55:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:55:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:14.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:14 compute-2 nova_compute[226829]: 2026-01-31 07:55:14.244 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:14 compute-2 nova_compute[226829]: 2026-01-31 07:55:14.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:55:14 compute-2 nova_compute[226829]: 2026-01-31 07:55:14.941 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:15.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:15 compute-2 nova_compute[226829]: 2026-01-31 07:55:15.869 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:15 compute-2 nova_compute[226829]: 2026-01-31 07:55:15.872 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:15 compute-2 nova_compute[226829]: 2026-01-31 07:55:15.872 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:15 compute-2 nova_compute[226829]: 2026-01-31 07:55:15.873 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:55:15 compute-2 nova_compute[226829]: 2026-01-31 07:55:15.874 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:55:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:16.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:16 compute-2 nova_compute[226829]: 2026-01-31 07:55:16.283 226833 DEBUG nova.compute.manager [req-52292b98-936a-4fd8-9020-fdc3a99765de req-f74127b5-2657-45b0-aadc-8929a6867d55 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Received event network-changed-7b8f6af4-25f0-4345-98f9-b6f4345e7a4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:16 compute-2 nova_compute[226829]: 2026-01-31 07:55:16.285 226833 DEBUG nova.compute.manager [req-52292b98-936a-4fd8-9020-fdc3a99765de req-f74127b5-2657-45b0-aadc-8929a6867d55 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Refreshing instance network info cache due to event network-changed-7b8f6af4-25f0-4345-98f9-b6f4345e7a4c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:55:16 compute-2 nova_compute[226829]: 2026-01-31 07:55:16.285 226833 DEBUG oslo_concurrency.lockutils [req-52292b98-936a-4fd8-9020-fdc3a99765de req-f74127b5-2657-45b0-aadc-8929a6867d55 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-a6d52544-dac0-441a-a99f-b0d23283f733" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:55:16 compute-2 nova_compute[226829]: 2026-01-31 07:55:16.285 226833 DEBUG oslo_concurrency.lockutils [req-52292b98-936a-4fd8-9020-fdc3a99765de req-f74127b5-2657-45b0-aadc-8929a6867d55 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-a6d52544-dac0-441a-a99f-b0d23283f733" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:55:16 compute-2 nova_compute[226829]: 2026-01-31 07:55:16.286 226833 DEBUG nova.network.neutron [req-52292b98-936a-4fd8-9020-fdc3a99765de req-f74127b5-2657-45b0-aadc-8929a6867d55 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Refreshing network info cache for port 7b8f6af4-25f0-4345-98f9-b6f4345e7a4c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:55:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:55:16 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2801926878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:55:16 compute-2 nova_compute[226829]: 2026-01-31 07:55:16.316 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:55:16 compute-2 nova_compute[226829]: 2026-01-31 07:55:16.483 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:55:16 compute-2 nova_compute[226829]: 2026-01-31 07:55:16.483 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:55:16 compute-2 nova_compute[226829]: 2026-01-31 07:55:16.623 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:55:16 compute-2 nova_compute[226829]: 2026-01-31 07:55:16.624 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4375MB free_disk=20.90493392944336GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:55:16 compute-2 nova_compute[226829]: 2026-01-31 07:55:16.625 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:16 compute-2 nova_compute[226829]: 2026-01-31 07:55:16.625 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:16 compute-2 nova_compute[226829]: 2026-01-31 07:55:16.747 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance a6d52544-dac0-441a-a99f-b0d23283f733 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:55:16 compute-2 nova_compute[226829]: 2026-01-31 07:55:16.747 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance ebe88ddf-b955-4cc2-9685-826d10d55955 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:55:16 compute-2 nova_compute[226829]: 2026-01-31 07:55:16.748 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:55:16 compute-2 nova_compute[226829]: 2026-01-31 07:55:16.748 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:55:16 compute-2 nova_compute[226829]: 2026-01-31 07:55:16.812 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:55:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:55:17 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1283347469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:55:17 compute-2 nova_compute[226829]: 2026-01-31 07:55:17.221 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:55:17 compute-2 nova_compute[226829]: 2026-01-31 07:55:17.230 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:55:17 compute-2 sudo[260667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:55:17 compute-2 sudo[260667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:55:17 compute-2 sudo[260667]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:17 compute-2 sudo[260693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:55:17 compute-2 sudo[260693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:55:17 compute-2 sudo[260693]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:55:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:17.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:55:17 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma missed beacon ack from the monitors
Jan 31 07:55:17 compute-2 nova_compute[226829]: 2026-01-31 07:55:17.974 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:55:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:55:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:18.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:55:18 compute-2 nova_compute[226829]: 2026-01-31 07:55:18.126 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:55:18 compute-2 nova_compute[226829]: 2026-01-31 07:55:18.127 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.502s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:55:19 compute-2 nova_compute[226829]: 2026-01-31 07:55:19.127 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:55:19 compute-2 nova_compute[226829]: 2026-01-31 07:55:19.128 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:55:19 compute-2 nova_compute[226829]: 2026-01-31 07:55:19.129 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:55:19 compute-2 nova_compute[226829]: 2026-01-31 07:55:19.164 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 07:55:19 compute-2 nova_compute[226829]: 2026-01-31 07:55:19.248 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:19.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:19 compute-2 nova_compute[226829]: 2026-01-31 07:55:19.944 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:20 compute-2 nova_compute[226829]: 2026-01-31 07:55:20.024 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-a6d52544-dac0-441a-a99f-b0d23283f733" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:55:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:20.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:21.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:21 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma missed beacon ack from the monitors
Jan 31 07:55:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:22.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).paxos(paxos updating c 3264..3935) lease_timeout -- calling new election
Jan 31 07:55:22 compute-2 nova_compute[226829]: 2026-01-31 07:55:22.851 226833 DEBUG nova.network.neutron [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Updating instance_info_cache with network_info: [{"id": "17255b28-10be-4d25-bba8-0292dfa7ad68", "address": "fa:16:3e:df:35:4b", "network": {"id": "adb6998e-030b-47a7-b80a-80015d534006", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1792056346", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.150", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17255b28-10", "ovs_interfaceid": "17255b28-10be-4d25-bba8-0292dfa7ad68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9f745c59-fd7c-4cd5-9eda-e439c116fbd8", "address": "fa:16:3e:be:75:fd", "network": {"id": "9ed74c79-93cc-44eb-a177-8d95f653faee", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1808837000", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.16", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f745c59-fd", "ovs_interfaceid": "9f745c59-fd7c-4cd5-9eda-e439c116fbd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bf5f6a49-45dc-420a-8c08-bbf112a81186", "address": "fa:16:3e:93:15:34", "network": {"id": "adb6998e-030b-47a7-b80a-80015d534006", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1792056346", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5f6a49-45", "ovs_interfaceid": "bf5f6a49-45dc-420a-8c08-bbf112a81186", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:55:23 compute-2 nova_compute[226829]: 2026-01-31 07:55:23.357 226833 DEBUG nova.network.neutron [req-52292b98-936a-4fd8-9020-fdc3a99765de req-f74127b5-2657-45b0-aadc-8929a6867d55 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Updated VIF entry in instance network info cache for port 7b8f6af4-25f0-4345-98f9-b6f4345e7a4c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:55:23 compute-2 nova_compute[226829]: 2026-01-31 07:55:23.357 226833 DEBUG nova.network.neutron [req-52292b98-936a-4fd8-9020-fdc3a99765de req-f74127b5-2657-45b0-aadc-8929a6867d55 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Updating instance_info_cache with network_info: [{"id": "7b8f6af4-25f0-4345-98f9-b6f4345e7a4c", "address": "fa:16:3e:19:0e:2f", "network": {"id": "45bd414f-24de-4143-b7e8-6a584f757e03", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1647349453-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc07928b022430ba8bcc450bbb5c7f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b8f6af4-25", "ovs_interfaceid": "7b8f6af4-25f0-4345-98f9-b6f4345e7a4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:55:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:23.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:24.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:24 compute-2 nova_compute[226829]: 2026-01-31 07:55:24.252 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:24 compute-2 ceph-mon[77282]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Jan 31 07:55:24 compute-2 ceph-mon[77282]: paxos.1).electionLogic(34) init, last seen epoch 34
Jan 31 07:55:24 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:55:24 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:55:24 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:24 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:24 compute-2 nova_compute[226829]: 2026-01-31 07:55:24.946 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:25 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:25.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:25 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma missed beacon ack from the monitors
Jan 31 07:55:25 compute-2 nova_compute[226829]: 2026-01-31 07:55:25.556 226833 DEBUG oslo_concurrency.lockutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Releasing lock "refresh_cache-ebe88ddf-b955-4cc2-9685-826d10d55955" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:55:25 compute-2 nova_compute[226829]: 2026-01-31 07:55:25.557 226833 DEBUG nova.compute.manager [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Instance network_info: |[{"id": "17255b28-10be-4d25-bba8-0292dfa7ad68", "address": "fa:16:3e:df:35:4b", "network": {"id": "adb6998e-030b-47a7-b80a-80015d534006", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1792056346", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.150", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17255b28-10", "ovs_interfaceid": "17255b28-10be-4d25-bba8-0292dfa7ad68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9f745c59-fd7c-4cd5-9eda-e439c116fbd8", "address": "fa:16:3e:be:75:fd", "network": {"id": "9ed74c79-93cc-44eb-a177-8d95f653faee", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1808837000", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.16", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f745c59-fd", "ovs_interfaceid": "9f745c59-fd7c-4cd5-9eda-e439c116fbd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bf5f6a49-45dc-420a-8c08-bbf112a81186", "address": "fa:16:3e:93:15:34", "network": {"id": "adb6998e-030b-47a7-b80a-80015d534006", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1792056346", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5f6a49-45", "ovs_interfaceid": "bf5f6a49-45dc-420a-8c08-bbf112a81186", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:55:25 compute-2 nova_compute[226829]: 2026-01-31 07:55:25.557 226833 DEBUG oslo_concurrency.lockutils [req-d61df46f-de05-41ae-ac97-1ba403dba64e req-b9eebc88-59f7-4612-8750-4a1e1f98cdee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-ebe88ddf-b955-4cc2-9685-826d10d55955" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:55:25 compute-2 nova_compute[226829]: 2026-01-31 07:55:25.557 226833 DEBUG nova.network.neutron [req-d61df46f-de05-41ae-ac97-1ba403dba64e req-b9eebc88-59f7-4612-8750-4a1e1f98cdee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Refreshing network info cache for port bf5f6a49-45dc-420a-8c08-bbf112a81186 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:55:25 compute-2 nova_compute[226829]: 2026-01-31 07:55:25.561 226833 DEBUG nova.virt.libvirt.driver [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Start _get_guest_xml network_info=[{"id": "17255b28-10be-4d25-bba8-0292dfa7ad68", "address": "fa:16:3e:df:35:4b", "network": {"id": "adb6998e-030b-47a7-b80a-80015d534006", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1792056346", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.150", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17255b28-10", "ovs_interfaceid": "17255b28-10be-4d25-bba8-0292dfa7ad68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9f745c59-fd7c-4cd5-9eda-e439c116fbd8", "address": "fa:16:3e:be:75:fd", "network": {"id": "9ed74c79-93cc-44eb-a177-8d95f653faee", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1808837000", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.16", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f745c59-fd", "ovs_interfaceid": "9f745c59-fd7c-4cd5-9eda-e439c116fbd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bf5f6a49-45dc-420a-8c08-bbf112a81186", "address": "fa:16:3e:93:15:34", "network": {"id": "adb6998e-030b-47a7-b80a-80015d534006", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1792056346", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5f6a49-45", "ovs_interfaceid": "bf5f6a49-45dc-420a-8c08-bbf112a81186", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:55:25 compute-2 nova_compute[226829]: 2026-01-31 07:55:25.565 226833 WARNING nova.virt.libvirt.driver [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:55:25 compute-2 nova_compute[226829]: 2026-01-31 07:55:25.572 226833 DEBUG nova.virt.libvirt.host [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:55:25 compute-2 nova_compute[226829]: 2026-01-31 07:55:25.572 226833 DEBUG nova.virt.libvirt.host [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:55:25 compute-2 nova_compute[226829]: 2026-01-31 07:55:25.577 226833 DEBUG nova.virt.libvirt.host [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:55:25 compute-2 nova_compute[226829]: 2026-01-31 07:55:25.577 226833 DEBUG nova.virt.libvirt.host [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:55:25 compute-2 nova_compute[226829]: 2026-01-31 07:55:25.579 226833 DEBUG nova.virt.libvirt.driver [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:55:25 compute-2 nova_compute[226829]: 2026-01-31 07:55:25.579 226833 DEBUG nova.virt.hardware [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:55:25 compute-2 nova_compute[226829]: 2026-01-31 07:55:25.580 226833 DEBUG nova.virt.hardware [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:55:25 compute-2 nova_compute[226829]: 2026-01-31 07:55:25.580 226833 DEBUG nova.virt.hardware [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:55:25 compute-2 nova_compute[226829]: 2026-01-31 07:55:25.580 226833 DEBUG nova.virt.hardware [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:55:25 compute-2 nova_compute[226829]: 2026-01-31 07:55:25.580 226833 DEBUG nova.virt.hardware [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:55:25 compute-2 nova_compute[226829]: 2026-01-31 07:55:25.581 226833 DEBUG nova.virt.hardware [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:55:25 compute-2 nova_compute[226829]: 2026-01-31 07:55:25.581 226833 DEBUG nova.virt.hardware [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:55:25 compute-2 nova_compute[226829]: 2026-01-31 07:55:25.581 226833 DEBUG nova.virt.hardware [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:55:25 compute-2 nova_compute[226829]: 2026-01-31 07:55:25.582 226833 DEBUG nova.virt.hardware [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:55:25 compute-2 nova_compute[226829]: 2026-01-31 07:55:25.582 226833 DEBUG nova.virt.hardware [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:55:25 compute-2 nova_compute[226829]: 2026-01-31 07:55:25.582 226833 DEBUG nova.virt.hardware [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:55:25 compute-2 nova_compute[226829]: 2026-01-31 07:55:25.588 226833 DEBUG oslo_concurrency.processutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:55:25 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:25 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:26.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:26 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:26 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:27 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:27 compute-2 podman[260733]: 2026-01-31 07:55:27.217984261 +0000 UTC m=+0.088437456 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:55:27 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma MDS connection to Monitors appears to be laggy; 17.9205s since last acked beacon
Jan 31 07:55:27 compute-2 ceph-mds[84366]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Jan 31 07:55:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:27.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:27 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:27 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:28.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:28 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:28 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:28 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:29 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:29 compute-2 nova_compute[226829]: 2026-01-31 07:55:29.255 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:29 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:55:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:29.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:55:29 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma missed beacon ack from the monitors
Jan 31 07:55:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:55:29 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma  MDS is no longer laggy
Jan 31 07:55:29 compute-2 ceph-mon[77282]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Jan 31 07:55:29 compute-2 ceph-mon[77282]: paxos.1).electionLogic(37) init, last seen epoch 37, mid-election, bumping
Jan 31 07:55:29 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:55:29 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:55:29 compute-2 nova_compute[226829]: 2026-01-31 07:55:29.947 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:55:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:30.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:55:30 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2549763965' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:55:30 compute-2 nova_compute[226829]: 2026-01-31 07:55:30.440 226833 DEBUG oslo_concurrency.processutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.853s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:55:30 compute-2 nova_compute[226829]: 2026-01-31 07:55:30.464 226833 DEBUG nova.storage.rbd_utils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] rbd image ebe88ddf-b955-4cc2-9685-826d10d55955_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:55:30 compute-2 nova_compute[226829]: 2026-01-31 07:55:30.467 226833 DEBUG oslo_concurrency.processutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:55:30 compute-2 nova_compute[226829]: 2026-01-31 07:55:30.634 226833 DEBUG oslo_concurrency.lockutils [req-52292b98-936a-4fd8-9020-fdc3a99765de req-f74127b5-2657-45b0-aadc-8929a6867d55 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-a6d52544-dac0-441a-a99f-b0d23283f733" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:55:30 compute-2 ceph-mon[77282]: mon.compute-1 calling monitor election
Jan 31 07:55:30 compute-2 ceph-mon[77282]: mon.compute-2 calling monitor election
Jan 31 07:55:30 compute-2 ceph-mon[77282]: mon.compute-0 calling monitor election
Jan 31 07:55:30 compute-2 ceph-mon[77282]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 07:55:30 compute-2 ceph-mon[77282]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 07:55:30 compute-2 ceph-mon[77282]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 07:55:30 compute-2 ceph-mon[77282]: osdmap e250: 3 total, 3 up, 3 in
Jan 31 07:55:30 compute-2 ceph-mon[77282]: mgrmap e11: compute-0.hhuoua(active, since 50m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 07:55:30 compute-2 ceph-mon[77282]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Jan 31 07:55:30 compute-2 ceph-mon[77282]: Cluster is now healthy
Jan 31 07:55:30 compute-2 ceph-mon[77282]: overall HEALTH_OK
Jan 31 07:55:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2549763965' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:55:30 compute-2 nova_compute[226829]: 2026-01-31 07:55:30.636 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-a6d52544-dac0-441a-a99f-b0d23283f733" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:55:30 compute-2 nova_compute[226829]: 2026-01-31 07:55:30.636 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 07:55:30 compute-2 nova_compute[226829]: 2026-01-31 07:55:30.636 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid a6d52544-dac0-441a-a99f-b0d23283f733 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:55:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:55:30 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1914918059' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:55:30 compute-2 nova_compute[226829]: 2026-01-31 07:55:30.961 226833 DEBUG oslo_concurrency.processutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:55:30 compute-2 nova_compute[226829]: 2026-01-31 07:55:30.962 226833 DEBUG nova.virt.libvirt.vif [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:54:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-17927120',display_name='tempest-ServersTestMultiNic-server-17927120',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-17927120',id=78,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56ce08a86486427fbebbfbd075cdb404',ramdisk_id='',reservation_id='r-sxjoqsag',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-539084191',owner_user_name='tempest-ServersTestMultiNic-539084191-project
-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:54:52Z,user_data=None,user_id='56eecf4373334b18a454186e0c54e924',uuid=ebe88ddf-b955-4cc2-9685-826d10d55955,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "17255b28-10be-4d25-bba8-0292dfa7ad68", "address": "fa:16:3e:df:35:4b", "network": {"id": "adb6998e-030b-47a7-b80a-80015d534006", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1792056346", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.150", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17255b28-10", "ovs_interfaceid": "17255b28-10be-4d25-bba8-0292dfa7ad68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:55:30 compute-2 nova_compute[226829]: 2026-01-31 07:55:30.963 226833 DEBUG nova.network.os_vif_util [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converting VIF {"id": "17255b28-10be-4d25-bba8-0292dfa7ad68", "address": "fa:16:3e:df:35:4b", "network": {"id": "adb6998e-030b-47a7-b80a-80015d534006", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1792056346", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.150", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17255b28-10", "ovs_interfaceid": "17255b28-10be-4d25-bba8-0292dfa7ad68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:55:30 compute-2 nova_compute[226829]: 2026-01-31 07:55:30.964 226833 DEBUG nova.network.os_vif_util [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:35:4b,bridge_name='br-int',has_traffic_filtering=True,id=17255b28-10be-4d25-bba8-0292dfa7ad68,network=Network(adb6998e-030b-47a7-b80a-80015d534006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17255b28-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:55:30 compute-2 nova_compute[226829]: 2026-01-31 07:55:30.965 226833 DEBUG nova.virt.libvirt.vif [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:54:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-17927120',display_name='tempest-ServersTestMultiNic-server-17927120',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-17927120',id=78,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56ce08a86486427fbebbfbd075cdb404',ramdisk_id='',reservation_id='r-sxjoqsag',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-539084191',owner_user_name='tempest-ServersTestMultiNic-539084191-project
-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:54:52Z,user_data=None,user_id='56eecf4373334b18a454186e0c54e924',uuid=ebe88ddf-b955-4cc2-9685-826d10d55955,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9f745c59-fd7c-4cd5-9eda-e439c116fbd8", "address": "fa:16:3e:be:75:fd", "network": {"id": "9ed74c79-93cc-44eb-a177-8d95f653faee", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1808837000", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.16", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f745c59-fd", "ovs_interfaceid": "9f745c59-fd7c-4cd5-9eda-e439c116fbd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:55:30 compute-2 nova_compute[226829]: 2026-01-31 07:55:30.965 226833 DEBUG nova.network.os_vif_util [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converting VIF {"id": "9f745c59-fd7c-4cd5-9eda-e439c116fbd8", "address": "fa:16:3e:be:75:fd", "network": {"id": "9ed74c79-93cc-44eb-a177-8d95f653faee", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1808837000", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.16", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f745c59-fd", "ovs_interfaceid": "9f745c59-fd7c-4cd5-9eda-e439c116fbd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:55:30 compute-2 nova_compute[226829]: 2026-01-31 07:55:30.966 226833 DEBUG nova.network.os_vif_util [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:75:fd,bridge_name='br-int',has_traffic_filtering=True,id=9f745c59-fd7c-4cd5-9eda-e439c116fbd8,network=Network(9ed74c79-93cc-44eb-a177-8d95f653faee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f745c59-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:55:30 compute-2 nova_compute[226829]: 2026-01-31 07:55:30.966 226833 DEBUG nova.virt.libvirt.vif [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:54:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-17927120',display_name='tempest-ServersTestMultiNic-server-17927120',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-17927120',id=78,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56ce08a86486427fbebbfbd075cdb404',ramdisk_id='',reservation_id='r-sxjoqsag',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-539084191',owner_user_name='tempest-ServersTestMultiNic-539084191-project
-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:54:52Z,user_data=None,user_id='56eecf4373334b18a454186e0c54e924',uuid=ebe88ddf-b955-4cc2-9685-826d10d55955,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf5f6a49-45dc-420a-8c08-bbf112a81186", "address": "fa:16:3e:93:15:34", "network": {"id": "adb6998e-030b-47a7-b80a-80015d534006", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1792056346", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5f6a49-45", "ovs_interfaceid": "bf5f6a49-45dc-420a-8c08-bbf112a81186", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:55:30 compute-2 nova_compute[226829]: 2026-01-31 07:55:30.967 226833 DEBUG nova.network.os_vif_util [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converting VIF {"id": "bf5f6a49-45dc-420a-8c08-bbf112a81186", "address": "fa:16:3e:93:15:34", "network": {"id": "adb6998e-030b-47a7-b80a-80015d534006", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1792056346", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5f6a49-45", "ovs_interfaceid": "bf5f6a49-45dc-420a-8c08-bbf112a81186", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:55:30 compute-2 nova_compute[226829]: 2026-01-31 07:55:30.967 226833 DEBUG nova.network.os_vif_util [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:15:34,bridge_name='br-int',has_traffic_filtering=True,id=bf5f6a49-45dc-420a-8c08-bbf112a81186,network=Network(adb6998e-030b-47a7-b80a-80015d534006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf5f6a49-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:55:30 compute-2 nova_compute[226829]: 2026-01-31 07:55:30.968 226833 DEBUG nova.objects.instance [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lazy-loading 'pci_devices' on Instance uuid ebe88ddf-b955-4cc2-9685-826d10d55955 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:55:30 compute-2 ovn_controller[133834]: 2026-01-31T07:55:30Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:19:0e:2f 10.100.0.12
Jan 31 07:55:30 compute-2 ovn_controller[133834]: 2026-01-31T07:55:30Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:19:0e:2f 10.100.0.12
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.240 226833 DEBUG nova.virt.libvirt.driver [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:55:31 compute-2 nova_compute[226829]:   <uuid>ebe88ddf-b955-4cc2-9685-826d10d55955</uuid>
Jan 31 07:55:31 compute-2 nova_compute[226829]:   <name>instance-0000004e</name>
Jan 31 07:55:31 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:55:31 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:55:31 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <nova:name>tempest-ServersTestMultiNic-server-17927120</nova:name>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:55:25</nova:creationTime>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:55:31 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:55:31 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:55:31 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:55:31 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:55:31 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:55:31 compute-2 nova_compute[226829]:         <nova:user uuid="56eecf4373334b18a454186e0c54e924">tempest-ServersTestMultiNic-539084191-project-member</nova:user>
Jan 31 07:55:31 compute-2 nova_compute[226829]:         <nova:project uuid="56ce08a86486427fbebbfbd075cdb404">tempest-ServersTestMultiNic-539084191</nova:project>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 07:55:31 compute-2 nova_compute[226829]:         <nova:port uuid="17255b28-10be-4d25-bba8-0292dfa7ad68">
Jan 31 07:55:31 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.150" ipVersion="4"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 07:55:31 compute-2 nova_compute[226829]:         <nova:port uuid="9f745c59-fd7c-4cd5-9eda-e439c116fbd8">
Jan 31 07:55:31 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.1.16" ipVersion="4"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 07:55:31 compute-2 nova_compute[226829]:         <nova:port uuid="bf5f6a49-45dc-420a-8c08-bbf112a81186">
Jan 31 07:55:31 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.218" ipVersion="4"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:55:31 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:55:31 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <system>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <entry name="serial">ebe88ddf-b955-4cc2-9685-826d10d55955</entry>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <entry name="uuid">ebe88ddf-b955-4cc2-9685-826d10d55955</entry>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     </system>
Jan 31 07:55:31 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:55:31 compute-2 nova_compute[226829]:   <os>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:   </os>
Jan 31 07:55:31 compute-2 nova_compute[226829]:   <features>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:   </features>
Jan 31 07:55:31 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:55:31 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:55:31 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/ebe88ddf-b955-4cc2-9685-826d10d55955_disk">
Jan 31 07:55:31 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       </source>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:55:31 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/ebe88ddf-b955-4cc2-9685-826d10d55955_disk.config">
Jan 31 07:55:31 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       </source>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:55:31 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:df:35:4b"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <target dev="tap17255b28-10"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:be:75:fd"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <target dev="tap9f745c59-fd"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:93:15:34"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <target dev="tapbf5f6a49-45"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/ebe88ddf-b955-4cc2-9685-826d10d55955/console.log" append="off"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <video>
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     </video>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:55:31 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:55:31 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:55:31 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:55:31 compute-2 nova_compute[226829]: </domain>
Jan 31 07:55:31 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.240 226833 DEBUG nova.compute.manager [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Preparing to wait for external event network-vif-plugged-17255b28-10be-4d25-bba8-0292dfa7ad68 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.241 226833 DEBUG oslo_concurrency.lockutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Acquiring lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.241 226833 DEBUG oslo_concurrency.lockutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.241 226833 DEBUG oslo_concurrency.lockutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.241 226833 DEBUG nova.compute.manager [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Preparing to wait for external event network-vif-plugged-9f745c59-fd7c-4cd5-9eda-e439c116fbd8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.241 226833 DEBUG oslo_concurrency.lockutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Acquiring lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.241 226833 DEBUG oslo_concurrency.lockutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.242 226833 DEBUG oslo_concurrency.lockutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.242 226833 DEBUG nova.compute.manager [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Preparing to wait for external event network-vif-plugged-bf5f6a49-45dc-420a-8c08-bbf112a81186 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.242 226833 DEBUG oslo_concurrency.lockutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Acquiring lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.242 226833 DEBUG oslo_concurrency.lockutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.242 226833 DEBUG oslo_concurrency.lockutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.243 226833 DEBUG nova.virt.libvirt.vif [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:54:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-17927120',display_name='tempest-ServersTestMultiNic-server-17927120',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-17927120',id=78,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56ce08a86486427fbebbfbd075cdb404',ramdisk_id='',reservation_id='r-sxjoqsag',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-539084191',owner_user_name='tempest-ServersTestMultiNic-5390841
91-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:54:52Z,user_data=None,user_id='56eecf4373334b18a454186e0c54e924',uuid=ebe88ddf-b955-4cc2-9685-826d10d55955,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "17255b28-10be-4d25-bba8-0292dfa7ad68", "address": "fa:16:3e:df:35:4b", "network": {"id": "adb6998e-030b-47a7-b80a-80015d534006", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1792056346", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.150", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17255b28-10", "ovs_interfaceid": "17255b28-10be-4d25-bba8-0292dfa7ad68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.243 226833 DEBUG nova.network.os_vif_util [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converting VIF {"id": "17255b28-10be-4d25-bba8-0292dfa7ad68", "address": "fa:16:3e:df:35:4b", "network": {"id": "adb6998e-030b-47a7-b80a-80015d534006", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1792056346", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.150", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17255b28-10", "ovs_interfaceid": "17255b28-10be-4d25-bba8-0292dfa7ad68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.243 226833 DEBUG nova.network.os_vif_util [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:35:4b,bridge_name='br-int',has_traffic_filtering=True,id=17255b28-10be-4d25-bba8-0292dfa7ad68,network=Network(adb6998e-030b-47a7-b80a-80015d534006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17255b28-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.244 226833 DEBUG os_vif [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:35:4b,bridge_name='br-int',has_traffic_filtering=True,id=17255b28-10be-4d25-bba8-0292dfa7ad68,network=Network(adb6998e-030b-47a7-b80a-80015d534006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17255b28-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.244 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.245 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.245 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.247 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.247 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17255b28-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.248 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap17255b28-10, col_values=(('external_ids', {'iface-id': '17255b28-10be-4d25-bba8-0292dfa7ad68', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:df:35:4b', 'vm-uuid': 'ebe88ddf-b955-4cc2-9685-826d10d55955'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.250 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:31 compute-2 NetworkManager[48999]: <info>  [1769846131.2526] manager: (tap17255b28-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/129)
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.252 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.257 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.259 226833 INFO os_vif [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:35:4b,bridge_name='br-int',has_traffic_filtering=True,id=17255b28-10be-4d25-bba8-0292dfa7ad68,network=Network(adb6998e-030b-47a7-b80a-80015d534006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17255b28-10')
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.260 226833 DEBUG nova.virt.libvirt.vif [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:54:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-17927120',display_name='tempest-ServersTestMultiNic-server-17927120',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-17927120',id=78,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56ce08a86486427fbebbfbd075cdb404',ramdisk_id='',reservation_id='r-sxjoqsag',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-539084191',owner_user_name='tempest-ServersTestMultiNic-539084191-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:54:52Z,user_data=None,user_id='56eecf4373334b18a454186e0c54e924',uuid=ebe88ddf-b955-4cc2-9685-826d10d55955,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9f745c59-fd7c-4cd5-9eda-e439c116fbd8", "address": "fa:16:3e:be:75:fd", "network": {"id": "9ed74c79-93cc-44eb-a177-8d95f653faee", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1808837000", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.16", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f745c59-fd", "ovs_interfaceid": "9f745c59-fd7c-4cd5-9eda-e439c116fbd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.261 226833 DEBUG nova.network.os_vif_util [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converting VIF {"id": "9f745c59-fd7c-4cd5-9eda-e439c116fbd8", "address": "fa:16:3e:be:75:fd", "network": {"id": "9ed74c79-93cc-44eb-a177-8d95f653faee", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1808837000", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.16", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f745c59-fd", "ovs_interfaceid": "9f745c59-fd7c-4cd5-9eda-e439c116fbd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.262 226833 DEBUG nova.network.os_vif_util [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:75:fd,bridge_name='br-int',has_traffic_filtering=True,id=9f745c59-fd7c-4cd5-9eda-e439c116fbd8,network=Network(9ed74c79-93cc-44eb-a177-8d95f653faee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f745c59-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.262 226833 DEBUG os_vif [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:75:fd,bridge_name='br-int',has_traffic_filtering=True,id=9f745c59-fd7c-4cd5-9eda-e439c116fbd8,network=Network(9ed74c79-93cc-44eb-a177-8d95f653faee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f745c59-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.263 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.263 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.264 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.269 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.270 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f745c59-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.270 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9f745c59-fd, col_values=(('external_ids', {'iface-id': '9f745c59-fd7c-4cd5-9eda-e439c116fbd8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:be:75:fd', 'vm-uuid': 'ebe88ddf-b955-4cc2-9685-826d10d55955'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:31 compute-2 NetworkManager[48999]: <info>  [1769846131.2737] manager: (tap9f745c59-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/130)
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.275 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.278 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.279 226833 INFO os_vif [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:75:fd,bridge_name='br-int',has_traffic_filtering=True,id=9f745c59-fd7c-4cd5-9eda-e439c116fbd8,network=Network(9ed74c79-93cc-44eb-a177-8d95f653faee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f745c59-fd')
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.279 226833 DEBUG nova.virt.libvirt.vif [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:54:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-17927120',display_name='tempest-ServersTestMultiNic-server-17927120',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-17927120',id=78,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56ce08a86486427fbebbfbd075cdb404',ramdisk_id='',reservation_id='r-sxjoqsag',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-539084191',owner_user_name='tempest-ServersTestMultiNic-539084191-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:54:52Z,user_data=None,user_id='56eecf4373334b18a454186e0c54e924',uuid=ebe88ddf-b955-4cc2-9685-826d10d55955,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf5f6a49-45dc-420a-8c08-bbf112a81186", "address": "fa:16:3e:93:15:34", "network": {"id": "adb6998e-030b-47a7-b80a-80015d534006", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1792056346", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5f6a49-45", "ovs_interfaceid": "bf5f6a49-45dc-420a-8c08-bbf112a81186", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.280 226833 DEBUG nova.network.os_vif_util [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converting VIF {"id": "bf5f6a49-45dc-420a-8c08-bbf112a81186", "address": "fa:16:3e:93:15:34", "network": {"id": "adb6998e-030b-47a7-b80a-80015d534006", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1792056346", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5f6a49-45", "ovs_interfaceid": "bf5f6a49-45dc-420a-8c08-bbf112a81186", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.280 226833 DEBUG nova.network.os_vif_util [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:15:34,bridge_name='br-int',has_traffic_filtering=True,id=bf5f6a49-45dc-420a-8c08-bbf112a81186,network=Network(adb6998e-030b-47a7-b80a-80015d534006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf5f6a49-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.281 226833 DEBUG os_vif [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:15:34,bridge_name='br-int',has_traffic_filtering=True,id=bf5f6a49-45dc-420a-8c08-bbf112a81186,network=Network(adb6998e-030b-47a7-b80a-80015d534006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf5f6a49-45') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.281 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.281 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.281 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.283 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.283 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf5f6a49-45, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.283 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf5f6a49-45, col_values=(('external_ids', {'iface-id': 'bf5f6a49-45dc-420a-8c08-bbf112a81186', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:93:15:34', 'vm-uuid': 'ebe88ddf-b955-4cc2-9685-826d10d55955'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.284 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:31 compute-2 NetworkManager[48999]: <info>  [1769846131.2859] manager: (tapbf5f6a49-45): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.286 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.290 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.291 226833 INFO os_vif [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:15:34,bridge_name='br-int',has_traffic_filtering=True,id=bf5f6a49-45dc-420a-8c08-bbf112a81186,network=Network(adb6998e-030b-47a7-b80a-80015d534006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf5f6a49-45')
Jan 31 07:55:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:55:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:31.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.731 226833 DEBUG nova.virt.libvirt.driver [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.731 226833 DEBUG nova.virt.libvirt.driver [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.732 226833 DEBUG nova.virt.libvirt.driver [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] No VIF found with MAC fa:16:3e:df:35:4b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.732 226833 DEBUG nova.virt.libvirt.driver [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] No VIF found with MAC fa:16:3e:be:75:fd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.732 226833 DEBUG nova.virt.libvirt.driver [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] No VIF found with MAC fa:16:3e:93:15:34, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:55:31 compute-2 nova_compute[226829]: 2026-01-31 07:55:31.732 226833 INFO nova.virt.libvirt.driver [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Using config drive
Jan 31 07:55:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1914918059' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:55:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2519778568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:55:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:32.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:32 compute-2 nova_compute[226829]: 2026-01-31 07:55:32.265 226833 DEBUG nova.storage.rbd_utils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] rbd image ebe88ddf-b955-4cc2-9685-826d10d55955_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:55:32 compute-2 ceph-mon[77282]: pgmap v1748: 305 pgs: 305 active+clean; 268 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 128 op/s
Jan 31 07:55:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1141855199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:55:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:33.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:33 compute-2 nova_compute[226829]: 2026-01-31 07:55:33.720 226833 INFO nova.virt.libvirt.driver [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Creating config drive at /var/lib/nova/instances/ebe88ddf-b955-4cc2-9685-826d10d55955/disk.config
Jan 31 07:55:33 compute-2 nova_compute[226829]: 2026-01-31 07:55:33.724 226833 DEBUG oslo_concurrency.processutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ebe88ddf-b955-4cc2-9685-826d10d55955/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8t62g0y2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:55:33 compute-2 nova_compute[226829]: 2026-01-31 07:55:33.758 226833 DEBUG nova.network.neutron [req-d61df46f-de05-41ae-ac97-1ba403dba64e req-b9eebc88-59f7-4612-8750-4a1e1f98cdee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Updated VIF entry in instance network info cache for port bf5f6a49-45dc-420a-8c08-bbf112a81186. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:55:33 compute-2 nova_compute[226829]: 2026-01-31 07:55:33.759 226833 DEBUG nova.network.neutron [req-d61df46f-de05-41ae-ac97-1ba403dba64e req-b9eebc88-59f7-4612-8750-4a1e1f98cdee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Updating instance_info_cache with network_info: [{"id": "17255b28-10be-4d25-bba8-0292dfa7ad68", "address": "fa:16:3e:df:35:4b", "network": {"id": "adb6998e-030b-47a7-b80a-80015d534006", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1792056346", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.150", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17255b28-10", "ovs_interfaceid": "17255b28-10be-4d25-bba8-0292dfa7ad68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9f745c59-fd7c-4cd5-9eda-e439c116fbd8", "address": "fa:16:3e:be:75:fd", "network": {"id": "9ed74c79-93cc-44eb-a177-8d95f653faee", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1808837000", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.16", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f745c59-fd", "ovs_interfaceid": "9f745c59-fd7c-4cd5-9eda-e439c116fbd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bf5f6a49-45dc-420a-8c08-bbf112a81186", "address": "fa:16:3e:93:15:34", "network": {"id": "adb6998e-030b-47a7-b80a-80015d534006", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1792056346", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5f6a49-45", "ovs_interfaceid": "bf5f6a49-45dc-420a-8c08-bbf112a81186", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:55:33 compute-2 nova_compute[226829]: 2026-01-31 07:55:33.820 226833 DEBUG oslo_concurrency.lockutils [req-d61df46f-de05-41ae-ac97-1ba403dba64e req-b9eebc88-59f7-4612-8750-4a1e1f98cdee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-ebe88ddf-b955-4cc2-9685-826d10d55955" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:55:33 compute-2 nova_compute[226829]: 2026-01-31 07:55:33.860 226833 DEBUG oslo_concurrency.processutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ebe88ddf-b955-4cc2-9685-826d10d55955/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8t62g0y2" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:55:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:55:33 compute-2 nova_compute[226829]: 2026-01-31 07:55:33.886 226833 DEBUG nova.storage.rbd_utils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] rbd image ebe88ddf-b955-4cc2-9685-826d10d55955_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:55:33 compute-2 nova_compute[226829]: 2026-01-31 07:55:33.889 226833 DEBUG oslo_concurrency.processutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ebe88ddf-b955-4cc2-9685-826d10d55955/disk.config ebe88ddf-b955-4cc2-9685-826d10d55955_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:55:33 compute-2 ceph-mon[77282]: pgmap v1749: 305 pgs: 305 active+clean; 274 MiB data, 805 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.2 MiB/s wr, 149 op/s
Jan 31 07:55:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/407827562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:55:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:55:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:34.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:55:34 compute-2 nova_compute[226829]: 2026-01-31 07:55:34.196 226833 DEBUG oslo_concurrency.processutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ebe88ddf-b955-4cc2-9685-826d10d55955/disk.config ebe88ddf-b955-4cc2-9685-826d10d55955_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.307s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:55:34 compute-2 nova_compute[226829]: 2026-01-31 07:55:34.197 226833 INFO nova.virt.libvirt.driver [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Deleting local config drive /var/lib/nova/instances/ebe88ddf-b955-4cc2-9685-826d10d55955/disk.config because it was imported into RBD.
Jan 31 07:55:34 compute-2 NetworkManager[48999]: <info>  [1769846134.2418] manager: (tap17255b28-10): new Tun device (/org/freedesktop/NetworkManager/Devices/132)
Jan 31 07:55:34 compute-2 kernel: tap17255b28-10: entered promiscuous mode
Jan 31 07:55:34 compute-2 ovn_controller[133834]: 2026-01-31T07:55:34Z|00239|binding|INFO|Claiming lport 17255b28-10be-4d25-bba8-0292dfa7ad68 for this chassis.
Jan 31 07:55:34 compute-2 ovn_controller[133834]: 2026-01-31T07:55:34Z|00240|binding|INFO|17255b28-10be-4d25-bba8-0292dfa7ad68: Claiming fa:16:3e:df:35:4b 10.100.0.150
Jan 31 07:55:34 compute-2 nova_compute[226829]: 2026-01-31 07:55:34.249 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:34 compute-2 kernel: tap9f745c59-fd: entered promiscuous mode
Jan 31 07:55:34 compute-2 NetworkManager[48999]: <info>  [1769846134.2566] manager: (tap9f745c59-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/133)
Jan 31 07:55:34 compute-2 nova_compute[226829]: 2026-01-31 07:55:34.265 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:34 compute-2 ovn_controller[133834]: 2026-01-31T07:55:34Z|00241|binding|INFO|Claiming lport 9f745c59-fd7c-4cd5-9eda-e439c116fbd8 for this chassis.
Jan 31 07:55:34 compute-2 ovn_controller[133834]: 2026-01-31T07:55:34Z|00242|binding|INFO|9f745c59-fd7c-4cd5-9eda-e439c116fbd8: Claiming fa:16:3e:be:75:fd 10.100.1.16
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.266 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:35:4b 10.100.0.150'], port_security=['fa:16:3e:df:35:4b 10.100.0.150'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.150/24', 'neutron:device_id': 'ebe88ddf-b955-4cc2-9685-826d10d55955', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-adb6998e-030b-47a7-b80a-80015d534006', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56ce08a86486427fbebbfbd075cdb404', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dc2cd802-0b4d-4cd4-bf24-edbd81854edb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87a9ae64-e09e-4ddd-9c3f-2e68f11949d7, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=17255b28-10be-4d25-bba8-0292dfa7ad68) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.269 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 17255b28-10be-4d25-bba8-0292dfa7ad68 in datapath adb6998e-030b-47a7-b80a-80015d534006 bound to our chassis
Jan 31 07:55:34 compute-2 NetworkManager[48999]: <info>  [1769846134.2716] manager: (tapbf5f6a49-45): new Tun device (/org/freedesktop/NetworkManager/Devices/134)
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.271 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network adb6998e-030b-47a7-b80a-80015d534006
Jan 31 07:55:34 compute-2 kernel: tapbf5f6a49-45: entered promiscuous mode
Jan 31 07:55:34 compute-2 nova_compute[226829]: 2026-01-31 07:55:34.282 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:34 compute-2 ovn_controller[133834]: 2026-01-31T07:55:34Z|00243|if_status|INFO|Dropped 1 log messages in last 1427 seconds (most recently, 1427 seconds ago) due to excessive rate
Jan 31 07:55:34 compute-2 ovn_controller[133834]: 2026-01-31T07:55:34Z|00244|if_status|INFO|Not updating pb chassis for bf5f6a49-45dc-420a-8c08-bbf112a81186 now as sb is readonly
Jan 31 07:55:34 compute-2 systemd-udevd[260902]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:55:34 compute-2 systemd-udevd[260904]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.283 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[52fb6fea-d13d-48f7-9940-5efe2bea0c91]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.284 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapadb6998e-01 in ovnmeta-adb6998e-030b-47a7-b80a-80015d534006 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:55:34 compute-2 systemd-udevd[260903]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.286 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapadb6998e-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.287 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8f7b8142-e061-474c-adb9-d3b69707f4ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.288 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[02c492c9-e44d-436c-a2d3-db3949d5cb19]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:34 compute-2 ovn_controller[133834]: 2026-01-31T07:55:34Z|00245|binding|INFO|Setting lport 17255b28-10be-4d25-bba8-0292dfa7ad68 ovn-installed in OVS
Jan 31 07:55:34 compute-2 nova_compute[226829]: 2026-01-31 07:55:34.290 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:34 compute-2 NetworkManager[48999]: <info>  [1769846134.2957] device (tapbf5f6a49-45): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:55:34 compute-2 NetworkManager[48999]: <info>  [1769846134.2962] device (tap17255b28-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:55:34 compute-2 NetworkManager[48999]: <info>  [1769846134.2968] device (tapbf5f6a49-45): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:55:34 compute-2 NetworkManager[48999]: <info>  [1769846134.2973] device (tap17255b28-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:55:34 compute-2 NetworkManager[48999]: <info>  [1769846134.2976] device (tap9f745c59-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:55:34 compute-2 NetworkManager[48999]: <info>  [1769846134.2980] device (tap9f745c59-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.299 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[e0a235a1-7274-4844-a2d7-945a870267cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:34 compute-2 nova_compute[226829]: 2026-01-31 07:55:34.311 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:34 compute-2 ovn_controller[133834]: 2026-01-31T07:55:34Z|00246|binding|INFO|Setting lport 9f745c59-fd7c-4cd5-9eda-e439c116fbd8 ovn-installed in OVS
Jan 31 07:55:34 compute-2 systemd-machined[195142]: New machine qemu-33-instance-0000004e.
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.314 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[79ed6d77-403b-4c8d-804f-687c507c43d7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:34 compute-2 nova_compute[226829]: 2026-01-31 07:55:34.316 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.320 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:75:fd 10.100.1.16'], port_security=['fa:16:3e:be:75:fd 10.100.1.16'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.16/24', 'neutron:device_id': 'ebe88ddf-b955-4cc2-9685-826d10d55955', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ed74c79-93cc-44eb-a177-8d95f653faee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56ce08a86486427fbebbfbd075cdb404', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dc2cd802-0b4d-4cd4-bf24-edbd81854edb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe833736-a295-4269-b9a9-4bd246e0b791, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=9f745c59-fd7c-4cd5-9eda-e439c116fbd8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:55:34 compute-2 ovn_controller[133834]: 2026-01-31T07:55:34Z|00247|binding|INFO|Claiming lport bf5f6a49-45dc-420a-8c08-bbf112a81186 for this chassis.
Jan 31 07:55:34 compute-2 ovn_controller[133834]: 2026-01-31T07:55:34Z|00248|binding|INFO|bf5f6a49-45dc-420a-8c08-bbf112a81186: Claiming fa:16:3e:93:15:34 10.100.0.218
Jan 31 07:55:34 compute-2 ovn_controller[133834]: 2026-01-31T07:55:34Z|00249|binding|INFO|Setting lport 17255b28-10be-4d25-bba8-0292dfa7ad68 up in Southbound
Jan 31 07:55:34 compute-2 ovn_controller[133834]: 2026-01-31T07:55:34Z|00250|binding|INFO|Setting lport 9f745c59-fd7c-4cd5-9eda-e439c116fbd8 up in Southbound
Jan 31 07:55:34 compute-2 ovn_controller[133834]: 2026-01-31T07:55:34Z|00251|binding|INFO|Setting lport bf5f6a49-45dc-420a-8c08-bbf112a81186 ovn-installed in OVS
Jan 31 07:55:34 compute-2 nova_compute[226829]: 2026-01-31 07:55:34.322 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:34 compute-2 systemd[1]: Started Virtual Machine qemu-33-instance-0000004e.
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.339 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[92f60b7e-58bf-4e58-8bb5-52c167d1a802]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:34 compute-2 NetworkManager[48999]: <info>  [1769846134.3451] manager: (tapadb6998e-00): new Veth device (/org/freedesktop/NetworkManager/Devices/135)
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.344 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ebbebbba-159c-4ef6-a41c-11cb391b0879]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:34 compute-2 ovn_controller[133834]: 2026-01-31T07:55:34Z|00252|binding|INFO|Setting lport bf5f6a49-45dc-420a-8c08-bbf112a81186 up in Southbound
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.347 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:15:34 10.100.0.218'], port_security=['fa:16:3e:93:15:34 10.100.0.218'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.218/24', 'neutron:device_id': 'ebe88ddf-b955-4cc2-9685-826d10d55955', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-adb6998e-030b-47a7-b80a-80015d534006', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56ce08a86486427fbebbfbd075cdb404', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dc2cd802-0b4d-4cd4-bf24-edbd81854edb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87a9ae64-e09e-4ddd-9c3f-2e68f11949d7, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=bf5f6a49-45dc-420a-8c08-bbf112a81186) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.370 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[53897b99-e514-419b-afd1-d2f0c17a0113]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.373 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[f830127d-c457-44ca-84aa-dd8dfac2ed0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:34 compute-2 NetworkManager[48999]: <info>  [1769846134.3894] device (tapadb6998e-00): carrier: link connected
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.391 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[2c9d8c58-4b6a-4eee-b766-2fdc1290eed0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.405 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ff719da9-0358-4ae1-99a1-e1c285085d83]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapadb6998e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:ec:4b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638686, 'reachable_time': 29333, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260938, 'error': None, 'target': 'ovnmeta-adb6998e-030b-47a7-b80a-80015d534006', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.415 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8919b55e-7cb3-43fd-81b0-b37ecb2cd246]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef6:ec4b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 638686, 'tstamp': 638686}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260940, 'error': None, 'target': 'ovnmeta-adb6998e-030b-47a7-b80a-80015d534006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.426 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f00179ce-41a5-4a27-8fc1-4178dc0cc3bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapadb6998e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:ec:4b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638686, 'reachable_time': 29333, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260941, 'error': None, 'target': 'ovnmeta-adb6998e-030b-47a7-b80a-80015d534006', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.446 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8a8a83db-4ea1-4b82-84ef-9968899411f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.494 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[59e52ba9-4c7d-4162-b493-e0aa51ee5dcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.495 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapadb6998e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.495 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.496 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapadb6998e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:34 compute-2 nova_compute[226829]: 2026-01-31 07:55:34.497 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:34 compute-2 kernel: tapadb6998e-00: entered promiscuous mode
Jan 31 07:55:34 compute-2 NetworkManager[48999]: <info>  [1769846134.4998] manager: (tapadb6998e-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/136)
Jan 31 07:55:34 compute-2 nova_compute[226829]: 2026-01-31 07:55:34.500 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.501 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapadb6998e-00, col_values=(('external_ids', {'iface-id': 'a8499b5e-8156-4438-a949-1a44ac18422e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:34 compute-2 nova_compute[226829]: 2026-01-31 07:55:34.502 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:34 compute-2 ovn_controller[133834]: 2026-01-31T07:55:34Z|00253|binding|INFO|Releasing lport a8499b5e-8156-4438-a949-1a44ac18422e from this chassis (sb_readonly=0)
Jan 31 07:55:34 compute-2 nova_compute[226829]: 2026-01-31 07:55:34.509 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.510 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/adb6998e-030b-47a7-b80a-80015d534006.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/adb6998e-030b-47a7-b80a-80015d534006.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.511 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2e39785a-ae75-41fb-968d-4fa0edcdc76d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.512 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: global
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-adb6998e-030b-47a7-b80a-80015d534006
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/adb6998e-030b-47a7-b80a-80015d534006.pid.haproxy
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID adb6998e-030b-47a7-b80a-80015d534006
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:55:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:34.513 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-adb6998e-030b-47a7-b80a-80015d534006', 'env', 'PROCESS_TAG=haproxy-adb6998e-030b-47a7-b80a-80015d534006', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/adb6998e-030b-47a7-b80a-80015d534006.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:55:34 compute-2 podman[260980]: 2026-01-31 07:55:34.832557215 +0000 UTC m=+0.019120977 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:55:34 compute-2 nova_compute[226829]: 2026-01-31 07:55:34.950 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:35 compute-2 podman[260980]: 2026-01-31 07:55:35.182983673 +0000 UTC m=+0.369547415 container create b02c8018681ecff9be1aaa912a8dd688d7c89be901f52e4b17aad33c82f7690a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-adb6998e-030b-47a7-b80a-80015d534006, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127)
Jan 31 07:55:35 compute-2 systemd[1]: Started libpod-conmon-b02c8018681ecff9be1aaa912a8dd688d7c89be901f52e4b17aad33c82f7690a.scope.
Jan 31 07:55:35 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:55:35 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28ecdea1172eb5e492bfb46565fa4c925c52d607e8c71932e09da989589501a9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:55:35 compute-2 podman[260980]: 2026-01-31 07:55:35.263260677 +0000 UTC m=+0.449824429 container init b02c8018681ecff9be1aaa912a8dd688d7c89be901f52e4b17aad33c82f7690a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-adb6998e-030b-47a7-b80a-80015d534006, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 07:55:35 compute-2 podman[260980]: 2026-01-31 07:55:35.269822244 +0000 UTC m=+0.456385976 container start b02c8018681ecff9be1aaa912a8dd688d7c89be901f52e4b17aad33c82f7690a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-adb6998e-030b-47a7-b80a-80015d534006, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 07:55:35 compute-2 neutron-haproxy-ovnmeta-adb6998e-030b-47a7-b80a-80015d534006[261025]: [NOTICE]   (261036) : New worker (261038) forked
Jan 31 07:55:35 compute-2 neutron-haproxy-ovnmeta-adb6998e-030b-47a7-b80a-80015d534006[261025]: [NOTICE]   (261036) : Loading success.
Jan 31 07:55:35 compute-2 nova_compute[226829]: 2026-01-31 07:55:35.332 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846135.3319204, ebe88ddf-b955-4cc2-9685-826d10d55955 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:55:35 compute-2 nova_compute[226829]: 2026-01-31 07:55:35.333 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] VM Started (Lifecycle Event)
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:35.345 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 9f745c59-fd7c-4cd5-9eda-e439c116fbd8 in datapath 9ed74c79-93cc-44eb-a177-8d95f653faee unbound from our chassis
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:35.347 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9ed74c79-93cc-44eb-a177-8d95f653faee
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:35.354 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2ba9463d-2097-4cbb-979b-1e5b61b17b7e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:35.356 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9ed74c79-91 in ovnmeta-9ed74c79-93cc-44eb-a177-8d95f653faee namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:35.358 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9ed74c79-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:35.358 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[259cf83b-3497-4f4a-88b7-be06faca3ba0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:35.359 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[01aad0af-36ae-439a-abe9-a9d9982201d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:35 compute-2 nova_compute[226829]: 2026-01-31 07:55:35.361 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:55:35 compute-2 nova_compute[226829]: 2026-01-31 07:55:35.367 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846135.332087, ebe88ddf-b955-4cc2-9685-826d10d55955 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:55:35 compute-2 nova_compute[226829]: 2026-01-31 07:55:35.367 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] VM Paused (Lifecycle Event)
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:35.368 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[77efe91c-63e7-405d-9a7a-6339aa3c076a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:35.377 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ba6e7f35-56f3-4b11-8c87-d78d6da3fbac]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:35.400 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[255ad44c-5df8-4421-8493-a7665b3594ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:35 compute-2 nova_compute[226829]: 2026-01-31 07:55:35.402 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:35.405 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4d0c5df7-0ef5-436e-b352-35e66e2eddac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:35 compute-2 systemd-udevd[260923]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:55:35 compute-2 NetworkManager[48999]: <info>  [1769846135.4069] manager: (tap9ed74c79-90): new Veth device (/org/freedesktop/NetworkManager/Devices/137)
Jan 31 07:55:35 compute-2 nova_compute[226829]: 2026-01-31 07:55:35.406 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:35.426 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[def00385-2018-4854-a1fd-e4827376ff1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:35.429 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[c703a75f-7ff4-4bc5-b6c9-c4e467ccce91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:35 compute-2 NetworkManager[48999]: <info>  [1769846135.4463] device (tap9ed74c79-90): carrier: link connected
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:35.450 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[a30a6bca-7da8-4292-8556-538b79e3358d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:35.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:35.464 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[745355f2-b64d-439a-a5aa-91bd1043d056]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ed74c79-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:82:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638791, 'reachable_time': 25020, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261058, 'error': None, 'target': 'ovnmeta-9ed74c79-93cc-44eb-a177-8d95f653faee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:35.478 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0df13a6f-6c83-4d46-b7d6-b35f2efdba66]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2d:8233'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 638791, 'tstamp': 638791}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261059, 'error': None, 'target': 'ovnmeta-9ed74c79-93cc-44eb-a177-8d95f653faee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:35.487 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8c19ad14-b34b-4394-b520-bc48de01d24b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ed74c79-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:82:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638791, 'reachable_time': 25020, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261060, 'error': None, 'target': 'ovnmeta-9ed74c79-93cc-44eb-a177-8d95f653faee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:35.501 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[594e3eea-43d8-443a-ab47-cc8c3ea60421]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:35 compute-2 nova_compute[226829]: 2026-01-31 07:55:35.538 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:35.541 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ba35782b-4dc6-40cd-b471-21aece3a44f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:35.543 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ed74c79-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:35.543 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:35.543 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ed74c79-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:35 compute-2 nova_compute[226829]: 2026-01-31 07:55:35.545 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:35 compute-2 kernel: tap9ed74c79-90: entered promiscuous mode
Jan 31 07:55:35 compute-2 NetworkManager[48999]: <info>  [1769846135.5471] manager: (tap9ed74c79-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/138)
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:35.550 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9ed74c79-90, col_values=(('external_ids', {'iface-id': '43b7e0cf-19aa-4930-9d9d-152c622b786c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:35 compute-2 nova_compute[226829]: 2026-01-31 07:55:35.551 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:35 compute-2 ovn_controller[133834]: 2026-01-31T07:55:35Z|00254|binding|INFO|Releasing lport 43b7e0cf-19aa-4930-9d9d-152c622b786c from this chassis (sb_readonly=0)
Jan 31 07:55:35 compute-2 nova_compute[226829]: 2026-01-31 07:55:35.552 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:35.553 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9ed74c79-93cc-44eb-a177-8d95f653faee.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9ed74c79-93cc-44eb-a177-8d95f653faee.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:35.553 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[937f7d08-326b-47f3-bbf3-23defc347b77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:35.554 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: global
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-9ed74c79-93cc-44eb-a177-8d95f653faee
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/9ed74c79-93cc-44eb-a177-8d95f653faee.pid.haproxy
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 9ed74c79-93cc-44eb-a177-8d95f653faee
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:55:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:35.555 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9ed74c79-93cc-44eb-a177-8d95f653faee', 'env', 'PROCESS_TAG=haproxy-9ed74c79-93cc-44eb-a177-8d95f653faee', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9ed74c79-93cc-44eb-a177-8d95f653faee.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:55:35 compute-2 nova_compute[226829]: 2026-01-31 07:55:35.557 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:35 compute-2 podman[261093]: 2026-01-31 07:55:35.889662516 +0000 UTC m=+0.048694534 container create fb6747696e909d643763c87e0aee91ccdd231a9c1a4e2c5bf4b408a3bf89ea05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ed74c79-93cc-44eb-a177-8d95f653faee, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 07:55:35 compute-2 systemd[1]: Started libpod-conmon-fb6747696e909d643763c87e0aee91ccdd231a9c1a4e2c5bf4b408a3bf89ea05.scope.
Jan 31 07:55:35 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:55:35 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30a78fb1ab95b434907c7503018baa5559dc46113472362d50cf88dc0fc696df/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:55:35 compute-2 podman[261093]: 2026-01-31 07:55:35.864254451 +0000 UTC m=+0.023286499 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:55:36 compute-2 podman[261093]: 2026-01-31 07:55:36.001606784 +0000 UTC m=+0.160638822 container init fb6747696e909d643763c87e0aee91ccdd231a9c1a4e2c5bf4b408a3bf89ea05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ed74c79-93cc-44eb-a177-8d95f653faee, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 07:55:36 compute-2 podman[261093]: 2026-01-31 07:55:36.009583949 +0000 UTC m=+0.168615977 container start fb6747696e909d643763c87e0aee91ccdd231a9c1a4e2c5bf4b408a3bf89ea05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ed74c79-93cc-44eb-a177-8d95f653faee, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 07:55:36 compute-2 neutron-haproxy-ovnmeta-9ed74c79-93cc-44eb-a177-8d95f653faee[261109]: [NOTICE]   (261113) : New worker (261115) forked
Jan 31 07:55:36 compute-2 neutron-haproxy-ovnmeta-9ed74c79-93cc-44eb-a177-8d95f653faee[261109]: [NOTICE]   (261113) : Loading success.
Jan 31 07:55:36 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:36.058 143841 INFO neutron.agent.ovn.metadata.agent [-] Port bf5f6a49-45dc-420a-8c08-bbf112a81186 in datapath adb6998e-030b-47a7-b80a-80015d534006 unbound from our chassis
Jan 31 07:55:36 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:36.061 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network adb6998e-030b-47a7-b80a-80015d534006
Jan 31 07:55:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:36.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:36 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:36.071 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[172fd0ba-e886-465b-994c-8078a0c72e11]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:36 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:36.090 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[e71a2cea-2a87-445e-ae27-457d6cbba914]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:36 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:36.094 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[59135f79-14cd-4ad6-baf6-f18511712557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:36 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:36.114 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[96f7933e-fb84-44c0-83f6-8c90f7f3903a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:36 compute-2 nova_compute[226829]: 2026-01-31 07:55:36.118 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Updating instance_info_cache with network_info: [{"id": "7b8f6af4-25f0-4345-98f9-b6f4345e7a4c", "address": "fa:16:3e:19:0e:2f", "network": {"id": "45bd414f-24de-4143-b7e8-6a584f757e03", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1647349453-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc07928b022430ba8bcc450bbb5c7f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b8f6af4-25", "ovs_interfaceid": "7b8f6af4-25f0-4345-98f9-b6f4345e7a4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:55:36 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:36.130 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3358c0fd-ee30-4c96-9b3b-3ef98b8aa849]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapadb6998e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:ec:4b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 5, 'tx_packets': 5, 'rx_bytes': 442, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 5, 'tx_packets': 5, 'rx_bytes': 442, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638686, 'reachable_time': 29333, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 5, 'inoctets': 372, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 5, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 372, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 5, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261129, 'error': None, 'target': 'ovnmeta-adb6998e-030b-47a7-b80a-80015d534006', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:36 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:36.143 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[93d7af4e-953c-45f1-bd99-bf37913f262b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapadb6998e-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 638694, 'tstamp': 638694}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261130, 'error': None, 'target': 'ovnmeta-adb6998e-030b-47a7-b80a-80015d534006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tapadb6998e-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 638696, 'tstamp': 638696}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261130, 'error': None, 'target': 'ovnmeta-adb6998e-030b-47a7-b80a-80015d534006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:36 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:36.146 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapadb6998e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:36 compute-2 nova_compute[226829]: 2026-01-31 07:55:36.148 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:36 compute-2 nova_compute[226829]: 2026-01-31 07:55:36.150 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:36 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:36.150 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapadb6998e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:36 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:36.151 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:55:36 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:36.151 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapadb6998e-00, col_values=(('external_ids', {'iface-id': 'a8499b5e-8156-4438-a949-1a44ac18422e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:36 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:36.152 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:55:36 compute-2 nova_compute[226829]: 2026-01-31 07:55:36.153 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-a6d52544-dac0-441a-a99f-b0d23283f733" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:55:36 compute-2 nova_compute[226829]: 2026-01-31 07:55:36.153 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 07:55:36 compute-2 nova_compute[226829]: 2026-01-31 07:55:36.154 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:55:36 compute-2 nova_compute[226829]: 2026-01-31 07:55:36.155 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:55:36 compute-2 nova_compute[226829]: 2026-01-31 07:55:36.155 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:55:36 compute-2 ceph-mon[77282]: pgmap v1750: 305 pgs: 305 active+clean; 293 MiB data, 806 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 186 op/s
Jan 31 07:55:36 compute-2 nova_compute[226829]: 2026-01-31 07:55:36.252 226833 DEBUG nova.compute.manager [req-dfc413d8-d88e-4c57-9150-b883db7ae1e5 req-867844a0-e3c2-497f-b1e1-44b362deb348 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Received event network-vif-plugged-9f745c59-fd7c-4cd5-9eda-e439c116fbd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:36 compute-2 nova_compute[226829]: 2026-01-31 07:55:36.253 226833 DEBUG oslo_concurrency.lockutils [req-dfc413d8-d88e-4c57-9150-b883db7ae1e5 req-867844a0-e3c2-497f-b1e1-44b362deb348 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:36 compute-2 nova_compute[226829]: 2026-01-31 07:55:36.253 226833 DEBUG oslo_concurrency.lockutils [req-dfc413d8-d88e-4c57-9150-b883db7ae1e5 req-867844a0-e3c2-497f-b1e1-44b362deb348 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:36 compute-2 nova_compute[226829]: 2026-01-31 07:55:36.253 226833 DEBUG oslo_concurrency.lockutils [req-dfc413d8-d88e-4c57-9150-b883db7ae1e5 req-867844a0-e3c2-497f-b1e1-44b362deb348 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:36 compute-2 nova_compute[226829]: 2026-01-31 07:55:36.253 226833 DEBUG nova.compute.manager [req-dfc413d8-d88e-4c57-9150-b883db7ae1e5 req-867844a0-e3c2-497f-b1e1-44b362deb348 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Processing event network-vif-plugged-9f745c59-fd7c-4cd5-9eda-e439c116fbd8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:55:36 compute-2 nova_compute[226829]: 2026-01-31 07:55:36.324 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:37.138 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:55:37 compute-2 nova_compute[226829]: 2026-01-31 07:55:37.138 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:37.140 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:55:37 compute-2 sudo[261132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:55:37 compute-2 sudo[261132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:55:37 compute-2 sudo[261132]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:37.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:37 compute-2 sudo[261157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:55:37 compute-2 sudo[261157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:55:37 compute-2 sudo[261157]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:38.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:38 compute-2 ceph-mon[77282]: pgmap v1751: 305 pgs: 305 active+clean; 292 MiB data, 804 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.0 MiB/s wr, 214 op/s
Jan 31 07:55:38 compute-2 nova_compute[226829]: 2026-01-31 07:55:38.613 226833 DEBUG nova.compute.manager [req-e04192da-0e2e-465d-b6f1-fe2d8da085a6 req-07980b2b-b4a8-4b39-a2fe-41e2010fd6b3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Received event network-vif-plugged-bf5f6a49-45dc-420a-8c08-bbf112a81186 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:38 compute-2 nova_compute[226829]: 2026-01-31 07:55:38.614 226833 DEBUG oslo_concurrency.lockutils [req-e04192da-0e2e-465d-b6f1-fe2d8da085a6 req-07980b2b-b4a8-4b39-a2fe-41e2010fd6b3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:38 compute-2 nova_compute[226829]: 2026-01-31 07:55:38.614 226833 DEBUG oslo_concurrency.lockutils [req-e04192da-0e2e-465d-b6f1-fe2d8da085a6 req-07980b2b-b4a8-4b39-a2fe-41e2010fd6b3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:38 compute-2 nova_compute[226829]: 2026-01-31 07:55:38.614 226833 DEBUG oslo_concurrency.lockutils [req-e04192da-0e2e-465d-b6f1-fe2d8da085a6 req-07980b2b-b4a8-4b39-a2fe-41e2010fd6b3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:38 compute-2 nova_compute[226829]: 2026-01-31 07:55:38.614 226833 DEBUG nova.compute.manager [req-e04192da-0e2e-465d-b6f1-fe2d8da085a6 req-07980b2b-b4a8-4b39-a2fe-41e2010fd6b3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Processing event network-vif-plugged-bf5f6a49-45dc-420a-8c08-bbf112a81186 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:55:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:55:39 compute-2 podman[261182]: 2026-01-31 07:55:39.162714514 +0000 UTC m=+0.049903626 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.269 226833 DEBUG nova.compute.manager [req-6df84bb2-37e0-4641-a448-9a87f3deb565 req-21aee1cc-be94-4262-95c2-d92eee62c601 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Received event network-vif-plugged-9f745c59-fd7c-4cd5-9eda-e439c116fbd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.269 226833 DEBUG oslo_concurrency.lockutils [req-6df84bb2-37e0-4641-a448-9a87f3deb565 req-21aee1cc-be94-4262-95c2-d92eee62c601 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.269 226833 DEBUG oslo_concurrency.lockutils [req-6df84bb2-37e0-4641-a448-9a87f3deb565 req-21aee1cc-be94-4262-95c2-d92eee62c601 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.269 226833 DEBUG oslo_concurrency.lockutils [req-6df84bb2-37e0-4641-a448-9a87f3deb565 req-21aee1cc-be94-4262-95c2-d92eee62c601 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.270 226833 DEBUG nova.compute.manager [req-6df84bb2-37e0-4641-a448-9a87f3deb565 req-21aee1cc-be94-4262-95c2-d92eee62c601 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] No event matching network-vif-plugged-9f745c59-fd7c-4cd5-9eda-e439c116fbd8 in dict_keys([('network-vif-plugged', '17255b28-10be-4d25-bba8-0292dfa7ad68')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.270 226833 WARNING nova.compute.manager [req-6df84bb2-37e0-4641-a448-9a87f3deb565 req-21aee1cc-be94-4262-95c2-d92eee62c601 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Received unexpected event network-vif-plugged-9f745c59-fd7c-4cd5-9eda-e439c116fbd8 for instance with vm_state building and task_state spawning.
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.270 226833 DEBUG nova.compute.manager [req-6df84bb2-37e0-4641-a448-9a87f3deb565 req-21aee1cc-be94-4262-95c2-d92eee62c601 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Received event network-vif-plugged-17255b28-10be-4d25-bba8-0292dfa7ad68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.270 226833 DEBUG oslo_concurrency.lockutils [req-6df84bb2-37e0-4641-a448-9a87f3deb565 req-21aee1cc-be94-4262-95c2-d92eee62c601 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.271 226833 DEBUG oslo_concurrency.lockutils [req-6df84bb2-37e0-4641-a448-9a87f3deb565 req-21aee1cc-be94-4262-95c2-d92eee62c601 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.271 226833 DEBUG oslo_concurrency.lockutils [req-6df84bb2-37e0-4641-a448-9a87f3deb565 req-21aee1cc-be94-4262-95c2-d92eee62c601 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.271 226833 DEBUG nova.compute.manager [req-6df84bb2-37e0-4641-a448-9a87f3deb565 req-21aee1cc-be94-4262-95c2-d92eee62c601 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Processing event network-vif-plugged-17255b28-10be-4d25-bba8-0292dfa7ad68 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.271 226833 DEBUG nova.compute.manager [req-6df84bb2-37e0-4641-a448-9a87f3deb565 req-21aee1cc-be94-4262-95c2-d92eee62c601 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Received event network-vif-plugged-17255b28-10be-4d25-bba8-0292dfa7ad68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.271 226833 DEBUG oslo_concurrency.lockutils [req-6df84bb2-37e0-4641-a448-9a87f3deb565 req-21aee1cc-be94-4262-95c2-d92eee62c601 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.271 226833 DEBUG oslo_concurrency.lockutils [req-6df84bb2-37e0-4641-a448-9a87f3deb565 req-21aee1cc-be94-4262-95c2-d92eee62c601 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.271 226833 DEBUG oslo_concurrency.lockutils [req-6df84bb2-37e0-4641-a448-9a87f3deb565 req-21aee1cc-be94-4262-95c2-d92eee62c601 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.272 226833 DEBUG nova.compute.manager [req-6df84bb2-37e0-4641-a448-9a87f3deb565 req-21aee1cc-be94-4262-95c2-d92eee62c601 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] No waiting events found dispatching network-vif-plugged-17255b28-10be-4d25-bba8-0292dfa7ad68 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.272 226833 WARNING nova.compute.manager [req-6df84bb2-37e0-4641-a448-9a87f3deb565 req-21aee1cc-be94-4262-95c2-d92eee62c601 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Received unexpected event network-vif-plugged-17255b28-10be-4d25-bba8-0292dfa7ad68 for instance with vm_state building and task_state spawning.
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.272 226833 DEBUG nova.compute.manager [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Instance event wait completed in 3 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.279 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846139.2788723, ebe88ddf-b955-4cc2-9685-826d10d55955 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.280 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] VM Resumed (Lifecycle Event)
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.286 226833 DEBUG nova.virt.libvirt.driver [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.290 226833 INFO nova.virt.libvirt.driver [-] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Instance spawned successfully.
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.291 226833 DEBUG nova.virt.libvirt.driver [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.321 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.324 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.342 226833 DEBUG nova.virt.libvirt.driver [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.343 226833 DEBUG nova.virt.libvirt.driver [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.343 226833 DEBUG nova.virt.libvirt.driver [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.344 226833 DEBUG nova.virt.libvirt.driver [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.344 226833 DEBUG nova.virt.libvirt.driver [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.345 226833 DEBUG nova.virt.libvirt.driver [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.361 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:55:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3778543200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:55:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:39.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.558 226833 INFO nova.compute.manager [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Took 47.11 seconds to spawn the instance on the hypervisor.
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.559 226833 DEBUG nova.compute.manager [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:55:39 compute-2 nova_compute[226829]: 2026-01-31 07:55:39.954 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:40 compute-2 nova_compute[226829]: 2026-01-31 07:55:40.000 226833 INFO nova.compute.manager [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Took 48.65 seconds to build instance.
Jan 31 07:55:40 compute-2 nova_compute[226829]: 2026-01-31 07:55:40.058 226833 DEBUG oslo_concurrency.lockutils [None req-30cf610e-ec30-43af-ae58-d40ca797d2d9 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 48.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:55:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:40.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:55:40 compute-2 ceph-mon[77282]: pgmap v1752: 305 pgs: 305 active+clean; 293 MiB data, 806 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.9 MiB/s wr, 236 op/s
Jan 31 07:55:40 compute-2 nova_compute[226829]: 2026-01-31 07:55:40.892 226833 DEBUG nova.compute.manager [req-66ec32bd-c086-47ed-a076-24e52df6e958 req-7aab3918-102a-4133-a59a-f3d13ba5c533 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Received event network-vif-plugged-bf5f6a49-45dc-420a-8c08-bbf112a81186 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:40 compute-2 nova_compute[226829]: 2026-01-31 07:55:40.893 226833 DEBUG oslo_concurrency.lockutils [req-66ec32bd-c086-47ed-a076-24e52df6e958 req-7aab3918-102a-4133-a59a-f3d13ba5c533 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:40 compute-2 nova_compute[226829]: 2026-01-31 07:55:40.894 226833 DEBUG oslo_concurrency.lockutils [req-66ec32bd-c086-47ed-a076-24e52df6e958 req-7aab3918-102a-4133-a59a-f3d13ba5c533 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:40 compute-2 nova_compute[226829]: 2026-01-31 07:55:40.894 226833 DEBUG oslo_concurrency.lockutils [req-66ec32bd-c086-47ed-a076-24e52df6e958 req-7aab3918-102a-4133-a59a-f3d13ba5c533 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:40 compute-2 nova_compute[226829]: 2026-01-31 07:55:40.894 226833 DEBUG nova.compute.manager [req-66ec32bd-c086-47ed-a076-24e52df6e958 req-7aab3918-102a-4133-a59a-f3d13ba5c533 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] No waiting events found dispatching network-vif-plugged-bf5f6a49-45dc-420a-8c08-bbf112a81186 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:55:40 compute-2 nova_compute[226829]: 2026-01-31 07:55:40.894 226833 WARNING nova.compute.manager [req-66ec32bd-c086-47ed-a076-24e52df6e958 req-7aab3918-102a-4133-a59a-f3d13ba5c533 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Received unexpected event network-vif-plugged-bf5f6a49-45dc-420a-8c08-bbf112a81186 for instance with vm_state active and task_state None.
Jan 31 07:55:41 compute-2 nova_compute[226829]: 2026-01-31 07:55:41.326 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:55:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:41.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:55:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:55:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:42.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:55:42 compute-2 ceph-mon[77282]: pgmap v1753: 305 pgs: 305 active+clean; 293 MiB data, 806 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 172 op/s
Jan 31 07:55:42 compute-2 nova_compute[226829]: 2026-01-31 07:55:42.781 226833 DEBUG oslo_concurrency.lockutils [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Acquiring lock "ebe88ddf-b955-4cc2-9685-826d10d55955" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:42 compute-2 nova_compute[226829]: 2026-01-31 07:55:42.782 226833 DEBUG oslo_concurrency.lockutils [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:42 compute-2 nova_compute[226829]: 2026-01-31 07:55:42.783 226833 DEBUG oslo_concurrency.lockutils [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Acquiring lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:42 compute-2 nova_compute[226829]: 2026-01-31 07:55:42.783 226833 DEBUG oslo_concurrency.lockutils [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:42 compute-2 nova_compute[226829]: 2026-01-31 07:55:42.783 226833 DEBUG oslo_concurrency.lockutils [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:42 compute-2 nova_compute[226829]: 2026-01-31 07:55:42.784 226833 INFO nova.compute.manager [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Terminating instance
Jan 31 07:55:42 compute-2 nova_compute[226829]: 2026-01-31 07:55:42.785 226833 DEBUG nova.compute.manager [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:55:42 compute-2 kernel: tap17255b28-10 (unregistering): left promiscuous mode
Jan 31 07:55:42 compute-2 NetworkManager[48999]: <info>  [1769846142.8235] device (tap17255b28-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:55:42 compute-2 nova_compute[226829]: 2026-01-31 07:55:42.831 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:42 compute-2 ovn_controller[133834]: 2026-01-31T07:55:42Z|00255|binding|INFO|Releasing lport 17255b28-10be-4d25-bba8-0292dfa7ad68 from this chassis (sb_readonly=0)
Jan 31 07:55:42 compute-2 ovn_controller[133834]: 2026-01-31T07:55:42Z|00256|binding|INFO|Setting lport 17255b28-10be-4d25-bba8-0292dfa7ad68 down in Southbound
Jan 31 07:55:42 compute-2 ovn_controller[133834]: 2026-01-31T07:55:42Z|00257|binding|INFO|Removing iface tap17255b28-10 ovn-installed in OVS
Jan 31 07:55:42 compute-2 nova_compute[226829]: 2026-01-31 07:55:42.833 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:42 compute-2 nova_compute[226829]: 2026-01-31 07:55:42.841 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:42 compute-2 kernel: tap9f745c59-fd (unregistering): left promiscuous mode
Jan 31 07:55:42 compute-2 NetworkManager[48999]: <info>  [1769846142.8567] device (tap9f745c59-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:55:42 compute-2 nova_compute[226829]: 2026-01-31 07:55:42.864 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:42 compute-2 ovn_controller[133834]: 2026-01-31T07:55:42Z|00258|binding|INFO|Releasing lport 9f745c59-fd7c-4cd5-9eda-e439c116fbd8 from this chassis (sb_readonly=1)
Jan 31 07:55:42 compute-2 ovn_controller[133834]: 2026-01-31T07:55:42Z|00259|binding|INFO|Removing iface tap9f745c59-fd ovn-installed in OVS
Jan 31 07:55:42 compute-2 ovn_controller[133834]: 2026-01-31T07:55:42Z|00260|if_status|INFO|Dropped 8 log messages in last 108 seconds (most recently, 108 seconds ago) due to excessive rate
Jan 31 07:55:42 compute-2 ovn_controller[133834]: 2026-01-31T07:55:42Z|00261|if_status|INFO|Not setting lport 9f745c59-fd7c-4cd5-9eda-e439c116fbd8 down as sb is readonly
Jan 31 07:55:42 compute-2 nova_compute[226829]: 2026-01-31 07:55:42.867 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:42 compute-2 nova_compute[226829]: 2026-01-31 07:55:42.872 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:42 compute-2 kernel: tapbf5f6a49-45 (unregistering): left promiscuous mode
Jan 31 07:55:42 compute-2 ovn_controller[133834]: 2026-01-31T07:55:42Z|00262|binding|INFO|Setting lport 9f745c59-fd7c-4cd5-9eda-e439c116fbd8 down in Southbound
Jan 31 07:55:42 compute-2 NetworkManager[48999]: <info>  [1769846142.8884] device (tapbf5f6a49-45): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:55:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:42.887 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:35:4b 10.100.0.150'], port_security=['fa:16:3e:df:35:4b 10.100.0.150'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.150/24', 'neutron:device_id': 'ebe88ddf-b955-4cc2-9685-826d10d55955', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-adb6998e-030b-47a7-b80a-80015d534006', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56ce08a86486427fbebbfbd075cdb404', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dc2cd802-0b4d-4cd4-bf24-edbd81854edb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87a9ae64-e09e-4ddd-9c3f-2e68f11949d7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=17255b28-10be-4d25-bba8-0292dfa7ad68) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:55:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:42.889 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 17255b28-10be-4d25-bba8-0292dfa7ad68 in datapath adb6998e-030b-47a7-b80a-80015d534006 unbound from our chassis
Jan 31 07:55:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:42.891 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network adb6998e-030b-47a7-b80a-80015d534006
Jan 31 07:55:42 compute-2 ovn_controller[133834]: 2026-01-31T07:55:42Z|00263|binding|INFO|Releasing lport bf5f6a49-45dc-420a-8c08-bbf112a81186 from this chassis (sb_readonly=1)
Jan 31 07:55:42 compute-2 ovn_controller[133834]: 2026-01-31T07:55:42Z|00264|binding|INFO|Removing iface tapbf5f6a49-45 ovn-installed in OVS
Jan 31 07:55:42 compute-2 nova_compute[226829]: 2026-01-31 07:55:42.894 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:42 compute-2 nova_compute[226829]: 2026-01-31 07:55:42.895 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:42 compute-2 nova_compute[226829]: 2026-01-31 07:55:42.907 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:42.909 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[728d1d6f-960a-4370-a84b-2954c8b261c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:42 compute-2 ovn_controller[133834]: 2026-01-31T07:55:42Z|00265|binding|INFO|Setting lport bf5f6a49-45dc-420a-8c08-bbf112a81186 down in Southbound
Jan 31 07:55:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:42.929 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:75:fd 10.100.1.16'], port_security=['fa:16:3e:be:75:fd 10.100.1.16'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.16/24', 'neutron:device_id': 'ebe88ddf-b955-4cc2-9685-826d10d55955', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ed74c79-93cc-44eb-a177-8d95f653faee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56ce08a86486427fbebbfbd075cdb404', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dc2cd802-0b4d-4cd4-bf24-edbd81854edb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe833736-a295-4269-b9a9-4bd246e0b791, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=9f745c59-fd7c-4cd5-9eda-e439c116fbd8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:55:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:42.932 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[5b0395aa-a9d8-4750-942b-c0aa33258f2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:42.935 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[c20e56ab-feda-4dad-ab0e-1612e1e4d7a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:42 compute-2 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000004e.scope: Deactivated successfully.
Jan 31 07:55:42 compute-2 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000004e.scope: Consumed 4.210s CPU time.
Jan 31 07:55:42 compute-2 systemd-machined[195142]: Machine qemu-33-instance-0000004e terminated.
Jan 31 07:55:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:42.955 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[e08694aa-11c0-4aea-8b3a-f2fc6c2233d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:42.969 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[83bd3aa5-6e4f-4990-858c-ce5ea4fc3c8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapadb6998e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:ec:4b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638686, 'reachable_time': 29333, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261227, 'error': None, 'target': 'ovnmeta-adb6998e-030b-47a7-b80a-80015d534006', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:42.981 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:15:34 10.100.0.218'], port_security=['fa:16:3e:93:15:34 10.100.0.218'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.218/24', 'neutron:device_id': 'ebe88ddf-b955-4cc2-9685-826d10d55955', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-adb6998e-030b-47a7-b80a-80015d534006', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56ce08a86486427fbebbfbd075cdb404', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dc2cd802-0b4d-4cd4-bf24-edbd81854edb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87a9ae64-e09e-4ddd-9c3f-2e68f11949d7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=bf5f6a49-45dc-420a-8c08-bbf112a81186) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:55:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:42.984 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f31a2f23-5254-40ca-bee3-9bed486db538]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapadb6998e-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 638694, 'tstamp': 638694}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261228, 'error': None, 'target': 'ovnmeta-adb6998e-030b-47a7-b80a-80015d534006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tapadb6998e-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 638696, 'tstamp': 638696}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261228, 'error': None, 'target': 'ovnmeta-adb6998e-030b-47a7-b80a-80015d534006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:42.985 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapadb6998e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:42 compute-2 nova_compute[226829]: 2026-01-31 07:55:42.986 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:42 compute-2 nova_compute[226829]: 2026-01-31 07:55:42.993 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:42.994 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapadb6998e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:42.995 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:55:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:42.995 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapadb6998e-00, col_values=(('external_ids', {'iface-id': 'a8499b5e-8156-4438-a949-1a44ac18422e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:42.996 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:55:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:42.997 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 9f745c59-fd7c-4cd5-9eda-e439c116fbd8 in datapath 9ed74c79-93cc-44eb-a177-8d95f653faee unbound from our chassis
Jan 31 07:55:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:42.999 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9ed74c79-93cc-44eb-a177-8d95f653faee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:55:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:42.999 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ea4c7632-f90a-4f3b-9b77-0601faf48bd5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:43.000 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9ed74c79-93cc-44eb-a177-8d95f653faee namespace which is not needed anymore
Jan 31 07:55:43 compute-2 NetworkManager[48999]: <info>  [1769846143.0129] manager: (tap9f745c59-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/139)
Jan 31 07:55:43 compute-2 NetworkManager[48999]: <info>  [1769846143.0225] manager: (tapbf5f6a49-45): new Tun device (/org/freedesktop/NetworkManager/Devices/140)
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.027 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.034 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.037 226833 INFO nova.virt.libvirt.driver [-] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Instance destroyed successfully.
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.038 226833 DEBUG nova.objects.instance [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lazy-loading 'resources' on Instance uuid ebe88ddf-b955-4cc2-9685-826d10d55955 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.069 226833 DEBUG nova.virt.libvirt.vif [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:54:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-17927120',display_name='tempest-ServersTestMultiNic-server-17927120',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-17927120',id=78,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:55:39Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56ce08a86486427fbebbfbd075cdb404',ramdisk_id='',reservation_id='r-sxjoqsag',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',im
age_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-539084191',owner_user_name='tempest-ServersTestMultiNic-539084191-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:55:39Z,user_data=None,user_id='56eecf4373334b18a454186e0c54e924',uuid=ebe88ddf-b955-4cc2-9685-826d10d55955,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "17255b28-10be-4d25-bba8-0292dfa7ad68", "address": "fa:16:3e:df:35:4b", "network": {"id": "adb6998e-030b-47a7-b80a-80015d534006", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1792056346", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.150", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17255b28-10", "ovs_interfaceid": "17255b28-10be-4d25-bba8-0292dfa7ad68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.070 226833 DEBUG nova.network.os_vif_util [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converting VIF {"id": "17255b28-10be-4d25-bba8-0292dfa7ad68", "address": "fa:16:3e:df:35:4b", "network": {"id": "adb6998e-030b-47a7-b80a-80015d534006", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1792056346", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.150", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17255b28-10", "ovs_interfaceid": "17255b28-10be-4d25-bba8-0292dfa7ad68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.071 226833 DEBUG nova.network.os_vif_util [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:35:4b,bridge_name='br-int',has_traffic_filtering=True,id=17255b28-10be-4d25-bba8-0292dfa7ad68,network=Network(adb6998e-030b-47a7-b80a-80015d534006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17255b28-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.072 226833 DEBUG os_vif [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:35:4b,bridge_name='br-int',has_traffic_filtering=True,id=17255b28-10be-4d25-bba8-0292dfa7ad68,network=Network(adb6998e-030b-47a7-b80a-80015d534006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17255b28-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.074 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.075 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17255b28-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.076 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.079 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.082 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.088 226833 INFO os_vif [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:35:4b,bridge_name='br-int',has_traffic_filtering=True,id=17255b28-10be-4d25-bba8-0292dfa7ad68,network=Network(adb6998e-030b-47a7-b80a-80015d534006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17255b28-10')
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.089 226833 DEBUG nova.virt.libvirt.vif [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:54:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-17927120',display_name='tempest-ServersTestMultiNic-server-17927120',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-17927120',id=78,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:55:39Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56ce08a86486427fbebbfbd075cdb404',ramdisk_id='',reservation_id='r-sxjoqsag',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',im
age_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-539084191',owner_user_name='tempest-ServersTestMultiNic-539084191-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:55:39Z,user_data=None,user_id='56eecf4373334b18a454186e0c54e924',uuid=ebe88ddf-b955-4cc2-9685-826d10d55955,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9f745c59-fd7c-4cd5-9eda-e439c116fbd8", "address": "fa:16:3e:be:75:fd", "network": {"id": "9ed74c79-93cc-44eb-a177-8d95f653faee", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1808837000", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.16", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f745c59-fd", "ovs_interfaceid": "9f745c59-fd7c-4cd5-9eda-e439c116fbd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.089 226833 DEBUG nova.network.os_vif_util [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converting VIF {"id": "9f745c59-fd7c-4cd5-9eda-e439c116fbd8", "address": "fa:16:3e:be:75:fd", "network": {"id": "9ed74c79-93cc-44eb-a177-8d95f653faee", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1808837000", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.16", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f745c59-fd", "ovs_interfaceid": "9f745c59-fd7c-4cd5-9eda-e439c116fbd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.090 226833 DEBUG nova.network.os_vif_util [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:75:fd,bridge_name='br-int',has_traffic_filtering=True,id=9f745c59-fd7c-4cd5-9eda-e439c116fbd8,network=Network(9ed74c79-93cc-44eb-a177-8d95f653faee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f745c59-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.091 226833 DEBUG os_vif [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:75:fd,bridge_name='br-int',has_traffic_filtering=True,id=9f745c59-fd7c-4cd5-9eda-e439c116fbd8,network=Network(9ed74c79-93cc-44eb-a177-8d95f653faee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f745c59-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.092 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.092 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f745c59-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.093 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.096 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.099 226833 INFO os_vif [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:75:fd,bridge_name='br-int',has_traffic_filtering=True,id=9f745c59-fd7c-4cd5-9eda-e439c116fbd8,network=Network(9ed74c79-93cc-44eb-a177-8d95f653faee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f745c59-fd')
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.100 226833 DEBUG nova.virt.libvirt.vif [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:54:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-17927120',display_name='tempest-ServersTestMultiNic-server-17927120',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-17927120',id=78,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:55:39Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56ce08a86486427fbebbfbd075cdb404',ramdisk_id='',reservation_id='r-sxjoqsag',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',im
age_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-539084191',owner_user_name='tempest-ServersTestMultiNic-539084191-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:55:39Z,user_data=None,user_id='56eecf4373334b18a454186e0c54e924',uuid=ebe88ddf-b955-4cc2-9685-826d10d55955,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf5f6a49-45dc-420a-8c08-bbf112a81186", "address": "fa:16:3e:93:15:34", "network": {"id": "adb6998e-030b-47a7-b80a-80015d534006", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1792056346", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5f6a49-45", "ovs_interfaceid": "bf5f6a49-45dc-420a-8c08-bbf112a81186", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.101 226833 DEBUG nova.network.os_vif_util [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converting VIF {"id": "bf5f6a49-45dc-420a-8c08-bbf112a81186", "address": "fa:16:3e:93:15:34", "network": {"id": "adb6998e-030b-47a7-b80a-80015d534006", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1792056346", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5f6a49-45", "ovs_interfaceid": "bf5f6a49-45dc-420a-8c08-bbf112a81186", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.101 226833 DEBUG nova.network.os_vif_util [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:15:34,bridge_name='br-int',has_traffic_filtering=True,id=bf5f6a49-45dc-420a-8c08-bbf112a81186,network=Network(adb6998e-030b-47a7-b80a-80015d534006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf5f6a49-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.102 226833 DEBUG os_vif [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:15:34,bridge_name='br-int',has_traffic_filtering=True,id=bf5f6a49-45dc-420a-8c08-bbf112a81186,network=Network(adb6998e-030b-47a7-b80a-80015d534006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf5f6a49-45') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.103 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:43 compute-2 neutron-haproxy-ovnmeta-9ed74c79-93cc-44eb-a177-8d95f653faee[261109]: [NOTICE]   (261113) : haproxy version is 2.8.14-c23fe91
Jan 31 07:55:43 compute-2 neutron-haproxy-ovnmeta-9ed74c79-93cc-44eb-a177-8d95f653faee[261109]: [NOTICE]   (261113) : path to executable is /usr/sbin/haproxy
Jan 31 07:55:43 compute-2 neutron-haproxy-ovnmeta-9ed74c79-93cc-44eb-a177-8d95f653faee[261109]: [WARNING]  (261113) : Exiting Master process...
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.103 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf5f6a49-45, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:43 compute-2 neutron-haproxy-ovnmeta-9ed74c79-93cc-44eb-a177-8d95f653faee[261109]: [ALERT]    (261113) : Current worker (261115) exited with code 143 (Terminated)
Jan 31 07:55:43 compute-2 neutron-haproxy-ovnmeta-9ed74c79-93cc-44eb-a177-8d95f653faee[261109]: [WARNING]  (261113) : All workers exited. Exiting... (0)
Jan 31 07:55:43 compute-2 systemd[1]: libpod-fb6747696e909d643763c87e0aee91ccdd231a9c1a4e2c5bf4b408a3bf89ea05.scope: Deactivated successfully.
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.107 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.109 226833 INFO os_vif [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:15:34,bridge_name='br-int',has_traffic_filtering=True,id=bf5f6a49-45dc-420a-8c08-bbf112a81186,network=Network(adb6998e-030b-47a7-b80a-80015d534006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf5f6a49-45')
Jan 31 07:55:43 compute-2 podman[261285]: 2026-01-31 07:55:43.114550973 +0000 UTC m=+0.045462897 container died fb6747696e909d643763c87e0aee91ccdd231a9c1a4e2c5bf4b408a3bf89ea05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ed74c79-93cc-44eb-a177-8d95f653faee, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 07:55:43 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fb6747696e909d643763c87e0aee91ccdd231a9c1a4e2c5bf4b408a3bf89ea05-userdata-shm.mount: Deactivated successfully.
Jan 31 07:55:43 compute-2 systemd[1]: var-lib-containers-storage-overlay-30a78fb1ab95b434907c7503018baa5559dc46113472362d50cf88dc0fc696df-merged.mount: Deactivated successfully.
Jan 31 07:55:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:43.141 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:43 compute-2 podman[261285]: 2026-01-31 07:55:43.151311144 +0000 UTC m=+0.082223068 container cleanup fb6747696e909d643763c87e0aee91ccdd231a9c1a4e2c5bf4b408a3bf89ea05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ed74c79-93cc-44eb-a177-8d95f653faee, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 07:55:43 compute-2 systemd[1]: libpod-conmon-fb6747696e909d643763c87e0aee91ccdd231a9c1a4e2c5bf4b408a3bf89ea05.scope: Deactivated successfully.
Jan 31 07:55:43 compute-2 podman[261333]: 2026-01-31 07:55:43.207671764 +0000 UTC m=+0.040053661 container remove fb6747696e909d643763c87e0aee91ccdd231a9c1a4e2c5bf4b408a3bf89ea05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ed74c79-93cc-44eb-a177-8d95f653faee, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:55:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:43.211 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[bc3b1d79-065e-408c-9a0e-21dbd0e15026]: (4, ('Sat Jan 31 07:55:43 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9ed74c79-93cc-44eb-a177-8d95f653faee (fb6747696e909d643763c87e0aee91ccdd231a9c1a4e2c5bf4b408a3bf89ea05)\nfb6747696e909d643763c87e0aee91ccdd231a9c1a4e2c5bf4b408a3bf89ea05\nSat Jan 31 07:55:43 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9ed74c79-93cc-44eb-a177-8d95f653faee (fb6747696e909d643763c87e0aee91ccdd231a9c1a4e2c5bf4b408a3bf89ea05)\nfb6747696e909d643763c87e0aee91ccdd231a9c1a4e2c5bf4b408a3bf89ea05\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:43.213 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a7bb42a7-b139-4715-ad4d-217574c6cb53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:43.215 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ed74c79-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.216 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:43 compute-2 kernel: tap9ed74c79-90: left promiscuous mode
Jan 31 07:55:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:43.221 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8428049a-d570-43bc-8d16-974ae8e5cb01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.226 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:43.230 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9b8e3929-7599-4f1e-912d-8c61fb2dabc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:43.231 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8e41b817-7a91-479c-9410-f6fa4d51a9d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:43.241 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[fbdba0a1-b046-49a3-9a8e-994531f25f24]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638786, 'reachable_time': 16912, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261349, 'error': None, 'target': 'ovnmeta-9ed74c79-93cc-44eb-a177-8d95f653faee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:43 compute-2 systemd[1]: run-netns-ovnmeta\x2d9ed74c79\x2d93cc\x2d44eb\x2da177\x2d8d95f653faee.mount: Deactivated successfully.
Jan 31 07:55:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:43.244 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9ed74c79-93cc-44eb-a177-8d95f653faee deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:55:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:43.244 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[7de3a71c-db86-4f7b-9de9-cce49b7692c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:43.245 143841 INFO neutron.agent.ovn.metadata.agent [-] Port bf5f6a49-45dc-420a-8c08-bbf112a81186 in datapath adb6998e-030b-47a7-b80a-80015d534006 unbound from our chassis
Jan 31 07:55:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:43.247 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network adb6998e-030b-47a7-b80a-80015d534006, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:55:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:43.249 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1c7ec46a-e4ce-4e9a-9c2e-e53522ba7665]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:43.250 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-adb6998e-030b-47a7-b80a-80015d534006 namespace which is not needed anymore
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.284 226833 DEBUG nova.compute.manager [req-d5b4e580-ce8d-4a62-a1cf-12eaa58b9a94 req-ee842cfc-732a-4dce-b533-0f984f8d6f68 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Received event network-vif-unplugged-17255b28-10be-4d25-bba8-0292dfa7ad68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.285 226833 DEBUG oslo_concurrency.lockutils [req-d5b4e580-ce8d-4a62-a1cf-12eaa58b9a94 req-ee842cfc-732a-4dce-b533-0f984f8d6f68 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.286 226833 DEBUG oslo_concurrency.lockutils [req-d5b4e580-ce8d-4a62-a1cf-12eaa58b9a94 req-ee842cfc-732a-4dce-b533-0f984f8d6f68 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.287 226833 DEBUG oslo_concurrency.lockutils [req-d5b4e580-ce8d-4a62-a1cf-12eaa58b9a94 req-ee842cfc-732a-4dce-b533-0f984f8d6f68 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.287 226833 DEBUG nova.compute.manager [req-d5b4e580-ce8d-4a62-a1cf-12eaa58b9a94 req-ee842cfc-732a-4dce-b533-0f984f8d6f68 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] No waiting events found dispatching network-vif-unplugged-17255b28-10be-4d25-bba8-0292dfa7ad68 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.287 226833 DEBUG nova.compute.manager [req-d5b4e580-ce8d-4a62-a1cf-12eaa58b9a94 req-ee842cfc-732a-4dce-b533-0f984f8d6f68 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Received event network-vif-unplugged-17255b28-10be-4d25-bba8-0292dfa7ad68 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:55:43 compute-2 neutron-haproxy-ovnmeta-adb6998e-030b-47a7-b80a-80015d534006[261025]: [NOTICE]   (261036) : haproxy version is 2.8.14-c23fe91
Jan 31 07:55:43 compute-2 neutron-haproxy-ovnmeta-adb6998e-030b-47a7-b80a-80015d534006[261025]: [NOTICE]   (261036) : path to executable is /usr/sbin/haproxy
Jan 31 07:55:43 compute-2 neutron-haproxy-ovnmeta-adb6998e-030b-47a7-b80a-80015d534006[261025]: [WARNING]  (261036) : Exiting Master process...
Jan 31 07:55:43 compute-2 neutron-haproxy-ovnmeta-adb6998e-030b-47a7-b80a-80015d534006[261025]: [WARNING]  (261036) : Exiting Master process...
Jan 31 07:55:43 compute-2 neutron-haproxy-ovnmeta-adb6998e-030b-47a7-b80a-80015d534006[261025]: [ALERT]    (261036) : Current worker (261038) exited with code 143 (Terminated)
Jan 31 07:55:43 compute-2 neutron-haproxy-ovnmeta-adb6998e-030b-47a7-b80a-80015d534006[261025]: [WARNING]  (261036) : All workers exited. Exiting... (0)
Jan 31 07:55:43 compute-2 systemd[1]: libpod-b02c8018681ecff9be1aaa912a8dd688d7c89be901f52e4b17aad33c82f7690a.scope: Deactivated successfully.
Jan 31 07:55:43 compute-2 podman[261364]: 2026-01-31 07:55:43.349113887 +0000 UTC m=+0.038455398 container died b02c8018681ecff9be1aaa912a8dd688d7c89be901f52e4b17aad33c82f7690a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-adb6998e-030b-47a7-b80a-80015d534006, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 07:55:43 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b02c8018681ecff9be1aaa912a8dd688d7c89be901f52e4b17aad33c82f7690a-userdata-shm.mount: Deactivated successfully.
Jan 31 07:55:43 compute-2 systemd[1]: var-lib-containers-storage-overlay-28ecdea1172eb5e492bfb46565fa4c925c52d607e8c71932e09da989589501a9-merged.mount: Deactivated successfully.
Jan 31 07:55:43 compute-2 podman[261364]: 2026-01-31 07:55:43.378366586 +0000 UTC m=+0.067708097 container cleanup b02c8018681ecff9be1aaa912a8dd688d7c89be901f52e4b17aad33c82f7690a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-adb6998e-030b-47a7-b80a-80015d534006, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:55:43 compute-2 systemd[1]: libpod-conmon-b02c8018681ecff9be1aaa912a8dd688d7c89be901f52e4b17aad33c82f7690a.scope: Deactivated successfully.
Jan 31 07:55:43 compute-2 podman[261396]: 2026-01-31 07:55:43.443014319 +0000 UTC m=+0.043821252 container remove b02c8018681ecff9be1aaa912a8dd688d7c89be901f52e4b17aad33c82f7690a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-adb6998e-030b-47a7-b80a-80015d534006, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 07:55:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:43.446 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4a222201-2c9b-4b3a-bee4-288b70b2226a]: (4, ('Sat Jan 31 07:55:43 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-adb6998e-030b-47a7-b80a-80015d534006 (b02c8018681ecff9be1aaa912a8dd688d7c89be901f52e4b17aad33c82f7690a)\nb02c8018681ecff9be1aaa912a8dd688d7c89be901f52e4b17aad33c82f7690a\nSat Jan 31 07:55:43 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-adb6998e-030b-47a7-b80a-80015d534006 (b02c8018681ecff9be1aaa912a8dd688d7c89be901f52e4b17aad33c82f7690a)\nb02c8018681ecff9be1aaa912a8dd688d7c89be901f52e4b17aad33c82f7690a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:43.447 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7eeda3e9-74f0-458c-af25-eed9f934a885]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:43.448 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapadb6998e-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.450 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:43 compute-2 kernel: tapadb6998e-00: left promiscuous mode
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.454 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:43.458 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[26e5b0a6-6425-460c-967c-9758ef0377b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:43.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:43.486 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cba30117-8589-4692-9a93-88d7b1cd9783]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:43.487 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8812ab31-1c65-4f6b-be5d-414df0c8ba4d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:43.498 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ef52ebe1-68f3-4efb-a9ae-ec0952f3b0a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638680, 'reachable_time': 21106, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261411, 'error': None, 'target': 'ovnmeta-adb6998e-030b-47a7-b80a-80015d534006', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:43.500 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-adb6998e-030b-47a7-b80a-80015d534006 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:55:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:43.500 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[2aed092c-e685-447a-a698-1350caf3ec60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.555 226833 INFO nova.virt.libvirt.driver [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Deleting instance files /var/lib/nova/instances/ebe88ddf-b955-4cc2-9685-826d10d55955_del
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.556 226833 INFO nova.virt.libvirt.driver [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Deletion of /var/lib/nova/instances/ebe88ddf-b955-4cc2-9685-826d10d55955_del complete
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.565 226833 DEBUG nova.compute.manager [req-512e4573-064e-4f70-99c6-a0aa33ef1196 req-1e063dd8-40a8-4a88-b8b2-42ac7bd8eed5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Received event network-vif-unplugged-bf5f6a49-45dc-420a-8c08-bbf112a81186 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.568 226833 DEBUG oslo_concurrency.lockutils [req-512e4573-064e-4f70-99c6-a0aa33ef1196 req-1e063dd8-40a8-4a88-b8b2-42ac7bd8eed5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.569 226833 DEBUG oslo_concurrency.lockutils [req-512e4573-064e-4f70-99c6-a0aa33ef1196 req-1e063dd8-40a8-4a88-b8b2-42ac7bd8eed5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.569 226833 DEBUG oslo_concurrency.lockutils [req-512e4573-064e-4f70-99c6-a0aa33ef1196 req-1e063dd8-40a8-4a88-b8b2-42ac7bd8eed5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.569 226833 DEBUG nova.compute.manager [req-512e4573-064e-4f70-99c6-a0aa33ef1196 req-1e063dd8-40a8-4a88-b8b2-42ac7bd8eed5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] No waiting events found dispatching network-vif-unplugged-bf5f6a49-45dc-420a-8c08-bbf112a81186 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.569 226833 DEBUG nova.compute.manager [req-512e4573-064e-4f70-99c6-a0aa33ef1196 req-1e063dd8-40a8-4a88-b8b2-42ac7bd8eed5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Received event network-vif-unplugged-bf5f6a49-45dc-420a-8c08-bbf112a81186 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.729 226833 INFO nova.compute.manager [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Took 0.94 seconds to destroy the instance on the hypervisor.
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.729 226833 DEBUG oslo.service.loopingcall [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.729 226833 DEBUG nova.compute.manager [-] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:55:43 compute-2 nova_compute[226829]: 2026-01-31 07:55:43.730 226833 DEBUG nova.network.neutron [-] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:55:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:55:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:55:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:44.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:55:44 compute-2 systemd[1]: run-netns-ovnmeta\x2dadb6998e\x2d030b\x2d47a7\x2db80a\x2d80015d534006.mount: Deactivated successfully.
Jan 31 07:55:44 compute-2 ceph-mon[77282]: pgmap v1754: 305 pgs: 305 active+clean; 293 MiB data, 806 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 148 op/s
Jan 31 07:55:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1164178979' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:55:45 compute-2 nova_compute[226829]: 2026-01-31 07:55:45.004 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:55:45 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4023026982' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:55:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:55:45 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4023026982' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:55:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:45.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:45 compute-2 nova_compute[226829]: 2026-01-31 07:55:45.627 226833 DEBUG nova.compute.manager [req-993b4b17-671b-4605-a6f2-eb642258de05 req-7d1371fa-a9a8-4aac-aa39-dc9825dc5fbf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Received event network-vif-plugged-17255b28-10be-4d25-bba8-0292dfa7ad68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:45 compute-2 nova_compute[226829]: 2026-01-31 07:55:45.628 226833 DEBUG oslo_concurrency.lockutils [req-993b4b17-671b-4605-a6f2-eb642258de05 req-7d1371fa-a9a8-4aac-aa39-dc9825dc5fbf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:45 compute-2 nova_compute[226829]: 2026-01-31 07:55:45.628 226833 DEBUG oslo_concurrency.lockutils [req-993b4b17-671b-4605-a6f2-eb642258de05 req-7d1371fa-a9a8-4aac-aa39-dc9825dc5fbf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:45 compute-2 nova_compute[226829]: 2026-01-31 07:55:45.628 226833 DEBUG oslo_concurrency.lockutils [req-993b4b17-671b-4605-a6f2-eb642258de05 req-7d1371fa-a9a8-4aac-aa39-dc9825dc5fbf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:45 compute-2 nova_compute[226829]: 2026-01-31 07:55:45.628 226833 DEBUG nova.compute.manager [req-993b4b17-671b-4605-a6f2-eb642258de05 req-7d1371fa-a9a8-4aac-aa39-dc9825dc5fbf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] No waiting events found dispatching network-vif-plugged-17255b28-10be-4d25-bba8-0292dfa7ad68 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:55:45 compute-2 nova_compute[226829]: 2026-01-31 07:55:45.629 226833 WARNING nova.compute.manager [req-993b4b17-671b-4605-a6f2-eb642258de05 req-7d1371fa-a9a8-4aac-aa39-dc9825dc5fbf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Received unexpected event network-vif-plugged-17255b28-10be-4d25-bba8-0292dfa7ad68 for instance with vm_state active and task_state deleting.
Jan 31 07:55:45 compute-2 nova_compute[226829]: 2026-01-31 07:55:45.629 226833 DEBUG nova.compute.manager [req-993b4b17-671b-4605-a6f2-eb642258de05 req-7d1371fa-a9a8-4aac-aa39-dc9825dc5fbf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Received event network-vif-unplugged-9f745c59-fd7c-4cd5-9eda-e439c116fbd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:45 compute-2 nova_compute[226829]: 2026-01-31 07:55:45.629 226833 DEBUG oslo_concurrency.lockutils [req-993b4b17-671b-4605-a6f2-eb642258de05 req-7d1371fa-a9a8-4aac-aa39-dc9825dc5fbf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:45 compute-2 nova_compute[226829]: 2026-01-31 07:55:45.629 226833 DEBUG oslo_concurrency.lockutils [req-993b4b17-671b-4605-a6f2-eb642258de05 req-7d1371fa-a9a8-4aac-aa39-dc9825dc5fbf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:45 compute-2 nova_compute[226829]: 2026-01-31 07:55:45.630 226833 DEBUG oslo_concurrency.lockutils [req-993b4b17-671b-4605-a6f2-eb642258de05 req-7d1371fa-a9a8-4aac-aa39-dc9825dc5fbf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:45 compute-2 nova_compute[226829]: 2026-01-31 07:55:45.630 226833 DEBUG nova.compute.manager [req-993b4b17-671b-4605-a6f2-eb642258de05 req-7d1371fa-a9a8-4aac-aa39-dc9825dc5fbf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] No waiting events found dispatching network-vif-unplugged-9f745c59-fd7c-4cd5-9eda-e439c116fbd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:55:45 compute-2 nova_compute[226829]: 2026-01-31 07:55:45.630 226833 DEBUG nova.compute.manager [req-993b4b17-671b-4605-a6f2-eb642258de05 req-7d1371fa-a9a8-4aac-aa39-dc9825dc5fbf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Received event network-vif-unplugged-9f745c59-fd7c-4cd5-9eda-e439c116fbd8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:55:45 compute-2 nova_compute[226829]: 2026-01-31 07:55:45.630 226833 DEBUG nova.compute.manager [req-993b4b17-671b-4605-a6f2-eb642258de05 req-7d1371fa-a9a8-4aac-aa39-dc9825dc5fbf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Received event network-vif-plugged-9f745c59-fd7c-4cd5-9eda-e439c116fbd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:45 compute-2 nova_compute[226829]: 2026-01-31 07:55:45.630 226833 DEBUG oslo_concurrency.lockutils [req-993b4b17-671b-4605-a6f2-eb642258de05 req-7d1371fa-a9a8-4aac-aa39-dc9825dc5fbf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:45 compute-2 nova_compute[226829]: 2026-01-31 07:55:45.631 226833 DEBUG oslo_concurrency.lockutils [req-993b4b17-671b-4605-a6f2-eb642258de05 req-7d1371fa-a9a8-4aac-aa39-dc9825dc5fbf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:45 compute-2 nova_compute[226829]: 2026-01-31 07:55:45.631 226833 DEBUG oslo_concurrency.lockutils [req-993b4b17-671b-4605-a6f2-eb642258de05 req-7d1371fa-a9a8-4aac-aa39-dc9825dc5fbf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:45 compute-2 nova_compute[226829]: 2026-01-31 07:55:45.631 226833 DEBUG nova.compute.manager [req-993b4b17-671b-4605-a6f2-eb642258de05 req-7d1371fa-a9a8-4aac-aa39-dc9825dc5fbf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] No waiting events found dispatching network-vif-plugged-9f745c59-fd7c-4cd5-9eda-e439c116fbd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:55:45 compute-2 nova_compute[226829]: 2026-01-31 07:55:45.631 226833 WARNING nova.compute.manager [req-993b4b17-671b-4605-a6f2-eb642258de05 req-7d1371fa-a9a8-4aac-aa39-dc9825dc5fbf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Received unexpected event network-vif-plugged-9f745c59-fd7c-4cd5-9eda-e439c116fbd8 for instance with vm_state active and task_state deleting.
Jan 31 07:55:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/4023026982' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:55:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/4023026982' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:55:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2371626794' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:55:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:46.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:46 compute-2 nova_compute[226829]: 2026-01-31 07:55:46.720 226833 DEBUG nova.compute.manager [req-3af16e15-835a-4d33-a6b6-ddaaf74bb935 req-afd99859-2fc9-4038-98ee-eb89a28743ae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Received event network-vif-plugged-bf5f6a49-45dc-420a-8c08-bbf112a81186 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:46 compute-2 nova_compute[226829]: 2026-01-31 07:55:46.721 226833 DEBUG oslo_concurrency.lockutils [req-3af16e15-835a-4d33-a6b6-ddaaf74bb935 req-afd99859-2fc9-4038-98ee-eb89a28743ae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:46 compute-2 nova_compute[226829]: 2026-01-31 07:55:46.721 226833 DEBUG oslo_concurrency.lockutils [req-3af16e15-835a-4d33-a6b6-ddaaf74bb935 req-afd99859-2fc9-4038-98ee-eb89a28743ae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:46 compute-2 nova_compute[226829]: 2026-01-31 07:55:46.721 226833 DEBUG oslo_concurrency.lockutils [req-3af16e15-835a-4d33-a6b6-ddaaf74bb935 req-afd99859-2fc9-4038-98ee-eb89a28743ae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:46 compute-2 nova_compute[226829]: 2026-01-31 07:55:46.721 226833 DEBUG nova.compute.manager [req-3af16e15-835a-4d33-a6b6-ddaaf74bb935 req-afd99859-2fc9-4038-98ee-eb89a28743ae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] No waiting events found dispatching network-vif-plugged-bf5f6a49-45dc-420a-8c08-bbf112a81186 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:55:46 compute-2 nova_compute[226829]: 2026-01-31 07:55:46.721 226833 WARNING nova.compute.manager [req-3af16e15-835a-4d33-a6b6-ddaaf74bb935 req-afd99859-2fc9-4038-98ee-eb89a28743ae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Received unexpected event network-vif-plugged-bf5f6a49-45dc-420a-8c08-bbf112a81186 for instance with vm_state active and task_state deleting.
Jan 31 07:55:46 compute-2 ceph-mon[77282]: pgmap v1755: 305 pgs: 305 active+clean; 262 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 190 op/s
Jan 31 07:55:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:47.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:47 compute-2 ceph-mon[77282]: pgmap v1756: 305 pgs: 305 active+clean; 258 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 154 op/s
Jan 31 07:55:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:48.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:48 compute-2 nova_compute[226829]: 2026-01-31 07:55:48.105 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:55:48 compute-2 nova_compute[226829]: 2026-01-31 07:55:48.992 226833 DEBUG oslo_concurrency.lockutils [None req-bdfa3ad2-f256-4e8c-8c50-6f0a0a73672e 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Acquiring lock "a6d52544-dac0-441a-a99f-b0d23283f733" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:48 compute-2 nova_compute[226829]: 2026-01-31 07:55:48.993 226833 DEBUG oslo_concurrency.lockutils [None req-bdfa3ad2-f256-4e8c-8c50-6f0a0a73672e 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Lock "a6d52544-dac0-441a-a99f-b0d23283f733" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:48 compute-2 nova_compute[226829]: 2026-01-31 07:55:48.993 226833 DEBUG oslo_concurrency.lockutils [None req-bdfa3ad2-f256-4e8c-8c50-6f0a0a73672e 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Acquiring lock "a6d52544-dac0-441a-a99f-b0d23283f733-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:48 compute-2 nova_compute[226829]: 2026-01-31 07:55:48.993 226833 DEBUG oslo_concurrency.lockutils [None req-bdfa3ad2-f256-4e8c-8c50-6f0a0a73672e 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Lock "a6d52544-dac0-441a-a99f-b0d23283f733-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:48 compute-2 nova_compute[226829]: 2026-01-31 07:55:48.994 226833 DEBUG oslo_concurrency.lockutils [None req-bdfa3ad2-f256-4e8c-8c50-6f0a0a73672e 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Lock "a6d52544-dac0-441a-a99f-b0d23283f733-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:48 compute-2 nova_compute[226829]: 2026-01-31 07:55:48.995 226833 INFO nova.compute.manager [None req-bdfa3ad2-f256-4e8c-8c50-6f0a0a73672e 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Terminating instance
Jan 31 07:55:48 compute-2 nova_compute[226829]: 2026-01-31 07:55:48.996 226833 DEBUG nova.compute.manager [None req-bdfa3ad2-f256-4e8c-8c50-6f0a0a73672e 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:55:49 compute-2 kernel: tap7b8f6af4-25 (unregistering): left promiscuous mode
Jan 31 07:55:49 compute-2 NetworkManager[48999]: <info>  [1769846149.2971] device (tap7b8f6af4-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:55:49 compute-2 ovn_controller[133834]: 2026-01-31T07:55:49Z|00266|binding|INFO|Releasing lport 7b8f6af4-25f0-4345-98f9-b6f4345e7a4c from this chassis (sb_readonly=0)
Jan 31 07:55:49 compute-2 ovn_controller[133834]: 2026-01-31T07:55:49Z|00267|binding|INFO|Setting lport 7b8f6af4-25f0-4345-98f9-b6f4345e7a4c down in Southbound
Jan 31 07:55:49 compute-2 ovn_controller[133834]: 2026-01-31T07:55:49Z|00268|binding|INFO|Removing iface tap7b8f6af4-25 ovn-installed in OVS
Jan 31 07:55:49 compute-2 nova_compute[226829]: 2026-01-31 07:55:49.326 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:49 compute-2 nova_compute[226829]: 2026-01-31 07:55:49.329 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:49 compute-2 nova_compute[226829]: 2026-01-31 07:55:49.335 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:49 compute-2 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Jan 31 07:55:49 compute-2 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000004c.scope: Consumed 14.878s CPU time.
Jan 31 07:55:49 compute-2 systemd-machined[195142]: Machine qemu-32-instance-0000004c terminated.
Jan 31 07:55:49 compute-2 nova_compute[226829]: 2026-01-31 07:55:49.409 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:49 compute-2 nova_compute[226829]: 2026-01-31 07:55:49.412 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:49 compute-2 nova_compute[226829]: 2026-01-31 07:55:49.426 226833 INFO nova.virt.libvirt.driver [-] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Instance destroyed successfully.
Jan 31 07:55:49 compute-2 nova_compute[226829]: 2026-01-31 07:55:49.426 226833 DEBUG nova.objects.instance [None req-bdfa3ad2-f256-4e8c-8c50-6f0a0a73672e 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Lazy-loading 'resources' on Instance uuid a6d52544-dac0-441a-a99f-b0d23283f733 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:55:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:49.459 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:0e:2f 10.100.0.12'], port_security=['fa:16:3e:19:0e:2f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a6d52544-dac0-441a-a99f-b0d23283f733', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45bd414f-24de-4143-b7e8-6a584f757e03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9bc07928b022430ba8bcc450bbb5c7f5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '456504e2-b7ec-4864-92ee-93c1963edbdb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99232924-38ab-446c-9a0d-35433e368ebf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=7b8f6af4-25f0-4345-98f9-b6f4345e7a4c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:55:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:49.460 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 7b8f6af4-25f0-4345-98f9-b6f4345e7a4c in datapath 45bd414f-24de-4143-b7e8-6a584f757e03 unbound from our chassis
Jan 31 07:55:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:49.463 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 45bd414f-24de-4143-b7e8-6a584f757e03, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:55:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:49.464 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[29aca189-675f-457f-ac2e-a6def38b1505]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:49.465 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-45bd414f-24de-4143-b7e8-6a584f757e03 namespace which is not needed anymore
Jan 31 07:55:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:49.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:49 compute-2 neutron-haproxy-ovnmeta-45bd414f-24de-4143-b7e8-6a584f757e03[260558]: [NOTICE]   (260564) : haproxy version is 2.8.14-c23fe91
Jan 31 07:55:49 compute-2 neutron-haproxy-ovnmeta-45bd414f-24de-4143-b7e8-6a584f757e03[260558]: [NOTICE]   (260564) : path to executable is /usr/sbin/haproxy
Jan 31 07:55:49 compute-2 neutron-haproxy-ovnmeta-45bd414f-24de-4143-b7e8-6a584f757e03[260558]: [WARNING]  (260564) : Exiting Master process...
Jan 31 07:55:49 compute-2 neutron-haproxy-ovnmeta-45bd414f-24de-4143-b7e8-6a584f757e03[260558]: [ALERT]    (260564) : Current worker (260566) exited with code 143 (Terminated)
Jan 31 07:55:49 compute-2 neutron-haproxy-ovnmeta-45bd414f-24de-4143-b7e8-6a584f757e03[260558]: [WARNING]  (260564) : All workers exited. Exiting... (0)
Jan 31 07:55:49 compute-2 systemd[1]: libpod-6d4556b886a77d83d935c8dd8650a09fac628a0ec09c86635a5aadeaa4ef5077.scope: Deactivated successfully.
Jan 31 07:55:49 compute-2 podman[261449]: 2026-01-31 07:55:49.685087997 +0000 UTC m=+0.145585176 container died 6d4556b886a77d83d935c8dd8650a09fac628a0ec09c86635a5aadeaa4ef5077 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45bd414f-24de-4143-b7e8-6a584f757e03, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 31 07:55:49 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6d4556b886a77d83d935c8dd8650a09fac628a0ec09c86635a5aadeaa4ef5077-userdata-shm.mount: Deactivated successfully.
Jan 31 07:55:49 compute-2 systemd[1]: var-lib-containers-storage-overlay-78b85a24104f5221451f27965d1e702a37f093ebdf0626cb2ffd6a7d89a2ad0c-merged.mount: Deactivated successfully.
Jan 31 07:55:49 compute-2 podman[261449]: 2026-01-31 07:55:49.769561384 +0000 UTC m=+0.230058573 container cleanup 6d4556b886a77d83d935c8dd8650a09fac628a0ec09c86635a5aadeaa4ef5077 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45bd414f-24de-4143-b7e8-6a584f757e03, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 07:55:49 compute-2 systemd[1]: libpod-conmon-6d4556b886a77d83d935c8dd8650a09fac628a0ec09c86635a5aadeaa4ef5077.scope: Deactivated successfully.
Jan 31 07:55:49 compute-2 podman[261479]: 2026-01-31 07:55:49.838503903 +0000 UTC m=+0.048064487 container remove 6d4556b886a77d83d935c8dd8650a09fac628a0ec09c86635a5aadeaa4ef5077 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45bd414f-24de-4143-b7e8-6a584f757e03, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 07:55:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:49.842 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4252ee5c-8458-4e4a-b0f6-6f3c708f66b8]: (4, ('Sat Jan 31 07:55:49 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-45bd414f-24de-4143-b7e8-6a584f757e03 (6d4556b886a77d83d935c8dd8650a09fac628a0ec09c86635a5aadeaa4ef5077)\n6d4556b886a77d83d935c8dd8650a09fac628a0ec09c86635a5aadeaa4ef5077\nSat Jan 31 07:55:49 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-45bd414f-24de-4143-b7e8-6a584f757e03 (6d4556b886a77d83d935c8dd8650a09fac628a0ec09c86635a5aadeaa4ef5077)\n6d4556b886a77d83d935c8dd8650a09fac628a0ec09c86635a5aadeaa4ef5077\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:49.844 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5e204c79-c901-4bd1-bbea-68525b189c9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:49.845 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45bd414f-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:49 compute-2 nova_compute[226829]: 2026-01-31 07:55:49.847 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:49 compute-2 kernel: tap45bd414f-20: left promiscuous mode
Jan 31 07:55:49 compute-2 nova_compute[226829]: 2026-01-31 07:55:49.857 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:49.860 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[dc298c48-48e4-4bd4-97ef-a49c1593172f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:49.882 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[eff360ab-9484-4c00-955e-7d8118c06d3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:49.884 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c6fbf2d7-8ff7-4fa9-b8e5-0be80c68e56d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:49.899 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[18f1df4d-a9e9-4c59-b944-e8796ce44f08]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636009, 'reachable_time': 40444, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261499, 'error': None, 'target': 'ovnmeta-45bd414f-24de-4143-b7e8-6a584f757e03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:49 compute-2 systemd[1]: run-netns-ovnmeta\x2d45bd414f\x2d24de\x2d4143\x2db7e8\x2d6a584f757e03.mount: Deactivated successfully.
Jan 31 07:55:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:49.903 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-45bd414f-24de-4143-b7e8-6a584f757e03 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:55:49 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:55:49.903 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[f31d7fe0-e429-4e96-8bb2-b4f72f501b47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:50 compute-2 nova_compute[226829]: 2026-01-31 07:55:50.005 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:55:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:50.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:55:50 compute-2 nova_compute[226829]: 2026-01-31 07:55:50.177 226833 DEBUG nova.virt.libvirt.vif [None req-bdfa3ad2-f256-4e8c-8c50-6f0a0a73672e 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=2.2.2.2,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:54:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='guest-instance-1.domain.com',display_name='guest-instance-1.domain.com',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='guest-instance-1-domain-com',id=76,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFaSVVOkqAy4m6ftfVfCN7yyXKwsbbmBkSuPCnHAQrSoEw66tu1puYtXnpD/5EDlHpoXDfR9WrQMHLeQOsJa0XkGcyRGLK347cubHzHBRMaY57fpNViMHaMBgY16/9IaoA==',key_name='tempest-keypair-318815734',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:55:09Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9bc07928b022430ba8bcc450bbb5c7f5',ramdisk_id='',reservation_id='r-oxii7bzm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestFqdnHostnames-1984852953',owner_user_name='tempest-ServersTestFqdnHostnames-1984852953-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:55:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='41f8b35d7c8b43d9b3583e6e2b1385cb',uuid=a6d52544-dac0-441a-a99f-b0d23283f733,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7b8f6af4-25f0-4345-98f9-b6f4345e7a4c", "address": "fa:16:3e:19:0e:2f", "network": {"id": "45bd414f-24de-4143-b7e8-6a584f757e03", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1647349453-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc07928b022430ba8bcc450bbb5c7f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b8f6af4-25", "ovs_interfaceid": "7b8f6af4-25f0-4345-98f9-b6f4345e7a4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:55:50 compute-2 nova_compute[226829]: 2026-01-31 07:55:50.178 226833 DEBUG nova.network.os_vif_util [None req-bdfa3ad2-f256-4e8c-8c50-6f0a0a73672e 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Converting VIF {"id": "7b8f6af4-25f0-4345-98f9-b6f4345e7a4c", "address": "fa:16:3e:19:0e:2f", "network": {"id": "45bd414f-24de-4143-b7e8-6a584f757e03", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1647349453-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc07928b022430ba8bcc450bbb5c7f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b8f6af4-25", "ovs_interfaceid": "7b8f6af4-25f0-4345-98f9-b6f4345e7a4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:55:50 compute-2 nova_compute[226829]: 2026-01-31 07:55:50.179 226833 DEBUG nova.network.os_vif_util [None req-bdfa3ad2-f256-4e8c-8c50-6f0a0a73672e 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:19:0e:2f,bridge_name='br-int',has_traffic_filtering=True,id=7b8f6af4-25f0-4345-98f9-b6f4345e7a4c,network=Network(45bd414f-24de-4143-b7e8-6a584f757e03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b8f6af4-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:55:50 compute-2 nova_compute[226829]: 2026-01-31 07:55:50.179 226833 DEBUG os_vif [None req-bdfa3ad2-f256-4e8c-8c50-6f0a0a73672e 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:0e:2f,bridge_name='br-int',has_traffic_filtering=True,id=7b8f6af4-25f0-4345-98f9-b6f4345e7a4c,network=Network(45bd414f-24de-4143-b7e8-6a584f757e03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b8f6af4-25') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:55:50 compute-2 nova_compute[226829]: 2026-01-31 07:55:50.181 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:50 compute-2 nova_compute[226829]: 2026-01-31 07:55:50.181 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b8f6af4-25, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:50 compute-2 nova_compute[226829]: 2026-01-31 07:55:50.183 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:50 compute-2 nova_compute[226829]: 2026-01-31 07:55:50.184 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:55:50 compute-2 nova_compute[226829]: 2026-01-31 07:55:50.187 226833 INFO os_vif [None req-bdfa3ad2-f256-4e8c-8c50-6f0a0a73672e 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:0e:2f,bridge_name='br-int',has_traffic_filtering=True,id=7b8f6af4-25f0-4345-98f9-b6f4345e7a4c,network=Network(45bd414f-24de-4143-b7e8-6a584f757e03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b8f6af4-25')
Jan 31 07:55:50 compute-2 ceph-mon[77282]: pgmap v1757: 305 pgs: 305 active+clean; 247 MiB data, 785 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 126 op/s
Jan 31 07:55:50 compute-2 nova_compute[226829]: 2026-01-31 07:55:50.797 226833 DEBUG nova.compute.manager [req-d1cdb46a-7107-401a-8da0-7a514db750e9 req-31bec47b-bad3-46e0-8745-360b19415338 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Received event network-vif-deleted-9f745c59-fd7c-4cd5-9eda-e439c116fbd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:50 compute-2 nova_compute[226829]: 2026-01-31 07:55:50.798 226833 INFO nova.compute.manager [req-d1cdb46a-7107-401a-8da0-7a514db750e9 req-31bec47b-bad3-46e0-8745-360b19415338 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Neutron deleted interface 9f745c59-fd7c-4cd5-9eda-e439c116fbd8; detaching it from the instance and deleting it from the info cache
Jan 31 07:55:50 compute-2 nova_compute[226829]: 2026-01-31 07:55:50.798 226833 DEBUG nova.network.neutron [req-d1cdb46a-7107-401a-8da0-7a514db750e9 req-31bec47b-bad3-46e0-8745-360b19415338 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Updating instance_info_cache with network_info: [{"id": "17255b28-10be-4d25-bba8-0292dfa7ad68", "address": "fa:16:3e:df:35:4b", "network": {"id": "adb6998e-030b-47a7-b80a-80015d534006", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1792056346", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.150", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17255b28-10", "ovs_interfaceid": "17255b28-10be-4d25-bba8-0292dfa7ad68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bf5f6a49-45dc-420a-8c08-bbf112a81186", "address": "fa:16:3e:93:15:34", "network": {"id": "adb6998e-030b-47a7-b80a-80015d534006", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1792056346", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5f6a49-45", "ovs_interfaceid": "bf5f6a49-45dc-420a-8c08-bbf112a81186", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:55:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:55:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:51.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:55:51 compute-2 nova_compute[226829]: 2026-01-31 07:55:51.527 226833 DEBUG nova.compute.manager [req-d1cdb46a-7107-401a-8da0-7a514db750e9 req-31bec47b-bad3-46e0-8745-360b19415338 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Detach interface failed, port_id=9f745c59-fd7c-4cd5-9eda-e439c116fbd8, reason: Instance ebe88ddf-b955-4cc2-9685-826d10d55955 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 07:55:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:52.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:52 compute-2 nova_compute[226829]: 2026-01-31 07:55:52.248 226833 DEBUG nova.compute.manager [req-9ccdb610-121b-46f1-b267-3af0f843b729 req-69838871-fb76-4f2f-af64-1096f7ce878f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Received event network-vif-unplugged-7b8f6af4-25f0-4345-98f9-b6f4345e7a4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:52 compute-2 nova_compute[226829]: 2026-01-31 07:55:52.249 226833 DEBUG oslo_concurrency.lockutils [req-9ccdb610-121b-46f1-b267-3af0f843b729 req-69838871-fb76-4f2f-af64-1096f7ce878f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a6d52544-dac0-441a-a99f-b0d23283f733-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:52 compute-2 nova_compute[226829]: 2026-01-31 07:55:52.250 226833 DEBUG oslo_concurrency.lockutils [req-9ccdb610-121b-46f1-b267-3af0f843b729 req-69838871-fb76-4f2f-af64-1096f7ce878f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a6d52544-dac0-441a-a99f-b0d23283f733-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:52 compute-2 nova_compute[226829]: 2026-01-31 07:55:52.250 226833 DEBUG oslo_concurrency.lockutils [req-9ccdb610-121b-46f1-b267-3af0f843b729 req-69838871-fb76-4f2f-af64-1096f7ce878f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a6d52544-dac0-441a-a99f-b0d23283f733-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:52 compute-2 nova_compute[226829]: 2026-01-31 07:55:52.250 226833 DEBUG nova.compute.manager [req-9ccdb610-121b-46f1-b267-3af0f843b729 req-69838871-fb76-4f2f-af64-1096f7ce878f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] No waiting events found dispatching network-vif-unplugged-7b8f6af4-25f0-4345-98f9-b6f4345e7a4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:55:52 compute-2 nova_compute[226829]: 2026-01-31 07:55:52.251 226833 DEBUG nova.compute.manager [req-9ccdb610-121b-46f1-b267-3af0f843b729 req-69838871-fb76-4f2f-af64-1096f7ce878f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Received event network-vif-unplugged-7b8f6af4-25f0-4345-98f9-b6f4345e7a4c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:55:52 compute-2 ceph-mon[77282]: pgmap v1758: 305 pgs: 305 active+clean; 247 MiB data, 785 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 106 op/s
Jan 31 07:55:52 compute-2 nova_compute[226829]: 2026-01-31 07:55:52.800 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:52 compute-2 nova_compute[226829]: 2026-01-31 07:55:52.929 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:53 compute-2 nova_compute[226829]: 2026-01-31 07:55:53.465 226833 DEBUG nova.compute.manager [req-5e1064d3-cfc2-47f0-b1fc-bc517d8387cb req-26b34345-33e9-4c3c-82bf-65f4949272ed 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Received event network-vif-deleted-bf5f6a49-45dc-420a-8c08-bbf112a81186 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:53 compute-2 nova_compute[226829]: 2026-01-31 07:55:53.466 226833 INFO nova.compute.manager [req-5e1064d3-cfc2-47f0-b1fc-bc517d8387cb req-26b34345-33e9-4c3c-82bf-65f4949272ed 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Neutron deleted interface bf5f6a49-45dc-420a-8c08-bbf112a81186; detaching it from the instance and deleting it from the info cache
Jan 31 07:55:53 compute-2 nova_compute[226829]: 2026-01-31 07:55:53.466 226833 DEBUG nova.network.neutron [req-5e1064d3-cfc2-47f0-b1fc-bc517d8387cb req-26b34345-33e9-4c3c-82bf-65f4949272ed 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Updating instance_info_cache with network_info: [{"id": "17255b28-10be-4d25-bba8-0292dfa7ad68", "address": "fa:16:3e:df:35:4b", "network": {"id": "adb6998e-030b-47a7-b80a-80015d534006", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1792056346", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.150", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17255b28-10", "ovs_interfaceid": "17255b28-10be-4d25-bba8-0292dfa7ad68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:55:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:53.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:53 compute-2 nova_compute[226829]: 2026-01-31 07:55:53.628 226833 INFO nova.virt.libvirt.driver [None req-bdfa3ad2-f256-4e8c-8c50-6f0a0a73672e 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Deleting instance files /var/lib/nova/instances/a6d52544-dac0-441a-a99f-b0d23283f733_del
Jan 31 07:55:53 compute-2 nova_compute[226829]: 2026-01-31 07:55:53.629 226833 INFO nova.virt.libvirt.driver [None req-bdfa3ad2-f256-4e8c-8c50-6f0a0a73672e 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Deletion of /var/lib/nova/instances/a6d52544-dac0-441a-a99f-b0d23283f733_del complete
Jan 31 07:55:53 compute-2 nova_compute[226829]: 2026-01-31 07:55:53.816 226833 DEBUG nova.compute.manager [req-5e1064d3-cfc2-47f0-b1fc-bc517d8387cb req-26b34345-33e9-4c3c-82bf-65f4949272ed 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Detach interface failed, port_id=bf5f6a49-45dc-420a-8c08-bbf112a81186, reason: Instance ebe88ddf-b955-4cc2-9685-826d10d55955 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 07:55:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:55:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:55:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:54.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:55:54 compute-2 nova_compute[226829]: 2026-01-31 07:55:54.554 226833 DEBUG nova.compute.manager [req-83bcf9b6-8bc0-4d69-8792-c72fffbf7df0 req-6d7fbd80-a360-4566-b2bb-f9375b0d3e3e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Received event network-vif-plugged-7b8f6af4-25f0-4345-98f9-b6f4345e7a4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:54 compute-2 nova_compute[226829]: 2026-01-31 07:55:54.554 226833 DEBUG oslo_concurrency.lockutils [req-83bcf9b6-8bc0-4d69-8792-c72fffbf7df0 req-6d7fbd80-a360-4566-b2bb-f9375b0d3e3e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a6d52544-dac0-441a-a99f-b0d23283f733-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:54 compute-2 nova_compute[226829]: 2026-01-31 07:55:54.555 226833 DEBUG oslo_concurrency.lockutils [req-83bcf9b6-8bc0-4d69-8792-c72fffbf7df0 req-6d7fbd80-a360-4566-b2bb-f9375b0d3e3e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a6d52544-dac0-441a-a99f-b0d23283f733-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:54 compute-2 nova_compute[226829]: 2026-01-31 07:55:54.555 226833 DEBUG oslo_concurrency.lockutils [req-83bcf9b6-8bc0-4d69-8792-c72fffbf7df0 req-6d7fbd80-a360-4566-b2bb-f9375b0d3e3e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a6d52544-dac0-441a-a99f-b0d23283f733-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:54 compute-2 nova_compute[226829]: 2026-01-31 07:55:54.555 226833 DEBUG nova.compute.manager [req-83bcf9b6-8bc0-4d69-8792-c72fffbf7df0 req-6d7fbd80-a360-4566-b2bb-f9375b0d3e3e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] No waiting events found dispatching network-vif-plugged-7b8f6af4-25f0-4345-98f9-b6f4345e7a4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:55:54 compute-2 nova_compute[226829]: 2026-01-31 07:55:54.556 226833 WARNING nova.compute.manager [req-83bcf9b6-8bc0-4d69-8792-c72fffbf7df0 req-6d7fbd80-a360-4566-b2bb-f9375b0d3e3e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Received unexpected event network-vif-plugged-7b8f6af4-25f0-4345-98f9-b6f4345e7a4c for instance with vm_state active and task_state deleting.
Jan 31 07:55:54 compute-2 nova_compute[226829]: 2026-01-31 07:55:54.558 226833 DEBUG nova.network.neutron [-] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:55:54 compute-2 nova_compute[226829]: 2026-01-31 07:55:54.587 226833 INFO nova.compute.manager [None req-bdfa3ad2-f256-4e8c-8c50-6f0a0a73672e 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Took 5.59 seconds to destroy the instance on the hypervisor.
Jan 31 07:55:54 compute-2 nova_compute[226829]: 2026-01-31 07:55:54.588 226833 DEBUG oslo.service.loopingcall [None req-bdfa3ad2-f256-4e8c-8c50-6f0a0a73672e 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:55:54 compute-2 nova_compute[226829]: 2026-01-31 07:55:54.588 226833 DEBUG nova.compute.manager [-] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:55:54 compute-2 nova_compute[226829]: 2026-01-31 07:55:54.588 226833 DEBUG nova.network.neutron [-] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:55:54 compute-2 ceph-mon[77282]: pgmap v1759: 305 pgs: 305 active+clean; 226 MiB data, 774 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 105 op/s
Jan 31 07:55:54 compute-2 nova_compute[226829]: 2026-01-31 07:55:54.815 226833 INFO nova.compute.manager [-] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Took 11.09 seconds to deallocate network for instance.
Jan 31 07:55:55 compute-2 nova_compute[226829]: 2026-01-31 07:55:55.007 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:55 compute-2 nova_compute[226829]: 2026-01-31 07:55:55.184 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:55 compute-2 nova_compute[226829]: 2026-01-31 07:55:55.289 226833 DEBUG oslo_concurrency.lockutils [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:55 compute-2 nova_compute[226829]: 2026-01-31 07:55:55.290 226833 DEBUG oslo_concurrency.lockutils [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:55 compute-2 nova_compute[226829]: 2026-01-31 07:55:55.409 226833 DEBUG oslo_concurrency.processutils [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:55:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:55.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:55 compute-2 nova_compute[226829]: 2026-01-31 07:55:55.645 226833 DEBUG nova.compute.manager [req-1046e2d1-6928-4db9-b7c1-65203011f068 req-034ce1ed-0ca5-40f7-b607-119ee828d6c3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Received event network-vif-deleted-17255b28-10be-4d25-bba8-0292dfa7ad68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:55 compute-2 ceph-mon[77282]: pgmap v1760: 305 pgs: 305 active+clean; 175 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 23 KiB/s wr, 97 op/s
Jan 31 07:55:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:55:55 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1145297886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:55:55 compute-2 nova_compute[226829]: 2026-01-31 07:55:55.848 226833 DEBUG oslo_concurrency.processutils [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:55:55 compute-2 nova_compute[226829]: 2026-01-31 07:55:55.854 226833 DEBUG nova.compute.provider_tree [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:55:56 compute-2 nova_compute[226829]: 2026-01-31 07:55:56.012 226833 DEBUG nova.scheduler.client.report [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:55:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:56.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:56 compute-2 nova_compute[226829]: 2026-01-31 07:55:56.325 226833 DEBUG oslo_concurrency.lockutils [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:56 compute-2 nova_compute[226829]: 2026-01-31 07:55:56.626 226833 INFO nova.scheduler.client.report [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Deleted allocations for instance ebe88ddf-b955-4cc2-9685-826d10d55955
Jan 31 07:55:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1145297886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:55:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:57.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:57 compute-2 nova_compute[226829]: 2026-01-31 07:55:57.524 226833 DEBUG oslo_concurrency.lockutils [None req-ec620c70-c644-4161-b388-e9baf99f4a58 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "ebe88ddf-b955-4cc2-9685-826d10d55955" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 14.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:57 compute-2 sudo[261546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:55:57 compute-2 sudo[261546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:55:57 compute-2 sudo[261546]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:57 compute-2 sudo[261577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:55:57 compute-2 sudo[261577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:55:57 compute-2 sudo[261577]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:57 compute-2 podman[261570]: 2026-01-31 07:55:57.726139986 +0000 UTC m=+0.106578715 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:55:58 compute-2 ceph-mon[77282]: pgmap v1761: 305 pgs: 305 active+clean; 167 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 785 KiB/s rd, 19 KiB/s wr, 67 op/s
Jan 31 07:55:58 compute-2 nova_compute[226829]: 2026-01-31 07:55:58.036 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846143.0352407, ebe88ddf-b955-4cc2-9685-826d10d55955 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:55:58 compute-2 nova_compute[226829]: 2026-01-31 07:55:58.037 226833 INFO nova.compute.manager [-] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] VM Stopped (Lifecycle Event)
Jan 31 07:55:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:58.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:58 compute-2 nova_compute[226829]: 2026-01-31 07:55:58.731 226833 DEBUG nova.compute.manager [None req-5bc5009b-be9c-4188-bb76-9d38dd6a7fbd - - - - - -] [instance: ebe88ddf-b955-4cc2-9685-826d10d55955] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:55:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:55:58 compute-2 nova_compute[226829]: 2026-01-31 07:55:58.925 226833 DEBUG nova.compute.manager [req-a42920c7-ec31-4489-82de-0e198fe33f2c req-1c7f3eb5-318a-4d4e-aae9-bf54d9d924a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Received event network-vif-deleted-7b8f6af4-25f0-4345-98f9-b6f4345e7a4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:58 compute-2 nova_compute[226829]: 2026-01-31 07:55:58.926 226833 INFO nova.compute.manager [req-a42920c7-ec31-4489-82de-0e198fe33f2c req-1c7f3eb5-318a-4d4e-aae9-bf54d9d924a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Neutron deleted interface 7b8f6af4-25f0-4345-98f9-b6f4345e7a4c; detaching it from the instance and deleting it from the info cache
Jan 31 07:55:58 compute-2 nova_compute[226829]: 2026-01-31 07:55:58.926 226833 DEBUG nova.network.neutron [req-a42920c7-ec31-4489-82de-0e198fe33f2c req-1c7f3eb5-318a-4d4e-aae9-bf54d9d924a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:55:59 compute-2 nova_compute[226829]: 2026-01-31 07:55:59.116 226833 DEBUG nova.network.neutron [-] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:55:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:55:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:55:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:59.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:55:59 compute-2 nova_compute[226829]: 2026-01-31 07:55:59.777 226833 INFO nova.compute.manager [-] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Took 5.19 seconds to deallocate network for instance.
Jan 31 07:55:59 compute-2 nova_compute[226829]: 2026-01-31 07:55:59.784 226833 DEBUG nova.compute.manager [req-a42920c7-ec31-4489-82de-0e198fe33f2c req-1c7f3eb5-318a-4d4e-aae9-bf54d9d924a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Detach interface failed, port_id=7b8f6af4-25f0-4345-98f9-b6f4345e7a4c, reason: Instance a6d52544-dac0-441a-a99f-b0d23283f733 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 07:56:00 compute-2 nova_compute[226829]: 2026-01-31 07:56:00.009 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:00.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:00 compute-2 nova_compute[226829]: 2026-01-31 07:56:00.135 226833 DEBUG oslo_concurrency.lockutils [None req-bdfa3ad2-f256-4e8c-8c50-6f0a0a73672e 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:56:00 compute-2 nova_compute[226829]: 2026-01-31 07:56:00.136 226833 DEBUG oslo_concurrency.lockutils [None req-bdfa3ad2-f256-4e8c-8c50-6f0a0a73672e 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:56:00 compute-2 nova_compute[226829]: 2026-01-31 07:56:00.176 226833 DEBUG oslo_concurrency.processutils [None req-bdfa3ad2-f256-4e8c-8c50-6f0a0a73672e 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:56:00 compute-2 nova_compute[226829]: 2026-01-31 07:56:00.201 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:00 compute-2 ceph-mon[77282]: pgmap v1762: 305 pgs: 305 active+clean; 167 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 102 op/s
Jan 31 07:56:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:56:00 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2190604874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:00 compute-2 nova_compute[226829]: 2026-01-31 07:56:00.639 226833 DEBUG oslo_concurrency.processutils [None req-bdfa3ad2-f256-4e8c-8c50-6f0a0a73672e 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:56:00 compute-2 nova_compute[226829]: 2026-01-31 07:56:00.644 226833 DEBUG nova.compute.provider_tree [None req-bdfa3ad2-f256-4e8c-8c50-6f0a0a73672e 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:56:00 compute-2 nova_compute[226829]: 2026-01-31 07:56:00.760 226833 DEBUG nova.scheduler.client.report [None req-bdfa3ad2-f256-4e8c-8c50-6f0a0a73672e 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:56:00 compute-2 nova_compute[226829]: 2026-01-31 07:56:00.822 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:01 compute-2 nova_compute[226829]: 2026-01-31 07:56:01.320 226833 DEBUG oslo_concurrency.lockutils [None req-bdfa3ad2-f256-4e8c-8c50-6f0a0a73672e 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:56:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:01.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2190604874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:02.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:03 compute-2 ceph-mon[77282]: pgmap v1763: 305 pgs: 305 active+clean; 167 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.9 KiB/s wr, 95 op/s
Jan 31 07:56:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:03.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:56:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:04.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:04 compute-2 ceph-mon[77282]: pgmap v1764: 305 pgs: 305 active+clean; 167 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.9 KiB/s wr, 86 op/s
Jan 31 07:56:04 compute-2 nova_compute[226829]: 2026-01-31 07:56:04.391 226833 INFO nova.scheduler.client.report [None req-bdfa3ad2-f256-4e8c-8c50-6f0a0a73672e 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Deleted allocations for instance a6d52544-dac0-441a-a99f-b0d23283f733
Jan 31 07:56:04 compute-2 nova_compute[226829]: 2026-01-31 07:56:04.423 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846149.422262, a6d52544-dac0-441a-a99f-b0d23283f733 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:56:04 compute-2 nova_compute[226829]: 2026-01-31 07:56:04.424 226833 INFO nova.compute.manager [-] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] VM Stopped (Lifecycle Event)
Jan 31 07:56:04 compute-2 nova_compute[226829]: 2026-01-31 07:56:04.843 226833 DEBUG nova.compute.manager [None req-6edfbeac-0431-461a-a48c-ad8c1afae29d - - - - - -] [instance: a6d52544-dac0-441a-a99f-b0d23283f733] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:56:05 compute-2 nova_compute[226829]: 2026-01-31 07:56:05.011 226833 DEBUG oslo_concurrency.lockutils [None req-bdfa3ad2-f256-4e8c-8c50-6f0a0a73672e 41f8b35d7c8b43d9b3583e6e2b1385cb 9bc07928b022430ba8bcc450bbb5c7f5 - - default default] Lock "a6d52544-dac0-441a-a99f-b0d23283f733" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 16.019s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:56:05 compute-2 nova_compute[226829]: 2026-01-31 07:56:05.012 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:05 compute-2 nova_compute[226829]: 2026-01-31 07:56:05.240 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:05.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:06.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:06 compute-2 ceph-mon[77282]: pgmap v1765: 305 pgs: 305 active+clean; 167 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.6 KiB/s wr, 86 op/s
Jan 31 07:56:06 compute-2 nova_compute[226829]: 2026-01-31 07:56:06.491 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:06.866 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:56:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:06.867 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:56:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:06.867 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:56:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:56:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:07.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:56:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:56:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:08.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:56:08 compute-2 sudo[261649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:56:08 compute-2 sudo[261649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:08 compute-2 sudo[261649]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:08 compute-2 sudo[261674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:56:08 compute-2 sudo[261674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:08 compute-2 sudo[261674]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:08 compute-2 sudo[261699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:56:08 compute-2 sudo[261699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:08 compute-2 sudo[261699]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:08 compute-2 sudo[261724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 07:56:08 compute-2 sudo[261724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:08 compute-2 nova_compute[226829]: 2026-01-31 07:56:08.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:08 compute-2 ceph-mon[77282]: pgmap v1766: 305 pgs: 305 active+clean; 174 MiB data, 748 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 808 KiB/s wr, 91 op/s
Jan 31 07:56:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:56:09 compute-2 nova_compute[226829]: 2026-01-31 07:56:09.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:56:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:09.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:56:09 compute-2 podman[261820]: 2026-01-31 07:56:09.49761609 +0000 UTC m=+0.462435349 container exec 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:56:09 compute-2 podman[261841]: 2026-01-31 07:56:09.894263976 +0000 UTC m=+0.120450969 container exec_died 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:56:10 compute-2 nova_compute[226829]: 2026-01-31 07:56:10.013 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:10 compute-2 podman[261820]: 2026-01-31 07:56:10.076564741 +0000 UTC m=+1.041384000 container exec_died 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:56:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:56:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:10.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:56:10 compute-2 nova_compute[226829]: 2026-01-31 07:56:10.243 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:10 compute-2 ceph-mon[77282]: pgmap v1767: 305 pgs: 305 active+clean; 198 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 93 op/s
Jan 31 07:56:10 compute-2 podman[261854]: 2026-01-31 07:56:10.953464393 +0000 UTC m=+0.828837168 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:56:11 compute-2 nova_compute[226829]: 2026-01-31 07:56:11.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:11 compute-2 nova_compute[226829]: 2026-01-31 07:56:11.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 07:56:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:56:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:11.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:56:11 compute-2 nova_compute[226829]: 2026-01-31 07:56:11.645 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 07:56:11 compute-2 podman[262000]: 2026-01-31 07:56:11.846280506 +0000 UTC m=+0.510291261 container exec f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 07:56:12 compute-2 ceph-mon[77282]: pgmap v1768: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 312 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 07:56:12 compute-2 podman[262000]: 2026-01-31 07:56:12.112570095 +0000 UTC m=+0.776580840 container exec_died f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 07:56:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:12.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:13.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:13 compute-2 podman[262063]: 2026-01-31 07:56:13.559823436 +0000 UTC m=+0.912948667 container exec 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, description=keepalived for Ceph, architecture=x86_64, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, release=1793, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Jan 31 07:56:13 compute-2 podman[262085]: 2026-01-31 07:56:13.683303815 +0000 UTC m=+0.089054742 container exec_died 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, name=keepalived, release=1793, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, distribution-scope=public, io.openshift.expose-services=, vcs-type=git)
Jan 31 07:56:13 compute-2 podman[262063]: 2026-01-31 07:56:13.764805372 +0000 UTC m=+1.117930573 container exec_died 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, distribution-scope=public, name=keepalived, vcs-type=git, version=2.2.4, architecture=x86_64, build-date=2023-02-22T09:23:20, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc.)
Jan 31 07:56:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:56:14 compute-2 sudo[261724]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:56:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:14.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:56:14 compute-2 sudo[262096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:56:14 compute-2 sudo[262096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:14 compute-2 sudo[262096]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:14.397 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:56:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:14.398 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:56:14 compute-2 nova_compute[226829]: 2026-01-31 07:56:14.399 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:14 compute-2 sudo[262121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:56:14 compute-2 sudo[262121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:14 compute-2 sudo[262121]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:14 compute-2 sudo[262146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:56:14 compute-2 sudo[262146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:14 compute-2 sudo[262146]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:14 compute-2 sudo[262171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:56:14 compute-2 sudo[262171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:14 compute-2 ceph-mon[77282]: pgmap v1769: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 313 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 07:56:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:56:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:56:14 compute-2 nova_compute[226829]: 2026-01-31 07:56:14.645 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:14 compute-2 sudo[262171]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:15 compute-2 nova_compute[226829]: 2026-01-31 07:56:15.015 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:15 compute-2 nova_compute[226829]: 2026-01-31 07:56:15.245 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:15 compute-2 nova_compute[226829]: 2026-01-31 07:56:15.492 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:15 compute-2 nova_compute[226829]: 2026-01-31 07:56:15.492 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:15.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:15 compute-2 nova_compute[226829]: 2026-01-31 07:56:15.661 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:56:15 compute-2 nova_compute[226829]: 2026-01-31 07:56:15.661 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:56:15 compute-2 nova_compute[226829]: 2026-01-31 07:56:15.661 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:56:15 compute-2 nova_compute[226829]: 2026-01-31 07:56:15.662 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:56:15 compute-2 nova_compute[226829]: 2026-01-31 07:56:15.662 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:56:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:56:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:56:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:56:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:56:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:56:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:56:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:16.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:56:16 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3000068858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:16 compute-2 nova_compute[226829]: 2026-01-31 07:56:16.201 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:56:16 compute-2 nova_compute[226829]: 2026-01-31 07:56:16.369 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:56:16 compute-2 nova_compute[226829]: 2026-01-31 07:56:16.371 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4487MB free_disk=20.897254943847656GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:56:16 compute-2 nova_compute[226829]: 2026-01-31 07:56:16.371 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:56:16 compute-2 nova_compute[226829]: 2026-01-31 07:56:16.371 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:56:16 compute-2 ceph-mon[77282]: pgmap v1770: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 313 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 07:56:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/375979925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3000068858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:56:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:17.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:56:17 compute-2 sudo[262252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:56:17 compute-2 sudo[262252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:17 compute-2 sudo[262252]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2778733874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:17 compute-2 ceph-mon[77282]: pgmap v1771: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 313 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 07:56:17 compute-2 sudo[262277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:56:17 compute-2 sudo[262277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:17 compute-2 sudo[262277]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:17 compute-2 nova_compute[226829]: 2026-01-31 07:56:17.908 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:56:17 compute-2 nova_compute[226829]: 2026-01-31 07:56:17.908 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:56:17 compute-2 nova_compute[226829]: 2026-01-31 07:56:17.924 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:56:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:18.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:56:18 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2041874506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:18 compute-2 nova_compute[226829]: 2026-01-31 07:56:18.328 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:56:18 compute-2 nova_compute[226829]: 2026-01-31 07:56:18.332 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:56:18 compute-2 nova_compute[226829]: 2026-01-31 07:56:18.855 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:56:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:56:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2041874506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:19 compute-2 nova_compute[226829]: 2026-01-31 07:56:19.220 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:56:19 compute-2 nova_compute[226829]: 2026-01-31 07:56:19.221 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:56:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:56:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:19.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:56:20 compute-2 nova_compute[226829]: 2026-01-31 07:56:20.017 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:20.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:20 compute-2 nova_compute[226829]: 2026-01-31 07:56:20.217 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:20 compute-2 nova_compute[226829]: 2026-01-31 07:56:20.217 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:56:20 compute-2 nova_compute[226829]: 2026-01-31 07:56:20.218 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:56:20 compute-2 ceph-mon[77282]: pgmap v1772: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 211 KiB/s rd, 1.4 MiB/s wr, 46 op/s
Jan 31 07:56:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/51827031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:20 compute-2 nova_compute[226829]: 2026-01-31 07:56:20.246 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:20 compute-2 nova_compute[226829]: 2026-01-31 07:56:20.502 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:56:20 compute-2 nova_compute[226829]: 2026-01-31 07:56:20.502 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:21.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:21 compute-2 nova_compute[226829]: 2026-01-31 07:56:21.747 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:22.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:22 compute-2 sudo[262326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:56:22 compute-2 sudo[262326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:22 compute-2 sudo[262326]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:22 compute-2 sudo[262351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:56:22 compute-2 sudo[262351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:22 compute-2 sudo[262351]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:22 compute-2 ceph-mon[77282]: pgmap v1773: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 67 KiB/s rd, 17 KiB/s wr, 11 op/s
Jan 31 07:56:22 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:56:22 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:56:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1461398054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:23.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:56:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:24.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:24 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:24.400 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:56:24 compute-2 ceph-mon[77282]: pgmap v1774: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 1.3 KiB/s rd, 12 KiB/s wr, 0 op/s
Jan 31 07:56:25 compute-2 nova_compute[226829]: 2026-01-31 07:56:25.018 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:25 compute-2 nova_compute[226829]: 2026-01-31 07:56:25.248 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:25 compute-2 nova_compute[226829]: 2026-01-31 07:56:25.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:25 compute-2 nova_compute[226829]: 2026-01-31 07:56:25.487 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:56:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:25.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:26.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:26 compute-2 ceph-mon[77282]: pgmap v1775: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Jan 31 07:56:27 compute-2 nova_compute[226829]: 2026-01-31 07:56:27.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:27.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:27 compute-2 ceph-mon[77282]: pgmap v1776: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Jan 31 07:56:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:56:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:28.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:56:28 compute-2 podman[262379]: 2026-01-31 07:56:28.247272159 +0000 UTC m=+0.126202374 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 07:56:28 compute-2 nova_compute[226829]: 2026-01-31 07:56:28.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:28 compute-2 nova_compute[226829]: 2026-01-31 07:56:28.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 07:56:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:56:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:56:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:29.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:56:30 compute-2 nova_compute[226829]: 2026-01-31 07:56:30.019 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:30.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:30 compute-2 nova_compute[226829]: 2026-01-31 07:56:30.196 226833 DEBUG oslo_concurrency.lockutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:56:30 compute-2 nova_compute[226829]: 2026-01-31 07:56:30.196 226833 DEBUG oslo_concurrency.lockutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:56:30 compute-2 nova_compute[226829]: 2026-01-31 07:56:30.250 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:30 compute-2 ceph-mon[77282]: pgmap v1777: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 15 KiB/s wr, 1 op/s
Jan 31 07:56:30 compute-2 nova_compute[226829]: 2026-01-31 07:56:30.558 226833 DEBUG nova.compute.manager [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:56:31 compute-2 nova_compute[226829]: 2026-01-31 07:56:31.282 226833 DEBUG oslo_concurrency.lockutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:56:31 compute-2 nova_compute[226829]: 2026-01-31 07:56:31.283 226833 DEBUG oslo_concurrency.lockutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:56:31 compute-2 nova_compute[226829]: 2026-01-31 07:56:31.294 226833 DEBUG nova.virt.hardware [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:56:31 compute-2 nova_compute[226829]: 2026-01-31 07:56:31.295 226833 INFO nova.compute.claims [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:56:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:31.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:31 compute-2 nova_compute[226829]: 2026-01-31 07:56:31.639 226833 DEBUG oslo_concurrency.processutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:56:32 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:56:32 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3368109371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:32 compute-2 nova_compute[226829]: 2026-01-31 07:56:32.059 226833 DEBUG oslo_concurrency.processutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:56:32 compute-2 nova_compute[226829]: 2026-01-31 07:56:32.065 226833 DEBUG nova.compute.provider_tree [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:56:32 compute-2 nova_compute[226829]: 2026-01-31 07:56:32.096 226833 DEBUG nova.scheduler.client.report [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:56:32 compute-2 nova_compute[226829]: 2026-01-31 07:56:32.131 226833 DEBUG oslo_concurrency.lockutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:56:32 compute-2 nova_compute[226829]: 2026-01-31 07:56:32.132 226833 DEBUG nova.compute.manager [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:56:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:56:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:32.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:56:32 compute-2 nova_compute[226829]: 2026-01-31 07:56:32.216 226833 DEBUG nova.compute.manager [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:56:32 compute-2 nova_compute[226829]: 2026-01-31 07:56:32.217 226833 DEBUG nova.network.neutron [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:56:32 compute-2 nova_compute[226829]: 2026-01-31 07:56:32.241 226833 INFO nova.virt.libvirt.driver [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:56:32 compute-2 nova_compute[226829]: 2026-01-31 07:56:32.267 226833 DEBUG nova.compute.manager [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:56:32 compute-2 nova_compute[226829]: 2026-01-31 07:56:32.416 226833 DEBUG nova.compute.manager [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:56:32 compute-2 nova_compute[226829]: 2026-01-31 07:56:32.418 226833 DEBUG nova.virt.libvirt.driver [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:56:32 compute-2 nova_compute[226829]: 2026-01-31 07:56:32.418 226833 INFO nova.virt.libvirt.driver [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Creating image(s)
Jan 31 07:56:32 compute-2 nova_compute[226829]: 2026-01-31 07:56:32.445 226833 DEBUG nova.storage.rbd_utils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] rbd image 2ea2db2b-f865-43f2-9f44-4a92a02bb804_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:56:32 compute-2 nova_compute[226829]: 2026-01-31 07:56:32.474 226833 DEBUG nova.storage.rbd_utils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] rbd image 2ea2db2b-f865-43f2-9f44-4a92a02bb804_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:56:32 compute-2 ceph-mon[77282]: pgmap v1778: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 2.7 KiB/s wr, 0 op/s
Jan 31 07:56:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3368109371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:32 compute-2 nova_compute[226829]: 2026-01-31 07:56:32.510 226833 DEBUG nova.storage.rbd_utils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] rbd image 2ea2db2b-f865-43f2-9f44-4a92a02bb804_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:56:32 compute-2 nova_compute[226829]: 2026-01-31 07:56:32.515 226833 DEBUG oslo_concurrency.processutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:56:32 compute-2 nova_compute[226829]: 2026-01-31 07:56:32.571 226833 DEBUG oslo_concurrency.processutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:56:32 compute-2 nova_compute[226829]: 2026-01-31 07:56:32.572 226833 DEBUG oslo_concurrency.lockutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:56:32 compute-2 nova_compute[226829]: 2026-01-31 07:56:32.572 226833 DEBUG oslo_concurrency.lockutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:56:32 compute-2 nova_compute[226829]: 2026-01-31 07:56:32.573 226833 DEBUG oslo_concurrency.lockutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:56:32 compute-2 nova_compute[226829]: 2026-01-31 07:56:32.602 226833 DEBUG nova.storage.rbd_utils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] rbd image 2ea2db2b-f865-43f2-9f44-4a92a02bb804_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:56:32 compute-2 nova_compute[226829]: 2026-01-31 07:56:32.606 226833 DEBUG oslo_concurrency.processutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 2ea2db2b-f865-43f2-9f44-4a92a02bb804_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:56:32 compute-2 nova_compute[226829]: 2026-01-31 07:56:32.975 226833 DEBUG nova.policy [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f60419a58aea43b9a0b6db7d61d71246', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1cd91610847a480caeee0ae3cdabf066', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:56:33 compute-2 nova_compute[226829]: 2026-01-31 07:56:33.100 226833 DEBUG oslo_concurrency.processutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 2ea2db2b-f865-43f2-9f44-4a92a02bb804_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:56:33 compute-2 nova_compute[226829]: 2026-01-31 07:56:33.177 226833 DEBUG nova.storage.rbd_utils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] resizing rbd image 2ea2db2b-f865-43f2-9f44-4a92a02bb804_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:56:33 compute-2 nova_compute[226829]: 2026-01-31 07:56:33.314 226833 DEBUG nova.objects.instance [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lazy-loading 'migration_context' on Instance uuid 2ea2db2b-f865-43f2-9f44-4a92a02bb804 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:56:33 compute-2 nova_compute[226829]: 2026-01-31 07:56:33.335 226833 DEBUG nova.virt.libvirt.driver [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:56:33 compute-2 nova_compute[226829]: 2026-01-31 07:56:33.335 226833 DEBUG nova.virt.libvirt.driver [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Ensure instance console log exists: /var/lib/nova/instances/2ea2db2b-f865-43f2-9f44-4a92a02bb804/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:56:33 compute-2 nova_compute[226829]: 2026-01-31 07:56:33.335 226833 DEBUG oslo_concurrency.lockutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:56:33 compute-2 nova_compute[226829]: 2026-01-31 07:56:33.336 226833 DEBUG oslo_concurrency.lockutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:56:33 compute-2 nova_compute[226829]: 2026-01-31 07:56:33.336 226833 DEBUG oslo_concurrency.lockutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:56:33 compute-2 nova_compute[226829]: 2026-01-31 07:56:33.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:33.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:56:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:34.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:34 compute-2 ceph-mon[77282]: pgmap v1779: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s wr, 1 op/s
Jan 31 07:56:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4280707329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0.
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:56:34.511254) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846194511353, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2414, "num_deletes": 251, "total_data_size": 5798724, "memory_usage": 5872272, "flush_reason": "Manual Compaction"}
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846194532327, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 3776663, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39466, "largest_seqno": 41875, "table_properties": {"data_size": 3767048, "index_size": 6045, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20259, "raw_average_key_size": 20, "raw_value_size": 3747648, "raw_average_value_size": 3797, "num_data_blocks": 264, "num_entries": 987, "num_filter_entries": 987, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845964, "oldest_key_time": 1769845964, "file_creation_time": 1769846194, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 21120 microseconds, and 8015 cpu microseconds.
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:56:34.532378) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 3776663 bytes OK
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:56:34.532396) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:56:34.534138) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:56:34.534151) EVENT_LOG_v1 {"time_micros": 1769846194534147, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:56:34.534167) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 5788235, prev total WAL file size 5788235, number of live WAL files 2.
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:56:34.535008) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(3688KB)], [75(9752KB)]
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846194535080, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 13763334, "oldest_snapshot_seqno": -1}
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 6766 keys, 11851937 bytes, temperature: kUnknown
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846194615308, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 11851937, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11804687, "index_size": 29218, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16965, "raw_key_size": 173672, "raw_average_key_size": 25, "raw_value_size": 11681465, "raw_average_value_size": 1726, "num_data_blocks": 1167, "num_entries": 6766, "num_filter_entries": 6766, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769846194, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:56:34.615539) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 11851937 bytes
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:56:34.617141) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.4 rd, 147.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 9.5 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(6.8) write-amplify(3.1) OK, records in: 7291, records dropped: 525 output_compression: NoCompression
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:56:34.617155) EVENT_LOG_v1 {"time_micros": 1769846194617148, "job": 46, "event": "compaction_finished", "compaction_time_micros": 80313, "compaction_time_cpu_micros": 22014, "output_level": 6, "num_output_files": 1, "total_output_size": 11851937, "num_input_records": 7291, "num_output_records": 6766, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846194617584, "job": 46, "event": "table_file_deletion", "file_number": 77}
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846194618506, "job": 46, "event": "table_file_deletion", "file_number": 75}
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:56:34.534918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:56:34.618606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:56:34.618612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:56:34.618614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:56:34.618615) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:56:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:56:34.618617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:56:35 compute-2 nova_compute[226829]: 2026-01-31 07:56:35.004 226833 DEBUG nova.network.neutron [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Successfully created port: c8e5f1dd-3393-4d16-b1fe-e259ddea188d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:56:35 compute-2 nova_compute[226829]: 2026-01-31 07:56:35.021 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:35 compute-2 nova_compute[226829]: 2026-01-31 07:56:35.253 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:35.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:36.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:36 compute-2 ceph-mon[77282]: pgmap v1780: 305 pgs: 305 active+clean; 222 MiB data, 773 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 510 KiB/s wr, 16 op/s
Jan 31 07:56:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:37.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:37 compute-2 sudo[262600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:56:37 compute-2 sudo[262600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:37 compute-2 sudo[262600]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:37 compute-2 sudo[262625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:56:37 compute-2 sudo[262625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:37 compute-2 sudo[262625]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:37 compute-2 ceph-mon[77282]: pgmap v1781: 305 pgs: 305 active+clean; 237 MiB data, 780 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.1 MiB/s wr, 16 op/s
Jan 31 07:56:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:38.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:56:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1342499023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:39.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:39 compute-2 ceph-mon[77282]: pgmap v1782: 305 pgs: 305 active+clean; 246 MiB data, 789 MiB used, 20 GiB / 21 GiB avail; 702 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 31 07:56:40 compute-2 nova_compute[226829]: 2026-01-31 07:56:40.022 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:40.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:40 compute-2 nova_compute[226829]: 2026-01-31 07:56:40.187 226833 DEBUG nova.network.neutron [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Successfully updated port: c8e5f1dd-3393-4d16-b1fe-e259ddea188d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:56:40 compute-2 nova_compute[226829]: 2026-01-31 07:56:40.254 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:40 compute-2 nova_compute[226829]: 2026-01-31 07:56:40.438 226833 DEBUG oslo_concurrency.lockutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "refresh_cache-2ea2db2b-f865-43f2-9f44-4a92a02bb804" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:56:40 compute-2 nova_compute[226829]: 2026-01-31 07:56:40.438 226833 DEBUG oslo_concurrency.lockutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquired lock "refresh_cache-2ea2db2b-f865-43f2-9f44-4a92a02bb804" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:56:40 compute-2 nova_compute[226829]: 2026-01-31 07:56:40.438 226833 DEBUG nova.network.neutron [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:56:40 compute-2 nova_compute[226829]: 2026-01-31 07:56:40.581 226833 DEBUG nova.compute.manager [req-bb92a2ee-1d67-4130-8c0b-ffc203718938 req-37fcbc91-52e9-4fbc-9723-7b5df79682e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Received event network-changed-c8e5f1dd-3393-4d16-b1fe-e259ddea188d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:56:40 compute-2 nova_compute[226829]: 2026-01-31 07:56:40.581 226833 DEBUG nova.compute.manager [req-bb92a2ee-1d67-4130-8c0b-ffc203718938 req-37fcbc91-52e9-4fbc-9723-7b5df79682e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Refreshing instance network info cache due to event network-changed-c8e5f1dd-3393-4d16-b1fe-e259ddea188d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:56:40 compute-2 nova_compute[226829]: 2026-01-31 07:56:40.581 226833 DEBUG oslo_concurrency.lockutils [req-bb92a2ee-1d67-4130-8c0b-ffc203718938 req-37fcbc91-52e9-4fbc-9723-7b5df79682e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-2ea2db2b-f865-43f2-9f44-4a92a02bb804" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:56:40 compute-2 nova_compute[226829]: 2026-01-31 07:56:40.780 226833 DEBUG nova.network.neutron [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:56:41 compute-2 podman[262651]: 2026-01-31 07:56:41.159831477 +0000 UTC m=+0.048236292 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:56:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:56:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:41.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:56:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:42.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:42 compute-2 ceph-mon[77282]: pgmap v1783: 305 pgs: 305 active+clean; 271 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 36 op/s
Jan 31 07:56:42 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3177322587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:42 compute-2 nova_compute[226829]: 2026-01-31 07:56:42.994 226833 DEBUG nova.network.neutron [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Updating instance_info_cache with network_info: [{"id": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "address": "fa:16:3e:96:da:40", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8e5f1dd-33", "ovs_interfaceid": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.066 226833 DEBUG oslo_concurrency.lockutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Releasing lock "refresh_cache-2ea2db2b-f865-43f2-9f44-4a92a02bb804" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.066 226833 DEBUG nova.compute.manager [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Instance network_info: |[{"id": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "address": "fa:16:3e:96:da:40", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8e5f1dd-33", "ovs_interfaceid": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.067 226833 DEBUG oslo_concurrency.lockutils [req-bb92a2ee-1d67-4130-8c0b-ffc203718938 req-37fcbc91-52e9-4fbc-9723-7b5df79682e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-2ea2db2b-f865-43f2-9f44-4a92a02bb804" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.067 226833 DEBUG nova.network.neutron [req-bb92a2ee-1d67-4130-8c0b-ffc203718938 req-37fcbc91-52e9-4fbc-9723-7b5df79682e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Refreshing network info cache for port c8e5f1dd-3393-4d16-b1fe-e259ddea188d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.069 226833 DEBUG nova.virt.libvirt.driver [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Start _get_guest_xml network_info=[{"id": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "address": "fa:16:3e:96:da:40", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8e5f1dd-33", "ovs_interfaceid": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.073 226833 WARNING nova.virt.libvirt.driver [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.080 226833 DEBUG nova.virt.libvirt.host [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.081 226833 DEBUG nova.virt.libvirt.host [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.084 226833 DEBUG nova.virt.libvirt.host [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.084 226833 DEBUG nova.virt.libvirt.host [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.085 226833 DEBUG nova.virt.libvirt.driver [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.085 226833 DEBUG nova.virt.hardware [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.086 226833 DEBUG nova.virt.hardware [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.086 226833 DEBUG nova.virt.hardware [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.086 226833 DEBUG nova.virt.hardware [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.086 226833 DEBUG nova.virt.hardware [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.087 226833 DEBUG nova.virt.hardware [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.087 226833 DEBUG nova.virt.hardware [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.087 226833 DEBUG nova.virt.hardware [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.087 226833 DEBUG nova.virt.hardware [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.087 226833 DEBUG nova.virt.hardware [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.088 226833 DEBUG nova.virt.hardware [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.091 226833 DEBUG oslo_concurrency.processutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:56:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:56:43 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2742351362' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.512 226833 DEBUG oslo_concurrency.processutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:56:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:43.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.542 226833 DEBUG nova.storage.rbd_utils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] rbd image 2ea2db2b-f865-43f2-9f44-4a92a02bb804_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.546 226833 DEBUG oslo_concurrency.processutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:56:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:56:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:56:43 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2933832284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.966 226833 DEBUG oslo_concurrency.processutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.967 226833 DEBUG nova.virt.libvirt.vif [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:56:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-128986884',display_name='tempest-ListServerFiltersTestJSON-instance-128986884',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-128986884',id=81,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1cd91610847a480caeee0ae3cdabf066',ramdisk_id='',reservation_id='r-w0cs3g2v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-334452958',owner_user_name='tempest-ListServerFiltersTestJSON-334452958-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:56:32Z,user_data=None,user_id='f60419a58aea43b9a0b6db7d61d71246',uuid=2ea2db2b-f865-43f2-9f44-4a92a02bb804,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "address": "fa:16:3e:96:da:40", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8e5f1dd-33", "ovs_interfaceid": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.968 226833 DEBUG nova.network.os_vif_util [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Converting VIF {"id": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "address": "fa:16:3e:96:da:40", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8e5f1dd-33", "ovs_interfaceid": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.969 226833 DEBUG nova.network.os_vif_util [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:da:40,bridge_name='br-int',has_traffic_filtering=True,id=c8e5f1dd-3393-4d16-b1fe-e259ddea188d,network=Network(ca1ed3b2-b27d-427e-a9bd-cc12393752eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8e5f1dd-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.970 226833 DEBUG nova.objects.instance [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2ea2db2b-f865-43f2-9f44-4a92a02bb804 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.989 226833 DEBUG nova.virt.libvirt.driver [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:56:43 compute-2 nova_compute[226829]:   <uuid>2ea2db2b-f865-43f2-9f44-4a92a02bb804</uuid>
Jan 31 07:56:43 compute-2 nova_compute[226829]:   <name>instance-00000051</name>
Jan 31 07:56:43 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:56:43 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:56:43 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-128986884</nova:name>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:56:43</nova:creationTime>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:56:43 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:56:43 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:56:43 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:56:43 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:56:43 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:56:43 compute-2 nova_compute[226829]:         <nova:user uuid="f60419a58aea43b9a0b6db7d61d71246">tempest-ListServerFiltersTestJSON-334452958-project-member</nova:user>
Jan 31 07:56:43 compute-2 nova_compute[226829]:         <nova:project uuid="1cd91610847a480caeee0ae3cdabf066">tempest-ListServerFiltersTestJSON-334452958</nova:project>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 07:56:43 compute-2 nova_compute[226829]:         <nova:port uuid="c8e5f1dd-3393-4d16-b1fe-e259ddea188d">
Jan 31 07:56:43 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:56:43 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:56:43 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <system>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <entry name="serial">2ea2db2b-f865-43f2-9f44-4a92a02bb804</entry>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <entry name="uuid">2ea2db2b-f865-43f2-9f44-4a92a02bb804</entry>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     </system>
Jan 31 07:56:43 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:56:43 compute-2 nova_compute[226829]:   <os>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:   </os>
Jan 31 07:56:43 compute-2 nova_compute[226829]:   <features>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:   </features>
Jan 31 07:56:43 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:56:43 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:56:43 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/2ea2db2b-f865-43f2-9f44-4a92a02bb804_disk">
Jan 31 07:56:43 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       </source>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:56:43 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/2ea2db2b-f865-43f2-9f44-4a92a02bb804_disk.config">
Jan 31 07:56:43 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       </source>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:56:43 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:96:da:40"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <target dev="tapc8e5f1dd-33"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/2ea2db2b-f865-43f2-9f44-4a92a02bb804/console.log" append="off"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <video>
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     </video>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:56:43 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:56:43 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:56:43 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:56:43 compute-2 nova_compute[226829]: </domain>
Jan 31 07:56:43 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.990 226833 DEBUG nova.compute.manager [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Preparing to wait for external event network-vif-plugged-c8e5f1dd-3393-4d16-b1fe-e259ddea188d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.991 226833 DEBUG oslo_concurrency.lockutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.991 226833 DEBUG oslo_concurrency.lockutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.991 226833 DEBUG oslo_concurrency.lockutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.992 226833 DEBUG nova.virt.libvirt.vif [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:56:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-128986884',display_name='tempest-ListServerFiltersTestJSON-instance-128986884',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-128986884',id=81,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1cd91610847a480caeee0ae3cdabf066',ramdisk_id='',reservation_id='r-w0cs3g2v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-334452958',owner_user_name='tempest-ListServerFiltersTestJSON-334452958-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:56:32Z,user_data=None,user_id='f60419a58aea43b9a0b6db7d61d71246',uuid=2ea2db2b-f865-43f2-9f44-4a92a02bb804,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "address": "fa:16:3e:96:da:40", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8e5f1dd-33", "ovs_interfaceid": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.992 226833 DEBUG nova.network.os_vif_util [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Converting VIF {"id": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "address": "fa:16:3e:96:da:40", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8e5f1dd-33", "ovs_interfaceid": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.993 226833 DEBUG nova.network.os_vif_util [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:da:40,bridge_name='br-int',has_traffic_filtering=True,id=c8e5f1dd-3393-4d16-b1fe-e259ddea188d,network=Network(ca1ed3b2-b27d-427e-a9bd-cc12393752eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8e5f1dd-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.993 226833 DEBUG os_vif [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:da:40,bridge_name='br-int',has_traffic_filtering=True,id=c8e5f1dd-3393-4d16-b1fe-e259ddea188d,network=Network(ca1ed3b2-b27d-427e-a9bd-cc12393752eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8e5f1dd-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.994 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.994 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:56:43 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.995 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:56:44 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.999 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:44 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.999 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc8e5f1dd-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:56:44 compute-2 nova_compute[226829]: 2026-01-31 07:56:43.999 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc8e5f1dd-33, col_values=(('external_ids', {'iface-id': 'c8e5f1dd-3393-4d16-b1fe-e259ddea188d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:96:da:40', 'vm-uuid': '2ea2db2b-f865-43f2-9f44-4a92a02bb804'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:56:44 compute-2 nova_compute[226829]: 2026-01-31 07:56:44.001 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:44 compute-2 NetworkManager[48999]: <info>  [1769846204.0031] manager: (tapc8e5f1dd-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/141)
Jan 31 07:56:44 compute-2 nova_compute[226829]: 2026-01-31 07:56:44.006 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:56:44 compute-2 nova_compute[226829]: 2026-01-31 07:56:44.008 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:44 compute-2 nova_compute[226829]: 2026-01-31 07:56:44.010 226833 INFO os_vif [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:da:40,bridge_name='br-int',has_traffic_filtering=True,id=c8e5f1dd-3393-4d16-b1fe-e259ddea188d,network=Network(ca1ed3b2-b27d-427e-a9bd-cc12393752eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8e5f1dd-33')
Jan 31 07:56:44 compute-2 nova_compute[226829]: 2026-01-31 07:56:44.071 226833 DEBUG nova.virt.libvirt.driver [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:56:44 compute-2 nova_compute[226829]: 2026-01-31 07:56:44.071 226833 DEBUG nova.virt.libvirt.driver [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:56:44 compute-2 nova_compute[226829]: 2026-01-31 07:56:44.072 226833 DEBUG nova.virt.libvirt.driver [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] No VIF found with MAC fa:16:3e:96:da:40, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:56:44 compute-2 nova_compute[226829]: 2026-01-31 07:56:44.072 226833 INFO nova.virt.libvirt.driver [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Using config drive
Jan 31 07:56:44 compute-2 nova_compute[226829]: 2026-01-31 07:56:44.100 226833 DEBUG nova.storage.rbd_utils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] rbd image 2ea2db2b-f865-43f2-9f44-4a92a02bb804_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:56:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:44.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:44 compute-2 ceph-mon[77282]: pgmap v1784: 305 pgs: 305 active+clean; 301 MiB data, 808 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.5 MiB/s wr, 63 op/s
Jan 31 07:56:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2742351362' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:56:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2933832284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:56:44 compute-2 nova_compute[226829]: 2026-01-31 07:56:44.690 226833 INFO nova.virt.libvirt.driver [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Creating config drive at /var/lib/nova/instances/2ea2db2b-f865-43f2-9f44-4a92a02bb804/disk.config
Jan 31 07:56:44 compute-2 nova_compute[226829]: 2026-01-31 07:56:44.694 226833 DEBUG oslo_concurrency.processutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2ea2db2b-f865-43f2-9f44-4a92a02bb804/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp1prjo23z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:56:44 compute-2 nova_compute[226829]: 2026-01-31 07:56:44.815 226833 DEBUG oslo_concurrency.processutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2ea2db2b-f865-43f2-9f44-4a92a02bb804/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp1prjo23z" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:56:44 compute-2 nova_compute[226829]: 2026-01-31 07:56:44.842 226833 DEBUG nova.storage.rbd_utils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] rbd image 2ea2db2b-f865-43f2-9f44-4a92a02bb804_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:56:44 compute-2 nova_compute[226829]: 2026-01-31 07:56:44.845 226833 DEBUG oslo_concurrency.processutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2ea2db2b-f865-43f2-9f44-4a92a02bb804/disk.config 2ea2db2b-f865-43f2-9f44-4a92a02bb804_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:56:45 compute-2 nova_compute[226829]: 2026-01-31 07:56:45.025 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:45 compute-2 nova_compute[226829]: 2026-01-31 07:56:45.297 226833 DEBUG oslo_concurrency.processutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2ea2db2b-f865-43f2-9f44-4a92a02bb804/disk.config 2ea2db2b-f865-43f2-9f44-4a92a02bb804_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:56:45 compute-2 nova_compute[226829]: 2026-01-31 07:56:45.298 226833 INFO nova.virt.libvirt.driver [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Deleting local config drive /var/lib/nova/instances/2ea2db2b-f865-43f2-9f44-4a92a02bb804/disk.config because it was imported into RBD.
Jan 31 07:56:45 compute-2 kernel: tapc8e5f1dd-33: entered promiscuous mode
Jan 31 07:56:45 compute-2 ovn_controller[133834]: 2026-01-31T07:56:45Z|00269|binding|INFO|Claiming lport c8e5f1dd-3393-4d16-b1fe-e259ddea188d for this chassis.
Jan 31 07:56:45 compute-2 ovn_controller[133834]: 2026-01-31T07:56:45Z|00270|binding|INFO|c8e5f1dd-3393-4d16-b1fe-e259ddea188d: Claiming fa:16:3e:96:da:40 10.100.0.13
Jan 31 07:56:45 compute-2 nova_compute[226829]: 2026-01-31 07:56:45.353 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:45 compute-2 NetworkManager[48999]: <info>  [1769846205.3556] manager: (tapc8e5f1dd-33): new Tun device (/org/freedesktop/NetworkManager/Devices/142)
Jan 31 07:56:45 compute-2 nova_compute[226829]: 2026-01-31 07:56:45.356 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:45 compute-2 nova_compute[226829]: 2026-01-31 07:56:45.370 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:45 compute-2 ovn_controller[133834]: 2026-01-31T07:56:45Z|00271|binding|INFO|Setting lport c8e5f1dd-3393-4d16-b1fe-e259ddea188d ovn-installed in OVS
Jan 31 07:56:45 compute-2 nova_compute[226829]: 2026-01-31 07:56:45.374 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:45 compute-2 systemd-udevd[262808]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:56:45 compute-2 systemd-machined[195142]: New machine qemu-34-instance-00000051.
Jan 31 07:56:45 compute-2 NetworkManager[48999]: <info>  [1769846205.3842] device (tapc8e5f1dd-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:56:45 compute-2 NetworkManager[48999]: <info>  [1769846205.3849] device (tapc8e5f1dd-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:56:45 compute-2 systemd[1]: Started Virtual Machine qemu-34-instance-00000051.
Jan 31 07:56:45 compute-2 ovn_controller[133834]: 2026-01-31T07:56:45Z|00272|binding|INFO|Setting lport c8e5f1dd-3393-4d16-b1fe-e259ddea188d up in Southbound
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:45.394 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:da:40 10.100.0.13'], port_security=['fa:16:3e:96:da:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2ea2db2b-f865-43f2-9f44-4a92a02bb804', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1cd91610847a480caeee0ae3cdabf066', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f1695a46-1d81-4453-9c52-9917c020bc65', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e38d659-b6a8-4d3d-8a23-b8299c5114da, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=c8e5f1dd-3393-4d16-b1fe-e259ddea188d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:45.396 143841 INFO neutron.agent.ovn.metadata.agent [-] Port c8e5f1dd-3393-4d16-b1fe-e259ddea188d in datapath ca1ed3b2-b27d-427e-a9bd-cc12393752eb bound to our chassis
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:45.397 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ca1ed3b2-b27d-427e-a9bd-cc12393752eb
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:45.405 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3cc5e190-22ca-4dc7-a68f-054095c268f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:45.407 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapca1ed3b2-b1 in ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:45.409 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapca1ed3b2-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:45.409 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e6412ab9-b8b9-4c17-9b0b-7024f3455be1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:45.410 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b1362866-45b9-4d2a-b98c-205e268deaa4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:45.421 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[eee5a2cb-bbf6-4330-a259-d566d93d1f9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:45.432 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a5c16e07-4472-4eeb-952b-5c04687270c0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:45.452 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[577e636c-d25c-4a79-8f58-0d3387e695d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:45.459 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5a203ab8-9eb1-45c1-ac13-b7acd357ce71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:45 compute-2 NetworkManager[48999]: <info>  [1769846205.4606] manager: (tapca1ed3b2-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/143)
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:45.479 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[f7719b37-3c4d-4530-8f02-4b9810cb4c04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:45.481 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[a1dfde6d-576b-45bc-a5e4-0d1347ed53c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:45 compute-2 NetworkManager[48999]: <info>  [1769846205.4958] device (tapca1ed3b2-b0): carrier: link connected
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:45.507 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[0dda8b12-f3db-441e-b3da-644f6a6e9361]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:45.521 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c124b68c-bd0c-4538-b6bd-92687a17a017]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca1ed3b2-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:70:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 86], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 645796, 'reachable_time': 19720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262843, 'error': None, 'target': 'ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:45.531 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8e313902-66a3-4590-9667-223575021f63]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe67:7011'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 645796, 'tstamp': 645796}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262844, 'error': None, 'target': 'ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:45.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:45.545 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[857502d6-3cab-4db1-bfff-fdac86e5c3f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca1ed3b2-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:70:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 86], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 645796, 'reachable_time': 19720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 262845, 'error': None, 'target': 'ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:45.565 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a11db744-f6bf-4b07-b1b3-8fc282dd0ed2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/351507074' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:56:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/351507074' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:45.615 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5ce8351d-3731-4c4e-8b36-09f4c2c45f2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:45.617 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca1ed3b2-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:45.617 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:45.618 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca1ed3b2-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:56:45 compute-2 NetworkManager[48999]: <info>  [1769846205.6204] manager: (tapca1ed3b2-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/144)
Jan 31 07:56:45 compute-2 kernel: tapca1ed3b2-b0: entered promiscuous mode
Jan 31 07:56:45 compute-2 nova_compute[226829]: 2026-01-31 07:56:45.621 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:45.623 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapca1ed3b2-b0, col_values=(('external_ids', {'iface-id': 'd19b5f05-fa79-4835-8ef4-51f87493d59b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:56:45 compute-2 nova_compute[226829]: 2026-01-31 07:56:45.624 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:45 compute-2 ovn_controller[133834]: 2026-01-31T07:56:45Z|00273|binding|INFO|Releasing lport d19b5f05-fa79-4835-8ef4-51f87493d59b from this chassis (sb_readonly=0)
Jan 31 07:56:45 compute-2 nova_compute[226829]: 2026-01-31 07:56:45.624 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:45.625 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ca1ed3b2-b27d-427e-a9bd-cc12393752eb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ca1ed3b2-b27d-427e-a9bd-cc12393752eb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:45.626 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d0dc5d57-aacc-4cbf-aef8-d9d4974b4db1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:45.627 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: global
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-ca1ed3b2-b27d-427e-a9bd-cc12393752eb
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/ca1ed3b2-b27d-427e-a9bd-cc12393752eb.pid.haproxy
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID ca1ed3b2-b27d-427e-a9bd-cc12393752eb
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:56:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:56:45.628 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'env', 'PROCESS_TAG=haproxy-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ca1ed3b2-b27d-427e-a9bd-cc12393752eb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:56:45 compute-2 nova_compute[226829]: 2026-01-31 07:56:45.629 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:45 compute-2 nova_compute[226829]: 2026-01-31 07:56:45.955 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846205.9550643, 2ea2db2b-f865-43f2-9f44-4a92a02bb804 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:56:45 compute-2 nova_compute[226829]: 2026-01-31 07:56:45.956 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] VM Started (Lifecycle Event)
Jan 31 07:56:45 compute-2 nova_compute[226829]: 2026-01-31 07:56:45.985 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:56:45 compute-2 nova_compute[226829]: 2026-01-31 07:56:45.989 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846205.9563043, 2ea2db2b-f865-43f2-9f44-4a92a02bb804 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:56:45 compute-2 nova_compute[226829]: 2026-01-31 07:56:45.989 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] VM Paused (Lifecycle Event)
Jan 31 07:56:46 compute-2 podman[262915]: 2026-01-31 07:56:45.92519038 +0000 UTC m=+0.022905810 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:56:46 compute-2 nova_compute[226829]: 2026-01-31 07:56:46.023 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:56:46 compute-2 nova_compute[226829]: 2026-01-31 07:56:46.026 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:56:46 compute-2 nova_compute[226829]: 2026-01-31 07:56:46.062 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:56:46 compute-2 podman[262915]: 2026-01-31 07:56:46.174575934 +0000 UTC m=+0.272291344 container create 304f505f4da4cbd7f8118556152eb4d538d99307ed6182420ebaface71ef034e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 07:56:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:56:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:46.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:56:46 compute-2 systemd[1]: Started libpod-conmon-304f505f4da4cbd7f8118556152eb4d538d99307ed6182420ebaface71ef034e.scope.
Jan 31 07:56:46 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:56:46 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15c5d34be87cce977f692f01df0953c383e0c76547baab5e2b2d215d6c47b1be/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:46 compute-2 podman[262915]: 2026-01-31 07:56:46.369297763 +0000 UTC m=+0.467013203 container init 304f505f4da4cbd7f8118556152eb4d538d99307ed6182420ebaface71ef034e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 07:56:46 compute-2 podman[262915]: 2026-01-31 07:56:46.373976349 +0000 UTC m=+0.471691759 container start 304f505f4da4cbd7f8118556152eb4d538d99307ed6182420ebaface71ef034e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:56:46 compute-2 neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb[262931]: [NOTICE]   (262935) : New worker (262937) forked
Jan 31 07:56:46 compute-2 neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb[262931]: [NOTICE]   (262935) : Loading success.
Jan 31 07:56:46 compute-2 nova_compute[226829]: 2026-01-31 07:56:46.553 226833 DEBUG nova.network.neutron [req-bb92a2ee-1d67-4130-8c0b-ffc203718938 req-37fcbc91-52e9-4fbc-9723-7b5df79682e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Updated VIF entry in instance network info cache for port c8e5f1dd-3393-4d16-b1fe-e259ddea188d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:56:46 compute-2 nova_compute[226829]: 2026-01-31 07:56:46.554 226833 DEBUG nova.network.neutron [req-bb92a2ee-1d67-4130-8c0b-ffc203718938 req-37fcbc91-52e9-4fbc-9723-7b5df79682e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Updating instance_info_cache with network_info: [{"id": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "address": "fa:16:3e:96:da:40", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8e5f1dd-33", "ovs_interfaceid": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:56:46 compute-2 nova_compute[226829]: 2026-01-31 07:56:46.584 226833 DEBUG oslo_concurrency.lockutils [req-bb92a2ee-1d67-4130-8c0b-ffc203718938 req-37fcbc91-52e9-4fbc-9723-7b5df79682e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-2ea2db2b-f865-43f2-9f44-4a92a02bb804" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:56:46 compute-2 ceph-mon[77282]: pgmap v1785: 305 pgs: 305 active+clean; 367 MiB data, 841 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 6.2 MiB/s wr, 101 op/s
Jan 31 07:56:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4256788063' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:56:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2010572055' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:56:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:56:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:47.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:56:47 compute-2 nova_compute[226829]: 2026-01-31 07:56:47.967 226833 DEBUG nova.compute.manager [req-45a7a5d3-1ce8-4f3a-88aa-0c1282690762 req-a074c80c-2380-4a82-9b86-016805da00a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Received event network-vif-plugged-c8e5f1dd-3393-4d16-b1fe-e259ddea188d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:56:47 compute-2 nova_compute[226829]: 2026-01-31 07:56:47.968 226833 DEBUG oslo_concurrency.lockutils [req-45a7a5d3-1ce8-4f3a-88aa-0c1282690762 req-a074c80c-2380-4a82-9b86-016805da00a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:56:47 compute-2 nova_compute[226829]: 2026-01-31 07:56:47.968 226833 DEBUG oslo_concurrency.lockutils [req-45a7a5d3-1ce8-4f3a-88aa-0c1282690762 req-a074c80c-2380-4a82-9b86-016805da00a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:56:47 compute-2 nova_compute[226829]: 2026-01-31 07:56:47.968 226833 DEBUG oslo_concurrency.lockutils [req-45a7a5d3-1ce8-4f3a-88aa-0c1282690762 req-a074c80c-2380-4a82-9b86-016805da00a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:56:47 compute-2 nova_compute[226829]: 2026-01-31 07:56:47.969 226833 DEBUG nova.compute.manager [req-45a7a5d3-1ce8-4f3a-88aa-0c1282690762 req-a074c80c-2380-4a82-9b86-016805da00a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Processing event network-vif-plugged-c8e5f1dd-3393-4d16-b1fe-e259ddea188d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:56:47 compute-2 nova_compute[226829]: 2026-01-31 07:56:47.969 226833 DEBUG nova.compute.manager [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:56:47 compute-2 nova_compute[226829]: 2026-01-31 07:56:47.973 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846207.9734879, 2ea2db2b-f865-43f2-9f44-4a92a02bb804 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:56:47 compute-2 nova_compute[226829]: 2026-01-31 07:56:47.973 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] VM Resumed (Lifecycle Event)
Jan 31 07:56:47 compute-2 nova_compute[226829]: 2026-01-31 07:56:47.975 226833 DEBUG nova.virt.libvirt.driver [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:56:47 compute-2 nova_compute[226829]: 2026-01-31 07:56:47.979 226833 INFO nova.virt.libvirt.driver [-] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Instance spawned successfully.
Jan 31 07:56:47 compute-2 nova_compute[226829]: 2026-01-31 07:56:47.979 226833 DEBUG nova.virt.libvirt.driver [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:56:48 compute-2 nova_compute[226829]: 2026-01-31 07:56:48.029 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:56:48 compute-2 nova_compute[226829]: 2026-01-31 07:56:48.033 226833 DEBUG nova.virt.libvirt.driver [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:56:48 compute-2 nova_compute[226829]: 2026-01-31 07:56:48.033 226833 DEBUG nova.virt.libvirt.driver [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:56:48 compute-2 nova_compute[226829]: 2026-01-31 07:56:48.034 226833 DEBUG nova.virt.libvirt.driver [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:56:48 compute-2 nova_compute[226829]: 2026-01-31 07:56:48.034 226833 DEBUG nova.virt.libvirt.driver [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:56:48 compute-2 nova_compute[226829]: 2026-01-31 07:56:48.035 226833 DEBUG nova.virt.libvirt.driver [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:56:48 compute-2 nova_compute[226829]: 2026-01-31 07:56:48.035 226833 DEBUG nova.virt.libvirt.driver [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:56:48 compute-2 nova_compute[226829]: 2026-01-31 07:56:48.039 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:56:48 compute-2 nova_compute[226829]: 2026-01-31 07:56:48.109 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:56:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:48.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:48 compute-2 nova_compute[226829]: 2026-01-31 07:56:48.229 226833 INFO nova.compute.manager [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Took 15.81 seconds to spawn the instance on the hypervisor.
Jan 31 07:56:48 compute-2 nova_compute[226829]: 2026-01-31 07:56:48.230 226833 DEBUG nova.compute.manager [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:56:48 compute-2 nova_compute[226829]: 2026-01-31 07:56:48.446 226833 INFO nova.compute.manager [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Took 17.44 seconds to build instance.
Jan 31 07:56:48 compute-2 nova_compute[226829]: 2026-01-31 07:56:48.512 226833 DEBUG oslo_concurrency.lockutils [None req-be20cab2-b3eb-4f36-979e-96b4f8da7672 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:56:48 compute-2 ceph-mon[77282]: pgmap v1786: 305 pgs: 305 active+clean; 375 MiB data, 849 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 6.3 MiB/s wr, 103 op/s
Jan 31 07:56:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:56:49 compute-2 nova_compute[226829]: 2026-01-31 07:56:49.003 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:49.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:49 compute-2 ceph-mon[77282]: pgmap v1787: 305 pgs: 305 active+clean; 386 MiB data, 853 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 6.0 MiB/s wr, 126 op/s
Jan 31 07:56:50 compute-2 nova_compute[226829]: 2026-01-31 07:56:50.038 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:50.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1084162188' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:56:51 compute-2 nova_compute[226829]: 2026-01-31 07:56:51.086 226833 DEBUG nova.compute.manager [req-022656f1-d403-469f-b5e6-caab6390d63c req-8595c498-88e2-4562-99de-3df1fc829e37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Received event network-vif-plugged-c8e5f1dd-3393-4d16-b1fe-e259ddea188d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:56:51 compute-2 nova_compute[226829]: 2026-01-31 07:56:51.087 226833 DEBUG oslo_concurrency.lockutils [req-022656f1-d403-469f-b5e6-caab6390d63c req-8595c498-88e2-4562-99de-3df1fc829e37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:56:51 compute-2 nova_compute[226829]: 2026-01-31 07:56:51.087 226833 DEBUG oslo_concurrency.lockutils [req-022656f1-d403-469f-b5e6-caab6390d63c req-8595c498-88e2-4562-99de-3df1fc829e37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:56:51 compute-2 nova_compute[226829]: 2026-01-31 07:56:51.087 226833 DEBUG oslo_concurrency.lockutils [req-022656f1-d403-469f-b5e6-caab6390d63c req-8595c498-88e2-4562-99de-3df1fc829e37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:56:51 compute-2 nova_compute[226829]: 2026-01-31 07:56:51.087 226833 DEBUG nova.compute.manager [req-022656f1-d403-469f-b5e6-caab6390d63c req-8595c498-88e2-4562-99de-3df1fc829e37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] No waiting events found dispatching network-vif-plugged-c8e5f1dd-3393-4d16-b1fe-e259ddea188d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:56:51 compute-2 nova_compute[226829]: 2026-01-31 07:56:51.087 226833 WARNING nova.compute.manager [req-022656f1-d403-469f-b5e6-caab6390d63c req-8595c498-88e2-4562-99de-3df1fc829e37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Received unexpected event network-vif-plugged-c8e5f1dd-3393-4d16-b1fe-e259ddea188d for instance with vm_state active and task_state None.
Jan 31 07:56:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:51.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:52 compute-2 ceph-mon[77282]: pgmap v1788: 305 pgs: 305 active+clean; 386 MiB data, 853 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.3 MiB/s wr, 153 op/s
Jan 31 07:56:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1497168377' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:56:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:56:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:52.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:56:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:56:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:53.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:56:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:56:54 compute-2 nova_compute[226829]: 2026-01-31 07:56:54.007 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:54.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:54 compute-2 ceph-mon[77282]: pgmap v1789: 305 pgs: 305 active+clean; 386 MiB data, 853 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.8 MiB/s wr, 193 op/s
Jan 31 07:56:55 compute-2 nova_compute[226829]: 2026-01-31 07:56:55.040 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:55.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:56.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:56 compute-2 ceph-mon[77282]: pgmap v1790: 305 pgs: 305 active+clean; 386 MiB data, 853 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.7 MiB/s wr, 201 op/s
Jan 31 07:56:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:57.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:57 compute-2 sudo[262953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:56:57 compute-2 sudo[262953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:57 compute-2 sudo[262953]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:58 compute-2 sudo[262978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:56:58 compute-2 sudo[262978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:58 compute-2 sudo[262978]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:58.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:58 compute-2 ceph-mon[77282]: pgmap v1791: 305 pgs: 305 active+clean; 386 MiB data, 853 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 920 KiB/s wr, 163 op/s
Jan 31 07:56:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:56:59 compute-2 nova_compute[226829]: 2026-01-31 07:56:59.011 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:59 compute-2 podman[263003]: 2026-01-31 07:56:59.200386008 +0000 UTC m=+0.075303989 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, 
tcib_managed=true, container_name=ovn_controller)
Jan 31 07:56:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:56:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 8296 writes, 42K keys, 8296 commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s
                                           Cumulative WAL: 8296 writes, 8296 syncs, 1.00 writes per sync, written: 0.08 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1568 writes, 7461 keys, 1568 commit groups, 1.0 writes per commit group, ingest: 15.41 MB, 0.03 MB/s
                                           Interval WAL: 1569 writes, 1569 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     72.2      0.71              0.15        23    0.031       0      0       0.0       0.0
                                             L6      1/0   11.30 MB   0.0      0.2     0.1      0.2       0.2      0.0       0.0   3.9    137.6    114.4      1.74              0.63        22    0.079    121K    12K       0.0       0.0
                                            Sum      1/0   11.30 MB   0.0      0.2     0.1      0.2       0.2      0.1       0.0   4.9     97.6    102.2      2.46              0.78        45    0.055    121K    12K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.8    132.2    135.2      0.46              0.20        10    0.046     34K   2631       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.1      0.2       0.2      0.0       0.0   0.0    137.6    114.4      1.74              0.63        22    0.079    121K    12K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     72.4      0.71              0.15        22    0.032       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.050, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.25 GB write, 0.08 MB/s write, 0.23 GB read, 0.08 MB/s read, 2.5 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559ef3d1f1f0#2 capacity: 304.00 MB usage: 27.29 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000235 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1612,26.35 MB,8.66857%) FilterBlock(45,344.05 KB,0.110521%) IndexBlock(45,621.00 KB,0.199489%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 07:56:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:56:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:59.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:00 compute-2 nova_compute[226829]: 2026-01-31 07:57:00.043 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:57:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:00.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:57:00 compute-2 ceph-mon[77282]: pgmap v1792: 305 pgs: 305 active+clean; 386 MiB data, 853 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 353 KiB/s wr, 216 op/s
Jan 31 07:57:00 compute-2 ovn_controller[133834]: 2026-01-31T07:57:00Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:96:da:40 10.100.0.13
Jan 31 07:57:00 compute-2 ovn_controller[133834]: 2026-01-31T07:57:00Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:96:da:40 10.100.0.13
Jan 31 07:57:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:01.227 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:57:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:01.228 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:57:01 compute-2 nova_compute[226829]: 2026-01-31 07:57:01.229 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:01.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:57:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:02.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:57:02 compute-2 ceph-mon[77282]: pgmap v1793: 305 pgs: 305 active+clean; 394 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 820 KiB/s wr, 201 op/s
Jan 31 07:57:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:03.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:57:04 compute-2 nova_compute[226829]: 2026-01-31 07:57:04.016 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:57:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:04.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:57:04 compute-2 ceph-mon[77282]: pgmap v1794: 305 pgs: 305 active+clean; 410 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.9 MiB/s wr, 194 op/s
Jan 31 07:57:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3621108763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:57:05 compute-2 nova_compute[226829]: 2026-01-31 07:57:05.044 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:05.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4031796697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:57:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:06.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:06 compute-2 nova_compute[226829]: 2026-01-31 07:57:06.527 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:57:06 compute-2 ceph-mon[77282]: pgmap v1795: 305 pgs: 305 active+clean; 443 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 4.1 MiB/s wr, 221 op/s
Jan 31 07:57:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:06.868 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:57:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:06.868 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:57:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:06.869 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:57:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:07.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:08.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:08 compute-2 nova_compute[226829]: 2026-01-31 07:57:08.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:57:08 compute-2 ceph-mon[77282]: pgmap v1796: 305 pgs: 305 active+clean; 448 MiB data, 903 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 199 op/s
Jan 31 07:57:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:57:09 compute-2 nova_compute[226829]: 2026-01-31 07:57:09.018 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:09 compute-2 nova_compute[226829]: 2026-01-31 07:57:09.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:57:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:09.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:09 compute-2 ceph-mon[77282]: pgmap v1797: 305 pgs: 305 active+clean; 477 MiB data, 918 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 245 op/s
Jan 31 07:57:10 compute-2 nova_compute[226829]: 2026-01-31 07:57:10.079 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:10 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:10.231 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:57:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:10.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:11.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:12 compute-2 podman[263036]: 2026-01-31 07:57:12.18839816 +0000 UTC m=+0.072308630 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:57:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:12.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:12 compute-2 ceph-mon[77282]: pgmap v1798: 305 pgs: 305 active+clean; 456 MiB data, 921 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 6.4 MiB/s wr, 213 op/s
Jan 31 07:57:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:13.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:57:14 compute-2 nova_compute[226829]: 2026-01-31 07:57:14.022 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:14.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:14 compute-2 ceph-mon[77282]: pgmap v1799: 305 pgs: 305 active+clean; 435 MiB data, 921 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 5.6 MiB/s wr, 233 op/s
Jan 31 07:57:15 compute-2 nova_compute[226829]: 2026-01-31 07:57:15.082 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:15 compute-2 nova_compute[226829]: 2026-01-31 07:57:15.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:57:15 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-2[77982]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 07:57:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:57:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:15.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:57:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:16.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:16 compute-2 ceph-mon[77282]: pgmap v1800: 305 pgs: 305 active+clean; 405 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.5 MiB/s wr, 245 op/s
Jan 31 07:57:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/747376279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:57:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/36840789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:57:16 compute-2 nova_compute[226829]: 2026-01-31 07:57:16.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:57:16 compute-2 nova_compute[226829]: 2026-01-31 07:57:16.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:57:16 compute-2 nova_compute[226829]: 2026-01-31 07:57:16.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:57:16 compute-2 nova_compute[226829]: 2026-01-31 07:57:16.816 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-2ea2db2b-f865-43f2-9f44-4a92a02bb804" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:57:16 compute-2 nova_compute[226829]: 2026-01-31 07:57:16.816 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-2ea2db2b-f865-43f2-9f44-4a92a02bb804" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:57:16 compute-2 nova_compute[226829]: 2026-01-31 07:57:16.816 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 07:57:16 compute-2 nova_compute[226829]: 2026-01-31 07:57:16.816 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2ea2db2b-f865-43f2-9f44-4a92a02bb804 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:57:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:17.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:17 compute-2 nova_compute[226829]: 2026-01-31 07:57:17.908 226833 DEBUG oslo_concurrency.lockutils [None req-021cfee8-e19b-4eec-bd26-0490ac349987 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:57:17 compute-2 nova_compute[226829]: 2026-01-31 07:57:17.908 226833 DEBUG oslo_concurrency.lockutils [None req-021cfee8-e19b-4eec-bd26-0490ac349987 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:57:17 compute-2 nova_compute[226829]: 2026-01-31 07:57:17.908 226833 DEBUG nova.compute.manager [None req-021cfee8-e19b-4eec-bd26-0490ac349987 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:57:17 compute-2 nova_compute[226829]: 2026-01-31 07:57:17.913 226833 DEBUG nova.compute.manager [None req-021cfee8-e19b-4eec-bd26-0490ac349987 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Jan 31 07:57:17 compute-2 nova_compute[226829]: 2026-01-31 07:57:17.914 226833 DEBUG nova.objects.instance [None req-021cfee8-e19b-4eec-bd26-0490ac349987 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lazy-loading 'flavor' on Instance uuid 2ea2db2b-f865-43f2-9f44-4a92a02bb804 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:57:18 compute-2 sudo[263061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:57:18 compute-2 sudo[263061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:18 compute-2 sudo[263061]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:18 compute-2 sudo[263086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:57:18 compute-2 sudo[263086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:18 compute-2 sudo[263086]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:18 compute-2 nova_compute[226829]: 2026-01-31 07:57:18.152 226833 DEBUG nova.virt.libvirt.driver [None req-021cfee8-e19b-4eec-bd26-0490ac349987 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 07:57:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:18.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:18 compute-2 ceph-mon[77282]: pgmap v1801: 305 pgs: 305 active+clean; 405 MiB data, 903 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.4 MiB/s wr, 174 op/s
Jan 31 07:57:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:57:19 compute-2 nova_compute[226829]: 2026-01-31 07:57:19.025 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:19 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3650713589' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:57:19 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/664972945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:57:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:19.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:20 compute-2 nova_compute[226829]: 2026-01-31 07:57:20.084 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:20.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:20 compute-2 ceph-mon[77282]: pgmap v1802: 305 pgs: 305 active+clean; 405 MiB data, 897 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 164 op/s
Jan 31 07:57:20 compute-2 kernel: tapc8e5f1dd-33 (unregistering): left promiscuous mode
Jan 31 07:57:20 compute-2 NetworkManager[48999]: <info>  [1769846240.9913] device (tapc8e5f1dd-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:57:21 compute-2 nova_compute[226829]: 2026-01-31 07:57:21.027 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:21 compute-2 ovn_controller[133834]: 2026-01-31T07:57:21Z|00274|binding|INFO|Releasing lport c8e5f1dd-3393-4d16-b1fe-e259ddea188d from this chassis (sb_readonly=0)
Jan 31 07:57:21 compute-2 ovn_controller[133834]: 2026-01-31T07:57:21Z|00275|binding|INFO|Setting lport c8e5f1dd-3393-4d16-b1fe-e259ddea188d down in Southbound
Jan 31 07:57:21 compute-2 ovn_controller[133834]: 2026-01-31T07:57:21Z|00276|binding|INFO|Removing iface tapc8e5f1dd-33 ovn-installed in OVS
Jan 31 07:57:21 compute-2 nova_compute[226829]: 2026-01-31 07:57:21.031 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:21 compute-2 nova_compute[226829]: 2026-01-31 07:57:21.036 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:21 compute-2 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000051.scope: Deactivated successfully.
Jan 31 07:57:21 compute-2 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000051.scope: Consumed 13.952s CPU time.
Jan 31 07:57:21 compute-2 systemd-machined[195142]: Machine qemu-34-instance-00000051 terminated.
Jan 31 07:57:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:21.239 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:da:40 10.100.0.13'], port_security=['fa:16:3e:96:da:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2ea2db2b-f865-43f2-9f44-4a92a02bb804', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1cd91610847a480caeee0ae3cdabf066', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f1695a46-1d81-4453-9c52-9917c020bc65', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e38d659-b6a8-4d3d-8a23-b8299c5114da, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=c8e5f1dd-3393-4d16-b1fe-e259ddea188d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:57:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:21.240 143841 INFO neutron.agent.ovn.metadata.agent [-] Port c8e5f1dd-3393-4d16-b1fe-e259ddea188d in datapath ca1ed3b2-b27d-427e-a9bd-cc12393752eb unbound from our chassis
Jan 31 07:57:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:21.242 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ca1ed3b2-b27d-427e-a9bd-cc12393752eb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:57:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:21.244 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d89c9ead-de4c-4735-860d-b6d285b2918e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:21.246 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb namespace which is not needed anymore
Jan 31 07:57:21 compute-2 nova_compute[226829]: 2026-01-31 07:57:21.260 226833 INFO nova.virt.libvirt.driver [None req-021cfee8-e19b-4eec-bd26-0490ac349987 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Instance shutdown successfully after 3 seconds.
Jan 31 07:57:21 compute-2 nova_compute[226829]: 2026-01-31 07:57:21.267 226833 INFO nova.virt.libvirt.driver [-] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Instance destroyed successfully.
Jan 31 07:57:21 compute-2 nova_compute[226829]: 2026-01-31 07:57:21.268 226833 DEBUG nova.objects.instance [None req-021cfee8-e19b-4eec-bd26-0490ac349987 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lazy-loading 'numa_topology' on Instance uuid 2ea2db2b-f865-43f2-9f44-4a92a02bb804 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:57:21 compute-2 neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb[262931]: [NOTICE]   (262935) : haproxy version is 2.8.14-c23fe91
Jan 31 07:57:21 compute-2 neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb[262931]: [NOTICE]   (262935) : path to executable is /usr/sbin/haproxy
Jan 31 07:57:21 compute-2 neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb[262931]: [WARNING]  (262935) : Exiting Master process...
Jan 31 07:57:21 compute-2 neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb[262931]: [ALERT]    (262935) : Current worker (262937) exited with code 143 (Terminated)
Jan 31 07:57:21 compute-2 neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb[262931]: [WARNING]  (262935) : All workers exited. Exiting... (0)
Jan 31 07:57:21 compute-2 systemd[1]: libpod-304f505f4da4cbd7f8118556152eb4d538d99307ed6182420ebaface71ef034e.scope: Deactivated successfully.
Jan 31 07:57:21 compute-2 podman[263149]: 2026-01-31 07:57:21.443499229 +0000 UTC m=+0.134338239 container died 304f505f4da4cbd7f8118556152eb4d538d99307ed6182420ebaface71ef034e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 07:57:21 compute-2 systemd[1]: var-lib-containers-storage-overlay-15c5d34be87cce977f692f01df0953c383e0c76547baab5e2b2d215d6c47b1be-merged.mount: Deactivated successfully.
Jan 31 07:57:21 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-304f505f4da4cbd7f8118556152eb4d538d99307ed6182420ebaface71ef034e-userdata-shm.mount: Deactivated successfully.
Jan 31 07:57:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:21.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:21 compute-2 nova_compute[226829]: 2026-01-31 07:57:21.823 226833 DEBUG nova.compute.manager [None req-021cfee8-e19b-4eec-bd26-0490ac349987 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:57:21 compute-2 podman[263149]: 2026-01-31 07:57:21.839452805 +0000 UTC m=+0.530291805 container cleanup 304f505f4da4cbd7f8118556152eb4d538d99307ed6182420ebaface71ef034e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 07:57:21 compute-2 systemd[1]: libpod-conmon-304f505f4da4cbd7f8118556152eb4d538d99307ed6182420ebaface71ef034e.scope: Deactivated successfully.
Jan 31 07:57:21 compute-2 ceph-mon[77282]: pgmap v1803: 305 pgs: 305 active+clean; 397 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 362 KiB/s wr, 128 op/s
Jan 31 07:57:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:22.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:22 compute-2 podman[263179]: 2026-01-31 07:57:22.272864674 +0000 UTC m=+0.420845860 container remove 304f505f4da4cbd7f8118556152eb4d538d99307ed6182420ebaface71ef034e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:57:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:22.277 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[44eea06e-d474-46d3-a12d-1dc27fefaa04]: (4, ('Sat Jan 31 07:57:21 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb (304f505f4da4cbd7f8118556152eb4d538d99307ed6182420ebaface71ef034e)\n304f505f4da4cbd7f8118556152eb4d538d99307ed6182420ebaface71ef034e\nSat Jan 31 07:57:21 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb (304f505f4da4cbd7f8118556152eb4d538d99307ed6182420ebaface71ef034e)\n304f505f4da4cbd7f8118556152eb4d538d99307ed6182420ebaface71ef034e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:22.279 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f946ff70-0452-4bec-94e4-cb6e5a188aca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:22.280 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca1ed3b2-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:57:22 compute-2 nova_compute[226829]: 2026-01-31 07:57:22.283 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:22 compute-2 kernel: tapca1ed3b2-b0: left promiscuous mode
Jan 31 07:57:22 compute-2 nova_compute[226829]: 2026-01-31 07:57:22.291 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:22.294 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ff86e7eb-a27a-4d2f-b984-b4613e62a2b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:22.313 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c608ca47-892a-4592-9631-0e2b5f964f43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:22.315 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f08f0642-3912-44f7-b379-47f218e4edc9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:22.326 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[06f3242e-208b-4813-aaa0-ec1cd82f170b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 645792, 'reachable_time': 44877, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263199, 'error': None, 'target': 'ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:22.330 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:57:22 compute-2 systemd[1]: run-netns-ovnmeta\x2dca1ed3b2\x2db27d\x2d427e\x2da9bd\x2dcc12393752eb.mount: Deactivated successfully.
Jan 31 07:57:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:22.331 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[9abc5c52-01ce-4cea-85f1-b368ea3caddb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:22 compute-2 sudo[263200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:57:22 compute-2 sudo[263200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:22 compute-2 sudo[263200]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:22 compute-2 sudo[263225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:57:22 compute-2 sudo[263225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:22 compute-2 sudo[263225]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:22 compute-2 sudo[263250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:57:22 compute-2 sudo[263250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:22 compute-2 sudo[263250]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:22 compute-2 sudo[263275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:57:22 compute-2 sudo[263275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:22 compute-2 nova_compute[226829]: 2026-01-31 07:57:22.651 226833 DEBUG oslo_concurrency.lockutils [None req-021cfee8-e19b-4eec-bd26-0490ac349987 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 4.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:57:22 compute-2 nova_compute[226829]: 2026-01-31 07:57:22.902 226833 DEBUG nova.compute.manager [req-1c81e22a-0ace-4e99-9d7a-21e4d61b995f req-430e375c-7649-4cf7-bd2d-91d0735e333d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Received event network-vif-unplugged-c8e5f1dd-3393-4d16-b1fe-e259ddea188d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:57:22 compute-2 nova_compute[226829]: 2026-01-31 07:57:22.903 226833 DEBUG oslo_concurrency.lockutils [req-1c81e22a-0ace-4e99-9d7a-21e4d61b995f req-430e375c-7649-4cf7-bd2d-91d0735e333d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:57:22 compute-2 nova_compute[226829]: 2026-01-31 07:57:22.903 226833 DEBUG oslo_concurrency.lockutils [req-1c81e22a-0ace-4e99-9d7a-21e4d61b995f req-430e375c-7649-4cf7-bd2d-91d0735e333d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:57:22 compute-2 nova_compute[226829]: 2026-01-31 07:57:22.903 226833 DEBUG oslo_concurrency.lockutils [req-1c81e22a-0ace-4e99-9d7a-21e4d61b995f req-430e375c-7649-4cf7-bd2d-91d0735e333d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:57:22 compute-2 nova_compute[226829]: 2026-01-31 07:57:22.903 226833 DEBUG nova.compute.manager [req-1c81e22a-0ace-4e99-9d7a-21e4d61b995f req-430e375c-7649-4cf7-bd2d-91d0735e333d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] No waiting events found dispatching network-vif-unplugged-c8e5f1dd-3393-4d16-b1fe-e259ddea188d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:57:22 compute-2 nova_compute[226829]: 2026-01-31 07:57:22.904 226833 WARNING nova.compute.manager [req-1c81e22a-0ace-4e99-9d7a-21e4d61b995f req-430e375c-7649-4cf7-bd2d-91d0735e333d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Received unexpected event network-vif-unplugged-c8e5f1dd-3393-4d16-b1fe-e259ddea188d for instance with vm_state stopped and task_state None.
Jan 31 07:57:22 compute-2 sudo[263275]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:23 compute-2 nova_compute[226829]: 2026-01-31 07:57:23.144 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Updating instance_info_cache with network_info: [{"id": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "address": "fa:16:3e:96:da:40", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8e5f1dd-33", "ovs_interfaceid": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:57:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:57:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:57:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:57:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:57:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:23.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:57:23 compute-2 nova_compute[226829]: 2026-01-31 07:57:23.866 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-2ea2db2b-f865-43f2-9f44-4a92a02bb804" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:57:23 compute-2 nova_compute[226829]: 2026-01-31 07:57:23.866 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 07:57:23 compute-2 nova_compute[226829]: 2026-01-31 07:57:23.867 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:57:23 compute-2 nova_compute[226829]: 2026-01-31 07:57:23.868 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:57:23 compute-2 nova_compute[226829]: 2026-01-31 07:57:23.868 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:57:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:57:24 compute-2 nova_compute[226829]: 2026-01-31 07:57:24.028 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:24 compute-2 nova_compute[226829]: 2026-01-31 07:57:24.062 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:57:24 compute-2 nova_compute[226829]: 2026-01-31 07:57:24.063 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:57:24 compute-2 nova_compute[226829]: 2026-01-31 07:57:24.063 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:57:24 compute-2 nova_compute[226829]: 2026-01-31 07:57:24.063 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:57:24 compute-2 nova_compute[226829]: 2026-01-31 07:57:24.064 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:57:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:57:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:24.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:57:24 compute-2 ceph-mon[77282]: pgmap v1804: 305 pgs: 305 active+clean; 387 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 63 KiB/s wr, 100 op/s
Jan 31 07:57:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:57:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:57:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:57:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3936542120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:57:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:57:24 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/914716901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:57:24 compute-2 nova_compute[226829]: 2026-01-31 07:57:24.622 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:57:25 compute-2 nova_compute[226829]: 2026-01-31 07:57:25.086 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:25 compute-2 nova_compute[226829]: 2026-01-31 07:57:25.131 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:57:25 compute-2 nova_compute[226829]: 2026-01-31 07:57:25.131 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:57:25 compute-2 nova_compute[226829]: 2026-01-31 07:57:25.165 226833 DEBUG nova.compute.manager [req-ba92d2f0-ccfc-47e7-955f-5212d38506e0 req-f9cc321a-c8d2-4a37-addf-e88d42ff8828 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Received event network-vif-plugged-c8e5f1dd-3393-4d16-b1fe-e259ddea188d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:57:25 compute-2 nova_compute[226829]: 2026-01-31 07:57:25.165 226833 DEBUG oslo_concurrency.lockutils [req-ba92d2f0-ccfc-47e7-955f-5212d38506e0 req-f9cc321a-c8d2-4a37-addf-e88d42ff8828 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:57:25 compute-2 nova_compute[226829]: 2026-01-31 07:57:25.165 226833 DEBUG oslo_concurrency.lockutils [req-ba92d2f0-ccfc-47e7-955f-5212d38506e0 req-f9cc321a-c8d2-4a37-addf-e88d42ff8828 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:57:25 compute-2 nova_compute[226829]: 2026-01-31 07:57:25.166 226833 DEBUG oslo_concurrency.lockutils [req-ba92d2f0-ccfc-47e7-955f-5212d38506e0 req-f9cc321a-c8d2-4a37-addf-e88d42ff8828 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:57:25 compute-2 nova_compute[226829]: 2026-01-31 07:57:25.166 226833 DEBUG nova.compute.manager [req-ba92d2f0-ccfc-47e7-955f-5212d38506e0 req-f9cc321a-c8d2-4a37-addf-e88d42ff8828 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] No waiting events found dispatching network-vif-plugged-c8e5f1dd-3393-4d16-b1fe-e259ddea188d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:57:25 compute-2 nova_compute[226829]: 2026-01-31 07:57:25.166 226833 WARNING nova.compute.manager [req-ba92d2f0-ccfc-47e7-955f-5212d38506e0 req-f9cc321a-c8d2-4a37-addf-e88d42ff8828 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Received unexpected event network-vif-plugged-c8e5f1dd-3393-4d16-b1fe-e259ddea188d for instance with vm_state stopped and task_state powering-on.
Jan 31 07:57:25 compute-2 nova_compute[226829]: 2026-01-31 07:57:25.271 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:57:25 compute-2 nova_compute[226829]: 2026-01-31 07:57:25.273 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4492MB free_disk=20.79560089111328GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:57:25 compute-2 nova_compute[226829]: 2026-01-31 07:57:25.273 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:57:25 compute-2 nova_compute[226829]: 2026-01-31 07:57:25.274 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:57:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/914716901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:57:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:25.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:25 compute-2 nova_compute[226829]: 2026-01-31 07:57:25.651 226833 DEBUG nova.objects.instance [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lazy-loading 'flavor' on Instance uuid 2ea2db2b-f865-43f2-9f44-4a92a02bb804 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:57:25 compute-2 nova_compute[226829]: 2026-01-31 07:57:25.784 226833 DEBUG oslo_concurrency.lockutils [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "refresh_cache-2ea2db2b-f865-43f2-9f44-4a92a02bb804" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:57:25 compute-2 nova_compute[226829]: 2026-01-31 07:57:25.785 226833 DEBUG oslo_concurrency.lockutils [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquired lock "refresh_cache-2ea2db2b-f865-43f2-9f44-4a92a02bb804" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:57:25 compute-2 nova_compute[226829]: 2026-01-31 07:57:25.785 226833 DEBUG nova.network.neutron [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:57:25 compute-2 nova_compute[226829]: 2026-01-31 07:57:25.785 226833 DEBUG nova.objects.instance [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lazy-loading 'info_cache' on Instance uuid 2ea2db2b-f865-43f2-9f44-4a92a02bb804 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:57:25 compute-2 nova_compute[226829]: 2026-01-31 07:57:25.843 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 2ea2db2b-f865-43f2-9f44-4a92a02bb804 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:57:25 compute-2 nova_compute[226829]: 2026-01-31 07:57:25.844 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:57:25 compute-2 nova_compute[226829]: 2026-01-31 07:57:25.844 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:57:25 compute-2 nova_compute[226829]: 2026-01-31 07:57:25.884 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing inventories for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 07:57:25 compute-2 nova_compute[226829]: 2026-01-31 07:57:25.900 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating ProviderTree inventory for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 07:57:25 compute-2 nova_compute[226829]: 2026-01-31 07:57:25.901 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating inventory in ProviderTree for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 07:57:25 compute-2 nova_compute[226829]: 2026-01-31 07:57:25.924 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing aggregate associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 07:57:25 compute-2 nova_compute[226829]: 2026-01-31 07:57:25.956 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing trait associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, traits: COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 07:57:25 compute-2 nova_compute[226829]: 2026-01-31 07:57:25.996 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:57:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:57:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:26.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:57:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:57:26 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3826910628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:57:26 compute-2 nova_compute[226829]: 2026-01-31 07:57:26.427 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:57:26 compute-2 nova_compute[226829]: 2026-01-31 07:57:26.434 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:57:26 compute-2 ceph-mon[77282]: pgmap v1805: 305 pgs: 305 active+clean; 359 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 34 KiB/s wr, 77 op/s
Jan 31 07:57:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2697800000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:57:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3826910628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:57:26 compute-2 nova_compute[226829]: 2026-01-31 07:57:26.875 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:57:27 compute-2 nova_compute[226829]: 2026-01-31 07:57:27.477 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:57:27 compute-2 nova_compute[226829]: 2026-01-31 07:57:27.478 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.204s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:57:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/923257819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:57:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:57:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:27.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:57:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:57:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:28.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:57:28 compute-2 ceph-mon[77282]: pgmap v1806: 305 pgs: 305 active+clean; 359 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 34 KiB/s wr, 30 op/s
Jan 31 07:57:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:57:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 31K writes, 132K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.04 MB/s
                                           Cumulative WAL: 31K writes, 10K syncs, 3.01 writes per sync, written: 0.13 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6326 writes, 27K keys, 6326 commit groups, 1.0 writes per commit group, ingest: 33.05 MB, 0.06 MB/s
                                           Interval WAL: 6325 writes, 2266 syncs, 2.79 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 07:57:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:57:29 compute-2 nova_compute[226829]: 2026-01-31 07:57:29.031 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:29 compute-2 sudo[263381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:57:29 compute-2 sudo[263381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:29 compute-2 sudo[263381]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:29 compute-2 sudo[263412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:57:29 compute-2 sudo[263412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:29 compute-2 sudo[263412]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:29 compute-2 podman[263405]: 2026-01-31 07:57:29.448736315 +0000 UTC m=+0.074112318 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 07:57:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:29.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:30 compute-2 nova_compute[226829]: 2026-01-31 07:57:30.087 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:30 compute-2 nova_compute[226829]: 2026-01-31 07:57:30.098 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:57:30 compute-2 nova_compute[226829]: 2026-01-31 07:57:30.098 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:57:30 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:57:30 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:57:30 compute-2 ceph-mon[77282]: pgmap v1807: 305 pgs: 305 active+clean; 359 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 35 KiB/s wr, 31 op/s
Jan 31 07:57:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:30.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:57:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:31.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:57:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:32.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:32 compute-2 ceph-mon[77282]: pgmap v1808: 305 pgs: 305 active+clean; 341 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 263 KiB/s wr, 34 op/s
Jan 31 07:57:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:33.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:57:34 compute-2 nova_compute[226829]: 2026-01-31 07:57:34.035 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:57:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:34.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:57:34 compute-2 ceph-mon[77282]: pgmap v1809: 305 pgs: 305 active+clean; 328 MiB data, 838 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 811 KiB/s wr, 57 op/s
Jan 31 07:57:34 compute-2 nova_compute[226829]: 2026-01-31 07:57:34.803 226833 DEBUG nova.network.neutron [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Updating instance_info_cache with network_info: [{"id": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "address": "fa:16:3e:96:da:40", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8e5f1dd-33", "ovs_interfaceid": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:57:35 compute-2 nova_compute[226829]: 2026-01-31 07:57:35.090 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:35.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:36 compute-2 nova_compute[226829]: 2026-01-31 07:57:36.261 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846241.2602117, 2ea2db2b-f865-43f2-9f44-4a92a02bb804 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:57:36 compute-2 nova_compute[226829]: 2026-01-31 07:57:36.262 226833 INFO nova.compute.manager [-] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] VM Stopped (Lifecycle Event)
Jan 31 07:57:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:57:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:36.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:57:36 compute-2 ceph-mon[77282]: pgmap v1810: 305 pgs: 305 active+clean; 325 MiB data, 832 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Jan 31 07:57:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:57:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:37.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:57:38 compute-2 sudo[263464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:57:38 compute-2 sudo[263464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:38 compute-2 sudo[263464]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:38 compute-2 sudo[263489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:57:38 compute-2 sudo[263489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:38 compute-2 sudo[263489]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:38.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:38 compute-2 ceph-mon[77282]: pgmap v1811: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 31 07:57:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:57:39 compute-2 nova_compute[226829]: 2026-01-31 07:57:39.038 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:57:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:39.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:57:40 compute-2 nova_compute[226829]: 2026-01-31 07:57:40.092 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:57:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:40.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:57:40 compute-2 ceph-mon[77282]: pgmap v1812: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 31 07:57:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:41.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:42 compute-2 ceph-mon[77282]: pgmap v1813: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 31 07:57:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:42.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:43 compute-2 podman[263516]: 2026-01-31 07:57:43.151730983 +0000 UTC m=+0.043217862 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:57:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:57:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:43.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:57:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:57:44 compute-2 nova_compute[226829]: 2026-01-31 07:57:44.041 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:57:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:44.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:57:44 compute-2 nova_compute[226829]: 2026-01-31 07:57:44.336 226833 DEBUG oslo_concurrency.lockutils [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Releasing lock "refresh_cache-2ea2db2b-f865-43f2-9f44-4a92a02bb804" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:57:44 compute-2 nova_compute[226829]: 2026-01-31 07:57:44.393 226833 DEBUG nova.compute.manager [None req-1420b21a-74e9-4f30-8ff9-27967c3c75a7 - - - - - -] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:57:44 compute-2 nova_compute[226829]: 2026-01-31 07:57:44.397 226833 DEBUG nova.compute.manager [None req-1420b21a-74e9-4f30-8ff9-27967c3c75a7 - - - - - -] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:57:44 compute-2 ceph-mon[77282]: pgmap v1814: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 1.5 MiB/s wr, 48 op/s
Jan 31 07:57:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:57:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/516796285' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:57:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:57:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/516796285' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:57:45 compute-2 nova_compute[226829]: 2026-01-31 07:57:45.094 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:45 compute-2 nova_compute[226829]: 2026-01-31 07:57:45.212 226833 INFO nova.compute.manager [None req-1420b21a-74e9-4f30-8ff9-27967c3c75a7 - - - - - -] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] During sync_power_state the instance has a pending task (powering-on). Skip.
Jan 31 07:57:45 compute-2 nova_compute[226829]: 2026-01-31 07:57:45.220 226833 INFO nova.virt.libvirt.driver [-] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Instance destroyed successfully.
Jan 31 07:57:45 compute-2 nova_compute[226829]: 2026-01-31 07:57:45.220 226833 DEBUG nova.objects.instance [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lazy-loading 'numa_topology' on Instance uuid 2ea2db2b-f865-43f2-9f44-4a92a02bb804 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:57:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/516796285' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:57:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/516796285' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:57:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:57:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:45.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:57:45 compute-2 nova_compute[226829]: 2026-01-31 07:57:45.612 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:45.612 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:57:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:45.616 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:57:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:57:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:46.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:57:46 compute-2 ceph-mon[77282]: pgmap v1815: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 9.1 KiB/s rd, 1015 KiB/s wr, 15 op/s
Jan 31 07:57:46 compute-2 nova_compute[226829]: 2026-01-31 07:57:46.942 226833 DEBUG nova.objects.instance [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lazy-loading 'resources' on Instance uuid 2ea2db2b-f865-43f2-9f44-4a92a02bb804 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:57:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:47.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:57:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:48.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:57:48 compute-2 ceph-mon[77282]: pgmap v1816: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 8.1 KiB/s rd, 4.4 KiB/s wr, 9 op/s
Jan 31 07:57:48 compute-2 nova_compute[226829]: 2026-01-31 07:57:48.428 226833 DEBUG nova.virt.libvirt.vif [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:56:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-128986884',display_name='tempest-ListServerFiltersTestJSON-instance-128986884',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-128986884',id=81,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:56:48Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='1cd91610847a480caeee0ae3cdabf066',ramdisk_id='',reservation_id='r-w0cs3g2v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-334452958',owner_user_name='tempest-ListServerFiltersTestJSON-334452958-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:57:22Z,user_data=None,user_id='f60419a58aea43b9a0b6db7d61d71246',uuid=2ea2db2b-f865-43f2-9f44-4a92a02bb804,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "address": "fa:16:3e:96:da:40", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8e5f1dd-33", "ovs_interfaceid": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:57:48 compute-2 nova_compute[226829]: 2026-01-31 07:57:48.429 226833 DEBUG nova.network.os_vif_util [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Converting VIF {"id": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "address": "fa:16:3e:96:da:40", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8e5f1dd-33", "ovs_interfaceid": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:57:48 compute-2 nova_compute[226829]: 2026-01-31 07:57:48.430 226833 DEBUG nova.network.os_vif_util [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:da:40,bridge_name='br-int',has_traffic_filtering=True,id=c8e5f1dd-3393-4d16-b1fe-e259ddea188d,network=Network(ca1ed3b2-b27d-427e-a9bd-cc12393752eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8e5f1dd-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:57:48 compute-2 nova_compute[226829]: 2026-01-31 07:57:48.431 226833 DEBUG os_vif [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:da:40,bridge_name='br-int',has_traffic_filtering=True,id=c8e5f1dd-3393-4d16-b1fe-e259ddea188d,network=Network(ca1ed3b2-b27d-427e-a9bd-cc12393752eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8e5f1dd-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:57:48 compute-2 nova_compute[226829]: 2026-01-31 07:57:48.434 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:48 compute-2 nova_compute[226829]: 2026-01-31 07:57:48.434 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8e5f1dd-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:57:48 compute-2 nova_compute[226829]: 2026-01-31 07:57:48.437 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:48 compute-2 nova_compute[226829]: 2026-01-31 07:57:48.439 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:48 compute-2 nova_compute[226829]: 2026-01-31 07:57:48.445 226833 INFO os_vif [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:da:40,bridge_name='br-int',has_traffic_filtering=True,id=c8e5f1dd-3393-4d16-b1fe-e259ddea188d,network=Network(ca1ed3b2-b27d-427e-a9bd-cc12393752eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8e5f1dd-33')
Jan 31 07:57:48 compute-2 nova_compute[226829]: 2026-01-31 07:57:48.451 226833 DEBUG nova.virt.libvirt.driver [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Start _get_guest_xml network_info=[{"id": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "address": "fa:16:3e:96:da:40", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8e5f1dd-33", "ovs_interfaceid": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:57:48 compute-2 nova_compute[226829]: 2026-01-31 07:57:48.455 226833 WARNING nova.virt.libvirt.driver [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:57:48 compute-2 nova_compute[226829]: 2026-01-31 07:57:48.491 226833 DEBUG nova.virt.libvirt.host [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:57:48 compute-2 nova_compute[226829]: 2026-01-31 07:57:48.492 226833 DEBUG nova.virt.libvirt.host [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:57:48 compute-2 nova_compute[226829]: 2026-01-31 07:57:48.498 226833 DEBUG nova.virt.libvirt.host [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:57:48 compute-2 nova_compute[226829]: 2026-01-31 07:57:48.498 226833 DEBUG nova.virt.libvirt.host [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:57:48 compute-2 nova_compute[226829]: 2026-01-31 07:57:48.500 226833 DEBUG nova.virt.libvirt.driver [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:57:48 compute-2 nova_compute[226829]: 2026-01-31 07:57:48.500 226833 DEBUG nova.virt.hardware [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:57:48 compute-2 nova_compute[226829]: 2026-01-31 07:57:48.500 226833 DEBUG nova.virt.hardware [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:57:48 compute-2 nova_compute[226829]: 2026-01-31 07:57:48.501 226833 DEBUG nova.virt.hardware [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:57:48 compute-2 nova_compute[226829]: 2026-01-31 07:57:48.501 226833 DEBUG nova.virt.hardware [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:57:48 compute-2 nova_compute[226829]: 2026-01-31 07:57:48.501 226833 DEBUG nova.virt.hardware [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:57:48 compute-2 nova_compute[226829]: 2026-01-31 07:57:48.501 226833 DEBUG nova.virt.hardware [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:57:48 compute-2 nova_compute[226829]: 2026-01-31 07:57:48.501 226833 DEBUG nova.virt.hardware [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:57:48 compute-2 nova_compute[226829]: 2026-01-31 07:57:48.502 226833 DEBUG nova.virt.hardware [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:57:48 compute-2 nova_compute[226829]: 2026-01-31 07:57:48.502 226833 DEBUG nova.virt.hardware [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:57:48 compute-2 nova_compute[226829]: 2026-01-31 07:57:48.502 226833 DEBUG nova.virt.hardware [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:57:48 compute-2 nova_compute[226829]: 2026-01-31 07:57:48.502 226833 DEBUG nova.virt.hardware [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:57:48 compute-2 nova_compute[226829]: 2026-01-31 07:57:48.502 226833 DEBUG nova.objects.instance [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2ea2db2b-f865-43f2-9f44-4a92a02bb804 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:57:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:48.619 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:57:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:57:49 compute-2 nova_compute[226829]: 2026-01-31 07:57:49.059 226833 DEBUG oslo_concurrency.processutils [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:57:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:57:49 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2379552753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:57:49 compute-2 nova_compute[226829]: 2026-01-31 07:57:49.472 226833 DEBUG oslo_concurrency.processutils [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:57:49 compute-2 nova_compute[226829]: 2026-01-31 07:57:49.510 226833 DEBUG oslo_concurrency.processutils [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:57:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:49.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:57:49 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2640853960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:57:49 compute-2 nova_compute[226829]: 2026-01-31 07:57:49.925 226833 DEBUG oslo_concurrency.processutils [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:57:49 compute-2 nova_compute[226829]: 2026-01-31 07:57:49.927 226833 DEBUG nova.virt.libvirt.vif [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:56:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-128986884',display_name='tempest-ListServerFiltersTestJSON-instance-128986884',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-128986884',id=81,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:56:48Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='1cd91610847a480caeee0ae3cdabf066',ramdisk_id='',reservation_id='r-w0cs3g2v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-334452958',owner_user_name='tempest-ListServerFiltersTestJSON-334452958-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:57:22Z,user_data=None,user_id='f60419a58aea43b9a0b6db7d61d71246',uuid=2ea2db2b-f865-43f2-9f44-4a92a02bb804,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "address": "fa:16:3e:96:da:40", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8e5f1dd-33", "ovs_interfaceid": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:57:49 compute-2 nova_compute[226829]: 2026-01-31 07:57:49.927 226833 DEBUG nova.network.os_vif_util [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Converting VIF {"id": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "address": "fa:16:3e:96:da:40", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8e5f1dd-33", "ovs_interfaceid": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:57:49 compute-2 nova_compute[226829]: 2026-01-31 07:57:49.928 226833 DEBUG nova.network.os_vif_util [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:da:40,bridge_name='br-int',has_traffic_filtering=True,id=c8e5f1dd-3393-4d16-b1fe-e259ddea188d,network=Network(ca1ed3b2-b27d-427e-a9bd-cc12393752eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8e5f1dd-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:57:49 compute-2 nova_compute[226829]: 2026-01-31 07:57:49.929 226833 DEBUG nova.objects.instance [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2ea2db2b-f865-43f2-9f44-4a92a02bb804 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:57:50 compute-2 nova_compute[226829]: 2026-01-31 07:57:50.096 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:57:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:50.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:57:50 compute-2 ceph-mon[77282]: pgmap v1817: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s wr, 0 op/s
Jan 31 07:57:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2379552753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:57:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2640853960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.154 226833 DEBUG nova.virt.libvirt.driver [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:57:51 compute-2 nova_compute[226829]:   <uuid>2ea2db2b-f865-43f2-9f44-4a92a02bb804</uuid>
Jan 31 07:57:51 compute-2 nova_compute[226829]:   <name>instance-00000051</name>
Jan 31 07:57:51 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:57:51 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:57:51 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-128986884</nova:name>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:57:48</nova:creationTime>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:57:51 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:57:51 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:57:51 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:57:51 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:57:51 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:57:51 compute-2 nova_compute[226829]:         <nova:user uuid="f60419a58aea43b9a0b6db7d61d71246">tempest-ListServerFiltersTestJSON-334452958-project-member</nova:user>
Jan 31 07:57:51 compute-2 nova_compute[226829]:         <nova:project uuid="1cd91610847a480caeee0ae3cdabf066">tempest-ListServerFiltersTestJSON-334452958</nova:project>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 07:57:51 compute-2 nova_compute[226829]:         <nova:port uuid="c8e5f1dd-3393-4d16-b1fe-e259ddea188d">
Jan 31 07:57:51 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:57:51 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:57:51 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <system>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <entry name="serial">2ea2db2b-f865-43f2-9f44-4a92a02bb804</entry>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <entry name="uuid">2ea2db2b-f865-43f2-9f44-4a92a02bb804</entry>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     </system>
Jan 31 07:57:51 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:57:51 compute-2 nova_compute[226829]:   <os>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:   </os>
Jan 31 07:57:51 compute-2 nova_compute[226829]:   <features>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:   </features>
Jan 31 07:57:51 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:57:51 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:57:51 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/2ea2db2b-f865-43f2-9f44-4a92a02bb804_disk">
Jan 31 07:57:51 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       </source>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:57:51 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/2ea2db2b-f865-43f2-9f44-4a92a02bb804_disk.config">
Jan 31 07:57:51 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       </source>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:57:51 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:96:da:40"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <target dev="tapc8e5f1dd-33"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/2ea2db2b-f865-43f2-9f44-4a92a02bb804/console.log" append="off"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <video>
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     </video>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <input type="keyboard" bus="usb"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:57:51 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:57:51 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:57:51 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:57:51 compute-2 nova_compute[226829]: </domain>
Jan 31 07:57:51 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.156 226833 DEBUG nova.virt.libvirt.driver [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.156 226833 DEBUG nova.virt.libvirt.driver [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.158 226833 DEBUG nova.virt.libvirt.vif [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:56:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-128986884',display_name='tempest-ListServerFiltersTestJSON-instance-128986884',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-128986884',id=81,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:56:48Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='1cd91610847a480caeee0ae3cdabf066',ramdisk_id='',reservation_id='r-w0cs3g2v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio
',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-334452958',owner_user_name='tempest-ListServerFiltersTestJSON-334452958-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:57:22Z,user_data=None,user_id='f60419a58aea43b9a0b6db7d61d71246',uuid=2ea2db2b-f865-43f2-9f44-4a92a02bb804,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "address": "fa:16:3e:96:da:40", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8e5f1dd-33", "ovs_interfaceid": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.158 226833 DEBUG nova.network.os_vif_util [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Converting VIF {"id": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "address": "fa:16:3e:96:da:40", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8e5f1dd-33", "ovs_interfaceid": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.159 226833 DEBUG nova.network.os_vif_util [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:da:40,bridge_name='br-int',has_traffic_filtering=True,id=c8e5f1dd-3393-4d16-b1fe-e259ddea188d,network=Network(ca1ed3b2-b27d-427e-a9bd-cc12393752eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8e5f1dd-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.160 226833 DEBUG os_vif [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:da:40,bridge_name='br-int',has_traffic_filtering=True,id=c8e5f1dd-3393-4d16-b1fe-e259ddea188d,network=Network(ca1ed3b2-b27d-427e-a9bd-cc12393752eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8e5f1dd-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.160 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.161 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.162 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.166 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.166 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc8e5f1dd-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.167 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc8e5f1dd-33, col_values=(('external_ids', {'iface-id': 'c8e5f1dd-3393-4d16-b1fe-e259ddea188d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:96:da:40', 'vm-uuid': '2ea2db2b-f865-43f2-9f44-4a92a02bb804'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.168 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:51 compute-2 NetworkManager[48999]: <info>  [1769846271.1693] manager: (tapc8e5f1dd-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/145)
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.170 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.173 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.173 226833 INFO os_vif [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:da:40,bridge_name='br-int',has_traffic_filtering=True,id=c8e5f1dd-3393-4d16-b1fe-e259ddea188d,network=Network(ca1ed3b2-b27d-427e-a9bd-cc12393752eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8e5f1dd-33')
Jan 31 07:57:51 compute-2 kernel: tapc8e5f1dd-33: entered promiscuous mode
Jan 31 07:57:51 compute-2 NetworkManager[48999]: <info>  [1769846271.2416] manager: (tapc8e5f1dd-33): new Tun device (/org/freedesktop/NetworkManager/Devices/146)
Jan 31 07:57:51 compute-2 ovn_controller[133834]: 2026-01-31T07:57:51Z|00277|binding|INFO|Claiming lport c8e5f1dd-3393-4d16-b1fe-e259ddea188d for this chassis.
Jan 31 07:57:51 compute-2 ovn_controller[133834]: 2026-01-31T07:57:51Z|00278|binding|INFO|c8e5f1dd-3393-4d16-b1fe-e259ddea188d: Claiming fa:16:3e:96:da:40 10.100.0.13
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.242 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:51 compute-2 ovn_controller[133834]: 2026-01-31T07:57:51Z|00279|binding|INFO|Setting lport c8e5f1dd-3393-4d16-b1fe-e259ddea188d ovn-installed in OVS
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.253 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.256 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:51 compute-2 systemd-machined[195142]: New machine qemu-35-instance-00000051.
Jan 31 07:57:51 compute-2 systemd[1]: Started Virtual Machine qemu-35-instance-00000051.
Jan 31 07:57:51 compute-2 systemd-udevd[263618]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:57:51 compute-2 NetworkManager[48999]: <info>  [1769846271.3118] device (tapc8e5f1dd-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:57:51 compute-2 NetworkManager[48999]: <info>  [1769846271.3128] device (tapc8e5f1dd-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:57:51 compute-2 ovn_controller[133834]: 2026-01-31T07:57:51Z|00280|binding|INFO|Setting lport c8e5f1dd-3393-4d16-b1fe-e259ddea188d up in Southbound
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:51.342 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:da:40 10.100.0.13'], port_security=['fa:16:3e:96:da:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2ea2db2b-f865-43f2-9f44-4a92a02bb804', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1cd91610847a480caeee0ae3cdabf066', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f1695a46-1d81-4453-9c52-9917c020bc65', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e38d659-b6a8-4d3d-8a23-b8299c5114da, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=c8e5f1dd-3393-4d16-b1fe-e259ddea188d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:51.345 143841 INFO neutron.agent.ovn.metadata.agent [-] Port c8e5f1dd-3393-4d16-b1fe-e259ddea188d in datapath ca1ed3b2-b27d-427e-a9bd-cc12393752eb bound to our chassis
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:51.348 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ca1ed3b2-b27d-427e-a9bd-cc12393752eb
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:51.359 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4d40230a-50e7-416a-9953-282ea43e70a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:51.361 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapca1ed3b2-b1 in ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:51.363 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapca1ed3b2-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:51.363 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[26e82843-12af-4577-82fc-860e875c66ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:51.365 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a048f580-ef51-4e8c-b7e3-cbad3896bd4a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:51.379 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[63ddab26-625a-4f9d-afc4-c7a3c81dd181]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:51.401 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[33026e66-b916-4f1b-9307-c6fa02362c85]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:51.429 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[8147a6d7-73ae-44c6-a121-baf0e47d101d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:51.436 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[80d19a92-5bbf-4a62-ad5e-ccbe27fe1bc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:51 compute-2 NetworkManager[48999]: <info>  [1769846271.4378] manager: (tapca1ed3b2-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/147)
Jan 31 07:57:51 compute-2 systemd-udevd[263620]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:51.459 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[47fa83ee-ed8f-459e-8dc3-a46fbec02206]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:51.462 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[2d0359f3-f15c-4b35-a8d9-0f1296ae0728]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:51 compute-2 NetworkManager[48999]: <info>  [1769846271.4825] device (tapca1ed3b2-b0): carrier: link connected
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:51.487 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[d9667dae-8a24-4f41-b68f-cda97490cab4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:51.499 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[36864a3f-f554-4e69-ba54-58871c8d8ae7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca1ed3b2-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:70:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652395, 'reachable_time': 28225, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263651, 'error': None, 'target': 'ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:51.511 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[524dcae4-3c67-423a-9031-2197b8bc91c1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe67:7011'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652395, 'tstamp': 652395}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263652, 'error': None, 'target': 'ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:51.524 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d343e8ff-0489-4df0-bd31-96085a4a5dcb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca1ed3b2-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:70:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652395, 'reachable_time': 28225, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263653, 'error': None, 'target': 'ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:51.550 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[54ec6371-0d9a-4ef0-ad7c-0a9fd77edb8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:51.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:51.606 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7c9e2f80-65d9-4b0e-819b-c1e7e849ade7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:51.607 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca1ed3b2-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:51.608 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:51.608 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca1ed3b2-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.610 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:51 compute-2 kernel: tapca1ed3b2-b0: entered promiscuous mode
Jan 31 07:57:51 compute-2 NetworkManager[48999]: <info>  [1769846271.6112] manager: (tapca1ed3b2-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/148)
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.613 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:51.613 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapca1ed3b2-b0, col_values=(('external_ids', {'iface-id': 'd19b5f05-fa79-4835-8ef4-51f87493d59b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.614 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.615 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:51.616 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ca1ed3b2-b27d-427e-a9bd-cc12393752eb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ca1ed3b2-b27d-427e-a9bd-cc12393752eb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:51.617 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3259ccf5-dc53-4cd7-9bc0-35cc4e1ec36d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:51.618 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: global
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-ca1ed3b2-b27d-427e-a9bd-cc12393752eb
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/ca1ed3b2-b27d-427e-a9bd-cc12393752eb.pid.haproxy
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID ca1ed3b2-b27d-427e-a9bd-cc12393752eb
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:57:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:57:51.618 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'env', 'PROCESS_TAG=haproxy-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ca1ed3b2-b27d-427e-a9bd-cc12393752eb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:57:51 compute-2 ovn_controller[133834]: 2026-01-31T07:57:51Z|00281|binding|INFO|Releasing lport d19b5f05-fa79-4835-8ef4-51f87493d59b from this chassis (sb_readonly=0)
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.665 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.909 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846271.9086392, 2ea2db2b-f865-43f2-9f44-4a92a02bb804 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.909 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] VM Resumed (Lifecycle Event)
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.911 226833 DEBUG nova.compute.manager [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.914 226833 INFO nova.virt.libvirt.driver [-] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Instance rebooted successfully.
Jan 31 07:57:51 compute-2 nova_compute[226829]: 2026-01-31 07:57:51.915 226833 DEBUG nova.compute.manager [None req-8e30b0ad-ceb1-4c0a-be87-0b8c4fdd4613 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:57:52 compute-2 podman[263727]: 2026-01-31 07:57:51.94826227 +0000 UTC m=+0.023502338 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:57:52 compute-2 nova_compute[226829]: 2026-01-31 07:57:52.250 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:57:52 compute-2 nova_compute[226829]: 2026-01-31 07:57:52.255 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:57:52 compute-2 podman[263727]: 2026-01-31 07:57:52.306550634 +0000 UTC m=+0.381790682 container create 238c81e664ac2c96d2731919fdfd3a957199d5953d92c2e5a2ba540390028bca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 07:57:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:57:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:52.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:57:52 compute-2 systemd[1]: Started libpod-conmon-238c81e664ac2c96d2731919fdfd3a957199d5953d92c2e5a2ba540390028bca.scope.
Jan 31 07:57:52 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:57:52 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23aba9f9d5797d9b4cea41da5256d80e40877f4390cacf1bb9cd652a55148b67/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:52 compute-2 podman[263727]: 2026-01-31 07:57:52.554541181 +0000 UTC m=+0.629781229 container init 238c81e664ac2c96d2731919fdfd3a957199d5953d92c2e5a2ba540390028bca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Jan 31 07:57:52 compute-2 podman[263727]: 2026-01-31 07:57:52.560808861 +0000 UTC m=+0.636048929 container start 238c81e664ac2c96d2731919fdfd3a957199d5953d92c2e5a2ba540390028bca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:57:52 compute-2 neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb[263742]: [NOTICE]   (263746) : New worker (263748) forked
Jan 31 07:57:52 compute-2 neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb[263742]: [NOTICE]   (263746) : Loading success.
Jan 31 07:57:52 compute-2 nova_compute[226829]: 2026-01-31 07:57:52.746 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846271.9095056, 2ea2db2b-f865-43f2-9f44-4a92a02bb804 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:57:52 compute-2 nova_compute[226829]: 2026-01-31 07:57:52.747 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] VM Started (Lifecycle Event)
Jan 31 07:57:52 compute-2 ceph-mon[77282]: pgmap v1818: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s rd, 2.3 KiB/s wr, 2 op/s
Jan 31 07:57:53 compute-2 nova_compute[226829]: 2026-01-31 07:57:53.209 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:57:53 compute-2 nova_compute[226829]: 2026-01-31 07:57:53.212 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:57:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:53.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:53 compute-2 ceph-mon[77282]: pgmap v1819: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 2.2 KiB/s rd, 2.4 KiB/s wr, 3 op/s
Jan 31 07:57:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1737226846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:57:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:57:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:57:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:54.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:57:54 compute-2 nova_compute[226829]: 2026-01-31 07:57:54.523 226833 DEBUG nova.compute.manager [req-1d398cf1-ed4d-4ee0-ae54-cfc3c9b40bee req-9459e317-a34d-40ed-bbb2-8fb76ab28fdd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Received event network-vif-plugged-c8e5f1dd-3393-4d16-b1fe-e259ddea188d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:57:54 compute-2 nova_compute[226829]: 2026-01-31 07:57:54.525 226833 DEBUG oslo_concurrency.lockutils [req-1d398cf1-ed4d-4ee0-ae54-cfc3c9b40bee req-9459e317-a34d-40ed-bbb2-8fb76ab28fdd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:57:54 compute-2 nova_compute[226829]: 2026-01-31 07:57:54.525 226833 DEBUG oslo_concurrency.lockutils [req-1d398cf1-ed4d-4ee0-ae54-cfc3c9b40bee req-9459e317-a34d-40ed-bbb2-8fb76ab28fdd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:57:54 compute-2 nova_compute[226829]: 2026-01-31 07:57:54.526 226833 DEBUG oslo_concurrency.lockutils [req-1d398cf1-ed4d-4ee0-ae54-cfc3c9b40bee req-9459e317-a34d-40ed-bbb2-8fb76ab28fdd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:57:54 compute-2 nova_compute[226829]: 2026-01-31 07:57:54.526 226833 DEBUG nova.compute.manager [req-1d398cf1-ed4d-4ee0-ae54-cfc3c9b40bee req-9459e317-a34d-40ed-bbb2-8fb76ab28fdd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] No waiting events found dispatching network-vif-plugged-c8e5f1dd-3393-4d16-b1fe-e259ddea188d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:57:54 compute-2 nova_compute[226829]: 2026-01-31 07:57:54.526 226833 WARNING nova.compute.manager [req-1d398cf1-ed4d-4ee0-ae54-cfc3c9b40bee req-9459e317-a34d-40ed-bbb2-8fb76ab28fdd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Received unexpected event network-vif-plugged-c8e5f1dd-3393-4d16-b1fe-e259ddea188d for instance with vm_state active and task_state None.
Jan 31 07:57:55 compute-2 nova_compute[226829]: 2026-01-31 07:57:55.098 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:57:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:55.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:57:56 compute-2 nova_compute[226829]: 2026-01-31 07:57:56.170 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:57:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:56.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:57:56 compute-2 ceph-mon[77282]: pgmap v1820: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 597 KiB/s rd, 10 KiB/s wr, 27 op/s
Jan 31 07:57:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:57:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:57.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:57:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:58.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:58 compute-2 sudo[263760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:57:58 compute-2 sudo[263760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:58 compute-2 sudo[263760]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:58 compute-2 sudo[263785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:57:58 compute-2 sudo[263785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:58 compute-2 sudo[263785]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:58 compute-2 ceph-mon[77282]: pgmap v1821: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 8.2 KiB/s wr, 49 op/s
Jan 31 07:57:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:57:59 compute-2 nova_compute[226829]: 2026-01-31 07:57:59.272 226833 DEBUG nova.compute.manager [req-096e7342-17ca-4944-a35d-6bb3a295c357 req-9e9756e6-2521-42be-8c4f-de76191ae5f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Received event network-vif-plugged-c8e5f1dd-3393-4d16-b1fe-e259ddea188d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:57:59 compute-2 nova_compute[226829]: 2026-01-31 07:57:59.272 226833 DEBUG oslo_concurrency.lockutils [req-096e7342-17ca-4944-a35d-6bb3a295c357 req-9e9756e6-2521-42be-8c4f-de76191ae5f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:57:59 compute-2 nova_compute[226829]: 2026-01-31 07:57:59.273 226833 DEBUG oslo_concurrency.lockutils [req-096e7342-17ca-4944-a35d-6bb3a295c357 req-9e9756e6-2521-42be-8c4f-de76191ae5f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:57:59 compute-2 nova_compute[226829]: 2026-01-31 07:57:59.273 226833 DEBUG oslo_concurrency.lockutils [req-096e7342-17ca-4944-a35d-6bb3a295c357 req-9e9756e6-2521-42be-8c4f-de76191ae5f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:57:59 compute-2 nova_compute[226829]: 2026-01-31 07:57:59.273 226833 DEBUG nova.compute.manager [req-096e7342-17ca-4944-a35d-6bb3a295c357 req-9e9756e6-2521-42be-8c4f-de76191ae5f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] No waiting events found dispatching network-vif-plugged-c8e5f1dd-3393-4d16-b1fe-e259ddea188d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:57:59 compute-2 nova_compute[226829]: 2026-01-31 07:57:59.273 226833 WARNING nova.compute.manager [req-096e7342-17ca-4944-a35d-6bb3a295c357 req-9e9756e6-2521-42be-8c4f-de76191ae5f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Received unexpected event network-vif-plugged-c8e5f1dd-3393-4d16-b1fe-e259ddea188d for instance with vm_state active and task_state None.
Jan 31 07:57:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:57:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:59.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:00 compute-2 nova_compute[226829]: 2026-01-31 07:58:00.101 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:00 compute-2 podman[263811]: 2026-01-31 07:58:00.193048853 +0000 UTC m=+0.075755923 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 07:58:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:00.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:00 compute-2 ceph-mon[77282]: pgmap v1822: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 73 op/s
Jan 31 07:58:01 compute-2 nova_compute[226829]: 2026-01-31 07:58:01.173 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:01.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:01 compute-2 ceph-mon[77282]: pgmap v1823: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 73 op/s
Jan 31 07:58:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1670089036' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:58:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:02.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2564199315' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:58:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:58:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:03.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:58:03 compute-2 ceph-mon[77282]: pgmap v1824: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 71 op/s
Jan 31 07:58:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:58:04 compute-2 ovn_controller[133834]: 2026-01-31T07:58:04Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:96:da:40 10.100.0.13
Jan 31 07:58:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:04.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #79. Immutable memtables: 0.
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:58:04.491103) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 79
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846284491262, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1126, "num_deletes": 250, "total_data_size": 2333327, "memory_usage": 2361712, "flush_reason": "Manual Compaction"}
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #80: started
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846284499543, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 80, "file_size": 942484, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41881, "largest_seqno": 43001, "table_properties": {"data_size": 938542, "index_size": 1530, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10696, "raw_average_key_size": 20, "raw_value_size": 930034, "raw_average_value_size": 1812, "num_data_blocks": 68, "num_entries": 513, "num_filter_entries": 513, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846195, "oldest_key_time": 1769846195, "file_creation_time": 1769846284, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 8481 microseconds, and 3656 cpu microseconds.
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:58:04.499620) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #80: 942484 bytes OK
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:58:04.499640) [db/memtable_list.cc:519] [default] Level-0 commit table #80 started
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:58:04.500784) [db/memtable_list.cc:722] [default] Level-0 commit table #80: memtable #1 done
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:58:04.500799) EVENT_LOG_v1 {"time_micros": 1769846284500794, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:58:04.500818) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 2327907, prev total WAL file size 2327907, number of live WAL files 2.
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000076.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:58:04.501456) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323538' seq:72057594037927935, type:22 .. '6D6772737461740031353039' seq:0, type:0; will stop at (end)
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [80(920KB)], [78(11MB)]
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846284501516, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [80], "files_L6": [78], "score": -1, "input_data_size": 12794421, "oldest_snapshot_seqno": -1}
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #81: 6805 keys, 9538112 bytes, temperature: kUnknown
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846284557708, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 81, "file_size": 9538112, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9494308, "index_size": 25708, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17029, "raw_key_size": 174720, "raw_average_key_size": 25, "raw_value_size": 9374149, "raw_average_value_size": 1377, "num_data_blocks": 1023, "num_entries": 6805, "num_filter_entries": 6805, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769846284, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 81, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:58:04.557952) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 9538112 bytes
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:58:04.559217) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 227.4 rd, 169.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.3 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(23.7) write-amplify(10.1) OK, records in: 7279, records dropped: 474 output_compression: NoCompression
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:58:04.559240) EVENT_LOG_v1 {"time_micros": 1769846284559230, "job": 48, "event": "compaction_finished", "compaction_time_micros": 56256, "compaction_time_cpu_micros": 30621, "output_level": 6, "num_output_files": 1, "total_output_size": 9538112, "num_input_records": 7279, "num_output_records": 6805, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846284559433, "job": 48, "event": "table_file_deletion", "file_number": 80}
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000078.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846284560723, "job": 48, "event": "table_file_deletion", "file_number": 78}
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:58:04.501349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:58:04.560756) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:58:04.560761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:58:04.560769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:58:04.560771) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:58:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:58:04.560774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:58:05 compute-2 nova_compute[226829]: 2026-01-31 07:58:05.104 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:58:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:05.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:58:06 compute-2 nova_compute[226829]: 2026-01-31 07:58:06.176 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:06.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:06 compute-2 ceph-mon[77282]: pgmap v1825: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 26 KiB/s wr, 79 op/s
Jan 31 07:58:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:58:06.869 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:58:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:58:06.870 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:58:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:58:06.870 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:58:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:58:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:07.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:58:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:08.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:08 compute-2 nova_compute[226829]: 2026-01-31 07:58:08.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:58:08 compute-2 ceph-mon[77282]: pgmap v1826: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 18 KiB/s wr, 58 op/s
Jan 31 07:58:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:58:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:09.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:09 compute-2 ceph-mon[77282]: pgmap v1827: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 98 op/s
Jan 31 07:58:10 compute-2 nova_compute[226829]: 2026-01-31 07:58:10.106 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:10.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:10 compute-2 nova_compute[226829]: 2026-01-31 07:58:10.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:58:10 compute-2 nova_compute[226829]: 2026-01-31 07:58:10.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:58:11 compute-2 nova_compute[226829]: 2026-01-31 07:58:11.179 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:11.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:58:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:12.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:58:13 compute-2 ceph-mon[77282]: pgmap v1828: 305 pgs: 305 active+clean; 326 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 26 KiB/s wr, 85 op/s
Jan 31 07:58:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:13.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:58:14 compute-2 podman[263844]: 2026-01-31 07:58:14.172792456 +0000 UTC m=+0.052793441 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 07:58:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:14.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:14 compute-2 ceph-mon[77282]: pgmap v1829: 305 pgs: 305 active+clean; 326 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 26 KiB/s wr, 111 op/s
Jan 31 07:58:15 compute-2 nova_compute[226829]: 2026-01-31 07:58:15.109 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:58:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:15.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:58:16 compute-2 nova_compute[226829]: 2026-01-31 07:58:16.228 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:16.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:16 compute-2 ceph-mon[77282]: pgmap v1830: 305 pgs: 305 active+clean; 327 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 36 KiB/s wr, 124 op/s
Jan 31 07:58:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1802042998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:58:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/14013203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:58:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2773235226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:58:17 compute-2 nova_compute[226829]: 2026-01-31 07:58:17.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:58:17 compute-2 nova_compute[226829]: 2026-01-31 07:58:17.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:58:17 compute-2 nova_compute[226829]: 2026-01-31 07:58:17.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:58:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:17.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:18.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:18 compute-2 nova_compute[226829]: 2026-01-31 07:58:18.463 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-2ea2db2b-f865-43f2-9f44-4a92a02bb804" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:58:18 compute-2 nova_compute[226829]: 2026-01-31 07:58:18.463 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-2ea2db2b-f865-43f2-9f44-4a92a02bb804" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:58:18 compute-2 nova_compute[226829]: 2026-01-31 07:58:18.464 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 07:58:18 compute-2 nova_compute[226829]: 2026-01-31 07:58:18.464 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2ea2db2b-f865-43f2-9f44-4a92a02bb804 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:58:18 compute-2 ceph-mon[77282]: pgmap v1831: 305 pgs: 305 active+clean; 313 MiB data, 831 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 26 KiB/s wr, 119 op/s
Jan 31 07:58:18 compute-2 sudo[263865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:58:18 compute-2 sudo[263865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:18 compute-2 sudo[263865]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:18 compute-2 sudo[263890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:58:18 compute-2 sudo[263890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:18 compute-2 sudo[263890]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:58:19 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 31 07:58:19 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 31 07:58:19 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 31 07:58:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:19.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:20 compute-2 nova_compute[226829]: 2026-01-31 07:58:20.110 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:20 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 31 07:58:20 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Jan 31 07:58:20 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Jan 31 07:58:20 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Jan 31 07:58:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:20.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:20 compute-2 ceph-mon[77282]: pgmap v1832: 305 pgs: 305 active+clean; 248 MiB data, 789 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 30 KiB/s wr, 133 op/s
Jan 31 07:58:21 compute-2 nova_compute[226829]: 2026-01-31 07:58:21.231 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:21.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:22.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:22 compute-2 ceph-mon[77282]: pgmap v1833: 305 pgs: 305 active+clean; 252 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 357 KiB/s wr, 83 op/s
Jan 31 07:58:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:23.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:23 compute-2 ceph-mon[77282]: pgmap v1834: 305 pgs: 305 active+clean; 259 MiB data, 797 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 728 KiB/s wr, 135 op/s
Jan 31 07:58:24 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:58:24.043 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:58:24 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:58:24.044 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:58:24 compute-2 nova_compute[226829]: 2026-01-31 07:58:24.045 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:58:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:24.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2488209533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:58:25 compute-2 nova_compute[226829]: 2026-01-31 07:58:25.112 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:25.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1324782983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:58:26 compute-2 ceph-mon[77282]: pgmap v1835: 305 pgs: 305 active+clean; 277 MiB data, 814 MiB used, 20 GiB / 21 GiB avail; 693 KiB/s rd, 2.1 MiB/s wr, 168 op/s
Jan 31 07:58:26 compute-2 nova_compute[226829]: 2026-01-31 07:58:26.233 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:58:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:26.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:58:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:27.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:28 compute-2 ceph-mon[77282]: pgmap v1836: 305 pgs: 305 active+clean; 281 MiB data, 818 MiB used, 20 GiB / 21 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 204 op/s
Jan 31 07:58:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:28.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:58:29 compute-2 sudo[263921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:58:29 compute-2 sudo[263921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:29 compute-2 sudo[263921]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:29 compute-2 sudo[263947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:58:29 compute-2 sudo[263947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:29 compute-2 sudo[263947]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:29 compute-2 sudo[263972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:58:29 compute-2 sudo[263972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:29 compute-2 sudo[263972]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:29 compute-2 sudo[263997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:58:29 compute-2 sudo[263997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:58:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:29.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:58:29 compute-2 nova_compute[226829]: 2026-01-31 07:58:29.664 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Updating instance_info_cache with network_info: [{"id": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "address": "fa:16:3e:96:da:40", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8e5f1dd-33", "ovs_interfaceid": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:58:29 compute-2 sudo[263997]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:30 compute-2 nova_compute[226829]: 2026-01-31 07:58:30.114 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:30 compute-2 nova_compute[226829]: 2026-01-31 07:58:30.246 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-2ea2db2b-f865-43f2-9f44-4a92a02bb804" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:58:30 compute-2 nova_compute[226829]: 2026-01-31 07:58:30.247 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 07:58:30 compute-2 nova_compute[226829]: 2026-01-31 07:58:30.247 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:58:30 compute-2 nova_compute[226829]: 2026-01-31 07:58:30.248 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:58:30 compute-2 nova_compute[226829]: 2026-01-31 07:58:30.248 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:58:30 compute-2 nova_compute[226829]: 2026-01-31 07:58:30.249 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:58:30 compute-2 nova_compute[226829]: 2026-01-31 07:58:30.249 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:58:30 compute-2 nova_compute[226829]: 2026-01-31 07:58:30.250 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:58:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:30.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:30 compute-2 ceph-mon[77282]: pgmap v1837: 305 pgs: 305 active+clean; 281 MiB data, 818 MiB used, 20 GiB / 21 GiB avail; 407 KiB/s rd, 2.1 MiB/s wr, 234 op/s
Jan 31 07:58:30 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:58:30 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:58:30 compute-2 nova_compute[226829]: 2026-01-31 07:58:30.402 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:58:30 compute-2 nova_compute[226829]: 2026-01-31 07:58:30.403 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:58:30 compute-2 nova_compute[226829]: 2026-01-31 07:58:30.403 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:58:30 compute-2 nova_compute[226829]: 2026-01-31 07:58:30.403 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:58:30 compute-2 nova_compute[226829]: 2026-01-31 07:58:30.404 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:58:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:58:30 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2620656044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:58:30 compute-2 nova_compute[226829]: 2026-01-31 07:58:30.819 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:58:30 compute-2 podman[264075]: 2026-01-31 07:58:30.926069493 +0000 UTC m=+0.067244262 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 07:58:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:58:31.046 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:58:31 compute-2 nova_compute[226829]: 2026-01-31 07:58:31.235 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:58:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:58:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:58:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:58:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:58:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:58:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2620656044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:58:31 compute-2 nova_compute[226829]: 2026-01-31 07:58:31.536 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:58:31 compute-2 nova_compute[226829]: 2026-01-31 07:58:31.536 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:58:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:31.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:31 compute-2 nova_compute[226829]: 2026-01-31 07:58:31.668 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:58:31 compute-2 nova_compute[226829]: 2026-01-31 07:58:31.669 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4325MB free_disk=20.85165786743164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:58:31 compute-2 nova_compute[226829]: 2026-01-31 07:58:31.669 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:58:31 compute-2 nova_compute[226829]: 2026-01-31 07:58:31.669 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:58:31 compute-2 nova_compute[226829]: 2026-01-31 07:58:31.881 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 2ea2db2b-f865-43f2-9f44-4a92a02bb804 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:58:31 compute-2 nova_compute[226829]: 2026-01-31 07:58:31.882 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:58:31 compute-2 nova_compute[226829]: 2026-01-31 07:58:31.882 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:58:31 compute-2 nova_compute[226829]: 2026-01-31 07:58:31.973 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:58:32 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:58:32 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2638932588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:58:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:32.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:32 compute-2 nova_compute[226829]: 2026-01-31 07:58:32.391 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:58:32 compute-2 nova_compute[226829]: 2026-01-31 07:58:32.397 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:58:32 compute-2 ceph-mon[77282]: pgmap v1838: 305 pgs: 305 active+clean; 281 MiB data, 818 MiB used, 20 GiB / 21 GiB avail; 396 KiB/s rd, 2.1 MiB/s wr, 216 op/s
Jan 31 07:58:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2638932588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:58:32 compute-2 nova_compute[226829]: 2026-01-31 07:58:32.528 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:58:32 compute-2 nova_compute[226829]: 2026-01-31 07:58:32.796 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:58:32 compute-2 nova_compute[226829]: 2026-01-31 07:58:32.797 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:58:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:33.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:58:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:34.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:34 compute-2 ceph-mon[77282]: pgmap v1839: 305 pgs: 305 active+clean; 257 MiB data, 802 MiB used, 20 GiB / 21 GiB avail; 396 KiB/s rd, 1.8 MiB/s wr, 220 op/s
Jan 31 07:58:35 compute-2 nova_compute[226829]: 2026-01-31 07:58:35.116 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:35.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:36 compute-2 nova_compute[226829]: 2026-01-31 07:58:36.238 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:36.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:36 compute-2 ceph-mon[77282]: pgmap v1840: 305 pgs: 305 active+clean; 222 MiB data, 780 MiB used, 20 GiB / 21 GiB avail; 159 KiB/s rd, 1.4 MiB/s wr, 162 op/s
Jan 31 07:58:36 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:58:36 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:58:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1854533963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:58:36 compute-2 sudo[264127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:58:36 compute-2 sudo[264127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:36 compute-2 sudo[264127]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:36 compute-2 sudo[264152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:58:36 compute-2 sudo[264152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:36 compute-2 sudo[264152]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:37.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:38.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:38 compute-2 sudo[264178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:58:38 compute-2 sudo[264178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:38 compute-2 sudo[264178]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:38 compute-2 ceph-mon[77282]: pgmap v1841: 305 pgs: 305 active+clean; 190 MiB data, 764 MiB used, 20 GiB / 21 GiB avail; 86 KiB/s rd, 18 KiB/s wr, 119 op/s
Jan 31 07:58:38 compute-2 sudo[264203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:58:38 compute-2 sudo[264203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:38 compute-2 sudo[264203]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:38 compute-2 nova_compute[226829]: 2026-01-31 07:58:38.912 226833 DEBUG oslo_concurrency.lockutils [None req-73c155d4-b0e5-4665-b95b-f205367986ce f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:58:38 compute-2 nova_compute[226829]: 2026-01-31 07:58:38.912 226833 DEBUG oslo_concurrency.lockutils [None req-73c155d4-b0e5-4665-b95b-f205367986ce f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:58:38 compute-2 nova_compute[226829]: 2026-01-31 07:58:38.913 226833 DEBUG oslo_concurrency.lockutils [None req-73c155d4-b0e5-4665-b95b-f205367986ce f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:58:38 compute-2 nova_compute[226829]: 2026-01-31 07:58:38.913 226833 DEBUG oslo_concurrency.lockutils [None req-73c155d4-b0e5-4665-b95b-f205367986ce f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:58:38 compute-2 nova_compute[226829]: 2026-01-31 07:58:38.913 226833 DEBUG oslo_concurrency.lockutils [None req-73c155d4-b0e5-4665-b95b-f205367986ce f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:58:38 compute-2 nova_compute[226829]: 2026-01-31 07:58:38.915 226833 INFO nova.compute.manager [None req-73c155d4-b0e5-4665-b95b-f205367986ce f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Terminating instance
Jan 31 07:58:38 compute-2 nova_compute[226829]: 2026-01-31 07:58:38.915 226833 DEBUG nova.compute.manager [None req-73c155d4-b0e5-4665-b95b-f205367986ce f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:58:39 compute-2 kernel: tapc8e5f1dd-33 (unregistering): left promiscuous mode
Jan 31 07:58:39 compute-2 NetworkManager[48999]: <info>  [1769846319.0563] device (tapc8e5f1dd-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:58:39 compute-2 ovn_controller[133834]: 2026-01-31T07:58:39Z|00282|binding|INFO|Releasing lport c8e5f1dd-3393-4d16-b1fe-e259ddea188d from this chassis (sb_readonly=0)
Jan 31 07:58:39 compute-2 ovn_controller[133834]: 2026-01-31T07:58:39Z|00283|binding|INFO|Setting lport c8e5f1dd-3393-4d16-b1fe-e259ddea188d down in Southbound
Jan 31 07:58:39 compute-2 ovn_controller[133834]: 2026-01-31T07:58:39Z|00284|binding|INFO|Removing iface tapc8e5f1dd-33 ovn-installed in OVS
Jan 31 07:58:39 compute-2 nova_compute[226829]: 2026-01-31 07:58:39.105 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:39 compute-2 nova_compute[226829]: 2026-01-31 07:58:39.108 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:39 compute-2 nova_compute[226829]: 2026-01-31 07:58:39.112 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:58:39 compute-2 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d00000051.scope: Deactivated successfully.
Jan 31 07:58:39 compute-2 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d00000051.scope: Consumed 15.130s CPU time.
Jan 31 07:58:39 compute-2 systemd-machined[195142]: Machine qemu-35-instance-00000051 terminated.
Jan 31 07:58:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:58:39.195 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:da:40 10.100.0.13'], port_security=['fa:16:3e:96:da:40 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2ea2db2b-f865-43f2-9f44-4a92a02bb804', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1cd91610847a480caeee0ae3cdabf066', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f1695a46-1d81-4453-9c52-9917c020bc65', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e38d659-b6a8-4d3d-8a23-b8299c5114da, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=c8e5f1dd-3393-4d16-b1fe-e259ddea188d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:58:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:58:39.196 143841 INFO neutron.agent.ovn.metadata.agent [-] Port c8e5f1dd-3393-4d16-b1fe-e259ddea188d in datapath ca1ed3b2-b27d-427e-a9bd-cc12393752eb unbound from our chassis
Jan 31 07:58:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:58:39.197 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ca1ed3b2-b27d-427e-a9bd-cc12393752eb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:58:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:58:39.199 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d7a556a9-8346-473c-9f28-50ac3577e79d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:58:39.200 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb namespace which is not needed anymore
Jan 31 07:58:39 compute-2 neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb[263742]: [NOTICE]   (263746) : haproxy version is 2.8.14-c23fe91
Jan 31 07:58:39 compute-2 neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb[263742]: [NOTICE]   (263746) : path to executable is /usr/sbin/haproxy
Jan 31 07:58:39 compute-2 neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb[263742]: [WARNING]  (263746) : Exiting Master process...
Jan 31 07:58:39 compute-2 neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb[263742]: [WARNING]  (263746) : Exiting Master process...
Jan 31 07:58:39 compute-2 neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb[263742]: [ALERT]    (263746) : Current worker (263748) exited with code 143 (Terminated)
Jan 31 07:58:39 compute-2 neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb[263742]: [WARNING]  (263746) : All workers exited. Exiting... (0)
Jan 31 07:58:39 compute-2 systemd[1]: libpod-238c81e664ac2c96d2731919fdfd3a957199d5953d92c2e5a2ba540390028bca.scope: Deactivated successfully.
Jan 31 07:58:39 compute-2 podman[264253]: 2026-01-31 07:58:39.320082709 +0000 UTC m=+0.053804198 container died 238c81e664ac2c96d2731919fdfd3a957199d5953d92c2e5a2ba540390028bca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 07:58:39 compute-2 nova_compute[226829]: 2026-01-31 07:58:39.351 226833 INFO nova.virt.libvirt.driver [-] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Instance destroyed successfully.
Jan 31 07:58:39 compute-2 nova_compute[226829]: 2026-01-31 07:58:39.352 226833 DEBUG nova.objects.instance [None req-73c155d4-b0e5-4665-b95b-f205367986ce f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lazy-loading 'resources' on Instance uuid 2ea2db2b-f865-43f2-9f44-4a92a02bb804 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:58:39 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-238c81e664ac2c96d2731919fdfd3a957199d5953d92c2e5a2ba540390028bca-userdata-shm.mount: Deactivated successfully.
Jan 31 07:58:39 compute-2 systemd[1]: var-lib-containers-storage-overlay-23aba9f9d5797d9b4cea41da5256d80e40877f4390cacf1bb9cd652a55148b67-merged.mount: Deactivated successfully.
Jan 31 07:58:39 compute-2 podman[264253]: 2026-01-31 07:58:39.375661755 +0000 UTC m=+0.109383264 container cleanup 238c81e664ac2c96d2731919fdfd3a957199d5953d92c2e5a2ba540390028bca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:58:39 compute-2 systemd[1]: libpod-conmon-238c81e664ac2c96d2731919fdfd3a957199d5953d92c2e5a2ba540390028bca.scope: Deactivated successfully.
Jan 31 07:58:39 compute-2 nova_compute[226829]: 2026-01-31 07:58:39.398 226833 DEBUG nova.virt.libvirt.vif [None req-73c155d4-b0e5-4665-b95b-f205367986ce f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:56:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-128986884',display_name='tempest-ListServerFiltersTestJSON-instance-128986884',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-128986884',id=81,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:56:48Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1cd91610847a480caeee0ae3cdabf066',ramdisk_id='',reservation_id='r-w0cs3g2v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-334452958',owner_user_name='tempest-ListServerFiltersTestJSON-334452958-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:57:52Z,user_data=None,user_id='f60419a58aea43b9a0b6db7d61d71246',uuid=2ea2db2b-f865-43f2-9f44-4a92a02bb804,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "address": "fa:16:3e:96:da:40", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8e5f1dd-33", "ovs_interfaceid": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:58:39 compute-2 nova_compute[226829]: 2026-01-31 07:58:39.399 226833 DEBUG nova.network.os_vif_util [None req-73c155d4-b0e5-4665-b95b-f205367986ce f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Converting VIF {"id": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "address": "fa:16:3e:96:da:40", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8e5f1dd-33", "ovs_interfaceid": "c8e5f1dd-3393-4d16-b1fe-e259ddea188d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:58:39 compute-2 nova_compute[226829]: 2026-01-31 07:58:39.400 226833 DEBUG nova.network.os_vif_util [None req-73c155d4-b0e5-4665-b95b-f205367986ce f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:96:da:40,bridge_name='br-int',has_traffic_filtering=True,id=c8e5f1dd-3393-4d16-b1fe-e259ddea188d,network=Network(ca1ed3b2-b27d-427e-a9bd-cc12393752eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8e5f1dd-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:58:39 compute-2 nova_compute[226829]: 2026-01-31 07:58:39.401 226833 DEBUG os_vif [None req-73c155d4-b0e5-4665-b95b-f205367986ce f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:da:40,bridge_name='br-int',has_traffic_filtering=True,id=c8e5f1dd-3393-4d16-b1fe-e259ddea188d,network=Network(ca1ed3b2-b27d-427e-a9bd-cc12393752eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8e5f1dd-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:58:39 compute-2 nova_compute[226829]: 2026-01-31 07:58:39.405 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:39 compute-2 nova_compute[226829]: 2026-01-31 07:58:39.405 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8e5f1dd-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:58:39 compute-2 nova_compute[226829]: 2026-01-31 07:58:39.407 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:39 compute-2 nova_compute[226829]: 2026-01-31 07:58:39.410 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:58:39 compute-2 nova_compute[226829]: 2026-01-31 07:58:39.413 226833 INFO os_vif [None req-73c155d4-b0e5-4665-b95b-f205367986ce f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:da:40,bridge_name='br-int',has_traffic_filtering=True,id=c8e5f1dd-3393-4d16-b1fe-e259ddea188d,network=Network(ca1ed3b2-b27d-427e-a9bd-cc12393752eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8e5f1dd-33')
Jan 31 07:58:39 compute-2 podman[264294]: 2026-01-31 07:58:39.443124082 +0000 UTC m=+0.051696162 container remove 238c81e664ac2c96d2731919fdfd3a957199d5953d92c2e5a2ba540390028bca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 07:58:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:58:39.446 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b633573c-4b32-4691-a61b-ac20898497c9]: (4, ('Sat Jan 31 07:58:39 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb (238c81e664ac2c96d2731919fdfd3a957199d5953d92c2e5a2ba540390028bca)\n238c81e664ac2c96d2731919fdfd3a957199d5953d92c2e5a2ba540390028bca\nSat Jan 31 07:58:39 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb (238c81e664ac2c96d2731919fdfd3a957199d5953d92c2e5a2ba540390028bca)\n238c81e664ac2c96d2731919fdfd3a957199d5953d92c2e5a2ba540390028bca\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:58:39.448 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7c1b255b-233b-43af-ba11-339d3da66012]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:58:39.450 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca1ed3b2-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:58:39 compute-2 nova_compute[226829]: 2026-01-31 07:58:39.451 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:39 compute-2 kernel: tapca1ed3b2-b0: left promiscuous mode
Jan 31 07:58:39 compute-2 nova_compute[226829]: 2026-01-31 07:58:39.453 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:58:39.457 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4f456152-4ad1-421d-9589-f58125c3f073]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:39 compute-2 nova_compute[226829]: 2026-01-31 07:58:39.459 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:58:39.473 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0a6b3360-c501-4fc1-b66b-3ca71dcbca7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:58:39.475 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8a3fc81b-0715-4928-8b17-e35b8935f5b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:58:39.488 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a2ac866e-4567-40a3-a4d7-fc2adf70b683]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652389, 'reachable_time': 15075, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264327, 'error': None, 'target': 'ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:58:39.491 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:58:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:58:39.492 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[d44dcf00-04ea-4923-8afc-2fb78c9a77e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:39 compute-2 systemd[1]: run-netns-ovnmeta\x2dca1ed3b2\x2db27d\x2d427e\x2da9bd\x2dcc12393752eb.mount: Deactivated successfully.
Jan 31 07:58:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:58:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:39.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:58:39 compute-2 nova_compute[226829]: 2026-01-31 07:58:39.855 226833 DEBUG nova.compute.manager [req-fd90f098-97dc-426a-9e44-e70219ba09df req-25f66d5b-1205-4f33-b688-40070ed3c412 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Received event network-vif-unplugged-c8e5f1dd-3393-4d16-b1fe-e259ddea188d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:58:39 compute-2 nova_compute[226829]: 2026-01-31 07:58:39.856 226833 DEBUG oslo_concurrency.lockutils [req-fd90f098-97dc-426a-9e44-e70219ba09df req-25f66d5b-1205-4f33-b688-40070ed3c412 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:58:39 compute-2 nova_compute[226829]: 2026-01-31 07:58:39.856 226833 DEBUG oslo_concurrency.lockutils [req-fd90f098-97dc-426a-9e44-e70219ba09df req-25f66d5b-1205-4f33-b688-40070ed3c412 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:58:39 compute-2 nova_compute[226829]: 2026-01-31 07:58:39.856 226833 DEBUG oslo_concurrency.lockutils [req-fd90f098-97dc-426a-9e44-e70219ba09df req-25f66d5b-1205-4f33-b688-40070ed3c412 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:58:39 compute-2 nova_compute[226829]: 2026-01-31 07:58:39.857 226833 DEBUG nova.compute.manager [req-fd90f098-97dc-426a-9e44-e70219ba09df req-25f66d5b-1205-4f33-b688-40070ed3c412 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] No waiting events found dispatching network-vif-unplugged-c8e5f1dd-3393-4d16-b1fe-e259ddea188d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:58:39 compute-2 nova_compute[226829]: 2026-01-31 07:58:39.857 226833 DEBUG nova.compute.manager [req-fd90f098-97dc-426a-9e44-e70219ba09df req-25f66d5b-1205-4f33-b688-40070ed3c412 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Received event network-vif-unplugged-c8e5f1dd-3393-4d16-b1fe-e259ddea188d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:58:40 compute-2 nova_compute[226829]: 2026-01-31 07:58:40.102 226833 INFO nova.virt.libvirt.driver [None req-73c155d4-b0e5-4665-b95b-f205367986ce f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Deleting instance files /var/lib/nova/instances/2ea2db2b-f865-43f2-9f44-4a92a02bb804_del
Jan 31 07:58:40 compute-2 nova_compute[226829]: 2026-01-31 07:58:40.103 226833 INFO nova.virt.libvirt.driver [None req-73c155d4-b0e5-4665-b95b-f205367986ce f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Deletion of /var/lib/nova/instances/2ea2db2b-f865-43f2-9f44-4a92a02bb804_del complete
Jan 31 07:58:40 compute-2 nova_compute[226829]: 2026-01-31 07:58:40.118 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:40 compute-2 nova_compute[226829]: 2026-01-31 07:58:40.239 226833 INFO nova.compute.manager [None req-73c155d4-b0e5-4665-b95b-f205367986ce f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Took 1.32 seconds to destroy the instance on the hypervisor.
Jan 31 07:58:40 compute-2 nova_compute[226829]: 2026-01-31 07:58:40.240 226833 DEBUG oslo.service.loopingcall [None req-73c155d4-b0e5-4665-b95b-f205367986ce f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:58:40 compute-2 nova_compute[226829]: 2026-01-31 07:58:40.240 226833 DEBUG nova.compute.manager [-] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:58:40 compute-2 nova_compute[226829]: 2026-01-31 07:58:40.241 226833 DEBUG nova.network.neutron [-] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:58:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:40.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:40 compute-2 ceph-mon[77282]: pgmap v1842: 305 pgs: 305 active+clean; 122 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 17 KiB/s wr, 88 op/s
Jan 31 07:58:41 compute-2 nova_compute[226829]: 2026-01-31 07:58:41.479 226833 DEBUG nova.network.neutron [-] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:58:41 compute-2 nova_compute[226829]: 2026-01-31 07:58:41.567 226833 INFO nova.compute.manager [-] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Took 1.33 seconds to deallocate network for instance.
Jan 31 07:58:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:41.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:41 compute-2 nova_compute[226829]: 2026-01-31 07:58:41.668 226833 DEBUG oslo_concurrency.lockutils [None req-73c155d4-b0e5-4665-b95b-f205367986ce f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:58:41 compute-2 nova_compute[226829]: 2026-01-31 07:58:41.669 226833 DEBUG oslo_concurrency.lockutils [None req-73c155d4-b0e5-4665-b95b-f205367986ce f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:58:41 compute-2 nova_compute[226829]: 2026-01-31 07:58:41.755 226833 DEBUG oslo_concurrency.processutils [None req-73c155d4-b0e5-4665-b95b-f205367986ce f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:58:41 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1933574814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:58:42 compute-2 nova_compute[226829]: 2026-01-31 07:58:42.017 226833 DEBUG nova.compute.manager [req-2fafdd71-cd6c-437c-b924-61d75a5ef117 req-3cb791a0-0906-48cb-9ff1-90bcc626a66a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Received event network-vif-plugged-c8e5f1dd-3393-4d16-b1fe-e259ddea188d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:58:42 compute-2 nova_compute[226829]: 2026-01-31 07:58:42.017 226833 DEBUG oslo_concurrency.lockutils [req-2fafdd71-cd6c-437c-b924-61d75a5ef117 req-3cb791a0-0906-48cb-9ff1-90bcc626a66a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:58:42 compute-2 nova_compute[226829]: 2026-01-31 07:58:42.018 226833 DEBUG oslo_concurrency.lockutils [req-2fafdd71-cd6c-437c-b924-61d75a5ef117 req-3cb791a0-0906-48cb-9ff1-90bcc626a66a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:58:42 compute-2 nova_compute[226829]: 2026-01-31 07:58:42.018 226833 DEBUG oslo_concurrency.lockutils [req-2fafdd71-cd6c-437c-b924-61d75a5ef117 req-3cb791a0-0906-48cb-9ff1-90bcc626a66a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:58:42 compute-2 nova_compute[226829]: 2026-01-31 07:58:42.018 226833 DEBUG nova.compute.manager [req-2fafdd71-cd6c-437c-b924-61d75a5ef117 req-3cb791a0-0906-48cb-9ff1-90bcc626a66a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] No waiting events found dispatching network-vif-plugged-c8e5f1dd-3393-4d16-b1fe-e259ddea188d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:58:42 compute-2 nova_compute[226829]: 2026-01-31 07:58:42.019 226833 WARNING nova.compute.manager [req-2fafdd71-cd6c-437c-b924-61d75a5ef117 req-3cb791a0-0906-48cb-9ff1-90bcc626a66a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Received unexpected event network-vif-plugged-c8e5f1dd-3393-4d16-b1fe-e259ddea188d for instance with vm_state deleted and task_state None.
Jan 31 07:58:42 compute-2 nova_compute[226829]: 2026-01-31 07:58:42.019 226833 DEBUG nova.compute.manager [req-2fafdd71-cd6c-437c-b924-61d75a5ef117 req-3cb791a0-0906-48cb-9ff1-90bcc626a66a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Received event network-vif-deleted-c8e5f1dd-3393-4d16-b1fe-e259ddea188d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:58:42 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:58:42 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1005632506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:58:42 compute-2 nova_compute[226829]: 2026-01-31 07:58:42.172 226833 DEBUG oslo_concurrency.processutils [None req-73c155d4-b0e5-4665-b95b-f205367986ce f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:58:42 compute-2 nova_compute[226829]: 2026-01-31 07:58:42.178 226833 DEBUG nova.compute.provider_tree [None req-73c155d4-b0e5-4665-b95b-f205367986ce f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:58:42 compute-2 nova_compute[226829]: 2026-01-31 07:58:42.251 226833 DEBUG nova.scheduler.client.report [None req-73c155d4-b0e5-4665-b95b-f205367986ce f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:58:42 compute-2 nova_compute[226829]: 2026-01-31 07:58:42.318 226833 DEBUG oslo_concurrency.lockutils [None req-73c155d4-b0e5-4665-b95b-f205367986ce f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:58:42 compute-2 nova_compute[226829]: 2026-01-31 07:58:42.379 226833 INFO nova.scheduler.client.report [None req-73c155d4-b0e5-4665-b95b-f205367986ce f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Deleted allocations for instance 2ea2db2b-f865-43f2-9f44-4a92a02bb804
Jan 31 07:58:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:42.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:42 compute-2 nova_compute[226829]: 2026-01-31 07:58:42.566 226833 DEBUG oslo_concurrency.lockutils [None req-73c155d4-b0e5-4665-b95b-f205367986ce f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "2ea2db2b-f865-43f2-9f44-4a92a02bb804" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:58:42 compute-2 ceph-mon[77282]: pgmap v1843: 305 pgs: 305 active+clean; 97 MiB data, 708 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 5.2 KiB/s wr, 66 op/s
Jan 31 07:58:42 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1005632506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:58:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:43.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:43 compute-2 ceph-mon[77282]: pgmap v1844: 305 pgs: 305 active+clean; 78 MiB data, 698 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 5.5 KiB/s wr, 70 op/s
Jan 31 07:58:43 compute-2 nova_compute[226829]: 2026-01-31 07:58:43.792 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:58:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:58:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:44.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:44 compute-2 nova_compute[226829]: 2026-01-31 07:58:44.454 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:58:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/388839460' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:58:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:58:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/388839460' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:58:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/388839460' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:58:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/388839460' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:58:45 compute-2 nova_compute[226829]: 2026-01-31 07:58:45.121 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:45 compute-2 podman[264353]: 2026-01-31 07:58:45.191826696 +0000 UTC m=+0.073013998 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:58:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:45.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:46 compute-2 ceph-mon[77282]: pgmap v1845: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 5.8 KiB/s wr, 67 op/s
Jan 31 07:58:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:46.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:47.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:48 compute-2 nova_compute[226829]: 2026-01-31 07:58:48.101 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:48 compute-2 ceph-mon[77282]: pgmap v1846: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 5.5 KiB/s wr, 61 op/s
Jan 31 07:58:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:48.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:58:49 compute-2 nova_compute[226829]: 2026-01-31 07:58:49.489 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:58:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:49.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:58:50 compute-2 nova_compute[226829]: 2026-01-31 07:58:50.123 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:50 compute-2 ceph-mon[77282]: pgmap v1847: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 4.2 KiB/s wr, 44 op/s
Jan 31 07:58:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:50.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:51.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:52 compute-2 ceph-mon[77282]: pgmap v1848: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 07:58:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:52.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:53.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:58:54 compute-2 nova_compute[226829]: 2026-01-31 07:58:54.348 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846319.3464549, 2ea2db2b-f865-43f2-9f44-4a92a02bb804 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:58:54 compute-2 nova_compute[226829]: 2026-01-31 07:58:54.348 226833 INFO nova.compute.manager [-] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] VM Stopped (Lifecycle Event)
Jan 31 07:58:54 compute-2 ceph-mon[77282]: pgmap v1849: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 682 B/s wr, 16 op/s
Jan 31 07:58:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:58:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:54.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:58:54 compute-2 nova_compute[226829]: 2026-01-31 07:58:54.493 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:55 compute-2 nova_compute[226829]: 2026-01-31 07:58:55.064 226833 DEBUG nova.compute.manager [None req-783c80c8-6d31-4f5c-b818-ed55a676f893 - - - - - -] [instance: 2ea2db2b-f865-43f2-9f44-4a92a02bb804] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:58:55 compute-2 nova_compute[226829]: 2026-01-31 07:58:55.126 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:55.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:58:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:56.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:58:56 compute-2 ceph-mon[77282]: pgmap v1850: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 9.0 KiB/s rd, 341 B/s wr, 13 op/s
Jan 31 07:58:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:58:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:57.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:58:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:58.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:58 compute-2 ceph-mon[77282]: pgmap v1851: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:58:58 compute-2 sudo[264381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:58:58 compute-2 sudo[264381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:58 compute-2 sudo[264381]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:58 compute-2 sudo[264406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:58:58 compute-2 sudo[264406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:58 compute-2 sudo[264406]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:58:59 compute-2 nova_compute[226829]: 2026-01-31 07:58:59.534 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:58:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:59.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:00 compute-2 nova_compute[226829]: 2026-01-31 07:59:00.128 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:00.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:00 compute-2 ceph-mon[77282]: pgmap v1852: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:59:01 compute-2 podman[264432]: 2026-01-31 07:59:01.194943255 +0000 UTC m=+0.085775913 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 31 07:59:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:01.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:01 compute-2 ceph-mon[77282]: pgmap v1853: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:59:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:59:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:02.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:59:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:03.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:59:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:04.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:04 compute-2 nova_compute[226829]: 2026-01-31 07:59:04.538 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:04 compute-2 ceph-mon[77282]: pgmap v1854: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:59:05 compute-2 nova_compute[226829]: 2026-01-31 07:59:05.130 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:05.374 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:59:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:05.376 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:59:05 compute-2 nova_compute[226829]: 2026-01-31 07:59:05.424 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:59:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:05.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:59:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:06.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:06 compute-2 ceph-mon[77282]: pgmap v1855: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:59:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:06.870 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:59:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:06.872 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:59:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:06.873 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:59:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:07.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:07 compute-2 ceph-mon[77282]: pgmap v1856: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:59:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:08.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:59:09 compute-2 nova_compute[226829]: 2026-01-31 07:59:09.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:59:09 compute-2 nova_compute[226829]: 2026-01-31 07:59:09.541 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:59:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:09.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:59:10 compute-2 nova_compute[226829]: 2026-01-31 07:59:10.132 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:59:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:10.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:59:10 compute-2 nova_compute[226829]: 2026-01-31 07:59:10.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:59:10 compute-2 ceph-mon[77282]: pgmap v1857: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:59:11 compute-2 nova_compute[226829]: 2026-01-31 07:59:11.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:59:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:11.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:12.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:12 compute-2 ceph-mon[77282]: pgmap v1858: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:59:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/471546171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:59:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:13.378 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:59:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:13.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:59:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:14.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:14 compute-2 ceph-mon[77282]: pgmap v1859: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:59:14 compute-2 nova_compute[226829]: 2026-01-31 07:59:14.545 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:15 compute-2 nova_compute[226829]: 2026-01-31 07:59:15.135 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:59:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:15.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:59:16 compute-2 podman[264466]: 2026-01-31 07:59:16.147752988 +0000 UTC m=+0.040410066 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 07:59:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:16.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:16 compute-2 ceph-mon[77282]: pgmap v1860: 305 pgs: 305 active+clean; 59 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 7.0 KiB/s rd, 915 KiB/s wr, 13 op/s
Jan 31 07:59:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1140290837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:59:17 compute-2 nova_compute[226829]: 2026-01-31 07:59:17.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:59:17 compute-2 nova_compute[226829]: 2026-01-31 07:59:17.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:59:17 compute-2 nova_compute[226829]: 2026-01-31 07:59:17.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:59:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:59:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:17.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:59:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:18.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:18 compute-2 ceph-mon[77282]: pgmap v1861: 305 pgs: 305 active+clean; 70 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.0 MiB/s wr, 25 op/s
Jan 31 07:59:18 compute-2 sudo[264488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:59:18 compute-2 sudo[264488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:18 compute-2 sudo[264488]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:18 compute-2 sudo[264513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:59:18 compute-2 sudo[264513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:18 compute-2 sudo[264513]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:59:19 compute-2 nova_compute[226829]: 2026-01-31 07:59:19.548 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:59:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:19.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:59:19 compute-2 nova_compute[226829]: 2026-01-31 07:59:19.760 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:59:19 compute-2 nova_compute[226829]: 2026-01-31 07:59:19.760 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:59:19 compute-2 nova_compute[226829]: 2026-01-31 07:59:19.761 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:59:19 compute-2 nova_compute[226829]: 2026-01-31 07:59:19.793 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:59:19 compute-2 nova_compute[226829]: 2026-01-31 07:59:19.793 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:59:19 compute-2 nova_compute[226829]: 2026-01-31 07:59:19.793 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:59:19 compute-2 nova_compute[226829]: 2026-01-31 07:59:19.794 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:59:19 compute-2 nova_compute[226829]: 2026-01-31 07:59:19.794 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:59:20 compute-2 nova_compute[226829]: 2026-01-31 07:59:20.136 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:59:20 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2114909117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:59:20 compute-2 nova_compute[226829]: 2026-01-31 07:59:20.229 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:59:20 compute-2 nova_compute[226829]: 2026-01-31 07:59:20.373 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:59:20 compute-2 nova_compute[226829]: 2026-01-31 07:59:20.374 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4512MB free_disk=20.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:59:20 compute-2 nova_compute[226829]: 2026-01-31 07:59:20.374 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:59:20 compute-2 nova_compute[226829]: 2026-01-31 07:59:20.374 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:59:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:59:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:20.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:59:20 compute-2 nova_compute[226829]: 2026-01-31 07:59:20.489 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:59:20 compute-2 nova_compute[226829]: 2026-01-31 07:59:20.489 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:59:20 compute-2 nova_compute[226829]: 2026-01-31 07:59:20.533 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:59:20 compute-2 ceph-mon[77282]: pgmap v1862: 305 pgs: 305 active+clean; 88 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 07:59:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3292487236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:59:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2114909117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:59:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/182031490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:59:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:59:20 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3013397379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:59:20 compute-2 nova_compute[226829]: 2026-01-31 07:59:20.965 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:59:20 compute-2 nova_compute[226829]: 2026-01-31 07:59:20.973 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:59:21 compute-2 nova_compute[226829]: 2026-01-31 07:59:21.027 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:59:21 compute-2 nova_compute[226829]: 2026-01-31 07:59:21.062 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:59:21 compute-2 nova_compute[226829]: 2026-01-31 07:59:21.062 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:59:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:21.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:21 compute-2 nova_compute[226829]: 2026-01-31 07:59:21.789 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:59:21 compute-2 nova_compute[226829]: 2026-01-31 07:59:21.790 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:59:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1724540177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:59:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3013397379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:59:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1206870935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:59:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:22.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:23 compute-2 ceph-mon[77282]: pgmap v1863: 305 pgs: 305 active+clean; 88 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 07:59:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:59:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:23.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:59:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:59:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:59:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:24.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:59:24 compute-2 nova_compute[226829]: 2026-01-31 07:59:24.550 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:24 compute-2 ceph-mon[77282]: pgmap v1864: 305 pgs: 305 active+clean; 96 MiB data, 707 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 2.1 MiB/s wr, 39 op/s
Jan 31 07:59:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4102117780' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:59:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2095121666' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:59:25 compute-2 nova_compute[226829]: 2026-01-31 07:59:25.138 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:59:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:25.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:59:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:26.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:26 compute-2 ceph-mon[77282]: pgmap v1865: 305 pgs: 305 active+clean; 123 MiB data, 722 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.3 MiB/s wr, 53 op/s
Jan 31 07:59:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:27.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:27 compute-2 ceph-mon[77282]: pgmap v1866: 305 pgs: 305 active+clean; 134 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 2.7 MiB/s wr, 40 op/s
Jan 31 07:59:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:28.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:28 compute-2 nova_compute[226829]: 2026-01-31 07:59:28.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:59:28 compute-2 nova_compute[226829]: 2026-01-31 07:59:28.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:59:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:59:29 compute-2 nova_compute[226829]: 2026-01-31 07:59:29.552 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:29.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:30 compute-2 nova_compute[226829]: 2026-01-31 07:59:30.139 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:59:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:30.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:59:31 compute-2 ceph-mon[77282]: pgmap v1867: 305 pgs: 305 active+clean; 134 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 2.5 MiB/s wr, 34 op/s
Jan 31 07:59:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:31.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:32 compute-2 podman[264591]: 2026-01-31 07:59:32.188995811 +0000 UTC m=+0.069906994 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 31 07:59:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:59:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:32.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:59:32 compute-2 ceph-mon[77282]: pgmap v1868: 305 pgs: 305 active+clean; 134 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 31 07:59:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:59:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:33.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:59:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:59:34 compute-2 ovn_controller[133834]: 2026-01-31T07:59:34Z|00285|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 31 07:59:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:34.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:34 compute-2 nova_compute[226829]: 2026-01-31 07:59:34.556 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:34 compute-2 ceph-mon[77282]: pgmap v1869: 305 pgs: 305 active+clean; 134 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 750 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Jan 31 07:59:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1381825303' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:59:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3865166314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:59:35 compute-2 nova_compute[226829]: 2026-01-31 07:59:35.141 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:35.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:36 compute-2 ceph-mon[77282]: pgmap v1870: 305 pgs: 305 active+clean; 134 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.5 MiB/s wr, 84 op/s
Jan 31 07:59:36 compute-2 nova_compute[226829]: 2026-01-31 07:59:36.281 226833 DEBUG oslo_concurrency.lockutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Acquiring lock "b6d1d7e9-182a-43ce-beab-85e0bb023fad" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:59:36 compute-2 nova_compute[226829]: 2026-01-31 07:59:36.281 226833 DEBUG oslo_concurrency.lockutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Lock "b6d1d7e9-182a-43ce-beab-85e0bb023fad" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:59:36 compute-2 nova_compute[226829]: 2026-01-31 07:59:36.434 226833 DEBUG nova.compute.manager [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:59:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:36.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:36 compute-2 sudo[264619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:59:36 compute-2 sudo[264619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:36 compute-2 sudo[264619]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:36 compute-2 nova_compute[226829]: 2026-01-31 07:59:36.673 226833 DEBUG oslo_concurrency.lockutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:59:36 compute-2 nova_compute[226829]: 2026-01-31 07:59:36.673 226833 DEBUG oslo_concurrency.lockutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:59:36 compute-2 nova_compute[226829]: 2026-01-31 07:59:36.682 226833 DEBUG nova.virt.hardware [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:59:36 compute-2 nova_compute[226829]: 2026-01-31 07:59:36.682 226833 INFO nova.compute.claims [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Claim successful on node compute-2.ctlplane.example.com
Jan 31 07:59:36 compute-2 sudo[264644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:59:36 compute-2 sudo[264644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:36 compute-2 sudo[264644]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:36 compute-2 sudo[264669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:59:36 compute-2 sudo[264669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:36 compute-2 sudo[264669]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:36 compute-2 sudo[264694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 07:59:36 compute-2 sudo[264694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:36 compute-2 nova_compute[226829]: 2026-01-31 07:59:36.990 226833 DEBUG oslo_concurrency.processutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:59:37 compute-2 sudo[264694]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:37 compute-2 sudo[264760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:59:37 compute-2 sudo[264760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:37 compute-2 sudo[264760]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:37 compute-2 sudo[264785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:59:37 compute-2 sudo[264785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:37 compute-2 sudo[264785]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:37 compute-2 sudo[264810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:59:37 compute-2 sudo[264810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:37 compute-2 sudo[264810]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:59:37 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2139458933' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:59:37 compute-2 nova_compute[226829]: 2026-01-31 07:59:37.446 226833 DEBUG oslo_concurrency.processutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:59:37 compute-2 nova_compute[226829]: 2026-01-31 07:59:37.452 226833 DEBUG nova.compute.provider_tree [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:59:37 compute-2 sudo[264836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:59:37 compute-2 sudo[264836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:37 compute-2 nova_compute[226829]: 2026-01-31 07:59:37.497 226833 DEBUG nova.scheduler.client.report [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:59:37 compute-2 nova_compute[226829]: 2026-01-31 07:59:37.548 226833 DEBUG oslo_concurrency.lockutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:59:37 compute-2 nova_compute[226829]: 2026-01-31 07:59:37.549 226833 DEBUG nova.compute.manager [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:59:37 compute-2 nova_compute[226829]: 2026-01-31 07:59:37.661 226833 DEBUG nova.compute.manager [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:59:37 compute-2 nova_compute[226829]: 2026-01-31 07:59:37.662 226833 DEBUG nova.network.neutron [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:59:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:37.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:37 compute-2 nova_compute[226829]: 2026-01-31 07:59:37.739 226833 INFO nova.virt.libvirt.driver [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:59:37 compute-2 sudo[264836]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:37 compute-2 nova_compute[226829]: 2026-01-31 07:59:37.960 226833 DEBUG nova.compute.manager [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:59:37 compute-2 nova_compute[226829]: 2026-01-31 07:59:37.992 226833 DEBUG nova.policy [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea670e6956634b1b82cb9bcb09ff4400', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eefab01229294ca9bb0e9aed1933e3b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:59:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:59:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 07:59:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:59:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:59:38 compute-2 ceph-mon[77282]: pgmap v1871: 305 pgs: 305 active+clean; 134 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 241 KiB/s wr, 74 op/s
Jan 31 07:59:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2139458933' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:59:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:59:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 07:59:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:59:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:59:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:59:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:59:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:59:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:59:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:38.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:39 compute-2 sudo[264895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:59:39 compute-2 sudo[264895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:39 compute-2 sudo[264895]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:39 compute-2 sudo[264920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:59:39 compute-2 sudo[264920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:39 compute-2 sudo[264920]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:59:39 compute-2 nova_compute[226829]: 2026-01-31 07:59:39.186 226833 DEBUG nova.compute.manager [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:59:39 compute-2 nova_compute[226829]: 2026-01-31 07:59:39.187 226833 DEBUG nova.virt.libvirt.driver [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:59:39 compute-2 nova_compute[226829]: 2026-01-31 07:59:39.188 226833 INFO nova.virt.libvirt.driver [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Creating image(s)
Jan 31 07:59:39 compute-2 nova_compute[226829]: 2026-01-31 07:59:39.215 226833 DEBUG nova.storage.rbd_utils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] rbd image b6d1d7e9-182a-43ce-beab-85e0bb023fad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:59:39 compute-2 nova_compute[226829]: 2026-01-31 07:59:39.243 226833 DEBUG nova.storage.rbd_utils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] rbd image b6d1d7e9-182a-43ce-beab-85e0bb023fad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:59:39 compute-2 nova_compute[226829]: 2026-01-31 07:59:39.275 226833 DEBUG nova.storage.rbd_utils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] rbd image b6d1d7e9-182a-43ce-beab-85e0bb023fad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:59:39 compute-2 nova_compute[226829]: 2026-01-31 07:59:39.279 226833 DEBUG oslo_concurrency.processutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:59:39 compute-2 nova_compute[226829]: 2026-01-31 07:59:39.333 226833 DEBUG oslo_concurrency.processutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:59:39 compute-2 nova_compute[226829]: 2026-01-31 07:59:39.335 226833 DEBUG oslo_concurrency.lockutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:59:39 compute-2 nova_compute[226829]: 2026-01-31 07:59:39.335 226833 DEBUG oslo_concurrency.lockutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:59:39 compute-2 nova_compute[226829]: 2026-01-31 07:59:39.336 226833 DEBUG oslo_concurrency.lockutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:59:39 compute-2 nova_compute[226829]: 2026-01-31 07:59:39.373 226833 DEBUG nova.storage.rbd_utils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] rbd image b6d1d7e9-182a-43ce-beab-85e0bb023fad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:59:39 compute-2 nova_compute[226829]: 2026-01-31 07:59:39.377 226833 DEBUG oslo_concurrency.processutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 b6d1d7e9-182a-43ce-beab-85e0bb023fad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:59:39 compute-2 nova_compute[226829]: 2026-01-31 07:59:39.560 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:39.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:40 compute-2 nova_compute[226829]: 2026-01-31 07:59:40.143 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:40 compute-2 ceph-mon[77282]: pgmap v1872: 305 pgs: 305 active+clean; 134 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 25 KiB/s wr, 83 op/s
Jan 31 07:59:40 compute-2 nova_compute[226829]: 2026-01-31 07:59:40.208 226833 DEBUG oslo_concurrency.processutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 b6d1d7e9-182a-43ce-beab-85e0bb023fad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.831s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:59:40 compute-2 nova_compute[226829]: 2026-01-31 07:59:40.278 226833 DEBUG nova.storage.rbd_utils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] resizing rbd image b6d1d7e9-182a-43ce-beab-85e0bb023fad_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:59:40 compute-2 nova_compute[226829]: 2026-01-31 07:59:40.365 226833 DEBUG nova.objects.instance [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Lazy-loading 'migration_context' on Instance uuid b6d1d7e9-182a-43ce-beab-85e0bb023fad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:59:40 compute-2 nova_compute[226829]: 2026-01-31 07:59:40.410 226833 DEBUG nova.virt.libvirt.driver [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:59:40 compute-2 nova_compute[226829]: 2026-01-31 07:59:40.410 226833 DEBUG nova.virt.libvirt.driver [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Ensure instance console log exists: /var/lib/nova/instances/b6d1d7e9-182a-43ce-beab-85e0bb023fad/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:59:40 compute-2 nova_compute[226829]: 2026-01-31 07:59:40.410 226833 DEBUG oslo_concurrency.lockutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:59:40 compute-2 nova_compute[226829]: 2026-01-31 07:59:40.411 226833 DEBUG oslo_concurrency.lockutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:59:40 compute-2 nova_compute[226829]: 2026-01-31 07:59:40.411 226833 DEBUG oslo_concurrency.lockutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:59:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:59:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:40.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:59:40 compute-2 nova_compute[226829]: 2026-01-31 07:59:40.611 226833 DEBUG nova.network.neutron [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Successfully created port: 47628b26-1097-42cf-9f62-4c7d909083a5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:59:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:59:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:41.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:59:42 compute-2 ceph-mon[77282]: pgmap v1873: 305 pgs: 305 active+clean; 151 MiB data, 731 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 523 KiB/s wr, 87 op/s
Jan 31 07:59:42 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2876323651' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:59:42 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3586046035' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:59:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:42.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:43.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:59:44 compute-2 nova_compute[226829]: 2026-01-31 07:59:44.311 226833 DEBUG nova.network.neutron [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Successfully updated port: 47628b26-1097-42cf-9f62-4c7d909083a5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:59:44 compute-2 nova_compute[226829]: 2026-01-31 07:59:44.436 226833 DEBUG oslo_concurrency.lockutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Acquiring lock "refresh_cache-b6d1d7e9-182a-43ce-beab-85e0bb023fad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:59:44 compute-2 nova_compute[226829]: 2026-01-31 07:59:44.436 226833 DEBUG oslo_concurrency.lockutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Acquired lock "refresh_cache-b6d1d7e9-182a-43ce-beab-85e0bb023fad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:59:44 compute-2 nova_compute[226829]: 2026-01-31 07:59:44.436 226833 DEBUG nova.network.neutron [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:59:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:44.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:44 compute-2 nova_compute[226829]: 2026-01-31 07:59:44.564 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:44 compute-2 nova_compute[226829]: 2026-01-31 07:59:44.577 226833 DEBUG nova.compute.manager [req-4736317e-a72c-44fb-8ccc-2ff6e757567d req-b1fd5acf-2ac5-4248-b1d4-2985032b6ec1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Received event network-changed-47628b26-1097-42cf-9f62-4c7d909083a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:59:44 compute-2 nova_compute[226829]: 2026-01-31 07:59:44.577 226833 DEBUG nova.compute.manager [req-4736317e-a72c-44fb-8ccc-2ff6e757567d req-b1fd5acf-2ac5-4248-b1d4-2985032b6ec1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Refreshing instance network info cache due to event network-changed-47628b26-1097-42cf-9f62-4c7d909083a5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:59:44 compute-2 nova_compute[226829]: 2026-01-31 07:59:44.577 226833 DEBUG oslo_concurrency.lockutils [req-4736317e-a72c-44fb-8ccc-2ff6e757567d req-b1fd5acf-2ac5-4248-b1d4-2985032b6ec1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-b6d1d7e9-182a-43ce-beab-85e0bb023fad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:59:44 compute-2 ceph-mon[77282]: pgmap v1874: 305 pgs: 305 active+clean; 158 MiB data, 733 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 729 KiB/s wr, 102 op/s
Jan 31 07:59:44 compute-2 nova_compute[226829]: 2026-01-31 07:59:44.789 226833 DEBUG nova.network.neutron [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:59:45 compute-2 nova_compute[226829]: 2026-01-31 07:59:45.199 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:45.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:59:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:46.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:59:46 compute-2 nova_compute[226829]: 2026-01-31 07:59:46.711 226833 DEBUG nova.network.neutron [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Updating instance_info_cache with network_info: [{"id": "47628b26-1097-42cf-9f62-4c7d909083a5", "address": "fa:16:3e:2c:6f:40", "network": {"id": "5fdaecd5-5a26-4054-aea7-a820fc05f87a", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-541524873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eefab01229294ca9bb0e9aed1933e3b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47628b26-10", "ovs_interfaceid": "47628b26-1097-42cf-9f62-4c7d909083a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:59:47 compute-2 podman[265115]: 2026-01-31 07:59:47.172532644 +0000 UTC m=+0.059451891 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:59:47 compute-2 ceph-mon[77282]: pgmap v1875: 305 pgs: 305 active+clean; 181 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 145 op/s
Jan 31 07:59:47 compute-2 nova_compute[226829]: 2026-01-31 07:59:47.228 226833 DEBUG oslo_concurrency.lockutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Releasing lock "refresh_cache-b6d1d7e9-182a-43ce-beab-85e0bb023fad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:59:47 compute-2 nova_compute[226829]: 2026-01-31 07:59:47.229 226833 DEBUG nova.compute.manager [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Instance network_info: |[{"id": "47628b26-1097-42cf-9f62-4c7d909083a5", "address": "fa:16:3e:2c:6f:40", "network": {"id": "5fdaecd5-5a26-4054-aea7-a820fc05f87a", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-541524873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eefab01229294ca9bb0e9aed1933e3b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47628b26-10", "ovs_interfaceid": "47628b26-1097-42cf-9f62-4c7d909083a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:59:47 compute-2 nova_compute[226829]: 2026-01-31 07:59:47.229 226833 DEBUG oslo_concurrency.lockutils [req-4736317e-a72c-44fb-8ccc-2ff6e757567d req-b1fd5acf-2ac5-4248-b1d4-2985032b6ec1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-b6d1d7e9-182a-43ce-beab-85e0bb023fad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:59:47 compute-2 nova_compute[226829]: 2026-01-31 07:59:47.230 226833 DEBUG nova.network.neutron [req-4736317e-a72c-44fb-8ccc-2ff6e757567d req-b1fd5acf-2ac5-4248-b1d4-2985032b6ec1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Refreshing network info cache for port 47628b26-1097-42cf-9f62-4c7d909083a5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:59:47 compute-2 nova_compute[226829]: 2026-01-31 07:59:47.233 226833 DEBUG nova.virt.libvirt.driver [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Start _get_guest_xml network_info=[{"id": "47628b26-1097-42cf-9f62-4c7d909083a5", "address": "fa:16:3e:2c:6f:40", "network": {"id": "5fdaecd5-5a26-4054-aea7-a820fc05f87a", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-541524873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eefab01229294ca9bb0e9aed1933e3b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47628b26-10", "ovs_interfaceid": "47628b26-1097-42cf-9f62-4c7d909083a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:59:47 compute-2 nova_compute[226829]: 2026-01-31 07:59:47.238 226833 WARNING nova.virt.libvirt.driver [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:59:47 compute-2 nova_compute[226829]: 2026-01-31 07:59:47.247 226833 DEBUG nova.virt.libvirt.host [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:59:47 compute-2 nova_compute[226829]: 2026-01-31 07:59:47.248 226833 DEBUG nova.virt.libvirt.host [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:59:47 compute-2 nova_compute[226829]: 2026-01-31 07:59:47.251 226833 DEBUG nova.virt.libvirt.host [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:59:47 compute-2 nova_compute[226829]: 2026-01-31 07:59:47.252 226833 DEBUG nova.virt.libvirt.host [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:59:47 compute-2 nova_compute[226829]: 2026-01-31 07:59:47.254 226833 DEBUG nova.virt.libvirt.driver [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:59:47 compute-2 nova_compute[226829]: 2026-01-31 07:59:47.254 226833 DEBUG nova.virt.hardware [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:59:47 compute-2 nova_compute[226829]: 2026-01-31 07:59:47.254 226833 DEBUG nova.virt.hardware [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:59:47 compute-2 nova_compute[226829]: 2026-01-31 07:59:47.255 226833 DEBUG nova.virt.hardware [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:59:47 compute-2 nova_compute[226829]: 2026-01-31 07:59:47.255 226833 DEBUG nova.virt.hardware [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:59:47 compute-2 nova_compute[226829]: 2026-01-31 07:59:47.255 226833 DEBUG nova.virt.hardware [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:59:47 compute-2 nova_compute[226829]: 2026-01-31 07:59:47.255 226833 DEBUG nova.virt.hardware [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:59:47 compute-2 nova_compute[226829]: 2026-01-31 07:59:47.256 226833 DEBUG nova.virt.hardware [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:59:47 compute-2 nova_compute[226829]: 2026-01-31 07:59:47.256 226833 DEBUG nova.virt.hardware [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:59:47 compute-2 nova_compute[226829]: 2026-01-31 07:59:47.256 226833 DEBUG nova.virt.hardware [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:59:47 compute-2 nova_compute[226829]: 2026-01-31 07:59:47.256 226833 DEBUG nova.virt.hardware [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:59:47 compute-2 nova_compute[226829]: 2026-01-31 07:59:47.257 226833 DEBUG nova.virt.hardware [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:59:47 compute-2 nova_compute[226829]: 2026-01-31 07:59:47.260 226833 DEBUG oslo_concurrency.processutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:59:47 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:59:47 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4264646496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:59:47 compute-2 nova_compute[226829]: 2026-01-31 07:59:47.667 226833 DEBUG oslo_concurrency.processutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:59:47 compute-2 nova_compute[226829]: 2026-01-31 07:59:47.693 226833 DEBUG nova.storage.rbd_utils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] rbd image b6d1d7e9-182a-43ce-beab-85e0bb023fad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:59:47 compute-2 nova_compute[226829]: 2026-01-31 07:59:47.699 226833 DEBUG oslo_concurrency.processutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:59:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:59:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:47.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:59:47 compute-2 sudo[265197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:59:47 compute-2 sudo[265197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:47 compute-2 sudo[265197]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:47 compute-2 sudo[265222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:59:47 compute-2 sudo[265222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:47 compute-2 sudo[265222]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:59:48 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/851711816' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:59:48 compute-2 nova_compute[226829]: 2026-01-31 07:59:48.124 226833 DEBUG oslo_concurrency.processutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:59:48 compute-2 nova_compute[226829]: 2026-01-31 07:59:48.125 226833 DEBUG nova.virt.libvirt.vif [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:59:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-NoVNCConsoleTestJSON-server-1202355201',display_name='tempest-NoVNCConsoleTestJSON-server-1202355201',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-novncconsoletestjson-server-1202355201',id=88,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eefab01229294ca9bb0e9aed1933e3b4',ramdisk_id='',reservation_id='r-xf5ak4cj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-NoVNCConsoleTestJSON-72280206',owner_user_name='tempest-NoVNCConsoleTestJSON-7228020
6-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:59:38Z,user_data=None,user_id='ea670e6956634b1b82cb9bcb09ff4400',uuid=b6d1d7e9-182a-43ce-beab-85e0bb023fad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "47628b26-1097-42cf-9f62-4c7d909083a5", "address": "fa:16:3e:2c:6f:40", "network": {"id": "5fdaecd5-5a26-4054-aea7-a820fc05f87a", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-541524873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eefab01229294ca9bb0e9aed1933e3b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47628b26-10", "ovs_interfaceid": "47628b26-1097-42cf-9f62-4c7d909083a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:59:48 compute-2 nova_compute[226829]: 2026-01-31 07:59:48.126 226833 DEBUG nova.network.os_vif_util [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Converting VIF {"id": "47628b26-1097-42cf-9f62-4c7d909083a5", "address": "fa:16:3e:2c:6f:40", "network": {"id": "5fdaecd5-5a26-4054-aea7-a820fc05f87a", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-541524873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eefab01229294ca9bb0e9aed1933e3b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47628b26-10", "ovs_interfaceid": "47628b26-1097-42cf-9f62-4c7d909083a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:59:48 compute-2 nova_compute[226829]: 2026-01-31 07:59:48.126 226833 DEBUG nova.network.os_vif_util [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:6f:40,bridge_name='br-int',has_traffic_filtering=True,id=47628b26-1097-42cf-9f62-4c7d909083a5,network=Network(5fdaecd5-5a26-4054-aea7-a820fc05f87a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47628b26-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:59:48 compute-2 nova_compute[226829]: 2026-01-31 07:59:48.127 226833 DEBUG nova.objects.instance [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid b6d1d7e9-182a-43ce-beab-85e0bb023fad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:59:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:59:48 compute-2 ceph-mon[77282]: pgmap v1876: 305 pgs: 305 active+clean; 181 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Jan 31 07:59:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:59:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4264646496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:59:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/851711816' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:59:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:59:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:48.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:59:48 compute-2 nova_compute[226829]: 2026-01-31 07:59:48.666 226833 DEBUG nova.virt.libvirt.driver [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:59:48 compute-2 nova_compute[226829]:   <uuid>b6d1d7e9-182a-43ce-beab-85e0bb023fad</uuid>
Jan 31 07:59:48 compute-2 nova_compute[226829]:   <name>instance-00000058</name>
Jan 31 07:59:48 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 07:59:48 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 07:59:48 compute-2 nova_compute[226829]:   <metadata>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <nova:name>tempest-NoVNCConsoleTestJSON-server-1202355201</nova:name>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 07:59:47</nova:creationTime>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 07:59:48 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 07:59:48 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 07:59:48 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 07:59:48 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:59:48 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 07:59:48 compute-2 nova_compute[226829]:         <nova:user uuid="ea670e6956634b1b82cb9bcb09ff4400">tempest-NoVNCConsoleTestJSON-72280206-project-member</nova:user>
Jan 31 07:59:48 compute-2 nova_compute[226829]:         <nova:project uuid="eefab01229294ca9bb0e9aed1933e3b4">tempest-NoVNCConsoleTestJSON-72280206</nova:project>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 07:59:48 compute-2 nova_compute[226829]:         <nova:port uuid="47628b26-1097-42cf-9f62-4c7d909083a5">
Jan 31 07:59:48 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 07:59:48 compute-2 nova_compute[226829]:   </metadata>
Jan 31 07:59:48 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <system>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <entry name="serial">b6d1d7e9-182a-43ce-beab-85e0bb023fad</entry>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <entry name="uuid">b6d1d7e9-182a-43ce-beab-85e0bb023fad</entry>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     </system>
Jan 31 07:59:48 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 07:59:48 compute-2 nova_compute[226829]:   <os>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:   </os>
Jan 31 07:59:48 compute-2 nova_compute[226829]:   <features>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <apic/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:   </features>
Jan 31 07:59:48 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:   </clock>
Jan 31 07:59:48 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:   </cpu>
Jan 31 07:59:48 compute-2 nova_compute[226829]:   <devices>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/b6d1d7e9-182a-43ce-beab-85e0bb023fad_disk">
Jan 31 07:59:48 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       </source>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:59:48 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/b6d1d7e9-182a-43ce-beab-85e0bb023fad_disk.config">
Jan 31 07:59:48 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       </source>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 07:59:48 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       </auth>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     </disk>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:2c:6f:40"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <target dev="tap47628b26-10"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     </interface>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/b6d1d7e9-182a-43ce-beab-85e0bb023fad/console.log" append="off"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     </serial>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <video>
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     </video>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     </rng>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 07:59:48 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 07:59:48 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 07:59:48 compute-2 nova_compute[226829]:   </devices>
Jan 31 07:59:48 compute-2 nova_compute[226829]: </domain>
Jan 31 07:59:48 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:59:48 compute-2 nova_compute[226829]: 2026-01-31 07:59:48.668 226833 DEBUG nova.compute.manager [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Preparing to wait for external event network-vif-plugged-47628b26-1097-42cf-9f62-4c7d909083a5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:59:48 compute-2 nova_compute[226829]: 2026-01-31 07:59:48.668 226833 DEBUG oslo_concurrency.lockutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Acquiring lock "b6d1d7e9-182a-43ce-beab-85e0bb023fad-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:59:48 compute-2 nova_compute[226829]: 2026-01-31 07:59:48.668 226833 DEBUG oslo_concurrency.lockutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Lock "b6d1d7e9-182a-43ce-beab-85e0bb023fad-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:59:48 compute-2 nova_compute[226829]: 2026-01-31 07:59:48.668 226833 DEBUG oslo_concurrency.lockutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Lock "b6d1d7e9-182a-43ce-beab-85e0bb023fad-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:59:48 compute-2 nova_compute[226829]: 2026-01-31 07:59:48.669 226833 DEBUG nova.virt.libvirt.vif [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:59:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-NoVNCConsoleTestJSON-server-1202355201',display_name='tempest-NoVNCConsoleTestJSON-server-1202355201',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-novncconsoletestjson-server-1202355201',id=88,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eefab01229294ca9bb0e9aed1933e3b4',ramdisk_id='',reservation_id='r-xf5ak4cj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-NoVNCConsoleTestJSON-72280206',owner_user_name='tempest-NoVNCConsoleTestJSON-72280206-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:59:38Z,user_data=None,user_id='ea670e6956634b1b82cb9bcb09ff4400',uuid=b6d1d7e9-182a-43ce-beab-85e0bb023fad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "47628b26-1097-42cf-9f62-4c7d909083a5", "address": "fa:16:3e:2c:6f:40", "network": {"id": "5fdaecd5-5a26-4054-aea7-a820fc05f87a", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-541524873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eefab01229294ca9bb0e9aed1933e3b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47628b26-10", "ovs_interfaceid": "47628b26-1097-42cf-9f62-4c7d909083a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:59:48 compute-2 nova_compute[226829]: 2026-01-31 07:59:48.670 226833 DEBUG nova.network.os_vif_util [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Converting VIF {"id": "47628b26-1097-42cf-9f62-4c7d909083a5", "address": "fa:16:3e:2c:6f:40", "network": {"id": "5fdaecd5-5a26-4054-aea7-a820fc05f87a", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-541524873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eefab01229294ca9bb0e9aed1933e3b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47628b26-10", "ovs_interfaceid": "47628b26-1097-42cf-9f62-4c7d909083a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:59:48 compute-2 nova_compute[226829]: 2026-01-31 07:59:48.670 226833 DEBUG nova.network.os_vif_util [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:6f:40,bridge_name='br-int',has_traffic_filtering=True,id=47628b26-1097-42cf-9f62-4c7d909083a5,network=Network(5fdaecd5-5a26-4054-aea7-a820fc05f87a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47628b26-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:59:48 compute-2 nova_compute[226829]: 2026-01-31 07:59:48.670 226833 DEBUG os_vif [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:6f:40,bridge_name='br-int',has_traffic_filtering=True,id=47628b26-1097-42cf-9f62-4c7d909083a5,network=Network(5fdaecd5-5a26-4054-aea7-a820fc05f87a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47628b26-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:59:48 compute-2 nova_compute[226829]: 2026-01-31 07:59:48.671 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:48 compute-2 nova_compute[226829]: 2026-01-31 07:59:48.672 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:59:48 compute-2 nova_compute[226829]: 2026-01-31 07:59:48.672 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:59:48 compute-2 nova_compute[226829]: 2026-01-31 07:59:48.677 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:48 compute-2 nova_compute[226829]: 2026-01-31 07:59:48.678 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap47628b26-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:59:48 compute-2 nova_compute[226829]: 2026-01-31 07:59:48.678 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap47628b26-10, col_values=(('external_ids', {'iface-id': '47628b26-1097-42cf-9f62-4c7d909083a5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2c:6f:40', 'vm-uuid': 'b6d1d7e9-182a-43ce-beab-85e0bb023fad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:59:48 compute-2 nova_compute[226829]: 2026-01-31 07:59:48.683 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:59:48 compute-2 NetworkManager[48999]: <info>  [1769846388.6838] manager: (tap47628b26-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/149)
Jan 31 07:59:48 compute-2 nova_compute[226829]: 2026-01-31 07:59:48.690 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:48 compute-2 nova_compute[226829]: 2026-01-31 07:59:48.692 226833 INFO os_vif [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:6f:40,bridge_name='br-int',has_traffic_filtering=True,id=47628b26-1097-42cf-9f62-4c7d909083a5,network=Network(5fdaecd5-5a26-4054-aea7-a820fc05f87a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47628b26-10')
Jan 31 07:59:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:59:49 compute-2 nova_compute[226829]: 2026-01-31 07:59:49.437 226833 DEBUG nova.virt.libvirt.driver [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:59:49 compute-2 nova_compute[226829]: 2026-01-31 07:59:49.438 226833 DEBUG nova.virt.libvirt.driver [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:59:49 compute-2 nova_compute[226829]: 2026-01-31 07:59:49.438 226833 DEBUG nova.virt.libvirt.driver [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] No VIF found with MAC fa:16:3e:2c:6f:40, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:59:49 compute-2 nova_compute[226829]: 2026-01-31 07:59:49.438 226833 INFO nova.virt.libvirt.driver [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Using config drive
Jan 31 07:59:49 compute-2 nova_compute[226829]: 2026-01-31 07:59:49.486 226833 DEBUG nova.storage.rbd_utils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] rbd image b6d1d7e9-182a-43ce-beab-85e0bb023fad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:59:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:49.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:50 compute-2 nova_compute[226829]: 2026-01-31 07:59:50.200 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:50 compute-2 ceph-mon[77282]: pgmap v1877: 305 pgs: 305 active+clean; 163 MiB data, 736 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 187 op/s
Jan 31 07:59:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:50.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:59:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:51.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:59:52 compute-2 nova_compute[226829]: 2026-01-31 07:59:52.099 226833 INFO nova.virt.libvirt.driver [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Creating config drive at /var/lib/nova/instances/b6d1d7e9-182a-43ce-beab-85e0bb023fad/disk.config
Jan 31 07:59:52 compute-2 nova_compute[226829]: 2026-01-31 07:59:52.105 226833 DEBUG oslo_concurrency.processutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b6d1d7e9-182a-43ce-beab-85e0bb023fad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpuz2us4kc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:59:52 compute-2 nova_compute[226829]: 2026-01-31 07:59:52.228 226833 DEBUG oslo_concurrency.processutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b6d1d7e9-182a-43ce-beab-85e0bb023fad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpuz2us4kc" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:59:52 compute-2 nova_compute[226829]: 2026-01-31 07:59:52.257 226833 DEBUG nova.storage.rbd_utils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] rbd image b6d1d7e9-182a-43ce-beab-85e0bb023fad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:59:52 compute-2 nova_compute[226829]: 2026-01-31 07:59:52.261 226833 DEBUG oslo_concurrency.processutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b6d1d7e9-182a-43ce-beab-85e0bb023fad/disk.config b6d1d7e9-182a-43ce-beab-85e0bb023fad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:59:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:52.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:52 compute-2 ceph-mon[77282]: pgmap v1878: 305 pgs: 305 active+clean; 155 MiB data, 739 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.2 MiB/s wr, 190 op/s
Jan 31 07:59:53 compute-2 nova_compute[226829]: 2026-01-31 07:59:53.190 226833 DEBUG oslo_concurrency.processutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b6d1d7e9-182a-43ce-beab-85e0bb023fad/disk.config b6d1d7e9-182a-43ce-beab-85e0bb023fad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.929s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:59:53 compute-2 nova_compute[226829]: 2026-01-31 07:59:53.192 226833 INFO nova.virt.libvirt.driver [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Deleting local config drive /var/lib/nova/instances/b6d1d7e9-182a-43ce-beab-85e0bb023fad/disk.config because it was imported into RBD.
Jan 31 07:59:53 compute-2 kernel: tap47628b26-10: entered promiscuous mode
Jan 31 07:59:53 compute-2 NetworkManager[48999]: <info>  [1769846393.2384] manager: (tap47628b26-10): new Tun device (/org/freedesktop/NetworkManager/Devices/150)
Jan 31 07:59:53 compute-2 ovn_controller[133834]: 2026-01-31T07:59:53Z|00286|binding|INFO|Claiming lport 47628b26-1097-42cf-9f62-4c7d909083a5 for this chassis.
Jan 31 07:59:53 compute-2 ovn_controller[133834]: 2026-01-31T07:59:53Z|00287|binding|INFO|47628b26-1097-42cf-9f62-4c7d909083a5: Claiming fa:16:3e:2c:6f:40 10.100.0.9
Jan 31 07:59:53 compute-2 nova_compute[226829]: 2026-01-31 07:59:53.239 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:53 compute-2 nova_compute[226829]: 2026-01-31 07:59:53.243 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:53 compute-2 nova_compute[226829]: 2026-01-31 07:59:53.260 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:53 compute-2 ovn_controller[133834]: 2026-01-31T07:59:53Z|00288|binding|INFO|Setting lport 47628b26-1097-42cf-9f62-4c7d909083a5 ovn-installed in OVS
Jan 31 07:59:53 compute-2 nova_compute[226829]: 2026-01-31 07:59:53.263 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:53 compute-2 systemd-machined[195142]: New machine qemu-36-instance-00000058.
Jan 31 07:59:53 compute-2 systemd[1]: Started Virtual Machine qemu-36-instance-00000058.
Jan 31 07:59:53 compute-2 systemd-udevd[265327]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:59:53 compute-2 NetworkManager[48999]: <info>  [1769846393.3098] device (tap47628b26-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:59:53 compute-2 NetworkManager[48999]: <info>  [1769846393.3105] device (tap47628b26-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:59:53 compute-2 ovn_controller[133834]: 2026-01-31T07:59:53Z|00289|binding|INFO|Setting lport 47628b26-1097-42cf-9f62-4c7d909083a5 up in Southbound
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:53.347 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:6f:40 10.100.0.9'], port_security=['fa:16:3e:2c:6f:40 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b6d1d7e9-182a-43ce-beab-85e0bb023fad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5fdaecd5-5a26-4054-aea7-a820fc05f87a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eefab01229294ca9bb0e9aed1933e3b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '62e1ab70-cfe5-4699-ab44-3a22333f071b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2b116c1-42f7-408e-ab98-1af6342c9dcc, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=47628b26-1097-42cf-9f62-4c7d909083a5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:53.348 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 47628b26-1097-42cf-9f62-4c7d909083a5 in datapath 5fdaecd5-5a26-4054-aea7-a820fc05f87a bound to our chassis
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:53.350 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5fdaecd5-5a26-4054-aea7-a820fc05f87a
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:53.361 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8eeedb7e-2552-497e-b3e7-82990055b78a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:53.362 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5fdaecd5-51 in ovnmeta-5fdaecd5-5a26-4054-aea7-a820fc05f87a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:53.364 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5fdaecd5-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:53.364 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[030134f7-3cf8-4d3a-b3b3-562bef6ffc93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:53.365 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d006d34b-df3e-4e0a-ad6f-1cb03acdd00a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:53.379 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[9ad82c65-5c8b-458b-8358-42b43117a130]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:53.391 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a45c7830-98e6-4f2f-98e6-df58c5ff3fc5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:53.413 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[95734886-a4ae-4ab8-9a41-522052b997a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:53 compute-2 NetworkManager[48999]: <info>  [1769846393.4185] manager: (tap5fdaecd5-50): new Veth device (/org/freedesktop/NetworkManager/Devices/151)
Jan 31 07:59:53 compute-2 systemd-udevd[265332]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:53.420 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a5358978-0885-4230-a3d1-9fd4e456cf61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:53.447 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[22fde922-5af8-4c96-b671-48dac41a2f77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:53.451 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[b4839b06-6a52-4383-a807-2f9cb3e911e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:53 compute-2 NetworkManager[48999]: <info>  [1769846393.4663] device (tap5fdaecd5-50): carrier: link connected
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:53.468 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[a6763501-d555-4bc1-a1eb-3a6986b373e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:53.479 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f0698c-40a7-4953-87ca-47111f6116b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5fdaecd5-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:38:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664593, 'reachable_time': 44035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265358, 'error': None, 'target': 'ovnmeta-5fdaecd5-5a26-4054-aea7-a820fc05f87a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:53.490 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[bec661b9-b2f5-46dc-a239-c2d4ff66b910]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe64:3825'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 664593, 'tstamp': 664593}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265363, 'error': None, 'target': 'ovnmeta-5fdaecd5-5a26-4054-aea7-a820fc05f87a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:53.502 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f6aafc4b-2a84-4a12-9303-884544658394]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5fdaecd5-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:38:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664593, 'reachable_time': 44035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265375, 'error': None, 'target': 'ovnmeta-5fdaecd5-5a26-4054-aea7-a820fc05f87a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:53.523 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[090d770a-eb70-4759-9227-6e2b6b892902]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:53 compute-2 nova_compute[226829]: 2026-01-31 07:59:53.546 226833 DEBUG nova.network.neutron [req-4736317e-a72c-44fb-8ccc-2ff6e757567d req-b1fd5acf-2ac5-4248-b1d4-2985032b6ec1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Updated VIF entry in instance network info cache for port 47628b26-1097-42cf-9f62-4c7d909083a5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:59:53 compute-2 nova_compute[226829]: 2026-01-31 07:59:53.547 226833 DEBUG nova.network.neutron [req-4736317e-a72c-44fb-8ccc-2ff6e757567d req-b1fd5acf-2ac5-4248-b1d4-2985032b6ec1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Updating instance_info_cache with network_info: [{"id": "47628b26-1097-42cf-9f62-4c7d909083a5", "address": "fa:16:3e:2c:6f:40", "network": {"id": "5fdaecd5-5a26-4054-aea7-a820fc05f87a", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-541524873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eefab01229294ca9bb0e9aed1933e3b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47628b26-10", "ovs_interfaceid": "47628b26-1097-42cf-9f62-4c7d909083a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:53.557 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[022fc692-548f-4287-8f41-6ff035f48d7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:53.559 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5fdaecd5-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:53.559 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:53.560 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5fdaecd5-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:59:53 compute-2 nova_compute[226829]: 2026-01-31 07:59:53.562 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:53 compute-2 kernel: tap5fdaecd5-50: entered promiscuous mode
Jan 31 07:59:53 compute-2 NetworkManager[48999]: <info>  [1769846393.5633] manager: (tap5fdaecd5-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/152)
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:53.565 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5fdaecd5-50, col_values=(('external_ids', {'iface-id': '3b693ea8-4a1d-4e0c-bb5e-50602bebc381'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:59:53 compute-2 ovn_controller[133834]: 2026-01-31T07:59:53Z|00290|binding|INFO|Releasing lport 3b693ea8-4a1d-4e0c-bb5e-50602bebc381 from this chassis (sb_readonly=0)
Jan 31 07:59:53 compute-2 nova_compute[226829]: 2026-01-31 07:59:53.566 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:53.567 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5fdaecd5-5a26-4054-aea7-a820fc05f87a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5fdaecd5-5a26-4054-aea7-a820fc05f87a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:53.568 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[30ba676d-ef88-47d8-84d2-468a83e7863c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:53.568 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: global
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-5fdaecd5-5a26-4054-aea7-a820fc05f87a
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/5fdaecd5-5a26-4054-aea7-a820fc05f87a.pid.haproxy
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 5fdaecd5-5a26-4054-aea7-a820fc05f87a
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:59:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:53.569 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5fdaecd5-5a26-4054-aea7-a820fc05f87a', 'env', 'PROCESS_TAG=haproxy-5fdaecd5-5a26-4054-aea7-a820fc05f87a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5fdaecd5-5a26-4054-aea7-a820fc05f87a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:59:53 compute-2 nova_compute[226829]: 2026-01-31 07:59:53.571 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:53 compute-2 nova_compute[226829]: 2026-01-31 07:59:53.680 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:53.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:53 compute-2 nova_compute[226829]: 2026-01-31 07:59:53.746 226833 DEBUG oslo_concurrency.lockutils [req-4736317e-a72c-44fb-8ccc-2ff6e757567d req-b1fd5acf-2ac5-4248-b1d4-2985032b6ec1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-b6d1d7e9-182a-43ce-beab-85e0bb023fad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:59:53 compute-2 nova_compute[226829]: 2026-01-31 07:59:53.824 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846393.8238418, b6d1d7e9-182a-43ce-beab-85e0bb023fad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:59:53 compute-2 nova_compute[226829]: 2026-01-31 07:59:53.825 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] VM Started (Lifecycle Event)
Jan 31 07:59:53 compute-2 nova_compute[226829]: 2026-01-31 07:59:53.943 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:59:53 compute-2 nova_compute[226829]: 2026-01-31 07:59:53.957 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846393.8244128, b6d1d7e9-182a-43ce-beab-85e0bb023fad => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:59:53 compute-2 nova_compute[226829]: 2026-01-31 07:59:53.958 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] VM Paused (Lifecycle Event)
Jan 31 07:59:53 compute-2 podman[265434]: 2026-01-31 07:59:53.873152012 +0000 UTC m=+0.019618192 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:59:54 compute-2 nova_compute[226829]: 2026-01-31 07:59:54.106 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:59:54 compute-2 nova_compute[226829]: 2026-01-31 07:59:54.109 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:59:54 compute-2 podman[265434]: 2026-01-31 07:59:54.135542949 +0000 UTC m=+0.282009109 container create 8fc90130c84482182083a20c6fb4e1c8deb2e0222c94a4c2ad12667faa15ce93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5fdaecd5-5a26-4054-aea7-a820fc05f87a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 07:59:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:59:54 compute-2 systemd[1]: Started libpod-conmon-8fc90130c84482182083a20c6fb4e1c8deb2e0222c94a4c2ad12667faa15ce93.scope.
Jan 31 07:59:54 compute-2 ceph-mon[77282]: pgmap v1879: 305 pgs: 305 active+clean; 145 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.1 MiB/s wr, 198 op/s
Jan 31 07:59:54 compute-2 nova_compute[226829]: 2026-01-31 07:59:54.204 226833 DEBUG nova.compute.manager [req-da76f0fd-7181-4b21-967e-3f9bedebba81 req-ee351824-deea-4759-9b86-961f303c1053 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Received event network-vif-plugged-47628b26-1097-42cf-9f62-4c7d909083a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:59:54 compute-2 nova_compute[226829]: 2026-01-31 07:59:54.204 226833 DEBUG oslo_concurrency.lockutils [req-da76f0fd-7181-4b21-967e-3f9bedebba81 req-ee351824-deea-4759-9b86-961f303c1053 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b6d1d7e9-182a-43ce-beab-85e0bb023fad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:59:54 compute-2 nova_compute[226829]: 2026-01-31 07:59:54.205 226833 DEBUG oslo_concurrency.lockutils [req-da76f0fd-7181-4b21-967e-3f9bedebba81 req-ee351824-deea-4759-9b86-961f303c1053 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b6d1d7e9-182a-43ce-beab-85e0bb023fad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:59:54 compute-2 nova_compute[226829]: 2026-01-31 07:59:54.205 226833 DEBUG oslo_concurrency.lockutils [req-da76f0fd-7181-4b21-967e-3f9bedebba81 req-ee351824-deea-4759-9b86-961f303c1053 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b6d1d7e9-182a-43ce-beab-85e0bb023fad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:59:54 compute-2 nova_compute[226829]: 2026-01-31 07:59:54.205 226833 DEBUG nova.compute.manager [req-da76f0fd-7181-4b21-967e-3f9bedebba81 req-ee351824-deea-4759-9b86-961f303c1053 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Processing event network-vif-plugged-47628b26-1097-42cf-9f62-4c7d909083a5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:59:54 compute-2 nova_compute[226829]: 2026-01-31 07:59:54.206 226833 DEBUG nova.compute.manager [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:59:54 compute-2 nova_compute[226829]: 2026-01-31 07:59:54.210 226833 DEBUG nova.virt.libvirt.driver [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:59:54 compute-2 nova_compute[226829]: 2026-01-31 07:59:54.213 226833 INFO nova.virt.libvirt.driver [-] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Instance spawned successfully.
Jan 31 07:59:54 compute-2 nova_compute[226829]: 2026-01-31 07:59:54.213 226833 DEBUG nova.virt.libvirt.driver [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:59:54 compute-2 systemd[1]: Started libcrun container.
Jan 31 07:59:54 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b47ca3fffa3401f321d565b9bd626234abdb3bf6489efec63f5db556bf67606/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:54 compute-2 nova_compute[226829]: 2026-01-31 07:59:54.221 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:59:54 compute-2 nova_compute[226829]: 2026-01-31 07:59:54.221 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846394.2104406, b6d1d7e9-182a-43ce-beab-85e0bb023fad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:59:54 compute-2 nova_compute[226829]: 2026-01-31 07:59:54.221 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] VM Resumed (Lifecycle Event)
Jan 31 07:59:54 compute-2 nova_compute[226829]: 2026-01-31 07:59:54.250 226833 DEBUG nova.virt.libvirt.driver [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:59:54 compute-2 nova_compute[226829]: 2026-01-31 07:59:54.250 226833 DEBUG nova.virt.libvirt.driver [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:59:54 compute-2 nova_compute[226829]: 2026-01-31 07:59:54.251 226833 DEBUG nova.virt.libvirt.driver [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:59:54 compute-2 nova_compute[226829]: 2026-01-31 07:59:54.251 226833 DEBUG nova.virt.libvirt.driver [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:59:54 compute-2 nova_compute[226829]: 2026-01-31 07:59:54.252 226833 DEBUG nova.virt.libvirt.driver [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:59:54 compute-2 nova_compute[226829]: 2026-01-31 07:59:54.252 226833 DEBUG nova.virt.libvirt.driver [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:59:54 compute-2 nova_compute[226829]: 2026-01-31 07:59:54.289 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:59:54 compute-2 podman[265434]: 2026-01-31 07:59:54.292067519 +0000 UTC m=+0.438533699 container init 8fc90130c84482182083a20c6fb4e1c8deb2e0222c94a4c2ad12667faa15ce93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5fdaecd5-5a26-4054-aea7-a820fc05f87a, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 07:59:54 compute-2 nova_compute[226829]: 2026-01-31 07:59:54.293 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:59:54 compute-2 podman[265434]: 2026-01-31 07:59:54.300103766 +0000 UTC m=+0.446569926 container start 8fc90130c84482182083a20c6fb4e1c8deb2e0222c94a4c2ad12667faa15ce93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5fdaecd5-5a26-4054-aea7-a820fc05f87a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 07:59:54 compute-2 neutron-haproxy-ovnmeta-5fdaecd5-5a26-4054-aea7-a820fc05f87a[265450]: [NOTICE]   (265454) : New worker (265456) forked
Jan 31 07:59:54 compute-2 neutron-haproxy-ovnmeta-5fdaecd5-5a26-4054-aea7-a820fc05f87a[265450]: [NOTICE]   (265454) : Loading success.
Jan 31 07:59:54 compute-2 nova_compute[226829]: 2026-01-31 07:59:54.512 226833 INFO nova.compute.manager [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Took 15.33 seconds to spawn the instance on the hypervisor.
Jan 31 07:59:54 compute-2 nova_compute[226829]: 2026-01-31 07:59:54.513 226833 DEBUG nova.compute.manager [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:59:54 compute-2 nova_compute[226829]: 2026-01-31 07:59:54.522 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:59:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:54.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:54 compute-2 nova_compute[226829]: 2026-01-31 07:59:54.740 226833 INFO nova.compute.manager [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Took 18.11 seconds to build instance.
Jan 31 07:59:54 compute-2 nova_compute[226829]: 2026-01-31 07:59:54.971 226833 DEBUG oslo_concurrency.lockutils [None req-d9464166-9886-4279-832a-f7c3bc1108f8 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Lock "b6d1d7e9-182a-43ce-beab-85e0bb023fad" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:59:55 compute-2 nova_compute[226829]: 2026-01-31 07:59:55.203 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/18686290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:59:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:55.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:55 compute-2 nova_compute[226829]: 2026-01-31 07:59:55.996 226833 DEBUG nova.compute.manager [None req-9bb3ca3e-db85-481b-ab6e-544c6cbbbd6c ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Getting vnc console get_vnc_console /usr/lib/python3.9/site-packages/nova/compute/manager.py:7196
Jan 31 07:59:56 compute-2 nova_compute[226829]: 2026-01-31 07:59:56.469 226833 DEBUG nova.compute.manager [req-1ac07358-1c0c-43d4-b010-b84852a98e18 req-c6debcc9-e65e-4d24-a66d-e7a3309fa60e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Received event network-vif-plugged-47628b26-1097-42cf-9f62-4c7d909083a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:59:56 compute-2 nova_compute[226829]: 2026-01-31 07:59:56.469 226833 DEBUG oslo_concurrency.lockutils [req-1ac07358-1c0c-43d4-b010-b84852a98e18 req-c6debcc9-e65e-4d24-a66d-e7a3309fa60e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b6d1d7e9-182a-43ce-beab-85e0bb023fad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:59:56 compute-2 nova_compute[226829]: 2026-01-31 07:59:56.469 226833 DEBUG oslo_concurrency.lockutils [req-1ac07358-1c0c-43d4-b010-b84852a98e18 req-c6debcc9-e65e-4d24-a66d-e7a3309fa60e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b6d1d7e9-182a-43ce-beab-85e0bb023fad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:59:56 compute-2 nova_compute[226829]: 2026-01-31 07:59:56.470 226833 DEBUG oslo_concurrency.lockutils [req-1ac07358-1c0c-43d4-b010-b84852a98e18 req-c6debcc9-e65e-4d24-a66d-e7a3309fa60e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b6d1d7e9-182a-43ce-beab-85e0bb023fad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:59:56 compute-2 nova_compute[226829]: 2026-01-31 07:59:56.470 226833 DEBUG nova.compute.manager [req-1ac07358-1c0c-43d4-b010-b84852a98e18 req-c6debcc9-e65e-4d24-a66d-e7a3309fa60e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] No waiting events found dispatching network-vif-plugged-47628b26-1097-42cf-9f62-4c7d909083a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:59:56 compute-2 nova_compute[226829]: 2026-01-31 07:59:56.470 226833 WARNING nova.compute.manager [req-1ac07358-1c0c-43d4-b010-b84852a98e18 req-c6debcc9-e65e-4d24-a66d-e7a3309fa60e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Received unexpected event network-vif-plugged-47628b26-1097-42cf-9f62-4c7d909083a5 for instance with vm_state active and task_state None.
Jan 31 07:59:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:59:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:56.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:59:56 compute-2 ceph-mon[77282]: pgmap v1880: 305 pgs: 305 active+clean; 164 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.2 MiB/s wr, 224 op/s
Jan 31 07:59:57 compute-2 nova_compute[226829]: 2026-01-31 07:59:57.171 226833 DEBUG nova.compute.manager [None req-376dad2b-081b-4f8d-aa32-6c1c0fb9aadb ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Getting vnc console get_vnc_console /usr/lib/python3.9/site-packages/nova/compute/manager.py:7196
Jan 31 07:59:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:57.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:57 compute-2 nova_compute[226829]: 2026-01-31 07:59:57.921 226833 DEBUG oslo_concurrency.lockutils [None req-143eac5e-518b-436d-b3c4-5dc388f05825 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Acquiring lock "b6d1d7e9-182a-43ce-beab-85e0bb023fad" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:59:57 compute-2 nova_compute[226829]: 2026-01-31 07:59:57.922 226833 DEBUG oslo_concurrency.lockutils [None req-143eac5e-518b-436d-b3c4-5dc388f05825 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Lock "b6d1d7e9-182a-43ce-beab-85e0bb023fad" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:59:57 compute-2 nova_compute[226829]: 2026-01-31 07:59:57.922 226833 DEBUG oslo_concurrency.lockutils [None req-143eac5e-518b-436d-b3c4-5dc388f05825 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Acquiring lock "b6d1d7e9-182a-43ce-beab-85e0bb023fad-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:59:57 compute-2 nova_compute[226829]: 2026-01-31 07:59:57.922 226833 DEBUG oslo_concurrency.lockutils [None req-143eac5e-518b-436d-b3c4-5dc388f05825 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Lock "b6d1d7e9-182a-43ce-beab-85e0bb023fad-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:59:57 compute-2 nova_compute[226829]: 2026-01-31 07:59:57.923 226833 DEBUG oslo_concurrency.lockutils [None req-143eac5e-518b-436d-b3c4-5dc388f05825 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Lock "b6d1d7e9-182a-43ce-beab-85e0bb023fad-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:59:57 compute-2 nova_compute[226829]: 2026-01-31 07:59:57.924 226833 INFO nova.compute.manager [None req-143eac5e-518b-436d-b3c4-5dc388f05825 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Terminating instance
Jan 31 07:59:57 compute-2 nova_compute[226829]: 2026-01-31 07:59:57.925 226833 DEBUG nova.compute.manager [None req-143eac5e-518b-436d-b3c4-5dc388f05825 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:59:58 compute-2 nova_compute[226829]: 2026-01-31 07:59:58.035 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:58.035 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:59:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:58.038 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:59:58 compute-2 ceph-mon[77282]: pgmap v1881: 305 pgs: 305 active+clean; 166 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 179 op/s
Jan 31 07:59:58 compute-2 kernel: tap47628b26-10 (unregistering): left promiscuous mode
Jan 31 07:59:58 compute-2 NetworkManager[48999]: <info>  [1769846398.5259] device (tap47628b26-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:59:58 compute-2 ovn_controller[133834]: 2026-01-31T07:59:58Z|00291|binding|INFO|Releasing lport 47628b26-1097-42cf-9f62-4c7d909083a5 from this chassis (sb_readonly=0)
Jan 31 07:59:58 compute-2 nova_compute[226829]: 2026-01-31 07:59:58.534 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:58 compute-2 ovn_controller[133834]: 2026-01-31T07:59:58Z|00292|binding|INFO|Setting lport 47628b26-1097-42cf-9f62-4c7d909083a5 down in Southbound
Jan 31 07:59:58 compute-2 ovn_controller[133834]: 2026-01-31T07:59:58Z|00293|binding|INFO|Removing iface tap47628b26-10 ovn-installed in OVS
Jan 31 07:59:58 compute-2 nova_compute[226829]: 2026-01-31 07:59:58.536 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:59:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:58.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:59:58 compute-2 nova_compute[226829]: 2026-01-31 07:59:58.545 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:58.559 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:6f:40 10.100.0.9'], port_security=['fa:16:3e:2c:6f:40 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b6d1d7e9-182a-43ce-beab-85e0bb023fad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5fdaecd5-5a26-4054-aea7-a820fc05f87a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eefab01229294ca9bb0e9aed1933e3b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '62e1ab70-cfe5-4699-ab44-3a22333f071b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2b116c1-42f7-408e-ab98-1af6342c9dcc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=47628b26-1097-42cf-9f62-4c7d909083a5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:59:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:58.562 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 47628b26-1097-42cf-9f62-4c7d909083a5 in datapath 5fdaecd5-5a26-4054-aea7-a820fc05f87a unbound from our chassis
Jan 31 07:59:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:58.566 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5fdaecd5-5a26-4054-aea7-a820fc05f87a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:59:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:58.567 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[318f1acd-ec14-4611-b588-686f61164748]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:58.567 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5fdaecd5-5a26-4054-aea7-a820fc05f87a namespace which is not needed anymore
Jan 31 07:59:58 compute-2 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000058.scope: Deactivated successfully.
Jan 31 07:59:58 compute-2 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000058.scope: Consumed 4.154s CPU time.
Jan 31 07:59:58 compute-2 systemd-machined[195142]: Machine qemu-36-instance-00000058 terminated.
Jan 31 07:59:58 compute-2 nova_compute[226829]: 2026-01-31 07:59:58.682 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:58 compute-2 neutron-haproxy-ovnmeta-5fdaecd5-5a26-4054-aea7-a820fc05f87a[265450]: [NOTICE]   (265454) : haproxy version is 2.8.14-c23fe91
Jan 31 07:59:58 compute-2 neutron-haproxy-ovnmeta-5fdaecd5-5a26-4054-aea7-a820fc05f87a[265450]: [NOTICE]   (265454) : path to executable is /usr/sbin/haproxy
Jan 31 07:59:58 compute-2 neutron-haproxy-ovnmeta-5fdaecd5-5a26-4054-aea7-a820fc05f87a[265450]: [WARNING]  (265454) : Exiting Master process...
Jan 31 07:59:58 compute-2 neutron-haproxy-ovnmeta-5fdaecd5-5a26-4054-aea7-a820fc05f87a[265450]: [ALERT]    (265454) : Current worker (265456) exited with code 143 (Terminated)
Jan 31 07:59:58 compute-2 neutron-haproxy-ovnmeta-5fdaecd5-5a26-4054-aea7-a820fc05f87a[265450]: [WARNING]  (265454) : All workers exited. Exiting... (0)
Jan 31 07:59:58 compute-2 systemd[1]: libpod-8fc90130c84482182083a20c6fb4e1c8deb2e0222c94a4c2ad12667faa15ce93.scope: Deactivated successfully.
Jan 31 07:59:58 compute-2 podman[265492]: 2026-01-31 07:59:58.712135964 +0000 UTC m=+0.077957183 container died 8fc90130c84482182083a20c6fb4e1c8deb2e0222c94a4c2ad12667faa15ce93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5fdaecd5-5a26-4054-aea7-a820fc05f87a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:59:58 compute-2 nova_compute[226829]: 2026-01-31 07:59:58.763 226833 INFO nova.virt.libvirt.driver [-] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Instance destroyed successfully.
Jan 31 07:59:58 compute-2 nova_compute[226829]: 2026-01-31 07:59:58.764 226833 DEBUG nova.objects.instance [None req-143eac5e-518b-436d-b3c4-5dc388f05825 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Lazy-loading 'resources' on Instance uuid b6d1d7e9-182a-43ce-beab-85e0bb023fad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:59:58 compute-2 nova_compute[226829]: 2026-01-31 07:59:58.848 226833 DEBUG nova.virt.libvirt.vif [None req-143eac5e-518b-436d-b3c4-5dc388f05825 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:59:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-NoVNCConsoleTestJSON-server-1202355201',display_name='tempest-NoVNCConsoleTestJSON-server-1202355201',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-novncconsoletestjson-server-1202355201',id=88,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:59:54Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eefab01229294ca9bb0e9aed1933e3b4',ramdisk_id='',reservation_id='r-xf5ak4cj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-NoVNCConsoleTestJSON-72280206',owner_user_name='tempest-NoVNCConsoleTestJSON-72280206-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:59:54Z,user_data=None,user_id='ea670e6956634b1b82cb9bcb09ff4400',uuid=b6d1d7e9-182a-43ce-beab-85e0bb023fad,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "47628b26-1097-42cf-9f62-4c7d909083a5", "address": "fa:16:3e:2c:6f:40", "network": {"id": "5fdaecd5-5a26-4054-aea7-a820fc05f87a", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-541524873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eefab01229294ca9bb0e9aed1933e3b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47628b26-10", "ovs_interfaceid": "47628b26-1097-42cf-9f62-4c7d909083a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:59:58 compute-2 nova_compute[226829]: 2026-01-31 07:59:58.849 226833 DEBUG nova.network.os_vif_util [None req-143eac5e-518b-436d-b3c4-5dc388f05825 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Converting VIF {"id": "47628b26-1097-42cf-9f62-4c7d909083a5", "address": "fa:16:3e:2c:6f:40", "network": {"id": "5fdaecd5-5a26-4054-aea7-a820fc05f87a", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-541524873-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eefab01229294ca9bb0e9aed1933e3b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47628b26-10", "ovs_interfaceid": "47628b26-1097-42cf-9f62-4c7d909083a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:59:58 compute-2 nova_compute[226829]: 2026-01-31 07:59:58.849 226833 DEBUG nova.network.os_vif_util [None req-143eac5e-518b-436d-b3c4-5dc388f05825 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:6f:40,bridge_name='br-int',has_traffic_filtering=True,id=47628b26-1097-42cf-9f62-4c7d909083a5,network=Network(5fdaecd5-5a26-4054-aea7-a820fc05f87a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47628b26-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:59:58 compute-2 nova_compute[226829]: 2026-01-31 07:59:58.850 226833 DEBUG os_vif [None req-143eac5e-518b-436d-b3c4-5dc388f05825 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:6f:40,bridge_name='br-int',has_traffic_filtering=True,id=47628b26-1097-42cf-9f62-4c7d909083a5,network=Network(5fdaecd5-5a26-4054-aea7-a820fc05f87a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47628b26-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:59:58 compute-2 nova_compute[226829]: 2026-01-31 07:59:58.851 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:58 compute-2 nova_compute[226829]: 2026-01-31 07:59:58.852 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap47628b26-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:59:58 compute-2 nova_compute[226829]: 2026-01-31 07:59:58.853 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:58 compute-2 nova_compute[226829]: 2026-01-31 07:59:58.854 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:59:58 compute-2 nova_compute[226829]: 2026-01-31 07:59:58.855 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:58 compute-2 nova_compute[226829]: 2026-01-31 07:59:58.858 226833 INFO os_vif [None req-143eac5e-518b-436d-b3c4-5dc388f05825 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:6f:40,bridge_name='br-int',has_traffic_filtering=True,id=47628b26-1097-42cf-9f62-4c7d909083a5,network=Network(5fdaecd5-5a26-4054-aea7-a820fc05f87a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47628b26-10')
Jan 31 07:59:58 compute-2 nova_compute[226829]: 2026-01-31 07:59:58.976 226833 DEBUG nova.compute.manager [req-c4bef276-eb37-4553-a14e-8873df5c06d5 req-b1b65299-3ea9-4d60-87da-148f32faa2f6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Received event network-vif-unplugged-47628b26-1097-42cf-9f62-4c7d909083a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:59:58 compute-2 nova_compute[226829]: 2026-01-31 07:59:58.997 226833 DEBUG oslo_concurrency.lockutils [req-c4bef276-eb37-4553-a14e-8873df5c06d5 req-b1b65299-3ea9-4d60-87da-148f32faa2f6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b6d1d7e9-182a-43ce-beab-85e0bb023fad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:59:58 compute-2 nova_compute[226829]: 2026-01-31 07:59:58.997 226833 DEBUG oslo_concurrency.lockutils [req-c4bef276-eb37-4553-a14e-8873df5c06d5 req-b1b65299-3ea9-4d60-87da-148f32faa2f6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b6d1d7e9-182a-43ce-beab-85e0bb023fad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:59:58 compute-2 nova_compute[226829]: 2026-01-31 07:59:58.998 226833 DEBUG oslo_concurrency.lockutils [req-c4bef276-eb37-4553-a14e-8873df5c06d5 req-b1b65299-3ea9-4d60-87da-148f32faa2f6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b6d1d7e9-182a-43ce-beab-85e0bb023fad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:59:58 compute-2 nova_compute[226829]: 2026-01-31 07:59:58.998 226833 DEBUG nova.compute.manager [req-c4bef276-eb37-4553-a14e-8873df5c06d5 req-b1b65299-3ea9-4d60-87da-148f32faa2f6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] No waiting events found dispatching network-vif-unplugged-47628b26-1097-42cf-9f62-4c7d909083a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:59:58 compute-2 nova_compute[226829]: 2026-01-31 07:59:58.999 226833 DEBUG nova.compute.manager [req-c4bef276-eb37-4553-a14e-8873df5c06d5 req-b1b65299-3ea9-4d60-87da-148f32faa2f6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Received event network-vif-unplugged-47628b26-1097-42cf-9f62-4c7d909083a5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:59:59 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8fc90130c84482182083a20c6fb4e1c8deb2e0222c94a4c2ad12667faa15ce93-userdata-shm.mount: Deactivated successfully.
Jan 31 07:59:59 compute-2 systemd[1]: var-lib-containers-storage-overlay-3b47ca3fffa3401f321d565b9bd626234abdb3bf6489efec63f5db556bf67606-merged.mount: Deactivated successfully.
Jan 31 07:59:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:59:59 compute-2 sudo[265551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:59:59 compute-2 sudo[265551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:59 compute-2 sudo[265551]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:59 compute-2 podman[265492]: 2026-01-31 07:59:59.345563731 +0000 UTC m=+0.711384900 container cleanup 8fc90130c84482182083a20c6fb4e1c8deb2e0222c94a4c2ad12667faa15ce93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5fdaecd5-5a26-4054-aea7-a820fc05f87a, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 07:59:59 compute-2 systemd[1]: libpod-conmon-8fc90130c84482182083a20c6fb4e1c8deb2e0222c94a4c2ad12667faa15ce93.scope: Deactivated successfully.
Jan 31 07:59:59 compute-2 sudo[265576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:59:59 compute-2 sudo[265576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:59 compute-2 sudo[265576]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 07:59:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:59.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:59 compute-2 podman[265601]: 2026-01-31 07:59:59.774712475 +0000 UTC m=+0.409317188 container remove 8fc90130c84482182083a20c6fb4e1c8deb2e0222c94a4c2ad12667faa15ce93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5fdaecd5-5a26-4054-aea7-a820fc05f87a, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 07:59:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:59.779 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ac06c5d3-6ff6-4598-9d58-96fca7b3520f]: (4, ('Sat Jan 31 07:59:58 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5fdaecd5-5a26-4054-aea7-a820fc05f87a (8fc90130c84482182083a20c6fb4e1c8deb2e0222c94a4c2ad12667faa15ce93)\n8fc90130c84482182083a20c6fb4e1c8deb2e0222c94a4c2ad12667faa15ce93\nSat Jan 31 07:59:59 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5fdaecd5-5a26-4054-aea7-a820fc05f87a (8fc90130c84482182083a20c6fb4e1c8deb2e0222c94a4c2ad12667faa15ce93)\n8fc90130c84482182083a20c6fb4e1c8deb2e0222c94a4c2ad12667faa15ce93\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:59.781 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9924bf19-e2a4-4043-92e3-fb0ea36d1e09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:59.783 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5fdaecd5-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:59:59 compute-2 nova_compute[226829]: 2026-01-31 07:59:59.784 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:59 compute-2 kernel: tap5fdaecd5-50: left promiscuous mode
Jan 31 07:59:59 compute-2 nova_compute[226829]: 2026-01-31 07:59:59.789 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:59 compute-2 nova_compute[226829]: 2026-01-31 07:59:59.790 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:59.792 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[af16e13e-e5ee-4bba-9a9f-29407512e8d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #82. Immutable memtables: 0.
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:59:59.871236) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 82
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846399871338, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1398, "num_deletes": 256, "total_data_size": 3195884, "memory_usage": 3240112, "flush_reason": "Manual Compaction"}
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #83: started
Jan 31 07:59:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:59.811 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1c45c014-7979-4aac-9dff-16f5c73455d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:59.873 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cb2fda48-f09e-49a4-b6ae-1c5168082b5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:59.884 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a7215ee5-727d-4f7a-8acb-ea4c2355aefc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664588, 'reachable_time': 15801, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265616, 'error': None, 'target': 'ovnmeta-5fdaecd5-5a26-4054-aea7-a820fc05f87a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846399886190, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 83, "file_size": 2086952, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43006, "largest_seqno": 44399, "table_properties": {"data_size": 2080869, "index_size": 3350, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12959, "raw_average_key_size": 19, "raw_value_size": 2068660, "raw_average_value_size": 3172, "num_data_blocks": 147, "num_entries": 652, "num_filter_entries": 652, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846285, "oldest_key_time": 1769846285, "file_creation_time": 1769846399, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 15018 microseconds, and 4599 cpu microseconds.
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:59:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:59.887 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5fdaecd5-5a26-4054-aea7-a820fc05f87a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:59:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 07:59:59.887 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[6624e3de-b270-4a37-8908-f42fe761198e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:59 compute-2 systemd[1]: run-netns-ovnmeta\x2d5fdaecd5\x2d5a26\x2d4054\x2daea7\x2da820fc05f87a.mount: Deactivated successfully.
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:59:59.886265) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #83: 2086952 bytes OK
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:59:59.886285) [db/memtable_list.cc:519] [default] Level-0 commit table #83 started
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:59:59.889684) [db/memtable_list.cc:722] [default] Level-0 commit table #83: memtable #1 done
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:59:59.889701) EVENT_LOG_v1 {"time_micros": 1769846399889696, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:59:59.889720) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 3189334, prev total WAL file size 3205387, number of live WAL files 2.
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000079.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:59:59.890261) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323535' seq:72057594037927935, type:22 .. '6C6F676D0031353037' seq:0, type:0; will stop at (end)
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [83(2038KB)], [81(9314KB)]
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846399890337, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [83], "files_L6": [81], "score": -1, "input_data_size": 11625064, "oldest_snapshot_seqno": -1}
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #84: 6928 keys, 11452126 bytes, temperature: kUnknown
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846399940387, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 84, "file_size": 11452126, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11405394, "index_size": 28316, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 178284, "raw_average_key_size": 25, "raw_value_size": 11280906, "raw_average_value_size": 1628, "num_data_blocks": 1133, "num_entries": 6928, "num_filter_entries": 6928, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769846399, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 84, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:59:59.940667) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 11452126 bytes
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:59:59.942284) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 231.9 rd, 228.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.1 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(11.1) write-amplify(5.5) OK, records in: 7457, records dropped: 529 output_compression: NoCompression
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:59:59.942319) EVENT_LOG_v1 {"time_micros": 1769846399942307, "job": 50, "event": "compaction_finished", "compaction_time_micros": 50132, "compaction_time_cpu_micros": 21190, "output_level": 6, "num_output_files": 1, "total_output_size": 11452126, "num_input_records": 7457, "num_output_records": 6928, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846399942637, "job": 50, "event": "table_file_deletion", "file_number": 83}
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000081.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846399943255, "job": 50, "event": "table_file_deletion", "file_number": 81}
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:59:59.890140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:59:59.943317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:59:59.943322) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:59:59.943324) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:59:59.943325) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:59:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-07:59:59.943326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:00:00 compute-2 nova_compute[226829]: 2026-01-31 08:00:00.204 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:00 compute-2 ceph-mon[77282]: pgmap v1882: 305 pgs: 305 active+clean; 167 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.2 MiB/s wr, 222 op/s
Jan 31 08:00:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:00.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:01 compute-2 nova_compute[226829]: 2026-01-31 08:00:01.368 226833 DEBUG nova.compute.manager [req-d638f4f5-8b15-49f6-af38-5d0568f77693 req-f04c4a8e-f25a-445a-be1d-0eb7652483df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Received event network-vif-plugged-47628b26-1097-42cf-9f62-4c7d909083a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:00:01 compute-2 nova_compute[226829]: 2026-01-31 08:00:01.368 226833 DEBUG oslo_concurrency.lockutils [req-d638f4f5-8b15-49f6-af38-5d0568f77693 req-f04c4a8e-f25a-445a-be1d-0eb7652483df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b6d1d7e9-182a-43ce-beab-85e0bb023fad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:00:01 compute-2 nova_compute[226829]: 2026-01-31 08:00:01.369 226833 DEBUG oslo_concurrency.lockutils [req-d638f4f5-8b15-49f6-af38-5d0568f77693 req-f04c4a8e-f25a-445a-be1d-0eb7652483df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b6d1d7e9-182a-43ce-beab-85e0bb023fad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:00:01 compute-2 nova_compute[226829]: 2026-01-31 08:00:01.369 226833 DEBUG oslo_concurrency.lockutils [req-d638f4f5-8b15-49f6-af38-5d0568f77693 req-f04c4a8e-f25a-445a-be1d-0eb7652483df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b6d1d7e9-182a-43ce-beab-85e0bb023fad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:00:01 compute-2 nova_compute[226829]: 2026-01-31 08:00:01.369 226833 DEBUG nova.compute.manager [req-d638f4f5-8b15-49f6-af38-5d0568f77693 req-f04c4a8e-f25a-445a-be1d-0eb7652483df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] No waiting events found dispatching network-vif-plugged-47628b26-1097-42cf-9f62-4c7d909083a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:00:01 compute-2 nova_compute[226829]: 2026-01-31 08:00:01.369 226833 WARNING nova.compute.manager [req-d638f4f5-8b15-49f6-af38-5d0568f77693 req-f04c4a8e-f25a-445a-be1d-0eb7652483df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Received unexpected event network-vif-plugged-47628b26-1097-42cf-9f62-4c7d909083a5 for instance with vm_state active and task_state deleting.
Jan 31 08:00:01 compute-2 ceph-mon[77282]: overall HEALTH_OK
Jan 31 08:00:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:01.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:00:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:02.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:00:02 compute-2 ceph-mon[77282]: pgmap v1883: 305 pgs: 305 active+clean; 167 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 148 op/s
Jan 31 08:00:03 compute-2 podman[265619]: 2026-01-31 08:00:03.179444349 +0000 UTC m=+0.069499954 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:00:03 compute-2 nova_compute[226829]: 2026-01-31 08:00:03.515 226833 INFO nova.virt.libvirt.driver [None req-143eac5e-518b-436d-b3c4-5dc388f05825 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Deleting instance files /var/lib/nova/instances/b6d1d7e9-182a-43ce-beab-85e0bb023fad_del
Jan 31 08:00:03 compute-2 nova_compute[226829]: 2026-01-31 08:00:03.515 226833 INFO nova.virt.libvirt.driver [None req-143eac5e-518b-436d-b3c4-5dc388f05825 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Deletion of /var/lib/nova/instances/b6d1d7e9-182a-43ce-beab-85e0bb023fad_del complete
Jan 31 08:00:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:03.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:03 compute-2 nova_compute[226829]: 2026-01-31 08:00:03.853 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:03 compute-2 nova_compute[226829]: 2026-01-31 08:00:03.956 226833 INFO nova.compute.manager [None req-143eac5e-518b-436d-b3c4-5dc388f05825 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Took 6.03 seconds to destroy the instance on the hypervisor.
Jan 31 08:00:03 compute-2 nova_compute[226829]: 2026-01-31 08:00:03.956 226833 DEBUG oslo.service.loopingcall [None req-143eac5e-518b-436d-b3c4-5dc388f05825 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:00:03 compute-2 nova_compute[226829]: 2026-01-31 08:00:03.957 226833 DEBUG nova.compute.manager [-] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:00:03 compute-2 nova_compute[226829]: 2026-01-31 08:00:03.957 226833 DEBUG nova.network.neutron [-] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:00:03 compute-2 ceph-mon[77282]: pgmap v1884: 305 pgs: 305 active+clean; 160 MiB data, 765 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 138 op/s
Jan 31 08:00:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:00:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:04.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:00:05.042 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:00:05 compute-2 nova_compute[226829]: 2026-01-31 08:00:05.207 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:05.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:06.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:06 compute-2 ceph-mon[77282]: pgmap v1885: 305 pgs: 305 active+clean; 138 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 142 op/s
Jan 31 08:00:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:00:06.871 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:00:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:00:06.873 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:00:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:00:06.873 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:00:07 compute-2 nova_compute[226829]: 2026-01-31 08:00:07.607 226833 DEBUG nova.network.neutron [-] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:00:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:07.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:07 compute-2 nova_compute[226829]: 2026-01-31 08:00:07.862 226833 INFO nova.compute.manager [-] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Took 3.90 seconds to deallocate network for instance.
Jan 31 08:00:08 compute-2 nova_compute[226829]: 2026-01-31 08:00:08.231 226833 DEBUG oslo_concurrency.lockutils [None req-143eac5e-518b-436d-b3c4-5dc388f05825 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:00:08 compute-2 nova_compute[226829]: 2026-01-31 08:00:08.232 226833 DEBUG oslo_concurrency.lockutils [None req-143eac5e-518b-436d-b3c4-5dc388f05825 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:00:08 compute-2 nova_compute[226829]: 2026-01-31 08:00:08.303 226833 DEBUG oslo_concurrency.processutils [None req-143eac5e-518b-436d-b3c4-5dc388f05825 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:00:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:08.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:08 compute-2 ceph-mon[77282]: pgmap v1886: 305 pgs: 305 active+clean; 105 MiB data, 739 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 54 KiB/s wr, 103 op/s
Jan 31 08:00:08 compute-2 nova_compute[226829]: 2026-01-31 08:00:08.760 226833 DEBUG nova.compute.manager [req-07e81a2e-6c68-4fa8-95fc-23ec1818e2e0 req-5e608684-26bb-44b7-853d-71a6acd4c3cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Received event network-vif-deleted-47628b26-1097-42cf-9f62-4c7d909083a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:00:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:00:08 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3673352418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:00:08 compute-2 nova_compute[226829]: 2026-01-31 08:00:08.834 226833 DEBUG oslo_concurrency.processutils [None req-143eac5e-518b-436d-b3c4-5dc388f05825 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:00:08 compute-2 nova_compute[226829]: 2026-01-31 08:00:08.845 226833 DEBUG nova.compute.provider_tree [None req-143eac5e-518b-436d-b3c4-5dc388f05825 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:00:08 compute-2 nova_compute[226829]: 2026-01-31 08:00:08.906 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:09 compute-2 nova_compute[226829]: 2026-01-31 08:00:09.077 226833 DEBUG nova.scheduler.client.report [None req-143eac5e-518b-436d-b3c4-5dc388f05825 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:00:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:00:09 compute-2 nova_compute[226829]: 2026-01-31 08:00:09.433 226833 DEBUG oslo_concurrency.lockutils [None req-143eac5e-518b-436d-b3c4-5dc388f05825 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:00:09 compute-2 nova_compute[226829]: 2026-01-31 08:00:09.590 226833 INFO nova.scheduler.client.report [None req-143eac5e-518b-436d-b3c4-5dc388f05825 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Deleted allocations for instance b6d1d7e9-182a-43ce-beab-85e0bb023fad
Jan 31 08:00:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:00:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:09.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:00:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3673352418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:00:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2829613338' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:00:09 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #85. Immutable memtables: 0.
Jan 31 08:00:09 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:00:09.984274) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:00:09 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 85
Jan 31 08:00:09 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846409984303, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 359, "num_deletes": 251, "total_data_size": 314920, "memory_usage": 323224, "flush_reason": "Manual Compaction"}
Jan 31 08:00:09 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #86: started
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846410080354, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 86, "file_size": 207473, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44404, "largest_seqno": 44758, "table_properties": {"data_size": 205294, "index_size": 343, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5437, "raw_average_key_size": 18, "raw_value_size": 201023, "raw_average_value_size": 686, "num_data_blocks": 15, "num_entries": 293, "num_filter_entries": 293, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846399, "oldest_key_time": 1769846399, "file_creation_time": 1769846409, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 96151 microseconds, and 1155 cpu microseconds.
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:00:10.080417) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #86: 207473 bytes OK
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:00:10.080441) [db/memtable_list.cc:519] [default] Level-0 commit table #86 started
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:00:10.090203) [db/memtable_list.cc:722] [default] Level-0 commit table #86: memtable #1 done
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:00:10.090246) EVENT_LOG_v1 {"time_micros": 1769846410090236, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:00:10.090292) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 312492, prev total WAL file size 312492, number of live WAL files 2.
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000082.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:00:10.090817) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [86(202KB)], [84(10MB)]
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846410090897, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [86], "files_L6": [84], "score": -1, "input_data_size": 11659599, "oldest_snapshot_seqno": -1}
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #87: 6711 keys, 9719776 bytes, temperature: kUnknown
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846410194510, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 87, "file_size": 9719776, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9675949, "index_size": 25923, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16837, "raw_key_size": 174467, "raw_average_key_size": 25, "raw_value_size": 9556776, "raw_average_value_size": 1424, "num_data_blocks": 1024, "num_entries": 6711, "num_filter_entries": 6711, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769846410, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 87, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:00:10.194822) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 9719776 bytes
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:00:10.200149) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 112.4 rd, 93.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.9 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(103.0) write-amplify(46.8) OK, records in: 7221, records dropped: 510 output_compression: NoCompression
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:00:10.200187) EVENT_LOG_v1 {"time_micros": 1769846410200173, "job": 52, "event": "compaction_finished", "compaction_time_micros": 103688, "compaction_time_cpu_micros": 35488, "output_level": 6, "num_output_files": 1, "total_output_size": 9719776, "num_input_records": 7221, "num_output_records": 6711, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846410200378, "job": 52, "event": "table_file_deletion", "file_number": 86}
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000084.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846410202477, "job": 52, "event": "table_file_deletion", "file_number": 84}
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:00:10.090654) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:00:10.202538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:00:10.202544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:00:10.202547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:00:10.202549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:00:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:00:10.202550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:00:10 compute-2 nova_compute[226829]: 2026-01-31 08:00:10.210 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:10 compute-2 nova_compute[226829]: 2026-01-31 08:00:10.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:00:10 compute-2 nova_compute[226829]: 2026-01-31 08:00:10.490 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:00:10 compute-2 nova_compute[226829]: 2026-01-31 08:00:10.547 226833 DEBUG oslo_concurrency.lockutils [None req-143eac5e-518b-436d-b3c4-5dc388f05825 ea670e6956634b1b82cb9bcb09ff4400 eefab01229294ca9bb0e9aed1933e3b4 - - default default] Lock "b6d1d7e9-182a-43ce-beab-85e0bb023fad" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:00:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:10.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:10 compute-2 ceph-mon[77282]: pgmap v1887: 305 pgs: 305 active+clean; 88 MiB data, 716 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 133 op/s
Jan 31 08:00:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/388081703' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:00:11 compute-2 nova_compute[226829]: 2026-01-31 08:00:11.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:00:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:00:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:11.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:00:11 compute-2 ceph-mon[77282]: pgmap v1888: 305 pgs: 305 active+clean; 88 MiB data, 716 MiB used, 20 GiB / 21 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 31 08:00:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:12.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:00:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:13.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:00:13 compute-2 nova_compute[226829]: 2026-01-31 08:00:13.762 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846398.7613904, b6d1d7e9-182a-43ce-beab-85e0bb023fad => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:00:13 compute-2 nova_compute[226829]: 2026-01-31 08:00:13.763 226833 INFO nova.compute.manager [-] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] VM Stopped (Lifecycle Event)
Jan 31 08:00:13 compute-2 nova_compute[226829]: 2026-01-31 08:00:13.910 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:13 compute-2 nova_compute[226829]: 2026-01-31 08:00:13.929 226833 DEBUG nova.compute.manager [None req-ca409e30-7d9f-48ea-a99b-f1f24400010f - - - - - -] [instance: b6d1d7e9-182a-43ce-beab-85e0bb023fad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:00:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:00:14 compute-2 ceph-mon[77282]: pgmap v1889: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 31 08:00:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:14.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:15 compute-2 nova_compute[226829]: 2026-01-31 08:00:15.210 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:15.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:16 compute-2 ceph-mon[77282]: pgmap v1890: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 80 op/s
Jan 31 08:00:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3962088818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:00:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:16.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:17 compute-2 nova_compute[226829]: 2026-01-31 08:00:17.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:00:17 compute-2 nova_compute[226829]: 2026-01-31 08:00:17.487 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:00:17 compute-2 nova_compute[226829]: 2026-01-31 08:00:17.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:00:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:00:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:17.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:00:17 compute-2 nova_compute[226829]: 2026-01-31 08:00:17.850 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:00:18 compute-2 podman[265676]: 2026-01-31 08:00:18.175978063 +0000 UTC m=+0.059892223 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:00:18 compute-2 ceph-mon[77282]: pgmap v1891: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Jan 31 08:00:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:18.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:18 compute-2 nova_compute[226829]: 2026-01-31 08:00:18.913 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:00:19 compute-2 sudo[265699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:00:19 compute-2 sudo[265699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:19 compute-2 sudo[265699]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:19 compute-2 sudo[265724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:00:19 compute-2 sudo[265724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:19 compute-2 sudo[265724]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:19.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:20 compute-2 nova_compute[226829]: 2026-01-31 08:00:20.212 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:20 compute-2 nova_compute[226829]: 2026-01-31 08:00:20.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:00:20 compute-2 nova_compute[226829]: 2026-01-31 08:00:20.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:00:20 compute-2 ceph-mon[77282]: pgmap v1892: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Jan 31 08:00:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:20.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:21 compute-2 nova_compute[226829]: 2026-01-31 08:00:21.065 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:00:21 compute-2 nova_compute[226829]: 2026-01-31 08:00:21.066 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:00:21 compute-2 nova_compute[226829]: 2026-01-31 08:00:21.066 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:00:21 compute-2 nova_compute[226829]: 2026-01-31 08:00:21.066 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:00:21 compute-2 nova_compute[226829]: 2026-01-31 08:00:21.066 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:00:21 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:00:21 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2543805942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:00:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1411870379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:00:21 compute-2 nova_compute[226829]: 2026-01-31 08:00:21.531 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:00:21 compute-2 nova_compute[226829]: 2026-01-31 08:00:21.708 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:00:21 compute-2 nova_compute[226829]: 2026-01-31 08:00:21.709 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4508MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:00:21 compute-2 nova_compute[226829]: 2026-01-31 08:00:21.709 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:00:21 compute-2 nova_compute[226829]: 2026-01-31 08:00:21.710 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:00:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:00:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:21.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:00:22 compute-2 nova_compute[226829]: 2026-01-31 08:00:22.476 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:00:22 compute-2 nova_compute[226829]: 2026-01-31 08:00:22.477 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:00:22 compute-2 nova_compute[226829]: 2026-01-31 08:00:22.508 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:00:22 compute-2 ceph-mon[77282]: pgmap v1893: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 7.8 KiB/s rd, 12 KiB/s wr, 10 op/s
Jan 31 08:00:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2543805942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:00:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:00:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:22.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:00:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:00:22 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1694252163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:00:22 compute-2 nova_compute[226829]: 2026-01-31 08:00:22.937 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:00:22 compute-2 nova_compute[226829]: 2026-01-31 08:00:22.944 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:00:23 compute-2 nova_compute[226829]: 2026-01-31 08:00:23.537 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:00:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:23.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1694252163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:00:23 compute-2 nova_compute[226829]: 2026-01-31 08:00:23.916 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:00:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:00:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:24.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:00:24 compute-2 nova_compute[226829]: 2026-01-31 08:00:24.623 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:00:24 compute-2 nova_compute[226829]: 2026-01-31 08:00:24.624 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.914s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:00:24 compute-2 ceph-mon[77282]: pgmap v1894: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Jan 31 08:00:25 compute-2 nova_compute[226829]: 2026-01-31 08:00:25.214 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:25 compute-2 nova_compute[226829]: 2026-01-31 08:00:25.625 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:00:25 compute-2 nova_compute[226829]: 2026-01-31 08:00:25.626 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:00:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:25.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:25 compute-2 ceph-mon[77282]: pgmap v1895: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 12 KiB/s wr, 47 op/s
Jan 31 08:00:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:26.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:27 compute-2 nova_compute[226829]: 2026-01-31 08:00:27.734 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:27.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:28 compute-2 ceph-mon[77282]: pgmap v1896: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 12 KiB/s wr, 46 op/s
Jan 31 08:00:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2982580096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:00:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:28.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:28 compute-2 nova_compute[226829]: 2026-01-31 08:00:28.918 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:00:29 compute-2 nova_compute[226829]: 2026-01-31 08:00:29.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:00:29 compute-2 nova_compute[226829]: 2026-01-31 08:00:29.487 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:00:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:00:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:29.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:00:30 compute-2 nova_compute[226829]: 2026-01-31 08:00:30.215 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:30.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:30 compute-2 ceph-mon[77282]: pgmap v1897: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Jan 31 08:00:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/59050685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:00:31 compute-2 nova_compute[226829]: 2026-01-31 08:00:31.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:00:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:31.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:32.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:32 compute-2 ceph-mon[77282]: pgmap v1898: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 KiB/s wr, 69 op/s
Jan 31 08:00:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:33.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:33 compute-2 nova_compute[226829]: 2026-01-31 08:00:33.921 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:33 compute-2 ceph-mon[77282]: pgmap v1899: 305 pgs: 305 active+clean; 68 MiB data, 716 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 78 op/s
Jan 31 08:00:34 compute-2 podman[265802]: 2026-01-31 08:00:34.175229909 +0000 UTC m=+0.066789060 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:00:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:00:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:34.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:35 compute-2 nova_compute[226829]: 2026-01-31 08:00:35.217 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:35.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:36.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:36 compute-2 ceph-mon[77282]: pgmap v1900: 305 pgs: 305 active+clean; 41 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.5 KiB/s wr, 91 op/s
Jan 31 08:00:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1831992663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:00:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:37.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:00:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:38.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:00:38 compute-2 ceph-mon[77282]: pgmap v1901: 305 pgs: 305 active+clean; 41 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 796 KiB/s rd, 1.5 KiB/s wr, 53 op/s
Jan 31 08:00:38 compute-2 nova_compute[226829]: 2026-01-31 08:00:38.941 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:00:39 compute-2 sudo[265831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:00:39 compute-2 sudo[265831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:39 compute-2 sudo[265831]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:39 compute-2 sudo[265856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:00:39 compute-2 sudo[265856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:39 compute-2 sudo[265856]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:39.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:40 compute-2 nova_compute[226829]: 2026-01-31 08:00:40.219 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:00:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:40.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:00:40 compute-2 ceph-mon[77282]: pgmap v1902: 305 pgs: 305 active+clean; 41 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 796 KiB/s rd, 1.5 KiB/s wr, 53 op/s
Jan 31 08:00:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:00:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:41.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:00:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:00:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:42.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:00:42 compute-2 ceph-mon[77282]: pgmap v1903: 305 pgs: 305 active+clean; 41 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Jan 31 08:00:42 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/937994319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:00:43 compute-2 ceph-mon[77282]: pgmap v1904: 305 pgs: 305 active+clean; 41 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 426 B/s wr, 21 op/s
Jan 31 08:00:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:43.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:43 compute-2 nova_compute[226829]: 2026-01-31 08:00:43.944 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:00:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:44.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1479498858' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:00:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1479498858' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:00:45 compute-2 nova_compute[226829]: 2026-01-31 08:00:45.221 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:00:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:45.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:00:46 compute-2 ceph-mon[77282]: pgmap v1905: 305 pgs: 305 active+clean; 60 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 9.1 KiB/s rd, 1021 KiB/s wr, 14 op/s
Jan 31 08:00:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:00:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:46.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:00:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:47.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:47 compute-2 sudo[265885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:00:47 compute-2 sudo[265885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:48 compute-2 sudo[265885]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:48 compute-2 sudo[265910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:00:48 compute-2 sudo[265910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:48 compute-2 sudo[265910]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:48 compute-2 sudo[265935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:00:48 compute-2 sudo[265935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:48 compute-2 sudo[265935]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:48 compute-2 sudo[265960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:00:48 compute-2 sudo[265960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:48 compute-2 sudo[265960]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:48 compute-2 ceph-mon[77282]: pgmap v1906: 305 pgs: 305 active+clean; 71 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 1.2 KiB/s rd, 1.3 MiB/s wr, 3 op/s
Jan 31 08:00:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:00:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:48.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:00:48 compute-2 nova_compute[226829]: 2026-01-31 08:00:48.948 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:00:49 compute-2 podman[266015]: 2026-01-31 08:00:49.20910206 +0000 UTC m=+0.093254928 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 08:00:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 08:00:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:00:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:00:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:00:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:00:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:00:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:00:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:00:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:49.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:00:50 compute-2 nova_compute[226829]: 2026-01-31 08:00:50.223 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:50.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:50 compute-2 ceph-mon[77282]: pgmap v1907: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:00:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:51.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:52.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:52 compute-2 ceph-mon[77282]: pgmap v1908: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:00:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:00:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:53.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:00:53 compute-2 nova_compute[226829]: 2026-01-31 08:00:53.952 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:00:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:54.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:54 compute-2 ceph-mon[77282]: pgmap v1909: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:00:54 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:00:54 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:00:54 compute-2 sudo[266038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:00:54 compute-2 sudo[266038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:54 compute-2 sudo[266038]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:54 compute-2 sudo[266063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:00:54 compute-2 sudo[266063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:54 compute-2 sudo[266063]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:55 compute-2 nova_compute[226829]: 2026-01-31 08:00:55.225 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2327357670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:00:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:55.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:00:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:56.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:00:56 compute-2 ceph-mon[77282]: pgmap v1910: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:00:56 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/493614597' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:00:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3673755692' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:00:57 compute-2 ceph-mon[77282]: pgmap v1911: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 794 KiB/s wr, 25 op/s
Jan 31 08:00:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:57.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:58.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:58 compute-2 nova_compute[226829]: 2026-01-31 08:00:58.956 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:00:59 compute-2 sudo[266091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:00:59 compute-2 sudo[266091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:59 compute-2 sudo[266091]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:00:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:59.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:59 compute-2 sudo[266116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:00:59 compute-2 sudo[266116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:59 compute-2 sudo[266116]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:00 compute-2 nova_compute[226829]: 2026-01-31 08:01:00.227 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:00 compute-2 ceph-mon[77282]: pgmap v1912: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 485 KiB/s wr, 23 op/s
Jan 31 08:01:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:00.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:01.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:01 compute-2 CROND[266143]: (root) CMD (run-parts /etc/cron.hourly)
Jan 31 08:01:01 compute-2 run-parts[266146]: (/etc/cron.hourly) starting 0anacron
Jan 31 08:01:01 compute-2 run-parts[266152]: (/etc/cron.hourly) finished 0anacron
Jan 31 08:01:01 compute-2 CROND[266142]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 31 08:01:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:01:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:02.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:01:02 compute-2 ceph-mon[77282]: pgmap v1913: 305 pgs: 305 active+clean; 102 MiB data, 730 MiB used, 20 GiB / 21 GiB avail; 7.5 KiB/s rd, 767 KiB/s wr, 12 op/s
Jan 31 08:01:03 compute-2 nova_compute[226829]: 2026-01-31 08:01:03.385 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:03.386 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:01:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:03.391 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:01:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:03.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:03 compute-2 nova_compute[226829]: 2026-01-31 08:01:03.958 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:01:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:04.395 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:04.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:04 compute-2 ceph-mon[77282]: pgmap v1914: 305 pgs: 305 active+clean; 123 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 297 KiB/s rd, 1.1 MiB/s wr, 29 op/s
Jan 31 08:01:05 compute-2 podman[266154]: 2026-01-31 08:01:05.189400852 +0000 UTC m=+0.071455996 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 08:01:05 compute-2 nova_compute[226829]: 2026-01-31 08:01:05.228 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:01:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:05.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:01:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:01:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:06.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:01:06 compute-2 ceph-mon[77282]: pgmap v1915: 305 pgs: 305 active+clean; 134 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Jan 31 08:01:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:06.872 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:06.873 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:06.873 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:07.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:08.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:08 compute-2 ceph-mon[77282]: pgmap v1916: 305 pgs: 305 active+clean; 134 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Jan 31 08:01:08 compute-2 nova_compute[226829]: 2026-01-31 08:01:08.962 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:01:09 compute-2 ceph-mon[77282]: pgmap v1917: 305 pgs: 305 active+clean; 134 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 31 08:01:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:09.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:10 compute-2 nova_compute[226829]: 2026-01-31 08:01:10.230 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:10.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:11 compute-2 nova_compute[226829]: 2026-01-31 08:01:11.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:01:11 compute-2 nova_compute[226829]: 2026-01-31 08:01:11.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:01:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:01:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:11.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:01:12 compute-2 ceph-mon[77282]: pgmap v1918: 305 pgs: 305 active+clean; 134 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 31 08:01:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2911858492' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/529156101' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:12 compute-2 nova_compute[226829]: 2026-01-31 08:01:12.465 226833 DEBUG oslo_concurrency.lockutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:12 compute-2 nova_compute[226829]: 2026-01-31 08:01:12.467 226833 DEBUG oslo_concurrency.lockutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:12 compute-2 nova_compute[226829]: 2026-01-31 08:01:12.580 226833 DEBUG nova.compute.manager [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:01:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:01:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:12.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:01:12 compute-2 nova_compute[226829]: 2026-01-31 08:01:12.813 226833 DEBUG oslo_concurrency.lockutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:12 compute-2 nova_compute[226829]: 2026-01-31 08:01:12.813 226833 DEBUG oslo_concurrency.lockutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:12 compute-2 nova_compute[226829]: 2026-01-31 08:01:12.828 226833 DEBUG nova.virt.hardware [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:01:12 compute-2 nova_compute[226829]: 2026-01-31 08:01:12.828 226833 INFO nova.compute.claims [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:01:12 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Jan 31 08:01:13 compute-2 nova_compute[226829]: 2026-01-31 08:01:13.148 226833 DEBUG oslo_concurrency.processutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:13 compute-2 nova_compute[226829]: 2026-01-31 08:01:13.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:01:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:01:13 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4158684158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:01:13 compute-2 nova_compute[226829]: 2026-01-31 08:01:13.579 226833 DEBUG oslo_concurrency.processutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:13 compute-2 nova_compute[226829]: 2026-01-31 08:01:13.583 226833 DEBUG nova.compute.provider_tree [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:01:13 compute-2 nova_compute[226829]: 2026-01-31 08:01:13.747 226833 DEBUG nova.scheduler.client.report [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:01:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:01:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:13.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:01:13 compute-2 ovn_controller[133834]: 2026-01-31T08:01:13Z|00294|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 31 08:01:13 compute-2 nova_compute[226829]: 2026-01-31 08:01:13.965 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:13 compute-2 nova_compute[226829]: 2026-01-31 08:01:13.988 226833 DEBUG oslo_concurrency.lockutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:13 compute-2 nova_compute[226829]: 2026-01-31 08:01:13.990 226833 DEBUG nova.compute.manager [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:01:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:01:14 compute-2 nova_compute[226829]: 2026-01-31 08:01:14.256 226833 DEBUG nova.compute.manager [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:01:14 compute-2 nova_compute[226829]: 2026-01-31 08:01:14.257 226833 DEBUG nova.network.neutron [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:01:14 compute-2 ceph-mon[77282]: pgmap v1919: 305 pgs: 305 active+clean; 134 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 88 op/s
Jan 31 08:01:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4158684158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:01:14 compute-2 nova_compute[226829]: 2026-01-31 08:01:14.441 226833 INFO nova.virt.libvirt.driver [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:01:14 compute-2 nova_compute[226829]: 2026-01-31 08:01:14.547 226833 DEBUG nova.compute.manager [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:01:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:14.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:14 compute-2 nova_compute[226829]: 2026-01-31 08:01:14.844 226833 DEBUG nova.policy [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd9ed446fb2cf4fc0a4e619c6c766fddc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1fcec9ca13964c7191134db4420ab049', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:01:15 compute-2 nova_compute[226829]: 2026-01-31 08:01:15.003 226833 DEBUG nova.compute.manager [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:01:15 compute-2 nova_compute[226829]: 2026-01-31 08:01:15.006 226833 DEBUG nova.virt.libvirt.driver [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:01:15 compute-2 nova_compute[226829]: 2026-01-31 08:01:15.006 226833 INFO nova.virt.libvirt.driver [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Creating image(s)
Jan 31 08:01:15 compute-2 nova_compute[226829]: 2026-01-31 08:01:15.049 226833 DEBUG nova.storage.rbd_utils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image dfba7f29-bde8-4327-a7b3-1c4fd44e045a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:01:15 compute-2 nova_compute[226829]: 2026-01-31 08:01:15.086 226833 DEBUG nova.storage.rbd_utils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image dfba7f29-bde8-4327-a7b3-1c4fd44e045a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:01:15 compute-2 nova_compute[226829]: 2026-01-31 08:01:15.116 226833 DEBUG nova.storage.rbd_utils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image dfba7f29-bde8-4327-a7b3-1c4fd44e045a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:01:15 compute-2 nova_compute[226829]: 2026-01-31 08:01:15.120 226833 DEBUG oslo_concurrency.processutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:15 compute-2 nova_compute[226829]: 2026-01-31 08:01:15.175 226833 DEBUG oslo_concurrency.processutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:15 compute-2 nova_compute[226829]: 2026-01-31 08:01:15.177 226833 DEBUG oslo_concurrency.lockutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:15 compute-2 nova_compute[226829]: 2026-01-31 08:01:15.178 226833 DEBUG oslo_concurrency.lockutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:15 compute-2 nova_compute[226829]: 2026-01-31 08:01:15.179 226833 DEBUG oslo_concurrency.lockutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:15 compute-2 nova_compute[226829]: 2026-01-31 08:01:15.220 226833 DEBUG nova.storage.rbd_utils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image dfba7f29-bde8-4327-a7b3-1c4fd44e045a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:01:15 compute-2 nova_compute[226829]: 2026-01-31 08:01:15.225 226833 DEBUG oslo_concurrency.processutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 dfba7f29-bde8-4327-a7b3-1c4fd44e045a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:15 compute-2 nova_compute[226829]: 2026-01-31 08:01:15.251 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:15 compute-2 nova_compute[226829]: 2026-01-31 08:01:15.572 226833 DEBUG oslo_concurrency.processutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 dfba7f29-bde8-4327-a7b3-1c4fd44e045a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:15 compute-2 nova_compute[226829]: 2026-01-31 08:01:15.646 226833 DEBUG nova.storage.rbd_utils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] resizing rbd image dfba7f29-bde8-4327-a7b3-1c4fd44e045a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:01:15 compute-2 nova_compute[226829]: 2026-01-31 08:01:15.773 226833 DEBUG nova.objects.instance [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'migration_context' on Instance uuid dfba7f29-bde8-4327-a7b3-1c4fd44e045a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:01:15 compute-2 nova_compute[226829]: 2026-01-31 08:01:15.810 226833 DEBUG nova.virt.libvirt.driver [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:01:15 compute-2 nova_compute[226829]: 2026-01-31 08:01:15.810 226833 DEBUG nova.virt.libvirt.driver [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Ensure instance console log exists: /var/lib/nova/instances/dfba7f29-bde8-4327-a7b3-1c4fd44e045a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:01:15 compute-2 nova_compute[226829]: 2026-01-31 08:01:15.811 226833 DEBUG oslo_concurrency.lockutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:15 compute-2 nova_compute[226829]: 2026-01-31 08:01:15.811 226833 DEBUG oslo_concurrency.lockutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:15 compute-2 nova_compute[226829]: 2026-01-31 08:01:15.811 226833 DEBUG oslo_concurrency.lockutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:15.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:16 compute-2 ceph-mon[77282]: pgmap v1920: 305 pgs: 305 active+clean; 147 MiB data, 773 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 88 op/s
Jan 31 08:01:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:16.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1041038657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:01:17 compute-2 nova_compute[226829]: 2026-01-31 08:01:17.713 226833 DEBUG nova.network.neutron [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Successfully created port: 86cf2bf6-2f28-4435-b081-a3945070ed2d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:01:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:17.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:18 compute-2 nova_compute[226829]: 2026-01-31 08:01:18.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:01:18 compute-2 nova_compute[226829]: 2026-01-31 08:01:18.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:01:18 compute-2 nova_compute[226829]: 2026-01-31 08:01:18.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:01:18 compute-2 ceph-mon[77282]: pgmap v1921: 305 pgs: 305 active+clean; 171 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 722 KiB/s rd, 2.5 MiB/s wr, 58 op/s
Jan 31 08:01:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/212975079' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:01:18 compute-2 nova_compute[226829]: 2026-01-31 08:01:18.532 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 08:01:18 compute-2 nova_compute[226829]: 2026-01-31 08:01:18.532 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:01:18 compute-2 nova_compute[226829]: 2026-01-31 08:01:18.533 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:01:18 compute-2 nova_compute[226829]: 2026-01-31 08:01:18.533 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:01:18 compute-2 nova_compute[226829]: 2026-01-31 08:01:18.562 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:01:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:01:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:18.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:01:19 compute-2 nova_compute[226829]: 2026-01-31 08:01:19.017 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:01:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:19.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:19 compute-2 nova_compute[226829]: 2026-01-31 08:01:19.853 226833 DEBUG nova.network.neutron [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Successfully updated port: 86cf2bf6-2f28-4435-b081-a3945070ed2d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:01:19 compute-2 nova_compute[226829]: 2026-01-31 08:01:19.892 226833 DEBUG oslo_concurrency.lockutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:01:19 compute-2 nova_compute[226829]: 2026-01-31 08:01:19.892 226833 DEBUG oslo_concurrency.lockutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquired lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:01:19 compute-2 nova_compute[226829]: 2026-01-31 08:01:19.893 226833 DEBUG nova.network.neutron [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:01:19 compute-2 sudo[266379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:01:19 compute-2 sudo[266379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:19 compute-2 sudo[266379]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:19 compute-2 sudo[266410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:01:20 compute-2 sudo[266410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:20 compute-2 sudo[266410]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:20 compute-2 podman[266403]: 2026-01-31 08:01:20.00685857 +0000 UTC m=+0.052479173 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Jan 31 08:01:20 compute-2 nova_compute[226829]: 2026-01-31 08:01:20.190 226833 DEBUG nova.compute.manager [req-673c2891-c267-4851-90c6-3ffd4e76bb5a req-2fea99ac-13d2-43da-85ca-80ab4a6552ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-changed-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:01:20 compute-2 nova_compute[226829]: 2026-01-31 08:01:20.190 226833 DEBUG nova.compute.manager [req-673c2891-c267-4851-90c6-3ffd4e76bb5a req-2fea99ac-13d2-43da-85ca-80ab4a6552ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Refreshing instance network info cache due to event network-changed-86cf2bf6-2f28-4435-b081-a3945070ed2d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:01:20 compute-2 nova_compute[226829]: 2026-01-31 08:01:20.190 226833 DEBUG oslo_concurrency.lockutils [req-673c2891-c267-4851-90c6-3ffd4e76bb5a req-2fea99ac-13d2-43da-85ca-80ab4a6552ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:01:20 compute-2 nova_compute[226829]: 2026-01-31 08:01:20.233 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:20 compute-2 ceph-mon[77282]: pgmap v1922: 305 pgs: 305 active+clean; 213 MiB data, 798 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 126 op/s
Jan 31 08:01:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:20.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:20 compute-2 nova_compute[226829]: 2026-01-31 08:01:20.776 226833 DEBUG nova.network.neutron [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:01:21 compute-2 nova_compute[226829]: 2026-01-31 08:01:21.517 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:01:21 compute-2 nova_compute[226829]: 2026-01-31 08:01:21.518 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:01:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:21.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:22 compute-2 sshd-session[266304]: Connection closed by 118.160.47.139 port 40946 [preauth]
Jan 31 08:01:22 compute-2 nova_compute[226829]: 2026-01-31 08:01:22.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:01:22 compute-2 nova_compute[226829]: 2026-01-31 08:01:22.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:01:22 compute-2 ceph-mon[77282]: pgmap v1923: 305 pgs: 305 active+clean; 213 MiB data, 807 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 148 op/s
Jan 31 08:01:22 compute-2 nova_compute[226829]: 2026-01-31 08:01:22.644 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:22 compute-2 nova_compute[226829]: 2026-01-31 08:01:22.645 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:22 compute-2 nova_compute[226829]: 2026-01-31 08:01:22.645 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:22 compute-2 nova_compute[226829]: 2026-01-31 08:01:22.645 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:01:22 compute-2 nova_compute[226829]: 2026-01-31 08:01:22.645 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:22.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:01:23 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3362426033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.025 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.380s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.169 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.171 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4490MB free_disk=20.90125274658203GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.171 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.171 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.345 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance dfba7f29-bde8-4327-a7b3-1c4fd44e045a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.346 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.346 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.425 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3362426033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:01:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:01:23 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2288252936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:01:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:23.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.840 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.847 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.878 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.916 226833 DEBUG nova.network.neutron [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Updating instance_info_cache with network_info: [{"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.935 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.936 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.954 226833 DEBUG oslo_concurrency.lockutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Releasing lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.955 226833 DEBUG nova.compute.manager [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Instance network_info: |[{"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.955 226833 DEBUG oslo_concurrency.lockutils [req-673c2891-c267-4851-90c6-3ffd4e76bb5a req-2fea99ac-13d2-43da-85ca-80ab4a6552ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.956 226833 DEBUG nova.network.neutron [req-673c2891-c267-4851-90c6-3ffd4e76bb5a req-2fea99ac-13d2-43da-85ca-80ab4a6552ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Refreshing network info cache for port 86cf2bf6-2f28-4435-b081-a3945070ed2d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.960 226833 DEBUG nova.virt.libvirt.driver [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Start _get_guest_xml network_info=[{"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.966 226833 WARNING nova.virt.libvirt.driver [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.974 226833 DEBUG nova.virt.libvirt.host [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.975 226833 DEBUG nova.virt.libvirt.host [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.981 226833 DEBUG nova.virt.libvirt.host [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.982 226833 DEBUG nova.virt.libvirt.host [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.983 226833 DEBUG nova.virt.libvirt.driver [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.984 226833 DEBUG nova.virt.hardware [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.984 226833 DEBUG nova.virt.hardware [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.985 226833 DEBUG nova.virt.hardware [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.985 226833 DEBUG nova.virt.hardware [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.985 226833 DEBUG nova.virt.hardware [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.985 226833 DEBUG nova.virt.hardware [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.986 226833 DEBUG nova.virt.hardware [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.986 226833 DEBUG nova.virt.hardware [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.986 226833 DEBUG nova.virt.hardware [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.987 226833 DEBUG nova.virt.hardware [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.987 226833 DEBUG nova.virt.hardware [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:01:23 compute-2 nova_compute[226829]: 2026-01-31 08:01:23.991 226833 DEBUG oslo_concurrency.processutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:24 compute-2 nova_compute[226829]: 2026-01-31 08:01:24.020 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:01:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:01:24 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3381635505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:24 compute-2 nova_compute[226829]: 2026-01-31 08:01:24.475 226833 DEBUG oslo_concurrency.processutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:24 compute-2 nova_compute[226829]: 2026-01-31 08:01:24.500 226833 DEBUG nova.storage.rbd_utils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image dfba7f29-bde8-4327-a7b3-1c4fd44e045a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:01:24 compute-2 nova_compute[226829]: 2026-01-31 08:01:24.503 226833 DEBUG oslo_concurrency.processutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:24.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:24 compute-2 ceph-mon[77282]: pgmap v1924: 305 pgs: 305 active+clean; 213 MiB data, 807 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Jan 31 08:01:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2288252936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:01:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3381635505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:01:24 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4219408586' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:24 compute-2 nova_compute[226829]: 2026-01-31 08:01:24.895 226833 DEBUG oslo_concurrency.processutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:24 compute-2 nova_compute[226829]: 2026-01-31 08:01:24.897 226833 DEBUG nova.virt.libvirt.vif [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:01:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1163372726',display_name='tempest-ServerActionsTestJSON-server-1163372726',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1163372726',id=91,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-30t53h1p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:01:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=dfba7f29-bde8-4327-a7b3-1c4fd44e045a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:01:24 compute-2 nova_compute[226829]: 2026-01-31 08:01:24.897 226833 DEBUG nova.network.os_vif_util [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:01:24 compute-2 nova_compute[226829]: 2026-01-31 08:01:24.899 226833 DEBUG nova.network.os_vif_util [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:01:24 compute-2 nova_compute[226829]: 2026-01-31 08:01:24.900 226833 DEBUG nova.objects.instance [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'pci_devices' on Instance uuid dfba7f29-bde8-4327-a7b3-1c4fd44e045a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:01:24 compute-2 nova_compute[226829]: 2026-01-31 08:01:24.921 226833 DEBUG nova.virt.libvirt.driver [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:01:24 compute-2 nova_compute[226829]:   <uuid>dfba7f29-bde8-4327-a7b3-1c4fd44e045a</uuid>
Jan 31 08:01:24 compute-2 nova_compute[226829]:   <name>instance-0000005b</name>
Jan 31 08:01:24 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:01:24 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:01:24 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <nova:name>tempest-ServerActionsTestJSON-server-1163372726</nova:name>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:01:23</nova:creationTime>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:01:24 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:01:24 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:01:24 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:01:24 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:01:24 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:01:24 compute-2 nova_compute[226829]:         <nova:user uuid="d9ed446fb2cf4fc0a4e619c6c766fddc">tempest-ServerActionsTestJSON-1391450973-project-member</nova:user>
Jan 31 08:01:24 compute-2 nova_compute[226829]:         <nova:project uuid="1fcec9ca13964c7191134db4420ab049">tempest-ServerActionsTestJSON-1391450973</nova:project>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:01:24 compute-2 nova_compute[226829]:         <nova:port uuid="86cf2bf6-2f28-4435-b081-a3945070ed2d">
Jan 31 08:01:24 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:01:24 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:01:24 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <system>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <entry name="serial">dfba7f29-bde8-4327-a7b3-1c4fd44e045a</entry>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <entry name="uuid">dfba7f29-bde8-4327-a7b3-1c4fd44e045a</entry>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     </system>
Jan 31 08:01:24 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:01:24 compute-2 nova_compute[226829]:   <os>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:   </os>
Jan 31 08:01:24 compute-2 nova_compute[226829]:   <features>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:   </features>
Jan 31 08:01:24 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:01:24 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:01:24 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/dfba7f29-bde8-4327-a7b3-1c4fd44e045a_disk">
Jan 31 08:01:24 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       </source>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:01:24 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/dfba7f29-bde8-4327-a7b3-1c4fd44e045a_disk.config">
Jan 31 08:01:24 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       </source>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:01:24 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:01:98:96"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <target dev="tap86cf2bf6-2f"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/dfba7f29-bde8-4327-a7b3-1c4fd44e045a/console.log" append="off"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <video>
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     </video>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:01:24 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:01:24 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:01:24 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:01:24 compute-2 nova_compute[226829]: </domain>
Jan 31 08:01:24 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:01:24 compute-2 nova_compute[226829]: 2026-01-31 08:01:24.922 226833 DEBUG nova.compute.manager [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Preparing to wait for external event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:01:24 compute-2 nova_compute[226829]: 2026-01-31 08:01:24.923 226833 DEBUG oslo_concurrency.lockutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:24 compute-2 nova_compute[226829]: 2026-01-31 08:01:24.923 226833 DEBUG oslo_concurrency.lockutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:24 compute-2 nova_compute[226829]: 2026-01-31 08:01:24.923 226833 DEBUG oslo_concurrency.lockutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:24 compute-2 nova_compute[226829]: 2026-01-31 08:01:24.924 226833 DEBUG nova.virt.libvirt.vif [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:01:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1163372726',display_name='tempest-ServerActionsTestJSON-server-1163372726',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1163372726',id=91,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-30t53h1p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:01:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=dfba7f29-bde8-4327-a7b3-1c4fd44e045a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:01:24 compute-2 nova_compute[226829]: 2026-01-31 08:01:24.925 226833 DEBUG nova.network.os_vif_util [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:01:24 compute-2 nova_compute[226829]: 2026-01-31 08:01:24.926 226833 DEBUG nova.network.os_vif_util [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:01:24 compute-2 nova_compute[226829]: 2026-01-31 08:01:24.926 226833 DEBUG os_vif [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:01:24 compute-2 nova_compute[226829]: 2026-01-31 08:01:24.927 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:24 compute-2 nova_compute[226829]: 2026-01-31 08:01:24.928 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:24 compute-2 nova_compute[226829]: 2026-01-31 08:01:24.928 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:01:24 compute-2 nova_compute[226829]: 2026-01-31 08:01:24.936 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:24 compute-2 nova_compute[226829]: 2026-01-31 08:01:24.936 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap86cf2bf6-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:24 compute-2 nova_compute[226829]: 2026-01-31 08:01:24.937 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap86cf2bf6-2f, col_values=(('external_ids', {'iface-id': '86cf2bf6-2f28-4435-b081-a3945070ed2d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:98:96', 'vm-uuid': 'dfba7f29-bde8-4327-a7b3-1c4fd44e045a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:24 compute-2 nova_compute[226829]: 2026-01-31 08:01:24.939 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:24 compute-2 nova_compute[226829]: 2026-01-31 08:01:24.942 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:01:24 compute-2 NetworkManager[48999]: <info>  [1769846484.9424] manager: (tap86cf2bf6-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/153)
Jan 31 08:01:24 compute-2 nova_compute[226829]: 2026-01-31 08:01:24.946 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:24 compute-2 nova_compute[226829]: 2026-01-31 08:01:24.948 226833 INFO os_vif [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f')
Jan 31 08:01:25 compute-2 nova_compute[226829]: 2026-01-31 08:01:25.061 226833 DEBUG nova.virt.libvirt.driver [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:01:25 compute-2 nova_compute[226829]: 2026-01-31 08:01:25.062 226833 DEBUG nova.virt.libvirt.driver [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:01:25 compute-2 nova_compute[226829]: 2026-01-31 08:01:25.062 226833 DEBUG nova.virt.libvirt.driver [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] No VIF found with MAC fa:16:3e:01:98:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:01:25 compute-2 nova_compute[226829]: 2026-01-31 08:01:25.062 226833 INFO nova.virt.libvirt.driver [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Using config drive
Jan 31 08:01:25 compute-2 nova_compute[226829]: 2026-01-31 08:01:25.090 226833 DEBUG nova.storage.rbd_utils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image dfba7f29-bde8-4327-a7b3-1c4fd44e045a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:01:25 compute-2 nova_compute[226829]: 2026-01-31 08:01:25.235 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:01:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:25.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:01:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4219408586' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:25 compute-2 ceph-mon[77282]: pgmap v1925: 305 pgs: 305 active+clean; 213 MiB data, 807 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Jan 31 08:01:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:26.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:27 compute-2 nova_compute[226829]: 2026-01-31 08:01:27.019 226833 INFO nova.virt.libvirt.driver [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Creating config drive at /var/lib/nova/instances/dfba7f29-bde8-4327-a7b3-1c4fd44e045a/disk.config
Jan 31 08:01:27 compute-2 nova_compute[226829]: 2026-01-31 08:01:27.024 226833 DEBUG oslo_concurrency.processutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dfba7f29-bde8-4327-a7b3-1c4fd44e045a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp2hf_r07c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:27 compute-2 nova_compute[226829]: 2026-01-31 08:01:27.151 226833 DEBUG oslo_concurrency.processutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dfba7f29-bde8-4327-a7b3-1c4fd44e045a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp2hf_r07c" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:27 compute-2 nova_compute[226829]: 2026-01-31 08:01:27.185 226833 DEBUG nova.storage.rbd_utils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image dfba7f29-bde8-4327-a7b3-1c4fd44e045a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:01:27 compute-2 nova_compute[226829]: 2026-01-31 08:01:27.189 226833 DEBUG oslo_concurrency.processutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dfba7f29-bde8-4327-a7b3-1c4fd44e045a/disk.config dfba7f29-bde8-4327-a7b3-1c4fd44e045a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/496966673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:01:27 compute-2 nova_compute[226829]: 2026-01-31 08:01:27.329 226833 DEBUG oslo_concurrency.processutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dfba7f29-bde8-4327-a7b3-1c4fd44e045a/disk.config dfba7f29-bde8-4327-a7b3-1c4fd44e045a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:27 compute-2 nova_compute[226829]: 2026-01-31 08:01:27.330 226833 INFO nova.virt.libvirt.driver [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Deleting local config drive /var/lib/nova/instances/dfba7f29-bde8-4327-a7b3-1c4fd44e045a/disk.config because it was imported into RBD.
Jan 31 08:01:27 compute-2 kernel: tap86cf2bf6-2f: entered promiscuous mode
Jan 31 08:01:27 compute-2 NetworkManager[48999]: <info>  [1769846487.3764] manager: (tap86cf2bf6-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/154)
Jan 31 08:01:27 compute-2 ovn_controller[133834]: 2026-01-31T08:01:27Z|00295|binding|INFO|Claiming lport 86cf2bf6-2f28-4435-b081-a3945070ed2d for this chassis.
Jan 31 08:01:27 compute-2 ovn_controller[133834]: 2026-01-31T08:01:27Z|00296|binding|INFO|86cf2bf6-2f28-4435-b081-a3945070ed2d: Claiming fa:16:3e:01:98:96 10.100.0.13
Jan 31 08:01:27 compute-2 nova_compute[226829]: 2026-01-31 08:01:27.377 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:27 compute-2 nova_compute[226829]: 2026-01-31 08:01:27.382 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:27 compute-2 nova_compute[226829]: 2026-01-31 08:01:27.384 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:27 compute-2 systemd-udevd[266631]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:01:27 compute-2 ovn_controller[133834]: 2026-01-31T08:01:27Z|00297|binding|INFO|Setting lport 86cf2bf6-2f28-4435-b081-a3945070ed2d ovn-installed in OVS
Jan 31 08:01:27 compute-2 nova_compute[226829]: 2026-01-31 08:01:27.405 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:27 compute-2 ovn_controller[133834]: 2026-01-31T08:01:27Z|00298|binding|INFO|Setting lport 86cf2bf6-2f28-4435-b081-a3945070ed2d up in Southbound
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:27.407 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:98:96 10.100.0.13'], port_security=['fa:16:3e:01:98:96 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'dfba7f29-bde8-4327-a7b3-1c4fd44e045a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7be0d68e-c4ff-4356-97f2-bd58246f6e46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=86cf2bf6-2f28-4435-b081-a3945070ed2d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:27.408 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 86cf2bf6-2f28-4435-b081-a3945070ed2d in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 bound to our chassis
Jan 31 08:01:27 compute-2 systemd-machined[195142]: New machine qemu-37-instance-0000005b.
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:27.410 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 08:01:27 compute-2 NetworkManager[48999]: <info>  [1769846487.4153] device (tap86cf2bf6-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:01:27 compute-2 NetworkManager[48999]: <info>  [1769846487.4161] device (tap86cf2bf6-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:27.421 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ccdf6ccd-260f-4cf1-a312-0e27726e1613]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:27.422 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5cc2535f-01 in ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:27.424 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5cc2535f-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:27.424 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[06b11bf8-01db-408b-a031-5e153191e360]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:27.426 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a821a776-51a5-4fa2-bb61-98e6701b1765]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:27 compute-2 systemd[1]: Started Virtual Machine qemu-37-instance-0000005b.
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:27.437 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[9049e1ef-5ff1-4fea-8c82-67e8db8372be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:27.448 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[023868ae-04e1-49c6-bc01-3651ece0e86d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:27.472 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[a1906d35-a157-4dd7-bf5d-387ca64eebd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:27.476 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[224e7df6-4b2d-4100-a14b-96c242b7db56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:27 compute-2 systemd-udevd[266635]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:01:27 compute-2 NetworkManager[48999]: <info>  [1769846487.4781] manager: (tap5cc2535f-00): new Veth device (/org/freedesktop/NetworkManager/Devices/155)
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:27.505 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[2fc57840-2604-4b09-b792-6b814d163ef3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:27.508 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[73b35695-7647-4aa7-bd68-1e89d7518a35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:27 compute-2 NetworkManager[48999]: <info>  [1769846487.5256] device (tap5cc2535f-00): carrier: link connected
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:27.529 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[570b8d28-7b3a-45bf-bb08-485e1acbbb88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:27.544 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[41af3890-611f-468d-b21c-da58bef9a25d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc2535f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:76:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673999, 'reachable_time': 38240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266665, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:27.557 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b1ece31a-bddb-4556-ad14-db7d693ac734]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:76f8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 673999, 'tstamp': 673999}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266666, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:27.571 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a98ff026-f48a-4ad5-abea-4832c7118703]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc2535f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:76:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673999, 'reachable_time': 38240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266667, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:27.593 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5e1c5a6b-7f62-4772-9b7c-46dcc2b56880]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:27.635 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d8539ed7-5af5-46ef-951f-dc1fbcab84b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:27.637 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc2535f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:27.637 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:27.637 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cc2535f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:27 compute-2 nova_compute[226829]: 2026-01-31 08:01:27.639 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:27 compute-2 kernel: tap5cc2535f-00: entered promiscuous mode
Jan 31 08:01:27 compute-2 nova_compute[226829]: 2026-01-31 08:01:27.640 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:27 compute-2 NetworkManager[48999]: <info>  [1769846487.6417] manager: (tap5cc2535f-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/156)
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:27.641 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5cc2535f-00, col_values=(('external_ids', {'iface-id': 'ab077a7e-cc79-4948-8987-2cd87d88deff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:27 compute-2 nova_compute[226829]: 2026-01-31 08:01:27.642 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:27 compute-2 ovn_controller[133834]: 2026-01-31T08:01:27Z|00299|binding|INFO|Releasing lport ab077a7e-cc79-4948-8987-2cd87d88deff from this chassis (sb_readonly=0)
Jan 31 08:01:27 compute-2 nova_compute[226829]: 2026-01-31 08:01:27.647 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:27.648 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:27.648 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[efb1d135-61e6-4f4c-9434-1682265dec3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:27.649 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:01:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:27.651 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'env', 'PROCESS_TAG=haproxy-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5cc2535f-0f8f-4713-a35c-9805048a29a8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:01:27 compute-2 nova_compute[226829]: 2026-01-31 08:01:27.801 226833 DEBUG nova.network.neutron [req-673c2891-c267-4851-90c6-3ffd4e76bb5a req-2fea99ac-13d2-43da-85ca-80ab4a6552ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Updated VIF entry in instance network info cache for port 86cf2bf6-2f28-4435-b081-a3945070ed2d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:01:27 compute-2 nova_compute[226829]: 2026-01-31 08:01:27.802 226833 DEBUG nova.network.neutron [req-673c2891-c267-4851-90c6-3ffd4e76bb5a req-2fea99ac-13d2-43da-85ca-80ab4a6552ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Updating instance_info_cache with network_info: [{"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:01:27 compute-2 nova_compute[226829]: 2026-01-31 08:01:27.837 226833 DEBUG oslo_concurrency.lockutils [req-673c2891-c267-4851-90c6-3ffd4e76bb5a req-2fea99ac-13d2-43da-85ca-80ab4a6552ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:01:27 compute-2 nova_compute[226829]: 2026-01-31 08:01:27.839 226833 DEBUG nova.compute.manager [req-4025d0e8-e9a5-4bf7-9fee-a05ada91e287 req-ac1eb8fa-1075-4676-8db1-37dea5d7ad90 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:01:27 compute-2 nova_compute[226829]: 2026-01-31 08:01:27.840 226833 DEBUG oslo_concurrency.lockutils [req-4025d0e8-e9a5-4bf7-9fee-a05ada91e287 req-ac1eb8fa-1075-4676-8db1-37dea5d7ad90 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:27 compute-2 nova_compute[226829]: 2026-01-31 08:01:27.840 226833 DEBUG oslo_concurrency.lockutils [req-4025d0e8-e9a5-4bf7-9fee-a05ada91e287 req-ac1eb8fa-1075-4676-8db1-37dea5d7ad90 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:27 compute-2 nova_compute[226829]: 2026-01-31 08:01:27.841 226833 DEBUG oslo_concurrency.lockutils [req-4025d0e8-e9a5-4bf7-9fee-a05ada91e287 req-ac1eb8fa-1075-4676-8db1-37dea5d7ad90 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:27 compute-2 nova_compute[226829]: 2026-01-31 08:01:27.841 226833 DEBUG nova.compute.manager [req-4025d0e8-e9a5-4bf7-9fee-a05ada91e287 req-ac1eb8fa-1075-4676-8db1-37dea5d7ad90 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Processing event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:01:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:27.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:28 compute-2 podman[266714]: 2026-01-31 08:01:28.003418108 +0000 UTC m=+0.062006360 container create c38c002ae31fff9389f6e4e4e2afe3733d683cb9dc02efef06c48bcb3d5174cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:01:28 compute-2 systemd[1]: Started libpod-conmon-c38c002ae31fff9389f6e4e4e2afe3733d683cb9dc02efef06c48bcb3d5174cc.scope.
Jan 31 08:01:28 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:01:28 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3354c19ac48336039dfaebf5bdbb6fb68d33d16128846ddbb2dd43061852f4b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:28 compute-2 podman[266714]: 2026-01-31 08:01:27.973264331 +0000 UTC m=+0.031852603 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:01:28 compute-2 podman[266714]: 2026-01-31 08:01:28.072435248 +0000 UTC m=+0.131023530 container init c38c002ae31fff9389f6e4e4e2afe3733d683cb9dc02efef06c48bcb3d5174cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:01:28 compute-2 podman[266714]: 2026-01-31 08:01:28.080071604 +0000 UTC m=+0.138659866 container start c38c002ae31fff9389f6e4e4e2afe3733d683cb9dc02efef06c48bcb3d5174cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:01:28 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[266751]: [NOTICE]   (266759) : New worker (266761) forked
Jan 31 08:01:28 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[266751]: [NOTICE]   (266759) : Loading success.
Jan 31 08:01:28 compute-2 nova_compute[226829]: 2026-01-31 08:01:28.163 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846488.1625896, dfba7f29-bde8-4327-a7b3-1c4fd44e045a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:01:28 compute-2 nova_compute[226829]: 2026-01-31 08:01:28.163 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] VM Started (Lifecycle Event)
Jan 31 08:01:28 compute-2 nova_compute[226829]: 2026-01-31 08:01:28.165 226833 DEBUG nova.compute.manager [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:01:28 compute-2 nova_compute[226829]: 2026-01-31 08:01:28.167 226833 DEBUG nova.virt.libvirt.driver [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:01:28 compute-2 nova_compute[226829]: 2026-01-31 08:01:28.170 226833 INFO nova.virt.libvirt.driver [-] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Instance spawned successfully.
Jan 31 08:01:28 compute-2 nova_compute[226829]: 2026-01-31 08:01:28.170 226833 DEBUG nova.virt.libvirt.driver [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:01:28 compute-2 nova_compute[226829]: 2026-01-31 08:01:28.193 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:01:28 compute-2 nova_compute[226829]: 2026-01-31 08:01:28.197 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:01:28 compute-2 nova_compute[226829]: 2026-01-31 08:01:28.201 226833 DEBUG nova.virt.libvirt.driver [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:01:28 compute-2 nova_compute[226829]: 2026-01-31 08:01:28.201 226833 DEBUG nova.virt.libvirt.driver [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:01:28 compute-2 nova_compute[226829]: 2026-01-31 08:01:28.201 226833 DEBUG nova.virt.libvirt.driver [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:01:28 compute-2 nova_compute[226829]: 2026-01-31 08:01:28.202 226833 DEBUG nova.virt.libvirt.driver [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:01:28 compute-2 nova_compute[226829]: 2026-01-31 08:01:28.202 226833 DEBUG nova.virt.libvirt.driver [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:01:28 compute-2 nova_compute[226829]: 2026-01-31 08:01:28.202 226833 DEBUG nova.virt.libvirt.driver [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:01:28 compute-2 nova_compute[226829]: 2026-01-31 08:01:28.237 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:01:28 compute-2 nova_compute[226829]: 2026-01-31 08:01:28.238 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846488.1649923, dfba7f29-bde8-4327-a7b3-1c4fd44e045a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:01:28 compute-2 nova_compute[226829]: 2026-01-31 08:01:28.238 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] VM Paused (Lifecycle Event)
Jan 31 08:01:28 compute-2 nova_compute[226829]: 2026-01-31 08:01:28.282 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:01:28 compute-2 nova_compute[226829]: 2026-01-31 08:01:28.286 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846488.1672988, dfba7f29-bde8-4327-a7b3-1c4fd44e045a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:01:28 compute-2 nova_compute[226829]: 2026-01-31 08:01:28.286 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] VM Resumed (Lifecycle Event)
Jan 31 08:01:28 compute-2 ceph-mon[77282]: pgmap v1926: 305 pgs: 305 active+clean; 213 MiB data, 807 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 147 op/s
Jan 31 08:01:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3110385296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:01:28 compute-2 nova_compute[226829]: 2026-01-31 08:01:28.333 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:01:28 compute-2 nova_compute[226829]: 2026-01-31 08:01:28.338 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:01:28 compute-2 nova_compute[226829]: 2026-01-31 08:01:28.341 226833 INFO nova.compute.manager [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Took 13.34 seconds to spawn the instance on the hypervisor.
Jan 31 08:01:28 compute-2 nova_compute[226829]: 2026-01-31 08:01:28.341 226833 DEBUG nova.compute.manager [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:01:28 compute-2 nova_compute[226829]: 2026-01-31 08:01:28.403 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:01:28 compute-2 nova_compute[226829]: 2026-01-31 08:01:28.489 226833 INFO nova.compute.manager [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Took 15.72 seconds to build instance.
Jan 31 08:01:28 compute-2 nova_compute[226829]: 2026-01-31 08:01:28.548 226833 DEBUG oslo_concurrency.lockutils [None req-3636a0c3-b60e-4343-8bae-fe36d03718e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:01:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:28.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:01:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:01:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3147785232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:01:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:01:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:29.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:01:29 compute-2 nova_compute[226829]: 2026-01-31 08:01:29.939 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:30 compute-2 nova_compute[226829]: 2026-01-31 08:01:30.065 226833 DEBUG nova.compute.manager [req-f660f201-4e97-489d-aaec-b5e0332d9851 req-e2664154-5ab8-454a-9f24-f9ec5b1b140c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:01:30 compute-2 nova_compute[226829]: 2026-01-31 08:01:30.066 226833 DEBUG oslo_concurrency.lockutils [req-f660f201-4e97-489d-aaec-b5e0332d9851 req-e2664154-5ab8-454a-9f24-f9ec5b1b140c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:30 compute-2 nova_compute[226829]: 2026-01-31 08:01:30.066 226833 DEBUG oslo_concurrency.lockutils [req-f660f201-4e97-489d-aaec-b5e0332d9851 req-e2664154-5ab8-454a-9f24-f9ec5b1b140c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:30 compute-2 nova_compute[226829]: 2026-01-31 08:01:30.066 226833 DEBUG oslo_concurrency.lockutils [req-f660f201-4e97-489d-aaec-b5e0332d9851 req-e2664154-5ab8-454a-9f24-f9ec5b1b140c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:30 compute-2 nova_compute[226829]: 2026-01-31 08:01:30.066 226833 DEBUG nova.compute.manager [req-f660f201-4e97-489d-aaec-b5e0332d9851 req-e2664154-5ab8-454a-9f24-f9ec5b1b140c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:01:30 compute-2 nova_compute[226829]: 2026-01-31 08:01:30.067 226833 WARNING nova.compute.manager [req-f660f201-4e97-489d-aaec-b5e0332d9851 req-e2664154-5ab8-454a-9f24-f9ec5b1b140c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state active and task_state None.
Jan 31 08:01:30 compute-2 nova_compute[226829]: 2026-01-31 08:01:30.236 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:30 compute-2 ceph-mon[77282]: pgmap v1927: 305 pgs: 305 active+clean; 191 MiB data, 789 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 191 op/s
Jan 31 08:01:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3439344745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/824275673' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:30.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:31.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:01:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:32.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:01:32 compute-2 ceph-mon[77282]: pgmap v1928: 305 pgs: 305 active+clean; 199 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.9 MiB/s wr, 186 op/s
Jan 31 08:01:32 compute-2 nova_compute[226829]: 2026-01-31 08:01:32.937 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:01:32 compute-2 nova_compute[226829]: 2026-01-31 08:01:32.938 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:01:33 compute-2 NetworkManager[48999]: <info>  [1769846493.2001] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/157)
Jan 31 08:01:33 compute-2 NetworkManager[48999]: <info>  [1769846493.2020] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/158)
Jan 31 08:01:33 compute-2 nova_compute[226829]: 2026-01-31 08:01:33.199 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:33 compute-2 nova_compute[226829]: 2026-01-31 08:01:33.292 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:33 compute-2 ovn_controller[133834]: 2026-01-31T08:01:33Z|00300|binding|INFO|Releasing lport ab077a7e-cc79-4948-8987-2cd87d88deff from this chassis (sb_readonly=0)
Jan 31 08:01:33 compute-2 nova_compute[226829]: 2026-01-31 08:01:33.314 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:01:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:33.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:01:33 compute-2 ceph-mon[77282]: pgmap v1929: 305 pgs: 305 active+clean; 248 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.1 MiB/s wr, 190 op/s
Jan 31 08:01:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:01:34 compute-2 nova_compute[226829]: 2026-01-31 08:01:34.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:01:34 compute-2 nova_compute[226829]: 2026-01-31 08:01:34.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:01:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:34.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:34 compute-2 nova_compute[226829]: 2026-01-31 08:01:34.941 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:35 compute-2 nova_compute[226829]: 2026-01-31 08:01:35.238 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:01:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:35.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:01:36 compute-2 podman[266776]: 2026-01-31 08:01:36.185791552 +0000 UTC m=+0.074686434 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:01:36 compute-2 ceph-mon[77282]: pgmap v1930: 305 pgs: 305 active+clean; 260 MiB data, 832 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.7 MiB/s wr, 246 op/s
Jan 31 08:01:36 compute-2 nova_compute[226829]: 2026-01-31 08:01:36.647 226833 DEBUG nova.compute.manager [req-e5ee381e-e661-40f6-8306-aa3b60a895a2 req-808f61c6-888d-4bb4-95ee-ffef6ebeda9c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-changed-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:01:36 compute-2 nova_compute[226829]: 2026-01-31 08:01:36.647 226833 DEBUG nova.compute.manager [req-e5ee381e-e661-40f6-8306-aa3b60a895a2 req-808f61c6-888d-4bb4-95ee-ffef6ebeda9c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Refreshing instance network info cache due to event network-changed-86cf2bf6-2f28-4435-b081-a3945070ed2d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:01:36 compute-2 nova_compute[226829]: 2026-01-31 08:01:36.647 226833 DEBUG oslo_concurrency.lockutils [req-e5ee381e-e661-40f6-8306-aa3b60a895a2 req-808f61c6-888d-4bb4-95ee-ffef6ebeda9c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:01:36 compute-2 nova_compute[226829]: 2026-01-31 08:01:36.647 226833 DEBUG oslo_concurrency.lockutils [req-e5ee381e-e661-40f6-8306-aa3b60a895a2 req-808f61c6-888d-4bb4-95ee-ffef6ebeda9c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:01:36 compute-2 nova_compute[226829]: 2026-01-31 08:01:36.648 226833 DEBUG nova.network.neutron [req-e5ee381e-e661-40f6-8306-aa3b60a895a2 req-808f61c6-888d-4bb4-95ee-ffef6ebeda9c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Refreshing network info cache for port 86cf2bf6-2f28-4435-b081-a3945070ed2d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:01:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:01:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:36.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:01:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:01:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:37.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:01:38 compute-2 ceph-mon[77282]: pgmap v1931: 305 pgs: 305 active+clean; 260 MiB data, 832 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.7 MiB/s wr, 265 op/s
Jan 31 08:01:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:38.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:01:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:01:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:39.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:01:39 compute-2 nova_compute[226829]: 2026-01-31 08:01:39.943 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:40 compute-2 sudo[266806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:01:40 compute-2 sudo[266806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:40 compute-2 sudo[266806]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:40 compute-2 sudo[266831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:01:40 compute-2 sudo[266831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:40 compute-2 sudo[266831]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:40 compute-2 nova_compute[226829]: 2026-01-31 08:01:40.241 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:40 compute-2 ceph-mon[77282]: pgmap v1932: 305 pgs: 305 active+clean; 260 MiB data, 832 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 296 op/s
Jan 31 08:01:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:40.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:41 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/963186934' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:41 compute-2 ovn_controller[133834]: 2026-01-31T08:01:41Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:01:98:96 10.100.0.13
Jan 31 08:01:41 compute-2 ovn_controller[133834]: 2026-01-31T08:01:41Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:01:98:96 10.100.0.13
Jan 31 08:01:41 compute-2 nova_compute[226829]: 2026-01-31 08:01:41.848 226833 DEBUG nova.network.neutron [req-e5ee381e-e661-40f6-8306-aa3b60a895a2 req-808f61c6-888d-4bb4-95ee-ffef6ebeda9c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Updated VIF entry in instance network info cache for port 86cf2bf6-2f28-4435-b081-a3945070ed2d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:01:41 compute-2 nova_compute[226829]: 2026-01-31 08:01:41.849 226833 DEBUG nova.network.neutron [req-e5ee381e-e661-40f6-8306-aa3b60a895a2 req-808f61c6-888d-4bb4-95ee-ffef6ebeda9c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Updating instance_info_cache with network_info: [{"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:01:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:41.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:42 compute-2 ceph-mon[77282]: pgmap v1933: 305 pgs: 305 active+clean; 265 MiB data, 837 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 4.4 MiB/s wr, 244 op/s
Jan 31 08:01:42 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3768615480' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:42.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:43 compute-2 nova_compute[226829]: 2026-01-31 08:01:43.119 226833 DEBUG oslo_concurrency.lockutils [req-e5ee381e-e661-40f6-8306-aa3b60a895a2 req-808f61c6-888d-4bb4-95ee-ffef6ebeda9c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:01:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:43.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:01:44 compute-2 ceph-mon[77282]: pgmap v1934: 305 pgs: 305 active+clean; 270 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.7 MiB/s wr, 181 op/s
Jan 31 08:01:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:44.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:44 compute-2 nova_compute[226829]: 2026-01-31 08:01:44.945 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:45 compute-2 nova_compute[226829]: 2026-01-31 08:01:45.243 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:45.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/563089829' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:01:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/563089829' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:01:46 compute-2 nova_compute[226829]: 2026-01-31 08:01:46.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:01:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:46.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:47 compute-2 ceph-mon[77282]: pgmap v1935: 305 pgs: 305 active+clean; 265 MiB data, 869 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.8 MiB/s wr, 180 op/s
Jan 31 08:01:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:47.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:47 compute-2 ceph-mon[77282]: pgmap v1936: 305 pgs: 305 active+clean; 259 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Jan 31 08:01:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:48.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:48 compute-2 nova_compute[226829]: 2026-01-31 08:01:48.938 226833 DEBUG oslo_concurrency.lockutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "e2e6f697-3383-4887-b25a-c89447e67fc2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:48 compute-2 nova_compute[226829]: 2026-01-31 08:01:48.938 226833 DEBUG oslo_concurrency.lockutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1292270437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:01:49 compute-2 nova_compute[226829]: 2026-01-31 08:01:49.175 226833 DEBUG nova.compute.manager [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:01:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:01:49 compute-2 nova_compute[226829]: 2026-01-31 08:01:49.586 226833 DEBUG oslo_concurrency.lockutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:49 compute-2 nova_compute[226829]: 2026-01-31 08:01:49.587 226833 DEBUG oslo_concurrency.lockutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:49 compute-2 nova_compute[226829]: 2026-01-31 08:01:49.603 226833 DEBUG nova.virt.hardware [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:01:49 compute-2 nova_compute[226829]: 2026-01-31 08:01:49.604 226833 INFO nova.compute.claims [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:01:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:49.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:49 compute-2 nova_compute[226829]: 2026-01-31 08:01:49.947 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:50 compute-2 nova_compute[226829]: 2026-01-31 08:01:50.035 226833 DEBUG oslo_concurrency.processutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:50 compute-2 podman[266862]: 2026-01-31 08:01:50.163077459 +0000 UTC m=+0.045162495 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:01:50 compute-2 ceph-mon[77282]: pgmap v1937: 305 pgs: 305 active+clean; 246 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 148 op/s
Jan 31 08:01:50 compute-2 nova_compute[226829]: 2026-01-31 08:01:50.245 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:01:50 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1763936160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:01:50 compute-2 nova_compute[226829]: 2026-01-31 08:01:50.478 226833 DEBUG oslo_concurrency.processutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:50 compute-2 nova_compute[226829]: 2026-01-31 08:01:50.483 226833 DEBUG nova.compute.provider_tree [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:01:50 compute-2 nova_compute[226829]: 2026-01-31 08:01:50.520 226833 DEBUG nova.scheduler.client.report [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:01:50 compute-2 nova_compute[226829]: 2026-01-31 08:01:50.558 226833 DEBUG oslo_concurrency.lockutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:50 compute-2 nova_compute[226829]: 2026-01-31 08:01:50.558 226833 DEBUG nova.compute.manager [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:01:50 compute-2 nova_compute[226829]: 2026-01-31 08:01:50.672 226833 DEBUG nova.compute.manager [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:01:50 compute-2 nova_compute[226829]: 2026-01-31 08:01:50.673 226833 DEBUG nova.network.neutron [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:01:50 compute-2 nova_compute[226829]: 2026-01-31 08:01:50.703 226833 INFO nova.virt.libvirt.driver [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:01:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:50.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:50 compute-2 nova_compute[226829]: 2026-01-31 08:01:50.832 226833 DEBUG nova.compute.manager [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:01:51 compute-2 nova_compute[226829]: 2026-01-31 08:01:51.038 226833 DEBUG nova.compute.manager [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:01:51 compute-2 nova_compute[226829]: 2026-01-31 08:01:51.041 226833 DEBUG nova.virt.libvirt.driver [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:01:51 compute-2 nova_compute[226829]: 2026-01-31 08:01:51.042 226833 INFO nova.virt.libvirt.driver [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Creating image(s)
Jan 31 08:01:51 compute-2 nova_compute[226829]: 2026-01-31 08:01:51.084 226833 DEBUG nova.storage.rbd_utils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image e2e6f697-3383-4887-b25a-c89447e67fc2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:01:51 compute-2 nova_compute[226829]: 2026-01-31 08:01:51.122 226833 DEBUG nova.storage.rbd_utils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image e2e6f697-3383-4887-b25a-c89447e67fc2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:01:51 compute-2 nova_compute[226829]: 2026-01-31 08:01:51.152 226833 DEBUG nova.storage.rbd_utils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image e2e6f697-3383-4887-b25a-c89447e67fc2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:01:51 compute-2 nova_compute[226829]: 2026-01-31 08:01:51.156 226833 DEBUG oslo_concurrency.processutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:51 compute-2 nova_compute[226829]: 2026-01-31 08:01:51.212 226833 DEBUG oslo_concurrency.processutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:51 compute-2 nova_compute[226829]: 2026-01-31 08:01:51.213 226833 DEBUG oslo_concurrency.lockutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:51 compute-2 nova_compute[226829]: 2026-01-31 08:01:51.214 226833 DEBUG oslo_concurrency.lockutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:51 compute-2 nova_compute[226829]: 2026-01-31 08:01:51.214 226833 DEBUG oslo_concurrency.lockutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:51 compute-2 nova_compute[226829]: 2026-01-31 08:01:51.243 226833 DEBUG nova.storage.rbd_utils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image e2e6f697-3383-4887-b25a-c89447e67fc2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:01:51 compute-2 nova_compute[226829]: 2026-01-31 08:01:51.247 226833 DEBUG oslo_concurrency.processutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 e2e6f697-3383-4887-b25a-c89447e67fc2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:51 compute-2 nova_compute[226829]: 2026-01-31 08:01:51.309 226833 DEBUG nova.policy [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '63e95edea0164ae2a9820dc10467335d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'be74d11d2f5a4d9aae2dbe32c31ad9c3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:01:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1763936160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:01:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:51.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:52 compute-2 nova_compute[226829]: 2026-01-31 08:01:52.513 226833 DEBUG oslo_concurrency.lockutils [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:52 compute-2 nova_compute[226829]: 2026-01-31 08:01:52.514 226833 DEBUG oslo_concurrency.lockutils [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:52 compute-2 nova_compute[226829]: 2026-01-31 08:01:52.514 226833 INFO nova.compute.manager [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Rebooting instance
Jan 31 08:01:52 compute-2 nova_compute[226829]: 2026-01-31 08:01:52.532 226833 DEBUG oslo_concurrency.lockutils [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:01:52 compute-2 nova_compute[226829]: 2026-01-31 08:01:52.532 226833 DEBUG oslo_concurrency.lockutils [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquired lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:01:52 compute-2 nova_compute[226829]: 2026-01-31 08:01:52.533 226833 DEBUG nova.network.neutron [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:01:52 compute-2 nova_compute[226829]: 2026-01-31 08:01:52.649 226833 DEBUG oslo_concurrency.processutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 e2e6f697-3383-4887-b25a-c89447e67fc2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:52 compute-2 nova_compute[226829]: 2026-01-31 08:01:52.727 226833 DEBUG nova.network.neutron [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Successfully created port: 647b42a2-81d3-49dc-9075-d31077bfffe8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:01:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:52.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:52 compute-2 nova_compute[226829]: 2026-01-31 08:01:52.737 226833 DEBUG nova.storage.rbd_utils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] resizing rbd image e2e6f697-3383-4887-b25a-c89447e67fc2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:01:52 compute-2 ceph-mon[77282]: pgmap v1938: 305 pgs: 305 active+clean; 247 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 127 op/s
Jan 31 08:01:53 compute-2 nova_compute[226829]: 2026-01-31 08:01:53.078 226833 DEBUG nova.objects.instance [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'migration_context' on Instance uuid e2e6f697-3383-4887-b25a-c89447e67fc2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:01:53 compute-2 nova_compute[226829]: 2026-01-31 08:01:53.098 226833 DEBUG nova.virt.libvirt.driver [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:01:53 compute-2 nova_compute[226829]: 2026-01-31 08:01:53.099 226833 DEBUG nova.virt.libvirt.driver [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Ensure instance console log exists: /var/lib/nova/instances/e2e6f697-3383-4887-b25a-c89447e67fc2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:01:53 compute-2 nova_compute[226829]: 2026-01-31 08:01:53.099 226833 DEBUG oslo_concurrency.lockutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:53 compute-2 nova_compute[226829]: 2026-01-31 08:01:53.099 226833 DEBUG oslo_concurrency.lockutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:53 compute-2 nova_compute[226829]: 2026-01-31 08:01:53.099 226833 DEBUG oslo_concurrency.lockutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:53.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:53 compute-2 ceph-mon[77282]: pgmap v1939: 305 pgs: 305 active+clean; 250 MiB data, 856 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 151 op/s
Jan 31 08:01:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:01:54 compute-2 nova_compute[226829]: 2026-01-31 08:01:54.713 226833 DEBUG nova.network.neutron [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Successfully updated port: 647b42a2-81d3-49dc-9075-d31077bfffe8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:01:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:01:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:54.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:01:54 compute-2 nova_compute[226829]: 2026-01-31 08:01:54.740 226833 DEBUG oslo_concurrency.lockutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "refresh_cache-e2e6f697-3383-4887-b25a-c89447e67fc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:01:54 compute-2 nova_compute[226829]: 2026-01-31 08:01:54.740 226833 DEBUG oslo_concurrency.lockutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquired lock "refresh_cache-e2e6f697-3383-4887-b25a-c89447e67fc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:01:54 compute-2 nova_compute[226829]: 2026-01-31 08:01:54.740 226833 DEBUG nova.network.neutron [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:01:54 compute-2 nova_compute[226829]: 2026-01-31 08:01:54.826 226833 DEBUG nova.network.neutron [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Updating instance_info_cache with network_info: [{"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:01:54 compute-2 nova_compute[226829]: 2026-01-31 08:01:54.859 226833 DEBUG oslo_concurrency.lockutils [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Releasing lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:01:54 compute-2 nova_compute[226829]: 2026-01-31 08:01:54.860 226833 DEBUG nova.compute.manager [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:01:54 compute-2 sudo[267070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:01:54 compute-2 sudo[267070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:54 compute-2 sudo[267070]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:54 compute-2 nova_compute[226829]: 2026-01-31 08:01:54.948 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:54 compute-2 sudo[267095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:01:54 compute-2 sudo[267095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:54 compute-2 sudo[267095]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:55 compute-2 sudo[267120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:01:55 compute-2 sudo[267120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:55 compute-2 sudo[267120]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:55 compute-2 kernel: tap86cf2bf6-2f (unregistering): left promiscuous mode
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.047 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:55 compute-2 NetworkManager[48999]: <info>  [1769846515.0483] device (tap86cf2bf6-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.055 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:55 compute-2 ovn_controller[133834]: 2026-01-31T08:01:55Z|00301|binding|INFO|Releasing lport 86cf2bf6-2f28-4435-b081-a3945070ed2d from this chassis (sb_readonly=0)
Jan 31 08:01:55 compute-2 ovn_controller[133834]: 2026-01-31T08:01:55Z|00302|binding|INFO|Setting lport 86cf2bf6-2f28-4435-b081-a3945070ed2d down in Southbound
Jan 31 08:01:55 compute-2 ovn_controller[133834]: 2026-01-31T08:01:55Z|00303|binding|INFO|Removing iface tap86cf2bf6-2f ovn-installed in OVS
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.059 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.062 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:55.067 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:98:96 10.100.0.13'], port_security=['fa:16:3e:01:98:96 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'dfba7f29-bde8-4327-a7b3-1c4fd44e045a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7be0d68e-c4ff-4356-97f2-bd58246f6e46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.222'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=86cf2bf6-2f28-4435-b081-a3945070ed2d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:01:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:55.071 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 86cf2bf6-2f28-4435-b081-a3945070ed2d in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 unbound from our chassis
Jan 31 08:01:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:55.073 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5cc2535f-0f8f-4713-a35c-9805048a29a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:01:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:55.077 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cf48e147-f082-4890-abbf-efd282f56d55]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:55.078 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 namespace which is not needed anymore
Jan 31 08:01:55 compute-2 sudo[267145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:01:55 compute-2 sudo[267145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:55 compute-2 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Jan 31 08:01:55 compute-2 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d0000005b.scope: Consumed 14.168s CPU time.
Jan 31 08:01:55 compute-2 systemd-machined[195142]: Machine qemu-37-instance-0000005b terminated.
Jan 31 08:01:55 compute-2 kernel: tap86cf2bf6-2f: entered promiscuous mode
Jan 31 08:01:55 compute-2 NetworkManager[48999]: <info>  [1769846515.1908] manager: (tap86cf2bf6-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/159)
Jan 31 08:01:55 compute-2 systemd-udevd[267170]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:01:55 compute-2 kernel: tap86cf2bf6-2f (unregistering): left promiscuous mode
Jan 31 08:01:55 compute-2 ovn_controller[133834]: 2026-01-31T08:01:55Z|00304|binding|INFO|Claiming lport 86cf2bf6-2f28-4435-b081-a3945070ed2d for this chassis.
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.197 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:55 compute-2 ovn_controller[133834]: 2026-01-31T08:01:55Z|00305|binding|INFO|86cf2bf6-2f28-4435-b081-a3945070ed2d: Claiming fa:16:3e:01:98:96 10.100.0.13
Jan 31 08:01:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:55.210 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:98:96 10.100.0.13'], port_security=['fa:16:3e:01:98:96 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'dfba7f29-bde8-4327-a7b3-1c4fd44e045a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7be0d68e-c4ff-4356-97f2-bd58246f6e46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.222'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=86cf2bf6-2f28-4435-b081-a3945070ed2d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:01:55 compute-2 ovn_controller[133834]: 2026-01-31T08:01:55Z|00306|binding|INFO|Setting lport 86cf2bf6-2f28-4435-b081-a3945070ed2d ovn-installed in OVS
Jan 31 08:01:55 compute-2 ovn_controller[133834]: 2026-01-31T08:01:55Z|00307|binding|INFO|Setting lport 86cf2bf6-2f28-4435-b081-a3945070ed2d up in Southbound
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.211 226833 INFO nova.virt.libvirt.driver [-] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Instance destroyed successfully.
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.212 226833 DEBUG nova.objects.instance [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'resources' on Instance uuid dfba7f29-bde8-4327-a7b3-1c4fd44e045a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.213 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:55 compute-2 ovn_controller[133834]: 2026-01-31T08:01:55Z|00308|binding|INFO|Releasing lport 86cf2bf6-2f28-4435-b081-a3945070ed2d from this chassis (sb_readonly=1)
Jan 31 08:01:55 compute-2 ovn_controller[133834]: 2026-01-31T08:01:55Z|00309|if_status|INFO|Dropped 10 log messages in last 372 seconds (most recently, 372 seconds ago) due to excessive rate
Jan 31 08:01:55 compute-2 ovn_controller[133834]: 2026-01-31T08:01:55Z|00310|if_status|INFO|Not setting lport 86cf2bf6-2f28-4435-b081-a3945070ed2d down as sb is readonly
Jan 31 08:01:55 compute-2 ovn_controller[133834]: 2026-01-31T08:01:55Z|00311|binding|INFO|Removing iface tap86cf2bf6-2f ovn-installed in OVS
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.214 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.218 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:55 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[266751]: [NOTICE]   (266759) : haproxy version is 2.8.14-c23fe91
Jan 31 08:01:55 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[266751]: [NOTICE]   (266759) : path to executable is /usr/sbin/haproxy
Jan 31 08:01:55 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[266751]: [WARNING]  (266759) : Exiting Master process...
Jan 31 08:01:55 compute-2 ovn_controller[133834]: 2026-01-31T08:01:55Z|00312|binding|INFO|Releasing lport 86cf2bf6-2f28-4435-b081-a3945070ed2d from this chassis (sb_readonly=0)
Jan 31 08:01:55 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[266751]: [ALERT]    (266759) : Current worker (266761) exited with code 143 (Terminated)
Jan 31 08:01:55 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[266751]: [WARNING]  (266759) : All workers exited. Exiting... (0)
Jan 31 08:01:55 compute-2 ovn_controller[133834]: 2026-01-31T08:01:55Z|00313|binding|INFO|Setting lport 86cf2bf6-2f28-4435-b081-a3945070ed2d down in Southbound
Jan 31 08:01:55 compute-2 systemd[1]: libpod-c38c002ae31fff9389f6e4e4e2afe3733d683cb9dc02efef06c48bcb3d5174cc.scope: Deactivated successfully.
Jan 31 08:01:55 compute-2 podman[267194]: 2026-01-31 08:01:55.231269959 +0000 UTC m=+0.065672669 container died c38c002ae31fff9389f6e4e4e2afe3733d683cb9dc02efef06c48bcb3d5174cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:01:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:55.236 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:98:96 10.100.0.13'], port_security=['fa:16:3e:01:98:96 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'dfba7f29-bde8-4327-a7b3-1c4fd44e045a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7be0d68e-c4ff-4356-97f2-bd58246f6e46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.222'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=86cf2bf6-2f28-4435-b081-a3945070ed2d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.236 226833 DEBUG nova.virt.libvirt.vif [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:01:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1163372726',display_name='tempest-ServerActionsTestJSON-server-1163372726',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1163372726',id=91,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:01:28Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-30t53h1p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:01:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=dfba7f29-bde8-4327-a7b3-1c4fd44e045a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.237 226833 DEBUG nova.network.os_vif_util [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.238 226833 DEBUG nova.network.os_vif_util [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.238 226833 DEBUG os_vif [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.241 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.242 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86cf2bf6-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.243 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.246 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:55 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c38c002ae31fff9389f6e4e4e2afe3733d683cb9dc02efef06c48bcb3d5174cc-userdata-shm.mount: Deactivated successfully.
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.255 226833 INFO os_vif [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f')
Jan 31 08:01:55 compute-2 systemd[1]: var-lib-containers-storage-overlay-c3354c19ac48336039dfaebf5bdbb6fb68d33d16128846ddbb2dd43061852f4b-merged.mount: Deactivated successfully.
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.263 226833 DEBUG nova.virt.libvirt.driver [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Start _get_guest_xml network_info=[{"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.267 226833 WARNING nova.virt.libvirt.driver [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.274 226833 DEBUG nova.virt.libvirt.host [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:01:55 compute-2 podman[267194]: 2026-01-31 08:01:55.275626551 +0000 UTC m=+0.110029261 container cleanup c38c002ae31fff9389f6e4e4e2afe3733d683cb9dc02efef06c48bcb3d5174cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.274 226833 DEBUG nova.virt.libvirt.host [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.280 226833 DEBUG nova.virt.libvirt.host [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.284 226833 DEBUG nova.virt.libvirt.host [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:01:55 compute-2 systemd[1]: libpod-conmon-c38c002ae31fff9389f6e4e4e2afe3733d683cb9dc02efef06c48bcb3d5174cc.scope: Deactivated successfully.
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.285 226833 DEBUG nova.virt.libvirt.driver [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.285 226833 DEBUG nova.virt.hardware [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.286 226833 DEBUG nova.virt.hardware [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.286 226833 DEBUG nova.virt.hardware [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.287 226833 DEBUG nova.virt.hardware [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.287 226833 DEBUG nova.virt.hardware [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.287 226833 DEBUG nova.virt.hardware [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.288 226833 DEBUG nova.virt.hardware [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.288 226833 DEBUG nova.virt.hardware [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.288 226833 DEBUG nova.virt.hardware [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.289 226833 DEBUG nova.virt.hardware [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.289 226833 DEBUG nova.virt.hardware [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.290 226833 DEBUG nova.objects.instance [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'vcpu_model' on Instance uuid dfba7f29-bde8-4327-a7b3-1c4fd44e045a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.312 226833 DEBUG oslo_concurrency.processutils [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:55 compute-2 podman[267236]: 2026-01-31 08:01:55.335437872 +0000 UTC m=+0.044554028 container remove c38c002ae31fff9389f6e4e4e2afe3733d683cb9dc02efef06c48bcb3d5174cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:01:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:55.339 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f078bb0b-506d-43a0-8ae6-6cea93eb900c]: (4, ('Sat Jan 31 08:01:55 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 (c38c002ae31fff9389f6e4e4e2afe3733d683cb9dc02efef06c48bcb3d5174cc)\nc38c002ae31fff9389f6e4e4e2afe3733d683cb9dc02efef06c48bcb3d5174cc\nSat Jan 31 08:01:55 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 (c38c002ae31fff9389f6e4e4e2afe3733d683cb9dc02efef06c48bcb3d5174cc)\nc38c002ae31fff9389f6e4e4e2afe3733d683cb9dc02efef06c48bcb3d5174cc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:55.341 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[026f66a2-a719-443c-ae32-7e41f7e12051]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:55.343 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc2535f-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.345 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:55 compute-2 kernel: tap5cc2535f-00: left promiscuous mode
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.352 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:55.356 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f6da6653-6096-4c2c-aed6-6f8af13ba955]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:55.377 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[fbc0f701-86b3-4985-807e-5a03e1ef36a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:55.378 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b847c3ad-4759-43c9-b2ff-3322a4e0a737]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:55.390 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1688dcbf-4e2d-4317-a922-3b7175951e8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673994, 'reachable_time': 39275, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267257, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:55 compute-2 systemd[1]: run-netns-ovnmeta\x2d5cc2535f\x2d0f8f\x2d4713\x2da35c\x2d9805048a29a8.mount: Deactivated successfully.
Jan 31 08:01:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:55.396 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:01:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:55.396 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[ea6eb5cf-a50b-426e-85cc-3efec1475e9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:55.396 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 86cf2bf6-2f28-4435-b081-a3945070ed2d in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 unbound from our chassis
Jan 31 08:01:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:55.398 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5cc2535f-0f8f-4713-a35c-9805048a29a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:01:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:55.399 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1bbaae8c-8a38-4d39-8d99-bae70712d766]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:55.400 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 86cf2bf6-2f28-4435-b081-a3945070ed2d in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 unbound from our chassis
Jan 31 08:01:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:55.401 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5cc2535f-0f8f-4713-a35c-9805048a29a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:01:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:55.402 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7eef9c74-ba38-4d5d-bcba-b9550f20edda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.411 226833 DEBUG nova.network.neutron [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:01:55 compute-2 sudo[267145]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.602 226833 DEBUG nova.compute.manager [req-45585406-ac09-46c8-88b3-531435187c94 req-3fd5f8a1-81d4-42c8-9ae0-9581e65616f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-unplugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.602 226833 DEBUG oslo_concurrency.lockutils [req-45585406-ac09-46c8-88b3-531435187c94 req-3fd5f8a1-81d4-42c8-9ae0-9581e65616f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.603 226833 DEBUG oslo_concurrency.lockutils [req-45585406-ac09-46c8-88b3-531435187c94 req-3fd5f8a1-81d4-42c8-9ae0-9581e65616f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.603 226833 DEBUG oslo_concurrency.lockutils [req-45585406-ac09-46c8-88b3-531435187c94 req-3fd5f8a1-81d4-42c8-9ae0-9581e65616f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.603 226833 DEBUG nova.compute.manager [req-45585406-ac09-46c8-88b3-531435187c94 req-3fd5f8a1-81d4-42c8-9ae0-9581e65616f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-unplugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.603 226833 WARNING nova.compute.manager [req-45585406-ac09-46c8-88b3-531435187c94 req-3fd5f8a1-81d4-42c8-9ae0-9581e65616f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-unplugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state active and task_state reboot_started_hard.
Jan 31 08:01:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:01:55 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3520477777' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.728 226833 DEBUG oslo_concurrency.processutils [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.768 226833 DEBUG oslo_concurrency.processutils [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:55.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.975 226833 DEBUG nova.compute.manager [req-7039b028-ee48-4e6c-be99-ff31801470da req-85a0e9a5-9933-4989-b96e-234bb22bf0da 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Received event network-changed-647b42a2-81d3-49dc-9075-d31077bfffe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.976 226833 DEBUG nova.compute.manager [req-7039b028-ee48-4e6c-be99-ff31801470da req-85a0e9a5-9933-4989-b96e-234bb22bf0da 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Refreshing instance network info cache due to event network-changed-647b42a2-81d3-49dc-9075-d31077bfffe8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:01:55 compute-2 nova_compute[226829]: 2026-01-31 08:01:55.976 226833 DEBUG oslo_concurrency.lockutils [req-7039b028-ee48-4e6c-be99-ff31801470da req-85a0e9a5-9933-4989-b96e-234bb22bf0da 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-e2e6f697-3383-4887-b25a-c89447e67fc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:01:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:01:56 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3165637769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.169 226833 DEBUG oslo_concurrency.processutils [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.171 226833 DEBUG nova.virt.libvirt.vif [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:01:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1163372726',display_name='tempest-ServerActionsTestJSON-server-1163372726',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1163372726',id=91,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:01:28Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-30t53h1p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:01:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=dfba7f29-bde8-4327-a7b3-1c4fd44e045a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.171 226833 DEBUG nova.network.os_vif_util [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.172 226833 DEBUG nova.network.os_vif_util [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.173 226833 DEBUG nova.objects.instance [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'pci_devices' on Instance uuid dfba7f29-bde8-4327-a7b3-1c4fd44e045a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.218 226833 DEBUG nova.virt.libvirt.driver [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:01:56 compute-2 nova_compute[226829]:   <uuid>dfba7f29-bde8-4327-a7b3-1c4fd44e045a</uuid>
Jan 31 08:01:56 compute-2 nova_compute[226829]:   <name>instance-0000005b</name>
Jan 31 08:01:56 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:01:56 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:01:56 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <nova:name>tempest-ServerActionsTestJSON-server-1163372726</nova:name>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:01:55</nova:creationTime>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:01:56 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:01:56 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:01:56 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:01:56 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:01:56 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:01:56 compute-2 nova_compute[226829]:         <nova:user uuid="d9ed446fb2cf4fc0a4e619c6c766fddc">tempest-ServerActionsTestJSON-1391450973-project-member</nova:user>
Jan 31 08:01:56 compute-2 nova_compute[226829]:         <nova:project uuid="1fcec9ca13964c7191134db4420ab049">tempest-ServerActionsTestJSON-1391450973</nova:project>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:01:56 compute-2 nova_compute[226829]:         <nova:port uuid="86cf2bf6-2f28-4435-b081-a3945070ed2d">
Jan 31 08:01:56 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:01:56 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:01:56 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <system>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <entry name="serial">dfba7f29-bde8-4327-a7b3-1c4fd44e045a</entry>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <entry name="uuid">dfba7f29-bde8-4327-a7b3-1c4fd44e045a</entry>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     </system>
Jan 31 08:01:56 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:01:56 compute-2 nova_compute[226829]:   <os>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:   </os>
Jan 31 08:01:56 compute-2 nova_compute[226829]:   <features>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:   </features>
Jan 31 08:01:56 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:01:56 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:01:56 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/dfba7f29-bde8-4327-a7b3-1c4fd44e045a_disk">
Jan 31 08:01:56 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       </source>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:01:56 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/dfba7f29-bde8-4327-a7b3-1c4fd44e045a_disk.config">
Jan 31 08:01:56 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       </source>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:01:56 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:01:98:96"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <target dev="tap86cf2bf6-2f"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/dfba7f29-bde8-4327-a7b3-1c4fd44e045a/console.log" append="off"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <video>
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     </video>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <input type="keyboard" bus="usb"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:01:56 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:01:56 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:01:56 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:01:56 compute-2 nova_compute[226829]: </domain>
Jan 31 08:01:56 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.219 226833 DEBUG nova.virt.libvirt.driver [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.219 226833 DEBUG nova.virt.libvirt.driver [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.220 226833 DEBUG nova.virt.libvirt.vif [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:01:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1163372726',display_name='tempest-ServerActionsTestJSON-server-1163372726',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1163372726',id=91,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:01:28Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-30t53h1p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:01:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=dfba7f29-bde8-4327-a7b3-1c4fd44e045a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.220 226833 DEBUG nova.network.os_vif_util [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.221 226833 DEBUG nova.network.os_vif_util [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.221 226833 DEBUG os_vif [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.222 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.222 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.223 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.226 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.226 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap86cf2bf6-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.227 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap86cf2bf6-2f, col_values=(('external_ids', {'iface-id': '86cf2bf6-2f28-4435-b081-a3945070ed2d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:98:96', 'vm-uuid': 'dfba7f29-bde8-4327-a7b3-1c4fd44e045a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.228 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:56 compute-2 NetworkManager[48999]: <info>  [1769846516.2304] manager: (tap86cf2bf6-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/160)
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.231 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.234 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.235 226833 INFO os_vif [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f')
Jan 31 08:01:56 compute-2 kernel: tap86cf2bf6-2f: entered promiscuous mode
Jan 31 08:01:56 compute-2 NetworkManager[48999]: <info>  [1769846516.2992] manager: (tap86cf2bf6-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/161)
Jan 31 08:01:56 compute-2 ovn_controller[133834]: 2026-01-31T08:01:56Z|00314|binding|INFO|Claiming lport 86cf2bf6-2f28-4435-b081-a3945070ed2d for this chassis.
Jan 31 08:01:56 compute-2 ovn_controller[133834]: 2026-01-31T08:01:56Z|00315|binding|INFO|86cf2bf6-2f28-4435-b081-a3945070ed2d: Claiming fa:16:3e:01:98:96 10.100.0.13
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.299 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:56 compute-2 ovn_controller[133834]: 2026-01-31T08:01:56Z|00316|binding|INFO|Setting lport 86cf2bf6-2f28-4435-b081-a3945070ed2d ovn-installed in OVS
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.306 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.308 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:56 compute-2 NetworkManager[48999]: <info>  [1769846516.3090] device (tap86cf2bf6-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:01:56 compute-2 NetworkManager[48999]: <info>  [1769846516.3097] device (tap86cf2bf6-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:01:56 compute-2 ovn_controller[133834]: 2026-01-31T08:01:56Z|00317|binding|INFO|Setting lport 86cf2bf6-2f28-4435-b081-a3945070ed2d up in Southbound
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:56.314 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:98:96 10.100.0.13'], port_security=['fa:16:3e:01:98:96 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'dfba7f29-bde8-4327-a7b3-1c4fd44e045a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7be0d68e-c4ff-4356-97f2-bd58246f6e46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.222'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=86cf2bf6-2f28-4435-b081-a3945070ed2d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:56.316 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 86cf2bf6-2f28-4435-b081-a3945070ed2d in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 bound to our chassis
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:56.319 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 08:01:56 compute-2 systemd-machined[195142]: New machine qemu-38-instance-0000005b.
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:56.326 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[44c9a7f9-1f84-4348-a10b-0f48268f0102]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:56.327 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5cc2535f-01 in ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:56.329 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5cc2535f-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:56.329 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c7407a87-6350-4c5b-8397-6e25804390c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:56.329 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a5e88bfa-6cc6-4826-9fcc-4372a3acc07c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:56 compute-2 systemd[1]: Started Virtual Machine qemu-38-instance-0000005b.
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:56.337 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[a14295d2-bbeb-435a-b726-a76559619493]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:56.361 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[fd7b7985-e03a-4db6-8b41-ad9870e9f619]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:56.384 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[df8ffc51-a8c9-48eb-8517-d133e16471eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:56 compute-2 NetworkManager[48999]: <info>  [1769846516.3902] manager: (tap5cc2535f-00): new Veth device (/org/freedesktop/NetworkManager/Devices/162)
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:56.389 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[78066f46-bdac-4c5f-aa64-c25a694ad0ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:56.417 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[0d0f518b-3f65-4ea5-b60c-5faee35b4c03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:56.420 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[c8c1cc0b-5147-46c1-9f6f-780a06532ff0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:56 compute-2 NetworkManager[48999]: <info>  [1769846516.4406] device (tap5cc2535f-00): carrier: link connected
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:56.445 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[6dffd530-d73d-4a95-b70b-f92abf3dd2b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:56.456 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c53e7010-afcd-4b89-ae15-544b883120b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc2535f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:76:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 98], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676891, 'reachable_time': 39063, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267376, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:56.468 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[191ca3c7-f7c8-4ff3-9dc3-3e82cac72de8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:76f8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676891, 'tstamp': 676891}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267377, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:56.479 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[05c2019a-5095-4e79-a9a7-df1f01f2d5aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc2535f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:76:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 98], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676891, 'reachable_time': 39063, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267378, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:56.498 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0f3dce22-9a51-4a9f-8a61-6fce131b2f8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:56.534 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1a7175f0-c303-40a5-9297-e84c37cd4513]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:56.535 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc2535f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:56.535 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:56.535 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cc2535f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.537 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:56 compute-2 NetworkManager[48999]: <info>  [1769846516.5377] manager: (tap5cc2535f-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/163)
Jan 31 08:01:56 compute-2 kernel: tap5cc2535f-00: entered promiscuous mode
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:56.544 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5cc2535f-00, col_values=(('external_ids', {'iface-id': 'ab077a7e-cc79-4948-8987-2cd87d88deff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.545 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:56 compute-2 ovn_controller[133834]: 2026-01-31T08:01:56Z|00318|binding|INFO|Releasing lport ab077a7e-cc79-4948-8987-2cd87d88deff from this chassis (sb_readonly=0)
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.546 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:56.548 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:56.549 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6670ccab-5c80-40f2-9cf3-1f99ab4cdf05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:56 compute-2 nova_compute[226829]: 2026-01-31 08:01:56.550 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:56.551 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:01:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:56.552 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'env', 'PROCESS_TAG=haproxy-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5cc2535f-0f8f-4713-a35c-9805048a29a8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:01:56 compute-2 ceph-mon[77282]: pgmap v1940: 305 pgs: 305 active+clean; 272 MiB data, 868 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 156 op/s
Jan 31 08:01:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:01:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:01:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:01:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:01:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:01:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:01:56 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3520477777' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:56 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3165637769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:56.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:56 compute-2 podman[267417]: 2026-01-31 08:01:56.87010326 +0000 UTC m=+0.047799416 container create 97172884cc7aaa8fe04f757a8f2554ae066fae3e77ab6923edf57bdb686c66f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 08:01:56 compute-2 systemd[1]: Started libpod-conmon-97172884cc7aaa8fe04f757a8f2554ae066fae3e77ab6923edf57bdb686c66f8.scope.
Jan 31 08:01:56 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:01:56 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a56d2d30e79cbf8824524d2b0b37f9f148d07fbd685e4375b3df8fcacb2047e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:56 compute-2 podman[267417]: 2026-01-31 08:01:56.926324194 +0000 UTC m=+0.104020370 container init 97172884cc7aaa8fe04f757a8f2554ae066fae3e77ab6923edf57bdb686c66f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 08:01:56 compute-2 podman[267417]: 2026-01-31 08:01:56.931708758 +0000 UTC m=+0.109404914 container start 97172884cc7aaa8fe04f757a8f2554ae066fae3e77ab6923edf57bdb686c66f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:01:56 compute-2 podman[267417]: 2026-01-31 08:01:56.846360077 +0000 UTC m=+0.024056263 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:01:56 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[267443]: [NOTICE]   (267447) : New worker (267449) forked
Jan 31 08:01:56 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[267443]: [NOTICE]   (267447) : Loading success.
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.021 226833 DEBUG nova.network.neutron [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Updating instance_info_cache with network_info: [{"id": "647b42a2-81d3-49dc-9075-d31077bfffe8", "address": "fa:16:3e:14:eb:48", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b42a2-81", "ovs_interfaceid": "647b42a2-81d3-49dc-9075-d31077bfffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.054 226833 DEBUG oslo_concurrency.lockutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Releasing lock "refresh_cache-e2e6f697-3383-4887-b25a-c89447e67fc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.055 226833 DEBUG nova.compute.manager [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Instance network_info: |[{"id": "647b42a2-81d3-49dc-9075-d31077bfffe8", "address": "fa:16:3e:14:eb:48", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b42a2-81", "ovs_interfaceid": "647b42a2-81d3-49dc-9075-d31077bfffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.055 226833 DEBUG oslo_concurrency.lockutils [req-7039b028-ee48-4e6c-be99-ff31801470da req-85a0e9a5-9933-4989-b96e-234bb22bf0da 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-e2e6f697-3383-4887-b25a-c89447e67fc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.056 226833 DEBUG nova.network.neutron [req-7039b028-ee48-4e6c-be99-ff31801470da req-85a0e9a5-9933-4989-b96e-234bb22bf0da 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Refreshing network info cache for port 647b42a2-81d3-49dc-9075-d31077bfffe8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.061 226833 DEBUG nova.virt.libvirt.driver [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Start _get_guest_xml network_info=[{"id": "647b42a2-81d3-49dc-9075-d31077bfffe8", "address": "fa:16:3e:14:eb:48", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b42a2-81", "ovs_interfaceid": "647b42a2-81d3-49dc-9075-d31077bfffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.066 226833 WARNING nova.virt.libvirt.driver [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.075 226833 DEBUG nova.virt.libvirt.host [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.076 226833 DEBUG nova.virt.libvirt.host [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.086 226833 DEBUG nova.virt.libvirt.host [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.087 226833 DEBUG nova.virt.libvirt.host [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.089 226833 DEBUG nova.virt.libvirt.driver [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.089 226833 DEBUG nova.virt.hardware [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.090 226833 DEBUG nova.virt.hardware [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.090 226833 DEBUG nova.virt.hardware [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.091 226833 DEBUG nova.virt.hardware [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.091 226833 DEBUG nova.virt.hardware [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.091 226833 DEBUG nova.virt.hardware [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.092 226833 DEBUG nova.virt.hardware [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.092 226833 DEBUG nova.virt.hardware [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.093 226833 DEBUG nova.virt.hardware [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.093 226833 DEBUG nova.virt.hardware [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.093 226833 DEBUG nova.virt.hardware [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.098 226833 DEBUG oslo_concurrency.processutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.332 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Removed pending event for dfba7f29-bde8-4327-a7b3-1c4fd44e045a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.333 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846517.3318489, dfba7f29-bde8-4327-a7b3-1c4fd44e045a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.333 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] VM Resumed (Lifecycle Event)
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.336 226833 DEBUG nova.compute.manager [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.342 226833 INFO nova.virt.libvirt.driver [-] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Instance rebooted successfully.
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.342 226833 DEBUG nova.compute.manager [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.392 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.395 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.480 226833 DEBUG oslo_concurrency.lockutils [None req-1a9a1bdf-e2e1-4fef-91e6-6b6969cd42db d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.482 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.482 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846517.3357594, dfba7f29-bde8-4327-a7b3-1c4fd44e045a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.482 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] VM Started (Lifecycle Event)
Jan 31 08:01:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:01:57 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3425632881' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.532 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.537 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.548 226833 DEBUG oslo_concurrency.processutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.581 226833 DEBUG nova.storage.rbd_utils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image e2e6f697-3383-4887-b25a-c89447e67fc2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.586 226833 DEBUG oslo_concurrency.processutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.833 226833 DEBUG nova.compute.manager [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.834 226833 DEBUG oslo_concurrency.lockutils [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.834 226833 DEBUG oslo_concurrency.lockutils [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.834 226833 DEBUG oslo_concurrency.lockutils [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.835 226833 DEBUG nova.compute.manager [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.835 226833 WARNING nova.compute.manager [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state active and task_state None.
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.835 226833 DEBUG nova.compute.manager [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.835 226833 DEBUG oslo_concurrency.lockutils [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.836 226833 DEBUG oslo_concurrency.lockutils [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.836 226833 DEBUG oslo_concurrency.lockutils [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.836 226833 DEBUG nova.compute.manager [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.837 226833 WARNING nova.compute.manager [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state active and task_state None.
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.837 226833 DEBUG nova.compute.manager [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.837 226833 DEBUG oslo_concurrency.lockutils [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.838 226833 DEBUG oslo_concurrency.lockutils [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.838 226833 DEBUG oslo_concurrency.lockutils [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.838 226833 DEBUG nova.compute.manager [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.839 226833 WARNING nova.compute.manager [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state active and task_state None.
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.839 226833 DEBUG nova.compute.manager [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-unplugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.839 226833 DEBUG oslo_concurrency.lockutils [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.840 226833 DEBUG oslo_concurrency.lockutils [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.840 226833 DEBUG oslo_concurrency.lockutils [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.840 226833 DEBUG nova.compute.manager [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-unplugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.841 226833 WARNING nova.compute.manager [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-unplugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state active and task_state None.
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.841 226833 DEBUG nova.compute.manager [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.841 226833 DEBUG oslo_concurrency.lockutils [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.842 226833 DEBUG oslo_concurrency.lockutils [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.842 226833 DEBUG oslo_concurrency.lockutils [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.842 226833 DEBUG nova.compute.manager [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.842 226833 WARNING nova.compute.manager [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state active and task_state None.
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.843 226833 DEBUG nova.compute.manager [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.843 226833 DEBUG oslo_concurrency.lockutils [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.843 226833 DEBUG oslo_concurrency.lockutils [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.843 226833 DEBUG oslo_concurrency.lockutils [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.844 226833 DEBUG nova.compute.manager [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.844 226833 WARNING nova.compute.manager [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state active and task_state None.
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.844 226833 DEBUG nova.compute.manager [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.845 226833 DEBUG oslo_concurrency.lockutils [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.845 226833 DEBUG oslo_concurrency.lockutils [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.845 226833 DEBUG oslo_concurrency.lockutils [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.846 226833 DEBUG nova.compute.manager [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:01:57 compute-2 nova_compute[226829]: 2026-01-31 08:01:57.846 226833 WARNING nova.compute.manager [req-913172c2-9dfe-4076-972a-d5466a357dd8 req-c15aae2a-cd90-497a-9dc8-688928fbba0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state active and task_state None.
Jan 31 08:01:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:57.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3425632881' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:01:58 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1697821044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.048 226833 DEBUG oslo_concurrency.processutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.050 226833 DEBUG nova.virt.libvirt.vif [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:01:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-810198552',display_name='tempest-ServerDiskConfigTestJSON-server-810198552',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-810198552',id=93,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-1w400nzc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempest-ServerDiskConfigTestJSON-984925022-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:01:50Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=e2e6f697-3383-4887-b25a-c89447e67fc2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "647b42a2-81d3-49dc-9075-d31077bfffe8", "address": "fa:16:3e:14:eb:48", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b42a2-81", "ovs_interfaceid": "647b42a2-81d3-49dc-9075-d31077bfffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.050 226833 DEBUG nova.network.os_vif_util [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "647b42a2-81d3-49dc-9075-d31077bfffe8", "address": "fa:16:3e:14:eb:48", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b42a2-81", "ovs_interfaceid": "647b42a2-81d3-49dc-9075-d31077bfffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.051 226833 DEBUG nova.network.os_vif_util [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:eb:48,bridge_name='br-int',has_traffic_filtering=True,id=647b42a2-81d3-49dc-9075-d31077bfffe8,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b42a2-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.053 226833 DEBUG nova.objects.instance [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid e2e6f697-3383-4887-b25a-c89447e67fc2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.087 226833 DEBUG nova.virt.libvirt.driver [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:01:58 compute-2 nova_compute[226829]:   <uuid>e2e6f697-3383-4887-b25a-c89447e67fc2</uuid>
Jan 31 08:01:58 compute-2 nova_compute[226829]:   <name>instance-0000005d</name>
Jan 31 08:01:58 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:01:58 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:01:58 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-810198552</nova:name>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:01:57</nova:creationTime>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:01:58 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:01:58 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:01:58 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:01:58 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:01:58 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:01:58 compute-2 nova_compute[226829]:         <nova:user uuid="63e95edea0164ae2a9820dc10467335d">tempest-ServerDiskConfigTestJSON-984925022-project-member</nova:user>
Jan 31 08:01:58 compute-2 nova_compute[226829]:         <nova:project uuid="be74d11d2f5a4d9aae2dbe32c31ad9c3">tempest-ServerDiskConfigTestJSON-984925022</nova:project>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:01:58 compute-2 nova_compute[226829]:         <nova:port uuid="647b42a2-81d3-49dc-9075-d31077bfffe8">
Jan 31 08:01:58 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:01:58 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:01:58 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <system>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <entry name="serial">e2e6f697-3383-4887-b25a-c89447e67fc2</entry>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <entry name="uuid">e2e6f697-3383-4887-b25a-c89447e67fc2</entry>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     </system>
Jan 31 08:01:58 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:01:58 compute-2 nova_compute[226829]:   <os>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:   </os>
Jan 31 08:01:58 compute-2 nova_compute[226829]:   <features>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:   </features>
Jan 31 08:01:58 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:01:58 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:01:58 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/e2e6f697-3383-4887-b25a-c89447e67fc2_disk">
Jan 31 08:01:58 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       </source>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:01:58 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/e2e6f697-3383-4887-b25a-c89447e67fc2_disk.config">
Jan 31 08:01:58 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       </source>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:01:58 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:14:eb:48"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <target dev="tap647b42a2-81"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/e2e6f697-3383-4887-b25a-c89447e67fc2/console.log" append="off"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <video>
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     </video>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:01:58 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:01:58 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:01:58 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:01:58 compute-2 nova_compute[226829]: </domain>
Jan 31 08:01:58 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.094 226833 DEBUG nova.compute.manager [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Preparing to wait for external event network-vif-plugged-647b42a2-81d3-49dc-9075-d31077bfffe8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.095 226833 DEBUG oslo_concurrency.lockutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.096 226833 DEBUG oslo_concurrency.lockutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.096 226833 DEBUG oslo_concurrency.lockutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.099 226833 DEBUG nova.virt.libvirt.vif [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:01:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-810198552',display_name='tempest-ServerDiskConfigTestJSON-server-810198552',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-810198552',id=93,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-1w400nzc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempest-ServerDiskConfigTestJSON-984925022-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:01:50Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=e2e6f697-3383-4887-b25a-c89447e67fc2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "647b42a2-81d3-49dc-9075-d31077bfffe8", "address": "fa:16:3e:14:eb:48", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b42a2-81", "ovs_interfaceid": "647b42a2-81d3-49dc-9075-d31077bfffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.100 226833 DEBUG nova.network.os_vif_util [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "647b42a2-81d3-49dc-9075-d31077bfffe8", "address": "fa:16:3e:14:eb:48", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b42a2-81", "ovs_interfaceid": "647b42a2-81d3-49dc-9075-d31077bfffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.103 226833 DEBUG nova.network.os_vif_util [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:eb:48,bridge_name='br-int',has_traffic_filtering=True,id=647b42a2-81d3-49dc-9075-d31077bfffe8,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b42a2-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.104 226833 DEBUG os_vif [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:eb:48,bridge_name='br-int',has_traffic_filtering=True,id=647b42a2-81d3-49dc-9075-d31077bfffe8,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b42a2-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.104 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.105 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.105 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.108 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.108 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap647b42a2-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.109 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap647b42a2-81, col_values=(('external_ids', {'iface-id': '647b42a2-81d3-49dc-9075-d31077bfffe8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:14:eb:48', 'vm-uuid': 'e2e6f697-3383-4887-b25a-c89447e67fc2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.111 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:58 compute-2 NetworkManager[48999]: <info>  [1769846518.1130] manager: (tap647b42a2-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/164)
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.113 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.117 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.118 226833 INFO os_vif [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:eb:48,bridge_name='br-int',has_traffic_filtering=True,id=647b42a2-81d3-49dc-9075-d31077bfffe8,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b42a2-81')
Jan 31 08:01:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:01:58 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3381306407' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.306 226833 DEBUG nova.virt.libvirt.driver [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.307 226833 DEBUG nova.virt.libvirt.driver [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.307 226833 DEBUG nova.virt.libvirt.driver [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] No VIF found with MAC fa:16:3e:14:eb:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.308 226833 INFO nova.virt.libvirt.driver [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Using config drive
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.333 226833 DEBUG nova.storage.rbd_utils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image e2e6f697-3383-4887-b25a-c89447e67fc2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:01:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:01:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:58.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:01:58 compute-2 ceph-mon[77282]: pgmap v1941: 305 pgs: 305 active+clean; 293 MiB data, 875 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Jan 31 08:01:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1697821044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3381306407' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.921 226833 INFO nova.virt.libvirt.driver [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Creating config drive at /var/lib/nova/instances/e2e6f697-3383-4887-b25a-c89447e67fc2/disk.config
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.925 226833 DEBUG oslo_concurrency.processutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e2e6f697-3383-4887-b25a-c89447e67fc2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpg18yibql execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.944 226833 INFO nova.compute.manager [None req-552409fe-6c7a-4de9-b9f5-2d8f89882ef4 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Get console output
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.950 226833 INFO oslo.privsep.daemon [None req-552409fe-6c7a-4de9-b9f5-2d8f89882ef4 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp3of85ld1/privsep.sock']
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.996 226833 DEBUG nova.network.neutron [req-7039b028-ee48-4e6c-be99-ff31801470da req-85a0e9a5-9933-4989-b96e-234bb22bf0da 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Updated VIF entry in instance network info cache for port 647b42a2-81d3-49dc-9075-d31077bfffe8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:01:58 compute-2 nova_compute[226829]: 2026-01-31 08:01:58.998 226833 DEBUG nova.network.neutron [req-7039b028-ee48-4e6c-be99-ff31801470da req-85a0e9a5-9933-4989-b96e-234bb22bf0da 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Updating instance_info_cache with network_info: [{"id": "647b42a2-81d3-49dc-9075-d31077bfffe8", "address": "fa:16:3e:14:eb:48", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b42a2-81", "ovs_interfaceid": "647b42a2-81d3-49dc-9075-d31077bfffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:01:59 compute-2 nova_compute[226829]: 2026-01-31 08:01:59.019 226833 DEBUG oslo_concurrency.lockutils [req-7039b028-ee48-4e6c-be99-ff31801470da req-85a0e9a5-9933-4989-b96e-234bb22bf0da 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-e2e6f697-3383-4887-b25a-c89447e67fc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:01:59 compute-2 nova_compute[226829]: 2026-01-31 08:01:59.047 226833 DEBUG oslo_concurrency.processutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e2e6f697-3383-4887-b25a-c89447e67fc2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpg18yibql" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:59 compute-2 nova_compute[226829]: 2026-01-31 08:01:59.075 226833 DEBUG nova.storage.rbd_utils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image e2e6f697-3383-4887-b25a-c89447e67fc2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:01:59 compute-2 nova_compute[226829]: 2026-01-31 08:01:59.080 226833 DEBUG oslo_concurrency.processutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e2e6f697-3383-4887-b25a-c89447e67fc2/disk.config e2e6f697-3383-4887-b25a-c89447e67fc2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:01:59 compute-2 nova_compute[226829]: 2026-01-31 08:01:59.246 226833 DEBUG oslo_concurrency.processutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e2e6f697-3383-4887-b25a-c89447e67fc2/disk.config e2e6f697-3383-4887-b25a-c89447e67fc2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:59 compute-2 nova_compute[226829]: 2026-01-31 08:01:59.248 226833 INFO nova.virt.libvirt.driver [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Deleting local config drive /var/lib/nova/instances/e2e6f697-3383-4887-b25a-c89447e67fc2/disk.config because it was imported into RBD.
Jan 31 08:01:59 compute-2 kernel: tap647b42a2-81: entered promiscuous mode
Jan 31 08:01:59 compute-2 ovn_controller[133834]: 2026-01-31T08:01:59Z|00319|binding|INFO|Claiming lport 647b42a2-81d3-49dc-9075-d31077bfffe8 for this chassis.
Jan 31 08:01:59 compute-2 ovn_controller[133834]: 2026-01-31T08:01:59Z|00320|binding|INFO|647b42a2-81d3-49dc-9075-d31077bfffe8: Claiming fa:16:3e:14:eb:48 10.100.0.3
Jan 31 08:01:59 compute-2 NetworkManager[48999]: <info>  [1769846519.2898] manager: (tap647b42a2-81): new Tun device (/org/freedesktop/NetworkManager/Devices/165)
Jan 31 08:01:59 compute-2 nova_compute[226829]: 2026-01-31 08:01:59.285 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:59 compute-2 ovn_controller[133834]: 2026-01-31T08:01:59Z|00321|binding|INFO|Setting lport 647b42a2-81d3-49dc-9075-d31077bfffe8 ovn-installed in OVS
Jan 31 08:01:59 compute-2 nova_compute[226829]: 2026-01-31 08:01:59.304 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:59 compute-2 ovn_controller[133834]: 2026-01-31T08:01:59Z|00322|binding|INFO|Setting lport 647b42a2-81d3-49dc-9075-d31077bfffe8 up in Southbound
Jan 31 08:01:59 compute-2 systemd-udevd[267624]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:59.311 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:eb:48 10.100.0.3'], port_security=['fa:16:3e:14:eb:48 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e2e6f697-3383-4887-b25a-c89447e67fc2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-121329c8-2359-4e9d-9f2b-4932f8740470', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be74d11d2f5a4d9aae2dbe32c31ad9c3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '755edf0d-318a-4b49-b9f5-851611889f15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cae77ce7-0851-4c6f-a030-c066a50c0f3d, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=647b42a2-81d3-49dc-9075-d31077bfffe8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:59.313 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 647b42a2-81d3-49dc-9075-d31077bfffe8 in datapath 121329c8-2359-4e9d-9f2b-4932f8740470 bound to our chassis
Jan 31 08:01:59 compute-2 systemd-machined[195142]: New machine qemu-39-instance-0000005d.
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:59.315 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 121329c8-2359-4e9d-9f2b-4932f8740470
Jan 31 08:01:59 compute-2 NetworkManager[48999]: <info>  [1769846519.3227] device (tap647b42a2-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:01:59 compute-2 NetworkManager[48999]: <info>  [1769846519.3236] device (tap647b42a2-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:59.324 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c444c79b-b1c1-4723-a304-4c391bbc79c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:59.325 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap121329c8-21 in ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:59.326 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap121329c8-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:59.326 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b5065e05-803e-4c72-986c-32998a73480c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:59 compute-2 systemd[1]: Started Virtual Machine qemu-39-instance-0000005d.
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:59.327 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[35badb7a-1c7e-4bf4-9fa6-8fd86966086a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:59.338 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[6fce71e4-c2aa-4be0-b3a1-a424d561bc56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:59.347 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[fc7f739a-44ec-4f8f-97d1-abd13ec1e85f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:59.369 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[652e29f5-5a4d-4edb-b501-4b334f6b856c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:59 compute-2 NetworkManager[48999]: <info>  [1769846519.3748] manager: (tap121329c8-20): new Veth device (/org/freedesktop/NetworkManager/Devices/166)
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:59.373 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1765010e-d95c-4e00-91a4-ae17b425c2ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:59.397 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[58c6a193-9d20-4aed-a28a-88216811c537]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:59.400 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[766bb92f-2ffb-4cf7-b68b-aab4fe035a54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:59 compute-2 NetworkManager[48999]: <info>  [1769846519.4202] device (tap121329c8-20): carrier: link connected
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:59.425 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[c52b52fc-cf8c-464c-bb39-9da485278ea4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:59.438 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b53a2b7c-8380-4cf6-86a9-b7752077a90d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap121329c8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:a3:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 100], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677189, 'reachable_time': 42155, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267657, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:59.448 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[56e3d76a-8a23-4450-a4a4-e8b7fd656b1a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:a3c1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677189, 'tstamp': 677189}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267658, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:59.461 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ae1f7476-ec48-499a-b4f4-98e1a3471a89]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap121329c8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:a3:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 100], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677189, 'reachable_time': 42155, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267659, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:59.482 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9cfa492b-4efb-457d-97aa-41a93156d63a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:59.520 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[54b79153-6458-424b-b399-ebac1a36403c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:59.522 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap121329c8-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:59.522 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:59.523 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap121329c8-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:59 compute-2 kernel: tap121329c8-20: entered promiscuous mode
Jan 31 08:01:59 compute-2 nova_compute[226829]: 2026-01-31 08:01:59.524 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:59 compute-2 NetworkManager[48999]: <info>  [1769846519.5256] manager: (tap121329c8-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/167)
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:59.531 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap121329c8-20, col_values=(('external_ids', {'iface-id': 'e59d8348-5c5c-4c82-ba21-91f3a512c65e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:59 compute-2 nova_compute[226829]: 2026-01-31 08:01:59.532 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:59 compute-2 nova_compute[226829]: 2026-01-31 08:01:59.533 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:59 compute-2 ovn_controller[133834]: 2026-01-31T08:01:59Z|00323|binding|INFO|Releasing lport e59d8348-5c5c-4c82-ba21-91f3a512c65e from this chassis (sb_readonly=0)
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:59.535 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/121329c8-2359-4e9d-9f2b-4932f8740470.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/121329c8-2359-4e9d-9f2b-4932f8740470.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:59.536 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[850376de-8d91-4774-a38b-4d20a05e9e07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:59.536 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-121329c8-2359-4e9d-9f2b-4932f8740470
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:01:59 compute-2 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:01:59 compute-2 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/121329c8-2359-4e9d-9f2b-4932f8740470.pid.haproxy
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 121329c8-2359-4e9d-9f2b-4932f8740470
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:01:59 compute-2 nova_compute[226829]: 2026-01-31 08:01:59.541 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:01:59.543 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'env', 'PROCESS_TAG=haproxy-121329c8-2359-4e9d-9f2b-4932f8740470', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/121329c8-2359-4e9d-9f2b-4932f8740470.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:01:59 compute-2 nova_compute[226829]: 2026-01-31 08:01:59.699 226833 INFO oslo.privsep.daemon [None req-552409fe-6c7a-4de9-b9f5-2d8f89882ef4 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Spawned new privsep daemon via rootwrap
Jan 31 08:01:59 compute-2 nova_compute[226829]: 2026-01-31 08:01:59.572 267670 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 31 08:01:59 compute-2 nova_compute[226829]: 2026-01-31 08:01:59.576 267670 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 31 08:01:59 compute-2 nova_compute[226829]: 2026-01-31 08:01:59.578 267670 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 31 08:01:59 compute-2 nova_compute[226829]: 2026-01-31 08:01:59.578 267670 INFO oslo.privsep.daemon [-] privsep daemon running as pid 267670
Jan 31 08:01:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:01:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:01:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:59.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:01:59 compute-2 nova_compute[226829]: 2026-01-31 08:01:59.887 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846519.8869512, e2e6f697-3383-4887-b25a-c89447e67fc2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:01:59 compute-2 nova_compute[226829]: 2026-01-31 08:01:59.887 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] VM Started (Lifecycle Event)
Jan 31 08:01:59 compute-2 podman[267735]: 2026-01-31 08:01:59.914389369 +0000 UTC m=+0.043466858 container create ebcd474f29b25b8905112d432fc6db5c9c5410ef27a2f189f38ad6f180076516 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Jan 31 08:01:59 compute-2 nova_compute[226829]: 2026-01-31 08:01:59.949 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:01:59 compute-2 nova_compute[226829]: 2026-01-31 08:01:59.954 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846519.8876553, e2e6f697-3383-4887-b25a-c89447e67fc2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:01:59 compute-2 nova_compute[226829]: 2026-01-31 08:01:59.954 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] VM Paused (Lifecycle Event)
Jan 31 08:01:59 compute-2 systemd[1]: Started libpod-conmon-ebcd474f29b25b8905112d432fc6db5c9c5410ef27a2f189f38ad6f180076516.scope.
Jan 31 08:01:59 compute-2 nova_compute[226829]: 2026-01-31 08:01:59.980 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:01:59 compute-2 nova_compute[226829]: 2026-01-31 08:01:59.983 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:01:59 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:01:59 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce573429ca4dd2aeb9bfe181d200187294e34dc094481d16b734c07895c6c0c1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:59 compute-2 podman[267735]: 2026-01-31 08:01:59.892495696 +0000 UTC m=+0.021573205 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:02:00 compute-2 podman[267735]: 2026-01-31 08:02:00.004206542 +0000 UTC m=+0.133284201 container init ebcd474f29b25b8905112d432fc6db5c9c5410ef27a2f189f38ad6f180076516 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:02:00 compute-2 podman[267735]: 2026-01-31 08:02:00.009525086 +0000 UTC m=+0.138602575 container start ebcd474f29b25b8905112d432fc6db5c9c5410ef27a2f189f38ad6f180076516 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:02:00 compute-2 nova_compute[226829]: 2026-01-31 08:02:00.011 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:02:00 compute-2 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[267749]: [NOTICE]   (267753) : New worker (267755) forked
Jan 31 08:02:00 compute-2 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[267749]: [NOTICE]   (267753) : Loading success.
Jan 31 08:02:00 compute-2 sudo[267764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:02:00 compute-2 sudo[267764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:00 compute-2 sudo[267764]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:00 compute-2 nova_compute[226829]: 2026-01-31 08:02:00.249 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:00 compute-2 sudo[267789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:02:00 compute-2 sudo[267789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:00 compute-2 sudo[267789]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:00 compute-2 ceph-mon[77282]: pgmap v1942: 305 pgs: 305 active+clean; 293 MiB data, 875 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 152 op/s
Jan 31 08:02:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:00.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:01.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:02 compute-2 ceph-mon[77282]: pgmap v1943: 305 pgs: 305 active+clean; 307 MiB data, 881 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.8 MiB/s wr, 160 op/s
Jan 31 08:02:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:02:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:02:02 compute-2 nova_compute[226829]: 2026-01-31 08:02:02.456 226833 DEBUG nova.compute.manager [req-2caed61e-6362-4a3f-a858-a3541583ddc0 req-9aa2a7cf-9317-4df1-9b0c-80d005bcc1be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Received event network-vif-plugged-647b42a2-81d3-49dc-9075-d31077bfffe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:02:02 compute-2 nova_compute[226829]: 2026-01-31 08:02:02.456 226833 DEBUG oslo_concurrency.lockutils [req-2caed61e-6362-4a3f-a858-a3541583ddc0 req-9aa2a7cf-9317-4df1-9b0c-80d005bcc1be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:02 compute-2 nova_compute[226829]: 2026-01-31 08:02:02.456 226833 DEBUG oslo_concurrency.lockutils [req-2caed61e-6362-4a3f-a858-a3541583ddc0 req-9aa2a7cf-9317-4df1-9b0c-80d005bcc1be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:02 compute-2 nova_compute[226829]: 2026-01-31 08:02:02.457 226833 DEBUG oslo_concurrency.lockutils [req-2caed61e-6362-4a3f-a858-a3541583ddc0 req-9aa2a7cf-9317-4df1-9b0c-80d005bcc1be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:02 compute-2 nova_compute[226829]: 2026-01-31 08:02:02.457 226833 DEBUG nova.compute.manager [req-2caed61e-6362-4a3f-a858-a3541583ddc0 req-9aa2a7cf-9317-4df1-9b0c-80d005bcc1be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Processing event network-vif-plugged-647b42a2-81d3-49dc-9075-d31077bfffe8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:02:02 compute-2 nova_compute[226829]: 2026-01-31 08:02:02.457 226833 DEBUG nova.compute.manager [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:02:02 compute-2 nova_compute[226829]: 2026-01-31 08:02:02.461 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846522.4615035, e2e6f697-3383-4887-b25a-c89447e67fc2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:02:02 compute-2 nova_compute[226829]: 2026-01-31 08:02:02.462 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] VM Resumed (Lifecycle Event)
Jan 31 08:02:02 compute-2 nova_compute[226829]: 2026-01-31 08:02:02.464 226833 DEBUG nova.virt.libvirt.driver [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:02:02 compute-2 nova_compute[226829]: 2026-01-31 08:02:02.467 226833 INFO nova.virt.libvirt.driver [-] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Instance spawned successfully.
Jan 31 08:02:02 compute-2 nova_compute[226829]: 2026-01-31 08:02:02.468 226833 DEBUG nova.virt.libvirt.driver [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:02:02 compute-2 nova_compute[226829]: 2026-01-31 08:02:02.518 226833 DEBUG nova.virt.libvirt.driver [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:02:02 compute-2 nova_compute[226829]: 2026-01-31 08:02:02.519 226833 DEBUG nova.virt.libvirt.driver [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:02:02 compute-2 nova_compute[226829]: 2026-01-31 08:02:02.519 226833 DEBUG nova.virt.libvirt.driver [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:02:02 compute-2 nova_compute[226829]: 2026-01-31 08:02:02.520 226833 DEBUG nova.virt.libvirt.driver [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:02:02 compute-2 nova_compute[226829]: 2026-01-31 08:02:02.520 226833 DEBUG nova.virt.libvirt.driver [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:02:02 compute-2 nova_compute[226829]: 2026-01-31 08:02:02.521 226833 DEBUG nova.virt.libvirt.driver [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:02:02 compute-2 nova_compute[226829]: 2026-01-31 08:02:02.530 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:02:02 compute-2 nova_compute[226829]: 2026-01-31 08:02:02.533 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:02:02 compute-2 nova_compute[226829]: 2026-01-31 08:02:02.597 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:02:02 compute-2 nova_compute[226829]: 2026-01-31 08:02:02.635 226833 INFO nova.compute.manager [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Took 11.60 seconds to spawn the instance on the hypervisor.
Jan 31 08:02:02 compute-2 nova_compute[226829]: 2026-01-31 08:02:02.636 226833 DEBUG nova.compute.manager [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:02:02 compute-2 sudo[267815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:02:02 compute-2 sudo[267815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:02 compute-2 sudo[267815]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:02 compute-2 nova_compute[226829]: 2026-01-31 08:02:02.723 226833 INFO nova.compute.manager [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Took 13.24 seconds to build instance.
Jan 31 08:02:02 compute-2 sudo[267840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:02:02 compute-2 sudo[267840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:02 compute-2 sudo[267840]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:02 compute-2 nova_compute[226829]: 2026-01-31 08:02:02.747 226833 DEBUG oslo_concurrency.lockutils [None req-7230b14b-4d74-4b47-8958-05ba34a7811a 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:02.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:03 compute-2 nova_compute[226829]: 2026-01-31 08:02:03.111 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:03.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:02:04 compute-2 ceph-mon[77282]: pgmap v1944: 305 pgs: 305 active+clean; 309 MiB data, 887 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.2 MiB/s wr, 167 op/s
Jan 31 08:02:04 compute-2 nova_compute[226829]: 2026-01-31 08:02:04.610 226833 DEBUG nova.compute.manager [req-fdad0385-9eee-4cd8-92a3-4e17f884c81a req-f4fb4dd6-3583-4da4-9cc2-f911a99effd6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Received event network-vif-plugged-647b42a2-81d3-49dc-9075-d31077bfffe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:02:04 compute-2 nova_compute[226829]: 2026-01-31 08:02:04.612 226833 DEBUG oslo_concurrency.lockutils [req-fdad0385-9eee-4cd8-92a3-4e17f884c81a req-f4fb4dd6-3583-4da4-9cc2-f911a99effd6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:04 compute-2 nova_compute[226829]: 2026-01-31 08:02:04.613 226833 DEBUG oslo_concurrency.lockutils [req-fdad0385-9eee-4cd8-92a3-4e17f884c81a req-f4fb4dd6-3583-4da4-9cc2-f911a99effd6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:04 compute-2 nova_compute[226829]: 2026-01-31 08:02:04.613 226833 DEBUG oslo_concurrency.lockutils [req-fdad0385-9eee-4cd8-92a3-4e17f884c81a req-f4fb4dd6-3583-4da4-9cc2-f911a99effd6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:04 compute-2 nova_compute[226829]: 2026-01-31 08:02:04.614 226833 DEBUG nova.compute.manager [req-fdad0385-9eee-4cd8-92a3-4e17f884c81a req-f4fb4dd6-3583-4da4-9cc2-f911a99effd6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] No waiting events found dispatching network-vif-plugged-647b42a2-81d3-49dc-9075-d31077bfffe8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:02:04 compute-2 nova_compute[226829]: 2026-01-31 08:02:04.615 226833 WARNING nova.compute.manager [req-fdad0385-9eee-4cd8-92a3-4e17f884c81a req-f4fb4dd6-3583-4da4-9cc2-f911a99effd6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Received unexpected event network-vif-plugged-647b42a2-81d3-49dc-9075-d31077bfffe8 for instance with vm_state active and task_state None.
Jan 31 08:02:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:02:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:04.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:02:05 compute-2 nova_compute[226829]: 2026-01-31 08:02:05.251 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:05.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:06 compute-2 nova_compute[226829]: 2026-01-31 08:02:06.019 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:06.019 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:02:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:06.022 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:02:06 compute-2 ceph-mon[77282]: pgmap v1945: 305 pgs: 305 active+clean; 320 MiB data, 894 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.7 MiB/s wr, 201 op/s
Jan 31 08:02:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:06.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:06.873 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:06.874 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:06.875 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:07.025 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:02:07 compute-2 podman[267867]: 2026-01-31 08:02:07.233852249 +0000 UTC m=+0.112922760 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:02:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:02:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:07.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:02:08 compute-2 nova_compute[226829]: 2026-01-31 08:02:08.113 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:08 compute-2 nova_compute[226829]: 2026-01-31 08:02:08.316 226833 DEBUG oslo_concurrency.lockutils [None req-35abfe77-cb7f-432a-8a25-ddbaf0691a6a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:08 compute-2 nova_compute[226829]: 2026-01-31 08:02:08.317 226833 DEBUG oslo_concurrency.lockutils [None req-35abfe77-cb7f-432a-8a25-ddbaf0691a6a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:08 compute-2 nova_compute[226829]: 2026-01-31 08:02:08.317 226833 DEBUG nova.compute.manager [None req-35abfe77-cb7f-432a-8a25-ddbaf0691a6a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:02:08 compute-2 nova_compute[226829]: 2026-01-31 08:02:08.321 226833 DEBUG nova.compute.manager [None req-35abfe77-cb7f-432a-8a25-ddbaf0691a6a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Jan 31 08:02:08 compute-2 nova_compute[226829]: 2026-01-31 08:02:08.322 226833 DEBUG nova.objects.instance [None req-35abfe77-cb7f-432a-8a25-ddbaf0691a6a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'flavor' on Instance uuid dfba7f29-bde8-4327-a7b3-1c4fd44e045a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:02:08 compute-2 nova_compute[226829]: 2026-01-31 08:02:08.367 226833 DEBUG nova.virt.libvirt.driver [None req-35abfe77-cb7f-432a-8a25-ddbaf0691a6a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 08:02:08 compute-2 ceph-mon[77282]: pgmap v1946: 305 pgs: 305 active+clean; 326 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.8 MiB/s wr, 215 op/s
Jan 31 08:02:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:08.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:02:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:09.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:10 compute-2 nova_compute[226829]: 2026-01-31 08:02:10.254 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:10 compute-2 ceph-mon[77282]: pgmap v1947: 305 pgs: 305 active+clean; 326 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 219 op/s
Jan 31 08:02:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:02:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:10.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:02:11 compute-2 ovn_controller[133834]: 2026-01-31T08:02:11Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:01:98:96 10.100.0.13
Jan 31 08:02:11 compute-2 ceph-mon[77282]: pgmap v1948: 305 pgs: 305 active+clean; 326 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.2 MiB/s wr, 180 op/s
Jan 31 08:02:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:11.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:02:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:12.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:02:13 compute-2 nova_compute[226829]: 2026-01-31 08:02:13.115 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:13 compute-2 nova_compute[226829]: 2026-01-31 08:02:13.701 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:02:13 compute-2 nova_compute[226829]: 2026-01-31 08:02:13.703 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:02:13 compute-2 nova_compute[226829]: 2026-01-31 08:02:13.703 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:02:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:02:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:13.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:02:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:02:14 compute-2 ceph-mon[77282]: pgmap v1949: 305 pgs: 305 active+clean; 296 MiB data, 884 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.2 MiB/s wr, 152 op/s
Jan 31 08:02:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:14.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:15 compute-2 nova_compute[226829]: 2026-01-31 08:02:15.255 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:15.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:16 compute-2 ceph-mon[77282]: pgmap v1950: 305 pgs: 305 active+clean; 274 MiB data, 869 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 720 KiB/s wr, 165 op/s
Jan 31 08:02:16 compute-2 ovn_controller[133834]: 2026-01-31T08:02:16Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:14:eb:48 10.100.0.3
Jan 31 08:02:16 compute-2 ovn_controller[133834]: 2026-01-31T08:02:16Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:14:eb:48 10.100.0.3
Jan 31 08:02:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:16.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:16 compute-2 nova_compute[226829]: 2026-01-31 08:02:16.872 226833 DEBUG oslo_concurrency.lockutils [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "refresh_cache-e2e6f697-3383-4887-b25a-c89447e67fc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:02:16 compute-2 nova_compute[226829]: 2026-01-31 08:02:16.873 226833 DEBUG oslo_concurrency.lockutils [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquired lock "refresh_cache-e2e6f697-3383-4887-b25a-c89447e67fc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:02:16 compute-2 nova_compute[226829]: 2026-01-31 08:02:16.873 226833 DEBUG nova.network.neutron [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:02:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1810113601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:02:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3785980968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:02:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:17.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:18 compute-2 nova_compute[226829]: 2026-01-31 08:02:18.119 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:18 compute-2 nova_compute[226829]: 2026-01-31 08:02:18.417 226833 DEBUG nova.virt.libvirt.driver [None req-35abfe77-cb7f-432a-8a25-ddbaf0691a6a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 31 08:02:18 compute-2 ceph-mon[77282]: pgmap v1951: 305 pgs: 305 active+clean; 254 MiB data, 863 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 839 KiB/s wr, 129 op/s
Jan 31 08:02:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3615033175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:02:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:02:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:18.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:02:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:02:19 compute-2 nova_compute[226829]: 2026-01-31 08:02:19.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:02:19 compute-2 nova_compute[226829]: 2026-01-31 08:02:19.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:02:19 compute-2 nova_compute[226829]: 2026-01-31 08:02:19.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:02:19 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1313823788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:02:19 compute-2 nova_compute[226829]: 2026-01-31 08:02:19.508 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:02:19 compute-2 nova_compute[226829]: 2026-01-31 08:02:19.509 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:02:19 compute-2 nova_compute[226829]: 2026-01-31 08:02:19.509 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:02:19 compute-2 nova_compute[226829]: 2026-01-31 08:02:19.509 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid dfba7f29-bde8-4327-a7b3-1c4fd44e045a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:02:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:19.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:20 compute-2 nova_compute[226829]: 2026-01-31 08:02:20.081 226833 DEBUG nova.network.neutron [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Updating instance_info_cache with network_info: [{"id": "647b42a2-81d3-49dc-9075-d31077bfffe8", "address": "fa:16:3e:14:eb:48", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b42a2-81", "ovs_interfaceid": "647b42a2-81d3-49dc-9075-d31077bfffe8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:02:20 compute-2 nova_compute[226829]: 2026-01-31 08:02:20.138 226833 DEBUG oslo_concurrency.lockutils [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Releasing lock "refresh_cache-e2e6f697-3383-4887-b25a-c89447e67fc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:02:20 compute-2 nova_compute[226829]: 2026-01-31 08:02:20.257 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:20 compute-2 nova_compute[226829]: 2026-01-31 08:02:20.356 226833 DEBUG nova.virt.libvirt.driver [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Jan 31 08:02:20 compute-2 nova_compute[226829]: 2026-01-31 08:02:20.356 226833 DEBUG nova.virt.libvirt.volume.remotefs [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Creating file /var/lib/nova/instances/e2e6f697-3383-4887-b25a-c89447e67fc2/a2e658b7715a46588ea9654fc19cc730.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Jan 31 08:02:20 compute-2 nova_compute[226829]: 2026-01-31 08:02:20.357 226833 DEBUG oslo_concurrency.processutils [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/e2e6f697-3383-4887-b25a-c89447e67fc2/a2e658b7715a46588ea9654fc19cc730.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:02:20 compute-2 sudo[267901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:02:20 compute-2 sudo[267901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:20 compute-2 sudo[267901]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:20 compute-2 podman[267925]: 2026-01-31 08:02:20.421005505 +0000 UTC m=+0.036202721 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 31 08:02:20 compute-2 sudo[267933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:02:20 compute-2 sudo[267933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:20 compute-2 sudo[267933]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:20 compute-2 ceph-mon[77282]: pgmap v1952: 305 pgs: 305 active+clean; 299 MiB data, 878 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.7 MiB/s wr, 153 op/s
Jan 31 08:02:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3581130653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:02:20 compute-2 nova_compute[226829]: 2026-01-31 08:02:20.779 226833 DEBUG oslo_concurrency.processutils [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/e2e6f697-3383-4887-b25a-c89447e67fc2/a2e658b7715a46588ea9654fc19cc730.tmp" returned: 1 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:02:20 compute-2 nova_compute[226829]: 2026-01-31 08:02:20.780 226833 DEBUG oslo_concurrency.processutils [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/e2e6f697-3383-4887-b25a-c89447e67fc2/a2e658b7715a46588ea9654fc19cc730.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 31 08:02:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:20 compute-2 nova_compute[226829]: 2026-01-31 08:02:20.781 226833 DEBUG nova.virt.libvirt.volume.remotefs [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Creating directory /var/lib/nova/instances/e2e6f697-3383-4887-b25a-c89447e67fc2 on remote host 192.168.122.101 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Jan 31 08:02:20 compute-2 nova_compute[226829]: 2026-01-31 08:02:20.781 226833 DEBUG oslo_concurrency.processutils [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/e2e6f697-3383-4887-b25a-c89447e67fc2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:02:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:20.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:20 compute-2 kernel: tap86cf2bf6-2f (unregistering): left promiscuous mode
Jan 31 08:02:20 compute-2 NetworkManager[48999]: <info>  [1769846540.9533] device (tap86cf2bf6-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:02:20 compute-2 ovn_controller[133834]: 2026-01-31T08:02:20Z|00324|binding|INFO|Releasing lport 86cf2bf6-2f28-4435-b081-a3945070ed2d from this chassis (sb_readonly=0)
Jan 31 08:02:20 compute-2 ovn_controller[133834]: 2026-01-31T08:02:20Z|00325|binding|INFO|Setting lport 86cf2bf6-2f28-4435-b081-a3945070ed2d down in Southbound
Jan 31 08:02:20 compute-2 ovn_controller[133834]: 2026-01-31T08:02:20Z|00326|binding|INFO|Removing iface tap86cf2bf6-2f ovn-installed in OVS
Jan 31 08:02:20 compute-2 nova_compute[226829]: 2026-01-31 08:02:20.959 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:20 compute-2 nova_compute[226829]: 2026-01-31 08:02:20.961 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:20 compute-2 nova_compute[226829]: 2026-01-31 08:02:20.966 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:20 compute-2 nova_compute[226829]: 2026-01-31 08:02:20.967 226833 DEBUG oslo_concurrency.processutils [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/e2e6f697-3383-4887-b25a-c89447e67fc2" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:02:20 compute-2 nova_compute[226829]: 2026-01-31 08:02:20.973 226833 DEBUG nova.virt.libvirt.driver [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 08:02:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:20.992 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:98:96 10.100.0.13'], port_security=['fa:16:3e:01:98:96 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'dfba7f29-bde8-4327-a7b3-1c4fd44e045a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '8', 'neutron:security_group_ids': '7be0d68e-c4ff-4356-97f2-bd58246f6e46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.222', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=86cf2bf6-2f28-4435-b081-a3945070ed2d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:02:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:20.994 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 86cf2bf6-2f28-4435-b081-a3945070ed2d in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 unbound from our chassis
Jan 31 08:02:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:20.996 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5cc2535f-0f8f-4713-a35c-9805048a29a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:02:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:20.998 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e85a7816-9476-4737-ada8-5f1480580683]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:21.000 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 namespace which is not needed anymore
Jan 31 08:02:21 compute-2 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Jan 31 08:02:21 compute-2 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d0000005b.scope: Consumed 14.591s CPU time.
Jan 31 08:02:21 compute-2 systemd-machined[195142]: Machine qemu-38-instance-0000005b terminated.
Jan 31 08:02:21 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[267443]: [NOTICE]   (267447) : haproxy version is 2.8.14-c23fe91
Jan 31 08:02:21 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[267443]: [NOTICE]   (267447) : path to executable is /usr/sbin/haproxy
Jan 31 08:02:21 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[267443]: [WARNING]  (267447) : Exiting Master process...
Jan 31 08:02:21 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[267443]: [ALERT]    (267447) : Current worker (267449) exited with code 143 (Terminated)
Jan 31 08:02:21 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[267443]: [WARNING]  (267447) : All workers exited. Exiting... (0)
Jan 31 08:02:21 compute-2 systemd[1]: libpod-97172884cc7aaa8fe04f757a8f2554ae066fae3e77ab6923edf57bdb686c66f8.scope: Deactivated successfully.
Jan 31 08:02:21 compute-2 podman[267995]: 2026-01-31 08:02:21.111301394 +0000 UTC m=+0.041840705 container died 97172884cc7aaa8fe04f757a8f2554ae066fae3e77ab6923edf57bdb686c66f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:02:21 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-97172884cc7aaa8fe04f757a8f2554ae066fae3e77ab6923edf57bdb686c66f8-userdata-shm.mount: Deactivated successfully.
Jan 31 08:02:21 compute-2 systemd[1]: var-lib-containers-storage-overlay-0a56d2d30e79cbf8824524d2b0b37f9f148d07fbd685e4375b3df8fcacb2047e-merged.mount: Deactivated successfully.
Jan 31 08:02:21 compute-2 podman[267995]: 2026-01-31 08:02:21.154800991 +0000 UTC m=+0.085340302 container cleanup 97172884cc7aaa8fe04f757a8f2554ae066fae3e77ab6923edf57bdb686c66f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 08:02:21 compute-2 systemd[1]: libpod-conmon-97172884cc7aaa8fe04f757a8f2554ae066fae3e77ab6923edf57bdb686c66f8.scope: Deactivated successfully.
Jan 31 08:02:21 compute-2 nova_compute[226829]: 2026-01-31 08:02:21.178 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:21 compute-2 nova_compute[226829]: 2026-01-31 08:02:21.180 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:21 compute-2 podman[268025]: 2026-01-31 08:02:21.301442594 +0000 UTC m=+0.132949813 container remove 97172884cc7aaa8fe04f757a8f2554ae066fae3e77ab6923edf57bdb686c66f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 08:02:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:21.305 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e3def7eb-eb68-4797-a4cd-931a21cfe8a1]: (4, ('Sat Jan 31 08:02:21 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 (97172884cc7aaa8fe04f757a8f2554ae066fae3e77ab6923edf57bdb686c66f8)\n97172884cc7aaa8fe04f757a8f2554ae066fae3e77ab6923edf57bdb686c66f8\nSat Jan 31 08:02:21 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 (97172884cc7aaa8fe04f757a8f2554ae066fae3e77ab6923edf57bdb686c66f8)\n97172884cc7aaa8fe04f757a8f2554ae066fae3e77ab6923edf57bdb686c66f8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:21.307 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b18ef70b-4907-4810-a711-68118d09beb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:21.309 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc2535f-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:02:21 compute-2 nova_compute[226829]: 2026-01-31 08:02:21.311 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:21 compute-2 kernel: tap5cc2535f-00: left promiscuous mode
Jan 31 08:02:21 compute-2 nova_compute[226829]: 2026-01-31 08:02:21.317 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:21.321 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d6664340-4b07-491a-afc0-75954f490c84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:21.335 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1e1e27f5-17a5-41c6-a1ab-bd1e77053d28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:21.336 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6ab32f06-e95c-4069-b367-01f622ee6d6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:21.348 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[da7be3e2-433b-4e81-a942-6b10dc389a90]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676885, 'reachable_time': 42199, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268056, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:21 compute-2 systemd[1]: run-netns-ovnmeta\x2d5cc2535f\x2d0f8f\x2d4713\x2da35c\x2d9805048a29a8.mount: Deactivated successfully.
Jan 31 08:02:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:21.355 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:02:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:21.355 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[0830b01b-852f-4109-b7bd-31b00500bd8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:21 compute-2 nova_compute[226829]: 2026-01-31 08:02:21.372 226833 DEBUG nova.compute.manager [req-c3fd7394-d722-457f-a68c-51eefad0b5af req-dc2ca9a3-7b33-48c6-b3fe-6ed66cded8c0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-unplugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:02:21 compute-2 nova_compute[226829]: 2026-01-31 08:02:21.373 226833 DEBUG oslo_concurrency.lockutils [req-c3fd7394-d722-457f-a68c-51eefad0b5af req-dc2ca9a3-7b33-48c6-b3fe-6ed66cded8c0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:21 compute-2 nova_compute[226829]: 2026-01-31 08:02:21.373 226833 DEBUG oslo_concurrency.lockutils [req-c3fd7394-d722-457f-a68c-51eefad0b5af req-dc2ca9a3-7b33-48c6-b3fe-6ed66cded8c0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:21 compute-2 nova_compute[226829]: 2026-01-31 08:02:21.373 226833 DEBUG oslo_concurrency.lockutils [req-c3fd7394-d722-457f-a68c-51eefad0b5af req-dc2ca9a3-7b33-48c6-b3fe-6ed66cded8c0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:21 compute-2 nova_compute[226829]: 2026-01-31 08:02:21.373 226833 DEBUG nova.compute.manager [req-c3fd7394-d722-457f-a68c-51eefad0b5af req-dc2ca9a3-7b33-48c6-b3fe-6ed66cded8c0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-unplugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:02:21 compute-2 nova_compute[226829]: 2026-01-31 08:02:21.374 226833 WARNING nova.compute.manager [req-c3fd7394-d722-457f-a68c-51eefad0b5af req-dc2ca9a3-7b33-48c6-b3fe-6ed66cded8c0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-unplugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state active and task_state powering-off.
Jan 31 08:02:21 compute-2 nova_compute[226829]: 2026-01-31 08:02:21.430 226833 INFO nova.virt.libvirt.driver [None req-35abfe77-cb7f-432a-8a25-ddbaf0691a6a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Instance shutdown successfully after 13 seconds.
Jan 31 08:02:21 compute-2 nova_compute[226829]: 2026-01-31 08:02:21.434 226833 INFO nova.virt.libvirt.driver [-] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Instance destroyed successfully.
Jan 31 08:02:21 compute-2 nova_compute[226829]: 2026-01-31 08:02:21.434 226833 DEBUG nova.objects.instance [None req-35abfe77-cb7f-432a-8a25-ddbaf0691a6a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'numa_topology' on Instance uuid dfba7f29-bde8-4327-a7b3-1c4fd44e045a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:02:21 compute-2 nova_compute[226829]: 2026-01-31 08:02:21.486 226833 DEBUG nova.compute.manager [None req-35abfe77-cb7f-432a-8a25-ddbaf0691a6a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:02:21 compute-2 nova_compute[226829]: 2026-01-31 08:02:21.585 226833 DEBUG oslo_concurrency.lockutils [None req-35abfe77-cb7f-432a-8a25-ddbaf0691a6a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:21.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:22 compute-2 ceph-mon[77282]: pgmap v1953: 305 pgs: 305 active+clean; 313 MiB data, 883 MiB used, 20 GiB / 21 GiB avail; 882 KiB/s rd, 3.1 MiB/s wr, 151 op/s
Jan 31 08:02:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:22.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.121 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.315 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Updating instance_info_cache with network_info: [{"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:02:23 compute-2 kernel: tap647b42a2-81 (unregistering): left promiscuous mode
Jan 31 08:02:23 compute-2 NetworkManager[48999]: <info>  [1769846543.3411] device (tap647b42a2-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:02:23 compute-2 ovn_controller[133834]: 2026-01-31T08:02:23Z|00327|binding|INFO|Releasing lport 647b42a2-81d3-49dc-9075-d31077bfffe8 from this chassis (sb_readonly=0)
Jan 31 08:02:23 compute-2 ovn_controller[133834]: 2026-01-31T08:02:23Z|00328|binding|INFO|Setting lport 647b42a2-81d3-49dc-9075-d31077bfffe8 down in Southbound
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.346 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:23 compute-2 ovn_controller[133834]: 2026-01-31T08:02:23Z|00329|binding|INFO|Removing iface tap647b42a2-81 ovn-installed in OVS
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.350 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.351 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.352 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.353 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:02:23 compute-2 ovn_controller[133834]: 2026-01-31T08:02:23Z|00330|binding|INFO|Releasing lport e59d8348-5c5c-4c82-ba21-91f3a512c65e from this chassis (sb_readonly=0)
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.357 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:23.359 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:eb:48 10.100.0.3'], port_security=['fa:16:3e:14:eb:48 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e2e6f697-3383-4887-b25a-c89447e67fc2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-121329c8-2359-4e9d-9f2b-4932f8740470', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be74d11d2f5a4d9aae2dbe32c31ad9c3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '755edf0d-318a-4b49-b9f5-851611889f15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cae77ce7-0851-4c6f-a030-c066a50c0f3d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=647b42a2-81d3-49dc-9075-d31077bfffe8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:02:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:23.363 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 647b42a2-81d3-49dc-9075-d31077bfffe8 in datapath 121329c8-2359-4e9d-9f2b-4932f8740470 unbound from our chassis
Jan 31 08:02:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:23.368 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 121329c8-2359-4e9d-9f2b-4932f8740470, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:02:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:23.369 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3f694910-6445-4afd-b557-2e3a8f73ab6c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:23.370 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 namespace which is not needed anymore
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.371 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:23 compute-2 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Jan 31 08:02:23 compute-2 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d0000005d.scope: Consumed 14.081s CPU time.
Jan 31 08:02:23 compute-2 systemd-machined[195142]: Machine qemu-39-instance-0000005d terminated.
Jan 31 08:02:23 compute-2 ovn_controller[133834]: 2026-01-31T08:02:23Z|00331|binding|INFO|Releasing lport e59d8348-5c5c-4c82-ba21-91f3a512c65e from this chassis (sb_readonly=0)
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.444 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.489 226833 DEBUG nova.objects.instance [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'flavor' on Instance uuid dfba7f29-bde8-4327-a7b3-1c4fd44e045a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:02:23 compute-2 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[267749]: [NOTICE]   (267753) : haproxy version is 2.8.14-c23fe91
Jan 31 08:02:23 compute-2 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[267749]: [NOTICE]   (267753) : path to executable is /usr/sbin/haproxy
Jan 31 08:02:23 compute-2 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[267749]: [WARNING]  (267753) : Exiting Master process...
Jan 31 08:02:23 compute-2 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[267749]: [ALERT]    (267753) : Current worker (267755) exited with code 143 (Terminated)
Jan 31 08:02:23 compute-2 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[267749]: [WARNING]  (267753) : All workers exited. Exiting... (0)
Jan 31 08:02:23 compute-2 systemd[1]: libpod-ebcd474f29b25b8905112d432fc6db5c9c5410ef27a2f189f38ad6f180076516.scope: Deactivated successfully.
Jan 31 08:02:23 compute-2 podman[268084]: 2026-01-31 08:02:23.501472805 +0000 UTC m=+0.046216583 container died ebcd474f29b25b8905112d432fc6db5c9c5410ef27a2f189f38ad6f180076516 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.517 226833 DEBUG oslo_concurrency.lockutils [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.518 226833 DEBUG oslo_concurrency.lockutils [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquired lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.518 226833 DEBUG nova.network.neutron [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:02:23 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ebcd474f29b25b8905112d432fc6db5c9c5410ef27a2f189f38ad6f180076516-userdata-shm.mount: Deactivated successfully.
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.518 226833 DEBUG nova.objects.instance [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'info_cache' on Instance uuid dfba7f29-bde8-4327-a7b3-1c4fd44e045a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:02:23 compute-2 systemd[1]: var-lib-containers-storage-overlay-ce573429ca4dd2aeb9bfe181d200187294e34dc094481d16b734c07895c6c0c1-merged.mount: Deactivated successfully.
Jan 31 08:02:23 compute-2 podman[268084]: 2026-01-31 08:02:23.537920663 +0000 UTC m=+0.082664441 container cleanup ebcd474f29b25b8905112d432fc6db5c9c5410ef27a2f189f38ad6f180076516 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 08:02:23 compute-2 systemd[1]: libpod-conmon-ebcd474f29b25b8905112d432fc6db5c9c5410ef27a2f189f38ad6f180076516.scope: Deactivated successfully.
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.548 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.549 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.549 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.550 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.550 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:02:23 compute-2 NetworkManager[48999]: <info>  [1769846543.5593] manager: (tap647b42a2-81): new Tun device (/org/freedesktop/NetworkManager/Devices/168)
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.582 226833 DEBUG nova.compute.manager [req-e6cda379-6c6f-4ef8-bfd5-151d099a97f7 req-73d97584-4f58-4cd2-bebb-1bc0d4c0196d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.583 226833 DEBUG oslo_concurrency.lockutils [req-e6cda379-6c6f-4ef8-bfd5-151d099a97f7 req-73d97584-4f58-4cd2-bebb-1bc0d4c0196d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.583 226833 DEBUG oslo_concurrency.lockutils [req-e6cda379-6c6f-4ef8-bfd5-151d099a97f7 req-73d97584-4f58-4cd2-bebb-1bc0d4c0196d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.583 226833 DEBUG oslo_concurrency.lockutils [req-e6cda379-6c6f-4ef8-bfd5-151d099a97f7 req-73d97584-4f58-4cd2-bebb-1bc0d4c0196d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.584 226833 DEBUG nova.compute.manager [req-e6cda379-6c6f-4ef8-bfd5-151d099a97f7 req-73d97584-4f58-4cd2-bebb-1bc0d4c0196d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.584 226833 WARNING nova.compute.manager [req-e6cda379-6c6f-4ef8-bfd5-151d099a97f7 req-73d97584-4f58-4cd2-bebb-1bc0d4c0196d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state stopped and task_state powering-on.
Jan 31 08:02:23 compute-2 podman[268116]: 2026-01-31 08:02:23.590096796 +0000 UTC m=+0.038462663 container remove ebcd474f29b25b8905112d432fc6db5c9c5410ef27a2f189f38ad6f180076516 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:02:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:23.593 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[eb1a5269-04f4-4049-b95e-5414c77d7baa]: (4, ('Sat Jan 31 08:02:23 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 (ebcd474f29b25b8905112d432fc6db5c9c5410ef27a2f189f38ad6f180076516)\nebcd474f29b25b8905112d432fc6db5c9c5410ef27a2f189f38ad6f180076516\nSat Jan 31 08:02:23 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 (ebcd474f29b25b8905112d432fc6db5c9c5410ef27a2f189f38ad6f180076516)\nebcd474f29b25b8905112d432fc6db5c9c5410ef27a2f189f38ad6f180076516\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:23.594 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4008f39c-6be2-44b1-b5ee-963c31933675]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:23.595 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap121329c8-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:02:23 compute-2 kernel: tap121329c8-20: left promiscuous mode
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.597 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.607 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:23.609 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1246641a-cd51-41a4-a6a1-70e5abab1899]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:23.620 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1d319c61-1540-4fff-b217-239a994965e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:23.621 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ad83c7ac-51a8-4a55-ae16-ea348a52a663]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:23.632 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5abdd014-6ea1-4a65-af60-21dc403816f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677183, 'reachable_time': 39065, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268150, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:23.633 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:02:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:23.634 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[5dc4cf97-1d65-4fe0-8e04-0122c3b79bee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:23 compute-2 systemd[1]: run-netns-ovnmeta\x2d121329c8\x2d2359\x2d4e9d\x2d9f2b\x2d4932f8740470.mount: Deactivated successfully.
Jan 31 08:02:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:23.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:02:23 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1830699388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.971 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.987 226833 INFO nova.virt.libvirt.driver [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Instance shutdown successfully after 3 seconds.
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.992 226833 INFO nova.virt.libvirt.driver [-] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Instance destroyed successfully.
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.993 226833 DEBUG nova.virt.libvirt.vif [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:01:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-810198552',display_name='tempest-ServerDiskConfigTestJSON-server-810198552',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-810198552',id=93,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:02:02Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-1w400nzc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_
video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempest-ServerDiskConfigTestJSON-984925022-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:02:14Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=e2e6f697-3383-4887-b25a-c89447e67fc2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "647b42a2-81d3-49dc-9075-d31077bfffe8", "address": "fa:16:3e:14:eb:48", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "vif_mac": "fa:16:3e:14:eb:48"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b42a2-81", "ovs_interfaceid": "647b42a2-81d3-49dc-9075-d31077bfffe8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.993 226833 DEBUG nova.network.os_vif_util [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "647b42a2-81d3-49dc-9075-d31077bfffe8", "address": "fa:16:3e:14:eb:48", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "vif_mac": "fa:16:3e:14:eb:48"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b42a2-81", "ovs_interfaceid": "647b42a2-81d3-49dc-9075-d31077bfffe8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.994 226833 DEBUG nova.network.os_vif_util [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:14:eb:48,bridge_name='br-int',has_traffic_filtering=True,id=647b42a2-81d3-49dc-9075-d31077bfffe8,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b42a2-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.994 226833 DEBUG os_vif [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:eb:48,bridge_name='br-int',has_traffic_filtering=True,id=647b42a2-81d3-49dc-9075-d31077bfffe8,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b42a2-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.996 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.997 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap647b42a2-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:02:23 compute-2 nova_compute[226829]: 2026-01-31 08:02:23.998 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.000 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.001 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.006 226833 INFO os_vif [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:eb:48,bridge_name='br-int',has_traffic_filtering=True,id=647b42a2-81d3-49dc-9075-d31077bfffe8,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b42a2-81')
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.010 226833 DEBUG nova.virt.libvirt.driver [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.010 226833 DEBUG nova.virt.libvirt.driver [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:02:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.226 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.227 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.230 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.230 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.241 226833 DEBUG nova.compute.manager [req-b0a447e9-32c3-46a4-a42e-61a261f4fc5c req-dc586af0-734e-4a56-ad1a-b3d5cb091367 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Received event network-vif-unplugged-647b42a2-81d3-49dc-9075-d31077bfffe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.242 226833 DEBUG oslo_concurrency.lockutils [req-b0a447e9-32c3-46a4-a42e-61a261f4fc5c req-dc586af0-734e-4a56-ad1a-b3d5cb091367 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.242 226833 DEBUG oslo_concurrency.lockutils [req-b0a447e9-32c3-46a4-a42e-61a261f4fc5c req-dc586af0-734e-4a56-ad1a-b3d5cb091367 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.242 226833 DEBUG oslo_concurrency.lockutils [req-b0a447e9-32c3-46a4-a42e-61a261f4fc5c req-dc586af0-734e-4a56-ad1a-b3d5cb091367 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.242 226833 DEBUG nova.compute.manager [req-b0a447e9-32c3-46a4-a42e-61a261f4fc5c req-dc586af0-734e-4a56-ad1a-b3d5cb091367 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] No waiting events found dispatching network-vif-unplugged-647b42a2-81d3-49dc-9075-d31077bfffe8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.243 226833 WARNING nova.compute.manager [req-b0a447e9-32c3-46a4-a42e-61a261f4fc5c req-dc586af0-734e-4a56-ad1a-b3d5cb091367 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Received unexpected event network-vif-unplugged-647b42a2-81d3-49dc-9075-d31077bfffe8 for instance with vm_state active and task_state resize_migrating.
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.355 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.356 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4444MB free_disk=20.83095932006836GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.356 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.356 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.394 226833 DEBUG neutronclient.v2_0.client [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 647b42a2-81d3-49dc-9075-d31077bfffe8 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.537 226833 INFO nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Updating resource usage from migration 5efdbc6e-b65a-4054-9384-ce6814e3054a
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.578 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance dfba7f29-bde8-4327-a7b3-1c4fd44e045a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.578 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Migration 5efdbc6e-b65a-4054-9384-ce6814e3054a is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.578 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.579 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.609 226833 DEBUG oslo_concurrency.lockutils [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.609 226833 DEBUG oslo_concurrency.lockutils [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.610 226833 DEBUG oslo_concurrency.lockutils [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:24 compute-2 nova_compute[226829]: 2026-01-31 08:02:24.685 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:02:24 compute-2 ceph-mon[77282]: pgmap v1954: 305 pgs: 305 active+clean; 328 MiB data, 893 MiB used, 20 GiB / 21 GiB avail; 710 KiB/s rd, 3.9 MiB/s wr, 150 op/s
Jan 31 08:02:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1830699388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:02:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/440359653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:02:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1328614510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:02:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:24.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:02:25 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2310884254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:02:25 compute-2 nova_compute[226829]: 2026-01-31 08:02:25.140 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:02:25 compute-2 nova_compute[226829]: 2026-01-31 08:02:25.146 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:02:25 compute-2 nova_compute[226829]: 2026-01-31 08:02:25.177 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:02:25 compute-2 nova_compute[226829]: 2026-01-31 08:02:25.222 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:02:25 compute-2 nova_compute[226829]: 2026-01-31 08:02:25.223 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:02:25 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3503687341' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:02:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:02:25 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3503687341' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:02:25 compute-2 nova_compute[226829]: 2026-01-31 08:02:25.257 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2310884254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:02:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4180076573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:02:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3503687341' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:02:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3503687341' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:02:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3928281601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:02:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:25.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:26 compute-2 nova_compute[226829]: 2026-01-31 08:02:26.224 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:02:26 compute-2 ceph-mon[77282]: pgmap v1955: 305 pgs: 305 active+clean; 328 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 633 KiB/s rd, 3.9 MiB/s wr, 140 op/s
Jan 31 08:02:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:26.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.193 226833 DEBUG nova.compute.manager [req-44b854e0-98b7-4811-9a2b-00caa540bf72 req-8993468c-3fbe-4653-93ab-35898389b64c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Received event network-changed-647b42a2-81d3-49dc-9075-d31077bfffe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.193 226833 DEBUG nova.compute.manager [req-44b854e0-98b7-4811-9a2b-00caa540bf72 req-8993468c-3fbe-4653-93ab-35898389b64c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Refreshing instance network info cache due to event network-changed-647b42a2-81d3-49dc-9075-d31077bfffe8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.193 226833 DEBUG oslo_concurrency.lockutils [req-44b854e0-98b7-4811-9a2b-00caa540bf72 req-8993468c-3fbe-4653-93ab-35898389b64c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-e2e6f697-3383-4887-b25a-c89447e67fc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.193 226833 DEBUG oslo_concurrency.lockutils [req-44b854e0-98b7-4811-9a2b-00caa540bf72 req-8993468c-3fbe-4653-93ab-35898389b64c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-e2e6f697-3383-4887-b25a-c89447e67fc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.194 226833 DEBUG nova.network.neutron [req-44b854e0-98b7-4811-9a2b-00caa540bf72 req-8993468c-3fbe-4653-93ab-35898389b64c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Refreshing network info cache for port 647b42a2-81d3-49dc-9075-d31077bfffe8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.603 226833 DEBUG nova.compute.manager [req-f6568a7c-fe3f-425b-847a-2f89831f18c3 req-7031cae9-2520-49c0-b740-0b5b11035950 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Received event network-vif-plugged-647b42a2-81d3-49dc-9075-d31077bfffe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.603 226833 DEBUG oslo_concurrency.lockutils [req-f6568a7c-fe3f-425b-847a-2f89831f18c3 req-7031cae9-2520-49c0-b740-0b5b11035950 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.603 226833 DEBUG oslo_concurrency.lockutils [req-f6568a7c-fe3f-425b-847a-2f89831f18c3 req-7031cae9-2520-49c0-b740-0b5b11035950 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.604 226833 DEBUG oslo_concurrency.lockutils [req-f6568a7c-fe3f-425b-847a-2f89831f18c3 req-7031cae9-2520-49c0-b740-0b5b11035950 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.604 226833 DEBUG nova.compute.manager [req-f6568a7c-fe3f-425b-847a-2f89831f18c3 req-7031cae9-2520-49c0-b740-0b5b11035950 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] No waiting events found dispatching network-vif-plugged-647b42a2-81d3-49dc-9075-d31077bfffe8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.604 226833 WARNING nova.compute.manager [req-f6568a7c-fe3f-425b-847a-2f89831f18c3 req-7031cae9-2520-49c0-b740-0b5b11035950 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Received unexpected event network-vif-plugged-647b42a2-81d3-49dc-9075-d31077bfffe8 for instance with vm_state active and task_state resize_migrated.
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.655 226833 DEBUG nova.network.neutron [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Updating instance_info_cache with network_info: [{"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.680 226833 DEBUG oslo_concurrency.lockutils [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Releasing lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.710 226833 INFO nova.virt.libvirt.driver [-] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Instance destroyed successfully.
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.710 226833 DEBUG nova.objects.instance [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'numa_topology' on Instance uuid dfba7f29-bde8-4327-a7b3-1c4fd44e045a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.738 226833 DEBUG nova.objects.instance [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'resources' on Instance uuid dfba7f29-bde8-4327-a7b3-1c4fd44e045a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.756 226833 DEBUG nova.virt.libvirt.vif [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:01:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1163372726',display_name='tempest-ServerActionsTestJSON-server-1163372726',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1163372726',id=91,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:01:28Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-30t53h1p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:02:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=dfba7f29-bde8-4327-a7b3-1c4fd44e045a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.757 226833 DEBUG nova.network.os_vif_util [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.757 226833 DEBUG nova.network.os_vif_util [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.758 226833 DEBUG os_vif [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.760 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.760 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86cf2bf6-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.762 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.764 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.766 226833 INFO os_vif [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f')
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.773 226833 DEBUG nova.virt.libvirt.driver [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Start _get_guest_xml network_info=[{"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.775 226833 WARNING nova.virt.libvirt.driver [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.790 226833 DEBUG nova.virt.libvirt.host [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.790 226833 DEBUG nova.virt.libvirt.host [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.793 226833 DEBUG nova.virt.libvirt.host [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.794 226833 DEBUG nova.virt.libvirt.host [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.795 226833 DEBUG nova.virt.libvirt.driver [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.796 226833 DEBUG nova.virt.hardware [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.796 226833 DEBUG nova.virt.hardware [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.796 226833 DEBUG nova.virt.hardware [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.796 226833 DEBUG nova.virt.hardware [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.797 226833 DEBUG nova.virt.hardware [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.797 226833 DEBUG nova.virt.hardware [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.797 226833 DEBUG nova.virt.hardware [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.797 226833 DEBUG nova.virt.hardware [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.797 226833 DEBUG nova.virt.hardware [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.797 226833 DEBUG nova.virt.hardware [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.797 226833 DEBUG nova.virt.hardware [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.798 226833 DEBUG nova.objects.instance [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'vcpu_model' on Instance uuid dfba7f29-bde8-4327-a7b3-1c4fd44e045a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:02:27 compute-2 nova_compute[226829]: 2026-01-31 08:02:27.826 226833 DEBUG oslo_concurrency.processutils [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:02:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:02:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:27.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:02:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:02:28 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/718281349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.243 226833 DEBUG oslo_concurrency.processutils [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.274 226833 DEBUG oslo_concurrency.processutils [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:02:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:02:28 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2772903479' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.703 226833 DEBUG oslo_concurrency.processutils [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.704 226833 DEBUG nova.virt.libvirt.vif [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:01:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1163372726',display_name='tempest-ServerActionsTestJSON-server-1163372726',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1163372726',id=91,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:01:28Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-30t53h1p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:02:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=dfba7f29-bde8-4327-a7b3-1c4fd44e045a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.705 226833 DEBUG nova.network.os_vif_util [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.706 226833 DEBUG nova.network.os_vif_util [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.707 226833 DEBUG nova.objects.instance [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'pci_devices' on Instance uuid dfba7f29-bde8-4327-a7b3-1c4fd44e045a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.727 226833 DEBUG nova.virt.libvirt.driver [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:02:28 compute-2 nova_compute[226829]:   <uuid>dfba7f29-bde8-4327-a7b3-1c4fd44e045a</uuid>
Jan 31 08:02:28 compute-2 nova_compute[226829]:   <name>instance-0000005b</name>
Jan 31 08:02:28 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:02:28 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:02:28 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <nova:name>tempest-ServerActionsTestJSON-server-1163372726</nova:name>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:02:27</nova:creationTime>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:02:28 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:02:28 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:02:28 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:02:28 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:02:28 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:02:28 compute-2 nova_compute[226829]:         <nova:user uuid="d9ed446fb2cf4fc0a4e619c6c766fddc">tempest-ServerActionsTestJSON-1391450973-project-member</nova:user>
Jan 31 08:02:28 compute-2 nova_compute[226829]:         <nova:project uuid="1fcec9ca13964c7191134db4420ab049">tempest-ServerActionsTestJSON-1391450973</nova:project>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:02:28 compute-2 nova_compute[226829]:         <nova:port uuid="86cf2bf6-2f28-4435-b081-a3945070ed2d">
Jan 31 08:02:28 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:02:28 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:02:28 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <system>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <entry name="serial">dfba7f29-bde8-4327-a7b3-1c4fd44e045a</entry>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <entry name="uuid">dfba7f29-bde8-4327-a7b3-1c4fd44e045a</entry>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     </system>
Jan 31 08:02:28 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:02:28 compute-2 nova_compute[226829]:   <os>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:   </os>
Jan 31 08:02:28 compute-2 nova_compute[226829]:   <features>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:   </features>
Jan 31 08:02:28 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:02:28 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:02:28 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/dfba7f29-bde8-4327-a7b3-1c4fd44e045a_disk">
Jan 31 08:02:28 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       </source>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:02:28 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/dfba7f29-bde8-4327-a7b3-1c4fd44e045a_disk.config">
Jan 31 08:02:28 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       </source>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:02:28 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:01:98:96"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <target dev="tap86cf2bf6-2f"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/dfba7f29-bde8-4327-a7b3-1c4fd44e045a/console.log" append="off"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <video>
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     </video>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <input type="keyboard" bus="usb"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:02:28 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:02:28 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:02:28 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:02:28 compute-2 nova_compute[226829]: </domain>
Jan 31 08:02:28 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.728 226833 DEBUG nova.virt.libvirt.driver [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.729 226833 DEBUG nova.virt.libvirt.driver [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.730 226833 DEBUG nova.virt.libvirt.vif [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:01:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1163372726',display_name='tempest-ServerActionsTestJSON-server-1163372726',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1163372726',id=91,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:01:28Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-30t53h1p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:02:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=dfba7f29-bde8-4327-a7b3-1c4fd44e045a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.731 226833 DEBUG nova.network.os_vif_util [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.731 226833 DEBUG nova.network.os_vif_util [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.732 226833 DEBUG os_vif [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.733 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.734 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.734 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.738 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.738 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap86cf2bf6-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.739 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap86cf2bf6-2f, col_values=(('external_ids', {'iface-id': '86cf2bf6-2f28-4435-b081-a3945070ed2d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:98:96', 'vm-uuid': 'dfba7f29-bde8-4327-a7b3-1c4fd44e045a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.741 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:28 compute-2 NetworkManager[48999]: <info>  [1769846548.7425] manager: (tap86cf2bf6-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/169)
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.743 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.745 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.746 226833 INFO os_vif [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f')
Jan 31 08:02:28 compute-2 ceph-mon[77282]: pgmap v1956: 305 pgs: 305 active+clean; 328 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 365 KiB/s rd, 4.0 MiB/s wr, 122 op/s
Jan 31 08:02:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/718281349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:02:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2772903479' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:02:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:02:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:28.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:02:28 compute-2 kernel: tap86cf2bf6-2f: entered promiscuous mode
Jan 31 08:02:28 compute-2 NetworkManager[48999]: <info>  [1769846548.7997] manager: (tap86cf2bf6-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/170)
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.798 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:28 compute-2 ovn_controller[133834]: 2026-01-31T08:02:28Z|00332|binding|INFO|Claiming lport 86cf2bf6-2f28-4435-b081-a3945070ed2d for this chassis.
Jan 31 08:02:28 compute-2 ovn_controller[133834]: 2026-01-31T08:02:28Z|00333|binding|INFO|86cf2bf6-2f28-4435-b081-a3945070ed2d: Claiming fa:16:3e:01:98:96 10.100.0.13
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.804 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.808 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.812 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.821 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:28 compute-2 NetworkManager[48999]: <info>  [1769846548.8226] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/171)
Jan 31 08:02:28 compute-2 NetworkManager[48999]: <info>  [1769846548.8239] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/172)
Jan 31 08:02:28 compute-2 systemd-udevd[268273]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:02:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:28.832 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:98:96 10.100.0.13'], port_security=['fa:16:3e:01:98:96 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'dfba7f29-bde8-4327-a7b3-1c4fd44e045a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '9', 'neutron:security_group_ids': '7be0d68e-c4ff-4356-97f2-bd58246f6e46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.222'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=86cf2bf6-2f28-4435-b081-a3945070ed2d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:02:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:28.833 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 86cf2bf6-2f28-4435-b081-a3945070ed2d in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 bound to our chassis
Jan 31 08:02:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:28.836 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 08:02:28 compute-2 NetworkManager[48999]: <info>  [1769846548.8457] device (tap86cf2bf6-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:02:28 compute-2 systemd-machined[195142]: New machine qemu-40-instance-0000005b.
Jan 31 08:02:28 compute-2 NetworkManager[48999]: <info>  [1769846548.8473] device (tap86cf2bf6-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:02:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:28.847 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f480a8c4-83db-4ec5-80ad-b7d70fd98d7a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:28.848 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5cc2535f-01 in ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:02:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:28.851 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5cc2535f-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:02:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:28.851 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f48d45e8-261f-474e-aca5-8282ba82e223]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:28.852 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[00b576cd-7126-42de-a672-b53e19cbbb19]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:28.863 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[02ad7bd7-fcd6-4621-86c5-578dd45b7887]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:28.886 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c5bcc5af-41c3-4359-8a23-6a74ab2418bc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:28 compute-2 systemd[1]: Started Virtual Machine qemu-40-instance-0000005b.
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.907 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:28.913 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[3533e753-fe5d-4120-be3c-4be546c5b2a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:28.917 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9266c04b-bbe9-4e4c-8f57-00f75d85f122]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:28 compute-2 NetworkManager[48999]: <info>  [1769846548.9185] manager: (tap5cc2535f-00): new Veth device (/org/freedesktop/NetworkManager/Devices/173)
Jan 31 08:02:28 compute-2 systemd-udevd[268277]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.924 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:28 compute-2 ovn_controller[133834]: 2026-01-31T08:02:28Z|00334|binding|INFO|Setting lport 86cf2bf6-2f28-4435-b081-a3945070ed2d ovn-installed in OVS
Jan 31 08:02:28 compute-2 ovn_controller[133834]: 2026-01-31T08:02:28Z|00335|binding|INFO|Setting lport 86cf2bf6-2f28-4435-b081-a3945070ed2d up in Southbound
Jan 31 08:02:28 compute-2 nova_compute[226829]: 2026-01-31 08:02:28.928 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:28.941 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[cd556e53-aeb5-47a8-a2e5-17e740c36f53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:28.944 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[4af2ae8d-824c-43af-bb40-ed303ef7284e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:28 compute-2 NetworkManager[48999]: <info>  [1769846548.9612] device (tap5cc2535f-00): carrier: link connected
Jan 31 08:02:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:28.965 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[ffb1950f-b8fc-472b-b39f-a9634f950691]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:28.978 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e009683f-52b6-4195-8347-4fde0b098c2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc2535f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:76:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680143, 'reachable_time': 34828, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268307, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:28.988 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[aa3f2f21-8fce-46c6-94a0-14c9cc2f6467]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:76f8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 680143, 'tstamp': 680143}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268308, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:29.004 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ec370c3b-8be5-4b96-9d52-30cf512fb963]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc2535f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:76:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680143, 'reachable_time': 34828, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268309, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:29.032 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1e8872e8-3416-4a24-8463-f0eafa40570f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:29.071 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0dc14505-b7fa-4903-9385-954f70b10cc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:29.073 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc2535f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:29.073 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:29.073 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cc2535f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:02:29 compute-2 nova_compute[226829]: 2026-01-31 08:02:29.075 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:29 compute-2 NetworkManager[48999]: <info>  [1769846549.0756] manager: (tap5cc2535f-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/174)
Jan 31 08:02:29 compute-2 kernel: tap5cc2535f-00: entered promiscuous mode
Jan 31 08:02:29 compute-2 nova_compute[226829]: 2026-01-31 08:02:29.076 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:29.080 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5cc2535f-00, col_values=(('external_ids', {'iface-id': 'ab077a7e-cc79-4948-8987-2cd87d88deff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:02:29 compute-2 ovn_controller[133834]: 2026-01-31T08:02:29Z|00336|binding|INFO|Releasing lport ab077a7e-cc79-4948-8987-2cd87d88deff from this chassis (sb_readonly=0)
Jan 31 08:02:29 compute-2 nova_compute[226829]: 2026-01-31 08:02:29.082 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:29 compute-2 nova_compute[226829]: 2026-01-31 08:02:29.087 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:29.088 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:29.089 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[54cb243e-5e37-4fe5-966a-36fa8544cd99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:29.090 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:02:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:29.090 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'env', 'PROCESS_TAG=haproxy-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5cc2535f-0f8f-4713-a35c-9805048a29a8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:02:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:02:29 compute-2 podman[268342]: 2026-01-31 08:02:29.412655451 +0000 UTC m=+0.040823388 container create 9361b600090fd646c52d88ec7812e9d88b5abc15d75ad5b66cb2d8baeb25898b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 08:02:29 compute-2 systemd[1]: Started libpod-conmon-9361b600090fd646c52d88ec7812e9d88b5abc15d75ad5b66cb2d8baeb25898b.scope.
Jan 31 08:02:29 compute-2 podman[268342]: 2026-01-31 08:02:29.389197366 +0000 UTC m=+0.017365343 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:02:29 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:02:29 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/137deb62d69c708da9f807da67ebeab588366ac0ddc7bb33581702c0f1e074ff/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:02:29 compute-2 podman[268342]: 2026-01-31 08:02:29.511862007 +0000 UTC m=+0.140029944 container init 9361b600090fd646c52d88ec7812e9d88b5abc15d75ad5b66cb2d8baeb25898b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:02:29 compute-2 podman[268342]: 2026-01-31 08:02:29.515855206 +0000 UTC m=+0.144023133 container start 9361b600090fd646c52d88ec7812e9d88b5abc15d75ad5b66cb2d8baeb25898b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 08:02:29 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[268365]: [NOTICE]   (268380) : New worker (268398) forked
Jan 31 08:02:29 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[268365]: [NOTICE]   (268380) : Loading success.
Jan 31 08:02:29 compute-2 nova_compute[226829]: 2026-01-31 08:02:29.649 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Removed pending event for dfba7f29-bde8-4327-a7b3-1c4fd44e045a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 08:02:29 compute-2 nova_compute[226829]: 2026-01-31 08:02:29.650 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846549.648692, dfba7f29-bde8-4327-a7b3-1c4fd44e045a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:02:29 compute-2 nova_compute[226829]: 2026-01-31 08:02:29.650 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] VM Resumed (Lifecycle Event)
Jan 31 08:02:29 compute-2 nova_compute[226829]: 2026-01-31 08:02:29.653 226833 DEBUG nova.compute.manager [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:02:29 compute-2 nova_compute[226829]: 2026-01-31 08:02:29.658 226833 INFO nova.virt.libvirt.driver [-] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Instance rebooted successfully.
Jan 31 08:02:29 compute-2 nova_compute[226829]: 2026-01-31 08:02:29.658 226833 DEBUG nova.compute.manager [None req-e6a7ce64-6d6c-4706-a61a-15ef9920be9e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:02:29 compute-2 nova_compute[226829]: 2026-01-31 08:02:29.683 226833 DEBUG nova.compute.manager [req-bde5fdbe-d6f2-4bd1-ab98-69dfe2a1d7c2 req-e0cd971b-ce73-46f2-87fe-8389833c2be6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:02:29 compute-2 nova_compute[226829]: 2026-01-31 08:02:29.683 226833 DEBUG oslo_concurrency.lockutils [req-bde5fdbe-d6f2-4bd1-ab98-69dfe2a1d7c2 req-e0cd971b-ce73-46f2-87fe-8389833c2be6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:29 compute-2 nova_compute[226829]: 2026-01-31 08:02:29.683 226833 DEBUG oslo_concurrency.lockutils [req-bde5fdbe-d6f2-4bd1-ab98-69dfe2a1d7c2 req-e0cd971b-ce73-46f2-87fe-8389833c2be6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:29 compute-2 nova_compute[226829]: 2026-01-31 08:02:29.683 226833 DEBUG oslo_concurrency.lockutils [req-bde5fdbe-d6f2-4bd1-ab98-69dfe2a1d7c2 req-e0cd971b-ce73-46f2-87fe-8389833c2be6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:29 compute-2 nova_compute[226829]: 2026-01-31 08:02:29.684 226833 DEBUG nova.compute.manager [req-bde5fdbe-d6f2-4bd1-ab98-69dfe2a1d7c2 req-e0cd971b-ce73-46f2-87fe-8389833c2be6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:02:29 compute-2 nova_compute[226829]: 2026-01-31 08:02:29.684 226833 WARNING nova.compute.manager [req-bde5fdbe-d6f2-4bd1-ab98-69dfe2a1d7c2 req-e0cd971b-ce73-46f2-87fe-8389833c2be6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state stopped and task_state powering-on.
Jan 31 08:02:29 compute-2 nova_compute[226829]: 2026-01-31 08:02:29.717 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:02:29 compute-2 nova_compute[226829]: 2026-01-31 08:02:29.720 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:02:29 compute-2 nova_compute[226829]: 2026-01-31 08:02:29.723 226833 DEBUG nova.network.neutron [req-44b854e0-98b7-4811-9a2b-00caa540bf72 req-8993468c-3fbe-4653-93ab-35898389b64c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Updated VIF entry in instance network info cache for port 647b42a2-81d3-49dc-9075-d31077bfffe8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:02:29 compute-2 nova_compute[226829]: 2026-01-31 08:02:29.724 226833 DEBUG nova.network.neutron [req-44b854e0-98b7-4811-9a2b-00caa540bf72 req-8993468c-3fbe-4653-93ab-35898389b64c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Updating instance_info_cache with network_info: [{"id": "647b42a2-81d3-49dc-9075-d31077bfffe8", "address": "fa:16:3e:14:eb:48", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b42a2-81", "ovs_interfaceid": "647b42a2-81d3-49dc-9075-d31077bfffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:02:29 compute-2 ceph-mon[77282]: pgmap v1957: 305 pgs: 305 active+clean; 328 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 280 KiB/s rd, 3.2 MiB/s wr, 99 op/s
Jan 31 08:02:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:29.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:29 compute-2 nova_compute[226829]: 2026-01-31 08:02:29.954 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846549.64978, dfba7f29-bde8-4327-a7b3-1c4fd44e045a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:02:29 compute-2 nova_compute[226829]: 2026-01-31 08:02:29.955 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] VM Started (Lifecycle Event)
Jan 31 08:02:30 compute-2 nova_compute[226829]: 2026-01-31 08:02:30.177 226833 DEBUG oslo_concurrency.lockutils [req-44b854e0-98b7-4811-9a2b-00caa540bf72 req-8993468c-3fbe-4653-93ab-35898389b64c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-e2e6f697-3383-4887-b25a-c89447e67fc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:02:30 compute-2 nova_compute[226829]: 2026-01-31 08:02:30.236 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:02:30 compute-2 nova_compute[226829]: 2026-01-31 08:02:30.242 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:02:30 compute-2 nova_compute[226829]: 2026-01-31 08:02:30.264 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:30.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e251 e251: 3 total, 3 up, 3 in
Jan 31 08:02:31 compute-2 nova_compute[226829]: 2026-01-31 08:02:31.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:02:31 compute-2 nova_compute[226829]: 2026-01-31 08:02:31.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:02:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:31.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:31 compute-2 nova_compute[226829]: 2026-01-31 08:02:31.942 226833 DEBUG nova.compute.manager [req-2dcf012d-d63c-4cc9-9269-f488c39491bd req-1bfd1646-87e6-4f52-b84c-e56d9f80e5f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:02:31 compute-2 nova_compute[226829]: 2026-01-31 08:02:31.943 226833 DEBUG oslo_concurrency.lockutils [req-2dcf012d-d63c-4cc9-9269-f488c39491bd req-1bfd1646-87e6-4f52-b84c-e56d9f80e5f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:31 compute-2 nova_compute[226829]: 2026-01-31 08:02:31.943 226833 DEBUG oslo_concurrency.lockutils [req-2dcf012d-d63c-4cc9-9269-f488c39491bd req-1bfd1646-87e6-4f52-b84c-e56d9f80e5f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:31 compute-2 nova_compute[226829]: 2026-01-31 08:02:31.943 226833 DEBUG oslo_concurrency.lockutils [req-2dcf012d-d63c-4cc9-9269-f488c39491bd req-1bfd1646-87e6-4f52-b84c-e56d9f80e5f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:31 compute-2 nova_compute[226829]: 2026-01-31 08:02:31.944 226833 DEBUG nova.compute.manager [req-2dcf012d-d63c-4cc9-9269-f488c39491bd req-1bfd1646-87e6-4f52-b84c-e56d9f80e5f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:02:31 compute-2 nova_compute[226829]: 2026-01-31 08:02:31.944 226833 WARNING nova.compute.manager [req-2dcf012d-d63c-4cc9-9269-f488c39491bd req-1bfd1646-87e6-4f52-b84c-e56d9f80e5f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state active and task_state None.
Jan 31 08:02:32 compute-2 ceph-mon[77282]: osdmap e251: 3 total, 3 up, 3 in
Jan 31 08:02:32 compute-2 ceph-mon[77282]: pgmap v1959: 305 pgs: 305 active+clean; 328 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 832 KiB/s rd, 1.0 MiB/s wr, 78 op/s
Jan 31 08:02:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:02:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:32.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:02:33 compute-2 ovn_controller[133834]: 2026-01-31T08:02:33Z|00337|binding|INFO|Releasing lport ab077a7e-cc79-4948-8987-2cd87d88deff from this chassis (sb_readonly=0)
Jan 31 08:02:33 compute-2 nova_compute[226829]: 2026-01-31 08:02:33.149 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3417555742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:02:33 compute-2 nova_compute[226829]: 2026-01-31 08:02:33.742 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:02:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:33.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:02:34 compute-2 nova_compute[226829]: 2026-01-31 08:02:34.203 226833 DEBUG nova.compute.manager [req-33133716-b477-4879-a77e-7000d1d9de36 req-7b9647c1-d198-4c61-a72f-ba64267c0d01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Received event network-vif-plugged-647b42a2-81d3-49dc-9075-d31077bfffe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:02:34 compute-2 nova_compute[226829]: 2026-01-31 08:02:34.204 226833 DEBUG oslo_concurrency.lockutils [req-33133716-b477-4879-a77e-7000d1d9de36 req-7b9647c1-d198-4c61-a72f-ba64267c0d01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:34 compute-2 nova_compute[226829]: 2026-01-31 08:02:34.205 226833 DEBUG oslo_concurrency.lockutils [req-33133716-b477-4879-a77e-7000d1d9de36 req-7b9647c1-d198-4c61-a72f-ba64267c0d01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:34 compute-2 nova_compute[226829]: 2026-01-31 08:02:34.205 226833 DEBUG oslo_concurrency.lockutils [req-33133716-b477-4879-a77e-7000d1d9de36 req-7b9647c1-d198-4c61-a72f-ba64267c0d01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:34 compute-2 nova_compute[226829]: 2026-01-31 08:02:34.205 226833 DEBUG nova.compute.manager [req-33133716-b477-4879-a77e-7000d1d9de36 req-7b9647c1-d198-4c61-a72f-ba64267c0d01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] No waiting events found dispatching network-vif-plugged-647b42a2-81d3-49dc-9075-d31077bfffe8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:02:34 compute-2 nova_compute[226829]: 2026-01-31 08:02:34.206 226833 WARNING nova.compute.manager [req-33133716-b477-4879-a77e-7000d1d9de36 req-7b9647c1-d198-4c61-a72f-ba64267c0d01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Received unexpected event network-vif-plugged-647b42a2-81d3-49dc-9075-d31077bfffe8 for instance with vm_state active and task_state resize_finish.
Jan 31 08:02:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:02:34 compute-2 ceph-mon[77282]: pgmap v1960: 305 pgs: 305 active+clean; 328 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 46 KiB/s wr, 114 op/s
Jan 31 08:02:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/520128495' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:02:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:34.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:35 compute-2 nova_compute[226829]: 2026-01-31 08:02:35.265 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:35.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:36 compute-2 nova_compute[226829]: 2026-01-31 08:02:36.373 226833 DEBUG nova.compute.manager [req-9d805d70-3b50-4f78-9891-2235ff256925 req-878791f6-e24a-4ad6-926b-181df508ae9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Received event network-vif-plugged-647b42a2-81d3-49dc-9075-d31077bfffe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:02:36 compute-2 nova_compute[226829]: 2026-01-31 08:02:36.373 226833 DEBUG oslo_concurrency.lockutils [req-9d805d70-3b50-4f78-9891-2235ff256925 req-878791f6-e24a-4ad6-926b-181df508ae9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:36 compute-2 nova_compute[226829]: 2026-01-31 08:02:36.374 226833 DEBUG oslo_concurrency.lockutils [req-9d805d70-3b50-4f78-9891-2235ff256925 req-878791f6-e24a-4ad6-926b-181df508ae9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:36 compute-2 nova_compute[226829]: 2026-01-31 08:02:36.374 226833 DEBUG oslo_concurrency.lockutils [req-9d805d70-3b50-4f78-9891-2235ff256925 req-878791f6-e24a-4ad6-926b-181df508ae9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:36 compute-2 nova_compute[226829]: 2026-01-31 08:02:36.374 226833 DEBUG nova.compute.manager [req-9d805d70-3b50-4f78-9891-2235ff256925 req-878791f6-e24a-4ad6-926b-181df508ae9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] No waiting events found dispatching network-vif-plugged-647b42a2-81d3-49dc-9075-d31077bfffe8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:02:36 compute-2 nova_compute[226829]: 2026-01-31 08:02:36.374 226833 WARNING nova.compute.manager [req-9d805d70-3b50-4f78-9891-2235ff256925 req-878791f6-e24a-4ad6-926b-181df508ae9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Received unexpected event network-vif-plugged-647b42a2-81d3-49dc-9075-d31077bfffe8 for instance with vm_state resized and task_state None.
Jan 31 08:02:36 compute-2 nova_compute[226829]: 2026-01-31 08:02:36.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:02:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:02:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:36.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:02:37 compute-2 ceph-mon[77282]: pgmap v1961: 305 pgs: 305 active+clean; 328 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 45 KiB/s wr, 198 op/s
Jan 31 08:02:37 compute-2 nova_compute[226829]: 2026-01-31 08:02:37.891 226833 INFO nova.compute.manager [None req-5327e256-cca4-46c3-a7b0-0bb7cd83af11 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Pausing
Jan 31 08:02:37 compute-2 nova_compute[226829]: 2026-01-31 08:02:37.892 226833 DEBUG nova.objects.instance [None req-5327e256-cca4-46c3-a7b0-0bb7cd83af11 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'flavor' on Instance uuid dfba7f29-bde8-4327-a7b3-1c4fd44e045a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:02:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:37.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:37 compute-2 nova_compute[226829]: 2026-01-31 08:02:37.933 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846557.9328694, dfba7f29-bde8-4327-a7b3-1c4fd44e045a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:02:37 compute-2 nova_compute[226829]: 2026-01-31 08:02:37.933 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] VM Paused (Lifecycle Event)
Jan 31 08:02:37 compute-2 nova_compute[226829]: 2026-01-31 08:02:37.935 226833 DEBUG nova.compute.manager [None req-5327e256-cca4-46c3-a7b0-0bb7cd83af11 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:02:37 compute-2 nova_compute[226829]: 2026-01-31 08:02:37.960 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:02:37 compute-2 nova_compute[226829]: 2026-01-31 08:02:37.962 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:02:37 compute-2 nova_compute[226829]: 2026-01-31 08:02:37.990 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] During sync_power_state the instance has a pending task (pausing). Skip.
Jan 31 08:02:38 compute-2 ceph-mon[77282]: pgmap v1962: 305 pgs: 305 active+clean; 328 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 18 KiB/s wr, 230 op/s
Jan 31 08:02:38 compute-2 podman[268421]: 2026-01-31 08:02:38.192764264 +0000 UTC m=+0.074161800 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:02:38 compute-2 nova_compute[226829]: 2026-01-31 08:02:38.337 226833 DEBUG oslo_concurrency.lockutils [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "e2e6f697-3383-4887-b25a-c89447e67fc2" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:38 compute-2 nova_compute[226829]: 2026-01-31 08:02:38.338 226833 DEBUG oslo_concurrency.lockutils [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:38 compute-2 nova_compute[226829]: 2026-01-31 08:02:38.338 226833 DEBUG nova.compute.manager [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Going to confirm migration 12 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Jan 31 08:02:38 compute-2 nova_compute[226829]: 2026-01-31 08:02:38.575 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846543.5703576, e2e6f697-3383-4887-b25a-c89447e67fc2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:02:38 compute-2 nova_compute[226829]: 2026-01-31 08:02:38.575 226833 INFO nova.compute.manager [-] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] VM Stopped (Lifecycle Event)
Jan 31 08:02:38 compute-2 nova_compute[226829]: 2026-01-31 08:02:38.744 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:38.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:02:39 compute-2 nova_compute[226829]: 2026-01-31 08:02:39.570 226833 DEBUG nova.compute.manager [None req-ffddbdff-e6b6-43f9-b5f0-cc8c1ea0c952 - - - - - -] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:02:39 compute-2 nova_compute[226829]: 2026-01-31 08:02:39.576 226833 DEBUG nova.compute.manager [None req-ffddbdff-e6b6-43f9-b5f0-cc8c1ea0c952 - - - - - -] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:02:39 compute-2 nova_compute[226829]: 2026-01-31 08:02:39.614 226833 INFO nova.compute.manager [None req-ffddbdff-e6b6-43f9-b5f0-cc8c1ea0c952 - - - - - -] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-2.ctlplane.example.com
Jan 31 08:02:39 compute-2 nova_compute[226829]: 2026-01-31 08:02:39.615 226833 INFO nova.compute.manager [None req-f5ae9df3-d9b9-499a-ba17-fae729113e00 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Unpausing
Jan 31 08:02:39 compute-2 nova_compute[226829]: 2026-01-31 08:02:39.616 226833 DEBUG nova.objects.instance [None req-f5ae9df3-d9b9-499a-ba17-fae729113e00 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'flavor' on Instance uuid dfba7f29-bde8-4327-a7b3-1c4fd44e045a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:02:39 compute-2 nova_compute[226829]: 2026-01-31 08:02:39.661 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846559.660597, dfba7f29-bde8-4327-a7b3-1c4fd44e045a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:02:39 compute-2 nova_compute[226829]: 2026-01-31 08:02:39.662 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] VM Resumed (Lifecycle Event)
Jan 31 08:02:39 compute-2 virtqemud[226546]: argument unsupported: QEMU guest agent is not configured
Jan 31 08:02:39 compute-2 nova_compute[226829]: 2026-01-31 08:02:39.667 226833 DEBUG nova.virt.libvirt.guest [None req-f5ae9df3-d9b9-499a-ba17-fae729113e00 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 31 08:02:39 compute-2 nova_compute[226829]: 2026-01-31 08:02:39.668 226833 DEBUG nova.compute.manager [None req-f5ae9df3-d9b9-499a-ba17-fae729113e00 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:02:39 compute-2 nova_compute[226829]: 2026-01-31 08:02:39.717 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:02:39 compute-2 nova_compute[226829]: 2026-01-31 08:02:39.720 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:02:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:39.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:40 compute-2 nova_compute[226829]: 2026-01-31 08:02:40.265 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:40 compute-2 nova_compute[226829]: 2026-01-31 08:02:40.285 226833 DEBUG neutronclient.v2_0.client [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 647b42a2-81d3-49dc-9075-d31077bfffe8 for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 31 08:02:40 compute-2 nova_compute[226829]: 2026-01-31 08:02:40.286 226833 DEBUG oslo_concurrency.lockutils [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "refresh_cache-e2e6f697-3383-4887-b25a-c89447e67fc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:02:40 compute-2 nova_compute[226829]: 2026-01-31 08:02:40.286 226833 DEBUG oslo_concurrency.lockutils [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquired lock "refresh_cache-e2e6f697-3383-4887-b25a-c89447e67fc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:02:40 compute-2 nova_compute[226829]: 2026-01-31 08:02:40.286 226833 DEBUG nova.network.neutron [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:02:40 compute-2 nova_compute[226829]: 2026-01-31 08:02:40.287 226833 DEBUG nova.objects.instance [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'info_cache' on Instance uuid e2e6f697-3383-4887-b25a-c89447e67fc2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:02:40 compute-2 sudo[268449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:02:40 compute-2 sudo[268449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:40 compute-2 sudo[268449]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:40 compute-2 sudo[268474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:02:40 compute-2 sudo[268474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:40 compute-2 sudo[268474]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:40 compute-2 ceph-mon[77282]: pgmap v1963: 305 pgs: 305 active+clean; 328 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 3.7 KiB/s wr, 266 op/s
Jan 31 08:02:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:40.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:41 compute-2 ovn_controller[133834]: 2026-01-31T08:02:41Z|00338|binding|INFO|Releasing lport ab077a7e-cc79-4948-8987-2cd87d88deff from this chassis (sb_readonly=0)
Jan 31 08:02:41 compute-2 nova_compute[226829]: 2026-01-31 08:02:41.550 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:02:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:41.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:02:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:02:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:42.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:02:42 compute-2 ceph-mon[77282]: pgmap v1964: 305 pgs: 305 active+clean; 328 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 3.4 KiB/s wr, 236 op/s
Jan 31 08:02:43 compute-2 nova_compute[226829]: 2026-01-31 08:02:43.350 226833 DEBUG nova.network.neutron [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Updating instance_info_cache with network_info: [{"id": "647b42a2-81d3-49dc-9075-d31077bfffe8", "address": "fa:16:3e:14:eb:48", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b42a2-81", "ovs_interfaceid": "647b42a2-81d3-49dc-9075-d31077bfffe8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:02:43 compute-2 nova_compute[226829]: 2026-01-31 08:02:43.378 226833 DEBUG oslo_concurrency.lockutils [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Releasing lock "refresh_cache-e2e6f697-3383-4887-b25a-c89447e67fc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:02:43 compute-2 nova_compute[226829]: 2026-01-31 08:02:43.378 226833 DEBUG nova.objects.instance [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'migration_context' on Instance uuid e2e6f697-3383-4887-b25a-c89447e67fc2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:02:43 compute-2 nova_compute[226829]: 2026-01-31 08:02:43.515 226833 DEBUG nova.storage.rbd_utils [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] removing snapshot(nova-resize) on rbd image(e2e6f697-3383-4887-b25a-c89447e67fc2_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 08:02:43 compute-2 nova_compute[226829]: 2026-01-31 08:02:43.747 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:43 compute-2 ceph-mon[77282]: pgmap v1965: 305 pgs: 305 active+clean; 332 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 341 KiB/s wr, 203 op/s
Jan 31 08:02:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:43.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:02:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:44.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e252 e252: 3 total, 3 up, 3 in
Jan 31 08:02:45 compute-2 nova_compute[226829]: 2026-01-31 08:02:45.266 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:45 compute-2 ovn_controller[133834]: 2026-01-31T08:02:45Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:01:98:96 10.100.0.13
Jan 31 08:02:45 compute-2 nova_compute[226829]: 2026-01-31 08:02:45.620 226833 DEBUG nova.virt.libvirt.vif [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:01:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-810198552',display_name='tempest-ServerDiskConfigTestJSON-server-810198552',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-810198552',id=93,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:02:34Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-1w400nzc',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempest-ServerDiskConfigTestJSON-984925022-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:02:34Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=e2e6f697-3383-4887-b25a-c89447e67fc2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "647b42a2-81d3-49dc-9075-d31077bfffe8", "address": "fa:16:3e:14:eb:48", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b42a2-81", "ovs_interfaceid": "647b42a2-81d3-49dc-9075-d31077bfffe8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:02:45 compute-2 nova_compute[226829]: 2026-01-31 08:02:45.620 226833 DEBUG nova.network.os_vif_util [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "647b42a2-81d3-49dc-9075-d31077bfffe8", "address": "fa:16:3e:14:eb:48", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b42a2-81", "ovs_interfaceid": "647b42a2-81d3-49dc-9075-d31077bfffe8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:02:45 compute-2 nova_compute[226829]: 2026-01-31 08:02:45.622 226833 DEBUG nova.network.os_vif_util [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:14:eb:48,bridge_name='br-int',has_traffic_filtering=True,id=647b42a2-81d3-49dc-9075-d31077bfffe8,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b42a2-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:02:45 compute-2 nova_compute[226829]: 2026-01-31 08:02:45.623 226833 DEBUG os_vif [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:eb:48,bridge_name='br-int',has_traffic_filtering=True,id=647b42a2-81d3-49dc-9075-d31077bfffe8,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b42a2-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:02:45 compute-2 nova_compute[226829]: 2026-01-31 08:02:45.627 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:45 compute-2 nova_compute[226829]: 2026-01-31 08:02:45.628 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap647b42a2-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:02:45 compute-2 nova_compute[226829]: 2026-01-31 08:02:45.629 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:02:45 compute-2 nova_compute[226829]: 2026-01-31 08:02:45.632 226833 INFO os_vif [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:eb:48,bridge_name='br-int',has_traffic_filtering=True,id=647b42a2-81d3-49dc-9075-d31077bfffe8,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b42a2-81')
Jan 31 08:02:45 compute-2 nova_compute[226829]: 2026-01-31 08:02:45.632 226833 DEBUG oslo_concurrency.lockutils [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:45 compute-2 nova_compute[226829]: 2026-01-31 08:02:45.633 226833 DEBUG oslo_concurrency.lockutils [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:45 compute-2 nova_compute[226829]: 2026-01-31 08:02:45.700 226833 DEBUG nova.scheduler.client.report [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Refreshing inventories for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:02:45 compute-2 nova_compute[226829]: 2026-01-31 08:02:45.742 226833 DEBUG nova.scheduler.client.report [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Updating ProviderTree inventory for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:02:45 compute-2 nova_compute[226829]: 2026-01-31 08:02:45.743 226833 DEBUG nova.compute.provider_tree [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Updating inventory in ProviderTree for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:02:45 compute-2 nova_compute[226829]: 2026-01-31 08:02:45.777 226833 DEBUG nova.scheduler.client.report [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Refreshing aggregate associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:02:45 compute-2 nova_compute[226829]: 2026-01-31 08:02:45.837 226833 DEBUG nova.scheduler.client.report [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Refreshing trait associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, traits: COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:02:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:45.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:45 compute-2 nova_compute[226829]: 2026-01-31 08:02:45.956 226833 DEBUG oslo_concurrency.processutils [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:02:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3863939065' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:02:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3863939065' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:02:46 compute-2 ceph-mon[77282]: osdmap e252: 3 total, 3 up, 3 in
Jan 31 08:02:46 compute-2 ceph-mon[77282]: pgmap v1967: 305 pgs: 305 active+clean; 343 MiB data, 914 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.7 MiB/s wr, 142 op/s
Jan 31 08:02:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:02:46 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3153227033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:02:46 compute-2 nova_compute[226829]: 2026-01-31 08:02:46.645 226833 DEBUG oslo_concurrency.processutils [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.688s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:02:46 compute-2 nova_compute[226829]: 2026-01-31 08:02:46.650 226833 DEBUG nova.compute.provider_tree [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:02:46 compute-2 nova_compute[226829]: 2026-01-31 08:02:46.672 226833 DEBUG nova.scheduler.client.report [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:02:46 compute-2 nova_compute[226829]: 2026-01-31 08:02:46.732 226833 DEBUG oslo_concurrency.lockutils [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 1.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:02:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:46.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:02:46 compute-2 nova_compute[226829]: 2026-01-31 08:02:46.867 226833 INFO nova.scheduler.client.report [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Deleted allocation for migration 5efdbc6e-b65a-4054-9384-ce6814e3054a
Jan 31 08:02:46 compute-2 nova_compute[226829]: 2026-01-31 08:02:46.950 226833 DEBUG oslo_concurrency.lockutils [None req-4e90b449-fd82-4860-b56c-515ae6bbd19f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 8.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3153227033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:02:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:47.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:48 compute-2 ceph-mon[77282]: pgmap v1968: 305 pgs: 305 active+clean; 349 MiB data, 919 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 120 op/s
Jan 31 08:02:48 compute-2 nova_compute[226829]: 2026-01-31 08:02:48.750 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:02:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:48.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:02:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:02:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e253 e253: 3 total, 3 up, 3 in
Jan 31 08:02:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:02:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:49.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:02:50 compute-2 nova_compute[226829]: 2026-01-31 08:02:50.268 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:50 compute-2 ceph-mon[77282]: pgmap v1969: 305 pgs: 305 active+clean; 360 MiB data, 943 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.6 MiB/s wr, 167 op/s
Jan 31 08:02:50 compute-2 ceph-mon[77282]: osdmap e253: 3 total, 3 up, 3 in
Jan 31 08:02:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:50.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:51 compute-2 podman[268562]: 2026-01-31 08:02:51.223762418 +0000 UTC m=+0.098286133 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 31 08:02:51 compute-2 nova_compute[226829]: 2026-01-31 08:02:51.774 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:51.774 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:02:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:51.776 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:02:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:51.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:52 compute-2 ceph-mon[77282]: pgmap v1971: 305 pgs: 305 active+clean; 361 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.8 MiB/s wr, 234 op/s
Jan 31 08:02:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:02:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:52.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:02:53 compute-2 nova_compute[226829]: 2026-01-31 08:02:53.752 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:02:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:53.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:02:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:02:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:54.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:54 compute-2 ceph-mon[77282]: pgmap v1972: 305 pgs: 305 active+clean; 361 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.1 MiB/s wr, 181 op/s
Jan 31 08:02:55 compute-2 nova_compute[226829]: 2026-01-31 08:02:55.270 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:55 compute-2 nova_compute[226829]: 2026-01-31 08:02:55.483 226833 DEBUG oslo_concurrency.lockutils [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:55 compute-2 nova_compute[226829]: 2026-01-31 08:02:55.484 226833 DEBUG oslo_concurrency.lockutils [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:55 compute-2 nova_compute[226829]: 2026-01-31 08:02:55.484 226833 INFO nova.compute.manager [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Rebooting instance
Jan 31 08:02:55 compute-2 nova_compute[226829]: 2026-01-31 08:02:55.513 226833 DEBUG oslo_concurrency.lockutils [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:02:55 compute-2 nova_compute[226829]: 2026-01-31 08:02:55.513 226833 DEBUG oslo_concurrency.lockutils [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquired lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:02:55 compute-2 nova_compute[226829]: 2026-01-31 08:02:55.513 226833 DEBUG nova.network.neutron [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:02:55 compute-2 ceph-mon[77282]: pgmap v1973: 305 pgs: 305 active+clean; 361 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 947 KiB/s wr, 152 op/s
Jan 31 08:02:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:55.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:56.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:57 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:57.780 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:02:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:02:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:57.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:02:58 compute-2 nova_compute[226829]: 2026-01-31 08:02:58.050 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:58 compute-2 ceph-mon[77282]: pgmap v1974: 305 pgs: 305 active+clean; 345 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 1019 KiB/s rd, 425 KiB/s wr, 130 op/s
Jan 31 08:02:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1288948321' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:02:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/729951909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:02:58 compute-2 nova_compute[226829]: 2026-01-31 08:02:58.548 226833 DEBUG nova.network.neutron [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Updating instance_info_cache with network_info: [{"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:02:58 compute-2 nova_compute[226829]: 2026-01-31 08:02:58.585 226833 DEBUG oslo_concurrency.lockutils [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Releasing lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:02:58 compute-2 nova_compute[226829]: 2026-01-31 08:02:58.588 226833 DEBUG nova.compute.manager [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:02:58 compute-2 nova_compute[226829]: 2026-01-31 08:02:58.754 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:58 compute-2 kernel: tap86cf2bf6-2f (unregistering): left promiscuous mode
Jan 31 08:02:58 compute-2 NetworkManager[48999]: <info>  [1769846578.7979] device (tap86cf2bf6-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:02:58 compute-2 ovn_controller[133834]: 2026-01-31T08:02:58Z|00339|binding|INFO|Releasing lport 86cf2bf6-2f28-4435-b081-a3945070ed2d from this chassis (sb_readonly=0)
Jan 31 08:02:58 compute-2 ovn_controller[133834]: 2026-01-31T08:02:58Z|00340|binding|INFO|Setting lport 86cf2bf6-2f28-4435-b081-a3945070ed2d down in Southbound
Jan 31 08:02:58 compute-2 ovn_controller[133834]: 2026-01-31T08:02:58Z|00341|binding|INFO|Removing iface tap86cf2bf6-2f ovn-installed in OVS
Jan 31 08:02:58 compute-2 nova_compute[226829]: 2026-01-31 08:02:58.804 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:58 compute-2 nova_compute[226829]: 2026-01-31 08:02:58.806 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:58 compute-2 nova_compute[226829]: 2026-01-31 08:02:58.813 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:58 compute-2 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Jan 31 08:02:58 compute-2 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d0000005b.scope: Consumed 14.132s CPU time.
Jan 31 08:02:58 compute-2 systemd-machined[195142]: Machine qemu-40-instance-0000005b terminated.
Jan 31 08:02:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:58.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:58.887 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:98:96 10.100.0.13'], port_security=['fa:16:3e:01:98:96 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'dfba7f29-bde8-4327-a7b3-1c4fd44e045a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '10', 'neutron:security_group_ids': '7be0d68e-c4ff-4356-97f2-bd58246f6e46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.222', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=86cf2bf6-2f28-4435-b081-a3945070ed2d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:02:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:58.888 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 86cf2bf6-2f28-4435-b081-a3945070ed2d in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 unbound from our chassis
Jan 31 08:02:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:58.890 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5cc2535f-0f8f-4713-a35c-9805048a29a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:02:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:58.892 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[fb111fc2-7cd5-4d6f-850e-11211cdf34df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:58.893 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 namespace which is not needed anymore
Jan 31 08:02:58 compute-2 nova_compute[226829]: 2026-01-31 08:02:58.981 226833 INFO nova.virt.libvirt.driver [-] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Instance destroyed successfully.
Jan 31 08:02:58 compute-2 nova_compute[226829]: 2026-01-31 08:02:58.982 226833 DEBUG nova.objects.instance [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'resources' on Instance uuid dfba7f29-bde8-4327-a7b3-1c4fd44e045a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:02:59 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[268365]: [NOTICE]   (268380) : haproxy version is 2.8.14-c23fe91
Jan 31 08:02:59 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[268365]: [NOTICE]   (268380) : path to executable is /usr/sbin/haproxy
Jan 31 08:02:59 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[268365]: [WARNING]  (268380) : Exiting Master process...
Jan 31 08:02:59 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[268365]: [WARNING]  (268380) : Exiting Master process...
Jan 31 08:02:59 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[268365]: [ALERT]    (268380) : Current worker (268398) exited with code 143 (Terminated)
Jan 31 08:02:59 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[268365]: [WARNING]  (268380) : All workers exited. Exiting... (0)
Jan 31 08:02:59 compute-2 systemd[1]: libpod-9361b600090fd646c52d88ec7812e9d88b5abc15d75ad5b66cb2d8baeb25898b.scope: Deactivated successfully.
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.018 226833 DEBUG nova.virt.libvirt.vif [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:01:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1163372726',display_name='tempest-ServerActionsTestJSON-server-1163372726',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1163372726',id=91,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:01:28Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-30t53h1p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:02:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=dfba7f29-bde8-4327-a7b3-1c4fd44e045a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.018 226833 DEBUG nova.network.os_vif_util [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.019 226833 DEBUG nova.network.os_vif_util [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.020 226833 DEBUG os_vif [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.022 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.022 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86cf2bf6-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.027 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:02:59 compute-2 podman[268613]: 2026-01-31 08:02:59.028549325 +0000 UTC m=+0.057867859 container died 9361b600090fd646c52d88ec7812e9d88b5abc15d75ad5b66cb2d8baeb25898b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.033 226833 INFO os_vif [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f')
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.045 226833 DEBUG nova.virt.libvirt.driver [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Start _get_guest_xml network_info=[{"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.052 226833 WARNING nova.virt.libvirt.driver [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:02:59 compute-2 systemd[1]: var-lib-containers-storage-overlay-137deb62d69c708da9f807da67ebeab588366ac0ddc7bb33581702c0f1e074ff-merged.mount: Deactivated successfully.
Jan 31 08:02:59 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9361b600090fd646c52d88ec7812e9d88b5abc15d75ad5b66cb2d8baeb25898b-userdata-shm.mount: Deactivated successfully.
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.063 226833 DEBUG nova.virt.libvirt.host [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:02:59 compute-2 podman[268613]: 2026-01-31 08:02:59.064310334 +0000 UTC m=+0.093628858 container cleanup 9361b600090fd646c52d88ec7812e9d88b5abc15d75ad5b66cb2d8baeb25898b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.064 226833 DEBUG nova.virt.libvirt.host [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.070 226833 DEBUG nova.virt.libvirt.host [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:02:59 compute-2 systemd[1]: libpod-conmon-9361b600090fd646c52d88ec7812e9d88b5abc15d75ad5b66cb2d8baeb25898b.scope: Deactivated successfully.
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.071 226833 DEBUG nova.virt.libvirt.host [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.072 226833 DEBUG nova.virt.libvirt.driver [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.072 226833 DEBUG nova.virt.hardware [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.073 226833 DEBUG nova.virt.hardware [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.073 226833 DEBUG nova.virt.hardware [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.073 226833 DEBUG nova.virt.hardware [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.073 226833 DEBUG nova.virt.hardware [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.073 226833 DEBUG nova.virt.hardware [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.073 226833 DEBUG nova.virt.hardware [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.074 226833 DEBUG nova.virt.hardware [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.074 226833 DEBUG nova.virt.hardware [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.074 226833 DEBUG nova.virt.hardware [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.074 226833 DEBUG nova.virt.hardware [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.074 226833 DEBUG nova.objects.instance [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'vcpu_model' on Instance uuid dfba7f29-bde8-4327-a7b3-1c4fd44e045a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.094 226833 DEBUG oslo_concurrency.processutils [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:02:59 compute-2 podman[268653]: 2026-01-31 08:02:59.12251913 +0000 UTC m=+0.043810327 container remove 9361b600090fd646c52d88ec7812e9d88b5abc15d75ad5b66cb2d8baeb25898b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:02:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:59.126 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5318a27e-bd38-4adf-858b-3cb9d72dcca2]: (4, ('Sat Jan 31 08:02:58 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 (9361b600090fd646c52d88ec7812e9d88b5abc15d75ad5b66cb2d8baeb25898b)\n9361b600090fd646c52d88ec7812e9d88b5abc15d75ad5b66cb2d8baeb25898b\nSat Jan 31 08:02:59 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 (9361b600090fd646c52d88ec7812e9d88b5abc15d75ad5b66cb2d8baeb25898b)\n9361b600090fd646c52d88ec7812e9d88b5abc15d75ad5b66cb2d8baeb25898b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:59.128 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c7a91e71-749a-4702-950c-7fc0a04a361c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:59.129 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc2535f-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:02:59 compute-2 kernel: tap5cc2535f-00: left promiscuous mode
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.132 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.137 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.138 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:59.142 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b779620d-8deb-4e76-9578-a1906b6406e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:59.158 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[09891506-e7fc-41ee-a98c-9d72482c8eed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:59.160 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[661d94b0-2dfd-4d7a-8eba-bf4e472a8190]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:59.171 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b378e36e-b491-490f-bf5d-f54662a83b6e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680138, 'reachable_time': 34580, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268669, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:59 compute-2 systemd[1]: run-netns-ovnmeta\x2d5cc2535f\x2d0f8f\x2d4713\x2da35c\x2d9805048a29a8.mount: Deactivated successfully.
Jan 31 08:02:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:59.178 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:02:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:02:59.179 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[58a25e1f-59f9-4873-bb44-fd4fdc827cce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:02:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:02:59 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2639171500' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.522 226833 DEBUG oslo_concurrency.processutils [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.553 226833 DEBUG oslo_concurrency.processutils [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:02:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:02:59 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2933201801' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:02:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:02:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:59.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.961 226833 DEBUG oslo_concurrency.processutils [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.962 226833 DEBUG nova.virt.libvirt.vif [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:01:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1163372726',display_name='tempest-ServerActionsTestJSON-server-1163372726',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1163372726',id=91,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:01:28Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-30t53h1p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:02:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=dfba7f29-bde8-4327-a7b3-1c4fd44e045a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.962 226833 DEBUG nova.network.os_vif_util [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.963 226833 DEBUG nova.network.os_vif_util [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.964 226833 DEBUG nova.objects.instance [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'pci_devices' on Instance uuid dfba7f29-bde8-4327-a7b3-1c4fd44e045a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.988 226833 DEBUG nova.virt.libvirt.driver [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:02:59 compute-2 nova_compute[226829]:   <uuid>dfba7f29-bde8-4327-a7b3-1c4fd44e045a</uuid>
Jan 31 08:02:59 compute-2 nova_compute[226829]:   <name>instance-0000005b</name>
Jan 31 08:02:59 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:02:59 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:02:59 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <nova:name>tempest-ServerActionsTestJSON-server-1163372726</nova:name>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:02:59</nova:creationTime>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:02:59 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:02:59 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:02:59 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:02:59 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:02:59 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:02:59 compute-2 nova_compute[226829]:         <nova:user uuid="d9ed446fb2cf4fc0a4e619c6c766fddc">tempest-ServerActionsTestJSON-1391450973-project-member</nova:user>
Jan 31 08:02:59 compute-2 nova_compute[226829]:         <nova:project uuid="1fcec9ca13964c7191134db4420ab049">tempest-ServerActionsTestJSON-1391450973</nova:project>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:02:59 compute-2 nova_compute[226829]:         <nova:port uuid="86cf2bf6-2f28-4435-b081-a3945070ed2d">
Jan 31 08:02:59 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:02:59 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:02:59 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <system>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <entry name="serial">dfba7f29-bde8-4327-a7b3-1c4fd44e045a</entry>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <entry name="uuid">dfba7f29-bde8-4327-a7b3-1c4fd44e045a</entry>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     </system>
Jan 31 08:02:59 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:02:59 compute-2 nova_compute[226829]:   <os>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:   </os>
Jan 31 08:02:59 compute-2 nova_compute[226829]:   <features>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:   </features>
Jan 31 08:02:59 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:02:59 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:02:59 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/dfba7f29-bde8-4327-a7b3-1c4fd44e045a_disk">
Jan 31 08:02:59 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       </source>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:02:59 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/dfba7f29-bde8-4327-a7b3-1c4fd44e045a_disk.config">
Jan 31 08:02:59 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       </source>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:02:59 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:01:98:96"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <target dev="tap86cf2bf6-2f"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/dfba7f29-bde8-4327-a7b3-1c4fd44e045a/console.log" append="off"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <video>
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     </video>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <input type="keyboard" bus="usb"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:02:59 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:02:59 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:02:59 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:02:59 compute-2 nova_compute[226829]: </domain>
Jan 31 08:02:59 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.990 226833 DEBUG nova.virt.libvirt.driver [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.990 226833 DEBUG nova.virt.libvirt.driver [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.992 226833 DEBUG nova.virt.libvirt.vif [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:01:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1163372726',display_name='tempest-ServerActionsTestJSON-server-1163372726',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1163372726',id=91,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:01:28Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-30t53h1p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:02:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=dfba7f29-bde8-4327-a7b3-1c4fd44e045a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.992 226833 DEBUG nova.network.os_vif_util [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.993 226833 DEBUG nova.network.os_vif_util [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.994 226833 DEBUG os_vif [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.995 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.995 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:02:59 compute-2 nova_compute[226829]: 2026-01-31 08:02:59.996 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.001 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.001 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap86cf2bf6-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.002 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap86cf2bf6-2f, col_values=(('external_ids', {'iface-id': '86cf2bf6-2f28-4435-b081-a3945070ed2d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:98:96', 'vm-uuid': 'dfba7f29-bde8-4327-a7b3-1c4fd44e045a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.004 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:00 compute-2 NetworkManager[48999]: <info>  [1769846580.0057] manager: (tap86cf2bf6-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/175)
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.009 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.011 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.012 226833 INFO os_vif [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f')
Jan 31 08:03:00 compute-2 kernel: tap86cf2bf6-2f: entered promiscuous mode
Jan 31 08:03:00 compute-2 NetworkManager[48999]: <info>  [1769846580.0868] manager: (tap86cf2bf6-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/176)
Jan 31 08:03:00 compute-2 ovn_controller[133834]: 2026-01-31T08:03:00Z|00342|binding|INFO|Claiming lport 86cf2bf6-2f28-4435-b081-a3945070ed2d for this chassis.
Jan 31 08:03:00 compute-2 ovn_controller[133834]: 2026-01-31T08:03:00Z|00343|binding|INFO|86cf2bf6-2f28-4435-b081-a3945070ed2d: Claiming fa:16:3e:01:98:96 10.100.0.13
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.088 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:00 compute-2 systemd-udevd[268591]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:03:00 compute-2 ovn_controller[133834]: 2026-01-31T08:03:00Z|00344|binding|INFO|Setting lport 86cf2bf6-2f28-4435-b081-a3945070ed2d ovn-installed in OVS
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.094 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.095 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.097 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.100 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:00.104 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:98:96 10.100.0.13'], port_security=['fa:16:3e:01:98:96 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'dfba7f29-bde8-4327-a7b3-1c4fd44e045a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '10', 'neutron:security_group_ids': '7be0d68e-c4ff-4356-97f2-bd58246f6e46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.222', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=86cf2bf6-2f28-4435-b081-a3945070ed2d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:03:00 compute-2 NetworkManager[48999]: <info>  [1769846580.1050] device (tap86cf2bf6-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:03:00 compute-2 NetworkManager[48999]: <info>  [1769846580.1057] device (tap86cf2bf6-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:03:00 compute-2 ovn_controller[133834]: 2026-01-31T08:03:00Z|00345|binding|INFO|Setting lport 86cf2bf6-2f28-4435-b081-a3945070ed2d up in Southbound
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:00.105 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 86cf2bf6-2f28-4435-b081-a3945070ed2d in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 bound to our chassis
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:00.107 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:00.115 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7adeabc0-9f46-49af-a551-3443cb0bc2e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:00.116 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5cc2535f-01 in ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:00.118 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5cc2535f-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:00.118 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[09f11c47-2e82-4d85-a1c3-5667fe464233]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:00.119 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1e47214e-867b-4502-aff5-c7eac96ead5e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:00 compute-2 systemd-machined[195142]: New machine qemu-41-instance-0000005b.
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:00.128 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a2fb5b-f38e-418d-8c67-d42277a713aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:00.136 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[06586cc8-7664-4507-a1de-b8004267aab2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:00 compute-2 systemd[1]: Started Virtual Machine qemu-41-instance-0000005b.
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:00.156 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[c53c58b2-7a66-4583-9074-7b514c7bf79e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:00.160 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7a044729-2ca1-407e-9024-34214c4ec29a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:00 compute-2 NetworkManager[48999]: <info>  [1769846580.1612] manager: (tap5cc2535f-00): new Veth device (/org/freedesktop/NetworkManager/Devices/177)
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:00.189 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[2e95b106-0bb3-41c6-bd10-31bec3bffd44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:00.194 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[279d21b5-36f9-4704-a82e-a6141ecfa253]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:00 compute-2 NetworkManager[48999]: <info>  [1769846580.2130] device (tap5cc2535f-00): carrier: link connected
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:00.217 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[5d951f97-5a8f-45ba-a833-4d4e6069c29c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:00.230 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[74b311d9-f81e-4d27-8bc0-03666144e80c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc2535f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:76:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 107], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 683268, 'reachable_time': 20252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268779, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:00.239 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[32cb54c2-0675-4edc-a977-4d0efb8a776d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:76f8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 683268, 'tstamp': 683268}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268780, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:00.249 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[127fe278-10f8-416f-b7be-8dbbfbe8fadc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc2535f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:76:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 107], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 683268, 'reachable_time': 20252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268781, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.272 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:00.274 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ac5e0570-2e66-49ce-8e82-b83858544c1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:00.322 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[632640f0-205e-430a-beb3-5e1501947654]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:00.324 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc2535f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:00.325 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:00.325 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cc2535f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.327 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:00 compute-2 NetworkManager[48999]: <info>  [1769846580.3277] manager: (tap5cc2535f-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/178)
Jan 31 08:03:00 compute-2 kernel: tap5cc2535f-00: entered promiscuous mode
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.329 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:00.330 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5cc2535f-00, col_values=(('external_ids', {'iface-id': 'ab077a7e-cc79-4948-8987-2cd87d88deff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.331 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:00 compute-2 ovn_controller[133834]: 2026-01-31T08:03:00Z|00346|binding|INFO|Releasing lport ab077a7e-cc79-4948-8987-2cd87d88deff from this chassis (sb_readonly=0)
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.332 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:00.333 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:00.334 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1f18c196-91ce-4aa4-8012-91c26050eb37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.335 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:00.335 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:03:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:00.336 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'env', 'PROCESS_TAG=haproxy-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5cc2535f-0f8f-4713-a35c-9805048a29a8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:03:00 compute-2 ceph-mon[77282]: pgmap v1975: 305 pgs: 305 active+clean; 271 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 296 KiB/s rd, 1.2 MiB/s wr, 117 op/s
Jan 31 08:03:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2639171500' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:03:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2933201801' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.656 226833 DEBUG nova.compute.manager [req-e2d373de-0ed1-44c3-8d7d-562fdefc69a2 req-1508fa67-3a45-487e-a4d8-d6af99c42163 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-unplugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.656 226833 DEBUG oslo_concurrency.lockutils [req-e2d373de-0ed1-44c3-8d7d-562fdefc69a2 req-1508fa67-3a45-487e-a4d8-d6af99c42163 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.656 226833 DEBUG oslo_concurrency.lockutils [req-e2d373de-0ed1-44c3-8d7d-562fdefc69a2 req-1508fa67-3a45-487e-a4d8-d6af99c42163 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.657 226833 DEBUG oslo_concurrency.lockutils [req-e2d373de-0ed1-44c3-8d7d-562fdefc69a2 req-1508fa67-3a45-487e-a4d8-d6af99c42163 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.657 226833 DEBUG nova.compute.manager [req-e2d373de-0ed1-44c3-8d7d-562fdefc69a2 req-1508fa67-3a45-487e-a4d8-d6af99c42163 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-unplugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.657 226833 WARNING nova.compute.manager [req-e2d373de-0ed1-44c3-8d7d-562fdefc69a2 req-1508fa67-3a45-487e-a4d8-d6af99c42163 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-unplugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state active and task_state reboot_started_hard.
Jan 31 08:03:00 compute-2 sudo[268837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:03:00 compute-2 sudo[268837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:00 compute-2 sudo[268837]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:00 compute-2 podman[268813]: 2026-01-31 08:03:00.696157505 +0000 UTC m=+0.064804036 container create 39d5a1f2fb77d9368a2b68672a3d4a02ce80fda50ae16a72f3fc150af0a56523 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 08:03:00 compute-2 systemd[1]: Started libpod-conmon-39d5a1f2fb77d9368a2b68672a3d4a02ce80fda50ae16a72f3fc150af0a56523.scope.
Jan 31 08:03:00 compute-2 sudo[268884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:03:00 compute-2 sudo[268884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:00 compute-2 sudo[268884]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:00 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:03:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38c2aa138a4f45ede76ce4fd1402f86142b48c4e93ebc76657a6a55eb228de92/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:03:00 compute-2 podman[268813]: 2026-01-31 08:03:00.753470878 +0000 UTC m=+0.122117419 container init 39d5a1f2fb77d9368a2b68672a3d4a02ce80fda50ae16a72f3fc150af0a56523 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:03:00 compute-2 podman[268813]: 2026-01-31 08:03:00.756953512 +0000 UTC m=+0.125600043 container start 39d5a1f2fb77d9368a2b68672a3d4a02ce80fda50ae16a72f3fc150af0a56523 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:03:00 compute-2 podman[268813]: 2026-01-31 08:03:00.672945356 +0000 UTC m=+0.041591907 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:03:00 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[268916]: [NOTICE]   (268924) : New worker (268926) forked
Jan 31 08:03:00 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[268916]: [NOTICE]   (268924) : Loading success.
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.815 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Removed pending event for dfba7f29-bde8-4327-a7b3-1c4fd44e045a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.816 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846580.8154068, dfba7f29-bde8-4327-a7b3-1c4fd44e045a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.816 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] VM Resumed (Lifecycle Event)
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.818 226833 DEBUG nova.compute.manager [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.822 226833 INFO nova.virt.libvirt.driver [-] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Instance rebooted successfully.
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.823 226833 DEBUG nova.compute.manager [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:03:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:00.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.856 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.859 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.906 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.907 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846580.818097, dfba7f29-bde8-4327-a7b3-1c4fd44e045a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.907 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] VM Started (Lifecycle Event)
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.935 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.937 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:03:00 compute-2 nova_compute[226829]: 2026-01-31 08:03:00.978 226833 DEBUG oslo_concurrency.lockutils [None req-bec18ab2-c730-4542-8605-7eed68ab9094 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:01.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:02 compute-2 ceph-mon[77282]: pgmap v1976: 305 pgs: 305 active+clean; 258 MiB data, 898 MiB used, 20 GiB / 21 GiB avail; 262 KiB/s rd, 1.9 MiB/s wr, 115 op/s
Jan 31 08:03:02 compute-2 nova_compute[226829]: 2026-01-31 08:03:02.778 226833 DEBUG nova.compute.manager [req-80f63cd9-9d0c-4f58-8a84-0fd8fcc197de req-aceec2da-e563-48bd-b52f-07df0db5b703 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:03:02 compute-2 nova_compute[226829]: 2026-01-31 08:03:02.779 226833 DEBUG oslo_concurrency.lockutils [req-80f63cd9-9d0c-4f58-8a84-0fd8fcc197de req-aceec2da-e563-48bd-b52f-07df0db5b703 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:02 compute-2 nova_compute[226829]: 2026-01-31 08:03:02.779 226833 DEBUG oslo_concurrency.lockutils [req-80f63cd9-9d0c-4f58-8a84-0fd8fcc197de req-aceec2da-e563-48bd-b52f-07df0db5b703 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:02 compute-2 nova_compute[226829]: 2026-01-31 08:03:02.779 226833 DEBUG oslo_concurrency.lockutils [req-80f63cd9-9d0c-4f58-8a84-0fd8fcc197de req-aceec2da-e563-48bd-b52f-07df0db5b703 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:02 compute-2 nova_compute[226829]: 2026-01-31 08:03:02.780 226833 DEBUG nova.compute.manager [req-80f63cd9-9d0c-4f58-8a84-0fd8fcc197de req-aceec2da-e563-48bd-b52f-07df0db5b703 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:03:02 compute-2 nova_compute[226829]: 2026-01-31 08:03:02.780 226833 WARNING nova.compute.manager [req-80f63cd9-9d0c-4f58-8a84-0fd8fcc197de req-aceec2da-e563-48bd-b52f-07df0db5b703 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state active and task_state None.
Jan 31 08:03:02 compute-2 nova_compute[226829]: 2026-01-31 08:03:02.780 226833 DEBUG nova.compute.manager [req-80f63cd9-9d0c-4f58-8a84-0fd8fcc197de req-aceec2da-e563-48bd-b52f-07df0db5b703 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:03:02 compute-2 nova_compute[226829]: 2026-01-31 08:03:02.781 226833 DEBUG oslo_concurrency.lockutils [req-80f63cd9-9d0c-4f58-8a84-0fd8fcc197de req-aceec2da-e563-48bd-b52f-07df0db5b703 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:02 compute-2 nova_compute[226829]: 2026-01-31 08:03:02.781 226833 DEBUG oslo_concurrency.lockutils [req-80f63cd9-9d0c-4f58-8a84-0fd8fcc197de req-aceec2da-e563-48bd-b52f-07df0db5b703 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:02 compute-2 nova_compute[226829]: 2026-01-31 08:03:02.781 226833 DEBUG oslo_concurrency.lockutils [req-80f63cd9-9d0c-4f58-8a84-0fd8fcc197de req-aceec2da-e563-48bd-b52f-07df0db5b703 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:02 compute-2 nova_compute[226829]: 2026-01-31 08:03:02.782 226833 DEBUG nova.compute.manager [req-80f63cd9-9d0c-4f58-8a84-0fd8fcc197de req-aceec2da-e563-48bd-b52f-07df0db5b703 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:03:02 compute-2 nova_compute[226829]: 2026-01-31 08:03:02.782 226833 WARNING nova.compute.manager [req-80f63cd9-9d0c-4f58-8a84-0fd8fcc197de req-aceec2da-e563-48bd-b52f-07df0db5b703 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state active and task_state None.
Jan 31 08:03:02 compute-2 nova_compute[226829]: 2026-01-31 08:03:02.782 226833 DEBUG nova.compute.manager [req-80f63cd9-9d0c-4f58-8a84-0fd8fcc197de req-aceec2da-e563-48bd-b52f-07df0db5b703 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:03:02 compute-2 nova_compute[226829]: 2026-01-31 08:03:02.782 226833 DEBUG oslo_concurrency.lockutils [req-80f63cd9-9d0c-4f58-8a84-0fd8fcc197de req-aceec2da-e563-48bd-b52f-07df0db5b703 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:02 compute-2 nova_compute[226829]: 2026-01-31 08:03:02.783 226833 DEBUG oslo_concurrency.lockutils [req-80f63cd9-9d0c-4f58-8a84-0fd8fcc197de req-aceec2da-e563-48bd-b52f-07df0db5b703 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:02 compute-2 nova_compute[226829]: 2026-01-31 08:03:02.783 226833 DEBUG oslo_concurrency.lockutils [req-80f63cd9-9d0c-4f58-8a84-0fd8fcc197de req-aceec2da-e563-48bd-b52f-07df0db5b703 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:02 compute-2 nova_compute[226829]: 2026-01-31 08:03:02.783 226833 DEBUG nova.compute.manager [req-80f63cd9-9d0c-4f58-8a84-0fd8fcc197de req-aceec2da-e563-48bd-b52f-07df0db5b703 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:03:02 compute-2 nova_compute[226829]: 2026-01-31 08:03:02.784 226833 WARNING nova.compute.manager [req-80f63cd9-9d0c-4f58-8a84-0fd8fcc197de req-aceec2da-e563-48bd-b52f-07df0db5b703 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state active and task_state None.
Jan 31 08:03:02 compute-2 sudo[268937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:03:02 compute-2 sudo[268937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:02 compute-2 sudo[268937]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:02 compute-2 sudo[268962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:03:02 compute-2 sudo[268962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:02 compute-2 sudo[268962]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:02.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:02 compute-2 sudo[268987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:03:02 compute-2 sudo[268987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:02 compute-2 sudo[268987]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:02 compute-2 sudo[269012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:03:02 compute-2 sudo[269012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:03 compute-2 sudo[269012]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/381090428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:03 compute-2 nova_compute[226829]: 2026-01-31 08:03:03.911 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:03.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:03:04 compute-2 ceph-mon[77282]: pgmap v1977: 305 pgs: 305 active+clean; 249 MiB data, 893 MiB used, 20 GiB / 21 GiB avail; 208 KiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 31 08:03:04 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:03:04 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:03:04 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:03:04 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:03:04 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:03:04 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:03:04 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:03:04 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:03:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:04.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:05 compute-2 nova_compute[226829]: 2026-01-31 08:03:05.012 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:05 compute-2 nova_compute[226829]: 2026-01-31 08:03:05.275 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:05.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:06 compute-2 ceph-mon[77282]: pgmap v1978: 305 pgs: 305 active+clean; 264 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.4 MiB/s wr, 148 op/s
Jan 31 08:03:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:06.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:06.874 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:06.875 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:06.876 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:07.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:08 compute-2 ceph-mon[77282]: pgmap v1979: 305 pgs: 305 active+clean; 280 MiB data, 884 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 191 op/s
Jan 31 08:03:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/623362724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:03:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3514863003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:03:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:08.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:03:09 compute-2 podman[269071]: 2026-01-31 08:03:09.225982429 +0000 UTC m=+0.104303816 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:03:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:09.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:10 compute-2 nova_compute[226829]: 2026-01-31 08:03:10.015 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:10 compute-2 nova_compute[226829]: 2026-01-31 08:03:10.277 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:10 compute-2 sudo[269099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:03:10 compute-2 sudo[269099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:10 compute-2 sudo[269099]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:10 compute-2 ceph-mon[77282]: pgmap v1980: 305 pgs: 305 active+clean; 277 MiB data, 894 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Jan 31 08:03:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:03:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:03:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1024500865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:10 compute-2 sudo[269124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:03:10 compute-2 sudo[269124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:10 compute-2 sudo[269124]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:10.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:11 compute-2 nova_compute[226829]: 2026-01-31 08:03:11.509 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:11.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:12 compute-2 ceph-mon[77282]: pgmap v1981: 305 pgs: 305 active+clean; 260 MiB data, 884 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 150 op/s
Jan 31 08:03:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:12.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:13.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:03:14 compute-2 ovn_controller[133834]: 2026-01-31T08:03:14Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:01:98:96 10.100.0.13
Jan 31 08:03:14 compute-2 nova_compute[226829]: 2026-01-31 08:03:14.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:03:14 compute-2 ceph-mon[77282]: pgmap v1982: 305 pgs: 305 active+clean; 249 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 144 op/s
Jan 31 08:03:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:14.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:15 compute-2 nova_compute[226829]: 2026-01-31 08:03:15.019 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:15 compute-2 nova_compute[226829]: 2026-01-31 08:03:15.279 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:15 compute-2 nova_compute[226829]: 2026-01-31 08:03:15.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:03:15 compute-2 nova_compute[226829]: 2026-01-31 08:03:15.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:03:15 compute-2 ceph-mon[77282]: pgmap v1983: 305 pgs: 305 active+clean; 249 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 169 op/s
Jan 31 08:03:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:15.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:16.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:17.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:18 compute-2 ceph-mon[77282]: pgmap v1984: 305 pgs: 305 active+clean; 249 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 161 op/s
Jan 31 08:03:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:18.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:03:19 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/449284864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:19 compute-2 nova_compute[226829]: 2026-01-31 08:03:19.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:03:19 compute-2 nova_compute[226829]: 2026-01-31 08:03:19.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:03:19 compute-2 nova_compute[226829]: 2026-01-31 08:03:19.514 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:19 compute-2 nova_compute[226829]: 2026-01-31 08:03:19.536 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:03:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:19.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:20 compute-2 nova_compute[226829]: 2026-01-31 08:03:20.021 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:20 compute-2 nova_compute[226829]: 2026-01-31 08:03:20.281 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:20 compute-2 ceph-mon[77282]: pgmap v1985: 305 pgs: 305 active+clean; 249 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 511 KiB/s wr, 148 op/s
Jan 31 08:03:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1309241626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:20 compute-2 sudo[269154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:03:20 compute-2 sudo[269154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:20 compute-2 sudo[269154]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:20 compute-2 sudo[269179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:03:20 compute-2 sudo[269179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:20 compute-2 sudo[269179]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:03:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:20.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:03:21 compute-2 nova_compute[226829]: 2026-01-31 08:03:21.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:03:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/689929559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:21.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:22 compute-2 podman[269205]: 2026-01-31 08:03:22.165919427 +0000 UTC m=+0.052902485 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:03:22 compute-2 ceph-mon[77282]: pgmap v1986: 305 pgs: 305 active+clean; 249 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 33 KiB/s wr, 134 op/s
Jan 31 08:03:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:03:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:22.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:03:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:23.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:03:24 compute-2 ceph-mon[77282]: pgmap v1987: 305 pgs: 305 active+clean; 249 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 21 KiB/s wr, 116 op/s
Jan 31 08:03:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:24.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:25 compute-2 nova_compute[226829]: 2026-01-31 08:03:25.021 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:25 compute-2 nova_compute[226829]: 2026-01-31 08:03:25.283 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:25 compute-2 nova_compute[226829]: 2026-01-31 08:03:25.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:03:25 compute-2 nova_compute[226829]: 2026-01-31 08:03:25.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:03:25 compute-2 nova_compute[226829]: 2026-01-31 08:03:25.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:03:25 compute-2 nova_compute[226829]: 2026-01-31 08:03:25.678 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:25 compute-2 nova_compute[226829]: 2026-01-31 08:03:25.679 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:25 compute-2 nova_compute[226829]: 2026-01-31 08:03:25.679 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:25 compute-2 nova_compute[226829]: 2026-01-31 08:03:25.679 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:03:25 compute-2 nova_compute[226829]: 2026-01-31 08:03:25.680 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:03:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1142760437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2436905090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:25.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:03:26 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/521678790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:26 compute-2 nova_compute[226829]: 2026-01-31 08:03:26.097 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:03:26 compute-2 nova_compute[226829]: 2026-01-31 08:03:26.380 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:03:26 compute-2 nova_compute[226829]: 2026-01-31 08:03:26.381 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:03:26 compute-2 nova_compute[226829]: 2026-01-31 08:03:26.508 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:03:26 compute-2 nova_compute[226829]: 2026-01-31 08:03:26.509 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4188MB free_disk=20.876117706298828GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:03:26 compute-2 nova_compute[226829]: 2026-01-31 08:03:26.510 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:26 compute-2 nova_compute[226829]: 2026-01-31 08:03:26.510 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:26 compute-2 nova_compute[226829]: 2026-01-31 08:03:26.657 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance dfba7f29-bde8-4327-a7b3-1c4fd44e045a actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:03:26 compute-2 nova_compute[226829]: 2026-01-31 08:03:26.658 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:03:26 compute-2 nova_compute[226829]: 2026-01-31 08:03:26.658 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:03:26 compute-2 ceph-mon[77282]: pgmap v1988: 305 pgs: 305 active+clean; 249 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 16 KiB/s wr, 107 op/s
Jan 31 08:03:26 compute-2 nova_compute[226829]: 2026-01-31 08:03:26.706 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:03:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2769752874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/521678790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:26.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:03:27 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/125820870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:27 compute-2 nova_compute[226829]: 2026-01-31 08:03:27.133 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:03:27 compute-2 nova_compute[226829]: 2026-01-31 08:03:27.138 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:03:27 compute-2 nova_compute[226829]: 2026-01-31 08:03:27.200 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:03:27 compute-2 nova_compute[226829]: 2026-01-31 08:03:27.257 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:03:27 compute-2 nova_compute[226829]: 2026-01-31 08:03:27.258 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/125820870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:27.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:28 compute-2 ceph-mon[77282]: pgmap v1989: 305 pgs: 305 active+clean; 254 MiB data, 881 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 464 KiB/s wr, 93 op/s
Jan 31 08:03:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:03:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:28.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:03:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:03:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:03:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:29.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:03:30 compute-2 nova_compute[226829]: 2026-01-31 08:03:30.023 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:30 compute-2 ceph-mon[77282]: pgmap v1990: 305 pgs: 305 active+clean; 315 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.5 MiB/s wr, 127 op/s
Jan 31 08:03:30 compute-2 nova_compute[226829]: 2026-01-31 08:03:30.285 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:03:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:30.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:03:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:03:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:31.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:03:32 compute-2 ceph-mon[77282]: pgmap v1991: 305 pgs: 305 active+clean; 328 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 100 op/s
Jan 31 08:03:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:32.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/787839996' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:03:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:03:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:33.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:03:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:03:34 compute-2 nova_compute[226829]: 2026-01-31 08:03:34.259 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:03:34 compute-2 nova_compute[226829]: 2026-01-31 08:03:34.260 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:03:34 compute-2 ceph-mon[77282]: pgmap v1992: 305 pgs: 305 active+clean; 343 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.4 MiB/s wr, 112 op/s
Jan 31 08:03:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4228796052' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:03:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:34.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:35 compute-2 nova_compute[226829]: 2026-01-31 08:03:35.024 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:35 compute-2 nova_compute[226829]: 2026-01-31 08:03:35.289 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:03:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:35.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:03:36 compute-2 ceph-mon[77282]: pgmap v1993: 305 pgs: 305 active+clean; 367 MiB data, 926 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.6 MiB/s wr, 120 op/s
Jan 31 08:03:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:03:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:36.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:03:37 compute-2 ceph-mon[77282]: pgmap v1994: 305 pgs: 305 active+clean; 374 MiB data, 927 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.7 MiB/s wr, 134 op/s
Jan 31 08:03:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:37.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:38.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:03:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:39.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:40 compute-2 nova_compute[226829]: 2026-01-31 08:03:40.026 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:40 compute-2 podman[269279]: 2026-01-31 08:03:40.23619241 +0000 UTC m=+0.122064968 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 08:03:40 compute-2 nova_compute[226829]: 2026-01-31 08:03:40.290 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:40 compute-2 ceph-mon[77282]: pgmap v1995: 305 pgs: 305 active+clean; 374 MiB data, 927 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 5.3 MiB/s wr, 160 op/s
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #88. Immutable memtables: 0.
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:03:40.499312) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 88
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846620499423, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 2400, "num_deletes": 252, "total_data_size": 5626513, "memory_usage": 5695376, "flush_reason": "Manual Compaction"}
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #89: started
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846620521221, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 89, "file_size": 3688012, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44763, "largest_seqno": 47158, "table_properties": {"data_size": 3678370, "index_size": 6072, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20449, "raw_average_key_size": 20, "raw_value_size": 3658904, "raw_average_value_size": 3684, "num_data_blocks": 264, "num_entries": 993, "num_filter_entries": 993, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846410, "oldest_key_time": 1769846410, "file_creation_time": 1769846620, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 21972 microseconds, and 10903 cpu microseconds.
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:03:40.521281) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #89: 3688012 bytes OK
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:03:40.521301) [db/memtable_list.cc:519] [default] Level-0 commit table #89 started
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:03:40.522743) [db/memtable_list.cc:722] [default] Level-0 commit table #89: memtable #1 done
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:03:40.522755) EVENT_LOG_v1 {"time_micros": 1769846620522751, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:03:40.522772) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 5616025, prev total WAL file size 5616025, number of live WAL files 2.
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000085.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:03:40.523559) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [89(3601KB)], [87(9491KB)]
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846620523645, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [89], "files_L6": [87], "score": -1, "input_data_size": 13407788, "oldest_snapshot_seqno": -1}
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #90: 7181 keys, 11455239 bytes, temperature: kUnknown
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846620584051, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 90, "file_size": 11455239, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11406843, "index_size": 29340, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17989, "raw_key_size": 185144, "raw_average_key_size": 25, "raw_value_size": 11278047, "raw_average_value_size": 1570, "num_data_blocks": 1164, "num_entries": 7181, "num_filter_entries": 7181, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769846620, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 90, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:03:40.585200) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 11455239 bytes
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:03:40.586324) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 222.1 rd, 189.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 9.3 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 7704, records dropped: 523 output_compression: NoCompression
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:03:40.586358) EVENT_LOG_v1 {"time_micros": 1769846620586345, "job": 54, "event": "compaction_finished", "compaction_time_micros": 60371, "compaction_time_cpu_micros": 22800, "output_level": 6, "num_output_files": 1, "total_output_size": 11455239, "num_input_records": 7704, "num_output_records": 7181, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846620586993, "job": 54, "event": "table_file_deletion", "file_number": 89}
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000087.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846620588014, "job": 54, "event": "table_file_deletion", "file_number": 87}
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:03:40.523467) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:03:40.588108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:03:40.588114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:03:40.588116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:03:40.588117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:03:40 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:03:40.588119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:03:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:03:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:40.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:03:40 compute-2 sudo[269305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:03:40 compute-2 sudo[269305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:40 compute-2 sudo[269305]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:41 compute-2 sudo[269330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:03:41 compute-2 sudo[269330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:41 compute-2 sudo[269330]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:41.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:42 compute-2 nova_compute[226829]: 2026-01-31 08:03:42.334 226833 DEBUG oslo_concurrency.lockutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "380842ce-2460-4a95-94a6-836a6137b09c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:42 compute-2 nova_compute[226829]: 2026-01-31 08:03:42.335 226833 DEBUG oslo_concurrency.lockutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "380842ce-2460-4a95-94a6-836a6137b09c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:42 compute-2 nova_compute[226829]: 2026-01-31 08:03:42.363 226833 DEBUG nova.compute.manager [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:03:42 compute-2 nova_compute[226829]: 2026-01-31 08:03:42.504 226833 DEBUG oslo_concurrency.lockutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:42 compute-2 nova_compute[226829]: 2026-01-31 08:03:42.504 226833 DEBUG oslo_concurrency.lockutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:42 compute-2 nova_compute[226829]: 2026-01-31 08:03:42.516 226833 DEBUG nova.virt.hardware [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:03:42 compute-2 nova_compute[226829]: 2026-01-31 08:03:42.516 226833 INFO nova.compute.claims [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:03:42 compute-2 ceph-mon[77282]: pgmap v1996: 305 pgs: 305 active+clean; 374 MiB data, 927 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 104 op/s
Jan 31 08:03:42 compute-2 nova_compute[226829]: 2026-01-31 08:03:42.693 226833 DEBUG oslo_concurrency.processutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:03:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:42.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:03:43 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4063449277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:43 compute-2 nova_compute[226829]: 2026-01-31 08:03:43.123 226833 DEBUG oslo_concurrency.processutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:03:43 compute-2 nova_compute[226829]: 2026-01-31 08:03:43.128 226833 DEBUG nova.compute.provider_tree [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:03:43 compute-2 nova_compute[226829]: 2026-01-31 08:03:43.150 226833 DEBUG nova.scheduler.client.report [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:03:43 compute-2 nova_compute[226829]: 2026-01-31 08:03:43.177 226833 DEBUG oslo_concurrency.lockutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:43 compute-2 nova_compute[226829]: 2026-01-31 08:03:43.177 226833 DEBUG nova.compute.manager [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:03:43 compute-2 nova_compute[226829]: 2026-01-31 08:03:43.245 226833 DEBUG nova.compute.manager [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:03:43 compute-2 nova_compute[226829]: 2026-01-31 08:03:43.245 226833 DEBUG nova.network.neutron [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:03:43 compute-2 nova_compute[226829]: 2026-01-31 08:03:43.274 226833 INFO nova.virt.libvirt.driver [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:03:43 compute-2 nova_compute[226829]: 2026-01-31 08:03:43.292 226833 DEBUG nova.compute.manager [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:03:43 compute-2 nova_compute[226829]: 2026-01-31 08:03:43.386 226833 DEBUG nova.compute.manager [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:03:43 compute-2 nova_compute[226829]: 2026-01-31 08:03:43.387 226833 DEBUG nova.virt.libvirt.driver [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:03:43 compute-2 nova_compute[226829]: 2026-01-31 08:03:43.387 226833 INFO nova.virt.libvirt.driver [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Creating image(s)
Jan 31 08:03:43 compute-2 nova_compute[226829]: 2026-01-31 08:03:43.410 226833 DEBUG nova.storage.rbd_utils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image 380842ce-2460-4a95-94a6-836a6137b09c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:03:43 compute-2 nova_compute[226829]: 2026-01-31 08:03:43.434 226833 DEBUG nova.storage.rbd_utils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image 380842ce-2460-4a95-94a6-836a6137b09c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:03:43 compute-2 nova_compute[226829]: 2026-01-31 08:03:43.458 226833 DEBUG nova.storage.rbd_utils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image 380842ce-2460-4a95-94a6-836a6137b09c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:03:43 compute-2 nova_compute[226829]: 2026-01-31 08:03:43.461 226833 DEBUG oslo_concurrency.processutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:03:43 compute-2 nova_compute[226829]: 2026-01-31 08:03:43.511 226833 DEBUG oslo_concurrency.processutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:03:43 compute-2 nova_compute[226829]: 2026-01-31 08:03:43.512 226833 DEBUG oslo_concurrency.lockutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:43 compute-2 nova_compute[226829]: 2026-01-31 08:03:43.512 226833 DEBUG oslo_concurrency.lockutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:43 compute-2 nova_compute[226829]: 2026-01-31 08:03:43.513 226833 DEBUG oslo_concurrency.lockutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e254 e254: 3 total, 3 up, 3 in
Jan 31 08:03:43 compute-2 nova_compute[226829]: 2026-01-31 08:03:43.533 226833 DEBUG nova.storage.rbd_utils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image 380842ce-2460-4a95-94a6-836a6137b09c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:03:43 compute-2 nova_compute[226829]: 2026-01-31 08:03:43.538 226833 DEBUG oslo_concurrency.processutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 380842ce-2460-4a95-94a6-836a6137b09c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:03:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4063449277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:43 compute-2 nova_compute[226829]: 2026-01-31 08:03:43.872 226833 DEBUG nova.policy [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd9ed446fb2cf4fc0a4e619c6c766fddc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1fcec9ca13964c7191134db4420ab049', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:03:43 compute-2 nova_compute[226829]: 2026-01-31 08:03:43.877 226833 DEBUG oslo_concurrency.processutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 380842ce-2460-4a95-94a6-836a6137b09c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:03:43 compute-2 nova_compute[226829]: 2026-01-31 08:03:43.947 226833 DEBUG nova.storage.rbd_utils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] resizing rbd image 380842ce-2460-4a95-94a6-836a6137b09c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:03:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:03:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:43.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:03:44 compute-2 nova_compute[226829]: 2026-01-31 08:03:44.106 226833 DEBUG nova.objects.instance [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'migration_context' on Instance uuid 380842ce-2460-4a95-94a6-836a6137b09c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:03:44 compute-2 nova_compute[226829]: 2026-01-31 08:03:44.137 226833 DEBUG nova.virt.libvirt.driver [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:03:44 compute-2 nova_compute[226829]: 2026-01-31 08:03:44.138 226833 DEBUG nova.virt.libvirt.driver [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Ensure instance console log exists: /var/lib/nova/instances/380842ce-2460-4a95-94a6-836a6137b09c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:03:44 compute-2 nova_compute[226829]: 2026-01-31 08:03:44.138 226833 DEBUG oslo_concurrency.lockutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:44 compute-2 nova_compute[226829]: 2026-01-31 08:03:44.139 226833 DEBUG oslo_concurrency.lockutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:44 compute-2 nova_compute[226829]: 2026-01-31 08:03:44.139 226833 DEBUG oslo_concurrency.lockutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:03:44 compute-2 ceph-mon[77282]: pgmap v1997: 305 pgs: 305 active+clean; 374 MiB data, 927 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 112 op/s
Jan 31 08:03:44 compute-2 ceph-mon[77282]: osdmap e254: 3 total, 3 up, 3 in
Jan 31 08:03:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2913285531' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:03:44 compute-2 nova_compute[226829]: 2026-01-31 08:03:44.748 226833 DEBUG nova.network.neutron [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Successfully created port: 3141db9a-3775-4074-8fd7-00af7879125d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:03:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:44.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:45 compute-2 nova_compute[226829]: 2026-01-31 08:03:45.028 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:45 compute-2 nova_compute[226829]: 2026-01-31 08:03:45.291 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1226616456' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:03:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1766540124' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:03:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1766540124' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:03:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:45.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:46 compute-2 ceph-mon[77282]: pgmap v1999: 305 pgs: 305 active+clean; 399 MiB data, 927 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 114 op/s
Jan 31 08:03:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:46.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:47 compute-2 nova_compute[226829]: 2026-01-31 08:03:47.792 226833 DEBUG nova.network.neutron [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Successfully updated port: 3141db9a-3775-4074-8fd7-00af7879125d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:03:47 compute-2 nova_compute[226829]: 2026-01-31 08:03:47.832 226833 DEBUG oslo_concurrency.lockutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "refresh_cache-380842ce-2460-4a95-94a6-836a6137b09c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:03:47 compute-2 nova_compute[226829]: 2026-01-31 08:03:47.832 226833 DEBUG oslo_concurrency.lockutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquired lock "refresh_cache-380842ce-2460-4a95-94a6-836a6137b09c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:03:47 compute-2 nova_compute[226829]: 2026-01-31 08:03:47.832 226833 DEBUG nova.network.neutron [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:03:47 compute-2 nova_compute[226829]: 2026-01-31 08:03:47.986 226833 DEBUG nova.network.neutron [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:03:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:47.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:48 compute-2 ceph-mon[77282]: pgmap v2000: 305 pgs: 305 active+clean; 406 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 138 op/s
Jan 31 08:03:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:48.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:03:49 compute-2 nova_compute[226829]: 2026-01-31 08:03:49.769 226833 DEBUG nova.compute.manager [req-1c5dc5ed-e9d5-41f6-b42d-38c38df991fd req-18ab7d57-8978-4484-9934-148f262c89d5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Received event network-changed-3141db9a-3775-4074-8fd7-00af7879125d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:03:49 compute-2 nova_compute[226829]: 2026-01-31 08:03:49.769 226833 DEBUG nova.compute.manager [req-1c5dc5ed-e9d5-41f6-b42d-38c38df991fd req-18ab7d57-8978-4484-9934-148f262c89d5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Refreshing instance network info cache due to event network-changed-3141db9a-3775-4074-8fd7-00af7879125d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:03:49 compute-2 nova_compute[226829]: 2026-01-31 08:03:49.770 226833 DEBUG oslo_concurrency.lockutils [req-1c5dc5ed-e9d5-41f6-b42d-38c38df991fd req-18ab7d57-8978-4484-9934-148f262c89d5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-380842ce-2460-4a95-94a6-836a6137b09c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:03:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:03:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:49.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:03:50 compute-2 nova_compute[226829]: 2026-01-31 08:03:50.029 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:50 compute-2 nova_compute[226829]: 2026-01-31 08:03:50.293 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:50 compute-2 nova_compute[226829]: 2026-01-31 08:03:50.443 226833 DEBUG nova.network.neutron [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Updating instance_info_cache with network_info: [{"id": "3141db9a-3775-4074-8fd7-00af7879125d", "address": "fa:16:3e:ae:74:48", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3141db9a-37", "ovs_interfaceid": "3141db9a-3775-4074-8fd7-00af7879125d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:03:50 compute-2 nova_compute[226829]: 2026-01-31 08:03:50.475 226833 DEBUG oslo_concurrency.lockutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Releasing lock "refresh_cache-380842ce-2460-4a95-94a6-836a6137b09c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:03:50 compute-2 nova_compute[226829]: 2026-01-31 08:03:50.476 226833 DEBUG nova.compute.manager [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Instance network_info: |[{"id": "3141db9a-3775-4074-8fd7-00af7879125d", "address": "fa:16:3e:ae:74:48", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3141db9a-37", "ovs_interfaceid": "3141db9a-3775-4074-8fd7-00af7879125d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:03:50 compute-2 nova_compute[226829]: 2026-01-31 08:03:50.476 226833 DEBUG oslo_concurrency.lockutils [req-1c5dc5ed-e9d5-41f6-b42d-38c38df991fd req-18ab7d57-8978-4484-9934-148f262c89d5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-380842ce-2460-4a95-94a6-836a6137b09c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:03:50 compute-2 nova_compute[226829]: 2026-01-31 08:03:50.476 226833 DEBUG nova.network.neutron [req-1c5dc5ed-e9d5-41f6-b42d-38c38df991fd req-18ab7d57-8978-4484-9934-148f262c89d5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Refreshing network info cache for port 3141db9a-3775-4074-8fd7-00af7879125d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:03:50 compute-2 nova_compute[226829]: 2026-01-31 08:03:50.480 226833 DEBUG nova.virt.libvirt.driver [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Start _get_guest_xml network_info=[{"id": "3141db9a-3775-4074-8fd7-00af7879125d", "address": "fa:16:3e:ae:74:48", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3141db9a-37", "ovs_interfaceid": "3141db9a-3775-4074-8fd7-00af7879125d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:03:50 compute-2 nova_compute[226829]: 2026-01-31 08:03:50.484 226833 WARNING nova.virt.libvirt.driver [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:03:50 compute-2 ceph-mon[77282]: pgmap v2001: 305 pgs: 305 active+clean; 426 MiB data, 949 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.0 MiB/s wr, 153 op/s
Jan 31 08:03:50 compute-2 nova_compute[226829]: 2026-01-31 08:03:50.491 226833 DEBUG nova.virt.libvirt.host [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:03:50 compute-2 nova_compute[226829]: 2026-01-31 08:03:50.491 226833 DEBUG nova.virt.libvirt.host [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:03:50 compute-2 nova_compute[226829]: 2026-01-31 08:03:50.504 226833 DEBUG nova.virt.libvirt.host [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:03:50 compute-2 nova_compute[226829]: 2026-01-31 08:03:50.505 226833 DEBUG nova.virt.libvirt.host [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:03:50 compute-2 nova_compute[226829]: 2026-01-31 08:03:50.506 226833 DEBUG nova.virt.libvirt.driver [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:03:50 compute-2 nova_compute[226829]: 2026-01-31 08:03:50.506 226833 DEBUG nova.virt.hardware [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:03:50 compute-2 nova_compute[226829]: 2026-01-31 08:03:50.507 226833 DEBUG nova.virt.hardware [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:03:50 compute-2 nova_compute[226829]: 2026-01-31 08:03:50.507 226833 DEBUG nova.virt.hardware [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:03:50 compute-2 nova_compute[226829]: 2026-01-31 08:03:50.507 226833 DEBUG nova.virt.hardware [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:03:50 compute-2 nova_compute[226829]: 2026-01-31 08:03:50.507 226833 DEBUG nova.virt.hardware [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:03:50 compute-2 nova_compute[226829]: 2026-01-31 08:03:50.508 226833 DEBUG nova.virt.hardware [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:03:50 compute-2 nova_compute[226829]: 2026-01-31 08:03:50.508 226833 DEBUG nova.virt.hardware [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:03:50 compute-2 nova_compute[226829]: 2026-01-31 08:03:50.508 226833 DEBUG nova.virt.hardware [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:03:50 compute-2 nova_compute[226829]: 2026-01-31 08:03:50.508 226833 DEBUG nova.virt.hardware [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:03:50 compute-2 nova_compute[226829]: 2026-01-31 08:03:50.508 226833 DEBUG nova.virt.hardware [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:03:50 compute-2 nova_compute[226829]: 2026-01-31 08:03:50.509 226833 DEBUG nova.virt.hardware [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:03:50 compute-2 nova_compute[226829]: 2026-01-31 08:03:50.512 226833 DEBUG oslo_concurrency.processutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:03:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:50.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:03:50 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1927397384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:03:50 compute-2 nova_compute[226829]: 2026-01-31 08:03:50.989 226833 DEBUG oslo_concurrency.processutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.013 226833 DEBUG nova.storage.rbd_utils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image 380842ce-2460-4a95-94a6-836a6137b09c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.018 226833 DEBUG oslo_concurrency.processutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:03:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:03:51 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3553796967' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.455 226833 DEBUG oslo_concurrency.processutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.457 226833 DEBUG nova.virt.libvirt.vif [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:03:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1756792593',display_name='tempest-tempest.common.compute-instance-1756792593',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1756792593',id=97,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-st38ql6s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActions
TestJSON-1391450973-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:03:43Z,user_data=None,user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=380842ce-2460-4a95-94a6-836a6137b09c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3141db9a-3775-4074-8fd7-00af7879125d", "address": "fa:16:3e:ae:74:48", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3141db9a-37", "ovs_interfaceid": "3141db9a-3775-4074-8fd7-00af7879125d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.458 226833 DEBUG nova.network.os_vif_util [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "3141db9a-3775-4074-8fd7-00af7879125d", "address": "fa:16:3e:ae:74:48", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3141db9a-37", "ovs_interfaceid": "3141db9a-3775-4074-8fd7-00af7879125d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.459 226833 DEBUG nova.network.os_vif_util [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:74:48,bridge_name='br-int',has_traffic_filtering=True,id=3141db9a-3775-4074-8fd7-00af7879125d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3141db9a-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.461 226833 DEBUG nova.objects.instance [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'pci_devices' on Instance uuid 380842ce-2460-4a95-94a6-836a6137b09c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.478 226833 DEBUG nova.virt.libvirt.driver [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:03:51 compute-2 nova_compute[226829]:   <uuid>380842ce-2460-4a95-94a6-836a6137b09c</uuid>
Jan 31 08:03:51 compute-2 nova_compute[226829]:   <name>instance-00000061</name>
Jan 31 08:03:51 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:03:51 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:03:51 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <nova:name>tempest-tempest.common.compute-instance-1756792593</nova:name>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:03:50</nova:creationTime>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:03:51 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:03:51 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:03:51 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:03:51 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:03:51 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:03:51 compute-2 nova_compute[226829]:         <nova:user uuid="d9ed446fb2cf4fc0a4e619c6c766fddc">tempest-ServerActionsTestJSON-1391450973-project-member</nova:user>
Jan 31 08:03:51 compute-2 nova_compute[226829]:         <nova:project uuid="1fcec9ca13964c7191134db4420ab049">tempest-ServerActionsTestJSON-1391450973</nova:project>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:03:51 compute-2 nova_compute[226829]:         <nova:port uuid="3141db9a-3775-4074-8fd7-00af7879125d">
Jan 31 08:03:51 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:03:51 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:03:51 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <system>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <entry name="serial">380842ce-2460-4a95-94a6-836a6137b09c</entry>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <entry name="uuid">380842ce-2460-4a95-94a6-836a6137b09c</entry>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     </system>
Jan 31 08:03:51 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:03:51 compute-2 nova_compute[226829]:   <os>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:   </os>
Jan 31 08:03:51 compute-2 nova_compute[226829]:   <features>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:   </features>
Jan 31 08:03:51 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:03:51 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:03:51 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/380842ce-2460-4a95-94a6-836a6137b09c_disk">
Jan 31 08:03:51 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       </source>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:03:51 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/380842ce-2460-4a95-94a6-836a6137b09c_disk.config">
Jan 31 08:03:51 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       </source>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:03:51 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:ae:74:48"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <target dev="tap3141db9a-37"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/380842ce-2460-4a95-94a6-836a6137b09c/console.log" append="off"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <video>
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     </video>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:03:51 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:03:51 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:03:51 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:03:51 compute-2 nova_compute[226829]: </domain>
Jan 31 08:03:51 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.479 226833 DEBUG nova.compute.manager [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Preparing to wait for external event network-vif-plugged-3141db9a-3775-4074-8fd7-00af7879125d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.479 226833 DEBUG oslo_concurrency.lockutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "380842ce-2460-4a95-94a6-836a6137b09c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.480 226833 DEBUG oslo_concurrency.lockutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "380842ce-2460-4a95-94a6-836a6137b09c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.480 226833 DEBUG oslo_concurrency.lockutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "380842ce-2460-4a95-94a6-836a6137b09c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.480 226833 DEBUG nova.virt.libvirt.vif [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:03:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1756792593',display_name='tempest-tempest.common.compute-instance-1756792593',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1756792593',id=97,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-st38ql6s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-Ser
verActionsTestJSON-1391450973-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:03:43Z,user_data=None,user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=380842ce-2460-4a95-94a6-836a6137b09c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3141db9a-3775-4074-8fd7-00af7879125d", "address": "fa:16:3e:ae:74:48", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3141db9a-37", "ovs_interfaceid": "3141db9a-3775-4074-8fd7-00af7879125d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.481 226833 DEBUG nova.network.os_vif_util [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "3141db9a-3775-4074-8fd7-00af7879125d", "address": "fa:16:3e:ae:74:48", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3141db9a-37", "ovs_interfaceid": "3141db9a-3775-4074-8fd7-00af7879125d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.481 226833 DEBUG nova.network.os_vif_util [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:74:48,bridge_name='br-int',has_traffic_filtering=True,id=3141db9a-3775-4074-8fd7-00af7879125d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3141db9a-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.482 226833 DEBUG os_vif [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:74:48,bridge_name='br-int',has_traffic_filtering=True,id=3141db9a-3775-4074-8fd7-00af7879125d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3141db9a-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.482 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.483 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.483 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.488 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.489 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3141db9a-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.490 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3141db9a-37, col_values=(('external_ids', {'iface-id': '3141db9a-3775-4074-8fd7-00af7879125d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:74:48', 'vm-uuid': '380842ce-2460-4a95-94a6-836a6137b09c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.492 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:51 compute-2 NetworkManager[48999]: <info>  [1769846631.4937] manager: (tap3141db9a-37): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/179)
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.495 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.499 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.501 226833 INFO os_vif [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:74:48,bridge_name='br-int',has_traffic_filtering=True,id=3141db9a-3775-4074-8fd7-00af7879125d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3141db9a-37')
Jan 31 08:03:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1927397384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:03:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3553796967' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.556 226833 DEBUG nova.virt.libvirt.driver [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.557 226833 DEBUG nova.virt.libvirt.driver [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.558 226833 DEBUG nova.virt.libvirt.driver [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] No VIF found with MAC fa:16:3e:ae:74:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.558 226833 INFO nova.virt.libvirt.driver [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Using config drive
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.576 226833 DEBUG nova.storage.rbd_utils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image 380842ce-2460-4a95-94a6-836a6137b09c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:03:51 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.994 226833 INFO nova.virt.libvirt.driver [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Creating config drive at /var/lib/nova/instances/380842ce-2460-4a95-94a6-836a6137b09c/disk.config
Jan 31 08:03:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:51.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:51.999 226833 DEBUG oslo_concurrency.processutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/380842ce-2460-4a95-94a6-836a6137b09c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp88ehkhsj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.124 226833 DEBUG oslo_concurrency.processutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/380842ce-2460-4a95-94a6-836a6137b09c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp88ehkhsj" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.154 226833 DEBUG nova.storage.rbd_utils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image 380842ce-2460-4a95-94a6-836a6137b09c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.158 226833 DEBUG oslo_concurrency.processutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/380842ce-2460-4a95-94a6-836a6137b09c/disk.config 380842ce-2460-4a95-94a6-836a6137b09c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.304 226833 DEBUG oslo_concurrency.processutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/380842ce-2460-4a95-94a6-836a6137b09c/disk.config 380842ce-2460-4a95-94a6-836a6137b09c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.305 226833 INFO nova.virt.libvirt.driver [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Deleting local config drive /var/lib/nova/instances/380842ce-2460-4a95-94a6-836a6137b09c/disk.config because it was imported into RBD.
Jan 31 08:03:52 compute-2 kernel: tap3141db9a-37: entered promiscuous mode
Jan 31 08:03:52 compute-2 NetworkManager[48999]: <info>  [1769846632.3491] manager: (tap3141db9a-37): new Tun device (/org/freedesktop/NetworkManager/Devices/180)
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.350 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:52 compute-2 ovn_controller[133834]: 2026-01-31T08:03:52Z|00347|binding|INFO|Claiming lport 3141db9a-3775-4074-8fd7-00af7879125d for this chassis.
Jan 31 08:03:52 compute-2 ovn_controller[133834]: 2026-01-31T08:03:52Z|00348|binding|INFO|3141db9a-3775-4074-8fd7-00af7879125d: Claiming fa:16:3e:ae:74:48 10.100.0.9
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.356 226833 DEBUG nova.network.neutron [req-1c5dc5ed-e9d5-41f6-b42d-38c38df991fd req-18ab7d57-8978-4484-9934-148f262c89d5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Updated VIF entry in instance network info cache for port 3141db9a-3775-4074-8fd7-00af7879125d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.357 226833 DEBUG nova.network.neutron [req-1c5dc5ed-e9d5-41f6-b42d-38c38df991fd req-18ab7d57-8978-4484-9934-148f262c89d5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Updating instance_info_cache with network_info: [{"id": "3141db9a-3775-4074-8fd7-00af7879125d", "address": "fa:16:3e:ae:74:48", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3141db9a-37", "ovs_interfaceid": "3141db9a-3775-4074-8fd7-00af7879125d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.359 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:52 compute-2 ovn_controller[133834]: 2026-01-31T08:03:52Z|00349|binding|INFO|Setting lport 3141db9a-3775-4074-8fd7-00af7879125d ovn-installed in OVS
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.359 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:52 compute-2 ovn_controller[133834]: 2026-01-31T08:03:52Z|00350|binding|INFO|Setting lport 3141db9a-3775-4074-8fd7-00af7879125d up in Southbound
Jan 31 08:03:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:52.361 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:74:48 10.100.0.9'], port_security=['fa:16:3e:ae:74:48 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '380842ce-2460-4a95-94a6-836a6137b09c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f3ae8b6d-25bb-4c6c-8115-39e86b6452b7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=3141db9a-3775-4074-8fd7-00af7879125d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:03:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:52.364 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 3141db9a-3775-4074-8fd7-00af7879125d in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 bound to our chassis
Jan 31 08:03:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:52.366 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 08:03:52 compute-2 systemd-machined[195142]: New machine qemu-42-instance-00000061.
Jan 31 08:03:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:52.381 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9cc6cd91-29e8-4df6-9686-c9bfb628508f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.385 226833 DEBUG oslo_concurrency.lockutils [req-1c5dc5ed-e9d5-41f6-b42d-38c38df991fd req-18ab7d57-8978-4484-9934-148f262c89d5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-380842ce-2460-4a95-94a6-836a6137b09c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:03:52 compute-2 systemd[1]: Started Virtual Machine qemu-42-instance-00000061.
Jan 31 08:03:52 compute-2 systemd-udevd[269698]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:03:52 compute-2 NetworkManager[48999]: <info>  [1769846632.4137] device (tap3141db9a-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:03:52 compute-2 NetworkManager[48999]: <info>  [1769846632.4156] device (tap3141db9a-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:03:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:52.414 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[d815ff72-598e-476e-9650-362e19ccab0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:52.418 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[76ff4bc5-b977-48fd-959d-c0aead7b81a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:52.436 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[db34ad92-39cc-4a53-88de-5aa0f9bc6d69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:52 compute-2 podman[269682]: 2026-01-31 08:03:52.440711588 +0000 UTC m=+0.063800829 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:03:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:52.448 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0f9325fa-89de-469c-baa2-6999d82309fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc2535f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:76:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 107], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 683268, 'reachable_time': 20252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269714, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:52.458 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[23d77328-e562-4f7f-b2e9-b4de2c0418d7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5cc2535f-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 683277, 'tstamp': 683277}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269716, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5cc2535f-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 683279, 'tstamp': 683279}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269716, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:52.460 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc2535f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.462 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.463 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:52.463 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cc2535f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:03:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:52.463 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:03:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:52.464 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5cc2535f-00, col_values=(('external_ids', {'iface-id': 'ab077a7e-cc79-4948-8987-2cd87d88deff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:03:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:52.464 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:03:52 compute-2 ceph-mon[77282]: pgmap v2002: 305 pgs: 305 active+clean; 433 MiB data, 955 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.6 MiB/s wr, 183 op/s
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.601 226833 DEBUG nova.compute.manager [req-801138e9-f594-4649-9015-89d663c05cfa req-885bb84d-71ae-441c-98af-1f4d115e68a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Received event network-vif-plugged-3141db9a-3775-4074-8fd7-00af7879125d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.602 226833 DEBUG oslo_concurrency.lockutils [req-801138e9-f594-4649-9015-89d663c05cfa req-885bb84d-71ae-441c-98af-1f4d115e68a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "380842ce-2460-4a95-94a6-836a6137b09c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.602 226833 DEBUG oslo_concurrency.lockutils [req-801138e9-f594-4649-9015-89d663c05cfa req-885bb84d-71ae-441c-98af-1f4d115e68a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "380842ce-2460-4a95-94a6-836a6137b09c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.602 226833 DEBUG oslo_concurrency.lockutils [req-801138e9-f594-4649-9015-89d663c05cfa req-885bb84d-71ae-441c-98af-1f4d115e68a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "380842ce-2460-4a95-94a6-836a6137b09c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.603 226833 DEBUG nova.compute.manager [req-801138e9-f594-4649-9015-89d663c05cfa req-885bb84d-71ae-441c-98af-1f4d115e68a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Processing event network-vif-plugged-3141db9a-3775-4074-8fd7-00af7879125d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.698 226833 DEBUG nova.compute.manager [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.698 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846632.6975937, 380842ce-2460-4a95-94a6-836a6137b09c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.699 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] VM Started (Lifecycle Event)
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.701 226833 DEBUG nova.virt.libvirt.driver [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.703 226833 INFO nova.virt.libvirt.driver [-] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Instance spawned successfully.
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.703 226833 DEBUG nova.virt.libvirt.driver [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.724 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.729 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.731 226833 DEBUG nova.virt.libvirt.driver [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.731 226833 DEBUG nova.virt.libvirt.driver [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.732 226833 DEBUG nova.virt.libvirt.driver [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.732 226833 DEBUG nova.virt.libvirt.driver [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.733 226833 DEBUG nova.virt.libvirt.driver [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.733 226833 DEBUG nova.virt.libvirt.driver [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.772 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.773 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846632.698574, 380842ce-2460-4a95-94a6-836a6137b09c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.773 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] VM Paused (Lifecycle Event)
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.806 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.810 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846632.7006767, 380842ce-2460-4a95-94a6-836a6137b09c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.810 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] VM Resumed (Lifecycle Event)
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.821 226833 INFO nova.compute.manager [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Took 9.43 seconds to spawn the instance on the hypervisor.
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.822 226833 DEBUG nova.compute.manager [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.830 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.833 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.859 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.888 226833 INFO nova.compute.manager [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Took 10.43 seconds to build instance.
Jan 31 08:03:52 compute-2 nova_compute[226829]: 2026-01-31 08:03:52.903 226833 DEBUG oslo_concurrency.lockutils [None req-fa23fcb2-7f47-4d36-b246-7580218f64f2 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "380842ce-2460-4a95-94a6-836a6137b09c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:03:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:52.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:03:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:53.521 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:03:53 compute-2 nova_compute[226829]: 2026-01-31 08:03:53.521 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:53.522 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:03:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:03:53.523 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:03:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:54.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:03:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e255 e255: 3 total, 3 up, 3 in
Jan 31 08:03:54 compute-2 ceph-mon[77282]: pgmap v2003: 305 pgs: 305 active+clean; 439 MiB data, 958 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.9 MiB/s wr, 197 op/s
Jan 31 08:03:54 compute-2 nova_compute[226829]: 2026-01-31 08:03:54.696 226833 DEBUG nova.compute.manager [req-eb38a1fa-fd29-4cf5-8501-b845c2a5d2c0 req-22fb7335-6f2a-4380-aef9-43b02520299d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Received event network-vif-plugged-3141db9a-3775-4074-8fd7-00af7879125d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:03:54 compute-2 nova_compute[226829]: 2026-01-31 08:03:54.696 226833 DEBUG oslo_concurrency.lockutils [req-eb38a1fa-fd29-4cf5-8501-b845c2a5d2c0 req-22fb7335-6f2a-4380-aef9-43b02520299d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "380842ce-2460-4a95-94a6-836a6137b09c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:54 compute-2 nova_compute[226829]: 2026-01-31 08:03:54.696 226833 DEBUG oslo_concurrency.lockutils [req-eb38a1fa-fd29-4cf5-8501-b845c2a5d2c0 req-22fb7335-6f2a-4380-aef9-43b02520299d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "380842ce-2460-4a95-94a6-836a6137b09c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:54 compute-2 nova_compute[226829]: 2026-01-31 08:03:54.697 226833 DEBUG oslo_concurrency.lockutils [req-eb38a1fa-fd29-4cf5-8501-b845c2a5d2c0 req-22fb7335-6f2a-4380-aef9-43b02520299d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "380842ce-2460-4a95-94a6-836a6137b09c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:54 compute-2 nova_compute[226829]: 2026-01-31 08:03:54.697 226833 DEBUG nova.compute.manager [req-eb38a1fa-fd29-4cf5-8501-b845c2a5d2c0 req-22fb7335-6f2a-4380-aef9-43b02520299d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] No waiting events found dispatching network-vif-plugged-3141db9a-3775-4074-8fd7-00af7879125d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:03:54 compute-2 nova_compute[226829]: 2026-01-31 08:03:54.697 226833 WARNING nova.compute.manager [req-eb38a1fa-fd29-4cf5-8501-b845c2a5d2c0 req-22fb7335-6f2a-4380-aef9-43b02520299d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Received unexpected event network-vif-plugged-3141db9a-3775-4074-8fd7-00af7879125d for instance with vm_state active and task_state None.
Jan 31 08:03:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:54.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:55 compute-2 nova_compute[226829]: 2026-01-31 08:03:55.193 226833 INFO nova.compute.manager [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Rebuilding instance
Jan 31 08:03:55 compute-2 nova_compute[226829]: 2026-01-31 08:03:55.295 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:55 compute-2 nova_compute[226829]: 2026-01-31 08:03:55.532 226833 DEBUG nova.objects.instance [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 380842ce-2460-4a95-94a6-836a6137b09c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:03:55 compute-2 nova_compute[226829]: 2026-01-31 08:03:55.555 226833 DEBUG nova.compute.manager [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:03:55 compute-2 ceph-mon[77282]: osdmap e255: 3 total, 3 up, 3 in
Jan 31 08:03:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2818416923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:55 compute-2 nova_compute[226829]: 2026-01-31 08:03:55.602 226833 DEBUG nova.objects.instance [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'pci_requests' on Instance uuid 380842ce-2460-4a95-94a6-836a6137b09c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:03:55 compute-2 nova_compute[226829]: 2026-01-31 08:03:55.614 226833 DEBUG nova.objects.instance [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'pci_devices' on Instance uuid 380842ce-2460-4a95-94a6-836a6137b09c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:03:55 compute-2 nova_compute[226829]: 2026-01-31 08:03:55.626 226833 DEBUG nova.objects.instance [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'resources' on Instance uuid 380842ce-2460-4a95-94a6-836a6137b09c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:03:55 compute-2 nova_compute[226829]: 2026-01-31 08:03:55.639 226833 DEBUG nova.objects.instance [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'migration_context' on Instance uuid 380842ce-2460-4a95-94a6-836a6137b09c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:03:55 compute-2 nova_compute[226829]: 2026-01-31 08:03:55.654 226833 DEBUG nova.objects.instance [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 31 08:03:55 compute-2 nova_compute[226829]: 2026-01-31 08:03:55.658 226833 DEBUG nova.virt.libvirt.driver [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 08:03:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:56.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:56 compute-2 nova_compute[226829]: 2026-01-31 08:03:56.493 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:56 compute-2 ceph-mon[77282]: pgmap v2005: 305 pgs: 305 active+clean; 451 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.5 MiB/s wr, 248 op/s
Jan 31 08:03:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:56.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:58.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:58 compute-2 ceph-mon[77282]: pgmap v2006: 305 pgs: 305 active+clean; 453 MiB data, 974 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.5 MiB/s wr, 232 op/s
Jan 31 08:03:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1010431320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:03:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:03:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:03:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:58.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:03:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:03:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e256 e256: 3 total, 3 up, 3 in
Jan 31 08:04:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:04:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:00.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:04:00 compute-2 nova_compute[226829]: 2026-01-31 08:04:00.299 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:00 compute-2 ceph-mon[77282]: pgmap v2007: 305 pgs: 305 active+clean; 453 MiB data, 974 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.7 MiB/s wr, 234 op/s
Jan 31 08:04:00 compute-2 ceph-mon[77282]: osdmap e256: 3 total, 3 up, 3 in
Jan 31 08:04:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:04:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:00.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:04:01 compute-2 sudo[269763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:04:01 compute-2 sudo[269763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:01 compute-2 sudo[269763]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:01 compute-2 sudo[269788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:04:01 compute-2 sudo[269788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:01 compute-2 sudo[269788]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:01 compute-2 nova_compute[226829]: 2026-01-31 08:04:01.496 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:01 compute-2 ceph-mon[77282]: pgmap v2009: 305 pgs: 305 active+clean; 433 MiB data, 964 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1008 KiB/s wr, 243 op/s
Jan 31 08:04:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:04:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:02.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:04:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:02.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:04.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:04:04 compute-2 ceph-mon[77282]: pgmap v2010: 305 pgs: 305 active+clean; 409 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 98 KiB/s wr, 165 op/s
Jan 31 08:04:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 31 08:04:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:04.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 31 08:04:05 compute-2 nova_compute[226829]: 2026-01-31 08:04:05.300 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/218588558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:05 compute-2 nova_compute[226829]: 2026-01-31 08:04:05.712 226833 DEBUG nova.virt.libvirt.driver [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 31 08:04:05 compute-2 ovn_controller[133834]: 2026-01-31T08:04:05Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ae:74:48 10.100.0.9
Jan 31 08:04:05 compute-2 ovn_controller[133834]: 2026-01-31T08:04:05Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ae:74:48 10.100.0.9
Jan 31 08:04:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:04:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:06.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:04:06 compute-2 nova_compute[226829]: 2026-01-31 08:04:06.499 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:06 compute-2 ceph-mon[77282]: pgmap v2011: 305 pgs: 305 active+clean; 374 MiB data, 932 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 88 KiB/s wr, 154 op/s
Jan 31 08:04:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:06.874 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:06.875 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:06.876 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:06.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:08.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:08 compute-2 kernel: tap3141db9a-37 (unregistering): left promiscuous mode
Jan 31 08:04:08 compute-2 NetworkManager[48999]: <info>  [1769846648.4019] device (tap3141db9a-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:04:08 compute-2 nova_compute[226829]: 2026-01-31 08:04:08.410 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:08 compute-2 ovn_controller[133834]: 2026-01-31T08:04:08Z|00351|binding|INFO|Releasing lport 3141db9a-3775-4074-8fd7-00af7879125d from this chassis (sb_readonly=0)
Jan 31 08:04:08 compute-2 ovn_controller[133834]: 2026-01-31T08:04:08Z|00352|binding|INFO|Setting lport 3141db9a-3775-4074-8fd7-00af7879125d down in Southbound
Jan 31 08:04:08 compute-2 ovn_controller[133834]: 2026-01-31T08:04:08Z|00353|binding|INFO|Removing iface tap3141db9a-37 ovn-installed in OVS
Jan 31 08:04:08 compute-2 nova_compute[226829]: 2026-01-31 08:04:08.417 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:08.423 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:74:48 10.100.0.9'], port_security=['fa:16:3e:ae:74:48 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '380842ce-2460-4a95-94a6-836a6137b09c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f3ae8b6d-25bb-4c6c-8115-39e86b6452b7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=3141db9a-3775-4074-8fd7-00af7879125d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:04:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:08.424 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 3141db9a-3775-4074-8fd7-00af7879125d in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 unbound from our chassis
Jan 31 08:04:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:08.427 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 08:04:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:08.444 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4293da02-db9b-4d00-83c0-869ab6d9da69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:08.470 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[95c14f05-983e-4e30-b92e-0073583419e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:08 compute-2 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000061.scope: Deactivated successfully.
Jan 31 08:04:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:08.474 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[f74e7dbc-d98d-46da-8f2e-010023e88e84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:08 compute-2 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000061.scope: Consumed 13.047s CPU time.
Jan 31 08:04:08 compute-2 systemd-machined[195142]: Machine qemu-42-instance-00000061 terminated.
Jan 31 08:04:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:08.496 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[3df6cc5d-214a-4458-baf6-cadfa2c97b7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:08.509 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cbfa4f68-027b-4d92-9d93-9cc75b51190a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc2535f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:76:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 107], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 683268, 'reachable_time': 20252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269828, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:08.522 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[bedddab2-ee00-45ac-8e88-7e2e77e7b42e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5cc2535f-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 683277, 'tstamp': 683277}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269829, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5cc2535f-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 683279, 'tstamp': 683279}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269829, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:08.524 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc2535f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:08 compute-2 nova_compute[226829]: 2026-01-31 08:04:08.526 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:08.530 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cc2535f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:08 compute-2 nova_compute[226829]: 2026-01-31 08:04:08.530 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:08.531 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:04:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:08.531 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5cc2535f-00, col_values=(('external_ids', {'iface-id': 'ab077a7e-cc79-4948-8987-2cd87d88deff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:08.532 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:04:08 compute-2 nova_compute[226829]: 2026-01-31 08:04:08.633 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:08 compute-2 nova_compute[226829]: 2026-01-31 08:04:08.636 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:08 compute-2 ceph-mon[77282]: pgmap v2012: 305 pgs: 305 active+clean; 376 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 446 KiB/s wr, 142 op/s
Jan 31 08:04:08 compute-2 nova_compute[226829]: 2026-01-31 08:04:08.731 226833 INFO nova.virt.libvirt.driver [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Instance shutdown successfully after 13 seconds.
Jan 31 08:04:08 compute-2 nova_compute[226829]: 2026-01-31 08:04:08.737 226833 INFO nova.virt.libvirt.driver [-] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Instance destroyed successfully.
Jan 31 08:04:08 compute-2 nova_compute[226829]: 2026-01-31 08:04:08.742 226833 INFO nova.virt.libvirt.driver [-] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Instance destroyed successfully.
Jan 31 08:04:08 compute-2 nova_compute[226829]: 2026-01-31 08:04:08.744 226833 DEBUG nova.virt.libvirt.vif [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:03:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1756792593',display_name='tempest-ServerActionsTestJSON-server-491025228',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1756792593',id=97,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:03:52Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-st38ql6s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-pro
ject-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:03:54Z,user_data=None,user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=380842ce-2460-4a95-94a6-836a6137b09c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3141db9a-3775-4074-8fd7-00af7879125d", "address": "fa:16:3e:ae:74:48", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3141db9a-37", "ovs_interfaceid": "3141db9a-3775-4074-8fd7-00af7879125d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:04:08 compute-2 nova_compute[226829]: 2026-01-31 08:04:08.744 226833 DEBUG nova.network.os_vif_util [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "3141db9a-3775-4074-8fd7-00af7879125d", "address": "fa:16:3e:ae:74:48", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3141db9a-37", "ovs_interfaceid": "3141db9a-3775-4074-8fd7-00af7879125d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:04:08 compute-2 nova_compute[226829]: 2026-01-31 08:04:08.745 226833 DEBUG nova.network.os_vif_util [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:74:48,bridge_name='br-int',has_traffic_filtering=True,id=3141db9a-3775-4074-8fd7-00af7879125d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3141db9a-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:04:08 compute-2 nova_compute[226829]: 2026-01-31 08:04:08.746 226833 DEBUG os_vif [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:74:48,bridge_name='br-int',has_traffic_filtering=True,id=3141db9a-3775-4074-8fd7-00af7879125d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3141db9a-37') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:04:08 compute-2 nova_compute[226829]: 2026-01-31 08:04:08.750 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:08 compute-2 nova_compute[226829]: 2026-01-31 08:04:08.751 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3141db9a-37, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:08 compute-2 nova_compute[226829]: 2026-01-31 08:04:08.753 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:08 compute-2 nova_compute[226829]: 2026-01-31 08:04:08.756 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:04:08 compute-2 nova_compute[226829]: 2026-01-31 08:04:08.759 226833 INFO os_vif [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:74:48,bridge_name='br-int',has_traffic_filtering=True,id=3141db9a-3775-4074-8fd7-00af7879125d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3141db9a-37')
Jan 31 08:04:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:08.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:09 compute-2 nova_compute[226829]: 2026-01-31 08:04:09.150 226833 DEBUG nova.compute.manager [req-66c3eb2d-76dc-4a87-923a-ac88bd529b58 req-6c6773bc-c5a5-4751-b2f3-7a303e7fee11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Received event network-vif-unplugged-3141db9a-3775-4074-8fd7-00af7879125d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:04:09 compute-2 nova_compute[226829]: 2026-01-31 08:04:09.150 226833 DEBUG oslo_concurrency.lockutils [req-66c3eb2d-76dc-4a87-923a-ac88bd529b58 req-6c6773bc-c5a5-4751-b2f3-7a303e7fee11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "380842ce-2460-4a95-94a6-836a6137b09c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:09 compute-2 nova_compute[226829]: 2026-01-31 08:04:09.150 226833 DEBUG oslo_concurrency.lockutils [req-66c3eb2d-76dc-4a87-923a-ac88bd529b58 req-6c6773bc-c5a5-4751-b2f3-7a303e7fee11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "380842ce-2460-4a95-94a6-836a6137b09c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:09 compute-2 nova_compute[226829]: 2026-01-31 08:04:09.151 226833 DEBUG oslo_concurrency.lockutils [req-66c3eb2d-76dc-4a87-923a-ac88bd529b58 req-6c6773bc-c5a5-4751-b2f3-7a303e7fee11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "380842ce-2460-4a95-94a6-836a6137b09c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:09 compute-2 nova_compute[226829]: 2026-01-31 08:04:09.151 226833 DEBUG nova.compute.manager [req-66c3eb2d-76dc-4a87-923a-ac88bd529b58 req-6c6773bc-c5a5-4751-b2f3-7a303e7fee11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] No waiting events found dispatching network-vif-unplugged-3141db9a-3775-4074-8fd7-00af7879125d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:04:09 compute-2 nova_compute[226829]: 2026-01-31 08:04:09.151 226833 WARNING nova.compute.manager [req-66c3eb2d-76dc-4a87-923a-ac88bd529b58 req-6c6773bc-c5a5-4751-b2f3-7a303e7fee11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Received unexpected event network-vif-unplugged-3141db9a-3775-4074-8fd7-00af7879125d for instance with vm_state active and task_state rebuilding.
Jan 31 08:04:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:04:09 compute-2 nova_compute[226829]: 2026-01-31 08:04:09.353 226833 INFO nova.virt.libvirt.driver [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Deleting instance files /var/lib/nova/instances/380842ce-2460-4a95-94a6-836a6137b09c_del
Jan 31 08:04:09 compute-2 nova_compute[226829]: 2026-01-31 08:04:09.354 226833 INFO nova.virt.libvirt.driver [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Deletion of /var/lib/nova/instances/380842ce-2460-4a95-94a6-836a6137b09c_del complete
Jan 31 08:04:09 compute-2 nova_compute[226829]: 2026-01-31 08:04:09.713 226833 DEBUG nova.virt.libvirt.driver [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:04:09 compute-2 nova_compute[226829]: 2026-01-31 08:04:09.713 226833 INFO nova.virt.libvirt.driver [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Creating image(s)
Jan 31 08:04:09 compute-2 nova_compute[226829]: 2026-01-31 08:04:09.736 226833 DEBUG nova.storage.rbd_utils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image 380842ce-2460-4a95-94a6-836a6137b09c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:04:09 compute-2 nova_compute[226829]: 2026-01-31 08:04:09.765 226833 DEBUG nova.storage.rbd_utils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image 380842ce-2460-4a95-94a6-836a6137b09c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:04:09 compute-2 nova_compute[226829]: 2026-01-31 08:04:09.786 226833 DEBUG nova.storage.rbd_utils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image 380842ce-2460-4a95-94a6-836a6137b09c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:04:09 compute-2 nova_compute[226829]: 2026-01-31 08:04:09.789 226833 DEBUG oslo_concurrency.processutils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:09 compute-2 nova_compute[226829]: 2026-01-31 08:04:09.844 226833 DEBUG oslo_concurrency.processutils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:09 compute-2 nova_compute[226829]: 2026-01-31 08:04:09.845 226833 DEBUG oslo_concurrency.lockutils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:09 compute-2 nova_compute[226829]: 2026-01-31 08:04:09.845 226833 DEBUG oslo_concurrency.lockutils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:09 compute-2 nova_compute[226829]: 2026-01-31 08:04:09.846 226833 DEBUG oslo_concurrency.lockutils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:09 compute-2 nova_compute[226829]: 2026-01-31 08:04:09.868 226833 DEBUG nova.storage.rbd_utils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image 380842ce-2460-4a95-94a6-836a6137b09c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:04:09 compute-2 nova_compute[226829]: 2026-01-31 08:04:09.871 226833 DEBUG oslo_concurrency.processutils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c 380842ce-2460-4a95-94a6-836a6137b09c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:10.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.164 226833 DEBUG oslo_concurrency.processutils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c 380842ce-2460-4a95-94a6-836a6137b09c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.243 226833 DEBUG nova.storage.rbd_utils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] resizing rbd image 380842ce-2460-4a95-94a6-836a6137b09c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.302 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.388 226833 DEBUG nova.virt.libvirt.driver [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.388 226833 DEBUG nova.virt.libvirt.driver [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Ensure instance console log exists: /var/lib/nova/instances/380842ce-2460-4a95-94a6-836a6137b09c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.389 226833 DEBUG oslo_concurrency.lockutils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.389 226833 DEBUG oslo_concurrency.lockutils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.389 226833 DEBUG oslo_concurrency.lockutils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.391 226833 DEBUG nova.virt.libvirt.driver [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Start _get_guest_xml network_info=[{"id": "3141db9a-3775-4074-8fd7-00af7879125d", "address": "fa:16:3e:ae:74:48", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3141db9a-37", "ovs_interfaceid": "3141db9a-3775-4074-8fd7-00af7879125d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:34Z,direct_url=<?>,disk_format='qcow2',id=40cf2ff3-f7ff-4843-b4ab-b7dcc843006f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.396 226833 WARNING nova.virt.libvirt.driver [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.408 226833 DEBUG nova.virt.libvirt.host [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.409 226833 DEBUG nova.virt.libvirt.host [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.415 226833 DEBUG nova.virt.libvirt.host [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.416 226833 DEBUG nova.virt.libvirt.host [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.418 226833 DEBUG nova.virt.libvirt.driver [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.418 226833 DEBUG nova.virt.hardware [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:34Z,direct_url=<?>,disk_format='qcow2',id=40cf2ff3-f7ff-4843-b4ab-b7dcc843006f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.419 226833 DEBUG nova.virt.hardware [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.419 226833 DEBUG nova.virt.hardware [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.419 226833 DEBUG nova.virt.hardware [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.419 226833 DEBUG nova.virt.hardware [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.420 226833 DEBUG nova.virt.hardware [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.420 226833 DEBUG nova.virt.hardware [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.420 226833 DEBUG nova.virt.hardware [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.420 226833 DEBUG nova.virt.hardware [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.421 226833 DEBUG nova.virt.hardware [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.421 226833 DEBUG nova.virt.hardware [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.421 226833 DEBUG nova.objects.instance [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 380842ce-2460-4a95-94a6-836a6137b09c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.452 226833 DEBUG oslo_concurrency.processutils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:10 compute-2 sudo[270047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:04:10 compute-2 sudo[270047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:10 compute-2 sudo[270047]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:10 compute-2 ceph-mon[77282]: pgmap v2013: 305 pgs: 305 active+clean; 392 MiB data, 976 MiB used, 20 GiB / 21 GiB avail; 595 KiB/s rd, 4.0 MiB/s wr, 181 op/s
Jan 31 08:04:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/4123609605' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:10 compute-2 sudo[270078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:04:10 compute-2 sudo[270078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:10 compute-2 sudo[270078]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:10 compute-2 podman[270071]: 2026-01-31 08:04:10.787536844 +0000 UTC m=+0.078817936 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:04:10 compute-2 sudo[270115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:04:10 compute-2 sudo[270115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:10 compute-2 sudo[270115]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:10 compute-2 sudo[270146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:04:10 compute-2 sudo[270146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:04:10 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3314209745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.891 226833 DEBUG oslo_concurrency.processutils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.912 226833 DEBUG nova.storage.rbd_utils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image 380842ce-2460-4a95-94a6-836a6137b09c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:04:10 compute-2 nova_compute[226829]: 2026-01-31 08:04:10.916 226833 DEBUG oslo_concurrency.processutils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:10.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:11 compute-2 sudo[270146]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:04:11 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1075596497' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:11 compute-2 nova_compute[226829]: 2026-01-31 08:04:11.333 226833 DEBUG oslo_concurrency.processutils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:11 compute-2 nova_compute[226829]: 2026-01-31 08:04:11.336 226833 DEBUG nova.virt.libvirt.vif [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:03:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1756792593',display_name='tempest-ServerActionsTestJSON-server-491025228',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1756792593',id=97,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:03:52Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-st38ql6s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:09Z,user_data=None,user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=380842ce-2460-4a95-94a6-836a6137b09c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3141db9a-3775-4074-8fd7-00af7879125d", "address": "fa:16:3e:ae:74:48", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3141db9a-37", "ovs_interfaceid": "3141db9a-3775-4074-8fd7-00af7879125d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:04:11 compute-2 nova_compute[226829]: 2026-01-31 08:04:11.337 226833 DEBUG nova.network.os_vif_util [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "3141db9a-3775-4074-8fd7-00af7879125d", "address": "fa:16:3e:ae:74:48", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3141db9a-37", "ovs_interfaceid": "3141db9a-3775-4074-8fd7-00af7879125d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:04:11 compute-2 nova_compute[226829]: 2026-01-31 08:04:11.338 226833 DEBUG nova.network.os_vif_util [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:74:48,bridge_name='br-int',has_traffic_filtering=True,id=3141db9a-3775-4074-8fd7-00af7879125d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3141db9a-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:04:11 compute-2 nova_compute[226829]: 2026-01-31 08:04:11.343 226833 DEBUG nova.virt.libvirt.driver [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:04:11 compute-2 nova_compute[226829]:   <uuid>380842ce-2460-4a95-94a6-836a6137b09c</uuid>
Jan 31 08:04:11 compute-2 nova_compute[226829]:   <name>instance-00000061</name>
Jan 31 08:04:11 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:04:11 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:04:11 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <nova:name>tempest-ServerActionsTestJSON-server-491025228</nova:name>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:04:10</nova:creationTime>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:04:11 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:04:11 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:04:11 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:04:11 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:04:11 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:04:11 compute-2 nova_compute[226829]:         <nova:user uuid="d9ed446fb2cf4fc0a4e619c6c766fddc">tempest-ServerActionsTestJSON-1391450973-project-member</nova:user>
Jan 31 08:04:11 compute-2 nova_compute[226829]:         <nova:project uuid="1fcec9ca13964c7191134db4420ab049">tempest-ServerActionsTestJSON-1391450973</nova:project>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="40cf2ff3-f7ff-4843-b4ab-b7dcc843006f"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:04:11 compute-2 nova_compute[226829]:         <nova:port uuid="3141db9a-3775-4074-8fd7-00af7879125d">
Jan 31 08:04:11 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:04:11 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:04:11 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <system>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <entry name="serial">380842ce-2460-4a95-94a6-836a6137b09c</entry>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <entry name="uuid">380842ce-2460-4a95-94a6-836a6137b09c</entry>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     </system>
Jan 31 08:04:11 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:04:11 compute-2 nova_compute[226829]:   <os>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:   </os>
Jan 31 08:04:11 compute-2 nova_compute[226829]:   <features>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:   </features>
Jan 31 08:04:11 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:04:11 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:04:11 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/380842ce-2460-4a95-94a6-836a6137b09c_disk">
Jan 31 08:04:11 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       </source>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:04:11 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/380842ce-2460-4a95-94a6-836a6137b09c_disk.config">
Jan 31 08:04:11 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       </source>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:04:11 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:ae:74:48"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <target dev="tap3141db9a-37"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/380842ce-2460-4a95-94a6-836a6137b09c/console.log" append="off"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <video>
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     </video>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:04:11 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:04:11 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:04:11 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:04:11 compute-2 nova_compute[226829]: </domain>
Jan 31 08:04:11 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:04:11 compute-2 nova_compute[226829]: 2026-01-31 08:04:11.345 226833 DEBUG nova.compute.manager [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Preparing to wait for external event network-vif-plugged-3141db9a-3775-4074-8fd7-00af7879125d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:04:11 compute-2 nova_compute[226829]: 2026-01-31 08:04:11.345 226833 DEBUG oslo_concurrency.lockutils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "380842ce-2460-4a95-94a6-836a6137b09c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:11 compute-2 nova_compute[226829]: 2026-01-31 08:04:11.346 226833 DEBUG oslo_concurrency.lockutils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "380842ce-2460-4a95-94a6-836a6137b09c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:11 compute-2 nova_compute[226829]: 2026-01-31 08:04:11.346 226833 DEBUG oslo_concurrency.lockutils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "380842ce-2460-4a95-94a6-836a6137b09c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:11 compute-2 nova_compute[226829]: 2026-01-31 08:04:11.347 226833 DEBUG nova.virt.libvirt.vif [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:03:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1756792593',display_name='tempest-ServerActionsTestJSON-server-491025228',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1756792593',id=97,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:03:52Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-st38ql6s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:09Z,user_data=None,user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=380842ce-2460-4a95-94a6-836a6137b09c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3141db9a-3775-4074-8fd7-00af7879125d", "address": "fa:16:3e:ae:74:48", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3141db9a-37", "ovs_interfaceid": "3141db9a-3775-4074-8fd7-00af7879125d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:04:11 compute-2 nova_compute[226829]: 2026-01-31 08:04:11.348 226833 DEBUG nova.network.os_vif_util [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "3141db9a-3775-4074-8fd7-00af7879125d", "address": "fa:16:3e:ae:74:48", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3141db9a-37", "ovs_interfaceid": "3141db9a-3775-4074-8fd7-00af7879125d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:04:11 compute-2 nova_compute[226829]: 2026-01-31 08:04:11.349 226833 DEBUG nova.network.os_vif_util [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:74:48,bridge_name='br-int',has_traffic_filtering=True,id=3141db9a-3775-4074-8fd7-00af7879125d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3141db9a-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:04:11 compute-2 nova_compute[226829]: 2026-01-31 08:04:11.350 226833 DEBUG os_vif [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:74:48,bridge_name='br-int',has_traffic_filtering=True,id=3141db9a-3775-4074-8fd7-00af7879125d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3141db9a-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:04:11 compute-2 nova_compute[226829]: 2026-01-31 08:04:11.351 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:11 compute-2 nova_compute[226829]: 2026-01-31 08:04:11.352 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:11 compute-2 nova_compute[226829]: 2026-01-31 08:04:11.352 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:04:11 compute-2 nova_compute[226829]: 2026-01-31 08:04:11.356 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:11 compute-2 nova_compute[226829]: 2026-01-31 08:04:11.356 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3141db9a-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:11 compute-2 nova_compute[226829]: 2026-01-31 08:04:11.357 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3141db9a-37, col_values=(('external_ids', {'iface-id': '3141db9a-3775-4074-8fd7-00af7879125d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:74:48', 'vm-uuid': '380842ce-2460-4a95-94a6-836a6137b09c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:11 compute-2 nova_compute[226829]: 2026-01-31 08:04:11.360 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:11 compute-2 NetworkManager[48999]: <info>  [1769846651.3618] manager: (tap3141db9a-37): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/181)
Jan 31 08:04:11 compute-2 nova_compute[226829]: 2026-01-31 08:04:11.364 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:04:11 compute-2 nova_compute[226829]: 2026-01-31 08:04:11.366 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:11 compute-2 nova_compute[226829]: 2026-01-31 08:04:11.367 226833 INFO os_vif [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:74:48,bridge_name='br-int',has_traffic_filtering=True,id=3141db9a-3775-4074-8fd7-00af7879125d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3141db9a-37')
Jan 31 08:04:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3314209745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1075596497' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:12.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:12 compute-2 nova_compute[226829]: 2026-01-31 08:04:12.026 226833 DEBUG nova.virt.libvirt.driver [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:04:12 compute-2 nova_compute[226829]: 2026-01-31 08:04:12.027 226833 DEBUG nova.virt.libvirt.driver [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:04:12 compute-2 nova_compute[226829]: 2026-01-31 08:04:12.027 226833 DEBUG nova.virt.libvirt.driver [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] No VIF found with MAC fa:16:3e:ae:74:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:04:12 compute-2 nova_compute[226829]: 2026-01-31 08:04:12.028 226833 INFO nova.virt.libvirt.driver [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Using config drive
Jan 31 08:04:12 compute-2 nova_compute[226829]: 2026-01-31 08:04:12.064 226833 DEBUG nova.storage.rbd_utils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image 380842ce-2460-4a95-94a6-836a6137b09c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:04:12 compute-2 nova_compute[226829]: 2026-01-31 08:04:12.177 226833 DEBUG nova.compute.manager [req-b589c673-b04a-420d-aab8-b7e0ce80f0f4 req-fdbab6bb-e4c6-44ef-a349-e9bfb691f27e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Received event network-vif-plugged-3141db9a-3775-4074-8fd7-00af7879125d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:04:12 compute-2 nova_compute[226829]: 2026-01-31 08:04:12.178 226833 DEBUG oslo_concurrency.lockutils [req-b589c673-b04a-420d-aab8-b7e0ce80f0f4 req-fdbab6bb-e4c6-44ef-a349-e9bfb691f27e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "380842ce-2460-4a95-94a6-836a6137b09c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:12 compute-2 nova_compute[226829]: 2026-01-31 08:04:12.179 226833 DEBUG oslo_concurrency.lockutils [req-b589c673-b04a-420d-aab8-b7e0ce80f0f4 req-fdbab6bb-e4c6-44ef-a349-e9bfb691f27e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "380842ce-2460-4a95-94a6-836a6137b09c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:12 compute-2 nova_compute[226829]: 2026-01-31 08:04:12.179 226833 DEBUG oslo_concurrency.lockutils [req-b589c673-b04a-420d-aab8-b7e0ce80f0f4 req-fdbab6bb-e4c6-44ef-a349-e9bfb691f27e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "380842ce-2460-4a95-94a6-836a6137b09c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:12 compute-2 nova_compute[226829]: 2026-01-31 08:04:12.180 226833 DEBUG nova.compute.manager [req-b589c673-b04a-420d-aab8-b7e0ce80f0f4 req-fdbab6bb-e4c6-44ef-a349-e9bfb691f27e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Processing event network-vif-plugged-3141db9a-3775-4074-8fd7-00af7879125d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:04:12 compute-2 nova_compute[226829]: 2026-01-31 08:04:12.455 226833 DEBUG nova.objects.instance [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 380842ce-2460-4a95-94a6-836a6137b09c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:04:12 compute-2 nova_compute[226829]: 2026-01-31 08:04:12.671 226833 DEBUG nova.objects.instance [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'keypairs' on Instance uuid 380842ce-2460-4a95-94a6-836a6137b09c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:04:12 compute-2 ceph-mon[77282]: pgmap v2014: 305 pgs: 305 active+clean; 396 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 521 KiB/s rd, 4.6 MiB/s wr, 174 op/s
Jan 31 08:04:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:12.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:13 compute-2 nova_compute[226829]: 2026-01-31 08:04:13.010 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:13 compute-2 nova_compute[226829]: 2026-01-31 08:04:13.010 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:13 compute-2 nova_compute[226829]: 2026-01-31 08:04:13.051 226833 DEBUG nova.compute.manager [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:04:13 compute-2 nova_compute[226829]: 2026-01-31 08:04:13.142 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:13 compute-2 nova_compute[226829]: 2026-01-31 08:04:13.142 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:13 compute-2 nova_compute[226829]: 2026-01-31 08:04:13.149 226833 DEBUG nova.virt.hardware [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:04:13 compute-2 nova_compute[226829]: 2026-01-31 08:04:13.149 226833 INFO nova.compute.claims [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:04:13 compute-2 nova_compute[226829]: 2026-01-31 08:04:13.518 226833 DEBUG oslo_concurrency.processutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:13 compute-2 nova_compute[226829]: 2026-01-31 08:04:13.736 226833 INFO nova.virt.libvirt.driver [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Creating config drive at /var/lib/nova/instances/380842ce-2460-4a95-94a6-836a6137b09c/disk.config
Jan 31 08:04:13 compute-2 nova_compute[226829]: 2026-01-31 08:04:13.744 226833 DEBUG oslo_concurrency.processutils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/380842ce-2460-4a95-94a6-836a6137b09c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp0toq4h9n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:13 compute-2 nova_compute[226829]: 2026-01-31 08:04:13.872 226833 DEBUG oslo_concurrency.processutils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/380842ce-2460-4a95-94a6-836a6137b09c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp0toq4h9n" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:13 compute-2 nova_compute[226829]: 2026-01-31 08:04:13.897 226833 DEBUG nova.storage.rbd_utils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image 380842ce-2460-4a95-94a6-836a6137b09c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:04:13 compute-2 nova_compute[226829]: 2026-01-31 08:04:13.900 226833 DEBUG oslo_concurrency.processutils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/380842ce-2460-4a95-94a6-836a6137b09c/disk.config 380842ce-2460-4a95-94a6-836a6137b09c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:13 compute-2 ceph-mon[77282]: pgmap v2015: 305 pgs: 305 active+clean; 361 MiB data, 965 MiB used, 20 GiB / 21 GiB avail; 435 KiB/s rd, 4.7 MiB/s wr, 156 op/s
Jan 31 08:04:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1909634964' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:04:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:04:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:04:13 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1873123469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:13 compute-2 nova_compute[226829]: 2026-01-31 08:04:13.956 226833 DEBUG oslo_concurrency.processutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:13 compute-2 nova_compute[226829]: 2026-01-31 08:04:13.962 226833 DEBUG nova.compute.provider_tree [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:04:13 compute-2 nova_compute[226829]: 2026-01-31 08:04:13.995 226833 DEBUG nova.scheduler.client.report [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:04:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:14.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.038 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.895s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.039 226833 DEBUG nova.compute.manager [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.091 226833 DEBUG nova.compute.manager [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.091 226833 DEBUG nova.network.neutron [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.127 226833 INFO nova.virt.libvirt.driver [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:04:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.251 226833 DEBUG nova.compute.manager [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.299 226833 INFO nova.virt.block_device [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Booting with volume 38f98947-0572-42f2-a1bf-7adcaef6ac4c at /dev/vda
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.524 226833 DEBUG os_brick.utils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.527 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.539 226833 DEBUG oslo_concurrency.processutils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/380842ce-2460-4a95-94a6-836a6137b09c/disk.config 380842ce-2460-4a95-94a6-836a6137b09c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.540 226833 INFO nova.virt.libvirt.driver [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Deleting local config drive /var/lib/nova/instances/380842ce-2460-4a95-94a6-836a6137b09c/disk.config because it was imported into RBD.
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.541 236868 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.542 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[5d9442c7-f1a9-4675-98cb-44f30935981e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.543 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.548 236868 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.549 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc0f1cb-b403-4f0b-ac4f-0d9d5db3490a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:70a4e945afb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.551 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.558 236868 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.558 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[e5850c88-d159-4007-b1c5-394696bf44db]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.559 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[f6e4274f-ebe6-49e8-be8f-e10cc00e4e08]: (4, 'd14f084b-ec77-4fba-801f-103494d34b3a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.560 226833 DEBUG oslo_concurrency.processutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:14 compute-2 kernel: tap3141db9a-37: entered promiscuous mode
Jan 31 08:04:14 compute-2 NetworkManager[48999]: <info>  [1769846654.5779] manager: (tap3141db9a-37): new Tun device (/org/freedesktop/NetworkManager/Devices/182)
Jan 31 08:04:14 compute-2 ovn_controller[133834]: 2026-01-31T08:04:14Z|00354|binding|INFO|Claiming lport 3141db9a-3775-4074-8fd7-00af7879125d for this chassis.
Jan 31 08:04:14 compute-2 ovn_controller[133834]: 2026-01-31T08:04:14Z|00355|binding|INFO|3141db9a-3775-4074-8fd7-00af7879125d: Claiming fa:16:3e:ae:74:48 10.100.0.9
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.579 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.582 226833 DEBUG oslo_concurrency.processutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:14 compute-2 ovn_controller[133834]: 2026-01-31T08:04:14Z|00356|binding|INFO|Setting lport 3141db9a-3775-4074-8fd7-00af7879125d ovn-installed in OVS
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.584 226833 DEBUG os_brick.initiator.connectors.lightos [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.584 226833 DEBUG os_brick.initiator.connectors.lightos [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.584 226833 DEBUG os_brick.initiator.connectors.lightos [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.585 226833 DEBUG os_brick.utils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] <== get_connector_properties: return (59ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:70a4e945afb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'd14f084b-ec77-4fba-801f-103494d34b3a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.585 226833 DEBUG nova.virt.block_device [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Updating existing volume attachment record: f51781cc-c8af-46ab-9e0f-4fa3ba2d07c1 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.587 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:14 compute-2 ovn_controller[133834]: 2026-01-31T08:04:14Z|00357|binding|INFO|Setting lport 3141db9a-3775-4074-8fd7-00af7879125d up in Southbound
Jan 31 08:04:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:14.591 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:74:48 10.100.0.9'], port_security=['fa:16:3e:ae:74:48 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '380842ce-2460-4a95-94a6-836a6137b09c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f3ae8b6d-25bb-4c6c-8115-39e86b6452b7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=3141db9a-3775-4074-8fd7-00af7879125d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:04:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:14.592 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 3141db9a-3775-4074-8fd7-00af7879125d in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 bound to our chassis
Jan 31 08:04:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:14.594 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 08:04:14 compute-2 systemd-machined[195142]: New machine qemu-43-instance-00000061.
Jan 31 08:04:14 compute-2 systemd-udevd[270350]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:04:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:14.603 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ab2221a6-b32f-4d06-86eb-d69ba528922b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:14 compute-2 NetworkManager[48999]: <info>  [1769846654.6084] device (tap3141db9a-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:04:14 compute-2 NetworkManager[48999]: <info>  [1769846654.6091] device (tap3141db9a-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:04:14 compute-2 systemd[1]: Started Virtual Machine qemu-43-instance-00000061.
Jan 31 08:04:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:14.624 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[d1ebae36-3fe3-464b-8a2d-ff300eacd850]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:14.627 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[8785b1b3-48de-4986-b173-e220222a5791]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:14.641 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[ca21b600-6b4a-4b52-b30f-e5434491fd67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:14.653 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c2584285-771c-4476-9e3b-ed02537c8456]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc2535f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:76:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 616, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 616, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 107], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 683268, 'reachable_time': 20252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270360, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:14.664 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a9d5d7-ad51-4d73-b78a-92fc9be51b2d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5cc2535f-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 683277, 'tstamp': 683277}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270363, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5cc2535f-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 683279, 'tstamp': 683279}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270363, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:14.666 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc2535f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:14 compute-2 nova_compute[226829]: 2026-01-31 08:04:14.668 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:14.668 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cc2535f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:14.669 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:04:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:14.669 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5cc2535f-00, col_values=(('external_ids', {'iface-id': 'ab077a7e-cc79-4948-8987-2cd87d88deff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:14.670 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:04:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:14.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:15 compute-2 nova_compute[226829]: 2026-01-31 08:04:15.043 226833 DEBUG nova.compute.manager [req-182de5ff-3da8-4ea5-98c2-26c351c3be93 req-5a6c2b2d-9ee0-43f2-a480-e9dd8f0cc611 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Received event network-vif-plugged-3141db9a-3775-4074-8fd7-00af7879125d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:04:15 compute-2 nova_compute[226829]: 2026-01-31 08:04:15.043 226833 DEBUG oslo_concurrency.lockutils [req-182de5ff-3da8-4ea5-98c2-26c351c3be93 req-5a6c2b2d-9ee0-43f2-a480-e9dd8f0cc611 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "380842ce-2460-4a95-94a6-836a6137b09c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:15 compute-2 nova_compute[226829]: 2026-01-31 08:04:15.044 226833 DEBUG oslo_concurrency.lockutils [req-182de5ff-3da8-4ea5-98c2-26c351c3be93 req-5a6c2b2d-9ee0-43f2-a480-e9dd8f0cc611 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "380842ce-2460-4a95-94a6-836a6137b09c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:15 compute-2 nova_compute[226829]: 2026-01-31 08:04:15.044 226833 DEBUG oslo_concurrency.lockutils [req-182de5ff-3da8-4ea5-98c2-26c351c3be93 req-5a6c2b2d-9ee0-43f2-a480-e9dd8f0cc611 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "380842ce-2460-4a95-94a6-836a6137b09c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:15 compute-2 nova_compute[226829]: 2026-01-31 08:04:15.044 226833 DEBUG nova.compute.manager [req-182de5ff-3da8-4ea5-98c2-26c351c3be93 req-5a6c2b2d-9ee0-43f2-a480-e9dd8f0cc611 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] No waiting events found dispatching network-vif-plugged-3141db9a-3775-4074-8fd7-00af7879125d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:04:15 compute-2 nova_compute[226829]: 2026-01-31 08:04:15.044 226833 WARNING nova.compute.manager [req-182de5ff-3da8-4ea5-98c2-26c351c3be93 req-5a6c2b2d-9ee0-43f2-a480-e9dd8f0cc611 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Received unexpected event network-vif-plugged-3141db9a-3775-4074-8fd7-00af7879125d for instance with vm_state active and task_state rebuild_spawning.
Jan 31 08:04:15 compute-2 nova_compute[226829]: 2026-01-31 08:04:15.113 226833 DEBUG nova.policy [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8ff6e8783c3f4132b787cb0653fff9b0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2327b93dd7d648efad6d2b303f9e462e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:04:15 compute-2 nova_compute[226829]: 2026-01-31 08:04:15.284 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Removed pending event for 380842ce-2460-4a95-94a6-836a6137b09c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 08:04:15 compute-2 nova_compute[226829]: 2026-01-31 08:04:15.285 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846655.2831895, 380842ce-2460-4a95-94a6-836a6137b09c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:04:15 compute-2 nova_compute[226829]: 2026-01-31 08:04:15.285 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] VM Started (Lifecycle Event)
Jan 31 08:04:15 compute-2 nova_compute[226829]: 2026-01-31 08:04:15.288 226833 DEBUG nova.compute.manager [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:04:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1873123469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3501060088' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:04:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:04:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:04:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:04:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:04:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:04:15 compute-2 nova_compute[226829]: 2026-01-31 08:04:15.292 226833 DEBUG nova.virt.libvirt.driver [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:04:15 compute-2 nova_compute[226829]: 2026-01-31 08:04:15.315 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:15 compute-2 nova_compute[226829]: 2026-01-31 08:04:15.319 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:04:15 compute-2 nova_compute[226829]: 2026-01-31 08:04:15.323 226833 INFO nova.virt.libvirt.driver [-] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Instance spawned successfully.
Jan 31 08:04:15 compute-2 nova_compute[226829]: 2026-01-31 08:04:15.323 226833 DEBUG nova.virt.libvirt.driver [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:04:15 compute-2 nova_compute[226829]: 2026-01-31 08:04:15.326 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:04:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:04:15 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4027809735' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:15 compute-2 nova_compute[226829]: 2026-01-31 08:04:15.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:04:15 compute-2 nova_compute[226829]: 2026-01-31 08:04:15.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:04:15 compute-2 nova_compute[226829]: 2026-01-31 08:04:15.553 226833 DEBUG nova.virt.libvirt.driver [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:04:15 compute-2 nova_compute[226829]: 2026-01-31 08:04:15.553 226833 DEBUG nova.virt.libvirt.driver [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:04:15 compute-2 nova_compute[226829]: 2026-01-31 08:04:15.553 226833 DEBUG nova.virt.libvirt.driver [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:04:15 compute-2 nova_compute[226829]: 2026-01-31 08:04:15.554 226833 DEBUG nova.virt.libvirt.driver [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:04:15 compute-2 nova_compute[226829]: 2026-01-31 08:04:15.554 226833 DEBUG nova.virt.libvirt.driver [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:04:15 compute-2 nova_compute[226829]: 2026-01-31 08:04:15.554 226833 DEBUG nova.virt.libvirt.driver [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:04:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:04:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:16.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:04:16 compute-2 nova_compute[226829]: 2026-01-31 08:04:16.179 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 31 08:04:16 compute-2 nova_compute[226829]: 2026-01-31 08:04:16.179 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846655.2833867, 380842ce-2460-4a95-94a6-836a6137b09c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:04:16 compute-2 nova_compute[226829]: 2026-01-31 08:04:16.179 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] VM Paused (Lifecycle Event)
Jan 31 08:04:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1632053411' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:16 compute-2 ceph-mon[77282]: pgmap v2016: 305 pgs: 305 active+clean; 363 MiB data, 924 MiB used, 20 GiB / 21 GiB avail; 409 KiB/s rd, 6.6 MiB/s wr, 196 op/s
Jan 31 08:04:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/4027809735' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1474416236' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:16 compute-2 nova_compute[226829]: 2026-01-31 08:04:16.361 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:16 compute-2 nova_compute[226829]: 2026-01-31 08:04:16.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:04:16 compute-2 nova_compute[226829]: 2026-01-31 08:04:16.490 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:04:16 compute-2 nova_compute[226829]: 2026-01-31 08:04:16.497 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846655.2909122, 380842ce-2460-4a95-94a6-836a6137b09c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:04:16 compute-2 nova_compute[226829]: 2026-01-31 08:04:16.497 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] VM Resumed (Lifecycle Event)
Jan 31 08:04:16 compute-2 nova_compute[226829]: 2026-01-31 08:04:16.544 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:04:16 compute-2 nova_compute[226829]: 2026-01-31 08:04:16.547 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:04:16 compute-2 nova_compute[226829]: 2026-01-31 08:04:16.558 226833 DEBUG nova.compute.manager [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:04:16 compute-2 nova_compute[226829]: 2026-01-31 08:04:16.585 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 31 08:04:16 compute-2 nova_compute[226829]: 2026-01-31 08:04:16.693 226833 DEBUG oslo_concurrency.lockutils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:16 compute-2 nova_compute[226829]: 2026-01-31 08:04:16.693 226833 DEBUG oslo_concurrency.lockutils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:16 compute-2 nova_compute[226829]: 2026-01-31 08:04:16.693 226833 DEBUG nova.objects.instance [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 31 08:04:16 compute-2 nova_compute[226829]: 2026-01-31 08:04:16.760 226833 INFO nova.virt.block_device [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Booting with volume 5839a328-04ff-4569-98df-b992860dfc6d at /dev/vdb
Jan 31 08:04:16 compute-2 nova_compute[226829]: 2026-01-31 08:04:16.891 226833 DEBUG oslo_concurrency.lockutils [None req-17130305-5ed1-4b26-8b96-2e0c93fee35f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:16.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:17 compute-2 nova_compute[226829]: 2026-01-31 08:04:17.003 226833 DEBUG os_brick.utils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:04:17 compute-2 nova_compute[226829]: 2026-01-31 08:04:17.006 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:17 compute-2 nova_compute[226829]: 2026-01-31 08:04:17.023 236868 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:17 compute-2 nova_compute[226829]: 2026-01-31 08:04:17.023 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[eeadb3c5-b888-45e7-bd86-5e645c5c84ab]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:17 compute-2 nova_compute[226829]: 2026-01-31 08:04:17.025 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:17 compute-2 nova_compute[226829]: 2026-01-31 08:04:17.031 236868 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:17 compute-2 nova_compute[226829]: 2026-01-31 08:04:17.031 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[16c562d3-c621-40b0-9c14-69f85e856983]: (4, ('InitiatorName=iqn.1994-05.com.redhat:70a4e945afb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:17 compute-2 nova_compute[226829]: 2026-01-31 08:04:17.034 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:17 compute-2 nova_compute[226829]: 2026-01-31 08:04:17.039 236868 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:17 compute-2 nova_compute[226829]: 2026-01-31 08:04:17.040 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[8b3c4913-7360-435f-a5dc-d9d72430150b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:17 compute-2 nova_compute[226829]: 2026-01-31 08:04:17.041 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[cae59884-a717-442d-81bd-c3e880b974cb]: (4, 'd14f084b-ec77-4fba-801f-103494d34b3a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:17 compute-2 nova_compute[226829]: 2026-01-31 08:04:17.042 226833 DEBUG oslo_concurrency.processutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:17 compute-2 nova_compute[226829]: 2026-01-31 08:04:17.064 226833 DEBUG oslo_concurrency.processutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] CMD "nvme version" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:17 compute-2 nova_compute[226829]: 2026-01-31 08:04:17.067 226833 DEBUG os_brick.initiator.connectors.lightos [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:04:17 compute-2 nova_compute[226829]: 2026-01-31 08:04:17.067 226833 DEBUG os_brick.initiator.connectors.lightos [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:04:17 compute-2 nova_compute[226829]: 2026-01-31 08:04:17.067 226833 DEBUG os_brick.initiator.connectors.lightos [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:04:17 compute-2 nova_compute[226829]: 2026-01-31 08:04:17.068 226833 DEBUG os_brick.utils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] <== get_connector_properties: return (63ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:70a4e945afb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'd14f084b-ec77-4fba-801f-103494d34b3a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:04:17 compute-2 nova_compute[226829]: 2026-01-31 08:04:17.068 226833 DEBUG nova.virt.block_device [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Updating existing volume attachment record: 54981775-e281-4e88-8440-b87bd4e8d7a3 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:04:17 compute-2 nova_compute[226829]: 2026-01-31 08:04:17.170 226833 DEBUG nova.compute.manager [req-1b7a7af2-f7c3-489c-a706-43ab82f53c9e req-7639b32b-ba58-4f56-badd-9fb839cd0030 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Received event network-vif-plugged-3141db9a-3775-4074-8fd7-00af7879125d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:04:17 compute-2 nova_compute[226829]: 2026-01-31 08:04:17.170 226833 DEBUG oslo_concurrency.lockutils [req-1b7a7af2-f7c3-489c-a706-43ab82f53c9e req-7639b32b-ba58-4f56-badd-9fb839cd0030 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "380842ce-2460-4a95-94a6-836a6137b09c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:17 compute-2 nova_compute[226829]: 2026-01-31 08:04:17.170 226833 DEBUG oslo_concurrency.lockutils [req-1b7a7af2-f7c3-489c-a706-43ab82f53c9e req-7639b32b-ba58-4f56-badd-9fb839cd0030 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "380842ce-2460-4a95-94a6-836a6137b09c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:17 compute-2 nova_compute[226829]: 2026-01-31 08:04:17.171 226833 DEBUG oslo_concurrency.lockutils [req-1b7a7af2-f7c3-489c-a706-43ab82f53c9e req-7639b32b-ba58-4f56-badd-9fb839cd0030 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "380842ce-2460-4a95-94a6-836a6137b09c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:17 compute-2 nova_compute[226829]: 2026-01-31 08:04:17.171 226833 DEBUG nova.compute.manager [req-1b7a7af2-f7c3-489c-a706-43ab82f53c9e req-7639b32b-ba58-4f56-badd-9fb839cd0030 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] No waiting events found dispatching network-vif-plugged-3141db9a-3775-4074-8fd7-00af7879125d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:04:17 compute-2 nova_compute[226829]: 2026-01-31 08:04:17.171 226833 WARNING nova.compute.manager [req-1b7a7af2-f7c3-489c-a706-43ab82f53c9e req-7639b32b-ba58-4f56-badd-9fb839cd0030 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Received unexpected event network-vif-plugged-3141db9a-3775-4074-8fd7-00af7879125d for instance with vm_state active and task_state None.
Jan 31 08:04:17 compute-2 nova_compute[226829]: 2026-01-31 08:04:17.462 226833 DEBUG nova.network.neutron [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Successfully created port: fbf252f3-5bb0-4da8-9564-87a1f052f2dc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:04:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:18.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:18 compute-2 nova_compute[226829]: 2026-01-31 08:04:18.222 226833 INFO nova.virt.block_device [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Booting with volume 799c0e3e-5252-4646-befe-0999c33cc481 at /dev/vdc
Jan 31 08:04:18 compute-2 ceph-mon[77282]: pgmap v2017: 305 pgs: 305 active+clean; 384 MiB data, 935 MiB used, 20 GiB / 21 GiB avail; 479 KiB/s rd, 7.5 MiB/s wr, 199 op/s
Jan 31 08:04:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3197452955' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:18 compute-2 nova_compute[226829]: 2026-01-31 08:04:18.549 226833 DEBUG os_brick.utils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:04:18 compute-2 nova_compute[226829]: 2026-01-31 08:04:18.550 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:18 compute-2 nova_compute[226829]: 2026-01-31 08:04:18.558 236868 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:18 compute-2 nova_compute[226829]: 2026-01-31 08:04:18.558 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[dd6189b9-4838-4976-9a39-8d042a03cd92]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:18 compute-2 nova_compute[226829]: 2026-01-31 08:04:18.560 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:18 compute-2 nova_compute[226829]: 2026-01-31 08:04:18.565 236868 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:18 compute-2 nova_compute[226829]: 2026-01-31 08:04:18.565 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[9e82f315-40fa-4fcb-ab24-cb924aafca0d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:70a4e945afb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:18 compute-2 nova_compute[226829]: 2026-01-31 08:04:18.568 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:18 compute-2 nova_compute[226829]: 2026-01-31 08:04:18.574 236868 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:18 compute-2 nova_compute[226829]: 2026-01-31 08:04:18.574 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[116f2ada-1bd4-4441-b5d5-35253d8b039f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:18 compute-2 nova_compute[226829]: 2026-01-31 08:04:18.575 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[b54709a4-26f7-4bb5-acca-5f09ca97ebeb]: (4, 'd14f084b-ec77-4fba-801f-103494d34b3a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:18 compute-2 nova_compute[226829]: 2026-01-31 08:04:18.576 226833 DEBUG oslo_concurrency.processutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:18 compute-2 nova_compute[226829]: 2026-01-31 08:04:18.593 226833 DEBUG oslo_concurrency.processutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] CMD "nvme version" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:18 compute-2 nova_compute[226829]: 2026-01-31 08:04:18.595 226833 DEBUG os_brick.initiator.connectors.lightos [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:04:18 compute-2 nova_compute[226829]: 2026-01-31 08:04:18.595 226833 DEBUG os_brick.initiator.connectors.lightos [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:04:18 compute-2 nova_compute[226829]: 2026-01-31 08:04:18.596 226833 DEBUG os_brick.initiator.connectors.lightos [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:04:18 compute-2 nova_compute[226829]: 2026-01-31 08:04:18.596 226833 DEBUG os_brick.utils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] <== get_connector_properties: return (45ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:70a4e945afb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'd14f084b-ec77-4fba-801f-103494d34b3a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:04:18 compute-2 nova_compute[226829]: 2026-01-31 08:04:18.596 226833 DEBUG nova.virt.block_device [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Updating existing volume attachment record: eb70e519-8547-4929-bfc1-f1fa7219cf7b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:04:18 compute-2 nova_compute[226829]: 2026-01-31 08:04:18.625 226833 DEBUG nova.network.neutron [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Successfully created port: a3ef806e-29d0-4489-9e8b-fb24ee625783 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:04:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:18.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:04:19 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2681368728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:19 compute-2 nova_compute[226829]: 2026-01-31 08:04:19.994 226833 DEBUG nova.network.neutron [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Successfully created port: 24a33ffc-f652-4c4c-84e0-5ba78e56a6ca _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:04:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:04:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:20.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.142 226833 DEBUG oslo_concurrency.lockutils [None req-35a9be57-4c12-4241-9600-6c6c1557a837 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "380842ce-2460-4a95-94a6-836a6137b09c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.142 226833 DEBUG oslo_concurrency.lockutils [None req-35a9be57-4c12-4241-9600-6c6c1557a837 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "380842ce-2460-4a95-94a6-836a6137b09c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.143 226833 DEBUG oslo_concurrency.lockutils [None req-35a9be57-4c12-4241-9600-6c6c1557a837 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "380842ce-2460-4a95-94a6-836a6137b09c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.143 226833 DEBUG oslo_concurrency.lockutils [None req-35a9be57-4c12-4241-9600-6c6c1557a837 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "380842ce-2460-4a95-94a6-836a6137b09c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.144 226833 DEBUG oslo_concurrency.lockutils [None req-35a9be57-4c12-4241-9600-6c6c1557a837 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "380842ce-2460-4a95-94a6-836a6137b09c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.145 226833 INFO nova.compute.manager [None req-35a9be57-4c12-4241-9600-6c6c1557a837 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Terminating instance
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.147 226833 DEBUG nova.compute.manager [None req-35a9be57-4c12-4241-9600-6c6c1557a837 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:04:20 compute-2 kernel: tap3141db9a-37 (unregistering): left promiscuous mode
Jan 31 08:04:20 compute-2 NetworkManager[48999]: <info>  [1769846660.1914] device (tap3141db9a-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:04:20 compute-2 ovn_controller[133834]: 2026-01-31T08:04:20Z|00358|binding|INFO|Releasing lport 3141db9a-3775-4074-8fd7-00af7879125d from this chassis (sb_readonly=0)
Jan 31 08:04:20 compute-2 ovn_controller[133834]: 2026-01-31T08:04:20Z|00359|binding|INFO|Setting lport 3141db9a-3775-4074-8fd7-00af7879125d down in Southbound
Jan 31 08:04:20 compute-2 ovn_controller[133834]: 2026-01-31T08:04:20Z|00360|binding|INFO|Removing iface tap3141db9a-37 ovn-installed in OVS
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.197 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.204 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:20 compute-2 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000061.scope: Deactivated successfully.
Jan 31 08:04:20 compute-2 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000061.scope: Consumed 5.367s CPU time.
Jan 31 08:04:20 compute-2 systemd-machined[195142]: Machine qemu-43-instance-00000061 terminated.
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.308 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.362 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.365 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:20.371 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:74:48 10.100.0.9'], port_security=['fa:16:3e:ae:74:48 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '380842ce-2460-4a95-94a6-836a6137b09c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f3ae8b6d-25bb-4c6c-8115-39e86b6452b7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=3141db9a-3775-4074-8fd7-00af7879125d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:04:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:20.373 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 3141db9a-3775-4074-8fd7-00af7879125d in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 unbound from our chassis
Jan 31 08:04:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:20.376 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.378 226833 INFO nova.virt.libvirt.driver [-] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Instance destroyed successfully.
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.379 226833 DEBUG nova.objects.instance [None req-35a9be57-4c12-4241-9600-6c6c1557a837 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'resources' on Instance uuid 380842ce-2460-4a95-94a6-836a6137b09c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:04:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:20.391 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[85c69d81-44a9-4858-8bca-7e247387cb8c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:20.414 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[ca8cfda8-b01d-465d-bf97-05d0a54c4310]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:20.417 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[5e611b74-463c-4db5-83b5-e8fb8b999cf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:20.442 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[fe90c78a-c326-4bcc-8387-14214420417a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:20.458 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[19a8954b-3ac8-48e0-b341-3988d9042a47]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc2535f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:76:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 616, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 616, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 107], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 683268, 'reachable_time': 20252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270445, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:20.469 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[feecbb00-959f-4cee-a577-a7fa4df1dd83]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5cc2535f-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 683277, 'tstamp': 683277}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270446, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5cc2535f-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 683279, 'tstamp': 683279}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270446, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:20.470 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc2535f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.472 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.475 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:20.475 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cc2535f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:20.476 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:04:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:20.476 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5cc2535f-00, col_values=(('external_ids', {'iface-id': 'ab077a7e-cc79-4948-8987-2cd87d88deff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:20.476 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.506 226833 DEBUG nova.virt.libvirt.vif [None req-35a9be57-4c12-4241-9600-6c6c1557a837 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:03:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1756792593',display_name='tempest-ServerActionsTestJSON-server-491025228',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1756792593',id=97,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:04:16Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-st38ql6s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio
',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:04:16Z,user_data=None,user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=380842ce-2460-4a95-94a6-836a6137b09c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3141db9a-3775-4074-8fd7-00af7879125d", "address": "fa:16:3e:ae:74:48", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3141db9a-37", "ovs_interfaceid": "3141db9a-3775-4074-8fd7-00af7879125d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.507 226833 DEBUG nova.network.os_vif_util [None req-35a9be57-4c12-4241-9600-6c6c1557a837 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "3141db9a-3775-4074-8fd7-00af7879125d", "address": "fa:16:3e:ae:74:48", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3141db9a-37", "ovs_interfaceid": "3141db9a-3775-4074-8fd7-00af7879125d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.507 226833 DEBUG nova.network.os_vif_util [None req-35a9be57-4c12-4241-9600-6c6c1557a837 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:74:48,bridge_name='br-int',has_traffic_filtering=True,id=3141db9a-3775-4074-8fd7-00af7879125d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3141db9a-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.508 226833 DEBUG os_vif [None req-35a9be57-4c12-4241-9600-6c6c1557a837 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:74:48,bridge_name='br-int',has_traffic_filtering=True,id=3141db9a-3775-4074-8fd7-00af7879125d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3141db9a-37') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.510 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.510 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3141db9a-37, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.511 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.513 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.518 226833 INFO os_vif [None req-35a9be57-4c12-4241-9600-6c6c1557a837 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:74:48,bridge_name='br-int',has_traffic_filtering=True,id=3141db9a-3775-4074-8fd7-00af7879125d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3141db9a-37')
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.539 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.539 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 31 08:04:20 compute-2 ceph-mon[77282]: pgmap v2018: 305 pgs: 305 active+clean; 388 MiB data, 963 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 7.2 MiB/s wr, 278 op/s
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.917 226833 DEBUG nova.compute.manager [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.919 226833 DEBUG nova.virt.libvirt.driver [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.919 226833 INFO nova.virt.libvirt.driver [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Creating image(s)
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.920 226833 DEBUG nova.virt.libvirt.driver [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.920 226833 DEBUG nova.virt.libvirt.driver [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Ensure instance console log exists: /var/lib/nova/instances/b0ff8f26-937f-43e0-b422-8a0fb0226eac/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.921 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.921 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.922 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.945 226833 INFO nova.virt.libvirt.driver [None req-35a9be57-4c12-4241-9600-6c6c1557a837 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Deleting instance files /var/lib/nova/instances/380842ce-2460-4a95-94a6-836a6137b09c_del
Jan 31 08:04:20 compute-2 nova_compute[226829]: 2026-01-31 08:04:20.946 226833 INFO nova.virt.libvirt.driver [None req-35a9be57-4c12-4241-9600-6c6c1557a837 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Deletion of /var/lib/nova/instances/380842ce-2460-4a95-94a6-836a6137b09c_del complete
Jan 31 08:04:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:20.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:21 compute-2 nova_compute[226829]: 2026-01-31 08:04:21.171 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:04:21 compute-2 nova_compute[226829]: 2026-01-31 08:04:21.172 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:04:21 compute-2 nova_compute[226829]: 2026-01-31 08:04:21.172 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:04:21 compute-2 nova_compute[226829]: 2026-01-31 08:04:21.173 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid dfba7f29-bde8-4327-a7b3-1c4fd44e045a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:04:21 compute-2 nova_compute[226829]: 2026-01-31 08:04:21.233 226833 INFO nova.compute.manager [None req-35a9be57-4c12-4241-9600-6c6c1557a837 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Took 1.09 seconds to destroy the instance on the hypervisor.
Jan 31 08:04:21 compute-2 nova_compute[226829]: 2026-01-31 08:04:21.234 226833 DEBUG oslo.service.loopingcall [None req-35a9be57-4c12-4241-9600-6c6c1557a837 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:04:21 compute-2 nova_compute[226829]: 2026-01-31 08:04:21.235 226833 DEBUG nova.compute.manager [-] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:04:21 compute-2 nova_compute[226829]: 2026-01-31 08:04:21.235 226833 DEBUG nova.network.neutron [-] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:04:21 compute-2 sudo[270467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:04:21 compute-2 sudo[270467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:21 compute-2 sudo[270467]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:21 compute-2 sudo[270492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:04:21 compute-2 sudo[270492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:21 compute-2 sudo[270492]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:21 compute-2 sudo[270518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:04:21 compute-2 sudo[270518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:21 compute-2 sudo[270518]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:21 compute-2 sudo[270543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:04:21 compute-2 sudo[270543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:21 compute-2 sudo[270543]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:21 compute-2 nova_compute[226829]: 2026-01-31 08:04:21.705 226833 DEBUG nova.compute.manager [req-aa2edd00-12a3-4869-82ab-b86a61432193 req-b8dfc327-57db-4882-83f9-9a5020a7deff 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Received event network-vif-unplugged-3141db9a-3775-4074-8fd7-00af7879125d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:04:21 compute-2 nova_compute[226829]: 2026-01-31 08:04:21.707 226833 DEBUG oslo_concurrency.lockutils [req-aa2edd00-12a3-4869-82ab-b86a61432193 req-b8dfc327-57db-4882-83f9-9a5020a7deff 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "380842ce-2460-4a95-94a6-836a6137b09c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:21 compute-2 nova_compute[226829]: 2026-01-31 08:04:21.707 226833 DEBUG oslo_concurrency.lockutils [req-aa2edd00-12a3-4869-82ab-b86a61432193 req-b8dfc327-57db-4882-83f9-9a5020a7deff 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "380842ce-2460-4a95-94a6-836a6137b09c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:21 compute-2 nova_compute[226829]: 2026-01-31 08:04:21.707 226833 DEBUG oslo_concurrency.lockutils [req-aa2edd00-12a3-4869-82ab-b86a61432193 req-b8dfc327-57db-4882-83f9-9a5020a7deff 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "380842ce-2460-4a95-94a6-836a6137b09c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:21 compute-2 nova_compute[226829]: 2026-01-31 08:04:21.708 226833 DEBUG nova.compute.manager [req-aa2edd00-12a3-4869-82ab-b86a61432193 req-b8dfc327-57db-4882-83f9-9a5020a7deff 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] No waiting events found dispatching network-vif-unplugged-3141db9a-3775-4074-8fd7-00af7879125d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:04:21 compute-2 nova_compute[226829]: 2026-01-31 08:04:21.708 226833 DEBUG nova.compute.manager [req-aa2edd00-12a3-4869-82ab-b86a61432193 req-b8dfc327-57db-4882-83f9-9a5020a7deff 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Received event network-vif-unplugged-3141db9a-3775-4074-8fd7-00af7879125d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:04:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:04:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:22.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:04:22 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:04:22 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:04:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3994239849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:22 compute-2 ceph-mon[77282]: pgmap v2019: 305 pgs: 305 active+clean; 372 MiB data, 963 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 4.2 MiB/s wr, 261 op/s
Jan 31 08:04:22 compute-2 nova_compute[226829]: 2026-01-31 08:04:22.849 226833 DEBUG nova.network.neutron [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Successfully created port: 01b25ee1-016c-4dac-860c-941a5efc920a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:04:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:22.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1877866369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:23 compute-2 nova_compute[226829]: 2026-01-31 08:04:23.151 226833 DEBUG nova.network.neutron [-] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:04:23 compute-2 podman[270568]: 2026-01-31 08:04:23.170948441 +0000 UTC m=+0.053282825 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 08:04:23 compute-2 nova_compute[226829]: 2026-01-31 08:04:23.201 226833 INFO nova.compute.manager [-] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Took 1.97 seconds to deallocate network for instance.
Jan 31 08:04:23 compute-2 nova_compute[226829]: 2026-01-31 08:04:23.233 226833 DEBUG nova.compute.manager [req-868ecda3-4b25-4e71-b840-1f038552a135 req-d1bf52f5-c07c-44de-aad4-2e695d67eb71 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Received event network-vif-deleted-3141db9a-3775-4074-8fd7-00af7879125d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:04:23 compute-2 nova_compute[226829]: 2026-01-31 08:04:23.280 226833 DEBUG oslo_concurrency.lockutils [None req-35a9be57-4c12-4241-9600-6c6c1557a837 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:23 compute-2 nova_compute[226829]: 2026-01-31 08:04:23.281 226833 DEBUG oslo_concurrency.lockutils [None req-35a9be57-4c12-4241-9600-6c6c1557a837 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:23 compute-2 nova_compute[226829]: 2026-01-31 08:04:23.388 226833 DEBUG oslo_concurrency.processutils [None req-35a9be57-4c12-4241-9600-6c6c1557a837 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:23 compute-2 nova_compute[226829]: 2026-01-31 08:04:23.768 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Updating instance_info_cache with network_info: [{"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:04:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:04:23 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2361180351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:23 compute-2 nova_compute[226829]: 2026-01-31 08:04:23.799 226833 DEBUG oslo_concurrency.processutils [None req-35a9be57-4c12-4241-9600-6c6c1557a837 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:23 compute-2 nova_compute[226829]: 2026-01-31 08:04:23.807 226833 DEBUG nova.compute.provider_tree [None req-35a9be57-4c12-4241-9600-6c6c1557a837 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:04:23 compute-2 nova_compute[226829]: 2026-01-31 08:04:23.905 226833 DEBUG nova.compute.manager [req-ab87a309-efb1-4a83-bf6a-d672ea39aae0 req-60a5bb38-aba6-4c3f-b656-b4baaeb73e1e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Received event network-vif-plugged-3141db9a-3775-4074-8fd7-00af7879125d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:04:23 compute-2 nova_compute[226829]: 2026-01-31 08:04:23.906 226833 DEBUG oslo_concurrency.lockutils [req-ab87a309-efb1-4a83-bf6a-d672ea39aae0 req-60a5bb38-aba6-4c3f-b656-b4baaeb73e1e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "380842ce-2460-4a95-94a6-836a6137b09c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:23 compute-2 nova_compute[226829]: 2026-01-31 08:04:23.906 226833 DEBUG oslo_concurrency.lockutils [req-ab87a309-efb1-4a83-bf6a-d672ea39aae0 req-60a5bb38-aba6-4c3f-b656-b4baaeb73e1e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "380842ce-2460-4a95-94a6-836a6137b09c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:23 compute-2 nova_compute[226829]: 2026-01-31 08:04:23.907 226833 DEBUG oslo_concurrency.lockutils [req-ab87a309-efb1-4a83-bf6a-d672ea39aae0 req-60a5bb38-aba6-4c3f-b656-b4baaeb73e1e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "380842ce-2460-4a95-94a6-836a6137b09c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:23 compute-2 nova_compute[226829]: 2026-01-31 08:04:23.907 226833 DEBUG nova.compute.manager [req-ab87a309-efb1-4a83-bf6a-d672ea39aae0 req-60a5bb38-aba6-4c3f-b656-b4baaeb73e1e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] No waiting events found dispatching network-vif-plugged-3141db9a-3775-4074-8fd7-00af7879125d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:04:23 compute-2 nova_compute[226829]: 2026-01-31 08:04:23.907 226833 WARNING nova.compute.manager [req-ab87a309-efb1-4a83-bf6a-d672ea39aae0 req-60a5bb38-aba6-4c3f-b656-b4baaeb73e1e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Received unexpected event network-vif-plugged-3141db9a-3775-4074-8fd7-00af7879125d for instance with vm_state deleted and task_state None.
Jan 31 08:04:23 compute-2 nova_compute[226829]: 2026-01-31 08:04:23.921 226833 DEBUG nova.scheduler.client.report [None req-35a9be57-4c12-4241-9600-6c6c1557a837 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:04:23 compute-2 nova_compute[226829]: 2026-01-31 08:04:23.926 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:04:23 compute-2 nova_compute[226829]: 2026-01-31 08:04:23.926 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:04:23 compute-2 nova_compute[226829]: 2026-01-31 08:04:23.926 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:04:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:04:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:24.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:04:24 compute-2 nova_compute[226829]: 2026-01-31 08:04:24.123 226833 DEBUG nova.network.neutron [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Successfully created port: 522d060c-fdf5-4d50-b0b9-94211986b4ee _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:04:24 compute-2 ceph-mon[77282]: pgmap v2020: 305 pgs: 305 active+clean; 362 MiB data, 961 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.0 MiB/s wr, 289 op/s
Jan 31 08:04:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2361180351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:24 compute-2 nova_compute[226829]: 2026-01-31 08:04:24.217 226833 DEBUG oslo_concurrency.lockutils [None req-35a9be57-4c12-4241-9600-6c6c1557a837 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.936s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:04:24 compute-2 nova_compute[226829]: 2026-01-31 08:04:24.423 226833 INFO nova.scheduler.client.report [None req-35a9be57-4c12-4241-9600-6c6c1557a837 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Deleted allocations for instance 380842ce-2460-4a95-94a6-836a6137b09c
Jan 31 08:04:24 compute-2 nova_compute[226829]: 2026-01-31 08:04:24.725 226833 DEBUG oslo_concurrency.lockutils [None req-35a9be57-4c12-4241-9600-6c6c1557a837 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "380842ce-2460-4a95-94a6-836a6137b09c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:24.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:25 compute-2 nova_compute[226829]: 2026-01-31 08:04:25.311 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:25 compute-2 nova_compute[226829]: 2026-01-31 08:04:25.354 226833 DEBUG nova.network.neutron [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Successfully updated port: fbf252f3-5bb0-4da8-9564-87a1f052f2dc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:04:25 compute-2 nova_compute[226829]: 2026-01-31 08:04:25.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:04:25 compute-2 nova_compute[226829]: 2026-01-31 08:04:25.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:04:25 compute-2 nova_compute[226829]: 2026-01-31 08:04:25.511 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:25 compute-2 nova_compute[226829]: 2026-01-31 08:04:25.995 226833 DEBUG nova.compute.manager [req-6467da09-00ea-47be-82a9-1cfae80c1999 req-0beb8920-4364-415c-be4d-18f0477b1f7f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-changed-fbf252f3-5bb0-4da8-9564-87a1f052f2dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:04:25 compute-2 nova_compute[226829]: 2026-01-31 08:04:25.996 226833 DEBUG nova.compute.manager [req-6467da09-00ea-47be-82a9-1cfae80c1999 req-0beb8920-4364-415c-be4d-18f0477b1f7f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Refreshing instance network info cache due to event network-changed-fbf252f3-5bb0-4da8-9564-87a1f052f2dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:04:25 compute-2 nova_compute[226829]: 2026-01-31 08:04:25.996 226833 DEBUG oslo_concurrency.lockutils [req-6467da09-00ea-47be-82a9-1cfae80c1999 req-0beb8920-4364-415c-be4d-18f0477b1f7f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:04:25 compute-2 nova_compute[226829]: 2026-01-31 08:04:25.997 226833 DEBUG oslo_concurrency.lockutils [req-6467da09-00ea-47be-82a9-1cfae80c1999 req-0beb8920-4364-415c-be4d-18f0477b1f7f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:04:25 compute-2 nova_compute[226829]: 2026-01-31 08:04:25.997 226833 DEBUG nova.network.neutron [req-6467da09-00ea-47be-82a9-1cfae80c1999 req-0beb8920-4364-415c-be4d-18f0477b1f7f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Refreshing network info cache for port fbf252f3-5bb0-4da8-9564-87a1f052f2dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:04:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:04:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:26.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:04:26 compute-2 nova_compute[226829]: 2026-01-31 08:04:26.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:04:26 compute-2 ceph-mon[77282]: pgmap v2021: 305 pgs: 305 active+clean; 341 MiB data, 948 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.9 MiB/s wr, 312 op/s
Jan 31 08:04:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3666690667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:26.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:27 compute-2 nova_compute[226829]: 2026-01-31 08:04:27.167 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:27 compute-2 nova_compute[226829]: 2026-01-31 08:04:27.168 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:27 compute-2 nova_compute[226829]: 2026-01-31 08:04:27.168 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:27 compute-2 nova_compute[226829]: 2026-01-31 08:04:27.169 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:04:27 compute-2 nova_compute[226829]: 2026-01-31 08:04:27.169 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:04:27 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4046848482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:27 compute-2 nova_compute[226829]: 2026-01-31 08:04:27.558 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.389s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:04:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:28.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:04:28 compute-2 ceph-mon[77282]: pgmap v2022: 305 pgs: 305 active+clean; 321 MiB data, 934 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1011 KiB/s wr, 279 op/s
Jan 31 08:04:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4046848482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:04:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:29.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:04:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:04:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:04:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:30.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:04:30 compute-2 nova_compute[226829]: 2026-01-31 08:04:30.314 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:30 compute-2 nova_compute[226829]: 2026-01-31 08:04:30.513 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:30 compute-2 ceph-mon[77282]: pgmap v2023: 305 pgs: 305 active+clean; 295 MiB data, 920 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 35 KiB/s wr, 283 op/s
Jan 31 08:04:30 compute-2 nova_compute[226829]: 2026-01-31 08:04:30.685 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:04:30 compute-2 nova_compute[226829]: 2026-01-31 08:04:30.686 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:04:30 compute-2 nova_compute[226829]: 2026-01-31 08:04:30.854 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:04:30 compute-2 nova_compute[226829]: 2026-01-31 08:04:30.855 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4188MB free_disk=20.862239837646484GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:04:30 compute-2 nova_compute[226829]: 2026-01-31 08:04:30.855 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:30 compute-2 nova_compute[226829]: 2026-01-31 08:04:30.856 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:31.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:31 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Jan 31 08:04:31 compute-2 nova_compute[226829]: 2026-01-31 08:04:31.924 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance dfba7f29-bde8-4327-a7b3-1c4fd44e045a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:04:31 compute-2 nova_compute[226829]: 2026-01-31 08:04:31.924 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance b0ff8f26-937f-43e0-b422-8a0fb0226eac actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:04:31 compute-2 nova_compute[226829]: 2026-01-31 08:04:31.925 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:04:31 compute-2 nova_compute[226829]: 2026-01-31 08:04:31.925 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:04:31 compute-2 nova_compute[226829]: 2026-01-31 08:04:31.974 226833 DEBUG nova.network.neutron [req-6467da09-00ea-47be-82a9-1cfae80c1999 req-0beb8920-4364-415c-be4d-18f0477b1f7f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:04:31 compute-2 nova_compute[226829]: 2026-01-31 08:04:31.999 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:32.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:32 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:04:32 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1682547861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:32 compute-2 nova_compute[226829]: 2026-01-31 08:04:32.392 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:32 compute-2 nova_compute[226829]: 2026-01-31 08:04:32.397 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:04:32 compute-2 nova_compute[226829]: 2026-01-31 08:04:32.420 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:04:32 compute-2 nova_compute[226829]: 2026-01-31 08:04:32.425 226833 DEBUG nova.network.neutron [req-6467da09-00ea-47be-82a9-1cfae80c1999 req-0beb8920-4364-415c-be4d-18f0477b1f7f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:04:32 compute-2 nova_compute[226829]: 2026-01-31 08:04:32.457 226833 DEBUG oslo_concurrency.lockutils [req-6467da09-00ea-47be-82a9-1cfae80c1999 req-0beb8920-4364-415c-be4d-18f0477b1f7f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:04:32 compute-2 nova_compute[226829]: 2026-01-31 08:04:32.463 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:04:32 compute-2 nova_compute[226829]: 2026-01-31 08:04:32.464 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:32 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:32.666 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:04:32 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:32.667 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:04:32 compute-2 nova_compute[226829]: 2026-01-31 08:04:32.667 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:32 compute-2 ceph-mon[77282]: pgmap v2024: 305 pgs: 305 active+clean; 304 MiB data, 920 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.1 MiB/s wr, 202 op/s
Jan 31 08:04:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1682547861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3472333272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:33.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:33 compute-2 nova_compute[226829]: 2026-01-31 08:04:33.140 226833 DEBUG nova.network.neutron [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Successfully updated port: fead71ab-29ff-4831-93d2-9a03e4ed64ac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:04:33 compute-2 nova_compute[226829]: 2026-01-31 08:04:33.280 226833 DEBUG nova.compute.manager [req-56947dde-9fda-40cf-aab0-0e0c759da90c req-70d84131-87bf-44b3-8c3f-b68970838246 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-changed-fead71ab-29ff-4831-93d2-9a03e4ed64ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:04:33 compute-2 nova_compute[226829]: 2026-01-31 08:04:33.281 226833 DEBUG nova.compute.manager [req-56947dde-9fda-40cf-aab0-0e0c759da90c req-70d84131-87bf-44b3-8c3f-b68970838246 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Refreshing instance network info cache due to event network-changed-fead71ab-29ff-4831-93d2-9a03e4ed64ac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:04:33 compute-2 nova_compute[226829]: 2026-01-31 08:04:33.281 226833 DEBUG oslo_concurrency.lockutils [req-56947dde-9fda-40cf-aab0-0e0c759da90c req-70d84131-87bf-44b3-8c3f-b68970838246 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:04:33 compute-2 nova_compute[226829]: 2026-01-31 08:04:33.281 226833 DEBUG oslo_concurrency.lockutils [req-56947dde-9fda-40cf-aab0-0e0c759da90c req-70d84131-87bf-44b3-8c3f-b68970838246 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:04:33 compute-2 nova_compute[226829]: 2026-01-31 08:04:33.281 226833 DEBUG nova.network.neutron [req-56947dde-9fda-40cf-aab0-0e0c759da90c req-70d84131-87bf-44b3-8c3f-b68970838246 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Refreshing network info cache for port fead71ab-29ff-4831-93d2-9a03e4ed64ac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:04:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3480641687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:33 compute-2 nova_compute[226829]: 2026-01-31 08:04:33.809 226833 DEBUG nova.network.neutron [req-56947dde-9fda-40cf-aab0-0e0c759da90c req-70d84131-87bf-44b3-8c3f-b68970838246 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:04:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:04:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:34.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:04:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:04:34 compute-2 nova_compute[226829]: 2026-01-31 08:04:34.562 226833 DEBUG nova.network.neutron [req-56947dde-9fda-40cf-aab0-0e0c759da90c req-70d84131-87bf-44b3-8c3f-b68970838246 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:04:34 compute-2 nova_compute[226829]: 2026-01-31 08:04:34.587 226833 DEBUG oslo_concurrency.lockutils [req-56947dde-9fda-40cf-aab0-0e0c759da90c req-70d84131-87bf-44b3-8c3f-b68970838246 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:04:34 compute-2 nova_compute[226829]: 2026-01-31 08:04:34.605 226833 DEBUG nova.network.neutron [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Successfully updated port: e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:04:34 compute-2 ceph-mon[77282]: pgmap v2025: 305 pgs: 305 active+clean; 309 MiB data, 923 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 127 op/s
Jan 31 08:04:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:35.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:35 compute-2 nova_compute[226829]: 2026-01-31 08:04:35.316 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:35 compute-2 nova_compute[226829]: 2026-01-31 08:04:35.377 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846660.3763936, 380842ce-2460-4a95-94a6-836a6137b09c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:04:35 compute-2 nova_compute[226829]: 2026-01-31 08:04:35.377 226833 INFO nova.compute.manager [-] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] VM Stopped (Lifecycle Event)
Jan 31 08:04:35 compute-2 nova_compute[226829]: 2026-01-31 08:04:35.514 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:35 compute-2 nova_compute[226829]: 2026-01-31 08:04:35.607 226833 DEBUG nova.compute.manager [req-c1d9e321-6258-4374-ba90-9bed8c291e32 req-a07d0994-cbb8-4b46-97f6-4fd0dc8d9388 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-changed-e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:04:35 compute-2 nova_compute[226829]: 2026-01-31 08:04:35.608 226833 DEBUG nova.compute.manager [req-c1d9e321-6258-4374-ba90-9bed8c291e32 req-a07d0994-cbb8-4b46-97f6-4fd0dc8d9388 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Refreshing instance network info cache due to event network-changed-e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:04:35 compute-2 nova_compute[226829]: 2026-01-31 08:04:35.608 226833 DEBUG oslo_concurrency.lockutils [req-c1d9e321-6258-4374-ba90-9bed8c291e32 req-a07d0994-cbb8-4b46-97f6-4fd0dc8d9388 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:04:35 compute-2 nova_compute[226829]: 2026-01-31 08:04:35.609 226833 DEBUG oslo_concurrency.lockutils [req-c1d9e321-6258-4374-ba90-9bed8c291e32 req-a07d0994-cbb8-4b46-97f6-4fd0dc8d9388 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:04:35 compute-2 nova_compute[226829]: 2026-01-31 08:04:35.609 226833 DEBUG nova.network.neutron [req-c1d9e321-6258-4374-ba90-9bed8c291e32 req-a07d0994-cbb8-4b46-97f6-4fd0dc8d9388 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Refreshing network info cache for port e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:04:35 compute-2 nova_compute[226829]: 2026-01-31 08:04:35.621 226833 DEBUG nova.compute.manager [None req-aae240b3-e50c-4732-8f81-06ff0f9e1d20 - - - - - -] [instance: 380842ce-2460-4a95-94a6-836a6137b09c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:04:35 compute-2 nova_compute[226829]: 2026-01-31 08:04:35.871 226833 DEBUG nova.network.neutron [req-c1d9e321-6258-4374-ba90-9bed8c291e32 req-a07d0994-cbb8-4b46-97f6-4fd0dc8d9388 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:04:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:36.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:36 compute-2 nova_compute[226829]: 2026-01-31 08:04:36.257 226833 DEBUG nova.network.neutron [req-c1d9e321-6258-4374-ba90-9bed8c291e32 req-a07d0994-cbb8-4b46-97f6-4fd0dc8d9388 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:04:36 compute-2 nova_compute[226829]: 2026-01-31 08:04:36.275 226833 DEBUG oslo_concurrency.lockutils [req-c1d9e321-6258-4374-ba90-9bed8c291e32 req-a07d0994-cbb8-4b46-97f6-4fd0dc8d9388 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:04:36 compute-2 ceph-mon[77282]: pgmap v2026: 305 pgs: 305 active+clean; 325 MiB data, 950 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Jan 31 08:04:36 compute-2 nova_compute[226829]: 2026-01-31 08:04:36.938 226833 DEBUG nova.network.neutron [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Successfully updated port: a3ef806e-29d0-4489-9e8b-fb24ee625783 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:04:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:37.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:37 compute-2 nova_compute[226829]: 2026-01-31 08:04:37.075 226833 DEBUG oslo_concurrency.lockutils [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:04:37 compute-2 nova_compute[226829]: 2026-01-31 08:04:37.075 226833 DEBUG oslo_concurrency.lockutils [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquired lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:04:37 compute-2 nova_compute[226829]: 2026-01-31 08:04:37.075 226833 DEBUG nova.network.neutron [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:04:37 compute-2 nova_compute[226829]: 2026-01-31 08:04:37.728 226833 DEBUG nova.compute.manager [req-a9a50272-c12e-4d6e-b4f0-ec235cf88005 req-a3ae72ac-6c74-475e-8e74-3c56cfd87468 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-changed-a3ef806e-29d0-4489-9e8b-fb24ee625783 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:04:37 compute-2 nova_compute[226829]: 2026-01-31 08:04:37.729 226833 DEBUG nova.compute.manager [req-a9a50272-c12e-4d6e-b4f0-ec235cf88005 req-a3ae72ac-6c74-475e-8e74-3c56cfd87468 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Refreshing instance network info cache due to event network-changed-a3ef806e-29d0-4489-9e8b-fb24ee625783. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:04:37 compute-2 nova_compute[226829]: 2026-01-31 08:04:37.729 226833 DEBUG oslo_concurrency.lockutils [req-a9a50272-c12e-4d6e-b4f0-ec235cf88005 req-a3ae72ac-6c74-475e-8e74-3c56cfd87468 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:04:37 compute-2 nova_compute[226829]: 2026-01-31 08:04:37.729 226833 DEBUG oslo_concurrency.lockutils [req-a9a50272-c12e-4d6e-b4f0-ec235cf88005 req-a3ae72ac-6c74-475e-8e74-3c56cfd87468 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:04:37 compute-2 nova_compute[226829]: 2026-01-31 08:04:37.729 226833 DEBUG nova.network.neutron [req-a9a50272-c12e-4d6e-b4f0-ec235cf88005 req-a3ae72ac-6c74-475e-8e74-3c56cfd87468 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Refreshing network info cache for port a3ef806e-29d0-4489-9e8b-fb24ee625783 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:04:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/29766981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3004535060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:38 compute-2 nova_compute[226829]: 2026-01-31 08:04:38.002 226833 DEBUG nova.network.neutron [req-a9a50272-c12e-4d6e-b4f0-ec235cf88005 req-a3ae72ac-6c74-475e-8e74-3c56cfd87468 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:04:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:38.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:38 compute-2 nova_compute[226829]: 2026-01-31 08:04:38.192 226833 DEBUG nova.network.neutron [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Successfully updated port: 24a33ffc-f652-4c4c-84e0-5ba78e56a6ca _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:04:38 compute-2 nova_compute[226829]: 2026-01-31 08:04:38.444 226833 DEBUG nova.network.neutron [req-a9a50272-c12e-4d6e-b4f0-ec235cf88005 req-a3ae72ac-6c74-475e-8e74-3c56cfd87468 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:04:38 compute-2 nova_compute[226829]: 2026-01-31 08:04:38.670 226833 DEBUG oslo_concurrency.lockutils [req-a9a50272-c12e-4d6e-b4f0-ec235cf88005 req-a3ae72ac-6c74-475e-8e74-3c56cfd87468 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:04:39 compute-2 ceph-mon[77282]: pgmap v2027: 305 pgs: 305 active+clean; 294 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 496 KiB/s rd, 2.1 MiB/s wr, 99 op/s
Jan 31 08:04:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:39.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:04:39 compute-2 nova_compute[226829]: 2026-01-31 08:04:39.464 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:04:39 compute-2 nova_compute[226829]: 2026-01-31 08:04:39.782 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:04:39 compute-2 nova_compute[226829]: 2026-01-31 08:04:39.783 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:04:39 compute-2 nova_compute[226829]: 2026-01-31 08:04:39.801 226833 DEBUG nova.network.neutron [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Updating instance_info_cache with network_info: [{"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:04:39 compute-2 nova_compute[226829]: 2026-01-31 08:04:39.821 226833 DEBUG oslo_concurrency.lockutils [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Releasing lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:04:39 compute-2 nova_compute[226829]: 2026-01-31 08:04:39.863 226833 DEBUG nova.compute.manager [req-b42b2627-e123-4c3a-b612-611e0d8f4036 req-830448f1-e9a8-428a-9ccd-daad764fab05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-changed-24a33ffc-f652-4c4c-84e0-5ba78e56a6ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:04:39 compute-2 nova_compute[226829]: 2026-01-31 08:04:39.864 226833 DEBUG nova.compute.manager [req-b42b2627-e123-4c3a-b612-611e0d8f4036 req-830448f1-e9a8-428a-9ccd-daad764fab05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Refreshing instance network info cache due to event network-changed-24a33ffc-f652-4c4c-84e0-5ba78e56a6ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:04:39 compute-2 nova_compute[226829]: 2026-01-31 08:04:39.864 226833 DEBUG oslo_concurrency.lockutils [req-b42b2627-e123-4c3a-b612-611e0d8f4036 req-830448f1-e9a8-428a-9ccd-daad764fab05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:04:39 compute-2 nova_compute[226829]: 2026-01-31 08:04:39.865 226833 DEBUG oslo_concurrency.lockutils [req-b42b2627-e123-4c3a-b612-611e0d8f4036 req-830448f1-e9a8-428a-9ccd-daad764fab05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:04:39 compute-2 nova_compute[226829]: 2026-01-31 08:04:39.865 226833 DEBUG nova.network.neutron [req-b42b2627-e123-4c3a-b612-611e0d8f4036 req-830448f1-e9a8-428a-9ccd-daad764fab05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Refreshing network info cache for port 24a33ffc-f652-4c4c-84e0-5ba78e56a6ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:04:39 compute-2 nova_compute[226829]: 2026-01-31 08:04:39.940 226833 DEBUG nova.virt.libvirt.driver [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Jan 31 08:04:39 compute-2 nova_compute[226829]: 2026-01-31 08:04:39.941 226833 DEBUG nova.virt.libvirt.volume.remotefs [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Creating file /var/lib/nova/instances/dfba7f29-bde8-4327-a7b3-1c4fd44e045a/6f45b33d9919471cb4c0cf1ee214e26d.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Jan 31 08:04:39 compute-2 nova_compute[226829]: 2026-01-31 08:04:39.941 226833 DEBUG oslo_concurrency.processutils [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/dfba7f29-bde8-4327-a7b3-1c4fd44e045a/6f45b33d9919471cb4c0cf1ee214e26d.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:40 compute-2 ceph-mon[77282]: pgmap v2028: 305 pgs: 305 active+clean; 248 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 503 KiB/s rd, 2.1 MiB/s wr, 109 op/s
Jan 31 08:04:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:04:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:40.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:04:40 compute-2 nova_compute[226829]: 2026-01-31 08:04:40.161 226833 DEBUG nova.network.neutron [req-b42b2627-e123-4c3a-b612-611e0d8f4036 req-830448f1-e9a8-428a-9ccd-daad764fab05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:04:40 compute-2 nova_compute[226829]: 2026-01-31 08:04:40.320 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:40 compute-2 nova_compute[226829]: 2026-01-31 08:04:40.384 226833 DEBUG oslo_concurrency.processutils [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/dfba7f29-bde8-4327-a7b3-1c4fd44e045a/6f45b33d9919471cb4c0cf1ee214e26d.tmp" returned: 1 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:40 compute-2 nova_compute[226829]: 2026-01-31 08:04:40.385 226833 DEBUG oslo_concurrency.processutils [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/dfba7f29-bde8-4327-a7b3-1c4fd44e045a/6f45b33d9919471cb4c0cf1ee214e26d.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 31 08:04:40 compute-2 nova_compute[226829]: 2026-01-31 08:04:40.385 226833 DEBUG nova.virt.libvirt.volume.remotefs [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Creating directory /var/lib/nova/instances/dfba7f29-bde8-4327-a7b3-1c4fd44e045a on remote host 192.168.122.101 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Jan 31 08:04:40 compute-2 nova_compute[226829]: 2026-01-31 08:04:40.386 226833 DEBUG oslo_concurrency.processutils [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/dfba7f29-bde8-4327-a7b3-1c4fd44e045a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:40 compute-2 nova_compute[226829]: 2026-01-31 08:04:40.459 226833 DEBUG nova.network.neutron [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Successfully updated port: 01b25ee1-016c-4dac-860c-941a5efc920a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:04:40 compute-2 nova_compute[226829]: 2026-01-31 08:04:40.516 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:40 compute-2 nova_compute[226829]: 2026-01-31 08:04:40.560 226833 DEBUG nova.network.neutron [req-b42b2627-e123-4c3a-b612-611e0d8f4036 req-830448f1-e9a8-428a-9ccd-daad764fab05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:04:40 compute-2 nova_compute[226829]: 2026-01-31 08:04:40.576 226833 DEBUG oslo_concurrency.lockutils [req-b42b2627-e123-4c3a-b612-611e0d8f4036 req-830448f1-e9a8-428a-9ccd-daad764fab05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:04:40 compute-2 nova_compute[226829]: 2026-01-31 08:04:40.599 226833 DEBUG oslo_concurrency.processutils [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/dfba7f29-bde8-4327-a7b3-1c4fd44e045a" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:40 compute-2 nova_compute[226829]: 2026-01-31 08:04:40.603 226833 DEBUG nova.virt.libvirt.driver [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 08:04:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:40.669 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:04:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:41.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:04:41 compute-2 podman[270668]: 2026-01-31 08:04:41.221949423 +0000 UTC m=+0.095801647 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:04:41 compute-2 nova_compute[226829]: 2026-01-31 08:04:41.277 226833 DEBUG nova.network.neutron [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Successfully updated port: 522d060c-fdf5-4d50-b0b9-94211986b4ee _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:04:41 compute-2 nova_compute[226829]: 2026-01-31 08:04:41.291 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Acquiring lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:04:41 compute-2 nova_compute[226829]: 2026-01-31 08:04:41.291 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Acquired lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:04:41 compute-2 nova_compute[226829]: 2026-01-31 08:04:41.291 226833 DEBUG nova.network.neutron [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:04:41 compute-2 sudo[270696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:04:41 compute-2 sudo[270696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:41 compute-2 sudo[270696]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:41 compute-2 sudo[270721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:04:41 compute-2 sudo[270721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:41 compute-2 sudo[270721]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:41 compute-2 nova_compute[226829]: 2026-01-31 08:04:41.510 226833 DEBUG nova.network.neutron [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:04:42 compute-2 nova_compute[226829]: 2026-01-31 08:04:42.049 226833 DEBUG nova.compute.manager [req-c0827bd2-ac9e-45bb-b2c3-fb280fe12cf8 req-057c320a-193a-43e8-9e24-03dff1253f84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-changed-01b25ee1-016c-4dac-860c-941a5efc920a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:04:42 compute-2 nova_compute[226829]: 2026-01-31 08:04:42.049 226833 DEBUG nova.compute.manager [req-c0827bd2-ac9e-45bb-b2c3-fb280fe12cf8 req-057c320a-193a-43e8-9e24-03dff1253f84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Refreshing instance network info cache due to event network-changed-01b25ee1-016c-4dac-860c-941a5efc920a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:04:42 compute-2 nova_compute[226829]: 2026-01-31 08:04:42.050 226833 DEBUG oslo_concurrency.lockutils [req-c0827bd2-ac9e-45bb-b2c3-fb280fe12cf8 req-057c320a-193a-43e8-9e24-03dff1253f84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:04:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:42.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:42 compute-2 ceph-mon[77282]: pgmap v2029: 305 pgs: 305 active+clean; 248 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 495 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Jan 31 08:04:42 compute-2 ovn_controller[133834]: 2026-01-31T08:04:42Z|00361|binding|INFO|Releasing lport ab077a7e-cc79-4948-8987-2cd87d88deff from this chassis (sb_readonly=0)
Jan 31 08:04:42 compute-2 nova_compute[226829]: 2026-01-31 08:04:42.688 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:42 compute-2 kernel: tap86cf2bf6-2f (unregistering): left promiscuous mode
Jan 31 08:04:42 compute-2 NetworkManager[48999]: <info>  [1769846682.9136] device (tap86cf2bf6-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:04:42 compute-2 ovn_controller[133834]: 2026-01-31T08:04:42Z|00362|binding|INFO|Releasing lport 86cf2bf6-2f28-4435-b081-a3945070ed2d from this chassis (sb_readonly=0)
Jan 31 08:04:42 compute-2 ovn_controller[133834]: 2026-01-31T08:04:42Z|00363|binding|INFO|Setting lport 86cf2bf6-2f28-4435-b081-a3945070ed2d down in Southbound
Jan 31 08:04:42 compute-2 nova_compute[226829]: 2026-01-31 08:04:42.921 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:42 compute-2 ovn_controller[133834]: 2026-01-31T08:04:42Z|00364|binding|INFO|Removing iface tap86cf2bf6-2f ovn-installed in OVS
Jan 31 08:04:42 compute-2 nova_compute[226829]: 2026-01-31 08:04:42.931 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:42 compute-2 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Jan 31 08:04:42 compute-2 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d0000005b.scope: Consumed 16.733s CPU time.
Jan 31 08:04:42 compute-2 systemd-machined[195142]: Machine qemu-41-instance-0000005b terminated.
Jan 31 08:04:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:42.990 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:98:96 10.100.0.13'], port_security=['fa:16:3e:01:98:96 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'dfba7f29-bde8-4327-a7b3-1c4fd44e045a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '12', 'neutron:security_group_ids': '7be0d68e-c4ff-4356-97f2-bd58246f6e46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.222', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=86cf2bf6-2f28-4435-b081-a3945070ed2d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:04:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:42.991 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 86cf2bf6-2f28-4435-b081-a3945070ed2d in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 unbound from our chassis
Jan 31 08:04:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:42.993 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5cc2535f-0f8f-4713-a35c-9805048a29a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:04:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:42.994 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3a1033de-1a0b-4943-9bdd-ee4c42ffa9d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:42.995 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 namespace which is not needed anymore
Jan 31 08:04:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:43.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:43 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[268916]: [NOTICE]   (268924) : haproxy version is 2.8.14-c23fe91
Jan 31 08:04:43 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[268916]: [NOTICE]   (268924) : path to executable is /usr/sbin/haproxy
Jan 31 08:04:43 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[268916]: [WARNING]  (268924) : Exiting Master process...
Jan 31 08:04:43 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[268916]: [WARNING]  (268924) : Exiting Master process...
Jan 31 08:04:43 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[268916]: [ALERT]    (268924) : Current worker (268926) exited with code 143 (Terminated)
Jan 31 08:04:43 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[268916]: [WARNING]  (268924) : All workers exited. Exiting... (0)
Jan 31 08:04:43 compute-2 systemd[1]: libpod-39d5a1f2fb77d9368a2b68672a3d4a02ce80fda50ae16a72f3fc150af0a56523.scope: Deactivated successfully.
Jan 31 08:04:43 compute-2 podman[270772]: 2026-01-31 08:04:43.107107825 +0000 UTC m=+0.043596621 container died 39d5a1f2fb77d9368a2b68672a3d4a02ce80fda50ae16a72f3fc150af0a56523 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 08:04:43 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-39d5a1f2fb77d9368a2b68672a3d4a02ce80fda50ae16a72f3fc150af0a56523-userdata-shm.mount: Deactivated successfully.
Jan 31 08:04:43 compute-2 systemd[1]: var-lib-containers-storage-overlay-38c2aa138a4f45ede76ce4fd1402f86142b48c4e93ebc76657a6a55eb228de92-merged.mount: Deactivated successfully.
Jan 31 08:04:43 compute-2 podman[270772]: 2026-01-31 08:04:43.141743384 +0000 UTC m=+0.078232150 container cleanup 39d5a1f2fb77d9368a2b68672a3d4a02ce80fda50ae16a72f3fc150af0a56523 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 08:04:43 compute-2 systemd[1]: libpod-conmon-39d5a1f2fb77d9368a2b68672a3d4a02ce80fda50ae16a72f3fc150af0a56523.scope: Deactivated successfully.
Jan 31 08:04:43 compute-2 podman[270805]: 2026-01-31 08:04:43.20618975 +0000 UTC m=+0.042547483 container remove 39d5a1f2fb77d9368a2b68672a3d4a02ce80fda50ae16a72f3fc150af0a56523 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:04:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:43.209 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0f16d99f-b884-4308-a4a7-82a328be1819]: (4, ('Sat Jan 31 08:04:43 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 (39d5a1f2fb77d9368a2b68672a3d4a02ce80fda50ae16a72f3fc150af0a56523)\n39d5a1f2fb77d9368a2b68672a3d4a02ce80fda50ae16a72f3fc150af0a56523\nSat Jan 31 08:04:43 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 (39d5a1f2fb77d9368a2b68672a3d4a02ce80fda50ae16a72f3fc150af0a56523)\n39d5a1f2fb77d9368a2b68672a3d4a02ce80fda50ae16a72f3fc150af0a56523\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:43.212 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[379af477-2642-4660-a8a0-1bad88a05047]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:43.213 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc2535f-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:43 compute-2 nova_compute[226829]: 2026-01-31 08:04:43.214 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:43 compute-2 kernel: tap5cc2535f-00: left promiscuous mode
Jan 31 08:04:43 compute-2 nova_compute[226829]: 2026-01-31 08:04:43.222 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:43.226 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[317980ac-3093-4932-9d04-d5d2b7f07502]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:43.238 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[539128c1-a4d7-4b1f-9944-255afd078cff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:43.239 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[55fdea07-a2f1-43da-b48a-85af573fc018]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:43.254 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3050ee5b-f952-4094-90a3-ab55b967a4d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 683262, 'reachable_time': 26114, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270828, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:43 compute-2 systemd[1]: run-netns-ovnmeta\x2d5cc2535f\x2d0f8f\x2d4713\x2da35c\x2d9805048a29a8.mount: Deactivated successfully.
Jan 31 08:04:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:43.259 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:04:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:43.260 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[35c85462-32a5-45be-9da6-43dedb0f5050]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:43 compute-2 nova_compute[226829]: 2026-01-31 08:04:43.564 226833 DEBUG nova.compute.manager [req-5314fe09-96d5-4451-b9f4-327272e1f5c6 req-f95906dc-1f02-448e-aba2-dd850bf5504d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-unplugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:04:43 compute-2 nova_compute[226829]: 2026-01-31 08:04:43.565 226833 DEBUG oslo_concurrency.lockutils [req-5314fe09-96d5-4451-b9f4-327272e1f5c6 req-f95906dc-1f02-448e-aba2-dd850bf5504d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:43 compute-2 nova_compute[226829]: 2026-01-31 08:04:43.565 226833 DEBUG oslo_concurrency.lockutils [req-5314fe09-96d5-4451-b9f4-327272e1f5c6 req-f95906dc-1f02-448e-aba2-dd850bf5504d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:43 compute-2 nova_compute[226829]: 2026-01-31 08:04:43.566 226833 DEBUG oslo_concurrency.lockutils [req-5314fe09-96d5-4451-b9f4-327272e1f5c6 req-f95906dc-1f02-448e-aba2-dd850bf5504d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:43 compute-2 nova_compute[226829]: 2026-01-31 08:04:43.566 226833 DEBUG nova.compute.manager [req-5314fe09-96d5-4451-b9f4-327272e1f5c6 req-f95906dc-1f02-448e-aba2-dd850bf5504d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-unplugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:04:43 compute-2 nova_compute[226829]: 2026-01-31 08:04:43.567 226833 WARNING nova.compute.manager [req-5314fe09-96d5-4451-b9f4-327272e1f5c6 req-f95906dc-1f02-448e-aba2-dd850bf5504d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-unplugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state active and task_state resize_migrating.
Jan 31 08:04:43 compute-2 nova_compute[226829]: 2026-01-31 08:04:43.620 226833 INFO nova.virt.libvirt.driver [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Instance shutdown successfully after 3 seconds.
Jan 31 08:04:43 compute-2 nova_compute[226829]: 2026-01-31 08:04:43.626 226833 INFO nova.virt.libvirt.driver [-] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Instance destroyed successfully.
Jan 31 08:04:43 compute-2 nova_compute[226829]: 2026-01-31 08:04:43.627 226833 DEBUG nova.virt.libvirt.vif [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:01:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1163372726',display_name='tempest-ServerActionsTestJSON-server-1163372726',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1163372726',id=91,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:01:28Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-30t53h1p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:04:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=dfba7f29-bde8-4327-a7b3-1c4fd44e045a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1076068063-network", "vif_mac": "fa:16:3e:01:98:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:04:43 compute-2 nova_compute[226829]: 2026-01-31 08:04:43.628 226833 DEBUG nova.network.os_vif_util [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1076068063-network", "vif_mac": "fa:16:3e:01:98:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:04:43 compute-2 nova_compute[226829]: 2026-01-31 08:04:43.629 226833 DEBUG nova.network.os_vif_util [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:04:43 compute-2 nova_compute[226829]: 2026-01-31 08:04:43.629 226833 DEBUG os_vif [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:04:43 compute-2 nova_compute[226829]: 2026-01-31 08:04:43.632 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:43 compute-2 nova_compute[226829]: 2026-01-31 08:04:43.633 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86cf2bf6-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:43 compute-2 nova_compute[226829]: 2026-01-31 08:04:43.634 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:43 compute-2 nova_compute[226829]: 2026-01-31 08:04:43.636 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:43 compute-2 nova_compute[226829]: 2026-01-31 08:04:43.642 226833 INFO os_vif [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f')
Jan 31 08:04:43 compute-2 nova_compute[226829]: 2026-01-31 08:04:43.647 226833 DEBUG nova.virt.libvirt.driver [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:04:43 compute-2 nova_compute[226829]: 2026-01-31 08:04:43.648 226833 DEBUG nova.virt.libvirt.driver [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:04:43 compute-2 nova_compute[226829]: 2026-01-31 08:04:43.935 226833 DEBUG neutronclient.v2_0.client [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 86cf2bf6-2f28-4435-b081-a3945070ed2d for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 31 08:04:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:44.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:44 compute-2 nova_compute[226829]: 2026-01-31 08:04:44.193 226833 DEBUG oslo_concurrency.lockutils [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:44 compute-2 nova_compute[226829]: 2026-01-31 08:04:44.193 226833 DEBUG oslo_concurrency.lockutils [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:44 compute-2 nova_compute[226829]: 2026-01-31 08:04:44.194 226833 DEBUG oslo_concurrency.lockutils [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:04:44 compute-2 ceph-mon[77282]: pgmap v2030: 305 pgs: 305 active+clean; 248 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 482 KiB/s rd, 1.1 MiB/s wr, 89 op/s
Jan 31 08:04:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:04:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3868303849' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:04:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:04:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3868303849' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:04:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:45.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:45 compute-2 nova_compute[226829]: 2026-01-31 08:04:45.321 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3868303849' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:04:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3868303849' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:04:45 compute-2 nova_compute[226829]: 2026-01-31 08:04:45.837 226833 DEBUG nova.compute.manager [req-8d588c46-f597-4052-be82-e8fbdfa1bb33 req-231f0324-fec8-465b-b109-75066f061da9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:04:45 compute-2 nova_compute[226829]: 2026-01-31 08:04:45.837 226833 DEBUG oslo_concurrency.lockutils [req-8d588c46-f597-4052-be82-e8fbdfa1bb33 req-231f0324-fec8-465b-b109-75066f061da9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:45 compute-2 nova_compute[226829]: 2026-01-31 08:04:45.838 226833 DEBUG oslo_concurrency.lockutils [req-8d588c46-f597-4052-be82-e8fbdfa1bb33 req-231f0324-fec8-465b-b109-75066f061da9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:45 compute-2 nova_compute[226829]: 2026-01-31 08:04:45.838 226833 DEBUG oslo_concurrency.lockutils [req-8d588c46-f597-4052-be82-e8fbdfa1bb33 req-231f0324-fec8-465b-b109-75066f061da9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:45 compute-2 nova_compute[226829]: 2026-01-31 08:04:45.838 226833 DEBUG nova.compute.manager [req-8d588c46-f597-4052-be82-e8fbdfa1bb33 req-231f0324-fec8-465b-b109-75066f061da9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:04:45 compute-2 nova_compute[226829]: 2026-01-31 08:04:45.838 226833 WARNING nova.compute.manager [req-8d588c46-f597-4052-be82-e8fbdfa1bb33 req-231f0324-fec8-465b-b109-75066f061da9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state active and task_state resize_migrated.
Jan 31 08:04:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:46.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:46 compute-2 ceph-mon[77282]: pgmap v2031: 305 pgs: 305 active+clean; 248 MiB data, 916 MiB used, 20 GiB / 21 GiB avail; 479 KiB/s rd, 860 KiB/s wr, 86 op/s
Jan 31 08:04:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:47.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:48.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:48 compute-2 nova_compute[226829]: 2026-01-31 08:04:48.144 226833 DEBUG nova.compute.manager [req-7804bbd1-6c24-46f7-b8a7-e9970773bc11 req-1801f5db-579c-493d-acb1-edcc31483f39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-changed-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:04:48 compute-2 nova_compute[226829]: 2026-01-31 08:04:48.145 226833 DEBUG nova.compute.manager [req-7804bbd1-6c24-46f7-b8a7-e9970773bc11 req-1801f5db-579c-493d-acb1-edcc31483f39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Refreshing instance network info cache due to event network-changed-86cf2bf6-2f28-4435-b081-a3945070ed2d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:04:48 compute-2 nova_compute[226829]: 2026-01-31 08:04:48.145 226833 DEBUG oslo_concurrency.lockutils [req-7804bbd1-6c24-46f7-b8a7-e9970773bc11 req-1801f5db-579c-493d-acb1-edcc31483f39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:04:48 compute-2 nova_compute[226829]: 2026-01-31 08:04:48.145 226833 DEBUG oslo_concurrency.lockutils [req-7804bbd1-6c24-46f7-b8a7-e9970773bc11 req-1801f5db-579c-493d-acb1-edcc31483f39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:04:48 compute-2 nova_compute[226829]: 2026-01-31 08:04:48.146 226833 DEBUG nova.network.neutron [req-7804bbd1-6c24-46f7-b8a7-e9970773bc11 req-1801f5db-579c-493d-acb1-edcc31483f39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Refreshing network info cache for port 86cf2bf6-2f28-4435-b081-a3945070ed2d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:04:48 compute-2 ceph-mon[77282]: pgmap v2032: 305 pgs: 305 active+clean; 248 MiB data, 916 MiB used, 20 GiB / 21 GiB avail; 99 KiB/s rd, 64 KiB/s wr, 43 op/s
Jan 31 08:04:48 compute-2 nova_compute[226829]: 2026-01-31 08:04:48.635 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:49.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:04:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2069683438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:50.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:50 compute-2 nova_compute[226829]: 2026-01-31 08:04:50.325 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:50 compute-2 nova_compute[226829]: 2026-01-31 08:04:50.612 226833 DEBUG nova.network.neutron [req-7804bbd1-6c24-46f7-b8a7-e9970773bc11 req-1801f5db-579c-493d-acb1-edcc31483f39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Updated VIF entry in instance network info cache for port 86cf2bf6-2f28-4435-b081-a3945070ed2d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:04:50 compute-2 nova_compute[226829]: 2026-01-31 08:04:50.613 226833 DEBUG nova.network.neutron [req-7804bbd1-6c24-46f7-b8a7-e9970773bc11 req-1801f5db-579c-493d-acb1-edcc31483f39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Updating instance_info_cache with network_info: [{"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:04:50 compute-2 ceph-mon[77282]: pgmap v2033: 305 pgs: 305 active+clean; 248 MiB data, 916 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 6.2 KiB/s wr, 25 op/s
Jan 31 08:04:50 compute-2 nova_compute[226829]: 2026-01-31 08:04:50.642 226833 DEBUG oslo_concurrency.lockutils [req-7804bbd1-6c24-46f7-b8a7-e9970773bc11 req-1801f5db-579c-493d-acb1-edcc31483f39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:04:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e257 e257: 3 total, 3 up, 3 in
Jan 31 08:04:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:04:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:51.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:04:51 compute-2 ceph-mon[77282]: osdmap e257: 3 total, 3 up, 3 in
Jan 31 08:04:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/91292284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:52.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:52 compute-2 nova_compute[226829]: 2026-01-31 08:04:52.791 226833 DEBUG nova.compute.manager [req-ed7f284a-5722-44e9-b532-5334cce88790 req-24a331d1-5bee-48aa-ae8f-bc1c9ab11f4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:04:52 compute-2 nova_compute[226829]: 2026-01-31 08:04:52.792 226833 DEBUG oslo_concurrency.lockutils [req-ed7f284a-5722-44e9-b532-5334cce88790 req-24a331d1-5bee-48aa-ae8f-bc1c9ab11f4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:52 compute-2 nova_compute[226829]: 2026-01-31 08:04:52.792 226833 DEBUG oslo_concurrency.lockutils [req-ed7f284a-5722-44e9-b532-5334cce88790 req-24a331d1-5bee-48aa-ae8f-bc1c9ab11f4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:52 compute-2 nova_compute[226829]: 2026-01-31 08:04:52.792 226833 DEBUG oslo_concurrency.lockutils [req-ed7f284a-5722-44e9-b532-5334cce88790 req-24a331d1-5bee-48aa-ae8f-bc1c9ab11f4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:52 compute-2 nova_compute[226829]: 2026-01-31 08:04:52.792 226833 DEBUG nova.compute.manager [req-ed7f284a-5722-44e9-b532-5334cce88790 req-24a331d1-5bee-48aa-ae8f-bc1c9ab11f4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:04:52 compute-2 nova_compute[226829]: 2026-01-31 08:04:52.793 226833 WARNING nova.compute.manager [req-ed7f284a-5722-44e9-b532-5334cce88790 req-24a331d1-5bee-48aa-ae8f-bc1c9ab11f4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state active and task_state resize_finish.
Jan 31 08:04:52 compute-2 ceph-mon[77282]: pgmap v2035: 305 pgs: 305 active+clean; 264 MiB data, 916 MiB used, 20 GiB / 21 GiB avail; 5.1 KiB/s rd, 640 KiB/s wr, 6 op/s
Jan 31 08:04:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3432603807' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:53.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:53 compute-2 nova_compute[226829]: 2026-01-31 08:04:53.070 226833 DEBUG nova.network.neutron [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Updating instance_info_cache with network_info: [{"id": "fbf252f3-5bb0-4da8-9564-87a1f052f2dc", "address": "fa:16:3e:57:8d:67", "network": {"id": "79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-638962480-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf252f3-5b", "ovs_interfaceid": "fbf252f3-5bb0-4da8-9564-87a1f052f2dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "address": "fa:16:3e:52:8d:5f", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": 
null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfead71ab-29", "ovs_interfaceid": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "address": "fa:16:3e:c2:33:c0", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0f7cff6-3b", "ovs_interfaceid": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "address": "fa:16:3e:88:50:1d", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.122", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ef806e-29", "ovs_interfaceid": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "address": "fa:16:3e:3c:a6:94", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24a33ffc-f6", "ovs_interfaceid": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "01b25ee1-016c-4dac-860c-941a5efc920a", "address": "fa:16:3e:0a:8f:2c", "network": {"id": "2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c", "bridge": "br-int", "label": "tempest-device-tagging-net2-30530444", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b25ee1-01", "ovs_interfaceid": "01b25ee1-016c-4dac-860c-941a5efc920a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "522d060c-fdf5-4d50-b0b9-94211986b4ee", "address": "fa:16:3e:3b:3a:d6", "network": {"id": "2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c", "bridge": "br-int", "label": "tempest-device-tagging-net2-30530444", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522d060c-fd", "ovs_interfaceid": "522d060c-fdf5-4d50-b0b9-94211986b4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:04:53 compute-2 nova_compute[226829]: 2026-01-31 08:04:53.248 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Releasing lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:04:53 compute-2 nova_compute[226829]: 2026-01-31 08:04:53.248 226833 DEBUG nova.compute.manager [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Instance network_info: |[{"id": "fbf252f3-5bb0-4da8-9564-87a1f052f2dc", "address": "fa:16:3e:57:8d:67", "network": {"id": "79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-638962480-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf252f3-5b", "ovs_interfaceid": "fbf252f3-5bb0-4da8-9564-87a1f052f2dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "address": "fa:16:3e:52:8d:5f", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfead71ab-29", "ovs_interfaceid": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "address": "fa:16:3e:c2:33:c0", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0f7cff6-3b", "ovs_interfaceid": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "address": "fa:16:3e:88:50:1d", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.122", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ef806e-29", "ovs_interfaceid": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "address": "fa:16:3e:3c:a6:94", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24a33ffc-f6", "ovs_interfaceid": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "01b25ee1-016c-4dac-860c-941a5efc920a", "address": "fa:16:3e:0a:8f:2c", "network": {"id": "2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c", "bridge": "br-int", "label": "tempest-device-tagging-net2-30530444", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b25ee1-01", "ovs_interfaceid": "01b25ee1-016c-4dac-860c-941a5efc920a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "522d060c-fdf5-4d50-b0b9-94211986b4ee", "address": "fa:16:3e:3b:3a:d6", "network": {"id": "2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c", "bridge": "br-int", "label": "tempest-device-tagging-net2-30530444", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522d060c-fd", "ovs_interfaceid": "522d060c-fdf5-4d50-b0b9-94211986b4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:04:53 compute-2 nova_compute[226829]: 2026-01-31 08:04:53.249 226833 DEBUG oslo_concurrency.lockutils [req-c0827bd2-ac9e-45bb-b2c3-fb280fe12cf8 req-057c320a-193a-43e8-9e24-03dff1253f84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:04:53 compute-2 nova_compute[226829]: 2026-01-31 08:04:53.249 226833 DEBUG nova.network.neutron [req-c0827bd2-ac9e-45bb-b2c3-fb280fe12cf8 req-057c320a-193a-43e8-9e24-03dff1253f84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Refreshing network info cache for port 01b25ee1-016c-4dac-860c-941a5efc920a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:04:53 compute-2 nova_compute[226829]: 2026-01-31 08:04:53.257 226833 DEBUG nova.virt.libvirt.driver [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Start _get_guest_xml network_info=[{"id": "fbf252f3-5bb0-4da8-9564-87a1f052f2dc", "address": "fa:16:3e:57:8d:67", "network": {"id": "79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-638962480-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf252f3-5b", "ovs_interfaceid": "fbf252f3-5bb0-4da8-9564-87a1f052f2dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "address": "fa:16:3e:52:8d:5f", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfead71ab-29", "ovs_interfaceid": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "address": "fa:16:3e:c2:33:c0", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0f7cff6-3b", "ovs_interfaceid": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "address": "fa:16:3e:88:50:1d", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.122", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ef806e-29", "ovs_interfaceid": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "address": "fa:16:3e:3c:a6:94", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24a33ffc-f6", "ovs_interfaceid": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "01b25ee1-016c-4dac-860c-941a5efc920a", "address": "fa:16:3e:0a:8f:2c", "network": {"id": "2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c", "bridge": "br-int", "label": "tempest-device-tagging-net2-30530444", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b25ee1-01", "ovs_interfaceid": "01b25ee1-016c-4dac-860c-941a5efc920a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "522d060c-fdf5-4d50-b0b9-94211986b4ee", "address": "fa:16:3e:3b:3a:d6", "network": {"id": "2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c", "bridge": "br-int", "label": "tempest-device-tagging-net2-30530444", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522d060c-fd", "ovs_interfaceid": "522d060c-fdf5-4d50-b0b9-94211986b4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk', 'boot_index': '2'}, '/dev/vdc': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk', 'boot_index': '3'}, 
'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,
Jan 31 08:04:53 compute-2 nova_compute[226829]: min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'delete_on_termination': False, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-38f98947-0572-42f2-a1bf-7adcaef6ac4c', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '38f98947-0572-42f2-a1bf-7adcaef6ac4c', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'b0ff8f26-937f-43e0-b422-8a0fb0226eac', 'attached_at': '', 'detached_at': '', 'volume_id': '38f98947-0572-42f2-a1bf-7adcaef6ac4c', 'serial': '38f98947-0572-42f2-a1bf-7adcaef6ac4c'}, 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'attachment_id': 'f51781cc-c8af-46ab-9e0f-4fa3ba2d07c1', 'boot_index': 0, 'volume_type': None}, {'guest_format': None, 'delete_on_termination': False, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-5839a328-04ff-4569-98df-b992860dfc6d', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '5839a328-04ff-4569-98df-b992860dfc6d', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'b0ff8f26-937f-43e0-b422-8a0fb0226eac', 'attached_at': 
'', 'detached_at': '', 'volume_id': '5839a328-04ff-4569-98df-b992860dfc6d', 'serial': '5839a328-04ff-4569-98df-b992860dfc6d'}, 'disk_bus': 'virtio', 'mount_device': '/dev/vdb', 'attachment_id': '54981775-e281-4e88-8440-b87bd4e8d7a3', 'boot_index': 1, 'volume_type': None}, {'guest_format': None, 'delete_on_termination': False, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-799c0e3e-5252-4646-befe-0999c33cc481', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '799c0e3e-5252-4646-befe-0999c33cc481', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'b0ff8f26-937f-43e0-b422-8a0fb0226eac', 'attached_at': '', 'detached_at': '', 'volume_id': '799c0e3e-5252-4646-befe-0999c33cc481', 'serial': '799c0e3e-5252-4646-befe-0999c33cc481'}, 'disk_bus': 'virtio', 'mount_device': '/dev/vdc', 'attachment_id': 'eb70e519-8547-4929-bfc1-f1fa7219cf7b', 'boot_index': 2, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:04:53 compute-2 nova_compute[226829]: 2026-01-31 08:04:53.262 226833 WARNING nova.virt.libvirt.driver [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:04:53 compute-2 nova_compute[226829]: 2026-01-31 08:04:53.268 226833 DEBUG nova.virt.libvirt.host [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:04:53 compute-2 nova_compute[226829]: 2026-01-31 08:04:53.269 226833 DEBUG nova.virt.libvirt.host [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:04:53 compute-2 nova_compute[226829]: 2026-01-31 08:04:53.274 226833 DEBUG nova.virt.libvirt.host [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:04:53 compute-2 nova_compute[226829]: 2026-01-31 08:04:53.274 226833 DEBUG nova.virt.libvirt.host [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:04:53 compute-2 nova_compute[226829]: 2026-01-31 08:04:53.276 226833 DEBUG nova.virt.libvirt.driver [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:04:53 compute-2 nova_compute[226829]: 2026-01-31 08:04:53.276 226833 DEBUG nova.virt.hardware [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:04:53 compute-2 nova_compute[226829]: 2026-01-31 08:04:53.277 226833 DEBUG nova.virt.hardware [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:04:53 compute-2 nova_compute[226829]: 2026-01-31 08:04:53.277 226833 DEBUG nova.virt.hardware [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:04:53 compute-2 nova_compute[226829]: 2026-01-31 08:04:53.278 226833 DEBUG nova.virt.hardware [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:04:53 compute-2 nova_compute[226829]: 2026-01-31 08:04:53.278 226833 DEBUG nova.virt.hardware [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:04:53 compute-2 nova_compute[226829]: 2026-01-31 08:04:53.278 226833 DEBUG nova.virt.hardware [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:04:53 compute-2 nova_compute[226829]: 2026-01-31 08:04:53.278 226833 DEBUG nova.virt.hardware [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:04:53 compute-2 nova_compute[226829]: 2026-01-31 08:04:53.279 226833 DEBUG nova.virt.hardware [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:04:53 compute-2 nova_compute[226829]: 2026-01-31 08:04:53.279 226833 DEBUG nova.virt.hardware [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:04:53 compute-2 nova_compute[226829]: 2026-01-31 08:04:53.279 226833 DEBUG nova.virt.hardware [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:04:53 compute-2 nova_compute[226829]: 2026-01-31 08:04:53.280 226833 DEBUG nova.virt.hardware [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:04:53 compute-2 nova_compute[226829]: 2026-01-31 08:04:53.312 226833 DEBUG nova.storage.rbd_utils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] rbd image b0ff8f26-937f-43e0-b422-8a0fb0226eac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:04:53 compute-2 nova_compute[226829]: 2026-01-31 08:04:53.316 226833 DEBUG oslo_concurrency.processutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:53 compute-2 rsyslogd[1005]: message too long (8192) with configured size 8096, begin of message is: 2026-01-31 08:04:53.257 226833 DEBUG nova.virt.libvirt.driver [None req-03bd1e61 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 08:04:53 compute-2 nova_compute[226829]: 2026-01-31 08:04:53.638 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:04:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/884422311' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:53 compute-2 nova_compute[226829]: 2026-01-31 08:04:53.718 226833 DEBUG oslo_concurrency.processutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:54 compute-2 ceph-mon[77282]: pgmap v2036: 305 pgs: 305 active+clean; 269 MiB data, 919 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 848 KiB/s wr, 34 op/s
Jan 31 08:04:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/884422311' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:04:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:54.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:04:54 compute-2 podman[270876]: 2026-01-31 08:04:54.155649523 +0000 UTC m=+0.045326069 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:04:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:04:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:55.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.040 226833 DEBUG nova.virt.libvirt.vif [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:04:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1727712073',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1727712073',id=99,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNj9e0Kbzl+E2OM6alabFUbQI9DpSInfHQfG2T4c3RhmiTeeu25YMPQfB0gfvWz7Jm1UlInYbJJ4YMwlil38MyDKg2n3hamSf3QryEahlV36B2sdJMqBp1FNXRu9/iAVA==',key_name='tempest-keypair-1025048098',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2327b93dd7d648efad6d2b303f9e462e',ramdisk_id='',reservation_id='r-0751v0zo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',i
mage_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-948331740',owner_user_name='tempest-TaggedBootDevicesTest_v242-948331740-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8ff6e8783c3f4132b787cb0653fff9b0',uuid=b0ff8f26-937f-43e0-b422-8a0fb0226eac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fbf252f3-5bb0-4da8-9564-87a1f052f2dc", "address": "fa:16:3e:57:8d:67", "network": {"id": "79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-638962480-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf252f3-5b", "ovs_interfaceid": "fbf252f3-5bb0-4da8-9564-87a1f052f2dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.042 226833 DEBUG nova.network.os_vif_util [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converting VIF {"id": "fbf252f3-5bb0-4da8-9564-87a1f052f2dc", "address": "fa:16:3e:57:8d:67", "network": {"id": "79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-638962480-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf252f3-5b", "ovs_interfaceid": "fbf252f3-5bb0-4da8-9564-87a1f052f2dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.044 226833 DEBUG nova.network.os_vif_util [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:8d:67,bridge_name='br-int',has_traffic_filtering=True,id=fbf252f3-5bb0-4da8-9564-87a1f052f2dc,network=Network(79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbf252f3-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.046 226833 DEBUG nova.virt.libvirt.vif [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:04:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1727712073',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1727712073',id=99,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNj9e0Kbzl+E2OM6alabFUbQI9DpSInfHQfG2T4c3RhmiTeeu25YMPQfB0gfvWz7Jm1UlInYbJJ4YMwlil38MyDKg2n3hamSf3QryEahlV36B2sdJMqBp1FNXRu9/iAVA==',key_name='tempest-keypair-1025048098',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2327b93dd7d648efad6d2b303f9e462e',ramdisk_id='',reservation_id='r-0751v0zo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',i
mage_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-948331740',owner_user_name='tempest-TaggedBootDevicesTest_v242-948331740-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8ff6e8783c3f4132b787cb0653fff9b0',uuid=b0ff8f26-937f-43e0-b422-8a0fb0226eac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "address": "fa:16:3e:52:8d:5f", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfead71ab-29", "ovs_interfaceid": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.047 226833 DEBUG nova.network.os_vif_util [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converting VIF {"id": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "address": "fa:16:3e:52:8d:5f", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfead71ab-29", "ovs_interfaceid": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.048 226833 DEBUG nova.network.os_vif_util [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:8d:5f,bridge_name='br-int',has_traffic_filtering=True,id=fead71ab-29ff-4831-93d2-9a03e4ed64ac,network=Network(0fc23939-3578-48f7-b98e-07928f016ed0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfead71ab-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.050 226833 DEBUG nova.virt.libvirt.vif [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:04:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1727712073',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1727712073',id=99,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNj9e0Kbzl+E2OM6alabFUbQI9DpSInfHQfG2T4c3RhmiTeeu25YMPQfB0gfvWz7Jm1UlInYbJJ4YMwlil38MyDKg2n3hamSf3QryEahlV36B2sdJMqBp1FNXRu9/iAVA==',key_name='tempest-keypair-1025048098',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2327b93dd7d648efad6d2b303f9e462e',ramdisk_id='',reservation_id='r-0751v0zo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',i
mage_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-948331740',owner_user_name='tempest-TaggedBootDevicesTest_v242-948331740-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8ff6e8783c3f4132b787cb0653fff9b0',uuid=b0ff8f26-937f-43e0-b422-8a0fb0226eac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "address": "fa:16:3e:c2:33:c0", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0f7cff6-3b", "ovs_interfaceid": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.051 226833 DEBUG nova.network.os_vif_util [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converting VIF {"id": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "address": "fa:16:3e:c2:33:c0", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0f7cff6-3b", "ovs_interfaceid": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.053 226833 DEBUG nova.network.os_vif_util [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:33:c0,bridge_name='br-int',has_traffic_filtering=True,id=e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e,network=Network(0fc23939-3578-48f7-b98e-07928f016ed0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape0f7cff6-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.054 226833 DEBUG nova.virt.libvirt.vif [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:04:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1727712073',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1727712073',id=99,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNj9e0Kbzl+E2OM6alabFUbQI9DpSInfHQfG2T4c3RhmiTeeu25YMPQfB0gfvWz7Jm1UlInYbJJ4YMwlil38MyDKg2n3hamSf3QryEahlV36B2sdJMqBp1FNXRu9/iAVA==',key_name='tempest-keypair-1025048098',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2327b93dd7d648efad6d2b303f9e462e',ramdisk_id='',reservation_id='r-0751v0zo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',i
mage_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-948331740',owner_user_name='tempest-TaggedBootDevicesTest_v242-948331740-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8ff6e8783c3f4132b787cb0653fff9b0',uuid=b0ff8f26-937f-43e0-b422-8a0fb0226eac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "address": "fa:16:3e:88:50:1d", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.122", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ef806e-29", "ovs_interfaceid": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.055 226833 DEBUG nova.network.os_vif_util [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converting VIF {"id": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "address": "fa:16:3e:88:50:1d", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.122", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ef806e-29", "ovs_interfaceid": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.056 226833 DEBUG nova.network.os_vif_util [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:50:1d,bridge_name='br-int',has_traffic_filtering=True,id=a3ef806e-29d0-4489-9e8b-fb24ee625783,network=Network(0fc23939-3578-48f7-b98e-07928f016ed0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ef806e-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.058 226833 DEBUG nova.virt.libvirt.vif [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:04:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1727712073',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1727712073',id=99,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNj9e0Kbzl+E2OM6alabFUbQI9DpSInfHQfG2T4c3RhmiTeeu25YMPQfB0gfvWz7Jm1UlInYbJJ4YMwlil38MyDKg2n3hamSf3QryEahlV36B2sdJMqBp1FNXRu9/iAVA==',key_name='tempest-keypair-1025048098',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2327b93dd7d648efad6d2b303f9e462e',ramdisk_id='',reservation_id='r-0751v0zo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',i
mage_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-948331740',owner_user_name='tempest-TaggedBootDevicesTest_v242-948331740-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8ff6e8783c3f4132b787cb0653fff9b0',uuid=b0ff8f26-937f-43e0-b422-8a0fb0226eac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "address": "fa:16:3e:3c:a6:94", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24a33ffc-f6", "ovs_interfaceid": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.058 226833 DEBUG nova.network.os_vif_util [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converting VIF {"id": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "address": "fa:16:3e:3c:a6:94", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24a33ffc-f6", "ovs_interfaceid": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.060 226833 DEBUG nova.network.os_vif_util [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:a6:94,bridge_name='br-int',has_traffic_filtering=True,id=24a33ffc-f652-4c4c-84e0-5ba78e56a6ca,network=Network(0fc23939-3578-48f7-b98e-07928f016ed0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24a33ffc-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.061 226833 DEBUG nova.virt.libvirt.vif [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:04:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1727712073',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1727712073',id=99,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNj9e0Kbzl+E2OM6alabFUbQI9DpSInfHQfG2T4c3RhmiTeeu25YMPQfB0gfvWz7Jm1UlInYbJJ4YMwlil38MyDKg2n3hamSf3QryEahlV36B2sdJMqBp1FNXRu9/iAVA==',key_name='tempest-keypair-1025048098',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2327b93dd7d648efad6d2b303f9e462e',ramdisk_id='',reservation_id='r-0751v0zo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',i
mage_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-948331740',owner_user_name='tempest-TaggedBootDevicesTest_v242-948331740-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8ff6e8783c3f4132b787cb0653fff9b0',uuid=b0ff8f26-937f-43e0-b422-8a0fb0226eac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "01b25ee1-016c-4dac-860c-941a5efc920a", "address": "fa:16:3e:0a:8f:2c", "network": {"id": "2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c", "bridge": "br-int", "label": "tempest-device-tagging-net2-30530444", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b25ee1-01", "ovs_interfaceid": "01b25ee1-016c-4dac-860c-941a5efc920a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.062 226833 DEBUG nova.network.os_vif_util [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converting VIF {"id": "01b25ee1-016c-4dac-860c-941a5efc920a", "address": "fa:16:3e:0a:8f:2c", "network": {"id": "2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c", "bridge": "br-int", "label": "tempest-device-tagging-net2-30530444", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b25ee1-01", "ovs_interfaceid": "01b25ee1-016c-4dac-860c-941a5efc920a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.064 226833 DEBUG nova.network.os_vif_util [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:8f:2c,bridge_name='br-int',has_traffic_filtering=True,id=01b25ee1-016c-4dac-860c-941a5efc920a,network=Network(2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01b25ee1-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.065 226833 DEBUG nova.virt.libvirt.vif [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:04:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1727712073',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1727712073',id=99,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNj9e0Kbzl+E2OM6alabFUbQI9DpSInfHQfG2T4c3RhmiTeeu25YMPQfB0gfvWz7Jm1UlInYbJJ4YMwlil38MyDKg2n3hamSf3QryEahlV36B2sdJMqBp1FNXRu9/iAVA==',key_name='tempest-keypair-1025048098',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2327b93dd7d648efad6d2b303f9e462e',ramdisk_id='',reservation_id='r-0751v0zo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',i
mage_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-948331740',owner_user_name='tempest-TaggedBootDevicesTest_v242-948331740-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8ff6e8783c3f4132b787cb0653fff9b0',uuid=b0ff8f26-937f-43e0-b422-8a0fb0226eac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "522d060c-fdf5-4d50-b0b9-94211986b4ee", "address": "fa:16:3e:3b:3a:d6", "network": {"id": "2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c", "bridge": "br-int", "label": "tempest-device-tagging-net2-30530444", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522d060c-fd", "ovs_interfaceid": "522d060c-fdf5-4d50-b0b9-94211986b4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.066 226833 DEBUG nova.network.os_vif_util [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converting VIF {"id": "522d060c-fdf5-4d50-b0b9-94211986b4ee", "address": "fa:16:3e:3b:3a:d6", "network": {"id": "2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c", "bridge": "br-int", "label": "tempest-device-tagging-net2-30530444", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522d060c-fd", "ovs_interfaceid": "522d060c-fdf5-4d50-b0b9-94211986b4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.067 226833 DEBUG nova.network.os_vif_util [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:3a:d6,bridge_name='br-int',has_traffic_filtering=True,id=522d060c-fdf5-4d50-b0b9-94211986b4ee,network=Network(2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap522d060c-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.070 226833 DEBUG nova.objects.instance [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Lazy-loading 'pci_devices' on Instance uuid b0ff8f26-937f-43e0-b422-8a0fb0226eac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.327 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.458 226833 DEBUG nova.compute.manager [req-b7fdb447-cd42-4414-b585-f32d37abcddb req-147ebd4a-9ed8-44bf-b63c-df33606e1ae3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.459 226833 DEBUG oslo_concurrency.lockutils [req-b7fdb447-cd42-4414-b585-f32d37abcddb req-147ebd4a-9ed8-44bf-b63c-df33606e1ae3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.459 226833 DEBUG oslo_concurrency.lockutils [req-b7fdb447-cd42-4414-b585-f32d37abcddb req-147ebd4a-9ed8-44bf-b63c-df33606e1ae3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.460 226833 DEBUG oslo_concurrency.lockutils [req-b7fdb447-cd42-4414-b585-f32d37abcddb req-147ebd4a-9ed8-44bf-b63c-df33606e1ae3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.460 226833 DEBUG nova.compute.manager [req-b7fdb447-cd42-4414-b585-f32d37abcddb req-147ebd4a-9ed8-44bf-b63c-df33606e1ae3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.461 226833 WARNING nova.compute.manager [req-b7fdb447-cd42-4414-b585-f32d37abcddb req-147ebd4a-9ed8-44bf-b63c-df33606e1ae3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state resized and task_state None.
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.465 226833 DEBUG nova.virt.libvirt.driver [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:04:55 compute-2 nova_compute[226829]:   <uuid>b0ff8f26-937f-43e0-b422-8a0fb0226eac</uuid>
Jan 31 08:04:55 compute-2 nova_compute[226829]:   <name>instance-00000063</name>
Jan 31 08:04:55 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:04:55 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:04:55 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <nova:name>tempest-device-tagging-server-1727712073</nova:name>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:04:53</nova:creationTime>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <nova:user uuid="8ff6e8783c3f4132b787cb0653fff9b0">tempest-TaggedBootDevicesTest_v242-948331740-project-member</nova:user>
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <nova:project uuid="2327b93dd7d648efad6d2b303f9e462e">tempest-TaggedBootDevicesTest_v242-948331740</nova:project>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <nova:port uuid="fbf252f3-5bb0-4da8-9564-87a1f052f2dc">
Jan 31 08:04:55 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <nova:port uuid="fead71ab-29ff-4831-93d2-9a03e4ed64ac">
Jan 31 08:04:55 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.1.1.170" ipVersion="4"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <nova:port uuid="e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e">
Jan 31 08:04:55 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.1.1.182" ipVersion="4"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <nova:port uuid="a3ef806e-29d0-4489-9e8b-fb24ee625783">
Jan 31 08:04:55 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.1.1.122" ipVersion="4"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <nova:port uuid="24a33ffc-f652-4c4c-84e0-5ba78e56a6ca">
Jan 31 08:04:55 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.1.1.54" ipVersion="4"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <nova:port uuid="01b25ee1-016c-4dac-860c-941a5efc920a">
Jan 31 08:04:55 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.2.2.100" ipVersion="4"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <nova:port uuid="522d060c-fdf5-4d50-b0b9-94211986b4ee">
Jan 31 08:04:55 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.2.2.200" ipVersion="4"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:04:55 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:04:55 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <system>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <entry name="serial">b0ff8f26-937f-43e0-b422-8a0fb0226eac</entry>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <entry name="uuid">b0ff8f26-937f-43e0-b422-8a0fb0226eac</entry>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     </system>
Jan 31 08:04:55 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:04:55 compute-2 nova_compute[226829]:   <os>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:   </os>
Jan 31 08:04:55 compute-2 nova_compute[226829]:   <features>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:   </features>
Jan 31 08:04:55 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:04:55 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:04:55 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/b0ff8f26-937f-43e0-b422-8a0fb0226eac_disk.config">
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       </source>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <source protocol="rbd" name="volumes/volume-38f98947-0572-42f2-a1bf-7adcaef6ac4c">
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       </source>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <serial>38f98947-0572-42f2-a1bf-7adcaef6ac4c</serial>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <source protocol="rbd" name="volumes/volume-5839a328-04ff-4569-98df-b992860dfc6d">
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       </source>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <target dev="vdb" bus="virtio"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <serial>5839a328-04ff-4569-98df-b992860dfc6d</serial>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <source protocol="rbd" name="volumes/volume-799c0e3e-5252-4646-befe-0999c33cc481">
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       </source>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:04:55 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <target dev="vdc" bus="virtio"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <serial>799c0e3e-5252-4646-befe-0999c33cc481</serial>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:57:8d:67"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <target dev="tapfbf252f3-5b"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:52:8d:5f"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <target dev="tapfead71ab-29"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:c2:33:c0"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <target dev="tape0f7cff6-3b"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:88:50:1d"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <target dev="tapa3ef806e-29"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:3c:a6:94"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <target dev="tap24a33ffc-f6"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:0a:8f:2c"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <target dev="tap01b25ee1-01"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:3b:3a:d6"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <target dev="tap522d060c-fd"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/b0ff8f26-937f-43e0-b422-8a0fb0226eac/console.log" append="off"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <video>
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     </video>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:04:55 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:04:55 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:04:55 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:04:55 compute-2 nova_compute[226829]: </domain>
Jan 31 08:04:55 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.467 226833 DEBUG nova.compute.manager [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Preparing to wait for external event network-vif-plugged-fbf252f3-5bb0-4da8-9564-87a1f052f2dc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.468 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.468 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.469 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.469 226833 DEBUG nova.compute.manager [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Preparing to wait for external event network-vif-plugged-fead71ab-29ff-4831-93d2-9a03e4ed64ac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.469 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.470 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.470 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.470 226833 DEBUG nova.compute.manager [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Preparing to wait for external event network-vif-plugged-e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.471 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.471 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.471 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.471 226833 DEBUG nova.compute.manager [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Preparing to wait for external event network-vif-plugged-a3ef806e-29d0-4489-9e8b-fb24ee625783 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.472 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.472 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.472 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.473 226833 DEBUG nova.compute.manager [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Preparing to wait for external event network-vif-plugged-24a33ffc-f652-4c4c-84e0-5ba78e56a6ca prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.473 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.473 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.474 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.474 226833 DEBUG nova.compute.manager [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Preparing to wait for external event network-vif-plugged-01b25ee1-016c-4dac-860c-941a5efc920a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.474 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.475 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.475 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.476 226833 DEBUG nova.compute.manager [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Preparing to wait for external event network-vif-plugged-522d060c-fdf5-4d50-b0b9-94211986b4ee prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.476 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.477 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.477 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.479 226833 DEBUG nova.virt.libvirt.vif [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:04:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1727712073',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1727712073',id=99,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNj9e0Kbzl+E2OM6alabFUbQI9DpSInfHQfG2T4c3RhmiTeeu25YMPQfB0gfvWz7Jm1UlInYbJJ4YMwlil38MyDKg2n3hamSf3QryEahlV36B2sdJMqBp1FNXRu9/iAVA==',key_name='tempest-keypair-1025048098',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2327b93dd7d648efad6d2b303f9e462e',ramdisk_id='',reservation_id='r-0751v0zo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model=
'virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-948331740',owner_user_name='tempest-TaggedBootDevicesTest_v242-948331740-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8ff6e8783c3f4132b787cb0653fff9b0',uuid=b0ff8f26-937f-43e0-b422-8a0fb0226eac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fbf252f3-5bb0-4da8-9564-87a1f052f2dc", "address": "fa:16:3e:57:8d:67", "network": {"id": "79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-638962480-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf252f3-5b", "ovs_interfaceid": "fbf252f3-5bb0-4da8-9564-87a1f052f2dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.480 226833 DEBUG nova.network.os_vif_util [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converting VIF {"id": "fbf252f3-5bb0-4da8-9564-87a1f052f2dc", "address": "fa:16:3e:57:8d:67", "network": {"id": "79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-638962480-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf252f3-5b", "ovs_interfaceid": "fbf252f3-5bb0-4da8-9564-87a1f052f2dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.481 226833 DEBUG nova.network.os_vif_util [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:8d:67,bridge_name='br-int',has_traffic_filtering=True,id=fbf252f3-5bb0-4da8-9564-87a1f052f2dc,network=Network(79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbf252f3-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.482 226833 DEBUG os_vif [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:8d:67,bridge_name='br-int',has_traffic_filtering=True,id=fbf252f3-5bb0-4da8-9564-87a1f052f2dc,network=Network(79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbf252f3-5b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.483 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.484 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.485 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.491 226833 DEBUG nova.network.neutron [req-c0827bd2-ac9e-45bb-b2c3-fb280fe12cf8 req-057c320a-193a-43e8-9e24-03dff1253f84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Updated VIF entry in instance network info cache for port 01b25ee1-016c-4dac-860c-941a5efc920a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.492 226833 DEBUG nova.network.neutron [req-c0827bd2-ac9e-45bb-b2c3-fb280fe12cf8 req-057c320a-193a-43e8-9e24-03dff1253f84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Updating instance_info_cache with network_info: [{"id": "fbf252f3-5bb0-4da8-9564-87a1f052f2dc", "address": "fa:16:3e:57:8d:67", "network": {"id": "79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-638962480-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf252f3-5b", "ovs_interfaceid": "fbf252f3-5bb0-4da8-9564-87a1f052f2dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "address": "fa:16:3e:52:8d:5f", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfead71ab-29", "ovs_interfaceid": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "address": "fa:16:3e:c2:33:c0", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0f7cff6-3b", "ovs_interfaceid": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "address": "fa:16:3e:88:50:1d", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.122", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ef806e-29", "ovs_interfaceid": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "address": "fa:16:3e:3c:a6:94", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24a33ffc-f6", "ovs_interfaceid": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "01b25ee1-016c-4dac-860c-941a5efc920a", "address": "fa:16:3e:0a:8f:2c", "network": {"id": "2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c", "bridge": "br-int", "label": "tempest-device-tagging-net2-30530444", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b25ee1-01", "ovs_interfaceid": "01b25ee1-016c-4dac-860c-941a5efc920a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "522d060c-fdf5-4d50-b0b9-94211986b4ee", "address": "fa:16:3e:3b:3a:d6", "network": {"id": "2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c", "bridge": "br-int", "label": "tempest-device-tagging-net2-30530444", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522d060c-fd", "ovs_interfaceid": "522d060c-fdf5-4d50-b0b9-94211986b4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.495 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.496 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbf252f3-5b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.497 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfbf252f3-5b, col_values=(('external_ids', {'iface-id': 'fbf252f3-5bb0-4da8-9564-87a1f052f2dc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:8d:67', 'vm-uuid': 'b0ff8f26-937f-43e0-b422-8a0fb0226eac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:55 compute-2 NetworkManager[48999]: <info>  [1769846695.5025] manager: (tapfbf252f3-5b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/183)
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.502 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.506 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.508 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.509 226833 INFO os_vif [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:8d:67,bridge_name='br-int',has_traffic_filtering=True,id=fbf252f3-5bb0-4da8-9564-87a1f052f2dc,network=Network(79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbf252f3-5b')
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.510 226833 DEBUG nova.virt.libvirt.vif [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:04:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1727712073',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1727712073',id=99,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNj9e0Kbzl+E2OM6alabFUbQI9DpSInfHQfG2T4c3RhmiTeeu25YMPQfB0gfvWz7Jm1UlInYbJJ4YMwlil38MyDKg2n3hamSf3QryEahlV36B2sdJMqBp1FNXRu9/iAVA==',key_name='tempest-keypair-1025048098',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2327b93dd7d648efad6d2b303f9e462e',ramdisk_id='',reservation_id='r-0751v0zo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model=
'virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-948331740',owner_user_name='tempest-TaggedBootDevicesTest_v242-948331740-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8ff6e8783c3f4132b787cb0653fff9b0',uuid=b0ff8f26-937f-43e0-b422-8a0fb0226eac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "address": "fa:16:3e:52:8d:5f", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfead71ab-29", "ovs_interfaceid": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.510 226833 DEBUG nova.network.os_vif_util [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converting VIF {"id": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "address": "fa:16:3e:52:8d:5f", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfead71ab-29", "ovs_interfaceid": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.511 226833 DEBUG nova.network.os_vif_util [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:8d:5f,bridge_name='br-int',has_traffic_filtering=True,id=fead71ab-29ff-4831-93d2-9a03e4ed64ac,network=Network(0fc23939-3578-48f7-b98e-07928f016ed0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfead71ab-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.511 226833 DEBUG os_vif [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:8d:5f,bridge_name='br-int',has_traffic_filtering=True,id=fead71ab-29ff-4831-93d2-9a03e4ed64ac,network=Network(0fc23939-3578-48f7-b98e-07928f016ed0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfead71ab-29') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.512 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.512 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.512 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.514 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.515 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfead71ab-29, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.515 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfead71ab-29, col_values=(('external_ids', {'iface-id': 'fead71ab-29ff-4831-93d2-9a03e4ed64ac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:52:8d:5f', 'vm-uuid': 'b0ff8f26-937f-43e0-b422-8a0fb0226eac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.517 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:55 compute-2 NetworkManager[48999]: <info>  [1769846695.5185] manager: (tapfead71ab-29): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/184)
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.521 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.525 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.526 226833 INFO os_vif [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:8d:5f,bridge_name='br-int',has_traffic_filtering=True,id=fead71ab-29ff-4831-93d2-9a03e4ed64ac,network=Network(0fc23939-3578-48f7-b98e-07928f016ed0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfead71ab-29')
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.528 226833 DEBUG nova.virt.libvirt.vif [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:04:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1727712073',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1727712073',id=99,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNj9e0Kbzl+E2OM6alabFUbQI9DpSInfHQfG2T4c3RhmiTeeu25YMPQfB0gfvWz7Jm1UlInYbJJ4YMwlil38MyDKg2n3hamSf3QryEahlV36B2sdJMqBp1FNXRu9/iAVA==',key_name='tempest-keypair-1025048098',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2327b93dd7d648efad6d2b303f9e462e',ramdisk_id='',reservation_id='r-0751v0zo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model=
'virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-948331740',owner_user_name='tempest-TaggedBootDevicesTest_v242-948331740-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8ff6e8783c3f4132b787cb0653fff9b0',uuid=b0ff8f26-937f-43e0-b422-8a0fb0226eac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "address": "fa:16:3e:c2:33:c0", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0f7cff6-3b", "ovs_interfaceid": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.528 226833 DEBUG nova.network.os_vif_util [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converting VIF {"id": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "address": "fa:16:3e:c2:33:c0", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0f7cff6-3b", "ovs_interfaceid": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.529 226833 DEBUG nova.network.os_vif_util [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:33:c0,bridge_name='br-int',has_traffic_filtering=True,id=e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e,network=Network(0fc23939-3578-48f7-b98e-07928f016ed0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape0f7cff6-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.530 226833 DEBUG os_vif [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:33:c0,bridge_name='br-int',has_traffic_filtering=True,id=e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e,network=Network(0fc23939-3578-48f7-b98e-07928f016ed0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape0f7cff6-3b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.531 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.531 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.531 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.533 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.534 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape0f7cff6-3b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.534 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape0f7cff6-3b, col_values=(('external_ids', {'iface-id': 'e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c2:33:c0', 'vm-uuid': 'b0ff8f26-937f-43e0-b422-8a0fb0226eac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.535 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:55 compute-2 NetworkManager[48999]: <info>  [1769846695.5367] manager: (tape0f7cff6-3b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/185)
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.537 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.544 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.545 226833 INFO os_vif [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:33:c0,bridge_name='br-int',has_traffic_filtering=True,id=e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e,network=Network(0fc23939-3578-48f7-b98e-07928f016ed0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape0f7cff6-3b')
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.546 226833 DEBUG nova.virt.libvirt.vif [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:04:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1727712073',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1727712073',id=99,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNj9e0Kbzl+E2OM6alabFUbQI9DpSInfHQfG2T4c3RhmiTeeu25YMPQfB0gfvWz7Jm1UlInYbJJ4YMwlil38MyDKg2n3hamSf3QryEahlV36B2sdJMqBp1FNXRu9/iAVA==',key_name='tempest-keypair-1025048098',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2327b93dd7d648efad6d2b303f9e462e',ramdisk_id='',reservation_id='r-0751v0zo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model=
'virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-948331740',owner_user_name='tempest-TaggedBootDevicesTest_v242-948331740-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8ff6e8783c3f4132b787cb0653fff9b0',uuid=b0ff8f26-937f-43e0-b422-8a0fb0226eac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "address": "fa:16:3e:88:50:1d", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.122", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ef806e-29", "ovs_interfaceid": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.547 226833 DEBUG nova.network.os_vif_util [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converting VIF {"id": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "address": "fa:16:3e:88:50:1d", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.122", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ef806e-29", "ovs_interfaceid": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.548 226833 DEBUG nova.network.os_vif_util [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:50:1d,bridge_name='br-int',has_traffic_filtering=True,id=a3ef806e-29d0-4489-9e8b-fb24ee625783,network=Network(0fc23939-3578-48f7-b98e-07928f016ed0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ef806e-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.548 226833 DEBUG os_vif [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:50:1d,bridge_name='br-int',has_traffic_filtering=True,id=a3ef806e-29d0-4489-9e8b-fb24ee625783,network=Network(0fc23939-3578-48f7-b98e-07928f016ed0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ef806e-29') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.549 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.549 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.550 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.552 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.552 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3ef806e-29, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.553 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3ef806e-29, col_values=(('external_ids', {'iface-id': 'a3ef806e-29d0-4489-9e8b-fb24ee625783', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:88:50:1d', 'vm-uuid': 'b0ff8f26-937f-43e0-b422-8a0fb0226eac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.554 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:55 compute-2 NetworkManager[48999]: <info>  [1769846695.5558] manager: (tapa3ef806e-29): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/186)
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.557 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.563 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.563 226833 INFO os_vif [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:50:1d,bridge_name='br-int',has_traffic_filtering=True,id=a3ef806e-29d0-4489-9e8b-fb24ee625783,network=Network(0fc23939-3578-48f7-b98e-07928f016ed0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ef806e-29')
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.564 226833 DEBUG nova.virt.libvirt.vif [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:04:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1727712073',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1727712073',id=99,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNj9e0Kbzl+E2OM6alabFUbQI9DpSInfHQfG2T4c3RhmiTeeu25YMPQfB0gfvWz7Jm1UlInYbJJ4YMwlil38MyDKg2n3hamSf3QryEahlV36B2sdJMqBp1FNXRu9/iAVA==',key_name='tempest-keypair-1025048098',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2327b93dd7d648efad6d2b303f9e462e',ramdisk_id='',reservation_id='r-0751v0zo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model=
'virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-948331740',owner_user_name='tempest-TaggedBootDevicesTest_v242-948331740-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8ff6e8783c3f4132b787cb0653fff9b0',uuid=b0ff8f26-937f-43e0-b422-8a0fb0226eac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "address": "fa:16:3e:3c:a6:94", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24a33ffc-f6", "ovs_interfaceid": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.565 226833 DEBUG nova.network.os_vif_util [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converting VIF {"id": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "address": "fa:16:3e:3c:a6:94", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24a33ffc-f6", "ovs_interfaceid": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.565 226833 DEBUG nova.network.os_vif_util [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:a6:94,bridge_name='br-int',has_traffic_filtering=True,id=24a33ffc-f652-4c4c-84e0-5ba78e56a6ca,network=Network(0fc23939-3578-48f7-b98e-07928f016ed0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24a33ffc-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.566 226833 DEBUG os_vif [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:a6:94,bridge_name='br-int',has_traffic_filtering=True,id=24a33ffc-f652-4c4c-84e0-5ba78e56a6ca,network=Network(0fc23939-3578-48f7-b98e-07928f016ed0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24a33ffc-f6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.566 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.566 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.567 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.568 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.569 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap24a33ffc-f6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.569 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap24a33ffc-f6, col_values=(('external_ids', {'iface-id': '24a33ffc-f652-4c4c-84e0-5ba78e56a6ca', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3c:a6:94', 'vm-uuid': 'b0ff8f26-937f-43e0-b422-8a0fb0226eac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.570 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:55 compute-2 NetworkManager[48999]: <info>  [1769846695.5712] manager: (tap24a33ffc-f6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/187)
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.572 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.581 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.582 226833 INFO os_vif [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:a6:94,bridge_name='br-int',has_traffic_filtering=True,id=24a33ffc-f652-4c4c-84e0-5ba78e56a6ca,network=Network(0fc23939-3578-48f7-b98e-07928f016ed0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24a33ffc-f6')
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.583 226833 DEBUG nova.virt.libvirt.vif [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:04:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1727712073',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1727712073',id=99,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNj9e0Kbzl+E2OM6alabFUbQI9DpSInfHQfG2T4c3RhmiTeeu25YMPQfB0gfvWz7Jm1UlInYbJJ4YMwlil38MyDKg2n3hamSf3QryEahlV36B2sdJMqBp1FNXRu9/iAVA==',key_name='tempest-keypair-1025048098',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2327b93dd7d648efad6d2b303f9e462e',ramdisk_id='',reservation_id='r-0751v0zo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model=
'virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-948331740',owner_user_name='tempest-TaggedBootDevicesTest_v242-948331740-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8ff6e8783c3f4132b787cb0653fff9b0',uuid=b0ff8f26-937f-43e0-b422-8a0fb0226eac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "01b25ee1-016c-4dac-860c-941a5efc920a", "address": "fa:16:3e:0a:8f:2c", "network": {"id": "2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c", "bridge": "br-int", "label": "tempest-device-tagging-net2-30530444", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b25ee1-01", "ovs_interfaceid": "01b25ee1-016c-4dac-860c-941a5efc920a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.584 226833 DEBUG nova.network.os_vif_util [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converting VIF {"id": "01b25ee1-016c-4dac-860c-941a5efc920a", "address": "fa:16:3e:0a:8f:2c", "network": {"id": "2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c", "bridge": "br-int", "label": "tempest-device-tagging-net2-30530444", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b25ee1-01", "ovs_interfaceid": "01b25ee1-016c-4dac-860c-941a5efc920a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.584 226833 DEBUG nova.network.os_vif_util [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:8f:2c,bridge_name='br-int',has_traffic_filtering=True,id=01b25ee1-016c-4dac-860c-941a5efc920a,network=Network(2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01b25ee1-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.585 226833 DEBUG os_vif [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:8f:2c,bridge_name='br-int',has_traffic_filtering=True,id=01b25ee1-016c-4dac-860c-941a5efc920a,network=Network(2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01b25ee1-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.585 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.585 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.586 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.588 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.588 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01b25ee1-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.588 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap01b25ee1-01, col_values=(('external_ids', {'iface-id': '01b25ee1-016c-4dac-860c-941a5efc920a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0a:8f:2c', 'vm-uuid': 'b0ff8f26-937f-43e0-b422-8a0fb0226eac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:55 compute-2 NetworkManager[48999]: <info>  [1769846695.5911] manager: (tap01b25ee1-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/188)
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.592 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.606 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.607 226833 INFO os_vif [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:8f:2c,bridge_name='br-int',has_traffic_filtering=True,id=01b25ee1-016c-4dac-860c-941a5efc920a,network=Network(2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01b25ee1-01')
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.608 226833 DEBUG nova.virt.libvirt.vif [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:04:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1727712073',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1727712073',id=99,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNj9e0Kbzl+E2OM6alabFUbQI9DpSInfHQfG2T4c3RhmiTeeu25YMPQfB0gfvWz7Jm1UlInYbJJ4YMwlil38MyDKg2n3hamSf3QryEahlV36B2sdJMqBp1FNXRu9/iAVA==',key_name='tempest-keypair-1025048098',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2327b93dd7d648efad6d2b303f9e462e',ramdisk_id='',reservation_id='r-0751v0zo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model=
'virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-948331740',owner_user_name='tempest-TaggedBootDevicesTest_v242-948331740-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8ff6e8783c3f4132b787cb0653fff9b0',uuid=b0ff8f26-937f-43e0-b422-8a0fb0226eac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "522d060c-fdf5-4d50-b0b9-94211986b4ee", "address": "fa:16:3e:3b:3a:d6", "network": {"id": "2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c", "bridge": "br-int", "label": "tempest-device-tagging-net2-30530444", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522d060c-fd", "ovs_interfaceid": "522d060c-fdf5-4d50-b0b9-94211986b4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.609 226833 DEBUG nova.network.os_vif_util [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converting VIF {"id": "522d060c-fdf5-4d50-b0b9-94211986b4ee", "address": "fa:16:3e:3b:3a:d6", "network": {"id": "2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c", "bridge": "br-int", "label": "tempest-device-tagging-net2-30530444", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522d060c-fd", "ovs_interfaceid": "522d060c-fdf5-4d50-b0b9-94211986b4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.609 226833 DEBUG nova.network.os_vif_util [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:3a:d6,bridge_name='br-int',has_traffic_filtering=True,id=522d060c-fdf5-4d50-b0b9-94211986b4ee,network=Network(2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap522d060c-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.610 226833 DEBUG os_vif [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:3a:d6,bridge_name='br-int',has_traffic_filtering=True,id=522d060c-fdf5-4d50-b0b9-94211986b4ee,network=Network(2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap522d060c-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.610 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.610 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.611 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.613 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.613 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap522d060c-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.614 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap522d060c-fd, col_values=(('external_ids', {'iface-id': '522d060c-fdf5-4d50-b0b9-94211986b4ee', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3b:3a:d6', 'vm-uuid': 'b0ff8f26-937f-43e0-b422-8a0fb0226eac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.615 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:55 compute-2 NetworkManager[48999]: <info>  [1769846695.6160] manager: (tap522d060c-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/189)
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.617 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.631 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.632 226833 INFO os_vif [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:3a:d6,bridge_name='br-int',has_traffic_filtering=True,id=522d060c-fdf5-4d50-b0b9-94211986b4ee,network=Network(2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap522d060c-fd')
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.874 226833 DEBUG oslo_concurrency.lockutils [req-c0827bd2-ac9e-45bb-b2c3-fb280fe12cf8 req-057c320a-193a-43e8-9e24-03dff1253f84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.875 226833 DEBUG nova.compute.manager [req-c0827bd2-ac9e-45bb-b2c3-fb280fe12cf8 req-057c320a-193a-43e8-9e24-03dff1253f84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-changed-522d060c-fdf5-4d50-b0b9-94211986b4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.875 226833 DEBUG nova.compute.manager [req-c0827bd2-ac9e-45bb-b2c3-fb280fe12cf8 req-057c320a-193a-43e8-9e24-03dff1253f84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Refreshing instance network info cache due to event network-changed-522d060c-fdf5-4d50-b0b9-94211986b4ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.875 226833 DEBUG oslo_concurrency.lockutils [req-c0827bd2-ac9e-45bb-b2c3-fb280fe12cf8 req-057c320a-193a-43e8-9e24-03dff1253f84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.876 226833 DEBUG oslo_concurrency.lockutils [req-c0827bd2-ac9e-45bb-b2c3-fb280fe12cf8 req-057c320a-193a-43e8-9e24-03dff1253f84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:04:55 compute-2 nova_compute[226829]: 2026-01-31 08:04:55.876 226833 DEBUG nova.network.neutron [req-c0827bd2-ac9e-45bb-b2c3-fb280fe12cf8 req-057c320a-193a-43e8-9e24-03dff1253f84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Refreshing network info cache for port 522d060c-fdf5-4d50-b0b9-94211986b4ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:04:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:56.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:56 compute-2 nova_compute[226829]: 2026-01-31 08:04:56.215 226833 DEBUG nova.virt.libvirt.driver [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:04:56 compute-2 nova_compute[226829]: 2026-01-31 08:04:56.215 226833 DEBUG nova.virt.libvirt.driver [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:04:56 compute-2 nova_compute[226829]: 2026-01-31 08:04:56.216 226833 DEBUG nova.virt.libvirt.driver [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] No VIF found with MAC fa:16:3e:57:8d:67, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:04:56 compute-2 nova_compute[226829]: 2026-01-31 08:04:56.216 226833 DEBUG nova.virt.libvirt.driver [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] No VIF found with MAC fa:16:3e:3c:a6:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:04:56 compute-2 nova_compute[226829]: 2026-01-31 08:04:56.216 226833 INFO nova.virt.libvirt.driver [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Using config drive
Jan 31 08:04:56 compute-2 nova_compute[226829]: 2026-01-31 08:04:56.241 226833 DEBUG nova.storage.rbd_utils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] rbd image b0ff8f26-937f-43e0-b422-8a0fb0226eac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:04:56 compute-2 nova_compute[226829]: 2026-01-31 08:04:56.534 226833 DEBUG oslo_concurrency.lockutils [None req-76dde478-5969-448f-9fa4-23627dcd1a3e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:56 compute-2 nova_compute[226829]: 2026-01-31 08:04:56.535 226833 DEBUG oslo_concurrency.lockutils [None req-76dde478-5969-448f-9fa4-23627dcd1a3e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:56 compute-2 nova_compute[226829]: 2026-01-31 08:04:56.535 226833 DEBUG nova.compute.manager [None req-76dde478-5969-448f-9fa4-23627dcd1a3e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Going to confirm migration 14 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Jan 31 08:04:56 compute-2 nova_compute[226829]: 2026-01-31 08:04:56.558 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:56 compute-2 ceph-mon[77282]: pgmap v2037: 305 pgs: 305 active+clean; 295 MiB data, 932 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 104 op/s
Jan 31 08:04:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:57.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:57 compute-2 nova_compute[226829]: 2026-01-31 08:04:57.821 226833 DEBUG neutronclient.v2_0.client [None req-76dde478-5969-448f-9fa4-23627dcd1a3e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 86cf2bf6-2f28-4435-b081-a3945070ed2d for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 31 08:04:57 compute-2 nova_compute[226829]: 2026-01-31 08:04:57.822 226833 DEBUG oslo_concurrency.lockutils [None req-76dde478-5969-448f-9fa4-23627dcd1a3e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:04:57 compute-2 nova_compute[226829]: 2026-01-31 08:04:57.822 226833 DEBUG oslo_concurrency.lockutils [None req-76dde478-5969-448f-9fa4-23627dcd1a3e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquired lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:04:57 compute-2 nova_compute[226829]: 2026-01-31 08:04:57.823 226833 DEBUG nova.network.neutron [None req-76dde478-5969-448f-9fa4-23627dcd1a3e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:04:57 compute-2 nova_compute[226829]: 2026-01-31 08:04:57.823 226833 DEBUG nova.objects.instance [None req-76dde478-5969-448f-9fa4-23627dcd1a3e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'info_cache' on Instance uuid dfba7f29-bde8-4327-a7b3-1c4fd44e045a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:04:58 compute-2 nova_compute[226829]: 2026-01-31 08:04:58.028 226833 INFO nova.virt.libvirt.driver [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Creating config drive at /var/lib/nova/instances/b0ff8f26-937f-43e0-b422-8a0fb0226eac/disk.config
Jan 31 08:04:58 compute-2 nova_compute[226829]: 2026-01-31 08:04:58.033 226833 DEBUG oslo_concurrency.processutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b0ff8f26-937f-43e0-b422-8a0fb0226eac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpwv2uk1w0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:58.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:58 compute-2 nova_compute[226829]: 2026-01-31 08:04:58.157 226833 DEBUG oslo_concurrency.processutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b0ff8f26-937f-43e0-b422-8a0fb0226eac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpwv2uk1w0" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:58 compute-2 nova_compute[226829]: 2026-01-31 08:04:58.157 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846683.156451, dfba7f29-bde8-4327-a7b3-1c4fd44e045a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:04:58 compute-2 nova_compute[226829]: 2026-01-31 08:04:58.158 226833 INFO nova.compute.manager [-] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] VM Stopped (Lifecycle Event)
Jan 31 08:04:58 compute-2 nova_compute[226829]: 2026-01-31 08:04:58.192 226833 DEBUG nova.storage.rbd_utils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] rbd image b0ff8f26-937f-43e0-b422-8a0fb0226eac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:04:58 compute-2 nova_compute[226829]: 2026-01-31 08:04:58.196 226833 DEBUG oslo_concurrency.processutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b0ff8f26-937f-43e0-b422-8a0fb0226eac/disk.config b0ff8f26-937f-43e0-b422-8a0fb0226eac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:58 compute-2 nova_compute[226829]: 2026-01-31 08:04:58.244 226833 DEBUG nova.compute.manager [None req-327ef77c-194a-40a6-a436-9a528827a18f - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:04:58 compute-2 nova_compute[226829]: 2026-01-31 08:04:58.248 226833 DEBUG nova.compute.manager [None req-327ef77c-194a-40a6-a436-9a528827a18f - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:04:58 compute-2 nova_compute[226829]: 2026-01-31 08:04:58.272 226833 INFO nova.compute.manager [None req-327ef77c-194a-40a6-a436-9a528827a18f - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-2.ctlplane.example.com
Jan 31 08:04:58 compute-2 nova_compute[226829]: 2026-01-31 08:04:58.333 226833 DEBUG oslo_concurrency.processutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b0ff8f26-937f-43e0-b422-8a0fb0226eac/disk.config b0ff8f26-937f-43e0-b422-8a0fb0226eac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:58 compute-2 nova_compute[226829]: 2026-01-31 08:04:58.333 226833 INFO nova.virt.libvirt.driver [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Deleting local config drive /var/lib/nova/instances/b0ff8f26-937f-43e0-b422-8a0fb0226eac/disk.config because it was imported into RBD.
Jan 31 08:04:58 compute-2 NetworkManager[48999]: <info>  [1769846698.3663] manager: (tapfbf252f3-5b): new Tun device (/org/freedesktop/NetworkManager/Devices/190)
Jan 31 08:04:58 compute-2 kernel: tapfbf252f3-5b: entered promiscuous mode
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00365|binding|INFO|Claiming lport fbf252f3-5bb0-4da8-9564-87a1f052f2dc for this chassis.
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00366|binding|INFO|fbf252f3-5bb0-4da8-9564-87a1f052f2dc: Claiming fa:16:3e:57:8d:67 10.100.0.8
Jan 31 08:04:58 compute-2 nova_compute[226829]: 2026-01-31 08:04:58.377 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:58 compute-2 NetworkManager[48999]: <info>  [1769846698.3806] manager: (tapfead71ab-29): new Tun device (/org/freedesktop/NetworkManager/Devices/191)
Jan 31 08:04:58 compute-2 kernel: tapfead71ab-29: entered promiscuous mode
Jan 31 08:04:58 compute-2 systemd-udevd[271012]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:04:58 compute-2 systemd-udevd[271011]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:04:58 compute-2 systemd-udevd[271013]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:04:58 compute-2 NetworkManager[48999]: <info>  [1769846698.3894] manager: (tape0f7cff6-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/192)
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.388 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:8d:67 10.100.0.8'], port_security=['fa:16:3e:57:8d:67 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b0ff8f26-937f-43e0-b422-8a0fb0226eac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2327b93dd7d648efad6d2b303f9e462e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2eb2c302-f4c8-4ded-ad35-1d7e03049730', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f6dd2f7c-7ff8-4082-9961-5cfe817b3d85, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=fbf252f3-5bb0-4da8-9564-87a1f052f2dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00367|binding|INFO|Claiming lport fead71ab-29ff-4831-93d2-9a03e4ed64ac for this chassis.
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00368|binding|INFO|fead71ab-29ff-4831-93d2-9a03e4ed64ac: Claiming fa:16:3e:52:8d:5f 10.1.1.170
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.390 143841 INFO neutron.agent.ovn.metadata.agent [-] Port fbf252f3-5bb0-4da8-9564-87a1f052f2dc in datapath 79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa bound to our chassis
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.392 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa
Jan 31 08:04:58 compute-2 nova_compute[226829]: 2026-01-31 08:04:58.392 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00369|binding|INFO|Setting lport fbf252f3-5bb0-4da8-9564-87a1f052f2dc ovn-installed in OVS
Jan 31 08:04:58 compute-2 nova_compute[226829]: 2026-01-31 08:04:58.398 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:58 compute-2 NetworkManager[48999]: <info>  [1769846698.3998] manager: (tapa3ef806e-29): new Tun device (/org/freedesktop/NetworkManager/Devices/193)
Jan 31 08:04:58 compute-2 systemd-udevd[271023]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.401 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b85e46-5f91-4234-9508-05a3bf3f5128]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.402 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap79a3d7c9-b1 in ovnmeta-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:04:58 compute-2 NetworkManager[48999]: <info>  [1769846698.4032] device (tapfbf252f3-5b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:04:58 compute-2 NetworkManager[48999]: <info>  [1769846698.4048] device (tapfbf252f3-5b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:04:58 compute-2 NetworkManager[48999]: <info>  [1769846698.4053] device (tapfead71ab-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:04:58 compute-2 NetworkManager[48999]: <info>  [1769846698.4058] device (tapfead71ab-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:04:58 compute-2 kernel: tape0f7cff6-3b: entered promiscuous mode
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00370|binding|INFO|Setting lport fbf252f3-5bb0-4da8-9564-87a1f052f2dc up in Southbound
Jan 31 08:04:58 compute-2 NetworkManager[48999]: <info>  [1769846698.4062] device (tape0f7cff6-3b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:04:58 compute-2 NetworkManager[48999]: <info>  [1769846698.4068] device (tape0f7cff6-3b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.404 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap79a3d7c9-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.405 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[846859e8-25c5-46d4-8a87-adaa2094fee5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.409 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:8d:5f 10.1.1.170'], port_security=['fa:16:3e:52:8d:5f 10.1.1.170'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest_v242-1955072108', 'neutron:cidrs': '10.1.1.170/24', 'neutron:device_id': 'b0ff8f26-937f-43e0-b422-8a0fb0226eac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0fc23939-3578-48f7-b98e-07928f016ed0', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest_v242-1955072108', 'neutron:project_id': '2327b93dd7d648efad6d2b303f9e462e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8eca293e-dca5-4092-be7b-a615369ad61a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64450e70-c4c7-42c3-be71-768482625146, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=fead71ab-29ff-4831-93d2-9a03e4ed64ac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.410 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b247e916-c690-406e-934a-a0e432dfe55f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:58 compute-2 NetworkManager[48999]: <info>  [1769846698.4132] manager: (tap24a33ffc-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/194)
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.422 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[1d368e0a-d2f9-472e-9b05-b6aa81765a99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:58 compute-2 kernel: tapa3ef806e-29: entered promiscuous mode
Jan 31 08:04:58 compute-2 nova_compute[226829]: 2026-01-31 08:04:58.426 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00371|binding|INFO|Claiming lport e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e for this chassis.
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00372|binding|INFO|e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e: Claiming fa:16:3e:c2:33:c0 10.1.1.182
Jan 31 08:04:58 compute-2 NetworkManager[48999]: <info>  [1769846698.4284] device (tapa3ef806e-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:04:58 compute-2 NetworkManager[48999]: <info>  [1769846698.4293] device (tapa3ef806e-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:04:58 compute-2 NetworkManager[48999]: <info>  [1769846698.4321] manager: (tap01b25ee1-01): new Tun device (/org/freedesktop/NetworkManager/Devices/195)
Jan 31 08:04:58 compute-2 kernel: tap01b25ee1-01: entered promiscuous mode
Jan 31 08:04:58 compute-2 NetworkManager[48999]: <info>  [1769846698.4329] device (tap24a33ffc-f6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:04:58 compute-2 NetworkManager[48999]: <info>  [1769846698.4338] device (tap24a33ffc-f6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:04:58 compute-2 kernel: tap24a33ffc-f6: entered promiscuous mode
Jan 31 08:04:58 compute-2 nova_compute[226829]: 2026-01-31 08:04:58.439 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.438 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:33:c0 10.1.1.182'], port_security=['fa:16:3e:c2:33:c0 10.1.1.182'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest_v242-973870628', 'neutron:cidrs': '10.1.1.182/24', 'neutron:device_id': 'b0ff8f26-937f-43e0-b422-8a0fb0226eac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0fc23939-3578-48f7-b98e-07928f016ed0', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest_v242-973870628', 'neutron:project_id': '2327b93dd7d648efad6d2b303f9e462e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8eca293e-dca5-4092-be7b-a615369ad61a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64450e70-c4c7-42c3-be71-768482625146, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:04:58 compute-2 kernel: tap522d060c-fd: entered promiscuous mode
Jan 31 08:04:58 compute-2 NetworkManager[48999]: <info>  [1769846698.4429] manager: (tap522d060c-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/196)
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00373|binding|INFO|Claiming lport 24a33ffc-f652-4c4c-84e0-5ba78e56a6ca for this chassis.
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00374|binding|INFO|24a33ffc-f652-4c4c-84e0-5ba78e56a6ca: Claiming fa:16:3e:3c:a6:94 10.1.1.54
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00375|binding|INFO|Claiming lport a3ef806e-29d0-4489-9e8b-fb24ee625783 for this chassis.
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00376|binding|INFO|a3ef806e-29d0-4489-9e8b-fb24ee625783: Claiming fa:16:3e:88:50:1d 10.1.1.122
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00377|binding|INFO|Claiming lport 01b25ee1-016c-4dac-860c-941a5efc920a for this chassis.
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00378|binding|INFO|01b25ee1-016c-4dac-860c-941a5efc920a: Claiming fa:16:3e:0a:8f:2c 10.2.2.100
Jan 31 08:04:58 compute-2 NetworkManager[48999]: <info>  [1769846698.4439] device (tap01b25ee1-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:04:58 compute-2 nova_compute[226829]: 2026-01-31 08:04:58.443 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:58 compute-2 NetworkManager[48999]: <info>  [1769846698.4445] device (tap01b25ee1-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00379|if_status|INFO|Dropped 13 log messages in last 564 seconds (most recently, 564 seconds ago) due to excessive rate
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00380|if_status|INFO|Not updating pb chassis for 522d060c-fdf5-4d50-b0b9-94211986b4ee now as sb is readonly
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00381|binding|INFO|Setting lport fead71ab-29ff-4831-93d2-9a03e4ed64ac ovn-installed in OVS
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.444 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ca8291d0-43b3-4af0-b458-51dc34fb39d3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:58 compute-2 nova_compute[226829]: 2026-01-31 08:04:58.446 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00382|binding|INFO|Setting lport e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e ovn-installed in OVS
Jan 31 08:04:58 compute-2 nova_compute[226829]: 2026-01-31 08:04:58.452 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:58 compute-2 NetworkManager[48999]: <info>  [1769846698.4546] device (tap522d060c-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:04:58 compute-2 NetworkManager[48999]: <info>  [1769846698.4551] device (tap522d060c-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00383|binding|INFO|Claiming lport 522d060c-fdf5-4d50-b0b9-94211986b4ee for this chassis.
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.461 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:a6:94 10.1.1.54'], port_security=['fa:16:3e:3c:a6:94 10.1.1.54'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.54/24', 'neutron:device_id': 'b0ff8f26-937f-43e0-b422-8a0fb0226eac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0fc23939-3578-48f7-b98e-07928f016ed0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2327b93dd7d648efad6d2b303f9e462e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2eb2c302-f4c8-4ded-ad35-1d7e03049730', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64450e70-c4c7-42c3-be71-768482625146, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=24a33ffc-f652-4c4c-84e0-5ba78e56a6ca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00384|binding|INFO|522d060c-fdf5-4d50-b0b9-94211986b4ee: Claiming fa:16:3e:3b:3a:d6 10.2.2.200
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00385|binding|INFO|Setting lport e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e up in Southbound
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00386|binding|INFO|Setting lport fead71ab-29ff-4831-93d2-9a03e4ed64ac up in Southbound
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.462 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:8f:2c 10.2.2.100'], port_security=['fa:16:3e:0a:8f:2c 10.2.2.100'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.100/24', 'neutron:device_id': 'b0ff8f26-937f-43e0-b422-8a0fb0226eac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2327b93dd7d648efad6d2b303f9e462e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2eb2c302-f4c8-4ded-ad35-1d7e03049730', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13ba0715-c991-496d-88de-9a6b20c0e088, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=01b25ee1-016c-4dac-860c-941a5efc920a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.464 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:50:1d 10.1.1.122'], port_security=['fa:16:3e:88:50:1d 10.1.1.122'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.122/24', 'neutron:device_id': 'b0ff8f26-937f-43e0-b422-8a0fb0226eac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0fc23939-3578-48f7-b98e-07928f016ed0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2327b93dd7d648efad6d2b303f9e462e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2eb2c302-f4c8-4ded-ad35-1d7e03049730', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64450e70-c4c7-42c3-be71-768482625146, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=a3ef806e-29d0-4489-9e8b-fb24ee625783) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.467 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:3a:d6 10.2.2.200'], port_security=['fa:16:3e:3b:3a:d6 10.2.2.200'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.200/24', 'neutron:device_id': 'b0ff8f26-937f-43e0-b422-8a0fb0226eac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2327b93dd7d648efad6d2b303f9e462e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2eb2c302-f4c8-4ded-ad35-1d7e03049730', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13ba0715-c991-496d-88de-9a6b20c0e088, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=522d060c-fdf5-4d50-b0b9-94211986b4ee) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.469 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[93101b67-af41-4ecb-a585-6d75b400269a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:58 compute-2 systemd-machined[195142]: New machine qemu-44-instance-00000063.
Jan 31 08:04:58 compute-2 NetworkManager[48999]: <info>  [1769846698.4748] manager: (tap79a3d7c9-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/197)
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.474 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e34e91c6-2103-449e-8bb5-9c18a0d357a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:58 compute-2 nova_compute[226829]: 2026-01-31 08:04:58.475 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00387|binding|INFO|Setting lport 24a33ffc-f652-4c4c-84e0-5ba78e56a6ca ovn-installed in OVS
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00388|binding|INFO|Setting lport 24a33ffc-f652-4c4c-84e0-5ba78e56a6ca up in Southbound
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00389|binding|INFO|Setting lport 01b25ee1-016c-4dac-860c-941a5efc920a ovn-installed in OVS
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00390|binding|INFO|Setting lport 01b25ee1-016c-4dac-860c-941a5efc920a up in Southbound
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00391|binding|INFO|Setting lport a3ef806e-29d0-4489-9e8b-fb24ee625783 ovn-installed in OVS
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00392|binding|INFO|Setting lport a3ef806e-29d0-4489-9e8b-fb24ee625783 up in Southbound
Jan 31 08:04:58 compute-2 nova_compute[226829]: 2026-01-31 08:04:58.480 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00393|binding|INFO|Setting lport 522d060c-fdf5-4d50-b0b9-94211986b4ee ovn-installed in OVS
Jan 31 08:04:58 compute-2 systemd[1]: Started Virtual Machine qemu-44-instance-00000063.
Jan 31 08:04:58 compute-2 nova_compute[226829]: 2026-01-31 08:04:58.484 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00394|binding|INFO|Setting lport 522d060c-fdf5-4d50-b0b9-94211986b4ee up in Southbound
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.497 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[51f83c63-8c01-4872-8250-f39160972340]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.500 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[8d60dd5f-95ba-4d82-b978-e4235d2fdb9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:58 compute-2 NetworkManager[48999]: <info>  [1769846698.5164] device (tap79a3d7c9-b0): carrier: link connected
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.518 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[6cdbffae-3bfc-45c9-833c-509bafb1e710]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.528 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[32f03e35-dc7d-4d27-aa8f-dc85f9502e34]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79a3d7c9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:2e:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695098, 'reachable_time': 31880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271071, 'error': None, 'target': 'ovnmeta-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.537 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[63786754-f51c-43b1-816e-88f04dbc6a2d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4c:2e67'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695098, 'tstamp': 695098}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271072, 'error': None, 'target': 'ovnmeta-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.547 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[534cfc1d-042b-4e5e-9425-492f76113092]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79a3d7c9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:2e:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695098, 'reachable_time': 31880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271073, 'error': None, 'target': 'ovnmeta-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.566 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f4a4d09f-456e-48b9-bfc7-0773691d1506]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.599 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e1391bcb-9d4d-4361-9c02-be520804ea2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.600 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79a3d7c9-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.600 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.601 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79a3d7c9-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:58 compute-2 ceph-mon[77282]: pgmap v2038: 305 pgs: 305 active+clean; 295 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Jan 31 08:04:58 compute-2 nova_compute[226829]: 2026-01-31 08:04:58.612 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:58 compute-2 NetworkManager[48999]: <info>  [1769846698.6132] manager: (tap79a3d7c9-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/198)
Jan 31 08:04:58 compute-2 kernel: tap79a3d7c9-b0: entered promiscuous mode
Jan 31 08:04:58 compute-2 nova_compute[226829]: 2026-01-31 08:04:58.617 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.618 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79a3d7c9-b0, col_values=(('external_ids', {'iface-id': 'c50f1d64-569e-42f1-ae8d-1086e3dfeda9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:58 compute-2 nova_compute[226829]: 2026-01-31 08:04:58.619 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:58 compute-2 ovn_controller[133834]: 2026-01-31T08:04:58Z|00395|binding|INFO|Releasing lport c50f1d64-569e-42f1-ae8d-1086e3dfeda9 from this chassis (sb_readonly=0)
Jan 31 08:04:58 compute-2 nova_compute[226829]: 2026-01-31 08:04:58.627 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.628 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.629 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e794a7a7-94f6-4381-9de7-f238759c522f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.630 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa.pid.haproxy
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:04:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:58.631 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa', 'env', 'PROCESS_TAG=haproxy-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:04:58 compute-2 podman[271106]: 2026-01-31 08:04:58.980600626 +0000 UTC m=+0.058135946 container create 8009b66ad83cbe10e043738b20832097dfdba1339e232aa2604dcdbe4daf0497 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:04:59 compute-2 systemd[1]: Started libpod-conmon-8009b66ad83cbe10e043738b20832097dfdba1339e232aa2604dcdbe4daf0497.scope.
Jan 31 08:04:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:04:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:04:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:59.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:04:59 compute-2 podman[271106]: 2026-01-31 08:04:58.949674499 +0000 UTC m=+0.027209839 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:04:59 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:04:59 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/052e94cc811794da5ba9f1ceee15be14327550f64a49ca9b54ea0165236aff1d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:59 compute-2 podman[271106]: 2026-01-31 08:04:59.057733985 +0000 UTC m=+0.135269325 container init 8009b66ad83cbe10e043738b20832097dfdba1339e232aa2604dcdbe4daf0497 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:04:59 compute-2 podman[271106]: 2026-01-31 08:04:59.062843384 +0000 UTC m=+0.140378704 container start 8009b66ad83cbe10e043738b20832097dfdba1339e232aa2604dcdbe4daf0497 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Jan 31 08:04:59 compute-2 neutron-haproxy-ovnmeta-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa[271164]: [NOTICE]   (271193) : New worker (271199) forked
Jan 31 08:04:59 compute-2 neutron-haproxy-ovnmeta-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa[271164]: [NOTICE]   (271193) : Loading success.
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.116 143841 INFO neutron.agent.ovn.metadata.agent [-] Port fead71ab-29ff-4831-93d2-9a03e4ed64ac in datapath 0fc23939-3578-48f7-b98e-07928f016ed0 unbound from our chassis
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.118 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0fc23939-3578-48f7-b98e-07928f016ed0
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.124 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f63cacdb-4f4d-4582-ba86-04b1e22975aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.125 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0fc23939-31 in ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.127 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0fc23939-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.127 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1f2ae1db-a86a-4e10-a443-4d991df8ae6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.130 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[90c12fed-0095-4a37-a178-74f88a91130e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.137 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[a944a68b-ebe9-4e0a-b9f3-732fd35b4e5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.145 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7e30f69c-189b-45fa-bea4-86e92092a2b2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.165 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[22da9749-0dee-410c-a423-bd105aca4d2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.169 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ab984377-e059-4e51-bf15-67baf7dc7190]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 systemd-udevd[271061]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:04:59 compute-2 NetworkManager[48999]: <info>  [1769846699.1709] manager: (tap0fc23939-30): new Veth device (/org/freedesktop/NetworkManager/Devices/199)
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.194 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[52ea8bed-fb1d-4b70-b5fa-c821e07145b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.198 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b8fdf1-fc27-4b76-a815-36f92e0a8a09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 NetworkManager[48999]: <info>  [1769846699.2188] device (tap0fc23939-30): carrier: link connected
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.223 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[4021e401-101e-4f3f-9c98-554bb2830a04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:04:59 compute-2 nova_compute[226829]: 2026-01-31 08:04:59.236 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846699.2362158, b0ff8f26-937f-43e0-b422-8a0fb0226eac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:04:59 compute-2 nova_compute[226829]: 2026-01-31 08:04:59.237 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] VM Started (Lifecycle Event)
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.241 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5d4ba301-75c2-4994-851f-1ed37dd21972]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0fc23939-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:7a:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695169, 'reachable_time': 43073, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271230, 'error': None, 'target': 'ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.252 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4feb17eb-8f3e-4e6c-aba9-3abd5f1a5d32]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec5:7a7b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695169, 'tstamp': 695169}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271231, 'error': None, 'target': 'ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 nova_compute[226829]: 2026-01-31 08:04:59.255 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:04:59 compute-2 nova_compute[226829]: 2026-01-31 08:04:59.259 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846699.2363405, b0ff8f26-937f-43e0-b422-8a0fb0226eac => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:04:59 compute-2 nova_compute[226829]: 2026-01-31 08:04:59.259 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] VM Paused (Lifecycle Event)
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.265 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[586cbf9f-01e6-4dba-b613-a2ad70e6013e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0fc23939-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:7a:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695169, 'reachable_time': 43073, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271232, 'error': None, 'target': 'ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 nova_compute[226829]: 2026-01-31 08:04:59.277 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:04:59 compute-2 nova_compute[226829]: 2026-01-31 08:04:59.279 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.288 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[14241800-2686-473d-8ea5-bdbd6b511b00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 nova_compute[226829]: 2026-01-31 08:04:59.297 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.331 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ca10b126-1ea7-4240-be04-da18cbd1c4b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.333 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0fc23939-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.333 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.333 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0fc23939-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:59 compute-2 nova_compute[226829]: 2026-01-31 08:04:59.335 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:59 compute-2 NetworkManager[48999]: <info>  [1769846699.3363] manager: (tap0fc23939-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/200)
Jan 31 08:04:59 compute-2 kernel: tap0fc23939-30: entered promiscuous mode
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.339 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0fc23939-30, col_values=(('external_ids', {'iface-id': '9661581e-a708-4de1-b2f7-80d9d3e80a0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:59 compute-2 ovn_controller[133834]: 2026-01-31T08:04:59Z|00396|binding|INFO|Releasing lport 9661581e-a708-4de1-b2f7-80d9d3e80a0f from this chassis (sb_readonly=0)
Jan 31 08:04:59 compute-2 nova_compute[226829]: 2026-01-31 08:04:59.340 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.341 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0fc23939-3578-48f7-b98e-07928f016ed0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0fc23939-3578-48f7-b98e-07928f016ed0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.341 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5a6ca04b-3bc8-410c-bef1-c2a56c2d6b99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.342 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-0fc23939-3578-48f7-b98e-07928f016ed0
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/0fc23939-3578-48f7-b98e-07928f016ed0.pid.haproxy
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 0fc23939-3578-48f7-b98e-07928f016ed0
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.343 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0', 'env', 'PROCESS_TAG=haproxy-0fc23939-3578-48f7-b98e-07928f016ed0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0fc23939-3578-48f7-b98e-07928f016ed0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:04:59 compute-2 nova_compute[226829]: 2026-01-31 08:04:59.344 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:59 compute-2 podman[271265]: 2026-01-31 08:04:59.647633434 +0000 UTC m=+0.040177179 container create 42843ee1744db2ae569863cdcd0b1accd185b7e479b943938d48a3b0c6769f2b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 08:04:59 compute-2 nova_compute[226829]: 2026-01-31 08:04:59.666 226833 DEBUG nova.network.neutron [req-c0827bd2-ac9e-45bb-b2c3-fb280fe12cf8 req-057c320a-193a-43e8-9e24-03dff1253f84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Updated VIF entry in instance network info cache for port 522d060c-fdf5-4d50-b0b9-94211986b4ee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:04:59 compute-2 nova_compute[226829]: 2026-01-31 08:04:59.666 226833 DEBUG nova.network.neutron [req-c0827bd2-ac9e-45bb-b2c3-fb280fe12cf8 req-057c320a-193a-43e8-9e24-03dff1253f84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Updating instance_info_cache with network_info: [{"id": "fbf252f3-5bb0-4da8-9564-87a1f052f2dc", "address": "fa:16:3e:57:8d:67", "network": {"id": "79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-638962480-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf252f3-5b", "ovs_interfaceid": "fbf252f3-5bb0-4da8-9564-87a1f052f2dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "address": "fa:16:3e:52:8d:5f", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfead71ab-29", "ovs_interfaceid": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "address": "fa:16:3e:c2:33:c0", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0f7cff6-3b", "ovs_interfaceid": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "address": "fa:16:3e:88:50:1d", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.122", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ef806e-29", "ovs_interfaceid": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "address": "fa:16:3e:3c:a6:94", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24a33ffc-f6", "ovs_interfaceid": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "01b25ee1-016c-4dac-860c-941a5efc920a", "address": "fa:16:3e:0a:8f:2c", "network": {"id": "2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c", "bridge": "br-int", "label": "tempest-device-tagging-net2-30530444", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b25ee1-01", "ovs_interfaceid": "01b25ee1-016c-4dac-860c-941a5efc920a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "522d060c-fdf5-4d50-b0b9-94211986b4ee", "address": "fa:16:3e:3b:3a:d6", "network": {"id": "2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c", "bridge": "br-int", "label": "tempest-device-tagging-net2-30530444", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522d060c-fd", "ovs_interfaceid": "522d060c-fdf5-4d50-b0b9-94211986b4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:04:59 compute-2 systemd[1]: Started libpod-conmon-42843ee1744db2ae569863cdcd0b1accd185b7e479b943938d48a3b0c6769f2b.scope.
Jan 31 08:04:59 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:04:59 compute-2 nova_compute[226829]: 2026-01-31 08:04:59.693 226833 DEBUG oslo_concurrency.lockutils [req-c0827bd2-ac9e-45bb-b2c3-fb280fe12cf8 req-057c320a-193a-43e8-9e24-03dff1253f84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:04:59 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64de3af829a5412905eed055b8d16f442c2e74e44fcc820b82a34cdadb2d2552/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:59 compute-2 podman[271265]: 2026-01-31 08:04:59.707750352 +0000 UTC m=+0.100294117 container init 42843ee1744db2ae569863cdcd0b1accd185b7e479b943938d48a3b0c6769f2b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 31 08:04:59 compute-2 podman[271265]: 2026-01-31 08:04:59.714542376 +0000 UTC m=+0.107086121 container start 42843ee1744db2ae569863cdcd0b1accd185b7e479b943938d48a3b0c6769f2b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:04:59 compute-2 podman[271265]: 2026-01-31 08:04:59.627600731 +0000 UTC m=+0.020144496 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:04:59 compute-2 neutron-haproxy-ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0[271280]: [NOTICE]   (271284) : New worker (271286) forked
Jan 31 08:04:59 compute-2 neutron-haproxy-ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0[271280]: [NOTICE]   (271284) : Loading success.
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.776 143841 INFO neutron.agent.ovn.metadata.agent [-] Port e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e in datapath 0fc23939-3578-48f7-b98e-07928f016ed0 unbound from our chassis
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.779 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0fc23939-3578-48f7-b98e-07928f016ed0
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.791 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[41e34d42-855b-49cf-987b-47ab26e2ecd1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.814 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[c24ad162-7611-4e0c-8fdc-5fa91c89a1d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.818 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb698d0-7bb7-46cd-ba62-2c7bbc0d52f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.837 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[86ede8a5-cb45-4b45-8f89-068806e99365]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.848 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[880d1a46-04ed-4902-9390-bacbb77447b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0fc23939-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:7a:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 5, 'rx_bytes': 90, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 5, 'rx_bytes': 90, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695169, 'reachable_time': 43073, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271300, 'error': None, 'target': 'ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.861 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[dc3953db-84fe-44b1-800b-dbe2e3aa90ca]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0fc23939-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695178, 'tstamp': 695178}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271301, 'error': None, 'target': 'ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap0fc23939-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695180, 'tstamp': 695180}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271301, 'error': None, 'target': 'ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.863 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0fc23939-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:59 compute-2 nova_compute[226829]: 2026-01-31 08:04:59.864 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.866 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0fc23939-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.866 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.867 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0fc23939-30, col_values=(('external_ids', {'iface-id': '9661581e-a708-4de1-b2f7-80d9d3e80a0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.867 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.868 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 24a33ffc-f652-4c4c-84e0-5ba78e56a6ca in datapath 0fc23939-3578-48f7-b98e-07928f016ed0 unbound from our chassis
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.872 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0fc23939-3578-48f7-b98e-07928f016ed0
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.887 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7ac83f82-fff9-461e-a8e8-0dc57bcdd455]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.909 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[532de71b-52b3-4696-a533-f469dffd3058]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.912 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[d1646229-5c0a-4d94-89f3-6be9e830e091]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.932 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[36b333cd-a673-4036-a1d2-47588280f068]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 nova_compute[226829]: 2026-01-31 08:04:59.936 226833 DEBUG nova.compute.manager [req-00ea2f35-2524-4e00-a560-113dfdd7d031 req-2021f520-081d-4e29-bae3-05037767db9e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-plugged-fead71ab-29ff-4831-93d2-9a03e4ed64ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:04:59 compute-2 nova_compute[226829]: 2026-01-31 08:04:59.936 226833 DEBUG oslo_concurrency.lockutils [req-00ea2f35-2524-4e00-a560-113dfdd7d031 req-2021f520-081d-4e29-bae3-05037767db9e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:59 compute-2 nova_compute[226829]: 2026-01-31 08:04:59.936 226833 DEBUG oslo_concurrency.lockutils [req-00ea2f35-2524-4e00-a560-113dfdd7d031 req-2021f520-081d-4e29-bae3-05037767db9e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:59 compute-2 nova_compute[226829]: 2026-01-31 08:04:59.937 226833 DEBUG oslo_concurrency.lockutils [req-00ea2f35-2524-4e00-a560-113dfdd7d031 req-2021f520-081d-4e29-bae3-05037767db9e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:59 compute-2 nova_compute[226829]: 2026-01-31 08:04:59.937 226833 DEBUG nova.compute.manager [req-00ea2f35-2524-4e00-a560-113dfdd7d031 req-2021f520-081d-4e29-bae3-05037767db9e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Processing event network-vif-plugged-fead71ab-29ff-4831-93d2-9a03e4ed64ac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.947 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[af20a215-14a8-4fb2-8f17-7a62b20b7f3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0fc23939-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:7a:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 7, 'rx_bytes': 90, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 7, 'rx_bytes': 90, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695169, 'reachable_time': 43073, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271307, 'error': None, 'target': 'ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.958 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[db4c6657-e86a-4a5a-acba-4b0981f74aff]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0fc23939-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695178, 'tstamp': 695178}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271308, 'error': None, 'target': 'ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap0fc23939-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695180, 'tstamp': 695180}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271308, 'error': None, 'target': 'ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.959 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0fc23939-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:59 compute-2 nova_compute[226829]: 2026-01-31 08:04:59.961 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.962 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0fc23939-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.962 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.963 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0fc23939-30, col_values=(('external_ids', {'iface-id': '9661581e-a708-4de1-b2f7-80d9d3e80a0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.963 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.964 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 01b25ee1-016c-4dac-860c-941a5efc920a in datapath 2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c unbound from our chassis
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.966 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.972 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a8699d63-abb9-462f-bed4-b0ce34aac7a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.973 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2bf3b855-d1 in ovnmeta-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.974 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2bf3b855-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.975 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[678494df-d070-4cb4-9574-e3dae138c913]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.975 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6a2ae8a9-0a03-498d-8046-c509d9ae80af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.984 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[72c9e605-5d90-4d80-a3ae-54d32a8a55f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:04:59.992 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[da3e5303-939c-44ab-b814-3283e05be384]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.012 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[804c01c3-91ce-4dc7-acd4-20a294432fd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:00 compute-2 NetworkManager[48999]: <info>  [1769846700.0180] manager: (tap2bf3b855-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/201)
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.017 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8248d2d3-2f4b-49b8-a726-0efbfb8ebac5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.044 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[ccb62b2a-5399-4441-a24e-f6aa60eadf2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.048 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[fd8a1c42-9f2d-4381-aba6-2d854b64112f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:00 compute-2 NetworkManager[48999]: <info>  [1769846700.0664] device (tap2bf3b855-d0): carrier: link connected
Jan 31 08:05:00 compute-2 nova_compute[226829]: 2026-01-31 08:05:00.070 226833 DEBUG nova.compute.manager [req-e1ee97b2-c93b-49d6-8875-b46e151c8975 req-b5f27454-34d4-408d-8bbe-2127b0cb1422 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-plugged-24a33ffc-f652-4c4c-84e0-5ba78e56a6ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.070 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[d8922f9e-abb4-4122-827c-ec06144f0b67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:00 compute-2 nova_compute[226829]: 2026-01-31 08:05:00.071 226833 DEBUG oslo_concurrency.lockutils [req-e1ee97b2-c93b-49d6-8875-b46e151c8975 req-b5f27454-34d4-408d-8bbe-2127b0cb1422 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:00 compute-2 nova_compute[226829]: 2026-01-31 08:05:00.072 226833 DEBUG oslo_concurrency.lockutils [req-e1ee97b2-c93b-49d6-8875-b46e151c8975 req-b5f27454-34d4-408d-8bbe-2127b0cb1422 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:00 compute-2 nova_compute[226829]: 2026-01-31 08:05:00.072 226833 DEBUG oslo_concurrency.lockutils [req-e1ee97b2-c93b-49d6-8875-b46e151c8975 req-b5f27454-34d4-408d-8bbe-2127b0cb1422 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:00.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:00 compute-2 nova_compute[226829]: 2026-01-31 08:05:00.073 226833 DEBUG nova.compute.manager [req-e1ee97b2-c93b-49d6-8875-b46e151c8975 req-b5f27454-34d4-408d-8bbe-2127b0cb1422 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Processing event network-vif-plugged-24a33ffc-f652-4c4c-84e0-5ba78e56a6ca _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.082 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6907a0a6-fd49-4373-89ed-c0cd6c189303]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2bf3b855-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:19:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695253, 'reachable_time': 39248, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271319, 'error': None, 'target': 'ovnmeta-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.090 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f52ef8eb-48ae-4d79-958a-4e6ac8e25645]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feeb:1967'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695253, 'tstamp': 695253}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271320, 'error': None, 'target': 'ovnmeta-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.100 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b7bb516b-9ab8-4c09-b8c7-9194561bdad1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2bf3b855-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:19:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695253, 'reachable_time': 39248, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271321, 'error': None, 'target': 'ovnmeta-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.115 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d0a92dc6-ec01-4c90-a2ab-4ad98e51894e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.150 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2fc2d22b-c08e-4ce2-9c27-21b22073bae8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.151 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2bf3b855-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.151 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.152 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2bf3b855-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:00 compute-2 kernel: tap2bf3b855-d0: entered promiscuous mode
Jan 31 08:05:00 compute-2 NetworkManager[48999]: <info>  [1769846700.1540] manager: (tap2bf3b855-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/202)
Jan 31 08:05:00 compute-2 nova_compute[226829]: 2026-01-31 08:05:00.155 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.156 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2bf3b855-d0, col_values=(('external_ids', {'iface-id': '7ea68347-f5e6-4e5f-9510-beda4e01f072'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:00 compute-2 ovn_controller[133834]: 2026-01-31T08:05:00Z|00397|binding|INFO|Releasing lport 7ea68347-f5e6-4e5f-9510-beda4e01f072 from this chassis (sb_readonly=0)
Jan 31 08:05:00 compute-2 nova_compute[226829]: 2026-01-31 08:05:00.161 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.162 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.163 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[bef48628-07f7-4e8d-a87e-6f9b0dddceb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.163 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c.pid.haproxy
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.164 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c', 'env', 'PROCESS_TAG=haproxy-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:05:00 compute-2 nova_compute[226829]: 2026-01-31 08:05:00.254 226833 DEBUG nova.compute.manager [req-800f20da-dbee-4945-8c98-47c2cd7ded0b req-52fba949-d975-48c0-b31f-0d62820eb0bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-plugged-fbf252f3-5bb0-4da8-9564-87a1f052f2dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:00 compute-2 nova_compute[226829]: 2026-01-31 08:05:00.254 226833 DEBUG oslo_concurrency.lockutils [req-800f20da-dbee-4945-8c98-47c2cd7ded0b req-52fba949-d975-48c0-b31f-0d62820eb0bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:00 compute-2 nova_compute[226829]: 2026-01-31 08:05:00.255 226833 DEBUG oslo_concurrency.lockutils [req-800f20da-dbee-4945-8c98-47c2cd7ded0b req-52fba949-d975-48c0-b31f-0d62820eb0bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:00 compute-2 nova_compute[226829]: 2026-01-31 08:05:00.255 226833 DEBUG oslo_concurrency.lockutils [req-800f20da-dbee-4945-8c98-47c2cd7ded0b req-52fba949-d975-48c0-b31f-0d62820eb0bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:00 compute-2 nova_compute[226829]: 2026-01-31 08:05:00.255 226833 DEBUG nova.compute.manager [req-800f20da-dbee-4945-8c98-47c2cd7ded0b req-52fba949-d975-48c0-b31f-0d62820eb0bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Processing event network-vif-plugged-fbf252f3-5bb0-4da8-9564-87a1f052f2dc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:05:00 compute-2 nova_compute[226829]: 2026-01-31 08:05:00.329 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:00 compute-2 podman[271353]: 2026-01-31 08:05:00.498523572 +0000 UTC m=+0.056121011 container create 5ce0984e26229efa867b1a8f8088374e81309be61ef709641256ab6c5cbbcffd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:05:00 compute-2 systemd[1]: Started libpod-conmon-5ce0984e26229efa867b1a8f8088374e81309be61ef709641256ab6c5cbbcffd.scope.
Jan 31 08:05:00 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:05:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c1e264d30efc03541b1609adef3e6433704a6f0c8db3c941ae180ed3d142a5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:05:00 compute-2 podman[271353]: 2026-01-31 08:05:00.553659034 +0000 UTC m=+0.111256473 container init 5ce0984e26229efa867b1a8f8088374e81309be61ef709641256ab6c5cbbcffd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Jan 31 08:05:00 compute-2 podman[271353]: 2026-01-31 08:05:00.46339048 +0000 UTC m=+0.020987999 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:05:00 compute-2 podman[271353]: 2026-01-31 08:05:00.558398383 +0000 UTC m=+0.115995812 container start 5ce0984e26229efa867b1a8f8088374e81309be61ef709641256ab6c5cbbcffd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 31 08:05:00 compute-2 neutron-haproxy-ovnmeta-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c[271369]: [NOTICE]   (271373) : New worker (271375) forked
Jan 31 08:05:00 compute-2 neutron-haproxy-ovnmeta-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c[271369]: [NOTICE]   (271373) : Loading success.
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.609 143841 INFO neutron.agent.ovn.metadata.agent [-] Port a3ef806e-29d0-4489-9e8b-fb24ee625783 in datapath 0fc23939-3578-48f7-b98e-07928f016ed0 unbound from our chassis
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.610 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0fc23939-3578-48f7-b98e-07928f016ed0
Jan 31 08:05:00 compute-2 nova_compute[226829]: 2026-01-31 08:05:00.615 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.620 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f8c40f71-f274-441b-a0a6-c54a7966a303]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.643 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[2ea22d26-2281-4996-a53c-16b967832d3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.646 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[d65e50e0-04a2-4ed9-83ef-b2010db32a8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:00 compute-2 ceph-mon[77282]: pgmap v2039: 305 pgs: 305 active+clean; 295 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Jan 31 08:05:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3410870166' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:05:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1505055136' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.668 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[16acc9dc-c125-4ff3-9704-4fc403693d26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.680 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[673b993e-2852-4984-a59d-c0d968e4bc11]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0fc23939-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:7a:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 9, 'rx_bytes': 266, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 9, 'rx_bytes': 266, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695169, 'reachable_time': 43073, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271389, 'error': None, 'target': 'ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.691 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ff4dd51a-c08e-4ded-ab6e-daeda23e4f51]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0fc23939-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695178, 'tstamp': 695178}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271390, 'error': None, 'target': 'ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap0fc23939-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695180, 'tstamp': 695180}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271390, 'error': None, 'target': 'ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.693 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0fc23939-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:00 compute-2 nova_compute[226829]: 2026-01-31 08:05:00.698 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:00 compute-2 nova_compute[226829]: 2026-01-31 08:05:00.699 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.700 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0fc23939-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.700 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.700 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0fc23939-30, col_values=(('external_ids', {'iface-id': '9661581e-a708-4de1-b2f7-80d9d3e80a0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.700 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.702 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 522d060c-fdf5-4d50-b0b9-94211986b4ee in datapath 2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c unbound from our chassis
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.703 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.713 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[13e4ad08-8609-4d3c-9cac-b341f60932c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.733 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[dd453a38-4371-4957-941b-9f7b78309b74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.735 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[099906e0-bdc4-4ab0-8841-70b861146b02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.754 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[4aadfdb0-f6d7-43b0-ab6f-354203bed0ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.765 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9d9f2e2f-13cd-4d8a-9d99-a0e985d2cadd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2bf3b855-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:19:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 266, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 266, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695253, 'reachable_time': 39248, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271396, 'error': None, 'target': 'ovnmeta-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.774 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2331bbc5-aa53-4db2-8fce-c386f5028f40]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.2.2.2'], ['IFA_LOCAL', '10.2.2.2'], ['IFA_BROADCAST', '10.2.2.255'], ['IFA_LABEL', 'tap2bf3b855-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695260, 'tstamp': 695260}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271397, 'error': None, 'target': 'ovnmeta-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2bf3b855-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695262, 'tstamp': 695262}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271397, 'error': None, 'target': 'ovnmeta-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.775 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2bf3b855-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:00 compute-2 nova_compute[226829]: 2026-01-31 08:05:00.777 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:00 compute-2 nova_compute[226829]: 2026-01-31 08:05:00.779 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.779 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2bf3b855-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.779 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.779 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2bf3b855-d0, col_values=(('external_ids', {'iface-id': '7ea68347-f5e6-4e5f-9510-beda4e01f072'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:00.780 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:05:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:01.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:01 compute-2 sudo[271399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:05:01 compute-2 sudo[271399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:01 compute-2 sudo[271399]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:01 compute-2 nova_compute[226829]: 2026-01-31 08:05:01.535 226833 DEBUG nova.network.neutron [None req-76dde478-5969-448f-9fa4-23627dcd1a3e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Updating instance_info_cache with network_info: [{"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:05:01 compute-2 sudo[271424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:05:01 compute-2 sudo[271424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:01 compute-2 sudo[271424]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:01 compute-2 nova_compute[226829]: 2026-01-31 08:05:01.566 226833 DEBUG oslo_concurrency.lockutils [None req-76dde478-5969-448f-9fa4-23627dcd1a3e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Releasing lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:05:01 compute-2 nova_compute[226829]: 2026-01-31 08:05:01.567 226833 DEBUG nova.objects.instance [None req-76dde478-5969-448f-9fa4-23627dcd1a3e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'migration_context' on Instance uuid dfba7f29-bde8-4327-a7b3-1c4fd44e045a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:05:01 compute-2 nova_compute[226829]: 2026-01-31 08:05:01.661 226833 DEBUG nova.storage.rbd_utils [None req-76dde478-5969-448f-9fa4-23627dcd1a3e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] removing snapshot(nova-resize) on rbd image(dfba7f29-bde8-4327-a7b3-1c4fd44e045a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.016 226833 DEBUG nova.compute.manager [req-50114cf2-2c99-4996-84c6-33a08fbdafeb req-c810a483-33d1-4131-b9bc-27af734f0f14 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-plugged-fead71ab-29ff-4831-93d2-9a03e4ed64ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.017 226833 DEBUG oslo_concurrency.lockutils [req-50114cf2-2c99-4996-84c6-33a08fbdafeb req-c810a483-33d1-4131-b9bc-27af734f0f14 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.017 226833 DEBUG oslo_concurrency.lockutils [req-50114cf2-2c99-4996-84c6-33a08fbdafeb req-c810a483-33d1-4131-b9bc-27af734f0f14 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.018 226833 DEBUG oslo_concurrency.lockutils [req-50114cf2-2c99-4996-84c6-33a08fbdafeb req-c810a483-33d1-4131-b9bc-27af734f0f14 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.018 226833 DEBUG nova.compute.manager [req-50114cf2-2c99-4996-84c6-33a08fbdafeb req-c810a483-33d1-4131-b9bc-27af734f0f14 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] No event matching network-vif-plugged-fead71ab-29ff-4831-93d2-9a03e4ed64ac in dict_keys([('network-vif-plugged', 'e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e'), ('network-vif-plugged', 'a3ef806e-29d0-4489-9e8b-fb24ee625783'), ('network-vif-plugged', '01b25ee1-016c-4dac-860c-941a5efc920a'), ('network-vif-plugged', '522d060c-fdf5-4d50-b0b9-94211986b4ee')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.018 226833 WARNING nova.compute.manager [req-50114cf2-2c99-4996-84c6-33a08fbdafeb req-c810a483-33d1-4131-b9bc-27af734f0f14 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received unexpected event network-vif-plugged-fead71ab-29ff-4831-93d2-9a03e4ed64ac for instance with vm_state building and task_state spawning.
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.019 226833 DEBUG nova.compute.manager [req-50114cf2-2c99-4996-84c6-33a08fbdafeb req-c810a483-33d1-4131-b9bc-27af734f0f14 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-plugged-01b25ee1-016c-4dac-860c-941a5efc920a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.019 226833 DEBUG oslo_concurrency.lockutils [req-50114cf2-2c99-4996-84c6-33a08fbdafeb req-c810a483-33d1-4131-b9bc-27af734f0f14 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.019 226833 DEBUG oslo_concurrency.lockutils [req-50114cf2-2c99-4996-84c6-33a08fbdafeb req-c810a483-33d1-4131-b9bc-27af734f0f14 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.019 226833 DEBUG oslo_concurrency.lockutils [req-50114cf2-2c99-4996-84c6-33a08fbdafeb req-c810a483-33d1-4131-b9bc-27af734f0f14 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.020 226833 DEBUG nova.compute.manager [req-50114cf2-2c99-4996-84c6-33a08fbdafeb req-c810a483-33d1-4131-b9bc-27af734f0f14 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Processing event network-vif-plugged-01b25ee1-016c-4dac-860c-941a5efc920a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.020 226833 DEBUG nova.compute.manager [req-50114cf2-2c99-4996-84c6-33a08fbdafeb req-c810a483-33d1-4131-b9bc-27af734f0f14 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-plugged-01b25ee1-016c-4dac-860c-941a5efc920a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.020 226833 DEBUG oslo_concurrency.lockutils [req-50114cf2-2c99-4996-84c6-33a08fbdafeb req-c810a483-33d1-4131-b9bc-27af734f0f14 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.021 226833 DEBUG oslo_concurrency.lockutils [req-50114cf2-2c99-4996-84c6-33a08fbdafeb req-c810a483-33d1-4131-b9bc-27af734f0f14 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.021 226833 DEBUG oslo_concurrency.lockutils [req-50114cf2-2c99-4996-84c6-33a08fbdafeb req-c810a483-33d1-4131-b9bc-27af734f0f14 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.021 226833 DEBUG nova.compute.manager [req-50114cf2-2c99-4996-84c6-33a08fbdafeb req-c810a483-33d1-4131-b9bc-27af734f0f14 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] No event matching network-vif-plugged-01b25ee1-016c-4dac-860c-941a5efc920a in dict_keys([('network-vif-plugged', 'e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e'), ('network-vif-plugged', 'a3ef806e-29d0-4489-9e8b-fb24ee625783'), ('network-vif-plugged', '522d060c-fdf5-4d50-b0b9-94211986b4ee')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.022 226833 WARNING nova.compute.manager [req-50114cf2-2c99-4996-84c6-33a08fbdafeb req-c810a483-33d1-4131-b9bc-27af734f0f14 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received unexpected event network-vif-plugged-01b25ee1-016c-4dac-860c-941a5efc920a for instance with vm_state building and task_state spawning.
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.022 226833 DEBUG nova.compute.manager [req-50114cf2-2c99-4996-84c6-33a08fbdafeb req-c810a483-33d1-4131-b9bc-27af734f0f14 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-plugged-522d060c-fdf5-4d50-b0b9-94211986b4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.022 226833 DEBUG oslo_concurrency.lockutils [req-50114cf2-2c99-4996-84c6-33a08fbdafeb req-c810a483-33d1-4131-b9bc-27af734f0f14 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.023 226833 DEBUG oslo_concurrency.lockutils [req-50114cf2-2c99-4996-84c6-33a08fbdafeb req-c810a483-33d1-4131-b9bc-27af734f0f14 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.023 226833 DEBUG oslo_concurrency.lockutils [req-50114cf2-2c99-4996-84c6-33a08fbdafeb req-c810a483-33d1-4131-b9bc-27af734f0f14 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.023 226833 DEBUG nova.compute.manager [req-50114cf2-2c99-4996-84c6-33a08fbdafeb req-c810a483-33d1-4131-b9bc-27af734f0f14 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Processing event network-vif-plugged-522d060c-fdf5-4d50-b0b9-94211986b4ee _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:05:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:02.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.180 226833 DEBUG nova.compute.manager [req-9788e1d7-5db3-4fea-97f4-4f5f106edb20 req-96d01912-d8e1-45d6-9a0d-87319888bd67 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-plugged-24a33ffc-f652-4c4c-84e0-5ba78e56a6ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.181 226833 DEBUG oslo_concurrency.lockutils [req-9788e1d7-5db3-4fea-97f4-4f5f106edb20 req-96d01912-d8e1-45d6-9a0d-87319888bd67 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.181 226833 DEBUG oslo_concurrency.lockutils [req-9788e1d7-5db3-4fea-97f4-4f5f106edb20 req-96d01912-d8e1-45d6-9a0d-87319888bd67 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.181 226833 DEBUG oslo_concurrency.lockutils [req-9788e1d7-5db3-4fea-97f4-4f5f106edb20 req-96d01912-d8e1-45d6-9a0d-87319888bd67 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.181 226833 DEBUG nova.compute.manager [req-9788e1d7-5db3-4fea-97f4-4f5f106edb20 req-96d01912-d8e1-45d6-9a0d-87319888bd67 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] No event matching network-vif-plugged-24a33ffc-f652-4c4c-84e0-5ba78e56a6ca in dict_keys([('network-vif-plugged', 'e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e'), ('network-vif-plugged', 'a3ef806e-29d0-4489-9e8b-fb24ee625783')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.182 226833 WARNING nova.compute.manager [req-9788e1d7-5db3-4fea-97f4-4f5f106edb20 req-96d01912-d8e1-45d6-9a0d-87319888bd67 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received unexpected event network-vif-plugged-24a33ffc-f652-4c4c-84e0-5ba78e56a6ca for instance with vm_state building and task_state spawning.
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.668 226833 DEBUG nova.compute.manager [req-09f8d5b5-f9a1-4fd0-95b1-0d103ba4efb3 req-1b94330a-589c-4a6d-8cae-a6d5b65a807a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-plugged-fbf252f3-5bb0-4da8-9564-87a1f052f2dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.669 226833 DEBUG oslo_concurrency.lockutils [req-09f8d5b5-f9a1-4fd0-95b1-0d103ba4efb3 req-1b94330a-589c-4a6d-8cae-a6d5b65a807a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.669 226833 DEBUG oslo_concurrency.lockutils [req-09f8d5b5-f9a1-4fd0-95b1-0d103ba4efb3 req-1b94330a-589c-4a6d-8cae-a6d5b65a807a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.669 226833 DEBUG oslo_concurrency.lockutils [req-09f8d5b5-f9a1-4fd0-95b1-0d103ba4efb3 req-1b94330a-589c-4a6d-8cae-a6d5b65a807a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.670 226833 DEBUG nova.compute.manager [req-09f8d5b5-f9a1-4fd0-95b1-0d103ba4efb3 req-1b94330a-589c-4a6d-8cae-a6d5b65a807a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] No event matching network-vif-plugged-fbf252f3-5bb0-4da8-9564-87a1f052f2dc in dict_keys([('network-vif-plugged', 'e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e'), ('network-vif-plugged', 'a3ef806e-29d0-4489-9e8b-fb24ee625783')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.670 226833 WARNING nova.compute.manager [req-09f8d5b5-f9a1-4fd0-95b1-0d103ba4efb3 req-1b94330a-589c-4a6d-8cae-a6d5b65a807a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received unexpected event network-vif-plugged-fbf252f3-5bb0-4da8-9564-87a1f052f2dc for instance with vm_state building and task_state spawning.
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.670 226833 DEBUG nova.compute.manager [req-09f8d5b5-f9a1-4fd0-95b1-0d103ba4efb3 req-1b94330a-589c-4a6d-8cae-a6d5b65a807a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-plugged-a3ef806e-29d0-4489-9e8b-fb24ee625783 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.670 226833 DEBUG oslo_concurrency.lockutils [req-09f8d5b5-f9a1-4fd0-95b1-0d103ba4efb3 req-1b94330a-589c-4a6d-8cae-a6d5b65a807a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.671 226833 DEBUG oslo_concurrency.lockutils [req-09f8d5b5-f9a1-4fd0-95b1-0d103ba4efb3 req-1b94330a-589c-4a6d-8cae-a6d5b65a807a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.671 226833 DEBUG oslo_concurrency.lockutils [req-09f8d5b5-f9a1-4fd0-95b1-0d103ba4efb3 req-1b94330a-589c-4a6d-8cae-a6d5b65a807a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.671 226833 DEBUG nova.compute.manager [req-09f8d5b5-f9a1-4fd0-95b1-0d103ba4efb3 req-1b94330a-589c-4a6d-8cae-a6d5b65a807a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Processing event network-vif-plugged-a3ef806e-29d0-4489-9e8b-fb24ee625783 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.671 226833 DEBUG nova.compute.manager [req-09f8d5b5-f9a1-4fd0-95b1-0d103ba4efb3 req-1b94330a-589c-4a6d-8cae-a6d5b65a807a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-plugged-a3ef806e-29d0-4489-9e8b-fb24ee625783 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.672 226833 DEBUG oslo_concurrency.lockutils [req-09f8d5b5-f9a1-4fd0-95b1-0d103ba4efb3 req-1b94330a-589c-4a6d-8cae-a6d5b65a807a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.672 226833 DEBUG oslo_concurrency.lockutils [req-09f8d5b5-f9a1-4fd0-95b1-0d103ba4efb3 req-1b94330a-589c-4a6d-8cae-a6d5b65a807a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.672 226833 DEBUG oslo_concurrency.lockutils [req-09f8d5b5-f9a1-4fd0-95b1-0d103ba4efb3 req-1b94330a-589c-4a6d-8cae-a6d5b65a807a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.672 226833 DEBUG nova.compute.manager [req-09f8d5b5-f9a1-4fd0-95b1-0d103ba4efb3 req-1b94330a-589c-4a6d-8cae-a6d5b65a807a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] No event matching network-vif-plugged-a3ef806e-29d0-4489-9e8b-fb24ee625783 in dict_keys([('network-vif-plugged', 'e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.673 226833 WARNING nova.compute.manager [req-09f8d5b5-f9a1-4fd0-95b1-0d103ba4efb3 req-1b94330a-589c-4a6d-8cae-a6d5b65a807a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received unexpected event network-vif-plugged-a3ef806e-29d0-4489-9e8b-fb24ee625783 for instance with vm_state building and task_state spawning.
Jan 31 08:05:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e258 e258: 3 total, 3 up, 3 in
Jan 31 08:05:02 compute-2 ceph-mon[77282]: pgmap v2040: 305 pgs: 305 active+clean; 295 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 132 op/s
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.808 226833 DEBUG nova.compute.manager [req-d1d991b3-37d3-4439-a336-3b4f82443556 req-22186157-c06d-4393-9c98-3c793fc40db1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-plugged-e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.808 226833 DEBUG oslo_concurrency.lockutils [req-d1d991b3-37d3-4439-a336-3b4f82443556 req-22186157-c06d-4393-9c98-3c793fc40db1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.809 226833 DEBUG oslo_concurrency.lockutils [req-d1d991b3-37d3-4439-a336-3b4f82443556 req-22186157-c06d-4393-9c98-3c793fc40db1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.809 226833 DEBUG oslo_concurrency.lockutils [req-d1d991b3-37d3-4439-a336-3b4f82443556 req-22186157-c06d-4393-9c98-3c793fc40db1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.809 226833 DEBUG nova.compute.manager [req-d1d991b3-37d3-4439-a336-3b4f82443556 req-22186157-c06d-4393-9c98-3c793fc40db1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Processing event network-vif-plugged-e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.810 226833 DEBUG nova.compute.manager [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Instance event wait completed in 3 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.815 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846702.8147674, b0ff8f26-937f-43e0-b422-8a0fb0226eac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.816 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] VM Resumed (Lifecycle Event)
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.818 226833 DEBUG nova.virt.libvirt.driver [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.823 226833 INFO nova.virt.libvirt.driver [-] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Instance spawned successfully.
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.823 226833 DEBUG nova.virt.libvirt.driver [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.834 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.838 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.847 226833 DEBUG nova.virt.libvirt.driver [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.848 226833 DEBUG nova.virt.libvirt.driver [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.848 226833 DEBUG nova.virt.libvirt.driver [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.849 226833 DEBUG nova.virt.libvirt.driver [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.849 226833 DEBUG nova.virt.libvirt.driver [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.849 226833 DEBUG nova.virt.libvirt.driver [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.863 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.945 226833 INFO nova.compute.manager [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Took 42.03 seconds to spawn the instance on the hypervisor.
Jan 31 08:05:02 compute-2 nova_compute[226829]: 2026-01-31 08:05:02.945 226833 DEBUG nova.compute.manager [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:05:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:03.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:03 compute-2 nova_compute[226829]: 2026-01-31 08:05:03.046 226833 DEBUG nova.virt.libvirt.vif [None req-76dde478-5969-448f-9fa4-23627dcd1a3e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:01:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1163372726',display_name='tempest-ServerActionsTestJSON-server-1163372726',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1163372726',id=91,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:04:52Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-30t53h1p',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:04:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=dfba7f29-bde8-4327-a7b3-1c4fd44e045a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:05:03 compute-2 nova_compute[226829]: 2026-01-31 08:05:03.046 226833 DEBUG nova.network.os_vif_util [None req-76dde478-5969-448f-9fa4-23627dcd1a3e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:05:03 compute-2 nova_compute[226829]: 2026-01-31 08:05:03.047 226833 DEBUG nova.network.os_vif_util [None req-76dde478-5969-448f-9fa4-23627dcd1a3e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:05:03 compute-2 nova_compute[226829]: 2026-01-31 08:05:03.047 226833 DEBUG os_vif [None req-76dde478-5969-448f-9fa4-23627dcd1a3e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:05:03 compute-2 nova_compute[226829]: 2026-01-31 08:05:03.050 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:03 compute-2 nova_compute[226829]: 2026-01-31 08:05:03.051 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86cf2bf6-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:03 compute-2 nova_compute[226829]: 2026-01-31 08:05:03.051 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:05:03 compute-2 nova_compute[226829]: 2026-01-31 08:05:03.057 226833 INFO os_vif [None req-76dde478-5969-448f-9fa4-23627dcd1a3e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f')
Jan 31 08:05:03 compute-2 nova_compute[226829]: 2026-01-31 08:05:03.058 226833 DEBUG oslo_concurrency.lockutils [None req-76dde478-5969-448f-9fa4-23627dcd1a3e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:03 compute-2 nova_compute[226829]: 2026-01-31 08:05:03.058 226833 DEBUG oslo_concurrency.lockutils [None req-76dde478-5969-448f-9fa4-23627dcd1a3e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:03 compute-2 nova_compute[226829]: 2026-01-31 08:05:03.060 226833 INFO nova.compute.manager [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Took 49.94 seconds to build instance.
Jan 31 08:05:03 compute-2 nova_compute[226829]: 2026-01-31 08:05:03.088 226833 DEBUG oslo_concurrency.lockutils [None req-03bd1e61-c60d-4171-ac68-586c47914b86 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 50.078s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:03 compute-2 nova_compute[226829]: 2026-01-31 08:05:03.201 226833 DEBUG oslo_concurrency.processutils [None req-76dde478-5969-448f-9fa4-23627dcd1a3e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:05:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:05:03 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3126004755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:03 compute-2 nova_compute[226829]: 2026-01-31 08:05:03.625 226833 DEBUG oslo_concurrency.processutils [None req-76dde478-5969-448f-9fa4-23627dcd1a3e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:05:03 compute-2 nova_compute[226829]: 2026-01-31 08:05:03.632 226833 DEBUG nova.compute.provider_tree [None req-76dde478-5969-448f-9fa4-23627dcd1a3e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:05:03 compute-2 nova_compute[226829]: 2026-01-31 08:05:03.659 226833 DEBUG nova.scheduler.client.report [None req-76dde478-5969-448f-9fa4-23627dcd1a3e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:05:03 compute-2 nova_compute[226829]: 2026-01-31 08:05:03.716 226833 DEBUG oslo_concurrency.lockutils [None req-76dde478-5969-448f-9fa4-23627dcd1a3e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:03 compute-2 ceph-mon[77282]: osdmap e258: 3 total, 3 up, 3 in
Jan 31 08:05:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3126004755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:03 compute-2 nova_compute[226829]: 2026-01-31 08:05:03.887 226833 INFO nova.scheduler.client.report [None req-76dde478-5969-448f-9fa4-23627dcd1a3e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Deleted allocation for migration 7aa2cc78-3afd-485b-8162-2af7bfbfb142
Jan 31 08:05:03 compute-2 nova_compute[226829]: 2026-01-31 08:05:03.943 226833 DEBUG oslo_concurrency.lockutils [None req-76dde478-5969-448f-9fa4-23627dcd1a3e d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 7.408s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:04.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:05:04 compute-2 nova_compute[226829]: 2026-01-31 08:05:04.252 226833 DEBUG nova.compute.manager [req-8690c2a3-372c-477b-b143-1300473267ce req-559dd3a0-5441-4304-90cf-d049dc735151 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-plugged-522d060c-fdf5-4d50-b0b9-94211986b4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:04 compute-2 nova_compute[226829]: 2026-01-31 08:05:04.253 226833 DEBUG oslo_concurrency.lockutils [req-8690c2a3-372c-477b-b143-1300473267ce req-559dd3a0-5441-4304-90cf-d049dc735151 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:04 compute-2 nova_compute[226829]: 2026-01-31 08:05:04.253 226833 DEBUG oslo_concurrency.lockutils [req-8690c2a3-372c-477b-b143-1300473267ce req-559dd3a0-5441-4304-90cf-d049dc735151 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:04 compute-2 nova_compute[226829]: 2026-01-31 08:05:04.253 226833 DEBUG oslo_concurrency.lockutils [req-8690c2a3-372c-477b-b143-1300473267ce req-559dd3a0-5441-4304-90cf-d049dc735151 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:04 compute-2 nova_compute[226829]: 2026-01-31 08:05:04.253 226833 DEBUG nova.compute.manager [req-8690c2a3-372c-477b-b143-1300473267ce req-559dd3a0-5441-4304-90cf-d049dc735151 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] No waiting events found dispatching network-vif-plugged-522d060c-fdf5-4d50-b0b9-94211986b4ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:05:04 compute-2 nova_compute[226829]: 2026-01-31 08:05:04.254 226833 WARNING nova.compute.manager [req-8690c2a3-372c-477b-b143-1300473267ce req-559dd3a0-5441-4304-90cf-d049dc735151 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received unexpected event network-vif-plugged-522d060c-fdf5-4d50-b0b9-94211986b4ee for instance with vm_state active and task_state None.
Jan 31 08:05:04 compute-2 nova_compute[226829]: 2026-01-31 08:05:04.373 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:04 compute-2 ceph-mon[77282]: pgmap v2042: 305 pgs: 305 active+clean; 295 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 122 op/s
Jan 31 08:05:04 compute-2 nova_compute[226829]: 2026-01-31 08:05:04.917 226833 DEBUG nova.compute.manager [req-22bee546-c9ab-426c-8cfc-c68d17619b83 req-c144f036-a560-40f4-a190-6c2170c9a3fa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-plugged-e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:04 compute-2 nova_compute[226829]: 2026-01-31 08:05:04.918 226833 DEBUG oslo_concurrency.lockutils [req-22bee546-c9ab-426c-8cfc-c68d17619b83 req-c144f036-a560-40f4-a190-6c2170c9a3fa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:04 compute-2 nova_compute[226829]: 2026-01-31 08:05:04.918 226833 DEBUG oslo_concurrency.lockutils [req-22bee546-c9ab-426c-8cfc-c68d17619b83 req-c144f036-a560-40f4-a190-6c2170c9a3fa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:04 compute-2 nova_compute[226829]: 2026-01-31 08:05:04.918 226833 DEBUG oslo_concurrency.lockutils [req-22bee546-c9ab-426c-8cfc-c68d17619b83 req-c144f036-a560-40f4-a190-6c2170c9a3fa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:04 compute-2 nova_compute[226829]: 2026-01-31 08:05:04.918 226833 DEBUG nova.compute.manager [req-22bee546-c9ab-426c-8cfc-c68d17619b83 req-c144f036-a560-40f4-a190-6c2170c9a3fa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] No waiting events found dispatching network-vif-plugged-e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:05:04 compute-2 nova_compute[226829]: 2026-01-31 08:05:04.919 226833 WARNING nova.compute.manager [req-22bee546-c9ab-426c-8cfc-c68d17619b83 req-c144f036-a560-40f4-a190-6c2170c9a3fa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received unexpected event network-vif-plugged-e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e for instance with vm_state active and task_state None.
Jan 31 08:05:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:05:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:05.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:05:05 compute-2 nova_compute[226829]: 2026-01-31 08:05:05.332 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:05 compute-2 nova_compute[226829]: 2026-01-31 08:05:05.617 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:05 compute-2 ceph-mon[77282]: pgmap v2043: 305 pgs: 305 active+clean; 295 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 20 KiB/s wr, 70 op/s
Jan 31 08:05:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:06.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:06.876 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:06.877 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:06.878 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:06 compute-2 nova_compute[226829]: 2026-01-31 08:05:06.894 226833 DEBUG nova.compute.manager [req-3099b188-07a1-4e80-8131-fa1b4f4ff91f req-2013d28d-282b-4f4a-84a3-517ddfa3d2fe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-changed-fbf252f3-5bb0-4da8-9564-87a1f052f2dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:06 compute-2 nova_compute[226829]: 2026-01-31 08:05:06.895 226833 DEBUG nova.compute.manager [req-3099b188-07a1-4e80-8131-fa1b4f4ff91f req-2013d28d-282b-4f4a-84a3-517ddfa3d2fe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Refreshing instance network info cache due to event network-changed-fbf252f3-5bb0-4da8-9564-87a1f052f2dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:05:06 compute-2 nova_compute[226829]: 2026-01-31 08:05:06.895 226833 DEBUG oslo_concurrency.lockutils [req-3099b188-07a1-4e80-8131-fa1b4f4ff91f req-2013d28d-282b-4f4a-84a3-517ddfa3d2fe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:05:06 compute-2 nova_compute[226829]: 2026-01-31 08:05:06.895 226833 DEBUG oslo_concurrency.lockutils [req-3099b188-07a1-4e80-8131-fa1b4f4ff91f req-2013d28d-282b-4f4a-84a3-517ddfa3d2fe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:05:06 compute-2 nova_compute[226829]: 2026-01-31 08:05:06.895 226833 DEBUG nova.network.neutron [req-3099b188-07a1-4e80-8131-fa1b4f4ff91f req-2013d28d-282b-4f4a-84a3-517ddfa3d2fe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Refreshing network info cache for port fbf252f3-5bb0-4da8-9564-87a1f052f2dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:05:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:05:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:07.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:05:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:08.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:08 compute-2 ceph-mon[77282]: pgmap v2044: 305 pgs: 305 active+clean; 295 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 38 KiB/s wr, 96 op/s
Jan 31 08:05:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:09.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:05:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e259 e259: 3 total, 3 up, 3 in
Jan 31 08:05:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:10.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:10 compute-2 nova_compute[226829]: 2026-01-31 08:05:10.335 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:10 compute-2 nova_compute[226829]: 2026-01-31 08:05:10.407 226833 DEBUG nova.network.neutron [req-3099b188-07a1-4e80-8131-fa1b4f4ff91f req-2013d28d-282b-4f4a-84a3-517ddfa3d2fe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Updated VIF entry in instance network info cache for port fbf252f3-5bb0-4da8-9564-87a1f052f2dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:05:10 compute-2 nova_compute[226829]: 2026-01-31 08:05:10.408 226833 DEBUG nova.network.neutron [req-3099b188-07a1-4e80-8131-fa1b4f4ff91f req-2013d28d-282b-4f4a-84a3-517ddfa3d2fe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Updating instance_info_cache with network_info: [{"id": "fbf252f3-5bb0-4da8-9564-87a1f052f2dc", "address": "fa:16:3e:57:8d:67", "network": {"id": "79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-638962480-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf252f3-5b", "ovs_interfaceid": "fbf252f3-5bb0-4da8-9564-87a1f052f2dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "address": "fa:16:3e:52:8d:5f", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfead71ab-29", "ovs_interfaceid": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "address": "fa:16:3e:c2:33:c0", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0f7cff6-3b", "ovs_interfaceid": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "address": "fa:16:3e:88:50:1d", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.122", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ef806e-29", "ovs_interfaceid": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "address": "fa:16:3e:3c:a6:94", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24a33ffc-f6", "ovs_interfaceid": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "01b25ee1-016c-4dac-860c-941a5efc920a", "address": "fa:16:3e:0a:8f:2c", "network": {"id": "2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c", "bridge": "br-int", "label": "tempest-device-tagging-net2-30530444", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b25ee1-01", "ovs_interfaceid": "01b25ee1-016c-4dac-860c-941a5efc920a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "522d060c-fdf5-4d50-b0b9-94211986b4ee", "address": "fa:16:3e:3b:3a:d6", "network": {"id": "2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c", "bridge": "br-int", "label": "tempest-device-tagging-net2-30530444", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522d060c-fd", "ovs_interfaceid": "522d060c-fdf5-4d50-b0b9-94211986b4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:05:10 compute-2 nova_compute[226829]: 2026-01-31 08:05:10.437 226833 DEBUG oslo_concurrency.lockutils [req-3099b188-07a1-4e80-8131-fa1b4f4ff91f req-2013d28d-282b-4f4a-84a3-517ddfa3d2fe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:05:10 compute-2 nova_compute[226829]: 2026-01-31 08:05:10.619 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:10 compute-2 ceph-mon[77282]: pgmap v2045: 305 pgs: 305 active+clean; 241 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 27 KiB/s wr, 269 op/s
Jan 31 08:05:10 compute-2 ceph-mon[77282]: osdmap e259: 3 total, 3 up, 3 in
Jan 31 08:05:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:05:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:11.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:05:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3053920949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:05:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:12.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:05:12 compute-2 podman[271512]: 2026-01-31 08:05:12.180946809 +0000 UTC m=+0.067236242 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:05:12 compute-2 ceph-mon[77282]: pgmap v2047: 305 pgs: 305 active+clean; 214 MiB data, 892 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 31 KiB/s wr, 304 op/s
Jan 31 08:05:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:13.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:13 compute-2 ceph-mon[77282]: pgmap v2048: 305 pgs: 305 active+clean; 222 MiB data, 896 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 433 KiB/s wr, 279 op/s
Jan 31 08:05:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:14.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:05:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:15.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:15 compute-2 nova_compute[226829]: 2026-01-31 08:05:15.336 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:15 compute-2 nova_compute[226829]: 2026-01-31 08:05:15.420 226833 DEBUG oslo_concurrency.lockutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "66218154-3695-4970-8525-7eaae98f9f14" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:15 compute-2 nova_compute[226829]: 2026-01-31 08:05:15.421 226833 DEBUG oslo_concurrency.lockutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:15 compute-2 nova_compute[226829]: 2026-01-31 08:05:15.444 226833 DEBUG nova.compute.manager [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:05:15 compute-2 nova_compute[226829]: 2026-01-31 08:05:15.525 226833 DEBUG oslo_concurrency.lockutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:15 compute-2 nova_compute[226829]: 2026-01-31 08:05:15.526 226833 DEBUG oslo_concurrency.lockutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:15 compute-2 nova_compute[226829]: 2026-01-31 08:05:15.532 226833 DEBUG nova.virt.hardware [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:05:15 compute-2 nova_compute[226829]: 2026-01-31 08:05:15.532 226833 INFO nova.compute.claims [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:05:15 compute-2 nova_compute[226829]: 2026-01-31 08:05:15.665 226833 DEBUG oslo_concurrency.processutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:05:15 compute-2 nova_compute[226829]: 2026-01-31 08:05:15.682 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:05:16 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1257996655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:16 compute-2 nova_compute[226829]: 2026-01-31 08:05:16.085 226833 DEBUG oslo_concurrency.processutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:05:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:16.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:16 compute-2 nova_compute[226829]: 2026-01-31 08:05:16.091 226833 DEBUG nova.compute.provider_tree [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:05:16 compute-2 nova_compute[226829]: 2026-01-31 08:05:16.125 226833 DEBUG nova.scheduler.client.report [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:05:16 compute-2 nova_compute[226829]: 2026-01-31 08:05:16.167 226833 DEBUG oslo_concurrency.lockutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:16 compute-2 nova_compute[226829]: 2026-01-31 08:05:16.168 226833 DEBUG nova.compute.manager [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:05:16 compute-2 nova_compute[226829]: 2026-01-31 08:05:16.240 226833 DEBUG nova.compute.manager [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:05:16 compute-2 nova_compute[226829]: 2026-01-31 08:05:16.241 226833 DEBUG nova.network.neutron [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:05:16 compute-2 nova_compute[226829]: 2026-01-31 08:05:16.264 226833 INFO nova.virt.libvirt.driver [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:05:16 compute-2 nova_compute[226829]: 2026-01-31 08:05:16.292 226833 DEBUG nova.compute.manager [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:05:16 compute-2 nova_compute[226829]: 2026-01-31 08:05:16.478 226833 DEBUG nova.compute.manager [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:05:16 compute-2 nova_compute[226829]: 2026-01-31 08:05:16.479 226833 DEBUG nova.virt.libvirt.driver [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:05:16 compute-2 nova_compute[226829]: 2026-01-31 08:05:16.479 226833 INFO nova.virt.libvirt.driver [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Creating image(s)
Jan 31 08:05:16 compute-2 nova_compute[226829]: 2026-01-31 08:05:16.508 226833 DEBUG nova.storage.rbd_utils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image 66218154-3695-4970-8525-7eaae98f9f14_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:05:16 compute-2 nova_compute[226829]: 2026-01-31 08:05:16.535 226833 DEBUG nova.storage.rbd_utils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image 66218154-3695-4970-8525-7eaae98f9f14_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:05:16 compute-2 nova_compute[226829]: 2026-01-31 08:05:16.564 226833 DEBUG nova.storage.rbd_utils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image 66218154-3695-4970-8525-7eaae98f9f14_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:05:16 compute-2 nova_compute[226829]: 2026-01-31 08:05:16.568 226833 DEBUG oslo_concurrency.processutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:05:16 compute-2 nova_compute[226829]: 2026-01-31 08:05:16.591 226833 DEBUG nova.policy [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd9ed446fb2cf4fc0a4e619c6c766fddc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1fcec9ca13964c7191134db4420ab049', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:05:16 compute-2 nova_compute[226829]: 2026-01-31 08:05:16.626 226833 DEBUG oslo_concurrency.processutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:05:16 compute-2 nova_compute[226829]: 2026-01-31 08:05:16.628 226833 DEBUG oslo_concurrency.lockutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:16 compute-2 nova_compute[226829]: 2026-01-31 08:05:16.628 226833 DEBUG oslo_concurrency.lockutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:16 compute-2 nova_compute[226829]: 2026-01-31 08:05:16.629 226833 DEBUG oslo_concurrency.lockutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:16 compute-2 nova_compute[226829]: 2026-01-31 08:05:16.655 226833 DEBUG nova.storage.rbd_utils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image 66218154-3695-4970-8525-7eaae98f9f14_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:05:16 compute-2 nova_compute[226829]: 2026-01-31 08:05:16.658 226833 DEBUG oslo_concurrency.processutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 66218154-3695-4970-8525-7eaae98f9f14_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:05:16 compute-2 ceph-mon[77282]: pgmap v2049: 305 pgs: 305 active+clean; 236 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.1 MiB/s wr, 276 op/s
Jan 31 08:05:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1257996655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3115382932' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:05:16 compute-2 nova_compute[226829]: 2026-01-31 08:05:16.969 226833 DEBUG oslo_concurrency.processutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 66218154-3695-4970-8525-7eaae98f9f14_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:05:17 compute-2 nova_compute[226829]: 2026-01-31 08:05:17.047 226833 DEBUG nova.storage.rbd_utils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] resizing rbd image 66218154-3695-4970-8525-7eaae98f9f14_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:05:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:05:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:17.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:05:17 compute-2 nova_compute[226829]: 2026-01-31 08:05:17.153 226833 DEBUG nova.objects.instance [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'migration_context' on Instance uuid 66218154-3695-4970-8525-7eaae98f9f14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:05:17 compute-2 nova_compute[226829]: 2026-01-31 08:05:17.183 226833 DEBUG nova.virt.libvirt.driver [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:05:17 compute-2 nova_compute[226829]: 2026-01-31 08:05:17.183 226833 DEBUG nova.virt.libvirt.driver [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Ensure instance console log exists: /var/lib/nova/instances/66218154-3695-4970-8525-7eaae98f9f14/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:05:17 compute-2 nova_compute[226829]: 2026-01-31 08:05:17.184 226833 DEBUG oslo_concurrency.lockutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:17 compute-2 nova_compute[226829]: 2026-01-31 08:05:17.185 226833 DEBUG oslo_concurrency.lockutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:17 compute-2 nova_compute[226829]: 2026-01-31 08:05:17.185 226833 DEBUG oslo_concurrency.lockutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:17 compute-2 nova_compute[226829]: 2026-01-31 08:05:17.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:05:17 compute-2 nova_compute[226829]: 2026-01-31 08:05:17.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:05:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3374096316' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:05:17 compute-2 ovn_controller[133834]: 2026-01-31T08:05:17Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c2:33:c0 10.1.1.182
Jan 31 08:05:17 compute-2 ovn_controller[133834]: 2026-01-31T08:05:17Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c2:33:c0 10.1.1.182
Jan 31 08:05:18 compute-2 ovn_controller[133834]: 2026-01-31T08:05:18Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:52:8d:5f 10.1.1.170
Jan 31 08:05:18 compute-2 ovn_controller[133834]: 2026-01-31T08:05:18Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:52:8d:5f 10.1.1.170
Jan 31 08:05:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:18.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:18 compute-2 ovn_controller[133834]: 2026-01-31T08:05:18Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0a:8f:2c 10.2.2.100
Jan 31 08:05:18 compute-2 ovn_controller[133834]: 2026-01-31T08:05:18Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0a:8f:2c 10.2.2.100
Jan 31 08:05:18 compute-2 ovn_controller[133834]: 2026-01-31T08:05:18Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3b:3a:d6 10.2.2.200
Jan 31 08:05:18 compute-2 ovn_controller[133834]: 2026-01-31T08:05:18Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3b:3a:d6 10.2.2.200
Jan 31 08:05:18 compute-2 ovn_controller[133834]: 2026-01-31T08:05:18Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3c:a6:94 10.1.1.54
Jan 31 08:05:18 compute-2 ovn_controller[133834]: 2026-01-31T08:05:18Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3c:a6:94 10.1.1.54
Jan 31 08:05:18 compute-2 nova_compute[226829]: 2026-01-31 08:05:18.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:05:18 compute-2 nova_compute[226829]: 2026-01-31 08:05:18.568 226833 DEBUG nova.network.neutron [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Successfully created port: c153b6ab-5a86-443d-ad94-95d7b82bf483 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:05:18 compute-2 ovn_controller[133834]: 2026-01-31T08:05:18Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:57:8d:67 10.100.0.8
Jan 31 08:05:18 compute-2 ovn_controller[133834]: 2026-01-31T08:05:18Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:57:8d:67 10.100.0.8
Jan 31 08:05:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:19.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:19 compute-2 ceph-mon[77282]: pgmap v2050: 305 pgs: 305 active+clean; 260 MiB data, 913 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 226 op/s
Jan 31 08:05:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:05:19 compute-2 ovn_controller[133834]: 2026-01-31T08:05:19Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:88:50:1d 10.1.1.122
Jan 31 08:05:19 compute-2 ovn_controller[133834]: 2026-01-31T08:05:19Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:88:50:1d 10.1.1.122
Jan 31 08:05:19 compute-2 nova_compute[226829]: 2026-01-31 08:05:19.621 226833 DEBUG nova.network.neutron [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Successfully updated port: c153b6ab-5a86-443d-ad94-95d7b82bf483 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:05:19 compute-2 nova_compute[226829]: 2026-01-31 08:05:19.644 226833 DEBUG oslo_concurrency.lockutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "refresh_cache-66218154-3695-4970-8525-7eaae98f9f14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:05:19 compute-2 nova_compute[226829]: 2026-01-31 08:05:19.645 226833 DEBUG oslo_concurrency.lockutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquired lock "refresh_cache-66218154-3695-4970-8525-7eaae98f9f14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:05:19 compute-2 nova_compute[226829]: 2026-01-31 08:05:19.645 226833 DEBUG nova.network.neutron [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:05:19 compute-2 nova_compute[226829]: 2026-01-31 08:05:19.759 226833 DEBUG nova.compute.manager [req-44a899f8-5446-4f0c-8597-6aaa2fbc1c72 req-f789e5ca-dc12-493d-a790-a84d59fb88d4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Received event network-changed-c153b6ab-5a86-443d-ad94-95d7b82bf483 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:19 compute-2 nova_compute[226829]: 2026-01-31 08:05:19.760 226833 DEBUG nova.compute.manager [req-44a899f8-5446-4f0c-8597-6aaa2fbc1c72 req-f789e5ca-dc12-493d-a790-a84d59fb88d4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Refreshing instance network info cache due to event network-changed-c153b6ab-5a86-443d-ad94-95d7b82bf483. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:05:19 compute-2 nova_compute[226829]: 2026-01-31 08:05:19.760 226833 DEBUG oslo_concurrency.lockutils [req-44a899f8-5446-4f0c-8597-6aaa2fbc1c72 req-f789e5ca-dc12-493d-a790-a84d59fb88d4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-66218154-3695-4970-8525-7eaae98f9f14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:05:19 compute-2 nova_compute[226829]: 2026-01-31 08:05:19.857 226833 DEBUG nova.network.neutron [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:05:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:20.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:20 compute-2 nova_compute[226829]: 2026-01-31 08:05:20.198 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:20.198 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:05:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:20.200 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:05:20 compute-2 ceph-mon[77282]: pgmap v2051: 305 pgs: 305 active+clean; 322 MiB data, 965 MiB used, 20 GiB / 21 GiB avail; 877 KiB/s rd, 7.0 MiB/s wr, 152 op/s
Jan 31 08:05:20 compute-2 nova_compute[226829]: 2026-01-31 08:05:20.339 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:20 compute-2 nova_compute[226829]: 2026-01-31 08:05:20.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:05:20 compute-2 nova_compute[226829]: 2026-01-31 08:05:20.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:05:20 compute-2 nova_compute[226829]: 2026-01-31 08:05:20.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:05:20 compute-2 nova_compute[226829]: 2026-01-31 08:05:20.519 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 08:05:20 compute-2 nova_compute[226829]: 2026-01-31 08:05:20.683 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:20 compute-2 nova_compute[226829]: 2026-01-31 08:05:20.709 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:05:20 compute-2 nova_compute[226829]: 2026-01-31 08:05:20.709 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:05:20 compute-2 nova_compute[226829]: 2026-01-31 08:05:20.709 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:05:20 compute-2 nova_compute[226829]: 2026-01-31 08:05:20.710 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid b0ff8f26-937f-43e0-b422-8a0fb0226eac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:05:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:21.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:21 compute-2 sudo[271731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:05:21 compute-2 sudo[271731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:21 compute-2 sudo[271731]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:21 compute-2 sudo[271756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:05:21 compute-2 sudo[271756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:21 compute-2 sudo[271756]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:21 compute-2 sudo[271781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:05:21 compute-2 sudo[271781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:21 compute-2 sudo[271781]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:21 compute-2 sudo[271806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:05:21 compute-2 sudo[271806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:21 compute-2 sudo[271806]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:21 compute-2 sudo[271811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:05:21 compute-2 sudo[271811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:21 compute-2 sudo[271856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:05:21 compute-2 sudo[271856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:21 compute-2 sudo[271856]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:21 compute-2 nova_compute[226829]: 2026-01-31 08:05:21.877 226833 DEBUG nova.network.neutron [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Updating instance_info_cache with network_info: [{"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:05:22 compute-2 sudo[271811]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:22.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:22 compute-2 nova_compute[226829]: 2026-01-31 08:05:22.098 226833 DEBUG oslo_concurrency.lockutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Releasing lock "refresh_cache-66218154-3695-4970-8525-7eaae98f9f14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:05:22 compute-2 nova_compute[226829]: 2026-01-31 08:05:22.098 226833 DEBUG nova.compute.manager [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Instance network_info: |[{"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:05:22 compute-2 nova_compute[226829]: 2026-01-31 08:05:22.098 226833 DEBUG oslo_concurrency.lockutils [req-44a899f8-5446-4f0c-8597-6aaa2fbc1c72 req-f789e5ca-dc12-493d-a790-a84d59fb88d4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-66218154-3695-4970-8525-7eaae98f9f14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:05:22 compute-2 nova_compute[226829]: 2026-01-31 08:05:22.099 226833 DEBUG nova.network.neutron [req-44a899f8-5446-4f0c-8597-6aaa2fbc1c72 req-f789e5ca-dc12-493d-a790-a84d59fb88d4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Refreshing network info cache for port c153b6ab-5a86-443d-ad94-95d7b82bf483 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:05:22 compute-2 nova_compute[226829]: 2026-01-31 08:05:22.101 226833 DEBUG nova.virt.libvirt.driver [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Start _get_guest_xml network_info=[{"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:05:22 compute-2 nova_compute[226829]: 2026-01-31 08:05:22.104 226833 WARNING nova.virt.libvirt.driver [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:05:22 compute-2 nova_compute[226829]: 2026-01-31 08:05:22.146 226833 DEBUG nova.virt.libvirt.host [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:05:22 compute-2 nova_compute[226829]: 2026-01-31 08:05:22.147 226833 DEBUG nova.virt.libvirt.host [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:05:22 compute-2 nova_compute[226829]: 2026-01-31 08:05:22.181 226833 DEBUG nova.virt.libvirt.host [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:05:22 compute-2 nova_compute[226829]: 2026-01-31 08:05:22.182 226833 DEBUG nova.virt.libvirt.host [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:05:22 compute-2 nova_compute[226829]: 2026-01-31 08:05:22.183 226833 DEBUG nova.virt.libvirt.driver [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:05:22 compute-2 nova_compute[226829]: 2026-01-31 08:05:22.184 226833 DEBUG nova.virt.hardware [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:05:22 compute-2 nova_compute[226829]: 2026-01-31 08:05:22.184 226833 DEBUG nova.virt.hardware [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:05:22 compute-2 nova_compute[226829]: 2026-01-31 08:05:22.184 226833 DEBUG nova.virt.hardware [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:05:22 compute-2 nova_compute[226829]: 2026-01-31 08:05:22.185 226833 DEBUG nova.virt.hardware [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:05:22 compute-2 nova_compute[226829]: 2026-01-31 08:05:22.185 226833 DEBUG nova.virt.hardware [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:05:22 compute-2 nova_compute[226829]: 2026-01-31 08:05:22.185 226833 DEBUG nova.virt.hardware [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:05:22 compute-2 nova_compute[226829]: 2026-01-31 08:05:22.185 226833 DEBUG nova.virt.hardware [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:05:22 compute-2 nova_compute[226829]: 2026-01-31 08:05:22.185 226833 DEBUG nova.virt.hardware [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:05:22 compute-2 nova_compute[226829]: 2026-01-31 08:05:22.186 226833 DEBUG nova.virt.hardware [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:05:22 compute-2 nova_compute[226829]: 2026-01-31 08:05:22.186 226833 DEBUG nova.virt.hardware [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:05:22 compute-2 nova_compute[226829]: 2026-01-31 08:05:22.186 226833 DEBUG nova.virt.hardware [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:05:22 compute-2 nova_compute[226829]: 2026-01-31 08:05:22.189 226833 DEBUG oslo_concurrency.processutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:05:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:05:22 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1767179113' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:05:22 compute-2 ceph-mon[77282]: pgmap v2052: 305 pgs: 305 active+clean; 359 MiB data, 1005 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 7.9 MiB/s wr, 207 op/s
Jan 31 08:05:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/18868591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:22 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:05:22 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:05:22 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:05:22 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:05:22 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:05:22 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:05:22 compute-2 nova_compute[226829]: 2026-01-31 08:05:22.787 226833 DEBUG oslo_concurrency.processutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:05:22 compute-2 nova_compute[226829]: 2026-01-31 08:05:22.810 226833 DEBUG nova.storage.rbd_utils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image 66218154-3695-4970-8525-7eaae98f9f14_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:05:22 compute-2 nova_compute[226829]: 2026-01-31 08:05:22.814 226833 DEBUG oslo_concurrency.processutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:05:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:05:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:23.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:05:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:23.203 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:05:23 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/737999452' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.267 226833 DEBUG oslo_concurrency.processutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.268 226833 DEBUG nova.virt.libvirt.vif [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:05:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1412363862',display_name='tempest-ServerActionsTestJSON-server-1412363862',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1412363862',id=102,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-fla0xmix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:05:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=66218154-3695-4970-8525-7eaae98f9f14,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.268 226833 DEBUG nova.network.os_vif_util [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.269 226833 DEBUG nova.network.os_vif_util [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:c3:66,bridge_name='br-int',has_traffic_filtering=True,id=c153b6ab-5a86-443d-ad94-95d7b82bf483,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc153b6ab-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.270 226833 DEBUG nova.objects.instance [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'pci_devices' on Instance uuid 66218154-3695-4970-8525-7eaae98f9f14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.290 226833 DEBUG nova.virt.libvirt.driver [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:05:23 compute-2 nova_compute[226829]:   <uuid>66218154-3695-4970-8525-7eaae98f9f14</uuid>
Jan 31 08:05:23 compute-2 nova_compute[226829]:   <name>instance-00000066</name>
Jan 31 08:05:23 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:05:23 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:05:23 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <nova:name>tempest-ServerActionsTestJSON-server-1412363862</nova:name>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:05:22</nova:creationTime>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:05:23 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:05:23 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:05:23 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:05:23 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:05:23 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:05:23 compute-2 nova_compute[226829]:         <nova:user uuid="d9ed446fb2cf4fc0a4e619c6c766fddc">tempest-ServerActionsTestJSON-1391450973-project-member</nova:user>
Jan 31 08:05:23 compute-2 nova_compute[226829]:         <nova:project uuid="1fcec9ca13964c7191134db4420ab049">tempest-ServerActionsTestJSON-1391450973</nova:project>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:05:23 compute-2 nova_compute[226829]:         <nova:port uuid="c153b6ab-5a86-443d-ad94-95d7b82bf483">
Jan 31 08:05:23 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:05:23 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:05:23 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <system>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <entry name="serial">66218154-3695-4970-8525-7eaae98f9f14</entry>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <entry name="uuid">66218154-3695-4970-8525-7eaae98f9f14</entry>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     </system>
Jan 31 08:05:23 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:05:23 compute-2 nova_compute[226829]:   <os>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:   </os>
Jan 31 08:05:23 compute-2 nova_compute[226829]:   <features>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:   </features>
Jan 31 08:05:23 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:05:23 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:05:23 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/66218154-3695-4970-8525-7eaae98f9f14_disk">
Jan 31 08:05:23 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       </source>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:05:23 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/66218154-3695-4970-8525-7eaae98f9f14_disk.config">
Jan 31 08:05:23 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       </source>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:05:23 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:f8:c3:66"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <target dev="tapc153b6ab-5a"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/66218154-3695-4970-8525-7eaae98f9f14/console.log" append="off"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <video>
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     </video>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:05:23 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:05:23 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:05:23 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:05:23 compute-2 nova_compute[226829]: </domain>
Jan 31 08:05:23 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.291 226833 DEBUG nova.compute.manager [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Preparing to wait for external event network-vif-plugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.291 226833 DEBUG oslo_concurrency.lockutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "66218154-3695-4970-8525-7eaae98f9f14-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.291 226833 DEBUG oslo_concurrency.lockutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.291 226833 DEBUG oslo_concurrency.lockutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.292 226833 DEBUG nova.virt.libvirt.vif [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:05:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1412363862',display_name='tempest-ServerActionsTestJSON-server-1412363862',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1412363862',id=102,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-fla0xmix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:05:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=66218154-3695-4970-8525-7eaae98f9f14,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.293 226833 DEBUG nova.network.os_vif_util [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.293 226833 DEBUG nova.network.os_vif_util [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:c3:66,bridge_name='br-int',has_traffic_filtering=True,id=c153b6ab-5a86-443d-ad94-95d7b82bf483,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc153b6ab-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.294 226833 DEBUG os_vif [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:c3:66,bridge_name='br-int',has_traffic_filtering=True,id=c153b6ab-5a86-443d-ad94-95d7b82bf483,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc153b6ab-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.294 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.295 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.295 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.304 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.304 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc153b6ab-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.305 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc153b6ab-5a, col_values=(('external_ids', {'iface-id': 'c153b6ab-5a86-443d-ad94-95d7b82bf483', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:c3:66', 'vm-uuid': '66218154-3695-4970-8525-7eaae98f9f14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:23 compute-2 NetworkManager[48999]: <info>  [1769846723.3080] manager: (tapc153b6ab-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/203)
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.306 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.311 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.315 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.317 226833 INFO os_vif [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:c3:66,bridge_name='br-int',has_traffic_filtering=True,id=c153b6ab-5a86-443d-ad94-95d7b82bf483,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc153b6ab-5a')
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.380 226833 DEBUG nova.virt.libvirt.driver [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.380 226833 DEBUG nova.virt.libvirt.driver [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.380 226833 DEBUG nova.virt.libvirt.driver [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] No VIF found with MAC fa:16:3e:f8:c3:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.381 226833 INFO nova.virt.libvirt.driver [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Using config drive
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.412 226833 DEBUG nova.storage.rbd_utils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image 66218154-3695-4970-8525-7eaae98f9f14_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.854 226833 DEBUG nova.network.neutron [req-44a899f8-5446-4f0c-8597-6aaa2fbc1c72 req-f789e5ca-dc12-493d-a790-a84d59fb88d4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Updated VIF entry in instance network info cache for port c153b6ab-5a86-443d-ad94-95d7b82bf483. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.855 226833 DEBUG nova.network.neutron [req-44a899f8-5446-4f0c-8597-6aaa2fbc1c72 req-f789e5ca-dc12-493d-a790-a84d59fb88d4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Updating instance_info_cache with network_info: [{"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:05:23 compute-2 nova_compute[226829]: 2026-01-31 08:05:23.883 226833 DEBUG oslo_concurrency.lockutils [req-44a899f8-5446-4f0c-8597-6aaa2fbc1c72 req-f789e5ca-dc12-493d-a790-a84d59fb88d4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-66218154-3695-4970-8525-7eaae98f9f14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:05:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1767179113' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:05:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1249913127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/737999452' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:05:24 compute-2 nova_compute[226829]: 2026-01-31 08:05:24.024 226833 INFO nova.virt.libvirt.driver [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Creating config drive at /var/lib/nova/instances/66218154-3695-4970-8525-7eaae98f9f14/disk.config
Jan 31 08:05:24 compute-2 nova_compute[226829]: 2026-01-31 08:05:24.030 226833 DEBUG oslo_concurrency.processutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/66218154-3695-4970-8525-7eaae98f9f14/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmplqsy0qho execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:05:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:24.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:24 compute-2 nova_compute[226829]: 2026-01-31 08:05:24.160 226833 DEBUG oslo_concurrency.processutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/66218154-3695-4970-8525-7eaae98f9f14/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmplqsy0qho" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:05:24 compute-2 nova_compute[226829]: 2026-01-31 08:05:24.193 226833 DEBUG nova.storage.rbd_utils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image 66218154-3695-4970-8525-7eaae98f9f14_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:05:24 compute-2 nova_compute[226829]: 2026-01-31 08:05:24.197 226833 DEBUG oslo_concurrency.processutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/66218154-3695-4970-8525-7eaae98f9f14/disk.config 66218154-3695-4970-8525-7eaae98f9f14_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:05:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:05:24 compute-2 nova_compute[226829]: 2026-01-31 08:05:24.960 226833 DEBUG oslo_concurrency.processutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/66218154-3695-4970-8525-7eaae98f9f14/disk.config 66218154-3695-4970-8525-7eaae98f9f14_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.763s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:05:24 compute-2 nova_compute[226829]: 2026-01-31 08:05:24.961 226833 INFO nova.virt.libvirt.driver [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Deleting local config drive /var/lib/nova/instances/66218154-3695-4970-8525-7eaae98f9f14/disk.config because it was imported into RBD.
Jan 31 08:05:25 compute-2 kernel: tapc153b6ab-5a: entered promiscuous mode
Jan 31 08:05:25 compute-2 NetworkManager[48999]: <info>  [1769846725.0194] manager: (tapc153b6ab-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/204)
Jan 31 08:05:25 compute-2 ovn_controller[133834]: 2026-01-31T08:05:25Z|00398|binding|INFO|Claiming lport c153b6ab-5a86-443d-ad94-95d7b82bf483 for this chassis.
Jan 31 08:05:25 compute-2 ovn_controller[133834]: 2026-01-31T08:05:25Z|00399|binding|INFO|c153b6ab-5a86-443d-ad94-95d7b82bf483: Claiming fa:16:3e:f8:c3:66 10.100.0.12
Jan 31 08:05:25 compute-2 nova_compute[226829]: 2026-01-31 08:05:25.021 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:25 compute-2 ovn_controller[133834]: 2026-01-31T08:05:25Z|00400|binding|INFO|Setting lport c153b6ab-5a86-443d-ad94-95d7b82bf483 ovn-installed in OVS
Jan 31 08:05:25 compute-2 nova_compute[226829]: 2026-01-31 08:05:25.027 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:25 compute-2 nova_compute[226829]: 2026-01-31 08:05:25.030 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:25 compute-2 ovn_controller[133834]: 2026-01-31T08:05:25Z|00401|binding|INFO|Setting lport c153b6ab-5a86-443d-ad94-95d7b82bf483 up in Southbound
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:25.045 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:c3:66 10.100.0.12'], port_security=['fa:16:3e:f8:c3:66 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '66218154-3695-4970-8525-7eaae98f9f14', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7be0d68e-c4ff-4356-97f2-bd58246f6e46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=c153b6ab-5a86-443d-ad94-95d7b82bf483) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:25.047 143841 INFO neutron.agent.ovn.metadata.agent [-] Port c153b6ab-5a86-443d-ad94-95d7b82bf483 in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 bound to our chassis
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:25.050 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 08:05:25 compute-2 systemd-machined[195142]: New machine qemu-45-instance-00000066.
Jan 31 08:05:25 compute-2 systemd[1]: Started Virtual Machine qemu-45-instance-00000066.
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:25.060 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2227fbfb-08b5-4d44-96d2-08764687c7c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:25.062 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5cc2535f-01 in ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:25.064 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5cc2535f-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:25.064 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c34afff6-417b-4142-949d-0b2532664b86]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:25.066 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5241b31d-0164-4f1a-972b-b277d86036bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:25 compute-2 systemd-udevd[272062]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:05:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:25.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:25.079 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[6c3f6e73-b23a-4c3a-be6c-6f5e93e68d09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:25 compute-2 NetworkManager[48999]: <info>  [1769846725.0881] device (tapc153b6ab-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:05:25 compute-2 NetworkManager[48999]: <info>  [1769846725.0889] device (tapc153b6ab-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:25.090 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[45e32557-79a4-4e4e-b703-384b737e8864]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:25 compute-2 podman[272046]: 2026-01-31 08:05:25.115849164 +0000 UTC m=+0.070358597 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:25.117 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[13e847bf-f799-495b-a12f-1c3829ab29d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:25.121 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[928d0985-27f8-49b8-be91-5ca7d5e28d86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:25 compute-2 systemd-udevd[272068]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:05:25 compute-2 NetworkManager[48999]: <info>  [1769846725.1227] manager: (tap5cc2535f-00): new Veth device (/org/freedesktop/NetworkManager/Devices/205)
Jan 31 08:05:25 compute-2 ceph-mon[77282]: pgmap v2053: 305 pgs: 305 active+clean; 366 MiB data, 1006 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 7.8 MiB/s wr, 219 op/s
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:25.147 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[ddf54ba7-ad4d-4bc4-9bc7-8f1b98f8acf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:25.150 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[8822cbd1-a4f0-44dc-b254-76be1d7fe4d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:25 compute-2 NetworkManager[48999]: <info>  [1769846725.1705] device (tap5cc2535f-00): carrier: link connected
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:25.173 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[e3bab753-17ba-4455-b986-c3a97e001c88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:25.188 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[64704228-6841-4c1c-80d4-c93c72887211]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc2535f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:76:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 124], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697764, 'reachable_time': 29361, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272101, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:25.201 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c39ee0b3-efb9-46ae-9069-a9f31fc9337b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:76f8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697764, 'tstamp': 697764}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272102, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:25.213 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b40d9ce3-759c-4617-984c-0f32813ead08]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc2535f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:76:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 124], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697764, 'reachable_time': 29361, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272103, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:25.232 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e9b042-1a0b-46e6-8256-29881bc25998]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:25.268 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f4752ca8-d1c5-4878-9771-3222c132cde4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:25.269 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc2535f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:25.269 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:25.269 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cc2535f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:25 compute-2 nova_compute[226829]: 2026-01-31 08:05:25.271 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:25 compute-2 NetworkManager[48999]: <info>  [1769846725.2724] manager: (tap5cc2535f-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/206)
Jan 31 08:05:25 compute-2 kernel: tap5cc2535f-00: entered promiscuous mode
Jan 31 08:05:25 compute-2 nova_compute[226829]: 2026-01-31 08:05:25.274 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:25.276 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5cc2535f-00, col_values=(('external_ids', {'iface-id': 'ab077a7e-cc79-4948-8987-2cd87d88deff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:25 compute-2 nova_compute[226829]: 2026-01-31 08:05:25.277 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:25 compute-2 ovn_controller[133834]: 2026-01-31T08:05:25Z|00402|binding|INFO|Releasing lport ab077a7e-cc79-4948-8987-2cd87d88deff from this chassis (sb_readonly=0)
Jan 31 08:05:25 compute-2 nova_compute[226829]: 2026-01-31 08:05:25.282 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:25.285 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:25.286 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[eeefc5b9-8d8f-42af-b65f-e5fdac26e310]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:25.287 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:05:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:25.288 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'env', 'PROCESS_TAG=haproxy-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5cc2535f-0f8f-4713-a35c-9805048a29a8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:05:25 compute-2 nova_compute[226829]: 2026-01-31 08:05:25.341 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:25 compute-2 nova_compute[226829]: 2026-01-31 08:05:25.533 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846725.53249, 66218154-3695-4970-8525-7eaae98f9f14 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:05:25 compute-2 nova_compute[226829]: 2026-01-31 08:05:25.534 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] VM Started (Lifecycle Event)
Jan 31 08:05:25 compute-2 nova_compute[226829]: 2026-01-31 08:05:25.570 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:05:25 compute-2 nova_compute[226829]: 2026-01-31 08:05:25.574 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846725.533921, 66218154-3695-4970-8525-7eaae98f9f14 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:05:25 compute-2 nova_compute[226829]: 2026-01-31 08:05:25.574 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] VM Paused (Lifecycle Event)
Jan 31 08:05:25 compute-2 nova_compute[226829]: 2026-01-31 08:05:25.599 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:05:25 compute-2 nova_compute[226829]: 2026-01-31 08:05:25.603 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:05:25 compute-2 nova_compute[226829]: 2026-01-31 08:05:25.627 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:05:25 compute-2 podman[272175]: 2026-01-31 08:05:25.691670631 +0000 UTC m=+0.110391962 container create 2f49677dd8dc38491beff0a785bc1a14d0930b751365a1fe8d9aa2a6c4bf6bc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:05:25 compute-2 podman[272175]: 2026-01-31 08:05:25.603219685 +0000 UTC m=+0.021941016 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:05:25 compute-2 systemd[1]: Started libpod-conmon-2f49677dd8dc38491beff0a785bc1a14d0930b751365a1fe8d9aa2a6c4bf6bc8.scope.
Jan 31 08:05:25 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:05:25 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb2f88a9238a1a8c18b29dd5d0857eed6e74d819fc1dbc7579c45e107d3cd4c6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:05:25 compute-2 podman[272175]: 2026-01-31 08:05:25.809381029 +0000 UTC m=+0.228102350 container init 2f49677dd8dc38491beff0a785bc1a14d0930b751365a1fe8d9aa2a6c4bf6bc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:05:25 compute-2 podman[272175]: 2026-01-31 08:05:25.813254944 +0000 UTC m=+0.231976255 container start 2f49677dd8dc38491beff0a785bc1a14d0930b751365a1fe8d9aa2a6c4bf6bc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:05:25 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[272190]: [NOTICE]   (272194) : New worker (272196) forked
Jan 31 08:05:25 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[272190]: [NOTICE]   (272194) : Loading success.
Jan 31 08:05:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:26.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:26 compute-2 ceph-mon[77282]: pgmap v2054: 305 pgs: 305 active+clean; 359 MiB data, 1006 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 7.5 MiB/s wr, 246 op/s
Jan 31 08:05:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1225765375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:27.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:27 compute-2 nova_compute[226829]: 2026-01-31 08:05:27.749 226833 DEBUG nova.compute.manager [req-d62596a7-de9f-4bf4-8bc8-466a8312dfd4 req-f097b145-a3d1-4211-849a-d1a1d31dc145 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Received event network-vif-plugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:27 compute-2 nova_compute[226829]: 2026-01-31 08:05:27.750 226833 DEBUG oslo_concurrency.lockutils [req-d62596a7-de9f-4bf4-8bc8-466a8312dfd4 req-f097b145-a3d1-4211-849a-d1a1d31dc145 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "66218154-3695-4970-8525-7eaae98f9f14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:27 compute-2 nova_compute[226829]: 2026-01-31 08:05:27.750 226833 DEBUG oslo_concurrency.lockutils [req-d62596a7-de9f-4bf4-8bc8-466a8312dfd4 req-f097b145-a3d1-4211-849a-d1a1d31dc145 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:27 compute-2 nova_compute[226829]: 2026-01-31 08:05:27.750 226833 DEBUG oslo_concurrency.lockutils [req-d62596a7-de9f-4bf4-8bc8-466a8312dfd4 req-f097b145-a3d1-4211-849a-d1a1d31dc145 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:27 compute-2 nova_compute[226829]: 2026-01-31 08:05:27.751 226833 DEBUG nova.compute.manager [req-d62596a7-de9f-4bf4-8bc8-466a8312dfd4 req-f097b145-a3d1-4211-849a-d1a1d31dc145 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Processing event network-vif-plugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:05:27 compute-2 nova_compute[226829]: 2026-01-31 08:05:27.752 226833 DEBUG nova.compute.manager [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:05:27 compute-2 nova_compute[226829]: 2026-01-31 08:05:27.756 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846727.7559762, 66218154-3695-4970-8525-7eaae98f9f14 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:05:27 compute-2 nova_compute[226829]: 2026-01-31 08:05:27.756 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] VM Resumed (Lifecycle Event)
Jan 31 08:05:27 compute-2 nova_compute[226829]: 2026-01-31 08:05:27.758 226833 DEBUG nova.virt.libvirt.driver [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:05:27 compute-2 nova_compute[226829]: 2026-01-31 08:05:27.761 226833 INFO nova.virt.libvirt.driver [-] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Instance spawned successfully.
Jan 31 08:05:27 compute-2 nova_compute[226829]: 2026-01-31 08:05:27.762 226833 DEBUG nova.virt.libvirt.driver [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:05:27 compute-2 nova_compute[226829]: 2026-01-31 08:05:27.809 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:05:27 compute-2 nova_compute[226829]: 2026-01-31 08:05:27.814 226833 DEBUG nova.virt.libvirt.driver [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:05:27 compute-2 nova_compute[226829]: 2026-01-31 08:05:27.814 226833 DEBUG nova.virt.libvirt.driver [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:05:27 compute-2 nova_compute[226829]: 2026-01-31 08:05:27.815 226833 DEBUG nova.virt.libvirt.driver [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:05:27 compute-2 nova_compute[226829]: 2026-01-31 08:05:27.815 226833 DEBUG nova.virt.libvirt.driver [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:05:27 compute-2 nova_compute[226829]: 2026-01-31 08:05:27.815 226833 DEBUG nova.virt.libvirt.driver [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:05:27 compute-2 nova_compute[226829]: 2026-01-31 08:05:27.816 226833 DEBUG nova.virt.libvirt.driver [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:05:27 compute-2 nova_compute[226829]: 2026-01-31 08:05:27.819 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:05:27 compute-2 nova_compute[226829]: 2026-01-31 08:05:27.855 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:05:27 compute-2 nova_compute[226829]: 2026-01-31 08:05:27.885 226833 INFO nova.compute.manager [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Took 11.41 seconds to spawn the instance on the hypervisor.
Jan 31 08:05:27 compute-2 nova_compute[226829]: 2026-01-31 08:05:27.886 226833 DEBUG nova.compute.manager [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:05:27 compute-2 nova_compute[226829]: 2026-01-31 08:05:27.960 226833 INFO nova.compute.manager [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Took 12.46 seconds to build instance.
Jan 31 08:05:27 compute-2 nova_compute[226829]: 2026-01-31 08:05:27.996 226833 DEBUG oslo_concurrency.lockutils [None req-5420b1ea-8f45-4c46-b529-7560970a4e99 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:28.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:28 compute-2 nova_compute[226829]: 2026-01-31 08:05:28.308 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:28 compute-2 ceph-mon[77282]: pgmap v2055: 305 pgs: 305 active+clean; 334 MiB data, 995 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 7.0 MiB/s wr, 268 op/s
Jan 31 08:05:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2404807119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:29.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:05:29 compute-2 nova_compute[226829]: 2026-01-31 08:05:29.937 226833 DEBUG nova.compute.manager [req-be4b5d72-3a40-475e-a514-937f52c48b12 req-334771e9-dd02-4726-813a-9248c9979f6c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Received event network-vif-plugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:29 compute-2 nova_compute[226829]: 2026-01-31 08:05:29.938 226833 DEBUG oslo_concurrency.lockutils [req-be4b5d72-3a40-475e-a514-937f52c48b12 req-334771e9-dd02-4726-813a-9248c9979f6c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "66218154-3695-4970-8525-7eaae98f9f14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:29 compute-2 nova_compute[226829]: 2026-01-31 08:05:29.939 226833 DEBUG oslo_concurrency.lockutils [req-be4b5d72-3a40-475e-a514-937f52c48b12 req-334771e9-dd02-4726-813a-9248c9979f6c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:29 compute-2 nova_compute[226829]: 2026-01-31 08:05:29.939 226833 DEBUG oslo_concurrency.lockutils [req-be4b5d72-3a40-475e-a514-937f52c48b12 req-334771e9-dd02-4726-813a-9248c9979f6c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:29 compute-2 nova_compute[226829]: 2026-01-31 08:05:29.940 226833 DEBUG nova.compute.manager [req-be4b5d72-3a40-475e-a514-937f52c48b12 req-334771e9-dd02-4726-813a-9248c9979f6c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] No waiting events found dispatching network-vif-plugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:05:29 compute-2 nova_compute[226829]: 2026-01-31 08:05:29.940 226833 WARNING nova.compute.manager [req-be4b5d72-3a40-475e-a514-937f52c48b12 req-334771e9-dd02-4726-813a-9248c9979f6c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Received unexpected event network-vif-plugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 for instance with vm_state active and task_state None.
Jan 31 08:05:29 compute-2 ceph-mon[77282]: pgmap v2056: 305 pgs: 305 active+clean; 356 MiB data, 1003 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 7.6 MiB/s wr, 325 op/s
Jan 31 08:05:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1934745028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:30.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:30 compute-2 nova_compute[226829]: 2026-01-31 08:05:30.345 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:31.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:31 compute-2 sudo[272209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:05:31 compute-2 sudo[272209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:31 compute-2 sudo[272209]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:31 compute-2 sudo[272234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:05:31 compute-2 sudo[272234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:31 compute-2 sudo[272234]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:31 compute-2 nova_compute[226829]: 2026-01-31 08:05:31.977 226833 DEBUG nova.compute.manager [req-620c35a7-739a-4e12-ab27-2f0db7f3ac9f req-c38368fe-0447-4d57-9403-3b674df956d7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Received event network-changed-c153b6ab-5a86-443d-ad94-95d7b82bf483 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:31 compute-2 nova_compute[226829]: 2026-01-31 08:05:31.979 226833 DEBUG nova.compute.manager [req-620c35a7-739a-4e12-ab27-2f0db7f3ac9f req-c38368fe-0447-4d57-9403-3b674df956d7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Refreshing instance network info cache due to event network-changed-c153b6ab-5a86-443d-ad94-95d7b82bf483. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:05:31 compute-2 nova_compute[226829]: 2026-01-31 08:05:31.979 226833 DEBUG oslo_concurrency.lockutils [req-620c35a7-739a-4e12-ab27-2f0db7f3ac9f req-c38368fe-0447-4d57-9403-3b674df956d7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-66218154-3695-4970-8525-7eaae98f9f14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:05:31 compute-2 nova_compute[226829]: 2026-01-31 08:05:31.979 226833 DEBUG oslo_concurrency.lockutils [req-620c35a7-739a-4e12-ab27-2f0db7f3ac9f req-c38368fe-0447-4d57-9403-3b674df956d7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-66218154-3695-4970-8525-7eaae98f9f14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:05:31 compute-2 nova_compute[226829]: 2026-01-31 08:05:31.980 226833 DEBUG nova.network.neutron [req-620c35a7-739a-4e12-ab27-2f0db7f3ac9f req-c38368fe-0447-4d57-9403-3b674df956d7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Refreshing network info cache for port c153b6ab-5a86-443d-ad94-95d7b82bf483 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:05:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:32.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:32 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:05:32 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:05:32 compute-2 ceph-mon[77282]: pgmap v2057: 305 pgs: 305 active+clean; 372 MiB data, 1006 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.8 MiB/s wr, 267 op/s
Jan 31 08:05:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:33.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:33 compute-2 nova_compute[226829]: 2026-01-31 08:05:33.311 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:34.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:34 compute-2 nova_compute[226829]: 2026-01-31 08:05:34.135 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Updating instance_info_cache with network_info: [{"id": "fbf252f3-5bb0-4da8-9564-87a1f052f2dc", "address": "fa:16:3e:57:8d:67", "network": {"id": "79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-638962480-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf252f3-5b", "ovs_interfaceid": "fbf252f3-5bb0-4da8-9564-87a1f052f2dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "address": "fa:16:3e:52:8d:5f", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfead71ab-29", "ovs_interfaceid": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "address": "fa:16:3e:c2:33:c0", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0f7cff6-3b", "ovs_interfaceid": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "address": "fa:16:3e:88:50:1d", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.122", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ef806e-29", "ovs_interfaceid": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "address": "fa:16:3e:3c:a6:94", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24a33ffc-f6", "ovs_interfaceid": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "01b25ee1-016c-4dac-860c-941a5efc920a", "address": "fa:16:3e:0a:8f:2c", "network": {"id": "2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c", "bridge": "br-int", "label": "tempest-device-tagging-net2-30530444", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b25ee1-01", "ovs_interfaceid": "01b25ee1-016c-4dac-860c-941a5efc920a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "522d060c-fdf5-4d50-b0b9-94211986b4ee", "address": "fa:16:3e:3b:3a:d6", "network": {"id": "2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c", "bridge": "br-int", "label": "tempest-device-tagging-net2-30530444", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522d060c-fd", "ovs_interfaceid": "522d060c-fdf5-4d50-b0b9-94211986b4ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:05:34 compute-2 nova_compute[226829]: 2026-01-31 08:05:34.199 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-b0ff8f26-937f-43e0-b422-8a0fb0226eac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:05:34 compute-2 nova_compute[226829]: 2026-01-31 08:05:34.199 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:05:34 compute-2 nova_compute[226829]: 2026-01-31 08:05:34.199 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:05:34 compute-2 nova_compute[226829]: 2026-01-31 08:05:34.200 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:05:34 compute-2 nova_compute[226829]: 2026-01-31 08:05:34.200 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:05:34 compute-2 nova_compute[226829]: 2026-01-31 08:05:34.200 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:05:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:05:34 compute-2 nova_compute[226829]: 2026-01-31 08:05:34.255 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:34 compute-2 nova_compute[226829]: 2026-01-31 08:05:34.255 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:34 compute-2 nova_compute[226829]: 2026-01-31 08:05:34.256 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:34 compute-2 nova_compute[226829]: 2026-01-31 08:05:34.256 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:05:34 compute-2 nova_compute[226829]: 2026-01-31 08:05:34.256 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:05:34 compute-2 ceph-mon[77282]: pgmap v2058: 305 pgs: 305 active+clean; 372 MiB data, 1006 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.9 MiB/s wr, 198 op/s
Jan 31 08:05:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1535432295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:05:34 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/148157636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:34 compute-2 nova_compute[226829]: 2026-01-31 08:05:34.667 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:05:34 compute-2 nova_compute[226829]: 2026-01-31 08:05:34.745 226833 DEBUG nova.network.neutron [req-620c35a7-739a-4e12-ab27-2f0db7f3ac9f req-c38368fe-0447-4d57-9403-3b674df956d7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Updated VIF entry in instance network info cache for port c153b6ab-5a86-443d-ad94-95d7b82bf483. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:05:34 compute-2 nova_compute[226829]: 2026-01-31 08:05:34.746 226833 DEBUG nova.network.neutron [req-620c35a7-739a-4e12-ab27-2f0db7f3ac9f req-c38368fe-0447-4d57-9403-3b674df956d7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Updating instance_info_cache with network_info: [{"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:05:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:35.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:35 compute-2 nova_compute[226829]: 2026-01-31 08:05:35.349 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/148157636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:36.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:36 compute-2 nova_compute[226829]: 2026-01-31 08:05:36.165 226833 DEBUG oslo_concurrency.lockutils [req-620c35a7-739a-4e12-ab27-2f0db7f3ac9f req-c38368fe-0447-4d57-9403-3b674df956d7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-66218154-3695-4970-8525-7eaae98f9f14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:05:36 compute-2 nova_compute[226829]: 2026-01-31 08:05:36.183 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:05:36 compute-2 nova_compute[226829]: 2026-01-31 08:05:36.184 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:05:36 compute-2 nova_compute[226829]: 2026-01-31 08:05:36.184 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:05:36 compute-2 nova_compute[226829]: 2026-01-31 08:05:36.184 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:05:36 compute-2 nova_compute[226829]: 2026-01-31 08:05:36.188 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:05:36 compute-2 nova_compute[226829]: 2026-01-31 08:05:36.188 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:05:36 compute-2 nova_compute[226829]: 2026-01-31 08:05:36.412 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:05:36 compute-2 nova_compute[226829]: 2026-01-31 08:05:36.414 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4044MB free_disk=20.8760986328125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:05:36 compute-2 nova_compute[226829]: 2026-01-31 08:05:36.414 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:36 compute-2 nova_compute[226829]: 2026-01-31 08:05:36.414 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:36 compute-2 nova_compute[226829]: 2026-01-31 08:05:36.535 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance b0ff8f26-937f-43e0-b422-8a0fb0226eac actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:05:36 compute-2 nova_compute[226829]: 2026-01-31 08:05:36.535 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 66218154-3695-4970-8525-7eaae98f9f14 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:05:36 compute-2 nova_compute[226829]: 2026-01-31 08:05:36.535 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:05:36 compute-2 nova_compute[226829]: 2026-01-31 08:05:36.536 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:05:36 compute-2 nova_compute[226829]: 2026-01-31 08:05:36.579 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:05:36 compute-2 ovn_controller[133834]: 2026-01-31T08:05:36Z|00403|binding|INFO|Releasing lport 9661581e-a708-4de1-b2f7-80d9d3e80a0f from this chassis (sb_readonly=0)
Jan 31 08:05:36 compute-2 ovn_controller[133834]: 2026-01-31T08:05:36Z|00404|binding|INFO|Releasing lport c50f1d64-569e-42f1-ae8d-1086e3dfeda9 from this chassis (sb_readonly=0)
Jan 31 08:05:36 compute-2 ovn_controller[133834]: 2026-01-31T08:05:36Z|00405|binding|INFO|Releasing lport 7ea68347-f5e6-4e5f-9510-beda4e01f072 from this chassis (sb_readonly=0)
Jan 31 08:05:36 compute-2 ovn_controller[133834]: 2026-01-31T08:05:36Z|00406|binding|INFO|Releasing lport ab077a7e-cc79-4948-8987-2cd87d88deff from this chassis (sb_readonly=0)
Jan 31 08:05:36 compute-2 nova_compute[226829]: 2026-01-31 08:05:36.693 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:05:36 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/285771705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:36 compute-2 nova_compute[226829]: 2026-01-31 08:05:36.996 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:05:37 compute-2 nova_compute[226829]: 2026-01-31 08:05:37.000 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:05:37 compute-2 nova_compute[226829]: 2026-01-31 08:05:37.022 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:05:37 compute-2 ceph-mon[77282]: pgmap v2059: 305 pgs: 305 active+clean; 372 MiB data, 1006 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 172 op/s
Jan 31 08:05:37 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/285771705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:37 compute-2 nova_compute[226829]: 2026-01-31 08:05:37.058 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:05:37 compute-2 nova_compute[226829]: 2026-01-31 08:05:37.059 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:37.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:37 compute-2 nova_compute[226829]: 2026-01-31 08:05:37.347 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:05:37 compute-2 nova_compute[226829]: 2026-01-31 08:05:37.348 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:05:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:05:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:38.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:05:38 compute-2 nova_compute[226829]: 2026-01-31 08:05:38.313 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1753826075' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:05:38 compute-2 ceph-mon[77282]: pgmap v2060: 305 pgs: 305 active+clean; 372 MiB data, 1006 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 134 op/s
Jan 31 08:05:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:39.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:05:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:05:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:40.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:05:40 compute-2 nova_compute[226829]: 2026-01-31 08:05:40.351 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:40 compute-2 ceph-mon[77282]: pgmap v2061: 305 pgs: 305 active+clean; 372 MiB data, 1006 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 31 08:05:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:41.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:41 compute-2 sudo[272310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:05:41 compute-2 sudo[272310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:41 compute-2 sudo[272310]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:41 compute-2 sudo[272335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:05:41 compute-2 sudo[272335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:41 compute-2 sudo[272335]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:42.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:42.408 143948 DEBUG eventlet.wsgi.server [-] (143948) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Jan 31 08:05:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:42.411 143948 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0
Jan 31 08:05:42 compute-2 ovn_metadata_agent[143834]: Accept: */*
Jan 31 08:05:42 compute-2 ovn_metadata_agent[143834]: Connection: close
Jan 31 08:05:42 compute-2 ovn_metadata_agent[143834]: Content-Type: text/plain
Jan 31 08:05:42 compute-2 ovn_metadata_agent[143834]: Host: 169.254.169.254
Jan 31 08:05:42 compute-2 ovn_metadata_agent[143834]: User-Agent: curl/7.84.0
Jan 31 08:05:42 compute-2 ovn_metadata_agent[143834]: X-Forwarded-For: 10.100.0.8
Jan 31 08:05:42 compute-2 ovn_metadata_agent[143834]: X-Ovn-Network-Id: 79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Jan 31 08:05:42 compute-2 ovn_controller[133834]: 2026-01-31T08:05:42Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f8:c3:66 10.100.0.12
Jan 31 08:05:42 compute-2 ovn_controller[133834]: 2026-01-31T08:05:42Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f8:c3:66 10.100.0.12
Jan 31 08:05:42 compute-2 ceph-mon[77282]: pgmap v2062: 305 pgs: 305 active+clean; 376 MiB data, 1014 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 989 KiB/s wr, 57 op/s
Jan 31 08:05:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:42.894 143948 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Jan 31 08:05:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:42.894 143948 INFO eventlet.wsgi.server [-] 10.100.0.8,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 2552 time: 0.4839654
Jan 31 08:05:42 compute-2 haproxy-metadata-proxy-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa[271199]: 10.100.0.8:35722 [31/Jan/2026:08:05:42.406] listener listener/metadata 0/0/0/488/488 200 2536 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Jan 31 08:05:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:05:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:43.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:05:43 compute-2 podman[272360]: 2026-01-31 08:05:43.192048163 +0000 UTC m=+0.067444186 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:05:43 compute-2 nova_compute[226829]: 2026-01-31 08:05:43.315 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:43 compute-2 nova_compute[226829]: 2026-01-31 08:05:43.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:05:43 compute-2 nova_compute[226829]: 2026-01-31 08:05:43.488 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:43 compute-2 nova_compute[226829]: 2026-01-31 08:05:43.489 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:43 compute-2 nova_compute[226829]: 2026-01-31 08:05:43.489 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:43 compute-2 nova_compute[226829]: 2026-01-31 08:05:43.490 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:43 compute-2 nova_compute[226829]: 2026-01-31 08:05:43.490 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:43 compute-2 nova_compute[226829]: 2026-01-31 08:05:43.490 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:43 compute-2 nova_compute[226829]: 2026-01-31 08:05:43.525 226833 DEBUG nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Jan 31 08:05:43 compute-2 nova_compute[226829]: 2026-01-31 08:05:43.551 226833 DEBUG nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Jan 31 08:05:43 compute-2 nova_compute[226829]: 2026-01-31 08:05:43.552 226833 DEBUG nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Image id 7c23949f-bba8-4466-bb79-caf568852d38 yields fingerprint ff90c10b8251df1dd96780c3025774cae23123c6 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Jan 31 08:05:43 compute-2 nova_compute[226829]: 2026-01-31 08:05:43.552 226833 INFO nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] image 7c23949f-bba8-4466-bb79-caf568852d38 at (/var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6): checking
Jan 31 08:05:43 compute-2 nova_compute[226829]: 2026-01-31 08:05:43.552 226833 DEBUG nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] image 7c23949f-bba8-4466-bb79-caf568852d38 at (/var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Jan 31 08:05:43 compute-2 nova_compute[226829]: 2026-01-31 08:05:43.555 226833 DEBUG nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Jan 31 08:05:43 compute-2 nova_compute[226829]: 2026-01-31 08:05:43.556 226833 DEBUG nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] b0ff8f26-937f-43e0-b422-8a0fb0226eac is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Jan 31 08:05:43 compute-2 nova_compute[226829]: 2026-01-31 08:05:43.556 226833 DEBUG nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] 66218154-3695-4970-8525-7eaae98f9f14 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Jan 31 08:05:43 compute-2 nova_compute[226829]: 2026-01-31 08:05:43.556 226833 WARNING nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Unknown base file: /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c
Jan 31 08:05:43 compute-2 nova_compute[226829]: 2026-01-31 08:05:43.556 226833 INFO nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Active base files: /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6
Jan 31 08:05:43 compute-2 nova_compute[226829]: 2026-01-31 08:05:43.556 226833 INFO nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Removable base files: /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c
Jan 31 08:05:43 compute-2 nova_compute[226829]: 2026-01-31 08:05:43.557 226833 INFO nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c
Jan 31 08:05:43 compute-2 nova_compute[226829]: 2026-01-31 08:05:43.557 226833 DEBUG nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Jan 31 08:05:43 compute-2 nova_compute[226829]: 2026-01-31 08:05:43.557 226833 DEBUG nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Jan 31 08:05:43 compute-2 nova_compute[226829]: 2026-01-31 08:05:43.557 226833 DEBUG nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Jan 31 08:05:43 compute-2 nova_compute[226829]: 2026-01-31 08:05:43.558 226833 INFO nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Jan 31 08:05:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1434333554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:05:43 compute-2 ceph-mon[77282]: pgmap v2063: 305 pgs: 305 active+clean; 388 MiB data, 1024 MiB used, 20 GiB / 21 GiB avail; 298 KiB/s rd, 1.5 MiB/s wr, 35 op/s
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.031 226833 DEBUG oslo_concurrency.lockutils [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.031 226833 DEBUG oslo_concurrency.lockutils [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.031 226833 DEBUG oslo_concurrency.lockutils [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.032 226833 DEBUG oslo_concurrency.lockutils [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.032 226833 DEBUG oslo_concurrency.lockutils [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.033 226833 INFO nova.compute.manager [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Terminating instance
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.035 226833 DEBUG nova.compute.manager [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:05:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:44.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:44 compute-2 kernel: tapfbf252f3-5b (unregistering): left promiscuous mode
Jan 31 08:05:44 compute-2 NetworkManager[48999]: <info>  [1769846744.1595] device (tapfbf252f3-5b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:05:44 compute-2 ovn_controller[133834]: 2026-01-31T08:05:44Z|00407|binding|INFO|Releasing lport fbf252f3-5bb0-4da8-9564-87a1f052f2dc from this chassis (sb_readonly=0)
Jan 31 08:05:44 compute-2 ovn_controller[133834]: 2026-01-31T08:05:44Z|00408|binding|INFO|Setting lport fbf252f3-5bb0-4da8-9564-87a1f052f2dc down in Southbound
Jan 31 08:05:44 compute-2 ovn_controller[133834]: 2026-01-31T08:05:44Z|00409|binding|INFO|Removing iface tapfbf252f3-5b ovn-installed in OVS
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.166 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.169 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.175 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:8d:67 10.100.0.8'], port_security=['fa:16:3e:57:8d:67 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b0ff8f26-937f-43e0-b422-8a0fb0226eac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2327b93dd7d648efad6d2b303f9e462e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2eb2c302-f4c8-4ded-ad35-1d7e03049730', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.224'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f6dd2f7c-7ff8-4082-9961-5cfe817b3d85, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=fbf252f3-5bb0-4da8-9564-87a1f052f2dc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.177 143841 INFO neutron.agent.ovn.metadata.agent [-] Port fbf252f3-5bb0-4da8-9564-87a1f052f2dc in datapath 79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa unbound from our chassis
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.179 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.179 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.181 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[04369c3b-8c52-41d0-a34b-b538e6b11d1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.182 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa namespace which is not needed anymore
Jan 31 08:05:44 compute-2 kernel: tapfead71ab-29 (unregistering): left promiscuous mode
Jan 31 08:05:44 compute-2 NetworkManager[48999]: <info>  [1769846744.1871] device (tapfead71ab-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:05:44 compute-2 ovn_controller[133834]: 2026-01-31T08:05:44Z|00410|binding|INFO|Releasing lport fead71ab-29ff-4831-93d2-9a03e4ed64ac from this chassis (sb_readonly=0)
Jan 31 08:05:44 compute-2 ovn_controller[133834]: 2026-01-31T08:05:44Z|00411|binding|INFO|Setting lport fead71ab-29ff-4831-93d2-9a03e4ed64ac down in Southbound
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.195 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 ovn_controller[133834]: 2026-01-31T08:05:44Z|00412|binding|INFO|Removing iface tapfead71ab-29 ovn-installed in OVS
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.200 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.201 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:8d:5f 10.1.1.170'], port_security=['fa:16:3e:52:8d:5f 10.1.1.170'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest_v242-1955072108', 'neutron:cidrs': '10.1.1.170/24', 'neutron:device_id': 'b0ff8f26-937f-43e0-b422-8a0fb0226eac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0fc23939-3578-48f7-b98e-07928f016ed0', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest_v242-1955072108', 'neutron:project_id': '2327b93dd7d648efad6d2b303f9e462e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8eca293e-dca5-4092-be7b-a615369ad61a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64450e70-c4c7-42c3-be71-768482625146, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=fead71ab-29ff-4831-93d2-9a03e4ed64ac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:05:44 compute-2 kernel: tape0f7cff6-3b (unregistering): left promiscuous mode
Jan 31 08:05:44 compute-2 NetworkManager[48999]: <info>  [1769846744.2195] device (tape0f7cff6-3b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:05:44 compute-2 ovn_controller[133834]: 2026-01-31T08:05:44Z|00413|binding|INFO|Releasing lport e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e from this chassis (sb_readonly=0)
Jan 31 08:05:44 compute-2 ovn_controller[133834]: 2026-01-31T08:05:44Z|00414|binding|INFO|Setting lport e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e down in Southbound
Jan 31 08:05:44 compute-2 ovn_controller[133834]: 2026-01-31T08:05:44Z|00415|binding|INFO|Removing iface tape0f7cff6-3b ovn-installed in OVS
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.231 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.233 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 kernel: tapa3ef806e-29 (unregistering): left promiscuous mode
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.236 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:33:c0 10.1.1.182'], port_security=['fa:16:3e:c2:33:c0 10.1.1.182'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest_v242-973870628', 'neutron:cidrs': '10.1.1.182/24', 'neutron:device_id': 'b0ff8f26-937f-43e0-b422-8a0fb0226eac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0fc23939-3578-48f7-b98e-07928f016ed0', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest_v242-973870628', 'neutron:project_id': '2327b93dd7d648efad6d2b303f9e462e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8eca293e-dca5-4092-be7b-a615369ad61a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64450e70-c4c7-42c3-be71-768482625146, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:05:44 compute-2 NetworkManager[48999]: <info>  [1769846744.2411] device (tapa3ef806e-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.242 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:05:44 compute-2 ovn_controller[133834]: 2026-01-31T08:05:44Z|00416|binding|INFO|Releasing lport a3ef806e-29d0-4489-9e8b-fb24ee625783 from this chassis (sb_readonly=0)
Jan 31 08:05:44 compute-2 ovn_controller[133834]: 2026-01-31T08:05:44Z|00417|binding|INFO|Setting lport a3ef806e-29d0-4489-9e8b-fb24ee625783 down in Southbound
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.252 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 ovn_controller[133834]: 2026-01-31T08:05:44Z|00418|binding|INFO|Removing iface tapa3ef806e-29 ovn-installed in OVS
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.255 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 kernel: tap24a33ffc-f6 (unregistering): left promiscuous mode
Jan 31 08:05:44 compute-2 NetworkManager[48999]: <info>  [1769846744.2582] device (tap24a33ffc-f6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.259 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:50:1d 10.1.1.122'], port_security=['fa:16:3e:88:50:1d 10.1.1.122'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.122/24', 'neutron:device_id': 'b0ff8f26-937f-43e0-b422-8a0fb0226eac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0fc23939-3578-48f7-b98e-07928f016ed0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2327b93dd7d648efad6d2b303f9e462e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2eb2c302-f4c8-4ded-ad35-1d7e03049730', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64450e70-c4c7-42c3-be71-768482625146, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=a3ef806e-29d0-4489-9e8b-fb24ee625783) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.264 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 ovn_controller[133834]: 2026-01-31T08:05:44Z|00419|binding|INFO|Releasing lport 24a33ffc-f652-4c4c-84e0-5ba78e56a6ca from this chassis (sb_readonly=0)
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.273 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 ovn_controller[133834]: 2026-01-31T08:05:44Z|00420|binding|INFO|Setting lport 24a33ffc-f652-4c4c-84e0-5ba78e56a6ca down in Southbound
Jan 31 08:05:44 compute-2 ovn_controller[133834]: 2026-01-31T08:05:44Z|00421|binding|INFO|Removing iface tap24a33ffc-f6 ovn-installed in OVS
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.275 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 kernel: tap01b25ee1-01 (unregistering): left promiscuous mode
Jan 31 08:05:44 compute-2 NetworkManager[48999]: <info>  [1769846744.2824] device (tap01b25ee1-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.283 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:a6:94 10.1.1.54'], port_security=['fa:16:3e:3c:a6:94 10.1.1.54'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.54/24', 'neutron:device_id': 'b0ff8f26-937f-43e0-b422-8a0fb0226eac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0fc23939-3578-48f7-b98e-07928f016ed0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2327b93dd7d648efad6d2b303f9e462e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2eb2c302-f4c8-4ded-ad35-1d7e03049730', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64450e70-c4c7-42c3-be71-768482625146, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=24a33ffc-f652-4c4c-84e0-5ba78e56a6ca) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.283 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 neutron-haproxy-ovnmeta-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa[271164]: [NOTICE]   (271193) : haproxy version is 2.8.14-c23fe91
Jan 31 08:05:44 compute-2 neutron-haproxy-ovnmeta-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa[271164]: [NOTICE]   (271193) : path to executable is /usr/sbin/haproxy
Jan 31 08:05:44 compute-2 neutron-haproxy-ovnmeta-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa[271164]: [WARNING]  (271193) : Exiting Master process...
Jan 31 08:05:44 compute-2 neutron-haproxy-ovnmeta-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa[271164]: [WARNING]  (271193) : Exiting Master process...
Jan 31 08:05:44 compute-2 ovn_controller[133834]: 2026-01-31T08:05:44Z|00422|binding|INFO|Releasing lport 01b25ee1-016c-4dac-860c-941a5efc920a from this chassis (sb_readonly=0)
Jan 31 08:05:44 compute-2 neutron-haproxy-ovnmeta-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa[271164]: [ALERT]    (271193) : Current worker (271199) exited with code 143 (Terminated)
Jan 31 08:05:44 compute-2 neutron-haproxy-ovnmeta-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa[271164]: [WARNING]  (271193) : All workers exited. Exiting... (0)
Jan 31 08:05:44 compute-2 ovn_controller[133834]: 2026-01-31T08:05:44Z|00423|binding|INFO|Setting lport 01b25ee1-016c-4dac-860c-941a5efc920a down in Southbound
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.294 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 systemd[1]: libpod-8009b66ad83cbe10e043738b20832097dfdba1339e232aa2604dcdbe4daf0497.scope: Deactivated successfully.
Jan 31 08:05:44 compute-2 ovn_controller[133834]: 2026-01-31T08:05:44Z|00424|binding|INFO|Removing iface tap01b25ee1-01 ovn-installed in OVS
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.297 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.301 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 kernel: tap522d060c-fd (unregistering): left promiscuous mode
Jan 31 08:05:44 compute-2 podman[272423]: 2026-01-31 08:05:44.304501057 +0000 UTC m=+0.052732826 container died 8009b66ad83cbe10e043738b20832097dfdba1339e232aa2604dcdbe4daf0497 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.303 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:8f:2c 10.2.2.100'], port_security=['fa:16:3e:0a:8f:2c 10.2.2.100'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.100/24', 'neutron:device_id': 'b0ff8f26-937f-43e0-b422-8a0fb0226eac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2327b93dd7d648efad6d2b303f9e462e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2eb2c302-f4c8-4ded-ad35-1d7e03049730', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13ba0715-c991-496d-88de-9a6b20c0e088, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=01b25ee1-016c-4dac-860c-941a5efc920a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:05:44 compute-2 NetworkManager[48999]: <info>  [1769846744.3067] device (tap522d060c-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:05:44 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8009b66ad83cbe10e043738b20832097dfdba1339e232aa2604dcdbe4daf0497-userdata-shm.mount: Deactivated successfully.
Jan 31 08:05:44 compute-2 systemd[1]: var-lib-containers-storage-overlay-052e94cc811794da5ba9f1ceee15be14327550f64a49ca9b54ea0165236aff1d-merged.mount: Deactivated successfully.
Jan 31 08:05:44 compute-2 ovn_controller[133834]: 2026-01-31T08:05:44Z|00425|binding|INFO|Releasing lport 522d060c-fdf5-4d50-b0b9-94211986b4ee from this chassis (sb_readonly=0)
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.337 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 ovn_controller[133834]: 2026-01-31T08:05:44Z|00426|binding|INFO|Setting lport 522d060c-fdf5-4d50-b0b9-94211986b4ee down in Southbound
Jan 31 08:05:44 compute-2 ovn_controller[133834]: 2026-01-31T08:05:44Z|00427|binding|INFO|Removing iface tap522d060c-fd ovn-installed in OVS
Jan 31 08:05:44 compute-2 podman[272423]: 2026-01-31 08:05:44.339788396 +0000 UTC m=+0.088020165 container cleanup 8009b66ad83cbe10e043738b20832097dfdba1339e232aa2604dcdbe4daf0497 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.341 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.343 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.347 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:3a:d6 10.2.2.200'], port_security=['fa:16:3e:3b:3a:d6 10.2.2.200'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.200/24', 'neutron:device_id': 'b0ff8f26-937f-43e0-b422-8a0fb0226eac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2327b93dd7d648efad6d2b303f9e462e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2eb2c302-f4c8-4ded-ad35-1d7e03049730', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13ba0715-c991-496d-88de-9a6b20c0e088, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=522d060c-fdf5-4d50-b0b9-94211986b4ee) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:05:44 compute-2 systemd[1]: libpod-conmon-8009b66ad83cbe10e043738b20832097dfdba1339e232aa2604dcdbe4daf0497.scope: Deactivated successfully.
Jan 31 08:05:44 compute-2 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000063.scope: Deactivated successfully.
Jan 31 08:05:44 compute-2 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000063.scope: Consumed 16.797s CPU time.
Jan 31 08:05:44 compute-2 systemd-machined[195142]: Machine qemu-44-instance-00000063 terminated.
Jan 31 08:05:44 compute-2 podman[272490]: 2026-01-31 08:05:44.392251134 +0000 UTC m=+0.033826171 container remove 8009b66ad83cbe10e043738b20832097dfdba1339e232aa2604dcdbe4daf0497 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.395 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8925c21f-6e90-42b1-b96e-5cd972aa9b8b]: (4, ('Sat Jan 31 08:05:44 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa (8009b66ad83cbe10e043738b20832097dfdba1339e232aa2604dcdbe4daf0497)\n8009b66ad83cbe10e043738b20832097dfdba1339e232aa2604dcdbe4daf0497\nSat Jan 31 08:05:44 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa (8009b66ad83cbe10e043738b20832097dfdba1339e232aa2604dcdbe4daf0497)\n8009b66ad83cbe10e043738b20832097dfdba1339e232aa2604dcdbe4daf0497\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.397 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[eb5bc7fe-5250-4bb8-8ce8-01dd76b3de6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.397 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79a3d7c9-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.400 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 kernel: tap79a3d7c9-b0: left promiscuous mode
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.415 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.418 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d8eaad52-5852-48e0-b290-5b2f7d287af7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.436 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9c76dd19-907a-4bc4-8047-81f5b3a4f5f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.438 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[eaf6aa80-810e-4952-a1a8-f1cec1b60f93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.450 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[aae3e5bc-4351-4c0b-b871-c5cadcba210e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695094, 'reachable_time': 20420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272525, 'error': None, 'target': 'ovnmeta-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.453 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.453 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[bc8cc035-658f-4492-bc79-01397171642b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.454 143841 INFO neutron.agent.ovn.metadata.agent [-] Port fead71ab-29ff-4831-93d2-9a03e4ed64ac in datapath 0fc23939-3578-48f7-b98e-07928f016ed0 unbound from our chassis
Jan 31 08:05:44 compute-2 systemd[1]: run-netns-ovnmeta\x2d79a3d7c9\x2db5c3\x2d4358\x2dbbeb\x2d1a48b677f9fa.mount: Deactivated successfully.
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.456 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0fc23939-3578-48f7-b98e-07928f016ed0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.457 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1e089551-f5ab-43ef-8111-3f0b393e77d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.458 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0 namespace which is not needed anymore
Jan 31 08:05:44 compute-2 NetworkManager[48999]: <info>  [1769846744.4625] manager: (tapfead71ab-29): new Tun device (/org/freedesktop/NetworkManager/Devices/207)
Jan 31 08:05:44 compute-2 NetworkManager[48999]: <info>  [1769846744.4747] manager: (tape0f7cff6-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/208)
Jan 31 08:05:44 compute-2 NetworkManager[48999]: <info>  [1769846744.5219] manager: (tap24a33ffc-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/209)
Jan 31 08:05:44 compute-2 NetworkManager[48999]: <info>  [1769846744.5399] manager: (tap522d060c-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/210)
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.553 226833 DEBUG nova.compute.manager [req-d0c59938-b49b-4958-8f32-c04676285c97 req-f4fb7ecb-3c7a-48f8-8c80-721828856adc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-unplugged-fead71ab-29ff-4831-93d2-9a03e4ed64ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.554 226833 DEBUG oslo_concurrency.lockutils [req-d0c59938-b49b-4958-8f32-c04676285c97 req-f4fb7ecb-3c7a-48f8-8c80-721828856adc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.554 226833 DEBUG oslo_concurrency.lockutils [req-d0c59938-b49b-4958-8f32-c04676285c97 req-f4fb7ecb-3c7a-48f8-8c80-721828856adc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.554 226833 DEBUG oslo_concurrency.lockutils [req-d0c59938-b49b-4958-8f32-c04676285c97 req-f4fb7ecb-3c7a-48f8-8c80-721828856adc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.554 226833 DEBUG nova.compute.manager [req-d0c59938-b49b-4958-8f32-c04676285c97 req-f4fb7ecb-3c7a-48f8-8c80-721828856adc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] No waiting events found dispatching network-vif-unplugged-fead71ab-29ff-4831-93d2-9a03e4ed64ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.554 226833 DEBUG nova.compute.manager [req-d0c59938-b49b-4958-8f32-c04676285c97 req-f4fb7ecb-3c7a-48f8-8c80-721828856adc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-unplugged-fead71ab-29ff-4831-93d2-9a03e4ed64ac for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.560 226833 INFO nova.virt.libvirt.driver [-] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Instance destroyed successfully.
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.560 226833 DEBUG nova.objects.instance [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Lazy-loading 'resources' on Instance uuid b0ff8f26-937f-43e0-b422-8a0fb0226eac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.577 226833 DEBUG nova.compute.manager [req-53c0a209-b10b-416b-9883-29bddd8947db req-0655334c-e871-408e-bbf3-55ee06e07f96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-unplugged-fbf252f3-5bb0-4da8-9564-87a1f052f2dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.577 226833 DEBUG oslo_concurrency.lockutils [req-53c0a209-b10b-416b-9883-29bddd8947db req-0655334c-e871-408e-bbf3-55ee06e07f96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.577 226833 DEBUG oslo_concurrency.lockutils [req-53c0a209-b10b-416b-9883-29bddd8947db req-0655334c-e871-408e-bbf3-55ee06e07f96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.578 226833 DEBUG oslo_concurrency.lockutils [req-53c0a209-b10b-416b-9883-29bddd8947db req-0655334c-e871-408e-bbf3-55ee06e07f96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:44 compute-2 neutron-haproxy-ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0[271280]: [NOTICE]   (271284) : haproxy version is 2.8.14-c23fe91
Jan 31 08:05:44 compute-2 neutron-haproxy-ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0[271280]: [NOTICE]   (271284) : path to executable is /usr/sbin/haproxy
Jan 31 08:05:44 compute-2 neutron-haproxy-ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0[271280]: [WARNING]  (271284) : Exiting Master process...
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.578 226833 DEBUG nova.compute.manager [req-53c0a209-b10b-416b-9883-29bddd8947db req-0655334c-e871-408e-bbf3-55ee06e07f96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] No waiting events found dispatching network-vif-unplugged-fbf252f3-5bb0-4da8-9564-87a1f052f2dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.578 226833 DEBUG nova.compute.manager [req-53c0a209-b10b-416b-9883-29bddd8947db req-0655334c-e871-408e-bbf3-55ee06e07f96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-unplugged-fbf252f3-5bb0-4da8-9564-87a1f052f2dc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:05:44 compute-2 neutron-haproxy-ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0[271280]: [ALERT]    (271284) : Current worker (271286) exited with code 143 (Terminated)
Jan 31 08:05:44 compute-2 neutron-haproxy-ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0[271280]: [WARNING]  (271284) : All workers exited. Exiting... (0)
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.580 226833 DEBUG nova.virt.libvirt.vif [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:04:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1727712073',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1727712073',id=99,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNj9e0Kbzl+E2OM6alabFUbQI9DpSInfHQfG2T4c3RhmiTeeu25YMPQfB0gfvWz7Jm1UlInYbJJ4YMwlil38MyDKg2n3hamSf3QryEahlV36B2sdJMqBp1FNXRu9/iAVA==',key_name='tempest-keypair-1025048098',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:05:02Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2327b93dd7d648efad6d2b303f9e462e',ramdisk_id='',reservation_id='r-0751v0zo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_b
us='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-948331740',owner_user_name='tempest-TaggedBootDevicesTest_v242-948331740-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:05:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8ff6e8783c3f4132b787cb0653fff9b0',uuid=b0ff8f26-937f-43e0-b422-8a0fb0226eac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fbf252f3-5bb0-4da8-9564-87a1f052f2dc", "address": "fa:16:3e:57:8d:67", "network": {"id": "79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-638962480-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf252f3-5b", "ovs_interfaceid": "fbf252f3-5bb0-4da8-9564-87a1f052f2dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.580 226833 DEBUG nova.network.os_vif_util [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converting VIF {"id": "fbf252f3-5bb0-4da8-9564-87a1f052f2dc", "address": "fa:16:3e:57:8d:67", "network": {"id": "79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-638962480-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf252f3-5b", "ovs_interfaceid": "fbf252f3-5bb0-4da8-9564-87a1f052f2dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:05:44 compute-2 systemd[1]: libpod-42843ee1744db2ae569863cdcd0b1accd185b7e479b943938d48a3b0c6769f2b.scope: Deactivated successfully.
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.581 226833 DEBUG nova.network.os_vif_util [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:57:8d:67,bridge_name='br-int',has_traffic_filtering=True,id=fbf252f3-5bb0-4da8-9564-87a1f052f2dc,network=Network(79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbf252f3-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.581 226833 DEBUG os_vif [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:8d:67,bridge_name='br-int',has_traffic_filtering=True,id=fbf252f3-5bb0-4da8-9564-87a1f052f2dc,network=Network(79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbf252f3-5b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.584 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.584 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbf252f3-5b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.585 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.588 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:05:44 compute-2 podman[272622]: 2026-01-31 08:05:44.589758907 +0000 UTC m=+0.044607304 container died 42843ee1744db2ae569863cdcd0b1accd185b7e479b943938d48a3b0c6769f2b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.598 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.603 226833 INFO os_vif [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:8d:67,bridge_name='br-int',has_traffic_filtering=True,id=fbf252f3-5bb0-4da8-9564-87a1f052f2dc,network=Network(79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbf252f3-5b')
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.603 226833 DEBUG nova.virt.libvirt.vif [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:04:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1727712073',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1727712073',id=99,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNj9e0Kbzl+E2OM6alabFUbQI9DpSInfHQfG2T4c3RhmiTeeu25YMPQfB0gfvWz7Jm1UlInYbJJ4YMwlil38MyDKg2n3hamSf3QryEahlV36B2sdJMqBp1FNXRu9/iAVA==',key_name='tempest-keypair-1025048098',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:05:02Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2327b93dd7d648efad6d2b303f9e462e',ramdisk_id='',reservation_id='r-0751v0zo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_b
us='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-948331740',owner_user_name='tempest-TaggedBootDevicesTest_v242-948331740-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:05:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8ff6e8783c3f4132b787cb0653fff9b0',uuid=b0ff8f26-937f-43e0-b422-8a0fb0226eac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "address": "fa:16:3e:52:8d:5f", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfead71ab-29", "ovs_interfaceid": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.604 226833 DEBUG nova.network.os_vif_util [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converting VIF {"id": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "address": "fa:16:3e:52:8d:5f", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfead71ab-29", "ovs_interfaceid": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.604 226833 DEBUG nova.network.os_vif_util [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:52:8d:5f,bridge_name='br-int',has_traffic_filtering=True,id=fead71ab-29ff-4831-93d2-9a03e4ed64ac,network=Network(0fc23939-3578-48f7-b98e-07928f016ed0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfead71ab-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.604 226833 DEBUG os_vif [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:52:8d:5f,bridge_name='br-int',has_traffic_filtering=True,id=fead71ab-29ff-4831-93d2-9a03e4ed64ac,network=Network(0fc23939-3578-48f7-b98e-07928f016ed0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfead71ab-29') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.606 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.606 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfead71ab-29, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.607 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.609 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:05:44 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-42843ee1744db2ae569863cdcd0b1accd185b7e479b943938d48a3b0c6769f2b-userdata-shm.mount: Deactivated successfully.
Jan 31 08:05:44 compute-2 systemd[1]: var-lib-containers-storage-overlay-64de3af829a5412905eed055b8d16f442c2e74e44fcc820b82a34cdadb2d2552-merged.mount: Deactivated successfully.
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.618 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.620 226833 INFO os_vif [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:52:8d:5f,bridge_name='br-int',has_traffic_filtering=True,id=fead71ab-29ff-4831-93d2-9a03e4ed64ac,network=Network(0fc23939-3578-48f7-b98e-07928f016ed0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfead71ab-29')
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.621 226833 DEBUG nova.virt.libvirt.vif [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:04:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1727712073',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1727712073',id=99,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNj9e0Kbzl+E2OM6alabFUbQI9DpSInfHQfG2T4c3RhmiTeeu25YMPQfB0gfvWz7Jm1UlInYbJJ4YMwlil38MyDKg2n3hamSf3QryEahlV36B2sdJMqBp1FNXRu9/iAVA==',key_name='tempest-keypair-1025048098',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:05:02Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2327b93dd7d648efad6d2b303f9e462e',ramdisk_id='',reservation_id='r-0751v0zo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-948331740',owner_user_name='tempest-TaggedBootDevicesTest_v242-948331740-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:05:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8ff6e8783c3f4132b787cb0653fff9b0',uuid=b0ff8f26-937f-43e0-b422-8a0fb0226eac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "address": "fa:16:3e:c2:33:c0", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0f7cff6-3b", "ovs_interfaceid": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.621 226833 DEBUG nova.network.os_vif_util [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converting VIF {"id": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "address": "fa:16:3e:c2:33:c0", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0f7cff6-3b", "ovs_interfaceid": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.622 226833 DEBUG nova.network.os_vif_util [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c2:33:c0,bridge_name='br-int',has_traffic_filtering=True,id=e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e,network=Network(0fc23939-3578-48f7-b98e-07928f016ed0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape0f7cff6-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.622 226833 DEBUG os_vif [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:33:c0,bridge_name='br-int',has_traffic_filtering=True,id=e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e,network=Network(0fc23939-3578-48f7-b98e-07928f016ed0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape0f7cff6-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:05:44 compute-2 podman[272622]: 2026-01-31 08:05:44.623709111 +0000 UTC m=+0.078557508 container cleanup 42843ee1744db2ae569863cdcd0b1accd185b7e479b943938d48a3b0c6769f2b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.623 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.624 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0f7cff6-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.627 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:05:44 compute-2 systemd[1]: libpod-conmon-42843ee1744db2ae569863cdcd0b1accd185b7e479b943938d48a3b0c6769f2b.scope: Deactivated successfully.
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.632 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.634 226833 INFO os_vif [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:33:c0,bridge_name='br-int',has_traffic_filtering=True,id=e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e,network=Network(0fc23939-3578-48f7-b98e-07928f016ed0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape0f7cff6-3b')
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.635 226833 DEBUG nova.virt.libvirt.vif [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:04:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1727712073',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1727712073',id=99,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNj9e0Kbzl+E2OM6alabFUbQI9DpSInfHQfG2T4c3RhmiTeeu25YMPQfB0gfvWz7Jm1UlInYbJJ4YMwlil38MyDKg2n3hamSf3QryEahlV36B2sdJMqBp1FNXRu9/iAVA==',key_name='tempest-keypair-1025048098',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:05:02Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2327b93dd7d648efad6d2b303f9e462e',ramdisk_id='',reservation_id='r-0751v0zo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-948331740',owner_user_name='tempest-TaggedBootDevicesTest_v242-948331740-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:05:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8ff6e8783c3f4132b787cb0653fff9b0',uuid=b0ff8f26-937f-43e0-b422-8a0fb0226eac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "address": "fa:16:3e:88:50:1d", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.122", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ef806e-29", "ovs_interfaceid": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.635 226833 DEBUG nova.network.os_vif_util [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converting VIF {"id": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "address": "fa:16:3e:88:50:1d", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.122", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ef806e-29", "ovs_interfaceid": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.636 226833 DEBUG nova.network.os_vif_util [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:88:50:1d,bridge_name='br-int',has_traffic_filtering=True,id=a3ef806e-29d0-4489-9e8b-fb24ee625783,network=Network(0fc23939-3578-48f7-b98e-07928f016ed0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ef806e-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.636 226833 DEBUG os_vif [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:50:1d,bridge_name='br-int',has_traffic_filtering=True,id=a3ef806e-29d0-4489-9e8b-fb24ee625783,network=Network(0fc23939-3578-48f7-b98e-07928f016ed0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ef806e-29') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.637 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.637 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3ef806e-29, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.640 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.644 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.654 226833 INFO os_vif [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:50:1d,bridge_name='br-int',has_traffic_filtering=True,id=a3ef806e-29d0-4489-9e8b-fb24ee625783,network=Network(0fc23939-3578-48f7-b98e-07928f016ed0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ef806e-29')
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.654 226833 DEBUG nova.virt.libvirt.vif [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:04:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1727712073',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1727712073',id=99,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNj9e0Kbzl+E2OM6alabFUbQI9DpSInfHQfG2T4c3RhmiTeeu25YMPQfB0gfvWz7Jm1UlInYbJJ4YMwlil38MyDKg2n3hamSf3QryEahlV36B2sdJMqBp1FNXRu9/iAVA==',key_name='tempest-keypair-1025048098',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:05:02Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2327b93dd7d648efad6d2b303f9e462e',ramdisk_id='',reservation_id='r-0751v0zo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-948331740',owner_user_name='tempest-TaggedBootDevicesTest_v242-948331740-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:05:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8ff6e8783c3f4132b787cb0653fff9b0',uuid=b0ff8f26-937f-43e0-b422-8a0fb0226eac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "address": "fa:16:3e:3c:a6:94", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24a33ffc-f6", "ovs_interfaceid": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.655 226833 DEBUG nova.network.os_vif_util [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converting VIF {"id": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "address": "fa:16:3e:3c:a6:94", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24a33ffc-f6", "ovs_interfaceid": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.655 226833 DEBUG nova.network.os_vif_util [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3c:a6:94,bridge_name='br-int',has_traffic_filtering=True,id=24a33ffc-f652-4c4c-84e0-5ba78e56a6ca,network=Network(0fc23939-3578-48f7-b98e-07928f016ed0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24a33ffc-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.655 226833 DEBUG os_vif [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3c:a6:94,bridge_name='br-int',has_traffic_filtering=True,id=24a33ffc-f652-4c4c-84e0-5ba78e56a6ca,network=Network(0fc23939-3578-48f7-b98e-07928f016ed0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24a33ffc-f6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.657 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.657 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24a33ffc-f6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.660 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.663 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.664 226833 INFO os_vif [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3c:a6:94,bridge_name='br-int',has_traffic_filtering=True,id=24a33ffc-f652-4c4c-84e0-5ba78e56a6ca,network=Network(0fc23939-3578-48f7-b98e-07928f016ed0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24a33ffc-f6')
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.665 226833 DEBUG nova.virt.libvirt.vif [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:04:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1727712073',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1727712073',id=99,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNj9e0Kbzl+E2OM6alabFUbQI9DpSInfHQfG2T4c3RhmiTeeu25YMPQfB0gfvWz7Jm1UlInYbJJ4YMwlil38MyDKg2n3hamSf3QryEahlV36B2sdJMqBp1FNXRu9/iAVA==',key_name='tempest-keypair-1025048098',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:05:02Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2327b93dd7d648efad6d2b303f9e462e',ramdisk_id='',reservation_id='r-0751v0zo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-948331740',owner_user_name='tempest-TaggedBootDevicesTest_v242-948331740-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:05:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8ff6e8783c3f4132b787cb0653fff9b0',uuid=b0ff8f26-937f-43e0-b422-8a0fb0226eac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "01b25ee1-016c-4dac-860c-941a5efc920a", "address": "fa:16:3e:0a:8f:2c", "network": {"id": "2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c", "bridge": "br-int", "label": "tempest-device-tagging-net2-30530444", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b25ee1-01", "ovs_interfaceid": "01b25ee1-016c-4dac-860c-941a5efc920a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.665 226833 DEBUG nova.network.os_vif_util [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converting VIF {"id": "01b25ee1-016c-4dac-860c-941a5efc920a", "address": "fa:16:3e:0a:8f:2c", "network": {"id": "2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c", "bridge": "br-int", "label": "tempest-device-tagging-net2-30530444", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b25ee1-01", "ovs_interfaceid": "01b25ee1-016c-4dac-860c-941a5efc920a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.666 226833 DEBUG nova.network.os_vif_util [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0a:8f:2c,bridge_name='br-int',has_traffic_filtering=True,id=01b25ee1-016c-4dac-860c-941a5efc920a,network=Network(2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01b25ee1-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.666 226833 DEBUG os_vif [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0a:8f:2c,bridge_name='br-int',has_traffic_filtering=True,id=01b25ee1-016c-4dac-860c-941a5efc920a,network=Network(2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01b25ee1-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.668 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.668 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01b25ee1-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.670 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.672 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.673 226833 INFO os_vif [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0a:8f:2c,bridge_name='br-int',has_traffic_filtering=True,id=01b25ee1-016c-4dac-860c-941a5efc920a,network=Network(2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01b25ee1-01')
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.674 226833 DEBUG nova.virt.libvirt.vif [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:04:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1727712073',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1727712073',id=99,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNj9e0Kbzl+E2OM6alabFUbQI9DpSInfHQfG2T4c3RhmiTeeu25YMPQfB0gfvWz7Jm1UlInYbJJ4YMwlil38MyDKg2n3hamSf3QryEahlV36B2sdJMqBp1FNXRu9/iAVA==',key_name='tempest-keypair-1025048098',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:05:02Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2327b93dd7d648efad6d2b303f9e462e',ramdisk_id='',reservation_id='r-0751v0zo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-948331740',owner_user_name='tempest-TaggedBootDevicesTest_v242-948331740-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:05:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8ff6e8783c3f4132b787cb0653fff9b0',uuid=b0ff8f26-937f-43e0-b422-8a0fb0226eac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "522d060c-fdf5-4d50-b0b9-94211986b4ee", "address": "fa:16:3e:3b:3a:d6", "network": {"id": "2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c", "bridge": "br-int", "label": "tempest-device-tagging-net2-30530444", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522d060c-fd", "ovs_interfaceid": "522d060c-fdf5-4d50-b0b9-94211986b4ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.674 226833 DEBUG nova.network.os_vif_util [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converting VIF {"id": "522d060c-fdf5-4d50-b0b9-94211986b4ee", "address": "fa:16:3e:3b:3a:d6", "network": {"id": "2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c", "bridge": "br-int", "label": "tempest-device-tagging-net2-30530444", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522d060c-fd", "ovs_interfaceid": "522d060c-fdf5-4d50-b0b9-94211986b4ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.675 226833 DEBUG nova.network.os_vif_util [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3b:3a:d6,bridge_name='br-int',has_traffic_filtering=True,id=522d060c-fdf5-4d50-b0b9-94211986b4ee,network=Network(2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap522d060c-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.675 226833 DEBUG os_vif [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3b:3a:d6,bridge_name='br-int',has_traffic_filtering=True,id=522d060c-fdf5-4d50-b0b9-94211986b4ee,network=Network(2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap522d060c-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.676 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.676 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap522d060c-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.677 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.679 226833 INFO os_vif [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3b:3a:d6,bridge_name='br-int',has_traffic_filtering=True,id=522d060c-fdf5-4d50-b0b9-94211986b4ee,network=Network(2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap522d060c-fd')
Jan 31 08:05:44 compute-2 podman[272689]: 2026-01-31 08:05:44.687101665 +0000 UTC m=+0.047300617 container remove 42843ee1744db2ae569863cdcd0b1accd185b7e479b943938d48a3b0c6769f2b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.690 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[82739b7f-59bd-4b29-94be-2d3aebe20e8d]: (4, ('Sat Jan 31 08:05:44 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0 (42843ee1744db2ae569863cdcd0b1accd185b7e479b943938d48a3b0c6769f2b)\n42843ee1744db2ae569863cdcd0b1accd185b7e479b943938d48a3b0c6769f2b\nSat Jan 31 08:05:44 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0 (42843ee1744db2ae569863cdcd0b1accd185b7e479b943938d48a3b0c6769f2b)\n42843ee1744db2ae569863cdcd0b1accd185b7e479b943938d48a3b0c6769f2b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.693 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[486428e3-a18b-42fc-9eb4-78e9dfe055ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.694 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0fc23939-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:44 compute-2 kernel: tap0fc23939-30: left promiscuous mode
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.700 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[524c399b-b09d-4db3-8e8a-aaedc5479262]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.705 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.710 226833 DEBUG nova.compute.manager [req-11220aed-814b-435e-ae6f-8cf4c2039287 req-bbf4803d-88d7-4293-85c2-07df24ea5e34 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-unplugged-e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.710 226833 DEBUG oslo_concurrency.lockutils [req-11220aed-814b-435e-ae6f-8cf4c2039287 req-bbf4803d-88d7-4293-85c2-07df24ea5e34 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.711 226833 DEBUG oslo_concurrency.lockutils [req-11220aed-814b-435e-ae6f-8cf4c2039287 req-bbf4803d-88d7-4293-85c2-07df24ea5e34 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.711 226833 DEBUG oslo_concurrency.lockutils [req-11220aed-814b-435e-ae6f-8cf4c2039287 req-bbf4803d-88d7-4293-85c2-07df24ea5e34 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.711 226833 DEBUG nova.compute.manager [req-11220aed-814b-435e-ae6f-8cf4c2039287 req-bbf4803d-88d7-4293-85c2-07df24ea5e34 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] No waiting events found dispatching network-vif-unplugged-e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.711 226833 DEBUG nova.compute.manager [req-11220aed-814b-435e-ae6f-8cf4c2039287 req-bbf4803d-88d7-4293-85c2-07df24ea5e34 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-unplugged-e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.712 226833 DEBUG nova.compute.manager [req-5ee8f8b0-0f4d-4ac6-9218-b4191cb1e617 req-9df48de1-b9a9-4c8d-bedc-e057625e01ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-unplugged-24a33ffc-f652-4c4c-84e0-5ba78e56a6ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.712 226833 DEBUG oslo_concurrency.lockutils [req-5ee8f8b0-0f4d-4ac6-9218-b4191cb1e617 req-9df48de1-b9a9-4c8d-bedc-e057625e01ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.712 226833 DEBUG oslo_concurrency.lockutils [req-5ee8f8b0-0f4d-4ac6-9218-b4191cb1e617 req-9df48de1-b9a9-4c8d-bedc-e057625e01ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.712 226833 DEBUG oslo_concurrency.lockutils [req-5ee8f8b0-0f4d-4ac6-9218-b4191cb1e617 req-9df48de1-b9a9-4c8d-bedc-e057625e01ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.713 226833 DEBUG nova.compute.manager [req-5ee8f8b0-0f4d-4ac6-9218-b4191cb1e617 req-9df48de1-b9a9-4c8d-bedc-e057625e01ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] No waiting events found dispatching network-vif-unplugged-24a33ffc-f652-4c4c-84e0-5ba78e56a6ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.713 226833 DEBUG nova.compute.manager [req-5ee8f8b0-0f4d-4ac6-9218-b4191cb1e617 req-9df48de1-b9a9-4c8d-bedc-e057625e01ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-unplugged-24a33ffc-f652-4c4c-84e0-5ba78e56a6ca for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.713 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[494b459b-b782-4e8b-8b52-2b4da51fcabe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.715 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8db29a7b-2300-46e9-8e5e-72cb47d1dd94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.729 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8d75eda9-9c0f-43b4-86e6-3087c25d70d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695163, 'reachable_time': 41385, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272722, 'error': None, 'target': 'ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.731 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0fc23939-3578-48f7-b98e-07928f016ed0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.731 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[f84e1729-0808-4e7a-8f51-eec693d5910e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.732 143841 INFO neutron.agent.ovn.metadata.agent [-] Port e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e in datapath 0fc23939-3578-48f7-b98e-07928f016ed0 unbound from our chassis
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.733 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0fc23939-3578-48f7-b98e-07928f016ed0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.734 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ec6b7312-cf46-4c1e-b905-ca82e9aff22c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.735 143841 INFO neutron.agent.ovn.metadata.agent [-] Port a3ef806e-29d0-4489-9e8b-fb24ee625783 in datapath 0fc23939-3578-48f7-b98e-07928f016ed0 unbound from our chassis
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.736 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0fc23939-3578-48f7-b98e-07928f016ed0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.737 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[24bfd9da-f073-46a9-a6a4-8ffc1c6f6a59]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.737 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 24a33ffc-f652-4c4c-84e0-5ba78e56a6ca in datapath 0fc23939-3578-48f7-b98e-07928f016ed0 unbound from our chassis
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.742 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0fc23939-3578-48f7-b98e-07928f016ed0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.743 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5560a26d-b955-4650-89d8-4f2ada79ae59]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.744 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 01b25ee1-016c-4dac-860c-941a5efc920a in datapath 2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c unbound from our chassis
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.745 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.746 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2cebd70d-e8cd-4a30-bf3d-fcb4d715a77e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.747 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c namespace which is not needed anymore
Jan 31 08:05:44 compute-2 neutron-haproxy-ovnmeta-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c[271369]: [NOTICE]   (271373) : haproxy version is 2.8.14-c23fe91
Jan 31 08:05:44 compute-2 neutron-haproxy-ovnmeta-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c[271369]: [NOTICE]   (271373) : path to executable is /usr/sbin/haproxy
Jan 31 08:05:44 compute-2 neutron-haproxy-ovnmeta-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c[271369]: [WARNING]  (271373) : Exiting Master process...
Jan 31 08:05:44 compute-2 neutron-haproxy-ovnmeta-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c[271369]: [ALERT]    (271373) : Current worker (271375) exited with code 143 (Terminated)
Jan 31 08:05:44 compute-2 neutron-haproxy-ovnmeta-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c[271369]: [WARNING]  (271373) : All workers exited. Exiting... (0)
Jan 31 08:05:44 compute-2 systemd[1]: libpod-5ce0984e26229efa867b1a8f8088374e81309be61ef709641256ab6c5cbbcffd.scope: Deactivated successfully.
Jan 31 08:05:44 compute-2 conmon[271369]: conmon 5ce0984e26229efa867b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5ce0984e26229efa867b1a8f8088374e81309be61ef709641256ab6c5cbbcffd.scope/container/memory.events
Jan 31 08:05:44 compute-2 podman[272743]: 2026-01-31 08:05:44.855339322 +0000 UTC m=+0.038689823 container died 5ce0984e26229efa867b1a8f8088374e81309be61ef709641256ab6c5cbbcffd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:05:44 compute-2 podman[272743]: 2026-01-31 08:05:44.893773968 +0000 UTC m=+0.077124469 container cleanup 5ce0984e26229efa867b1a8f8088374e81309be61ef709641256ab6c5cbbcffd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 08:05:44 compute-2 systemd[1]: libpod-conmon-5ce0984e26229efa867b1a8f8088374e81309be61ef709641256ab6c5cbbcffd.scope: Deactivated successfully.
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.917 226833 INFO nova.virt.libvirt.driver [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Deleting instance files /var/lib/nova/instances/b0ff8f26-937f-43e0-b422-8a0fb0226eac_del
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.918 226833 INFO nova.virt.libvirt.driver [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Deletion of /var/lib/nova/instances/b0ff8f26-937f-43e0-b422-8a0fb0226eac_del complete
Jan 31 08:05:44 compute-2 podman[272774]: 2026-01-31 08:05:44.937339143 +0000 UTC m=+0.030675385 container remove 5ce0984e26229efa867b1a8f8088374e81309be61ef709641256ab6c5cbbcffd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.940 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[036895ac-ae0a-4d98-bca3-20138e745807]: (4, ('Sat Jan 31 08:05:44 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c (5ce0984e26229efa867b1a8f8088374e81309be61ef709641256ab6c5cbbcffd)\n5ce0984e26229efa867b1a8f8088374e81309be61ef709641256ab6c5cbbcffd\nSat Jan 31 08:05:44 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c (5ce0984e26229efa867b1a8f8088374e81309be61ef709641256ab6c5cbbcffd)\n5ce0984e26229efa867b1a8f8088374e81309be61ef709641256ab6c5cbbcffd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.942 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a6b89313-2ba7-4c15-b096-8eeafd6318bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:44.943 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2bf3b855-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3064647811' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:05:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3064647811' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.979 226833 INFO nova.compute.manager [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Took 0.94 seconds to destroy the instance on the hypervisor.
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.980 226833 DEBUG oslo.service.loopingcall [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.980 226833 DEBUG nova.compute.manager [-] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:05:44 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.981 226833 DEBUG nova.network.neutron [-] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:05:45 compute-2 kernel: tap2bf3b855-d0: left promiscuous mode
Jan 31 08:05:45 compute-2 nova_compute[226829]: 2026-01-31 08:05:44.999 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:45 compute-2 nova_compute[226829]: 2026-01-31 08:05:45.003 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:45.006 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[87f2e628-fc86-4eea-b2c7-eae52ed25e63]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:45.029 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[de20571a-5cd0-466c-9d77-29ff44794089]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:45.030 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6b896902-337a-45ab-8c6b-e3e089cbd9cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:45.042 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b4d544f0-a316-4fe3-864a-d4c454f0e48e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695248, 'reachable_time': 26251, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272790, 'error': None, 'target': 'ovnmeta-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:45.043 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:05:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:45.043 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[83dc8c0c-1c40-4281-8de5-b3b9fae2ac5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:45.044 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 522d060c-fdf5-4d50-b0b9-94211986b4ee in datapath 2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c unbound from our chassis
Jan 31 08:05:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:45.045 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:05:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:05:45.046 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[56b31cc9-6f4f-416e-a907-ef54c7606ac4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:05:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:45.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:05:45 compute-2 systemd[1]: var-lib-containers-storage-overlay-31c1e264d30efc03541b1609adef3e6433704a6f0c8db3c941ae180ed3d142a5-merged.mount: Deactivated successfully.
Jan 31 08:05:45 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5ce0984e26229efa867b1a8f8088374e81309be61ef709641256ab6c5cbbcffd-userdata-shm.mount: Deactivated successfully.
Jan 31 08:05:45 compute-2 systemd[1]: run-netns-ovnmeta\x2d2bf3b855\x2dd9f3\x2d4cd4\x2da0e4\x2d60cc09ab8f0c.mount: Deactivated successfully.
Jan 31 08:05:45 compute-2 systemd[1]: run-netns-ovnmeta\x2d0fc23939\x2d3578\x2d48f7\x2db98e\x2d07928f016ed0.mount: Deactivated successfully.
Jan 31 08:05:45 compute-2 nova_compute[226829]: 2026-01-31 08:05:45.353 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:46 compute-2 ceph-mon[77282]: pgmap v2064: 305 pgs: 305 active+clean; 401 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 197 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Jan 31 08:05:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:05:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:46.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.701 226833 DEBUG nova.compute.manager [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-plugged-fead71ab-29ff-4831-93d2-9a03e4ed64ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.702 226833 DEBUG oslo_concurrency.lockutils [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.702 226833 DEBUG oslo_concurrency.lockutils [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.702 226833 DEBUG oslo_concurrency.lockutils [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.703 226833 DEBUG nova.compute.manager [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] No waiting events found dispatching network-vif-plugged-fead71ab-29ff-4831-93d2-9a03e4ed64ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.703 226833 WARNING nova.compute.manager [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received unexpected event network-vif-plugged-fead71ab-29ff-4831-93d2-9a03e4ed64ac for instance with vm_state active and task_state deleting.
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.703 226833 DEBUG nova.compute.manager [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-unplugged-01b25ee1-016c-4dac-860c-941a5efc920a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.703 226833 DEBUG oslo_concurrency.lockutils [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.703 226833 DEBUG oslo_concurrency.lockutils [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.704 226833 DEBUG oslo_concurrency.lockutils [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.704 226833 DEBUG nova.compute.manager [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] No waiting events found dispatching network-vif-unplugged-01b25ee1-016c-4dac-860c-941a5efc920a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.704 226833 DEBUG nova.compute.manager [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-unplugged-01b25ee1-016c-4dac-860c-941a5efc920a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.704 226833 DEBUG nova.compute.manager [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-plugged-01b25ee1-016c-4dac-860c-941a5efc920a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.705 226833 DEBUG oslo_concurrency.lockutils [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.705 226833 DEBUG oslo_concurrency.lockutils [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.705 226833 DEBUG oslo_concurrency.lockutils [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.705 226833 DEBUG nova.compute.manager [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] No waiting events found dispatching network-vif-plugged-01b25ee1-016c-4dac-860c-941a5efc920a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.706 226833 WARNING nova.compute.manager [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received unexpected event network-vif-plugged-01b25ee1-016c-4dac-860c-941a5efc920a for instance with vm_state active and task_state deleting.
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.706 226833 DEBUG nova.compute.manager [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-unplugged-522d060c-fdf5-4d50-b0b9-94211986b4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.706 226833 DEBUG oslo_concurrency.lockutils [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.706 226833 DEBUG oslo_concurrency.lockutils [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.707 226833 DEBUG oslo_concurrency.lockutils [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.707 226833 DEBUG nova.compute.manager [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] No waiting events found dispatching network-vif-unplugged-522d060c-fdf5-4d50-b0b9-94211986b4ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.707 226833 DEBUG nova.compute.manager [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-unplugged-522d060c-fdf5-4d50-b0b9-94211986b4ee for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.707 226833 DEBUG nova.compute.manager [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-plugged-522d060c-fdf5-4d50-b0b9-94211986b4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.707 226833 DEBUG oslo_concurrency.lockutils [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.708 226833 DEBUG oslo_concurrency.lockutils [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.708 226833 DEBUG oslo_concurrency.lockutils [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.708 226833 DEBUG nova.compute.manager [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] No waiting events found dispatching network-vif-plugged-522d060c-fdf5-4d50-b0b9-94211986b4ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.708 226833 WARNING nova.compute.manager [req-943cf8aa-3ffd-4613-80f2-30383644612b req-50cf97ad-a4c9-4332-9d32-3ca0c275f2d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received unexpected event network-vif-plugged-522d060c-fdf5-4d50-b0b9-94211986b4ee for instance with vm_state active and task_state deleting.
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.743 226833 DEBUG nova.compute.manager [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-plugged-fbf252f3-5bb0-4da8-9564-87a1f052f2dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.743 226833 DEBUG oslo_concurrency.lockutils [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.743 226833 DEBUG oslo_concurrency.lockutils [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.744 226833 DEBUG oslo_concurrency.lockutils [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.744 226833 DEBUG nova.compute.manager [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] No waiting events found dispatching network-vif-plugged-fbf252f3-5bb0-4da8-9564-87a1f052f2dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.744 226833 WARNING nova.compute.manager [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received unexpected event network-vif-plugged-fbf252f3-5bb0-4da8-9564-87a1f052f2dc for instance with vm_state active and task_state deleting.
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.744 226833 DEBUG nova.compute.manager [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-unplugged-a3ef806e-29d0-4489-9e8b-fb24ee625783 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.745 226833 DEBUG oslo_concurrency.lockutils [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.745 226833 DEBUG oslo_concurrency.lockutils [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.745 226833 DEBUG oslo_concurrency.lockutils [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.745 226833 DEBUG nova.compute.manager [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] No waiting events found dispatching network-vif-unplugged-a3ef806e-29d0-4489-9e8b-fb24ee625783 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.745 226833 DEBUG nova.compute.manager [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-unplugged-a3ef806e-29d0-4489-9e8b-fb24ee625783 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.746 226833 DEBUG nova.compute.manager [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-plugged-a3ef806e-29d0-4489-9e8b-fb24ee625783 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.746 226833 DEBUG oslo_concurrency.lockutils [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.746 226833 DEBUG oslo_concurrency.lockutils [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.746 226833 DEBUG oslo_concurrency.lockutils [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.746 226833 DEBUG nova.compute.manager [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] No waiting events found dispatching network-vif-plugged-a3ef806e-29d0-4489-9e8b-fb24ee625783 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.746 226833 WARNING nova.compute.manager [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received unexpected event network-vif-plugged-a3ef806e-29d0-4489-9e8b-fb24ee625783 for instance with vm_state active and task_state deleting.
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.826 226833 DEBUG nova.compute.manager [req-773f742d-246f-40fb-a323-1618b6066bc6 req-2545177f-a17b-417f-a549-7a64232e06f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-plugged-e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.826 226833 DEBUG oslo_concurrency.lockutils [req-773f742d-246f-40fb-a323-1618b6066bc6 req-2545177f-a17b-417f-a549-7a64232e06f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.827 226833 DEBUG oslo_concurrency.lockutils [req-773f742d-246f-40fb-a323-1618b6066bc6 req-2545177f-a17b-417f-a549-7a64232e06f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.827 226833 DEBUG oslo_concurrency.lockutils [req-773f742d-246f-40fb-a323-1618b6066bc6 req-2545177f-a17b-417f-a549-7a64232e06f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.827 226833 DEBUG nova.compute.manager [req-773f742d-246f-40fb-a323-1618b6066bc6 req-2545177f-a17b-417f-a549-7a64232e06f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] No waiting events found dispatching network-vif-plugged-e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.827 226833 WARNING nova.compute.manager [req-773f742d-246f-40fb-a323-1618b6066bc6 req-2545177f-a17b-417f-a549-7a64232e06f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received unexpected event network-vif-plugged-e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e for instance with vm_state active and task_state deleting.
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.887 226833 DEBUG nova.compute.manager [req-b8ac9645-ac54-4213-93a8-057f0e373024 req-4c65dfa8-337b-45c2-9472-79d995d04069 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-plugged-24a33ffc-f652-4c4c-84e0-5ba78e56a6ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.887 226833 DEBUG oslo_concurrency.lockutils [req-b8ac9645-ac54-4213-93a8-057f0e373024 req-4c65dfa8-337b-45c2-9472-79d995d04069 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.887 226833 DEBUG oslo_concurrency.lockutils [req-b8ac9645-ac54-4213-93a8-057f0e373024 req-4c65dfa8-337b-45c2-9472-79d995d04069 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.888 226833 DEBUG oslo_concurrency.lockutils [req-b8ac9645-ac54-4213-93a8-057f0e373024 req-4c65dfa8-337b-45c2-9472-79d995d04069 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.888 226833 DEBUG nova.compute.manager [req-b8ac9645-ac54-4213-93a8-057f0e373024 req-4c65dfa8-337b-45c2-9472-79d995d04069 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] No waiting events found dispatching network-vif-plugged-24a33ffc-f652-4c4c-84e0-5ba78e56a6ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:05:46 compute-2 nova_compute[226829]: 2026-01-31 08:05:46.888 226833 WARNING nova.compute.manager [req-b8ac9645-ac54-4213-93a8-057f0e373024 req-4c65dfa8-337b-45c2-9472-79d995d04069 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received unexpected event network-vif-plugged-24a33ffc-f652-4c4c-84e0-5ba78e56a6ca for instance with vm_state active and task_state deleting.
Jan 31 08:05:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:47.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:48.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:48 compute-2 ceph-mon[77282]: pgmap v2065: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 281 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 31 08:05:48 compute-2 nova_compute[226829]: 2026-01-31 08:05:48.959 226833 DEBUG nova.compute.manager [req-e7e0c99e-4234-4c06-aa97-5f901b18af17 req-a6cce2f1-4ef4-449f-b13a-aa1fc44c8411 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-deleted-522d060c-fdf5-4d50-b0b9-94211986b4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:48 compute-2 nova_compute[226829]: 2026-01-31 08:05:48.960 226833 INFO nova.compute.manager [req-e7e0c99e-4234-4c06-aa97-5f901b18af17 req-a6cce2f1-4ef4-449f-b13a-aa1fc44c8411 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Neutron deleted interface 522d060c-fdf5-4d50-b0b9-94211986b4ee; detaching it from the instance and deleting it from the info cache
Jan 31 08:05:48 compute-2 nova_compute[226829]: 2026-01-31 08:05:48.960 226833 DEBUG nova.network.neutron [req-e7e0c99e-4234-4c06-aa97-5f901b18af17 req-a6cce2f1-4ef4-449f-b13a-aa1fc44c8411 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Updating instance_info_cache with network_info: [{"id": "fbf252f3-5bb0-4da8-9564-87a1f052f2dc", "address": "fa:16:3e:57:8d:67", "network": {"id": "79a3d7c9-b5c3-4358-bbeb-1a48b677f9fa", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-638962480-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbf252f3-5b", "ovs_interfaceid": "fbf252f3-5bb0-4da8-9564-87a1f052f2dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "address": "fa:16:3e:52:8d:5f", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfead71ab-29", "ovs_interfaceid": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "address": "fa:16:3e:c2:33:c0", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0f7cff6-3b", "ovs_interfaceid": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "address": "fa:16:3e:88:50:1d", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.122", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ef806e-29", "ovs_interfaceid": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "address": "fa:16:3e:3c:a6:94", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24a33ffc-f6", "ovs_interfaceid": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "01b25ee1-016c-4dac-860c-941a5efc920a", "address": "fa:16:3e:0a:8f:2c", "network": {"id": "2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c", "bridge": "br-int", "label": "tempest-device-tagging-net2-30530444", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b25ee1-01", "ovs_interfaceid": "01b25ee1-016c-4dac-860c-941a5efc920a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:05:49 compute-2 nova_compute[226829]: 2026-01-31 08:05:49.033 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:49.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:05:49 compute-2 nova_compute[226829]: 2026-01-31 08:05:49.286 226833 DEBUG nova.compute.manager [req-e7e0c99e-4234-4c06-aa97-5f901b18af17 req-a6cce2f1-4ef4-449f-b13a-aa1fc44c8411 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Detach interface failed, port_id=522d060c-fdf5-4d50-b0b9-94211986b4ee, reason: Instance b0ff8f26-937f-43e0-b422-8a0fb0226eac could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:05:49 compute-2 nova_compute[226829]: 2026-01-31 08:05:49.706 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:49 compute-2 ceph-mon[77282]: pgmap v2066: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.2 MiB/s wr, 125 op/s
Jan 31 08:05:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:50.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:50 compute-2 nova_compute[226829]: 2026-01-31 08:05:50.355 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:51 compute-2 nova_compute[226829]: 2026-01-31 08:05:51.088 226833 DEBUG nova.compute.manager [req-268d47cb-1b26-4d7c-a1ba-151773f78992 req-deac6803-c39b-4ff5-ab80-e939540a4bb4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-deleted-fbf252f3-5bb0-4da8-9564-87a1f052f2dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:51 compute-2 nova_compute[226829]: 2026-01-31 08:05:51.089 226833 INFO nova.compute.manager [req-268d47cb-1b26-4d7c-a1ba-151773f78992 req-deac6803-c39b-4ff5-ab80-e939540a4bb4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Neutron deleted interface fbf252f3-5bb0-4da8-9564-87a1f052f2dc; detaching it from the instance and deleting it from the info cache
Jan 31 08:05:51 compute-2 nova_compute[226829]: 2026-01-31 08:05:51.089 226833 DEBUG nova.network.neutron [req-268d47cb-1b26-4d7c-a1ba-151773f78992 req-deac6803-c39b-4ff5-ab80-e939540a4bb4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Updating instance_info_cache with network_info: [{"id": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "address": "fa:16:3e:52:8d:5f", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfead71ab-29", "ovs_interfaceid": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "address": "fa:16:3e:c2:33:c0", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0f7cff6-3b", "ovs_interfaceid": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "address": "fa:16:3e:88:50:1d", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.122", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ef806e-29", "ovs_interfaceid": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "address": "fa:16:3e:3c:a6:94", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24a33ffc-f6", "ovs_interfaceid": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "01b25ee1-016c-4dac-860c-941a5efc920a", "address": "fa:16:3e:0a:8f:2c", "network": {"id": "2bf3b855-d9f3-4cd4-a0e4-60cc09ab8f0c", "bridge": "br-int", "label": "tempest-device-tagging-net2-30530444", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b25ee1-01", "ovs_interfaceid": "01b25ee1-016c-4dac-860c-941a5efc920a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:05:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:51.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:51 compute-2 nova_compute[226829]: 2026-01-31 08:05:51.114 226833 DEBUG nova.compute.manager [req-268d47cb-1b26-4d7c-a1ba-151773f78992 req-deac6803-c39b-4ff5-ab80-e939540a4bb4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Detach interface failed, port_id=fbf252f3-5bb0-4da8-9564-87a1f052f2dc, reason: Instance b0ff8f26-937f-43e0-b422-8a0fb0226eac could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:05:51 compute-2 nova_compute[226829]: 2026-01-31 08:05:51.114 226833 DEBUG nova.compute.manager [req-268d47cb-1b26-4d7c-a1ba-151773f78992 req-deac6803-c39b-4ff5-ab80-e939540a4bb4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-deleted-01b25ee1-016c-4dac-860c-941a5efc920a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:51 compute-2 nova_compute[226829]: 2026-01-31 08:05:51.115 226833 INFO nova.compute.manager [req-268d47cb-1b26-4d7c-a1ba-151773f78992 req-deac6803-c39b-4ff5-ab80-e939540a4bb4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Neutron deleted interface 01b25ee1-016c-4dac-860c-941a5efc920a; detaching it from the instance and deleting it from the info cache
Jan 31 08:05:51 compute-2 nova_compute[226829]: 2026-01-31 08:05:51.115 226833 DEBUG nova.network.neutron [req-268d47cb-1b26-4d7c-a1ba-151773f78992 req-deac6803-c39b-4ff5-ab80-e939540a4bb4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Updating instance_info_cache with network_info: [{"id": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "address": "fa:16:3e:52:8d:5f", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.170", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfead71ab-29", "ovs_interfaceid": "fead71ab-29ff-4831-93d2-9a03e4ed64ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "address": "fa:16:3e:c2:33:c0", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0f7cff6-3b", "ovs_interfaceid": "e0f7cff6-3bbd-44cc-b8e4-69966c3dec7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "address": "fa:16:3e:88:50:1d", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.122", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ef806e-29", "ovs_interfaceid": "a3ef806e-29d0-4489-9e8b-fb24ee625783", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "address": "fa:16:3e:3c:a6:94", "network": {"id": "0fc23939-3578-48f7-b98e-07928f016ed0", "bridge": "br-int", "label": "tempest-device-tagging-net1-270991137", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2327b93dd7d648efad6d2b303f9e462e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24a33ffc-f6", "ovs_interfaceid": "24a33ffc-f652-4c4c-84e0-5ba78e56a6ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:05:51 compute-2 nova_compute[226829]: 2026-01-31 08:05:51.144 226833 DEBUG nova.compute.manager [req-268d47cb-1b26-4d7c-a1ba-151773f78992 req-deac6803-c39b-4ff5-ab80-e939540a4bb4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Detach interface failed, port_id=01b25ee1-016c-4dac-860c-941a5efc920a, reason: Instance b0ff8f26-937f-43e0-b422-8a0fb0226eac could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:05:52 compute-2 nova_compute[226829]: 2026-01-31 08:05:52.062 226833 DEBUG nova.network.neutron [-] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:05:52 compute-2 nova_compute[226829]: 2026-01-31 08:05:52.109 226833 INFO nova.compute.manager [-] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Took 7.13 seconds to deallocate network for instance.
Jan 31 08:05:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:52.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:52 compute-2 ceph-mon[77282]: pgmap v2067: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Jan 31 08:05:52 compute-2 nova_compute[226829]: 2026-01-31 08:05:52.995 226833 INFO nova.compute.manager [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Took 0.89 seconds to detach 3 volumes for instance.
Jan 31 08:05:53 compute-2 nova_compute[226829]: 2026-01-31 08:05:53.046 226833 DEBUG oslo_concurrency.lockutils [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:53 compute-2 nova_compute[226829]: 2026-01-31 08:05:53.047 226833 DEBUG oslo_concurrency.lockutils [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:53 compute-2 nova_compute[226829]: 2026-01-31 08:05:53.105 226833 DEBUG oslo_concurrency.processutils [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:05:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:53.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:53 compute-2 nova_compute[226829]: 2026-01-31 08:05:53.184 226833 DEBUG nova.compute.manager [req-9ee7ced8-3d6d-47b3-99d3-80d5fccdeb9e req-908f992b-5c69-4422-9c77-b57c898f3b49 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-deleted-a3ef806e-29d0-4489-9e8b-fb24ee625783 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:53 compute-2 nova_compute[226829]: 2026-01-31 08:05:53.184 226833 DEBUG nova.compute.manager [req-9ee7ced8-3d6d-47b3-99d3-80d5fccdeb9e req-908f992b-5c69-4422-9c77-b57c898f3b49 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Received event network-vif-deleted-24a33ffc-f652-4c4c-84e0-5ba78e56a6ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:05:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3561526686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:53 compute-2 nova_compute[226829]: 2026-01-31 08:05:53.538 226833 DEBUG oslo_concurrency.processutils [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:05:53 compute-2 nova_compute[226829]: 2026-01-31 08:05:53.543 226833 DEBUG nova.compute.provider_tree [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:05:53 compute-2 nova_compute[226829]: 2026-01-31 08:05:53.739 226833 DEBUG nova.scheduler.client.report [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:05:53 compute-2 nova_compute[226829]: 2026-01-31 08:05:53.823 226833 DEBUG oslo_concurrency.lockutils [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3561526686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:53 compute-2 nova_compute[226829]: 2026-01-31 08:05:53.872 226833 INFO nova.scheduler.client.report [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Deleted allocations for instance b0ff8f26-937f-43e0-b422-8a0fb0226eac
Jan 31 08:05:53 compute-2 nova_compute[226829]: 2026-01-31 08:05:53.954 226833 DEBUG oslo_concurrency.lockutils [None req-a25099ab-f5df-4079-afda-257ef9c805ef 8ff6e8783c3f4132b787cb0653fff9b0 2327b93dd7d648efad6d2b303f9e462e - - default default] Lock "b0ff8f26-937f-43e0-b422-8a0fb0226eac" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:54 compute-2 nova_compute[226829]: 2026-01-31 08:05:54.093 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:54.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:05:54 compute-2 nova_compute[226829]: 2026-01-31 08:05:54.758 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:55 compute-2 ceph-mon[77282]: pgmap v2068: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 133 op/s
Jan 31 08:05:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2722636730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:05:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:55.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:05:55 compute-2 nova_compute[226829]: 2026-01-31 08:05:55.355 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:56.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:56 compute-2 ceph-mon[77282]: pgmap v2069: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 727 KiB/s wr, 120 op/s
Jan 31 08:05:56 compute-2 podman[272819]: 2026-01-31 08:05:56.152827927 +0000 UTC m=+0.042864187 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 08:05:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:57.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:58.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:59 compute-2 ceph-mon[77282]: pgmap v2070: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 67 KiB/s wr, 95 op/s
Jan 31 08:05:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:05:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:05:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:59.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:05:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:05:59 compute-2 nova_compute[226829]: 2026-01-31 08:05:59.556 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846744.553826, b0ff8f26-937f-43e0-b422-8a0fb0226eac => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:05:59 compute-2 nova_compute[226829]: 2026-01-31 08:05:59.557 226833 INFO nova.compute.manager [-] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] VM Stopped (Lifecycle Event)
Jan 31 08:05:59 compute-2 nova_compute[226829]: 2026-01-31 08:05:59.593 226833 DEBUG nova.compute.manager [None req-da3ceafd-3784-4902-8644-d98fe5df3141 - - - - - -] [instance: b0ff8f26-937f-43e0-b422-8a0fb0226eac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:05:59 compute-2 nova_compute[226829]: 2026-01-31 08:05:59.762 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:59 compute-2 ceph-mon[77282]: pgmap v2071: 305 pgs: 305 active+clean; 411 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 619 KiB/s wr, 91 op/s
Jan 31 08:06:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:00.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:00 compute-2 nova_compute[226829]: 2026-01-31 08:06:00.357 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:01.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:01 compute-2 nova_compute[226829]: 2026-01-31 08:06:01.410 226833 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquiring lock "960d50f3-beb3-4e6b-9fd6-04178a47e6a0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:01 compute-2 nova_compute[226829]: 2026-01-31 08:06:01.411 226833 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "960d50f3-beb3-4e6b-9fd6-04178a47e6a0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:01 compute-2 nova_compute[226829]: 2026-01-31 08:06:01.483 226833 DEBUG nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:06:01 compute-2 nova_compute[226829]: 2026-01-31 08:06:01.626 226833 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:01 compute-2 nova_compute[226829]: 2026-01-31 08:06:01.627 226833 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:01 compute-2 nova_compute[226829]: 2026-01-31 08:06:01.632 226833 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:06:01 compute-2 nova_compute[226829]: 2026-01-31 08:06:01.632 226833 INFO nova.compute.claims [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:06:01 compute-2 sudo[272842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:06:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:01.915 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:06:01 compute-2 nova_compute[226829]: 2026-01-31 08:06:01.917 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:01.917 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:06:01 compute-2 sudo[272842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:01 compute-2 sudo[272842]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:01 compute-2 sudo[272867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:06:01 compute-2 sudo[272867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:01 compute-2 sudo[272867]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:02 compute-2 nova_compute[226829]: 2026-01-31 08:06:02.021 226833 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:02.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:06:02 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/963996101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:02 compute-2 nova_compute[226829]: 2026-01-31 08:06:02.430 226833 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:02 compute-2 nova_compute[226829]: 2026-01-31 08:06:02.434 226833 DEBUG nova.compute.provider_tree [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:06:02 compute-2 nova_compute[226829]: 2026-01-31 08:06:02.481 226833 DEBUG nova.scheduler.client.report [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:06:02 compute-2 ceph-mon[77282]: pgmap v2072: 305 pgs: 305 active+clean; 422 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 905 KiB/s rd, 1.5 MiB/s wr, 52 op/s
Jan 31 08:06:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/963996101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/411586841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1697033539' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:02 compute-2 nova_compute[226829]: 2026-01-31 08:06:02.674 226833 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:02 compute-2 nova_compute[226829]: 2026-01-31 08:06:02.675 226833 DEBUG nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:06:02 compute-2 nova_compute[226829]: 2026-01-31 08:06:02.882 226833 DEBUG nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:06:02 compute-2 nova_compute[226829]: 2026-01-31 08:06:02.883 226833 DEBUG nova.network.neutron [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:06:03 compute-2 nova_compute[226829]: 2026-01-31 08:06:03.018 226833 INFO nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:06:03 compute-2 nova_compute[226829]: 2026-01-31 08:06:03.083 226833 DEBUG nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:06:03 compute-2 ovn_controller[133834]: 2026-01-31T08:06:03Z|00428|binding|INFO|Releasing lport ab077a7e-cc79-4948-8987-2cd87d88deff from this chassis (sb_readonly=0)
Jan 31 08:06:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:03.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:03 compute-2 nova_compute[226829]: 2026-01-31 08:06:03.130 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:03 compute-2 nova_compute[226829]: 2026-01-31 08:06:03.328 226833 DEBUG nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:06:03 compute-2 nova_compute[226829]: 2026-01-31 08:06:03.330 226833 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:06:03 compute-2 nova_compute[226829]: 2026-01-31 08:06:03.330 226833 INFO nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Creating image(s)
Jan 31 08:06:03 compute-2 nova_compute[226829]: 2026-01-31 08:06:03.355 226833 DEBUG nova.storage.rbd_utils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] rbd image 960d50f3-beb3-4e6b-9fd6-04178a47e6a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:06:03 compute-2 nova_compute[226829]: 2026-01-31 08:06:03.385 226833 DEBUG nova.storage.rbd_utils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] rbd image 960d50f3-beb3-4e6b-9fd6-04178a47e6a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:06:03 compute-2 nova_compute[226829]: 2026-01-31 08:06:03.413 226833 DEBUG nova.storage.rbd_utils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] rbd image 960d50f3-beb3-4e6b-9fd6-04178a47e6a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:06:03 compute-2 nova_compute[226829]: 2026-01-31 08:06:03.417 226833 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:03 compute-2 nova_compute[226829]: 2026-01-31 08:06:03.472 226833 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:03 compute-2 nova_compute[226829]: 2026-01-31 08:06:03.473 226833 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:03 compute-2 nova_compute[226829]: 2026-01-31 08:06:03.473 226833 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:03 compute-2 nova_compute[226829]: 2026-01-31 08:06:03.474 226833 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:03 compute-2 nova_compute[226829]: 2026-01-31 08:06:03.496 226833 DEBUG nova.storage.rbd_utils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] rbd image 960d50f3-beb3-4e6b-9fd6-04178a47e6a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:06:03 compute-2 nova_compute[226829]: 2026-01-31 08:06:03.500 226833 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 960d50f3-beb3-4e6b-9fd6-04178a47e6a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:03 compute-2 nova_compute[226829]: 2026-01-31 08:06:03.628 226833 DEBUG nova.policy [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '972b6e928f014e5394261f9c8655f1de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '43b462f5b43d48b4a33a13b069618e4c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:06:03 compute-2 nova_compute[226829]: 2026-01-31 08:06:03.852 226833 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 960d50f3-beb3-4e6b-9fd6-04178a47e6a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.352s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:03 compute-2 nova_compute[226829]: 2026-01-31 08:06:03.935 226833 DEBUG nova.storage.rbd_utils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] resizing rbd image 960d50f3-beb3-4e6b-9fd6-04178a47e6a0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:06:04 compute-2 nova_compute[226829]: 2026-01-31 08:06:04.091 226833 DEBUG nova.objects.instance [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lazy-loading 'migration_context' on Instance uuid 960d50f3-beb3-4e6b-9fd6-04178a47e6a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:06:04 compute-2 nova_compute[226829]: 2026-01-31 08:06:04.132 226833 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:06:04 compute-2 nova_compute[226829]: 2026-01-31 08:06:04.133 226833 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Ensure instance console log exists: /var/lib/nova/instances/960d50f3-beb3-4e6b-9fd6-04178a47e6a0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:06:04 compute-2 nova_compute[226829]: 2026-01-31 08:06:04.133 226833 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:04 compute-2 nova_compute[226829]: 2026-01-31 08:06:04.134 226833 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:04 compute-2 nova_compute[226829]: 2026-01-31 08:06:04.134 226833 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:04.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:06:04 compute-2 ceph-mon[77282]: pgmap v2073: 305 pgs: 305 active+clean; 432 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 530 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Jan 31 08:06:04 compute-2 nova_compute[226829]: 2026-01-31 08:06:04.807 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:06:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:05.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:06:05 compute-2 nova_compute[226829]: 2026-01-31 08:06:05.360 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:05 compute-2 nova_compute[226829]: 2026-01-31 08:06:05.718 226833 DEBUG nova.network.neutron [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Successfully created port: 13d49931-062a-47ff-9767-085de35254fe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:06:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:06.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:06:06 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1254059917' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:06:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:06:06 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1254059917' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:06:06 compute-2 ceph-mon[77282]: pgmap v2074: 305 pgs: 305 active+clean; 489 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 4.4 MiB/s wr, 96 op/s
Jan 31 08:06:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1254059917' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:06:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1254059917' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:06:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:06.877 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:06.878 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:06.879 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:07.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:07 compute-2 nova_compute[226829]: 2026-01-31 08:06:07.957 226833 DEBUG nova.compute.manager [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Jan 31 08:06:08 compute-2 ceph-mon[77282]: pgmap v2075: 305 pgs: 305 active+clean; 527 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 420 KiB/s rd, 5.7 MiB/s wr, 116 op/s
Jan 31 08:06:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:06:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:08.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:06:08 compute-2 nova_compute[226829]: 2026-01-31 08:06:08.284 226833 DEBUG oslo_concurrency.lockutils [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:08 compute-2 nova_compute[226829]: 2026-01-31 08:06:08.285 226833 DEBUG oslo_concurrency.lockutils [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:06:08 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1421020183' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:06:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:06:08 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1421020183' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:06:08 compute-2 nova_compute[226829]: 2026-01-31 08:06:08.475 226833 DEBUG nova.objects.instance [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'pci_requests' on Instance uuid 66218154-3695-4970-8525-7eaae98f9f14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:06:08 compute-2 nova_compute[226829]: 2026-01-31 08:06:08.494 226833 DEBUG nova.virt.hardware [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:06:08 compute-2 nova_compute[226829]: 2026-01-31 08:06:08.494 226833 INFO nova.compute.claims [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:06:08 compute-2 nova_compute[226829]: 2026-01-31 08:06:08.495 226833 DEBUG nova.objects.instance [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'resources' on Instance uuid 66218154-3695-4970-8525-7eaae98f9f14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:06:08 compute-2 nova_compute[226829]: 2026-01-31 08:06:08.497 226833 DEBUG nova.network.neutron [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Successfully updated port: 13d49931-062a-47ff-9767-085de35254fe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:06:08 compute-2 nova_compute[226829]: 2026-01-31 08:06:08.540 226833 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquiring lock "refresh_cache-960d50f3-beb3-4e6b-9fd6-04178a47e6a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:06:08 compute-2 nova_compute[226829]: 2026-01-31 08:06:08.541 226833 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquired lock "refresh_cache-960d50f3-beb3-4e6b-9fd6-04178a47e6a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:06:08 compute-2 nova_compute[226829]: 2026-01-31 08:06:08.541 226833 DEBUG nova.network.neutron [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:06:08 compute-2 nova_compute[226829]: 2026-01-31 08:06:08.543 226833 DEBUG nova.objects.instance [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'pci_devices' on Instance uuid 66218154-3695-4970-8525-7eaae98f9f14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:06:08 compute-2 nova_compute[226829]: 2026-01-31 08:06:08.651 226833 DEBUG nova.compute.manager [req-37f67a57-96a6-49e2-b11b-62ffa122cc5a req-3505622b-bbe6-4c53-b086-a9bca6e54187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Received event network-changed-13d49931-062a-47ff-9767-085de35254fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:06:08 compute-2 nova_compute[226829]: 2026-01-31 08:06:08.652 226833 DEBUG nova.compute.manager [req-37f67a57-96a6-49e2-b11b-62ffa122cc5a req-3505622b-bbe6-4c53-b086-a9bca6e54187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Refreshing instance network info cache due to event network-changed-13d49931-062a-47ff-9767-085de35254fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:06:08 compute-2 nova_compute[226829]: 2026-01-31 08:06:08.652 226833 DEBUG oslo_concurrency.lockutils [req-37f67a57-96a6-49e2-b11b-62ffa122cc5a req-3505622b-bbe6-4c53-b086-a9bca6e54187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-960d50f3-beb3-4e6b-9fd6-04178a47e6a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:06:08 compute-2 nova_compute[226829]: 2026-01-31 08:06:08.655 226833 INFO nova.compute.resource_tracker [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Updating resource usage from migration f225e12d-735f-43e1-853e-0277afec1d20
Jan 31 08:06:08 compute-2 nova_compute[226829]: 2026-01-31 08:06:08.812 226833 DEBUG oslo_concurrency.processutils [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:08 compute-2 nova_compute[226829]: 2026-01-31 08:06:08.917 226833 DEBUG nova.network.neutron [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:06:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1421020183' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:06:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1421020183' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:06:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4236691885' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2537159548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:09.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:06:09 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1553681548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:09 compute-2 nova_compute[226829]: 2026-01-31 08:06:09.246 226833 DEBUG oslo_concurrency.processutils [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:09 compute-2 nova_compute[226829]: 2026-01-31 08:06:09.251 226833 DEBUG nova.compute.provider_tree [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:06:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:06:09 compute-2 nova_compute[226829]: 2026-01-31 08:06:09.272 226833 DEBUG nova.scheduler.client.report [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:06:09 compute-2 nova_compute[226829]: 2026-01-31 08:06:09.305 226833 DEBUG oslo_concurrency.lockutils [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:09 compute-2 nova_compute[226829]: 2026-01-31 08:06:09.305 226833 INFO nova.compute.manager [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Migrating
Jan 31 08:06:09 compute-2 nova_compute[226829]: 2026-01-31 08:06:09.355 226833 DEBUG oslo_concurrency.lockutils [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "refresh_cache-66218154-3695-4970-8525-7eaae98f9f14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:06:09 compute-2 nova_compute[226829]: 2026-01-31 08:06:09.356 226833 DEBUG oslo_concurrency.lockutils [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquired lock "refresh_cache-66218154-3695-4970-8525-7eaae98f9f14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:06:09 compute-2 nova_compute[226829]: 2026-01-31 08:06:09.356 226833 DEBUG nova.network.neutron [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:06:09 compute-2 nova_compute[226829]: 2026-01-31 08:06:09.811 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1553681548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:10 compute-2 ceph-mon[77282]: pgmap v2076: 305 pgs: 305 active+clean; 577 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 464 KiB/s rd, 7.5 MiB/s wr, 178 op/s
Jan 31 08:06:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3616744025' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:06:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3616744025' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:06:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:10.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:10 compute-2 nova_compute[226829]: 2026-01-31 08:06:10.362 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:06:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:11.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:06:11 compute-2 nova_compute[226829]: 2026-01-31 08:06:11.368 226833 DEBUG nova.network.neutron [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Updating instance_info_cache with network_info: [{"id": "13d49931-062a-47ff-9767-085de35254fe", "address": "fa:16:3e:98:58:46", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d49931-06", "ovs_interfaceid": "13d49931-062a-47ff-9767-085de35254fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:06:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:11.920 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:12.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:12 compute-2 ceph-mon[77282]: pgmap v2077: 305 pgs: 305 active+clean; 563 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 441 KiB/s rd, 6.9 MiB/s wr, 177 op/s
Jan 31 08:06:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/219294127' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:13.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:13 compute-2 nova_compute[226829]: 2026-01-31 08:06:13.646 226833 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Releasing lock "refresh_cache-960d50f3-beb3-4e6b-9fd6-04178a47e6a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:06:13 compute-2 nova_compute[226829]: 2026-01-31 08:06:13.646 226833 DEBUG nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Instance network_info: |[{"id": "13d49931-062a-47ff-9767-085de35254fe", "address": "fa:16:3e:98:58:46", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d49931-06", "ovs_interfaceid": "13d49931-062a-47ff-9767-085de35254fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:06:13 compute-2 nova_compute[226829]: 2026-01-31 08:06:13.646 226833 DEBUG oslo_concurrency.lockutils [req-37f67a57-96a6-49e2-b11b-62ffa122cc5a req-3505622b-bbe6-4c53-b086-a9bca6e54187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-960d50f3-beb3-4e6b-9fd6-04178a47e6a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:06:13 compute-2 nova_compute[226829]: 2026-01-31 08:06:13.647 226833 DEBUG nova.network.neutron [req-37f67a57-96a6-49e2-b11b-62ffa122cc5a req-3505622b-bbe6-4c53-b086-a9bca6e54187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Refreshing network info cache for port 13d49931-062a-47ff-9767-085de35254fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:06:13 compute-2 nova_compute[226829]: 2026-01-31 08:06:13.649 226833 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Start _get_guest_xml network_info=[{"id": "13d49931-062a-47ff-9767-085de35254fe", "address": "fa:16:3e:98:58:46", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d49931-06", "ovs_interfaceid": "13d49931-062a-47ff-9767-085de35254fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:06:13 compute-2 nova_compute[226829]: 2026-01-31 08:06:13.654 226833 WARNING nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:06:13 compute-2 nova_compute[226829]: 2026-01-31 08:06:13.659 226833 DEBUG nova.virt.libvirt.host [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:06:13 compute-2 nova_compute[226829]: 2026-01-31 08:06:13.660 226833 DEBUG nova.virt.libvirt.host [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:06:13 compute-2 nova_compute[226829]: 2026-01-31 08:06:13.666 226833 DEBUG nova.virt.libvirt.host [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:06:13 compute-2 nova_compute[226829]: 2026-01-31 08:06:13.667 226833 DEBUG nova.virt.libvirt.host [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:06:13 compute-2 nova_compute[226829]: 2026-01-31 08:06:13.668 226833 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:06:13 compute-2 nova_compute[226829]: 2026-01-31 08:06:13.669 226833 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:06:13 compute-2 nova_compute[226829]: 2026-01-31 08:06:13.669 226833 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:06:13 compute-2 nova_compute[226829]: 2026-01-31 08:06:13.669 226833 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:06:13 compute-2 nova_compute[226829]: 2026-01-31 08:06:13.670 226833 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:06:13 compute-2 nova_compute[226829]: 2026-01-31 08:06:13.670 226833 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:06:13 compute-2 nova_compute[226829]: 2026-01-31 08:06:13.670 226833 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:06:13 compute-2 nova_compute[226829]: 2026-01-31 08:06:13.670 226833 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:06:13 compute-2 nova_compute[226829]: 2026-01-31 08:06:13.671 226833 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:06:13 compute-2 nova_compute[226829]: 2026-01-31 08:06:13.671 226833 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:06:13 compute-2 nova_compute[226829]: 2026-01-31 08:06:13.671 226833 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:06:13 compute-2 nova_compute[226829]: 2026-01-31 08:06:13.671 226833 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:06:13 compute-2 nova_compute[226829]: 2026-01-31 08:06:13.675 226833 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2044255212' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:13 compute-2 ceph-mon[77282]: pgmap v2078: 305 pgs: 305 active+clean; 537 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 288 KiB/s rd, 6.0 MiB/s wr, 170 op/s
Jan 31 08:06:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:06:14 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/551986456' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:14 compute-2 nova_compute[226829]: 2026-01-31 08:06:14.116 226833 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:14 compute-2 nova_compute[226829]: 2026-01-31 08:06:14.149 226833 DEBUG nova.storage.rbd_utils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] rbd image 960d50f3-beb3-4e6b-9fd6-04178a47e6a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:06:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:14.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:14 compute-2 nova_compute[226829]: 2026-01-31 08:06:14.155 226833 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:14 compute-2 podman[273129]: 2026-01-31 08:06:14.185744249 +0000 UTC m=+0.066658243 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:06:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:06:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:06:14 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/594456422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:14 compute-2 nova_compute[226829]: 2026-01-31 08:06:14.616 226833 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:14 compute-2 nova_compute[226829]: 2026-01-31 08:06:14.619 226833 DEBUG nova.virt.libvirt.vif [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:05:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-19898720',display_name='tempest-ListServersNegativeTestJSON-server-19898720-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-19898720-1',id=104,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='43b462f5b43d48b4a33a13b069618e4c',ramdisk_id='',reservation_id='r-c7zkmfmo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1511652820',owner_user_name='tempest-ListServersNegativeTestJSON-1511652820-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:06:03Z,user_data=None,user_id='972b6e928f014e5394261f9c8655f1de',uuid=960d50f3-beb3-4e6b-9fd6-04178a47e6a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "13d49931-062a-47ff-9767-085de35254fe", "address": "fa:16:3e:98:58:46", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d49931-06", "ovs_interfaceid": "13d49931-062a-47ff-9767-085de35254fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:06:14 compute-2 nova_compute[226829]: 2026-01-31 08:06:14.619 226833 DEBUG nova.network.os_vif_util [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Converting VIF {"id": "13d49931-062a-47ff-9767-085de35254fe", "address": "fa:16:3e:98:58:46", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d49931-06", "ovs_interfaceid": "13d49931-062a-47ff-9767-085de35254fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:06:14 compute-2 nova_compute[226829]: 2026-01-31 08:06:14.620 226833 DEBUG nova.network.os_vif_util [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:58:46,bridge_name='br-int',has_traffic_filtering=True,id=13d49931-062a-47ff-9767-085de35254fe,network=Network(f3091b0d-0fc9-4172-b2af-6d9c678c6569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13d49931-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:06:14 compute-2 nova_compute[226829]: 2026-01-31 08:06:14.622 226833 DEBUG nova.objects.instance [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lazy-loading 'pci_devices' on Instance uuid 960d50f3-beb3-4e6b-9fd6-04178a47e6a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:06:14 compute-2 nova_compute[226829]: 2026-01-31 08:06:14.814 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/551986456' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/594456422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:15.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:15 compute-2 nova_compute[226829]: 2026-01-31 08:06:15.364 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:15 compute-2 nova_compute[226829]: 2026-01-31 08:06:15.758 226833 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:06:15 compute-2 nova_compute[226829]:   <uuid>960d50f3-beb3-4e6b-9fd6-04178a47e6a0</uuid>
Jan 31 08:06:15 compute-2 nova_compute[226829]:   <name>instance-00000068</name>
Jan 31 08:06:15 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:06:15 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:06:15 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <nova:name>tempest-ListServersNegativeTestJSON-server-19898720-1</nova:name>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:06:13</nova:creationTime>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:06:15 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:06:15 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:06:15 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:06:15 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:06:15 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:06:15 compute-2 nova_compute[226829]:         <nova:user uuid="972b6e928f014e5394261f9c8655f1de">tempest-ListServersNegativeTestJSON-1511652820-project-member</nova:user>
Jan 31 08:06:15 compute-2 nova_compute[226829]:         <nova:project uuid="43b462f5b43d48b4a33a13b069618e4c">tempest-ListServersNegativeTestJSON-1511652820</nova:project>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:06:15 compute-2 nova_compute[226829]:         <nova:port uuid="13d49931-062a-47ff-9767-085de35254fe">
Jan 31 08:06:15 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:06:15 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:06:15 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <system>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <entry name="serial">960d50f3-beb3-4e6b-9fd6-04178a47e6a0</entry>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <entry name="uuid">960d50f3-beb3-4e6b-9fd6-04178a47e6a0</entry>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     </system>
Jan 31 08:06:15 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:06:15 compute-2 nova_compute[226829]:   <os>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:   </os>
Jan 31 08:06:15 compute-2 nova_compute[226829]:   <features>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:   </features>
Jan 31 08:06:15 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:06:15 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:06:15 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/960d50f3-beb3-4e6b-9fd6-04178a47e6a0_disk">
Jan 31 08:06:15 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       </source>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:06:15 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/960d50f3-beb3-4e6b-9fd6-04178a47e6a0_disk.config">
Jan 31 08:06:15 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       </source>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:06:15 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:98:58:46"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <target dev="tap13d49931-06"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/960d50f3-beb3-4e6b-9fd6-04178a47e6a0/console.log" append="off"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <video>
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     </video>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:06:15 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:06:15 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:06:15 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:06:15 compute-2 nova_compute[226829]: </domain>
Jan 31 08:06:15 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:06:15 compute-2 nova_compute[226829]: 2026-01-31 08:06:15.760 226833 DEBUG nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Preparing to wait for external event network-vif-plugged-13d49931-062a-47ff-9767-085de35254fe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:06:15 compute-2 nova_compute[226829]: 2026-01-31 08:06:15.761 226833 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquiring lock "960d50f3-beb3-4e6b-9fd6-04178a47e6a0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:15 compute-2 nova_compute[226829]: 2026-01-31 08:06:15.761 226833 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "960d50f3-beb3-4e6b-9fd6-04178a47e6a0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:15 compute-2 nova_compute[226829]: 2026-01-31 08:06:15.761 226833 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "960d50f3-beb3-4e6b-9fd6-04178a47e6a0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:15 compute-2 nova_compute[226829]: 2026-01-31 08:06:15.762 226833 DEBUG nova.virt.libvirt.vif [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:05:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-19898720',display_name='tempest-ListServersNegativeTestJSON-server-19898720-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-19898720-1',id=104,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='43b462f5b43d48b4a33a13b069618e4c',ramdisk_id='',reservation_id='r-c7zkmfmo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1511652820',owner_user_name='tempest-ListServersNegativeTestJSON-1511652820-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:06:03Z,user_data=None,user_id='972b6e928f014e5394261f9c8655f1de',uuid=960d50f3-beb3-4e6b-9fd6-04178a47e6a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "13d49931-062a-47ff-9767-085de35254fe", "address": "fa:16:3e:98:58:46", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d49931-06", "ovs_interfaceid": "13d49931-062a-47ff-9767-085de35254fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:06:15 compute-2 nova_compute[226829]: 2026-01-31 08:06:15.762 226833 DEBUG nova.network.os_vif_util [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Converting VIF {"id": "13d49931-062a-47ff-9767-085de35254fe", "address": "fa:16:3e:98:58:46", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d49931-06", "ovs_interfaceid": "13d49931-062a-47ff-9767-085de35254fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:06:15 compute-2 nova_compute[226829]: 2026-01-31 08:06:15.762 226833 DEBUG nova.network.os_vif_util [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:58:46,bridge_name='br-int',has_traffic_filtering=True,id=13d49931-062a-47ff-9767-085de35254fe,network=Network(f3091b0d-0fc9-4172-b2af-6d9c678c6569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13d49931-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:06:15 compute-2 nova_compute[226829]: 2026-01-31 08:06:15.763 226833 DEBUG os_vif [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:58:46,bridge_name='br-int',has_traffic_filtering=True,id=13d49931-062a-47ff-9767-085de35254fe,network=Network(f3091b0d-0fc9-4172-b2af-6d9c678c6569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13d49931-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:06:15 compute-2 nova_compute[226829]: 2026-01-31 08:06:15.763 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:15 compute-2 nova_compute[226829]: 2026-01-31 08:06:15.764 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:15 compute-2 nova_compute[226829]: 2026-01-31 08:06:15.764 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:06:15 compute-2 nova_compute[226829]: 2026-01-31 08:06:15.772 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:15 compute-2 nova_compute[226829]: 2026-01-31 08:06:15.772 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13d49931-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:15 compute-2 nova_compute[226829]: 2026-01-31 08:06:15.773 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap13d49931-06, col_values=(('external_ids', {'iface-id': '13d49931-062a-47ff-9767-085de35254fe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:98:58:46', 'vm-uuid': '960d50f3-beb3-4e6b-9fd6-04178a47e6a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:15 compute-2 nova_compute[226829]: 2026-01-31 08:06:15.774 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:15 compute-2 NetworkManager[48999]: <info>  [1769846775.7767] manager: (tap13d49931-06): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/211)
Jan 31 08:06:15 compute-2 nova_compute[226829]: 2026-01-31 08:06:15.777 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:06:15 compute-2 nova_compute[226829]: 2026-01-31 08:06:15.785 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:15 compute-2 nova_compute[226829]: 2026-01-31 08:06:15.786 226833 INFO os_vif [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:58:46,bridge_name='br-int',has_traffic_filtering=True,id=13d49931-062a-47ff-9767-085de35254fe,network=Network(f3091b0d-0fc9-4172-b2af-6d9c678c6569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13d49931-06')
Jan 31 08:06:15 compute-2 nova_compute[226829]: 2026-01-31 08:06:15.870 226833 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:06:15 compute-2 nova_compute[226829]: 2026-01-31 08:06:15.871 226833 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:06:15 compute-2 nova_compute[226829]: 2026-01-31 08:06:15.871 226833 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] No VIF found with MAC fa:16:3e:98:58:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:06:15 compute-2 nova_compute[226829]: 2026-01-31 08:06:15.871 226833 INFO nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Using config drive
Jan 31 08:06:15 compute-2 nova_compute[226829]: 2026-01-31 08:06:15.895 226833 DEBUG nova.storage.rbd_utils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] rbd image 960d50f3-beb3-4e6b-9fd6-04178a47e6a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:06:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:16.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:16 compute-2 ceph-mon[77282]: pgmap v2079: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 288 KiB/s rd, 5.5 MiB/s wr, 166 op/s
Jan 31 08:06:16 compute-2 nova_compute[226829]: 2026-01-31 08:06:16.367 226833 INFO nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Creating config drive at /var/lib/nova/instances/960d50f3-beb3-4e6b-9fd6-04178a47e6a0/disk.config
Jan 31 08:06:16 compute-2 nova_compute[226829]: 2026-01-31 08:06:16.371 226833 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/960d50f3-beb3-4e6b-9fd6-04178a47e6a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpucyfzxmd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:16 compute-2 nova_compute[226829]: 2026-01-31 08:06:16.502 226833 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/960d50f3-beb3-4e6b-9fd6-04178a47e6a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpucyfzxmd" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:16 compute-2 nova_compute[226829]: 2026-01-31 08:06:16.526 226833 DEBUG nova.storage.rbd_utils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] rbd image 960d50f3-beb3-4e6b-9fd6-04178a47e6a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:06:16 compute-2 nova_compute[226829]: 2026-01-31 08:06:16.529 226833 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/960d50f3-beb3-4e6b-9fd6-04178a47e6a0/disk.config 960d50f3-beb3-4e6b-9fd6-04178a47e6a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:16 compute-2 nova_compute[226829]: 2026-01-31 08:06:16.606 226833 DEBUG nova.network.neutron [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Updating instance_info_cache with network_info: [{"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:06:16 compute-2 nova_compute[226829]: 2026-01-31 08:06:16.630 226833 DEBUG oslo_concurrency.lockutils [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Releasing lock "refresh_cache-66218154-3695-4970-8525-7eaae98f9f14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:06:16 compute-2 nova_compute[226829]: 2026-01-31 08:06:16.737 226833 DEBUG nova.virt.libvirt.driver [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Jan 31 08:06:16 compute-2 nova_compute[226829]: 2026-01-31 08:06:16.745 226833 DEBUG nova.virt.libvirt.driver [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 08:06:16 compute-2 nova_compute[226829]: 2026-01-31 08:06:16.960 226833 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/960d50f3-beb3-4e6b-9fd6-04178a47e6a0/disk.config 960d50f3-beb3-4e6b-9fd6-04178a47e6a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:16 compute-2 nova_compute[226829]: 2026-01-31 08:06:16.960 226833 INFO nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Deleting local config drive /var/lib/nova/instances/960d50f3-beb3-4e6b-9fd6-04178a47e6a0/disk.config because it was imported into RBD.
Jan 31 08:06:17 compute-2 kernel: tap13d49931-06: entered promiscuous mode
Jan 31 08:06:16 compute-2 NetworkManager[48999]: <info>  [1769846776.9997] manager: (tap13d49931-06): new Tun device (/org/freedesktop/NetworkManager/Devices/212)
Jan 31 08:06:17 compute-2 ovn_controller[133834]: 2026-01-31T08:06:17Z|00429|binding|INFO|Claiming lport 13d49931-062a-47ff-9767-085de35254fe for this chassis.
Jan 31 08:06:17 compute-2 ovn_controller[133834]: 2026-01-31T08:06:17Z|00430|binding|INFO|13d49931-062a-47ff-9767-085de35254fe: Claiming fa:16:3e:98:58:46 10.100.0.14
Jan 31 08:06:17 compute-2 nova_compute[226829]: 2026-01-31 08:06:17.002 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:17 compute-2 ovn_controller[133834]: 2026-01-31T08:06:17Z|00431|binding|INFO|Setting lport 13d49931-062a-47ff-9767-085de35254fe ovn-installed in OVS
Jan 31 08:06:17 compute-2 nova_compute[226829]: 2026-01-31 08:06:17.013 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:17 compute-2 nova_compute[226829]: 2026-01-31 08:06:17.015 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:17 compute-2 systemd-udevd[273273]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:06:17 compute-2 systemd-machined[195142]: New machine qemu-46-instance-00000068.
Jan 31 08:06:17 compute-2 NetworkManager[48999]: <info>  [1769846777.0331] device (tap13d49931-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:06:17 compute-2 NetworkManager[48999]: <info>  [1769846777.0338] device (tap13d49931-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:06:17 compute-2 systemd[1]: Started Virtual Machine qemu-46-instance-00000068.
Jan 31 08:06:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:17.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:17 compute-2 ovn_controller[133834]: 2026-01-31T08:06:17Z|00432|binding|INFO|Setting lport 13d49931-062a-47ff-9767-085de35254fe up in Southbound
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:17.437 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:58:46 10.100.0.14'], port_security=['fa:16:3e:98:58:46 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '960d50f3-beb3-4e6b-9fd6-04178a47e6a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3091b0d-0fc9-4172-b2af-6d9c678c6569', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '43b462f5b43d48b4a33a13b069618e4c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '001ae016-61eb-444d-a215-8f70012d923a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aaf32b66-0fe8-4826-8186-77a88483534c, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=13d49931-062a-47ff-9767-085de35254fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:17.438 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 13d49931-062a-47ff-9767-085de35254fe in datapath f3091b0d-0fc9-4172-b2af-6d9c678c6569 bound to our chassis
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:17.440 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3091b0d-0fc9-4172-b2af-6d9c678c6569
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:17.450 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[537fbaed-fe48-46a2-985a-6ddad7de3652]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:17.451 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3091b0d-01 in ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:17.454 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3091b0d-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:17.454 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c0e4cebb-4ab1-47da-b035-fdd5407bac37]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:17.455 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[020f820d-d4c3-4e2c-b73f-9c9310bd476c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:17.475 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[f89d5156-190f-494c-8c77-b7d3b8c51498]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:17.485 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8d77e89f-8c08-438a-8069-5f2c66c9db07]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:17.515 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[7cdd6ec9-7312-4499-8212-16400a25b393]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:17.520 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d0cf98d2-a974-4560-8048-8177545b6cdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:17 compute-2 NetworkManager[48999]: <info>  [1769846777.5221] manager: (tapf3091b0d-00): new Veth device (/org/freedesktop/NetworkManager/Devices/213)
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:17.544 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[7a7690ca-3536-48a2-b257-66290d859583]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:17.548 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[ca24824d-ffb3-4f8d-ae11-ce5dafe0c64f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:17 compute-2 nova_compute[226829]: 2026-01-31 08:06:17.553 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:06:17 compute-2 nova_compute[226829]: 2026-01-31 08:06:17.553 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:06:17 compute-2 NetworkManager[48999]: <info>  [1769846777.5667] device (tapf3091b0d-00): carrier: link connected
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:17.570 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[66984ed9-8853-40fb-8197-505b82998b2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:17.581 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a06a8752-6fa9-4ed2-8e6f-f2071dc771f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3091b0d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a9:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703004, 'reachable_time': 44102, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273348, 'error': None, 'target': 'ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:17.592 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[90f97489-a47b-40df-a433-981da7c63343]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4b:a99e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703004, 'tstamp': 703004}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273349, 'error': None, 'target': 'ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:17.603 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e1af45be-5262-421b-b85b-f30c8fe1e9f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3091b0d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a9:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703004, 'reachable_time': 44102, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273351, 'error': None, 'target': 'ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:17.627 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5eab4e8f-835e-46b8-bb46-8c00edd19d2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:17.671 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[fe687af6-3b05-4b4b-a9c5-e133ba60525e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:17.673 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3091b0d-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:17.673 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:17.674 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3091b0d-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:17 compute-2 nova_compute[226829]: 2026-01-31 08:06:17.677 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:17 compute-2 NetworkManager[48999]: <info>  [1769846777.6775] manager: (tapf3091b0d-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/214)
Jan 31 08:06:17 compute-2 kernel: tapf3091b0d-00: entered promiscuous mode
Jan 31 08:06:17 compute-2 nova_compute[226829]: 2026-01-31 08:06:17.684 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:17.685 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3091b0d-00, col_values=(('external_ids', {'iface-id': '67f0642a-d8ea-421d-83b2-8b692f5bc044'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:17 compute-2 ovn_controller[133834]: 2026-01-31T08:06:17Z|00433|binding|INFO|Releasing lport 67f0642a-d8ea-421d-83b2-8b692f5bc044 from this chassis (sb_readonly=1)
Jan 31 08:06:17 compute-2 nova_compute[226829]: 2026-01-31 08:06:17.687 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846777.6850626, 960d50f3-beb3-4e6b-9fd6-04178a47e6a0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:06:17 compute-2 nova_compute[226829]: 2026-01-31 08:06:17.687 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] VM Started (Lifecycle Event)
Jan 31 08:06:17 compute-2 nova_compute[226829]: 2026-01-31 08:06:17.691 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:17.692 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3091b0d-0fc9-4172-b2af-6d9c678c6569.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3091b0d-0fc9-4172-b2af-6d9c678c6569.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:17.693 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[af2624ed-3472-4536-acfc-f992aacb6132]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:17.694 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-f3091b0d-0fc9-4172-b2af-6d9c678c6569
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/f3091b0d-0fc9-4172-b2af-6d9c678c6569.pid.haproxy
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID f3091b0d-0fc9-4172-b2af-6d9c678c6569
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:06:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:17.694 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569', 'env', 'PROCESS_TAG=haproxy-f3091b0d-0fc9-4172-b2af-6d9c678c6569', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3091b0d-0fc9-4172-b2af-6d9c678c6569.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:06:17 compute-2 nova_compute[226829]: 2026-01-31 08:06:17.784 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:06:17 compute-2 nova_compute[226829]: 2026-01-31 08:06:17.787 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846777.6851983, 960d50f3-beb3-4e6b-9fd6-04178a47e6a0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:06:17 compute-2 nova_compute[226829]: 2026-01-31 08:06:17.787 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] VM Paused (Lifecycle Event)
Jan 31 08:06:17 compute-2 nova_compute[226829]: 2026-01-31 08:06:17.826 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:06:17 compute-2 nova_compute[226829]: 2026-01-31 08:06:17.830 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:06:17 compute-2 nova_compute[226829]: 2026-01-31 08:06:17.880 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:06:18 compute-2 podman[273383]: 2026-01-31 08:06:18.002056853 +0000 UTC m=+0.018505075 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:06:18 compute-2 podman[273383]: 2026-01-31 08:06:18.130836136 +0000 UTC m=+0.147284318 container create 4a710339dbcb456c5092d1ce666d627074f32ab8b6392d0bc24e1b986a4276df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:06:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:18.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:18 compute-2 systemd[1]: Started libpod-conmon-4a710339dbcb456c5092d1ce666d627074f32ab8b6392d0bc24e1b986a4276df.scope.
Jan 31 08:06:18 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:06:18 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64e15e6a8a9c0d69cd598e36c53eb579bc37aab7529211b993cfbbfbaf13927d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:18 compute-2 podman[273383]: 2026-01-31 08:06:18.257021049 +0000 UTC m=+0.273469261 container init 4a710339dbcb456c5092d1ce666d627074f32ab8b6392d0bc24e1b986a4276df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 08:06:18 compute-2 podman[273383]: 2026-01-31 08:06:18.262323553 +0000 UTC m=+0.278771755 container start 4a710339dbcb456c5092d1ce666d627074f32ab8b6392d0bc24e1b986a4276df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 08:06:18 compute-2 neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569[273399]: [NOTICE]   (273403) : New worker (273405) forked
Jan 31 08:06:18 compute-2 neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569[273399]: [NOTICE]   (273403) : Loading success.
Jan 31 08:06:18 compute-2 nova_compute[226829]: 2026-01-31 08:06:18.627 226833 DEBUG nova.network.neutron [req-37f67a57-96a6-49e2-b11b-62ffa122cc5a req-3505622b-bbe6-4c53-b086-a9bca6e54187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Updated VIF entry in instance network info cache for port 13d49931-062a-47ff-9767-085de35254fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:06:18 compute-2 nova_compute[226829]: 2026-01-31 08:06:18.627 226833 DEBUG nova.network.neutron [req-37f67a57-96a6-49e2-b11b-62ffa122cc5a req-3505622b-bbe6-4c53-b086-a9bca6e54187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Updating instance_info_cache with network_info: [{"id": "13d49931-062a-47ff-9767-085de35254fe", "address": "fa:16:3e:98:58:46", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d49931-06", "ovs_interfaceid": "13d49931-062a-47ff-9767-085de35254fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:06:18 compute-2 nova_compute[226829]: 2026-01-31 08:06:18.650 226833 DEBUG oslo_concurrency.lockutils [req-37f67a57-96a6-49e2-b11b-62ffa122cc5a req-3505622b-bbe6-4c53-b086-a9bca6e54187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-960d50f3-beb3-4e6b-9fd6-04178a47e6a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:06:18 compute-2 ceph-mon[77282]: pgmap v2080: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 91 KiB/s rd, 3.1 MiB/s wr, 110 op/s
Jan 31 08:06:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:19.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:06:19 compute-2 kernel: tapc153b6ab-5a (unregistering): left promiscuous mode
Jan 31 08:06:19 compute-2 NetworkManager[48999]: <info>  [1769846779.4474] device (tapc153b6ab-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:06:19 compute-2 ovn_controller[133834]: 2026-01-31T08:06:19Z|00434|binding|INFO|Releasing lport c153b6ab-5a86-443d-ad94-95d7b82bf483 from this chassis (sb_readonly=0)
Jan 31 08:06:19 compute-2 ovn_controller[133834]: 2026-01-31T08:06:19Z|00435|binding|INFO|Setting lport c153b6ab-5a86-443d-ad94-95d7b82bf483 down in Southbound
Jan 31 08:06:19 compute-2 ovn_controller[133834]: 2026-01-31T08:06:19Z|00436|binding|INFO|Removing iface tapc153b6ab-5a ovn-installed in OVS
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.499 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.501 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.505 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:19.507 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:c3:66 10.100.0.12'], port_security=['fa:16:3e:f8:c3:66 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '66218154-3695-4970-8525-7eaae98f9f14', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7be0d68e-c4ff-4356-97f2-bd58246f6e46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.222'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=c153b6ab-5a86-443d-ad94-95d7b82bf483) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:06:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:19.508 143841 INFO neutron.agent.ovn.metadata.agent [-] Port c153b6ab-5a86-443d-ad94-95d7b82bf483 in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 unbound from our chassis
Jan 31 08:06:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:19.510 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5cc2535f-0f8f-4713-a35c-9805048a29a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:06:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:19.511 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b463c703-5eca-45b3-b422-f7dcb04dfa34]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:19.511 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 namespace which is not needed anymore
Jan 31 08:06:19 compute-2 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000066.scope: Deactivated successfully.
Jan 31 08:06:19 compute-2 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000066.scope: Consumed 14.863s CPU time.
Jan 31 08:06:19 compute-2 systemd-machined[195142]: Machine qemu-45-instance-00000066 terminated.
Jan 31 08:06:19 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[272190]: [NOTICE]   (272194) : haproxy version is 2.8.14-c23fe91
Jan 31 08:06:19 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[272190]: [NOTICE]   (272194) : path to executable is /usr/sbin/haproxy
Jan 31 08:06:19 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[272190]: [WARNING]  (272194) : Exiting Master process...
Jan 31 08:06:19 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[272190]: [WARNING]  (272194) : Exiting Master process...
Jan 31 08:06:19 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[272190]: [ALERT]    (272194) : Current worker (272196) exited with code 143 (Terminated)
Jan 31 08:06:19 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[272190]: [WARNING]  (272194) : All workers exited. Exiting... (0)
Jan 31 08:06:19 compute-2 systemd[1]: libpod-2f49677dd8dc38491beff0a785bc1a14d0930b751365a1fe8d9aa2a6c4bf6bc8.scope: Deactivated successfully.
Jan 31 08:06:19 compute-2 podman[273436]: 2026-01-31 08:06:19.715827945 +0000 UTC m=+0.135930958 container died 2f49677dd8dc38491beff0a785bc1a14d0930b751365a1fe8d9aa2a6c4bf6bc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127)
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.743 226833 DEBUG nova.compute.manager [req-3421c713-0eb7-4272-8f30-498cb4716c33 req-feb282b4-9ed0-4e36-a818-1cb212b13820 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Received event network-vif-plugged-13d49931-062a-47ff-9767-085de35254fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.743 226833 DEBUG oslo_concurrency.lockutils [req-3421c713-0eb7-4272-8f30-498cb4716c33 req-feb282b4-9ed0-4e36-a818-1cb212b13820 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "960d50f3-beb3-4e6b-9fd6-04178a47e6a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.744 226833 DEBUG oslo_concurrency.lockutils [req-3421c713-0eb7-4272-8f30-498cb4716c33 req-feb282b4-9ed0-4e36-a818-1cb212b13820 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "960d50f3-beb3-4e6b-9fd6-04178a47e6a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.744 226833 DEBUG oslo_concurrency.lockutils [req-3421c713-0eb7-4272-8f30-498cb4716c33 req-feb282b4-9ed0-4e36-a818-1cb212b13820 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "960d50f3-beb3-4e6b-9fd6-04178a47e6a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.744 226833 DEBUG nova.compute.manager [req-3421c713-0eb7-4272-8f30-498cb4716c33 req-feb282b4-9ed0-4e36-a818-1cb212b13820 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Processing event network-vif-plugged-13d49931-062a-47ff-9767-085de35254fe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.745 226833 DEBUG nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.749 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846779.7491016, 960d50f3-beb3-4e6b-9fd6-04178a47e6a0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.749 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] VM Resumed (Lifecycle Event)
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.753 226833 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.756 226833 INFO nova.virt.libvirt.driver [-] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Instance spawned successfully.
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.756 226833 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.764 226833 INFO nova.virt.libvirt.driver [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Instance shutdown successfully after 3 seconds.
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.769 226833 INFO nova.virt.libvirt.driver [-] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Instance destroyed successfully.
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.770 226833 DEBUG nova.virt.libvirt.vif [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:05:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1412363862',display_name='tempest-ServerActionsTestJSON-server-1412363862',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1412363862',id=102,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:05:27Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-fla0xmix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:06:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=66218154-3695-4970-8525-7eaae98f9f14,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1076068063-network", "vif_mac": "fa:16:3e:f8:c3:66"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.770 226833 DEBUG nova.network.os_vif_util [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1076068063-network", "vif_mac": "fa:16:3e:f8:c3:66"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.771 226833 DEBUG nova.network.os_vif_util [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:c3:66,bridge_name='br-int',has_traffic_filtering=True,id=c153b6ab-5a86-443d-ad94-95d7b82bf483,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc153b6ab-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.771 226833 DEBUG os_vif [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:c3:66,bridge_name='br-int',has_traffic_filtering=True,id=c153b6ab-5a86-443d-ad94-95d7b82bf483,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc153b6ab-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.773 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.773 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc153b6ab-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.775 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.776 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.778 226833 INFO os_vif [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:c3:66,bridge_name='br-int',has_traffic_filtering=True,id=c153b6ab-5a86-443d-ad94-95d7b82bf483,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc153b6ab-5a')
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.782 226833 DEBUG nova.virt.libvirt.driver [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.783 226833 DEBUG nova.virt.libvirt.driver [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.798 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.803 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.805 226833 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.806 226833 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.806 226833 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.806 226833 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.807 226833 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.807 226833 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.855 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:06:19 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2f49677dd8dc38491beff0a785bc1a14d0930b751365a1fe8d9aa2a6c4bf6bc8-userdata-shm.mount: Deactivated successfully.
Jan 31 08:06:19 compute-2 systemd[1]: var-lib-containers-storage-overlay-cb2f88a9238a1a8c18b29dd5d0857eed6e74d819fc1dbc7579c45e107d3cd4c6-merged.mount: Deactivated successfully.
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.910 226833 INFO nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Took 16.58 seconds to spawn the instance on the hypervisor.
Jan 31 08:06:19 compute-2 nova_compute[226829]: 2026-01-31 08:06:19.910 226833 DEBUG nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:06:19 compute-2 podman[273436]: 2026-01-31 08:06:19.974986246 +0000 UTC m=+0.395089259 container cleanup 2f49677dd8dc38491beff0a785bc1a14d0930b751365a1fe8d9aa2a6c4bf6bc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 08:06:19 compute-2 systemd[1]: libpod-conmon-2f49677dd8dc38491beff0a785bc1a14d0930b751365a1fe8d9aa2a6c4bf6bc8.scope: Deactivated successfully.
Jan 31 08:06:20 compute-2 nova_compute[226829]: 2026-01-31 08:06:20.003 226833 INFO nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Took 18.41 seconds to build instance.
Jan 31 08:06:20 compute-2 nova_compute[226829]: 2026-01-31 08:06:20.012 226833 DEBUG nova.compute.manager [req-50d44e4d-d9eb-4a1b-8b22-929416288666 req-af0838c5-d61c-440e-a389-f985abadec55 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Received event network-vif-unplugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:06:20 compute-2 nova_compute[226829]: 2026-01-31 08:06:20.012 226833 DEBUG oslo_concurrency.lockutils [req-50d44e4d-d9eb-4a1b-8b22-929416288666 req-af0838c5-d61c-440e-a389-f985abadec55 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "66218154-3695-4970-8525-7eaae98f9f14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:20 compute-2 nova_compute[226829]: 2026-01-31 08:06:20.013 226833 DEBUG oslo_concurrency.lockutils [req-50d44e4d-d9eb-4a1b-8b22-929416288666 req-af0838c5-d61c-440e-a389-f985abadec55 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:20 compute-2 nova_compute[226829]: 2026-01-31 08:06:20.013 226833 DEBUG oslo_concurrency.lockutils [req-50d44e4d-d9eb-4a1b-8b22-929416288666 req-af0838c5-d61c-440e-a389-f985abadec55 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:20 compute-2 nova_compute[226829]: 2026-01-31 08:06:20.013 226833 DEBUG nova.compute.manager [req-50d44e4d-d9eb-4a1b-8b22-929416288666 req-af0838c5-d61c-440e-a389-f985abadec55 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] No waiting events found dispatching network-vif-unplugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:06:20 compute-2 nova_compute[226829]: 2026-01-31 08:06:20.014 226833 WARNING nova.compute.manager [req-50d44e4d-d9eb-4a1b-8b22-929416288666 req-af0838c5-d61c-440e-a389-f985abadec55 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Received unexpected event network-vif-unplugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 for instance with vm_state active and task_state resize_migrating.
Jan 31 08:06:20 compute-2 nova_compute[226829]: 2026-01-31 08:06:20.044 226833 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "960d50f3-beb3-4e6b-9fd6-04178a47e6a0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:20 compute-2 nova_compute[226829]: 2026-01-31 08:06:20.049 226833 DEBUG nova.network.neutron [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Port c153b6ab-5a86-443d-ad94-95d7b82bf483 binding to destination host compute-2.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171
Jan 31 08:06:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:20.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:20 compute-2 nova_compute[226829]: 2026-01-31 08:06:20.169 226833 DEBUG oslo_concurrency.lockutils [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "66218154-3695-4970-8525-7eaae98f9f14-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:20 compute-2 nova_compute[226829]: 2026-01-31 08:06:20.169 226833 DEBUG oslo_concurrency.lockutils [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:20 compute-2 nova_compute[226829]: 2026-01-31 08:06:20.170 226833 DEBUG oslo_concurrency.lockutils [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:20 compute-2 podman[273474]: 2026-01-31 08:06:20.212380275 +0000 UTC m=+0.220478500 container remove 2f49677dd8dc38491beff0a785bc1a14d0930b751365a1fe8d9aa2a6c4bf6bc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 08:06:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:20.216 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9be952a6-4565-41cd-b129-b5fedbeae6b4]: (4, ('Sat Jan 31 08:06:19 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 (2f49677dd8dc38491beff0a785bc1a14d0930b751365a1fe8d9aa2a6c4bf6bc8)\n2f49677dd8dc38491beff0a785bc1a14d0930b751365a1fe8d9aa2a6c4bf6bc8\nSat Jan 31 08:06:19 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 (2f49677dd8dc38491beff0a785bc1a14d0930b751365a1fe8d9aa2a6c4bf6bc8)\n2f49677dd8dc38491beff0a785bc1a14d0930b751365a1fe8d9aa2a6c4bf6bc8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:20.217 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3c9eb7f3-bb98-4a38-b173-350dbc2d0c05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:20.218 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc2535f-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:20 compute-2 nova_compute[226829]: 2026-01-31 08:06:20.220 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:20 compute-2 kernel: tap5cc2535f-00: left promiscuous mode
Jan 31 08:06:20 compute-2 nova_compute[226829]: 2026-01-31 08:06:20.225 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:20 compute-2 nova_compute[226829]: 2026-01-31 08:06:20.227 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:20.230 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cdbc0693-a13e-4a60-9c0d-2fb6d4c16ce8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:20.246 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f1fcae48-1b48-4748-bdd4-ff6902c09f66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:20.247 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f32ab3a3-e224-4be5-a24f-55558a089eab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:20.259 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cba0c7d2-11e3-4b8a-afb6-add6d813ba5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697758, 'reachable_time': 34542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273487, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:20 compute-2 systemd[1]: run-netns-ovnmeta\x2d5cc2535f\x2d0f8f\x2d4713\x2da35c\x2d9805048a29a8.mount: Deactivated successfully.
Jan 31 08:06:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:20.262 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:06:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:20.262 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[0ce37f23-7107-466d-9464-06c139ecaf07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:20 compute-2 nova_compute[226829]: 2026-01-31 08:06:20.367 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:20 compute-2 nova_compute[226829]: 2026-01-31 08:06:20.420 226833 DEBUG oslo_concurrency.lockutils [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "refresh_cache-66218154-3695-4970-8525-7eaae98f9f14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:06:20 compute-2 nova_compute[226829]: 2026-01-31 08:06:20.420 226833 DEBUG oslo_concurrency.lockutils [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquired lock "refresh_cache-66218154-3695-4970-8525-7eaae98f9f14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:06:20 compute-2 nova_compute[226829]: 2026-01-31 08:06:20.421 226833 DEBUG nova.network.neutron [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:06:20 compute-2 nova_compute[226829]: 2026-01-31 08:06:20.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:06:20 compute-2 ceph-mon[77282]: pgmap v2081: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 151 op/s
Jan 31 08:06:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:21.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:21 compute-2 ovn_controller[133834]: 2026-01-31T08:06:21Z|00437|binding|INFO|Releasing lport 67f0642a-d8ea-421d-83b2-8b692f5bc044 from this chassis (sb_readonly=0)
Jan 31 08:06:21 compute-2 nova_compute[226829]: 2026-01-31 08:06:21.708 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:21 compute-2 ovn_controller[133834]: 2026-01-31T08:06:21Z|00438|binding|INFO|Releasing lport 67f0642a-d8ea-421d-83b2-8b692f5bc044 from this chassis (sb_readonly=0)
Jan 31 08:06:21 compute-2 nova_compute[226829]: 2026-01-31 08:06:21.775 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:21 compute-2 nova_compute[226829]: 2026-01-31 08:06:21.931 226833 DEBUG nova.compute.manager [req-aa4e574c-2842-43c3-a0ab-e74f3df11d00 req-fc911660-72bd-4c71-adb1-e4087667c6f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Received event network-vif-plugged-13d49931-062a-47ff-9767-085de35254fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:06:21 compute-2 nova_compute[226829]: 2026-01-31 08:06:21.931 226833 DEBUG oslo_concurrency.lockutils [req-aa4e574c-2842-43c3-a0ab-e74f3df11d00 req-fc911660-72bd-4c71-adb1-e4087667c6f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "960d50f3-beb3-4e6b-9fd6-04178a47e6a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:21 compute-2 nova_compute[226829]: 2026-01-31 08:06:21.932 226833 DEBUG oslo_concurrency.lockutils [req-aa4e574c-2842-43c3-a0ab-e74f3df11d00 req-fc911660-72bd-4c71-adb1-e4087667c6f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "960d50f3-beb3-4e6b-9fd6-04178a47e6a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:21 compute-2 nova_compute[226829]: 2026-01-31 08:06:21.932 226833 DEBUG oslo_concurrency.lockutils [req-aa4e574c-2842-43c3-a0ab-e74f3df11d00 req-fc911660-72bd-4c71-adb1-e4087667c6f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "960d50f3-beb3-4e6b-9fd6-04178a47e6a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:21 compute-2 nova_compute[226829]: 2026-01-31 08:06:21.932 226833 DEBUG nova.compute.manager [req-aa4e574c-2842-43c3-a0ab-e74f3df11d00 req-fc911660-72bd-4c71-adb1-e4087667c6f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] No waiting events found dispatching network-vif-plugged-13d49931-062a-47ff-9767-085de35254fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:06:21 compute-2 nova_compute[226829]: 2026-01-31 08:06:21.932 226833 WARNING nova.compute.manager [req-aa4e574c-2842-43c3-a0ab-e74f3df11d00 req-fc911660-72bd-4c71-adb1-e4087667c6f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Received unexpected event network-vif-plugged-13d49931-062a-47ff-9767-085de35254fe for instance with vm_state active and task_state None.
Jan 31 08:06:22 compute-2 sudo[273494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:06:22 compute-2 sudo[273494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:22 compute-2 sudo[273494]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:22 compute-2 sudo[273519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:06:22 compute-2 sudo[273519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:22 compute-2 sudo[273519]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:22 compute-2 ceph-mon[77282]: pgmap v2082: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 84 KiB/s wr, 137 op/s
Jan 31 08:06:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:22.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:22 compute-2 nova_compute[226829]: 2026-01-31 08:06:22.162 226833 DEBUG nova.compute.manager [req-0f08a54c-eb06-4727-8dc3-6254218b742e req-aa7befd2-3721-4a13-ab53-3c7b313ee71d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Received event network-vif-plugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:06:22 compute-2 nova_compute[226829]: 2026-01-31 08:06:22.163 226833 DEBUG oslo_concurrency.lockutils [req-0f08a54c-eb06-4727-8dc3-6254218b742e req-aa7befd2-3721-4a13-ab53-3c7b313ee71d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "66218154-3695-4970-8525-7eaae98f9f14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:22 compute-2 nova_compute[226829]: 2026-01-31 08:06:22.164 226833 DEBUG oslo_concurrency.lockutils [req-0f08a54c-eb06-4727-8dc3-6254218b742e req-aa7befd2-3721-4a13-ab53-3c7b313ee71d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:22 compute-2 nova_compute[226829]: 2026-01-31 08:06:22.164 226833 DEBUG oslo_concurrency.lockutils [req-0f08a54c-eb06-4727-8dc3-6254218b742e req-aa7befd2-3721-4a13-ab53-3c7b313ee71d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:22 compute-2 nova_compute[226829]: 2026-01-31 08:06:22.164 226833 DEBUG nova.compute.manager [req-0f08a54c-eb06-4727-8dc3-6254218b742e req-aa7befd2-3721-4a13-ab53-3c7b313ee71d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] No waiting events found dispatching network-vif-plugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:06:22 compute-2 nova_compute[226829]: 2026-01-31 08:06:22.164 226833 WARNING nova.compute.manager [req-0f08a54c-eb06-4727-8dc3-6254218b742e req-aa7befd2-3721-4a13-ab53-3c7b313ee71d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Received unexpected event network-vif-plugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 for instance with vm_state active and task_state resize_migrated.
Jan 31 08:06:22 compute-2 nova_compute[226829]: 2026-01-31 08:06:22.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:06:22 compute-2 nova_compute[226829]: 2026-01-31 08:06:22.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:06:22 compute-2 nova_compute[226829]: 2026-01-31 08:06:22.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:06:22 compute-2 nova_compute[226829]: 2026-01-31 08:06:22.522 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-66218154-3695-4970-8525-7eaae98f9f14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:06:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:23.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:24.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.318 226833 DEBUG nova.network.neutron [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Updating instance_info_cache with network_info: [{"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.382 226833 DEBUG oslo_concurrency.lockutils [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Releasing lock "refresh_cache-66218154-3695-4970-8525-7eaae98f9f14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.389 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-66218154-3695-4970-8525-7eaae98f9f14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.389 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.390 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 66218154-3695-4970-8525-7eaae98f9f14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.613 226833 DEBUG nova.virt.libvirt.driver [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.615 226833 DEBUG nova.virt.libvirt.driver [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.615 226833 INFO nova.virt.libvirt.driver [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Creating image(s)
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.655 226833 DEBUG nova.storage.rbd_utils [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] creating snapshot(nova-resize) on rbd image(66218154-3695-4970-8525-7eaae98f9f14_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 08:06:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e260 e260: 3 total, 3 up, 3 in
Jan 31 08:06:24 compute-2 ceph-mon[77282]: pgmap v2083: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 66 KiB/s wr, 167 op/s
Jan 31 08:06:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/46344885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #91. Immutable memtables: 0.
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:24.765907) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 91
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846784766119, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 2011, "num_deletes": 252, "total_data_size": 4535257, "memory_usage": 4615464, "flush_reason": "Manual Compaction"}
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #92: started
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.780 226833 DEBUG nova.objects.instance [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 66218154-3695-4970-8525-7eaae98f9f14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.782 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846784786767, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 92, "file_size": 1805591, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47163, "largest_seqno": 49169, "table_properties": {"data_size": 1799383, "index_size": 3154, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16854, "raw_average_key_size": 21, "raw_value_size": 1785465, "raw_average_value_size": 2254, "num_data_blocks": 141, "num_entries": 792, "num_filter_entries": 792, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846621, "oldest_key_time": 1769846621, "file_creation_time": 1769846784, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 20928 microseconds, and 8522 cpu microseconds.
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:24.786837) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #92: 1805591 bytes OK
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:24.786859) [db/memtable_list.cc:519] [default] Level-0 commit table #92 started
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:24.792656) [db/memtable_list.cc:722] [default] Level-0 commit table #92: memtable #1 done
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:24.792673) EVENT_LOG_v1 {"time_micros": 1769846784792667, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:24.792701) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 4526256, prev total WAL file size 4526256, number of live WAL files 2.
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000088.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:24.793589) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353038' seq:72057594037927935, type:22 .. '6D6772737461740031373539' seq:0, type:0; will stop at (end)
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [92(1763KB)], [90(10MB)]
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846784793686, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [92], "files_L6": [90], "score": -1, "input_data_size": 13260830, "oldest_snapshot_seqno": -1}
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #93: 7529 keys, 10585370 bytes, temperature: kUnknown
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846784867092, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 93, "file_size": 10585370, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10537030, "index_size": 28388, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18885, "raw_key_size": 192841, "raw_average_key_size": 25, "raw_value_size": 10404626, "raw_average_value_size": 1381, "num_data_blocks": 1128, "num_entries": 7529, "num_filter_entries": 7529, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769846784, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 93, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:24.867395) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 10585370 bytes
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:24.876082) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.5 rd, 144.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 10.9 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(13.2) write-amplify(5.9) OK, records in: 7973, records dropped: 444 output_compression: NoCompression
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:24.876101) EVENT_LOG_v1 {"time_micros": 1769846784876092, "job": 56, "event": "compaction_finished", "compaction_time_micros": 73487, "compaction_time_cpu_micros": 24611, "output_level": 6, "num_output_files": 1, "total_output_size": 10585370, "num_input_records": 7973, "num_output_records": 7529, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846784876358, "job": 56, "event": "table_file_deletion", "file_number": 92}
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000090.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846784877230, "job": 56, "event": "table_file_deletion", "file_number": 90}
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:24.793249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:24.877264) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:24.877268) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:24.877270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:24.877271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:24.877272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.884 226833 DEBUG nova.virt.libvirt.driver [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.884 226833 DEBUG nova.virt.libvirt.driver [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Ensure instance console log exists: /var/lib/nova/instances/66218154-3695-4970-8525-7eaae98f9f14/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.885 226833 DEBUG oslo_concurrency.lockutils [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.885 226833 DEBUG oslo_concurrency.lockutils [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.885 226833 DEBUG oslo_concurrency.lockutils [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.887 226833 DEBUG nova.virt.libvirt.driver [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Start _get_guest_xml network_info=[{"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1076068063-network", "vif_mac": "fa:16:3e:f8:c3:66"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.893 226833 WARNING nova.virt.libvirt.driver [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.898 226833 DEBUG nova.virt.libvirt.host [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.899 226833 DEBUG nova.virt.libvirt.host [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.905 226833 DEBUG nova.virt.libvirt.host [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.905 226833 DEBUG nova.virt.libvirt.host [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.906 226833 DEBUG nova.virt.libvirt.driver [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.906 226833 DEBUG nova.virt.hardware [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e3bd1dad-95f3-4ed9-94b4-27245cd798b5',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.907 226833 DEBUG nova.virt.hardware [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.907 226833 DEBUG nova.virt.hardware [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.907 226833 DEBUG nova.virt.hardware [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.908 226833 DEBUG nova.virt.hardware [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.908 226833 DEBUG nova.virt.hardware [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.908 226833 DEBUG nova.virt.hardware [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.908 226833 DEBUG nova.virt.hardware [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.909 226833 DEBUG nova.virt.hardware [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.909 226833 DEBUG nova.virt.hardware [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.909 226833 DEBUG nova.virt.hardware [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.909 226833 DEBUG nova.objects.instance [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 66218154-3695-4970-8525-7eaae98f9f14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:06:24 compute-2 nova_compute[226829]: 2026-01-31 08:06:24.935 226833 DEBUG oslo_concurrency.processutils [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:06:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:25.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:06:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:06:25 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1547766470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:25 compute-2 nova_compute[226829]: 2026-01-31 08:06:25.353 226833 DEBUG oslo_concurrency.processutils [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:25 compute-2 nova_compute[226829]: 2026-01-31 08:06:25.385 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:25 compute-2 nova_compute[226829]: 2026-01-31 08:06:25.390 226833 DEBUG oslo_concurrency.processutils [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:06:25 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1149175025' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:25 compute-2 ceph-mon[77282]: osdmap e260: 3 total, 3 up, 3 in
Jan 31 08:06:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2562189325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/207896373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1547766470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:25 compute-2 nova_compute[226829]: 2026-01-31 08:06:25.811 226833 DEBUG oslo_concurrency.processutils [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:25 compute-2 nova_compute[226829]: 2026-01-31 08:06:25.812 226833 DEBUG nova.virt.libvirt.vif [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:05:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1412363862',display_name='tempest-ServerActionsTestJSON-server-1412363862',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1412363862',id=102,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:05:27Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-fla0xmix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:06:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=66218154-3695-4970-8525-7eaae98f9f14,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1076068063-network", "vif_mac": "fa:16:3e:f8:c3:66"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:06:25 compute-2 nova_compute[226829]: 2026-01-31 08:06:25.813 226833 DEBUG nova.network.os_vif_util [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1076068063-network", "vif_mac": "fa:16:3e:f8:c3:66"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:06:25 compute-2 nova_compute[226829]: 2026-01-31 08:06:25.814 226833 DEBUG nova.network.os_vif_util [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:c3:66,bridge_name='br-int',has_traffic_filtering=True,id=c153b6ab-5a86-443d-ad94-95d7b82bf483,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc153b6ab-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:06:25 compute-2 nova_compute[226829]: 2026-01-31 08:06:25.816 226833 DEBUG nova.virt.libvirt.driver [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:06:25 compute-2 nova_compute[226829]:   <uuid>66218154-3695-4970-8525-7eaae98f9f14</uuid>
Jan 31 08:06:25 compute-2 nova_compute[226829]:   <name>instance-00000066</name>
Jan 31 08:06:25 compute-2 nova_compute[226829]:   <memory>196608</memory>
Jan 31 08:06:25 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:06:25 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <nova:name>tempest-ServerActionsTestJSON-server-1412363862</nova:name>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:06:24</nova:creationTime>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <nova:flavor name="m1.micro">
Jan 31 08:06:25 compute-2 nova_compute[226829]:         <nova:memory>192</nova:memory>
Jan 31 08:06:25 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:06:25 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:06:25 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:06:25 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:06:25 compute-2 nova_compute[226829]:         <nova:user uuid="d9ed446fb2cf4fc0a4e619c6c766fddc">tempest-ServerActionsTestJSON-1391450973-project-member</nova:user>
Jan 31 08:06:25 compute-2 nova_compute[226829]:         <nova:project uuid="1fcec9ca13964c7191134db4420ab049">tempest-ServerActionsTestJSON-1391450973</nova:project>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:06:25 compute-2 nova_compute[226829]:         <nova:port uuid="c153b6ab-5a86-443d-ad94-95d7b82bf483">
Jan 31 08:06:25 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:06:25 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:06:25 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <system>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <entry name="serial">66218154-3695-4970-8525-7eaae98f9f14</entry>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <entry name="uuid">66218154-3695-4970-8525-7eaae98f9f14</entry>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     </system>
Jan 31 08:06:25 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:06:25 compute-2 nova_compute[226829]:   <os>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:   </os>
Jan 31 08:06:25 compute-2 nova_compute[226829]:   <features>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:   </features>
Jan 31 08:06:25 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:06:25 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:06:25 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/66218154-3695-4970-8525-7eaae98f9f14_disk">
Jan 31 08:06:25 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       </source>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:06:25 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/66218154-3695-4970-8525-7eaae98f9f14_disk.config">
Jan 31 08:06:25 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       </source>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:06:25 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:f8:c3:66"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <target dev="tapc153b6ab-5a"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/66218154-3695-4970-8525-7eaae98f9f14/console.log" append="off"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <video>
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     </video>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:06:25 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:06:25 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:06:25 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:06:25 compute-2 nova_compute[226829]: </domain>
Jan 31 08:06:25 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:06:25 compute-2 nova_compute[226829]: 2026-01-31 08:06:25.817 226833 DEBUG nova.virt.libvirt.vif [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:05:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1412363862',display_name='tempest-ServerActionsTestJSON-server-1412363862',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1412363862',id=102,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:05:27Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-fla0xmix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:06:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=66218154-3695-4970-8525-7eaae98f9f14,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1076068063-network", "vif_mac": "fa:16:3e:f8:c3:66"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:06:25 compute-2 nova_compute[226829]: 2026-01-31 08:06:25.817 226833 DEBUG nova.network.os_vif_util [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1076068063-network", "vif_mac": "fa:16:3e:f8:c3:66"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:06:25 compute-2 nova_compute[226829]: 2026-01-31 08:06:25.818 226833 DEBUG nova.network.os_vif_util [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:c3:66,bridge_name='br-int',has_traffic_filtering=True,id=c153b6ab-5a86-443d-ad94-95d7b82bf483,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc153b6ab-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:06:25 compute-2 nova_compute[226829]: 2026-01-31 08:06:25.818 226833 DEBUG os_vif [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:c3:66,bridge_name='br-int',has_traffic_filtering=True,id=c153b6ab-5a86-443d-ad94-95d7b82bf483,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc153b6ab-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:06:25 compute-2 nova_compute[226829]: 2026-01-31 08:06:25.819 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:25 compute-2 nova_compute[226829]: 2026-01-31 08:06:25.819 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:25 compute-2 nova_compute[226829]: 2026-01-31 08:06:25.819 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:06:25 compute-2 nova_compute[226829]: 2026-01-31 08:06:25.821 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:25 compute-2 nova_compute[226829]: 2026-01-31 08:06:25.821 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc153b6ab-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:25 compute-2 nova_compute[226829]: 2026-01-31 08:06:25.822 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc153b6ab-5a, col_values=(('external_ids', {'iface-id': 'c153b6ab-5a86-443d-ad94-95d7b82bf483', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:c3:66', 'vm-uuid': '66218154-3695-4970-8525-7eaae98f9f14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:25 compute-2 nova_compute[226829]: 2026-01-31 08:06:25.823 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:25 compute-2 NetworkManager[48999]: <info>  [1769846785.8243] manager: (tapc153b6ab-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/215)
Jan 31 08:06:25 compute-2 nova_compute[226829]: 2026-01-31 08:06:25.825 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:06:25 compute-2 nova_compute[226829]: 2026-01-31 08:06:25.827 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:25 compute-2 nova_compute[226829]: 2026-01-31 08:06:25.828 226833 INFO os_vif [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:c3:66,bridge_name='br-int',has_traffic_filtering=True,id=c153b6ab-5a86-443d-ad94-95d7b82bf483,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc153b6ab-5a')
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.003 226833 DEBUG oslo_concurrency.lockutils [None req-0797579f-0eb3-4d0f-ba1d-0581cc74238e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquiring lock "960d50f3-beb3-4e6b-9fd6-04178a47e6a0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.004 226833 DEBUG oslo_concurrency.lockutils [None req-0797579f-0eb3-4d0f-ba1d-0581cc74238e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "960d50f3-beb3-4e6b-9fd6-04178a47e6a0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.004 226833 DEBUG oslo_concurrency.lockutils [None req-0797579f-0eb3-4d0f-ba1d-0581cc74238e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquiring lock "960d50f3-beb3-4e6b-9fd6-04178a47e6a0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.004 226833 DEBUG oslo_concurrency.lockutils [None req-0797579f-0eb3-4d0f-ba1d-0581cc74238e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "960d50f3-beb3-4e6b-9fd6-04178a47e6a0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.004 226833 DEBUG oslo_concurrency.lockutils [None req-0797579f-0eb3-4d0f-ba1d-0581cc74238e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "960d50f3-beb3-4e6b-9fd6-04178a47e6a0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.005 226833 INFO nova.compute.manager [None req-0797579f-0eb3-4d0f-ba1d-0581cc74238e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Terminating instance
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.006 226833 DEBUG nova.compute.manager [None req-0797579f-0eb3-4d0f-ba1d-0581cc74238e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.021 226833 DEBUG nova.virt.libvirt.driver [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.022 226833 DEBUG nova.virt.libvirt.driver [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.022 226833 DEBUG nova.virt.libvirt.driver [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] No VIF found with MAC fa:16:3e:f8:c3:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.022 226833 INFO nova.virt.libvirt.driver [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Using config drive
Jan 31 08:06:26 compute-2 kernel: tap13d49931-06 (unregistering): left promiscuous mode
Jan 31 08:06:26 compute-2 NetworkManager[48999]: <info>  [1769846786.0424] device (tap13d49931-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:06:26 compute-2 ovn_controller[133834]: 2026-01-31T08:06:26Z|00439|binding|INFO|Releasing lport 13d49931-062a-47ff-9767-085de35254fe from this chassis (sb_readonly=0)
Jan 31 08:06:26 compute-2 ovn_controller[133834]: 2026-01-31T08:06:26Z|00440|binding|INFO|Setting lport 13d49931-062a-47ff-9767-085de35254fe down in Southbound
Jan 31 08:06:26 compute-2 ovn_controller[133834]: 2026-01-31T08:06:26Z|00441|binding|INFO|Removing iface tap13d49931-06 ovn-installed in OVS
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.051 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.062 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:58:46 10.100.0.14'], port_security=['fa:16:3e:98:58:46 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '960d50f3-beb3-4e6b-9fd6-04178a47e6a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3091b0d-0fc9-4172-b2af-6d9c678c6569', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '43b462f5b43d48b4a33a13b069618e4c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '001ae016-61eb-444d-a215-8f70012d923a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aaf32b66-0fe8-4826-8186-77a88483534c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=13d49931-062a-47ff-9767-085de35254fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.064 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 13d49931-062a-47ff-9767-085de35254fe in datapath f3091b0d-0fc9-4172-b2af-6d9c678c6569 unbound from our chassis
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.066 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3091b0d-0fc9-4172-b2af-6d9c678c6569, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.067 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[30b37aab-d434-40fd-8981-06d9eac0ae94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.068 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569 namespace which is not needed anymore
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.069 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:26 compute-2 kernel: tapc153b6ab-5a: entered promiscuous mode
Jan 31 08:06:26 compute-2 systemd-udevd[273705]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:06:26 compute-2 NetworkManager[48999]: <info>  [1769846786.0957] manager: (tapc153b6ab-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/216)
Jan 31 08:06:26 compute-2 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000068.scope: Deactivated successfully.
Jan 31 08:06:26 compute-2 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000068.scope: Consumed 6.830s CPU time.
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.099 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:26 compute-2 systemd-machined[195142]: Machine qemu-46-instance-00000068 terminated.
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.104 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:26 compute-2 ovn_controller[133834]: 2026-01-31T08:06:26Z|00442|binding|INFO|Claiming lport c153b6ab-5a86-443d-ad94-95d7b82bf483 for this chassis.
Jan 31 08:06:26 compute-2 ovn_controller[133834]: 2026-01-31T08:06:26Z|00443|binding|INFO|c153b6ab-5a86-443d-ad94-95d7b82bf483: Claiming fa:16:3e:f8:c3:66 10.100.0.12
Jan 31 08:06:26 compute-2 NetworkManager[48999]: <info>  [1769846786.1089] device (tapc153b6ab-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:06:26 compute-2 NetworkManager[48999]: <info>  [1769846786.1102] device (tapc153b6ab-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.114 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.116 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:26 compute-2 NetworkManager[48999]: <info>  [1769846786.1185] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Jan 31 08:06:26 compute-2 NetworkManager[48999]: <info>  [1769846786.1190] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Jan 31 08:06:26 compute-2 systemd-machined[195142]: New machine qemu-47-instance-00000066.
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.130 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:c3:66 10.100.0.12'], port_security=['fa:16:3e:f8:c3:66 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '66218154-3695-4970-8525-7eaae98f9f14', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '5', 'neutron:security_group_ids': '7be0d68e-c4ff-4356-97f2-bd58246f6e46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.222'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=c153b6ab-5a86-443d-ad94-95d7b82bf483) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:06:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:26.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.167 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:26 compute-2 systemd[1]: Started Virtual Machine qemu-47-instance-00000066.
Jan 31 08:06:26 compute-2 ovn_controller[133834]: 2026-01-31T08:06:26Z|00444|binding|INFO|Releasing lport 67f0642a-d8ea-421d-83b2-8b692f5bc044 from this chassis (sb_readonly=0)
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.184 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:26 compute-2 ovn_controller[133834]: 2026-01-31T08:06:26Z|00445|binding|INFO|Setting lport c153b6ab-5a86-443d-ad94-95d7b82bf483 ovn-installed in OVS
Jan 31 08:06:26 compute-2 ovn_controller[133834]: 2026-01-31T08:06:26Z|00446|binding|INFO|Setting lport c153b6ab-5a86-443d-ad94-95d7b82bf483 up in Southbound
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.189 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:26 compute-2 NetworkManager[48999]: <info>  [1769846786.2183] manager: (tap13d49931-06): new Tun device (/org/freedesktop/NetworkManager/Devices/219)
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.238 226833 INFO nova.virt.libvirt.driver [-] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Instance destroyed successfully.
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.239 226833 DEBUG nova.objects.instance [None req-0797579f-0eb3-4d0f-ba1d-0581cc74238e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lazy-loading 'resources' on Instance uuid 960d50f3-beb3-4e6b-9fd6-04178a47e6a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.258 226833 DEBUG nova.virt.libvirt.vif [None req-0797579f-0eb3-4d0f-ba1d-0581cc74238e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:05:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-19898720',display_name='tempest-ListServersNegativeTestJSON-server-19898720-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-19898720-1',id=104,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:06:19Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='43b462f5b43d48b4a33a13b069618e4c',ramdisk_id='',reservation_id='r-c7zkmfmo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-1511652820',owner_user_name='tempest-ListServersNegativeTestJSON-1511652820-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:06:19Z,user_data=None,user_id='972b6e928f014e5394261f9c8655f1de',uuid=960d50f3-beb3-4e6b-9fd6-04178a47e6a0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "13d49931-062a-47ff-9767-085de35254fe", "address": "fa:16:3e:98:58:46", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d49931-06", "ovs_interfaceid": "13d49931-062a-47ff-9767-085de35254fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.259 226833 DEBUG nova.network.os_vif_util [None req-0797579f-0eb3-4d0f-ba1d-0581cc74238e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Converting VIF {"id": "13d49931-062a-47ff-9767-085de35254fe", "address": "fa:16:3e:98:58:46", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d49931-06", "ovs_interfaceid": "13d49931-062a-47ff-9767-085de35254fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.259 226833 DEBUG nova.network.os_vif_util [None req-0797579f-0eb3-4d0f-ba1d-0581cc74238e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:58:46,bridge_name='br-int',has_traffic_filtering=True,id=13d49931-062a-47ff-9767-085de35254fe,network=Network(f3091b0d-0fc9-4172-b2af-6d9c678c6569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13d49931-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.260 226833 DEBUG os_vif [None req-0797579f-0eb3-4d0f-ba1d-0581cc74238e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:58:46,bridge_name='br-int',has_traffic_filtering=True,id=13d49931-062a-47ff-9767-085de35254fe,network=Network(f3091b0d-0fc9-4172-b2af-6d9c678c6569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13d49931-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.261 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.262 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13d49931-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.264 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.264 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:26 compute-2 neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569[273399]: [NOTICE]   (273403) : haproxy version is 2.8.14-c23fe91
Jan 31 08:06:26 compute-2 neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569[273399]: [NOTICE]   (273403) : path to executable is /usr/sbin/haproxy
Jan 31 08:06:26 compute-2 neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569[273399]: [ALERT]    (273403) : Current worker (273405) exited with code 143 (Terminated)
Jan 31 08:06:26 compute-2 neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569[273399]: [WARNING]  (273403) : All workers exited. Exiting... (0)
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.267 226833 INFO os_vif [None req-0797579f-0eb3-4d0f-ba1d-0581cc74238e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:58:46,bridge_name='br-int',has_traffic_filtering=True,id=13d49931-062a-47ff-9767-085de35254fe,network=Network(f3091b0d-0fc9-4172-b2af-6d9c678c6569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13d49931-06')
Jan 31 08:06:26 compute-2 systemd[1]: libpod-4a710339dbcb456c5092d1ce666d627074f32ab8b6392d0bc24e1b986a4276df.scope: Deactivated successfully.
Jan 31 08:06:26 compute-2 podman[273744]: 2026-01-31 08:06:26.277551816 +0000 UTC m=+0.132684622 container died 4a710339dbcb456c5092d1ce666d627074f32ab8b6392d0bc24e1b986a4276df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.362 226833 DEBUG nova.compute.manager [req-6651c015-d404-4dd1-90a4-d81587999403 req-dc82352e-7e5a-42fa-83e0-50a25249b47a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Received event network-vif-unplugged-13d49931-062a-47ff-9767-085de35254fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:06:26 compute-2 systemd[1]: var-lib-containers-storage-overlay-64e15e6a8a9c0d69cd598e36c53eb579bc37aab7529211b993cfbbfbaf13927d-merged.mount: Deactivated successfully.
Jan 31 08:06:26 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4a710339dbcb456c5092d1ce666d627074f32ab8b6392d0bc24e1b986a4276df-userdata-shm.mount: Deactivated successfully.
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.362 226833 DEBUG oslo_concurrency.lockutils [req-6651c015-d404-4dd1-90a4-d81587999403 req-dc82352e-7e5a-42fa-83e0-50a25249b47a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "960d50f3-beb3-4e6b-9fd6-04178a47e6a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.363 226833 DEBUG oslo_concurrency.lockutils [req-6651c015-d404-4dd1-90a4-d81587999403 req-dc82352e-7e5a-42fa-83e0-50a25249b47a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "960d50f3-beb3-4e6b-9fd6-04178a47e6a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.364 226833 DEBUG oslo_concurrency.lockutils [req-6651c015-d404-4dd1-90a4-d81587999403 req-dc82352e-7e5a-42fa-83e0-50a25249b47a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "960d50f3-beb3-4e6b-9fd6-04178a47e6a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.364 226833 DEBUG nova.compute.manager [req-6651c015-d404-4dd1-90a4-d81587999403 req-dc82352e-7e5a-42fa-83e0-50a25249b47a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] No waiting events found dispatching network-vif-unplugged-13d49931-062a-47ff-9767-085de35254fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.364 226833 DEBUG nova.compute.manager [req-6651c015-d404-4dd1-90a4-d81587999403 req-dc82352e-7e5a-42fa-83e0-50a25249b47a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Received event network-vif-unplugged-13d49931-062a-47ff-9767-085de35254fe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:06:26 compute-2 podman[273744]: 2026-01-31 08:06:26.383716494 +0000 UTC m=+0.238849300 container cleanup 4a710339dbcb456c5092d1ce666d627074f32ab8b6392d0bc24e1b986a4276df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 08:06:26 compute-2 systemd[1]: libpod-conmon-4a710339dbcb456c5092d1ce666d627074f32ab8b6392d0bc24e1b986a4276df.scope: Deactivated successfully.
Jan 31 08:06:26 compute-2 podman[273822]: 2026-01-31 08:06:26.445435093 +0000 UTC m=+0.043758542 container remove 4a710339dbcb456c5092d1ce666d627074f32ab8b6392d0bc24e1b986a4276df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.449 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cc8dc85b-44fc-4916-9df6-45f3c3241da7]: (4, ('Sat Jan 31 08:06:26 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569 (4a710339dbcb456c5092d1ce666d627074f32ab8b6392d0bc24e1b986a4276df)\n4a710339dbcb456c5092d1ce666d627074f32ab8b6392d0bc24e1b986a4276df\nSat Jan 31 08:06:26 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569 (4a710339dbcb456c5092d1ce666d627074f32ab8b6392d0bc24e1b986a4276df)\n4a710339dbcb456c5092d1ce666d627074f32ab8b6392d0bc24e1b986a4276df\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.450 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[680ac365-9c5e-4b1f-b37a-4dc96ed21e2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.451 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3091b0d-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:26 compute-2 kernel: tapf3091b0d-00: left promiscuous mode
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.454 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.459 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.464 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[be7db5f7-7efb-49d5-9908-8a4d4ece9843]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.478 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7e3390d2-9b87-4e02-922e-dbd4e466a308]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.478 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d6083a53-2fd6-4bcb-a29c-13199dee2b59]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.489 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[910919ca-b999-4725-870b-8f18ced82200]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702998, 'reachable_time': 21273, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273837, 'error': None, 'target': 'ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:26 compute-2 systemd[1]: run-netns-ovnmeta\x2df3091b0d\x2d0fc9\x2d4172\x2db2af\x2d6d9c678c6569.mount: Deactivated successfully.
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.493 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.493 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[4294dabb-1791-4554-a920-6c4d18e7b21d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.493 143841 INFO neutron.agent.ovn.metadata.agent [-] Port c153b6ab-5a86-443d-ad94-95d7b82bf483 in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 unbound from our chassis
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.495 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.503 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[44ed13e9-1ec3-43e0-a4cd-d37350d045b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.503 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5cc2535f-01 in ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.506 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5cc2535f-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.506 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[832f6e1e-e711-49f7-a394-5826cf49cb46]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.507 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[45d99316-727f-4e31-a329-e05c43865b42]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.514 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[c4621b53-ee45-49ec-9342-4c956e82f2c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.534 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[97b83423-f4f3-4b97-b9f3-ef07239d2533]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.555 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[7f5d4e60-ec4b-4da3-bcc3-0497fd68d278]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:26 compute-2 podman[273758]: 2026-01-31 08:06:26.558122868 +0000 UTC m=+0.375652960 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 08:06:26 compute-2 NetworkManager[48999]: <info>  [1769846786.5643] manager: (tap5cc2535f-00): new Veth device (/org/freedesktop/NetworkManager/Devices/220)
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.563 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f21707da-2ef8-4d9e-8db2-0df35b283564]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.589 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[0846b817-b774-4365-a766-5cda8653dfac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.592 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[2c3c6eee-5ea4-45db-b480-002f29dbe411]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:26 compute-2 NetworkManager[48999]: <info>  [1769846786.6123] device (tap5cc2535f-00): carrier: link connected
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.617 226833 DEBUG nova.compute.manager [req-d8fcde81-3385-4bb7-bf1a-701614f67cc2 req-939f3a19-f5d4-4b8f-857c-a3829294a2e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Received event network-vif-plugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.617 226833 DEBUG oslo_concurrency.lockutils [req-d8fcde81-3385-4bb7-bf1a-701614f67cc2 req-939f3a19-f5d4-4b8f-857c-a3829294a2e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "66218154-3695-4970-8525-7eaae98f9f14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.618 226833 DEBUG oslo_concurrency.lockutils [req-d8fcde81-3385-4bb7-bf1a-701614f67cc2 req-939f3a19-f5d4-4b8f-857c-a3829294a2e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.618 226833 DEBUG oslo_concurrency.lockutils [req-d8fcde81-3385-4bb7-bf1a-701614f67cc2 req-939f3a19-f5d4-4b8f-857c-a3829294a2e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.617 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[5198e32a-d39f-454c-bee9-94c3dc3e9058]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.618 226833 DEBUG nova.compute.manager [req-d8fcde81-3385-4bb7-bf1a-701614f67cc2 req-939f3a19-f5d4-4b8f-857c-a3829294a2e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] No waiting events found dispatching network-vif-plugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.618 226833 WARNING nova.compute.manager [req-d8fcde81-3385-4bb7-bf1a-701614f67cc2 req-939f3a19-f5d4-4b8f-857c-a3829294a2e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Received unexpected event network-vif-plugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 for instance with vm_state active and task_state resize_finish.
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.630 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9827ad0b-630a-4791-b905-2e14d434b9bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc2535f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:76:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 137], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703908, 'reachable_time': 31437, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273870, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.642 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0abfe248-7ae1-4f77-8f5e-367b03c66da1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:76f8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703908, 'tstamp': 703908}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273871, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.654 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[bf59c8f2-4add-4487-a693-2801b6917f8d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc2535f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:76:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 137], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703908, 'reachable_time': 31437, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273872, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.676 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a3ed2326-9adc-4b91-8abc-091f1d811ac4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.719 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[aebafe55-304a-4e30-8d93-7d19bf946bba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.720 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc2535f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.720 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.721 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cc2535f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:26 compute-2 kernel: tap5cc2535f-00: entered promiscuous mode
Jan 31 08:06:26 compute-2 NetworkManager[48999]: <info>  [1769846786.7249] manager: (tap5cc2535f-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.725 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.727 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5cc2535f-00, col_values=(('external_ids', {'iface-id': 'ab077a7e-cc79-4948-8987-2cd87d88deff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:26 compute-2 ovn_controller[133834]: 2026-01-31T08:06:26Z|00447|binding|INFO|Releasing lport ab077a7e-cc79-4948-8987-2cd87d88deff from this chassis (sb_readonly=0)
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.729 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.730 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[af538dee-f901-416c-91d9-0fa31a62310b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.731 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:06:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:26.731 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'env', 'PROCESS_TAG=haproxy-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5cc2535f-0f8f-4713-a35c-9805048a29a8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:06:26 compute-2 nova_compute[226829]: 2026-01-31 08:06:26.735 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:26 compute-2 ceph-mon[77282]: pgmap v2085: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 59 KiB/s wr, 254 op/s
Jan 31 08:06:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1149175025' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1441244861' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:27 compute-2 podman[273905]: 2026-01-31 08:06:27.033674366 +0000 UTC m=+0.051318818 container create 16ae73c581d9927d713193c0a3fb90e999f52dd567bdd664e6d03be6ff5f526c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:06:27 compute-2 systemd[1]: Started libpod-conmon-16ae73c581d9927d713193c0a3fb90e999f52dd567bdd664e6d03be6ff5f526c.scope.
Jan 31 08:06:27 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:06:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871288dac33de78648179eaf62df45c9d74f76eb034be11bceb33701c3c14292/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:27 compute-2 podman[273905]: 2026-01-31 08:06:27.095691673 +0000 UTC m=+0.113336135 container init 16ae73c581d9927d713193c0a3fb90e999f52dd567bdd664e6d03be6ff5f526c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 08:06:27 compute-2 podman[273905]: 2026-01-31 08:06:27.013792625 +0000 UTC m=+0.031437097 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:06:27 compute-2 podman[273905]: 2026-01-31 08:06:27.102493247 +0000 UTC m=+0.120137699 container start 16ae73c581d9927d713193c0a3fb90e999f52dd567bdd664e6d03be6ff5f526c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:27 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[273921]: [NOTICE]   (273925) : New worker (273927) forked
Jan 31 08:06:27 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[273921]: [NOTICE]   (273925) : Loading success.
Jan 31 08:06:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:27.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.328 226833 INFO nova.virt.libvirt.driver [None req-0797579f-0eb3-4d0f-ba1d-0581cc74238e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Deleting instance files /var/lib/nova/instances/960d50f3-beb3-4e6b-9fd6-04178a47e6a0_del
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.330 226833 INFO nova.virt.libvirt.driver [None req-0797579f-0eb3-4d0f-ba1d-0581cc74238e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Deletion of /var/lib/nova/instances/960d50f3-beb3-4e6b-9fd6-04178a47e6a0_del complete
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.385 226833 INFO nova.compute.manager [None req-0797579f-0eb3-4d0f-ba1d-0581cc74238e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Took 1.38 seconds to destroy the instance on the hypervisor.
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.385 226833 DEBUG oslo.service.loopingcall [None req-0797579f-0eb3-4d0f-ba1d-0581cc74238e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.385 226833 DEBUG nova.compute.manager [-] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.386 226833 DEBUG nova.network.neutron [-] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.407 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Updating instance_info_cache with network_info: [{"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.426 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-66218154-3695-4970-8525-7eaae98f9f14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.427 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.428 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.428 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.428 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.479 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.480 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.480 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.480 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.481 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.808 226833 DEBUG nova.compute.manager [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.809 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Removed pending event for 66218154-3695-4970-8525-7eaae98f9f14 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.810 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846787.8078318, 66218154-3695-4970-8525-7eaae98f9f14 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.810 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] VM Resumed (Lifecycle Event)
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.815 226833 INFO nova.virt.libvirt.driver [-] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Instance running successfully.
Jan 31 08:06:27 compute-2 virtqemud[226546]: argument unsupported: QEMU guest agent is not configured
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.817 226833 DEBUG nova.virt.libvirt.guest [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.818 226833 DEBUG nova.virt.libvirt.driver [None req-e12ed678-b2bd-469d-b723-df8277e88cf0 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Jan 31 08:06:27 compute-2 ceph-mon[77282]: pgmap v2086: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 57 KiB/s wr, 259 op/s
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.855 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.862 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:06:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:06:27 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2964881060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.890 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.919 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.919 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846787.8099077, 66218154-3695-4970-8525-7eaae98f9f14 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.919 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] VM Started (Lifecycle Event)
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.965 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:06:27 compute-2 nova_compute[226829]: 2026-01-31 08:06:27.967 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:06:28 compute-2 nova_compute[226829]: 2026-01-31 08:06:28.024 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:06:28 compute-2 nova_compute[226829]: 2026-01-31 08:06:28.025 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:06:28 compute-2 nova_compute[226829]: 2026-01-31 08:06:28.165 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:06:28 compute-2 nova_compute[226829]: 2026-01-31 08:06:28.166 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4245MB free_disk=20.788639068603516GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:06:28 compute-2 nova_compute[226829]: 2026-01-31 08:06:28.166 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:28 compute-2 nova_compute[226829]: 2026-01-31 08:06:28.166 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:28.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:28 compute-2 nova_compute[226829]: 2026-01-31 08:06:28.374 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Applying migration context for instance 66218154-3695-4970-8525-7eaae98f9f14 as it has an incoming, in-progress migration f225e12d-735f-43e1-853e-0277afec1d20. Migration status is finished _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950
Jan 31 08:06:28 compute-2 nova_compute[226829]: 2026-01-31 08:06:28.374 226833 INFO nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Updating resource usage from migration f225e12d-735f-43e1-853e-0277afec1d20
Jan 31 08:06:28 compute-2 nova_compute[226829]: 2026-01-31 08:06:28.449 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 960d50f3-beb3-4e6b-9fd6-04178a47e6a0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:06:28 compute-2 nova_compute[226829]: 2026-01-31 08:06:28.450 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Migration f225e12d-735f-43e1-853e-0277afec1d20 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Jan 31 08:06:28 compute-2 nova_compute[226829]: 2026-01-31 08:06:28.450 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 66218154-3695-4970-8525-7eaae98f9f14 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:06:28 compute-2 nova_compute[226829]: 2026-01-31 08:06:28.450 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:06:28 compute-2 nova_compute[226829]: 2026-01-31 08:06:28.450 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=960MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:06:28 compute-2 nova_compute[226829]: 2026-01-31 08:06:28.552 226833 DEBUG nova.compute.manager [req-7f5dc87f-40ec-469a-ba0a-be21ec6340f5 req-b6213ef5-6274-46f0-8042-f2625f617b10 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Received event network-vif-plugged-13d49931-062a-47ff-9767-085de35254fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:06:28 compute-2 nova_compute[226829]: 2026-01-31 08:06:28.553 226833 DEBUG oslo_concurrency.lockutils [req-7f5dc87f-40ec-469a-ba0a-be21ec6340f5 req-b6213ef5-6274-46f0-8042-f2625f617b10 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "960d50f3-beb3-4e6b-9fd6-04178a47e6a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:28 compute-2 nova_compute[226829]: 2026-01-31 08:06:28.554 226833 DEBUG oslo_concurrency.lockutils [req-7f5dc87f-40ec-469a-ba0a-be21ec6340f5 req-b6213ef5-6274-46f0-8042-f2625f617b10 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "960d50f3-beb3-4e6b-9fd6-04178a47e6a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:28 compute-2 nova_compute[226829]: 2026-01-31 08:06:28.554 226833 DEBUG oslo_concurrency.lockutils [req-7f5dc87f-40ec-469a-ba0a-be21ec6340f5 req-b6213ef5-6274-46f0-8042-f2625f617b10 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "960d50f3-beb3-4e6b-9fd6-04178a47e6a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:28 compute-2 nova_compute[226829]: 2026-01-31 08:06:28.555 226833 DEBUG nova.compute.manager [req-7f5dc87f-40ec-469a-ba0a-be21ec6340f5 req-b6213ef5-6274-46f0-8042-f2625f617b10 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] No waiting events found dispatching network-vif-plugged-13d49931-062a-47ff-9767-085de35254fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:06:28 compute-2 nova_compute[226829]: 2026-01-31 08:06:28.556 226833 WARNING nova.compute.manager [req-7f5dc87f-40ec-469a-ba0a-be21ec6340f5 req-b6213ef5-6274-46f0-8042-f2625f617b10 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Received unexpected event network-vif-plugged-13d49931-062a-47ff-9767-085de35254fe for instance with vm_state active and task_state deleting.
Jan 31 08:06:28 compute-2 nova_compute[226829]: 2026-01-31 08:06:28.578 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:28 compute-2 nova_compute[226829]: 2026-01-31 08:06:28.763 226833 DEBUG nova.compute.manager [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Received event network-vif-plugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:06:28 compute-2 nova_compute[226829]: 2026-01-31 08:06:28.763 226833 DEBUG oslo_concurrency.lockutils [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "66218154-3695-4970-8525-7eaae98f9f14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:28 compute-2 nova_compute[226829]: 2026-01-31 08:06:28.764 226833 DEBUG oslo_concurrency.lockutils [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:28 compute-2 nova_compute[226829]: 2026-01-31 08:06:28.764 226833 DEBUG oslo_concurrency.lockutils [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:28 compute-2 nova_compute[226829]: 2026-01-31 08:06:28.764 226833 DEBUG nova.compute.manager [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] No waiting events found dispatching network-vif-plugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:06:28 compute-2 nova_compute[226829]: 2026-01-31 08:06:28.764 226833 WARNING nova.compute.manager [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Received unexpected event network-vif-plugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 for instance with vm_state resized and task_state None.
Jan 31 08:06:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2964881060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3759180012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:06:29 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3666056923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:29 compute-2 nova_compute[226829]: 2026-01-31 08:06:29.022 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:29 compute-2 nova_compute[226829]: 2026-01-31 08:06:29.028 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:06:29 compute-2 nova_compute[226829]: 2026-01-31 08:06:29.091 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:06:29 compute-2 nova_compute[226829]: 2026-01-31 08:06:29.132 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:06:29 compute-2 nova_compute[226829]: 2026-01-31 08:06:29.133 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.966s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:06:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:29.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:06:29 compute-2 nova_compute[226829]: 2026-01-31 08:06:29.193 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:06:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:06:29 compute-2 nova_compute[226829]: 2026-01-31 08:06:29.287 226833 DEBUG nova.network.neutron [-] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:06:29 compute-2 nova_compute[226829]: 2026-01-31 08:06:29.312 226833 INFO nova.compute.manager [-] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Took 1.93 seconds to deallocate network for instance.
Jan 31 08:06:29 compute-2 nova_compute[226829]: 2026-01-31 08:06:29.374 226833 DEBUG oslo_concurrency.lockutils [None req-0797579f-0eb3-4d0f-ba1d-0581cc74238e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:29 compute-2 nova_compute[226829]: 2026-01-31 08:06:29.375 226833 DEBUG oslo_concurrency.lockutils [None req-0797579f-0eb3-4d0f-ba1d-0581cc74238e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:29 compute-2 nova_compute[226829]: 2026-01-31 08:06:29.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:06:29 compute-2 nova_compute[226829]: 2026-01-31 08:06:29.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:06:29 compute-2 nova_compute[226829]: 2026-01-31 08:06:29.731 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:06:29 compute-2 nova_compute[226829]: 2026-01-31 08:06:29.884 226833 DEBUG oslo_concurrency.processutils [None req-0797579f-0eb3-4d0f-ba1d-0581cc74238e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3666056923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:30 compute-2 ceph-mon[77282]: pgmap v2087: 305 pgs: 305 active+clean; 474 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 2.1 MiB/s wr, 349 op/s
Jan 31 08:06:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3999485390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:30.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:06:30 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/599890779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:30 compute-2 nova_compute[226829]: 2026-01-31 08:06:30.315 226833 DEBUG oslo_concurrency.processutils [None req-0797579f-0eb3-4d0f-ba1d-0581cc74238e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:30 compute-2 nova_compute[226829]: 2026-01-31 08:06:30.320 226833 DEBUG nova.compute.provider_tree [None req-0797579f-0eb3-4d0f-ba1d-0581cc74238e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:06:30 compute-2 nova_compute[226829]: 2026-01-31 08:06:30.337 226833 DEBUG nova.scheduler.client.report [None req-0797579f-0eb3-4d0f-ba1d-0581cc74238e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:06:30 compute-2 nova_compute[226829]: 2026-01-31 08:06:30.360 226833 DEBUG oslo_concurrency.lockutils [None req-0797579f-0eb3-4d0f-ba1d-0581cc74238e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.985s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:30 compute-2 nova_compute[226829]: 2026-01-31 08:06:30.373 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:30 compute-2 nova_compute[226829]: 2026-01-31 08:06:30.400 226833 INFO nova.scheduler.client.report [None req-0797579f-0eb3-4d0f-ba1d-0581cc74238e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Deleted allocations for instance 960d50f3-beb3-4e6b-9fd6-04178a47e6a0
Jan 31 08:06:30 compute-2 nova_compute[226829]: 2026-01-31 08:06:30.448 226833 DEBUG nova.network.neutron [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Port c153b6ab-5a86-443d-ad94-95d7b82bf483 binding to destination host compute-2.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171
Jan 31 08:06:30 compute-2 nova_compute[226829]: 2026-01-31 08:06:30.448 226833 DEBUG oslo_concurrency.lockutils [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "refresh_cache-66218154-3695-4970-8525-7eaae98f9f14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:06:30 compute-2 nova_compute[226829]: 2026-01-31 08:06:30.448 226833 DEBUG oslo_concurrency.lockutils [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquired lock "refresh_cache-66218154-3695-4970-8525-7eaae98f9f14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:06:30 compute-2 nova_compute[226829]: 2026-01-31 08:06:30.449 226833 DEBUG nova.network.neutron [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:06:30 compute-2 nova_compute[226829]: 2026-01-31 08:06:30.474 226833 DEBUG oslo_concurrency.lockutils [None req-0797579f-0eb3-4d0f-ba1d-0581cc74238e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "960d50f3-beb3-4e6b-9fd6-04178a47e6a0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.471s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:31 compute-2 nova_compute[226829]: 2026-01-31 08:06:31.073 226833 DEBUG nova.compute.manager [req-4678f879-bce3-42d5-a3e9-27aa9de8b551 req-cb246dde-8611-4a20-8aa3-426c9bee99e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Received event network-vif-deleted-13d49931-062a-47ff-9767-085de35254fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:06:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/599890779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:31.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:31 compute-2 nova_compute[226829]: 2026-01-31 08:06:31.264 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:31 compute-2 ovn_controller[133834]: 2026-01-31T08:06:31Z|00448|binding|INFO|Releasing lport ab077a7e-cc79-4948-8987-2cd87d88deff from this chassis (sb_readonly=0)
Jan 31 08:06:31 compute-2 nova_compute[226829]: 2026-01-31 08:06:31.328 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:31 compute-2 sudo[274048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:06:31 compute-2 sudo[274048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:31 compute-2 sudo[274048]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:31 compute-2 sudo[274073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:06:31 compute-2 sudo[274073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:31 compute-2 sudo[274073]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:31 compute-2 sudo[274098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:06:31 compute-2 sudo[274098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:31 compute-2 sudo[274098]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:31 compute-2 sudo[274123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 08:06:31 compute-2 sudo[274123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:32.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:32 compute-2 nova_compute[226829]: 2026-01-31 08:06:32.612 226833 DEBUG nova.network.neutron [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Updating instance_info_cache with network_info: [{"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:06:32 compute-2 nova_compute[226829]: 2026-01-31 08:06:32.639 226833 DEBUG oslo_concurrency.lockutils [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Releasing lock "refresh_cache-66218154-3695-4970-8525-7eaae98f9f14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:06:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:33.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:33 compute-2 podman[274218]: 2026-01-31 08:06:33.216260641 +0000 UTC m=+1.146288905 container exec 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 08:06:33 compute-2 kernel: tapc153b6ab-5a (unregistering): left promiscuous mode
Jan 31 08:06:33 compute-2 NetworkManager[48999]: <info>  [1769846793.3846] device (tapc153b6ab-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:06:33 compute-2 nova_compute[226829]: 2026-01-31 08:06:33.397 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:33 compute-2 ovn_controller[133834]: 2026-01-31T08:06:33Z|00449|binding|INFO|Releasing lport c153b6ab-5a86-443d-ad94-95d7b82bf483 from this chassis (sb_readonly=0)
Jan 31 08:06:33 compute-2 ovn_controller[133834]: 2026-01-31T08:06:33Z|00450|binding|INFO|Setting lport c153b6ab-5a86-443d-ad94-95d7b82bf483 down in Southbound
Jan 31 08:06:33 compute-2 ovn_controller[133834]: 2026-01-31T08:06:33Z|00451|binding|INFO|Removing iface tapc153b6ab-5a ovn-installed in OVS
Jan 31 08:06:33 compute-2 nova_compute[226829]: 2026-01-31 08:06:33.401 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:33 compute-2 nova_compute[226829]: 2026-01-31 08:06:33.408 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:33 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:33.409 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:c3:66 10.100.0.12'], port_security=['fa:16:3e:f8:c3:66 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '66218154-3695-4970-8525-7eaae98f9f14', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7be0d68e-c4ff-4356-97f2-bd58246f6e46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.222', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=c153b6ab-5a86-443d-ad94-95d7b82bf483) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:06:33 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:33.410 143841 INFO neutron.agent.ovn.metadata.agent [-] Port c153b6ab-5a86-443d-ad94-95d7b82bf483 in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 unbound from our chassis
Jan 31 08:06:33 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:33.411 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5cc2535f-0f8f-4713-a35c-9805048a29a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:06:33 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:33.412 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7f57ca80-8c20-4ff0-bc75-2fa4071c9498]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:33 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:33.413 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 namespace which is not needed anymore
Jan 31 08:06:33 compute-2 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000066.scope: Deactivated successfully.
Jan 31 08:06:33 compute-2 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000066.scope: Consumed 6.576s CPU time.
Jan 31 08:06:33 compute-2 systemd-machined[195142]: Machine qemu-47-instance-00000066 terminated.
Jan 31 08:06:33 compute-2 podman[274218]: 2026-01-31 08:06:33.437277804 +0000 UTC m=+1.367306048 container exec_died 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:33 compute-2 nova_compute[226829]: 2026-01-31 08:06:33.502 226833 INFO nova.virt.libvirt.driver [-] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Instance destroyed successfully.
Jan 31 08:06:33 compute-2 nova_compute[226829]: 2026-01-31 08:06:33.502 226833 DEBUG nova.objects.instance [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'resources' on Instance uuid 66218154-3695-4970-8525-7eaae98f9f14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:06:33 compute-2 nova_compute[226829]: 2026-01-31 08:06:33.520 226833 DEBUG nova.virt.libvirt.vif [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:05:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1412363862',display_name='tempest-ServerActionsTestJSON-server-1412363862',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1412363862',id=102,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:06:27Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-fla0xmix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:06:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=66218154-3695-4970-8525-7eaae98f9f14,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:06:33 compute-2 nova_compute[226829]: 2026-01-31 08:06:33.520 226833 DEBUG nova.network.os_vif_util [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:06:33 compute-2 nova_compute[226829]: 2026-01-31 08:06:33.521 226833 DEBUG nova.network.os_vif_util [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:c3:66,bridge_name='br-int',has_traffic_filtering=True,id=c153b6ab-5a86-443d-ad94-95d7b82bf483,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc153b6ab-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:06:33 compute-2 nova_compute[226829]: 2026-01-31 08:06:33.522 226833 DEBUG os_vif [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:c3:66,bridge_name='br-int',has_traffic_filtering=True,id=c153b6ab-5a86-443d-ad94-95d7b82bf483,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc153b6ab-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:06:33 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[273921]: [NOTICE]   (273925) : haproxy version is 2.8.14-c23fe91
Jan 31 08:06:33 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[273921]: [NOTICE]   (273925) : path to executable is /usr/sbin/haproxy
Jan 31 08:06:33 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[273921]: [WARNING]  (273925) : Exiting Master process...
Jan 31 08:06:33 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[273921]: [WARNING]  (273925) : Exiting Master process...
Jan 31 08:06:33 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[273921]: [ALERT]    (273925) : Current worker (273927) exited with code 143 (Terminated)
Jan 31 08:06:33 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[273921]: [WARNING]  (273925) : All workers exited. Exiting... (0)
Jan 31 08:06:33 compute-2 nova_compute[226829]: 2026-01-31 08:06:33.524 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:33 compute-2 nova_compute[226829]: 2026-01-31 08:06:33.524 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc153b6ab-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:33 compute-2 systemd[1]: libpod-16ae73c581d9927d713193c0a3fb90e999f52dd567bdd664e6d03be6ff5f526c.scope: Deactivated successfully.
Jan 31 08:06:33 compute-2 nova_compute[226829]: 2026-01-31 08:06:33.526 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:33 compute-2 nova_compute[226829]: 2026-01-31 08:06:33.527 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:33 compute-2 nova_compute[226829]: 2026-01-31 08:06:33.529 226833 INFO os_vif [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:c3:66,bridge_name='br-int',has_traffic_filtering=True,id=c153b6ab-5a86-443d-ad94-95d7b82bf483,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc153b6ab-5a')
Jan 31 08:06:33 compute-2 nova_compute[226829]: 2026-01-31 08:06:33.532 226833 DEBUG oslo_concurrency.lockutils [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:33 compute-2 nova_compute[226829]: 2026-01-31 08:06:33.533 226833 DEBUG oslo_concurrency.lockutils [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:33 compute-2 podman[274276]: 2026-01-31 08:06:33.533402758 +0000 UTC m=+0.047494163 container died 16ae73c581d9927d713193c0a3fb90e999f52dd567bdd664e6d03be6ff5f526c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:06:33 compute-2 nova_compute[226829]: 2026-01-31 08:06:33.555 226833 DEBUG nova.objects.instance [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'migration_context' on Instance uuid 66218154-3695-4970-8525-7eaae98f9f14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:06:33 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-16ae73c581d9927d713193c0a3fb90e999f52dd567bdd664e6d03be6ff5f526c-userdata-shm.mount: Deactivated successfully.
Jan 31 08:06:33 compute-2 systemd[1]: var-lib-containers-storage-overlay-871288dac33de78648179eaf62df45c9d74f76eb034be11bceb33701c3c14292-merged.mount: Deactivated successfully.
Jan 31 08:06:33 compute-2 podman[274276]: 2026-01-31 08:06:33.561643037 +0000 UTC m=+0.075734422 container cleanup 16ae73c581d9927d713193c0a3fb90e999f52dd567bdd664e6d03be6ff5f526c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:06:33 compute-2 systemd[1]: libpod-conmon-16ae73c581d9927d713193c0a3fb90e999f52dd567bdd664e6d03be6ff5f526c.scope: Deactivated successfully.
Jan 31 08:06:33 compute-2 podman[274341]: 2026-01-31 08:06:33.622379309 +0000 UTC m=+0.043310630 container remove 16ae73c581d9927d713193c0a3fb90e999f52dd567bdd664e6d03be6ff5f526c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 08:06:33 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:33.626 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[dfadbbca-f7d4-4d60-b3d4-28a6006d7863]: (4, ('Sat Jan 31 08:06:33 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 (16ae73c581d9927d713193c0a3fb90e999f52dd567bdd664e6d03be6ff5f526c)\n16ae73c581d9927d713193c0a3fb90e999f52dd567bdd664e6d03be6ff5f526c\nSat Jan 31 08:06:33 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 (16ae73c581d9927d713193c0a3fb90e999f52dd567bdd664e6d03be6ff5f526c)\n16ae73c581d9927d713193c0a3fb90e999f52dd567bdd664e6d03be6ff5f526c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:33 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:33.628 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3593db5e-4a90-45f6-89fd-4a64e909808f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:33 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:33.629 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc2535f-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:33 compute-2 kernel: tap5cc2535f-00: left promiscuous mode
Jan 31 08:06:33 compute-2 nova_compute[226829]: 2026-01-31 08:06:33.632 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:33 compute-2 nova_compute[226829]: 2026-01-31 08:06:33.635 226833 DEBUG oslo_concurrency.processutils [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:33 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:33.640 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[efd5aa7f-686d-4654-b6c3-cd0a79f45298]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:33 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:33.655 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[16fd53e5-c9b8-4240-b8e7-529042c9e8c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:33 compute-2 nova_compute[226829]: 2026-01-31 08:06:33.656 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:33 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:33.656 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[21e84cd6-89d6-4394-a8e5-c01cf689c374]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:33 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:33.668 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8a6fe58d-bc29-4068-a6a6-b713c4893a71]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703902, 'reachable_time': 44104, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274380, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:33 compute-2 systemd[1]: run-netns-ovnmeta\x2d5cc2535f\x2d0f8f\x2d4713\x2da35c\x2d9805048a29a8.mount: Deactivated successfully.
Jan 31 08:06:33 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:33.672 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:06:33 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:33.672 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[90ff587d-d3a0-4814-8657-2e0e58727bc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:33 compute-2 nova_compute[226829]: 2026-01-31 08:06:33.697 226833 DEBUG nova.compute.manager [req-c3a062d4-a981-4901-a523-af0a9be8752c req-5b9d3323-35ea-41f7-b0fe-a4a63f3bf34c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Received event network-vif-unplugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:06:33 compute-2 nova_compute[226829]: 2026-01-31 08:06:33.697 226833 DEBUG oslo_concurrency.lockutils [req-c3a062d4-a981-4901-a523-af0a9be8752c req-5b9d3323-35ea-41f7-b0fe-a4a63f3bf34c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "66218154-3695-4970-8525-7eaae98f9f14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:33 compute-2 nova_compute[226829]: 2026-01-31 08:06:33.697 226833 DEBUG oslo_concurrency.lockutils [req-c3a062d4-a981-4901-a523-af0a9be8752c req-5b9d3323-35ea-41f7-b0fe-a4a63f3bf34c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:33 compute-2 nova_compute[226829]: 2026-01-31 08:06:33.697 226833 DEBUG oslo_concurrency.lockutils [req-c3a062d4-a981-4901-a523-af0a9be8752c req-5b9d3323-35ea-41f7-b0fe-a4a63f3bf34c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:33 compute-2 nova_compute[226829]: 2026-01-31 08:06:33.698 226833 DEBUG nova.compute.manager [req-c3a062d4-a981-4901-a523-af0a9be8752c req-5b9d3323-35ea-41f7-b0fe-a4a63f3bf34c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] No waiting events found dispatching network-vif-unplugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:06:33 compute-2 nova_compute[226829]: 2026-01-31 08:06:33.698 226833 WARNING nova.compute.manager [req-c3a062d4-a981-4901-a523-af0a9be8752c req-5b9d3323-35ea-41f7-b0fe-a4a63f3bf34c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Received unexpected event network-vif-unplugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 for instance with vm_state resized and task_state resize_reverting.
Jan 31 08:06:33 compute-2 ceph-mon[77282]: pgmap v2088: 305 pgs: 305 active+clean; 478 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 2.5 MiB/s wr, 378 op/s
Jan 31 08:06:33 compute-2 podman[274468]: 2026-01-31 08:06:33.936373961 +0000 UTC m=+0.057406183 container exec f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 08:06:33 compute-2 podman[274468]: 2026-01-31 08:06:33.946253519 +0000 UTC m=+0.067285711 container exec_died f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 08:06:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:06:34 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2213965646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:34 compute-2 nova_compute[226829]: 2026-01-31 08:06:34.048 226833 DEBUG oslo_concurrency.processutils [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:34 compute-2 nova_compute[226829]: 2026-01-31 08:06:34.054 226833 DEBUG nova.compute.provider_tree [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:06:34 compute-2 nova_compute[226829]: 2026-01-31 08:06:34.106 226833 DEBUG nova.scheduler.client.report [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:06:34 compute-2 podman[274533]: 2026-01-31 08:06:34.125387233 +0000 UTC m=+0.051449120 container exec 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=keepalived, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public)
Jan 31 08:06:34 compute-2 podman[274533]: 2026-01-31 08:06:34.141401819 +0000 UTC m=+0.067463696 container exec_died 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, architecture=x86_64, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, vendor=Red Hat, Inc., name=keepalived, version=2.2.4, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.openshift.expose-services=)
Jan 31 08:06:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:06:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:34.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:06:34 compute-2 sudo[274123]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:34 compute-2 nova_compute[226829]: 2026-01-31 08:06:34.219 226833 DEBUG oslo_concurrency.lockutils [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:06:34 compute-2 sudo[274564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:06:34 compute-2 sudo[274564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:34 compute-2 sudo[274564]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:34 compute-2 sudo[274589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:06:34 compute-2 sudo[274589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:34 compute-2 sudo[274589]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:34 compute-2 sudo[274614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:06:34 compute-2 sudo[274614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:34 compute-2 sudo[274614]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:34 compute-2 sudo[274639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:06:34 compute-2 sudo[274639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:34 compute-2 nova_compute[226829]: 2026-01-31 08:06:34.472 226833 INFO nova.compute.manager [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Swapping old allocation on dict_keys(['2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc']) held by migration f225e12d-735f-43e1-853e-0277afec1d20 for instance
Jan 31 08:06:34 compute-2 nova_compute[226829]: 2026-01-31 08:06:34.528 226833 DEBUG nova.scheduler.client.report [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Overwriting current allocation {'allocations': {'2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc': {'resources': {'VCPU': 1, 'MEMORY_MB': 192, 'DISK_GB': 1}, 'generation': 53}}, 'project_id': '1fcec9ca13964c7191134db4420ab049', 'user_id': 'd9ed446fb2cf4fc0a4e619c6c766fddc', 'consumer_generation': 1} on consumer 66218154-3695-4970-8525-7eaae98f9f14 move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
Jan 31 08:06:34 compute-2 ceph-mon[77282]: pgmap v2089: 305 pgs: 305 active+clean; 482 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 2.6 MiB/s wr, 366 op/s
Jan 31 08:06:34 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:06:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2213965646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:34 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:06:34 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:06:34 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:06:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3270420668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #94. Immutable memtables: 0.
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:34.760179) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 94
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846794760219, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 409, "num_deletes": 256, "total_data_size": 428069, "memory_usage": 437312, "flush_reason": "Manual Compaction"}
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #95: started
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846794763927, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 95, "file_size": 282570, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49174, "largest_seqno": 49578, "table_properties": {"data_size": 280152, "index_size": 518, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6039, "raw_average_key_size": 18, "raw_value_size": 275223, "raw_average_value_size": 846, "num_data_blocks": 21, "num_entries": 325, "num_filter_entries": 325, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846785, "oldest_key_time": 1769846785, "file_creation_time": 1769846794, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 3807 microseconds, and 1564 cpu microseconds.
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:34.763982) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #95: 282570 bytes OK
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:34.764003) [db/memtable_list.cc:519] [default] Level-0 commit table #95 started
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:34.765359) [db/memtable_list.cc:722] [default] Level-0 commit table #95: memtable #1 done
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:34.765377) EVENT_LOG_v1 {"time_micros": 1769846794765371, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:34.765394) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 425382, prev total WAL file size 425382, number of live WAL files 2.
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000091.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:34.765843) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353036' seq:72057594037927935, type:22 .. '6C6F676D0031373538' seq:0, type:0; will stop at (end)
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [95(275KB)], [93(10MB)]
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846794765906, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [95], "files_L6": [93], "score": -1, "input_data_size": 10867940, "oldest_snapshot_seqno": -1}
Jan 31 08:06:34 compute-2 sudo[274639]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #96: 7329 keys, 10720241 bytes, temperature: kUnknown
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846794821066, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 96, "file_size": 10720241, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10672642, "index_size": 28159, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18373, "raw_key_size": 189668, "raw_average_key_size": 25, "raw_value_size": 10543073, "raw_average_value_size": 1438, "num_data_blocks": 1116, "num_entries": 7329, "num_filter_entries": 7329, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769846794, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 96, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:34.821527) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 10720241 bytes
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:34.823114) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.9 rd, 193.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.1 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(76.4) write-amplify(37.9) OK, records in: 7854, records dropped: 525 output_compression: NoCompression
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:34.823130) EVENT_LOG_v1 {"time_micros": 1769846794823122, "job": 58, "event": "compaction_finished", "compaction_time_micros": 55475, "compaction_time_cpu_micros": 18520, "output_level": 6, "num_output_files": 1, "total_output_size": 10720241, "num_input_records": 7854, "num_output_records": 7329, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846794823381, "job": 58, "event": "table_file_deletion", "file_number": 95}
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000093.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846794824180, "job": 58, "event": "table_file_deletion", "file_number": 93}
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:34.765756) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:34.824221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:34.824227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:34.824228) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:34.824230) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:34 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:34.824231) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:34 compute-2 nova_compute[226829]: 2026-01-31 08:06:34.914 226833 DEBUG oslo_concurrency.lockutils [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "refresh_cache-66218154-3695-4970-8525-7eaae98f9f14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:06:34 compute-2 nova_compute[226829]: 2026-01-31 08:06:34.914 226833 DEBUG oslo_concurrency.lockutils [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquired lock "refresh_cache-66218154-3695-4970-8525-7eaae98f9f14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:06:34 compute-2 nova_compute[226829]: 2026-01-31 08:06:34.915 226833 DEBUG nova.network.neutron [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:06:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:35.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:35 compute-2 nova_compute[226829]: 2026-01-31 08:06:35.374 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:35 compute-2 nova_compute[226829]: 2026-01-31 08:06:35.811 226833 DEBUG nova.compute.manager [req-e459570e-1d45-4e30-90c3-a34775c80cd5 req-c155c1f4-bd23-495d-ae8e-ee7555e807b2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Received event network-vif-plugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:06:35 compute-2 nova_compute[226829]: 2026-01-31 08:06:35.811 226833 DEBUG oslo_concurrency.lockutils [req-e459570e-1d45-4e30-90c3-a34775c80cd5 req-c155c1f4-bd23-495d-ae8e-ee7555e807b2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "66218154-3695-4970-8525-7eaae98f9f14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:35 compute-2 nova_compute[226829]: 2026-01-31 08:06:35.811 226833 DEBUG oslo_concurrency.lockutils [req-e459570e-1d45-4e30-90c3-a34775c80cd5 req-c155c1f4-bd23-495d-ae8e-ee7555e807b2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:35 compute-2 nova_compute[226829]: 2026-01-31 08:06:35.811 226833 DEBUG oslo_concurrency.lockutils [req-e459570e-1d45-4e30-90c3-a34775c80cd5 req-c155c1f4-bd23-495d-ae8e-ee7555e807b2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:35 compute-2 nova_compute[226829]: 2026-01-31 08:06:35.812 226833 DEBUG nova.compute.manager [req-e459570e-1d45-4e30-90c3-a34775c80cd5 req-c155c1f4-bd23-495d-ae8e-ee7555e807b2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] No waiting events found dispatching network-vif-plugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:06:35 compute-2 nova_compute[226829]: 2026-01-31 08:06:35.812 226833 WARNING nova.compute.manager [req-e459570e-1d45-4e30-90c3-a34775c80cd5 req-c155c1f4-bd23-495d-ae8e-ee7555e807b2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Received unexpected event network-vif-plugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 for instance with vm_state resized and task_state resize_reverting.
Jan 31 08:06:35 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:06:35 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:06:35 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:06:35 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:06:35 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:06:35 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:06:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:36.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:36 compute-2 nova_compute[226829]: 2026-01-31 08:06:36.731 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:06:36 compute-2 nova_compute[226829]: 2026-01-31 08:06:36.732 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:06:37 compute-2 ceph-mon[77282]: pgmap v2090: 305 pgs: 305 active+clean; 492 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.0 MiB/s wr, 295 op/s
Jan 31 08:06:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:37.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:37 compute-2 nova_compute[226829]: 2026-01-31 08:06:37.409 226833 DEBUG nova.network.neutron [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Updating instance_info_cache with network_info: [{"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:06:37 compute-2 nova_compute[226829]: 2026-01-31 08:06:37.447 226833 DEBUG oslo_concurrency.lockutils [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Releasing lock "refresh_cache-66218154-3695-4970-8525-7eaae98f9f14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:06:37 compute-2 nova_compute[226829]: 2026-01-31 08:06:37.449 226833 DEBUG nova.virt.libvirt.driver [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843
Jan 31 08:06:37 compute-2 nova_compute[226829]: 2026-01-31 08:06:37.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:06:37 compute-2 nova_compute[226829]: 2026-01-31 08:06:37.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:06:37 compute-2 nova_compute[226829]: 2026-01-31 08:06:37.814 226833 DEBUG nova.storage.rbd_utils [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rolling back rbd image(66218154-3695-4970-8525-7eaae98f9f14_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505
Jan 31 08:06:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:38.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:38 compute-2 nova_compute[226829]: 2026-01-31 08:06:38.527 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:06:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:39.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:06:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:06:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:40.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:40 compute-2 nova_compute[226829]: 2026-01-31 08:06:40.376 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:40 compute-2 ceph-mon[77282]: pgmap v2091: 305 pgs: 305 active+clean; 477 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.2 MiB/s wr, 309 op/s
Jan 31 08:06:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2586767570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:40 compute-2 nova_compute[226829]: 2026-01-31 08:06:40.808 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:06:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:41.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:41 compute-2 nova_compute[226829]: 2026-01-31 08:06:41.237 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846786.236053, 960d50f3-beb3-4e6b-9fd6-04178a47e6a0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:06:41 compute-2 nova_compute[226829]: 2026-01-31 08:06:41.237 226833 INFO nova.compute.manager [-] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] VM Stopped (Lifecycle Event)
Jan 31 08:06:41 compute-2 nova_compute[226829]: 2026-01-31 08:06:41.327 226833 DEBUG nova.compute.manager [None req-47d88c42-6589-48f7-a591-dfd6225b7477 - - - - - -] [instance: 960d50f3-beb3-4e6b-9fd6-04178a47e6a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:06:41 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/641212261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:41 compute-2 ceph-mon[77282]: pgmap v2092: 305 pgs: 305 active+clean; 412 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.2 MiB/s wr, 345 op/s
Jan 31 08:06:41 compute-2 nova_compute[226829]: 2026-01-31 08:06:41.556 226833 DEBUG nova.storage.rbd_utils [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] removing snapshot(nova-resize) on rbd image(66218154-3695-4970-8525-7eaae98f9f14_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 08:06:42 compute-2 sudo[274755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:06:42 compute-2 sudo[274755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:42 compute-2 sudo[274755]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:42.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:42 compute-2 sudo[274780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:06:42 compute-2 sudo[274780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:42 compute-2 sudo[274780]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:43.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:43 compute-2 ceph-mon[77282]: pgmap v2093: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.5 MiB/s wr, 223 op/s
Jan 31 08:06:43 compute-2 nova_compute[226829]: 2026-01-31 08:06:43.417 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:43 compute-2 nova_compute[226829]: 2026-01-31 08:06:43.467 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:43 compute-2 nova_compute[226829]: 2026-01-31 08:06:43.529 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e261 e261: 3 total, 3 up, 3 in
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.102 226833 DEBUG nova.virt.libvirt.driver [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Start _get_guest_xml network_info=[{"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.106 226833 WARNING nova.virt.libvirt.driver [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.116 226833 DEBUG nova.virt.libvirt.host [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.117 226833 DEBUG nova.virt.libvirt.host [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.120 226833 DEBUG nova.virt.libvirt.host [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.120 226833 DEBUG nova.virt.libvirt.host [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.121 226833 DEBUG nova.virt.libvirt.driver [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.121 226833 DEBUG nova.virt.hardware [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.122 226833 DEBUG nova.virt.hardware [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.122 226833 DEBUG nova.virt.hardware [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.122 226833 DEBUG nova.virt.hardware [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.122 226833 DEBUG nova.virt.hardware [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.122 226833 DEBUG nova.virt.hardware [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.122 226833 DEBUG nova.virt.hardware [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.123 226833 DEBUG nova.virt.hardware [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.123 226833 DEBUG nova.virt.hardware [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.123 226833 DEBUG nova.virt.hardware [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.123 226833 DEBUG nova.virt.hardware [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.124 226833 DEBUG nova.objects.instance [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 66218154-3695-4970-8525-7eaae98f9f14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:06:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:44.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.308 226833 DEBUG oslo_concurrency.processutils [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.747 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:06:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:06:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/507011724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.895 226833 DEBUG oslo_concurrency.processutils [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.938 226833 WARNING nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.938 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Triggering sync for uuid 66218154-3695-4970-8525-7eaae98f9f14 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.939 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "66218154-3695-4970-8525-7eaae98f9f14" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.939 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "66218154-3695-4970-8525-7eaae98f9f14" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.940 226833 INFO nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.940 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "66218154-3695-4970-8525-7eaae98f9f14" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:44 compute-2 nova_compute[226829]: 2026-01-31 08:06:44.950 226833 DEBUG oslo_concurrency.processutils [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:45 compute-2 ceph-mon[77282]: pgmap v2094: 305 pgs: 305 active+clean; 359 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 151 op/s
Jan 31 08:06:45 compute-2 ceph-mon[77282]: osdmap e261: 3 total, 3 up, 3 in
Jan 31 08:06:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:45.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:45 compute-2 podman[274867]: 2026-01-31 08:06:45.189816892 +0000 UTC m=+0.074628442 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, 
org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Jan 31 08:06:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:06:45 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1345138318' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.362 226833 DEBUG oslo_concurrency.processutils [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.364 226833 DEBUG nova.virt.libvirt.vif [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:05:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1412363862',display_name='tempest-ServerActionsTestJSON-server-1412363862',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1412363862',id=102,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:06:27Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-fla0xmix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:06:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=66218154-3695-4970-8525-7eaae98f9f14,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.365 226833 DEBUG nova.network.os_vif_util [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.366 226833 DEBUG nova.network.os_vif_util [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:c3:66,bridge_name='br-int',has_traffic_filtering=True,id=c153b6ab-5a86-443d-ad94-95d7b82bf483,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc153b6ab-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.369 226833 DEBUG nova.virt.libvirt.driver [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:06:45 compute-2 nova_compute[226829]:   <uuid>66218154-3695-4970-8525-7eaae98f9f14</uuid>
Jan 31 08:06:45 compute-2 nova_compute[226829]:   <name>instance-00000066</name>
Jan 31 08:06:45 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:06:45 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:06:45 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <nova:name>tempest-ServerActionsTestJSON-server-1412363862</nova:name>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:06:44</nova:creationTime>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:06:45 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:06:45 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:06:45 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:06:45 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:06:45 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:06:45 compute-2 nova_compute[226829]:         <nova:user uuid="d9ed446fb2cf4fc0a4e619c6c766fddc">tempest-ServerActionsTestJSON-1391450973-project-member</nova:user>
Jan 31 08:06:45 compute-2 nova_compute[226829]:         <nova:project uuid="1fcec9ca13964c7191134db4420ab049">tempest-ServerActionsTestJSON-1391450973</nova:project>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:06:45 compute-2 nova_compute[226829]:         <nova:port uuid="c153b6ab-5a86-443d-ad94-95d7b82bf483">
Jan 31 08:06:45 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:06:45 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:06:45 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <system>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <entry name="serial">66218154-3695-4970-8525-7eaae98f9f14</entry>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <entry name="uuid">66218154-3695-4970-8525-7eaae98f9f14</entry>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     </system>
Jan 31 08:06:45 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:06:45 compute-2 nova_compute[226829]:   <os>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:   </os>
Jan 31 08:06:45 compute-2 nova_compute[226829]:   <features>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:   </features>
Jan 31 08:06:45 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:06:45 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:06:45 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/66218154-3695-4970-8525-7eaae98f9f14_disk">
Jan 31 08:06:45 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       </source>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:06:45 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/66218154-3695-4970-8525-7eaae98f9f14_disk.config">
Jan 31 08:06:45 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       </source>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:06:45 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:f8:c3:66"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <target dev="tapc153b6ab-5a"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/66218154-3695-4970-8525-7eaae98f9f14/console.log" append="off"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <video>
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     </video>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <input type="keyboard" bus="usb"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:06:45 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:06:45 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:06:45 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:06:45 compute-2 nova_compute[226829]: </domain>
Jan 31 08:06:45 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.370 226833 DEBUG nova.compute.manager [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Preparing to wait for external event network-vif-plugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.370 226833 DEBUG oslo_concurrency.lockutils [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "66218154-3695-4970-8525-7eaae98f9f14-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.371 226833 DEBUG oslo_concurrency.lockutils [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.371 226833 DEBUG oslo_concurrency.lockutils [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.372 226833 DEBUG nova.virt.libvirt.vif [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:05:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1412363862',display_name='tempest-ServerActionsTestJSON-server-1412363862',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1412363862',id=102,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:06:27Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-fla0xmix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:06:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=66218154-3695-4970-8525-7eaae98f9f14,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.372 226833 DEBUG nova.network.os_vif_util [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.373 226833 DEBUG nova.network.os_vif_util [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:c3:66,bridge_name='br-int',has_traffic_filtering=True,id=c153b6ab-5a86-443d-ad94-95d7b82bf483,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc153b6ab-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.373 226833 DEBUG os_vif [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:c3:66,bridge_name='br-int',has_traffic_filtering=True,id=c153b6ab-5a86-443d-ad94-95d7b82bf483,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc153b6ab-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.374 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.375 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.375 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.381 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.381 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc153b6ab-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.381 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc153b6ab-5a, col_values=(('external_ids', {'iface-id': 'c153b6ab-5a86-443d-ad94-95d7b82bf483', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:c3:66', 'vm-uuid': '66218154-3695-4970-8525-7eaae98f9f14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.382 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:45 compute-2 NetworkManager[48999]: <info>  [1769846805.3838] manager: (tapc153b6ab-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/222)
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.387 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.388 226833 INFO os_vif [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:c3:66,bridge_name='br-int',has_traffic_filtering=True,id=c153b6ab-5a86-443d-ad94-95d7b82bf483,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc153b6ab-5a')
Jan 31 08:06:45 compute-2 kernel: tapc153b6ab-5a: entered promiscuous mode
Jan 31 08:06:45 compute-2 NetworkManager[48999]: <info>  [1769846805.5402] manager: (tapc153b6ab-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/223)
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.540 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:45 compute-2 ovn_controller[133834]: 2026-01-31T08:06:45Z|00452|binding|INFO|Claiming lport c153b6ab-5a86-443d-ad94-95d7b82bf483 for this chassis.
Jan 31 08:06:45 compute-2 ovn_controller[133834]: 2026-01-31T08:06:45Z|00453|binding|INFO|c153b6ab-5a86-443d-ad94-95d7b82bf483: Claiming fa:16:3e:f8:c3:66 10.100.0.12
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.544 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.546 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.552 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:45 compute-2 NetworkManager[48999]: <info>  [1769846805.5627] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Jan 31 08:06:45 compute-2 NetworkManager[48999]: <info>  [1769846805.5637] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Jan 31 08:06:45 compute-2 systemd-udevd[274912]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.561 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:45 compute-2 systemd-machined[195142]: New machine qemu-48-instance-00000066.
Jan 31 08:06:45 compute-2 NetworkManager[48999]: <info>  [1769846805.5722] device (tapc153b6ab-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:06:45 compute-2 NetworkManager[48999]: <info>  [1769846805.5730] device (tapc153b6ab-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:06:45 compute-2 systemd[1]: Started Virtual Machine qemu-48-instance-00000066.
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:45.604 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:c3:66 10.100.0.12'], port_security=['fa:16:3e:f8:c3:66 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '66218154-3695-4970-8525-7eaae98f9f14', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '7', 'neutron:security_group_ids': '7be0d68e-c4ff-4356-97f2-bd58246f6e46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.222'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=c153b6ab-5a86-443d-ad94-95d7b82bf483) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:45.607 143841 INFO neutron.agent.ovn.metadata.agent [-] Port c153b6ab-5a86-443d-ad94-95d7b82bf483 in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 bound to our chassis
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:45.609 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.616 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:45.627 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[fcf95068-9e51-4061-96dd-9728b4acf31f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:45.629 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5cc2535f-01 in ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.630 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:45 compute-2 ovn_controller[133834]: 2026-01-31T08:06:45Z|00454|binding|INFO|Setting lport c153b6ab-5a86-443d-ad94-95d7b82bf483 ovn-installed in OVS
Jan 31 08:06:45 compute-2 ovn_controller[133834]: 2026-01-31T08:06:45Z|00455|binding|INFO|Setting lport c153b6ab-5a86-443d-ad94-95d7b82bf483 up in Southbound
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.632 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:45.632 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5cc2535f-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:45.633 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[105f556b-6d15-4536-a176-81a4bfca497c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:45.634 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8caedbbb-0405-47cc-9ae9-622d725e02a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:45.645 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[d93e69af-d603-41d8-bfca-a8ec4b9ef961]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:45.655 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c84c5d47-4975-417a-ab92-4916866a2cf7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:45.680 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[834a1ce8-34e2-419c-aabf-aa25132a1dd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:45 compute-2 NetworkManager[48999]: <info>  [1769846805.6866] manager: (tap5cc2535f-00): new Veth device (/org/freedesktop/NetworkManager/Devices/226)
Jan 31 08:06:45 compute-2 systemd-udevd[274914]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:45.685 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6a76700c-2f61-4a70-ae56-98bb6e1c6dca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:45.707 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[edf99c9f-08b2-4916-93a9-13ef5dcfb7ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:45.710 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[2c397197-80a1-433a-874a-ae15e1933fbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:45 compute-2 NetworkManager[48999]: <info>  [1769846805.7232] device (tap5cc2535f-00): carrier: link connected
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:45.726 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[ae88cabf-c8bd-4c81-bb87-f2727efda0b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:45.737 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[af3a8239-857e-49f5-86fe-abc2153589e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc2535f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:76:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 705819, 'reachable_time': 41023, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274945, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:45.747 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[eb79e87f-a55a-4ce4-abf0-3d90c173fb95]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:76f8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 705819, 'tstamp': 705819}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274946, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:45.774 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[eb689805-4aa8-4d89-8900-264f98aa7d61]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc2535f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:76:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 705819, 'reachable_time': 41023, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274947, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:45.795 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9ba27b09-833e-4d82-9af0-4de75793567e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:45.832 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4359e37a-0577-47f0-94c8-70c0bac5d1a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:45.834 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc2535f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:45.834 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:45.835 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cc2535f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.837 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:45 compute-2 kernel: tap5cc2535f-00: entered promiscuous mode
Jan 31 08:06:45 compute-2 NetworkManager[48999]: <info>  [1769846805.8389] manager: (tap5cc2535f-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/227)
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.839 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:45.840 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5cc2535f-00, col_values=(('external_ids', {'iface-id': 'ab077a7e-cc79-4948-8987-2cd87d88deff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.841 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:45 compute-2 ovn_controller[133834]: 2026-01-31T08:06:45Z|00456|binding|INFO|Releasing lport ab077a7e-cc79-4948-8987-2cd87d88deff from this chassis (sb_readonly=0)
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:45.843 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:45.844 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ddf5dc2b-0f02-4ada-8c6d-b2f12f3fae32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:45.844 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:06:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:45.845 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'env', 'PROCESS_TAG=haproxy-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5cc2535f-0f8f-4713-a35c-9805048a29a8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:06:45 compute-2 nova_compute[226829]: 2026-01-31 08:06:45.846 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/507011724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1300736106' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:06:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1300736106' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:06:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1345138318' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:46 compute-2 ceph-mon[77282]: pgmap v2096: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 359 MiB data, 1014 MiB used, 20 GiB / 21 GiB avail; 590 KiB/s rd, 1.8 MiB/s wr, 155 op/s
Jan 31 08:06:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:06:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:46.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:06:46 compute-2 podman[275015]: 2026-01-31 08:06:46.149456868 +0000 UTC m=+0.022514734 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:06:46 compute-2 nova_compute[226829]: 2026-01-31 08:06:46.530 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Removed pending event for 66218154-3695-4970-8525-7eaae98f9f14 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 08:06:46 compute-2 nova_compute[226829]: 2026-01-31 08:06:46.531 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846806.528408, 66218154-3695-4970-8525-7eaae98f9f14 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:06:46 compute-2 nova_compute[226829]: 2026-01-31 08:06:46.531 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] VM Started (Lifecycle Event)
Jan 31 08:06:46 compute-2 nova_compute[226829]: 2026-01-31 08:06:46.648 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:06:46 compute-2 nova_compute[226829]: 2026-01-31 08:06:46.652 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846806.5288038, 66218154-3695-4970-8525-7eaae98f9f14 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:06:46 compute-2 nova_compute[226829]: 2026-01-31 08:06:46.652 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] VM Paused (Lifecycle Event)
Jan 31 08:06:46 compute-2 nova_compute[226829]: 2026-01-31 08:06:46.791 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:06:46 compute-2 nova_compute[226829]: 2026-01-31 08:06:46.813 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:06:46 compute-2 podman[275015]: 2026-01-31 08:06:46.835576694 +0000 UTC m=+0.708634540 container create 6dfd203a0addef4055bc2a8a837da1a9fb9fa0042b56c23aa0246cf0baac9bed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 08:06:47 compute-2 systemd[1]: Started libpod-conmon-6dfd203a0addef4055bc2a8a837da1a9fb9fa0042b56c23aa0246cf0baac9bed.scope.
Jan 31 08:06:47 compute-2 nova_compute[226829]: 2026-01-31 08:06:47.052 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Jan 31 08:06:47 compute-2 sudo[275034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:06:47 compute-2 sudo[275034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:47 compute-2 sudo[275034]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:47 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:06:47 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef5f9a656feb58b268e3d674a3f35bb739fc18731421e1d5f408d69a1422cf92/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:47 compute-2 sudo[275065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:06:47 compute-2 podman[275015]: 2026-01-31 08:06:47.103338548 +0000 UTC m=+0.976396414 container init 6dfd203a0addef4055bc2a8a837da1a9fb9fa0042b56c23aa0246cf0baac9bed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 08:06:47 compute-2 sudo[275065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:47 compute-2 sudo[275065]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:47 compute-2 podman[275015]: 2026-01-31 08:06:47.107930534 +0000 UTC m=+0.980988380 container start 6dfd203a0addef4055bc2a8a837da1a9fb9fa0042b56c23aa0246cf0baac9bed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:06:47 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[275061]: [NOTICE]   (275091) : New worker (275093) forked
Jan 31 08:06:47 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[275061]: [NOTICE]   (275091) : Loading success.
Jan 31 08:06:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:47.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:47 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:06:47 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:06:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:06:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:48.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:06:48 compute-2 ceph-mon[77282]: pgmap v2097: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 359 MiB data, 1014 MiB used, 20 GiB / 21 GiB avail; 739 KiB/s rd, 48 KiB/s wr, 126 op/s
Jan 31 08:06:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:06:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:49.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:06:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:06:49 compute-2 nova_compute[226829]: 2026-01-31 08:06:49.806 226833 DEBUG nova.compute.manager [req-d88fb517-6b81-464b-a008-e14247d38f12 req-03dd89ca-8ed6-4b98-b90f-872d21cbf217 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Received event network-vif-plugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:06:49 compute-2 nova_compute[226829]: 2026-01-31 08:06:49.807 226833 DEBUG oslo_concurrency.lockutils [req-d88fb517-6b81-464b-a008-e14247d38f12 req-03dd89ca-8ed6-4b98-b90f-872d21cbf217 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "66218154-3695-4970-8525-7eaae98f9f14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:49 compute-2 nova_compute[226829]: 2026-01-31 08:06:49.807 226833 DEBUG oslo_concurrency.lockutils [req-d88fb517-6b81-464b-a008-e14247d38f12 req-03dd89ca-8ed6-4b98-b90f-872d21cbf217 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:49 compute-2 nova_compute[226829]: 2026-01-31 08:06:49.807 226833 DEBUG oslo_concurrency.lockutils [req-d88fb517-6b81-464b-a008-e14247d38f12 req-03dd89ca-8ed6-4b98-b90f-872d21cbf217 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:49 compute-2 nova_compute[226829]: 2026-01-31 08:06:49.807 226833 DEBUG nova.compute.manager [req-d88fb517-6b81-464b-a008-e14247d38f12 req-03dd89ca-8ed6-4b98-b90f-872d21cbf217 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Processing event network-vif-plugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:06:49 compute-2 nova_compute[226829]: 2026-01-31 08:06:49.808 226833 DEBUG nova.compute.manager [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:06:49 compute-2 nova_compute[226829]: 2026-01-31 08:06:49.812 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846809.8119488, 66218154-3695-4970-8525-7eaae98f9f14 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:06:49 compute-2 nova_compute[226829]: 2026-01-31 08:06:49.812 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] VM Resumed (Lifecycle Event)
Jan 31 08:06:49 compute-2 nova_compute[226829]: 2026-01-31 08:06:49.817 226833 INFO nova.virt.libvirt.driver [-] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Instance running successfully.
Jan 31 08:06:49 compute-2 nova_compute[226829]: 2026-01-31 08:06:49.817 226833 DEBUG nova.virt.libvirt.driver [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Jan 31 08:06:49 compute-2 nova_compute[226829]: 2026-01-31 08:06:49.847 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:06:49 compute-2 nova_compute[226829]: 2026-01-31 08:06:49.850 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:06:49 compute-2 nova_compute[226829]: 2026-01-31 08:06:49.909 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Jan 31 08:06:50 compute-2 nova_compute[226829]: 2026-01-31 08:06:50.069 226833 INFO nova.compute.manager [None req-10264891-8dd9-4952-8fe7-e8244db86afe d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Updating instance to original state: 'active'
Jan 31 08:06:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:50.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:50 compute-2 nova_compute[226829]: 2026-01-31 08:06:50.383 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:50 compute-2 nova_compute[226829]: 2026-01-31 08:06:50.389 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:50 compute-2 ceph-mon[77282]: pgmap v2098: 305 pgs: 305 active+clean; 359 MiB data, 1014 MiB used, 20 GiB / 21 GiB avail; 661 KiB/s rd, 27 KiB/s wr, 86 op/s
Jan 31 08:06:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:51.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:52 compute-2 nova_compute[226829]: 2026-01-31 08:06:52.136 226833 DEBUG nova.compute.manager [req-78158e0f-ab51-4b7e-a058-f3626985981c req-a97cbfe6-b202-4032-b997-6b1e614b7ab8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Received event network-vif-plugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:06:52 compute-2 nova_compute[226829]: 2026-01-31 08:06:52.137 226833 DEBUG oslo_concurrency.lockutils [req-78158e0f-ab51-4b7e-a058-f3626985981c req-a97cbfe6-b202-4032-b997-6b1e614b7ab8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "66218154-3695-4970-8525-7eaae98f9f14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:52 compute-2 nova_compute[226829]: 2026-01-31 08:06:52.137 226833 DEBUG oslo_concurrency.lockutils [req-78158e0f-ab51-4b7e-a058-f3626985981c req-a97cbfe6-b202-4032-b997-6b1e614b7ab8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:52 compute-2 nova_compute[226829]: 2026-01-31 08:06:52.137 226833 DEBUG oslo_concurrency.lockutils [req-78158e0f-ab51-4b7e-a058-f3626985981c req-a97cbfe6-b202-4032-b997-6b1e614b7ab8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:52 compute-2 nova_compute[226829]: 2026-01-31 08:06:52.138 226833 DEBUG nova.compute.manager [req-78158e0f-ab51-4b7e-a058-f3626985981c req-a97cbfe6-b202-4032-b997-6b1e614b7ab8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] No waiting events found dispatching network-vif-plugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:06:52 compute-2 nova_compute[226829]: 2026-01-31 08:06:52.138 226833 WARNING nova.compute.manager [req-78158e0f-ab51-4b7e-a058-f3626985981c req-a97cbfe6-b202-4032-b997-6b1e614b7ab8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Received unexpected event network-vif-plugged-c153b6ab-5a86-443d-ad94-95d7b82bf483 for instance with vm_state active and task_state None.
Jan 31 08:06:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:52.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:52 compute-2 nova_compute[226829]: 2026-01-31 08:06:52.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:06:52 compute-2 ceph-mon[77282]: pgmap v2099: 305 pgs: 305 active+clean; 359 MiB data, 1014 MiB used, 20 GiB / 21 GiB avail; 921 KiB/s rd, 26 KiB/s wr, 79 op/s
Jan 31 08:06:53 compute-2 nova_compute[226829]: 2026-01-31 08:06:53.052 226833 DEBUG oslo_concurrency.lockutils [None req-149d8db9-acfb-4895-8e35-dff1d75a056a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "66218154-3695-4970-8525-7eaae98f9f14" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:53 compute-2 nova_compute[226829]: 2026-01-31 08:06:53.052 226833 DEBUG oslo_concurrency.lockutils [None req-149d8db9-acfb-4895-8e35-dff1d75a056a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:53 compute-2 nova_compute[226829]: 2026-01-31 08:06:53.052 226833 DEBUG oslo_concurrency.lockutils [None req-149d8db9-acfb-4895-8e35-dff1d75a056a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "66218154-3695-4970-8525-7eaae98f9f14-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:53 compute-2 nova_compute[226829]: 2026-01-31 08:06:53.053 226833 DEBUG oslo_concurrency.lockutils [None req-149d8db9-acfb-4895-8e35-dff1d75a056a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:53 compute-2 nova_compute[226829]: 2026-01-31 08:06:53.053 226833 DEBUG oslo_concurrency.lockutils [None req-149d8db9-acfb-4895-8e35-dff1d75a056a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:53 compute-2 nova_compute[226829]: 2026-01-31 08:06:53.054 226833 INFO nova.compute.manager [None req-149d8db9-acfb-4895-8e35-dff1d75a056a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Terminating instance
Jan 31 08:06:53 compute-2 nova_compute[226829]: 2026-01-31 08:06:53.055 226833 DEBUG nova.compute.manager [None req-149d8db9-acfb-4895-8e35-dff1d75a056a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:06:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:06:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:53.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:06:53 compute-2 kernel: tapc153b6ab-5a (unregistering): left promiscuous mode
Jan 31 08:06:53 compute-2 NetworkManager[48999]: <info>  [1769846813.2284] device (tapc153b6ab-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:06:53 compute-2 nova_compute[226829]: 2026-01-31 08:06:53.235 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:53 compute-2 ovn_controller[133834]: 2026-01-31T08:06:53Z|00457|binding|INFO|Releasing lport c153b6ab-5a86-443d-ad94-95d7b82bf483 from this chassis (sb_readonly=0)
Jan 31 08:06:53 compute-2 ovn_controller[133834]: 2026-01-31T08:06:53Z|00458|binding|INFO|Setting lport c153b6ab-5a86-443d-ad94-95d7b82bf483 down in Southbound
Jan 31 08:06:53 compute-2 ovn_controller[133834]: 2026-01-31T08:06:53Z|00459|binding|INFO|Removing iface tapc153b6ab-5a ovn-installed in OVS
Jan 31 08:06:53 compute-2 nova_compute[226829]: 2026-01-31 08:06:53.246 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:53.263 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:c3:66 10.100.0.12'], port_security=['fa:16:3e:f8:c3:66 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '66218154-3695-4970-8525-7eaae98f9f14', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '8', 'neutron:security_group_ids': '7be0d68e-c4ff-4356-97f2-bd58246f6e46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.222', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=c153b6ab-5a86-443d-ad94-95d7b82bf483) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:06:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:53.265 143841 INFO neutron.agent.ovn.metadata.agent [-] Port c153b6ab-5a86-443d-ad94-95d7b82bf483 in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 unbound from our chassis
Jan 31 08:06:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:53.267 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5cc2535f-0f8f-4713-a35c-9805048a29a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:06:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:53.267 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[67cffd12-9f60-4ef5-a874-3b6a2af7a851]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:53.268 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 namespace which is not needed anymore
Jan 31 08:06:53 compute-2 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000066.scope: Deactivated successfully.
Jan 31 08:06:53 compute-2 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000066.scope: Consumed 3.821s CPU time.
Jan 31 08:06:53 compute-2 systemd-machined[195142]: Machine qemu-48-instance-00000066 terminated.
Jan 31 08:06:53 compute-2 nova_compute[226829]: 2026-01-31 08:06:53.470 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:53 compute-2 nova_compute[226829]: 2026-01-31 08:06:53.472 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:53 compute-2 nova_compute[226829]: 2026-01-31 08:06:53.488 226833 INFO nova.virt.libvirt.driver [-] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Instance destroyed successfully.
Jan 31 08:06:53 compute-2 nova_compute[226829]: 2026-01-31 08:06:53.489 226833 DEBUG nova.objects.instance [None req-149d8db9-acfb-4895-8e35-dff1d75a056a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'resources' on Instance uuid 66218154-3695-4970-8525-7eaae98f9f14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:06:53 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[275061]: [NOTICE]   (275091) : haproxy version is 2.8.14-c23fe91
Jan 31 08:06:53 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[275061]: [NOTICE]   (275091) : path to executable is /usr/sbin/haproxy
Jan 31 08:06:53 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[275061]: [WARNING]  (275091) : Exiting Master process...
Jan 31 08:06:53 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[275061]: [ALERT]    (275091) : Current worker (275093) exited with code 143 (Terminated)
Jan 31 08:06:53 compute-2 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[275061]: [WARNING]  (275091) : All workers exited. Exiting... (0)
Jan 31 08:06:53 compute-2 systemd[1]: libpod-6dfd203a0addef4055bc2a8a837da1a9fb9fa0042b56c23aa0246cf0baac9bed.scope: Deactivated successfully.
Jan 31 08:06:53 compute-2 podman[275130]: 2026-01-31 08:06:53.505737304 +0000 UTC m=+0.176078962 container died 6dfd203a0addef4055bc2a8a837da1a9fb9fa0042b56c23aa0246cf0baac9bed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:06:53 compute-2 nova_compute[226829]: 2026-01-31 08:06:53.613 226833 DEBUG nova.virt.libvirt.vif [None req-149d8db9-acfb-4895-8e35-dff1d75a056a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:05:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1412363862',display_name='tempest-ServerActionsTestJSON-server-1412363862',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1412363862',id=102,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:06:49Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-fla0xmix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:06:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=66218154-3695-4970-8525-7eaae98f9f14,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:06:53 compute-2 nova_compute[226829]: 2026-01-31 08:06:53.614 226833 DEBUG nova.network.os_vif_util [None req-149d8db9-acfb-4895-8e35-dff1d75a056a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "address": "fa:16:3e:f8:c3:66", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc153b6ab-5a", "ovs_interfaceid": "c153b6ab-5a86-443d-ad94-95d7b82bf483", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:06:53 compute-2 nova_compute[226829]: 2026-01-31 08:06:53.614 226833 DEBUG nova.network.os_vif_util [None req-149d8db9-acfb-4895-8e35-dff1d75a056a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:c3:66,bridge_name='br-int',has_traffic_filtering=True,id=c153b6ab-5a86-443d-ad94-95d7b82bf483,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc153b6ab-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:06:53 compute-2 nova_compute[226829]: 2026-01-31 08:06:53.615 226833 DEBUG os_vif [None req-149d8db9-acfb-4895-8e35-dff1d75a056a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:c3:66,bridge_name='br-int',has_traffic_filtering=True,id=c153b6ab-5a86-443d-ad94-95d7b82bf483,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc153b6ab-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:06:53 compute-2 nova_compute[226829]: 2026-01-31 08:06:53.617 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:53 compute-2 nova_compute[226829]: 2026-01-31 08:06:53.617 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc153b6ab-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:53 compute-2 nova_compute[226829]: 2026-01-31 08:06:53.618 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:53 compute-2 nova_compute[226829]: 2026-01-31 08:06:53.619 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:53 compute-2 nova_compute[226829]: 2026-01-31 08:06:53.622 226833 INFO os_vif [None req-149d8db9-acfb-4895-8e35-dff1d75a056a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:c3:66,bridge_name='br-int',has_traffic_filtering=True,id=c153b6ab-5a86-443d-ad94-95d7b82bf483,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc153b6ab-5a')
Jan 31 08:06:53 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6dfd203a0addef4055bc2a8a837da1a9fb9fa0042b56c23aa0246cf0baac9bed-userdata-shm.mount: Deactivated successfully.
Jan 31 08:06:53 compute-2 systemd[1]: var-lib-containers-storage-overlay-ef5f9a656feb58b268e3d674a3f35bb739fc18731421e1d5f408d69a1422cf92-merged.mount: Deactivated successfully.
Jan 31 08:06:54 compute-2 podman[275130]: 2026-01-31 08:06:54.050833743 +0000 UTC m=+0.721175431 container cleanup 6dfd203a0addef4055bc2a8a837da1a9fb9fa0042b56c23aa0246cf0baac9bed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 31 08:06:54 compute-2 systemd[1]: libpod-conmon-6dfd203a0addef4055bc2a8a837da1a9fb9fa0042b56c23aa0246cf0baac9bed.scope: Deactivated successfully.
Jan 31 08:06:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:06:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:54.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:06:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:06:54 compute-2 podman[275189]: 2026-01-31 08:06:54.555660137 +0000 UTC m=+0.483510494 container remove 6dfd203a0addef4055bc2a8a837da1a9fb9fa0042b56c23aa0246cf0baac9bed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:06:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:54.559 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0149b5a1-fe47-423c-afb2-e5829872e67d]: (4, ('Sat Jan 31 08:06:53 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 (6dfd203a0addef4055bc2a8a837da1a9fb9fa0042b56c23aa0246cf0baac9bed)\n6dfd203a0addef4055bc2a8a837da1a9fb9fa0042b56c23aa0246cf0baac9bed\nSat Jan 31 08:06:54 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 (6dfd203a0addef4055bc2a8a837da1a9fb9fa0042b56c23aa0246cf0baac9bed)\n6dfd203a0addef4055bc2a8a837da1a9fb9fa0042b56c23aa0246cf0baac9bed\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:54.562 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6e5124e5-3e89-42cc-9c50-1d210ea53888]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:54.563 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc2535f-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:54 compute-2 nova_compute[226829]: 2026-01-31 08:06:54.566 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:54 compute-2 kernel: tap5cc2535f-00: left promiscuous mode
Jan 31 08:06:54 compute-2 nova_compute[226829]: 2026-01-31 08:06:54.571 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:54.573 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[aa863719-b9be-4480-8d3b-c51b0f839a77]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:54 compute-2 nova_compute[226829]: 2026-01-31 08:06:54.575 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:54.588 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ecd8adb0-36f0-467b-8689-a5fde1b36beb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:54.590 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6fbdebbf-0871-4737-918a-95a18dd2cb73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:54.601 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[59fd45fb-ab93-4feb-bf88-a364b26b8e32]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 705814, 'reachable_time': 26053, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275205, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:54 compute-2 systemd[1]: run-netns-ovnmeta\x2d5cc2535f\x2d0f8f\x2d4713\x2da35c\x2d9805048a29a8.mount: Deactivated successfully.
Jan 31 08:06:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:54.604 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:06:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:06:54.604 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[41c7b1ab-6209-4d84-9b11-668ca3d39cb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:54 compute-2 nova_compute[226829]: 2026-01-31 08:06:54.636 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e262 e262: 3 total, 3 up, 3 in
Jan 31 08:06:54 compute-2 ceph-mon[77282]: pgmap v2100: 305 pgs: 305 active+clean; 361 MiB data, 1014 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 26 KiB/s wr, 84 op/s
Jan 31 08:06:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:06:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:55.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:06:55 compute-2 nova_compute[226829]: 2026-01-31 08:06:55.391 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:55 compute-2 nova_compute[226829]: 2026-01-31 08:06:55.689 226833 INFO nova.virt.libvirt.driver [None req-149d8db9-acfb-4895-8e35-dff1d75a056a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Deleting instance files /var/lib/nova/instances/66218154-3695-4970-8525-7eaae98f9f14_del
Jan 31 08:06:55 compute-2 nova_compute[226829]: 2026-01-31 08:06:55.691 226833 INFO nova.virt.libvirt.driver [None req-149d8db9-acfb-4895-8e35-dff1d75a056a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Deletion of /var/lib/nova/instances/66218154-3695-4970-8525-7eaae98f9f14_del complete
Jan 31 08:06:55 compute-2 nova_compute[226829]: 2026-01-31 08:06:55.987 226833 INFO nova.compute.manager [None req-149d8db9-acfb-4895-8e35-dff1d75a056a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Took 2.93 seconds to destroy the instance on the hypervisor.
Jan 31 08:06:55 compute-2 nova_compute[226829]: 2026-01-31 08:06:55.988 226833 DEBUG oslo.service.loopingcall [None req-149d8db9-acfb-4895-8e35-dff1d75a056a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:06:55 compute-2 nova_compute[226829]: 2026-01-31 08:06:55.988 226833 DEBUG nova.compute.manager [-] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:06:55 compute-2 nova_compute[226829]: 2026-01-31 08:06:55.988 226833 DEBUG nova.network.neutron [-] [instance: 66218154-3695-4970-8525-7eaae98f9f14] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:06:56 compute-2 ceph-mon[77282]: osdmap e262: 3 total, 3 up, 3 in
Jan 31 08:06:56 compute-2 ceph-mon[77282]: pgmap v2102: 305 pgs: 305 active+clean; 361 MiB data, 1014 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 14 KiB/s wr, 83 op/s
Jan 31 08:06:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:56.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:57 compute-2 podman[275208]: 2026-01-31 08:06:57.158796854 +0000 UTC m=+0.044465361 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 08:06:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:57.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:57 compute-2 nova_compute[226829]: 2026-01-31 08:06:57.831 226833 DEBUG nova.compute.manager [req-fde05a5c-e8b7-437f-9976-f8abd1c62cd4 req-05b7b3ef-f03f-48d7-8da4-13943691018a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Received event network-vif-deleted-c153b6ab-5a86-443d-ad94-95d7b82bf483 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:06:57 compute-2 nova_compute[226829]: 2026-01-31 08:06:57.832 226833 INFO nova.compute.manager [req-fde05a5c-e8b7-437f-9976-f8abd1c62cd4 req-05b7b3ef-f03f-48d7-8da4-13943691018a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Neutron deleted interface c153b6ab-5a86-443d-ad94-95d7b82bf483; detaching it from the instance and deleting it from the info cache
Jan 31 08:06:57 compute-2 nova_compute[226829]: 2026-01-31 08:06:57.832 226833 DEBUG nova.network.neutron [req-fde05a5c-e8b7-437f-9976-f8abd1c62cd4 req-05b7b3ef-f03f-48d7-8da4-13943691018a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:06:57 compute-2 nova_compute[226829]: 2026-01-31 08:06:57.902 226833 DEBUG nova.network.neutron [-] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:06:58 compute-2 ceph-mon[77282]: pgmap v2103: 305 pgs: 305 active+clean; 312 MiB data, 1003 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 14 KiB/s wr, 112 op/s
Jan 31 08:06:58 compute-2 nova_compute[226829]: 2026-01-31 08:06:58.115 226833 INFO nova.compute.manager [-] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Took 2.13 seconds to deallocate network for instance.
Jan 31 08:06:58 compute-2 nova_compute[226829]: 2026-01-31 08:06:58.119 226833 DEBUG nova.compute.manager [req-fde05a5c-e8b7-437f-9976-f8abd1c62cd4 req-05b7b3ef-f03f-48d7-8da4-13943691018a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Detach interface failed, port_id=c153b6ab-5a86-443d-ad94-95d7b82bf483, reason: Instance 66218154-3695-4970-8525-7eaae98f9f14 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:06:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:58.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:58 compute-2 nova_compute[226829]: 2026-01-31 08:06:58.363 226833 DEBUG oslo_concurrency.lockutils [None req-149d8db9-acfb-4895-8e35-dff1d75a056a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:58 compute-2 nova_compute[226829]: 2026-01-31 08:06:58.364 226833 DEBUG oslo_concurrency.lockutils [None req-149d8db9-acfb-4895-8e35-dff1d75a056a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:58 compute-2 nova_compute[226829]: 2026-01-31 08:06:58.372 226833 DEBUG oslo_concurrency.lockutils [None req-149d8db9-acfb-4895-8e35-dff1d75a056a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:58 compute-2 nova_compute[226829]: 2026-01-31 08:06:58.620 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:58 compute-2 nova_compute[226829]: 2026-01-31 08:06:58.843 226833 INFO nova.scheduler.client.report [None req-149d8db9-acfb-4895-8e35-dff1d75a056a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Deleted allocations for instance 66218154-3695-4970-8525-7eaae98f9f14
Jan 31 08:06:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:06:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:59.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 9876 writes, 49K keys, 9876 commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s
                                           Cumulative WAL: 9876 writes, 9876 syncs, 1.00 writes per sync, written: 0.10 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1580 writes, 7747 keys, 1580 commit groups, 1.0 writes per commit group, ingest: 15.76 MB, 0.03 MB/s
                                           Interval WAL: 1580 writes, 1580 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     68.4      0.88              0.18        29    0.030       0      0       0.0       0.0
                                             L6      1/0   10.22 MB   0.0      0.3     0.1      0.2       0.3      0.0       0.0   4.3    144.7    121.4      2.14              0.79        28    0.076    167K    15K       0.0       0.0
                                            Sum      1/0   10.22 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.3    102.5    105.9      3.02              0.96        57    0.053    167K    15K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.0    124.1    122.2      0.57              0.18        12    0.047     45K   3005       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.3      0.0       0.0   0.0    144.7    121.4      2.14              0.79        28    0.076    167K    15K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     68.5      0.88              0.18        28    0.031       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.059, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.31 GB write, 0.09 MB/s write, 0.30 GB read, 0.09 MB/s read, 3.0 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559ef3d1f1f0#2 capacity: 304.00 MB usage: 33.37 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000343 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1952,32.11 MB,10.5631%) FilterBlock(57,473.80 KB,0.152201%) IndexBlock(57,817.59 KB,0.262642%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 08:06:59 compute-2 nova_compute[226829]: 2026-01-31 08:06:59.561 226833 DEBUG oslo_concurrency.lockutils [None req-149d8db9-acfb-4895-8e35-dff1d75a056a d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "66218154-3695-4970-8525-7eaae98f9f14" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #97. Immutable memtables: 0.
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:59.761411) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 97
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846819761643, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 537, "num_deletes": 252, "total_data_size": 793236, "memory_usage": 803384, "flush_reason": "Manual Compaction"}
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #98: started
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846819781896, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 98, "file_size": 523258, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49583, "largest_seqno": 50115, "table_properties": {"data_size": 520325, "index_size": 905, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7142, "raw_average_key_size": 19, "raw_value_size": 514368, "raw_average_value_size": 1413, "num_data_blocks": 39, "num_entries": 364, "num_filter_entries": 364, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846794, "oldest_key_time": 1769846794, "file_creation_time": 1769846819, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 20552 microseconds, and 3231 cpu microseconds.
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:59.781967) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #98: 523258 bytes OK
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:59.781991) [db/memtable_list.cc:519] [default] Level-0 commit table #98 started
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:59.801928) [db/memtable_list.cc:722] [default] Level-0 commit table #98: memtable #1 done
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:59.802070) EVENT_LOG_v1 {"time_micros": 1769846819801974, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:59.802095) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 790077, prev total WAL file size 790077, number of live WAL files 2.
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000094.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:59.803291) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [98(510KB)], [96(10MB)]
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846819803367, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [98], "files_L6": [96], "score": -1, "input_data_size": 11243499, "oldest_snapshot_seqno": -1}
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #99: 7173 keys, 9276617 bytes, temperature: kUnknown
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846819891882, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 99, "file_size": 9276617, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9231150, "index_size": 26411, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17989, "raw_key_size": 187170, "raw_average_key_size": 26, "raw_value_size": 9105501, "raw_average_value_size": 1269, "num_data_blocks": 1034, "num_entries": 7173, "num_filter_entries": 7173, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769846819, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 99, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:59.892404) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 9276617 bytes
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:59.894949) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.5 rd, 104.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.2 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(39.2) write-amplify(17.7) OK, records in: 7693, records dropped: 520 output_compression: NoCompression
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:59.895055) EVENT_LOG_v1 {"time_micros": 1769846819894963, "job": 60, "event": "compaction_finished", "compaction_time_micros": 88872, "compaction_time_cpu_micros": 26146, "output_level": 6, "num_output_files": 1, "total_output_size": 9276617, "num_input_records": 7693, "num_output_records": 7173, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846819895465, "job": 60, "event": "table_file_deletion", "file_number": 98}
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000096.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846819896798, "job": 60, "event": "table_file_deletion", "file_number": 96}
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:59.803106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:59.896890) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:59.896896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:59.896897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:59.896899) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:59 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:06:59.896901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:07:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:00.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:00 compute-2 nova_compute[226829]: 2026-01-31 08:07:00.393 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:00 compute-2 ceph-mon[77282]: pgmap v2104: 305 pgs: 305 active+clean; 281 MiB data, 986 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.7 KiB/s wr, 128 op/s
Jan 31 08:07:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/27261143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:01.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:02 compute-2 ceph-mon[77282]: pgmap v2105: 305 pgs: 305 active+clean; 250 MiB data, 952 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.0 KiB/s wr, 122 op/s
Jan 31 08:07:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:02.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:02 compute-2 sudo[275230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:07:02 compute-2 sudo[275230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:02 compute-2 sudo[275230]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:02 compute-2 sudo[275255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:07:02 compute-2 sudo[275255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:02 compute-2 sudo[275255]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:03.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:03 compute-2 nova_compute[226829]: 2026-01-31 08:07:03.622 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:07:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:04.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:07:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:07:04 compute-2 nova_compute[226829]: 2026-01-31 08:07:04.788 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:04 compute-2 ceph-mon[77282]: pgmap v2106: 305 pgs: 305 active+clean; 235 MiB data, 942 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.8 KiB/s wr, 118 op/s
Jan 31 08:07:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2669144292' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:07:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2669144292' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:07:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:05.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:05 compute-2 nova_compute[226829]: 2026-01-31 08:07:05.333 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:05.334 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:07:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:05.336 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:07:05 compute-2 nova_compute[226829]: 2026-01-31 08:07:05.395 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:06 compute-2 ceph-mon[77282]: pgmap v2107: 305 pgs: 305 active+clean; 200 MiB data, 921 MiB used, 20 GiB / 21 GiB avail; 943 KiB/s rd, 2.6 KiB/s wr, 90 op/s
Jan 31 08:07:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:07:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:06.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:07:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:06.878 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:06.879 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:06.879 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:07.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:08.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:08 compute-2 nova_compute[226829]: 2026-01-31 08:07:08.487 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846813.4862492, 66218154-3695-4970-8525-7eaae98f9f14 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:07:08 compute-2 nova_compute[226829]: 2026-01-31 08:07:08.488 226833 INFO nova.compute.manager [-] [instance: 66218154-3695-4970-8525-7eaae98f9f14] VM Stopped (Lifecycle Event)
Jan 31 08:07:08 compute-2 nova_compute[226829]: 2026-01-31 08:07:08.570 226833 DEBUG nova.compute.manager [None req-6d30f42f-71fb-4e55-84c2-32bced6f85b8 - - - - - -] [instance: 66218154-3695-4970-8525-7eaae98f9f14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:07:08 compute-2 ceph-mon[77282]: pgmap v2108: 305 pgs: 305 active+clean; 181 MiB data, 921 MiB used, 20 GiB / 21 GiB avail; 842 KiB/s rd, 2.3 KiB/s wr, 88 op/s
Jan 31 08:07:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1370603499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:08 compute-2 nova_compute[226829]: 2026-01-31 08:07:08.624 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:08 compute-2 nova_compute[226829]: 2026-01-31 08:07:08.644 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:09.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:07:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:10.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:10 compute-2 nova_compute[226829]: 2026-01-31 08:07:10.397 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:10 compute-2 ceph-mon[77282]: pgmap v2109: 305 pgs: 305 active+clean; 141 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 1.6 KiB/s wr, 60 op/s
Jan 31 08:07:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/325742308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2707410916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:11.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:07:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:12.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:07:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:12.340 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:12 compute-2 ceph-mon[77282]: pgmap v2110: 305 pgs: 305 active+clean; 138 MiB data, 896 MiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 506 KiB/s wr, 44 op/s
Jan 31 08:07:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:13.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:13 compute-2 nova_compute[226829]: 2026-01-31 08:07:13.626 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:14.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:14 compute-2 ceph-mon[77282]: pgmap v2111: 305 pgs: 305 active+clean; 157 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 836 KiB/s wr, 57 op/s
Jan 31 08:07:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:07:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:15.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:15 compute-2 nova_compute[226829]: 2026-01-31 08:07:15.400 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:15 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-2[77982]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 08:07:16 compute-2 podman[275287]: 2026-01-31 08:07:16.192226154 +0000 UTC m=+0.079453633 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 08:07:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:16.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:16 compute-2 nova_compute[226829]: 2026-01-31 08:07:16.494 226833 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "492e9a08-e874-4d99-9ea5-e5f4f0895fde" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:16 compute-2 nova_compute[226829]: 2026-01-31 08:07:16.494 226833 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "492e9a08-e874-4d99-9ea5-e5f4f0895fde" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:16 compute-2 nova_compute[226829]: 2026-01-31 08:07:16.708 226833 DEBUG nova.compute.manager [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:07:16 compute-2 ceph-mon[77282]: pgmap v2112: 305 pgs: 305 active+clean; 202 MiB data, 911 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 3.1 MiB/s wr, 79 op/s
Jan 31 08:07:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3047707787' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:07:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3047707787' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:07:17 compute-2 nova_compute[226829]: 2026-01-31 08:07:17.095 226833 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:17 compute-2 nova_compute[226829]: 2026-01-31 08:07:17.096 226833 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:17 compute-2 nova_compute[226829]: 2026-01-31 08:07:17.105 226833 DEBUG nova.virt.hardware [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:07:17 compute-2 nova_compute[226829]: 2026-01-31 08:07:17.106 226833 INFO nova.compute.claims [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:07:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:17.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:17 compute-2 nova_compute[226829]: 2026-01-31 08:07:17.580 226833 DEBUG oslo_concurrency.processutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:17 compute-2 nova_compute[226829]: 2026-01-31 08:07:17.720 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:07:17 compute-2 nova_compute[226829]: 2026-01-31 08:07:17.721 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:07:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:07:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:18.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:07:18 compute-2 nova_compute[226829]: 2026-01-31 08:07:18.629 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:07:18 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/719574498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:18 compute-2 nova_compute[226829]: 2026-01-31 08:07:18.772 226833 DEBUG oslo_concurrency.processutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:07:18 compute-2 nova_compute[226829]: 2026-01-31 08:07:18.779 226833 DEBUG nova.compute.provider_tree [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:07:18 compute-2 ceph-mon[77282]: pgmap v2113: 305 pgs: 305 active+clean; 213 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 3.5 MiB/s wr, 94 op/s
Jan 31 08:07:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:19.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:19 compute-2 nova_compute[226829]: 2026-01-31 08:07:19.389 226833 DEBUG nova.scheduler.client.report [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:07:19 compute-2 nova_compute[226829]: 2026-01-31 08:07:19.476 226833 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.380s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:19 compute-2 nova_compute[226829]: 2026-01-31 08:07:19.477 226833 DEBUG nova.compute.manager [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:07:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:07:19 compute-2 nova_compute[226829]: 2026-01-31 08:07:19.689 226833 DEBUG nova.compute.manager [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:07:19 compute-2 nova_compute[226829]: 2026-01-31 08:07:19.690 226833 DEBUG nova.network.neutron [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:07:19 compute-2 nova_compute[226829]: 2026-01-31 08:07:19.964 226833 DEBUG nova.policy [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5ba00f420cd940ff802c16e8c25c35c4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b97d933ec6c34696b0483a895f47feef', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:07:19 compute-2 nova_compute[226829]: 2026-01-31 08:07:19.974 226833 INFO nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:07:20 compute-2 nova_compute[226829]: 2026-01-31 08:07:20.156 226833 DEBUG nova.compute.manager [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:07:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:07:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:20.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:07:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2761439231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/719574498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:20 compute-2 nova_compute[226829]: 2026-01-31 08:07:20.402 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:20 compute-2 nova_compute[226829]: 2026-01-31 08:07:20.562 226833 DEBUG nova.compute.manager [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:07:20 compute-2 nova_compute[226829]: 2026-01-31 08:07:20.563 226833 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:07:20 compute-2 nova_compute[226829]: 2026-01-31 08:07:20.563 226833 INFO nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Creating image(s)
Jan 31 08:07:20 compute-2 nova_compute[226829]: 2026-01-31 08:07:20.981 226833 DEBUG nova.storage.rbd_utils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] rbd image 492e9a08-e874-4d99-9ea5-e5f4f0895fde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:07:21 compute-2 nova_compute[226829]: 2026-01-31 08:07:21.014 226833 DEBUG nova.storage.rbd_utils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] rbd image 492e9a08-e874-4d99-9ea5-e5f4f0895fde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:07:21 compute-2 nova_compute[226829]: 2026-01-31 08:07:21.042 226833 DEBUG nova.storage.rbd_utils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] rbd image 492e9a08-e874-4d99-9ea5-e5f4f0895fde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:07:21 compute-2 nova_compute[226829]: 2026-01-31 08:07:21.046 226833 DEBUG oslo_concurrency.processutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:21 compute-2 nova_compute[226829]: 2026-01-31 08:07:21.095 226833 DEBUG oslo_concurrency.processutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:07:21 compute-2 nova_compute[226829]: 2026-01-31 08:07:21.096 226833 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:21 compute-2 nova_compute[226829]: 2026-01-31 08:07:21.097 226833 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:21 compute-2 nova_compute[226829]: 2026-01-31 08:07:21.097 226833 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:07:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:21.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:07:21 compute-2 ceph-mon[77282]: pgmap v2114: 305 pgs: 305 active+clean; 213 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 3.5 MiB/s wr, 87 op/s
Jan 31 08:07:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3349957308' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:07:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:22.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:22 compute-2 sudo[275409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:07:22 compute-2 sudo[275409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:22 compute-2 sudo[275409]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:22 compute-2 sudo[275434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:07:22 compute-2 sudo[275434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:22 compute-2 sudo[275434]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:22 compute-2 nova_compute[226829]: 2026-01-31 08:07:22.660 226833 DEBUG nova.storage.rbd_utils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] rbd image 492e9a08-e874-4d99-9ea5-e5f4f0895fde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:07:22 compute-2 nova_compute[226829]: 2026-01-31 08:07:22.666 226833 DEBUG oslo_concurrency.processutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 492e9a08-e874-4d99-9ea5-e5f4f0895fde_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:22 compute-2 nova_compute[226829]: 2026-01-31 08:07:22.695 226833 DEBUG nova.network.neutron [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Successfully created port: 29eb75c3-cb12-4842-8a9c-44444a26662c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:07:22 compute-2 nova_compute[226829]: 2026-01-31 08:07:22.700 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:07:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2887923547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:07:22 compute-2 ceph-mon[77282]: pgmap v2115: 305 pgs: 305 active+clean; 213 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 3.5 MiB/s wr, 76 op/s
Jan 31 08:07:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:07:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:23.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:07:23 compute-2 nova_compute[226829]: 2026-01-31 08:07:23.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:07:23 compute-2 nova_compute[226829]: 2026-01-31 08:07:23.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:07:23 compute-2 nova_compute[226829]: 2026-01-31 08:07:23.583 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:07:23 compute-2 nova_compute[226829]: 2026-01-31 08:07:23.583 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:07:23 compute-2 nova_compute[226829]: 2026-01-31 08:07:23.633 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:07:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:24.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:07:24 compute-2 ceph-mon[77282]: pgmap v2116: 305 pgs: 305 active+clean; 221 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 3.1 MiB/s wr, 71 op/s
Jan 31 08:07:24 compute-2 nova_compute[226829]: 2026-01-31 08:07:24.583 226833 DEBUG oslo_concurrency.processutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 492e9a08-e874-4d99-9ea5-e5f4f0895fde_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.917s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:07:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:07:24 compute-2 nova_compute[226829]: 2026-01-31 08:07:24.668 226833 DEBUG nova.storage.rbd_utils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] resizing rbd image 492e9a08-e874-4d99-9ea5-e5f4f0895fde_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:07:24 compute-2 nova_compute[226829]: 2026-01-31 08:07:24.806 226833 DEBUG nova.objects.instance [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lazy-loading 'migration_context' on Instance uuid 492e9a08-e874-4d99-9ea5-e5f4f0895fde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:07:24 compute-2 nova_compute[226829]: 2026-01-31 08:07:24.842 226833 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:07:24 compute-2 nova_compute[226829]: 2026-01-31 08:07:24.842 226833 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Ensure instance console log exists: /var/lib/nova/instances/492e9a08-e874-4d99-9ea5-e5f4f0895fde/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:07:24 compute-2 nova_compute[226829]: 2026-01-31 08:07:24.843 226833 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:24 compute-2 nova_compute[226829]: 2026-01-31 08:07:24.843 226833 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:24 compute-2 nova_compute[226829]: 2026-01-31 08:07:24.843 226833 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:24 compute-2 nova_compute[226829]: 2026-01-31 08:07:24.972 226833 DEBUG nova.network.neutron [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Successfully updated port: 29eb75c3-cb12-4842-8a9c-44444a26662c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:07:25 compute-2 nova_compute[226829]: 2026-01-31 08:07:25.085 226833 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "refresh_cache-492e9a08-e874-4d99-9ea5-e5f4f0895fde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:07:25 compute-2 nova_compute[226829]: 2026-01-31 08:07:25.085 226833 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquired lock "refresh_cache-492e9a08-e874-4d99-9ea5-e5f4f0895fde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:07:25 compute-2 nova_compute[226829]: 2026-01-31 08:07:25.085 226833 DEBUG nova.network.neutron [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:07:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:25.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:25 compute-2 nova_compute[226829]: 2026-01-31 08:07:25.404 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/853840470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:07:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1519004852' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:07:25 compute-2 nova_compute[226829]: 2026-01-31 08:07:25.694 226833 DEBUG nova.network.neutron [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:07:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:26.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:26 compute-2 nova_compute[226829]: 2026-01-31 08:07:26.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:07:26 compute-2 ceph-mon[77282]: pgmap v2117: 305 pgs: 305 active+clean; 242 MiB data, 926 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 3.6 MiB/s wr, 61 op/s
Jan 31 08:07:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:27.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:27 compute-2 nova_compute[226829]: 2026-01-31 08:07:27.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:07:27 compute-2 nova_compute[226829]: 2026-01-31 08:07:27.590 226833 DEBUG nova.compute.manager [req-a93a61b9-1b78-4a0f-95ec-f78fc03969ba req-1266f7bf-9510-474c-bf40-5ed2745d4d25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Received event network-changed-29eb75c3-cb12-4842-8a9c-44444a26662c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:07:27 compute-2 nova_compute[226829]: 2026-01-31 08:07:27.590 226833 DEBUG nova.compute.manager [req-a93a61b9-1b78-4a0f-95ec-f78fc03969ba req-1266f7bf-9510-474c-bf40-5ed2745d4d25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Refreshing instance network info cache due to event network-changed-29eb75c3-cb12-4842-8a9c-44444a26662c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:07:27 compute-2 nova_compute[226829]: 2026-01-31 08:07:27.590 226833 DEBUG oslo_concurrency.lockutils [req-a93a61b9-1b78-4a0f-95ec-f78fc03969ba req-1266f7bf-9510-474c-bf40-5ed2745d4d25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-492e9a08-e874-4d99-9ea5-e5f4f0895fde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:07:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1126133665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:28 compute-2 podman[275557]: 2026-01-31 08:07:28.164054574 +0000 UTC m=+0.052677804 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 08:07:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:28.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.479 226833 DEBUG nova.network.neutron [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Updating instance_info_cache with network_info: [{"id": "29eb75c3-cb12-4842-8a9c-44444a26662c", "address": "fa:16:3e:a5:9b:6d", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29eb75c3-cb", "ovs_interfaceid": "29eb75c3-cb12-4842-8a9c-44444a26662c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.628 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.629 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.629 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.629 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.630 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.650 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.709 226833 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Releasing lock "refresh_cache-492e9a08-e874-4d99-9ea5-e5f4f0895fde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.710 226833 DEBUG nova.compute.manager [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Instance network_info: |[{"id": "29eb75c3-cb12-4842-8a9c-44444a26662c", "address": "fa:16:3e:a5:9b:6d", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29eb75c3-cb", "ovs_interfaceid": "29eb75c3-cb12-4842-8a9c-44444a26662c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.711 226833 DEBUG oslo_concurrency.lockutils [req-a93a61b9-1b78-4a0f-95ec-f78fc03969ba req-1266f7bf-9510-474c-bf40-5ed2745d4d25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-492e9a08-e874-4d99-9ea5-e5f4f0895fde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.711 226833 DEBUG nova.network.neutron [req-a93a61b9-1b78-4a0f-95ec-f78fc03969ba req-1266f7bf-9510-474c-bf40-5ed2745d4d25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Refreshing network info cache for port 29eb75c3-cb12-4842-8a9c-44444a26662c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.714 226833 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Start _get_guest_xml network_info=[{"id": "29eb75c3-cb12-4842-8a9c-44444a26662c", "address": "fa:16:3e:a5:9b:6d", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29eb75c3-cb", "ovs_interfaceid": "29eb75c3-cb12-4842-8a9c-44444a26662c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.720 226833 WARNING nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.727 226833 DEBUG nova.virt.libvirt.host [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.728 226833 DEBUG nova.virt.libvirt.host [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.731 226833 DEBUG nova.virt.libvirt.host [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.732 226833 DEBUG nova.virt.libvirt.host [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.733 226833 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.734 226833 DEBUG nova.virt.hardware [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.734 226833 DEBUG nova.virt.hardware [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.735 226833 DEBUG nova.virt.hardware [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.735 226833 DEBUG nova.virt.hardware [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.735 226833 DEBUG nova.virt.hardware [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.735 226833 DEBUG nova.virt.hardware [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.735 226833 DEBUG nova.virt.hardware [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.736 226833 DEBUG nova.virt.hardware [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.736 226833 DEBUG nova.virt.hardware [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.736 226833 DEBUG nova.virt.hardware [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.736 226833 DEBUG nova.virt.hardware [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:07:28 compute-2 nova_compute[226829]: 2026-01-31 08:07:28.739 226833 DEBUG oslo_concurrency.processutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 40K writes, 167K keys, 40K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.05 MB/s
                                           Cumulative WAL: 40K writes, 13K syncs, 2.93 writes per sync, written: 0.17 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8361 writes, 35K keys, 8361 commit groups, 1.0 writes per commit group, ingest: 36.03 MB, 0.06 MB/s
                                           Interval WAL: 8362 writes, 3149 syncs, 2.66 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:07:28 compute-2 ceph-mon[77282]: pgmap v2118: 305 pgs: 305 active+clean; 226 MiB data, 918 MiB used, 20 GiB / 21 GiB avail; 309 KiB/s rd, 3.2 MiB/s wr, 121 op/s
Jan 31 08:07:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/216643493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:07:29 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1322478330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.039 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:07:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:07:29 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1080046223' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.151 226833 DEBUG oslo_concurrency.processutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.170 226833 DEBUG nova.storage.rbd_utils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] rbd image 492e9a08-e874-4d99-9ea5-e5f4f0895fde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.173 226833 DEBUG oslo_concurrency.processutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:29.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.241 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.243 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4350MB free_disk=20.90032958984375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.243 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.243 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.526 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 492e9a08-e874-4d99-9ea5-e5f4f0895fde actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.527 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.527 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:07:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:07:29 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/828928554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.588 226833 DEBUG oslo_concurrency.processutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.589 226833 DEBUG nova.virt.libvirt.vif [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:07:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-261953401',display_name='tempest-tempest.common.compute-instance-261953401-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-261953401-1',id=109,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b97d933ec6c34696b0483a895f47feef',ramdisk_id='',reservation_id='r-w1iilz7a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-744612571',owner_user_name='tempest-MultipleCre
ateTestJSON-744612571-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:07:20Z,user_data=None,user_id='5ba00f420cd940ff802c16e8c25c35c4',uuid=492e9a08-e874-4d99-9ea5-e5f4f0895fde,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29eb75c3-cb12-4842-8a9c-44444a26662c", "address": "fa:16:3e:a5:9b:6d", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29eb75c3-cb", "ovs_interfaceid": "29eb75c3-cb12-4842-8a9c-44444a26662c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.589 226833 DEBUG nova.network.os_vif_util [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Converting VIF {"id": "29eb75c3-cb12-4842-8a9c-44444a26662c", "address": "fa:16:3e:a5:9b:6d", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29eb75c3-cb", "ovs_interfaceid": "29eb75c3-cb12-4842-8a9c-44444a26662c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.590 226833 DEBUG nova.network.os_vif_util [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:9b:6d,bridge_name='br-int',has_traffic_filtering=True,id=29eb75c3-cb12-4842-8a9c-44444a26662c,network=Network(aae9ebc2-f854-4add-b86c-a5209381ad20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29eb75c3-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.591 226833 DEBUG nova.objects.instance [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lazy-loading 'pci_devices' on Instance uuid 492e9a08-e874-4d99-9ea5-e5f4f0895fde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:07:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.647 226833 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:07:29 compute-2 nova_compute[226829]:   <uuid>492e9a08-e874-4d99-9ea5-e5f4f0895fde</uuid>
Jan 31 08:07:29 compute-2 nova_compute[226829]:   <name>instance-0000006d</name>
Jan 31 08:07:29 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:07:29 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:07:29 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <nova:name>tempest-tempest.common.compute-instance-261953401-1</nova:name>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:07:28</nova:creationTime>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:07:29 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:07:29 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:07:29 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:07:29 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:07:29 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:07:29 compute-2 nova_compute[226829]:         <nova:user uuid="5ba00f420cd940ff802c16e8c25c35c4">tempest-MultipleCreateTestJSON-744612571-project-member</nova:user>
Jan 31 08:07:29 compute-2 nova_compute[226829]:         <nova:project uuid="b97d933ec6c34696b0483a895f47feef">tempest-MultipleCreateTestJSON-744612571</nova:project>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:07:29 compute-2 nova_compute[226829]:         <nova:port uuid="29eb75c3-cb12-4842-8a9c-44444a26662c">
Jan 31 08:07:29 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:07:29 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:07:29 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <system>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <entry name="serial">492e9a08-e874-4d99-9ea5-e5f4f0895fde</entry>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <entry name="uuid">492e9a08-e874-4d99-9ea5-e5f4f0895fde</entry>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     </system>
Jan 31 08:07:29 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:07:29 compute-2 nova_compute[226829]:   <os>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:   </os>
Jan 31 08:07:29 compute-2 nova_compute[226829]:   <features>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:   </features>
Jan 31 08:07:29 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:07:29 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:07:29 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/492e9a08-e874-4d99-9ea5-e5f4f0895fde_disk">
Jan 31 08:07:29 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       </source>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:07:29 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/492e9a08-e874-4d99-9ea5-e5f4f0895fde_disk.config">
Jan 31 08:07:29 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       </source>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:07:29 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:a5:9b:6d"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <target dev="tap29eb75c3-cb"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/492e9a08-e874-4d99-9ea5-e5f4f0895fde/console.log" append="off"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <video>
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     </video>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:07:29 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:07:29 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:07:29 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:07:29 compute-2 nova_compute[226829]: </domain>
Jan 31 08:07:29 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.648 226833 DEBUG nova.compute.manager [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Preparing to wait for external event network-vif-plugged-29eb75c3-cb12-4842-8a9c-44444a26662c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.648 226833 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "492e9a08-e874-4d99-9ea5-e5f4f0895fde-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.649 226833 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "492e9a08-e874-4d99-9ea5-e5f4f0895fde-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.649 226833 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "492e9a08-e874-4d99-9ea5-e5f4f0895fde-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.650 226833 DEBUG nova.virt.libvirt.vif [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:07:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-261953401',display_name='tempest-tempest.common.compute-instance-261953401-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-261953401-1',id=109,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b97d933ec6c34696b0483a895f47feef',ramdisk_id='',reservation_id='r-w1iilz7a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-744612571',owner_user_name='tempest-M
ultipleCreateTestJSON-744612571-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:07:20Z,user_data=None,user_id='5ba00f420cd940ff802c16e8c25c35c4',uuid=492e9a08-e874-4d99-9ea5-e5f4f0895fde,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29eb75c3-cb12-4842-8a9c-44444a26662c", "address": "fa:16:3e:a5:9b:6d", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29eb75c3-cb", "ovs_interfaceid": "29eb75c3-cb12-4842-8a9c-44444a26662c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.650 226833 DEBUG nova.network.os_vif_util [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Converting VIF {"id": "29eb75c3-cb12-4842-8a9c-44444a26662c", "address": "fa:16:3e:a5:9b:6d", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29eb75c3-cb", "ovs_interfaceid": "29eb75c3-cb12-4842-8a9c-44444a26662c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.651 226833 DEBUG nova.network.os_vif_util [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:9b:6d,bridge_name='br-int',has_traffic_filtering=True,id=29eb75c3-cb12-4842-8a9c-44444a26662c,network=Network(aae9ebc2-f854-4add-b86c-a5209381ad20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29eb75c3-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.651 226833 DEBUG os_vif [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:9b:6d,bridge_name='br-int',has_traffic_filtering=True,id=29eb75c3-cb12-4842-8a9c-44444a26662c,network=Network(aae9ebc2-f854-4add-b86c-a5209381ad20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29eb75c3-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.651 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.652 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.652 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.655 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.656 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap29eb75c3-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.656 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap29eb75c3-cb, col_values=(('external_ids', {'iface-id': '29eb75c3-cb12-4842-8a9c-44444a26662c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:9b:6d', 'vm-uuid': '492e9a08-e874-4d99-9ea5-e5f4f0895fde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.658 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:29 compute-2 NetworkManager[48999]: <info>  [1769846849.6597] manager: (tap29eb75c3-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/228)
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.661 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.664 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.665 226833 INFO os_vif [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:9b:6d,bridge_name='br-int',has_traffic_filtering=True,id=29eb75c3-cb12-4842-8a9c-44444a26662c,network=Network(aae9ebc2-f854-4add-b86c-a5209381ad20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29eb75c3-cb')
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.837 226833 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.838 226833 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.838 226833 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] No VIF found with MAC fa:16:3e:a5:9b:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:07:29 compute-2 nova_compute[226829]: 2026-01-31 08:07:29.839 226833 INFO nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Using config drive
Jan 31 08:07:30 compute-2 nova_compute[226829]: 2026-01-31 08:07:30.058 226833 DEBUG nova.storage.rbd_utils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] rbd image 492e9a08-e874-4d99-9ea5-e5f4f0895fde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:07:30 compute-2 nova_compute[226829]: 2026-01-31 08:07:30.064 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2733325384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:07:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1322478330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1080046223' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:07:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4248679973' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:07:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2150270100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:30 compute-2 ceph-mon[77282]: pgmap v2119: 305 pgs: 305 active+clean; 227 MiB data, 913 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.6 MiB/s wr, 193 op/s
Jan 31 08:07:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/828928554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:07:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:07:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:30.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:07:30 compute-2 nova_compute[226829]: 2026-01-31 08:07:30.407 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:07:30 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1898498656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:30 compute-2 nova_compute[226829]: 2026-01-31 08:07:30.512 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:07:30 compute-2 nova_compute[226829]: 2026-01-31 08:07:30.517 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:07:30 compute-2 nova_compute[226829]: 2026-01-31 08:07:30.629 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:07:30 compute-2 nova_compute[226829]: 2026-01-31 08:07:30.827 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:07:30 compute-2 nova_compute[226829]: 2026-01-31 08:07:30.828 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:30 compute-2 nova_compute[226829]: 2026-01-31 08:07:30.917 226833 INFO nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Creating config drive at /var/lib/nova/instances/492e9a08-e874-4d99-9ea5-e5f4f0895fde/disk.config
Jan 31 08:07:30 compute-2 nova_compute[226829]: 2026-01-31 08:07:30.921 226833 DEBUG oslo_concurrency.processutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/492e9a08-e874-4d99-9ea5-e5f4f0895fde/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpqrlvnb2y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:31 compute-2 nova_compute[226829]: 2026-01-31 08:07:31.044 226833 DEBUG oslo_concurrency.processutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/492e9a08-e874-4d99-9ea5-e5f4f0895fde/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpqrlvnb2y" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:07:31 compute-2 nova_compute[226829]: 2026-01-31 08:07:31.070 226833 DEBUG nova.storage.rbd_utils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] rbd image 492e9a08-e874-4d99-9ea5-e5f4f0895fde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:07:31 compute-2 nova_compute[226829]: 2026-01-31 08:07:31.073 226833 DEBUG oslo_concurrency.processutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/492e9a08-e874-4d99-9ea5-e5f4f0895fde/disk.config 492e9a08-e874-4d99-9ea5-e5f4f0895fde_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1898498656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:07:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:31.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:07:31 compute-2 nova_compute[226829]: 2026-01-31 08:07:31.253 226833 DEBUG oslo_concurrency.processutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/492e9a08-e874-4d99-9ea5-e5f4f0895fde/disk.config 492e9a08-e874-4d99-9ea5-e5f4f0895fde_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:07:31 compute-2 nova_compute[226829]: 2026-01-31 08:07:31.255 226833 INFO nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Deleting local config drive /var/lib/nova/instances/492e9a08-e874-4d99-9ea5-e5f4f0895fde/disk.config because it was imported into RBD.
Jan 31 08:07:31 compute-2 kernel: tap29eb75c3-cb: entered promiscuous mode
Jan 31 08:07:31 compute-2 NetworkManager[48999]: <info>  [1769846851.3081] manager: (tap29eb75c3-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/229)
Jan 31 08:07:31 compute-2 nova_compute[226829]: 2026-01-31 08:07:31.307 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:31 compute-2 ovn_controller[133834]: 2026-01-31T08:07:31Z|00460|binding|INFO|Claiming lport 29eb75c3-cb12-4842-8a9c-44444a26662c for this chassis.
Jan 31 08:07:31 compute-2 ovn_controller[133834]: 2026-01-31T08:07:31Z|00461|binding|INFO|29eb75c3-cb12-4842-8a9c-44444a26662c: Claiming fa:16:3e:a5:9b:6d 10.100.0.14
Jan 31 08:07:31 compute-2 ovn_controller[133834]: 2026-01-31T08:07:31Z|00462|binding|INFO|Setting lport 29eb75c3-cb12-4842-8a9c-44444a26662c ovn-installed in OVS
Jan 31 08:07:31 compute-2 nova_compute[226829]: 2026-01-31 08:07:31.318 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:31 compute-2 systemd-machined[195142]: New machine qemu-49-instance-0000006d.
Jan 31 08:07:31 compute-2 systemd[1]: Started Virtual Machine qemu-49-instance-0000006d.
Jan 31 08:07:31 compute-2 systemd-udevd[275758]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:07:31 compute-2 ovn_controller[133834]: 2026-01-31T08:07:31Z|00463|binding|INFO|Setting lport 29eb75c3-cb12-4842-8a9c-44444a26662c up in Southbound
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:31.364 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:9b:6d 10.100.0.14'], port_security=['fa:16:3e:a5:9b:6d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '492e9a08-e874-4d99-9ea5-e5f4f0895fde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aae9ebc2-f854-4add-b86c-a5209381ad20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b97d933ec6c34696b0483a895f47feef', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f5e2db8d-c3a5-46be-bb72-92eb36b476fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14b206ab-a379-4d4a-9b80-58ba0ce20e17, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=29eb75c3-cb12-4842-8a9c-44444a26662c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:31.366 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 29eb75c3-cb12-4842-8a9c-44444a26662c in datapath aae9ebc2-f854-4add-b86c-a5209381ad20 bound to our chassis
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:31.368 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aae9ebc2-f854-4add-b86c-a5209381ad20
Jan 31 08:07:31 compute-2 NetworkManager[48999]: <info>  [1769846851.3699] device (tap29eb75c3-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:07:31 compute-2 NetworkManager[48999]: <info>  [1769846851.3723] device (tap29eb75c3-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:31.377 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1d751ad2-b72f-422c-ae5e-e3f172c5f3c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:31.378 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaae9ebc2-f1 in ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:31.380 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaae9ebc2-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:31.380 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[496a7195-151c-405a-bfe0-1a2c07cc713a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:31.381 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5b190794-9a9e-4a32-84b3-ed76a845f2bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:31.394 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[fefcf01f-fc25-40b5-9b77-ea86e18b471a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:31.402 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d0b94624-a05f-4511-b1e2-cc3ad3753c93]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:31.424 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[879bfd46-1c5b-4609-acfb-c960f61f4f34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:31.428 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f44a4b5e-1bf7-4504-abfe-8d63368e0913]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-2 NetworkManager[48999]: <info>  [1769846851.4304] manager: (tapaae9ebc2-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/230)
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:31.453 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[4b14fdb5-f8ab-4908-b077-530f2ea9db46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:31.456 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[d690064f-8678-4288-8a57-66a247544541]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-2 NetworkManager[48999]: <info>  [1769846851.4728] device (tapaae9ebc2-f0): carrier: link connected
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:31.478 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[4b7a8d3b-9f67-4032-ac46-d1d9e3ba65b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:31.489 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[01bbe5cd-4e9b-4309-ad7a-bdae3cadf077]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaae9ebc2-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:4b:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 143], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 710394, 'reachable_time': 24098, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275791, 'error': None, 'target': 'ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:31.497 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a4b4a0c4-eae5-42ef-b99e-886f7f1ca35c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe58:4b98'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 710394, 'tstamp': 710394}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275792, 'error': None, 'target': 'ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:31.506 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[48bd6f8a-d9cc-48f0-a3fd-9d19baae9fed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaae9ebc2-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:4b:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 143], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 710394, 'reachable_time': 24098, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275793, 'error': None, 'target': 'ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:31.520 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[20f8a581-412d-484f-9b29-bb1f70e909e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:31.555 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5c60fa40-205f-491c-8002-ced57b81c244]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:31.556 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaae9ebc2-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:31.557 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:31.557 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaae9ebc2-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:31 compute-2 nova_compute[226829]: 2026-01-31 08:07:31.559 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:31 compute-2 kernel: tapaae9ebc2-f0: entered promiscuous mode
Jan 31 08:07:31 compute-2 nova_compute[226829]: 2026-01-31 08:07:31.562 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:31.563 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaae9ebc2-f0, col_values=(('external_ids', {'iface-id': '18dd3685-0abe-42cf-9017-dc52c6cb4266'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:31 compute-2 ovn_controller[133834]: 2026-01-31T08:07:31Z|00464|binding|INFO|Releasing lport 18dd3685-0abe-42cf-9017-dc52c6cb4266 from this chassis (sb_readonly=0)
Jan 31 08:07:31 compute-2 NetworkManager[48999]: <info>  [1769846851.5658] manager: (tapaae9ebc2-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/231)
Jan 31 08:07:31 compute-2 nova_compute[226829]: 2026-01-31 08:07:31.565 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:31.568 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aae9ebc2-f854-4add-b86c-a5209381ad20.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aae9ebc2-f854-4add-b86c-a5209381ad20.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:31.569 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0c56ae60-9d5a-400f-8499-1d8411d4fe2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:31.570 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-aae9ebc2-f854-4add-b86c-a5209381ad20
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/aae9ebc2-f854-4add-b86c-a5209381ad20.pid.haproxy
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID aae9ebc2-f854-4add-b86c-a5209381ad20
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:07:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:31.571 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20', 'env', 'PROCESS_TAG=haproxy-aae9ebc2-f854-4add-b86c-a5209381ad20', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aae9ebc2-f854-4add-b86c-a5209381ad20.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:07:31 compute-2 nova_compute[226829]: 2026-01-31 08:07:31.570 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:31 compute-2 nova_compute[226829]: 2026-01-31 08:07:31.767 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846851.7671087, 492e9a08-e874-4d99-9ea5-e5f4f0895fde => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:07:31 compute-2 nova_compute[226829]: 2026-01-31 08:07:31.768 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] VM Started (Lifecycle Event)
Jan 31 08:07:31 compute-2 podman[275867]: 2026-01-31 08:07:31.877532118 +0000 UTC m=+0.045776427 container create 9e119f49f386ecaba4b08182ebdb21f8cf91ec5d6bf643804c0637525ed217ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:07:31 compute-2 nova_compute[226829]: 2026-01-31 08:07:31.894 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:07:31 compute-2 nova_compute[226829]: 2026-01-31 08:07:31.898 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846851.7672617, 492e9a08-e874-4d99-9ea5-e5f4f0895fde => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:07:31 compute-2 nova_compute[226829]: 2026-01-31 08:07:31.899 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] VM Paused (Lifecycle Event)
Jan 31 08:07:31 compute-2 systemd[1]: Started libpod-conmon-9e119f49f386ecaba4b08182ebdb21f8cf91ec5d6bf643804c0637525ed217ce.scope.
Jan 31 08:07:31 compute-2 podman[275867]: 2026-01-31 08:07:31.850211405 +0000 UTC m=+0.018455764 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:07:31 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:07:31 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/888025b91aa0b8bb5f2cf4d90d6a267e6d99b36f6cdbbc82ac6f83819c569c49/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:31 compute-2 podman[275867]: 2026-01-31 08:07:31.964864364 +0000 UTC m=+0.133108683 container init 9e119f49f386ecaba4b08182ebdb21f8cf91ec5d6bf643804c0637525ed217ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:07:31 compute-2 podman[275867]: 2026-01-31 08:07:31.968586195 +0000 UTC m=+0.136830504 container start 9e119f49f386ecaba4b08182ebdb21f8cf91ec5d6bf643804c0637525ed217ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 08:07:31 compute-2 neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20[275882]: [NOTICE]   (275886) : New worker (275888) forked
Jan 31 08:07:31 compute-2 neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20[275882]: [NOTICE]   (275886) : Loading success.
Jan 31 08:07:32 compute-2 nova_compute[226829]: 2026-01-31 08:07:32.025 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:07:32 compute-2 nova_compute[226829]: 2026-01-31 08:07:32.030 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:07:32 compute-2 nova_compute[226829]: 2026-01-31 08:07:32.153 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:07:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1252239058' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:32 compute-2 ceph-mon[77282]: pgmap v2120: 305 pgs: 305 active+clean; 227 MiB data, 913 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 222 op/s
Jan 31 08:07:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:32.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:32 compute-2 nova_compute[226829]: 2026-01-31 08:07:32.254 226833 DEBUG nova.network.neutron [req-a93a61b9-1b78-4a0f-95ec-f78fc03969ba req-1266f7bf-9510-474c-bf40-5ed2745d4d25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Updated VIF entry in instance network info cache for port 29eb75c3-cb12-4842-8a9c-44444a26662c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:07:32 compute-2 nova_compute[226829]: 2026-01-31 08:07:32.254 226833 DEBUG nova.network.neutron [req-a93a61b9-1b78-4a0f-95ec-f78fc03969ba req-1266f7bf-9510-474c-bf40-5ed2745d4d25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Updating instance_info_cache with network_info: [{"id": "29eb75c3-cb12-4842-8a9c-44444a26662c", "address": "fa:16:3e:a5:9b:6d", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29eb75c3-cb", "ovs_interfaceid": "29eb75c3-cb12-4842-8a9c-44444a26662c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:07:32 compute-2 nova_compute[226829]: 2026-01-31 08:07:32.423 226833 DEBUG oslo_concurrency.lockutils [req-a93a61b9-1b78-4a0f-95ec-f78fc03969ba req-1266f7bf-9510-474c-bf40-5ed2745d4d25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-492e9a08-e874-4d99-9ea5-e5f4f0895fde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:07:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1848110150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:33.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:34.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:34 compute-2 ceph-mgr[77635]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3465938080
Jan 31 08:07:34 compute-2 ceph-mon[77282]: pgmap v2121: 305 pgs: 305 active+clean; 227 MiB data, 913 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 226 op/s
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.383 226833 DEBUG nova.compute.manager [req-5ece3e9e-e3d0-41dc-ac34-6079ef55ed52 req-71fda6e0-ba20-4510-ae65-b0ce9e9539ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Received event network-vif-plugged-29eb75c3-cb12-4842-8a9c-44444a26662c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.384 226833 DEBUG oslo_concurrency.lockutils [req-5ece3e9e-e3d0-41dc-ac34-6079ef55ed52 req-71fda6e0-ba20-4510-ae65-b0ce9e9539ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "492e9a08-e874-4d99-9ea5-e5f4f0895fde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.384 226833 DEBUG oslo_concurrency.lockutils [req-5ece3e9e-e3d0-41dc-ac34-6079ef55ed52 req-71fda6e0-ba20-4510-ae65-b0ce9e9539ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "492e9a08-e874-4d99-9ea5-e5f4f0895fde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.385 226833 DEBUG oslo_concurrency.lockutils [req-5ece3e9e-e3d0-41dc-ac34-6079ef55ed52 req-71fda6e0-ba20-4510-ae65-b0ce9e9539ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "492e9a08-e874-4d99-9ea5-e5f4f0895fde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.385 226833 DEBUG nova.compute.manager [req-5ece3e9e-e3d0-41dc-ac34-6079ef55ed52 req-71fda6e0-ba20-4510-ae65-b0ce9e9539ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Processing event network-vif-plugged-29eb75c3-cb12-4842-8a9c-44444a26662c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.386 226833 DEBUG nova.compute.manager [req-5ece3e9e-e3d0-41dc-ac34-6079ef55ed52 req-71fda6e0-ba20-4510-ae65-b0ce9e9539ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Received event network-vif-plugged-29eb75c3-cb12-4842-8a9c-44444a26662c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.386 226833 DEBUG oslo_concurrency.lockutils [req-5ece3e9e-e3d0-41dc-ac34-6079ef55ed52 req-71fda6e0-ba20-4510-ae65-b0ce9e9539ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "492e9a08-e874-4d99-9ea5-e5f4f0895fde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.387 226833 DEBUG oslo_concurrency.lockutils [req-5ece3e9e-e3d0-41dc-ac34-6079ef55ed52 req-71fda6e0-ba20-4510-ae65-b0ce9e9539ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "492e9a08-e874-4d99-9ea5-e5f4f0895fde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.387 226833 DEBUG oslo_concurrency.lockutils [req-5ece3e9e-e3d0-41dc-ac34-6079ef55ed52 req-71fda6e0-ba20-4510-ae65-b0ce9e9539ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "492e9a08-e874-4d99-9ea5-e5f4f0895fde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.387 226833 DEBUG nova.compute.manager [req-5ece3e9e-e3d0-41dc-ac34-6079ef55ed52 req-71fda6e0-ba20-4510-ae65-b0ce9e9539ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] No waiting events found dispatching network-vif-plugged-29eb75c3-cb12-4842-8a9c-44444a26662c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.388 226833 WARNING nova.compute.manager [req-5ece3e9e-e3d0-41dc-ac34-6079ef55ed52 req-71fda6e0-ba20-4510-ae65-b0ce9e9539ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Received unexpected event network-vif-plugged-29eb75c3-cb12-4842-8a9c-44444a26662c for instance with vm_state building and task_state spawning.
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.389 226833 DEBUG nova.compute.manager [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.393 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846854.392979, 492e9a08-e874-4d99-9ea5-e5f4f0895fde => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.393 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] VM Resumed (Lifecycle Event)
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.396 226833 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.400 226833 INFO nova.virt.libvirt.driver [-] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Instance spawned successfully.
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.400 226833 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.433 226833 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.433 226833 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.434 226833 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.434 226833 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.434 226833 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.434 226833 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.440 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.443 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.480 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.517 226833 INFO nova.compute.manager [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Took 13.95 seconds to spawn the instance on the hypervisor.
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.517 226833 DEBUG nova.compute.manager [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.610 226833 INFO nova.compute.manager [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Took 17.60 seconds to build instance.
Jan 31 08:07:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.659 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:34 compute-2 nova_compute[226829]: 2026-01-31 08:07:34.671 226833 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "492e9a08-e874-4d99-9ea5-e5f4f0895fde" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.177s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:35.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:35 compute-2 nova_compute[226829]: 2026-01-31 08:07:35.408 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:07:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:36.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:07:36 compute-2 ceph-mon[77282]: pgmap v2122: 305 pgs: 305 active+clean; 227 MiB data, 913 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 3.5 MiB/s wr, 261 op/s
Jan 31 08:07:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:37.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:37 compute-2 ceph-mon[77282]: pgmap v2123: 305 pgs: 305 active+clean; 235 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 3.1 MiB/s wr, 347 op/s
Jan 31 08:07:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:38.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:38 compute-2 nova_compute[226829]: 2026-01-31 08:07:38.364 226833 DEBUG oslo_concurrency.lockutils [None req-2fad3204-a7f0-4551-8441-790260af608d 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "492e9a08-e874-4d99-9ea5-e5f4f0895fde" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:38 compute-2 nova_compute[226829]: 2026-01-31 08:07:38.365 226833 DEBUG oslo_concurrency.lockutils [None req-2fad3204-a7f0-4551-8441-790260af608d 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "492e9a08-e874-4d99-9ea5-e5f4f0895fde" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:38 compute-2 nova_compute[226829]: 2026-01-31 08:07:38.365 226833 DEBUG oslo_concurrency.lockutils [None req-2fad3204-a7f0-4551-8441-790260af608d 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "492e9a08-e874-4d99-9ea5-e5f4f0895fde-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:38 compute-2 nova_compute[226829]: 2026-01-31 08:07:38.365 226833 DEBUG oslo_concurrency.lockutils [None req-2fad3204-a7f0-4551-8441-790260af608d 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "492e9a08-e874-4d99-9ea5-e5f4f0895fde-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:38 compute-2 nova_compute[226829]: 2026-01-31 08:07:38.366 226833 DEBUG oslo_concurrency.lockutils [None req-2fad3204-a7f0-4551-8441-790260af608d 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "492e9a08-e874-4d99-9ea5-e5f4f0895fde-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:38 compute-2 nova_compute[226829]: 2026-01-31 08:07:38.367 226833 INFO nova.compute.manager [None req-2fad3204-a7f0-4551-8441-790260af608d 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Terminating instance
Jan 31 08:07:38 compute-2 nova_compute[226829]: 2026-01-31 08:07:38.369 226833 DEBUG nova.compute.manager [None req-2fad3204-a7f0-4551-8441-790260af608d 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:07:38 compute-2 kernel: tap29eb75c3-cb (unregistering): left promiscuous mode
Jan 31 08:07:38 compute-2 NetworkManager[48999]: <info>  [1769846858.4120] device (tap29eb75c3-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:07:38 compute-2 ovn_controller[133834]: 2026-01-31T08:07:38Z|00465|binding|INFO|Releasing lport 29eb75c3-cb12-4842-8a9c-44444a26662c from this chassis (sb_readonly=0)
Jan 31 08:07:38 compute-2 ovn_controller[133834]: 2026-01-31T08:07:38Z|00466|binding|INFO|Setting lport 29eb75c3-cb12-4842-8a9c-44444a26662c down in Southbound
Jan 31 08:07:38 compute-2 nova_compute[226829]: 2026-01-31 08:07:38.421 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:38 compute-2 ovn_controller[133834]: 2026-01-31T08:07:38Z|00467|binding|INFO|Removing iface tap29eb75c3-cb ovn-installed in OVS
Jan 31 08:07:38 compute-2 nova_compute[226829]: 2026-01-31 08:07:38.431 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:38.444 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:9b:6d 10.100.0.14'], port_security=['fa:16:3e:a5:9b:6d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '492e9a08-e874-4d99-9ea5-e5f4f0895fde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aae9ebc2-f854-4add-b86c-a5209381ad20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b97d933ec6c34696b0483a895f47feef', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f5e2db8d-c3a5-46be-bb72-92eb36b476fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14b206ab-a379-4d4a-9b80-58ba0ce20e17, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=29eb75c3-cb12-4842-8a9c-44444a26662c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:07:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:38.446 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 29eb75c3-cb12-4842-8a9c-44444a26662c in datapath aae9ebc2-f854-4add-b86c-a5209381ad20 unbound from our chassis
Jan 31 08:07:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:38.448 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aae9ebc2-f854-4add-b86c-a5209381ad20, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:07:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:38.450 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[eb09bf65-ec15-4303-87f0-83efeb6483e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:38.451 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20 namespace which is not needed anymore
Jan 31 08:07:38 compute-2 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Jan 31 08:07:38 compute-2 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000006d.scope: Consumed 4.483s CPU time.
Jan 31 08:07:38 compute-2 systemd-machined[195142]: Machine qemu-49-instance-0000006d terminated.
Jan 31 08:07:38 compute-2 neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20[275882]: [NOTICE]   (275886) : haproxy version is 2.8.14-c23fe91
Jan 31 08:07:38 compute-2 neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20[275882]: [NOTICE]   (275886) : path to executable is /usr/sbin/haproxy
Jan 31 08:07:38 compute-2 neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20[275882]: [WARNING]  (275886) : Exiting Master process...
Jan 31 08:07:38 compute-2 neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20[275882]: [ALERT]    (275886) : Current worker (275888) exited with code 143 (Terminated)
Jan 31 08:07:38 compute-2 neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20[275882]: [WARNING]  (275886) : All workers exited. Exiting... (0)
Jan 31 08:07:38 compute-2 systemd[1]: libpod-9e119f49f386ecaba4b08182ebdb21f8cf91ec5d6bf643804c0637525ed217ce.scope: Deactivated successfully.
Jan 31 08:07:38 compute-2 podman[275927]: 2026-01-31 08:07:38.561984797 +0000 UTC m=+0.043173296 container died 9e119f49f386ecaba4b08182ebdb21f8cf91ec5d6bf643804c0637525ed217ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:07:38 compute-2 systemd[1]: var-lib-containers-storage-overlay-888025b91aa0b8bb5f2cf4d90d6a267e6d99b36f6cdbbc82ac6f83819c569c49-merged.mount: Deactivated successfully.
Jan 31 08:07:38 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9e119f49f386ecaba4b08182ebdb21f8cf91ec5d6bf643804c0637525ed217ce-userdata-shm.mount: Deactivated successfully.
Jan 31 08:07:38 compute-2 podman[275927]: 2026-01-31 08:07:38.603429574 +0000 UTC m=+0.084618083 container cleanup 9e119f49f386ecaba4b08182ebdb21f8cf91ec5d6bf643804c0637525ed217ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 08:07:38 compute-2 nova_compute[226829]: 2026-01-31 08:07:38.607 226833 INFO nova.virt.libvirt.driver [-] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Instance destroyed successfully.
Jan 31 08:07:38 compute-2 nova_compute[226829]: 2026-01-31 08:07:38.608 226833 DEBUG nova.objects.instance [None req-2fad3204-a7f0-4551-8441-790260af608d 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lazy-loading 'resources' on Instance uuid 492e9a08-e874-4d99-9ea5-e5f4f0895fde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:07:38 compute-2 systemd[1]: libpod-conmon-9e119f49f386ecaba4b08182ebdb21f8cf91ec5d6bf643804c0637525ed217ce.scope: Deactivated successfully.
Jan 31 08:07:38 compute-2 nova_compute[226829]: 2026-01-31 08:07:38.646 226833 DEBUG nova.virt.libvirt.vif [None req-2fad3204-a7f0-4551-8441-790260af608d 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:07:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-261953401',display_name='tempest-tempest.common.compute-instance-261953401-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-261953401-1',id=109,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:07:34Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b97d933ec6c34696b0483a895f47feef',ramdisk_id='',reservation_id='r-w1iilz7a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio
',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-744612571',owner_user_name='tempest-MultipleCreateTestJSON-744612571-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:07:34Z,user_data=None,user_id='5ba00f420cd940ff802c16e8c25c35c4',uuid=492e9a08-e874-4d99-9ea5-e5f4f0895fde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "29eb75c3-cb12-4842-8a9c-44444a26662c", "address": "fa:16:3e:a5:9b:6d", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29eb75c3-cb", "ovs_interfaceid": "29eb75c3-cb12-4842-8a9c-44444a26662c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:07:38 compute-2 nova_compute[226829]: 2026-01-31 08:07:38.646 226833 DEBUG nova.network.os_vif_util [None req-2fad3204-a7f0-4551-8441-790260af608d 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Converting VIF {"id": "29eb75c3-cb12-4842-8a9c-44444a26662c", "address": "fa:16:3e:a5:9b:6d", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29eb75c3-cb", "ovs_interfaceid": "29eb75c3-cb12-4842-8a9c-44444a26662c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:07:38 compute-2 nova_compute[226829]: 2026-01-31 08:07:38.647 226833 DEBUG nova.network.os_vif_util [None req-2fad3204-a7f0-4551-8441-790260af608d 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:9b:6d,bridge_name='br-int',has_traffic_filtering=True,id=29eb75c3-cb12-4842-8a9c-44444a26662c,network=Network(aae9ebc2-f854-4add-b86c-a5209381ad20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29eb75c3-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:07:38 compute-2 nova_compute[226829]: 2026-01-31 08:07:38.647 226833 DEBUG os_vif [None req-2fad3204-a7f0-4551-8441-790260af608d 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:9b:6d,bridge_name='br-int',has_traffic_filtering=True,id=29eb75c3-cb12-4842-8a9c-44444a26662c,network=Network(aae9ebc2-f854-4add-b86c-a5209381ad20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29eb75c3-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:07:38 compute-2 nova_compute[226829]: 2026-01-31 08:07:38.651 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:38 compute-2 nova_compute[226829]: 2026-01-31 08:07:38.651 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29eb75c3-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:38 compute-2 nova_compute[226829]: 2026-01-31 08:07:38.653 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:38 compute-2 nova_compute[226829]: 2026-01-31 08:07:38.654 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:38 compute-2 nova_compute[226829]: 2026-01-31 08:07:38.659 226833 INFO os_vif [None req-2fad3204-a7f0-4551-8441-790260af608d 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:9b:6d,bridge_name='br-int',has_traffic_filtering=True,id=29eb75c3-cb12-4842-8a9c-44444a26662c,network=Network(aae9ebc2-f854-4add-b86c-a5209381ad20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29eb75c3-cb')
Jan 31 08:07:38 compute-2 podman[275967]: 2026-01-31 08:07:38.679465993 +0000 UTC m=+0.053094186 container remove 9e119f49f386ecaba4b08182ebdb21f8cf91ec5d6bf643804c0637525ed217ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 08:07:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:38.683 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[929ce672-2f50-4d4c-9779-a8b5bd40c598]: (4, ('Sat Jan 31 08:07:38 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20 (9e119f49f386ecaba4b08182ebdb21f8cf91ec5d6bf643804c0637525ed217ce)\n9e119f49f386ecaba4b08182ebdb21f8cf91ec5d6bf643804c0637525ed217ce\nSat Jan 31 08:07:38 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20 (9e119f49f386ecaba4b08182ebdb21f8cf91ec5d6bf643804c0637525ed217ce)\n9e119f49f386ecaba4b08182ebdb21f8cf91ec5d6bf643804c0637525ed217ce\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:38.685 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[59ba6cb7-c352-4b47-b54d-aafe55b86576]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:38.686 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaae9ebc2-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:38 compute-2 nova_compute[226829]: 2026-01-31 08:07:38.688 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:38 compute-2 kernel: tapaae9ebc2-f0: left promiscuous mode
Jan 31 08:07:38 compute-2 nova_compute[226829]: 2026-01-31 08:07:38.693 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:38.695 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[11fed94b-4574-4877-9c8b-0d115184f0f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:38.711 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1b053a83-ce99-45ea-9e46-db47589496aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:38.712 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[966921ac-4c9f-4519-8c6d-4926b1aeab6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:38.725 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[937369a5-db13-496d-8709-e438fee71e4a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 710389, 'reachable_time': 38257, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275997, 'error': None, 'target': 'ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:38.728 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:07:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:38.728 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[76593afa-a61f-455e-b576-f237c95358c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:38 compute-2 systemd[1]: run-netns-ovnmeta\x2daae9ebc2\x2df854\x2d4add\x2db86c\x2da5209381ad20.mount: Deactivated successfully.
Jan 31 08:07:38 compute-2 nova_compute[226829]: 2026-01-31 08:07:38.857 226833 DEBUG nova.compute.manager [req-8650de9b-3a4b-40fa-91ba-cd91eca7e820 req-f65b2725-bde2-4a51-82e5-f41d7d5d6a56 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Received event network-vif-unplugged-29eb75c3-cb12-4842-8a9c-44444a26662c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:07:38 compute-2 nova_compute[226829]: 2026-01-31 08:07:38.857 226833 DEBUG oslo_concurrency.lockutils [req-8650de9b-3a4b-40fa-91ba-cd91eca7e820 req-f65b2725-bde2-4a51-82e5-f41d7d5d6a56 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "492e9a08-e874-4d99-9ea5-e5f4f0895fde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:38 compute-2 nova_compute[226829]: 2026-01-31 08:07:38.857 226833 DEBUG oslo_concurrency.lockutils [req-8650de9b-3a4b-40fa-91ba-cd91eca7e820 req-f65b2725-bde2-4a51-82e5-f41d7d5d6a56 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "492e9a08-e874-4d99-9ea5-e5f4f0895fde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:38 compute-2 nova_compute[226829]: 2026-01-31 08:07:38.857 226833 DEBUG oslo_concurrency.lockutils [req-8650de9b-3a4b-40fa-91ba-cd91eca7e820 req-f65b2725-bde2-4a51-82e5-f41d7d5d6a56 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "492e9a08-e874-4d99-9ea5-e5f4f0895fde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:38 compute-2 nova_compute[226829]: 2026-01-31 08:07:38.858 226833 DEBUG nova.compute.manager [req-8650de9b-3a4b-40fa-91ba-cd91eca7e820 req-f65b2725-bde2-4a51-82e5-f41d7d5d6a56 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] No waiting events found dispatching network-vif-unplugged-29eb75c3-cb12-4842-8a9c-44444a26662c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:07:38 compute-2 nova_compute[226829]: 2026-01-31 08:07:38.858 226833 DEBUG nova.compute.manager [req-8650de9b-3a4b-40fa-91ba-cd91eca7e820 req-f65b2725-bde2-4a51-82e5-f41d7d5d6a56 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Received event network-vif-unplugged-29eb75c3-cb12-4842-8a9c-44444a26662c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:07:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:39.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:39 compute-2 nova_compute[226829]: 2026-01-31 08:07:39.300 226833 INFO nova.virt.libvirt.driver [None req-2fad3204-a7f0-4551-8441-790260af608d 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Deleting instance files /var/lib/nova/instances/492e9a08-e874-4d99-9ea5-e5f4f0895fde_del
Jan 31 08:07:39 compute-2 nova_compute[226829]: 2026-01-31 08:07:39.301 226833 INFO nova.virt.libvirt.driver [None req-2fad3204-a7f0-4551-8441-790260af608d 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Deletion of /var/lib/nova/instances/492e9a08-e874-4d99-9ea5-e5f4f0895fde_del complete
Jan 31 08:07:39 compute-2 nova_compute[226829]: 2026-01-31 08:07:39.441 226833 INFO nova.compute.manager [None req-2fad3204-a7f0-4551-8441-790260af608d 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Took 1.07 seconds to destroy the instance on the hypervisor.
Jan 31 08:07:39 compute-2 nova_compute[226829]: 2026-01-31 08:07:39.441 226833 DEBUG oslo.service.loopingcall [None req-2fad3204-a7f0-4551-8441-790260af608d 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:07:39 compute-2 nova_compute[226829]: 2026-01-31 08:07:39.442 226833 DEBUG nova.compute.manager [-] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:07:39 compute-2 nova_compute[226829]: 2026-01-31 08:07:39.442 226833 DEBUG nova.network.neutron [-] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:07:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:07:39 compute-2 nova_compute[226829]: 2026-01-31 08:07:39.828 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:07:39 compute-2 nova_compute[226829]: 2026-01-31 08:07:39.828 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:07:39 compute-2 nova_compute[226829]: 2026-01-31 08:07:39.830 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:39 compute-2 nova_compute[226829]: 2026-01-31 08:07:39.931 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:07:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:40.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:07:40 compute-2 nova_compute[226829]: 2026-01-31 08:07:40.410 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:40 compute-2 ceph-mon[77282]: pgmap v2124: 305 pgs: 305 active+clean; 226 MiB data, 928 MiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 2.2 MiB/s wr, 330 op/s
Jan 31 08:07:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:07:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:41.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:07:41 compute-2 nova_compute[226829]: 2026-01-31 08:07:41.456 226833 DEBUG nova.compute.manager [req-26e536ef-fe7d-468e-967b-994cb39863b8 req-9d1c7627-a254-4dfe-ac5a-03ee96c5f0ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Received event network-vif-plugged-29eb75c3-cb12-4842-8a9c-44444a26662c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:07:41 compute-2 nova_compute[226829]: 2026-01-31 08:07:41.456 226833 DEBUG oslo_concurrency.lockutils [req-26e536ef-fe7d-468e-967b-994cb39863b8 req-9d1c7627-a254-4dfe-ac5a-03ee96c5f0ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "492e9a08-e874-4d99-9ea5-e5f4f0895fde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:41 compute-2 nova_compute[226829]: 2026-01-31 08:07:41.457 226833 DEBUG oslo_concurrency.lockutils [req-26e536ef-fe7d-468e-967b-994cb39863b8 req-9d1c7627-a254-4dfe-ac5a-03ee96c5f0ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "492e9a08-e874-4d99-9ea5-e5f4f0895fde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:41 compute-2 nova_compute[226829]: 2026-01-31 08:07:41.457 226833 DEBUG oslo_concurrency.lockutils [req-26e536ef-fe7d-468e-967b-994cb39863b8 req-9d1c7627-a254-4dfe-ac5a-03ee96c5f0ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "492e9a08-e874-4d99-9ea5-e5f4f0895fde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:41 compute-2 nova_compute[226829]: 2026-01-31 08:07:41.457 226833 DEBUG nova.compute.manager [req-26e536ef-fe7d-468e-967b-994cb39863b8 req-9d1c7627-a254-4dfe-ac5a-03ee96c5f0ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] No waiting events found dispatching network-vif-plugged-29eb75c3-cb12-4842-8a9c-44444a26662c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:07:41 compute-2 nova_compute[226829]: 2026-01-31 08:07:41.457 226833 WARNING nova.compute.manager [req-26e536ef-fe7d-468e-967b-994cb39863b8 req-9d1c7627-a254-4dfe-ac5a-03ee96c5f0ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Received unexpected event network-vif-plugged-29eb75c3-cb12-4842-8a9c-44444a26662c for instance with vm_state active and task_state deleting.
Jan 31 08:07:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:07:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:42.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:07:42 compute-2 ceph-mon[77282]: pgmap v2125: 305 pgs: 305 active+clean; 217 MiB data, 935 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.0 MiB/s wr, 296 op/s
Jan 31 08:07:42 compute-2 sudo[276002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:07:42 compute-2 sudo[276002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:42 compute-2 sudo[276002]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:42 compute-2 sudo[276027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:07:42 compute-2 sudo[276027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:42 compute-2 sudo[276027]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:42 compute-2 nova_compute[226829]: 2026-01-31 08:07:42.652 226833 DEBUG nova.network.neutron [-] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:07:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:42.724 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:07:42 compute-2 nova_compute[226829]: 2026-01-31 08:07:42.724 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:42.725 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:07:42 compute-2 nova_compute[226829]: 2026-01-31 08:07:42.763 226833 INFO nova.compute.manager [-] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Took 3.32 seconds to deallocate network for instance.
Jan 31 08:07:42 compute-2 nova_compute[226829]: 2026-01-31 08:07:42.998 226833 DEBUG oslo_concurrency.lockutils [None req-2fad3204-a7f0-4551-8441-790260af608d 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:42 compute-2 nova_compute[226829]: 2026-01-31 08:07:42.999 226833 DEBUG oslo_concurrency.lockutils [None req-2fad3204-a7f0-4551-8441-790260af608d 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:43 compute-2 nova_compute[226829]: 2026-01-31 08:07:43.048 226833 DEBUG oslo_concurrency.processutils [None req-2fad3204-a7f0-4551-8441-790260af608d 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:07:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:43.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:07:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:07:43 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3483021904' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:43 compute-2 nova_compute[226829]: 2026-01-31 08:07:43.500 226833 DEBUG oslo_concurrency.processutils [None req-2fad3204-a7f0-4551-8441-790260af608d 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:07:43 compute-2 nova_compute[226829]: 2026-01-31 08:07:43.506 226833 DEBUG nova.compute.provider_tree [None req-2fad3204-a7f0-4551-8441-790260af608d 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:07:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3483021904' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:43 compute-2 nova_compute[226829]: 2026-01-31 08:07:43.655 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:43 compute-2 nova_compute[226829]: 2026-01-31 08:07:43.729 226833 DEBUG nova.scheduler.client.report [None req-2fad3204-a7f0-4551-8441-790260af608d 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:07:43 compute-2 nova_compute[226829]: 2026-01-31 08:07:43.757 226833 DEBUG nova.compute.manager [req-1ba0f81e-75de-419c-889b-2f384fb8e817 req-dfef31cb-575d-443a-9efa-995a4971ac04 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Received event network-vif-deleted-29eb75c3-cb12-4842-8a9c-44444a26662c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:07:43 compute-2 nova_compute[226829]: 2026-01-31 08:07:43.884 226833 DEBUG oslo_concurrency.lockutils [None req-2fad3204-a7f0-4551-8441-790260af608d 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:44 compute-2 nova_compute[226829]: 2026-01-31 08:07:44.166 226833 INFO nova.scheduler.client.report [None req-2fad3204-a7f0-4551-8441-790260af608d 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Deleted allocations for instance 492e9a08-e874-4d99-9ea5-e5f4f0895fde
Jan 31 08:07:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:44.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:44 compute-2 nova_compute[226829]: 2026-01-31 08:07:44.486 226833 DEBUG oslo_concurrency.lockutils [None req-2fad3204-a7f0-4551-8441-790260af608d 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "492e9a08-e874-4d99-9ea5-e5f4f0895fde" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:44 compute-2 ceph-mon[77282]: pgmap v2126: 305 pgs: 305 active+clean; 206 MiB data, 933 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.5 MiB/s wr, 285 op/s
Jan 31 08:07:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:07:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:07:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3791122339' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:07:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:07:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3791122339' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:07:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:07:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:45.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:07:45 compute-2 nova_compute[226829]: 2026-01-31 08:07:45.412 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3791122339' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:07:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3791122339' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:07:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2089367561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:46.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:46 compute-2 ceph-mon[77282]: pgmap v2127: 305 pgs: 305 active+clean; 196 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.3 MiB/s wr, 314 op/s
Jan 31 08:07:47 compute-2 sudo[276087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:07:47 compute-2 sudo[276087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:47 compute-2 sudo[276087]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:47 compute-2 sudo[276124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:47 compute-2 sudo[276124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:47 compute-2 podman[276076]: 2026-01-31 08:07:47.237725997 +0000 UTC m=+0.128506957 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:07:47 compute-2 sudo[276124]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:47.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:47 compute-2 sudo[276152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:07:47 compute-2 sudo[276152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:47 compute-2 sudo[276152]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:47 compute-2 sudo[276177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:07:47 compute-2 sudo[276177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:47 compute-2 sudo[276177]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:07:47.728 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:47 compute-2 sudo[276233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:07:47 compute-2 sudo[276233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:47 compute-2 sudo[276233]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:47 compute-2 sudo[276258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:47 compute-2 sudo[276258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:47 compute-2 sudo[276258]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:47 compute-2 sudo[276283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:07:47 compute-2 sudo[276283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:47 compute-2 sudo[276283]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:47 compute-2 sudo[276308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 31 08:07:47 compute-2 sudo[276308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:48 compute-2 sudo[276308]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:48.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:48 compute-2 ceph-mon[77282]: pgmap v2128: 305 pgs: 305 active+clean; 200 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.3 MiB/s wr, 292 op/s
Jan 31 08:07:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:48 compute-2 nova_compute[226829]: 2026-01-31 08:07:48.657 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:49.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:07:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:50.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:50 compute-2 nova_compute[226829]: 2026-01-31 08:07:50.414 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:50 compute-2 ceph-mon[77282]: pgmap v2129: 305 pgs: 305 active+clean; 200 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.0 MiB/s wr, 204 op/s
Jan 31 08:07:50 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:50 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:50 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:50 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:50 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:07:50 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:07:50 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:50 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:07:50 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:07:50 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:07:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:07:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:51.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:07:52 compute-2 nova_compute[226829]: 2026-01-31 08:07:52.189 226833 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "286c72fb-d5ca-43f7-9b53-3d1b5c000db4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:52 compute-2 nova_compute[226829]: 2026-01-31 08:07:52.190 226833 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "286c72fb-d5ca-43f7-9b53-3d1b5c000db4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:52.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:52 compute-2 nova_compute[226829]: 2026-01-31 08:07:52.334 226833 DEBUG nova.compute.manager [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:07:52 compute-2 nova_compute[226829]: 2026-01-31 08:07:52.559 226833 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:52 compute-2 nova_compute[226829]: 2026-01-31 08:07:52.559 226833 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:52 compute-2 nova_compute[226829]: 2026-01-31 08:07:52.567 226833 DEBUG nova.virt.hardware [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:07:52 compute-2 nova_compute[226829]: 2026-01-31 08:07:52.567 226833 INFO nova.compute.claims [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:07:52 compute-2 ceph-mon[77282]: pgmap v2130: 305 pgs: 305 active+clean; 200 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 517 KiB/s rd, 3.1 MiB/s wr, 135 op/s
Jan 31 08:07:52 compute-2 nova_compute[226829]: 2026-01-31 08:07:52.767 226833 DEBUG nova.scheduler.client.report [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Refreshing inventories for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:07:52 compute-2 nova_compute[226829]: 2026-01-31 08:07:52.825 226833 DEBUG nova.scheduler.client.report [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Updating ProviderTree inventory for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:07:52 compute-2 nova_compute[226829]: 2026-01-31 08:07:52.826 226833 DEBUG nova.compute.provider_tree [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Updating inventory in ProviderTree for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:07:52 compute-2 nova_compute[226829]: 2026-01-31 08:07:52.845 226833 DEBUG nova.scheduler.client.report [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Refreshing aggregate associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:07:52 compute-2 nova_compute[226829]: 2026-01-31 08:07:52.870 226833 DEBUG nova.scheduler.client.report [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Refreshing trait associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, traits: COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:07:52 compute-2 nova_compute[226829]: 2026-01-31 08:07:52.929 226833 DEBUG oslo_concurrency.processutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:53.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:07:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2015146701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:53 compute-2 nova_compute[226829]: 2026-01-31 08:07:53.452 226833 DEBUG oslo_concurrency.processutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:07:53 compute-2 nova_compute[226829]: 2026-01-31 08:07:53.458 226833 DEBUG nova.compute.provider_tree [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:07:53 compute-2 nova_compute[226829]: 2026-01-31 08:07:53.516 226833 DEBUG nova.scheduler.client.report [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:07:53 compute-2 nova_compute[226829]: 2026-01-31 08:07:53.601 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846858.600957, 492e9a08-e874-4d99-9ea5-e5f4f0895fde => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:07:53 compute-2 nova_compute[226829]: 2026-01-31 08:07:53.604 226833 INFO nova.compute.manager [-] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] VM Stopped (Lifecycle Event)
Jan 31 08:07:53 compute-2 nova_compute[226829]: 2026-01-31 08:07:53.607 226833 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:53 compute-2 nova_compute[226829]: 2026-01-31 08:07:53.607 226833 DEBUG nova.compute.manager [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:07:53 compute-2 nova_compute[226829]: 2026-01-31 08:07:53.629 226833 DEBUG nova.compute.manager [None req-855cdb65-918f-4413-9a04-b8462116c821 - - - - - -] [instance: 492e9a08-e874-4d99-9ea5-e5f4f0895fde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:07:53 compute-2 nova_compute[226829]: 2026-01-31 08:07:53.659 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:53 compute-2 nova_compute[226829]: 2026-01-31 08:07:53.691 226833 DEBUG nova.compute.manager [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:07:53 compute-2 nova_compute[226829]: 2026-01-31 08:07:53.691 226833 DEBUG nova.network.neutron [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:07:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3487880865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2015146701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:53 compute-2 nova_compute[226829]: 2026-01-31 08:07:53.796 226833 INFO nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:07:53 compute-2 nova_compute[226829]: 2026-01-31 08:07:53.871 226833 DEBUG nova.compute.manager [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:07:54 compute-2 nova_compute[226829]: 2026-01-31 08:07:54.103 226833 DEBUG nova.policy [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5ba00f420cd940ff802c16e8c25c35c4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b97d933ec6c34696b0483a895f47feef', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:07:54 compute-2 nova_compute[226829]: 2026-01-31 08:07:54.136 226833 DEBUG nova.compute.manager [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:07:54 compute-2 nova_compute[226829]: 2026-01-31 08:07:54.137 226833 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:07:54 compute-2 nova_compute[226829]: 2026-01-31 08:07:54.137 226833 INFO nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Creating image(s)
Jan 31 08:07:54 compute-2 nova_compute[226829]: 2026-01-31 08:07:54.162 226833 DEBUG nova.storage.rbd_utils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] rbd image 286c72fb-d5ca-43f7-9b53-3d1b5c000db4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:07:54 compute-2 nova_compute[226829]: 2026-01-31 08:07:54.192 226833 DEBUG nova.storage.rbd_utils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] rbd image 286c72fb-d5ca-43f7-9b53-3d1b5c000db4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:07:54 compute-2 nova_compute[226829]: 2026-01-31 08:07:54.224 226833 DEBUG nova.storage.rbd_utils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] rbd image 286c72fb-d5ca-43f7-9b53-3d1b5c000db4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:07:54 compute-2 nova_compute[226829]: 2026-01-31 08:07:54.229 226833 DEBUG oslo_concurrency.processutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:54.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:54 compute-2 nova_compute[226829]: 2026-01-31 08:07:54.287 226833 DEBUG oslo_concurrency.processutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:07:54 compute-2 nova_compute[226829]: 2026-01-31 08:07:54.287 226833 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:54 compute-2 nova_compute[226829]: 2026-01-31 08:07:54.288 226833 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:54 compute-2 nova_compute[226829]: 2026-01-31 08:07:54.288 226833 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:54 compute-2 nova_compute[226829]: 2026-01-31 08:07:54.318 226833 DEBUG nova.storage.rbd_utils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] rbd image 286c72fb-d5ca-43f7-9b53-3d1b5c000db4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:07:54 compute-2 nova_compute[226829]: 2026-01-31 08:07:54.323 226833 DEBUG oslo_concurrency.processutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 286c72fb-d5ca-43f7-9b53-3d1b5c000db4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:07:54 compute-2 ceph-mon[77282]: pgmap v2131: 305 pgs: 305 active+clean; 200 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 268 KiB/s rd, 1.4 MiB/s wr, 73 op/s
Jan 31 08:07:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/755635357' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:07:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3162634110' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:07:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:07:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:55.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:07:55 compute-2 nova_compute[226829]: 2026-01-31 08:07:55.663 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:56 compute-2 ceph-mon[77282]: pgmap v2132: 305 pgs: 305 active+clean; 200 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 235 KiB/s rd, 893 KiB/s wr, 54 op/s
Jan 31 08:07:56 compute-2 nova_compute[226829]: 2026-01-31 08:07:56.169 226833 DEBUG oslo_concurrency.processutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 286c72fb-d5ca-43f7-9b53-3d1b5c000db4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.846s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:07:56 compute-2 nova_compute[226829]: 2026-01-31 08:07:56.231 226833 DEBUG nova.storage.rbd_utils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] resizing rbd image 286c72fb-d5ca-43f7-9b53-3d1b5c000db4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:07:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:56.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:56 compute-2 nova_compute[226829]: 2026-01-31 08:07:56.335 226833 DEBUG nova.objects.instance [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lazy-loading 'migration_context' on Instance uuid 286c72fb-d5ca-43f7-9b53-3d1b5c000db4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:07:56 compute-2 sudo[276544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:07:56 compute-2 sudo[276544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:56 compute-2 sudo[276544]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:56 compute-2 sudo[276569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:07:56 compute-2 sudo[276569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:56 compute-2 sudo[276569]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e263 e263: 3 total, 3 up, 3 in
Jan 31 08:07:57 compute-2 nova_compute[226829]: 2026-01-31 08:07:57.188 226833 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:07:57 compute-2 nova_compute[226829]: 2026-01-31 08:07:57.190 226833 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Ensure instance console log exists: /var/lib/nova/instances/286c72fb-d5ca-43f7-9b53-3d1b5c000db4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:07:57 compute-2 nova_compute[226829]: 2026-01-31 08:07:57.191 226833 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:57 compute-2 nova_compute[226829]: 2026-01-31 08:07:57.191 226833 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:57 compute-2 nova_compute[226829]: 2026-01-31 08:07:57.192 226833 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:57 compute-2 ceph-mon[77282]: osdmap e263: 3 total, 3 up, 3 in
Jan 31 08:07:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:07:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:57.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:07:57 compute-2 nova_compute[226829]: 2026-01-31 08:07:57.818 226833 DEBUG nova.network.neutron [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Successfully created port: 82b29aed-298c-485b-89e3-31b60500ecab _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:07:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:58.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e264 e264: 3 total, 3 up, 3 in
Jan 31 08:07:58 compute-2 ceph-mon[77282]: pgmap v2134: 305 pgs: 305 active+clean; 258 MiB data, 952 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.0 MiB/s wr, 78 op/s
Jan 31 08:07:58 compute-2 nova_compute[226829]: 2026-01-31 08:07:58.661 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:59 compute-2 podman[276595]: 2026-01-31 08:07:59.165794978 +0000 UTC m=+0.052538930 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 08:07:59 compute-2 ceph-mon[77282]: osdmap e264: 3 total, 3 up, 3 in
Jan 31 08:07:59 compute-2 nova_compute[226829]: 2026-01-31 08:07:59.300 226833 DEBUG nova.network.neutron [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Successfully updated port: 82b29aed-298c-485b-89e3-31b60500ecab _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:07:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e265 e265: 3 total, 3 up, 3 in
Jan 31 08:07:59 compute-2 nova_compute[226829]: 2026-01-31 08:07:59.343 226833 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "refresh_cache-286c72fb-d5ca-43f7-9b53-3d1b5c000db4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:07:59 compute-2 nova_compute[226829]: 2026-01-31 08:07:59.344 226833 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquired lock "refresh_cache-286c72fb-d5ca-43f7-9b53-3d1b5c000db4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:07:59 compute-2 nova_compute[226829]: 2026-01-31 08:07:59.344 226833 DEBUG nova.network.neutron [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:07:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:07:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:07:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:59.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:59 compute-2 nova_compute[226829]: 2026-01-31 08:07:59.747 226833 DEBUG nova.network.neutron [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:08:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:00.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:00 compute-2 ceph-mon[77282]: osdmap e265: 3 total, 3 up, 3 in
Jan 31 08:08:00 compute-2 ceph-mon[77282]: pgmap v2137: 305 pgs: 305 active+clean; 333 MiB data, 994 MiB used, 20 GiB / 21 GiB avail; 8.2 MiB/s rd, 11 MiB/s wr, 316 op/s
Jan 31 08:08:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3387995892' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:08:00 compute-2 nova_compute[226829]: 2026-01-31 08:08:00.664 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:00 compute-2 nova_compute[226829]: 2026-01-31 08:08:00.834 226833 DEBUG nova.network.neutron [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Updating instance_info_cache with network_info: [{"id": "82b29aed-298c-485b-89e3-31b60500ecab", "address": "fa:16:3e:55:d4:fa", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82b29aed-29", "ovs_interfaceid": "82b29aed-298c-485b-89e3-31b60500ecab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:08:00 compute-2 nova_compute[226829]: 2026-01-31 08:08:00.840 226833 DEBUG nova.compute.manager [req-309ac1bb-7a25-4b47-957c-63e91e72d536 req-53922df6-80ce-4682-af55-ae29b9a4b23e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Received event network-changed-82b29aed-298c-485b-89e3-31b60500ecab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:08:00 compute-2 nova_compute[226829]: 2026-01-31 08:08:00.841 226833 DEBUG nova.compute.manager [req-309ac1bb-7a25-4b47-957c-63e91e72d536 req-53922df6-80ce-4682-af55-ae29b9a4b23e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Refreshing instance network info cache due to event network-changed-82b29aed-298c-485b-89e3-31b60500ecab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:08:00 compute-2 nova_compute[226829]: 2026-01-31 08:08:00.841 226833 DEBUG oslo_concurrency.lockutils [req-309ac1bb-7a25-4b47-957c-63e91e72d536 req-53922df6-80ce-4682-af55-ae29b9a4b23e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-286c72fb-d5ca-43f7-9b53-3d1b5c000db4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:08:00 compute-2 nova_compute[226829]: 2026-01-31 08:08:00.984 226833 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Releasing lock "refresh_cache-286c72fb-d5ca-43f7-9b53-3d1b5c000db4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:08:00 compute-2 nova_compute[226829]: 2026-01-31 08:08:00.984 226833 DEBUG nova.compute.manager [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Instance network_info: |[{"id": "82b29aed-298c-485b-89e3-31b60500ecab", "address": "fa:16:3e:55:d4:fa", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82b29aed-29", "ovs_interfaceid": "82b29aed-298c-485b-89e3-31b60500ecab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:08:00 compute-2 nova_compute[226829]: 2026-01-31 08:08:00.985 226833 DEBUG oslo_concurrency.lockutils [req-309ac1bb-7a25-4b47-957c-63e91e72d536 req-53922df6-80ce-4682-af55-ae29b9a4b23e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-286c72fb-d5ca-43f7-9b53-3d1b5c000db4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:08:00 compute-2 nova_compute[226829]: 2026-01-31 08:08:00.985 226833 DEBUG nova.network.neutron [req-309ac1bb-7a25-4b47-957c-63e91e72d536 req-53922df6-80ce-4682-af55-ae29b9a4b23e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Refreshing network info cache for port 82b29aed-298c-485b-89e3-31b60500ecab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:08:00 compute-2 nova_compute[226829]: 2026-01-31 08:08:00.989 226833 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Start _get_guest_xml network_info=[{"id": "82b29aed-298c-485b-89e3-31b60500ecab", "address": "fa:16:3e:55:d4:fa", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82b29aed-29", "ovs_interfaceid": "82b29aed-298c-485b-89e3-31b60500ecab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:08:00 compute-2 nova_compute[226829]: 2026-01-31 08:08:00.994 226833 WARNING nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:08:01 compute-2 nova_compute[226829]: 2026-01-31 08:08:01.003 226833 DEBUG nova.virt.libvirt.host [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:08:01 compute-2 nova_compute[226829]: 2026-01-31 08:08:01.003 226833 DEBUG nova.virt.libvirt.host [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:08:01 compute-2 nova_compute[226829]: 2026-01-31 08:08:01.010 226833 DEBUG nova.virt.libvirt.host [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:08:01 compute-2 nova_compute[226829]: 2026-01-31 08:08:01.010 226833 DEBUG nova.virt.libvirt.host [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:08:01 compute-2 nova_compute[226829]: 2026-01-31 08:08:01.012 226833 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:08:01 compute-2 nova_compute[226829]: 2026-01-31 08:08:01.013 226833 DEBUG nova.virt.hardware [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:08:01 compute-2 nova_compute[226829]: 2026-01-31 08:08:01.013 226833 DEBUG nova.virt.hardware [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:08:01 compute-2 nova_compute[226829]: 2026-01-31 08:08:01.014 226833 DEBUG nova.virt.hardware [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:08:01 compute-2 nova_compute[226829]: 2026-01-31 08:08:01.014 226833 DEBUG nova.virt.hardware [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:08:01 compute-2 nova_compute[226829]: 2026-01-31 08:08:01.014 226833 DEBUG nova.virt.hardware [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:08:01 compute-2 nova_compute[226829]: 2026-01-31 08:08:01.014 226833 DEBUG nova.virt.hardware [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:08:01 compute-2 nova_compute[226829]: 2026-01-31 08:08:01.015 226833 DEBUG nova.virt.hardware [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:08:01 compute-2 nova_compute[226829]: 2026-01-31 08:08:01.015 226833 DEBUG nova.virt.hardware [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:08:01 compute-2 nova_compute[226829]: 2026-01-31 08:08:01.015 226833 DEBUG nova.virt.hardware [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:08:01 compute-2 nova_compute[226829]: 2026-01-31 08:08:01.016 226833 DEBUG nova.virt.hardware [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:08:01 compute-2 nova_compute[226829]: 2026-01-31 08:08:01.016 226833 DEBUG nova.virt.hardware [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:08:01 compute-2 nova_compute[226829]: 2026-01-31 08:08:01.020 226833 DEBUG oslo_concurrency.processutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:08:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:08:01 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1609935540' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:08:01 compute-2 nova_compute[226829]: 2026-01-31 08:08:01.435 226833 DEBUG oslo_concurrency.processutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:08:01 compute-2 nova_compute[226829]: 2026-01-31 08:08:01.458 226833 DEBUG nova.storage.rbd_utils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] rbd image 286c72fb-d5ca-43f7-9b53-3d1b5c000db4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:08:01 compute-2 nova_compute[226829]: 2026-01-31 08:08:01.461 226833 DEBUG oslo_concurrency.processutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:08:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:08:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:01.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:08:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:08:01 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4021844921' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:08:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4031210827' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:08:01 compute-2 nova_compute[226829]: 2026-01-31 08:08:01.976 226833 DEBUG oslo_concurrency.processutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:08:01 compute-2 nova_compute[226829]: 2026-01-31 08:08:01.978 226833 DEBUG nova.virt.libvirt.vif [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:07:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-882171554',display_name='tempest-MultipleCreateTestJSON-server-882171554-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-882171554-1',id=111,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b97d933ec6c34696b0483a895f47feef',ramdisk_id='',reservation_id='r-z1qzyfi5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-744612571',owner_user_name='tempest-MultipleCreateTestJSON-744612571-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:07:53Z,user_data=None,user_id='5ba00f420cd940ff802c16e8c25c35c4',uuid=286c72fb-d5ca-43f7-9b53-3d1b5c000db4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "82b29aed-298c-485b-89e3-31b60500ecab", "address": "fa:16:3e:55:d4:fa", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82b29aed-29", "ovs_interfaceid": "82b29aed-298c-485b-89e3-31b60500ecab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:08:01 compute-2 nova_compute[226829]: 2026-01-31 08:08:01.978 226833 DEBUG nova.network.os_vif_util [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Converting VIF {"id": "82b29aed-298c-485b-89e3-31b60500ecab", "address": "fa:16:3e:55:d4:fa", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82b29aed-29", "ovs_interfaceid": "82b29aed-298c-485b-89e3-31b60500ecab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:08:01 compute-2 nova_compute[226829]: 2026-01-31 08:08:01.979 226833 DEBUG nova.network.os_vif_util [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:d4:fa,bridge_name='br-int',has_traffic_filtering=True,id=82b29aed-298c-485b-89e3-31b60500ecab,network=Network(aae9ebc2-f854-4add-b86c-a5209381ad20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82b29aed-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:08:01 compute-2 nova_compute[226829]: 2026-01-31 08:08:01.980 226833 DEBUG nova.objects.instance [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lazy-loading 'pci_devices' on Instance uuid 286c72fb-d5ca-43f7-9b53-3d1b5c000db4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:08:02 compute-2 nova_compute[226829]: 2026-01-31 08:08:02.050 226833 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:08:02 compute-2 nova_compute[226829]:   <uuid>286c72fb-d5ca-43f7-9b53-3d1b5c000db4</uuid>
Jan 31 08:08:02 compute-2 nova_compute[226829]:   <name>instance-0000006f</name>
Jan 31 08:08:02 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:08:02 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:08:02 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <nova:name>tempest-MultipleCreateTestJSON-server-882171554-1</nova:name>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:08:00</nova:creationTime>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:08:02 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:08:02 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:08:02 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:08:02 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:08:02 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:08:02 compute-2 nova_compute[226829]:         <nova:user uuid="5ba00f420cd940ff802c16e8c25c35c4">tempest-MultipleCreateTestJSON-744612571-project-member</nova:user>
Jan 31 08:08:02 compute-2 nova_compute[226829]:         <nova:project uuid="b97d933ec6c34696b0483a895f47feef">tempest-MultipleCreateTestJSON-744612571</nova:project>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:08:02 compute-2 nova_compute[226829]:         <nova:port uuid="82b29aed-298c-485b-89e3-31b60500ecab">
Jan 31 08:08:02 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:08:02 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:08:02 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <system>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <entry name="serial">286c72fb-d5ca-43f7-9b53-3d1b5c000db4</entry>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <entry name="uuid">286c72fb-d5ca-43f7-9b53-3d1b5c000db4</entry>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     </system>
Jan 31 08:08:02 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:08:02 compute-2 nova_compute[226829]:   <os>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:   </os>
Jan 31 08:08:02 compute-2 nova_compute[226829]:   <features>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:   </features>
Jan 31 08:08:02 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:08:02 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:08:02 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/286c72fb-d5ca-43f7-9b53-3d1b5c000db4_disk">
Jan 31 08:08:02 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       </source>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:08:02 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/286c72fb-d5ca-43f7-9b53-3d1b5c000db4_disk.config">
Jan 31 08:08:02 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       </source>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:08:02 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:55:d4:fa"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <target dev="tap82b29aed-29"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/286c72fb-d5ca-43f7-9b53-3d1b5c000db4/console.log" append="off"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <video>
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     </video>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:08:02 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:08:02 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:08:02 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:08:02 compute-2 nova_compute[226829]: </domain>
Jan 31 08:08:02 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:08:02 compute-2 nova_compute[226829]: 2026-01-31 08:08:02.052 226833 DEBUG nova.compute.manager [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Preparing to wait for external event network-vif-plugged-82b29aed-298c-485b-89e3-31b60500ecab prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:08:02 compute-2 nova_compute[226829]: 2026-01-31 08:08:02.052 226833 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "286c72fb-d5ca-43f7-9b53-3d1b5c000db4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:02 compute-2 nova_compute[226829]: 2026-01-31 08:08:02.052 226833 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "286c72fb-d5ca-43f7-9b53-3d1b5c000db4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:02 compute-2 nova_compute[226829]: 2026-01-31 08:08:02.053 226833 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "286c72fb-d5ca-43f7-9b53-3d1b5c000db4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:02 compute-2 nova_compute[226829]: 2026-01-31 08:08:02.054 226833 DEBUG nova.virt.libvirt.vif [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:07:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-882171554',display_name='tempest-MultipleCreateTestJSON-server-882171554-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-882171554-1',id=111,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b97d933ec6c34696b0483a895f47feef',ramdisk_id='',reservation_id='r-z1qzyfi5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-744612571',owner_user_name='tempest-Multipl
eCreateTestJSON-744612571-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:07:53Z,user_data=None,user_id='5ba00f420cd940ff802c16e8c25c35c4',uuid=286c72fb-d5ca-43f7-9b53-3d1b5c000db4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "82b29aed-298c-485b-89e3-31b60500ecab", "address": "fa:16:3e:55:d4:fa", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82b29aed-29", "ovs_interfaceid": "82b29aed-298c-485b-89e3-31b60500ecab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:08:02 compute-2 nova_compute[226829]: 2026-01-31 08:08:02.054 226833 DEBUG nova.network.os_vif_util [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Converting VIF {"id": "82b29aed-298c-485b-89e3-31b60500ecab", "address": "fa:16:3e:55:d4:fa", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82b29aed-29", "ovs_interfaceid": "82b29aed-298c-485b-89e3-31b60500ecab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:08:02 compute-2 nova_compute[226829]: 2026-01-31 08:08:02.055 226833 DEBUG nova.network.os_vif_util [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:d4:fa,bridge_name='br-int',has_traffic_filtering=True,id=82b29aed-298c-485b-89e3-31b60500ecab,network=Network(aae9ebc2-f854-4add-b86c-a5209381ad20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82b29aed-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:08:02 compute-2 nova_compute[226829]: 2026-01-31 08:08:02.055 226833 DEBUG os_vif [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:d4:fa,bridge_name='br-int',has_traffic_filtering=True,id=82b29aed-298c-485b-89e3-31b60500ecab,network=Network(aae9ebc2-f854-4add-b86c-a5209381ad20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82b29aed-29') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:08:02 compute-2 nova_compute[226829]: 2026-01-31 08:08:02.056 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:02 compute-2 nova_compute[226829]: 2026-01-31 08:08:02.056 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:08:02 compute-2 nova_compute[226829]: 2026-01-31 08:08:02.057 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:08:02 compute-2 nova_compute[226829]: 2026-01-31 08:08:02.060 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:02 compute-2 nova_compute[226829]: 2026-01-31 08:08:02.060 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82b29aed-29, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:08:02 compute-2 nova_compute[226829]: 2026-01-31 08:08:02.061 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap82b29aed-29, col_values=(('external_ids', {'iface-id': '82b29aed-298c-485b-89e3-31b60500ecab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:55:d4:fa', 'vm-uuid': '286c72fb-d5ca-43f7-9b53-3d1b5c000db4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:08:02 compute-2 nova_compute[226829]: 2026-01-31 08:08:02.062 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:02 compute-2 NetworkManager[48999]: <info>  [1769846882.0643] manager: (tap82b29aed-29): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/232)
Jan 31 08:08:02 compute-2 nova_compute[226829]: 2026-01-31 08:08:02.065 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:08:02 compute-2 nova_compute[226829]: 2026-01-31 08:08:02.070 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:02 compute-2 nova_compute[226829]: 2026-01-31 08:08:02.071 226833 INFO os_vif [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:d4:fa,bridge_name='br-int',has_traffic_filtering=True,id=82b29aed-298c-485b-89e3-31b60500ecab,network=Network(aae9ebc2-f854-4add-b86c-a5209381ad20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82b29aed-29')
Jan 31 08:08:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:02.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:02 compute-2 nova_compute[226829]: 2026-01-31 08:08:02.447 226833 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:08:02 compute-2 nova_compute[226829]: 2026-01-31 08:08:02.447 226833 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:08:02 compute-2 nova_compute[226829]: 2026-01-31 08:08:02.447 226833 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] No VIF found with MAC fa:16:3e:55:d4:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:08:02 compute-2 nova_compute[226829]: 2026-01-31 08:08:02.448 226833 INFO nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Using config drive
Jan 31 08:08:02 compute-2 nova_compute[226829]: 2026-01-31 08:08:02.472 226833 DEBUG nova.storage.rbd_utils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] rbd image 286c72fb-d5ca-43f7-9b53-3d1b5c000db4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:08:02 compute-2 sudo[276700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:08:02 compute-2 sudo[276700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:02 compute-2 sudo[276700]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:02 compute-2 sudo[276725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:08:02 compute-2 sudo[276725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:02 compute-2 sudo[276725]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1609935540' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:08:02 compute-2 ceph-mon[77282]: pgmap v2138: 305 pgs: 305 active+clean; 364 MiB data, 1012 MiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 14 MiB/s wr, 386 op/s
Jan 31 08:08:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4021844921' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:08:03 compute-2 nova_compute[226829]: 2026-01-31 08:08:03.057 226833 DEBUG nova.network.neutron [req-309ac1bb-7a25-4b47-957c-63e91e72d536 req-53922df6-80ce-4682-af55-ae29b9a4b23e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Updated VIF entry in instance network info cache for port 82b29aed-298c-485b-89e3-31b60500ecab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:08:03 compute-2 nova_compute[226829]: 2026-01-31 08:08:03.058 226833 DEBUG nova.network.neutron [req-309ac1bb-7a25-4b47-957c-63e91e72d536 req-53922df6-80ce-4682-af55-ae29b9a4b23e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Updating instance_info_cache with network_info: [{"id": "82b29aed-298c-485b-89e3-31b60500ecab", "address": "fa:16:3e:55:d4:fa", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82b29aed-29", "ovs_interfaceid": "82b29aed-298c-485b-89e3-31b60500ecab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:08:03 compute-2 nova_compute[226829]: 2026-01-31 08:08:03.081 226833 DEBUG oslo_concurrency.lockutils [req-309ac1bb-7a25-4b47-957c-63e91e72d536 req-53922df6-80ce-4682-af55-ae29b9a4b23e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-286c72fb-d5ca-43f7-9b53-3d1b5c000db4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:08:03 compute-2 nova_compute[226829]: 2026-01-31 08:08:03.179 226833 INFO nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Creating config drive at /var/lib/nova/instances/286c72fb-d5ca-43f7-9b53-3d1b5c000db4/disk.config
Jan 31 08:08:03 compute-2 nova_compute[226829]: 2026-01-31 08:08:03.184 226833 DEBUG oslo_concurrency.processutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/286c72fb-d5ca-43f7-9b53-3d1b5c000db4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp6fx70cpy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:08:03 compute-2 nova_compute[226829]: 2026-01-31 08:08:03.307 226833 DEBUG oslo_concurrency.processutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/286c72fb-d5ca-43f7-9b53-3d1b5c000db4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp6fx70cpy" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:08:03 compute-2 nova_compute[226829]: 2026-01-31 08:08:03.344 226833 DEBUG nova.storage.rbd_utils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] rbd image 286c72fb-d5ca-43f7-9b53-3d1b5c000db4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:08:03 compute-2 nova_compute[226829]: 2026-01-31 08:08:03.348 226833 DEBUG oslo_concurrency.processutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/286c72fb-d5ca-43f7-9b53-3d1b5c000db4/disk.config 286c72fb-d5ca-43f7-9b53-3d1b5c000db4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:08:03 compute-2 nova_compute[226829]: 2026-01-31 08:08:03.527 226833 DEBUG oslo_concurrency.processutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/286c72fb-d5ca-43f7-9b53-3d1b5c000db4/disk.config 286c72fb-d5ca-43f7-9b53-3d1b5c000db4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:08:03 compute-2 nova_compute[226829]: 2026-01-31 08:08:03.527 226833 INFO nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Deleting local config drive /var/lib/nova/instances/286c72fb-d5ca-43f7-9b53-3d1b5c000db4/disk.config because it was imported into RBD.
Jan 31 08:08:03 compute-2 kernel: tap82b29aed-29: entered promiscuous mode
Jan 31 08:08:03 compute-2 NetworkManager[48999]: <info>  [1769846883.5679] manager: (tap82b29aed-29): new Tun device (/org/freedesktop/NetworkManager/Devices/233)
Jan 31 08:08:03 compute-2 nova_compute[226829]: 2026-01-31 08:08:03.567 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:03 compute-2 ovn_controller[133834]: 2026-01-31T08:08:03Z|00468|binding|INFO|Claiming lport 82b29aed-298c-485b-89e3-31b60500ecab for this chassis.
Jan 31 08:08:03 compute-2 ovn_controller[133834]: 2026-01-31T08:08:03Z|00469|binding|INFO|82b29aed-298c-485b-89e3-31b60500ecab: Claiming fa:16:3e:55:d4:fa 10.100.0.7
Jan 31 08:08:03 compute-2 nova_compute[226829]: 2026-01-31 08:08:03.574 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:03 compute-2 systemd-udevd[276802]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:08:03 compute-2 systemd-machined[195142]: New machine qemu-50-instance-0000006f.
Jan 31 08:08:03 compute-2 nova_compute[226829]: 2026-01-31 08:08:03.598 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:03 compute-2 NetworkManager[48999]: <info>  [1769846883.6025] device (tap82b29aed-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:08:03 compute-2 ovn_controller[133834]: 2026-01-31T08:08:03Z|00470|binding|INFO|Setting lport 82b29aed-298c-485b-89e3-31b60500ecab ovn-installed in OVS
Jan 31 08:08:03 compute-2 NetworkManager[48999]: <info>  [1769846883.6029] device (tap82b29aed-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:08:03 compute-2 nova_compute[226829]: 2026-01-31 08:08:03.604 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:03 compute-2 systemd[1]: Started Virtual Machine qemu-50-instance-0000006f.
Jan 31 08:08:03 compute-2 ovn_controller[133834]: 2026-01-31T08:08:03Z|00471|binding|INFO|Setting lport 82b29aed-298c-485b-89e3-31b60500ecab up in Southbound
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:03.618 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:d4:fa 10.100.0.7'], port_security=['fa:16:3e:55:d4:fa 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '286c72fb-d5ca-43f7-9b53-3d1b5c000db4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aae9ebc2-f854-4add-b86c-a5209381ad20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b97d933ec6c34696b0483a895f47feef', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f5e2db8d-c3a5-46be-bb72-92eb36b476fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14b206ab-a379-4d4a-9b80-58ba0ce20e17, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=82b29aed-298c-485b-89e3-31b60500ecab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:03.619 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 82b29aed-298c-485b-89e3-31b60500ecab in datapath aae9ebc2-f854-4add-b86c-a5209381ad20 bound to our chassis
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:03.621 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aae9ebc2-f854-4add-b86c-a5209381ad20
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:03.630 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[80bce4cb-6476-411c-b8db-81fc5357d971]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:03.630 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaae9ebc2-f1 in ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:03.633 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaae9ebc2-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:03.633 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[fe1a038a-16b4-4e38-9f4b-5f86e9900e75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:03.634 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[11e6e4ee-e2ce-4969-983c-18bec8e56e54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:03.646 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[c4deaa40-de41-4710-91f0-f45783ec56b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:03.656 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8aae1251-ac8d-41d1-a9c7-7a84f8210f48]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:03.679 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[af4408f2-ed83-47a2-a9e0-7acf7544cf1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:08:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:03.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:03.685 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f387d600-81b9-487e-9336-027dd5794b2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:03 compute-2 NetworkManager[48999]: <info>  [1769846883.6863] manager: (tapaae9ebc2-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/234)
Jan 31 08:08:03 compute-2 systemd-udevd[276805]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:03.713 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[9964696c-6f27-4ca6-8d67-289e05802ccf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:03.715 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[26a2eeee-4007-49a2-b5fe-11076fad4487]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:03 compute-2 NetworkManager[48999]: <info>  [1769846883.7337] device (tapaae9ebc2-f0): carrier: link connected
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:03.737 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[4e5eaa1a-b341-4567-a024-1b22661f65cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:03.756 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1ffca175-c347-45be-b97a-be7c3df2cad6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaae9ebc2-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:4b:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713620, 'reachable_time': 35932, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276837, 'error': None, 'target': 'ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:03.770 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[88e4b4a8-0272-44f9-ae77-a59597de00f8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe58:4b98'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713620, 'tstamp': 713620}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276838, 'error': None, 'target': 'ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:03.785 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c206888a-fa80-4dec-ae60-8a9195a2c8b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaae9ebc2-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:4b:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713620, 'reachable_time': 35932, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276839, 'error': None, 'target': 'ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:03.811 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[04894586-4f3e-485e-92f7-28373a0f3d72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:03.852 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[028ab280-3beb-448c-8fa5-09ffb237efec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:03.853 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaae9ebc2-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:03.854 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:03.854 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaae9ebc2-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:08:03 compute-2 nova_compute[226829]: 2026-01-31 08:08:03.856 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:03 compute-2 NetworkManager[48999]: <info>  [1769846883.8568] manager: (tapaae9ebc2-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/235)
Jan 31 08:08:03 compute-2 kernel: tapaae9ebc2-f0: entered promiscuous mode
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:03.858 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaae9ebc2-f0, col_values=(('external_ids', {'iface-id': '18dd3685-0abe-42cf-9017-dc52c6cb4266'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:08:03 compute-2 ovn_controller[133834]: 2026-01-31T08:08:03Z|00472|binding|INFO|Releasing lport 18dd3685-0abe-42cf-9017-dc52c6cb4266 from this chassis (sb_readonly=0)
Jan 31 08:08:03 compute-2 nova_compute[226829]: 2026-01-31 08:08:03.866 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:03.866 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aae9ebc2-f854-4add-b86c-a5209381ad20.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aae9ebc2-f854-4add-b86c-a5209381ad20.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:03.867 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[375a724c-2b70-4fc9-93d9-793f76497126]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:03.868 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-aae9ebc2-f854-4add-b86c-a5209381ad20
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/aae9ebc2-f854-4add-b86c-a5209381ad20.pid.haproxy
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID aae9ebc2-f854-4add-b86c-a5209381ad20
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:08:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:03.868 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20', 'env', 'PROCESS_TAG=haproxy-aae9ebc2-f854-4add-b86c-a5209381ad20', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aae9ebc2-f854-4add-b86c-a5209381ad20.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:08:03 compute-2 ceph-mon[77282]: pgmap v2139: 305 pgs: 305 active+clean; 372 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 12 MiB/s wr, 337 op/s
Jan 31 08:08:04 compute-2 nova_compute[226829]: 2026-01-31 08:08:04.089 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846884.0887055, 286c72fb-d5ca-43f7-9b53-3d1b5c000db4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:08:04 compute-2 nova_compute[226829]: 2026-01-31 08:08:04.090 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] VM Started (Lifecycle Event)
Jan 31 08:08:04 compute-2 nova_compute[226829]: 2026-01-31 08:08:04.122 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:08:04 compute-2 nova_compute[226829]: 2026-01-31 08:08:04.129 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846884.0888953, 286c72fb-d5ca-43f7-9b53-3d1b5c000db4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:08:04 compute-2 nova_compute[226829]: 2026-01-31 08:08:04.129 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] VM Paused (Lifecycle Event)
Jan 31 08:08:04 compute-2 podman[276913]: 2026-01-31 08:08:04.19733345 +0000 UTC m=+0.040071241 container create d7fd92f83aef718c2f3ec07c83ab43351e410cf6d0d2b7f396dae27981be0f08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 08:08:04 compute-2 systemd[1]: Started libpod-conmon-d7fd92f83aef718c2f3ec07c83ab43351e410cf6d0d2b7f396dae27981be0f08.scope.
Jan 31 08:08:04 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:08:04 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6092afe5d4f9e69998d6cbcfe3efaaa351029dfe79fbd740134a34d6eb2457de/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:08:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:04.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:08:04 compute-2 podman[276913]: 2026-01-31 08:08:04.269915185 +0000 UTC m=+0.112652986 container init d7fd92f83aef718c2f3ec07c83ab43351e410cf6d0d2b7f396dae27981be0f08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:08:04 compute-2 podman[276913]: 2026-01-31 08:08:04.175314061 +0000 UTC m=+0.018051872 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:08:04 compute-2 podman[276913]: 2026-01-31 08:08:04.274078407 +0000 UTC m=+0.116816198 container start d7fd92f83aef718c2f3ec07c83ab43351e410cf6d0d2b7f396dae27981be0f08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:08:04 compute-2 neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20[276929]: [NOTICE]   (276933) : New worker (276935) forked
Jan 31 08:08:04 compute-2 neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20[276929]: [NOTICE]   (276933) : Loading success.
Jan 31 08:08:04 compute-2 nova_compute[226829]: 2026-01-31 08:08:04.340 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:08:04 compute-2 nova_compute[226829]: 2026-01-31 08:08:04.343 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:08:04 compute-2 nova_compute[226829]: 2026-01-31 08:08:04.590 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:08:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:08:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e266 e266: 3 total, 3 up, 3 in
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.576 226833 DEBUG nova.compute.manager [req-b91fa46c-e347-4be0-a12c-692a3a775faa req-aa5dcbfc-0d2e-464f-88b3-73b31cd2006f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Received event network-vif-plugged-82b29aed-298c-485b-89e3-31b60500ecab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.577 226833 DEBUG oslo_concurrency.lockutils [req-b91fa46c-e347-4be0-a12c-692a3a775faa req-aa5dcbfc-0d2e-464f-88b3-73b31cd2006f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "286c72fb-d5ca-43f7-9b53-3d1b5c000db4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.577 226833 DEBUG oslo_concurrency.lockutils [req-b91fa46c-e347-4be0-a12c-692a3a775faa req-aa5dcbfc-0d2e-464f-88b3-73b31cd2006f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "286c72fb-d5ca-43f7-9b53-3d1b5c000db4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.577 226833 DEBUG oslo_concurrency.lockutils [req-b91fa46c-e347-4be0-a12c-692a3a775faa req-aa5dcbfc-0d2e-464f-88b3-73b31cd2006f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "286c72fb-d5ca-43f7-9b53-3d1b5c000db4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.578 226833 DEBUG nova.compute.manager [req-b91fa46c-e347-4be0-a12c-692a3a775faa req-aa5dcbfc-0d2e-464f-88b3-73b31cd2006f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Processing event network-vif-plugged-82b29aed-298c-485b-89e3-31b60500ecab _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.578 226833 DEBUG nova.compute.manager [req-b91fa46c-e347-4be0-a12c-692a3a775faa req-aa5dcbfc-0d2e-464f-88b3-73b31cd2006f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Received event network-vif-plugged-82b29aed-298c-485b-89e3-31b60500ecab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.579 226833 DEBUG oslo_concurrency.lockutils [req-b91fa46c-e347-4be0-a12c-692a3a775faa req-aa5dcbfc-0d2e-464f-88b3-73b31cd2006f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "286c72fb-d5ca-43f7-9b53-3d1b5c000db4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.579 226833 DEBUG oslo_concurrency.lockutils [req-b91fa46c-e347-4be0-a12c-692a3a775faa req-aa5dcbfc-0d2e-464f-88b3-73b31cd2006f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "286c72fb-d5ca-43f7-9b53-3d1b5c000db4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.579 226833 DEBUG oslo_concurrency.lockutils [req-b91fa46c-e347-4be0-a12c-692a3a775faa req-aa5dcbfc-0d2e-464f-88b3-73b31cd2006f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "286c72fb-d5ca-43f7-9b53-3d1b5c000db4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.580 226833 DEBUG nova.compute.manager [req-b91fa46c-e347-4be0-a12c-692a3a775faa req-aa5dcbfc-0d2e-464f-88b3-73b31cd2006f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] No waiting events found dispatching network-vif-plugged-82b29aed-298c-485b-89e3-31b60500ecab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.580 226833 WARNING nova.compute.manager [req-b91fa46c-e347-4be0-a12c-692a3a775faa req-aa5dcbfc-0d2e-464f-88b3-73b31cd2006f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Received unexpected event network-vif-plugged-82b29aed-298c-485b-89e3-31b60500ecab for instance with vm_state building and task_state spawning.
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.581 226833 DEBUG nova.compute.manager [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.585 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846885.5854952, 286c72fb-d5ca-43f7-9b53-3d1b5c000db4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.585 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] VM Resumed (Lifecycle Event)
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.587 226833 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.590 226833 INFO nova.virt.libvirt.driver [-] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Instance spawned successfully.
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.590 226833 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.663 226833 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.664 226833 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.665 226833 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.666 226833 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.667 226833 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.668 226833 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.673 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.677 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.682 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:08:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:08:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:05.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.744 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:08:05 compute-2 ceph-mon[77282]: osdmap e266: 3 total, 3 up, 3 in
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.870 226833 INFO nova.compute.manager [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Took 11.73 seconds to spawn the instance on the hypervisor.
Jan 31 08:08:05 compute-2 nova_compute[226829]: 2026-01-31 08:08:05.870 226833 DEBUG nova.compute.manager [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:08:06 compute-2 nova_compute[226829]: 2026-01-31 08:08:06.009 226833 INFO nova.compute.manager [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Took 13.49 seconds to build instance.
Jan 31 08:08:06 compute-2 nova_compute[226829]: 2026-01-31 08:08:06.126 226833 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "286c72fb-d5ca-43f7-9b53-3d1b5c000db4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:06.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:06 compute-2 ceph-mon[77282]: pgmap v2141: 305 pgs: 305 active+clean; 372 MiB data, 1019 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 8.2 MiB/s wr, 338 op/s
Jan 31 08:08:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:06.880 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:06.881 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:06.881 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:07 compute-2 nova_compute[226829]: 2026-01-31 08:08:07.064 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:07.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e267 e267: 3 total, 3 up, 3 in
Jan 31 08:08:07 compute-2 ceph-mon[77282]: pgmap v2142: 305 pgs: 305 active+clean; 372 MiB data, 1019 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 2.6 MiB/s wr, 195 op/s
Jan 31 08:08:07 compute-2 ceph-mon[77282]: osdmap e267: 3 total, 3 up, 3 in
Jan 31 08:08:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:08.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:08 compute-2 nova_compute[226829]: 2026-01-31 08:08:08.911 226833 DEBUG oslo_concurrency.lockutils [None req-a65a286e-0ef7-49a8-9be5-56fad90b2f14 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "286c72fb-d5ca-43f7-9b53-3d1b5c000db4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:08 compute-2 nova_compute[226829]: 2026-01-31 08:08:08.912 226833 DEBUG oslo_concurrency.lockutils [None req-a65a286e-0ef7-49a8-9be5-56fad90b2f14 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "286c72fb-d5ca-43f7-9b53-3d1b5c000db4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:08 compute-2 nova_compute[226829]: 2026-01-31 08:08:08.912 226833 DEBUG oslo_concurrency.lockutils [None req-a65a286e-0ef7-49a8-9be5-56fad90b2f14 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "286c72fb-d5ca-43f7-9b53-3d1b5c000db4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:08 compute-2 nova_compute[226829]: 2026-01-31 08:08:08.912 226833 DEBUG oslo_concurrency.lockutils [None req-a65a286e-0ef7-49a8-9be5-56fad90b2f14 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "286c72fb-d5ca-43f7-9b53-3d1b5c000db4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:08 compute-2 nova_compute[226829]: 2026-01-31 08:08:08.912 226833 DEBUG oslo_concurrency.lockutils [None req-a65a286e-0ef7-49a8-9be5-56fad90b2f14 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "286c72fb-d5ca-43f7-9b53-3d1b5c000db4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:08 compute-2 nova_compute[226829]: 2026-01-31 08:08:08.914 226833 INFO nova.compute.manager [None req-a65a286e-0ef7-49a8-9be5-56fad90b2f14 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Terminating instance
Jan 31 08:08:08 compute-2 nova_compute[226829]: 2026-01-31 08:08:08.915 226833 DEBUG nova.compute.manager [None req-a65a286e-0ef7-49a8-9be5-56fad90b2f14 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:08:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e268 e268: 3 total, 3 up, 3 in
Jan 31 08:08:08 compute-2 kernel: tap82b29aed-29 (unregistering): left promiscuous mode
Jan 31 08:08:08 compute-2 NetworkManager[48999]: <info>  [1769846888.9776] device (tap82b29aed-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:08:08 compute-2 nova_compute[226829]: 2026-01-31 08:08:08.980 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:08 compute-2 ovn_controller[133834]: 2026-01-31T08:08:08Z|00473|binding|INFO|Releasing lport 82b29aed-298c-485b-89e3-31b60500ecab from this chassis (sb_readonly=0)
Jan 31 08:08:08 compute-2 ovn_controller[133834]: 2026-01-31T08:08:08Z|00474|binding|INFO|Setting lport 82b29aed-298c-485b-89e3-31b60500ecab down in Southbound
Jan 31 08:08:08 compute-2 ovn_controller[133834]: 2026-01-31T08:08:08Z|00475|binding|INFO|Removing iface tap82b29aed-29 ovn-installed in OVS
Jan 31 08:08:08 compute-2 nova_compute[226829]: 2026-01-31 08:08:08.983 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:08 compute-2 nova_compute[226829]: 2026-01-31 08:08:08.990 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:09 compute-2 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Jan 31 08:08:09 compute-2 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000006f.scope: Consumed 3.886s CPU time.
Jan 31 08:08:09 compute-2 systemd-machined[195142]: Machine qemu-50-instance-0000006f terminated.
Jan 31 08:08:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:09.117 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:d4:fa 10.100.0.7'], port_security=['fa:16:3e:55:d4:fa 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '286c72fb-d5ca-43f7-9b53-3d1b5c000db4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aae9ebc2-f854-4add-b86c-a5209381ad20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b97d933ec6c34696b0483a895f47feef', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f5e2db8d-c3a5-46be-bb72-92eb36b476fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14b206ab-a379-4d4a-9b80-58ba0ce20e17, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=82b29aed-298c-485b-89e3-31b60500ecab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:08:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:09.118 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 82b29aed-298c-485b-89e3-31b60500ecab in datapath aae9ebc2-f854-4add-b86c-a5209381ad20 unbound from our chassis
Jan 31 08:08:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:09.120 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aae9ebc2-f854-4add-b86c-a5209381ad20, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:08:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:09.121 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[09fa3fc7-afea-478d-b111-9ed162c4359b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:09.121 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20 namespace which is not needed anymore
Jan 31 08:08:09 compute-2 nova_compute[226829]: 2026-01-31 08:08:09.147 226833 INFO nova.virt.libvirt.driver [-] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Instance destroyed successfully.
Jan 31 08:08:09 compute-2 nova_compute[226829]: 2026-01-31 08:08:09.148 226833 DEBUG nova.objects.instance [None req-a65a286e-0ef7-49a8-9be5-56fad90b2f14 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lazy-loading 'resources' on Instance uuid 286c72fb-d5ca-43f7-9b53-3d1b5c000db4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:08:09 compute-2 nova_compute[226829]: 2026-01-31 08:08:09.194 226833 DEBUG nova.virt.libvirt.vif [None req-a65a286e-0ef7-49a8-9be5-56fad90b2f14 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:07:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-882171554',display_name='tempest-MultipleCreateTestJSON-server-882171554-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-882171554-1',id=111,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:08:05Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b97d933ec6c34696b0483a895f47feef',ramdisk_id='',reservation_id='r-z1qzyfi5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-744612571',owner_user_name='tempest-MultipleCreateTestJSON-744612571-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:08:05Z,user_data=None,user_id='5ba00f420cd940ff802c16e8c25c35c4',uuid=286c72fb-d5ca-43f7-9b53-3d1b5c000db4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "82b29aed-298c-485b-89e3-31b60500ecab", "address": "fa:16:3e:55:d4:fa", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82b29aed-29", "ovs_interfaceid": "82b29aed-298c-485b-89e3-31b60500ecab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:08:09 compute-2 nova_compute[226829]: 2026-01-31 08:08:09.194 226833 DEBUG nova.network.os_vif_util [None req-a65a286e-0ef7-49a8-9be5-56fad90b2f14 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Converting VIF {"id": "82b29aed-298c-485b-89e3-31b60500ecab", "address": "fa:16:3e:55:d4:fa", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82b29aed-29", "ovs_interfaceid": "82b29aed-298c-485b-89e3-31b60500ecab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:08:09 compute-2 nova_compute[226829]: 2026-01-31 08:08:09.195 226833 DEBUG nova.network.os_vif_util [None req-a65a286e-0ef7-49a8-9be5-56fad90b2f14 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:d4:fa,bridge_name='br-int',has_traffic_filtering=True,id=82b29aed-298c-485b-89e3-31b60500ecab,network=Network(aae9ebc2-f854-4add-b86c-a5209381ad20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82b29aed-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:08:09 compute-2 nova_compute[226829]: 2026-01-31 08:08:09.195 226833 DEBUG os_vif [None req-a65a286e-0ef7-49a8-9be5-56fad90b2f14 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:d4:fa,bridge_name='br-int',has_traffic_filtering=True,id=82b29aed-298c-485b-89e3-31b60500ecab,network=Network(aae9ebc2-f854-4add-b86c-a5209381ad20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82b29aed-29') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:08:09 compute-2 nova_compute[226829]: 2026-01-31 08:08:09.197 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:09 compute-2 nova_compute[226829]: 2026-01-31 08:08:09.198 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82b29aed-29, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:08:09 compute-2 nova_compute[226829]: 2026-01-31 08:08:09.199 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:09 compute-2 nova_compute[226829]: 2026-01-31 08:08:09.202 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:08:09 compute-2 nova_compute[226829]: 2026-01-31 08:08:09.204 226833 INFO os_vif [None req-a65a286e-0ef7-49a8-9be5-56fad90b2f14 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:d4:fa,bridge_name='br-int',has_traffic_filtering=True,id=82b29aed-298c-485b-89e3-31b60500ecab,network=Network(aae9ebc2-f854-4add-b86c-a5209381ad20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82b29aed-29')
Jan 31 08:08:09 compute-2 neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20[276929]: [NOTICE]   (276933) : haproxy version is 2.8.14-c23fe91
Jan 31 08:08:09 compute-2 neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20[276929]: [NOTICE]   (276933) : path to executable is /usr/sbin/haproxy
Jan 31 08:08:09 compute-2 neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20[276929]: [WARNING]  (276933) : Exiting Master process...
Jan 31 08:08:09 compute-2 neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20[276929]: [ALERT]    (276933) : Current worker (276935) exited with code 143 (Terminated)
Jan 31 08:08:09 compute-2 neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20[276929]: [WARNING]  (276933) : All workers exited. Exiting... (0)
Jan 31 08:08:09 compute-2 systemd[1]: libpod-d7fd92f83aef718c2f3ec07c83ab43351e410cf6d0d2b7f396dae27981be0f08.scope: Deactivated successfully.
Jan 31 08:08:09 compute-2 podman[276982]: 2026-01-31 08:08:09.243218892 +0000 UTC m=+0.048586323 container died d7fd92f83aef718c2f3ec07c83ab43351e410cf6d0d2b7f396dae27981be0f08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:08:09 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d7fd92f83aef718c2f3ec07c83ab43351e410cf6d0d2b7f396dae27981be0f08-userdata-shm.mount: Deactivated successfully.
Jan 31 08:08:09 compute-2 systemd[1]: var-lib-containers-storage-overlay-6092afe5d4f9e69998d6cbcfe3efaaa351029dfe79fbd740134a34d6eb2457de-merged.mount: Deactivated successfully.
Jan 31 08:08:09 compute-2 podman[276982]: 2026-01-31 08:08:09.281352259 +0000 UTC m=+0.086719690 container cleanup d7fd92f83aef718c2f3ec07c83ab43351e410cf6d0d2b7f396dae27981be0f08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 08:08:09 compute-2 systemd[1]: libpod-conmon-d7fd92f83aef718c2f3ec07c83ab43351e410cf6d0d2b7f396dae27981be0f08.scope: Deactivated successfully.
Jan 31 08:08:09 compute-2 podman[277029]: 2026-01-31 08:08:09.330246069 +0000 UTC m=+0.035229359 container remove d7fd92f83aef718c2f3ec07c83ab43351e410cf6d0d2b7f396dae27981be0f08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:08:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:09.333 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c0883fd8-994a-4e5f-aba3-e993bef907f3]: (4, ('Sat Jan 31 08:08:09 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20 (d7fd92f83aef718c2f3ec07c83ab43351e410cf6d0d2b7f396dae27981be0f08)\nd7fd92f83aef718c2f3ec07c83ab43351e410cf6d0d2b7f396dae27981be0f08\nSat Jan 31 08:08:09 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20 (d7fd92f83aef718c2f3ec07c83ab43351e410cf6d0d2b7f396dae27981be0f08)\nd7fd92f83aef718c2f3ec07c83ab43351e410cf6d0d2b7f396dae27981be0f08\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:09.335 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4a8c4dd7-2686-483d-8619-b0f84023dade]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:09.336 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaae9ebc2-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:08:09 compute-2 nova_compute[226829]: 2026-01-31 08:08:09.338 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:09 compute-2 kernel: tapaae9ebc2-f0: left promiscuous mode
Jan 31 08:08:09 compute-2 nova_compute[226829]: 2026-01-31 08:08:09.343 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:09.346 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cdffb68a-bebf-4784-ab02-1b17c68d27e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:09.365 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[81287f70-e714-4009-b01e-8364cb0fbd58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:09.367 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[fcfa4202-310d-4212-a789-675c9894ed56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:09.377 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c313181c-014b-414a-996a-73282fbc4faf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713614, 'reachable_time': 43370, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277044, 'error': None, 'target': 'ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:09 compute-2 systemd[1]: run-netns-ovnmeta\x2daae9ebc2\x2df854\x2d4add\x2db86c\x2da5209381ad20.mount: Deactivated successfully.
Jan 31 08:08:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:09.380 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:08:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:09.381 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[87595d09-ca0c-4a65-b5b2-c8d57818e0fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:09 compute-2 nova_compute[226829]: 2026-01-31 08:08:09.521 226833 DEBUG nova.compute.manager [req-6fd94102-32f0-47b5-81df-41ae3d477133 req-6a2ce1f8-ff5d-4d61-a124-51e318b8ab96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Received event network-vif-unplugged-82b29aed-298c-485b-89e3-31b60500ecab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:08:09 compute-2 nova_compute[226829]: 2026-01-31 08:08:09.522 226833 DEBUG oslo_concurrency.lockutils [req-6fd94102-32f0-47b5-81df-41ae3d477133 req-6a2ce1f8-ff5d-4d61-a124-51e318b8ab96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "286c72fb-d5ca-43f7-9b53-3d1b5c000db4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:09 compute-2 nova_compute[226829]: 2026-01-31 08:08:09.522 226833 DEBUG oslo_concurrency.lockutils [req-6fd94102-32f0-47b5-81df-41ae3d477133 req-6a2ce1f8-ff5d-4d61-a124-51e318b8ab96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "286c72fb-d5ca-43f7-9b53-3d1b5c000db4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:09 compute-2 nova_compute[226829]: 2026-01-31 08:08:09.522 226833 DEBUG oslo_concurrency.lockutils [req-6fd94102-32f0-47b5-81df-41ae3d477133 req-6a2ce1f8-ff5d-4d61-a124-51e318b8ab96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "286c72fb-d5ca-43f7-9b53-3d1b5c000db4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:09 compute-2 nova_compute[226829]: 2026-01-31 08:08:09.523 226833 DEBUG nova.compute.manager [req-6fd94102-32f0-47b5-81df-41ae3d477133 req-6a2ce1f8-ff5d-4d61-a124-51e318b8ab96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] No waiting events found dispatching network-vif-unplugged-82b29aed-298c-485b-89e3-31b60500ecab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:08:09 compute-2 nova_compute[226829]: 2026-01-31 08:08:09.523 226833 DEBUG nova.compute.manager [req-6fd94102-32f0-47b5-81df-41ae3d477133 req-6a2ce1f8-ff5d-4d61-a124-51e318b8ab96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Received event network-vif-unplugged-82b29aed-298c-485b-89e3-31b60500ecab for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:08:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:08:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:09.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:09 compute-2 nova_compute[226829]: 2026-01-31 08:08:09.856 226833 INFO nova.virt.libvirt.driver [None req-a65a286e-0ef7-49a8-9be5-56fad90b2f14 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Deleting instance files /var/lib/nova/instances/286c72fb-d5ca-43f7-9b53-3d1b5c000db4_del
Jan 31 08:08:09 compute-2 nova_compute[226829]: 2026-01-31 08:08:09.857 226833 INFO nova.virt.libvirt.driver [None req-a65a286e-0ef7-49a8-9be5-56fad90b2f14 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Deletion of /var/lib/nova/instances/286c72fb-d5ca-43f7-9b53-3d1b5c000db4_del complete
Jan 31 08:08:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e269 e269: 3 total, 3 up, 3 in
Jan 31 08:08:09 compute-2 ceph-mon[77282]: osdmap e268: 3 total, 3 up, 3 in
Jan 31 08:08:09 compute-2 ceph-mon[77282]: pgmap v2145: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 389 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 2.1 MiB/s wr, 352 op/s
Jan 31 08:08:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:08:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:10.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:08:10 compute-2 nova_compute[226829]: 2026-01-31 08:08:10.281 226833 INFO nova.compute.manager [None req-a65a286e-0ef7-49a8-9be5-56fad90b2f14 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Took 1.37 seconds to destroy the instance on the hypervisor.
Jan 31 08:08:10 compute-2 nova_compute[226829]: 2026-01-31 08:08:10.281 226833 DEBUG oslo.service.loopingcall [None req-a65a286e-0ef7-49a8-9be5-56fad90b2f14 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:08:10 compute-2 nova_compute[226829]: 2026-01-31 08:08:10.282 226833 DEBUG nova.compute.manager [-] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:08:10 compute-2 nova_compute[226829]: 2026-01-31 08:08:10.282 226833 DEBUG nova.network.neutron [-] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:08:10 compute-2 nova_compute[226829]: 2026-01-31 08:08:10.668 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:11 compute-2 ceph-mon[77282]: osdmap e269: 3 total, 3 up, 3 in
Jan 31 08:08:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:08:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:11.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:08:11 compute-2 nova_compute[226829]: 2026-01-31 08:08:11.972 226833 DEBUG nova.compute.manager [req-c559cad4-4abc-46ed-828f-9d3024c42ee9 req-aa694c5b-3eb8-4c41-91d9-aa9a5f8766d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Received event network-vif-plugged-82b29aed-298c-485b-89e3-31b60500ecab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:08:11 compute-2 nova_compute[226829]: 2026-01-31 08:08:11.972 226833 DEBUG oslo_concurrency.lockutils [req-c559cad4-4abc-46ed-828f-9d3024c42ee9 req-aa694c5b-3eb8-4c41-91d9-aa9a5f8766d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "286c72fb-d5ca-43f7-9b53-3d1b5c000db4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:11 compute-2 nova_compute[226829]: 2026-01-31 08:08:11.973 226833 DEBUG oslo_concurrency.lockutils [req-c559cad4-4abc-46ed-828f-9d3024c42ee9 req-aa694c5b-3eb8-4c41-91d9-aa9a5f8766d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "286c72fb-d5ca-43f7-9b53-3d1b5c000db4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:11 compute-2 nova_compute[226829]: 2026-01-31 08:08:11.973 226833 DEBUG oslo_concurrency.lockutils [req-c559cad4-4abc-46ed-828f-9d3024c42ee9 req-aa694c5b-3eb8-4c41-91d9-aa9a5f8766d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "286c72fb-d5ca-43f7-9b53-3d1b5c000db4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:11 compute-2 nova_compute[226829]: 2026-01-31 08:08:11.973 226833 DEBUG nova.compute.manager [req-c559cad4-4abc-46ed-828f-9d3024c42ee9 req-aa694c5b-3eb8-4c41-91d9-aa9a5f8766d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] No waiting events found dispatching network-vif-plugged-82b29aed-298c-485b-89e3-31b60500ecab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:08:11 compute-2 nova_compute[226829]: 2026-01-31 08:08:11.973 226833 WARNING nova.compute.manager [req-c559cad4-4abc-46ed-828f-9d3024c42ee9 req-aa694c5b-3eb8-4c41-91d9-aa9a5f8766d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Received unexpected event network-vif-plugged-82b29aed-298c-485b-89e3-31b60500ecab for instance with vm_state active and task_state deleting.
Jan 31 08:08:12 compute-2 nova_compute[226829]: 2026-01-31 08:08:12.033 226833 DEBUG nova.network.neutron [-] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:08:12 compute-2 nova_compute[226829]: 2026-01-31 08:08:12.225 226833 INFO nova.compute.manager [-] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Took 1.94 seconds to deallocate network for instance.
Jan 31 08:08:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:12.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:12 compute-2 ceph-mon[77282]: pgmap v2147: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 396 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 5.4 MiB/s wr, 348 op/s
Jan 31 08:08:12 compute-2 nova_compute[226829]: 2026-01-31 08:08:12.344 226833 DEBUG oslo_concurrency.lockutils [None req-a65a286e-0ef7-49a8-9be5-56fad90b2f14 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:12 compute-2 nova_compute[226829]: 2026-01-31 08:08:12.344 226833 DEBUG oslo_concurrency.lockutils [None req-a65a286e-0ef7-49a8-9be5-56fad90b2f14 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:12 compute-2 nova_compute[226829]: 2026-01-31 08:08:12.425 226833 DEBUG oslo_concurrency.processutils [None req-a65a286e-0ef7-49a8-9be5-56fad90b2f14 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:08:12 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:08:12 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4039298737' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:12 compute-2 nova_compute[226829]: 2026-01-31 08:08:12.874 226833 DEBUG oslo_concurrency.processutils [None req-a65a286e-0ef7-49a8-9be5-56fad90b2f14 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:08:12 compute-2 nova_compute[226829]: 2026-01-31 08:08:12.880 226833 DEBUG nova.compute.provider_tree [None req-a65a286e-0ef7-49a8-9be5-56fad90b2f14 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:08:12 compute-2 nova_compute[226829]: 2026-01-31 08:08:12.942 226833 DEBUG nova.scheduler.client.report [None req-a65a286e-0ef7-49a8-9be5-56fad90b2f14 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:08:13 compute-2 nova_compute[226829]: 2026-01-31 08:08:13.033 226833 DEBUG oslo_concurrency.lockutils [None req-a65a286e-0ef7-49a8-9be5-56fad90b2f14 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:13 compute-2 nova_compute[226829]: 2026-01-31 08:08:13.102 226833 INFO nova.scheduler.client.report [None req-a65a286e-0ef7-49a8-9be5-56fad90b2f14 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Deleted allocations for instance 286c72fb-d5ca-43f7-9b53-3d1b5c000db4
Jan 31 08:08:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4039298737' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4167090398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:13 compute-2 nova_compute[226829]: 2026-01-31 08:08:13.372 226833 DEBUG oslo_concurrency.lockutils [None req-a65a286e-0ef7-49a8-9be5-56fad90b2f14 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "286c72fb-d5ca-43f7-9b53-3d1b5c000db4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.460s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:13.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:14 compute-2 nova_compute[226829]: 2026-01-31 08:08:14.201 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:14.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:14 compute-2 nova_compute[226829]: 2026-01-31 08:08:14.449 226833 DEBUG nova.compute.manager [req-c9c03ce6-de3d-4d96-98f4-19ac37038aa0 req-0d594f13-0d9f-4172-b894-70c4e9d2b85f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Received event network-vif-deleted-82b29aed-298c-485b-89e3-31b60500ecab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:08:14 compute-2 ceph-mon[77282]: pgmap v2148: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 411 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 378 op/s
Jan 31 08:08:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:08:15 compute-2 nova_compute[226829]: 2026-01-31 08:08:15.669 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:08:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:15.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:08:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:16.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:16 compute-2 ceph-mon[77282]: pgmap v2149: 305 pgs: 305 active+clean; 297 MiB data, 990 MiB used, 20 GiB / 21 GiB avail; 9.4 MiB/s rd, 6.1 MiB/s wr, 346 op/s
Jan 31 08:08:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/589736456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e270 e270: 3 total, 3 up, 3 in
Jan 31 08:08:17 compute-2 nova_compute[226829]: 2026-01-31 08:08:17.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:08:17 compute-2 nova_compute[226829]: 2026-01-31 08:08:17.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:08:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:17.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:17 compute-2 ceph-mon[77282]: osdmap e270: 3 total, 3 up, 3 in
Jan 31 08:08:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3096185768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e271 e271: 3 total, 3 up, 3 in
Jan 31 08:08:18 compute-2 podman[277073]: 2026-01-31 08:08:18.211831291 +0000 UTC m=+0.095778827 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 08:08:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:18.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:18 compute-2 ceph-mon[77282]: pgmap v2151: 305 pgs: 305 active+clean; 320 MiB data, 1011 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 7.8 MiB/s wr, 246 op/s
Jan 31 08:08:18 compute-2 ceph-mon[77282]: osdmap e271: 3 total, 3 up, 3 in
Jan 31 08:08:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e272 e272: 3 total, 3 up, 3 in
Jan 31 08:08:19 compute-2 nova_compute[226829]: 2026-01-31 08:08:19.203 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:08:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:08:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:19.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:08:19 compute-2 ceph-mon[77282]: osdmap e272: 3 total, 3 up, 3 in
Jan 31 08:08:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e273 e273: 3 total, 3 up, 3 in
Jan 31 08:08:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:20.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:20 compute-2 nova_compute[226829]: 2026-01-31 08:08:20.670 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:20 compute-2 ceph-mon[77282]: pgmap v2154: 305 pgs: 305 active+clean; 357 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 7.5 MiB/s wr, 257 op/s
Jan 31 08:08:20 compute-2 ceph-mon[77282]: osdmap e273: 3 total, 3 up, 3 in
Jan 31 08:08:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:21.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:21 compute-2 nova_compute[226829]: 2026-01-31 08:08:21.878 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:22 compute-2 ceph-mon[77282]: pgmap v2156: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 9.8 MiB/s rd, 14 MiB/s wr, 378 op/s
Jan 31 08:08:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:22.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:22 compute-2 sudo[277101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:08:22 compute-2 sudo[277101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:22 compute-2 sudo[277101]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:22 compute-2 sudo[277126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:08:22 compute-2 sudo[277126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:22 compute-2 sudo[277126]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e274 e274: 3 total, 3 up, 3 in
Jan 31 08:08:23 compute-2 nova_compute[226829]: 2026-01-31 08:08:23.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:08:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.003000080s ======
Jan 31 08:08:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:23.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Jan 31 08:08:24 compute-2 nova_compute[226829]: 2026-01-31 08:08:24.146 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846889.1447575, 286c72fb-d5ca-43f7-9b53-3d1b5c000db4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:08:24 compute-2 nova_compute[226829]: 2026-01-31 08:08:24.147 226833 INFO nova.compute.manager [-] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] VM Stopped (Lifecycle Event)
Jan 31 08:08:24 compute-2 ceph-mon[77282]: osdmap e274: 3 total, 3 up, 3 in
Jan 31 08:08:24 compute-2 ceph-mon[77282]: pgmap v2158: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 6.9 MiB/s wr, 424 op/s
Jan 31 08:08:24 compute-2 nova_compute[226829]: 2026-01-31 08:08:24.205 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:24 compute-2 nova_compute[226829]: 2026-01-31 08:08:24.256 226833 DEBUG nova.compute.manager [None req-d94112ca-a8cb-4c5a-aee6-551260fea8e7 - - - - - -] [instance: 286c72fb-d5ca-43f7-9b53-3d1b5c000db4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:08:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:24.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:24 compute-2 nova_compute[226829]: 2026-01-31 08:08:24.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:08:24 compute-2 nova_compute[226829]: 2026-01-31 08:08:24.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:08:24 compute-2 nova_compute[226829]: 2026-01-31 08:08:24.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:08:24 compute-2 nova_compute[226829]: 2026-01-31 08:08:24.626 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:08:24 compute-2 nova_compute[226829]: 2026-01-31 08:08:24.626 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:08:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:08:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e275 e275: 3 total, 3 up, 3 in
Jan 31 08:08:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/701679411' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:08:25 compute-2 ceph-mon[77282]: osdmap e275: 3 total, 3 up, 3 in
Jan 31 08:08:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2592313796' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:08:25 compute-2 nova_compute[226829]: 2026-01-31 08:08:25.672 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:08:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:25.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:08:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:26.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:26 compute-2 ceph-mon[77282]: pgmap v2160: 305 pgs: 305 active+clean; 350 MiB data, 999 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.9 MiB/s wr, 500 op/s
Jan 31 08:08:27 compute-2 nova_compute[226829]: 2026-01-31 08:08:27.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:08:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:27.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:28.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:28 compute-2 nova_compute[226829]: 2026-01-31 08:08:28.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:08:28 compute-2 nova_compute[226829]: 2026-01-31 08:08:28.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:08:28 compute-2 ceph-mon[77282]: pgmap v2161: 305 pgs: 305 active+clean; 326 MiB data, 984 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.0 MiB/s wr, 439 op/s
Jan 31 08:08:28 compute-2 nova_compute[226829]: 2026-01-31 08:08:28.716 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:28 compute-2 nova_compute[226829]: 2026-01-31 08:08:28.717 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:28 compute-2 nova_compute[226829]: 2026-01-31 08:08:28.717 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:28 compute-2 nova_compute[226829]: 2026-01-31 08:08:28.717 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:08:28 compute-2 nova_compute[226829]: 2026-01-31 08:08:28.717 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:08:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:08:29 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2030909049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:29 compute-2 nova_compute[226829]: 2026-01-31 08:08:29.199 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:08:29 compute-2 nova_compute[226829]: 2026-01-31 08:08:29.208 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:29 compute-2 podman[277178]: 2026-01-31 08:08:29.28603892 +0000 UTC m=+0.045790468 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 08:08:29 compute-2 nova_compute[226829]: 2026-01-31 08:08:29.348 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:08:29 compute-2 nova_compute[226829]: 2026-01-31 08:08:29.350 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4360MB free_disk=20.921802520751953GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:08:29 compute-2 nova_compute[226829]: 2026-01-31 08:08:29.350 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:29 compute-2 nova_compute[226829]: 2026-01-31 08:08:29.350 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:29 compute-2 nova_compute[226829]: 2026-01-31 08:08:29.485 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:08:29 compute-2 nova_compute[226829]: 2026-01-31 08:08:29.485 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:08:29 compute-2 nova_compute[226829]: 2026-01-31 08:08:29.505 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:08:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:08:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e276 e276: 3 total, 3 up, 3 in
Jan 31 08:08:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/59320802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2030909049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:29.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e277 e277: 3 total, 3 up, 3 in
Jan 31 08:08:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:08:29 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2460577806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:29 compute-2 nova_compute[226829]: 2026-01-31 08:08:29.927 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:08:29 compute-2 nova_compute[226829]: 2026-01-31 08:08:29.933 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:08:29 compute-2 nova_compute[226829]: 2026-01-31 08:08:29.996 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:08:30 compute-2 nova_compute[226829]: 2026-01-31 08:08:30.130 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:08:30 compute-2 nova_compute[226829]: 2026-01-31 08:08:30.131 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:30.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:30 compute-2 nova_compute[226829]: 2026-01-31 08:08:30.674 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:30 compute-2 ceph-mon[77282]: pgmap v2162: 305 pgs: 305 active+clean; 326 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 754 KiB/s rd, 34 KiB/s wr, 364 op/s
Jan 31 08:08:30 compute-2 ceph-mon[77282]: osdmap e276: 3 total, 3 up, 3 in
Jan 31 08:08:30 compute-2 ceph-mon[77282]: osdmap e277: 3 total, 3 up, 3 in
Jan 31 08:08:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2460577806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/477472252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:31.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:32.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:32 compute-2 ceph-mon[77282]: pgmap v2165: 305 pgs: 305 active+clean; 293 MiB data, 966 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 39 KiB/s wr, 226 op/s
Jan 31 08:08:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:08:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:33.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:08:34 compute-2 nova_compute[226829]: 2026-01-31 08:08:34.210 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:34 compute-2 ceph-mon[77282]: pgmap v2166: 305 pgs: 305 active+clean; 260 MiB data, 946 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 34 KiB/s wr, 158 op/s
Jan 31 08:08:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1859182276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:34.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e278 e278: 3 total, 3 up, 3 in
Jan 31 08:08:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:08:35 compute-2 nova_compute[226829]: 2026-01-31 08:08:35.676 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:35.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2428103131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:35 compute-2 ceph-mon[77282]: osdmap e278: 3 total, 3 up, 3 in
Jan 31 08:08:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:36.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:36 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:36.434 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:08:36 compute-2 nova_compute[226829]: 2026-01-31 08:08:36.434 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:36 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:36.436 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:08:37 compute-2 ceph-mon[77282]: pgmap v2168: 305 pgs: 305 active+clean; 246 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.5 KiB/s wr, 167 op/s
Jan 31 08:08:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:08:37.438 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:08:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:37.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:08:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:38.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:08:38 compute-2 ceph-mon[77282]: pgmap v2169: 305 pgs: 305 active+clean; 246 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.2 KiB/s wr, 131 op/s
Jan 31 08:08:39 compute-2 nova_compute[226829]: 2026-01-31 08:08:39.131 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:08:39 compute-2 nova_compute[226829]: 2026-01-31 08:08:39.131 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:08:39 compute-2 nova_compute[226829]: 2026-01-31 08:08:39.213 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:39.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:08:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e279 e279: 3 total, 3 up, 3 in
Jan 31 08:08:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:40.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:40 compute-2 nova_compute[226829]: 2026-01-31 08:08:40.677 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:40 compute-2 ceph-mon[77282]: pgmap v2170: 305 pgs: 305 active+clean; 230 MiB data, 927 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.6 KiB/s wr, 113 op/s
Jan 31 08:08:40 compute-2 ceph-mon[77282]: osdmap e279: 3 total, 3 up, 3 in
Jan 31 08:08:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:08:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:41.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:08:42 compute-2 ceph-mon[77282]: pgmap v2172: 305 pgs: 305 active+clean; 184 MiB data, 898 MiB used, 20 GiB / 21 GiB avail; 982 KiB/s rd, 2.2 KiB/s wr, 77 op/s
Jan 31 08:08:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:42.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:42 compute-2 sudo[277231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:08:42 compute-2 sudo[277231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:42 compute-2 sudo[277231]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:43 compute-2 sudo[277256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:08:43 compute-2 sudo[277256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:43 compute-2 sudo[277256]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3666301356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:43 compute-2 nova_compute[226829]: 2026-01-31 08:08:43.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:08:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:43.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:44 compute-2 ceph-mon[77282]: pgmap v2173: 305 pgs: 305 active+clean; 144 MiB data, 880 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 2.9 KiB/s wr, 58 op/s
Jan 31 08:08:44 compute-2 nova_compute[226829]: 2026-01-31 08:08:44.216 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:44.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:08:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/537857212' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:08:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/537857212' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:08:45 compute-2 nova_compute[226829]: 2026-01-31 08:08:45.678 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:08:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:45.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:08:46 compute-2 ceph-mon[77282]: pgmap v2174: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 2.2 KiB/s wr, 49 op/s
Jan 31 08:08:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:46.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:08:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:47.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:08:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:08:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:48.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:08:48 compute-2 ceph-mon[77282]: pgmap v2175: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 1.8 KiB/s wr, 46 op/s
Jan 31 08:08:49 compute-2 podman[277284]: 2026-01-31 08:08:49.187710852 +0000 UTC m=+0.076460191 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:08:49 compute-2 nova_compute[226829]: 2026-01-31 08:08:49.217 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:49.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:08:50 compute-2 ceph-mon[77282]: pgmap v2176: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 1.8 KiB/s wr, 39 op/s
Jan 31 08:08:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:50.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:50 compute-2 nova_compute[226829]: 2026-01-31 08:08:50.681 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:51.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:52 compute-2 ceph-mon[77282]: pgmap v2177: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.6 KiB/s wr, 34 op/s
Jan 31 08:08:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:52.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:08:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:53.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:08:54 compute-2 nova_compute[226829]: 2026-01-31 08:08:54.219 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:54.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:08:55 compute-2 ceph-mon[77282]: pgmap v2178: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 852 B/s wr, 18 op/s
Jan 31 08:08:55 compute-2 nova_compute[226829]: 2026-01-31 08:08:55.684 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:55.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:56.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:56 compute-2 ceph-mon[77282]: pgmap v2179: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s rd, 170 B/s wr, 5 op/s
Jan 31 08:08:56 compute-2 sudo[277314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:08:56 compute-2 sudo[277314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:56 compute-2 sudo[277314]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:56 compute-2 sudo[277339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:08:56 compute-2 sudo[277339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:56 compute-2 sudo[277339]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:56 compute-2 sudo[277364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:08:56 compute-2 sudo[277364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:56 compute-2 sudo[277364]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:56 compute-2 sudo[277389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:08:56 compute-2 sudo[277389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:57 compute-2 sudo[277389]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:57 compute-2 sudo[277446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:08:57 compute-2 sudo[277446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:57 compute-2 sudo[277446]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:57 compute-2 sudo[277471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:08:57 compute-2 sudo[277471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:57 compute-2 sudo[277471]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:57 compute-2 sudo[277496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:08:57 compute-2 sudo[277496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:57 compute-2 sudo[277496]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:57 compute-2 sudo[277521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- inventory --format=json-pretty --filter-for-batch
Jan 31 08:08:57 compute-2 sudo[277521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:57.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:57 compute-2 podman[277589]: 2026-01-31 08:08:57.802907795 +0000 UTC m=+0.042646161 container create c77f8933a5abbf636fb20a62b7258da8e528bf10990c77ea5ed75f06dec592fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 08:08:57 compute-2 systemd[1]: Started libpod-conmon-c77f8933a5abbf636fb20a62b7258da8e528bf10990c77ea5ed75f06dec592fa.scope.
Jan 31 08:08:57 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:08:57 compute-2 podman[277589]: 2026-01-31 08:08:57.861603402 +0000 UTC m=+0.101341768 container init c77f8933a5abbf636fb20a62b7258da8e528bf10990c77ea5ed75f06dec592fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_fermat, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:08:57 compute-2 podman[277589]: 2026-01-31 08:08:57.867743769 +0000 UTC m=+0.107482145 container start c77f8933a5abbf636fb20a62b7258da8e528bf10990c77ea5ed75f06dec592fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_fermat, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:08:57 compute-2 podman[277589]: 2026-01-31 08:08:57.870550355 +0000 UTC m=+0.110288721 container attach c77f8933a5abbf636fb20a62b7258da8e528bf10990c77ea5ed75f06dec592fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_fermat, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:08:57 compute-2 nifty_fermat[277606]: 167 167
Jan 31 08:08:57 compute-2 systemd[1]: libpod-c77f8933a5abbf636fb20a62b7258da8e528bf10990c77ea5ed75f06dec592fa.scope: Deactivated successfully.
Jan 31 08:08:57 compute-2 podman[277589]: 2026-01-31 08:08:57.874832742 +0000 UTC m=+0.114571138 container died c77f8933a5abbf636fb20a62b7258da8e528bf10990c77ea5ed75f06dec592fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_fermat, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:08:57 compute-2 podman[277589]: 2026-01-31 08:08:57.784777103 +0000 UTC m=+0.024515499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:08:57 compute-2 systemd[1]: var-lib-containers-storage-overlay-4bf604f70283b71661ca2ccc430419053adf14f027ff9153f161f184c1a20231-merged.mount: Deactivated successfully.
Jan 31 08:08:57 compute-2 podman[277589]: 2026-01-31 08:08:57.912415274 +0000 UTC m=+0.152153630 container remove c77f8933a5abbf636fb20a62b7258da8e528bf10990c77ea5ed75f06dec592fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:08:57 compute-2 systemd[1]: libpod-conmon-c77f8933a5abbf636fb20a62b7258da8e528bf10990c77ea5ed75f06dec592fa.scope: Deactivated successfully.
Jan 31 08:08:58 compute-2 podman[277629]: 2026-01-31 08:08:58.056221227 +0000 UTC m=+0.046599339 container create 9520689e0c377542061aa194d42d9ae86453e73be2676e5c8a4ba4a5612454ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackwell, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 08:08:58 compute-2 systemd[1]: Started libpod-conmon-9520689e0c377542061aa194d42d9ae86453e73be2676e5c8a4ba4a5612454ea.scope.
Jan 31 08:08:58 compute-2 podman[277629]: 2026-01-31 08:08:58.035265837 +0000 UTC m=+0.025643979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:08:58 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:08:58 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c65f8a2cde7a64e5619255298ee02f9e626b5792e5d88c1ce52a34d2bfc9324/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:58 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c65f8a2cde7a64e5619255298ee02f9e626b5792e5d88c1ce52a34d2bfc9324/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:58 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c65f8a2cde7a64e5619255298ee02f9e626b5792e5d88c1ce52a34d2bfc9324/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:58 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c65f8a2cde7a64e5619255298ee02f9e626b5792e5d88c1ce52a34d2bfc9324/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:58 compute-2 podman[277629]: 2026-01-31 08:08:58.160326989 +0000 UTC m=+0.150705121 container init 9520689e0c377542061aa194d42d9ae86453e73be2676e5c8a4ba4a5612454ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackwell, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 08:08:58 compute-2 podman[277629]: 2026-01-31 08:08:58.166730123 +0000 UTC m=+0.157108225 container start 9520689e0c377542061aa194d42d9ae86453e73be2676e5c8a4ba4a5612454ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackwell, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Jan 31 08:08:58 compute-2 podman[277629]: 2026-01-31 08:08:58.169986922 +0000 UTC m=+0.160365054 container attach 9520689e0c377542061aa194d42d9ae86453e73be2676e5c8a4ba4a5612454ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:08:58 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:08:58 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:08:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4022534877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:58 compute-2 ceph-mon[77282]: pgmap v2180: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 20 GiB / 21 GiB avail
Jan 31 08:08:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:08:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:58.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:08:59 compute-2 nova_compute[226829]: 2026-01-31 08:08:59.222 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]: [
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:     {
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:         "available": false,
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:         "ceph_device": false,
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:         "lsm_data": {},
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:         "lvs": [],
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:         "path": "/dev/sr0",
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:         "rejected_reasons": [
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:             "Insufficient space (<5GB)",
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:             "Has a FileSystem"
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:         ],
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:         "sys_api": {
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:             "actuators": null,
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:             "device_nodes": "sr0",
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:             "devname": "sr0",
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:             "human_readable_size": "482.00 KB",
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:             "id_bus": "ata",
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:             "model": "QEMU DVD-ROM",
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:             "nr_requests": "2",
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:             "parent": "/dev/sr0",
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:             "partitions": {},
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:             "path": "/dev/sr0",
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:             "removable": "1",
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:             "rev": "2.5+",
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:             "ro": "0",
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:             "rotational": "1",
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:             "sas_address": "",
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:             "sas_device_handle": "",
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:             "scheduler_mode": "mq-deadline",
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:             "sectors": 0,
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:             "sectorsize": "2048",
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:             "size": 493568.0,
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:             "support_discard": "2048",
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:             "type": "disk",
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:             "vendor": "QEMU"
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:         }
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]:     }
Jan 31 08:08:59 compute-2 sleepy_blackwell[277646]: ]
Jan 31 08:08:59 compute-2 systemd[1]: libpod-9520689e0c377542061aa194d42d9ae86453e73be2676e5c8a4ba4a5612454ea.scope: Deactivated successfully.
Jan 31 08:08:59 compute-2 systemd[1]: libpod-9520689e0c377542061aa194d42d9ae86453e73be2676e5c8a4ba4a5612454ea.scope: Consumed 1.123s CPU time.
Jan 31 08:08:59 compute-2 podman[277629]: 2026-01-31 08:08:59.563637476 +0000 UTC m=+1.554015608 container died 9520689e0c377542061aa194d42d9ae86453e73be2676e5c8a4ba4a5612454ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 08:08:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:08:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:08:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:59.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:08:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:09:00 compute-2 systemd[1]: var-lib-containers-storage-overlay-3c65f8a2cde7a64e5619255298ee02f9e626b5792e5d88c1ce52a34d2bfc9324-merged.mount: Deactivated successfully.
Jan 31 08:09:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:00.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:00 compute-2 podman[277629]: 2026-01-31 08:09:00.663397294 +0000 UTC m=+2.653775436 container remove 9520689e0c377542061aa194d42d9ae86453e73be2676e5c8a4ba4a5612454ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:09:00 compute-2 systemd[1]: libpod-conmon-9520689e0c377542061aa194d42d9ae86453e73be2676e5c8a4ba4a5612454ea.scope: Deactivated successfully.
Jan 31 08:09:00 compute-2 nova_compute[226829]: 2026-01-31 08:09:00.685 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:00 compute-2 sudo[277521]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:00 compute-2 podman[278724]: 2026-01-31 08:09:00.774761794 +0000 UTC m=+1.185852822 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 31 08:09:01 compute-2 ceph-mon[77282]: pgmap v2181: 305 pgs: 305 active+clean; 123 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s rd, 92 KiB/s wr, 3 op/s
Jan 31 08:09:01 compute-2 nova_compute[226829]: 2026-01-31 08:09:01.310 226833 DEBUG oslo_concurrency.lockutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Acquiring lock "63136e80-d3bf-4057-aaff-910e5c7b1606" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:01 compute-2 nova_compute[226829]: 2026-01-31 08:09:01.310 226833 DEBUG oslo_concurrency.lockutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Lock "63136e80-d3bf-4057-aaff-910e5c7b1606" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:01 compute-2 nova_compute[226829]: 2026-01-31 08:09:01.346 226833 DEBUG nova.compute.manager [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:09:01 compute-2 nova_compute[226829]: 2026-01-31 08:09:01.455 226833 DEBUG oslo_concurrency.lockutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:01 compute-2 nova_compute[226829]: 2026-01-31 08:09:01.456 226833 DEBUG oslo_concurrency.lockutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:01 compute-2 nova_compute[226829]: 2026-01-31 08:09:01.466 226833 DEBUG nova.virt.hardware [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:09:01 compute-2 nova_compute[226829]: 2026-01-31 08:09:01.466 226833 INFO nova.compute.claims [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:09:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:01.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:01 compute-2 anacron[52153]: Job `cron.weekly' started
Jan 31 08:09:01 compute-2 anacron[52153]: Job `cron.weekly' terminated
Jan 31 08:09:02 compute-2 nova_compute[226829]: 2026-01-31 08:09:02.089 226833 DEBUG oslo_concurrency.processutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:09:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:02.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:09:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:09:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:09:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:09:02 compute-2 ceph-mon[77282]: pgmap v2182: 305 pgs: 305 active+clean; 140 MiB data, 882 MiB used, 20 GiB / 21 GiB avail; 2.5 KiB/s rd, 1008 KiB/s wr, 4 op/s
Jan 31 08:09:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:09:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:09:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:09:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:09:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:09:02 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/983292820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:02 compute-2 nova_compute[226829]: 2026-01-31 08:09:02.496 226833 DEBUG oslo_concurrency.processutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:09:02 compute-2 nova_compute[226829]: 2026-01-31 08:09:02.501 226833 DEBUG nova.compute.provider_tree [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:09:02 compute-2 nova_compute[226829]: 2026-01-31 08:09:02.516 226833 DEBUG nova.scheduler.client.report [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:09:02 compute-2 nova_compute[226829]: 2026-01-31 08:09:02.536 226833 DEBUG oslo_concurrency.lockutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:02 compute-2 nova_compute[226829]: 2026-01-31 08:09:02.596 226833 DEBUG oslo_concurrency.lockutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Acquiring lock "23df5f49-dd10-4819-816a-9a86abd39192" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:02 compute-2 nova_compute[226829]: 2026-01-31 08:09:02.596 226833 DEBUG oslo_concurrency.lockutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Lock "23df5f49-dd10-4819-816a-9a86abd39192" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:02 compute-2 nova_compute[226829]: 2026-01-31 08:09:02.606 226833 DEBUG oslo_concurrency.lockutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Lock "23df5f49-dd10-4819-816a-9a86abd39192" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:02 compute-2 nova_compute[226829]: 2026-01-31 08:09:02.606 226833 DEBUG nova.compute.manager [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:09:02 compute-2 nova_compute[226829]: 2026-01-31 08:09:02.700 226833 DEBUG nova.compute.manager [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:09:02 compute-2 nova_compute[226829]: 2026-01-31 08:09:02.701 226833 DEBUG nova.network.neutron [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:09:02 compute-2 nova_compute[226829]: 2026-01-31 08:09:02.765 226833 INFO nova.virt.libvirt.driver [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:09:02 compute-2 nova_compute[226829]: 2026-01-31 08:09:02.787 226833 DEBUG nova.compute.manager [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:09:02 compute-2 nova_compute[226829]: 2026-01-31 08:09:02.937 226833 DEBUG nova.compute.manager [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:09:02 compute-2 nova_compute[226829]: 2026-01-31 08:09:02.938 226833 DEBUG nova.virt.libvirt.driver [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:09:02 compute-2 nova_compute[226829]: 2026-01-31 08:09:02.939 226833 INFO nova.virt.libvirt.driver [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Creating image(s)
Jan 31 08:09:02 compute-2 nova_compute[226829]: 2026-01-31 08:09:02.966 226833 DEBUG nova.storage.rbd_utils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] rbd image 63136e80-d3bf-4057-aaff-910e5c7b1606_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:09:02 compute-2 nova_compute[226829]: 2026-01-31 08:09:02.992 226833 DEBUG nova.storage.rbd_utils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] rbd image 63136e80-d3bf-4057-aaff-910e5c7b1606_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:09:03 compute-2 nova_compute[226829]: 2026-01-31 08:09:03.017 226833 DEBUG nova.storage.rbd_utils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] rbd image 63136e80-d3bf-4057-aaff-910e5c7b1606_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:09:03 compute-2 nova_compute[226829]: 2026-01-31 08:09:03.021 226833 DEBUG oslo_concurrency.processutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:09:03 compute-2 nova_compute[226829]: 2026-01-31 08:09:03.039 226833 DEBUG nova.policy [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f694162f685a48139cf09d24531572e2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'acc43e4b8be149d9adc0a3c8e9f9ddd7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:09:03 compute-2 nova_compute[226829]: 2026-01-31 08:09:03.074 226833 DEBUG oslo_concurrency.processutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:09:03 compute-2 nova_compute[226829]: 2026-01-31 08:09:03.075 226833 DEBUG oslo_concurrency.lockutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:03 compute-2 nova_compute[226829]: 2026-01-31 08:09:03.075 226833 DEBUG oslo_concurrency.lockutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:03 compute-2 nova_compute[226829]: 2026-01-31 08:09:03.076 226833 DEBUG oslo_concurrency.lockutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:03 compute-2 sudo[278836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:09:03 compute-2 sudo[278836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:03 compute-2 sudo[278836]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:03 compute-2 nova_compute[226829]: 2026-01-31 08:09:03.103 226833 DEBUG nova.storage.rbd_utils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] rbd image 63136e80-d3bf-4057-aaff-910e5c7b1606_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:09:03 compute-2 nova_compute[226829]: 2026-01-31 08:09:03.107 226833 DEBUG oslo_concurrency.processutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 63136e80-d3bf-4057-aaff-910e5c7b1606_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:09:03 compute-2 sudo[278881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:09:03 compute-2 sudo[278881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:03 compute-2 sudo[278881]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/983292820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:03.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:04 compute-2 nova_compute[226829]: 2026-01-31 08:09:04.226 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:04.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:04 compute-2 nova_compute[226829]: 2026-01-31 08:09:04.632 226833 DEBUG nova.network.neutron [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Successfully created port: 1ab6830a-b918-43d5-aef2-ef65be77a24d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:09:04 compute-2 ceph-mon[77282]: pgmap v2183: 305 pgs: 305 active+clean; 156 MiB data, 889 MiB used, 20 GiB / 21 GiB avail; 9.3 KiB/s rd, 1.6 MiB/s wr, 17 op/s
Jan 31 08:09:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4061122625' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:09:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:09:05 compute-2 nova_compute[226829]: 2026-01-31 08:09:05.687 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:05.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:05 compute-2 nova_compute[226829]: 2026-01-31 08:09:05.991 226833 DEBUG nova.network.neutron [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Successfully updated port: 1ab6830a-b918-43d5-aef2-ef65be77a24d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:09:06 compute-2 nova_compute[226829]: 2026-01-31 08:09:06.016 226833 DEBUG oslo_concurrency.lockutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Acquiring lock "refresh_cache-63136e80-d3bf-4057-aaff-910e5c7b1606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:09:06 compute-2 nova_compute[226829]: 2026-01-31 08:09:06.016 226833 DEBUG oslo_concurrency.lockutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Acquired lock "refresh_cache-63136e80-d3bf-4057-aaff-910e5c7b1606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:09:06 compute-2 nova_compute[226829]: 2026-01-31 08:09:06.017 226833 DEBUG nova.network.neutron [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:09:06 compute-2 nova_compute[226829]: 2026-01-31 08:09:06.135 226833 DEBUG nova.compute.manager [req-adc9a71d-c12e-4da2-8d17-9fa145e913d5 req-20fee0f5-04c7-429d-9c48-2aa4fe48eef5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Received event network-changed-1ab6830a-b918-43d5-aef2-ef65be77a24d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:09:06 compute-2 nova_compute[226829]: 2026-01-31 08:09:06.135 226833 DEBUG nova.compute.manager [req-adc9a71d-c12e-4da2-8d17-9fa145e913d5 req-20fee0f5-04c7-429d-9c48-2aa4fe48eef5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Refreshing instance network info cache due to event network-changed-1ab6830a-b918-43d5-aef2-ef65be77a24d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:09:06 compute-2 nova_compute[226829]: 2026-01-31 08:09:06.136 226833 DEBUG oslo_concurrency.lockutils [req-adc9a71d-c12e-4da2-8d17-9fa145e913d5 req-20fee0f5-04c7-429d-9c48-2aa4fe48eef5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-63136e80-d3bf-4057-aaff-910e5c7b1606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:09:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2860341834' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:09:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2926926082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:06 compute-2 nova_compute[226829]: 2026-01-31 08:09:06.222 226833 DEBUG nova.network.neutron [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:09:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:06.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:06 compute-2 nova_compute[226829]: 2026-01-31 08:09:06.642 226833 DEBUG oslo_concurrency.processutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 63136e80-d3bf-4057-aaff-910e5c7b1606_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:09:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:06.881 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:06.884 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:06.885 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:07 compute-2 nova_compute[226829]: 2026-01-31 08:09:07.007 226833 DEBUG nova.storage.rbd_utils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] resizing rbd image 63136e80-d3bf-4057-aaff-910e5c7b1606_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:09:07 compute-2 nova_compute[226829]: 2026-01-31 08:09:07.396 226833 DEBUG nova.network.neutron [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Updating instance_info_cache with network_info: [{"id": "1ab6830a-b918-43d5-aef2-ef65be77a24d", "address": "fa:16:3e:e7:69:77", "network": {"id": "f3218607-8582-4d39-9ab8-13affa1e103a", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1145150035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc43e4b8be149d9adc0a3c8e9f9ddd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ab6830a-b9", "ovs_interfaceid": "1ab6830a-b918-43d5-aef2-ef65be77a24d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:09:07 compute-2 nova_compute[226829]: 2026-01-31 08:09:07.440 226833 DEBUG oslo_concurrency.lockutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Releasing lock "refresh_cache-63136e80-d3bf-4057-aaff-910e5c7b1606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:09:07 compute-2 nova_compute[226829]: 2026-01-31 08:09:07.441 226833 DEBUG nova.compute.manager [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Instance network_info: |[{"id": "1ab6830a-b918-43d5-aef2-ef65be77a24d", "address": "fa:16:3e:e7:69:77", "network": {"id": "f3218607-8582-4d39-9ab8-13affa1e103a", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1145150035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc43e4b8be149d9adc0a3c8e9f9ddd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ab6830a-b9", "ovs_interfaceid": "1ab6830a-b918-43d5-aef2-ef65be77a24d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:09:07 compute-2 nova_compute[226829]: 2026-01-31 08:09:07.441 226833 DEBUG oslo_concurrency.lockutils [req-adc9a71d-c12e-4da2-8d17-9fa145e913d5 req-20fee0f5-04c7-429d-9c48-2aa4fe48eef5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-63136e80-d3bf-4057-aaff-910e5c7b1606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:09:07 compute-2 nova_compute[226829]: 2026-01-31 08:09:07.441 226833 DEBUG nova.network.neutron [req-adc9a71d-c12e-4da2-8d17-9fa145e913d5 req-20fee0f5-04c7-429d-9c48-2aa4fe48eef5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Refreshing network info cache for port 1ab6830a-b918-43d5-aef2-ef65be77a24d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:09:07 compute-2 ceph-mon[77282]: pgmap v2184: 305 pgs: 305 active+clean; 167 MiB data, 892 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:09:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:09:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:07.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:09:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:08.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:08 compute-2 nova_compute[226829]: 2026-01-31 08:09:08.832 226833 DEBUG nova.objects.instance [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Lazy-loading 'migration_context' on Instance uuid 63136e80-d3bf-4057-aaff-910e5c7b1606 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:09:08 compute-2 nova_compute[226829]: 2026-01-31 08:09:08.858 226833 DEBUG nova.virt.libvirt.driver [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:09:08 compute-2 nova_compute[226829]: 2026-01-31 08:09:08.859 226833 DEBUG nova.virt.libvirt.driver [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Ensure instance console log exists: /var/lib/nova/instances/63136e80-d3bf-4057-aaff-910e5c7b1606/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:09:08 compute-2 nova_compute[226829]: 2026-01-31 08:09:08.859 226833 DEBUG oslo_concurrency.lockutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:08 compute-2 nova_compute[226829]: 2026-01-31 08:09:08.859 226833 DEBUG oslo_concurrency.lockutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:08 compute-2 nova_compute[226829]: 2026-01-31 08:09:08.859 226833 DEBUG oslo_concurrency.lockutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:08 compute-2 nova_compute[226829]: 2026-01-31 08:09:08.861 226833 DEBUG nova.virt.libvirt.driver [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Start _get_guest_xml network_info=[{"id": "1ab6830a-b918-43d5-aef2-ef65be77a24d", "address": "fa:16:3e:e7:69:77", "network": {"id": "f3218607-8582-4d39-9ab8-13affa1e103a", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1145150035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc43e4b8be149d9adc0a3c8e9f9ddd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ab6830a-b9", "ovs_interfaceid": "1ab6830a-b918-43d5-aef2-ef65be77a24d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:09:08 compute-2 nova_compute[226829]: 2026-01-31 08:09:08.865 226833 WARNING nova.virt.libvirt.driver [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:09:08 compute-2 nova_compute[226829]: 2026-01-31 08:09:08.870 226833 DEBUG nova.virt.libvirt.host [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:09:08 compute-2 nova_compute[226829]: 2026-01-31 08:09:08.870 226833 DEBUG nova.virt.libvirt.host [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:09:08 compute-2 nova_compute[226829]: 2026-01-31 08:09:08.873 226833 DEBUG nova.virt.libvirt.host [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:09:08 compute-2 nova_compute[226829]: 2026-01-31 08:09:08.873 226833 DEBUG nova.virt.libvirt.host [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:09:08 compute-2 nova_compute[226829]: 2026-01-31 08:09:08.874 226833 DEBUG nova.virt.libvirt.driver [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:09:08 compute-2 nova_compute[226829]: 2026-01-31 08:09:08.874 226833 DEBUG nova.virt.hardware [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:09:08 compute-2 nova_compute[226829]: 2026-01-31 08:09:08.874 226833 DEBUG nova.virt.hardware [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:09:08 compute-2 nova_compute[226829]: 2026-01-31 08:09:08.875 226833 DEBUG nova.virt.hardware [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:09:08 compute-2 nova_compute[226829]: 2026-01-31 08:09:08.875 226833 DEBUG nova.virt.hardware [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:09:08 compute-2 nova_compute[226829]: 2026-01-31 08:09:08.875 226833 DEBUG nova.virt.hardware [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:09:08 compute-2 nova_compute[226829]: 2026-01-31 08:09:08.875 226833 DEBUG nova.virt.hardware [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:09:08 compute-2 nova_compute[226829]: 2026-01-31 08:09:08.875 226833 DEBUG nova.virt.hardware [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:09:08 compute-2 nova_compute[226829]: 2026-01-31 08:09:08.876 226833 DEBUG nova.virt.hardware [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:09:08 compute-2 nova_compute[226829]: 2026-01-31 08:09:08.876 226833 DEBUG nova.virt.hardware [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:09:08 compute-2 nova_compute[226829]: 2026-01-31 08:09:08.876 226833 DEBUG nova.virt.hardware [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:09:08 compute-2 nova_compute[226829]: 2026-01-31 08:09:08.876 226833 DEBUG nova.virt.hardware [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:09:08 compute-2 nova_compute[226829]: 2026-01-31 08:09:08.879 226833 DEBUG oslo_concurrency.processutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:09:08 compute-2 ceph-mon[77282]: pgmap v2185: 305 pgs: 305 active+clean; 181 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Jan 31 08:09:09 compute-2 nova_compute[226829]: 2026-01-31 08:09:09.261 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:09:09 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3208905229' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:09:09 compute-2 nova_compute[226829]: 2026-01-31 08:09:09.306 226833 DEBUG oslo_concurrency.processutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:09:09 compute-2 nova_compute[226829]: 2026-01-31 08:09:09.332 226833 DEBUG nova.storage.rbd_utils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] rbd image 63136e80-d3bf-4057-aaff-910e5c7b1606_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:09:09 compute-2 nova_compute[226829]: 2026-01-31 08:09:09.336 226833 DEBUG oslo_concurrency.processutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:09:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:09:09 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1987200555' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:09:09 compute-2 nova_compute[226829]: 2026-01-31 08:09:09.745 226833 DEBUG oslo_concurrency.processutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:09:09 compute-2 nova_compute[226829]: 2026-01-31 08:09:09.748 226833 DEBUG nova.virt.libvirt.vif [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:08:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-2072569271',display_name='tempest-ServerGroupTestJSON-server-2072569271',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-2072569271',id=115,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='acc43e4b8be149d9adc0a3c8e9f9ddd7',ramdisk_id='',reservation_id='r-v5cjikku',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1767433902',owner_user_name='tempest-ServerGroupTestJSON-176743390
2-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:09:02Z,user_data=None,user_id='f694162f685a48139cf09d24531572e2',uuid=63136e80-d3bf-4057-aaff-910e5c7b1606,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1ab6830a-b918-43d5-aef2-ef65be77a24d", "address": "fa:16:3e:e7:69:77", "network": {"id": "f3218607-8582-4d39-9ab8-13affa1e103a", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1145150035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc43e4b8be149d9adc0a3c8e9f9ddd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ab6830a-b9", "ovs_interfaceid": "1ab6830a-b918-43d5-aef2-ef65be77a24d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:09:09 compute-2 nova_compute[226829]: 2026-01-31 08:09:09.748 226833 DEBUG nova.network.os_vif_util [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Converting VIF {"id": "1ab6830a-b918-43d5-aef2-ef65be77a24d", "address": "fa:16:3e:e7:69:77", "network": {"id": "f3218607-8582-4d39-9ab8-13affa1e103a", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1145150035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc43e4b8be149d9adc0a3c8e9f9ddd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ab6830a-b9", "ovs_interfaceid": "1ab6830a-b918-43d5-aef2-ef65be77a24d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:09:09 compute-2 nova_compute[226829]: 2026-01-31 08:09:09.749 226833 DEBUG nova.network.os_vif_util [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:69:77,bridge_name='br-int',has_traffic_filtering=True,id=1ab6830a-b918-43d5-aef2-ef65be77a24d,network=Network(f3218607-8582-4d39-9ab8-13affa1e103a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ab6830a-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:09:09 compute-2 nova_compute[226829]: 2026-01-31 08:09:09.751 226833 DEBUG nova.objects.instance [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 63136e80-d3bf-4057-aaff-910e5c7b1606 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:09:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:09:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:09.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:09:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.003 226833 DEBUG nova.virt.libvirt.driver [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:09:10 compute-2 nova_compute[226829]:   <uuid>63136e80-d3bf-4057-aaff-910e5c7b1606</uuid>
Jan 31 08:09:10 compute-2 nova_compute[226829]:   <name>instance-00000073</name>
Jan 31 08:09:10 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:09:10 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:09:10 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <nova:name>tempest-ServerGroupTestJSON-server-2072569271</nova:name>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:09:08</nova:creationTime>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:09:10 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:09:10 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:09:10 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:09:10 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:09:10 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:09:10 compute-2 nova_compute[226829]:         <nova:user uuid="f694162f685a48139cf09d24531572e2">tempest-ServerGroupTestJSON-1767433902-project-member</nova:user>
Jan 31 08:09:10 compute-2 nova_compute[226829]:         <nova:project uuid="acc43e4b8be149d9adc0a3c8e9f9ddd7">tempest-ServerGroupTestJSON-1767433902</nova:project>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:09:10 compute-2 nova_compute[226829]:         <nova:port uuid="1ab6830a-b918-43d5-aef2-ef65be77a24d">
Jan 31 08:09:10 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:09:10 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:09:10 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <system>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <entry name="serial">63136e80-d3bf-4057-aaff-910e5c7b1606</entry>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <entry name="uuid">63136e80-d3bf-4057-aaff-910e5c7b1606</entry>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     </system>
Jan 31 08:09:10 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:09:10 compute-2 nova_compute[226829]:   <os>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:   </os>
Jan 31 08:09:10 compute-2 nova_compute[226829]:   <features>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:   </features>
Jan 31 08:09:10 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:09:10 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:09:10 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/63136e80-d3bf-4057-aaff-910e5c7b1606_disk">
Jan 31 08:09:10 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       </source>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:09:10 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/63136e80-d3bf-4057-aaff-910e5c7b1606_disk.config">
Jan 31 08:09:10 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       </source>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:09:10 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:e7:69:77"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <target dev="tap1ab6830a-b9"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/63136e80-d3bf-4057-aaff-910e5c7b1606/console.log" append="off"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <video>
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     </video>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:09:10 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:09:10 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:09:10 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:09:10 compute-2 nova_compute[226829]: </domain>
Jan 31 08:09:10 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.005 226833 DEBUG nova.compute.manager [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Preparing to wait for external event network-vif-plugged-1ab6830a-b918-43d5-aef2-ef65be77a24d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.006 226833 DEBUG oslo_concurrency.lockutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Acquiring lock "63136e80-d3bf-4057-aaff-910e5c7b1606-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.006 226833 DEBUG oslo_concurrency.lockutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Lock "63136e80-d3bf-4057-aaff-910e5c7b1606-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.006 226833 DEBUG oslo_concurrency.lockutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Lock "63136e80-d3bf-4057-aaff-910e5c7b1606-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.007 226833 DEBUG nova.virt.libvirt.vif [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:08:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-2072569271',display_name='tempest-ServerGroupTestJSON-server-2072569271',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-2072569271',id=115,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='acc43e4b8be149d9adc0a3c8e9f9ddd7',ramdisk_id='',reservation_id='r-v5cjikku',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1767433902',owner_user_name='tempest-ServerGroupTestJSON
-1767433902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:09:02Z,user_data=None,user_id='f694162f685a48139cf09d24531572e2',uuid=63136e80-d3bf-4057-aaff-910e5c7b1606,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1ab6830a-b918-43d5-aef2-ef65be77a24d", "address": "fa:16:3e:e7:69:77", "network": {"id": "f3218607-8582-4d39-9ab8-13affa1e103a", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1145150035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc43e4b8be149d9adc0a3c8e9f9ddd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ab6830a-b9", "ovs_interfaceid": "1ab6830a-b918-43d5-aef2-ef65be77a24d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.007 226833 DEBUG nova.network.os_vif_util [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Converting VIF {"id": "1ab6830a-b918-43d5-aef2-ef65be77a24d", "address": "fa:16:3e:e7:69:77", "network": {"id": "f3218607-8582-4d39-9ab8-13affa1e103a", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1145150035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc43e4b8be149d9adc0a3c8e9f9ddd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ab6830a-b9", "ovs_interfaceid": "1ab6830a-b918-43d5-aef2-ef65be77a24d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.008 226833 DEBUG nova.network.os_vif_util [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:69:77,bridge_name='br-int',has_traffic_filtering=True,id=1ab6830a-b918-43d5-aef2-ef65be77a24d,network=Network(f3218607-8582-4d39-9ab8-13affa1e103a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ab6830a-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.008 226833 DEBUG os_vif [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:69:77,bridge_name='br-int',has_traffic_filtering=True,id=1ab6830a-b918-43d5-aef2-ef65be77a24d,network=Network(f3218607-8582-4d39-9ab8-13affa1e103a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ab6830a-b9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.009 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.009 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.010 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.019 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.019 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ab6830a-b9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.020 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1ab6830a-b9, col_values=(('external_ids', {'iface-id': '1ab6830a-b918-43d5-aef2-ef65be77a24d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:69:77', 'vm-uuid': '63136e80-d3bf-4057-aaff-910e5c7b1606'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.021 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:10 compute-2 NetworkManager[48999]: <info>  [1769846950.0230] manager: (tap1ab6830a-b9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/236)
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.024 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.029 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.029 226833 INFO os_vif [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:69:77,bridge_name='br-int',has_traffic_filtering=True,id=1ab6830a-b918-43d5-aef2-ef65be77a24d,network=Network(f3218607-8582-4d39-9ab8-13affa1e103a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ab6830a-b9')
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.089 226833 DEBUG nova.network.neutron [req-adc9a71d-c12e-4da2-8d17-9fa145e913d5 req-20fee0f5-04c7-429d-9c48-2aa4fe48eef5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Updated VIF entry in instance network info cache for port 1ab6830a-b918-43d5-aef2-ef65be77a24d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.089 226833 DEBUG nova.network.neutron [req-adc9a71d-c12e-4da2-8d17-9fa145e913d5 req-20fee0f5-04c7-429d-9c48-2aa4fe48eef5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Updating instance_info_cache with network_info: [{"id": "1ab6830a-b918-43d5-aef2-ef65be77a24d", "address": "fa:16:3e:e7:69:77", "network": {"id": "f3218607-8582-4d39-9ab8-13affa1e103a", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1145150035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc43e4b8be149d9adc0a3c8e9f9ddd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ab6830a-b9", "ovs_interfaceid": "1ab6830a-b918-43d5-aef2-ef65be77a24d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.106 226833 DEBUG oslo_concurrency.lockutils [req-adc9a71d-c12e-4da2-8d17-9fa145e913d5 req-20fee0f5-04c7-429d-9c48-2aa4fe48eef5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-63136e80-d3bf-4057-aaff-910e5c7b1606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.164 226833 DEBUG nova.virt.libvirt.driver [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.165 226833 DEBUG nova.virt.libvirt.driver [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.165 226833 DEBUG nova.virt.libvirt.driver [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] No VIF found with MAC fa:16:3e:e7:69:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.165 226833 INFO nova.virt.libvirt.driver [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Using config drive
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.192 226833 DEBUG nova.storage.rbd_utils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] rbd image 63136e80-d3bf-4057-aaff-910e5c7b1606_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:09:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:10.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3208905229' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:09:10 compute-2 ceph-mon[77282]: pgmap v2186: 305 pgs: 305 active+clean; 228 MiB data, 942 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 3.9 MiB/s wr, 64 op/s
Jan 31 08:09:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1987200555' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.646 226833 INFO nova.virt.libvirt.driver [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Creating config drive at /var/lib/nova/instances/63136e80-d3bf-4057-aaff-910e5c7b1606/disk.config
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.650 226833 DEBUG oslo_concurrency.processutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/63136e80-d3bf-4057-aaff-910e5c7b1606/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpptz8w9on execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.689 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.772 226833 DEBUG oslo_concurrency.processutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/63136e80-d3bf-4057-aaff-910e5c7b1606/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpptz8w9on" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.872 226833 DEBUG nova.storage.rbd_utils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] rbd image 63136e80-d3bf-4057-aaff-910e5c7b1606_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:09:10 compute-2 nova_compute[226829]: 2026-01-31 08:09:10.876 226833 DEBUG oslo_concurrency.processutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/63136e80-d3bf-4057-aaff-910e5c7b1606/disk.config 63136e80-d3bf-4057-aaff-910e5c7b1606_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:09:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:11.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/686841026' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:09:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1666735505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:09:11 compute-2 nova_compute[226829]: 2026-01-31 08:09:11.989 226833 DEBUG oslo_concurrency.processutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/63136e80-d3bf-4057-aaff-910e5c7b1606/disk.config 63136e80-d3bf-4057-aaff-910e5c7b1606_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:09:11 compute-2 nova_compute[226829]: 2026-01-31 08:09:11.990 226833 INFO nova.virt.libvirt.driver [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Deleting local config drive /var/lib/nova/instances/63136e80-d3bf-4057-aaff-910e5c7b1606/disk.config because it was imported into RBD.
Jan 31 08:09:12 compute-2 kernel: tap1ab6830a-b9: entered promiscuous mode
Jan 31 08:09:12 compute-2 ovn_controller[133834]: 2026-01-31T08:09:12Z|00476|binding|INFO|Claiming lport 1ab6830a-b918-43d5-aef2-ef65be77a24d for this chassis.
Jan 31 08:09:12 compute-2 ovn_controller[133834]: 2026-01-31T08:09:12Z|00477|binding|INFO|1ab6830a-b918-43d5-aef2-ef65be77a24d: Claiming fa:16:3e:e7:69:77 10.100.0.6
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.032 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:12 compute-2 NetworkManager[48999]: <info>  [1769846952.0348] manager: (tap1ab6830a-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/237)
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.039 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:12 compute-2 systemd-machined[195142]: New machine qemu-51-instance-00000073.
Jan 31 08:09:12 compute-2 systemd-udevd[279137]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.059 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:12 compute-2 ovn_controller[133834]: 2026-01-31T08:09:12Z|00478|binding|INFO|Setting lport 1ab6830a-b918-43d5-aef2-ef65be77a24d ovn-installed in OVS
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.062 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:12 compute-2 systemd[1]: Started Virtual Machine qemu-51-instance-00000073.
Jan 31 08:09:12 compute-2 NetworkManager[48999]: <info>  [1769846952.0652] device (tap1ab6830a-b9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:09:12 compute-2 NetworkManager[48999]: <info>  [1769846952.0661] device (tap1ab6830a-b9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:09:12 compute-2 ovn_controller[133834]: 2026-01-31T08:09:12Z|00479|binding|INFO|Setting lport 1ab6830a-b918-43d5-aef2-ef65be77a24d up in Southbound
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:12.093 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:69:77 10.100.0.6'], port_security=['fa:16:3e:e7:69:77 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '63136e80-d3bf-4057-aaff-910e5c7b1606', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3218607-8582-4d39-9ab8-13affa1e103a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acc43e4b8be149d9adc0a3c8e9f9ddd7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e3df6c13-cb44-43d6-9c95-e4e80ef10ce9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1851fefc-8922-4cbd-8503-cba2f4d4576b, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=1ab6830a-b918-43d5-aef2-ef65be77a24d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:12.094 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 1ab6830a-b918-43d5-aef2-ef65be77a24d in datapath f3218607-8582-4d39-9ab8-13affa1e103a bound to our chassis
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:12.095 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3218607-8582-4d39-9ab8-13affa1e103a
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:12.104 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[fbeaa0bf-6920-4db1-9d22-72be2d428dd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:12.105 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3218607-81 in ovnmeta-f3218607-8582-4d39-9ab8-13affa1e103a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:12.107 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3218607-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:12.107 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[00ff8c3a-7115-4799-a59d-7bf85747453e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:12.108 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[51db2b20-0aed-4531-b1a1-6ffeeb7f9053]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:12.117 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[98cfff54-137a-4d37-a6d3-4da37e087f4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:12.128 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[bf594830-34b7-453c-baa5-adb2f27aa2e2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:12.146 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[3b8288e5-ca5c-4e26-adf0-5da0769a0c49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:12 compute-2 systemd-udevd[279139]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:09:12 compute-2 NetworkManager[48999]: <info>  [1769846952.1522] manager: (tapf3218607-80): new Veth device (/org/freedesktop/NetworkManager/Devices/238)
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:12.151 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[86c7ae87-f2c8-4458-82c1-e58d47a8f3a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:12.174 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[4941d223-7b0a-4217-a622-8121a96f5a70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:12.177 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[606f6ce7-1f91-4825-8fce-cb26ea2ef2f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:12 compute-2 NetworkManager[48999]: <info>  [1769846952.1934] device (tapf3218607-80): carrier: link connected
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:12.195 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[ff80f17f-a1d7-4278-b7b5-35c94f4fec1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:12.206 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b805c9bf-d1b6-48bc-9abc-dcf1e40c8c8b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3218607-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:34:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 720466, 'reachable_time': 32361, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279170, 'error': None, 'target': 'ovnmeta-f3218607-8582-4d39-9ab8-13affa1e103a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:12.218 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0c378061-4806-456d-85be-604f457cd30e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe21:3440'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 720466, 'tstamp': 720466}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279171, 'error': None, 'target': 'ovnmeta-f3218607-8582-4d39-9ab8-13affa1e103a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:12.230 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b7f5ae68-49e5-48e8-aa24-dd32788b3a1c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3218607-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:34:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 720466, 'reachable_time': 32361, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279172, 'error': None, 'target': 'ovnmeta-f3218607-8582-4d39-9ab8-13affa1e103a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:12.250 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7e66bdbd-fac6-477b-9ef3-2f27a9a4cbb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:12.282 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8278427f-dd14-4c2e-80ab-7ac05e56448c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:12.283 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3218607-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:12.284 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:12.284 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3218607-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.285 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:12 compute-2 NetworkManager[48999]: <info>  [1769846952.2866] manager: (tapf3218607-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/239)
Jan 31 08:09:12 compute-2 kernel: tapf3218607-80: entered promiscuous mode
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.288 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:12.290 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3218607-80, col_values=(('external_ids', {'iface-id': '0f841463-9cb0-4093-aafc-319647b349c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.291 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:12 compute-2 ovn_controller[133834]: 2026-01-31T08:09:12Z|00480|binding|INFO|Releasing lport 0f841463-9cb0-4093-aafc-319647b349c1 from this chassis (sb_readonly=0)
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:12.293 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3218607-8582-4d39-9ab8-13affa1e103a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3218607-8582-4d39-9ab8-13affa1e103a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:12.294 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[04c217a5-e74e-4ea2-bd20-2ae4274a13fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:12.294 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-f3218607-8582-4d39-9ab8-13affa1e103a
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/f3218607-8582-4d39-9ab8-13affa1e103a.pid.haproxy
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID f3218607-8582-4d39-9ab8-13affa1e103a
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:12.295 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3218607-8582-4d39-9ab8-13affa1e103a', 'env', 'PROCESS_TAG=haproxy-f3218607-8582-4d39-9ab8-13affa1e103a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3218607-8582-4d39-9ab8-13affa1e103a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.297 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.325 226833 DEBUG nova.compute.manager [req-e5c94f9c-e6f2-4796-b1c8-524494fa3d77 req-3ccd0a13-e718-4c8d-9ecb-e1fbdf9dfa8e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Received event network-vif-plugged-1ab6830a-b918-43d5-aef2-ef65be77a24d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.326 226833 DEBUG oslo_concurrency.lockutils [req-e5c94f9c-e6f2-4796-b1c8-524494fa3d77 req-3ccd0a13-e718-4c8d-9ecb-e1fbdf9dfa8e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "63136e80-d3bf-4057-aaff-910e5c7b1606-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.326 226833 DEBUG oslo_concurrency.lockutils [req-e5c94f9c-e6f2-4796-b1c8-524494fa3d77 req-3ccd0a13-e718-4c8d-9ecb-e1fbdf9dfa8e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "63136e80-d3bf-4057-aaff-910e5c7b1606-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.327 226833 DEBUG oslo_concurrency.lockutils [req-e5c94f9c-e6f2-4796-b1c8-524494fa3d77 req-3ccd0a13-e718-4c8d-9ecb-e1fbdf9dfa8e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "63136e80-d3bf-4057-aaff-910e5c7b1606-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.327 226833 DEBUG nova.compute.manager [req-e5c94f9c-e6f2-4796-b1c8-524494fa3d77 req-3ccd0a13-e718-4c8d-9ecb-e1fbdf9dfa8e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Processing event network-vif-plugged-1ab6830a-b918-43d5-aef2-ef65be77a24d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:09:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:12.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:12 compute-2 podman[279245]: 2026-01-31 08:09:12.616977797 +0000 UTC m=+0.042422125 container create aea9081b31f228c9b36c72739c24c251a50b82df85baf5df49d695a0ffd053dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3218607-8582-4d39-9ab8-13affa1e103a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.623 226833 DEBUG nova.compute.manager [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.624 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846952.6224225, 63136e80-d3bf-4057-aaff-910e5c7b1606 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.624 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] VM Started (Lifecycle Event)
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.628 226833 DEBUG nova.virt.libvirt.driver [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.631 226833 INFO nova.virt.libvirt.driver [-] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Instance spawned successfully.
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.631 226833 DEBUG nova.virt.libvirt.driver [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.657 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:09:12 compute-2 systemd[1]: Started libpod-conmon-aea9081b31f228c9b36c72739c24c251a50b82df85baf5df49d695a0ffd053dd.scope.
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.664 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.666 226833 DEBUG nova.virt.libvirt.driver [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.667 226833 DEBUG nova.virt.libvirt.driver [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.667 226833 DEBUG nova.virt.libvirt.driver [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.668 226833 DEBUG nova.virt.libvirt.driver [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.668 226833 DEBUG nova.virt.libvirt.driver [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.668 226833 DEBUG nova.virt.libvirt.driver [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:09:12 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:09:12 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30a6ec8bb2872e2a4d600bc15cfc7f650ae90a56daab544b1cc2fe0f25301aff/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:12 compute-2 podman[279245]: 2026-01-31 08:09:12.593712864 +0000 UTC m=+0.019157202 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:09:12 compute-2 podman[279245]: 2026-01-31 08:09:12.69097455 +0000 UTC m=+0.116418888 container init aea9081b31f228c9b36c72739c24c251a50b82df85baf5df49d695a0ffd053dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3218607-8582-4d39-9ab8-13affa1e103a, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.693 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.694 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846952.6226683, 63136e80-d3bf-4057-aaff-910e5c7b1606 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.694 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] VM Paused (Lifecycle Event)
Jan 31 08:09:12 compute-2 podman[279245]: 2026-01-31 08:09:12.694977879 +0000 UTC m=+0.120422197 container start aea9081b31f228c9b36c72739c24c251a50b82df85baf5df49d695a0ffd053dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3218607-8582-4d39-9ab8-13affa1e103a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:09:12 compute-2 neutron-haproxy-ovnmeta-f3218607-8582-4d39-9ab8-13affa1e103a[279261]: [NOTICE]   (279265) : New worker (279267) forked
Jan 31 08:09:12 compute-2 neutron-haproxy-ovnmeta-f3218607-8582-4d39-9ab8-13affa1e103a[279261]: [NOTICE]   (279265) : Loading success.
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.741 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.744 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846952.6278281, 63136e80-d3bf-4057-aaff-910e5c7b1606 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.745 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] VM Resumed (Lifecycle Event)
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.764 226833 INFO nova.compute.manager [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Took 9.83 seconds to spawn the instance on the hypervisor.
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.765 226833 DEBUG nova.compute.manager [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.777 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.780 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.818 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.844 226833 INFO nova.compute.manager [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Took 11.43 seconds to build instance.
Jan 31 08:09:12 compute-2 nova_compute[226829]: 2026-01-31 08:09:12.869 226833 DEBUG oslo_concurrency.lockutils [None req-4b8c491d-39cc-468d-b409-d54e6368fd11 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Lock "63136e80-d3bf-4057-aaff-910e5c7b1606" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:12 compute-2 ceph-mon[77282]: pgmap v2187: 305 pgs: 305 active+clean; 252 MiB data, 951 MiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 5.2 MiB/s wr, 80 op/s
Jan 31 08:09:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:09:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:13.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:09:14 compute-2 sudo[279277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:09:14 compute-2 sudo[279277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:14 compute-2 sudo[279277]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:14 compute-2 sudo[279302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:09:14 compute-2 sudo[279302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:14 compute-2 sudo[279302]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:14.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.436 226833 DEBUG nova.compute.manager [req-9153d527-3837-481a-b299-7c818857dbdd req-f9d8e4c5-8fd4-4624-8f3d-6576ec4496b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Received event network-vif-plugged-1ab6830a-b918-43d5-aef2-ef65be77a24d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.437 226833 DEBUG oslo_concurrency.lockutils [req-9153d527-3837-481a-b299-7c818857dbdd req-f9d8e4c5-8fd4-4624-8f3d-6576ec4496b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "63136e80-d3bf-4057-aaff-910e5c7b1606-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.437 226833 DEBUG oslo_concurrency.lockutils [req-9153d527-3837-481a-b299-7c818857dbdd req-f9d8e4c5-8fd4-4624-8f3d-6576ec4496b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "63136e80-d3bf-4057-aaff-910e5c7b1606-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.437 226833 DEBUG oslo_concurrency.lockutils [req-9153d527-3837-481a-b299-7c818857dbdd req-f9d8e4c5-8fd4-4624-8f3d-6576ec4496b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "63136e80-d3bf-4057-aaff-910e5c7b1606-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.438 226833 DEBUG nova.compute.manager [req-9153d527-3837-481a-b299-7c818857dbdd req-f9d8e4c5-8fd4-4624-8f3d-6576ec4496b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] No waiting events found dispatching network-vif-plugged-1ab6830a-b918-43d5-aef2-ef65be77a24d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.438 226833 WARNING nova.compute.manager [req-9153d527-3837-481a-b299-7c818857dbdd req-f9d8e4c5-8fd4-4624-8f3d-6576ec4496b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Received unexpected event network-vif-plugged-1ab6830a-b918-43d5-aef2-ef65be77a24d for instance with vm_state active and task_state None.
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.629 226833 DEBUG oslo_concurrency.lockutils [None req-8c32eb40-cb1a-4c80-a15a-252b4a61ab51 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Acquiring lock "63136e80-d3bf-4057-aaff-910e5c7b1606" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.630 226833 DEBUG oslo_concurrency.lockutils [None req-8c32eb40-cb1a-4c80-a15a-252b4a61ab51 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Lock "63136e80-d3bf-4057-aaff-910e5c7b1606" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.630 226833 DEBUG oslo_concurrency.lockutils [None req-8c32eb40-cb1a-4c80-a15a-252b4a61ab51 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Acquiring lock "63136e80-d3bf-4057-aaff-910e5c7b1606-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.630 226833 DEBUG oslo_concurrency.lockutils [None req-8c32eb40-cb1a-4c80-a15a-252b4a61ab51 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Lock "63136e80-d3bf-4057-aaff-910e5c7b1606-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.630 226833 DEBUG oslo_concurrency.lockutils [None req-8c32eb40-cb1a-4c80-a15a-252b4a61ab51 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Lock "63136e80-d3bf-4057-aaff-910e5c7b1606-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.631 226833 INFO nova.compute.manager [None req-8c32eb40-cb1a-4c80-a15a-252b4a61ab51 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Terminating instance
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.632 226833 DEBUG nova.compute.manager [None req-8c32eb40-cb1a-4c80-a15a-252b4a61ab51 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:09:14 compute-2 kernel: tap1ab6830a-b9 (unregistering): left promiscuous mode
Jan 31 08:09:14 compute-2 ceph-mon[77282]: pgmap v2188: 305 pgs: 305 active+clean; 260 MiB data, 952 MiB used, 20 GiB / 21 GiB avail; 754 KiB/s rd, 4.3 MiB/s wr, 108 op/s
Jan 31 08:09:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:09:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:09:14 compute-2 NetworkManager[48999]: <info>  [1769846954.6937] device (tap1ab6830a-b9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.693 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:14 compute-2 ovn_controller[133834]: 2026-01-31T08:09:14Z|00481|binding|INFO|Releasing lport 1ab6830a-b918-43d5-aef2-ef65be77a24d from this chassis (sb_readonly=0)
Jan 31 08:09:14 compute-2 ovn_controller[133834]: 2026-01-31T08:09:14Z|00482|binding|INFO|Setting lport 1ab6830a-b918-43d5-aef2-ef65be77a24d down in Southbound
Jan 31 08:09:14 compute-2 ovn_controller[133834]: 2026-01-31T08:09:14Z|00483|binding|INFO|Removing iface tap1ab6830a-b9 ovn-installed in OVS
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.701 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.703 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:14.708 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:69:77 10.100.0.6'], port_security=['fa:16:3e:e7:69:77 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '63136e80-d3bf-4057-aaff-910e5c7b1606', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3218607-8582-4d39-9ab8-13affa1e103a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acc43e4b8be149d9adc0a3c8e9f9ddd7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e3df6c13-cb44-43d6-9c95-e4e80ef10ce9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1851fefc-8922-4cbd-8503-cba2f4d4576b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=1ab6830a-b918-43d5-aef2-ef65be77a24d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.709 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:14.710 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 1ab6830a-b918-43d5-aef2-ef65be77a24d in datapath f3218607-8582-4d39-9ab8-13affa1e103a unbound from our chassis
Jan 31 08:09:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:14.712 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3218607-8582-4d39-9ab8-13affa1e103a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:09:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:14.713 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[dc04f3d6-2093-404a-b9ef-d0292f22e7af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:14.714 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3218607-8582-4d39-9ab8-13affa1e103a namespace which is not needed anymore
Jan 31 08:09:14 compute-2 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d00000073.scope: Deactivated successfully.
Jan 31 08:09:14 compute-2 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d00000073.scope: Consumed 2.650s CPU time.
Jan 31 08:09:14 compute-2 systemd-machined[195142]: Machine qemu-51-instance-00000073 terminated.
Jan 31 08:09:14 compute-2 neutron-haproxy-ovnmeta-f3218607-8582-4d39-9ab8-13affa1e103a[279261]: [NOTICE]   (279265) : haproxy version is 2.8.14-c23fe91
Jan 31 08:09:14 compute-2 neutron-haproxy-ovnmeta-f3218607-8582-4d39-9ab8-13affa1e103a[279261]: [NOTICE]   (279265) : path to executable is /usr/sbin/haproxy
Jan 31 08:09:14 compute-2 neutron-haproxy-ovnmeta-f3218607-8582-4d39-9ab8-13affa1e103a[279261]: [ALERT]    (279265) : Current worker (279267) exited with code 143 (Terminated)
Jan 31 08:09:14 compute-2 neutron-haproxy-ovnmeta-f3218607-8582-4d39-9ab8-13affa1e103a[279261]: [WARNING]  (279265) : All workers exited. Exiting... (0)
Jan 31 08:09:14 compute-2 systemd[1]: libpod-aea9081b31f228c9b36c72739c24c251a50b82df85baf5df49d695a0ffd053dd.scope: Deactivated successfully.
Jan 31 08:09:14 compute-2 podman[279347]: 2026-01-31 08:09:14.826286901 +0000 UTC m=+0.044476581 container died aea9081b31f228c9b36c72739c24c251a50b82df85baf5df49d695a0ffd053dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3218607-8582-4d39-9ab8-13affa1e103a, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:09:14 compute-2 kernel: tap1ab6830a-b9: entered promiscuous mode
Jan 31 08:09:14 compute-2 NetworkManager[48999]: <info>  [1769846954.8472] manager: (tap1ab6830a-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/240)
Jan 31 08:09:14 compute-2 systemd-udevd[279157]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:09:14 compute-2 kernel: tap1ab6830a-b9 (unregistering): left promiscuous mode
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.851 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:14 compute-2 ovn_controller[133834]: 2026-01-31T08:09:14Z|00484|binding|INFO|Claiming lport 1ab6830a-b918-43d5-aef2-ef65be77a24d for this chassis.
Jan 31 08:09:14 compute-2 ovn_controller[133834]: 2026-01-31T08:09:14Z|00485|binding|INFO|1ab6830a-b918-43d5-aef2-ef65be77a24d: Claiming fa:16:3e:e7:69:77 10.100.0.6
Jan 31 08:09:14 compute-2 ovn_controller[133834]: 2026-01-31T08:09:14Z|00486|if_status|INFO|Dropped 5 log messages in last 440 seconds (most recently, 440 seconds ago) due to excessive rate
Jan 31 08:09:14 compute-2 ovn_controller[133834]: 2026-01-31T08:09:14Z|00487|if_status|INFO|Not setting lport 1ab6830a-b918-43d5-aef2-ef65be77a24d down as sb is readonly
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.862 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.864 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.866 226833 INFO nova.virt.libvirt.driver [-] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Instance destroyed successfully.
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.867 226833 DEBUG nova.objects.instance [None req-8c32eb40-cb1a-4c80-a15a-252b4a61ab51 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Lazy-loading 'resources' on Instance uuid 63136e80-d3bf-4057-aaff-910e5c7b1606 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:09:14 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aea9081b31f228c9b36c72739c24c251a50b82df85baf5df49d695a0ffd053dd-userdata-shm.mount: Deactivated successfully.
Jan 31 08:09:14 compute-2 systemd[1]: var-lib-containers-storage-overlay-30a6ec8bb2872e2a4d600bc15cfc7f650ae90a56daab544b1cc2fe0f25301aff-merged.mount: Deactivated successfully.
Jan 31 08:09:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:09:14 compute-2 podman[279347]: 2026-01-31 08:09:14.880247468 +0000 UTC m=+0.098437158 container cleanup aea9081b31f228c9b36c72739c24c251a50b82df85baf5df49d695a0ffd053dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3218607-8582-4d39-9ab8-13affa1e103a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:09:14 compute-2 systemd[1]: libpod-conmon-aea9081b31f228c9b36c72739c24c251a50b82df85baf5df49d695a0ffd053dd.scope: Deactivated successfully.
Jan 31 08:09:14 compute-2 podman[279381]: 2026-01-31 08:09:14.93616585 +0000 UTC m=+0.038533980 container remove aea9081b31f228c9b36c72739c24c251a50b82df85baf5df49d695a0ffd053dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3218607-8582-4d39-9ab8-13affa1e103a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 08:09:14 compute-2 ovn_controller[133834]: 2026-01-31T08:09:14Z|00488|binding|INFO|Releasing lport 1ab6830a-b918-43d5-aef2-ef65be77a24d from this chassis (sb_readonly=0)
Jan 31 08:09:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:14.938 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:69:77 10.100.0.6'], port_security=['fa:16:3e:e7:69:77 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '63136e80-d3bf-4057-aaff-910e5c7b1606', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3218607-8582-4d39-9ab8-13affa1e103a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acc43e4b8be149d9adc0a3c8e9f9ddd7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e3df6c13-cb44-43d6-9c95-e4e80ef10ce9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1851fefc-8922-4cbd-8503-cba2f4d4576b, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=1ab6830a-b918-43d5-aef2-ef65be77a24d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:09:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:14.940 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[40487bfa-795f-43c4-b3d4-7450a6a5260a]: (4, ('Sat Jan 31 08:09:14 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f3218607-8582-4d39-9ab8-13affa1e103a (aea9081b31f228c9b36c72739c24c251a50b82df85baf5df49d695a0ffd053dd)\naea9081b31f228c9b36c72739c24c251a50b82df85baf5df49d695a0ffd053dd\nSat Jan 31 08:09:14 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f3218607-8582-4d39-9ab8-13affa1e103a (aea9081b31f228c9b36c72739c24c251a50b82df85baf5df49d695a0ffd053dd)\naea9081b31f228c9b36c72739c24c251a50b82df85baf5df49d695a0ffd053dd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:14.942 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d6f14d31-cc83-47f5-b1c9-40768b7c7040]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:14.942 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3218607-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.983 226833 DEBUG nova.virt.libvirt.vif [None req-8c32eb40-cb1a-4c80-a15a-252b4a61ab51 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:08:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-2072569271',display_name='tempest-ServerGroupTestJSON-server-2072569271',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-2072569271',id=115,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:09:12Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='acc43e4b8be149d9adc0a3c8e9f9ddd7',ramdisk_id='',reservation_id='r-v5cjikku',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerGroupTestJSON-1767433902',owner_user_name='tempest-ServerGroupTestJSON-1767433902-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:09:12Z,user_data=None,user_id='f694162f685a48139cf09d24531572e2',uuid=63136e80-d3bf-4057-aaff-910e5c7b1606,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1ab6830a-b918-43d5-aef2-ef65be77a24d", "address": "fa:16:3e:e7:69:77", "network": {"id": "f3218607-8582-4d39-9ab8-13affa1e103a", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1145150035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc43e4b8be149d9adc0a3c8e9f9ddd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ab6830a-b9", "ovs_interfaceid": "1ab6830a-b918-43d5-aef2-ef65be77a24d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.984 226833 DEBUG nova.network.os_vif_util [None req-8c32eb40-cb1a-4c80-a15a-252b4a61ab51 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Converting VIF {"id": "1ab6830a-b918-43d5-aef2-ef65be77a24d", "address": "fa:16:3e:e7:69:77", "network": {"id": "f3218607-8582-4d39-9ab8-13affa1e103a", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1145150035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc43e4b8be149d9adc0a3c8e9f9ddd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ab6830a-b9", "ovs_interfaceid": "1ab6830a-b918-43d5-aef2-ef65be77a24d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.985 226833 DEBUG nova.network.os_vif_util [None req-8c32eb40-cb1a-4c80-a15a-252b4a61ab51 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:69:77,bridge_name='br-int',has_traffic_filtering=True,id=1ab6830a-b918-43d5-aef2-ef65be77a24d,network=Network(f3218607-8582-4d39-9ab8-13affa1e103a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ab6830a-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:09:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:14.985 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:69:77 10.100.0.6'], port_security=['fa:16:3e:e7:69:77 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '63136e80-d3bf-4057-aaff-910e5c7b1606', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3218607-8582-4d39-9ab8-13affa1e103a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acc43e4b8be149d9adc0a3c8e9f9ddd7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e3df6c13-cb44-43d6-9c95-e4e80ef10ce9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1851fefc-8922-4cbd-8503-cba2f4d4576b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=1ab6830a-b918-43d5-aef2-ef65be77a24d) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.985 226833 DEBUG os_vif [None req-8c32eb40-cb1a-4c80-a15a-252b4a61ab51 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:69:77,bridge_name='br-int',has_traffic_filtering=True,id=1ab6830a-b918-43d5-aef2-ef65be77a24d,network=Network(f3218607-8582-4d39-9ab8-13affa1e103a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ab6830a-b9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.988 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:14 compute-2 kernel: tapf3218607-80: left promiscuous mode
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.989 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ab6830a-b9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.991 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.992 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.993 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:14.995 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b8f4fb43-68bd-4663-94d3-ba23e917fb61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:14 compute-2 nova_compute[226829]: 2026-01-31 08:09:14.996 226833 INFO os_vif [None req-8c32eb40-cb1a-4c80-a15a-252b4a61ab51 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:69:77,bridge_name='br-int',has_traffic_filtering=True,id=1ab6830a-b918-43d5-aef2-ef65be77a24d,network=Network(f3218607-8582-4d39-9ab8-13affa1e103a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ab6830a-b9')
Jan 31 08:09:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:15.009 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[877d9099-ada1-4aa7-b26b-0b99faa18305]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:15.011 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[72173d21-56d9-46f0-a203-f30023c00ebc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:15.022 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[eb6b8d61-9246-47fd-8b8d-da0870bb41da]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 720461, 'reachable_time': 22320, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279411, 'error': None, 'target': 'ovnmeta-f3218607-8582-4d39-9ab8-13affa1e103a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:15 compute-2 systemd[1]: run-netns-ovnmeta\x2df3218607\x2d8582\x2d4d39\x2d9ab8\x2d13affa1e103a.mount: Deactivated successfully.
Jan 31 08:09:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:15.026 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3218607-8582-4d39-9ab8-13affa1e103a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:09:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:15.026 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[aac2f518-9cb7-4581-bfdf-75c3d7cb6830]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:15.026 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 1ab6830a-b918-43d5-aef2-ef65be77a24d in datapath f3218607-8582-4d39-9ab8-13affa1e103a unbound from our chassis
Jan 31 08:09:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:15.027 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3218607-8582-4d39-9ab8-13affa1e103a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:09:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:15.028 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[85611e0a-a7ec-4248-8237-fe616771ccaf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:15.029 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 1ab6830a-b918-43d5-aef2-ef65be77a24d in datapath f3218607-8582-4d39-9ab8-13affa1e103a unbound from our chassis
Jan 31 08:09:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:15.030 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3218607-8582-4d39-9ab8-13affa1e103a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:09:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:15.030 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[92d02595-6163-42e0-9fe2-270fe593f6c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:15 compute-2 nova_compute[226829]: 2026-01-31 08:09:15.690 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:15.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:16.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:16 compute-2 nova_compute[226829]: 2026-01-31 08:09:16.742 226833 DEBUG nova.compute.manager [req-3ec44122-2049-433d-af29-d907b5697547 req-18c07d95-2941-4511-9fba-8e26c22c7407 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Received event network-vif-unplugged-1ab6830a-b918-43d5-aef2-ef65be77a24d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:09:16 compute-2 nova_compute[226829]: 2026-01-31 08:09:16.742 226833 DEBUG oslo_concurrency.lockutils [req-3ec44122-2049-433d-af29-d907b5697547 req-18c07d95-2941-4511-9fba-8e26c22c7407 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "63136e80-d3bf-4057-aaff-910e5c7b1606-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:16 compute-2 nova_compute[226829]: 2026-01-31 08:09:16.742 226833 DEBUG oslo_concurrency.lockutils [req-3ec44122-2049-433d-af29-d907b5697547 req-18c07d95-2941-4511-9fba-8e26c22c7407 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "63136e80-d3bf-4057-aaff-910e5c7b1606-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:16 compute-2 nova_compute[226829]: 2026-01-31 08:09:16.743 226833 DEBUG oslo_concurrency.lockutils [req-3ec44122-2049-433d-af29-d907b5697547 req-18c07d95-2941-4511-9fba-8e26c22c7407 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "63136e80-d3bf-4057-aaff-910e5c7b1606-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:16 compute-2 nova_compute[226829]: 2026-01-31 08:09:16.743 226833 DEBUG nova.compute.manager [req-3ec44122-2049-433d-af29-d907b5697547 req-18c07d95-2941-4511-9fba-8e26c22c7407 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] No waiting events found dispatching network-vif-unplugged-1ab6830a-b918-43d5-aef2-ef65be77a24d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:09:16 compute-2 nova_compute[226829]: 2026-01-31 08:09:16.743 226833 DEBUG nova.compute.manager [req-3ec44122-2049-433d-af29-d907b5697547 req-18c07d95-2941-4511-9fba-8e26c22c7407 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Received event network-vif-unplugged-1ab6830a-b918-43d5-aef2-ef65be77a24d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:09:16 compute-2 nova_compute[226829]: 2026-01-31 08:09:16.743 226833 DEBUG nova.compute.manager [req-3ec44122-2049-433d-af29-d907b5697547 req-18c07d95-2941-4511-9fba-8e26c22c7407 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Received event network-vif-plugged-1ab6830a-b918-43d5-aef2-ef65be77a24d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:09:16 compute-2 nova_compute[226829]: 2026-01-31 08:09:16.743 226833 DEBUG oslo_concurrency.lockutils [req-3ec44122-2049-433d-af29-d907b5697547 req-18c07d95-2941-4511-9fba-8e26c22c7407 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "63136e80-d3bf-4057-aaff-910e5c7b1606-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:16 compute-2 nova_compute[226829]: 2026-01-31 08:09:16.743 226833 DEBUG oslo_concurrency.lockutils [req-3ec44122-2049-433d-af29-d907b5697547 req-18c07d95-2941-4511-9fba-8e26c22c7407 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "63136e80-d3bf-4057-aaff-910e5c7b1606-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:16 compute-2 nova_compute[226829]: 2026-01-31 08:09:16.744 226833 DEBUG oslo_concurrency.lockutils [req-3ec44122-2049-433d-af29-d907b5697547 req-18c07d95-2941-4511-9fba-8e26c22c7407 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "63136e80-d3bf-4057-aaff-910e5c7b1606-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:16 compute-2 nova_compute[226829]: 2026-01-31 08:09:16.744 226833 DEBUG nova.compute.manager [req-3ec44122-2049-433d-af29-d907b5697547 req-18c07d95-2941-4511-9fba-8e26c22c7407 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] No waiting events found dispatching network-vif-plugged-1ab6830a-b918-43d5-aef2-ef65be77a24d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:09:16 compute-2 nova_compute[226829]: 2026-01-31 08:09:16.744 226833 WARNING nova.compute.manager [req-3ec44122-2049-433d-af29-d907b5697547 req-18c07d95-2941-4511-9fba-8e26c22c7407 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Received unexpected event network-vif-plugged-1ab6830a-b918-43d5-aef2-ef65be77a24d for instance with vm_state active and task_state deleting.
Jan 31 08:09:17 compute-2 ceph-mon[77282]: pgmap v2189: 305 pgs: 305 active+clean; 260 MiB data, 952 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.7 MiB/s wr, 154 op/s
Jan 31 08:09:17 compute-2 nova_compute[226829]: 2026-01-31 08:09:17.508 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:09:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:17.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:09:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:18.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:09:18 compute-2 ceph-mon[77282]: pgmap v2190: 305 pgs: 305 active+clean; 238 MiB data, 943 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 3.6 MiB/s wr, 250 op/s
Jan 31 08:09:19 compute-2 nova_compute[226829]: 2026-01-31 08:09:19.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:09:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:09:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:19.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:09:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:09:19 compute-2 nova_compute[226829]: 2026-01-31 08:09:19.990 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:20 compute-2 podman[279419]: 2026-01-31 08:09:20.184699157 +0000 UTC m=+0.071208349 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:09:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:09:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:20.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:09:20 compute-2 nova_compute[226829]: 2026-01-31 08:09:20.692 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:20 compute-2 nova_compute[226829]: 2026-01-31 08:09:20.721 226833 DEBUG oslo_concurrency.lockutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Acquiring lock "05d236a3-3a52-4bdd-bb11-8956ea342070" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:20 compute-2 nova_compute[226829]: 2026-01-31 08:09:20.721 226833 DEBUG oslo_concurrency.lockutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lock "05d236a3-3a52-4bdd-bb11-8956ea342070" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:20 compute-2 ceph-mon[77282]: pgmap v2191: 305 pgs: 305 active+clean; 214 MiB data, 931 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.3 MiB/s wr, 269 op/s
Jan 31 08:09:21 compute-2 nova_compute[226829]: 2026-01-31 08:09:21.123 226833 DEBUG nova.compute.manager [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:09:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:21.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:21 compute-2 nova_compute[226829]: 2026-01-31 08:09:21.901 226833 DEBUG oslo_concurrency.lockutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:21 compute-2 nova_compute[226829]: 2026-01-31 08:09:21.902 226833 DEBUG oslo_concurrency.lockutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:21 compute-2 nova_compute[226829]: 2026-01-31 08:09:21.911 226833 DEBUG nova.virt.hardware [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:09:21 compute-2 nova_compute[226829]: 2026-01-31 08:09:21.911 226833 INFO nova.compute.claims [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:09:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:22.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:22 compute-2 nova_compute[226829]: 2026-01-31 08:09:22.536 226833 DEBUG oslo_concurrency.processutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:09:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:09:22 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2203133916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:22 compute-2 nova_compute[226829]: 2026-01-31 08:09:22.944 226833 DEBUG oslo_concurrency.processutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:09:22 compute-2 nova_compute[226829]: 2026-01-31 08:09:22.950 226833 DEBUG nova.compute.provider_tree [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:09:23 compute-2 nova_compute[226829]: 2026-01-31 08:09:23.022 226833 DEBUG nova.scheduler.client.report [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:09:23 compute-2 nova_compute[226829]: 2026-01-31 08:09:23.142 226833 DEBUG oslo_concurrency.lockutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.240s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:23 compute-2 nova_compute[226829]: 2026-01-31 08:09:23.143 226833 DEBUG nova.compute.manager [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:09:23 compute-2 ceph-mon[77282]: pgmap v2192: 305 pgs: 305 active+clean; 214 MiB data, 931 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.4 MiB/s wr, 261 op/s
Jan 31 08:09:23 compute-2 sudo[279468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:09:23 compute-2 sudo[279468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:23 compute-2 sudo[279468]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:23 compute-2 sudo[279493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:09:23 compute-2 sudo[279493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:23 compute-2 sudo[279493]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:23 compute-2 nova_compute[226829]: 2026-01-31 08:09:23.387 226833 DEBUG nova.compute.manager [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:09:23 compute-2 nova_compute[226829]: 2026-01-31 08:09:23.388 226833 DEBUG nova.network.neutron [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:09:23 compute-2 nova_compute[226829]: 2026-01-31 08:09:23.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:09:23 compute-2 nova_compute[226829]: 2026-01-31 08:09:23.505 226833 INFO nova.virt.libvirt.driver [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:09:23 compute-2 nova_compute[226829]: 2026-01-31 08:09:23.593 226833 DEBUG nova.compute.manager [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:09:23 compute-2 nova_compute[226829]: 2026-01-31 08:09:23.802 226833 DEBUG nova.compute.manager [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:09:23 compute-2 nova_compute[226829]: 2026-01-31 08:09:23.803 226833 DEBUG nova.virt.libvirt.driver [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:09:23 compute-2 nova_compute[226829]: 2026-01-31 08:09:23.803 226833 INFO nova.virt.libvirt.driver [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Creating image(s)
Jan 31 08:09:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:09:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:23.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:09:23 compute-2 nova_compute[226829]: 2026-01-31 08:09:23.837 226833 DEBUG nova.storage.rbd_utils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] rbd image 05d236a3-3a52-4bdd-bb11-8956ea342070_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:09:23 compute-2 nova_compute[226829]: 2026-01-31 08:09:23.874 226833 DEBUG nova.storage.rbd_utils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] rbd image 05d236a3-3a52-4bdd-bb11-8956ea342070_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:09:23 compute-2 nova_compute[226829]: 2026-01-31 08:09:23.905 226833 DEBUG nova.storage.rbd_utils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] rbd image 05d236a3-3a52-4bdd-bb11-8956ea342070_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:09:23 compute-2 nova_compute[226829]: 2026-01-31 08:09:23.908 226833 DEBUG oslo_concurrency.processutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:09:23 compute-2 nova_compute[226829]: 2026-01-31 08:09:23.926 226833 DEBUG nova.policy [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '162a66f52a10400aad586654cbabfbfd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1bd92af54c2f44f1913c48ef5ebd6c42', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:09:23 compute-2 nova_compute[226829]: 2026-01-31 08:09:23.934 226833 INFO nova.virt.libvirt.driver [None req-8c32eb40-cb1a-4c80-a15a-252b4a61ab51 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Deleting instance files /var/lib/nova/instances/63136e80-d3bf-4057-aaff-910e5c7b1606_del
Jan 31 08:09:23 compute-2 nova_compute[226829]: 2026-01-31 08:09:23.935 226833 INFO nova.virt.libvirt.driver [None req-8c32eb40-cb1a-4c80-a15a-252b4a61ab51 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Deletion of /var/lib/nova/instances/63136e80-d3bf-4057-aaff-910e5c7b1606_del complete
Jan 31 08:09:23 compute-2 nova_compute[226829]: 2026-01-31 08:09:23.957 226833 DEBUG oslo_concurrency.processutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:09:23 compute-2 nova_compute[226829]: 2026-01-31 08:09:23.958 226833 DEBUG oslo_concurrency.lockutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:23 compute-2 nova_compute[226829]: 2026-01-31 08:09:23.958 226833 DEBUG oslo_concurrency.lockutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:23 compute-2 nova_compute[226829]: 2026-01-31 08:09:23.959 226833 DEBUG oslo_concurrency.lockutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:23 compute-2 nova_compute[226829]: 2026-01-31 08:09:23.982 226833 DEBUG nova.storage.rbd_utils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] rbd image 05d236a3-3a52-4bdd-bb11-8956ea342070_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:09:23 compute-2 nova_compute[226829]: 2026-01-31 08:09:23.986 226833 DEBUG oslo_concurrency.processutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 05d236a3-3a52-4bdd-bb11-8956ea342070_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:09:24 compute-2 nova_compute[226829]: 2026-01-31 08:09:24.079 226833 INFO nova.compute.manager [None req-8c32eb40-cb1a-4c80-a15a-252b4a61ab51 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Took 9.45 seconds to destroy the instance on the hypervisor.
Jan 31 08:09:24 compute-2 nova_compute[226829]: 2026-01-31 08:09:24.082 226833 DEBUG oslo.service.loopingcall [None req-8c32eb40-cb1a-4c80-a15a-252b4a61ab51 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:09:24 compute-2 nova_compute[226829]: 2026-01-31 08:09:24.082 226833 DEBUG nova.compute.manager [-] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:09:24 compute-2 nova_compute[226829]: 2026-01-31 08:09:24.082 226833 DEBUG nova.network.neutron [-] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:09:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:24.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:24 compute-2 nova_compute[226829]: 2026-01-31 08:09:24.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:09:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2203133916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:24 compute-2 ceph-mon[77282]: pgmap v2193: 305 pgs: 305 active+clean; 214 MiB data, 932 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 116 KiB/s wr, 242 op/s
Jan 31 08:09:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:09:24 compute-2 nova_compute[226829]: 2026-01-31 08:09:24.992 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:25 compute-2 nova_compute[226829]: 2026-01-31 08:09:25.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:09:25 compute-2 nova_compute[226829]: 2026-01-31 08:09:25.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:09:25 compute-2 nova_compute[226829]: 2026-01-31 08:09:25.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:09:25 compute-2 nova_compute[226829]: 2026-01-31 08:09:25.531 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 08:09:25 compute-2 nova_compute[226829]: 2026-01-31 08:09:25.531 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 31 08:09:25 compute-2 nova_compute[226829]: 2026-01-31 08:09:25.531 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:09:25 compute-2 nova_compute[226829]: 2026-01-31 08:09:25.652 226833 DEBUG nova.network.neutron [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Successfully created port: bc9935ef-4dc8-4b06-8b86-0c380a0034f7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:09:25 compute-2 nova_compute[226829]: 2026-01-31 08:09:25.694 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:09:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:25.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:09:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:26.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:26 compute-2 nova_compute[226829]: 2026-01-31 08:09:26.898 226833 DEBUG nova.network.neutron [-] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:09:26 compute-2 ceph-mon[77282]: pgmap v2194: 305 pgs: 305 active+clean; 214 MiB data, 933 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 220 KiB/s wr, 228 op/s
Jan 31 08:09:26 compute-2 nova_compute[226829]: 2026-01-31 08:09:26.930 226833 DEBUG nova.compute.manager [req-c023d69d-50a7-45a2-9ac0-65690cb8729e req-c271f6b3-171c-434d-a5f7-dfd45d659037 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Received event network-vif-deleted-1ab6830a-b918-43d5-aef2-ef65be77a24d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:09:26 compute-2 nova_compute[226829]: 2026-01-31 08:09:26.931 226833 INFO nova.compute.manager [req-c023d69d-50a7-45a2-9ac0-65690cb8729e req-c271f6b3-171c-434d-a5f7-dfd45d659037 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Neutron deleted interface 1ab6830a-b918-43d5-aef2-ef65be77a24d; detaching it from the instance and deleting it from the info cache
Jan 31 08:09:26 compute-2 nova_compute[226829]: 2026-01-31 08:09:26.931 226833 DEBUG nova.network.neutron [req-c023d69d-50a7-45a2-9ac0-65690cb8729e req-c271f6b3-171c-434d-a5f7-dfd45d659037 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:09:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:27.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:27 compute-2 nova_compute[226829]: 2026-01-31 08:09:27.955 226833 INFO nova.compute.manager [-] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Took 3.87 seconds to deallocate network for instance.
Jan 31 08:09:27 compute-2 nova_compute[226829]: 2026-01-31 08:09:27.963 226833 DEBUG nova.compute.manager [req-c023d69d-50a7-45a2-9ac0-65690cb8729e req-c271f6b3-171c-434d-a5f7-dfd45d659037 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Detach interface failed, port_id=1ab6830a-b918-43d5-aef2-ef65be77a24d, reason: Instance 63136e80-d3bf-4057-aaff-910e5c7b1606 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:09:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:28.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:28 compute-2 nova_compute[226829]: 2026-01-31 08:09:28.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:09:28 compute-2 nova_compute[226829]: 2026-01-31 08:09:28.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:09:28 compute-2 nova_compute[226829]: 2026-01-31 08:09:28.575 226833 DEBUG oslo_concurrency.processutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 05d236a3-3a52-4bdd-bb11-8956ea342070_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:09:28 compute-2 nova_compute[226829]: 2026-01-31 08:09:28.656 226833 DEBUG nova.storage.rbd_utils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] resizing rbd image 05d236a3-3a52-4bdd-bb11-8956ea342070_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:09:29 compute-2 nova_compute[226829]: 2026-01-31 08:09:29.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:09:29 compute-2 ceph-mon[77282]: pgmap v2195: 305 pgs: 305 active+clean; 214 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 196 op/s
Jan 31 08:09:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:09:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:29.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:09:29 compute-2 nova_compute[226829]: 2026-01-31 08:09:29.862 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846954.8607562, 63136e80-d3bf-4057-aaff-910e5c7b1606 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:09:29 compute-2 nova_compute[226829]: 2026-01-31 08:09:29.862 226833 INFO nova.compute.manager [-] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] VM Stopped (Lifecycle Event)
Jan 31 08:09:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:09:29 compute-2 nova_compute[226829]: 2026-01-31 08:09:29.994 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:30 compute-2 nova_compute[226829]: 2026-01-31 08:09:30.309 226833 DEBUG oslo_concurrency.lockutils [None req-8c32eb40-cb1a-4c80-a15a-252b4a61ab51 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:30 compute-2 nova_compute[226829]: 2026-01-31 08:09:30.310 226833 DEBUG oslo_concurrency.lockutils [None req-8c32eb40-cb1a-4c80-a15a-252b4a61ab51 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:30.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:30 compute-2 nova_compute[226829]: 2026-01-31 08:09:30.625 226833 DEBUG nova.objects.instance [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lazy-loading 'migration_context' on Instance uuid 05d236a3-3a52-4bdd-bb11-8956ea342070 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:09:30 compute-2 nova_compute[226829]: 2026-01-31 08:09:30.697 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:30 compute-2 nova_compute[226829]: 2026-01-31 08:09:30.747 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:30 compute-2 nova_compute[226829]: 2026-01-31 08:09:30.750 226833 DEBUG nova.compute.manager [None req-a77f1c9f-cfa0-4d7f-a555-079f3688f91e - - - - - -] [instance: 63136e80-d3bf-4057-aaff-910e5c7b1606] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:09:30 compute-2 ceph-mon[77282]: pgmap v2196: 305 pgs: 305 active+clean; 225 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.1 MiB/s wr, 115 op/s
Jan 31 08:09:31 compute-2 nova_compute[226829]: 2026-01-31 08:09:31.042 226833 DEBUG nova.virt.libvirt.driver [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:09:31 compute-2 nova_compute[226829]: 2026-01-31 08:09:31.042 226833 DEBUG nova.virt.libvirt.driver [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Ensure instance console log exists: /var/lib/nova/instances/05d236a3-3a52-4bdd-bb11-8956ea342070/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:09:31 compute-2 nova_compute[226829]: 2026-01-31 08:09:31.043 226833 DEBUG oslo_concurrency.lockutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:31 compute-2 nova_compute[226829]: 2026-01-31 08:09:31.043 226833 DEBUG oslo_concurrency.lockutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:31 compute-2 nova_compute[226829]: 2026-01-31 08:09:31.043 226833 DEBUG oslo_concurrency.lockutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:31 compute-2 podman[279688]: 2026-01-31 08:09:31.162748281 +0000 UTC m=+0.046363722 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 08:09:31 compute-2 nova_compute[226829]: 2026-01-31 08:09:31.801 226833 DEBUG oslo_concurrency.processutils [None req-8c32eb40-cb1a-4c80-a15a-252b4a61ab51 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:09:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:09:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:31.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:09:32 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:09:32 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2588865719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:32 compute-2 nova_compute[226829]: 2026-01-31 08:09:32.217 226833 DEBUG oslo_concurrency.processutils [None req-8c32eb40-cb1a-4c80-a15a-252b4a61ab51 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:09:32 compute-2 nova_compute[226829]: 2026-01-31 08:09:32.222 226833 DEBUG nova.compute.provider_tree [None req-8c32eb40-cb1a-4c80-a15a-252b4a61ab51 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:09:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:09:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:32.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:09:32 compute-2 nova_compute[226829]: 2026-01-31 08:09:32.362 226833 DEBUG nova.scheduler.client.report [None req-8c32eb40-cb1a-4c80-a15a-252b4a61ab51 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:09:32 compute-2 ceph-mon[77282]: pgmap v2197: 305 pgs: 305 active+clean; 239 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 355 KiB/s rd, 3.8 MiB/s wr, 86 op/s
Jan 31 08:09:33 compute-2 nova_compute[226829]: 2026-01-31 08:09:33.168 226833 DEBUG oslo_concurrency.lockutils [None req-8c32eb40-cb1a-4c80-a15a-252b4a61ab51 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:33 compute-2 nova_compute[226829]: 2026-01-31 08:09:33.170 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 2.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:33 compute-2 nova_compute[226829]: 2026-01-31 08:09:33.171 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:33 compute-2 nova_compute[226829]: 2026-01-31 08:09:33.171 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:09:33 compute-2 nova_compute[226829]: 2026-01-31 08:09:33.171 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:09:33 compute-2 nova_compute[226829]: 2026-01-31 08:09:33.231 226833 DEBUG nova.network.neutron [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Successfully updated port: bc9935ef-4dc8-4b06-8b86-0c380a0034f7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:09:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:09:33 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2999692136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:33 compute-2 nova_compute[226829]: 2026-01-31 08:09:33.667 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:09:33 compute-2 nova_compute[226829]: 2026-01-31 08:09:33.752 226833 INFO nova.scheduler.client.report [None req-8c32eb40-cb1a-4c80-a15a-252b4a61ab51 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Deleted allocations for instance 63136e80-d3bf-4057-aaff-910e5c7b1606
Jan 31 08:09:33 compute-2 nova_compute[226829]: 2026-01-31 08:09:33.811 226833 DEBUG nova.compute.manager [req-e80ff8ac-7c1d-4bab-beb6-ad461e5fa56f req-f3d2d9f7-0fe8-4aff-8d5c-81383d495fb2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Received event network-changed-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:09:33 compute-2 nova_compute[226829]: 2026-01-31 08:09:33.812 226833 DEBUG nova.compute.manager [req-e80ff8ac-7c1d-4bab-beb6-ad461e5fa56f req-f3d2d9f7-0fe8-4aff-8d5c-81383d495fb2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Refreshing instance network info cache due to event network-changed-bc9935ef-4dc8-4b06-8b86-0c380a0034f7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:09:33 compute-2 nova_compute[226829]: 2026-01-31 08:09:33.812 226833 DEBUG oslo_concurrency.lockutils [req-e80ff8ac-7c1d-4bab-beb6-ad461e5fa56f req-f3d2d9f7-0fe8-4aff-8d5c-81383d495fb2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-05d236a3-3a52-4bdd-bb11-8956ea342070" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:09:33 compute-2 nova_compute[226829]: 2026-01-31 08:09:33.812 226833 DEBUG oslo_concurrency.lockutils [req-e80ff8ac-7c1d-4bab-beb6-ad461e5fa56f req-f3d2d9f7-0fe8-4aff-8d5c-81383d495fb2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-05d236a3-3a52-4bdd-bb11-8956ea342070" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:09:33 compute-2 nova_compute[226829]: 2026-01-31 08:09:33.813 226833 DEBUG nova.network.neutron [req-e80ff8ac-7c1d-4bab-beb6-ad461e5fa56f req-f3d2d9f7-0fe8-4aff-8d5c-81383d495fb2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Refreshing network info cache for port bc9935ef-4dc8-4b06-8b86-0c380a0034f7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:09:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2588865719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:33.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:33 compute-2 nova_compute[226829]: 2026-01-31 08:09:33.848 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:09:33 compute-2 nova_compute[226829]: 2026-01-31 08:09:33.850 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4341MB free_disk=20.877243041992188GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:09:33 compute-2 nova_compute[226829]: 2026-01-31 08:09:33.850 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:33 compute-2 nova_compute[226829]: 2026-01-31 08:09:33.850 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:34 compute-2 nova_compute[226829]: 2026-01-31 08:09:34.237 226833 DEBUG oslo_concurrency.lockutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Acquiring lock "refresh_cache-05d236a3-3a52-4bdd-bb11-8956ea342070" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:09:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:34.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:34 compute-2 nova_compute[226829]: 2026-01-31 08:09:34.715 226833 DEBUG nova.network.neutron [req-e80ff8ac-7c1d-4bab-beb6-ad461e5fa56f req-f3d2d9f7-0fe8-4aff-8d5c-81383d495fb2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:09:34 compute-2 nova_compute[226829]: 2026-01-31 08:09:34.852 226833 DEBUG oslo_concurrency.lockutils [None req-8c32eb40-cb1a-4c80-a15a-252b4a61ab51 f694162f685a48139cf09d24531572e2 acc43e4b8be149d9adc0a3c8e9f9ddd7 - - default default] Lock "63136e80-d3bf-4057-aaff-910e5c7b1606" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 20.223s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:09:34 compute-2 nova_compute[226829]: 2026-01-31 08:09:34.879 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 05d236a3-3a52-4bdd-bb11-8956ea342070 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:09:34 compute-2 nova_compute[226829]: 2026-01-31 08:09:34.880 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:09:34 compute-2 nova_compute[226829]: 2026-01-31 08:09:34.880 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:09:34 compute-2 nova_compute[226829]: 2026-01-31 08:09:34.919 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:09:34 compute-2 nova_compute[226829]: 2026-01-31 08:09:34.996 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:35 compute-2 ceph-mon[77282]: pgmap v2198: 305 pgs: 305 active+clean; 239 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 244 KiB/s rd, 3.8 MiB/s wr, 88 op/s
Jan 31 08:09:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2999692136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:09:35 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/307380271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:35 compute-2 nova_compute[226829]: 2026-01-31 08:09:35.312 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:09:35 compute-2 nova_compute[226829]: 2026-01-31 08:09:35.318 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:09:35 compute-2 nova_compute[226829]: 2026-01-31 08:09:35.699 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:35.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:35 compute-2 nova_compute[226829]: 2026-01-31 08:09:35.964 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:09:36 compute-2 nova_compute[226829]: 2026-01-31 08:09:36.225 226833 DEBUG nova.network.neutron [req-e80ff8ac-7c1d-4bab-beb6-ad461e5fa56f req-f3d2d9f7-0fe8-4aff-8d5c-81383d495fb2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:09:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:36.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/307380271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:36 compute-2 ceph-mon[77282]: pgmap v2199: 305 pgs: 305 active+clean; 239 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 301 KiB/s rd, 3.9 MiB/s wr, 106 op/s
Jan 31 08:09:36 compute-2 nova_compute[226829]: 2026-01-31 08:09:36.510 226833 DEBUG oslo_concurrency.lockutils [req-e80ff8ac-7c1d-4bab-beb6-ad461e5fa56f req-f3d2d9f7-0fe8-4aff-8d5c-81383d495fb2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-05d236a3-3a52-4bdd-bb11-8956ea342070" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:09:36 compute-2 nova_compute[226829]: 2026-01-31 08:09:36.511 226833 DEBUG oslo_concurrency.lockutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Acquired lock "refresh_cache-05d236a3-3a52-4bdd-bb11-8956ea342070" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:09:36 compute-2 nova_compute[226829]: 2026-01-31 08:09:36.511 226833 DEBUG nova.network.neutron [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:09:36 compute-2 nova_compute[226829]: 2026-01-31 08:09:36.531 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:09:36 compute-2 nova_compute[226829]: 2026-01-31 08:09:36.531 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:36 compute-2 nova_compute[226829]: 2026-01-31 08:09:36.794 226833 DEBUG nova.network.neutron [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:09:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:09:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:37.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:09:37 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/804275966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:37 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/969850163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:37 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1831518582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:38.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:39 compute-2 ceph-mon[77282]: pgmap v2200: 305 pgs: 305 active+clean; 240 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 313 KiB/s rd, 3.7 MiB/s wr, 100 op/s
Jan 31 08:09:39 compute-2 nova_compute[226829]: 2026-01-31 08:09:39.434 226833 DEBUG nova.network.neutron [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Updating instance_info_cache with network_info: [{"id": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "address": "fa:16:3e:cf:e3:1c", "network": {"id": "d94421cd-9003-407a-863b-8275eed6d7d2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-226703466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "1bd92af54c2f44f1913c48ef5ebd6c42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc9935ef-4d", "ovs_interfaceid": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:09:39 compute-2 nova_compute[226829]: 2026-01-31 08:09:39.461 226833 DEBUG oslo_concurrency.lockutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Releasing lock "refresh_cache-05d236a3-3a52-4bdd-bb11-8956ea342070" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:09:39 compute-2 nova_compute[226829]: 2026-01-31 08:09:39.462 226833 DEBUG nova.compute.manager [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Instance network_info: |[{"id": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "address": "fa:16:3e:cf:e3:1c", "network": {"id": "d94421cd-9003-407a-863b-8275eed6d7d2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-226703466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "1bd92af54c2f44f1913c48ef5ebd6c42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc9935ef-4d", "ovs_interfaceid": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:09:39 compute-2 nova_compute[226829]: 2026-01-31 08:09:39.464 226833 DEBUG nova.virt.libvirt.driver [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Start _get_guest_xml network_info=[{"id": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "address": "fa:16:3e:cf:e3:1c", "network": {"id": "d94421cd-9003-407a-863b-8275eed6d7d2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-226703466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "1bd92af54c2f44f1913c48ef5ebd6c42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc9935ef-4d", "ovs_interfaceid": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:09:39 compute-2 nova_compute[226829]: 2026-01-31 08:09:39.469 226833 WARNING nova.virt.libvirt.driver [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:09:39 compute-2 nova_compute[226829]: 2026-01-31 08:09:39.475 226833 DEBUG nova.virt.libvirt.host [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:09:39 compute-2 nova_compute[226829]: 2026-01-31 08:09:39.475 226833 DEBUG nova.virt.libvirt.host [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:09:39 compute-2 nova_compute[226829]: 2026-01-31 08:09:39.479 226833 DEBUG nova.virt.libvirt.host [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:09:39 compute-2 nova_compute[226829]: 2026-01-31 08:09:39.480 226833 DEBUG nova.virt.libvirt.host [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:09:39 compute-2 nova_compute[226829]: 2026-01-31 08:09:39.482 226833 DEBUG nova.virt.libvirt.driver [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:09:39 compute-2 nova_compute[226829]: 2026-01-31 08:09:39.483 226833 DEBUG nova.virt.hardware [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:09:39 compute-2 nova_compute[226829]: 2026-01-31 08:09:39.483 226833 DEBUG nova.virt.hardware [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:09:39 compute-2 nova_compute[226829]: 2026-01-31 08:09:39.483 226833 DEBUG nova.virt.hardware [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:09:39 compute-2 nova_compute[226829]: 2026-01-31 08:09:39.484 226833 DEBUG nova.virt.hardware [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:09:39 compute-2 nova_compute[226829]: 2026-01-31 08:09:39.484 226833 DEBUG nova.virt.hardware [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:09:39 compute-2 nova_compute[226829]: 2026-01-31 08:09:39.484 226833 DEBUG nova.virt.hardware [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:09:39 compute-2 nova_compute[226829]: 2026-01-31 08:09:39.484 226833 DEBUG nova.virt.hardware [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:09:39 compute-2 nova_compute[226829]: 2026-01-31 08:09:39.485 226833 DEBUG nova.virt.hardware [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:09:39 compute-2 nova_compute[226829]: 2026-01-31 08:09:39.485 226833 DEBUG nova.virt.hardware [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:09:39 compute-2 nova_compute[226829]: 2026-01-31 08:09:39.485 226833 DEBUG nova.virt.hardware [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:09:39 compute-2 nova_compute[226829]: 2026-01-31 08:09:39.486 226833 DEBUG nova.virt.hardware [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:09:39 compute-2 nova_compute[226829]: 2026-01-31 08:09:39.491 226833 DEBUG oslo_concurrency.processutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:09:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:09:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:39.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:09:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:09:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:09:39 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2347554547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:09:39 compute-2 nova_compute[226829]: 2026-01-31 08:09:39.976 226833 DEBUG oslo_concurrency.processutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.009 226833 DEBUG nova.storage.rbd_utils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] rbd image 05d236a3-3a52-4bdd-bb11-8956ea342070_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.014 226833 DEBUG oslo_concurrency.processutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:09:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/821602593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4259734391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:40 compute-2 ceph-mon[77282]: pgmap v2201: 305 pgs: 305 active+clean; 242 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 308 KiB/s rd, 2.0 MiB/s wr, 76 op/s
Jan 31 08:09:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2347554547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.034 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.059 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:40.060 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:09:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:40.062 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:09:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:09:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:40.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:09:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:09:40 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/966626106' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.440 226833 DEBUG oslo_concurrency.processutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.442 226833 DEBUG nova.virt.libvirt.vif [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:09:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-366487887',display_name='tempest-ServerRescueTestJSONUnderV235-server-366487887',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-366487887',id=117,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1bd92af54c2f44f1913c48ef5ebd6c42',ramdisk_id='',reservation_id='r-6xr0lcg4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1402654657',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1402654657-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:09:23Z,user_data=None,user_id='162a66f52a10400aad586654cbabfbfd',uuid=05d236a3-3a52-4bdd-bb11-8956ea342070,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "address": "fa:16:3e:cf:e3:1c", "network": {"id": "d94421cd-9003-407a-863b-8275eed6d7d2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-226703466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "1bd92af54c2f44f1913c48ef5ebd6c42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc9935ef-4d", "ovs_interfaceid": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.442 226833 DEBUG nova.network.os_vif_util [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Converting VIF {"id": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "address": "fa:16:3e:cf:e3:1c", "network": {"id": "d94421cd-9003-407a-863b-8275eed6d7d2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-226703466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "1bd92af54c2f44f1913c48ef5ebd6c42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc9935ef-4d", "ovs_interfaceid": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.443 226833 DEBUG nova.network.os_vif_util [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:e3:1c,bridge_name='br-int',has_traffic_filtering=True,id=bc9935ef-4dc8-4b06-8b86-0c380a0034f7,network=Network(d94421cd-9003-407a-863b-8275eed6d7d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc9935ef-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.444 226833 DEBUG nova.objects.instance [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lazy-loading 'pci_devices' on Instance uuid 05d236a3-3a52-4bdd-bb11-8956ea342070 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.700 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.889 226833 DEBUG nova.virt.libvirt.driver [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:09:40 compute-2 nova_compute[226829]:   <uuid>05d236a3-3a52-4bdd-bb11-8956ea342070</uuid>
Jan 31 08:09:40 compute-2 nova_compute[226829]:   <name>instance-00000075</name>
Jan 31 08:09:40 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:09:40 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:09:40 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-366487887</nova:name>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:09:39</nova:creationTime>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:09:40 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:09:40 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:09:40 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:09:40 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:09:40 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:09:40 compute-2 nova_compute[226829]:         <nova:user uuid="162a66f52a10400aad586654cbabfbfd">tempest-ServerRescueTestJSONUnderV235-1402654657-project-member</nova:user>
Jan 31 08:09:40 compute-2 nova_compute[226829]:         <nova:project uuid="1bd92af54c2f44f1913c48ef5ebd6c42">tempest-ServerRescueTestJSONUnderV235-1402654657</nova:project>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:09:40 compute-2 nova_compute[226829]:         <nova:port uuid="bc9935ef-4dc8-4b06-8b86-0c380a0034f7">
Jan 31 08:09:40 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:09:40 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:09:40 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <system>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <entry name="serial">05d236a3-3a52-4bdd-bb11-8956ea342070</entry>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <entry name="uuid">05d236a3-3a52-4bdd-bb11-8956ea342070</entry>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     </system>
Jan 31 08:09:40 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:09:40 compute-2 nova_compute[226829]:   <os>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:   </os>
Jan 31 08:09:40 compute-2 nova_compute[226829]:   <features>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:   </features>
Jan 31 08:09:40 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:09:40 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:09:40 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/05d236a3-3a52-4bdd-bb11-8956ea342070_disk">
Jan 31 08:09:40 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       </source>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:09:40 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/05d236a3-3a52-4bdd-bb11-8956ea342070_disk.config">
Jan 31 08:09:40 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       </source>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:09:40 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:cf:e3:1c"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <target dev="tapbc9935ef-4d"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/05d236a3-3a52-4bdd-bb11-8956ea342070/console.log" append="off"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <video>
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     </video>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:09:40 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:09:40 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:09:40 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:09:40 compute-2 nova_compute[226829]: </domain>
Jan 31 08:09:40 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.890 226833 DEBUG nova.compute.manager [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Preparing to wait for external event network-vif-plugged-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.891 226833 DEBUG oslo_concurrency.lockutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Acquiring lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.891 226833 DEBUG oslo_concurrency.lockutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.891 226833 DEBUG oslo_concurrency.lockutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.892 226833 DEBUG nova.virt.libvirt.vif [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:09:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-366487887',display_name='tempest-ServerRescueTestJSONUnderV235-server-366487887',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-366487887',id=117,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1bd92af54c2f44f1913c48ef5ebd6c42',ramdisk_id='',reservation_id='r-6xr0lcg4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1402654657',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1402654657-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:09:23Z,user_data=None,user_id='162a66f52a10400aad586654cbabfbfd',uuid=05d236a3-3a52-4bdd-bb11-8956ea342070,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "address": "fa:16:3e:cf:e3:1c", "network": {"id": "d94421cd-9003-407a-863b-8275eed6d7d2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-226703466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "1bd92af54c2f44f1913c48ef5ebd6c42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc9935ef-4d", "ovs_interfaceid": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.892 226833 DEBUG nova.network.os_vif_util [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Converting VIF {"id": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "address": "fa:16:3e:cf:e3:1c", "network": {"id": "d94421cd-9003-407a-863b-8275eed6d7d2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-226703466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "1bd92af54c2f44f1913c48ef5ebd6c42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc9935ef-4d", "ovs_interfaceid": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.893 226833 DEBUG nova.network.os_vif_util [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:e3:1c,bridge_name='br-int',has_traffic_filtering=True,id=bc9935ef-4dc8-4b06-8b86-0c380a0034f7,network=Network(d94421cd-9003-407a-863b-8275eed6d7d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc9935ef-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.893 226833 DEBUG os_vif [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:e3:1c,bridge_name='br-int',has_traffic_filtering=True,id=bc9935ef-4dc8-4b06-8b86-0c380a0034f7,network=Network(d94421cd-9003-407a-863b-8275eed6d7d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc9935ef-4d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.894 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.894 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.894 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.897 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.897 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc9935ef-4d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.897 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbc9935ef-4d, col_values=(('external_ids', {'iface-id': 'bc9935ef-4dc8-4b06-8b86-0c380a0034f7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:e3:1c', 'vm-uuid': '05d236a3-3a52-4bdd-bb11-8956ea342070'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
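The `tapbc9935ef-4d` device name in the AddPortCommand/DbSetCommand transactions above is not arbitrary: Nova derives it from the Neutron port UUID. A minimal sketch of that convention (assuming the "tap" prefix plus a UUID truncated to fit the kernel's 15-byte IFNAMSIZ interface-name limit, which leaves 14 usable characters):

```python
# Sketch, not Nova's actual code: derive the tap device name that os-vif
# plugs into br-int from the Neutron port UUID seen in the log above.
NIC_NAME_LEN = 14  # kernel IFNAMSIZ is 15 bytes including the NUL terminator


def tap_name(port_id: str) -> str:
    """Prefix 'tap' and truncate so the full name fits in NIC_NAME_LEN."""
    return ("tap" + port_id)[:NIC_NAME_LEN]


print(tap_name("bc9935ef-4dc8-4b06-8b86-0c380a0034f7"))  # -> tapbc9935ef-4d
```

The truncated name matches the `devname` field in the VIF dict and the Interface record written to OVS, while the full UUID is preserved in the `iface-id` external_id so OVN can correlate the OVS port with the logical port.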
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.899 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:40 compute-2 NetworkManager[48999]: <info>  [1769846980.9019] manager: (tapbc9935ef-4d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/241)
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.902 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.905 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:40 compute-2 nova_compute[226829]: 2026-01-31 08:09:40.906 226833 INFO os_vif [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:e3:1c,bridge_name='br-int',has_traffic_filtering=True,id=bc9935ef-4dc8-4b06-8b86-0c380a0034f7,network=Network(d94421cd-9003-407a-863b-8275eed6d7d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc9935ef-4d')
Jan 31 08:09:41 compute-2 nova_compute[226829]: 2026-01-31 08:09:41.142 226833 DEBUG nova.virt.libvirt.driver [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:09:41 compute-2 nova_compute[226829]: 2026-01-31 08:09:41.142 226833 DEBUG nova.virt.libvirt.driver [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:09:41 compute-2 nova_compute[226829]: 2026-01-31 08:09:41.142 226833 DEBUG nova.virt.libvirt.driver [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] No VIF found with MAC fa:16:3e:cf:e3:1c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:09:41 compute-2 nova_compute[226829]: 2026-01-31 08:09:41.143 226833 INFO nova.virt.libvirt.driver [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Using config drive
Jan 31 08:09:41 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2439194594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:41 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/966626106' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:09:41 compute-2 nova_compute[226829]: 2026-01-31 08:09:41.239 226833 DEBUG nova.storage.rbd_utils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] rbd image 05d236a3-3a52-4bdd-bb11-8956ea342070_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:09:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:09:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:41.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:09:42 compute-2 nova_compute[226829]: 2026-01-31 08:09:42.275 226833 INFO nova.virt.libvirt.driver [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Creating config drive at /var/lib/nova/instances/05d236a3-3a52-4bdd-bb11-8956ea342070/disk.config
Jan 31 08:09:42 compute-2 nova_compute[226829]: 2026-01-31 08:09:42.283 226833 DEBUG oslo_concurrency.processutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/05d236a3-3a52-4bdd-bb11-8956ea342070/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp1y3ft205 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:09:42 compute-2 ceph-mon[77282]: pgmap v2202: 305 pgs: 305 active+clean; 246 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 254 KiB/s rd, 870 KiB/s wr, 53 op/s
Jan 31 08:09:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:42.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:42 compute-2 nova_compute[226829]: 2026-01-31 08:09:42.412 226833 DEBUG oslo_concurrency.processutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/05d236a3-3a52-4bdd-bb11-8956ea342070/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp1y3ft205" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
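The config-drive build above is a plain `mkisofs` invocation: Nova stages the metadata files in a temporary directory and packs them into an ISO 9660 image labelled `config-2` (the volume label cloud-init probes for). A sketch of how that argument vector could be assembled (illustrative helper, not Nova's actual code; the publisher string and paths are taken from the log):

```python
# Hypothetical helper mirroring the mkisofs command line in the log:
# build a config-drive ISO from a staging directory.
def mkisofs_cmd(output_path: str, staging_dir: str, publisher: str) -> list[str]:
    return [
        "/usr/bin/mkisofs",
        "-o", output_path,            # resulting disk.config image
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", publisher,
        "-quiet",
        "-J", "-r",                   # Joliet + Rock Ridge extensions
        "-V", "config-2",             # volume label cloud-init searches for
        staging_dir,                  # temp dir holding the metadata files
    ]
```

Running this via `subprocess.run(cmd, check=True)` would reproduce the 0.130s command logged at 08:09:42.412.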
Jan 31 08:09:42 compute-2 nova_compute[226829]: 2026-01-31 08:09:42.440 226833 DEBUG nova.storage.rbd_utils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] rbd image 05d236a3-3a52-4bdd-bb11-8956ea342070_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:09:42 compute-2 nova_compute[226829]: 2026-01-31 08:09:42.444 226833 DEBUG oslo_concurrency.processutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/05d236a3-3a52-4bdd-bb11-8956ea342070/disk.config 05d236a3-3a52-4bdd-bb11-8956ea342070_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:09:42 compute-2 nova_compute[226829]: 2026-01-31 08:09:42.797 226833 DEBUG oslo_concurrency.processutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/05d236a3-3a52-4bdd-bb11-8956ea342070/disk.config 05d236a3-3a52-4bdd-bb11-8956ea342070_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:09:42 compute-2 nova_compute[226829]: 2026-01-31 08:09:42.798 226833 INFO nova.virt.libvirt.driver [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Deleting local config drive /var/lib/nova/instances/05d236a3-3a52-4bdd-bb11-8956ea342070/disk.config because it was imported into RBD.
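Because this deployment stores ephemeral disks in Ceph, the freshly built ISO is then shipped into the `vms` pool with `rbd import` and the local copy deleted, as the two entries above show. A sketch of the command construction (hypothetical helper; pool, credential and conf path defaults are copied from the log):

```python
# Hypothetical helper mirroring the "rbd import" invocation in the log:
# upload a local config-drive ISO into a Ceph pool as a format-2 RBD image.
def rbd_import_cmd(pool: str, local_path: str, image_name: str,
                   ceph_user: str = "openstack",
                   conf: str = "/etc/ceph/ceph.conf") -> list[str]:
    return [
        "rbd", "import",
        "--pool", pool,
        local_path,                 # source file on the compute host
        image_name,                 # e.g. "<instance-uuid>_disk.config"
        "--image-format=2",         # format 2 supports cloning/snapshots
        "--id", ceph_user,
        "--conf", conf,
    ]
```

The earlier `rbd image ... does not exist` DEBUG lines are the expected pre-checks before the import, not errors.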
Jan 31 08:09:42 compute-2 kernel: tapbc9935ef-4d: entered promiscuous mode
Jan 31 08:09:42 compute-2 NetworkManager[48999]: <info>  [1769846982.8477] manager: (tapbc9935ef-4d): new Tun device (/org/freedesktop/NetworkManager/Devices/242)
Jan 31 08:09:42 compute-2 ovn_controller[133834]: 2026-01-31T08:09:42Z|00489|binding|INFO|Claiming lport bc9935ef-4dc8-4b06-8b86-0c380a0034f7 for this chassis.
Jan 31 08:09:42 compute-2 ovn_controller[133834]: 2026-01-31T08:09:42Z|00490|binding|INFO|bc9935ef-4dc8-4b06-8b86-0c380a0034f7: Claiming fa:16:3e:cf:e3:1c 10.100.0.13
Jan 31 08:09:42 compute-2 nova_compute[226829]: 2026-01-31 08:09:42.851 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:42 compute-2 systemd-udevd[279916]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:09:42 compute-2 systemd-machined[195142]: New machine qemu-52-instance-00000075.
Jan 31 08:09:42 compute-2 NetworkManager[48999]: <info>  [1769846982.8879] device (tapbc9935ef-4d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:09:42 compute-2 NetworkManager[48999]: <info>  [1769846982.8899] device (tapbc9935ef-4d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:09:42 compute-2 systemd[1]: Started Virtual Machine qemu-52-instance-00000075.
Jan 31 08:09:42 compute-2 nova_compute[226829]: 2026-01-31 08:09:42.892 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:42 compute-2 ovn_controller[133834]: 2026-01-31T08:09:42Z|00491|binding|INFO|Setting lport bc9935ef-4dc8-4b06-8b86-0c380a0034f7 ovn-installed in OVS
Jan 31 08:09:42 compute-2 nova_compute[226829]: 2026-01-31 08:09:42.897 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:42 compute-2 ovn_controller[133834]: 2026-01-31T08:09:42Z|00492|binding|INFO|Setting lport bc9935ef-4dc8-4b06-8b86-0c380a0034f7 up in Southbound
Jan 31 08:09:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:42.951 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:e3:1c 10.100.0.13'], port_security=['fa:16:3e:cf:e3:1c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '05d236a3-3a52-4bdd-bb11-8956ea342070', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d94421cd-9003-407a-863b-8275eed6d7d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1bd92af54c2f44f1913c48ef5ebd6c42', 'neutron:revision_number': '2', 'neutron:security_group_ids': '33613802-2801-4317-b08b-ab67e917317b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fef7fb59-4f6c-4909-8c67-2195494eb965, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=bc9935ef-4dc8-4b06-8b86-0c380a0034f7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:09:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:42.952 143841 INFO neutron.agent.ovn.metadata.agent [-] Port bc9935ef-4dc8-4b06-8b86-0c380a0034f7 in datapath d94421cd-9003-407a-863b-8275eed6d7d2 bound to our chassis
Jan 31 08:09:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:42.954 143841 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d94421cd-9003-407a-863b-8275eed6d7d2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 31 08:09:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:42.955 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[939adbf4-5761-4639-9a28-a15f11d81a04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:43 compute-2 sudo[279925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:09:43 compute-2 sudo[279925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:43 compute-2 sudo[279925]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:43 compute-2 sudo[279950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:09:43 compute-2 sudo[279950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:43 compute-2 sudo[279950]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:43 compute-2 nova_compute[226829]: 2026-01-31 08:09:43.617 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846983.6171095, 05d236a3-3a52-4bdd-bb11-8956ea342070 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:09:43 compute-2 nova_compute[226829]: 2026-01-31 08:09:43.617 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] VM Started (Lifecycle Event)
Jan 31 08:09:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:43.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:43 compute-2 nova_compute[226829]: 2026-01-31 08:09:43.924 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:09:43 compute-2 nova_compute[226829]: 2026-01-31 08:09:43.928 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846983.6180675, 05d236a3-3a52-4bdd-bb11-8956ea342070 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:09:43 compute-2 nova_compute[226829]: 2026-01-31 08:09:43.929 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] VM Paused (Lifecycle Event)
Jan 31 08:09:44 compute-2 nova_compute[226829]: 2026-01-31 08:09:44.047 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:44 compute-2 nova_compute[226829]: 2026-01-31 08:09:44.268 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:09:44 compute-2 nova_compute[226829]: 2026-01-31 08:09:44.272 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:09:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:44.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:44 compute-2 nova_compute[226829]: 2026-01-31 08:09:44.555 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:09:44 compute-2 ceph-mon[77282]: pgmap v2203: 305 pgs: 305 active+clean; 246 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 139 KiB/s rd, 89 KiB/s wr, 41 op/s
Jan 31 08:09:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:09:45 compute-2 nova_compute[226829]: 2026-01-31 08:09:45.533 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:09:45 compute-2 nova_compute[226829]: 2026-01-31 08:09:45.534 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:09:45 compute-2 nova_compute[226829]: 2026-01-31 08:09:45.702 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1188548440' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:09:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1188548440' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:09:45 compute-2 nova_compute[226829]: 2026-01-31 08:09:45.784 226833 DEBUG nova.compute.manager [req-20b8aabf-c6a2-4741-a5aa-962897f27b98 req-c01b81e6-1053-44e1-b567-8fdcfbc65805 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Received event network-vif-plugged-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:09:45 compute-2 nova_compute[226829]: 2026-01-31 08:09:45.784 226833 DEBUG oslo_concurrency.lockutils [req-20b8aabf-c6a2-4741-a5aa-962897f27b98 req-c01b81e6-1053-44e1-b567-8fdcfbc65805 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:45 compute-2 nova_compute[226829]: 2026-01-31 08:09:45.785 226833 DEBUG oslo_concurrency.lockutils [req-20b8aabf-c6a2-4741-a5aa-962897f27b98 req-c01b81e6-1053-44e1-b567-8fdcfbc65805 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:45 compute-2 nova_compute[226829]: 2026-01-31 08:09:45.785 226833 DEBUG oslo_concurrency.lockutils [req-20b8aabf-c6a2-4741-a5aa-962897f27b98 req-c01b81e6-1053-44e1-b567-8fdcfbc65805 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:45 compute-2 nova_compute[226829]: 2026-01-31 08:09:45.785 226833 DEBUG nova.compute.manager [req-20b8aabf-c6a2-4741-a5aa-962897f27b98 req-c01b81e6-1053-44e1-b567-8fdcfbc65805 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Processing event network-vif-plugged-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:09:45 compute-2 nova_compute[226829]: 2026-01-31 08:09:45.786 226833 DEBUG nova.compute.manager [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:09:45 compute-2 nova_compute[226829]: 2026-01-31 08:09:45.789 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769846985.7889168, 05d236a3-3a52-4bdd-bb11-8956ea342070 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:09:45 compute-2 nova_compute[226829]: 2026-01-31 08:09:45.790 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] VM Resumed (Lifecycle Event)
Jan 31 08:09:45 compute-2 nova_compute[226829]: 2026-01-31 08:09:45.792 226833 DEBUG nova.virt.libvirt.driver [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:09:45 compute-2 nova_compute[226829]: 2026-01-31 08:09:45.795 226833 INFO nova.virt.libvirt.driver [-] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Instance spawned successfully.
Jan 31 08:09:45 compute-2 nova_compute[226829]: 2026-01-31 08:09:45.795 226833 DEBUG nova.virt.libvirt.driver [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:09:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:45.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:45 compute-2 nova_compute[226829]: 2026-01-31 08:09:45.899 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:46 compute-2 nova_compute[226829]: 2026-01-31 08:09:46.002 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:09:46 compute-2 nova_compute[226829]: 2026-01-31 08:09:46.006 226833 DEBUG nova.virt.libvirt.driver [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:09:46 compute-2 nova_compute[226829]: 2026-01-31 08:09:46.006 226833 DEBUG nova.virt.libvirt.driver [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:09:46 compute-2 nova_compute[226829]: 2026-01-31 08:09:46.007 226833 DEBUG nova.virt.libvirt.driver [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:09:46 compute-2 nova_compute[226829]: 2026-01-31 08:09:46.007 226833 DEBUG nova.virt.libvirt.driver [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:09:46 compute-2 nova_compute[226829]: 2026-01-31 08:09:46.008 226833 DEBUG nova.virt.libvirt.driver [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:09:46 compute-2 nova_compute[226829]: 2026-01-31 08:09:46.008 226833 DEBUG nova.virt.libvirt.driver [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:09:46 compute-2 nova_compute[226829]: 2026-01-31 08:09:46.013 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:09:46 compute-2 nova_compute[226829]: 2026-01-31 08:09:46.050 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:09:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:46.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:46 compute-2 nova_compute[226829]: 2026-01-31 08:09:46.393 226833 INFO nova.compute.manager [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Took 22.59 seconds to spawn the instance on the hypervisor.
Jan 31 08:09:46 compute-2 nova_compute[226829]: 2026-01-31 08:09:46.393 226833 DEBUG nova.compute.manager [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:09:46 compute-2 nova_compute[226829]: 2026-01-31 08:09:46.703 226833 INFO nova.compute.manager [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Took 24.84 seconds to build instance.
Jan 31 08:09:46 compute-2 nova_compute[226829]: 2026-01-31 08:09:46.740 226833 DEBUG oslo_concurrency.lockutils [None req-79de9e7a-6d2e-4f58-bb81-e9d60ae43eb3 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lock "05d236a3-3a52-4bdd-bb11-8956ea342070" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 26.018s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:46 compute-2 ceph-mon[77282]: pgmap v2204: 305 pgs: 305 active+clean; 246 MiB data, 957 MiB used, 20 GiB / 21 GiB avail; 110 KiB/s rd, 110 KiB/s wr, 31 op/s
Jan 31 08:09:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:09:47.064 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:09:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:47.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:48 compute-2 nova_compute[226829]: 2026-01-31 08:09:48.068 226833 DEBUG nova.compute.manager [req-ff287d33-ab17-419b-b3cd-832af46eef7f req-f9790240-5485-4875-96d4-71aaa89543ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Received event network-vif-plugged-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:09:48 compute-2 nova_compute[226829]: 2026-01-31 08:09:48.069 226833 DEBUG oslo_concurrency.lockutils [req-ff287d33-ab17-419b-b3cd-832af46eef7f req-f9790240-5485-4875-96d4-71aaa89543ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:48 compute-2 nova_compute[226829]: 2026-01-31 08:09:48.069 226833 DEBUG oslo_concurrency.lockutils [req-ff287d33-ab17-419b-b3cd-832af46eef7f req-f9790240-5485-4875-96d4-71aaa89543ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:48 compute-2 nova_compute[226829]: 2026-01-31 08:09:48.070 226833 DEBUG oslo_concurrency.lockutils [req-ff287d33-ab17-419b-b3cd-832af46eef7f req-f9790240-5485-4875-96d4-71aaa89543ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:48 compute-2 nova_compute[226829]: 2026-01-31 08:09:48.070 226833 DEBUG nova.compute.manager [req-ff287d33-ab17-419b-b3cd-832af46eef7f req-f9790240-5485-4875-96d4-71aaa89543ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] No waiting events found dispatching network-vif-plugged-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:09:48 compute-2 nova_compute[226829]: 2026-01-31 08:09:48.070 226833 WARNING nova.compute.manager [req-ff287d33-ab17-419b-b3cd-832af46eef7f req-f9790240-5485-4875-96d4-71aaa89543ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Received unexpected event network-vif-plugged-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 for instance with vm_state active and task_state None.
Jan 31 08:09:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:09:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:48.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:09:48 compute-2 ceph-mon[77282]: pgmap v2205: 305 pgs: 305 active+clean; 246 MiB data, 940 MiB used, 20 GiB / 21 GiB avail; 730 KiB/s rd, 69 KiB/s wr, 44 op/s
Jan 31 08:09:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e280 e280: 3 total, 3 up, 3 in
Jan 31 08:09:49 compute-2 nova_compute[226829]: 2026-01-31 08:09:49.854 226833 INFO nova.compute.manager [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Rescuing
Jan 31 08:09:49 compute-2 nova_compute[226829]: 2026-01-31 08:09:49.855 226833 DEBUG oslo_concurrency.lockutils [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Acquiring lock "refresh_cache-05d236a3-3a52-4bdd-bb11-8956ea342070" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:09:49 compute-2 nova_compute[226829]: 2026-01-31 08:09:49.855 226833 DEBUG oslo_concurrency.lockutils [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Acquired lock "refresh_cache-05d236a3-3a52-4bdd-bb11-8956ea342070" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:09:49 compute-2 nova_compute[226829]: 2026-01-31 08:09:49.856 226833 DEBUG nova.network.neutron [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:09:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:49.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:09:50 compute-2 ceph-mon[77282]: osdmap e280: 3 total, 3 up, 3 in
Jan 31 08:09:50 compute-2 ceph-mon[77282]: pgmap v2207: 305 pgs: 305 active+clean; 246 MiB data, 940 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 33 KiB/s wr, 82 op/s
Jan 31 08:09:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:50.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:50 compute-2 nova_compute[226829]: 2026-01-31 08:09:50.703 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:50 compute-2 nova_compute[226829]: 2026-01-31 08:09:50.901 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:51 compute-2 podman[280021]: 2026-01-31 08:09:51.235267138 +0000 UTC m=+0.119072571 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 31 08:09:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/611290882' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:09:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2197638865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:09:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:09:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:51.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:09:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:52.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:52 compute-2 ceph-mon[77282]: pgmap v2208: 305 pgs: 305 active+clean; 246 MiB data, 940 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 27 KiB/s wr, 95 op/s
Jan 31 08:09:52 compute-2 nova_compute[226829]: 2026-01-31 08:09:52.759 226833 DEBUG nova.network.neutron [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Updating instance_info_cache with network_info: [{"id": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "address": "fa:16:3e:cf:e3:1c", "network": {"id": "d94421cd-9003-407a-863b-8275eed6d7d2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-226703466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "1bd92af54c2f44f1913c48ef5ebd6c42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc9935ef-4d", "ovs_interfaceid": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:09:52 compute-2 nova_compute[226829]: 2026-01-31 08:09:52.892 226833 DEBUG oslo_concurrency.lockutils [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Releasing lock "refresh_cache-05d236a3-3a52-4bdd-bb11-8956ea342070" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:09:53 compute-2 nova_compute[226829]: 2026-01-31 08:09:53.415 226833 DEBUG nova.virt.libvirt.driver [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 08:09:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:53.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:09:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:54.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:09:54 compute-2 ceph-mon[77282]: pgmap v2209: 305 pgs: 305 active+clean; 246 MiB data, 940 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 28 KiB/s wr, 101 op/s
Jan 31 08:09:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:09:55 compute-2 nova_compute[226829]: 2026-01-31 08:09:55.704 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:09:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:55.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:09:55 compute-2 nova_compute[226829]: 2026-01-31 08:09:55.903 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:56.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:56 compute-2 ceph-mon[77282]: pgmap v2210: 305 pgs: 305 active+clean; 246 MiB data, 940 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.7 KiB/s wr, 108 op/s
Jan 31 08:09:57 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Jan 31 08:09:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:57.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:58.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:58 compute-2 ceph-mon[77282]: pgmap v2211: 305 pgs: 305 active+clean; 247 MiB data, 940 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 19 KiB/s wr, 76 op/s
Jan 31 08:09:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:09:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:59.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:10:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:00.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:00 compute-2 nova_compute[226829]: 2026-01-31 08:10:00.705 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:00 compute-2 ceph-mon[77282]: pgmap v2212: 305 pgs: 305 active+clean; 258 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 648 KiB/s rd, 1.5 MiB/s wr, 75 op/s
Jan 31 08:10:00 compute-2 ceph-mon[77282]: overall HEALTH_OK
Jan 31 08:10:00 compute-2 nova_compute[226829]: 2026-01-31 08:10:00.905 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:01.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:02 compute-2 podman[280052]: 2026-01-31 08:10:02.18785663 +0000 UTC m=+0.077230552 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:10:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:10:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:02.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:10:02 compute-2 ceph-mon[77282]: pgmap v2213: 305 pgs: 305 active+clean; 270 MiB data, 975 MiB used, 20 GiB / 21 GiB avail; 601 KiB/s rd, 1.5 MiB/s wr, 73 op/s
Jan 31 08:10:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e281 e281: 3 total, 3 up, 3 in
Jan 31 08:10:03 compute-2 nova_compute[226829]: 2026-01-31 08:10:03.464 226833 DEBUG nova.virt.libvirt.driver [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 31 08:10:03 compute-2 sudo[280072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:10:03 compute-2 sudo[280072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:03 compute-2 sudo[280072]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:03 compute-2 sudo[280097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:10:03 compute-2 sudo[280097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:03 compute-2 sudo[280097]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:03 compute-2 ceph-mon[77282]: osdmap e281: 3 total, 3 up, 3 in
Jan 31 08:10:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:03.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:10:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:04.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:10:04 compute-2 ceph-mon[77282]: pgmap v2215: 305 pgs: 305 active+clean; 279 MiB data, 983 MiB used, 20 GiB / 21 GiB avail; 370 KiB/s rd, 2.6 MiB/s wr, 80 op/s
Jan 31 08:10:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1587739989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:10:05 compute-2 nova_compute[226829]: 2026-01-31 08:10:05.707 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:05 compute-2 kernel: tapbc9935ef-4d (unregistering): left promiscuous mode
Jan 31 08:10:05 compute-2 NetworkManager[48999]: <info>  [1769847005.7672] device (tapbc9935ef-4d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:10:05 compute-2 nova_compute[226829]: 2026-01-31 08:10:05.768 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:05 compute-2 ovn_controller[133834]: 2026-01-31T08:10:05Z|00493|binding|INFO|Releasing lport bc9935ef-4dc8-4b06-8b86-0c380a0034f7 from this chassis (sb_readonly=0)
Jan 31 08:10:05 compute-2 ovn_controller[133834]: 2026-01-31T08:10:05Z|00494|binding|INFO|Setting lport bc9935ef-4dc8-4b06-8b86-0c380a0034f7 down in Southbound
Jan 31 08:10:05 compute-2 ovn_controller[133834]: 2026-01-31T08:10:05Z|00495|binding|INFO|Removing iface tapbc9935ef-4d ovn-installed in OVS
Jan 31 08:10:05 compute-2 nova_compute[226829]: 2026-01-31 08:10:05.770 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:05 compute-2 nova_compute[226829]: 2026-01-31 08:10:05.773 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:05 compute-2 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000075.scope: Deactivated successfully.
Jan 31 08:10:05 compute-2 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000075.scope: Consumed 14.045s CPU time.
Jan 31 08:10:05 compute-2 systemd-machined[195142]: Machine qemu-52-instance-00000075 terminated.
Jan 31 08:10:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:05.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:05 compute-2 nova_compute[226829]: 2026-01-31 08:10:05.907 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:05 compute-2 nova_compute[226829]: 2026-01-31 08:10:05.991 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:05 compute-2 nova_compute[226829]: 2026-01-31 08:10:05.997 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:10:06.006 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:e3:1c 10.100.0.13'], port_security=['fa:16:3e:cf:e3:1c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '05d236a3-3a52-4bdd-bb11-8956ea342070', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d94421cd-9003-407a-863b-8275eed6d7d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1bd92af54c2f44f1913c48ef5ebd6c42', 'neutron:revision_number': '4', 'neutron:security_group_ids': '33613802-2801-4317-b08b-ab67e917317b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fef7fb59-4f6c-4909-8c67-2195494eb965, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=bc9935ef-4dc8-4b06-8b86-0c380a0034f7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:10:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:10:06.011 143841 INFO neutron.agent.ovn.metadata.agent [-] Port bc9935ef-4dc8-4b06-8b86-0c380a0034f7 in datapath d94421cd-9003-407a-863b-8275eed6d7d2 unbound from our chassis
Jan 31 08:10:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:10:06.014 143841 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d94421cd-9003-407a-863b-8275eed6d7d2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 31 08:10:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:10:06.017 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[15deaa5d-0f27-452d-8cf4-8eaf3c96cc40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:06.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:06 compute-2 nova_compute[226829]: 2026-01-31 08:10:06.481 226833 INFO nova.virt.libvirt.driver [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Instance shutdown successfully after 13 seconds.
Jan 31 08:10:06 compute-2 nova_compute[226829]: 2026-01-31 08:10:06.488 226833 INFO nova.virt.libvirt.driver [-] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Instance destroyed successfully.
Jan 31 08:10:06 compute-2 nova_compute[226829]: 2026-01-31 08:10:06.488 226833 DEBUG nova.objects.instance [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lazy-loading 'numa_topology' on Instance uuid 05d236a3-3a52-4bdd-bb11-8956ea342070 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:10:06 compute-2 nova_compute[226829]: 2026-01-31 08:10:06.559 226833 INFO nova.virt.libvirt.driver [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Attempting rescue
Jan 31 08:10:06 compute-2 nova_compute[226829]: 2026-01-31 08:10:06.560 226833 DEBUG nova.virt.libvirt.driver [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Jan 31 08:10:06 compute-2 nova_compute[226829]: 2026-01-31 08:10:06.563 226833 DEBUG nova.virt.libvirt.driver [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 31 08:10:06 compute-2 nova_compute[226829]: 2026-01-31 08:10:06.564 226833 INFO nova.virt.libvirt.driver [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Creating image(s)
Jan 31 08:10:06 compute-2 nova_compute[226829]: 2026-01-31 08:10:06.592 226833 DEBUG nova.storage.rbd_utils [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] rbd image 05d236a3-3a52-4bdd-bb11-8956ea342070_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:10:06 compute-2 nova_compute[226829]: 2026-01-31 08:10:06.595 226833 DEBUG nova.objects.instance [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 05d236a3-3a52-4bdd-bb11-8956ea342070 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:10:06 compute-2 nova_compute[226829]: 2026-01-31 08:10:06.693 226833 DEBUG nova.storage.rbd_utils [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] rbd image 05d236a3-3a52-4bdd-bb11-8956ea342070_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:10:06 compute-2 nova_compute[226829]: 2026-01-31 08:10:06.727 226833 DEBUG nova.storage.rbd_utils [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] rbd image 05d236a3-3a52-4bdd-bb11-8956ea342070_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:10:06 compute-2 nova_compute[226829]: 2026-01-31 08:10:06.730 226833 DEBUG oslo_concurrency.processutils [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:06 compute-2 nova_compute[226829]: 2026-01-31 08:10:06.784 226833 DEBUG oslo_concurrency.processutils [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:06 compute-2 nova_compute[226829]: 2026-01-31 08:10:06.785 226833 DEBUG oslo_concurrency.lockutils [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:06 compute-2 nova_compute[226829]: 2026-01-31 08:10:06.786 226833 DEBUG oslo_concurrency.lockutils [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:06 compute-2 nova_compute[226829]: 2026-01-31 08:10:06.787 226833 DEBUG oslo_concurrency.lockutils [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:06 compute-2 nova_compute[226829]: 2026-01-31 08:10:06.821 226833 DEBUG nova.storage.rbd_utils [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] rbd image 05d236a3-3a52-4bdd-bb11-8956ea342070_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:10:06 compute-2 nova_compute[226829]: 2026-01-31 08:10:06.826 226833 DEBUG oslo_concurrency.processutils [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 05d236a3-3a52-4bdd-bb11-8956ea342070_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:06 compute-2 ceph-mon[77282]: pgmap v2216: 305 pgs: 305 active+clean; 279 MiB data, 983 MiB used, 20 GiB / 21 GiB avail; 399 KiB/s rd, 2.6 MiB/s wr, 85 op/s
Jan 31 08:10:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:10:06.881 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:10:06.882 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:10:06.882 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.142 226833 DEBUG oslo_concurrency.processutils [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 05d236a3-3a52-4bdd-bb11-8956ea342070_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.143 226833 DEBUG nova.objects.instance [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lazy-loading 'migration_context' on Instance uuid 05d236a3-3a52-4bdd-bb11-8956ea342070 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.187 226833 DEBUG nova.virt.libvirt.driver [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.188 226833 DEBUG nova.virt.libvirt.driver [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Start _get_guest_xml network_info=[{"id": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "address": "fa:16:3e:cf:e3:1c", "network": {"id": "d94421cd-9003-407a-863b-8275eed6d7d2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-226703466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-226703466-network", "vif_mac": "fa:16:3e:cf:e3:1c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "1bd92af54c2f44f1913c48ef5ebd6c42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc9935ef-4d", "ovs_interfaceid": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '7c23949f-bba8-4466-bb79-caf568852d38', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.188 226833 DEBUG nova.objects.instance [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lazy-loading 'resources' on Instance uuid 05d236a3-3a52-4bdd-bb11-8956ea342070 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.340 226833 WARNING nova.virt.libvirt.driver [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.349 226833 DEBUG nova.virt.libvirt.host [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.350 226833 DEBUG nova.virt.libvirt.host [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.353 226833 DEBUG nova.virt.libvirt.host [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.354 226833 DEBUG nova.virt.libvirt.host [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.355 226833 DEBUG nova.virt.libvirt.driver [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.355 226833 DEBUG nova.virt.hardware [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.356 226833 DEBUG nova.virt.hardware [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.356 226833 DEBUG nova.virt.hardware [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.356 226833 DEBUG nova.virt.hardware [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.356 226833 DEBUG nova.virt.hardware [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.356 226833 DEBUG nova.virt.hardware [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.357 226833 DEBUG nova.virt.hardware [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.357 226833 DEBUG nova.virt.hardware [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.357 226833 DEBUG nova.virt.hardware [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.357 226833 DEBUG nova.virt.hardware [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.358 226833 DEBUG nova.virt.hardware [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.358 226833 DEBUG nova.objects.instance [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 05d236a3-3a52-4bdd-bb11-8956ea342070 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.671 226833 DEBUG nova.compute.manager [req-fbaf7ae2-c16a-4be3-910d-1bae42969179 req-2015bcae-5c85-4a4f-9ce6-cc7f0f2a44a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Received event network-vif-unplugged-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.671 226833 DEBUG oslo_concurrency.lockutils [req-fbaf7ae2-c16a-4be3-910d-1bae42969179 req-2015bcae-5c85-4a4f-9ce6-cc7f0f2a44a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.672 226833 DEBUG oslo_concurrency.lockutils [req-fbaf7ae2-c16a-4be3-910d-1bae42969179 req-2015bcae-5c85-4a4f-9ce6-cc7f0f2a44a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.672 226833 DEBUG oslo_concurrency.lockutils [req-fbaf7ae2-c16a-4be3-910d-1bae42969179 req-2015bcae-5c85-4a4f-9ce6-cc7f0f2a44a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.672 226833 DEBUG nova.compute.manager [req-fbaf7ae2-c16a-4be3-910d-1bae42969179 req-2015bcae-5c85-4a4f-9ce6-cc7f0f2a44a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] No waiting events found dispatching network-vif-unplugged-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.672 226833 WARNING nova.compute.manager [req-fbaf7ae2-c16a-4be3-910d-1bae42969179 req-2015bcae-5c85-4a4f-9ce6-cc7f0f2a44a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Received unexpected event network-vif-unplugged-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 for instance with vm_state active and task_state rescuing.
Jan 31 08:10:07 compute-2 nova_compute[226829]: 2026-01-31 08:10:07.681 226833 DEBUG oslo_concurrency.processutils [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:10:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:07.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #100. Immutable memtables: 0.
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:10:07.905557) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 100
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847007905628, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2490, "num_deletes": 258, "total_data_size": 5806108, "memory_usage": 5877840, "flush_reason": "Manual Compaction"}
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #101: started
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847007925531, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 101, "file_size": 3773074, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50120, "largest_seqno": 52605, "table_properties": {"data_size": 3762830, "index_size": 6546, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21677, "raw_average_key_size": 20, "raw_value_size": 3742244, "raw_average_value_size": 3619, "num_data_blocks": 282, "num_entries": 1034, "num_filter_entries": 1034, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846820, "oldest_key_time": 1769846820, "file_creation_time": 1769847007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 20028 microseconds, and 6407 cpu microseconds.
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:10:07.925582) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #101: 3773074 bytes OK
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:10:07.925599) [db/memtable_list.cc:519] [default] Level-0 commit table #101 started
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:10:07.927518) [db/memtable_list.cc:722] [default] Level-0 commit table #101: memtable #1 done
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:10:07.927530) EVENT_LOG_v1 {"time_micros": 1769847007927526, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:10:07.927546) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 5795116, prev total WAL file size 5795116, number of live WAL files 2.
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000097.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:10:07.928356) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [101(3684KB)], [99(9059KB)]
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847007928408, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [101], "files_L6": [99], "score": -1, "input_data_size": 13049691, "oldest_snapshot_seqno": -1}
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #102: 7675 keys, 11136899 bytes, temperature: kUnknown
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847007984766, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 102, "file_size": 11136899, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11086353, "index_size": 30274, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19205, "raw_key_size": 198638, "raw_average_key_size": 25, "raw_value_size": 10950167, "raw_average_value_size": 1426, "num_data_blocks": 1192, "num_entries": 7675, "num_filter_entries": 7675, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769847007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 102, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:10:07.985107) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 11136899 bytes
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:10:07.986230) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 231.1 rd, 197.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 8.8 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(6.4) write-amplify(3.0) OK, records in: 8207, records dropped: 532 output_compression: NoCompression
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:10:07.986251) EVENT_LOG_v1 {"time_micros": 1769847007986241, "job": 62, "event": "compaction_finished", "compaction_time_micros": 56470, "compaction_time_cpu_micros": 22620, "output_level": 6, "num_output_files": 1, "total_output_size": 11136899, "num_input_records": 8207, "num_output_records": 7675, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847007986894, "job": 62, "event": "table_file_deletion", "file_number": 101}
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000099.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847007988399, "job": 62, "event": "table_file_deletion", "file_number": 99}
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:10:07.928291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:10:07.988489) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:10:07.988500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:10:07.988502) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:10:07.988503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:10:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:10:07.988525) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:10:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:10:08 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3304592696' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:10:08 compute-2 nova_compute[226829]: 2026-01-31 08:10:08.152 226833 DEBUG oslo_concurrency.processutils [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:08 compute-2 nova_compute[226829]: 2026-01-31 08:10:08.154 226833 DEBUG oslo_concurrency.processutils [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:08.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:10:08 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2532497826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:10:08 compute-2 nova_compute[226829]: 2026-01-31 08:10:08.559 226833 DEBUG oslo_concurrency.processutils [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:08 compute-2 nova_compute[226829]: 2026-01-31 08:10:08.560 226833 DEBUG oslo_concurrency.processutils [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:08 compute-2 ceph-mon[77282]: pgmap v2217: 305 pgs: 305 active+clean; 290 MiB data, 992 MiB used, 20 GiB / 21 GiB avail; 388 KiB/s rd, 3.5 MiB/s wr, 87 op/s
Jan 31 08:10:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3304592696' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:10:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2532497826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:10:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:10:08 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/544179835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:10:08 compute-2 nova_compute[226829]: 2026-01-31 08:10:08.961 226833 DEBUG oslo_concurrency.processutils [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:08 compute-2 nova_compute[226829]: 2026-01-31 08:10:08.963 226833 DEBUG nova.virt.libvirt.vif [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:09:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-366487887',display_name='tempest-ServerRescueTestJSONUnderV235-server-366487887',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-366487887',id=117,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:09:46Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1bd92af54c2f44f1913c48ef5ebd6c42',ramdisk_id='',reservation_id='r-6xr0lcg4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1402654657',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1402654657-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:09:46Z,user_data=None,user_id='162a66f52a10400aad586654cbabfbfd',uuid=05d236a3-3a52-4bdd-bb11-8956ea342070,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "address": "fa:16:3e:cf:e3:1c", "network": {"id": "d94421cd-9003-407a-863b-8275eed6d7d2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-226703466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-226703466-network", "vif_mac": "fa:16:3e:cf:e3:1c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "1bd92af54c2f44f1913c48ef5ebd6c42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc9935ef-4d", "ovs_interfaceid": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:10:08 compute-2 nova_compute[226829]: 2026-01-31 08:10:08.964 226833 DEBUG nova.network.os_vif_util [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Converting VIF {"id": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "address": "fa:16:3e:cf:e3:1c", "network": {"id": "d94421cd-9003-407a-863b-8275eed6d7d2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-226703466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-226703466-network", "vif_mac": "fa:16:3e:cf:e3:1c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "1bd92af54c2f44f1913c48ef5ebd6c42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc9935ef-4d", "ovs_interfaceid": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:10:08 compute-2 nova_compute[226829]: 2026-01-31 08:10:08.964 226833 DEBUG nova.network.os_vif_util [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:e3:1c,bridge_name='br-int',has_traffic_filtering=True,id=bc9935ef-4dc8-4b06-8b86-0c380a0034f7,network=Network(d94421cd-9003-407a-863b-8275eed6d7d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc9935ef-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:10:08 compute-2 nova_compute[226829]: 2026-01-31 08:10:08.965 226833 DEBUG nova.objects.instance [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lazy-loading 'pci_devices' on Instance uuid 05d236a3-3a52-4bdd-bb11-8956ea342070 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:10:09 compute-2 nova_compute[226829]: 2026-01-31 08:10:09.424 226833 DEBUG nova.virt.libvirt.driver [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:10:09 compute-2 nova_compute[226829]:   <uuid>05d236a3-3a52-4bdd-bb11-8956ea342070</uuid>
Jan 31 08:10:09 compute-2 nova_compute[226829]:   <name>instance-00000075</name>
Jan 31 08:10:09 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:10:09 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:10:09 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-366487887</nova:name>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:10:07</nova:creationTime>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:10:09 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:10:09 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:10:09 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:10:09 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:10:09 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:10:09 compute-2 nova_compute[226829]:         <nova:user uuid="162a66f52a10400aad586654cbabfbfd">tempest-ServerRescueTestJSONUnderV235-1402654657-project-member</nova:user>
Jan 31 08:10:09 compute-2 nova_compute[226829]:         <nova:project uuid="1bd92af54c2f44f1913c48ef5ebd6c42">tempest-ServerRescueTestJSONUnderV235-1402654657</nova:project>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:10:09 compute-2 nova_compute[226829]:         <nova:port uuid="bc9935ef-4dc8-4b06-8b86-0c380a0034f7">
Jan 31 08:10:09 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:10:09 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:10:09 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <system>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <entry name="serial">05d236a3-3a52-4bdd-bb11-8956ea342070</entry>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <entry name="uuid">05d236a3-3a52-4bdd-bb11-8956ea342070</entry>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     </system>
Jan 31 08:10:09 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:10:09 compute-2 nova_compute[226829]:   <os>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:   </os>
Jan 31 08:10:09 compute-2 nova_compute[226829]:   <features>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:   </features>
Jan 31 08:10:09 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:10:09 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:10:09 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/05d236a3-3a52-4bdd-bb11-8956ea342070_disk.rescue">
Jan 31 08:10:09 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       </source>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:10:09 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/05d236a3-3a52-4bdd-bb11-8956ea342070_disk">
Jan 31 08:10:09 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       </source>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:10:09 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <target dev="vdb" bus="virtio"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/05d236a3-3a52-4bdd-bb11-8956ea342070_disk.config.rescue">
Jan 31 08:10:09 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       </source>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:10:09 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:cf:e3:1c"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <target dev="tapbc9935ef-4d"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/05d236a3-3a52-4bdd-bb11-8956ea342070/console.log" append="off"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <video>
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     </video>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:10:09 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:10:09 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:10:09 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:10:09 compute-2 nova_compute[226829]: </domain>
Jan 31 08:10:09 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:10:09 compute-2 nova_compute[226829]: 2026-01-31 08:10:09.434 226833 INFO nova.virt.libvirt.driver [-] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Instance destroyed successfully.
Jan 31 08:10:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:10:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:10:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:09.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:10:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/544179835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:10:09 compute-2 ceph-mon[77282]: pgmap v2218: 305 pgs: 305 active+clean; 311 MiB data, 1002 MiB used, 20 GiB / 21 GiB avail; 176 KiB/s rd, 2.9 MiB/s wr, 59 op/s
Jan 31 08:10:10 compute-2 nova_compute[226829]: 2026-01-31 08:10:10.015 226833 DEBUG nova.compute.manager [req-8c3b218b-64fa-4e19-a71c-bde03975904b req-d0c365a7-2d42-4a2d-aa0c-410f86d1543c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Received event network-vif-plugged-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:10:10 compute-2 nova_compute[226829]: 2026-01-31 08:10:10.015 226833 DEBUG oslo_concurrency.lockutils [req-8c3b218b-64fa-4e19-a71c-bde03975904b req-d0c365a7-2d42-4a2d-aa0c-410f86d1543c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:10 compute-2 nova_compute[226829]: 2026-01-31 08:10:10.016 226833 DEBUG oslo_concurrency.lockutils [req-8c3b218b-64fa-4e19-a71c-bde03975904b req-d0c365a7-2d42-4a2d-aa0c-410f86d1543c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:10 compute-2 nova_compute[226829]: 2026-01-31 08:10:10.016 226833 DEBUG oslo_concurrency.lockutils [req-8c3b218b-64fa-4e19-a71c-bde03975904b req-d0c365a7-2d42-4a2d-aa0c-410f86d1543c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:10 compute-2 nova_compute[226829]: 2026-01-31 08:10:10.016 226833 DEBUG nova.compute.manager [req-8c3b218b-64fa-4e19-a71c-bde03975904b req-d0c365a7-2d42-4a2d-aa0c-410f86d1543c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] No waiting events found dispatching network-vif-plugged-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:10:10 compute-2 nova_compute[226829]: 2026-01-31 08:10:10.016 226833 WARNING nova.compute.manager [req-8c3b218b-64fa-4e19-a71c-bde03975904b req-d0c365a7-2d42-4a2d-aa0c-410f86d1543c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Received unexpected event network-vif-plugged-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 for instance with vm_state active and task_state rescuing.
Jan 31 08:10:10 compute-2 nova_compute[226829]: 2026-01-31 08:10:10.033 226833 DEBUG nova.virt.libvirt.driver [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:10:10 compute-2 nova_compute[226829]: 2026-01-31 08:10:10.034 226833 DEBUG nova.virt.libvirt.driver [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:10:10 compute-2 nova_compute[226829]: 2026-01-31 08:10:10.034 226833 DEBUG nova.virt.libvirt.driver [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:10:10 compute-2 nova_compute[226829]: 2026-01-31 08:10:10.034 226833 DEBUG nova.virt.libvirt.driver [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] No VIF found with MAC fa:16:3e:cf:e3:1c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:10:10 compute-2 nova_compute[226829]: 2026-01-31 08:10:10.035 226833 INFO nova.virt.libvirt.driver [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Using config drive
Jan 31 08:10:10 compute-2 nova_compute[226829]: 2026-01-31 08:10:10.065 226833 DEBUG nova.storage.rbd_utils [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] rbd image 05d236a3-3a52-4bdd-bb11-8956ea342070_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:10:10 compute-2 nova_compute[226829]: 2026-01-31 08:10:10.155 226833 DEBUG nova.objects.instance [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 05d236a3-3a52-4bdd-bb11-8956ea342070 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:10:10 compute-2 nova_compute[226829]: 2026-01-31 08:10:10.326 226833 DEBUG nova.objects.instance [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lazy-loading 'keypairs' on Instance uuid 05d236a3-3a52-4bdd-bb11-8956ea342070 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:10:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:10.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e282 e282: 3 total, 3 up, 3 in
Jan 31 08:10:10 compute-2 nova_compute[226829]: 2026-01-31 08:10:10.709 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:10 compute-2 nova_compute[226829]: 2026-01-31 08:10:10.908 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:11 compute-2 nova_compute[226829]: 2026-01-31 08:10:11.146 226833 INFO nova.virt.libvirt.driver [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Creating config drive at /var/lib/nova/instances/05d236a3-3a52-4bdd-bb11-8956ea342070/disk.config.rescue
Jan 31 08:10:11 compute-2 nova_compute[226829]: 2026-01-31 08:10:11.151 226833 DEBUG oslo_concurrency.processutils [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/05d236a3-3a52-4bdd-bb11-8956ea342070/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp3mnn4vyx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:11 compute-2 nova_compute[226829]: 2026-01-31 08:10:11.275 226833 DEBUG oslo_concurrency.processutils [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/05d236a3-3a52-4bdd-bb11-8956ea342070/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp3mnn4vyx" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:11 compute-2 nova_compute[226829]: 2026-01-31 08:10:11.310 226833 DEBUG nova.storage.rbd_utils [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] rbd image 05d236a3-3a52-4bdd-bb11-8956ea342070_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:10:11 compute-2 nova_compute[226829]: 2026-01-31 08:10:11.315 226833 DEBUG oslo_concurrency.processutils [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/05d236a3-3a52-4bdd-bb11-8956ea342070/disk.config.rescue 05d236a3-3a52-4bdd-bb11-8956ea342070_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:11 compute-2 nova_compute[226829]: 2026-01-31 08:10:11.520 226833 DEBUG oslo_concurrency.processutils [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/05d236a3-3a52-4bdd-bb11-8956ea342070/disk.config.rescue 05d236a3-3a52-4bdd-bb11-8956ea342070_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:11 compute-2 nova_compute[226829]: 2026-01-31 08:10:11.520 226833 INFO nova.virt.libvirt.driver [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Deleting local config drive /var/lib/nova/instances/05d236a3-3a52-4bdd-bb11-8956ea342070/disk.config.rescue because it was imported into RBD.
Jan 31 08:10:11 compute-2 kernel: tapbc9935ef-4d: entered promiscuous mode
Jan 31 08:10:11 compute-2 NetworkManager[48999]: <info>  [1769847011.5616] manager: (tapbc9935ef-4d): new Tun device (/org/freedesktop/NetworkManager/Devices/243)
Jan 31 08:10:11 compute-2 ovn_controller[133834]: 2026-01-31T08:10:11Z|00496|binding|INFO|Claiming lport bc9935ef-4dc8-4b06-8b86-0c380a0034f7 for this chassis.
Jan 31 08:10:11 compute-2 ovn_controller[133834]: 2026-01-31T08:10:11Z|00497|binding|INFO|bc9935ef-4dc8-4b06-8b86-0c380a0034f7: Claiming fa:16:3e:cf:e3:1c 10.100.0.13
Jan 31 08:10:11 compute-2 nova_compute[226829]: 2026-01-31 08:10:11.563 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:11 compute-2 ovn_controller[133834]: 2026-01-31T08:10:11Z|00498|binding|INFO|Setting lport bc9935ef-4dc8-4b06-8b86-0c380a0034f7 ovn-installed in OVS
Jan 31 08:10:11 compute-2 ovn_controller[133834]: 2026-01-31T08:10:11Z|00499|binding|INFO|Setting lport bc9935ef-4dc8-4b06-8b86-0c380a0034f7 up in Southbound
Jan 31 08:10:11 compute-2 nova_compute[226829]: 2026-01-31 08:10:11.569 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:11 compute-2 nova_compute[226829]: 2026-01-31 08:10:11.571 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:10:11.570 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:e3:1c 10.100.0.13'], port_security=['fa:16:3e:cf:e3:1c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '05d236a3-3a52-4bdd-bb11-8956ea342070', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d94421cd-9003-407a-863b-8275eed6d7d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1bd92af54c2f44f1913c48ef5ebd6c42', 'neutron:revision_number': '5', 'neutron:security_group_ids': '33613802-2801-4317-b08b-ab67e917317b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fef7fb59-4f6c-4909-8c67-2195494eb965, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=bc9935ef-4dc8-4b06-8b86-0c380a0034f7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:10:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:10:11.572 143841 INFO neutron.agent.ovn.metadata.agent [-] Port bc9935ef-4dc8-4b06-8b86-0c380a0034f7 in datapath d94421cd-9003-407a-863b-8275eed6d7d2 bound to our chassis
Jan 31 08:10:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:10:11.574 143841 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d94421cd-9003-407a-863b-8275eed6d7d2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 31 08:10:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:10:11.574 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cd28fe9a-c13b-446c-aba0-50e66590b260]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:11 compute-2 systemd-udevd[280375]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:10:11 compute-2 systemd-machined[195142]: New machine qemu-53-instance-00000075.
Jan 31 08:10:11 compute-2 ceph-mon[77282]: osdmap e282: 3 total, 3 up, 3 in
Jan 31 08:10:11 compute-2 NetworkManager[48999]: <info>  [1769847011.5906] device (tapbc9935ef-4d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:10:11 compute-2 NetworkManager[48999]: <info>  [1769847011.5911] device (tapbc9935ef-4d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:10:11 compute-2 systemd[1]: Started Virtual Machine qemu-53-instance-00000075.
Jan 31 08:10:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:11.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:12 compute-2 nova_compute[226829]: 2026-01-31 08:10:12.369 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Removed pending event for 05d236a3-3a52-4bdd-bb11-8956ea342070 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 08:10:12 compute-2 nova_compute[226829]: 2026-01-31 08:10:12.370 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847012.3688679, 05d236a3-3a52-4bdd-bb11-8956ea342070 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:10:12 compute-2 nova_compute[226829]: 2026-01-31 08:10:12.370 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] VM Resumed (Lifecycle Event)
Jan 31 08:10:12 compute-2 nova_compute[226829]: 2026-01-31 08:10:12.375 226833 DEBUG nova.compute.manager [None req-4bf58cd4-41e0-45a5-89ce-2c96482d172a 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:10:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:12.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:12 compute-2 nova_compute[226829]: 2026-01-31 08:10:12.405 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:10:12 compute-2 nova_compute[226829]: 2026-01-31 08:10:12.408 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:10:12 compute-2 nova_compute[226829]: 2026-01-31 08:10:12.541 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] During sync_power_state the instance has a pending task (rescuing). Skip.
Jan 31 08:10:12 compute-2 nova_compute[226829]: 2026-01-31 08:10:12.541 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847012.3727863, 05d236a3-3a52-4bdd-bb11-8956ea342070 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:10:12 compute-2 nova_compute[226829]: 2026-01-31 08:10:12.542 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] VM Started (Lifecycle Event)
Jan 31 08:10:12 compute-2 ceph-mon[77282]: pgmap v2220: 305 pgs: 305 active+clean; 326 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 2.5 MiB/s wr, 42 op/s
Jan 31 08:10:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1128975270' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:10:12 compute-2 nova_compute[226829]: 2026-01-31 08:10:12.628 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:10:12 compute-2 nova_compute[226829]: 2026-01-31 08:10:12.631 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:10:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/14629453' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:10:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:13.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:14 compute-2 nova_compute[226829]: 2026-01-31 08:10:14.075 226833 DEBUG nova.compute.manager [req-a02cc9d3-189e-4fc0-8b8c-b7d0ce0fbfe2 req-cd09c8fa-57c0-40a0-beab-ee1a5a40520d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Received event network-vif-plugged-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:10:14 compute-2 nova_compute[226829]: 2026-01-31 08:10:14.075 226833 DEBUG oslo_concurrency.lockutils [req-a02cc9d3-189e-4fc0-8b8c-b7d0ce0fbfe2 req-cd09c8fa-57c0-40a0-beab-ee1a5a40520d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:14 compute-2 nova_compute[226829]: 2026-01-31 08:10:14.075 226833 DEBUG oslo_concurrency.lockutils [req-a02cc9d3-189e-4fc0-8b8c-b7d0ce0fbfe2 req-cd09c8fa-57c0-40a0-beab-ee1a5a40520d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:14 compute-2 nova_compute[226829]: 2026-01-31 08:10:14.076 226833 DEBUG oslo_concurrency.lockutils [req-a02cc9d3-189e-4fc0-8b8c-b7d0ce0fbfe2 req-cd09c8fa-57c0-40a0-beab-ee1a5a40520d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:14 compute-2 nova_compute[226829]: 2026-01-31 08:10:14.076 226833 DEBUG nova.compute.manager [req-a02cc9d3-189e-4fc0-8b8c-b7d0ce0fbfe2 req-cd09c8fa-57c0-40a0-beab-ee1a5a40520d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] No waiting events found dispatching network-vif-plugged-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:10:14 compute-2 nova_compute[226829]: 2026-01-31 08:10:14.076 226833 WARNING nova.compute.manager [req-a02cc9d3-189e-4fc0-8b8c-b7d0ce0fbfe2 req-cd09c8fa-57c0-40a0-beab-ee1a5a40520d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Received unexpected event network-vif-plugged-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 for instance with vm_state rescued and task_state None.
Jan 31 08:10:14 compute-2 sudo[280447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:10:14 compute-2 sudo[280447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:14 compute-2 sudo[280447]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:14 compute-2 sudo[280472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:10:14 compute-2 sudo[280472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:14 compute-2 sudo[280472]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:14 compute-2 sudo[280497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:10:14 compute-2 sudo[280497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:14 compute-2 sudo[280497]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:14 compute-2 sudo[280522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 08:10:14 compute-2 sudo[280522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:14.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:14 compute-2 sudo[280522]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:14 compute-2 sudo[280567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:10:14 compute-2 sudo[280567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:14 compute-2 sudo[280567]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:14 compute-2 sudo[280592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:10:14 compute-2 sudo[280592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:14 compute-2 sudo[280592]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:14 compute-2 ceph-mon[77282]: pgmap v2221: 305 pgs: 305 active+clean; 326 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 2.2 MiB/s wr, 45 op/s
Jan 31 08:10:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:10:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:10:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:10:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:10:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 08:10:14 compute-2 sudo[280617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:10:14 compute-2 sudo[280617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:14 compute-2 sudo[280617]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:10:14 compute-2 sudo[280642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:10:14 compute-2 sudo[280642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:15 compute-2 sudo[280642]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:15 compute-2 nova_compute[226829]: 2026-01-31 08:10:15.712 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 08:10:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:10:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:10:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:10:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:10:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:10:15 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:10:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:15.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:15 compute-2 nova_compute[226829]: 2026-01-31 08:10:15.910 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:16.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:16 compute-2 nova_compute[226829]: 2026-01-31 08:10:16.601 226833 DEBUG nova.compute.manager [req-d5753cc6-56a8-4576-9f8a-96e9a86f28d4 req-d98ec16d-95ac-43a2-836f-c8a59315378f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Received event network-vif-plugged-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:10:16 compute-2 nova_compute[226829]: 2026-01-31 08:10:16.602 226833 DEBUG oslo_concurrency.lockutils [req-d5753cc6-56a8-4576-9f8a-96e9a86f28d4 req-d98ec16d-95ac-43a2-836f-c8a59315378f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:16 compute-2 nova_compute[226829]: 2026-01-31 08:10:16.602 226833 DEBUG oslo_concurrency.lockutils [req-d5753cc6-56a8-4576-9f8a-96e9a86f28d4 req-d98ec16d-95ac-43a2-836f-c8a59315378f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:16 compute-2 nova_compute[226829]: 2026-01-31 08:10:16.603 226833 DEBUG oslo_concurrency.lockutils [req-d5753cc6-56a8-4576-9f8a-96e9a86f28d4 req-d98ec16d-95ac-43a2-836f-c8a59315378f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:16 compute-2 nova_compute[226829]: 2026-01-31 08:10:16.603 226833 DEBUG nova.compute.manager [req-d5753cc6-56a8-4576-9f8a-96e9a86f28d4 req-d98ec16d-95ac-43a2-836f-c8a59315378f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] No waiting events found dispatching network-vif-plugged-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:10:16 compute-2 nova_compute[226829]: 2026-01-31 08:10:16.603 226833 WARNING nova.compute.manager [req-d5753cc6-56a8-4576-9f8a-96e9a86f28d4 req-d98ec16d-95ac-43a2-836f-c8a59315378f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Received unexpected event network-vif-plugged-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 for instance with vm_state rescued and task_state None.
Jan 31 08:10:16 compute-2 ceph-mon[77282]: pgmap v2222: 305 pgs: 305 active+clean; 326 MiB data, 1005 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 81 op/s
Jan 31 08:10:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:10:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:17.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:10:18 compute-2 ceph-mon[77282]: pgmap v2223: 305 pgs: 305 active+clean; 306 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.2 MiB/s wr, 156 op/s
Jan 31 08:10:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:18.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:18 compute-2 nova_compute[226829]: 2026-01-31 08:10:18.485 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:10:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:10:19.805 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:10:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:10:19.806 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:10:19 compute-2 nova_compute[226829]: 2026-01-31 08:10:19.807 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:10:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:19.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:20.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:20 compute-2 nova_compute[226829]: 2026-01-31 08:10:20.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:10:20 compute-2 ceph-mon[77282]: pgmap v2224: 305 pgs: 305 active+clean; 270 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 226 KiB/s wr, 204 op/s
Jan 31 08:10:20 compute-2 nova_compute[226829]: 2026-01-31 08:10:20.714 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:20 compute-2 nova_compute[226829]: 2026-01-31 08:10:20.898 226833 DEBUG nova.compute.manager [req-8394d860-6347-4968-b2c9-566de2ec9893 req-0b56d207-9ab5-4697-b0e5-6690f4d96579 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Received event network-changed-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:10:20 compute-2 nova_compute[226829]: 2026-01-31 08:10:20.899 226833 DEBUG nova.compute.manager [req-8394d860-6347-4968-b2c9-566de2ec9893 req-0b56d207-9ab5-4697-b0e5-6690f4d96579 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Refreshing instance network info cache due to event network-changed-bc9935ef-4dc8-4b06-8b86-0c380a0034f7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:10:20 compute-2 nova_compute[226829]: 2026-01-31 08:10:20.899 226833 DEBUG oslo_concurrency.lockutils [req-8394d860-6347-4968-b2c9-566de2ec9893 req-0b56d207-9ab5-4697-b0e5-6690f4d96579 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-05d236a3-3a52-4bdd-bb11-8956ea342070" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:10:20 compute-2 nova_compute[226829]: 2026-01-31 08:10:20.899 226833 DEBUG oslo_concurrency.lockutils [req-8394d860-6347-4968-b2c9-566de2ec9893 req-0b56d207-9ab5-4697-b0e5-6690f4d96579 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-05d236a3-3a52-4bdd-bb11-8956ea342070" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:10:20 compute-2 nova_compute[226829]: 2026-01-31 08:10:20.900 226833 DEBUG nova.network.neutron [req-8394d860-6347-4968-b2c9-566de2ec9893 req-0b56d207-9ab5-4697-b0e5-6690f4d96579 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Refreshing network info cache for port bc9935ef-4dc8-4b06-8b86-0c380a0034f7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:10:20 compute-2 nova_compute[226829]: 2026-01-31 08:10:20.912 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:21 compute-2 sudo[280701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:10:21 compute-2 sudo[280701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:21 compute-2 sudo[280701]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:21 compute-2 sudo[280733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:10:21 compute-2 sudo[280733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:21 compute-2 sudo[280733]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:21 compute-2 podman[280725]: 2026-01-31 08:10:21.865246041 +0000 UTC m=+0.079926956 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:10:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:21.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:22.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:22 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:10:22 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:10:22 compute-2 ceph-mon[77282]: pgmap v2225: 305 pgs: 305 active+clean; 246 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 206 KiB/s wr, 191 op/s
Jan 31 08:10:22 compute-2 nova_compute[226829]: 2026-01-31 08:10:22.634 226833 DEBUG nova.network.neutron [req-8394d860-6347-4968-b2c9-566de2ec9893 req-0b56d207-9ab5-4697-b0e5-6690f4d96579 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Updated VIF entry in instance network info cache for port bc9935ef-4dc8-4b06-8b86-0c380a0034f7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:10:22 compute-2 nova_compute[226829]: 2026-01-31 08:10:22.635 226833 DEBUG nova.network.neutron [req-8394d860-6347-4968-b2c9-566de2ec9893 req-0b56d207-9ab5-4697-b0e5-6690f4d96579 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Updating instance_info_cache with network_info: [{"id": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "address": "fa:16:3e:cf:e3:1c", "network": {"id": "d94421cd-9003-407a-863b-8275eed6d7d2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-226703466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "1bd92af54c2f44f1913c48ef5ebd6c42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc9935ef-4d", "ovs_interfaceid": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:10:22 compute-2 nova_compute[226829]: 2026-01-31 08:10:22.698 226833 DEBUG oslo_concurrency.lockutils [req-8394d860-6347-4968-b2c9-566de2ec9893 req-0b56d207-9ab5-4697-b0e5-6690f4d96579 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-05d236a3-3a52-4bdd-bb11-8956ea342070" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:10:23 compute-2 nova_compute[226829]: 2026-01-31 08:10:23.457 226833 DEBUG nova.compute.manager [req-11cb5675-2187-4373-a4a8-bb00cb06b3c6 req-4625315b-3bf9-4bf2-ba99-72cafc472a32 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Received event network-changed-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:10:23 compute-2 nova_compute[226829]: 2026-01-31 08:10:23.458 226833 DEBUG nova.compute.manager [req-11cb5675-2187-4373-a4a8-bb00cb06b3c6 req-4625315b-3bf9-4bf2-ba99-72cafc472a32 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Refreshing instance network info cache due to event network-changed-bc9935ef-4dc8-4b06-8b86-0c380a0034f7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:10:23 compute-2 nova_compute[226829]: 2026-01-31 08:10:23.458 226833 DEBUG oslo_concurrency.lockutils [req-11cb5675-2187-4373-a4a8-bb00cb06b3c6 req-4625315b-3bf9-4bf2-ba99-72cafc472a32 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-05d236a3-3a52-4bdd-bb11-8956ea342070" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:10:23 compute-2 nova_compute[226829]: 2026-01-31 08:10:23.459 226833 DEBUG oslo_concurrency.lockutils [req-11cb5675-2187-4373-a4a8-bb00cb06b3c6 req-4625315b-3bf9-4bf2-ba99-72cafc472a32 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-05d236a3-3a52-4bdd-bb11-8956ea342070" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:10:23 compute-2 nova_compute[226829]: 2026-01-31 08:10:23.459 226833 DEBUG nova.network.neutron [req-11cb5675-2187-4373-a4a8-bb00cb06b3c6 req-4625315b-3bf9-4bf2-ba99-72cafc472a32 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Refreshing network info cache for port bc9935ef-4dc8-4b06-8b86-0c380a0034f7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:10:23 compute-2 sudo[280778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:10:23 compute-2 sudo[280778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:23 compute-2 sudo[280778]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:23 compute-2 sudo[280803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:10:23 compute-2 sudo[280803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:23 compute-2 sudo[280803]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:23.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:24.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:24 compute-2 nova_compute[226829]: 2026-01-31 08:10:24.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:10:24 compute-2 ceph-mon[77282]: pgmap v2226: 305 pgs: 305 active+clean; 246 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 21 KiB/s wr, 175 op/s
Jan 31 08:10:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:10:25 compute-2 nova_compute[226829]: 2026-01-31 08:10:25.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:10:25 compute-2 nova_compute[226829]: 2026-01-31 08:10:25.609 226833 DEBUG nova.network.neutron [req-11cb5675-2187-4373-a4a8-bb00cb06b3c6 req-4625315b-3bf9-4bf2-ba99-72cafc472a32 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Updated VIF entry in instance network info cache for port bc9935ef-4dc8-4b06-8b86-0c380a0034f7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:10:25 compute-2 nova_compute[226829]: 2026-01-31 08:10:25.610 226833 DEBUG nova.network.neutron [req-11cb5675-2187-4373-a4a8-bb00cb06b3c6 req-4625315b-3bf9-4bf2-ba99-72cafc472a32 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Updating instance_info_cache with network_info: [{"id": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "address": "fa:16:3e:cf:e3:1c", "network": {"id": "d94421cd-9003-407a-863b-8275eed6d7d2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-226703466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "1bd92af54c2f44f1913c48ef5ebd6c42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc9935ef-4d", "ovs_interfaceid": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:10:25 compute-2 nova_compute[226829]: 2026-01-31 08:10:25.639 226833 DEBUG oslo_concurrency.lockutils [req-11cb5675-2187-4373-a4a8-bb00cb06b3c6 req-4625315b-3bf9-4bf2-ba99-72cafc472a32 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-05d236a3-3a52-4bdd-bb11-8956ea342070" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:10:25 compute-2 nova_compute[226829]: 2026-01-31 08:10:25.716 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:25 compute-2 nova_compute[226829]: 2026-01-31 08:10:25.914 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:25.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:26.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:26 compute-2 nova_compute[226829]: 2026-01-31 08:10:26.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:10:26 compute-2 nova_compute[226829]: 2026-01-31 08:10:26.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:10:26 compute-2 nova_compute[226829]: 2026-01-31 08:10:26.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:10:26 compute-2 nova_compute[226829]: 2026-01-31 08:10:26.706 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-05d236a3-3a52-4bdd-bb11-8956ea342070" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:10:26 compute-2 nova_compute[226829]: 2026-01-31 08:10:26.706 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-05d236a3-3a52-4bdd-bb11-8956ea342070" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:10:26 compute-2 nova_compute[226829]: 2026-01-31 08:10:26.706 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:10:26 compute-2 nova_compute[226829]: 2026-01-31 08:10:26.707 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 05d236a3-3a52-4bdd-bb11-8956ea342070 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:10:26 compute-2 ceph-mon[77282]: pgmap v2227: 305 pgs: 305 active+clean; 246 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 21 KiB/s wr, 174 op/s
Jan 31 08:10:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1854736004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/498864749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:27.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:28 compute-2 NetworkManager[48999]: <info>  [1769847028.2400] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/244)
Jan 31 08:10:28 compute-2 nova_compute[226829]: 2026-01-31 08:10:28.238 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:28 compute-2 NetworkManager[48999]: <info>  [1769847028.2410] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/245)
Jan 31 08:10:28 compute-2 nova_compute[226829]: 2026-01-31 08:10:28.279 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:28 compute-2 nova_compute[226829]: 2026-01-31 08:10:28.293 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:28 compute-2 nova_compute[226829]: 2026-01-31 08:10:28.379 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Updating instance_info_cache with network_info: [{"id": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "address": "fa:16:3e:cf:e3:1c", "network": {"id": "d94421cd-9003-407a-863b-8275eed6d7d2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-226703466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "1bd92af54c2f44f1913c48ef5ebd6c42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc9935ef-4d", "ovs_interfaceid": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:10:28 compute-2 nova_compute[226829]: 2026-01-31 08:10:28.400 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-05d236a3-3a52-4bdd-bb11-8956ea342070" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:10:28 compute-2 nova_compute[226829]: 2026-01-31 08:10:28.400 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:10:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:28.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:28 compute-2 nova_compute[226829]: 2026-01-31 08:10:28.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:10:28 compute-2 ceph-mon[77282]: pgmap v2228: 305 pgs: 305 active+clean; 256 MiB data, 984 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 502 KiB/s wr, 174 op/s
Jan 31 08:10:29 compute-2 nova_compute[226829]: 2026-01-31 08:10:29.066 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:29 compute-2 nova_compute[226829]: 2026-01-31 08:10:29.251 226833 DEBUG nova.compute.manager [req-04f03df1-948b-402f-954c-693196d1282c req-ff8bc28b-3224-4c86-89eb-590a0cf0d30c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Received event network-changed-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:10:29 compute-2 nova_compute[226829]: 2026-01-31 08:10:29.252 226833 DEBUG nova.compute.manager [req-04f03df1-948b-402f-954c-693196d1282c req-ff8bc28b-3224-4c86-89eb-590a0cf0d30c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Refreshing instance network info cache due to event network-changed-bc9935ef-4dc8-4b06-8b86-0c380a0034f7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:10:29 compute-2 nova_compute[226829]: 2026-01-31 08:10:29.252 226833 DEBUG oslo_concurrency.lockutils [req-04f03df1-948b-402f-954c-693196d1282c req-ff8bc28b-3224-4c86-89eb-590a0cf0d30c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-05d236a3-3a52-4bdd-bb11-8956ea342070" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:10:29 compute-2 nova_compute[226829]: 2026-01-31 08:10:29.252 226833 DEBUG oslo_concurrency.lockutils [req-04f03df1-948b-402f-954c-693196d1282c req-ff8bc28b-3224-4c86-89eb-590a0cf0d30c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-05d236a3-3a52-4bdd-bb11-8956ea342070" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:10:29 compute-2 nova_compute[226829]: 2026-01-31 08:10:29.253 226833 DEBUG nova.network.neutron [req-04f03df1-948b-402f-954c-693196d1282c req-ff8bc28b-3224-4c86-89eb-590a0cf0d30c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Refreshing network info cache for port bc9935ef-4dc8-4b06-8b86-0c380a0034f7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:10:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:10:29.809 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:10:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:10:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:10:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:29.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:10:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:30.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:30 compute-2 nova_compute[226829]: 2026-01-31 08:10:30.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:10:30 compute-2 nova_compute[226829]: 2026-01-31 08:10:30.718 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:30 compute-2 ceph-mon[77282]: pgmap v2229: 305 pgs: 305 active+clean; 290 MiB data, 1003 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 145 op/s
Jan 31 08:10:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2174295167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2742948316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:30 compute-2 nova_compute[226829]: 2026-01-31 08:10:30.913 226833 DEBUG nova.network.neutron [req-04f03df1-948b-402f-954c-693196d1282c req-ff8bc28b-3224-4c86-89eb-590a0cf0d30c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Updated VIF entry in instance network info cache for port bc9935ef-4dc8-4b06-8b86-0c380a0034f7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:10:30 compute-2 nova_compute[226829]: 2026-01-31 08:10:30.914 226833 DEBUG nova.network.neutron [req-04f03df1-948b-402f-954c-693196d1282c req-ff8bc28b-3224-4c86-89eb-590a0cf0d30c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Updating instance_info_cache with network_info: [{"id": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "address": "fa:16:3e:cf:e3:1c", "network": {"id": "d94421cd-9003-407a-863b-8275eed6d7d2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-226703466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "1bd92af54c2f44f1913c48ef5ebd6c42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc9935ef-4d", "ovs_interfaceid": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:10:30 compute-2 nova_compute[226829]: 2026-01-31 08:10:30.915 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:30 compute-2 nova_compute[226829]: 2026-01-31 08:10:30.940 226833 DEBUG oslo_concurrency.lockutils [req-04f03df1-948b-402f-954c-693196d1282c req-ff8bc28b-3224-4c86-89eb-590a0cf0d30c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-05d236a3-3a52-4bdd-bb11-8956ea342070" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:10:31 compute-2 nova_compute[226829]: 2026-01-31 08:10:31.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:10:31 compute-2 nova_compute[226829]: 2026-01-31 08:10:31.518 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:31 compute-2 nova_compute[226829]: 2026-01-31 08:10:31.519 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:31 compute-2 nova_compute[226829]: 2026-01-31 08:10:31.519 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:31 compute-2 nova_compute[226829]: 2026-01-31 08:10:31.519 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:10:31 compute-2 nova_compute[226829]: 2026-01-31 08:10:31.519 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:10:31 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/20760702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2845828736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:10:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1891158710' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:10:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:31.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:31 compute-2 nova_compute[226829]: 2026-01-31 08:10:31.926 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:31 compute-2 nova_compute[226829]: 2026-01-31 08:10:31.994 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:10:31 compute-2 nova_compute[226829]: 2026-01-31 08:10:31.994 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:10:31 compute-2 nova_compute[226829]: 2026-01-31 08:10:31.994 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:10:32 compute-2 nova_compute[226829]: 2026-01-31 08:10:32.116 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:10:32 compute-2 nova_compute[226829]: 2026-01-31 08:10:32.117 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4200MB free_disk=20.842449188232422GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:10:32 compute-2 nova_compute[226829]: 2026-01-31 08:10:32.117 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:32 compute-2 nova_compute[226829]: 2026-01-31 08:10:32.118 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:32 compute-2 nova_compute[226829]: 2026-01-31 08:10:32.195 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 05d236a3-3a52-4bdd-bb11-8956ea342070 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:10:32 compute-2 nova_compute[226829]: 2026-01-31 08:10:32.196 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:10:32 compute-2 nova_compute[226829]: 2026-01-31 08:10:32.196 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:10:32 compute-2 nova_compute[226829]: 2026-01-31 08:10:32.227 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:32.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:32 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:10:32 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/398891581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:32 compute-2 nova_compute[226829]: 2026-01-31 08:10:32.643 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:32 compute-2 nova_compute[226829]: 2026-01-31 08:10:32.648 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:10:32 compute-2 nova_compute[226829]: 2026-01-31 08:10:32.666 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:10:32 compute-2 nova_compute[226829]: 2026-01-31 08:10:32.692 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:10:32 compute-2 nova_compute[226829]: 2026-01-31 08:10:32.693 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:32 compute-2 ceph-mon[77282]: pgmap v2230: 305 pgs: 305 active+clean; 323 MiB data, 1010 MiB used, 20 GiB / 21 GiB avail; 641 KiB/s rd, 2.9 MiB/s wr, 106 op/s
Jan 31 08:10:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/20760702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1707992801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/398891581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:33 compute-2 podman[280879]: 2026-01-31 08:10:33.153167515 +0000 UTC m=+0.040845352 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 08:10:33 compute-2 nova_compute[226829]: 2026-01-31 08:10:33.559 226833 DEBUG nova.compute.manager [req-1f2dfb34-ea63-47c2-af02-8cb0e44e7844 req-6a3af9e7-4ec0-4eb3-bf2c-fbcdac72430f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Received event network-changed-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:10:33 compute-2 nova_compute[226829]: 2026-01-31 08:10:33.560 226833 DEBUG nova.compute.manager [req-1f2dfb34-ea63-47c2-af02-8cb0e44e7844 req-6a3af9e7-4ec0-4eb3-bf2c-fbcdac72430f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Refreshing instance network info cache due to event network-changed-bc9935ef-4dc8-4b06-8b86-0c380a0034f7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:10:33 compute-2 nova_compute[226829]: 2026-01-31 08:10:33.560 226833 DEBUG oslo_concurrency.lockutils [req-1f2dfb34-ea63-47c2-af02-8cb0e44e7844 req-6a3af9e7-4ec0-4eb3-bf2c-fbcdac72430f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-05d236a3-3a52-4bdd-bb11-8956ea342070" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:10:33 compute-2 nova_compute[226829]: 2026-01-31 08:10:33.560 226833 DEBUG oslo_concurrency.lockutils [req-1f2dfb34-ea63-47c2-af02-8cb0e44e7844 req-6a3af9e7-4ec0-4eb3-bf2c-fbcdac72430f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-05d236a3-3a52-4bdd-bb11-8956ea342070" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:10:33 compute-2 nova_compute[226829]: 2026-01-31 08:10:33.560 226833 DEBUG nova.network.neutron [req-1f2dfb34-ea63-47c2-af02-8cb0e44e7844 req-6a3af9e7-4ec0-4eb3-bf2c-fbcdac72430f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Refreshing network info cache for port bc9935ef-4dc8-4b06-8b86-0c380a0034f7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:10:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:10:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:33.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:10:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/428002861' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:10:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2691849931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3134985353' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:10:34 compute-2 ceph-mon[77282]: pgmap v2231: 305 pgs: 305 active+clean; 339 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 645 KiB/s rd, 3.6 MiB/s wr, 114 op/s
Jan 31 08:10:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:34.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:10:35 compute-2 nova_compute[226829]: 2026-01-31 08:10:35.465 226833 DEBUG nova.network.neutron [req-1f2dfb34-ea63-47c2-af02-8cb0e44e7844 req-6a3af9e7-4ec0-4eb3-bf2c-fbcdac72430f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Updated VIF entry in instance network info cache for port bc9935ef-4dc8-4b06-8b86-0c380a0034f7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:10:35 compute-2 nova_compute[226829]: 2026-01-31 08:10:35.466 226833 DEBUG nova.network.neutron [req-1f2dfb34-ea63-47c2-af02-8cb0e44e7844 req-6a3af9e7-4ec0-4eb3-bf2c-fbcdac72430f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Updating instance_info_cache with network_info: [{"id": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "address": "fa:16:3e:cf:e3:1c", "network": {"id": "d94421cd-9003-407a-863b-8275eed6d7d2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-226703466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "1bd92af54c2f44f1913c48ef5ebd6c42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc9935ef-4d", "ovs_interfaceid": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:10:35 compute-2 nova_compute[226829]: 2026-01-31 08:10:35.487 226833 DEBUG oslo_concurrency.lockutils [req-1f2dfb34-ea63-47c2-af02-8cb0e44e7844 req-6a3af9e7-4ec0-4eb3-bf2c-fbcdac72430f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-05d236a3-3a52-4bdd-bb11-8956ea342070" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:10:35 compute-2 nova_compute[226829]: 2026-01-31 08:10:35.720 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:35 compute-2 nova_compute[226829]: 2026-01-31 08:10:35.916 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:10:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:35.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:10:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:36.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:37 compute-2 ceph-mon[77282]: pgmap v2232: 305 pgs: 305 active+clean; 341 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 651 KiB/s rd, 3.6 MiB/s wr, 122 op/s
Jan 31 08:10:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:10:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:37.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:10:38 compute-2 ceph-mon[77282]: pgmap v2233: 305 pgs: 305 active+clean; 341 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.6 MiB/s wr, 185 op/s
Jan 31 08:10:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:38.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:39 compute-2 nova_compute[226829]: 2026-01-31 08:10:39.241 226833 DEBUG oslo_concurrency.lockutils [None req-21b7c0df-5a4b-4dc2-b49b-ae577a71935e 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Acquiring lock "05d236a3-3a52-4bdd-bb11-8956ea342070" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:39 compute-2 nova_compute[226829]: 2026-01-31 08:10:39.241 226833 DEBUG oslo_concurrency.lockutils [None req-21b7c0df-5a4b-4dc2-b49b-ae577a71935e 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lock "05d236a3-3a52-4bdd-bb11-8956ea342070" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:39 compute-2 nova_compute[226829]: 2026-01-31 08:10:39.242 226833 DEBUG oslo_concurrency.lockutils [None req-21b7c0df-5a4b-4dc2-b49b-ae577a71935e 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Acquiring lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:39 compute-2 nova_compute[226829]: 2026-01-31 08:10:39.242 226833 DEBUG oslo_concurrency.lockutils [None req-21b7c0df-5a4b-4dc2-b49b-ae577a71935e 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:39 compute-2 nova_compute[226829]: 2026-01-31 08:10:39.242 226833 DEBUG oslo_concurrency.lockutils [None req-21b7c0df-5a4b-4dc2-b49b-ae577a71935e 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:39 compute-2 nova_compute[226829]: 2026-01-31 08:10:39.244 226833 INFO nova.compute.manager [None req-21b7c0df-5a4b-4dc2-b49b-ae577a71935e 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Terminating instance
Jan 31 08:10:39 compute-2 nova_compute[226829]: 2026-01-31 08:10:39.245 226833 DEBUG nova.compute.manager [None req-21b7c0df-5a4b-4dc2-b49b-ae577a71935e 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:10:39 compute-2 kernel: tapbc9935ef-4d (unregistering): left promiscuous mode
Jan 31 08:10:39 compute-2 NetworkManager[48999]: <info>  [1769847039.3381] device (tapbc9935ef-4d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:10:39 compute-2 ovn_controller[133834]: 2026-01-31T08:10:39Z|00500|binding|INFO|Releasing lport bc9935ef-4dc8-4b06-8b86-0c380a0034f7 from this chassis (sb_readonly=0)
Jan 31 08:10:39 compute-2 nova_compute[226829]: 2026-01-31 08:10:39.345 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:39 compute-2 ovn_controller[133834]: 2026-01-31T08:10:39Z|00501|binding|INFO|Setting lport bc9935ef-4dc8-4b06-8b86-0c380a0034f7 down in Southbound
Jan 31 08:10:39 compute-2 ovn_controller[133834]: 2026-01-31T08:10:39Z|00502|binding|INFO|Removing iface tapbc9935ef-4d ovn-installed in OVS
Jan 31 08:10:39 compute-2 nova_compute[226829]: 2026-01-31 08:10:39.349 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:10:39.355 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:e3:1c 10.100.0.13'], port_security=['fa:16:3e:cf:e3:1c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '05d236a3-3a52-4bdd-bb11-8956ea342070', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d94421cd-9003-407a-863b-8275eed6d7d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1bd92af54c2f44f1913c48ef5ebd6c42', 'neutron:revision_number': '8', 'neutron:security_group_ids': '33613802-2801-4317-b08b-ab67e917317b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fef7fb59-4f6c-4909-8c67-2195494eb965, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=bc9935ef-4dc8-4b06-8b86-0c380a0034f7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:10:39 compute-2 nova_compute[226829]: 2026-01-31 08:10:39.356 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:10:39.358 143841 INFO neutron.agent.ovn.metadata.agent [-] Port bc9935ef-4dc8-4b06-8b86-0c380a0034f7 in datapath d94421cd-9003-407a-863b-8275eed6d7d2 unbound from our chassis
Jan 31 08:10:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:10:39.359 143841 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d94421cd-9003-407a-863b-8275eed6d7d2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 31 08:10:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:10:39.362 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa84c26-43ab-464f-a9ae-cebcf8bb1ae2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:39 compute-2 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000075.scope: Deactivated successfully.
Jan 31 08:10:39 compute-2 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000075.scope: Consumed 14.015s CPU time.
Jan 31 08:10:39 compute-2 systemd-machined[195142]: Machine qemu-53-instance-00000075 terminated.
Jan 31 08:10:39 compute-2 nova_compute[226829]: 2026-01-31 08:10:39.446 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:39 compute-2 nova_compute[226829]: 2026-01-31 08:10:39.472 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:39 compute-2 nova_compute[226829]: 2026-01-31 08:10:39.477 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:39 compute-2 nova_compute[226829]: 2026-01-31 08:10:39.491 226833 INFO nova.virt.libvirt.driver [-] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Instance destroyed successfully.
Jan 31 08:10:39 compute-2 nova_compute[226829]: 2026-01-31 08:10:39.492 226833 DEBUG nova.objects.instance [None req-21b7c0df-5a4b-4dc2-b49b-ae577a71935e 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lazy-loading 'resources' on Instance uuid 05d236a3-3a52-4bdd-bb11-8956ea342070 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:10:39 compute-2 nova_compute[226829]: 2026-01-31 08:10:39.517 226833 DEBUG nova.virt.libvirt.vif [None req-21b7c0df-5a4b-4dc2-b49b-ae577a71935e 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:09:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-366487887',display_name='tempest-ServerRescueTestJSONUnderV235-server-366487887',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-366487887',id=117,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:10:12Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1bd92af54c2f44f1913c48ef5ebd6c42',ramdisk_id='',reservation_id='r-6xr0lcg4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1402654657',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1402654657-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:10:12Z,user_data=None,user_id='162a66f52a10400aad586654cbabfbfd',uuid=05d236a3-3a52-4bdd-bb11-8956ea342070,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "address": "fa:16:3e:cf:e3:1c", "network": {"id": "d94421cd-9003-407a-863b-8275eed6d7d2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-226703466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "1bd92af54c2f44f1913c48ef5ebd6c42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc9935ef-4d", "ovs_interfaceid": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:10:39 compute-2 nova_compute[226829]: 2026-01-31 08:10:39.518 226833 DEBUG nova.network.os_vif_util [None req-21b7c0df-5a4b-4dc2-b49b-ae577a71935e 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Converting VIF {"id": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "address": "fa:16:3e:cf:e3:1c", "network": {"id": "d94421cd-9003-407a-863b-8275eed6d7d2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-226703466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "1bd92af54c2f44f1913c48ef5ebd6c42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc9935ef-4d", "ovs_interfaceid": "bc9935ef-4dc8-4b06-8b86-0c380a0034f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:10:39 compute-2 nova_compute[226829]: 2026-01-31 08:10:39.519 226833 DEBUG nova.network.os_vif_util [None req-21b7c0df-5a4b-4dc2-b49b-ae577a71935e 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:e3:1c,bridge_name='br-int',has_traffic_filtering=True,id=bc9935ef-4dc8-4b06-8b86-0c380a0034f7,network=Network(d94421cd-9003-407a-863b-8275eed6d7d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc9935ef-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:10:39 compute-2 nova_compute[226829]: 2026-01-31 08:10:39.520 226833 DEBUG os_vif [None req-21b7c0df-5a4b-4dc2-b49b-ae577a71935e 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:e3:1c,bridge_name='br-int',has_traffic_filtering=True,id=bc9935ef-4dc8-4b06-8b86-0c380a0034f7,network=Network(d94421cd-9003-407a-863b-8275eed6d7d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc9935ef-4d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:10:39 compute-2 nova_compute[226829]: 2026-01-31 08:10:39.523 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:39 compute-2 nova_compute[226829]: 2026-01-31 08:10:39.524 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc9935ef-4d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:10:39 compute-2 nova_compute[226829]: 2026-01-31 08:10:39.529 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:10:39 compute-2 nova_compute[226829]: 2026-01-31 08:10:39.534 226833 INFO os_vif [None req-21b7c0df-5a4b-4dc2-b49b-ae577a71935e 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:e3:1c,bridge_name='br-int',has_traffic_filtering=True,id=bc9935ef-4dc8-4b06-8b86-0c380a0034f7,network=Network(d94421cd-9003-407a-863b-8275eed6d7d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc9935ef-4d')
Jan 31 08:10:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:10:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:39.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:40.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:40 compute-2 nova_compute[226829]: 2026-01-31 08:10:40.465 226833 INFO nova.virt.libvirt.driver [None req-21b7c0df-5a4b-4dc2-b49b-ae577a71935e 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Deleting instance files /var/lib/nova/instances/05d236a3-3a52-4bdd-bb11-8956ea342070_del
Jan 31 08:10:40 compute-2 nova_compute[226829]: 2026-01-31 08:10:40.466 226833 INFO nova.virt.libvirt.driver [None req-21b7c0df-5a4b-4dc2-b49b-ae577a71935e 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Deletion of /var/lib/nova/instances/05d236a3-3a52-4bdd-bb11-8956ea342070_del complete
Jan 31 08:10:40 compute-2 ceph-mon[77282]: pgmap v2234: 305 pgs: 305 active+clean; 341 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.1 MiB/s wr, 206 op/s
Jan 31 08:10:40 compute-2 nova_compute[226829]: 2026-01-31 08:10:40.694 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:10:40 compute-2 nova_compute[226829]: 2026-01-31 08:10:40.694 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:10:40 compute-2 nova_compute[226829]: 2026-01-31 08:10:40.721 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:40 compute-2 nova_compute[226829]: 2026-01-31 08:10:40.730 226833 INFO nova.compute.manager [None req-21b7c0df-5a4b-4dc2-b49b-ae577a71935e 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Took 1.48 seconds to destroy the instance on the hypervisor.
Jan 31 08:10:40 compute-2 nova_compute[226829]: 2026-01-31 08:10:40.730 226833 DEBUG oslo.service.loopingcall [None req-21b7c0df-5a4b-4dc2-b49b-ae577a71935e 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:10:40 compute-2 nova_compute[226829]: 2026-01-31 08:10:40.731 226833 DEBUG nova.compute.manager [-] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:10:40 compute-2 nova_compute[226829]: 2026-01-31 08:10:40.731 226833 DEBUG nova.network.neutron [-] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:10:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:41.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:41 compute-2 nova_compute[226829]: 2026-01-31 08:10:41.966 226833 DEBUG nova.compute.manager [req-da95d2d2-f37a-4f8b-b73c-b46243578a43 req-237846cc-97c1-4a3e-bdfe-78020364a638 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Received event network-vif-unplugged-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:10:41 compute-2 nova_compute[226829]: 2026-01-31 08:10:41.966 226833 DEBUG oslo_concurrency.lockutils [req-da95d2d2-f37a-4f8b-b73c-b46243578a43 req-237846cc-97c1-4a3e-bdfe-78020364a638 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:41 compute-2 nova_compute[226829]: 2026-01-31 08:10:41.966 226833 DEBUG oslo_concurrency.lockutils [req-da95d2d2-f37a-4f8b-b73c-b46243578a43 req-237846cc-97c1-4a3e-bdfe-78020364a638 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:41 compute-2 nova_compute[226829]: 2026-01-31 08:10:41.967 226833 DEBUG oslo_concurrency.lockutils [req-da95d2d2-f37a-4f8b-b73c-b46243578a43 req-237846cc-97c1-4a3e-bdfe-78020364a638 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:41 compute-2 nova_compute[226829]: 2026-01-31 08:10:41.967 226833 DEBUG nova.compute.manager [req-da95d2d2-f37a-4f8b-b73c-b46243578a43 req-237846cc-97c1-4a3e-bdfe-78020364a638 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] No waiting events found dispatching network-vif-unplugged-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:10:41 compute-2 nova_compute[226829]: 2026-01-31 08:10:41.967 226833 DEBUG nova.compute.manager [req-da95d2d2-f37a-4f8b-b73c-b46243578a43 req-237846cc-97c1-4a3e-bdfe-78020364a638 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Received event network-vif-unplugged-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:10:41 compute-2 nova_compute[226829]: 2026-01-31 08:10:41.968 226833 DEBUG nova.compute.manager [req-da95d2d2-f37a-4f8b-b73c-b46243578a43 req-237846cc-97c1-4a3e-bdfe-78020364a638 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Received event network-vif-plugged-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:10:41 compute-2 nova_compute[226829]: 2026-01-31 08:10:41.968 226833 DEBUG oslo_concurrency.lockutils [req-da95d2d2-f37a-4f8b-b73c-b46243578a43 req-237846cc-97c1-4a3e-bdfe-78020364a638 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:41 compute-2 nova_compute[226829]: 2026-01-31 08:10:41.968 226833 DEBUG oslo_concurrency.lockutils [req-da95d2d2-f37a-4f8b-b73c-b46243578a43 req-237846cc-97c1-4a3e-bdfe-78020364a638 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:41 compute-2 nova_compute[226829]: 2026-01-31 08:10:41.969 226833 DEBUG oslo_concurrency.lockutils [req-da95d2d2-f37a-4f8b-b73c-b46243578a43 req-237846cc-97c1-4a3e-bdfe-78020364a638 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "05d236a3-3a52-4bdd-bb11-8956ea342070-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:41 compute-2 nova_compute[226829]: 2026-01-31 08:10:41.969 226833 DEBUG nova.compute.manager [req-da95d2d2-f37a-4f8b-b73c-b46243578a43 req-237846cc-97c1-4a3e-bdfe-78020364a638 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] No waiting events found dispatching network-vif-plugged-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:10:41 compute-2 nova_compute[226829]: 2026-01-31 08:10:41.969 226833 WARNING nova.compute.manager [req-da95d2d2-f37a-4f8b-b73c-b46243578a43 req-237846cc-97c1-4a3e-bdfe-78020364a638 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Received unexpected event network-vif-plugged-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 for instance with vm_state rescued and task_state deleting.
Jan 31 08:10:42 compute-2 nova_compute[226829]: 2026-01-31 08:10:42.159 226833 DEBUG nova.network.neutron [-] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:10:42 compute-2 nova_compute[226829]: 2026-01-31 08:10:42.205 226833 INFO nova.compute.manager [-] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Took 1.47 seconds to deallocate network for instance.
Jan 31 08:10:42 compute-2 nova_compute[226829]: 2026-01-31 08:10:42.275 226833 DEBUG oslo_concurrency.lockutils [None req-21b7c0df-5a4b-4dc2-b49b-ae577a71935e 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:42 compute-2 nova_compute[226829]: 2026-01-31 08:10:42.276 226833 DEBUG oslo_concurrency.lockutils [None req-21b7c0df-5a4b-4dc2-b49b-ae577a71935e 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:42 compute-2 nova_compute[226829]: 2026-01-31 08:10:42.322 226833 DEBUG nova.compute.manager [req-05b87d1e-c639-4ddc-bc6b-9f9dd533d4f4 req-12a82bf6-8ad4-42ce-91df-e607af1b7108 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Received event network-vif-deleted-bc9935ef-4dc8-4b06-8b86-0c380a0034f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:10:42 compute-2 nova_compute[226829]: 2026-01-31 08:10:42.343 226833 DEBUG oslo_concurrency.processutils [None req-21b7c0df-5a4b-4dc2-b49b-ae577a71935e 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:42.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:42 compute-2 ceph-mon[77282]: pgmap v2235: 305 pgs: 305 active+clean; 292 MiB data, 1005 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 195 op/s
Jan 31 08:10:42 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:10:42 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1321363103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:42 compute-2 nova_compute[226829]: 2026-01-31 08:10:42.826 226833 DEBUG oslo_concurrency.processutils [None req-21b7c0df-5a4b-4dc2-b49b-ae577a71935e 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:42 compute-2 nova_compute[226829]: 2026-01-31 08:10:42.831 226833 DEBUG nova.compute.provider_tree [None req-21b7c0df-5a4b-4dc2-b49b-ae577a71935e 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:10:42 compute-2 nova_compute[226829]: 2026-01-31 08:10:42.850 226833 DEBUG nova.scheduler.client.report [None req-21b7c0df-5a4b-4dc2-b49b-ae577a71935e 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:10:42 compute-2 nova_compute[226829]: 2026-01-31 08:10:42.873 226833 DEBUG oslo_concurrency.lockutils [None req-21b7c0df-5a4b-4dc2-b49b-ae577a71935e 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:42 compute-2 nova_compute[226829]: 2026-01-31 08:10:42.901 226833 INFO nova.scheduler.client.report [None req-21b7c0df-5a4b-4dc2-b49b-ae577a71935e 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Deleted allocations for instance 05d236a3-3a52-4bdd-bb11-8956ea342070
Jan 31 08:10:42 compute-2 nova_compute[226829]: 2026-01-31 08:10:42.987 226833 DEBUG oslo_concurrency.lockutils [None req-21b7c0df-5a4b-4dc2-b49b-ae577a71935e 162a66f52a10400aad586654cbabfbfd 1bd92af54c2f44f1913c48ef5ebd6c42 - - default default] Lock "05d236a3-3a52-4bdd-bb11-8956ea342070" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1321363103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4087330589' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:43 compute-2 sudo[280963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:10:43 compute-2 sudo[280963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:43 compute-2 sudo[280963]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:43 compute-2 sudo[280988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:10:43 compute-2 sudo[280988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:43 compute-2 sudo[280988]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:43.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:44 compute-2 nova_compute[226829]: 2026-01-31 08:10:44.269 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:44.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:44 compute-2 nova_compute[226829]: 2026-01-31 08:10:44.526 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:44 compute-2 ceph-mon[77282]: pgmap v2236: 305 pgs: 305 active+clean; 249 MiB data, 976 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 718 KiB/s wr, 214 op/s
Jan 31 08:10:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:10:45 compute-2 nova_compute[226829]: 2026-01-31 08:10:45.723 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/4221475382' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:10:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/4221475382' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:10:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:45.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:46.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:46 compute-2 ceph-mon[77282]: pgmap v2237: 305 pgs: 305 active+clean; 167 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 50 KiB/s wr, 229 op/s
Jan 31 08:10:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:47.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:48.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:48 compute-2 nova_compute[226829]: 2026-01-31 08:10:48.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:10:48 compute-2 ceph-mon[77282]: pgmap v2238: 305 pgs: 305 active+clean; 167 MiB data, 935 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 41 KiB/s wr, 222 op/s
Jan 31 08:10:48 compute-2 nova_compute[226829]: 2026-01-31 08:10:48.809 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:48 compute-2 nova_compute[226829]: 2026-01-31 08:10:48.889 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:49 compute-2 nova_compute[226829]: 2026-01-31 08:10:49.527 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:10:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:10:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:49.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:10:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:50.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:50 compute-2 nova_compute[226829]: 2026-01-31 08:10:50.724 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:50 compute-2 ceph-mon[77282]: pgmap v2239: 305 pgs: 305 active+clean; 188 MiB data, 949 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.3 MiB/s wr, 171 op/s
Jan 31 08:10:51 compute-2 nova_compute[226829]: 2026-01-31 08:10:51.234 226833 DEBUG oslo_concurrency.lockutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "4e4e24bf-e5fe-4be2-9d89-52432f07cca0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:51 compute-2 nova_compute[226829]: 2026-01-31 08:10:51.235 226833 DEBUG oslo_concurrency.lockutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "4e4e24bf-e5fe-4be2-9d89-52432f07cca0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:51 compute-2 nova_compute[226829]: 2026-01-31 08:10:51.692 226833 DEBUG nova.compute.manager [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:10:51 compute-2 nova_compute[226829]: 2026-01-31 08:10:51.942 226833 DEBUG oslo_concurrency.lockutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:51 compute-2 nova_compute[226829]: 2026-01-31 08:10:51.942 226833 DEBUG oslo_concurrency.lockutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:51 compute-2 nova_compute[226829]: 2026-01-31 08:10:51.948 226833 DEBUG nova.virt.hardware [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:10:51 compute-2 nova_compute[226829]: 2026-01-31 08:10:51.948 226833 INFO nova.compute.claims [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:10:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:51.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:52 compute-2 podman[281019]: 2026-01-31 08:10:52.182989248 +0000 UTC m=+0.073406668 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:10:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:52.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:52 compute-2 nova_compute[226829]: 2026-01-31 08:10:52.452 226833 DEBUG oslo_concurrency.processutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:52 compute-2 ceph-mon[77282]: pgmap v2240: 305 pgs: 305 active+clean; 191 MiB data, 959 MiB used, 20 GiB / 21 GiB avail; 793 KiB/s rd, 1.8 MiB/s wr, 138 op/s
Jan 31 08:10:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:10:52 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3915785734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:52 compute-2 nova_compute[226829]: 2026-01-31 08:10:52.891 226833 DEBUG oslo_concurrency.processutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:52 compute-2 nova_compute[226829]: 2026-01-31 08:10:52.898 226833 DEBUG nova.compute.provider_tree [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:10:53 compute-2 nova_compute[226829]: 2026-01-31 08:10:53.162 226833 DEBUG nova.scheduler.client.report [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:10:53 compute-2 nova_compute[226829]: 2026-01-31 08:10:53.358 226833 DEBUG oslo_concurrency.lockutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.416s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:53 compute-2 nova_compute[226829]: 2026-01-31 08:10:53.359 226833 DEBUG nova.compute.manager [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:10:53 compute-2 nova_compute[226829]: 2026-01-31 08:10:53.535 226833 DEBUG nova.compute.manager [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:10:53 compute-2 nova_compute[226829]: 2026-01-31 08:10:53.535 226833 DEBUG nova.network.neutron [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:10:53 compute-2 nova_compute[226829]: 2026-01-31 08:10:53.589 226833 INFO nova.virt.libvirt.driver [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:10:53 compute-2 nova_compute[226829]: 2026-01-31 08:10:53.638 226833 DEBUG nova.compute.manager [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:10:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3915785734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:53 compute-2 nova_compute[226829]: 2026-01-31 08:10:53.932 226833 DEBUG nova.compute.manager [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:10:53 compute-2 nova_compute[226829]: 2026-01-31 08:10:53.933 226833 DEBUG nova.virt.libvirt.driver [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:10:53 compute-2 nova_compute[226829]: 2026-01-31 08:10:53.934 226833 INFO nova.virt.libvirt.driver [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Creating image(s)
Jan 31 08:10:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:10:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:53.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:10:53 compute-2 nova_compute[226829]: 2026-01-31 08:10:53.960 226833 DEBUG nova.storage.rbd_utils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 4e4e24bf-e5fe-4be2-9d89-52432f07cca0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:10:53 compute-2 nova_compute[226829]: 2026-01-31 08:10:53.985 226833 DEBUG nova.storage.rbd_utils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 4e4e24bf-e5fe-4be2-9d89-52432f07cca0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:10:54 compute-2 nova_compute[226829]: 2026-01-31 08:10:54.012 226833 DEBUG nova.storage.rbd_utils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 4e4e24bf-e5fe-4be2-9d89-52432f07cca0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:10:54 compute-2 nova_compute[226829]: 2026-01-31 08:10:54.016 226833 DEBUG oslo_concurrency.processutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:54 compute-2 nova_compute[226829]: 2026-01-31 08:10:54.035 226833 DEBUG nova.policy [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5366d122b359489fb9d2bda8d19611a6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4aa06cf35d8c468fb16884f19dc8ce71', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:10:54 compute-2 nova_compute[226829]: 2026-01-31 08:10:54.071 226833 DEBUG oslo_concurrency.processutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:54 compute-2 nova_compute[226829]: 2026-01-31 08:10:54.072 226833 DEBUG oslo_concurrency.lockutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:54 compute-2 nova_compute[226829]: 2026-01-31 08:10:54.072 226833 DEBUG oslo_concurrency.lockutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:54 compute-2 nova_compute[226829]: 2026-01-31 08:10:54.072 226833 DEBUG oslo_concurrency.lockutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:54 compute-2 nova_compute[226829]: 2026-01-31 08:10:54.096 226833 DEBUG nova.storage.rbd_utils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 4e4e24bf-e5fe-4be2-9d89-52432f07cca0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:10:54 compute-2 nova_compute[226829]: 2026-01-31 08:10:54.099 226833 DEBUG oslo_concurrency.processutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 4e4e24bf-e5fe-4be2-9d89-52432f07cca0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:54 compute-2 nova_compute[226829]: 2026-01-31 08:10:54.362 226833 DEBUG oslo_concurrency.processutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 4e4e24bf-e5fe-4be2-9d89-52432f07cca0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:54 compute-2 nova_compute[226829]: 2026-01-31 08:10:54.431 226833 DEBUG nova.storage.rbd_utils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] resizing rbd image 4e4e24bf-e5fe-4be2-9d89-52432f07cca0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:10:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:54.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:54 compute-2 nova_compute[226829]: 2026-01-31 08:10:54.489 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847039.486752, 05d236a3-3a52-4bdd-bb11-8956ea342070 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:10:54 compute-2 nova_compute[226829]: 2026-01-31 08:10:54.489 226833 INFO nova.compute.manager [-] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] VM Stopped (Lifecycle Event)
Jan 31 08:10:54 compute-2 nova_compute[226829]: 2026-01-31 08:10:54.529 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:54 compute-2 nova_compute[226829]: 2026-01-31 08:10:54.534 226833 DEBUG nova.objects.instance [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lazy-loading 'migration_context' on Instance uuid 4e4e24bf-e5fe-4be2-9d89-52432f07cca0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:10:54 compute-2 nova_compute[226829]: 2026-01-31 08:10:54.713 226833 DEBUG nova.compute.manager [None req-2924e5e7-f6b0-48a6-8f9a-ff82f857316a - - - - - -] [instance: 05d236a3-3a52-4bdd-bb11-8956ea342070] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:10:54 compute-2 nova_compute[226829]: 2026-01-31 08:10:54.717 226833 DEBUG nova.virt.libvirt.driver [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:10:54 compute-2 nova_compute[226829]: 2026-01-31 08:10:54.717 226833 DEBUG nova.virt.libvirt.driver [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Ensure instance console log exists: /var/lib/nova/instances/4e4e24bf-e5fe-4be2-9d89-52432f07cca0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:10:54 compute-2 nova_compute[226829]: 2026-01-31 08:10:54.717 226833 DEBUG oslo_concurrency.lockutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:54 compute-2 nova_compute[226829]: 2026-01-31 08:10:54.718 226833 DEBUG oslo_concurrency.lockutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:54 compute-2 nova_compute[226829]: 2026-01-31 08:10:54.718 226833 DEBUG oslo_concurrency.lockutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:54 compute-2 ceph-mon[77282]: pgmap v2241: 305 pgs: 305 active+clean; 194 MiB data, 962 MiB used, 20 GiB / 21 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 124 op/s
Jan 31 08:10:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:10:55 compute-2 nova_compute[226829]: 2026-01-31 08:10:55.726 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:55.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:56 compute-2 nova_compute[226829]: 2026-01-31 08:10:56.274 226833 DEBUG nova.network.neutron [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Successfully created port: a2ea5794-4bd8-4ecd-84df-7b5500101840 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:10:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:56.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:56 compute-2 ceph-mon[77282]: pgmap v2242: 305 pgs: 305 active+clean; 200 MiB data, 962 MiB used, 20 GiB / 21 GiB avail; 381 KiB/s rd, 2.1 MiB/s wr, 95 op/s
Jan 31 08:10:57 compute-2 nova_compute[226829]: 2026-01-31 08:10:57.569 226833 DEBUG nova.network.neutron [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Successfully updated port: a2ea5794-4bd8-4ecd-84df-7b5500101840 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:10:57 compute-2 nova_compute[226829]: 2026-01-31 08:10:57.584 226833 DEBUG oslo_concurrency.lockutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "refresh_cache-4e4e24bf-e5fe-4be2-9d89-52432f07cca0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:10:57 compute-2 nova_compute[226829]: 2026-01-31 08:10:57.584 226833 DEBUG oslo_concurrency.lockutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquired lock "refresh_cache-4e4e24bf-e5fe-4be2-9d89-52432f07cca0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:10:57 compute-2 nova_compute[226829]: 2026-01-31 08:10:57.584 226833 DEBUG nova.network.neutron [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:10:57 compute-2 nova_compute[226829]: 2026-01-31 08:10:57.866 226833 DEBUG nova.compute.manager [req-c0952d96-5e86-43d6-ac4d-fce26564b1c0 req-389d4eb7-107f-44ee-9c13-cd504d73f885 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Received event network-changed-a2ea5794-4bd8-4ecd-84df-7b5500101840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:10:57 compute-2 nova_compute[226829]: 2026-01-31 08:10:57.867 226833 DEBUG nova.compute.manager [req-c0952d96-5e86-43d6-ac4d-fce26564b1c0 req-389d4eb7-107f-44ee-9c13-cd504d73f885 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Refreshing instance network info cache due to event network-changed-a2ea5794-4bd8-4ecd-84df-7b5500101840. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:10:57 compute-2 nova_compute[226829]: 2026-01-31 08:10:57.867 226833 DEBUG oslo_concurrency.lockutils [req-c0952d96-5e86-43d6-ac4d-fce26564b1c0 req-389d4eb7-107f-44ee-9c13-cd504d73f885 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-4e4e24bf-e5fe-4be2-9d89-52432f07cca0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:10:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:57.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:57 compute-2 ceph-mon[77282]: pgmap v2243: 305 pgs: 305 active+clean; 218 MiB data, 989 MiB used, 20 GiB / 21 GiB avail; 368 KiB/s rd, 2.9 MiB/s wr, 78 op/s
Jan 31 08:10:58 compute-2 nova_compute[226829]: 2026-01-31 08:10:58.077 226833 DEBUG nova.network.neutron [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:10:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:58.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:59 compute-2 nova_compute[226829]: 2026-01-31 08:10:59.328 226833 DEBUG nova.network.neutron [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Updating instance_info_cache with network_info: [{"id": "a2ea5794-4bd8-4ecd-84df-7b5500101840", "address": "fa:16:3e:42:3b:a1", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ea5794-4b", "ovs_interfaceid": "a2ea5794-4bd8-4ecd-84df-7b5500101840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:10:59 compute-2 nova_compute[226829]: 2026-01-31 08:10:59.378 226833 DEBUG oslo_concurrency.lockutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Releasing lock "refresh_cache-4e4e24bf-e5fe-4be2-9d89-52432f07cca0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:10:59 compute-2 nova_compute[226829]: 2026-01-31 08:10:59.379 226833 DEBUG nova.compute.manager [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Instance network_info: |[{"id": "a2ea5794-4bd8-4ecd-84df-7b5500101840", "address": "fa:16:3e:42:3b:a1", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ea5794-4b", "ovs_interfaceid": "a2ea5794-4bd8-4ecd-84df-7b5500101840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:10:59 compute-2 nova_compute[226829]: 2026-01-31 08:10:59.379 226833 DEBUG oslo_concurrency.lockutils [req-c0952d96-5e86-43d6-ac4d-fce26564b1c0 req-389d4eb7-107f-44ee-9c13-cd504d73f885 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-4e4e24bf-e5fe-4be2-9d89-52432f07cca0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:10:59 compute-2 nova_compute[226829]: 2026-01-31 08:10:59.380 226833 DEBUG nova.network.neutron [req-c0952d96-5e86-43d6-ac4d-fce26564b1c0 req-389d4eb7-107f-44ee-9c13-cd504d73f885 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Refreshing network info cache for port a2ea5794-4bd8-4ecd-84df-7b5500101840 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:10:59 compute-2 nova_compute[226829]: 2026-01-31 08:10:59.384 226833 DEBUG nova.virt.libvirt.driver [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Start _get_guest_xml network_info=[{"id": "a2ea5794-4bd8-4ecd-84df-7b5500101840", "address": "fa:16:3e:42:3b:a1", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ea5794-4b", "ovs_interfaceid": "a2ea5794-4bd8-4ecd-84df-7b5500101840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:10:59 compute-2 nova_compute[226829]: 2026-01-31 08:10:59.389 226833 WARNING nova.virt.libvirt.driver [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:10:59 compute-2 nova_compute[226829]: 2026-01-31 08:10:59.402 226833 DEBUG nova.virt.libvirt.host [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:10:59 compute-2 nova_compute[226829]: 2026-01-31 08:10:59.403 226833 DEBUG nova.virt.libvirt.host [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:10:59 compute-2 nova_compute[226829]: 2026-01-31 08:10:59.406 226833 DEBUG nova.virt.libvirt.host [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:10:59 compute-2 nova_compute[226829]: 2026-01-31 08:10:59.407 226833 DEBUG nova.virt.libvirt.host [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:10:59 compute-2 nova_compute[226829]: 2026-01-31 08:10:59.408 226833 DEBUG nova.virt.libvirt.driver [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:10:59 compute-2 nova_compute[226829]: 2026-01-31 08:10:59.409 226833 DEBUG nova.virt.hardware [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:10:59 compute-2 nova_compute[226829]: 2026-01-31 08:10:59.409 226833 DEBUG nova.virt.hardware [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:10:59 compute-2 nova_compute[226829]: 2026-01-31 08:10:59.409 226833 DEBUG nova.virt.hardware [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:10:59 compute-2 nova_compute[226829]: 2026-01-31 08:10:59.410 226833 DEBUG nova.virt.hardware [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:10:59 compute-2 nova_compute[226829]: 2026-01-31 08:10:59.411 226833 DEBUG nova.virt.hardware [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:10:59 compute-2 nova_compute[226829]: 2026-01-31 08:10:59.411 226833 DEBUG nova.virt.hardware [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:10:59 compute-2 nova_compute[226829]: 2026-01-31 08:10:59.411 226833 DEBUG nova.virt.hardware [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:10:59 compute-2 nova_compute[226829]: 2026-01-31 08:10:59.411 226833 DEBUG nova.virt.hardware [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:10:59 compute-2 nova_compute[226829]: 2026-01-31 08:10:59.412 226833 DEBUG nova.virt.hardware [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:10:59 compute-2 nova_compute[226829]: 2026-01-31 08:10:59.412 226833 DEBUG nova.virt.hardware [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:10:59 compute-2 nova_compute[226829]: 2026-01-31 08:10:59.412 226833 DEBUG nova.virt.hardware [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:10:59 compute-2 nova_compute[226829]: 2026-01-31 08:10:59.416 226833 DEBUG oslo_concurrency.processutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:59 compute-2 nova_compute[226829]: 2026-01-31 08:10:59.531 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:10:59 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1226629130' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:10:59 compute-2 nova_compute[226829]: 2026-01-31 08:10:59.827 226833 DEBUG oslo_concurrency.processutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:59 compute-2 nova_compute[226829]: 2026-01-31 08:10:59.858 226833 DEBUG nova.storage.rbd_utils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 4e4e24bf-e5fe-4be2-9d89-52432f07cca0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:10:59 compute-2 nova_compute[226829]: 2026-01-31 08:10:59.862 226833 DEBUG oslo_concurrency.processutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:10:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:10:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:59.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:11:00 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3936287298' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.262 226833 DEBUG oslo_concurrency.processutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.264 226833 DEBUG nova.virt.libvirt.vif [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:10:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-921122999',display_name='tempest-₡-921122999',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest--921122999',id=120,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4aa06cf35d8c468fb16884f19dc8ce71',ramdisk_id='',reservation_id='r-y30ad00c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-327201738',owner_user_name='tempest-ServersTestJSON-327201738-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=No
ne,updated_at=2026-01-31T08:10:53Z,user_data=None,user_id='5366d122b359489fb9d2bda8d19611a6',uuid=4e4e24bf-e5fe-4be2-9d89-52432f07cca0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a2ea5794-4bd8-4ecd-84df-7b5500101840", "address": "fa:16:3e:42:3b:a1", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ea5794-4b", "ovs_interfaceid": "a2ea5794-4bd8-4ecd-84df-7b5500101840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.264 226833 DEBUG nova.network.os_vif_util [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converting VIF {"id": "a2ea5794-4bd8-4ecd-84df-7b5500101840", "address": "fa:16:3e:42:3b:a1", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ea5794-4b", "ovs_interfaceid": "a2ea5794-4bd8-4ecd-84df-7b5500101840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.265 226833 DEBUG nova.network.os_vif_util [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:3b:a1,bridge_name='br-int',has_traffic_filtering=True,id=a2ea5794-4bd8-4ecd-84df-7b5500101840,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2ea5794-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.267 226833 DEBUG nova.objects.instance [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e4e24bf-e5fe-4be2-9d89-52432f07cca0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.289 226833 DEBUG nova.virt.libvirt.driver [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:11:00 compute-2 nova_compute[226829]:   <uuid>4e4e24bf-e5fe-4be2-9d89-52432f07cca0</uuid>
Jan 31 08:11:00 compute-2 nova_compute[226829]:   <name>instance-00000078</name>
Jan 31 08:11:00 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:11:00 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:11:00 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <nova:name>tempest-₡-921122999</nova:name>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:10:59</nova:creationTime>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:11:00 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:11:00 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:11:00 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:11:00 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:11:00 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:11:00 compute-2 nova_compute[226829]:         <nova:user uuid="5366d122b359489fb9d2bda8d19611a6">tempest-ServersTestJSON-327201738-project-member</nova:user>
Jan 31 08:11:00 compute-2 nova_compute[226829]:         <nova:project uuid="4aa06cf35d8c468fb16884f19dc8ce71">tempest-ServersTestJSON-327201738</nova:project>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:11:00 compute-2 nova_compute[226829]:         <nova:port uuid="a2ea5794-4bd8-4ecd-84df-7b5500101840">
Jan 31 08:11:00 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:11:00 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:11:00 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <system>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <entry name="serial">4e4e24bf-e5fe-4be2-9d89-52432f07cca0</entry>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <entry name="uuid">4e4e24bf-e5fe-4be2-9d89-52432f07cca0</entry>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     </system>
Jan 31 08:11:00 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:11:00 compute-2 nova_compute[226829]:   <os>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:   </os>
Jan 31 08:11:00 compute-2 nova_compute[226829]:   <features>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:   </features>
Jan 31 08:11:00 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:11:00 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:11:00 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/4e4e24bf-e5fe-4be2-9d89-52432f07cca0_disk">
Jan 31 08:11:00 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       </source>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:11:00 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/4e4e24bf-e5fe-4be2-9d89-52432f07cca0_disk.config">
Jan 31 08:11:00 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       </source>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:11:00 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:42:3b:a1"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <target dev="tapa2ea5794-4b"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/4e4e24bf-e5fe-4be2-9d89-52432f07cca0/console.log" append="off"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <video>
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     </video>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:11:00 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:11:00 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:11:00 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:11:00 compute-2 nova_compute[226829]: </domain>
Jan 31 08:11:00 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.290 226833 DEBUG nova.compute.manager [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Preparing to wait for external event network-vif-plugged-a2ea5794-4bd8-4ecd-84df-7b5500101840 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.291 226833 DEBUG oslo_concurrency.lockutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "4e4e24bf-e5fe-4be2-9d89-52432f07cca0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.291 226833 DEBUG oslo_concurrency.lockutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "4e4e24bf-e5fe-4be2-9d89-52432f07cca0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.291 226833 DEBUG oslo_concurrency.lockutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "4e4e24bf-e5fe-4be2-9d89-52432f07cca0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.292 226833 DEBUG nova.virt.libvirt.vif [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:10:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-921122999',display_name='tempest-₡-921122999',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest--921122999',id=120,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4aa06cf35d8c468fb16884f19dc8ce71',ramdisk_id='',reservation_id='r-y30ad00c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-327201738',owner_user_name='tempest-ServersTestJSON-327201738-project-member'},tags=TagList,task_state='spawning',terminated_at=None,truste
d_certs=None,updated_at=2026-01-31T08:10:53Z,user_data=None,user_id='5366d122b359489fb9d2bda8d19611a6',uuid=4e4e24bf-e5fe-4be2-9d89-52432f07cca0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a2ea5794-4bd8-4ecd-84df-7b5500101840", "address": "fa:16:3e:42:3b:a1", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ea5794-4b", "ovs_interfaceid": "a2ea5794-4bd8-4ecd-84df-7b5500101840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.293 226833 DEBUG nova.network.os_vif_util [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converting VIF {"id": "a2ea5794-4bd8-4ecd-84df-7b5500101840", "address": "fa:16:3e:42:3b:a1", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ea5794-4b", "ovs_interfaceid": "a2ea5794-4bd8-4ecd-84df-7b5500101840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.294 226833 DEBUG nova.network.os_vif_util [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:3b:a1,bridge_name='br-int',has_traffic_filtering=True,id=a2ea5794-4bd8-4ecd-84df-7b5500101840,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2ea5794-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.294 226833 DEBUG os_vif [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:3b:a1,bridge_name='br-int',has_traffic_filtering=True,id=a2ea5794-4bd8-4ecd-84df-7b5500101840,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2ea5794-4b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.295 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.296 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.296 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.306 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.307 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa2ea5794-4b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.308 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa2ea5794-4b, col_values=(('external_ids', {'iface-id': 'a2ea5794-4bd8-4ecd-84df-7b5500101840', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:42:3b:a1', 'vm-uuid': '4e4e24bf-e5fe-4be2-9d89-52432f07cca0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.309 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:00 compute-2 NetworkManager[48999]: <info>  [1769847060.3113] manager: (tapa2ea5794-4b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/246)
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.312 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.316 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.318 226833 INFO os_vif [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:3b:a1,bridge_name='br-int',has_traffic_filtering=True,id=a2ea5794-4bd8-4ecd-84df-7b5500101840,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2ea5794-4b')
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.374 226833 DEBUG nova.virt.libvirt.driver [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.375 226833 DEBUG nova.virt.libvirt.driver [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.375 226833 DEBUG nova.virt.libvirt.driver [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] No VIF found with MAC fa:16:3e:42:3b:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.376 226833 INFO nova.virt.libvirt.driver [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Using config drive
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.415 226833 DEBUG nova.storage.rbd_utils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 4e4e24bf-e5fe-4be2-9d89-52432f07cca0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:11:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:00.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:00 compute-2 ceph-mon[77282]: pgmap v2244: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 376 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 31 08:11:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1226629130' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3936287298' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:00 compute-2 nova_compute[226829]: 2026-01-31 08:11:00.728 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:01 compute-2 nova_compute[226829]: 2026-01-31 08:11:01.257 226833 INFO nova.virt.libvirt.driver [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Creating config drive at /var/lib/nova/instances/4e4e24bf-e5fe-4be2-9d89-52432f07cca0/disk.config
Jan 31 08:11:01 compute-2 nova_compute[226829]: 2026-01-31 08:11:01.264 226833 DEBUG oslo_concurrency.processutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e4e24bf-e5fe-4be2-9d89-52432f07cca0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmprv85qise execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:11:01 compute-2 nova_compute[226829]: 2026-01-31 08:11:01.338 226833 DEBUG nova.network.neutron [req-c0952d96-5e86-43d6-ac4d-fce26564b1c0 req-389d4eb7-107f-44ee-9c13-cd504d73f885 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Updated VIF entry in instance network info cache for port a2ea5794-4bd8-4ecd-84df-7b5500101840. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:11:01 compute-2 nova_compute[226829]: 2026-01-31 08:11:01.339 226833 DEBUG nova.network.neutron [req-c0952d96-5e86-43d6-ac4d-fce26564b1c0 req-389d4eb7-107f-44ee-9c13-cd504d73f885 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Updating instance_info_cache with network_info: [{"id": "a2ea5794-4bd8-4ecd-84df-7b5500101840", "address": "fa:16:3e:42:3b:a1", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ea5794-4b", "ovs_interfaceid": "a2ea5794-4bd8-4ecd-84df-7b5500101840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:11:01 compute-2 nova_compute[226829]: 2026-01-31 08:11:01.365 226833 DEBUG oslo_concurrency.lockutils [req-c0952d96-5e86-43d6-ac4d-fce26564b1c0 req-389d4eb7-107f-44ee-9c13-cd504d73f885 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-4e4e24bf-e5fe-4be2-9d89-52432f07cca0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:11:01 compute-2 nova_compute[226829]: 2026-01-31 08:11:01.391 226833 DEBUG oslo_concurrency.processutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e4e24bf-e5fe-4be2-9d89-52432f07cca0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmprv85qise" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:11:01 compute-2 nova_compute[226829]: 2026-01-31 08:11:01.418 226833 DEBUG nova.storage.rbd_utils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 4e4e24bf-e5fe-4be2-9d89-52432f07cca0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:11:01 compute-2 nova_compute[226829]: 2026-01-31 08:11:01.422 226833 DEBUG oslo_concurrency.processutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e4e24bf-e5fe-4be2-9d89-52432f07cca0/disk.config 4e4e24bf-e5fe-4be2-9d89-52432f07cca0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:11:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:11:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:01.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:11:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:02.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:02 compute-2 nova_compute[226829]: 2026-01-31 08:11:02.568 226833 DEBUG oslo_concurrency.processutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e4e24bf-e5fe-4be2-9d89-52432f07cca0/disk.config 4e4e24bf-e5fe-4be2-9d89-52432f07cca0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:11:02 compute-2 nova_compute[226829]: 2026-01-31 08:11:02.569 226833 INFO nova.virt.libvirt.driver [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Deleting local config drive /var/lib/nova/instances/4e4e24bf-e5fe-4be2-9d89-52432f07cca0/disk.config because it was imported into RBD.
Jan 31 08:11:02 compute-2 kernel: tapa2ea5794-4b: entered promiscuous mode
Jan 31 08:11:02 compute-2 NetworkManager[48999]: <info>  [1769847062.6094] manager: (tapa2ea5794-4b): new Tun device (/org/freedesktop/NetworkManager/Devices/247)
Jan 31 08:11:02 compute-2 nova_compute[226829]: 2026-01-31 08:11:02.610 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:02 compute-2 ovn_controller[133834]: 2026-01-31T08:11:02Z|00503|binding|INFO|Claiming lport a2ea5794-4bd8-4ecd-84df-7b5500101840 for this chassis.
Jan 31 08:11:02 compute-2 ovn_controller[133834]: 2026-01-31T08:11:02Z|00504|binding|INFO|a2ea5794-4bd8-4ecd-84df-7b5500101840: Claiming fa:16:3e:42:3b:a1 10.100.0.10
Jan 31 08:11:02 compute-2 nova_compute[226829]: 2026-01-31 08:11:02.611 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:02 compute-2 nova_compute[226829]: 2026-01-31 08:11:02.617 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:02.630 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:3b:a1 10.100.0.10'], port_security=['fa:16:3e:42:3b:a1 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4e4e24bf-e5fe-4be2-9d89-52432f07cca0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b88251fc-7610-460a-ba55-2ed186c6f696', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4aa06cf35d8c468fb16884f19dc8ce71', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9f076789-2616-4234-8eab-1fc3da7d63b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7bb3690e-e43b-4d54-9d64-4797e471bf50, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=a2ea5794-4bd8-4ecd-84df-7b5500101840) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:11:02 compute-2 systemd-udevd[281375]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:02.633 143841 INFO neutron.agent.ovn.metadata.agent [-] Port a2ea5794-4bd8-4ecd-84df-7b5500101840 in datapath b88251fc-7610-460a-ba55-2ed186c6f696 bound to our chassis
Jan 31 08:11:02 compute-2 systemd-machined[195142]: New machine qemu-54-instance-00000078.
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:02.635 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b88251fc-7610-460a-ba55-2ed186c6f696
Jan 31 08:11:02 compute-2 nova_compute[226829]: 2026-01-31 08:11:02.642 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:02 compute-2 NetworkManager[48999]: <info>  [1769847062.6448] device (tapa2ea5794-4b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:11:02 compute-2 NetworkManager[48999]: <info>  [1769847062.6457] device (tapa2ea5794-4b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:11:02 compute-2 ovn_controller[133834]: 2026-01-31T08:11:02Z|00505|binding|INFO|Setting lport a2ea5794-4bd8-4ecd-84df-7b5500101840 ovn-installed in OVS
Jan 31 08:11:02 compute-2 ovn_controller[133834]: 2026-01-31T08:11:02Z|00506|binding|INFO|Setting lport a2ea5794-4bd8-4ecd-84df-7b5500101840 up in Southbound
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:02.646 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a62d86e2-e9a8-4b3f-a4c6-976ea21bca0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:02.647 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb88251fc-71 in ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:11:02 compute-2 nova_compute[226829]: 2026-01-31 08:11:02.647 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:02 compute-2 systemd[1]: Started Virtual Machine qemu-54-instance-00000078.
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:02.652 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb88251fc-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:02.652 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a63b49b6-cabe-47de-ba23-7a8f7f5e1005]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:02.653 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[dcd33db2-9e5d-42b9-b1da-322458d13f2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:02.663 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[d3851f47-905a-44ae-b7f9-a0946cc067cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:02.682 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e46fe56b-f5db-4b45-a066-a9cd3933ce21]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:02.705 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[a6addfc8-5e35-4a17-92a9-cb6ad9ae118f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:02.710 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3c41c3ce-aa8b-418e-9bec-bdc9c2f7c818]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:02 compute-2 NetworkManager[48999]: <info>  [1769847062.7109] manager: (tapb88251fc-70): new Veth device (/org/freedesktop/NetworkManager/Devices/248)
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:02.727 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8b19c3-072b-4d91-b535-48a3aefa4aed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:02.730 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[5fab49fd-d984-4e20-8517-18bb7ec9ca35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:02 compute-2 NetworkManager[48999]: <info>  [1769847062.7441] device (tapb88251fc-70): carrier: link connected
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:02.747 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[33d11a7f-4c1b-495d-99f7-605aae1a4995]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:02.757 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0c8dac7f-4089-4565-b280-5b8842f84ade]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb88251fc-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2a:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 156], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731521, 'reachable_time': 35097, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281408, 'error': None, 'target': 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:02.768 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[787796d0-f2ca-4235-8f30-61b17c1b8695]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:2a68'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 731521, 'tstamp': 731521}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281409, 'error': None, 'target': 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:02.777 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5eaf938f-e505-47d9-aa54-afc253b53fe3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb88251fc-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2a:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 156], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731521, 'reachable_time': 35097, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281410, 'error': None, 'target': 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:02.799 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[74720d5c-3a6c-4e34-9bed-6fa0cb6c6eaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:02.836 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b0c35b56-f715-4ddc-ada0-f45f9a16580d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:02.837 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb88251fc-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:02.838 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:02.839 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb88251fc-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:11:02 compute-2 nova_compute[226829]: 2026-01-31 08:11:02.877 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:02 compute-2 NetworkManager[48999]: <info>  [1769847062.8785] manager: (tapb88251fc-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/249)
Jan 31 08:11:02 compute-2 kernel: tapb88251fc-70: entered promiscuous mode
Jan 31 08:11:02 compute-2 nova_compute[226829]: 2026-01-31 08:11:02.881 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:02.882 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb88251fc-70, col_values=(('external_ids', {'iface-id': '950341c4-aa2a-4261-8207-ff7e92fd4830'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:11:02 compute-2 nova_compute[226829]: 2026-01-31 08:11:02.883 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:02 compute-2 ovn_controller[133834]: 2026-01-31T08:11:02Z|00507|binding|INFO|Releasing lport 950341c4-aa2a-4261-8207-ff7e92fd4830 from this chassis (sb_readonly=0)
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:02.885 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b88251fc-7610-460a-ba55-2ed186c6f696.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b88251fc-7610-460a-ba55-2ed186c6f696.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:02.887 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2d5a9eab-d727-406a-bd12-caac90a171ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:02.888 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-b88251fc-7610-460a-ba55-2ed186c6f696
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/b88251fc-7610-460a-ba55-2ed186c6f696.pid.haproxy
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID b88251fc-7610-460a-ba55-2ed186c6f696
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:11:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:02.890 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'env', 'PROCESS_TAG=haproxy-b88251fc-7610-460a-ba55-2ed186c6f696', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b88251fc-7610-460a-ba55-2ed186c6f696.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:11:02 compute-2 ceph-mon[77282]: pgmap v2245: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 304 KiB/s rd, 2.6 MiB/s wr, 73 op/s
Jan 31 08:11:02 compute-2 nova_compute[226829]: 2026-01-31 08:11:02.891 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.021 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847063.0210822, 4e4e24bf-e5fe-4be2-9d89-52432f07cca0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.022 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] VM Started (Lifecycle Event)
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.045 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.049 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847063.0213375, 4e4e24bf-e5fe-4be2-9d89-52432f07cca0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.049 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] VM Paused (Lifecycle Event)
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.055 226833 DEBUG nova.compute.manager [req-3b441a0c-5673-4f49-973c-79b50d40efb8 req-7094f756-7b13-47f0-a567-ff7c8c3d553c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Received event network-vif-plugged-a2ea5794-4bd8-4ecd-84df-7b5500101840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.055 226833 DEBUG oslo_concurrency.lockutils [req-3b441a0c-5673-4f49-973c-79b50d40efb8 req-7094f756-7b13-47f0-a567-ff7c8c3d553c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4e4e24bf-e5fe-4be2-9d89-52432f07cca0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.055 226833 DEBUG oslo_concurrency.lockutils [req-3b441a0c-5673-4f49-973c-79b50d40efb8 req-7094f756-7b13-47f0-a567-ff7c8c3d553c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4e4e24bf-e5fe-4be2-9d89-52432f07cca0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.055 226833 DEBUG oslo_concurrency.lockutils [req-3b441a0c-5673-4f49-973c-79b50d40efb8 req-7094f756-7b13-47f0-a567-ff7c8c3d553c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4e4e24bf-e5fe-4be2-9d89-52432f07cca0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.056 226833 DEBUG nova.compute.manager [req-3b441a0c-5673-4f49-973c-79b50d40efb8 req-7094f756-7b13-47f0-a567-ff7c8c3d553c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Processing event network-vif-plugged-a2ea5794-4bd8-4ecd-84df-7b5500101840 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.057 226833 DEBUG nova.compute.manager [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.060 226833 DEBUG nova.virt.libvirt.driver [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.062 226833 INFO nova.virt.libvirt.driver [-] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Instance spawned successfully.
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.063 226833 DEBUG nova.virt.libvirt.driver [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.067 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.070 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847063.059319, 4e4e24bf-e5fe-4be2-9d89-52432f07cca0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.071 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] VM Resumed (Lifecycle Event)
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.086 226833 DEBUG nova.virt.libvirt.driver [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.086 226833 DEBUG nova.virt.libvirt.driver [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.086 226833 DEBUG nova.virt.libvirt.driver [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.087 226833 DEBUG nova.virt.libvirt.driver [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.087 226833 DEBUG nova.virt.libvirt.driver [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.087 226833 DEBUG nova.virt.libvirt.driver [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.091 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.094 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.124 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.193 226833 INFO nova.compute.manager [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Took 9.26 seconds to spawn the instance on the hypervisor.
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.195 226833 DEBUG nova.compute.manager [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:11:03 compute-2 podman[281483]: 2026-01-31 08:11:03.279846906 +0000 UTC m=+0.054918854 container create 4dc8e6042a3e86e4aa4721876342b6ca9af66aa3eeb166daca855d7f0e471080 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.297 226833 INFO nova.compute.manager [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Took 11.39 seconds to build instance.
Jan 31 08:11:03 compute-2 systemd[1]: Started libpod-conmon-4dc8e6042a3e86e4aa4721876342b6ca9af66aa3eeb166daca855d7f0e471080.scope.
Jan 31 08:11:03 compute-2 nova_compute[226829]: 2026-01-31 08:11:03.336 226833 DEBUG oslo_concurrency.lockutils [None req-6f126421-b198-4a5b-8b47-3f6c30c64826 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "4e4e24bf-e5fe-4be2-9d89-52432f07cca0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:11:03 compute-2 podman[281483]: 2026-01-31 08:11:03.246109398 +0000 UTC m=+0.021181406 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:11:03 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:11:03 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f84e6587dc69dfe02207f20ebab9236133448782a1fe80be35b6d91298bdb3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:11:03 compute-2 podman[281483]: 2026-01-31 08:11:03.357422587 +0000 UTC m=+0.132494555 container init 4dc8e6042a3e86e4aa4721876342b6ca9af66aa3eeb166daca855d7f0e471080 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:11:03 compute-2 podman[281483]: 2026-01-31 08:11:03.362190917 +0000 UTC m=+0.137262865 container start 4dc8e6042a3e86e4aa4721876342b6ca9af66aa3eeb166daca855d7f0e471080 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:11:03 compute-2 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[281499]: [NOTICE]   (281519) : New worker (281523) forked
Jan 31 08:11:03 compute-2 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[281499]: [NOTICE]   (281519) : Loading success.
Jan 31 08:11:03 compute-2 podman[281496]: 2026-01-31 08:11:03.395632067 +0000 UTC m=+0.075332841 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:11:03 compute-2 sudo[281534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:11:03 compute-2 sudo[281534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:03 compute-2 sudo[281534]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:03 compute-2 sudo[281559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:11:03 compute-2 sudo[281559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:03 compute-2 sudo[281559]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:03.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:11:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:04.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:11:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:11:04 compute-2 ceph-mon[77282]: pgmap v2246: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 91 KiB/s rd, 2.1 MiB/s wr, 45 op/s
Jan 31 08:11:05 compute-2 nova_compute[226829]: 2026-01-31 08:11:05.190 226833 DEBUG nova.compute.manager [req-5a58eac1-7db7-49fb-8c73-c40aa9f4c2d5 req-8f483743-9e4a-4d0b-a776-ee3590fa3428 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Received event network-vif-plugged-a2ea5794-4bd8-4ecd-84df-7b5500101840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:11:05 compute-2 nova_compute[226829]: 2026-01-31 08:11:05.190 226833 DEBUG oslo_concurrency.lockutils [req-5a58eac1-7db7-49fb-8c73-c40aa9f4c2d5 req-8f483743-9e4a-4d0b-a776-ee3590fa3428 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4e4e24bf-e5fe-4be2-9d89-52432f07cca0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:11:05 compute-2 nova_compute[226829]: 2026-01-31 08:11:05.190 226833 DEBUG oslo_concurrency.lockutils [req-5a58eac1-7db7-49fb-8c73-c40aa9f4c2d5 req-8f483743-9e4a-4d0b-a776-ee3590fa3428 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4e4e24bf-e5fe-4be2-9d89-52432f07cca0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:11:05 compute-2 nova_compute[226829]: 2026-01-31 08:11:05.191 226833 DEBUG oslo_concurrency.lockutils [req-5a58eac1-7db7-49fb-8c73-c40aa9f4c2d5 req-8f483743-9e4a-4d0b-a776-ee3590fa3428 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4e4e24bf-e5fe-4be2-9d89-52432f07cca0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:11:05 compute-2 nova_compute[226829]: 2026-01-31 08:11:05.191 226833 DEBUG nova.compute.manager [req-5a58eac1-7db7-49fb-8c73-c40aa9f4c2d5 req-8f483743-9e4a-4d0b-a776-ee3590fa3428 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] No waiting events found dispatching network-vif-plugged-a2ea5794-4bd8-4ecd-84df-7b5500101840 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:11:05 compute-2 nova_compute[226829]: 2026-01-31 08:11:05.192 226833 WARNING nova.compute.manager [req-5a58eac1-7db7-49fb-8c73-c40aa9f4c2d5 req-8f483743-9e4a-4d0b-a776-ee3590fa3428 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Received unexpected event network-vif-plugged-a2ea5794-4bd8-4ecd-84df-7b5500101840 for instance with vm_state active and task_state None.
Jan 31 08:11:05 compute-2 nova_compute[226829]: 2026-01-31 08:11:05.311 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:05 compute-2 nova_compute[226829]: 2026-01-31 08:11:05.731 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:05 compute-2 ceph-mon[77282]: pgmap v2247: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 516 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Jan 31 08:11:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:11:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:05.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:11:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:06.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:06.883 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:11:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:06.884 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:11:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:06.884 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:11:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:07.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:08.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:08 compute-2 ceph-mon[77282]: pgmap v2248: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 31 08:11:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:11:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:09.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:10 compute-2 nova_compute[226829]: 2026-01-31 08:11:10.315 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:10.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:10 compute-2 ceph-mon[77282]: pgmap v2249: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 87 op/s
Jan 31 08:11:10 compute-2 nova_compute[226829]: 2026-01-31 08:11:10.733 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:11.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:12.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:12 compute-2 ceph-mon[77282]: pgmap v2250: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 74 op/s
Jan 31 08:11:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1691286314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:11:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:13.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:11:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:14.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:14 compute-2 ceph-mon[77282]: pgmap v2251: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 31 08:11:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3639175740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:11:15 compute-2 nova_compute[226829]: 2026-01-31 08:11:15.320 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:15 compute-2 ovn_controller[133834]: 2026-01-31T08:11:15Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:42:3b:a1 10.100.0.10
Jan 31 08:11:15 compute-2 ovn_controller[133834]: 2026-01-31T08:11:15Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:42:3b:a1 10.100.0.10
Jan 31 08:11:15 compute-2 nova_compute[226829]: 2026-01-31 08:11:15.735 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:11:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:15.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:11:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:16.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:17 compute-2 ceph-mon[77282]: pgmap v2252: 305 pgs: 305 active+clean; 264 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 488 KiB/s wr, 77 op/s
Jan 31 08:11:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3159024888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:11:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:17.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:11:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3324127290' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:18 compute-2 ceph-mon[77282]: pgmap v2253: 305 pgs: 305 active+clean; 330 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.8 MiB/s wr, 122 op/s
Jan 31 08:11:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:18.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:11:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:11:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:20.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:11:20 compute-2 nova_compute[226829]: 2026-01-31 08:11:20.322 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:11:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:20.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:11:20 compute-2 nova_compute[226829]: 2026-01-31 08:11:20.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:11:20 compute-2 nova_compute[226829]: 2026-01-31 08:11:20.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:11:20 compute-2 ceph-mon[77282]: pgmap v2254: 305 pgs: 305 active+clean; 386 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 871 KiB/s rd, 6.5 MiB/s wr, 114 op/s
Jan 31 08:11:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/788118537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/810772048' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:20 compute-2 nova_compute[226829]: 2026-01-31 08:11:20.737 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/728860884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:21 compute-2 sudo[281593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:11:21 compute-2 sudo[281593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:21 compute-2 sudo[281593]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:21 compute-2 sudo[281618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:11:21 compute-2 sudo[281618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:21 compute-2 sudo[281618]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:21 compute-2 sudo[281643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:11:21 compute-2 sudo[281643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:21 compute-2 sudo[281643]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:22.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:22 compute-2 sudo[281668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:11:22 compute-2 sudo[281668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:22 compute-2 sudo[281668]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:11:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:22.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:11:22 compute-2 ceph-mon[77282]: pgmap v2255: 305 pgs: 305 active+clean; 401 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 369 KiB/s rd, 6.9 MiB/s wr, 133 op/s
Jan 31 08:11:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1219824679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:22 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 08:11:22 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:11:22 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:11:22 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:11:22 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:11:22 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:11:22 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:11:23 compute-2 podman[281724]: 2026-01-31 08:11:23.199799943 +0000 UTC m=+0.075856205 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 08:11:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:24.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:24 compute-2 sudo[281751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:11:24 compute-2 sudo[281751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:24 compute-2 nova_compute[226829]: 2026-01-31 08:11:24.029 226833 DEBUG nova.compute.manager [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Jan 31 08:11:24 compute-2 sudo[281751]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:24 compute-2 sudo[281776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:11:24 compute-2 sudo[281776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:24 compute-2 sudo[281776]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:24.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:24 compute-2 nova_compute[226829]: 2026-01-31 08:11:24.572 226833 DEBUG oslo_concurrency.lockutils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:11:24 compute-2 nova_compute[226829]: 2026-01-31 08:11:24.572 226833 DEBUG oslo_concurrency.lockutils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:11:24 compute-2 ceph-mon[77282]: pgmap v2256: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 380 KiB/s rd, 7.4 MiB/s wr, 148 op/s
Jan 31 08:11:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3711382014' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1353661443' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:11:24 compute-2 nova_compute[226829]: 2026-01-31 08:11:24.997 226833 DEBUG nova.objects.instance [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'pci_requests' on Instance uuid af8d85d9-c7a2-4709-a234-19511f3e4395 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:11:25 compute-2 nova_compute[226829]: 2026-01-31 08:11:25.263 226833 DEBUG nova.virt.hardware [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:11:25 compute-2 nova_compute[226829]: 2026-01-31 08:11:25.264 226833 INFO nova.compute.claims [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:11:25 compute-2 nova_compute[226829]: 2026-01-31 08:11:25.264 226833 DEBUG nova.objects.instance [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'resources' on Instance uuid af8d85d9-c7a2-4709-a234-19511f3e4395 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:11:25 compute-2 nova_compute[226829]: 2026-01-31 08:11:25.329 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:25 compute-2 nova_compute[226829]: 2026-01-31 08:11:25.365 226833 DEBUG nova.objects.instance [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'pci_devices' on Instance uuid af8d85d9-c7a2-4709-a234-19511f3e4395 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:11:25 compute-2 nova_compute[226829]: 2026-01-31 08:11:25.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:11:25 compute-2 nova_compute[226829]: 2026-01-31 08:11:25.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:11:25 compute-2 nova_compute[226829]: 2026-01-31 08:11:25.739 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:25 compute-2 nova_compute[226829]: 2026-01-31 08:11:25.775 226833 INFO nova.compute.resource_tracker [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Updating resource usage from migration 8b4ee4de-01d3-44bb-b219-d1f249fed0f0
Jan 31 08:11:25 compute-2 nova_compute[226829]: 2026-01-31 08:11:25.775 226833 DEBUG nova.compute.resource_tracker [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Starting to track incoming migration 8b4ee4de-01d3-44bb-b219-d1f249fed0f0 with flavor e3bd1dad-95f3-4ed9-94b4-27245cd798b5 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 31 08:11:25 compute-2 nova_compute[226829]: 2026-01-31 08:11:25.894 226833 DEBUG oslo_concurrency.processutils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:11:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:26.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:11:26 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2252345541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:26 compute-2 nova_compute[226829]: 2026-01-31 08:11:26.300 226833 DEBUG oslo_concurrency.processutils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:11:26 compute-2 nova_compute[226829]: 2026-01-31 08:11:26.306 226833 DEBUG nova.compute.provider_tree [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:11:26 compute-2 nova_compute[226829]: 2026-01-31 08:11:26.458 226833 DEBUG nova.scheduler.client.report [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:11:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:26.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:26 compute-2 nova_compute[226829]: 2026-01-31 08:11:26.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:11:26 compute-2 nova_compute[226829]: 2026-01-31 08:11:26.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:11:26 compute-2 nova_compute[226829]: 2026-01-31 08:11:26.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:11:26 compute-2 nova_compute[226829]: 2026-01-31 08:11:26.709 226833 DEBUG oslo_concurrency.lockutils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 2.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:11:26 compute-2 nova_compute[226829]: 2026-01-31 08:11:26.709 226833 INFO nova.compute.manager [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Migrating
Jan 31 08:11:26 compute-2 nova_compute[226829]: 2026-01-31 08:11:26.759 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-4e4e24bf-e5fe-4be2-9d89-52432f07cca0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:11:26 compute-2 nova_compute[226829]: 2026-01-31 08:11:26.759 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-4e4e24bf-e5fe-4be2-9d89-52432f07cca0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:11:26 compute-2 nova_compute[226829]: 2026-01-31 08:11:26.760 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:11:26 compute-2 nova_compute[226829]: 2026-01-31 08:11:26.760 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4e4e24bf-e5fe-4be2-9d89-52432f07cca0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:11:26 compute-2 ceph-mon[77282]: pgmap v2257: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 773 KiB/s rd, 7.5 MiB/s wr, 169 op/s
Jan 31 08:11:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2252345541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:28.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:28.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:28 compute-2 sudo[281825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:11:28 compute-2 sudo[281825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:28 compute-2 sudo[281825]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:28 compute-2 sudo[281850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:11:28 compute-2 sudo[281850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:28 compute-2 sudo[281850]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:28 compute-2 ceph-mon[77282]: pgmap v2258: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 7.0 MiB/s wr, 261 op/s
Jan 31 08:11:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:11:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:11:29 compute-2 nova_compute[226829]: 2026-01-31 08:11:29.299 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Updating instance_info_cache with network_info: [{"id": "a2ea5794-4bd8-4ecd-84df-7b5500101840", "address": "fa:16:3e:42:3b:a1", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ea5794-4b", "ovs_interfaceid": "a2ea5794-4bd8-4ecd-84df-7b5500101840", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:11:29 compute-2 nova_compute[226829]: 2026-01-31 08:11:29.621 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-4e4e24bf-e5fe-4be2-9d89-52432f07cca0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:11:29 compute-2 nova_compute[226829]: 2026-01-31 08:11:29.621 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:11:29 compute-2 nova_compute[226829]: 2026-01-31 08:11:29.621 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:11:29 compute-2 sshd-session[281875]: Accepted publickey for nova from 192.168.122.100 port 48934 ssh2: ECDSA SHA256:x674mWemszn5UyYA1PQSm9fK8+OEaBfRnNSUktYnOE0
Jan 31 08:11:29 compute-2 systemd-logind[801]: New session 54 of user nova.
Jan 31 08:11:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e283 e283: 3 total, 3 up, 3 in
Jan 31 08:11:29 compute-2 systemd[1]: Created slice User Slice of UID 42436.
Jan 31 08:11:29 compute-2 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 31 08:11:29 compute-2 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 31 08:11:29 compute-2 systemd[1]: Starting User Manager for UID 42436...
Jan 31 08:11:29 compute-2 systemd[281880]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 31 08:11:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:11:29 compute-2 systemd[281880]: Queued start job for default target Main User Target.
Jan 31 08:11:30 compute-2 systemd[281880]: Created slice User Application Slice.
Jan 31 08:11:30 compute-2 systemd[281880]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 08:11:30 compute-2 systemd[281880]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 08:11:30 compute-2 systemd[281880]: Reached target Paths.
Jan 31 08:11:30 compute-2 systemd[281880]: Reached target Timers.
Jan 31 08:11:30 compute-2 systemd[281880]: Starting D-Bus User Message Bus Socket...
Jan 31 08:11:30 compute-2 systemd[281880]: Starting Create User's Volatile Files and Directories...
Jan 31 08:11:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:11:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:30.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:11:30 compute-2 systemd[281880]: Finished Create User's Volatile Files and Directories.
Jan 31 08:11:30 compute-2 systemd[281880]: Listening on D-Bus User Message Bus Socket.
Jan 31 08:11:30 compute-2 systemd[281880]: Reached target Sockets.
Jan 31 08:11:30 compute-2 systemd[281880]: Reached target Basic System.
Jan 31 08:11:30 compute-2 systemd[281880]: Reached target Main User Target.
Jan 31 08:11:30 compute-2 systemd[281880]: Startup finished in 140ms.
Jan 31 08:11:30 compute-2 systemd[1]: Started User Manager for UID 42436.
Jan 31 08:11:30 compute-2 systemd[1]: Started Session 54 of User nova.
Jan 31 08:11:30 compute-2 sshd-session[281875]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 31 08:11:30 compute-2 sshd-session[281895]: Received disconnect from 192.168.122.100 port 48934:11: disconnected by user
Jan 31 08:11:30 compute-2 sshd-session[281895]: Disconnected from user nova 192.168.122.100 port 48934
Jan 31 08:11:30 compute-2 sshd-session[281875]: pam_unix(sshd:session): session closed for user nova
Jan 31 08:11:30 compute-2 systemd[1]: session-54.scope: Deactivated successfully.
Jan 31 08:11:30 compute-2 systemd-logind[801]: Session 54 logged out. Waiting for processes to exit.
Jan 31 08:11:30 compute-2 systemd-logind[801]: Removed session 54.
Jan 31 08:11:30 compute-2 sshd-session[281897]: Accepted publickey for nova from 192.168.122.100 port 56728 ssh2: ECDSA SHA256:x674mWemszn5UyYA1PQSm9fK8+OEaBfRnNSUktYnOE0
Jan 31 08:11:30 compute-2 systemd-logind[801]: New session 56 of user nova.
Jan 31 08:11:30 compute-2 systemd[1]: Started Session 56 of User nova.
Jan 31 08:11:30 compute-2 sshd-session[281897]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 31 08:11:30 compute-2 sshd-session[281900]: Received disconnect from 192.168.122.100 port 56728:11: disconnected by user
Jan 31 08:11:30 compute-2 sshd-session[281900]: Disconnected from user nova 192.168.122.100 port 56728
Jan 31 08:11:30 compute-2 sshd-session[281897]: pam_unix(sshd:session): session closed for user nova
Jan 31 08:11:30 compute-2 systemd[1]: session-56.scope: Deactivated successfully.
Jan 31 08:11:30 compute-2 systemd-logind[801]: Session 56 logged out. Waiting for processes to exit.
Jan 31 08:11:30 compute-2 systemd-logind[801]: Removed session 56.
Jan 31 08:11:30 compute-2 nova_compute[226829]: 2026-01-31 08:11:30.331 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:30.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:30.698 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:11:30 compute-2 nova_compute[226829]: 2026-01-31 08:11:30.699 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:30.700 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:11:30 compute-2 nova_compute[226829]: 2026-01-31 08:11:30.741 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:30 compute-2 ceph-mon[77282]: pgmap v2259: 305 pgs: 305 active+clean; 400 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 3.7 MiB/s wr, 269 op/s
Jan 31 08:11:30 compute-2 ceph-mon[77282]: osdmap e283: 3 total, 3 up, 3 in
Jan 31 08:11:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e284 e284: 3 total, 3 up, 3 in
Jan 31 08:11:31 compute-2 nova_compute[226829]: 2026-01-31 08:11:31.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:11:31 compute-2 ceph-mon[77282]: osdmap e284: 3 total, 3 up, 3 in
Jan 31 08:11:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3890559462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e285 e285: 3 total, 3 up, 3 in
Jan 31 08:11:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:11:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:32.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:11:32 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Jan 31 08:11:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:32.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:32 compute-2 ceph-mon[77282]: pgmap v2262: 305 pgs: 305 active+clean; 425 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 1.6 MiB/s wr, 379 op/s
Jan 31 08:11:32 compute-2 ceph-mon[77282]: osdmap e285: 3 total, 3 up, 3 in
Jan 31 08:11:33 compute-2 nova_compute[226829]: 2026-01-31 08:11:33.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:11:33 compute-2 nova_compute[226829]: 2026-01-31 08:11:33.810 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:11:33 compute-2 nova_compute[226829]: 2026-01-31 08:11:33.811 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:11:33 compute-2 nova_compute[226829]: 2026-01-31 08:11:33.811 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:11:33 compute-2 nova_compute[226829]: 2026-01-31 08:11:33.811 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:11:33 compute-2 nova_compute[226829]: 2026-01-31 08:11:33.812 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:11:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2795073161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3214386677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:34.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:34 compute-2 podman[281924]: 2026-01-31 08:11:34.178999478 +0000 UTC m=+0.059554421 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 08:11:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:11:34 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/982230014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:34 compute-2 nova_compute[226829]: 2026-01-31 08:11:34.307 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:11:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:11:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:34.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:11:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:11:34 compute-2 ceph-mon[77282]: pgmap v2264: 305 pgs: 305 active+clean; 390 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 2.2 MiB/s wr, 384 op/s
Jan 31 08:11:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/982230014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:34 compute-2 nova_compute[226829]: 2026-01-31 08:11:34.992 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:11:34 compute-2 nova_compute[226829]: 2026-01-31 08:11:34.993 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:11:35 compute-2 nova_compute[226829]: 2026-01-31 08:11:35.135 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:11:35 compute-2 nova_compute[226829]: 2026-01-31 08:11:35.137 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4153MB free_disk=20.816539764404297GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:11:35 compute-2 nova_compute[226829]: 2026-01-31 08:11:35.137 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:11:35 compute-2 nova_compute[226829]: 2026-01-31 08:11:35.137 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:11:35 compute-2 nova_compute[226829]: 2026-01-31 08:11:35.333 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:35 compute-2 nova_compute[226829]: 2026-01-31 08:11:35.743 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:35 compute-2 nova_compute[226829]: 2026-01-31 08:11:35.921 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Migration for instance af8d85d9-c7a2-4709-a234-19511f3e4395 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Jan 31 08:11:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:11:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:36.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:11:36 compute-2 nova_compute[226829]: 2026-01-31 08:11:36.376 226833 INFO nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Updating resource usage from migration 8b4ee4de-01d3-44bb-b219-d1f249fed0f0
Jan 31 08:11:36 compute-2 nova_compute[226829]: 2026-01-31 08:11:36.377 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Starting to track incoming migration 8b4ee4de-01d3-44bb-b219-d1f249fed0f0 with flavor e3bd1dad-95f3-4ed9-94b4-27245cd798b5 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 31 08:11:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:36.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:36 compute-2 nova_compute[226829]: 2026-01-31 08:11:36.659 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 4e4e24bf-e5fe-4be2-9d89-52432f07cca0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:11:36 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:36.703 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:11:36 compute-2 nova_compute[226829]: 2026-01-31 08:11:36.777 226833 WARNING nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance af8d85d9-c7a2-4709-a234-19511f3e4395 has been moved to another host compute-0.ctlplane.example.com(compute-0.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}.
Jan 31 08:11:36 compute-2 nova_compute[226829]: 2026-01-31 08:11:36.778 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:11:36 compute-2 nova_compute[226829]: 2026-01-31 08:11:36.778 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=832MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:11:36 compute-2 nova_compute[226829]: 2026-01-31 08:11:36.867 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:11:37 compute-2 ceph-mon[77282]: pgmap v2265: 305 pgs: 305 active+clean; 379 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 3.6 MiB/s wr, 261 op/s
Jan 31 08:11:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:11:37 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1777838766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:37 compute-2 nova_compute[226829]: 2026-01-31 08:11:37.324 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:11:37 compute-2 nova_compute[226829]: 2026-01-31 08:11:37.329 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:11:37 compute-2 nova_compute[226829]: 2026-01-31 08:11:37.487 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:11:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1539722970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1777838766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2566712285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:38 compute-2 ceph-mon[77282]: pgmap v2266: 305 pgs: 305 active+clean; 393 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.3 MiB/s wr, 293 op/s
Jan 31 08:11:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:11:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:38.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:11:38 compute-2 nova_compute[226829]: 2026-01-31 08:11:38.179 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:11:38 compute-2 nova_compute[226829]: 2026-01-31 08:11:38.179 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:11:38 compute-2 nova_compute[226829]: 2026-01-31 08:11:38.180 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:11:38 compute-2 nova_compute[226829]: 2026-01-31 08:11:38.180 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:11:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:11:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:38.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:11:38 compute-2 nova_compute[226829]: 2026-01-31 08:11:38.832 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:11:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:11:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:11:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:40.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:11:40 compute-2 nova_compute[226829]: 2026-01-31 08:11:40.220 226833 DEBUG nova.compute.manager [req-15213c0e-460a-47b2-a5eb-ab4863a55fe0 req-1c039c09-34b0-4f69-bc44-5154a78469de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received event network-vif-unplugged-4fa5cff4-40cb-4379-bda2-213171730f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:11:40 compute-2 nova_compute[226829]: 2026-01-31 08:11:40.221 226833 DEBUG oslo_concurrency.lockutils [req-15213c0e-460a-47b2-a5eb-ab4863a55fe0 req-1c039c09-34b0-4f69-bc44-5154a78469de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:11:40 compute-2 nova_compute[226829]: 2026-01-31 08:11:40.221 226833 DEBUG oslo_concurrency.lockutils [req-15213c0e-460a-47b2-a5eb-ab4863a55fe0 req-1c039c09-34b0-4f69-bc44-5154a78469de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:11:40 compute-2 nova_compute[226829]: 2026-01-31 08:11:40.221 226833 DEBUG oslo_concurrency.lockutils [req-15213c0e-460a-47b2-a5eb-ab4863a55fe0 req-1c039c09-34b0-4f69-bc44-5154a78469de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:11:40 compute-2 nova_compute[226829]: 2026-01-31 08:11:40.222 226833 DEBUG nova.compute.manager [req-15213c0e-460a-47b2-a5eb-ab4863a55fe0 req-1c039c09-34b0-4f69-bc44-5154a78469de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] No waiting events found dispatching network-vif-unplugged-4fa5cff4-40cb-4379-bda2-213171730f4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:11:40 compute-2 nova_compute[226829]: 2026-01-31 08:11:40.222 226833 WARNING nova.compute.manager [req-15213c0e-460a-47b2-a5eb-ab4863a55fe0 req-1c039c09-34b0-4f69-bc44-5154a78469de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received unexpected event network-vif-unplugged-4fa5cff4-40cb-4379-bda2-213171730f4f for instance with vm_state active and task_state resize_migrated.
Jan 31 08:11:40 compute-2 nova_compute[226829]: 2026-01-31 08:11:40.222 226833 DEBUG nova.compute.manager [req-15213c0e-460a-47b2-a5eb-ab4863a55fe0 req-1c039c09-34b0-4f69-bc44-5154a78469de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:11:40 compute-2 nova_compute[226829]: 2026-01-31 08:11:40.222 226833 DEBUG oslo_concurrency.lockutils [req-15213c0e-460a-47b2-a5eb-ab4863a55fe0 req-1c039c09-34b0-4f69-bc44-5154a78469de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:11:40 compute-2 nova_compute[226829]: 2026-01-31 08:11:40.223 226833 DEBUG oslo_concurrency.lockutils [req-15213c0e-460a-47b2-a5eb-ab4863a55fe0 req-1c039c09-34b0-4f69-bc44-5154a78469de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:11:40 compute-2 nova_compute[226829]: 2026-01-31 08:11:40.223 226833 DEBUG oslo_concurrency.lockutils [req-15213c0e-460a-47b2-a5eb-ab4863a55fe0 req-1c039c09-34b0-4f69-bc44-5154a78469de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:11:40 compute-2 nova_compute[226829]: 2026-01-31 08:11:40.223 226833 DEBUG nova.compute.manager [req-15213c0e-460a-47b2-a5eb-ab4863a55fe0 req-1c039c09-34b0-4f69-bc44-5154a78469de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] No waiting events found dispatching network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:11:40 compute-2 nova_compute[226829]: 2026-01-31 08:11:40.224 226833 WARNING nova.compute.manager [req-15213c0e-460a-47b2-a5eb-ab4863a55fe0 req-1c039c09-34b0-4f69-bc44-5154a78469de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received unexpected event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f for instance with vm_state active and task_state resize_migrated.
Jan 31 08:11:40 compute-2 nova_compute[226829]: 2026-01-31 08:11:40.337 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:40 compute-2 systemd[1]: Stopping User Manager for UID 42436...
Jan 31 08:11:40 compute-2 systemd[281880]: Activating special unit Exit the Session...
Jan 31 08:11:40 compute-2 systemd[281880]: Stopped target Main User Target.
Jan 31 08:11:40 compute-2 systemd[281880]: Stopped target Basic System.
Jan 31 08:11:40 compute-2 systemd[281880]: Stopped target Paths.
Jan 31 08:11:40 compute-2 systemd[281880]: Stopped target Sockets.
Jan 31 08:11:40 compute-2 systemd[281880]: Stopped target Timers.
Jan 31 08:11:40 compute-2 systemd[281880]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 08:11:40 compute-2 systemd[281880]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 08:11:40 compute-2 systemd[281880]: Closed D-Bus User Message Bus Socket.
Jan 31 08:11:40 compute-2 systemd[281880]: Stopped Create User's Volatile Files and Directories.
Jan 31 08:11:40 compute-2 systemd[281880]: Removed slice User Application Slice.
Jan 31 08:11:40 compute-2 systemd[281880]: Reached target Shutdown.
Jan 31 08:11:40 compute-2 systemd[281880]: Finished Exit the Session.
Jan 31 08:11:40 compute-2 systemd[281880]: Reached target Exit the Session.
Jan 31 08:11:40 compute-2 systemd[1]: user@42436.service: Deactivated successfully.
Jan 31 08:11:40 compute-2 systemd[1]: Stopped User Manager for UID 42436.
Jan 31 08:11:40 compute-2 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 31 08:11:40 compute-2 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 31 08:11:40 compute-2 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 31 08:11:40 compute-2 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 31 08:11:40 compute-2 systemd[1]: Removed slice User Slice of UID 42436.
Jan 31 08:11:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:40.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:40 compute-2 nova_compute[226829]: 2026-01-31 08:11:40.745 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e286 e286: 3 total, 3 up, 3 in
Jan 31 08:11:40 compute-2 ceph-mon[77282]: pgmap v2267: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 209 op/s
Jan 31 08:11:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2209944268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:41 compute-2 nova_compute[226829]: 2026-01-31 08:11:41.310 226833 INFO nova.network.neutron [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Updating port 4fa5cff4-40cb-4379-bda2-213171730f4f with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 31 08:11:41 compute-2 ceph-mon[77282]: osdmap e286: 3 total, 3 up, 3 in
Jan 31 08:11:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:42.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:42.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:42 compute-2 nova_compute[226829]: 2026-01-31 08:11:42.669 226833 DEBUG oslo_concurrency.lockutils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:11:42 compute-2 nova_compute[226829]: 2026-01-31 08:11:42.669 226833 DEBUG oslo_concurrency.lockutils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquired lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:11:42 compute-2 nova_compute[226829]: 2026-01-31 08:11:42.670 226833 DEBUG nova.network.neutron [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:11:42 compute-2 ceph-mon[77282]: pgmap v2269: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.5 MiB/s wr, 188 op/s
Jan 31 08:11:42 compute-2 nova_compute[226829]: 2026-01-31 08:11:42.980 226833 DEBUG nova.compute.manager [req-802adb83-fa90-4c6f-bd10-609c04475816 req-920e072e-32d2-4ddc-96c1-d1868f6e8187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received event network-changed-4fa5cff4-40cb-4379-bda2-213171730f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:11:42 compute-2 nova_compute[226829]: 2026-01-31 08:11:42.981 226833 DEBUG nova.compute.manager [req-802adb83-fa90-4c6f-bd10-609c04475816 req-920e072e-32d2-4ddc-96c1-d1868f6e8187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Refreshing instance network info cache due to event network-changed-4fa5cff4-40cb-4379-bda2-213171730f4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:11:42 compute-2 nova_compute[226829]: 2026-01-31 08:11:42.981 226833 DEBUG oslo_concurrency.lockutils [req-802adb83-fa90-4c6f-bd10-609c04475816 req-920e072e-32d2-4ddc-96c1-d1868f6e8187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:11:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1515250436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:43 compute-2 nova_compute[226829]: 2026-01-31 08:11:43.996 226833 DEBUG nova.network.neutron [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Updating instance_info_cache with network_info: [{"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:11:44 compute-2 nova_compute[226829]: 2026-01-31 08:11:44.023 226833 DEBUG oslo_concurrency.lockutils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Releasing lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:11:44 compute-2 nova_compute[226829]: 2026-01-31 08:11:44.027 226833 DEBUG oslo_concurrency.lockutils [req-802adb83-fa90-4c6f-bd10-609c04475816 req-920e072e-32d2-4ddc-96c1-d1868f6e8187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:11:44 compute-2 nova_compute[226829]: 2026-01-31 08:11:44.027 226833 DEBUG nova.network.neutron [req-802adb83-fa90-4c6f-bd10-609c04475816 req-920e072e-32d2-4ddc-96c1-d1868f6e8187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Refreshing network info cache for port 4fa5cff4-40cb-4379-bda2-213171730f4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:11:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:11:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:44.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:11:44 compute-2 nova_compute[226829]: 2026-01-31 08:11:44.080 226833 DEBUG os_brick.utils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:11:44 compute-2 nova_compute[226829]: 2026-01-31 08:11:44.082 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:11:44 compute-2 nova_compute[226829]: 2026-01-31 08:11:44.091 236868 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:11:44 compute-2 nova_compute[226829]: 2026-01-31 08:11:44.091 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[992f22fa-e3dc-488a-9c00-96021ede5fad]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:44 compute-2 nova_compute[226829]: 2026-01-31 08:11:44.092 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:11:44 compute-2 nova_compute[226829]: 2026-01-31 08:11:44.097 236868 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:11:44 compute-2 nova_compute[226829]: 2026-01-31 08:11:44.098 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[52716513-c521-433e-bb83-fec2c967db99]: (4, ('InitiatorName=iqn.1994-05.com.redhat:70a4e945afb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:44 compute-2 nova_compute[226829]: 2026-01-31 08:11:44.099 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:11:44 compute-2 nova_compute[226829]: 2026-01-31 08:11:44.107 236868 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:11:44 compute-2 nova_compute[226829]: 2026-01-31 08:11:44.107 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[d59a5300-d400-45b3-9dff-fe9694ed16de]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:44 compute-2 nova_compute[226829]: 2026-01-31 08:11:44.109 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[df7aa064-1b6f-456e-ab2f-963eb7279245]: (4, 'd14f084b-ec77-4fba-801f-103494d34b3a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:44 compute-2 nova_compute[226829]: 2026-01-31 08:11:44.109 226833 DEBUG oslo_concurrency.processutils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:11:44 compute-2 nova_compute[226829]: 2026-01-31 08:11:44.129 226833 DEBUG oslo_concurrency.processutils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "nvme version" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:11:44 compute-2 nova_compute[226829]: 2026-01-31 08:11:44.132 226833 DEBUG os_brick.initiator.connectors.lightos [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:11:44 compute-2 nova_compute[226829]: 2026-01-31 08:11:44.132 226833 DEBUG os_brick.initiator.connectors.lightos [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:11:44 compute-2 nova_compute[226829]: 2026-01-31 08:11:44.132 226833 DEBUG os_brick.initiator.connectors.lightos [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:11:44 compute-2 nova_compute[226829]: 2026-01-31 08:11:44.133 226833 DEBUG os_brick.utils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] <== get_connector_properties: return (52ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:70a4e945afb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'd14f084b-ec77-4fba-801f-103494d34b3a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:11:44 compute-2 sudo[281983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:11:44 compute-2 sudo[281983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:44 compute-2 sudo[281983]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:44 compute-2 sudo[282010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:11:44 compute-2 sudo[282010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:44 compute-2 sudo[282010]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:44.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:11:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1349055314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:11:44 compute-2 ceph-mon[77282]: pgmap v2270: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 989 KiB/s rd, 3.4 MiB/s wr, 119 op/s
Jan 31 08:11:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1349055314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:45 compute-2 nova_compute[226829]: 2026-01-31 08:11:45.163 226833 DEBUG nova.virt.libvirt.driver [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Jan 31 08:11:45 compute-2 nova_compute[226829]: 2026-01-31 08:11:45.166 226833 DEBUG nova.virt.libvirt.driver [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 31 08:11:45 compute-2 nova_compute[226829]: 2026-01-31 08:11:45.167 226833 INFO nova.virt.libvirt.driver [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Creating image(s)
Jan 31 08:11:45 compute-2 nova_compute[226829]: 2026-01-31 08:11:45.219 226833 DEBUG nova.storage.rbd_utils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] creating snapshot(nova-resize) on rbd image(af8d85d9-c7a2-4709-a234-19511f3e4395_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 08:11:45 compute-2 nova_compute[226829]: 2026-01-31 08:11:45.341 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:45 compute-2 nova_compute[226829]: 2026-01-31 08:11:45.747 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:45 compute-2 nova_compute[226829]: 2026-01-31 08:11:45.903 226833 DEBUG nova.network.neutron [req-802adb83-fa90-4c6f-bd10-609c04475816 req-920e072e-32d2-4ddc-96c1-d1868f6e8187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Updated VIF entry in instance network info cache for port 4fa5cff4-40cb-4379-bda2-213171730f4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:11:45 compute-2 nova_compute[226829]: 2026-01-31 08:11:45.905 226833 DEBUG nova.network.neutron [req-802adb83-fa90-4c6f-bd10-609c04475816 req-920e072e-32d2-4ddc-96c1-d1868f6e8187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Updating instance_info_cache with network_info: [{"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:11:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e287 e287: 3 total, 3 up, 3 in
Jan 31 08:11:45 compute-2 nova_compute[226829]: 2026-01-31 08:11:45.927 226833 DEBUG oslo_concurrency.lockutils [req-802adb83-fa90-4c6f-bd10-609c04475816 req-920e072e-32d2-4ddc-96c1-d1868f6e8187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:11:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3253062455' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:11:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3253062455' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:11:45 compute-2 nova_compute[226829]: 2026-01-31 08:11:45.988 226833 DEBUG nova.objects.instance [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'trusted_certs' on Instance uuid af8d85d9-c7a2-4709-a234-19511f3e4395 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:11:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:11:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:46.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.116 226833 DEBUG nova.virt.libvirt.driver [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.116 226833 DEBUG nova.virt.libvirt.driver [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Ensure instance console log exists: /var/lib/nova/instances/af8d85d9-c7a2-4709-a234-19511f3e4395/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.117 226833 DEBUG oslo_concurrency.lockutils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.117 226833 DEBUG oslo_concurrency.lockutils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.117 226833 DEBUG oslo_concurrency.lockutils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.120 226833 DEBUG nova.virt.libvirt.driver [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Start _get_guest_xml network_info=[{"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1220738298-network", "vif_mac": "fa:16:3e:af:7b:39"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'delete_on_termination': False, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-eba44545-2378-4cdd-bfad-f9f0d9c730c0', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'eba44545-2378-4cdd-bfad-f9f0d9c730c0', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': 'af8d85d9-c7a2-4709-a234-19511f3e4395', 'attached_at': '2026-01-31T08:11:44.000000', 'detached_at': '', 'volume_id': 'eba44545-2378-4cdd-bfad-f9f0d9c730c0', 'serial': 'eba44545-2378-4cdd-bfad-f9f0d9c730c0'}, 'disk_bus': 'virtio', 'mount_device': '/dev/vdb', 'attachment_id': '213fde95-979c-4b76-ae5d-150b7df48047', 'boot_index': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.124 226833 WARNING nova.virt.libvirt.driver [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.129 226833 DEBUG nova.virt.libvirt.host [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.130 226833 DEBUG nova.virt.libvirt.host [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.133 226833 DEBUG nova.virt.libvirt.host [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.133 226833 DEBUG nova.virt.libvirt.host [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.135 226833 DEBUG nova.virt.libvirt.driver [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.135 226833 DEBUG nova.virt.hardware [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e3bd1dad-95f3-4ed9-94b4-27245cd798b5',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.136 226833 DEBUG nova.virt.hardware [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.136 226833 DEBUG nova.virt.hardware [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.136 226833 DEBUG nova.virt.hardware [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.136 226833 DEBUG nova.virt.hardware [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.137 226833 DEBUG nova.virt.hardware [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.137 226833 DEBUG nova.virt.hardware [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.137 226833 DEBUG nova.virt.hardware [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.138 226833 DEBUG nova.virt.hardware [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.138 226833 DEBUG nova.virt.hardware [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.138 226833 DEBUG nova.virt.hardware [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.139 226833 DEBUG nova.objects.instance [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'vcpu_model' on Instance uuid af8d85d9-c7a2-4709-a234-19511f3e4395 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.193 226833 DEBUG oslo_concurrency.processutils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:11:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:46.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:11:46 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2257966533' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.669 226833 DEBUG oslo_concurrency.processutils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.705 226833 DEBUG oslo_concurrency.processutils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.833 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:11:46 compute-2 nova_compute[226829]: 2026-01-31 08:11:46.834 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:11:47 compute-2 ceph-mon[77282]: pgmap v2271: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 412 KiB/s rd, 2.6 MiB/s wr, 112 op/s
Jan 31 08:11:47 compute-2 ceph-mon[77282]: osdmap e287: 3 total, 3 up, 3 in
Jan 31 08:11:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2257966533' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:47 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:11:47 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3444596718' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.152 226833 DEBUG oslo_concurrency.processutils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.210 226833 DEBUG nova.virt.libvirt.vif [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:10:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2134235451',display_name='tempest-ServerActionsTestOtherB-server-2134235451',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2134235451',id=119,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIBsrFBicHYtOHR1R6vRAALJ9Bas8uQqwQxjg40t1CSqKgx9y2TPvbXQ87aJFDYMxnRLTQoY5DczCJahVhqvpmedcWwWPeQP/d3vWA175RU6Mi7x6I2zA/JKc2hVh/HOXw==',key_name='tempest-keypair-1408271761',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:10:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-9xfxf7l5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:11:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='18aee9d81d404f77ac81cde538f140d8',uuid=af8d85d9-c7a2-4709-a234-19511f3e4395,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1220738298-network", "vif_mac": "fa:16:3e:af:7b:39"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.211 226833 DEBUG nova.network.os_vif_util [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1220738298-network", "vif_mac": "fa:16:3e:af:7b:39"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.212 226833 DEBUG nova.network.os_vif_util [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:7b:39,bridge_name='br-int',has_traffic_filtering=True,id=4fa5cff4-40cb-4379-bda2-213171730f4f,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fa5cff4-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.214 226833 DEBUG nova.virt.libvirt.driver [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:11:47 compute-2 nova_compute[226829]:   <uuid>af8d85d9-c7a2-4709-a234-19511f3e4395</uuid>
Jan 31 08:11:47 compute-2 nova_compute[226829]:   <name>instance-00000077</name>
Jan 31 08:11:47 compute-2 nova_compute[226829]:   <memory>196608</memory>
Jan 31 08:11:47 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:11:47 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <nova:name>tempest-ServerActionsTestOtherB-server-2134235451</nova:name>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:11:46</nova:creationTime>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <nova:flavor name="m1.micro">
Jan 31 08:11:47 compute-2 nova_compute[226829]:         <nova:memory>192</nova:memory>
Jan 31 08:11:47 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:11:47 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:11:47 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:11:47 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:11:47 compute-2 nova_compute[226829]:         <nova:user uuid="18aee9d81d404f77ac81cde538f140d8">tempest-ServerActionsTestOtherB-2012907318-project-member</nova:user>
Jan 31 08:11:47 compute-2 nova_compute[226829]:         <nova:project uuid="c3ddadeb950a490db5c99da98a32c9ec">tempest-ServerActionsTestOtherB-2012907318</nova:project>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:11:47 compute-2 nova_compute[226829]:         <nova:port uuid="4fa5cff4-40cb-4379-bda2-213171730f4f">
Jan 31 08:11:47 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:11:47 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:11:47 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <system>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <entry name="serial">af8d85d9-c7a2-4709-a234-19511f3e4395</entry>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <entry name="uuid">af8d85d9-c7a2-4709-a234-19511f3e4395</entry>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     </system>
Jan 31 08:11:47 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:11:47 compute-2 nova_compute[226829]:   <os>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:   </os>
Jan 31 08:11:47 compute-2 nova_compute[226829]:   <features>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:   </features>
Jan 31 08:11:47 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:11:47 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:11:47 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/af8d85d9-c7a2-4709-a234-19511f3e4395_disk">
Jan 31 08:11:47 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       </source>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:11:47 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/af8d85d9-c7a2-4709-a234-19511f3e4395_disk.config">
Jan 31 08:11:47 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       </source>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:11:47 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <source protocol="rbd" name="volumes/volume-eba44545-2378-4cdd-bfad-f9f0d9c730c0">
Jan 31 08:11:47 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       </source>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:11:47 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <target dev="vdb" bus="virtio"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <serial>eba44545-2378-4cdd-bfad-f9f0d9c730c0</serial>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:af:7b:39"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <target dev="tap4fa5cff4-40"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/af8d85d9-c7a2-4709-a234-19511f3e4395/console.log" append="off"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <video>
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     </video>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:11:47 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:11:47 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:11:47 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:11:47 compute-2 nova_compute[226829]: </domain>
Jan 31 08:11:47 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.215 226833 DEBUG nova.virt.libvirt.vif [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:10:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2134235451',display_name='tempest-ServerActionsTestOtherB-server-2134235451',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2134235451',id=119,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIBsrFBicHYtOHR1R6vRAALJ9Bas8uQqwQxjg40t1CSqKgx9y2TPvbXQ87aJFDYMxnRLTQoY5DczCJahVhqvpmedcWwWPeQP/d3vWA175RU6Mi7x6I2zA/JKc2hVh/HOXw==',key_name='tempest-keypair-1408271761',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:10:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-9xfxf7l5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:11:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='18aee9d81d404f77ac81cde538f140d8',uuid=af8d85d9-c7a2-4709-a234-19511f3e4395,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1220738298-network", "vif_mac": "fa:16:3e:af:7b:39"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.215 226833 DEBUG nova.network.os_vif_util [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1220738298-network", "vif_mac": "fa:16:3e:af:7b:39"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.216 226833 DEBUG nova.network.os_vif_util [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:7b:39,bridge_name='br-int',has_traffic_filtering=True,id=4fa5cff4-40cb-4379-bda2-213171730f4f,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fa5cff4-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.216 226833 DEBUG os_vif [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:7b:39,bridge_name='br-int',has_traffic_filtering=True,id=4fa5cff4-40cb-4379-bda2-213171730f4f,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fa5cff4-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.217 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.217 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.218 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.221 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.222 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fa5cff4-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.222 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4fa5cff4-40, col_values=(('external_ids', {'iface-id': '4fa5cff4-40cb-4379-bda2-213171730f4f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:af:7b:39', 'vm-uuid': 'af8d85d9-c7a2-4709-a234-19511f3e4395'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.224 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:47 compute-2 NetworkManager[48999]: <info>  [1769847107.2259] manager: (tap4fa5cff4-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/250)
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.226 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.230 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.231 226833 INFO os_vif [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:7b:39,bridge_name='br-int',has_traffic_filtering=True,id=4fa5cff4-40cb-4379-bda2-213171730f4f,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fa5cff4-40')
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.338 226833 DEBUG nova.virt.libvirt.driver [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.339 226833 DEBUG nova.virt.libvirt.driver [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.339 226833 DEBUG nova.virt.libvirt.driver [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.339 226833 DEBUG nova.virt.libvirt.driver [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No VIF found with MAC fa:16:3e:af:7b:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.341 226833 INFO nova.virt.libvirt.driver [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Using config drive
Jan 31 08:11:47 compute-2 kernel: tap4fa5cff4-40: entered promiscuous mode
Jan 31 08:11:47 compute-2 NetworkManager[48999]: <info>  [1769847107.4479] manager: (tap4fa5cff4-40): new Tun device (/org/freedesktop/NetworkManager/Devices/251)
Jan 31 08:11:47 compute-2 ovn_controller[133834]: 2026-01-31T08:11:47Z|00508|binding|INFO|Claiming lport 4fa5cff4-40cb-4379-bda2-213171730f4f for this chassis.
Jan 31 08:11:47 compute-2 ovn_controller[133834]: 2026-01-31T08:11:47Z|00509|binding|INFO|4fa5cff4-40cb-4379-bda2-213171730f4f: Claiming fa:16:3e:af:7b:39 10.100.0.6
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.450 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.456 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.460 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:47 compute-2 NetworkManager[48999]: <info>  [1769847107.4618] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/252)
Jan 31 08:11:47 compute-2 NetworkManager[48999]: <info>  [1769847107.4623] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/253)
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:47.469 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:7b:39 10.100.0.6'], port_security=['fa:16:3e:af:7b:39 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'af8d85d9-c7a2-4709-a234-19511f3e4395', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3ddadeb950a490db5c99da98a32c9ec', 'neutron:revision_number': '6', 'neutron:security_group_ids': '5b02cc0a-856b-4d31-80e9-eccd1c696448', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17e596e7-33b3-44a6-9cbf-f9eacfd974b4, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=4fa5cff4-40cb-4379-bda2-213171730f4f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:47.472 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 4fa5cff4-40cb-4379-bda2-213171730f4f in datapath e8014d6b-23e1-41ef-b5e2-3d770d302e72 bound to our chassis
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:47.473 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e8014d6b-23e1-41ef-b5e2-3d770d302e72
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:47.486 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7fd78e14-7dc7-4b0a-94f4-66ace3b57732]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:47.487 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape8014d6b-21 in ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:47.491 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape8014d6b-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:47.492 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e1c5260c-d5cd-4034-bc47-3060f27c530c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:47.492 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ff255302-c400-4a93-9299-74295ee3bca4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:47 compute-2 systemd-machined[195142]: New machine qemu-55-instance-00000077.
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.497 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.500 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:47 compute-2 ovn_controller[133834]: 2026-01-31T08:11:47Z|00510|binding|INFO|Releasing lport 950341c4-aa2a-4261-8207-ff7e92fd4830 from this chassis (sb_readonly=0)
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:47.506 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[55e5ab1b-0257-413b-9524-9bb0936bec27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:47 compute-2 systemd[1]: Started Virtual Machine qemu-55-instance-00000077.
Jan 31 08:11:47 compute-2 systemd-udevd[282204]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:11:47 compute-2 ovn_controller[133834]: 2026-01-31T08:11:47Z|00511|binding|INFO|Setting lport 4fa5cff4-40cb-4379-bda2-213171730f4f up in Southbound
Jan 31 08:11:47 compute-2 ovn_controller[133834]: 2026-01-31T08:11:47Z|00512|binding|INFO|Setting lport 4fa5cff4-40cb-4379-bda2-213171730f4f ovn-installed in OVS
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.568 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.570 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:47 compute-2 NetworkManager[48999]: <info>  [1769847107.5780] device (tap4fa5cff4-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:11:47 compute-2 NetworkManager[48999]: <info>  [1769847107.5788] device (tap4fa5cff4-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:47.580 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9f359b11-d2be-4f18-9613-edb68b3a5860]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:47.609 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[4f3a2b85-f446-47c2-9441-4233a76baa00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:47 compute-2 NetworkManager[48999]: <info>  [1769847107.6154] manager: (tape8014d6b-20): new Veth device (/org/freedesktop/NetworkManager/Devices/254)
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:47.614 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1725cb4a-abe8-4fe2-b38b-c69bbdf949a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:47 compute-2 systemd-udevd[282207]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:47.641 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[92facd8d-2b49-4881-89e2-93c8ff90a1a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:47.643 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[79c5c7dd-3416-4ce2-8d52-58946085b28a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:47 compute-2 NetworkManager[48999]: <info>  [1769847107.6585] device (tape8014d6b-20): carrier: link connected
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:47.662 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[df32918c-3387-47dd-9915-b3c681f37300]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:47.673 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6c11e3eb-1f38-48aa-b167-9dfc44d2a9a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape8014d6b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:c1:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 158], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736013, 'reachable_time': 31890, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282235, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:47.681 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3863ea94-7d06-47c9-8e6a-4d6c6a1c8c54]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed5:c1c3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736013, 'tstamp': 736013}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282236, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:47.690 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[43ae7bab-bdca-4312-bd1d-776feb0a7113]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape8014d6b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:c1:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 158], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736013, 'reachable_time': 31890, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282237, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:47.709 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8719b428-2b6a-4fb0-b845-52bc2c626527]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:47.748 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[137289f0-4cd8-4c69-b48a-9d2c2126885b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:47.750 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8014d6b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:47.750 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:47.751 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape8014d6b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.752 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:47 compute-2 kernel: tape8014d6b-20: entered promiscuous mode
Jan 31 08:11:47 compute-2 NetworkManager[48999]: <info>  [1769847107.7540] manager: (tape8014d6b-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/255)
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:47.757 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape8014d6b-20, col_values=(('external_ids', {'iface-id': '4bb3ff19-f70b-4c8d-a829-66ff18233b61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:11:47 compute-2 ovn_controller[133834]: 2026-01-31T08:11:47Z|00513|binding|INFO|Releasing lport 4bb3ff19-f70b-4c8d-a829-66ff18233b61 from this chassis (sb_readonly=0)
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.759 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.759 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:47.760 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e8014d6b-23e1-41ef-b5e2-3d770d302e72.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e8014d6b-23e1-41ef-b5e2-3d770d302e72.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:47.761 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5692dd2a-32de-4925-9532-1dd7a600872a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:47.762 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-e8014d6b-23e1-41ef-b5e2-3d770d302e72
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/e8014d6b-23e1-41ef-b5e2-3d770d302e72.pid.haproxy
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID e8014d6b-23e1-41ef-b5e2-3d770d302e72
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:11:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:11:47.763 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'env', 'PROCESS_TAG=haproxy-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e8014d6b-23e1-41ef-b5e2-3d770d302e72.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:11:47 compute-2 nova_compute[226829]: 2026-01-31 08:11:47.764 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3444596718' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:48 compute-2 ceph-mon[77282]: pgmap v2273: 305 pgs: 305 active+clean; 427 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 1.3 MiB/s wr, 38 op/s
Jan 31 08:11:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4215506736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2970087483' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:48.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:48 compute-2 podman[282326]: 2026-01-31 08:11:48.133045195 +0000 UTC m=+0.053287171 container create d2b99ae127bca6d2804fca0b3db45690c82e8e8ee761df02a2eac11048cd06c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:11:48 compute-2 systemd[1]: Started libpod-conmon-d2b99ae127bca6d2804fca0b3db45690c82e8e8ee761df02a2eac11048cd06c4.scope.
Jan 31 08:11:48 compute-2 nova_compute[226829]: 2026-01-31 08:11:48.176 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847108.1758885, af8d85d9-c7a2-4709-a234-19511f3e4395 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:11:48 compute-2 nova_compute[226829]: 2026-01-31 08:11:48.177 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] VM Resumed (Lifecycle Event)
Jan 31 08:11:48 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:11:48 compute-2 nova_compute[226829]: 2026-01-31 08:11:48.179 226833 DEBUG nova.compute.manager [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:11:48 compute-2 nova_compute[226829]: 2026-01-31 08:11:48.184 226833 INFO nova.virt.libvirt.driver [-] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Instance running successfully.
Jan 31 08:11:48 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/953f37d2e59bcc133918b3c4e61468ee3cfc0ca357c57e69d9acebabac2e2e54/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:11:48 compute-2 virtqemud[226546]: argument unsupported: QEMU guest agent is not configured
Jan 31 08:11:48 compute-2 nova_compute[226829]: 2026-01-31 08:11:48.188 226833 DEBUG nova.virt.libvirt.guest [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 31 08:11:48 compute-2 nova_compute[226829]: 2026-01-31 08:11:48.188 226833 DEBUG nova.virt.libvirt.driver [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Jan 31 08:11:48 compute-2 podman[282326]: 2026-01-31 08:11:48.196661726 +0000 UTC m=+0.116903722 container init d2b99ae127bca6d2804fca0b3db45690c82e8e8ee761df02a2eac11048cd06c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:11:48 compute-2 podman[282326]: 2026-01-31 08:11:48.101553869 +0000 UTC m=+0.021795895 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:11:48 compute-2 podman[282326]: 2026-01-31 08:11:48.200912032 +0000 UTC m=+0.121154008 container start d2b99ae127bca6d2804fca0b3db45690c82e8e8ee761df02a2eac11048cd06c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 08:11:48 compute-2 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[282345]: [NOTICE]   (282349) : New worker (282351) forked
Jan 31 08:11:48 compute-2 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[282345]: [NOTICE]   (282349) : Loading success.
Jan 31 08:11:48 compute-2 nova_compute[226829]: 2026-01-31 08:11:48.467 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:11:48 compute-2 nova_compute[226829]: 2026-01-31 08:11:48.470 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:11:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:48.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:48 compute-2 nova_compute[226829]: 2026-01-31 08:11:48.615 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 31 08:11:48 compute-2 nova_compute[226829]: 2026-01-31 08:11:48.616 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847108.1760027, af8d85d9-c7a2-4709-a234-19511f3e4395 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:11:48 compute-2 nova_compute[226829]: 2026-01-31 08:11:48.616 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] VM Started (Lifecycle Event)
Jan 31 08:11:48 compute-2 nova_compute[226829]: 2026-01-31 08:11:48.817 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:11:48 compute-2 nova_compute[226829]: 2026-01-31 08:11:48.820 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:11:49 compute-2 nova_compute[226829]: 2026-01-31 08:11:49.025 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 31 08:11:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1997915657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2564914381' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2760709419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:11:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:11:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:50.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:11:50 compute-2 ceph-mon[77282]: pgmap v2274: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 507 KiB/s rd, 2.4 MiB/s wr, 88 op/s
Jan 31 08:11:50 compute-2 nova_compute[226829]: 2026-01-31 08:11:50.455 226833 DEBUG nova.compute.manager [req-b6403214-f910-495c-9c00-9fb98dde4a67 req-c12152c8-e9fe-4e51-a3c8-e0ce76990988 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:11:50 compute-2 nova_compute[226829]: 2026-01-31 08:11:50.455 226833 DEBUG oslo_concurrency.lockutils [req-b6403214-f910-495c-9c00-9fb98dde4a67 req-c12152c8-e9fe-4e51-a3c8-e0ce76990988 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:11:50 compute-2 nova_compute[226829]: 2026-01-31 08:11:50.455 226833 DEBUG oslo_concurrency.lockutils [req-b6403214-f910-495c-9c00-9fb98dde4a67 req-c12152c8-e9fe-4e51-a3c8-e0ce76990988 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:11:50 compute-2 nova_compute[226829]: 2026-01-31 08:11:50.456 226833 DEBUG oslo_concurrency.lockutils [req-b6403214-f910-495c-9c00-9fb98dde4a67 req-c12152c8-e9fe-4e51-a3c8-e0ce76990988 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:11:50 compute-2 nova_compute[226829]: 2026-01-31 08:11:50.456 226833 DEBUG nova.compute.manager [req-b6403214-f910-495c-9c00-9fb98dde4a67 req-c12152c8-e9fe-4e51-a3c8-e0ce76990988 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] No waiting events found dispatching network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:11:50 compute-2 nova_compute[226829]: 2026-01-31 08:11:50.456 226833 WARNING nova.compute.manager [req-b6403214-f910-495c-9c00-9fb98dde4a67 req-c12152c8-e9fe-4e51-a3c8-e0ce76990988 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received unexpected event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f for instance with vm_state resized and task_state None.
Jan 31 08:11:50 compute-2 nova_compute[226829]: 2026-01-31 08:11:50.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:11:50 compute-2 nova_compute[226829]: 2026-01-31 08:11:50.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:11:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:50.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:50 compute-2 nova_compute[226829]: 2026-01-31 08:11:50.749 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:11:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:52.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:11:52 compute-2 nova_compute[226829]: 2026-01-31 08:11:52.224 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:52.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:52 compute-2 ceph-mon[77282]: pgmap v2275: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 120 op/s
Jan 31 08:11:52 compute-2 nova_compute[226829]: 2026-01-31 08:11:52.755 226833 DEBUG nova.compute.manager [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:11:52 compute-2 nova_compute[226829]: 2026-01-31 08:11:52.755 226833 DEBUG oslo_concurrency.lockutils [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:11:52 compute-2 nova_compute[226829]: 2026-01-31 08:11:52.756 226833 DEBUG oslo_concurrency.lockutils [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:11:52 compute-2 nova_compute[226829]: 2026-01-31 08:11:52.756 226833 DEBUG oslo_concurrency.lockutils [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:11:52 compute-2 nova_compute[226829]: 2026-01-31 08:11:52.757 226833 DEBUG nova.compute.manager [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] No waiting events found dispatching network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:11:52 compute-2 nova_compute[226829]: 2026-01-31 08:11:52.757 226833 WARNING nova.compute.manager [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received unexpected event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f for instance with vm_state resized and task_state None.
Jan 31 08:11:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:54.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:54 compute-2 podman[282363]: 2026-01-31 08:11:54.229927349 +0000 UTC m=+0.120038867 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 31 08:11:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:54.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:54 compute-2 ceph-mon[77282]: pgmap v2276: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 155 op/s
Jan 31 08:11:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:11:55 compute-2 nova_compute[226829]: 2026-01-31 08:11:55.751 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:56.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:56.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:56 compute-2 ceph-mon[77282]: pgmap v2277: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.2 MiB/s wr, 245 op/s
Jan 31 08:11:57 compute-2 nova_compute[226829]: 2026-01-31 08:11:57.227 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:57 compute-2 nova_compute[226829]: 2026-01-31 08:11:57.334 226833 DEBUG nova.network.neutron [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Port 4fa5cff4-40cb-4379-bda2-213171730f4f binding to destination host compute-2.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171
Jan 31 08:11:57 compute-2 nova_compute[226829]: 2026-01-31 08:11:57.334 226833 DEBUG oslo_concurrency.lockutils [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:11:57 compute-2 nova_compute[226829]: 2026-01-31 08:11:57.335 226833 DEBUG oslo_concurrency.lockutils [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquired lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:11:57 compute-2 nova_compute[226829]: 2026-01-31 08:11:57.335 226833 DEBUG nova.network.neutron [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:11:57 compute-2 nova_compute[226829]: 2026-01-31 08:11:57.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:11:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:11:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:58.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:11:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:11:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:58.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:58 compute-2 ceph-mon[77282]: pgmap v2278: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 1.6 MiB/s wr, 286 op/s
Jan 31 08:11:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:12:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:12:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:00.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:12:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:00.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:00 compute-2 nova_compute[226829]: 2026-01-31 08:12:00.752 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:00 compute-2 ceph-mon[77282]: pgmap v2279: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 964 KiB/s wr, 266 op/s
Jan 31 08:12:01 compute-2 nova_compute[226829]: 2026-01-31 08:12:01.992 226833 DEBUG nova.network.neutron [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Updating instance_info_cache with network_info: [{"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:12:02 compute-2 nova_compute[226829]: 2026-01-31 08:12:02.037 226833 DEBUG oslo_concurrency.lockutils [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Releasing lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:12:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:02.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:02 compute-2 kernel: tap4fa5cff4-40 (unregistering): left promiscuous mode
Jan 31 08:12:02 compute-2 NetworkManager[48999]: <info>  [1769847122.1481] device (tap4fa5cff4-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:12:02 compute-2 ovn_controller[133834]: 2026-01-31T08:12:02Z|00514|binding|INFO|Releasing lport 4fa5cff4-40cb-4379-bda2-213171730f4f from this chassis (sb_readonly=0)
Jan 31 08:12:02 compute-2 nova_compute[226829]: 2026-01-31 08:12:02.157 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:02 compute-2 ovn_controller[133834]: 2026-01-31T08:12:02Z|00515|binding|INFO|Setting lport 4fa5cff4-40cb-4379-bda2-213171730f4f down in Southbound
Jan 31 08:12:02 compute-2 ovn_controller[133834]: 2026-01-31T08:12:02Z|00516|binding|INFO|Removing iface tap4fa5cff4-40 ovn-installed in OVS
Jan 31 08:12:02 compute-2 nova_compute[226829]: 2026-01-31 08:12:02.165 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:02.178 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:7b:39 10.100.0.6'], port_security=['fa:16:3e:af:7b:39 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'af8d85d9-c7a2-4709-a234-19511f3e4395', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3ddadeb950a490db5c99da98a32c9ec', 'neutron:revision_number': '8', 'neutron:security_group_ids': '5b02cc0a-856b-4d31-80e9-eccd1c696448', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.236', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17e596e7-33b3-44a6-9cbf-f9eacfd974b4, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=4fa5cff4-40cb-4379-bda2-213171730f4f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:12:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:02.180 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 4fa5cff4-40cb-4379-bda2-213171730f4f in datapath e8014d6b-23e1-41ef-b5e2-3d770d302e72 unbound from our chassis
Jan 31 08:12:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:02.182 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e8014d6b-23e1-41ef-b5e2-3d770d302e72, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:12:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:02.184 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ee273e41-eba3-4551-9a5c-0f38141a9bfc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:02.184 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 namespace which is not needed anymore
Jan 31 08:12:02 compute-2 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000077.scope: Deactivated successfully.
Jan 31 08:12:02 compute-2 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000077.scope: Consumed 14.070s CPU time.
Jan 31 08:12:02 compute-2 systemd-machined[195142]: Machine qemu-55-instance-00000077 terminated.
Jan 31 08:12:02 compute-2 nova_compute[226829]: 2026-01-31 08:12:02.228 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:02 compute-2 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[282345]: [NOTICE]   (282349) : haproxy version is 2.8.14-c23fe91
Jan 31 08:12:02 compute-2 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[282345]: [NOTICE]   (282349) : path to executable is /usr/sbin/haproxy
Jan 31 08:12:02 compute-2 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[282345]: [WARNING]  (282349) : Exiting Master process...
Jan 31 08:12:02 compute-2 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[282345]: [WARNING]  (282349) : Exiting Master process...
Jan 31 08:12:02 compute-2 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[282345]: [ALERT]    (282349) : Current worker (282351) exited with code 143 (Terminated)
Jan 31 08:12:02 compute-2 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[282345]: [WARNING]  (282349) : All workers exited. Exiting... (0)
Jan 31 08:12:02 compute-2 systemd[1]: libpod-d2b99ae127bca6d2804fca0b3db45690c82e8e8ee761df02a2eac11048cd06c4.scope: Deactivated successfully.
Jan 31 08:12:02 compute-2 nova_compute[226829]: 2026-01-31 08:12:02.301 226833 INFO nova.virt.libvirt.driver [-] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Instance destroyed successfully.
Jan 31 08:12:02 compute-2 podman[282416]: 2026-01-31 08:12:02.302257084 +0000 UTC m=+0.042302282 container died d2b99ae127bca6d2804fca0b3db45690c82e8e8ee761df02a2eac11048cd06c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:12:02 compute-2 nova_compute[226829]: 2026-01-31 08:12:02.302 226833 DEBUG nova.objects.instance [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'resources' on Instance uuid af8d85d9-c7a2-4709-a234-19511f3e4395 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:12:02 compute-2 nova_compute[226829]: 2026-01-31 08:12:02.321 226833 DEBUG nova.virt.libvirt.vif [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:10:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2134235451',display_name='tempest-ServerActionsTestOtherB-server-2134235451',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2134235451',id=119,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIBsrFBicHYtOHR1R6vRAALJ9Bas8uQqwQxjg40t1CSqKgx9y2TPvbXQ87aJFDYMxnRLTQoY5DczCJahVhqvpmedcWwWPeQP/d3vWA175RU6Mi7x6I2zA/JKc2hVh/HOXw==',key_name='tempest-keypair-1408271761',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:11:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-9xfxf7l5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:11:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='18aee9d81d404f77ac81cde538f140d8',uuid=af8d85d9-c7a2-4709-a234-19511f3e4395,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:12:02 compute-2 nova_compute[226829]: 2026-01-31 08:12:02.322 226833 DEBUG nova.network.os_vif_util [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:12:02 compute-2 nova_compute[226829]: 2026-01-31 08:12:02.323 226833 DEBUG nova.network.os_vif_util [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:af:7b:39,bridge_name='br-int',has_traffic_filtering=True,id=4fa5cff4-40cb-4379-bda2-213171730f4f,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fa5cff4-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:12:02 compute-2 nova_compute[226829]: 2026-01-31 08:12:02.323 226833 DEBUG os_vif [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:7b:39,bridge_name='br-int',has_traffic_filtering=True,id=4fa5cff4-40cb-4379-bda2-213171730f4f,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fa5cff4-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:12:02 compute-2 nova_compute[226829]: 2026-01-31 08:12:02.326 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:02 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d2b99ae127bca6d2804fca0b3db45690c82e8e8ee761df02a2eac11048cd06c4-userdata-shm.mount: Deactivated successfully.
Jan 31 08:12:02 compute-2 nova_compute[226829]: 2026-01-31 08:12:02.326 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fa5cff4-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:02 compute-2 nova_compute[226829]: 2026-01-31 08:12:02.330 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:12:02 compute-2 systemd[1]: var-lib-containers-storage-overlay-953f37d2e59bcc133918b3c4e61468ee3cfc0ca357c57e69d9acebabac2e2e54-merged.mount: Deactivated successfully.
Jan 31 08:12:02 compute-2 nova_compute[226829]: 2026-01-31 08:12:02.335 226833 INFO os_vif [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:7b:39,bridge_name='br-int',has_traffic_filtering=True,id=4fa5cff4-40cb-4379-bda2-213171730f4f,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fa5cff4-40')
Jan 31 08:12:02 compute-2 podman[282416]: 2026-01-31 08:12:02.345599104 +0000 UTC m=+0.085644292 container cleanup d2b99ae127bca6d2804fca0b3db45690c82e8e8ee761df02a2eac11048cd06c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:12:02 compute-2 systemd[1]: libpod-conmon-d2b99ae127bca6d2804fca0b3db45690c82e8e8ee761df02a2eac11048cd06c4.scope: Deactivated successfully.
Jan 31 08:12:02 compute-2 podman[282457]: 2026-01-31 08:12:02.396586711 +0000 UTC m=+0.035927549 container remove d2b99ae127bca6d2804fca0b3db45690c82e8e8ee761df02a2eac11048cd06c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:12:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:02.399 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7b3d347e-a86d-408d-a090-dc8a21ea7ff4]: (4, ('Sat Jan 31 08:12:02 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 (d2b99ae127bca6d2804fca0b3db45690c82e8e8ee761df02a2eac11048cd06c4)\nd2b99ae127bca6d2804fca0b3db45690c82e8e8ee761df02a2eac11048cd06c4\nSat Jan 31 08:12:02 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 (d2b99ae127bca6d2804fca0b3db45690c82e8e8ee761df02a2eac11048cd06c4)\nd2b99ae127bca6d2804fca0b3db45690c82e8e8ee761df02a2eac11048cd06c4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:02.401 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[159ca1a5-2a1b-44e8-851e-fa02bfec2d82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:02.402 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8014d6b-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:02 compute-2 kernel: tape8014d6b-20: left promiscuous mode
Jan 31 08:12:02 compute-2 nova_compute[226829]: 2026-01-31 08:12:02.404 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:02 compute-2 nova_compute[226829]: 2026-01-31 08:12:02.409 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:02 compute-2 nova_compute[226829]: 2026-01-31 08:12:02.410 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:02.412 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6250cb58-bb7b-4cd6-bf6e-468f4e515cdc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:02.423 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[599ea264-1719-4ed5-9121-3d48ec1adfa1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:02.424 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[bb5cd679-71f5-4a88-b1b6-9b24635b4bf5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:02.435 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ab55d0d0-dc2c-4b8b-9c87-1f43cca17997]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736007, 'reachable_time': 33494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282472, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:02 compute-2 systemd[1]: run-netns-ovnmeta\x2de8014d6b\x2d23e1\x2d41ef\x2db5e2\x2d3d770d302e72.mount: Deactivated successfully.
Jan 31 08:12:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:02.440 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:12:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:02.441 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[e3a074aa-1c08-4034-bcab-46ad417c2dca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:02.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:02 compute-2 ceph-mon[77282]: pgmap v2280: 305 pgs: 305 active+clean; 452 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 36 KiB/s wr, 234 op/s
Jan 31 08:12:03 compute-2 nova_compute[226829]: 2026-01-31 08:12:03.500 226833 DEBUG oslo_concurrency.lockutils [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:03 compute-2 nova_compute[226829]: 2026-01-31 08:12:03.500 226833 DEBUG oslo_concurrency.lockutils [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:03 compute-2 nova_compute[226829]: 2026-01-31 08:12:03.564 226833 DEBUG nova.objects.instance [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'migration_context' on Instance uuid af8d85d9-c7a2-4709-a234-19511f3e4395 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:12:03 compute-2 nova_compute[226829]: 2026-01-31 08:12:03.772 226833 DEBUG oslo_concurrency.processutils [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:03 compute-2 nova_compute[226829]: 2026-01-31 08:12:03.931 226833 DEBUG nova.compute.manager [req-75d9a0c3-3717-46c1-908f-4747d6cf645a req-ec3a4b50-d977-4134-adab-946a5820e3e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received event network-vif-unplugged-4fa5cff4-40cb-4379-bda2-213171730f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:12:03 compute-2 nova_compute[226829]: 2026-01-31 08:12:03.932 226833 DEBUG oslo_concurrency.lockutils [req-75d9a0c3-3717-46c1-908f-4747d6cf645a req-ec3a4b50-d977-4134-adab-946a5820e3e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:03 compute-2 nova_compute[226829]: 2026-01-31 08:12:03.932 226833 DEBUG oslo_concurrency.lockutils [req-75d9a0c3-3717-46c1-908f-4747d6cf645a req-ec3a4b50-d977-4134-adab-946a5820e3e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:03 compute-2 nova_compute[226829]: 2026-01-31 08:12:03.933 226833 DEBUG oslo_concurrency.lockutils [req-75d9a0c3-3717-46c1-908f-4747d6cf645a req-ec3a4b50-d977-4134-adab-946a5820e3e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:03 compute-2 nova_compute[226829]: 2026-01-31 08:12:03.933 226833 DEBUG nova.compute.manager [req-75d9a0c3-3717-46c1-908f-4747d6cf645a req-ec3a4b50-d977-4134-adab-946a5820e3e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] No waiting events found dispatching network-vif-unplugged-4fa5cff4-40cb-4379-bda2-213171730f4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:12:03 compute-2 nova_compute[226829]: 2026-01-31 08:12:03.933 226833 WARNING nova.compute.manager [req-75d9a0c3-3717-46c1-908f-4747d6cf645a req-ec3a4b50-d977-4134-adab-946a5820e3e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received unexpected event network-vif-unplugged-4fa5cff4-40cb-4379-bda2-213171730f4f for instance with vm_state resized and task_state resize_reverting.
Jan 31 08:12:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:04.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:12:04 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2218415899' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:04 compute-2 nova_compute[226829]: 2026-01-31 08:12:04.195 226833 DEBUG oslo_concurrency.processutils [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:04 compute-2 nova_compute[226829]: 2026-01-31 08:12:04.200 226833 DEBUG nova.compute.provider_tree [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:12:04 compute-2 nova_compute[226829]: 2026-01-31 08:12:04.223 226833 DEBUG nova.scheduler.client.report [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:12:04 compute-2 sudo[282496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:12:04 compute-2 sudo[282496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:04 compute-2 sudo[282496]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:04 compute-2 sudo[282527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:12:04 compute-2 sudo[282527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:04 compute-2 sudo[282527]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:04 compute-2 podman[282520]: 2026-01-31 08:12:04.358991786 +0000 UTC m=+0.068980547 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:12:04 compute-2 nova_compute[226829]: 2026-01-31 08:12:04.409 226833 DEBUG oslo_concurrency.lockutils [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 0.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:04.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:04 compute-2 ceph-mon[77282]: pgmap v2281: 305 pgs: 305 active+clean; 440 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 36 KiB/s wr, 205 op/s
Jan 31 08:12:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2218415899' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/784879266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:12:05 compute-2 nova_compute[226829]: 2026-01-31 08:12:05.758 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:06.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:12:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:06.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:12:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:06.885 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:06.886 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:06.886 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:06 compute-2 ceph-mon[77282]: pgmap v2282: 305 pgs: 305 active+clean; 415 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 37 KiB/s wr, 216 op/s
Jan 31 08:12:07 compute-2 nova_compute[226829]: 2026-01-31 08:12:07.328 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:12:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:08.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:12:08 compute-2 nova_compute[226829]: 2026-01-31 08:12:08.339 226833 DEBUG nova.compute.manager [req-4a27e6bc-f7c2-45c3-a21e-f00cc91bc23b req-809a7da3-3dbe-4d3e-8097-8058c6f5600a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:12:08 compute-2 nova_compute[226829]: 2026-01-31 08:12:08.339 226833 DEBUG oslo_concurrency.lockutils [req-4a27e6bc-f7c2-45c3-a21e-f00cc91bc23b req-809a7da3-3dbe-4d3e-8097-8058c6f5600a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:08 compute-2 nova_compute[226829]: 2026-01-31 08:12:08.340 226833 DEBUG oslo_concurrency.lockutils [req-4a27e6bc-f7c2-45c3-a21e-f00cc91bc23b req-809a7da3-3dbe-4d3e-8097-8058c6f5600a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:08 compute-2 nova_compute[226829]: 2026-01-31 08:12:08.340 226833 DEBUG oslo_concurrency.lockutils [req-4a27e6bc-f7c2-45c3-a21e-f00cc91bc23b req-809a7da3-3dbe-4d3e-8097-8058c6f5600a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:08 compute-2 nova_compute[226829]: 2026-01-31 08:12:08.341 226833 DEBUG nova.compute.manager [req-4a27e6bc-f7c2-45c3-a21e-f00cc91bc23b req-809a7da3-3dbe-4d3e-8097-8058c6f5600a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] No waiting events found dispatching network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:12:08 compute-2 nova_compute[226829]: 2026-01-31 08:12:08.341 226833 WARNING nova.compute.manager [req-4a27e6bc-f7c2-45c3-a21e-f00cc91bc23b req-809a7da3-3dbe-4d3e-8097-8058c6f5600a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received unexpected event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f for instance with vm_state resized and task_state resize_reverting.
Jan 31 08:12:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:12:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:08.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:12:08 compute-2 ceph-mon[77282]: pgmap v2283: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 23 KiB/s wr, 225 op/s
Jan 31 08:12:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:09.027 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:12:09 compute-2 nova_compute[226829]: 2026-01-31 08:12:09.028 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:09.029 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:12:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:09.031 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:09 compute-2 nova_compute[226829]: 2026-01-31 08:12:09.537 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:09 compute-2 nova_compute[226829]: 2026-01-31 08:12:09.707 226833 DEBUG nova.compute.manager [req-02900304-5cc7-482c-9f18-fc04d88c27ef req-0d01e1c8-22f4-4f97-bbec-472ac695c9a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received event network-changed-4fa5cff4-40cb-4379-bda2-213171730f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:12:09 compute-2 nova_compute[226829]: 2026-01-31 08:12:09.707 226833 DEBUG nova.compute.manager [req-02900304-5cc7-482c-9f18-fc04d88c27ef req-0d01e1c8-22f4-4f97-bbec-472ac695c9a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Refreshing instance network info cache due to event network-changed-4fa5cff4-40cb-4379-bda2-213171730f4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:12:09 compute-2 nova_compute[226829]: 2026-01-31 08:12:09.708 226833 DEBUG oslo_concurrency.lockutils [req-02900304-5cc7-482c-9f18-fc04d88c27ef req-0d01e1c8-22f4-4f97-bbec-472ac695c9a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:12:09 compute-2 nova_compute[226829]: 2026-01-31 08:12:09.708 226833 DEBUG oslo_concurrency.lockutils [req-02900304-5cc7-482c-9f18-fc04d88c27ef req-0d01e1c8-22f4-4f97-bbec-472ac695c9a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:12:09 compute-2 nova_compute[226829]: 2026-01-31 08:12:09.709 226833 DEBUG nova.network.neutron [req-02900304-5cc7-482c-9f18-fc04d88c27ef req-0d01e1c8-22f4-4f97-bbec-472ac695c9a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Refreshing network info cache for port 4fa5cff4-40cb-4379-bda2-213171730f4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:12:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:12:09 compute-2 ceph-mon[77282]: pgmap v2284: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 23 KiB/s wr, 169 op/s
Jan 31 08:12:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:12:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:10.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:12:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:10.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:10 compute-2 nova_compute[226829]: 2026-01-31 08:12:10.760 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:12:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:12.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:12:12 compute-2 nova_compute[226829]: 2026-01-31 08:12:12.367 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:12 compute-2 nova_compute[226829]: 2026-01-31 08:12:12.391 226833 DEBUG nova.network.neutron [req-02900304-5cc7-482c-9f18-fc04d88c27ef req-0d01e1c8-22f4-4f97-bbec-472ac695c9a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Updated VIF entry in instance network info cache for port 4fa5cff4-40cb-4379-bda2-213171730f4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:12:12 compute-2 nova_compute[226829]: 2026-01-31 08:12:12.392 226833 DEBUG nova.network.neutron [req-02900304-5cc7-482c-9f18-fc04d88c27ef req-0d01e1c8-22f4-4f97-bbec-472ac695c9a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Updating instance_info_cache with network_info: [{"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:12:12 compute-2 nova_compute[226829]: 2026-01-31 08:12:12.456 226833 DEBUG oslo_concurrency.lockutils [req-02900304-5cc7-482c-9f18-fc04d88c27ef req-0d01e1c8-22f4-4f97-bbec-472ac695c9a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:12:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:12.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:12 compute-2 ceph-mon[77282]: pgmap v2285: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 23 KiB/s wr, 169 op/s
Jan 31 08:12:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3258372223' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:14.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:14.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:14 compute-2 ceph-mon[77282]: pgmap v2286: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 13 KiB/s wr, 161 op/s
Jan 31 08:12:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:12:15 compute-2 nova_compute[226829]: 2026-01-31 08:12:15.763 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:12:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:16.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:12:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:12:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:16.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:12:16 compute-2 ceph-mon[77282]: pgmap v2287: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 12 KiB/s wr, 156 op/s
Jan 31 08:12:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2812206464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e288 e288: 3 total, 3 up, 3 in
Jan 31 08:12:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:12:17 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/310619649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:17 compute-2 nova_compute[226829]: 2026-01-31 08:12:17.300 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847122.2988787, af8d85d9-c7a2-4709-a234-19511f3e4395 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:12:17 compute-2 nova_compute[226829]: 2026-01-31 08:12:17.300 226833 INFO nova.compute.manager [-] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] VM Stopped (Lifecycle Event)
Jan 31 08:12:17 compute-2 nova_compute[226829]: 2026-01-31 08:12:17.332 226833 DEBUG nova.compute.manager [None req-c0874332-ad3c-47e8-b438-d4579a27f3e6 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:12:17 compute-2 nova_compute[226829]: 2026-01-31 08:12:17.369 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:17 compute-2 ceph-mon[77282]: osdmap e288: 3 total, 3 up, 3 in
Jan 31 08:12:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/310619649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1091787693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:12:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:18.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:12:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:18.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:18 compute-2 ceph-mon[77282]: pgmap v2289: 305 pgs: 305 active+clean; 407 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 933 KiB/s rd, 163 KiB/s wr, 75 op/s
Jan 31 08:12:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2530508858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:19 compute-2 nova_compute[226829]: 2026-01-31 08:12:19.246 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:12:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:12:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:20.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:12:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:12:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:20.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:12:20 compute-2 nova_compute[226829]: 2026-01-31 08:12:20.765 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:20 compute-2 ceph-mon[77282]: pgmap v2290: 305 pgs: 305 active+clean; 468 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 632 KiB/s rd, 3.2 MiB/s wr, 99 op/s
Jan 31 08:12:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:22.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:22 compute-2 nova_compute[226829]: 2026-01-31 08:12:22.363 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:12:22 compute-2 nova_compute[226829]: 2026-01-31 08:12:22.364 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:12:22 compute-2 nova_compute[226829]: 2026-01-31 08:12:22.372 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:22.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:22 compute-2 ceph-mon[77282]: pgmap v2291: 305 pgs: 305 active+clean; 482 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 694 KiB/s rd, 3.9 MiB/s wr, 140 op/s
Jan 31 08:12:23 compute-2 nova_compute[226829]: 2026-01-31 08:12:23.807 226833 DEBUG nova.compute.manager [req-8adf6ed5-725b-492f-beb7-b2e8ef9405f3 req-b4183c6e-ab91-4282-8090-7b1d92b9fc5c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:12:23 compute-2 nova_compute[226829]: 2026-01-31 08:12:23.807 226833 DEBUG oslo_concurrency.lockutils [req-8adf6ed5-725b-492f-beb7-b2e8ef9405f3 req-b4183c6e-ab91-4282-8090-7b1d92b9fc5c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:23 compute-2 nova_compute[226829]: 2026-01-31 08:12:23.808 226833 DEBUG oslo_concurrency.lockutils [req-8adf6ed5-725b-492f-beb7-b2e8ef9405f3 req-b4183c6e-ab91-4282-8090-7b1d92b9fc5c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:23 compute-2 nova_compute[226829]: 2026-01-31 08:12:23.808 226833 DEBUG oslo_concurrency.lockutils [req-8adf6ed5-725b-492f-beb7-b2e8ef9405f3 req-b4183c6e-ab91-4282-8090-7b1d92b9fc5c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:23 compute-2 nova_compute[226829]: 2026-01-31 08:12:23.808 226833 DEBUG nova.compute.manager [req-8adf6ed5-725b-492f-beb7-b2e8ef9405f3 req-b4183c6e-ab91-4282-8090-7b1d92b9fc5c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] No waiting events found dispatching network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:12:23 compute-2 nova_compute[226829]: 2026-01-31 08:12:23.808 226833 WARNING nova.compute.manager [req-8adf6ed5-725b-492f-beb7-b2e8ef9405f3 req-b4183c6e-ab91-4282-8090-7b1d92b9fc5c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received unexpected event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f for instance with vm_state resized and task_state resize_reverting.
Jan 31 08:12:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:24.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:24 compute-2 sudo[282574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:12:24 compute-2 sudo[282574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:24 compute-2 sudo[282574]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:24 compute-2 sudo[282605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:12:24 compute-2 sudo[282605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:24 compute-2 sudo[282605]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:24 compute-2 podman[282598]: 2026-01-31 08:12:24.484670303 +0000 UTC m=+0.073465740 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller)
Jan 31 08:12:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:24.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:24 compute-2 ceph-mon[77282]: pgmap v2292: 305 pgs: 305 active+clean; 499 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 696 KiB/s rd, 4.3 MiB/s wr, 145 op/s
Jan 31 08:12:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:12:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e289 e289: 3 total, 3 up, 3 in
Jan 31 08:12:25 compute-2 nova_compute[226829]: 2026-01-31 08:12:25.767 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3098008576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4176926919' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:25 compute-2 ceph-mon[77282]: osdmap e289: 3 total, 3 up, 3 in
Jan 31 08:12:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:26.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:26 compute-2 nova_compute[226829]: 2026-01-31 08:12:26.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:12:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:26.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:26 compute-2 ceph-mon[77282]: pgmap v2294: 305 pgs: 305 active+clean; 499 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 696 KiB/s rd, 4.8 MiB/s wr, 148 op/s
Jan 31 08:12:27 compute-2 nova_compute[226829]: 2026-01-31 08:12:27.374 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:27 compute-2 nova_compute[226829]: 2026-01-31 08:12:27.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:12:27 compute-2 nova_compute[226829]: 2026-01-31 08:12:27.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:12:27 compute-2 nova_compute[226829]: 2026-01-31 08:12:27.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:12:27 compute-2 ceph-mon[77282]: pgmap v2295: 305 pgs: 305 active+clean; 499 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.1 MiB/s wr, 148 op/s
Jan 31 08:12:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:28.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:28.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:28 compute-2 sudo[282653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:12:28 compute-2 sudo[282653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:28 compute-2 sudo[282653]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:28 compute-2 sudo[282678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:12:28 compute-2 sudo[282678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:28 compute-2 sudo[282678]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:28 compute-2 sudo[282703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:12:28 compute-2 sudo[282703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:28 compute-2 sudo[282703]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:29 compute-2 sudo[282728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:12:29 compute-2 sudo[282728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:29 compute-2 nova_compute[226829]: 2026-01-31 08:12:29.415 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-4e4e24bf-e5fe-4be2-9d89-52432f07cca0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:12:29 compute-2 nova_compute[226829]: 2026-01-31 08:12:29.416 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-4e4e24bf-e5fe-4be2-9d89-52432f07cca0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:12:29 compute-2 nova_compute[226829]: 2026-01-31 08:12:29.416 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:12:29 compute-2 nova_compute[226829]: 2026-01-31 08:12:29.416 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4e4e24bf-e5fe-4be2-9d89-52432f07cca0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:12:29 compute-2 sudo[282728]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:12:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:12:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:12:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:12:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:12:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:12:29 compute-2 nova_compute[226829]: 2026-01-31 08:12:29.866 226833 DEBUG oslo_concurrency.lockutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Acquiring lock "1ee317bd-390c-4a22-9e4e-e24189eb499e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:29 compute-2 nova_compute[226829]: 2026-01-31 08:12:29.866 226833 DEBUG oslo_concurrency.lockutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "1ee317bd-390c-4a22-9e4e-e24189eb499e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:12:30 compute-2 nova_compute[226829]: 2026-01-31 08:12:30.130 226833 DEBUG nova.compute.manager [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:12:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:30.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:12:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:30.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:12:30 compute-2 ceph-mon[77282]: pgmap v2296: 305 pgs: 305 active+clean; 499 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.1 MiB/s wr, 123 op/s
Jan 31 08:12:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3571684988' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:30 compute-2 nova_compute[226829]: 2026-01-31 08:12:30.769 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:30 compute-2 nova_compute[226829]: 2026-01-31 08:12:30.792 226833 DEBUG oslo_concurrency.lockutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:30 compute-2 nova_compute[226829]: 2026-01-31 08:12:30.793 226833 DEBUG oslo_concurrency.lockutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:30 compute-2 nova_compute[226829]: 2026-01-31 08:12:30.806 226833 DEBUG nova.virt.hardware [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:12:30 compute-2 nova_compute[226829]: 2026-01-31 08:12:30.807 226833 INFO nova.compute.claims [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:12:31 compute-2 nova_compute[226829]: 2026-01-31 08:12:31.244 226833 DEBUG oslo_concurrency.processutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3123372633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:12:31 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2130249577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:31 compute-2 nova_compute[226829]: 2026-01-31 08:12:31.698 226833 DEBUG oslo_concurrency.processutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:31 compute-2 nova_compute[226829]: 2026-01-31 08:12:31.706 226833 DEBUG nova.compute.provider_tree [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:12:31 compute-2 nova_compute[226829]: 2026-01-31 08:12:31.923 226833 DEBUG nova.scheduler.client.report [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:12:32 compute-2 nova_compute[226829]: 2026-01-31 08:12:32.090 226833 DEBUG oslo_concurrency.lockutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.297s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:32 compute-2 nova_compute[226829]: 2026-01-31 08:12:32.091 226833 DEBUG nova.compute.manager [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:12:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:12:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:32.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:12:32 compute-2 nova_compute[226829]: 2026-01-31 08:12:32.321 226833 DEBUG nova.compute.manager [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:12:32 compute-2 nova_compute[226829]: 2026-01-31 08:12:32.322 226833 DEBUG nova.network.neutron [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:12:32 compute-2 nova_compute[226829]: 2026-01-31 08:12:32.376 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:32 compute-2 nova_compute[226829]: 2026-01-31 08:12:32.464 226833 INFO nova.virt.libvirt.driver [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:12:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:32.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:32 compute-2 nova_compute[226829]: 2026-01-31 08:12:32.554 226833 DEBUG nova.compute.manager [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:12:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2130249577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:32 compute-2 ceph-mon[77282]: pgmap v2297: 305 pgs: 305 active+clean; 499 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 377 KiB/s wr, 82 op/s
Jan 31 08:12:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2904659359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:33 compute-2 nova_compute[226829]: 2026-01-31 08:12:33.307 226833 DEBUG nova.compute.manager [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:12:33 compute-2 nova_compute[226829]: 2026-01-31 08:12:33.308 226833 DEBUG nova.virt.libvirt.driver [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:12:33 compute-2 nova_compute[226829]: 2026-01-31 08:12:33.308 226833 INFO nova.virt.libvirt.driver [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Creating image(s)
Jan 31 08:12:33 compute-2 nova_compute[226829]: 2026-01-31 08:12:33.336 226833 DEBUG nova.storage.rbd_utils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] rbd image 1ee317bd-390c-4a22-9e4e-e24189eb499e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:12:33 compute-2 nova_compute[226829]: 2026-01-31 08:12:33.365 226833 DEBUG nova.storage.rbd_utils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] rbd image 1ee317bd-390c-4a22-9e4e-e24189eb499e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:12:33 compute-2 nova_compute[226829]: 2026-01-31 08:12:33.390 226833 DEBUG nova.storage.rbd_utils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] rbd image 1ee317bd-390c-4a22-9e4e-e24189eb499e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:12:33 compute-2 nova_compute[226829]: 2026-01-31 08:12:33.393 226833 DEBUG oslo_concurrency.processutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:33 compute-2 nova_compute[226829]: 2026-01-31 08:12:33.415 226833 DEBUG nova.policy [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6ab9e181016f4d5a899c91dae3aa26e0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cbd0f41e455b4b3b9a8edf35ef0b85ed', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:12:33 compute-2 nova_compute[226829]: 2026-01-31 08:12:33.443 226833 DEBUG oslo_concurrency.processutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:33 compute-2 nova_compute[226829]: 2026-01-31 08:12:33.444 226833 DEBUG oslo_concurrency.lockutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:33 compute-2 nova_compute[226829]: 2026-01-31 08:12:33.444 226833 DEBUG oslo_concurrency.lockutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:33 compute-2 nova_compute[226829]: 2026-01-31 08:12:33.445 226833 DEBUG oslo_concurrency.lockutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:33 compute-2 nova_compute[226829]: 2026-01-31 08:12:33.469 226833 DEBUG nova.storage.rbd_utils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] rbd image 1ee317bd-390c-4a22-9e4e-e24189eb499e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:12:33 compute-2 nova_compute[226829]: 2026-01-31 08:12:33.472 226833 DEBUG oslo_concurrency.processutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 1ee317bd-390c-4a22-9e4e-e24189eb499e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:33 compute-2 nova_compute[226829]: 2026-01-31 08:12:33.792 226833 DEBUG oslo_concurrency.processutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 1ee317bd-390c-4a22-9e4e-e24189eb499e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.320s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:33 compute-2 nova_compute[226829]: 2026-01-31 08:12:33.885 226833 DEBUG nova.storage.rbd_utils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] resizing rbd image 1ee317bd-390c-4a22-9e4e-e24189eb499e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:12:33 compute-2 nova_compute[226829]: 2026-01-31 08:12:33.995 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Updating instance_info_cache with network_info: [{"id": "a2ea5794-4bd8-4ecd-84df-7b5500101840", "address": "fa:16:3e:42:3b:a1", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ea5794-4b", "ovs_interfaceid": "a2ea5794-4bd8-4ecd-84df-7b5500101840", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:12:34 compute-2 nova_compute[226829]: 2026-01-31 08:12:34.001 226833 DEBUG nova.objects.instance [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lazy-loading 'migration_context' on Instance uuid 1ee317bd-390c-4a22-9e4e-e24189eb499e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:12:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:12:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:34.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:12:34 compute-2 nova_compute[226829]: 2026-01-31 08:12:34.440 226833 DEBUG nova.virt.libvirt.driver [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:12:34 compute-2 nova_compute[226829]: 2026-01-31 08:12:34.440 226833 DEBUG nova.virt.libvirt.driver [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Ensure instance console log exists: /var/lib/nova/instances/1ee317bd-390c-4a22-9e4e-e24189eb499e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:12:34 compute-2 nova_compute[226829]: 2026-01-31 08:12:34.441 226833 DEBUG oslo_concurrency.lockutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:34 compute-2 nova_compute[226829]: 2026-01-31 08:12:34.442 226833 DEBUG oslo_concurrency.lockutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:34 compute-2 nova_compute[226829]: 2026-01-31 08:12:34.442 226833 DEBUG oslo_concurrency.lockutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:34 compute-2 nova_compute[226829]: 2026-01-31 08:12:34.443 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-4e4e24bf-e5fe-4be2-9d89-52432f07cca0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:12:34 compute-2 nova_compute[226829]: 2026-01-31 08:12:34.444 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:12:34 compute-2 nova_compute[226829]: 2026-01-31 08:12:34.445 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:12:34 compute-2 nova_compute[226829]: 2026-01-31 08:12:34.445 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:12:34 compute-2 nova_compute[226829]: 2026-01-31 08:12:34.445 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:12:34 compute-2 nova_compute[226829]: 2026-01-31 08:12:34.446 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:12:34 compute-2 nova_compute[226829]: 2026-01-31 08:12:34.523 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:34 compute-2 nova_compute[226829]: 2026-01-31 08:12:34.524 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:34 compute-2 nova_compute[226829]: 2026-01-31 08:12:34.524 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:34 compute-2 nova_compute[226829]: 2026-01-31 08:12:34.525 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:12:34 compute-2 nova_compute[226829]: 2026-01-31 08:12:34.526 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:12:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:34.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:12:34 compute-2 ceph-mon[77282]: pgmap v2298: 305 pgs: 305 active+clean; 499 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 13 KiB/s wr, 78 op/s
Jan 31 08:12:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1081685098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:34 compute-2 nova_compute[226829]: 2026-01-31 08:12:34.841 226833 DEBUG nova.network.neutron [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Successfully created port: 75f73628-b189-4a56-a44e-b33ec7ff3e50 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:12:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:12:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:12:34 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/413626707' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:34 compute-2 nova_compute[226829]: 2026-01-31 08:12:34.981 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:35 compute-2 podman[282998]: 2026-01-31 08:12:35.113870207 +0000 UTC m=+0.085710483 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 31 08:12:35 compute-2 nova_compute[226829]: 2026-01-31 08:12:35.183 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:12:35 compute-2 nova_compute[226829]: 2026-01-31 08:12:35.183 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:12:35 compute-2 nova_compute[226829]: 2026-01-31 08:12:35.354 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:12:35 compute-2 nova_compute[226829]: 2026-01-31 08:12:35.355 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4126MB free_disk=20.76446533203125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:12:35 compute-2 nova_compute[226829]: 2026-01-31 08:12:35.355 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:35 compute-2 nova_compute[226829]: 2026-01-31 08:12:35.355 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:35 compute-2 sudo[283017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:12:35 compute-2 sudo[283017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:35 compute-2 sudo[283017]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:35 compute-2 sudo[283042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:12:35 compute-2 sudo[283042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:35 compute-2 sudo[283042]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:35 compute-2 nova_compute[226829]: 2026-01-31 08:12:35.771 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/413626707' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:35 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:12:35 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:12:35 compute-2 nova_compute[226829]: 2026-01-31 08:12:35.966 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 4e4e24bf-e5fe-4be2-9d89-52432f07cca0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:12:35 compute-2 nova_compute[226829]: 2026-01-31 08:12:35.966 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 1ee317bd-390c-4a22-9e4e-e24189eb499e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:12:35 compute-2 nova_compute[226829]: 2026-01-31 08:12:35.966 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:12:35 compute-2 nova_compute[226829]: 2026-01-31 08:12:35.967 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:12:36 compute-2 nova_compute[226829]: 2026-01-31 08:12:36.050 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:12:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:36.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:12:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:12:36 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/263703871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:36 compute-2 nova_compute[226829]: 2026-01-31 08:12:36.481 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:36 compute-2 nova_compute[226829]: 2026-01-31 08:12:36.486 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:12:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:36.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:36 compute-2 nova_compute[226829]: 2026-01-31 08:12:36.556 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:12:36 compute-2 nova_compute[226829]: 2026-01-31 08:12:36.684 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:12:36 compute-2 nova_compute[226829]: 2026-01-31 08:12:36.685 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.330s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:36 compute-2 ceph-mon[77282]: pgmap v2299: 305 pgs: 305 active+clean; 526 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 873 KiB/s wr, 137 op/s
Jan 31 08:12:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/661894411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/263703871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:37 compute-2 nova_compute[226829]: 2026-01-31 08:12:37.379 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:37 compute-2 nova_compute[226829]: 2026-01-31 08:12:37.541 226833 DEBUG nova.network.neutron [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Successfully updated port: 75f73628-b189-4a56-a44e-b33ec7ff3e50 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:12:37 compute-2 nova_compute[226829]: 2026-01-31 08:12:37.563 226833 DEBUG oslo_concurrency.lockutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Acquiring lock "refresh_cache-1ee317bd-390c-4a22-9e4e-e24189eb499e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:12:37 compute-2 nova_compute[226829]: 2026-01-31 08:12:37.563 226833 DEBUG oslo_concurrency.lockutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Acquired lock "refresh_cache-1ee317bd-390c-4a22-9e4e-e24189eb499e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:12:37 compute-2 nova_compute[226829]: 2026-01-31 08:12:37.563 226833 DEBUG nova.network.neutron [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:12:37 compute-2 nova_compute[226829]: 2026-01-31 08:12:37.763 226833 DEBUG nova.network.neutron [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:12:37 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/793552847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:12:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:38.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:12:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:38.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:38 compute-2 ceph-mon[77282]: pgmap v2300: 305 pgs: 305 active+clean; 507 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.8 MiB/s wr, 204 op/s
Jan 31 08:12:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4149491747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:39 compute-2 nova_compute[226829]: 2026-01-31 08:12:39.415 226833 DEBUG nova.compute.manager [req-4a3fe33d-33de-4c45-820b-2bd02708be5c req-2c607db4-788f-44d8-be14-7ba64a241897 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Received event network-changed-75f73628-b189-4a56-a44e-b33ec7ff3e50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:12:39 compute-2 nova_compute[226829]: 2026-01-31 08:12:39.416 226833 DEBUG nova.compute.manager [req-4a3fe33d-33de-4c45-820b-2bd02708be5c req-2c607db4-788f-44d8-be14-7ba64a241897 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Refreshing instance network info cache due to event network-changed-75f73628-b189-4a56-a44e-b33ec7ff3e50. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:12:39 compute-2 nova_compute[226829]: 2026-01-31 08:12:39.417 226833 DEBUG oslo_concurrency.lockutils [req-4a3fe33d-33de-4c45-820b-2bd02708be5c req-2c607db4-788f-44d8-be14-7ba64a241897 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-1ee317bd-390c-4a22-9e4e-e24189eb499e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:12:39 compute-2 nova_compute[226829]: 2026-01-31 08:12:39.617 226833 DEBUG nova.network.neutron [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Updating instance_info_cache with network_info: [{"id": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "address": "fa:16:3e:7b:a5:ae", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75f73628-b1", "ovs_interfaceid": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:12:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:12:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e290 e290: 3 total, 3 up, 3 in
Jan 31 08:12:40 compute-2 ceph-mon[77282]: pgmap v2301: 305 pgs: 305 active+clean; 466 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 1.8 MiB/s wr, 227 op/s
Jan 31 08:12:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:40.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:40.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:40 compute-2 nova_compute[226829]: 2026-01-31 08:12:40.772 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e291 e291: 3 total, 3 up, 3 in
Jan 31 08:12:41 compute-2 ceph-mon[77282]: osdmap e290: 3 total, 3 up, 3 in
Jan 31 08:12:41 compute-2 nova_compute[226829]: 2026-01-31 08:12:41.273 226833 DEBUG oslo_concurrency.lockutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Releasing lock "refresh_cache-1ee317bd-390c-4a22-9e4e-e24189eb499e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:12:41 compute-2 nova_compute[226829]: 2026-01-31 08:12:41.273 226833 DEBUG nova.compute.manager [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Instance network_info: |[{"id": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "address": "fa:16:3e:7b:a5:ae", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75f73628-b1", "ovs_interfaceid": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:12:41 compute-2 nova_compute[226829]: 2026-01-31 08:12:41.274 226833 DEBUG oslo_concurrency.lockutils [req-4a3fe33d-33de-4c45-820b-2bd02708be5c req-2c607db4-788f-44d8-be14-7ba64a241897 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-1ee317bd-390c-4a22-9e4e-e24189eb499e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:12:41 compute-2 nova_compute[226829]: 2026-01-31 08:12:41.274 226833 DEBUG nova.network.neutron [req-4a3fe33d-33de-4c45-820b-2bd02708be5c req-2c607db4-788f-44d8-be14-7ba64a241897 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Refreshing network info cache for port 75f73628-b189-4a56-a44e-b33ec7ff3e50 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:12:41 compute-2 nova_compute[226829]: 2026-01-31 08:12:41.276 226833 DEBUG nova.virt.libvirt.driver [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Start _get_guest_xml network_info=[{"id": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "address": "fa:16:3e:7b:a5:ae", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75f73628-b1", "ovs_interfaceid": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:12:41 compute-2 nova_compute[226829]: 2026-01-31 08:12:41.280 226833 WARNING nova.virt.libvirt.driver [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:12:41 compute-2 nova_compute[226829]: 2026-01-31 08:12:41.287 226833 DEBUG nova.virt.libvirt.host [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:12:41 compute-2 nova_compute[226829]: 2026-01-31 08:12:41.287 226833 DEBUG nova.virt.libvirt.host [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:12:41 compute-2 nova_compute[226829]: 2026-01-31 08:12:41.293 226833 DEBUG nova.virt.libvirt.host [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:12:41 compute-2 nova_compute[226829]: 2026-01-31 08:12:41.294 226833 DEBUG nova.virt.libvirt.host [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:12:41 compute-2 nova_compute[226829]: 2026-01-31 08:12:41.295 226833 DEBUG nova.virt.libvirt.driver [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:12:41 compute-2 nova_compute[226829]: 2026-01-31 08:12:41.295 226833 DEBUG nova.virt.hardware [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:12:41 compute-2 nova_compute[226829]: 2026-01-31 08:12:41.296 226833 DEBUG nova.virt.hardware [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:12:41 compute-2 nova_compute[226829]: 2026-01-31 08:12:41.296 226833 DEBUG nova.virt.hardware [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:12:41 compute-2 nova_compute[226829]: 2026-01-31 08:12:41.296 226833 DEBUG nova.virt.hardware [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:12:41 compute-2 nova_compute[226829]: 2026-01-31 08:12:41.296 226833 DEBUG nova.virt.hardware [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:12:41 compute-2 nova_compute[226829]: 2026-01-31 08:12:41.296 226833 DEBUG nova.virt.hardware [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:12:41 compute-2 nova_compute[226829]: 2026-01-31 08:12:41.297 226833 DEBUG nova.virt.hardware [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:12:41 compute-2 nova_compute[226829]: 2026-01-31 08:12:41.297 226833 DEBUG nova.virt.hardware [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:12:41 compute-2 nova_compute[226829]: 2026-01-31 08:12:41.297 226833 DEBUG nova.virt.hardware [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:12:41 compute-2 nova_compute[226829]: 2026-01-31 08:12:41.297 226833 DEBUG nova.virt.hardware [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:12:41 compute-2 nova_compute[226829]: 2026-01-31 08:12:41.297 226833 DEBUG nova.virt.hardware [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:12:41 compute-2 nova_compute[226829]: 2026-01-31 08:12:41.300 226833 DEBUG oslo_concurrency.processutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:12:41 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1620311410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:41 compute-2 nova_compute[226829]: 2026-01-31 08:12:41.711 226833 DEBUG oslo_concurrency.processutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:41 compute-2 nova_compute[226829]: 2026-01-31 08:12:41.738 226833 DEBUG nova.storage.rbd_utils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] rbd image 1ee317bd-390c-4a22-9e4e-e24189eb499e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:12:41 compute-2 nova_compute[226829]: 2026-01-31 08:12:41.743 226833 DEBUG oslo_concurrency.processutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:42 compute-2 ceph-mon[77282]: osdmap e291: 3 total, 3 up, 3 in
Jan 31 08:12:42 compute-2 ceph-mon[77282]: pgmap v2304: 305 pgs: 305 active+clean; 481 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 3.1 MiB/s wr, 312 op/s
Jan 31 08:12:42 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1620311410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:42 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e292 e292: 3 total, 3 up, 3 in
Jan 31 08:12:42 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:12:42 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1119874547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:42.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.167 226833 DEBUG oslo_concurrency.processutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.170 226833 DEBUG nova.virt.libvirt.vif [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:12:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-239463750',display_name='tempest-ServerRescueTestJSON-server-239463750',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-239463750',id=127,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cbd0f41e455b4b3b9a8edf35ef0b85ed',ramdisk_id='',reservation_id='r-l1tglovt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1911109090',owner_user_name='tempest-ServerRescueTestJSON-1911109
090-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:12:32Z,user_data=None,user_id='6ab9e181016f4d5a899c91dae3aa26e0',uuid=1ee317bd-390c-4a22-9e4e-e24189eb499e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "address": "fa:16:3e:7b:a5:ae", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75f73628-b1", "ovs_interfaceid": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.171 226833 DEBUG nova.network.os_vif_util [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Converting VIF {"id": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "address": "fa:16:3e:7b:a5:ae", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75f73628-b1", "ovs_interfaceid": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.172 226833 DEBUG nova.network.os_vif_util [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:a5:ae,bridge_name='br-int',has_traffic_filtering=True,id=75f73628-b189-4a56-a44e-b33ec7ff3e50,network=Network(1b57bf91-5573-4777-9a03-b1fa3ca3351c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75f73628-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.174 226833 DEBUG nova.objects.instance [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lazy-loading 'pci_devices' on Instance uuid 1ee317bd-390c-4a22-9e4e-e24189eb499e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.236 226833 DEBUG nova.virt.libvirt.driver [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:12:42 compute-2 nova_compute[226829]:   <uuid>1ee317bd-390c-4a22-9e4e-e24189eb499e</uuid>
Jan 31 08:12:42 compute-2 nova_compute[226829]:   <name>instance-0000007f</name>
Jan 31 08:12:42 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:12:42 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:12:42 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <nova:name>tempest-ServerRescueTestJSON-server-239463750</nova:name>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:12:41</nova:creationTime>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:12:42 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:12:42 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:12:42 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:12:42 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:12:42 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:12:42 compute-2 nova_compute[226829]:         <nova:user uuid="6ab9e181016f4d5a899c91dae3aa26e0">tempest-ServerRescueTestJSON-1911109090-project-member</nova:user>
Jan 31 08:12:42 compute-2 nova_compute[226829]:         <nova:project uuid="cbd0f41e455b4b3b9a8edf35ef0b85ed">tempest-ServerRescueTestJSON-1911109090</nova:project>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:12:42 compute-2 nova_compute[226829]:         <nova:port uuid="75f73628-b189-4a56-a44e-b33ec7ff3e50">
Jan 31 08:12:42 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:12:42 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:12:42 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <system>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <entry name="serial">1ee317bd-390c-4a22-9e4e-e24189eb499e</entry>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <entry name="uuid">1ee317bd-390c-4a22-9e4e-e24189eb499e</entry>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     </system>
Jan 31 08:12:42 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:12:42 compute-2 nova_compute[226829]:   <os>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:   </os>
Jan 31 08:12:42 compute-2 nova_compute[226829]:   <features>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:   </features>
Jan 31 08:12:42 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:12:42 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:12:42 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/1ee317bd-390c-4a22-9e4e-e24189eb499e_disk">
Jan 31 08:12:42 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       </source>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:12:42 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/1ee317bd-390c-4a22-9e4e-e24189eb499e_disk.config">
Jan 31 08:12:42 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       </source>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:12:42 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:7b:a5:ae"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <target dev="tap75f73628-b1"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/1ee317bd-390c-4a22-9e4e-e24189eb499e/console.log" append="off"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <video>
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     </video>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:12:42 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:12:42 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:12:42 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:12:42 compute-2 nova_compute[226829]: </domain>
Jan 31 08:12:42 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.238 226833 DEBUG nova.compute.manager [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Preparing to wait for external event network-vif-plugged-75f73628-b189-4a56-a44e-b33ec7ff3e50 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.239 226833 DEBUG oslo_concurrency.lockutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Acquiring lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.240 226833 DEBUG oslo_concurrency.lockutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.241 226833 DEBUG oslo_concurrency.lockutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.242 226833 DEBUG nova.virt.libvirt.vif [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:12:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-239463750',display_name='tempest-ServerRescueTestJSON-server-239463750',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-239463750',id=127,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cbd0f41e455b4b3b9a8edf35ef0b85ed',ramdisk_id='',reservation_id='r-l1tglovt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1911109090',owner_user_name='tempest-ServerRescueTestJSON-1911109090-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:12:32Z,user_data=None,user_id='6ab9e181016f4d5a899c91dae3aa26e0',uuid=1ee317bd-390c-4a22-9e4e-e24189eb499e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "address": "fa:16:3e:7b:a5:ae", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75f73628-b1", "ovs_interfaceid": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.243 226833 DEBUG nova.network.os_vif_util [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Converting VIF {"id": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "address": "fa:16:3e:7b:a5:ae", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75f73628-b1", "ovs_interfaceid": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.244 226833 DEBUG nova.network.os_vif_util [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:a5:ae,bridge_name='br-int',has_traffic_filtering=True,id=75f73628-b189-4a56-a44e-b33ec7ff3e50,network=Network(1b57bf91-5573-4777-9a03-b1fa3ca3351c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75f73628-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.244 226833 DEBUG os_vif [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:a5:ae,bridge_name='br-int',has_traffic_filtering=True,id=75f73628-b189-4a56-a44e-b33ec7ff3e50,network=Network(1b57bf91-5573-4777-9a03-b1fa3ca3351c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75f73628-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.246 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.247 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.248 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.255 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.255 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap75f73628-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.256 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap75f73628-b1, col_values=(('external_ids', {'iface-id': '75f73628-b189-4a56-a44e-b33ec7ff3e50', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:a5:ae', 'vm-uuid': '1ee317bd-390c-4a22-9e4e-e24189eb499e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.258 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:42 compute-2 NetworkManager[48999]: <info>  [1769847162.2599] manager: (tap75f73628-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/256)
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.261 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.264 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.265 226833 INFO os_vif [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:a5:ae,bridge_name='br-int',has_traffic_filtering=True,id=75f73628-b189-4a56-a44e-b33ec7ff3e50,network=Network(1b57bf91-5573-4777-9a03-b1fa3ca3351c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75f73628-b1')
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.336 226833 DEBUG nova.virt.libvirt.driver [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.337 226833 DEBUG nova.virt.libvirt.driver [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.337 226833 DEBUG nova.virt.libvirt.driver [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] No VIF found with MAC fa:16:3e:7b:a5:ae, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.338 226833 INFO nova.virt.libvirt.driver [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Using config drive
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.365 226833 DEBUG nova.storage.rbd_utils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] rbd image 1ee317bd-390c-4a22-9e4e-e24189eb499e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:12:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:12:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:42.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.957 226833 INFO nova.virt.libvirt.driver [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Creating config drive at /var/lib/nova/instances/1ee317bd-390c-4a22-9e4e-e24189eb499e/disk.config
Jan 31 08:12:42 compute-2 nova_compute[226829]: 2026-01-31 08:12:42.962 226833 DEBUG oslo_concurrency.processutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1ee317bd-390c-4a22-9e4e-e24189eb499e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp2ezdd4zt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:43 compute-2 nova_compute[226829]: 2026-01-31 08:12:43.092 226833 DEBUG oslo_concurrency.processutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1ee317bd-390c-4a22-9e4e-e24189eb499e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp2ezdd4zt" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:43 compute-2 nova_compute[226829]: 2026-01-31 08:12:43.120 226833 DEBUG nova.storage.rbd_utils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] rbd image 1ee317bd-390c-4a22-9e4e-e24189eb499e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:12:43 compute-2 nova_compute[226829]: 2026-01-31 08:12:43.124 226833 DEBUG oslo_concurrency.processutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1ee317bd-390c-4a22-9e4e-e24189eb499e/disk.config 1ee317bd-390c-4a22-9e4e-e24189eb499e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:43 compute-2 ceph-mon[77282]: osdmap e292: 3 total, 3 up, 3 in
Jan 31 08:12:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1119874547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:43 compute-2 nova_compute[226829]: 2026-01-31 08:12:43.981 226833 DEBUG oslo_concurrency.processutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1ee317bd-390c-4a22-9e4e-e24189eb499e/disk.config 1ee317bd-390c-4a22-9e4e-e24189eb499e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.857s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:43 compute-2 nova_compute[226829]: 2026-01-31 08:12:43.981 226833 INFO nova.virt.libvirt.driver [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Deleting local config drive /var/lib/nova/instances/1ee317bd-390c-4a22-9e4e-e24189eb499e/disk.config because it was imported into RBD.
Jan 31 08:12:44 compute-2 nova_compute[226829]: 2026-01-31 08:12:44.015 226833 DEBUG nova.network.neutron [req-4a3fe33d-33de-4c45-820b-2bd02708be5c req-2c607db4-788f-44d8-be14-7ba64a241897 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Updated VIF entry in instance network info cache for port 75f73628-b189-4a56-a44e-b33ec7ff3e50. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:12:44 compute-2 nova_compute[226829]: 2026-01-31 08:12:44.016 226833 DEBUG nova.network.neutron [req-4a3fe33d-33de-4c45-820b-2bd02708be5c req-2c607db4-788f-44d8-be14-7ba64a241897 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Updating instance_info_cache with network_info: [{"id": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "address": "fa:16:3e:7b:a5:ae", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75f73628-b1", "ovs_interfaceid": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:12:44 compute-2 kernel: tap75f73628-b1: entered promiscuous mode
Jan 31 08:12:44 compute-2 nova_compute[226829]: 2026-01-31 08:12:44.028 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:44 compute-2 NetworkManager[48999]: <info>  [1769847164.0319] manager: (tap75f73628-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/257)
Jan 31 08:12:44 compute-2 ovn_controller[133834]: 2026-01-31T08:12:44Z|00517|binding|INFO|Claiming lport 75f73628-b189-4a56-a44e-b33ec7ff3e50 for this chassis.
Jan 31 08:12:44 compute-2 ovn_controller[133834]: 2026-01-31T08:12:44Z|00518|binding|INFO|75f73628-b189-4a56-a44e-b33ec7ff3e50: Claiming fa:16:3e:7b:a5:ae 10.100.0.7
Jan 31 08:12:44 compute-2 ovn_controller[133834]: 2026-01-31T08:12:44Z|00519|binding|INFO|Setting lport 75f73628-b189-4a56-a44e-b33ec7ff3e50 ovn-installed in OVS
Jan 31 08:12:44 compute-2 nova_compute[226829]: 2026-01-31 08:12:44.045 226833 DEBUG oslo_concurrency.lockutils [req-4a3fe33d-33de-4c45-820b-2bd02708be5c req-2c607db4-788f-44d8-be14-7ba64a241897 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-1ee317bd-390c-4a22-9e4e-e24189eb499e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:12:44 compute-2 nova_compute[226829]: 2026-01-31 08:12:44.047 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:44 compute-2 nova_compute[226829]: 2026-01-31 08:12:44.049 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:44 compute-2 ovn_controller[133834]: 2026-01-31T08:12:44Z|00520|binding|INFO|Setting lport 75f73628-b189-4a56-a44e-b33ec7ff3e50 up in Southbound
Jan 31 08:12:44 compute-2 systemd-machined[195142]: New machine qemu-56-instance-0000007f.
Jan 31 08:12:44 compute-2 systemd-udevd[283230]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:12:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:44.054 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:a5:ae 10.100.0.7'], port_security=['fa:16:3e:7b:a5:ae 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1ee317bd-390c-4a22-9e4e-e24189eb499e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b57bf91-5573-4777-9a03-b1fa3ca3351c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cbd0f41e455b4b3b9a8edf35ef0b85ed', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5f679240-571e-49a9-90f1-7fce9428e205', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6382cf61-a4d2-45ec-ba90-ec1b527a3e06, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=75f73628-b189-4a56-a44e-b33ec7ff3e50) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:12:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:44.057 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 75f73628-b189-4a56-a44e-b33ec7ff3e50 in datapath 1b57bf91-5573-4777-9a03-b1fa3ca3351c bound to our chassis
Jan 31 08:12:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:44.059 143841 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1b57bf91-5573-4777-9a03-b1fa3ca3351c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 31 08:12:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:44.061 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[dda607cb-45c0-464d-a974-0fa733921d6b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:44 compute-2 systemd[1]: Started Virtual Machine qemu-56-instance-0000007f.
Jan 31 08:12:44 compute-2 NetworkManager[48999]: <info>  [1769847164.0770] device (tap75f73628-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:12:44 compute-2 NetworkManager[48999]: <info>  [1769847164.0781] device (tap75f73628-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:12:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:44.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/516289054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:44 compute-2 ceph-mon[77282]: pgmap v2306: 305 pgs: 305 active+clean; 488 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 1.2 MiB/s wr, 165 op/s
Jan 31 08:12:44 compute-2 nova_compute[226829]: 2026-01-31 08:12:44.410 226833 DEBUG nova.compute.manager [req-4b6016d5-2b4a-43e3-a5f0-f4267cbde780 req-4590b72e-481d-4124-9afc-7b175e8961ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Received event network-vif-plugged-75f73628-b189-4a56-a44e-b33ec7ff3e50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:12:44 compute-2 nova_compute[226829]: 2026-01-31 08:12:44.411 226833 DEBUG oslo_concurrency.lockutils [req-4b6016d5-2b4a-43e3-a5f0-f4267cbde780 req-4590b72e-481d-4124-9afc-7b175e8961ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:44 compute-2 nova_compute[226829]: 2026-01-31 08:12:44.412 226833 DEBUG oslo_concurrency.lockutils [req-4b6016d5-2b4a-43e3-a5f0-f4267cbde780 req-4590b72e-481d-4124-9afc-7b175e8961ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:44 compute-2 nova_compute[226829]: 2026-01-31 08:12:44.412 226833 DEBUG oslo_concurrency.lockutils [req-4b6016d5-2b4a-43e3-a5f0-f4267cbde780 req-4590b72e-481d-4124-9afc-7b175e8961ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:44 compute-2 nova_compute[226829]: 2026-01-31 08:12:44.412 226833 DEBUG nova.compute.manager [req-4b6016d5-2b4a-43e3-a5f0-f4267cbde780 req-4590b72e-481d-4124-9afc-7b175e8961ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Processing event network-vif-plugged-75f73628-b189-4a56-a44e-b33ec7ff3e50 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:12:44 compute-2 sudo[283239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:12:44 compute-2 sudo[283239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:44 compute-2 sudo[283239]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:44.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:44 compute-2 sudo[283264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:12:44 compute-2 sudo[283264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:44 compute-2 sudo[283264]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:44 compute-2 nova_compute[226829]: 2026-01-31 08:12:44.632 226833 DEBUG oslo_concurrency.lockutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "5a59388d-bade-4df0-9ac0-0022df15ea02" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:44 compute-2 nova_compute[226829]: 2026-01-31 08:12:44.633 226833 DEBUG oslo_concurrency.lockutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:44 compute-2 nova_compute[226829]: 2026-01-31 08:12:44.669 226833 DEBUG nova.compute.manager [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:12:44 compute-2 nova_compute[226829]: 2026-01-31 08:12:44.728 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:12:44 compute-2 nova_compute[226829]: 2026-01-31 08:12:44.729 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:12:44 compute-2 nova_compute[226829]: 2026-01-31 08:12:44.819 226833 DEBUG oslo_concurrency.lockutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:44 compute-2 nova_compute[226829]: 2026-01-31 08:12:44.819 226833 DEBUG oslo_concurrency.lockutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:44 compute-2 nova_compute[226829]: 2026-01-31 08:12:44.826 226833 DEBUG nova.virt.hardware [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:12:44 compute-2 nova_compute[226829]: 2026-01-31 08:12:44.826 226833 INFO nova.compute.claims [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:12:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:12:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/4141045526' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:12:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/4141045526' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.466 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847165.465621, 1ee317bd-390c-4a22-9e4e-e24189eb499e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.468 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] VM Started (Lifecycle Event)
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.471 226833 DEBUG nova.compute.manager [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.474 226833 DEBUG nova.virt.libvirt.driver [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.478 226833 INFO nova.virt.libvirt.driver [-] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Instance spawned successfully.
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.483 226833 DEBUG nova.virt.libvirt.driver [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.507 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.513 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.515 226833 DEBUG nova.virt.libvirt.driver [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.516 226833 DEBUG nova.virt.libvirt.driver [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.516 226833 DEBUG nova.virt.libvirt.driver [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.516 226833 DEBUG nova.virt.libvirt.driver [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.517 226833 DEBUG nova.virt.libvirt.driver [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.518 226833 DEBUG nova.virt.libvirt.driver [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.526 226833 DEBUG oslo_concurrency.processutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.572 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.573 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847165.4658308, 1ee317bd-390c-4a22-9e4e-e24189eb499e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.573 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] VM Paused (Lifecycle Event)
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.625 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.630 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847165.4736898, 1ee317bd-390c-4a22-9e4e-e24189eb499e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.630 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] VM Resumed (Lifecycle Event)
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.637 226833 INFO nova.compute.manager [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Took 12.33 seconds to spawn the instance on the hypervisor.
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.638 226833 DEBUG nova.compute.manager [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:12:45 compute-2 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:12:45 compute-2 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.651 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.655 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.705 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.732 226833 INFO nova.compute.manager [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Took 15.06 seconds to build instance.
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.760 226833 DEBUG oslo_concurrency.lockutils [None req-3409a8c2-14cd-440f-9011-3799db703576 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "1ee317bd-390c-4a22-9e4e-e24189eb499e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.893s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.776 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:12:45 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1305186884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.961 226833 DEBUG oslo_concurrency.processutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:45 compute-2 nova_compute[226829]: 2026-01-31 08:12:45.970 226833 DEBUG nova.compute.provider_tree [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:12:46 compute-2 nova_compute[226829]: 2026-01-31 08:12:46.005 226833 DEBUG nova.scheduler.client.report [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:12:46 compute-2 nova_compute[226829]: 2026-01-31 08:12:46.047 226833 DEBUG oslo_concurrency.lockutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:46 compute-2 nova_compute[226829]: 2026-01-31 08:12:46.049 226833 DEBUG nova.compute.manager [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:12:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:12:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:46.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:12:46 compute-2 nova_compute[226829]: 2026-01-31 08:12:46.173 226833 DEBUG nova.compute.manager [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:12:46 compute-2 nova_compute[226829]: 2026-01-31 08:12:46.174 226833 DEBUG nova.network.neutron [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:12:46 compute-2 nova_compute[226829]: 2026-01-31 08:12:46.219 226833 INFO nova.virt.libvirt.driver [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:12:46 compute-2 nova_compute[226829]: 2026-01-31 08:12:46.316 226833 DEBUG nova.compute.manager [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:12:46 compute-2 ceph-mon[77282]: pgmap v2307: 305 pgs: 305 active+clean; 513 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 120 op/s
Jan 31 08:12:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1305186884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:46 compute-2 nova_compute[226829]: 2026-01-31 08:12:46.436 226833 DEBUG nova.policy [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '18aee9d81d404f77ac81cde538f140d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c3ddadeb950a490db5c99da98a32c9ec', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:12:46 compute-2 nova_compute[226829]: 2026-01-31 08:12:46.441 226833 DEBUG nova.compute.manager [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:12:46 compute-2 nova_compute[226829]: 2026-01-31 08:12:46.443 226833 DEBUG nova.virt.libvirt.driver [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:12:46 compute-2 nova_compute[226829]: 2026-01-31 08:12:46.444 226833 INFO nova.virt.libvirt.driver [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Creating image(s)
Jan 31 08:12:46 compute-2 nova_compute[226829]: 2026-01-31 08:12:46.476 226833 DEBUG nova.storage.rbd_utils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 5a59388d-bade-4df0-9ac0-0022df15ea02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:12:46 compute-2 nova_compute[226829]: 2026-01-31 08:12:46.506 226833 DEBUG nova.storage.rbd_utils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 5a59388d-bade-4df0-9ac0-0022df15ea02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:12:46 compute-2 nova_compute[226829]: 2026-01-31 08:12:46.544 226833 DEBUG nova.storage.rbd_utils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 5a59388d-bade-4df0-9ac0-0022df15ea02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:12:46 compute-2 nova_compute[226829]: 2026-01-31 08:12:46.549 226833 DEBUG oslo_concurrency.processutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:12:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:46.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:12:46 compute-2 nova_compute[226829]: 2026-01-31 08:12:46.622 226833 DEBUG oslo_concurrency.processutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:46 compute-2 nova_compute[226829]: 2026-01-31 08:12:46.623 226833 DEBUG oslo_concurrency.lockutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:46 compute-2 nova_compute[226829]: 2026-01-31 08:12:46.625 226833 DEBUG oslo_concurrency.lockutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:46 compute-2 nova_compute[226829]: 2026-01-31 08:12:46.626 226833 DEBUG oslo_concurrency.lockutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:46 compute-2 nova_compute[226829]: 2026-01-31 08:12:46.651 226833 DEBUG nova.storage.rbd_utils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 5a59388d-bade-4df0-9ac0-0022df15ea02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:12:46 compute-2 nova_compute[226829]: 2026-01-31 08:12:46.655 226833 DEBUG oslo_concurrency.processutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 5a59388d-bade-4df0-9ac0-0022df15ea02_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:46 compute-2 nova_compute[226829]: 2026-01-31 08:12:46.966 226833 DEBUG nova.compute.manager [req-e7e12b7f-25c8-42ca-8a74-aa4832339133 req-5ada5d76-e4d2-46ff-8c55-abbd3afd4d79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Received event network-vif-plugged-75f73628-b189-4a56-a44e-b33ec7ff3e50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:12:46 compute-2 nova_compute[226829]: 2026-01-31 08:12:46.966 226833 DEBUG oslo_concurrency.lockutils [req-e7e12b7f-25c8-42ca-8a74-aa4832339133 req-5ada5d76-e4d2-46ff-8c55-abbd3afd4d79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:46 compute-2 nova_compute[226829]: 2026-01-31 08:12:46.967 226833 DEBUG oslo_concurrency.lockutils [req-e7e12b7f-25c8-42ca-8a74-aa4832339133 req-5ada5d76-e4d2-46ff-8c55-abbd3afd4d79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:46 compute-2 nova_compute[226829]: 2026-01-31 08:12:46.967 226833 DEBUG oslo_concurrency.lockutils [req-e7e12b7f-25c8-42ca-8a74-aa4832339133 req-5ada5d76-e4d2-46ff-8c55-abbd3afd4d79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:46 compute-2 nova_compute[226829]: 2026-01-31 08:12:46.967 226833 DEBUG nova.compute.manager [req-e7e12b7f-25c8-42ca-8a74-aa4832339133 req-5ada5d76-e4d2-46ff-8c55-abbd3afd4d79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] No waiting events found dispatching network-vif-plugged-75f73628-b189-4a56-a44e-b33ec7ff3e50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:12:46 compute-2 nova_compute[226829]: 2026-01-31 08:12:46.967 226833 WARNING nova.compute.manager [req-e7e12b7f-25c8-42ca-8a74-aa4832339133 req-5ada5d76-e4d2-46ff-8c55-abbd3afd4d79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Received unexpected event network-vif-plugged-75f73628-b189-4a56-a44e-b33ec7ff3e50 for instance with vm_state active and task_state None.
Jan 31 08:12:47 compute-2 nova_compute[226829]: 2026-01-31 08:12:47.025 226833 DEBUG oslo_concurrency.processutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 5a59388d-bade-4df0-9ac0-0022df15ea02_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.370s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:47 compute-2 nova_compute[226829]: 2026-01-31 08:12:47.112 226833 DEBUG nova.storage.rbd_utils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] resizing rbd image 5a59388d-bade-4df0-9ac0-0022df15ea02_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:12:47 compute-2 nova_compute[226829]: 2026-01-31 08:12:47.232 226833 DEBUG nova.objects.instance [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'migration_context' on Instance uuid 5a59388d-bade-4df0-9ac0-0022df15ea02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:12:47 compute-2 nova_compute[226829]: 2026-01-31 08:12:47.258 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:48.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:48.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:48 compute-2 ceph-mon[77282]: pgmap v2308: 305 pgs: 305 active+clean; 590 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 9.4 MiB/s wr, 274 op/s
Jan 31 08:12:49 compute-2 nova_compute[226829]: 2026-01-31 08:12:49.264 226833 DEBUG nova.virt.libvirt.driver [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:12:49 compute-2 nova_compute[226829]: 2026-01-31 08:12:49.265 226833 DEBUG nova.virt.libvirt.driver [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Ensure instance console log exists: /var/lib/nova/instances/5a59388d-bade-4df0-9ac0-0022df15ea02/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:12:49 compute-2 nova_compute[226829]: 2026-01-31 08:12:49.266 226833 DEBUG oslo_concurrency.lockutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:49 compute-2 nova_compute[226829]: 2026-01-31 08:12:49.266 226833 DEBUG oslo_concurrency.lockutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:49 compute-2 nova_compute[226829]: 2026-01-31 08:12:49.266 226833 DEBUG oslo_concurrency.lockutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:12:49 compute-2 nova_compute[226829]: 2026-01-31 08:12:49.953 226833 DEBUG nova.network.neutron [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Successfully created port: c82f9d38-ef16-46ac-829a-12067ec8c603 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:12:50 compute-2 nova_compute[226829]: 2026-01-31 08:12:50.067 226833 INFO nova.compute.manager [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Rescuing
Jan 31 08:12:50 compute-2 nova_compute[226829]: 2026-01-31 08:12:50.068 226833 DEBUG oslo_concurrency.lockutils [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Acquiring lock "refresh_cache-1ee317bd-390c-4a22-9e4e-e24189eb499e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:12:50 compute-2 nova_compute[226829]: 2026-01-31 08:12:50.068 226833 DEBUG oslo_concurrency.lockutils [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Acquired lock "refresh_cache-1ee317bd-390c-4a22-9e4e-e24189eb499e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:12:50 compute-2 nova_compute[226829]: 2026-01-31 08:12:50.068 226833 DEBUG nova.network.neutron [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:12:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:12:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:50.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:12:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:50.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e293 e293: 3 total, 3 up, 3 in
Jan 31 08:12:50 compute-2 nova_compute[226829]: 2026-01-31 08:12:50.779 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:50 compute-2 ceph-mon[77282]: pgmap v2309: 305 pgs: 305 active+clean; 652 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 12 MiB/s wr, 392 op/s
Jan 31 08:12:50 compute-2 ceph-mon[77282]: osdmap e293: 3 total, 3 up, 3 in
Jan 31 08:12:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:50.951 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:12:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:50.952 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:12:50 compute-2 nova_compute[226829]: 2026-01-31 08:12:50.953 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:51 compute-2 nova_compute[226829]: 2026-01-31 08:12:51.828 226833 DEBUG nova.network.neutron [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Successfully updated port: c82f9d38-ef16-46ac-829a-12067ec8c603 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:12:51 compute-2 nova_compute[226829]: 2026-01-31 08:12:51.983 226833 DEBUG oslo_concurrency.lockutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "refresh_cache-5a59388d-bade-4df0-9ac0-0022df15ea02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:12:51 compute-2 nova_compute[226829]: 2026-01-31 08:12:51.984 226833 DEBUG oslo_concurrency.lockutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquired lock "refresh_cache-5a59388d-bade-4df0-9ac0-0022df15ea02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:12:51 compute-2 nova_compute[226829]: 2026-01-31 08:12:51.984 226833 DEBUG nova.network.neutron [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:12:52 compute-2 ceph-mon[77282]: pgmap v2311: 305 pgs: 305 active+clean; 669 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 12 MiB/s wr, 385 op/s
Jan 31 08:12:52 compute-2 nova_compute[226829]: 2026-01-31 08:12:52.116 226833 DEBUG nova.compute.manager [req-cbacf731-b9e1-4fac-bd5a-4b5265ffa803 req-89e8d203-bcc8-49ff-be84-ce3c5b05e84f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Received event network-changed-c82f9d38-ef16-46ac-829a-12067ec8c603 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:12:52 compute-2 nova_compute[226829]: 2026-01-31 08:12:52.116 226833 DEBUG nova.compute.manager [req-cbacf731-b9e1-4fac-bd5a-4b5265ffa803 req-89e8d203-bcc8-49ff-be84-ce3c5b05e84f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Refreshing instance network info cache due to event network-changed-c82f9d38-ef16-46ac-829a-12067ec8c603. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:12:52 compute-2 nova_compute[226829]: 2026-01-31 08:12:52.117 226833 DEBUG oslo_concurrency.lockutils [req-cbacf731-b9e1-4fac-bd5a-4b5265ffa803 req-89e8d203-bcc8-49ff-be84-ce3c5b05e84f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-5a59388d-bade-4df0-9ac0-0022df15ea02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:12:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:52.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:52 compute-2 nova_compute[226829]: 2026-01-31 08:12:52.260 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:52 compute-2 nova_compute[226829]: 2026-01-31 08:12:52.383 226833 DEBUG nova.network.neutron [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:12:52 compute-2 nova_compute[226829]: 2026-01-31 08:12:52.418 226833 DEBUG nova.network.neutron [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Updating instance_info_cache with network_info: [{"id": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "address": "fa:16:3e:7b:a5:ae", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75f73628-b1", "ovs_interfaceid": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:12:52 compute-2 nova_compute[226829]: 2026-01-31 08:12:52.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:12:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:52.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:52 compute-2 nova_compute[226829]: 2026-01-31 08:12:52.601 226833 DEBUG oslo_concurrency.lockutils [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Releasing lock "refresh_cache-1ee317bd-390c-4a22-9e4e-e24189eb499e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:12:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1294508577' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:53 compute-2 nova_compute[226829]: 2026-01-31 08:12:53.158 226833 DEBUG nova.virt.libvirt.driver [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 08:12:53 compute-2 nova_compute[226829]: 2026-01-31 08:12:53.651 226833 DEBUG nova.network.neutron [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Updating instance_info_cache with network_info: [{"id": "c82f9d38-ef16-46ac-829a-12067ec8c603", "address": "fa:16:3e:15:e2:fd", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc82f9d38-ef", "ovs_interfaceid": "c82f9d38-ef16-46ac-829a-12067ec8c603", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:12:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/970627810' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:54 compute-2 ceph-mon[77282]: pgmap v2312: 305 pgs: 305 active+clean; 671 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 11 MiB/s wr, 358 op/s
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.043 226833 DEBUG oslo_concurrency.lockutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Releasing lock "refresh_cache-5a59388d-bade-4df0-9ac0-0022df15ea02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.043 226833 DEBUG nova.compute.manager [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Instance network_info: |[{"id": "c82f9d38-ef16-46ac-829a-12067ec8c603", "address": "fa:16:3e:15:e2:fd", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc82f9d38-ef", "ovs_interfaceid": "c82f9d38-ef16-46ac-829a-12067ec8c603", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.044 226833 DEBUG oslo_concurrency.lockutils [req-cbacf731-b9e1-4fac-bd5a-4b5265ffa803 req-89e8d203-bcc8-49ff-be84-ce3c5b05e84f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-5a59388d-bade-4df0-9ac0-0022df15ea02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.045 226833 DEBUG nova.network.neutron [req-cbacf731-b9e1-4fac-bd5a-4b5265ffa803 req-89e8d203-bcc8-49ff-be84-ce3c5b05e84f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Refreshing network info cache for port c82f9d38-ef16-46ac-829a-12067ec8c603 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.049 226833 DEBUG nova.virt.libvirt.driver [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Start _get_guest_xml network_info=[{"id": "c82f9d38-ef16-46ac-829a-12067ec8c603", "address": "fa:16:3e:15:e2:fd", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc82f9d38-ef", "ovs_interfaceid": "c82f9d38-ef16-46ac-829a-12067ec8c603", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.054 226833 WARNING nova.virt.libvirt.driver [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.063 226833 DEBUG nova.virt.libvirt.host [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.063 226833 DEBUG nova.virt.libvirt.host [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.066 226833 DEBUG nova.virt.libvirt.host [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.067 226833 DEBUG nova.virt.libvirt.host [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.068 226833 DEBUG nova.virt.libvirt.driver [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.068 226833 DEBUG nova.virt.hardware [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.069 226833 DEBUG nova.virt.hardware [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.069 226833 DEBUG nova.virt.hardware [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.070 226833 DEBUG nova.virt.hardware [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.070 226833 DEBUG nova.virt.hardware [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.070 226833 DEBUG nova.virt.hardware [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.070 226833 DEBUG nova.virt.hardware [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.071 226833 DEBUG nova.virt.hardware [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.071 226833 DEBUG nova.virt.hardware [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.071 226833 DEBUG nova.virt.hardware [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.072 226833 DEBUG nova.virt.hardware [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.075 226833 DEBUG oslo_concurrency.processutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:12:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:54.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:12:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:12:54 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2682229306' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.525 226833 DEBUG oslo_concurrency.processutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.546 226833 DEBUG nova.storage.rbd_utils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 5a59388d-bade-4df0-9ac0-0022df15ea02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.550 226833 DEBUG oslo_concurrency.processutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:54.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:12:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:12:54 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/813564231' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.957 226833 DEBUG oslo_concurrency.processutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.960 226833 DEBUG nova.virt.libvirt.vif [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:12:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-993615947',display_name='tempest-ServerActionsTestOtherB-server-993615947',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-993615947',id=129,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIBsrFBicHYtOHR1R6vRAALJ9Bas8uQqwQxjg40t1CSqKgx9y2TPvbXQ87aJFDYMxnRLTQoY5DczCJahVhqvpmedcWwWPeQP/d3vWA175RU6Mi7x6I2zA/JKc2hVh/HOXw==',key_name='tempest-keypair-1408271761',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-5y6n0w3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:12:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='18aee9d81d404f77ac81cde538f140d8',uuid=5a59388d-bade-4df0-9ac0-0022df15ea02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c82f9d38-ef16-46ac-829a-12067ec8c603", "address": "fa:16:3e:15:e2:fd", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc82f9d38-ef", "ovs_interfaceid": "c82f9d38-ef16-46ac-829a-12067ec8c603", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.961 226833 DEBUG nova.network.os_vif_util [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "c82f9d38-ef16-46ac-829a-12067ec8c603", "address": "fa:16:3e:15:e2:fd", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc82f9d38-ef", "ovs_interfaceid": "c82f9d38-ef16-46ac-829a-12067ec8c603", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.962 226833 DEBUG nova.network.os_vif_util [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:e2:fd,bridge_name='br-int',has_traffic_filtering=True,id=c82f9d38-ef16-46ac-829a-12067ec8c603,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc82f9d38-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:12:54 compute-2 nova_compute[226829]: 2026-01-31 08:12:54.964 226833 DEBUG nova.objects.instance [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'pci_devices' on Instance uuid 5a59388d-bade-4df0-9ac0-0022df15ea02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:12:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2682229306' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/813564231' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:55 compute-2 podman[283587]: 2026-01-31 08:12:55.182205491 +0000 UTC m=+0.064077574 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 31 08:12:55 compute-2 nova_compute[226829]: 2026-01-31 08:12:55.260 226833 DEBUG nova.virt.libvirt.driver [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:12:55 compute-2 nova_compute[226829]:   <uuid>5a59388d-bade-4df0-9ac0-0022df15ea02</uuid>
Jan 31 08:12:55 compute-2 nova_compute[226829]:   <name>instance-00000081</name>
Jan 31 08:12:55 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:12:55 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:12:55 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <nova:name>tempest-ServerActionsTestOtherB-server-993615947</nova:name>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:12:54</nova:creationTime>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:12:55 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:12:55 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:12:55 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:12:55 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:12:55 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:12:55 compute-2 nova_compute[226829]:         <nova:user uuid="18aee9d81d404f77ac81cde538f140d8">tempest-ServerActionsTestOtherB-2012907318-project-member</nova:user>
Jan 31 08:12:55 compute-2 nova_compute[226829]:         <nova:project uuid="c3ddadeb950a490db5c99da98a32c9ec">tempest-ServerActionsTestOtherB-2012907318</nova:project>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:12:55 compute-2 nova_compute[226829]:         <nova:port uuid="c82f9d38-ef16-46ac-829a-12067ec8c603">
Jan 31 08:12:55 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:12:55 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:12:55 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <system>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <entry name="serial">5a59388d-bade-4df0-9ac0-0022df15ea02</entry>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <entry name="uuid">5a59388d-bade-4df0-9ac0-0022df15ea02</entry>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     </system>
Jan 31 08:12:55 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:12:55 compute-2 nova_compute[226829]:   <os>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:   </os>
Jan 31 08:12:55 compute-2 nova_compute[226829]:   <features>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:   </features>
Jan 31 08:12:55 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:12:55 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:12:55 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/5a59388d-bade-4df0-9ac0-0022df15ea02_disk">
Jan 31 08:12:55 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       </source>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:12:55 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/5a59388d-bade-4df0-9ac0-0022df15ea02_disk.config">
Jan 31 08:12:55 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       </source>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:12:55 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:15:e2:fd"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <target dev="tapc82f9d38-ef"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/5a59388d-bade-4df0-9ac0-0022df15ea02/console.log" append="off"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <video>
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     </video>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:12:55 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:12:55 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:12:55 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:12:55 compute-2 nova_compute[226829]: </domain>
Jan 31 08:12:55 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:12:55 compute-2 nova_compute[226829]: 2026-01-31 08:12:55.267 226833 DEBUG nova.compute.manager [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Preparing to wait for external event network-vif-plugged-c82f9d38-ef16-46ac-829a-12067ec8c603 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:12:55 compute-2 nova_compute[226829]: 2026-01-31 08:12:55.268 226833 DEBUG oslo_concurrency.lockutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:55 compute-2 nova_compute[226829]: 2026-01-31 08:12:55.268 226833 DEBUG oslo_concurrency.lockutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:55 compute-2 nova_compute[226829]: 2026-01-31 08:12:55.268 226833 DEBUG oslo_concurrency.lockutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:55 compute-2 nova_compute[226829]: 2026-01-31 08:12:55.269 226833 DEBUG nova.virt.libvirt.vif [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:12:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-993615947',display_name='tempest-ServerActionsTestOtherB-server-993615947',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-993615947',id=129,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIBsrFBicHYtOHR1R6vRAALJ9Bas8uQqwQxjg40t1CSqKgx9y2TPvbXQ87aJFDYMxnRLTQoY5DczCJahVhqvpmedcWwWPeQP/d3vWA175RU6Mi7x6I2zA/JKc2hVh/HOXw==',key_name='tempest-keypair-1408271761',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-5y6n0w3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:12:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='18aee9d81d404f77ac81cde538f140d8',uuid=5a59388d-bade-4df0-9ac0-0022df15ea02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c82f9d38-ef16-46ac-829a-12067ec8c603", "address": "fa:16:3e:15:e2:fd", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc82f9d38-ef", "ovs_interfaceid": "c82f9d38-ef16-46ac-829a-12067ec8c603", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:12:55 compute-2 nova_compute[226829]: 2026-01-31 08:12:55.270 226833 DEBUG nova.network.os_vif_util [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "c82f9d38-ef16-46ac-829a-12067ec8c603", "address": "fa:16:3e:15:e2:fd", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc82f9d38-ef", "ovs_interfaceid": "c82f9d38-ef16-46ac-829a-12067ec8c603", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:12:55 compute-2 nova_compute[226829]: 2026-01-31 08:12:55.271 226833 DEBUG nova.network.os_vif_util [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:e2:fd,bridge_name='br-int',has_traffic_filtering=True,id=c82f9d38-ef16-46ac-829a-12067ec8c603,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc82f9d38-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:12:55 compute-2 nova_compute[226829]: 2026-01-31 08:12:55.271 226833 DEBUG os_vif [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:e2:fd,bridge_name='br-int',has_traffic_filtering=True,id=c82f9d38-ef16-46ac-829a-12067ec8c603,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc82f9d38-ef') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:12:55 compute-2 nova_compute[226829]: 2026-01-31 08:12:55.272 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:55 compute-2 nova_compute[226829]: 2026-01-31 08:12:55.273 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:55 compute-2 nova_compute[226829]: 2026-01-31 08:12:55.273 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:12:55 compute-2 nova_compute[226829]: 2026-01-31 08:12:55.278 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:55 compute-2 nova_compute[226829]: 2026-01-31 08:12:55.279 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc82f9d38-ef, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:55 compute-2 nova_compute[226829]: 2026-01-31 08:12:55.279 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc82f9d38-ef, col_values=(('external_ids', {'iface-id': 'c82f9d38-ef16-46ac-829a-12067ec8c603', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:e2:fd', 'vm-uuid': '5a59388d-bade-4df0-9ac0-0022df15ea02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:55 compute-2 nova_compute[226829]: 2026-01-31 08:12:55.281 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:55 compute-2 NetworkManager[48999]: <info>  [1769847175.2822] manager: (tapc82f9d38-ef): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/258)
Jan 31 08:12:55 compute-2 nova_compute[226829]: 2026-01-31 08:12:55.283 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:12:55 compute-2 nova_compute[226829]: 2026-01-31 08:12:55.287 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:55 compute-2 nova_compute[226829]: 2026-01-31 08:12:55.288 226833 INFO os_vif [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:e2:fd,bridge_name='br-int',has_traffic_filtering=True,id=c82f9d38-ef16-46ac-829a-12067ec8c603,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc82f9d38-ef')
Jan 31 08:12:55 compute-2 nova_compute[226829]: 2026-01-31 08:12:55.779 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:56 compute-2 ceph-mon[77282]: pgmap v2313: 305 pgs: 305 active+clean; 671 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 9.4 MiB/s wr, 312 op/s
Jan 31 08:12:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:56.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:56 compute-2 nova_compute[226829]: 2026-01-31 08:12:56.239 226833 DEBUG nova.virt.libvirt.driver [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:12:56 compute-2 nova_compute[226829]: 2026-01-31 08:12:56.239 226833 DEBUG nova.virt.libvirt.driver [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:12:56 compute-2 nova_compute[226829]: 2026-01-31 08:12:56.239 226833 DEBUG nova.virt.libvirt.driver [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No VIF found with MAC fa:16:3e:15:e2:fd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:12:56 compute-2 nova_compute[226829]: 2026-01-31 08:12:56.240 226833 INFO nova.virt.libvirt.driver [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Using config drive
Jan 31 08:12:56 compute-2 nova_compute[226829]: 2026-01-31 08:12:56.271 226833 DEBUG nova.storage.rbd_utils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 5a59388d-bade-4df0-9ac0-0022df15ea02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:12:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:56.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:56.956 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:57 compute-2 nova_compute[226829]: 2026-01-31 08:12:57.730 226833 INFO nova.virt.libvirt.driver [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Creating config drive at /var/lib/nova/instances/5a59388d-bade-4df0-9ac0-0022df15ea02/disk.config
Jan 31 08:12:57 compute-2 nova_compute[226829]: 2026-01-31 08:12:57.735 226833 DEBUG oslo_concurrency.processutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5a59388d-bade-4df0-9ac0-0022df15ea02/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmppvthwyx_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:57 compute-2 nova_compute[226829]: 2026-01-31 08:12:57.847 226833 DEBUG nova.network.neutron [req-cbacf731-b9e1-4fac-bd5a-4b5265ffa803 req-89e8d203-bcc8-49ff-be84-ce3c5b05e84f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Updated VIF entry in instance network info cache for port c82f9d38-ef16-46ac-829a-12067ec8c603. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:12:57 compute-2 nova_compute[226829]: 2026-01-31 08:12:57.882 226833 DEBUG nova.network.neutron [req-cbacf731-b9e1-4fac-bd5a-4b5265ffa803 req-89e8d203-bcc8-49ff-be84-ce3c5b05e84f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Updating instance_info_cache with network_info: [{"id": "c82f9d38-ef16-46ac-829a-12067ec8c603", "address": "fa:16:3e:15:e2:fd", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc82f9d38-ef", "ovs_interfaceid": "c82f9d38-ef16-46ac-829a-12067ec8c603", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:12:57 compute-2 nova_compute[226829]: 2026-01-31 08:12:57.885 226833 DEBUG oslo_concurrency.processutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5a59388d-bade-4df0-9ac0-0022df15ea02/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmppvthwyx_" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:57 compute-2 nova_compute[226829]: 2026-01-31 08:12:57.917 226833 DEBUG nova.storage.rbd_utils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 5a59388d-bade-4df0-9ac0-0022df15ea02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:12:57 compute-2 nova_compute[226829]: 2026-01-31 08:12:57.921 226833 DEBUG oslo_concurrency.processutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5a59388d-bade-4df0-9ac0-0022df15ea02/disk.config 5a59388d-bade-4df0-9ac0-0022df15ea02_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:58 compute-2 nova_compute[226829]: 2026-01-31 08:12:58.159 226833 DEBUG oslo_concurrency.lockutils [req-cbacf731-b9e1-4fac-bd5a-4b5265ffa803 req-89e8d203-bcc8-49ff-be84-ce3c5b05e84f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-5a59388d-bade-4df0-9ac0-0022df15ea02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:12:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735aef6f0 =====
Jan 31 08:12:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:12:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735aef6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:58 compute-2 radosgw[83985]: beast: 0x7fa735aef6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:58.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:12:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:58.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:12:58 compute-2 nova_compute[226829]: 2026-01-31 08:12:58.692 226833 DEBUG oslo_concurrency.processutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5a59388d-bade-4df0-9ac0-0022df15ea02/disk.config 5a59388d-bade-4df0-9ac0-0022df15ea02_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.771s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:58 compute-2 nova_compute[226829]: 2026-01-31 08:12:58.693 226833 INFO nova.virt.libvirt.driver [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Deleting local config drive /var/lib/nova/instances/5a59388d-bade-4df0-9ac0-0022df15ea02/disk.config because it was imported into RBD.
Jan 31 08:12:58 compute-2 kernel: tapc82f9d38-ef: entered promiscuous mode
Jan 31 08:12:58 compute-2 NetworkManager[48999]: <info>  [1769847178.7413] manager: (tapc82f9d38-ef): new Tun device (/org/freedesktop/NetworkManager/Devices/259)
Jan 31 08:12:58 compute-2 ovn_controller[133834]: 2026-01-31T08:12:58Z|00521|binding|INFO|Claiming lport c82f9d38-ef16-46ac-829a-12067ec8c603 for this chassis.
Jan 31 08:12:58 compute-2 ovn_controller[133834]: 2026-01-31T08:12:58Z|00522|binding|INFO|c82f9d38-ef16-46ac-829a-12067ec8c603: Claiming fa:16:3e:15:e2:fd 10.100.0.14
Jan 31 08:12:58 compute-2 nova_compute[226829]: 2026-01-31 08:12:58.742 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:58 compute-2 ovn_controller[133834]: 2026-01-31T08:12:58Z|00523|binding|INFO|Setting lport c82f9d38-ef16-46ac-829a-12067ec8c603 ovn-installed in OVS
Jan 31 08:12:58 compute-2 nova_compute[226829]: 2026-01-31 08:12:58.752 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:58 compute-2 systemd-machined[195142]: New machine qemu-57-instance-00000081.
Jan 31 08:12:58 compute-2 ovn_controller[133834]: 2026-01-31T08:12:58Z|00524|binding|INFO|Setting lport c82f9d38-ef16-46ac-829a-12067ec8c603 up in Southbound
Jan 31 08:12:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:58.778 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:e2:fd 10.100.0.14'], port_security=['fa:16:3e:15:e2:fd 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '5a59388d-bade-4df0-9ac0-0022df15ea02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3ddadeb950a490db5c99da98a32c9ec', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5b02cc0a-856b-4d31-80e9-eccd1c696448', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17e596e7-33b3-44a6-9cbf-f9eacfd974b4, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=c82f9d38-ef16-46ac-829a-12067ec8c603) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:12:58 compute-2 systemd[1]: Started Virtual Machine qemu-57-instance-00000081.
Jan 31 08:12:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:58.779 143841 INFO neutron.agent.ovn.metadata.agent [-] Port c82f9d38-ef16-46ac-829a-12067ec8c603 in datapath e8014d6b-23e1-41ef-b5e2-3d770d302e72 bound to our chassis
Jan 31 08:12:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:58.823 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e8014d6b-23e1-41ef-b5e2-3d770d302e72
Jan 31 08:12:58 compute-2 systemd-udevd[283691]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:12:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:58.833 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[856c623b-063b-4f65-9dfc-0cf4939e6124]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:58.834 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape8014d6b-21 in ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:12:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:58.836 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape8014d6b-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:12:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:58.836 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7efef73d-d065-4ae6-97ad-e140e06f7c10]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:58.837 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[32e4e7bd-831f-4f18-a4ff-f65af7124175]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:58 compute-2 NetworkManager[48999]: <info>  [1769847178.8445] device (tapc82f9d38-ef): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:12:58 compute-2 NetworkManager[48999]: <info>  [1769847178.8454] device (tapc82f9d38-ef): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:12:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:58.848 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[9dea0c41-20bb-4d98-a4ef-ddeac787ac6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:58.858 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[547f5ca4-b3de-4356-b41d-110c2b376cc5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:58.885 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[362f4c11-b87a-452f-8851-08f5a08132a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:58 compute-2 NetworkManager[48999]: <info>  [1769847178.8923] manager: (tape8014d6b-20): new Veth device (/org/freedesktop/NetworkManager/Devices/260)
Jan 31 08:12:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:58.892 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[69b69b96-2709-4e6e-be6c-46414456e48b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:58 compute-2 ceph-mon[77282]: pgmap v2314: 305 pgs: 305 active+clean; 682 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.9 MiB/s wr, 197 op/s
Jan 31 08:12:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:58.916 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[fbe5e8f8-c6d4-454f-a6d4-0aee5c6aedb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:58.918 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[8fdc142e-e732-4657-aa57-a6c1c6ecdbe2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:58 compute-2 NetworkManager[48999]: <info>  [1769847178.9329] device (tape8014d6b-20): carrier: link connected
Jan 31 08:12:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:58.935 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[7b160c7c-5b82-42d1-9095-6724833a69ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:58.948 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[45623d75-7591-4891-bd13-0f22a32943a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape8014d6b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:c1:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 162], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 743140, 'reachable_time': 38679, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283724, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:58.958 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4ef3872e-4982-4a5e-9bcd-fcbd20a1f3e8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed5:c1c3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 743140, 'tstamp': 743140}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283725, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:58.971 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5698059d-cf67-4432-a540-e926ff1b014e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape8014d6b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:c1:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 162], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 743140, 'reachable_time': 38679, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283726, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:58.986 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[bab466cf-d6ed-4fe4-b8be-48583cb9be0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:59.021 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e7f4dc2d-70bf-4005-b9b7-cf981b6e11e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:59.022 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8014d6b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:59.022 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:59.023 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape8014d6b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:59 compute-2 nova_compute[226829]: 2026-01-31 08:12:59.025 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:59 compute-2 NetworkManager[48999]: <info>  [1769847179.0258] manager: (tape8014d6b-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/261)
Jan 31 08:12:59 compute-2 kernel: tape8014d6b-20: entered promiscuous mode
Jan 31 08:12:59 compute-2 nova_compute[226829]: 2026-01-31 08:12:59.029 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:59.030 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape8014d6b-20, col_values=(('external_ids', {'iface-id': '4bb3ff19-f70b-4c8d-a829-66ff18233b61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:59 compute-2 nova_compute[226829]: 2026-01-31 08:12:59.031 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:59 compute-2 ovn_controller[133834]: 2026-01-31T08:12:59Z|00525|binding|INFO|Releasing lport 4bb3ff19-f70b-4c8d-a829-66ff18233b61 from this chassis (sb_readonly=0)
Jan 31 08:12:59 compute-2 nova_compute[226829]: 2026-01-31 08:12:59.032 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:59.033 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e8014d6b-23e1-41ef-b5e2-3d770d302e72.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e8014d6b-23e1-41ef-b5e2-3d770d302e72.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:59.033 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a707b9b6-82fd-45f0-9e70-137d6cfe12d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:59.034 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-e8014d6b-23e1-41ef-b5e2-3d770d302e72
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/e8014d6b-23e1-41ef-b5e2-3d770d302e72.pid.haproxy
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID e8014d6b-23e1-41ef-b5e2-3d770d302e72
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:12:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:12:59.035 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'env', 'PROCESS_TAG=haproxy-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e8014d6b-23e1-41ef-b5e2-3d770d302e72.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:12:59 compute-2 nova_compute[226829]: 2026-01-31 08:12:59.037 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:59 compute-2 podman[283758]: 2026-01-31 08:12:59.354079776 +0000 UTC m=+0.049994861 container create 857cd00444f095b910aa60349645947753ce02e2649fc9599c7bd2ddc0de9193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:12:59 compute-2 systemd[1]: Started libpod-conmon-857cd00444f095b910aa60349645947753ce02e2649fc9599c7bd2ddc0de9193.scope.
Jan 31 08:12:59 compute-2 podman[283758]: 2026-01-31 08:12:59.324205603 +0000 UTC m=+0.020120738 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:12:59 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:12:59 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f120d36d6cfe1cedcd8676c86a334112923e4ba2a0f010522395b333f0491b55/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:59 compute-2 podman[283758]: 2026-01-31 08:12:59.438438561 +0000 UTC m=+0.134353676 container init 857cd00444f095b910aa60349645947753ce02e2649fc9599c7bd2ddc0de9193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 08:12:59 compute-2 podman[283758]: 2026-01-31 08:12:59.444162447 +0000 UTC m=+0.140077522 container start 857cd00444f095b910aa60349645947753ce02e2649fc9599c7bd2ddc0de9193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:12:59 compute-2 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[283774]: [NOTICE]   (283778) : New worker (283780) forked
Jan 31 08:12:59 compute-2 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[283774]: [NOTICE]   (283778) : Loading success.
Jan 31 08:12:59 compute-2 nova_compute[226829]: 2026-01-31 08:12:59.641 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847179.6400862, 5a59388d-bade-4df0-9ac0-0022df15ea02 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:12:59 compute-2 nova_compute[226829]: 2026-01-31 08:12:59.642 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] VM Started (Lifecycle Event)
Jan 31 08:12:59 compute-2 nova_compute[226829]: 2026-01-31 08:12:59.851 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:12:59 compute-2 nova_compute[226829]: 2026-01-31 08:12:59.857 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847179.6419175, 5a59388d-bade-4df0-9ac0-0022df15ea02 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:12:59 compute-2 nova_compute[226829]: 2026-01-31 08:12:59.858 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] VM Paused (Lifecycle Event)
Jan 31 08:12:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4270166023' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:13:00 compute-2 nova_compute[226829]: 2026-01-31 08:13:00.276 226833 DEBUG nova.compute.manager [req-347215b6-64d9-433c-a0f0-ba31b3597f6b req-448a4b86-c430-494f-ba43-85e354d9832f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Received event network-vif-plugged-c82f9d38-ef16-46ac-829a-12067ec8c603 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:13:00 compute-2 nova_compute[226829]: 2026-01-31 08:13:00.277 226833 DEBUG oslo_concurrency.lockutils [req-347215b6-64d9-433c-a0f0-ba31b3597f6b req-448a4b86-c430-494f-ba43-85e354d9832f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:00 compute-2 nova_compute[226829]: 2026-01-31 08:13:00.278 226833 DEBUG oslo_concurrency.lockutils [req-347215b6-64d9-433c-a0f0-ba31b3597f6b req-448a4b86-c430-494f-ba43-85e354d9832f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:00 compute-2 nova_compute[226829]: 2026-01-31 08:13:00.278 226833 DEBUG oslo_concurrency.lockutils [req-347215b6-64d9-433c-a0f0-ba31b3597f6b req-448a4b86-c430-494f-ba43-85e354d9832f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:00 compute-2 nova_compute[226829]: 2026-01-31 08:13:00.279 226833 DEBUG nova.compute.manager [req-347215b6-64d9-433c-a0f0-ba31b3597f6b req-448a4b86-c430-494f-ba43-85e354d9832f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Processing event network-vif-plugged-c82f9d38-ef16-46ac-829a-12067ec8c603 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:13:00 compute-2 nova_compute[226829]: 2026-01-31 08:13:00.280 226833 DEBUG nova.compute.manager [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:13:00 compute-2 nova_compute[226829]: 2026-01-31 08:13:00.282 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:00 compute-2 nova_compute[226829]: 2026-01-31 08:13:00.286 226833 DEBUG nova.virt.libvirt.driver [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:13:00 compute-2 nova_compute[226829]: 2026-01-31 08:13:00.291 226833 INFO nova.virt.libvirt.driver [-] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Instance spawned successfully.
Jan 31 08:13:00 compute-2 nova_compute[226829]: 2026-01-31 08:13:00.295 226833 DEBUG nova.virt.libvirt.driver [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:13:00 compute-2 nova_compute[226829]: 2026-01-31 08:13:00.299 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:13:00 compute-2 nova_compute[226829]: 2026-01-31 08:13:00.306 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847180.2846222, 5a59388d-bade-4df0-9ac0-0022df15ea02 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:13:00 compute-2 nova_compute[226829]: 2026-01-31 08:13:00.306 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] VM Resumed (Lifecycle Event)
Jan 31 08:13:00 compute-2 nova_compute[226829]: 2026-01-31 08:13:00.464 226833 DEBUG nova.virt.libvirt.driver [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:13:00 compute-2 nova_compute[226829]: 2026-01-31 08:13:00.465 226833 DEBUG nova.virt.libvirt.driver [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:13:00 compute-2 nova_compute[226829]: 2026-01-31 08:13:00.465 226833 DEBUG nova.virt.libvirt.driver [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:13:00 compute-2 nova_compute[226829]: 2026-01-31 08:13:00.466 226833 DEBUG nova.virt.libvirt.driver [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:13:00 compute-2 nova_compute[226829]: 2026-01-31 08:13:00.466 226833 DEBUG nova.virt.libvirt.driver [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:13:00 compute-2 nova_compute[226829]: 2026-01-31 08:13:00.467 226833 DEBUG nova.virt.libvirt.driver [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:13:00 compute-2 nova_compute[226829]: 2026-01-31 08:13:00.614 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:13:00 compute-2 nova_compute[226829]: 2026-01-31 08:13:00.618 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:13:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:13:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:00.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:13:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735aef6f0 =====
Jan 31 08:13:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735aef6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:00 compute-2 radosgw[83985]: beast: 0x7fa735aef6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:00.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #103. Immutable memtables: 0.
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:00.671832) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 103
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847180671956, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 2234, "num_deletes": 261, "total_data_size": 5017452, "memory_usage": 5092576, "flush_reason": "Manual Compaction"}
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #104: started
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847180694170, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 104, "file_size": 3284278, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52610, "largest_seqno": 54839, "table_properties": {"data_size": 3275094, "index_size": 5678, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19871, "raw_average_key_size": 20, "raw_value_size": 3256358, "raw_average_value_size": 3384, "num_data_blocks": 245, "num_entries": 962, "num_filter_entries": 962, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847008, "oldest_key_time": 1769847008, "file_creation_time": 1769847180, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 22428 microseconds, and 13254 cpu microseconds.
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:00.694260) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #104: 3284278 bytes OK
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:00.694286) [db/memtable_list.cc:519] [default] Level-0 commit table #104 started
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:00.695746) [db/memtable_list.cc:722] [default] Level-0 commit table #104: memtable #1 done
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:00.695771) EVENT_LOG_v1 {"time_micros": 1769847180695763, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:00.695796) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 5007445, prev total WAL file size 5007445, number of live WAL files 2.
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000100.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:00.697158) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373537' seq:72057594037927935, type:22 .. '6C6F676D0032303130' seq:0, type:0; will stop at (end)
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [104(3207KB)], [102(10MB)]
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847180697253, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [104], "files_L6": [102], "score": -1, "input_data_size": 14421177, "oldest_snapshot_seqno": -1}
Jan 31 08:13:00 compute-2 nova_compute[226829]: 2026-01-31 08:13:00.781 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #105: 8096 keys, 14269774 bytes, temperature: kUnknown
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847180788507, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 105, "file_size": 14269774, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14213004, "index_size": 35410, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20293, "raw_key_size": 208690, "raw_average_key_size": 25, "raw_value_size": 14066123, "raw_average_value_size": 1737, "num_data_blocks": 1407, "num_entries": 8096, "num_filter_entries": 8096, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769847180, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 105, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:00.788946) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 14269774 bytes
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:00.790237) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.6 rd, 155.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 10.6 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(8.7) write-amplify(4.3) OK, records in: 8637, records dropped: 541 output_compression: NoCompression
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:00.790257) EVENT_LOG_v1 {"time_micros": 1769847180790248, "job": 64, "event": "compaction_finished", "compaction_time_micros": 91526, "compaction_time_cpu_micros": 51769, "output_level": 6, "num_output_files": 1, "total_output_size": 14269774, "num_input_records": 8637, "num_output_records": 8096, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847180790566, "job": 64, "event": "table_file_deletion", "file_number": 104}
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000102.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847180791377, "job": 64, "event": "table_file_deletion", "file_number": 102}
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:00.696967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:00.791450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:00.791459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:00.791462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:00.791466) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:13:00 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:00.791469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:13:00 compute-2 nova_compute[226829]: 2026-01-31 08:13:00.946 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:13:01 compute-2 nova_compute[226829]: 2026-01-31 08:13:01.080 226833 INFO nova.compute.manager [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Took 14.64 seconds to spawn the instance on the hypervisor.
Jan 31 08:13:01 compute-2 nova_compute[226829]: 2026-01-31 08:13:01.081 226833 DEBUG nova.compute.manager [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:13:01 compute-2 ceph-mon[77282]: pgmap v2315: 305 pgs: 305 active+clean; 696 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.1 MiB/s wr, 126 op/s
Jan 31 08:13:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1811768806' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2254943178' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:01 compute-2 nova_compute[226829]: 2026-01-31 08:13:01.431 226833 INFO nova.compute.manager [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Took 16.64 seconds to build instance.
Jan 31 08:13:01 compute-2 nova_compute[226829]: 2026-01-31 08:13:01.678 226833 DEBUG oslo_concurrency.lockutils [None req-84573d6c-85ac-43f5-abe1-8ce1f7cbb145 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:02 compute-2 ceph-mon[77282]: pgmap v2316: 305 pgs: 305 active+clean; 701 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.2 MiB/s wr, 155 op/s
Jan 31 08:13:02 compute-2 nova_compute[226829]: 2026-01-31 08:13:02.475 226833 DEBUG nova.compute.manager [req-bc69f103-001f-4b86-9332-4e1eaf9794bd req-4c55486a-9acf-4634-9944-f3adac50aaa5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Received event network-vif-plugged-c82f9d38-ef16-46ac-829a-12067ec8c603 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:13:02 compute-2 nova_compute[226829]: 2026-01-31 08:13:02.476 226833 DEBUG oslo_concurrency.lockutils [req-bc69f103-001f-4b86-9332-4e1eaf9794bd req-4c55486a-9acf-4634-9944-f3adac50aaa5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:02 compute-2 nova_compute[226829]: 2026-01-31 08:13:02.476 226833 DEBUG oslo_concurrency.lockutils [req-bc69f103-001f-4b86-9332-4e1eaf9794bd req-4c55486a-9acf-4634-9944-f3adac50aaa5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:02 compute-2 nova_compute[226829]: 2026-01-31 08:13:02.477 226833 DEBUG oslo_concurrency.lockutils [req-bc69f103-001f-4b86-9332-4e1eaf9794bd req-4c55486a-9acf-4634-9944-f3adac50aaa5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:02 compute-2 nova_compute[226829]: 2026-01-31 08:13:02.477 226833 DEBUG nova.compute.manager [req-bc69f103-001f-4b86-9332-4e1eaf9794bd req-4c55486a-9acf-4634-9944-f3adac50aaa5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] No waiting events found dispatching network-vif-plugged-c82f9d38-ef16-46ac-829a-12067ec8c603 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:13:02 compute-2 nova_compute[226829]: 2026-01-31 08:13:02.478 226833 WARNING nova.compute.manager [req-bc69f103-001f-4b86-9332-4e1eaf9794bd req-4c55486a-9acf-4634-9944-f3adac50aaa5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Received unexpected event network-vif-plugged-c82f9d38-ef16-46ac-829a-12067ec8c603 for instance with vm_state active and task_state None.
Jan 31 08:13:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:02.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:13:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:02.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:13:03 compute-2 nova_compute[226829]: 2026-01-31 08:13:03.675 226833 DEBUG nova.virt.libvirt.driver [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 31 08:13:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:04.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:04.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:04 compute-2 sudo[283834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:13:04 compute-2 sudo[283834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:04 compute-2 sudo[283834]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:04 compute-2 sudo[283859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:13:04 compute-2 sudo[283859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:04 compute-2 sudo[283859]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:04 compute-2 ceph-mon[77282]: pgmap v2317: 305 pgs: 305 active+clean; 704 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.2 MiB/s wr, 177 op/s
Jan 31 08:13:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:13:05 compute-2 nova_compute[226829]: 2026-01-31 08:13:05.284 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:05 compute-2 nova_compute[226829]: 2026-01-31 08:13:05.783 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:06 compute-2 kernel: tap75f73628-b1 (unregistering): left promiscuous mode
Jan 31 08:13:06 compute-2 NetworkManager[48999]: <info>  [1769847186.0597] device (tap75f73628-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:13:06 compute-2 nova_compute[226829]: 2026-01-31 08:13:06.090 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:06 compute-2 ovn_controller[133834]: 2026-01-31T08:13:06Z|00526|binding|INFO|Releasing lport 75f73628-b189-4a56-a44e-b33ec7ff3e50 from this chassis (sb_readonly=0)
Jan 31 08:13:06 compute-2 ovn_controller[133834]: 2026-01-31T08:13:06Z|00527|binding|INFO|Setting lport 75f73628-b189-4a56-a44e-b33ec7ff3e50 down in Southbound
Jan 31 08:13:06 compute-2 ovn_controller[133834]: 2026-01-31T08:13:06Z|00528|binding|INFO|Removing iface tap75f73628-b1 ovn-installed in OVS
Jan 31 08:13:06 compute-2 nova_compute[226829]: 2026-01-31 08:13:06.096 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:06 compute-2 nova_compute[226829]: 2026-01-31 08:13:06.099 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:06 compute-2 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Jan 31 08:13:06 compute-2 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d0000007f.scope: Consumed 14.087s CPU time.
Jan 31 08:13:06 compute-2 systemd-machined[195142]: Machine qemu-56-instance-0000007f terminated.
Jan 31 08:13:06 compute-2 podman[283885]: 2026-01-31 08:13:06.161709806 +0000 UTC m=+0.085787825 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 08:13:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:06.198 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:a5:ae 10.100.0.7'], port_security=['fa:16:3e:7b:a5:ae 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1ee317bd-390c-4a22-9e4e-e24189eb499e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b57bf91-5573-4777-9a03-b1fa3ca3351c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cbd0f41e455b4b3b9a8edf35ef0b85ed', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5f679240-571e-49a9-90f1-7fce9428e205', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6382cf61-a4d2-45ec-ba90-ec1b527a3e06, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=75f73628-b189-4a56-a44e-b33ec7ff3e50) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:13:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:06.201 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 75f73628-b189-4a56-a44e-b33ec7ff3e50 in datapath 1b57bf91-5573-4777-9a03-b1fa3ca3351c unbound from our chassis
Jan 31 08:13:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:06.203 143841 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1b57bf91-5573-4777-9a03-b1fa3ca3351c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 31 08:13:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:06.204 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[eca644e3-5cec-4a46-960f-9d4a31cb027a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:06 compute-2 kernel: tap75f73628-b1: entered promiscuous mode
Jan 31 08:13:06 compute-2 systemd-udevd[283897]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:13:06 compute-2 NetworkManager[48999]: <info>  [1769847186.2984] manager: (tap75f73628-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/262)
Jan 31 08:13:06 compute-2 ovn_controller[133834]: 2026-01-31T08:13:06Z|00529|binding|INFO|Claiming lport 75f73628-b189-4a56-a44e-b33ec7ff3e50 for this chassis.
Jan 31 08:13:06 compute-2 ovn_controller[133834]: 2026-01-31T08:13:06Z|00530|binding|INFO|75f73628-b189-4a56-a44e-b33ec7ff3e50: Claiming fa:16:3e:7b:a5:ae 10.100.0.7
Jan 31 08:13:06 compute-2 nova_compute[226829]: 2026-01-31 08:13:06.301 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:06 compute-2 kernel: tap75f73628-b1 (unregistering): left promiscuous mode
Jan 31 08:13:06 compute-2 ovn_controller[133834]: 2026-01-31T08:13:06Z|00531|binding|INFO|Setting lport 75f73628-b189-4a56-a44e-b33ec7ff3e50 ovn-installed in OVS
Jan 31 08:13:06 compute-2 nova_compute[226829]: 2026-01-31 08:13:06.318 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:06 compute-2 ovn_controller[133834]: 2026-01-31T08:13:06Z|00532|if_status|INFO|Dropped 2 log messages in last 231 seconds (most recently, 231 seconds ago) due to excessive rate
Jan 31 08:13:06 compute-2 ovn_controller[133834]: 2026-01-31T08:13:06Z|00533|if_status|INFO|Not setting lport 75f73628-b189-4a56-a44e-b33ec7ff3e50 down as sb is readonly
Jan 31 08:13:06 compute-2 nova_compute[226829]: 2026-01-31 08:13:06.319 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:06 compute-2 ovn_controller[133834]: 2026-01-31T08:13:06Z|00534|binding|INFO|Releasing lport 75f73628-b189-4a56-a44e-b33ec7ff3e50 from this chassis (sb_readonly=0)
Jan 31 08:13:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:06.382 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:a5:ae 10.100.0.7'], port_security=['fa:16:3e:7b:a5:ae 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1ee317bd-390c-4a22-9e4e-e24189eb499e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b57bf91-5573-4777-9a03-b1fa3ca3351c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cbd0f41e455b4b3b9a8edf35ef0b85ed', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5f679240-571e-49a9-90f1-7fce9428e205', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6382cf61-a4d2-45ec-ba90-ec1b527a3e06, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=75f73628-b189-4a56-a44e-b33ec7ff3e50) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:13:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:06.383 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 75f73628-b189-4a56-a44e-b33ec7ff3e50 in datapath 1b57bf91-5573-4777-9a03-b1fa3ca3351c bound to our chassis
Jan 31 08:13:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:06.384 143841 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1b57bf91-5573-4777-9a03-b1fa3ca3351c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 31 08:13:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:06.385 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2b7a55dc-de42-4d28-b948-228c2bcf9889]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:06 compute-2 nova_compute[226829]: 2026-01-31 08:13:06.387 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:06.574 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:a5:ae 10.100.0.7'], port_security=['fa:16:3e:7b:a5:ae 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1ee317bd-390c-4a22-9e4e-e24189eb499e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b57bf91-5573-4777-9a03-b1fa3ca3351c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cbd0f41e455b4b3b9a8edf35ef0b85ed', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5f679240-571e-49a9-90f1-7fce9428e205', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6382cf61-a4d2-45ec-ba90-ec1b527a3e06, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=75f73628-b189-4a56-a44e-b33ec7ff3e50) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:13:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:06.577 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 75f73628-b189-4a56-a44e-b33ec7ff3e50 in datapath 1b57bf91-5573-4777-9a03-b1fa3ca3351c unbound from our chassis
Jan 31 08:13:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:06.579 143841 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1b57bf91-5573-4777-9a03-b1fa3ca3351c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 31 08:13:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:06.580 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[19916798-5392-4420-9671-533e62100fd4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:06.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:06.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:06 compute-2 nova_compute[226829]: 2026-01-31 08:13:06.691 226833 INFO nova.virt.libvirt.driver [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Instance shutdown successfully after 13 seconds.
Jan 31 08:13:06 compute-2 nova_compute[226829]: 2026-01-31 08:13:06.696 226833 INFO nova.virt.libvirt.driver [-] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Instance destroyed successfully.
Jan 31 08:13:06 compute-2 nova_compute[226829]: 2026-01-31 08:13:06.697 226833 DEBUG nova.objects.instance [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lazy-loading 'numa_topology' on Instance uuid 1ee317bd-390c-4a22-9e4e-e24189eb499e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:13:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:06.886 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:06.888 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:06.889 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:06 compute-2 ceph-mon[77282]: pgmap v2318: 305 pgs: 305 active+clean; 704 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.2 MiB/s wr, 222 op/s
Jan 31 08:13:07 compute-2 nova_compute[226829]: 2026-01-31 08:13:07.035 226833 INFO nova.virt.libvirt.driver [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Attempting rescue
Jan 31 08:13:07 compute-2 nova_compute[226829]: 2026-01-31 08:13:07.036 226833 DEBUG nova.virt.libvirt.driver [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Jan 31 08:13:07 compute-2 nova_compute[226829]: 2026-01-31 08:13:07.041 226833 DEBUG nova.virt.libvirt.driver [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 31 08:13:07 compute-2 nova_compute[226829]: 2026-01-31 08:13:07.042 226833 INFO nova.virt.libvirt.driver [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Creating image(s)
Jan 31 08:13:07 compute-2 nova_compute[226829]: 2026-01-31 08:13:07.076 226833 DEBUG nova.storage.rbd_utils [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] rbd image 1ee317bd-390c-4a22-9e4e-e24189eb499e_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:13:07 compute-2 nova_compute[226829]: 2026-01-31 08:13:07.082 226833 DEBUG nova.objects.instance [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lazy-loading 'trusted_certs' on Instance uuid 1ee317bd-390c-4a22-9e4e-e24189eb499e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:13:07 compute-2 nova_compute[226829]: 2026-01-31 08:13:07.594 226833 DEBUG nova.storage.rbd_utils [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] rbd image 1ee317bd-390c-4a22-9e4e-e24189eb499e_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:13:07 compute-2 nova_compute[226829]: 2026-01-31 08:13:07.625 226833 DEBUG nova.storage.rbd_utils [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] rbd image 1ee317bd-390c-4a22-9e4e-e24189eb499e_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:13:07 compute-2 nova_compute[226829]: 2026-01-31 08:13:07.629 226833 DEBUG oslo_concurrency.processutils [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:07 compute-2 nova_compute[226829]: 2026-01-31 08:13:07.713 226833 DEBUG oslo_concurrency.processutils [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:07 compute-2 nova_compute[226829]: 2026-01-31 08:13:07.714 226833 DEBUG oslo_concurrency.lockutils [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:07 compute-2 nova_compute[226829]: 2026-01-31 08:13:07.716 226833 DEBUG oslo_concurrency.lockutils [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:07 compute-2 nova_compute[226829]: 2026-01-31 08:13:07.716 226833 DEBUG oslo_concurrency.lockutils [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:07 compute-2 nova_compute[226829]: 2026-01-31 08:13:07.753 226833 DEBUG nova.storage.rbd_utils [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] rbd image 1ee317bd-390c-4a22-9e4e-e24189eb499e_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:13:07 compute-2 nova_compute[226829]: 2026-01-31 08:13:07.758 226833 DEBUG oslo_concurrency.processutils [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 1ee317bd-390c-4a22-9e4e-e24189eb499e_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:08 compute-2 ceph-mon[77282]: pgmap v2319: 305 pgs: 305 active+clean; 673 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 268 op/s
Jan 31 08:13:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:08.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:08.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:08 compute-2 nova_compute[226829]: 2026-01-31 08:13:08.998 226833 DEBUG oslo_concurrency.processutils [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 1ee317bd-390c-4a22-9e4e-e24189eb499e_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:09 compute-2 nova_compute[226829]: 2026-01-31 08:13:09.000 226833 DEBUG nova.objects.instance [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lazy-loading 'migration_context' on Instance uuid 1ee317bd-390c-4a22-9e4e-e24189eb499e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:13:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:13:10 compute-2 nova_compute[226829]: 2026-01-31 08:13:10.288 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:10.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:10.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:10 compute-2 nova_compute[226829]: 2026-01-31 08:13:10.788 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:11 compute-2 ceph-mon[77282]: pgmap v2320: 305 pgs: 305 active+clean; 673 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.0 MiB/s wr, 258 op/s
Jan 31 08:13:11 compute-2 nova_compute[226829]: 2026-01-31 08:13:11.724 226833 DEBUG nova.virt.libvirt.driver [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:13:11 compute-2 nova_compute[226829]: 2026-01-31 08:13:11.727 226833 DEBUG nova.virt.libvirt.driver [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Start _get_guest_xml network_info=[{"id": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "address": "fa:16:3e:7b:a5:ae", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-2001325626-network", "vif_mac": "fa:16:3e:7b:a5:ae"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75f73628-b1", "ovs_interfaceid": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '7c23949f-bba8-4466-bb79-caf568852d38', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:13:11 compute-2 nova_compute[226829]: 2026-01-31 08:13:11.728 226833 DEBUG nova.objects.instance [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lazy-loading 'resources' on Instance uuid 1ee317bd-390c-4a22-9e4e-e24189eb499e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:13:11 compute-2 nova_compute[226829]: 2026-01-31 08:13:11.774 226833 WARNING nova.virt.libvirt.driver [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:13:11 compute-2 nova_compute[226829]: 2026-01-31 08:13:11.801 226833 DEBUG nova.virt.libvirt.host [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:13:11 compute-2 nova_compute[226829]: 2026-01-31 08:13:11.802 226833 DEBUG nova.virt.libvirt.host [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:13:11 compute-2 nova_compute[226829]: 2026-01-31 08:13:11.818 226833 DEBUG nova.virt.libvirt.host [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:13:11 compute-2 nova_compute[226829]: 2026-01-31 08:13:11.819 226833 DEBUG nova.virt.libvirt.host [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:13:11 compute-2 nova_compute[226829]: 2026-01-31 08:13:11.822 226833 DEBUG nova.virt.libvirt.driver [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:13:11 compute-2 nova_compute[226829]: 2026-01-31 08:13:11.822 226833 DEBUG nova.virt.hardware [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:13:11 compute-2 nova_compute[226829]: 2026-01-31 08:13:11.823 226833 DEBUG nova.virt.hardware [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:13:11 compute-2 nova_compute[226829]: 2026-01-31 08:13:11.824 226833 DEBUG nova.virt.hardware [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:13:11 compute-2 nova_compute[226829]: 2026-01-31 08:13:11.825 226833 DEBUG nova.virt.hardware [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:13:11 compute-2 nova_compute[226829]: 2026-01-31 08:13:11.826 226833 DEBUG nova.virt.hardware [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:13:11 compute-2 nova_compute[226829]: 2026-01-31 08:13:11.826 226833 DEBUG nova.virt.hardware [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:13:11 compute-2 nova_compute[226829]: 2026-01-31 08:13:11.827 226833 DEBUG nova.virt.hardware [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:13:11 compute-2 nova_compute[226829]: 2026-01-31 08:13:11.828 226833 DEBUG nova.virt.hardware [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:13:11 compute-2 nova_compute[226829]: 2026-01-31 08:13:11.828 226833 DEBUG nova.virt.hardware [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:13:11 compute-2 nova_compute[226829]: 2026-01-31 08:13:11.829 226833 DEBUG nova.virt.hardware [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:13:11 compute-2 nova_compute[226829]: 2026-01-31 08:13:11.829 226833 DEBUG nova.virt.hardware [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:13:11 compute-2 nova_compute[226829]: 2026-01-31 08:13:11.830 226833 DEBUG nova.objects.instance [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1ee317bd-390c-4a22-9e4e-e24189eb499e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:13:11 compute-2 nova_compute[226829]: 2026-01-31 08:13:11.869 226833 DEBUG oslo_concurrency.processutils [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:11 compute-2 nova_compute[226829]: 2026-01-31 08:13:11.991 226833 DEBUG nova.compute.manager [req-52b6c407-be7a-49ef-b7fd-5dc1f1a725de req-1eeb793e-f837-4dfb-bc8c-606250378a79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Received event network-changed-c82f9d38-ef16-46ac-829a-12067ec8c603 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:13:11 compute-2 nova_compute[226829]: 2026-01-31 08:13:11.992 226833 DEBUG nova.compute.manager [req-52b6c407-be7a-49ef-b7fd-5dc1f1a725de req-1eeb793e-f837-4dfb-bc8c-606250378a79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Refreshing instance network info cache due to event network-changed-c82f9d38-ef16-46ac-829a-12067ec8c603. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:13:11 compute-2 nova_compute[226829]: 2026-01-31 08:13:11.993 226833 DEBUG oslo_concurrency.lockutils [req-52b6c407-be7a-49ef-b7fd-5dc1f1a725de req-1eeb793e-f837-4dfb-bc8c-606250378a79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-5a59388d-bade-4df0-9ac0-0022df15ea02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:13:11 compute-2 nova_compute[226829]: 2026-01-31 08:13:11.997 226833 DEBUG oslo_concurrency.lockutils [req-52b6c407-be7a-49ef-b7fd-5dc1f1a725de req-1eeb793e-f837-4dfb-bc8c-606250378a79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-5a59388d-bade-4df0-9ac0-0022df15ea02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:13:11 compute-2 nova_compute[226829]: 2026-01-31 08:13:11.998 226833 DEBUG nova.network.neutron [req-52b6c407-be7a-49ef-b7fd-5dc1f1a725de req-1eeb793e-f837-4dfb-bc8c-606250378a79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Refreshing network info cache for port c82f9d38-ef16-46ac-829a-12067ec8c603 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:13:12 compute-2 ceph-mon[77282]: pgmap v2321: 305 pgs: 305 active+clean; 684 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.3 MiB/s wr, 214 op/s
Jan 31 08:13:12 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:13:12 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1414180678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:12 compute-2 nova_compute[226829]: 2026-01-31 08:13:12.295 226833 DEBUG oslo_concurrency.processutils [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:12 compute-2 nova_compute[226829]: 2026-01-31 08:13:12.296 226833 DEBUG oslo_concurrency.processutils [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:12.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:12.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:12 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:13:12 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2099496265' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:12 compute-2 nova_compute[226829]: 2026-01-31 08:13:12.730 226833 DEBUG oslo_concurrency.processutils [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:12 compute-2 nova_compute[226829]: 2026-01-31 08:13:12.732 226833 DEBUG oslo_concurrency.processutils [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:13:13 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2156842279' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:13 compute-2 nova_compute[226829]: 2026-01-31 08:13:13.203 226833 DEBUG oslo_concurrency.processutils [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:13 compute-2 nova_compute[226829]: 2026-01-31 08:13:13.205 226833 DEBUG nova.virt.libvirt.vif [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:12:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-239463750',display_name='tempest-ServerRescueTestJSON-server-239463750',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-239463750',id=127,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:12:45Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cbd0f41e455b4b3b9a8edf35ef0b85ed',ramdisk_id='',reservation_id='r-l1tglovt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_dis
k='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1911109090',owner_user_name='tempest-ServerRescueTestJSON-1911109090-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:12:45Z,user_data=None,user_id='6ab9e181016f4d5a899c91dae3aa26e0',uuid=1ee317bd-390c-4a22-9e4e-e24189eb499e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "address": "fa:16:3e:7b:a5:ae", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-2001325626-network", "vif_mac": "fa:16:3e:7b:a5:ae"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75f73628-b1", "ovs_interfaceid": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:13:13 compute-2 nova_compute[226829]: 2026-01-31 08:13:13.206 226833 DEBUG nova.network.os_vif_util [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Converting VIF {"id": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "address": "fa:16:3e:7b:a5:ae", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-2001325626-network", "vif_mac": "fa:16:3e:7b:a5:ae"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75f73628-b1", "ovs_interfaceid": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:13:13 compute-2 nova_compute[226829]: 2026-01-31 08:13:13.207 226833 DEBUG nova.network.os_vif_util [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:a5:ae,bridge_name='br-int',has_traffic_filtering=True,id=75f73628-b189-4a56-a44e-b33ec7ff3e50,network=Network(1b57bf91-5573-4777-9a03-b1fa3ca3351c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75f73628-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:13:13 compute-2 nova_compute[226829]: 2026-01-31 08:13:13.208 226833 DEBUG nova.objects.instance [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lazy-loading 'pci_devices' on Instance uuid 1ee317bd-390c-4a22-9e4e-e24189eb499e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:13:13 compute-2 nova_compute[226829]: 2026-01-31 08:13:13.294 226833 DEBUG nova.virt.libvirt.driver [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:13:13 compute-2 nova_compute[226829]:   <uuid>1ee317bd-390c-4a22-9e4e-e24189eb499e</uuid>
Jan 31 08:13:13 compute-2 nova_compute[226829]:   <name>instance-0000007f</name>
Jan 31 08:13:13 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:13:13 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:13:13 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <nova:name>tempest-ServerRescueTestJSON-server-239463750</nova:name>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:13:11</nova:creationTime>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:13:13 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:13:13 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:13:13 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:13:13 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:13:13 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:13:13 compute-2 nova_compute[226829]:         <nova:user uuid="6ab9e181016f4d5a899c91dae3aa26e0">tempest-ServerRescueTestJSON-1911109090-project-member</nova:user>
Jan 31 08:13:13 compute-2 nova_compute[226829]:         <nova:project uuid="cbd0f41e455b4b3b9a8edf35ef0b85ed">tempest-ServerRescueTestJSON-1911109090</nova:project>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:13:13 compute-2 nova_compute[226829]:         <nova:port uuid="75f73628-b189-4a56-a44e-b33ec7ff3e50">
Jan 31 08:13:13 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:13:13 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:13:13 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <system>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <entry name="serial">1ee317bd-390c-4a22-9e4e-e24189eb499e</entry>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <entry name="uuid">1ee317bd-390c-4a22-9e4e-e24189eb499e</entry>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     </system>
Jan 31 08:13:13 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:13:13 compute-2 nova_compute[226829]:   <os>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:   </os>
Jan 31 08:13:13 compute-2 nova_compute[226829]:   <features>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:   </features>
Jan 31 08:13:13 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:13:13 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:13:13 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/1ee317bd-390c-4a22-9e4e-e24189eb499e_disk.rescue">
Jan 31 08:13:13 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       </source>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:13:13 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/1ee317bd-390c-4a22-9e4e-e24189eb499e_disk">
Jan 31 08:13:13 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       </source>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:13:13 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <target dev="vdb" bus="virtio"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/1ee317bd-390c-4a22-9e4e-e24189eb499e_disk.config.rescue">
Jan 31 08:13:13 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       </source>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:13:13 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:7b:a5:ae"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <target dev="tap75f73628-b1"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/1ee317bd-390c-4a22-9e4e-e24189eb499e/console.log" append="off"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <video>
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     </video>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:13:13 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:13:13 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:13:13 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:13:13 compute-2 nova_compute[226829]: </domain>
Jan 31 08:13:13 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:13:13 compute-2 nova_compute[226829]: 2026-01-31 08:13:13.303 226833 INFO nova.virt.libvirt.driver [-] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Instance destroyed successfully.
Jan 31 08:13:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1414180678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/983180038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2099496265' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2156842279' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:13 compute-2 nova_compute[226829]: 2026-01-31 08:13:13.945 226833 DEBUG nova.virt.libvirt.driver [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:13:13 compute-2 nova_compute[226829]: 2026-01-31 08:13:13.945 226833 DEBUG nova.virt.libvirt.driver [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:13:13 compute-2 nova_compute[226829]: 2026-01-31 08:13:13.945 226833 DEBUG nova.virt.libvirt.driver [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:13:13 compute-2 nova_compute[226829]: 2026-01-31 08:13:13.946 226833 DEBUG nova.virt.libvirt.driver [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] No VIF found with MAC fa:16:3e:7b:a5:ae, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:13:13 compute-2 nova_compute[226829]: 2026-01-31 08:13:13.946 226833 INFO nova.virt.libvirt.driver [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Using config drive
Jan 31 08:13:13 compute-2 nova_compute[226829]: 2026-01-31 08:13:13.974 226833 DEBUG nova.storage.rbd_utils [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] rbd image 1ee317bd-390c-4a22-9e4e-e24189eb499e_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:13:14 compute-2 nova_compute[226829]: 2026-01-31 08:13:14.101 226833 DEBUG nova.objects.instance [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lazy-loading 'ec2_ids' on Instance uuid 1ee317bd-390c-4a22-9e4e-e24189eb499e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:13:14 compute-2 nova_compute[226829]: 2026-01-31 08:13:14.247 226833 DEBUG nova.compute.manager [req-72917da9-c349-42a0-b8b3-f3c242591665 req-48dc4b81-a305-4ceb-86c0-3d47f4317982 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Received event network-vif-unplugged-75f73628-b189-4a56-a44e-b33ec7ff3e50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:13:14 compute-2 nova_compute[226829]: 2026-01-31 08:13:14.247 226833 DEBUG oslo_concurrency.lockutils [req-72917da9-c349-42a0-b8b3-f3c242591665 req-48dc4b81-a305-4ceb-86c0-3d47f4317982 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:14 compute-2 nova_compute[226829]: 2026-01-31 08:13:14.248 226833 DEBUG oslo_concurrency.lockutils [req-72917da9-c349-42a0-b8b3-f3c242591665 req-48dc4b81-a305-4ceb-86c0-3d47f4317982 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:14 compute-2 nova_compute[226829]: 2026-01-31 08:13:14.248 226833 DEBUG oslo_concurrency.lockutils [req-72917da9-c349-42a0-b8b3-f3c242591665 req-48dc4b81-a305-4ceb-86c0-3d47f4317982 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:14 compute-2 nova_compute[226829]: 2026-01-31 08:13:14.249 226833 DEBUG nova.compute.manager [req-72917da9-c349-42a0-b8b3-f3c242591665 req-48dc4b81-a305-4ceb-86c0-3d47f4317982 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] No waiting events found dispatching network-vif-unplugged-75f73628-b189-4a56-a44e-b33ec7ff3e50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:13:14 compute-2 nova_compute[226829]: 2026-01-31 08:13:14.249 226833 WARNING nova.compute.manager [req-72917da9-c349-42a0-b8b3-f3c242591665 req-48dc4b81-a305-4ceb-86c0-3d47f4317982 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Received unexpected event network-vif-unplugged-75f73628-b189-4a56-a44e-b33ec7ff3e50 for instance with vm_state active and task_state rescuing.
Jan 31 08:13:14 compute-2 nova_compute[226829]: 2026-01-31 08:13:14.249 226833 DEBUG nova.compute.manager [req-72917da9-c349-42a0-b8b3-f3c242591665 req-48dc4b81-a305-4ceb-86c0-3d47f4317982 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Received event network-vif-plugged-75f73628-b189-4a56-a44e-b33ec7ff3e50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:13:14 compute-2 nova_compute[226829]: 2026-01-31 08:13:14.250 226833 DEBUG oslo_concurrency.lockutils [req-72917da9-c349-42a0-b8b3-f3c242591665 req-48dc4b81-a305-4ceb-86c0-3d47f4317982 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:14 compute-2 nova_compute[226829]: 2026-01-31 08:13:14.250 226833 DEBUG oslo_concurrency.lockutils [req-72917da9-c349-42a0-b8b3-f3c242591665 req-48dc4b81-a305-4ceb-86c0-3d47f4317982 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:14 compute-2 nova_compute[226829]: 2026-01-31 08:13:14.250 226833 DEBUG oslo_concurrency.lockutils [req-72917da9-c349-42a0-b8b3-f3c242591665 req-48dc4b81-a305-4ceb-86c0-3d47f4317982 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:14 compute-2 nova_compute[226829]: 2026-01-31 08:13:14.251 226833 DEBUG nova.compute.manager [req-72917da9-c349-42a0-b8b3-f3c242591665 req-48dc4b81-a305-4ceb-86c0-3d47f4317982 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] No waiting events found dispatching network-vif-plugged-75f73628-b189-4a56-a44e-b33ec7ff3e50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:13:14 compute-2 nova_compute[226829]: 2026-01-31 08:13:14.251 226833 WARNING nova.compute.manager [req-72917da9-c349-42a0-b8b3-f3c242591665 req-48dc4b81-a305-4ceb-86c0-3d47f4317982 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Received unexpected event network-vif-plugged-75f73628-b189-4a56-a44e-b33ec7ff3e50 for instance with vm_state active and task_state rescuing.
Jan 31 08:13:14 compute-2 nova_compute[226829]: 2026-01-31 08:13:14.458 226833 DEBUG nova.objects.instance [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lazy-loading 'keypairs' on Instance uuid 1ee317bd-390c-4a22-9e4e-e24189eb499e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:13:14 compute-2 ceph-mon[77282]: pgmap v2322: 305 pgs: 305 active+clean; 704 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 184 op/s
Jan 31 08:13:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:13:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:14.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:13:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:13:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:14.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:13:14 compute-2 nova_compute[226829]: 2026-01-31 08:13:14.693 226833 DEBUG nova.network.neutron [req-52b6c407-be7a-49ef-b7fd-5dc1f1a725de req-1eeb793e-f837-4dfb-bc8c-606250378a79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Updated VIF entry in instance network info cache for port c82f9d38-ef16-46ac-829a-12067ec8c603. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:13:14 compute-2 nova_compute[226829]: 2026-01-31 08:13:14.694 226833 DEBUG nova.network.neutron [req-52b6c407-be7a-49ef-b7fd-5dc1f1a725de req-1eeb793e-f837-4dfb-bc8c-606250378a79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Updating instance_info_cache with network_info: [{"id": "c82f9d38-ef16-46ac-829a-12067ec8c603", "address": "fa:16:3e:15:e2:fd", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc82f9d38-ef", "ovs_interfaceid": "c82f9d38-ef16-46ac-829a-12067ec8c603", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:13:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:13:14 compute-2 nova_compute[226829]: 2026-01-31 08:13:14.974 226833 DEBUG oslo_concurrency.lockutils [req-52b6c407-be7a-49ef-b7fd-5dc1f1a725de req-1eeb793e-f837-4dfb-bc8c-606250378a79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-5a59388d-bade-4df0-9ac0-0022df15ea02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:13:15 compute-2 nova_compute[226829]: 2026-01-31 08:13:15.291 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:15 compute-2 nova_compute[226829]: 2026-01-31 08:13:15.449 226833 INFO nova.virt.libvirt.driver [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Creating config drive at /var/lib/nova/instances/1ee317bd-390c-4a22-9e4e-e24189eb499e/disk.config.rescue
Jan 31 08:13:15 compute-2 nova_compute[226829]: 2026-01-31 08:13:15.453 226833 DEBUG oslo_concurrency.processutils [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1ee317bd-390c-4a22-9e4e-e24189eb499e/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpog3refgl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:15 compute-2 nova_compute[226829]: 2026-01-31 08:13:15.581 226833 DEBUG oslo_concurrency.processutils [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1ee317bd-390c-4a22-9e4e-e24189eb499e/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpog3refgl" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:15 compute-2 nova_compute[226829]: 2026-01-31 08:13:15.623 226833 DEBUG nova.storage.rbd_utils [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] rbd image 1ee317bd-390c-4a22-9e4e-e24189eb499e_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:13:15 compute-2 nova_compute[226829]: 2026-01-31 08:13:15.627 226833 DEBUG oslo_concurrency.processutils [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1ee317bd-390c-4a22-9e4e-e24189eb499e/disk.config.rescue 1ee317bd-390c-4a22-9e4e-e24189eb499e_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:15 compute-2 nova_compute[226829]: 2026-01-31 08:13:15.791 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:16 compute-2 ovn_controller[133834]: 2026-01-31T08:13:16Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:15:e2:fd 10.100.0.14
Jan 31 08:13:16 compute-2 ovn_controller[133834]: 2026-01-31T08:13:16Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:15:e2:fd 10.100.0.14
Jan 31 08:13:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:16.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:16.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:16 compute-2 nova_compute[226829]: 2026-01-31 08:13:16.815 226833 DEBUG oslo_concurrency.processutils [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1ee317bd-390c-4a22-9e4e-e24189eb499e/disk.config.rescue 1ee317bd-390c-4a22-9e4e-e24189eb499e_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:16 compute-2 nova_compute[226829]: 2026-01-31 08:13:16.816 226833 INFO nova.virt.libvirt.driver [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Deleting local config drive /var/lib/nova/instances/1ee317bd-390c-4a22-9e4e-e24189eb499e/disk.config.rescue because it was imported into RBD.
Jan 31 08:13:16 compute-2 kernel: tap75f73628-b1: entered promiscuous mode
Jan 31 08:13:16 compute-2 NetworkManager[48999]: <info>  [1769847196.8593] manager: (tap75f73628-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/263)
Jan 31 08:13:16 compute-2 ovn_controller[133834]: 2026-01-31T08:13:16Z|00535|binding|INFO|Claiming lport 75f73628-b189-4a56-a44e-b33ec7ff3e50 for this chassis.
Jan 31 08:13:16 compute-2 ovn_controller[133834]: 2026-01-31T08:13:16Z|00536|binding|INFO|75f73628-b189-4a56-a44e-b33ec7ff3e50: Claiming fa:16:3e:7b:a5:ae 10.100.0.7
Jan 31 08:13:16 compute-2 nova_compute[226829]: 2026-01-31 08:13:16.860 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:16 compute-2 ovn_controller[133834]: 2026-01-31T08:13:16Z|00537|binding|INFO|Removing lport 75f73628-b189-4a56-a44e-b33ec7ff3e50 ovn-installed in OVS
Jan 31 08:13:16 compute-2 nova_compute[226829]: 2026-01-31 08:13:16.862 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:16 compute-2 ovn_controller[133834]: 2026-01-31T08:13:16Z|00538|binding|INFO|Setting lport 75f73628-b189-4a56-a44e-b33ec7ff3e50 ovn-installed in OVS
Jan 31 08:13:16 compute-2 nova_compute[226829]: 2026-01-31 08:13:16.868 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:16 compute-2 nova_compute[226829]: 2026-01-31 08:13:16.870 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:16 compute-2 systemd-machined[195142]: New machine qemu-58-instance-0000007f.
Jan 31 08:13:16 compute-2 systemd[1]: Started Virtual Machine qemu-58-instance-0000007f.
Jan 31 08:13:16 compute-2 systemd-udevd[284156]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:13:16 compute-2 NetworkManager[48999]: <info>  [1769847196.9242] device (tap75f73628-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:13:16 compute-2 NetworkManager[48999]: <info>  [1769847196.9248] device (tap75f73628-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:13:16 compute-2 ovn_controller[133834]: 2026-01-31T08:13:16Z|00539|binding|INFO|Setting lport 75f73628-b189-4a56-a44e-b33ec7ff3e50 up in Southbound
Jan 31 08:13:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:16.937 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:a5:ae 10.100.0.7'], port_security=['fa:16:3e:7b:a5:ae 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1ee317bd-390c-4a22-9e4e-e24189eb499e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b57bf91-5573-4777-9a03-b1fa3ca3351c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cbd0f41e455b4b3b9a8edf35ef0b85ed', 'neutron:revision_number': '5', 'neutron:security_group_ids': '5f679240-571e-49a9-90f1-7fce9428e205', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6382cf61-a4d2-45ec-ba90-ec1b527a3e06, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=75f73628-b189-4a56-a44e-b33ec7ff3e50) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:13:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:16.939 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 75f73628-b189-4a56-a44e-b33ec7ff3e50 in datapath 1b57bf91-5573-4777-9a03-b1fa3ca3351c bound to our chassis
Jan 31 08:13:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:16.940 143841 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1b57bf91-5573-4777-9a03-b1fa3ca3351c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 31 08:13:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:16.941 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3dab7c50-9be5-40cb-9947-b86a132b5050]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:17 compute-2 ceph-mon[77282]: pgmap v2323: 305 pgs: 305 active+clean; 709 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.6 MiB/s wr, 131 op/s
Jan 31 08:13:17 compute-2 nova_compute[226829]: 2026-01-31 08:13:17.912 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Removed pending event for 1ee317bd-390c-4a22-9e4e-e24189eb499e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 08:13:17 compute-2 nova_compute[226829]: 2026-01-31 08:13:17.913 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847197.9114406, 1ee317bd-390c-4a22-9e4e-e24189eb499e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:13:17 compute-2 nova_compute[226829]: 2026-01-31 08:13:17.913 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] VM Resumed (Lifecycle Event)
Jan 31 08:13:18 compute-2 nova_compute[226829]: 2026-01-31 08:13:18.005 226833 DEBUG nova.compute.manager [None req-d796d0ad-8f5a-4dfc-b734-e1b8ff014dcc 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:13:18 compute-2 nova_compute[226829]: 2026-01-31 08:13:18.008 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:13:18 compute-2 nova_compute[226829]: 2026-01-31 08:13:18.015 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:13:18 compute-2 nova_compute[226829]: 2026-01-31 08:13:18.117 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] During sync_power_state the instance has a pending task (rescuing). Skip.
Jan 31 08:13:18 compute-2 nova_compute[226829]: 2026-01-31 08:13:18.118 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847197.9129317, 1ee317bd-390c-4a22-9e4e-e24189eb499e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:13:18 compute-2 nova_compute[226829]: 2026-01-31 08:13:18.118 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] VM Started (Lifecycle Event)
Jan 31 08:13:18 compute-2 ceph-mon[77282]: pgmap v2324: 305 pgs: 305 active+clean; 688 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 994 KiB/s rd, 3.9 MiB/s wr, 124 op/s
Jan 31 08:13:18 compute-2 nova_compute[226829]: 2026-01-31 08:13:18.437 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:13:18 compute-2 nova_compute[226829]: 2026-01-31 08:13:18.440 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:13:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:18.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:13:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:18.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:13:19 compute-2 nova_compute[226829]: 2026-01-31 08:13:19.068 226833 DEBUG nova.compute.manager [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Received event network-vif-plugged-75f73628-b189-4a56-a44e-b33ec7ff3e50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:13:19 compute-2 nova_compute[226829]: 2026-01-31 08:13:19.068 226833 DEBUG oslo_concurrency.lockutils [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:19 compute-2 nova_compute[226829]: 2026-01-31 08:13:19.069 226833 DEBUG oslo_concurrency.lockutils [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:19 compute-2 nova_compute[226829]: 2026-01-31 08:13:19.069 226833 DEBUG oslo_concurrency.lockutils [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:19 compute-2 nova_compute[226829]: 2026-01-31 08:13:19.069 226833 DEBUG nova.compute.manager [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] No waiting events found dispatching network-vif-plugged-75f73628-b189-4a56-a44e-b33ec7ff3e50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:13:19 compute-2 nova_compute[226829]: 2026-01-31 08:13:19.069 226833 WARNING nova.compute.manager [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Received unexpected event network-vif-plugged-75f73628-b189-4a56-a44e-b33ec7ff3e50 for instance with vm_state rescued and task_state None.
Jan 31 08:13:19 compute-2 nova_compute[226829]: 2026-01-31 08:13:19.069 226833 DEBUG nova.compute.manager [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Received event network-vif-plugged-75f73628-b189-4a56-a44e-b33ec7ff3e50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:13:19 compute-2 nova_compute[226829]: 2026-01-31 08:13:19.070 226833 DEBUG oslo_concurrency.lockutils [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:19 compute-2 nova_compute[226829]: 2026-01-31 08:13:19.070 226833 DEBUG oslo_concurrency.lockutils [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:19 compute-2 nova_compute[226829]: 2026-01-31 08:13:19.070 226833 DEBUG oslo_concurrency.lockutils [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:19 compute-2 nova_compute[226829]: 2026-01-31 08:13:19.071 226833 DEBUG nova.compute.manager [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] No waiting events found dispatching network-vif-plugged-75f73628-b189-4a56-a44e-b33ec7ff3e50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:13:19 compute-2 nova_compute[226829]: 2026-01-31 08:13:19.071 226833 WARNING nova.compute.manager [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Received unexpected event network-vif-plugged-75f73628-b189-4a56-a44e-b33ec7ff3e50 for instance with vm_state rescued and task_state None.
Jan 31 08:13:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:13:20 compute-2 nova_compute[226829]: 2026-01-31 08:13:20.295 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:20.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:20.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:20 compute-2 nova_compute[226829]: 2026-01-31 08:13:20.793 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:22 compute-2 ceph-mon[77282]: pgmap v2325: 305 pgs: 305 active+clean; 650 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.8 MiB/s wr, 174 op/s
Jan 31 08:13:22 compute-2 nova_compute[226829]: 2026-01-31 08:13:22.605 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:13:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:22.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:22.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:23 compute-2 nova_compute[226829]: 2026-01-31 08:13:23.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:13:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:24.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:24.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:24 compute-2 sudo[284229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:13:24 compute-2 sudo[284229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:24 compute-2 sudo[284229]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:24 compute-2 sudo[284254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:13:24 compute-2 sudo[284254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:24 compute-2 sudo[284254]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:24 compute-2 ceph-mon[77282]: pgmap v2326: 305 pgs: 305 active+clean; 651 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.6 MiB/s wr, 223 op/s
Jan 31 08:13:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/645181621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:13:25 compute-2 nova_compute[226829]: 2026-01-31 08:13:25.297 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:25 compute-2 nova_compute[226829]: 2026-01-31 08:13:25.798 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:26 compute-2 ceph-mon[77282]: pgmap v2327: 305 pgs: 305 active+clean; 651 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.0 MiB/s wr, 252 op/s
Jan 31 08:13:26 compute-2 podman[284280]: 2026-01-31 08:13:26.229339419 +0000 UTC m=+0.105443029 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:13:26 compute-2 nova_compute[226829]: 2026-01-31 08:13:26.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:13:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:26.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:13:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:26.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #106. Immutable memtables: 0.
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:26.892016) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 106
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847206892142, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 493, "num_deletes": 251, "total_data_size": 664324, "memory_usage": 674776, "flush_reason": "Manual Compaction"}
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #107: started
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847206897365, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 107, "file_size": 437883, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54844, "largest_seqno": 55332, "table_properties": {"data_size": 435225, "index_size": 694, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6410, "raw_average_key_size": 18, "raw_value_size": 429957, "raw_average_value_size": 1268, "num_data_blocks": 31, "num_entries": 339, "num_filter_entries": 339, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847181, "oldest_key_time": 1769847181, "file_creation_time": 1769847206, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 5380 microseconds, and 1639 cpu microseconds.
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:26.897424) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #107: 437883 bytes OK
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:26.897445) [db/memtable_list.cc:519] [default] Level-0 commit table #107 started
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:26.898902) [db/memtable_list.cc:722] [default] Level-0 commit table #107: memtable #1 done
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:26.898915) EVENT_LOG_v1 {"time_micros": 1769847206898911, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:26.898932) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 661378, prev total WAL file size 661378, number of live WAL files 2.
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000103.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:26.899322) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [107(427KB)], [105(13MB)]
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847206899350, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [107], "files_L6": [105], "score": -1, "input_data_size": 14707657, "oldest_snapshot_seqno": -1}
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #108: 7922 keys, 12781223 bytes, temperature: kUnknown
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847206965011, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 108, "file_size": 12781223, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12726946, "index_size": 33365, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19845, "raw_key_size": 205773, "raw_average_key_size": 25, "raw_value_size": 12584309, "raw_average_value_size": 1588, "num_data_blocks": 1314, "num_entries": 7922, "num_filter_entries": 7922, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769847206, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 108, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:26.965269) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 12781223 bytes
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:26.966487) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 223.7 rd, 194.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 13.6 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(62.8) write-amplify(29.2) OK, records in: 8435, records dropped: 513 output_compression: NoCompression
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:26.966506) EVENT_LOG_v1 {"time_micros": 1769847206966497, "job": 66, "event": "compaction_finished", "compaction_time_micros": 65754, "compaction_time_cpu_micros": 20569, "output_level": 6, "num_output_files": 1, "total_output_size": 12781223, "num_input_records": 8435, "num_output_records": 7922, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847206966672, "job": 66, "event": "table_file_deletion", "file_number": 107}
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000105.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847206967861, "job": 66, "event": "table_file_deletion", "file_number": 105}
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:26.899239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:26.967909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:26.967914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:26.967916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:26.967917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:13:26 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:13:26.967919) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:13:27 compute-2 nova_compute[226829]: 2026-01-31 08:13:27.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:13:27 compute-2 nova_compute[226829]: 2026-01-31 08:13:27.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:13:27 compute-2 nova_compute[226829]: 2026-01-31 08:13:27.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:13:27 compute-2 ceph-mon[77282]: pgmap v2328: 305 pgs: 305 active+clean; 655 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 248 op/s
Jan 31 08:13:27 compute-2 nova_compute[226829]: 2026-01-31 08:13:27.903 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-4e4e24bf-e5fe-4be2-9d89-52432f07cca0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:13:27 compute-2 nova_compute[226829]: 2026-01-31 08:13:27.904 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-4e4e24bf-e5fe-4be2-9d89-52432f07cca0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:13:27 compute-2 nova_compute[226829]: 2026-01-31 08:13:27.904 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:13:27 compute-2 nova_compute[226829]: 2026-01-31 08:13:27.904 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4e4e24bf-e5fe-4be2-9d89-52432f07cca0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:13:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:28.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:13:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:28.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:13:28 compute-2 ceph-mon[77282]: pgmap v2329: 305 pgs: 305 active+clean; 658 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.3 MiB/s wr, 247 op/s
Jan 31 08:13:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:13:30 compute-2 nova_compute[226829]: 2026-01-31 08:13:30.351 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/253429130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:30 compute-2 ceph-mon[77282]: pgmap v2330: 305 pgs: 305 active+clean; 658 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 95 KiB/s wr, 222 op/s
Jan 31 08:13:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:13:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:30.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:13:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:13:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:30.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:13:30 compute-2 nova_compute[226829]: 2026-01-31 08:13:30.799 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:32 compute-2 ceph-mon[77282]: pgmap v2331: 305 pgs: 305 active+clean; 658 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 109 KiB/s wr, 137 op/s
Jan 31 08:13:32 compute-2 nova_compute[226829]: 2026-01-31 08:13:32.401 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Updating instance_info_cache with network_info: [{"id": "a2ea5794-4bd8-4ecd-84df-7b5500101840", "address": "fa:16:3e:42:3b:a1", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ea5794-4b", "ovs_interfaceid": "a2ea5794-4bd8-4ecd-84df-7b5500101840", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:13:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:32.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:32.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:32 compute-2 nova_compute[226829]: 2026-01-31 08:13:32.785 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-4e4e24bf-e5fe-4be2-9d89-52432f07cca0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:13:32 compute-2 nova_compute[226829]: 2026-01-31 08:13:32.786 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:13:32 compute-2 nova_compute[226829]: 2026-01-31 08:13:32.786 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:13:32 compute-2 nova_compute[226829]: 2026-01-31 08:13:32.787 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:13:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1462951967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:34 compute-2 nova_compute[226829]: 2026-01-31 08:13:34.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:13:34 compute-2 nova_compute[226829]: 2026-01-31 08:13:34.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:13:34 compute-2 nova_compute[226829]: 2026-01-31 08:13:34.562 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:34 compute-2 nova_compute[226829]: 2026-01-31 08:13:34.563 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:34 compute-2 nova_compute[226829]: 2026-01-31 08:13:34.563 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:34 compute-2 nova_compute[226829]: 2026-01-31 08:13:34.564 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:13:34 compute-2 nova_compute[226829]: 2026-01-31 08:13:34.565 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:34.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:34.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:13:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:13:34 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3750999947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:35 compute-2 nova_compute[226829]: 2026-01-31 08:13:35.004 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:35 compute-2 nova_compute[226829]: 2026-01-31 08:13:35.357 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:35 compute-2 sudo[284333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:13:35 compute-2 sudo[284333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:35 compute-2 sudo[284333]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:35 compute-2 ceph-mon[77282]: pgmap v2332: 305 pgs: 305 active+clean; 658 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 51 KiB/s wr, 96 op/s
Jan 31 08:13:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3800349255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:35 compute-2 nova_compute[226829]: 2026-01-31 08:13:35.686 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:13:35 compute-2 nova_compute[226829]: 2026-01-31 08:13:35.687 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:13:35 compute-2 nova_compute[226829]: 2026-01-31 08:13:35.693 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:13:35 compute-2 nova_compute[226829]: 2026-01-31 08:13:35.694 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:13:35 compute-2 nova_compute[226829]: 2026-01-31 08:13:35.701 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:13:35 compute-2 nova_compute[226829]: 2026-01-31 08:13:35.702 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:13:35 compute-2 nova_compute[226829]: 2026-01-31 08:13:35.702 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:13:35 compute-2 sudo[284358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:13:35 compute-2 sudo[284358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:35 compute-2 sudo[284358]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:35 compute-2 sudo[284383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:13:35 compute-2 sudo[284383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:35 compute-2 sudo[284383]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:35 compute-2 nova_compute[226829]: 2026-01-31 08:13:35.844 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:35 compute-2 sudo[284409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:13:35 compute-2 sudo[284409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:35 compute-2 nova_compute[226829]: 2026-01-31 08:13:35.967 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:13:35 compute-2 nova_compute[226829]: 2026-01-31 08:13:35.968 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3791MB free_disk=20.694122314453125GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:13:35 compute-2 nova_compute[226829]: 2026-01-31 08:13:35.969 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:35 compute-2 nova_compute[226829]: 2026-01-31 08:13:35.969 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:36 compute-2 sudo[284409]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:36 compute-2 nova_compute[226829]: 2026-01-31 08:13:36.354 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 4e4e24bf-e5fe-4be2-9d89-52432f07cca0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:13:36 compute-2 nova_compute[226829]: 2026-01-31 08:13:36.355 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 1ee317bd-390c-4a22-9e4e-e24189eb499e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:13:36 compute-2 nova_compute[226829]: 2026-01-31 08:13:36.355 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 5a59388d-bade-4df0-9ac0-0022df15ea02 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:13:36 compute-2 nova_compute[226829]: 2026-01-31 08:13:36.355 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:13:36 compute-2 nova_compute[226829]: 2026-01-31 08:13:36.355 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:13:36 compute-2 nova_compute[226829]: 2026-01-31 08:13:36.373 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing inventories for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:13:36 compute-2 nova_compute[226829]: 2026-01-31 08:13:36.396 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating ProviderTree inventory for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:13:36 compute-2 nova_compute[226829]: 2026-01-31 08:13:36.396 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating inventory in ProviderTree for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:13:36 compute-2 nova_compute[226829]: 2026-01-31 08:13:36.413 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing aggregate associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:13:36 compute-2 nova_compute[226829]: 2026-01-31 08:13:36.441 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing trait associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, traits: COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:13:36 compute-2 nova_compute[226829]: 2026-01-31 08:13:36.590 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:36.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:36.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:13:37 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1966568629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:37 compute-2 nova_compute[226829]: 2026-01-31 08:13:37.039 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:37 compute-2 nova_compute[226829]: 2026-01-31 08:13:37.045 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:13:37 compute-2 nova_compute[226829]: 2026-01-31 08:13:37.084 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:13:37 compute-2 podman[284488]: 2026-01-31 08:13:37.185308465 +0000 UTC m=+0.070231931 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:13:37 compute-2 nova_compute[226829]: 2026-01-31 08:13:37.240 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:13:37 compute-2 nova_compute[226829]: 2026-01-31 08:13:37.241 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.272s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3750999947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4192415656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:38 compute-2 ceph-mon[77282]: pgmap v2333: 305 pgs: 305 active+clean; 658 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 41 KiB/s wr, 85 op/s
Jan 31 08:13:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:38.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:13:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:38.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:13:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:13:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1966568629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1064267121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:40 compute-2 ceph-mon[77282]: pgmap v2334: 305 pgs: 305 active+clean; 687 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.3 MiB/s wr, 131 op/s
Jan 31 08:13:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2337270161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:40 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:13:40 compute-2 nova_compute[226829]: 2026-01-31 08:13:40.360 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:40.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:40.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:40 compute-2 nova_compute[226829]: 2026-01-31 08:13:40.846 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:41 compute-2 ceph-mon[77282]: pgmap v2335: 305 pgs: 305 active+clean; 706 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Jan 31 08:13:41 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:13:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:42.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:42.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:43 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:13:43 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:13:43 compute-2 ceph-mon[77282]: pgmap v2336: 305 pgs: 305 active+clean; 708 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.9 MiB/s wr, 114 op/s
Jan 31 08:13:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/629244402' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3296464802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:43 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:13:43 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:13:43 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:13:43 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:13:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3583203475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:44 compute-2 ceph-mon[77282]: pgmap v2337: 305 pgs: 305 active+clean; 718 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 838 KiB/s rd, 2.1 MiB/s wr, 114 op/s
Jan 31 08:13:44 compute-2 nova_compute[226829]: 2026-01-31 08:13:44.694 226833 DEBUG oslo_concurrency.lockutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "966eef30-6a53-4f56-a48f-d9c2b4348db8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:44 compute-2 nova_compute[226829]: 2026-01-31 08:13:44.694 226833 DEBUG oslo_concurrency.lockutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "966eef30-6a53-4f56-a48f-d9c2b4348db8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:44.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:44 compute-2 nova_compute[226829]: 2026-01-31 08:13:44.717 226833 DEBUG nova.compute.manager [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:13:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:44.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:44 compute-2 nova_compute[226829]: 2026-01-31 08:13:44.835 226833 DEBUG oslo_concurrency.lockutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:44 compute-2 nova_compute[226829]: 2026-01-31 08:13:44.836 226833 DEBUG oslo_concurrency.lockutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:44 compute-2 nova_compute[226829]: 2026-01-31 08:13:44.842 226833 DEBUG nova.virt.hardware [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:13:44 compute-2 nova_compute[226829]: 2026-01-31 08:13:44.842 226833 INFO nova.compute.claims [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:13:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:13:44 compute-2 sudo[284514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:13:44 compute-2 sudo[284514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:44 compute-2 sudo[284514]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:45 compute-2 sudo[284539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:13:45 compute-2 sudo[284539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:45 compute-2 sudo[284539]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:45 compute-2 nova_compute[226829]: 2026-01-31 08:13:45.364 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:45 compute-2 nova_compute[226829]: 2026-01-31 08:13:45.380 226833 DEBUG oslo_concurrency.processutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/523617742' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:13:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/523617742' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:13:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:13:45 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1571607756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:45 compute-2 nova_compute[226829]: 2026-01-31 08:13:45.798 226833 DEBUG oslo_concurrency.processutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:45 compute-2 nova_compute[226829]: 2026-01-31 08:13:45.803 226833 DEBUG nova.compute.provider_tree [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:13:45 compute-2 nova_compute[226829]: 2026-01-31 08:13:45.832 226833 DEBUG nova.scheduler.client.report [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:13:45 compute-2 nova_compute[226829]: 2026-01-31 08:13:45.848 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:45 compute-2 nova_compute[226829]: 2026-01-31 08:13:45.866 226833 DEBUG oslo_concurrency.lockutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:45 compute-2 nova_compute[226829]: 2026-01-31 08:13:45.867 226833 DEBUG nova.compute.manager [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:13:45 compute-2 nova_compute[226829]: 2026-01-31 08:13:45.921 226833 DEBUG nova.compute.manager [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:13:45 compute-2 nova_compute[226829]: 2026-01-31 08:13:45.922 226833 DEBUG nova.network.neutron [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:13:45 compute-2 nova_compute[226829]: 2026-01-31 08:13:45.945 226833 INFO nova.virt.libvirt.driver [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:13:45 compute-2 nova_compute[226829]: 2026-01-31 08:13:45.973 226833 DEBUG nova.compute.manager [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:13:46 compute-2 nova_compute[226829]: 2026-01-31 08:13:46.101 226833 DEBUG nova.compute.manager [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:13:46 compute-2 nova_compute[226829]: 2026-01-31 08:13:46.103 226833 DEBUG nova.virt.libvirt.driver [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:13:46 compute-2 nova_compute[226829]: 2026-01-31 08:13:46.103 226833 INFO nova.virt.libvirt.driver [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Creating image(s)
Jan 31 08:13:46 compute-2 nova_compute[226829]: 2026-01-31 08:13:46.131 226833 DEBUG nova.storage.rbd_utils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 966eef30-6a53-4f56-a48f-d9c2b4348db8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:13:46 compute-2 nova_compute[226829]: 2026-01-31 08:13:46.160 226833 DEBUG nova.storage.rbd_utils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 966eef30-6a53-4f56-a48f-d9c2b4348db8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:13:46 compute-2 nova_compute[226829]: 2026-01-31 08:13:46.183 226833 DEBUG nova.storage.rbd_utils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 966eef30-6a53-4f56-a48f-d9c2b4348db8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:13:46 compute-2 nova_compute[226829]: 2026-01-31 08:13:46.187 226833 DEBUG oslo_concurrency.processutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:46 compute-2 nova_compute[226829]: 2026-01-31 08:13:46.243 226833 DEBUG oslo_concurrency.processutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:46 compute-2 nova_compute[226829]: 2026-01-31 08:13:46.244 226833 DEBUG oslo_concurrency.lockutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:46 compute-2 nova_compute[226829]: 2026-01-31 08:13:46.244 226833 DEBUG oslo_concurrency.lockutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:46 compute-2 nova_compute[226829]: 2026-01-31 08:13:46.245 226833 DEBUG oslo_concurrency.lockutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:46 compute-2 nova_compute[226829]: 2026-01-31 08:13:46.268 226833 DEBUG nova.storage.rbd_utils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 966eef30-6a53-4f56-a48f-d9c2b4348db8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:13:46 compute-2 nova_compute[226829]: 2026-01-31 08:13:46.270 226833 DEBUG oslo_concurrency.processutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 966eef30-6a53-4f56-a48f-d9c2b4348db8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:46 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:46.531 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:13:46 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:46.533 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:13:46 compute-2 nova_compute[226829]: 2026-01-31 08:13:46.535 226833 DEBUG nova.policy [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '18aee9d81d404f77ac81cde538f140d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c3ddadeb950a490db5c99da98a32c9ec', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:13:46 compute-2 nova_compute[226829]: 2026-01-31 08:13:46.539 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:46.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:13:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:46.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:13:47 compute-2 ceph-mon[77282]: pgmap v2338: 305 pgs: 305 active+clean; 759 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 675 KiB/s rd, 3.7 MiB/s wr, 119 op/s
Jan 31 08:13:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1571607756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:47 compute-2 nova_compute[226829]: 2026-01-31 08:13:47.239 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:13:47 compute-2 nova_compute[226829]: 2026-01-31 08:13:47.239 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:13:47 compute-2 nova_compute[226829]: 2026-01-31 08:13:47.744 226833 DEBUG oslo_concurrency.processutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 966eef30-6a53-4f56-a48f-d9c2b4348db8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:47 compute-2 nova_compute[226829]: 2026-01-31 08:13:47.817 226833 DEBUG nova.storage.rbd_utils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] resizing rbd image 966eef30-6a53-4f56-a48f-d9c2b4348db8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:13:48 compute-2 ceph-mon[77282]: pgmap v2339: 305 pgs: 305 active+clean; 808 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 754 KiB/s rd, 5.4 MiB/s wr, 136 op/s
Jan 31 08:13:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2869198860' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:48.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:13:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:48.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:13:49 compute-2 nova_compute[226829]: 2026-01-31 08:13:49.015 226833 DEBUG nova.objects.instance [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'migration_context' on Instance uuid 966eef30-6a53-4f56-a48f-d9c2b4348db8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:13:49 compute-2 nova_compute[226829]: 2026-01-31 08:13:49.090 226833 DEBUG nova.virt.libvirt.driver [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:13:49 compute-2 nova_compute[226829]: 2026-01-31 08:13:49.090 226833 DEBUG nova.virt.libvirt.driver [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Ensure instance console log exists: /var/lib/nova/instances/966eef30-6a53-4f56-a48f-d9c2b4348db8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:13:49 compute-2 nova_compute[226829]: 2026-01-31 08:13:49.091 226833 DEBUG oslo_concurrency.lockutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:49 compute-2 nova_compute[226829]: 2026-01-31 08:13:49.091 226833 DEBUG oslo_concurrency.lockutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:49 compute-2 nova_compute[226829]: 2026-01-31 08:13:49.091 226833 DEBUG oslo_concurrency.lockutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:49 compute-2 nova_compute[226829]: 2026-01-31 08:13:49.147 226833 DEBUG nova.network.neutron [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Successfully created port: 28396218-53f3-4ff1-b721-bb503c806019 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:13:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:13:50 compute-2 nova_compute[226829]: 2026-01-31 08:13:50.367 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:50.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:50.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:50 compute-2 nova_compute[226829]: 2026-01-31 08:13:50.850 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4047450471' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:51 compute-2 ceph-mon[77282]: pgmap v2340: 305 pgs: 305 active+clean; 834 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 5.4 MiB/s wr, 111 op/s
Jan 31 08:13:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1537017837' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:51.537 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:13:51 compute-2 nova_compute[226829]: 2026-01-31 08:13:51.870 226833 DEBUG nova.network.neutron [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Successfully updated port: 28396218-53f3-4ff1-b721-bb503c806019 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:13:52 compute-2 nova_compute[226829]: 2026-01-31 08:13:52.063 226833 DEBUG oslo_concurrency.lockutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "refresh_cache-966eef30-6a53-4f56-a48f-d9c2b4348db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:13:52 compute-2 nova_compute[226829]: 2026-01-31 08:13:52.064 226833 DEBUG oslo_concurrency.lockutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquired lock "refresh_cache-966eef30-6a53-4f56-a48f-d9c2b4348db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:13:52 compute-2 nova_compute[226829]: 2026-01-31 08:13:52.064 226833 DEBUG nova.network.neutron [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:13:52 compute-2 nova_compute[226829]: 2026-01-31 08:13:52.069 226833 DEBUG nova.compute.manager [req-d69e4fde-cf46-40b8-8970-460ac9a01f7f req-8a7d0a53-647c-4426-884a-3130deaa37d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Received event network-changed-28396218-53f3-4ff1-b721-bb503c806019 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:13:52 compute-2 nova_compute[226829]: 2026-01-31 08:13:52.070 226833 DEBUG nova.compute.manager [req-d69e4fde-cf46-40b8-8970-460ac9a01f7f req-8a7d0a53-647c-4426-884a-3130deaa37d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Refreshing instance network info cache due to event network-changed-28396218-53f3-4ff1-b721-bb503c806019. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:13:52 compute-2 nova_compute[226829]: 2026-01-31 08:13:52.070 226833 DEBUG oslo_concurrency.lockutils [req-d69e4fde-cf46-40b8-8970-460ac9a01f7f req-8a7d0a53-647c-4426-884a-3130deaa37d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-966eef30-6a53-4f56-a48f-d9c2b4348db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:13:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2975001044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:52 compute-2 ceph-mon[77282]: pgmap v2341: 305 pgs: 305 active+clean; 847 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 5.3 MiB/s wr, 142 op/s
Jan 31 08:13:52 compute-2 nova_compute[226829]: 2026-01-31 08:13:52.576 226833 DEBUG nova.network.neutron [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:13:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:52.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:52.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:54 compute-2 ceph-mon[77282]: pgmap v2342: 305 pgs: 305 active+clean; 847 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 156 op/s
Jan 31 08:13:54 compute-2 nova_compute[226829]: 2026-01-31 08:13:54.339 226833 DEBUG nova.network.neutron [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Updating instance_info_cache with network_info: [{"id": "28396218-53f3-4ff1-b721-bb503c806019", "address": "fa:16:3e:b5:cd:6d", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28396218-53", "ovs_interfaceid": "28396218-53f3-4ff1-b721-bb503c806019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:13:54 compute-2 nova_compute[226829]: 2026-01-31 08:13:54.478 226833 DEBUG oslo_concurrency.lockutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Releasing lock "refresh_cache-966eef30-6a53-4f56-a48f-d9c2b4348db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:13:54 compute-2 nova_compute[226829]: 2026-01-31 08:13:54.478 226833 DEBUG nova.compute.manager [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Instance network_info: |[{"id": "28396218-53f3-4ff1-b721-bb503c806019", "address": "fa:16:3e:b5:cd:6d", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28396218-53", "ovs_interfaceid": "28396218-53f3-4ff1-b721-bb503c806019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:13:54 compute-2 nova_compute[226829]: 2026-01-31 08:13:54.478 226833 DEBUG oslo_concurrency.lockutils [req-d69e4fde-cf46-40b8-8970-460ac9a01f7f req-8a7d0a53-647c-4426-884a-3130deaa37d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-966eef30-6a53-4f56-a48f-d9c2b4348db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:13:54 compute-2 nova_compute[226829]: 2026-01-31 08:13:54.478 226833 DEBUG nova.network.neutron [req-d69e4fde-cf46-40b8-8970-460ac9a01f7f req-8a7d0a53-647c-4426-884a-3130deaa37d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Refreshing network info cache for port 28396218-53f3-4ff1-b721-bb503c806019 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:13:54 compute-2 nova_compute[226829]: 2026-01-31 08:13:54.481 226833 DEBUG nova.virt.libvirt.driver [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Start _get_guest_xml network_info=[{"id": "28396218-53f3-4ff1-b721-bb503c806019", "address": "fa:16:3e:b5:cd:6d", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28396218-53", "ovs_interfaceid": "28396218-53f3-4ff1-b721-bb503c806019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:13:54 compute-2 nova_compute[226829]: 2026-01-31 08:13:54.484 226833 WARNING nova.virt.libvirt.driver [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:13:54 compute-2 nova_compute[226829]: 2026-01-31 08:13:54.489 226833 DEBUG nova.virt.libvirt.host [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:13:54 compute-2 nova_compute[226829]: 2026-01-31 08:13:54.490 226833 DEBUG nova.virt.libvirt.host [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:13:54 compute-2 nova_compute[226829]: 2026-01-31 08:13:54.493 226833 DEBUG nova.virt.libvirt.host [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:13:54 compute-2 nova_compute[226829]: 2026-01-31 08:13:54.493 226833 DEBUG nova.virt.libvirt.host [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:13:54 compute-2 nova_compute[226829]: 2026-01-31 08:13:54.494 226833 DEBUG nova.virt.libvirt.driver [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:13:54 compute-2 nova_compute[226829]: 2026-01-31 08:13:54.494 226833 DEBUG nova.virt.hardware [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:13:54 compute-2 nova_compute[226829]: 2026-01-31 08:13:54.495 226833 DEBUG nova.virt.hardware [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:13:54 compute-2 nova_compute[226829]: 2026-01-31 08:13:54.495 226833 DEBUG nova.virt.hardware [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:13:54 compute-2 nova_compute[226829]: 2026-01-31 08:13:54.495 226833 DEBUG nova.virt.hardware [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:13:54 compute-2 nova_compute[226829]: 2026-01-31 08:13:54.496 226833 DEBUG nova.virt.hardware [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:13:54 compute-2 nova_compute[226829]: 2026-01-31 08:13:54.496 226833 DEBUG nova.virt.hardware [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:13:54 compute-2 nova_compute[226829]: 2026-01-31 08:13:54.496 226833 DEBUG nova.virt.hardware [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:13:54 compute-2 nova_compute[226829]: 2026-01-31 08:13:54.496 226833 DEBUG nova.virt.hardware [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:13:54 compute-2 nova_compute[226829]: 2026-01-31 08:13:54.496 226833 DEBUG nova.virt.hardware [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:13:54 compute-2 nova_compute[226829]: 2026-01-31 08:13:54.497 226833 DEBUG nova.virt.hardware [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:13:54 compute-2 nova_compute[226829]: 2026-01-31 08:13:54.497 226833 DEBUG nova.virt.hardware [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:13:54 compute-2 nova_compute[226829]: 2026-01-31 08:13:54.500 226833 DEBUG oslo_concurrency.processutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:54.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:54.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:13:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:13:54 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1459224970' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:54 compute-2 sudo[284777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:13:54 compute-2 sudo[284777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:54 compute-2 sudo[284777]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:54 compute-2 nova_compute[226829]: 2026-01-31 08:13:54.969 226833 DEBUG oslo_concurrency.processutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:54 compute-2 nova_compute[226829]: 2026-01-31 08:13:54.996 226833 DEBUG nova.storage.rbd_utils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 966eef30-6a53-4f56-a48f-d9c2b4348db8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:13:55 compute-2 sudo[284804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:13:55 compute-2 sudo[284804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:55 compute-2 nova_compute[226829]: 2026-01-31 08:13:55.001 226833 DEBUG oslo_concurrency.processutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:55 compute-2 sudo[284804]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:55 compute-2 nova_compute[226829]: 2026-01-31 08:13:55.370 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:13:55 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1232812821' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:55 compute-2 nova_compute[226829]: 2026-01-31 08:13:55.404 226833 DEBUG oslo_concurrency.processutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:55 compute-2 nova_compute[226829]: 2026-01-31 08:13:55.406 226833 DEBUG nova.virt.libvirt.vif [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1412905283',display_name='tempest-ServerActionsTestOtherB-server-1412905283',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1412905283',id=133,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-xrt1x8j8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActions
TestOtherB-2012907318-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:13:46Z,user_data=None,user_id='18aee9d81d404f77ac81cde538f140d8',uuid=966eef30-6a53-4f56-a48f-d9c2b4348db8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "28396218-53f3-4ff1-b721-bb503c806019", "address": "fa:16:3e:b5:cd:6d", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28396218-53", "ovs_interfaceid": "28396218-53f3-4ff1-b721-bb503c806019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:13:55 compute-2 nova_compute[226829]: 2026-01-31 08:13:55.406 226833 DEBUG nova.network.os_vif_util [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "28396218-53f3-4ff1-b721-bb503c806019", "address": "fa:16:3e:b5:cd:6d", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28396218-53", "ovs_interfaceid": "28396218-53f3-4ff1-b721-bb503c806019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:13:55 compute-2 nova_compute[226829]: 2026-01-31 08:13:55.407 226833 DEBUG nova.network.os_vif_util [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:cd:6d,bridge_name='br-int',has_traffic_filtering=True,id=28396218-53f3-4ff1-b721-bb503c806019,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28396218-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:13:55 compute-2 nova_compute[226829]: 2026-01-31 08:13:55.408 226833 DEBUG nova.objects.instance [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'pci_devices' on Instance uuid 966eef30-6a53-4f56-a48f-d9c2b4348db8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:13:55 compute-2 nova_compute[226829]: 2026-01-31 08:13:55.576 226833 DEBUG nova.virt.libvirt.driver [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:13:55 compute-2 nova_compute[226829]:   <uuid>966eef30-6a53-4f56-a48f-d9c2b4348db8</uuid>
Jan 31 08:13:55 compute-2 nova_compute[226829]:   <name>instance-00000085</name>
Jan 31 08:13:55 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:13:55 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:13:55 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <nova:name>tempest-ServerActionsTestOtherB-server-1412905283</nova:name>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:13:54</nova:creationTime>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:13:55 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:13:55 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:13:55 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:13:55 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:13:55 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:13:55 compute-2 nova_compute[226829]:         <nova:user uuid="18aee9d81d404f77ac81cde538f140d8">tempest-ServerActionsTestOtherB-2012907318-project-member</nova:user>
Jan 31 08:13:55 compute-2 nova_compute[226829]:         <nova:project uuid="c3ddadeb950a490db5c99da98a32c9ec">tempest-ServerActionsTestOtherB-2012907318</nova:project>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:13:55 compute-2 nova_compute[226829]:         <nova:port uuid="28396218-53f3-4ff1-b721-bb503c806019">
Jan 31 08:13:55 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:13:55 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:13:55 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <system>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <entry name="serial">966eef30-6a53-4f56-a48f-d9c2b4348db8</entry>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <entry name="uuid">966eef30-6a53-4f56-a48f-d9c2b4348db8</entry>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     </system>
Jan 31 08:13:55 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:13:55 compute-2 nova_compute[226829]:   <os>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:   </os>
Jan 31 08:13:55 compute-2 nova_compute[226829]:   <features>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:   </features>
Jan 31 08:13:55 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:13:55 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:13:55 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/966eef30-6a53-4f56-a48f-d9c2b4348db8_disk">
Jan 31 08:13:55 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       </source>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:13:55 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/966eef30-6a53-4f56-a48f-d9c2b4348db8_disk.config">
Jan 31 08:13:55 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       </source>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:13:55 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:b5:cd:6d"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <target dev="tap28396218-53"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/966eef30-6a53-4f56-a48f-d9c2b4348db8/console.log" append="off"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <video>
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     </video>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:13:55 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:13:55 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:13:55 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:13:55 compute-2 nova_compute[226829]: </domain>
Jan 31 08:13:55 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:13:55 compute-2 nova_compute[226829]: 2026-01-31 08:13:55.577 226833 DEBUG nova.compute.manager [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Preparing to wait for external event network-vif-plugged-28396218-53f3-4ff1-b721-bb503c806019 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:13:55 compute-2 nova_compute[226829]: 2026-01-31 08:13:55.578 226833 DEBUG oslo_concurrency.lockutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "966eef30-6a53-4f56-a48f-d9c2b4348db8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:55 compute-2 nova_compute[226829]: 2026-01-31 08:13:55.578 226833 DEBUG oslo_concurrency.lockutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "966eef30-6a53-4f56-a48f-d9c2b4348db8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:55 compute-2 nova_compute[226829]: 2026-01-31 08:13:55.579 226833 DEBUG oslo_concurrency.lockutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "966eef30-6a53-4f56-a48f-d9c2b4348db8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:55 compute-2 nova_compute[226829]: 2026-01-31 08:13:55.579 226833 DEBUG nova.virt.libvirt.vif [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1412905283',display_name='tempest-ServerActionsTestOtherB-server-1412905283',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1412905283',id=133,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-xrt1x8j8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:13:46Z,user_data=None,user_id='18aee9d81d404f77ac81cde538f140d8',uuid=966eef30-6a53-4f56-a48f-d9c2b4348db8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "28396218-53f3-4ff1-b721-bb503c806019", "address": "fa:16:3e:b5:cd:6d", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28396218-53", "ovs_interfaceid": "28396218-53f3-4ff1-b721-bb503c806019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:13:55 compute-2 nova_compute[226829]: 2026-01-31 08:13:55.580 226833 DEBUG nova.network.os_vif_util [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "28396218-53f3-4ff1-b721-bb503c806019", "address": "fa:16:3e:b5:cd:6d", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28396218-53", "ovs_interfaceid": "28396218-53f3-4ff1-b721-bb503c806019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:13:55 compute-2 nova_compute[226829]: 2026-01-31 08:13:55.581 226833 DEBUG nova.network.os_vif_util [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:cd:6d,bridge_name='br-int',has_traffic_filtering=True,id=28396218-53f3-4ff1-b721-bb503c806019,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28396218-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:13:55 compute-2 nova_compute[226829]: 2026-01-31 08:13:55.581 226833 DEBUG os_vif [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:cd:6d,bridge_name='br-int',has_traffic_filtering=True,id=28396218-53f3-4ff1-b721-bb503c806019,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28396218-53') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:13:55 compute-2 nova_compute[226829]: 2026-01-31 08:13:55.582 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:55 compute-2 nova_compute[226829]: 2026-01-31 08:13:55.582 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:13:55 compute-2 nova_compute[226829]: 2026-01-31 08:13:55.583 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:13:55 compute-2 nova_compute[226829]: 2026-01-31 08:13:55.588 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:55 compute-2 nova_compute[226829]: 2026-01-31 08:13:55.589 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28396218-53, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:13:55 compute-2 nova_compute[226829]: 2026-01-31 08:13:55.589 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap28396218-53, col_values=(('external_ids', {'iface-id': '28396218-53f3-4ff1-b721-bb503c806019', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b5:cd:6d', 'vm-uuid': '966eef30-6a53-4f56-a48f-d9c2b4348db8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:13:55 compute-2 nova_compute[226829]: 2026-01-31 08:13:55.591 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:55 compute-2 NetworkManager[48999]: <info>  [1769847235.5924] manager: (tap28396218-53): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/264)
Jan 31 08:13:55 compute-2 nova_compute[226829]: 2026-01-31 08:13:55.593 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:13:55 compute-2 nova_compute[226829]: 2026-01-31 08:13:55.597 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:55 compute-2 nova_compute[226829]: 2026-01-31 08:13:55.598 226833 INFO os_vif [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:cd:6d,bridge_name='br-int',has_traffic_filtering=True,id=28396218-53f3-4ff1-b721-bb503c806019,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28396218-53')
Jan 31 08:13:55 compute-2 nova_compute[226829]: 2026-01-31 08:13:55.852 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:55 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:13:55 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:13:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1459224970' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1232812821' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:56 compute-2 nova_compute[226829]: 2026-01-31 08:13:56.551 226833 DEBUG nova.virt.libvirt.driver [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:13:56 compute-2 nova_compute[226829]: 2026-01-31 08:13:56.551 226833 DEBUG nova.virt.libvirt.driver [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:13:56 compute-2 nova_compute[226829]: 2026-01-31 08:13:56.552 226833 DEBUG nova.virt.libvirt.driver [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No VIF found with MAC fa:16:3e:b5:cd:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:13:56 compute-2 nova_compute[226829]: 2026-01-31 08:13:56.552 226833 INFO nova.virt.libvirt.driver [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Using config drive
Jan 31 08:13:56 compute-2 nova_compute[226829]: 2026-01-31 08:13:56.573 226833 DEBUG nova.storage.rbd_utils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 966eef30-6a53-4f56-a48f-d9c2b4348db8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:13:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:13:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:56.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:13:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:56.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:57 compute-2 podman[284891]: 2026-01-31 08:13:57.216950742 +0000 UTC m=+0.090282057 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller)
Jan 31 08:13:57 compute-2 ceph-mon[77282]: pgmap v2343: 305 pgs: 305 active+clean; 847 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.1 MiB/s wr, 149 op/s
Jan 31 08:13:57 compute-2 nova_compute[226829]: 2026-01-31 08:13:57.654 226833 INFO nova.virt.libvirt.driver [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Creating config drive at /var/lib/nova/instances/966eef30-6a53-4f56-a48f-d9c2b4348db8/disk.config
Jan 31 08:13:57 compute-2 nova_compute[226829]: 2026-01-31 08:13:57.658 226833 DEBUG oslo_concurrency.processutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/966eef30-6a53-4f56-a48f-d9c2b4348db8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmps11sfgg_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:57 compute-2 nova_compute[226829]: 2026-01-31 08:13:57.783 226833 DEBUG oslo_concurrency.processutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/966eef30-6a53-4f56-a48f-d9c2b4348db8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmps11sfgg_" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:57 compute-2 nova_compute[226829]: 2026-01-31 08:13:57.810 226833 DEBUG nova.storage.rbd_utils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 966eef30-6a53-4f56-a48f-d9c2b4348db8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:13:57 compute-2 nova_compute[226829]: 2026-01-31 08:13:57.814 226833 DEBUG oslo_concurrency.processutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/966eef30-6a53-4f56-a48f-d9c2b4348db8/disk.config 966eef30-6a53-4f56-a48f-d9c2b4348db8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:58 compute-2 nova_compute[226829]: 2026-01-31 08:13:58.534 226833 DEBUG oslo_concurrency.processutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/966eef30-6a53-4f56-a48f-d9c2b4348db8/disk.config 966eef30-6a53-4f56-a48f-d9c2b4348db8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.720s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:58 compute-2 nova_compute[226829]: 2026-01-31 08:13:58.536 226833 INFO nova.virt.libvirt.driver [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Deleting local config drive /var/lib/nova/instances/966eef30-6a53-4f56-a48f-d9c2b4348db8/disk.config because it was imported into RBD.
Jan 31 08:13:58 compute-2 kernel: tap28396218-53: entered promiscuous mode
Jan 31 08:13:58 compute-2 NetworkManager[48999]: <info>  [1769847238.5776] manager: (tap28396218-53): new Tun device (/org/freedesktop/NetworkManager/Devices/265)
Jan 31 08:13:58 compute-2 ovn_controller[133834]: 2026-01-31T08:13:58Z|00540|binding|INFO|Claiming lport 28396218-53f3-4ff1-b721-bb503c806019 for this chassis.
Jan 31 08:13:58 compute-2 ovn_controller[133834]: 2026-01-31T08:13:58Z|00541|binding|INFO|28396218-53f3-4ff1-b721-bb503c806019: Claiming fa:16:3e:b5:cd:6d 10.100.0.5
Jan 31 08:13:58 compute-2 nova_compute[226829]: 2026-01-31 08:13:58.579 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:58 compute-2 nova_compute[226829]: 2026-01-31 08:13:58.586 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:58 compute-2 ovn_controller[133834]: 2026-01-31T08:13:58Z|00542|binding|INFO|Setting lport 28396218-53f3-4ff1-b721-bb503c806019 ovn-installed in OVS
Jan 31 08:13:58 compute-2 nova_compute[226829]: 2026-01-31 08:13:58.588 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:58 compute-2 ceph-mon[77282]: pgmap v2344: 305 pgs: 305 active+clean; 847 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.4 MiB/s wr, 176 op/s
Jan 31 08:13:58 compute-2 systemd-machined[195142]: New machine qemu-59-instance-00000085.
Jan 31 08:13:58 compute-2 systemd[1]: Started Virtual Machine qemu-59-instance-00000085.
Jan 31 08:13:58 compute-2 systemd-udevd[284972]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:13:58 compute-2 NetworkManager[48999]: <info>  [1769847238.6499] device (tap28396218-53): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:13:58 compute-2 NetworkManager[48999]: <info>  [1769847238.6505] device (tap28396218-53): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:13:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:58.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:58 compute-2 ovn_controller[133834]: 2026-01-31T08:13:58Z|00543|binding|INFO|Setting lport 28396218-53f3-4ff1-b721-bb503c806019 up in Southbound
Jan 31 08:13:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:58.737 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:cd:6d 10.100.0.5'], port_security=['fa:16:3e:b5:cd:6d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '966eef30-6a53-4f56-a48f-d9c2b4348db8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3ddadeb950a490db5c99da98a32c9ec', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cef7bb84-6ec0-48b7-8775-111e40762c53', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17e596e7-33b3-44a6-9cbf-f9eacfd974b4, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=28396218-53f3-4ff1-b721-bb503c806019) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:13:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:58.738 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 28396218-53f3-4ff1-b721-bb503c806019 in datapath e8014d6b-23e1-41ef-b5e2-3d770d302e72 bound to our chassis
Jan 31 08:13:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:58.740 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e8014d6b-23e1-41ef-b5e2-3d770d302e72
Jan 31 08:13:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:13:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:58.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:58.756 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[65e0dd89-d9c5-48ce-96b6-3e651f29c4fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:58.781 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[4a383420-b095-480e-8541-ee7bd5f5d126]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:58.784 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[5617892a-069f-486d-a90f-71f06d4edf3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:58.808 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[7a26c276-df6b-455c-b10a-5a05f8a4852c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:58.825 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b5d765dc-e92e-41e3-a5e2-d9ee355e6495]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape8014d6b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:c1:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 162], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 743140, 'reachable_time': 38144, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284986, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:58.839 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[973b0ad6-d03e-47bf-b307-e042d681a932]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape8014d6b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 743147, 'tstamp': 743147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284987, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape8014d6b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 743149, 'tstamp': 743149}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284987, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:58.841 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8014d6b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:13:58 compute-2 nova_compute[226829]: 2026-01-31 08:13:58.842 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:58 compute-2 nova_compute[226829]: 2026-01-31 08:13:58.844 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:58.844 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape8014d6b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:13:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:58.844 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:13:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:58.845 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape8014d6b-20, col_values=(('external_ids', {'iface-id': '4bb3ff19-f70b-4c8d-a829-66ff18233b61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:13:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:13:58.845 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:13:58 compute-2 nova_compute[226829]: 2026-01-31 08:13:58.941 226833 DEBUG nova.network.neutron [req-d69e4fde-cf46-40b8-8970-460ac9a01f7f req-8a7d0a53-647c-4426-884a-3130deaa37d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Updated VIF entry in instance network info cache for port 28396218-53f3-4ff1-b721-bb503c806019. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:13:58 compute-2 nova_compute[226829]: 2026-01-31 08:13:58.942 226833 DEBUG nova.network.neutron [req-d69e4fde-cf46-40b8-8970-460ac9a01f7f req-8a7d0a53-647c-4426-884a-3130deaa37d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Updating instance_info_cache with network_info: [{"id": "28396218-53f3-4ff1-b721-bb503c806019", "address": "fa:16:3e:b5:cd:6d", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28396218-53", "ovs_interfaceid": "28396218-53f3-4ff1-b721-bb503c806019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:13:59 compute-2 nova_compute[226829]: 2026-01-31 08:13:59.029 226833 DEBUG oslo_concurrency.lockutils [req-d69e4fde-cf46-40b8-8970-460ac9a01f7f req-8a7d0a53-647c-4426-884a-3130deaa37d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-966eef30-6a53-4f56-a48f-d9c2b4348db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:13:59 compute-2 nova_compute[226829]: 2026-01-31 08:13:59.275 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847239.2746277, 966eef30-6a53-4f56-a48f-d9c2b4348db8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:13:59 compute-2 nova_compute[226829]: 2026-01-31 08:13:59.275 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] VM Started (Lifecycle Event)
Jan 31 08:13:59 compute-2 nova_compute[226829]: 2026-01-31 08:13:59.472 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:13:59 compute-2 nova_compute[226829]: 2026-01-31 08:13:59.475 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847239.2749665, 966eef30-6a53-4f56-a48f-d9c2b4348db8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:13:59 compute-2 nova_compute[226829]: 2026-01-31 08:13:59.475 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] VM Paused (Lifecycle Event)
Jan 31 08:13:59 compute-2 nova_compute[226829]: 2026-01-31 08:13:59.589 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:13:59 compute-2 nova_compute[226829]: 2026-01-31 08:13:59.594 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:13:59 compute-2 nova_compute[226829]: 2026-01-31 08:13:59.667 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:13:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:13:59 compute-2 nova_compute[226829]: 2026-01-31 08:13:59.920 226833 DEBUG nova.compute.manager [req-16f2cafe-d6b0-4ade-b76b-b746189ee490 req-d391f726-2ba0-4ca4-9b67-67c07a941fde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Received event network-vif-plugged-28396218-53f3-4ff1-b721-bb503c806019 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:13:59 compute-2 nova_compute[226829]: 2026-01-31 08:13:59.920 226833 DEBUG oslo_concurrency.lockutils [req-16f2cafe-d6b0-4ade-b76b-b746189ee490 req-d391f726-2ba0-4ca4-9b67-67c07a941fde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "966eef30-6a53-4f56-a48f-d9c2b4348db8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:59 compute-2 nova_compute[226829]: 2026-01-31 08:13:59.921 226833 DEBUG oslo_concurrency.lockutils [req-16f2cafe-d6b0-4ade-b76b-b746189ee490 req-d391f726-2ba0-4ca4-9b67-67c07a941fde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "966eef30-6a53-4f56-a48f-d9c2b4348db8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:59 compute-2 nova_compute[226829]: 2026-01-31 08:13:59.921 226833 DEBUG oslo_concurrency.lockutils [req-16f2cafe-d6b0-4ade-b76b-b746189ee490 req-d391f726-2ba0-4ca4-9b67-67c07a941fde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "966eef30-6a53-4f56-a48f-d9c2b4348db8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:59 compute-2 nova_compute[226829]: 2026-01-31 08:13:59.921 226833 DEBUG nova.compute.manager [req-16f2cafe-d6b0-4ade-b76b-b746189ee490 req-d391f726-2ba0-4ca4-9b67-67c07a941fde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Processing event network-vif-plugged-28396218-53f3-4ff1-b721-bb503c806019 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:13:59 compute-2 nova_compute[226829]: 2026-01-31 08:13:59.922 226833 DEBUG nova.compute.manager [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:13:59 compute-2 nova_compute[226829]: 2026-01-31 08:13:59.925 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847239.925088, 966eef30-6a53-4f56-a48f-d9c2b4348db8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:13:59 compute-2 nova_compute[226829]: 2026-01-31 08:13:59.925 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] VM Resumed (Lifecycle Event)
Jan 31 08:13:59 compute-2 nova_compute[226829]: 2026-01-31 08:13:59.927 226833 DEBUG nova.virt.libvirt.driver [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:13:59 compute-2 nova_compute[226829]: 2026-01-31 08:13:59.930 226833 INFO nova.virt.libvirt.driver [-] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Instance spawned successfully.
Jan 31 08:13:59 compute-2 nova_compute[226829]: 2026-01-31 08:13:59.931 226833 DEBUG nova.virt.libvirt.driver [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:14:00 compute-2 nova_compute[226829]: 2026-01-31 08:14:00.058 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:14:00 compute-2 nova_compute[226829]: 2026-01-31 08:14:00.063 226833 DEBUG nova.virt.libvirt.driver [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:14:00 compute-2 nova_compute[226829]: 2026-01-31 08:14:00.063 226833 DEBUG nova.virt.libvirt.driver [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:14:00 compute-2 nova_compute[226829]: 2026-01-31 08:14:00.064 226833 DEBUG nova.virt.libvirt.driver [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:14:00 compute-2 nova_compute[226829]: 2026-01-31 08:14:00.064 226833 DEBUG nova.virt.libvirt.driver [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:14:00 compute-2 nova_compute[226829]: 2026-01-31 08:14:00.065 226833 DEBUG nova.virt.libvirt.driver [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:14:00 compute-2 nova_compute[226829]: 2026-01-31 08:14:00.065 226833 DEBUG nova.virt.libvirt.driver [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:14:00 compute-2 nova_compute[226829]: 2026-01-31 08:14:00.069 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:14:00 compute-2 nova_compute[226829]: 2026-01-31 08:14:00.259 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:14:00 compute-2 nova_compute[226829]: 2026-01-31 08:14:00.358 226833 INFO nova.compute.manager [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Took 14.26 seconds to spawn the instance on the hypervisor.
Jan 31 08:14:00 compute-2 nova_compute[226829]: 2026-01-31 08:14:00.359 226833 DEBUG nova.compute.manager [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:14:00 compute-2 ceph-mon[77282]: pgmap v2345: 305 pgs: 305 active+clean; 847 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 1.8 MiB/s wr, 203 op/s
Jan 31 08:14:00 compute-2 nova_compute[226829]: 2026-01-31 08:14:00.592 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:00.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:00.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:00 compute-2 nova_compute[226829]: 2026-01-31 08:14:00.791 226833 INFO nova.compute.manager [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Took 15.99 seconds to build instance.
Jan 31 08:14:00 compute-2 nova_compute[226829]: 2026-01-31 08:14:00.854 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:00 compute-2 nova_compute[226829]: 2026-01-31 08:14:00.925 226833 DEBUG oslo_concurrency.lockutils [None req-8010626d-0c1f-4d6f-b89e-a802f80dc44e 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "966eef30-6a53-4f56-a48f-d9c2b4348db8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.231s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:02 compute-2 nova_compute[226829]: 2026-01-31 08:14:02.174 226833 DEBUG nova.compute.manager [req-5ac9a5fc-9335-470b-bea0-527bab6a409b req-0cf4e03f-5cab-41f5-acb1-c884ef6273c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Received event network-vif-plugged-28396218-53f3-4ff1-b721-bb503c806019 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:14:02 compute-2 nova_compute[226829]: 2026-01-31 08:14:02.177 226833 DEBUG oslo_concurrency.lockutils [req-5ac9a5fc-9335-470b-bea0-527bab6a409b req-0cf4e03f-5cab-41f5-acb1-c884ef6273c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "966eef30-6a53-4f56-a48f-d9c2b4348db8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:02 compute-2 nova_compute[226829]: 2026-01-31 08:14:02.177 226833 DEBUG oslo_concurrency.lockutils [req-5ac9a5fc-9335-470b-bea0-527bab6a409b req-0cf4e03f-5cab-41f5-acb1-c884ef6273c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "966eef30-6a53-4f56-a48f-d9c2b4348db8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:02 compute-2 nova_compute[226829]: 2026-01-31 08:14:02.178 226833 DEBUG oslo_concurrency.lockutils [req-5ac9a5fc-9335-470b-bea0-527bab6a409b req-0cf4e03f-5cab-41f5-acb1-c884ef6273c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "966eef30-6a53-4f56-a48f-d9c2b4348db8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:02 compute-2 nova_compute[226829]: 2026-01-31 08:14:02.178 226833 DEBUG nova.compute.manager [req-5ac9a5fc-9335-470b-bea0-527bab6a409b req-0cf4e03f-5cab-41f5-acb1-c884ef6273c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] No waiting events found dispatching network-vif-plugged-28396218-53f3-4ff1-b721-bb503c806019 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:14:02 compute-2 nova_compute[226829]: 2026-01-31 08:14:02.178 226833 WARNING nova.compute.manager [req-5ac9a5fc-9335-470b-bea0-527bab6a409b req-0cf4e03f-5cab-41f5-acb1-c884ef6273c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Received unexpected event network-vif-plugged-28396218-53f3-4ff1-b721-bb503c806019 for instance with vm_state active and task_state None.
Jan 31 08:14:02 compute-2 ceph-mon[77282]: pgmap v2346: 305 pgs: 305 active+clean; 848 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 526 KiB/s wr, 216 op/s
Jan 31 08:14:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:02.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:14:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:02.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:14:03 compute-2 nova_compute[226829]: 2026-01-31 08:14:03.185 226833 INFO nova.compute.manager [None req-68d16184-54b1-4e4e-894f-e8e3dd973aca 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Pausing
Jan 31 08:14:03 compute-2 nova_compute[226829]: 2026-01-31 08:14:03.186 226833 DEBUG nova.objects.instance [None req-68d16184-54b1-4e4e-894f-e8e3dd973aca 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'flavor' on Instance uuid 966eef30-6a53-4f56-a48f-d9c2b4348db8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:14:03 compute-2 nova_compute[226829]: 2026-01-31 08:14:03.262 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847243.2625191, 966eef30-6a53-4f56-a48f-d9c2b4348db8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:14:03 compute-2 nova_compute[226829]: 2026-01-31 08:14:03.263 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] VM Paused (Lifecycle Event)
Jan 31 08:14:03 compute-2 nova_compute[226829]: 2026-01-31 08:14:03.264 226833 DEBUG nova.compute.manager [None req-68d16184-54b1-4e4e-894f-e8e3dd973aca 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:14:03 compute-2 nova_compute[226829]: 2026-01-31 08:14:03.379 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:14:03 compute-2 nova_compute[226829]: 2026-01-31 08:14:03.382 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:14:03 compute-2 nova_compute[226829]: 2026-01-31 08:14:03.477 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] During sync_power_state the instance has a pending task (pausing). Skip.
Jan 31 08:14:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e294 e294: 3 total, 3 up, 3 in
Jan 31 08:14:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:14:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:04.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:14:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:14:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:04.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:14:04 compute-2 ceph-mon[77282]: pgmap v2347: 305 pgs: 305 active+clean; 859 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.1 MiB/s wr, 196 op/s
Jan 31 08:14:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:14:05 compute-2 sudo[285033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:14:05 compute-2 sudo[285033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:05 compute-2 sudo[285033]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:05 compute-2 sudo[285058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:14:05 compute-2 sudo[285058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:05 compute-2 sudo[285058]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:05 compute-2 nova_compute[226829]: 2026-01-31 08:14:05.653 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:05 compute-2 nova_compute[226829]: 2026-01-31 08:14:05.856 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:06 compute-2 ceph-mon[77282]: osdmap e294: 3 total, 3 up, 3 in
Jan 31 08:14:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:06.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:06.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:06.887 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:06.889 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:06.890 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:07 compute-2 nova_compute[226829]: 2026-01-31 08:14:07.278 226833 DEBUG oslo_concurrency.lockutils [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "966eef30-6a53-4f56-a48f-d9c2b4348db8" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:07 compute-2 nova_compute[226829]: 2026-01-31 08:14:07.279 226833 DEBUG oslo_concurrency.lockutils [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "966eef30-6a53-4f56-a48f-d9c2b4348db8" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:07 compute-2 nova_compute[226829]: 2026-01-31 08:14:07.279 226833 INFO nova.compute.manager [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Shelving
Jan 31 08:14:08 compute-2 podman[285085]: 2026-01-31 08:14:08.196618858 +0000 UTC m=+0.072050121 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:14:08 compute-2 kernel: tap28396218-53 (unregistering): left promiscuous mode
Jan 31 08:14:08 compute-2 NetworkManager[48999]: <info>  [1769847248.5811] device (tap28396218-53): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:14:08 compute-2 ovn_controller[133834]: 2026-01-31T08:14:08Z|00544|binding|INFO|Releasing lport 28396218-53f3-4ff1-b721-bb503c806019 from this chassis (sb_readonly=0)
Jan 31 08:14:08 compute-2 ovn_controller[133834]: 2026-01-31T08:14:08Z|00545|binding|INFO|Setting lport 28396218-53f3-4ff1-b721-bb503c806019 down in Southbound
Jan 31 08:14:08 compute-2 nova_compute[226829]: 2026-01-31 08:14:08.593 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:08 compute-2 ovn_controller[133834]: 2026-01-31T08:14:08Z|00546|binding|INFO|Removing iface tap28396218-53 ovn-installed in OVS
Jan 31 08:14:08 compute-2 nova_compute[226829]: 2026-01-31 08:14:08.595 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:08 compute-2 nova_compute[226829]: 2026-01-31 08:14:08.610 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:08.611 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:cd:6d 10.100.0.5'], port_security=['fa:16:3e:b5:cd:6d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '966eef30-6a53-4f56-a48f-d9c2b4348db8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3ddadeb950a490db5c99da98a32c9ec', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cef7bb84-6ec0-48b7-8775-111e40762c53', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17e596e7-33b3-44a6-9cbf-f9eacfd974b4, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=28396218-53f3-4ff1-b721-bb503c806019) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:14:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:08.616 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 28396218-53f3-4ff1-b721-bb503c806019 in datapath e8014d6b-23e1-41ef-b5e2-3d770d302e72 unbound from our chassis
Jan 31 08:14:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:08.621 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e8014d6b-23e1-41ef-b5e2-3d770d302e72
Jan 31 08:14:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:08.640 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[43b118cc-f1b8-43f0-9fe7-e39479780969]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:08 compute-2 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000085.scope: Deactivated successfully.
Jan 31 08:14:08 compute-2 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000085.scope: Consumed 3.886s CPU time.
Jan 31 08:14:08 compute-2 systemd-machined[195142]: Machine qemu-59-instance-00000085 terminated.
Jan 31 08:14:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:08.669 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[5739582a-3bf9-4c3c-9f0b-6e57a4e63cfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:08.674 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[1c51d613-252c-4cbb-94a6-392d87d3fc29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:08.703 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a8628a-1c84-46a1-a553-d219d9e31d4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:08.718 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[57d1562f-c541-4d93-9c6d-d14a243eece0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape8014d6b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:c1:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 162], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 743140, 'reachable_time': 38144, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285115, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:08.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:08 compute-2 nova_compute[226829]: 2026-01-31 08:14:08.728 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:08 compute-2 nova_compute[226829]: 2026-01-31 08:14:08.732 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:08.734 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1529652b-beb0-4d2a-bf1f-75d27cb5cbf1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape8014d6b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 743147, 'tstamp': 743147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285118, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape8014d6b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 743149, 'tstamp': 743149}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285118, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:08.737 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8014d6b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:14:08 compute-2 nova_compute[226829]: 2026-01-31 08:14:08.739 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:08 compute-2 nova_compute[226829]: 2026-01-31 08:14:08.742 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:08.742 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape8014d6b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:14:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:08.743 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:14:08 compute-2 nova_compute[226829]: 2026-01-31 08:14:08.743 226833 INFO nova.virt.libvirt.driver [-] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Instance destroyed successfully.
Jan 31 08:14:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:08.743 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape8014d6b-20, col_values=(('external_ids', {'iface-id': '4bb3ff19-f70b-4c8d-a829-66ff18233b61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:14:08 compute-2 nova_compute[226829]: 2026-01-31 08:14:08.743 226833 DEBUG nova.objects.instance [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'numa_topology' on Instance uuid 966eef30-6a53-4f56-a48f-d9c2b4348db8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:14:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:08.744 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:14:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:08.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:08 compute-2 nova_compute[226829]: 2026-01-31 08:14:08.984 226833 INFO nova.virt.libvirt.driver [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Beginning cold snapshot process
Jan 31 08:14:09 compute-2 ceph-mon[77282]: pgmap v2349: 305 pgs: 305 active+clean; 867 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 2.4 MiB/s wr, 290 op/s
Jan 31 08:14:09 compute-2 nova_compute[226829]: 2026-01-31 08:14:09.448 226833 DEBUG nova.virt.libvirt.imagebackend [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 31 08:14:09 compute-2 nova_compute[226829]: 2026-01-31 08:14:09.902 226833 DEBUG nova.storage.rbd_utils [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] creating snapshot(f4bc63b21fc3416389c6705ad312b4e1) on rbd image(966eef30-6a53-4f56-a48f-d9c2b4348db8_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 08:14:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:14:10 compute-2 ceph-mon[77282]: pgmap v2350: 305 pgs: 305 active+clean; 846 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 2.5 MiB/s wr, 273 op/s
Jan 31 08:14:10 compute-2 ceph-mon[77282]: pgmap v2351: 305 pgs: 305 active+clean; 827 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.5 MiB/s wr, 214 op/s
Jan 31 08:14:10 compute-2 nova_compute[226829]: 2026-01-31 08:14:10.619 226833 DEBUG nova.compute.manager [req-5e6c4e43-6d06-43a3-9505-26b1b5b98e2b req-58df0189-7579-46cd-98ee-d7293a90b9f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Received event network-vif-unplugged-28396218-53f3-4ff1-b721-bb503c806019 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:14:10 compute-2 nova_compute[226829]: 2026-01-31 08:14:10.619 226833 DEBUG oslo_concurrency.lockutils [req-5e6c4e43-6d06-43a3-9505-26b1b5b98e2b req-58df0189-7579-46cd-98ee-d7293a90b9f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "966eef30-6a53-4f56-a48f-d9c2b4348db8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:10 compute-2 nova_compute[226829]: 2026-01-31 08:14:10.619 226833 DEBUG oslo_concurrency.lockutils [req-5e6c4e43-6d06-43a3-9505-26b1b5b98e2b req-58df0189-7579-46cd-98ee-d7293a90b9f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "966eef30-6a53-4f56-a48f-d9c2b4348db8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:10 compute-2 nova_compute[226829]: 2026-01-31 08:14:10.620 226833 DEBUG oslo_concurrency.lockutils [req-5e6c4e43-6d06-43a3-9505-26b1b5b98e2b req-58df0189-7579-46cd-98ee-d7293a90b9f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "966eef30-6a53-4f56-a48f-d9c2b4348db8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:10 compute-2 nova_compute[226829]: 2026-01-31 08:14:10.620 226833 DEBUG nova.compute.manager [req-5e6c4e43-6d06-43a3-9505-26b1b5b98e2b req-58df0189-7579-46cd-98ee-d7293a90b9f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] No waiting events found dispatching network-vif-unplugged-28396218-53f3-4ff1-b721-bb503c806019 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:14:10 compute-2 nova_compute[226829]: 2026-01-31 08:14:10.620 226833 WARNING nova.compute.manager [req-5e6c4e43-6d06-43a3-9505-26b1b5b98e2b req-58df0189-7579-46cd-98ee-d7293a90b9f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Received unexpected event network-vif-unplugged-28396218-53f3-4ff1-b721-bb503c806019 for instance with vm_state paused and task_state shelving_image_uploading.
Jan 31 08:14:10 compute-2 nova_compute[226829]: 2026-01-31 08:14:10.656 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:10.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:10.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:10 compute-2 nova_compute[226829]: 2026-01-31 08:14:10.860 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e295 e295: 3 total, 3 up, 3 in
Jan 31 08:14:12 compute-2 nova_compute[226829]: 2026-01-31 08:14:12.721 226833 DEBUG nova.compute.manager [req-375ec63d-813e-4c92-bb24-f5e979c6d96b req-a6b08b55-5c66-40c4-8298-5aa014040ff0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Received event network-vif-plugged-28396218-53f3-4ff1-b721-bb503c806019 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:14:12 compute-2 nova_compute[226829]: 2026-01-31 08:14:12.721 226833 DEBUG oslo_concurrency.lockutils [req-375ec63d-813e-4c92-bb24-f5e979c6d96b req-a6b08b55-5c66-40c4-8298-5aa014040ff0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "966eef30-6a53-4f56-a48f-d9c2b4348db8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:12 compute-2 nova_compute[226829]: 2026-01-31 08:14:12.722 226833 DEBUG oslo_concurrency.lockutils [req-375ec63d-813e-4c92-bb24-f5e979c6d96b req-a6b08b55-5c66-40c4-8298-5aa014040ff0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "966eef30-6a53-4f56-a48f-d9c2b4348db8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:12 compute-2 nova_compute[226829]: 2026-01-31 08:14:12.722 226833 DEBUG oslo_concurrency.lockutils [req-375ec63d-813e-4c92-bb24-f5e979c6d96b req-a6b08b55-5c66-40c4-8298-5aa014040ff0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "966eef30-6a53-4f56-a48f-d9c2b4348db8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:12 compute-2 nova_compute[226829]: 2026-01-31 08:14:12.722 226833 DEBUG nova.compute.manager [req-375ec63d-813e-4c92-bb24-f5e979c6d96b req-a6b08b55-5c66-40c4-8298-5aa014040ff0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] No waiting events found dispatching network-vif-plugged-28396218-53f3-4ff1-b721-bb503c806019 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:14:12 compute-2 nova_compute[226829]: 2026-01-31 08:14:12.722 226833 WARNING nova.compute.manager [req-375ec63d-813e-4c92-bb24-f5e979c6d96b req-a6b08b55-5c66-40c4-8298-5aa014040ff0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Received unexpected event network-vif-plugged-28396218-53f3-4ff1-b721-bb503c806019 for instance with vm_state paused and task_state shelving_image_uploading.
Jan 31 08:14:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:12.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:12.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:13 compute-2 ceph-mon[77282]: pgmap v2352: 305 pgs: 305 active+clean; 834 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.0 MiB/s wr, 195 op/s
Jan 31 08:14:13 compute-2 ceph-mon[77282]: osdmap e295: 3 total, 3 up, 3 in
Jan 31 08:14:13 compute-2 nova_compute[226829]: 2026-01-31 08:14:13.664 226833 DEBUG nova.storage.rbd_utils [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] cloning vms/966eef30-6a53-4f56-a48f-d9c2b4348db8_disk@f4bc63b21fc3416389c6705ad312b4e1 to images/d7589e47-590e-4196-b561-0595216f339a clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 08:14:14 compute-2 ceph-mon[77282]: pgmap v2354: 305 pgs: 305 active+clean; 859 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.0 MiB/s wr, 122 op/s
Jan 31 08:14:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:14.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:14.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:14 compute-2 nova_compute[226829]: 2026-01-31 08:14:14.792 226833 DEBUG nova.storage.rbd_utils [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] flattening images/d7589e47-590e-4196-b561-0595216f339a flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 31 08:14:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:14:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e296 e296: 3 total, 3 up, 3 in
Jan 31 08:14:15 compute-2 nova_compute[226829]: 2026-01-31 08:14:15.659 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:15 compute-2 nova_compute[226829]: 2026-01-31 08:14:15.862 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:16.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:16.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:17 compute-2 ceph-mon[77282]: osdmap e296: 3 total, 3 up, 3 in
Jan 31 08:14:17 compute-2 ceph-mon[77282]: pgmap v2356: 305 pgs: 305 active+clean; 874 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.4 MiB/s wr, 153 op/s
Jan 31 08:14:17 compute-2 nova_compute[226829]: 2026-01-31 08:14:17.636 226833 DEBUG nova.storage.rbd_utils [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] removing snapshot(f4bc63b21fc3416389c6705ad312b4e1) on rbd image(966eef30-6a53-4f56-a48f-d9c2b4348db8_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 08:14:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e297 e297: 3 total, 3 up, 3 in
Jan 31 08:14:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:18.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:18.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:18 compute-2 ceph-mon[77282]: pgmap v2357: 305 pgs: 305 active+clean; 931 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 7.2 MiB/s wr, 218 op/s
Jan 31 08:14:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e298 e298: 3 total, 3 up, 3 in
Jan 31 08:14:19 compute-2 ceph-mon[77282]: osdmap e297: 3 total, 3 up, 3 in
Jan 31 08:14:19 compute-2 ceph-mon[77282]: osdmap e298: 3 total, 3 up, 3 in
Jan 31 08:14:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:14:20 compute-2 nova_compute[226829]: 2026-01-31 08:14:20.452 226833 DEBUG nova.storage.rbd_utils [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] creating snapshot(snap) on rbd image(d7589e47-590e-4196-b561-0595216f339a) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 08:14:20 compute-2 nova_compute[226829]: 2026-01-31 08:14:20.662 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:20.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:20.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:20 compute-2 nova_compute[226829]: 2026-01-31 08:14:20.865 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:21 compute-2 ceph-mon[77282]: pgmap v2360: 305 pgs: 305 active+clean; 957 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 8.5 MiB/s wr, 264 op/s
Jan 31 08:14:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2897049485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1080364343' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:14:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2234446353' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:14:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e299 e299: 3 total, 3 up, 3 in
Jan 31 08:14:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:22.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:14:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:22.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:14:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2159375368' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:14:23 compute-2 ceph-mon[77282]: pgmap v2361: 305 pgs: 305 active+clean; 978 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 8.8 MiB/s wr, 206 op/s
Jan 31 08:14:23 compute-2 nova_compute[226829]: 2026-01-31 08:14:23.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:14:23 compute-2 nova_compute[226829]: 2026-01-31 08:14:23.738 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847248.7376387, 966eef30-6a53-4f56-a48f-d9c2b4348db8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:14:23 compute-2 nova_compute[226829]: 2026-01-31 08:14:23.740 226833 INFO nova.compute.manager [-] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] VM Stopped (Lifecycle Event)
Jan 31 08:14:24 compute-2 nova_compute[226829]: 2026-01-31 08:14:24.530 226833 DEBUG nova.compute.manager [None req-87874aa6-da5e-4cbd-9569-f2eb24d0172f - - - - - -] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:14:24 compute-2 nova_compute[226829]: 2026-01-31 08:14:24.533 226833 DEBUG nova.compute.manager [None req-87874aa6-da5e-4cbd-9569-f2eb24d0172f - - - - - -] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: paused, current task_state: shelving_image_uploading, current DB power_state: 3, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:14:24 compute-2 nova_compute[226829]: 2026-01-31 08:14:24.559 226833 INFO nova.compute.manager [None req-87874aa6-da5e-4cbd-9569-f2eb24d0172f - - - - - -] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] During sync_power_state the instance has a pending task (shelving_image_uploading). Skip.
Jan 31 08:14:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:24.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:24.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:14:25 compute-2 sudo[285279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:14:25 compute-2 sudo[285279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:25 compute-2 sudo[285279]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:25 compute-2 sudo[285304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:14:25 compute-2 sudo[285304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:25 compute-2 sudo[285304]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:25 compute-2 nova_compute[226829]: 2026-01-31 08:14:25.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:14:25 compute-2 nova_compute[226829]: 2026-01-31 08:14:25.665 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:25 compute-2 nova_compute[226829]: 2026-01-31 08:14:25.868 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:25 compute-2 ceph-mon[77282]: osdmap e299: 3 total, 3 up, 3 in
Jan 31 08:14:25 compute-2 ceph-mon[77282]: pgmap v2363: 305 pgs: 305 active+clean; 994 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.2 MiB/s wr, 114 op/s
Jan 31 08:14:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e300 e300: 3 total, 3 up, 3 in
Jan 31 08:14:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:26.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:14:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:26.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:14:27 compute-2 ceph-mon[77282]: pgmap v2364: 305 pgs: 305 active+clean; 1006 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.6 MiB/s wr, 109 op/s
Jan 31 08:14:27 compute-2 ceph-mon[77282]: osdmap e300: 3 total, 3 up, 3 in
Jan 31 08:14:27 compute-2 nova_compute[226829]: 2026-01-31 08:14:27.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:14:27 compute-2 nova_compute[226829]: 2026-01-31 08:14:27.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:14:28 compute-2 nova_compute[226829]: 2026-01-31 08:14:28.159 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-1ee317bd-390c-4a22-9e4e-e24189eb499e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:14:28 compute-2 nova_compute[226829]: 2026-01-31 08:14:28.159 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-1ee317bd-390c-4a22-9e4e-e24189eb499e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:14:28 compute-2 nova_compute[226829]: 2026-01-31 08:14:28.159 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:14:28 compute-2 podman[285331]: 2026-01-31 08:14:28.240414862 +0000 UTC m=+0.102233259 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127)
Jan 31 08:14:28 compute-2 ceph-mon[77282]: pgmap v2366: 305 pgs: 305 active+clean; 1006 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 135 KiB/s rd, 2.8 MiB/s wr, 77 op/s
Jan 31 08:14:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1448647849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:28.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:14:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:28.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:14:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:14:30 compute-2 nova_compute[226829]: 2026-01-31 08:14:30.163 226833 INFO nova.virt.libvirt.driver [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Snapshot image upload complete
Jan 31 08:14:30 compute-2 nova_compute[226829]: 2026-01-31 08:14:30.165 226833 DEBUG nova.compute.manager [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:14:30 compute-2 nova_compute[226829]: 2026-01-31 08:14:30.250 226833 INFO nova.compute.manager [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Shelve offloading
Jan 31 08:14:30 compute-2 nova_compute[226829]: 2026-01-31 08:14:30.259 226833 INFO nova.virt.libvirt.driver [-] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Instance destroyed successfully.
Jan 31 08:14:30 compute-2 nova_compute[226829]: 2026-01-31 08:14:30.259 226833 DEBUG nova.compute.manager [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:14:30 compute-2 nova_compute[226829]: 2026-01-31 08:14:30.261 226833 DEBUG oslo_concurrency.lockutils [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "refresh_cache-966eef30-6a53-4f56-a48f-d9c2b4348db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:14:30 compute-2 nova_compute[226829]: 2026-01-31 08:14:30.262 226833 DEBUG oslo_concurrency.lockutils [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquired lock "refresh_cache-966eef30-6a53-4f56-a48f-d9c2b4348db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:14:30 compute-2 nova_compute[226829]: 2026-01-31 08:14:30.262 226833 DEBUG nova.network.neutron [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:14:30 compute-2 nova_compute[226829]: 2026-01-31 08:14:30.668 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:30.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:30.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:30 compute-2 ceph-mon[77282]: pgmap v2367: 305 pgs: 305 active+clean; 1006 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 105 KiB/s rd, 1.3 MiB/s wr, 80 op/s
Jan 31 08:14:30 compute-2 nova_compute[226829]: 2026-01-31 08:14:30.870 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:31.171 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:14:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:31.172 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:14:31 compute-2 nova_compute[226829]: 2026-01-31 08:14:31.205 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e301 e301: 3 total, 3 up, 3 in
Jan 31 08:14:32 compute-2 nova_compute[226829]: 2026-01-31 08:14:32.568 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Updating instance_info_cache with network_info: [{"id": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "address": "fa:16:3e:7b:a5:ae", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75f73628-b1", "ovs_interfaceid": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:14:32 compute-2 nova_compute[226829]: 2026-01-31 08:14:32.595 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-1ee317bd-390c-4a22-9e4e-e24189eb499e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:14:32 compute-2 nova_compute[226829]: 2026-01-31 08:14:32.595 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:14:32 compute-2 nova_compute[226829]: 2026-01-31 08:14:32.596 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:14:32 compute-2 nova_compute[226829]: 2026-01-31 08:14:32.597 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:14:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:32.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:32.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:32 compute-2 ceph-mon[77282]: osdmap e301: 3 total, 3 up, 3 in
Jan 31 08:14:32 compute-2 ceph-mon[77282]: pgmap v2369: 305 pgs: 305 active+clean; 1017 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 89 KiB/s rd, 1.4 MiB/s wr, 61 op/s
Jan 31 08:14:32 compute-2 nova_compute[226829]: 2026-01-31 08:14:32.872 226833 DEBUG nova.network.neutron [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Updating instance_info_cache with network_info: [{"id": "28396218-53f3-4ff1-b721-bb503c806019", "address": "fa:16:3e:b5:cd:6d", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28396218-53", "ovs_interfaceid": "28396218-53f3-4ff1-b721-bb503c806019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:14:32 compute-2 nova_compute[226829]: 2026-01-31 08:14:32.891 226833 DEBUG oslo_concurrency.lockutils [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Releasing lock "refresh_cache-966eef30-6a53-4f56-a48f-d9c2b4348db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:14:33 compute-2 nova_compute[226829]: 2026-01-31 08:14:33.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:14:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2788472167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:34.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:14:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:34.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:14:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:14:35 compute-2 nova_compute[226829]: 2026-01-31 08:14:35.133 226833 INFO nova.virt.libvirt.driver [-] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Instance destroyed successfully.
Jan 31 08:14:35 compute-2 nova_compute[226829]: 2026-01-31 08:14:35.133 226833 DEBUG nova.objects.instance [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'resources' on Instance uuid 966eef30-6a53-4f56-a48f-d9c2b4348db8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:14:35 compute-2 nova_compute[226829]: 2026-01-31 08:14:35.151 226833 DEBUG nova.virt.libvirt.vif [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1412905283',display_name='tempest-ServerActionsTestOtherB-server-1412905283',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1412905283',id=133,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:14:00Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-xrt1x8j8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member',shelved_at='2026-01-31T08:14:30.165021',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='d7589e47-590e-4196-b561-0595216f339a'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:14:09Z,user_data=None,user_id='18aee9d81d404f77ac81cde538f140d8',uuid=966eef30-6a53-4f56-a48f-d9c2b4348db8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "28396218-53f3-4ff1-b721-bb503c806019", "address": "fa:16:3e:b5:cd:6d", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28396218-53", "ovs_interfaceid": "28396218-53f3-4ff1-b721-bb503c806019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:14:35 compute-2 nova_compute[226829]: 2026-01-31 08:14:35.152 226833 DEBUG nova.network.os_vif_util [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "28396218-53f3-4ff1-b721-bb503c806019", "address": "fa:16:3e:b5:cd:6d", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28396218-53", "ovs_interfaceid": "28396218-53f3-4ff1-b721-bb503c806019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:14:35 compute-2 nova_compute[226829]: 2026-01-31 08:14:35.153 226833 DEBUG nova.network.os_vif_util [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:cd:6d,bridge_name='br-int',has_traffic_filtering=True,id=28396218-53f3-4ff1-b721-bb503c806019,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28396218-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:14:35 compute-2 nova_compute[226829]: 2026-01-31 08:14:35.153 226833 DEBUG os_vif [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:cd:6d,bridge_name='br-int',has_traffic_filtering=True,id=28396218-53f3-4ff1-b721-bb503c806019,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28396218-53') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:14:35 compute-2 nova_compute[226829]: 2026-01-31 08:14:35.157 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:35 compute-2 nova_compute[226829]: 2026-01-31 08:14:35.158 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28396218-53, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:14:35 compute-2 nova_compute[226829]: 2026-01-31 08:14:35.159 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:35 compute-2 nova_compute[226829]: 2026-01-31 08:14:35.161 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:35 compute-2 nova_compute[226829]: 2026-01-31 08:14:35.166 226833 INFO os_vif [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:cd:6d,bridge_name='br-int',has_traffic_filtering=True,id=28396218-53f3-4ff1-b721-bb503c806019,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28396218-53')
Jan 31 08:14:35 compute-2 nova_compute[226829]: 2026-01-31 08:14:35.261 226833 DEBUG nova.compute.manager [req-1840b673-4e3a-4563-ad75-5e6518a25fae req-7c7a50b8-3fbb-46f1-97fc-4fdaea70a923 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Received event network-changed-28396218-53f3-4ff1-b721-bb503c806019 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:14:35 compute-2 nova_compute[226829]: 2026-01-31 08:14:35.261 226833 DEBUG nova.compute.manager [req-1840b673-4e3a-4563-ad75-5e6518a25fae req-7c7a50b8-3fbb-46f1-97fc-4fdaea70a923 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Refreshing instance network info cache due to event network-changed-28396218-53f3-4ff1-b721-bb503c806019. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:14:35 compute-2 nova_compute[226829]: 2026-01-31 08:14:35.262 226833 DEBUG oslo_concurrency.lockutils [req-1840b673-4e3a-4563-ad75-5e6518a25fae req-7c7a50b8-3fbb-46f1-97fc-4fdaea70a923 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-966eef30-6a53-4f56-a48f-d9c2b4348db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:14:35 compute-2 nova_compute[226829]: 2026-01-31 08:14:35.262 226833 DEBUG oslo_concurrency.lockutils [req-1840b673-4e3a-4563-ad75-5e6518a25fae req-7c7a50b8-3fbb-46f1-97fc-4fdaea70a923 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-966eef30-6a53-4f56-a48f-d9c2b4348db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:14:35 compute-2 nova_compute[226829]: 2026-01-31 08:14:35.262 226833 DEBUG nova.network.neutron [req-1840b673-4e3a-4563-ad75-5e6518a25fae req-7c7a50b8-3fbb-46f1-97fc-4fdaea70a923 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Refreshing network info cache for port 28396218-53f3-4ff1-b721-bb503c806019 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:14:35 compute-2 ceph-mon[77282]: pgmap v2370: 305 pgs: 305 active+clean; 1.0 GiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 227 KiB/s rd, 1.1 MiB/s wr, 61 op/s
Jan 31 08:14:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/940524916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:35 compute-2 nova_compute[226829]: 2026-01-31 08:14:35.872 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:36 compute-2 nova_compute[226829]: 2026-01-31 08:14:36.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:14:36 compute-2 nova_compute[226829]: 2026-01-31 08:14:36.491 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:14:36 compute-2 nova_compute[226829]: 2026-01-31 08:14:36.531 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:36 compute-2 nova_compute[226829]: 2026-01-31 08:14:36.531 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:36 compute-2 nova_compute[226829]: 2026-01-31 08:14:36.531 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:36 compute-2 nova_compute[226829]: 2026-01-31 08:14:36.532 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:14:36 compute-2 nova_compute[226829]: 2026-01-31 08:14:36.532 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:14:36 compute-2 ceph-mon[77282]: pgmap v2371: 305 pgs: 305 active+clean; 1.0 GiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.5 MiB/s wr, 134 op/s
Jan 31 08:14:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4072180426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1553411770' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:14:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:36.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:14:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:36.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:14:36 compute-2 nova_compute[226829]: 2026-01-31 08:14:36.958 226833 DEBUG nova.network.neutron [req-1840b673-4e3a-4563-ad75-5e6518a25fae req-7c7a50b8-3fbb-46f1-97fc-4fdaea70a923 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Updated VIF entry in instance network info cache for port 28396218-53f3-4ff1-b721-bb503c806019. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:14:36 compute-2 nova_compute[226829]: 2026-01-31 08:14:36.959 226833 DEBUG nova.network.neutron [req-1840b673-4e3a-4563-ad75-5e6518a25fae req-7c7a50b8-3fbb-46f1-97fc-4fdaea70a923 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Updating instance_info_cache with network_info: [{"id": "28396218-53f3-4ff1-b721-bb503c806019", "address": "fa:16:3e:b5:cd:6d", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": null, "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap28396218-53", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:14:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:14:36 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4290890653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:36 compute-2 nova_compute[226829]: 2026-01-31 08:14:36.987 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.006 226833 DEBUG oslo_concurrency.lockutils [req-1840b673-4e3a-4563-ad75-5e6518a25fae req-7c7a50b8-3fbb-46f1-97fc-4fdaea70a923 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-966eef30-6a53-4f56-a48f-d9c2b4348db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.092 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.093 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.100 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.101 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.106 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.107 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.111 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.112 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.112 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:14:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:37.174 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.301 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.303 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3699MB free_disk=20.540531158447266GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.304 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.304 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.388 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 4e4e24bf-e5fe-4be2-9d89-52432f07cca0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.388 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 1ee317bd-390c-4a22-9e4e-e24189eb499e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.388 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 5a59388d-bade-4df0-9ac0-0022df15ea02 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.388 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 966eef30-6a53-4f56-a48f-d9c2b4348db8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.389 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.389 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=20GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.506 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:14:37 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3364045828' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:14:37 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1717633416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:37 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4290890653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.825 226833 INFO nova.virt.libvirt.driver [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Deleting instance files /var/lib/nova/instances/966eef30-6a53-4f56-a48f-d9c2b4348db8_del
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.827 226833 INFO nova.virt.libvirt.driver [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 966eef30-6a53-4f56-a48f-d9c2b4348db8] Deletion of /var/lib/nova/instances/966eef30-6a53-4f56-a48f-d9c2b4348db8_del complete
Jan 31 08:14:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:14:37 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2905960649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.928 226833 INFO nova.scheduler.client.report [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Deleted allocations for instance 966eef30-6a53-4f56-a48f-d9c2b4348db8
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.944 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.948 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.975 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.979 226833 DEBUG oslo_concurrency.lockutils [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.999 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:14:37 compute-2 nova_compute[226829]: 2026-01-31 08:14:37.999 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:38 compute-2 nova_compute[226829]: 2026-01-31 08:14:38.000 226833 DEBUG oslo_concurrency.lockutils [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:38 compute-2 nova_compute[226829]: 2026-01-31 08:14:38.103 226833 DEBUG oslo_concurrency.processutils [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:14:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:14:38 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3465886312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:38 compute-2 nova_compute[226829]: 2026-01-31 08:14:38.555 226833 DEBUG oslo_concurrency.processutils [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:14:38 compute-2 nova_compute[226829]: 2026-01-31 08:14:38.564 226833 DEBUG nova.compute.provider_tree [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:14:38 compute-2 nova_compute[226829]: 2026-01-31 08:14:38.602 226833 DEBUG nova.scheduler.client.report [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:14:38 compute-2 nova_compute[226829]: 2026-01-31 08:14:38.625 226833 DEBUG oslo_concurrency.lockutils [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:38 compute-2 nova_compute[226829]: 2026-01-31 08:14:38.695 226833 DEBUG oslo_concurrency.lockutils [None req-ba836845-1e06-490d-af25-97092e219999 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "966eef30-6a53-4f56-a48f-d9c2b4348db8" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 31.416s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:14:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:38.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:14:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:38.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:38 compute-2 ceph-mon[77282]: pgmap v2372: 305 pgs: 305 active+clean; 1.0 GiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 172 op/s
Jan 31 08:14:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2905960649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3465886312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:39 compute-2 podman[285449]: 2026-01-31 08:14:39.181476715 +0000 UTC m=+0.064072806 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Jan 31 08:14:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:14:40 compute-2 ceph-mon[77282]: pgmap v2373: 305 pgs: 305 active+clean; 999 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.2 MiB/s wr, 209 op/s
Jan 31 08:14:40 compute-2 nova_compute[226829]: 2026-01-31 08:14:40.161 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:40.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:40.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:40 compute-2 nova_compute[226829]: 2026-01-31 08:14:40.875 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:42 compute-2 ceph-mon[77282]: pgmap v2374: 305 pgs: 305 active+clean; 973 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 1.8 MiB/s wr, 290 op/s
Jan 31 08:14:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:42.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:42.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:14:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:44.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:14:44 compute-2 nova_compute[226829]: 2026-01-31 08:14:44.773 226833 DEBUG oslo_concurrency.lockutils [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "5a59388d-bade-4df0-9ac0-0022df15ea02" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:44 compute-2 nova_compute[226829]: 2026-01-31 08:14:44.774 226833 DEBUG oslo_concurrency.lockutils [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:44 compute-2 nova_compute[226829]: 2026-01-31 08:14:44.775 226833 INFO nova.compute.manager [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Shelving
Jan 31 08:14:44 compute-2 nova_compute[226829]: 2026-01-31 08:14:44.803 226833 DEBUG nova.virt.libvirt.driver [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 08:14:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:14:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:44.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:14:45 compute-2 ceph-mon[77282]: pgmap v2375: 305 pgs: 305 active+clean; 959 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 1.5 MiB/s wr, 278 op/s
Jan 31 08:14:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:14:45 compute-2 nova_compute[226829]: 2026-01-31 08:14:45.163 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:45 compute-2 sudo[285471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:14:45 compute-2 sudo[285471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:45 compute-2 sudo[285471]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:45 compute-2 sudo[285496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:14:45 compute-2 sudo[285496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:45 compute-2 sudo[285496]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:45 compute-2 nova_compute[226829]: 2026-01-31 08:14:45.876 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2418829' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:14:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2418829' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:14:45 compute-2 ceph-mon[77282]: pgmap v2376: 305 pgs: 305 active+clean; 925 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 1.1 MiB/s wr, 290 op/s
Jan 31 08:14:45 compute-2 nova_compute[226829]: 2026-01-31 08:14:45.997 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:14:45 compute-2 nova_compute[226829]: 2026-01-31 08:14:45.998 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #109. Immutable memtables: 0.
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:14:46.502991) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 109
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847286503109, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1126, "num_deletes": 254, "total_data_size": 2362736, "memory_usage": 2402912, "flush_reason": "Manual Compaction"}
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #110: started
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847286512811, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 110, "file_size": 1036934, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 55337, "largest_seqno": 56458, "table_properties": {"data_size": 1032515, "index_size": 1943, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11640, "raw_average_key_size": 21, "raw_value_size": 1023083, "raw_average_value_size": 1891, "num_data_blocks": 84, "num_entries": 541, "num_filter_entries": 541, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847207, "oldest_key_time": 1769847207, "file_creation_time": 1769847286, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 9855 microseconds, and 4259 cpu microseconds.
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:14:46.512856) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #110: 1036934 bytes OK
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:14:46.512874) [db/memtable_list.cc:519] [default] Level-0 commit table #110 started
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:14:46.515767) [db/memtable_list.cc:722] [default] Level-0 commit table #110: memtable #1 done
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:14:46.515780) EVENT_LOG_v1 {"time_micros": 1769847286515775, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:14:46.515798) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 2357168, prev total WAL file size 2357168, number of live WAL files 2.
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000106.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:14:46.516681) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373538' seq:72057594037927935, type:22 .. '6D6772737461740032303130' seq:0, type:0; will stop at (end)
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [110(1012KB)], [108(12MB)]
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847286516752, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [110], "files_L6": [108], "score": -1, "input_data_size": 13818157, "oldest_snapshot_seqno": -1}
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #111: 7972 keys, 10453053 bytes, temperature: kUnknown
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847286580913, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 111, "file_size": 10453053, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10401969, "index_size": 30044, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19973, "raw_key_size": 207221, "raw_average_key_size": 25, "raw_value_size": 10262084, "raw_average_value_size": 1287, "num_data_blocks": 1176, "num_entries": 7972, "num_filter_entries": 7972, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769847286, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 111, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:14:46.581172) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 10453053 bytes
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:14:46.587456) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 215.1 rd, 162.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 12.2 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(23.4) write-amplify(10.1) OK, records in: 8463, records dropped: 491 output_compression: NoCompression
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:14:46.587484) EVENT_LOG_v1 {"time_micros": 1769847286587473, "job": 68, "event": "compaction_finished", "compaction_time_micros": 64244, "compaction_time_cpu_micros": 23592, "output_level": 6, "num_output_files": 1, "total_output_size": 10453053, "num_input_records": 8463, "num_output_records": 7972, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847286587739, "job": 68, "event": "table_file_deletion", "file_number": 110}
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000108.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847286589088, "job": 68, "event": "table_file_deletion", "file_number": 108}
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:14:46.516538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:14:46.589209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:14:46.589218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:14:46.589221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:14:46.589224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:14:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:14:46.589227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:14:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:46.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:14:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:46.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:14:47 compute-2 kernel: tapc82f9d38-ef (unregistering): left promiscuous mode
Jan 31 08:14:47 compute-2 NetworkManager[48999]: <info>  [1769847287.4875] device (tapc82f9d38-ef): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:14:47 compute-2 nova_compute[226829]: 2026-01-31 08:14:47.495 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:47 compute-2 ovn_controller[133834]: 2026-01-31T08:14:47Z|00547|binding|INFO|Releasing lport c82f9d38-ef16-46ac-829a-12067ec8c603 from this chassis (sb_readonly=0)
Jan 31 08:14:47 compute-2 ovn_controller[133834]: 2026-01-31T08:14:47Z|00548|binding|INFO|Setting lport c82f9d38-ef16-46ac-829a-12067ec8c603 down in Southbound
Jan 31 08:14:47 compute-2 ovn_controller[133834]: 2026-01-31T08:14:47Z|00549|binding|INFO|Removing iface tapc82f9d38-ef ovn-installed in OVS
Jan 31 08:14:47 compute-2 nova_compute[226829]: 2026-01-31 08:14:47.497 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:47.503 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:e2:fd 10.100.0.14'], port_security=['fa:16:3e:15:e2:fd 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '5a59388d-bade-4df0-9ac0-0022df15ea02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3ddadeb950a490db5c99da98a32c9ec', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5b02cc0a-856b-4d31-80e9-eccd1c696448', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17e596e7-33b3-44a6-9cbf-f9eacfd974b4, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=c82f9d38-ef16-46ac-829a-12067ec8c603) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:14:47 compute-2 nova_compute[226829]: 2026-01-31 08:14:47.505 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:47.507 143841 INFO neutron.agent.ovn.metadata.agent [-] Port c82f9d38-ef16-46ac-829a-12067ec8c603 in datapath e8014d6b-23e1-41ef-b5e2-3d770d302e72 unbound from our chassis
Jan 31 08:14:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:47.510 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e8014d6b-23e1-41ef-b5e2-3d770d302e72, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:14:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:47.513 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a7cffde9-670f-4c4b-84db-878f918f29d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:47.514 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 namespace which is not needed anymore
Jan 31 08:14:47 compute-2 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000081.scope: Deactivated successfully.
Jan 31 08:14:47 compute-2 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000081.scope: Consumed 18.613s CPU time.
Jan 31 08:14:47 compute-2 systemd-machined[195142]: Machine qemu-57-instance-00000081 terminated.
Jan 31 08:14:47 compute-2 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[283774]: [NOTICE]   (283778) : haproxy version is 2.8.14-c23fe91
Jan 31 08:14:47 compute-2 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[283774]: [NOTICE]   (283778) : path to executable is /usr/sbin/haproxy
Jan 31 08:14:47 compute-2 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[283774]: [WARNING]  (283778) : Exiting Master process...
Jan 31 08:14:47 compute-2 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[283774]: [ALERT]    (283778) : Current worker (283780) exited with code 143 (Terminated)
Jan 31 08:14:47 compute-2 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[283774]: [WARNING]  (283778) : All workers exited. Exiting... (0)
Jan 31 08:14:47 compute-2 systemd[1]: libpod-857cd00444f095b910aa60349645947753ce02e2649fc9599c7bd2ddc0de9193.scope: Deactivated successfully.
Jan 31 08:14:47 compute-2 podman[285546]: 2026-01-31 08:14:47.671145797 +0000 UTC m=+0.060644673 container died 857cd00444f095b910aa60349645947753ce02e2649fc9599c7bd2ddc0de9193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:14:47 compute-2 nova_compute[226829]: 2026-01-31 08:14:47.715 226833 DEBUG oslo_concurrency.lockutils [None req-a6265906-436a-4334-90a8-d621c90394e3 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Acquiring lock "1ee317bd-390c-4a22-9e4e-e24189eb499e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:47 compute-2 nova_compute[226829]: 2026-01-31 08:14:47.715 226833 DEBUG oslo_concurrency.lockutils [None req-a6265906-436a-4334-90a8-d621c90394e3 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "1ee317bd-390c-4a22-9e4e-e24189eb499e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:47 compute-2 nova_compute[226829]: 2026-01-31 08:14:47.716 226833 DEBUG oslo_concurrency.lockutils [None req-a6265906-436a-4334-90a8-d621c90394e3 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Acquiring lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:47 compute-2 nova_compute[226829]: 2026-01-31 08:14:47.717 226833 DEBUG oslo_concurrency.lockutils [None req-a6265906-436a-4334-90a8-d621c90394e3 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:47 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-857cd00444f095b910aa60349645947753ce02e2649fc9599c7bd2ddc0de9193-userdata-shm.mount: Deactivated successfully.
Jan 31 08:14:47 compute-2 systemd[1]: var-lib-containers-storage-overlay-f120d36d6cfe1cedcd8676c86a334112923e4ba2a0f010522395b333f0491b55-merged.mount: Deactivated successfully.
Jan 31 08:14:47 compute-2 nova_compute[226829]: 2026-01-31 08:14:47.718 226833 DEBUG oslo_concurrency.lockutils [None req-a6265906-436a-4334-90a8-d621c90394e3 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:47 compute-2 nova_compute[226829]: 2026-01-31 08:14:47.719 226833 INFO nova.compute.manager [None req-a6265906-436a-4334-90a8-d621c90394e3 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Terminating instance
Jan 31 08:14:47 compute-2 nova_compute[226829]: 2026-01-31 08:14:47.720 226833 DEBUG nova.compute.manager [None req-a6265906-436a-4334-90a8-d621c90394e3 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:14:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3913157800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:47 compute-2 podman[285546]: 2026-01-31 08:14:47.757860535 +0000 UTC m=+0.147359411 container cleanup 857cd00444f095b910aa60349645947753ce02e2649fc9599c7bd2ddc0de9193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:14:47 compute-2 systemd[1]: libpod-conmon-857cd00444f095b910aa60349645947753ce02e2649fc9599c7bd2ddc0de9193.scope: Deactivated successfully.
Jan 31 08:14:47 compute-2 nova_compute[226829]: 2026-01-31 08:14:47.790 226833 DEBUG nova.compute.manager [req-807351db-98db-4e25-a7d8-9c2adfd03199 req-79808391-d242-4f8a-b03d-97add6da530e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Received event network-vif-unplugged-c82f9d38-ef16-46ac-829a-12067ec8c603 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:14:47 compute-2 nova_compute[226829]: 2026-01-31 08:14:47.790 226833 DEBUG oslo_concurrency.lockutils [req-807351db-98db-4e25-a7d8-9c2adfd03199 req-79808391-d242-4f8a-b03d-97add6da530e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:47 compute-2 nova_compute[226829]: 2026-01-31 08:14:47.790 226833 DEBUG oslo_concurrency.lockutils [req-807351db-98db-4e25-a7d8-9c2adfd03199 req-79808391-d242-4f8a-b03d-97add6da530e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:47 compute-2 nova_compute[226829]: 2026-01-31 08:14:47.791 226833 DEBUG oslo_concurrency.lockutils [req-807351db-98db-4e25-a7d8-9c2adfd03199 req-79808391-d242-4f8a-b03d-97add6da530e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:47 compute-2 nova_compute[226829]: 2026-01-31 08:14:47.791 226833 DEBUG nova.compute.manager [req-807351db-98db-4e25-a7d8-9c2adfd03199 req-79808391-d242-4f8a-b03d-97add6da530e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] No waiting events found dispatching network-vif-unplugged-c82f9d38-ef16-46ac-829a-12067ec8c603 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:14:47 compute-2 nova_compute[226829]: 2026-01-31 08:14:47.791 226833 WARNING nova.compute.manager [req-807351db-98db-4e25-a7d8-9c2adfd03199 req-79808391-d242-4f8a-b03d-97add6da530e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Received unexpected event network-vif-unplugged-c82f9d38-ef16-46ac-829a-12067ec8c603 for instance with vm_state active and task_state shelving.
Jan 31 08:14:47 compute-2 nova_compute[226829]: 2026-01-31 08:14:47.821 226833 INFO nova.virt.libvirt.driver [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Instance shutdown successfully after 3 seconds.
Jan 31 08:14:47 compute-2 nova_compute[226829]: 2026-01-31 08:14:47.827 226833 INFO nova.virt.libvirt.driver [-] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Instance destroyed successfully.
Jan 31 08:14:47 compute-2 nova_compute[226829]: 2026-01-31 08:14:47.828 226833 DEBUG nova.objects.instance [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'numa_topology' on Instance uuid 5a59388d-bade-4df0-9ac0-0022df15ea02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:14:47 compute-2 podman[285586]: 2026-01-31 08:14:47.841716686 +0000 UTC m=+0.065521555 container remove 857cd00444f095b910aa60349645947753ce02e2649fc9599c7bd2ddc0de9193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:14:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:47.848 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6ee2b2ec-c71e-4940-81a1-8eb9c90105c6]: (4, ('Sat Jan 31 08:14:47 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 (857cd00444f095b910aa60349645947753ce02e2649fc9599c7bd2ddc0de9193)\n857cd00444f095b910aa60349645947753ce02e2649fc9599c7bd2ddc0de9193\nSat Jan 31 08:14:47 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 (857cd00444f095b910aa60349645947753ce02e2649fc9599c7bd2ddc0de9193)\n857cd00444f095b910aa60349645947753ce02e2649fc9599c7bd2ddc0de9193\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:47.851 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2024ca08-0063-4c3e-8881-c88871989cb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:47.852 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8014d6b-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:14:47 compute-2 nova_compute[226829]: 2026-01-31 08:14:47.854 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:47 compute-2 kernel: tape8014d6b-20: left promiscuous mode
Jan 31 08:14:47 compute-2 nova_compute[226829]: 2026-01-31 08:14:47.864 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:47.869 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[637151cf-5de0-4d74-9871-d4a8dbb6da64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:47.886 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9002a33f-6a46-43da-90fc-bed3a0ba89ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:47.888 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6689b9ca-2b40-46ef-a9fb-f9d549567a9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:47.903 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e71d5e0d-f5ab-4f02-8763-051b678793d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 743135, 'reachable_time': 25516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285604, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:47 compute-2 systemd[1]: run-netns-ovnmeta\x2de8014d6b\x2d23e1\x2d41ef\x2db5e2\x2d3d770d302e72.mount: Deactivated successfully.
Jan 31 08:14:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:47.909 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:14:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:47.909 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[99e27ccf-9555-4c3d-a5f4-37acbde79ffa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.096 226833 INFO nova.virt.libvirt.driver [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Beginning cold snapshot process
Jan 31 08:14:48 compute-2 kernel: tap75f73628-b1 (unregistering): left promiscuous mode
Jan 31 08:14:48 compute-2 NetworkManager[48999]: <info>  [1769847288.2208] device (tap75f73628-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:14:48 compute-2 ovn_controller[133834]: 2026-01-31T08:14:48Z|00550|binding|INFO|Releasing lport 75f73628-b189-4a56-a44e-b33ec7ff3e50 from this chassis (sb_readonly=0)
Jan 31 08:14:48 compute-2 ovn_controller[133834]: 2026-01-31T08:14:48Z|00551|binding|INFO|Setting lport 75f73628-b189-4a56-a44e-b33ec7ff3e50 down in Southbound
Jan 31 08:14:48 compute-2 ovn_controller[133834]: 2026-01-31T08:14:48Z|00552|binding|INFO|Removing iface tap75f73628-b1 ovn-installed in OVS
Jan 31 08:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:48.233 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:a5:ae 10.100.0.7'], port_security=['fa:16:3e:7b:a5:ae 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1ee317bd-390c-4a22-9e4e-e24189eb499e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b57bf91-5573-4777-9a03-b1fa3ca3351c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cbd0f41e455b4b3b9a8edf35ef0b85ed', 'neutron:revision_number': '6', 'neutron:security_group_ids': '5f679240-571e-49a9-90f1-7fce9428e205', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6382cf61-a4d2-45ec-ba90-ec1b527a3e06, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=75f73628-b189-4a56-a44e-b33ec7ff3e50) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:48.235 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 75f73628-b189-4a56-a44e-b33ec7ff3e50 in datapath 1b57bf91-5573-4777-9a03-b1fa3ca3351c unbound from our chassis
Jan 31 08:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:48.237 143841 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1b57bf91-5573-4777-9a03-b1fa3ca3351c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 31 08:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:14:48.238 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9d27c45d-42d2-4026-80b7-b39cc4d553ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.274 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:48 compute-2 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Jan 31 08:14:48 compute-2 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000007f.scope: Consumed 18.521s CPU time.
Jan 31 08:14:48 compute-2 systemd-machined[195142]: Machine qemu-58-instance-0000007f terminated.
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.281 226833 DEBUG nova.virt.libvirt.imagebackend [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 31 08:14:48 compute-2 NetworkManager[48999]: <info>  [1769847288.3361] manager: (tap75f73628-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/266)
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.352 226833 INFO nova.virt.libvirt.driver [-] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Instance destroyed successfully.
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.354 226833 DEBUG nova.objects.instance [None req-a6265906-436a-4334-90a8-d621c90394e3 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lazy-loading 'resources' on Instance uuid 1ee317bd-390c-4a22-9e4e-e24189eb499e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.369 226833 DEBUG nova.virt.libvirt.vif [None req-a6265906-436a-4334-90a8-d621c90394e3 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:12:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-239463750',display_name='tempest-ServerRescueTestJSON-server-239463750',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-239463750',id=127,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:13:18Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cbd0f41e455b4b3b9a8edf35ef0b85ed',ramdisk_id='',reservation_id='r-l1tglovt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk
='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1911109090',owner_user_name='tempest-ServerRescueTestJSON-1911109090-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:13:18Z,user_data=None,user_id='6ab9e181016f4d5a899c91dae3aa26e0',uuid=1ee317bd-390c-4a22-9e4e-e24189eb499e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "address": "fa:16:3e:7b:a5:ae", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75f73628-b1", "ovs_interfaceid": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.369 226833 DEBUG nova.network.os_vif_util [None req-a6265906-436a-4334-90a8-d621c90394e3 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Converting VIF {"id": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "address": "fa:16:3e:7b:a5:ae", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75f73628-b1", "ovs_interfaceid": "75f73628-b189-4a56-a44e-b33ec7ff3e50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.369 226833 DEBUG nova.network.os_vif_util [None req-a6265906-436a-4334-90a8-d621c90394e3 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:a5:ae,bridge_name='br-int',has_traffic_filtering=True,id=75f73628-b189-4a56-a44e-b33ec7ff3e50,network=Network(1b57bf91-5573-4777-9a03-b1fa3ca3351c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75f73628-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.370 226833 DEBUG os_vif [None req-a6265906-436a-4334-90a8-d621c90394e3 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:a5:ae,bridge_name='br-int',has_traffic_filtering=True,id=75f73628-b189-4a56-a44e-b33ec7ff3e50,network=Network(1b57bf91-5573-4777-9a03-b1fa3ca3351c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75f73628-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.371 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.371 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75f73628-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.373 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.375 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.376 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.378 226833 INFO os_vif [None req-a6265906-436a-4334-90a8-d621c90394e3 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:a5:ae,bridge_name='br-int',has_traffic_filtering=True,id=75f73628-b189-4a56-a44e-b33ec7ff3e50,network=Network(1b57bf91-5573-4777-9a03-b1fa3ca3351c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75f73628-b1')
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.562 226833 DEBUG nova.storage.rbd_utils [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] creating snapshot(7881d710f7904f5f88d428fcb161877a) on rbd image(5a59388d-bade-4df0-9ac0-0022df15ea02_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.676 226833 DEBUG nova.compute.manager [req-b150ce3a-8dee-4233-8882-2f5a73a7cc10 req-759953cf-3069-49db-b2ff-78c1e835176a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Received event network-vif-unplugged-75f73628-b189-4a56-a44e-b33ec7ff3e50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.677 226833 DEBUG oslo_concurrency.lockutils [req-b150ce3a-8dee-4233-8882-2f5a73a7cc10 req-759953cf-3069-49db-b2ff-78c1e835176a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.677 226833 DEBUG oslo_concurrency.lockutils [req-b150ce3a-8dee-4233-8882-2f5a73a7cc10 req-759953cf-3069-49db-b2ff-78c1e835176a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.677 226833 DEBUG oslo_concurrency.lockutils [req-b150ce3a-8dee-4233-8882-2f5a73a7cc10 req-759953cf-3069-49db-b2ff-78c1e835176a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.677 226833 DEBUG nova.compute.manager [req-b150ce3a-8dee-4233-8882-2f5a73a7cc10 req-759953cf-3069-49db-b2ff-78c1e835176a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] No waiting events found dispatching network-vif-unplugged-75f73628-b189-4a56-a44e-b33ec7ff3e50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.678 226833 DEBUG nova.compute.manager [req-b150ce3a-8dee-4233-8882-2f5a73a7cc10 req-759953cf-3069-49db-b2ff-78c1e835176a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Received event network-vif-unplugged-75f73628-b189-4a56-a44e-b33ec7ff3e50 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.678 226833 DEBUG nova.compute.manager [req-b150ce3a-8dee-4233-8882-2f5a73a7cc10 req-759953cf-3069-49db-b2ff-78c1e835176a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Received event network-vif-plugged-75f73628-b189-4a56-a44e-b33ec7ff3e50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.678 226833 DEBUG oslo_concurrency.lockutils [req-b150ce3a-8dee-4233-8882-2f5a73a7cc10 req-759953cf-3069-49db-b2ff-78c1e835176a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.679 226833 DEBUG oslo_concurrency.lockutils [req-b150ce3a-8dee-4233-8882-2f5a73a7cc10 req-759953cf-3069-49db-b2ff-78c1e835176a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.679 226833 DEBUG oslo_concurrency.lockutils [req-b150ce3a-8dee-4233-8882-2f5a73a7cc10 req-759953cf-3069-49db-b2ff-78c1e835176a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1ee317bd-390c-4a22-9e4e-e24189eb499e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.679 226833 DEBUG nova.compute.manager [req-b150ce3a-8dee-4233-8882-2f5a73a7cc10 req-759953cf-3069-49db-b2ff-78c1e835176a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] No waiting events found dispatching network-vif-plugged-75f73628-b189-4a56-a44e-b33ec7ff3e50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.679 226833 WARNING nova.compute.manager [req-b150ce3a-8dee-4233-8882-2f5a73a7cc10 req-759953cf-3069-49db-b2ff-78c1e835176a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Received unexpected event network-vif-plugged-75f73628-b189-4a56-a44e-b33ec7ff3e50 for instance with vm_state rescued and task_state deleting.
Jan 31 08:14:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:48.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e302 e302: 3 total, 3 up, 3 in
Jan 31 08:14:48 compute-2 ceph-mon[77282]: pgmap v2377: 305 pgs: 305 active+clean; 880 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 681 KiB/s wr, 252 op/s
Jan 31 08:14:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:48.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:48 compute-2 nova_compute[226829]: 2026-01-31 08:14:48.863 226833 DEBUG nova.storage.rbd_utils [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] cloning vms/5a59388d-bade-4df0-9ac0-0022df15ea02_disk@7881d710f7904f5f88d428fcb161877a to images/70966e79-230e-420e-aa68-b04da70a01e2 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 08:14:49 compute-2 nova_compute[226829]: 2026-01-31 08:14:49.021 226833 DEBUG nova.storage.rbd_utils [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] flattening images/70966e79-230e-420e-aa68-b04da70a01e2 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 31 08:14:49 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Jan 31 08:14:49 compute-2 nova_compute[226829]: 2026-01-31 08:14:49.747 226833 DEBUG nova.storage.rbd_utils [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] removing snapshot(7881d710f7904f5f88d428fcb161877a) on rbd image(5a59388d-bade-4df0-9ac0-0022df15ea02_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 08:14:49 compute-2 ceph-mon[77282]: osdmap e302: 3 total, 3 up, 3 in
Jan 31 08:14:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:14:50 compute-2 nova_compute[226829]: 2026-01-31 08:14:50.440 226833 INFO nova.virt.libvirt.driver [None req-a6265906-436a-4334-90a8-d621c90394e3 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Deleting instance files /var/lib/nova/instances/1ee317bd-390c-4a22-9e4e-e24189eb499e_del
Jan 31 08:14:50 compute-2 nova_compute[226829]: 2026-01-31 08:14:50.441 226833 INFO nova.virt.libvirt.driver [None req-a6265906-436a-4334-90a8-d621c90394e3 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Deletion of /var/lib/nova/instances/1ee317bd-390c-4a22-9e4e-e24189eb499e_del complete
Jan 31 08:14:50 compute-2 nova_compute[226829]: 2026-01-31 08:14:50.515 226833 INFO nova.compute.manager [None req-a6265906-436a-4334-90a8-d621c90394e3 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Took 2.79 seconds to destroy the instance on the hypervisor.
Jan 31 08:14:50 compute-2 nova_compute[226829]: 2026-01-31 08:14:50.516 226833 DEBUG oslo.service.loopingcall [None req-a6265906-436a-4334-90a8-d621c90394e3 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:14:50 compute-2 nova_compute[226829]: 2026-01-31 08:14:50.516 226833 DEBUG nova.compute.manager [-] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:14:50 compute-2 nova_compute[226829]: 2026-01-31 08:14:50.516 226833 DEBUG nova.network.neutron [-] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:14:50 compute-2 nova_compute[226829]: 2026-01-31 08:14:50.586 226833 DEBUG nova.compute.manager [req-1d9107af-65f2-460c-b128-d6d66c6c9ebd req-6564d604-8aff-4daf-9986-c47ac395c4cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Received event network-vif-plugged-c82f9d38-ef16-46ac-829a-12067ec8c603 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:14:50 compute-2 nova_compute[226829]: 2026-01-31 08:14:50.587 226833 DEBUG oslo_concurrency.lockutils [req-1d9107af-65f2-460c-b128-d6d66c6c9ebd req-6564d604-8aff-4daf-9986-c47ac395c4cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:50 compute-2 nova_compute[226829]: 2026-01-31 08:14:50.587 226833 DEBUG oslo_concurrency.lockutils [req-1d9107af-65f2-460c-b128-d6d66c6c9ebd req-6564d604-8aff-4daf-9986-c47ac395c4cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:50 compute-2 nova_compute[226829]: 2026-01-31 08:14:50.587 226833 DEBUG oslo_concurrency.lockutils [req-1d9107af-65f2-460c-b128-d6d66c6c9ebd req-6564d604-8aff-4daf-9986-c47ac395c4cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:50 compute-2 nova_compute[226829]: 2026-01-31 08:14:50.587 226833 DEBUG nova.compute.manager [req-1d9107af-65f2-460c-b128-d6d66c6c9ebd req-6564d604-8aff-4daf-9986-c47ac395c4cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] No waiting events found dispatching network-vif-plugged-c82f9d38-ef16-46ac-829a-12067ec8c603 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:14:50 compute-2 nova_compute[226829]: 2026-01-31 08:14:50.588 226833 WARNING nova.compute.manager [req-1d9107af-65f2-460c-b128-d6d66c6c9ebd req-6564d604-8aff-4daf-9986-c47ac395c4cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Received unexpected event network-vif-plugged-c82f9d38-ef16-46ac-829a-12067ec8c603 for instance with vm_state active and task_state shelving_image_uploading.
Jan 31 08:14:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:50.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e303 e303: 3 total, 3 up, 3 in
Jan 31 08:14:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:14:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:50.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:14:50 compute-2 nova_compute[226829]: 2026-01-31 08:14:50.861 226833 DEBUG nova.storage.rbd_utils [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] creating snapshot(snap) on rbd image(70966e79-230e-420e-aa68-b04da70a01e2) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 08:14:50 compute-2 ceph-mon[77282]: pgmap v2379: 305 pgs: 305 active+clean; 858 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 1.3 MiB/s wr, 220 op/s
Jan 31 08:14:50 compute-2 ceph-mon[77282]: osdmap e303: 3 total, 3 up, 3 in
Jan 31 08:14:50 compute-2 nova_compute[226829]: 2026-01-31 08:14:50.926 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e304 e304: 3 total, 3 up, 3 in
Jan 31 08:14:52 compute-2 nova_compute[226829]: 2026-01-31 08:14:52.529 226833 DEBUG nova.network.neutron [-] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:14:52 compute-2 nova_compute[226829]: 2026-01-31 08:14:52.543 226833 INFO nova.compute.manager [-] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Took 2.03 seconds to deallocate network for instance.
Jan 31 08:14:52 compute-2 nova_compute[226829]: 2026-01-31 08:14:52.577 226833 DEBUG nova.compute.manager [req-d7e80823-eb87-49ba-a776-6e8f2830baa5 req-f05da094-84b3-49b1-a681-29d568d75257 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Received event network-vif-deleted-75f73628-b189-4a56-a44e-b33ec7ff3e50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:14:52 compute-2 nova_compute[226829]: 2026-01-31 08:14:52.596 226833 DEBUG oslo_concurrency.lockutils [None req-a6265906-436a-4334-90a8-d621c90394e3 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:52 compute-2 nova_compute[226829]: 2026-01-31 08:14:52.596 226833 DEBUG oslo_concurrency.lockutils [None req-a6265906-436a-4334-90a8-d621c90394e3 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:52 compute-2 nova_compute[226829]: 2026-01-31 08:14:52.677 226833 DEBUG oslo_concurrency.processutils [None req-a6265906-436a-4334-90a8-d621c90394e3 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:14:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:52.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:14:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:52.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:14:52 compute-2 ceph-mon[77282]: pgmap v2381: 305 pgs: 305 active+clean; 862 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 4.3 MiB/s wr, 212 op/s
Jan 31 08:14:52 compute-2 ceph-mon[77282]: osdmap e304: 3 total, 3 up, 3 in
Jan 31 08:14:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:14:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1016822471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:53 compute-2 nova_compute[226829]: 2026-01-31 08:14:53.101 226833 DEBUG oslo_concurrency.processutils [None req-a6265906-436a-4334-90a8-d621c90394e3 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:14:53 compute-2 nova_compute[226829]: 2026-01-31 08:14:53.107 226833 DEBUG nova.compute.provider_tree [None req-a6265906-436a-4334-90a8-d621c90394e3 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:14:53 compute-2 nova_compute[226829]: 2026-01-31 08:14:53.129 226833 DEBUG nova.scheduler.client.report [None req-a6265906-436a-4334-90a8-d621c90394e3 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:14:53 compute-2 nova_compute[226829]: 2026-01-31 08:14:53.152 226833 DEBUG oslo_concurrency.lockutils [None req-a6265906-436a-4334-90a8-d621c90394e3 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:53 compute-2 nova_compute[226829]: 2026-01-31 08:14:53.206 226833 INFO nova.scheduler.client.report [None req-a6265906-436a-4334-90a8-d621c90394e3 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Deleted allocations for instance 1ee317bd-390c-4a22-9e4e-e24189eb499e
Jan 31 08:14:53 compute-2 nova_compute[226829]: 2026-01-31 08:14:53.292 226833 DEBUG oslo_concurrency.lockutils [None req-a6265906-436a-4334-90a8-d621c90394e3 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "1ee317bd-390c-4a22-9e4e-e24189eb499e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:53 compute-2 nova_compute[226829]: 2026-01-31 08:14:53.299 226833 INFO nova.virt.libvirt.driver [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Snapshot image upload complete
Jan 31 08:14:53 compute-2 nova_compute[226829]: 2026-01-31 08:14:53.300 226833 DEBUG nova.compute.manager [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:14:53 compute-2 nova_compute[226829]: 2026-01-31 08:14:53.367 226833 INFO nova.compute.manager [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Shelve offloading
Jan 31 08:14:53 compute-2 nova_compute[226829]: 2026-01-31 08:14:53.554 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:53 compute-2 nova_compute[226829]: 2026-01-31 08:14:53.557 226833 INFO nova.virt.libvirt.driver [-] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Instance destroyed successfully.
Jan 31 08:14:53 compute-2 nova_compute[226829]: 2026-01-31 08:14:53.558 226833 DEBUG nova.compute.manager [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:14:53 compute-2 nova_compute[226829]: 2026-01-31 08:14:53.560 226833 DEBUG oslo_concurrency.lockutils [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "refresh_cache-5a59388d-bade-4df0-9ac0-0022df15ea02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:14:53 compute-2 nova_compute[226829]: 2026-01-31 08:14:53.560 226833 DEBUG oslo_concurrency.lockutils [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquired lock "refresh_cache-5a59388d-bade-4df0-9ac0-0022df15ea02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:14:53 compute-2 nova_compute[226829]: 2026-01-31 08:14:53.561 226833 DEBUG nova.network.neutron [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:14:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1016822471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:54 compute-2 ceph-mon[77282]: pgmap v2383: 305 pgs: 305 active+clean; 847 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 8.4 MiB/s wr, 237 op/s
Jan 31 08:14:54 compute-2 nova_compute[226829]: 2026-01-31 08:14:54.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:14:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:14:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:54.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:14:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:14:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:54.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:14:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:14:55 compute-2 sudo[285803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:14:55 compute-2 sudo[285803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:55 compute-2 sudo[285803]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:55 compute-2 sudo[285828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:14:55 compute-2 sudo[285828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:55 compute-2 sudo[285828]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:55 compute-2 sudo[285853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:14:55 compute-2 sudo[285853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:55 compute-2 sudo[285853]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:55 compute-2 sudo[285878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:14:55 compute-2 sudo[285878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:55 compute-2 sudo[285878]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:55 compute-2 nova_compute[226829]: 2026-01-31 08:14:55.880 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:56.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:56.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:56 compute-2 nova_compute[226829]: 2026-01-31 08:14:56.902 226833 DEBUG nova.network.neutron [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Updating instance_info_cache with network_info: [{"id": "c82f9d38-ef16-46ac-829a-12067ec8c603", "address": "fa:16:3e:15:e2:fd", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc82f9d38-ef", "ovs_interfaceid": "c82f9d38-ef16-46ac-829a-12067ec8c603", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:14:56 compute-2 nova_compute[226829]: 2026-01-31 08:14:56.991 226833 DEBUG oslo_concurrency.lockutils [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Releasing lock "refresh_cache-5a59388d-bade-4df0-9ac0-0022df15ea02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:14:57 compute-2 ceph-mon[77282]: pgmap v2384: 305 pgs: 305 active+clean; 857 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 9.0 MiB/s wr, 299 op/s
Jan 31 08:14:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1135928742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:14:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3915545150' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:14:58 compute-2 ceph-mon[77282]: pgmap v2385: 305 pgs: 305 active+clean; 864 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 7.6 MiB/s wr, 288 op/s
Jan 31 08:14:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/410555706' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:14:58 compute-2 nova_compute[226829]: 2026-01-31 08:14:58.586 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:14:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:58.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:14:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:14:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:58.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:59 compute-2 podman[285936]: 2026-01-31 08:14:59.198876544 +0000 UTC m=+0.087995843 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller)
Jan 31 08:14:59 compute-2 nova_compute[226829]: 2026-01-31 08:14:59.870 226833 INFO nova.virt.libvirt.driver [-] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Instance destroyed successfully.
Jan 31 08:14:59 compute-2 nova_compute[226829]: 2026-01-31 08:14:59.871 226833 DEBUG nova.objects.instance [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'resources' on Instance uuid 5a59388d-bade-4df0-9ac0-0022df15ea02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:14:59 compute-2 nova_compute[226829]: 2026-01-31 08:14:59.921 226833 DEBUG nova.virt.libvirt.vif [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:12:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-993615947',display_name='tempest-ServerActionsTestOtherB-server-993615947',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-993615947',id=129,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIBsrFBicHYtOHR1R6vRAALJ9Bas8uQqwQxjg40t1CSqKgx9y2TPvbXQ87aJFDYMxnRLTQoY5DczCJahVhqvpmedcWwWPeQP/d3vWA175RU6Mi7x6I2zA/JKc2hVh/HOXw==',key_name='tempest-keypair-1408271761',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:13:01Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-5y6n0w3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member',shelved_at='2026-01-31T08:14:53.299988',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='70966e79-230e-420e-aa68-b04da70a01e2'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:14:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='18aee9d81d404f77ac81cde538f140d8',uuid=5a59388d-bade-4df0-9ac0-0022df15ea02,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "c82f9d38-ef16-46ac-829a-12067ec8c603", "address": "fa:16:3e:15:e2:fd", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc82f9d38-ef", "ovs_interfaceid": "c82f9d38-ef16-46ac-829a-12067ec8c603", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:14:59 compute-2 nova_compute[226829]: 2026-01-31 08:14:59.922 226833 DEBUG nova.network.os_vif_util [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "c82f9d38-ef16-46ac-829a-12067ec8c603", "address": "fa:16:3e:15:e2:fd", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc82f9d38-ef", "ovs_interfaceid": "c82f9d38-ef16-46ac-829a-12067ec8c603", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:14:59 compute-2 nova_compute[226829]: 2026-01-31 08:14:59.924 226833 DEBUG nova.network.os_vif_util [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:e2:fd,bridge_name='br-int',has_traffic_filtering=True,id=c82f9d38-ef16-46ac-829a-12067ec8c603,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc82f9d38-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:14:59 compute-2 nova_compute[226829]: 2026-01-31 08:14:59.925 226833 DEBUG os_vif [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:e2:fd,bridge_name='br-int',has_traffic_filtering=True,id=c82f9d38-ef16-46ac-829a-12067ec8c603,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc82f9d38-ef') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:14:59 compute-2 nova_compute[226829]: 2026-01-31 08:14:59.929 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:59 compute-2 nova_compute[226829]: 2026-01-31 08:14:59.930 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc82f9d38-ef, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:14:59 compute-2 nova_compute[226829]: 2026-01-31 08:14:59.934 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:59 compute-2 nova_compute[226829]: 2026-01-31 08:14:59.937 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:14:59 compute-2 nova_compute[226829]: 2026-01-31 08:14:59.942 226833 INFO os_vif [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:e2:fd,bridge_name='br-int',has_traffic_filtering=True,id=c82f9d38-ef16-46ac-829a-12067ec8c603,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc82f9d38-ef')
Jan 31 08:15:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:15:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:15:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:15:00 compute-2 ceph-mon[77282]: pgmap v2386: 305 pgs: 305 active+clean; 864 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.3 MiB/s wr, 176 op/s
Jan 31 08:15:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:15:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:15:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:15:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:15:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:15:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:15:00 compute-2 nova_compute[226829]: 2026-01-31 08:15:00.344 226833 DEBUG nova.compute.manager [req-8ea3968e-a56e-4683-94ea-ecb81c2bb837 req-d3f6e7a8-a765-4944-aafe-9ceef777ea62 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Received event network-changed-c82f9d38-ef16-46ac-829a-12067ec8c603 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:15:00 compute-2 nova_compute[226829]: 2026-01-31 08:15:00.344 226833 DEBUG nova.compute.manager [req-8ea3968e-a56e-4683-94ea-ecb81c2bb837 req-d3f6e7a8-a765-4944-aafe-9ceef777ea62 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Refreshing instance network info cache due to event network-changed-c82f9d38-ef16-46ac-829a-12067ec8c603. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:15:00 compute-2 nova_compute[226829]: 2026-01-31 08:15:00.344 226833 DEBUG oslo_concurrency.lockutils [req-8ea3968e-a56e-4683-94ea-ecb81c2bb837 req-d3f6e7a8-a765-4944-aafe-9ceef777ea62 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-5a59388d-bade-4df0-9ac0-0022df15ea02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:15:00 compute-2 nova_compute[226829]: 2026-01-31 08:15:00.344 226833 DEBUG oslo_concurrency.lockutils [req-8ea3968e-a56e-4683-94ea-ecb81c2bb837 req-d3f6e7a8-a765-4944-aafe-9ceef777ea62 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-5a59388d-bade-4df0-9ac0-0022df15ea02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:15:00 compute-2 nova_compute[226829]: 2026-01-31 08:15:00.345 226833 DEBUG nova.network.neutron [req-8ea3968e-a56e-4683-94ea-ecb81c2bb837 req-d3f6e7a8-a765-4944-aafe-9ceef777ea62 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Refreshing network info cache for port c82f9d38-ef16-46ac-829a-12067ec8c603 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:15:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:00.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:00.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:00 compute-2 nova_compute[226829]: 2026-01-31 08:15:00.874 226833 INFO nova.virt.libvirt.driver [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Deleting instance files /var/lib/nova/instances/5a59388d-bade-4df0-9ac0-0022df15ea02_del
Jan 31 08:15:00 compute-2 nova_compute[226829]: 2026-01-31 08:15:00.874 226833 INFO nova.virt.libvirt.driver [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Deletion of /var/lib/nova/instances/5a59388d-bade-4df0-9ac0-0022df15ea02_del complete
Jan 31 08:15:00 compute-2 nova_compute[226829]: 2026-01-31 08:15:00.881 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:01 compute-2 nova_compute[226829]: 2026-01-31 08:15:01.313 226833 INFO nova.scheduler.client.report [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Deleted allocations for instance 5a59388d-bade-4df0-9ac0-0022df15ea02
Jan 31 08:15:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e305 e305: 3 total, 3 up, 3 in
Jan 31 08:15:01 compute-2 nova_compute[226829]: 2026-01-31 08:15:01.481 226833 DEBUG oslo_concurrency.lockutils [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:01 compute-2 nova_compute[226829]: 2026-01-31 08:15:01.483 226833 DEBUG oslo_concurrency.lockutils [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:01 compute-2 nova_compute[226829]: 2026-01-31 08:15:01.614 226833 DEBUG oslo_concurrency.processutils [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:15:02 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/6157968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:02 compute-2 nova_compute[226829]: 2026-01-31 08:15:02.094 226833 DEBUG oslo_concurrency.processutils [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:02 compute-2 nova_compute[226829]: 2026-01-31 08:15:02.102 226833 DEBUG nova.compute.provider_tree [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:15:02 compute-2 nova_compute[226829]: 2026-01-31 08:15:02.224 226833 DEBUG nova.scheduler.client.report [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:15:02 compute-2 nova_compute[226829]: 2026-01-31 08:15:02.253 226833 DEBUG oslo_concurrency.lockutils [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:02 compute-2 nova_compute[226829]: 2026-01-31 08:15:02.347 226833 DEBUG oslo_concurrency.lockutils [None req-3d5a8b99-3e0e-44f2-bf42-bb3d3b253e78 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 17.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:02 compute-2 ceph-mon[77282]: osdmap e305: 3 total, 3 up, 3 in
Jan 31 08:15:02 compute-2 ceph-mon[77282]: pgmap v2388: 305 pgs: 305 active+clean; 840 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 169 op/s
Jan 31 08:15:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/6157968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:02 compute-2 nova_compute[226829]: 2026-01-31 08:15:02.733 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847287.7312963, 5a59388d-bade-4df0-9ac0-0022df15ea02 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:15:02 compute-2 nova_compute[226829]: 2026-01-31 08:15:02.733 226833 INFO nova.compute.manager [-] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] VM Stopped (Lifecycle Event)
Jan 31 08:15:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:02.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:02 compute-2 nova_compute[226829]: 2026-01-31 08:15:02.823 226833 DEBUG nova.compute.manager [None req-a81ef841-e9f4-4874-a02d-0f504489d398 - - - - - -] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:15:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:02.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:03 compute-2 nova_compute[226829]: 2026-01-31 08:15:03.351 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847288.3506818, 1ee317bd-390c-4a22-9e4e-e24189eb499e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:15:03 compute-2 nova_compute[226829]: 2026-01-31 08:15:03.351 226833 INFO nova.compute.manager [-] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] VM Stopped (Lifecycle Event)
Jan 31 08:15:03 compute-2 nova_compute[226829]: 2026-01-31 08:15:03.389 226833 DEBUG nova.compute.manager [None req-99d4a449-724f-477c-a38d-9cfed002361b - - - - - -] [instance: 1ee317bd-390c-4a22-9e4e-e24189eb499e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:15:04 compute-2 ovn_controller[133834]: 2026-01-31T08:15:04Z|00553|binding|INFO|Releasing lport 950341c4-aa2a-4261-8207-ff7e92fd4830 from this chassis (sb_readonly=0)
Jan 31 08:15:04 compute-2 nova_compute[226829]: 2026-01-31 08:15:04.109 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:04 compute-2 ovn_controller[133834]: 2026-01-31T08:15:04Z|00554|binding|INFO|Releasing lport 950341c4-aa2a-4261-8207-ff7e92fd4830 from this chassis (sb_readonly=0)
Jan 31 08:15:04 compute-2 nova_compute[226829]: 2026-01-31 08:15:04.165 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:04 compute-2 ceph-mon[77282]: pgmap v2389: 305 pgs: 305 active+clean; 821 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 605 KiB/s rd, 2.2 MiB/s wr, 173 op/s
Jan 31 08:15:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:04.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:04.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:04 compute-2 nova_compute[226829]: 2026-01-31 08:15:04.934 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:15:05 compute-2 nova_compute[226829]: 2026-01-31 08:15:05.438 226833 DEBUG nova.network.neutron [req-8ea3968e-a56e-4683-94ea-ecb81c2bb837 req-d3f6e7a8-a765-4944-aafe-9ceef777ea62 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Updated VIF entry in instance network info cache for port c82f9d38-ef16-46ac-829a-12067ec8c603. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:15:05 compute-2 nova_compute[226829]: 2026-01-31 08:15:05.438 226833 DEBUG nova.network.neutron [req-8ea3968e-a56e-4683-94ea-ecb81c2bb837 req-d3f6e7a8-a765-4944-aafe-9ceef777ea62 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Updating instance_info_cache with network_info: [{"id": "c82f9d38-ef16-46ac-829a-12067ec8c603", "address": "fa:16:3e:15:e2:fd", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": null, "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapc82f9d38-ef", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:15:05 compute-2 nova_compute[226829]: 2026-01-31 08:15:05.493 226833 DEBUG oslo_concurrency.lockutils [req-8ea3968e-a56e-4683-94ea-ecb81c2bb837 req-d3f6e7a8-a765-4944-aafe-9ceef777ea62 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-5a59388d-bade-4df0-9ac0-0022df15ea02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:15:05 compute-2 sudo[286008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:15:05 compute-2 sudo[286008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:05 compute-2 sudo[286008]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:05 compute-2 sudo[286033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:15:05 compute-2 sudo[286033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:05 compute-2 sudo[286033]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:05 compute-2 nova_compute[226829]: 2026-01-31 08:15:05.885 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:06 compute-2 ceph-mon[77282]: pgmap v2390: 305 pgs: 305 active+clean; 785 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 568 KiB/s rd, 1010 KiB/s wr, 130 op/s
Jan 31 08:15:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:06.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:06.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:06.889 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:06.891 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:06.892 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:08 compute-2 sudo[286060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:15:08 compute-2 sudo[286060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:08 compute-2 sudo[286060]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:08 compute-2 sudo[286085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:15:08 compute-2 sudo[286085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:08 compute-2 sudo[286085]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:15:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:08.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:15:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:08.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:08 compute-2 ceph-mon[77282]: pgmap v2391: 305 pgs: 305 active+clean; 785 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 139 op/s
Jan 31 08:15:08 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:15:08 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:15:09 compute-2 nova_compute[226829]: 2026-01-31 08:15:09.936 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:15:10 compute-2 podman[286111]: 2026-01-31 08:15:10.204226397 +0000 UTC m=+0.072719940 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:15:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:10.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:15:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:10.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:15:10 compute-2 ceph-mon[77282]: pgmap v2392: 305 pgs: 305 active+clean; 759 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 157 op/s
Jan 31 08:15:10 compute-2 nova_compute[226829]: 2026-01-31 08:15:10.922 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:12.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:15:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:12.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:15:12 compute-2 ceph-mon[77282]: pgmap v2393: 305 pgs: 305 active+clean; 730 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 148 op/s
Jan 31 08:15:13 compute-2 nova_compute[226829]: 2026-01-31 08:15:13.633 226833 DEBUG oslo_concurrency.lockutils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "5a59388d-bade-4df0-9ac0-0022df15ea02" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:13 compute-2 nova_compute[226829]: 2026-01-31 08:15:13.634 226833 DEBUG oslo_concurrency.lockutils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:13 compute-2 nova_compute[226829]: 2026-01-31 08:15:13.634 226833 INFO nova.compute.manager [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Unshelving
Jan 31 08:15:14 compute-2 nova_compute[226829]: 2026-01-31 08:15:14.019 226833 DEBUG oslo_concurrency.lockutils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:14 compute-2 nova_compute[226829]: 2026-01-31 08:15:14.020 226833 DEBUG oslo_concurrency.lockutils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:14 compute-2 nova_compute[226829]: 2026-01-31 08:15:14.026 226833 DEBUG nova.objects.instance [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'pci_requests' on Instance uuid 5a59388d-bade-4df0-9ac0-0022df15ea02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:15:14 compute-2 nova_compute[226829]: 2026-01-31 08:15:14.220 226833 DEBUG nova.objects.instance [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'numa_topology' on Instance uuid 5a59388d-bade-4df0-9ac0-0022df15ea02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:15:14 compute-2 nova_compute[226829]: 2026-01-31 08:15:14.317 226833 DEBUG nova.virt.hardware [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:15:14 compute-2 nova_compute[226829]: 2026-01-31 08:15:14.317 226833 INFO nova.compute.claims [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:15:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:14.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:14.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:14 compute-2 nova_compute[226829]: 2026-01-31 08:15:14.938 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:14 compute-2 ceph-mon[77282]: pgmap v2394: 305 pgs: 305 active+clean; 706 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 128 op/s
Jan 31 08:15:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3105481692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:14 compute-2 nova_compute[226829]: 2026-01-31 08:15:14.950 226833 DEBUG oslo_concurrency.processutils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:15:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:15:15 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1660655347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:15 compute-2 nova_compute[226829]: 2026-01-31 08:15:15.386 226833 DEBUG oslo_concurrency.processutils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:15 compute-2 nova_compute[226829]: 2026-01-31 08:15:15.394 226833 DEBUG nova.compute.provider_tree [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:15:15 compute-2 nova_compute[226829]: 2026-01-31 08:15:15.620 226833 DEBUG nova.scheduler.client.report [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:15:15 compute-2 nova_compute[226829]: 2026-01-31 08:15:15.715 226833 DEBUG oslo_concurrency.lockutils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:15 compute-2 nova_compute[226829]: 2026-01-31 08:15:15.925 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1660655347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:16 compute-2 nova_compute[226829]: 2026-01-31 08:15:16.533 226833 INFO nova.network.neutron [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Updating port c82f9d38-ef16-46ac-829a-12067ec8c603 with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 31 08:15:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:16.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:16.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:16 compute-2 ceph-mon[77282]: pgmap v2395: 305 pgs: 305 active+clean; 710 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 201 KiB/s wr, 111 op/s
Jan 31 08:15:18 compute-2 ceph-mon[77282]: pgmap v2396: 305 pgs: 305 active+clean; 710 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 455 KiB/s wr, 159 op/s
Jan 31 08:15:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:18.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:18.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:19 compute-2 nova_compute[226829]: 2026-01-31 08:15:19.004 226833 DEBUG oslo_concurrency.lockutils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "refresh_cache-5a59388d-bade-4df0-9ac0-0022df15ea02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:15:19 compute-2 nova_compute[226829]: 2026-01-31 08:15:19.005 226833 DEBUG oslo_concurrency.lockutils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquired lock "refresh_cache-5a59388d-bade-4df0-9ac0-0022df15ea02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:15:19 compute-2 nova_compute[226829]: 2026-01-31 08:15:19.005 226833 DEBUG nova.network.neutron [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:15:19 compute-2 nova_compute[226829]: 2026-01-31 08:15:19.328 226833 DEBUG nova.compute.manager [req-92ba38d1-16f5-4c59-a735-a4ddcd17a020 req-b6fc3590-455b-4075-87b3-c7c258a1748a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Received event network-changed-c82f9d38-ef16-46ac-829a-12067ec8c603 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:15:19 compute-2 nova_compute[226829]: 2026-01-31 08:15:19.329 226833 DEBUG nova.compute.manager [req-92ba38d1-16f5-4c59-a735-a4ddcd17a020 req-b6fc3590-455b-4075-87b3-c7c258a1748a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Refreshing instance network info cache due to event network-changed-c82f9d38-ef16-46ac-829a-12067ec8c603. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:15:19 compute-2 nova_compute[226829]: 2026-01-31 08:15:19.329 226833 DEBUG oslo_concurrency.lockutils [req-92ba38d1-16f5-4c59-a735-a4ddcd17a020 req-b6fc3590-455b-4075-87b3-c7c258a1748a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-5a59388d-bade-4df0-9ac0-0022df15ea02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:15:19 compute-2 nova_compute[226829]: 2026-01-31 08:15:19.842 226833 DEBUG oslo_concurrency.lockutils [None req-049e54d6-1f5a-484d-ac86-e608d277a42d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "4e4e24bf-e5fe-4be2-9d89-52432f07cca0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:19 compute-2 nova_compute[226829]: 2026-01-31 08:15:19.843 226833 DEBUG oslo_concurrency.lockutils [None req-049e54d6-1f5a-484d-ac86-e608d277a42d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "4e4e24bf-e5fe-4be2-9d89-52432f07cca0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:19 compute-2 nova_compute[226829]: 2026-01-31 08:15:19.844 226833 DEBUG oslo_concurrency.lockutils [None req-049e54d6-1f5a-484d-ac86-e608d277a42d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "4e4e24bf-e5fe-4be2-9d89-52432f07cca0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:19 compute-2 nova_compute[226829]: 2026-01-31 08:15:19.845 226833 DEBUG oslo_concurrency.lockutils [None req-049e54d6-1f5a-484d-ac86-e608d277a42d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "4e4e24bf-e5fe-4be2-9d89-52432f07cca0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:19 compute-2 nova_compute[226829]: 2026-01-31 08:15:19.846 226833 DEBUG oslo_concurrency.lockutils [None req-049e54d6-1f5a-484d-ac86-e608d277a42d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "4e4e24bf-e5fe-4be2-9d89-52432f07cca0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:19 compute-2 nova_compute[226829]: 2026-01-31 08:15:19.848 226833 INFO nova.compute.manager [None req-049e54d6-1f5a-484d-ac86-e608d277a42d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Terminating instance
Jan 31 08:15:19 compute-2 nova_compute[226829]: 2026-01-31 08:15:19.850 226833 DEBUG nova.compute.manager [None req-049e54d6-1f5a-484d-ac86-e608d277a42d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:15:19 compute-2 nova_compute[226829]: 2026-01-31 08:15:19.940 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:19 compute-2 kernel: tapa2ea5794-4b (unregistering): left promiscuous mode
Jan 31 08:15:19 compute-2 NetworkManager[48999]: <info>  [1769847319.9501] device (tapa2ea5794-4b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:15:19 compute-2 ovn_controller[133834]: 2026-01-31T08:15:19Z|00555|binding|INFO|Releasing lport a2ea5794-4bd8-4ecd-84df-7b5500101840 from this chassis (sb_readonly=0)
Jan 31 08:15:19 compute-2 ovn_controller[133834]: 2026-01-31T08:15:19Z|00556|binding|INFO|Setting lport a2ea5794-4bd8-4ecd-84df-7b5500101840 down in Southbound
Jan 31 08:15:19 compute-2 ovn_controller[133834]: 2026-01-31T08:15:19Z|00557|binding|INFO|Removing iface tapa2ea5794-4b ovn-installed in OVS
Jan 31 08:15:19 compute-2 nova_compute[226829]: 2026-01-31 08:15:19.960 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:19 compute-2 nova_compute[226829]: 2026-01-31 08:15:19.963 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:19.972 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:3b:a1 10.100.0.10'], port_security=['fa:16:3e:42:3b:a1 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4e4e24bf-e5fe-4be2-9d89-52432f07cca0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b88251fc-7610-460a-ba55-2ed186c6f696', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4aa06cf35d8c468fb16884f19dc8ce71', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9f076789-2616-4234-8eab-1fc3da7d63b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7bb3690e-e43b-4d54-9d64-4797e471bf50, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=a2ea5794-4bd8-4ecd-84df-7b5500101840) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:15:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:19.975 143841 INFO neutron.agent.ovn.metadata.agent [-] Port a2ea5794-4bd8-4ecd-84df-7b5500101840 in datapath b88251fc-7610-460a-ba55-2ed186c6f696 unbound from our chassis
Jan 31 08:15:19 compute-2 nova_compute[226829]: 2026-01-31 08:15:19.978 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:19.979 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b88251fc-7610-460a-ba55-2ed186c6f696, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:15:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:19.982 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d085c71c-7c46-43e5-aa1b-72676e317f8a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:19.983 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 namespace which is not needed anymore
Jan 31 08:15:20 compute-2 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000078.scope: Deactivated successfully.
Jan 31 08:15:20 compute-2 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000078.scope: Consumed 23.636s CPU time.
Jan 31 08:15:20 compute-2 systemd-machined[195142]: Machine qemu-54-instance-00000078 terminated.
Jan 31 08:15:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:15:20 compute-2 nova_compute[226829]: 2026-01-31 08:15:20.074 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:20 compute-2 nova_compute[226829]: 2026-01-31 08:15:20.079 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:20 compute-2 nova_compute[226829]: 2026-01-31 08:15:20.091 226833 INFO nova.virt.libvirt.driver [-] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Instance destroyed successfully.
Jan 31 08:15:20 compute-2 nova_compute[226829]: 2026-01-31 08:15:20.091 226833 DEBUG nova.objects.instance [None req-049e54d6-1f5a-484d-ac86-e608d277a42d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lazy-loading 'resources' on Instance uuid 4e4e24bf-e5fe-4be2-9d89-52432f07cca0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:15:20 compute-2 nova_compute[226829]: 2026-01-31 08:15:20.120 226833 DEBUG nova.virt.libvirt.vif [None req-049e54d6-1f5a-484d-ac86-e608d277a42d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:10:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-₡-921122999',display_name='tempest-₡-921122999',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest--921122999',id=120,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:11:03Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4aa06cf35d8c468fb16884f19dc8ce71',ramdisk_id='',reservation_id='r-y30ad00c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-327201738',o
wner_user_name='tempest-ServersTestJSON-327201738-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:11:03Z,user_data=None,user_id='5366d122b359489fb9d2bda8d19611a6',uuid=4e4e24bf-e5fe-4be2-9d89-52432f07cca0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a2ea5794-4bd8-4ecd-84df-7b5500101840", "address": "fa:16:3e:42:3b:a1", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ea5794-4b", "ovs_interfaceid": "a2ea5794-4bd8-4ecd-84df-7b5500101840", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:15:20 compute-2 nova_compute[226829]: 2026-01-31 08:15:20.121 226833 DEBUG nova.network.os_vif_util [None req-049e54d6-1f5a-484d-ac86-e608d277a42d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converting VIF {"id": "a2ea5794-4bd8-4ecd-84df-7b5500101840", "address": "fa:16:3e:42:3b:a1", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2ea5794-4b", "ovs_interfaceid": "a2ea5794-4bd8-4ecd-84df-7b5500101840", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:15:20 compute-2 nova_compute[226829]: 2026-01-31 08:15:20.122 226833 DEBUG nova.network.os_vif_util [None req-049e54d6-1f5a-484d-ac86-e608d277a42d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:42:3b:a1,bridge_name='br-int',has_traffic_filtering=True,id=a2ea5794-4bd8-4ecd-84df-7b5500101840,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2ea5794-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:15:20 compute-2 nova_compute[226829]: 2026-01-31 08:15:20.122 226833 DEBUG os_vif [None req-049e54d6-1f5a-484d-ac86-e608d277a42d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:3b:a1,bridge_name='br-int',has_traffic_filtering=True,id=a2ea5794-4bd8-4ecd-84df-7b5500101840,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2ea5794-4b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:15:20 compute-2 nova_compute[226829]: 2026-01-31 08:15:20.125 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:20 compute-2 nova_compute[226829]: 2026-01-31 08:15:20.125 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa2ea5794-4b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:20 compute-2 nova_compute[226829]: 2026-01-31 08:15:20.129 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:15:20 compute-2 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[281499]: [NOTICE]   (281519) : haproxy version is 2.8.14-c23fe91
Jan 31 08:15:20 compute-2 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[281499]: [NOTICE]   (281519) : path to executable is /usr/sbin/haproxy
Jan 31 08:15:20 compute-2 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[281499]: [ALERT]    (281519) : Current worker (281523) exited with code 143 (Terminated)
Jan 31 08:15:20 compute-2 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[281499]: [WARNING]  (281519) : All workers exited. Exiting... (0)
Jan 31 08:15:20 compute-2 systemd[1]: libpod-4dc8e6042a3e86e4aa4721876342b6ca9af66aa3eeb166daca855d7f0e471080.scope: Deactivated successfully.
Jan 31 08:15:20 compute-2 nova_compute[226829]: 2026-01-31 08:15:20.134 226833 INFO os_vif [None req-049e54d6-1f5a-484d-ac86-e608d277a42d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:3b:a1,bridge_name='br-int',has_traffic_filtering=True,id=a2ea5794-4bd8-4ecd-84df-7b5500101840,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2ea5794-4b')
Jan 31 08:15:20 compute-2 podman[286184]: 2026-01-31 08:15:20.143275265 +0000 UTC m=+0.060357535 container died 4dc8e6042a3e86e4aa4721876342b6ca9af66aa3eeb166daca855d7f0e471080 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:15:20 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4dc8e6042a3e86e4aa4721876342b6ca9af66aa3eeb166daca855d7f0e471080-userdata-shm.mount: Deactivated successfully.
Jan 31 08:15:20 compute-2 systemd[1]: var-lib-containers-storage-overlay-75f84e6587dc69dfe02207f20ebab9236133448782a1fe80be35b6d91298bdb3-merged.mount: Deactivated successfully.
Jan 31 08:15:20 compute-2 podman[286184]: 2026-01-31 08:15:20.191525102 +0000 UTC m=+0.108607372 container cleanup 4dc8e6042a3e86e4aa4721876342b6ca9af66aa3eeb166daca855d7f0e471080 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:15:20 compute-2 systemd[1]: libpod-conmon-4dc8e6042a3e86e4aa4721876342b6ca9af66aa3eeb166daca855d7f0e471080.scope: Deactivated successfully.
Jan 31 08:15:20 compute-2 podman[286241]: 2026-01-31 08:15:20.254398875 +0000 UTC m=+0.042303028 container remove 4dc8e6042a3e86e4aa4721876342b6ca9af66aa3eeb166daca855d7f0e471080 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:15:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:20.258 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0dfadd44-9d87-4748-a7dc-079a84524abe]: (4, ('Sat Jan 31 08:15:20 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 (4dc8e6042a3e86e4aa4721876342b6ca9af66aa3eeb166daca855d7f0e471080)\n4dc8e6042a3e86e4aa4721876342b6ca9af66aa3eeb166daca855d7f0e471080\nSat Jan 31 08:15:20 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 (4dc8e6042a3e86e4aa4721876342b6ca9af66aa3eeb166daca855d7f0e471080)\n4dc8e6042a3e86e4aa4721876342b6ca9af66aa3eeb166daca855d7f0e471080\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:20.261 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f7558c9c-ce00-412b-aabe-8bc155e1c4b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:20.263 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb88251fc-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:20 compute-2 nova_compute[226829]: 2026-01-31 08:15:20.266 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:20 compute-2 kernel: tapb88251fc-70: left promiscuous mode
Jan 31 08:15:20 compute-2 nova_compute[226829]: 2026-01-31 08:15:20.271 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:20.274 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e796e90e-c56b-47d6-bbca-d2bbcf012115]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:20.297 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3517a15c-807d-4e2e-9108-ef6eb0ef9f33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:20.298 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[206a8ae9-61b0-4526-a6a9-f37146543ddf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:20.318 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[db546c3b-6fa2-4b43-a1e5-05bb1b4c843f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731517, 'reachable_time': 21381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286256, 'error': None, 'target': 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:20 compute-2 systemd[1]: run-netns-ovnmeta\x2db88251fc\x2d7610\x2d460a\x2dba55\x2d2ed186c6f696.mount: Deactivated successfully.
Jan 31 08:15:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:20.322 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:15:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:20.322 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[4a1d4c3e-50eb-4f74-9b29-6a1696b6b2ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:20.368 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:15:20 compute-2 nova_compute[226829]: 2026-01-31 08:15:20.368 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:20.370 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:15:20 compute-2 nova_compute[226829]: 2026-01-31 08:15:20.635 226833 INFO nova.virt.libvirt.driver [None req-049e54d6-1f5a-484d-ac86-e608d277a42d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Deleting instance files /var/lib/nova/instances/4e4e24bf-e5fe-4be2-9d89-52432f07cca0_del
Jan 31 08:15:20 compute-2 nova_compute[226829]: 2026-01-31 08:15:20.637 226833 INFO nova.virt.libvirt.driver [None req-049e54d6-1f5a-484d-ac86-e608d277a42d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Deletion of /var/lib/nova/instances/4e4e24bf-e5fe-4be2-9d89-52432f07cca0_del complete
Jan 31 08:15:20 compute-2 nova_compute[226829]: 2026-01-31 08:15:20.735 226833 INFO nova.compute.manager [None req-049e54d6-1f5a-484d-ac86-e608d277a42d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Took 0.88 seconds to destroy the instance on the hypervisor.
Jan 31 08:15:20 compute-2 nova_compute[226829]: 2026-01-31 08:15:20.736 226833 DEBUG oslo.service.loopingcall [None req-049e54d6-1f5a-484d-ac86-e608d277a42d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:15:20 compute-2 nova_compute[226829]: 2026-01-31 08:15:20.737 226833 DEBUG nova.compute.manager [-] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:15:20 compute-2 nova_compute[226829]: 2026-01-31 08:15:20.737 226833 DEBUG nova.network.neutron [-] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:15:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:20.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:20 compute-2 ceph-mon[77282]: pgmap v2397: 305 pgs: 305 active+clean; 706 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 455 KiB/s wr, 142 op/s
Jan 31 08:15:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:15:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:20.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:15:20 compute-2 nova_compute[226829]: 2026-01-31 08:15:20.927 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:21 compute-2 nova_compute[226829]: 2026-01-31 08:15:21.323 226833 DEBUG nova.compute.manager [req-8eb691bd-9990-4710-9fde-951af5b95196 req-6a0cab48-a8d6-4ee2-b809-3d0a117250db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Received event network-vif-unplugged-a2ea5794-4bd8-4ecd-84df-7b5500101840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:15:21 compute-2 nova_compute[226829]: 2026-01-31 08:15:21.323 226833 DEBUG oslo_concurrency.lockutils [req-8eb691bd-9990-4710-9fde-951af5b95196 req-6a0cab48-a8d6-4ee2-b809-3d0a117250db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4e4e24bf-e5fe-4be2-9d89-52432f07cca0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:21 compute-2 nova_compute[226829]: 2026-01-31 08:15:21.324 226833 DEBUG oslo_concurrency.lockutils [req-8eb691bd-9990-4710-9fde-951af5b95196 req-6a0cab48-a8d6-4ee2-b809-3d0a117250db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4e4e24bf-e5fe-4be2-9d89-52432f07cca0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:21 compute-2 nova_compute[226829]: 2026-01-31 08:15:21.325 226833 DEBUG oslo_concurrency.lockutils [req-8eb691bd-9990-4710-9fde-951af5b95196 req-6a0cab48-a8d6-4ee2-b809-3d0a117250db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4e4e24bf-e5fe-4be2-9d89-52432f07cca0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:21 compute-2 nova_compute[226829]: 2026-01-31 08:15:21.325 226833 DEBUG nova.compute.manager [req-8eb691bd-9990-4710-9fde-951af5b95196 req-6a0cab48-a8d6-4ee2-b809-3d0a117250db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] No waiting events found dispatching network-vif-unplugged-a2ea5794-4bd8-4ecd-84df-7b5500101840 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:15:21 compute-2 nova_compute[226829]: 2026-01-31 08:15:21.326 226833 DEBUG nova.compute.manager [req-8eb691bd-9990-4710-9fde-951af5b95196 req-6a0cab48-a8d6-4ee2-b809-3d0a117250db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Received event network-vif-unplugged-a2ea5794-4bd8-4ecd-84df-7b5500101840 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:15:21 compute-2 nova_compute[226829]: 2026-01-31 08:15:21.326 226833 DEBUG nova.compute.manager [req-8eb691bd-9990-4710-9fde-951af5b95196 req-6a0cab48-a8d6-4ee2-b809-3d0a117250db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Received event network-vif-plugged-a2ea5794-4bd8-4ecd-84df-7b5500101840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:15:21 compute-2 nova_compute[226829]: 2026-01-31 08:15:21.327 226833 DEBUG oslo_concurrency.lockutils [req-8eb691bd-9990-4710-9fde-951af5b95196 req-6a0cab48-a8d6-4ee2-b809-3d0a117250db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4e4e24bf-e5fe-4be2-9d89-52432f07cca0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:21 compute-2 nova_compute[226829]: 2026-01-31 08:15:21.328 226833 DEBUG oslo_concurrency.lockutils [req-8eb691bd-9990-4710-9fde-951af5b95196 req-6a0cab48-a8d6-4ee2-b809-3d0a117250db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4e4e24bf-e5fe-4be2-9d89-52432f07cca0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:21 compute-2 nova_compute[226829]: 2026-01-31 08:15:21.328 226833 DEBUG oslo_concurrency.lockutils [req-8eb691bd-9990-4710-9fde-951af5b95196 req-6a0cab48-a8d6-4ee2-b809-3d0a117250db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4e4e24bf-e5fe-4be2-9d89-52432f07cca0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:21 compute-2 nova_compute[226829]: 2026-01-31 08:15:21.329 226833 DEBUG nova.compute.manager [req-8eb691bd-9990-4710-9fde-951af5b95196 req-6a0cab48-a8d6-4ee2-b809-3d0a117250db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] No waiting events found dispatching network-vif-plugged-a2ea5794-4bd8-4ecd-84df-7b5500101840 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:15:21 compute-2 nova_compute[226829]: 2026-01-31 08:15:21.329 226833 WARNING nova.compute.manager [req-8eb691bd-9990-4710-9fde-951af5b95196 req-6a0cab48-a8d6-4ee2-b809-3d0a117250db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Received unexpected event network-vif-plugged-a2ea5794-4bd8-4ecd-84df-7b5500101840 for instance with vm_state active and task_state deleting.
Jan 31 08:15:22 compute-2 nova_compute[226829]: 2026-01-31 08:15:22.182 226833 DEBUG nova.network.neutron [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Updating instance_info_cache with network_info: [{"id": "c82f9d38-ef16-46ac-829a-12067ec8c603", "address": "fa:16:3e:15:e2:fd", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc82f9d38-ef", "ovs_interfaceid": "c82f9d38-ef16-46ac-829a-12067ec8c603", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:15:22 compute-2 nova_compute[226829]: 2026-01-31 08:15:22.252 226833 DEBUG oslo_concurrency.lockutils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Releasing lock "refresh_cache-5a59388d-bade-4df0-9ac0-0022df15ea02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:15:22 compute-2 nova_compute[226829]: 2026-01-31 08:15:22.254 226833 DEBUG nova.virt.libvirt.driver [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:15:22 compute-2 nova_compute[226829]: 2026-01-31 08:15:22.255 226833 INFO nova.virt.libvirt.driver [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Creating image(s)
Jan 31 08:15:22 compute-2 nova_compute[226829]: 2026-01-31 08:15:22.287 226833 DEBUG nova.storage.rbd_utils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 5a59388d-bade-4df0-9ac0-0022df15ea02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:15:22 compute-2 nova_compute[226829]: 2026-01-31 08:15:22.291 226833 DEBUG nova.objects.instance [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5a59388d-bade-4df0-9ac0-0022df15ea02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:15:22 compute-2 nova_compute[226829]: 2026-01-31 08:15:22.293 226833 DEBUG oslo_concurrency.lockutils [req-92ba38d1-16f5-4c59-a735-a4ddcd17a020 req-b6fc3590-455b-4075-87b3-c7c258a1748a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-5a59388d-bade-4df0-9ac0-0022df15ea02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:15:22 compute-2 nova_compute[226829]: 2026-01-31 08:15:22.293 226833 DEBUG nova.network.neutron [req-92ba38d1-16f5-4c59-a735-a4ddcd17a020 req-b6fc3590-455b-4075-87b3-c7c258a1748a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Refreshing network info cache for port c82f9d38-ef16-46ac-829a-12067ec8c603 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:15:22 compute-2 nova_compute[226829]: 2026-01-31 08:15:22.385 226833 DEBUG nova.storage.rbd_utils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 5a59388d-bade-4df0-9ac0-0022df15ea02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:15:22 compute-2 nova_compute[226829]: 2026-01-31 08:15:22.430 226833 DEBUG nova.storage.rbd_utils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 5a59388d-bade-4df0-9ac0-0022df15ea02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:15:22 compute-2 nova_compute[226829]: 2026-01-31 08:15:22.436 226833 DEBUG oslo_concurrency.lockutils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "b115dadf86958ae661c96954e20bf7b25a76daec" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:22 compute-2 nova_compute[226829]: 2026-01-31 08:15:22.438 226833 DEBUG oslo_concurrency.lockutils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "b115dadf86958ae661c96954e20bf7b25a76daec" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:22 compute-2 nova_compute[226829]: 2026-01-31 08:15:22.466 226833 DEBUG nova.network.neutron [-] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:15:22 compute-2 nova_compute[226829]: 2026-01-31 08:15:22.517 226833 INFO nova.compute.manager [-] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Took 1.78 seconds to deallocate network for instance.
Jan 31 08:15:22 compute-2 nova_compute[226829]: 2026-01-31 08:15:22.608 226833 DEBUG nova.compute.manager [req-a2506732-0ba0-4b6d-a7d7-c3590ffa30b8 req-92021a45-c4d8-4d78-93e4-69d3ac1727e5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Received event network-vif-deleted-a2ea5794-4bd8-4ecd-84df-7b5500101840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:15:22 compute-2 nova_compute[226829]: 2026-01-31 08:15:22.618 226833 DEBUG oslo_concurrency.lockutils [None req-049e54d6-1f5a-484d-ac86-e608d277a42d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:22 compute-2 nova_compute[226829]: 2026-01-31 08:15:22.619 226833 DEBUG oslo_concurrency.lockutils [None req-049e54d6-1f5a-484d-ac86-e608d277a42d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:22 compute-2 nova_compute[226829]: 2026-01-31 08:15:22.743 226833 DEBUG oslo_concurrency.processutils [None req-049e54d6-1f5a-484d-ac86-e608d277a42d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:22.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:22 compute-2 ceph-mon[77282]: pgmap v2398: 305 pgs: 305 active+clean; 679 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 455 KiB/s wr, 136 op/s
Jan 31 08:15:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:22.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:23 compute-2 nova_compute[226829]: 2026-01-31 08:15:23.015 226833 DEBUG nova.virt.libvirt.imagebackend [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Image locations are: [{'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/70966e79-230e-420e-aa68-b04da70a01e2/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/70966e79-230e-420e-aa68-b04da70a01e2/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 31 08:15:23 compute-2 nova_compute[226829]: 2026-01-31 08:15:23.100 226833 DEBUG nova.virt.libvirt.imagebackend [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Selected location: {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/70966e79-230e-420e-aa68-b04da70a01e2/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 31 08:15:23 compute-2 nova_compute[226829]: 2026-01-31 08:15:23.101 226833 DEBUG nova.storage.rbd_utils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] cloning images/70966e79-230e-420e-aa68-b04da70a01e2@snap to None/5a59388d-bade-4df0-9ac0-0022df15ea02_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 08:15:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:15:23 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3764220402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:23 compute-2 nova_compute[226829]: 2026-01-31 08:15:23.183 226833 DEBUG oslo_concurrency.processutils [None req-049e54d6-1f5a-484d-ac86-e608d277a42d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:23 compute-2 nova_compute[226829]: 2026-01-31 08:15:23.190 226833 DEBUG nova.compute.provider_tree [None req-049e54d6-1f5a-484d-ac86-e608d277a42d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:15:23 compute-2 nova_compute[226829]: 2026-01-31 08:15:23.229 226833 DEBUG nova.scheduler.client.report [None req-049e54d6-1f5a-484d-ac86-e608d277a42d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:15:23 compute-2 nova_compute[226829]: 2026-01-31 08:15:23.252 226833 DEBUG oslo_concurrency.lockutils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "b115dadf86958ae661c96954e20bf7b25a76daec" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:23 compute-2 nova_compute[226829]: 2026-01-31 08:15:23.297 226833 DEBUG oslo_concurrency.lockutils [None req-049e54d6-1f5a-484d-ac86-e608d277a42d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:23 compute-2 nova_compute[226829]: 2026-01-31 08:15:23.437 226833 INFO nova.scheduler.client.report [None req-049e54d6-1f5a-484d-ac86-e608d277a42d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Deleted allocations for instance 4e4e24bf-e5fe-4be2-9d89-52432f07cca0
Jan 31 08:15:23 compute-2 nova_compute[226829]: 2026-01-31 08:15:23.450 226833 DEBUG nova.objects.instance [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'migration_context' on Instance uuid 5a59388d-bade-4df0-9ac0-0022df15ea02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:15:23 compute-2 nova_compute[226829]: 2026-01-31 08:15:23.543 226833 DEBUG nova.storage.rbd_utils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] flattening vms/5a59388d-bade-4df0-9ac0-0022df15ea02_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 31 08:15:23 compute-2 nova_compute[226829]: 2026-01-31 08:15:23.613 226833 DEBUG oslo_concurrency.lockutils [None req-049e54d6-1f5a-484d-ac86-e608d277a42d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "4e4e24bf-e5fe-4be2-9d89-52432f07cca0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3764220402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.064 226833 DEBUG nova.virt.libvirt.driver [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Image rbd:vms/5a59388d-bade-4df0-9ac0-0022df15ea02_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.066 226833 DEBUG nova.virt.libvirt.driver [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.066 226833 DEBUG nova.virt.libvirt.driver [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Ensure instance console log exists: /var/lib/nova/instances/5a59388d-bade-4df0-9ac0-0022df15ea02/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.067 226833 DEBUG oslo_concurrency.lockutils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.067 226833 DEBUG oslo_concurrency.lockutils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.067 226833 DEBUG oslo_concurrency.lockutils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.071 226833 DEBUG nova.virt.libvirt.driver [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Start _get_guest_xml network_info=[{"id": "c82f9d38-ef16-46ac-829a-12067ec8c603", "address": "fa:16:3e:15:e2:fd", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc82f9d38-ef", "ovs_interfaceid": "c82f9d38-ef16-46ac-829a-12067ec8c603", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-31T08:14:44Z,direct_url=<?>,disk_format='raw',id=70966e79-230e-420e-aa68-b04da70a01e2,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-993615947-shelved',owner='c3ddadeb950a490db5c99da98a32c9ec',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-31T08:14:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.076 226833 WARNING nova.virt.libvirt.driver [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.083 226833 DEBUG nova.virt.libvirt.host [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.083 226833 DEBUG nova.virt.libvirt.host [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.088 226833 DEBUG nova.virt.libvirt.host [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.089 226833 DEBUG nova.virt.libvirt.host [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.091 226833 DEBUG nova.virt.libvirt.driver [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.092 226833 DEBUG nova.virt.hardware [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-31T08:14:44Z,direct_url=<?>,disk_format='raw',id=70966e79-230e-420e-aa68-b04da70a01e2,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-993615947-shelved',owner='c3ddadeb950a490db5c99da98a32c9ec',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-31T08:14:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.093 226833 DEBUG nova.virt.hardware [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.093 226833 DEBUG nova.virt.hardware [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.094 226833 DEBUG nova.virt.hardware [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.094 226833 DEBUG nova.virt.hardware [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.095 226833 DEBUG nova.virt.hardware [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.095 226833 DEBUG nova.virt.hardware [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.096 226833 DEBUG nova.virt.hardware [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.096 226833 DEBUG nova.virt.hardware [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.097 226833 DEBUG nova.virt.hardware [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.097 226833 DEBUG nova.virt.hardware [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.098 226833 DEBUG nova.objects.instance [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5a59388d-bade-4df0-9ac0-0022df15ea02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.118 226833 DEBUG oslo_concurrency.processutils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.328 226833 DEBUG nova.network.neutron [req-92ba38d1-16f5-4c59-a735-a4ddcd17a020 req-b6fc3590-455b-4075-87b3-c7c258a1748a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Updated VIF entry in instance network info cache for port c82f9d38-ef16-46ac-829a-12067ec8c603. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.329 226833 DEBUG nova.network.neutron [req-92ba38d1-16f5-4c59-a735-a4ddcd17a020 req-b6fc3590-455b-4075-87b3-c7c258a1748a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Updating instance_info_cache with network_info: [{"id": "c82f9d38-ef16-46ac-829a-12067ec8c603", "address": "fa:16:3e:15:e2:fd", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc82f9d38-ef", "ovs_interfaceid": "c82f9d38-ef16-46ac-829a-12067ec8c603", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.362 226833 DEBUG oslo_concurrency.lockutils [req-92ba38d1-16f5-4c59-a735-a4ddcd17a020 req-b6fc3590-455b-4075-87b3-c7c258a1748a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-5a59388d-bade-4df0-9ac0-0022df15ea02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.522 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:15:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:15:24 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/192795781' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.618 226833 DEBUG oslo_concurrency.processutils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.654 226833 DEBUG nova.storage.rbd_utils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 5a59388d-bade-4df0-9ac0-0022df15ea02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:15:24 compute-2 nova_compute[226829]: 2026-01-31 08:15:24.660 226833 DEBUG oslo_concurrency.processutils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:24.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:15:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:24.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:15:24 compute-2 ceph-mon[77282]: pgmap v2399: 305 pgs: 305 active+clean; 670 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 455 KiB/s wr, 133 op/s
Jan 31 08:15:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/192795781' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:15:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:15:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:15:25 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3434580900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.088 226833 DEBUG oslo_concurrency.processutils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.090 226833 DEBUG nova.virt.libvirt.vif [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:12:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-993615947',display_name='tempest-ServerActionsTestOtherB-server-993615947',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-993615947',id=129,image_ref='70966e79-230e-420e-aa68-b04da70a01e2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-1408271761',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:13:01Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-5y6n0w3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member',shelved_at='2026-01-31T08:14:53.299988',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='70966e79-230e-420e-aa68-b04da70a01e2'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:15:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='18aee9d81d404f77ac81cde538f140d8',uuid=5a59388d-bade-4df0-9ac0-0022df15ea02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "c82f9d38-ef16-46ac-829a-12067ec8c603", "address": "fa:16:3e:15:e2:fd", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc82f9d38-ef", "ovs_interfaceid": "c82f9d38-ef16-46ac-829a-12067ec8c603", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.090 226833 DEBUG nova.network.os_vif_util [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "c82f9d38-ef16-46ac-829a-12067ec8c603", "address": "fa:16:3e:15:e2:fd", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc82f9d38-ef", "ovs_interfaceid": "c82f9d38-ef16-46ac-829a-12067ec8c603", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.091 226833 DEBUG nova.network.os_vif_util [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:e2:fd,bridge_name='br-int',has_traffic_filtering=True,id=c82f9d38-ef16-46ac-829a-12067ec8c603,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc82f9d38-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.092 226833 DEBUG nova.objects.instance [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'pci_devices' on Instance uuid 5a59388d-bade-4df0-9ac0-0022df15ea02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.121 226833 DEBUG nova.virt.libvirt.driver [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:15:25 compute-2 nova_compute[226829]:   <uuid>5a59388d-bade-4df0-9ac0-0022df15ea02</uuid>
Jan 31 08:15:25 compute-2 nova_compute[226829]:   <name>instance-00000081</name>
Jan 31 08:15:25 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:15:25 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:15:25 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <nova:name>tempest-ServerActionsTestOtherB-server-993615947</nova:name>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:15:24</nova:creationTime>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:15:25 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:15:25 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:15:25 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:15:25 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:15:25 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:15:25 compute-2 nova_compute[226829]:         <nova:user uuid="18aee9d81d404f77ac81cde538f140d8">tempest-ServerActionsTestOtherB-2012907318-project-member</nova:user>
Jan 31 08:15:25 compute-2 nova_compute[226829]:         <nova:project uuid="c3ddadeb950a490db5c99da98a32c9ec">tempest-ServerActionsTestOtherB-2012907318</nova:project>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="70966e79-230e-420e-aa68-b04da70a01e2"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:15:25 compute-2 nova_compute[226829]:         <nova:port uuid="c82f9d38-ef16-46ac-829a-12067ec8c603">
Jan 31 08:15:25 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:15:25 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:15:25 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <system>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <entry name="serial">5a59388d-bade-4df0-9ac0-0022df15ea02</entry>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <entry name="uuid">5a59388d-bade-4df0-9ac0-0022df15ea02</entry>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     </system>
Jan 31 08:15:25 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:15:25 compute-2 nova_compute[226829]:   <os>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:   </os>
Jan 31 08:15:25 compute-2 nova_compute[226829]:   <features>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:   </features>
Jan 31 08:15:25 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:15:25 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:15:25 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/5a59388d-bade-4df0-9ac0-0022df15ea02_disk">
Jan 31 08:15:25 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       </source>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:15:25 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/5a59388d-bade-4df0-9ac0-0022df15ea02_disk.config">
Jan 31 08:15:25 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       </source>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:15:25 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:15:e2:fd"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <target dev="tapc82f9d38-ef"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/5a59388d-bade-4df0-9ac0-0022df15ea02/console.log" append="off"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <video>
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     </video>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <input type="keyboard" bus="usb"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:15:25 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:15:25 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:15:25 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:15:25 compute-2 nova_compute[226829]: </domain>
Jan 31 08:15:25 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.123 226833 DEBUG nova.compute.manager [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Preparing to wait for external event network-vif-plugged-c82f9d38-ef16-46ac-829a-12067ec8c603 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.124 226833 DEBUG oslo_concurrency.lockutils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.124 226833 DEBUG oslo_concurrency.lockutils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.125 226833 DEBUG oslo_concurrency.lockutils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.126 226833 DEBUG nova.virt.libvirt.vif [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:12:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-993615947',display_name='tempest-ServerActionsTestOtherB-server-993615947',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-993615947',id=129,image_ref='70966e79-230e-420e-aa68-b04da70a01e2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-1408271761',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:13:01Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-5y6n0w3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member',shelved_at='2026-01-31T08:14:53.299988',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='70966e79-230e-420e-aa68-b04da70a01e2'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:15:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='18aee9d81d404f77ac81cde538f140d8',uuid=5a59388d-bade-4df0-9ac0-0022df15ea02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "c82f9d38-ef16-46ac-829a-12067ec8c603", "address": "fa:16:3e:15:e2:fd", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc82f9d38-ef", "ovs_interfaceid": "c82f9d38-ef16-46ac-829a-12067ec8c603", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.126 226833 DEBUG nova.network.os_vif_util [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "c82f9d38-ef16-46ac-829a-12067ec8c603", "address": "fa:16:3e:15:e2:fd", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc82f9d38-ef", "ovs_interfaceid": "c82f9d38-ef16-46ac-829a-12067ec8c603", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.127 226833 DEBUG nova.network.os_vif_util [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:e2:fd,bridge_name='br-int',has_traffic_filtering=True,id=c82f9d38-ef16-46ac-829a-12067ec8c603,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc82f9d38-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.128 226833 DEBUG os_vif [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:e2:fd,bridge_name='br-int',has_traffic_filtering=True,id=c82f9d38-ef16-46ac-829a-12067ec8c603,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc82f9d38-ef') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.128 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.129 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.130 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.130 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.133 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.134 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc82f9d38-ef, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.134 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc82f9d38-ef, col_values=(('external_ids', {'iface-id': 'c82f9d38-ef16-46ac-829a-12067ec8c603', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:e2:fd', 'vm-uuid': '5a59388d-bade-4df0-9ac0-0022df15ea02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.136 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:25 compute-2 NetworkManager[48999]: <info>  [1769847325.1380] manager: (tapc82f9d38-ef): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/267)
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.140 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.141 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.144 226833 INFO os_vif [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:e2:fd,bridge_name='br-int',has_traffic_filtering=True,id=c82f9d38-ef16-46ac-829a-12067ec8c603,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc82f9d38-ef')
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.235 226833 DEBUG nova.virt.libvirt.driver [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.236 226833 DEBUG nova.virt.libvirt.driver [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.236 226833 DEBUG nova.virt.libvirt.driver [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No VIF found with MAC fa:16:3e:15:e2:fd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.237 226833 INFO nova.virt.libvirt.driver [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Using config drive
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.270 226833 DEBUG nova.storage.rbd_utils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 5a59388d-bade-4df0-9ac0-0022df15ea02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.313 226833 DEBUG nova.objects.instance [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'ec2_ids' on Instance uuid 5a59388d-bade-4df0-9ac0-0022df15ea02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.392 226833 DEBUG nova.objects.instance [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'keypairs' on Instance uuid 5a59388d-bade-4df0-9ac0-0022df15ea02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:15:25 compute-2 sudo[286578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:15:25 compute-2 sudo[286578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:25 compute-2 sudo[286578]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:25 compute-2 sudo[286603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:15:25 compute-2 sudo[286603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:25 compute-2 sudo[286603]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:25 compute-2 nova_compute[226829]: 2026-01-31 08:15:25.928 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3434580900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:15:26 compute-2 nova_compute[226829]: 2026-01-31 08:15:26.013 226833 INFO nova.virt.libvirt.driver [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Creating config drive at /var/lib/nova/instances/5a59388d-bade-4df0-9ac0-0022df15ea02/disk.config
Jan 31 08:15:26 compute-2 nova_compute[226829]: 2026-01-31 08:15:26.018 226833 DEBUG oslo_concurrency.processutils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5a59388d-bade-4df0-9ac0-0022df15ea02/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbeu6oy4d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:26 compute-2 nova_compute[226829]: 2026-01-31 08:15:26.146 226833 DEBUG oslo_concurrency.processutils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5a59388d-bade-4df0-9ac0-0022df15ea02/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbeu6oy4d" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:26 compute-2 nova_compute[226829]: 2026-01-31 08:15:26.170 226833 DEBUG nova.storage.rbd_utils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 5a59388d-bade-4df0-9ac0-0022df15ea02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:15:26 compute-2 nova_compute[226829]: 2026-01-31 08:15:26.173 226833 DEBUG oslo_concurrency.processutils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5a59388d-bade-4df0-9ac0-0022df15ea02/disk.config 5a59388d-bade-4df0-9ac0-0022df15ea02_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:26 compute-2 nova_compute[226829]: 2026-01-31 08:15:26.691 226833 DEBUG oslo_concurrency.processutils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5a59388d-bade-4df0-9ac0-0022df15ea02/disk.config 5a59388d-bade-4df0-9ac0-0022df15ea02_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:26 compute-2 nova_compute[226829]: 2026-01-31 08:15:26.692 226833 INFO nova.virt.libvirt.driver [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Deleting local config drive /var/lib/nova/instances/5a59388d-bade-4df0-9ac0-0022df15ea02/disk.config because it was imported into RBD.
Jan 31 08:15:26 compute-2 kernel: tapc82f9d38-ef: entered promiscuous mode
Jan 31 08:15:26 compute-2 NetworkManager[48999]: <info>  [1769847326.7395] manager: (tapc82f9d38-ef): new Tun device (/org/freedesktop/NetworkManager/Devices/268)
Jan 31 08:15:26 compute-2 systemd-udevd[286679]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:15:26 compute-2 ovn_controller[133834]: 2026-01-31T08:15:26Z|00558|binding|INFO|Claiming lport c82f9d38-ef16-46ac-829a-12067ec8c603 for this chassis.
Jan 31 08:15:26 compute-2 ovn_controller[133834]: 2026-01-31T08:15:26Z|00559|binding|INFO|c82f9d38-ef16-46ac-829a-12067ec8c603: Claiming fa:16:3e:15:e2:fd 10.100.0.14
Jan 31 08:15:26 compute-2 nova_compute[226829]: 2026-01-31 08:15:26.778 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:26 compute-2 nova_compute[226829]: 2026-01-31 08:15:26.783 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:26 compute-2 NetworkManager[48999]: <info>  [1769847326.7901] device (tapc82f9d38-ef): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:15:26 compute-2 NetworkManager[48999]: <info>  [1769847326.7907] device (tapc82f9d38-ef): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:15:26 compute-2 nova_compute[226829]: 2026-01-31 08:15:26.789 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:26 compute-2 NetworkManager[48999]: <info>  [1769847326.7917] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/269)
Jan 31 08:15:26 compute-2 NetworkManager[48999]: <info>  [1769847326.7922] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/270)
Jan 31 08:15:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:26.799 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:e2:fd 10.100.0.14'], port_security=['fa:16:3e:15:e2:fd 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '5a59388d-bade-4df0-9ac0-0022df15ea02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3ddadeb950a490db5c99da98a32c9ec', 'neutron:revision_number': '7', 'neutron:security_group_ids': '5b02cc0a-856b-4d31-80e9-eccd1c696448', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17e596e7-33b3-44a6-9cbf-f9eacfd974b4, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=c82f9d38-ef16-46ac-829a-12067ec8c603) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:15:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:26.800 143841 INFO neutron.agent.ovn.metadata.agent [-] Port c82f9d38-ef16-46ac-829a-12067ec8c603 in datapath e8014d6b-23e1-41ef-b5e2-3d770d302e72 bound to our chassis
Jan 31 08:15:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:26.801 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e8014d6b-23e1-41ef-b5e2-3d770d302e72
Jan 31 08:15:26 compute-2 systemd-machined[195142]: New machine qemu-60-instance-00000081.
Jan 31 08:15:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:26.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:26.811 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4bd83412-f9d2-4c20-84d7-a7af79d7fa78]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:26.811 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape8014d6b-21 in ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:15:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:26.813 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape8014d6b-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:15:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:26.813 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[edd12c02-53d7-4837-8357-d88429514d2e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:26.814 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6b4f6c57-d89e-4859-aa83-f5cbf3fdccf4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:26 compute-2 systemd[1]: Started Virtual Machine qemu-60-instance-00000081.
Jan 31 08:15:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:26.824 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[03b552d1-fc19-4cb5-a293-9162767f4989]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:26 compute-2 nova_compute[226829]: 2026-01-31 08:15:26.830 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:26.834 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ec478477-e38c-4052-8c19-88daf804c675]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:26 compute-2 nova_compute[226829]: 2026-01-31 08:15:26.845 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:26 compute-2 ovn_controller[133834]: 2026-01-31T08:15:26Z|00560|binding|INFO|Setting lport c82f9d38-ef16-46ac-829a-12067ec8c603 ovn-installed in OVS
Jan 31 08:15:26 compute-2 ovn_controller[133834]: 2026-01-31T08:15:26Z|00561|binding|INFO|Setting lport c82f9d38-ef16-46ac-829a-12067ec8c603 up in Southbound
Jan 31 08:15:26 compute-2 nova_compute[226829]: 2026-01-31 08:15:26.857 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:26.865 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[9476c601-f7c2-46d1-97f0-4089b6e5a051]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:26.873 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[97ddb3bf-b1cc-43bc-b2eb-e534195492ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:26 compute-2 NetworkManager[48999]: <info>  [1769847326.8740] manager: (tape8014d6b-20): new Veth device (/org/freedesktop/NetworkManager/Devices/271)
Jan 31 08:15:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:26.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:26 compute-2 systemd-udevd[286682]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:15:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:26.936 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[9c290604-ad30-4d71-9f4e-78fa26c14083]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:26.940 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[db924640-aa77-4d7b-baca-20c61298b3ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:26 compute-2 NetworkManager[48999]: <info>  [1769847326.9549] device (tape8014d6b-20): carrier: link connected
Jan 31 08:15:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:26.957 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[4be6d59a-a44a-4a9b-bb21-da251f16c592]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:26.969 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a09a7d8f-d994-48e8-a93a-b86b57ca45e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape8014d6b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:c1:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 171], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 757942, 'reachable_time': 42725, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286717, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:26 compute-2 ceph-mon[77282]: pgmap v2400: 305 pgs: 305 active+clean; 653 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.7 MiB/s wr, 165 op/s
Jan 31 08:15:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:26.981 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9c1f2a3d-dfa5-4d0d-820f-5c36dc700350]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed5:c1c3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 757942, 'tstamp': 757942}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286718, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:26.994 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[852b3a09-982d-4131-b177-3b3419408ea1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape8014d6b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:c1:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 171], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 757942, 'reachable_time': 42725, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286719, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:27.016 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[978d8d2b-3872-470d-a50a-0387e4003d85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:27.048 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1c75c89a-f866-48f1-abe8-d1d2fae4bdf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:27.049 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8014d6b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:27.049 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:27.050 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape8014d6b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:27 compute-2 nova_compute[226829]: 2026-01-31 08:15:27.052 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:27 compute-2 kernel: tape8014d6b-20: entered promiscuous mode
Jan 31 08:15:27 compute-2 NetworkManager[48999]: <info>  [1769847327.0534] manager: (tape8014d6b-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/272)
Jan 31 08:15:27 compute-2 nova_compute[226829]: 2026-01-31 08:15:27.054 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:27.057 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape8014d6b-20, col_values=(('external_ids', {'iface-id': '4bb3ff19-f70b-4c8d-a829-66ff18233b61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:27 compute-2 nova_compute[226829]: 2026-01-31 08:15:27.058 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:27 compute-2 ovn_controller[133834]: 2026-01-31T08:15:27Z|00562|binding|INFO|Releasing lport 4bb3ff19-f70b-4c8d-a829-66ff18233b61 from this chassis (sb_readonly=0)
Jan 31 08:15:27 compute-2 nova_compute[226829]: 2026-01-31 08:15:27.060 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:27.060 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e8014d6b-23e1-41ef-b5e2-3d770d302e72.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e8014d6b-23e1-41ef-b5e2-3d770d302e72.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:27.061 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b20b4fd7-acd5-4ce6-8f03-3ee2de95d1c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:27.062 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-e8014d6b-23e1-41ef-b5e2-3d770d302e72
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/e8014d6b-23e1-41ef-b5e2-3d770d302e72.pid.haproxy
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID e8014d6b-23e1-41ef-b5e2-3d770d302e72
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:15:27 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:27.062 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'env', 'PROCESS_TAG=haproxy-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e8014d6b-23e1-41ef-b5e2-3d770d302e72.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:15:27 compute-2 nova_compute[226829]: 2026-01-31 08:15:27.064 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:27 compute-2 podman[286784]: 2026-01-31 08:15:27.417825671 +0000 UTC m=+0.063161011 container create e6edc1ec79c937faa086bf5d8d297eb38edeccf82890c5e1ad96a406c729769b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:15:27 compute-2 systemd[1]: Started libpod-conmon-e6edc1ec79c937faa086bf5d8d297eb38edeccf82890c5e1ad96a406c729769b.scope.
Jan 31 08:15:27 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:15:27 compute-2 nova_compute[226829]: 2026-01-31 08:15:27.469 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847327.4684162, 5a59388d-bade-4df0-9ac0-0022df15ea02 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:15:27 compute-2 nova_compute[226829]: 2026-01-31 08:15:27.470 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] VM Started (Lifecycle Event)
Jan 31 08:15:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/325d83698ec4094d0401a5002db344af1ea001b709f01382dc7258327b49dd08/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:27 compute-2 podman[286784]: 2026-01-31 08:15:27.393929624 +0000 UTC m=+0.039264944 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:15:27 compute-2 nova_compute[226829]: 2026-01-31 08:15:27.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:15:27 compute-2 podman[286784]: 2026-01-31 08:15:27.490352355 +0000 UTC m=+0.135687645 container init e6edc1ec79c937faa086bf5d8d297eb38edeccf82890c5e1ad96a406c729769b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:15:27 compute-2 podman[286784]: 2026-01-31 08:15:27.494548809 +0000 UTC m=+0.139884119 container start e6edc1ec79c937faa086bf5d8d297eb38edeccf82890c5e1ad96a406c729769b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true)
Jan 31 08:15:27 compute-2 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[286806]: [NOTICE]   (286812) : New worker (286814) forked
Jan 31 08:15:27 compute-2 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[286806]: [NOTICE]   (286812) : Loading success.
Jan 31 08:15:27 compute-2 nova_compute[226829]: 2026-01-31 08:15:27.627 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:15:27 compute-2 nova_compute[226829]: 2026-01-31 08:15:27.632 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847327.4686615, 5a59388d-bade-4df0-9ac0-0022df15ea02 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:15:27 compute-2 nova_compute[226829]: 2026-01-31 08:15:27.632 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] VM Paused (Lifecycle Event)
Jan 31 08:15:27 compute-2 nova_compute[226829]: 2026-01-31 08:15:27.697 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:15:27 compute-2 nova_compute[226829]: 2026-01-31 08:15:27.701 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:15:27 compute-2 nova_compute[226829]: 2026-01-31 08:15:27.737 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:15:27 compute-2 ceph-mon[77282]: pgmap v2401: 305 pgs: 305 active+clean; 706 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 4.1 MiB/s wr, 226 op/s
Jan 31 08:15:28 compute-2 nova_compute[226829]: 2026-01-31 08:15:28.022 226833 DEBUG nova.compute.manager [req-14ab607c-8066-4179-a7c9-a11cdf024078 req-67397411-a921-4d9f-996f-73685f2b4a73 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Received event network-vif-plugged-c82f9d38-ef16-46ac-829a-12067ec8c603 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:15:28 compute-2 nova_compute[226829]: 2026-01-31 08:15:28.024 226833 DEBUG oslo_concurrency.lockutils [req-14ab607c-8066-4179-a7c9-a11cdf024078 req-67397411-a921-4d9f-996f-73685f2b4a73 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:28 compute-2 nova_compute[226829]: 2026-01-31 08:15:28.024 226833 DEBUG oslo_concurrency.lockutils [req-14ab607c-8066-4179-a7c9-a11cdf024078 req-67397411-a921-4d9f-996f-73685f2b4a73 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:28 compute-2 nova_compute[226829]: 2026-01-31 08:15:28.024 226833 DEBUG oslo_concurrency.lockutils [req-14ab607c-8066-4179-a7c9-a11cdf024078 req-67397411-a921-4d9f-996f-73685f2b4a73 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:28 compute-2 nova_compute[226829]: 2026-01-31 08:15:28.025 226833 DEBUG nova.compute.manager [req-14ab607c-8066-4179-a7c9-a11cdf024078 req-67397411-a921-4d9f-996f-73685f2b4a73 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Processing event network-vif-plugged-c82f9d38-ef16-46ac-829a-12067ec8c603 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:15:28 compute-2 nova_compute[226829]: 2026-01-31 08:15:28.025 226833 DEBUG nova.compute.manager [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:15:28 compute-2 nova_compute[226829]: 2026-01-31 08:15:28.029 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847328.0296478, 5a59388d-bade-4df0-9ac0-0022df15ea02 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:15:28 compute-2 nova_compute[226829]: 2026-01-31 08:15:28.030 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] VM Resumed (Lifecycle Event)
Jan 31 08:15:28 compute-2 nova_compute[226829]: 2026-01-31 08:15:28.032 226833 DEBUG nova.virt.libvirt.driver [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:15:28 compute-2 nova_compute[226829]: 2026-01-31 08:15:28.035 226833 INFO nova.virt.libvirt.driver [-] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Instance spawned successfully.
Jan 31 08:15:28 compute-2 nova_compute[226829]: 2026-01-31 08:15:28.068 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:15:28 compute-2 nova_compute[226829]: 2026-01-31 08:15:28.073 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:15:28 compute-2 nova_compute[226829]: 2026-01-31 08:15:28.131 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:15:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:28.373 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:28 compute-2 nova_compute[226829]: 2026-01-31 08:15:28.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:15:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:28.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:28.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e306 e306: 3 total, 3 up, 3 in
Jan 31 08:15:29 compute-2 nova_compute[226829]: 2026-01-31 08:15:29.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:15:29 compute-2 nova_compute[226829]: 2026-01-31 08:15:29.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:15:29 compute-2 nova_compute[226829]: 2026-01-31 08:15:29.521 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-5a59388d-bade-4df0-9ac0-0022df15ea02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:15:29 compute-2 nova_compute[226829]: 2026-01-31 08:15:29.521 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-5a59388d-bade-4df0-9ac0-0022df15ea02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:15:29 compute-2 nova_compute[226829]: 2026-01-31 08:15:29.522 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:15:29 compute-2 nova_compute[226829]: 2026-01-31 08:15:29.695 226833 DEBUG nova.compute.manager [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:15:29 compute-2 nova_compute[226829]: 2026-01-31 08:15:29.855 226833 DEBUG oslo_concurrency.lockutils [None req-d67f3d3c-d511-45df-bebc-1487d022c738 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 16.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:30 compute-2 ceph-mon[77282]: osdmap e306: 3 total, 3 up, 3 in
Jan 31 08:15:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1775546745' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:30 compute-2 ceph-mon[77282]: pgmap v2403: 305 pgs: 305 active+clean; 676 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 4.7 MiB/s wr, 212 op/s
Jan 31 08:15:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:15:30 compute-2 nova_compute[226829]: 2026-01-31 08:15:30.136 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:30 compute-2 nova_compute[226829]: 2026-01-31 08:15:30.168 226833 DEBUG nova.compute.manager [req-09d0db93-0160-4f34-a945-526c220e8f11 req-957487fe-43c4-41ca-9770-a3566dc6a603 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Received event network-vif-plugged-c82f9d38-ef16-46ac-829a-12067ec8c603 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:15:30 compute-2 nova_compute[226829]: 2026-01-31 08:15:30.169 226833 DEBUG oslo_concurrency.lockutils [req-09d0db93-0160-4f34-a945-526c220e8f11 req-957487fe-43c4-41ca-9770-a3566dc6a603 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:30 compute-2 nova_compute[226829]: 2026-01-31 08:15:30.170 226833 DEBUG oslo_concurrency.lockutils [req-09d0db93-0160-4f34-a945-526c220e8f11 req-957487fe-43c4-41ca-9770-a3566dc6a603 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:30 compute-2 nova_compute[226829]: 2026-01-31 08:15:30.170 226833 DEBUG oslo_concurrency.lockutils [req-09d0db93-0160-4f34-a945-526c220e8f11 req-957487fe-43c4-41ca-9770-a3566dc6a603 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:30 compute-2 nova_compute[226829]: 2026-01-31 08:15:30.171 226833 DEBUG nova.compute.manager [req-09d0db93-0160-4f34-a945-526c220e8f11 req-957487fe-43c4-41ca-9770-a3566dc6a603 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] No waiting events found dispatching network-vif-plugged-c82f9d38-ef16-46ac-829a-12067ec8c603 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:15:30 compute-2 nova_compute[226829]: 2026-01-31 08:15:30.171 226833 WARNING nova.compute.manager [req-09d0db93-0160-4f34-a945-526c220e8f11 req-957487fe-43c4-41ca-9770-a3566dc6a603 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Received unexpected event network-vif-plugged-c82f9d38-ef16-46ac-829a-12067ec8c603 for instance with vm_state active and task_state None.
Jan 31 08:15:30 compute-2 podman[286825]: 2026-01-31 08:15:30.22181739 +0000 UTC m=+0.089528905 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 31 08:15:30 compute-2 ovn_controller[133834]: 2026-01-31T08:15:30Z|00563|binding|INFO|Releasing lport 4bb3ff19-f70b-4c8d-a829-66ff18233b61 from this chassis (sb_readonly=0)
Jan 31 08:15:30 compute-2 nova_compute[226829]: 2026-01-31 08:15:30.377 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:30 compute-2 nova_compute[226829]: 2026-01-31 08:15:30.668 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:30.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:30.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:30 compute-2 nova_compute[226829]: 2026-01-31 08:15:30.930 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:32 compute-2 nova_compute[226829]: 2026-01-31 08:15:32.627 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Updating instance_info_cache with network_info: [{"id": "c82f9d38-ef16-46ac-829a-12067ec8c603", "address": "fa:16:3e:15:e2:fd", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc82f9d38-ef", "ovs_interfaceid": "c82f9d38-ef16-46ac-829a-12067ec8c603", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:15:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:15:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:32.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:15:32 compute-2 ceph-mon[77282]: pgmap v2404: 305 pgs: 305 active+clean; 665 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 5.0 MiB/s wr, 257 op/s
Jan 31 08:15:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:15:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:32.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:15:32 compute-2 nova_compute[226829]: 2026-01-31 08:15:32.896 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-5a59388d-bade-4df0-9ac0-0022df15ea02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:15:32 compute-2 nova_compute[226829]: 2026-01-31 08:15:32.897 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:15:32 compute-2 nova_compute[226829]: 2026-01-31 08:15:32.898 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:15:33 compute-2 nova_compute[226829]: 2026-01-31 08:15:33.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:15:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:34.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:34 compute-2 ceph-mon[77282]: pgmap v2405: 305 pgs: 305 active+clean; 661 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 6.2 MiB/s wr, 304 op/s
Jan 31 08:15:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:34.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:15:35 compute-2 nova_compute[226829]: 2026-01-31 08:15:35.089 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847320.0876262, 4e4e24bf-e5fe-4be2-9d89-52432f07cca0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:15:35 compute-2 nova_compute[226829]: 2026-01-31 08:15:35.089 226833 INFO nova.compute.manager [-] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] VM Stopped (Lifecycle Event)
Jan 31 08:15:35 compute-2 nova_compute[226829]: 2026-01-31 08:15:35.138 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:35 compute-2 nova_compute[226829]: 2026-01-31 08:15:35.237 226833 DEBUG nova.compute.manager [None req-354bfa9d-eb3e-420f-8b53-2a6a38103c33 - - - - - -] [instance: 4e4e24bf-e5fe-4be2-9d89-52432f07cca0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:15:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2815147355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e307 e307: 3 total, 3 up, 3 in
Jan 31 08:15:35 compute-2 nova_compute[226829]: 2026-01-31 08:15:35.933 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e308 e308: 3 total, 3 up, 3 in
Jan 31 08:15:36 compute-2 nova_compute[226829]: 2026-01-31 08:15:36.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:15:36 compute-2 nova_compute[226829]: 2026-01-31 08:15:36.528 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:36 compute-2 nova_compute[226829]: 2026-01-31 08:15:36.528 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:36 compute-2 nova_compute[226829]: 2026-01-31 08:15:36.529 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:36 compute-2 nova_compute[226829]: 2026-01-31 08:15:36.529 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:15:36 compute-2 nova_compute[226829]: 2026-01-31 08:15:36.530 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:36.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:15:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:36.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:15:36 compute-2 ceph-mon[77282]: pgmap v2406: 305 pgs: 305 active+clean; 675 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 5.3 MiB/s wr, 267 op/s
Jan 31 08:15:36 compute-2 ceph-mon[77282]: osdmap e307: 3 total, 3 up, 3 in
Jan 31 08:15:36 compute-2 ceph-mon[77282]: osdmap e308: 3 total, 3 up, 3 in
Jan 31 08:15:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1720226546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:15:36 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/967709114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:36 compute-2 nova_compute[226829]: 2026-01-31 08:15:36.996 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:37 compute-2 nova_compute[226829]: 2026-01-31 08:15:37.104 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:15:37 compute-2 nova_compute[226829]: 2026-01-31 08:15:37.105 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:15:37 compute-2 nova_compute[226829]: 2026-01-31 08:15:37.230 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:15:37 compute-2 nova_compute[226829]: 2026-01-31 08:15:37.231 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4013MB free_disk=20.739547729492188GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:15:37 compute-2 nova_compute[226829]: 2026-01-31 08:15:37.231 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:37 compute-2 nova_compute[226829]: 2026-01-31 08:15:37.232 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:37 compute-2 nova_compute[226829]: 2026-01-31 08:15:37.869 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 5a59388d-bade-4df0-9ac0-0022df15ea02 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:15:37 compute-2 nova_compute[226829]: 2026-01-31 08:15:37.869 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:15:37 compute-2 nova_compute[226829]: 2026-01-31 08:15:37.870 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:15:37 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/967709114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:37 compute-2 nova_compute[226829]: 2026-01-31 08:15:37.952 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:38 compute-2 nova_compute[226829]: 2026-01-31 08:15:38.283 226833 DEBUG oslo_concurrency.lockutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Acquiring lock "3a9113d4-7b86-462a-843a-a594bd0908ae" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:38 compute-2 nova_compute[226829]: 2026-01-31 08:15:38.285 226833 DEBUG oslo_concurrency.lockutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Lock "3a9113d4-7b86-462a-843a-a594bd0908ae" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:38 compute-2 nova_compute[226829]: 2026-01-31 08:15:38.317 226833 DEBUG nova.compute.manager [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:15:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:15:38 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3396587203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:38 compute-2 nova_compute[226829]: 2026-01-31 08:15:38.392 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:38 compute-2 nova_compute[226829]: 2026-01-31 08:15:38.396 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:15:38 compute-2 nova_compute[226829]: 2026-01-31 08:15:38.433 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:15:38 compute-2 nova_compute[226829]: 2026-01-31 08:15:38.469 226833 DEBUG oslo_concurrency.lockutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:38 compute-2 nova_compute[226829]: 2026-01-31 08:15:38.483 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:15:38 compute-2 nova_compute[226829]: 2026-01-31 08:15:38.483 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.251s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:38 compute-2 nova_compute[226829]: 2026-01-31 08:15:38.483 226833 DEBUG oslo_concurrency.lockutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.014s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:38 compute-2 nova_compute[226829]: 2026-01-31 08:15:38.490 226833 DEBUG nova.virt.hardware [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:15:38 compute-2 nova_compute[226829]: 2026-01-31 08:15:38.491 226833 INFO nova.compute.claims [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:15:38 compute-2 nova_compute[226829]: 2026-01-31 08:15:38.721 226833 DEBUG oslo_concurrency.processutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:38.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:15:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:38.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:15:38 compute-2 ceph-mon[77282]: pgmap v2409: 305 pgs: 305 active+clean; 658 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 147 op/s
Jan 31 08:15:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3396587203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3728581685' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:15:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:15:39 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1058907294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:39 compute-2 nova_compute[226829]: 2026-01-31 08:15:39.124 226833 DEBUG oslo_concurrency.processutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:39 compute-2 nova_compute[226829]: 2026-01-31 08:15:39.132 226833 DEBUG nova.compute.provider_tree [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:15:39 compute-2 nova_compute[226829]: 2026-01-31 08:15:39.178 226833 DEBUG nova.scheduler.client.report [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:15:39 compute-2 nova_compute[226829]: 2026-01-31 08:15:39.218 226833 DEBUG oslo_concurrency.lockutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:39 compute-2 nova_compute[226829]: 2026-01-31 08:15:39.219 226833 DEBUG nova.compute.manager [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:15:39 compute-2 nova_compute[226829]: 2026-01-31 08:15:39.327 226833 DEBUG nova.compute.manager [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:15:39 compute-2 nova_compute[226829]: 2026-01-31 08:15:39.328 226833 DEBUG nova.network.neutron [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:15:39 compute-2 nova_compute[226829]: 2026-01-31 08:15:39.355 226833 INFO nova.virt.libvirt.driver [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:15:39 compute-2 nova_compute[226829]: 2026-01-31 08:15:39.381 226833 DEBUG nova.compute.manager [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:15:39 compute-2 nova_compute[226829]: 2026-01-31 08:15:39.486 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:15:39 compute-2 nova_compute[226829]: 2026-01-31 08:15:39.548 226833 DEBUG nova.compute.manager [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:15:39 compute-2 nova_compute[226829]: 2026-01-31 08:15:39.549 226833 DEBUG nova.virt.libvirt.driver [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:15:39 compute-2 nova_compute[226829]: 2026-01-31 08:15:39.550 226833 INFO nova.virt.libvirt.driver [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Creating image(s)
Jan 31 08:15:39 compute-2 nova_compute[226829]: 2026-01-31 08:15:39.576 226833 DEBUG nova.storage.rbd_utils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] rbd image 3a9113d4-7b86-462a-843a-a594bd0908ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:15:39 compute-2 nova_compute[226829]: 2026-01-31 08:15:39.603 226833 DEBUG nova.storage.rbd_utils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] rbd image 3a9113d4-7b86-462a-843a-a594bd0908ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:15:39 compute-2 nova_compute[226829]: 2026-01-31 08:15:39.628 226833 DEBUG nova.storage.rbd_utils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] rbd image 3a9113d4-7b86-462a-843a-a594bd0908ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:15:39 compute-2 nova_compute[226829]: 2026-01-31 08:15:39.631 226833 DEBUG oslo_concurrency.processutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:39 compute-2 nova_compute[226829]: 2026-01-31 08:15:39.687 226833 DEBUG oslo_concurrency.processutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:39 compute-2 nova_compute[226829]: 2026-01-31 08:15:39.689 226833 DEBUG oslo_concurrency.lockutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:39 compute-2 nova_compute[226829]: 2026-01-31 08:15:39.689 226833 DEBUG oslo_concurrency.lockutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:39 compute-2 nova_compute[226829]: 2026-01-31 08:15:39.690 226833 DEBUG oslo_concurrency.lockutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:39 compute-2 nova_compute[226829]: 2026-01-31 08:15:39.712 226833 DEBUG nova.storage.rbd_utils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] rbd image 3a9113d4-7b86-462a-843a-a594bd0908ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:15:39 compute-2 nova_compute[226829]: 2026-01-31 08:15:39.715 226833 DEBUG oslo_concurrency.processutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 3a9113d4-7b86-462a-843a-a594bd0908ae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:39 compute-2 nova_compute[226829]: 2026-01-31 08:15:39.757 226833 DEBUG nova.policy [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5e598a75077944569409ad429a456aea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '20a1c4d130394aa79f7e0825cb720528', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:15:39 compute-2 ovn_controller[133834]: 2026-01-31T08:15:39Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:15:e2:fd 10.100.0.14
Jan 31 08:15:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1615842584' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:15:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1058907294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.011 226833 DEBUG oslo_concurrency.processutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 3a9113d4-7b86-462a-843a-a594bd0908ae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.082 226833 DEBUG nova.storage.rbd_utils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] resizing rbd image 3a9113d4-7b86-462a-843a-a594bd0908ae_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.141 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.199 226833 DEBUG nova.objects.instance [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Lazy-loading 'migration_context' on Instance uuid 3a9113d4-7b86-462a-843a-a594bd0908ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.221 226833 DEBUG nova.virt.libvirt.driver [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.221 226833 DEBUG nova.virt.libvirt.driver [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Ensure instance console log exists: /var/lib/nova/instances/3a9113d4-7b86-462a-843a-a594bd0908ae/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.222 226833 DEBUG oslo_concurrency.lockutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.222 226833 DEBUG oslo_concurrency.lockutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.223 226833 DEBUG oslo_concurrency.lockutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.649 226833 DEBUG oslo_concurrency.lockutils [None req-a497a59a-0f7f-4e76-b936-618e1ed600d2 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "5a59388d-bade-4df0-9ac0-0022df15ea02" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.650 226833 DEBUG oslo_concurrency.lockutils [None req-a497a59a-0f7f-4e76-b936-618e1ed600d2 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.650 226833 DEBUG oslo_concurrency.lockutils [None req-a497a59a-0f7f-4e76-b936-618e1ed600d2 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.650 226833 DEBUG oslo_concurrency.lockutils [None req-a497a59a-0f7f-4e76-b936-618e1ed600d2 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.650 226833 DEBUG oslo_concurrency.lockutils [None req-a497a59a-0f7f-4e76-b936-618e1ed600d2 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.652 226833 INFO nova.compute.manager [None req-a497a59a-0f7f-4e76-b936-618e1ed600d2 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Terminating instance
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.653 226833 DEBUG nova.compute.manager [None req-a497a59a-0f7f-4e76-b936-618e1ed600d2 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:15:40 compute-2 kernel: tapc82f9d38-ef (unregistering): left promiscuous mode
Jan 31 08:15:40 compute-2 NetworkManager[48999]: <info>  [1769847340.7042] device (tapc82f9d38-ef): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.704 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.711 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:40 compute-2 ovn_controller[133834]: 2026-01-31T08:15:40Z|00564|binding|INFO|Releasing lport c82f9d38-ef16-46ac-829a-12067ec8c603 from this chassis (sb_readonly=0)
Jan 31 08:15:40 compute-2 ovn_controller[133834]: 2026-01-31T08:15:40Z|00565|binding|INFO|Setting lport c82f9d38-ef16-46ac-829a-12067ec8c603 down in Southbound
Jan 31 08:15:40 compute-2 ovn_controller[133834]: 2026-01-31T08:15:40Z|00566|binding|INFO|Removing iface tapc82f9d38-ef ovn-installed in OVS
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.714 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:40.721 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:e2:fd 10.100.0.14'], port_security=['fa:16:3e:15:e2:fd 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '5a59388d-bade-4df0-9ac0-0022df15ea02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3ddadeb950a490db5c99da98a32c9ec', 'neutron:revision_number': '9', 'neutron:security_group_ids': '5b02cc0a-856b-4d31-80e9-eccd1c696448', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.236', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17e596e7-33b3-44a6-9cbf-f9eacfd974b4, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=c82f9d38-ef16-46ac-829a-12067ec8c603) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:15:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:40.723 143841 INFO neutron.agent.ovn.metadata.agent [-] Port c82f9d38-ef16-46ac-829a-12067ec8c603 in datapath e8014d6b-23e1-41ef-b5e2-3d770d302e72 unbound from our chassis
Jan 31 08:15:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:40.725 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e8014d6b-23e1-41ef-b5e2-3d770d302e72, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.726 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:40.727 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cfe8d1b7-5467-4541-aad7-1de2e2279cb4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:40.728 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 namespace which is not needed anymore
Jan 31 08:15:40 compute-2 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000081.scope: Deactivated successfully.
Jan 31 08:15:40 compute-2 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000081.scope: Consumed 13.302s CPU time.
Jan 31 08:15:40 compute-2 systemd-machined[195142]: Machine qemu-60-instance-00000081 terminated.
Jan 31 08:15:40 compute-2 podman[287091]: 2026-01-31 08:15:40.775757058 +0000 UTC m=+0.049472871 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:15:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:40.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:40 compute-2 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[286806]: [NOTICE]   (286812) : haproxy version is 2.8.14-c23fe91
Jan 31 08:15:40 compute-2 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[286806]: [NOTICE]   (286812) : path to executable is /usr/sbin/haproxy
Jan 31 08:15:40 compute-2 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[286806]: [WARNING]  (286812) : Exiting Master process...
Jan 31 08:15:40 compute-2 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[286806]: [WARNING]  (286812) : Exiting Master process...
Jan 31 08:15:40 compute-2 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[286806]: [ALERT]    (286812) : Current worker (286814) exited with code 143 (Terminated)
Jan 31 08:15:40 compute-2 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[286806]: [WARNING]  (286812) : All workers exited. Exiting... (0)
Jan 31 08:15:40 compute-2 systemd[1]: libpod-e6edc1ec79c937faa086bf5d8d297eb38edeccf82890c5e1ad96a406c729769b.scope: Deactivated successfully.
Jan 31 08:15:40 compute-2 podman[287130]: 2026-01-31 08:15:40.838829805 +0000 UTC m=+0.041111644 container died e6edc1ec79c937faa086bf5d8d297eb38edeccf82890c5e1ad96a406c729769b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:15:40 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e6edc1ec79c937faa086bf5d8d297eb38edeccf82890c5e1ad96a406c729769b-userdata-shm.mount: Deactivated successfully.
Jan 31 08:15:40 compute-2 systemd[1]: var-lib-containers-storage-overlay-325d83698ec4094d0401a5002db344af1ea001b709f01382dc7258327b49dd08-merged.mount: Deactivated successfully.
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.870 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.873 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:40 compute-2 podman[287130]: 2026-01-31 08:15:40.879644221 +0000 UTC m=+0.081926060 container cleanup e6edc1ec79c937faa086bf5d8d297eb38edeccf82890c5e1ad96a406c729769b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.880 226833 INFO nova.virt.libvirt.driver [-] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Instance destroyed successfully.
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.880 226833 DEBUG nova.objects.instance [None req-a497a59a-0f7f-4e76-b936-618e1ed600d2 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'resources' on Instance uuid 5a59388d-bade-4df0-9ac0-0022df15ea02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:15:40 compute-2 systemd[1]: libpod-conmon-e6edc1ec79c937faa086bf5d8d297eb38edeccf82890c5e1ad96a406c729769b.scope: Deactivated successfully.
Jan 31 08:15:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:15:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:40.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.900 226833 DEBUG nova.network.neutron [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Successfully created port: 116d832c-6613-46dd-910f-4caf0a8e58ea _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.923 226833 DEBUG nova.virt.libvirt.vif [None req-a497a59a-0f7f-4e76-b936-618e1ed600d2 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:12:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-993615947',display_name='tempest-ServerActionsTestOtherB-server-993615947',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-993615947',id=129,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIBsrFBicHYtOHR1R6vRAALJ9Bas8uQqwQxjg40t1CSqKgx9y2TPvbXQ87aJFDYMxnRLTQoY5DczCJahVhqvpmedcWwWPeQP/d3vWA175RU6Mi7x6I2zA/JKc2hVh/HOXw==',key_name='tempest-keypair-1408271761',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:15:29Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-5y6n0w3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:15:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='18aee9d81d404f77ac81cde538f140d8',uuid=5a59388d-bade-4df0-9ac0-0022df15ea02,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c82f9d38-ef16-46ac-829a-12067ec8c603", "address": "fa:16:3e:15:e2:fd", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc82f9d38-ef", "ovs_interfaceid": "c82f9d38-ef16-46ac-829a-12067ec8c603", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.923 226833 DEBUG nova.network.os_vif_util [None req-a497a59a-0f7f-4e76-b936-618e1ed600d2 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "c82f9d38-ef16-46ac-829a-12067ec8c603", "address": "fa:16:3e:15:e2:fd", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc82f9d38-ef", "ovs_interfaceid": "c82f9d38-ef16-46ac-829a-12067ec8c603", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.924 226833 DEBUG nova.network.os_vif_util [None req-a497a59a-0f7f-4e76-b936-618e1ed600d2 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:15:e2:fd,bridge_name='br-int',has_traffic_filtering=True,id=c82f9d38-ef16-46ac-829a-12067ec8c603,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc82f9d38-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.925 226833 DEBUG os_vif [None req-a497a59a-0f7f-4e76-b936-618e1ed600d2 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:e2:fd,bridge_name='br-int',has_traffic_filtering=True,id=c82f9d38-ef16-46ac-829a-12067ec8c603,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc82f9d38-ef') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.927 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.927 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc82f9d38-ef, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.929 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.930 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.933 226833 INFO os_vif [None req-a497a59a-0f7f-4e76-b936-618e1ed600d2 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:e2:fd,bridge_name='br-int',has_traffic_filtering=True,id=c82f9d38-ef16-46ac-829a-12067ec8c603,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc82f9d38-ef')
Jan 31 08:15:40 compute-2 podman[287170]: 2026-01-31 08:15:40.936382527 +0000 UTC m=+0.041199156 container remove e6edc1ec79c937faa086bf5d8d297eb38edeccf82890c5e1ad96a406c729769b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:15:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:40.940 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[35c721c9-b706-4d37-a4f1-cc5061e2fb36]: (4, ('Sat Jan 31 08:15:40 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 (e6edc1ec79c937faa086bf5d8d297eb38edeccf82890c5e1ad96a406c729769b)\ne6edc1ec79c937faa086bf5d8d297eb38edeccf82890c5e1ad96a406c729769b\nSat Jan 31 08:15:40 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 (e6edc1ec79c937faa086bf5d8d297eb38edeccf82890c5e1ad96a406c729769b)\ne6edc1ec79c937faa086bf5d8d297eb38edeccf82890c5e1ad96a406c729769b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:40.942 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[70808b73-753c-4d69-8ae1-028c35ce27c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:40.943 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8014d6b-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:40 compute-2 kernel: tape8014d6b-20: left promiscuous mode
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.950 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:40 compute-2 nova_compute[226829]: 2026-01-31 08:15:40.954 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:40.957 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1f6afdef-8d14-4765-9e29-a431a62326bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:40.973 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c674a7-3681-4da7-9d64-660151635f35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:40.974 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[91377cf5-e4c4-4390-85f5-1146b8ffff5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:40 compute-2 ceph-mon[77282]: pgmap v2410: 305 pgs: 305 active+clean; 643 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.2 MiB/s wr, 111 op/s
Jan 31 08:15:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1660631670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:40.984 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0a23af45-1f70-409e-8675-42fd3abd85ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 757933, 'reachable_time': 37823, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287204, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:40 compute-2 systemd[1]: run-netns-ovnmeta\x2de8014d6b\x2d23e1\x2d41ef\x2db5e2\x2d3d770d302e72.mount: Deactivated successfully.
Jan 31 08:15:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:40.988 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:15:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:40.988 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[66eefe68-abc2-45e1-8619-ff0321d7f5fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:41 compute-2 nova_compute[226829]: 2026-01-31 08:15:41.368 226833 INFO nova.virt.libvirt.driver [None req-a497a59a-0f7f-4e76-b936-618e1ed600d2 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Deleting instance files /var/lib/nova/instances/5a59388d-bade-4df0-9ac0-0022df15ea02_del
Jan 31 08:15:41 compute-2 nova_compute[226829]: 2026-01-31 08:15:41.369 226833 INFO nova.virt.libvirt.driver [None req-a497a59a-0f7f-4e76-b936-618e1ed600d2 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Deletion of /var/lib/nova/instances/5a59388d-bade-4df0-9ac0-0022df15ea02_del complete
Jan 31 08:15:41 compute-2 nova_compute[226829]: 2026-01-31 08:15:41.468 226833 INFO nova.compute.manager [None req-a497a59a-0f7f-4e76-b936-618e1ed600d2 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Took 0.81 seconds to destroy the instance on the hypervisor.
Jan 31 08:15:41 compute-2 nova_compute[226829]: 2026-01-31 08:15:41.468 226833 DEBUG oslo.service.loopingcall [None req-a497a59a-0f7f-4e76-b936-618e1ed600d2 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:15:41 compute-2 nova_compute[226829]: 2026-01-31 08:15:41.469 226833 DEBUG nova.compute.manager [-] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:15:41 compute-2 nova_compute[226829]: 2026-01-31 08:15:41.469 226833 DEBUG nova.network.neutron [-] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:15:41 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3699285779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:41 compute-2 ceph-mon[77282]: pgmap v2411: 305 pgs: 305 active+clean; 650 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 374 KiB/s rd, 1.7 MiB/s wr, 94 op/s
Jan 31 08:15:42 compute-2 nova_compute[226829]: 2026-01-31 08:15:42.731 226833 DEBUG nova.compute.manager [req-107c8bc8-fbe4-4007-be44-3d79c399e6e1 req-1ec42219-b6cc-4922-b560-b871d429faa8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Received event network-vif-unplugged-c82f9d38-ef16-46ac-829a-12067ec8c603 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:15:42 compute-2 nova_compute[226829]: 2026-01-31 08:15:42.732 226833 DEBUG oslo_concurrency.lockutils [req-107c8bc8-fbe4-4007-be44-3d79c399e6e1 req-1ec42219-b6cc-4922-b560-b871d429faa8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:42 compute-2 nova_compute[226829]: 2026-01-31 08:15:42.733 226833 DEBUG oslo_concurrency.lockutils [req-107c8bc8-fbe4-4007-be44-3d79c399e6e1 req-1ec42219-b6cc-4922-b560-b871d429faa8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:42 compute-2 nova_compute[226829]: 2026-01-31 08:15:42.733 226833 DEBUG oslo_concurrency.lockutils [req-107c8bc8-fbe4-4007-be44-3d79c399e6e1 req-1ec42219-b6cc-4922-b560-b871d429faa8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:42 compute-2 nova_compute[226829]: 2026-01-31 08:15:42.733 226833 DEBUG nova.compute.manager [req-107c8bc8-fbe4-4007-be44-3d79c399e6e1 req-1ec42219-b6cc-4922-b560-b871d429faa8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] No waiting events found dispatching network-vif-unplugged-c82f9d38-ef16-46ac-829a-12067ec8c603 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:15:42 compute-2 nova_compute[226829]: 2026-01-31 08:15:42.734 226833 DEBUG nova.compute.manager [req-107c8bc8-fbe4-4007-be44-3d79c399e6e1 req-1ec42219-b6cc-4922-b560-b871d429faa8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Received event network-vif-unplugged-c82f9d38-ef16-46ac-829a-12067ec8c603 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:15:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:42.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:42.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:43 compute-2 nova_compute[226829]: 2026-01-31 08:15:43.611 226833 DEBUG nova.network.neutron [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Successfully updated port: 116d832c-6613-46dd-910f-4caf0a8e58ea _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:15:43 compute-2 nova_compute[226829]: 2026-01-31 08:15:43.640 226833 DEBUG oslo_concurrency.lockutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Acquiring lock "refresh_cache-3a9113d4-7b86-462a-843a-a594bd0908ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:15:43 compute-2 nova_compute[226829]: 2026-01-31 08:15:43.640 226833 DEBUG oslo_concurrency.lockutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Acquired lock "refresh_cache-3a9113d4-7b86-462a-843a-a594bd0908ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:15:43 compute-2 nova_compute[226829]: 2026-01-31 08:15:43.641 226833 DEBUG nova.network.neutron [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:15:43 compute-2 nova_compute[226829]: 2026-01-31 08:15:43.765 226833 DEBUG nova.network.neutron [-] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:15:43 compute-2 nova_compute[226829]: 2026-01-31 08:15:43.848 226833 INFO nova.compute.manager [-] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Took 2.38 seconds to deallocate network for instance.
Jan 31 08:15:43 compute-2 nova_compute[226829]: 2026-01-31 08:15:43.923 226833 DEBUG oslo_concurrency.lockutils [None req-a497a59a-0f7f-4e76-b936-618e1ed600d2 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:43 compute-2 nova_compute[226829]: 2026-01-31 08:15:43.924 226833 DEBUG oslo_concurrency.lockutils [None req-a497a59a-0f7f-4e76-b936-618e1ed600d2 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:44 compute-2 nova_compute[226829]: 2026-01-31 08:15:44.007 226833 DEBUG nova.compute.manager [req-9d14c245-ab6c-400c-9dab-6dd39c11efad req-bfb5242e-a840-48be-90d1-49c7e195be5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Received event network-vif-deleted-c82f9d38-ef16-46ac-829a-12067ec8c603 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:15:44 compute-2 nova_compute[226829]: 2026-01-31 08:15:44.009 226833 DEBUG nova.network.neutron [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:15:44 compute-2 nova_compute[226829]: 2026-01-31 08:15:44.018 226833 DEBUG oslo_concurrency.processutils [None req-a497a59a-0f7f-4e76-b936-618e1ed600d2 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:44 compute-2 nova_compute[226829]: 2026-01-31 08:15:44.168 226833 DEBUG nova.compute.manager [req-81bddabf-08ce-4d07-a008-dde43ff1cbf8 req-9f5b7756-fad1-40ad-bbcd-0aa86f29689f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Received event network-changed-116d832c-6613-46dd-910f-4caf0a8e58ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:15:44 compute-2 nova_compute[226829]: 2026-01-31 08:15:44.169 226833 DEBUG nova.compute.manager [req-81bddabf-08ce-4d07-a008-dde43ff1cbf8 req-9f5b7756-fad1-40ad-bbcd-0aa86f29689f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Refreshing instance network info cache due to event network-changed-116d832c-6613-46dd-910f-4caf0a8e58ea. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:15:44 compute-2 nova_compute[226829]: 2026-01-31 08:15:44.169 226833 DEBUG oslo_concurrency.lockutils [req-81bddabf-08ce-4d07-a008-dde43ff1cbf8 req-9f5b7756-fad1-40ad-bbcd-0aa86f29689f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-3a9113d4-7b86-462a-843a-a594bd0908ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:15:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:15:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1582960450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:44 compute-2 nova_compute[226829]: 2026-01-31 08:15:44.432 226833 DEBUG oslo_concurrency.processutils [None req-a497a59a-0f7f-4e76-b936-618e1ed600d2 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:44 compute-2 nova_compute[226829]: 2026-01-31 08:15:44.438 226833 DEBUG nova.compute.provider_tree [None req-a497a59a-0f7f-4e76-b936-618e1ed600d2 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:15:44 compute-2 nova_compute[226829]: 2026-01-31 08:15:44.484 226833 DEBUG nova.scheduler.client.report [None req-a497a59a-0f7f-4e76-b936-618e1ed600d2 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:15:44 compute-2 nova_compute[226829]: 2026-01-31 08:15:44.544 226833 DEBUG oslo_concurrency.lockutils [None req-a497a59a-0f7f-4e76-b936-618e1ed600d2 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:44 compute-2 nova_compute[226829]: 2026-01-31 08:15:44.601 226833 INFO nova.scheduler.client.report [None req-a497a59a-0f7f-4e76-b936-618e1ed600d2 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Deleted allocations for instance 5a59388d-bade-4df0-9ac0-0022df15ea02
Jan 31 08:15:44 compute-2 nova_compute[226829]: 2026-01-31 08:15:44.729 226833 DEBUG oslo_concurrency.lockutils [None req-a497a59a-0f7f-4e76-b936-618e1ed600d2 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:44.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:44 compute-2 ceph-mon[77282]: pgmap v2412: 305 pgs: 305 active+clean; 619 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 629 KiB/s rd, 933 KiB/s wr, 122 op/s
Jan 31 08:15:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1582960450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:15:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:44.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:15:45 compute-2 nova_compute[226829]: 2026-01-31 08:15:45.012 226833 DEBUG nova.compute.manager [req-05fdf388-2914-49b3-baaa-9c7e4e4b6cde req-1feb13bf-89af-4241-8625-d1c1c66d30da 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Received event network-vif-plugged-c82f9d38-ef16-46ac-829a-12067ec8c603 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:15:45 compute-2 nova_compute[226829]: 2026-01-31 08:15:45.013 226833 DEBUG oslo_concurrency.lockutils [req-05fdf388-2914-49b3-baaa-9c7e4e4b6cde req-1feb13bf-89af-4241-8625-d1c1c66d30da 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:45 compute-2 nova_compute[226829]: 2026-01-31 08:15:45.013 226833 DEBUG oslo_concurrency.lockutils [req-05fdf388-2914-49b3-baaa-9c7e4e4b6cde req-1feb13bf-89af-4241-8625-d1c1c66d30da 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:45 compute-2 nova_compute[226829]: 2026-01-31 08:15:45.013 226833 DEBUG oslo_concurrency.lockutils [req-05fdf388-2914-49b3-baaa-9c7e4e4b6cde req-1feb13bf-89af-4241-8625-d1c1c66d30da 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "5a59388d-bade-4df0-9ac0-0022df15ea02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:45 compute-2 nova_compute[226829]: 2026-01-31 08:15:45.014 226833 DEBUG nova.compute.manager [req-05fdf388-2914-49b3-baaa-9c7e4e4b6cde req-1feb13bf-89af-4241-8625-d1c1c66d30da 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] No waiting events found dispatching network-vif-plugged-c82f9d38-ef16-46ac-829a-12067ec8c603 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:15:45 compute-2 nova_compute[226829]: 2026-01-31 08:15:45.014 226833 WARNING nova.compute.manager [req-05fdf388-2914-49b3-baaa-9c7e4e4b6cde req-1feb13bf-89af-4241-8625-d1c1c66d30da 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Received unexpected event network-vif-plugged-c82f9d38-ef16-46ac-829a-12067ec8c603 for instance with vm_state deleted and task_state None.
Jan 31 08:15:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:15:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:15:45 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1263486483' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:15:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:15:45 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1263486483' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:15:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1263486483' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:15:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1263486483' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:15:45 compute-2 sudo[287231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:15:45 compute-2 sudo[287231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:45 compute-2 sudo[287231]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:45 compute-2 nova_compute[226829]: 2026-01-31 08:15:45.930 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:45 compute-2 nova_compute[226829]: 2026-01-31 08:15:45.936 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:45 compute-2 nova_compute[226829]: 2026-01-31 08:15:45.965 226833 DEBUG nova.network.neutron [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Updating instance_info_cache with network_info: [{"id": "116d832c-6613-46dd-910f-4caf0a8e58ea", "address": "fa:16:3e:97:8b:81", "network": {"id": "cc512501-011f-4720-b540-0d6adc2eb46a", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1234051733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20a1c4d130394aa79f7e0825cb720528", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap116d832c-66", "ovs_interfaceid": "116d832c-6613-46dd-910f-4caf0a8e58ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:15:45 compute-2 sudo[287256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:15:45 compute-2 sudo[287256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:45 compute-2 sudo[287256]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:46 compute-2 nova_compute[226829]: 2026-01-31 08:15:46.010 226833 DEBUG oslo_concurrency.lockutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Releasing lock "refresh_cache-3a9113d4-7b86-462a-843a-a594bd0908ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:15:46 compute-2 nova_compute[226829]: 2026-01-31 08:15:46.011 226833 DEBUG nova.compute.manager [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Instance network_info: |[{"id": "116d832c-6613-46dd-910f-4caf0a8e58ea", "address": "fa:16:3e:97:8b:81", "network": {"id": "cc512501-011f-4720-b540-0d6adc2eb46a", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1234051733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20a1c4d130394aa79f7e0825cb720528", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap116d832c-66", "ovs_interfaceid": "116d832c-6613-46dd-910f-4caf0a8e58ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:15:46 compute-2 nova_compute[226829]: 2026-01-31 08:15:46.011 226833 DEBUG oslo_concurrency.lockutils [req-81bddabf-08ce-4d07-a008-dde43ff1cbf8 req-9f5b7756-fad1-40ad-bbcd-0aa86f29689f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-3a9113d4-7b86-462a-843a-a594bd0908ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:15:46 compute-2 nova_compute[226829]: 2026-01-31 08:15:46.011 226833 DEBUG nova.network.neutron [req-81bddabf-08ce-4d07-a008-dde43ff1cbf8 req-9f5b7756-fad1-40ad-bbcd-0aa86f29689f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Refreshing network info cache for port 116d832c-6613-46dd-910f-4caf0a8e58ea _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:15:46 compute-2 nova_compute[226829]: 2026-01-31 08:15:46.015 226833 DEBUG nova.virt.libvirt.driver [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Start _get_guest_xml network_info=[{"id": "116d832c-6613-46dd-910f-4caf0a8e58ea", "address": "fa:16:3e:97:8b:81", "network": {"id": "cc512501-011f-4720-b540-0d6adc2eb46a", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1234051733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20a1c4d130394aa79f7e0825cb720528", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap116d832c-66", "ovs_interfaceid": "116d832c-6613-46dd-910f-4caf0a8e58ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:15:46 compute-2 nova_compute[226829]: 2026-01-31 08:15:46.020 226833 WARNING nova.virt.libvirt.driver [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:15:46 compute-2 nova_compute[226829]: 2026-01-31 08:15:46.034 226833 DEBUG nova.virt.libvirt.host [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:15:46 compute-2 nova_compute[226829]: 2026-01-31 08:15:46.034 226833 DEBUG nova.virt.libvirt.host [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:15:46 compute-2 nova_compute[226829]: 2026-01-31 08:15:46.039 226833 DEBUG nova.virt.libvirt.host [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:15:46 compute-2 nova_compute[226829]: 2026-01-31 08:15:46.040 226833 DEBUG nova.virt.libvirt.host [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:15:46 compute-2 nova_compute[226829]: 2026-01-31 08:15:46.041 226833 DEBUG nova.virt.libvirt.driver [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:15:46 compute-2 nova_compute[226829]: 2026-01-31 08:15:46.041 226833 DEBUG nova.virt.hardware [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:15:46 compute-2 nova_compute[226829]: 2026-01-31 08:15:46.042 226833 DEBUG nova.virt.hardware [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:15:46 compute-2 nova_compute[226829]: 2026-01-31 08:15:46.042 226833 DEBUG nova.virt.hardware [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:15:46 compute-2 nova_compute[226829]: 2026-01-31 08:15:46.042 226833 DEBUG nova.virt.hardware [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:15:46 compute-2 nova_compute[226829]: 2026-01-31 08:15:46.043 226833 DEBUG nova.virt.hardware [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:15:46 compute-2 nova_compute[226829]: 2026-01-31 08:15:46.043 226833 DEBUG nova.virt.hardware [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:15:46 compute-2 nova_compute[226829]: 2026-01-31 08:15:46.043 226833 DEBUG nova.virt.hardware [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:15:46 compute-2 nova_compute[226829]: 2026-01-31 08:15:46.043 226833 DEBUG nova.virt.hardware [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:15:46 compute-2 nova_compute[226829]: 2026-01-31 08:15:46.044 226833 DEBUG nova.virt.hardware [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:15:46 compute-2 nova_compute[226829]: 2026-01-31 08:15:46.044 226833 DEBUG nova.virt.hardware [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:15:46 compute-2 nova_compute[226829]: 2026-01-31 08:15:46.044 226833 DEBUG nova.virt.hardware [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:15:46 compute-2 nova_compute[226829]: 2026-01-31 08:15:46.048 226833 DEBUG oslo_concurrency.processutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e309 e309: 3 total, 3 up, 3 in
Jan 31 08:15:46 compute-2 nova_compute[226829]: 2026-01-31 08:15:46.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:15:46 compute-2 nova_compute[226829]: 2026-01-31 08:15:46.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:15:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:15:46 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2993535437' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:15:46 compute-2 nova_compute[226829]: 2026-01-31 08:15:46.521 226833 DEBUG oslo_concurrency.processutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:46 compute-2 nova_compute[226829]: 2026-01-31 08:15:46.556 226833 DEBUG nova.storage.rbd_utils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] rbd image 3a9113d4-7b86-462a-843a-a594bd0908ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:15:46 compute-2 nova_compute[226829]: 2026-01-31 08:15:46.560 226833 DEBUG oslo_concurrency.processutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:46.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:46 compute-2 ceph-mon[77282]: pgmap v2413: 305 pgs: 305 active+clean; 618 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.2 MiB/s wr, 178 op/s
Jan 31 08:15:46 compute-2 ceph-mon[77282]: osdmap e309: 3 total, 3 up, 3 in
Jan 31 08:15:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2993535437' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:15:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:46.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:47 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:15:47 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/923116940' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.034 226833 DEBUG oslo_concurrency.processutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.036 226833 DEBUG nova.virt.libvirt.vif [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:15:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-768859624',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-servertagstestjson-server-768859624',id=136,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='20a1c4d130394aa79f7e0825cb720528',ramdisk_id='',reservation_id='r-02dnocz5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1875224061',owner_user_name='tempest-ServerTagsTestJSON-1875224061-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:15:39Z,user_data=None,user_id='5e598a75077944569409ad429a456aea',uuid=3a9113d4-7b86-462a-843a-a594bd0908ae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "116d832c-6613-46dd-910f-4caf0a8e58ea", "address": "fa:16:3e:97:8b:81", "network": {"id": "cc512501-011f-4720-b540-0d6adc2eb46a", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1234051733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20a1c4d130394aa79f7e0825cb720528", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap116d832c-66", "ovs_interfaceid": "116d832c-6613-46dd-910f-4caf0a8e58ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.037 226833 DEBUG nova.network.os_vif_util [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Converting VIF {"id": "116d832c-6613-46dd-910f-4caf0a8e58ea", "address": "fa:16:3e:97:8b:81", "network": {"id": "cc512501-011f-4720-b540-0d6adc2eb46a", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1234051733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20a1c4d130394aa79f7e0825cb720528", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap116d832c-66", "ovs_interfaceid": "116d832c-6613-46dd-910f-4caf0a8e58ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.038 226833 DEBUG nova.network.os_vif_util [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:8b:81,bridge_name='br-int',has_traffic_filtering=True,id=116d832c-6613-46dd-910f-4caf0a8e58ea,network=Network(cc512501-011f-4720-b540-0d6adc2eb46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap116d832c-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.040 226833 DEBUG nova.objects.instance [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3a9113d4-7b86-462a-843a-a594bd0908ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.071 226833 DEBUG nova.virt.libvirt.driver [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:15:47 compute-2 nova_compute[226829]:   <uuid>3a9113d4-7b86-462a-843a-a594bd0908ae</uuid>
Jan 31 08:15:47 compute-2 nova_compute[226829]:   <name>instance-00000088</name>
Jan 31 08:15:47 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:15:47 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:15:47 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <nova:name>tempest-ServerTagsTestJSON-server-768859624</nova:name>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:15:46</nova:creationTime>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:15:47 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:15:47 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:15:47 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:15:47 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:15:47 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:15:47 compute-2 nova_compute[226829]:         <nova:user uuid="5e598a75077944569409ad429a456aea">tempest-ServerTagsTestJSON-1875224061-project-member</nova:user>
Jan 31 08:15:47 compute-2 nova_compute[226829]:         <nova:project uuid="20a1c4d130394aa79f7e0825cb720528">tempest-ServerTagsTestJSON-1875224061</nova:project>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:15:47 compute-2 nova_compute[226829]:         <nova:port uuid="116d832c-6613-46dd-910f-4caf0a8e58ea">
Jan 31 08:15:47 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:15:47 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:15:47 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <system>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <entry name="serial">3a9113d4-7b86-462a-843a-a594bd0908ae</entry>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <entry name="uuid">3a9113d4-7b86-462a-843a-a594bd0908ae</entry>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     </system>
Jan 31 08:15:47 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:15:47 compute-2 nova_compute[226829]:   <os>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:   </os>
Jan 31 08:15:47 compute-2 nova_compute[226829]:   <features>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:   </features>
Jan 31 08:15:47 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:15:47 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:15:47 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/3a9113d4-7b86-462a-843a-a594bd0908ae_disk">
Jan 31 08:15:47 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       </source>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:15:47 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/3a9113d4-7b86-462a-843a-a594bd0908ae_disk.config">
Jan 31 08:15:47 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       </source>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:15:47 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:97:8b:81"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <target dev="tap116d832c-66"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/3a9113d4-7b86-462a-843a-a594bd0908ae/console.log" append="off"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <video>
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     </video>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:15:47 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:15:47 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:15:47 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:15:47 compute-2 nova_compute[226829]: </domain>
Jan 31 08:15:47 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.072 226833 DEBUG nova.compute.manager [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Preparing to wait for external event network-vif-plugged-116d832c-6613-46dd-910f-4caf0a8e58ea prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.073 226833 DEBUG oslo_concurrency.lockutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Acquiring lock "3a9113d4-7b86-462a-843a-a594bd0908ae-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.073 226833 DEBUG oslo_concurrency.lockutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Lock "3a9113d4-7b86-462a-843a-a594bd0908ae-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.073 226833 DEBUG oslo_concurrency.lockutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Lock "3a9113d4-7b86-462a-843a-a594bd0908ae-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.074 226833 DEBUG nova.virt.libvirt.vif [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:15:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-768859624',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-servertagstestjson-server-768859624',id=136,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='20a1c4d130394aa79f7e0825cb720528',ramdisk_id='',reservation_id='r-02dnocz5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1875224061',owner_user_name='tempest-ServerTagsTestJSON-1875224061-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:15:39Z,user_data=None,user_id='5e598a75077944569409ad429a456aea',uuid=3a9113d4-7b86-462a-843a-a594bd0908ae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "116d832c-6613-46dd-910f-4caf0a8e58ea", "address": "fa:16:3e:97:8b:81", "network": {"id": "cc512501-011f-4720-b540-0d6adc2eb46a", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1234051733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20a1c4d130394aa79f7e0825cb720528", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap116d832c-66", "ovs_interfaceid": "116d832c-6613-46dd-910f-4caf0a8e58ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.074 226833 DEBUG nova.network.os_vif_util [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Converting VIF {"id": "116d832c-6613-46dd-910f-4caf0a8e58ea", "address": "fa:16:3e:97:8b:81", "network": {"id": "cc512501-011f-4720-b540-0d6adc2eb46a", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1234051733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20a1c4d130394aa79f7e0825cb720528", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap116d832c-66", "ovs_interfaceid": "116d832c-6613-46dd-910f-4caf0a8e58ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.075 226833 DEBUG nova.network.os_vif_util [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:8b:81,bridge_name='br-int',has_traffic_filtering=True,id=116d832c-6613-46dd-910f-4caf0a8e58ea,network=Network(cc512501-011f-4720-b540-0d6adc2eb46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap116d832c-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.075 226833 DEBUG os_vif [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:8b:81,bridge_name='br-int',has_traffic_filtering=True,id=116d832c-6613-46dd-910f-4caf0a8e58ea,network=Network(cc512501-011f-4720-b540-0d6adc2eb46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap116d832c-66') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.076 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.076 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.076 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.079 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.079 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap116d832c-66, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.079 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap116d832c-66, col_values=(('external_ids', {'iface-id': '116d832c-6613-46dd-910f-4caf0a8e58ea', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:97:8b:81', 'vm-uuid': '3a9113d4-7b86-462a-843a-a594bd0908ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.081 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:47 compute-2 NetworkManager[48999]: <info>  [1769847347.0836] manager: (tap116d832c-66): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.083 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.086 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.087 226833 INFO os_vif [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:8b:81,bridge_name='br-int',has_traffic_filtering=True,id=116d832c-6613-46dd-910f-4caf0a8e58ea,network=Network(cc512501-011f-4720-b540-0d6adc2eb46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap116d832c-66')
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.163 226833 DEBUG nova.virt.libvirt.driver [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.164 226833 DEBUG nova.virt.libvirt.driver [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.164 226833 DEBUG nova.virt.libvirt.driver [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] No VIF found with MAC fa:16:3e:97:8b:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.164 226833 INFO nova.virt.libvirt.driver [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Using config drive
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.188 226833 DEBUG nova.storage.rbd_utils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] rbd image 3a9113d4-7b86-462a-843a-a594bd0908ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:15:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/804013123' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:15:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/804013123' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:15:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/923116940' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:15:47 compute-2 nova_compute[226829]: 2026-01-31 08:15:47.995 226833 INFO nova.virt.libvirt.driver [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Creating config drive at /var/lib/nova/instances/3a9113d4-7b86-462a-843a-a594bd0908ae/disk.config
Jan 31 08:15:48 compute-2 nova_compute[226829]: 2026-01-31 08:15:48.000 226833 DEBUG oslo_concurrency.processutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3a9113d4-7b86-462a-843a-a594bd0908ae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpsh0joaw3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:48 compute-2 nova_compute[226829]: 2026-01-31 08:15:48.125 226833 DEBUG oslo_concurrency.processutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3a9113d4-7b86-462a-843a-a594bd0908ae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpsh0joaw3" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:48 compute-2 nova_compute[226829]: 2026-01-31 08:15:48.162 226833 DEBUG nova.storage.rbd_utils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] rbd image 3a9113d4-7b86-462a-843a-a594bd0908ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:15:48 compute-2 nova_compute[226829]: 2026-01-31 08:15:48.167 226833 DEBUG oslo_concurrency.processutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3a9113d4-7b86-462a-843a-a594bd0908ae/disk.config 3a9113d4-7b86-462a-843a-a594bd0908ae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:48 compute-2 nova_compute[226829]: 2026-01-31 08:15:48.346 226833 DEBUG oslo_concurrency.processutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3a9113d4-7b86-462a-843a-a594bd0908ae/disk.config 3a9113d4-7b86-462a-843a-a594bd0908ae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:48 compute-2 nova_compute[226829]: 2026-01-31 08:15:48.347 226833 INFO nova.virt.libvirt.driver [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Deleting local config drive /var/lib/nova/instances/3a9113d4-7b86-462a-843a-a594bd0908ae/disk.config because it was imported into RBD.
Jan 31 08:15:48 compute-2 kernel: tap116d832c-66: entered promiscuous mode
Jan 31 08:15:48 compute-2 NetworkManager[48999]: <info>  [1769847348.3894] manager: (tap116d832c-66): new Tun device (/org/freedesktop/NetworkManager/Devices/274)
Jan 31 08:15:48 compute-2 ovn_controller[133834]: 2026-01-31T08:15:48Z|00567|binding|INFO|Claiming lport 116d832c-6613-46dd-910f-4caf0a8e58ea for this chassis.
Jan 31 08:15:48 compute-2 ovn_controller[133834]: 2026-01-31T08:15:48Z|00568|binding|INFO|116d832c-6613-46dd-910f-4caf0a8e58ea: Claiming fa:16:3e:97:8b:81 10.100.0.11
Jan 31 08:15:48 compute-2 nova_compute[226829]: 2026-01-31 08:15:48.390 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:48 compute-2 ovn_controller[133834]: 2026-01-31T08:15:48Z|00569|binding|INFO|Setting lport 116d832c-6613-46dd-910f-4caf0a8e58ea ovn-installed in OVS
Jan 31 08:15:48 compute-2 nova_compute[226829]: 2026-01-31 08:15:48.399 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:48 compute-2 systemd-machined[195142]: New machine qemu-61-instance-00000088.
Jan 31 08:15:48 compute-2 systemd-udevd[287418]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:15:48 compute-2 systemd[1]: Started Virtual Machine qemu-61-instance-00000088.
Jan 31 08:15:48 compute-2 NetworkManager[48999]: <info>  [1769847348.4398] device (tap116d832c-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:15:48 compute-2 NetworkManager[48999]: <info>  [1769847348.4432] device (tap116d832c-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:15:48 compute-2 ovn_controller[133834]: 2026-01-31T08:15:48Z|00570|binding|INFO|Setting lport 116d832c-6613-46dd-910f-4caf0a8e58ea up in Southbound
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:48.489 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:8b:81 10.100.0.11'], port_security=['fa:16:3e:97:8b:81 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '3a9113d4-7b86-462a-843a-a594bd0908ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cc512501-011f-4720-b540-0d6adc2eb46a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20a1c4d130394aa79f7e0825cb720528', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1b256b02-9592-478f-8285-75291a806b8d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa3eac5b-89e3-4a69-b84e-5aaa66b57fb1, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=116d832c-6613-46dd-910f-4caf0a8e58ea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:48.491 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 116d832c-6613-46dd-910f-4caf0a8e58ea in datapath cc512501-011f-4720-b540-0d6adc2eb46a bound to our chassis
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:48.493 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cc512501-011f-4720-b540-0d6adc2eb46a
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:48.503 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9d8598a0-ed9c-4205-8250-e1ca42febbeb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:48.504 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcc512501-01 in ovnmeta-cc512501-011f-4720-b540-0d6adc2eb46a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:48.507 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcc512501-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:48.507 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb24f24-df86-438a-a665-75ad8abc9bce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:48.508 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5886ca01-f671-43c8-8c12-fb5385df83a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:48.521 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[b67e493b-87b1-485c-b9cd-c7bed0d1c206]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:48.544 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6fb004c2-a87e-4896-9802-9881d5c52382]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:48 compute-2 nova_compute[226829]: 2026-01-31 08:15:48.569 226833 DEBUG nova.network.neutron [req-81bddabf-08ce-4d07-a008-dde43ff1cbf8 req-9f5b7756-fad1-40ad-bbcd-0aa86f29689f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Updated VIF entry in instance network info cache for port 116d832c-6613-46dd-910f-4caf0a8e58ea. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:15:48 compute-2 nova_compute[226829]: 2026-01-31 08:15:48.570 226833 DEBUG nova.network.neutron [req-81bddabf-08ce-4d07-a008-dde43ff1cbf8 req-9f5b7756-fad1-40ad-bbcd-0aa86f29689f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Updating instance_info_cache with network_info: [{"id": "116d832c-6613-46dd-910f-4caf0a8e58ea", "address": "fa:16:3e:97:8b:81", "network": {"id": "cc512501-011f-4720-b540-0d6adc2eb46a", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1234051733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20a1c4d130394aa79f7e0825cb720528", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap116d832c-66", "ovs_interfaceid": "116d832c-6613-46dd-910f-4caf0a8e58ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:48.575 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[541d5448-faab-4316-8d78-71277fd203d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:48 compute-2 NetworkManager[48999]: <info>  [1769847348.5808] manager: (tapcc512501-00): new Veth device (/org/freedesktop/NetworkManager/Devices/275)
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:48.579 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cda22f96-7fc0-447a-9aae-518f9c631cbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:48 compute-2 systemd-udevd[287420]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:48.609 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[439d1fa6-245c-42a6-9902-ae0e44206e72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:48.613 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[ed437392-5019-4a06-b518-f753a5d8bd43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:48 compute-2 nova_compute[226829]: 2026-01-31 08:15:48.631 226833 DEBUG oslo_concurrency.lockutils [req-81bddabf-08ce-4d07-a008-dde43ff1cbf8 req-9f5b7756-fad1-40ad-bbcd-0aa86f29689f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-3a9113d4-7b86-462a-843a-a594bd0908ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:15:48 compute-2 NetworkManager[48999]: <info>  [1769847348.6357] device (tapcc512501-00): carrier: link connected
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:48.639 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[e3bd5fc3-86de-4785-993a-3cc13ed29dd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:48.657 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[12728591-d5df-4558-a701-80242eb4661b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcc512501-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:ef:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 760110, 'reachable_time': 20202, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287451, 'error': None, 'target': 'ovnmeta-cc512501-011f-4720-b540-0d6adc2eb46a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:48.670 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[738f1a91-0d29-45b5-a245-72c5b968f3eb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef2:ef5d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 760110, 'tstamp': 760110}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287452, 'error': None, 'target': 'ovnmeta-cc512501-011f-4720-b540-0d6adc2eb46a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:48.686 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1745e51b-5c78-460e-b5a8-d240228f6a5b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcc512501-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:ef:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 760110, 'reachable_time': 20202, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287453, 'error': None, 'target': 'ovnmeta-cc512501-011f-4720-b540-0d6adc2eb46a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:48.712 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b78e96be-73d8-4c23-8f2a-d090272ad556]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:48.771 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5f19c632-c925-4688-b5c2-8064b1980056]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:48.773 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc512501-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:48.774 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:48.775 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcc512501-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:48 compute-2 NetworkManager[48999]: <info>  [1769847348.7781] manager: (tapcc512501-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/276)
Jan 31 08:15:48 compute-2 kernel: tapcc512501-00: entered promiscuous mode
Jan 31 08:15:48 compute-2 nova_compute[226829]: 2026-01-31 08:15:48.778 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:48.782 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcc512501-00, col_values=(('external_ids', {'iface-id': '9fc010aa-cfb0-4885-9b3f-45ba1a2f1f3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:48 compute-2 ovn_controller[133834]: 2026-01-31T08:15:48Z|00571|binding|INFO|Releasing lport 9fc010aa-cfb0-4885-9b3f-45ba1a2f1f3b from this chassis (sb_readonly=0)
Jan 31 08:15:48 compute-2 nova_compute[226829]: 2026-01-31 08:15:48.784 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:48.787 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cc512501-011f-4720-b540-0d6adc2eb46a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cc512501-011f-4720-b540-0d6adc2eb46a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:48.789 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2df0249b-ff5d-4f9b-83fe-7fcdea9cfa01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:48.790 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-cc512501-011f-4720-b540-0d6adc2eb46a
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/cc512501-011f-4720-b540-0d6adc2eb46a.pid.haproxy
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID cc512501-011f-4720-b540-0d6adc2eb46a
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:15:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:48.792 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cc512501-011f-4720-b540-0d6adc2eb46a', 'env', 'PROCESS_TAG=haproxy-cc512501-011f-4720-b540-0d6adc2eb46a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cc512501-011f-4720-b540-0d6adc2eb46a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:15:48 compute-2 nova_compute[226829]: 2026-01-31 08:15:48.796 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:48.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:48 compute-2 nova_compute[226829]: 2026-01-31 08:15:48.898 226833 DEBUG nova.compute.manager [req-39375778-8f70-4344-93ea-adf07ad4da97 req-3f920225-1e79-4ed0-b7ef-ac285cf1c066 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Received event network-vif-plugged-116d832c-6613-46dd-910f-4caf0a8e58ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:15:48 compute-2 nova_compute[226829]: 2026-01-31 08:15:48.899 226833 DEBUG oslo_concurrency.lockutils [req-39375778-8f70-4344-93ea-adf07ad4da97 req-3f920225-1e79-4ed0-b7ef-ac285cf1c066 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "3a9113d4-7b86-462a-843a-a594bd0908ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:48 compute-2 nova_compute[226829]: 2026-01-31 08:15:48.900 226833 DEBUG oslo_concurrency.lockutils [req-39375778-8f70-4344-93ea-adf07ad4da97 req-3f920225-1e79-4ed0-b7ef-ac285cf1c066 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3a9113d4-7b86-462a-843a-a594bd0908ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:48 compute-2 nova_compute[226829]: 2026-01-31 08:15:48.900 226833 DEBUG oslo_concurrency.lockutils [req-39375778-8f70-4344-93ea-adf07ad4da97 req-3f920225-1e79-4ed0-b7ef-ac285cf1c066 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3a9113d4-7b86-462a-843a-a594bd0908ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:48 compute-2 nova_compute[226829]: 2026-01-31 08:15:48.900 226833 DEBUG nova.compute.manager [req-39375778-8f70-4344-93ea-adf07ad4da97 req-3f920225-1e79-4ed0-b7ef-ac285cf1c066 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Processing event network-vif-plugged-116d832c-6613-46dd-910f-4caf0a8e58ea _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:15:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:48.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:48 compute-2 ceph-mon[77282]: pgmap v2415: 305 pgs: 305 active+clean; 596 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.2 MiB/s wr, 243 op/s
Jan 31 08:15:48 compute-2 nova_compute[226829]: 2026-01-31 08:15:48.947 226833 DEBUG nova.compute.manager [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:15:48 compute-2 nova_compute[226829]: 2026-01-31 08:15:48.948 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847348.946932, 3a9113d4-7b86-462a-843a-a594bd0908ae => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:15:48 compute-2 nova_compute[226829]: 2026-01-31 08:15:48.948 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] VM Started (Lifecycle Event)
Jan 31 08:15:48 compute-2 nova_compute[226829]: 2026-01-31 08:15:48.953 226833 DEBUG nova.virt.libvirt.driver [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:15:48 compute-2 nova_compute[226829]: 2026-01-31 08:15:48.956 226833 INFO nova.virt.libvirt.driver [-] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Instance spawned successfully.
Jan 31 08:15:48 compute-2 nova_compute[226829]: 2026-01-31 08:15:48.956 226833 DEBUG nova.virt.libvirt.driver [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:15:48 compute-2 nova_compute[226829]: 2026-01-31 08:15:48.982 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:15:48 compute-2 nova_compute[226829]: 2026-01-31 08:15:48.994 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:15:49 compute-2 nova_compute[226829]: 2026-01-31 08:15:49.001 226833 DEBUG nova.virt.libvirt.driver [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:15:49 compute-2 nova_compute[226829]: 2026-01-31 08:15:49.002 226833 DEBUG nova.virt.libvirt.driver [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:15:49 compute-2 nova_compute[226829]: 2026-01-31 08:15:49.002 226833 DEBUG nova.virt.libvirt.driver [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:15:49 compute-2 nova_compute[226829]: 2026-01-31 08:15:49.003 226833 DEBUG nova.virt.libvirt.driver [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:15:49 compute-2 nova_compute[226829]: 2026-01-31 08:15:49.004 226833 DEBUG nova.virt.libvirt.driver [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:15:49 compute-2 nova_compute[226829]: 2026-01-31 08:15:49.004 226833 DEBUG nova.virt.libvirt.driver [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:15:49 compute-2 nova_compute[226829]: 2026-01-31 08:15:49.045 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:15:49 compute-2 nova_compute[226829]: 2026-01-31 08:15:49.046 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847348.9470944, 3a9113d4-7b86-462a-843a-a594bd0908ae => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:15:49 compute-2 nova_compute[226829]: 2026-01-31 08:15:49.047 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] VM Paused (Lifecycle Event)
Jan 31 08:15:49 compute-2 nova_compute[226829]: 2026-01-31 08:15:49.085 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:15:49 compute-2 nova_compute[226829]: 2026-01-31 08:15:49.090 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847348.952806, 3a9113d4-7b86-462a-843a-a594bd0908ae => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:15:49 compute-2 nova_compute[226829]: 2026-01-31 08:15:49.091 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] VM Resumed (Lifecycle Event)
Jan 31 08:15:49 compute-2 nova_compute[226829]: 2026-01-31 08:15:49.117 226833 INFO nova.compute.manager [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Took 9.57 seconds to spawn the instance on the hypervisor.
Jan 31 08:15:49 compute-2 nova_compute[226829]: 2026-01-31 08:15:49.118 226833 DEBUG nova.compute.manager [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:15:49 compute-2 podman[287527]: 2026-01-31 08:15:49.150864444 +0000 UTC m=+0.057739395 container create 4c81992f8c834c2350b155fe59b9fcb31424ecd499e9f60c85c648e9820138c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc512501-011f-4720-b540-0d6adc2eb46a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 08:15:49 compute-2 nova_compute[226829]: 2026-01-31 08:15:49.163 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:15:49 compute-2 nova_compute[226829]: 2026-01-31 08:15:49.168 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:15:49 compute-2 systemd[1]: Started libpod-conmon-4c81992f8c834c2350b155fe59b9fcb31424ecd499e9f60c85c648e9820138c1.scope.
Jan 31 08:15:49 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:15:49 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb251e2d5f2c92f688903c83cab4039d049aad9e08830f1d8320869824b602c5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:49 compute-2 podman[287527]: 2026-01-31 08:15:49.113983775 +0000 UTC m=+0.020858756 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:15:49 compute-2 nova_compute[226829]: 2026-01-31 08:15:49.215 226833 INFO nova.compute.manager [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Took 10.79 seconds to build instance.
Jan 31 08:15:49 compute-2 podman[287527]: 2026-01-31 08:15:49.21978796 +0000 UTC m=+0.126662961 container init 4c81992f8c834c2350b155fe59b9fcb31424ecd499e9f60c85c648e9820138c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc512501-011f-4720-b540-0d6adc2eb46a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 08:15:49 compute-2 podman[287527]: 2026-01-31 08:15:49.225509225 +0000 UTC m=+0.132384186 container start 4c81992f8c834c2350b155fe59b9fcb31424ecd499e9f60c85c648e9820138c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc512501-011f-4720-b540-0d6adc2eb46a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 08:15:49 compute-2 nova_compute[226829]: 2026-01-31 08:15:49.254 226833 DEBUG oslo_concurrency.lockutils [None req-da34f85c-45ab-4410-b17e-15e1d2c64635 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Lock "3a9113d4-7b86-462a-843a-a594bd0908ae" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:49 compute-2 neutron-haproxy-ovnmeta-cc512501-011f-4720-b540-0d6adc2eb46a[287540]: [NOTICE]   (287546) : New worker (287548) forked
Jan 31 08:15:49 compute-2 neutron-haproxy-ovnmeta-cc512501-011f-4720-b540-0d6adc2eb46a[287540]: [NOTICE]   (287546) : Loading success.
Jan 31 08:15:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:15:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:50.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:15:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:50.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:15:50 compute-2 nova_compute[226829]: 2026-01-31 08:15:50.942 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:50 compute-2 ceph-mon[77282]: pgmap v2416: 305 pgs: 305 active+clean; 574 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 220 op/s
Jan 31 08:15:51 compute-2 nova_compute[226829]: 2026-01-31 08:15:51.195 226833 DEBUG nova.compute.manager [req-df11d218-65f0-4bd5-ab7d-6a73826a963b req-7595314a-4fe3-4563-9ed9-ca4a575a0c42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Received event network-vif-plugged-116d832c-6613-46dd-910f-4caf0a8e58ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:15:51 compute-2 nova_compute[226829]: 2026-01-31 08:15:51.196 226833 DEBUG oslo_concurrency.lockutils [req-df11d218-65f0-4bd5-ab7d-6a73826a963b req-7595314a-4fe3-4563-9ed9-ca4a575a0c42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "3a9113d4-7b86-462a-843a-a594bd0908ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:51 compute-2 nova_compute[226829]: 2026-01-31 08:15:51.197 226833 DEBUG oslo_concurrency.lockutils [req-df11d218-65f0-4bd5-ab7d-6a73826a963b req-7595314a-4fe3-4563-9ed9-ca4a575a0c42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3a9113d4-7b86-462a-843a-a594bd0908ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:51 compute-2 nova_compute[226829]: 2026-01-31 08:15:51.197 226833 DEBUG oslo_concurrency.lockutils [req-df11d218-65f0-4bd5-ab7d-6a73826a963b req-7595314a-4fe3-4563-9ed9-ca4a575a0c42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3a9113d4-7b86-462a-843a-a594bd0908ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:51 compute-2 nova_compute[226829]: 2026-01-31 08:15:51.198 226833 DEBUG nova.compute.manager [req-df11d218-65f0-4bd5-ab7d-6a73826a963b req-7595314a-4fe3-4563-9ed9-ca4a575a0c42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] No waiting events found dispatching network-vif-plugged-116d832c-6613-46dd-910f-4caf0a8e58ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:15:51 compute-2 nova_compute[226829]: 2026-01-31 08:15:51.199 226833 WARNING nova.compute.manager [req-df11d218-65f0-4bd5-ab7d-6a73826a963b req-7595314a-4fe3-4563-9ed9-ca4a575a0c42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Received unexpected event network-vif-plugged-116d832c-6613-46dd-910f-4caf0a8e58ea for instance with vm_state active and task_state None.
Jan 31 08:15:52 compute-2 nova_compute[226829]: 2026-01-31 08:15:52.081 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:52.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:52.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:52 compute-2 ceph-mon[77282]: pgmap v2417: 305 pgs: 305 active+clean; 534 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.5 MiB/s wr, 244 op/s
Jan 31 08:15:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3079221792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:54 compute-2 nova_compute[226829]: 2026-01-31 08:15:54.681 226833 DEBUG oslo_concurrency.lockutils [None req-22a85480-4fc1-4430-9b0a-f6d4661f6ed6 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Acquiring lock "3a9113d4-7b86-462a-843a-a594bd0908ae" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:54 compute-2 nova_compute[226829]: 2026-01-31 08:15:54.682 226833 DEBUG oslo_concurrency.lockutils [None req-22a85480-4fc1-4430-9b0a-f6d4661f6ed6 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Lock "3a9113d4-7b86-462a-843a-a594bd0908ae" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:54 compute-2 nova_compute[226829]: 2026-01-31 08:15:54.682 226833 DEBUG oslo_concurrency.lockutils [None req-22a85480-4fc1-4430-9b0a-f6d4661f6ed6 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Acquiring lock "3a9113d4-7b86-462a-843a-a594bd0908ae-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:54 compute-2 nova_compute[226829]: 2026-01-31 08:15:54.683 226833 DEBUG oslo_concurrency.lockutils [None req-22a85480-4fc1-4430-9b0a-f6d4661f6ed6 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Lock "3a9113d4-7b86-462a-843a-a594bd0908ae-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:54 compute-2 nova_compute[226829]: 2026-01-31 08:15:54.683 226833 DEBUG oslo_concurrency.lockutils [None req-22a85480-4fc1-4430-9b0a-f6d4661f6ed6 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Lock "3a9113d4-7b86-462a-843a-a594bd0908ae-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:54 compute-2 nova_compute[226829]: 2026-01-31 08:15:54.685 226833 INFO nova.compute.manager [None req-22a85480-4fc1-4430-9b0a-f6d4661f6ed6 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Terminating instance
Jan 31 08:15:54 compute-2 nova_compute[226829]: 2026-01-31 08:15:54.686 226833 DEBUG nova.compute.manager [None req-22a85480-4fc1-4430-9b0a-f6d4661f6ed6 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:15:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:54.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:54 compute-2 ceph-mon[77282]: pgmap v2418: 305 pgs: 305 active+clean; 517 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.5 MiB/s wr, 225 op/s
Jan 31 08:15:54 compute-2 kernel: tap116d832c-66 (unregistering): left promiscuous mode
Jan 31 08:15:54 compute-2 NetworkManager[48999]: <info>  [1769847354.8940] device (tap116d832c-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:15:54 compute-2 nova_compute[226829]: 2026-01-31 08:15:54.895 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:54 compute-2 ovn_controller[133834]: 2026-01-31T08:15:54Z|00572|binding|INFO|Releasing lport 116d832c-6613-46dd-910f-4caf0a8e58ea from this chassis (sb_readonly=0)
Jan 31 08:15:54 compute-2 ovn_controller[133834]: 2026-01-31T08:15:54Z|00573|binding|INFO|Setting lport 116d832c-6613-46dd-910f-4caf0a8e58ea down in Southbound
Jan 31 08:15:54 compute-2 nova_compute[226829]: 2026-01-31 08:15:54.902 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:54 compute-2 ovn_controller[133834]: 2026-01-31T08:15:54Z|00574|binding|INFO|Removing iface tap116d832c-66 ovn-installed in OVS
Jan 31 08:15:54 compute-2 nova_compute[226829]: 2026-01-31 08:15:54.908 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:54.908 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:8b:81 10.100.0.11'], port_security=['fa:16:3e:97:8b:81 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '3a9113d4-7b86-462a-843a-a594bd0908ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cc512501-011f-4720-b540-0d6adc2eb46a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20a1c4d130394aa79f7e0825cb720528', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1b256b02-9592-478f-8285-75291a806b8d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa3eac5b-89e3-4a69-b84e-5aaa66b57fb1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=116d832c-6613-46dd-910f-4caf0a8e58ea) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:15:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:54.911 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 116d832c-6613-46dd-910f-4caf0a8e58ea in datapath cc512501-011f-4720-b540-0d6adc2eb46a unbound from our chassis
Jan 31 08:15:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:54.912 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cc512501-011f-4720-b540-0d6adc2eb46a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:15:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:54.914 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[65209c7a-2a49-4769-8af8-e656608881e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:54.915 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cc512501-011f-4720-b540-0d6adc2eb46a namespace which is not needed anymore
Jan 31 08:15:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:15:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:54.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:15:54 compute-2 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000088.scope: Deactivated successfully.
Jan 31 08:15:54 compute-2 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000088.scope: Consumed 6.541s CPU time.
Jan 31 08:15:54 compute-2 systemd-machined[195142]: Machine qemu-61-instance-00000088 terminated.
Jan 31 08:15:55 compute-2 neutron-haproxy-ovnmeta-cc512501-011f-4720-b540-0d6adc2eb46a[287540]: [NOTICE]   (287546) : haproxy version is 2.8.14-c23fe91
Jan 31 08:15:55 compute-2 neutron-haproxy-ovnmeta-cc512501-011f-4720-b540-0d6adc2eb46a[287540]: [NOTICE]   (287546) : path to executable is /usr/sbin/haproxy
Jan 31 08:15:55 compute-2 neutron-haproxy-ovnmeta-cc512501-011f-4720-b540-0d6adc2eb46a[287540]: [WARNING]  (287546) : Exiting Master process...
Jan 31 08:15:55 compute-2 neutron-haproxy-ovnmeta-cc512501-011f-4720-b540-0d6adc2eb46a[287540]: [WARNING]  (287546) : Exiting Master process...
Jan 31 08:15:55 compute-2 neutron-haproxy-ovnmeta-cc512501-011f-4720-b540-0d6adc2eb46a[287540]: [ALERT]    (287546) : Current worker (287548) exited with code 143 (Terminated)
Jan 31 08:15:55 compute-2 neutron-haproxy-ovnmeta-cc512501-011f-4720-b540-0d6adc2eb46a[287540]: [WARNING]  (287546) : All workers exited. Exiting... (0)
Jan 31 08:15:55 compute-2 systemd[1]: libpod-4c81992f8c834c2350b155fe59b9fcb31424ecd499e9f60c85c648e9820138c1.scope: Deactivated successfully.
Jan 31 08:15:55 compute-2 podman[287583]: 2026-01-31 08:15:55.039746929 +0000 UTC m=+0.048505815 container died 4c81992f8c834c2350b155fe59b9fcb31424ecd499e9f60c85c648e9820138c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc512501-011f-4720-b540-0d6adc2eb46a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.053 226833 INFO nova.virt.libvirt.driver [-] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Instance destroyed successfully.
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.054 226833 DEBUG nova.objects.instance [None req-22a85480-4fc1-4430-9b0a-f6d4661f6ed6 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Lazy-loading 'resources' on Instance uuid 3a9113d4-7b86-462a-843a-a594bd0908ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:15:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:15:55 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4c81992f8c834c2350b155fe59b9fcb31424ecd499e9f60c85c648e9820138c1-userdata-shm.mount: Deactivated successfully.
Jan 31 08:15:55 compute-2 systemd[1]: var-lib-containers-storage-overlay-bb251e2d5f2c92f688903c83cab4039d049aad9e08830f1d8320869824b602c5-merged.mount: Deactivated successfully.
Jan 31 08:15:55 compute-2 podman[287583]: 2026-01-31 08:15:55.076060352 +0000 UTC m=+0.084819228 container cleanup 4c81992f8c834c2350b155fe59b9fcb31424ecd499e9f60c85c648e9820138c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc512501-011f-4720-b540-0d6adc2eb46a, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:15:55 compute-2 systemd[1]: libpod-conmon-4c81992f8c834c2350b155fe59b9fcb31424ecd499e9f60c85c648e9820138c1.scope: Deactivated successfully.
Jan 31 08:15:55 compute-2 podman[287621]: 2026-01-31 08:15:55.129604872 +0000 UTC m=+0.038381901 container remove 4c81992f8c834c2350b155fe59b9fcb31424ecd499e9f60c85c648e9820138c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc512501-011f-4720-b540-0d6adc2eb46a, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:15:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:55.133 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[683736a6-54ec-4c25-86fa-7ca23e375b18]: (4, ('Sat Jan 31 08:15:54 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-cc512501-011f-4720-b540-0d6adc2eb46a (4c81992f8c834c2350b155fe59b9fcb31424ecd499e9f60c85c648e9820138c1)\n4c81992f8c834c2350b155fe59b9fcb31424ecd499e9f60c85c648e9820138c1\nSat Jan 31 08:15:55 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-cc512501-011f-4720-b540-0d6adc2eb46a (4c81992f8c834c2350b155fe59b9fcb31424ecd499e9f60c85c648e9820138c1)\n4c81992f8c834c2350b155fe59b9fcb31424ecd499e9f60c85c648e9820138c1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:55.135 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[aceb21e9-0ce9-4408-bc07-edfc3aa8e343]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.135 226833 DEBUG nova.virt.libvirt.vif [None req-22a85480-4fc1-4430-9b0a-f6d4661f6ed6 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:15:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-768859624',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-servertagstestjson-server-768859624',id=136,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:15:49Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='20a1c4d130394aa79f7e0825cb720528',ramdisk_id='',reservation_id='r-02dnocz5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='temp
est-ServerTagsTestJSON-1875224061',owner_user_name='tempest-ServerTagsTestJSON-1875224061-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:15:49Z,user_data=None,user_id='5e598a75077944569409ad429a456aea',uuid=3a9113d4-7b86-462a-843a-a594bd0908ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "116d832c-6613-46dd-910f-4caf0a8e58ea", "address": "fa:16:3e:97:8b:81", "network": {"id": "cc512501-011f-4720-b540-0d6adc2eb46a", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1234051733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20a1c4d130394aa79f7e0825cb720528", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap116d832c-66", "ovs_interfaceid": "116d832c-6613-46dd-910f-4caf0a8e58ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.136 226833 DEBUG nova.network.os_vif_util [None req-22a85480-4fc1-4430-9b0a-f6d4661f6ed6 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Converting VIF {"id": "116d832c-6613-46dd-910f-4caf0a8e58ea", "address": "fa:16:3e:97:8b:81", "network": {"id": "cc512501-011f-4720-b540-0d6adc2eb46a", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1234051733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20a1c4d130394aa79f7e0825cb720528", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap116d832c-66", "ovs_interfaceid": "116d832c-6613-46dd-910f-4caf0a8e58ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:15:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:55.136 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc512501-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.137 226833 DEBUG nova.network.os_vif_util [None req-22a85480-4fc1-4430-9b0a-f6d4661f6ed6 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:8b:81,bridge_name='br-int',has_traffic_filtering=True,id=116d832c-6613-46dd-910f-4caf0a8e58ea,network=Network(cc512501-011f-4720-b540-0d6adc2eb46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap116d832c-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.137 226833 DEBUG os_vif [None req-22a85480-4fc1-4430-9b0a-f6d4661f6ed6 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:8b:81,bridge_name='br-int',has_traffic_filtering=True,id=116d832c-6613-46dd-910f-4caf0a8e58ea,network=Network(cc512501-011f-4720-b540-0d6adc2eb46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap116d832c-66') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:15:55 compute-2 kernel: tapcc512501-00: left promiscuous mode
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.140 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.143 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap116d832c-66, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.144 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.145 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.146 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.146 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:55.149 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[76d165c0-c80c-4375-b2e0-bf9f53f36b2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.154 226833 INFO os_vif [None req-22a85480-4fc1-4430-9b0a-f6d4661f6ed6 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:8b:81,bridge_name='br-int',has_traffic_filtering=True,id=116d832c-6613-46dd-910f-4caf0a8e58ea,network=Network(cc512501-011f-4720-b540-0d6adc2eb46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap116d832c-66')
Jan 31 08:15:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:55.162 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[86fa5fde-7e88-4793-82e3-3c71008b485e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:55.164 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b630ffdc-37ba-4312-916d-f0823523f67d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:55.184 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7c826173-c269-48af-b669-a9ce1798f644]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 760104, 'reachable_time': 17634, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287652, 'error': None, 'target': 'ovnmeta-cc512501-011f-4720-b540-0d6adc2eb46a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:55.187 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cc512501-011f-4720-b540-0d6adc2eb46a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:15:55 compute-2 systemd[1]: run-netns-ovnmeta\x2dcc512501\x2d011f\x2d4720\x2db540\x2d0d6adc2eb46a.mount: Deactivated successfully.
Jan 31 08:15:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:15:55.188 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[68b85b6b-509a-4f2b-b98b-4aee9e2f4a26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.613 226833 DEBUG nova.compute.manager [req-86c72770-3aff-4695-a4fa-129b4446bd18 req-3d7c455f-14cd-4bdc-a311-1b7c5a99d59e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Received event network-vif-unplugged-116d832c-6613-46dd-910f-4caf0a8e58ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.613 226833 DEBUG oslo_concurrency.lockutils [req-86c72770-3aff-4695-a4fa-129b4446bd18 req-3d7c455f-14cd-4bdc-a311-1b7c5a99d59e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "3a9113d4-7b86-462a-843a-a594bd0908ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.614 226833 DEBUG oslo_concurrency.lockutils [req-86c72770-3aff-4695-a4fa-129b4446bd18 req-3d7c455f-14cd-4bdc-a311-1b7c5a99d59e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3a9113d4-7b86-462a-843a-a594bd0908ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.614 226833 DEBUG oslo_concurrency.lockutils [req-86c72770-3aff-4695-a4fa-129b4446bd18 req-3d7c455f-14cd-4bdc-a311-1b7c5a99d59e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3a9113d4-7b86-462a-843a-a594bd0908ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.614 226833 DEBUG nova.compute.manager [req-86c72770-3aff-4695-a4fa-129b4446bd18 req-3d7c455f-14cd-4bdc-a311-1b7c5a99d59e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] No waiting events found dispatching network-vif-unplugged-116d832c-6613-46dd-910f-4caf0a8e58ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.615 226833 DEBUG nova.compute.manager [req-86c72770-3aff-4695-a4fa-129b4446bd18 req-3d7c455f-14cd-4bdc-a311-1b7c5a99d59e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Received event network-vif-unplugged-116d832c-6613-46dd-910f-4caf0a8e58ea for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.698 226833 INFO nova.virt.libvirt.driver [None req-22a85480-4fc1-4430-9b0a-f6d4661f6ed6 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Deleting instance files /var/lib/nova/instances/3a9113d4-7b86-462a-843a-a594bd0908ae_del
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.699 226833 INFO nova.virt.libvirt.driver [None req-22a85480-4fc1-4430-9b0a-f6d4661f6ed6 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Deletion of /var/lib/nova/instances/3a9113d4-7b86-462a-843a-a594bd0908ae_del complete
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.834 226833 INFO nova.compute.manager [None req-22a85480-4fc1-4430-9b0a-f6d4661f6ed6 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Took 1.15 seconds to destroy the instance on the hypervisor.
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.835 226833 DEBUG oslo.service.loopingcall [None req-22a85480-4fc1-4430-9b0a-f6d4661f6ed6 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.835 226833 DEBUG nova.compute.manager [-] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.836 226833 DEBUG nova.network.neutron [-] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.879 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847340.8770993, 5a59388d-bade-4df0-9ac0-0022df15ea02 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.879 226833 INFO nova.compute.manager [-] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] VM Stopped (Lifecycle Event)
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.919 226833 DEBUG nova.compute.manager [None req-de27f208-c7a4-43af-a9cf-d6b1873b6142 - - - - - -] [instance: 5a59388d-bade-4df0-9ac0-0022df15ea02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:15:55 compute-2 nova_compute[226829]: 2026-01-31 08:15:55.941 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:15:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:56.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:15:56 compute-2 ceph-mon[77282]: pgmap v2419: 305 pgs: 305 active+clean; 533 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.4 MiB/s wr, 210 op/s
Jan 31 08:15:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:56.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:57 compute-2 nova_compute[226829]: 2026-01-31 08:15:57.795 226833 DEBUG nova.compute.manager [req-d9a24a83-c675-4b0a-9b78-94cdfaaffe1f req-fed4eda4-d3a1-40a5-ae03-ddb9ab07e27b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Received event network-vif-plugged-116d832c-6613-46dd-910f-4caf0a8e58ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:15:57 compute-2 nova_compute[226829]: 2026-01-31 08:15:57.796 226833 DEBUG oslo_concurrency.lockutils [req-d9a24a83-c675-4b0a-9b78-94cdfaaffe1f req-fed4eda4-d3a1-40a5-ae03-ddb9ab07e27b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "3a9113d4-7b86-462a-843a-a594bd0908ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:57 compute-2 nova_compute[226829]: 2026-01-31 08:15:57.796 226833 DEBUG oslo_concurrency.lockutils [req-d9a24a83-c675-4b0a-9b78-94cdfaaffe1f req-fed4eda4-d3a1-40a5-ae03-ddb9ab07e27b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3a9113d4-7b86-462a-843a-a594bd0908ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:57 compute-2 nova_compute[226829]: 2026-01-31 08:15:57.796 226833 DEBUG oslo_concurrency.lockutils [req-d9a24a83-c675-4b0a-9b78-94cdfaaffe1f req-fed4eda4-d3a1-40a5-ae03-ddb9ab07e27b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3a9113d4-7b86-462a-843a-a594bd0908ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:57 compute-2 nova_compute[226829]: 2026-01-31 08:15:57.796 226833 DEBUG nova.compute.manager [req-d9a24a83-c675-4b0a-9b78-94cdfaaffe1f req-fed4eda4-d3a1-40a5-ae03-ddb9ab07e27b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] No waiting events found dispatching network-vif-plugged-116d832c-6613-46dd-910f-4caf0a8e58ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:15:57 compute-2 nova_compute[226829]: 2026-01-31 08:15:57.796 226833 WARNING nova.compute.manager [req-d9a24a83-c675-4b0a-9b78-94cdfaaffe1f req-fed4eda4-d3a1-40a5-ae03-ddb9ab07e27b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Received unexpected event network-vif-plugged-116d832c-6613-46dd-910f-4caf0a8e58ea for instance with vm_state active and task_state deleting.
Jan 31 08:15:58 compute-2 nova_compute[226829]: 2026-01-31 08:15:58.021 226833 DEBUG nova.network.neutron [-] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:15:58 compute-2 nova_compute[226829]: 2026-01-31 08:15:58.089 226833 INFO nova.compute.manager [-] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Took 2.25 seconds to deallocate network for instance.
Jan 31 08:15:58 compute-2 nova_compute[226829]: 2026-01-31 08:15:58.127 226833 DEBUG nova.compute.manager [req-81eb5d4e-de0b-49d2-a705-4cea549da43f req-79f3af71-8711-4450-9e85-2acdddaa762d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Received event network-vif-deleted-116d832c-6613-46dd-910f-4caf0a8e58ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:15:58 compute-2 nova_compute[226829]: 2026-01-31 08:15:58.207 226833 DEBUG oslo_concurrency.lockutils [None req-22a85480-4fc1-4430-9b0a-f6d4661f6ed6 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:58 compute-2 nova_compute[226829]: 2026-01-31 08:15:58.208 226833 DEBUG oslo_concurrency.lockutils [None req-22a85480-4fc1-4430-9b0a-f6d4661f6ed6 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:58 compute-2 nova_compute[226829]: 2026-01-31 08:15:58.279 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:58 compute-2 nova_compute[226829]: 2026-01-31 08:15:58.344 226833 DEBUG oslo_concurrency.processutils [None req-22a85480-4fc1-4430-9b0a-f6d4661f6ed6 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:15:58 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/561970599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:58 compute-2 nova_compute[226829]: 2026-01-31 08:15:58.763 226833 DEBUG oslo_concurrency.processutils [None req-22a85480-4fc1-4430-9b0a-f6d4661f6ed6 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:58 compute-2 nova_compute[226829]: 2026-01-31 08:15:58.771 226833 DEBUG nova.compute.provider_tree [None req-22a85480-4fc1-4430-9b0a-f6d4661f6ed6 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:15:58 compute-2 nova_compute[226829]: 2026-01-31 08:15:58.798 226833 DEBUG nova.scheduler.client.report [None req-22a85480-4fc1-4430-9b0a-f6d4661f6ed6 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:15:58 compute-2 nova_compute[226829]: 2026-01-31 08:15:58.828 226833 DEBUG oslo_concurrency.lockutils [None req-22a85480-4fc1-4430-9b0a-f6d4661f6ed6 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:58.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:58 compute-2 ceph-mon[77282]: pgmap v2420: 305 pgs: 305 active+clean; 524 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.1 MiB/s wr, 216 op/s
Jan 31 08:15:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/561970599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:58 compute-2 nova_compute[226829]: 2026-01-31 08:15:58.893 226833 INFO nova.scheduler.client.report [None req-22a85480-4fc1-4430-9b0a-f6d4661f6ed6 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Deleted allocations for instance 3a9113d4-7b86-462a-843a-a594bd0908ae
Jan 31 08:15:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:15:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:58.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:59 compute-2 nova_compute[226829]: 2026-01-31 08:15:59.027 226833 DEBUG oslo_concurrency.lockutils [None req-22a85480-4fc1-4430-9b0a-f6d4661f6ed6 5e598a75077944569409ad429a456aea 20a1c4d130394aa79f7e0825cb720528 - - default default] Lock "3a9113d4-7b86-462a-843a-a594bd0908ae" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.345s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:59 compute-2 nova_compute[226829]: 2026-01-31 08:15:59.365 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:59 compute-2 nova_compute[226829]: 2026-01-31 08:15:59.433 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:00 compute-2 nova_compute[226829]: 2026-01-31 08:16:00.144 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:00.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:00 compute-2 ceph-mon[77282]: pgmap v2421: 305 pgs: 305 active+clean; 524 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 163 op/s
Jan 31 08:16:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:00.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:00 compute-2 nova_compute[226829]: 2026-01-31 08:16:00.943 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:01 compute-2 podman[287683]: 2026-01-31 08:16:01.197953406 +0000 UTC m=+0.084448668 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 31 08:16:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:02.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:02 compute-2 ceph-mon[77282]: pgmap v2422: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 185 op/s
Jan 31 08:16:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:02.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:16:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:04.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:16:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:16:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:04.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:16:04 compute-2 ceph-mon[77282]: pgmap v2423: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Jan 31 08:16:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:05 compute-2 nova_compute[226829]: 2026-01-31 08:16:05.146 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:05 compute-2 nova_compute[226829]: 2026-01-31 08:16:05.945 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:06 compute-2 sudo[287713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:16:06 compute-2 sudo[287713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:06 compute-2 sudo[287713]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:06 compute-2 sudo[287738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:16:06 compute-2 sudo[287738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:06 compute-2 sudo[287738]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:06.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:06.890 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:16:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:06.890 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:16:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:06.891 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:16:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:06.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:06 compute-2 ceph-mon[77282]: pgmap v2424: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Jan 31 08:16:08 compute-2 sudo[287764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:16:08 compute-2 sudo[287764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:08 compute-2 sudo[287764]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:08 compute-2 sudo[287789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:16:08 compute-2 sudo[287789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:08 compute-2 sudo[287789]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:08.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:08 compute-2 sudo[287814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:16:08 compute-2 sudo[287814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:08 compute-2 sudo[287814]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:08.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:08 compute-2 sudo[287839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:16:08 compute-2 ceph-mon[77282]: pgmap v2425: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 1016 KiB/s wr, 80 op/s
Jan 31 08:16:08 compute-2 sudo[287839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:09 compute-2 sudo[287839]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:09 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:16:09 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:16:09 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:16:09 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:16:09 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:16:09 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:16:10 compute-2 nova_compute[226829]: 2026-01-31 08:16:10.052 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847355.050429, 3a9113d4-7b86-462a-843a-a594bd0908ae => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:16:10 compute-2 nova_compute[226829]: 2026-01-31 08:16:10.052 226833 INFO nova.compute.manager [-] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] VM Stopped (Lifecycle Event)
Jan 31 08:16:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:10 compute-2 nova_compute[226829]: 2026-01-31 08:16:10.078 226833 DEBUG nova.compute.manager [None req-51621dcc-e221-4401-bfe3-1dae90af642c - - - - - -] [instance: 3a9113d4-7b86-462a-843a-a594bd0908ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:16:10 compute-2 nova_compute[226829]: 2026-01-31 08:16:10.147 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:10.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:10.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:10 compute-2 nova_compute[226829]: 2026-01-31 08:16:10.960 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:10 compute-2 ceph-mon[77282]: pgmap v2426: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 140 KiB/s rd, 152 KiB/s wr, 34 op/s
Jan 31 08:16:11 compute-2 podman[287897]: 2026-01-31 08:16:11.194981675 +0000 UTC m=+0.070029027 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 08:16:12 compute-2 ceph-mon[77282]: pgmap v2427: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 140 KiB/s rd, 156 KiB/s wr, 35 op/s
Jan 31 08:16:12 compute-2 sshd[169503]: Timeout before authentication for connection from 115.236.8.149 to 38.129.56.169, pid = 285128
Jan 31 08:16:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:12.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:12.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:14 compute-2 ceph-mon[77282]: pgmap v2428: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s wr, 0 op/s
Jan 31 08:16:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:14.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:14.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:15 compute-2 nova_compute[226829]: 2026-01-31 08:16:15.149 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:15 compute-2 nova_compute[226829]: 2026-01-31 08:16:15.290 226833 DEBUG oslo_concurrency.lockutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:16:15 compute-2 nova_compute[226829]: 2026-01-31 08:16:15.290 226833 DEBUG oslo_concurrency.lockutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:16:15 compute-2 nova_compute[226829]: 2026-01-31 08:16:15.324 226833 DEBUG nova.compute.manager [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:16:15 compute-2 nova_compute[226829]: 2026-01-31 08:16:15.435 226833 DEBUG oslo_concurrency.lockutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:16:15 compute-2 nova_compute[226829]: 2026-01-31 08:16:15.435 226833 DEBUG oslo_concurrency.lockutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:16:15 compute-2 nova_compute[226829]: 2026-01-31 08:16:15.444 226833 DEBUG nova.virt.hardware [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:16:15 compute-2 nova_compute[226829]: 2026-01-31 08:16:15.445 226833 INFO nova.compute.claims [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:16:15 compute-2 nova_compute[226829]: 2026-01-31 08:16:15.654 226833 DEBUG oslo_concurrency.processutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:16:15 compute-2 nova_compute[226829]: 2026-01-31 08:16:15.995 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:16:16 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3818137887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:16:16 compute-2 nova_compute[226829]: 2026-01-31 08:16:16.084 226833 DEBUG oslo_concurrency.processutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:16:16 compute-2 nova_compute[226829]: 2026-01-31 08:16:16.089 226833 DEBUG nova.compute.provider_tree [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:16:16 compute-2 nova_compute[226829]: 2026-01-31 08:16:16.111 226833 DEBUG nova.scheduler.client.report [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:16:16 compute-2 nova_compute[226829]: 2026-01-31 08:16:16.187 226833 DEBUG oslo_concurrency.lockutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:16:16 compute-2 nova_compute[226829]: 2026-01-31 08:16:16.187 226833 DEBUG nova.compute.manager [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:16:16 compute-2 nova_compute[226829]: 2026-01-31 08:16:16.261 226833 DEBUG nova.compute.manager [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:16:16 compute-2 nova_compute[226829]: 2026-01-31 08:16:16.262 226833 DEBUG nova.network.neutron [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:16:16 compute-2 nova_compute[226829]: 2026-01-31 08:16:16.314 226833 INFO nova.virt.libvirt.driver [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:16:16 compute-2 nova_compute[226829]: 2026-01-31 08:16:16.362 226833 DEBUG nova.compute.manager [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:16:16 compute-2 nova_compute[226829]: 2026-01-31 08:16:16.499 226833 DEBUG nova.compute.manager [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:16:16 compute-2 nova_compute[226829]: 2026-01-31 08:16:16.501 226833 DEBUG nova.virt.libvirt.driver [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:16:16 compute-2 nova_compute[226829]: 2026-01-31 08:16:16.501 226833 INFO nova.virt.libvirt.driver [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Creating image(s)
Jan 31 08:16:16 compute-2 nova_compute[226829]: 2026-01-31 08:16:16.540 226833 DEBUG nova.storage.rbd_utils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] rbd image 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:16:16 compute-2 nova_compute[226829]: 2026-01-31 08:16:16.577 226833 DEBUG nova.storage.rbd_utils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] rbd image 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:16:16 compute-2 nova_compute[226829]: 2026-01-31 08:16:16.610 226833 DEBUG nova.storage.rbd_utils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] rbd image 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:16:16 compute-2 nova_compute[226829]: 2026-01-31 08:16:16.614 226833 DEBUG oslo_concurrency.processutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:16:16 compute-2 nova_compute[226829]: 2026-01-31 08:16:16.666 226833 DEBUG oslo_concurrency.processutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:16:16 compute-2 nova_compute[226829]: 2026-01-31 08:16:16.667 226833 DEBUG oslo_concurrency.lockutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:16:16 compute-2 nova_compute[226829]: 2026-01-31 08:16:16.668 226833 DEBUG oslo_concurrency.lockutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:16:16 compute-2 nova_compute[226829]: 2026-01-31 08:16:16.668 226833 DEBUG oslo_concurrency.lockutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:16:16 compute-2 nova_compute[226829]: 2026-01-31 08:16:16.693 226833 DEBUG nova.storage.rbd_utils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] rbd image 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:16:16 compute-2 nova_compute[226829]: 2026-01-31 08:16:16.697 226833 DEBUG oslo_concurrency.processutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:16:16 compute-2 nova_compute[226829]: 2026-01-31 08:16:16.720 226833 DEBUG nova.policy [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b153d2832404e5b9250422b70ba522d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3b06982960ad4453b8e542cb6330835d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:16:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:16.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:16.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:16 compute-2 ceph-mon[77282]: pgmap v2429: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s wr, 0 op/s
Jan 31 08:16:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3818137887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:16:16 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:16:16 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:16:17 compute-2 sudo[288036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:16:17 compute-2 sudo[288036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:17 compute-2 sudo[288036]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:17 compute-2 sudo[288061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:16:17 compute-2 sudo[288061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:17 compute-2 sudo[288061]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:17 compute-2 nova_compute[226829]: 2026-01-31 08:16:17.366 226833 DEBUG oslo_concurrency.processutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.669s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:16:17 compute-2 nova_compute[226829]: 2026-01-31 08:16:17.479 226833 DEBUG nova.storage.rbd_utils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] resizing rbd image 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:16:17 compute-2 nova_compute[226829]: 2026-01-31 08:16:17.651 226833 DEBUG nova.objects.instance [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lazy-loading 'migration_context' on Instance uuid 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:16:17 compute-2 nova_compute[226829]: 2026-01-31 08:16:17.677 226833 DEBUG nova.virt.libvirt.driver [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:16:17 compute-2 nova_compute[226829]: 2026-01-31 08:16:17.677 226833 DEBUG nova.virt.libvirt.driver [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Ensure instance console log exists: /var/lib/nova/instances/2c1aa7ad-f9c1-4e05-8261-defaa3eef40b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:16:17 compute-2 nova_compute[226829]: 2026-01-31 08:16:17.678 226833 DEBUG oslo_concurrency.lockutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:16:17 compute-2 nova_compute[226829]: 2026-01-31 08:16:17.679 226833 DEBUG oslo_concurrency.lockutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:16:17 compute-2 nova_compute[226829]: 2026-01-31 08:16:17.679 226833 DEBUG oslo_concurrency.lockutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:16:18 compute-2 ceph-mon[77282]: pgmap v2430: 305 pgs: 305 active+clean; 520 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 734 KiB/s wr, 25 op/s
Jan 31 08:16:18 compute-2 nova_compute[226829]: 2026-01-31 08:16:18.424 226833 DEBUG nova.network.neutron [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Successfully created port: 41d7c79a-73d5-466b-ba68-192c5a2c01b3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:16:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:16:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:18.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:16:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:18.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:20 compute-2 nova_compute[226829]: 2026-01-31 08:16:20.150 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:20 compute-2 nova_compute[226829]: 2026-01-31 08:16:20.261 226833 DEBUG nova.network.neutron [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Successfully updated port: 41d7c79a-73d5-466b-ba68-192c5a2c01b3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:16:20 compute-2 nova_compute[226829]: 2026-01-31 08:16:20.286 226833 DEBUG oslo_concurrency.lockutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "refresh_cache-2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:16:20 compute-2 nova_compute[226829]: 2026-01-31 08:16:20.287 226833 DEBUG oslo_concurrency.lockutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquired lock "refresh_cache-2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:16:20 compute-2 nova_compute[226829]: 2026-01-31 08:16:20.287 226833 DEBUG nova.network.neutron [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:16:20 compute-2 nova_compute[226829]: 2026-01-31 08:16:20.492 226833 DEBUG nova.compute.manager [req-c97a9aa5-0c51-49de-9853-ac440eb69f7e req-abdb681d-8751-4a00-939a-e4b08b59c77b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Received event network-changed-41d7c79a-73d5-466b-ba68-192c5a2c01b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:16:20 compute-2 nova_compute[226829]: 2026-01-31 08:16:20.493 226833 DEBUG nova.compute.manager [req-c97a9aa5-0c51-49de-9853-ac440eb69f7e req-abdb681d-8751-4a00-939a-e4b08b59c77b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Refreshing instance network info cache due to event network-changed-41d7c79a-73d5-466b-ba68-192c5a2c01b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:16:20 compute-2 nova_compute[226829]: 2026-01-31 08:16:20.493 226833 DEBUG oslo_concurrency.lockutils [req-c97a9aa5-0c51-49de-9853-ac440eb69f7e req-abdb681d-8751-4a00-939a-e4b08b59c77b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:16:20 compute-2 nova_compute[226829]: 2026-01-31 08:16:20.650 226833 DEBUG nova.network.neutron [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:16:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:20.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:20 compute-2 ceph-mon[77282]: pgmap v2431: 305 pgs: 305 active+clean; 520 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 733 KiB/s wr, 24 op/s
Jan 31 08:16:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:20.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:20 compute-2 nova_compute[226829]: 2026-01-31 08:16:20.998 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:22 compute-2 ceph-mon[77282]: pgmap v2432: 305 pgs: 305 active+clean; 549 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:16:22 compute-2 nova_compute[226829]: 2026-01-31 08:16:22.535 226833 DEBUG nova.network.neutron [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Updating instance_info_cache with network_info: [{"id": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "address": "fa:16:3e:b7:40:e9", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d7c79a-73", "ovs_interfaceid": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:16:22 compute-2 nova_compute[226829]: 2026-01-31 08:16:22.577 226833 DEBUG oslo_concurrency.lockutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Releasing lock "refresh_cache-2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:16:22 compute-2 nova_compute[226829]: 2026-01-31 08:16:22.578 226833 DEBUG nova.compute.manager [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Instance network_info: |[{"id": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "address": "fa:16:3e:b7:40:e9", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d7c79a-73", "ovs_interfaceid": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:16:22 compute-2 nova_compute[226829]: 2026-01-31 08:16:22.579 226833 DEBUG oslo_concurrency.lockutils [req-c97a9aa5-0c51-49de-9853-ac440eb69f7e req-abdb681d-8751-4a00-939a-e4b08b59c77b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:16:22 compute-2 nova_compute[226829]: 2026-01-31 08:16:22.580 226833 DEBUG nova.network.neutron [req-c97a9aa5-0c51-49de-9853-ac440eb69f7e req-abdb681d-8751-4a00-939a-e4b08b59c77b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Refreshing network info cache for port 41d7c79a-73d5-466b-ba68-192c5a2c01b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:16:22 compute-2 nova_compute[226829]: 2026-01-31 08:16:22.585 226833 DEBUG nova.virt.libvirt.driver [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Start _get_guest_xml network_info=[{"id": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "address": "fa:16:3e:b7:40:e9", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d7c79a-73", "ovs_interfaceid": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:16:22 compute-2 nova_compute[226829]: 2026-01-31 08:16:22.591 226833 WARNING nova.virt.libvirt.driver [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:16:22 compute-2 nova_compute[226829]: 2026-01-31 08:16:22.598 226833 DEBUG nova.virt.libvirt.host [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:16:22 compute-2 nova_compute[226829]: 2026-01-31 08:16:22.600 226833 DEBUG nova.virt.libvirt.host [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:16:22 compute-2 nova_compute[226829]: 2026-01-31 08:16:22.605 226833 DEBUG nova.virt.libvirt.host [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:16:22 compute-2 nova_compute[226829]: 2026-01-31 08:16:22.606 226833 DEBUG nova.virt.libvirt.host [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:16:22 compute-2 nova_compute[226829]: 2026-01-31 08:16:22.608 226833 DEBUG nova.virt.libvirt.driver [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:16:22 compute-2 nova_compute[226829]: 2026-01-31 08:16:22.608 226833 DEBUG nova.virt.hardware [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:16:22 compute-2 nova_compute[226829]: 2026-01-31 08:16:22.609 226833 DEBUG nova.virt.hardware [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:16:22 compute-2 nova_compute[226829]: 2026-01-31 08:16:22.610 226833 DEBUG nova.virt.hardware [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:16:22 compute-2 nova_compute[226829]: 2026-01-31 08:16:22.611 226833 DEBUG nova.virt.hardware [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:16:22 compute-2 nova_compute[226829]: 2026-01-31 08:16:22.611 226833 DEBUG nova.virt.hardware [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:16:22 compute-2 nova_compute[226829]: 2026-01-31 08:16:22.612 226833 DEBUG nova.virt.hardware [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:16:22 compute-2 nova_compute[226829]: 2026-01-31 08:16:22.612 226833 DEBUG nova.virt.hardware [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:16:22 compute-2 nova_compute[226829]: 2026-01-31 08:16:22.613 226833 DEBUG nova.virt.hardware [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:16:22 compute-2 nova_compute[226829]: 2026-01-31 08:16:22.613 226833 DEBUG nova.virt.hardware [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:16:22 compute-2 nova_compute[226829]: 2026-01-31 08:16:22.614 226833 DEBUG nova.virt.hardware [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:16:22 compute-2 nova_compute[226829]: 2026-01-31 08:16:22.614 226833 DEBUG nova.virt.hardware [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:16:22 compute-2 nova_compute[226829]: 2026-01-31 08:16:22.619 226833 DEBUG oslo_concurrency.processutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:16:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:22.650 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:16:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:22.651 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:16:22 compute-2 nova_compute[226829]: 2026-01-31 08:16:22.655 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:16:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:22.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:16:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:22.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:16:23 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/121152499' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.064 226833 DEBUG oslo_concurrency.processutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.107 226833 DEBUG nova.storage.rbd_utils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] rbd image 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.111 226833 DEBUG oslo_concurrency.processutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:16:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e310 e310: 3 total, 3 up, 3 in
Jan 31 08:16:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/121152499' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:16:23 compute-2 ceph-mon[77282]: osdmap e310: 3 total, 3 up, 3 in
Jan 31 08:16:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:16:23 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4224457695' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.555 226833 DEBUG oslo_concurrency.processutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.557 226833 DEBUG nova.virt.libvirt.vif [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:16:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1656500391',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1656500391',id=137,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFicZlkmwmJx2QcB2FaCtZWA/EPieHD4tlRsDzixOV+Fehvb9d4YKWopnndvdTu7d1fOEkn5wswwcVr24I+nVYUHx/SYBenHvgz9Ve3+IdDKcppTFyb3Gp0ZC2yG6jAiiQ==',key_name='tempest-keypair-46056576',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b06982960ad4453b8e542cb6330835d',ramdisk_id='',reservation_id='r-e68k7fwd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeShelveTestJSON-332944999',owner_user_name='tempest-AttachVolumeShelveTestJSON-332944999-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:16:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3b153d2832404e5b9250422b70ba522d',uuid=2c1aa7ad-f9c1-4e05-8261-defaa3eef40b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "address": "fa:16:3e:b7:40:e9", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d7c79a-73", "ovs_interfaceid": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.558 226833 DEBUG nova.network.os_vif_util [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Converting VIF {"id": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "address": "fa:16:3e:b7:40:e9", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d7c79a-73", "ovs_interfaceid": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.558 226833 DEBUG nova.network.os_vif_util [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:40:e9,bridge_name='br-int',has_traffic_filtering=True,id=41d7c79a-73d5-466b-ba68-192c5a2c01b3,network=Network(2b47e290-9853-478f-86cb-c8ea73119a97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41d7c79a-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.560 226833 DEBUG nova.objects.instance [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lazy-loading 'pci_devices' on Instance uuid 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.604 226833 DEBUG nova.virt.libvirt.driver [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:16:23 compute-2 nova_compute[226829]:   <uuid>2c1aa7ad-f9c1-4e05-8261-defaa3eef40b</uuid>
Jan 31 08:16:23 compute-2 nova_compute[226829]:   <name>instance-00000089</name>
Jan 31 08:16:23 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:16:23 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:16:23 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <nova:name>tempest-AttachVolumeShelveTestJSON-server-1656500391</nova:name>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:16:22</nova:creationTime>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:16:23 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:16:23 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:16:23 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:16:23 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:16:23 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:16:23 compute-2 nova_compute[226829]:         <nova:user uuid="3b153d2832404e5b9250422b70ba522d">tempest-AttachVolumeShelveTestJSON-332944999-project-member</nova:user>
Jan 31 08:16:23 compute-2 nova_compute[226829]:         <nova:project uuid="3b06982960ad4453b8e542cb6330835d">tempest-AttachVolumeShelveTestJSON-332944999</nova:project>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:16:23 compute-2 nova_compute[226829]:         <nova:port uuid="41d7c79a-73d5-466b-ba68-192c5a2c01b3">
Jan 31 08:16:23 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:16:23 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:16:23 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <system>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <entry name="serial">2c1aa7ad-f9c1-4e05-8261-defaa3eef40b</entry>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <entry name="uuid">2c1aa7ad-f9c1-4e05-8261-defaa3eef40b</entry>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     </system>
Jan 31 08:16:23 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:16:23 compute-2 nova_compute[226829]:   <os>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:   </os>
Jan 31 08:16:23 compute-2 nova_compute[226829]:   <features>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:   </features>
Jan 31 08:16:23 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:16:23 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:16:23 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk">
Jan 31 08:16:23 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       </source>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:16:23 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk.config">
Jan 31 08:16:23 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       </source>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:16:23 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:b7:40:e9"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <target dev="tap41d7c79a-73"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/2c1aa7ad-f9c1-4e05-8261-defaa3eef40b/console.log" append="off"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <video>
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     </video>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:16:23 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:16:23 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:16:23 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:16:23 compute-2 nova_compute[226829]: </domain>
Jan 31 08:16:23 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.606 226833 DEBUG nova.compute.manager [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Preparing to wait for external event network-vif-plugged-41d7c79a-73d5-466b-ba68-192c5a2c01b3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.606 226833 DEBUG oslo_concurrency.lockutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.607 226833 DEBUG oslo_concurrency.lockutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.607 226833 DEBUG oslo_concurrency.lockutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.608 226833 DEBUG nova.virt.libvirt.vif [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:16:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1656500391',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1656500391',id=137,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFicZlkmwmJx2QcB2FaCtZWA/EPieHD4tlRsDzixOV+Fehvb9d4YKWopnndvdTu7d1fOEkn5wswwcVr24I+nVYUHx/SYBenHvgz9Ve3+IdDKcppTFyb3Gp0ZC2yG6jAiiQ==',key_name='tempest-keypair-46056576',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b06982960ad4453b8e542cb6330835d',ramdisk_id='',reservation_id='r-e68k7fwd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeShelveTestJSON-332944999',owner_user_name='tempest-AttachVolumeShelveTestJSON-332944999-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:16:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3b153d2832404e5b9250422b70ba522d',uuid=2c1aa7ad-f9c1-4e05-8261-defaa3eef40b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "address": "fa:16:3e:b7:40:e9", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d7c79a-73", "ovs_interfaceid": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.608 226833 DEBUG nova.network.os_vif_util [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Converting VIF {"id": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "address": "fa:16:3e:b7:40:e9", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d7c79a-73", "ovs_interfaceid": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.609 226833 DEBUG nova.network.os_vif_util [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:40:e9,bridge_name='br-int',has_traffic_filtering=True,id=41d7c79a-73d5-466b-ba68-192c5a2c01b3,network=Network(2b47e290-9853-478f-86cb-c8ea73119a97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41d7c79a-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.609 226833 DEBUG os_vif [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:40:e9,bridge_name='br-int',has_traffic_filtering=True,id=41d7c79a-73d5-466b-ba68-192c5a2c01b3,network=Network(2b47e290-9853-478f-86cb-c8ea73119a97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41d7c79a-73') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.610 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.611 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.611 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.614 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.615 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41d7c79a-73, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.615 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap41d7c79a-73, col_values=(('external_ids', {'iface-id': '41d7c79a-73d5-466b-ba68-192c5a2c01b3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:40:e9', 'vm-uuid': '2c1aa7ad-f9c1-4e05-8261-defaa3eef40b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.617 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:23 compute-2 NetworkManager[48999]: <info>  [1769847383.6191] manager: (tap41d7c79a-73): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/277)
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.620 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.624 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.625 226833 INFO os_vif [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:40:e9,bridge_name='br-int',has_traffic_filtering=True,id=41d7c79a-73d5-466b-ba68-192c5a2c01b3,network=Network(2b47e290-9853-478f-86cb-c8ea73119a97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41d7c79a-73')
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.684 226833 DEBUG nova.virt.libvirt.driver [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.685 226833 DEBUG nova.virt.libvirt.driver [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.685 226833 DEBUG nova.virt.libvirt.driver [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] No VIF found with MAC fa:16:3e:b7:40:e9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.686 226833 INFO nova.virt.libvirt.driver [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Using config drive
Jan 31 08:16:23 compute-2 nova_compute[226829]: 2026-01-31 08:16:23.709 226833 DEBUG nova.storage.rbd_utils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] rbd image 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:16:24 compute-2 nova_compute[226829]: 2026-01-31 08:16:24.542 226833 INFO nova.virt.libvirt.driver [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Creating config drive at /var/lib/nova/instances/2c1aa7ad-f9c1-4e05-8261-defaa3eef40b/disk.config
Jan 31 08:16:24 compute-2 nova_compute[226829]: 2026-01-31 08:16:24.547 226833 DEBUG oslo_concurrency.processutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2c1aa7ad-f9c1-4e05-8261-defaa3eef40b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp129nsgmc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:16:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4224457695' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:16:24 compute-2 ceph-mon[77282]: pgmap v2434: 305 pgs: 305 active+clean; 549 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 2.1 MiB/s wr, 37 op/s
Jan 31 08:16:24 compute-2 nova_compute[226829]: 2026-01-31 08:16:24.685 226833 DEBUG oslo_concurrency.processutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2c1aa7ad-f9c1-4e05-8261-defaa3eef40b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp129nsgmc" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:16:24 compute-2 nova_compute[226829]: 2026-01-31 08:16:24.714 226833 DEBUG nova.storage.rbd_utils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] rbd image 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:16:24 compute-2 nova_compute[226829]: 2026-01-31 08:16:24.717 226833 DEBUG oslo_concurrency.processutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2c1aa7ad-f9c1-4e05-8261-defaa3eef40b/disk.config 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:16:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:24.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:16:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:24.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:16:24 compute-2 nova_compute[226829]: 2026-01-31 08:16:24.992 226833 DEBUG oslo_concurrency.processutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2c1aa7ad-f9c1-4e05-8261-defaa3eef40b/disk.config 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:16:24 compute-2 nova_compute[226829]: 2026-01-31 08:16:24.993 226833 INFO nova.virt.libvirt.driver [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Deleting local config drive /var/lib/nova/instances/2c1aa7ad-f9c1-4e05-8261-defaa3eef40b/disk.config because it was imported into RBD.
Jan 31 08:16:25 compute-2 kernel: tap41d7c79a-73: entered promiscuous mode
Jan 31 08:16:25 compute-2 NetworkManager[48999]: <info>  [1769847385.0414] manager: (tap41d7c79a-73): new Tun device (/org/freedesktop/NetworkManager/Devices/278)
Jan 31 08:16:25 compute-2 nova_compute[226829]: 2026-01-31 08:16:25.043 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:25 compute-2 ovn_controller[133834]: 2026-01-31T08:16:25Z|00575|binding|INFO|Claiming lport 41d7c79a-73d5-466b-ba68-192c5a2c01b3 for this chassis.
Jan 31 08:16:25 compute-2 ovn_controller[133834]: 2026-01-31T08:16:25Z|00576|binding|INFO|41d7c79a-73d5-466b-ba68-192c5a2c01b3: Claiming fa:16:3e:b7:40:e9 10.100.0.14
Jan 31 08:16:25 compute-2 nova_compute[226829]: 2026-01-31 08:16:25.047 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:25 compute-2 nova_compute[226829]: 2026-01-31 08:16:25.048 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:25.058 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:40:e9 10.100.0.14'], port_security=['fa:16:3e:b7:40:e9 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2c1aa7ad-f9c1-4e05-8261-defaa3eef40b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b47e290-9853-478f-86cb-c8ea73119a97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b06982960ad4453b8e542cb6330835d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd12bd0a3-e514-4fde-9351-39f4527fc3f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46c631ec-3e4c-4c27-ae32-3645d3f29853, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=41d7c79a-73d5-466b-ba68-192c5a2c01b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:25.061 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 41d7c79a-73d5-466b-ba68-192c5a2c01b3 in datapath 2b47e290-9853-478f-86cb-c8ea73119a97 bound to our chassis
Jan 31 08:16:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:25 compute-2 systemd-udevd[288298]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:25.065 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b47e290-9853-478f-86cb-c8ea73119a97
Jan 31 08:16:25 compute-2 systemd-machined[195142]: New machine qemu-62-instance-00000089.
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:25.073 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[15cd8fa6-de50-4b4e-813b-c9877c86ac77]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:25.074 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2b47e290-91 in ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:16:25 compute-2 nova_compute[226829]: 2026-01-31 08:16:25.074 226833 DEBUG nova.network.neutron [req-c97a9aa5-0c51-49de-9853-ac440eb69f7e req-abdb681d-8751-4a00-939a-e4b08b59c77b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Updated VIF entry in instance network info cache for port 41d7c79a-73d5-466b-ba68-192c5a2c01b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:16:25 compute-2 nova_compute[226829]: 2026-01-31 08:16:25.074 226833 DEBUG nova.network.neutron [req-c97a9aa5-0c51-49de-9853-ac440eb69f7e req-abdb681d-8751-4a00-939a-e4b08b59c77b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Updating instance_info_cache with network_info: [{"id": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "address": "fa:16:3e:b7:40:e9", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d7c79a-73", "ovs_interfaceid": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:16:25 compute-2 ovn_controller[133834]: 2026-01-31T08:16:25Z|00577|binding|INFO|Setting lport 41d7c79a-73d5-466b-ba68-192c5a2c01b3 ovn-installed in OVS
Jan 31 08:16:25 compute-2 ovn_controller[133834]: 2026-01-31T08:16:25Z|00578|binding|INFO|Setting lport 41d7c79a-73d5-466b-ba68-192c5a2c01b3 up in Southbound
Jan 31 08:16:25 compute-2 systemd[1]: Started Virtual Machine qemu-62-instance-00000089.
Jan 31 08:16:25 compute-2 nova_compute[226829]: 2026-01-31 08:16:25.106 226833 DEBUG oslo_concurrency.lockutils [req-c97a9aa5-0c51-49de-9853-ac440eb69f7e req-abdb681d-8751-4a00-939a-e4b08b59c77b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:16:25 compute-2 nova_compute[226829]: 2026-01-31 08:16:25.107 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:25.107 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2b47e290-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:25.107 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[22ef0bfe-e457-41c4-8165-cc1f856923d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:16:25 compute-2 NetworkManager[48999]: <info>  [1769847385.1113] device (tap41d7c79a-73): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:16:25 compute-2 NetworkManager[48999]: <info>  [1769847385.1120] device (tap41d7c79a-73): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:25.109 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[21792351-4574-4a05-8f2c-5bc8696b7939]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:25.117 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[84a7297d-d0c4-4038-877b-9d2e874c2d95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:25.126 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ea98e937-dff8-47b3-9430-f24710ed15b7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:25.147 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf72138-621b-482f-aaab-74e4a9b3de09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:16:25 compute-2 systemd-udevd[288301]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:25.151 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[df386b48-6b4a-4709-88c2-0882229d8fd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:16:25 compute-2 NetworkManager[48999]: <info>  [1769847385.1525] manager: (tap2b47e290-90): new Veth device (/org/freedesktop/NetworkManager/Devices/279)
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:25.173 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[13742634-9751-45cf-ac42-5147dcbe47db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:25.179 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[c6d2918b-ca6e-4f41-aefb-a7f151be8561]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:16:25 compute-2 NetworkManager[48999]: <info>  [1769847385.1941] device (tap2b47e290-90): carrier: link connected
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:25.197 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[650eced9-6e71-443c-99c5-28a1927eba1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:25.209 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[bce2d512-5e45-4fad-b7e2-9571b3f5fd6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b47e290-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:d6:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763766, 'reachable_time': 19118, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288331, 'error': None, 'target': 'ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:25.219 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f868a995-c9a2-4520-b0a0-714c1854cb3c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb1:d61c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763766, 'tstamp': 763766}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288332, 'error': None, 'target': 'ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:25.230 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5f32bf43-5f36-4e49-9da2-ae75971f87bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b47e290-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:d6:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763766, 'reachable_time': 19118, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288333, 'error': None, 'target': 'ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:25.251 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[72db0a88-ffda-4970-80c8-41f9badd857c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:25.291 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c6311a4d-bc83-40e3-a3c6-bb2c26f2e3be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:25.293 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b47e290-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:25.293 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:25.293 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b47e290-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:16:25 compute-2 nova_compute[226829]: 2026-01-31 08:16:25.295 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:25 compute-2 NetworkManager[48999]: <info>  [1769847385.2961] manager: (tap2b47e290-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/280)
Jan 31 08:16:25 compute-2 kernel: tap2b47e290-90: entered promiscuous mode
Jan 31 08:16:25 compute-2 nova_compute[226829]: 2026-01-31 08:16:25.298 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:25.300 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b47e290-90, col_values=(('external_ids', {'iface-id': '4fadf8e2-21f6-4df7-9cc2-be518280ee18'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:16:25 compute-2 nova_compute[226829]: 2026-01-31 08:16:25.301 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:25 compute-2 ovn_controller[133834]: 2026-01-31T08:16:25Z|00579|binding|INFO|Releasing lport 4fadf8e2-21f6-4df7-9cc2-be518280ee18 from this chassis (sb_readonly=0)
Jan 31 08:16:25 compute-2 nova_compute[226829]: 2026-01-31 08:16:25.302 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:25.304 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2b47e290-9853-478f-86cb-c8ea73119a97.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2b47e290-9853-478f-86cb-c8ea73119a97.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:25.305 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[60757b4e-cf1f-492d-bffa-d313e3254ffc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:25.306 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-2b47e290-9853-478f-86cb-c8ea73119a97
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/2b47e290-9853-478f-86cb-c8ea73119a97.pid.haproxy
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 2b47e290-9853-478f-86cb-c8ea73119a97
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:16:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:25.307 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97', 'env', 'PROCESS_TAG=haproxy-2b47e290-9853-478f-86cb-c8ea73119a97', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2b47e290-9853-478f-86cb-c8ea73119a97.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:16:25 compute-2 nova_compute[226829]: 2026-01-31 08:16:25.308 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:25 compute-2 nova_compute[226829]: 2026-01-31 08:16:25.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:25 compute-2 podman[288379]: 2026-01-31 08:16:25.666855855 +0000 UTC m=+0.049059100 container create f6f8c54b9d3c54ead3cca4fcd5c8ec3e400410e396ee7888ada814ea664fd1af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:16:25 compute-2 systemd[1]: Started libpod-conmon-f6f8c54b9d3c54ead3cca4fcd5c8ec3e400410e396ee7888ada814ea664fd1af.scope.
Jan 31 08:16:25 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:16:25 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4201fc73ce78c32285e900c62ef77e0c358c04727fe4dc897a702b721eceb974/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:25 compute-2 podman[288379]: 2026-01-31 08:16:25.71656503 +0000 UTC m=+0.098768325 container init f6f8c54b9d3c54ead3cca4fcd5c8ec3e400410e396ee7888ada814ea664fd1af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 08:16:25 compute-2 podman[288379]: 2026-01-31 08:16:25.722433219 +0000 UTC m=+0.104636484 container start f6f8c54b9d3c54ead3cca4fcd5c8ec3e400410e396ee7888ada814ea664fd1af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:16:25 compute-2 podman[288379]: 2026-01-31 08:16:25.637413327 +0000 UTC m=+0.019616622 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:16:25 compute-2 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[288418]: [NOTICE]   (288424) : New worker (288427) forked
Jan 31 08:16:25 compute-2 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[288418]: [NOTICE]   (288424) : Loading success.
Jan 31 08:16:25 compute-2 nova_compute[226829]: 2026-01-31 08:16:25.774 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847385.7740145, 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:16:25 compute-2 nova_compute[226829]: 2026-01-31 08:16:25.775 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] VM Started (Lifecycle Event)
Jan 31 08:16:25 compute-2 nova_compute[226829]: 2026-01-31 08:16:25.803 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:16:25 compute-2 nova_compute[226829]: 2026-01-31 08:16:25.807 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847385.7743366, 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:16:25 compute-2 nova_compute[226829]: 2026-01-31 08:16:25.807 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] VM Paused (Lifecycle Event)
Jan 31 08:16:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e311 e311: 3 total, 3 up, 3 in
Jan 31 08:16:25 compute-2 nova_compute[226829]: 2026-01-31 08:16:25.852 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:16:25 compute-2 nova_compute[226829]: 2026-01-31 08:16:25.858 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:16:25 compute-2 nova_compute[226829]: 2026-01-31 08:16:25.927 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:16:26 compute-2 nova_compute[226829]: 2026-01-31 08:16:26.000 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:26 compute-2 sudo[288437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:16:26 compute-2 sudo[288437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:26 compute-2 sudo[288437]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:26 compute-2 sudo[288462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:16:26 compute-2 sudo[288462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:26 compute-2 sudo[288462]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:16:26.654 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:16:26 compute-2 ceph-mon[77282]: pgmap v2435: 305 pgs: 305 active+clean; 588 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 4.4 MiB/s wr, 91 op/s
Jan 31 08:16:26 compute-2 ceph-mon[77282]: osdmap e311: 3 total, 3 up, 3 in
Jan 31 08:16:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:26.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e312 e312: 3 total, 3 up, 3 in
Jan 31 08:16:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:26.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:27 compute-2 ceph-mon[77282]: osdmap e312: 3 total, 3 up, 3 in
Jan 31 08:16:28 compute-2 nova_compute[226829]: 2026-01-31 08:16:28.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:28 compute-2 nova_compute[226829]: 2026-01-31 08:16:28.619 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:28.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:28 compute-2 ceph-mon[77282]: pgmap v2438: 305 pgs: 305 active+clean; 615 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 6.8 MiB/s wr, 156 op/s
Jan 31 08:16:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:28.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:30 compute-2 ceph-mon[77282]: pgmap v2439: 305 pgs: 305 active+clean; 615 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 6.1 MiB/s wr, 133 op/s
Jan 31 08:16:30 compute-2 nova_compute[226829]: 2026-01-31 08:16:30.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:30 compute-2 nova_compute[226829]: 2026-01-31 08:16:30.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:16:30 compute-2 nova_compute[226829]: 2026-01-31 08:16:30.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:16:30 compute-2 nova_compute[226829]: 2026-01-31 08:16:30.678 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 08:16:30 compute-2 nova_compute[226829]: 2026-01-31 08:16:30.678 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:16:30 compute-2 nova_compute[226829]: 2026-01-31 08:16:30.679 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:30.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:30.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:31 compute-2 nova_compute[226829]: 2026-01-31 08:16:31.034 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:32 compute-2 podman[288490]: 2026-01-31 08:16:32.219699527 +0000 UTC m=+0.090102731 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:16:32 compute-2 nova_compute[226829]: 2026-01-31 08:16:32.251 226833 DEBUG nova.compute.manager [req-4320d2b1-d8f1-46bb-8ef6-87ef904f8b82 req-39b0c6b7-fa21-43d7-8d7b-0d76109a5053 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Received event network-vif-plugged-41d7c79a-73d5-466b-ba68-192c5a2c01b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:16:32 compute-2 nova_compute[226829]: 2026-01-31 08:16:32.251 226833 DEBUG oslo_concurrency.lockutils [req-4320d2b1-d8f1-46bb-8ef6-87ef904f8b82 req-39b0c6b7-fa21-43d7-8d7b-0d76109a5053 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:16:32 compute-2 nova_compute[226829]: 2026-01-31 08:16:32.252 226833 DEBUG oslo_concurrency.lockutils [req-4320d2b1-d8f1-46bb-8ef6-87ef904f8b82 req-39b0c6b7-fa21-43d7-8d7b-0d76109a5053 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:16:32 compute-2 nova_compute[226829]: 2026-01-31 08:16:32.252 226833 DEBUG oslo_concurrency.lockutils [req-4320d2b1-d8f1-46bb-8ef6-87ef904f8b82 req-39b0c6b7-fa21-43d7-8d7b-0d76109a5053 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:16:32 compute-2 nova_compute[226829]: 2026-01-31 08:16:32.252 226833 DEBUG nova.compute.manager [req-4320d2b1-d8f1-46bb-8ef6-87ef904f8b82 req-39b0c6b7-fa21-43d7-8d7b-0d76109a5053 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Processing event network-vif-plugged-41d7c79a-73d5-466b-ba68-192c5a2c01b3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:16:32 compute-2 nova_compute[226829]: 2026-01-31 08:16:32.252 226833 DEBUG nova.compute.manager [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:16:32 compute-2 nova_compute[226829]: 2026-01-31 08:16:32.256 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847392.2566187, 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:16:32 compute-2 nova_compute[226829]: 2026-01-31 08:16:32.257 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] VM Resumed (Lifecycle Event)
Jan 31 08:16:32 compute-2 nova_compute[226829]: 2026-01-31 08:16:32.258 226833 DEBUG nova.virt.libvirt.driver [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:16:32 compute-2 nova_compute[226829]: 2026-01-31 08:16:32.261 226833 INFO nova.virt.libvirt.driver [-] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Instance spawned successfully.
Jan 31 08:16:32 compute-2 nova_compute[226829]: 2026-01-31 08:16:32.261 226833 DEBUG nova.virt.libvirt.driver [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:16:32 compute-2 nova_compute[226829]: 2026-01-31 08:16:32.320 226833 DEBUG nova.virt.libvirt.driver [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:16:32 compute-2 nova_compute[226829]: 2026-01-31 08:16:32.321 226833 DEBUG nova.virt.libvirt.driver [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:16:32 compute-2 nova_compute[226829]: 2026-01-31 08:16:32.321 226833 DEBUG nova.virt.libvirt.driver [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:16:32 compute-2 nova_compute[226829]: 2026-01-31 08:16:32.322 226833 DEBUG nova.virt.libvirt.driver [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:16:32 compute-2 nova_compute[226829]: 2026-01-31 08:16:32.322 226833 DEBUG nova.virt.libvirt.driver [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:16:32 compute-2 nova_compute[226829]: 2026-01-31 08:16:32.322 226833 DEBUG nova.virt.libvirt.driver [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:16:32 compute-2 nova_compute[226829]: 2026-01-31 08:16:32.328 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:16:32 compute-2 nova_compute[226829]: 2026-01-31 08:16:32.331 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:16:32 compute-2 nova_compute[226829]: 2026-01-31 08:16:32.373 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:16:32 compute-2 nova_compute[226829]: 2026-01-31 08:16:32.478 226833 INFO nova.compute.manager [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Took 15.98 seconds to spawn the instance on the hypervisor.
Jan 31 08:16:32 compute-2 nova_compute[226829]: 2026-01-31 08:16:32.479 226833 DEBUG nova.compute.manager [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:16:32 compute-2 nova_compute[226829]: 2026-01-31 08:16:32.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:32 compute-2 nova_compute[226829]: 2026-01-31 08:16:32.638 226833 INFO nova.compute.manager [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Took 17.23 seconds to build instance.
Jan 31 08:16:32 compute-2 nova_compute[226829]: 2026-01-31 08:16:32.700 226833 DEBUG oslo_concurrency.lockutils [None req-178e9714-06dc-4d81-ba3b-bb010cd3f92b 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.410s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:16:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:16:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:32.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:16:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:32.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:33 compute-2 ceph-mon[77282]: pgmap v2440: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 5.9 MiB/s wr, 147 op/s
Jan 31 08:16:33 compute-2 nova_compute[226829]: 2026-01-31 08:16:33.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:33 compute-2 nova_compute[226829]: 2026-01-31 08:16:33.622 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:34 compute-2 ceph-mon[77282]: pgmap v2441: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.0 MiB/s wr, 88 op/s
Jan 31 08:16:34 compute-2 nova_compute[226829]: 2026-01-31 08:16:34.499 226833 DEBUG nova.compute.manager [req-a0b79a99-eb55-40ad-9986-044c888236be req-b44faecf-f629-44ce-beed-0e16b56de0ae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Received event network-vif-plugged-41d7c79a-73d5-466b-ba68-192c5a2c01b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:16:34 compute-2 nova_compute[226829]: 2026-01-31 08:16:34.500 226833 DEBUG oslo_concurrency.lockutils [req-a0b79a99-eb55-40ad-9986-044c888236be req-b44faecf-f629-44ce-beed-0e16b56de0ae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:16:34 compute-2 nova_compute[226829]: 2026-01-31 08:16:34.500 226833 DEBUG oslo_concurrency.lockutils [req-a0b79a99-eb55-40ad-9986-044c888236be req-b44faecf-f629-44ce-beed-0e16b56de0ae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:16:34 compute-2 nova_compute[226829]: 2026-01-31 08:16:34.500 226833 DEBUG oslo_concurrency.lockutils [req-a0b79a99-eb55-40ad-9986-044c888236be req-b44faecf-f629-44ce-beed-0e16b56de0ae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:16:34 compute-2 nova_compute[226829]: 2026-01-31 08:16:34.500 226833 DEBUG nova.compute.manager [req-a0b79a99-eb55-40ad-9986-044c888236be req-b44faecf-f629-44ce-beed-0e16b56de0ae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] No waiting events found dispatching network-vif-plugged-41d7c79a-73d5-466b-ba68-192c5a2c01b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:16:34 compute-2 nova_compute[226829]: 2026-01-31 08:16:34.500 226833 WARNING nova.compute.manager [req-a0b79a99-eb55-40ad-9986-044c888236be req-b44faecf-f629-44ce-beed-0e16b56de0ae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Received unexpected event network-vif-plugged-41d7c79a-73d5-466b-ba68-192c5a2c01b3 for instance with vm_state active and task_state None.
Jan 31 08:16:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:34.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:34.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:36 compute-2 nova_compute[226829]: 2026-01-31 08:16:36.035 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e313 e313: 3 total, 3 up, 3 in
Jan 31 08:16:36 compute-2 nova_compute[226829]: 2026-01-31 08:16:36.777 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:36 compute-2 NetworkManager[48999]: <info>  [1769847396.7784] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/281)
Jan 31 08:16:36 compute-2 NetworkManager[48999]: <info>  [1769847396.7795] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/282)
Jan 31 08:16:36 compute-2 nova_compute[226829]: 2026-01-31 08:16:36.852 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:36 compute-2 ovn_controller[133834]: 2026-01-31T08:16:36Z|00580|binding|INFO|Releasing lport 4fadf8e2-21f6-4df7-9cc2-be518280ee18 from this chassis (sb_readonly=0)
Jan 31 08:16:36 compute-2 nova_compute[226829]: 2026-01-31 08:16:36.870 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:36.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:36.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:37 compute-2 nova_compute[226829]: 2026-01-31 08:16:37.619 226833 DEBUG nova.compute.manager [req-85595533-d054-4276-a9e0-945cb63f9ddc req-8cbf9116-1d3f-4105-87a3-29b42dd7e1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Received event network-changed-41d7c79a-73d5-466b-ba68-192c5a2c01b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:16:37 compute-2 nova_compute[226829]: 2026-01-31 08:16:37.620 226833 DEBUG nova.compute.manager [req-85595533-d054-4276-a9e0-945cb63f9ddc req-8cbf9116-1d3f-4105-87a3-29b42dd7e1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Refreshing instance network info cache due to event network-changed-41d7c79a-73d5-466b-ba68-192c5a2c01b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:16:37 compute-2 nova_compute[226829]: 2026-01-31 08:16:37.620 226833 DEBUG oslo_concurrency.lockutils [req-85595533-d054-4276-a9e0-945cb63f9ddc req-8cbf9116-1d3f-4105-87a3-29b42dd7e1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:16:37 compute-2 nova_compute[226829]: 2026-01-31 08:16:37.620 226833 DEBUG oslo_concurrency.lockutils [req-85595533-d054-4276-a9e0-945cb63f9ddc req-8cbf9116-1d3f-4105-87a3-29b42dd7e1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:16:37 compute-2 nova_compute[226829]: 2026-01-31 08:16:37.621 226833 DEBUG nova.network.neutron [req-85595533-d054-4276-a9e0-945cb63f9ddc req-8cbf9116-1d3f-4105-87a3-29b42dd7e1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Refreshing network info cache for port 41d7c79a-73d5-466b-ba68-192c5a2c01b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:16:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:16:38 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987985800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:16:38 compute-2 nova_compute[226829]: 2026-01-31 08:16:38.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:38 compute-2 nova_compute[226829]: 2026-01-31 08:16:38.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:38 compute-2 nova_compute[226829]: 2026-01-31 08:16:38.584 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:16:38 compute-2 nova_compute[226829]: 2026-01-31 08:16:38.585 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:16:38 compute-2 nova_compute[226829]: 2026-01-31 08:16:38.585 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:16:38 compute-2 nova_compute[226829]: 2026-01-31 08:16:38.585 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:16:38 compute-2 nova_compute[226829]: 2026-01-31 08:16:38.585 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:16:38 compute-2 nova_compute[226829]: 2026-01-31 08:16:38.625 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:38.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:16:38 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4037236936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:16:38 compute-2 nova_compute[226829]: 2026-01-31 08:16:38.990 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:16:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:39.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:39 compute-2 nova_compute[226829]: 2026-01-31 08:16:39.096 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:16:39 compute-2 nova_compute[226829]: 2026-01-31 08:16:39.097 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:16:39 compute-2 nova_compute[226829]: 2026-01-31 08:16:39.218 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:16:39 compute-2 nova_compute[226829]: 2026-01-31 08:16:39.219 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3994MB free_disk=20.784923553466797GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:16:39 compute-2 nova_compute[226829]: 2026-01-31 08:16:39.219 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:16:39 compute-2 nova_compute[226829]: 2026-01-31 08:16:39.219 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:16:39 compute-2 nova_compute[226829]: 2026-01-31 08:16:39.477 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:16:39 compute-2 nova_compute[226829]: 2026-01-31 08:16:39.478 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:16:39 compute-2 nova_compute[226829]: 2026-01-31 08:16:39.478 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:16:39 compute-2 nova_compute[226829]: 2026-01-31 08:16:39.534 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:16:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:16:39 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3089800500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:16:39 compute-2 nova_compute[226829]: 2026-01-31 08:16:39.957 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:16:39 compute-2 nova_compute[226829]: 2026-01-31 08:16:39.962 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:16:39 compute-2 nova_compute[226829]: 2026-01-31 08:16:39.988 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:16:40 compute-2 nova_compute[226829]: 2026-01-31 08:16:40.021 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:16:40 compute-2 nova_compute[226829]: 2026-01-31 08:16:40.022 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:16:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:40 compute-2 nova_compute[226829]: 2026-01-31 08:16:40.600 226833 DEBUG nova.network.neutron [req-85595533-d054-4276-a9e0-945cb63f9ddc req-8cbf9116-1d3f-4105-87a3-29b42dd7e1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Updated VIF entry in instance network info cache for port 41d7c79a-73d5-466b-ba68-192c5a2c01b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:16:40 compute-2 nova_compute[226829]: 2026-01-31 08:16:40.601 226833 DEBUG nova.network.neutron [req-85595533-d054-4276-a9e0-945cb63f9ddc req-8cbf9116-1d3f-4105-87a3-29b42dd7e1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Updating instance_info_cache with network_info: [{"id": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "address": "fa:16:3e:b7:40:e9", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d7c79a-73", "ovs_interfaceid": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:16:40 compute-2 nova_compute[226829]: 2026-01-31 08:16:40.798 226833 DEBUG oslo_concurrency.lockutils [req-85595533-d054-4276-a9e0-945cb63f9ddc req-8cbf9116-1d3f-4105-87a3-29b42dd7e1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:16:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:40.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:16:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:41.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:16:41 compute-2 nova_compute[226829]: 2026-01-31 08:16:41.037 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:41 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma missed beacon ack from the monitors
Jan 31 08:16:42 compute-2 podman[288568]: 2026-01-31 08:16:42.179195648 +0000 UTC m=+0.063674255 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:16:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:42.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:43.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:43 compute-2 nova_compute[226829]: 2026-01-31 08:16:43.642 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:44.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:45.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:45 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma missed beacon ack from the monitors
Jan 31 08:16:46 compute-2 nova_compute[226829]: 2026-01-31 08:16:46.040 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:46 compute-2 nova_compute[226829]: 2026-01-31 08:16:46.054 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:46 compute-2 sudo[288590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:16:46 compute-2 sudo[288590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:46 compute-2 sudo[288590]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:46 compute-2 sudo[288615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:16:46 compute-2 sudo[288615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:46 compute-2 sudo[288615]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:46.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:47.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:48 compute-2 nova_compute[226829]: 2026-01-31 08:16:48.021 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:48 compute-2 nova_compute[226829]: 2026-01-31 08:16:48.022 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:16:48 compute-2 nova_compute[226829]: 2026-01-31 08:16:48.644 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:48.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:49.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:49 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma missed beacon ack from the monitors
Jan 31 08:16:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).paxos(paxos active c 4770..5506) lease_timeout -- calling new election
Jan 31 08:16:49 compute-2 ceph-mon[77282]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Jan 31 08:16:49 compute-2 ceph-mon[77282]: paxos.1).electionLogic(40) init, last seen epoch 40
Jan 31 08:16:49 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:49 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:49 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:50 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 08:16:50 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:50 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:50 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:50 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:50 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:50 compute-2 nova_compute[226829]: 2026-01-31 08:16:50.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:50 compute-2 nova_compute[226829]: 2026-01-31 08:16:50.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:16:50 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:50 compute-2 nova_compute[226829]: 2026-01-31 08:16:50.864 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:16:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:16:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:50.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:16:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:16:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:51.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:16:51 compute-2 nova_compute[226829]: 2026-01-31 08:16:51.042 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:51 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:51 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:51 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:52 compute-2 ovn_controller[133834]: 2026-01-31T08:16:52Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b7:40:e9 10.100.0.14
Jan 31 08:16:52 compute-2 ovn_controller[133834]: 2026-01-31T08:16:52Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:40:e9 10.100.0.14
Jan 31 08:16:52 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma MDS connection to Monitors appears to be laggy; 18.8875s since last acked beacon
Jan 31 08:16:52 compute-2 ceph-mds[84366]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Jan 31 08:16:52 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:52 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:16:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:52.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:16:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:53.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:53 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:53 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:53 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:53 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:53 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma missed beacon ack from the monitors
Jan 31 08:16:53 compute-2 nova_compute[226829]: 2026-01-31 08:16:53.647 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:16:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:54.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:16:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:55.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:55 compute-2 ceph-mon[77282]: paxos.1).electionLogic(41) init, last seen epoch 41, mid-election, bumping
Jan 31 08:16:55 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 08:16:55 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 08:16:55 compute-2 ceph-mon[77282]: pgmap v2442: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.4 MiB/s wr, 117 op/s
Jan 31 08:16:55 compute-2 ceph-mon[77282]: osdmap e313: 3 total, 3 up, 3 in
Jan 31 08:16:55 compute-2 ceph-mon[77282]: pgmap v2444: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 630 KiB/s wr, 114 op/s
Jan 31 08:16:55 compute-2 ceph-mon[77282]: pgmap v2445: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 630 KiB/s wr, 114 op/s
Jan 31 08:16:55 compute-2 ceph-mon[77282]: pgmap v2446: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 511 B/s wr, 85 op/s
Jan 31 08:16:55 compute-2 ceph-mon[77282]: pgmap v2447: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 409 B/s wr, 78 op/s
Jan 31 08:16:55 compute-2 ceph-mon[77282]: pgmap v2448: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 37 op/s
Jan 31 08:16:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3636384441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:16:55 compute-2 ceph-mon[77282]: mon.compute-1 calling monitor election
Jan 31 08:16:55 compute-2 ceph-mon[77282]: mon.compute-0 calling monitor election
Jan 31 08:16:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1530353335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:16:55 compute-2 ceph-mon[77282]: pgmap v2449: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 927 KiB/s rd, 0 B/s wr, 37 op/s
Jan 31 08:16:55 compute-2 ceph-mon[77282]: pgmap v2450: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 0 B/s wr, 7 op/s
Jan 31 08:16:55 compute-2 ceph-mon[77282]: pgmap v2451: 305 pgs: 305 active+clean; 638 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 903 KiB/s wr, 13 op/s
Jan 31 08:16:55 compute-2 ceph-mon[77282]: mon.compute-0 is new leader, mons compute-0,compute-1 in quorum (ranks 0,2)
Jan 31 08:16:55 compute-2 ceph-mon[77282]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 08:16:55 compute-2 ceph-mon[77282]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 08:16:55 compute-2 ceph-mon[77282]: osdmap e313: 3 total, 3 up, 3 in
Jan 31 08:16:55 compute-2 ceph-mon[77282]: mgrmap e11: compute-0.hhuoua(active, since 71m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 08:16:55 compute-2 ceph-mon[77282]: Health check failed: 1/3 mons down, quorum compute-0,compute-1 (MON_DOWN)
Jan 31 08:16:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3581355920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:16:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2529377701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:16:55 compute-2 ceph-mon[77282]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-1
Jan 31 08:16:55 compute-2 ceph-mon[77282]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-1
Jan 31 08:16:55 compute-2 ceph-mon[77282]:     mon.compute-2 (rank 1) addr [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] is down (out of quorum)
Jan 31 08:16:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2899978361' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:16:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2899978361' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:16:55 compute-2 ceph-mon[77282]: pgmap v2452: 305 pgs: 305 active+clean; 646 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 284 KiB/s rd, 1.4 MiB/s wr, 45 op/s
Jan 31 08:16:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 08:16:55 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma  MDS is no longer laggy
Jan 31 08:16:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1987985800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:16:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4037236936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:16:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3089800500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:16:55 compute-2 ceph-mon[77282]: mon.compute-2 calling monitor election
Jan 31 08:16:55 compute-2 ceph-mon[77282]: mon.compute-0 calling monitor election
Jan 31 08:16:55 compute-2 ceph-mon[77282]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 08:16:55 compute-2 ceph-mon[77282]: mon.compute-1 calling monitor election
Jan 31 08:16:55 compute-2 ceph-mon[77282]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 08:16:55 compute-2 ceph-mon[77282]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 08:16:55 compute-2 ceph-mon[77282]: osdmap e313: 3 total, 3 up, 3 in
Jan 31 08:16:55 compute-2 ceph-mon[77282]: mgrmap e11: compute-0.hhuoua(active, since 71m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 08:16:55 compute-2 ceph-mon[77282]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-1)
Jan 31 08:16:55 compute-2 ceph-mon[77282]: Cluster is now healthy
Jan 31 08:16:55 compute-2 ceph-mon[77282]: overall HEALTH_OK
Jan 31 08:16:56 compute-2 nova_compute[226829]: 2026-01-31 08:16:56.045 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:56 compute-2 ceph-mon[77282]: pgmap v2453: 305 pgs: 305 active+clean; 664 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 391 KiB/s rd, 2.2 MiB/s wr, 85 op/s
Jan 31 08:16:56 compute-2 nova_compute[226829]: 2026-01-31 08:16:56.859 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:56.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:16:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:57.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:16:57 compute-2 nova_compute[226829]: 2026-01-31 08:16:57.362 226833 DEBUG oslo_concurrency.lockutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Acquiring lock "a0eef779-24bd-4096-a30b-859b597c1e18" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:16:57 compute-2 nova_compute[226829]: 2026-01-31 08:16:57.362 226833 DEBUG oslo_concurrency.lockutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Lock "a0eef779-24bd-4096-a30b-859b597c1e18" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:16:58 compute-2 nova_compute[226829]: 2026-01-31 08:16:58.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:58 compute-2 nova_compute[226829]: 2026-01-31 08:16:58.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:16:58 compute-2 ceph-mon[77282]: pgmap v2454: 305 pgs: 305 active+clean; 669 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 376 KiB/s rd, 2.3 MiB/s wr, 84 op/s
Jan 31 08:16:58 compute-2 nova_compute[226829]: 2026-01-31 08:16:58.650 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:58 compute-2 nova_compute[226829]: 2026-01-31 08:16:58.837 226833 DEBUG nova.compute.manager [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:16:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:58.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:58 compute-2 nova_compute[226829]: 2026-01-31 08:16:58.997 226833 DEBUG oslo_concurrency.lockutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:16:58 compute-2 nova_compute[226829]: 2026-01-31 08:16:58.997 226833 DEBUG oslo_concurrency.lockutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:16:59 compute-2 nova_compute[226829]: 2026-01-31 08:16:59.008 226833 DEBUG nova.virt.hardware [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:16:59 compute-2 nova_compute[226829]: 2026-01-31 08:16:59.009 226833 INFO nova.compute.claims [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:16:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:16:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:59.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:59 compute-2 nova_compute[226829]: 2026-01-31 08:16:59.269 226833 DEBUG oslo_concurrency.processutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:16:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:16:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Cumulative writes: 11K writes, 57K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.03 MB/s
                                           Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.11 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1665 writes, 7981 keys, 1665 commit groups, 1.0 writes per commit group, ingest: 16.52 MB, 0.03 MB/s
                                           Interval WAL: 1665 writes, 1665 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     71.8      0.96              0.21        34    0.028       0      0       0.0       0.0
                                             L6      1/0    9.97 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.6    149.1    125.6      2.51              0.93        33    0.076    208K    18K       0.0       0.0
                                            Sum      1/0    9.97 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.6    107.9    110.7      3.47              1.14        67    0.052    208K    18K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.4    144.1    143.5      0.45              0.17        10    0.045     41K   2597       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    149.1    125.6      2.51              0.93        33    0.076    208K    18K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     71.9      0.96              0.21        33    0.029       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.067, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.37 GB write, 0.09 MB/s write, 0.37 GB read, 0.09 MB/s read, 3.5 seconds
                                           Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559ef3d1f1f0#2 capacity: 304.00 MB usage: 41.15 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000308 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2397,39.60 MB,13.0248%) FilterBlock(67,582.86 KB,0.187236%) IndexBlock(67,1004.56 KB,0.322703%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 08:16:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:16:59 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1300768238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:16:59 compute-2 nova_compute[226829]: 2026-01-31 08:16:59.670 226833 DEBUG oslo_concurrency.processutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:16:59 compute-2 nova_compute[226829]: 2026-01-31 08:16:59.676 226833 DEBUG nova.compute.provider_tree [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:16:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1300768238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:16:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:00 compute-2 nova_compute[226829]: 2026-01-31 08:17:00.005 226833 DEBUG nova.scheduler.client.report [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:17:00 compute-2 nova_compute[226829]: 2026-01-31 08:17:00.295 226833 DEBUG oslo_concurrency.lockutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.298s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:00 compute-2 nova_compute[226829]: 2026-01-31 08:17:00.296 226833 DEBUG nova.compute.manager [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:17:00 compute-2 nova_compute[226829]: 2026-01-31 08:17:00.678 226833 DEBUG nova.compute.manager [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:17:00 compute-2 nova_compute[226829]: 2026-01-31 08:17:00.679 226833 DEBUG nova.network.neutron [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:17:00 compute-2 nova_compute[226829]: 2026-01-31 08:17:00.742 226833 INFO nova.virt.libvirt.driver [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:17:00 compute-2 ceph-mon[77282]: pgmap v2455: 305 pgs: 305 active+clean; 669 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 366 KiB/s rd, 2.3 MiB/s wr, 80 op/s
Jan 31 08:17:00 compute-2 nova_compute[226829]: 2026-01-31 08:17:00.778 226833 DEBUG nova.compute.manager [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:17:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:00.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:01.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:01 compute-2 nova_compute[226829]: 2026-01-31 08:17:01.047 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:01 compute-2 nova_compute[226829]: 2026-01-31 08:17:01.081 226833 DEBUG nova.compute.manager [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:17:01 compute-2 nova_compute[226829]: 2026-01-31 08:17:01.083 226833 DEBUG nova.virt.libvirt.driver [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:17:01 compute-2 nova_compute[226829]: 2026-01-31 08:17:01.083 226833 INFO nova.virt.libvirt.driver [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Creating image(s)
Jan 31 08:17:01 compute-2 nova_compute[226829]: 2026-01-31 08:17:01.103 226833 DEBUG nova.storage.rbd_utils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] rbd image a0eef779-24bd-4096-a30b-859b597c1e18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:17:01 compute-2 nova_compute[226829]: 2026-01-31 08:17:01.126 226833 DEBUG nova.storage.rbd_utils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] rbd image a0eef779-24bd-4096-a30b-859b597c1e18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:17:01 compute-2 nova_compute[226829]: 2026-01-31 08:17:01.152 226833 DEBUG nova.storage.rbd_utils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] rbd image a0eef779-24bd-4096-a30b-859b597c1e18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:17:01 compute-2 nova_compute[226829]: 2026-01-31 08:17:01.155 226833 DEBUG oslo_concurrency.processutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:01 compute-2 nova_compute[226829]: 2026-01-31 08:17:01.211 226833 DEBUG oslo_concurrency.processutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:01 compute-2 nova_compute[226829]: 2026-01-31 08:17:01.212 226833 DEBUG oslo_concurrency.lockutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:01 compute-2 nova_compute[226829]: 2026-01-31 08:17:01.213 226833 DEBUG oslo_concurrency.lockutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:01 compute-2 nova_compute[226829]: 2026-01-31 08:17:01.213 226833 DEBUG oslo_concurrency.lockutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:01 compute-2 nova_compute[226829]: 2026-01-31 08:17:01.233 226833 DEBUG nova.storage.rbd_utils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] rbd image a0eef779-24bd-4096-a30b-859b597c1e18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:17:01 compute-2 nova_compute[226829]: 2026-01-31 08:17:01.236 226833 DEBUG oslo_concurrency.processutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 a0eef779-24bd-4096-a30b-859b597c1e18_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:01 compute-2 nova_compute[226829]: 2026-01-31 08:17:01.551 226833 DEBUG oslo_concurrency.processutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 a0eef779-24bd-4096-a30b-859b597c1e18_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:01 compute-2 nova_compute[226829]: 2026-01-31 08:17:01.618 226833 DEBUG nova.storage.rbd_utils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] resizing rbd image a0eef779-24bd-4096-a30b-859b597c1e18_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:17:01 compute-2 nova_compute[226829]: 2026-01-31 08:17:01.760 226833 DEBUG nova.objects.instance [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Lazy-loading 'migration_context' on Instance uuid a0eef779-24bd-4096-a30b-859b597c1e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:01 compute-2 nova_compute[226829]: 2026-01-31 08:17:01.788 226833 DEBUG nova.virt.libvirt.driver [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:17:01 compute-2 nova_compute[226829]: 2026-01-31 08:17:01.789 226833 DEBUG nova.virt.libvirt.driver [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Ensure instance console log exists: /var/lib/nova/instances/a0eef779-24bd-4096-a30b-859b597c1e18/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:17:01 compute-2 nova_compute[226829]: 2026-01-31 08:17:01.790 226833 DEBUG oslo_concurrency.lockutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:01 compute-2 nova_compute[226829]: 2026-01-31 08:17:01.790 226833 DEBUG oslo_concurrency.lockutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:01 compute-2 nova_compute[226829]: 2026-01-31 08:17:01.791 226833 DEBUG oslo_concurrency.lockutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:01 compute-2 nova_compute[226829]: 2026-01-31 08:17:01.808 226833 DEBUG nova.policy [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9ca99c4062164ef3b438a9f924677d01', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '120d0fd1a2da48e3b76cc3c39ee91084', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:17:02 compute-2 nova_compute[226829]: 2026-01-31 08:17:02.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:02 compute-2 nova_compute[226829]: 2026-01-31 08:17:02.771 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:02 compute-2 nova_compute[226829]: 2026-01-31 08:17:02.806 226833 WARNING nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] While synchronizing instance power states, found 2 instances in the database and 1 instances on the hypervisor.
Jan 31 08:17:02 compute-2 nova_compute[226829]: 2026-01-31 08:17:02.806 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Triggering sync for uuid 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 08:17:02 compute-2 nova_compute[226829]: 2026-01-31 08:17:02.806 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Triggering sync for uuid a0eef779-24bd-4096-a30b-859b597c1e18 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 08:17:02 compute-2 nova_compute[226829]: 2026-01-31 08:17:02.807 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:02 compute-2 nova_compute[226829]: 2026-01-31 08:17:02.807 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:02 compute-2 nova_compute[226829]: 2026-01-31 08:17:02.807 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "a0eef779-24bd-4096-a30b-859b597c1e18" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:02 compute-2 nova_compute[226829]: 2026-01-31 08:17:02.858 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:02.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:02 compute-2 ceph-mon[77282]: pgmap v2456: 305 pgs: 305 active+clean; 708 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 375 KiB/s rd, 3.9 MiB/s wr, 98 op/s
Jan 31 08:17:02 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #112. Immutable memtables: 0.
Jan 31 08:17:02 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:17:02.923910) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:17:02 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 112
Jan 31 08:17:02 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847422924066, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 1671, "num_deletes": 254, "total_data_size": 3562459, "memory_usage": 3606736, "flush_reason": "Manual Compaction"}
Jan 31 08:17:02 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #113: started
Jan 31 08:17:02 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847422942487, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 113, "file_size": 2342693, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56463, "largest_seqno": 58129, "table_properties": {"data_size": 2335531, "index_size": 4105, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16155, "raw_average_key_size": 20, "raw_value_size": 2320820, "raw_average_value_size": 3006, "num_data_blocks": 181, "num_entries": 772, "num_filter_entries": 772, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847287, "oldest_key_time": 1769847287, "file_creation_time": 1769847422, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:17:02 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 18645 microseconds, and 8148 cpu microseconds.
Jan 31 08:17:02 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:17:02 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:17:02.942556) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #113: 2342693 bytes OK
Jan 31 08:17:02 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:17:02.942575) [db/memtable_list.cc:519] [default] Level-0 commit table #113 started
Jan 31 08:17:02 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:17:02.944306) [db/memtable_list.cc:722] [default] Level-0 commit table #113: memtable #1 done
Jan 31 08:17:02 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:17:02.944320) EVENT_LOG_v1 {"time_micros": 1769847422944316, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:17:02 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:17:02.944338) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:17:02 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 3554684, prev total WAL file size 3554684, number of live WAL files 2.
Jan 31 08:17:02 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000109.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:17:02 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:17:02.945279) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Jan 31 08:17:02 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:17:02 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [113(2287KB)], [111(10208KB)]
Jan 31 08:17:02 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847422945363, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [113], "files_L6": [111], "score": -1, "input_data_size": 12795746, "oldest_snapshot_seqno": -1}
Jan 31 08:17:03 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #114: 8213 keys, 10839498 bytes, temperature: kUnknown
Jan 31 08:17:03 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847423009289, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 114, "file_size": 10839498, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10786443, "index_size": 31439, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20549, "raw_key_size": 213416, "raw_average_key_size": 25, "raw_value_size": 10641930, "raw_average_value_size": 1295, "num_data_blocks": 1229, "num_entries": 8213, "num_filter_entries": 8213, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769847422, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 114, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:17:03 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:17:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:17:03.009996) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10839498 bytes
Jan 31 08:17:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:17:03.011484) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 198.6 rd, 168.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 10.0 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(10.1) write-amplify(4.6) OK, records in: 8744, records dropped: 531 output_compression: NoCompression
Jan 31 08:17:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:17:03.011509) EVENT_LOG_v1 {"time_micros": 1769847423011498, "job": 70, "event": "compaction_finished", "compaction_time_micros": 64428, "compaction_time_cpu_micros": 23215, "output_level": 6, "num_output_files": 1, "total_output_size": 10839498, "num_input_records": 8744, "num_output_records": 8213, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:17:03 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:17:03 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847423011838, "job": 70, "event": "table_file_deletion", "file_number": 113}
Jan 31 08:17:03 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000111.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:17:03 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847423012614, "job": 70, "event": "table_file_deletion", "file_number": 111}
Jan 31 08:17:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:17:02.945107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:17:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:17:03.012705) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:17:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:17:03.012714) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:17:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:17:03.012715) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:17:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:17:03.012717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:17:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:17:03.012720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:17:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 31 08:17:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:03.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 31 08:17:03 compute-2 podman[288836]: 2026-01-31 08:17:03.201797077 +0000 UTC m=+0.069329788 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 08:17:03 compute-2 nova_compute[226829]: 2026-01-31 08:17:03.653 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:04 compute-2 nova_compute[226829]: 2026-01-31 08:17:04.049 226833 DEBUG nova.network.neutron [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Successfully created port: b9f7bcac-a230-466f-96c2-279443fc6e7a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:17:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:17:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:04.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:17:04 compute-2 ceph-mon[77282]: pgmap v2457: 305 pgs: 305 active+clean; 726 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 383 KiB/s rd, 4.0 MiB/s wr, 104 op/s
Jan 31 08:17:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:17:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:05.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:17:05 compute-2 nova_compute[226829]: 2026-01-31 08:17:05.661 226833 DEBUG nova.network.neutron [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Successfully updated port: b9f7bcac-a230-466f-96c2-279443fc6e7a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:17:05 compute-2 nova_compute[226829]: 2026-01-31 08:17:05.691 226833 DEBUG oslo_concurrency.lockutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Acquiring lock "refresh_cache-a0eef779-24bd-4096-a30b-859b597c1e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:17:05 compute-2 nova_compute[226829]: 2026-01-31 08:17:05.691 226833 DEBUG oslo_concurrency.lockutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Acquired lock "refresh_cache-a0eef779-24bd-4096-a30b-859b597c1e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:17:05 compute-2 nova_compute[226829]: 2026-01-31 08:17:05.691 226833 DEBUG nova.network.neutron [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:17:05 compute-2 nova_compute[226829]: 2026-01-31 08:17:05.947 226833 DEBUG nova.compute.manager [req-910cb9e4-0c6f-4fc3-ac89-0b3d9b595c5a req-ec26f2fd-9c96-4735-9dd6-218a7e7e52ea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Received event network-changed-b9f7bcac-a230-466f-96c2-279443fc6e7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:17:05 compute-2 nova_compute[226829]: 2026-01-31 08:17:05.948 226833 DEBUG nova.compute.manager [req-910cb9e4-0c6f-4fc3-ac89-0b3d9b595c5a req-ec26f2fd-9c96-4735-9dd6-218a7e7e52ea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Refreshing instance network info cache due to event network-changed-b9f7bcac-a230-466f-96c2-279443fc6e7a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:17:05 compute-2 nova_compute[226829]: 2026-01-31 08:17:05.948 226833 DEBUG oslo_concurrency.lockutils [req-910cb9e4-0c6f-4fc3-ac89-0b3d9b595c5a req-ec26f2fd-9c96-4735-9dd6-218a7e7e52ea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-a0eef779-24bd-4096-a30b-859b597c1e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:17:06 compute-2 nova_compute[226829]: 2026-01-31 08:17:06.036 226833 DEBUG nova.network.neutron [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:17:06 compute-2 nova_compute[226829]: 2026-01-31 08:17:06.085 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:06 compute-2 sudo[288864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:17:06 compute-2 sudo[288864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:06 compute-2 sudo[288864]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:06 compute-2 ceph-mon[77282]: pgmap v2458: 305 pgs: 305 active+clean; 754 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 136 KiB/s rd, 4.3 MiB/s wr, 87 op/s
Jan 31 08:17:06 compute-2 sudo[288889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:17:06 compute-2 sudo[288889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:06 compute-2 sudo[288889]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:06.892 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:06.894 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:06.895 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:17:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:06.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:17:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:17:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:07.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:17:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2757802043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:17:08 compute-2 nova_compute[226829]: 2026-01-31 08:17:08.533 226833 DEBUG nova.network.neutron [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Updating instance_info_cache with network_info: [{"id": "b9f7bcac-a230-466f-96c2-279443fc6e7a", "address": "fa:16:3e:82:02:48", "network": {"id": "55dd1ae9-f1f5-44fe-b83d-9159a673a6c6", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1906953653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "120d0fd1a2da48e3b76cc3c39ee91084", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f7bcac-a2", "ovs_interfaceid": "b9f7bcac-a230-466f-96c2-279443fc6e7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:17:08 compute-2 ceph-mon[77282]: pgmap v2459: 305 pgs: 305 active+clean; 754 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 3.5 MiB/s wr, 47 op/s
Jan 31 08:17:08 compute-2 nova_compute[226829]: 2026-01-31 08:17:08.656 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:08.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:09.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:10 compute-2 nova_compute[226829]: 2026-01-31 08:17:10.218 226833 DEBUG oslo_concurrency.lockutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Releasing lock "refresh_cache-a0eef779-24bd-4096-a30b-859b597c1e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:17:10 compute-2 nova_compute[226829]: 2026-01-31 08:17:10.218 226833 DEBUG nova.compute.manager [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Instance network_info: |[{"id": "b9f7bcac-a230-466f-96c2-279443fc6e7a", "address": "fa:16:3e:82:02:48", "network": {"id": "55dd1ae9-f1f5-44fe-b83d-9159a673a6c6", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1906953653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "120d0fd1a2da48e3b76cc3c39ee91084", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f7bcac-a2", "ovs_interfaceid": "b9f7bcac-a230-466f-96c2-279443fc6e7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:17:10 compute-2 nova_compute[226829]: 2026-01-31 08:17:10.219 226833 DEBUG oslo_concurrency.lockutils [req-910cb9e4-0c6f-4fc3-ac89-0b3d9b595c5a req-ec26f2fd-9c96-4735-9dd6-218a7e7e52ea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-a0eef779-24bd-4096-a30b-859b597c1e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:17:10 compute-2 nova_compute[226829]: 2026-01-31 08:17:10.220 226833 DEBUG nova.network.neutron [req-910cb9e4-0c6f-4fc3-ac89-0b3d9b595c5a req-ec26f2fd-9c96-4735-9dd6-218a7e7e52ea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Refreshing network info cache for port b9f7bcac-a230-466f-96c2-279443fc6e7a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:17:10 compute-2 nova_compute[226829]: 2026-01-31 08:17:10.224 226833 DEBUG nova.virt.libvirt.driver [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Start _get_guest_xml network_info=[{"id": "b9f7bcac-a230-466f-96c2-279443fc6e7a", "address": "fa:16:3e:82:02:48", "network": {"id": "55dd1ae9-f1f5-44fe-b83d-9159a673a6c6", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1906953653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "120d0fd1a2da48e3b76cc3c39ee91084", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f7bcac-a2", "ovs_interfaceid": "b9f7bcac-a230-466f-96c2-279443fc6e7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:17:10 compute-2 nova_compute[226829]: 2026-01-31 08:17:10.230 226833 WARNING nova.virt.libvirt.driver [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:17:10 compute-2 nova_compute[226829]: 2026-01-31 08:17:10.245 226833 DEBUG nova.virt.libvirt.host [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:17:10 compute-2 nova_compute[226829]: 2026-01-31 08:17:10.247 226833 DEBUG nova.virt.libvirt.host [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:17:10 compute-2 nova_compute[226829]: 2026-01-31 08:17:10.253 226833 DEBUG nova.virt.libvirt.host [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:17:10 compute-2 nova_compute[226829]: 2026-01-31 08:17:10.254 226833 DEBUG nova.virt.libvirt.host [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:17:10 compute-2 nova_compute[226829]: 2026-01-31 08:17:10.256 226833 DEBUG nova.virt.libvirt.driver [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:17:10 compute-2 nova_compute[226829]: 2026-01-31 08:17:10.256 226833 DEBUG nova.virt.hardware [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:17:10 compute-2 nova_compute[226829]: 2026-01-31 08:17:10.258 226833 DEBUG nova.virt.hardware [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:17:10 compute-2 nova_compute[226829]: 2026-01-31 08:17:10.258 226833 DEBUG nova.virt.hardware [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:17:10 compute-2 nova_compute[226829]: 2026-01-31 08:17:10.259 226833 DEBUG nova.virt.hardware [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:17:10 compute-2 nova_compute[226829]: 2026-01-31 08:17:10.259 226833 DEBUG nova.virt.hardware [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:17:10 compute-2 nova_compute[226829]: 2026-01-31 08:17:10.260 226833 DEBUG nova.virt.hardware [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:17:10 compute-2 nova_compute[226829]: 2026-01-31 08:17:10.260 226833 DEBUG nova.virt.hardware [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:17:10 compute-2 nova_compute[226829]: 2026-01-31 08:17:10.261 226833 DEBUG nova.virt.hardware [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:17:10 compute-2 nova_compute[226829]: 2026-01-31 08:17:10.261 226833 DEBUG nova.virt.hardware [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:17:10 compute-2 nova_compute[226829]: 2026-01-31 08:17:10.262 226833 DEBUG nova.virt.hardware [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:17:10 compute-2 nova_compute[226829]: 2026-01-31 08:17:10.262 226833 DEBUG nova.virt.hardware [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:17:10 compute-2 nova_compute[226829]: 2026-01-31 08:17:10.268 226833 DEBUG oslo_concurrency.processutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:17:10 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4075107395' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:10 compute-2 nova_compute[226829]: 2026-01-31 08:17:10.718 226833 DEBUG oslo_concurrency.processutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:10 compute-2 nova_compute[226829]: 2026-01-31 08:17:10.744 226833 DEBUG nova.storage.rbd_utils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] rbd image a0eef779-24bd-4096-a30b-859b597c1e18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:17:10 compute-2 nova_compute[226829]: 2026-01-31 08:17:10.748 226833 DEBUG oslo_concurrency.processutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:10.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:10 compute-2 ceph-mon[77282]: pgmap v2460: 305 pgs: 305 active+clean; 754 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 3.4 MiB/s wr, 44 op/s
Jan 31 08:17:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4075107395' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:17:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:11.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.090 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:11.092 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:17:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:11.094 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:17:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:17:11 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/595431473' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.202 226833 DEBUG oslo_concurrency.processutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.204 226833 DEBUG nova.virt.libvirt.vif [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:16:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-716669371',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-716669371',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-716669371',id=138,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='120d0fd1a2da48e3b76cc3c39ee91084',ramdisk_id='',reservation_id='r-2lnqwo5x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-913751323',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-913751323-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:17:00Z,user_data=None,user_id='9ca99c4062164ef3b438a9f924677d01',uuid=a0eef779-24bd-4096-a30b-859b597c1e18,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b9f7bcac-a230-466f-96c2-279443fc6e7a", "address": "fa:16:3e:82:02:48", "network": {"id": "55dd1ae9-f1f5-44fe-b83d-9159a673a6c6", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1906953653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "120d0fd1a2da48e3b76cc3c39ee91084", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f7bcac-a2", "ovs_interfaceid": "b9f7bcac-a230-466f-96c2-279443fc6e7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.204 226833 DEBUG nova.network.os_vif_util [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Converting VIF {"id": "b9f7bcac-a230-466f-96c2-279443fc6e7a", "address": "fa:16:3e:82:02:48", "network": {"id": "55dd1ae9-f1f5-44fe-b83d-9159a673a6c6", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1906953653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "120d0fd1a2da48e3b76cc3c39ee91084", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f7bcac-a2", "ovs_interfaceid": "b9f7bcac-a230-466f-96c2-279443fc6e7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.205 226833 DEBUG nova.network.os_vif_util [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:02:48,bridge_name='br-int',has_traffic_filtering=True,id=b9f7bcac-a230-466f-96c2-279443fc6e7a,network=Network(55dd1ae9-f1f5-44fe-b83d-9159a673a6c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9f7bcac-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.206 226833 DEBUG nova.objects.instance [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Lazy-loading 'pci_devices' on Instance uuid a0eef779-24bd-4096-a30b-859b597c1e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.274 226833 DEBUG nova.virt.libvirt.driver [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:17:11 compute-2 nova_compute[226829]:   <uuid>a0eef779-24bd-4096-a30b-859b597c1e18</uuid>
Jan 31 08:17:11 compute-2 nova_compute[226829]:   <name>instance-0000008a</name>
Jan 31 08:17:11 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:17:11 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:17:11 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <nova:name>tempest-ServersNegativeTestMultiTenantJSON-server-716669371</nova:name>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:17:10</nova:creationTime>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:17:11 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:17:11 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:17:11 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:17:11 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:17:11 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:17:11 compute-2 nova_compute[226829]:         <nova:user uuid="9ca99c4062164ef3b438a9f924677d01">tempest-ServersNegativeTestMultiTenantJSON-913751323-project-member</nova:user>
Jan 31 08:17:11 compute-2 nova_compute[226829]:         <nova:project uuid="120d0fd1a2da48e3b76cc3c39ee91084">tempest-ServersNegativeTestMultiTenantJSON-913751323</nova:project>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:17:11 compute-2 nova_compute[226829]:         <nova:port uuid="b9f7bcac-a230-466f-96c2-279443fc6e7a">
Jan 31 08:17:11 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:17:11 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:17:11 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <system>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <entry name="serial">a0eef779-24bd-4096-a30b-859b597c1e18</entry>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <entry name="uuid">a0eef779-24bd-4096-a30b-859b597c1e18</entry>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     </system>
Jan 31 08:17:11 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:17:11 compute-2 nova_compute[226829]:   <os>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:   </os>
Jan 31 08:17:11 compute-2 nova_compute[226829]:   <features>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:   </features>
Jan 31 08:17:11 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:17:11 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:17:11 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/a0eef779-24bd-4096-a30b-859b597c1e18_disk">
Jan 31 08:17:11 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       </source>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:17:11 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/a0eef779-24bd-4096-a30b-859b597c1e18_disk.config">
Jan 31 08:17:11 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       </source>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:17:11 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:82:02:48"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <target dev="tapb9f7bcac-a2"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/a0eef779-24bd-4096-a30b-859b597c1e18/console.log" append="off"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <video>
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     </video>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:17:11 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:17:11 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:17:11 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:17:11 compute-2 nova_compute[226829]: </domain>
Jan 31 08:17:11 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.274 226833 DEBUG nova.compute.manager [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Preparing to wait for external event network-vif-plugged-b9f7bcac-a230-466f-96c2-279443fc6e7a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.274 226833 DEBUG oslo_concurrency.lockutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Acquiring lock "a0eef779-24bd-4096-a30b-859b597c1e18-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.274 226833 DEBUG oslo_concurrency.lockutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Lock "a0eef779-24bd-4096-a30b-859b597c1e18-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.275 226833 DEBUG oslo_concurrency.lockutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Lock "a0eef779-24bd-4096-a30b-859b597c1e18-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.275 226833 DEBUG nova.virt.libvirt.vif [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:16:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-716669371',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-716669371',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-716669371',id=138,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='120d0fd1a2da48e3b76cc3c39ee91084',ramdisk_id='',reservation_id='r-2lnqwo5x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-913751323',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-913751323-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:17:00Z,user_data=None,user_id='9ca99c4062164ef3b438a9f924677d01',uuid=a0eef779-24bd-4096-a30b-859b597c1e18,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b9f7bcac-a230-466f-96c2-279443fc6e7a", "address": "fa:16:3e:82:02:48", "network": {"id": "55dd1ae9-f1f5-44fe-b83d-9159a673a6c6", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1906953653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "120d0fd1a2da48e3b76cc3c39ee91084", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f7bcac-a2", "ovs_interfaceid": "b9f7bcac-a230-466f-96c2-279443fc6e7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.276 226833 DEBUG nova.network.os_vif_util [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Converting VIF {"id": "b9f7bcac-a230-466f-96c2-279443fc6e7a", "address": "fa:16:3e:82:02:48", "network": {"id": "55dd1ae9-f1f5-44fe-b83d-9159a673a6c6", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1906953653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "120d0fd1a2da48e3b76cc3c39ee91084", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f7bcac-a2", "ovs_interfaceid": "b9f7bcac-a230-466f-96c2-279443fc6e7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.276 226833 DEBUG nova.network.os_vif_util [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:02:48,bridge_name='br-int',has_traffic_filtering=True,id=b9f7bcac-a230-466f-96c2-279443fc6e7a,network=Network(55dd1ae9-f1f5-44fe-b83d-9159a673a6c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9f7bcac-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.277 226833 DEBUG os_vif [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:02:48,bridge_name='br-int',has_traffic_filtering=True,id=b9f7bcac-a230-466f-96c2-279443fc6e7a,network=Network(55dd1ae9-f1f5-44fe-b83d-9159a673a6c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9f7bcac-a2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.277 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.278 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.278 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.283 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.283 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9f7bcac-a2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.284 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb9f7bcac-a2, col_values=(('external_ids', {'iface-id': 'b9f7bcac-a230-466f-96c2-279443fc6e7a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:82:02:48', 'vm-uuid': 'a0eef779-24bd-4096-a30b-859b597c1e18'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:11 compute-2 NetworkManager[48999]: <info>  [1769847431.2857] manager: (tapb9f7bcac-a2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/283)
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.286 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.289 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.290 226833 INFO os_vif [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:02:48,bridge_name='br-int',has_traffic_filtering=True,id=b9f7bcac-a230-466f-96c2-279443fc6e7a,network=Network(55dd1ae9-f1f5-44fe-b83d-9159a673a6c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9f7bcac-a2')
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.487 226833 DEBUG nova.virt.libvirt.driver [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.487 226833 DEBUG nova.virt.libvirt.driver [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.487 226833 DEBUG nova.virt.libvirt.driver [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] No VIF found with MAC fa:16:3e:82:02:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.488 226833 INFO nova.virt.libvirt.driver [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Using config drive
Jan 31 08:17:11 compute-2 nova_compute[226829]: 2026-01-31 08:17:11.510 226833 DEBUG nova.storage.rbd_utils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] rbd image a0eef779-24bd-4096-a30b-859b597c1e18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:17:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/595431473' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1102898705' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:12 compute-2 nova_compute[226829]: 2026-01-31 08:17:12.843 226833 INFO nova.virt.libvirt.driver [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Creating config drive at /var/lib/nova/instances/a0eef779-24bd-4096-a30b-859b597c1e18/disk.config
Jan 31 08:17:12 compute-2 nova_compute[226829]: 2026-01-31 08:17:12.850 226833 DEBUG oslo_concurrency.processutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a0eef779-24bd-4096-a30b-859b597c1e18/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpuaiv0vnx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:12.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:12 compute-2 nova_compute[226829]: 2026-01-31 08:17:12.984 226833 DEBUG oslo_concurrency.processutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a0eef779-24bd-4096-a30b-859b597c1e18/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpuaiv0vnx" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:13 compute-2 ceph-mon[77282]: pgmap v2461: 305 pgs: 305 active+clean; 754 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 3.4 MiB/s wr, 46 op/s
Jan 31 08:17:13 compute-2 nova_compute[226829]: 2026-01-31 08:17:13.015 226833 DEBUG nova.storage.rbd_utils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] rbd image a0eef779-24bd-4096-a30b-859b597c1e18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:17:13 compute-2 nova_compute[226829]: 2026-01-31 08:17:13.019 226833 DEBUG oslo_concurrency.processutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a0eef779-24bd-4096-a30b-859b597c1e18/disk.config a0eef779-24bd-4096-a30b-859b597c1e18_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:13.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:13 compute-2 podman[289036]: 2026-01-31 08:17:13.155820682 +0000 UTC m=+0.043899140 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:17:13 compute-2 nova_compute[226829]: 2026-01-31 08:17:13.195 226833 DEBUG oslo_concurrency.processutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a0eef779-24bd-4096-a30b-859b597c1e18/disk.config a0eef779-24bd-4096-a30b-859b597c1e18_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:13 compute-2 nova_compute[226829]: 2026-01-31 08:17:13.196 226833 INFO nova.virt.libvirt.driver [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Deleting local config drive /var/lib/nova/instances/a0eef779-24bd-4096-a30b-859b597c1e18/disk.config because it was imported into RBD.
Jan 31 08:17:13 compute-2 kernel: tapb9f7bcac-a2: entered promiscuous mode
Jan 31 08:17:13 compute-2 NetworkManager[48999]: <info>  [1769847433.2267] manager: (tapb9f7bcac-a2): new Tun device (/org/freedesktop/NetworkManager/Devices/284)
Jan 31 08:17:13 compute-2 nova_compute[226829]: 2026-01-31 08:17:13.227 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:13 compute-2 nova_compute[226829]: 2026-01-31 08:17:13.230 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:13 compute-2 nova_compute[226829]: 2026-01-31 08:17:13.234 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:13 compute-2 ovn_controller[133834]: 2026-01-31T08:17:13Z|00581|binding|INFO|Claiming lport b9f7bcac-a230-466f-96c2-279443fc6e7a for this chassis.
Jan 31 08:17:13 compute-2 ovn_controller[133834]: 2026-01-31T08:17:13Z|00582|binding|INFO|b9f7bcac-a230-466f-96c2-279443fc6e7a: Claiming fa:16:3e:82:02:48 10.100.0.5
Jan 31 08:17:13 compute-2 ovn_controller[133834]: 2026-01-31T08:17:13Z|00583|binding|INFO|Setting lport b9f7bcac-a230-466f-96c2-279443fc6e7a ovn-installed in OVS
Jan 31 08:17:13 compute-2 nova_compute[226829]: 2026-01-31 08:17:13.241 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:13 compute-2 systemd-udevd[289071]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:17:13 compute-2 systemd-machined[195142]: New machine qemu-63-instance-0000008a.
Jan 31 08:17:13 compute-2 NetworkManager[48999]: <info>  [1769847433.2605] device (tapb9f7bcac-a2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:17:13 compute-2 NetworkManager[48999]: <info>  [1769847433.2613] device (tapb9f7bcac-a2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:17:13 compute-2 systemd[1]: Started Virtual Machine qemu-63-instance-0000008a.
Jan 31 08:17:13 compute-2 ovn_controller[133834]: 2026-01-31T08:17:13Z|00584|binding|INFO|Setting lport b9f7bcac-a230-466f-96c2-279443fc6e7a up in Southbound
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:13.269 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:02:48 10.100.0.5'], port_security=['fa:16:3e:82:02:48 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a0eef779-24bd-4096-a30b-859b597c1e18', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55dd1ae9-f1f5-44fe-b83d-9159a673a6c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '120d0fd1a2da48e3b76cc3c39ee91084', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e953946e-7029-4728-8501-25e409251629', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f14f89f7-8d0f-4bf9-b53b-a4f88664cebd, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=b9f7bcac-a230-466f-96c2-279443fc6e7a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:13.271 143841 INFO neutron.agent.ovn.metadata.agent [-] Port b9f7bcac-a230-466f-96c2-279443fc6e7a in datapath 55dd1ae9-f1f5-44fe-b83d-9159a673a6c6 bound to our chassis
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:13.273 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55dd1ae9-f1f5-44fe-b83d-9159a673a6c6
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:13.283 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9af4c686-08d1-4e8d-91b3-69513d75812c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:13.285 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap55dd1ae9-f1 in ovnmeta-55dd1ae9-f1f5-44fe-b83d-9159a673a6c6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:13.286 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap55dd1ae9-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:13.286 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[35bbe8c0-6477-4e76-9c83-445d236fe0f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:13.287 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[dc50aeab-0787-4c6c-a7f9-3369d63f6f0a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:13.295 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[f3f0aba0-0be7-4fdd-8fcb-fd112e98e498]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:13.315 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2db5af0c-e06c-4f42-a751-2443b8024c21]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:13.344 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[500b7fde-f18d-412b-a370-7cfbaa94cde0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:13.351 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[fdd42971-ada0-45f9-8725-56a73c0caa2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:13 compute-2 NetworkManager[48999]: <info>  [1769847433.3533] manager: (tap55dd1ae9-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/285)
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:13.379 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[3ac53d82-d8c9-48f2-9b39-09cd0956b65e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:13.383 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[03ca10b2-b6ae-4764-9d4d-3b550a0438c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:13 compute-2 NetworkManager[48999]: <info>  [1769847433.4041] device (tap55dd1ae9-f0): carrier: link connected
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:13.409 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[5d3e36c8-4fa0-4f7e-92eb-497c5bb383db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:13.423 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cd01db92-fa5f-4184-9dd7-7a702ba321ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55dd1ae9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:06:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 768587, 'reachable_time': 19024, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289105, 'error': None, 'target': 'ovnmeta-55dd1ae9-f1f5-44fe-b83d-9159a673a6c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:13.437 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[77226578-e5c4-405d-a233-0986f01ae5b3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe25:679'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 768587, 'tstamp': 768587}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289106, 'error': None, 'target': 'ovnmeta-55dd1ae9-f1f5-44fe-b83d-9159a673a6c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:13.450 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[158b40b0-586f-4ec4-921e-366300fbee76]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55dd1ae9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:06:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 768587, 'reachable_time': 19024, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289107, 'error': None, 'target': 'ovnmeta-55dd1ae9-f1f5-44fe-b83d-9159a673a6c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:13.473 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[79d549c4-7ea3-40f7-bf11-01f0aba6abf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:13.508 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4114a1ec-cc6b-4f8e-951e-be67d73b6535]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:13.510 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55dd1ae9-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:13.510 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:13.511 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55dd1ae9-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:13 compute-2 nova_compute[226829]: 2026-01-31 08:17:13.512 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:13 compute-2 kernel: tap55dd1ae9-f0: entered promiscuous mode
Jan 31 08:17:13 compute-2 NetworkManager[48999]: <info>  [1769847433.5146] manager: (tap55dd1ae9-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/286)
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:13.515 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55dd1ae9-f0, col_values=(('external_ids', {'iface-id': 'c9e66ac3-ce30-40c6-b781-cd1bf305b2ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:13 compute-2 ovn_controller[133834]: 2026-01-31T08:17:13Z|00585|binding|INFO|Releasing lport c9e66ac3-ce30-40c6-b781-cd1bf305b2ab from this chassis (sb_readonly=0)
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:13.520 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/55dd1ae9-f1f5-44fe-b83d-9159a673a6c6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/55dd1ae9-f1f5-44fe-b83d-9159a673a6c6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:17:13 compute-2 nova_compute[226829]: 2026-01-31 08:17:13.520 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:13.521 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9588a793-8b66-43b4-899c-7b87af1e8a5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:13.522 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-55dd1ae9-f1f5-44fe-b83d-9159a673a6c6
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/55dd1ae9-f1f5-44fe-b83d-9159a673a6c6.pid.haproxy
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 55dd1ae9-f1f5-44fe-b83d-9159a673a6c6
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:17:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:13.524 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-55dd1ae9-f1f5-44fe-b83d-9159a673a6c6', 'env', 'PROCESS_TAG=haproxy-55dd1ae9-f1f5-44fe-b83d-9159a673a6c6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/55dd1ae9-f1f5-44fe-b83d-9159a673a6c6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:17:13 compute-2 nova_compute[226829]: 2026-01-31 08:17:13.860 226833 DEBUG nova.network.neutron [req-910cb9e4-0c6f-4fc3-ac89-0b3d9b595c5a req-ec26f2fd-9c96-4735-9dd6-218a7e7e52ea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Updated VIF entry in instance network info cache for port b9f7bcac-a230-466f-96c2-279443fc6e7a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:17:13 compute-2 nova_compute[226829]: 2026-01-31 08:17:13.861 226833 DEBUG nova.network.neutron [req-910cb9e4-0c6f-4fc3-ac89-0b3d9b595c5a req-ec26f2fd-9c96-4735-9dd6-218a7e7e52ea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Updating instance_info_cache with network_info: [{"id": "b9f7bcac-a230-466f-96c2-279443fc6e7a", "address": "fa:16:3e:82:02:48", "network": {"id": "55dd1ae9-f1f5-44fe-b83d-9159a673a6c6", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1906953653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "120d0fd1a2da48e3b76cc3c39ee91084", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f7bcac-a2", "ovs_interfaceid": "b9f7bcac-a230-466f-96c2-279443fc6e7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:17:13 compute-2 podman[289139]: 2026-01-31 08:17:13.837726057 +0000 UTC m=+0.020865587 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:17:14 compute-2 podman[289139]: 2026-01-31 08:17:14.013671161 +0000 UTC m=+0.196810681 container create 0a14be7077661746a29781678f1ceb50b8a13f3f7c5da0d35053a27f99c0981c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55dd1ae9-f1f5-44fe-b83d-9159a673a6c6, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:17:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4193268699' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:14 compute-2 ceph-mon[77282]: pgmap v2462: 305 pgs: 305 active+clean; 754 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 31 08:17:14 compute-2 nova_compute[226829]: 2026-01-31 08:17:14.073 226833 DEBUG oslo_concurrency.lockutils [req-910cb9e4-0c6f-4fc3-ac89-0b3d9b595c5a req-ec26f2fd-9c96-4735-9dd6-218a7e7e52ea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-a0eef779-24bd-4096-a30b-859b597c1e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:17:14 compute-2 systemd[1]: Started libpod-conmon-0a14be7077661746a29781678f1ceb50b8a13f3f7c5da0d35053a27f99c0981c.scope.
Jan 31 08:17:14 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:17:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/922e403436e8f0c7d03699904b792a1160c88abc943e03c84eaf8e3d3e0d6c0e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:14 compute-2 podman[289139]: 2026-01-31 08:17:14.17428704 +0000 UTC m=+0.357426590 container init 0a14be7077661746a29781678f1ceb50b8a13f3f7c5da0d35053a27f99c0981c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55dd1ae9-f1f5-44fe-b83d-9159a673a6c6, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 08:17:14 compute-2 podman[289139]: 2026-01-31 08:17:14.179238425 +0000 UTC m=+0.362377945 container start 0a14be7077661746a29781678f1ceb50b8a13f3f7c5da0d35053a27f99c0981c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55dd1ae9-f1f5-44fe-b83d-9159a673a6c6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:17:14 compute-2 neutron-haproxy-ovnmeta-55dd1ae9-f1f5-44fe-b83d-9159a673a6c6[289155]: [NOTICE]   (289159) : New worker (289161) forked
Jan 31 08:17:14 compute-2 neutron-haproxy-ovnmeta-55dd1ae9-f1f5-44fe-b83d-9159a673a6c6[289155]: [NOTICE]   (289159) : Loading success.
Jan 31 08:17:14 compute-2 nova_compute[226829]: 2026-01-31 08:17:14.830 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847434.829481, a0eef779-24bd-4096-a30b-859b597c1e18 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:17:14 compute-2 nova_compute[226829]: 2026-01-31 08:17:14.830 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] VM Started (Lifecycle Event)
Jan 31 08:17:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:14 compute-2 nova_compute[226829]: 2026-01-31 08:17:14.880 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:17:14 compute-2 nova_compute[226829]: 2026-01-31 08:17:14.883 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847434.831932, a0eef779-24bd-4096-a30b-859b597c1e18 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:17:14 compute-2 nova_compute[226829]: 2026-01-31 08:17:14.883 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] VM Paused (Lifecycle Event)
Jan 31 08:17:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:14.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:14 compute-2 nova_compute[226829]: 2026-01-31 08:17:14.944 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:17:14 compute-2 nova_compute[226829]: 2026-01-31 08:17:14.948 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:17:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4054031621' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2253487533' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:15.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.268 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.528 226833 DEBUG oslo_concurrency.lockutils [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.529 226833 DEBUG oslo_concurrency.lockutils [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.529 226833 INFO nova.compute.manager [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Shelving
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.556 226833 DEBUG nova.virt.libvirt.driver [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 08:17:15 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-2[77982]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.664 226833 DEBUG nova.compute.manager [req-53a9884d-b9ae-4721-bfcb-60aeb9430e84 req-d22105af-b88e-4610-ae5a-2df223f64826 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Received event network-vif-plugged-b9f7bcac-a230-466f-96c2-279443fc6e7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.665 226833 DEBUG oslo_concurrency.lockutils [req-53a9884d-b9ae-4721-bfcb-60aeb9430e84 req-d22105af-b88e-4610-ae5a-2df223f64826 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a0eef779-24bd-4096-a30b-859b597c1e18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.665 226833 DEBUG oslo_concurrency.lockutils [req-53a9884d-b9ae-4721-bfcb-60aeb9430e84 req-d22105af-b88e-4610-ae5a-2df223f64826 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a0eef779-24bd-4096-a30b-859b597c1e18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.665 226833 DEBUG oslo_concurrency.lockutils [req-53a9884d-b9ae-4721-bfcb-60aeb9430e84 req-d22105af-b88e-4610-ae5a-2df223f64826 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a0eef779-24bd-4096-a30b-859b597c1e18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.665 226833 DEBUG nova.compute.manager [req-53a9884d-b9ae-4721-bfcb-60aeb9430e84 req-d22105af-b88e-4610-ae5a-2df223f64826 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Processing event network-vif-plugged-b9f7bcac-a230-466f-96c2-279443fc6e7a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.666 226833 DEBUG nova.compute.manager [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.670 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847435.6701748, a0eef779-24bd-4096-a30b-859b597c1e18 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.670 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] VM Resumed (Lifecycle Event)
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.672 226833 DEBUG nova.virt.libvirt.driver [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.675 226833 INFO nova.virt.libvirt.driver [-] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Instance spawned successfully.
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.675 226833 DEBUG nova.virt.libvirt.driver [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.714 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.717 226833 DEBUG nova.virt.libvirt.driver [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.718 226833 DEBUG nova.virt.libvirt.driver [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.719 226833 DEBUG nova.virt.libvirt.driver [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.720 226833 DEBUG nova.virt.libvirt.driver [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.720 226833 DEBUG nova.virt.libvirt.driver [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.721 226833 DEBUG nova.virt.libvirt.driver [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.731 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.760 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.822 226833 INFO nova.compute.manager [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Took 14.74 seconds to spawn the instance on the hypervisor.
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.823 226833 DEBUG nova.compute.manager [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.937 226833 INFO nova.compute.manager [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Took 17.01 seconds to build instance.
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.968 226833 DEBUG oslo_concurrency.lockutils [None req-1c8cfd81-ae8e-40d7-8559-c786ff963a33 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Lock "a0eef779-24bd-4096-a30b-859b597c1e18" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.968 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "a0eef779-24bd-4096-a30b-859b597c1e18" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 13.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.969 226833 INFO nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:17:15 compute-2 nova_compute[226829]: 2026-01-31 08:17:15.969 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "a0eef779-24bd-4096-a30b-859b597c1e18" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:16 compute-2 nova_compute[226829]: 2026-01-31 08:17:16.089 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:16 compute-2 nova_compute[226829]: 2026-01-31 08:17:16.285 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:16 compute-2 ceph-mon[77282]: pgmap v2463: 305 pgs: 305 active+clean; 754 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 850 KiB/s wr, 43 op/s
Jan 31 08:17:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:17:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:16.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:17:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:17:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:17.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:17:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:17.097 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:17 compute-2 sudo[289213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:17:17 compute-2 sudo[289213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:17 compute-2 sudo[289213]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:17 compute-2 sudo[289238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:17:17 compute-2 sudo[289238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:17 compute-2 sudo[289238]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:17 compute-2 sudo[289263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:17:17 compute-2 sudo[289263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:17 compute-2 sudo[289263]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:17 compute-2 sudo[289288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 08:17:17 compute-2 sudo[289288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:17:17 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2807485188' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2807485188' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:17 compute-2 podman[289387]: 2026-01-31 08:17:17.888728073 +0000 UTC m=+0.066037490 container exec 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:17:17 compute-2 nova_compute[226829]: 2026-01-31 08:17:17.933 226833 DEBUG nova.compute.manager [req-07fd2d6d-43ae-4dee-a2c6-08f1a3af3795 req-271eb04f-4a27-4dc9-b902-cf421e77a127 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Received event network-vif-plugged-b9f7bcac-a230-466f-96c2-279443fc6e7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:17:17 compute-2 nova_compute[226829]: 2026-01-31 08:17:17.934 226833 DEBUG oslo_concurrency.lockutils [req-07fd2d6d-43ae-4dee-a2c6-08f1a3af3795 req-271eb04f-4a27-4dc9-b902-cf421e77a127 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a0eef779-24bd-4096-a30b-859b597c1e18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:17 compute-2 nova_compute[226829]: 2026-01-31 08:17:17.934 226833 DEBUG oslo_concurrency.lockutils [req-07fd2d6d-43ae-4dee-a2c6-08f1a3af3795 req-271eb04f-4a27-4dc9-b902-cf421e77a127 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a0eef779-24bd-4096-a30b-859b597c1e18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:17 compute-2 nova_compute[226829]: 2026-01-31 08:17:17.934 226833 DEBUG oslo_concurrency.lockutils [req-07fd2d6d-43ae-4dee-a2c6-08f1a3af3795 req-271eb04f-4a27-4dc9-b902-cf421e77a127 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a0eef779-24bd-4096-a30b-859b597c1e18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:17 compute-2 nova_compute[226829]: 2026-01-31 08:17:17.934 226833 DEBUG nova.compute.manager [req-07fd2d6d-43ae-4dee-a2c6-08f1a3af3795 req-271eb04f-4a27-4dc9-b902-cf421e77a127 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] No waiting events found dispatching network-vif-plugged-b9f7bcac-a230-466f-96c2-279443fc6e7a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:17:17 compute-2 nova_compute[226829]: 2026-01-31 08:17:17.935 226833 WARNING nova.compute.manager [req-07fd2d6d-43ae-4dee-a2c6-08f1a3af3795 req-271eb04f-4a27-4dc9-b902-cf421e77a127 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Received unexpected event network-vif-plugged-b9f7bcac-a230-466f-96c2-279443fc6e7a for instance with vm_state active and task_state None.
Jan 31 08:17:18 compute-2 podman[289387]: 2026-01-31 08:17:18.003387508 +0000 UTC m=+0.180696915 container exec_died 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:17:18 compute-2 podman[289545]: 2026-01-31 08:17:18.690315799 +0000 UTC m=+0.116162216 container exec f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 08:17:18 compute-2 podman[289564]: 2026-01-31 08:17:18.759194124 +0000 UTC m=+0.053383936 container exec_died f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 08:17:18 compute-2 podman[289545]: 2026-01-31 08:17:18.769671857 +0000 UTC m=+0.195518214 container exec_died f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 08:17:18 compute-2 ceph-mon[77282]: pgmap v2464: 305 pgs: 305 active+clean; 755 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 974 KiB/s rd, 48 KiB/s wr, 69 op/s
Jan 31 08:17:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:18.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:18 compute-2 podman[289606]: 2026-01-31 08:17:18.98507934 +0000 UTC m=+0.061267490 container exec 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, vcs-type=git, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived)
Jan 31 08:17:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:19.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:19 compute-2 podman[289626]: 2026-01-31 08:17:19.076291041 +0000 UTC m=+0.074691494 container exec_died 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, com.redhat.component=keepalived-container, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9)
Jan 31 08:17:19 compute-2 kernel: tap41d7c79a-73 (unregistering): left promiscuous mode
Jan 31 08:17:19 compute-2 NetworkManager[48999]: <info>  [1769847439.0962] device (tap41d7c79a-73): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:17:19 compute-2 nova_compute[226829]: 2026-01-31 08:17:19.104 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:19 compute-2 ovn_controller[133834]: 2026-01-31T08:17:19Z|00586|binding|INFO|Releasing lport 41d7c79a-73d5-466b-ba68-192c5a2c01b3 from this chassis (sb_readonly=0)
Jan 31 08:17:19 compute-2 ovn_controller[133834]: 2026-01-31T08:17:19Z|00587|binding|INFO|Setting lport 41d7c79a-73d5-466b-ba68-192c5a2c01b3 down in Southbound
Jan 31 08:17:19 compute-2 ovn_controller[133834]: 2026-01-31T08:17:19Z|00588|binding|INFO|Removing iface tap41d7c79a-73 ovn-installed in OVS
Jan 31 08:17:19 compute-2 podman[289606]: 2026-01-31 08:17:19.108870693 +0000 UTC m=+0.185058843 container exec_died 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, vcs-type=git, io.openshift.tags=Ceph keepalived, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1793, com.redhat.component=keepalived-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, description=keepalived for Ceph, architecture=x86_64, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 31 08:17:19 compute-2 nova_compute[226829]: 2026-01-31 08:17:19.114 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:19 compute-2 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000089.scope: Deactivated successfully.
Jan 31 08:17:19 compute-2 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000089.scope: Consumed 14.665s CPU time.
Jan 31 08:17:19 compute-2 systemd-machined[195142]: Machine qemu-62-instance-00000089 terminated.
Jan 31 08:17:19 compute-2 sudo[289288]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:19 compute-2 sudo[289646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:17:19 compute-2 sudo[289646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:19 compute-2 sudo[289646]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:19 compute-2 sudo[289671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:17:19 compute-2 sudo[289671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:19 compute-2 sudo[289671]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:19 compute-2 sudo[289707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:17:19 compute-2 sudo[289707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:19 compute-2 sudo[289707]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:19 compute-2 sudo[289732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:17:19 compute-2 sudo[289732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:19 compute-2 nova_compute[226829]: 2026-01-31 08:17:19.580 226833 INFO nova.virt.libvirt.driver [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Instance shutdown successfully after 4 seconds.
Jan 31 08:17:19 compute-2 nova_compute[226829]: 2026-01-31 08:17:19.593 226833 INFO nova.virt.libvirt.driver [-] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Instance destroyed successfully.
Jan 31 08:17:19 compute-2 nova_compute[226829]: 2026-01-31 08:17:19.593 226833 DEBUG nova.objects.instance [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lazy-loading 'numa_topology' on Instance uuid 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:19 compute-2 sudo[289732]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:20.155 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:40:e9 10.100.0.14'], port_security=['fa:16:3e:b7:40:e9 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2c1aa7ad-f9c1-4e05-8261-defaa3eef40b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b47e290-9853-478f-86cb-c8ea73119a97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b06982960ad4453b8e542cb6330835d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd12bd0a3-e514-4fde-9351-39f4527fc3f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.232'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46c631ec-3e4c-4c27-ae32-3645d3f29853, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=41d7c79a-73d5-466b-ba68-192c5a2c01b3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:17:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:20.157 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 41d7c79a-73d5-466b-ba68-192c5a2c01b3 in datapath 2b47e290-9853-478f-86cb-c8ea73119a97 unbound from our chassis
Jan 31 08:17:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:20.160 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b47e290-9853-478f-86cb-c8ea73119a97, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:17:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:20.161 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[35fd3f7d-9fcb-4ddf-a577-eef592128701]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:20.162 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97 namespace which is not needed anymore
Jan 31 08:17:20 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:17:20 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:17:20 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:17:20 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:17:20 compute-2 ceph-mon[77282]: pgmap v2465: 305 pgs: 305 active+clean; 755 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 974 KiB/s rd, 48 KiB/s wr, 69 op/s
Jan 31 08:17:20 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:17:20 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:17:20 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:17:20 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:17:20 compute-2 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[288418]: [NOTICE]   (288424) : haproxy version is 2.8.14-c23fe91
Jan 31 08:17:20 compute-2 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[288418]: [NOTICE]   (288424) : path to executable is /usr/sbin/haproxy
Jan 31 08:17:20 compute-2 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[288418]: [ALERT]    (288424) : Current worker (288427) exited with code 143 (Terminated)
Jan 31 08:17:20 compute-2 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[288418]: [WARNING]  (288424) : All workers exited. Exiting... (0)
Jan 31 08:17:20 compute-2 systemd[1]: libpod-f6f8c54b9d3c54ead3cca4fcd5c8ec3e400410e396ee7888ada814ea664fd1af.scope: Deactivated successfully.
Jan 31 08:17:20 compute-2 podman[289805]: 2026-01-31 08:17:20.296969975 +0000 UTC m=+0.066367698 container died f6f8c54b9d3c54ead3cca4fcd5c8ec3e400410e396ee7888ada814ea664fd1af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:17:20 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f6f8c54b9d3c54ead3cca4fcd5c8ec3e400410e396ee7888ada814ea664fd1af-userdata-shm.mount: Deactivated successfully.
Jan 31 08:17:20 compute-2 systemd[1]: var-lib-containers-storage-overlay-4201fc73ce78c32285e900c62ef77e0c358c04727fe4dc897a702b721eceb974-merged.mount: Deactivated successfully.
Jan 31 08:17:20 compute-2 podman[289805]: 2026-01-31 08:17:20.502328216 +0000 UTC m=+0.271725939 container cleanup f6f8c54b9d3c54ead3cca4fcd5c8ec3e400410e396ee7888ada814ea664fd1af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 08:17:20 compute-2 systemd[1]: libpod-conmon-f6f8c54b9d3c54ead3cca4fcd5c8ec3e400410e396ee7888ada814ea664fd1af.scope: Deactivated successfully.
Jan 31 08:17:20 compute-2 podman[289835]: 2026-01-31 08:17:20.63357028 +0000 UTC m=+0.112628611 container remove f6f8c54b9d3c54ead3cca4fcd5c8ec3e400410e396ee7888ada814ea664fd1af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 08:17:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:20.637 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3aefef05-1cf0-4427-9f4b-931a087389e0]: (4, ('Sat Jan 31 08:17:20 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97 (f6f8c54b9d3c54ead3cca4fcd5c8ec3e400410e396ee7888ada814ea664fd1af)\nf6f8c54b9d3c54ead3cca4fcd5c8ec3e400410e396ee7888ada814ea664fd1af\nSat Jan 31 08:17:20 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97 (f6f8c54b9d3c54ead3cca4fcd5c8ec3e400410e396ee7888ada814ea664fd1af)\nf6f8c54b9d3c54ead3cca4fcd5c8ec3e400410e396ee7888ada814ea664fd1af\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:20.639 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[89eb5943-bcb7-4bfd-b87a-7705c686ad84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:20.640 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b47e290-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:20 compute-2 nova_compute[226829]: 2026-01-31 08:17:20.642 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:20 compute-2 kernel: tap2b47e290-90: left promiscuous mode
Jan 31 08:17:20 compute-2 nova_compute[226829]: 2026-01-31 08:17:20.657 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:20.659 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[580b051c-8c2e-4731-9dbe-4434eb17e44c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:20.675 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[bacf8143-1546-41db-9565-7827bc1379a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:20.678 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[464967db-440e-454f-8a2d-b00594ba7c03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:20.693 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[94b47681-3ab3-44ed-83ee-9f777b774ab6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763761, 'reachable_time': 25017, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289856, 'error': None, 'target': 'ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:20 compute-2 systemd[1]: run-netns-ovnmeta\x2d2b47e290\x2d9853\x2d478f\x2d86cb\x2dc8ea73119a97.mount: Deactivated successfully.
Jan 31 08:17:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:20.700 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:17:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:20.700 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[33823b75-4199-40db-9ba3-a709e59f320e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:20.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:20 compute-2 nova_compute[226829]: 2026-01-31 08:17:20.944 226833 INFO nova.virt.libvirt.driver [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Beginning cold snapshot process
Jan 31 08:17:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:21.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:21 compute-2 nova_compute[226829]: 2026-01-31 08:17:21.091 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:21 compute-2 nova_compute[226829]: 2026-01-31 08:17:21.324 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:21 compute-2 nova_compute[226829]: 2026-01-31 08:17:21.331 226833 DEBUG nova.virt.libvirt.imagebackend [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 31 08:17:21 compute-2 nova_compute[226829]: 2026-01-31 08:17:21.705 226833 DEBUG nova.storage.rbd_utils [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] creating snapshot(b3fedbc627fa46f78a3ee352ad3c5c6c) on rbd image(2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 08:17:21 compute-2 nova_compute[226829]: 2026-01-31 08:17:21.821 226833 DEBUG oslo_concurrency.lockutils [None req-6889146f-4eeb-49a6-abe1-9a0491f2b0c7 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Acquiring lock "a0eef779-24bd-4096-a30b-859b597c1e18" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:21 compute-2 nova_compute[226829]: 2026-01-31 08:17:21.821 226833 DEBUG oslo_concurrency.lockutils [None req-6889146f-4eeb-49a6-abe1-9a0491f2b0c7 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Lock "a0eef779-24bd-4096-a30b-859b597c1e18" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:21 compute-2 nova_compute[226829]: 2026-01-31 08:17:21.822 226833 DEBUG oslo_concurrency.lockutils [None req-6889146f-4eeb-49a6-abe1-9a0491f2b0c7 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Acquiring lock "a0eef779-24bd-4096-a30b-859b597c1e18-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:21 compute-2 nova_compute[226829]: 2026-01-31 08:17:21.822 226833 DEBUG oslo_concurrency.lockutils [None req-6889146f-4eeb-49a6-abe1-9a0491f2b0c7 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Lock "a0eef779-24bd-4096-a30b-859b597c1e18-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:21 compute-2 nova_compute[226829]: 2026-01-31 08:17:21.822 226833 DEBUG oslo_concurrency.lockutils [None req-6889146f-4eeb-49a6-abe1-9a0491f2b0c7 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Lock "a0eef779-24bd-4096-a30b-859b597c1e18-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:21 compute-2 nova_compute[226829]: 2026-01-31 08:17:21.824 226833 INFO nova.compute.manager [None req-6889146f-4eeb-49a6-abe1-9a0491f2b0c7 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Terminating instance
Jan 31 08:17:21 compute-2 nova_compute[226829]: 2026-01-31 08:17:21.825 226833 DEBUG nova.compute.manager [None req-6889146f-4eeb-49a6-abe1-9a0491f2b0c7 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:17:21 compute-2 kernel: tapb9f7bcac-a2 (unregistering): left promiscuous mode
Jan 31 08:17:21 compute-2 NetworkManager[48999]: <info>  [1769847441.8752] device (tapb9f7bcac-a2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:17:21 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e314 e314: 3 total, 3 up, 3 in
Jan 31 08:17:21 compute-2 ovn_controller[133834]: 2026-01-31T08:17:21Z|00589|binding|INFO|Releasing lport b9f7bcac-a230-466f-96c2-279443fc6e7a from this chassis (sb_readonly=0)
Jan 31 08:17:21 compute-2 ovn_controller[133834]: 2026-01-31T08:17:21Z|00590|binding|INFO|Setting lport b9f7bcac-a230-466f-96c2-279443fc6e7a down in Southbound
Jan 31 08:17:21 compute-2 nova_compute[226829]: 2026-01-31 08:17:21.881 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:21 compute-2 ovn_controller[133834]: 2026-01-31T08:17:21Z|00591|binding|INFO|Removing iface tapb9f7bcac-a2 ovn-installed in OVS
Jan 31 08:17:21 compute-2 nova_compute[226829]: 2026-01-31 08:17:21.885 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:21 compute-2 nova_compute[226829]: 2026-01-31 08:17:21.895 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:21.902 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:02:48 10.100.0.5'], port_security=['fa:16:3e:82:02:48 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a0eef779-24bd-4096-a30b-859b597c1e18', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55dd1ae9-f1f5-44fe-b83d-9159a673a6c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '120d0fd1a2da48e3b76cc3c39ee91084', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e953946e-7029-4728-8501-25e409251629', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f14f89f7-8d0f-4bf9-b53b-a4f88664cebd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=b9f7bcac-a230-466f-96c2-279443fc6e7a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:17:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:21.905 143841 INFO neutron.agent.ovn.metadata.agent [-] Port b9f7bcac-a230-466f-96c2-279443fc6e7a in datapath 55dd1ae9-f1f5-44fe-b83d-9159a673a6c6 unbound from our chassis
Jan 31 08:17:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:21.909 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55dd1ae9-f1f5-44fe-b83d-9159a673a6c6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:17:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:21.911 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[93634252-8231-4b94-a78d-2ab835bad4f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:21.912 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-55dd1ae9-f1f5-44fe-b83d-9159a673a6c6 namespace which is not needed anymore
Jan 31 08:17:21 compute-2 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Jan 31 08:17:21 compute-2 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d0000008a.scope: Consumed 7.862s CPU time.
Jan 31 08:17:21 compute-2 systemd-machined[195142]: Machine qemu-63-instance-0000008a terminated.
Jan 31 08:17:21 compute-2 nova_compute[226829]: 2026-01-31 08:17:21.987 226833 DEBUG nova.storage.rbd_utils [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] cloning vms/2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk@b3fedbc627fa46f78a3ee352ad3c5c6c to images/9d458b54-ba05-4df1-8123-8bec7fac57ca clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 08:17:22 compute-2 neutron-haproxy-ovnmeta-55dd1ae9-f1f5-44fe-b83d-9159a673a6c6[289155]: [NOTICE]   (289159) : haproxy version is 2.8.14-c23fe91
Jan 31 08:17:22 compute-2 neutron-haproxy-ovnmeta-55dd1ae9-f1f5-44fe-b83d-9159a673a6c6[289155]: [NOTICE]   (289159) : path to executable is /usr/sbin/haproxy
Jan 31 08:17:22 compute-2 neutron-haproxy-ovnmeta-55dd1ae9-f1f5-44fe-b83d-9159a673a6c6[289155]: [WARNING]  (289159) : Exiting Master process...
Jan 31 08:17:22 compute-2 neutron-haproxy-ovnmeta-55dd1ae9-f1f5-44fe-b83d-9159a673a6c6[289155]: [WARNING]  (289159) : Exiting Master process...
Jan 31 08:17:22 compute-2 neutron-haproxy-ovnmeta-55dd1ae9-f1f5-44fe-b83d-9159a673a6c6[289155]: [ALERT]    (289159) : Current worker (289161) exited with code 143 (Terminated)
Jan 31 08:17:22 compute-2 neutron-haproxy-ovnmeta-55dd1ae9-f1f5-44fe-b83d-9159a673a6c6[289155]: [WARNING]  (289159) : All workers exited. Exiting... (0)
Jan 31 08:17:22 compute-2 systemd[1]: libpod-0a14be7077661746a29781678f1ceb50b8a13f3f7c5da0d35053a27f99c0981c.scope: Deactivated successfully.
Jan 31 08:17:22 compute-2 podman[289933]: 2026-01-31 08:17:22.017825264 +0000 UTC m=+0.037597059 container died 0a14be7077661746a29781678f1ceb50b8a13f3f7c5da0d35053a27f99c0981c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55dd1ae9-f1f5-44fe-b83d-9159a673a6c6, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:17:22 compute-2 NetworkManager[48999]: <info>  [1769847442.0395] manager: (tapb9f7bcac-a2): new Tun device (/org/freedesktop/NetworkManager/Devices/287)
Jan 31 08:17:22 compute-2 nova_compute[226829]: 2026-01-31 08:17:22.058 226833 INFO nova.virt.libvirt.driver [-] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Instance destroyed successfully.
Jan 31 08:17:22 compute-2 nova_compute[226829]: 2026-01-31 08:17:22.059 226833 DEBUG nova.objects.instance [None req-6889146f-4eeb-49a6-abe1-9a0491f2b0c7 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Lazy-loading 'resources' on Instance uuid a0eef779-24bd-4096-a30b-859b597c1e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:22 compute-2 nova_compute[226829]: 2026-01-31 08:17:22.090 226833 DEBUG nova.virt.libvirt.vif [None req-6889146f-4eeb-49a6-abe1-9a0491f2b0c7 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:16:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-716669371',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-716669371',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-716669371',id=138,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:17:15Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='120d0fd1a2da48e3b76cc3c39ee91084',ramdisk_id='',reservation_id='r-2lnqwo5x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',
image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-913751323',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-913751323-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:17:15Z,user_data=None,user_id='9ca99c4062164ef3b438a9f924677d01',uuid=a0eef779-24bd-4096-a30b-859b597c1e18,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b9f7bcac-a230-466f-96c2-279443fc6e7a", "address": "fa:16:3e:82:02:48", "network": {"id": "55dd1ae9-f1f5-44fe-b83d-9159a673a6c6", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1906953653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "120d0fd1a2da48e3b76cc3c39ee91084", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f7bcac-a2", "ovs_interfaceid": "b9f7bcac-a230-466f-96c2-279443fc6e7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:17:22 compute-2 nova_compute[226829]: 2026-01-31 08:17:22.091 226833 DEBUG nova.network.os_vif_util [None req-6889146f-4eeb-49a6-abe1-9a0491f2b0c7 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Converting VIF {"id": "b9f7bcac-a230-466f-96c2-279443fc6e7a", "address": "fa:16:3e:82:02:48", "network": {"id": "55dd1ae9-f1f5-44fe-b83d-9159a673a6c6", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1906953653-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "120d0fd1a2da48e3b76cc3c39ee91084", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f7bcac-a2", "ovs_interfaceid": "b9f7bcac-a230-466f-96c2-279443fc6e7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:17:22 compute-2 nova_compute[226829]: 2026-01-31 08:17:22.092 226833 DEBUG nova.network.os_vif_util [None req-6889146f-4eeb-49a6-abe1-9a0491f2b0c7 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:02:48,bridge_name='br-int',has_traffic_filtering=True,id=b9f7bcac-a230-466f-96c2-279443fc6e7a,network=Network(55dd1ae9-f1f5-44fe-b83d-9159a673a6c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9f7bcac-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:17:22 compute-2 nova_compute[226829]: 2026-01-31 08:17:22.093 226833 DEBUG os_vif [None req-6889146f-4eeb-49a6-abe1-9a0491f2b0c7 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:02:48,bridge_name='br-int',has_traffic_filtering=True,id=b9f7bcac-a230-466f-96c2-279443fc6e7a,network=Network(55dd1ae9-f1f5-44fe-b83d-9159a673a6c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9f7bcac-a2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:17:22 compute-2 nova_compute[226829]: 2026-01-31 08:17:22.095 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:22 compute-2 nova_compute[226829]: 2026-01-31 08:17:22.096 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9f7bcac-a2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:22 compute-2 nova_compute[226829]: 2026-01-31 08:17:22.097 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:22 compute-2 nova_compute[226829]: 2026-01-31 08:17:22.100 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:17:22 compute-2 nova_compute[226829]: 2026-01-31 08:17:22.101 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:22 compute-2 nova_compute[226829]: 2026-01-31 08:17:22.113 226833 INFO os_vif [None req-6889146f-4eeb-49a6-abe1-9a0491f2b0c7 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:02:48,bridge_name='br-int',has_traffic_filtering=True,id=b9f7bcac-a230-466f-96c2-279443fc6e7a,network=Network(55dd1ae9-f1f5-44fe-b83d-9159a673a6c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9f7bcac-a2')
Jan 31 08:17:22 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0a14be7077661746a29781678f1ceb50b8a13f3f7c5da0d35053a27f99c0981c-userdata-shm.mount: Deactivated successfully.
Jan 31 08:17:22 compute-2 systemd[1]: var-lib-containers-storage-overlay-922e403436e8f0c7d03699904b792a1160c88abc943e03c84eaf8e3d3e0d6c0e-merged.mount: Deactivated successfully.
Jan 31 08:17:22 compute-2 podman[289933]: 2026-01-31 08:17:22.16652839 +0000 UTC m=+0.186300175 container cleanup 0a14be7077661746a29781678f1ceb50b8a13f3f7c5da0d35053a27f99c0981c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55dd1ae9-f1f5-44fe-b83d-9159a673a6c6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:17:22 compute-2 systemd[1]: libpod-conmon-0a14be7077661746a29781678f1ceb50b8a13f3f7c5da0d35053a27f99c0981c.scope: Deactivated successfully.
Jan 31 08:17:22 compute-2 nova_compute[226829]: 2026-01-31 08:17:22.270 226833 DEBUG nova.storage.rbd_utils [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] flattening images/9d458b54-ba05-4df1-8123-8bec7fac57ca flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 31 08:17:22 compute-2 podman[290032]: 2026-01-31 08:17:22.275367078 +0000 UTC m=+0.088954700 container remove 0a14be7077661746a29781678f1ceb50b8a13f3f7c5da0d35053a27f99c0981c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55dd1ae9-f1f5-44fe-b83d-9159a673a6c6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 08:17:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:22.280 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[83bc3ae1-695b-4fbb-83c5-f60c94a06cfa]: (4, ('Sat Jan 31 08:17:21 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-55dd1ae9-f1f5-44fe-b83d-9159a673a6c6 (0a14be7077661746a29781678f1ceb50b8a13f3f7c5da0d35053a27f99c0981c)\n0a14be7077661746a29781678f1ceb50b8a13f3f7c5da0d35053a27f99c0981c\nSat Jan 31 08:17:22 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-55dd1ae9-f1f5-44fe-b83d-9159a673a6c6 (0a14be7077661746a29781678f1ceb50b8a13f3f7c5da0d35053a27f99c0981c)\n0a14be7077661746a29781678f1ceb50b8a13f3f7c5da0d35053a27f99c0981c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:22.282 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c42094c2-f41f-4b13-b563-0854f305b75a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:22.284 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55dd1ae9-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:22 compute-2 kernel: tap55dd1ae9-f0: left promiscuous mode
Jan 31 08:17:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:22.300 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[204d8d68-310c-4fab-bf03-bf793b130342]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:22.314 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c5d92cbd-fd91-41fa-b8f1-face6b9e216a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:22.316 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ac8cb071-990c-4d13-a546-877e7b4152c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:22.337 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6cfb8000-4e6e-4baf-ac26-99702320c50a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 768581, 'reachable_time': 23360, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290067, 'error': None, 'target': 'ovnmeta-55dd1ae9-f1f5-44fe-b83d-9159a673a6c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:22 compute-2 systemd[1]: run-netns-ovnmeta\x2d55dd1ae9\x2df1f5\x2d44fe\x2db83d\x2d9159a673a6c6.mount: Deactivated successfully.
Jan 31 08:17:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:22.340 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-55dd1ae9-f1f5-44fe-b83d-9159a673a6c6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:17:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:22.340 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[a246a4ba-2b53-4862-8991-71e23ec2b85b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:22 compute-2 nova_compute[226829]: 2026-01-31 08:17:22.354 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:22 compute-2 nova_compute[226829]: 2026-01-31 08:17:22.356 226833 DEBUG nova.compute.manager [req-ed24e923-0463-4ee3-9f78-bc37abac70dd req-4201fb65-4f35-45b8-8e2e-cdde5f03e222 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Received event network-vif-unplugged-41d7c79a-73d5-466b-ba68-192c5a2c01b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:17:22 compute-2 nova_compute[226829]: 2026-01-31 08:17:22.357 226833 DEBUG oslo_concurrency.lockutils [req-ed24e923-0463-4ee3-9f78-bc37abac70dd req-4201fb65-4f35-45b8-8e2e-cdde5f03e222 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:22 compute-2 nova_compute[226829]: 2026-01-31 08:17:22.357 226833 DEBUG oslo_concurrency.lockutils [req-ed24e923-0463-4ee3-9f78-bc37abac70dd req-4201fb65-4f35-45b8-8e2e-cdde5f03e222 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:22 compute-2 nova_compute[226829]: 2026-01-31 08:17:22.357 226833 DEBUG oslo_concurrency.lockutils [req-ed24e923-0463-4ee3-9f78-bc37abac70dd req-4201fb65-4f35-45b8-8e2e-cdde5f03e222 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:22 compute-2 nova_compute[226829]: 2026-01-31 08:17:22.357 226833 DEBUG nova.compute.manager [req-ed24e923-0463-4ee3-9f78-bc37abac70dd req-4201fb65-4f35-45b8-8e2e-cdde5f03e222 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] No waiting events found dispatching network-vif-unplugged-41d7c79a-73d5-466b-ba68-192c5a2c01b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:17:22 compute-2 nova_compute[226829]: 2026-01-31 08:17:22.358 226833 WARNING nova.compute.manager [req-ed24e923-0463-4ee3-9f78-bc37abac70dd req-4201fb65-4f35-45b8-8e2e-cdde5f03e222 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Received unexpected event network-vif-unplugged-41d7c79a-73d5-466b-ba68-192c5a2c01b3 for instance with vm_state active and task_state shelving_image_uploading.
Jan 31 08:17:22 compute-2 nova_compute[226829]: 2026-01-31 08:17:22.358 226833 DEBUG nova.compute.manager [req-ed24e923-0463-4ee3-9f78-bc37abac70dd req-4201fb65-4f35-45b8-8e2e-cdde5f03e222 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Received event network-vif-plugged-41d7c79a-73d5-466b-ba68-192c5a2c01b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:17:22 compute-2 nova_compute[226829]: 2026-01-31 08:17:22.358 226833 DEBUG oslo_concurrency.lockutils [req-ed24e923-0463-4ee3-9f78-bc37abac70dd req-4201fb65-4f35-45b8-8e2e-cdde5f03e222 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:22 compute-2 nova_compute[226829]: 2026-01-31 08:17:22.358 226833 DEBUG oslo_concurrency.lockutils [req-ed24e923-0463-4ee3-9f78-bc37abac70dd req-4201fb65-4f35-45b8-8e2e-cdde5f03e222 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:22 compute-2 nova_compute[226829]: 2026-01-31 08:17:22.359 226833 DEBUG oslo_concurrency.lockutils [req-ed24e923-0463-4ee3-9f78-bc37abac70dd req-4201fb65-4f35-45b8-8e2e-cdde5f03e222 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:22 compute-2 nova_compute[226829]: 2026-01-31 08:17:22.359 226833 DEBUG nova.compute.manager [req-ed24e923-0463-4ee3-9f78-bc37abac70dd req-4201fb65-4f35-45b8-8e2e-cdde5f03e222 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] No waiting events found dispatching network-vif-plugged-41d7c79a-73d5-466b-ba68-192c5a2c01b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:17:22 compute-2 nova_compute[226829]: 2026-01-31 08:17:22.359 226833 WARNING nova.compute.manager [req-ed24e923-0463-4ee3-9f78-bc37abac70dd req-4201fb65-4f35-45b8-8e2e-cdde5f03e222 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Received unexpected event network-vif-plugged-41d7c79a-73d5-466b-ba68-192c5a2c01b3 for instance with vm_state active and task_state shelving_image_uploading.
Jan 31 08:17:22 compute-2 ceph-mon[77282]: pgmap v2466: 305 pgs: 305 active+clean; 755 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 56 KiB/s wr, 150 op/s
Jan 31 08:17:22 compute-2 ceph-mon[77282]: osdmap e314: 3 total, 3 up, 3 in
Jan 31 08:17:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:17:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:22.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:17:22 compute-2 nova_compute[226829]: 2026-01-31 08:17:22.978 226833 DEBUG nova.storage.rbd_utils [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] removing snapshot(b3fedbc627fa46f78a3ee352ad3c5c6c) on rbd image(2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 08:17:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:17:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:23.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:17:23 compute-2 nova_compute[226829]: 2026-01-31 08:17:23.501 226833 INFO nova.virt.libvirt.driver [None req-6889146f-4eeb-49a6-abe1-9a0491f2b0c7 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Deleting instance files /var/lib/nova/instances/a0eef779-24bd-4096-a30b-859b597c1e18_del
Jan 31 08:17:23 compute-2 nova_compute[226829]: 2026-01-31 08:17:23.502 226833 INFO nova.virt.libvirt.driver [None req-6889146f-4eeb-49a6-abe1-9a0491f2b0c7 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Deletion of /var/lib/nova/instances/a0eef779-24bd-4096-a30b-859b597c1e18_del complete
Jan 31 08:17:23 compute-2 nova_compute[226829]: 2026-01-31 08:17:23.661 226833 INFO nova.compute.manager [None req-6889146f-4eeb-49a6-abe1-9a0491f2b0c7 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Took 1.84 seconds to destroy the instance on the hypervisor.
Jan 31 08:17:23 compute-2 nova_compute[226829]: 2026-01-31 08:17:23.662 226833 DEBUG oslo.service.loopingcall [None req-6889146f-4eeb-49a6-abe1-9a0491f2b0c7 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:17:23 compute-2 nova_compute[226829]: 2026-01-31 08:17:23.663 226833 DEBUG nova.compute.manager [-] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:17:23 compute-2 nova_compute[226829]: 2026-01-31 08:17:23.663 226833 DEBUG nova.network.neutron [-] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:17:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e315 e315: 3 total, 3 up, 3 in
Jan 31 08:17:24 compute-2 nova_compute[226829]: 2026-01-31 08:17:24.026 226833 DEBUG nova.storage.rbd_utils [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] creating snapshot(snap) on rbd image(9d458b54-ba05-4df1-8123-8bec7fac57ca) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 08:17:24 compute-2 nova_compute[226829]: 2026-01-31 08:17:24.605 226833 DEBUG nova.compute.manager [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Received event network-vif-unplugged-b9f7bcac-a230-466f-96c2-279443fc6e7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:17:24 compute-2 nova_compute[226829]: 2026-01-31 08:17:24.605 226833 DEBUG oslo_concurrency.lockutils [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a0eef779-24bd-4096-a30b-859b597c1e18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:24 compute-2 nova_compute[226829]: 2026-01-31 08:17:24.606 226833 DEBUG oslo_concurrency.lockutils [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a0eef779-24bd-4096-a30b-859b597c1e18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:24 compute-2 nova_compute[226829]: 2026-01-31 08:17:24.606 226833 DEBUG oslo_concurrency.lockutils [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a0eef779-24bd-4096-a30b-859b597c1e18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:24 compute-2 nova_compute[226829]: 2026-01-31 08:17:24.606 226833 DEBUG nova.compute.manager [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] No waiting events found dispatching network-vif-unplugged-b9f7bcac-a230-466f-96c2-279443fc6e7a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:17:24 compute-2 nova_compute[226829]: 2026-01-31 08:17:24.606 226833 DEBUG nova.compute.manager [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Received event network-vif-unplugged-b9f7bcac-a230-466f-96c2-279443fc6e7a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:17:24 compute-2 nova_compute[226829]: 2026-01-31 08:17:24.606 226833 DEBUG nova.compute.manager [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Received event network-vif-plugged-b9f7bcac-a230-466f-96c2-279443fc6e7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:17:24 compute-2 nova_compute[226829]: 2026-01-31 08:17:24.607 226833 DEBUG oslo_concurrency.lockutils [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a0eef779-24bd-4096-a30b-859b597c1e18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:24 compute-2 nova_compute[226829]: 2026-01-31 08:17:24.607 226833 DEBUG oslo_concurrency.lockutils [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a0eef779-24bd-4096-a30b-859b597c1e18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:24 compute-2 nova_compute[226829]: 2026-01-31 08:17:24.607 226833 DEBUG oslo_concurrency.lockutils [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a0eef779-24bd-4096-a30b-859b597c1e18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:24 compute-2 nova_compute[226829]: 2026-01-31 08:17:24.607 226833 DEBUG nova.compute.manager [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] No waiting events found dispatching network-vif-plugged-b9f7bcac-a230-466f-96c2-279443fc6e7a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:17:24 compute-2 nova_compute[226829]: 2026-01-31 08:17:24.607 226833 WARNING nova.compute.manager [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Received unexpected event network-vif-plugged-b9f7bcac-a230-466f-96c2-279443fc6e7a for instance with vm_state active and task_state deleting.
Jan 31 08:17:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:24.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:24 compute-2 ceph-mon[77282]: pgmap v2468: 305 pgs: 305 active+clean; 770 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 1.3 MiB/s wr, 251 op/s
Jan 31 08:17:24 compute-2 ceph-mon[77282]: osdmap e315: 3 total, 3 up, 3 in
Jan 31 08:17:24 compute-2 nova_compute[226829]: 2026-01-31 08:17:24.936 226833 DEBUG nova.network.neutron [-] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:17:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e316 e316: 3 total, 3 up, 3 in
Jan 31 08:17:25 compute-2 nova_compute[226829]: 2026-01-31 08:17:25.015 226833 INFO nova.compute.manager [-] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Took 1.35 seconds to deallocate network for instance.
Jan 31 08:17:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:25.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:25 compute-2 nova_compute[226829]: 2026-01-31 08:17:25.119 226833 DEBUG oslo_concurrency.lockutils [None req-6889146f-4eeb-49a6-abe1-9a0491f2b0c7 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:25 compute-2 nova_compute[226829]: 2026-01-31 08:17:25.119 226833 DEBUG oslo_concurrency.lockutils [None req-6889146f-4eeb-49a6-abe1-9a0491f2b0c7 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:25 compute-2 nova_compute[226829]: 2026-01-31 08:17:25.365 226833 DEBUG oslo_concurrency.processutils [None req-6889146f-4eeb-49a6-abe1-9a0491f2b0c7 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:17:25 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1733097171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:17:25 compute-2 nova_compute[226829]: 2026-01-31 08:17:25.799 226833 DEBUG oslo_concurrency.processutils [None req-6889146f-4eeb-49a6-abe1-9a0491f2b0c7 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:25 compute-2 nova_compute[226829]: 2026-01-31 08:17:25.807 226833 DEBUG nova.compute.provider_tree [None req-6889146f-4eeb-49a6-abe1-9a0491f2b0c7 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:17:25 compute-2 nova_compute[226829]: 2026-01-31 08:17:25.870 226833 DEBUG nova.scheduler.client.report [None req-6889146f-4eeb-49a6-abe1-9a0491f2b0c7 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:17:25 compute-2 nova_compute[226829]: 2026-01-31 08:17:25.933 226833 DEBUG oslo_concurrency.lockutils [None req-6889146f-4eeb-49a6-abe1-9a0491f2b0c7 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:25 compute-2 nova_compute[226829]: 2026-01-31 08:17:25.995 226833 INFO nova.scheduler.client.report [None req-6889146f-4eeb-49a6-abe1-9a0491f2b0c7 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Deleted allocations for instance a0eef779-24bd-4096-a30b-859b597c1e18
Jan 31 08:17:26 compute-2 ceph-mon[77282]: osdmap e316: 3 total, 3 up, 3 in
Jan 31 08:17:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1733097171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:17:26 compute-2 ceph-mon[77282]: pgmap v2471: 305 pgs: 305 active+clean; 794 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 18 MiB/s rd, 6.8 MiB/s wr, 596 op/s
Jan 31 08:17:26 compute-2 nova_compute[226829]: 2026-01-31 08:17:26.093 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:26 compute-2 nova_compute[226829]: 2026-01-31 08:17:26.241 226833 DEBUG oslo_concurrency.lockutils [None req-6889146f-4eeb-49a6-abe1-9a0491f2b0c7 9ca99c4062164ef3b438a9f924677d01 120d0fd1a2da48e3b76cc3c39ee91084 - - default default] Lock "a0eef779-24bd-4096-a30b-859b597c1e18" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:26 compute-2 nova_compute[226829]: 2026-01-31 08:17:26.520 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:26 compute-2 sudo[290133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:17:26 compute-2 sudo[290133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:26 compute-2 sudo[290133]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:26 compute-2 sudo[290158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:17:26 compute-2 sudo[290158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:26 compute-2 sudo[290158]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:26 compute-2 nova_compute[226829]: 2026-01-31 08:17:26.775 226833 DEBUG nova.compute.manager [req-0e58b414-d747-438d-8fec-6ac46d6952c7 req-202ffa4e-37c3-4600-a3df-f1b0c406eeee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Received event network-vif-deleted-b9f7bcac-a230-466f-96c2-279443fc6e7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:17:26 compute-2 sudo[290183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:17:26 compute-2 sudo[290183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:26 compute-2 sudo[290183]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:26 compute-2 sudo[290208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:17:26 compute-2 sudo[290208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:26 compute-2 sudo[290208]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:17:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:26.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:17:27 compute-2 nova_compute[226829]: 2026-01-31 08:17:27.098 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:27.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:27 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:17:27 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:17:28 compute-2 ceph-mon[77282]: pgmap v2472: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 16 MiB/s rd, 7.8 MiB/s wr, 565 op/s
Jan 31 08:17:28 compute-2 nova_compute[226829]: 2026-01-31 08:17:28.628 226833 INFO nova.virt.libvirt.driver [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Snapshot image upload complete
Jan 31 08:17:28 compute-2 nova_compute[226829]: 2026-01-31 08:17:28.629 226833 DEBUG nova.compute.manager [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:17:28 compute-2 nova_compute[226829]: 2026-01-31 08:17:28.706 226833 INFO nova.compute.manager [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Shelve offloading
Jan 31 08:17:28 compute-2 nova_compute[226829]: 2026-01-31 08:17:28.717 226833 INFO nova.virt.libvirt.driver [-] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Instance destroyed successfully.
Jan 31 08:17:28 compute-2 nova_compute[226829]: 2026-01-31 08:17:28.717 226833 DEBUG nova.compute.manager [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:17:28 compute-2 nova_compute[226829]: 2026-01-31 08:17:28.720 226833 DEBUG oslo_concurrency.lockutils [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "refresh_cache-2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:17:28 compute-2 nova_compute[226829]: 2026-01-31 08:17:28.720 226833 DEBUG oslo_concurrency.lockutils [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquired lock "refresh_cache-2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:17:28 compute-2 nova_compute[226829]: 2026-01-31 08:17:28.721 226833 DEBUG nova.network.neutron [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:17:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:17:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 49K writes, 204K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.05 MB/s
                                           Cumulative WAL: 49K writes, 17K syncs, 2.85 writes per sync, written: 0.20 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 9181 writes, 37K keys, 9181 commit groups, 1.0 writes per commit group, ingest: 37.48 MB, 0.06 MB/s
                                           Interval WAL: 9181 writes, 3593 syncs, 2.56 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:17:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:28.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:17:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:29.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:17:29 compute-2 nova_compute[226829]: 2026-01-31 08:17:29.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:30 compute-2 nova_compute[226829]: 2026-01-31 08:17:30.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:30 compute-2 nova_compute[226829]: 2026-01-31 08:17:30.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:17:30 compute-2 nova_compute[226829]: 2026-01-31 08:17:30.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:17:30 compute-2 nova_compute[226829]: 2026-01-31 08:17:30.527 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:17:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:30.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:30 compute-2 ceph-mon[77282]: pgmap v2473: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 5.9 MiB/s wr, 425 op/s
Jan 31 08:17:31 compute-2 nova_compute[226829]: 2026-01-31 08:17:31.097 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:31.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e317 e317: 3 total, 3 up, 3 in
Jan 31 08:17:32 compute-2 nova_compute[226829]: 2026-01-31 08:17:32.100 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:32 compute-2 nova_compute[226829]: 2026-01-31 08:17:32.819 226833 DEBUG nova.network.neutron [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Updating instance_info_cache with network_info: [{"id": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "address": "fa:16:3e:b7:40:e9", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d7c79a-73", "ovs_interfaceid": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:17:32 compute-2 nova_compute[226829]: 2026-01-31 08:17:32.860 226833 DEBUG oslo_concurrency.lockutils [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Releasing lock "refresh_cache-2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:17:32 compute-2 nova_compute[226829]: 2026-01-31 08:17:32.862 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:17:32 compute-2 nova_compute[226829]: 2026-01-31 08:17:32.862 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:17:32 compute-2 nova_compute[226829]: 2026-01-31 08:17:32.863 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:32 compute-2 ceph-mon[77282]: pgmap v2474: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.8 MiB/s rd, 4.3 MiB/s wr, 373 op/s
Jan 31 08:17:32 compute-2 ceph-mon[77282]: osdmap e317: 3 total, 3 up, 3 in
Jan 31 08:17:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:32.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:33.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:34 compute-2 podman[290237]: 2026-01-31 08:17:34.212618714 +0000 UTC m=+0.094640884 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, 
io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:17:34 compute-2 nova_compute[226829]: 2026-01-31 08:17:34.330 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847439.328773, 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:17:34 compute-2 nova_compute[226829]: 2026-01-31 08:17:34.330 226833 INFO nova.compute.manager [-] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] VM Stopped (Lifecycle Event)
Jan 31 08:17:34 compute-2 nova_compute[226829]: 2026-01-31 08:17:34.561 226833 DEBUG nova.compute.manager [None req-9f722b79-2f16-4d5d-b5e6-c9c98a3fa8d4 - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:17:34 compute-2 nova_compute[226829]: 2026-01-31 08:17:34.565 226833 DEBUG nova.compute.manager [None req-9f722b79-2f16-4d5d-b5e6-c9c98a3fa8d4 - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:17:34 compute-2 nova_compute[226829]: 2026-01-31 08:17:34.617 226833 INFO nova.compute.manager [None req-9f722b79-2f16-4d5d-b5e6-c9c98a3fa8d4 - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] During sync_power_state the instance has a pending task (shelving_offloading). Skip.
Jan 31 08:17:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:17:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:34.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:17:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:35.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:35 compute-2 ceph-mon[77282]: pgmap v2476: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 1.6 MiB/s wr, 210 op/s
Jan 31 08:17:35 compute-2 nova_compute[226829]: 2026-01-31 08:17:35.948 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Updating instance_info_cache with network_info: [{"id": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "address": "fa:16:3e:b7:40:e9", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d7c79a-73", "ovs_interfaceid": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:17:36 compute-2 nova_compute[226829]: 2026-01-31 08:17:36.005 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:17:36 compute-2 nova_compute[226829]: 2026-01-31 08:17:36.006 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:17:36 compute-2 nova_compute[226829]: 2026-01-31 08:17:36.006 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:36 compute-2 nova_compute[226829]: 2026-01-31 08:17:36.007 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:36 compute-2 nova_compute[226829]: 2026-01-31 08:17:36.007 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:36 compute-2 nova_compute[226829]: 2026-01-31 08:17:36.012 226833 INFO nova.virt.libvirt.driver [-] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Instance destroyed successfully.
Jan 31 08:17:36 compute-2 nova_compute[226829]: 2026-01-31 08:17:36.012 226833 DEBUG nova.objects.instance [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lazy-loading 'resources' on Instance uuid 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:36 compute-2 ceph-mon[77282]: pgmap v2477: 305 pgs: 305 active+clean; 790 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 125 op/s
Jan 31 08:17:36 compute-2 nova_compute[226829]: 2026-01-31 08:17:36.093 226833 DEBUG nova.virt.libvirt.vif [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:16:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1656500391',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1656500391',id=137,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFicZlkmwmJx2QcB2FaCtZWA/EPieHD4tlRsDzixOV+Fehvb9d4YKWopnndvdTu7d1fOEkn5wswwcVr24I+nVYUHx/SYBenHvgz9Ve3+IdDKcppTFyb3Gp0ZC2yG6jAiiQ==',key_name='tempest-keypair-46056576',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:16:32Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3b06982960ad4453b8e542cb6330835d',ramdisk_id='',reservation_id='r-e68k7fwd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-332944999',owner_user_name='tempest-AttachVolumeShelveTestJSON-332944999-project-member',shelved_at='2026-01-31T08:17:28.629258',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='9d458b54-ba05-4df1-8123-8bec7fac57ca'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:17:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3b153d2832404e5b9250422b70ba522d',uuid=2c1aa7ad-f9c1-4e05-8261-defaa3eef40b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "address": "fa:16:3e:b7:40:e9", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d7c79a-73", "ovs_interfaceid": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:17:36 compute-2 nova_compute[226829]: 2026-01-31 08:17:36.094 226833 DEBUG nova.network.os_vif_util [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Converting VIF {"id": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "address": "fa:16:3e:b7:40:e9", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d7c79a-73", "ovs_interfaceid": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:17:36 compute-2 nova_compute[226829]: 2026-01-31 08:17:36.095 226833 DEBUG nova.network.os_vif_util [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:40:e9,bridge_name='br-int',has_traffic_filtering=True,id=41d7c79a-73d5-466b-ba68-192c5a2c01b3,network=Network(2b47e290-9853-478f-86cb-c8ea73119a97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41d7c79a-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:17:36 compute-2 nova_compute[226829]: 2026-01-31 08:17:36.095 226833 DEBUG os_vif [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:40:e9,bridge_name='br-int',has_traffic_filtering=True,id=41d7c79a-73d5-466b-ba68-192c5a2c01b3,network=Network(2b47e290-9853-478f-86cb-c8ea73119a97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41d7c79a-73') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:17:36 compute-2 nova_compute[226829]: 2026-01-31 08:17:36.097 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:36 compute-2 nova_compute[226829]: 2026-01-31 08:17:36.097 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41d7c79a-73, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:36 compute-2 nova_compute[226829]: 2026-01-31 08:17:36.098 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:36 compute-2 nova_compute[226829]: 2026-01-31 08:17:36.099 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:36 compute-2 nova_compute[226829]: 2026-01-31 08:17:36.100 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:36 compute-2 nova_compute[226829]: 2026-01-31 08:17:36.103 226833 INFO os_vif [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:40:e9,bridge_name='br-int',has_traffic_filtering=True,id=41d7c79a-73d5-466b-ba68-192c5a2c01b3,network=Network(2b47e290-9853-478f-86cb-c8ea73119a97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41d7c79a-73')
Jan 31 08:17:36 compute-2 nova_compute[226829]: 2026-01-31 08:17:36.125 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:36 compute-2 nova_compute[226829]: 2026-01-31 08:17:36.198 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:36 compute-2 nova_compute[226829]: 2026-01-31 08:17:36.261 226833 DEBUG nova.compute.manager [req-02398759-4b5b-433d-903d-fcd6af1ef838 req-39fc5303-b971-40dc-8fcd-42e558ebb608 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Received event network-changed-41d7c79a-73d5-466b-ba68-192c5a2c01b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:17:36 compute-2 nova_compute[226829]: 2026-01-31 08:17:36.262 226833 DEBUG nova.compute.manager [req-02398759-4b5b-433d-903d-fcd6af1ef838 req-39fc5303-b971-40dc-8fcd-42e558ebb608 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Refreshing instance network info cache due to event network-changed-41d7c79a-73d5-466b-ba68-192c5a2c01b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:17:36 compute-2 nova_compute[226829]: 2026-01-31 08:17:36.262 226833 DEBUG oslo_concurrency.lockutils [req-02398759-4b5b-433d-903d-fcd6af1ef838 req-39fc5303-b971-40dc-8fcd-42e558ebb608 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:17:36 compute-2 nova_compute[226829]: 2026-01-31 08:17:36.262 226833 DEBUG oslo_concurrency.lockutils [req-02398759-4b5b-433d-903d-fcd6af1ef838 req-39fc5303-b971-40dc-8fcd-42e558ebb608 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:17:36 compute-2 nova_compute[226829]: 2026-01-31 08:17:36.262 226833 DEBUG nova.network.neutron [req-02398759-4b5b-433d-903d-fcd6af1ef838 req-39fc5303-b971-40dc-8fcd-42e558ebb608 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Refreshing network info cache for port 41d7c79a-73d5-466b-ba68-192c5a2c01b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:17:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:36.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:37 compute-2 nova_compute[226829]: 2026-01-31 08:17:37.009 226833 INFO nova.virt.libvirt.driver [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Deleting instance files /var/lib/nova/instances/2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_del
Jan 31 08:17:37 compute-2 nova_compute[226829]: 2026-01-31 08:17:37.010 226833 INFO nova.virt.libvirt.driver [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Deletion of /var/lib/nova/instances/2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_del complete
Jan 31 08:17:37 compute-2 nova_compute[226829]: 2026-01-31 08:17:37.056 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847442.0555294, a0eef779-24bd-4096-a30b-859b597c1e18 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:17:37 compute-2 nova_compute[226829]: 2026-01-31 08:17:37.056 226833 INFO nova.compute.manager [-] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] VM Stopped (Lifecycle Event)
Jan 31 08:17:37 compute-2 nova_compute[226829]: 2026-01-31 08:17:37.085 226833 DEBUG nova.compute.manager [None req-b8f79bc8-5629-4499-9091-059b778849af - - - - - -] [instance: a0eef779-24bd-4096-a30b-859b597c1e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:17:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:37.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:37 compute-2 nova_compute[226829]: 2026-01-31 08:17:37.177 226833 INFO nova.scheduler.client.report [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Deleted allocations for instance 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b
Jan 31 08:17:37 compute-2 nova_compute[226829]: 2026-01-31 08:17:37.244 226833 DEBUG oslo_concurrency.lockutils [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:37 compute-2 nova_compute[226829]: 2026-01-31 08:17:37.245 226833 DEBUG oslo_concurrency.lockutils [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:37 compute-2 nova_compute[226829]: 2026-01-31 08:17:37.374 226833 DEBUG oslo_concurrency.processutils [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:17:37 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/311006205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:17:37 compute-2 nova_compute[226829]: 2026-01-31 08:17:37.817 226833 DEBUG oslo_concurrency.processutils [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:37 compute-2 nova_compute[226829]: 2026-01-31 08:17:37.824 226833 DEBUG nova.compute.provider_tree [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:17:37 compute-2 nova_compute[226829]: 2026-01-31 08:17:37.854 226833 DEBUG nova.scheduler.client.report [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:17:37 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/311006205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:17:37 compute-2 nova_compute[226829]: 2026-01-31 08:17:37.908 226833 DEBUG oslo_concurrency.lockutils [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:37 compute-2 nova_compute[226829]: 2026-01-31 08:17:37.980 226833 DEBUG oslo_concurrency.lockutils [None req-6917ab93-5961-4aaf-9c7f-7507fc155ea9 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 22.451s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:38 compute-2 nova_compute[226829]: 2026-01-31 08:17:38.388 226833 DEBUG nova.network.neutron [req-02398759-4b5b-433d-903d-fcd6af1ef838 req-39fc5303-b971-40dc-8fcd-42e558ebb608 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Updated VIF entry in instance network info cache for port 41d7c79a-73d5-466b-ba68-192c5a2c01b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:17:38 compute-2 nova_compute[226829]: 2026-01-31 08:17:38.388 226833 DEBUG nova.network.neutron [req-02398759-4b5b-433d-903d-fcd6af1ef838 req-39fc5303-b971-40dc-8fcd-42e558ebb608 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Updating instance_info_cache with network_info: [{"id": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "address": "fa:16:3e:b7:40:e9", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": null, "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap41d7c79a-73", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:17:38 compute-2 nova_compute[226829]: 2026-01-31 08:17:38.466 226833 DEBUG oslo_concurrency.lockutils [req-02398759-4b5b-433d-903d-fcd6af1ef838 req-39fc5303-b971-40dc-8fcd-42e558ebb608 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:17:38 compute-2 nova_compute[226829]: 2026-01-31 08:17:38.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:38.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:38 compute-2 ceph-mon[77282]: pgmap v2478: 305 pgs: 305 active+clean; 771 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 917 KiB/s rd, 1000 KiB/s wr, 75 op/s
Jan 31 08:17:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2896216350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:17:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2024757108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:17:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:17:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:39.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:17:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/571439263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:17:40 compute-2 nova_compute[226829]: 2026-01-31 08:17:40.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:40 compute-2 nova_compute[226829]: 2026-01-31 08:17:40.513 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:40 compute-2 nova_compute[226829]: 2026-01-31 08:17:40.514 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:40 compute-2 nova_compute[226829]: 2026-01-31 08:17:40.514 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:40 compute-2 nova_compute[226829]: 2026-01-31 08:17:40.514 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:17:40 compute-2 nova_compute[226829]: 2026-01-31 08:17:40.514 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:17:40 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/889878504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:17:40 compute-2 nova_compute[226829]: 2026-01-31 08:17:40.928 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:40.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:40 compute-2 ceph-mon[77282]: pgmap v2479: 305 pgs: 305 active+clean; 763 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.9 MiB/s wr, 135 op/s
Jan 31 08:17:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3919650562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:17:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/889878504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:17:41 compute-2 nova_compute[226829]: 2026-01-31 08:17:41.135 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:41.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:41 compute-2 nova_compute[226829]: 2026-01-31 08:17:41.157 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:17:41 compute-2 nova_compute[226829]: 2026-01-31 08:17:41.158 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4112MB free_disk=20.784008026123047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:17:41 compute-2 nova_compute[226829]: 2026-01-31 08:17:41.158 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:41 compute-2 nova_compute[226829]: 2026-01-31 08:17:41.158 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:41 compute-2 nova_compute[226829]: 2026-01-31 08:17:41.323 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:17:41 compute-2 nova_compute[226829]: 2026-01-31 08:17:41.323 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:17:41 compute-2 nova_compute[226829]: 2026-01-31 08:17:41.463 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:17:41 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2629141597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:17:41 compute-2 nova_compute[226829]: 2026-01-31 08:17:41.933 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:41 compute-2 nova_compute[226829]: 2026-01-31 08:17:41.938 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:17:41 compute-2 nova_compute[226829]: 2026-01-31 08:17:41.956 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:17:41 compute-2 nova_compute[226829]: 2026-01-31 08:17:41.985 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:17:41 compute-2 nova_compute[226829]: 2026-01-31 08:17:41.985 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:42 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2629141597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:17:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:42.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:43 compute-2 ceph-mon[77282]: pgmap v2480: 305 pgs: 305 active+clean; 740 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 945 KiB/s rd, 2.5 MiB/s wr, 158 op/s
Jan 31 08:17:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:43.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:44 compute-2 ceph-mon[77282]: pgmap v2481: 305 pgs: 305 active+clean; 743 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 838 KiB/s rd, 2.1 MiB/s wr, 136 op/s
Jan 31 08:17:44 compute-2 podman[290356]: 2026-01-31 08:17:44.152898854 +0000 UTC m=+0.044472874 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 08:17:44 compute-2 nova_compute[226829]: 2026-01-31 08:17:44.456 226833 DEBUG oslo_concurrency.lockutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:44 compute-2 nova_compute[226829]: 2026-01-31 08:17:44.457 226833 DEBUG oslo_concurrency.lockutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:44 compute-2 nova_compute[226829]: 2026-01-31 08:17:44.457 226833 INFO nova.compute.manager [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Unshelving
Jan 31 08:17:44 compute-2 nova_compute[226829]: 2026-01-31 08:17:44.597 226833 INFO nova.virt.block_device [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Booting with volume 404c4de7-aa21-488f-9d20-6b6e4016f179 at /dev/vdc
Jan 31 08:17:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:44.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:45.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:45 compute-2 nova_compute[226829]: 2026-01-31 08:17:45.607 226833 DEBUG os_brick.utils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:17:45 compute-2 nova_compute[226829]: 2026-01-31 08:17:45.610 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:45 compute-2 nova_compute[226829]: 2026-01-31 08:17:45.659 236868 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:45 compute-2 nova_compute[226829]: 2026-01-31 08:17:45.660 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[d712b480-1e22-4e8f-91b3-ffec98036988]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:45 compute-2 nova_compute[226829]: 2026-01-31 08:17:45.661 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:45 compute-2 nova_compute[226829]: 2026-01-31 08:17:45.667 236868 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:45 compute-2 nova_compute[226829]: 2026-01-31 08:17:45.668 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[d9ec9a36-6f68-4277-8581-f64ccda5836f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:70a4e945afb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:45 compute-2 nova_compute[226829]: 2026-01-31 08:17:45.670 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:45 compute-2 nova_compute[226829]: 2026-01-31 08:17:45.676 236868 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:45 compute-2 nova_compute[226829]: 2026-01-31 08:17:45.676 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[723910e8-dcdb-4bc5-861d-6f60b2ff4b1a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:45 compute-2 nova_compute[226829]: 2026-01-31 08:17:45.678 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[276f0fb9-b3f4-4243-bd07-2d48bffc1d15]: (4, 'd14f084b-ec77-4fba-801f-103494d34b3a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:45 compute-2 nova_compute[226829]: 2026-01-31 08:17:45.678 226833 DEBUG oslo_concurrency.processutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:45 compute-2 nova_compute[226829]: 2026-01-31 08:17:45.701 226833 DEBUG oslo_concurrency.processutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:45 compute-2 nova_compute[226829]: 2026-01-31 08:17:45.704 226833 DEBUG os_brick.initiator.connectors.lightos [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:17:45 compute-2 nova_compute[226829]: 2026-01-31 08:17:45.705 226833 DEBUG os_brick.initiator.connectors.lightos [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:17:45 compute-2 nova_compute[226829]: 2026-01-31 08:17:45.705 226833 DEBUG os_brick.initiator.connectors.lightos [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:17:45 compute-2 nova_compute[226829]: 2026-01-31 08:17:45.705 226833 DEBUG os_brick.utils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] <== get_connector_properties: return (98ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:70a4e945afb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'd14f084b-ec77-4fba-801f-103494d34b3a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:17:45 compute-2 nova_compute[226829]: 2026-01-31 08:17:45.706 226833 DEBUG nova.virt.block_device [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Updating existing volume attachment record: 341e2c00-961d-4cab-a3b1-d0caae0c47b1 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:17:46 compute-2 nova_compute[226829]: 2026-01-31 08:17:46.137 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:46 compute-2 sudo[290384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:17:46 compute-2 sudo[290384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:46 compute-2 sudo[290384]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:46 compute-2 sudo[290409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:17:46 compute-2 sudo[290409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:46 compute-2 sudo[290409]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:46.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:46 compute-2 ceph-mon[77282]: pgmap v2482: 305 pgs: 305 active+clean; 743 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 835 KiB/s rd, 2.2 MiB/s wr, 133 op/s
Jan 31 08:17:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:17:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:47.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:17:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3134165204' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:48 compute-2 nova_compute[226829]: 2026-01-31 08:17:48.020 226833 DEBUG oslo_concurrency.lockutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:48 compute-2 nova_compute[226829]: 2026-01-31 08:17:48.020 226833 DEBUG oslo_concurrency.lockutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:48 compute-2 nova_compute[226829]: 2026-01-31 08:17:48.024 226833 DEBUG nova.objects.instance [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lazy-loading 'pci_requests' on Instance uuid 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:48 compute-2 nova_compute[226829]: 2026-01-31 08:17:48.047 226833 DEBUG nova.objects.instance [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lazy-loading 'numa_topology' on Instance uuid 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:48 compute-2 nova_compute[226829]: 2026-01-31 08:17:48.065 226833 DEBUG nova.virt.hardware [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:17:48 compute-2 nova_compute[226829]: 2026-01-31 08:17:48.066 226833 INFO nova.compute.claims [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:17:48 compute-2 nova_compute[226829]: 2026-01-31 08:17:48.268 226833 DEBUG oslo_concurrency.processutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:17:48 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1584127127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:17:48 compute-2 nova_compute[226829]: 2026-01-31 08:17:48.738 226833 DEBUG oslo_concurrency.processutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:48 compute-2 nova_compute[226829]: 2026-01-31 08:17:48.744 226833 DEBUG nova.compute.provider_tree [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:17:48 compute-2 nova_compute[226829]: 2026-01-31 08:17:48.777 226833 DEBUG nova.scheduler.client.report [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:17:48 compute-2 nova_compute[226829]: 2026-01-31 08:17:48.855 226833 DEBUG oslo_concurrency.lockutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:48.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:48 compute-2 ceph-mon[77282]: pgmap v2483: 305 pgs: 305 active+clean; 743 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 822 KiB/s rd, 1.7 MiB/s wr, 125 op/s
Jan 31 08:17:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1584127127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:17:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:49.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:49 compute-2 nova_compute[226829]: 2026-01-31 08:17:49.833 226833 INFO nova.network.neutron [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Updating port 41d7c79a-73d5-466b-ba68-192c5a2c01b3 with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 31 08:17:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:49 compute-2 nova_compute[226829]: 2026-01-31 08:17:49.986 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:49 compute-2 nova_compute[226829]: 2026-01-31 08:17:49.987 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:17:50 compute-2 ceph-mon[77282]: pgmap v2484: 305 pgs: 305 active+clean; 743 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 744 KiB/s rd, 1.4 MiB/s wr, 102 op/s
Jan 31 08:17:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:50.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:51.068 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:17:51 compute-2 nova_compute[226829]: 2026-01-31 08:17:51.069 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:51.070 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:17:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:51.071 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:51 compute-2 nova_compute[226829]: 2026-01-31 08:17:51.139 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:51.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:51 compute-2 nova_compute[226829]: 2026-01-31 08:17:51.956 226833 DEBUG oslo_concurrency.lockutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "refresh_cache-2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:17:51 compute-2 nova_compute[226829]: 2026-01-31 08:17:51.956 226833 DEBUG oslo_concurrency.lockutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquired lock "refresh_cache-2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:17:51 compute-2 nova_compute[226829]: 2026-01-31 08:17:51.956 226833 DEBUG nova.network.neutron [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:17:52 compute-2 nova_compute[226829]: 2026-01-31 08:17:52.242 226833 DEBUG nova.compute.manager [req-4d54873d-db49-407c-8f4c-6261fc438f76 req-1e32a0f5-1511-41a0-8ac0-86b2357e9257 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Received event network-changed-41d7c79a-73d5-466b-ba68-192c5a2c01b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:17:52 compute-2 nova_compute[226829]: 2026-01-31 08:17:52.242 226833 DEBUG nova.compute.manager [req-4d54873d-db49-407c-8f4c-6261fc438f76 req-1e32a0f5-1511-41a0-8ac0-86b2357e9257 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Refreshing instance network info cache due to event network-changed-41d7c79a-73d5-466b-ba68-192c5a2c01b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:17:52 compute-2 nova_compute[226829]: 2026-01-31 08:17:52.243 226833 DEBUG oslo_concurrency.lockutils [req-4d54873d-db49-407c-8f4c-6261fc438f76 req-1e32a0f5-1511-41a0-8ac0-86b2357e9257 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:17:52 compute-2 ceph-mon[77282]: pgmap v2485: 305 pgs: 305 active+clean; 743 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 412 KiB/s rd, 567 KiB/s wr, 52 op/s
Jan 31 08:17:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:17:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:52.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:17:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:17:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:53.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:17:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/242637935' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:17:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/242637935' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:17:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:54.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:54 compute-2 ceph-mon[77282]: pgmap v2486: 305 pgs: 305 active+clean; 743 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 62 KiB/s wr, 7 op/s
Jan 31 08:17:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:17:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:55.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:17:55 compute-2 nova_compute[226829]: 2026-01-31 08:17:55.262 226833 DEBUG nova.network.neutron [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Updating instance_info_cache with network_info: [{"id": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "address": "fa:16:3e:b7:40:e9", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d7c79a-73", "ovs_interfaceid": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:17:55 compute-2 nova_compute[226829]: 2026-01-31 08:17:55.289 226833 DEBUG oslo_concurrency.lockutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Releasing lock "refresh_cache-2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:17:55 compute-2 nova_compute[226829]: 2026-01-31 08:17:55.291 226833 DEBUG nova.virt.libvirt.driver [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:17:55 compute-2 nova_compute[226829]: 2026-01-31 08:17:55.291 226833 INFO nova.virt.libvirt.driver [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Creating image(s)
Jan 31 08:17:55 compute-2 nova_compute[226829]: 2026-01-31 08:17:55.317 226833 DEBUG nova.storage.rbd_utils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] rbd image 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:17:55 compute-2 nova_compute[226829]: 2026-01-31 08:17:55.320 226833 DEBUG nova.objects.instance [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lazy-loading 'trusted_certs' on Instance uuid 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:55 compute-2 nova_compute[226829]: 2026-01-31 08:17:55.322 226833 DEBUG oslo_concurrency.lockutils [req-4d54873d-db49-407c-8f4c-6261fc438f76 req-1e32a0f5-1511-41a0-8ac0-86b2357e9257 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:17:55 compute-2 nova_compute[226829]: 2026-01-31 08:17:55.322 226833 DEBUG nova.network.neutron [req-4d54873d-db49-407c-8f4c-6261fc438f76 req-1e32a0f5-1511-41a0-8ac0-86b2357e9257 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Refreshing network info cache for port 41d7c79a-73d5-466b-ba68-192c5a2c01b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:17:55 compute-2 nova_compute[226829]: 2026-01-31 08:17:55.424 226833 DEBUG nova.storage.rbd_utils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] rbd image 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:17:55 compute-2 nova_compute[226829]: 2026-01-31 08:17:55.449 226833 DEBUG nova.storage.rbd_utils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] rbd image 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:17:55 compute-2 nova_compute[226829]: 2026-01-31 08:17:55.453 226833 DEBUG oslo_concurrency.lockutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "88f69edf8c55a0ed4c61dba89d2b838221ffd043" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:55 compute-2 nova_compute[226829]: 2026-01-31 08:17:55.454 226833 DEBUG oslo_concurrency.lockutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "88f69edf8c55a0ed4c61dba89d2b838221ffd043" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:56 compute-2 ceph-mon[77282]: pgmap v2487: 305 pgs: 305 active+clean; 743 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 51 KiB/s wr, 7 op/s
Jan 31 08:17:56 compute-2 nova_compute[226829]: 2026-01-31 08:17:56.142 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:17:56 compute-2 nova_compute[226829]: 2026-01-31 08:17:56.144 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:17:56 compute-2 nova_compute[226829]: 2026-01-31 08:17:56.144 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 31 08:17:56 compute-2 nova_compute[226829]: 2026-01-31 08:17:56.144 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 08:17:56 compute-2 nova_compute[226829]: 2026-01-31 08:17:56.188 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:56 compute-2 nova_compute[226829]: 2026-01-31 08:17:56.189 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 08:17:56 compute-2 nova_compute[226829]: 2026-01-31 08:17:56.238 226833 DEBUG nova.virt.libvirt.imagebackend [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Image locations are: [{'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/9d458b54-ba05-4df1-8123-8bec7fac57ca/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/9d458b54-ba05-4df1-8123-8bec7fac57ca/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 31 08:17:56 compute-2 nova_compute[226829]: 2026-01-31 08:17:56.316 226833 DEBUG nova.virt.libvirt.imagebackend [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Selected location: {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/9d458b54-ba05-4df1-8123-8bec7fac57ca/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 31 08:17:56 compute-2 nova_compute[226829]: 2026-01-31 08:17:56.317 226833 DEBUG nova.storage.rbd_utils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] cloning images/9d458b54-ba05-4df1-8123-8bec7fac57ca@snap to None/2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 08:17:56 compute-2 nova_compute[226829]: 2026-01-31 08:17:56.547 226833 DEBUG oslo_concurrency.lockutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "88f69edf8c55a0ed4c61dba89d2b838221ffd043" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:56 compute-2 nova_compute[226829]: 2026-01-31 08:17:56.683 226833 DEBUG nova.objects.instance [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lazy-loading 'migration_context' on Instance uuid 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:56 compute-2 nova_compute[226829]: 2026-01-31 08:17:56.756 226833 DEBUG nova.storage.rbd_utils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] flattening vms/2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 31 08:17:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:17:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:56.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:17:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:57.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.169 226833 DEBUG nova.virt.libvirt.driver [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Image rbd:vms/2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.170 226833 DEBUG nova.virt.libvirt.driver [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.170 226833 DEBUG nova.virt.libvirt.driver [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Ensure instance console log exists: /var/lib/nova/instances/2c1aa7ad-f9c1-4e05-8261-defaa3eef40b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.171 226833 DEBUG oslo_concurrency.lockutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.171 226833 DEBUG oslo_concurrency.lockutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.171 226833 DEBUG oslo_concurrency.lockutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.175 226833 DEBUG nova.virt.libvirt.driver [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Start _get_guest_xml network_info=[{"id": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "address": "fa:16:3e:b7:40:e9", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d7c79a-73", "ovs_interfaceid": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdc': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-31T08:17:15Z,direct_url=<?>,disk_format='raw',id=9d458b54-ba05-4df1-8123-8bec7fac57ca,min_disk=1,min_ram=0,name='tempest-AttachVolumeShelveTestJSON-server-1656500391-shelved',owner='3b06982960ad4453b8e542cb6330835d',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-31T08:17:27Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'delete_on_termination': False, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-404c4de7-aa21-488f-9d20-6b6e4016f179', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '404c4de7-aa21-488f-9d20-6b6e4016f179', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attached', 'instance': '2c1aa7ad-f9c1-4e05-8261-defaa3eef40b', 'attached_at': '', 'detached_at': '', 'volume_id': '404c4de7-aa21-488f-9d20-6b6e4016f179', 'serial': '404c4de7-aa21-488f-9d20-6b6e4016f179'}, 'disk_bus': 'virtio', 'mount_device': '/dev/vdc', 'attachment_id': '341e2c00-961d-4cab-a3b1-d0caae0c47b1', 'boot_index': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.179 226833 WARNING nova.virt.libvirt.driver [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.203 226833 DEBUG nova.virt.libvirt.host [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.204 226833 DEBUG nova.virt.libvirt.host [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.210 226833 DEBUG nova.virt.libvirt.host [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.210 226833 DEBUG nova.virt.libvirt.host [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.211 226833 DEBUG nova.virt.libvirt.driver [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.212 226833 DEBUG nova.virt.hardware [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-31T08:17:15Z,direct_url=<?>,disk_format='raw',id=9d458b54-ba05-4df1-8123-8bec7fac57ca,min_disk=1,min_ram=0,name='tempest-AttachVolumeShelveTestJSON-server-1656500391-shelved',owner='3b06982960ad4453b8e542cb6330835d',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-31T08:17:27Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.212 226833 DEBUG nova.virt.hardware [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.212 226833 DEBUG nova.virt.hardware [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.213 226833 DEBUG nova.virt.hardware [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.213 226833 DEBUG nova.virt.hardware [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.213 226833 DEBUG nova.virt.hardware [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.214 226833 DEBUG nova.virt.hardware [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.214 226833 DEBUG nova.virt.hardware [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.214 226833 DEBUG nova.virt.hardware [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.214 226833 DEBUG nova.virt.hardware [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.215 226833 DEBUG nova.virt.hardware [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.215 226833 DEBUG nova.objects.instance [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.312 226833 DEBUG oslo_concurrency.processutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:17:57 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2776682966' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.738 226833 DEBUG oslo_concurrency.processutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.766 226833 DEBUG nova.storage.rbd_utils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] rbd image 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:17:57 compute-2 nova_compute[226829]: 2026-01-31 08:17:57.770 226833 DEBUG oslo_concurrency.processutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2776682966' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:17:58 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1378512047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.216 226833 DEBUG oslo_concurrency.processutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.368 226833 DEBUG nova.virt.libvirt.vif [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:16:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1656500391',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1656500391',id=137,image_ref='9d458b54-ba05-4df1-8123-8bec7fac57ca',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-46056576',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:16:32Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='3b06982960ad4453b8e542cb6330835d',ramdisk_id='',reservation_id='r-e68k7fwd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_m
odel='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-332944999',owner_user_name='tempest-AttachVolumeShelveTestJSON-332944999-project-member',shelved_at='2026-01-31T08:17:28.629258',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='9d458b54-ba05-4df1-8123-8bec7fac57ca'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:17:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3b153d2832404e5b9250422b70ba522d',uuid=2c1aa7ad-f9c1-4e05-8261-defaa3eef40b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "address": "fa:16:3e:b7:40:e9", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d7c79a-73", "ovs_interfaceid": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.369 226833 DEBUG nova.network.os_vif_util [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Converting VIF {"id": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "address": "fa:16:3e:b7:40:e9", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d7c79a-73", "ovs_interfaceid": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.370 226833 DEBUG nova.network.os_vif_util [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:40:e9,bridge_name='br-int',has_traffic_filtering=True,id=41d7c79a-73d5-466b-ba68-192c5a2c01b3,network=Network(2b47e290-9853-478f-86cb-c8ea73119a97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41d7c79a-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.373 226833 DEBUG nova.objects.instance [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lazy-loading 'pci_devices' on Instance uuid 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.493 226833 DEBUG nova.virt.libvirt.driver [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:17:58 compute-2 nova_compute[226829]:   <uuid>2c1aa7ad-f9c1-4e05-8261-defaa3eef40b</uuid>
Jan 31 08:17:58 compute-2 nova_compute[226829]:   <name>instance-00000089</name>
Jan 31 08:17:58 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:17:58 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:17:58 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <nova:name>tempest-AttachVolumeShelveTestJSON-server-1656500391</nova:name>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:17:57</nova:creationTime>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:17:58 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:17:58 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:17:58 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:17:58 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:17:58 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:17:58 compute-2 nova_compute[226829]:         <nova:user uuid="3b153d2832404e5b9250422b70ba522d">tempest-AttachVolumeShelveTestJSON-332944999-project-member</nova:user>
Jan 31 08:17:58 compute-2 nova_compute[226829]:         <nova:project uuid="3b06982960ad4453b8e542cb6330835d">tempest-AttachVolumeShelveTestJSON-332944999</nova:project>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="9d458b54-ba05-4df1-8123-8bec7fac57ca"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:17:58 compute-2 nova_compute[226829]:         <nova:port uuid="41d7c79a-73d5-466b-ba68-192c5a2c01b3">
Jan 31 08:17:58 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:17:58 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:17:58 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <system>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <entry name="serial">2c1aa7ad-f9c1-4e05-8261-defaa3eef40b</entry>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <entry name="uuid">2c1aa7ad-f9c1-4e05-8261-defaa3eef40b</entry>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     </system>
Jan 31 08:17:58 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:17:58 compute-2 nova_compute[226829]:   <os>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:   </os>
Jan 31 08:17:58 compute-2 nova_compute[226829]:   <features>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:   </features>
Jan 31 08:17:58 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:17:58 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:17:58 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk">
Jan 31 08:17:58 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       </source>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:17:58 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk.config">
Jan 31 08:17:58 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       </source>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:17:58 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <source protocol="rbd" name="volumes/volume-404c4de7-aa21-488f-9d20-6b6e4016f179">
Jan 31 08:17:58 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       </source>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:17:58 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <target dev="vdc" bus="virtio"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <serial>404c4de7-aa21-488f-9d20-6b6e4016f179</serial>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:b7:40:e9"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <target dev="tap41d7c79a-73"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/2c1aa7ad-f9c1-4e05-8261-defaa3eef40b/console.log" append="off"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <video>
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     </video>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <input type="keyboard" bus="usb"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:17:58 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:17:58 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:17:58 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:17:58 compute-2 nova_compute[226829]: </domain>
Jan 31 08:17:58 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.493 226833 DEBUG nova.compute.manager [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Preparing to wait for external event network-vif-plugged-41d7c79a-73d5-466b-ba68-192c5a2c01b3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.494 226833 DEBUG oslo_concurrency.lockutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.494 226833 DEBUG oslo_concurrency.lockutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.494 226833 DEBUG oslo_concurrency.lockutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.495 226833 DEBUG nova.virt.libvirt.vif [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:16:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1656500391',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1656500391',id=137,image_ref='9d458b54-ba05-4df1-8123-8bec7fac57ca',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-46056576',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:16:32Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='3b06982960ad4453b8e542cb6330835d',ramdisk_id='',reservation_id='r-e68k7fwd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-332944999',owner_user_name='tempest-AttachVolumeShelveTestJSON-332944999-project-member',shelved_at='2026-01-31T08:17:28.629258',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='9d458b54-ba05-4df1-8123-8bec7fac57ca'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:17:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3b153d2832404e5b9250422b70ba522d',uuid=2c1aa7ad-f9c1-4e05-8261-defaa3eef40b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "address": "fa:16:3e:b7:40:e9", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d7c79a-73", "ovs_interfaceid": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.495 226833 DEBUG nova.network.os_vif_util [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Converting VIF {"id": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "address": "fa:16:3e:b7:40:e9", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d7c79a-73", "ovs_interfaceid": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.496 226833 DEBUG nova.network.os_vif_util [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:40:e9,bridge_name='br-int',has_traffic_filtering=True,id=41d7c79a-73d5-466b-ba68-192c5a2c01b3,network=Network(2b47e290-9853-478f-86cb-c8ea73119a97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41d7c79a-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.497 226833 DEBUG os_vif [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:40:e9,bridge_name='br-int',has_traffic_filtering=True,id=41d7c79a-73d5-466b-ba68-192c5a2c01b3,network=Network(2b47e290-9853-478f-86cb-c8ea73119a97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41d7c79a-73') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.497 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.498 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.498 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.502 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.502 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41d7c79a-73, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.503 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap41d7c79a-73, col_values=(('external_ids', {'iface-id': '41d7c79a-73d5-466b-ba68-192c5a2c01b3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:40:e9', 'vm-uuid': '2c1aa7ad-f9c1-4e05-8261-defaa3eef40b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.504 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:58 compute-2 NetworkManager[48999]: <info>  [1769847478.5058] manager: (tap41d7c79a-73): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/288)
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.507 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.511 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.512 226833 INFO os_vif [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:40:e9,bridge_name='br-int',has_traffic_filtering=True,id=41d7c79a-73d5-466b-ba68-192c5a2c01b3,network=Network(2b47e290-9853-478f-86cb-c8ea73119a97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41d7c79a-73')
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.609 226833 DEBUG nova.virt.libvirt.driver [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.609 226833 DEBUG nova.virt.libvirt.driver [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.609 226833 DEBUG nova.virt.libvirt.driver [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.609 226833 DEBUG nova.virt.libvirt.driver [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] No VIF found with MAC fa:16:3e:b7:40:e9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.610 226833 INFO nova.virt.libvirt.driver [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Using config drive
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.632 226833 DEBUG nova.storage.rbd_utils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] rbd image 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.640 226833 DEBUG nova.network.neutron [req-4d54873d-db49-407c-8f4c-6261fc438f76 req-1e32a0f5-1511-41a0-8ac0-86b2357e9257 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Updated VIF entry in instance network info cache for port 41d7c79a-73d5-466b-ba68-192c5a2c01b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.641 226833 DEBUG nova.network.neutron [req-4d54873d-db49-407c-8f4c-6261fc438f76 req-1e32a0f5-1511-41a0-8ac0-86b2357e9257 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Updating instance_info_cache with network_info: [{"id": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "address": "fa:16:3e:b7:40:e9", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d7c79a-73", "ovs_interfaceid": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.662 226833 DEBUG nova.objects.instance [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lazy-loading 'ec2_ids' on Instance uuid 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.683 226833 DEBUG oslo_concurrency.lockutils [req-4d54873d-db49-407c-8f4c-6261fc438f76 req-1e32a0f5-1511-41a0-8ac0-86b2357e9257 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:17:58 compute-2 nova_compute[226829]: 2026-01-31 08:17:58.747 226833 DEBUG nova.objects.instance [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lazy-loading 'keypairs' on Instance uuid 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:17:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:58.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:17:58 compute-2 ceph-mon[77282]: pgmap v2488: 305 pgs: 305 active+clean; 760 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 788 KiB/s wr, 32 op/s
Jan 31 08:17:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1378512047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:17:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:17:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:59.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:17:59 compute-2 nova_compute[226829]: 2026-01-31 08:17:59.364 226833 INFO nova.virt.libvirt.driver [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Creating config drive at /var/lib/nova/instances/2c1aa7ad-f9c1-4e05-8261-defaa3eef40b/disk.config
Jan 31 08:17:59 compute-2 nova_compute[226829]: 2026-01-31 08:17:59.370 226833 DEBUG oslo_concurrency.processutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2c1aa7ad-f9c1-4e05-8261-defaa3eef40b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpor1xbvug execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:59 compute-2 nova_compute[226829]: 2026-01-31 08:17:59.502 226833 DEBUG oslo_concurrency.processutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2c1aa7ad-f9c1-4e05-8261-defaa3eef40b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpor1xbvug" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:59 compute-2 nova_compute[226829]: 2026-01-31 08:17:59.524 226833 DEBUG nova.storage.rbd_utils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] rbd image 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:17:59 compute-2 nova_compute[226829]: 2026-01-31 08:17:59.527 226833 DEBUG oslo_concurrency.processutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2c1aa7ad-f9c1-4e05-8261-defaa3eef40b/disk.config 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:59 compute-2 nova_compute[226829]: 2026-01-31 08:17:59.670 226833 DEBUG oslo_concurrency.processutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2c1aa7ad-f9c1-4e05-8261-defaa3eef40b/disk.config 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:59 compute-2 nova_compute[226829]: 2026-01-31 08:17:59.670 226833 INFO nova.virt.libvirt.driver [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Deleting local config drive /var/lib/nova/instances/2c1aa7ad-f9c1-4e05-8261-defaa3eef40b/disk.config because it was imported into RBD.
Jan 31 08:17:59 compute-2 kernel: tap41d7c79a-73: entered promiscuous mode
Jan 31 08:17:59 compute-2 NetworkManager[48999]: <info>  [1769847479.7042] manager: (tap41d7c79a-73): new Tun device (/org/freedesktop/NetworkManager/Devices/289)
Jan 31 08:17:59 compute-2 ovn_controller[133834]: 2026-01-31T08:17:59Z|00592|binding|INFO|Claiming lport 41d7c79a-73d5-466b-ba68-192c5a2c01b3 for this chassis.
Jan 31 08:17:59 compute-2 ovn_controller[133834]: 2026-01-31T08:17:59Z|00593|binding|INFO|41d7c79a-73d5-466b-ba68-192c5a2c01b3: Claiming fa:16:3e:b7:40:e9 10.100.0.14
Jan 31 08:17:59 compute-2 nova_compute[226829]: 2026-01-31 08:17:59.705 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:59 compute-2 nova_compute[226829]: 2026-01-31 08:17:59.711 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:59 compute-2 NetworkManager[48999]: <info>  [1769847479.7177] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/290)
Jan 31 08:17:59 compute-2 NetworkManager[48999]: <info>  [1769847479.7182] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/291)
Jan 31 08:17:59 compute-2 nova_compute[226829]: 2026-01-31 08:17:59.716 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:59 compute-2 systemd-udevd[290809]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:17:59 compute-2 systemd-machined[195142]: New machine qemu-64-instance-00000089.
Jan 31 08:17:59 compute-2 NetworkManager[48999]: <info>  [1769847479.7346] device (tap41d7c79a-73): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:17:59 compute-2 NetworkManager[48999]: <info>  [1769847479.7354] device (tap41d7c79a-73): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:59.735 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:40:e9 10.100.0.14'], port_security=['fa:16:3e:b7:40:e9 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2c1aa7ad-f9c1-4e05-8261-defaa3eef40b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b47e290-9853-478f-86cb-c8ea73119a97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b06982960ad4453b8e542cb6330835d', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'd12bd0a3-e514-4fde-9351-39f4527fc3f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.232'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46c631ec-3e4c-4c27-ae32-3645d3f29853, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=41d7c79a-73d5-466b-ba68-192c5a2c01b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:59.736 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 41d7c79a-73d5-466b-ba68-192c5a2c01b3 in datapath 2b47e290-9853-478f-86cb-c8ea73119a97 bound to our chassis
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:59.738 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b47e290-9853-478f-86cb-c8ea73119a97
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:59.746 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[dc34c87c-9174-4530-ac74-68068aebdaf2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:59.747 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2b47e290-91 in ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:59.748 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2b47e290-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:59.749 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cd6728e5-a87d-4dad-955e-b9492fa70477]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:59.749 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8eb83510-87a7-4b65-aaf0-5d1d872d0e49]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:59.759 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[ef72143b-b046-4c44-91d5-70eee772ae4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:59 compute-2 systemd[1]: Started Virtual Machine qemu-64-instance-00000089.
Jan 31 08:17:59 compute-2 nova_compute[226829]: 2026-01-31 08:17:59.763 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:59.768 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[970a190a-9b6c-4fee-8f4f-b4c469269ad8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:59 compute-2 nova_compute[226829]: 2026-01-31 08:17:59.781 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:59 compute-2 ovn_controller[133834]: 2026-01-31T08:17:59Z|00594|binding|INFO|Setting lport 41d7c79a-73d5-466b-ba68-192c5a2c01b3 ovn-installed in OVS
Jan 31 08:17:59 compute-2 ovn_controller[133834]: 2026-01-31T08:17:59Z|00595|binding|INFO|Setting lport 41d7c79a-73d5-466b-ba68-192c5a2c01b3 up in Southbound
Jan 31 08:17:59 compute-2 nova_compute[226829]: 2026-01-31 08:17:59.785 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:59.790 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[ec076e00-76a6-4c08-9758-b0425d18a741]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:59 compute-2 systemd-udevd[290812]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:59.795 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0b9db08d-5660-4111-9daf-83bd4764d9c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:59 compute-2 NetworkManager[48999]: <info>  [1769847479.7965] manager: (tap2b47e290-90): new Veth device (/org/freedesktop/NetworkManager/Devices/292)
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:59.814 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[2f5514fa-5346-430d-a704-641494b93b28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:59.817 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[208da3a9-fc5e-4e33-b4d6-a8f0773e5103]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:59 compute-2 NetworkManager[48999]: <info>  [1769847479.8321] device (tap2b47e290-90): carrier: link connected
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:59.836 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[b40e6c8e-fa76-4dd1-9170-bee27d414a6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:59.847 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ac571894-52ac-4c80-80b3-665d83701941]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b47e290-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:d6:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 183], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 773230, 'reachable_time': 20247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290844, 'error': None, 'target': 'ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:59.858 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f40b9fcf-6a1d-41d1-a87e-f04970f30b86]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb1:d61c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 773230, 'tstamp': 773230}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290845, 'error': None, 'target': 'ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:59.868 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3ee69612-49e7-4316-a755-5fe0cee32a29]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b47e290-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:d6:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 183], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 773230, 'reachable_time': 20247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290846, 'error': None, 'target': 'ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:59.882 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e25f0659-931f-4beb-af11-d369d5dcbd2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:59.916 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3ea30b4f-109d-4e7d-bfe3-83fbaf2adfc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:59.917 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b47e290-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:59.917 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:59.918 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b47e290-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:59 compute-2 NetworkManager[48999]: <info>  [1769847479.9205] manager: (tap2b47e290-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/293)
Jan 31 08:17:59 compute-2 nova_compute[226829]: 2026-01-31 08:17:59.919 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:59 compute-2 kernel: tap2b47e290-90: entered promiscuous mode
Jan 31 08:17:59 compute-2 nova_compute[226829]: 2026-01-31 08:17:59.922 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:59.924 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b47e290-90, col_values=(('external_ids', {'iface-id': '4fadf8e2-21f6-4df7-9cc2-be518280ee18'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:59 compute-2 nova_compute[226829]: 2026-01-31 08:17:59.925 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:59 compute-2 ovn_controller[133834]: 2026-01-31T08:17:59Z|00596|binding|INFO|Releasing lport 4fadf8e2-21f6-4df7-9cc2-be518280ee18 from this chassis (sb_readonly=0)
Jan 31 08:17:59 compute-2 nova_compute[226829]: 2026-01-31 08:17:59.930 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:59.931 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2b47e290-9853-478f-86cb-c8ea73119a97.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2b47e290-9853-478f-86cb-c8ea73119a97.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:59.932 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[84944b02-473d-434d-8bc0-78d04fd70e65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:59.933 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-2b47e290-9853-478f-86cb-c8ea73119a97
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/2b47e290-9853-478f-86cb-c8ea73119a97.pid.haproxy
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 2b47e290-9853-478f-86cb-c8ea73119a97
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:17:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:17:59.935 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97', 'env', 'PROCESS_TAG=haproxy-2b47e290-9853-478f-86cb-c8ea73119a97', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2b47e290-9853-478f-86cb-c8ea73119a97.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:18:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e318 e318: 3 total, 3 up, 3 in
Jan 31 08:18:00 compute-2 ceph-mon[77282]: pgmap v2489: 305 pgs: 305 active+clean; 797 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.7 MiB/s wr, 56 op/s
Jan 31 08:18:00 compute-2 podman[290878]: 2026-01-31 08:18:00.273638248 +0000 UTC m=+0.072161996 container create 6b6b32fa64eb00011aebc17bd4e5ffaf42d7ff822e33bc903e3b510bbc7b5104 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:18:00 compute-2 podman[290878]: 2026-01-31 08:18:00.218838013 +0000 UTC m=+0.017361701 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:18:00 compute-2 systemd[1]: Started libpod-conmon-6b6b32fa64eb00011aebc17bd4e5ffaf42d7ff822e33bc903e3b510bbc7b5104.scope.
Jan 31 08:18:00 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:18:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0ffd06b0b64085e1e164ca79de85a85cd95837af3c2bce41881025726d20744/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:00 compute-2 podman[290878]: 2026-01-31 08:18:00.39336806 +0000 UTC m=+0.191891748 container init 6b6b32fa64eb00011aebc17bd4e5ffaf42d7ff822e33bc903e3b510bbc7b5104 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 08:18:00 compute-2 podman[290878]: 2026-01-31 08:18:00.400640186 +0000 UTC m=+0.199163884 container start 6b6b32fa64eb00011aebc17bd4e5ffaf42d7ff822e33bc903e3b510bbc7b5104 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 08:18:00 compute-2 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[290894]: [NOTICE]   (290898) : New worker (290900) forked
Jan 31 08:18:00 compute-2 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[290894]: [NOTICE]   (290898) : Loading success.
Jan 31 08:18:00 compute-2 nova_compute[226829]: 2026-01-31 08:18:00.945 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847480.9450386, 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:18:00 compute-2 nova_compute[226829]: 2026-01-31 08:18:00.946 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] VM Started (Lifecycle Event)
Jan 31 08:18:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:18:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:00.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:18:01 compute-2 nova_compute[226829]: 2026-01-31 08:18:01.008 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:18:01 compute-2 nova_compute[226829]: 2026-01-31 08:18:01.013 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847480.9459136, 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:18:01 compute-2 nova_compute[226829]: 2026-01-31 08:18:01.013 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] VM Paused (Lifecycle Event)
Jan 31 08:18:01 compute-2 nova_compute[226829]: 2026-01-31 08:18:01.034 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:18:01 compute-2 nova_compute[226829]: 2026-01-31 08:18:01.037 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:18:01 compute-2 nova_compute[226829]: 2026-01-31 08:18:01.062 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:18:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:01.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:01 compute-2 nova_compute[226829]: 2026-01-31 08:18:01.189 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:01 compute-2 ceph-mon[77282]: osdmap e318: 3 total, 3 up, 3 in
Jan 31 08:18:01 compute-2 nova_compute[226829]: 2026-01-31 08:18:01.532 226833 DEBUG nova.compute.manager [req-b73a57a2-938e-4e27-8eaa-c5f9177c0524 req-1b83b268-2c2a-43a3-a961-3af654396008 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Received event network-vif-plugged-41d7c79a-73d5-466b-ba68-192c5a2c01b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:18:01 compute-2 nova_compute[226829]: 2026-01-31 08:18:01.532 226833 DEBUG oslo_concurrency.lockutils [req-b73a57a2-938e-4e27-8eaa-c5f9177c0524 req-1b83b268-2c2a-43a3-a961-3af654396008 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:01 compute-2 nova_compute[226829]: 2026-01-31 08:18:01.533 226833 DEBUG oslo_concurrency.lockutils [req-b73a57a2-938e-4e27-8eaa-c5f9177c0524 req-1b83b268-2c2a-43a3-a961-3af654396008 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:01 compute-2 nova_compute[226829]: 2026-01-31 08:18:01.533 226833 DEBUG oslo_concurrency.lockutils [req-b73a57a2-938e-4e27-8eaa-c5f9177c0524 req-1b83b268-2c2a-43a3-a961-3af654396008 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:01 compute-2 nova_compute[226829]: 2026-01-31 08:18:01.534 226833 DEBUG nova.compute.manager [req-b73a57a2-938e-4e27-8eaa-c5f9177c0524 req-1b83b268-2c2a-43a3-a961-3af654396008 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Processing event network-vif-plugged-41d7c79a-73d5-466b-ba68-192c5a2c01b3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:18:01 compute-2 nova_compute[226829]: 2026-01-31 08:18:01.535 226833 DEBUG nova.compute.manager [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:18:01 compute-2 nova_compute[226829]: 2026-01-31 08:18:01.538 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847481.538599, 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:18:01 compute-2 nova_compute[226829]: 2026-01-31 08:18:01.539 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] VM Resumed (Lifecycle Event)
Jan 31 08:18:01 compute-2 nova_compute[226829]: 2026-01-31 08:18:01.540 226833 DEBUG nova.virt.libvirt.driver [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:18:01 compute-2 nova_compute[226829]: 2026-01-31 08:18:01.544 226833 INFO nova.virt.libvirt.driver [-] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Instance spawned successfully.
Jan 31 08:18:01 compute-2 nova_compute[226829]: 2026-01-31 08:18:01.624 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:18:01 compute-2 nova_compute[226829]: 2026-01-31 08:18:01.627 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:18:01 compute-2 nova_compute[226829]: 2026-01-31 08:18:01.663 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:18:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e319 e319: 3 total, 3 up, 3 in
Jan 31 08:18:02 compute-2 ceph-mon[77282]: pgmap v2491: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 822 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 132 op/s
Jan 31 08:18:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:02.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:03.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:03 compute-2 ceph-mon[77282]: osdmap e319: 3 total, 3 up, 3 in
Jan 31 08:18:03 compute-2 nova_compute[226829]: 2026-01-31 08:18:03.505 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:04 compute-2 nova_compute[226829]: 2026-01-31 08:18:04.029 226833 DEBUG nova.compute.manager [req-3ff4d264-d863-4c18-a27f-4521c539d4ae req-7c1f868f-1b49-448b-b86c-88fea5659df9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Received event network-vif-plugged-41d7c79a-73d5-466b-ba68-192c5a2c01b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:18:04 compute-2 nova_compute[226829]: 2026-01-31 08:18:04.030 226833 DEBUG oslo_concurrency.lockutils [req-3ff4d264-d863-4c18-a27f-4521c539d4ae req-7c1f868f-1b49-448b-b86c-88fea5659df9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:04 compute-2 nova_compute[226829]: 2026-01-31 08:18:04.030 226833 DEBUG oslo_concurrency.lockutils [req-3ff4d264-d863-4c18-a27f-4521c539d4ae req-7c1f868f-1b49-448b-b86c-88fea5659df9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:04 compute-2 nova_compute[226829]: 2026-01-31 08:18:04.031 226833 DEBUG oslo_concurrency.lockutils [req-3ff4d264-d863-4c18-a27f-4521c539d4ae req-7c1f868f-1b49-448b-b86c-88fea5659df9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:04 compute-2 nova_compute[226829]: 2026-01-31 08:18:04.031 226833 DEBUG nova.compute.manager [req-3ff4d264-d863-4c18-a27f-4521c539d4ae req-7c1f868f-1b49-448b-b86c-88fea5659df9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] No waiting events found dispatching network-vif-plugged-41d7c79a-73d5-466b-ba68-192c5a2c01b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:18:04 compute-2 nova_compute[226829]: 2026-01-31 08:18:04.032 226833 WARNING nova.compute.manager [req-3ff4d264-d863-4c18-a27f-4521c539d4ae req-7c1f868f-1b49-448b-b86c-88fea5659df9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Received unexpected event network-vif-plugged-41d7c79a-73d5-466b-ba68-192c5a2c01b3 for instance with vm_state shelved_offloaded and task_state spawning.
Jan 31 08:18:04 compute-2 nova_compute[226829]: 2026-01-31 08:18:04.165 226833 DEBUG nova.compute.manager [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:18:04 compute-2 nova_compute[226829]: 2026-01-31 08:18:04.382 226833 DEBUG oslo_concurrency.lockutils [None req-b1e6a008-724f-446f-8de1-81f1f932eb05 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 19.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:04 compute-2 ceph-mon[77282]: pgmap v2493: 305 pgs: 1 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 300 active+clean; 795 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 5.9 MiB/s wr, 198 op/s
Jan 31 08:18:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:04.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:05.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:05 compute-2 podman[290971]: 2026-01-31 08:18:05.178217548 +0000 UTC m=+0.065006391 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:18:06 compute-2 nova_compute[226829]: 2026-01-31 08:18:06.228 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:06 compute-2 sudo[290999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:18:06 compute-2 sudo[290999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:06 compute-2 sudo[290999]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:18:06.893 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:18:06.894 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:18:06.894 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:06 compute-2 sudo[291024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:18:06 compute-2 sudo[291024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:06 compute-2 sudo[291024]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:06.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:07.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e320 e320: 3 total, 3 up, 3 in
Jan 31 08:18:07 compute-2 ceph-mon[77282]: pgmap v2494: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 668 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 4.7 MiB/s wr, 266 op/s
Jan 31 08:18:08 compute-2 ceph-mon[77282]: osdmap e320: 3 total, 3 up, 3 in
Jan 31 08:18:08 compute-2 ceph-mon[77282]: pgmap v2496: 305 pgs: 305 active+clean; 614 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 1.8 MiB/s wr, 287 op/s
Jan 31 08:18:08 compute-2 nova_compute[226829]: 2026-01-31 08:18:08.541 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:08.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:09.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2030513395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:18:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:10 compute-2 ceph-mon[77282]: pgmap v2497: 305 pgs: 305 active+clean; 559 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 427 KiB/s wr, 230 op/s
Jan 31 08:18:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:18:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:10.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:18:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:11.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:11 compute-2 nova_compute[226829]: 2026-01-31 08:18:11.230 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e321 e321: 3 total, 3 up, 3 in
Jan 31 08:18:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:12.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:13 compute-2 ceph-mon[77282]: pgmap v2498: 305 pgs: 305 active+clean; 542 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 1.4 MiB/s wr, 204 op/s
Jan 31 08:18:13 compute-2 ceph-mon[77282]: osdmap e321: 3 total, 3 up, 3 in
Jan 31 08:18:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2716440508' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:18:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:13.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:13 compute-2 nova_compute[226829]: 2026-01-31 08:18:13.543 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e322 e322: 3 total, 3 up, 3 in
Jan 31 08:18:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1826215668' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:18:14 compute-2 ceph-mon[77282]: pgmap v2500: 305 pgs: 305 active+clean; 549 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.7 MiB/s wr, 120 op/s
Jan 31 08:18:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:18:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:14.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:18:15 compute-2 podman[291053]: 2026-01-31 08:18:15.167857275 +0000 UTC m=+0.050863828 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 08:18:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:15.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:15 compute-2 ceph-mon[77282]: osdmap e322: 3 total, 3 up, 3 in
Jan 31 08:18:15 compute-2 ovn_controller[133834]: 2026-01-31T08:18:15Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:40:e9 10.100.0.14
Jan 31 08:18:16 compute-2 nova_compute[226829]: 2026-01-31 08:18:16.233 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:16 compute-2 ceph-mon[77282]: pgmap v2502: 305 pgs: 305 active+clean; 549 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 533 KiB/s rd, 2.7 MiB/s wr, 123 op/s
Jan 31 08:18:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:18:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:16.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:18:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:18:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:17.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:18:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3663357759' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:18:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3663357759' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:18:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e323 e323: 3 total, 3 up, 3 in
Jan 31 08:18:18 compute-2 nova_compute[226829]: 2026-01-31 08:18:18.547 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:18 compute-2 ceph-mon[77282]: osdmap e323: 3 total, 3 up, 3 in
Jan 31 08:18:18 compute-2 ceph-mon[77282]: pgmap v2504: 305 pgs: 305 active+clean; 549 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 906 KiB/s rd, 1.4 MiB/s wr, 156 op/s
Jan 31 08:18:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:18.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:18:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:19.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:18:19 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 31 08:18:19 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 31 08:18:19 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 31 08:18:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:19 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Jan 31 08:18:19 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 31 08:18:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e324 e324: 3 total, 3 up, 3 in
Jan 31 08:18:19 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Jan 31 08:18:20 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Jan 31 08:18:20 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Jan 31 08:18:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:18:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:20.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:18:21 compute-2 ceph-mon[77282]: pgmap v2505: 305 pgs: 305 active+clean; 549 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.1 MiB/s wr, 186 op/s
Jan 31 08:18:21 compute-2 ceph-mon[77282]: osdmap e324: 3 total, 3 up, 3 in
Jan 31 08:18:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1692506578' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:18:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1692506578' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:18:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:18:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:21.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:18:21 compute-2 nova_compute[226829]: 2026-01-31 08:18:21.234 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:22 compute-2 ceph-mon[77282]: pgmap v2507: 305 pgs: 305 active+clean; 537 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 60 KiB/s wr, 230 op/s
Jan 31 08:18:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:22.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:23.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:23 compute-2 nova_compute[226829]: 2026-01-31 08:18:23.549 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:24 compute-2 nova_compute[226829]: 2026-01-31 08:18:24.877 226833 DEBUG oslo_concurrency.lockutils [None req-767a60a3-8450-47fb-b2cb-2e73b7929f95 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:24 compute-2 nova_compute[226829]: 2026-01-31 08:18:24.878 226833 DEBUG oslo_concurrency.lockutils [None req-767a60a3-8450-47fb-b2cb-2e73b7929f95 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:24 compute-2 nova_compute[226829]: 2026-01-31 08:18:24.900 226833 INFO nova.compute.manager [None req-767a60a3-8450-47fb-b2cb-2e73b7929f95 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Detaching volume 404c4de7-aa21-488f-9d20-6b6e4016f179
Jan 31 08:18:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:18:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:24.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:18:25 compute-2 ceph-mon[77282]: pgmap v2508: 305 pgs: 305 active+clean; 486 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 55 KiB/s wr, 307 op/s
Jan 31 08:18:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:25.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:25 compute-2 nova_compute[226829]: 2026-01-31 08:18:25.317 226833 INFO nova.virt.block_device [None req-767a60a3-8450-47fb-b2cb-2e73b7929f95 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Attempting to driver detach volume 404c4de7-aa21-488f-9d20-6b6e4016f179 from mountpoint /dev/vdc
Jan 31 08:18:25 compute-2 nova_compute[226829]: 2026-01-31 08:18:25.328 226833 DEBUG nova.virt.libvirt.driver [None req-767a60a3-8450-47fb-b2cb-2e73b7929f95 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Attempting to detach device vdc from instance 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 08:18:25 compute-2 nova_compute[226829]: 2026-01-31 08:18:25.328 226833 DEBUG nova.virt.libvirt.guest [None req-767a60a3-8450-47fb-b2cb-2e73b7929f95 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:18:25 compute-2 nova_compute[226829]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:18:25 compute-2 nova_compute[226829]:   <source protocol="rbd" name="volumes/volume-404c4de7-aa21-488f-9d20-6b6e4016f179">
Jan 31 08:18:25 compute-2 nova_compute[226829]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:18:25 compute-2 nova_compute[226829]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:18:25 compute-2 nova_compute[226829]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:18:25 compute-2 nova_compute[226829]:   </source>
Jan 31 08:18:25 compute-2 nova_compute[226829]:   <target dev="vdc" bus="virtio"/>
Jan 31 08:18:25 compute-2 nova_compute[226829]:   <serial>404c4de7-aa21-488f-9d20-6b6e4016f179</serial>
Jan 31 08:18:25 compute-2 nova_compute[226829]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Jan 31 08:18:25 compute-2 nova_compute[226829]: </disk>
Jan 31 08:18:25 compute-2 nova_compute[226829]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:18:25 compute-2 nova_compute[226829]: 2026-01-31 08:18:25.368 226833 INFO nova.virt.libvirt.driver [None req-767a60a3-8450-47fb-b2cb-2e73b7929f95 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Successfully detached device vdc from instance 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b from the persistent domain config.
Jan 31 08:18:25 compute-2 nova_compute[226829]: 2026-01-31 08:18:25.369 226833 DEBUG nova.virt.libvirt.driver [None req-767a60a3-8450-47fb-b2cb-2e73b7929f95 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 08:18:25 compute-2 nova_compute[226829]: 2026-01-31 08:18:25.370 226833 DEBUG nova.virt.libvirt.guest [None req-767a60a3-8450-47fb-b2cb-2e73b7929f95 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:18:25 compute-2 nova_compute[226829]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:18:25 compute-2 nova_compute[226829]:   <source protocol="rbd" name="volumes/volume-404c4de7-aa21-488f-9d20-6b6e4016f179">
Jan 31 08:18:25 compute-2 nova_compute[226829]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:18:25 compute-2 nova_compute[226829]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:18:25 compute-2 nova_compute[226829]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:18:25 compute-2 nova_compute[226829]:   </source>
Jan 31 08:18:25 compute-2 nova_compute[226829]:   <target dev="vdc" bus="virtio"/>
Jan 31 08:18:25 compute-2 nova_compute[226829]:   <serial>404c4de7-aa21-488f-9d20-6b6e4016f179</serial>
Jan 31 08:18:25 compute-2 nova_compute[226829]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Jan 31 08:18:25 compute-2 nova_compute[226829]: </disk>
Jan 31 08:18:25 compute-2 nova_compute[226829]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:18:25 compute-2 nova_compute[226829]: 2026-01-31 08:18:25.691 226833 DEBUG nova.virt.libvirt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Received event <DeviceRemovedEvent: 1769847505.6915321, 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 08:18:25 compute-2 nova_compute[226829]: 2026-01-31 08:18:25.693 226833 DEBUG nova.virt.libvirt.driver [None req-767a60a3-8450-47fb-b2cb-2e73b7929f95 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 08:18:25 compute-2 nova_compute[226829]: 2026-01-31 08:18:25.695 226833 INFO nova.virt.libvirt.driver [None req-767a60a3-8450-47fb-b2cb-2e73b7929f95 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Successfully detached device vdc from instance 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b from the live domain config.
Jan 31 08:18:26 compute-2 nova_compute[226829]: 2026-01-31 08:18:26.237 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:26 compute-2 ceph-mon[77282]: pgmap v2509: 305 pgs: 305 active+clean; 424 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 42 KiB/s wr, 444 op/s
Jan 31 08:18:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3249558292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:18:26 compute-2 nova_compute[226829]: 2026-01-31 08:18:26.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:26 compute-2 nova_compute[226829]: 2026-01-31 08:18:26.561 226833 DEBUG nova.objects.instance [None req-767a60a3-8450-47fb-b2cb-2e73b7929f95 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lazy-loading 'flavor' on Instance uuid 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:18:26 compute-2 nova_compute[226829]: 2026-01-31 08:18:26.753 226833 DEBUG oslo_concurrency.lockutils [None req-767a60a3-8450-47fb-b2cb-2e73b7929f95 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:26 compute-2 sudo[291081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:18:26 compute-2 sudo[291081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:26.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:26 compute-2 sudo[291081]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:27 compute-2 sudo[291105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:18:27 compute-2 sudo[291113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:18:27 compute-2 sudo[291105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:27 compute-2 sudo[291113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:27 compute-2 sudo[291113]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:27 compute-2 sudo[291105]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e325 e325: 3 total, 3 up, 3 in
Jan 31 08:18:27 compute-2 sudo[291156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:18:27 compute-2 sudo[291156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:27 compute-2 sudo[291159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:18:27 compute-2 sudo[291156]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:27 compute-2 sudo[291159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:27 compute-2 sudo[291159]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:27 compute-2 sudo[291206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:18:27 compute-2 sudo[291206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:27.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:27 compute-2 sudo[291206]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:28 compute-2 ceph-mon[77282]: osdmap e325: 3 total, 3 up, 3 in
Jan 31 08:18:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:18:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:18:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:18:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:18:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:18:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:18:28 compute-2 ceph-mon[77282]: pgmap v2511: 305 pgs: 305 active+clean; 424 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 20 KiB/s wr, 398 op/s
Jan 31 08:18:28 compute-2 nova_compute[226829]: 2026-01-31 08:18:28.207 226833 DEBUG oslo_concurrency.lockutils [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:28 compute-2 nova_compute[226829]: 2026-01-31 08:18:28.209 226833 DEBUG oslo_concurrency.lockutils [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:28 compute-2 nova_compute[226829]: 2026-01-31 08:18:28.209 226833 DEBUG oslo_concurrency.lockutils [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:28 compute-2 nova_compute[226829]: 2026-01-31 08:18:28.209 226833 DEBUG oslo_concurrency.lockutils [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:28 compute-2 nova_compute[226829]: 2026-01-31 08:18:28.209 226833 DEBUG oslo_concurrency.lockutils [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:28 compute-2 nova_compute[226829]: 2026-01-31 08:18:28.211 226833 INFO nova.compute.manager [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Terminating instance
Jan 31 08:18:28 compute-2 nova_compute[226829]: 2026-01-31 08:18:28.212 226833 DEBUG nova.compute.manager [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:18:28 compute-2 kernel: tap41d7c79a-73 (unregistering): left promiscuous mode
Jan 31 08:18:28 compute-2 NetworkManager[48999]: <info>  [1769847508.5194] device (tap41d7c79a-73): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:18:28 compute-2 nova_compute[226829]: 2026-01-31 08:18:28.519 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:28 compute-2 ovn_controller[133834]: 2026-01-31T08:18:28Z|00597|binding|INFO|Releasing lport 41d7c79a-73d5-466b-ba68-192c5a2c01b3 from this chassis (sb_readonly=0)
Jan 31 08:18:28 compute-2 ovn_controller[133834]: 2026-01-31T08:18:28Z|00598|binding|INFO|Setting lport 41d7c79a-73d5-466b-ba68-192c5a2c01b3 down in Southbound
Jan 31 08:18:28 compute-2 ovn_controller[133834]: 2026-01-31T08:18:28Z|00599|binding|INFO|Removing iface tap41d7c79a-73 ovn-installed in OVS
Jan 31 08:18:28 compute-2 nova_compute[226829]: 2026-01-31 08:18:28.530 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:28 compute-2 nova_compute[226829]: 2026-01-31 08:18:28.536 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:28 compute-2 nova_compute[226829]: 2026-01-31 08:18:28.551 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:18:28.558 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:40:e9 10.100.0.14'], port_security=['fa:16:3e:b7:40:e9 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2c1aa7ad-f9c1-4e05-8261-defaa3eef40b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b47e290-9853-478f-86cb-c8ea73119a97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b06982960ad4453b8e542cb6330835d', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'd12bd0a3-e514-4fde-9351-39f4527fc3f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.232', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46c631ec-3e4c-4c27-ae32-3645d3f29853, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=41d7c79a-73d5-466b-ba68-192c5a2c01b3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:18:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:18:28.562 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 41d7c79a-73d5-466b-ba68-192c5a2c01b3 in datapath 2b47e290-9853-478f-86cb-c8ea73119a97 unbound from our chassis
Jan 31 08:18:28 compute-2 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000089.scope: Deactivated successfully.
Jan 31 08:18:28 compute-2 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000089.scope: Consumed 14.909s CPU time.
Jan 31 08:18:28 compute-2 systemd-machined[195142]: Machine qemu-64-instance-00000089 terminated.
Jan 31 08:18:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:18:28.566 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b47e290-9853-478f-86cb-c8ea73119a97, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:18:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:18:28.570 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f03d5957-92df-4cd3-981c-8d8197c91836]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:18:28.572 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97 namespace which is not needed anymore
Jan 31 08:18:28 compute-2 nova_compute[226829]: 2026-01-31 08:18:28.645 226833 INFO nova.virt.libvirt.driver [-] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Instance destroyed successfully.
Jan 31 08:18:28 compute-2 nova_compute[226829]: 2026-01-31 08:18:28.647 226833 DEBUG nova.objects.instance [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lazy-loading 'resources' on Instance uuid 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:18:28 compute-2 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[290894]: [NOTICE]   (290898) : haproxy version is 2.8.14-c23fe91
Jan 31 08:18:28 compute-2 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[290894]: [NOTICE]   (290898) : path to executable is /usr/sbin/haproxy
Jan 31 08:18:28 compute-2 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[290894]: [WARNING]  (290898) : Exiting Master process...
Jan 31 08:18:28 compute-2 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[290894]: [WARNING]  (290898) : Exiting Master process...
Jan 31 08:18:28 compute-2 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[290894]: [ALERT]    (290898) : Current worker (290900) exited with code 143 (Terminated)
Jan 31 08:18:28 compute-2 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[290894]: [WARNING]  (290898) : All workers exited. Exiting... (0)
Jan 31 08:18:28 compute-2 systemd[1]: libpod-6b6b32fa64eb00011aebc17bd4e5ffaf42d7ff822e33bc903e3b510bbc7b5104.scope: Deactivated successfully.
Jan 31 08:18:28 compute-2 podman[291292]: 2026-01-31 08:18:28.797516591 +0000 UTC m=+0.148644276 container died 6b6b32fa64eb00011aebc17bd4e5ffaf42d7ff822e33bc903e3b510bbc7b5104 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 08:18:28 compute-2 nova_compute[226829]: 2026-01-31 08:18:28.828 226833 DEBUG nova.virt.libvirt.vif [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:16:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1656500391',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1656500391',id=137,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFicZlkmwmJx2QcB2FaCtZWA/EPieHD4tlRsDzixOV+Fehvb9d4YKWopnndvdTu7d1fOEkn5wswwcVr24I+nVYUHx/SYBenHvgz9Ve3+IdDKcppTFyb3Gp0ZC2yG6jAiiQ==',key_name='tempest-keypair-46056576',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:18:04Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3b06982960ad4453b8e542cb6330835d',ramdisk_id='',reservation_id='r-e68k7fwd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-332944999',owner_user_name='tempest-AttachVolumeShelveTestJSON-332944999-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:18:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3b153d2832404e5b9250422b70ba522d',uuid=2c1aa7ad-f9c1-4e05-8261-defaa3eef40b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "address": "fa:16:3e:b7:40:e9", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d7c79a-73", "ovs_interfaceid": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:18:28 compute-2 nova_compute[226829]: 2026-01-31 08:18:28.829 226833 DEBUG nova.network.os_vif_util [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Converting VIF {"id": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "address": "fa:16:3e:b7:40:e9", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d7c79a-73", "ovs_interfaceid": "41d7c79a-73d5-466b-ba68-192c5a2c01b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:18:28 compute-2 nova_compute[226829]: 2026-01-31 08:18:28.830 226833 DEBUG nova.network.os_vif_util [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:40:e9,bridge_name='br-int',has_traffic_filtering=True,id=41d7c79a-73d5-466b-ba68-192c5a2c01b3,network=Network(2b47e290-9853-478f-86cb-c8ea73119a97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41d7c79a-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:18:28 compute-2 nova_compute[226829]: 2026-01-31 08:18:28.831 226833 DEBUG os_vif [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:40:e9,bridge_name='br-int',has_traffic_filtering=True,id=41d7c79a-73d5-466b-ba68-192c5a2c01b3,network=Network(2b47e290-9853-478f-86cb-c8ea73119a97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41d7c79a-73') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:18:28 compute-2 nova_compute[226829]: 2026-01-31 08:18:28.838 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:28 compute-2 nova_compute[226829]: 2026-01-31 08:18:28.838 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41d7c79a-73, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:18:28 compute-2 nova_compute[226829]: 2026-01-31 08:18:28.840 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:28 compute-2 nova_compute[226829]: 2026-01-31 08:18:28.842 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:28 compute-2 nova_compute[226829]: 2026-01-31 08:18:28.850 226833 INFO os_vif [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:40:e9,bridge_name='br-int',has_traffic_filtering=True,id=41d7c79a-73d5-466b-ba68-192c5a2c01b3,network=Network(2b47e290-9853-478f-86cb-c8ea73119a97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41d7c79a-73')
Jan 31 08:18:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:18:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:28.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:18:29 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6b6b32fa64eb00011aebc17bd4e5ffaf42d7ff822e33bc903e3b510bbc7b5104-userdata-shm.mount: Deactivated successfully.
Jan 31 08:18:29 compute-2 systemd[1]: var-lib-containers-storage-overlay-d0ffd06b0b64085e1e164ca79de85a85cd95837af3c2bce41881025726d20744-merged.mount: Deactivated successfully.
Jan 31 08:18:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:29.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:29 compute-2 podman[291292]: 2026-01-31 08:18:29.419979207 +0000 UTC m=+0.771106892 container cleanup 6b6b32fa64eb00011aebc17bd4e5ffaf42d7ff822e33bc903e3b510bbc7b5104 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 08:18:29 compute-2 systemd[1]: libpod-conmon-6b6b32fa64eb00011aebc17bd4e5ffaf42d7ff822e33bc903e3b510bbc7b5104.scope: Deactivated successfully.
Jan 31 08:18:29 compute-2 nova_compute[226829]: 2026-01-31 08:18:29.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:29 compute-2 nova_compute[226829]: 2026-01-31 08:18:29.523 226833 DEBUG nova.compute.manager [req-608caf02-3d9a-4442-b5ed-9a6ee0c8b169 req-f28df866-d30d-4896-91a6-f87921faa869 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Received event network-vif-unplugged-41d7c79a-73d5-466b-ba68-192c5a2c01b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:18:29 compute-2 nova_compute[226829]: 2026-01-31 08:18:29.523 226833 DEBUG oslo_concurrency.lockutils [req-608caf02-3d9a-4442-b5ed-9a6ee0c8b169 req-f28df866-d30d-4896-91a6-f87921faa869 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:29 compute-2 nova_compute[226829]: 2026-01-31 08:18:29.524 226833 DEBUG oslo_concurrency.lockutils [req-608caf02-3d9a-4442-b5ed-9a6ee0c8b169 req-f28df866-d30d-4896-91a6-f87921faa869 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:29 compute-2 nova_compute[226829]: 2026-01-31 08:18:29.524 226833 DEBUG oslo_concurrency.lockutils [req-608caf02-3d9a-4442-b5ed-9a6ee0c8b169 req-f28df866-d30d-4896-91a6-f87921faa869 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:29 compute-2 nova_compute[226829]: 2026-01-31 08:18:29.524 226833 DEBUG nova.compute.manager [req-608caf02-3d9a-4442-b5ed-9a6ee0c8b169 req-f28df866-d30d-4896-91a6-f87921faa869 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] No waiting events found dispatching network-vif-unplugged-41d7c79a-73d5-466b-ba68-192c5a2c01b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:18:29 compute-2 nova_compute[226829]: 2026-01-31 08:18:29.524 226833 DEBUG nova.compute.manager [req-608caf02-3d9a-4442-b5ed-9a6ee0c8b169 req-f28df866-d30d-4896-91a6-f87921faa869 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Received event network-vif-unplugged-41d7c79a-73d5-466b-ba68-192c5a2c01b3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:18:29 compute-2 podman[291346]: 2026-01-31 08:18:29.777417756 +0000 UTC m=+0.340569923 container remove 6b6b32fa64eb00011aebc17bd4e5ffaf42d7ff822e33bc903e3b510bbc7b5104 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:18:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:18:29.783 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e9e841aa-c911-40d9-8dcb-76ad60703e7a]: (4, ('Sat Jan 31 08:18:28 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97 (6b6b32fa64eb00011aebc17bd4e5ffaf42d7ff822e33bc903e3b510bbc7b5104)\n6b6b32fa64eb00011aebc17bd4e5ffaf42d7ff822e33bc903e3b510bbc7b5104\nSat Jan 31 08:18:29 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97 (6b6b32fa64eb00011aebc17bd4e5ffaf42d7ff822e33bc903e3b510bbc7b5104)\n6b6b32fa64eb00011aebc17bd4e5ffaf42d7ff822e33bc903e3b510bbc7b5104\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:18:29.786 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e1fe2a26-1553-42cb-bbbe-69ef9f5e3606]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:18:29.789 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b47e290-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:18:29 compute-2 kernel: tap2b47e290-90: left promiscuous mode
Jan 31 08:18:29 compute-2 nova_compute[226829]: 2026-01-31 08:18:29.793 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:29 compute-2 nova_compute[226829]: 2026-01-31 08:18:29.801 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:18:29.804 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[773caadd-36cf-4c4f-86ce-d82b35778640]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:18:29.838 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[de45b604-b70b-49fb-98d7-b43a81f6557b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:18:29.840 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[62c419b6-9242-44b1-97e6-db1a6f2ec04e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:18:29.859 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[174f9751-4c45-4335-b27a-74b55b43f38e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 773226, 'reachable_time': 36742, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291363, 'error': None, 'target': 'ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:29 compute-2 systemd[1]: run-netns-ovnmeta\x2d2b47e290\x2d9853\x2d478f\x2d86cb\x2dc8ea73119a97.mount: Deactivated successfully.
Jan 31 08:18:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:18:29.868 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:18:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:18:29.869 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[656be7af-5e9e-4473-b474-74a21aed87c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e326 e326: 3 total, 3 up, 3 in
Jan 31 08:18:30 compute-2 ceph-mon[77282]: pgmap v2512: 305 pgs: 305 active+clean; 387 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 331 op/s
Jan 31 08:18:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:30.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:31 compute-2 nova_compute[226829]: 2026-01-31 08:18:31.238 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:31.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:18:31.498 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:18:31 compute-2 nova_compute[226829]: 2026-01-31 08:18:31.498 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:18:31.500 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:18:31 compute-2 nova_compute[226829]: 2026-01-31 08:18:31.696 226833 DEBUG nova.compute.manager [req-38722338-0a04-4faf-bad1-283da9a80e53 req-6f996a59-92e3-4d8a-8091-00814c5bf450 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Received event network-vif-plugged-41d7c79a-73d5-466b-ba68-192c5a2c01b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:18:31 compute-2 nova_compute[226829]: 2026-01-31 08:18:31.697 226833 DEBUG oslo_concurrency.lockutils [req-38722338-0a04-4faf-bad1-283da9a80e53 req-6f996a59-92e3-4d8a-8091-00814c5bf450 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:31 compute-2 nova_compute[226829]: 2026-01-31 08:18:31.698 226833 DEBUG oslo_concurrency.lockutils [req-38722338-0a04-4faf-bad1-283da9a80e53 req-6f996a59-92e3-4d8a-8091-00814c5bf450 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:31 compute-2 nova_compute[226829]: 2026-01-31 08:18:31.698 226833 DEBUG oslo_concurrency.lockutils [req-38722338-0a04-4faf-bad1-283da9a80e53 req-6f996a59-92e3-4d8a-8091-00814c5bf450 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:31 compute-2 nova_compute[226829]: 2026-01-31 08:18:31.699 226833 DEBUG nova.compute.manager [req-38722338-0a04-4faf-bad1-283da9a80e53 req-6f996a59-92e3-4d8a-8091-00814c5bf450 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] No waiting events found dispatching network-vif-plugged-41d7c79a-73d5-466b-ba68-192c5a2c01b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:18:31 compute-2 nova_compute[226829]: 2026-01-31 08:18:31.699 226833 WARNING nova.compute.manager [req-38722338-0a04-4faf-bad1-283da9a80e53 req-6f996a59-92e3-4d8a-8091-00814c5bf450 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Received unexpected event network-vif-plugged-41d7c79a-73d5-466b-ba68-192c5a2c01b3 for instance with vm_state active and task_state deleting.
Jan 31 08:18:31 compute-2 ceph-mon[77282]: osdmap e326: 3 total, 3 up, 3 in
Jan 31 08:18:32 compute-2 nova_compute[226829]: 2026-01-31 08:18:32.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:32 compute-2 nova_compute[226829]: 2026-01-31 08:18:32.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:18:32 compute-2 nova_compute[226829]: 2026-01-31 08:18:32.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:18:32 compute-2 nova_compute[226829]: 2026-01-31 08:18:32.541 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 31 08:18:32 compute-2 nova_compute[226829]: 2026-01-31 08:18:32.541 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:18:32 compute-2 nova_compute[226829]: 2026-01-31 08:18:32.542 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:32 compute-2 ceph-mon[77282]: pgmap v2514: 305 pgs: 305 active+clean; 361 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 167 KiB/s rd, 578 KiB/s wr, 234 op/s
Jan 31 08:18:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:18:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:32.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:18:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:33.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:33 compute-2 nova_compute[226829]: 2026-01-31 08:18:33.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:33 compute-2 nova_compute[226829]: 2026-01-31 08:18:33.841 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:34 compute-2 ceph-mon[77282]: pgmap v2515: 305 pgs: 305 active+clean; 347 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 278 KiB/s rd, 1.9 MiB/s wr, 111 op/s
Jan 31 08:18:34 compute-2 nova_compute[226829]: 2026-01-31 08:18:34.660 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:34.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:35.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:35 compute-2 sudo[291368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:18:35 compute-2 sudo[291368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:35 compute-2 sudo[291368]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:35 compute-2 sudo[291395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:18:35 compute-2 sudo[291395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:35 compute-2 sudo[291395]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:18:35.502 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:18:35 compute-2 nova_compute[226829]: 2026-01-31 08:18:35.531 226833 INFO nova.virt.libvirt.driver [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Deleting instance files /var/lib/nova/instances/2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_del
Jan 31 08:18:35 compute-2 nova_compute[226829]: 2026-01-31 08:18:35.532 226833 INFO nova.virt.libvirt.driver [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Deletion of /var/lib/nova/instances/2c1aa7ad-f9c1-4e05-8261-defaa3eef40b_del complete
Jan 31 08:18:35 compute-2 podman[291392]: 2026-01-31 08:18:35.591851425 +0000 UTC m=+0.197227542 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 08:18:35 compute-2 nova_compute[226829]: 2026-01-31 08:18:35.803 226833 INFO nova.compute.manager [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Took 7.59 seconds to destroy the instance on the hypervisor.
Jan 31 08:18:35 compute-2 nova_compute[226829]: 2026-01-31 08:18:35.804 226833 DEBUG oslo.service.loopingcall [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:18:35 compute-2 nova_compute[226829]: 2026-01-31 08:18:35.804 226833 DEBUG nova.compute.manager [-] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:18:35 compute-2 nova_compute[226829]: 2026-01-31 08:18:35.805 226833 DEBUG nova.network.neutron [-] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:18:35 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:18:35 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:18:36 compute-2 nova_compute[226829]: 2026-01-31 08:18:36.240 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:36.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:37.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:37 compute-2 ceph-mon[77282]: pgmap v2516: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 389 KiB/s rd, 2.9 MiB/s wr, 151 op/s
Jan 31 08:18:37 compute-2 nova_compute[226829]: 2026-01-31 08:18:37.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:38 compute-2 ceph-mon[77282]: pgmap v2517: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 429 KiB/s rd, 2.5 MiB/s wr, 139 op/s
Jan 31 08:18:38 compute-2 nova_compute[226829]: 2026-01-31 08:18:38.843 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:18:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:38.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:18:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:39.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:39 compute-2 nova_compute[226829]: 2026-01-31 08:18:39.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2544434311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:18:39 compute-2 nova_compute[226829]: 2026-01-31 08:18:39.803 226833 DEBUG nova.network.neutron [-] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:18:39 compute-2 nova_compute[226829]: 2026-01-31 08:18:39.861 226833 INFO nova.compute.manager [-] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Took 4.06 seconds to deallocate network for instance.
Jan 31 08:18:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:40 compute-2 nova_compute[226829]: 2026-01-31 08:18:40.000 226833 DEBUG oslo_concurrency.lockutils [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:40 compute-2 nova_compute[226829]: 2026-01-31 08:18:40.001 226833 DEBUG oslo_concurrency.lockutils [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:40 compute-2 nova_compute[226829]: 2026-01-31 08:18:40.043 226833 DEBUG nova.scheduler.client.report [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Refreshing inventories for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:18:40 compute-2 nova_compute[226829]: 2026-01-31 08:18:40.057 226833 DEBUG nova.scheduler.client.report [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Updating ProviderTree inventory for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:18:40 compute-2 nova_compute[226829]: 2026-01-31 08:18:40.058 226833 DEBUG nova.compute.provider_tree [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Updating inventory in ProviderTree for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:18:40 compute-2 nova_compute[226829]: 2026-01-31 08:18:40.069 226833 DEBUG nova.scheduler.client.report [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Refreshing aggregate associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:18:40 compute-2 nova_compute[226829]: 2026-01-31 08:18:40.104 226833 DEBUG nova.scheduler.client.report [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Refreshing trait associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, traits: COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:18:40 compute-2 nova_compute[226829]: 2026-01-31 08:18:40.161 226833 DEBUG oslo_concurrency.processutils [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:18:40 compute-2 ceph-mon[77282]: pgmap v2518: 305 pgs: 305 active+clean; 270 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 437 KiB/s rd, 2.6 MiB/s wr, 159 op/s
Jan 31 08:18:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1463827182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:18:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3198695922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:18:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:18:40 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2614889670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:18:40 compute-2 nova_compute[226829]: 2026-01-31 08:18:40.922 226833 DEBUG oslo_concurrency.processutils [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.761s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:18:40 compute-2 nova_compute[226829]: 2026-01-31 08:18:40.939 226833 DEBUG nova.compute.provider_tree [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:18:40 compute-2 nova_compute[226829]: 2026-01-31 08:18:40.982 226833 DEBUG nova.scheduler.client.report [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:18:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:18:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:40.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:18:41 compute-2 nova_compute[226829]: 2026-01-31 08:18:41.062 226833 DEBUG oslo_concurrency.lockutils [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:41 compute-2 nova_compute[226829]: 2026-01-31 08:18:41.107 226833 INFO nova.scheduler.client.report [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Deleted allocations for instance 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b
Jan 31 08:18:41 compute-2 nova_compute[226829]: 2026-01-31 08:18:41.217 226833 DEBUG oslo_concurrency.lockutils [None req-4ef0f27d-f39a-468d-8c0f-6f600debef39 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "2c1aa7ad-f9c1-4e05-8261-defaa3eef40b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 13.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:41 compute-2 nova_compute[226829]: 2026-01-31 08:18:41.244 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:41.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:41 compute-2 nova_compute[226829]: 2026-01-31 08:18:41.289 226833 DEBUG nova.compute.manager [req-2b0a7b0d-b138-44e3-8443-b9059b404241 req-60e479a7-be68-4d0a-bc38-c1c6b3b6b550 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Received event network-vif-deleted-41d7c79a-73d5-466b-ba68-192c5a2c01b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:18:41 compute-2 nova_compute[226829]: 2026-01-31 08:18:41.458 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:41 compute-2 nova_compute[226829]: 2026-01-31 08:18:41.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:41 compute-2 nova_compute[226829]: 2026-01-31 08:18:41.526 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:41 compute-2 nova_compute[226829]: 2026-01-31 08:18:41.526 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:41 compute-2 nova_compute[226829]: 2026-01-31 08:18:41.527 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:41 compute-2 nova_compute[226829]: 2026-01-31 08:18:41.527 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:18:41 compute-2 nova_compute[226829]: 2026-01-31 08:18:41.528 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:18:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:18:41 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/441361465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:18:41 compute-2 nova_compute[226829]: 2026-01-31 08:18:41.956 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:18:42 compute-2 nova_compute[226829]: 2026-01-31 08:18:42.090 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:18:42 compute-2 nova_compute[226829]: 2026-01-31 08:18:42.091 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4184MB free_disk=20.926910400390625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:18:42 compute-2 nova_compute[226829]: 2026-01-31 08:18:42.091 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:42 compute-2 nova_compute[226829]: 2026-01-31 08:18:42.092 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:42 compute-2 nova_compute[226829]: 2026-01-31 08:18:42.266 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:18:42 compute-2 nova_compute[226829]: 2026-01-31 08:18:42.266 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:18:42 compute-2 nova_compute[226829]: 2026-01-31 08:18:42.345 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:18:42 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:18:42 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3826855599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:18:42 compute-2 nova_compute[226829]: 2026-01-31 08:18:42.749 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:18:42 compute-2 nova_compute[226829]: 2026-01-31 08:18:42.755 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:18:42 compute-2 nova_compute[226829]: 2026-01-31 08:18:42.784 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:18:42 compute-2 nova_compute[226829]: 2026-01-31 08:18:42.909 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:18:42 compute-2 nova_compute[226829]: 2026-01-31 08:18:42.909 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:43.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e327 e327: 3 total, 3 up, 3 in
Jan 31 08:18:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:43.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2614889670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:18:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2356523471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:18:43 compute-2 nova_compute[226829]: 2026-01-31 08:18:43.645 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847508.6425521, 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:18:43 compute-2 nova_compute[226829]: 2026-01-31 08:18:43.646 226833 INFO nova.compute.manager [-] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] VM Stopped (Lifecycle Event)
Jan 31 08:18:43 compute-2 nova_compute[226829]: 2026-01-31 08:18:43.752 226833 DEBUG nova.compute.manager [None req-1d85f050-1d86-408b-9a03-774beb358f9a - - - - - -] [instance: 2c1aa7ad-f9c1-4e05-8261-defaa3eef40b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:18:43 compute-2 nova_compute[226829]: 2026-01-31 08:18:43.847 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:44 compute-2 ceph-mon[77282]: pgmap v2519: 305 pgs: 305 active+clean; 248 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 376 KiB/s rd, 2.2 MiB/s wr, 139 op/s
Jan 31 08:18:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/441361465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:18:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3826855599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:18:44 compute-2 ceph-mon[77282]: osdmap e327: 3 total, 3 up, 3 in
Jan 31 08:18:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3652022244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:18:44 compute-2 ceph-mon[77282]: pgmap v2521: 305 pgs: 305 active+clean; 248 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 249 KiB/s rd, 1.0 MiB/s wr, 104 op/s
Jan 31 08:18:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e328 e328: 3 total, 3 up, 3 in
Jan 31 08:18:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:45.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:18:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:45.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:18:45 compute-2 ceph-mon[77282]: osdmap e328: 3 total, 3 up, 3 in
Jan 31 08:18:46 compute-2 podman[291519]: 2026-01-31 08:18:46.193495367 +0000 UTC m=+0.073927504 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 08:18:46 compute-2 nova_compute[226829]: 2026-01-31 08:18:46.245 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:46 compute-2 ceph-mon[77282]: pgmap v2523: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 218 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 69 KiB/s wr, 55 op/s
Jan 31 08:18:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:47.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:47 compute-2 sudo[291536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:18:47 compute-2 sudo[291536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:47 compute-2 sudo[291536]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:47 compute-2 sudo[291561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:18:47 compute-2 sudo[291561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:47 compute-2 sudo[291561]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:18:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:47.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:18:48 compute-2 ceph-mon[77282]: pgmap v2524: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 185 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 24 KiB/s wr, 38 op/s
Jan 31 08:18:48 compute-2 nova_compute[226829]: 2026-01-31 08:18:48.850 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:18:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:49.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:18:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:49.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:50 compute-2 ceph-mon[77282]: pgmap v2525: 305 pgs: 305 active+clean; 144 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 21 KiB/s wr, 53 op/s
Jan 31 08:18:50 compute-2 nova_compute[226829]: 2026-01-31 08:18:50.519 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:18:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:51.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:18:51 compute-2 nova_compute[226829]: 2026-01-31 08:18:51.247 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:18:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:51.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:18:51 compute-2 nova_compute[226829]: 2026-01-31 08:18:51.910 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:51 compute-2 nova_compute[226829]: 2026-01-31 08:18:51.911 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:18:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e329 e329: 3 total, 3 up, 3 in
Jan 31 08:18:52 compute-2 ceph-mon[77282]: pgmap v2526: 305 pgs: 305 active+clean; 125 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 19 KiB/s wr, 64 op/s
Jan 31 08:18:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:53.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:18:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:53.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:18:53 compute-2 ceph-mon[77282]: osdmap e329: 3 total, 3 up, 3 in
Jan 31 08:18:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3720147076' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:18:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3720147076' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:18:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/467398597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:18:53 compute-2 nova_compute[226829]: 2026-01-31 08:18:53.852 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:54 compute-2 ceph-mon[77282]: pgmap v2528: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 18 KiB/s wr, 60 op/s
Jan 31 08:18:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:18:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:55.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:18:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:18:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:55.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:18:56 compute-2 nova_compute[226829]: 2026-01-31 08:18:56.296 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:56 compute-2 ceph-mon[77282]: pgmap v2529: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 3.4 KiB/s wr, 49 op/s
Jan 31 08:18:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:57.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:57.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:57 compute-2 nova_compute[226829]: 2026-01-31 08:18:57.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:58 compute-2 nova_compute[226829]: 2026-01-31 08:18:58.855 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:59 compute-2 ceph-mon[77282]: pgmap v2530: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 2.6 KiB/s wr, 34 op/s
Jan 31 08:18:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:59.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:18:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:59.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3710704562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:00 compute-2 nova_compute[226829]: 2026-01-31 08:19:00.308 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:01.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:01 compute-2 ceph-mon[77282]: pgmap v2531: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Jan 31 08:19:01 compute-2 nova_compute[226829]: 2026-01-31 08:19:01.298 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:19:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:01.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:19:02 compute-2 ceph-mon[77282]: pgmap v2532: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 1.9 KiB/s wr, 16 op/s
Jan 31 08:19:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:03.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:19:03 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1771961321' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:19:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:19:03 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1771961321' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:19:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e330 e330: 3 total, 3 up, 3 in
Jan 31 08:19:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/681996875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1771961321' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:19:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1771961321' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:19:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:03.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:03 compute-2 nova_compute[226829]: 2026-01-31 08:19:03.858 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:04 compute-2 nova_compute[226829]: 2026-01-31 08:19:04.186 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:04 compute-2 nova_compute[226829]: 2026-01-31 08:19:04.262 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:04 compute-2 ceph-mon[77282]: osdmap e330: 3 total, 3 up, 3 in
Jan 31 08:19:04 compute-2 ceph-mon[77282]: pgmap v2534: 305 pgs: 305 active+clean; 108 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 165 KiB/s wr, 39 op/s
Jan 31 08:19:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:05.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:05.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:06 compute-2 podman[291597]: 2026-01-31 08:19:06.164156301 +0000 UTC m=+0.055233337 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 31 08:19:06 compute-2 nova_compute[226829]: 2026-01-31 08:19:06.301 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:19:06.896 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:19:06.897 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:19:06.897 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:07 compute-2 ceph-mon[77282]: pgmap v2535: 305 pgs: 305 active+clean; 119 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 4.2 MiB/s wr, 82 op/s
Jan 31 08:19:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4218099084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:07.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:19:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:07.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:19:07 compute-2 sudo[291623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:19:07 compute-2 sudo[291623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:07 compute-2 sudo[291623]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:07 compute-2 sudo[291648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:19:07 compute-2 sudo[291648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:07 compute-2 sudo[291648]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1216326437' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:08 compute-2 ceph-mon[77282]: pgmap v2536: 305 pgs: 305 active+clean; 108 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 4.2 MiB/s wr, 87 op/s
Jan 31 08:19:08 compute-2 nova_compute[226829]: 2026-01-31 08:19:08.860 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:19:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:09.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:19:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:09.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:10 compute-2 ceph-mon[77282]: pgmap v2537: 305 pgs: 305 active+clean; 108 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 4.2 MiB/s wr, 78 op/s
Jan 31 08:19:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:11.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:11 compute-2 nova_compute[226829]: 2026-01-31 08:19:11.303 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:11.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1012524099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:12 compute-2 ceph-mon[77282]: pgmap v2538: 305 pgs: 305 active+clean; 108 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 4.2 MiB/s wr, 76 op/s
Jan 31 08:19:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:13.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:13.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:13 compute-2 nova_compute[226829]: 2026-01-31 08:19:13.864 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:15 compute-2 ceph-mon[77282]: pgmap v2539: 305 pgs: 305 active+clean; 108 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 105 KiB/s rd, 3.8 MiB/s wr, 58 op/s
Jan 31 08:19:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:15.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:15.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:16 compute-2 ceph-mon[77282]: pgmap v2540: 305 pgs: 305 active+clean; 109 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.4 MiB/s wr, 118 op/s
Jan 31 08:19:16 compute-2 nova_compute[226829]: 2026-01-31 08:19:16.304 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:17.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:17 compute-2 podman[291678]: 2026-01-31 08:19:17.155075031 +0000 UTC m=+0.047378744 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 08:19:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:19:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:17.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:19:18 compute-2 nova_compute[226829]: 2026-01-31 08:19:18.867 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:18 compute-2 ceph-mon[77282]: pgmap v2541: 305 pgs: 305 active+clean; 134 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 898 KiB/s wr, 99 op/s
Jan 31 08:19:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1069976186' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:19.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:19.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3549304386' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:21 compute-2 ceph-mon[77282]: pgmap v2542: 305 pgs: 305 active+clean; 155 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Jan 31 08:19:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:21.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:21 compute-2 nova_compute[226829]: 2026-01-31 08:19:21.306 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:19:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:21.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:19:22 compute-2 ceph-mon[77282]: pgmap v2543: 305 pgs: 305 active+clean; 155 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Jan 31 08:19:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:19:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:23.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:19:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:19:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:23.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:19:23 compute-2 nova_compute[226829]: 2026-01-31 08:19:23.870 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:24 compute-2 nova_compute[226829]: 2026-01-31 08:19:24.684 226833 DEBUG oslo_concurrency.lockutils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Acquiring lock "e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:24 compute-2 nova_compute[226829]: 2026-01-31 08:19:24.684 226833 DEBUG oslo_concurrency.lockutils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Lock "e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:24 compute-2 nova_compute[226829]: 2026-01-31 08:19:24.709 226833 DEBUG nova.compute.manager [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:19:24 compute-2 nova_compute[226829]: 2026-01-31 08:19:24.844 226833 DEBUG oslo_concurrency.lockutils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:24 compute-2 nova_compute[226829]: 2026-01-31 08:19:24.845 226833 DEBUG oslo_concurrency.lockutils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:24 compute-2 nova_compute[226829]: 2026-01-31 08:19:24.850 226833 DEBUG nova.virt.hardware [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:19:24 compute-2 nova_compute[226829]: 2026-01-31 08:19:24.851 226833 INFO nova.compute.claims [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:19:24 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:19:24.988 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:19:24 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:19:24.989 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:19:25 compute-2 ceph-mon[77282]: pgmap v2544: 305 pgs: 305 active+clean; 155 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.8 MiB/s wr, 116 op/s
Jan 31 08:19:25 compute-2 nova_compute[226829]: 2026-01-31 08:19:25.003 226833 DEBUG oslo_concurrency.processutils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:25 compute-2 nova_compute[226829]: 2026-01-31 08:19:25.042 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:25.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:25.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:19:25 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3709655572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:25 compute-2 nova_compute[226829]: 2026-01-31 08:19:25.484 226833 DEBUG oslo_concurrency.processutils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:25 compute-2 nova_compute[226829]: 2026-01-31 08:19:25.491 226833 DEBUG nova.compute.provider_tree [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:19:25 compute-2 nova_compute[226829]: 2026-01-31 08:19:25.531 226833 DEBUG nova.scheduler.client.report [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:19:25 compute-2 nova_compute[226829]: 2026-01-31 08:19:25.566 226833 DEBUG oslo_concurrency.lockutils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:25 compute-2 nova_compute[226829]: 2026-01-31 08:19:25.567 226833 DEBUG nova.compute.manager [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:19:25 compute-2 nova_compute[226829]: 2026-01-31 08:19:25.645 226833 DEBUG nova.compute.manager [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 31 08:19:25 compute-2 nova_compute[226829]: 2026-01-31 08:19:25.673 226833 INFO nova.virt.libvirt.driver [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:19:25 compute-2 nova_compute[226829]: 2026-01-31 08:19:25.965 226833 DEBUG nova.compute.manager [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:19:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3709655572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:26 compute-2 nova_compute[226829]: 2026-01-31 08:19:26.163 226833 DEBUG nova.compute.manager [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:19:26 compute-2 nova_compute[226829]: 2026-01-31 08:19:26.165 226833 DEBUG nova.virt.libvirt.driver [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:19:26 compute-2 nova_compute[226829]: 2026-01-31 08:19:26.165 226833 INFO nova.virt.libvirt.driver [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Creating image(s)
Jan 31 08:19:26 compute-2 nova_compute[226829]: 2026-01-31 08:19:26.212 226833 DEBUG nova.storage.rbd_utils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] rbd image e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:19:26 compute-2 nova_compute[226829]: 2026-01-31 08:19:26.242 226833 DEBUG nova.storage.rbd_utils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] rbd image e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:19:26 compute-2 nova_compute[226829]: 2026-01-31 08:19:26.274 226833 DEBUG nova.storage.rbd_utils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] rbd image e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:19:26 compute-2 nova_compute[226829]: 2026-01-31 08:19:26.279 226833 DEBUG oslo_concurrency.processutils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:26 compute-2 nova_compute[226829]: 2026-01-31 08:19:26.310 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:26 compute-2 nova_compute[226829]: 2026-01-31 08:19:26.335 226833 DEBUG oslo_concurrency.processutils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:26 compute-2 nova_compute[226829]: 2026-01-31 08:19:26.336 226833 DEBUG oslo_concurrency.lockutils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:26 compute-2 nova_compute[226829]: 2026-01-31 08:19:26.337 226833 DEBUG oslo_concurrency.lockutils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:26 compute-2 nova_compute[226829]: 2026-01-31 08:19:26.337 226833 DEBUG oslo_concurrency.lockutils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:26 compute-2 nova_compute[226829]: 2026-01-31 08:19:26.369 226833 DEBUG nova.storage.rbd_utils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] rbd image e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:19:26 compute-2 nova_compute[226829]: 2026-01-31 08:19:26.374 226833 DEBUG oslo_concurrency.processutils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:26 compute-2 nova_compute[226829]: 2026-01-31 08:19:26.715 226833 DEBUG oslo_concurrency.processutils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:26 compute-2 nova_compute[226829]: 2026-01-31 08:19:26.812 226833 DEBUG nova.storage.rbd_utils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] resizing rbd image e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:19:26 compute-2 nova_compute[226829]: 2026-01-31 08:19:26.913 226833 DEBUG nova.objects.instance [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Lazy-loading 'migration_context' on Instance uuid e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.005 226833 DEBUG nova.virt.libvirt.driver [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.006 226833 DEBUG nova.virt.libvirt.driver [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Ensure instance console log exists: /var/lib/nova/instances/e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.006 226833 DEBUG oslo_concurrency.lockutils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.006 226833 DEBUG oslo_concurrency.lockutils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.007 226833 DEBUG oslo_concurrency.lockutils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.008 226833 DEBUG nova.virt.libvirt.driver [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.012 226833 WARNING nova.virt.libvirt.driver [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:19:27 compute-2 ceph-mon[77282]: pgmap v2545: 305 pgs: 305 active+clean; 176 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 3.4 MiB/s wr, 206 op/s
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.036 226833 DEBUG nova.virt.libvirt.host [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.036 226833 DEBUG nova.virt.libvirt.host [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.040 226833 DEBUG nova.virt.libvirt.host [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.041 226833 DEBUG nova.virt.libvirt.host [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.042 226833 DEBUG nova.virt.libvirt.driver [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.042 226833 DEBUG nova.virt.hardware [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.043 226833 DEBUG nova.virt.hardware [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.043 226833 DEBUG nova.virt.hardware [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.043 226833 DEBUG nova.virt.hardware [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.043 226833 DEBUG nova.virt.hardware [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.044 226833 DEBUG nova.virt.hardware [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.044 226833 DEBUG nova.virt.hardware [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.044 226833 DEBUG nova.virt.hardware [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.044 226833 DEBUG nova.virt.hardware [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.045 226833 DEBUG nova.virt.hardware [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.045 226833 DEBUG nova.virt.hardware [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.048 226833 DEBUG oslo_concurrency.processutils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:19:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:27.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:19:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:19:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:27.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:19:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:19:27 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2092078173' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.448 226833 DEBUG oslo_concurrency.processutils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:27 compute-2 sudo[291912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:19:27 compute-2 sudo[291912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:27 compute-2 sudo[291912]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.481 226833 DEBUG nova.storage.rbd_utils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] rbd image e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.484 226833 DEBUG oslo_concurrency.processutils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:27 compute-2 sudo[291954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:19:27 compute-2 sudo[291954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:27 compute-2 sudo[291954]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.579 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:19:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:19:27 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/70020733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.908 226833 DEBUG oslo_concurrency.processutils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.911 226833 DEBUG nova.objects.instance [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Lazy-loading 'pci_devices' on Instance uuid e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.943 226833 DEBUG nova.virt.libvirt.driver [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:19:27 compute-2 nova_compute[226829]:   <uuid>e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8</uuid>
Jan 31 08:19:27 compute-2 nova_compute[226829]:   <name>instance-0000008e</name>
Jan 31 08:19:27 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:19:27 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:19:27 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:19:27 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:       <nova:name>tempest-ServersAaction247Test-server-957223132</nova:name>
Jan 31 08:19:27 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:19:27</nova:creationTime>
Jan 31 08:19:27 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:19:27 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:19:27 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:19:27 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:19:27 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:19:27 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:19:27 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:19:27 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:19:27 compute-2 nova_compute[226829]:         <nova:user uuid="35ab955828314bdc9adf0dff227f0b71">tempest-ServersAaction247Test-1991115500-project-member</nova:user>
Jan 31 08:19:27 compute-2 nova_compute[226829]:         <nova:project uuid="6bd585668c8f42ad9197c73d4bd2ca6f">tempest-ServersAaction247Test-1991115500</nova:project>
Jan 31 08:19:27 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:19:27 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:       <nova:ports/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:19:27 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:19:27 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <system>
Jan 31 08:19:27 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:19:27 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:19:27 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:19:27 compute-2 nova_compute[226829]:       <entry name="serial">e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8</entry>
Jan 31 08:19:27 compute-2 nova_compute[226829]:       <entry name="uuid">e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8</entry>
Jan 31 08:19:27 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     </system>
Jan 31 08:19:27 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:19:27 compute-2 nova_compute[226829]:   <os>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:   </os>
Jan 31 08:19:27 compute-2 nova_compute[226829]:   <features>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:   </features>
Jan 31 08:19:27 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:19:27 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:19:27 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:19:27 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8_disk">
Jan 31 08:19:27 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:       </source>
Jan 31 08:19:27 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:19:27 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:19:27 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:19:27 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8_disk.config">
Jan 31 08:19:27 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:       </source>
Jan 31 08:19:27 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:19:27 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:19:27 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:19:27 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8/console.log" append="off"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <video>
Jan 31 08:19:27 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     </video>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:19:27 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:19:27 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:19:27 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:19:27 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:19:27 compute-2 nova_compute[226829]: </domain>
Jan 31 08:19:27 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.989 226833 DEBUG nova.virt.libvirt.driver [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.989 226833 DEBUG nova.virt.libvirt.driver [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:19:27 compute-2 nova_compute[226829]: 2026-01-31 08:19:27.990 226833 INFO nova.virt.libvirt.driver [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Using config drive
Jan 31 08:19:28 compute-2 nova_compute[226829]: 2026-01-31 08:19:28.011 226833 DEBUG nova.storage.rbd_utils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] rbd image e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:19:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2092078173' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/70020733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:28 compute-2 ceph-mon[77282]: pgmap v2546: 305 pgs: 305 active+clean; 192 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.2 MiB/s wr, 168 op/s
Jan 31 08:19:28 compute-2 nova_compute[226829]: 2026-01-31 08:19:28.629 226833 INFO nova.virt.libvirt.driver [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Creating config drive at /var/lib/nova/instances/e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8/disk.config
Jan 31 08:19:28 compute-2 nova_compute[226829]: 2026-01-31 08:19:28.635 226833 DEBUG oslo_concurrency.processutils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpe586yu6v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:28 compute-2 nova_compute[226829]: 2026-01-31 08:19:28.759 226833 DEBUG oslo_concurrency.processutils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpe586yu6v" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:28 compute-2 nova_compute[226829]: 2026-01-31 08:19:28.790 226833 DEBUG nova.storage.rbd_utils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] rbd image e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:19:28 compute-2 nova_compute[226829]: 2026-01-31 08:19:28.793 226833 DEBUG oslo_concurrency.processutils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8/disk.config e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:28 compute-2 nova_compute[226829]: 2026-01-31 08:19:28.871 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:28 compute-2 nova_compute[226829]: 2026-01-31 08:19:28.975 226833 DEBUG oslo_concurrency.processutils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8/disk.config e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:28 compute-2 nova_compute[226829]: 2026-01-31 08:19:28.976 226833 INFO nova.virt.libvirt.driver [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Deleting local config drive /var/lib/nova/instances/e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8/disk.config because it was imported into RBD.
Jan 31 08:19:29 compute-2 systemd-machined[195142]: New machine qemu-65-instance-0000008e.
Jan 31 08:19:29 compute-2 systemd[1]: Started Virtual Machine qemu-65-instance-0000008e.
Jan 31 08:19:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:29.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:29.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:29 compute-2 nova_compute[226829]: 2026-01-31 08:19:29.453 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847569.4526415, e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:19:29 compute-2 nova_compute[226829]: 2026-01-31 08:19:29.454 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] VM Resumed (Lifecycle Event)
Jan 31 08:19:29 compute-2 nova_compute[226829]: 2026-01-31 08:19:29.457 226833 DEBUG nova.compute.manager [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:19:29 compute-2 nova_compute[226829]: 2026-01-31 08:19:29.457 226833 DEBUG nova.virt.libvirt.driver [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:19:29 compute-2 nova_compute[226829]: 2026-01-31 08:19:29.460 226833 INFO nova.virt.libvirt.driver [-] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Instance spawned successfully.
Jan 31 08:19:29 compute-2 nova_compute[226829]: 2026-01-31 08:19:29.461 226833 DEBUG nova.virt.libvirt.driver [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:19:29 compute-2 nova_compute[226829]: 2026-01-31 08:19:29.484 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:19:29 compute-2 nova_compute[226829]: 2026-01-31 08:19:29.487 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:19:29 compute-2 nova_compute[226829]: 2026-01-31 08:19:29.533 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:19:29 compute-2 nova_compute[226829]: 2026-01-31 08:19:29.533 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847569.4533882, e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:19:29 compute-2 nova_compute[226829]: 2026-01-31 08:19:29.534 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] VM Started (Lifecycle Event)
Jan 31 08:19:29 compute-2 nova_compute[226829]: 2026-01-31 08:19:29.539 226833 DEBUG nova.virt.libvirt.driver [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:19:29 compute-2 nova_compute[226829]: 2026-01-31 08:19:29.539 226833 DEBUG nova.virt.libvirt.driver [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:19:29 compute-2 nova_compute[226829]: 2026-01-31 08:19:29.540 226833 DEBUG nova.virt.libvirt.driver [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:19:29 compute-2 nova_compute[226829]: 2026-01-31 08:19:29.540 226833 DEBUG nova.virt.libvirt.driver [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:19:29 compute-2 nova_compute[226829]: 2026-01-31 08:19:29.541 226833 DEBUG nova.virt.libvirt.driver [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:19:29 compute-2 nova_compute[226829]: 2026-01-31 08:19:29.541 226833 DEBUG nova.virt.libvirt.driver [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:19:29 compute-2 nova_compute[226829]: 2026-01-31 08:19:29.550 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:19:29 compute-2 nova_compute[226829]: 2026-01-31 08:19:29.553 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:19:29 compute-2 nova_compute[226829]: 2026-01-31 08:19:29.583 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:19:29 compute-2 nova_compute[226829]: 2026-01-31 08:19:29.632 226833 INFO nova.compute.manager [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Took 3.47 seconds to spawn the instance on the hypervisor.
Jan 31 08:19:29 compute-2 nova_compute[226829]: 2026-01-31 08:19:29.634 226833 DEBUG nova.compute.manager [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:19:29 compute-2 nova_compute[226829]: 2026-01-31 08:19:29.708 226833 INFO nova.compute.manager [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Took 4.91 seconds to build instance.
Jan 31 08:19:29 compute-2 nova_compute[226829]: 2026-01-31 08:19:29.742 226833 DEBUG oslo_concurrency.lockutils [None req-75c737d7-fdcb-4f79-8793-9e745d8f9d75 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Lock "e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:19:29.992 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:19:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:30 compute-2 ceph-mon[77282]: pgmap v2547: 305 pgs: 305 active+clean; 211 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.0 MiB/s wr, 169 op/s
Jan 31 08:19:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:19:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:31.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:19:31 compute-2 nova_compute[226829]: 2026-01-31 08:19:31.204 226833 DEBUG nova.compute.manager [None req-b6e583db-e56a-4392-b504-a135d657e47a 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:19:31 compute-2 nova_compute[226829]: 2026-01-31 08:19:31.252 226833 INFO nova.compute.manager [None req-b6e583db-e56a-4392-b504-a135d657e47a 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] instance snapshotting
Jan 31 08:19:31 compute-2 nova_compute[226829]: 2026-01-31 08:19:31.253 226833 DEBUG nova.objects.instance [None req-b6e583db-e56a-4392-b504-a135d657e47a 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Lazy-loading 'flavor' on Instance uuid e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:19:31 compute-2 nova_compute[226829]: 2026-01-31 08:19:31.310 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:19:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:31.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:19:31 compute-2 nova_compute[226829]: 2026-01-31 08:19:31.447 226833 DEBUG oslo_concurrency.lockutils [None req-f11be0ef-22bd-4f84-9d9f-3476f2f4b1a8 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Acquiring lock "e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:31 compute-2 nova_compute[226829]: 2026-01-31 08:19:31.447 226833 DEBUG oslo_concurrency.lockutils [None req-f11be0ef-22bd-4f84-9d9f-3476f2f4b1a8 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Lock "e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:31 compute-2 nova_compute[226829]: 2026-01-31 08:19:31.448 226833 DEBUG oslo_concurrency.lockutils [None req-f11be0ef-22bd-4f84-9d9f-3476f2f4b1a8 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Acquiring lock "e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:31 compute-2 nova_compute[226829]: 2026-01-31 08:19:31.448 226833 DEBUG oslo_concurrency.lockutils [None req-f11be0ef-22bd-4f84-9d9f-3476f2f4b1a8 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Lock "e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:31 compute-2 nova_compute[226829]: 2026-01-31 08:19:31.448 226833 DEBUG oslo_concurrency.lockutils [None req-f11be0ef-22bd-4f84-9d9f-3476f2f4b1a8 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Lock "e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:31 compute-2 nova_compute[226829]: 2026-01-31 08:19:31.450 226833 INFO nova.compute.manager [None req-f11be0ef-22bd-4f84-9d9f-3476f2f4b1a8 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Terminating instance
Jan 31 08:19:31 compute-2 nova_compute[226829]: 2026-01-31 08:19:31.451 226833 DEBUG oslo_concurrency.lockutils [None req-f11be0ef-22bd-4f84-9d9f-3476f2f4b1a8 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Acquiring lock "refresh_cache-e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:19:31 compute-2 nova_compute[226829]: 2026-01-31 08:19:31.451 226833 DEBUG oslo_concurrency.lockutils [None req-f11be0ef-22bd-4f84-9d9f-3476f2f4b1a8 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Acquired lock "refresh_cache-e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:19:31 compute-2 nova_compute[226829]: 2026-01-31 08:19:31.451 226833 DEBUG nova.network.neutron [None req-f11be0ef-22bd-4f84-9d9f-3476f2f4b1a8 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:19:31 compute-2 nova_compute[226829]: 2026-01-31 08:19:31.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:19:31 compute-2 nova_compute[226829]: 2026-01-31 08:19:31.695 226833 DEBUG nova.network.neutron [None req-f11be0ef-22bd-4f84-9d9f-3476f2f4b1a8 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:19:31 compute-2 nova_compute[226829]: 2026-01-31 08:19:31.750 226833 INFO nova.virt.libvirt.driver [None req-b6e583db-e56a-4392-b504-a135d657e47a 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Beginning live snapshot process
Jan 31 08:19:31 compute-2 nova_compute[226829]: 2026-01-31 08:19:31.839 226833 DEBUG nova.compute.manager [None req-b6e583db-e56a-4392-b504-a135d657e47a 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Instance disappeared during snapshot _snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:4390
Jan 31 08:19:32 compute-2 nova_compute[226829]: 2026-01-31 08:19:32.235 226833 DEBUG nova.network.neutron [None req-f11be0ef-22bd-4f84-9d9f-3476f2f4b1a8 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:19:32 compute-2 nova_compute[226829]: 2026-01-31 08:19:32.280 226833 DEBUG oslo_concurrency.lockutils [None req-f11be0ef-22bd-4f84-9d9f-3476f2f4b1a8 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Releasing lock "refresh_cache-e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:19:32 compute-2 nova_compute[226829]: 2026-01-31 08:19:32.282 226833 DEBUG nova.compute.manager [None req-f11be0ef-22bd-4f84-9d9f-3476f2f4b1a8 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:19:32 compute-2 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Jan 31 08:19:32 compute-2 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000008e.scope: Consumed 3.360s CPU time.
Jan 31 08:19:32 compute-2 systemd-machined[195142]: Machine qemu-65-instance-0000008e terminated.
Jan 31 08:19:32 compute-2 nova_compute[226829]: 2026-01-31 08:19:32.502 226833 INFO nova.virt.libvirt.driver [-] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Instance destroyed successfully.
Jan 31 08:19:32 compute-2 nova_compute[226829]: 2026-01-31 08:19:32.503 226833 DEBUG nova.objects.instance [None req-f11be0ef-22bd-4f84-9d9f-3476f2f4b1a8 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Lazy-loading 'resources' on Instance uuid e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:19:32 compute-2 nova_compute[226829]: 2026-01-31 08:19:32.737 226833 DEBUG nova.compute.manager [None req-b6e583db-e56a-4392-b504-a135d657e47a 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Found 0 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Jan 31 08:19:32 compute-2 ceph-mon[77282]: pgmap v2548: 305 pgs: 305 active+clean; 234 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.9 MiB/s wr, 207 op/s
Jan 31 08:19:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1892932877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:33.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:33 compute-2 nova_compute[226829]: 2026-01-31 08:19:33.058 226833 INFO nova.virt.libvirt.driver [None req-f11be0ef-22bd-4f84-9d9f-3476f2f4b1a8 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Deleting instance files /var/lib/nova/instances/e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8_del
Jan 31 08:19:33 compute-2 nova_compute[226829]: 2026-01-31 08:19:33.059 226833 INFO nova.virt.libvirt.driver [None req-f11be0ef-22bd-4f84-9d9f-3476f2f4b1a8 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Deletion of /var/lib/nova/instances/e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8_del complete
Jan 31 08:19:33 compute-2 nova_compute[226829]: 2026-01-31 08:19:33.179 226833 INFO nova.compute.manager [None req-f11be0ef-22bd-4f84-9d9f-3476f2f4b1a8 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Took 0.90 seconds to destroy the instance on the hypervisor.
Jan 31 08:19:33 compute-2 nova_compute[226829]: 2026-01-31 08:19:33.180 226833 DEBUG oslo.service.loopingcall [None req-f11be0ef-22bd-4f84-9d9f-3476f2f4b1a8 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:19:33 compute-2 nova_compute[226829]: 2026-01-31 08:19:33.181 226833 DEBUG nova.compute.manager [-] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:19:33 compute-2 nova_compute[226829]: 2026-01-31 08:19:33.181 226833 DEBUG nova.network.neutron [-] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:19:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:33.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:33 compute-2 nova_compute[226829]: 2026-01-31 08:19:33.446 226833 DEBUG nova.network.neutron [-] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:19:33 compute-2 nova_compute[226829]: 2026-01-31 08:19:33.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:19:33 compute-2 nova_compute[226829]: 2026-01-31 08:19:33.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:19:33 compute-2 nova_compute[226829]: 2026-01-31 08:19:33.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:19:33 compute-2 nova_compute[226829]: 2026-01-31 08:19:33.543 226833 DEBUG nova.network.neutron [-] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:19:33 compute-2 nova_compute[226829]: 2026-01-31 08:19:33.602 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 31 08:19:33 compute-2 nova_compute[226829]: 2026-01-31 08:19:33.603 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:19:33 compute-2 nova_compute[226829]: 2026-01-31 08:19:33.603 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:19:33 compute-2 nova_compute[226829]: 2026-01-31 08:19:33.614 226833 INFO nova.compute.manager [-] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Took 0.43 seconds to deallocate network for instance.
Jan 31 08:19:33 compute-2 nova_compute[226829]: 2026-01-31 08:19:33.670 226833 DEBUG oslo_concurrency.lockutils [None req-f11be0ef-22bd-4f84-9d9f-3476f2f4b1a8 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:33 compute-2 nova_compute[226829]: 2026-01-31 08:19:33.670 226833 DEBUG oslo_concurrency.lockutils [None req-f11be0ef-22bd-4f84-9d9f-3476f2f4b1a8 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:33 compute-2 nova_compute[226829]: 2026-01-31 08:19:33.751 226833 DEBUG oslo_concurrency.processutils [None req-f11be0ef-22bd-4f84-9d9f-3476f2f4b1a8 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:33 compute-2 nova_compute[226829]: 2026-01-31 08:19:33.874 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1809861043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:19:34 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/336001031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:34 compute-2 nova_compute[226829]: 2026-01-31 08:19:34.232 226833 DEBUG oslo_concurrency.processutils [None req-f11be0ef-22bd-4f84-9d9f-3476f2f4b1a8 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:34 compute-2 nova_compute[226829]: 2026-01-31 08:19:34.237 226833 DEBUG nova.compute.provider_tree [None req-f11be0ef-22bd-4f84-9d9f-3476f2f4b1a8 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:19:34 compute-2 nova_compute[226829]: 2026-01-31 08:19:34.258 226833 DEBUG nova.scheduler.client.report [None req-f11be0ef-22bd-4f84-9d9f-3476f2f4b1a8 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:19:34 compute-2 nova_compute[226829]: 2026-01-31 08:19:34.299 226833 DEBUG oslo_concurrency.lockutils [None req-f11be0ef-22bd-4f84-9d9f-3476f2f4b1a8 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:34 compute-2 nova_compute[226829]: 2026-01-31 08:19:34.335 226833 INFO nova.scheduler.client.report [None req-f11be0ef-22bd-4f84-9d9f-3476f2f4b1a8 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Deleted allocations for instance e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8
Jan 31 08:19:34 compute-2 nova_compute[226829]: 2026-01-31 08:19:34.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:19:34 compute-2 nova_compute[226829]: 2026-01-31 08:19:34.493 226833 DEBUG oslo_concurrency.lockutils [None req-f11be0ef-22bd-4f84-9d9f-3476f2f4b1a8 35ab955828314bdc9adf0dff227f0b71 6bd585668c8f42ad9197c73d4bd2ca6f - - default default] Lock "e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:35 compute-2 ceph-mon[77282]: pgmap v2549: 305 pgs: 305 active+clean; 234 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.9 MiB/s wr, 218 op/s
Jan 31 08:19:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/336001031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:35.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:35.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:35 compute-2 sudo[292166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:19:35 compute-2 sudo[292166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:35 compute-2 sudo[292166]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:35 compute-2 sudo[292191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:19:35 compute-2 sudo[292191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:35 compute-2 sudo[292191]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:35 compute-2 sudo[292216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:19:35 compute-2 sudo[292216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:35 compute-2 sudo[292216]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:35 compute-2 sudo[292241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:19:35 compute-2 sudo[292241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:36 compute-2 sudo[292241]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e331 e331: 3 total, 3 up, 3 in
Jan 31 08:19:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1433741234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:36 compute-2 nova_compute[226829]: 2026-01-31 08:19:36.312 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:37.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:37 compute-2 podman[292298]: 2026-01-31 08:19:37.184866805 +0000 UTC m=+0.074308744 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:19:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:19:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:37.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:19:37 compute-2 ceph-mon[77282]: pgmap v2550: 305 pgs: 305 active+clean; 248 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 6.6 MiB/s wr, 295 op/s
Jan 31 08:19:37 compute-2 ceph-mon[77282]: osdmap e331: 3 total, 3 up, 3 in
Jan 31 08:19:37 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:19:37 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:19:38 compute-2 nova_compute[226829]: 2026-01-31 08:19:38.879 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:39.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:39.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:39 compute-2 nova_compute[226829]: 2026-01-31 08:19:39.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:19:39 compute-2 nova_compute[226829]: 2026-01-31 08:19:39.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:19:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:19:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:19:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:19:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:19:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:19:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:19:39 compute-2 ceph-mon[77282]: pgmap v2552: 305 pgs: 305 active+clean; 261 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.6 MiB/s wr, 265 op/s
Jan 31 08:19:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3559065145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1997723901' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:40 compute-2 ceph-mon[77282]: pgmap v2553: 305 pgs: 305 active+clean; 302 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 7.7 MiB/s wr, 285 op/s
Jan 31 08:19:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/887722912' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:41.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:41 compute-2 nova_compute[226829]: 2026-01-31 08:19:41.314 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:41.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:41 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2954870532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:19:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:43.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:19:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/984747512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/249068326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:43 compute-2 ceph-mon[77282]: pgmap v2554: 305 pgs: 305 active+clean; 313 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 6.8 MiB/s wr, 232 op/s
Jan 31 08:19:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:43.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:43 compute-2 nova_compute[226829]: 2026-01-31 08:19:43.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:19:43 compute-2 nova_compute[226829]: 2026-01-31 08:19:43.586 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:43 compute-2 nova_compute[226829]: 2026-01-31 08:19:43.587 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:43 compute-2 nova_compute[226829]: 2026-01-31 08:19:43.587 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:43 compute-2 nova_compute[226829]: 2026-01-31 08:19:43.587 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:19:43 compute-2 nova_compute[226829]: 2026-01-31 08:19:43.587 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e332 e332: 3 total, 3 up, 3 in
Jan 31 08:19:43 compute-2 nova_compute[226829]: 2026-01-31 08:19:43.882 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:19:43 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2846739514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:43 compute-2 nova_compute[226829]: 2026-01-31 08:19:43.988 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:44 compute-2 nova_compute[226829]: 2026-01-31 08:19:44.122 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:19:44 compute-2 nova_compute[226829]: 2026-01-31 08:19:44.123 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4193MB free_disk=20.855762481689453GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:19:44 compute-2 nova_compute[226829]: 2026-01-31 08:19:44.124 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:44 compute-2 nova_compute[226829]: 2026-01-31 08:19:44.124 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:44 compute-2 ceph-mon[77282]: pgmap v2555: 305 pgs: 305 active+clean; 313 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 6.8 MiB/s wr, 219 op/s
Jan 31 08:19:44 compute-2 ceph-mon[77282]: osdmap e332: 3 total, 3 up, 3 in
Jan 31 08:19:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2846739514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:19:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:45.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:19:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:45.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:46 compute-2 ceph-mon[77282]: pgmap v2557: 305 pgs: 305 active+clean; 313 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.7 MiB/s wr, 127 op/s
Jan 31 08:19:46 compute-2 nova_compute[226829]: 2026-01-31 08:19:46.316 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:46 compute-2 nova_compute[226829]: 2026-01-31 08:19:46.472 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:19:46 compute-2 nova_compute[226829]: 2026-01-31 08:19:46.472 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:19:46 compute-2 nova_compute[226829]: 2026-01-31 08:19:46.506 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:19:46 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3886512229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:46 compute-2 nova_compute[226829]: 2026-01-31 08:19:46.919 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:46 compute-2 nova_compute[226829]: 2026-01-31 08:19:46.925 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:19:46 compute-2 nova_compute[226829]: 2026-01-31 08:19:46.973 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:19:47 compute-2 nova_compute[226829]: 2026-01-31 08:19:47.013 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:19:47 compute-2 nova_compute[226829]: 2026-01-31 08:19:47.013 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:47.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3886512229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:47 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:19:47 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:19:47 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e333 e333: 3 total, 3 up, 3 in
Jan 31 08:19:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:47.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:47 compute-2 nova_compute[226829]: 2026-01-31 08:19:47.501 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847572.4995987, e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:19:47 compute-2 nova_compute[226829]: 2026-01-31 08:19:47.501 226833 INFO nova.compute.manager [-] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] VM Stopped (Lifecycle Event)
Jan 31 08:19:47 compute-2 nova_compute[226829]: 2026-01-31 08:19:47.543 226833 DEBUG nova.compute.manager [None req-275fcc72-a6d8-46e8-8a99-1f77cc7f48a2 - - - - - -] [instance: e10c32ff-8ad4-41d6-9f1c-81e52bd41fb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:19:47 compute-2 sudo[292375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:19:47 compute-2 sudo[292375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:47 compute-2 sudo[292375]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:47 compute-2 sudo[292399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:19:47 compute-2 sudo[292399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:47 compute-2 sudo[292399]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:47 compute-2 sudo[292431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:19:47 compute-2 sudo[292431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:47 compute-2 sudo[292431]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:47 compute-2 podman[292403]: 2026-01-31 08:19:47.623907673 +0000 UTC m=+0.051492395 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 08:19:47 compute-2 sudo[292451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:19:47 compute-2 sudo[292451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:47 compute-2 sudo[292451]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e334 e334: 3 total, 3 up, 3 in
Jan 31 08:19:48 compute-2 ceph-mon[77282]: osdmap e333: 3 total, 3 up, 3 in
Jan 31 08:19:48 compute-2 ceph-mon[77282]: pgmap v2559: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 321 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 955 KiB/s rd, 539 KiB/s wr, 40 op/s
Jan 31 08:19:48 compute-2 nova_compute[226829]: 2026-01-31 08:19:48.884 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:49.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:19:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:49.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:19:49 compute-2 ceph-mon[77282]: osdmap e334: 3 total, 3 up, 3 in
Jan 31 08:19:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e334 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:50 compute-2 ceph-mon[77282]: pgmap v2561: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 2.1 MiB/s wr, 170 op/s
Jan 31 08:19:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2154019945' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:51.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:51 compute-2 nova_compute[226829]: 2026-01-31 08:19:51.319 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:51.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2076698534' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:51 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #115. Immutable memtables: 0.
Jan 31 08:19:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:19:51.986728) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:19:51 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 115
Jan 31 08:19:51 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847591986825, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 2307, "num_deletes": 266, "total_data_size": 5123233, "memory_usage": 5226112, "flush_reason": "Manual Compaction"}
Jan 31 08:19:51 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #116: started
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847592002675, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 116, "file_size": 3341001, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58134, "largest_seqno": 60436, "table_properties": {"data_size": 3331502, "index_size": 5929, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20313, "raw_average_key_size": 20, "raw_value_size": 3312244, "raw_average_value_size": 3383, "num_data_blocks": 257, "num_entries": 979, "num_filter_entries": 979, "num_deletions": 266, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847423, "oldest_key_time": 1769847423, "file_creation_time": 1769847591, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 15993 microseconds, and 5627 cpu microseconds.
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:19:52.002730) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #116: 3341001 bytes OK
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:19:52.002747) [db/memtable_list.cc:519] [default] Level-0 commit table #116 started
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:19:52.004101) [db/memtable_list.cc:722] [default] Level-0 commit table #116: memtable #1 done
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:19:52.004112) EVENT_LOG_v1 {"time_micros": 1769847592004108, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:19:52.004128) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 5112902, prev total WAL file size 5112902, number of live WAL files 2.
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000112.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:19:52.005097) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303039' seq:72057594037927935, type:22 .. '6C6F676D0032323632' seq:0, type:0; will stop at (end)
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [116(3262KB)], [114(10MB)]
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847592005300, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [116], "files_L6": [114], "score": -1, "input_data_size": 14180499, "oldest_snapshot_seqno": -1}
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #117: 8646 keys, 14029872 bytes, temperature: kUnknown
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847592097366, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 117, "file_size": 14029872, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13970405, "index_size": 36763, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21637, "raw_key_size": 223689, "raw_average_key_size": 25, "raw_value_size": 13814773, "raw_average_value_size": 1597, "num_data_blocks": 1452, "num_entries": 8646, "num_filter_entries": 8646, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769847592, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 117, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:19:52.097669) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 14029872 bytes
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:19:52.098670) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.8 rd, 152.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.3 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(8.4) write-amplify(4.2) OK, records in: 9192, records dropped: 546 output_compression: NoCompression
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:19:52.098685) EVENT_LOG_v1 {"time_micros": 1769847592098678, "job": 72, "event": "compaction_finished", "compaction_time_micros": 92192, "compaction_time_cpu_micros": 52985, "output_level": 6, "num_output_files": 1, "total_output_size": 14029872, "num_input_records": 9192, "num_output_records": 8646, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847592099072, "job": 72, "event": "table_file_deletion", "file_number": 116}
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000114.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847592099951, "job": 72, "event": "table_file_deletion", "file_number": 114}
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:19:52.004740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:19:52.100041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:19:52.100046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:19:52.100048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:19:52.100049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:19:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:19:52.100051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:19:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:53.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:53 compute-2 ceph-mon[77282]: pgmap v2562: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 5.9 MiB/s wr, 207 op/s
Jan 31 08:19:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:53.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:19:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1275390790' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:19:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:19:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1275390790' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:19:53 compute-2 nova_compute[226829]: 2026-01-31 08:19:53.888 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1275390790' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:19:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1275390790' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:19:54 compute-2 ceph-mon[77282]: pgmap v2563: 305 pgs: 305 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.7 MiB/s rd, 5.8 MiB/s wr, 208 op/s
Jan 31 08:19:55 compute-2 nova_compute[226829]: 2026-01-31 08:19:55.013 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:19:55 compute-2 nova_compute[226829]: 2026-01-31 08:19:55.014 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:19:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e334 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:55.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:19:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:55.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:19:56 compute-2 nova_compute[226829]: 2026-01-31 08:19:56.321 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e335 e335: 3 total, 3 up, 3 in
Jan 31 08:19:57 compute-2 ceph-mon[77282]: pgmap v2564: 305 pgs: 305 active+clean; 393 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.8 MiB/s rd, 5.5 MiB/s wr, 276 op/s
Jan 31 08:19:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:57.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:57.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:58 compute-2 ceph-mon[77282]: osdmap e335: 3 total, 3 up, 3 in
Jan 31 08:19:58 compute-2 nova_compute[226829]: 2026-01-31 08:19:58.891 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:19:58 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/592186997' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:59 compute-2 ceph-mon[77282]: pgmap v2566: 305 pgs: 305 active+clean; 393 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 4.6 MiB/s wr, 245 op/s
Jan 31 08:19:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/592186997' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:59.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:19:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:59.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:00 compute-2 ceph-mon[77282]: pgmap v2567: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.5 MiB/s wr, 189 op/s
Jan 31 08:20:00 compute-2 ceph-mon[77282]: overall HEALTH_OK
Jan 31 08:20:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:20:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:01.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:20:01 compute-2 nova_compute[226829]: 2026-01-31 08:20:01.372 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:01.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:03 compute-2 ceph-mon[77282]: pgmap v2568: 305 pgs: 305 active+clean; 336 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 606 KiB/s wr, 154 op/s
Jan 31 08:20:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:03.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:20:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:03.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:20:03 compute-2 nova_compute[226829]: 2026-01-31 08:20:03.893 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:04 compute-2 ceph-mon[77282]: pgmap v2569: 305 pgs: 305 active+clean; 327 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.2 MiB/s wr, 161 op/s
Jan 31 08:20:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:05.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e336 e336: 3 total, 3 up, 3 in
Jan 31 08:20:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:05.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:05 compute-2 nova_compute[226829]: 2026-01-31 08:20:05.777 226833 DEBUG oslo_concurrency.lockutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Acquiring lock "cb6d7816-1ee7-4474-8b30-f47c612e9af4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:05 compute-2 nova_compute[226829]: 2026-01-31 08:20:05.777 226833 DEBUG oslo_concurrency.lockutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Lock "cb6d7816-1ee7-4474-8b30-f47c612e9af4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:05 compute-2 nova_compute[226829]: 2026-01-31 08:20:05.836 226833 DEBUG nova.compute.manager [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:20:06 compute-2 nova_compute[226829]: 2026-01-31 08:20:06.006 226833 DEBUG oslo_concurrency.lockutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:06 compute-2 nova_compute[226829]: 2026-01-31 08:20:06.007 226833 DEBUG oslo_concurrency.lockutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:06 compute-2 nova_compute[226829]: 2026-01-31 08:20:06.013 226833 DEBUG nova.virt.hardware [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:20:06 compute-2 nova_compute[226829]: 2026-01-31 08:20:06.013 226833 INFO nova.compute.claims [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:20:06 compute-2 nova_compute[226829]: 2026-01-31 08:20:06.377 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:06 compute-2 nova_compute[226829]: 2026-01-31 08:20:06.481 226833 DEBUG oslo_concurrency.processutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:20:06 compute-2 ceph-mon[77282]: osdmap e336: 3 total, 3 up, 3 in
Jan 31 08:20:06 compute-2 ceph-mon[77282]: pgmap v2571: 305 pgs: 305 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.2 MiB/s wr, 164 op/s
Jan 31 08:20:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:06.897 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:06.899 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:06.899 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:20:06 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3739093903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:06 compute-2 nova_compute[226829]: 2026-01-31 08:20:06.937 226833 DEBUG oslo_concurrency.processutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:20:06 compute-2 nova_compute[226829]: 2026-01-31 08:20:06.943 226833 DEBUG nova.compute.provider_tree [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:20:06 compute-2 nova_compute[226829]: 2026-01-31 08:20:06.966 226833 DEBUG nova.scheduler.client.report [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:20:06 compute-2 nova_compute[226829]: 2026-01-31 08:20:06.999 226833 DEBUG oslo_concurrency.lockutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.992s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:06 compute-2 nova_compute[226829]: 2026-01-31 08:20:06.999 226833 DEBUG nova.compute.manager [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:20:07 compute-2 nova_compute[226829]: 2026-01-31 08:20:07.065 226833 DEBUG nova.compute.manager [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:20:07 compute-2 nova_compute[226829]: 2026-01-31 08:20:07.066 226833 DEBUG nova.network.neutron [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:20:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:07.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:07 compute-2 nova_compute[226829]: 2026-01-31 08:20:07.096 226833 INFO nova.virt.libvirt.driver [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:20:07 compute-2 nova_compute[226829]: 2026-01-31 08:20:07.125 226833 DEBUG nova.compute.manager [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:20:07 compute-2 nova_compute[226829]: 2026-01-31 08:20:07.302 226833 DEBUG nova.compute.manager [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:20:07 compute-2 nova_compute[226829]: 2026-01-31 08:20:07.304 226833 DEBUG nova.virt.libvirt.driver [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:20:07 compute-2 nova_compute[226829]: 2026-01-31 08:20:07.304 226833 INFO nova.virt.libvirt.driver [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Creating image(s)
Jan 31 08:20:07 compute-2 nova_compute[226829]: 2026-01-31 08:20:07.328 226833 DEBUG nova.storage.rbd_utils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] rbd image cb6d7816-1ee7-4474-8b30-f47c612e9af4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:20:07 compute-2 nova_compute[226829]: 2026-01-31 08:20:07.360 226833 DEBUG nova.storage.rbd_utils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] rbd image cb6d7816-1ee7-4474-8b30-f47c612e9af4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:20:07 compute-2 nova_compute[226829]: 2026-01-31 08:20:07.386 226833 DEBUG nova.storage.rbd_utils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] rbd image cb6d7816-1ee7-4474-8b30-f47c612e9af4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:20:07 compute-2 nova_compute[226829]: 2026-01-31 08:20:07.389 226833 DEBUG oslo_concurrency.processutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:20:07 compute-2 nova_compute[226829]: 2026-01-31 08:20:07.413 226833 DEBUG nova.policy [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fdeb8f10b75f4d65be0e243714de9420', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5946643c9f6741e299159c7a6903a9f9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:20:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:07.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:07 compute-2 nova_compute[226829]: 2026-01-31 08:20:07.447 226833 DEBUG oslo_concurrency.processutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:20:07 compute-2 nova_compute[226829]: 2026-01-31 08:20:07.448 226833 DEBUG oslo_concurrency.lockutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:07 compute-2 nova_compute[226829]: 2026-01-31 08:20:07.448 226833 DEBUG oslo_concurrency.lockutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:07 compute-2 nova_compute[226829]: 2026-01-31 08:20:07.449 226833 DEBUG oslo_concurrency.lockutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:07 compute-2 nova_compute[226829]: 2026-01-31 08:20:07.475 226833 DEBUG nova.storage.rbd_utils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] rbd image cb6d7816-1ee7-4474-8b30-f47c612e9af4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:20:07 compute-2 nova_compute[226829]: 2026-01-31 08:20:07.479 226833 DEBUG oslo_concurrency.processutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 cb6d7816-1ee7-4474-8b30-f47c612e9af4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:20:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3739093903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:07 compute-2 sudo[292618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:20:07 compute-2 sudo[292618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:07 compute-2 sudo[292618]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:07 compute-2 sudo[292649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:20:07 compute-2 sudo[292649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:07 compute-2 sudo[292649]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:07 compute-2 podman[292642]: 2026-01-31 08:20:07.802339892 +0000 UTC m=+0.062767901 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:20:08 compute-2 ceph-mon[77282]: pgmap v2572: 305 pgs: 305 active+clean; 352 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.2 MiB/s wr, 152 op/s
Jan 31 08:20:08 compute-2 nova_compute[226829]: 2026-01-31 08:20:08.715 226833 DEBUG oslo_concurrency.processutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 cb6d7816-1ee7-4474-8b30-f47c612e9af4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.237s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:20:08 compute-2 nova_compute[226829]: 2026-01-31 08:20:08.884 226833 DEBUG nova.storage.rbd_utils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] resizing rbd image cb6d7816-1ee7-4474-8b30-f47c612e9af4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:20:08 compute-2 nova_compute[226829]: 2026-01-31 08:20:08.964 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:09.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:09 compute-2 nova_compute[226829]: 2026-01-31 08:20:09.255 226833 DEBUG nova.network.neutron [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Successfully created port: 42f62ba1-093d-4016-adfc-0a6b7c986b43 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:20:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:09.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:09 compute-2 nova_compute[226829]: 2026-01-31 08:20:09.749 226833 DEBUG nova.objects.instance [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Lazy-loading 'migration_context' on Instance uuid cb6d7816-1ee7-4474-8b30-f47c612e9af4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:20:09 compute-2 nova_compute[226829]: 2026-01-31 08:20:09.765 226833 DEBUG nova.virt.libvirt.driver [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:20:09 compute-2 nova_compute[226829]: 2026-01-31 08:20:09.765 226833 DEBUG nova.virt.libvirt.driver [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Ensure instance console log exists: /var/lib/nova/instances/cb6d7816-1ee7-4474-8b30-f47c612e9af4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:20:09 compute-2 nova_compute[226829]: 2026-01-31 08:20:09.766 226833 DEBUG oslo_concurrency.lockutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:09 compute-2 nova_compute[226829]: 2026-01-31 08:20:09.766 226833 DEBUG oslo_concurrency.lockutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:09 compute-2 nova_compute[226829]: 2026-01-31 08:20:09.766 226833 DEBUG oslo_concurrency.lockutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3945026701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #118. Immutable memtables: 0.
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:20:10.711411) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 118
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847610711454, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 467, "num_deletes": 252, "total_data_size": 562565, "memory_usage": 573016, "flush_reason": "Manual Compaction"}
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #119: started
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847610716421, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 119, "file_size": 370738, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60441, "largest_seqno": 60903, "table_properties": {"data_size": 368140, "index_size": 634, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6442, "raw_average_key_size": 19, "raw_value_size": 362903, "raw_average_value_size": 1083, "num_data_blocks": 28, "num_entries": 335, "num_filter_entries": 335, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847592, "oldest_key_time": 1769847592, "file_creation_time": 1769847610, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 5120 microseconds, and 1949 cpu microseconds.
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:20:10.716516) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #119: 370738 bytes OK
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:20:10.717086) [db/memtable_list.cc:519] [default] Level-0 commit table #119 started
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:20:10.719081) [db/memtable_list.cc:722] [default] Level-0 commit table #119: memtable #1 done
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:20:10.719108) EVENT_LOG_v1 {"time_micros": 1769847610719099, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:20:10.719143) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 559704, prev total WAL file size 559704, number of live WAL files 2.
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000115.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:20:10.720188) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [119(362KB)], [117(13MB)]
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847610720231, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [119], "files_L6": [117], "score": -1, "input_data_size": 14400610, "oldest_snapshot_seqno": -1}
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #120: 8464 keys, 12499741 bytes, temperature: kUnknown
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847610800646, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 120, "file_size": 12499741, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12442756, "index_size": 34729, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21189, "raw_key_size": 220628, "raw_average_key_size": 26, "raw_value_size": 12291682, "raw_average_value_size": 1452, "num_data_blocks": 1359, "num_entries": 8464, "num_filter_entries": 8464, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769847610, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 120, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:20:10.800949) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 12499741 bytes
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:20:10.802547) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.8 rd, 155.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 13.4 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(72.6) write-amplify(33.7) OK, records in: 8981, records dropped: 517 output_compression: NoCompression
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:20:10.802564) EVENT_LOG_v1 {"time_micros": 1769847610802556, "job": 74, "event": "compaction_finished", "compaction_time_micros": 80537, "compaction_time_cpu_micros": 25480, "output_level": 6, "num_output_files": 1, "total_output_size": 12499741, "num_input_records": 8981, "num_output_records": 8464, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847610802703, "job": 74, "event": "table_file_deletion", "file_number": 119}
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000117.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847610803882, "job": 74, "event": "table_file_deletion", "file_number": 117}
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:20:10.720069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:20:10.803912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:20:10.803916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:20:10.803917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:20:10.803918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:20:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:20:10.803920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:20:11 compute-2 nova_compute[226829]: 2026-01-31 08:20:11.054 226833 DEBUG nova.network.neutron [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Successfully updated port: 42f62ba1-093d-4016-adfc-0a6b7c986b43 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:20:11 compute-2 nova_compute[226829]: 2026-01-31 08:20:11.082 226833 DEBUG oslo_concurrency.lockutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Acquiring lock "refresh_cache-cb6d7816-1ee7-4474-8b30-f47c612e9af4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:20:11 compute-2 nova_compute[226829]: 2026-01-31 08:20:11.082 226833 DEBUG oslo_concurrency.lockutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Acquired lock "refresh_cache-cb6d7816-1ee7-4474-8b30-f47c612e9af4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:20:11 compute-2 nova_compute[226829]: 2026-01-31 08:20:11.082 226833 DEBUG nova.network.neutron [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:20:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:11.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:11 compute-2 nova_compute[226829]: 2026-01-31 08:20:11.247 226833 DEBUG nova.compute.manager [req-6ef82d50-2047-40c8-b0f4-337e260edc5d req-c452043a-8aa0-49b5-81bb-1cb026cde2f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Received event network-changed-42f62ba1-093d-4016-adfc-0a6b7c986b43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:20:11 compute-2 nova_compute[226829]: 2026-01-31 08:20:11.247 226833 DEBUG nova.compute.manager [req-6ef82d50-2047-40c8-b0f4-337e260edc5d req-c452043a-8aa0-49b5-81bb-1cb026cde2f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Refreshing instance network info cache due to event network-changed-42f62ba1-093d-4016-adfc-0a6b7c986b43. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:20:11 compute-2 nova_compute[226829]: 2026-01-31 08:20:11.247 226833 DEBUG oslo_concurrency.lockutils [req-6ef82d50-2047-40c8-b0f4-337e260edc5d req-c452043a-8aa0-49b5-81bb-1cb026cde2f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-cb6d7816-1ee7-4474-8b30-f47c612e9af4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:20:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:20:11 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1956623074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:11 compute-2 nova_compute[226829]: 2026-01-31 08:20:11.375 226833 DEBUG nova.network.neutron [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:20:11 compute-2 nova_compute[226829]: 2026-01-31 08:20:11.378 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:11.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:11 compute-2 ceph-mon[77282]: pgmap v2573: 305 pgs: 305 active+clean; 384 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 751 KiB/s rd, 5.5 MiB/s wr, 181 op/s
Jan 31 08:20:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1956623074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:12 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e337 e337: 3 total, 3 up, 3 in
Jan 31 08:20:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2604866068' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:12 compute-2 ceph-mon[77282]: pgmap v2574: 305 pgs: 305 active+clean; 406 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 765 KiB/s rd, 5.9 MiB/s wr, 165 op/s
Jan 31 08:20:12 compute-2 ceph-mon[77282]: osdmap e337: 3 total, 3 up, 3 in
Jan 31 08:20:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:13.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:13.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:13 compute-2 nova_compute[226829]: 2026-01-31 08:20:13.454 226833 DEBUG nova.network.neutron [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Updating instance_info_cache with network_info: [{"id": "42f62ba1-093d-4016-adfc-0a6b7c986b43", "address": "fa:16:3e:5e:0b:9b", "network": {"id": "3ed1ec06-3665-43e8-b434-ceccd33fe864", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1689208332-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5946643c9f6741e299159c7a6903a9f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42f62ba1-09", "ovs_interfaceid": "42f62ba1-093d-4016-adfc-0a6b7c986b43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:20:13 compute-2 nova_compute[226829]: 2026-01-31 08:20:13.496 226833 DEBUG oslo_concurrency.lockutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Releasing lock "refresh_cache-cb6d7816-1ee7-4474-8b30-f47c612e9af4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:20:13 compute-2 nova_compute[226829]: 2026-01-31 08:20:13.496 226833 DEBUG nova.compute.manager [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Instance network_info: |[{"id": "42f62ba1-093d-4016-adfc-0a6b7c986b43", "address": "fa:16:3e:5e:0b:9b", "network": {"id": "3ed1ec06-3665-43e8-b434-ceccd33fe864", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1689208332-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5946643c9f6741e299159c7a6903a9f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42f62ba1-09", "ovs_interfaceid": "42f62ba1-093d-4016-adfc-0a6b7c986b43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:20:13 compute-2 nova_compute[226829]: 2026-01-31 08:20:13.497 226833 DEBUG oslo_concurrency.lockutils [req-6ef82d50-2047-40c8-b0f4-337e260edc5d req-c452043a-8aa0-49b5-81bb-1cb026cde2f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-cb6d7816-1ee7-4474-8b30-f47c612e9af4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:20:13 compute-2 nova_compute[226829]: 2026-01-31 08:20:13.497 226833 DEBUG nova.network.neutron [req-6ef82d50-2047-40c8-b0f4-337e260edc5d req-c452043a-8aa0-49b5-81bb-1cb026cde2f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Refreshing network info cache for port 42f62ba1-093d-4016-adfc-0a6b7c986b43 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:20:13 compute-2 nova_compute[226829]: 2026-01-31 08:20:13.499 226833 DEBUG nova.virt.libvirt.driver [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Start _get_guest_xml network_info=[{"id": "42f62ba1-093d-4016-adfc-0a6b7c986b43", "address": "fa:16:3e:5e:0b:9b", "network": {"id": "3ed1ec06-3665-43e8-b434-ceccd33fe864", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1689208332-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5946643c9f6741e299159c7a6903a9f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42f62ba1-09", "ovs_interfaceid": "42f62ba1-093d-4016-adfc-0a6b7c986b43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:20:13 compute-2 nova_compute[226829]: 2026-01-31 08:20:13.503 226833 WARNING nova.virt.libvirt.driver [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:20:13 compute-2 nova_compute[226829]: 2026-01-31 08:20:13.507 226833 DEBUG nova.virt.libvirt.host [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:20:13 compute-2 nova_compute[226829]: 2026-01-31 08:20:13.508 226833 DEBUG nova.virt.libvirt.host [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:20:13 compute-2 nova_compute[226829]: 2026-01-31 08:20:13.514 226833 DEBUG nova.virt.libvirt.host [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:20:13 compute-2 nova_compute[226829]: 2026-01-31 08:20:13.515 226833 DEBUG nova.virt.libvirt.host [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:20:13 compute-2 nova_compute[226829]: 2026-01-31 08:20:13.516 226833 DEBUG nova.virt.libvirt.driver [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:20:13 compute-2 nova_compute[226829]: 2026-01-31 08:20:13.516 226833 DEBUG nova.virt.hardware [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:20:13 compute-2 nova_compute[226829]: 2026-01-31 08:20:13.517 226833 DEBUG nova.virt.hardware [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:20:13 compute-2 nova_compute[226829]: 2026-01-31 08:20:13.517 226833 DEBUG nova.virt.hardware [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:20:13 compute-2 nova_compute[226829]: 2026-01-31 08:20:13.517 226833 DEBUG nova.virt.hardware [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:20:13 compute-2 nova_compute[226829]: 2026-01-31 08:20:13.517 226833 DEBUG nova.virt.hardware [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:20:13 compute-2 nova_compute[226829]: 2026-01-31 08:20:13.517 226833 DEBUG nova.virt.hardware [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:20:13 compute-2 nova_compute[226829]: 2026-01-31 08:20:13.518 226833 DEBUG nova.virt.hardware [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:20:13 compute-2 nova_compute[226829]: 2026-01-31 08:20:13.518 226833 DEBUG nova.virt.hardware [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:20:13 compute-2 nova_compute[226829]: 2026-01-31 08:20:13.518 226833 DEBUG nova.virt.hardware [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:20:13 compute-2 nova_compute[226829]: 2026-01-31 08:20:13.518 226833 DEBUG nova.virt.hardware [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:20:13 compute-2 nova_compute[226829]: 2026-01-31 08:20:13.519 226833 DEBUG nova.virt.hardware [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:20:13 compute-2 nova_compute[226829]: 2026-01-31 08:20:13.521 226833 DEBUG oslo_concurrency.processutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:20:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:20:13 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1582425122' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:13 compute-2 nova_compute[226829]: 2026-01-31 08:20:13.943 226833 DEBUG oslo_concurrency.processutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:20:13 compute-2 nova_compute[226829]: 2026-01-31 08:20:13.966 226833 DEBUG nova.storage.rbd_utils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] rbd image cb6d7816-1ee7-4474-8b30-f47c612e9af4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:20:13 compute-2 nova_compute[226829]: 2026-01-31 08:20:13.970 226833 DEBUG oslo_concurrency.processutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:20:13 compute-2 nova_compute[226829]: 2026-01-31 08:20:13.990 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1582425122' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:20:14 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2724465647' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.383 226833 DEBUG oslo_concurrency.processutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.384 226833 DEBUG nova.virt.libvirt.vif [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:20:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1487227944',display_name='tempest-ServerAddressesNegativeTestJSON-server-1487227944',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1487227944',id=145,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5946643c9f6741e299159c7a6903a9f9',ramdisk_id='',reservation_id='r-z9odw6lm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-401177754',owner_u
ser_name='tempest-ServerAddressesNegativeTestJSON-401177754-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:20:07Z,user_data=None,user_id='fdeb8f10b75f4d65be0e243714de9420',uuid=cb6d7816-1ee7-4474-8b30-f47c612e9af4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "42f62ba1-093d-4016-adfc-0a6b7c986b43", "address": "fa:16:3e:5e:0b:9b", "network": {"id": "3ed1ec06-3665-43e8-b434-ceccd33fe864", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1689208332-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5946643c9f6741e299159c7a6903a9f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42f62ba1-09", "ovs_interfaceid": "42f62ba1-093d-4016-adfc-0a6b7c986b43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.385 226833 DEBUG nova.network.os_vif_util [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Converting VIF {"id": "42f62ba1-093d-4016-adfc-0a6b7c986b43", "address": "fa:16:3e:5e:0b:9b", "network": {"id": "3ed1ec06-3665-43e8-b434-ceccd33fe864", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1689208332-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5946643c9f6741e299159c7a6903a9f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42f62ba1-09", "ovs_interfaceid": "42f62ba1-093d-4016-adfc-0a6b7c986b43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.385 226833 DEBUG nova.network.os_vif_util [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:0b:9b,bridge_name='br-int',has_traffic_filtering=True,id=42f62ba1-093d-4016-adfc-0a6b7c986b43,network=Network(3ed1ec06-3665-43e8-b434-ceccd33fe864),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42f62ba1-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.387 226833 DEBUG nova.objects.instance [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Lazy-loading 'pci_devices' on Instance uuid cb6d7816-1ee7-4474-8b30-f47c612e9af4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.406 226833 DEBUG nova.virt.libvirt.driver [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:20:14 compute-2 nova_compute[226829]:   <uuid>cb6d7816-1ee7-4474-8b30-f47c612e9af4</uuid>
Jan 31 08:20:14 compute-2 nova_compute[226829]:   <name>instance-00000091</name>
Jan 31 08:20:14 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:20:14 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:20:14 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <nova:name>tempest-ServerAddressesNegativeTestJSON-server-1487227944</nova:name>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:20:13</nova:creationTime>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:20:14 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:20:14 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:20:14 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:20:14 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:20:14 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:20:14 compute-2 nova_compute[226829]:         <nova:user uuid="fdeb8f10b75f4d65be0e243714de9420">tempest-ServerAddressesNegativeTestJSON-401177754-project-member</nova:user>
Jan 31 08:20:14 compute-2 nova_compute[226829]:         <nova:project uuid="5946643c9f6741e299159c7a6903a9f9">tempest-ServerAddressesNegativeTestJSON-401177754</nova:project>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:20:14 compute-2 nova_compute[226829]:         <nova:port uuid="42f62ba1-093d-4016-adfc-0a6b7c986b43">
Jan 31 08:20:14 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:20:14 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:20:14 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <system>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <entry name="serial">cb6d7816-1ee7-4474-8b30-f47c612e9af4</entry>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <entry name="uuid">cb6d7816-1ee7-4474-8b30-f47c612e9af4</entry>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     </system>
Jan 31 08:20:14 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:20:14 compute-2 nova_compute[226829]:   <os>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:   </os>
Jan 31 08:20:14 compute-2 nova_compute[226829]:   <features>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:   </features>
Jan 31 08:20:14 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:20:14 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:20:14 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/cb6d7816-1ee7-4474-8b30-f47c612e9af4_disk">
Jan 31 08:20:14 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       </source>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:20:14 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/cb6d7816-1ee7-4474-8b30-f47c612e9af4_disk.config">
Jan 31 08:20:14 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       </source>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:20:14 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:5e:0b:9b"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <target dev="tap42f62ba1-09"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/cb6d7816-1ee7-4474-8b30-f47c612e9af4/console.log" append="off"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <video>
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     </video>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:20:14 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:20:14 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:20:14 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:20:14 compute-2 nova_compute[226829]: </domain>
Jan 31 08:20:14 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.407 226833 DEBUG nova.compute.manager [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Preparing to wait for external event network-vif-plugged-42f62ba1-093d-4016-adfc-0a6b7c986b43 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.407 226833 DEBUG oslo_concurrency.lockutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Acquiring lock "cb6d7816-1ee7-4474-8b30-f47c612e9af4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.407 226833 DEBUG oslo_concurrency.lockutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Lock "cb6d7816-1ee7-4474-8b30-f47c612e9af4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.407 226833 DEBUG oslo_concurrency.lockutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Lock "cb6d7816-1ee7-4474-8b30-f47c612e9af4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.408 226833 DEBUG nova.virt.libvirt.vif [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:20:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1487227944',display_name='tempest-ServerAddressesNegativeTestJSON-server-1487227944',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1487227944',id=145,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5946643c9f6741e299159c7a6903a9f9',ramdisk_id='',reservation_id='r-z9odw6lm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-40117775
4',owner_user_name='tempest-ServerAddressesNegativeTestJSON-401177754-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:20:07Z,user_data=None,user_id='fdeb8f10b75f4d65be0e243714de9420',uuid=cb6d7816-1ee7-4474-8b30-f47c612e9af4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "42f62ba1-093d-4016-adfc-0a6b7c986b43", "address": "fa:16:3e:5e:0b:9b", "network": {"id": "3ed1ec06-3665-43e8-b434-ceccd33fe864", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1689208332-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5946643c9f6741e299159c7a6903a9f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42f62ba1-09", "ovs_interfaceid": "42f62ba1-093d-4016-adfc-0a6b7c986b43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.408 226833 DEBUG nova.network.os_vif_util [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Converting VIF {"id": "42f62ba1-093d-4016-adfc-0a6b7c986b43", "address": "fa:16:3e:5e:0b:9b", "network": {"id": "3ed1ec06-3665-43e8-b434-ceccd33fe864", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1689208332-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5946643c9f6741e299159c7a6903a9f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42f62ba1-09", "ovs_interfaceid": "42f62ba1-093d-4016-adfc-0a6b7c986b43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.409 226833 DEBUG nova.network.os_vif_util [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:0b:9b,bridge_name='br-int',has_traffic_filtering=True,id=42f62ba1-093d-4016-adfc-0a6b7c986b43,network=Network(3ed1ec06-3665-43e8-b434-ceccd33fe864),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42f62ba1-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.409 226833 DEBUG os_vif [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:0b:9b,bridge_name='br-int',has_traffic_filtering=True,id=42f62ba1-093d-4016-adfc-0a6b7c986b43,network=Network(3ed1ec06-3665-43e8-b434-ceccd33fe864),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42f62ba1-09') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.410 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.410 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.411 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.418 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.418 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap42f62ba1-09, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.419 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap42f62ba1-09, col_values=(('external_ids', {'iface-id': '42f62ba1-093d-4016-adfc-0a6b7c986b43', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:0b:9b', 'vm-uuid': 'cb6d7816-1ee7-4474-8b30-f47c612e9af4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.420 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:14 compute-2 NetworkManager[48999]: <info>  [1769847614.4211] manager: (tap42f62ba1-09): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/294)
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.424 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.426 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.426 226833 INFO os_vif [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:0b:9b,bridge_name='br-int',has_traffic_filtering=True,id=42f62ba1-093d-4016-adfc-0a6b7c986b43,network=Network(3ed1ec06-3665-43e8-b434-ceccd33fe864),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42f62ba1-09')
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.493 226833 DEBUG nova.virt.libvirt.driver [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.494 226833 DEBUG nova.virt.libvirt.driver [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.494 226833 DEBUG nova.virt.libvirt.driver [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] No VIF found with MAC fa:16:3e:5e:0b:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.495 226833 INFO nova.virt.libvirt.driver [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Using config drive
Jan 31 08:20:14 compute-2 nova_compute[226829]: 2026-01-31 08:20:14.518 226833 DEBUG nova.storage.rbd_utils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] rbd image cb6d7816-1ee7-4474-8b30-f47c612e9af4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:20:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:15.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:15 compute-2 ceph-mon[77282]: pgmap v2576: 305 pgs: 305 active+clean; 419 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 554 KiB/s rd, 5.4 MiB/s wr, 143 op/s
Jan 31 08:20:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2724465647' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:15.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:15 compute-2 nova_compute[226829]: 2026-01-31 08:20:15.590 226833 INFO nova.virt.libvirt.driver [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Creating config drive at /var/lib/nova/instances/cb6d7816-1ee7-4474-8b30-f47c612e9af4/disk.config
Jan 31 08:20:15 compute-2 nova_compute[226829]: 2026-01-31 08:20:15.597 226833 DEBUG oslo_concurrency.processutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cb6d7816-1ee7-4474-8b30-f47c612e9af4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpv8358k5r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:20:15 compute-2 nova_compute[226829]: 2026-01-31 08:20:15.625 226833 DEBUG nova.network.neutron [req-6ef82d50-2047-40c8-b0f4-337e260edc5d req-c452043a-8aa0-49b5-81bb-1cb026cde2f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Updated VIF entry in instance network info cache for port 42f62ba1-093d-4016-adfc-0a6b7c986b43. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:20:15 compute-2 nova_compute[226829]: 2026-01-31 08:20:15.626 226833 DEBUG nova.network.neutron [req-6ef82d50-2047-40c8-b0f4-337e260edc5d req-c452043a-8aa0-49b5-81bb-1cb026cde2f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Updating instance_info_cache with network_info: [{"id": "42f62ba1-093d-4016-adfc-0a6b7c986b43", "address": "fa:16:3e:5e:0b:9b", "network": {"id": "3ed1ec06-3665-43e8-b434-ceccd33fe864", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1689208332-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5946643c9f6741e299159c7a6903a9f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42f62ba1-09", "ovs_interfaceid": "42f62ba1-093d-4016-adfc-0a6b7c986b43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:20:15 compute-2 nova_compute[226829]: 2026-01-31 08:20:15.649 226833 DEBUG oslo_concurrency.lockutils [req-6ef82d50-2047-40c8-b0f4-337e260edc5d req-c452043a-8aa0-49b5-81bb-1cb026cde2f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-cb6d7816-1ee7-4474-8b30-f47c612e9af4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:20:15 compute-2 nova_compute[226829]: 2026-01-31 08:20:15.731 226833 DEBUG oslo_concurrency.processutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cb6d7816-1ee7-4474-8b30-f47c612e9af4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpv8358k5r" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:20:15 compute-2 nova_compute[226829]: 2026-01-31 08:20:15.757 226833 DEBUG nova.storage.rbd_utils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] rbd image cb6d7816-1ee7-4474-8b30-f47c612e9af4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:20:15 compute-2 nova_compute[226829]: 2026-01-31 08:20:15.761 226833 DEBUG oslo_concurrency.processutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cb6d7816-1ee7-4474-8b30-f47c612e9af4/disk.config cb6d7816-1ee7-4474-8b30-f47c612e9af4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:20:16 compute-2 ceph-mon[77282]: pgmap v2577: 305 pgs: 305 active+clean; 425 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.5 MiB/s wr, 175 op/s
Jan 31 08:20:16 compute-2 nova_compute[226829]: 2026-01-31 08:20:16.376 226833 DEBUG oslo_concurrency.processutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cb6d7816-1ee7-4474-8b30-f47c612e9af4/disk.config cb6d7816-1ee7-4474-8b30-f47c612e9af4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:20:16 compute-2 nova_compute[226829]: 2026-01-31 08:20:16.377 226833 INFO nova.virt.libvirt.driver [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Deleting local config drive /var/lib/nova/instances/cb6d7816-1ee7-4474-8b30-f47c612e9af4/disk.config because it was imported into RBD.
Jan 31 08:20:16 compute-2 nova_compute[226829]: 2026-01-31 08:20:16.379 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:16 compute-2 kernel: tap42f62ba1-09: entered promiscuous mode
Jan 31 08:20:16 compute-2 NetworkManager[48999]: <info>  [1769847616.4124] manager: (tap42f62ba1-09): new Tun device (/org/freedesktop/NetworkManager/Devices/295)
Jan 31 08:20:16 compute-2 ovn_controller[133834]: 2026-01-31T08:20:16Z|00600|binding|INFO|Claiming lport 42f62ba1-093d-4016-adfc-0a6b7c986b43 for this chassis.
Jan 31 08:20:16 compute-2 ovn_controller[133834]: 2026-01-31T08:20:16Z|00601|binding|INFO|42f62ba1-093d-4016-adfc-0a6b7c986b43: Claiming fa:16:3e:5e:0b:9b 10.100.0.3
Jan 31 08:20:16 compute-2 nova_compute[226829]: 2026-01-31 08:20:16.413 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:16 compute-2 nova_compute[226829]: 2026-01-31 08:20:16.416 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:16 compute-2 nova_compute[226829]: 2026-01-31 08:20:16.418 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:16 compute-2 systemd-udevd[292908]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:20:16 compute-2 systemd-machined[195142]: New machine qemu-66-instance-00000091.
Jan 31 08:20:16 compute-2 nova_compute[226829]: 2026-01-31 08:20:16.443 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:16 compute-2 ovn_controller[133834]: 2026-01-31T08:20:16Z|00602|binding|INFO|Setting lport 42f62ba1-093d-4016-adfc-0a6b7c986b43 ovn-installed in OVS
Jan 31 08:20:16 compute-2 NetworkManager[48999]: <info>  [1769847616.4468] device (tap42f62ba1-09): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:20:16 compute-2 NetworkManager[48999]: <info>  [1769847616.4477] device (tap42f62ba1-09): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:20:16 compute-2 nova_compute[226829]: 2026-01-31 08:20:16.447 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:16 compute-2 systemd[1]: Started Virtual Machine qemu-66-instance-00000091.
Jan 31 08:20:16 compute-2 ovn_controller[133834]: 2026-01-31T08:20:16Z|00603|binding|INFO|Setting lport 42f62ba1-093d-4016-adfc-0a6b7c986b43 up in Southbound
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:16.579 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:0b:9b 10.100.0.3'], port_security=['fa:16:3e:5e:0b:9b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'cb6d7816-1ee7-4474-8b30-f47c612e9af4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3ed1ec06-3665-43e8-b434-ceccd33fe864', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5946643c9f6741e299159c7a6903a9f9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ab71f962-ccf6-4e70-91cc-7b68c69bd029', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57d9167e-c75e-46ab-8f42-80211b3f95a5, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=42f62ba1-093d-4016-adfc-0a6b7c986b43) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:16.580 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 42f62ba1-093d-4016-adfc-0a6b7c986b43 in datapath 3ed1ec06-3665-43e8-b434-ceccd33fe864 bound to our chassis
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:16.582 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3ed1ec06-3665-43e8-b434-ceccd33fe864
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:16.593 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3f860345-dbc1-4e4c-80b0-e905d5bd3059]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:16.596 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3ed1ec06-31 in ovnmeta-3ed1ec06-3665-43e8-b434-ceccd33fe864 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:16.600 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3ed1ec06-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:16.600 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf3b4ba-acad-4166-ad5a-2c1d0a79524d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:16.601 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2122ff2e-ce3f-4192-b8d7-a5c16b83196b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:16.621 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[96350074-5134-4646-9f94-66fb78a2b225]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:16.632 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[64e25a8d-04c2-4870-92ee-837162ac64fc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:16.655 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[b898c330-9124-4377-8c68-737043b70e19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:16.661 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d74ba230-4e6e-4e58-9ab3-2366c49ab9c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:16 compute-2 NetworkManager[48999]: <info>  [1769847616.6622] manager: (tap3ed1ec06-30): new Veth device (/org/freedesktop/NetworkManager/Devices/296)
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:16.684 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[50c44671-bda1-4e12-ba09-39a23fdec075]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:16.687 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[4cf46733-687d-4a74-9d37-23605e6ece22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:16 compute-2 NetworkManager[48999]: <info>  [1769847616.7011] device (tap3ed1ec06-30): carrier: link connected
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:16.702 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[bbecd059-102b-4b2e-a393-94a204cb32c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:16.714 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4926a5ad-1b03-4337-ae22-6d568da23b5d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3ed1ec06-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:97:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786917, 'reachable_time': 16472, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292942, 'error': None, 'target': 'ovnmeta-3ed1ec06-3665-43e8-b434-ceccd33fe864', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:16.728 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e3722ffe-2ae3-42c6-97f0-ae2ac74c0d23]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe72:9710'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 786917, 'tstamp': 786917}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292943, 'error': None, 'target': 'ovnmeta-3ed1ec06-3665-43e8-b434-ceccd33fe864', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:16.740 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[defff3de-fb16-4f66-b333-74430e1bf418]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3ed1ec06-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:97:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786917, 'reachable_time': 16472, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292944, 'error': None, 'target': 'ovnmeta-3ed1ec06-3665-43e8-b434-ceccd33fe864', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:16.761 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c87951c6-d2e3-4e90-8e4b-f65d52074fcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:16.798 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ffbb4025-3fe8-4e24-a241-3d9aa59aec70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:16.800 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3ed1ec06-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:16.800 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:16.801 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3ed1ec06-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:20:16 compute-2 kernel: tap3ed1ec06-30: entered promiscuous mode
Jan 31 08:20:16 compute-2 nova_compute[226829]: 2026-01-31 08:20:16.803 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:16 compute-2 NetworkManager[48999]: <info>  [1769847616.8046] manager: (tap3ed1ec06-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/297)
Jan 31 08:20:16 compute-2 nova_compute[226829]: 2026-01-31 08:20:16.804 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:16.806 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3ed1ec06-30, col_values=(('external_ids', {'iface-id': '5cd095e0-87c1-412d-87e6-76aa67f6e554'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:20:16 compute-2 nova_compute[226829]: 2026-01-31 08:20:16.807 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:16 compute-2 ovn_controller[133834]: 2026-01-31T08:20:16Z|00604|binding|INFO|Releasing lport 5cd095e0-87c1-412d-87e6-76aa67f6e554 from this chassis (sb_readonly=0)
Jan 31 08:20:16 compute-2 nova_compute[226829]: 2026-01-31 08:20:16.811 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:16.813 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3ed1ec06-3665-43e8-b434-ceccd33fe864.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3ed1ec06-3665-43e8-b434-ceccd33fe864.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:16.814 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[96478421-a59b-4887-aa28-c601a299cb48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:16.815 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-3ed1ec06-3665-43e8-b434-ceccd33fe864
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/3ed1ec06-3665-43e8-b434-ceccd33fe864.pid.haproxy
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 3ed1ec06-3665-43e8-b434-ceccd33fe864
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:20:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:16.816 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3ed1ec06-3665-43e8-b434-ceccd33fe864', 'env', 'PROCESS_TAG=haproxy-3ed1ec06-3665-43e8-b434-ceccd33fe864', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3ed1ec06-3665-43e8-b434-ceccd33fe864.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:20:17 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Jan 31 08:20:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:17.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:17 compute-2 podman[292976]: 2026-01-31 08:20:17.196987098 +0000 UTC m=+0.051862766 container create d4eabf8fdc2f51786766a4c7deaec1ca4c28b9caeb6479c082dac55e6859f3cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3ed1ec06-3665-43e8-b434-ceccd33fe864, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 08:20:17 compute-2 systemd[1]: Started libpod-conmon-d4eabf8fdc2f51786766a4c7deaec1ca4c28b9caeb6479c082dac55e6859f3cc.scope.
Jan 31 08:20:17 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:20:17 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de1d1a5f04dd05ebe3e619295f8ae14ecd9e8b94c7b944fafeefc0b661160fa6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:17 compute-2 podman[292976]: 2026-01-31 08:20:17.170989164 +0000 UTC m=+0.025864882 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:20:17 compute-2 podman[292976]: 2026-01-31 08:20:17.272278967 +0000 UTC m=+0.127154645 container init d4eabf8fdc2f51786766a4c7deaec1ca4c28b9caeb6479c082dac55e6859f3cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3ed1ec06-3665-43e8-b434-ceccd33fe864, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 08:20:17 compute-2 podman[292976]: 2026-01-31 08:20:17.276595753 +0000 UTC m=+0.131471421 container start d4eabf8fdc2f51786766a4c7deaec1ca4c28b9caeb6479c082dac55e6859f3cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3ed1ec06-3665-43e8-b434-ceccd33fe864, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:20:17 compute-2 neutron-haproxy-ovnmeta-3ed1ec06-3665-43e8-b434-ceccd33fe864[293015]: [NOTICE]   (293035) : New worker (293038) forked
Jan 31 08:20:17 compute-2 neutron-haproxy-ovnmeta-3ed1ec06-3665-43e8-b434-ceccd33fe864[293015]: [NOTICE]   (293035) : Loading success.
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.378 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847617.3775022, cb6d7816-1ee7-4474-8b30-f47c612e9af4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.378 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] VM Started (Lifecycle Event)
Jan 31 08:20:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/140923325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:17.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.443 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.447 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847617.3778331, cb6d7816-1ee7-4474-8b30-f47c612e9af4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.447 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] VM Paused (Lifecycle Event)
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.511 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.514 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.533 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.593 226833 DEBUG nova.compute.manager [req-76bcc8f0-cf6b-48b9-9fa3-61d3c527ec31 req-eb4e7d71-fe67-42d4-897f-b12804edd618 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Received event network-vif-plugged-42f62ba1-093d-4016-adfc-0a6b7c986b43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.593 226833 DEBUG oslo_concurrency.lockutils [req-76bcc8f0-cf6b-48b9-9fa3-61d3c527ec31 req-eb4e7d71-fe67-42d4-897f-b12804edd618 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "cb6d7816-1ee7-4474-8b30-f47c612e9af4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.593 226833 DEBUG oslo_concurrency.lockutils [req-76bcc8f0-cf6b-48b9-9fa3-61d3c527ec31 req-eb4e7d71-fe67-42d4-897f-b12804edd618 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cb6d7816-1ee7-4474-8b30-f47c612e9af4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.594 226833 DEBUG oslo_concurrency.lockutils [req-76bcc8f0-cf6b-48b9-9fa3-61d3c527ec31 req-eb4e7d71-fe67-42d4-897f-b12804edd618 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cb6d7816-1ee7-4474-8b30-f47c612e9af4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.594 226833 DEBUG nova.compute.manager [req-76bcc8f0-cf6b-48b9-9fa3-61d3c527ec31 req-eb4e7d71-fe67-42d4-897f-b12804edd618 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Processing event network-vif-plugged-42f62ba1-093d-4016-adfc-0a6b7c986b43 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.595 226833 DEBUG nova.compute.manager [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.598 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847617.5981567, cb6d7816-1ee7-4474-8b30-f47c612e9af4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.599 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] VM Resumed (Lifecycle Event)
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.601 226833 DEBUG nova.virt.libvirt.driver [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.605 226833 INFO nova.virt.libvirt.driver [-] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Instance spawned successfully.
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.606 226833 DEBUG nova.virt.libvirt.driver [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.627 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.633 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.657 226833 DEBUG nova.virt.libvirt.driver [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.657 226833 DEBUG nova.virt.libvirt.driver [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.658 226833 DEBUG nova.virt.libvirt.driver [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.659 226833 DEBUG nova.virt.libvirt.driver [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.660 226833 DEBUG nova.virt.libvirt.driver [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.660 226833 DEBUG nova.virt.libvirt.driver [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.694 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.852 226833 INFO nova.compute.manager [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Took 10.55 seconds to spawn the instance on the hypervisor.
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.853 226833 DEBUG nova.compute.manager [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:20:17 compute-2 nova_compute[226829]: 2026-01-31 08:20:17.980 226833 INFO nova.compute.manager [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Took 12.02 seconds to build instance.
Jan 31 08:20:18 compute-2 nova_compute[226829]: 2026-01-31 08:20:18.010 226833 DEBUG oslo_concurrency.lockutils [None req-33c187bd-510a-4a3e-b11b-37caf9ed9c4f fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Lock "cb6d7816-1ee7-4474-8b30-f47c612e9af4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:18 compute-2 podman[293049]: 2026-01-31 08:20:18.185226439 +0000 UTC m=+0.072732101 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 08:20:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1307699288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/895517308' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:18 compute-2 ceph-mon[77282]: pgmap v2578: 305 pgs: 305 active+clean; 456 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.3 MiB/s wr, 212 op/s
Jan 31 08:20:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2106032858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1156638752' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:20:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:19.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.421 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:19.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.516 226833 DEBUG oslo_concurrency.lockutils [None req-46b1a9f1-c6cd-477c-b6aa-bb73ca30ecd4 fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Acquiring lock "cb6d7816-1ee7-4474-8b30-f47c612e9af4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.516 226833 DEBUG oslo_concurrency.lockutils [None req-46b1a9f1-c6cd-477c-b6aa-bb73ca30ecd4 fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Lock "cb6d7816-1ee7-4474-8b30-f47c612e9af4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.517 226833 DEBUG oslo_concurrency.lockutils [None req-46b1a9f1-c6cd-477c-b6aa-bb73ca30ecd4 fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Acquiring lock "cb6d7816-1ee7-4474-8b30-f47c612e9af4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.517 226833 DEBUG oslo_concurrency.lockutils [None req-46b1a9f1-c6cd-477c-b6aa-bb73ca30ecd4 fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Lock "cb6d7816-1ee7-4474-8b30-f47c612e9af4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.517 226833 DEBUG oslo_concurrency.lockutils [None req-46b1a9f1-c6cd-477c-b6aa-bb73ca30ecd4 fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Lock "cb6d7816-1ee7-4474-8b30-f47c612e9af4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.519 226833 INFO nova.compute.manager [None req-46b1a9f1-c6cd-477c-b6aa-bb73ca30ecd4 fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Terminating instance
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.520 226833 DEBUG nova.compute.manager [None req-46b1a9f1-c6cd-477c-b6aa-bb73ca30ecd4 fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:20:19 compute-2 kernel: tap42f62ba1-09 (unregistering): left promiscuous mode
Jan 31 08:20:19 compute-2 NetworkManager[48999]: <info>  [1769847619.5720] device (tap42f62ba1-09): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.571 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:19 compute-2 ovn_controller[133834]: 2026-01-31T08:20:19Z|00605|binding|INFO|Releasing lport 42f62ba1-093d-4016-adfc-0a6b7c986b43 from this chassis (sb_readonly=0)
Jan 31 08:20:19 compute-2 ovn_controller[133834]: 2026-01-31T08:20:19Z|00606|binding|INFO|Setting lport 42f62ba1-093d-4016-adfc-0a6b7c986b43 down in Southbound
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.579 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:19 compute-2 ovn_controller[133834]: 2026-01-31T08:20:19Z|00607|binding|INFO|Removing iface tap42f62ba1-09 ovn-installed in OVS
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.581 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.586 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:19.593 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:0b:9b 10.100.0.3'], port_security=['fa:16:3e:5e:0b:9b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'cb6d7816-1ee7-4474-8b30-f47c612e9af4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3ed1ec06-3665-43e8-b434-ceccd33fe864', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5946643c9f6741e299159c7a6903a9f9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ab71f962-ccf6-4e70-91cc-7b68c69bd029', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57d9167e-c75e-46ab-8f42-80211b3f95a5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=42f62ba1-093d-4016-adfc-0a6b7c986b43) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:20:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:19.594 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 42f62ba1-093d-4016-adfc-0a6b7c986b43 in datapath 3ed1ec06-3665-43e8-b434-ceccd33fe864 unbound from our chassis
Jan 31 08:20:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:19.596 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3ed1ec06-3665-43e8-b434-ceccd33fe864, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:20:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:19.597 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2be5c05f-a19f-4340-a62b-cd3355632af2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:19.597 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3ed1ec06-3665-43e8-b434-ceccd33fe864 namespace which is not needed anymore
Jan 31 08:20:19 compute-2 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000091.scope: Deactivated successfully.
Jan 31 08:20:19 compute-2 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000091.scope: Consumed 2.971s CPU time.
Jan 31 08:20:19 compute-2 systemd-machined[195142]: Machine qemu-66-instance-00000091 terminated.
Jan 31 08:20:19 compute-2 neutron-haproxy-ovnmeta-3ed1ec06-3665-43e8-b434-ceccd33fe864[293015]: [NOTICE]   (293035) : haproxy version is 2.8.14-c23fe91
Jan 31 08:20:19 compute-2 neutron-haproxy-ovnmeta-3ed1ec06-3665-43e8-b434-ceccd33fe864[293015]: [NOTICE]   (293035) : path to executable is /usr/sbin/haproxy
Jan 31 08:20:19 compute-2 neutron-haproxy-ovnmeta-3ed1ec06-3665-43e8-b434-ceccd33fe864[293015]: [WARNING]  (293035) : Exiting Master process...
Jan 31 08:20:19 compute-2 neutron-haproxy-ovnmeta-3ed1ec06-3665-43e8-b434-ceccd33fe864[293015]: [ALERT]    (293035) : Current worker (293038) exited with code 143 (Terminated)
Jan 31 08:20:19 compute-2 neutron-haproxy-ovnmeta-3ed1ec06-3665-43e8-b434-ceccd33fe864[293015]: [WARNING]  (293035) : All workers exited. Exiting... (0)
Jan 31 08:20:19 compute-2 systemd[1]: libpod-d4eabf8fdc2f51786766a4c7deaec1ca4c28b9caeb6479c082dac55e6859f3cc.scope: Deactivated successfully.
Jan 31 08:20:19 compute-2 podman[293091]: 2026-01-31 08:20:19.723482803 +0000 UTC m=+0.050924060 container died d4eabf8fdc2f51786766a4c7deaec1ca4c28b9caeb6479c082dac55e6859f3cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3ed1ec06-3665-43e8-b434-ceccd33fe864, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:20:19 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d4eabf8fdc2f51786766a4c7deaec1ca4c28b9caeb6479c082dac55e6859f3cc-userdata-shm.mount: Deactivated successfully.
Jan 31 08:20:19 compute-2 systemd[1]: var-lib-containers-storage-overlay-de1d1a5f04dd05ebe3e619295f8ae14ecd9e8b94c7b944fafeefc0b661160fa6-merged.mount: Deactivated successfully.
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.760 226833 INFO nova.virt.libvirt.driver [-] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Instance destroyed successfully.
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.762 226833 DEBUG nova.objects.instance [None req-46b1a9f1-c6cd-477c-b6aa-bb73ca30ecd4 fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Lazy-loading 'resources' on Instance uuid cb6d7816-1ee7-4474-8b30-f47c612e9af4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:20:19 compute-2 podman[293091]: 2026-01-31 08:20:19.781870714 +0000 UTC m=+0.109311951 container cleanup d4eabf8fdc2f51786766a4c7deaec1ca4c28b9caeb6479c082dac55e6859f3cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3ed1ec06-3665-43e8-b434-ceccd33fe864, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 08:20:19 compute-2 systemd[1]: libpod-conmon-d4eabf8fdc2f51786766a4c7deaec1ca4c28b9caeb6479c082dac55e6859f3cc.scope: Deactivated successfully.
Jan 31 08:20:19 compute-2 podman[293131]: 2026-01-31 08:20:19.83602032 +0000 UTC m=+0.040084696 container remove d4eabf8fdc2f51786766a4c7deaec1ca4c28b9caeb6479c082dac55e6859f3cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3ed1ec06-3665-43e8-b434-ceccd33fe864, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:20:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:19.841 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cae0740d-2a4b-4961-8acf-53e431c2167c]: (4, ('Sat Jan 31 08:20:19 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3ed1ec06-3665-43e8-b434-ceccd33fe864 (d4eabf8fdc2f51786766a4c7deaec1ca4c28b9caeb6479c082dac55e6859f3cc)\nd4eabf8fdc2f51786766a4c7deaec1ca4c28b9caeb6479c082dac55e6859f3cc\nSat Jan 31 08:20:19 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3ed1ec06-3665-43e8-b434-ceccd33fe864 (d4eabf8fdc2f51786766a4c7deaec1ca4c28b9caeb6479c082dac55e6859f3cc)\nd4eabf8fdc2f51786766a4c7deaec1ca4c28b9caeb6479c082dac55e6859f3cc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:19.843 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9d317e00-7ddc-48c3-bf0f-5a1043394a22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:19.844 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3ed1ec06-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.846 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:19 compute-2 kernel: tap3ed1ec06-30: left promiscuous mode
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.854 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:19.856 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b6fb94be-c160-4de3-8eb8-bbba070e4384]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.866 226833 DEBUG nova.virt.libvirt.vif [None req-46b1a9f1-c6cd-477c-b6aa-bb73ca30ecd4 fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:20:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1487227944',display_name='tempest-ServerAddressesNegativeTestJSON-server-1487227944',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1487227944',id=145,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:20:17Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5946643c9f6741e299159c7a6903a9f9',ramdisk_id='',reservation_id='r-z9odw6lm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesNegativeTestJSON-401177754',owner_user_name='tempest-ServerAddressesNegativeTestJSON-401177754-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:20:17Z,user_data=None,user_id='fdeb8f10b75f4d65be0e243714de9420',uuid=cb6d7816-1ee7-4474-8b30-f47c612e9af4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "42f62ba1-093d-4016-adfc-0a6b7c986b43", "address": "fa:16:3e:5e:0b:9b", "network": {"id": "3ed1ec06-3665-43e8-b434-ceccd33fe864", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1689208332-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5946643c9f6741e299159c7a6903a9f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42f62ba1-09", "ovs_interfaceid": "42f62ba1-093d-4016-adfc-0a6b7c986b43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.867 226833 DEBUG nova.network.os_vif_util [None req-46b1a9f1-c6cd-477c-b6aa-bb73ca30ecd4 fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Converting VIF {"id": "42f62ba1-093d-4016-adfc-0a6b7c986b43", "address": "fa:16:3e:5e:0b:9b", "network": {"id": "3ed1ec06-3665-43e8-b434-ceccd33fe864", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1689208332-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5946643c9f6741e299159c7a6903a9f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42f62ba1-09", "ovs_interfaceid": "42f62ba1-093d-4016-adfc-0a6b7c986b43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.868 226833 DEBUG nova.network.os_vif_util [None req-46b1a9f1-c6cd-477c-b6aa-bb73ca30ecd4 fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:0b:9b,bridge_name='br-int',has_traffic_filtering=True,id=42f62ba1-093d-4016-adfc-0a6b7c986b43,network=Network(3ed1ec06-3665-43e8-b434-ceccd33fe864),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42f62ba1-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.868 226833 DEBUG os_vif [None req-46b1a9f1-c6cd-477c-b6aa-bb73ca30ecd4 fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:0b:9b,bridge_name='br-int',has_traffic_filtering=True,id=42f62ba1-093d-4016-adfc-0a6b7c986b43,network=Network(3ed1ec06-3665-43e8-b434-ceccd33fe864),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42f62ba1-09') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.871 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.871 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap42f62ba1-09, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.872 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.873 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:19.875 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[376319df-37db-4676-b8a3-765b0e948b1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.876 226833 INFO os_vif [None req-46b1a9f1-c6cd-477c-b6aa-bb73ca30ecd4 fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:0b:9b,bridge_name='br-int',has_traffic_filtering=True,id=42f62ba1-093d-4016-adfc-0a6b7c986b43,network=Network(3ed1ec06-3665-43e8-b434-ceccd33fe864),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42f62ba1-09')
Jan 31 08:20:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:19.876 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[afb24111-8bee-4568-8c56-f5ebc3638e14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:19.886 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cac07af7-facd-425e-95b6-d3800eaf7b6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786912, 'reachable_time': 36643, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293151, 'error': None, 'target': 'ovnmeta-3ed1ec06-3665-43e8-b434-ceccd33fe864', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:19.889 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3ed1ec06-3665-43e8-b434-ceccd33fe864 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:20:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:19.889 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a3c0ad-bb55-46fb-9bda-723dc8c9353e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:19 compute-2 systemd[1]: run-netns-ovnmeta\x2d3ed1ec06\x2d3665\x2d43e8\x2db434\x2dceccd33fe864.mount: Deactivated successfully.
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.903 226833 DEBUG nova.compute.manager [req-0569501a-364a-4980-ac3b-e9b3aa2b67ab req-85701179-0730-4a80-b468-08d309912f42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Received event network-vif-plugged-42f62ba1-093d-4016-adfc-0a6b7c986b43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.904 226833 DEBUG oslo_concurrency.lockutils [req-0569501a-364a-4980-ac3b-e9b3aa2b67ab req-85701179-0730-4a80-b468-08d309912f42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "cb6d7816-1ee7-4474-8b30-f47c612e9af4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.904 226833 DEBUG oslo_concurrency.lockutils [req-0569501a-364a-4980-ac3b-e9b3aa2b67ab req-85701179-0730-4a80-b468-08d309912f42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cb6d7816-1ee7-4474-8b30-f47c612e9af4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.904 226833 DEBUG oslo_concurrency.lockutils [req-0569501a-364a-4980-ac3b-e9b3aa2b67ab req-85701179-0730-4a80-b468-08d309912f42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cb6d7816-1ee7-4474-8b30-f47c612e9af4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.905 226833 DEBUG nova.compute.manager [req-0569501a-364a-4980-ac3b-e9b3aa2b67ab req-85701179-0730-4a80-b468-08d309912f42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] No waiting events found dispatching network-vif-plugged-42f62ba1-093d-4016-adfc-0a6b7c986b43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:20:19 compute-2 nova_compute[226829]: 2026-01-31 08:20:19.905 226833 WARNING nova.compute.manager [req-0569501a-364a-4980-ac3b-e9b3aa2b67ab req-85701179-0730-4a80-b468-08d309912f42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Received unexpected event network-vif-plugged-42f62ba1-093d-4016-adfc-0a6b7c986b43 for instance with vm_state active and task_state deleting.
Jan 31 08:20:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:20 compute-2 nova_compute[226829]: 2026-01-31 08:20:20.329 226833 INFO nova.virt.libvirt.driver [None req-46b1a9f1-c6cd-477c-b6aa-bb73ca30ecd4 fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Deleting instance files /var/lib/nova/instances/cb6d7816-1ee7-4474-8b30-f47c612e9af4_del
Jan 31 08:20:20 compute-2 nova_compute[226829]: 2026-01-31 08:20:20.329 226833 INFO nova.virt.libvirt.driver [None req-46b1a9f1-c6cd-477c-b6aa-bb73ca30ecd4 fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Deletion of /var/lib/nova/instances/cb6d7816-1ee7-4474-8b30-f47c612e9af4_del complete
Jan 31 08:20:20 compute-2 nova_compute[226829]: 2026-01-31 08:20:20.417 226833 INFO nova.compute.manager [None req-46b1a9f1-c6cd-477c-b6aa-bb73ca30ecd4 fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Took 0.90 seconds to destroy the instance on the hypervisor.
Jan 31 08:20:20 compute-2 nova_compute[226829]: 2026-01-31 08:20:20.417 226833 DEBUG oslo.service.loopingcall [None req-46b1a9f1-c6cd-477c-b6aa-bb73ca30ecd4 fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:20:20 compute-2 nova_compute[226829]: 2026-01-31 08:20:20.418 226833 DEBUG nova.compute.manager [-] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:20:20 compute-2 nova_compute[226829]: 2026-01-31 08:20:20.418 226833 DEBUG nova.network.neutron [-] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:20:21 compute-2 ceph-mon[77282]: pgmap v2579: 305 pgs: 305 active+clean; 522 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 7.3 MiB/s wr, 276 op/s
Jan 31 08:20:21 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e338 e338: 3 total, 3 up, 3 in
Jan 31 08:20:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:20:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:21.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:20:21 compute-2 nova_compute[226829]: 2026-01-31 08:20:21.380 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:21.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:21 compute-2 nova_compute[226829]: 2026-01-31 08:20:21.443 226833 DEBUG nova.network.neutron [-] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:20:21 compute-2 nova_compute[226829]: 2026-01-31 08:20:21.469 226833 INFO nova.compute.manager [-] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Took 1.05 seconds to deallocate network for instance.
Jan 31 08:20:21 compute-2 nova_compute[226829]: 2026-01-31 08:20:21.538 226833 DEBUG oslo_concurrency.lockutils [None req-46b1a9f1-c6cd-477c-b6aa-bb73ca30ecd4 fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:21 compute-2 nova_compute[226829]: 2026-01-31 08:20:21.539 226833 DEBUG oslo_concurrency.lockutils [None req-46b1a9f1-c6cd-477c-b6aa-bb73ca30ecd4 fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:21 compute-2 nova_compute[226829]: 2026-01-31 08:20:21.610 226833 DEBUG oslo_concurrency.processutils [None req-46b1a9f1-c6cd-477c-b6aa-bb73ca30ecd4 fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:20:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:20:22 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3235684417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:22 compute-2 nova_compute[226829]: 2026-01-31 08:20:22.055 226833 DEBUG oslo_concurrency.processutils [None req-46b1a9f1-c6cd-477c-b6aa-bb73ca30ecd4 fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:20:22 compute-2 nova_compute[226829]: 2026-01-31 08:20:22.061 226833 DEBUG nova.compute.provider_tree [None req-46b1a9f1-c6cd-477c-b6aa-bb73ca30ecd4 fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:20:22 compute-2 nova_compute[226829]: 2026-01-31 08:20:22.097 226833 DEBUG nova.compute.manager [req-daa207a5-ca55-4adb-82a1-8d2a8388c22d req-2253ba59-8000-4485-8fb6-3678f5bcc4b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Received event network-vif-unplugged-42f62ba1-093d-4016-adfc-0a6b7c986b43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:20:22 compute-2 nova_compute[226829]: 2026-01-31 08:20:22.097 226833 DEBUG oslo_concurrency.lockutils [req-daa207a5-ca55-4adb-82a1-8d2a8388c22d req-2253ba59-8000-4485-8fb6-3678f5bcc4b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "cb6d7816-1ee7-4474-8b30-f47c612e9af4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:22 compute-2 nova_compute[226829]: 2026-01-31 08:20:22.098 226833 DEBUG oslo_concurrency.lockutils [req-daa207a5-ca55-4adb-82a1-8d2a8388c22d req-2253ba59-8000-4485-8fb6-3678f5bcc4b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cb6d7816-1ee7-4474-8b30-f47c612e9af4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:22 compute-2 nova_compute[226829]: 2026-01-31 08:20:22.098 226833 DEBUG oslo_concurrency.lockutils [req-daa207a5-ca55-4adb-82a1-8d2a8388c22d req-2253ba59-8000-4485-8fb6-3678f5bcc4b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cb6d7816-1ee7-4474-8b30-f47c612e9af4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:22 compute-2 nova_compute[226829]: 2026-01-31 08:20:22.098 226833 DEBUG nova.compute.manager [req-daa207a5-ca55-4adb-82a1-8d2a8388c22d req-2253ba59-8000-4485-8fb6-3678f5bcc4b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] No waiting events found dispatching network-vif-unplugged-42f62ba1-093d-4016-adfc-0a6b7c986b43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:20:22 compute-2 nova_compute[226829]: 2026-01-31 08:20:22.098 226833 WARNING nova.compute.manager [req-daa207a5-ca55-4adb-82a1-8d2a8388c22d req-2253ba59-8000-4485-8fb6-3678f5bcc4b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Received unexpected event network-vif-unplugged-42f62ba1-093d-4016-adfc-0a6b7c986b43 for instance with vm_state deleted and task_state None.
Jan 31 08:20:22 compute-2 nova_compute[226829]: 2026-01-31 08:20:22.098 226833 DEBUG nova.compute.manager [req-daa207a5-ca55-4adb-82a1-8d2a8388c22d req-2253ba59-8000-4485-8fb6-3678f5bcc4b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Received event network-vif-plugged-42f62ba1-093d-4016-adfc-0a6b7c986b43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:20:22 compute-2 nova_compute[226829]: 2026-01-31 08:20:22.099 226833 DEBUG oslo_concurrency.lockutils [req-daa207a5-ca55-4adb-82a1-8d2a8388c22d req-2253ba59-8000-4485-8fb6-3678f5bcc4b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "cb6d7816-1ee7-4474-8b30-f47c612e9af4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:22 compute-2 nova_compute[226829]: 2026-01-31 08:20:22.099 226833 DEBUG oslo_concurrency.lockutils [req-daa207a5-ca55-4adb-82a1-8d2a8388c22d req-2253ba59-8000-4485-8fb6-3678f5bcc4b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cb6d7816-1ee7-4474-8b30-f47c612e9af4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:22 compute-2 nova_compute[226829]: 2026-01-31 08:20:22.099 226833 DEBUG oslo_concurrency.lockutils [req-daa207a5-ca55-4adb-82a1-8d2a8388c22d req-2253ba59-8000-4485-8fb6-3678f5bcc4b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cb6d7816-1ee7-4474-8b30-f47c612e9af4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:22 compute-2 nova_compute[226829]: 2026-01-31 08:20:22.099 226833 DEBUG nova.compute.manager [req-daa207a5-ca55-4adb-82a1-8d2a8388c22d req-2253ba59-8000-4485-8fb6-3678f5bcc4b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] No waiting events found dispatching network-vif-plugged-42f62ba1-093d-4016-adfc-0a6b7c986b43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:20:22 compute-2 nova_compute[226829]: 2026-01-31 08:20:22.099 226833 WARNING nova.compute.manager [req-daa207a5-ca55-4adb-82a1-8d2a8388c22d req-2253ba59-8000-4485-8fb6-3678f5bcc4b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Received unexpected event network-vif-plugged-42f62ba1-093d-4016-adfc-0a6b7c986b43 for instance with vm_state deleted and task_state None.
Jan 31 08:20:22 compute-2 nova_compute[226829]: 2026-01-31 08:20:22.122 226833 DEBUG nova.scheduler.client.report [None req-46b1a9f1-c6cd-477c-b6aa-bb73ca30ecd4 fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:20:22 compute-2 nova_compute[226829]: 2026-01-31 08:20:22.163 226833 DEBUG oslo_concurrency.lockutils [None req-46b1a9f1-c6cd-477c-b6aa-bb73ca30ecd4 fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:22 compute-2 nova_compute[226829]: 2026-01-31 08:20:22.196 226833 INFO nova.scheduler.client.report [None req-46b1a9f1-c6cd-477c-b6aa-bb73ca30ecd4 fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Deleted allocations for instance cb6d7816-1ee7-4474-8b30-f47c612e9af4
Jan 31 08:20:22 compute-2 ceph-mon[77282]: osdmap e338: 3 total, 3 up, 3 in
Jan 31 08:20:22 compute-2 nova_compute[226829]: 2026-01-31 08:20:22.288 226833 DEBUG oslo_concurrency.lockutils [None req-46b1a9f1-c6cd-477c-b6aa-bb73ca30ecd4 fdeb8f10b75f4d65be0e243714de9420 5946643c9f6741e299159c7a6903a9f9 - - default default] Lock "cb6d7816-1ee7-4474-8b30-f47c612e9af4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:22 compute-2 nova_compute[226829]: 2026-01-31 08:20:22.383 226833 DEBUG nova.compute.manager [req-d77706e6-1c52-42cb-a57e-fe98ea0b7cb1 req-68d5049b-2b3f-4f76-8b5a-12888b38f2d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Received event network-vif-deleted-42f62ba1-093d-4016-adfc-0a6b7c986b43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:20:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:20:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:23.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:20:23 compute-2 ceph-mon[77282]: pgmap v2581: 305 pgs: 305 active+clean; 534 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.4 MiB/s rd, 7.8 MiB/s wr, 350 op/s
Jan 31 08:20:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3235684417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:23.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:24 compute-2 ceph-mon[77282]: pgmap v2582: 305 pgs: 305 active+clean; 518 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 6.9 MiB/s wr, 401 op/s
Jan 31 08:20:24 compute-2 nova_compute[226829]: 2026-01-31 08:20:24.873 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e338 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:25.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:25.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:26 compute-2 nova_compute[226829]: 2026-01-31 08:20:26.414 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:27 compute-2 ceph-mon[77282]: pgmap v2583: 305 pgs: 305 active+clean; 425 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 6.8 MiB/s wr, 491 op/s
Jan 31 08:20:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e339 e339: 3 total, 3 up, 3 in
Jan 31 08:20:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:27.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:20:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:27.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:20:27 compute-2 sudo[293196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:20:27 compute-2 sudo[293196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:27 compute-2 sudo[293196]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:27 compute-2 sudo[293222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:20:27 compute-2 sudo[293222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:27 compute-2 sudo[293222]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:28 compute-2 ceph-mon[77282]: osdmap e339: 3 total, 3 up, 3 in
Jan 31 08:20:28 compute-2 nova_compute[226829]: 2026-01-31 08:20:28.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:29 compute-2 ceph-mon[77282]: pgmap v2585: 305 pgs: 305 active+clean; 425 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 1.7 MiB/s wr, 415 op/s
Jan 31 08:20:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:20:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:29.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:20:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:29.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:29 compute-2 nova_compute[226829]: 2026-01-31 08:20:29.875 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:30 compute-2 ceph-mon[77282]: pgmap v2586: 305 pgs: 305 active+clean; 425 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 20 KiB/s wr, 308 op/s
Jan 31 08:20:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:30 compute-2 nova_compute[226829]: 2026-01-31 08:20:30.658 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:20:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:31.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:20:31 compute-2 nova_compute[226829]: 2026-01-31 08:20:31.415 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:31.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:32 compute-2 nova_compute[226829]: 2026-01-31 08:20:32.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:33 compute-2 ceph-mon[77282]: pgmap v2587: 305 pgs: 305 active+clean; 427 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 20 KiB/s wr, 279 op/s
Jan 31 08:20:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:20:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:33.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:20:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:33.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:33 compute-2 nova_compute[226829]: 2026-01-31 08:20:33.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:33 compute-2 nova_compute[226829]: 2026-01-31 08:20:33.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:20:33 compute-2 nova_compute[226829]: 2026-01-31 08:20:33.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:20:33 compute-2 nova_compute[226829]: 2026-01-31 08:20:33.527 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:20:34 compute-2 nova_compute[226829]: 2026-01-31 08:20:34.751 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847619.74969, cb6d7816-1ee7-4474-8b30-f47c612e9af4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:20:34 compute-2 nova_compute[226829]: 2026-01-31 08:20:34.752 226833 INFO nova.compute.manager [-] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] VM Stopped (Lifecycle Event)
Jan 31 08:20:34 compute-2 nova_compute[226829]: 2026-01-31 08:20:34.839 226833 DEBUG nova.compute.manager [None req-4e986278-5438-4599-a5d4-05ebb8a05848 - - - - - -] [instance: cb6d7816-1ee7-4474-8b30-f47c612e9af4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:20:34 compute-2 nova_compute[226829]: 2026-01-31 08:20:34.877 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:35 compute-2 ceph-mon[77282]: pgmap v2588: 305 pgs: 305 active+clean; 427 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 31 KiB/s wr, 208 op/s
Jan 31 08:20:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:35.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:35.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:35 compute-2 nova_compute[226829]: 2026-01-31 08:20:35.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:36 compute-2 ceph-mon[77282]: pgmap v2589: 305 pgs: 305 active+clean; 427 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 42 KiB/s wr, 159 op/s
Jan 31 08:20:36 compute-2 nova_compute[226829]: 2026-01-31 08:20:36.418 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:36 compute-2 nova_compute[226829]: 2026-01-31 08:20:36.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:20:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:37.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:20:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:20:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:37.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:20:38 compute-2 podman[293253]: 2026-01-31 08:20:38.276356016 +0000 UTC m=+0.149260353 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 31 08:20:38 compute-2 nova_compute[226829]: 2026-01-31 08:20:38.378 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:38.378 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:20:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:38.380 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:20:38 compute-2 ceph-mon[77282]: pgmap v2590: 305 pgs: 305 active+clean; 427 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 42 KiB/s wr, 151 op/s
Jan 31 08:20:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:39.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1771619444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:39.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:39 compute-2 nova_compute[226829]: 2026-01-31 08:20:39.878 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:40 compute-2 ceph-mon[77282]: pgmap v2591: 305 pgs: 305 active+clean; 429 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 49 KiB/s wr, 126 op/s
Jan 31 08:20:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1056137073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:40 compute-2 nova_compute[226829]: 2026-01-31 08:20:40.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:41.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:41 compute-2 nova_compute[226829]: 2026-01-31 08:20:41.421 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:41.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:41 compute-2 nova_compute[226829]: 2026-01-31 08:20:41.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:43 compute-2 ceph-mon[77282]: pgmap v2592: 305 pgs: 305 active+clean; 410 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 79 KiB/s wr, 119 op/s
Jan 31 08:20:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/342037608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:43.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:20:43.382 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:20:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:20:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:43.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:20:43 compute-2 nova_compute[226829]: 2026-01-31 08:20:43.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:43 compute-2 nova_compute[226829]: 2026-01-31 08:20:43.521 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:43 compute-2 nova_compute[226829]: 2026-01-31 08:20:43.521 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:43 compute-2 nova_compute[226829]: 2026-01-31 08:20:43.522 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:43 compute-2 nova_compute[226829]: 2026-01-31 08:20:43.522 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:20:43 compute-2 nova_compute[226829]: 2026-01-31 08:20:43.522 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:20:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:20:43 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2878432924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:43 compute-2 nova_compute[226829]: 2026-01-31 08:20:43.948 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:20:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/481649911' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2769729325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1405474155' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2878432924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1314591952' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:44 compute-2 nova_compute[226829]: 2026-01-31 08:20:44.114 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:20:44 compute-2 nova_compute[226829]: 2026-01-31 08:20:44.115 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4162MB free_disk=20.79541015625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:20:44 compute-2 nova_compute[226829]: 2026-01-31 08:20:44.115 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:44 compute-2 nova_compute[226829]: 2026-01-31 08:20:44.115 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:44 compute-2 nova_compute[226829]: 2026-01-31 08:20:44.231 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:20:44 compute-2 nova_compute[226829]: 2026-01-31 08:20:44.231 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:20:44 compute-2 nova_compute[226829]: 2026-01-31 08:20:44.267 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:20:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:20:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1977395580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:44 compute-2 nova_compute[226829]: 2026-01-31 08:20:44.723 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:20:44 compute-2 nova_compute[226829]: 2026-01-31 08:20:44.730 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:20:44 compute-2 nova_compute[226829]: 2026-01-31 08:20:44.751 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:20:44 compute-2 nova_compute[226829]: 2026-01-31 08:20:44.838 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:20:44 compute-2 nova_compute[226829]: 2026-01-31 08:20:44.839 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:44 compute-2 nova_compute[226829]: 2026-01-31 08:20:44.880 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:45.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:45 compute-2 ceph-mon[77282]: pgmap v2593: 305 pgs: 305 active+clean; 406 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 266 KiB/s wr, 117 op/s
Jan 31 08:20:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/983807351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1977395580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:20:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:45.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:20:46 compute-2 ceph-mon[77282]: pgmap v2594: 305 pgs: 305 active+clean; 396 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 149 op/s
Jan 31 08:20:46 compute-2 nova_compute[226829]: 2026-01-31 08:20:46.467 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:47.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:20:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:47.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:20:47 compute-2 sudo[293329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:20:47 compute-2 sudo[293329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:47 compute-2 sudo[293329]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:47 compute-2 sudo[293354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:20:47 compute-2 sudo[293354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:47 compute-2 sudo[293354]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:47 compute-2 sudo[293379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:20:47 compute-2 sudo[293379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:47 compute-2 sudo[293379]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:47 compute-2 sudo[293404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 08:20:47 compute-2 sudo[293404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:47 compute-2 sudo[293430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:20:48 compute-2 sudo[293430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:48 compute-2 sudo[293430]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:48 compute-2 sudo[293466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:20:48 compute-2 sudo[293466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:48 compute-2 sudo[293466]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:48 compute-2 sudo[293404]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:48 compute-2 sudo[293500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:20:48 compute-2 sudo[293500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:48 compute-2 sudo[293500]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:48 compute-2 sudo[293525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:20:48 compute-2 sudo[293525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:48 compute-2 sudo[293525]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:48 compute-2 sudo[293556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:20:48 compute-2 sudo[293556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:48 compute-2 sudo[293556]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:48 compute-2 podman[293549]: 2026-01-31 08:20:48.300787785 +0000 UTC m=+0.051161236 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, managed_by=edpm_ansible)
Jan 31 08:20:48 compute-2 sudo[293595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:20:48 compute-2 sudo[293595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:20:48 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/514583558' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:20:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:20:48 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/514583558' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:20:48 compute-2 sudo[293595]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:49 compute-2 ceph-mon[77282]: pgmap v2595: 305 pgs: 305 active+clean; 396 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 350 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Jan 31 08:20:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:20:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:20:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:20:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:20:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 08:20:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/514583558' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:20:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/514583558' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:20:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 08:20:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:20:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:20:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:20:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:20:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:20:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:20:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:49.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:49.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:49 compute-2 nova_compute[226829]: 2026-01-31 08:20:49.882 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:51 compute-2 ceph-mon[77282]: pgmap v2596: 305 pgs: 305 active+clean; 396 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 115 op/s
Jan 31 08:20:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1739381354' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:20:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1739381354' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:20:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:51.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:51 compute-2 nova_compute[226829]: 2026-01-31 08:20:51.469 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:51.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:52 compute-2 ceph-mon[77282]: pgmap v2597: 305 pgs: 305 active+clean; 396 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Jan 31 08:20:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:53.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:53.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:20:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2573482399' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:20:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:20:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2573482399' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:20:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2573482399' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:20:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2573482399' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:20:54 compute-2 ceph-mon[77282]: pgmap v2598: 305 pgs: 305 active+clean; 390 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 161 op/s
Jan 31 08:20:54 compute-2 nova_compute[226829]: 2026-01-31 08:20:54.840 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:54 compute-2 nova_compute[226829]: 2026-01-31 08:20:54.840 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:20:54 compute-2 nova_compute[226829]: 2026-01-31 08:20:54.884 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:55.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:55 compute-2 sudo[293654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:20:55 compute-2 sudo[293654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:55 compute-2 sudo[293654]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:55 compute-2 sudo[293679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:20:55 compute-2 sudo[293679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:55 compute-2 sudo[293679]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:20:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:55.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:20:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:20:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:20:56 compute-2 nova_compute[226829]: 2026-01-31 08:20:56.472 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:57 compute-2 ceph-mon[77282]: pgmap v2599: 305 pgs: 305 active+clean; 350 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.6 MiB/s wr, 243 op/s
Jan 31 08:20:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:20:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:57.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:20:57 compute-2 nova_compute[226829]: 2026-01-31 08:20:57.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:57.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:58 compute-2 nova_compute[226829]: 2026-01-31 08:20:58.073 226833 DEBUG oslo_concurrency.lockutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "e8e7a13f-a648-45dc-b768-ac5deac97083" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:58 compute-2 nova_compute[226829]: 2026-01-31 08:20:58.073 226833 DEBUG oslo_concurrency.lockutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "e8e7a13f-a648-45dc-b768-ac5deac97083" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:58 compute-2 ceph-mon[77282]: pgmap v2600: 305 pgs: 305 active+clean; 350 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 19 KiB/s wr, 205 op/s
Jan 31 08:20:58 compute-2 nova_compute[226829]: 2026-01-31 08:20:58.255 226833 DEBUG nova.compute.manager [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:20:58 compute-2 nova_compute[226829]: 2026-01-31 08:20:58.519 226833 DEBUG oslo_concurrency.lockutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:58 compute-2 nova_compute[226829]: 2026-01-31 08:20:58.520 226833 DEBUG oslo_concurrency.lockutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:58 compute-2 nova_compute[226829]: 2026-01-31 08:20:58.528 226833 DEBUG nova.virt.hardware [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:20:58 compute-2 nova_compute[226829]: 2026-01-31 08:20:58.528 226833 INFO nova.compute.claims [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:20:58 compute-2 nova_compute[226829]: 2026-01-31 08:20:58.997 226833 DEBUG oslo_concurrency.processutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:20:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:59.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:20:59 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4028800591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:59 compute-2 nova_compute[226829]: 2026-01-31 08:20:59.479 226833 DEBUG oslo_concurrency.processutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:20:59 compute-2 nova_compute[226829]: 2026-01-31 08:20:59.490 226833 DEBUG nova.compute.provider_tree [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:20:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:20:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:59.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4028800591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:59 compute-2 nova_compute[226829]: 2026-01-31 08:20:59.628 226833 DEBUG nova.scheduler.client.report [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:20:59 compute-2 nova_compute[226829]: 2026-01-31 08:20:59.810 226833 DEBUG oslo_concurrency.lockutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.290s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:59 compute-2 nova_compute[226829]: 2026-01-31 08:20:59.812 226833 DEBUG nova.compute.manager [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:20:59 compute-2 nova_compute[226829]: 2026-01-31 08:20:59.884 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:00 compute-2 nova_compute[226829]: 2026-01-31 08:21:00.006 226833 DEBUG nova.compute.manager [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:21:00 compute-2 nova_compute[226829]: 2026-01-31 08:21:00.006 226833 DEBUG nova.network.neutron [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:21:00 compute-2 nova_compute[226829]: 2026-01-31 08:21:00.081 226833 INFO nova.virt.libvirt.driver [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:21:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:00 compute-2 nova_compute[226829]: 2026-01-31 08:21:00.201 226833 DEBUG nova.compute.manager [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:21:00 compute-2 nova_compute[226829]: 2026-01-31 08:21:00.322 226833 DEBUG nova.policy [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '038e2b3b4f174162a3ac6c4870857e60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c90ea7f1be5f484bb873548236fadc00', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:21:00 compute-2 nova_compute[226829]: 2026-01-31 08:21:00.536 226833 DEBUG nova.compute.manager [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:21:00 compute-2 nova_compute[226829]: 2026-01-31 08:21:00.538 226833 DEBUG nova.virt.libvirt.driver [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:21:00 compute-2 nova_compute[226829]: 2026-01-31 08:21:00.538 226833 INFO nova.virt.libvirt.driver [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Creating image(s)
Jan 31 08:21:00 compute-2 nova_compute[226829]: 2026-01-31 08:21:00.569 226833 DEBUG nova.storage.rbd_utils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image e8e7a13f-a648-45dc-b768-ac5deac97083_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:21:00 compute-2 ceph-mon[77282]: pgmap v2601: 305 pgs: 305 active+clean; 350 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 17 KiB/s wr, 192 op/s
Jan 31 08:21:00 compute-2 nova_compute[226829]: 2026-01-31 08:21:00.664 226833 DEBUG nova.storage.rbd_utils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image e8e7a13f-a648-45dc-b768-ac5deac97083_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:21:00 compute-2 nova_compute[226829]: 2026-01-31 08:21:00.689 226833 DEBUG nova.storage.rbd_utils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image e8e7a13f-a648-45dc-b768-ac5deac97083_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:21:00 compute-2 nova_compute[226829]: 2026-01-31 08:21:00.692 226833 DEBUG oslo_concurrency.processutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:00 compute-2 nova_compute[226829]: 2026-01-31 08:21:00.751 226833 DEBUG oslo_concurrency.processutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:00 compute-2 nova_compute[226829]: 2026-01-31 08:21:00.751 226833 DEBUG oslo_concurrency.lockutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:00 compute-2 nova_compute[226829]: 2026-01-31 08:21:00.752 226833 DEBUG oslo_concurrency.lockutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:00 compute-2 nova_compute[226829]: 2026-01-31 08:21:00.752 226833 DEBUG oslo_concurrency.lockutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:00 compute-2 nova_compute[226829]: 2026-01-31 08:21:00.776 226833 DEBUG nova.storage.rbd_utils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image e8e7a13f-a648-45dc-b768-ac5deac97083_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:21:00 compute-2 nova_compute[226829]: 2026-01-31 08:21:00.778 226833 DEBUG oslo_concurrency.processutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 e8e7a13f-a648-45dc-b768-ac5deac97083_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:21:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:01.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:21:01 compute-2 nova_compute[226829]: 2026-01-31 08:21:01.163 226833 DEBUG oslo_concurrency.processutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 e8e7a13f-a648-45dc-b768-ac5deac97083_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.384s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:01 compute-2 nova_compute[226829]: 2026-01-31 08:21:01.230 226833 DEBUG nova.storage.rbd_utils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] resizing rbd image e8e7a13f-a648-45dc-b768-ac5deac97083_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:21:01 compute-2 nova_compute[226829]: 2026-01-31 08:21:01.330 226833 DEBUG nova.objects.instance [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'migration_context' on Instance uuid e8e7a13f-a648-45dc-b768-ac5deac97083 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:21:01 compute-2 nova_compute[226829]: 2026-01-31 08:21:01.462 226833 DEBUG nova.virt.libvirt.driver [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:21:01 compute-2 nova_compute[226829]: 2026-01-31 08:21:01.463 226833 DEBUG nova.virt.libvirt.driver [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Ensure instance console log exists: /var/lib/nova/instances/e8e7a13f-a648-45dc-b768-ac5deac97083/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:21:01 compute-2 nova_compute[226829]: 2026-01-31 08:21:01.463 226833 DEBUG oslo_concurrency.lockutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:01 compute-2 nova_compute[226829]: 2026-01-31 08:21:01.464 226833 DEBUG oslo_concurrency.lockutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:01 compute-2 nova_compute[226829]: 2026-01-31 08:21:01.464 226833 DEBUG oslo_concurrency.lockutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:01 compute-2 nova_compute[226829]: 2026-01-31 08:21:01.473 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:01.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:01 compute-2 nova_compute[226829]: 2026-01-31 08:21:01.674 226833 DEBUG nova.network.neutron [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Successfully created port: 22013b45-81b3-43ce-9b55-b18d9c07bbef _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:21:03 compute-2 ceph-mon[77282]: pgmap v2602: 305 pgs: 305 active+clean; 350 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.1 KiB/s wr, 141 op/s
Jan 31 08:21:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:03.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:03.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:03 compute-2 nova_compute[226829]: 2026-01-31 08:21:03.709 226833 DEBUG nova.network.neutron [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Successfully updated port: 22013b45-81b3-43ce-9b55-b18d9c07bbef _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:21:03 compute-2 nova_compute[226829]: 2026-01-31 08:21:03.841 226833 DEBUG oslo_concurrency.lockutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "refresh_cache-e8e7a13f-a648-45dc-b768-ac5deac97083" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:21:03 compute-2 nova_compute[226829]: 2026-01-31 08:21:03.841 226833 DEBUG oslo_concurrency.lockutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquired lock "refresh_cache-e8e7a13f-a648-45dc-b768-ac5deac97083" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:21:03 compute-2 nova_compute[226829]: 2026-01-31 08:21:03.841 226833 DEBUG nova.network.neutron [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:21:03 compute-2 nova_compute[226829]: 2026-01-31 08:21:03.962 226833 DEBUG nova.compute.manager [req-b6b9cda7-bbfc-4c13-b45c-bc6880dd10ee req-9f89632a-cc53-4abf-b3a9-36342a74f929 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Received event network-changed-22013b45-81b3-43ce-9b55-b18d9c07bbef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:21:03 compute-2 nova_compute[226829]: 2026-01-31 08:21:03.962 226833 DEBUG nova.compute.manager [req-b6b9cda7-bbfc-4c13-b45c-bc6880dd10ee req-9f89632a-cc53-4abf-b3a9-36342a74f929 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Refreshing instance network info cache due to event network-changed-22013b45-81b3-43ce-9b55-b18d9c07bbef. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:21:03 compute-2 nova_compute[226829]: 2026-01-31 08:21:03.963 226833 DEBUG oslo_concurrency.lockutils [req-b6b9cda7-bbfc-4c13-b45c-bc6880dd10ee req-9f89632a-cc53-4abf-b3a9-36342a74f929 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-e8e7a13f-a648-45dc-b768-ac5deac97083" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:21:04 compute-2 ceph-mon[77282]: pgmap v2603: 305 pgs: 305 active+clean; 358 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 284 KiB/s wr, 120 op/s
Jan 31 08:21:04 compute-2 nova_compute[226829]: 2026-01-31 08:21:04.299 226833 DEBUG nova.network.neutron [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:21:04 compute-2 nova_compute[226829]: 2026-01-31 08:21:04.886 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:05.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:05.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3138236152' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:21:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3138236152' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:21:05 compute-2 nova_compute[226829]: 2026-01-31 08:21:05.990 226833 DEBUG nova.network.neutron [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Updating instance_info_cache with network_info: [{"id": "22013b45-81b3-43ce-9b55-b18d9c07bbef", "address": "fa:16:3e:74:cc:b0", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22013b45-81", "ovs_interfaceid": "22013b45-81b3-43ce-9b55-b18d9c07bbef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:21:06 compute-2 nova_compute[226829]: 2026-01-31 08:21:06.159 226833 DEBUG oslo_concurrency.lockutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Releasing lock "refresh_cache-e8e7a13f-a648-45dc-b768-ac5deac97083" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:21:06 compute-2 nova_compute[226829]: 2026-01-31 08:21:06.160 226833 DEBUG nova.compute.manager [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Instance network_info: |[{"id": "22013b45-81b3-43ce-9b55-b18d9c07bbef", "address": "fa:16:3e:74:cc:b0", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22013b45-81", "ovs_interfaceid": "22013b45-81b3-43ce-9b55-b18d9c07bbef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:21:06 compute-2 nova_compute[226829]: 2026-01-31 08:21:06.160 226833 DEBUG oslo_concurrency.lockutils [req-b6b9cda7-bbfc-4c13-b45c-bc6880dd10ee req-9f89632a-cc53-4abf-b3a9-36342a74f929 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-e8e7a13f-a648-45dc-b768-ac5deac97083" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:21:06 compute-2 nova_compute[226829]: 2026-01-31 08:21:06.160 226833 DEBUG nova.network.neutron [req-b6b9cda7-bbfc-4c13-b45c-bc6880dd10ee req-9f89632a-cc53-4abf-b3a9-36342a74f929 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Refreshing network info cache for port 22013b45-81b3-43ce-9b55-b18d9c07bbef _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:21:06 compute-2 nova_compute[226829]: 2026-01-31 08:21:06.164 226833 DEBUG nova.virt.libvirt.driver [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Start _get_guest_xml network_info=[{"id": "22013b45-81b3-43ce-9b55-b18d9c07bbef", "address": "fa:16:3e:74:cc:b0", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22013b45-81", "ovs_interfaceid": "22013b45-81b3-43ce-9b55-b18d9c07bbef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:21:06 compute-2 nova_compute[226829]: 2026-01-31 08:21:06.168 226833 WARNING nova.virt.libvirt.driver [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:21:06 compute-2 nova_compute[226829]: 2026-01-31 08:21:06.172 226833 DEBUG nova.virt.libvirt.host [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:21:06 compute-2 nova_compute[226829]: 2026-01-31 08:21:06.172 226833 DEBUG nova.virt.libvirt.host [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:21:06 compute-2 nova_compute[226829]: 2026-01-31 08:21:06.176 226833 DEBUG nova.virt.libvirt.host [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:21:06 compute-2 nova_compute[226829]: 2026-01-31 08:21:06.176 226833 DEBUG nova.virt.libvirt.host [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:21:06 compute-2 nova_compute[226829]: 2026-01-31 08:21:06.177 226833 DEBUG nova.virt.libvirt.driver [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:21:06 compute-2 nova_compute[226829]: 2026-01-31 08:21:06.178 226833 DEBUG nova.virt.hardware [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:21:06 compute-2 nova_compute[226829]: 2026-01-31 08:21:06.178 226833 DEBUG nova.virt.hardware [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:21:06 compute-2 nova_compute[226829]: 2026-01-31 08:21:06.178 226833 DEBUG nova.virt.hardware [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:21:06 compute-2 nova_compute[226829]: 2026-01-31 08:21:06.179 226833 DEBUG nova.virt.hardware [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:21:06 compute-2 nova_compute[226829]: 2026-01-31 08:21:06.179 226833 DEBUG nova.virt.hardware [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:21:06 compute-2 nova_compute[226829]: 2026-01-31 08:21:06.179 226833 DEBUG nova.virt.hardware [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:21:06 compute-2 nova_compute[226829]: 2026-01-31 08:21:06.179 226833 DEBUG nova.virt.hardware [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:21:06 compute-2 nova_compute[226829]: 2026-01-31 08:21:06.179 226833 DEBUG nova.virt.hardware [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:21:06 compute-2 nova_compute[226829]: 2026-01-31 08:21:06.180 226833 DEBUG nova.virt.hardware [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:21:06 compute-2 nova_compute[226829]: 2026-01-31 08:21:06.180 226833 DEBUG nova.virt.hardware [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:21:06 compute-2 nova_compute[226829]: 2026-01-31 08:21:06.180 226833 DEBUG nova.virt.hardware [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:21:06 compute-2 nova_compute[226829]: 2026-01-31 08:21:06.183 226833 DEBUG oslo_concurrency.processutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:06 compute-2 nova_compute[226829]: 2026-01-31 08:21:06.476 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:21:06 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/253835237' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:06 compute-2 nova_compute[226829]: 2026-01-31 08:21:06.599 226833 DEBUG oslo_concurrency.processutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:06 compute-2 nova_compute[226829]: 2026-01-31 08:21:06.621 226833 DEBUG nova.storage.rbd_utils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image e8e7a13f-a648-45dc-b768-ac5deac97083_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:21:06 compute-2 nova_compute[226829]: 2026-01-31 08:21:06.625 226833 DEBUG oslo_concurrency.processutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:06 compute-2 ceph-mon[77282]: pgmap v2604: 305 pgs: 305 active+clean; 396 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Jan 31 08:21:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/253835237' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:06.898 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:06.898 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:06.898 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:21:07 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2403454183' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.036 226833 DEBUG oslo_concurrency.processutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.039 226833 DEBUG nova.virt.libvirt.vif [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:20:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-2045886920',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-2045886920',id=146,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c90ea7f1be5f484bb873548236fadc00',ramdisk_id='',reservation_id='r-re5yql0e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1116995694',owner_user_name='tempest-ServerBootFromVolumeS
tableRescueTest-1116995694-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:21:00Z,user_data=None,user_id='038e2b3b4f174162a3ac6c4870857e60',uuid=e8e7a13f-a648-45dc-b768-ac5deac97083,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "22013b45-81b3-43ce-9b55-b18d9c07bbef", "address": "fa:16:3e:74:cc:b0", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22013b45-81", "ovs_interfaceid": "22013b45-81b3-43ce-9b55-b18d9c07bbef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.040 226833 DEBUG nova.network.os_vif_util [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Converting VIF {"id": "22013b45-81b3-43ce-9b55-b18d9c07bbef", "address": "fa:16:3e:74:cc:b0", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22013b45-81", "ovs_interfaceid": "22013b45-81b3-43ce-9b55-b18d9c07bbef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.042 226833 DEBUG nova.network.os_vif_util [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:cc:b0,bridge_name='br-int',has_traffic_filtering=True,id=22013b45-81b3-43ce-9b55-b18d9c07bbef,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22013b45-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.044 226833 DEBUG nova.objects.instance [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'pci_devices' on Instance uuid e8e7a13f-a648-45dc-b768-ac5deac97083 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.143 226833 DEBUG nova.virt.libvirt.driver [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:21:07 compute-2 nova_compute[226829]:   <uuid>e8e7a13f-a648-45dc-b768-ac5deac97083</uuid>
Jan 31 08:21:07 compute-2 nova_compute[226829]:   <name>instance-00000092</name>
Jan 31 08:21:07 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:21:07 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:21:07 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-2045886920</nova:name>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:21:06</nova:creationTime>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:21:07 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:21:07 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:21:07 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:21:07 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:21:07 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:21:07 compute-2 nova_compute[226829]:         <nova:user uuid="038e2b3b4f174162a3ac6c4870857e60">tempest-ServerBootFromVolumeStableRescueTest-1116995694-project-member</nova:user>
Jan 31 08:21:07 compute-2 nova_compute[226829]:         <nova:project uuid="c90ea7f1be5f484bb873548236fadc00">tempest-ServerBootFromVolumeStableRescueTest-1116995694</nova:project>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:21:07 compute-2 nova_compute[226829]:         <nova:port uuid="22013b45-81b3-43ce-9b55-b18d9c07bbef">
Jan 31 08:21:07 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:21:07 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:21:07 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <system>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <entry name="serial">e8e7a13f-a648-45dc-b768-ac5deac97083</entry>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <entry name="uuid">e8e7a13f-a648-45dc-b768-ac5deac97083</entry>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     </system>
Jan 31 08:21:07 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:21:07 compute-2 nova_compute[226829]:   <os>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:   </os>
Jan 31 08:21:07 compute-2 nova_compute[226829]:   <features>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:   </features>
Jan 31 08:21:07 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:21:07 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:21:07 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/e8e7a13f-a648-45dc-b768-ac5deac97083_disk">
Jan 31 08:21:07 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       </source>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:21:07 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/e8e7a13f-a648-45dc-b768-ac5deac97083_disk.config">
Jan 31 08:21:07 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       </source>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:21:07 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:74:cc:b0"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <target dev="tap22013b45-81"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/e8e7a13f-a648-45dc-b768-ac5deac97083/console.log" append="off"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <video>
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     </video>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:21:07 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:21:07 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:21:07 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:21:07 compute-2 nova_compute[226829]: </domain>
Jan 31 08:21:07 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.144 226833 DEBUG nova.compute.manager [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Preparing to wait for external event network-vif-plugged-22013b45-81b3-43ce-9b55-b18d9c07bbef prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.144 226833 DEBUG oslo_concurrency.lockutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "e8e7a13f-a648-45dc-b768-ac5deac97083-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.145 226833 DEBUG oslo_concurrency.lockutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "e8e7a13f-a648-45dc-b768-ac5deac97083-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.145 226833 DEBUG oslo_concurrency.lockutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "e8e7a13f-a648-45dc-b768-ac5deac97083-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.145 226833 DEBUG nova.virt.libvirt.vif [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:20:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-2045886920',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-2045886920',id=146,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c90ea7f1be5f484bb873548236fadc00',ramdisk_id='',reservation_id='r-re5yql0e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1116995694',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1116995694-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:21:00Z,user_data=None,user_id='038e2b3b4f174162a3ac6c4870857e60',uuid=e8e7a13f-a648-45dc-b768-ac5deac97083,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "22013b45-81b3-43ce-9b55-b18d9c07bbef", "address": "fa:16:3e:74:cc:b0", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22013b45-81", "ovs_interfaceid": "22013b45-81b3-43ce-9b55-b18d9c07bbef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.146 226833 DEBUG nova.network.os_vif_util [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Converting VIF {"id": "22013b45-81b3-43ce-9b55-b18d9c07bbef", "address": "fa:16:3e:74:cc:b0", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22013b45-81", "ovs_interfaceid": "22013b45-81b3-43ce-9b55-b18d9c07bbef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.146 226833 DEBUG nova.network.os_vif_util [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:cc:b0,bridge_name='br-int',has_traffic_filtering=True,id=22013b45-81b3-43ce-9b55-b18d9c07bbef,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22013b45-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.146 226833 DEBUG os_vif [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:cc:b0,bridge_name='br-int',has_traffic_filtering=True,id=22013b45-81b3-43ce-9b55-b18d9c07bbef,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22013b45-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.147 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.147 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.148 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.150 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.151 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap22013b45-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.151 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap22013b45-81, col_values=(('external_ids', {'iface-id': '22013b45-81b3-43ce-9b55-b18d9c07bbef', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:74:cc:b0', 'vm-uuid': 'e8e7a13f-a648-45dc-b768-ac5deac97083'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.153 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:07 compute-2 NetworkManager[48999]: <info>  [1769847667.1550] manager: (tap22013b45-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/298)
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.156 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:21:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:07.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.159 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.160 226833 INFO os_vif [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:cc:b0,bridge_name='br-int',has_traffic_filtering=True,id=22013b45-81b3-43ce-9b55-b18d9c07bbef,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22013b45-81')
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.246 226833 DEBUG nova.virt.libvirt.driver [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.247 226833 DEBUG nova.virt.libvirt.driver [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.247 226833 DEBUG nova.virt.libvirt.driver [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] No VIF found with MAC fa:16:3e:74:cc:b0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.248 226833 INFO nova.virt.libvirt.driver [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Using config drive
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.269 226833 DEBUG nova.storage.rbd_utils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image e8e7a13f-a648-45dc-b768-ac5deac97083_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:21:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:21:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:07.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:21:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2403454183' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.893 226833 INFO nova.virt.libvirt.driver [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Creating config drive at /var/lib/nova/instances/e8e7a13f-a648-45dc-b768-ac5deac97083/disk.config
Jan 31 08:21:07 compute-2 nova_compute[226829]: 2026-01-31 08:21:07.897 226833 DEBUG oslo_concurrency.processutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e8e7a13f-a648-45dc-b768-ac5deac97083/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpb5x7i01g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:08 compute-2 nova_compute[226829]: 2026-01-31 08:21:08.021 226833 DEBUG oslo_concurrency.processutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e8e7a13f-a648-45dc-b768-ac5deac97083/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpb5x7i01g" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:08 compute-2 nova_compute[226829]: 2026-01-31 08:21:08.050 226833 DEBUG nova.storage.rbd_utils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image e8e7a13f-a648-45dc-b768-ac5deac97083_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:21:08 compute-2 nova_compute[226829]: 2026-01-31 08:21:08.053 226833 DEBUG oslo_concurrency.processutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e8e7a13f-a648-45dc-b768-ac5deac97083/disk.config e8e7a13f-a648-45dc-b768-ac5deac97083_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:08 compute-2 sudo[294010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:21:08 compute-2 sudo[294010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:08 compute-2 sudo[294010]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:08 compute-2 sudo[294044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:21:08 compute-2 sudo[294044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:08 compute-2 sudo[294044]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:08 compute-2 nova_compute[226829]: 2026-01-31 08:21:08.278 226833 DEBUG oslo_concurrency.processutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e8e7a13f-a648-45dc-b768-ac5deac97083/disk.config e8e7a13f-a648-45dc-b768-ac5deac97083_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:08 compute-2 nova_compute[226829]: 2026-01-31 08:21:08.279 226833 INFO nova.virt.libvirt.driver [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Deleting local config drive /var/lib/nova/instances/e8e7a13f-a648-45dc-b768-ac5deac97083/disk.config because it was imported into RBD.
Jan 31 08:21:08 compute-2 kernel: tap22013b45-81: entered promiscuous mode
Jan 31 08:21:08 compute-2 ovn_controller[133834]: 2026-01-31T08:21:08Z|00608|binding|INFO|Claiming lport 22013b45-81b3-43ce-9b55-b18d9c07bbef for this chassis.
Jan 31 08:21:08 compute-2 ovn_controller[133834]: 2026-01-31T08:21:08Z|00609|binding|INFO|22013b45-81b3-43ce-9b55-b18d9c07bbef: Claiming fa:16:3e:74:cc:b0 10.100.0.6
Jan 31 08:21:08 compute-2 nova_compute[226829]: 2026-01-31 08:21:08.322 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:08 compute-2 NetworkManager[48999]: <info>  [1769847668.3239] manager: (tap22013b45-81): new Tun device (/org/freedesktop/NetworkManager/Devices/299)
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:08.340 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:cc:b0 10.100.0.6'], port_security=['fa:16:3e:74:cc:b0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e8e7a13f-a648-45dc-b768-ac5deac97083', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c90ea7f1be5f484bb873548236fadc00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '952b4f08-f5a7-4fc0-ae2c-267f2ba857a6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42261fad-d2a1-4da1-823a-75e271c17223, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=22013b45-81b3-43ce-9b55-b18d9c07bbef) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:08.341 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 22013b45-81b3-43ce-9b55-b18d9c07bbef in datapath 936cead9-bc2f-4c2d-8b4c-6079d2159263 bound to our chassis
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:08.343 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 936cead9-bc2f-4c2d-8b4c-6079d2159263
Jan 31 08:21:08 compute-2 nova_compute[226829]: 2026-01-31 08:21:08.355 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:08.355 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[06f35386-ec37-4b02-bd18-d637804bcd89]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:08.356 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap936cead9-b1 in ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:21:08 compute-2 ovn_controller[133834]: 2026-01-31T08:21:08Z|00610|binding|INFO|Setting lport 22013b45-81b3-43ce-9b55-b18d9c07bbef ovn-installed in OVS
Jan 31 08:21:08 compute-2 ovn_controller[133834]: 2026-01-31T08:21:08Z|00611|binding|INFO|Setting lport 22013b45-81b3-43ce-9b55-b18d9c07bbef up in Southbound
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:08.358 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap936cead9-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:08.358 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d55a2034-9548-41a9-90ae-3251df12fdcf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:08.360 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2e200e49-69f5-4b84-9032-465466118ca9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:08.371 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[1a3477f5-7500-43c3-be29-0e02acc4a40e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:08 compute-2 systemd-machined[195142]: New machine qemu-67-instance-00000092.
Jan 31 08:21:08 compute-2 nova_compute[226829]: 2026-01-31 08:21:08.399 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:08.401 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4cefeabe-4508-48f3-b705-a030db5c7243]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:08 compute-2 systemd[1]: Started Virtual Machine qemu-67-instance-00000092.
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:08.427 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[5c2a3bf4-49b2-4527-823d-0ddb38fff108]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:08 compute-2 systemd-udevd[294108]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:21:08 compute-2 systemd-udevd[294111]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:08.433 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[570ab648-668e-4232-b7cc-74f34c213439]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:08 compute-2 NetworkManager[48999]: <info>  [1769847668.4346] manager: (tap936cead9-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/300)
Jan 31 08:21:08 compute-2 NetworkManager[48999]: <info>  [1769847668.4477] device (tap22013b45-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:21:08 compute-2 NetworkManager[48999]: <info>  [1769847668.4488] device (tap22013b45-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:08.462 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[60caee85-720f-44ab-a106-0d5f6408c075]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:08.466 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[3d5b565f-c3fe-4240-8e02-ea2256594100]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:08 compute-2 NetworkManager[48999]: <info>  [1769847668.4844] device (tap936cead9-b0): carrier: link connected
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:08.489 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[e270ec7f-b9ab-4086-a2d8-5d6bfe69d1fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:08 compute-2 podman[294081]: 2026-01-31 08:21:08.496589084 +0000 UTC m=+0.138929952 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:08.509 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ecf73577-2739-49ff-9202-f762124da210]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap936cead9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:06:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 792095, 'reachable_time': 23630, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294142, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:08.522 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[85604a38-d57b-4b8a-aafe-34dbcefdfb1a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4d:62a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 792095, 'tstamp': 792095}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294143, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:08.532 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[41ce91ef-b5e5-4dcc-94ab-27903d67a821]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap936cead9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:06:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 792095, 'reachable_time': 23630, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294144, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:08.555 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e26cb06c-3ad9-4ea9-b2c2-94eff4f5a708]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:08.595 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b43ea37a-9bde-4729-bb58-da762d2e1f29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:08.597 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap936cead9-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:08.597 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:08.598 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap936cead9-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:21:08 compute-2 kernel: tap936cead9-b0: entered promiscuous mode
Jan 31 08:21:08 compute-2 nova_compute[226829]: 2026-01-31 08:21:08.600 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:08 compute-2 NetworkManager[48999]: <info>  [1769847668.6009] manager: (tap936cead9-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/301)
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:08.605 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap936cead9-b0, col_values=(('external_ids', {'iface-id': 'fd5187fd-cce9-41da-96d2-ef75fbcbcf0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:21:08 compute-2 nova_compute[226829]: 2026-01-31 08:21:08.606 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:08 compute-2 ovn_controller[133834]: 2026-01-31T08:21:08Z|00612|binding|INFO|Releasing lport fd5187fd-cce9-41da-96d2-ef75fbcbcf0f from this chassis (sb_readonly=0)
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:08.609 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/936cead9-bc2f-4c2d-8b4c-6079d2159263.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/936cead9-bc2f-4c2d-8b4c-6079d2159263.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:08.610 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[eab4f231-74ca-4a9e-8efc-6663666b752f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:08.611 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-936cead9-bc2f-4c2d-8b4c-6079d2159263
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/936cead9-bc2f-4c2d-8b4c-6079d2159263.pid.haproxy
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 936cead9-bc2f-4c2d-8b4c-6079d2159263
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:21:08 compute-2 nova_compute[226829]: 2026-01-31 08:21:08.613 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:08.614 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'env', 'PROCESS_TAG=haproxy-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/936cead9-bc2f-4c2d-8b4c-6079d2159263.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:21:08 compute-2 nova_compute[226829]: 2026-01-31 08:21:08.649 226833 DEBUG nova.network.neutron [req-b6b9cda7-bbfc-4c13-b45c-bc6880dd10ee req-9f89632a-cc53-4abf-b3a9-36342a74f929 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Updated VIF entry in instance network info cache for port 22013b45-81b3-43ce-9b55-b18d9c07bbef. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:21:08 compute-2 nova_compute[226829]: 2026-01-31 08:21:08.650 226833 DEBUG nova.network.neutron [req-b6b9cda7-bbfc-4c13-b45c-bc6880dd10ee req-9f89632a-cc53-4abf-b3a9-36342a74f929 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Updating instance_info_cache with network_info: [{"id": "22013b45-81b3-43ce-9b55-b18d9c07bbef", "address": "fa:16:3e:74:cc:b0", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22013b45-81", "ovs_interfaceid": "22013b45-81b3-43ce-9b55-b18d9c07bbef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:21:08 compute-2 nova_compute[226829]: 2026-01-31 08:21:08.678 226833 DEBUG oslo_concurrency.lockutils [req-b6b9cda7-bbfc-4c13-b45c-bc6880dd10ee req-9f89632a-cc53-4abf-b3a9-36342a74f929 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-e8e7a13f-a648-45dc-b768-ac5deac97083" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:21:08 compute-2 nova_compute[226829]: 2026-01-31 08:21:08.740 226833 DEBUG nova.compute.manager [req-a4cd4f50-3dd5-443f-a62d-e044d18fd121 req-fe5c0f96-f45f-4785-929b-8578dc2f824a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Received event network-vif-plugged-22013b45-81b3-43ce-9b55-b18d9c07bbef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:21:08 compute-2 ceph-mon[77282]: pgmap v2605: 305 pgs: 305 active+clean; 361 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 556 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Jan 31 08:21:08 compute-2 nova_compute[226829]: 2026-01-31 08:21:08.740 226833 DEBUG oslo_concurrency.lockutils [req-a4cd4f50-3dd5-443f-a62d-e044d18fd121 req-fe5c0f96-f45f-4785-929b-8578dc2f824a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e8e7a13f-a648-45dc-b768-ac5deac97083-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/552483045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:08 compute-2 nova_compute[226829]: 2026-01-31 08:21:08.741 226833 DEBUG oslo_concurrency.lockutils [req-a4cd4f50-3dd5-443f-a62d-e044d18fd121 req-fe5c0f96-f45f-4785-929b-8578dc2f824a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e8e7a13f-a648-45dc-b768-ac5deac97083-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:08 compute-2 nova_compute[226829]: 2026-01-31 08:21:08.741 226833 DEBUG oslo_concurrency.lockutils [req-a4cd4f50-3dd5-443f-a62d-e044d18fd121 req-fe5c0f96-f45f-4785-929b-8578dc2f824a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e8e7a13f-a648-45dc-b768-ac5deac97083-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:08 compute-2 nova_compute[226829]: 2026-01-31 08:21:08.741 226833 DEBUG nova.compute.manager [req-a4cd4f50-3dd5-443f-a62d-e044d18fd121 req-fe5c0f96-f45f-4785-929b-8578dc2f824a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Processing event network-vif-plugged-22013b45-81b3-43ce-9b55-b18d9c07bbef _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:21:08 compute-2 podman[294210]: 2026-01-31 08:21:08.995189526 +0000 UTC m=+0.078482557 container create ec8194a389145cf35e33b2ff4b224f4b236a1f61a89e59d838672231e93e6765 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:21:09 compute-2 systemd[1]: Started libpod-conmon-ec8194a389145cf35e33b2ff4b224f4b236a1f61a89e59d838672231e93e6765.scope.
Jan 31 08:21:09 compute-2 podman[294210]: 2026-01-31 08:21:08.940235338 +0000 UTC m=+0.023528469 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:21:09 compute-2 nova_compute[226829]: 2026-01-31 08:21:09.042 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847669.042177, e8e7a13f-a648-45dc-b768-ac5deac97083 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:21:09 compute-2 nova_compute[226829]: 2026-01-31 08:21:09.043 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] VM Started (Lifecycle Event)
Jan 31 08:21:09 compute-2 nova_compute[226829]: 2026-01-31 08:21:09.045 226833 DEBUG nova.compute.manager [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:21:09 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:21:09 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec31cbd6ebb378f66e8365d43ae8cb5af452285c9de5d7a35adda0e26f6d69e1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:09 compute-2 nova_compute[226829]: 2026-01-31 08:21:09.049 226833 DEBUG nova.virt.libvirt.driver [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:21:09 compute-2 nova_compute[226829]: 2026-01-31 08:21:09.053 226833 INFO nova.virt.libvirt.driver [-] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Instance spawned successfully.
Jan 31 08:21:09 compute-2 nova_compute[226829]: 2026-01-31 08:21:09.053 226833 DEBUG nova.virt.libvirt.driver [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:21:09 compute-2 podman[294210]: 2026-01-31 08:21:09.062311813 +0000 UTC m=+0.145604874 container init ec8194a389145cf35e33b2ff4b224f4b236a1f61a89e59d838672231e93e6765 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 08:21:09 compute-2 podman[294210]: 2026-01-31 08:21:09.067681899 +0000 UTC m=+0.150974930 container start ec8194a389145cf35e33b2ff4b224f4b236a1f61a89e59d838672231e93e6765 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:21:09 compute-2 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[294234]: [NOTICE]   (294238) : New worker (294240) forked
Jan 31 08:21:09 compute-2 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[294234]: [NOTICE]   (294238) : Loading success.
Jan 31 08:21:09 compute-2 nova_compute[226829]: 2026-01-31 08:21:09.121 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:21:09 compute-2 nova_compute[226829]: 2026-01-31 08:21:09.124 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:21:09 compute-2 nova_compute[226829]: 2026-01-31 08:21:09.132 226833 DEBUG nova.virt.libvirt.driver [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:21:09 compute-2 nova_compute[226829]: 2026-01-31 08:21:09.132 226833 DEBUG nova.virt.libvirt.driver [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:21:09 compute-2 nova_compute[226829]: 2026-01-31 08:21:09.133 226833 DEBUG nova.virt.libvirt.driver [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:21:09 compute-2 nova_compute[226829]: 2026-01-31 08:21:09.133 226833 DEBUG nova.virt.libvirt.driver [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:21:09 compute-2 nova_compute[226829]: 2026-01-31 08:21:09.133 226833 DEBUG nova.virt.libvirt.driver [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:21:09 compute-2 nova_compute[226829]: 2026-01-31 08:21:09.134 226833 DEBUG nova.virt.libvirt.driver [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:21:09 compute-2 nova_compute[226829]: 2026-01-31 08:21:09.149 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:21:09 compute-2 nova_compute[226829]: 2026-01-31 08:21:09.150 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847669.042332, e8e7a13f-a648-45dc-b768-ac5deac97083 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:21:09 compute-2 nova_compute[226829]: 2026-01-31 08:21:09.150 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] VM Paused (Lifecycle Event)
Jan 31 08:21:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:09.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:09 compute-2 nova_compute[226829]: 2026-01-31 08:21:09.207 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:21:09 compute-2 nova_compute[226829]: 2026-01-31 08:21:09.210 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847669.0491917, e8e7a13f-a648-45dc-b768-ac5deac97083 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:21:09 compute-2 nova_compute[226829]: 2026-01-31 08:21:09.211 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] VM Resumed (Lifecycle Event)
Jan 31 08:21:09 compute-2 nova_compute[226829]: 2026-01-31 08:21:09.267 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:21:09 compute-2 nova_compute[226829]: 2026-01-31 08:21:09.272 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:21:09 compute-2 nova_compute[226829]: 2026-01-31 08:21:09.279 226833 INFO nova.compute.manager [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Took 8.74 seconds to spawn the instance on the hypervisor.
Jan 31 08:21:09 compute-2 nova_compute[226829]: 2026-01-31 08:21:09.280 226833 DEBUG nova.compute.manager [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:21:09 compute-2 nova_compute[226829]: 2026-01-31 08:21:09.318 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:21:09 compute-2 nova_compute[226829]: 2026-01-31 08:21:09.389 226833 INFO nova.compute.manager [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Took 10.94 seconds to build instance.
Jan 31 08:21:09 compute-2 nova_compute[226829]: 2026-01-31 08:21:09.412 226833 DEBUG oslo_concurrency.lockutils [None req-ea38d498-ad5b-4ec0-a36d-d4ff12c0b627 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "e8e7a13f-a648-45dc-b768-ac5deac97083" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.339s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:21:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:09.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:21:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1237790933' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:10 compute-2 nova_compute[226829]: 2026-01-31 08:21:10.933 226833 DEBUG nova.compute.manager [req-2358f144-f719-499a-974c-5b06ff5ef939 req-715a24ca-590e-4c82-bd06-15800cad9585 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Received event network-vif-plugged-22013b45-81b3-43ce-9b55-b18d9c07bbef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:21:10 compute-2 nova_compute[226829]: 2026-01-31 08:21:10.934 226833 DEBUG oslo_concurrency.lockutils [req-2358f144-f719-499a-974c-5b06ff5ef939 req-715a24ca-590e-4c82-bd06-15800cad9585 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e8e7a13f-a648-45dc-b768-ac5deac97083-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:10 compute-2 nova_compute[226829]: 2026-01-31 08:21:10.934 226833 DEBUG oslo_concurrency.lockutils [req-2358f144-f719-499a-974c-5b06ff5ef939 req-715a24ca-590e-4c82-bd06-15800cad9585 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e8e7a13f-a648-45dc-b768-ac5deac97083-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:10 compute-2 nova_compute[226829]: 2026-01-31 08:21:10.934 226833 DEBUG oslo_concurrency.lockutils [req-2358f144-f719-499a-974c-5b06ff5ef939 req-715a24ca-590e-4c82-bd06-15800cad9585 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e8e7a13f-a648-45dc-b768-ac5deac97083-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:10 compute-2 nova_compute[226829]: 2026-01-31 08:21:10.935 226833 DEBUG nova.compute.manager [req-2358f144-f719-499a-974c-5b06ff5ef939 req-715a24ca-590e-4c82-bd06-15800cad9585 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] No waiting events found dispatching network-vif-plugged-22013b45-81b3-43ce-9b55-b18d9c07bbef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:21:10 compute-2 nova_compute[226829]: 2026-01-31 08:21:10.935 226833 WARNING nova.compute.manager [req-2358f144-f719-499a-974c-5b06ff5ef939 req-715a24ca-590e-4c82-bd06-15800cad9585 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Received unexpected event network-vif-plugged-22013b45-81b3-43ce-9b55-b18d9c07bbef for instance with vm_state active and task_state None.
Jan 31 08:21:10 compute-2 ceph-mon[77282]: pgmap v2606: 305 pgs: 305 active+clean; 348 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 796 KiB/s rd, 2.7 MiB/s wr, 132 op/s
Jan 31 08:21:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:21:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:11.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:21:11 compute-2 nova_compute[226829]: 2026-01-31 08:21:11.477 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:11.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e340 e340: 3 total, 3 up, 3 in
Jan 31 08:21:12 compute-2 nova_compute[226829]: 2026-01-31 08:21:12.154 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:12 compute-2 nova_compute[226829]: 2026-01-31 08:21:12.532 226833 DEBUG nova.compute.manager [None req-61341980-5333-49a1-98f0-fba5fca8aa85 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:21:12 compute-2 nova_compute[226829]: 2026-01-31 08:21:12.597 226833 INFO nova.compute.manager [None req-61341980-5333-49a1-98f0-fba5fca8aa85 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] instance snapshotting
Jan 31 08:21:12 compute-2 nova_compute[226829]: 2026-01-31 08:21:12.940 226833 INFO nova.virt.libvirt.driver [None req-61341980-5333-49a1-98f0-fba5fca8aa85 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Beginning live snapshot process
Jan 31 08:21:12 compute-2 ceph-mon[77282]: pgmap v2607: 305 pgs: 305 active+clean; 356 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.4 MiB/s wr, 171 op/s
Jan 31 08:21:12 compute-2 ceph-mon[77282]: osdmap e340: 3 total, 3 up, 3 in
Jan 31 08:21:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2956749508' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:13 compute-2 nova_compute[226829]: 2026-01-31 08:21:13.114 226833 DEBUG nova.virt.libvirt.imagebackend [None req-61341980-5333-49a1-98f0-fba5fca8aa85 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 31 08:21:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:21:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:13.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:21:13 compute-2 nova_compute[226829]: 2026-01-31 08:21:13.440 226833 DEBUG nova.storage.rbd_utils [None req-61341980-5333-49a1-98f0-fba5fca8aa85 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] creating snapshot(363e933d253245f28d7643c489c67128) on rbd image(e8e7a13f-a648-45dc-b768-ac5deac97083_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 08:21:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:21:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:13.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:21:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2828417063' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e341 e341: 3 total, 3 up, 3 in
Jan 31 08:21:14 compute-2 nova_compute[226829]: 2026-01-31 08:21:14.071 226833 DEBUG nova.storage.rbd_utils [None req-61341980-5333-49a1-98f0-fba5fca8aa85 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] cloning vms/e8e7a13f-a648-45dc-b768-ac5deac97083_disk@363e933d253245f28d7643c489c67128 to images/af02be56-bd6f-4200-837f-ea1f7e8d93ec clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 08:21:14 compute-2 nova_compute[226829]: 2026-01-31 08:21:14.198 226833 DEBUG nova.storage.rbd_utils [None req-61341980-5333-49a1-98f0-fba5fca8aa85 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] flattening images/af02be56-bd6f-4200-837f-ea1f7e8d93ec flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 31 08:21:14 compute-2 nova_compute[226829]: 2026-01-31 08:21:14.585 226833 DEBUG nova.storage.rbd_utils [None req-61341980-5333-49a1-98f0-fba5fca8aa85 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] removing snapshot(363e933d253245f28d7643c489c67128) on rbd image(e8e7a13f-a648-45dc-b768-ac5deac97083_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 08:21:15 compute-2 ceph-mon[77282]: pgmap v2609: 305 pgs: 305 active+clean; 363 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.0 MiB/s wr, 245 op/s
Jan 31 08:21:15 compute-2 ceph-mon[77282]: osdmap e341: 3 total, 3 up, 3 in
Jan 31 08:21:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e342 e342: 3 total, 3 up, 3 in
Jan 31 08:21:15 compute-2 nova_compute[226829]: 2026-01-31 08:21:15.067 226833 DEBUG nova.storage.rbd_utils [None req-61341980-5333-49a1-98f0-fba5fca8aa85 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] creating snapshot(snap) on rbd image(af02be56-bd6f-4200-837f-ea1f7e8d93ec) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 08:21:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:15.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:15.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:16 compute-2 ceph-mon[77282]: osdmap e342: 3 total, 3 up, 3 in
Jan 31 08:21:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e343 e343: 3 total, 3 up, 3 in
Jan 31 08:21:16 compute-2 nova_compute[226829]: 2026-01-31 08:21:16.480 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:17 compute-2 ceph-mon[77282]: pgmap v2612: 305 pgs: 305 active+clean; 386 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 6.7 MiB/s wr, 310 op/s
Jan 31 08:21:17 compute-2 ceph-mon[77282]: osdmap e343: 3 total, 3 up, 3 in
Jan 31 08:21:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e344 e344: 3 total, 3 up, 3 in
Jan 31 08:21:17 compute-2 nova_compute[226829]: 2026-01-31 08:21:17.157 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:17.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:21:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:17.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:21:18 compute-2 nova_compute[226829]: 2026-01-31 08:21:18.088 226833 INFO nova.virt.libvirt.driver [None req-61341980-5333-49a1-98f0-fba5fca8aa85 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Snapshot image upload complete
Jan 31 08:21:18 compute-2 nova_compute[226829]: 2026-01-31 08:21:18.088 226833 INFO nova.compute.manager [None req-61341980-5333-49a1-98f0-fba5fca8aa85 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Took 5.49 seconds to snapshot the instance on the hypervisor.
Jan 31 08:21:18 compute-2 ceph-mon[77282]: osdmap e344: 3 total, 3 up, 3 in
Jan 31 08:21:18 compute-2 ceph-mon[77282]: pgmap v2615: 305 pgs: 305 active+clean; 410 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 10 MiB/s wr, 325 op/s
Jan 31 08:21:18 compute-2 nova_compute[226829]: 2026-01-31 08:21:18.678 226833 DEBUG oslo_concurrency.lockutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquiring lock "aa396f7d-4c1b-445e-807c-05107a729be4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:18 compute-2 nova_compute[226829]: 2026-01-31 08:21:18.678 226833 DEBUG oslo_concurrency.lockutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:18 compute-2 nova_compute[226829]: 2026-01-31 08:21:18.704 226833 DEBUG nova.compute.manager [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:21:18 compute-2 nova_compute[226829]: 2026-01-31 08:21:18.790 226833 DEBUG oslo_concurrency.lockutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:18 compute-2 nova_compute[226829]: 2026-01-31 08:21:18.791 226833 DEBUG oslo_concurrency.lockutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:18 compute-2 nova_compute[226829]: 2026-01-31 08:21:18.802 226833 DEBUG nova.virt.hardware [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:21:18 compute-2 nova_compute[226829]: 2026-01-31 08:21:18.803 226833 INFO nova.compute.claims [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:21:18 compute-2 nova_compute[226829]: 2026-01-31 08:21:18.985 226833 DEBUG oslo_concurrency.processutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:19.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:19 compute-2 podman[294397]: 2026-01-31 08:21:19.170682516 +0000 UTC m=+0.051575358 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 31 08:21:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:21:19 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1073583248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:19 compute-2 nova_compute[226829]: 2026-01-31 08:21:19.417 226833 DEBUG oslo_concurrency.processutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:19 compute-2 nova_compute[226829]: 2026-01-31 08:21:19.423 226833 DEBUG nova.compute.provider_tree [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:21:19 compute-2 nova_compute[226829]: 2026-01-31 08:21:19.451 226833 DEBUG nova.scheduler.client.report [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:21:19 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1073583248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:19 compute-2 nova_compute[226829]: 2026-01-31 08:21:19.487 226833 DEBUG oslo_concurrency.lockutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:19 compute-2 nova_compute[226829]: 2026-01-31 08:21:19.488 226833 DEBUG nova.compute.manager [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:21:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:19.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:19 compute-2 nova_compute[226829]: 2026-01-31 08:21:19.548 226833 DEBUG nova.compute.manager [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:21:19 compute-2 nova_compute[226829]: 2026-01-31 08:21:19.549 226833 DEBUG nova.network.neutron [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:21:19 compute-2 nova_compute[226829]: 2026-01-31 08:21:19.580 226833 INFO nova.virt.libvirt.driver [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:21:19 compute-2 nova_compute[226829]: 2026-01-31 08:21:19.662 226833 DEBUG nova.compute.manager [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:21:19 compute-2 nova_compute[226829]: 2026-01-31 08:21:19.848 226833 DEBUG nova.compute.manager [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:21:19 compute-2 nova_compute[226829]: 2026-01-31 08:21:19.849 226833 DEBUG nova.virt.libvirt.driver [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:21:19 compute-2 nova_compute[226829]: 2026-01-31 08:21:19.850 226833 INFO nova.virt.libvirt.driver [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Creating image(s)
Jan 31 08:21:19 compute-2 nova_compute[226829]: 2026-01-31 08:21:19.876 226833 DEBUG nova.storage.rbd_utils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] rbd image aa396f7d-4c1b-445e-807c-05107a729be4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:21:19 compute-2 nova_compute[226829]: 2026-01-31 08:21:19.907 226833 DEBUG nova.storage.rbd_utils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] rbd image aa396f7d-4c1b-445e-807c-05107a729be4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:21:19 compute-2 nova_compute[226829]: 2026-01-31 08:21:19.938 226833 DEBUG nova.storage.rbd_utils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] rbd image aa396f7d-4c1b-445e-807c-05107a729be4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:21:19 compute-2 nova_compute[226829]: 2026-01-31 08:21:19.943 226833 DEBUG oslo_concurrency.lockutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquiring lock "db5baf63538b1dfcb3d1f0364e6b40e87a2aa4bb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:19 compute-2 nova_compute[226829]: 2026-01-31 08:21:19.944 226833 DEBUG oslo_concurrency.lockutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "db5baf63538b1dfcb3d1f0364e6b40e87a2aa4bb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:20 compute-2 nova_compute[226829]: 2026-01-31 08:21:20.013 226833 DEBUG nova.policy [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7f0be9090fdf49d2ac15246a0a820d3f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '134c066ac92844ff853b216870fa8eed', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:21:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:20 compute-2 nova_compute[226829]: 2026-01-31 08:21:20.293 226833 DEBUG nova.virt.libvirt.imagebackend [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Image locations are: [{'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/1a3bb6f8-bfef-4edf-a7ea-1489b5cad196/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/1a3bb6f8-bfef-4edf-a7ea-1489b5cad196/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 31 08:21:20 compute-2 ceph-mon[77282]: pgmap v2616: 305 pgs: 305 active+clean; 410 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 7.0 MiB/s wr, 289 op/s
Jan 31 08:21:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:21.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:21 compute-2 nova_compute[226829]: 2026-01-31 08:21:21.297 226833 DEBUG nova.network.neutron [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Successfully created port: c7810272-d139-4528-b358-19b623e1a34d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:21:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:21.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:21 compute-2 nova_compute[226829]: 2026-01-31 08:21:21.602 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:21 compute-2 ovn_controller[133834]: 2026-01-31T08:21:21Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:74:cc:b0 10.100.0.6
Jan 31 08:21:21 compute-2 ovn_controller[133834]: 2026-01-31T08:21:21Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:74:cc:b0 10.100.0.6
Jan 31 08:21:22 compute-2 nova_compute[226829]: 2026-01-31 08:21:22.147 226833 DEBUG oslo_concurrency.processutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/db5baf63538b1dfcb3d1f0364e6b40e87a2aa4bb.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:22 compute-2 nova_compute[226829]: 2026-01-31 08:21:22.168 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:22 compute-2 nova_compute[226829]: 2026-01-31 08:21:22.204 226833 DEBUG oslo_concurrency.processutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/db5baf63538b1dfcb3d1f0364e6b40e87a2aa4bb.part --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:22 compute-2 nova_compute[226829]: 2026-01-31 08:21:22.205 226833 DEBUG nova.virt.images [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] 1a3bb6f8-bfef-4edf-a7ea-1489b5cad196 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 31 08:21:22 compute-2 nova_compute[226829]: 2026-01-31 08:21:22.260 226833 DEBUG nova.privsep.utils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 31 08:21:22 compute-2 nova_compute[226829]: 2026-01-31 08:21:22.260 226833 DEBUG oslo_concurrency.processutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/db5baf63538b1dfcb3d1f0364e6b40e87a2aa4bb.part /var/lib/nova/instances/_base/db5baf63538b1dfcb3d1f0364e6b40e87a2aa4bb.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e345 e345: 3 total, 3 up, 3 in
Jan 31 08:21:22 compute-2 ceph-mon[77282]: pgmap v2617: 305 pgs: 305 active+clean; 413 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 5.4 MiB/s wr, 231 op/s
Jan 31 08:21:22 compute-2 nova_compute[226829]: 2026-01-31 08:21:22.691 226833 DEBUG oslo_concurrency.processutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/db5baf63538b1dfcb3d1f0364e6b40e87a2aa4bb.part /var/lib/nova/instances/_base/db5baf63538b1dfcb3d1f0364e6b40e87a2aa4bb.converted" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:22 compute-2 nova_compute[226829]: 2026-01-31 08:21:22.695 226833 DEBUG oslo_concurrency.processutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/db5baf63538b1dfcb3d1f0364e6b40e87a2aa4bb.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:22 compute-2 nova_compute[226829]: 2026-01-31 08:21:22.751 226833 DEBUG oslo_concurrency.processutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/db5baf63538b1dfcb3d1f0364e6b40e87a2aa4bb.converted --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:22 compute-2 nova_compute[226829]: 2026-01-31 08:21:22.752 226833 DEBUG oslo_concurrency.lockutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "db5baf63538b1dfcb3d1f0364e6b40e87a2aa4bb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:22 compute-2 nova_compute[226829]: 2026-01-31 08:21:22.780 226833 DEBUG nova.storage.rbd_utils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] rbd image aa396f7d-4c1b-445e-807c-05107a729be4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:21:22 compute-2 nova_compute[226829]: 2026-01-31 08:21:22.783 226833 DEBUG oslo_concurrency.processutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/db5baf63538b1dfcb3d1f0364e6b40e87a2aa4bb aa396f7d-4c1b-445e-807c-05107a729be4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:23 compute-2 nova_compute[226829]: 2026-01-31 08:21:23.136 226833 DEBUG oslo_concurrency.processutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/db5baf63538b1dfcb3d1f0364e6b40e87a2aa4bb aa396f7d-4c1b-445e-807c-05107a729be4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:23.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:23 compute-2 nova_compute[226829]: 2026-01-31 08:21:23.214 226833 DEBUG nova.storage.rbd_utils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] resizing rbd image aa396f7d-4c1b-445e-807c-05107a729be4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:21:23 compute-2 nova_compute[226829]: 2026-01-31 08:21:23.279 226833 DEBUG nova.network.neutron [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Successfully updated port: c7810272-d139-4528-b358-19b623e1a34d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:21:23 compute-2 nova_compute[226829]: 2026-01-31 08:21:23.321 226833 DEBUG oslo_concurrency.lockutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquiring lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:21:23 compute-2 nova_compute[226829]: 2026-01-31 08:21:23.321 226833 DEBUG oslo_concurrency.lockutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquired lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:21:23 compute-2 nova_compute[226829]: 2026-01-31 08:21:23.321 226833 DEBUG nova.network.neutron [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:21:23 compute-2 nova_compute[226829]: 2026-01-31 08:21:23.326 226833 DEBUG nova.objects.instance [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lazy-loading 'migration_context' on Instance uuid aa396f7d-4c1b-445e-807c-05107a729be4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:21:23 compute-2 nova_compute[226829]: 2026-01-31 08:21:23.344 226833 DEBUG nova.virt.libvirt.driver [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:21:23 compute-2 nova_compute[226829]: 2026-01-31 08:21:23.345 226833 DEBUG nova.virt.libvirt.driver [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Ensure instance console log exists: /var/lib/nova/instances/aa396f7d-4c1b-445e-807c-05107a729be4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:21:23 compute-2 nova_compute[226829]: 2026-01-31 08:21:23.345 226833 DEBUG oslo_concurrency.lockutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:23 compute-2 nova_compute[226829]: 2026-01-31 08:21:23.345 226833 DEBUG oslo_concurrency.lockutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:23 compute-2 nova_compute[226829]: 2026-01-31 08:21:23.346 226833 DEBUG oslo_concurrency.lockutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:23 compute-2 nova_compute[226829]: 2026-01-31 08:21:23.408 226833 DEBUG nova.compute.manager [req-039cbce3-a9c7-43dc-a1db-ec82fd72828f req-60c3a4ad-10e0-4463-b109-dfbf815a0c49 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Received event network-changed-c7810272-d139-4528-b358-19b623e1a34d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:21:23 compute-2 nova_compute[226829]: 2026-01-31 08:21:23.408 226833 DEBUG nova.compute.manager [req-039cbce3-a9c7-43dc-a1db-ec82fd72828f req-60c3a4ad-10e0-4463-b109-dfbf815a0c49 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Refreshing instance network info cache due to event network-changed-c7810272-d139-4528-b358-19b623e1a34d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:21:23 compute-2 nova_compute[226829]: 2026-01-31 08:21:23.409 226833 DEBUG oslo_concurrency.lockutils [req-039cbce3-a9c7-43dc-a1db-ec82fd72828f req-60c3a4ad-10e0-4463-b109-dfbf815a0c49 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:21:23 compute-2 ceph-mon[77282]: osdmap e345: 3 total, 3 up, 3 in
Jan 31 08:21:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:21:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:23.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:21:23 compute-2 nova_compute[226829]: 2026-01-31 08:21:23.635 226833 DEBUG nova.network.neutron [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:21:24 compute-2 ceph-mon[77282]: pgmap v2619: 305 pgs: 305 active+clean; 417 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.5 MiB/s wr, 181 op/s
Jan 31 08:21:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:21:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:25.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:21:25 compute-2 nova_compute[226829]: 2026-01-31 08:21:25.364 226833 DEBUG nova.network.neutron [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Updating instance_info_cache with network_info: [{"id": "c7810272-d139-4528-b358-19b623e1a34d", "address": "fa:16:3e:d0:e7:7f", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7810272-d1", "ovs_interfaceid": "c7810272-d139-4528-b358-19b623e1a34d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:21:25 compute-2 nova_compute[226829]: 2026-01-31 08:21:25.391 226833 DEBUG oslo_concurrency.lockutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Releasing lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:21:25 compute-2 nova_compute[226829]: 2026-01-31 08:21:25.391 226833 DEBUG nova.compute.manager [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Instance network_info: |[{"id": "c7810272-d139-4528-b358-19b623e1a34d", "address": "fa:16:3e:d0:e7:7f", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7810272-d1", "ovs_interfaceid": "c7810272-d139-4528-b358-19b623e1a34d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:21:25 compute-2 nova_compute[226829]: 2026-01-31 08:21:25.392 226833 DEBUG oslo_concurrency.lockutils [req-039cbce3-a9c7-43dc-a1db-ec82fd72828f req-60c3a4ad-10e0-4463-b109-dfbf815a0c49 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:21:25 compute-2 nova_compute[226829]: 2026-01-31 08:21:25.392 226833 DEBUG nova.network.neutron [req-039cbce3-a9c7-43dc-a1db-ec82fd72828f req-60c3a4ad-10e0-4463-b109-dfbf815a0c49 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Refreshing network info cache for port c7810272-d139-4528-b358-19b623e1a34d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:21:25 compute-2 nova_compute[226829]: 2026-01-31 08:21:25.395 226833 DEBUG nova.virt.libvirt.driver [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Start _get_guest_xml network_info=[{"id": "c7810272-d139-4528-b358-19b623e1a34d", "address": "fa:16:3e:d0:e7:7f", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7810272-d1", "ovs_interfaceid": "c7810272-d139-4528-b358-19b623e1a34d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T08:21:12Z,direct_url=<?>,disk_format='qcow2',id=1a3bb6f8-bfef-4edf-a7ea-1489b5cad196,min_disk=0,min_ram=0,name='tempest-scenario-img--231190322',owner='134c066ac92844ff853b216870fa8eed',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T08:21:14Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '1a3bb6f8-bfef-4edf-a7ea-1489b5cad196'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:21:25 compute-2 nova_compute[226829]: 2026-01-31 08:21:25.399 226833 WARNING nova.virt.libvirt.driver [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:21:25 compute-2 nova_compute[226829]: 2026-01-31 08:21:25.404 226833 DEBUG nova.virt.libvirt.host [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:21:25 compute-2 nova_compute[226829]: 2026-01-31 08:21:25.404 226833 DEBUG nova.virt.libvirt.host [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:21:25 compute-2 nova_compute[226829]: 2026-01-31 08:21:25.411 226833 DEBUG nova.virt.libvirt.host [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:21:25 compute-2 nova_compute[226829]: 2026-01-31 08:21:25.411 226833 DEBUG nova.virt.libvirt.host [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:21:25 compute-2 nova_compute[226829]: 2026-01-31 08:21:25.412 226833 DEBUG nova.virt.libvirt.driver [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:21:25 compute-2 nova_compute[226829]: 2026-01-31 08:21:25.412 226833 DEBUG nova.virt.hardware [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T08:21:12Z,direct_url=<?>,disk_format='qcow2',id=1a3bb6f8-bfef-4edf-a7ea-1489b5cad196,min_disk=0,min_ram=0,name='tempest-scenario-img--231190322',owner='134c066ac92844ff853b216870fa8eed',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T08:21:14Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:21:25 compute-2 nova_compute[226829]: 2026-01-31 08:21:25.413 226833 DEBUG nova.virt.hardware [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:21:25 compute-2 nova_compute[226829]: 2026-01-31 08:21:25.413 226833 DEBUG nova.virt.hardware [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:21:25 compute-2 nova_compute[226829]: 2026-01-31 08:21:25.413 226833 DEBUG nova.virt.hardware [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:21:25 compute-2 nova_compute[226829]: 2026-01-31 08:21:25.414 226833 DEBUG nova.virt.hardware [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:21:25 compute-2 nova_compute[226829]: 2026-01-31 08:21:25.414 226833 DEBUG nova.virt.hardware [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:21:25 compute-2 nova_compute[226829]: 2026-01-31 08:21:25.414 226833 DEBUG nova.virt.hardware [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:21:25 compute-2 nova_compute[226829]: 2026-01-31 08:21:25.414 226833 DEBUG nova.virt.hardware [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:21:25 compute-2 nova_compute[226829]: 2026-01-31 08:21:25.414 226833 DEBUG nova.virt.hardware [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:21:25 compute-2 nova_compute[226829]: 2026-01-31 08:21:25.415 226833 DEBUG nova.virt.hardware [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:21:25 compute-2 nova_compute[226829]: 2026-01-31 08:21:25.415 226833 DEBUG nova.virt.hardware [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:21:25 compute-2 nova_compute[226829]: 2026-01-31 08:21:25.418 226833 DEBUG oslo_concurrency.processutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2298688102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:21:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:25.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:21:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:21:25 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3591730240' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:25 compute-2 nova_compute[226829]: 2026-01-31 08:21:25.860 226833 DEBUG oslo_concurrency.processutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:25 compute-2 nova_compute[226829]: 2026-01-31 08:21:25.884 226833 DEBUG nova.storage.rbd_utils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] rbd image aa396f7d-4c1b-445e-807c-05107a729be4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:21:25 compute-2 nova_compute[226829]: 2026-01-31 08:21:25.888 226833 DEBUG oslo_concurrency.processutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:21:26 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3856109048' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.298 226833 DEBUG oslo_concurrency.processutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.300 226833 DEBUG nova.virt.libvirt.vif [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:21:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-485097177',display_name='tempest-TestMinimumBasicScenario-server-485097177',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-485097177',id=148,image_ref='1a3bb6f8-bfef-4edf-a7ea-1489b5cad196',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGOpaJxQC88oU1BAE6agEO8/UDKwtI95jIn5J+NZB6IktiWeDMwJPGWyNwRbN0e4Zkyig+zlgMtyl/CXhKkrxfhvbob06bGVRfII17t2PzSbTbwb3feBP+Tv9H8WPOTqeQ==',key_name='tempest-TestMinimumBasicScenario-761057108',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='134c066ac92844ff853b216870fa8eed',ramdisk_id='',reservation_id='r-uxkefg5k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1a3bb6f8-bfef-4edf-a7ea-1489b5cad196',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-975831205',owner_user_name='tempest-TestMinimumBasicScenario-975831205-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:21:19Z,user_data=None,user_id='7f0be9090fdf49d2ac15246a0a820d3f',uuid=aa396f7d-4c1b-445e-807c-05107a729be4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c7810272-d139-4528-b358-19b623e1a34d", "address": "fa:16:3e:d0:e7:7f", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7810272-d1", "ovs_interfaceid": "c7810272-d139-4528-b358-19b623e1a34d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.300 226833 DEBUG nova.network.os_vif_util [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Converting VIF {"id": "c7810272-d139-4528-b358-19b623e1a34d", "address": "fa:16:3e:d0:e7:7f", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7810272-d1", "ovs_interfaceid": "c7810272-d139-4528-b358-19b623e1a34d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.301 226833 DEBUG nova.network.os_vif_util [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d0:e7:7f,bridge_name='br-int',has_traffic_filtering=True,id=c7810272-d139-4528-b358-19b623e1a34d,network=Network(b8453b6a-05bd-4d59-86e9-a509416a9ef0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7810272-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.302 226833 DEBUG nova.objects.instance [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lazy-loading 'pci_devices' on Instance uuid aa396f7d-4c1b-445e-807c-05107a729be4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.331 226833 DEBUG nova.virt.libvirt.driver [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:21:26 compute-2 nova_compute[226829]:   <uuid>aa396f7d-4c1b-445e-807c-05107a729be4</uuid>
Jan 31 08:21:26 compute-2 nova_compute[226829]:   <name>instance-00000094</name>
Jan 31 08:21:26 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:21:26 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:21:26 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <nova:name>tempest-TestMinimumBasicScenario-server-485097177</nova:name>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:21:25</nova:creationTime>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:21:26 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:21:26 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:21:26 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:21:26 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:21:26 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:21:26 compute-2 nova_compute[226829]:         <nova:user uuid="7f0be9090fdf49d2ac15246a0a820d3f">tempest-TestMinimumBasicScenario-975831205-project-member</nova:user>
Jan 31 08:21:26 compute-2 nova_compute[226829]:         <nova:project uuid="134c066ac92844ff853b216870fa8eed">tempest-TestMinimumBasicScenario-975831205</nova:project>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="1a3bb6f8-bfef-4edf-a7ea-1489b5cad196"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:21:26 compute-2 nova_compute[226829]:         <nova:port uuid="c7810272-d139-4528-b358-19b623e1a34d">
Jan 31 08:21:26 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:21:26 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:21:26 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <system>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <entry name="serial">aa396f7d-4c1b-445e-807c-05107a729be4</entry>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <entry name="uuid">aa396f7d-4c1b-445e-807c-05107a729be4</entry>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     </system>
Jan 31 08:21:26 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:21:26 compute-2 nova_compute[226829]:   <os>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:   </os>
Jan 31 08:21:26 compute-2 nova_compute[226829]:   <features>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:   </features>
Jan 31 08:21:26 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:21:26 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:21:26 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/aa396f7d-4c1b-445e-807c-05107a729be4_disk">
Jan 31 08:21:26 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       </source>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:21:26 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/aa396f7d-4c1b-445e-807c-05107a729be4_disk.config">
Jan 31 08:21:26 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       </source>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:21:26 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:d0:e7:7f"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <target dev="tapc7810272-d1"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/aa396f7d-4c1b-445e-807c-05107a729be4/console.log" append="off"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <video>
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     </video>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:21:26 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:21:26 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:21:26 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:21:26 compute-2 nova_compute[226829]: </domain>
Jan 31 08:21:26 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.332 226833 DEBUG nova.compute.manager [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Preparing to wait for external event network-vif-plugged-c7810272-d139-4528-b358-19b623e1a34d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.333 226833 DEBUG oslo_concurrency.lockutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquiring lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.333 226833 DEBUG oslo_concurrency.lockutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.333 226833 DEBUG oslo_concurrency.lockutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.334 226833 DEBUG nova.virt.libvirt.vif [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:21:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-485097177',display_name='tempest-TestMinimumBasicScenario-server-485097177',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-485097177',id=148,image_ref='1a3bb6f8-bfef-4edf-a7ea-1489b5cad196',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGOpaJxQC88oU1BAE6agEO8/UDKwtI95jIn5J+NZB6IktiWeDMwJPGWyNwRbN0e4Zkyig+zlgMtyl/CXhKkrxfhvbob06bGVRfII17t2PzSbTbwb3feBP+Tv9H8WPOTqeQ==',key_name='tempest-TestMinimumBasicScenario-761057108',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='134c066ac92844ff853b216870fa8eed',ramdisk_id='',reservation_id='r-uxkefg5k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1a3bb6f8-bfef-4edf-a7ea-1489b5cad196',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-975831205',owner_user_name='tempest-TestMinimumBasicScenario-975831205-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:21:19Z,user_data=None,user_id='7f0be9090fdf49d2ac15246a0a820d3f',uuid=aa396f7d-4c1b-445e-807c-05107a729be4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c7810272-d139-4528-b358-19b623e1a34d", "address": "fa:16:3e:d0:e7:7f", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7810272-d1", "ovs_interfaceid": "c7810272-d139-4528-b358-19b623e1a34d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.334 226833 DEBUG nova.network.os_vif_util [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Converting VIF {"id": "c7810272-d139-4528-b358-19b623e1a34d", "address": "fa:16:3e:d0:e7:7f", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7810272-d1", "ovs_interfaceid": "c7810272-d139-4528-b358-19b623e1a34d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.335 226833 DEBUG nova.network.os_vif_util [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d0:e7:7f,bridge_name='br-int',has_traffic_filtering=True,id=c7810272-d139-4528-b358-19b623e1a34d,network=Network(b8453b6a-05bd-4d59-86e9-a509416a9ef0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7810272-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.335 226833 DEBUG os_vif [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:e7:7f,bridge_name='br-int',has_traffic_filtering=True,id=c7810272-d139-4528-b358-19b623e1a34d,network=Network(b8453b6a-05bd-4d59-86e9-a509416a9ef0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7810272-d1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.336 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.336 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.337 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.349 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.349 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc7810272-d1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.349 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc7810272-d1, col_values=(('external_ids', {'iface-id': 'c7810272-d139-4528-b358-19b623e1a34d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d0:e7:7f', 'vm-uuid': 'aa396f7d-4c1b-445e-807c-05107a729be4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.351 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:26 compute-2 NetworkManager[48999]: <info>  [1769847686.3527] manager: (tapc7810272-d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/302)
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.354 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.360 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.361 226833 INFO os_vif [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:e7:7f,bridge_name='br-int',has_traffic_filtering=True,id=c7810272-d139-4528-b358-19b623e1a34d,network=Network(b8453b6a-05bd-4d59-86e9-a509416a9ef0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7810272-d1')
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.485 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.589 226833 DEBUG nova.virt.libvirt.driver [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.589 226833 DEBUG nova.virt.libvirt.driver [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.589 226833 DEBUG nova.virt.libvirt.driver [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] No VIF found with MAC fa:16:3e:d0:e7:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.590 226833 INFO nova.virt.libvirt.driver [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Using config drive
Jan 31 08:21:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1999579920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3591730240' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:26 compute-2 ceph-mon[77282]: pgmap v2620: 305 pgs: 305 active+clean; 483 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 6.3 MiB/s wr, 254 op/s
Jan 31 08:21:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3856109048' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:26 compute-2 nova_compute[226829]: 2026-01-31 08:21:26.755 226833 DEBUG nova.storage.rbd_utils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] rbd image aa396f7d-4c1b-445e-807c-05107a729be4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:21:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:27.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:21:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:27.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:21:27 compute-2 nova_compute[226829]: 2026-01-31 08:21:27.660 226833 INFO nova.virt.libvirt.driver [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Creating config drive at /var/lib/nova/instances/aa396f7d-4c1b-445e-807c-05107a729be4/disk.config
Jan 31 08:21:27 compute-2 nova_compute[226829]: 2026-01-31 08:21:27.667 226833 DEBUG oslo_concurrency.processutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aa396f7d-4c1b-445e-807c-05107a729be4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpsjt2cjow execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:27 compute-2 nova_compute[226829]: 2026-01-31 08:21:27.808 226833 DEBUG oslo_concurrency.processutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aa396f7d-4c1b-445e-807c-05107a729be4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpsjt2cjow" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:27 compute-2 nova_compute[226829]: 2026-01-31 08:21:27.857 226833 DEBUG nova.storage.rbd_utils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] rbd image aa396f7d-4c1b-445e-807c-05107a729be4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:21:27 compute-2 nova_compute[226829]: 2026-01-31 08:21:27.862 226833 DEBUG oslo_concurrency.processutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aa396f7d-4c1b-445e-807c-05107a729be4/disk.config aa396f7d-4c1b-445e-807c-05107a729be4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:28 compute-2 nova_compute[226829]: 2026-01-31 08:21:28.067 226833 DEBUG oslo_concurrency.processutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aa396f7d-4c1b-445e-807c-05107a729be4/disk.config aa396f7d-4c1b-445e-807c-05107a729be4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:28 compute-2 nova_compute[226829]: 2026-01-31 08:21:28.068 226833 INFO nova.virt.libvirt.driver [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Deleting local config drive /var/lib/nova/instances/aa396f7d-4c1b-445e-807c-05107a729be4/disk.config because it was imported into RBD.
Jan 31 08:21:28 compute-2 kernel: tapc7810272-d1: entered promiscuous mode
Jan 31 08:21:28 compute-2 NetworkManager[48999]: <info>  [1769847688.1102] manager: (tapc7810272-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/303)
Jan 31 08:21:28 compute-2 nova_compute[226829]: 2026-01-31 08:21:28.144 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:28 compute-2 ovn_controller[133834]: 2026-01-31T08:21:28Z|00613|binding|INFO|Claiming lport c7810272-d139-4528-b358-19b623e1a34d for this chassis.
Jan 31 08:21:28 compute-2 ovn_controller[133834]: 2026-01-31T08:21:28Z|00614|binding|INFO|c7810272-d139-4528-b358-19b623e1a34d: Claiming fa:16:3e:d0:e7:7f 10.100.0.7
Jan 31 08:21:28 compute-2 systemd-udevd[294753]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:21:28 compute-2 nova_compute[226829]: 2026-01-31 08:21:28.148 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:28 compute-2 NetworkManager[48999]: <info>  [1769847688.1564] device (tapc7810272-d1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:21:28 compute-2 NetworkManager[48999]: <info>  [1769847688.1574] device (tapc7810272-d1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:28.162 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d0:e7:7f 10.100.0.7'], port_security=['fa:16:3e:d0:e7:7f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'aa396f7d-4c1b-445e-807c-05107a729be4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '134c066ac92844ff853b216870fa8eed', 'neutron:revision_number': '2', 'neutron:security_group_ids': '967ea74e-50db-4569-92ae-9b918e86440d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0bd5646e-2523-4ee9-a162-795050792e9d, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=c7810272-d139-4528-b358-19b623e1a34d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:28.166 143841 INFO neutron.agent.ovn.metadata.agent [-] Port c7810272-d139-4528-b358-19b623e1a34d in datapath b8453b6a-05bd-4d59-86e9-a509416a9ef0 bound to our chassis
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:28.170 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b8453b6a-05bd-4d59-86e9-a509416a9ef0
Jan 31 08:21:28 compute-2 ovn_controller[133834]: 2026-01-31T08:21:28Z|00615|binding|INFO|Setting lport c7810272-d139-4528-b358-19b623e1a34d ovn-installed in OVS
Jan 31 08:21:28 compute-2 ovn_controller[133834]: 2026-01-31T08:21:28Z|00616|binding|INFO|Setting lport c7810272-d139-4528-b358-19b623e1a34d up in Southbound
Jan 31 08:21:28 compute-2 nova_compute[226829]: 2026-01-31 08:21:28.174 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:28 compute-2 systemd-machined[195142]: New machine qemu-68-instance-00000094.
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:28.182 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3a3f008c-7103-4793-937c-0f7b1d4ff545]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:28.183 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb8453b6a-01 in ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:28.185 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb8453b6a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:28.185 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[237e5200-a2f4-4069-b025-8fc1f08e9717]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:28.186 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[384deaf7-d0f2-4d61-8694-e07ab35d67ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:28 compute-2 systemd[1]: Started Virtual Machine qemu-68-instance-00000094.
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:28.199 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[ae9b0ea3-9709-4eb0-acab-077d60f1d271]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:28.209 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c04918bc-b595-4cc1-86cb-1b26a29e0e01]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:28.239 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[eef9ffaf-8480-4b7a-8908-fc4dc233022c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:28.247 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[11a0237e-cfaf-424b-99cc-af2e7417cac9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:28 compute-2 NetworkManager[48999]: <info>  [1769847688.2483] manager: (tapb8453b6a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/304)
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:28.276 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[81fbb008-64d2-4eb4-9026-e678d8a3dbe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:28.279 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[9a26985b-aa87-4396-a718-8b4d55c1df44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:28 compute-2 NetworkManager[48999]: <info>  [1769847688.2953] device (tapb8453b6a-00): carrier: link connected
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:28.298 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[ea7684fa-feaf-410f-aedb-3c53f4969373]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:28.314 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[80ca4d6e-fa02-441a-9954-29f9906c2aa8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8453b6a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0a:cc:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 191], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 794076, 'reachable_time': 16317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294814, 'error': None, 'target': 'ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:28 compute-2 sudo[294786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:21:28 compute-2 sudo[294786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:28 compute-2 sudo[294786]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:28.327 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[78075811-7dde-4711-87e6-a5d0a9663474]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0a:cc04'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 794076, 'tstamp': 794076}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294816, 'error': None, 'target': 'ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:28.341 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b6a27033-9a13-44f9-b537-e8fe9ab22c5e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8453b6a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0a:cc:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 191], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 794076, 'reachable_time': 16317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294818, 'error': None, 'target': 'ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:28.367 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[22a3bf74-31ef-4d5d-be33-a1779bbd83a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:28 compute-2 sudo[294819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:21:28 compute-2 nova_compute[226829]: 2026-01-31 08:21:28.379 226833 DEBUG nova.network.neutron [req-039cbce3-a9c7-43dc-a1db-ec82fd72828f req-60c3a4ad-10e0-4463-b109-dfbf815a0c49 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Updated VIF entry in instance network info cache for port c7810272-d139-4528-b358-19b623e1a34d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:21:28 compute-2 sudo[294819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:28 compute-2 nova_compute[226829]: 2026-01-31 08:21:28.381 226833 DEBUG nova.network.neutron [req-039cbce3-a9c7-43dc-a1db-ec82fd72828f req-60c3a4ad-10e0-4463-b109-dfbf815a0c49 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Updating instance_info_cache with network_info: [{"id": "c7810272-d139-4528-b358-19b623e1a34d", "address": "fa:16:3e:d0:e7:7f", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7810272-d1", "ovs_interfaceid": "c7810272-d139-4528-b358-19b623e1a34d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:21:28 compute-2 sudo[294819]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:28.410 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1fef9f0c-856a-4863-b3c3-658e3e4f5290]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:28.411 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8453b6a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:28.412 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:28.412 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8453b6a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:21:28 compute-2 NetworkManager[48999]: <info>  [1769847688.4150] manager: (tapb8453b6a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/305)
Jan 31 08:21:28 compute-2 kernel: tapb8453b6a-00: entered promiscuous mode
Jan 31 08:21:28 compute-2 nova_compute[226829]: 2026-01-31 08:21:28.416 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:28.417 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb8453b6a-00, col_values=(('external_ids', {'iface-id': 'eb4259dc-1b35-4b46-af47-bdd24739342f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:21:28 compute-2 ovn_controller[133834]: 2026-01-31T08:21:28Z|00617|binding|INFO|Releasing lport eb4259dc-1b35-4b46-af47-bdd24739342f from this chassis (sb_readonly=0)
Jan 31 08:21:28 compute-2 nova_compute[226829]: 2026-01-31 08:21:28.422 226833 DEBUG oslo_concurrency.lockutils [req-039cbce3-a9c7-43dc-a1db-ec82fd72828f req-60c3a4ad-10e0-4463-b109-dfbf815a0c49 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:21:28 compute-2 nova_compute[226829]: 2026-01-31 08:21:28.423 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:28.424 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b8453b6a-05bd-4d59-86e9-a509416a9ef0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b8453b6a-05bd-4d59-86e9-a509416a9ef0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:28.425 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[10f65994-2573-4367-8172-bfdd9f94a386]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:28.426 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-b8453b6a-05bd-4d59-86e9-a509416a9ef0
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/b8453b6a-05bd-4d59-86e9-a509416a9ef0.pid.haproxy
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID b8453b6a-05bd-4d59-86e9-a509416a9ef0
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:21:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:28.427 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'env', 'PROCESS_TAG=haproxy-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b8453b6a-05bd-4d59-86e9-a509416a9ef0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:21:28 compute-2 nova_compute[226829]: 2026-01-31 08:21:28.724 226833 DEBUG nova.compute.manager [req-14711ca5-d895-49f3-8433-c88b59a00762 req-e228d5d1-cdfb-4793-9580-3c6fbcbdf9e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Received event network-vif-plugged-c7810272-d139-4528-b358-19b623e1a34d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:21:28 compute-2 nova_compute[226829]: 2026-01-31 08:21:28.726 226833 DEBUG oslo_concurrency.lockutils [req-14711ca5-d895-49f3-8433-c88b59a00762 req-e228d5d1-cdfb-4793-9580-3c6fbcbdf9e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:28 compute-2 nova_compute[226829]: 2026-01-31 08:21:28.726 226833 DEBUG oslo_concurrency.lockutils [req-14711ca5-d895-49f3-8433-c88b59a00762 req-e228d5d1-cdfb-4793-9580-3c6fbcbdf9e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:28 compute-2 nova_compute[226829]: 2026-01-31 08:21:28.726 226833 DEBUG oslo_concurrency.lockutils [req-14711ca5-d895-49f3-8433-c88b59a00762 req-e228d5d1-cdfb-4793-9580-3c6fbcbdf9e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:28 compute-2 nova_compute[226829]: 2026-01-31 08:21:28.726 226833 DEBUG nova.compute.manager [req-14711ca5-d895-49f3-8433-c88b59a00762 req-e228d5d1-cdfb-4793-9580-3c6fbcbdf9e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Processing event network-vif-plugged-c7810272-d139-4528-b358-19b623e1a34d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:21:28 compute-2 podman[294908]: 2026-01-31 08:21:28.830308757 +0000 UTC m=+0.043549601 container create 2049d71ee55d29ff10ccea72d0b33a6296de1765ff9fdbdbc3a460ea318f40ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 08:21:28 compute-2 systemd[1]: Started libpod-conmon-2049d71ee55d29ff10ccea72d0b33a6296de1765ff9fdbdbc3a460ea318f40ce.scope.
Jan 31 08:21:28 compute-2 nova_compute[226829]: 2026-01-31 08:21:28.890 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847688.8899796, aa396f7d-4c1b-445e-807c-05107a729be4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:21:28 compute-2 nova_compute[226829]: 2026-01-31 08:21:28.891 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] VM Started (Lifecycle Event)
Jan 31 08:21:28 compute-2 nova_compute[226829]: 2026-01-31 08:21:28.893 226833 DEBUG nova.compute.manager [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:21:28 compute-2 nova_compute[226829]: 2026-01-31 08:21:28.897 226833 DEBUG nova.virt.libvirt.driver [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:21:28 compute-2 nova_compute[226829]: 2026-01-31 08:21:28.900 226833 INFO nova.virt.libvirt.driver [-] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Instance spawned successfully.
Jan 31 08:21:28 compute-2 nova_compute[226829]: 2026-01-31 08:21:28.901 226833 DEBUG nova.virt.libvirt.driver [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:21:28 compute-2 podman[294908]: 2026-01-31 08:21:28.806696157 +0000 UTC m=+0.019937031 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:21:28 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:21:28 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d4a2c821fc8139b63b62a18180193cadc61e12be3babef0fbca1dbfb9e7496c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:28 compute-2 podman[294908]: 2026-01-31 08:21:28.932097553 +0000 UTC m=+0.145338447 container init 2049d71ee55d29ff10ccea72d0b33a6296de1765ff9fdbdbc3a460ea318f40ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:21:28 compute-2 podman[294908]: 2026-01-31 08:21:28.940418328 +0000 UTC m=+0.153659182 container start 2049d71ee55d29ff10ccea72d0b33a6296de1765ff9fdbdbc3a460ea318f40ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 08:21:28 compute-2 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[294932]: [NOTICE]   (294936) : New worker (294938) forked
Jan 31 08:21:28 compute-2 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[294932]: [NOTICE]   (294936) : Loading success.
Jan 31 08:21:29 compute-2 nova_compute[226829]: 2026-01-31 08:21:29.075 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:21:29 compute-2 nova_compute[226829]: 2026-01-31 08:21:29.080 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:21:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:29.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:29 compute-2 nova_compute[226829]: 2026-01-31 08:21:29.210 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:21:29 compute-2 nova_compute[226829]: 2026-01-31 08:21:29.211 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847688.891082, aa396f7d-4c1b-445e-807c-05107a729be4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:21:29 compute-2 nova_compute[226829]: 2026-01-31 08:21:29.212 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] VM Paused (Lifecycle Event)
Jan 31 08:21:29 compute-2 nova_compute[226829]: 2026-01-31 08:21:29.223 226833 DEBUG nova.virt.libvirt.driver [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:21:29 compute-2 nova_compute[226829]: 2026-01-31 08:21:29.224 226833 DEBUG nova.virt.libvirt.driver [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:21:29 compute-2 nova_compute[226829]: 2026-01-31 08:21:29.225 226833 DEBUG nova.virt.libvirt.driver [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:21:29 compute-2 nova_compute[226829]: 2026-01-31 08:21:29.226 226833 DEBUG nova.virt.libvirt.driver [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:21:29 compute-2 nova_compute[226829]: 2026-01-31 08:21:29.228 226833 DEBUG nova.virt.libvirt.driver [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:21:29 compute-2 nova_compute[226829]: 2026-01-31 08:21:29.229 226833 DEBUG nova.virt.libvirt.driver [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:21:29 compute-2 ceph-mon[77282]: pgmap v2621: 305 pgs: 305 active+clean; 504 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 5.7 MiB/s wr, 234 op/s
Jan 31 08:21:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/907480789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:29 compute-2 nova_compute[226829]: 2026-01-31 08:21:29.319 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:21:29 compute-2 nova_compute[226829]: 2026-01-31 08:21:29.323 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847688.89613, aa396f7d-4c1b-445e-807c-05107a729be4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:21:29 compute-2 nova_compute[226829]: 2026-01-31 08:21:29.324 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] VM Resumed (Lifecycle Event)
Jan 31 08:21:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:29 compute-2 nova_compute[226829]: 2026-01-31 08:21:29.542 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:21:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:29.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:29 compute-2 nova_compute[226829]: 2026-01-31 08:21:29.545 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:21:29 compute-2 nova_compute[226829]: 2026-01-31 08:21:29.568 226833 INFO nova.compute.manager [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Took 9.72 seconds to spawn the instance on the hypervisor.
Jan 31 08:21:29 compute-2 nova_compute[226829]: 2026-01-31 08:21:29.569 226833 DEBUG nova.compute.manager [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:21:29 compute-2 nova_compute[226829]: 2026-01-31 08:21:29.584 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:21:29 compute-2 nova_compute[226829]: 2026-01-31 08:21:29.742 226833 INFO nova.compute.manager [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Took 10.98 seconds to build instance.
Jan 31 08:21:29 compute-2 nova_compute[226829]: 2026-01-31 08:21:29.822 226833 DEBUG oslo_concurrency.lockutils [None req-d37960b2-5001-4764-b848-9341b4c7513a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:30 compute-2 ceph-mon[77282]: pgmap v2622: 305 pgs: 305 active+clean; 530 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 7.1 MiB/s wr, 231 op/s
Jan 31 08:21:30 compute-2 nova_compute[226829]: 2026-01-31 08:21:30.895 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:30 compute-2 nova_compute[226829]: 2026-01-31 08:21:30.914 226833 DEBUG nova.compute.manager [req-1e10e5a0-6078-4561-a889-ffc22c18df23 req-85503af6-2788-4baa-9717-e88b581a6409 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Received event network-vif-plugged-c7810272-d139-4528-b358-19b623e1a34d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:21:30 compute-2 nova_compute[226829]: 2026-01-31 08:21:30.914 226833 DEBUG oslo_concurrency.lockutils [req-1e10e5a0-6078-4561-a889-ffc22c18df23 req-85503af6-2788-4baa-9717-e88b581a6409 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:30 compute-2 nova_compute[226829]: 2026-01-31 08:21:30.915 226833 DEBUG oslo_concurrency.lockutils [req-1e10e5a0-6078-4561-a889-ffc22c18df23 req-85503af6-2788-4baa-9717-e88b581a6409 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:30 compute-2 nova_compute[226829]: 2026-01-31 08:21:30.915 226833 DEBUG oslo_concurrency.lockutils [req-1e10e5a0-6078-4561-a889-ffc22c18df23 req-85503af6-2788-4baa-9717-e88b581a6409 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:30 compute-2 nova_compute[226829]: 2026-01-31 08:21:30.915 226833 DEBUG nova.compute.manager [req-1e10e5a0-6078-4561-a889-ffc22c18df23 req-85503af6-2788-4baa-9717-e88b581a6409 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] No waiting events found dispatching network-vif-plugged-c7810272-d139-4528-b358-19b623e1a34d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:21:30 compute-2 nova_compute[226829]: 2026-01-31 08:21:30.916 226833 WARNING nova.compute.manager [req-1e10e5a0-6078-4561-a889-ffc22c18df23 req-85503af6-2788-4baa-9717-e88b581a6409 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Received unexpected event network-vif-plugged-c7810272-d139-4528-b358-19b623e1a34d for instance with vm_state active and task_state None.
Jan 31 08:21:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:21:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:31.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:21:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/410450162' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:31 compute-2 nova_compute[226829]: 2026-01-31 08:21:31.352 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:31 compute-2 nova_compute[226829]: 2026-01-31 08:21:31.501 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:31.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:32 compute-2 ceph-mon[77282]: pgmap v2623: 305 pgs: 305 active+clean; 557 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 8.6 MiB/s wr, 247 op/s
Jan 31 08:21:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:33.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3987036066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:21:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:33.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:21:34 compute-2 nova_compute[226829]: 2026-01-31 08:21:34.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:34 compute-2 nova_compute[226829]: 2026-01-31 08:21:34.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:21:34 compute-2 nova_compute[226829]: 2026-01-31 08:21:34.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:21:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2232488299' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:34 compute-2 ceph-mon[77282]: pgmap v2624: 305 pgs: 305 active+clean; 565 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 7.7 MiB/s wr, 250 op/s
Jan 31 08:21:34 compute-2 nova_compute[226829]: 2026-01-31 08:21:34.790 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-e8e7a13f-a648-45dc-b768-ac5deac97083" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:21:34 compute-2 nova_compute[226829]: 2026-01-31 08:21:34.790 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-e8e7a13f-a648-45dc-b768-ac5deac97083" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:21:34 compute-2 nova_compute[226829]: 2026-01-31 08:21:34.791 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:21:34 compute-2 nova_compute[226829]: 2026-01-31 08:21:34.791 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid e8e7a13f-a648-45dc-b768-ac5deac97083 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:21:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:35.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:21:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:35.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:21:36 compute-2 ceph-mon[77282]: pgmap v2625: 305 pgs: 305 active+clean; 568 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 7.3 MiB/s wr, 261 op/s
Jan 31 08:21:36 compute-2 nova_compute[226829]: 2026-01-31 08:21:36.271 226833 DEBUG oslo_concurrency.lockutils [None req-b4acf1d9-a11e-4a81-b453-cd0d1746a3b4 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquiring lock "aa396f7d-4c1b-445e-807c-05107a729be4" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:36 compute-2 nova_compute[226829]: 2026-01-31 08:21:36.271 226833 DEBUG oslo_concurrency.lockutils [None req-b4acf1d9-a11e-4a81-b453-cd0d1746a3b4 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:36 compute-2 nova_compute[226829]: 2026-01-31 08:21:36.289 226833 DEBUG nova.objects.instance [None req-b4acf1d9-a11e-4a81-b453-cd0d1746a3b4 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lazy-loading 'flavor' on Instance uuid aa396f7d-4c1b-445e-807c-05107a729be4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:21:36 compute-2 nova_compute[226829]: 2026-01-31 08:21:36.344 226833 DEBUG oslo_concurrency.lockutils [None req-b4acf1d9-a11e-4a81-b453-cd0d1746a3b4 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:36 compute-2 nova_compute[226829]: 2026-01-31 08:21:36.355 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:36 compute-2 nova_compute[226829]: 2026-01-31 08:21:36.502 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:36 compute-2 nova_compute[226829]: 2026-01-31 08:21:36.661 226833 DEBUG oslo_concurrency.lockutils [None req-b4acf1d9-a11e-4a81-b453-cd0d1746a3b4 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquiring lock "aa396f7d-4c1b-445e-807c-05107a729be4" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:36 compute-2 nova_compute[226829]: 2026-01-31 08:21:36.661 226833 DEBUG oslo_concurrency.lockutils [None req-b4acf1d9-a11e-4a81-b453-cd0d1746a3b4 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:36 compute-2 nova_compute[226829]: 2026-01-31 08:21:36.662 226833 INFO nova.compute.manager [None req-b4acf1d9-a11e-4a81-b453-cd0d1746a3b4 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Attaching volume 9e79b760-3e32-4b0f-9644-d12b3e5189ad to /dev/vdb
Jan 31 08:21:36 compute-2 nova_compute[226829]: 2026-01-31 08:21:36.874 226833 DEBUG os_brick.utils [None req-b4acf1d9-a11e-4a81-b453-cd0d1746a3b4 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:21:36 compute-2 nova_compute[226829]: 2026-01-31 08:21:36.877 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:36 compute-2 nova_compute[226829]: 2026-01-31 08:21:36.890 236868 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:36 compute-2 nova_compute[226829]: 2026-01-31 08:21:36.890 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[1a924513-8d70-414e-a0a9-7f70c58bf551]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:36 compute-2 nova_compute[226829]: 2026-01-31 08:21:36.892 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:36 compute-2 nova_compute[226829]: 2026-01-31 08:21:36.900 236868 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:36 compute-2 nova_compute[226829]: 2026-01-31 08:21:36.900 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[b2950f67-a803-4b28-b87f-73f9502631e2]: (4, ('InitiatorName=iqn.1994-05.com.redhat:70a4e945afb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:36 compute-2 nova_compute[226829]: 2026-01-31 08:21:36.902 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:36 compute-2 nova_compute[226829]: 2026-01-31 08:21:36.910 236868 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:36 compute-2 nova_compute[226829]: 2026-01-31 08:21:36.910 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[e3e92174-a02e-4ec2-a1d5-ebf3352b6c9e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:36 compute-2 nova_compute[226829]: 2026-01-31 08:21:36.912 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[04689bd7-9500-4299-aa35-942d197e3b99]: (4, 'd14f084b-ec77-4fba-801f-103494d34b3a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:36 compute-2 nova_compute[226829]: 2026-01-31 08:21:36.912 226833 DEBUG oslo_concurrency.processutils [None req-b4acf1d9-a11e-4a81-b453-cd0d1746a3b4 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:36 compute-2 nova_compute[226829]: 2026-01-31 08:21:36.943 226833 DEBUG oslo_concurrency.processutils [None req-b4acf1d9-a11e-4a81-b453-cd0d1746a3b4 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:36 compute-2 nova_compute[226829]: 2026-01-31 08:21:36.945 226833 DEBUG os_brick.initiator.connectors.lightos [None req-b4acf1d9-a11e-4a81-b453-cd0d1746a3b4 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:21:36 compute-2 nova_compute[226829]: 2026-01-31 08:21:36.946 226833 DEBUG os_brick.initiator.connectors.lightos [None req-b4acf1d9-a11e-4a81-b453-cd0d1746a3b4 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:21:36 compute-2 nova_compute[226829]: 2026-01-31 08:21:36.946 226833 DEBUG os_brick.initiator.connectors.lightos [None req-b4acf1d9-a11e-4a81-b453-cd0d1746a3b4 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:21:36 compute-2 nova_compute[226829]: 2026-01-31 08:21:36.947 226833 DEBUG os_brick.utils [None req-b4acf1d9-a11e-4a81-b453-cd0d1746a3b4 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] <== get_connector_properties: return (72ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:70a4e945afb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'd14f084b-ec77-4fba-801f-103494d34b3a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:21:36 compute-2 nova_compute[226829]: 2026-01-31 08:21:36.947 226833 DEBUG nova.virt.block_device [None req-b4acf1d9-a11e-4a81-b453-cd0d1746a3b4 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Updating existing volume attachment record: f22791e6-dc5c-4007-ae59-452a2e598637 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:21:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:37.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:37 compute-2 nova_compute[226829]: 2026-01-31 08:21:37.386 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Updating instance_info_cache with network_info: [{"id": "22013b45-81b3-43ce-9b55-b18d9c07bbef", "address": "fa:16:3e:74:cc:b0", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22013b45-81", "ovs_interfaceid": "22013b45-81b3-43ce-9b55-b18d9c07bbef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:21:37 compute-2 nova_compute[226829]: 2026-01-31 08:21:37.426 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-e8e7a13f-a648-45dc-b768-ac5deac97083" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:21:37 compute-2 nova_compute[226829]: 2026-01-31 08:21:37.427 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:21:37 compute-2 nova_compute[226829]: 2026-01-31 08:21:37.428 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:37 compute-2 nova_compute[226829]: 2026-01-31 08:21:37.429 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:37.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:37 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3451054505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:37 compute-2 nova_compute[226829]: 2026-01-31 08:21:37.942 226833 DEBUG nova.objects.instance [None req-b4acf1d9-a11e-4a81-b453-cd0d1746a3b4 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lazy-loading 'flavor' on Instance uuid aa396f7d-4c1b-445e-807c-05107a729be4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:21:37 compute-2 nova_compute[226829]: 2026-01-31 08:21:37.983 226833 DEBUG nova.virt.libvirt.driver [None req-b4acf1d9-a11e-4a81-b453-cd0d1746a3b4 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Attempting to attach volume 9e79b760-3e32-4b0f-9644-d12b3e5189ad with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 31 08:21:37 compute-2 nova_compute[226829]: 2026-01-31 08:21:37.987 226833 DEBUG nova.virt.libvirt.guest [None req-b4acf1d9-a11e-4a81-b453-cd0d1746a3b4 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 08:21:37 compute-2 nova_compute[226829]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:21:37 compute-2 nova_compute[226829]:   <source protocol="rbd" name="volumes/volume-9e79b760-3e32-4b0f-9644-d12b3e5189ad">
Jan 31 08:21:37 compute-2 nova_compute[226829]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:21:37 compute-2 nova_compute[226829]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:21:37 compute-2 nova_compute[226829]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:21:37 compute-2 nova_compute[226829]:   </source>
Jan 31 08:21:37 compute-2 nova_compute[226829]:   <auth username="openstack">
Jan 31 08:21:37 compute-2 nova_compute[226829]:     <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:21:37 compute-2 nova_compute[226829]:   </auth>
Jan 31 08:21:37 compute-2 nova_compute[226829]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:21:37 compute-2 nova_compute[226829]:   <serial>9e79b760-3e32-4b0f-9644-d12b3e5189ad</serial>
Jan 31 08:21:37 compute-2 nova_compute[226829]: </disk>
Jan 31 08:21:37 compute-2 nova_compute[226829]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 31 08:21:38 compute-2 nova_compute[226829]: 2026-01-31 08:21:38.376 226833 DEBUG nova.virt.libvirt.driver [None req-b4acf1d9-a11e-4a81-b453-cd0d1746a3b4 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:21:38 compute-2 nova_compute[226829]: 2026-01-31 08:21:38.378 226833 DEBUG nova.virt.libvirt.driver [None req-b4acf1d9-a11e-4a81-b453-cd0d1746a3b4 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:21:38 compute-2 nova_compute[226829]: 2026-01-31 08:21:38.378 226833 DEBUG nova.virt.libvirt.driver [None req-b4acf1d9-a11e-4a81-b453-cd0d1746a3b4 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:21:38 compute-2 nova_compute[226829]: 2026-01-31 08:21:38.379 226833 DEBUG nova.virt.libvirt.driver [None req-b4acf1d9-a11e-4a81-b453-cd0d1746a3b4 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] No VIF found with MAC fa:16:3e:d0:e7:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:21:38 compute-2 nova_compute[226829]: 2026-01-31 08:21:38.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:38 compute-2 nova_compute[226829]: 2026-01-31 08:21:38.790 226833 DEBUG oslo_concurrency.lockutils [None req-b4acf1d9-a11e-4a81-b453-cd0d1746a3b4 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:38 compute-2 ceph-mon[77282]: pgmap v2626: 305 pgs: 305 active+clean; 569 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 200 op/s
Jan 31 08:21:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:39.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:39 compute-2 podman[294980]: 2026-01-31 08:21:39.191934888 +0000 UTC m=+0.074335033 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 08:21:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:39.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:40 compute-2 ceph-mon[77282]: pgmap v2627: 305 pgs: 305 active+clean; 569 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.1 MiB/s wr, 178 op/s
Jan 31 08:21:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2092639563' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:40.949 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:21:40 compute-2 nova_compute[226829]: 2026-01-31 08:21:40.952 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:40 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:40.953 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:21:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:21:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:41.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:21:41 compute-2 nova_compute[226829]: 2026-01-31 08:21:41.357 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:41 compute-2 nova_compute[226829]: 2026-01-31 08:21:41.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:41 compute-2 nova_compute[226829]: 2026-01-31 08:21:41.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:41 compute-2 nova_compute[226829]: 2026-01-31 08:21:41.505 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:21:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:41.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:21:41 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1798986891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:42 compute-2 nova_compute[226829]: 2026-01-31 08:21:42.236 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:42 compute-2 NetworkManager[48999]: <info>  [1769847702.2372] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/306)
Jan 31 08:21:42 compute-2 NetworkManager[48999]: <info>  [1769847702.2380] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/307)
Jan 31 08:21:42 compute-2 nova_compute[226829]: 2026-01-31 08:21:42.323 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:42 compute-2 ovn_controller[133834]: 2026-01-31T08:21:42Z|00618|binding|INFO|Releasing lport fd5187fd-cce9-41da-96d2-ef75fbcbcf0f from this chassis (sb_readonly=0)
Jan 31 08:21:42 compute-2 ovn_controller[133834]: 2026-01-31T08:21:42Z|00619|binding|INFO|Releasing lport eb4259dc-1b35-4b46-af47-bdd24739342f from this chassis (sb_readonly=0)
Jan 31 08:21:42 compute-2 nova_compute[226829]: 2026-01-31 08:21:42.342 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:42 compute-2 ceph-mon[77282]: pgmap v2628: 305 pgs: 305 active+clean; 569 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.9 MiB/s wr, 165 op/s
Jan 31 08:21:43 compute-2 nova_compute[226829]: 2026-01-31 08:21:43.120 226833 DEBUG nova.compute.manager [req-d23ca892-a2fe-4267-abe2-adfb35757978 req-254dad75-04a5-4f7c-8ff1-f17ac66847e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Received event network-changed-c7810272-d139-4528-b358-19b623e1a34d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:21:43 compute-2 nova_compute[226829]: 2026-01-31 08:21:43.120 226833 DEBUG nova.compute.manager [req-d23ca892-a2fe-4267-abe2-adfb35757978 req-254dad75-04a5-4f7c-8ff1-f17ac66847e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Refreshing instance network info cache due to event network-changed-c7810272-d139-4528-b358-19b623e1a34d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:21:43 compute-2 nova_compute[226829]: 2026-01-31 08:21:43.121 226833 DEBUG oslo_concurrency.lockutils [req-d23ca892-a2fe-4267-abe2-adfb35757978 req-254dad75-04a5-4f7c-8ff1-f17ac66847e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:21:43 compute-2 nova_compute[226829]: 2026-01-31 08:21:43.121 226833 DEBUG oslo_concurrency.lockutils [req-d23ca892-a2fe-4267-abe2-adfb35757978 req-254dad75-04a5-4f7c-8ff1-f17ac66847e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:21:43 compute-2 nova_compute[226829]: 2026-01-31 08:21:43.121 226833 DEBUG nova.network.neutron [req-d23ca892-a2fe-4267-abe2-adfb35757978 req-254dad75-04a5-4f7c-8ff1-f17ac66847e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Refreshing network info cache for port c7810272-d139-4528-b358-19b623e1a34d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:21:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:43.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:43 compute-2 ovn_controller[133834]: 2026-01-31T08:21:43Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d0:e7:7f 10.100.0.7
Jan 31 08:21:43 compute-2 ovn_controller[133834]: 2026-01-31T08:21:43Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d0:e7:7f 10.100.0.7
Jan 31 08:21:43 compute-2 nova_compute[226829]: 2026-01-31 08:21:43.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:21:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:43.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:21:43 compute-2 nova_compute[226829]: 2026-01-31 08:21:43.740 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:43 compute-2 nova_compute[226829]: 2026-01-31 08:21:43.741 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:43 compute-2 nova_compute[226829]: 2026-01-31 08:21:43.741 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:43 compute-2 nova_compute[226829]: 2026-01-31 08:21:43.741 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:21:43 compute-2 nova_compute[226829]: 2026-01-31 08:21:43.742 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1191907265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:21:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1300955847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:44 compute-2 nova_compute[226829]: 2026-01-31 08:21:44.198 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:44 compute-2 nova_compute[226829]: 2026-01-31 08:21:44.329 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:21:44 compute-2 nova_compute[226829]: 2026-01-31 08:21:44.329 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:21:44 compute-2 nova_compute[226829]: 2026-01-31 08:21:44.329 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:21:44 compute-2 nova_compute[226829]: 2026-01-31 08:21:44.332 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:21:44 compute-2 nova_compute[226829]: 2026-01-31 08:21:44.332 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:21:44 compute-2 nova_compute[226829]: 2026-01-31 08:21:44.514 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:21:44 compute-2 nova_compute[226829]: 2026-01-31 08:21:44.516 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3847MB free_disk=20.743247985839844GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:21:44 compute-2 nova_compute[226829]: 2026-01-31 08:21:44.516 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:44 compute-2 nova_compute[226829]: 2026-01-31 08:21:44.516 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:44 compute-2 nova_compute[226829]: 2026-01-31 08:21:44.665 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance e8e7a13f-a648-45dc-b768-ac5deac97083 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:21:44 compute-2 nova_compute[226829]: 2026-01-31 08:21:44.665 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance aa396f7d-4c1b-445e-807c-05107a729be4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:21:44 compute-2 nova_compute[226829]: 2026-01-31 08:21:44.665 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:21:44 compute-2 nova_compute[226829]: 2026-01-31 08:21:44.665 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:21:44 compute-2 nova_compute[226829]: 2026-01-31 08:21:44.746 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:45 compute-2 ceph-mon[77282]: pgmap v2629: 305 pgs: 305 active+clean; 569 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 315 KiB/s wr, 142 op/s
Jan 31 08:21:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1300955847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2707252371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:21:45 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3079979552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:45 compute-2 nova_compute[226829]: 2026-01-31 08:21:45.168 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:45 compute-2 nova_compute[226829]: 2026-01-31 08:21:45.173 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:21:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:45.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:45 compute-2 nova_compute[226829]: 2026-01-31 08:21:45.209 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:21:45 compute-2 nova_compute[226829]: 2026-01-31 08:21:45.286 226833 DEBUG nova.compute.manager [req-16fa6e58-dc21-4e24-b17b-820b5aa31cb1 req-f512d6c6-dc06-44ee-b7e3-fbaaf598b93a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Received event network-changed-c7810272-d139-4528-b358-19b623e1a34d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:21:45 compute-2 nova_compute[226829]: 2026-01-31 08:21:45.287 226833 DEBUG nova.compute.manager [req-16fa6e58-dc21-4e24-b17b-820b5aa31cb1 req-f512d6c6-dc06-44ee-b7e3-fbaaf598b93a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Refreshing instance network info cache due to event network-changed-c7810272-d139-4528-b358-19b623e1a34d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:21:45 compute-2 nova_compute[226829]: 2026-01-31 08:21:45.287 226833 DEBUG oslo_concurrency.lockutils [req-16fa6e58-dc21-4e24-b17b-820b5aa31cb1 req-f512d6c6-dc06-44ee-b7e3-fbaaf598b93a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:21:45 compute-2 nova_compute[226829]: 2026-01-31 08:21:45.318 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:21:45 compute-2 nova_compute[226829]: 2026-01-31 08:21:45.318 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:45.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3079979552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:46 compute-2 ceph-mon[77282]: pgmap v2630: 305 pgs: 305 active+clean; 593 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.2 MiB/s wr, 162 op/s
Jan 31 08:21:46 compute-2 nova_compute[226829]: 2026-01-31 08:21:46.343 226833 DEBUG nova.network.neutron [req-d23ca892-a2fe-4267-abe2-adfb35757978 req-254dad75-04a5-4f7c-8ff1-f17ac66847e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Updated VIF entry in instance network info cache for port c7810272-d139-4528-b358-19b623e1a34d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:21:46 compute-2 nova_compute[226829]: 2026-01-31 08:21:46.344 226833 DEBUG nova.network.neutron [req-d23ca892-a2fe-4267-abe2-adfb35757978 req-254dad75-04a5-4f7c-8ff1-f17ac66847e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Updating instance_info_cache with network_info: [{"id": "c7810272-d139-4528-b358-19b623e1a34d", "address": "fa:16:3e:d0:e7:7f", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7810272-d1", "ovs_interfaceid": "c7810272-d139-4528-b358-19b623e1a34d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:21:46 compute-2 nova_compute[226829]: 2026-01-31 08:21:46.359 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:46 compute-2 nova_compute[226829]: 2026-01-31 08:21:46.486 226833 DEBUG oslo_concurrency.lockutils [req-d23ca892-a2fe-4267-abe2-adfb35757978 req-254dad75-04a5-4f7c-8ff1-f17ac66847e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:21:46 compute-2 nova_compute[226829]: 2026-01-31 08:21:46.486 226833 DEBUG oslo_concurrency.lockutils [req-16fa6e58-dc21-4e24-b17b-820b5aa31cb1 req-f512d6c6-dc06-44ee-b7e3-fbaaf598b93a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:21:46 compute-2 nova_compute[226829]: 2026-01-31 08:21:46.487 226833 DEBUG nova.network.neutron [req-16fa6e58-dc21-4e24-b17b-820b5aa31cb1 req-f512d6c6-dc06-44ee-b7e3-fbaaf598b93a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Refreshing network info cache for port c7810272-d139-4528-b358-19b623e1a34d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:21:46 compute-2 nova_compute[226829]: 2026-01-31 08:21:46.543 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:47.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:47.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:48 compute-2 sudo[295058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:21:48 compute-2 sudo[295058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:48 compute-2 sudo[295058]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:48 compute-2 sudo[295083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:21:48 compute-2 sudo[295083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:48 compute-2 sudo[295083]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:48 compute-2 nova_compute[226829]: 2026-01-31 08:21:48.737 226833 DEBUG nova.network.neutron [req-16fa6e58-dc21-4e24-b17b-820b5aa31cb1 req-f512d6c6-dc06-44ee-b7e3-fbaaf598b93a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Updated VIF entry in instance network info cache for port c7810272-d139-4528-b358-19b623e1a34d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:21:48 compute-2 nova_compute[226829]: 2026-01-31 08:21:48.738 226833 DEBUG nova.network.neutron [req-16fa6e58-dc21-4e24-b17b-820b5aa31cb1 req-f512d6c6-dc06-44ee-b7e3-fbaaf598b93a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Updating instance_info_cache with network_info: [{"id": "c7810272-d139-4528-b358-19b623e1a34d", "address": "fa:16:3e:d0:e7:7f", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7810272-d1", "ovs_interfaceid": "c7810272-d139-4528-b358-19b623e1a34d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:21:48 compute-2 nova_compute[226829]: 2026-01-31 08:21:48.781 226833 DEBUG oslo_concurrency.lockutils [req-16fa6e58-dc21-4e24-b17b-820b5aa31cb1 req-f512d6c6-dc06-44ee-b7e3-fbaaf598b93a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:21:48 compute-2 nova_compute[226829]: 2026-01-31 08:21:48.781 226833 DEBUG nova.compute.manager [req-16fa6e58-dc21-4e24-b17b-820b5aa31cb1 req-f512d6c6-dc06-44ee-b7e3-fbaaf598b93a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Received event network-changed-c7810272-d139-4528-b358-19b623e1a34d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:21:48 compute-2 nova_compute[226829]: 2026-01-31 08:21:48.781 226833 DEBUG nova.compute.manager [req-16fa6e58-dc21-4e24-b17b-820b5aa31cb1 req-f512d6c6-dc06-44ee-b7e3-fbaaf598b93a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Refreshing instance network info cache due to event network-changed-c7810272-d139-4528-b358-19b623e1a34d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:21:48 compute-2 nova_compute[226829]: 2026-01-31 08:21:48.782 226833 DEBUG oslo_concurrency.lockutils [req-16fa6e58-dc21-4e24-b17b-820b5aa31cb1 req-f512d6c6-dc06-44ee-b7e3-fbaaf598b93a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:21:48 compute-2 nova_compute[226829]: 2026-01-31 08:21:48.782 226833 DEBUG oslo_concurrency.lockutils [req-16fa6e58-dc21-4e24-b17b-820b5aa31cb1 req-f512d6c6-dc06-44ee-b7e3-fbaaf598b93a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:21:48 compute-2 nova_compute[226829]: 2026-01-31 08:21:48.782 226833 DEBUG nova.network.neutron [req-16fa6e58-dc21-4e24-b17b-820b5aa31cb1 req-f512d6c6-dc06-44ee-b7e3-fbaaf598b93a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Refreshing network info cache for port c7810272-d139-4528-b358-19b623e1a34d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:21:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:21:48.957 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:21:49 compute-2 ceph-mon[77282]: pgmap v2631: 305 pgs: 305 active+clean; 601 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 141 op/s
Jan 31 08:21:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:49.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:49.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:50 compute-2 podman[295109]: 2026-01-31 08:21:50.173365933 +0000 UTC m=+0.055513513 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 08:21:50 compute-2 nova_compute[226829]: 2026-01-31 08:21:50.669 226833 DEBUG nova.network.neutron [req-16fa6e58-dc21-4e24-b17b-820b5aa31cb1 req-f512d6c6-dc06-44ee-b7e3-fbaaf598b93a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Updated VIF entry in instance network info cache for port c7810272-d139-4528-b358-19b623e1a34d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:21:50 compute-2 nova_compute[226829]: 2026-01-31 08:21:50.669 226833 DEBUG nova.network.neutron [req-16fa6e58-dc21-4e24-b17b-820b5aa31cb1 req-f512d6c6-dc06-44ee-b7e3-fbaaf598b93a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Updating instance_info_cache with network_info: [{"id": "c7810272-d139-4528-b358-19b623e1a34d", "address": "fa:16:3e:d0:e7:7f", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7810272-d1", "ovs_interfaceid": "c7810272-d139-4528-b358-19b623e1a34d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:21:50 compute-2 nova_compute[226829]: 2026-01-31 08:21:50.711 226833 DEBUG oslo_concurrency.lockutils [req-16fa6e58-dc21-4e24-b17b-820b5aa31cb1 req-f512d6c6-dc06-44ee-b7e3-fbaaf598b93a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:21:51 compute-2 ceph-mon[77282]: pgmap v2632: 305 pgs: 305 active+clean; 614 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.3 MiB/s wr, 158 op/s
Jan 31 08:21:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:21:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:51.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:21:51 compute-2 nova_compute[226829]: 2026-01-31 08:21:51.361 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:51 compute-2 nova_compute[226829]: 2026-01-31 08:21:51.548 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:51.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:52 compute-2 nova_compute[226829]: 2026-01-31 08:21:52.798 226833 DEBUG nova.compute.manager [req-61dd5be0-071d-4edc-b785-4a108bdf5fa5 req-0b75b549-4b79-43b5-b134-2e75f1508050 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Received event network-changed-c7810272-d139-4528-b358-19b623e1a34d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:21:52 compute-2 nova_compute[226829]: 2026-01-31 08:21:52.798 226833 DEBUG nova.compute.manager [req-61dd5be0-071d-4edc-b785-4a108bdf5fa5 req-0b75b549-4b79-43b5-b134-2e75f1508050 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Refreshing instance network info cache due to event network-changed-c7810272-d139-4528-b358-19b623e1a34d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:21:52 compute-2 nova_compute[226829]: 2026-01-31 08:21:52.799 226833 DEBUG oslo_concurrency.lockutils [req-61dd5be0-071d-4edc-b785-4a108bdf5fa5 req-0b75b549-4b79-43b5-b134-2e75f1508050 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:21:52 compute-2 nova_compute[226829]: 2026-01-31 08:21:52.799 226833 DEBUG oslo_concurrency.lockutils [req-61dd5be0-071d-4edc-b785-4a108bdf5fa5 req-0b75b549-4b79-43b5-b134-2e75f1508050 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:21:52 compute-2 nova_compute[226829]: 2026-01-31 08:21:52.799 226833 DEBUG nova.network.neutron [req-61dd5be0-071d-4edc-b785-4a108bdf5fa5 req-0b75b549-4b79-43b5-b134-2e75f1508050 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Refreshing network info cache for port c7810272-d139-4528-b358-19b623e1a34d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:21:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2951126738' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:52 compute-2 ceph-mon[77282]: pgmap v2633: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.1 MiB/s wr, 152 op/s
Jan 31 08:21:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:21:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:53.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:21:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:53.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:53 compute-2 nova_compute[226829]: 2026-01-31 08:21:53.841 226833 DEBUG oslo_concurrency.lockutils [None req-ade44371-1e4b-40b1-9571-7dd9a63a1d95 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquiring lock "aa396f7d-4c1b-445e-807c-05107a729be4" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:53 compute-2 nova_compute[226829]: 2026-01-31 08:21:53.841 226833 DEBUG oslo_concurrency.lockutils [None req-ade44371-1e4b-40b1-9571-7dd9a63a1d95 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:53 compute-2 nova_compute[226829]: 2026-01-31 08:21:53.842 226833 INFO nova.compute.manager [None req-ade44371-1e4b-40b1-9571-7dd9a63a1d95 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Rebooting instance
Jan 31 08:21:53 compute-2 nova_compute[226829]: 2026-01-31 08:21:53.881 226833 DEBUG oslo_concurrency.lockutils [None req-ade44371-1e4b-40b1-9571-7dd9a63a1d95 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquiring lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:21:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3918858088' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:21:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3918858088' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:21:54 compute-2 nova_compute[226829]: 2026-01-31 08:21:54.750 226833 DEBUG nova.network.neutron [req-61dd5be0-071d-4edc-b785-4a108bdf5fa5 req-0b75b549-4b79-43b5-b134-2e75f1508050 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Updated VIF entry in instance network info cache for port c7810272-d139-4528-b358-19b623e1a34d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:21:54 compute-2 nova_compute[226829]: 2026-01-31 08:21:54.751 226833 DEBUG nova.network.neutron [req-61dd5be0-071d-4edc-b785-4a108bdf5fa5 req-0b75b549-4b79-43b5-b134-2e75f1508050 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Updating instance_info_cache with network_info: [{"id": "c7810272-d139-4528-b358-19b623e1a34d", "address": "fa:16:3e:d0:e7:7f", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7810272-d1", "ovs_interfaceid": "c7810272-d139-4528-b358-19b623e1a34d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:21:54 compute-2 nova_compute[226829]: 2026-01-31 08:21:54.780 226833 DEBUG oslo_concurrency.lockutils [req-61dd5be0-071d-4edc-b785-4a108bdf5fa5 req-0b75b549-4b79-43b5-b134-2e75f1508050 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:21:54 compute-2 nova_compute[226829]: 2026-01-31 08:21:54.781 226833 DEBUG oslo_concurrency.lockutils [None req-ade44371-1e4b-40b1-9571-7dd9a63a1d95 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquired lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:21:54 compute-2 nova_compute[226829]: 2026-01-31 08:21:54.782 226833 DEBUG nova.network.neutron [None req-ade44371-1e4b-40b1-9571-7dd9a63a1d95 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:21:55 compute-2 ceph-mon[77282]: pgmap v2634: 305 pgs: 305 active+clean; 634 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.3 MiB/s wr, 154 op/s
Jan 31 08:21:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:55.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:55 compute-2 sudo[295131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:21:55 compute-2 sudo[295131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:55 compute-2 sudo[295131]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:21:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:55.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:21:55 compute-2 sudo[295156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:21:55 compute-2 sudo[295156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:55 compute-2 sudo[295156]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:55 compute-2 sudo[295181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:21:55 compute-2 sudo[295181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:55 compute-2 sudo[295181]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:55 compute-2 sudo[295206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:21:55 compute-2 sudo[295206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:56 compute-2 ceph-mon[77282]: pgmap v2635: 305 pgs: 305 active+clean; 634 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 726 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Jan 31 08:21:56 compute-2 sudo[295206]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:56 compute-2 nova_compute[226829]: 2026-01-31 08:21:56.363 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:56 compute-2 nova_compute[226829]: 2026-01-31 08:21:56.550 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 08:21:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:21:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:21:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:21:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:21:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:21:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:21:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:21:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:57.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:21:57 compute-2 nova_compute[226829]: 2026-01-31 08:21:57.320 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:57 compute-2 nova_compute[226829]: 2026-01-31 08:21:57.320 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:21:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:57.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:57 compute-2 nova_compute[226829]: 2026-01-31 08:21:57.970 226833 DEBUG nova.network.neutron [None req-ade44371-1e4b-40b1-9571-7dd9a63a1d95 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Updating instance_info_cache with network_info: [{"id": "c7810272-d139-4528-b358-19b623e1a34d", "address": "fa:16:3e:d0:e7:7f", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7810272-d1", "ovs_interfaceid": "c7810272-d139-4528-b358-19b623e1a34d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:21:57 compute-2 nova_compute[226829]: 2026-01-31 08:21:57.995 226833 DEBUG oslo_concurrency.lockutils [None req-ade44371-1e4b-40b1-9571-7dd9a63a1d95 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Releasing lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:21:57 compute-2 nova_compute[226829]: 2026-01-31 08:21:57.998 226833 DEBUG nova.compute.manager [None req-ade44371-1e4b-40b1-9571-7dd9a63a1d95 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:21:58 compute-2 ceph-mon[77282]: pgmap v2636: 305 pgs: 305 active+clean; 634 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 337 KiB/s rd, 2.2 MiB/s wr, 77 op/s
Jan 31 08:21:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:21:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:59.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:21:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:21:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:21:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:59.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:22:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:00 compute-2 kernel: tapc7810272-d1 (unregistering): left promiscuous mode
Jan 31 08:22:00 compute-2 NetworkManager[48999]: <info>  [1769847720.4442] device (tapc7810272-d1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:22:00 compute-2 ovn_controller[133834]: 2026-01-31T08:22:00Z|00620|binding|INFO|Releasing lport c7810272-d139-4528-b358-19b623e1a34d from this chassis (sb_readonly=0)
Jan 31 08:22:00 compute-2 nova_compute[226829]: 2026-01-31 08:22:00.463 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:00 compute-2 ovn_controller[133834]: 2026-01-31T08:22:00Z|00621|binding|INFO|Setting lport c7810272-d139-4528-b358-19b623e1a34d down in Southbound
Jan 31 08:22:00 compute-2 ovn_controller[133834]: 2026-01-31T08:22:00Z|00622|binding|INFO|Removing iface tapc7810272-d1 ovn-installed in OVS
Jan 31 08:22:00 compute-2 nova_compute[226829]: 2026-01-31 08:22:00.468 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:00 compute-2 nova_compute[226829]: 2026-01-31 08:22:00.477 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:00 compute-2 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000094.scope: Deactivated successfully.
Jan 31 08:22:00 compute-2 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000094.scope: Consumed 15.312s CPU time.
Jan 31 08:22:00 compute-2 systemd-machined[195142]: Machine qemu-68-instance-00000094 terminated.
Jan 31 08:22:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:00.606 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d0:e7:7f 10.100.0.7'], port_security=['fa:16:3e:d0:e7:7f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'aa396f7d-4c1b-445e-807c-05107a729be4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '134c066ac92844ff853b216870fa8eed', 'neutron:revision_number': '5', 'neutron:security_group_ids': '33b20316-44ef-4271-9cab-250dd8d7e1f5 967ea74e-50db-4569-92ae-9b918e86440d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0bd5646e-2523-4ee9-a162-795050792e9d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=c7810272-d139-4528-b358-19b623e1a34d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:22:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:00.609 143841 INFO neutron.agent.ovn.metadata.agent [-] Port c7810272-d139-4528-b358-19b623e1a34d in datapath b8453b6a-05bd-4d59-86e9-a509416a9ef0 unbound from our chassis
Jan 31 08:22:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:00.611 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b8453b6a-05bd-4d59-86e9-a509416a9ef0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:22:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:00.614 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ed2c6d23-d34d-4644-bcf6-066f2b343520]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:00.616 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0 namespace which is not needed anymore
Jan 31 08:22:00 compute-2 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[294932]: [NOTICE]   (294936) : haproxy version is 2.8.14-c23fe91
Jan 31 08:22:00 compute-2 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[294932]: [NOTICE]   (294936) : path to executable is /usr/sbin/haproxy
Jan 31 08:22:00 compute-2 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[294932]: [WARNING]  (294936) : Exiting Master process...
Jan 31 08:22:00 compute-2 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[294932]: [ALERT]    (294936) : Current worker (294938) exited with code 143 (Terminated)
Jan 31 08:22:00 compute-2 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[294932]: [WARNING]  (294936) : All workers exited. Exiting... (0)
Jan 31 08:22:00 compute-2 systemd[1]: libpod-2049d71ee55d29ff10ccea72d0b33a6296de1765ff9fdbdbc3a460ea318f40ce.scope: Deactivated successfully.
Jan 31 08:22:00 compute-2 podman[295292]: 2026-01-31 08:22:00.738639269 +0000 UTC m=+0.046511361 container died 2049d71ee55d29ff10ccea72d0b33a6296de1765ff9fdbdbc3a460ea318f40ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:22:00 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2049d71ee55d29ff10ccea72d0b33a6296de1765ff9fdbdbc3a460ea318f40ce-userdata-shm.mount: Deactivated successfully.
Jan 31 08:22:00 compute-2 systemd[1]: var-lib-containers-storage-overlay-6d4a2c821fc8139b63b62a18180193cadc61e12be3babef0fbca1dbfb9e7496c-merged.mount: Deactivated successfully.
Jan 31 08:22:00 compute-2 podman[295292]: 2026-01-31 08:22:00.818315476 +0000 UTC m=+0.126187568 container cleanup 2049d71ee55d29ff10ccea72d0b33a6296de1765ff9fdbdbc3a460ea318f40ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:22:00 compute-2 systemd[1]: libpod-conmon-2049d71ee55d29ff10ccea72d0b33a6296de1765ff9fdbdbc3a460ea318f40ce.scope: Deactivated successfully.
Jan 31 08:22:00 compute-2 podman[295330]: 2026-01-31 08:22:00.864640621 +0000 UTC m=+0.033006875 container remove 2049d71ee55d29ff10ccea72d0b33a6296de1765ff9fdbdbc3a460ea318f40ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:22:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:00.868 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[67afa036-370c-4e67-be0b-6eef0980ed56]: (4, ('Sat Jan 31 08:22:00 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0 (2049d71ee55d29ff10ccea72d0b33a6296de1765ff9fdbdbc3a460ea318f40ce)\n2049d71ee55d29ff10ccea72d0b33a6296de1765ff9fdbdbc3a460ea318f40ce\nSat Jan 31 08:22:00 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0 (2049d71ee55d29ff10ccea72d0b33a6296de1765ff9fdbdbc3a460ea318f40ce)\n2049d71ee55d29ff10ccea72d0b33a6296de1765ff9fdbdbc3a460ea318f40ce\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:00.870 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[39583670-7749-44ba-900b-5c28eaeee88f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:00.871 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8453b6a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:22:00 compute-2 kernel: tapb8453b6a-00: left promiscuous mode
Jan 31 08:22:00 compute-2 nova_compute[226829]: 2026-01-31 08:22:00.873 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:00 compute-2 nova_compute[226829]: 2026-01-31 08:22:00.881 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:00.883 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7adff1ea-7224-49d5-b97e-a8c1fd8f345e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:00.907 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[76184890-8247-4951-82f4-c786b3a1683d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:00.909 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[eeec120b-70bf-4580-ac61-e3a0a85c70ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:00.920 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[aea183da-3e1f-4dc0-8a23-73cd51572d09]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 794071, 'reachable_time': 37183, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295349, 'error': None, 'target': 'ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:00 compute-2 systemd[1]: run-netns-ovnmeta\x2db8453b6a\x2d05bd\x2d4d59\x2d86e9\x2da509416a9ef0.mount: Deactivated successfully.
Jan 31 08:22:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:00.925 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:22:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:00.925 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[cc7e21ca-a6e0-460a-8231-c5097c7984a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:01 compute-2 ceph-mon[77282]: pgmap v2637: 305 pgs: 305 active+clean; 634 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 334 KiB/s rd, 2.2 MiB/s wr, 74 op/s
Jan 31 08:22:01 compute-2 nova_compute[226829]: 2026-01-31 08:22:01.127 226833 INFO nova.virt.libvirt.driver [None req-ade44371-1e4b-40b1-9571-7dd9a63a1d95 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Instance shutdown successfully.
Jan 31 08:22:01 compute-2 kernel: tapc7810272-d1: entered promiscuous mode
Jan 31 08:22:01 compute-2 systemd-udevd[295267]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:22:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:01 compute-2 NetworkManager[48999]: <info>  [1769847721.2268] manager: (tapc7810272-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/308)
Jan 31 08:22:01 compute-2 ovn_controller[133834]: 2026-01-31T08:22:01Z|00623|binding|INFO|Claiming lport c7810272-d139-4528-b358-19b623e1a34d for this chassis.
Jan 31 08:22:01 compute-2 ovn_controller[133834]: 2026-01-31T08:22:01Z|00624|binding|INFO|c7810272-d139-4528-b358-19b623e1a34d: Claiming fa:16:3e:d0:e7:7f 10.100.0.7
Jan 31 08:22:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.003000080s ======
Jan 31 08:22:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:01.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Jan 31 08:22:01 compute-2 nova_compute[226829]: 2026-01-31 08:22:01.227 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:01 compute-2 NetworkManager[48999]: <info>  [1769847721.2302] device (tapc7810272-d1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:22:01 compute-2 NetworkManager[48999]: <info>  [1769847721.2314] device (tapc7810272-d1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:22:01 compute-2 nova_compute[226829]: 2026-01-31 08:22:01.230 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:01 compute-2 ovn_controller[133834]: 2026-01-31T08:22:01Z|00625|binding|INFO|Setting lport c7810272-d139-4528-b358-19b623e1a34d ovn-installed in OVS
Jan 31 08:22:01 compute-2 nova_compute[226829]: 2026-01-31 08:22:01.241 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:01 compute-2 nova_compute[226829]: 2026-01-31 08:22:01.243 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:01 compute-2 systemd-machined[195142]: New machine qemu-69-instance-00000094.
Jan 31 08:22:01 compute-2 systemd[1]: Started Virtual Machine qemu-69-instance-00000094.
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:01.333 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d0:e7:7f 10.100.0.7'], port_security=['fa:16:3e:d0:e7:7f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'aa396f7d-4c1b-445e-807c-05107a729be4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '134c066ac92844ff853b216870fa8eed', 'neutron:revision_number': '5', 'neutron:security_group_ids': '33b20316-44ef-4271-9cab-250dd8d7e1f5 967ea74e-50db-4569-92ae-9b918e86440d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0bd5646e-2523-4ee9-a162-795050792e9d, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=c7810272-d139-4528-b358-19b623e1a34d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:22:01 compute-2 ovn_controller[133834]: 2026-01-31T08:22:01Z|00626|binding|INFO|Setting lport c7810272-d139-4528-b358-19b623e1a34d up in Southbound
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:01.334 143841 INFO neutron.agent.ovn.metadata.agent [-] Port c7810272-d139-4528-b358-19b623e1a34d in datapath b8453b6a-05bd-4d59-86e9-a509416a9ef0 bound to our chassis
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:01.336 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b8453b6a-05bd-4d59-86e9-a509416a9ef0
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:01.343 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9ea1e1a8-2260-4fad-9eac-00c4e1b5ea53]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:01.343 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb8453b6a-01 in ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:01.345 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb8453b6a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:01.345 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[63f97c54-802f-4e3a-a93b-9370ff263b98]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:01.346 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc77c83-bc08-4d87-a6a4-daa31cba7e16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:01.356 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[a2abcd63-726a-409b-89da-78bf608329c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:01 compute-2 nova_compute[226829]: 2026-01-31 08:22:01.366 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:01.366 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[02d09da1-3a99-4ba7-8e27-50aa994c9bfb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:01.394 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[61a6912c-6680-4f12-9ed5-83c7f7dde7ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:01 compute-2 NetworkManager[48999]: <info>  [1769847721.4037] manager: (tapb8453b6a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/309)
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:01.399 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9721ee20-7fef-422a-93df-4772cca5a1a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:01.433 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[622275a6-d691-463c-80f2-488aee1c92e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:01.438 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[2caa26fa-cf6d-4665-af8f-2b8119507753]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:01 compute-2 NetworkManager[48999]: <info>  [1769847721.4562] device (tapb8453b6a-00): carrier: link connected
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:01.461 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[2f0a0b96-5a69-44cb-abb9-93b4389a5f2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:01.474 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[97fd4fcb-381a-4f1a-9a04-9efd97712e5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8453b6a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0a:cc:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 194], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 797392, 'reachable_time': 19031, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295394, 'error': None, 'target': 'ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:01.485 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ba05de5b-841b-4e59-83ca-530ee3f7112c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0a:cc04'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 797392, 'tstamp': 797392}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295395, 'error': None, 'target': 'ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:01.500 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a3e5be80-b776-4688-ad7d-e98526df1970]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8453b6a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0a:cc:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 194], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 797392, 'reachable_time': 19031, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295396, 'error': None, 'target': 'ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:01.523 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5f24c209-1bd8-492c-a514-9eddd793c0fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:01 compute-2 nova_compute[226829]: 2026-01-31 08:22:01.552 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:01.565 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0c8c214f-bd4d-41cf-9ad5-97a529121074]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:01.568 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8453b6a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:01.568 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:01.569 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8453b6a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:22:01 compute-2 nova_compute[226829]: 2026-01-31 08:22:01.571 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:01 compute-2 kernel: tapb8453b6a-00: entered promiscuous mode
Jan 31 08:22:01 compute-2 NetworkManager[48999]: <info>  [1769847721.5721] manager: (tapb8453b6a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/310)
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:01.575 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb8453b6a-00, col_values=(('external_ids', {'iface-id': 'eb4259dc-1b35-4b46-af47-bdd24739342f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:22:01 compute-2 nova_compute[226829]: 2026-01-31 08:22:01.576 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:01 compute-2 ovn_controller[133834]: 2026-01-31T08:22:01Z|00627|binding|INFO|Releasing lport eb4259dc-1b35-4b46-af47-bdd24739342f from this chassis (sb_readonly=0)
Jan 31 08:22:01 compute-2 nova_compute[226829]: 2026-01-31 08:22:01.582 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:01.584 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b8453b6a-05bd-4d59-86e9-a509416a9ef0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b8453b6a-05bd-4d59-86e9-a509416a9ef0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:01.586 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[38486b27-6753-4c2b-ab61-30266a870308]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:01.587 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-b8453b6a-05bd-4d59-86e9-a509416a9ef0
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/b8453b6a-05bd-4d59-86e9-a509416a9ef0.pid.haproxy
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID b8453b6a-05bd-4d59-86e9-a509416a9ef0
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:22:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:01.588 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'env', 'PROCESS_TAG=haproxy-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b8453b6a-05bd-4d59-86e9-a509416a9ef0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:22:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:01.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:01 compute-2 nova_compute[226829]: 2026-01-31 08:22:01.930 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Removed pending event for aa396f7d-4c1b-445e-807c-05107a729be4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 08:22:01 compute-2 nova_compute[226829]: 2026-01-31 08:22:01.932 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847721.9304838, aa396f7d-4c1b-445e-807c-05107a729be4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:22:01 compute-2 nova_compute[226829]: 2026-01-31 08:22:01.932 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] VM Resumed (Lifecycle Event)
Jan 31 08:22:01 compute-2 nova_compute[226829]: 2026-01-31 08:22:01.936 226833 INFO nova.virt.libvirt.driver [-] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Instance running successfully.
Jan 31 08:22:01 compute-2 nova_compute[226829]: 2026-01-31 08:22:01.937 226833 INFO nova.virt.libvirt.driver [None req-ade44371-1e4b-40b1-9571-7dd9a63a1d95 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Instance soft rebooted successfully.
Jan 31 08:22:01 compute-2 nova_compute[226829]: 2026-01-31 08:22:01.937 226833 DEBUG nova.compute.manager [None req-ade44371-1e4b-40b1-9571-7dd9a63a1d95 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:22:01 compute-2 podman[295487]: 2026-01-31 08:22:01.956314202 +0000 UTC m=+0.043993082 container create 66db40dc27e0866d4e2577f85057e4832a61202c7fdee56cc06d07b451667955 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:22:01 compute-2 systemd[1]: Started libpod-conmon-66db40dc27e0866d4e2577f85057e4832a61202c7fdee56cc06d07b451667955.scope.
Jan 31 08:22:02 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:22:02 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a437e9b7e50a39709876ed53a23d04e408cac5a30051b6a571b94b6166200870/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:22:02 compute-2 podman[295487]: 2026-01-31 08:22:01.932353163 +0000 UTC m=+0.020032043 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:22:02 compute-2 podman[295487]: 2026-01-31 08:22:02.029777432 +0000 UTC m=+0.117456312 container init 66db40dc27e0866d4e2577f85057e4832a61202c7fdee56cc06d07b451667955 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 08:22:02 compute-2 podman[295487]: 2026-01-31 08:22:02.035848226 +0000 UTC m=+0.123527116 container start 66db40dc27e0866d4e2577f85057e4832a61202c7fdee56cc06d07b451667955 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:22:02 compute-2 nova_compute[226829]: 2026-01-31 08:22:02.048 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:22:02 compute-2 nova_compute[226829]: 2026-01-31 08:22:02.051 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:22:02 compute-2 nova_compute[226829]: 2026-01-31 08:22:02.056 226833 DEBUG oslo_concurrency.lockutils [None req-ade44371-1e4b-40b1-9571-7dd9a63a1d95 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 8.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:02 compute-2 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[295503]: [NOTICE]   (295507) : New worker (295509) forked
Jan 31 08:22:02 compute-2 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[295503]: [NOTICE]   (295507) : Loading success.
Jan 31 08:22:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3300814650' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:22:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1469410732' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:22:02 compute-2 nova_compute[226829]: 2026-01-31 08:22:02.078 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847721.9313078, aa396f7d-4c1b-445e-807c-05107a729be4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:22:02 compute-2 nova_compute[226829]: 2026-01-31 08:22:02.079 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] VM Started (Lifecycle Event)
Jan 31 08:22:02 compute-2 nova_compute[226829]: 2026-01-31 08:22:02.098 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:22:02 compute-2 nova_compute[226829]: 2026-01-31 08:22:02.102 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:22:02 compute-2 nova_compute[226829]: 2026-01-31 08:22:02.876 226833 DEBUG nova.compute.manager [req-9fadcab2-4b9d-48d0-a607-c29065461356 req-05601637-31af-4662-9f6b-9d392c81de2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Received event network-vif-unplugged-c7810272-d139-4528-b358-19b623e1a34d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:22:02 compute-2 nova_compute[226829]: 2026-01-31 08:22:02.877 226833 DEBUG oslo_concurrency.lockutils [req-9fadcab2-4b9d-48d0-a607-c29065461356 req-05601637-31af-4662-9f6b-9d392c81de2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:02 compute-2 nova_compute[226829]: 2026-01-31 08:22:02.877 226833 DEBUG oslo_concurrency.lockutils [req-9fadcab2-4b9d-48d0-a607-c29065461356 req-05601637-31af-4662-9f6b-9d392c81de2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:02 compute-2 nova_compute[226829]: 2026-01-31 08:22:02.877 226833 DEBUG oslo_concurrency.lockutils [req-9fadcab2-4b9d-48d0-a607-c29065461356 req-05601637-31af-4662-9f6b-9d392c81de2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:02 compute-2 nova_compute[226829]: 2026-01-31 08:22:02.877 226833 DEBUG nova.compute.manager [req-9fadcab2-4b9d-48d0-a607-c29065461356 req-05601637-31af-4662-9f6b-9d392c81de2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] No waiting events found dispatching network-vif-unplugged-c7810272-d139-4528-b358-19b623e1a34d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:22:02 compute-2 nova_compute[226829]: 2026-01-31 08:22:02.878 226833 WARNING nova.compute.manager [req-9fadcab2-4b9d-48d0-a607-c29065461356 req-05601637-31af-4662-9f6b-9d392c81de2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Received unexpected event network-vif-unplugged-c7810272-d139-4528-b358-19b623e1a34d for instance with vm_state active and task_state None.
Jan 31 08:22:02 compute-2 sudo[295518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:22:02 compute-2 sudo[295518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:02 compute-2 sudo[295518]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:02 compute-2 sudo[295543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:22:02 compute-2 sudo[295543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:02 compute-2 sudo[295543]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:03 compute-2 ceph-mon[77282]: pgmap v2638: 305 pgs: 305 active+clean; 640 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 148 KiB/s rd, 1.2 MiB/s wr, 53 op/s
Jan 31 08:22:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3061871491' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:22:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:22:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:22:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:03.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:03.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:04 compute-2 ceph-mon[77282]: pgmap v2639: 305 pgs: 305 active+clean; 651 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 897 KiB/s wr, 50 op/s
Jan 31 08:22:04 compute-2 nova_compute[226829]: 2026-01-31 08:22:04.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:04 compute-2 nova_compute[226829]: 2026-01-31 08:22:04.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:22:04 compute-2 nova_compute[226829]: 2026-01-31 08:22:04.512 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:22:04 compute-2 nova_compute[226829]: 2026-01-31 08:22:04.970 226833 DEBUG nova.compute.manager [req-02aad357-f84e-4630-863f-af238dc7c52f req-756f2bbd-1f86-4c74-af52-ffc5f4d535c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Received event network-vif-plugged-c7810272-d139-4528-b358-19b623e1a34d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:22:04 compute-2 nova_compute[226829]: 2026-01-31 08:22:04.971 226833 DEBUG oslo_concurrency.lockutils [req-02aad357-f84e-4630-863f-af238dc7c52f req-756f2bbd-1f86-4c74-af52-ffc5f4d535c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:04 compute-2 nova_compute[226829]: 2026-01-31 08:22:04.972 226833 DEBUG oslo_concurrency.lockutils [req-02aad357-f84e-4630-863f-af238dc7c52f req-756f2bbd-1f86-4c74-af52-ffc5f4d535c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:04 compute-2 nova_compute[226829]: 2026-01-31 08:22:04.972 226833 DEBUG oslo_concurrency.lockutils [req-02aad357-f84e-4630-863f-af238dc7c52f req-756f2bbd-1f86-4c74-af52-ffc5f4d535c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:04 compute-2 nova_compute[226829]: 2026-01-31 08:22:04.973 226833 DEBUG nova.compute.manager [req-02aad357-f84e-4630-863f-af238dc7c52f req-756f2bbd-1f86-4c74-af52-ffc5f4d535c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] No waiting events found dispatching network-vif-plugged-c7810272-d139-4528-b358-19b623e1a34d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:22:04 compute-2 nova_compute[226829]: 2026-01-31 08:22:04.973 226833 WARNING nova.compute.manager [req-02aad357-f84e-4630-863f-af238dc7c52f req-756f2bbd-1f86-4c74-af52-ffc5f4d535c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Received unexpected event network-vif-plugged-c7810272-d139-4528-b358-19b623e1a34d for instance with vm_state active and task_state None.
Jan 31 08:22:04 compute-2 nova_compute[226829]: 2026-01-31 08:22:04.973 226833 DEBUG nova.compute.manager [req-02aad357-f84e-4630-863f-af238dc7c52f req-756f2bbd-1f86-4c74-af52-ffc5f4d535c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Received event network-vif-plugged-c7810272-d139-4528-b358-19b623e1a34d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:22:04 compute-2 nova_compute[226829]: 2026-01-31 08:22:04.974 226833 DEBUG oslo_concurrency.lockutils [req-02aad357-f84e-4630-863f-af238dc7c52f req-756f2bbd-1f86-4c74-af52-ffc5f4d535c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:04 compute-2 nova_compute[226829]: 2026-01-31 08:22:04.974 226833 DEBUG oslo_concurrency.lockutils [req-02aad357-f84e-4630-863f-af238dc7c52f req-756f2bbd-1f86-4c74-af52-ffc5f4d535c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:04 compute-2 nova_compute[226829]: 2026-01-31 08:22:04.975 226833 DEBUG oslo_concurrency.lockutils [req-02aad357-f84e-4630-863f-af238dc7c52f req-756f2bbd-1f86-4c74-af52-ffc5f4d535c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:04 compute-2 nova_compute[226829]: 2026-01-31 08:22:04.975 226833 DEBUG nova.compute.manager [req-02aad357-f84e-4630-863f-af238dc7c52f req-756f2bbd-1f86-4c74-af52-ffc5f4d535c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] No waiting events found dispatching network-vif-plugged-c7810272-d139-4528-b358-19b623e1a34d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:22:04 compute-2 nova_compute[226829]: 2026-01-31 08:22:04.976 226833 WARNING nova.compute.manager [req-02aad357-f84e-4630-863f-af238dc7c52f req-756f2bbd-1f86-4c74-af52-ffc5f4d535c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Received unexpected event network-vif-plugged-c7810272-d139-4528-b358-19b623e1a34d for instance with vm_state active and task_state None.
Jan 31 08:22:04 compute-2 nova_compute[226829]: 2026-01-31 08:22:04.976 226833 DEBUG nova.compute.manager [req-02aad357-f84e-4630-863f-af238dc7c52f req-756f2bbd-1f86-4c74-af52-ffc5f4d535c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Received event network-vif-plugged-c7810272-d139-4528-b358-19b623e1a34d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:22:04 compute-2 nova_compute[226829]: 2026-01-31 08:22:04.976 226833 DEBUG oslo_concurrency.lockutils [req-02aad357-f84e-4630-863f-af238dc7c52f req-756f2bbd-1f86-4c74-af52-ffc5f4d535c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:04 compute-2 nova_compute[226829]: 2026-01-31 08:22:04.977 226833 DEBUG oslo_concurrency.lockutils [req-02aad357-f84e-4630-863f-af238dc7c52f req-756f2bbd-1f86-4c74-af52-ffc5f4d535c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:04 compute-2 nova_compute[226829]: 2026-01-31 08:22:04.977 226833 DEBUG oslo_concurrency.lockutils [req-02aad357-f84e-4630-863f-af238dc7c52f req-756f2bbd-1f86-4c74-af52-ffc5f4d535c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:04 compute-2 nova_compute[226829]: 2026-01-31 08:22:04.978 226833 DEBUG nova.compute.manager [req-02aad357-f84e-4630-863f-af238dc7c52f req-756f2bbd-1f86-4c74-af52-ffc5f4d535c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] No waiting events found dispatching network-vif-plugged-c7810272-d139-4528-b358-19b623e1a34d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:22:04 compute-2 nova_compute[226829]: 2026-01-31 08:22:04.978 226833 WARNING nova.compute.manager [req-02aad357-f84e-4630-863f-af238dc7c52f req-756f2bbd-1f86-4c74-af52-ffc5f4d535c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Received unexpected event network-vif-plugged-c7810272-d139-4528-b358-19b623e1a34d for instance with vm_state active and task_state None.
Jan 31 08:22:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:05.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:05 compute-2 nova_compute[226829]: 2026-01-31 08:22:05.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:05 compute-2 nova_compute[226829]: 2026-01-31 08:22:05.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:22:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:05.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:06 compute-2 nova_compute[226829]: 2026-01-31 08:22:06.368 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:06 compute-2 nova_compute[226829]: 2026-01-31 08:22:06.554 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:06.898 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:06.899 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:06.900 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:07 compute-2 ceph-mon[77282]: pgmap v2640: 305 pgs: 305 active+clean; 681 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #121. Immutable memtables: 0.
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:22:07.106453) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 121
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847727106552, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 1663, "num_deletes": 254, "total_data_size": 3458312, "memory_usage": 3504400, "flush_reason": "Manual Compaction"}
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #122: started
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847727123826, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 122, "file_size": 2267714, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60909, "largest_seqno": 62566, "table_properties": {"data_size": 2260799, "index_size": 3922, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14863, "raw_average_key_size": 19, "raw_value_size": 2246497, "raw_average_value_size": 2952, "num_data_blocks": 170, "num_entries": 761, "num_filter_entries": 761, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847611, "oldest_key_time": 1769847611, "file_creation_time": 1769847727, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 17419 microseconds, and 5011 cpu microseconds.
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:22:07.123880) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #122: 2267714 bytes OK
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:22:07.123897) [db/memtable_list.cc:519] [default] Level-0 commit table #122 started
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:22:07.124845) [db/memtable_list.cc:722] [default] Level-0 commit table #122: memtable #1 done
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:22:07.124858) EVENT_LOG_v1 {"time_micros": 1769847727124854, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:22:07.124875) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 3450629, prev total WAL file size 3450629, number of live WAL files 2.
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000118.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:22:07.125569) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323531' seq:72057594037927935, type:22 .. '6B7600353032' seq:0, type:0; will stop at (end)
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [122(2214KB)], [120(11MB)]
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847727125649, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [122], "files_L6": [120], "score": -1, "input_data_size": 14767455, "oldest_snapshot_seqno": -1}
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #123: 8696 keys, 13659858 bytes, temperature: kUnknown
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847727204446, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 123, "file_size": 13659858, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13600317, "index_size": 36709, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21765, "raw_key_size": 227368, "raw_average_key_size": 26, "raw_value_size": 13443992, "raw_average_value_size": 1545, "num_data_blocks": 1425, "num_entries": 8696, "num_filter_entries": 8696, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769847727, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 123, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:22:07.207832) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 13659858 bytes
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:22:07.209335) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.4 rd, 166.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 11.9 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(12.5) write-amplify(6.0) OK, records in: 9225, records dropped: 529 output_compression: NoCompression
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:22:07.209363) EVENT_LOG_v1 {"time_micros": 1769847727209350, "job": 76, "event": "compaction_finished", "compaction_time_micros": 81850, "compaction_time_cpu_micros": 26006, "output_level": 6, "num_output_files": 1, "total_output_size": 13659858, "num_input_records": 9225, "num_output_records": 8696, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847727210098, "job": 76, "event": "table_file_deletion", "file_number": 122}
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000120.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847727211702, "job": 76, "event": "table_file_deletion", "file_number": 120}
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:22:07.125388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:22:07.211887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:22:07.211891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:22:07.211893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:22:07.211894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:22:07 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:22:07.211898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:22:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:07.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:07.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:08 compute-2 ceph-mon[77282]: pgmap v2641: 305 pgs: 305 active+clean; 681 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Jan 31 08:22:08 compute-2 sudo[295571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:22:08 compute-2 sudo[295571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:08 compute-2 sudo[295571]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:08 compute-2 sudo[295596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:22:08 compute-2 sudo[295596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:08 compute-2 sudo[295596]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:09.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:22:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:09.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:22:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:10 compute-2 podman[295622]: 2026-01-31 08:22:10.254818096 +0000 UTC m=+0.124573675 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 08:22:10 compute-2 nova_compute[226829]: 2026-01-31 08:22:10.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:11 compute-2 ceph-mon[77282]: pgmap v2642: 305 pgs: 305 active+clean; 681 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 166 op/s
Jan 31 08:22:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:11.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:11 compute-2 nova_compute[226829]: 2026-01-31 08:22:11.371 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:11 compute-2 nova_compute[226829]: 2026-01-31 08:22:11.555 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:22:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:11.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:22:12 compute-2 ceph-mon[77282]: pgmap v2643: 305 pgs: 305 active+clean; 681 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 167 op/s
Jan 31 08:22:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:13.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:13.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:15 compute-2 ceph-mon[77282]: pgmap v2644: 305 pgs: 305 active+clean; 669 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.6 MiB/s wr, 169 op/s
Jan 31 08:22:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:22:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:15.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:22:15 compute-2 ovn_controller[133834]: 2026-01-31T08:22:15Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d0:e7:7f 10.100.0.7
Jan 31 08:22:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:22:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:15.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:22:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/845999303' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:22:16 compute-2 ceph-mon[77282]: pgmap v2645: 305 pgs: 305 active+clean; 634 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 1.2 MiB/s wr, 273 op/s
Jan 31 08:22:16 compute-2 nova_compute[226829]: 2026-01-31 08:22:16.374 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:16 compute-2 nova_compute[226829]: 2026-01-31 08:22:16.558 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:22:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:17.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:22:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:22:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:17.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:22:19 compute-2 ceph-mon[77282]: pgmap v2646: 305 pgs: 305 active+clean; 634 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 23 KiB/s wr, 210 op/s
Jan 31 08:22:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:19.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:19.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:20 compute-2 ceph-mon[77282]: pgmap v2647: 305 pgs: 305 active+clean; 634 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 48 KiB/s wr, 198 op/s
Jan 31 08:22:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:21 compute-2 nova_compute[226829]: 2026-01-31 08:22:21.109 226833 DEBUG nova.compute.manager [req-7bf7953e-9127-4242-8cd4-d58c88ad07ec req-74a7e39d-889d-450f-8a71-dc5ab13aea30 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Received event network-changed-c7810272-d139-4528-b358-19b623e1a34d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:22:21 compute-2 nova_compute[226829]: 2026-01-31 08:22:21.110 226833 DEBUG nova.compute.manager [req-7bf7953e-9127-4242-8cd4-d58c88ad07ec req-74a7e39d-889d-450f-8a71-dc5ab13aea30 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Refreshing instance network info cache due to event network-changed-c7810272-d139-4528-b358-19b623e1a34d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:22:21 compute-2 nova_compute[226829]: 2026-01-31 08:22:21.110 226833 DEBUG oslo_concurrency.lockutils [req-7bf7953e-9127-4242-8cd4-d58c88ad07ec req-74a7e39d-889d-450f-8a71-dc5ab13aea30 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:22:21 compute-2 nova_compute[226829]: 2026-01-31 08:22:21.111 226833 DEBUG oslo_concurrency.lockutils [req-7bf7953e-9127-4242-8cd4-d58c88ad07ec req-74a7e39d-889d-450f-8a71-dc5ab13aea30 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:22:21 compute-2 nova_compute[226829]: 2026-01-31 08:22:21.111 226833 DEBUG nova.network.neutron [req-7bf7953e-9127-4242-8cd4-d58c88ad07ec req-74a7e39d-889d-450f-8a71-dc5ab13aea30 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Refreshing network info cache for port c7810272-d139-4528-b358-19b623e1a34d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:22:21 compute-2 podman[295654]: 2026-01-31 08:22:21.195287301 +0000 UTC m=+0.071639710 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:22:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:21.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:21 compute-2 nova_compute[226829]: 2026-01-31 08:22:21.378 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:21 compute-2 nova_compute[226829]: 2026-01-31 08:22:21.561 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:22:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:21.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:22:23 compute-2 ceph-mon[77282]: pgmap v2648: 305 pgs: 305 active+clean; 636 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 42 KiB/s wr, 157 op/s
Jan 31 08:22:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:23.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:23.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:23 compute-2 nova_compute[226829]: 2026-01-31 08:22:23.887 226833 DEBUG nova.network.neutron [req-7bf7953e-9127-4242-8cd4-d58c88ad07ec req-74a7e39d-889d-450f-8a71-dc5ab13aea30 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Updated VIF entry in instance network info cache for port c7810272-d139-4528-b358-19b623e1a34d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:22:23 compute-2 nova_compute[226829]: 2026-01-31 08:22:23.888 226833 DEBUG nova.network.neutron [req-7bf7953e-9127-4242-8cd4-d58c88ad07ec req-74a7e39d-889d-450f-8a71-dc5ab13aea30 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Updating instance_info_cache with network_info: [{"id": "c7810272-d139-4528-b358-19b623e1a34d", "address": "fa:16:3e:d0:e7:7f", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7810272-d1", "ovs_interfaceid": "c7810272-d139-4528-b358-19b623e1a34d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:22:23 compute-2 nova_compute[226829]: 2026-01-31 08:22:23.950 226833 DEBUG oslo_concurrency.lockutils [req-7bf7953e-9127-4242-8cd4-d58c88ad07ec req-74a7e39d-889d-450f-8a71-dc5ab13aea30 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:22:24 compute-2 ceph-mon[77282]: pgmap v2649: 305 pgs: 305 active+clean; 636 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 43 KiB/s wr, 153 op/s
Jan 31 08:22:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:25.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:25.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:26 compute-2 nova_compute[226829]: 2026-01-31 08:22:26.380 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:26 compute-2 nova_compute[226829]: 2026-01-31 08:22:26.563 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:27 compute-2 ceph-mon[77282]: pgmap v2650: 305 pgs: 305 active+clean; 636 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 45 KiB/s wr, 174 op/s
Jan 31 08:22:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:22:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:27.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:22:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:27.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:28 compute-2 ceph-mon[77282]: pgmap v2651: 305 pgs: 305 active+clean; 636 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 831 KiB/s rd, 43 KiB/s wr, 70 op/s
Jan 31 08:22:28 compute-2 sudo[295678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:22:28 compute-2 sudo[295678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:28 compute-2 sudo[295678]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:28 compute-2 sudo[295703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:22:28 compute-2 sudo[295703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:28 compute-2 sudo[295703]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:22:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:29.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:22:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:29.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:30 compute-2 ceph-mon[77282]: pgmap v2652: 305 pgs: 305 active+clean; 636 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 725 KiB/s rd, 57 KiB/s wr, 65 op/s
Jan 31 08:22:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:31.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:31 compute-2 nova_compute[226829]: 2026-01-31 08:22:31.431 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:31 compute-2 nova_compute[226829]: 2026-01-31 08:22:31.565 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:31.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:32 compute-2 nova_compute[226829]: 2026-01-31 08:22:32.502 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:32 compute-2 nova_compute[226829]: 2026-01-31 08:22:32.720 226833 DEBUG nova.compute.manager [req-dc028e1f-644e-4552-86bd-c559c05c053e req-12c77c83-788c-4aa8-9417-7c1d05d8b98e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Received event network-changed-c7810272-d139-4528-b358-19b623e1a34d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:22:32 compute-2 nova_compute[226829]: 2026-01-31 08:22:32.720 226833 DEBUG nova.compute.manager [req-dc028e1f-644e-4552-86bd-c559c05c053e req-12c77c83-788c-4aa8-9417-7c1d05d8b98e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Refreshing instance network info cache due to event network-changed-c7810272-d139-4528-b358-19b623e1a34d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:22:32 compute-2 nova_compute[226829]: 2026-01-31 08:22:32.720 226833 DEBUG oslo_concurrency.lockutils [req-dc028e1f-644e-4552-86bd-c559c05c053e req-12c77c83-788c-4aa8-9417-7c1d05d8b98e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:22:32 compute-2 nova_compute[226829]: 2026-01-31 08:22:32.720 226833 DEBUG oslo_concurrency.lockutils [req-dc028e1f-644e-4552-86bd-c559c05c053e req-12c77c83-788c-4aa8-9417-7c1d05d8b98e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:22:32 compute-2 nova_compute[226829]: 2026-01-31 08:22:32.720 226833 DEBUG nova.network.neutron [req-dc028e1f-644e-4552-86bd-c559c05c053e req-12c77c83-788c-4aa8-9417-7c1d05d8b98e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Refreshing network info cache for port c7810272-d139-4528-b358-19b623e1a34d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:22:33 compute-2 ceph-mon[77282]: pgmap v2653: 305 pgs: 305 active+clean; 638 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 534 KiB/s rd, 33 KiB/s wr, 50 op/s
Jan 31 08:22:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:33.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:33.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/235625692' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:22:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2068840843' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:22:34 compute-2 ceph-mon[77282]: pgmap v2654: 305 pgs: 305 active+clean; 638 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 534 KiB/s rd, 33 KiB/s wr, 49 op/s
Jan 31 08:22:34 compute-2 ceph-mgr[77635]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3465938080
Jan 31 08:22:34 compute-2 nova_compute[226829]: 2026-01-31 08:22:34.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:34 compute-2 nova_compute[226829]: 2026-01-31 08:22:34.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:22:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:34.536 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:22:34 compute-2 nova_compute[226829]: 2026-01-31 08:22:34.537 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:34.539 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:22:35 compute-2 nova_compute[226829]: 2026-01-31 08:22:35.004 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:22:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:35.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:35 compute-2 nova_compute[226829]: 2026-01-31 08:22:35.638 226833 DEBUG nova.network.neutron [req-dc028e1f-644e-4552-86bd-c559c05c053e req-12c77c83-788c-4aa8-9417-7c1d05d8b98e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Updated VIF entry in instance network info cache for port c7810272-d139-4528-b358-19b623e1a34d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:22:35 compute-2 nova_compute[226829]: 2026-01-31 08:22:35.639 226833 DEBUG nova.network.neutron [req-dc028e1f-644e-4552-86bd-c559c05c053e req-12c77c83-788c-4aa8-9417-7c1d05d8b98e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Updating instance_info_cache with network_info: [{"id": "c7810272-d139-4528-b358-19b623e1a34d", "address": "fa:16:3e:d0:e7:7f", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7810272-d1", "ovs_interfaceid": "c7810272-d139-4528-b358-19b623e1a34d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:22:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:22:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:35.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:22:35 compute-2 nova_compute[226829]: 2026-01-31 08:22:35.940 226833 DEBUG oslo_concurrency.lockutils [req-dc028e1f-644e-4552-86bd-c559c05c053e req-12c77c83-788c-4aa8-9417-7c1d05d8b98e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:22:35 compute-2 nova_compute[226829]: 2026-01-31 08:22:35.940 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:22:35 compute-2 nova_compute[226829]: 2026-01-31 08:22:35.940 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:22:36 compute-2 ceph-mon[77282]: pgmap v2655: 305 pgs: 305 active+clean; 638 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 825 KiB/s rd, 33 KiB/s wr, 62 op/s
Jan 31 08:22:36 compute-2 nova_compute[226829]: 2026-01-31 08:22:36.160 226833 DEBUG oslo_concurrency.lockutils [None req-19761aa6-e33b-4851-a29c-77e6d1969f6b 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquiring lock "aa396f7d-4c1b-445e-807c-05107a729be4" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:36 compute-2 nova_compute[226829]: 2026-01-31 08:22:36.160 226833 DEBUG oslo_concurrency.lockutils [None req-19761aa6-e33b-4851-a29c-77e6d1969f6b 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:36 compute-2 nova_compute[226829]: 2026-01-31 08:22:36.215 226833 INFO nova.compute.manager [None req-19761aa6-e33b-4851-a29c-77e6d1969f6b 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Detaching volume 9e79b760-3e32-4b0f-9644-d12b3e5189ad
Jan 31 08:22:36 compute-2 nova_compute[226829]: 2026-01-31 08:22:36.432 226833 INFO nova.virt.block_device [None req-19761aa6-e33b-4851-a29c-77e6d1969f6b 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Attempting to driver detach volume 9e79b760-3e32-4b0f-9644-d12b3e5189ad from mountpoint /dev/vdb
Jan 31 08:22:36 compute-2 nova_compute[226829]: 2026-01-31 08:22:36.434 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:36 compute-2 nova_compute[226829]: 2026-01-31 08:22:36.447 226833 DEBUG nova.virt.libvirt.driver [None req-19761aa6-e33b-4851-a29c-77e6d1969f6b 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Attempting to detach device vdb from instance aa396f7d-4c1b-445e-807c-05107a729be4 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 08:22:36 compute-2 nova_compute[226829]: 2026-01-31 08:22:36.448 226833 DEBUG nova.virt.libvirt.guest [None req-19761aa6-e33b-4851-a29c-77e6d1969f6b 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:22:36 compute-2 nova_compute[226829]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:22:36 compute-2 nova_compute[226829]:   <source protocol="rbd" name="volumes/volume-9e79b760-3e32-4b0f-9644-d12b3e5189ad">
Jan 31 08:22:36 compute-2 nova_compute[226829]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:22:36 compute-2 nova_compute[226829]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:22:36 compute-2 nova_compute[226829]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:22:36 compute-2 nova_compute[226829]:   </source>
Jan 31 08:22:36 compute-2 nova_compute[226829]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:22:36 compute-2 nova_compute[226829]:   <serial>9e79b760-3e32-4b0f-9644-d12b3e5189ad</serial>
Jan 31 08:22:36 compute-2 nova_compute[226829]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 08:22:36 compute-2 nova_compute[226829]: </disk>
Jan 31 08:22:36 compute-2 nova_compute[226829]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:22:36 compute-2 nova_compute[226829]: 2026-01-31 08:22:36.464 226833 INFO nova.virt.libvirt.driver [None req-19761aa6-e33b-4851-a29c-77e6d1969f6b 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Successfully detached device vdb from instance aa396f7d-4c1b-445e-807c-05107a729be4 from the persistent domain config.
Jan 31 08:22:36 compute-2 nova_compute[226829]: 2026-01-31 08:22:36.465 226833 DEBUG nova.virt.libvirt.driver [None req-19761aa6-e33b-4851-a29c-77e6d1969f6b 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance aa396f7d-4c1b-445e-807c-05107a729be4 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 08:22:36 compute-2 nova_compute[226829]: 2026-01-31 08:22:36.466 226833 DEBUG nova.virt.libvirt.guest [None req-19761aa6-e33b-4851-a29c-77e6d1969f6b 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:22:36 compute-2 nova_compute[226829]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:22:36 compute-2 nova_compute[226829]:   <source protocol="rbd" name="volumes/volume-9e79b760-3e32-4b0f-9644-d12b3e5189ad">
Jan 31 08:22:36 compute-2 nova_compute[226829]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:22:36 compute-2 nova_compute[226829]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:22:36 compute-2 nova_compute[226829]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:22:36 compute-2 nova_compute[226829]:   </source>
Jan 31 08:22:36 compute-2 nova_compute[226829]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:22:36 compute-2 nova_compute[226829]:   <serial>9e79b760-3e32-4b0f-9644-d12b3e5189ad</serial>
Jan 31 08:22:36 compute-2 nova_compute[226829]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 08:22:36 compute-2 nova_compute[226829]: </disk>
Jan 31 08:22:36 compute-2 nova_compute[226829]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:22:36 compute-2 nova_compute[226829]: 2026-01-31 08:22:36.568 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:36 compute-2 nova_compute[226829]: 2026-01-31 08:22:36.608 226833 DEBUG nova.virt.libvirt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Received event <DeviceRemovedEvent: 1769847756.6076307, aa396f7d-4c1b-445e-807c-05107a729be4 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 08:22:36 compute-2 nova_compute[226829]: 2026-01-31 08:22:36.609 226833 DEBUG nova.virt.libvirt.driver [None req-19761aa6-e33b-4851-a29c-77e6d1969f6b 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance aa396f7d-4c1b-445e-807c-05107a729be4 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 08:22:36 compute-2 nova_compute[226829]: 2026-01-31 08:22:36.612 226833 INFO nova.virt.libvirt.driver [None req-19761aa6-e33b-4851-a29c-77e6d1969f6b 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Successfully detached device vdb from instance aa396f7d-4c1b-445e-807c-05107a729be4 from the live domain config.
Jan 31 08:22:37 compute-2 nova_compute[226829]: 2026-01-31 08:22:37.101 226833 DEBUG nova.objects.instance [None req-19761aa6-e33b-4851-a29c-77e6d1969f6b 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lazy-loading 'flavor' on Instance uuid aa396f7d-4c1b-445e-807c-05107a729be4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:22:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:37.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:37 compute-2 nova_compute[226829]: 2026-01-31 08:22:37.531 226833 DEBUG oslo_concurrency.lockutils [None req-19761aa6-e33b-4851-a29c-77e6d1969f6b 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:37.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:38 compute-2 nova_compute[226829]: 2026-01-31 08:22:38.072 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Updating instance_info_cache with network_info: [{"id": "c7810272-d139-4528-b358-19b623e1a34d", "address": "fa:16:3e:d0:e7:7f", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7810272-d1", "ovs_interfaceid": "c7810272-d139-4528-b358-19b623e1a34d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:22:38 compute-2 ceph-mon[77282]: pgmap v2656: 305 pgs: 305 active+clean; 638 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 30 KiB/s wr, 65 op/s
Jan 31 08:22:38 compute-2 nova_compute[226829]: 2026-01-31 08:22:38.398 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-aa396f7d-4c1b-445e-807c-05107a729be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:22:38 compute-2 nova_compute[226829]: 2026-01-31 08:22:38.398 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:22:38 compute-2 nova_compute[226829]: 2026-01-31 08:22:38.398 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:38 compute-2 nova_compute[226829]: 2026-01-31 08:22:38.399 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:39.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:39.542 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:22:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:39.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:40 compute-2 ceph-mon[77282]: pgmap v2657: 305 pgs: 305 active+clean; 611 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 21 KiB/s wr, 86 op/s
Jan 31 08:22:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:40 compute-2 nova_compute[226829]: 2026-01-31 08:22:40.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:41 compute-2 podman[295737]: 2026-01-31 08:22:41.220981055 +0000 UTC m=+0.099698051 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 31 08:22:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:22:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:41.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:22:41 compute-2 nova_compute[226829]: 2026-01-31 08:22:41.435 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:41 compute-2 nova_compute[226829]: 2026-01-31 08:22:41.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:41 compute-2 nova_compute[226829]: 2026-01-31 08:22:41.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:41 compute-2 nova_compute[226829]: 2026-01-31 08:22:41.570 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:41.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:42 compute-2 ceph-mon[77282]: pgmap v2658: 305 pgs: 305 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.8 KiB/s wr, 101 op/s
Jan 31 08:22:42 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3035816931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:22:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1237920725' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:22:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1237920725' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:22:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:43.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:43.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:44 compute-2 ceph-mon[77282]: pgmap v2659: 305 pgs: 305 active+clean; 557 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.0 KiB/s wr, 113 op/s
Jan 31 08:22:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3227282260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:22:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2076322004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:22:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:22:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:45.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:22:45 compute-2 nova_compute[226829]: 2026-01-31 08:22:45.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:45.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:46 compute-2 ceph-mon[77282]: pgmap v2660: 305 pgs: 305 active+clean; 557 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.0 KiB/s wr, 124 op/s
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.396 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.396 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.396 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.397 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.397 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.420 226833 DEBUG oslo_concurrency.lockutils [None req-ac47efe3-1160-437a-b0ac-d9dfdba3cdca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquiring lock "aa396f7d-4c1b-445e-807c-05107a729be4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.421 226833 DEBUG oslo_concurrency.lockutils [None req-ac47efe3-1160-437a-b0ac-d9dfdba3cdca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.421 226833 DEBUG oslo_concurrency.lockutils [None req-ac47efe3-1160-437a-b0ac-d9dfdba3cdca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquiring lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.422 226833 DEBUG oslo_concurrency.lockutils [None req-ac47efe3-1160-437a-b0ac-d9dfdba3cdca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.422 226833 DEBUG oslo_concurrency.lockutils [None req-ac47efe3-1160-437a-b0ac-d9dfdba3cdca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.424 226833 INFO nova.compute.manager [None req-ac47efe3-1160-437a-b0ac-d9dfdba3cdca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Terminating instance
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.426 226833 DEBUG nova.compute.manager [None req-ac47efe3-1160-437a-b0ac-d9dfdba3cdca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.437 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:46 compute-2 kernel: tapc7810272-d1 (unregistering): left promiscuous mode
Jan 31 08:22:46 compute-2 NetworkManager[48999]: <info>  [1769847766.5035] device (tapc7810272-d1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.510 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:46 compute-2 ovn_controller[133834]: 2026-01-31T08:22:46Z|00628|binding|INFO|Releasing lport c7810272-d139-4528-b358-19b623e1a34d from this chassis (sb_readonly=0)
Jan 31 08:22:46 compute-2 ovn_controller[133834]: 2026-01-31T08:22:46Z|00629|binding|INFO|Setting lport c7810272-d139-4528-b358-19b623e1a34d down in Southbound
Jan 31 08:22:46 compute-2 ovn_controller[133834]: 2026-01-31T08:22:46Z|00630|binding|INFO|Removing iface tapc7810272-d1 ovn-installed in OVS
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.513 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.518 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:46 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:46.538 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d0:e7:7f 10.100.0.7'], port_security=['fa:16:3e:d0:e7:7f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'aa396f7d-4c1b-445e-807c-05107a729be4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '134c066ac92844ff853b216870fa8eed', 'neutron:revision_number': '8', 'neutron:security_group_ids': '967ea74e-50db-4569-92ae-9b918e86440d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0bd5646e-2523-4ee9-a162-795050792e9d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=c7810272-d139-4528-b358-19b623e1a34d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:22:46 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:46.540 143841 INFO neutron.agent.ovn.metadata.agent [-] Port c7810272-d139-4528-b358-19b623e1a34d in datapath b8453b6a-05bd-4d59-86e9-a509416a9ef0 unbound from our chassis
Jan 31 08:22:46 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:46.542 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b8453b6a-05bd-4d59-86e9-a509416a9ef0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:22:46 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:46.545 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f4fc27a4-d2f4-423e-9ef1-80afea95f8b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:46 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:46.546 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0 namespace which is not needed anymore
Jan 31 08:22:46 compute-2 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000094.scope: Deactivated successfully.
Jan 31 08:22:46 compute-2 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000094.scope: Consumed 15.545s CPU time.
Jan 31 08:22:46 compute-2 systemd-machined[195142]: Machine qemu-69-instance-00000094 terminated.
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.572 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.645 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.652 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:46 compute-2 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[295503]: [NOTICE]   (295507) : haproxy version is 2.8.14-c23fe91
Jan 31 08:22:46 compute-2 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[295503]: [NOTICE]   (295507) : path to executable is /usr/sbin/haproxy
Jan 31 08:22:46 compute-2 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[295503]: [WARNING]  (295507) : Exiting Master process...
Jan 31 08:22:46 compute-2 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[295503]: [WARNING]  (295507) : Exiting Master process...
Jan 31 08:22:46 compute-2 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[295503]: [ALERT]    (295507) : Current worker (295509) exited with code 143 (Terminated)
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.660 226833 INFO nova.virt.libvirt.driver [-] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Instance destroyed successfully.
Jan 31 08:22:46 compute-2 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[295503]: [WARNING]  (295507) : All workers exited. Exiting... (0)
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.661 226833 DEBUG nova.objects.instance [None req-ac47efe3-1160-437a-b0ac-d9dfdba3cdca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lazy-loading 'resources' on Instance uuid aa396f7d-4c1b-445e-807c-05107a729be4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:22:46 compute-2 systemd[1]: libpod-66db40dc27e0866d4e2577f85057e4832a61202c7fdee56cc06d07b451667955.scope: Deactivated successfully.
Jan 31 08:22:46 compute-2 podman[295811]: 2026-01-31 08:22:46.67121837 +0000 UTC m=+0.045023430 container died 66db40dc27e0866d4e2577f85057e4832a61202c7fdee56cc06d07b451667955 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.689 226833 DEBUG nova.virt.libvirt.vif [None req-ac47efe3-1160-437a-b0ac-d9dfdba3cdca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:21:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-485097177',display_name='tempest-TestMinimumBasicScenario-server-485097177',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-485097177',id=148,image_ref='1a3bb6f8-bfef-4edf-a7ea-1489b5cad196',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGOpaJxQC88oU1BAE6agEO8/UDKwtI95jIn5J+NZB6IktiWeDMwJPGWyNwRbN0e4Zkyig+zlgMtyl/CXhKkrxfhvbob06bGVRfII17t2PzSbTbwb3feBP+Tv9H8WPOTqeQ==',key_name='tempest-TestMinimumBasicScenario-761057108',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:21:29Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='134c066ac92844ff853b216870fa8eed',ramdisk_id='',reservation_id='r-uxkefg5k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1a3bb6f8-bfef-4edf-a7ea-1489b5cad196',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-975831205',owner_user_name='tempest-TestMinimumBasicScenario-975831205-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:22:02Z,user_data=None,user_id='7f0be9090fdf49d2ac15246a0a820d3f',uuid=aa396f7d-4c1b-445e-807c-05107a729be4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c7810272-d139-4528-b358-19b623e1a34d", "address": "fa:16:3e:d0:e7:7f", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7810272-d1", "ovs_interfaceid": "c7810272-d139-4528-b358-19b623e1a34d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.689 226833 DEBUG nova.network.os_vif_util [None req-ac47efe3-1160-437a-b0ac-d9dfdba3cdca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Converting VIF {"id": "c7810272-d139-4528-b358-19b623e1a34d", "address": "fa:16:3e:d0:e7:7f", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7810272-d1", "ovs_interfaceid": "c7810272-d139-4528-b358-19b623e1a34d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.690 226833 DEBUG nova.network.os_vif_util [None req-ac47efe3-1160-437a-b0ac-d9dfdba3cdca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d0:e7:7f,bridge_name='br-int',has_traffic_filtering=True,id=c7810272-d139-4528-b358-19b623e1a34d,network=Network(b8453b6a-05bd-4d59-86e9-a509416a9ef0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7810272-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.690 226833 DEBUG os_vif [None req-ac47efe3-1160-437a-b0ac-d9dfdba3cdca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d0:e7:7f,bridge_name='br-int',has_traffic_filtering=True,id=c7810272-d139-4528-b358-19b623e1a34d,network=Network(b8453b6a-05bd-4d59-86e9-a509416a9ef0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7810272-d1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:22:46 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-66db40dc27e0866d4e2577f85057e4832a61202c7fdee56cc06d07b451667955-userdata-shm.mount: Deactivated successfully.
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.693 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.694 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc7810272-d1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.695 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:46 compute-2 systemd[1]: var-lib-containers-storage-overlay-a437e9b7e50a39709876ed53a23d04e408cac5a30051b6a571b94b6166200870-merged.mount: Deactivated successfully.
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.697 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.704 226833 INFO os_vif [None req-ac47efe3-1160-437a-b0ac-d9dfdba3cdca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d0:e7:7f,bridge_name='br-int',has_traffic_filtering=True,id=c7810272-d139-4528-b358-19b623e1a34d,network=Network(b8453b6a-05bd-4d59-86e9-a509416a9ef0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7810272-d1')
Jan 31 08:22:46 compute-2 podman[295811]: 2026-01-31 08:22:46.707747359 +0000 UTC m=+0.081552419 container cleanup 66db40dc27e0866d4e2577f85057e4832a61202c7fdee56cc06d07b451667955 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:22:46 compute-2 systemd[1]: libpod-conmon-66db40dc27e0866d4e2577f85057e4832a61202c7fdee56cc06d07b451667955.scope: Deactivated successfully.
Jan 31 08:22:46 compute-2 podman[295849]: 2026-01-31 08:22:46.757012884 +0000 UTC m=+0.035389530 container remove 66db40dc27e0866d4e2577f85057e4832a61202c7fdee56cc06d07b451667955 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 08:22:46 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:46.762 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e2cf5b1b-3b75-4c9b-97de-9eae03e3ebc7]: (4, ('Sat Jan 31 08:22:46 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0 (66db40dc27e0866d4e2577f85057e4832a61202c7fdee56cc06d07b451667955)\n66db40dc27e0866d4e2577f85057e4832a61202c7fdee56cc06d07b451667955\nSat Jan 31 08:22:46 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0 (66db40dc27e0866d4e2577f85057e4832a61202c7fdee56cc06d07b451667955)\n66db40dc27e0866d4e2577f85057e4832a61202c7fdee56cc06d07b451667955\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:46 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:46.764 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[37350042-7a2c-42b8-88eb-79c8924102de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:46 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:46.765 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8453b6a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.767 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:46 compute-2 kernel: tapb8453b6a-00: left promiscuous mode
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.771 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:46 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:46.774 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7b57397a-f1ed-4715-b9eb-cdc931fde4c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:46 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:46.791 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6b54b880-c24d-4c71-906d-5d58f1293a5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:46 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:46.792 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e89e2513-52e5-4bef-ae79-162bd5595fa5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:46 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:46.803 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ba98c25a-2cf3-447f-b386-5f758b7f959d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 797386, 'reachable_time': 35228, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295882, 'error': None, 'target': 'ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:46 compute-2 systemd[1]: run-netns-ovnmeta\x2db8453b6a\x2d05bd\x2d4d59\x2d86e9\x2da509416a9ef0.mount: Deactivated successfully.
Jan 31 08:22:46 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:46.808 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:22:46 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:22:46.808 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[5c2f2d1e-8593-4e0b-ac54-42441bb0ed3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:22:46 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/492939812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:22:46 compute-2 nova_compute[226829]: 2026-01-31 08:22:46.864 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:22:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1587069712' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:22:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/492939812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:22:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2237483166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:22:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3903040416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:22:47 compute-2 nova_compute[226829]: 2026-01-31 08:22:47.212 226833 INFO nova.virt.libvirt.driver [None req-ac47efe3-1160-437a-b0ac-d9dfdba3cdca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Deleting instance files /var/lib/nova/instances/aa396f7d-4c1b-445e-807c-05107a729be4_del
Jan 31 08:22:47 compute-2 nova_compute[226829]: 2026-01-31 08:22:47.212 226833 INFO nova.virt.libvirt.driver [None req-ac47efe3-1160-437a-b0ac-d9dfdba3cdca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Deletion of /var/lib/nova/instances/aa396f7d-4c1b-445e-807c-05107a729be4_del complete
Jan 31 08:22:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:22:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:47.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:22:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:22:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:47.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:22:47 compute-2 nova_compute[226829]: 2026-01-31 08:22:47.932 226833 DEBUG nova.compute.manager [req-09b18c5f-6f25-46d8-8841-01fad897117a req-2aab51ad-93eb-48cf-807a-f1caa6f0c7ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Received event network-vif-unplugged-c7810272-d139-4528-b358-19b623e1a34d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:22:47 compute-2 nova_compute[226829]: 2026-01-31 08:22:47.932 226833 DEBUG oslo_concurrency.lockutils [req-09b18c5f-6f25-46d8-8841-01fad897117a req-2aab51ad-93eb-48cf-807a-f1caa6f0c7ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:47 compute-2 nova_compute[226829]: 2026-01-31 08:22:47.933 226833 DEBUG oslo_concurrency.lockutils [req-09b18c5f-6f25-46d8-8841-01fad897117a req-2aab51ad-93eb-48cf-807a-f1caa6f0c7ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:47 compute-2 nova_compute[226829]: 2026-01-31 08:22:47.933 226833 DEBUG oslo_concurrency.lockutils [req-09b18c5f-6f25-46d8-8841-01fad897117a req-2aab51ad-93eb-48cf-807a-f1caa6f0c7ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:47 compute-2 nova_compute[226829]: 2026-01-31 08:22:47.933 226833 DEBUG nova.compute.manager [req-09b18c5f-6f25-46d8-8841-01fad897117a req-2aab51ad-93eb-48cf-807a-f1caa6f0c7ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] No waiting events found dispatching network-vif-unplugged-c7810272-d139-4528-b358-19b623e1a34d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:22:47 compute-2 nova_compute[226829]: 2026-01-31 08:22:47.933 226833 DEBUG nova.compute.manager [req-09b18c5f-6f25-46d8-8841-01fad897117a req-2aab51ad-93eb-48cf-807a-f1caa6f0c7ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Received event network-vif-unplugged-c7810272-d139-4528-b358-19b623e1a34d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:22:48 compute-2 nova_compute[226829]: 2026-01-31 08:22:48.056 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:22:48 compute-2 nova_compute[226829]: 2026-01-31 08:22:48.056 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:22:48 compute-2 nova_compute[226829]: 2026-01-31 08:22:48.060 226833 INFO nova.compute.manager [None req-ac47efe3-1160-437a-b0ac-d9dfdba3cdca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Took 1.63 seconds to destroy the instance on the hypervisor.
Jan 31 08:22:48 compute-2 nova_compute[226829]: 2026-01-31 08:22:48.061 226833 DEBUG oslo.service.loopingcall [None req-ac47efe3-1160-437a-b0ac-d9dfdba3cdca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:22:48 compute-2 nova_compute[226829]: 2026-01-31 08:22:48.061 226833 DEBUG nova.compute.manager [-] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:22:48 compute-2 nova_compute[226829]: 2026-01-31 08:22:48.062 226833 DEBUG nova.network.neutron [-] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:22:48 compute-2 nova_compute[226829]: 2026-01-31 08:22:48.067 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:22:48 compute-2 nova_compute[226829]: 2026-01-31 08:22:48.067 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:22:48 compute-2 nova_compute[226829]: 2026-01-31 08:22:48.260 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:22:48 compute-2 nova_compute[226829]: 2026-01-31 08:22:48.262 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3994MB free_disk=20.739330291748047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:22:48 compute-2 nova_compute[226829]: 2026-01-31 08:22:48.262 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:48 compute-2 nova_compute[226829]: 2026-01-31 08:22:48.263 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:48 compute-2 ceph-mon[77282]: pgmap v2661: 305 pgs: 305 active+clean; 545 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.6 KiB/s wr, 147 op/s
Jan 31 08:22:48 compute-2 nova_compute[226829]: 2026-01-31 08:22:48.698 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance e8e7a13f-a648-45dc-b768-ac5deac97083 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:22:48 compute-2 nova_compute[226829]: 2026-01-31 08:22:48.699 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance aa396f7d-4c1b-445e-807c-05107a729be4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:22:48 compute-2 nova_compute[226829]: 2026-01-31 08:22:48.699 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:22:48 compute-2 nova_compute[226829]: 2026-01-31 08:22:48.699 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:22:48 compute-2 nova_compute[226829]: 2026-01-31 08:22:48.984 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:22:48 compute-2 sudo[295888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:22:48 compute-2 sudo[295888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:48 compute-2 sudo[295888]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:49 compute-2 sudo[295914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:22:49 compute-2 sudo[295914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:49 compute-2 sudo[295914]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:49.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:22:49 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/851481534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:22:49 compute-2 nova_compute[226829]: 2026-01-31 08:22:49.412 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:22:49 compute-2 nova_compute[226829]: 2026-01-31 08:22:49.418 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:22:49 compute-2 nova_compute[226829]: 2026-01-31 08:22:49.475 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:22:49 compute-2 nova_compute[226829]: 2026-01-31 08:22:49.479 226833 DEBUG nova.network.neutron [-] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:22:49 compute-2 nova_compute[226829]: 2026-01-31 08:22:49.595 226833 INFO nova.compute.manager [-] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Took 1.53 seconds to deallocate network for instance.
Jan 31 08:22:49 compute-2 nova_compute[226829]: 2026-01-31 08:22:49.611 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:22:49 compute-2 nova_compute[226829]: 2026-01-31 08:22:49.611 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/851481534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:22:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:49.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:49 compute-2 nova_compute[226829]: 2026-01-31 08:22:49.733 226833 DEBUG nova.compute.manager [req-de84c4ae-f48c-4270-9a1b-bd1d1d2f44b5 req-5708c112-edc3-4e40-8909-4aa93f1c2695 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Received event network-vif-deleted-c7810272-d139-4528-b358-19b623e1a34d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:22:49 compute-2 nova_compute[226829]: 2026-01-31 08:22:49.738 226833 DEBUG oslo_concurrency.lockutils [None req-ac47efe3-1160-437a-b0ac-d9dfdba3cdca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:49 compute-2 nova_compute[226829]: 2026-01-31 08:22:49.739 226833 DEBUG oslo_concurrency.lockutils [None req-ac47efe3-1160-437a-b0ac-d9dfdba3cdca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:49 compute-2 nova_compute[226829]: 2026-01-31 08:22:49.904 226833 DEBUG oslo_concurrency.processutils [None req-ac47efe3-1160-437a-b0ac-d9dfdba3cdca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:22:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:50 compute-2 nova_compute[226829]: 2026-01-31 08:22:50.157 226833 DEBUG nova.compute.manager [req-8583dadd-6104-4408-ae84-933748a0d6d4 req-b8e515b0-fbcb-4b69-835c-685e11198034 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Received event network-vif-plugged-c7810272-d139-4528-b358-19b623e1a34d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:22:50 compute-2 nova_compute[226829]: 2026-01-31 08:22:50.158 226833 DEBUG oslo_concurrency.lockutils [req-8583dadd-6104-4408-ae84-933748a0d6d4 req-b8e515b0-fbcb-4b69-835c-685e11198034 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:50 compute-2 nova_compute[226829]: 2026-01-31 08:22:50.158 226833 DEBUG oslo_concurrency.lockutils [req-8583dadd-6104-4408-ae84-933748a0d6d4 req-b8e515b0-fbcb-4b69-835c-685e11198034 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:50 compute-2 nova_compute[226829]: 2026-01-31 08:22:50.159 226833 DEBUG oslo_concurrency.lockutils [req-8583dadd-6104-4408-ae84-933748a0d6d4 req-b8e515b0-fbcb-4b69-835c-685e11198034 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:50 compute-2 nova_compute[226829]: 2026-01-31 08:22:50.159 226833 DEBUG nova.compute.manager [req-8583dadd-6104-4408-ae84-933748a0d6d4 req-b8e515b0-fbcb-4b69-835c-685e11198034 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] No waiting events found dispatching network-vif-plugged-c7810272-d139-4528-b358-19b623e1a34d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:22:50 compute-2 nova_compute[226829]: 2026-01-31 08:22:50.159 226833 WARNING nova.compute.manager [req-8583dadd-6104-4408-ae84-933748a0d6d4 req-b8e515b0-fbcb-4b69-835c-685e11198034 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Received unexpected event network-vif-plugged-c7810272-d139-4528-b358-19b623e1a34d for instance with vm_state deleted and task_state None.
Jan 31 08:22:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:22:50 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2482508508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:22:50 compute-2 nova_compute[226829]: 2026-01-31 08:22:50.335 226833 DEBUG oslo_concurrency.processutils [None req-ac47efe3-1160-437a-b0ac-d9dfdba3cdca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:22:50 compute-2 nova_compute[226829]: 2026-01-31 08:22:50.339 226833 DEBUG nova.compute.provider_tree [None req-ac47efe3-1160-437a-b0ac-d9dfdba3cdca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:22:50 compute-2 nova_compute[226829]: 2026-01-31 08:22:50.561 226833 DEBUG nova.scheduler.client.report [None req-ac47efe3-1160-437a-b0ac-d9dfdba3cdca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:22:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/245710413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:22:50 compute-2 ceph-mon[77282]: pgmap v2662: 305 pgs: 305 active+clean; 510 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 15 KiB/s wr, 142 op/s
Jan 31 08:22:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2482508508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:22:50 compute-2 nova_compute[226829]: 2026-01-31 08:22:50.874 226833 DEBUG oslo_concurrency.lockutils [None req-ac47efe3-1160-437a-b0ac-d9dfdba3cdca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:50 compute-2 nova_compute[226829]: 2026-01-31 08:22:50.995 226833 INFO nova.scheduler.client.report [None req-ac47efe3-1160-437a-b0ac-d9dfdba3cdca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Deleted allocations for instance aa396f7d-4c1b-445e-807c-05107a729be4
Jan 31 08:22:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:51.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:51 compute-2 nova_compute[226829]: 2026-01-31 08:22:51.326 226833 DEBUG oslo_concurrency.lockutils [None req-ac47efe3-1160-437a-b0ac-d9dfdba3cdca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "aa396f7d-4c1b-445e-807c-05107a729be4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:51 compute-2 nova_compute[226829]: 2026-01-31 08:22:51.575 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:22:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:51.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:22:51 compute-2 nova_compute[226829]: 2026-01-31 08:22:51.695 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:52 compute-2 ceph-mon[77282]: pgmap v2663: 305 pgs: 305 active+clean; 476 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 858 KiB/s rd, 16 KiB/s wr, 135 op/s
Jan 31 08:22:52 compute-2 podman[295985]: 2026-01-31 08:22:52.177939796 +0000 UTC m=+0.048953026 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 08:22:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e346 e346: 3 total, 3 up, 3 in
Jan 31 08:22:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1518565786' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:22:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1518565786' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:22:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:53.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:53.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:54 compute-2 ceph-mon[77282]: osdmap e346: 3 total, 3 up, 3 in
Jan 31 08:22:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2446759486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:22:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2446759486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:22:54 compute-2 ceph-mon[77282]: pgmap v2665: 305 pgs: 305 active+clean; 476 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 33 KiB/s wr, 172 op/s
Jan 31 08:22:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:22:55 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2483533362' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:22:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:22:55 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2483533362' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:22:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:55.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2483533362' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:22:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2483533362' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:22:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:55.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:56 compute-2 ceph-mon[77282]: pgmap v2666: 305 pgs: 305 active+clean; 457 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 46 KiB/s wr, 234 op/s
Jan 31 08:22:56 compute-2 nova_compute[226829]: 2026-01-31 08:22:56.577 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:56 compute-2 nova_compute[226829]: 2026-01-31 08:22:56.745 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:57.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:22:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:57.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:22:58 compute-2 ceph-mon[77282]: pgmap v2667: 305 pgs: 305 active+clean; 441 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 45 KiB/s wr, 213 op/s
Jan 31 08:22:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:22:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:59.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:22:59 compute-2 nova_compute[226829]: 2026-01-31 08:22:59.612 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:22:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:59.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:59 compute-2 nova_compute[226829]: 2026-01-31 08:22:59.691 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:59 compute-2 nova_compute[226829]: 2026-01-31 08:22:59.691 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:23:00 compute-2 ceph-mon[77282]: pgmap v2668: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 33 KiB/s wr, 242 op/s
Jan 31 08:23:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:00 compute-2 ovn_controller[133834]: 2026-01-31T08:23:00Z|00631|binding|INFO|Releasing lport fd5187fd-cce9-41da-96d2-ef75fbcbcf0f from this chassis (sb_readonly=0)
Jan 31 08:23:00 compute-2 nova_compute[226829]: 2026-01-31 08:23:00.730 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:00 compute-2 ovn_controller[133834]: 2026-01-31T08:23:00Z|00632|binding|INFO|Releasing lport fd5187fd-cce9-41da-96d2-ef75fbcbcf0f from this chassis (sb_readonly=0)
Jan 31 08:23:00 compute-2 nova_compute[226829]: 2026-01-31 08:23:00.791 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1867350194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:01.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:01 compute-2 nova_compute[226829]: 2026-01-31 08:23:01.579 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:01 compute-2 nova_compute[226829]: 2026-01-31 08:23:01.659 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847766.6581647, aa396f7d-4c1b-445e-807c-05107a729be4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:23:01 compute-2 nova_compute[226829]: 2026-01-31 08:23:01.659 226833 INFO nova.compute.manager [-] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] VM Stopped (Lifecycle Event)
Jan 31 08:23:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:01.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:01 compute-2 nova_compute[226829]: 2026-01-31 08:23:01.747 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:01 compute-2 nova_compute[226829]: 2026-01-31 08:23:01.764 226833 DEBUG nova.compute.manager [None req-ad60d1c8-8ef2-47eb-ac46-a2a8d43d6620 - - - - - -] [instance: aa396f7d-4c1b-445e-807c-05107a729be4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:23:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e347 e347: 3 total, 3 up, 3 in
Jan 31 08:23:02 compute-2 ceph-mon[77282]: pgmap v2669: 305 pgs: 305 active+clean; 348 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 36 KiB/s wr, 238 op/s
Jan 31 08:23:02 compute-2 ceph-mon[77282]: osdmap e347: 3 total, 3 up, 3 in
Jan 31 08:23:03 compute-2 sudo[296010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:23:03 compute-2 sudo[296010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:03 compute-2 sudo[296010]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:03 compute-2 sudo[296035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:23:03 compute-2 sudo[296035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:03 compute-2 sudo[296035]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:03 compute-2 sudo[296060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:23:03 compute-2 sudo[296060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:03 compute-2 sudo[296060]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:03 compute-2 sudo[296085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:23:03 compute-2 sudo[296085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:03.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:03 compute-2 sudo[296085]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:23:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:03.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:23:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:23:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:23:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:23:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:23:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:23:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:23:04 compute-2 ceph-mon[77282]: pgmap v2671: 305 pgs: 305 active+clean; 329 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 22 KiB/s wr, 197 op/s
Jan 31 08:23:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:05.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:05.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:06 compute-2 ceph-mon[77282]: pgmap v2672: 305 pgs: 305 active+clean; 261 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 92 KiB/s rd, 9.1 KiB/s wr, 132 op/s
Jan 31 08:23:06 compute-2 nova_compute[226829]: 2026-01-31 08:23:06.581 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:06 compute-2 nova_compute[226829]: 2026-01-31 08:23:06.749 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:23:06.900 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:23:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:23:06.902 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:23:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:23:06.903 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:23:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:07.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/506549681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:23:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:07.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:23:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4137991365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:08 compute-2 ceph-mon[77282]: pgmap v2673: 305 pgs: 305 active+clean; 248 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 94 KiB/s rd, 8.6 KiB/s wr, 133 op/s
Jan 31 08:23:09 compute-2 sudo[296144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:23:09 compute-2 sudo[296144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:09 compute-2 sudo[296144]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:09 compute-2 sudo[296169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:23:09 compute-2 sudo[296169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:09 compute-2 sudo[296169]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:09.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:09.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:10 compute-2 ceph-mon[77282]: pgmap v2674: 305 pgs: 305 active+clean; 268 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 622 KiB/s wr, 86 op/s
Jan 31 08:23:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:23:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:11.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:23:11 compute-2 nova_compute[226829]: 2026-01-31 08:23:11.583 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:11.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:11 compute-2 nova_compute[226829]: 2026-01-31 08:23:11.751 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:12 compute-2 sudo[296196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:23:12 compute-2 sudo[296196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:12 compute-2 sudo[296196]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:12 compute-2 sudo[296227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:23:12 compute-2 sudo[296227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:12 compute-2 sudo[296227]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:12 compute-2 podman[296220]: 2026-01-31 08:23:12.23084921 +0000 UTC m=+0.114425526 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:23:12 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:23:12 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:23:12 compute-2 ceph-mon[77282]: pgmap v2675: 305 pgs: 305 active+clean; 283 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.4 MiB/s wr, 57 op/s
Jan 31 08:23:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:13.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:13.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:14 compute-2 ceph-mon[77282]: pgmap v2676: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Jan 31 08:23:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:15.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/97320698' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:23:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4268451175' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:23:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:15.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:16 compute-2 ceph-mon[77282]: pgmap v2677: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 31 08:23:16 compute-2 nova_compute[226829]: 2026-01-31 08:23:16.586 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:16 compute-2 nova_compute[226829]: 2026-01-31 08:23:16.753 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:17.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:17.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:18 compute-2 ceph-mon[77282]: pgmap v2678: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Jan 31 08:23:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:19.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:19.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:20 compute-2 ceph-mon[77282]: pgmap v2679: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 652 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 31 08:23:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:23:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:21.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:23:21 compute-2 nova_compute[226829]: 2026-01-31 08:23:21.587 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:21.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:21 compute-2 nova_compute[226829]: 2026-01-31 08:23:21.754 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #124. Immutable memtables: 0.
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.143756) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 124
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802143817, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 1020, "num_deletes": 251, "total_data_size": 2059275, "memory_usage": 2078808, "flush_reason": "Manual Compaction"}
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #125: started
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802150350, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 125, "file_size": 863214, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 62571, "largest_seqno": 63586, "table_properties": {"data_size": 859378, "index_size": 1488, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10386, "raw_average_key_size": 21, "raw_value_size": 851117, "raw_average_value_size": 1729, "num_data_blocks": 66, "num_entries": 492, "num_filter_entries": 492, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847728, "oldest_key_time": 1769847728, "file_creation_time": 1769847802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 6670 microseconds, and 2899 cpu microseconds.
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.150423) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #125: 863214 bytes OK
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.150442) [db/memtable_list.cc:519] [default] Level-0 commit table #125 started
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.151968) [db/memtable_list.cc:722] [default] Level-0 commit table #125: memtable #1 done
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.151988) EVENT_LOG_v1 {"time_micros": 1769847802151976, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.152005) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 2054248, prev total WAL file size 2054248, number of live WAL files 2.
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000121.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.152653) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303039' seq:72057594037927935, type:22 .. '6D6772737461740032323630' seq:0, type:0; will stop at (end)
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [125(842KB)], [123(13MB)]
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802152720, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [125], "files_L6": [123], "score": -1, "input_data_size": 14523072, "oldest_snapshot_seqno": -1}
Jan 31 08:23:22 compute-2 ceph-mon[77282]: pgmap v2680: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.3 MiB/s wr, 69 op/s
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #126: 8702 keys, 11173514 bytes, temperature: kUnknown
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802226495, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 126, "file_size": 11173514, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11117749, "index_size": 32933, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21765, "raw_key_size": 227788, "raw_average_key_size": 26, "raw_value_size": 10965030, "raw_average_value_size": 1260, "num_data_blocks": 1270, "num_entries": 8702, "num_filter_entries": 8702, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769847802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 126, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.226771) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 11173514 bytes
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.227932) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.6 rd, 151.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 13.0 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(29.8) write-amplify(12.9) OK, records in: 9188, records dropped: 486 output_compression: NoCompression
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.227960) EVENT_LOG_v1 {"time_micros": 1769847802227950, "job": 78, "event": "compaction_finished", "compaction_time_micros": 73880, "compaction_time_cpu_micros": 26930, "output_level": 6, "num_output_files": 1, "total_output_size": 11173514, "num_input_records": 9188, "num_output_records": 8702, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.152528) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.228161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.228164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.228166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.228167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.228169) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #127. Immutable memtables: 0.
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.228445) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 127
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802228466, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 265, "num_deletes": 251, "total_data_size": 23485, "memory_usage": 29008, "flush_reason": "Manual Compaction"}
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #128: started
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802229620, "job": 0, "event": "table_file_deletion", "file_number": 125}
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802229932, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 128, "file_size": 14621, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63591, "largest_seqno": 63851, "table_properties": {"data_size": 12786, "index_size": 67, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4731, "raw_average_key_size": 18, "raw_value_size": 9304, "raw_average_value_size": 35, "num_data_blocks": 3, "num_entries": 260, "num_filter_entries": 260, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847802, "oldest_key_time": 1769847802, "file_creation_time": 1769847802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 2078 microseconds, and 473 cpu microseconds.
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000123.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802231460, "job": 0, "event": "table_file_deletion", "file_number": 123}
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.229957) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #128: 14621 bytes OK
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.230540) [db/memtable_list.cc:519] [default] Level-0 commit table #128 started
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.232751) [db/memtable_list.cc:722] [default] Level-0 commit table #128: memtable #1 done
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.232766) EVENT_LOG_v1 {"time_micros": 1769847802232762, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.232779) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 21427, prev total WAL file size 21427, number of live WAL files 2.
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000124.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.233045) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [128(14KB)], [126(10MB)]
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802233134, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [128], "files_L6": [126], "score": -1, "input_data_size": 11188135, "oldest_snapshot_seqno": -1}
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #129: 8455 keys, 9195600 bytes, temperature: kUnknown
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802280090, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 129, "file_size": 9195600, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9143332, "index_size": 30011, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21189, "raw_key_size": 223352, "raw_average_key_size": 26, "raw_value_size": 8996733, "raw_average_value_size": 1064, "num_data_blocks": 1139, "num_entries": 8455, "num_filter_entries": 8455, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769847802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 129, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.280404) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 9195600 bytes
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.281784) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 237.7 rd, 195.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 10.7 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(1394.1) write-amplify(628.9) OK, records in: 8962, records dropped: 507 output_compression: NoCompression
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.281801) EVENT_LOG_v1 {"time_micros": 1769847802281793, "job": 80, "event": "compaction_finished", "compaction_time_micros": 47060, "compaction_time_cpu_micros": 16788, "output_level": 6, "num_output_files": 1, "total_output_size": 9195600, "num_input_records": 8962, "num_output_records": 8455, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802281895, "job": 80, "event": "table_file_deletion", "file_number": 128}
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000126.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802283004, "job": 80, "event": "table_file_deletion", "file_number": 126}
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.232981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.283047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.283051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.283053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.283055) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:22 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:23:22.283115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:23 compute-2 podman[296279]: 2026-01-31 08:23:23.154793606 +0000 UTC m=+0.041727484 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 31 08:23:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:23:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:23.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:23:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.004000107s ======
Jan 31 08:23:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:23.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000107s
Jan 31 08:23:24 compute-2 ceph-mon[77282]: pgmap v2681: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 614 KiB/s wr, 89 op/s
Jan 31 08:23:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1503404094' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:23:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:25.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:25 compute-2 nova_compute[226829]: 2026-01-31 08:23:25.583 226833 DEBUG oslo_concurrency.lockutils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Acquiring lock "47072fbd-0188-49f0-8a82-0cfa42eca7e4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:23:25 compute-2 nova_compute[226829]: 2026-01-31 08:23:25.584 226833 DEBUG oslo_concurrency.lockutils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lock "47072fbd-0188-49f0-8a82-0cfa42eca7e4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:23:25 compute-2 nova_compute[226829]: 2026-01-31 08:23:25.654 226833 DEBUG nova.compute.manager [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:23:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:25.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:25 compute-2 nova_compute[226829]: 2026-01-31 08:23:25.873 226833 DEBUG oslo_concurrency.lockutils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:23:25 compute-2 nova_compute[226829]: 2026-01-31 08:23:25.874 226833 DEBUG oslo_concurrency.lockutils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:23:25 compute-2 nova_compute[226829]: 2026-01-31 08:23:25.880 226833 DEBUG nova.virt.hardware [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:23:25 compute-2 nova_compute[226829]: 2026-01-31 08:23:25.881 226833 INFO nova.compute.claims [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:23:26 compute-2 nova_compute[226829]: 2026-01-31 08:23:26.085 226833 DEBUG oslo_concurrency.processutils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:23:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:23:26 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/344176187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:26 compute-2 nova_compute[226829]: 2026-01-31 08:23:26.522 226833 DEBUG oslo_concurrency.processutils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:23:26 compute-2 nova_compute[226829]: 2026-01-31 08:23:26.528 226833 DEBUG nova.compute.provider_tree [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:23:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e348 e348: 3 total, 3 up, 3 in
Jan 31 08:23:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/597961286' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:23:26 compute-2 ceph-mon[77282]: pgmap v2682: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 76 op/s
Jan 31 08:23:26 compute-2 nova_compute[226829]: 2026-01-31 08:23:26.576 226833 DEBUG nova.scheduler.client.report [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:23:26 compute-2 nova_compute[226829]: 2026-01-31 08:23:26.590 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:26 compute-2 nova_compute[226829]: 2026-01-31 08:23:26.684 226833 DEBUG oslo_concurrency.lockutils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:23:26 compute-2 nova_compute[226829]: 2026-01-31 08:23:26.685 226833 DEBUG nova.compute.manager [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:23:26 compute-2 nova_compute[226829]: 2026-01-31 08:23:26.755 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:26 compute-2 nova_compute[226829]: 2026-01-31 08:23:26.781 226833 DEBUG nova.compute.manager [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 31 08:23:26 compute-2 nova_compute[226829]: 2026-01-31 08:23:26.840 226833 INFO nova.virt.libvirt.driver [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:23:26 compute-2 nova_compute[226829]: 2026-01-31 08:23:26.920 226833 DEBUG nova.compute.manager [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:23:27 compute-2 nova_compute[226829]: 2026-01-31 08:23:27.096 226833 DEBUG nova.compute.manager [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:23:27 compute-2 nova_compute[226829]: 2026-01-31 08:23:27.098 226833 DEBUG nova.virt.libvirt.driver [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:23:27 compute-2 nova_compute[226829]: 2026-01-31 08:23:27.099 226833 INFO nova.virt.libvirt.driver [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Creating image(s)
Jan 31 08:23:27 compute-2 nova_compute[226829]: 2026-01-31 08:23:27.139 226833 DEBUG nova.storage.rbd_utils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] rbd image 47072fbd-0188-49f0-8a82-0cfa42eca7e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:23:27 compute-2 nova_compute[226829]: 2026-01-31 08:23:27.172 226833 DEBUG nova.storage.rbd_utils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] rbd image 47072fbd-0188-49f0-8a82-0cfa42eca7e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:23:27 compute-2 nova_compute[226829]: 2026-01-31 08:23:27.199 226833 DEBUG nova.storage.rbd_utils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] rbd image 47072fbd-0188-49f0-8a82-0cfa42eca7e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:23:27 compute-2 nova_compute[226829]: 2026-01-31 08:23:27.203 226833 DEBUG oslo_concurrency.processutils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:23:27 compute-2 nova_compute[226829]: 2026-01-31 08:23:27.258 226833 DEBUG oslo_concurrency.processutils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:23:27 compute-2 nova_compute[226829]: 2026-01-31 08:23:27.259 226833 DEBUG oslo_concurrency.lockutils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:23:27 compute-2 nova_compute[226829]: 2026-01-31 08:23:27.259 226833 DEBUG oslo_concurrency.lockutils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:23:27 compute-2 nova_compute[226829]: 2026-01-31 08:23:27.260 226833 DEBUG oslo_concurrency.lockutils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:23:27 compute-2 nova_compute[226829]: 2026-01-31 08:23:27.283 226833 DEBUG nova.storage.rbd_utils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] rbd image 47072fbd-0188-49f0-8a82-0cfa42eca7e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:23:27 compute-2 nova_compute[226829]: 2026-01-31 08:23:27.287 226833 DEBUG oslo_concurrency.processutils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 47072fbd-0188-49f0-8a82-0cfa42eca7e4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:23:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:27.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:23:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:27.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:23:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/344176187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:28 compute-2 ceph-mon[77282]: osdmap e348: 3 total, 3 up, 3 in
Jan 31 08:23:29 compute-2 ceph-mon[77282]: pgmap v2684: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 16 KiB/s wr, 137 op/s
Jan 31 08:23:29 compute-2 sudo[296417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:23:29 compute-2 sudo[296417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:29 compute-2 sudo[296417]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:29.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:29 compute-2 sudo[296442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:23:29 compute-2 sudo[296442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:29 compute-2 sudo[296442]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:29 compute-2 nova_compute[226829]: 2026-01-31 08:23:29.461 226833 DEBUG oslo_concurrency.processutils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 47072fbd-0188-49f0-8a82-0cfa42eca7e4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:23:29 compute-2 nova_compute[226829]: 2026-01-31 08:23:29.541 226833 DEBUG nova.storage.rbd_utils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] resizing rbd image 47072fbd-0188-49f0-8a82-0cfa42eca7e4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:23:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:29.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:29 compute-2 nova_compute[226829]: 2026-01-31 08:23:29.871 226833 DEBUG nova.objects.instance [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lazy-loading 'migration_context' on Instance uuid 47072fbd-0188-49f0-8a82-0cfa42eca7e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:23:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:30 compute-2 ceph-mon[77282]: pgmap v2685: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.4 MiB/s wr, 152 op/s
Jan 31 08:23:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e349 e349: 3 total, 3 up, 3 in
Jan 31 08:23:30 compute-2 nova_compute[226829]: 2026-01-31 08:23:30.706 226833 DEBUG nova.virt.libvirt.driver [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:23:30 compute-2 nova_compute[226829]: 2026-01-31 08:23:30.706 226833 DEBUG nova.virt.libvirt.driver [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Ensure instance console log exists: /var/lib/nova/instances/47072fbd-0188-49f0-8a82-0cfa42eca7e4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:23:30 compute-2 nova_compute[226829]: 2026-01-31 08:23:30.706 226833 DEBUG oslo_concurrency.lockutils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:23:30 compute-2 nova_compute[226829]: 2026-01-31 08:23:30.707 226833 DEBUG oslo_concurrency.lockutils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:23:30 compute-2 nova_compute[226829]: 2026-01-31 08:23:30.707 226833 DEBUG oslo_concurrency.lockutils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:23:30 compute-2 nova_compute[226829]: 2026-01-31 08:23:30.709 226833 DEBUG nova.virt.libvirt.driver [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:23:30 compute-2 nova_compute[226829]: 2026-01-31 08:23:30.713 226833 WARNING nova.virt.libvirt.driver [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:23:30 compute-2 nova_compute[226829]: 2026-01-31 08:23:30.718 226833 DEBUG nova.virt.libvirt.host [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:23:30 compute-2 nova_compute[226829]: 2026-01-31 08:23:30.718 226833 DEBUG nova.virt.libvirt.host [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:23:30 compute-2 nova_compute[226829]: 2026-01-31 08:23:30.728 226833 DEBUG nova.virt.libvirt.host [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:23:30 compute-2 nova_compute[226829]: 2026-01-31 08:23:30.729 226833 DEBUG nova.virt.libvirt.host [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:23:30 compute-2 nova_compute[226829]: 2026-01-31 08:23:30.730 226833 DEBUG nova.virt.libvirt.driver [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:23:30 compute-2 nova_compute[226829]: 2026-01-31 08:23:30.730 226833 DEBUG nova.virt.hardware [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:23:30 compute-2 nova_compute[226829]: 2026-01-31 08:23:30.731 226833 DEBUG nova.virt.hardware [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:23:30 compute-2 nova_compute[226829]: 2026-01-31 08:23:30.731 226833 DEBUG nova.virt.hardware [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:23:30 compute-2 nova_compute[226829]: 2026-01-31 08:23:30.731 226833 DEBUG nova.virt.hardware [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:23:30 compute-2 nova_compute[226829]: 2026-01-31 08:23:30.731 226833 DEBUG nova.virt.hardware [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:23:30 compute-2 nova_compute[226829]: 2026-01-31 08:23:30.732 226833 DEBUG nova.virt.hardware [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:23:30 compute-2 nova_compute[226829]: 2026-01-31 08:23:30.732 226833 DEBUG nova.virt.hardware [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:23:30 compute-2 nova_compute[226829]: 2026-01-31 08:23:30.732 226833 DEBUG nova.virt.hardware [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:23:30 compute-2 nova_compute[226829]: 2026-01-31 08:23:30.732 226833 DEBUG nova.virt.hardware [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:23:30 compute-2 nova_compute[226829]: 2026-01-31 08:23:30.732 226833 DEBUG nova.virt.hardware [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:23:30 compute-2 nova_compute[226829]: 2026-01-31 08:23:30.733 226833 DEBUG nova.virt.hardware [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:23:30 compute-2 nova_compute[226829]: 2026-01-31 08:23:30.736 226833 DEBUG oslo_concurrency.processutils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:23:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:23:31 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/879210536' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:23:31 compute-2 nova_compute[226829]: 2026-01-31 08:23:31.166 226833 DEBUG oslo_concurrency.processutils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:23:31 compute-2 nova_compute[226829]: 2026-01-31 08:23:31.195 226833 DEBUG nova.storage.rbd_utils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] rbd image 47072fbd-0188-49f0-8a82-0cfa42eca7e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:23:31 compute-2 nova_compute[226829]: 2026-01-31 08:23:31.198 226833 DEBUG oslo_concurrency.processutils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:23:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:31.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e350 e350: 3 total, 3 up, 3 in
Jan 31 08:23:31 compute-2 nova_compute[226829]: 2026-01-31 08:23:31.591 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:31.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:31 compute-2 nova_compute[226829]: 2026-01-31 08:23:31.757 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:23:31 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/836179707' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:23:31 compute-2 ceph-mon[77282]: osdmap e349: 3 total, 3 up, 3 in
Jan 31 08:23:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/879210536' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:23:31 compute-2 nova_compute[226829]: 2026-01-31 08:23:31.974 226833 DEBUG oslo_concurrency.processutils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.776s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:23:31 compute-2 nova_compute[226829]: 2026-01-31 08:23:31.976 226833 DEBUG nova.objects.instance [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 47072fbd-0188-49f0-8a82-0cfa42eca7e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:23:32 compute-2 nova_compute[226829]: 2026-01-31 08:23:32.007 226833 DEBUG nova.virt.libvirt.driver [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:23:32 compute-2 nova_compute[226829]:   <uuid>47072fbd-0188-49f0-8a82-0cfa42eca7e4</uuid>
Jan 31 08:23:32 compute-2 nova_compute[226829]:   <name>instance-00000098</name>
Jan 31 08:23:32 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:23:32 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:23:32 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:23:32 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:       <nova:name>tempest-ServerShowV257Test-server-565689888</nova:name>
Jan 31 08:23:32 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:23:30</nova:creationTime>
Jan 31 08:23:32 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:23:32 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:23:32 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:23:32 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:23:32 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:23:32 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:23:32 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:23:32 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:23:32 compute-2 nova_compute[226829]:         <nova:user uuid="a1da90915f9344c292a208246f6a75ff">tempest-ServerShowV257Test-736447834-project-member</nova:user>
Jan 31 08:23:32 compute-2 nova_compute[226829]:         <nova:project uuid="17987cbe856c4f6aab27787ff02572d5">tempest-ServerShowV257Test-736447834</nova:project>
Jan 31 08:23:32 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:23:32 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:       <nova:ports/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:23:32 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:23:32 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <system>
Jan 31 08:23:32 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:23:32 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:23:32 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:23:32 compute-2 nova_compute[226829]:       <entry name="serial">47072fbd-0188-49f0-8a82-0cfa42eca7e4</entry>
Jan 31 08:23:32 compute-2 nova_compute[226829]:       <entry name="uuid">47072fbd-0188-49f0-8a82-0cfa42eca7e4</entry>
Jan 31 08:23:32 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     </system>
Jan 31 08:23:32 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:23:32 compute-2 nova_compute[226829]:   <os>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:   </os>
Jan 31 08:23:32 compute-2 nova_compute[226829]:   <features>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:   </features>
Jan 31 08:23:32 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:23:32 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:23:32 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:23:32 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/47072fbd-0188-49f0-8a82-0cfa42eca7e4_disk">
Jan 31 08:23:32 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:       </source>
Jan 31 08:23:32 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:23:32 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:23:32 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:23:32 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/47072fbd-0188-49f0-8a82-0cfa42eca7e4_disk.config">
Jan 31 08:23:32 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:       </source>
Jan 31 08:23:32 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:23:32 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:23:32 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:23:32 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/47072fbd-0188-49f0-8a82-0cfa42eca7e4/console.log" append="off"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <video>
Jan 31 08:23:32 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     </video>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:23:32 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:23:32 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:23:32 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:23:32 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:23:32 compute-2 nova_compute[226829]: </domain>
Jan 31 08:23:32 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:23:32 compute-2 nova_compute[226829]: 2026-01-31 08:23:32.150 226833 DEBUG nova.virt.libvirt.driver [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:23:32 compute-2 nova_compute[226829]: 2026-01-31 08:23:32.150 226833 DEBUG nova.virt.libvirt.driver [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:23:32 compute-2 nova_compute[226829]: 2026-01-31 08:23:32.151 226833 INFO nova.virt.libvirt.driver [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Using config drive
Jan 31 08:23:32 compute-2 nova_compute[226829]: 2026-01-31 08:23:32.181 226833 DEBUG nova.storage.rbd_utils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] rbd image 47072fbd-0188-49f0-8a82-0cfa42eca7e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:23:32 compute-2 nova_compute[226829]: 2026-01-31 08:23:32.497 226833 INFO nova.virt.libvirt.driver [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Creating config drive at /var/lib/nova/instances/47072fbd-0188-49f0-8a82-0cfa42eca7e4/disk.config
Jan 31 08:23:32 compute-2 nova_compute[226829]: 2026-01-31 08:23:32.502 226833 DEBUG oslo_concurrency.processutils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/47072fbd-0188-49f0-8a82-0cfa42eca7e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp38v979ma execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:23:32 compute-2 nova_compute[226829]: 2026-01-31 08:23:32.639 226833 DEBUG oslo_concurrency.processutils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/47072fbd-0188-49f0-8a82-0cfa42eca7e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp38v979ma" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:23:32 compute-2 nova_compute[226829]: 2026-01-31 08:23:32.672 226833 DEBUG nova.storage.rbd_utils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] rbd image 47072fbd-0188-49f0-8a82-0cfa42eca7e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:23:32 compute-2 nova_compute[226829]: 2026-01-31 08:23:32.675 226833 DEBUG oslo_concurrency.processutils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/47072fbd-0188-49f0-8a82-0cfa42eca7e4/disk.config 47072fbd-0188-49f0-8a82-0cfa42eca7e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:23:32 compute-2 ceph-mon[77282]: osdmap e350: 3 total, 3 up, 3 in
Jan 31 08:23:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/836179707' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:23:32 compute-2 ceph-mon[77282]: pgmap v2688: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 363 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 5.2 MiB/s wr, 245 op/s
Jan 31 08:23:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:33.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:33 compute-2 nova_compute[226829]: 2026-01-31 08:23:33.424 226833 DEBUG oslo_concurrency.processutils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/47072fbd-0188-49f0-8a82-0cfa42eca7e4/disk.config 47072fbd-0188-49f0-8a82-0cfa42eca7e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.749s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:23:33 compute-2 nova_compute[226829]: 2026-01-31 08:23:33.425 226833 INFO nova.virt.libvirt.driver [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Deleting local config drive /var/lib/nova/instances/47072fbd-0188-49f0-8a82-0cfa42eca7e4/disk.config because it was imported into RBD.
Jan 31 08:23:33 compute-2 systemd-machined[195142]: New machine qemu-70-instance-00000098.
Jan 31 08:23:33 compute-2 systemd[1]: Started Virtual Machine qemu-70-instance-00000098.
Jan 31 08:23:33 compute-2 nova_compute[226829]: 2026-01-31 08:23:33.563 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:23:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:33.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:34 compute-2 ceph-mon[77282]: pgmap v2689: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 395 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 6.8 MiB/s wr, 276 op/s
Jan 31 08:23:34 compute-2 nova_compute[226829]: 2026-01-31 08:23:34.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:23:34 compute-2 nova_compute[226829]: 2026-01-31 08:23:34.699 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847814.6985445, 47072fbd-0188-49f0-8a82-0cfa42eca7e4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:23:34 compute-2 nova_compute[226829]: 2026-01-31 08:23:34.699 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] VM Resumed (Lifecycle Event)
Jan 31 08:23:34 compute-2 nova_compute[226829]: 2026-01-31 08:23:34.703 226833 DEBUG nova.compute.manager [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:23:34 compute-2 nova_compute[226829]: 2026-01-31 08:23:34.704 226833 DEBUG nova.virt.libvirt.driver [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:23:34 compute-2 nova_compute[226829]: 2026-01-31 08:23:34.708 226833 INFO nova.virt.libvirt.driver [-] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Instance spawned successfully.
Jan 31 08:23:34 compute-2 nova_compute[226829]: 2026-01-31 08:23:34.709 226833 DEBUG nova.virt.libvirt.driver [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:23:34 compute-2 nova_compute[226829]: 2026-01-31 08:23:34.747 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:23:34 compute-2 nova_compute[226829]: 2026-01-31 08:23:34.755 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:23:34 compute-2 nova_compute[226829]: 2026-01-31 08:23:34.759 226833 DEBUG nova.virt.libvirt.driver [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:23:34 compute-2 nova_compute[226829]: 2026-01-31 08:23:34.760 226833 DEBUG nova.virt.libvirt.driver [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:23:34 compute-2 nova_compute[226829]: 2026-01-31 08:23:34.760 226833 DEBUG nova.virt.libvirt.driver [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:23:34 compute-2 nova_compute[226829]: 2026-01-31 08:23:34.761 226833 DEBUG nova.virt.libvirt.driver [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:23:34 compute-2 nova_compute[226829]: 2026-01-31 08:23:34.761 226833 DEBUG nova.virt.libvirt.driver [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:23:34 compute-2 nova_compute[226829]: 2026-01-31 08:23:34.761 226833 DEBUG nova.virt.libvirt.driver [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:23:34 compute-2 nova_compute[226829]: 2026-01-31 08:23:34.793 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:23:34 compute-2 nova_compute[226829]: 2026-01-31 08:23:34.794 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847814.6996284, 47072fbd-0188-49f0-8a82-0cfa42eca7e4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:23:34 compute-2 nova_compute[226829]: 2026-01-31 08:23:34.794 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] VM Started (Lifecycle Event)
Jan 31 08:23:35 compute-2 nova_compute[226829]: 2026-01-31 08:23:35.052 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:23:35 compute-2 nova_compute[226829]: 2026-01-31 08:23:35.055 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:23:35 compute-2 nova_compute[226829]: 2026-01-31 08:23:35.088 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:23:35 compute-2 nova_compute[226829]: 2026-01-31 08:23:35.106 226833 INFO nova.compute.manager [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Took 8.01 seconds to spawn the instance on the hypervisor.
Jan 31 08:23:35 compute-2 nova_compute[226829]: 2026-01-31 08:23:35.107 226833 DEBUG nova.compute.manager [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:23:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:35 compute-2 nova_compute[226829]: 2026-01-31 08:23:35.247 226833 INFO nova.compute.manager [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Took 9.47 seconds to build instance.
Jan 31 08:23:35 compute-2 nova_compute[226829]: 2026-01-31 08:23:35.282 226833 DEBUG oslo_concurrency.lockutils [None req-6074a3de-6b1c-466f-9f46-c223279c34f9 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lock "47072fbd-0188-49f0-8a82-0cfa42eca7e4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:23:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:35.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:35 compute-2 nova_compute[226829]: 2026-01-31 08:23:35.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:23:35 compute-2 nova_compute[226829]: 2026-01-31 08:23:35.490 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:23:35 compute-2 nova_compute[226829]: 2026-01-31 08:23:35.490 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:23:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:23:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:35.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:23:36 compute-2 nova_compute[226829]: 2026-01-31 08:23:36.034 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-e8e7a13f-a648-45dc-b768-ac5deac97083" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:23:36 compute-2 nova_compute[226829]: 2026-01-31 08:23:36.035 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-e8e7a13f-a648-45dc-b768-ac5deac97083" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:23:36 compute-2 nova_compute[226829]: 2026-01-31 08:23:36.035 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:23:36 compute-2 nova_compute[226829]: 2026-01-31 08:23:36.035 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid e8e7a13f-a648-45dc-b768-ac5deac97083 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:23:36 compute-2 ceph-mon[77282]: pgmap v2690: 305 pgs: 305 active+clean; 415 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 8.5 MiB/s wr, 305 op/s
Jan 31 08:23:36 compute-2 nova_compute[226829]: 2026-01-31 08:23:36.594 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:36 compute-2 nova_compute[226829]: 2026-01-31 08:23:36.759 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:37.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e351 e351: 3 total, 3 up, 3 in
Jan 31 08:23:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:37.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:38 compute-2 nova_compute[226829]: 2026-01-31 08:23:38.328 226833 INFO nova.compute.manager [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Rebuilding instance
Jan 31 08:23:38 compute-2 ceph-mon[77282]: osdmap e351: 3 total, 3 up, 3 in
Jan 31 08:23:38 compute-2 ceph-mon[77282]: pgmap v2692: 305 pgs: 305 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 7.1 MiB/s wr, 344 op/s
Jan 31 08:23:39 compute-2 nova_compute[226829]: 2026-01-31 08:23:39.143 226833 DEBUG nova.objects.instance [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 47072fbd-0188-49f0-8a82-0cfa42eca7e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:23:39 compute-2 nova_compute[226829]: 2026-01-31 08:23:39.179 226833 DEBUG nova.compute.manager [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:23:39 compute-2 nova_compute[226829]: 2026-01-31 08:23:39.289 226833 DEBUG nova.objects.instance [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lazy-loading 'pci_requests' on Instance uuid 47072fbd-0188-49f0-8a82-0cfa42eca7e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:23:39 compute-2 nova_compute[226829]: 2026-01-31 08:23:39.304 226833 DEBUG nova.objects.instance [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 47072fbd-0188-49f0-8a82-0cfa42eca7e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:23:39 compute-2 nova_compute[226829]: 2026-01-31 08:23:39.308 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Updating instance_info_cache with network_info: [{"id": "22013b45-81b3-43ce-9b55-b18d9c07bbef", "address": "fa:16:3e:74:cc:b0", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22013b45-81", "ovs_interfaceid": "22013b45-81b3-43ce-9b55-b18d9c07bbef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:23:39 compute-2 nova_compute[226829]: 2026-01-31 08:23:39.323 226833 DEBUG nova.objects.instance [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lazy-loading 'resources' on Instance uuid 47072fbd-0188-49f0-8a82-0cfa42eca7e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:23:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:39.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:39 compute-2 nova_compute[226829]: 2026-01-31 08:23:39.374 226833 DEBUG nova.objects.instance [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lazy-loading 'migration_context' on Instance uuid 47072fbd-0188-49f0-8a82-0cfa42eca7e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:23:39 compute-2 nova_compute[226829]: 2026-01-31 08:23:39.376 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-e8e7a13f-a648-45dc-b768-ac5deac97083" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:23:39 compute-2 nova_compute[226829]: 2026-01-31 08:23:39.377 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:23:39 compute-2 nova_compute[226829]: 2026-01-31 08:23:39.378 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:23:39 compute-2 nova_compute[226829]: 2026-01-31 08:23:39.394 226833 DEBUG nova.objects.instance [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 31 08:23:39 compute-2 nova_compute[226829]: 2026-01-31 08:23:39.398 226833 DEBUG nova.virt.libvirt.driver [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 08:23:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:39.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:40 compute-2 ceph-mon[77282]: pgmap v2693: 305 pgs: 305 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 4.4 MiB/s wr, 287 op/s
Jan 31 08:23:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:41.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:41 compute-2 nova_compute[226829]: 2026-01-31 08:23:41.479 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:41 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:23:41.479 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:23:41 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:23:41.482 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:23:41 compute-2 nova_compute[226829]: 2026-01-31 08:23:41.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:23:41 compute-2 nova_compute[226829]: 2026-01-31 08:23:41.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:23:41 compute-2 nova_compute[226829]: 2026-01-31 08:23:41.598 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:41.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:41 compute-2 nova_compute[226829]: 2026-01-31 08:23:41.761 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:42 compute-2 ceph-mon[77282]: pgmap v2694: 305 pgs: 305 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.8 MiB/s wr, 278 op/s
Jan 31 08:23:42 compute-2 nova_compute[226829]: 2026-01-31 08:23:42.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:23:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1289977359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3228575897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:43 compute-2 podman[296724]: 2026-01-31 08:23:43.206539701 +0000 UTC m=+0.080183746 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:23:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:43.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:23:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:43.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:23:44 compute-2 ceph-mon[77282]: pgmap v2695: 305 pgs: 305 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 219 op/s
Jan 31 08:23:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1119347368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:23:44.486 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:23:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:23:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:45.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:23:45 compute-2 nova_compute[226829]: 2026-01-31 08:23:45.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:23:45 compute-2 nova_compute[226829]: 2026-01-31 08:23:45.521 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:23:45 compute-2 nova_compute[226829]: 2026-01-31 08:23:45.522 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:23:45 compute-2 nova_compute[226829]: 2026-01-31 08:23:45.522 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:23:45 compute-2 nova_compute[226829]: 2026-01-31 08:23:45.523 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:23:45 compute-2 nova_compute[226829]: 2026-01-31 08:23:45.524 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:23:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/841049691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:23:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:45.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:23:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:23:46 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1246198736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:46 compute-2 nova_compute[226829]: 2026-01-31 08:23:46.049 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:23:46 compute-2 nova_compute[226829]: 2026-01-31 08:23:46.188 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:23:46 compute-2 nova_compute[226829]: 2026-01-31 08:23:46.189 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:23:46 compute-2 nova_compute[226829]: 2026-01-31 08:23:46.195 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:23:46 compute-2 nova_compute[226829]: 2026-01-31 08:23:46.195 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:23:46 compute-2 nova_compute[226829]: 2026-01-31 08:23:46.350 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:23:46 compute-2 nova_compute[226829]: 2026-01-31 08:23:46.351 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3900MB free_disk=20.83056640625GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:23:46 compute-2 nova_compute[226829]: 2026-01-31 08:23:46.351 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:23:46 compute-2 nova_compute[226829]: 2026-01-31 08:23:46.351 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:23:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1246198736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:46 compute-2 ceph-mon[77282]: pgmap v2696: 305 pgs: 305 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 45 KiB/s wr, 131 op/s
Jan 31 08:23:46 compute-2 nova_compute[226829]: 2026-01-31 08:23:46.642 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:46 compute-2 nova_compute[226829]: 2026-01-31 08:23:46.762 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:47 compute-2 nova_compute[226829]: 2026-01-31 08:23:47.318 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance e8e7a13f-a648-45dc-b768-ac5deac97083 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:23:47 compute-2 nova_compute[226829]: 2026-01-31 08:23:47.318 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 47072fbd-0188-49f0-8a82-0cfa42eca7e4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:23:47 compute-2 nova_compute[226829]: 2026-01-31 08:23:47.318 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:23:47 compute-2 nova_compute[226829]: 2026-01-31 08:23:47.319 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:23:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:23:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:47.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:23:47 compute-2 nova_compute[226829]: 2026-01-31 08:23:47.536 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing inventories for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:23:47 compute-2 nova_compute[226829]: 2026-01-31 08:23:47.726 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating ProviderTree inventory for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:23:47 compute-2 nova_compute[226829]: 2026-01-31 08:23:47.727 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating inventory in ProviderTree for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:23:47 compute-2 nova_compute[226829]: 2026-01-31 08:23:47.744 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing aggregate associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:23:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:47.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:47 compute-2 nova_compute[226829]: 2026-01-31 08:23:47.769 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing trait associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, traits: COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:23:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/320626493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:47 compute-2 nova_compute[226829]: 2026-01-31 08:23:47.863 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:23:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:23:48 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3497861562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:48 compute-2 nova_compute[226829]: 2026-01-31 08:23:48.306 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:23:48 compute-2 nova_compute[226829]: 2026-01-31 08:23:48.311 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:23:48 compute-2 nova_compute[226829]: 2026-01-31 08:23:48.388 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:23:48 compute-2 nova_compute[226829]: 2026-01-31 08:23:48.440 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:23:48 compute-2 nova_compute[226829]: 2026-01-31 08:23:48.441 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:23:48 compute-2 ceph-mon[77282]: pgmap v2697: 305 pgs: 305 active+clean; 413 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 968 KiB/s wr, 109 op/s
Jan 31 08:23:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3497861562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:49.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:49 compute-2 sudo[296796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:23:49 compute-2 sudo[296796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:49 compute-2 sudo[296796]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:49 compute-2 nova_compute[226829]: 2026-01-31 08:23:49.440 226833 DEBUG nova.virt.libvirt.driver [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 31 08:23:49 compute-2 sudo[296821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:23:49 compute-2 sudo[296821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:49 compute-2 sudo[296821]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:49.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:50 compute-2 ceph-mon[77282]: pgmap v2698: 305 pgs: 305 active+clean; 390 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 114 op/s
Jan 31 08:23:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1434908461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:51.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:51 compute-2 nova_compute[226829]: 2026-01-31 08:23:51.644 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:51 compute-2 nova_compute[226829]: 2026-01-31 08:23:51.764 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:23:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:51.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:23:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1673176264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:52 compute-2 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000098.scope: Deactivated successfully.
Jan 31 08:23:52 compute-2 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000098.scope: Consumed 13.817s CPU time.
Jan 31 08:23:52 compute-2 systemd-machined[195142]: Machine qemu-70-instance-00000098 terminated.
Jan 31 08:23:52 compute-2 nova_compute[226829]: 2026-01-31 08:23:52.613 226833 INFO nova.virt.libvirt.driver [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Instance shutdown successfully after 13 seconds.
Jan 31 08:23:52 compute-2 nova_compute[226829]: 2026-01-31 08:23:52.620 226833 INFO nova.virt.libvirt.driver [-] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Instance destroyed successfully.
Jan 31 08:23:52 compute-2 nova_compute[226829]: 2026-01-31 08:23:52.627 226833 INFO nova.virt.libvirt.driver [-] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Instance destroyed successfully.
Jan 31 08:23:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:53.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:53 compute-2 ceph-mon[77282]: pgmap v2699: 305 pgs: 305 active+clean; 368 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.2 MiB/s wr, 116 op/s
Jan 31 08:23:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:53.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:54 compute-2 podman[296870]: 2026-01-31 08:23:54.174967482 +0000 UTC m=+0.060014350 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 08:23:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1351194824' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:23:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1351194824' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:23:54 compute-2 ceph-mon[77282]: pgmap v2700: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 101 op/s
Jan 31 08:23:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:55.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:55.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:56 compute-2 ceph-mon[77282]: pgmap v2701: 305 pgs: 305 active+clean; 384 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 119 op/s
Jan 31 08:23:56 compute-2 nova_compute[226829]: 2026-01-31 08:23:56.645 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:56 compute-2 nova_compute[226829]: 2026-01-31 08:23:56.782 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:57 compute-2 nova_compute[226829]: 2026-01-31 08:23:57.118 226833 INFO nova.virt.libvirt.driver [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Deleting instance files /var/lib/nova/instances/47072fbd-0188-49f0-8a82-0cfa42eca7e4_del
Jan 31 08:23:57 compute-2 nova_compute[226829]: 2026-01-31 08:23:57.118 226833 INFO nova.virt.libvirt.driver [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Deletion of /var/lib/nova/instances/47072fbd-0188-49f0-8a82-0cfa42eca7e4_del complete
Jan 31 08:23:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:57.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:57 compute-2 nova_compute[226829]: 2026-01-31 08:23:57.510 226833 DEBUG nova.virt.libvirt.driver [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:23:57 compute-2 nova_compute[226829]: 2026-01-31 08:23:57.511 226833 INFO nova.virt.libvirt.driver [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Creating image(s)
Jan 31 08:23:57 compute-2 nova_compute[226829]: 2026-01-31 08:23:57.537 226833 DEBUG nova.storage.rbd_utils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] rbd image 47072fbd-0188-49f0-8a82-0cfa42eca7e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:23:57 compute-2 nova_compute[226829]: 2026-01-31 08:23:57.563 226833 DEBUG nova.storage.rbd_utils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] rbd image 47072fbd-0188-49f0-8a82-0cfa42eca7e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:23:57 compute-2 nova_compute[226829]: 2026-01-31 08:23:57.587 226833 DEBUG nova.storage.rbd_utils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] rbd image 47072fbd-0188-49f0-8a82-0cfa42eca7e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:23:57 compute-2 nova_compute[226829]: 2026-01-31 08:23:57.590 226833 DEBUG oslo_concurrency.processutils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:23:57 compute-2 nova_compute[226829]: 2026-01-31 08:23:57.652 226833 DEBUG oslo_concurrency.processutils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:23:57 compute-2 nova_compute[226829]: 2026-01-31 08:23:57.653 226833 DEBUG oslo_concurrency.lockutils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Acquiring lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:23:57 compute-2 nova_compute[226829]: 2026-01-31 08:23:57.654 226833 DEBUG oslo_concurrency.lockutils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:23:57 compute-2 nova_compute[226829]: 2026-01-31 08:23:57.654 226833 DEBUG oslo_concurrency.lockutils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:23:57 compute-2 nova_compute[226829]: 2026-01-31 08:23:57.682 226833 DEBUG nova.storage.rbd_utils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] rbd image 47072fbd-0188-49f0-8a82-0cfa42eca7e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:23:57 compute-2 nova_compute[226829]: 2026-01-31 08:23:57.686 226833 DEBUG oslo_concurrency.processutils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c 47072fbd-0188-49f0-8a82-0cfa42eca7e4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:23:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:23:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:57.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.014 226833 DEBUG oslo_concurrency.processutils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c 47072fbd-0188-49f0-8a82-0cfa42eca7e4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.069 226833 DEBUG nova.storage.rbd_utils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] resizing rbd image 47072fbd-0188-49f0-8a82-0cfa42eca7e4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:23:58 compute-2 ceph-mon[77282]: pgmap v2702: 305 pgs: 305 active+clean; 388 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.2 MiB/s wr, 158 op/s
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.167 226833 DEBUG nova.virt.libvirt.driver [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.168 226833 DEBUG nova.virt.libvirt.driver [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Ensure instance console log exists: /var/lib/nova/instances/47072fbd-0188-49f0-8a82-0cfa42eca7e4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.168 226833 DEBUG oslo_concurrency.lockutils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.168 226833 DEBUG oslo_concurrency.lockutils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.169 226833 DEBUG oslo_concurrency.lockutils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.170 226833 DEBUG nova.virt.libvirt.driver [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:34Z,direct_url=<?>,disk_format='qcow2',id=40cf2ff3-f7ff-4843-b4ab-b7dcc843006f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.174 226833 WARNING nova.virt.libvirt.driver [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.183 226833 DEBUG nova.virt.libvirt.host [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.183 226833 DEBUG nova.virt.libvirt.host [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.188 226833 DEBUG nova.virt.libvirt.host [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.189 226833 DEBUG nova.virt.libvirt.host [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.190 226833 DEBUG nova.virt.libvirt.driver [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.190 226833 DEBUG nova.virt.hardware [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:34Z,direct_url=<?>,disk_format='qcow2',id=40cf2ff3-f7ff-4843-b4ab-b7dcc843006f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.191 226833 DEBUG nova.virt.hardware [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.191 226833 DEBUG nova.virt.hardware [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.192 226833 DEBUG nova.virt.hardware [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.192 226833 DEBUG nova.virt.hardware [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.192 226833 DEBUG nova.virt.hardware [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.192 226833 DEBUG nova.virt.hardware [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.193 226833 DEBUG nova.virt.hardware [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.193 226833 DEBUG nova.virt.hardware [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.193 226833 DEBUG nova.virt.hardware [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.194 226833 DEBUG nova.virt.hardware [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.194 226833 DEBUG nova.objects.instance [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 47072fbd-0188-49f0-8a82-0cfa42eca7e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.234 226833 DEBUG oslo_concurrency.processutils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:23:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:23:58 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/851229378' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.701 226833 DEBUG oslo_concurrency.processutils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.741 226833 DEBUG nova.storage.rbd_utils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] rbd image 47072fbd-0188-49f0-8a82-0cfa42eca7e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:23:58 compute-2 nova_compute[226829]: 2026-01-31 08:23:58.747 226833 DEBUG oslo_concurrency.processutils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:23:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/851229378' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:23:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:23:59 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2749672095' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:23:59 compute-2 nova_compute[226829]: 2026-01-31 08:23:59.207 226833 DEBUG oslo_concurrency.processutils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:23:59 compute-2 nova_compute[226829]: 2026-01-31 08:23:59.210 226833 DEBUG nova.virt.libvirt.driver [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:23:59 compute-2 nova_compute[226829]:   <uuid>47072fbd-0188-49f0-8a82-0cfa42eca7e4</uuid>
Jan 31 08:23:59 compute-2 nova_compute[226829]:   <name>instance-00000098</name>
Jan 31 08:23:59 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:23:59 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:23:59 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:23:59 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:       <nova:name>tempest-ServerShowV257Test-server-565689888</nova:name>
Jan 31 08:23:59 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:23:58</nova:creationTime>
Jan 31 08:23:59 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:23:59 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:23:59 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:23:59 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:23:59 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:23:59 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:23:59 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:23:59 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:23:59 compute-2 nova_compute[226829]:         <nova:user uuid="a1da90915f9344c292a208246f6a75ff">tempest-ServerShowV257Test-736447834-project-member</nova:user>
Jan 31 08:23:59 compute-2 nova_compute[226829]:         <nova:project uuid="17987cbe856c4f6aab27787ff02572d5">tempest-ServerShowV257Test-736447834</nova:project>
Jan 31 08:23:59 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:23:59 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="40cf2ff3-f7ff-4843-b4ab-b7dcc843006f"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:       <nova:ports/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:23:59 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:23:59 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <system>
Jan 31 08:23:59 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:23:59 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:23:59 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:23:59 compute-2 nova_compute[226829]:       <entry name="serial">47072fbd-0188-49f0-8a82-0cfa42eca7e4</entry>
Jan 31 08:23:59 compute-2 nova_compute[226829]:       <entry name="uuid">47072fbd-0188-49f0-8a82-0cfa42eca7e4</entry>
Jan 31 08:23:59 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     </system>
Jan 31 08:23:59 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:23:59 compute-2 nova_compute[226829]:   <os>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:   </os>
Jan 31 08:23:59 compute-2 nova_compute[226829]:   <features>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:   </features>
Jan 31 08:23:59 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:23:59 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:23:59 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:23:59 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/47072fbd-0188-49f0-8a82-0cfa42eca7e4_disk">
Jan 31 08:23:59 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:       </source>
Jan 31 08:23:59 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:23:59 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:23:59 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:23:59 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/47072fbd-0188-49f0-8a82-0cfa42eca7e4_disk.config">
Jan 31 08:23:59 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:       </source>
Jan 31 08:23:59 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:23:59 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:23:59 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:23:59 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/47072fbd-0188-49f0-8a82-0cfa42eca7e4/console.log" append="off"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <video>
Jan 31 08:23:59 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     </video>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:23:59 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:23:59 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:23:59 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:23:59 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:23:59 compute-2 nova_compute[226829]: </domain>
Jan 31 08:23:59 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:23:59 compute-2 nova_compute[226829]: 2026-01-31 08:23:59.265 226833 DEBUG nova.virt.libvirt.driver [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:23:59 compute-2 nova_compute[226829]: 2026-01-31 08:23:59.266 226833 DEBUG nova.virt.libvirt.driver [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:23:59 compute-2 nova_compute[226829]: 2026-01-31 08:23:59.266 226833 INFO nova.virt.libvirt.driver [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Using config drive
Jan 31 08:23:59 compute-2 nova_compute[226829]: 2026-01-31 08:23:59.296 226833 DEBUG nova.storage.rbd_utils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] rbd image 47072fbd-0188-49f0-8a82-0cfa42eca7e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:23:59 compute-2 nova_compute[226829]: 2026-01-31 08:23:59.328 226833 DEBUG nova.objects.instance [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 47072fbd-0188-49f0-8a82-0cfa42eca7e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:23:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:59.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:59 compute-2 nova_compute[226829]: 2026-01-31 08:23:59.385 226833 DEBUG nova.objects.instance [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lazy-loading 'keypairs' on Instance uuid 47072fbd-0188-49f0-8a82-0cfa42eca7e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:23:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:23:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:59.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:00 compute-2 nova_compute[226829]: 2026-01-31 08:24:00.082 226833 INFO nova.virt.libvirt.driver [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Creating config drive at /var/lib/nova/instances/47072fbd-0188-49f0-8a82-0cfa42eca7e4/disk.config
Jan 31 08:24:00 compute-2 nova_compute[226829]: 2026-01-31 08:24:00.089 226833 DEBUG oslo_concurrency.processutils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/47072fbd-0188-49f0-8a82-0cfa42eca7e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpgb0nbtiv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2749672095' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:00 compute-2 ceph-mon[77282]: pgmap v2703: 305 pgs: 305 active+clean; 397 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.5 MiB/s wr, 152 op/s
Jan 31 08:24:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/681131750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:00 compute-2 nova_compute[226829]: 2026-01-31 08:24:00.219 226833 DEBUG oslo_concurrency.processutils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/47072fbd-0188-49f0-8a82-0cfa42eca7e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpgb0nbtiv" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:00 compute-2 nova_compute[226829]: 2026-01-31 08:24:00.258 226833 DEBUG nova.storage.rbd_utils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] rbd image 47072fbd-0188-49f0-8a82-0cfa42eca7e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:24:00 compute-2 nova_compute[226829]: 2026-01-31 08:24:00.264 226833 DEBUG oslo_concurrency.processutils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/47072fbd-0188-49f0-8a82-0cfa42eca7e4/disk.config 47072fbd-0188-49f0-8a82-0cfa42eca7e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:00 compute-2 nova_compute[226829]: 2026-01-31 08:24:00.441 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:24:00 compute-2 nova_compute[226829]: 2026-01-31 08:24:00.442 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:24:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:24:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:01.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:24:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1572303610' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2372279131' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:01 compute-2 nova_compute[226829]: 2026-01-31 08:24:01.647 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:01 compute-2 nova_compute[226829]: 2026-01-31 08:24:01.707 226833 DEBUG oslo_concurrency.processutils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/47072fbd-0188-49f0-8a82-0cfa42eca7e4/disk.config 47072fbd-0188-49f0-8a82-0cfa42eca7e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:01 compute-2 nova_compute[226829]: 2026-01-31 08:24:01.708 226833 INFO nova.virt.libvirt.driver [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Deleting local config drive /var/lib/nova/instances/47072fbd-0188-49f0-8a82-0cfa42eca7e4/disk.config because it was imported into RBD.
Jan 31 08:24:01 compute-2 systemd-machined[195142]: New machine qemu-71-instance-00000098.
Jan 31 08:24:01 compute-2 systemd[1]: Started Virtual Machine qemu-71-instance-00000098.
Jan 31 08:24:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:01.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:01 compute-2 nova_compute[226829]: 2026-01-31 08:24:01.784 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:02 compute-2 nova_compute[226829]: 2026-01-31 08:24:02.292 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Removed pending event for 47072fbd-0188-49f0-8a82-0cfa42eca7e4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 08:24:02 compute-2 nova_compute[226829]: 2026-01-31 08:24:02.294 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847842.2921047, 47072fbd-0188-49f0-8a82-0cfa42eca7e4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:24:02 compute-2 nova_compute[226829]: 2026-01-31 08:24:02.294 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] VM Resumed (Lifecycle Event)
Jan 31 08:24:02 compute-2 nova_compute[226829]: 2026-01-31 08:24:02.297 226833 DEBUG nova.compute.manager [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:24:02 compute-2 nova_compute[226829]: 2026-01-31 08:24:02.297 226833 DEBUG nova.virt.libvirt.driver [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:24:02 compute-2 nova_compute[226829]: 2026-01-31 08:24:02.304 226833 INFO nova.virt.libvirt.driver [-] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Instance spawned successfully.
Jan 31 08:24:02 compute-2 nova_compute[226829]: 2026-01-31 08:24:02.304 226833 DEBUG nova.virt.libvirt.driver [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:24:02 compute-2 nova_compute[226829]: 2026-01-31 08:24:02.336 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:24:02 compute-2 nova_compute[226829]: 2026-01-31 08:24:02.343 226833 DEBUG nova.virt.libvirt.driver [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:24:02 compute-2 nova_compute[226829]: 2026-01-31 08:24:02.343 226833 DEBUG nova.virt.libvirt.driver [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:24:02 compute-2 nova_compute[226829]: 2026-01-31 08:24:02.344 226833 DEBUG nova.virt.libvirt.driver [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:24:02 compute-2 nova_compute[226829]: 2026-01-31 08:24:02.344 226833 DEBUG nova.virt.libvirt.driver [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:24:02 compute-2 nova_compute[226829]: 2026-01-31 08:24:02.344 226833 DEBUG nova.virt.libvirt.driver [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:24:02 compute-2 nova_compute[226829]: 2026-01-31 08:24:02.345 226833 DEBUG nova.virt.libvirt.driver [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:24:02 compute-2 nova_compute[226829]: 2026-01-31 08:24:02.347 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:24:02 compute-2 nova_compute[226829]: 2026-01-31 08:24:02.394 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 31 08:24:02 compute-2 nova_compute[226829]: 2026-01-31 08:24:02.395 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847842.2930796, 47072fbd-0188-49f0-8a82-0cfa42eca7e4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:24:02 compute-2 nova_compute[226829]: 2026-01-31 08:24:02.395 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] VM Started (Lifecycle Event)
Jan 31 08:24:02 compute-2 nova_compute[226829]: 2026-01-31 08:24:02.418 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:24:02 compute-2 nova_compute[226829]: 2026-01-31 08:24:02.421 226833 DEBUG nova.compute.manager [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:24:02 compute-2 nova_compute[226829]: 2026-01-31 08:24:02.424 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:24:02 compute-2 nova_compute[226829]: 2026-01-31 08:24:02.466 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 31 08:24:02 compute-2 nova_compute[226829]: 2026-01-31 08:24:02.522 226833 DEBUG oslo_concurrency.lockutils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:02 compute-2 nova_compute[226829]: 2026-01-31 08:24:02.523 226833 DEBUG oslo_concurrency.lockutils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:02 compute-2 nova_compute[226829]: 2026-01-31 08:24:02.523 226833 DEBUG nova.objects.instance [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 31 08:24:02 compute-2 ceph-mon[77282]: pgmap v2704: 305 pgs: 305 active+clean; 416 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.1 MiB/s wr, 151 op/s
Jan 31 08:24:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2384994557' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:02 compute-2 nova_compute[226829]: 2026-01-31 08:24:02.618 226833 DEBUG oslo_concurrency.lockutils [None req-384714e2-b144-4aa1-a82f-e52e0424a2d3 a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:03 compute-2 nova_compute[226829]: 2026-01-31 08:24:03.164 226833 DEBUG oslo_concurrency.lockutils [None req-aa7f2156-500d-40ca-85ae-630895250f7d a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Acquiring lock "47072fbd-0188-49f0-8a82-0cfa42eca7e4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:03 compute-2 nova_compute[226829]: 2026-01-31 08:24:03.165 226833 DEBUG oslo_concurrency.lockutils [None req-aa7f2156-500d-40ca-85ae-630895250f7d a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lock "47072fbd-0188-49f0-8a82-0cfa42eca7e4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:03 compute-2 nova_compute[226829]: 2026-01-31 08:24:03.165 226833 DEBUG oslo_concurrency.lockutils [None req-aa7f2156-500d-40ca-85ae-630895250f7d a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Acquiring lock "47072fbd-0188-49f0-8a82-0cfa42eca7e4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:03 compute-2 nova_compute[226829]: 2026-01-31 08:24:03.166 226833 DEBUG oslo_concurrency.lockutils [None req-aa7f2156-500d-40ca-85ae-630895250f7d a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lock "47072fbd-0188-49f0-8a82-0cfa42eca7e4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:03 compute-2 nova_compute[226829]: 2026-01-31 08:24:03.166 226833 DEBUG oslo_concurrency.lockutils [None req-aa7f2156-500d-40ca-85ae-630895250f7d a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lock "47072fbd-0188-49f0-8a82-0cfa42eca7e4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:03 compute-2 nova_compute[226829]: 2026-01-31 08:24:03.167 226833 INFO nova.compute.manager [None req-aa7f2156-500d-40ca-85ae-630895250f7d a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Terminating instance
Jan 31 08:24:03 compute-2 nova_compute[226829]: 2026-01-31 08:24:03.169 226833 DEBUG oslo_concurrency.lockutils [None req-aa7f2156-500d-40ca-85ae-630895250f7d a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Acquiring lock "refresh_cache-47072fbd-0188-49f0-8a82-0cfa42eca7e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:24:03 compute-2 nova_compute[226829]: 2026-01-31 08:24:03.169 226833 DEBUG oslo_concurrency.lockutils [None req-aa7f2156-500d-40ca-85ae-630895250f7d a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Acquired lock "refresh_cache-47072fbd-0188-49f0-8a82-0cfa42eca7e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:24:03 compute-2 nova_compute[226829]: 2026-01-31 08:24:03.170 226833 DEBUG nova.network.neutron [None req-aa7f2156-500d-40ca-85ae-630895250f7d a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:24:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:24:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:03.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:24:03 compute-2 nova_compute[226829]: 2026-01-31 08:24:03.438 226833 DEBUG nova.network.neutron [None req-aa7f2156-500d-40ca-85ae-630895250f7d a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:24:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:03.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3841614421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:03 compute-2 nova_compute[226829]: 2026-01-31 08:24:03.864 226833 DEBUG nova.network.neutron [None req-aa7f2156-500d-40ca-85ae-630895250f7d a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:24:03 compute-2 nova_compute[226829]: 2026-01-31 08:24:03.891 226833 DEBUG oslo_concurrency.lockutils [None req-aa7f2156-500d-40ca-85ae-630895250f7d a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Releasing lock "refresh_cache-47072fbd-0188-49f0-8a82-0cfa42eca7e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:24:03 compute-2 nova_compute[226829]: 2026-01-31 08:24:03.892 226833 DEBUG nova.compute.manager [None req-aa7f2156-500d-40ca-85ae-630895250f7d a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:24:03 compute-2 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000098.scope: Deactivated successfully.
Jan 31 08:24:03 compute-2 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000098.scope: Consumed 2.064s CPU time.
Jan 31 08:24:03 compute-2 systemd-machined[195142]: Machine qemu-71-instance-00000098 terminated.
Jan 31 08:24:04 compute-2 nova_compute[226829]: 2026-01-31 08:24:04.109 226833 INFO nova.virt.libvirt.driver [-] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Instance destroyed successfully.
Jan 31 08:24:04 compute-2 nova_compute[226829]: 2026-01-31 08:24:04.110 226833 DEBUG nova.objects.instance [None req-aa7f2156-500d-40ca-85ae-630895250f7d a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lazy-loading 'resources' on Instance uuid 47072fbd-0188-49f0-8a82-0cfa42eca7e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:24:04 compute-2 nova_compute[226829]: 2026-01-31 08:24:04.688 226833 INFO nova.virt.libvirt.driver [None req-aa7f2156-500d-40ca-85ae-630895250f7d a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Deleting instance files /var/lib/nova/instances/47072fbd-0188-49f0-8a82-0cfa42eca7e4_del
Jan 31 08:24:04 compute-2 nova_compute[226829]: 2026-01-31 08:24:04.688 226833 INFO nova.virt.libvirt.driver [None req-aa7f2156-500d-40ca-85ae-630895250f7d a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Deletion of /var/lib/nova/instances/47072fbd-0188-49f0-8a82-0cfa42eca7e4_del complete
Jan 31 08:24:04 compute-2 nova_compute[226829]: 2026-01-31 08:24:04.829 226833 INFO nova.compute.manager [None req-aa7f2156-500d-40ca-85ae-630895250f7d a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Took 0.94 seconds to destroy the instance on the hypervisor.
Jan 31 08:24:04 compute-2 nova_compute[226829]: 2026-01-31 08:24:04.830 226833 DEBUG oslo.service.loopingcall [None req-aa7f2156-500d-40ca-85ae-630895250f7d a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:24:04 compute-2 nova_compute[226829]: 2026-01-31 08:24:04.831 226833 DEBUG nova.compute.manager [-] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:24:04 compute-2 nova_compute[226829]: 2026-01-31 08:24:04.831 226833 DEBUG nova.network.neutron [-] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:24:04 compute-2 ceph-mon[77282]: pgmap v2705: 305 pgs: 305 active+clean; 431 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 94 KiB/s rd, 5.3 MiB/s wr, 134 op/s
Jan 31 08:24:05 compute-2 nova_compute[226829]: 2026-01-31 08:24:05.092 226833 DEBUG nova.network.neutron [-] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:24:05 compute-2 nova_compute[226829]: 2026-01-31 08:24:05.124 226833 DEBUG nova.network.neutron [-] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:24:05 compute-2 nova_compute[226829]: 2026-01-31 08:24:05.158 226833 INFO nova.compute.manager [-] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Took 0.33 seconds to deallocate network for instance.
Jan 31 08:24:05 compute-2 nova_compute[226829]: 2026-01-31 08:24:05.243 226833 DEBUG oslo_concurrency.lockutils [None req-aa7f2156-500d-40ca-85ae-630895250f7d a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:05 compute-2 nova_compute[226829]: 2026-01-31 08:24:05.244 226833 DEBUG oslo_concurrency.lockutils [None req-aa7f2156-500d-40ca-85ae-630895250f7d a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:05.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:05 compute-2 nova_compute[226829]: 2026-01-31 08:24:05.366 226833 DEBUG oslo_concurrency.processutils [None req-aa7f2156-500d-40ca-85ae-630895250f7d a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:05.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:24:05 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2866160867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:05 compute-2 nova_compute[226829]: 2026-01-31 08:24:05.807 226833 DEBUG oslo_concurrency.processutils [None req-aa7f2156-500d-40ca-85ae-630895250f7d a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:05 compute-2 nova_compute[226829]: 2026-01-31 08:24:05.816 226833 DEBUG nova.compute.provider_tree [None req-aa7f2156-500d-40ca-85ae-630895250f7d a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:24:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2866160867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:05 compute-2 nova_compute[226829]: 2026-01-31 08:24:05.891 226833 DEBUG nova.scheduler.client.report [None req-aa7f2156-500d-40ca-85ae-630895250f7d a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:24:05 compute-2 nova_compute[226829]: 2026-01-31 08:24:05.945 226833 DEBUG oslo_concurrency.lockutils [None req-aa7f2156-500d-40ca-85ae-630895250f7d a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:06 compute-2 nova_compute[226829]: 2026-01-31 08:24:06.000 226833 INFO nova.scheduler.client.report [None req-aa7f2156-500d-40ca-85ae-630895250f7d a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Deleted allocations for instance 47072fbd-0188-49f0-8a82-0cfa42eca7e4
Jan 31 08:24:06 compute-2 nova_compute[226829]: 2026-01-31 08:24:06.086 226833 DEBUG oslo_concurrency.lockutils [None req-aa7f2156-500d-40ca-85ae-630895250f7d a1da90915f9344c292a208246f6a75ff 17987cbe856c4f6aab27787ff02572d5 - - default default] Lock "47072fbd-0188-49f0-8a82-0cfa42eca7e4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:06 compute-2 nova_compute[226829]: 2026-01-31 08:24:06.688 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:06 compute-2 nova_compute[226829]: 2026-01-31 08:24:06.785 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:06 compute-2 ceph-mon[77282]: pgmap v2706: 305 pgs: 305 active+clean; 429 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 6.3 MiB/s wr, 272 op/s
Jan 31 08:24:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:06.901 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:06.902 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:06.903 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:07.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:24:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:07.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:24:08 compute-2 ceph-mon[77282]: pgmap v2707: 305 pgs: 305 active+clean; 432 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.5 MiB/s wr, 360 op/s
Jan 31 08:24:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:09.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:09 compute-2 sudo[297284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:24:09 compute-2 sudo[297284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:09 compute-2 sudo[297284]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:09 compute-2 sudo[297309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:24:09 compute-2 sudo[297309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:09 compute-2 sudo[297309]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:24:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:09.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:24:10 compute-2 ceph-mon[77282]: pgmap v2708: 305 pgs: 305 active+clean; 432 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.1 MiB/s wr, 331 op/s
Jan 31 08:24:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:11.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2899694428' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:11 compute-2 nova_compute[226829]: 2026-01-31 08:24:11.690 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:11 compute-2 nova_compute[226829]: 2026-01-31 08:24:11.787 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:11.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:12 compute-2 sudo[297336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:24:12 compute-2 sudo[297336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:12 compute-2 sudo[297336]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:12 compute-2 sudo[297361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:24:12 compute-2 sudo[297361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:12 compute-2 sudo[297361]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:12 compute-2 sudo[297386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:24:12 compute-2 sudo[297386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:12 compute-2 sudo[297386]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:12 compute-2 sudo[297411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:24:12 compute-2 sudo[297411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4275877689' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:12 compute-2 ceph-mon[77282]: pgmap v2709: 305 pgs: 305 active+clean; 432 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.0 MiB/s wr, 304 op/s
Jan 31 08:24:12 compute-2 sudo[297411]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:24:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:13.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:24:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:13.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:14 compute-2 podman[297469]: 2026-01-31 08:24:14.21795624 +0000 UTC m=+0.090148746 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:24:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:24:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:24:14 compute-2 ceph-mon[77282]: pgmap v2710: 305 pgs: 305 active+clean; 432 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.6 MiB/s wr, 287 op/s
Jan 31 08:24:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:24:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:24:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:24:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:24:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:24:14 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:24:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:15.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:15.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:16 compute-2 ceph-mon[77282]: pgmap v2711: 305 pgs: 305 active+clean; 448 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 3.2 MiB/s wr, 346 op/s
Jan 31 08:24:16 compute-2 nova_compute[226829]: 2026-01-31 08:24:16.693 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:16 compute-2 nova_compute[226829]: 2026-01-31 08:24:16.787 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:24:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:17.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:24:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:17.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:18 compute-2 ceph-mon[77282]: pgmap v2712: 305 pgs: 305 active+clean; 474 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 4.0 MiB/s wr, 265 op/s
Jan 31 08:24:19 compute-2 nova_compute[226829]: 2026-01-31 08:24:19.109 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847844.1072564, 47072fbd-0188-49f0-8a82-0cfa42eca7e4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:24:19 compute-2 nova_compute[226829]: 2026-01-31 08:24:19.110 226833 INFO nova.compute.manager [-] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] VM Stopped (Lifecycle Event)
Jan 31 08:24:19 compute-2 nova_compute[226829]: 2026-01-31 08:24:19.134 226833 DEBUG nova.compute.manager [None req-8ed0901f-ca65-4153-8e28-360118e55ad3 - - - - - -] [instance: 47072fbd-0188-49f0-8a82-0cfa42eca7e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:24:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:19.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:19.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:20 compute-2 ceph-mon[77282]: pgmap v2713: 305 pgs: 305 active+clean; 489 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.1 MiB/s wr, 189 op/s
Jan 31 08:24:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:20 compute-2 sudo[297500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:24:20 compute-2 sudo[297500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:20 compute-2 sudo[297500]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:20 compute-2 sudo[297525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:24:20 compute-2 sudo[297525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:20 compute-2 sudo[297525]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:24:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:24:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:21.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:21 compute-2 nova_compute[226829]: 2026-01-31 08:24:21.695 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:21 compute-2 nova_compute[226829]: 2026-01-31 08:24:21.788 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:21.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:22 compute-2 ceph-mon[77282]: pgmap v2714: 305 pgs: 305 active+clean; 496 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 197 op/s
Jan 31 08:24:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:23.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:23.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:24 compute-2 ceph-mon[77282]: pgmap v2715: 305 pgs: 305 active+clean; 498 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 199 op/s
Jan 31 08:24:25 compute-2 podman[297552]: 2026-01-31 08:24:25.167331635 +0000 UTC m=+0.052831414 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 31 08:24:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:24:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:25.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:24:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:25.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:26 compute-2 ceph-mon[77282]: pgmap v2716: 305 pgs: 305 active+clean; 498 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 206 op/s
Jan 31 08:24:26 compute-2 nova_compute[226829]: 2026-01-31 08:24:26.697 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:26 compute-2 nova_compute[226829]: 2026-01-31 08:24:26.790 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:27.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:27.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:28 compute-2 ceph-mon[77282]: pgmap v2717: 305 pgs: 305 active+clean; 513 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 151 op/s
Jan 31 08:24:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:24:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:29.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:24:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3456242543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2646330775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:29 compute-2 sudo[297574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:24:29 compute-2 sudo[297574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:29 compute-2 sudo[297574]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:29 compute-2 sudo[297599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:24:29 compute-2 sudo[297599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:29 compute-2 sudo[297599]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:29.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:30 compute-2 nova_compute[226829]: 2026-01-31 08:24:30.272 226833 DEBUG nova.compute.manager [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Jan 31 08:24:30 compute-2 nova_compute[226829]: 2026-01-31 08:24:30.525 226833 DEBUG oslo_concurrency.lockutils [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:30 compute-2 nova_compute[226829]: 2026-01-31 08:24:30.526 226833 DEBUG oslo_concurrency.lockutils [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:30 compute-2 nova_compute[226829]: 2026-01-31 08:24:30.569 226833 DEBUG nova.objects.instance [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lazy-loading 'pci_requests' on Instance uuid 90aa4e13-650f-43f2-8ebe-19a34e0cc605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:24:30 compute-2 nova_compute[226829]: 2026-01-31 08:24:30.594 226833 DEBUG nova.virt.hardware [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:24:30 compute-2 nova_compute[226829]: 2026-01-31 08:24:30.595 226833 INFO nova.compute.claims [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:24:30 compute-2 nova_compute[226829]: 2026-01-31 08:24:30.595 226833 DEBUG nova.objects.instance [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lazy-loading 'resources' on Instance uuid 90aa4e13-650f-43f2-8ebe-19a34e0cc605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:24:30 compute-2 nova_compute[226829]: 2026-01-31 08:24:30.607 226833 DEBUG nova.objects.instance [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lazy-loading 'numa_topology' on Instance uuid 90aa4e13-650f-43f2-8ebe-19a34e0cc605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:24:30 compute-2 nova_compute[226829]: 2026-01-31 08:24:30.627 226833 DEBUG nova.objects.instance [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lazy-loading 'pci_devices' on Instance uuid 90aa4e13-650f-43f2-8ebe-19a34e0cc605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:24:30 compute-2 nova_compute[226829]: 2026-01-31 08:24:30.683 226833 INFO nova.compute.resource_tracker [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Updating resource usage from migration 28009c64-41b3-4fe9-854a-e346c8d0b39b
Jan 31 08:24:30 compute-2 nova_compute[226829]: 2026-01-31 08:24:30.684 226833 DEBUG nova.compute.resource_tracker [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Starting to track incoming migration 28009c64-41b3-4fe9-854a-e346c8d0b39b with flavor fea01737-128b-41fa-a695-aaaa6e96e4b2 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 31 08:24:30 compute-2 nova_compute[226829]: 2026-01-31 08:24:30.802 226833 DEBUG oslo_concurrency.processutils [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:30 compute-2 ceph-mon[77282]: pgmap v2718: 305 pgs: 305 active+clean; 529 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 665 KiB/s rd, 3.2 MiB/s wr, 124 op/s
Jan 31 08:24:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:24:31 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2082796801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:31 compute-2 nova_compute[226829]: 2026-01-31 08:24:31.328 226833 DEBUG oslo_concurrency.processutils [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:31 compute-2 nova_compute[226829]: 2026-01-31 08:24:31.333 226833 DEBUG nova.compute.provider_tree [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:24:31 compute-2 nova_compute[226829]: 2026-01-31 08:24:31.356 226833 DEBUG nova.scheduler.client.report [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:24:31 compute-2 nova_compute[226829]: 2026-01-31 08:24:31.380 226833 DEBUG oslo_concurrency.lockutils [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:31 compute-2 nova_compute[226829]: 2026-01-31 08:24:31.380 226833 INFO nova.compute.manager [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Migrating
Jan 31 08:24:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:31.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:31 compute-2 nova_compute[226829]: 2026-01-31 08:24:31.699 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:31 compute-2 nova_compute[226829]: 2026-01-31 08:24:31.792 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:31.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2082796801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:32 compute-2 ceph-mon[77282]: pgmap v2719: 305 pgs: 305 active+clean; 531 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 461 KiB/s rd, 2.4 MiB/s wr, 115 op/s
Jan 31 08:24:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:33.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:33 compute-2 sshd-session[297648]: Accepted publickey for nova from 192.168.122.101 port 39918 ssh2: ECDSA SHA256:x674mWemszn5UyYA1PQSm9fK8+OEaBfRnNSUktYnOE0
Jan 31 08:24:33 compute-2 systemd-logind[801]: New session 57 of user nova.
Jan 31 08:24:33 compute-2 systemd[1]: Created slice User Slice of UID 42436.
Jan 31 08:24:33 compute-2 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 31 08:24:33 compute-2 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 31 08:24:33 compute-2 systemd[1]: Starting User Manager for UID 42436...
Jan 31 08:24:33 compute-2 systemd[297652]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 31 08:24:33 compute-2 systemd[297652]: Queued start job for default target Main User Target.
Jan 31 08:24:33 compute-2 systemd[297652]: Created slice User Application Slice.
Jan 31 08:24:33 compute-2 systemd[297652]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 08:24:33 compute-2 systemd[297652]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 08:24:33 compute-2 systemd[297652]: Reached target Paths.
Jan 31 08:24:33 compute-2 systemd[297652]: Reached target Timers.
Jan 31 08:24:33 compute-2 systemd[297652]: Starting D-Bus User Message Bus Socket...
Jan 31 08:24:33 compute-2 systemd[297652]: Starting Create User's Volatile Files and Directories...
Jan 31 08:24:33 compute-2 systemd[297652]: Finished Create User's Volatile Files and Directories.
Jan 31 08:24:33 compute-2 systemd[297652]: Listening on D-Bus User Message Bus Socket.
Jan 31 08:24:33 compute-2 systemd[297652]: Reached target Sockets.
Jan 31 08:24:33 compute-2 systemd[297652]: Reached target Basic System.
Jan 31 08:24:33 compute-2 systemd[297652]: Reached target Main User Target.
Jan 31 08:24:33 compute-2 systemd[297652]: Startup finished in 140ms.
Jan 31 08:24:33 compute-2 systemd[1]: Started User Manager for UID 42436.
Jan 31 08:24:33 compute-2 systemd[1]: Started Session 57 of User nova.
Jan 31 08:24:33 compute-2 sshd-session[297648]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 31 08:24:33 compute-2 sshd-session[297667]: Received disconnect from 192.168.122.101 port 39918:11: disconnected by user
Jan 31 08:24:33 compute-2 sshd-session[297667]: Disconnected from user nova 192.168.122.101 port 39918
Jan 31 08:24:33 compute-2 sshd-session[297648]: pam_unix(sshd:session): session closed for user nova
Jan 31 08:24:33 compute-2 systemd[1]: session-57.scope: Deactivated successfully.
Jan 31 08:24:33 compute-2 systemd-logind[801]: Session 57 logged out. Waiting for processes to exit.
Jan 31 08:24:33 compute-2 systemd-logind[801]: Removed session 57.
Jan 31 08:24:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:33.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:33 compute-2 sshd-session[297669]: Accepted publickey for nova from 192.168.122.101 port 39934 ssh2: ECDSA SHA256:x674mWemszn5UyYA1PQSm9fK8+OEaBfRnNSUktYnOE0
Jan 31 08:24:33 compute-2 systemd-logind[801]: New session 59 of user nova.
Jan 31 08:24:33 compute-2 systemd[1]: Started Session 59 of User nova.
Jan 31 08:24:33 compute-2 sshd-session[297669]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 31 08:24:33 compute-2 sshd-session[297673]: Received disconnect from 192.168.122.101 port 39934:11: disconnected by user
Jan 31 08:24:33 compute-2 sshd-session[297673]: Disconnected from user nova 192.168.122.101 port 39934
Jan 31 08:24:33 compute-2 sshd-session[297669]: pam_unix(sshd:session): session closed for user nova
Jan 31 08:24:33 compute-2 systemd[1]: session-59.scope: Deactivated successfully.
Jan 31 08:24:33 compute-2 systemd-logind[801]: Session 59 logged out. Waiting for processes to exit.
Jan 31 08:24:33 compute-2 systemd-logind[801]: Removed session 59.
Jan 31 08:24:34 compute-2 ceph-mon[77282]: pgmap v2720: 305 pgs: 305 active+clean; 531 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 366 KiB/s rd, 2.2 MiB/s wr, 97 op/s
Jan 31 08:24:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:35.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:35 compute-2 nova_compute[226829]: 2026-01-31 08:24:35.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:24:35 compute-2 nova_compute[226829]: 2026-01-31 08:24:35.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:24:35 compute-2 nova_compute[226829]: 2026-01-31 08:24:35.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:24:35 compute-2 nova_compute[226829]: 2026-01-31 08:24:35.514 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:24:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:35.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:36 compute-2 ceph-mon[77282]: pgmap v2721: 305 pgs: 305 active+clean; 531 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.2 MiB/s wr, 155 op/s
Jan 31 08:24:36 compute-2 nova_compute[226829]: 2026-01-31 08:24:36.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:24:36 compute-2 nova_compute[226829]: 2026-01-31 08:24:36.702 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:36 compute-2 nova_compute[226829]: 2026-01-31 08:24:36.793 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:37 compute-2 nova_compute[226829]: 2026-01-31 08:24:37.001 226833 DEBUG nova.compute.manager [req-358be447-cf41-47eb-9ed2-29c2c193bab6 req-73614ec5-cbcc-401a-9436-cfca1c1d4653 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received event network-vif-unplugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:24:37 compute-2 nova_compute[226829]: 2026-01-31 08:24:37.001 226833 DEBUG oslo_concurrency.lockutils [req-358be447-cf41-47eb-9ed2-29c2c193bab6 req-73614ec5-cbcc-401a-9436-cfca1c1d4653 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:37 compute-2 nova_compute[226829]: 2026-01-31 08:24:37.002 226833 DEBUG oslo_concurrency.lockutils [req-358be447-cf41-47eb-9ed2-29c2c193bab6 req-73614ec5-cbcc-401a-9436-cfca1c1d4653 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:37 compute-2 nova_compute[226829]: 2026-01-31 08:24:37.002 226833 DEBUG oslo_concurrency.lockutils [req-358be447-cf41-47eb-9ed2-29c2c193bab6 req-73614ec5-cbcc-401a-9436-cfca1c1d4653 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:37 compute-2 nova_compute[226829]: 2026-01-31 08:24:37.002 226833 DEBUG nova.compute.manager [req-358be447-cf41-47eb-9ed2-29c2c193bab6 req-73614ec5-cbcc-401a-9436-cfca1c1d4653 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] No waiting events found dispatching network-vif-unplugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:24:37 compute-2 nova_compute[226829]: 2026-01-31 08:24:37.002 226833 WARNING nova.compute.manager [req-358be447-cf41-47eb-9ed2-29c2c193bab6 req-73614ec5-cbcc-401a-9436-cfca1c1d4653 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received unexpected event network-vif-unplugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b for instance with vm_state active and task_state resize_migrating.
Jan 31 08:24:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:37.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:37 compute-2 nova_compute[226829]: 2026-01-31 08:24:37.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:24:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:24:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:37.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:24:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1950468783' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:38 compute-2 nova_compute[226829]: 2026-01-31 08:24:38.193 226833 INFO nova.network.neutron [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Updating port 7d9d74bf-cfe7-4c4d-aaec-f0662642996b with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 31 08:24:39 compute-2 ceph-mon[77282]: pgmap v2722: 305 pgs: 305 active+clean; 531 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.2 MiB/s wr, 180 op/s
Jan 31 08:24:39 compute-2 nova_compute[226829]: 2026-01-31 08:24:39.157 226833 DEBUG nova.compute.manager [req-54ba9a52-db2f-410a-807c-2072faa15a4e req-e99e561c-cf3e-4f84-a359-bf0be951abfa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received event network-vif-plugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:24:39 compute-2 nova_compute[226829]: 2026-01-31 08:24:39.159 226833 DEBUG oslo_concurrency.lockutils [req-54ba9a52-db2f-410a-807c-2072faa15a4e req-e99e561c-cf3e-4f84-a359-bf0be951abfa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:39 compute-2 nova_compute[226829]: 2026-01-31 08:24:39.159 226833 DEBUG oslo_concurrency.lockutils [req-54ba9a52-db2f-410a-807c-2072faa15a4e req-e99e561c-cf3e-4f84-a359-bf0be951abfa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:39 compute-2 nova_compute[226829]: 2026-01-31 08:24:39.160 226833 DEBUG oslo_concurrency.lockutils [req-54ba9a52-db2f-410a-807c-2072faa15a4e req-e99e561c-cf3e-4f84-a359-bf0be951abfa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:39 compute-2 nova_compute[226829]: 2026-01-31 08:24:39.160 226833 DEBUG nova.compute.manager [req-54ba9a52-db2f-410a-807c-2072faa15a4e req-e99e561c-cf3e-4f84-a359-bf0be951abfa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] No waiting events found dispatching network-vif-plugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:24:39 compute-2 nova_compute[226829]: 2026-01-31 08:24:39.161 226833 WARNING nova.compute.manager [req-54ba9a52-db2f-410a-807c-2072faa15a4e req-e99e561c-cf3e-4f84-a359-bf0be951abfa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received unexpected event network-vif-plugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b for instance with vm_state active and task_state resize_migrated.
Jan 31 08:24:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:39.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:24:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:39.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:24:39 compute-2 nova_compute[226829]: 2026-01-31 08:24:39.864 226833 DEBUG oslo_concurrency.lockutils [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Acquiring lock "refresh_cache-90aa4e13-650f-43f2-8ebe-19a34e0cc605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:24:39 compute-2 nova_compute[226829]: 2026-01-31 08:24:39.865 226833 DEBUG oslo_concurrency.lockutils [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Acquired lock "refresh_cache-90aa4e13-650f-43f2-8ebe-19a34e0cc605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:24:39 compute-2 nova_compute[226829]: 2026-01-31 08:24:39.865 226833 DEBUG nova.network.neutron [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:24:40 compute-2 nova_compute[226829]: 2026-01-31 08:24:40.026 226833 DEBUG nova.compute.manager [req-d8cca14d-cb4c-41db-9eef-41858024d908 req-c3ff7a34-6412-41e0-b9f0-ab64863733e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received event network-changed-7d9d74bf-cfe7-4c4d-aaec-f0662642996b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:24:40 compute-2 nova_compute[226829]: 2026-01-31 08:24:40.026 226833 DEBUG nova.compute.manager [req-d8cca14d-cb4c-41db-9eef-41858024d908 req-c3ff7a34-6412-41e0-b9f0-ab64863733e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Refreshing instance network info cache due to event network-changed-7d9d74bf-cfe7-4c4d-aaec-f0662642996b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:24:40 compute-2 nova_compute[226829]: 2026-01-31 08:24:40.026 226833 DEBUG oslo_concurrency.lockutils [req-d8cca14d-cb4c-41db-9eef-41858024d908 req-c3ff7a34-6412-41e0-b9f0-ab64863733e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-90aa4e13-650f-43f2-8ebe-19a34e0cc605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:24:40 compute-2 ceph-mon[77282]: pgmap v2723: 305 pgs: 305 active+clean; 531 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.2 MiB/s wr, 188 op/s
Jan 31 08:24:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:41.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:41 compute-2 nova_compute[226829]: 2026-01-31 08:24:41.704 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:41 compute-2 nova_compute[226829]: 2026-01-31 08:24:41.794 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:41.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:42 compute-2 ceph-mon[77282]: pgmap v2724: 305 pgs: 305 active+clean; 551 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 678 KiB/s wr, 192 op/s
Jan 31 08:24:42 compute-2 nova_compute[226829]: 2026-01-31 08:24:42.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:24:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:42.633 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:24:42 compute-2 nova_compute[226829]: 2026-01-31 08:24:42.634 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:42 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:42.636 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:24:42 compute-2 nova_compute[226829]: 2026-01-31 08:24:42.722 226833 DEBUG nova.network.neutron [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Updating instance_info_cache with network_info: [{"id": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "address": "fa:16:3e:b4:62:8a", "network": {"id": "a584cda0-df11-4171-9687-b79f1d3fe460", "bridge": "br-int", "label": "tempest-network-smoke--863310389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d9d74bf-cf", "ovs_interfaceid": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:24:42 compute-2 nova_compute[226829]: 2026-01-31 08:24:42.768 226833 DEBUG oslo_concurrency.lockutils [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Releasing lock "refresh_cache-90aa4e13-650f-43f2-8ebe-19a34e0cc605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:24:42 compute-2 nova_compute[226829]: 2026-01-31 08:24:42.771 226833 DEBUG oslo_concurrency.lockutils [req-d8cca14d-cb4c-41db-9eef-41858024d908 req-c3ff7a34-6412-41e0-b9f0-ab64863733e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-90aa4e13-650f-43f2-8ebe-19a34e0cc605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:24:42 compute-2 nova_compute[226829]: 2026-01-31 08:24:42.771 226833 DEBUG nova.network.neutron [req-d8cca14d-cb4c-41db-9eef-41858024d908 req-c3ff7a34-6412-41e0-b9f0-ab64863733e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Refreshing network info cache for port 7d9d74bf-cfe7-4c4d-aaec-f0662642996b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:24:42 compute-2 nova_compute[226829]: 2026-01-31 08:24:42.915 226833 DEBUG nova.virt.libvirt.driver [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Jan 31 08:24:42 compute-2 nova_compute[226829]: 2026-01-31 08:24:42.917 226833 DEBUG nova.virt.libvirt.driver [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 31 08:24:42 compute-2 nova_compute[226829]: 2026-01-31 08:24:42.917 226833 INFO nova.virt.libvirt.driver [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Creating image(s)
Jan 31 08:24:42 compute-2 nova_compute[226829]: 2026-01-31 08:24:42.959 226833 DEBUG nova.storage.rbd_utils [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] creating snapshot(nova-resize) on rbd image(90aa4e13-650f-43f2-8ebe-19a34e0cc605_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 08:24:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e352 e352: 3 total, 3 up, 3 in
Jan 31 08:24:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2563591530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:43 compute-2 nova_compute[226829]: 2026-01-31 08:24:43.399 226833 DEBUG nova.objects.instance [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lazy-loading 'trusted_certs' on Instance uuid 90aa4e13-650f-43f2-8ebe-19a34e0cc605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:24:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:43.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:43 compute-2 nova_compute[226829]: 2026-01-31 08:24:43.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:24:43 compute-2 nova_compute[226829]: 2026-01-31 08:24:43.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:24:43 compute-2 nova_compute[226829]: 2026-01-31 08:24:43.520 226833 DEBUG nova.virt.libvirt.driver [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 08:24:43 compute-2 nova_compute[226829]: 2026-01-31 08:24:43.521 226833 DEBUG nova.virt.libvirt.driver [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Ensure instance console log exists: /var/lib/nova/instances/90aa4e13-650f-43f2-8ebe-19a34e0cc605/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:24:43 compute-2 nova_compute[226829]: 2026-01-31 08:24:43.521 226833 DEBUG oslo_concurrency.lockutils [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:43 compute-2 nova_compute[226829]: 2026-01-31 08:24:43.522 226833 DEBUG oslo_concurrency.lockutils [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:43 compute-2 nova_compute[226829]: 2026-01-31 08:24:43.522 226833 DEBUG oslo_concurrency.lockutils [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:43 compute-2 nova_compute[226829]: 2026-01-31 08:24:43.525 226833 DEBUG nova.virt.libvirt.driver [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Start _get_guest_xml network_info=[{"id": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "address": "fa:16:3e:b4:62:8a", "network": {"id": "a584cda0-df11-4171-9687-b79f1d3fe460", "bridge": "br-int", "label": "tempest-network-smoke--863310389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--863310389", "vif_mac": "fa:16:3e:b4:62:8a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d9d74bf-cf", "ovs_interfaceid": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:24:43 compute-2 nova_compute[226829]: 2026-01-31 08:24:43.530 226833 WARNING nova.virt.libvirt.driver [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:24:43 compute-2 nova_compute[226829]: 2026-01-31 08:24:43.536 226833 DEBUG nova.virt.libvirt.host [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:24:43 compute-2 nova_compute[226829]: 2026-01-31 08:24:43.537 226833 DEBUG nova.virt.libvirt.host [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:24:43 compute-2 nova_compute[226829]: 2026-01-31 08:24:43.541 226833 DEBUG nova.virt.libvirt.host [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:24:43 compute-2 nova_compute[226829]: 2026-01-31 08:24:43.542 226833 DEBUG nova.virt.libvirt.host [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:24:43 compute-2 nova_compute[226829]: 2026-01-31 08:24:43.543 226833 DEBUG nova.virt.libvirt.driver [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:24:43 compute-2 nova_compute[226829]: 2026-01-31 08:24:43.543 226833 DEBUG nova.virt.hardware [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:24:43 compute-2 nova_compute[226829]: 2026-01-31 08:24:43.544 226833 DEBUG nova.virt.hardware [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:24:43 compute-2 nova_compute[226829]: 2026-01-31 08:24:43.544 226833 DEBUG nova.virt.hardware [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:24:43 compute-2 nova_compute[226829]: 2026-01-31 08:24:43.544 226833 DEBUG nova.virt.hardware [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:24:43 compute-2 nova_compute[226829]: 2026-01-31 08:24:43.545 226833 DEBUG nova.virt.hardware [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:24:43 compute-2 nova_compute[226829]: 2026-01-31 08:24:43.545 226833 DEBUG nova.virt.hardware [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:24:43 compute-2 nova_compute[226829]: 2026-01-31 08:24:43.545 226833 DEBUG nova.virt.hardware [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:24:43 compute-2 nova_compute[226829]: 2026-01-31 08:24:43.546 226833 DEBUG nova.virt.hardware [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:24:43 compute-2 nova_compute[226829]: 2026-01-31 08:24:43.546 226833 DEBUG nova.virt.hardware [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:24:43 compute-2 nova_compute[226829]: 2026-01-31 08:24:43.546 226833 DEBUG nova.virt.hardware [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:24:43 compute-2 nova_compute[226829]: 2026-01-31 08:24:43.546 226833 DEBUG nova.virt.hardware [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:24:43 compute-2 nova_compute[226829]: 2026-01-31 08:24:43.547 226833 DEBUG nova.objects.instance [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lazy-loading 'vcpu_model' on Instance uuid 90aa4e13-650f-43f2-8ebe-19a34e0cc605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:24:43 compute-2 nova_compute[226829]: 2026-01-31 08:24:43.590 226833 DEBUG oslo_concurrency.processutils [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:24:43 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3317738892' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:24:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:43.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:24:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:24:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1905162500' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.049 226833 DEBUG oslo_concurrency.processutils [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:44 compute-2 systemd[1]: Stopping User Manager for UID 42436...
Jan 31 08:24:44 compute-2 systemd[297652]: Activating special unit Exit the Session...
Jan 31 08:24:44 compute-2 systemd[297652]: Stopped target Main User Target.
Jan 31 08:24:44 compute-2 systemd[297652]: Stopped target Basic System.
Jan 31 08:24:44 compute-2 systemd[297652]: Stopped target Paths.
Jan 31 08:24:44 compute-2 systemd[297652]: Stopped target Sockets.
Jan 31 08:24:44 compute-2 systemd[297652]: Stopped target Timers.
Jan 31 08:24:44 compute-2 systemd[297652]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 08:24:44 compute-2 systemd[297652]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.094 226833 DEBUG oslo_concurrency.processutils [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:44 compute-2 systemd[297652]: Closed D-Bus User Message Bus Socket.
Jan 31 08:24:44 compute-2 systemd[297652]: Stopped Create User's Volatile Files and Directories.
Jan 31 08:24:44 compute-2 systemd[297652]: Removed slice User Application Slice.
Jan 31 08:24:44 compute-2 systemd[297652]: Reached target Shutdown.
Jan 31 08:24:44 compute-2 systemd[297652]: Finished Exit the Session.
Jan 31 08:24:44 compute-2 systemd[297652]: Reached target Exit the Session.
Jan 31 08:24:44 compute-2 systemd[1]: user@42436.service: Deactivated successfully.
Jan 31 08:24:44 compute-2 systemd[1]: Stopped User Manager for UID 42436.
Jan 31 08:24:44 compute-2 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 31 08:24:44 compute-2 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 31 08:24:44 compute-2 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 31 08:24:44 compute-2 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 31 08:24:44 compute-2 systemd[1]: Removed slice User Slice of UID 42436.
Jan 31 08:24:44 compute-2 ceph-mon[77282]: osdmap e352: 3 total, 3 up, 3 in
Jan 31 08:24:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3633162157' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3317738892' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1905162500' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:44 compute-2 ceph-mon[77282]: pgmap v2726: 305 pgs: 305 active+clean; 571 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 2.0 MiB/s wr, 278 op/s
Jan 31 08:24:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/933169485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:24:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3702335835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.578 226833 DEBUG oslo_concurrency.processutils [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.582 226833 DEBUG nova.virt.libvirt.vif [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:23:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2050667150',display_name='tempest-TestNetworkAdvancedServerOps-server-2050667150',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2050667150',id=154,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEPErrFuMatngmbPFI9nTUxaXvpce6NcVYOjpFna1bVYBBpAXT5DWY6THfcwtLeuI/NzxTjJ2/S962+d7YMt9k52/1Ce4TFS9tLoM3ovnpGdPHYwJISH1c/E+TlOqwaBJw==',key_name='tempest-TestNetworkAdvancedServerOps-1275181138',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:24:02Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-wq345vcr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:24:37Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=90aa4e13-650f-43f2-8ebe-19a34e0cc605,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "address": "fa:16:3e:b4:62:8a", "network": {"id": "a584cda0-df11-4171-9687-b79f1d3fe460", "bridge": "br-int", "label": "tempest-network-smoke--863310389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--863310389", "vif_mac": "fa:16:3e:b4:62:8a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d9d74bf-cf", "ovs_interfaceid": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.583 226833 DEBUG nova.network.os_vif_util [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Converting VIF {"id": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "address": "fa:16:3e:b4:62:8a", "network": {"id": "a584cda0-df11-4171-9687-b79f1d3fe460", "bridge": "br-int", "label": "tempest-network-smoke--863310389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--863310389", "vif_mac": "fa:16:3e:b4:62:8a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d9d74bf-cf", "ovs_interfaceid": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.585 226833 DEBUG nova.network.os_vif_util [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:62:8a,bridge_name='br-int',has_traffic_filtering=True,id=7d9d74bf-cfe7-4c4d-aaec-f0662642996b,network=Network(a584cda0-df11-4171-9687-b79f1d3fe460),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d9d74bf-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.591 226833 DEBUG nova.virt.libvirt.driver [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:24:44 compute-2 nova_compute[226829]:   <uuid>90aa4e13-650f-43f2-8ebe-19a34e0cc605</uuid>
Jan 31 08:24:44 compute-2 nova_compute[226829]:   <name>instance-0000009a</name>
Jan 31 08:24:44 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:24:44 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:24:44 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-2050667150</nova:name>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:24:43</nova:creationTime>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:24:44 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:24:44 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:24:44 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:24:44 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:24:44 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:24:44 compute-2 nova_compute[226829]:         <nova:user uuid="4d0e9d918b4041fabd5ded633b4cf404">tempest-TestNetworkAdvancedServerOps-483180749-project-member</nova:user>
Jan 31 08:24:44 compute-2 nova_compute[226829]:         <nova:project uuid="9710f0cf77d84353ae13fa47922b085d">tempest-TestNetworkAdvancedServerOps-483180749</nova:project>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:24:44 compute-2 nova_compute[226829]:         <nova:port uuid="7d9d74bf-cfe7-4c4d-aaec-f0662642996b">
Jan 31 08:24:44 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:24:44 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:24:44 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <system>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <entry name="serial">90aa4e13-650f-43f2-8ebe-19a34e0cc605</entry>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <entry name="uuid">90aa4e13-650f-43f2-8ebe-19a34e0cc605</entry>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     </system>
Jan 31 08:24:44 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:24:44 compute-2 nova_compute[226829]:   <os>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:   </os>
Jan 31 08:24:44 compute-2 nova_compute[226829]:   <features>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:   </features>
Jan 31 08:24:44 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:24:44 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:24:44 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/90aa4e13-650f-43f2-8ebe-19a34e0cc605_disk">
Jan 31 08:24:44 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       </source>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:24:44 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/90aa4e13-650f-43f2-8ebe-19a34e0cc605_disk.config">
Jan 31 08:24:44 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       </source>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:24:44 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:b4:62:8a"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <target dev="tap7d9d74bf-cf"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/90aa4e13-650f-43f2-8ebe-19a34e0cc605/console.log" append="off"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <video>
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     </video>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:24:44 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:24:44 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:24:44 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:24:44 compute-2 nova_compute[226829]: </domain>
Jan 31 08:24:44 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.593 226833 DEBUG nova.virt.libvirt.vif [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:23:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2050667150',display_name='tempest-TestNetworkAdvancedServerOps-server-2050667150',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2050667150',id=154,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEPErrFuMatngmbPFI9nTUxaXvpce6NcVYOjpFna1bVYBBpAXT5DWY6THfcwtLeuI/NzxTjJ2/S962+d7YMt9k52/1Ce4TFS9tLoM3ovnpGdPHYwJISH1c/E+TlOqwaBJw==',key_name='tempest-TestNetworkAdvancedServerOps-1275181138',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:24:02Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-wq345vcr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:24:37Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=90aa4e13-650f-43f2-8ebe-19a34e0cc605,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "address": "fa:16:3e:b4:62:8a", "network": {"id": "a584cda0-df11-4171-9687-b79f1d3fe460", "bridge": "br-int", "label": "tempest-network-smoke--863310389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--863310389", "vif_mac": "fa:16:3e:b4:62:8a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d9d74bf-cf", "ovs_interfaceid": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.594 226833 DEBUG nova.network.os_vif_util [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Converting VIF {"id": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "address": "fa:16:3e:b4:62:8a", "network": {"id": "a584cda0-df11-4171-9687-b79f1d3fe460", "bridge": "br-int", "label": "tempest-network-smoke--863310389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--863310389", "vif_mac": "fa:16:3e:b4:62:8a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d9d74bf-cf", "ovs_interfaceid": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.594 226833 DEBUG nova.network.os_vif_util [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:62:8a,bridge_name='br-int',has_traffic_filtering=True,id=7d9d74bf-cfe7-4c4d-aaec-f0662642996b,network=Network(a584cda0-df11-4171-9687-b79f1d3fe460),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d9d74bf-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.595 226833 DEBUG os_vif [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:62:8a,bridge_name='br-int',has_traffic_filtering=True,id=7d9d74bf-cfe7-4c4d-aaec-f0662642996b,network=Network(a584cda0-df11-4171-9687-b79f1d3fe460),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d9d74bf-cf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.595 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.596 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.596 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.605 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.606 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7d9d74bf-cf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.606 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7d9d74bf-cf, col_values=(('external_ids', {'iface-id': '7d9d74bf-cfe7-4c4d-aaec-f0662642996b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b4:62:8a', 'vm-uuid': '90aa4e13-650f-43f2-8ebe-19a34e0cc605'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.607 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:44 compute-2 NetworkManager[48999]: <info>  [1769847884.6098] manager: (tap7d9d74bf-cf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/311)
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.610 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.614 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.614 226833 INFO os_vif [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:62:8a,bridge_name='br-int',has_traffic_filtering=True,id=7d9d74bf-cfe7-4c4d-aaec-f0662642996b,network=Network(a584cda0-df11-4171-9687-b79f1d3fe460),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d9d74bf-cf')
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.741 226833 DEBUG nova.network.neutron [req-d8cca14d-cb4c-41db-9eef-41858024d908 req-c3ff7a34-6412-41e0-b9f0-ab64863733e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Updated VIF entry in instance network info cache for port 7d9d74bf-cfe7-4c4d-aaec-f0662642996b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.742 226833 DEBUG nova.network.neutron [req-d8cca14d-cb4c-41db-9eef-41858024d908 req-c3ff7a34-6412-41e0-b9f0-ab64863733e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Updating instance_info_cache with network_info: [{"id": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "address": "fa:16:3e:b4:62:8a", "network": {"id": "a584cda0-df11-4171-9687-b79f1d3fe460", "bridge": "br-int", "label": "tempest-network-smoke--863310389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d9d74bf-cf", "ovs_interfaceid": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.756 226833 DEBUG nova.virt.libvirt.driver [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.756 226833 DEBUG nova.virt.libvirt.driver [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.756 226833 DEBUG nova.virt.libvirt.driver [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] No VIF found with MAC fa:16:3e:b4:62:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.757 226833 INFO nova.virt.libvirt.driver [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Using config drive
Jan 31 08:24:44 compute-2 podman[297819]: 2026-01-31 08:24:44.768205758 +0000 UTC m=+0.122458874 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible)
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.804 226833 DEBUG oslo_concurrency.lockutils [req-d8cca14d-cb4c-41db-9eef-41858024d908 req-c3ff7a34-6412-41e0-b9f0-ab64863733e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-90aa4e13-650f-43f2-8ebe-19a34e0cc605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:24:44 compute-2 kernel: tap7d9d74bf-cf: entered promiscuous mode
Jan 31 08:24:44 compute-2 NetworkManager[48999]: <info>  [1769847884.8467] manager: (tap7d9d74bf-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/312)
Jan 31 08:24:44 compute-2 ovn_controller[133834]: 2026-01-31T08:24:44Z|00633|binding|INFO|Claiming lport 7d9d74bf-cfe7-4c4d-aaec-f0662642996b for this chassis.
Jan 31 08:24:44 compute-2 ovn_controller[133834]: 2026-01-31T08:24:44Z|00634|binding|INFO|7d9d74bf-cfe7-4c4d-aaec-f0662642996b: Claiming fa:16:3e:b4:62:8a 10.100.0.3
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.846 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.849 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.854 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.870 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:44 compute-2 NetworkManager[48999]: <info>  [1769847884.8707] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/313)
Jan 31 08:24:44 compute-2 NetworkManager[48999]: <info>  [1769847884.8715] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/314)
Jan 31 08:24:44 compute-2 systemd-udevd[297876]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:24:44 compute-2 systemd-machined[195142]: New machine qemu-72-instance-0000009a.
Jan 31 08:24:44 compute-2 systemd[1]: Started Virtual Machine qemu-72-instance-0000009a.
Jan 31 08:24:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:44.882 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:62:8a 10.100.0.3'], port_security=['fa:16:3e:b4:62:8a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '90aa4e13-650f-43f2-8ebe-19a34e0cc605', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a584cda0-df11-4171-9687-b79f1d3fe460', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '6', 'neutron:security_group_ids': '3f15617b-dce2-4914-b18c-70facd7e86fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b34831f-93cf-4037-8766-7bee8dbb9141, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=7d9d74bf-cfe7-4c4d-aaec-f0662642996b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:24:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:44.884 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 7d9d74bf-cfe7-4c4d-aaec-f0662642996b in datapath a584cda0-df11-4171-9687-b79f1d3fe460 bound to our chassis
Jan 31 08:24:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:44.887 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a584cda0-df11-4171-9687-b79f1d3fe460
Jan 31 08:24:44 compute-2 NetworkManager[48999]: <info>  [1769847884.8918] device (tap7d9d74bf-cf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:24:44 compute-2 NetworkManager[48999]: <info>  [1769847884.8924] device (tap7d9d74bf-cf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:24:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:44.902 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[642c3ba3-219e-4afc-aaf5-123efcd9b119]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:44.906 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa584cda0-d1 in ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:24:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:44.913 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa584cda0-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:24:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:44.913 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[99abad50-22d3-4ecc-9b64-bece6ab9621b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.915 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:44.916 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[47e9c565-b71e-4fb4-bb87-65bfa197d01b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:44 compute-2 ovn_controller[133834]: 2026-01-31T08:24:44Z|00635|binding|INFO|Releasing lport fd5187fd-cce9-41da-96d2-ef75fbcbcf0f from this chassis (sb_readonly=0)
Jan 31 08:24:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:44.932 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[31d18420-1311-48fe-be78-f040fad91213]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:44 compute-2 ovn_controller[133834]: 2026-01-31T08:24:44Z|00636|binding|INFO|Setting lport 7d9d74bf-cfe7-4c4d-aaec-f0662642996b up in Southbound
Jan 31 08:24:44 compute-2 ovn_controller[133834]: 2026-01-31T08:24:44Z|00637|binding|INFO|Setting lport 7d9d74bf-cfe7-4c4d-aaec-f0662642996b ovn-installed in OVS
Jan 31 08:24:44 compute-2 nova_compute[226829]: 2026-01-31 08:24:44.989 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:44.990 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8aad1f18-e99c-40b0-a31d-15b61871bc53]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:45.014 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[35307807-e5d1-4b87-b26b-0442f8a37c10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:45 compute-2 NetworkManager[48999]: <info>  [1769847885.0194] manager: (tapa584cda0-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/315)
Jan 31 08:24:45 compute-2 systemd-udevd[297878]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:45.018 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e24c487c-7a11-418e-bb38-de53a3a15086]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:45.043 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[d9c51bbf-a343-4850-8193-21ba492c853b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:45.046 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[f54070db-e1b4-4a57-94f7-debd6625ae3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:45 compute-2 NetworkManager[48999]: <info>  [1769847885.0626] device (tapa584cda0-d0): carrier: link connected
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:45.064 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[31c446b9-7f00-477f-b79f-b1d25036e3e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:45.078 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f34b8b-f74c-4130-b079-ebf6c4be06c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa584cda0-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:1f:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 813753, 'reachable_time': 17300, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297909, 'error': None, 'target': 'ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:45.091 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0d797bf2-deb2-4d92-8faf-e2193a241ad0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:1f9c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 813753, 'tstamp': 813753}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297910, 'error': None, 'target': 'ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:45.103 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[18c1f4cd-6ccf-40cf-b496-6032b8667221]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa584cda0-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:1f:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 813753, 'reachable_time': 17300, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297911, 'error': None, 'target': 'ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:45.126 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f2530691-cc5b-4f2e-9adb-6dd00d06a8a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:45.173 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9a636d29-72ec-4098-9ce2-a50736920940]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:45.175 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa584cda0-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:45.176 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:45.176 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa584cda0-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.178 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:45 compute-2 kernel: tapa584cda0-d0: entered promiscuous mode
Jan 31 08:24:45 compute-2 NetworkManager[48999]: <info>  [1769847885.1791] manager: (tapa584cda0-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/316)
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.180 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:45.182 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa584cda0-d0, col_values=(('external_ids', {'iface-id': '82a1b950-f7d7-4649-a61e-273ceb65ba23'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.183 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:45 compute-2 ovn_controller[133834]: 2026-01-31T08:24:45Z|00638|binding|INFO|Releasing lport 82a1b950-f7d7-4649-a61e-273ceb65ba23 from this chassis (sb_readonly=0)
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:45.184 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a584cda0-df11-4171-9687-b79f1d3fe460.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a584cda0-df11-4171-9687-b79f1d3fe460.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:45.185 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5ef99897-ec95-437b-a906-5d17cd284d27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:45.186 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-a584cda0-df11-4171-9687-b79f1d3fe460
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/a584cda0-df11-4171-9687-b79f1d3fe460.pid.haproxy
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID a584cda0-df11-4171-9687-b79f1d3fe460
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:24:45 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:45.187 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460', 'env', 'PROCESS_TAG=haproxy-a584cda0-df11-4171-9687-b79f1d3fe460', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a584cda0-df11-4171-9687-b79f1d3fe460.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.188 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.381 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847885.3806825, 90aa4e13-650f-43f2-8ebe-19a34e0cc605 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.381 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] VM Resumed (Lifecycle Event)
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.384 226833 DEBUG nova.compute.manager [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:24:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1098951561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3702335835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.388 226833 INFO nova.virt.libvirt.driver [-] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Instance running successfully.
Jan 31 08:24:45 compute-2 virtqemud[226546]: argument unsupported: QEMU guest agent is not configured
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.390 226833 DEBUG nova.virt.libvirt.guest [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.390 226833 DEBUG nova.virt.libvirt.driver [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Jan 31 08:24:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:45.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.417 226833 DEBUG nova.compute.manager [req-3c857b05-2f30-449a-942e-dab85dd6c4f0 req-0786e5f6-8062-4ff8-8d32-369da0e19b7c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received event network-vif-plugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.418 226833 DEBUG oslo_concurrency.lockutils [req-3c857b05-2f30-449a-942e-dab85dd6c4f0 req-0786e5f6-8062-4ff8-8d32-369da0e19b7c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.418 226833 DEBUG oslo_concurrency.lockutils [req-3c857b05-2f30-449a-942e-dab85dd6c4f0 req-0786e5f6-8062-4ff8-8d32-369da0e19b7c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.418 226833 DEBUG oslo_concurrency.lockutils [req-3c857b05-2f30-449a-942e-dab85dd6c4f0 req-0786e5f6-8062-4ff8-8d32-369da0e19b7c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.418 226833 DEBUG nova.compute.manager [req-3c857b05-2f30-449a-942e-dab85dd6c4f0 req-0786e5f6-8062-4ff8-8d32-369da0e19b7c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] No waiting events found dispatching network-vif-plugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.419 226833 WARNING nova.compute.manager [req-3c857b05-2f30-449a-942e-dab85dd6c4f0 req-0786e5f6-8062-4ff8-8d32-369da0e19b7c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received unexpected event network-vif-plugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b for instance with vm_state active and task_state resize_finish.
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.435 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.444 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.516 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.517 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847885.384071, 90aa4e13-650f-43f2-8ebe-19a34e0cc605 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.517 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] VM Started (Lifecycle Event)
Jan 31 08:24:45 compute-2 podman[297985]: 2026-01-31 08:24:45.538583461 +0000 UTC m=+0.045773622 container create b94b832fd3ba84ffcb318d8e772f40e7dc85d63fbdb7ea0e6bb484abafefd5ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.554 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.555 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.555 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.555 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.556 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:45 compute-2 systemd[1]: Started libpod-conmon-b94b832fd3ba84ffcb318d8e772f40e7dc85d63fbdb7ea0e6bb484abafefd5ea.scope.
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.583 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:24:45 compute-2 nova_compute[226829]: 2026-01-31 08:24:45.588 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:24:45 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:24:45 compute-2 podman[297985]: 2026-01-31 08:24:45.513446139 +0000 UTC m=+0.020636310 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:24:45 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d69dcbf25090644f5f972ef5ddc2c4a6205a6bebda5a1254f33460ec54a8937c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:45 compute-2 podman[297985]: 2026-01-31 08:24:45.628296346 +0000 UTC m=+0.135486537 container init b94b832fd3ba84ffcb318d8e772f40e7dc85d63fbdb7ea0e6bb484abafefd5ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:24:45 compute-2 podman[297985]: 2026-01-31 08:24:45.632457508 +0000 UTC m=+0.139647699 container start b94b832fd3ba84ffcb318d8e772f40e7dc85d63fbdb7ea0e6bb484abafefd5ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 08:24:45 compute-2 neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460[298001]: [NOTICE]   (298005) : New worker (298007) forked
Jan 31 08:24:45 compute-2 neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460[298001]: [NOTICE]   (298005) : Loading success.
Jan 31 08:24:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:45.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:24:46 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1145741290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:46 compute-2 nova_compute[226829]: 2026-01-31 08:24:46.035 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:46 compute-2 nova_compute[226829]: 2026-01-31 08:24:46.212 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:24:46 compute-2 nova_compute[226829]: 2026-01-31 08:24:46.212 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:24:46 compute-2 nova_compute[226829]: 2026-01-31 08:24:46.215 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:24:46 compute-2 nova_compute[226829]: 2026-01-31 08:24:46.216 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:24:46 compute-2 nova_compute[226829]: 2026-01-31 08:24:46.362 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:24:46 compute-2 nova_compute[226829]: 2026-01-31 08:24:46.364 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3941MB free_disk=20.805789947509766GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:24:46 compute-2 nova_compute[226829]: 2026-01-31 08:24:46.364 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:46 compute-2 nova_compute[226829]: 2026-01-31 08:24:46.364 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e353 e353: 3 total, 3 up, 3 in
Jan 31 08:24:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2806860288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1145741290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:46 compute-2 ceph-mon[77282]: pgmap v2727: 305 pgs: 305 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.1 MiB/s wr, 231 op/s
Jan 31 08:24:46 compute-2 nova_compute[226829]: 2026-01-31 08:24:46.434 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Applying migration context for instance 90aa4e13-650f-43f2-8ebe-19a34e0cc605 as it has an incoming, in-progress migration 28009c64-41b3-4fe9-854a-e346c8d0b39b. Migration status is finished _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950
Jan 31 08:24:46 compute-2 nova_compute[226829]: 2026-01-31 08:24:46.435 226833 INFO nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Updating resource usage from migration 28009c64-41b3-4fe9-854a-e346c8d0b39b
Jan 31 08:24:46 compute-2 nova_compute[226829]: 2026-01-31 08:24:46.461 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance e8e7a13f-a648-45dc-b768-ac5deac97083 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:24:46 compute-2 nova_compute[226829]: 2026-01-31 08:24:46.462 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 90aa4e13-650f-43f2-8ebe-19a34e0cc605 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:24:46 compute-2 nova_compute[226829]: 2026-01-31 08:24:46.462 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:24:46 compute-2 nova_compute[226829]: 2026-01-31 08:24:46.462 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:24:46 compute-2 nova_compute[226829]: 2026-01-31 08:24:46.528 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:46 compute-2 nova_compute[226829]: 2026-01-31 08:24:46.707 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:24:46 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3613285156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:46 compute-2 nova_compute[226829]: 2026-01-31 08:24:46.967 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:46 compute-2 nova_compute[226829]: 2026-01-31 08:24:46.973 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:24:47 compute-2 nova_compute[226829]: 2026-01-31 08:24:47.009 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:24:47 compute-2 nova_compute[226829]: 2026-01-31 08:24:47.088 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:24:47 compute-2 nova_compute[226829]: 2026-01-31 08:24:47.089 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:47.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:47 compute-2 ceph-mon[77282]: osdmap e353: 3 total, 3 up, 3 in
Jan 31 08:24:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3565020080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3613285156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:47 compute-2 nova_compute[226829]: 2026-01-31 08:24:47.725 226833 DEBUG nova.compute.manager [req-bf1791fa-170b-4308-ad7e-a20752132e25 req-3e569f23-76d0-4530-b65f-54b67310bb2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received event network-vif-plugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:24:47 compute-2 nova_compute[226829]: 2026-01-31 08:24:47.726 226833 DEBUG oslo_concurrency.lockutils [req-bf1791fa-170b-4308-ad7e-a20752132e25 req-3e569f23-76d0-4530-b65f-54b67310bb2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:47 compute-2 nova_compute[226829]: 2026-01-31 08:24:47.726 226833 DEBUG oslo_concurrency.lockutils [req-bf1791fa-170b-4308-ad7e-a20752132e25 req-3e569f23-76d0-4530-b65f-54b67310bb2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:47 compute-2 nova_compute[226829]: 2026-01-31 08:24:47.726 226833 DEBUG oslo_concurrency.lockutils [req-bf1791fa-170b-4308-ad7e-a20752132e25 req-3e569f23-76d0-4530-b65f-54b67310bb2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:47 compute-2 nova_compute[226829]: 2026-01-31 08:24:47.726 226833 DEBUG nova.compute.manager [req-bf1791fa-170b-4308-ad7e-a20752132e25 req-3e569f23-76d0-4530-b65f-54b67310bb2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] No waiting events found dispatching network-vif-plugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:24:47 compute-2 nova_compute[226829]: 2026-01-31 08:24:47.726 226833 WARNING nova.compute.manager [req-bf1791fa-170b-4308-ad7e-a20752132e25 req-3e569f23-76d0-4530-b65f-54b67310bb2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received unexpected event network-vif-plugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b for instance with vm_state resized and task_state None.
Jan 31 08:24:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:24:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:47.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:24:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:24:48 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2870581475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:48 compute-2 ceph-mon[77282]: pgmap v2729: 305 pgs: 305 active+clean; 566 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 228 op/s
Jan 31 08:24:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2870581475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:24:48.639 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:49.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:49 compute-2 nova_compute[226829]: 2026-01-31 08:24:49.608 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:24:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:49.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:24:49 compute-2 sudo[298062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:24:49 compute-2 sudo[298062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:49 compute-2 sudo[298062]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:49 compute-2 sudo[298088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:24:49 compute-2 sudo[298088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:49 compute-2 sudo[298088]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:50 compute-2 ceph-mon[77282]: pgmap v2730: 305 pgs: 305 active+clean; 523 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 206 op/s
Jan 31 08:24:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3577017772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e354 e354: 3 total, 3 up, 3 in
Jan 31 08:24:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:51.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:51 compute-2 nova_compute[226829]: 2026-01-31 08:24:51.709 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:51.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:52 compute-2 ceph-mon[77282]: osdmap e354: 3 total, 3 up, 3 in
Jan 31 08:24:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4119211165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:52 compute-2 ceph-mon[77282]: pgmap v2732: 305 pgs: 305 active+clean; 474 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 268 KiB/s wr, 187 op/s
Jan 31 08:24:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e355 e355: 3 total, 3 up, 3 in
Jan 31 08:24:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/737181080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:53 compute-2 ceph-mon[77282]: osdmap e355: 3 total, 3 up, 3 in
Jan 31 08:24:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:24:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1721956627' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:24:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:24:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1721956627' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:24:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:53.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:24:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:53.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:24:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1721956627' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:24:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1721956627' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:24:54 compute-2 ceph-mon[77282]: pgmap v2734: 305 pgs: 305 active+clean; 417 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.2 KiB/s wr, 225 op/s
Jan 31 08:24:54 compute-2 nova_compute[226829]: 2026-01-31 08:24:54.611 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e355 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:55.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:55.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:56 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/88557555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:56 compute-2 podman[298116]: 2026-01-31 08:24:56.172631359 +0000 UTC m=+0.059824484 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 08:24:56 compute-2 nova_compute[226829]: 2026-01-31 08:24:56.711 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:57 compute-2 ceph-mon[77282]: pgmap v2735: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 27 KiB/s wr, 228 op/s
Jan 31 08:24:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3119264981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:57.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e356 e356: 3 total, 3 up, 3 in
Jan 31 08:24:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:57.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e357 e357: 3 total, 3 up, 3 in
Jan 31 08:24:58 compute-2 ceph-mon[77282]: osdmap e356: 3 total, 3 up, 3 in
Jan 31 08:24:58 compute-2 ceph-mon[77282]: pgmap v2737: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 299 KiB/s rd, 45 KiB/s wr, 142 op/s
Jan 31 08:24:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:59.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:59 compute-2 ovn_controller[133834]: 2026-01-31T08:24:59Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b4:62:8a 10.100.0.3
Jan 31 08:24:59 compute-2 nova_compute[226829]: 2026-01-31 08:24:59.614 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:59 compute-2 ceph-mon[77282]: osdmap e357: 3 total, 3 up, 3 in
Jan 31 08:24:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:24:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:59.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:00 compute-2 nova_compute[226829]: 2026-01-31 08:25:00.498 226833 DEBUG oslo_concurrency.lockutils [None req-021baf99-e455-45a3-9041-4c59917e1103 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "e8e7a13f-a648-45dc-b768-ac5deac97083" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:00 compute-2 nova_compute[226829]: 2026-01-31 08:25:00.499 226833 DEBUG oslo_concurrency.lockutils [None req-021baf99-e455-45a3-9041-4c59917e1103 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "e8e7a13f-a648-45dc-b768-ac5deac97083" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:00 compute-2 nova_compute[226829]: 2026-01-31 08:25:00.499 226833 DEBUG oslo_concurrency.lockutils [None req-021baf99-e455-45a3-9041-4c59917e1103 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "e8e7a13f-a648-45dc-b768-ac5deac97083-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:00 compute-2 nova_compute[226829]: 2026-01-31 08:25:00.499 226833 DEBUG oslo_concurrency.lockutils [None req-021baf99-e455-45a3-9041-4c59917e1103 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "e8e7a13f-a648-45dc-b768-ac5deac97083-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:00 compute-2 nova_compute[226829]: 2026-01-31 08:25:00.500 226833 DEBUG oslo_concurrency.lockutils [None req-021baf99-e455-45a3-9041-4c59917e1103 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "e8e7a13f-a648-45dc-b768-ac5deac97083-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:00 compute-2 nova_compute[226829]: 2026-01-31 08:25:00.501 226833 INFO nova.compute.manager [None req-021baf99-e455-45a3-9041-4c59917e1103 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Terminating instance
Jan 31 08:25:00 compute-2 nova_compute[226829]: 2026-01-31 08:25:00.502 226833 DEBUG nova.compute.manager [None req-021baf99-e455-45a3-9041-4c59917e1103 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:25:00 compute-2 kernel: tap22013b45-81 (unregistering): left promiscuous mode
Jan 31 08:25:00 compute-2 NetworkManager[48999]: <info>  [1769847900.5709] device (tap22013b45-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:25:00 compute-2 nova_compute[226829]: 2026-01-31 08:25:00.576 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:00 compute-2 ovn_controller[133834]: 2026-01-31T08:25:00Z|00639|binding|INFO|Releasing lport 22013b45-81b3-43ce-9b55-b18d9c07bbef from this chassis (sb_readonly=0)
Jan 31 08:25:00 compute-2 ovn_controller[133834]: 2026-01-31T08:25:00Z|00640|binding|INFO|Setting lport 22013b45-81b3-43ce-9b55-b18d9c07bbef down in Southbound
Jan 31 08:25:00 compute-2 ovn_controller[133834]: 2026-01-31T08:25:00Z|00641|binding|INFO|Removing iface tap22013b45-81 ovn-installed in OVS
Jan 31 08:25:00 compute-2 nova_compute[226829]: 2026-01-31 08:25:00.582 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:00 compute-2 nova_compute[226829]: 2026-01-31 08:25:00.587 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:00 compute-2 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000092.scope: Deactivated successfully.
Jan 31 08:25:00 compute-2 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000092.scope: Consumed 23.022s CPU time.
Jan 31 08:25:00 compute-2 systemd-machined[195142]: Machine qemu-67-instance-00000092 terminated.
Jan 31 08:25:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:00.665 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:cc:b0 10.100.0.6'], port_security=['fa:16:3e:74:cc:b0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e8e7a13f-a648-45dc-b768-ac5deac97083', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c90ea7f1be5f484bb873548236fadc00', 'neutron:revision_number': '4', 'neutron:security_group_ids': '952b4f08-f5a7-4fc0-ae2c-267f2ba857a6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42261fad-d2a1-4da1-823a-75e271c17223, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=22013b45-81b3-43ce-9b55-b18d9c07bbef) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:25:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:00.666 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 22013b45-81b3-43ce-9b55-b18d9c07bbef in datapath 936cead9-bc2f-4c2d-8b4c-6079d2159263 unbound from our chassis
Jan 31 08:25:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:00.668 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 936cead9-bc2f-4c2d-8b4c-6079d2159263, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:25:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:00.669 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[97154a2e-e083-4026-8079-549ace25c110]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:25:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:00.669 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263 namespace which is not needed anymore
Jan 31 08:25:00 compute-2 ceph-mon[77282]: pgmap v2739: 305 pgs: 305 active+clean; 361 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 657 KiB/s rd, 43 KiB/s wr, 105 op/s
Jan 31 08:25:00 compute-2 nova_compute[226829]: 2026-01-31 08:25:00.746 226833 INFO nova.virt.libvirt.driver [-] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Instance destroyed successfully.
Jan 31 08:25:00 compute-2 nova_compute[226829]: 2026-01-31 08:25:00.747 226833 DEBUG nova.objects.instance [None req-021baf99-e455-45a3-9041-4c59917e1103 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'resources' on Instance uuid e8e7a13f-a648-45dc-b768-ac5deac97083 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:25:00 compute-2 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[294234]: [NOTICE]   (294238) : haproxy version is 2.8.14-c23fe91
Jan 31 08:25:00 compute-2 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[294234]: [NOTICE]   (294238) : path to executable is /usr/sbin/haproxy
Jan 31 08:25:00 compute-2 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[294234]: [WARNING]  (294238) : Exiting Master process...
Jan 31 08:25:00 compute-2 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[294234]: [ALERT]    (294238) : Current worker (294240) exited with code 143 (Terminated)
Jan 31 08:25:00 compute-2 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[294234]: [WARNING]  (294238) : All workers exited. Exiting... (0)
Jan 31 08:25:00 compute-2 systemd[1]: libpod-ec8194a389145cf35e33b2ff4b224f4b236a1f61a89e59d838672231e93e6765.scope: Deactivated successfully.
Jan 31 08:25:00 compute-2 podman[298170]: 2026-01-31 08:25:00.795795656 +0000 UTC m=+0.046103622 container died ec8194a389145cf35e33b2ff4b224f4b236a1f61a89e59d838672231e93e6765 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:25:00 compute-2 nova_compute[226829]: 2026-01-31 08:25:00.804 226833 DEBUG nova.virt.libvirt.vif [None req-021baf99-e455-45a3-9041-4c59917e1103 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:20:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-2045886920',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-2045886920',id=146,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:21:09Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c90ea7f1be5f484bb873548236fadc00',ramdisk_id='',reservation_id='r-re5yql0e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1116995694',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1116995694-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:21:18Z,user_data=None,user_id='038e2b3b4f174162a3ac6c4870857e60',uuid=e8e7a13f-a648-45dc-b768-ac5deac97083,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "22013b45-81b3-43ce-9b55-b18d9c07bbef", "address": "fa:16:3e:74:cc:b0", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22013b45-81", "ovs_interfaceid": "22013b45-81b3-43ce-9b55-b18d9c07bbef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:25:00 compute-2 nova_compute[226829]: 2026-01-31 08:25:00.805 226833 DEBUG nova.network.os_vif_util [None req-021baf99-e455-45a3-9041-4c59917e1103 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Converting VIF {"id": "22013b45-81b3-43ce-9b55-b18d9c07bbef", "address": "fa:16:3e:74:cc:b0", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22013b45-81", "ovs_interfaceid": "22013b45-81b3-43ce-9b55-b18d9c07bbef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:25:00 compute-2 nova_compute[226829]: 2026-01-31 08:25:00.805 226833 DEBUG nova.network.os_vif_util [None req-021baf99-e455-45a3-9041-4c59917e1103 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:74:cc:b0,bridge_name='br-int',has_traffic_filtering=True,id=22013b45-81b3-43ce-9b55-b18d9c07bbef,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22013b45-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:25:00 compute-2 nova_compute[226829]: 2026-01-31 08:25:00.806 226833 DEBUG os_vif [None req-021baf99-e455-45a3-9041-4c59917e1103 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:74:cc:b0,bridge_name='br-int',has_traffic_filtering=True,id=22013b45-81b3-43ce-9b55-b18d9c07bbef,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22013b45-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:25:00 compute-2 nova_compute[226829]: 2026-01-31 08:25:00.807 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:00 compute-2 nova_compute[226829]: 2026-01-31 08:25:00.808 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22013b45-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:25:00 compute-2 nova_compute[226829]: 2026-01-31 08:25:00.809 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:00 compute-2 nova_compute[226829]: 2026-01-31 08:25:00.811 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:00 compute-2 nova_compute[226829]: 2026-01-31 08:25:00.816 226833 INFO os_vif [None req-021baf99-e455-45a3-9041-4c59917e1103 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:74:cc:b0,bridge_name='br-int',has_traffic_filtering=True,id=22013b45-81b3-43ce-9b55-b18d9c07bbef,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22013b45-81')
Jan 31 08:25:00 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ec8194a389145cf35e33b2ff4b224f4b236a1f61a89e59d838672231e93e6765-userdata-shm.mount: Deactivated successfully.
Jan 31 08:25:00 compute-2 systemd[1]: var-lib-containers-storage-overlay-ec31cbd6ebb378f66e8365d43ae8cb5af452285c9de5d7a35adda0e26f6d69e1-merged.mount: Deactivated successfully.
Jan 31 08:25:00 compute-2 podman[298170]: 2026-01-31 08:25:00.835415111 +0000 UTC m=+0.085723067 container cleanup ec8194a389145cf35e33b2ff4b224f4b236a1f61a89e59d838672231e93e6765 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:25:00 compute-2 systemd[1]: libpod-conmon-ec8194a389145cf35e33b2ff4b224f4b236a1f61a89e59d838672231e93e6765.scope: Deactivated successfully.
Jan 31 08:25:00 compute-2 podman[298214]: 2026-01-31 08:25:00.885148611 +0000 UTC m=+0.034756304 container remove ec8194a389145cf35e33b2ff4b224f4b236a1f61a89e59d838672231e93e6765 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 08:25:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:00.889 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[968f01db-a0a4-4ece-bcb4-841a691b8e70]: (4, ('Sat Jan 31 08:25:00 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263 (ec8194a389145cf35e33b2ff4b224f4b236a1f61a89e59d838672231e93e6765)\nec8194a389145cf35e33b2ff4b224f4b236a1f61a89e59d838672231e93e6765\nSat Jan 31 08:25:00 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263 (ec8194a389145cf35e33b2ff4b224f4b236a1f61a89e59d838672231e93e6765)\nec8194a389145cf35e33b2ff4b224f4b236a1f61a89e59d838672231e93e6765\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:25:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:00.891 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[299388e2-0312-4335-8f56-4d2b2b2e673a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:25:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:00.892 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap936cead9-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:25:00 compute-2 nova_compute[226829]: 2026-01-31 08:25:00.893 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:00 compute-2 kernel: tap936cead9-b0: left promiscuous mode
Jan 31 08:25:00 compute-2 nova_compute[226829]: 2026-01-31 08:25:00.898 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:00.901 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[69830515-d3f9-4e4a-b780-a9730dc0b6cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:25:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:00.915 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[85c3b508-62fd-4a0d-b06d-7d7efda754dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:25:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:00.916 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[72baf166-5cc8-4505-aa7c-6b5fc6365767]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:25:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:00.926 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3e75ac3b-fe0c-4d70-ab61-6f940bf48591]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 792089, 'reachable_time': 42814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298231, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:25:00 compute-2 systemd[1]: run-netns-ovnmeta\x2d936cead9\x2dbc2f\x2d4c2d\x2d8b4c\x2d6079d2159263.mount: Deactivated successfully.
Jan 31 08:25:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:00.930 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:25:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:00.930 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[d017ec51-f32d-4567-b531-bf36f39886de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:25:01 compute-2 nova_compute[226829]: 2026-01-31 08:25:01.090 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:01 compute-2 nova_compute[226829]: 2026-01-31 08:25:01.090 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:25:01 compute-2 nova_compute[226829]: 2026-01-31 08:25:01.218 226833 INFO nova.virt.libvirt.driver [None req-021baf99-e455-45a3-9041-4c59917e1103 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Deleting instance files /var/lib/nova/instances/e8e7a13f-a648-45dc-b768-ac5deac97083_del
Jan 31 08:25:01 compute-2 nova_compute[226829]: 2026-01-31 08:25:01.219 226833 INFO nova.virt.libvirt.driver [None req-021baf99-e455-45a3-9041-4c59917e1103 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Deletion of /var/lib/nova/instances/e8e7a13f-a648-45dc-b768-ac5deac97083_del complete
Jan 31 08:25:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:01.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:01 compute-2 nova_compute[226829]: 2026-01-31 08:25:01.465 226833 INFO nova.compute.manager [None req-021baf99-e455-45a3-9041-4c59917e1103 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Took 0.96 seconds to destroy the instance on the hypervisor.
Jan 31 08:25:01 compute-2 nova_compute[226829]: 2026-01-31 08:25:01.466 226833 DEBUG oslo.service.loopingcall [None req-021baf99-e455-45a3-9041-4c59917e1103 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:25:01 compute-2 nova_compute[226829]: 2026-01-31 08:25:01.466 226833 DEBUG nova.compute.manager [-] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:25:01 compute-2 nova_compute[226829]: 2026-01-31 08:25:01.467 226833 DEBUG nova.network.neutron [-] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:25:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1055653894' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:25:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1055653894' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:25:01 compute-2 nova_compute[226829]: 2026-01-31 08:25:01.713 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:01 compute-2 nova_compute[226829]: 2026-01-31 08:25:01.790 226833 DEBUG nova.compute.manager [req-8159e916-92fd-4a42-be19-09d1344ed530 req-37ca6e03-228f-496c-a033-08342a8dd8e7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Received event network-vif-unplugged-22013b45-81b3-43ce-9b55-b18d9c07bbef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:25:01 compute-2 nova_compute[226829]: 2026-01-31 08:25:01.790 226833 DEBUG oslo_concurrency.lockutils [req-8159e916-92fd-4a42-be19-09d1344ed530 req-37ca6e03-228f-496c-a033-08342a8dd8e7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e8e7a13f-a648-45dc-b768-ac5deac97083-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:01 compute-2 nova_compute[226829]: 2026-01-31 08:25:01.791 226833 DEBUG oslo_concurrency.lockutils [req-8159e916-92fd-4a42-be19-09d1344ed530 req-37ca6e03-228f-496c-a033-08342a8dd8e7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e8e7a13f-a648-45dc-b768-ac5deac97083-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:01 compute-2 nova_compute[226829]: 2026-01-31 08:25:01.792 226833 DEBUG oslo_concurrency.lockutils [req-8159e916-92fd-4a42-be19-09d1344ed530 req-37ca6e03-228f-496c-a033-08342a8dd8e7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e8e7a13f-a648-45dc-b768-ac5deac97083-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:01 compute-2 nova_compute[226829]: 2026-01-31 08:25:01.792 226833 DEBUG nova.compute.manager [req-8159e916-92fd-4a42-be19-09d1344ed530 req-37ca6e03-228f-496c-a033-08342a8dd8e7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] No waiting events found dispatching network-vif-unplugged-22013b45-81b3-43ce-9b55-b18d9c07bbef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:25:01 compute-2 nova_compute[226829]: 2026-01-31 08:25:01.793 226833 DEBUG nova.compute.manager [req-8159e916-92fd-4a42-be19-09d1344ed530 req-37ca6e03-228f-496c-a033-08342a8dd8e7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Received event network-vif-unplugged-22013b45-81b3-43ce-9b55-b18d9c07bbef for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:25:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:01.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:02 compute-2 nova_compute[226829]: 2026-01-31 08:25:02.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:02 compute-2 ceph-mon[77282]: pgmap v2740: 305 pgs: 305 active+clean; 349 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 42 KiB/s wr, 180 op/s
Jan 31 08:25:03 compute-2 nova_compute[226829]: 2026-01-31 08:25:03.037 226833 DEBUG nova.network.neutron [-] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:25:03 compute-2 nova_compute[226829]: 2026-01-31 08:25:03.065 226833 INFO nova.compute.manager [-] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Took 1.60 seconds to deallocate network for instance.
Jan 31 08:25:03 compute-2 nova_compute[226829]: 2026-01-31 08:25:03.145 226833 DEBUG oslo_concurrency.lockutils [None req-021baf99-e455-45a3-9041-4c59917e1103 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:03 compute-2 nova_compute[226829]: 2026-01-31 08:25:03.146 226833 DEBUG oslo_concurrency.lockutils [None req-021baf99-e455-45a3-9041-4c59917e1103 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:03 compute-2 nova_compute[226829]: 2026-01-31 08:25:03.251 226833 DEBUG oslo_concurrency.processutils [None req-021baf99-e455-45a3-9041-4c59917e1103 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:25:03 compute-2 nova_compute[226829]: 2026-01-31 08:25:03.376 226833 DEBUG nova.compute.manager [req-f52fbef0-4a96-4ccd-9563-20e9b1fd3b9c req-4e4ab0cf-b782-4ea8-8116-f7a570267c19 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Received event network-vif-deleted-22013b45-81b3-43ce-9b55-b18d9c07bbef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:25:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:03.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:25:03 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2295090872' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:25:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:25:03 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2295090872' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:25:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:25:03 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3430440580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:03 compute-2 nova_compute[226829]: 2026-01-31 08:25:03.692 226833 DEBUG oslo_concurrency.processutils [None req-021baf99-e455-45a3-9041-4c59917e1103 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:25:03 compute-2 nova_compute[226829]: 2026-01-31 08:25:03.697 226833 DEBUG nova.compute.provider_tree [None req-021baf99-e455-45a3-9041-4c59917e1103 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:25:03 compute-2 nova_compute[226829]: 2026-01-31 08:25:03.729 226833 DEBUG nova.scheduler.client.report [None req-021baf99-e455-45a3-9041-4c59917e1103 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:25:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2295090872' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:25:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2295090872' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:25:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3430440580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:03 compute-2 nova_compute[226829]: 2026-01-31 08:25:03.759 226833 DEBUG oslo_concurrency.lockutils [None req-021baf99-e455-45a3-9041-4c59917e1103 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:03 compute-2 nova_compute[226829]: 2026-01-31 08:25:03.824 226833 INFO nova.scheduler.client.report [None req-021baf99-e455-45a3-9041-4c59917e1103 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Deleted allocations for instance e8e7a13f-a648-45dc-b768-ac5deac97083
Jan 31 08:25:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:03.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:03 compute-2 nova_compute[226829]: 2026-01-31 08:25:03.960 226833 DEBUG oslo_concurrency.lockutils [None req-021baf99-e455-45a3-9041-4c59917e1103 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "e8e7a13f-a648-45dc-b768-ac5deac97083" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.461s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:03 compute-2 nova_compute[226829]: 2026-01-31 08:25:03.963 226833 DEBUG nova.compute.manager [req-5ed843d4-b8c9-46fe-91d1-7364efe0077d req-8df242d4-73b1-4207-8c64-a54aba9d92d4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Received event network-vif-plugged-22013b45-81b3-43ce-9b55-b18d9c07bbef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:25:03 compute-2 nova_compute[226829]: 2026-01-31 08:25:03.963 226833 DEBUG oslo_concurrency.lockutils [req-5ed843d4-b8c9-46fe-91d1-7364efe0077d req-8df242d4-73b1-4207-8c64-a54aba9d92d4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e8e7a13f-a648-45dc-b768-ac5deac97083-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:03 compute-2 nova_compute[226829]: 2026-01-31 08:25:03.964 226833 DEBUG oslo_concurrency.lockutils [req-5ed843d4-b8c9-46fe-91d1-7364efe0077d req-8df242d4-73b1-4207-8c64-a54aba9d92d4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e8e7a13f-a648-45dc-b768-ac5deac97083-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:03 compute-2 nova_compute[226829]: 2026-01-31 08:25:03.964 226833 DEBUG oslo_concurrency.lockutils [req-5ed843d4-b8c9-46fe-91d1-7364efe0077d req-8df242d4-73b1-4207-8c64-a54aba9d92d4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e8e7a13f-a648-45dc-b768-ac5deac97083-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:03 compute-2 nova_compute[226829]: 2026-01-31 08:25:03.964 226833 DEBUG nova.compute.manager [req-5ed843d4-b8c9-46fe-91d1-7364efe0077d req-8df242d4-73b1-4207-8c64-a54aba9d92d4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] No waiting events found dispatching network-vif-plugged-22013b45-81b3-43ce-9b55-b18d9c07bbef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:25:03 compute-2 nova_compute[226829]: 2026-01-31 08:25:03.964 226833 WARNING nova.compute.manager [req-5ed843d4-b8c9-46fe-91d1-7364efe0077d req-8df242d4-73b1-4207-8c64-a54aba9d92d4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Received unexpected event network-vif-plugged-22013b45-81b3-43ce-9b55-b18d9c07bbef for instance with vm_state deleted and task_state None.
Jan 31 08:25:04 compute-2 ceph-mon[77282]: pgmap v2741: 305 pgs: 305 active+clean; 311 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 21 KiB/s wr, 256 op/s
Jan 31 08:25:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:05.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:05 compute-2 nova_compute[226829]: 2026-01-31 08:25:05.812 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:05.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:06 compute-2 ceph-mon[77282]: pgmap v2742: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 19 KiB/s wr, 252 op/s
Jan 31 08:25:06 compute-2 nova_compute[226829]: 2026-01-31 08:25:06.715 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:06.902 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:06.903 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:06.903 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:07 compute-2 nova_compute[226829]: 2026-01-31 08:25:07.208 226833 INFO nova.compute.manager [None req-02031dc3-a122-4dff-9fcc-ba0e4b6c22a0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Get console output
Jan 31 08:25:07 compute-2 nova_compute[226829]: 2026-01-31 08:25:07.215 267670 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 31 08:25:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:07.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e358 e358: 3 total, 3 up, 3 in
Jan 31 08:25:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:07.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:08 compute-2 ceph-mon[77282]: osdmap e358: 3 total, 3 up, 3 in
Jan 31 08:25:08 compute-2 ceph-mon[77282]: pgmap v2744: 305 pgs: 305 active+clean; 248 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 19 KiB/s wr, 230 op/s
Jan 31 08:25:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2344999837' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:25:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2344999837' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:25:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:09.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:09.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:10 compute-2 sudo[298260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:25:10 compute-2 sudo[298260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:10 compute-2 sudo[298260]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:10 compute-2 sudo[298285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:25:10 compute-2 sudo[298285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:10 compute-2 sudo[298285]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:10 compute-2 ceph-mon[77282]: pgmap v2745: 305 pgs: 305 active+clean; 251 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 216 KiB/s wr, 206 op/s
Jan 31 08:25:10 compute-2 nova_compute[226829]: 2026-01-31 08:25:10.814 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:10 compute-2 nova_compute[226829]: 2026-01-31 08:25:10.849 226833 DEBUG nova.compute.manager [req-f5405e73-5042-4254-9183-38dff3cf10dd req-e552b7d6-827e-4fee-988c-3610741f8cc6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received event network-changed-7d9d74bf-cfe7-4c4d-aaec-f0662642996b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:25:10 compute-2 nova_compute[226829]: 2026-01-31 08:25:10.849 226833 DEBUG nova.compute.manager [req-f5405e73-5042-4254-9183-38dff3cf10dd req-e552b7d6-827e-4fee-988c-3610741f8cc6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Refreshing instance network info cache due to event network-changed-7d9d74bf-cfe7-4c4d-aaec-f0662642996b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:25:10 compute-2 nova_compute[226829]: 2026-01-31 08:25:10.850 226833 DEBUG oslo_concurrency.lockutils [req-f5405e73-5042-4254-9183-38dff3cf10dd req-e552b7d6-827e-4fee-988c-3610741f8cc6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-90aa4e13-650f-43f2-8ebe-19a34e0cc605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:25:10 compute-2 nova_compute[226829]: 2026-01-31 08:25:10.850 226833 DEBUG oslo_concurrency.lockutils [req-f5405e73-5042-4254-9183-38dff3cf10dd req-e552b7d6-827e-4fee-988c-3610741f8cc6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-90aa4e13-650f-43f2-8ebe-19a34e0cc605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:25:10 compute-2 nova_compute[226829]: 2026-01-31 08:25:10.850 226833 DEBUG nova.network.neutron [req-f5405e73-5042-4254-9183-38dff3cf10dd req-e552b7d6-827e-4fee-988c-3610741f8cc6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Refreshing network info cache for port 7d9d74bf-cfe7-4c4d-aaec-f0662642996b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:25:11 compute-2 nova_compute[226829]: 2026-01-31 08:25:11.037 226833 DEBUG oslo_concurrency.lockutils [None req-2c81c3bd-49a1-4736-82e8-9d96098071fc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:11 compute-2 nova_compute[226829]: 2026-01-31 08:25:11.037 226833 DEBUG oslo_concurrency.lockutils [None req-2c81c3bd-49a1-4736-82e8-9d96098071fc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:11 compute-2 nova_compute[226829]: 2026-01-31 08:25:11.038 226833 DEBUG oslo_concurrency.lockutils [None req-2c81c3bd-49a1-4736-82e8-9d96098071fc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:11 compute-2 nova_compute[226829]: 2026-01-31 08:25:11.038 226833 DEBUG oslo_concurrency.lockutils [None req-2c81c3bd-49a1-4736-82e8-9d96098071fc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:11 compute-2 nova_compute[226829]: 2026-01-31 08:25:11.038 226833 DEBUG oslo_concurrency.lockutils [None req-2c81c3bd-49a1-4736-82e8-9d96098071fc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:11 compute-2 nova_compute[226829]: 2026-01-31 08:25:11.039 226833 INFO nova.compute.manager [None req-2c81c3bd-49a1-4736-82e8-9d96098071fc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Terminating instance
Jan 31 08:25:11 compute-2 nova_compute[226829]: 2026-01-31 08:25:11.040 226833 DEBUG nova.compute.manager [None req-2c81c3bd-49a1-4736-82e8-9d96098071fc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:25:11 compute-2 kernel: tap7d9d74bf-cf (unregistering): left promiscuous mode
Jan 31 08:25:11 compute-2 NetworkManager[48999]: <info>  [1769847911.0997] device (tap7d9d74bf-cf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:25:11 compute-2 ovn_controller[133834]: 2026-01-31T08:25:11Z|00642|binding|INFO|Releasing lport 7d9d74bf-cfe7-4c4d-aaec-f0662642996b from this chassis (sb_readonly=0)
Jan 31 08:25:11 compute-2 ovn_controller[133834]: 2026-01-31T08:25:11Z|00643|binding|INFO|Setting lport 7d9d74bf-cfe7-4c4d-aaec-f0662642996b down in Southbound
Jan 31 08:25:11 compute-2 nova_compute[226829]: 2026-01-31 08:25:11.105 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:11 compute-2 ovn_controller[133834]: 2026-01-31T08:25:11Z|00644|binding|INFO|Removing iface tap7d9d74bf-cf ovn-installed in OVS
Jan 31 08:25:11 compute-2 nova_compute[226829]: 2026-01-31 08:25:11.107 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:11 compute-2 nova_compute[226829]: 2026-01-31 08:25:11.113 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:11.116 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:62:8a 10.100.0.3'], port_security=['fa:16:3e:b4:62:8a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '90aa4e13-650f-43f2-8ebe-19a34e0cc605', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a584cda0-df11-4171-9687-b79f1d3fe460', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '8', 'neutron:security_group_ids': '3f15617b-dce2-4914-b18c-70facd7e86fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b34831f-93cf-4037-8766-7bee8dbb9141, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=7d9d74bf-cfe7-4c4d-aaec-f0662642996b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:25:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:11.118 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 7d9d74bf-cfe7-4c4d-aaec-f0662642996b in datapath a584cda0-df11-4171-9687-b79f1d3fe460 unbound from our chassis
Jan 31 08:25:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:11.120 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a584cda0-df11-4171-9687-b79f1d3fe460, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:25:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:11.121 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1c7c6c76-7929-4202-8cb9-a77398604f8f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:25:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:11.121 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460 namespace which is not needed anymore
Jan 31 08:25:11 compute-2 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000009a.scope: Deactivated successfully.
Jan 31 08:25:11 compute-2 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000009a.scope: Consumed 13.605s CPU time.
Jan 31 08:25:11 compute-2 systemd-machined[195142]: Machine qemu-72-instance-0000009a terminated.
Jan 31 08:25:11 compute-2 neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460[298001]: [NOTICE]   (298005) : haproxy version is 2.8.14-c23fe91
Jan 31 08:25:11 compute-2 neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460[298001]: [NOTICE]   (298005) : path to executable is /usr/sbin/haproxy
Jan 31 08:25:11 compute-2 neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460[298001]: [ALERT]    (298005) : Current worker (298007) exited with code 143 (Terminated)
Jan 31 08:25:11 compute-2 neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460[298001]: [WARNING]  (298005) : All workers exited. Exiting... (0)
Jan 31 08:25:11 compute-2 systemd[1]: libpod-b94b832fd3ba84ffcb318d8e772f40e7dc85d63fbdb7ea0e6bb484abafefd5ea.scope: Deactivated successfully.
Jan 31 08:25:11 compute-2 podman[298334]: 2026-01-31 08:25:11.228340914 +0000 UTC m=+0.041592150 container died b94b832fd3ba84ffcb318d8e772f40e7dc85d63fbdb7ea0e6bb484abafefd5ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:25:11 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b94b832fd3ba84ffcb318d8e772f40e7dc85d63fbdb7ea0e6bb484abafefd5ea-userdata-shm.mount: Deactivated successfully.
Jan 31 08:25:11 compute-2 systemd[1]: var-lib-containers-storage-overlay-d69dcbf25090644f5f972ef5ddc2c4a6205a6bebda5a1254f33460ec54a8937c-merged.mount: Deactivated successfully.
Jan 31 08:25:11 compute-2 podman[298334]: 2026-01-31 08:25:11.265358838 +0000 UTC m=+0.078610064 container cleanup b94b832fd3ba84ffcb318d8e772f40e7dc85d63fbdb7ea0e6bb484abafefd5ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 08:25:11 compute-2 systemd[1]: libpod-conmon-b94b832fd3ba84ffcb318d8e772f40e7dc85d63fbdb7ea0e6bb484abafefd5ea.scope: Deactivated successfully.
Jan 31 08:25:11 compute-2 nova_compute[226829]: 2026-01-31 08:25:11.271 226833 INFO nova.virt.libvirt.driver [-] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Instance destroyed successfully.
Jan 31 08:25:11 compute-2 nova_compute[226829]: 2026-01-31 08:25:11.272 226833 DEBUG nova.objects.instance [None req-2c81c3bd-49a1-4736-82e8-9d96098071fc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'resources' on Instance uuid 90aa4e13-650f-43f2-8ebe-19a34e0cc605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:25:11 compute-2 nova_compute[226829]: 2026-01-31 08:25:11.289 226833 DEBUG nova.virt.libvirt.vif [None req-2c81c3bd-49a1-4736-82e8-9d96098071fc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:23:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2050667150',display_name='tempest-TestNetworkAdvancedServerOps-server-2050667150',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2050667150',id=154,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEPErrFuMatngmbPFI9nTUxaXvpce6NcVYOjpFna1bVYBBpAXT5DWY6THfcwtLeuI/NzxTjJ2/S962+d7YMt9k52/1Ce4TFS9tLoM3ovnpGdPHYwJISH1c/E+TlOqwaBJw==',key_name='tempest-TestNetworkAdvancedServerOps-1275181138',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:24:45Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-wq345vcr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:24:52Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=90aa4e13-650f-43f2-8ebe-19a34e0cc605,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "address": "fa:16:3e:b4:62:8a", "network": {"id": "a584cda0-df11-4171-9687-b79f1d3fe460", "bridge": "br-int", "label": "tempest-network-smoke--863310389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d9d74bf-cf", "ovs_interfaceid": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:25:11 compute-2 nova_compute[226829]: 2026-01-31 08:25:11.291 226833 DEBUG nova.network.os_vif_util [None req-2c81c3bd-49a1-4736-82e8-9d96098071fc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "address": "fa:16:3e:b4:62:8a", "network": {"id": "a584cda0-df11-4171-9687-b79f1d3fe460", "bridge": "br-int", "label": "tempest-network-smoke--863310389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d9d74bf-cf", "ovs_interfaceid": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:25:11 compute-2 nova_compute[226829]: 2026-01-31 08:25:11.292 226833 DEBUG nova.network.os_vif_util [None req-2c81c3bd-49a1-4736-82e8-9d96098071fc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b4:62:8a,bridge_name='br-int',has_traffic_filtering=True,id=7d9d74bf-cfe7-4c4d-aaec-f0662642996b,network=Network(a584cda0-df11-4171-9687-b79f1d3fe460),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d9d74bf-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:25:11 compute-2 nova_compute[226829]: 2026-01-31 08:25:11.293 226833 DEBUG os_vif [None req-2c81c3bd-49a1-4736-82e8-9d96098071fc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:62:8a,bridge_name='br-int',has_traffic_filtering=True,id=7d9d74bf-cfe7-4c4d-aaec-f0662642996b,network=Network(a584cda0-df11-4171-9687-b79f1d3fe460),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d9d74bf-cf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:25:11 compute-2 nova_compute[226829]: 2026-01-31 08:25:11.294 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:11 compute-2 nova_compute[226829]: 2026-01-31 08:25:11.295 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7d9d74bf-cf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:25:11 compute-2 nova_compute[226829]: 2026-01-31 08:25:11.296 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:11 compute-2 nova_compute[226829]: 2026-01-31 08:25:11.297 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:11 compute-2 nova_compute[226829]: 2026-01-31 08:25:11.300 226833 INFO os_vif [None req-2c81c3bd-49a1-4736-82e8-9d96098071fc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:62:8a,bridge_name='br-int',has_traffic_filtering=True,id=7d9d74bf-cfe7-4c4d-aaec-f0662642996b,network=Network(a584cda0-df11-4171-9687-b79f1d3fe460),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d9d74bf-cf')
Jan 31 08:25:11 compute-2 podman[298375]: 2026-01-31 08:25:11.330312641 +0000 UTC m=+0.046478542 container remove b94b832fd3ba84ffcb318d8e772f40e7dc85d63fbdb7ea0e6bb484abafefd5ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:25:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:11.335 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[729c7708-efe0-471a-8121-51d3e1e9924e]: (4, ('Sat Jan 31 08:25:11 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460 (b94b832fd3ba84ffcb318d8e772f40e7dc85d63fbdb7ea0e6bb484abafefd5ea)\nb94b832fd3ba84ffcb318d8e772f40e7dc85d63fbdb7ea0e6bb484abafefd5ea\nSat Jan 31 08:25:11 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460 (b94b832fd3ba84ffcb318d8e772f40e7dc85d63fbdb7ea0e6bb484abafefd5ea)\nb94b832fd3ba84ffcb318d8e772f40e7dc85d63fbdb7ea0e6bb484abafefd5ea\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:25:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:11.336 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ae48e019-7416-49d5-b1fb-83d4e9ed7b48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:25:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:11.337 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa584cda0-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:25:11 compute-2 nova_compute[226829]: 2026-01-31 08:25:11.378 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:11 compute-2 kernel: tapa584cda0-d0: left promiscuous mode
Jan 31 08:25:11 compute-2 nova_compute[226829]: 2026-01-31 08:25:11.384 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:11 compute-2 nova_compute[226829]: 2026-01-31 08:25:11.386 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:11.387 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a2a407a5-ab29-4a79-aaed-d05a318e47b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:25:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:11.402 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a4d40e86-341b-46d7-8e5d-57ee59c0735e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:25:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:11.404 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[88915827-8ea6-4d96-9150-c500a19bbf98]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:25:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:11.415 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[57f4aff8-5e94-45bf-937d-b6a1880be169]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 813748, 'reachable_time': 26979, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298409, 'error': None, 'target': 'ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:25:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:11.417 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:25:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:11.418 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[91546bd2-77c5-4783-b80f-bef828e19bc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:25:11 compute-2 systemd[1]: run-netns-ovnmeta\x2da584cda0\x2ddf11\x2d4171\x2d9687\x2db79f1d3fe460.mount: Deactivated successfully.
Jan 31 08:25:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:25:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:11.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:25:11 compute-2 nova_compute[226829]: 2026-01-31 08:25:11.717 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1401734021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:11.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:12 compute-2 nova_compute[226829]: 2026-01-31 08:25:12.683 226833 INFO nova.virt.libvirt.driver [None req-2c81c3bd-49a1-4736-82e8-9d96098071fc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Deleting instance files /var/lib/nova/instances/90aa4e13-650f-43f2-8ebe-19a34e0cc605_del
Jan 31 08:25:12 compute-2 nova_compute[226829]: 2026-01-31 08:25:12.684 226833 INFO nova.virt.libvirt.driver [None req-2c81c3bd-49a1-4736-82e8-9d96098071fc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Deletion of /var/lib/nova/instances/90aa4e13-650f-43f2-8ebe-19a34e0cc605_del complete
Jan 31 08:25:12 compute-2 nova_compute[226829]: 2026-01-31 08:25:12.797 226833 INFO nova.compute.manager [None req-2c81c3bd-49a1-4736-82e8-9d96098071fc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Took 1.76 seconds to destroy the instance on the hypervisor.
Jan 31 08:25:12 compute-2 nova_compute[226829]: 2026-01-31 08:25:12.797 226833 DEBUG oslo.service.loopingcall [None req-2c81c3bd-49a1-4736-82e8-9d96098071fc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:25:12 compute-2 nova_compute[226829]: 2026-01-31 08:25:12.798 226833 DEBUG nova.compute.manager [-] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:25:12 compute-2 nova_compute[226829]: 2026-01-31 08:25:12.798 226833 DEBUG nova.network.neutron [-] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:25:12 compute-2 ceph-mon[77282]: pgmap v2746: 305 pgs: 305 active+clean; 256 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.1 MiB/s wr, 163 op/s
Jan 31 08:25:13 compute-2 nova_compute[226829]: 2026-01-31 08:25:13.201 226833 DEBUG nova.compute.manager [req-2e76186d-5b76-4f15-bd76-adc605051567 req-548a841c-5871-4053-a022-7d57cf2f6404 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received event network-vif-unplugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:25:13 compute-2 nova_compute[226829]: 2026-01-31 08:25:13.201 226833 DEBUG oslo_concurrency.lockutils [req-2e76186d-5b76-4f15-bd76-adc605051567 req-548a841c-5871-4053-a022-7d57cf2f6404 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:13 compute-2 nova_compute[226829]: 2026-01-31 08:25:13.201 226833 DEBUG oslo_concurrency.lockutils [req-2e76186d-5b76-4f15-bd76-adc605051567 req-548a841c-5871-4053-a022-7d57cf2f6404 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:13 compute-2 nova_compute[226829]: 2026-01-31 08:25:13.201 226833 DEBUG oslo_concurrency.lockutils [req-2e76186d-5b76-4f15-bd76-adc605051567 req-548a841c-5871-4053-a022-7d57cf2f6404 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:13 compute-2 nova_compute[226829]: 2026-01-31 08:25:13.202 226833 DEBUG nova.compute.manager [req-2e76186d-5b76-4f15-bd76-adc605051567 req-548a841c-5871-4053-a022-7d57cf2f6404 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] No waiting events found dispatching network-vif-unplugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:25:13 compute-2 nova_compute[226829]: 2026-01-31 08:25:13.202 226833 DEBUG nova.compute.manager [req-2e76186d-5b76-4f15-bd76-adc605051567 req-548a841c-5871-4053-a022-7d57cf2f6404 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received event network-vif-unplugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:25:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:25:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:13.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:25:13 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #55. Immutable memtables: 11.
Jan 31 08:25:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:13.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:14 compute-2 ceph-mon[77282]: pgmap v2747: 305 pgs: 305 active+clean; 239 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 309 KiB/s rd, 1.9 MiB/s wr, 97 op/s
Jan 31 08:25:15 compute-2 podman[298413]: 2026-01-31 08:25:15.229918364 +0000 UTC m=+0.115529646 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:25:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:15.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:15 compute-2 nova_compute[226829]: 2026-01-31 08:25:15.744 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847900.7439637, e8e7a13f-a648-45dc-b768-ac5deac97083 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:25:15 compute-2 nova_compute[226829]: 2026-01-31 08:25:15.745 226833 INFO nova.compute.manager [-] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] VM Stopped (Lifecycle Event)
Jan 31 08:25:15 compute-2 nova_compute[226829]: 2026-01-31 08:25:15.848 226833 DEBUG nova.compute.manager [None req-c43f1fef-0f08-42ab-a08a-8858fa3cae32 - - - - - -] [instance: e8e7a13f-a648-45dc-b768-ac5deac97083] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:25:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:15.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:16 compute-2 ceph-mon[77282]: pgmap v2748: 305 pgs: 305 active+clean; 231 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 435 KiB/s rd, 4.0 MiB/s wr, 132 op/s
Jan 31 08:25:16 compute-2 nova_compute[226829]: 2026-01-31 08:25:16.270 226833 DEBUG nova.compute.manager [req-0d033b98-387b-4dcf-9532-c35ded9cc166 req-1120824b-e3fc-4829-826e-3b4a893901d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received event network-vif-plugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:25:16 compute-2 nova_compute[226829]: 2026-01-31 08:25:16.271 226833 DEBUG oslo_concurrency.lockutils [req-0d033b98-387b-4dcf-9532-c35ded9cc166 req-1120824b-e3fc-4829-826e-3b4a893901d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:16 compute-2 nova_compute[226829]: 2026-01-31 08:25:16.271 226833 DEBUG oslo_concurrency.lockutils [req-0d033b98-387b-4dcf-9532-c35ded9cc166 req-1120824b-e3fc-4829-826e-3b4a893901d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:16 compute-2 nova_compute[226829]: 2026-01-31 08:25:16.272 226833 DEBUG oslo_concurrency.lockutils [req-0d033b98-387b-4dcf-9532-c35ded9cc166 req-1120824b-e3fc-4829-826e-3b4a893901d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:16 compute-2 nova_compute[226829]: 2026-01-31 08:25:16.272 226833 DEBUG nova.compute.manager [req-0d033b98-387b-4dcf-9532-c35ded9cc166 req-1120824b-e3fc-4829-826e-3b4a893901d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] No waiting events found dispatching network-vif-plugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:25:16 compute-2 nova_compute[226829]: 2026-01-31 08:25:16.272 226833 WARNING nova.compute.manager [req-0d033b98-387b-4dcf-9532-c35ded9cc166 req-1120824b-e3fc-4829-826e-3b4a893901d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received unexpected event network-vif-plugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b for instance with vm_state active and task_state deleting.
Jan 31 08:25:16 compute-2 nova_compute[226829]: 2026-01-31 08:25:16.285 226833 DEBUG nova.network.neutron [-] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:25:16 compute-2 nova_compute[226829]: 2026-01-31 08:25:16.344 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:16 compute-2 nova_compute[226829]: 2026-01-31 08:25:16.421 226833 INFO nova.compute.manager [-] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Took 3.62 seconds to deallocate network for instance.
Jan 31 08:25:16 compute-2 nova_compute[226829]: 2026-01-31 08:25:16.433 226833 DEBUG nova.compute.manager [req-06870163-c65c-4a86-81e6-5bc8c6f7244f req-a934396d-c38e-423b-8b35-92c2c96b4ac9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received event network-vif-deleted-7d9d74bf-cfe7-4c4d-aaec-f0662642996b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:25:16 compute-2 nova_compute[226829]: 2026-01-31 08:25:16.434 226833 INFO nova.compute.manager [req-06870163-c65c-4a86-81e6-5bc8c6f7244f req-a934396d-c38e-423b-8b35-92c2c96b4ac9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Neutron deleted interface 7d9d74bf-cfe7-4c4d-aaec-f0662642996b; detaching it from the instance and deleting it from the info cache
Jan 31 08:25:16 compute-2 nova_compute[226829]: 2026-01-31 08:25:16.434 226833 DEBUG nova.network.neutron [req-06870163-c65c-4a86-81e6-5bc8c6f7244f req-a934396d-c38e-423b-8b35-92c2c96b4ac9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:25:16 compute-2 nova_compute[226829]: 2026-01-31 08:25:16.619 226833 DEBUG nova.compute.manager [req-06870163-c65c-4a86-81e6-5bc8c6f7244f req-a934396d-c38e-423b-8b35-92c2c96b4ac9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Detach interface failed, port_id=7d9d74bf-cfe7-4c4d-aaec-f0662642996b, reason: Instance 90aa4e13-650f-43f2-8ebe-19a34e0cc605 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:25:16 compute-2 nova_compute[226829]: 2026-01-31 08:25:16.655 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:16 compute-2 nova_compute[226829]: 2026-01-31 08:25:16.725 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:16 compute-2 nova_compute[226829]: 2026-01-31 08:25:16.733 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:16 compute-2 nova_compute[226829]: 2026-01-31 08:25:16.740 226833 DEBUG oslo_concurrency.lockutils [None req-2c81c3bd-49a1-4736-82e8-9d96098071fc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:16 compute-2 nova_compute[226829]: 2026-01-31 08:25:16.741 226833 DEBUG oslo_concurrency.lockutils [None req-2c81c3bd-49a1-4736-82e8-9d96098071fc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:16 compute-2 nova_compute[226829]: 2026-01-31 08:25:16.852 226833 DEBUG oslo_concurrency.processutils [None req-2c81c3bd-49a1-4736-82e8-9d96098071fc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:25:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:25:17 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1727550074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:17 compute-2 nova_compute[226829]: 2026-01-31 08:25:17.287 226833 DEBUG oslo_concurrency.processutils [None req-2c81c3bd-49a1-4736-82e8-9d96098071fc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:25:17 compute-2 nova_compute[226829]: 2026-01-31 08:25:17.292 226833 DEBUG nova.compute.provider_tree [None req-2c81c3bd-49a1-4736-82e8-9d96098071fc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:25:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1727550074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:17 compute-2 nova_compute[226829]: 2026-01-31 08:25:17.381 226833 DEBUG nova.scheduler.client.report [None req-2c81c3bd-49a1-4736-82e8-9d96098071fc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:25:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:17.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:17 compute-2 nova_compute[226829]: 2026-01-31 08:25:17.766 226833 DEBUG oslo_concurrency.lockutils [None req-2c81c3bd-49a1-4736-82e8-9d96098071fc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:25:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:17.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:25:18 compute-2 nova_compute[226829]: 2026-01-31 08:25:18.011 226833 INFO nova.scheduler.client.report [None req-2c81c3bd-49a1-4736-82e8-9d96098071fc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Deleted allocations for instance 90aa4e13-650f-43f2-8ebe-19a34e0cc605
Jan 31 08:25:18 compute-2 nova_compute[226829]: 2026-01-31 08:25:18.301 226833 DEBUG nova.network.neutron [req-f5405e73-5042-4254-9183-38dff3cf10dd req-e552b7d6-827e-4fee-988c-3610741f8cc6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Updated VIF entry in instance network info cache for port 7d9d74bf-cfe7-4c4d-aaec-f0662642996b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:25:18 compute-2 nova_compute[226829]: 2026-01-31 08:25:18.302 226833 DEBUG nova.network.neutron [req-f5405e73-5042-4254-9183-38dff3cf10dd req-e552b7d6-827e-4fee-988c-3610741f8cc6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Updating instance_info_cache with network_info: [{"id": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "address": "fa:16:3e:b4:62:8a", "network": {"id": "a584cda0-df11-4171-9687-b79f1d3fe460", "bridge": "br-int", "label": "tempest-network-smoke--863310389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d9d74bf-cf", "ovs_interfaceid": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:25:18 compute-2 ceph-mon[77282]: pgmap v2749: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 436 KiB/s rd, 4.5 MiB/s wr, 141 op/s
Jan 31 08:25:18 compute-2 nova_compute[226829]: 2026-01-31 08:25:18.646 226833 DEBUG oslo_concurrency.lockutils [req-f5405e73-5042-4254-9183-38dff3cf10dd req-e552b7d6-827e-4fee-988c-3610741f8cc6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-90aa4e13-650f-43f2-8ebe-19a34e0cc605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:25:18 compute-2 nova_compute[226829]: 2026-01-31 08:25:18.816 226833 DEBUG oslo_concurrency.lockutils [None req-2c81c3bd-49a1-4736-82e8-9d96098071fc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:19.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:25:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:19.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:25:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:20 compute-2 ceph-mon[77282]: pgmap v2750: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 380 KiB/s rd, 3.9 MiB/s wr, 123 op/s
Jan 31 08:25:20 compute-2 sudo[298465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:25:20 compute-2 sudo[298465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:20 compute-2 sudo[298465]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:20 compute-2 sudo[298490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:25:20 compute-2 sudo[298490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:20 compute-2 sudo[298490]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:20 compute-2 sudo[298515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:25:20 compute-2 sudo[298515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:20 compute-2 sudo[298515]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:20 compute-2 sudo[298540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:25:20 compute-2 sudo[298540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:21 compute-2 nova_compute[226829]: 2026-01-31 08:25:21.346 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:21.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:21 compute-2 sudo[298540]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:21 compute-2 nova_compute[226829]: 2026-01-31 08:25:21.727 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:21.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:22 compute-2 ceph-mon[77282]: pgmap v2751: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 367 KiB/s rd, 3.7 MiB/s wr, 109 op/s
Jan 31 08:25:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:23.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:25:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:25:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:25:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:25:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:25:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:25:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:25:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:25:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:23.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:24 compute-2 ceph-mon[77282]: pgmap v2752: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 307 KiB/s rd, 3.0 MiB/s wr, 90 op/s
Jan 31 08:25:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:25.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:25:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:25.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:25:26 compute-2 ceph-mon[77282]: pgmap v2753: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 137 KiB/s rd, 2.4 MiB/s wr, 65 op/s
Jan 31 08:25:26 compute-2 nova_compute[226829]: 2026-01-31 08:25:26.270 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847911.2694101, 90aa4e13-650f-43f2-8ebe-19a34e0cc605 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:25:26 compute-2 nova_compute[226829]: 2026-01-31 08:25:26.270 226833 INFO nova.compute.manager [-] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] VM Stopped (Lifecycle Event)
Jan 31 08:25:26 compute-2 nova_compute[226829]: 2026-01-31 08:25:26.348 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:26 compute-2 nova_compute[226829]: 2026-01-31 08:25:26.364 226833 DEBUG nova.compute.manager [None req-47303753-5789-408b-9902-d20b583b531c - - - - - -] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:25:26 compute-2 nova_compute[226829]: 2026-01-31 08:25:26.730 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:27 compute-2 podman[298599]: 2026-01-31 08:25:27.186129495 +0000 UTC m=+0.064865851 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 08:25:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/18559313' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:25:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:27.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:27.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4210798319' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:25:28 compute-2 ceph-mon[77282]: pgmap v2754: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 606 KiB/s wr, 16 op/s
Jan 31 08:25:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:29.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:29 compute-2 sudo[298620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:25:29 compute-2 sudo[298620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:29 compute-2 sudo[298620]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:29.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:29 compute-2 sudo[298646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:25:29 compute-2 sudo[298646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:29 compute-2 sudo[298646]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:30 compute-2 sudo[298671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:25:30 compute-2 sudo[298671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:30 compute-2 sudo[298671]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:30 compute-2 sudo[298696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:25:30 compute-2 sudo[298696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:30 compute-2 sudo[298696]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:30 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:25:30 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:25:30 compute-2 ceph-mon[77282]: pgmap v2755: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.0 KiB/s wr, 0 op/s
Jan 31 08:25:31 compute-2 nova_compute[226829]: 2026-01-31 08:25:31.350 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:25:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:31.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:25:31 compute-2 nova_compute[226829]: 2026-01-31 08:25:31.733 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:25:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:31.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:25:32 compute-2 ceph-mon[77282]: pgmap v2756: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 22 KiB/s wr, 0 op/s
Jan 31 08:25:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:33.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:25:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:33.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:25:34 compute-2 ceph-mon[77282]: pgmap v2757: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.4 KiB/s rd, 16 KiB/s wr, 8 op/s
Jan 31 08:25:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:35.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:35.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:36 compute-2 ceph-mon[77282]: pgmap v2758: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 15 KiB/s wr, 55 op/s
Jan 31 08:25:36 compute-2 nova_compute[226829]: 2026-01-31 08:25:36.353 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:36 compute-2 nova_compute[226829]: 2026-01-31 08:25:36.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:36 compute-2 nova_compute[226829]: 2026-01-31 08:25:36.734 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:37 compute-2 nova_compute[226829]: 2026-01-31 08:25:37.201 226833 DEBUG oslo_concurrency.lockutils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Acquiring lock "6a8b37e1-e818-43a4-bb83-940e1968472d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:37 compute-2 nova_compute[226829]: 2026-01-31 08:25:37.202 226833 DEBUG oslo_concurrency.lockutils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "6a8b37e1-e818-43a4-bb83-940e1968472d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:37 compute-2 nova_compute[226829]: 2026-01-31 08:25:37.221 226833 DEBUG nova.compute.manager [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:25:37 compute-2 nova_compute[226829]: 2026-01-31 08:25:37.314 226833 DEBUG oslo_concurrency.lockutils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:37 compute-2 nova_compute[226829]: 2026-01-31 08:25:37.314 226833 DEBUG oslo_concurrency.lockutils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:37 compute-2 nova_compute[226829]: 2026-01-31 08:25:37.325 226833 DEBUG nova.virt.hardware [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:25:37 compute-2 nova_compute[226829]: 2026-01-31 08:25:37.325 226833 INFO nova.compute.claims [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:25:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:25:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:37.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:25:37 compute-2 nova_compute[226829]: 2026-01-31 08:25:37.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:37 compute-2 nova_compute[226829]: 2026-01-31 08:25:37.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:37 compute-2 nova_compute[226829]: 2026-01-31 08:25:37.487 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:25:37 compute-2 nova_compute[226829]: 2026-01-31 08:25:37.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:25:37 compute-2 nova_compute[226829]: 2026-01-31 08:25:37.531 226833 DEBUG oslo_concurrency.processutils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:25:37 compute-2 nova_compute[226829]: 2026-01-31 08:25:37.554 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 08:25:37 compute-2 nova_compute[226829]: 2026-01-31 08:25:37.554 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:25:37 compute-2 nova_compute[226829]: 2026-01-31 08:25:37.555 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:25:37 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/567916548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:37.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:37 compute-2 nova_compute[226829]: 2026-01-31 08:25:37.954 226833 DEBUG oslo_concurrency.processutils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:25:37 compute-2 nova_compute[226829]: 2026-01-31 08:25:37.961 226833 DEBUG nova.compute.provider_tree [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:25:37 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/567916548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:38 compute-2 nova_compute[226829]: 2026-01-31 08:25:38.226 226833 DEBUG nova.scheduler.client.report [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:25:38 compute-2 nova_compute[226829]: 2026-01-31 08:25:38.720 226833 DEBUG oslo_concurrency.lockutils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.405s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:38 compute-2 nova_compute[226829]: 2026-01-31 08:25:38.720 226833 DEBUG nova.compute.manager [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:25:38 compute-2 nova_compute[226829]: 2026-01-31 08:25:38.916 226833 DEBUG nova.compute.manager [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 31 08:25:38 compute-2 nova_compute[226829]: 2026-01-31 08:25:38.950 226833 INFO nova.virt.libvirt.driver [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:25:39 compute-2 ceph-mon[77282]: pgmap v2759: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 73 op/s
Jan 31 08:25:39 compute-2 nova_compute[226829]: 2026-01-31 08:25:39.175 226833 DEBUG nova.compute.manager [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:25:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:25:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:39.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:25:39 compute-2 nova_compute[226829]: 2026-01-31 08:25:39.738 226833 DEBUG nova.compute.manager [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:25:39 compute-2 nova_compute[226829]: 2026-01-31 08:25:39.739 226833 DEBUG nova.virt.libvirt.driver [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:25:39 compute-2 nova_compute[226829]: 2026-01-31 08:25:39.740 226833 INFO nova.virt.libvirt.driver [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Creating image(s)
Jan 31 08:25:39 compute-2 nova_compute[226829]: 2026-01-31 08:25:39.761 226833 DEBUG nova.storage.rbd_utils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] rbd image 6a8b37e1-e818-43a4-bb83-940e1968472d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:25:39 compute-2 nova_compute[226829]: 2026-01-31 08:25:39.783 226833 DEBUG nova.storage.rbd_utils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] rbd image 6a8b37e1-e818-43a4-bb83-940e1968472d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:25:39 compute-2 nova_compute[226829]: 2026-01-31 08:25:39.805 226833 DEBUG nova.storage.rbd_utils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] rbd image 6a8b37e1-e818-43a4-bb83-940e1968472d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:25:39 compute-2 nova_compute[226829]: 2026-01-31 08:25:39.809 226833 DEBUG oslo_concurrency.processutils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:25:39 compute-2 nova_compute[226829]: 2026-01-31 08:25:39.882 226833 DEBUG oslo_concurrency.processutils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:25:39 compute-2 nova_compute[226829]: 2026-01-31 08:25:39.883 226833 DEBUG oslo_concurrency.lockutils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:39 compute-2 nova_compute[226829]: 2026-01-31 08:25:39.883 226833 DEBUG oslo_concurrency.lockutils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:39 compute-2 nova_compute[226829]: 2026-01-31 08:25:39.883 226833 DEBUG oslo_concurrency.lockutils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:39 compute-2 nova_compute[226829]: 2026-01-31 08:25:39.903 226833 DEBUG nova.storage.rbd_utils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] rbd image 6a8b37e1-e818-43a4-bb83-940e1968472d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:25:39 compute-2 nova_compute[226829]: 2026-01-31 08:25:39.906 226833 DEBUG oslo_concurrency.processutils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 6a8b37e1-e818-43a4-bb83-940e1968472d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:25:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:25:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:39.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:25:40 compute-2 ceph-mon[77282]: pgmap v2760: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 73 op/s
Jan 31 08:25:40 compute-2 nova_compute[226829]: 2026-01-31 08:25:40.268 226833 DEBUG oslo_concurrency.processutils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 6a8b37e1-e818-43a4-bb83-940e1968472d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:25:40 compute-2 nova_compute[226829]: 2026-01-31 08:25:40.337 226833 DEBUG nova.storage.rbd_utils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] resizing rbd image 6a8b37e1-e818-43a4-bb83-940e1968472d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:25:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:40 compute-2 nova_compute[226829]: 2026-01-31 08:25:40.449 226833 DEBUG nova.objects.instance [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lazy-loading 'migration_context' on Instance uuid 6a8b37e1-e818-43a4-bb83-940e1968472d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:25:40 compute-2 nova_compute[226829]: 2026-01-31 08:25:40.622 226833 DEBUG nova.virt.libvirt.driver [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:25:40 compute-2 nova_compute[226829]: 2026-01-31 08:25:40.622 226833 DEBUG nova.virt.libvirt.driver [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Ensure instance console log exists: /var/lib/nova/instances/6a8b37e1-e818-43a4-bb83-940e1968472d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:25:40 compute-2 nova_compute[226829]: 2026-01-31 08:25:40.623 226833 DEBUG oslo_concurrency.lockutils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:40 compute-2 nova_compute[226829]: 2026-01-31 08:25:40.624 226833 DEBUG oslo_concurrency.lockutils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:40 compute-2 nova_compute[226829]: 2026-01-31 08:25:40.625 226833 DEBUG oslo_concurrency.lockutils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:40 compute-2 nova_compute[226829]: 2026-01-31 08:25:40.628 226833 DEBUG nova.virt.libvirt.driver [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:25:40 compute-2 nova_compute[226829]: 2026-01-31 08:25:40.635 226833 WARNING nova.virt.libvirt.driver [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:25:40 compute-2 nova_compute[226829]: 2026-01-31 08:25:40.641 226833 DEBUG nova.virt.libvirt.host [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:25:40 compute-2 nova_compute[226829]: 2026-01-31 08:25:40.642 226833 DEBUG nova.virt.libvirt.host [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:25:40 compute-2 nova_compute[226829]: 2026-01-31 08:25:40.647 226833 DEBUG nova.virt.libvirt.host [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:25:40 compute-2 nova_compute[226829]: 2026-01-31 08:25:40.647 226833 DEBUG nova.virt.libvirt.host [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:25:40 compute-2 nova_compute[226829]: 2026-01-31 08:25:40.650 226833 DEBUG nova.virt.libvirt.driver [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:25:40 compute-2 nova_compute[226829]: 2026-01-31 08:25:40.650 226833 DEBUG nova.virt.hardware [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:25:40 compute-2 nova_compute[226829]: 2026-01-31 08:25:40.651 226833 DEBUG nova.virt.hardware [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:25:40 compute-2 nova_compute[226829]: 2026-01-31 08:25:40.652 226833 DEBUG nova.virt.hardware [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:25:40 compute-2 nova_compute[226829]: 2026-01-31 08:25:40.652 226833 DEBUG nova.virt.hardware [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:25:40 compute-2 nova_compute[226829]: 2026-01-31 08:25:40.653 226833 DEBUG nova.virt.hardware [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:25:40 compute-2 nova_compute[226829]: 2026-01-31 08:25:40.653 226833 DEBUG nova.virt.hardware [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:25:40 compute-2 nova_compute[226829]: 2026-01-31 08:25:40.653 226833 DEBUG nova.virt.hardware [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:25:40 compute-2 nova_compute[226829]: 2026-01-31 08:25:40.654 226833 DEBUG nova.virt.hardware [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:25:40 compute-2 nova_compute[226829]: 2026-01-31 08:25:40.655 226833 DEBUG nova.virt.hardware [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:25:40 compute-2 nova_compute[226829]: 2026-01-31 08:25:40.655 226833 DEBUG nova.virt.hardware [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:25:40 compute-2 nova_compute[226829]: 2026-01-31 08:25:40.655 226833 DEBUG nova.virt.hardware [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:25:40 compute-2 nova_compute[226829]: 2026-01-31 08:25:40.662 226833 DEBUG oslo_concurrency.processutils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:25:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:25:41 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3804637156' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:25:41 compute-2 nova_compute[226829]: 2026-01-31 08:25:41.101 226833 DEBUG oslo_concurrency.processutils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:25:41 compute-2 nova_compute[226829]: 2026-01-31 08:25:41.127 226833 DEBUG nova.storage.rbd_utils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] rbd image 6a8b37e1-e818-43a4-bb83-940e1968472d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:25:41 compute-2 nova_compute[226829]: 2026-01-31 08:25:41.130 226833 DEBUG oslo_concurrency.processutils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:25:41 compute-2 nova_compute[226829]: 2026-01-31 08:25:41.355 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:41 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3804637156' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:25:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:25:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:41.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:25:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:25:41 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2327237780' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:25:41 compute-2 nova_compute[226829]: 2026-01-31 08:25:41.581 226833 DEBUG oslo_concurrency.processutils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:25:41 compute-2 nova_compute[226829]: 2026-01-31 08:25:41.583 226833 DEBUG nova.objects.instance [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lazy-loading 'pci_devices' on Instance uuid 6a8b37e1-e818-43a4-bb83-940e1968472d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:25:41 compute-2 nova_compute[226829]: 2026-01-31 08:25:41.682 226833 DEBUG nova.virt.libvirt.driver [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:25:41 compute-2 nova_compute[226829]:   <uuid>6a8b37e1-e818-43a4-bb83-940e1968472d</uuid>
Jan 31 08:25:41 compute-2 nova_compute[226829]:   <name>instance-0000009e</name>
Jan 31 08:25:41 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:25:41 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:25:41 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:25:41 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:       <nova:name>tempest-ServerShowV247Test-server-618988045</nova:name>
Jan 31 08:25:41 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:25:40</nova:creationTime>
Jan 31 08:25:41 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:25:41 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:25:41 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:25:41 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:25:41 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:25:41 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:25:41 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:25:41 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:25:41 compute-2 nova_compute[226829]:         <nova:user uuid="3ebb4f01dd6e420b91e5f2282ecfd49d">tempest-ServerShowV247Test-1312169099-project-member</nova:user>
Jan 31 08:25:41 compute-2 nova_compute[226829]:         <nova:project uuid="03f29bb9f284401a8ff5c6431219974b">tempest-ServerShowV247Test-1312169099</nova:project>
Jan 31 08:25:41 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:25:41 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:       <nova:ports/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:25:41 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:25:41 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <system>
Jan 31 08:25:41 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:25:41 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:25:41 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:25:41 compute-2 nova_compute[226829]:       <entry name="serial">6a8b37e1-e818-43a4-bb83-940e1968472d</entry>
Jan 31 08:25:41 compute-2 nova_compute[226829]:       <entry name="uuid">6a8b37e1-e818-43a4-bb83-940e1968472d</entry>
Jan 31 08:25:41 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     </system>
Jan 31 08:25:41 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:25:41 compute-2 nova_compute[226829]:   <os>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:   </os>
Jan 31 08:25:41 compute-2 nova_compute[226829]:   <features>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:   </features>
Jan 31 08:25:41 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:25:41 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:25:41 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:25:41 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/6a8b37e1-e818-43a4-bb83-940e1968472d_disk">
Jan 31 08:25:41 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:       </source>
Jan 31 08:25:41 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:25:41 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:25:41 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:25:41 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/6a8b37e1-e818-43a4-bb83-940e1968472d_disk.config">
Jan 31 08:25:41 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:       </source>
Jan 31 08:25:41 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:25:41 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:25:41 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:25:41 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/6a8b37e1-e818-43a4-bb83-940e1968472d/console.log" append="off"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <video>
Jan 31 08:25:41 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     </video>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:25:41 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:25:41 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:25:41 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:25:41 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:25:41 compute-2 nova_compute[226829]: </domain>
Jan 31 08:25:41 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:25:41 compute-2 nova_compute[226829]: 2026-01-31 08:25:41.736 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:41.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:42 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2644805062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:42 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2327237780' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:25:42 compute-2 ceph-mon[77282]: pgmap v2761: 305 pgs: 305 active+clean; 265 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 718 KiB/s wr, 75 op/s
Jan 31 08:25:42 compute-2 nova_compute[226829]: 2026-01-31 08:25:42.602 226833 DEBUG nova.virt.libvirt.driver [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:25:42 compute-2 nova_compute[226829]: 2026-01-31 08:25:42.602 226833 DEBUG nova.virt.libvirt.driver [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:25:42 compute-2 nova_compute[226829]: 2026-01-31 08:25:42.603 226833 INFO nova.virt.libvirt.driver [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Using config drive
Jan 31 08:25:42 compute-2 nova_compute[226829]: 2026-01-31 08:25:42.628 226833 DEBUG nova.storage.rbd_utils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] rbd image 6a8b37e1-e818-43a4-bb83-940e1968472d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:25:42 compute-2 nova_compute[226829]: 2026-01-31 08:25:42.927 226833 INFO nova.virt.libvirt.driver [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Creating config drive at /var/lib/nova/instances/6a8b37e1-e818-43a4-bb83-940e1968472d/disk.config
Jan 31 08:25:42 compute-2 nova_compute[226829]: 2026-01-31 08:25:42.931 226833 DEBUG oslo_concurrency.processutils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6a8b37e1-e818-43a4-bb83-940e1968472d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpyp51seh2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:25:43 compute-2 nova_compute[226829]: 2026-01-31 08:25:43.071 226833 DEBUG oslo_concurrency.processutils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6a8b37e1-e818-43a4-bb83-940e1968472d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpyp51seh2" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:25:43 compute-2 nova_compute[226829]: 2026-01-31 08:25:43.103 226833 DEBUG nova.storage.rbd_utils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] rbd image 6a8b37e1-e818-43a4-bb83-940e1968472d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:25:43 compute-2 nova_compute[226829]: 2026-01-31 08:25:43.107 226833 DEBUG oslo_concurrency.processutils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6a8b37e1-e818-43a4-bb83-940e1968472d/disk.config 6a8b37e1-e818-43a4-bb83-940e1968472d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:25:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:43.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:43 compute-2 nova_compute[226829]: 2026-01-31 08:25:43.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:43 compute-2 nova_compute[226829]: 2026-01-31 08:25:43.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:43 compute-2 nova_compute[226829]: 2026-01-31 08:25:43.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:43 compute-2 nova_compute[226829]: 2026-01-31 08:25:43.500 226833 DEBUG oslo_concurrency.processutils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6a8b37e1-e818-43a4-bb83-940e1968472d/disk.config 6a8b37e1-e818-43a4-bb83-940e1968472d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:25:43 compute-2 nova_compute[226829]: 2026-01-31 08:25:43.501 226833 INFO nova.virt.libvirt.driver [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Deleting local config drive /var/lib/nova/instances/6a8b37e1-e818-43a4-bb83-940e1968472d/disk.config because it was imported into RBD.
Jan 31 08:25:43 compute-2 systemd-machined[195142]: New machine qemu-73-instance-0000009e.
Jan 31 08:25:43 compute-2 systemd[1]: Started Virtual Machine qemu-73-instance-0000009e.
Jan 31 08:25:43 compute-2 nova_compute[226829]: 2026-01-31 08:25:43.900 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847943.8992608, 6a8b37e1-e818-43a4-bb83-940e1968472d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:25:43 compute-2 nova_compute[226829]: 2026-01-31 08:25:43.901 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] VM Resumed (Lifecycle Event)
Jan 31 08:25:43 compute-2 nova_compute[226829]: 2026-01-31 08:25:43.906 226833 DEBUG nova.compute.manager [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:25:43 compute-2 nova_compute[226829]: 2026-01-31 08:25:43.907 226833 DEBUG nova.virt.libvirt.driver [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:25:43 compute-2 nova_compute[226829]: 2026-01-31 08:25:43.913 226833 INFO nova.virt.libvirt.driver [-] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Instance spawned successfully.
Jan 31 08:25:43 compute-2 nova_compute[226829]: 2026-01-31 08:25:43.913 226833 DEBUG nova.virt.libvirt.driver [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:25:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:25:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:43.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:25:44 compute-2 ceph-mon[77282]: pgmap v2762: 305 pgs: 305 active+clean; 269 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 747 KiB/s wr, 75 op/s
Jan 31 08:25:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2109718097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:44 compute-2 nova_compute[226829]: 2026-01-31 08:25:44.197 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:25:44 compute-2 nova_compute[226829]: 2026-01-31 08:25:44.204 226833 DEBUG nova.virt.libvirt.driver [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:25:44 compute-2 nova_compute[226829]: 2026-01-31 08:25:44.205 226833 DEBUG nova.virt.libvirt.driver [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:25:44 compute-2 nova_compute[226829]: 2026-01-31 08:25:44.205 226833 DEBUG nova.virt.libvirt.driver [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:25:44 compute-2 nova_compute[226829]: 2026-01-31 08:25:44.206 226833 DEBUG nova.virt.libvirt.driver [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:25:44 compute-2 nova_compute[226829]: 2026-01-31 08:25:44.207 226833 DEBUG nova.virt.libvirt.driver [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:25:44 compute-2 nova_compute[226829]: 2026-01-31 08:25:44.208 226833 DEBUG nova.virt.libvirt.driver [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:25:44 compute-2 nova_compute[226829]: 2026-01-31 08:25:44.215 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:25:44 compute-2 nova_compute[226829]: 2026-01-31 08:25:44.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:44 compute-2 nova_compute[226829]: 2026-01-31 08:25:44.739 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:44.739 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:25:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:44.742 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:25:44 compute-2 nova_compute[226829]: 2026-01-31 08:25:44.783 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:25:44 compute-2 nova_compute[226829]: 2026-01-31 08:25:44.784 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847943.9008546, 6a8b37e1-e818-43a4-bb83-940e1968472d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:25:44 compute-2 nova_compute[226829]: 2026-01-31 08:25:44.784 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] VM Started (Lifecycle Event)
Jan 31 08:25:44 compute-2 nova_compute[226829]: 2026-01-31 08:25:44.840 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:25:44 compute-2 nova_compute[226829]: 2026-01-31 08:25:44.843 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:25:44 compute-2 nova_compute[226829]: 2026-01-31 08:25:44.846 226833 INFO nova.compute.manager [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Took 5.11 seconds to spawn the instance on the hypervisor.
Jan 31 08:25:44 compute-2 nova_compute[226829]: 2026-01-31 08:25:44.847 226833 DEBUG nova.compute.manager [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:25:44 compute-2 nova_compute[226829]: 2026-01-31 08:25:44.877 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:25:44 compute-2 nova_compute[226829]: 2026-01-31 08:25:44.911 226833 INFO nova.compute.manager [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Took 7.63 seconds to build instance.
Jan 31 08:25:44 compute-2 nova_compute[226829]: 2026-01-31 08:25:44.932 226833 DEBUG oslo_concurrency.lockutils [None req-928e0b14-9ed3-436f-90c3-c57ee62b5e41 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "6a8b37e1-e818-43a4-bb83-940e1968472d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2405147875' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:25:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2694520024' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:25:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:45.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:45 compute-2 nova_compute[226829]: 2026-01-31 08:25:45.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:45 compute-2 nova_compute[226829]: 2026-01-31 08:25:45.520 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:45 compute-2 nova_compute[226829]: 2026-01-31 08:25:45.521 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:45 compute-2 nova_compute[226829]: 2026-01-31 08:25:45.521 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:45 compute-2 nova_compute[226829]: 2026-01-31 08:25:45.522 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:25:45 compute-2 nova_compute[226829]: 2026-01-31 08:25:45.523 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:25:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:25:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:45.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:25:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:25:45 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/579878461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:45 compute-2 nova_compute[226829]: 2026-01-31 08:25:45.980 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:25:46 compute-2 podman[299120]: 2026-01-31 08:25:46.102075835 +0000 UTC m=+0.078838350 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:25:46 compute-2 nova_compute[226829]: 2026-01-31 08:25:46.176 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:25:46 compute-2 nova_compute[226829]: 2026-01-31 08:25:46.177 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:25:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1072790435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/579878461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:46 compute-2 ceph-mon[77282]: pgmap v2763: 305 pgs: 305 active+clean; 325 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.3 MiB/s wr, 152 op/s
Jan 31 08:25:46 compute-2 nova_compute[226829]: 2026-01-31 08:25:46.310 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:25:46 compute-2 nova_compute[226829]: 2026-01-31 08:25:46.311 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4034MB free_disk=20.958667755126953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:25:46 compute-2 nova_compute[226829]: 2026-01-31 08:25:46.311 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:46 compute-2 nova_compute[226829]: 2026-01-31 08:25:46.311 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:46 compute-2 nova_compute[226829]: 2026-01-31 08:25:46.358 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:46 compute-2 nova_compute[226829]: 2026-01-31 08:25:46.630 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 6a8b37e1-e818-43a4-bb83-940e1968472d actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:25:46 compute-2 nova_compute[226829]: 2026-01-31 08:25:46.631 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:25:46 compute-2 nova_compute[226829]: 2026-01-31 08:25:46.631 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:25:46 compute-2 nova_compute[226829]: 2026-01-31 08:25:46.688 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:25:46 compute-2 nova_compute[226829]: 2026-01-31 08:25:46.738 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:47 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:25:47 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1443221660' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:47 compute-2 nova_compute[226829]: 2026-01-31 08:25:47.132 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:25:47 compute-2 nova_compute[226829]: 2026-01-31 08:25:47.137 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:25:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1443221660' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:25:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:47.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:25:47 compute-2 nova_compute[226829]: 2026-01-31 08:25:47.489 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:25:47 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:25:47.746 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:25:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:47.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:48 compute-2 nova_compute[226829]: 2026-01-31 08:25:48.209 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:25:48 compute-2 nova_compute[226829]: 2026-01-31 08:25:48.210 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1949229944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:48 compute-2 ceph-mon[77282]: pgmap v2764: 305 pgs: 305 active+clean; 371 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.6 MiB/s wr, 216 op/s
Jan 31 08:25:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:49.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:49.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:50 compute-2 sudo[299171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:25:50 compute-2 sudo[299171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:50 compute-2 sudo[299171]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:50 compute-2 sudo[299196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:25:50 compute-2 sudo[299196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:50 compute-2 sudo[299196]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:50 compute-2 ceph-mon[77282]: pgmap v2765: 305 pgs: 305 active+clean; 390 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 6.3 MiB/s wr, 276 op/s
Jan 31 08:25:51 compute-2 nova_compute[226829]: 2026-01-31 08:25:51.360 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:51.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3045328600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2226192119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:51 compute-2 nova_compute[226829]: 2026-01-31 08:25:51.741 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:51.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:52 compute-2 ceph-mon[77282]: pgmap v2766: 305 pgs: 305 active+clean; 405 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 6.7 MiB/s wr, 292 op/s
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #130. Immutable memtables: 0.
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:25:52.673375) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 130
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847952673454, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 2039, "num_deletes": 262, "total_data_size": 4385333, "memory_usage": 4455728, "flush_reason": "Manual Compaction"}
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #131: started
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847952695199, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 131, "file_size": 2866418, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63856, "largest_seqno": 65890, "table_properties": {"data_size": 2858202, "index_size": 4901, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 18167, "raw_average_key_size": 20, "raw_value_size": 2841245, "raw_average_value_size": 3203, "num_data_blocks": 213, "num_entries": 887, "num_filter_entries": 887, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847804, "oldest_key_time": 1769847804, "file_creation_time": 1769847952, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 22004 microseconds, and 10601 cpu microseconds.
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:25:52.695373) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #131: 2866418 bytes OK
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:25:52.695419) [db/memtable_list.cc:519] [default] Level-0 commit table #131 started
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:25:52.697559) [db/memtable_list.cc:722] [default] Level-0 commit table #131: memtable #1 done
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:25:52.697576) EVENT_LOG_v1 {"time_micros": 1769847952697570, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:25:52.697596) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 4376138, prev total WAL file size 4376138, number of live WAL files 2.
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000127.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:25:52.698569) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323631' seq:72057594037927935, type:22 .. '6C6F676D0032353134' seq:0, type:0; will stop at (end)
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [131(2799KB)], [129(8980KB)]
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847952698640, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [131], "files_L6": [129], "score": -1, "input_data_size": 12062018, "oldest_snapshot_seqno": -1}
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #132: 8801 keys, 11909443 bytes, temperature: kUnknown
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847952781129, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 132, "file_size": 11909443, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11852075, "index_size": 34279, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22021, "raw_key_size": 231714, "raw_average_key_size": 26, "raw_value_size": 11696660, "raw_average_value_size": 1329, "num_data_blocks": 1318, "num_entries": 8801, "num_filter_entries": 8801, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769847952, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 132, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:25:52.781514) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 11909443 bytes
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:25:52.782962) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.0 rd, 144.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 8.8 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(8.4) write-amplify(4.2) OK, records in: 9342, records dropped: 541 output_compression: NoCompression
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:25:52.782992) EVENT_LOG_v1 {"time_micros": 1769847952782979, "job": 82, "event": "compaction_finished", "compaction_time_micros": 82639, "compaction_time_cpu_micros": 40565, "output_level": 6, "num_output_files": 1, "total_output_size": 11909443, "num_input_records": 9342, "num_output_records": 8801, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847952783694, "job": 82, "event": "table_file_deletion", "file_number": 131}
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000129.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847952784962, "job": 82, "event": "table_file_deletion", "file_number": 129}
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:25:52.698490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:25:52.785055) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:25:52.785061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:25:52.785064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:25:52.785067) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:25:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:25:52.785070) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:25:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:25:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3522562724' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:25:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:25:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3522562724' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:25:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:25:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:53.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:25:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3522562724' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:25:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3522562724' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:25:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:53.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:55 compute-2 ceph-mon[77282]: pgmap v2767: 305 pgs: 305 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 6.8 MiB/s wr, 292 op/s
Jan 31 08:25:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:55.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:55.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:56 compute-2 nova_compute[226829]: 2026-01-31 08:25:56.363 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:56 compute-2 ceph-mon[77282]: pgmap v2768: 305 pgs: 305 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 6.8 MiB/s wr, 292 op/s
Jan 31 08:25:56 compute-2 nova_compute[226829]: 2026-01-31 08:25:56.743 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:57.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3704513804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:25:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:57.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:58 compute-2 podman[299225]: 2026-01-31 08:25:58.181206054 +0000 UTC m=+0.051078677 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 08:25:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2656717281' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:25:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:59.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:25:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:59.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:00 compute-2 ceph-mon[77282]: pgmap v2769: 305 pgs: 305 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.8 MiB/s wr, 214 op/s
Jan 31 08:26:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:01 compute-2 nova_compute[226829]: 2026-01-31 08:26:01.366 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:01.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:01 compute-2 ceph-mon[77282]: pgmap v2770: 305 pgs: 305 active+clean; 449 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.8 MiB/s wr, 154 op/s
Jan 31 08:26:01 compute-2 nova_compute[226829]: 2026-01-31 08:26:01.747 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:26:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:01.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:26:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:26:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:03.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:26:03 compute-2 ceph-mon[77282]: pgmap v2771: 305 pgs: 305 active+clean; 467 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 845 KiB/s rd, 5.1 MiB/s wr, 116 op/s
Jan 31 08:26:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:03.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:04 compute-2 nova_compute[226829]: 2026-01-31 08:26:04.211 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:04 compute-2 nova_compute[226829]: 2026-01-31 08:26:04.212 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:26:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:05.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:05 compute-2 ceph-mon[77282]: pgmap v2772: 305 pgs: 305 active+clean; 476 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 542 KiB/s rd, 4.9 MiB/s wr, 122 op/s
Jan 31 08:26:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:05.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:06 compute-2 nova_compute[226829]: 2026-01-31 08:26:06.368 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:06 compute-2 nova_compute[226829]: 2026-01-31 08:26:06.749 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:06.903 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:06.903 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:06.903 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:07.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:07 compute-2 ceph-mon[77282]: pgmap v2773: 305 pgs: 305 active+clean; 414 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 602 KiB/s rd, 4.3 MiB/s wr, 153 op/s
Jan 31 08:26:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1500801380' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2069233769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:26:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:07.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:26:08 compute-2 nova_compute[226829]: 2026-01-31 08:26:08.731 226833 DEBUG oslo_concurrency.lockutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "651d6b65-a0ee-4942-bf60-88b037eb6508" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:08 compute-2 nova_compute[226829]: 2026-01-31 08:26:08.731 226833 DEBUG oslo_concurrency.lockutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:08 compute-2 nova_compute[226829]: 2026-01-31 08:26:08.770 226833 DEBUG nova.compute.manager [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:26:08 compute-2 nova_compute[226829]: 2026-01-31 08:26:08.963 226833 DEBUG oslo_concurrency.lockutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:08 compute-2 nova_compute[226829]: 2026-01-31 08:26:08.964 226833 DEBUG oslo_concurrency.lockutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:08 compute-2 nova_compute[226829]: 2026-01-31 08:26:08.970 226833 DEBUG nova.virt.hardware [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:26:08 compute-2 nova_compute[226829]: 2026-01-31 08:26:08.970 226833 INFO nova.compute.claims [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:26:09 compute-2 nova_compute[226829]: 2026-01-31 08:26:09.248 226833 DEBUG oslo_concurrency.processutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:26:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:09.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:26:09 compute-2 nova_compute[226829]: 2026-01-31 08:26:09.717 226833 DEBUG oslo_concurrency.processutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:09 compute-2 nova_compute[226829]: 2026-01-31 08:26:09.723 226833 DEBUG nova.compute.provider_tree [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:26:09 compute-2 ceph-mon[77282]: pgmap v2774: 305 pgs: 305 active+clean; 417 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 618 KiB/s rd, 4.4 MiB/s wr, 175 op/s
Jan 31 08:26:09 compute-2 nova_compute[226829]: 2026-01-31 08:26:09.801 226833 DEBUG nova.scheduler.client.report [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:26:09 compute-2 nova_compute[226829]: 2026-01-31 08:26:09.868 226833 DEBUG oslo_concurrency.lockutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:09 compute-2 nova_compute[226829]: 2026-01-31 08:26:09.869 226833 DEBUG nova.compute.manager [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:26:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:09.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:10 compute-2 nova_compute[226829]: 2026-01-31 08:26:10.012 226833 DEBUG nova.compute.manager [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:26:10 compute-2 nova_compute[226829]: 2026-01-31 08:26:10.013 226833 DEBUG nova.network.neutron [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:26:10 compute-2 nova_compute[226829]: 2026-01-31 08:26:10.052 226833 INFO nova.virt.libvirt.driver [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:26:10 compute-2 nova_compute[226829]: 2026-01-31 08:26:10.077 226833 DEBUG nova.compute.manager [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:26:10 compute-2 nova_compute[226829]: 2026-01-31 08:26:10.195 226833 DEBUG nova.compute.manager [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:26:10 compute-2 nova_compute[226829]: 2026-01-31 08:26:10.196 226833 DEBUG nova.virt.libvirt.driver [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:26:10 compute-2 nova_compute[226829]: 2026-01-31 08:26:10.196 226833 INFO nova.virt.libvirt.driver [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Creating image(s)
Jan 31 08:26:10 compute-2 nova_compute[226829]: 2026-01-31 08:26:10.219 226833 DEBUG nova.storage.rbd_utils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 651d6b65-a0ee-4942-bf60-88b037eb6508_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:26:10 compute-2 nova_compute[226829]: 2026-01-31 08:26:10.244 226833 DEBUG nova.storage.rbd_utils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 651d6b65-a0ee-4942-bf60-88b037eb6508_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:26:10 compute-2 nova_compute[226829]: 2026-01-31 08:26:10.266 226833 DEBUG nova.storage.rbd_utils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 651d6b65-a0ee-4942-bf60-88b037eb6508_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:26:10 compute-2 nova_compute[226829]: 2026-01-31 08:26:10.269 226833 DEBUG oslo_concurrency.processutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:10 compute-2 nova_compute[226829]: 2026-01-31 08:26:10.330 226833 DEBUG oslo_concurrency.processutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:10 compute-2 nova_compute[226829]: 2026-01-31 08:26:10.331 226833 DEBUG oslo_concurrency.lockutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:10 compute-2 nova_compute[226829]: 2026-01-31 08:26:10.332 226833 DEBUG oslo_concurrency.lockutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:10 compute-2 nova_compute[226829]: 2026-01-31 08:26:10.332 226833 DEBUG oslo_concurrency.lockutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:10 compute-2 nova_compute[226829]: 2026-01-31 08:26:10.371 226833 DEBUG nova.storage.rbd_utils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 651d6b65-a0ee-4942-bf60-88b037eb6508_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:26:10 compute-2 nova_compute[226829]: 2026-01-31 08:26:10.376 226833 DEBUG oslo_concurrency.processutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 651d6b65-a0ee-4942-bf60-88b037eb6508_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:10 compute-2 nova_compute[226829]: 2026-01-31 08:26:10.408 226833 DEBUG nova.policy [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4d0e9d918b4041fabd5ded633b4cf404', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9710f0cf77d84353ae13fa47922b085d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:26:10 compute-2 sudo[299364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:26:10 compute-2 sudo[299364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:10 compute-2 sudo[299364]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:10 compute-2 sudo[299389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:26:10 compute-2 sudo[299389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:10 compute-2 sudo[299389]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:10 compute-2 nova_compute[226829]: 2026-01-31 08:26:10.768 226833 DEBUG oslo_concurrency.processutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 651d6b65-a0ee-4942-bf60-88b037eb6508_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2906156154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:10 compute-2 nova_compute[226829]: 2026-01-31 08:26:10.888 226833 DEBUG nova.storage.rbd_utils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] resizing rbd image 651d6b65-a0ee-4942-bf60-88b037eb6508_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:26:11 compute-2 nova_compute[226829]: 2026-01-31 08:26:11.024 226833 DEBUG nova.objects.instance [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'migration_context' on Instance uuid 651d6b65-a0ee-4942-bf60-88b037eb6508 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:26:11 compute-2 nova_compute[226829]: 2026-01-31 08:26:11.095 226833 DEBUG nova.virt.libvirt.driver [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:26:11 compute-2 nova_compute[226829]: 2026-01-31 08:26:11.095 226833 DEBUG nova.virt.libvirt.driver [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Ensure instance console log exists: /var/lib/nova/instances/651d6b65-a0ee-4942-bf60-88b037eb6508/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:26:11 compute-2 nova_compute[226829]: 2026-01-31 08:26:11.096 226833 DEBUG oslo_concurrency.lockutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:11 compute-2 nova_compute[226829]: 2026-01-31 08:26:11.096 226833 DEBUG oslo_concurrency.lockutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:11 compute-2 nova_compute[226829]: 2026-01-31 08:26:11.096 226833 DEBUG oslo_concurrency.lockutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:11 compute-2 nova_compute[226829]: 2026-01-31 08:26:11.370 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:11.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:11 compute-2 nova_compute[226829]: 2026-01-31 08:26:11.753 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:11 compute-2 ceph-mon[77282]: pgmap v2775: 305 pgs: 305 active+clean; 430 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 4.7 MiB/s wr, 205 op/s
Jan 31 08:26:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:11.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2260443752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:13 compute-2 nova_compute[226829]: 2026-01-31 08:26:13.262 226833 DEBUG nova.network.neutron [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Successfully created port: ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:26:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:13.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:13 compute-2 ceph-mon[77282]: pgmap v2776: 305 pgs: 305 active+clean; 463 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.5 MiB/s wr, 220 op/s
Jan 31 08:26:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:26:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:14.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:26:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:15.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:16.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:16 compute-2 ceph-mon[77282]: pgmap v2777: 305 pgs: 305 active+clean; 470 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.0 MiB/s wr, 226 op/s
Jan 31 08:26:16 compute-2 nova_compute[226829]: 2026-01-31 08:26:16.372 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:16 compute-2 nova_compute[226829]: 2026-01-31 08:26:16.754 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:16 compute-2 nova_compute[226829]: 2026-01-31 08:26:16.885 226833 DEBUG nova.network.neutron [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Successfully updated port: ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:26:16 compute-2 nova_compute[226829]: 2026-01-31 08:26:16.909 226833 DEBUG oslo_concurrency.lockutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:26:16 compute-2 nova_compute[226829]: 2026-01-31 08:26:16.910 226833 DEBUG oslo_concurrency.lockutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquired lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:26:16 compute-2 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:26:16 compute-2 nova_compute[226829]: 2026-01-31 08:26:16.911 226833 DEBUG nova.network.neutron [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:26:16 compute-2 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:26:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3581095025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:17 compute-2 podman[299493]: 2026-01-31 08:26:17.222831235 +0000 UTC m=+0.090935018 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:26:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:17.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:18.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:18 compute-2 nova_compute[226829]: 2026-01-31 08:26:18.016 226833 DEBUG nova.network.neutron [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:26:18 compute-2 ceph-mon[77282]: pgmap v2778: 305 pgs: 305 active+clean; 486 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 4.4 MiB/s wr, 277 op/s
Jan 31 08:26:18 compute-2 nova_compute[226829]: 2026-01-31 08:26:18.822 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:19 compute-2 ceph-mon[77282]: pgmap v2779: 305 pgs: 305 active+clean; 498 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.3 MiB/s wr, 255 op/s
Jan 31 08:26:19 compute-2 nova_compute[226829]: 2026-01-31 08:26:19.377 226833 DEBUG nova.compute.manager [req-6567d7e4-8f50-4a15-bf30-dee090558bae req-016be00e-b8b1-4e2c-94c1-012f3ae3ccd0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received event network-changed-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:26:19 compute-2 nova_compute[226829]: 2026-01-31 08:26:19.378 226833 DEBUG nova.compute.manager [req-6567d7e4-8f50-4a15-bf30-dee090558bae req-016be00e-b8b1-4e2c-94c1-012f3ae3ccd0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Refreshing instance network info cache due to event network-changed-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:26:19 compute-2 nova_compute[226829]: 2026-01-31 08:26:19.378 226833 DEBUG oslo_concurrency.lockutils [req-6567d7e4-8f50-4a15-bf30-dee090558bae req-016be00e-b8b1-4e2c-94c1-012f3ae3ccd0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:26:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:19.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:20.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:20 compute-2 nova_compute[226829]: 2026-01-31 08:26:20.227 226833 DEBUG oslo_concurrency.lockutils [None req-7082fa56-04c9-488a-aa99-040534d3de6d 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Acquiring lock "6a8b37e1-e818-43a4-bb83-940e1968472d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:20 compute-2 nova_compute[226829]: 2026-01-31 08:26:20.228 226833 DEBUG oslo_concurrency.lockutils [None req-7082fa56-04c9-488a-aa99-040534d3de6d 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "6a8b37e1-e818-43a4-bb83-940e1968472d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:20 compute-2 nova_compute[226829]: 2026-01-31 08:26:20.228 226833 DEBUG oslo_concurrency.lockutils [None req-7082fa56-04c9-488a-aa99-040534d3de6d 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Acquiring lock "6a8b37e1-e818-43a4-bb83-940e1968472d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:20 compute-2 nova_compute[226829]: 2026-01-31 08:26:20.229 226833 DEBUG oslo_concurrency.lockutils [None req-7082fa56-04c9-488a-aa99-040534d3de6d 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "6a8b37e1-e818-43a4-bb83-940e1968472d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:20 compute-2 nova_compute[226829]: 2026-01-31 08:26:20.230 226833 DEBUG oslo_concurrency.lockutils [None req-7082fa56-04c9-488a-aa99-040534d3de6d 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "6a8b37e1-e818-43a4-bb83-940e1968472d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:20 compute-2 nova_compute[226829]: 2026-01-31 08:26:20.232 226833 INFO nova.compute.manager [None req-7082fa56-04c9-488a-aa99-040534d3de6d 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Terminating instance
Jan 31 08:26:20 compute-2 nova_compute[226829]: 2026-01-31 08:26:20.233 226833 DEBUG oslo_concurrency.lockutils [None req-7082fa56-04c9-488a-aa99-040534d3de6d 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Acquiring lock "refresh_cache-6a8b37e1-e818-43a4-bb83-940e1968472d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:26:20 compute-2 nova_compute[226829]: 2026-01-31 08:26:20.234 226833 DEBUG oslo_concurrency.lockutils [None req-7082fa56-04c9-488a-aa99-040534d3de6d 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Acquired lock "refresh_cache-6a8b37e1-e818-43a4-bb83-940e1968472d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:26:20 compute-2 nova_compute[226829]: 2026-01-31 08:26:20.234 226833 DEBUG nova.network.neutron [None req-7082fa56-04c9-488a-aa99-040534d3de6d 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:26:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:21 compute-2 nova_compute[226829]: 2026-01-31 08:26:21.374 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:21 compute-2 ceph-mon[77282]: pgmap v2780: 305 pgs: 305 active+clean; 497 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.2 MiB/s wr, 233 op/s
Jan 31 08:26:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:21.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:21 compute-2 nova_compute[226829]: 2026-01-31 08:26:21.756 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:21 compute-2 nova_compute[226829]: 2026-01-31 08:26:21.908 226833 DEBUG nova.network.neutron [None req-7082fa56-04c9-488a-aa99-040534d3de6d 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:26:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:26:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:22.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:26:22 compute-2 nova_compute[226829]: 2026-01-31 08:26:22.620 226833 DEBUG nova.network.neutron [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Updating instance_info_cache with network_info: [{"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:26:22 compute-2 nova_compute[226829]: 2026-01-31 08:26:22.636 226833 DEBUG nova.network.neutron [None req-7082fa56-04c9-488a-aa99-040534d3de6d 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:26:22 compute-2 nova_compute[226829]: 2026-01-31 08:26:22.750 226833 DEBUG oslo_concurrency.lockutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Releasing lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:26:22 compute-2 nova_compute[226829]: 2026-01-31 08:26:22.751 226833 DEBUG nova.compute.manager [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Instance network_info: |[{"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:26:22 compute-2 nova_compute[226829]: 2026-01-31 08:26:22.752 226833 DEBUG oslo_concurrency.lockutils [req-6567d7e4-8f50-4a15-bf30-dee090558bae req-016be00e-b8b1-4e2c-94c1-012f3ae3ccd0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:26:22 compute-2 nova_compute[226829]: 2026-01-31 08:26:22.752 226833 DEBUG nova.network.neutron [req-6567d7e4-8f50-4a15-bf30-dee090558bae req-016be00e-b8b1-4e2c-94c1-012f3ae3ccd0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Refreshing network info cache for port ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:26:22 compute-2 nova_compute[226829]: 2026-01-31 08:26:22.757 226833 DEBUG nova.virt.libvirt.driver [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Start _get_guest_xml network_info=[{"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:26:22 compute-2 nova_compute[226829]: 2026-01-31 08:26:22.764 226833 WARNING nova.virt.libvirt.driver [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:26:22 compute-2 nova_compute[226829]: 2026-01-31 08:26:22.777 226833 DEBUG nova.virt.libvirt.host [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:26:22 compute-2 nova_compute[226829]: 2026-01-31 08:26:22.778 226833 DEBUG nova.virt.libvirt.host [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:26:22 compute-2 nova_compute[226829]: 2026-01-31 08:26:22.794 226833 DEBUG nova.virt.libvirt.host [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:26:22 compute-2 nova_compute[226829]: 2026-01-31 08:26:22.795 226833 DEBUG nova.virt.libvirt.host [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:26:22 compute-2 nova_compute[226829]: 2026-01-31 08:26:22.797 226833 DEBUG nova.virt.libvirt.driver [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:26:22 compute-2 nova_compute[226829]: 2026-01-31 08:26:22.798 226833 DEBUG nova.virt.hardware [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:26:22 compute-2 nova_compute[226829]: 2026-01-31 08:26:22.798 226833 DEBUG nova.virt.hardware [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:26:22 compute-2 nova_compute[226829]: 2026-01-31 08:26:22.799 226833 DEBUG nova.virt.hardware [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:26:22 compute-2 nova_compute[226829]: 2026-01-31 08:26:22.799 226833 DEBUG nova.virt.hardware [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:26:22 compute-2 nova_compute[226829]: 2026-01-31 08:26:22.799 226833 DEBUG nova.virt.hardware [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:26:22 compute-2 nova_compute[226829]: 2026-01-31 08:26:22.800 226833 DEBUG nova.virt.hardware [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:26:22 compute-2 nova_compute[226829]: 2026-01-31 08:26:22.800 226833 DEBUG nova.virt.hardware [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:26:22 compute-2 nova_compute[226829]: 2026-01-31 08:26:22.801 226833 DEBUG nova.virt.hardware [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:26:22 compute-2 nova_compute[226829]: 2026-01-31 08:26:22.801 226833 DEBUG nova.virt.hardware [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:26:22 compute-2 nova_compute[226829]: 2026-01-31 08:26:22.801 226833 DEBUG nova.virt.hardware [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:26:22 compute-2 nova_compute[226829]: 2026-01-31 08:26:22.802 226833 DEBUG nova.virt.hardware [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:26:22 compute-2 nova_compute[226829]: 2026-01-31 08:26:22.807 226833 DEBUG oslo_concurrency.processutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:22 compute-2 nova_compute[226829]: 2026-01-31 08:26:22.841 226833 DEBUG oslo_concurrency.lockutils [None req-7082fa56-04c9-488a-aa99-040534d3de6d 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Releasing lock "refresh_cache-6a8b37e1-e818-43a4-bb83-940e1968472d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:26:22 compute-2 nova_compute[226829]: 2026-01-31 08:26:22.842 226833 DEBUG nova.compute.manager [None req-7082fa56-04c9-488a-aa99-040534d3de6d 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:26:22 compute-2 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000009e.scope: Deactivated successfully.
Jan 31 08:26:22 compute-2 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000009e.scope: Consumed 13.930s CPU time.
Jan 31 08:26:22 compute-2 systemd-machined[195142]: Machine qemu-73-instance-0000009e terminated.
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.076 226833 INFO nova.virt.libvirt.driver [-] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Instance destroyed successfully.
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.078 226833 DEBUG nova.objects.instance [None req-7082fa56-04c9-488a-aa99-040534d3de6d 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lazy-loading 'resources' on Instance uuid 6a8b37e1-e818-43a4-bb83-940e1968472d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:26:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:26:23 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3860694169' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.276 226833 DEBUG oslo_concurrency.processutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.308 226833 DEBUG nova.storage.rbd_utils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 651d6b65-a0ee-4942-bf60-88b037eb6508_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.312 226833 DEBUG oslo_concurrency.processutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:23.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:23 compute-2 ceph-mon[77282]: pgmap v2781: 305 pgs: 305 active+clean; 497 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.3 MiB/s wr, 197 op/s
Jan 31 08:26:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3860694169' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:26:23 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3903398586' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.806 226833 DEBUG oslo_concurrency.processutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.808 226833 DEBUG nova.virt.libvirt.vif [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:26:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2107337279',display_name='tempest-TestNetworkAdvancedServerOps-server-2107337279',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2107337279',id=161,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWmLoXJJT8HJ06ToetETaDpi1L/xOsFHTQftPXbFuOs01yOf7k2Y65Sy9zq5BZjupL2Q9fI+9QOPb9ecuesAa9df6vRKXVMZCeU6kML4//7cHw0FciNooD1B0Aw/wAsgQ==',key_name='tempest-TestNetworkAdvancedServerOps-1409064792',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-ka16rdmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:26:10Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=651d6b65-a0ee-4942-bf60-88b037eb6508,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.809 226833 DEBUG nova.network.os_vif_util [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.811 226833 DEBUG nova.network.os_vif_util [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:f3:24,bridge_name='br-int',has_traffic_filtering=True,id=ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f,network=Network(cc669a9b-1a99-4cea-8b35-6d932fb2087c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec7fbb6b-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.813 226833 DEBUG nova.objects.instance [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'pci_devices' on Instance uuid 651d6b65-a0ee-4942-bf60-88b037eb6508 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.916 226833 DEBUG nova.virt.libvirt.driver [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:26:23 compute-2 nova_compute[226829]:   <uuid>651d6b65-a0ee-4942-bf60-88b037eb6508</uuid>
Jan 31 08:26:23 compute-2 nova_compute[226829]:   <name>instance-000000a1</name>
Jan 31 08:26:23 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:26:23 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:26:23 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-2107337279</nova:name>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:26:22</nova:creationTime>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:26:23 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:26:23 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:26:23 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:26:23 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:26:23 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:26:23 compute-2 nova_compute[226829]:         <nova:user uuid="4d0e9d918b4041fabd5ded633b4cf404">tempest-TestNetworkAdvancedServerOps-483180749-project-member</nova:user>
Jan 31 08:26:23 compute-2 nova_compute[226829]:         <nova:project uuid="9710f0cf77d84353ae13fa47922b085d">tempest-TestNetworkAdvancedServerOps-483180749</nova:project>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:26:23 compute-2 nova_compute[226829]:         <nova:port uuid="ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f">
Jan 31 08:26:23 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:26:23 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:26:23 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <system>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <entry name="serial">651d6b65-a0ee-4942-bf60-88b037eb6508</entry>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <entry name="uuid">651d6b65-a0ee-4942-bf60-88b037eb6508</entry>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     </system>
Jan 31 08:26:23 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:26:23 compute-2 nova_compute[226829]:   <os>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:   </os>
Jan 31 08:26:23 compute-2 nova_compute[226829]:   <features>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:   </features>
Jan 31 08:26:23 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:26:23 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:26:23 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/651d6b65-a0ee-4942-bf60-88b037eb6508_disk">
Jan 31 08:26:23 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       </source>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:26:23 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/651d6b65-a0ee-4942-bf60-88b037eb6508_disk.config">
Jan 31 08:26:23 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       </source>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:26:23 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:75:f3:24"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <target dev="tapec7fbb6b-9a"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/651d6b65-a0ee-4942-bf60-88b037eb6508/console.log" append="off"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <video>
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     </video>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:26:23 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:26:23 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:26:23 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:26:23 compute-2 nova_compute[226829]: </domain>
Jan 31 08:26:23 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.917 226833 DEBUG nova.compute.manager [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Preparing to wait for external event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.917 226833 DEBUG oslo_concurrency.lockutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.918 226833 DEBUG oslo_concurrency.lockutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.918 226833 DEBUG oslo_concurrency.lockutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.919 226833 DEBUG nova.virt.libvirt.vif [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:26:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2107337279',display_name='tempest-TestNetworkAdvancedServerOps-server-2107337279',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2107337279',id=161,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWmLoXJJT8HJ06ToetETaDpi1L/xOsFHTQftPXbFuOs01yOf7k2Y65Sy9zq5BZjupL2Q9fI+9QOPb9ecuesAa9df6vRKXVMZCeU6kML4//7cHw0FciNooD1B0Aw/wAsgQ==',key_name='tempest-TestNetworkAdvancedServerOps-1409064792',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-ka16rdmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:26:10Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=651d6b65-a0ee-4942-bf60-88b037eb6508,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.920 226833 DEBUG nova.network.os_vif_util [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.921 226833 DEBUG nova.network.os_vif_util [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:f3:24,bridge_name='br-int',has_traffic_filtering=True,id=ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f,network=Network(cc669a9b-1a99-4cea-8b35-6d932fb2087c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec7fbb6b-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.922 226833 DEBUG os_vif [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:f3:24,bridge_name='br-int',has_traffic_filtering=True,id=ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f,network=Network(cc669a9b-1a99-4cea-8b35-6d932fb2087c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec7fbb6b-9a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.923 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.923 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.924 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.934 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.934 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec7fbb6b-9a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.935 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapec7fbb6b-9a, col_values=(('external_ids', {'iface-id': 'ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:75:f3:24', 'vm-uuid': '651d6b65-a0ee-4942-bf60-88b037eb6508'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.936 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:23 compute-2 NetworkManager[48999]: <info>  [1769847983.9397] manager: (tapec7fbb6b-9a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/317)
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.939 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.944 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:23 compute-2 nova_compute[226829]: 2026-01-31 08:26:23.946 226833 INFO os_vif [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:f3:24,bridge_name='br-int',has_traffic_filtering=True,id=ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f,network=Network(cc669a9b-1a99-4cea-8b35-6d932fb2087c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec7fbb6b-9a')
Jan 31 08:26:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:24.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:24 compute-2 nova_compute[226829]: 2026-01-31 08:26:24.192 226833 DEBUG nova.virt.libvirt.driver [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:26:24 compute-2 nova_compute[226829]: 2026-01-31 08:26:24.193 226833 DEBUG nova.virt.libvirt.driver [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:26:24 compute-2 nova_compute[226829]: 2026-01-31 08:26:24.193 226833 DEBUG nova.virt.libvirt.driver [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No VIF found with MAC fa:16:3e:75:f3:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:26:24 compute-2 nova_compute[226829]: 2026-01-31 08:26:24.194 226833 INFO nova.virt.libvirt.driver [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Using config drive
Jan 31 08:26:24 compute-2 nova_compute[226829]: 2026-01-31 08:26:24.219 226833 DEBUG nova.storage.rbd_utils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 651d6b65-a0ee-4942-bf60-88b037eb6508_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:26:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3903398586' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:24 compute-2 ceph-osd[79942]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.087 226833 INFO nova.virt.libvirt.driver [None req-7082fa56-04c9-488a-aa99-040534d3de6d 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Deleting instance files /var/lib/nova/instances/6a8b37e1-e818-43a4-bb83-940e1968472d_del
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.088 226833 INFO nova.virt.libvirt.driver [None req-7082fa56-04c9-488a-aa99-040534d3de6d 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Deletion of /var/lib/nova/instances/6a8b37e1-e818-43a4-bb83-940e1968472d_del complete
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.173 226833 INFO nova.virt.libvirt.driver [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Creating config drive at /var/lib/nova/instances/651d6b65-a0ee-4942-bf60-88b037eb6508/disk.config
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.177 226833 DEBUG oslo_concurrency.processutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/651d6b65-a0ee-4942-bf60-88b037eb6508/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmplxixbtcl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.219 226833 INFO nova.compute.manager [None req-7082fa56-04c9-488a-aa99-040534d3de6d 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Took 2.38 seconds to destroy the instance on the hypervisor.
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.220 226833 DEBUG oslo.service.loopingcall [None req-7082fa56-04c9-488a-aa99-040534d3de6d 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.220 226833 DEBUG nova.compute.manager [-] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.220 226833 DEBUG nova.network.neutron [-] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.306 226833 DEBUG oslo_concurrency.processutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/651d6b65-a0ee-4942-bf60-88b037eb6508/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmplxixbtcl" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.334 226833 DEBUG nova.storage.rbd_utils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 651d6b65-a0ee-4942-bf60-88b037eb6508_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.337 226833 DEBUG oslo_concurrency.processutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/651d6b65-a0ee-4942-bf60-88b037eb6508/disk.config 651d6b65-a0ee-4942-bf60-88b037eb6508_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.464 226833 DEBUG oslo_concurrency.processutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/651d6b65-a0ee-4942-bf60-88b037eb6508/disk.config 651d6b65-a0ee-4942-bf60-88b037eb6508_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.465 226833 INFO nova.virt.libvirt.driver [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Deleting local config drive /var/lib/nova/instances/651d6b65-a0ee-4942-bf60-88b037eb6508/disk.config because it was imported into RBD.
Jan 31 08:26:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:26:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:25.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:26:25 compute-2 kernel: tapec7fbb6b-9a: entered promiscuous mode
Jan 31 08:26:25 compute-2 NetworkManager[48999]: <info>  [1769847985.5184] manager: (tapec7fbb6b-9a): new Tun device (/org/freedesktop/NetworkManager/Devices/318)
Jan 31 08:26:25 compute-2 systemd-udevd[299544]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.519 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:25 compute-2 ovn_controller[133834]: 2026-01-31T08:26:25Z|00645|binding|INFO|Claiming lport ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f for this chassis.
Jan 31 08:26:25 compute-2 ovn_controller[133834]: 2026-01-31T08:26:25Z|00646|binding|INFO|ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f: Claiming fa:16:3e:75:f3:24 10.100.0.11
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.525 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.528 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.531 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:25 compute-2 NetworkManager[48999]: <info>  [1769847985.5375] device (tapec7fbb6b-9a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:26:25 compute-2 NetworkManager[48999]: <info>  [1769847985.5389] device (tapec7fbb6b-9a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:25.548 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:f3:24 10.100.0.11'], port_security=['fa:16:3e:75:f3:24 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '651d6b65-a0ee-4942-bf60-88b037eb6508', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cc669a9b-1a99-4cea-8b35-6d932fb2087c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '53c8d6c3-3fc8-4e05-8f45-013b15b35751', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eeea185c-a70e-4e33-a1d7-88e2fb6e75b6, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:25.549 143841 INFO neutron.agent.ovn.metadata.agent [-] Port ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f in datapath cc669a9b-1a99-4cea-8b35-6d932fb2087c bound to our chassis
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:25.551 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cc669a9b-1a99-4cea-8b35-6d932fb2087c
Jan 31 08:26:25 compute-2 systemd-machined[195142]: New machine qemu-74-instance-000000a1.
Jan 31 08:26:25 compute-2 ovn_controller[133834]: 2026-01-31T08:26:25Z|00647|binding|INFO|Setting lport ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f ovn-installed in OVS
Jan 31 08:26:25 compute-2 ovn_controller[133834]: 2026-01-31T08:26:25Z|00648|binding|INFO|Setting lport ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f up in Southbound
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.562 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:25 compute-2 systemd[1]: Started Virtual Machine qemu-74-instance-000000a1.
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:25.568 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[852ac448-83cf-442c-9afc-8ffce1fab400]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:25.570 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcc669a9b-11 in ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:25.573 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcc669a9b-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:25.573 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc5351c-f6f9-4d6b-98a1-78cc269c1929]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:25.574 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9e0d3c79-e73f-4bd8-a58e-513db0002c8f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:25.585 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[11487354-56fd-4d9c-9627-134f7c3d66a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:25.598 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[da40db49-6702-459e-9a9c-c9355d37da79]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:25.627 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[924615ba-b55b-424f-9f98-62b895e822f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:25 compute-2 systemd-udevd[299681]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:26:25 compute-2 NetworkManager[48999]: <info>  [1769847985.6350] manager: (tapcc669a9b-10): new Veth device (/org/freedesktop/NetworkManager/Devices/319)
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:25.634 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1125bad4-966e-4052-89b6-7da7ac91a525]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:25.658 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[d637beff-45ea-42a3-b548-6f5dd7179be8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:25.661 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc44707-ea2e-4467-a3f8-e765c05d95dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:25 compute-2 NetworkManager[48999]: <info>  [1769847985.6833] device (tapcc669a9b-10): carrier: link connected
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.682 226833 DEBUG nova.network.neutron [-] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:25.689 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[68e201e5-b239-4780-940c-3e138eb200f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:25.701 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c76555b7-dfe9-47fa-acef-8f54a9dce794]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcc669a9b-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:92:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 201], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 823815, 'reachable_time': 41165, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299715, 'error': None, 'target': 'ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:25.711 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[af1dc499-f304-4d53-84a7-33f3356b25c7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fead:92df'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 823815, 'tstamp': 823815}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299716, 'error': None, 'target': 'ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:25.722 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[bb3a4db4-7dce-4cc8-86ea-d84353bce492]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcc669a9b-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:92:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 201], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 823815, 'reachable_time': 41165, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299717, 'error': None, 'target': 'ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:25.742 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b29e0fb0-add7-44fa-8e1a-cb20ed98dab5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.775 226833 DEBUG nova.network.neutron [-] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:25.783 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9860e9b9-b8f4-4536-8595-a6f83c90d752]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:25.784 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc669a9b-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:25.784 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:25.785 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcc669a9b-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.807 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:25 compute-2 NetworkManager[48999]: <info>  [1769847985.8086] manager: (tapcc669a9b-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/320)
Jan 31 08:26:25 compute-2 kernel: tapcc669a9b-10: entered promiscuous mode
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.809 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:25 compute-2 ceph-mon[77282]: pgmap v2782: 305 pgs: 305 active+clean; 500 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 138 op/s
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:25.828 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcc669a9b-10, col_values=(('external_ids', {'iface-id': '897a561a-9f88-407a-b979-589100a315c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.829 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:25 compute-2 ovn_controller[133834]: 2026-01-31T08:26:25Z|00649|binding|INFO|Releasing lport 897a561a-9f88-407a-b979-589100a315c7 from this chassis (sb_readonly=0)
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.831 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.832 226833 INFO nova.compute.manager [-] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Took 0.61 seconds to deallocate network for instance.
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.836 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:25.836 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cc669a9b-1a99-4cea-8b35-6d932fb2087c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cc669a9b-1a99-4cea-8b35-6d932fb2087c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:25.837 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[65bdba54-1c80-4f5d-8f3f-b211e468aa2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.837 226833 DEBUG nova.network.neutron [req-6567d7e4-8f50-4a15-bf30-dee090558bae req-016be00e-b8b1-4e2c-94c1-012f3ae3ccd0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Updated VIF entry in instance network info cache for port ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:25.838 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-cc669a9b-1a99-4cea-8b35-6d932fb2087c
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/cc669a9b-1a99-4cea-8b35-6d932fb2087c.pid.haproxy
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID cc669a9b-1a99-4cea-8b35-6d932fb2087c
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.838 226833 DEBUG nova.network.neutron [req-6567d7e4-8f50-4a15-bf30-dee090558bae req-016be00e-b8b1-4e2c-94c1-012f3ae3ccd0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Updating instance_info_cache with network_info: [{"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:26:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:25.838 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c', 'env', 'PROCESS_TAG=haproxy-cc669a9b-1a99-4cea-8b35-6d932fb2087c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cc669a9b-1a99-4cea-8b35-6d932fb2087c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.939 226833 DEBUG oslo_concurrency.lockutils [req-6567d7e4-8f50-4a15-bf30-dee090558bae req-016be00e-b8b1-4e2c-94c1-012f3ae3ccd0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.948 226833 DEBUG oslo_concurrency.lockutils [None req-7082fa56-04c9-488a-aa99-040534d3de6d 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:25 compute-2 nova_compute[226829]: 2026-01-31 08:26:25.949 226833 DEBUG oslo_concurrency.lockutils [None req-7082fa56-04c9-488a-aa99-040534d3de6d 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:26.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.058 226833 DEBUG oslo_concurrency.processutils [None req-7082fa56-04c9-488a-aa99-040534d3de6d 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.168 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847986.1681323, 651d6b65-a0ee-4942-bf60-88b037eb6508 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.169 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] VM Started (Lifecycle Event)
Jan 31 08:26:26 compute-2 podman[299792]: 2026-01-31 08:26:26.216667226 +0000 UTC m=+0.053024470 container create 12775654224afbf1f8a93c3ed80496203b5724418ac0384e7440f861025c1416 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 08:26:26 compute-2 systemd[1]: Started libpod-conmon-12775654224afbf1f8a93c3ed80496203b5724418ac0384e7440f861025c1416.scope.
Jan 31 08:26:26 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:26:26 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c4da0da63bb09708574a05cda79f6d06dfe41659777b984c38a01f7a81a0df6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:26 compute-2 podman[299792]: 2026-01-31 08:26:26.188255164 +0000 UTC m=+0.024612468 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:26:26 compute-2 podman[299792]: 2026-01-31 08:26:26.283773757 +0000 UTC m=+0.120131021 container init 12775654224afbf1f8a93c3ed80496203b5724418ac0384e7440f861025c1416 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 08:26:26 compute-2 podman[299792]: 2026-01-31 08:26:26.287630221 +0000 UTC m=+0.123987465 container start 12775654224afbf1f8a93c3ed80496203b5724418ac0384e7440f861025c1416 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 08:26:26 compute-2 neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c[299826]: [NOTICE]   (299830) : New worker (299832) forked
Jan 31 08:26:26 compute-2 neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c[299826]: [NOTICE]   (299830) : Loading success.
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.376 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.381 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847986.168397, 651d6b65-a0ee-4942-bf60-88b037eb6508 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.381 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] VM Paused (Lifecycle Event)
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.447 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.450 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.454 226833 DEBUG nova.compute.manager [req-8f5aeca2-a158-4767-9ce5-ab42cb3c80ad req-f1ba4c60-16c3-4951-8a83-db1f99fab199 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.454 226833 DEBUG oslo_concurrency.lockutils [req-8f5aeca2-a158-4767-9ce5-ab42cb3c80ad req-f1ba4c60-16c3-4951-8a83-db1f99fab199 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.455 226833 DEBUG oslo_concurrency.lockutils [req-8f5aeca2-a158-4767-9ce5-ab42cb3c80ad req-f1ba4c60-16c3-4951-8a83-db1f99fab199 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.455 226833 DEBUG oslo_concurrency.lockutils [req-8f5aeca2-a158-4767-9ce5-ab42cb3c80ad req-f1ba4c60-16c3-4951-8a83-db1f99fab199 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.456 226833 DEBUG nova.compute.manager [req-8f5aeca2-a158-4767-9ce5-ab42cb3c80ad req-f1ba4c60-16c3-4951-8a83-db1f99fab199 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Processing event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.456 226833 DEBUG nova.compute.manager [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.473 226833 DEBUG nova.virt.libvirt.driver [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.477 226833 INFO nova.virt.libvirt.driver [-] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Instance spawned successfully.
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.477 226833 DEBUG nova.virt.libvirt.driver [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:26:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:26:26 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4235895941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.521 226833 DEBUG oslo_concurrency.processutils [None req-7082fa56-04c9-488a-aa99-040534d3de6d 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.526 226833 DEBUG nova.compute.provider_tree [None req-7082fa56-04c9-488a-aa99-040534d3de6d 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.553 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.553 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769847986.4733503, 651d6b65-a0ee-4942-bf60-88b037eb6508 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.553 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] VM Resumed (Lifecycle Event)
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.583 226833 DEBUG nova.scheduler.client.report [None req-7082fa56-04c9-488a-aa99-040534d3de6d 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.592 226833 DEBUG nova.virt.libvirt.driver [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.593 226833 DEBUG nova.virt.libvirt.driver [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.593 226833 DEBUG nova.virt.libvirt.driver [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.594 226833 DEBUG nova.virt.libvirt.driver [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.594 226833 DEBUG nova.virt.libvirt.driver [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.595 226833 DEBUG nova.virt.libvirt.driver [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.662 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.667 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.676 226833 DEBUG oslo_concurrency.lockutils [None req-7082fa56-04c9-488a-aa99-040534d3de6d 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.758 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.801 226833 INFO nova.scheduler.client.report [None req-7082fa56-04c9-488a-aa99-040534d3de6d 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Deleted allocations for instance 6a8b37e1-e818-43a4-bb83-940e1968472d
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.804 226833 INFO nova.compute.manager [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Took 16.61 seconds to spawn the instance on the hypervisor.
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.805 226833 DEBUG nova.compute.manager [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:26:26 compute-2 nova_compute[226829]: 2026-01-31 08:26:26.826 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:26:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4235895941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:27 compute-2 nova_compute[226829]: 2026-01-31 08:26:27.085 226833 INFO nova.compute.manager [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Took 18.19 seconds to build instance.
Jan 31 08:26:27 compute-2 nova_compute[226829]: 2026-01-31 08:26:27.121 226833 DEBUG oslo_concurrency.lockutils [None req-7082fa56-04c9-488a-aa99-040534d3de6d 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "6a8b37e1-e818-43a4-bb83-940e1968472d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.893s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:27 compute-2 nova_compute[226829]: 2026-01-31 08:26:27.176 226833 DEBUG oslo_concurrency.lockutils [None req-43ab4f51-e030-4587-b466-d2c37092eabc 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:26:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:27.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:26:27 compute-2 ceph-mon[77282]: pgmap v2783: 305 pgs: 305 active+clean; 465 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.8 MiB/s wr, 179 op/s
Jan 31 08:26:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1778908129' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3912877065' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:28.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:28 compute-2 nova_compute[226829]: 2026-01-31 08:26:28.638 226833 DEBUG nova.compute.manager [req-f5edf629-1a0f-40af-a964-a16f79dbc591 req-22697082-f8be-4058-ab83-f18fc0444db5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:26:28 compute-2 nova_compute[226829]: 2026-01-31 08:26:28.639 226833 DEBUG oslo_concurrency.lockutils [req-f5edf629-1a0f-40af-a964-a16f79dbc591 req-22697082-f8be-4058-ab83-f18fc0444db5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:28 compute-2 nova_compute[226829]: 2026-01-31 08:26:28.639 226833 DEBUG oslo_concurrency.lockutils [req-f5edf629-1a0f-40af-a964-a16f79dbc591 req-22697082-f8be-4058-ab83-f18fc0444db5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:28 compute-2 nova_compute[226829]: 2026-01-31 08:26:28.639 226833 DEBUG oslo_concurrency.lockutils [req-f5edf629-1a0f-40af-a964-a16f79dbc591 req-22697082-f8be-4058-ab83-f18fc0444db5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:28 compute-2 nova_compute[226829]: 2026-01-31 08:26:28.639 226833 DEBUG nova.compute.manager [req-f5edf629-1a0f-40af-a964-a16f79dbc591 req-22697082-f8be-4058-ab83-f18fc0444db5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] No waiting events found dispatching network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:26:28 compute-2 nova_compute[226829]: 2026-01-31 08:26:28.640 226833 WARNING nova.compute.manager [req-f5edf629-1a0f-40af-a964-a16f79dbc591 req-22697082-f8be-4058-ab83-f18fc0444db5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received unexpected event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f for instance with vm_state active and task_state None.
Jan 31 08:26:28 compute-2 nova_compute[226829]: 2026-01-31 08:26:28.937 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:29 compute-2 podman[299844]: 2026-01-31 08:26:29.203212414 +0000 UTC m=+0.079537230 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 08:26:29 compute-2 ceph-mon[77282]: pgmap v2784: 305 pgs: 305 active+clean; 451 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 809 KiB/s rd, 3.2 MiB/s wr, 126 op/s
Jan 31 08:26:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:26:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:29.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:26:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:26:29 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3639916361' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:30 compute-2 sudo[299864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:26:30 compute-2 sudo[299864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:30 compute-2 sudo[299864]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:30.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:30 compute-2 sudo[299889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:26:30 compute-2 sudo[299889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:30 compute-2 sudo[299889]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:30 compute-2 sudo[299914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:26:30 compute-2 sudo[299914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:30 compute-2 sudo[299914]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:30 compute-2 sudo[299939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:26:30 compute-2 sudo[299939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3639916361' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:30 compute-2 sudo[299939]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:30 compute-2 sudo[299996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:26:30 compute-2 sudo[299996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:30 compute-2 sudo[299996]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:30 compute-2 sudo[300021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:26:30 compute-2 sudo[300021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:30 compute-2 sudo[300021]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:26:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:31.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:26:31 compute-2 ceph-mon[77282]: pgmap v2785: 305 pgs: 305 active+clean; 451 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 157 op/s
Jan 31 08:26:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:26:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:26:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:26:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:26:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:26:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #133. Immutable memtables: 0.
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:26:31.554113) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 133
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847991554225, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 660, "num_deletes": 251, "total_data_size": 979312, "memory_usage": 991624, "flush_reason": "Manual Compaction"}
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #134: started
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847991561266, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 134, "file_size": 644796, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65895, "largest_seqno": 66550, "table_properties": {"data_size": 641646, "index_size": 1057, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7682, "raw_average_key_size": 19, "raw_value_size": 635209, "raw_average_value_size": 1596, "num_data_blocks": 47, "num_entries": 398, "num_filter_entries": 398, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847953, "oldest_key_time": 1769847953, "file_creation_time": 1769847991, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 7212 microseconds, and 3745 cpu microseconds.
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:26:31.561335) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #134: 644796 bytes OK
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:26:31.561358) [db/memtable_list.cc:519] [default] Level-0 commit table #134 started
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:26:31.563199) [db/memtable_list.cc:722] [default] Level-0 commit table #134: memtable #1 done
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:26:31.563224) EVENT_LOG_v1 {"time_micros": 1769847991563216, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:26:31.563248) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 975701, prev total WAL file size 975701, number of live WAL files 2.
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000130.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:26:31.563734) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [134(629KB)], [132(11MB)]
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847991563762, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [134], "files_L6": [132], "score": -1, "input_data_size": 12554239, "oldest_snapshot_seqno": -1}
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #135: 8687 keys, 10651860 bytes, temperature: kUnknown
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847991624736, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 135, "file_size": 10651860, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10596450, "index_size": 32577, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21765, "raw_key_size": 230066, "raw_average_key_size": 26, "raw_value_size": 10444402, "raw_average_value_size": 1202, "num_data_blocks": 1238, "num_entries": 8687, "num_filter_entries": 8687, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769847991, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 135, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:26:31.625196) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 10651860 bytes
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:26:31.626537) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 205.0 rd, 174.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 11.4 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(36.0) write-amplify(16.5) OK, records in: 9199, records dropped: 512 output_compression: NoCompression
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:26:31.626558) EVENT_LOG_v1 {"time_micros": 1769847991626549, "job": 84, "event": "compaction_finished", "compaction_time_micros": 61234, "compaction_time_cpu_micros": 20180, "output_level": 6, "num_output_files": 1, "total_output_size": 10651860, "num_input_records": 9199, "num_output_records": 8687, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847991626781, "job": 84, "event": "table_file_deletion", "file_number": 134}
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000132.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847991627974, "job": 84, "event": "table_file_deletion", "file_number": 132}
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:26:31.563696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:26:31.628008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:26:31.628013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:26:31.628015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:26:31.628018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:26:31 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:26:31.628020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:26:31 compute-2 nova_compute[226829]: 2026-01-31 08:26:31.761 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:32.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:33.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:33 compute-2 ceph-mon[77282]: pgmap v2786: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 177 op/s
Jan 31 08:26:33 compute-2 nova_compute[226829]: 2026-01-31 08:26:33.940 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:34.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:34 compute-2 nova_compute[226829]: 2026-01-31 08:26:34.383 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:34 compute-2 NetworkManager[48999]: <info>  [1769847994.3843] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/321)
Jan 31 08:26:34 compute-2 NetworkManager[48999]: <info>  [1769847994.3867] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/322)
Jan 31 08:26:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:26:34 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/121583807' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:34 compute-2 nova_compute[226829]: 2026-01-31 08:26:34.447 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:34 compute-2 ovn_controller[133834]: 2026-01-31T08:26:34Z|00650|binding|INFO|Releasing lport 897a561a-9f88-407a-b979-589100a315c7 from this chassis (sb_readonly=0)
Jan 31 08:26:34 compute-2 nova_compute[226829]: 2026-01-31 08:26:34.467 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/121583807' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:35.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:35 compute-2 ceph-mon[77282]: pgmap v2787: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.2 MiB/s wr, 217 op/s
Jan 31 08:26:35 compute-2 nova_compute[226829]: 2026-01-31 08:26:35.890 226833 DEBUG nova.compute.manager [req-9cb15d4c-27af-4b8a-8447-54c5d4e30263 req-4d93b7c5-e16f-4aee-a94f-d00552f23a9d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received event network-changed-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:26:35 compute-2 nova_compute[226829]: 2026-01-31 08:26:35.891 226833 DEBUG nova.compute.manager [req-9cb15d4c-27af-4b8a-8447-54c5d4e30263 req-4d93b7c5-e16f-4aee-a94f-d00552f23a9d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Refreshing instance network info cache due to event network-changed-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:26:35 compute-2 nova_compute[226829]: 2026-01-31 08:26:35.891 226833 DEBUG oslo_concurrency.lockutils [req-9cb15d4c-27af-4b8a-8447-54c5d4e30263 req-4d93b7c5-e16f-4aee-a94f-d00552f23a9d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:26:35 compute-2 nova_compute[226829]: 2026-01-31 08:26:35.892 226833 DEBUG oslo_concurrency.lockutils [req-9cb15d4c-27af-4b8a-8447-54c5d4e30263 req-4d93b7c5-e16f-4aee-a94f-d00552f23a9d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:26:35 compute-2 nova_compute[226829]: 2026-01-31 08:26:35.892 226833 DEBUG nova.network.neutron [req-9cb15d4c-27af-4b8a-8447-54c5d4e30263 req-4d93b7c5-e16f-4aee-a94f-d00552f23a9d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Refreshing network info cache for port ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:26:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:26:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:36.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:26:36 compute-2 nova_compute[226829]: 2026-01-31 08:26:36.763 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:37.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:37 compute-2 nova_compute[226829]: 2026-01-31 08:26:37.538 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:37 compute-2 nova_compute[226829]: 2026-01-31 08:26:37.539 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:37 compute-2 sudo[300051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:26:37 compute-2 sudo[300051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:37 compute-2 sudo[300051]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:37 compute-2 sudo[300076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:26:37 compute-2 sudo[300076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:37 compute-2 sudo[300076]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:38.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:38 compute-2 nova_compute[226829]: 2026-01-31 08:26:38.075 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847983.0733683, 6a8b37e1-e818-43a4-bb83-940e1968472d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:26:38 compute-2 nova_compute[226829]: 2026-01-31 08:26:38.075 226833 INFO nova.compute.manager [-] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] VM Stopped (Lifecycle Event)
Jan 31 08:26:38 compute-2 nova_compute[226829]: 2026-01-31 08:26:38.128 226833 DEBUG nova.compute.manager [None req-b2eee99d-3d37-4b0f-b147-42b7960e42ca - - - - - -] [instance: 6a8b37e1-e818-43a4-bb83-940e1968472d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:26:38 compute-2 nova_compute[226829]: 2026-01-31 08:26:38.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:38 compute-2 ceph-mon[77282]: pgmap v2788: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.8 MiB/s wr, 242 op/s
Jan 31 08:26:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:26:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:26:38 compute-2 nova_compute[226829]: 2026-01-31 08:26:38.914 226833 DEBUG nova.network.neutron [req-9cb15d4c-27af-4b8a-8447-54c5d4e30263 req-4d93b7c5-e16f-4aee-a94f-d00552f23a9d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Updated VIF entry in instance network info cache for port ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:26:38 compute-2 nova_compute[226829]: 2026-01-31 08:26:38.916 226833 DEBUG nova.network.neutron [req-9cb15d4c-27af-4b8a-8447-54c5d4e30263 req-4d93b7c5-e16f-4aee-a94f-d00552f23a9d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Updating instance_info_cache with network_info: [{"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:26:38 compute-2 nova_compute[226829]: 2026-01-31 08:26:38.942 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:38 compute-2 nova_compute[226829]: 2026-01-31 08:26:38.987 226833 DEBUG oslo_concurrency.lockutils [req-9cb15d4c-27af-4b8a-8447-54c5d4e30263 req-4d93b7c5-e16f-4aee-a94f-d00552f23a9d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:26:39 compute-2 nova_compute[226829]: 2026-01-31 08:26:39.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:39 compute-2 nova_compute[226829]: 2026-01-31 08:26:39.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:26:39 compute-2 nova_compute[226829]: 2026-01-31 08:26:39.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:26:39 compute-2 ceph-mon[77282]: pgmap v2789: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 49 KiB/s wr, 156 op/s
Jan 31 08:26:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:26:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:39.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:26:39 compute-2 nova_compute[226829]: 2026-01-31 08:26:39.875 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:26:39 compute-2 nova_compute[226829]: 2026-01-31 08:26:39.876 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:26:39 compute-2 nova_compute[226829]: 2026-01-31 08:26:39.876 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:26:39 compute-2 nova_compute[226829]: 2026-01-31 08:26:39.877 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 651d6b65-a0ee-4942-bf60-88b037eb6508 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:26:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:40.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:40 compute-2 ovn_controller[133834]: 2026-01-31T08:26:40Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:75:f3:24 10.100.0.11
Jan 31 08:26:40 compute-2 ovn_controller[133834]: 2026-01-31T08:26:40Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:75:f3:24 10.100.0.11
Jan 31 08:26:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:41 compute-2 ceph-mon[77282]: pgmap v2790: 305 pgs: 305 active+clean; 460 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.1 MiB/s wr, 170 op/s
Jan 31 08:26:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:26:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:41.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:26:41 compute-2 nova_compute[226829]: 2026-01-31 08:26:41.766 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:42.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:42 compute-2 nova_compute[226829]: 2026-01-31 08:26:42.697 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Updating instance_info_cache with network_info: [{"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:26:42 compute-2 nova_compute[226829]: 2026-01-31 08:26:42.804 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:26:42 compute-2 nova_compute[226829]: 2026-01-31 08:26:42.805 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:26:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:43.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:43 compute-2 ceph-mon[77282]: pgmap v2791: 305 pgs: 305 active+clean; 471 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.6 MiB/s wr, 148 op/s
Jan 31 08:26:43 compute-2 nova_compute[226829]: 2026-01-31 08:26:43.951 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:44.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:45 compute-2 nova_compute[226829]: 2026-01-31 08:26:45.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:45 compute-2 nova_compute[226829]: 2026-01-31 08:26:45.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:45 compute-2 nova_compute[226829]: 2026-01-31 08:26:45.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:45.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:45 compute-2 ceph-mon[77282]: pgmap v2792: 305 pgs: 305 active+clean; 481 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Jan 31 08:26:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2063618554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/848783938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:26:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:46.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:26:46 compute-2 nova_compute[226829]: 2026-01-31 08:26:46.426 226833 INFO nova.compute.manager [None req-7d0c09c5-a81f-4275-94ca-1c2c48dfb7dd 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Get console output
Jan 31 08:26:46 compute-2 nova_compute[226829]: 2026-01-31 08:26:46.433 267670 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 31 08:26:46 compute-2 nova_compute[226829]: 2026-01-31 08:26:46.769 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1898698978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/791014467' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:46 compute-2 nova_compute[226829]: 2026-01-31 08:26:46.942 226833 DEBUG oslo_concurrency.lockutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:46 compute-2 nova_compute[226829]: 2026-01-31 08:26:46.942 226833 DEBUG oslo_concurrency.lockutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:47 compute-2 nova_compute[226829]: 2026-01-31 08:26:47.057 226833 DEBUG nova.compute.manager [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:26:47 compute-2 nova_compute[226829]: 2026-01-31 08:26:47.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:47.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:47 compute-2 nova_compute[226829]: 2026-01-31 08:26:47.558 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:47 compute-2 nova_compute[226829]: 2026-01-31 08:26:47.558 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:47 compute-2 nova_compute[226829]: 2026-01-31 08:26:47.558 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:47 compute-2 nova_compute[226829]: 2026-01-31 08:26:47.559 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:26:47 compute-2 nova_compute[226829]: 2026-01-31 08:26:47.559 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:47 compute-2 nova_compute[226829]: 2026-01-31 08:26:47.603 226833 DEBUG oslo_concurrency.lockutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:47 compute-2 nova_compute[226829]: 2026-01-31 08:26:47.604 226833 DEBUG oslo_concurrency.lockutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:47 compute-2 nova_compute[226829]: 2026-01-31 08:26:47.610 226833 DEBUG nova.virt.hardware [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:26:47 compute-2 nova_compute[226829]: 2026-01-31 08:26:47.611 226833 INFO nova.compute.claims [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:26:47 compute-2 nova_compute[226829]: 2026-01-31 08:26:47.878 226833 DEBUG oslo_concurrency.processutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:47 compute-2 ceph-mon[77282]: pgmap v2793: 305 pgs: 305 active+clean; 484 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.4 MiB/s wr, 105 op/s
Jan 31 08:26:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2720197295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1176362589' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1160455600' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:47 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:26:47 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1267069725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.003 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:48.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:48 compute-2 podman[300141]: 2026-01-31 08:26:48.117686981 +0000 UTC m=+0.078068930 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.215 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.216 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:26:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:26:48 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2763669411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.327 226833 DEBUG oslo_concurrency.processutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.331 226833 DEBUG nova.compute.provider_tree [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.361 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.362 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3973MB free_disk=20.827919006347656GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.362 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.365 226833 DEBUG nova.scheduler.client.report [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.395 226833 DEBUG oslo_concurrency.lockutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.396 226833 DEBUG nova.compute.manager [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.398 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.494 226833 DEBUG nova.compute.manager [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.494 226833 DEBUG nova.network.neutron [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.529 226833 INFO nova.virt.libvirt.driver [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.535 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 651d6b65-a0ee-4942-bf60-88b037eb6508 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.535 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.536 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.536 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.632 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.665 226833 DEBUG nova.compute.manager [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.727 226833 INFO nova.compute.manager [None req-f41a6523-bd79-4397-a259-f9e4224e3917 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Get console output
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.732 267670 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.870 226833 DEBUG nova.compute.manager [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.872 226833 DEBUG nova.virt.libvirt.driver [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.872 226833 INFO nova.virt.libvirt.driver [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Creating image(s)
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.900 226833 DEBUG nova.storage.rbd_utils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image e89132fd-2d0c-475e-a3c5-0407e4cbbbb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.929 226833 DEBUG nova.storage.rbd_utils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image e89132fd-2d0c-475e-a3c5-0407e4cbbbb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.958 226833 DEBUG nova.storage.rbd_utils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image e89132fd-2d0c-475e-a3c5-0407e4cbbbb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.962 226833 DEBUG oslo_concurrency.processutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:48 compute-2 nova_compute[226829]: 2026-01-31 08:26:48.995 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1267069725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2763669411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:49 compute-2 nova_compute[226829]: 2026-01-31 08:26:49.037 226833 DEBUG oslo_concurrency.processutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:49 compute-2 nova_compute[226829]: 2026-01-31 08:26:49.038 226833 DEBUG oslo_concurrency.lockutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:49 compute-2 nova_compute[226829]: 2026-01-31 08:26:49.039 226833 DEBUG oslo_concurrency.lockutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:49 compute-2 nova_compute[226829]: 2026-01-31 08:26:49.040 226833 DEBUG oslo_concurrency.lockutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:26:49 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1876234704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:49 compute-2 nova_compute[226829]: 2026-01-31 08:26:49.078 226833 DEBUG nova.storage.rbd_utils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image e89132fd-2d0c-475e-a3c5-0407e4cbbbb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:26:49 compute-2 nova_compute[226829]: 2026-01-31 08:26:49.083 226833 DEBUG oslo_concurrency.processutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 e89132fd-2d0c-475e-a3c5-0407e4cbbbb8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:49 compute-2 nova_compute[226829]: 2026-01-31 08:26:49.111 226833 DEBUG nova.policy [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '85dfa8546d9942648bb4197c8b1947e3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '48bbdbdee526499e90da7e971ede68d3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:26:49 compute-2 nova_compute[226829]: 2026-01-31 08:26:49.114 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:49 compute-2 nova_compute[226829]: 2026-01-31 08:26:49.118 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:26:49 compute-2 nova_compute[226829]: 2026-01-31 08:26:49.146 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:26:49 compute-2 nova_compute[226829]: 2026-01-31 08:26:49.207 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:26:49 compute-2 nova_compute[226829]: 2026-01-31 08:26:49.208 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:49.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:50 compute-2 ceph-mon[77282]: pgmap v2794: 305 pgs: 305 active+clean; 509 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 460 KiB/s rd, 3.9 MiB/s wr, 109 op/s
Jan 31 08:26:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1876234704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:50.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:50 compute-2 nova_compute[226829]: 2026-01-31 08:26:50.082 226833 DEBUG oslo_concurrency.processutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 e89132fd-2d0c-475e-a3c5-0407e4cbbbb8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.999s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:50 compute-2 nova_compute[226829]: 2026-01-31 08:26:50.158 226833 DEBUG nova.storage.rbd_utils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] resizing rbd image e89132fd-2d0c-475e-a3c5-0407e4cbbbb8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:26:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:50 compute-2 nova_compute[226829]: 2026-01-31 08:26:50.520 226833 DEBUG nova.objects.instance [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'migration_context' on Instance uuid e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:26:50 compute-2 nova_compute[226829]: 2026-01-31 08:26:50.548 226833 DEBUG nova.virt.libvirt.driver [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:26:50 compute-2 nova_compute[226829]: 2026-01-31 08:26:50.548 226833 DEBUG nova.virt.libvirt.driver [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Ensure instance console log exists: /var/lib/nova/instances/e89132fd-2d0c-475e-a3c5-0407e4cbbbb8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:26:50 compute-2 nova_compute[226829]: 2026-01-31 08:26:50.549 226833 DEBUG oslo_concurrency.lockutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:50 compute-2 nova_compute[226829]: 2026-01-31 08:26:50.549 226833 DEBUG oslo_concurrency.lockutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:50 compute-2 nova_compute[226829]: 2026-01-31 08:26:50.549 226833 DEBUG oslo_concurrency.lockutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:50 compute-2 nova_compute[226829]: 2026-01-31 08:26:50.686 226833 DEBUG nova.network.neutron [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Successfully created port: a3ff723d-9e34-45ef-881d-6534126ae169 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:26:50 compute-2 sudo[300367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:26:50 compute-2 sudo[300367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:50 compute-2 sudo[300367]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:50 compute-2 sudo[300392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:26:50 compute-2 sudo[300392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:50 compute-2 sudo[300392]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:51.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:51 compute-2 nova_compute[226829]: 2026-01-31 08:26:51.771 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:52 compute-2 ceph-mon[77282]: pgmap v2795: 305 pgs: 305 active+clean; 566 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 867 KiB/s rd, 6.7 MiB/s wr, 150 op/s
Jan 31 08:26:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:26:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:52.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:26:52 compute-2 nova_compute[226829]: 2026-01-31 08:26:52.662 226833 DEBUG nova.network.neutron [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Successfully updated port: a3ff723d-9e34-45ef-881d-6534126ae169 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:26:52 compute-2 nova_compute[226829]: 2026-01-31 08:26:52.800 226833 DEBUG oslo_concurrency.lockutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:26:52 compute-2 nova_compute[226829]: 2026-01-31 08:26:52.800 226833 DEBUG oslo_concurrency.lockutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquired lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:26:52 compute-2 nova_compute[226829]: 2026-01-31 08:26:52.801 226833 DEBUG nova.network.neutron [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:26:52 compute-2 nova_compute[226829]: 2026-01-31 08:26:52.920 226833 DEBUG nova.compute.manager [req-bfd7a972-6d8e-4a18-a68e-e5315548168e req-8042af54-8082-4cbd-98b5-27ca4e494df2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Received event network-changed-a3ff723d-9e34-45ef-881d-6534126ae169 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:26:52 compute-2 nova_compute[226829]: 2026-01-31 08:26:52.921 226833 DEBUG nova.compute.manager [req-bfd7a972-6d8e-4a18-a68e-e5315548168e req-8042af54-8082-4cbd-98b5-27ca4e494df2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Refreshing instance network info cache due to event network-changed-a3ff723d-9e34-45ef-881d-6534126ae169. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:26:52 compute-2 nova_compute[226829]: 2026-01-31 08:26:52.921 226833 DEBUG oslo_concurrency.lockutils [req-bfd7a972-6d8e-4a18-a68e-e5315548168e req-8042af54-8082-4cbd-98b5-27ca4e494df2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:26:53 compute-2 nova_compute[226829]: 2026-01-31 08:26:53.193 226833 DEBUG nova.network.neutron [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:26:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:26:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:53.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:26:53 compute-2 nova_compute[226829]: 2026-01-31 08:26:53.997 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:54 compute-2 ceph-mon[77282]: pgmap v2796: 305 pgs: 305 active+clean; 596 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 6.2 MiB/s wr, 157 op/s
Jan 31 08:26:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2557482008' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:26:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2557482008' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:26:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1848615711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:54.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:54 compute-2 nova_compute[226829]: 2026-01-31 08:26:54.443 226833 DEBUG oslo_concurrency.lockutils [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Acquiring lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:26:54 compute-2 nova_compute[226829]: 2026-01-31 08:26:54.443 226833 DEBUG oslo_concurrency.lockutils [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Acquired lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:26:54 compute-2 nova_compute[226829]: 2026-01-31 08:26:54.443 226833 DEBUG nova.network.neutron [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:26:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:55.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:26:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:56.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:26:56 compute-2 ceph-mon[77282]: pgmap v2797: 305 pgs: 305 active+clean; 610 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 6.3 MiB/s wr, 168 op/s
Jan 31 08:26:56 compute-2 nova_compute[226829]: 2026-01-31 08:26:56.176 226833 DEBUG nova.network.neutron [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Updating instance_info_cache with network_info: [{"id": "a3ff723d-9e34-45ef-881d-6534126ae169", "address": "fa:16:3e:1b:c5:21", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ff723d-9e", "ovs_interfaceid": "a3ff723d-9e34-45ef-881d-6534126ae169", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:26:56 compute-2 nova_compute[226829]: 2026-01-31 08:26:56.243 226833 DEBUG oslo_concurrency.lockutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Releasing lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:26:56 compute-2 nova_compute[226829]: 2026-01-31 08:26:56.244 226833 DEBUG nova.compute.manager [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Instance network_info: |[{"id": "a3ff723d-9e34-45ef-881d-6534126ae169", "address": "fa:16:3e:1b:c5:21", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ff723d-9e", "ovs_interfaceid": "a3ff723d-9e34-45ef-881d-6534126ae169", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:26:56 compute-2 nova_compute[226829]: 2026-01-31 08:26:56.244 226833 DEBUG oslo_concurrency.lockutils [req-bfd7a972-6d8e-4a18-a68e-e5315548168e req-8042af54-8082-4cbd-98b5-27ca4e494df2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:26:56 compute-2 nova_compute[226829]: 2026-01-31 08:26:56.244 226833 DEBUG nova.network.neutron [req-bfd7a972-6d8e-4a18-a68e-e5315548168e req-8042af54-8082-4cbd-98b5-27ca4e494df2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Refreshing network info cache for port a3ff723d-9e34-45ef-881d-6534126ae169 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:26:56 compute-2 nova_compute[226829]: 2026-01-31 08:26:56.247 226833 DEBUG nova.virt.libvirt.driver [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Start _get_guest_xml network_info=[{"id": "a3ff723d-9e34-45ef-881d-6534126ae169", "address": "fa:16:3e:1b:c5:21", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ff723d-9e", "ovs_interfaceid": "a3ff723d-9e34-45ef-881d-6534126ae169", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:26:56 compute-2 nova_compute[226829]: 2026-01-31 08:26:56.251 226833 WARNING nova.virt.libvirt.driver [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:26:56 compute-2 nova_compute[226829]: 2026-01-31 08:26:56.258 226833 DEBUG nova.virt.libvirt.host [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:26:56 compute-2 nova_compute[226829]: 2026-01-31 08:26:56.259 226833 DEBUG nova.virt.libvirt.host [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:26:56 compute-2 nova_compute[226829]: 2026-01-31 08:26:56.262 226833 DEBUG nova.virt.libvirt.host [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:26:56 compute-2 nova_compute[226829]: 2026-01-31 08:26:56.262 226833 DEBUG nova.virt.libvirt.host [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:26:56 compute-2 nova_compute[226829]: 2026-01-31 08:26:56.263 226833 DEBUG nova.virt.libvirt.driver [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:26:56 compute-2 nova_compute[226829]: 2026-01-31 08:26:56.264 226833 DEBUG nova.virt.hardware [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:26:56 compute-2 nova_compute[226829]: 2026-01-31 08:26:56.264 226833 DEBUG nova.virt.hardware [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:26:56 compute-2 nova_compute[226829]: 2026-01-31 08:26:56.264 226833 DEBUG nova.virt.hardware [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:26:56 compute-2 nova_compute[226829]: 2026-01-31 08:26:56.264 226833 DEBUG nova.virt.hardware [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:26:56 compute-2 nova_compute[226829]: 2026-01-31 08:26:56.265 226833 DEBUG nova.virt.hardware [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:26:56 compute-2 nova_compute[226829]: 2026-01-31 08:26:56.265 226833 DEBUG nova.virt.hardware [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:26:56 compute-2 nova_compute[226829]: 2026-01-31 08:26:56.265 226833 DEBUG nova.virt.hardware [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:26:56 compute-2 nova_compute[226829]: 2026-01-31 08:26:56.265 226833 DEBUG nova.virt.hardware [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:26:56 compute-2 nova_compute[226829]: 2026-01-31 08:26:56.266 226833 DEBUG nova.virt.hardware [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:26:56 compute-2 nova_compute[226829]: 2026-01-31 08:26:56.266 226833 DEBUG nova.virt.hardware [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:26:56 compute-2 nova_compute[226829]: 2026-01-31 08:26:56.266 226833 DEBUG nova.virt.hardware [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:26:56 compute-2 nova_compute[226829]: 2026-01-31 08:26:56.269 226833 DEBUG oslo_concurrency.processutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:26:56 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/306460858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:56 compute-2 nova_compute[226829]: 2026-01-31 08:26:56.720 226833 DEBUG oslo_concurrency.processutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:56 compute-2 nova_compute[226829]: 2026-01-31 08:26:56.752 226833 DEBUG nova.storage.rbd_utils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image e89132fd-2d0c-475e-a3c5-0407e4cbbbb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:26:56 compute-2 nova_compute[226829]: 2026-01-31 08:26:56.759 226833 DEBUG oslo_concurrency.processutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:56 compute-2 nova_compute[226829]: 2026-01-31 08:26:56.789 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/306460858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:26:57 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2863717728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.172 226833 DEBUG oslo_concurrency.processutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.174 226833 DEBUG nova.virt.libvirt.vif [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:26:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-0',id=164,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMx4MwFIYcLufTgJTIjkaLBrJONZCyY8Yf6X6pbg0U3Us81VO6LfliTNhhDzgfgfMWpf9GXPg5uphWD0tDnxS1Zf2IaRx1ENKXJOF1zVaOJTSt3BjSDZpbJsUpD0/zLEPw==',key_name='tempest-keypair-253684506',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48bbdbdee526499e90da7e971ede68d3',ramdisk_id='',reservation_id='r-ec12ij1b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network
_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-2017021026',owner_user_name='tempest-AttachVolumeMultiAttachTest-2017021026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:26:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='85dfa8546d9942648bb4197c8b1947e3',uuid=e89132fd-2d0c-475e-a3c5-0407e4cbbbb8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a3ff723d-9e34-45ef-881d-6534126ae169", "address": "fa:16:3e:1b:c5:21", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ff723d-9e", "ovs_interfaceid": "a3ff723d-9e34-45ef-881d-6534126ae169", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.174 226833 DEBUG nova.network.os_vif_util [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converting VIF {"id": "a3ff723d-9e34-45ef-881d-6534126ae169", "address": "fa:16:3e:1b:c5:21", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ff723d-9e", "ovs_interfaceid": "a3ff723d-9e34-45ef-881d-6534126ae169", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.175 226833 DEBUG nova.network.os_vif_util [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:c5:21,bridge_name='br-int',has_traffic_filtering=True,id=a3ff723d-9e34-45ef-881d-6534126ae169,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ff723d-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.176 226833 DEBUG nova.objects.instance [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'pci_devices' on Instance uuid e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.205 226833 DEBUG nova.virt.libvirt.driver [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:26:57 compute-2 nova_compute[226829]:   <uuid>e89132fd-2d0c-475e-a3c5-0407e4cbbbb8</uuid>
Jan 31 08:26:57 compute-2 nova_compute[226829]:   <name>instance-000000a4</name>
Jan 31 08:26:57 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:26:57 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:26:57 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <nova:name>multiattach-server-0</nova:name>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:26:56</nova:creationTime>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:26:57 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:26:57 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:26:57 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:26:57 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:26:57 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:26:57 compute-2 nova_compute[226829]:         <nova:user uuid="85dfa8546d9942648bb4197c8b1947e3">tempest-AttachVolumeMultiAttachTest-2017021026-project-member</nova:user>
Jan 31 08:26:57 compute-2 nova_compute[226829]:         <nova:project uuid="48bbdbdee526499e90da7e971ede68d3">tempest-AttachVolumeMultiAttachTest-2017021026</nova:project>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:26:57 compute-2 nova_compute[226829]:         <nova:port uuid="a3ff723d-9e34-45ef-881d-6534126ae169">
Jan 31 08:26:57 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:26:57 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:26:57 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <system>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <entry name="serial">e89132fd-2d0c-475e-a3c5-0407e4cbbbb8</entry>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <entry name="uuid">e89132fd-2d0c-475e-a3c5-0407e4cbbbb8</entry>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     </system>
Jan 31 08:26:57 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:26:57 compute-2 nova_compute[226829]:   <os>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:   </os>
Jan 31 08:26:57 compute-2 nova_compute[226829]:   <features>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:   </features>
Jan 31 08:26:57 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:26:57 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:26:57 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/e89132fd-2d0c-475e-a3c5-0407e4cbbbb8_disk">
Jan 31 08:26:57 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       </source>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:26:57 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/e89132fd-2d0c-475e-a3c5-0407e4cbbbb8_disk.config">
Jan 31 08:26:57 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       </source>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:26:57 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:1b:c5:21"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <target dev="tapa3ff723d-9e"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/e89132fd-2d0c-475e-a3c5-0407e4cbbbb8/console.log" append="off"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <video>
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     </video>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:26:57 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:26:57 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:26:57 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:26:57 compute-2 nova_compute[226829]: </domain>
Jan 31 08:26:57 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.206 226833 DEBUG nova.compute.manager [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Preparing to wait for external event network-vif-plugged-a3ff723d-9e34-45ef-881d-6534126ae169 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.206 226833 DEBUG oslo_concurrency.lockutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.206 226833 DEBUG oslo_concurrency.lockutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.206 226833 DEBUG oslo_concurrency.lockutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.207 226833 DEBUG nova.virt.libvirt.vif [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:26:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-0',id=164,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMx4MwFIYcLufTgJTIjkaLBrJONZCyY8Yf6X6pbg0U3Us81VO6LfliTNhhDzgfgfMWpf9GXPg5uphWD0tDnxS1Zf2IaRx1ENKXJOF1zVaOJTSt3BjSDZpbJsUpD0/zLEPw==',key_name='tempest-keypair-253684506',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48bbdbdee526499e90da7e971ede68d3',ramdisk_id='',reservation_id='r-ec12ij1b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='
0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-2017021026',owner_user_name='tempest-AttachVolumeMultiAttachTest-2017021026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:26:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='85dfa8546d9942648bb4197c8b1947e3',uuid=e89132fd-2d0c-475e-a3c5-0407e4cbbbb8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a3ff723d-9e34-45ef-881d-6534126ae169", "address": "fa:16:3e:1b:c5:21", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ff723d-9e", "ovs_interfaceid": "a3ff723d-9e34-45ef-881d-6534126ae169", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.207 226833 DEBUG nova.network.os_vif_util [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converting VIF {"id": "a3ff723d-9e34-45ef-881d-6534126ae169", "address": "fa:16:3e:1b:c5:21", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ff723d-9e", "ovs_interfaceid": "a3ff723d-9e34-45ef-881d-6534126ae169", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.208 226833 DEBUG nova.network.os_vif_util [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:c5:21,bridge_name='br-int',has_traffic_filtering=True,id=a3ff723d-9e34-45ef-881d-6534126ae169,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ff723d-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.208 226833 DEBUG os_vif [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:c5:21,bridge_name='br-int',has_traffic_filtering=True,id=a3ff723d-9e34-45ef-881d-6534126ae169,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ff723d-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.209 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.209 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.210 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.215 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.215 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3ff723d-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.216 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3ff723d-9e, col_values=(('external_ids', {'iface-id': 'a3ff723d-9e34-45ef-881d-6534126ae169', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1b:c5:21', 'vm-uuid': 'e89132fd-2d0c-475e-a3c5-0407e4cbbbb8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.218 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:57 compute-2 NetworkManager[48999]: <info>  [1769848017.2198] manager: (tapa3ff723d-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/323)
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.222 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.226 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.227 226833 INFO os_vif [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:c5:21,bridge_name='br-int',has_traffic_filtering=True,id=a3ff723d-9e34-45ef-881d-6534126ae169,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ff723d-9e')
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.296 226833 DEBUG nova.virt.libvirt.driver [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.296 226833 DEBUG nova.virt.libvirt.driver [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.297 226833 DEBUG nova.virt.libvirt.driver [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No VIF found with MAC fa:16:3e:1b:c5:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.297 226833 INFO nova.virt.libvirt.driver [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Using config drive
Jan 31 08:26:57 compute-2 nova_compute[226829]: 2026-01-31 08:26:57.323 226833 DEBUG nova.storage.rbd_utils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image e89132fd-2d0c-475e-a3c5-0407e4cbbbb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:26:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:57.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:58.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:58 compute-2 ceph-mon[77282]: pgmap v2798: 305 pgs: 305 active+clean; 610 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.7 MiB/s wr, 198 op/s
Jan 31 08:26:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2863717728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:58 compute-2 nova_compute[226829]: 2026-01-31 08:26:58.649 226833 INFO nova.virt.libvirt.driver [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Creating config drive at /var/lib/nova/instances/e89132fd-2d0c-475e-a3c5-0407e4cbbbb8/disk.config
Jan 31 08:26:58 compute-2 nova_compute[226829]: 2026-01-31 08:26:58.658 226833 DEBUG oslo_concurrency.processutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e89132fd-2d0c-475e-a3c5-0407e4cbbbb8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpy_do8_rw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:58 compute-2 nova_compute[226829]: 2026-01-31 08:26:58.802 226833 DEBUG oslo_concurrency.processutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e89132fd-2d0c-475e-a3c5-0407e4cbbbb8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpy_do8_rw" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:58 compute-2 nova_compute[226829]: 2026-01-31 08:26:58.842 226833 DEBUG nova.storage.rbd_utils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image e89132fd-2d0c-475e-a3c5-0407e4cbbbb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:26:58 compute-2 nova_compute[226829]: 2026-01-31 08:26:58.845 226833 DEBUG oslo_concurrency.processutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e89132fd-2d0c-475e-a3c5-0407e4cbbbb8/disk.config e89132fd-2d0c-475e-a3c5-0407e4cbbbb8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:58 compute-2 nova_compute[226829]: 2026-01-31 08:26:58.879 226833 DEBUG nova.network.neutron [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Updating instance_info_cache with network_info: [{"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:26:58 compute-2 nova_compute[226829]: 2026-01-31 08:26:58.903 226833 DEBUG oslo_concurrency.lockutils [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Releasing lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:26:59 compute-2 nova_compute[226829]: 2026-01-31 08:26:59.041 226833 DEBUG nova.virt.libvirt.driver [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Jan 31 08:26:59 compute-2 nova_compute[226829]: 2026-01-31 08:26:59.042 226833 DEBUG nova.virt.libvirt.volume.remotefs [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Creating file /var/lib/nova/instances/651d6b65-a0ee-4942-bf60-88b037eb6508/b1f6ac2d621c44b0bd5b140b53171834.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Jan 31 08:26:59 compute-2 nova_compute[226829]: 2026-01-31 08:26:59.043 226833 DEBUG oslo_concurrency.processutils [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/651d6b65-a0ee-4942-bf60-88b037eb6508/b1f6ac2d621c44b0bd5b140b53171834.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:59 compute-2 ceph-mon[77282]: pgmap v2799: 305 pgs: 305 active+clean; 610 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.5 MiB/s wr, 184 op/s
Jan 31 08:26:59 compute-2 nova_compute[226829]: 2026-01-31 08:26:59.278 226833 DEBUG nova.network.neutron [req-bfd7a972-6d8e-4a18-a68e-e5315548168e req-8042af54-8082-4cbd-98b5-27ca4e494df2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Updated VIF entry in instance network info cache for port a3ff723d-9e34-45ef-881d-6534126ae169. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:26:59 compute-2 nova_compute[226829]: 2026-01-31 08:26:59.279 226833 DEBUG nova.network.neutron [req-bfd7a972-6d8e-4a18-a68e-e5315548168e req-8042af54-8082-4cbd-98b5-27ca4e494df2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Updating instance_info_cache with network_info: [{"id": "a3ff723d-9e34-45ef-881d-6534126ae169", "address": "fa:16:3e:1b:c5:21", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ff723d-9e", "ovs_interfaceid": "a3ff723d-9e34-45ef-881d-6534126ae169", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:26:59 compute-2 nova_compute[226829]: 2026-01-31 08:26:59.299 226833 DEBUG oslo_concurrency.lockutils [req-bfd7a972-6d8e-4a18-a68e-e5315548168e req-8042af54-8082-4cbd-98b5-27ca4e494df2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:26:59 compute-2 nova_compute[226829]: 2026-01-31 08:26:59.306 226833 DEBUG oslo_concurrency.processutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e89132fd-2d0c-475e-a3c5-0407e4cbbbb8/disk.config e89132fd-2d0c-475e-a3c5-0407e4cbbbb8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:59 compute-2 nova_compute[226829]: 2026-01-31 08:26:59.307 226833 INFO nova.virt.libvirt.driver [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Deleting local config drive /var/lib/nova/instances/e89132fd-2d0c-475e-a3c5-0407e4cbbbb8/disk.config because it was imported into RBD.
Jan 31 08:26:59 compute-2 kernel: tapa3ff723d-9e: entered promiscuous mode
Jan 31 08:26:59 compute-2 NetworkManager[48999]: <info>  [1769848019.3623] manager: (tapa3ff723d-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/324)
Jan 31 08:26:59 compute-2 nova_compute[226829]: 2026-01-31 08:26:59.387 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:59 compute-2 ovn_controller[133834]: 2026-01-31T08:26:59Z|00651|binding|INFO|Claiming lport a3ff723d-9e34-45ef-881d-6534126ae169 for this chassis.
Jan 31 08:26:59 compute-2 ovn_controller[133834]: 2026-01-31T08:26:59Z|00652|binding|INFO|a3ff723d-9e34-45ef-881d-6534126ae169: Claiming fa:16:3e:1b:c5:21 10.100.0.5
Jan 31 08:26:59 compute-2 nova_compute[226829]: 2026-01-31 08:26:59.389 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:26:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Cumulative writes: 13K writes, 66K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s
                                           Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.13 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1751 writes, 9011 keys, 1751 commit groups, 1.0 writes per commit group, ingest: 16.64 MB, 0.03 MB/s
                                           Interval WAL: 1751 writes, 1751 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     76.8      1.05              0.24        42    0.025       0      0       0.0       0.0
                                             L6      1/0   10.16 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.0    153.8    130.9      3.09              1.16        41    0.075    281K    22K       0.0       0.0
                                            Sum      1/0   10.16 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.0    114.7    117.1      4.15              1.41        83    0.050    281K    22K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.4    149.6    149.8      0.68              0.27        16    0.042     72K   4169       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    153.8    130.9      3.09              1.16        41    0.075    281K    22K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     76.9      1.05              0.24        41    0.026       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.079, interval 0.012
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.47 GB write, 0.10 MB/s write, 0.46 GB read, 0.10 MB/s read, 4.1 seconds
                                           Interval compaction: 0.10 GB write, 0.17 MB/s write, 0.10 GB read, 0.17 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559ef3d1f1f0#2 capacity: 304.00 MB usage: 51.81 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000334 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3020,49.75 MB,16.3643%) FilterBlock(83,788.36 KB,0.253251%) IndexBlock(83,1.29 MB,0.425524%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 08:26:59 compute-2 ovn_controller[133834]: 2026-01-31T08:26:59Z|00653|binding|INFO|Setting lport a3ff723d-9e34-45ef-881d-6534126ae169 ovn-installed in OVS
Jan 31 08:26:59 compute-2 ovn_controller[133834]: 2026-01-31T08:26:59Z|00654|binding|INFO|Setting lport a3ff723d-9e34-45ef-881d-6534126ae169 up in Southbound
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:59.395 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:c5:21 10.100.0.5'], port_security=['fa:16:3e:1b:c5:21 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e89132fd-2d0c-475e-a3c5-0407e4cbbbb8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbdbdee526499e90da7e971ede68d3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8d5863f6-4aa0-486a-96ed-eb36f7d4a61d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=379aaecc-3dde-4f00-82cf-dc8bd8367d4b, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=a3ff723d-9e34-45ef-881d-6534126ae169) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:26:59 compute-2 nova_compute[226829]: 2026-01-31 08:26:59.399 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:59.400 143841 INFO neutron.agent.ovn.metadata.agent [-] Port a3ff723d-9e34-45ef-881d-6534126ae169 in datapath 26ad6a8f-33d5-432e-83d3-63a9d2f165ad bound to our chassis
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:59.404 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 26ad6a8f-33d5-432e-83d3-63a9d2f165ad
Jan 31 08:26:59 compute-2 nova_compute[226829]: 2026-01-31 08:26:59.406 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:59 compute-2 systemd-udevd[300564]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:59.417 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2c930e0c-3e41-4b42-aa70-c8942642c574]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:59.419 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap26ad6a8f-31 in ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:26:59 compute-2 systemd-machined[195142]: New machine qemu-75-instance-000000a4.
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:59.422 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap26ad6a8f-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:59.422 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[535410f7-b742-438f-9836-67a80ed63338]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:59.424 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[45bd7a46-4628-4708-8595-34af4ebc003e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:59 compute-2 NetworkManager[48999]: <info>  [1769848019.4358] device (tapa3ff723d-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:26:59 compute-2 NetworkManager[48999]: <info>  [1769848019.4367] device (tapa3ff723d-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:26:59 compute-2 systemd[1]: Started Virtual Machine qemu-75-instance-000000a4.
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:59.439 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[350df4e9-4082-4673-8aa1-209c3b546fde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:59.460 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6630bc1a-8fba-4b94-a804-34152ac3f72e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:59 compute-2 podman[300554]: 2026-01-31 08:26:59.480882213 +0000 UTC m=+0.074412151 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:59.490 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[001c23d5-2253-4dd1-85b6-4b58d56b614a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:59 compute-2 NetworkManager[48999]: <info>  [1769848019.4968] manager: (tap26ad6a8f-30): new Veth device (/org/freedesktop/NetworkManager/Devices/325)
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:59.496 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f1a73bfa-dc63-4bd8-9db6-1ebeee1cbde9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:59 compute-2 systemd-udevd[300571]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:26:59 compute-2 nova_compute[226829]: 2026-01-31 08:26:59.512 226833 DEBUG oslo_concurrency.processutils [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/651d6b65-a0ee-4942-bf60-88b037eb6508/b1f6ac2d621c44b0bd5b140b53171834.tmp" returned: 1 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:59 compute-2 nova_compute[226829]: 2026-01-31 08:26:59.513 226833 DEBUG oslo_concurrency.processutils [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/651d6b65-a0ee-4942-bf60-88b037eb6508/b1f6ac2d621c44b0bd5b140b53171834.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 31 08:26:59 compute-2 nova_compute[226829]: 2026-01-31 08:26:59.513 226833 DEBUG nova.virt.libvirt.volume.remotefs [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Creating directory /var/lib/nova/instances/651d6b65-a0ee-4942-bf60-88b037eb6508 on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Jan 31 08:26:59 compute-2 nova_compute[226829]: 2026-01-31 08:26:59.514 226833 DEBUG oslo_concurrency.processutils [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/651d6b65-a0ee-4942-bf60-88b037eb6508 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:59.521 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[cd8a3943-b511-42aa-a34e-2347432a0cbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:59.525 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[c4197994-40fa-4b19-95f4-74128c201513]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:59 compute-2 NetworkManager[48999]: <info>  [1769848019.5427] device (tap26ad6a8f-30): carrier: link connected
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:59.544 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[42791a8d-6f73-4fd1-be30-36b7ab859cad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:26:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:59.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:59.558 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[87c40f4c-ccc4-407e-94bd-f60284085138]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26ad6a8f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:60:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827201, 'reachable_time': 31398, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300610, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:59.570 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3059973b-b011-42bf-993e-cf6ca45413fa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3a:605d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 827201, 'tstamp': 827201}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300611, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:59.584 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9b3114e6-ead8-407f-9477-3f3c7c658869]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26ad6a8f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:60:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827201, 'reachable_time': 31398, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300612, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:59.604 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9d04d839-edbc-4c21-8a18-5caecea359d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:59.652 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ef114e5f-87ed-4a99-bd7e-c8ba5d94c3a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:59.655 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26ad6a8f-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:59.656 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:59.656 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26ad6a8f-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:26:59 compute-2 kernel: tap26ad6a8f-30: entered promiscuous mode
Jan 31 08:26:59 compute-2 nova_compute[226829]: 2026-01-31 08:26:59.658 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:59 compute-2 NetworkManager[48999]: <info>  [1769848019.6594] manager: (tap26ad6a8f-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/326)
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:59.660 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap26ad6a8f-30, col_values=(('external_ids', {'iface-id': '0b9d56f1-a803-44f1-b709-3bfbc71e0f57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:26:59 compute-2 ovn_controller[133834]: 2026-01-31T08:26:59Z|00655|binding|INFO|Releasing lport 0b9d56f1-a803-44f1-b709-3bfbc71e0f57 from this chassis (sb_readonly=0)
Jan 31 08:26:59 compute-2 nova_compute[226829]: 2026-01-31 08:26:59.663 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:59.663 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/26ad6a8f-33d5-432e-83d3-63a9d2f165ad.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/26ad6a8f-33d5-432e-83d3-63a9d2f165ad.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:59.664 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a74da003-7fa9-4579-9f83-519e9d83540d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:59.665 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-26ad6a8f-33d5-432e-83d3-63a9d2f165ad
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/26ad6a8f-33d5-432e-83d3-63a9d2f165ad.pid.haproxy
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 26ad6a8f-33d5-432e-83d3-63a9d2f165ad
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:26:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:26:59.666 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'env', 'PROCESS_TAG=haproxy-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/26ad6a8f-33d5-432e-83d3-63a9d2f165ad.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:26:59 compute-2 nova_compute[226829]: 2026-01-31 08:26:59.667 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:59 compute-2 nova_compute[226829]: 2026-01-31 08:26:59.716 226833 DEBUG oslo_concurrency.processutils [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/651d6b65-a0ee-4942-bf60-88b037eb6508" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:59 compute-2 nova_compute[226829]: 2026-01-31 08:26:59.723 226833 DEBUG nova.virt.libvirt.driver [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 08:26:59 compute-2 nova_compute[226829]: 2026-01-31 08:26:59.861 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848019.860762, e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:26:59 compute-2 nova_compute[226829]: 2026-01-31 08:26:59.862 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] VM Started (Lifecycle Event)
Jan 31 08:26:59 compute-2 nova_compute[226829]: 2026-01-31 08:26:59.896 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:26:59 compute-2 nova_compute[226829]: 2026-01-31 08:26:59.902 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848019.8619254, e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:26:59 compute-2 nova_compute[226829]: 2026-01-31 08:26:59.903 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] VM Paused (Lifecycle Event)
Jan 31 08:26:59 compute-2 nova_compute[226829]: 2026-01-31 08:26:59.952 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:26:59 compute-2 nova_compute[226829]: 2026-01-31 08:26:59.958 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:27:00 compute-2 nova_compute[226829]: 2026-01-31 08:27:00.003 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:27:00 compute-2 podman[300687]: 2026-01-31 08:27:00.065794454 +0000 UTC m=+0.073078915 container create 854f11d8d7ef31db7f99a3062e2e3e5807fcfeb6443922456615351350948535 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 08:27:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:00.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:00 compute-2 systemd[1]: Started libpod-conmon-854f11d8d7ef31db7f99a3062e2e3e5807fcfeb6443922456615351350948535.scope.
Jan 31 08:27:00 compute-2 podman[300687]: 2026-01-31 08:27:00.02143946 +0000 UTC m=+0.028724001 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:27:00 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:27:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e90e4124f66e1f1fa984bb617bd7f7496a89441dcd3d941184d9a52beeb504be/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:00 compute-2 podman[300687]: 2026-01-31 08:27:00.158414737 +0000 UTC m=+0.165699218 container init 854f11d8d7ef31db7f99a3062e2e3e5807fcfeb6443922456615351350948535 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 31 08:27:00 compute-2 podman[300687]: 2026-01-31 08:27:00.163932377 +0000 UTC m=+0.171216868 container start 854f11d8d7ef31db7f99a3062e2e3e5807fcfeb6443922456615351350948535 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:27:00 compute-2 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[300702]: [NOTICE]   (300706) : New worker (300708) forked
Jan 31 08:27:00 compute-2 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[300702]: [NOTICE]   (300706) : Loading success.
Jan 31 08:27:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:00 compute-2 nova_compute[226829]: 2026-01-31 08:27:00.529 226833 DEBUG nova.compute.manager [req-db823e3f-5116-4848-abfc-f85c10c0f25f req-9ca31cf5-c6d8-464f-a36b-75eff7fde641 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Received event network-vif-plugged-a3ff723d-9e34-45ef-881d-6534126ae169 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:27:00 compute-2 nova_compute[226829]: 2026-01-31 08:27:00.530 226833 DEBUG oslo_concurrency.lockutils [req-db823e3f-5116-4848-abfc-f85c10c0f25f req-9ca31cf5-c6d8-464f-a36b-75eff7fde641 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:00 compute-2 nova_compute[226829]: 2026-01-31 08:27:00.530 226833 DEBUG oslo_concurrency.lockutils [req-db823e3f-5116-4848-abfc-f85c10c0f25f req-9ca31cf5-c6d8-464f-a36b-75eff7fde641 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:00 compute-2 nova_compute[226829]: 2026-01-31 08:27:00.531 226833 DEBUG oslo_concurrency.lockutils [req-db823e3f-5116-4848-abfc-f85c10c0f25f req-9ca31cf5-c6d8-464f-a36b-75eff7fde641 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:00 compute-2 nova_compute[226829]: 2026-01-31 08:27:00.531 226833 DEBUG nova.compute.manager [req-db823e3f-5116-4848-abfc-f85c10c0f25f req-9ca31cf5-c6d8-464f-a36b-75eff7fde641 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Processing event network-vif-plugged-a3ff723d-9e34-45ef-881d-6534126ae169 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:27:00 compute-2 nova_compute[226829]: 2026-01-31 08:27:00.533 226833 DEBUG nova.compute.manager [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:27:00 compute-2 nova_compute[226829]: 2026-01-31 08:27:00.538 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848020.5380602, e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:27:00 compute-2 nova_compute[226829]: 2026-01-31 08:27:00.539 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] VM Resumed (Lifecycle Event)
Jan 31 08:27:00 compute-2 nova_compute[226829]: 2026-01-31 08:27:00.542 226833 DEBUG nova.virt.libvirt.driver [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:27:00 compute-2 nova_compute[226829]: 2026-01-31 08:27:00.548 226833 INFO nova.virt.libvirt.driver [-] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Instance spawned successfully.
Jan 31 08:27:00 compute-2 nova_compute[226829]: 2026-01-31 08:27:00.549 226833 DEBUG nova.virt.libvirt.driver [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:27:00 compute-2 nova_compute[226829]: 2026-01-31 08:27:00.573 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:27:00 compute-2 nova_compute[226829]: 2026-01-31 08:27:00.579 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:27:00 compute-2 nova_compute[226829]: 2026-01-31 08:27:00.596 226833 DEBUG nova.virt.libvirt.driver [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:27:00 compute-2 nova_compute[226829]: 2026-01-31 08:27:00.597 226833 DEBUG nova.virt.libvirt.driver [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:27:00 compute-2 nova_compute[226829]: 2026-01-31 08:27:00.599 226833 DEBUG nova.virt.libvirt.driver [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:27:00 compute-2 nova_compute[226829]: 2026-01-31 08:27:00.600 226833 DEBUG nova.virt.libvirt.driver [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:27:00 compute-2 nova_compute[226829]: 2026-01-31 08:27:00.601 226833 DEBUG nova.virt.libvirt.driver [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:27:00 compute-2 nova_compute[226829]: 2026-01-31 08:27:00.602 226833 DEBUG nova.virt.libvirt.driver [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:27:00 compute-2 nova_compute[226829]: 2026-01-31 08:27:00.611 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:27:00 compute-2 nova_compute[226829]: 2026-01-31 08:27:00.684 226833 INFO nova.compute.manager [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Took 11.81 seconds to spawn the instance on the hypervisor.
Jan 31 08:27:00 compute-2 nova_compute[226829]: 2026-01-31 08:27:00.684 226833 DEBUG nova.compute.manager [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:27:00 compute-2 nova_compute[226829]: 2026-01-31 08:27:00.765 226833 INFO nova.compute.manager [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Took 13.24 seconds to build instance.
Jan 31 08:27:00 compute-2 nova_compute[226829]: 2026-01-31 08:27:00.786 226833 DEBUG oslo_concurrency.lockutils [None req-87288db5-ff1d-4fdf-94cc-e31421e1326e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:01 compute-2 ceph-mon[77282]: pgmap v2800: 305 pgs: 305 active+clean; 610 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.0 MiB/s wr, 153 op/s
Jan 31 08:27:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:01.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:01 compute-2 nova_compute[226829]: 2026-01-31 08:27:01.776 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:27:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:02.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:27:02 compute-2 kernel: tapec7fbb6b-9a (unregistering): left promiscuous mode
Jan 31 08:27:02 compute-2 NetworkManager[48999]: <info>  [1769848022.1431] device (tapec7fbb6b-9a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.150 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:02 compute-2 ovn_controller[133834]: 2026-01-31T08:27:02Z|00656|binding|INFO|Releasing lport ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f from this chassis (sb_readonly=0)
Jan 31 08:27:02 compute-2 ovn_controller[133834]: 2026-01-31T08:27:02Z|00657|binding|INFO|Setting lport ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f down in Southbound
Jan 31 08:27:02 compute-2 ovn_controller[133834]: 2026-01-31T08:27:02Z|00658|binding|INFO|Removing iface tapec7fbb6b-9a ovn-installed in OVS
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.153 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.162 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:02.160 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:f3:24 10.100.0.11'], port_security=['fa:16:3e:75:f3:24 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '651d6b65-a0ee-4942-bf60-88b037eb6508', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cc669a9b-1a99-4cea-8b35-6d932fb2087c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '53c8d6c3-3fc8-4e05-8f45-013b15b35751', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eeea185c-a70e-4e33-a1d7-88e2fb6e75b6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:27:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:02.167 143841 INFO neutron.agent.ovn.metadata.agent [-] Port ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f in datapath cc669a9b-1a99-4cea-8b35-6d932fb2087c unbound from our chassis
Jan 31 08:27:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:02.174 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cc669a9b-1a99-4cea-8b35-6d932fb2087c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:27:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:02.178 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[638cd969-4f82-405a-9bbe-4248b3c48ea6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:02.179 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c namespace which is not needed anymore
Jan 31 08:27:02 compute-2 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d000000a1.scope: Deactivated successfully.
Jan 31 08:27:02 compute-2 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d000000a1.scope: Consumed 14.226s CPU time.
Jan 31 08:27:02 compute-2 systemd-machined[195142]: Machine qemu-74-instance-000000a1 terminated.
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.218 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:02 compute-2 neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c[299826]: [NOTICE]   (299830) : haproxy version is 2.8.14-c23fe91
Jan 31 08:27:02 compute-2 neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c[299826]: [NOTICE]   (299830) : path to executable is /usr/sbin/haproxy
Jan 31 08:27:02 compute-2 neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c[299826]: [WARNING]  (299830) : Exiting Master process...
Jan 31 08:27:02 compute-2 neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c[299826]: [ALERT]    (299830) : Current worker (299832) exited with code 143 (Terminated)
Jan 31 08:27:02 compute-2 neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c[299826]: [WARNING]  (299830) : All workers exited. Exiting... (0)
Jan 31 08:27:02 compute-2 systemd[1]: libpod-12775654224afbf1f8a93c3ed80496203b5724418ac0384e7440f861025c1416.scope: Deactivated successfully.
Jan 31 08:27:02 compute-2 podman[300740]: 2026-01-31 08:27:02.329533489 +0000 UTC m=+0.046764420 container died 12775654224afbf1f8a93c3ed80496203b5724418ac0384e7440f861025c1416 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 08:27:02 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-12775654224afbf1f8a93c3ed80496203b5724418ac0384e7440f861025c1416-userdata-shm.mount: Deactivated successfully.
Jan 31 08:27:02 compute-2 systemd[1]: var-lib-containers-storage-overlay-5c4da0da63bb09708574a05cda79f6d06dfe41659777b984c38a01f7a81a0df6-merged.mount: Deactivated successfully.
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.376 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:02 compute-2 podman[300740]: 2026-01-31 08:27:02.391123921 +0000 UTC m=+0.108354862 container cleanup 12775654224afbf1f8a93c3ed80496203b5724418ac0384e7440f861025c1416 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:27:02 compute-2 systemd[1]: libpod-conmon-12775654224afbf1f8a93c3ed80496203b5724418ac0384e7440f861025c1416.scope: Deactivated successfully.
Jan 31 08:27:02 compute-2 podman[300771]: 2026-01-31 08:27:02.458612332 +0000 UTC m=+0.043685967 container remove 12775654224afbf1f8a93c3ed80496203b5724418ac0384e7440f861025c1416 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:27:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:02.463 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[edd28013-9725-4e96-b314-ebd078e0677c]: (4, ('Sat Jan 31 08:27:02 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c (12775654224afbf1f8a93c3ed80496203b5724418ac0384e7440f861025c1416)\n12775654224afbf1f8a93c3ed80496203b5724418ac0384e7440f861025c1416\nSat Jan 31 08:27:02 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c (12775654224afbf1f8a93c3ed80496203b5724418ac0384e7440f861025c1416)\n12775654224afbf1f8a93c3ed80496203b5724418ac0384e7440f861025c1416\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:02.465 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6960482f-6899-42a2-b86e-7a8be10f1d67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:02.465 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc669a9b-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:27:02 compute-2 kernel: tapcc669a9b-10: left promiscuous mode
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.471 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.477 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:02.484 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[38ee2a8b-237d-4c2d-9207-6d882cb81b23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:02.514 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[06d2c74e-8d77-4937-9e3a-bb0072b8585d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:02.515 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c2ce40b1-71ea-4b5b-ad32-616a429035d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:02.527 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2c2e29bf-aee2-4032-927b-c1f09585f6e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 823809, 'reachable_time': 17720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300800, 'error': None, 'target': 'ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:02 compute-2 systemd[1]: run-netns-ovnmeta\x2dcc669a9b\x2d1a99\x2d4cea\x2d8b35\x2d6d932fb2087c.mount: Deactivated successfully.
Jan 31 08:27:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:02.531 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:27:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:02.531 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[fae1ad4f-d3bb-4bd9-831e-9f87a807ad0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.740 226833 DEBUG nova.compute.manager [req-b88f55b4-d96b-4df9-9701-1a891812a6d6 req-6c39127d-45fb-4073-a92d-e8f580341408 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Received event network-vif-plugged-a3ff723d-9e34-45ef-881d-6534126ae169 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.741 226833 DEBUG oslo_concurrency.lockutils [req-b88f55b4-d96b-4df9-9701-1a891812a6d6 req-6c39127d-45fb-4073-a92d-e8f580341408 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.741 226833 DEBUG oslo_concurrency.lockutils [req-b88f55b4-d96b-4df9-9701-1a891812a6d6 req-6c39127d-45fb-4073-a92d-e8f580341408 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.742 226833 DEBUG oslo_concurrency.lockutils [req-b88f55b4-d96b-4df9-9701-1a891812a6d6 req-6c39127d-45fb-4073-a92d-e8f580341408 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.742 226833 DEBUG nova.compute.manager [req-b88f55b4-d96b-4df9-9701-1a891812a6d6 req-6c39127d-45fb-4073-a92d-e8f580341408 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] No waiting events found dispatching network-vif-plugged-a3ff723d-9e34-45ef-881d-6534126ae169 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.742 226833 WARNING nova.compute.manager [req-b88f55b4-d96b-4df9-9701-1a891812a6d6 req-6c39127d-45fb-4073-a92d-e8f580341408 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Received unexpected event network-vif-plugged-a3ff723d-9e34-45ef-881d-6534126ae169 for instance with vm_state active and task_state None.
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.743 226833 DEBUG nova.compute.manager [req-b88f55b4-d96b-4df9-9701-1a891812a6d6 req-6c39127d-45fb-4073-a92d-e8f580341408 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received event network-vif-unplugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.743 226833 DEBUG oslo_concurrency.lockutils [req-b88f55b4-d96b-4df9-9701-1a891812a6d6 req-6c39127d-45fb-4073-a92d-e8f580341408 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.743 226833 DEBUG oslo_concurrency.lockutils [req-b88f55b4-d96b-4df9-9701-1a891812a6d6 req-6c39127d-45fb-4073-a92d-e8f580341408 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.743 226833 DEBUG oslo_concurrency.lockutils [req-b88f55b4-d96b-4df9-9701-1a891812a6d6 req-6c39127d-45fb-4073-a92d-e8f580341408 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.744 226833 DEBUG nova.compute.manager [req-b88f55b4-d96b-4df9-9701-1a891812a6d6 req-6c39127d-45fb-4073-a92d-e8f580341408 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] No waiting events found dispatching network-vif-unplugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.744 226833 WARNING nova.compute.manager [req-b88f55b4-d96b-4df9-9701-1a891812a6d6 req-6c39127d-45fb-4073-a92d-e8f580341408 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received unexpected event network-vif-unplugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f for instance with vm_state active and task_state resize_migrating.
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.747 226833 INFO nova.virt.libvirt.driver [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Instance shutdown successfully after 3 seconds.
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.752 226833 INFO nova.virt.libvirt.driver [-] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Instance destroyed successfully.
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.754 226833 DEBUG nova.virt.libvirt.vif [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:26:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2107337279',display_name='tempest-TestNetworkAdvancedServerOps-server-2107337279',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2107337279',id=161,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWmLoXJJT8HJ06ToetETaDpi1L/xOsFHTQftPXbFuOs01yOf7k2Y65Sy9zq5BZjupL2Q9fI+9QOPb9ecuesAa9df6vRKXVMZCeU6kML4//7cHw0FciNooD1B0Aw/wAsgQ==',key_name='tempest-TestNetworkAdvancedServerOps-1409064792',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:26:26Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-ka16rdmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:26:53Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=651d6b65-a0ee-4942-bf60-88b037eb6508,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1449295874", "vif_mac": "fa:16:3e:75:f3:24"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.754 226833 DEBUG nova.network.os_vif_util [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Converting VIF {"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1449295874", "vif_mac": "fa:16:3e:75:f3:24"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.755 226833 DEBUG nova.network.os_vif_util [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:75:f3:24,bridge_name='br-int',has_traffic_filtering=True,id=ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f,network=Network(cc669a9b-1a99-4cea-8b35-6d932fb2087c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec7fbb6b-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.756 226833 DEBUG os_vif [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:75:f3:24,bridge_name='br-int',has_traffic_filtering=True,id=ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f,network=Network(cc669a9b-1a99-4cea-8b35-6d932fb2087c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec7fbb6b-9a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.759 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.759 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec7fbb6b-9a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.789 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.791 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.793 226833 INFO os_vif [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:75:f3:24,bridge_name='br-int',has_traffic_filtering=True,id=ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f,network=Network(cc669a9b-1a99-4cea-8b35-6d932fb2087c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec7fbb6b-9a')
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.797 226833 DEBUG nova.virt.libvirt.driver [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.798 226833 DEBUG nova.virt.libvirt.driver [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:27:02 compute-2 nova_compute[226829]: 2026-01-31 08:27:02.990 226833 DEBUG neutronclient.v2_0.client [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 31 08:27:03 compute-2 nova_compute[226829]: 2026-01-31 08:27:03.390 226833 DEBUG oslo_concurrency.lockutils [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Acquiring lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:03 compute-2 nova_compute[226829]: 2026-01-31 08:27:03.391 226833 DEBUG oslo_concurrency.lockutils [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:03 compute-2 nova_compute[226829]: 2026-01-31 08:27:03.391 226833 DEBUG oslo_concurrency.lockutils [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:03 compute-2 ceph-mon[77282]: pgmap v2801: 305 pgs: 305 active+clean; 619 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 134 op/s
Jan 31 08:27:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:27:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:03.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:27:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:27:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:04.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:27:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:04.489 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:27:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:04.490 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:27:04 compute-2 nova_compute[226829]: 2026-01-31 08:27:04.492 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:04 compute-2 nova_compute[226829]: 2026-01-31 08:27:04.992 226833 DEBUG nova.compute.manager [req-62080176-df66-489c-9ddf-7df297fa5d83 req-822b21f9-e22b-4946-a7a5-0eda01da2d86 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:27:04 compute-2 nova_compute[226829]: 2026-01-31 08:27:04.992 226833 DEBUG oslo_concurrency.lockutils [req-62080176-df66-489c-9ddf-7df297fa5d83 req-822b21f9-e22b-4946-a7a5-0eda01da2d86 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:04 compute-2 nova_compute[226829]: 2026-01-31 08:27:04.992 226833 DEBUG oslo_concurrency.lockutils [req-62080176-df66-489c-9ddf-7df297fa5d83 req-822b21f9-e22b-4946-a7a5-0eda01da2d86 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:04 compute-2 nova_compute[226829]: 2026-01-31 08:27:04.993 226833 DEBUG oslo_concurrency.lockutils [req-62080176-df66-489c-9ddf-7df297fa5d83 req-822b21f9-e22b-4946-a7a5-0eda01da2d86 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:04 compute-2 nova_compute[226829]: 2026-01-31 08:27:04.993 226833 DEBUG nova.compute.manager [req-62080176-df66-489c-9ddf-7df297fa5d83 req-822b21f9-e22b-4946-a7a5-0eda01da2d86 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] No waiting events found dispatching network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:27:04 compute-2 nova_compute[226829]: 2026-01-31 08:27:04.993 226833 WARNING nova.compute.manager [req-62080176-df66-489c-9ddf-7df297fa5d83 req-822b21f9-e22b-4946-a7a5-0eda01da2d86 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received unexpected event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f for instance with vm_state active and task_state resize_migrated.
Jan 31 08:27:05 compute-2 nova_compute[226829]: 2026-01-31 08:27:05.209 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:05 compute-2 nova_compute[226829]: 2026-01-31 08:27:05.233 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:05 compute-2 nova_compute[226829]: 2026-01-31 08:27:05.233 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:27:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:05 compute-2 ceph-mon[77282]: pgmap v2802: 305 pgs: 305 active+clean; 627 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 127 op/s
Jan 31 08:27:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:05.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:05 compute-2 nova_compute[226829]: 2026-01-31 08:27:05.613 226833 DEBUG nova.compute.manager [req-0b542e79-75f7-4bd5-a610-ada7a5ac4fc2 req-6071908c-fdd5-4cc5-8fe5-f877d1b37897 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received event network-changed-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:27:05 compute-2 nova_compute[226829]: 2026-01-31 08:27:05.613 226833 DEBUG nova.compute.manager [req-0b542e79-75f7-4bd5-a610-ada7a5ac4fc2 req-6071908c-fdd5-4cc5-8fe5-f877d1b37897 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Refreshing instance network info cache due to event network-changed-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:27:05 compute-2 nova_compute[226829]: 2026-01-31 08:27:05.614 226833 DEBUG oslo_concurrency.lockutils [req-0b542e79-75f7-4bd5-a610-ada7a5ac4fc2 req-6071908c-fdd5-4cc5-8fe5-f877d1b37897 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:27:05 compute-2 nova_compute[226829]: 2026-01-31 08:27:05.614 226833 DEBUG oslo_concurrency.lockutils [req-0b542e79-75f7-4bd5-a610-ada7a5ac4fc2 req-6071908c-fdd5-4cc5-8fe5-f877d1b37897 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:27:05 compute-2 nova_compute[226829]: 2026-01-31 08:27:05.614 226833 DEBUG nova.network.neutron [req-0b542e79-75f7-4bd5-a610-ada7a5ac4fc2 req-6071908c-fdd5-4cc5-8fe5-f877d1b37897 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Refreshing network info cache for port ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:27:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:06.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:06 compute-2 nova_compute[226829]: 2026-01-31 08:27:06.779 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:06.904 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:06.904 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:06.905 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:07 compute-2 nova_compute[226829]: 2026-01-31 08:27:07.400 226833 DEBUG nova.compute.manager [req-ac840916-f1b1-4ec8-ad72-eae15f28b9c2 req-8f3578b7-e253-4c08-a66c-ac1a38764d41 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Received event network-changed-a3ff723d-9e34-45ef-881d-6534126ae169 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:27:07 compute-2 nova_compute[226829]: 2026-01-31 08:27:07.400 226833 DEBUG nova.compute.manager [req-ac840916-f1b1-4ec8-ad72-eae15f28b9c2 req-8f3578b7-e253-4c08-a66c-ac1a38764d41 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Refreshing instance network info cache due to event network-changed-a3ff723d-9e34-45ef-881d-6534126ae169. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:27:07 compute-2 nova_compute[226829]: 2026-01-31 08:27:07.401 226833 DEBUG oslo_concurrency.lockutils [req-ac840916-f1b1-4ec8-ad72-eae15f28b9c2 req-8f3578b7-e253-4c08-a66c-ac1a38764d41 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:27:07 compute-2 nova_compute[226829]: 2026-01-31 08:27:07.401 226833 DEBUG oslo_concurrency.lockutils [req-ac840916-f1b1-4ec8-ad72-eae15f28b9c2 req-8f3578b7-e253-4c08-a66c-ac1a38764d41 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:27:07 compute-2 nova_compute[226829]: 2026-01-31 08:27:07.401 226833 DEBUG nova.network.neutron [req-ac840916-f1b1-4ec8-ad72-eae15f28b9c2 req-8f3578b7-e253-4c08-a66c-ac1a38764d41 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Refreshing network info cache for port a3ff723d-9e34-45ef-881d-6534126ae169 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:27:07 compute-2 ceph-mon[77282]: pgmap v2803: 305 pgs: 305 active+clean; 643 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 175 op/s
Jan 31 08:27:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:07.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:07 compute-2 nova_compute[226829]: 2026-01-31 08:27:07.788 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:08 compute-2 nova_compute[226829]: 2026-01-31 08:27:08.088 226833 DEBUG nova.network.neutron [req-0b542e79-75f7-4bd5-a610-ada7a5ac4fc2 req-6071908c-fdd5-4cc5-8fe5-f877d1b37897 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Updated VIF entry in instance network info cache for port ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:27:08 compute-2 nova_compute[226829]: 2026-01-31 08:27:08.088 226833 DEBUG nova.network.neutron [req-0b542e79-75f7-4bd5-a610-ada7a5ac4fc2 req-6071908c-fdd5-4cc5-8fe5-f877d1b37897 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Updating instance_info_cache with network_info: [{"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:27:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:08.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:08 compute-2 nova_compute[226829]: 2026-01-31 08:27:08.118 226833 DEBUG oslo_concurrency.lockutils [req-0b542e79-75f7-4bd5-a610-ada7a5ac4fc2 req-6071908c-fdd5-4cc5-8fe5-f877d1b37897 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:27:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:08.493 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:27:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2764240598' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1378778972' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e359 e359: 3 total, 3 up, 3 in
Jan 31 08:27:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:09.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:09 compute-2 ceph-mon[77282]: pgmap v2804: 305 pgs: 305 active+clean; 642 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.1 MiB/s wr, 162 op/s
Jan 31 08:27:09 compute-2 ceph-mon[77282]: osdmap e359: 3 total, 3 up, 3 in
Jan 31 08:27:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3732445394' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:27:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:10.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:27:10 compute-2 nova_compute[226829]: 2026-01-31 08:27:10.127 226833 DEBUG nova.network.neutron [req-ac840916-f1b1-4ec8-ad72-eae15f28b9c2 req-8f3578b7-e253-4c08-a66c-ac1a38764d41 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Updated VIF entry in instance network info cache for port a3ff723d-9e34-45ef-881d-6534126ae169. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:27:10 compute-2 nova_compute[226829]: 2026-01-31 08:27:10.129 226833 DEBUG nova.network.neutron [req-ac840916-f1b1-4ec8-ad72-eae15f28b9c2 req-8f3578b7-e253-4c08-a66c-ac1a38764d41 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Updating instance_info_cache with network_info: [{"id": "a3ff723d-9e34-45ef-881d-6534126ae169", "address": "fa:16:3e:1b:c5:21", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ff723d-9e", "ovs_interfaceid": "a3ff723d-9e34-45ef-881d-6534126ae169", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:27:10 compute-2 nova_compute[226829]: 2026-01-31 08:27:10.167 226833 DEBUG oslo_concurrency.lockutils [req-ac840916-f1b1-4ec8-ad72-eae15f28b9c2 req-8f3578b7-e253-4c08-a66c-ac1a38764d41 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:27:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:10 compute-2 nova_compute[226829]: 2026-01-31 08:27:10.519 226833 DEBUG nova.compute.manager [req-7ee31c96-5fa2-4f80-b091-e61b3c492b4f req-84af3f10-3763-491f-9ba6-f7051570293c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:27:10 compute-2 nova_compute[226829]: 2026-01-31 08:27:10.520 226833 DEBUG oslo_concurrency.lockutils [req-7ee31c96-5fa2-4f80-b091-e61b3c492b4f req-84af3f10-3763-491f-9ba6-f7051570293c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:10 compute-2 nova_compute[226829]: 2026-01-31 08:27:10.520 226833 DEBUG oslo_concurrency.lockutils [req-7ee31c96-5fa2-4f80-b091-e61b3c492b4f req-84af3f10-3763-491f-9ba6-f7051570293c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:10 compute-2 nova_compute[226829]: 2026-01-31 08:27:10.521 226833 DEBUG oslo_concurrency.lockutils [req-7ee31c96-5fa2-4f80-b091-e61b3c492b4f req-84af3f10-3763-491f-9ba6-f7051570293c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:10 compute-2 nova_compute[226829]: 2026-01-31 08:27:10.521 226833 DEBUG nova.compute.manager [req-7ee31c96-5fa2-4f80-b091-e61b3c492b4f req-84af3f10-3763-491f-9ba6-f7051570293c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] No waiting events found dispatching network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:27:10 compute-2 nova_compute[226829]: 2026-01-31 08:27:10.522 226833 WARNING nova.compute.manager [req-7ee31c96-5fa2-4f80-b091-e61b3c492b4f req-84af3f10-3763-491f-9ba6-f7051570293c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received unexpected event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f for instance with vm_state active and task_state resize_finish.
Jan 31 08:27:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2994273057' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:11 compute-2 sudo[300805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:27:11 compute-2 sudo[300805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:11 compute-2 sudo[300805]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:11 compute-2 sudo[300830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:27:11 compute-2 sudo[300830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:11 compute-2 sudo[300830]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:27:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:11.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:27:11 compute-2 ceph-mon[77282]: pgmap v2806: 305 pgs: 305 active+clean; 626 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.0 MiB/s wr, 225 op/s
Jan 31 08:27:11 compute-2 nova_compute[226829]: 2026-01-31 08:27:11.780 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:12.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3058255815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:27:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/4158850951' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:12 compute-2 nova_compute[226829]: 2026-01-31 08:27:12.723 226833 DEBUG nova.compute.manager [req-3bd68cb5-39c7-462c-9c46-1a98db56ad5c req-a4cee998-106b-4c0e-8815-4e5edcb06609 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:27:12 compute-2 nova_compute[226829]: 2026-01-31 08:27:12.724 226833 DEBUG oslo_concurrency.lockutils [req-3bd68cb5-39c7-462c-9c46-1a98db56ad5c req-a4cee998-106b-4c0e-8815-4e5edcb06609 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:12 compute-2 nova_compute[226829]: 2026-01-31 08:27:12.724 226833 DEBUG oslo_concurrency.lockutils [req-3bd68cb5-39c7-462c-9c46-1a98db56ad5c req-a4cee998-106b-4c0e-8815-4e5edcb06609 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:12 compute-2 nova_compute[226829]: 2026-01-31 08:27:12.724 226833 DEBUG oslo_concurrency.lockutils [req-3bd68cb5-39c7-462c-9c46-1a98db56ad5c req-a4cee998-106b-4c0e-8815-4e5edcb06609 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:12 compute-2 nova_compute[226829]: 2026-01-31 08:27:12.724 226833 DEBUG nova.compute.manager [req-3bd68cb5-39c7-462c-9c46-1a98db56ad5c req-a4cee998-106b-4c0e-8815-4e5edcb06609 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] No waiting events found dispatching network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:27:12 compute-2 nova_compute[226829]: 2026-01-31 08:27:12.724 226833 WARNING nova.compute.manager [req-3bd68cb5-39c7-462c-9c46-1a98db56ad5c req-a4cee998-106b-4c0e-8815-4e5edcb06609 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received unexpected event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f for instance with vm_state resized and task_state resize_reverting.
Jan 31 08:27:12 compute-2 nova_compute[226829]: 2026-01-31 08:27:12.846 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:13.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:13 compute-2 ceph-mon[77282]: pgmap v2807: 305 pgs: 305 active+clean; 610 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.5 MiB/s wr, 220 op/s
Jan 31 08:27:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:14.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1067723279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:27:15 compute-2 ovn_controller[133834]: 2026-01-31T08:27:15Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1b:c5:21 10.100.0.5
Jan 31 08:27:15 compute-2 ovn_controller[133834]: 2026-01-31T08:27:15Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1b:c5:21 10.100.0.5
Jan 31 08:27:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:15.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:15 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-2[77982]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 08:27:15 compute-2 nova_compute[226829]: 2026-01-31 08:27:15.747 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:15 compute-2 ceph-mon[77282]: pgmap v2808: 305 pgs: 305 active+clean; 616 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.5 MiB/s wr, 264 op/s
Jan 31 08:27:16 compute-2 nova_compute[226829]: 2026-01-31 08:27:16.077 226833 WARNING nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] While synchronizing instance power states, found 1 instances in the database and 2 instances on the hypervisor.
Jan 31 08:27:16 compute-2 nova_compute[226829]: 2026-01-31 08:27:16.077 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Triggering sync for uuid e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 08:27:16 compute-2 nova_compute[226829]: 2026-01-31 08:27:16.077 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:16 compute-2 nova_compute[226829]: 2026-01-31 08:27:16.078 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:16.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:16 compute-2 nova_compute[226829]: 2026-01-31 08:27:16.137 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:16 compute-2 nova_compute[226829]: 2026-01-31 08:27:16.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:16 compute-2 nova_compute[226829]: 2026-01-31 08:27:16.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:27:16 compute-2 nova_compute[226829]: 2026-01-31 08:27:16.782 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:17 compute-2 nova_compute[226829]: 2026-01-31 08:27:17.402 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848022.4006345, 651d6b65-a0ee-4942-bf60-88b037eb6508 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:27:17 compute-2 nova_compute[226829]: 2026-01-31 08:27:17.402 226833 INFO nova.compute.manager [-] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] VM Stopped (Lifecycle Event)
Jan 31 08:27:17 compute-2 nova_compute[226829]: 2026-01-31 08:27:17.439 226833 DEBUG nova.compute.manager [None req-a6d4bb7a-e1b2-4f7d-b676-d23ed1781a06 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:27:17 compute-2 nova_compute[226829]: 2026-01-31 08:27:17.442 226833 DEBUG nova.compute.manager [None req-a6d4bb7a-e1b2-4f7d-b676-d23ed1781a06 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:27:17 compute-2 nova_compute[226829]: 2026-01-31 08:27:17.486 226833 INFO nova.compute.manager [None req-a6d4bb7a-e1b2-4f7d-b676-d23ed1781a06 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-2.ctlplane.example.com
Jan 31 08:27:17 compute-2 nova_compute[226829]: 2026-01-31 08:27:17.521 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:17 compute-2 nova_compute[226829]: 2026-01-31 08:27:17.522 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:27:17 compute-2 nova_compute[226829]: 2026-01-31 08:27:17.553 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:27:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:27:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:17.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:27:17 compute-2 ceph-mon[77282]: pgmap v2809: 305 pgs: 305 active+clean; 661 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.0 MiB/s rd, 6.8 MiB/s wr, 393 op/s
Jan 31 08:27:17 compute-2 nova_compute[226829]: 2026-01-31 08:27:17.848 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:27:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:18.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:27:19 compute-2 podman[300859]: 2026-01-31 08:27:19.206950624 +0000 UTC m=+0.091895143 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:27:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:27:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:19.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:27:19 compute-2 ceph-mon[77282]: pgmap v2810: 305 pgs: 305 active+clean; 643 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.7 MiB/s wr, 381 op/s
Jan 31 08:27:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:20.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:20 compute-2 nova_compute[226829]: 2026-01-31 08:27:20.242 226833 DEBUG nova.compute.manager [req-2b005196-7c85-4fed-9904-34bcaf9126c8 req-e6a7320d-1e79-4f74-9bc8-8edc87358dfb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received event network-vif-unplugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:27:20 compute-2 nova_compute[226829]: 2026-01-31 08:27:20.242 226833 DEBUG oslo_concurrency.lockutils [req-2b005196-7c85-4fed-9904-34bcaf9126c8 req-e6a7320d-1e79-4f74-9bc8-8edc87358dfb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:20 compute-2 nova_compute[226829]: 2026-01-31 08:27:20.242 226833 DEBUG oslo_concurrency.lockutils [req-2b005196-7c85-4fed-9904-34bcaf9126c8 req-e6a7320d-1e79-4f74-9bc8-8edc87358dfb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:20 compute-2 nova_compute[226829]: 2026-01-31 08:27:20.243 226833 DEBUG oslo_concurrency.lockutils [req-2b005196-7c85-4fed-9904-34bcaf9126c8 req-e6a7320d-1e79-4f74-9bc8-8edc87358dfb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:20 compute-2 nova_compute[226829]: 2026-01-31 08:27:20.243 226833 DEBUG nova.compute.manager [req-2b005196-7c85-4fed-9904-34bcaf9126c8 req-e6a7320d-1e79-4f74-9bc8-8edc87358dfb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] No waiting events found dispatching network-vif-unplugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:27:20 compute-2 nova_compute[226829]: 2026-01-31 08:27:20.243 226833 WARNING nova.compute.manager [req-2b005196-7c85-4fed-9904-34bcaf9126c8 req-e6a7320d-1e79-4f74-9bc8-8edc87358dfb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received unexpected event network-vif-unplugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f for instance with vm_state resized and task_state resize_reverting.
Jan 31 08:27:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3122160911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:27:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3247627803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:21 compute-2 nova_compute[226829]: 2026-01-31 08:27:21.085 226833 INFO nova.compute.manager [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Swapping old allocation on dict_keys(['2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc']) held by migration 45878621-34ad-484d-97db-dac570b012b2 for instance
Jan 31 08:27:21 compute-2 nova_compute[226829]: 2026-01-31 08:27:21.136 226833 DEBUG nova.scheduler.client.report [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Overwriting current allocation {'allocations': {'39dae8fb-a3d6-4f01-ab04-67eb06f4b735': {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}, 'generation': 74}}, 'project_id': '9710f0cf77d84353ae13fa47922b085d', 'user_id': '4d0e9d918b4041fabd5ded633b4cf404', 'consumer_generation': 1} on consumer 651d6b65-a0ee-4942-bf60-88b037eb6508 move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
Jan 31 08:27:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:21.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:21 compute-2 nova_compute[226829]: 2026-01-31 08:27:21.783 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:21 compute-2 ceph-mon[77282]: pgmap v2811: 305 pgs: 305 active+clean; 643 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.9 MiB/s wr, 329 op/s
Jan 31 08:27:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2954419915' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:27:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:22.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:27:22 compute-2 nova_compute[226829]: 2026-01-31 08:27:22.222 226833 INFO nova.network.neutron [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Updating port ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 31 08:27:22 compute-2 nova_compute[226829]: 2026-01-31 08:27:22.850 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:23 compute-2 nova_compute[226829]: 2026-01-31 08:27:23.091 226833 DEBUG nova.compute.manager [req-3af41134-d726-4029-80ec-a36732781fdb req-9897c439-1a5e-41f2-b903-b0124fe0047a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:27:23 compute-2 nova_compute[226829]: 2026-01-31 08:27:23.092 226833 DEBUG oslo_concurrency.lockutils [req-3af41134-d726-4029-80ec-a36732781fdb req-9897c439-1a5e-41f2-b903-b0124fe0047a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:23 compute-2 nova_compute[226829]: 2026-01-31 08:27:23.092 226833 DEBUG oslo_concurrency.lockutils [req-3af41134-d726-4029-80ec-a36732781fdb req-9897c439-1a5e-41f2-b903-b0124fe0047a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:23 compute-2 nova_compute[226829]: 2026-01-31 08:27:23.093 226833 DEBUG oslo_concurrency.lockutils [req-3af41134-d726-4029-80ec-a36732781fdb req-9897c439-1a5e-41f2-b903-b0124fe0047a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:23 compute-2 nova_compute[226829]: 2026-01-31 08:27:23.093 226833 DEBUG nova.compute.manager [req-3af41134-d726-4029-80ec-a36732781fdb req-9897c439-1a5e-41f2-b903-b0124fe0047a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] No waiting events found dispatching network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:27:23 compute-2 nova_compute[226829]: 2026-01-31 08:27:23.093 226833 WARNING nova.compute.manager [req-3af41134-d726-4029-80ec-a36732781fdb req-9897c439-1a5e-41f2-b903-b0124fe0047a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received unexpected event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f for instance with vm_state resized and task_state resize_reverting.
Jan 31 08:27:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:27:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:23.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:27:23 compute-2 ceph-mon[77282]: pgmap v2812: 305 pgs: 305 active+clean; 643 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.5 MiB/s wr, 287 op/s
Jan 31 08:27:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:24.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:24 compute-2 nova_compute[226829]: 2026-01-31 08:27:24.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:24 compute-2 nova_compute[226829]: 2026-01-31 08:27:24.554 226833 DEBUG oslo_concurrency.lockutils [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:27:24 compute-2 nova_compute[226829]: 2026-01-31 08:27:24.555 226833 DEBUG oslo_concurrency.lockutils [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquired lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:27:24 compute-2 nova_compute[226829]: 2026-01-31 08:27:24.555 226833 DEBUG nova.network.neutron [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:27:25 compute-2 nova_compute[226829]: 2026-01-31 08:27:25.272 226833 DEBUG nova.compute.manager [req-22e2ebb5-2416-4715-91a4-7a06c0bb65dc req-231f2606-1b3a-44f0-bb72-8cfb710a887a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received event network-changed-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:27:25 compute-2 nova_compute[226829]: 2026-01-31 08:27:25.272 226833 DEBUG nova.compute.manager [req-22e2ebb5-2416-4715-91a4-7a06c0bb65dc req-231f2606-1b3a-44f0-bb72-8cfb710a887a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Refreshing instance network info cache due to event network-changed-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:27:25 compute-2 nova_compute[226829]: 2026-01-31 08:27:25.273 226833 DEBUG oslo_concurrency.lockutils [req-22e2ebb5-2416-4715-91a4-7a06c0bb65dc req-231f2606-1b3a-44f0-bb72-8cfb710a887a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:27:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:27:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:25.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:27:26 compute-2 ceph-mon[77282]: pgmap v2813: 305 pgs: 305 active+clean; 643 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.9 MiB/s wr, 269 op/s
Jan 31 08:27:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:26.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:26 compute-2 nova_compute[226829]: 2026-01-31 08:27:26.785 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:27 compute-2 ceph-mon[77282]: pgmap v2814: 305 pgs: 305 active+clean; 643 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.5 MiB/s wr, 250 op/s
Jan 31 08:27:27 compute-2 nova_compute[226829]: 2026-01-31 08:27:27.401 226833 DEBUG nova.network.neutron [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Updating instance_info_cache with network_info: [{"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:27:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:27.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:27 compute-2 nova_compute[226829]: 2026-01-31 08:27:27.610 226833 DEBUG oslo_concurrency.lockutils [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Releasing lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:27:27 compute-2 nova_compute[226829]: 2026-01-31 08:27:27.611 226833 DEBUG nova.virt.libvirt.driver [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843
Jan 31 08:27:27 compute-2 nova_compute[226829]: 2026-01-31 08:27:27.654 226833 DEBUG oslo_concurrency.lockutils [req-22e2ebb5-2416-4715-91a4-7a06c0bb65dc req-231f2606-1b3a-44f0-bb72-8cfb710a887a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:27:27 compute-2 nova_compute[226829]: 2026-01-31 08:27:27.655 226833 DEBUG nova.network.neutron [req-22e2ebb5-2416-4715-91a4-7a06c0bb65dc req-231f2606-1b3a-44f0-bb72-8cfb710a887a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Refreshing network info cache for port ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:27:27 compute-2 nova_compute[226829]: 2026-01-31 08:27:27.972 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:27 compute-2 nova_compute[226829]: 2026-01-31 08:27:27.980 226833 DEBUG nova.storage.rbd_utils [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rolling back rbd image(651d6b65-a0ee-4942-bf60-88b037eb6508_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505
Jan 31 08:27:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:28.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:28 compute-2 nova_compute[226829]: 2026-01-31 08:27:28.492 226833 DEBUG nova.storage.rbd_utils [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] removing snapshot(nova-resize) on rbd image(651d6b65-a0ee-4942-bf60-88b037eb6508_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 08:27:28 compute-2 nova_compute[226829]: 2026-01-31 08:27:28.746 226833 DEBUG nova.compute.manager [req-9a2e2f5d-23cd-4663-93e3-48713180a61d req-32aabd7f-5081-442e-b471-231b1c1c91ac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Received event network-changed-a3ff723d-9e34-45ef-881d-6534126ae169 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:27:28 compute-2 nova_compute[226829]: 2026-01-31 08:27:28.746 226833 DEBUG nova.compute.manager [req-9a2e2f5d-23cd-4663-93e3-48713180a61d req-32aabd7f-5081-442e-b471-231b1c1c91ac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Refreshing instance network info cache due to event network-changed-a3ff723d-9e34-45ef-881d-6534126ae169. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:27:28 compute-2 nova_compute[226829]: 2026-01-31 08:27:28.747 226833 DEBUG oslo_concurrency.lockutils [req-9a2e2f5d-23cd-4663-93e3-48713180a61d req-32aabd7f-5081-442e-b471-231b1c1c91ac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:27:28 compute-2 nova_compute[226829]: 2026-01-31 08:27:28.747 226833 DEBUG oslo_concurrency.lockutils [req-9a2e2f5d-23cd-4663-93e3-48713180a61d req-32aabd7f-5081-442e-b471-231b1c1c91ac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:27:28 compute-2 nova_compute[226829]: 2026-01-31 08:27:28.747 226833 DEBUG nova.network.neutron [req-9a2e2f5d-23cd-4663-93e3-48713180a61d req-32aabd7f-5081-442e-b471-231b1c1c91ac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Refreshing network info cache for port a3ff723d-9e34-45ef-881d-6534126ae169 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:27:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:27:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 59K writes, 247K keys, 59K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.05 MB/s
                                           Cumulative WAL: 59K writes, 21K syncs, 2.81 writes per sync, written: 0.25 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 42K keys, 10K commit groups, 1.0 writes per commit group, ingest: 45.08 MB, 0.08 MB/s
                                           Interval WAL: 10K writes, 4020 syncs, 2.62 writes per sync, written: 0.04 GB, 0.08 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:27:29 compute-2 ceph-mon[77282]: pgmap v2815: 305 pgs: 305 active+clean; 643 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 32 KiB/s wr, 70 op/s
Jan 31 08:27:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/35819088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:27:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e360 e360: 3 total, 3 up, 3 in
Jan 31 08:27:29 compute-2 nova_compute[226829]: 2026-01-31 08:27:29.503 226833 DEBUG nova.virt.libvirt.driver [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Start _get_guest_xml network_info=[{"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:27:29 compute-2 nova_compute[226829]: 2026-01-31 08:27:29.507 226833 WARNING nova.virt.libvirt.driver [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:27:29 compute-2 nova_compute[226829]: 2026-01-31 08:27:29.530 226833 DEBUG nova.virt.libvirt.host [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:27:29 compute-2 nova_compute[226829]: 2026-01-31 08:27:29.531 226833 DEBUG nova.virt.libvirt.host [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:27:29 compute-2 nova_compute[226829]: 2026-01-31 08:27:29.534 226833 DEBUG nova.virt.libvirt.host [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:27:29 compute-2 nova_compute[226829]: 2026-01-31 08:27:29.535 226833 DEBUG nova.virt.libvirt.host [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:27:29 compute-2 nova_compute[226829]: 2026-01-31 08:27:29.536 226833 DEBUG nova.virt.libvirt.driver [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:27:29 compute-2 nova_compute[226829]: 2026-01-31 08:27:29.536 226833 DEBUG nova.virt.hardware [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:27:29 compute-2 nova_compute[226829]: 2026-01-31 08:27:29.536 226833 DEBUG nova.virt.hardware [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:27:29 compute-2 nova_compute[226829]: 2026-01-31 08:27:29.536 226833 DEBUG nova.virt.hardware [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:27:29 compute-2 nova_compute[226829]: 2026-01-31 08:27:29.537 226833 DEBUG nova.virt.hardware [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:27:29 compute-2 nova_compute[226829]: 2026-01-31 08:27:29.537 226833 DEBUG nova.virt.hardware [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:27:29 compute-2 nova_compute[226829]: 2026-01-31 08:27:29.537 226833 DEBUG nova.virt.hardware [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:27:29 compute-2 nova_compute[226829]: 2026-01-31 08:27:29.537 226833 DEBUG nova.virt.hardware [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:27:29 compute-2 nova_compute[226829]: 2026-01-31 08:27:29.537 226833 DEBUG nova.virt.hardware [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:27:29 compute-2 nova_compute[226829]: 2026-01-31 08:27:29.538 226833 DEBUG nova.virt.hardware [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:27:29 compute-2 nova_compute[226829]: 2026-01-31 08:27:29.538 226833 DEBUG nova.virt.hardware [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:27:29 compute-2 nova_compute[226829]: 2026-01-31 08:27:29.538 226833 DEBUG nova.virt.hardware [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:27:29 compute-2 nova_compute[226829]: 2026-01-31 08:27:29.538 226833 DEBUG nova.objects.instance [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'vcpu_model' on Instance uuid 651d6b65-a0ee-4942-bf60-88b037eb6508 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:27:29 compute-2 nova_compute[226829]: 2026-01-31 08:27:29.564 226833 DEBUG oslo_concurrency.processutils [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:29.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:27:29 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3654738820' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:29 compute-2 nova_compute[226829]: 2026-01-31 08:27:29.983 226833 DEBUG oslo_concurrency.processutils [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.028 226833 DEBUG oslo_concurrency.processutils [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:27:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:30.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:27:30 compute-2 podman[300986]: 2026-01-31 08:27:30.207892557 +0000 UTC m=+0.079540299 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:27:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:30 compute-2 ceph-mon[77282]: osdmap e360: 3 total, 3 up, 3 in
Jan 31 08:27:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3654738820' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:27:30 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/965567276' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.464 226833 DEBUG oslo_concurrency.processutils [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.466 226833 DEBUG nova.virt.libvirt.vif [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:26:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2107337279',display_name='tempest-TestNetworkAdvancedServerOps-server-2107337279',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2107337279',id=161,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWmLoXJJT8HJ06ToetETaDpi1L/xOsFHTQftPXbFuOs01yOf7k2Y65Sy9zq5BZjupL2Q9fI+9QOPb9ecuesAa9df6vRKXVMZCeU6kML4//7cHw0FciNooD1B0Aw/wAsgQ==',key_name='tempest-TestNetworkAdvancedServerOps-1409064792',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:27:10Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-ka16rdmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:27:12Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=651d6b65-a0ee-4942-bf60-88b037eb6508,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.467 226833 DEBUG nova.network.os_vif_util [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.468 226833 DEBUG nova.network.os_vif_util [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:f3:24,bridge_name='br-int',has_traffic_filtering=True,id=ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f,network=Network(cc669a9b-1a99-4cea-8b35-6d932fb2087c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec7fbb6b-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.472 226833 DEBUG nova.virt.libvirt.driver [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:27:30 compute-2 nova_compute[226829]:   <uuid>651d6b65-a0ee-4942-bf60-88b037eb6508</uuid>
Jan 31 08:27:30 compute-2 nova_compute[226829]:   <name>instance-000000a1</name>
Jan 31 08:27:30 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:27:30 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:27:30 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-2107337279</nova:name>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:27:29</nova:creationTime>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:27:30 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:27:30 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:27:30 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:27:30 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:27:30 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:27:30 compute-2 nova_compute[226829]:         <nova:user uuid="4d0e9d918b4041fabd5ded633b4cf404">tempest-TestNetworkAdvancedServerOps-483180749-project-member</nova:user>
Jan 31 08:27:30 compute-2 nova_compute[226829]:         <nova:project uuid="9710f0cf77d84353ae13fa47922b085d">tempest-TestNetworkAdvancedServerOps-483180749</nova:project>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:27:30 compute-2 nova_compute[226829]:         <nova:port uuid="ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f">
Jan 31 08:27:30 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:27:30 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:27:30 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <system>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <entry name="serial">651d6b65-a0ee-4942-bf60-88b037eb6508</entry>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <entry name="uuid">651d6b65-a0ee-4942-bf60-88b037eb6508</entry>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     </system>
Jan 31 08:27:30 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:27:30 compute-2 nova_compute[226829]:   <os>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:   </os>
Jan 31 08:27:30 compute-2 nova_compute[226829]:   <features>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:   </features>
Jan 31 08:27:30 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:27:30 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:27:30 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/651d6b65-a0ee-4942-bf60-88b037eb6508_disk">
Jan 31 08:27:30 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       </source>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:27:30 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/651d6b65-a0ee-4942-bf60-88b037eb6508_disk.config">
Jan 31 08:27:30 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       </source>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:27:30 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:75:f3:24"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <target dev="tapec7fbb6b-9a"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/651d6b65-a0ee-4942-bf60-88b037eb6508/console.log" append="off"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <video>
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     </video>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <input type="keyboard" bus="usb"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:27:30 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:27:30 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:27:30 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:27:30 compute-2 nova_compute[226829]: </domain>
Jan 31 08:27:30 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.474 226833 DEBUG nova.compute.manager [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Preparing to wait for external event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.474 226833 DEBUG oslo_concurrency.lockutils [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.475 226833 DEBUG oslo_concurrency.lockutils [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.475 226833 DEBUG oslo_concurrency.lockutils [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.476 226833 DEBUG nova.virt.libvirt.vif [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:26:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2107337279',display_name='tempest-TestNetworkAdvancedServerOps-server-2107337279',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2107337279',id=161,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWmLoXJJT8HJ06ToetETaDpi1L/xOsFHTQftPXbFuOs01yOf7k2Y65Sy9zq5BZjupL2Q9fI+9QOPb9ecuesAa9df6vRKXVMZCeU6kML4//7cHw0FciNooD1B0Aw/wAsgQ==',key_name='tempest-TestNetworkAdvancedServerOps-1409064792',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:27:10Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-ka16rdmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:27:12Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=651d6b65-a0ee-4942-bf60-88b037eb6508,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.477 226833 DEBUG nova.network.os_vif_util [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.478 226833 DEBUG nova.network.os_vif_util [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:f3:24,bridge_name='br-int',has_traffic_filtering=True,id=ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f,network=Network(cc669a9b-1a99-4cea-8b35-6d932fb2087c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec7fbb6b-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.479 226833 DEBUG os_vif [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:f3:24,bridge_name='br-int',has_traffic_filtering=True,id=ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f,network=Network(cc669a9b-1a99-4cea-8b35-6d932fb2087c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec7fbb6b-9a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.480 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.481 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.482 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.487 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.487 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec7fbb6b-9a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.488 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapec7fbb6b-9a, col_values=(('external_ids', {'iface-id': 'ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:75:f3:24', 'vm-uuid': '651d6b65-a0ee-4942-bf60-88b037eb6508'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.489 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:30 compute-2 NetworkManager[48999]: <info>  [1769848050.4903] manager: (tapec7fbb6b-9a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/327)
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.491 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.495 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.495 226833 INFO os_vif [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:f3:24,bridge_name='br-int',has_traffic_filtering=True,id=ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f,network=Network(cc669a9b-1a99-4cea-8b35-6d932fb2087c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec7fbb6b-9a')
Jan 31 08:27:30 compute-2 kernel: tapec7fbb6b-9a: entered promiscuous mode
Jan 31 08:27:30 compute-2 NetworkManager[48999]: <info>  [1769848050.5773] manager: (tapec7fbb6b-9a): new Tun device (/org/freedesktop/NetworkManager/Devices/328)
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.579 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:30 compute-2 ovn_controller[133834]: 2026-01-31T08:27:30Z|00659|binding|INFO|Claiming lport ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f for this chassis.
Jan 31 08:27:30 compute-2 ovn_controller[133834]: 2026-01-31T08:27:30Z|00660|binding|INFO|ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f: Claiming fa:16:3e:75:f3:24 10.100.0.11
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.596 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:30 compute-2 ovn_controller[133834]: 2026-01-31T08:27:30Z|00661|binding|INFO|Setting lport ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f ovn-installed in OVS
Jan 31 08:27:30 compute-2 ovn_controller[133834]: 2026-01-31T08:27:30Z|00662|binding|INFO|Setting lport ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f up in Southbound
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.600 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:30.598 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:f3:24 10.100.0.11'], port_security=['fa:16:3e:75:f3:24 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '651d6b65-a0ee-4942-bf60-88b037eb6508', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cc669a9b-1a99-4cea-8b35-6d932fb2087c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '10', 'neutron:security_group_ids': '53c8d6c3-3fc8-4e05-8f45-013b15b35751', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eeea185c-a70e-4e33-a1d7-88e2fb6e75b6, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.602 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:30.602 143841 INFO neutron.agent.ovn.metadata.agent [-] Port ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f in datapath cc669a9b-1a99-4cea-8b35-6d932fb2087c bound to our chassis
Jan 31 08:27:30 compute-2 systemd-udevd[301038]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:30.604 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cc669a9b-1a99-4cea-8b35-6d932fb2087c
Jan 31 08:27:30 compute-2 NetworkManager[48999]: <info>  [1769848050.6178] device (tapec7fbb6b-9a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:27:30 compute-2 NetworkManager[48999]: <info>  [1769848050.6201] device (tapec7fbb6b-9a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:30.616 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[54d5a1bc-54b9-4c97-823f-b73900630157]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:30.618 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcc669a9b-11 in ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:30.620 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcc669a9b-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:30.620 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f553e78e-f574-4e22-9182-aafdc5ee982a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:30.622 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[14778cc5-4fc3-4bf1-ae8c-6ed2158e6371]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:30 compute-2 systemd-machined[195142]: New machine qemu-76-instance-000000a1.
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:30.635 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d8363a-15de-4451-86b8-b79eddc32eae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:30.647 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2f81bd5a-56bb-4e9e-8598-7e3c133044bf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:30 compute-2 systemd[1]: Started Virtual Machine qemu-76-instance-000000a1.
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:30.675 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[9779c66e-5cc7-4555-a118-84f040410430]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:30.680 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[276ab0a8-add3-4b34-9f3d-2a91241b2f15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:30 compute-2 NetworkManager[48999]: <info>  [1769848050.6814] manager: (tapcc669a9b-10): new Veth device (/org/freedesktop/NetworkManager/Devices/329)
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:30.704 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[26767fc4-933e-412a-99c2-f921e820e2b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:30.707 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[c8685f84-d73f-4090-9770-0a6bdc31eff9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:30 compute-2 NetworkManager[48999]: <info>  [1769848050.7300] device (tapcc669a9b-10): carrier: link connected
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:30.733 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[378357d5-0ad4-4a7e-985c-5af03bfbf407]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:30.751 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6808a195-5cb7-447f-bfc0-9d453c4519c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcc669a9b-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:92:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 830320, 'reachable_time': 20215, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301074, 'error': None, 'target': 'ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:30.768 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9c27567f-8204-4f35-939d-772fd7e1659f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fead:92df'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 830320, 'tstamp': 830320}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301075, 'error': None, 'target': 'ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:30.789 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3e2e6acd-6427-4464-a494-8c9e3dedf25d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcc669a9b-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:92:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 830320, 'reachable_time': 20215, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301076, 'error': None, 'target': 'ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:30.812 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6c0ec712-006f-4e25-acca-4fe5e333a960]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:30.861 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7bdc1b7c-bc5f-4490-a143-aaf6c831ca67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:30.863 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc669a9b-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:30.863 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:30.864 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcc669a9b-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:27:30 compute-2 kernel: tapcc669a9b-10: entered promiscuous mode
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.866 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:30 compute-2 NetworkManager[48999]: <info>  [1769848050.8679] manager: (tapcc669a9b-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/330)
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.870 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:30.871 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcc669a9b-10, col_values=(('external_ids', {'iface-id': '897a561a-9f88-407a-b979-589100a315c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.872 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:30 compute-2 ovn_controller[133834]: 2026-01-31T08:27:30Z|00663|binding|INFO|Releasing lport 897a561a-9f88-407a-b979-589100a315c7 from this chassis (sb_readonly=0)
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:30.875 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cc669a9b-1a99-4cea-8b35-6d932fb2087c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cc669a9b-1a99-4cea-8b35-6d932fb2087c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:30.876 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[31c1ae0a-eb22-4d5a-9cd2-ef45c45992e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:30.877 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-cc669a9b-1a99-4cea-8b35-6d932fb2087c
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/cc669a9b-1a99-4cea-8b35-6d932fb2087c.pid.haproxy
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID cc669a9b-1a99-4cea-8b35-6d932fb2087c
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:27:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:27:30.878 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c', 'env', 'PROCESS_TAG=haproxy-cc669a9b-1a99-4cea-8b35-6d932fb2087c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cc669a9b-1a99-4cea-8b35-6d932fb2087c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:27:30 compute-2 nova_compute[226829]: 2026-01-31 08:27:30.878 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:31 compute-2 nova_compute[226829]: 2026-01-31 08:27:31.143 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848051.1431775, 651d6b65-a0ee-4942-bf60-88b037eb6508 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:27:31 compute-2 nova_compute[226829]: 2026-01-31 08:27:31.144 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] VM Started (Lifecycle Event)
Jan 31 08:27:31 compute-2 nova_compute[226829]: 2026-01-31 08:27:31.176 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:27:31 compute-2 sudo[301133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:27:31 compute-2 sudo[301133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:31 compute-2 nova_compute[226829]: 2026-01-31 08:27:31.183 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848051.1437113, 651d6b65-a0ee-4942-bf60-88b037eb6508 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:27:31 compute-2 nova_compute[226829]: 2026-01-31 08:27:31.183 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] VM Paused (Lifecycle Event)
Jan 31 08:27:31 compute-2 sudo[301133]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:31 compute-2 nova_compute[226829]: 2026-01-31 08:27:31.212 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:27:31 compute-2 nova_compute[226829]: 2026-01-31 08:27:31.216 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:27:31 compute-2 sudo[301172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:27:31 compute-2 sudo[301172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:31 compute-2 sudo[301172]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:31 compute-2 podman[301177]: 2026-01-31 08:27:31.256679276 +0000 UTC m=+0.046950915 container create e1b8dd63d844b3a81de5b4734f36e1a8713e758840bc99a00d4d4e476d745a5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:27:31 compute-2 systemd[1]: Started libpod-conmon-e1b8dd63d844b3a81de5b4734f36e1a8713e758840bc99a00d4d4e476d745a5f.scope.
Jan 31 08:27:31 compute-2 nova_compute[226829]: 2026-01-31 08:27:31.302 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Jan 31 08:27:31 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:27:31 compute-2 podman[301177]: 2026-01-31 08:27:31.230746912 +0000 UTC m=+0.021018571 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:27:31 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb5a8836cc45928f8a65b90aa4ab7610d9d882bc41725630addf4ad66c8d0d4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:31 compute-2 podman[301177]: 2026-01-31 08:27:31.338391523 +0000 UTC m=+0.128663192 container init e1b8dd63d844b3a81de5b4734f36e1a8713e758840bc99a00d4d4e476d745a5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:27:31 compute-2 podman[301177]: 2026-01-31 08:27:31.341925219 +0000 UTC m=+0.132196858 container start e1b8dd63d844b3a81de5b4734f36e1a8713e758840bc99a00d4d4e476d745a5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:27:31 compute-2 neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c[301214]: [NOTICE]   (301218) : New worker (301220) forked
Jan 31 08:27:31 compute-2 neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c[301214]: [NOTICE]   (301218) : Loading success.
Jan 31 08:27:31 compute-2 ceph-mon[77282]: pgmap v2817: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 660 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 398 KiB/s wr, 94 op/s
Jan 31 08:27:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/965567276' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:31.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:31 compute-2 nova_compute[226829]: 2026-01-31 08:27:31.599 226833 DEBUG nova.compute.manager [req-2b5601e5-e3c3-40ad-a8d8-1b4763b86351 req-60c26496-1f25-4769-b9c8-250a5a0fce81 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:27:31 compute-2 nova_compute[226829]: 2026-01-31 08:27:31.600 226833 DEBUG oslo_concurrency.lockutils [req-2b5601e5-e3c3-40ad-a8d8-1b4763b86351 req-60c26496-1f25-4769-b9c8-250a5a0fce81 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:31 compute-2 nova_compute[226829]: 2026-01-31 08:27:31.600 226833 DEBUG oslo_concurrency.lockutils [req-2b5601e5-e3c3-40ad-a8d8-1b4763b86351 req-60c26496-1f25-4769-b9c8-250a5a0fce81 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:31 compute-2 nova_compute[226829]: 2026-01-31 08:27:31.600 226833 DEBUG oslo_concurrency.lockutils [req-2b5601e5-e3c3-40ad-a8d8-1b4763b86351 req-60c26496-1f25-4769-b9c8-250a5a0fce81 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:31 compute-2 nova_compute[226829]: 2026-01-31 08:27:31.601 226833 DEBUG nova.compute.manager [req-2b5601e5-e3c3-40ad-a8d8-1b4763b86351 req-60c26496-1f25-4769-b9c8-250a5a0fce81 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Processing event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:27:31 compute-2 nova_compute[226829]: 2026-01-31 08:27:31.601 226833 DEBUG nova.compute.manager [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:27:31 compute-2 nova_compute[226829]: 2026-01-31 08:27:31.605 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848051.605194, 651d6b65-a0ee-4942-bf60-88b037eb6508 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:27:31 compute-2 nova_compute[226829]: 2026-01-31 08:27:31.605 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] VM Resumed (Lifecycle Event)
Jan 31 08:27:31 compute-2 nova_compute[226829]: 2026-01-31 08:27:31.609 226833 INFO nova.virt.libvirt.driver [-] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Instance running successfully.
Jan 31 08:27:31 compute-2 nova_compute[226829]: 2026-01-31 08:27:31.610 226833 DEBUG nova.virt.libvirt.driver [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Jan 31 08:27:31 compute-2 nova_compute[226829]: 2026-01-31 08:27:31.650 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:27:31 compute-2 nova_compute[226829]: 2026-01-31 08:27:31.655 226833 DEBUG nova.network.neutron [req-22e2ebb5-2416-4715-91a4-7a06c0bb65dc req-231f2606-1b3a-44f0-bb72-8cfb710a887a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Updated VIF entry in instance network info cache for port ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:27:31 compute-2 nova_compute[226829]: 2026-01-31 08:27:31.655 226833 DEBUG nova.network.neutron [req-22e2ebb5-2416-4715-91a4-7a06c0bb65dc req-231f2606-1b3a-44f0-bb72-8cfb710a887a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Updating instance_info_cache with network_info: [{"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:27:31 compute-2 nova_compute[226829]: 2026-01-31 08:27:31.659 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:27:31 compute-2 nova_compute[226829]: 2026-01-31 08:27:31.711 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Jan 31 08:27:31 compute-2 nova_compute[226829]: 2026-01-31 08:27:31.721 226833 DEBUG oslo_concurrency.lockutils [req-22e2ebb5-2416-4715-91a4-7a06c0bb65dc req-231f2606-1b3a-44f0-bb72-8cfb710a887a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:27:31 compute-2 nova_compute[226829]: 2026-01-31 08:27:31.788 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:31 compute-2 nova_compute[226829]: 2026-01-31 08:27:31.804 226833 INFO nova.compute.manager [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Updating instance to original state: 'active'
Jan 31 08:27:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:32.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:32 compute-2 nova_compute[226829]: 2026-01-31 08:27:32.231 226833 DEBUG nova.network.neutron [req-9a2e2f5d-23cd-4663-93e3-48713180a61d req-32aabd7f-5081-442e-b471-231b1c1c91ac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Updated VIF entry in instance network info cache for port a3ff723d-9e34-45ef-881d-6534126ae169. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:27:32 compute-2 nova_compute[226829]: 2026-01-31 08:27:32.234 226833 DEBUG nova.network.neutron [req-9a2e2f5d-23cd-4663-93e3-48713180a61d req-32aabd7f-5081-442e-b471-231b1c1c91ac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Updating instance_info_cache with network_info: [{"id": "a3ff723d-9e34-45ef-881d-6534126ae169", "address": "fa:16:3e:1b:c5:21", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ff723d-9e", "ovs_interfaceid": "a3ff723d-9e34-45ef-881d-6534126ae169", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:27:32 compute-2 nova_compute[226829]: 2026-01-31 08:27:32.381 226833 DEBUG oslo_concurrency.lockutils [req-9a2e2f5d-23cd-4663-93e3-48713180a61d req-32aabd7f-5081-442e-b471-231b1c1c91ac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:27:32 compute-2 nova_compute[226829]: 2026-01-31 08:27:32.638 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:33 compute-2 ceph-osd[79942]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 31 08:27:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:33.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:33 compute-2 ceph-mon[77282]: pgmap v2818: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 674 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 980 KiB/s wr, 101 op/s
Jan 31 08:27:33 compute-2 nova_compute[226829]: 2026-01-31 08:27:33.883 226833 DEBUG nova.compute.manager [req-a8d4cf80-7430-4210-8635-7e7cd8b364dc req-ffb409f1-b46f-4dfb-b846-7b7a866f024f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:27:33 compute-2 nova_compute[226829]: 2026-01-31 08:27:33.883 226833 DEBUG oslo_concurrency.lockutils [req-a8d4cf80-7430-4210-8635-7e7cd8b364dc req-ffb409f1-b46f-4dfb-b846-7b7a866f024f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:33 compute-2 nova_compute[226829]: 2026-01-31 08:27:33.884 226833 DEBUG oslo_concurrency.lockutils [req-a8d4cf80-7430-4210-8635-7e7cd8b364dc req-ffb409f1-b46f-4dfb-b846-7b7a866f024f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:33 compute-2 nova_compute[226829]: 2026-01-31 08:27:33.884 226833 DEBUG oslo_concurrency.lockutils [req-a8d4cf80-7430-4210-8635-7e7cd8b364dc req-ffb409f1-b46f-4dfb-b846-7b7a866f024f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:33 compute-2 nova_compute[226829]: 2026-01-31 08:27:33.884 226833 DEBUG nova.compute.manager [req-a8d4cf80-7430-4210-8635-7e7cd8b364dc req-ffb409f1-b46f-4dfb-b846-7b7a866f024f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] No waiting events found dispatching network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:27:33 compute-2 nova_compute[226829]: 2026-01-31 08:27:33.885 226833 WARNING nova.compute.manager [req-a8d4cf80-7430-4210-8635-7e7cd8b364dc req-ffb409f1-b46f-4dfb-b846-7b7a866f024f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received unexpected event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f for instance with vm_state active and task_state None.
Jan 31 08:27:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:34.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:34 compute-2 nova_compute[226829]: 2026-01-31 08:27:34.328 226833 DEBUG oslo_concurrency.lockutils [None req-da77b395-57cb-4984-98ce-a13e76819bc4 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:34 compute-2 nova_compute[226829]: 2026-01-31 08:27:34.328 226833 DEBUG oslo_concurrency.lockutils [None req-da77b395-57cb-4984-98ce-a13e76819bc4 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:34 compute-2 nova_compute[226829]: 2026-01-31 08:27:34.508 226833 DEBUG nova.objects.instance [None req-da77b395-57cb-4984-98ce-a13e76819bc4 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'flavor' on Instance uuid e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:27:34 compute-2 nova_compute[226829]: 2026-01-31 08:27:34.678 226833 DEBUG oslo_concurrency.lockutils [None req-da77b395-57cb-4984-98ce-a13e76819bc4 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.350s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:35 compute-2 nova_compute[226829]: 2026-01-31 08:27:35.328 226833 DEBUG oslo_concurrency.lockutils [None req-da77b395-57cb-4984-98ce-a13e76819bc4 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:35 compute-2 nova_compute[226829]: 2026-01-31 08:27:35.328 226833 DEBUG oslo_concurrency.lockutils [None req-da77b395-57cb-4984-98ce-a13e76819bc4 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:35 compute-2 nova_compute[226829]: 2026-01-31 08:27:35.329 226833 INFO nova.compute.manager [None req-da77b395-57cb-4984-98ce-a13e76819bc4 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Attaching volume c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4 to /dev/vdb
Jan 31 08:27:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:35 compute-2 nova_compute[226829]: 2026-01-31 08:27:35.491 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:35 compute-2 nova_compute[226829]: 2026-01-31 08:27:35.530 226833 DEBUG os_brick.utils [None req-da77b395-57cb-4984-98ce-a13e76819bc4 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:27:35 compute-2 nova_compute[226829]: 2026-01-31 08:27:35.532 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:35 compute-2 nova_compute[226829]: 2026-01-31 08:27:35.543 236868 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:35 compute-2 nova_compute[226829]: 2026-01-31 08:27:35.544 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[4a3e7e31-1d91-4e79-80aa-5a10ff15cc03]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:35 compute-2 nova_compute[226829]: 2026-01-31 08:27:35.545 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:35 compute-2 nova_compute[226829]: 2026-01-31 08:27:35.552 236868 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:35 compute-2 nova_compute[226829]: 2026-01-31 08:27:35.552 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[0881d78a-18d4-4d50-ae8c-fba3dae56b88]: (4, ('InitiatorName=iqn.1994-05.com.redhat:70a4e945afb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:35 compute-2 nova_compute[226829]: 2026-01-31 08:27:35.554 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:35 compute-2 nova_compute[226829]: 2026-01-31 08:27:35.562 236868 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:35 compute-2 nova_compute[226829]: 2026-01-31 08:27:35.562 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[09a93536-ef0d-4b9c-a834-6487cbd806a4]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:35 compute-2 nova_compute[226829]: 2026-01-31 08:27:35.564 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[97ad7335-55f1-4b03-aef7-0df52f2aa0a8]: (4, 'd14f084b-ec77-4fba-801f-103494d34b3a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:35 compute-2 nova_compute[226829]: 2026-01-31 08:27:35.564 226833 DEBUG oslo_concurrency.processutils [None req-da77b395-57cb-4984-98ce-a13e76819bc4 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:35.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:35 compute-2 nova_compute[226829]: 2026-01-31 08:27:35.590 226833 DEBUG oslo_concurrency.processutils [None req-da77b395-57cb-4984-98ce-a13e76819bc4 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:35 compute-2 nova_compute[226829]: 2026-01-31 08:27:35.592 226833 DEBUG os_brick.initiator.connectors.lightos [None req-da77b395-57cb-4984-98ce-a13e76819bc4 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:27:35 compute-2 nova_compute[226829]: 2026-01-31 08:27:35.593 226833 DEBUG os_brick.initiator.connectors.lightos [None req-da77b395-57cb-4984-98ce-a13e76819bc4 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:27:35 compute-2 nova_compute[226829]: 2026-01-31 08:27:35.593 226833 DEBUG os_brick.initiator.connectors.lightos [None req-da77b395-57cb-4984-98ce-a13e76819bc4 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:27:35 compute-2 nova_compute[226829]: 2026-01-31 08:27:35.593 226833 DEBUG os_brick.utils [None req-da77b395-57cb-4984-98ce-a13e76819bc4 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] <== get_connector_properties: return (63ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:70a4e945afb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'd14f084b-ec77-4fba-801f-103494d34b3a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:27:35 compute-2 nova_compute[226829]: 2026-01-31 08:27:35.594 226833 DEBUG nova.virt.block_device [None req-da77b395-57cb-4984-98ce-a13e76819bc4 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Updating existing volume attachment record: 98f56350-b4f9-4380-9298-d7e2a6967b73 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:27:35 compute-2 ceph-mon[77282]: pgmap v2819: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 689 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 147 op/s
Jan 31 08:27:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:36.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:27:36 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1469878505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1469878505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:36 compute-2 nova_compute[226829]: 2026-01-31 08:27:36.795 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:36 compute-2 nova_compute[226829]: 2026-01-31 08:27:36.844 226833 DEBUG nova.objects.instance [None req-da77b395-57cb-4984-98ce-a13e76819bc4 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'flavor' on Instance uuid e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:27:36 compute-2 nova_compute[226829]: 2026-01-31 08:27:36.907 226833 DEBUG nova.virt.libvirt.driver [None req-da77b395-57cb-4984-98ce-a13e76819bc4 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Attempting to attach volume c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 31 08:27:36 compute-2 nova_compute[226829]: 2026-01-31 08:27:36.910 226833 DEBUG nova.virt.libvirt.guest [None req-da77b395-57cb-4984-98ce-a13e76819bc4 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 08:27:36 compute-2 nova_compute[226829]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:27:36 compute-2 nova_compute[226829]:   <source protocol="rbd" name="volumes/volume-c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4">
Jan 31 08:27:36 compute-2 nova_compute[226829]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:27:36 compute-2 nova_compute[226829]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:27:36 compute-2 nova_compute[226829]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:27:36 compute-2 nova_compute[226829]:   </source>
Jan 31 08:27:36 compute-2 nova_compute[226829]:   <auth username="openstack">
Jan 31 08:27:36 compute-2 nova_compute[226829]:     <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:27:36 compute-2 nova_compute[226829]:   </auth>
Jan 31 08:27:36 compute-2 nova_compute[226829]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:27:36 compute-2 nova_compute[226829]:   <serial>c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4</serial>
Jan 31 08:27:36 compute-2 nova_compute[226829]:   <shareable/>
Jan 31 08:27:36 compute-2 nova_compute[226829]: </disk>
Jan 31 08:27:36 compute-2 nova_compute[226829]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 31 08:27:37 compute-2 nova_compute[226829]: 2026-01-31 08:27:37.061 226833 DEBUG nova.virt.libvirt.driver [None req-da77b395-57cb-4984-98ce-a13e76819bc4 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:27:37 compute-2 nova_compute[226829]: 2026-01-31 08:27:37.062 226833 DEBUG nova.virt.libvirt.driver [None req-da77b395-57cb-4984-98ce-a13e76819bc4 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:27:37 compute-2 nova_compute[226829]: 2026-01-31 08:27:37.062 226833 DEBUG nova.virt.libvirt.driver [None req-da77b395-57cb-4984-98ce-a13e76819bc4 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:27:37 compute-2 nova_compute[226829]: 2026-01-31 08:27:37.062 226833 DEBUG nova.virt.libvirt.driver [None req-da77b395-57cb-4984-98ce-a13e76819bc4 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No VIF found with MAC fa:16:3e:1b:c5:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:27:37 compute-2 nova_compute[226829]: 2026-01-31 08:27:37.436 226833 DEBUG oslo_concurrency.lockutils [None req-da77b395-57cb-4984-98ce-a13e76819bc4 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:37.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:37 compute-2 sudo[301260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:27:37 compute-2 sudo[301260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:37 compute-2 sudo[301260]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:37 compute-2 sudo[301285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:27:37 compute-2 sudo[301285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:37 compute-2 sudo[301285]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e361 e361: 3 total, 3 up, 3 in
Jan 31 08:27:37 compute-2 sudo[301310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:27:37 compute-2 sudo[301310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:37 compute-2 sudo[301310]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:37 compute-2 sudo[301335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 08:27:37 compute-2 sudo[301335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:38 compute-2 ceph-mon[77282]: pgmap v2820: 305 pgs: 305 active+clean; 689 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.1 MiB/s wr, 164 op/s
Jan 31 08:27:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/953468057' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1320643255' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:38.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:38 compute-2 podman[301435]: 2026-01-31 08:27:38.346183144 +0000 UTC m=+0.064162132 container exec 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:27:38 compute-2 podman[301435]: 2026-01-31 08:27:38.441407638 +0000 UTC m=+0.159386596 container exec_died 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:27:38 compute-2 nova_compute[226829]: 2026-01-31 08:27:38.533 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:38 compute-2 nova_compute[226829]: 2026-01-31 08:27:38.533 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:38 compute-2 nova_compute[226829]: 2026-01-31 08:27:38.534 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:39 compute-2 podman[301586]: 2026-01-31 08:27:39.008325562 +0000 UTC m=+0.105566156 container exec f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 08:27:39 compute-2 podman[301607]: 2026-01-31 08:27:39.077547849 +0000 UTC m=+0.054543360 container exec_died f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 08:27:39 compute-2 podman[301586]: 2026-01-31 08:27:39.134591008 +0000 UTC m=+0.231831582 container exec_died f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 08:27:39 compute-2 podman[301651]: 2026-01-31 08:27:39.387291744 +0000 UTC m=+0.085015017 container exec 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, name=keepalived, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, release=1793, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9)
Jan 31 08:27:39 compute-2 podman[301651]: 2026-01-31 08:27:39.422425198 +0000 UTC m=+0.120148481 container exec_died 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, release=1793, vcs-type=git, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, name=keepalived, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., description=keepalived for Ceph)
Jan 31 08:27:39 compute-2 sudo[301335]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:39 compute-2 nova_compute[226829]: 2026-01-31 08:27:39.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:39 compute-2 nova_compute[226829]: 2026-01-31 08:27:39.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:27:39 compute-2 nova_compute[226829]: 2026-01-31 08:27:39.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:27:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:39.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:39 compute-2 ceph-mon[77282]: osdmap e361: 3 total, 3 up, 3 in
Jan 31 08:27:39 compute-2 nova_compute[226829]: 2026-01-31 08:27:39.831 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:39 compute-2 nova_compute[226829]: 2026-01-31 08:27:39.936 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:27:39 compute-2 nova_compute[226829]: 2026-01-31 08:27:39.936 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:27:39 compute-2 nova_compute[226829]: 2026-01-31 08:27:39.937 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:27:39 compute-2 nova_compute[226829]: 2026-01-31 08:27:39.937 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 651d6b65-a0ee-4942-bf60-88b037eb6508 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:27:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:40.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:40 compute-2 sudo[301685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:27:40 compute-2 sudo[301685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:40 compute-2 sudo[301685]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:40 compute-2 sudo[301710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:27:40 compute-2 sudo[301710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:40 compute-2 sudo[301710]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:40 compute-2 sudo[301735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:27:40 compute-2 sudo[301735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:40 compute-2 sudo[301735]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:40 compute-2 sudo[301760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:27:40 compute-2 sudo[301760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:40 compute-2 nova_compute[226829]: 2026-01-31 08:27:40.541 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:40 compute-2 ceph-mon[77282]: pgmap v2822: 305 pgs: 305 active+clean; 695 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.9 MiB/s wr, 190 op/s
Jan 31 08:27:40 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:27:40 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:27:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2333100646' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:40 compute-2 sudo[301760]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:27:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:41.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:27:41 compute-2 nova_compute[226829]: 2026-01-31 08:27:41.797 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:42 compute-2 ceph-mon[77282]: pgmap v2823: 305 pgs: 305 active+clean; 713 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.5 MiB/s wr, 199 op/s
Jan 31 08:27:42 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:27:42 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:27:42 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:27:42 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:27:42 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:27:42 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:27:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:42.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:43 compute-2 ceph-mon[77282]: pgmap v2824: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.8 MiB/s wr, 210 op/s
Jan 31 08:27:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:43.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 31 08:27:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:44.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 31 08:27:44 compute-2 nova_compute[226829]: 2026-01-31 08:27:44.395 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Updating instance_info_cache with network_info: [{"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:27:44 compute-2 nova_compute[226829]: 2026-01-31 08:27:44.804 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:27:44 compute-2 nova_compute[226829]: 2026-01-31 08:27:44.805 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:27:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:45 compute-2 nova_compute[226829]: 2026-01-31 08:27:45.543 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:27:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:45.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:27:45 compute-2 ovn_controller[133834]: 2026-01-31T08:27:45Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:75:f3:24 10.100.0.11
Jan 31 08:27:45 compute-2 ceph-mon[77282]: pgmap v2825: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.6 MiB/s wr, 204 op/s
Jan 31 08:27:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/373990389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:27:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:46.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:46 compute-2 nova_compute[226829]: 2026-01-31 08:27:46.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:46 compute-2 nova_compute[226829]: 2026-01-31 08:27:46.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:46 compute-2 nova_compute[226829]: 2026-01-31 08:27:46.798 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:47 compute-2 nova_compute[226829]: 2026-01-31 08:27:47.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:47.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:48 compute-2 nova_compute[226829]: 2026-01-31 08:27:48.134 226833 DEBUG nova.compute.manager [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Jan 31 08:27:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:48.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:49 compute-2 nova_compute[226829]: 2026-01-31 08:27:49.372 226833 DEBUG oslo_concurrency.lockutils [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:49 compute-2 nova_compute[226829]: 2026-01-31 08:27:49.373 226833 DEBUG oslo_concurrency.lockutils [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:49 compute-2 nova_compute[226829]: 2026-01-31 08:27:49.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:49.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:50.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:50 compute-2 nova_compute[226829]: 2026-01-31 08:27:50.191 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:50 compute-2 nova_compute[226829]: 2026-01-31 08:27:50.222 226833 DEBUG nova.objects.instance [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'pci_requests' on Instance uuid e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:27:50 compute-2 podman[301822]: 2026-01-31 08:27:50.228347777 +0000 UTC m=+0.096200541 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:27:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:50 compute-2 nova_compute[226829]: 2026-01-31 08:27:50.489 226833 DEBUG nova.virt.hardware [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:27:50 compute-2 nova_compute[226829]: 2026-01-31 08:27:50.490 226833 INFO nova.compute.claims [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:27:50 compute-2 nova_compute[226829]: 2026-01-31 08:27:50.490 226833 DEBUG nova.objects.instance [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'resources' on Instance uuid e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:27:50 compute-2 nova_compute[226829]: 2026-01-31 08:27:50.545 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:50 compute-2 nova_compute[226829]: 2026-01-31 08:27:50.722 226833 DEBUG nova.objects.instance [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'pci_devices' on Instance uuid e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:27:51 compute-2 sudo[301848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:27:51 compute-2 sudo[301848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:51 compute-2 sudo[301848]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:51 compute-2 sudo[301873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:27:51 compute-2 sudo[301873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:51 compute-2 sudo[301873]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:51 compute-2 nova_compute[226829]: 2026-01-31 08:27:51.558 226833 INFO nova.compute.resource_tracker [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Updating resource usage from migration 9630bd79-bebe-4b48-b09b-b3f1c7a6ac9a
Jan 31 08:27:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:51.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:51 compute-2 nova_compute[226829]: 2026-01-31 08:27:51.656 226833 DEBUG oslo_concurrency.processutils [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:51 compute-2 nova_compute[226829]: 2026-01-31 08:27:51.800 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:27:52 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/933239023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:27:52 compute-2 nova_compute[226829]: 2026-01-31 08:27:52.108 226833 DEBUG oslo_concurrency.processutils [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:52 compute-2 nova_compute[226829]: 2026-01-31 08:27:52.116 226833 DEBUG nova.compute.provider_tree [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:27:52 compute-2 nova_compute[226829]: 2026-01-31 08:27:52.184 226833 DEBUG nova.scheduler.client.report [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:27:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:52.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:52 compute-2 nova_compute[226829]: 2026-01-31 08:27:52.348 226833 DEBUG oslo_concurrency.lockutils [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 2.975s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:52 compute-2 nova_compute[226829]: 2026-01-31 08:27:52.349 226833 INFO nova.compute.manager [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Migrating
Jan 31 08:27:52 compute-2 nova_compute[226829]: 2026-01-31 08:27:52.355 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 2.164s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:52 compute-2 nova_compute[226829]: 2026-01-31 08:27:52.355 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:52 compute-2 nova_compute[226829]: 2026-01-31 08:27:52.355 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:27:52 compute-2 nova_compute[226829]: 2026-01-31 08:27:52.355 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:27:52 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/377933433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:27:52 compute-2 nova_compute[226829]: 2026-01-31 08:27:52.766 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:52 compute-2 nova_compute[226829]: 2026-01-31 08:27:52.956 226833 DEBUG oslo_concurrency.lockutils [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:27:52 compute-2 nova_compute[226829]: 2026-01-31 08:27:52.956 226833 DEBUG oslo_concurrency.lockutils [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquired lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:27:52 compute-2 nova_compute[226829]: 2026-01-31 08:27:52.957 226833 DEBUG nova.network.neutron [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:27:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:27:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:53.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:27:53 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma missed beacon ack from the monitors
Jan 31 08:27:53 compute-2 nova_compute[226829]: 2026-01-31 08:27:53.732 226833 INFO nova.compute.manager [None req-ac52a355-953c-46b2-b4db-fb51c751313f 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Get console output
Jan 31 08:27:53 compute-2 nova_compute[226829]: 2026-01-31 08:27:53.738 267670 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 31 08:27:53 compute-2 nova_compute[226829]: 2026-01-31 08:27:53.753 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:27:53 compute-2 nova_compute[226829]: 2026-01-31 08:27:53.753 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:27:53 compute-2 nova_compute[226829]: 2026-01-31 08:27:53.758 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:27:53 compute-2 nova_compute[226829]: 2026-01-31 08:27:53.758 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:27:53 compute-2 nova_compute[226829]: 2026-01-31 08:27:53.759 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:27:53 compute-2 nova_compute[226829]: 2026-01-31 08:27:53.910 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:27:53 compute-2 nova_compute[226829]: 2026-01-31 08:27:53.911 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3777MB free_disk=20.693660736083984GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:27:53 compute-2 nova_compute[226829]: 2026-01-31 08:27:53.911 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:53 compute-2 nova_compute[226829]: 2026-01-31 08:27:53.912 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:54.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:54 compute-2 nova_compute[226829]: 2026-01-31 08:27:54.384 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Applying migration context for instance e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 as it has an incoming, in-progress migration 9630bd79-bebe-4b48-b09b-b3f1c7a6ac9a. Migration status is pre-migrating _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950
Jan 31 08:27:54 compute-2 nova_compute[226829]: 2026-01-31 08:27:54.385 226833 INFO nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Updating resource usage from migration 9630bd79-bebe-4b48-b09b-b3f1c7a6ac9a
Jan 31 08:27:54 compute-2 nova_compute[226829]: 2026-01-31 08:27:54.410 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 651d6b65-a0ee-4942-bf60-88b037eb6508 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:27:54 compute-2 nova_compute[226829]: 2026-01-31 08:27:54.410 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Migration 9630bd79-bebe-4b48-b09b-b3f1c7a6ac9a is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Jan 31 08:27:54 compute-2 nova_compute[226829]: 2026-01-31 08:27:54.411 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:27:54 compute-2 nova_compute[226829]: 2026-01-31 08:27:54.411 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:27:54 compute-2 nova_compute[226829]: 2026-01-31 08:27:54.412 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=960MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:27:54 compute-2 nova_compute[226829]: 2026-01-31 08:27:54.526 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:27:54 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2599037876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:27:54 compute-2 nova_compute[226829]: 2026-01-31 08:27:54.960 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:54 compute-2 nova_compute[226829]: 2026-01-31 08:27:54.965 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:27:55 compute-2 nova_compute[226829]: 2026-01-31 08:27:55.080 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:27:55 compute-2 nova_compute[226829]: 2026-01-31 08:27:55.330 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:27:55 compute-2 nova_compute[226829]: 2026-01-31 08:27:55.330 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.419s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:55 compute-2 nova_compute[226829]: 2026-01-31 08:27:55.547 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:55.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:56.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:56 compute-2 nova_compute[226829]: 2026-01-31 08:27:56.803 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).paxos(paxos active c 5774..6368) lease_timeout -- calling new election
Jan 31 08:27:57 compute-2 nova_compute[226829]: 2026-01-31 08:27:57.337 226833 DEBUG nova.network.neutron [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Updating instance_info_cache with network_info: [{"id": "a3ff723d-9e34-45ef-881d-6534126ae169", "address": "fa:16:3e:1b:c5:21", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ff723d-9e", "ovs_interfaceid": "a3ff723d-9e34-45ef-881d-6534126ae169", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:27:57 compute-2 nova_compute[226829]: 2026-01-31 08:27:57.455 226833 DEBUG oslo_concurrency.lockutils [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Releasing lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:27:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:57.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:57 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma missed beacon ack from the monitors
Jan 31 08:27:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:58.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:58 compute-2 nova_compute[226829]: 2026-01-31 08:27:58.623 226833 DEBUG nova.virt.libvirt.driver [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Jan 31 08:27:58 compute-2 nova_compute[226829]: 2026-01-31 08:27:58.628 226833 DEBUG nova.virt.libvirt.driver [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 08:27:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:27:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:59.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:00.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:00 compute-2 nova_compute[226829]: 2026-01-31 08:28:00.549 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:00 compute-2 ceph-mon[77282]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Jan 31 08:28:00 compute-2 ceph-mon[77282]: paxos.1).electionLogic(46) init, last seen epoch 46
Jan 31 08:28:00 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 08:28:01 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:28:01 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:28:01 compute-2 podman[301972]: 2026-01-31 08:28:01.228405766 +0000 UTC m=+0.098682529 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 08:28:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:28:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:01.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:28:01 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma missed beacon ack from the monitors
Jan 31 08:28:01 compute-2 nova_compute[226829]: 2026-01-31 08:28:01.660 226833 INFO nova.virt.libvirt.driver [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Instance shutdown successfully after 3 seconds.
Jan 31 08:28:01 compute-2 nova_compute[226829]: 2026-01-31 08:28:01.807 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:02.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:02 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 08:28:02 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma MDS connection to Monitors appears to be laggy; 16.8575s since last acked beacon
Jan 31 08:28:02 compute-2 ceph-mds[84366]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Jan 31 08:28:02 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:28:02 compute-2 nova_compute[226829]: 2026-01-31 08:28:02.539 226833 DEBUG nova.compute.manager [req-a042d145-e788-46f3-9042-fd0dbdfea596 req-e822ced3-8f9f-45e5-8fd7-258830ef3fb7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received event network-changed-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:28:02 compute-2 nova_compute[226829]: 2026-01-31 08:28:02.540 226833 DEBUG nova.compute.manager [req-a042d145-e788-46f3-9042-fd0dbdfea596 req-e822ced3-8f9f-45e5-8fd7-258830ef3fb7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Refreshing instance network info cache due to event network-changed-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:28:02 compute-2 nova_compute[226829]: 2026-01-31 08:28:02.540 226833 DEBUG oslo_concurrency.lockutils [req-a042d145-e788-46f3-9042-fd0dbdfea596 req-e822ced3-8f9f-45e5-8fd7-258830ef3fb7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:28:02 compute-2 nova_compute[226829]: 2026-01-31 08:28:02.541 226833 DEBUG oslo_concurrency.lockutils [req-a042d145-e788-46f3-9042-fd0dbdfea596 req-e822ced3-8f9f-45e5-8fd7-258830ef3fb7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:28:02 compute-2 nova_compute[226829]: 2026-01-31 08:28:02.541 226833 DEBUG nova.network.neutron [req-a042d145-e788-46f3-9042-fd0dbdfea596 req-e822ced3-8f9f-45e5-8fd7-258830ef3fb7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Refreshing network info cache for port ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:28:02 compute-2 ceph-mon[77282]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:28:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:28:03 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/346384978' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:28:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:28:03 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/346384978' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:28:03 compute-2 kernel: tapa3ff723d-9e (unregistering): left promiscuous mode
Jan 31 08:28:03 compute-2 NetworkManager[48999]: <info>  [1769848083.5317] device (tapa3ff723d-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:28:03 compute-2 ovn_controller[133834]: 2026-01-31T08:28:03Z|00664|binding|INFO|Releasing lport a3ff723d-9e34-45ef-881d-6534126ae169 from this chassis (sb_readonly=0)
Jan 31 08:28:03 compute-2 ovn_controller[133834]: 2026-01-31T08:28:03Z|00665|binding|INFO|Setting lport a3ff723d-9e34-45ef-881d-6534126ae169 down in Southbound
Jan 31 08:28:03 compute-2 nova_compute[226829]: 2026-01-31 08:28:03.540 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:03 compute-2 ovn_controller[133834]: 2026-01-31T08:28:03Z|00666|binding|INFO|Removing iface tapa3ff723d-9e ovn-installed in OVS
Jan 31 08:28:03 compute-2 nova_compute[226829]: 2026-01-31 08:28:03.544 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:03 compute-2 nova_compute[226829]: 2026-01-31 08:28:03.548 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:03 compute-2 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d000000a4.scope: Deactivated successfully.
Jan 31 08:28:03 compute-2 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d000000a4.scope: Consumed 15.375s CPU time.
Jan 31 08:28:03 compute-2 systemd-machined[195142]: Machine qemu-75-instance-000000a4 terminated.
Jan 31 08:28:03 compute-2 nova_compute[226829]: 2026-01-31 08:28:03.593 226833 DEBUG oslo_concurrency.lockutils [None req-a71080cb-e4f7-40e8-a951-7b17ffd948ac 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "651d6b65-a0ee-4942-bf60-88b037eb6508" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:03 compute-2 nova_compute[226829]: 2026-01-31 08:28:03.594 226833 DEBUG oslo_concurrency.lockutils [None req-a71080cb-e4f7-40e8-a951-7b17ffd948ac 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:03 compute-2 nova_compute[226829]: 2026-01-31 08:28:03.594 226833 DEBUG oslo_concurrency.lockutils [None req-a71080cb-e4f7-40e8-a951-7b17ffd948ac 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:03 compute-2 nova_compute[226829]: 2026-01-31 08:28:03.595 226833 DEBUG oslo_concurrency.lockutils [None req-a71080cb-e4f7-40e8-a951-7b17ffd948ac 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:03 compute-2 nova_compute[226829]: 2026-01-31 08:28:03.595 226833 DEBUG oslo_concurrency.lockutils [None req-a71080cb-e4f7-40e8-a951-7b17ffd948ac 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:03 compute-2 nova_compute[226829]: 2026-01-31 08:28:03.597 226833 INFO nova.compute.manager [None req-a71080cb-e4f7-40e8-a951-7b17ffd948ac 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Terminating instance
Jan 31 08:28:03 compute-2 nova_compute[226829]: 2026-01-31 08:28:03.599 226833 DEBUG nova.compute.manager [None req-a71080cb-e4f7-40e8-a951-7b17ffd948ac 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:28:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:03.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:03 compute-2 nova_compute[226829]: 2026-01-31 08:28:03.706 226833 INFO nova.virt.libvirt.driver [-] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Instance destroyed successfully.
Jan 31 08:28:03 compute-2 nova_compute[226829]: 2026-01-31 08:28:03.708 226833 DEBUG nova.virt.libvirt.vif [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:26:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-0',id=164,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMx4MwFIYcLufTgJTIjkaLBrJONZCyY8Yf6X6pbg0U3Us81VO6LfliTNhhDzgfgfMWpf9GXPg5uphWD0tDnxS1Zf2IaRx1ENKXJOF1zVaOJTSt3BjSDZpbJsUpD0/zLEPw==',key_name='tempest-keypair-253684506',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:27:00Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='48bbdbdee526499e90da7e971ede68d3',ramdisk_id='',reservation_id='r-ec12ij1b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-2017021026',owner_user_name='tempest-AttachVolumeMultiAttachTest-2017021026-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:27:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='85dfa8546d9942648bb4197c8b1947e3',uuid=e89132fd-2d0c-475e-a3c5-0407e4cbbbb8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3ff723d-9e34-45ef-881d-6534126ae169", "address": "fa:16:3e:1b:c5:21", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "vif_mac": "fa:16:3e:1b:c5:21"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ff723d-9e", "ovs_interfaceid": "a3ff723d-9e34-45ef-881d-6534126ae169", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:28:03 compute-2 nova_compute[226829]: 2026-01-31 08:28:03.709 226833 DEBUG nova.network.os_vif_util [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converting VIF {"id": "a3ff723d-9e34-45ef-881d-6534126ae169", "address": "fa:16:3e:1b:c5:21", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "vif_mac": "fa:16:3e:1b:c5:21"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ff723d-9e", "ovs_interfaceid": "a3ff723d-9e34-45ef-881d-6534126ae169", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:28:03 compute-2 nova_compute[226829]: 2026-01-31 08:28:03.710 226833 DEBUG nova.network.os_vif_util [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1b:c5:21,bridge_name='br-int',has_traffic_filtering=True,id=a3ff723d-9e34-45ef-881d-6534126ae169,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ff723d-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:28:03 compute-2 nova_compute[226829]: 2026-01-31 08:28:03.710 226833 DEBUG os_vif [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1b:c5:21,bridge_name='br-int',has_traffic_filtering=True,id=a3ff723d-9e34-45ef-881d-6534126ae169,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ff723d-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:28:03 compute-2 nova_compute[226829]: 2026-01-31 08:28:03.715 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:03 compute-2 nova_compute[226829]: 2026-01-31 08:28:03.716 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3ff723d-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:28:03 compute-2 ceph-mon[77282]: pgmap v2826: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.6 MiB/s wr, 211 op/s
Jan 31 08:28:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3553281191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:03 compute-2 nova_compute[226829]: 2026-01-31 08:28:03.720 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:03 compute-2 nova_compute[226829]: 2026-01-31 08:28:03.721 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:03.720 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:c5:21 10.100.0.5'], port_security=['fa:16:3e:1b:c5:21 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e89132fd-2d0c-475e-a3c5-0407e4cbbbb8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbdbdee526499e90da7e971ede68d3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8d5863f6-4aa0-486a-96ed-eb36f7d4a61d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=379aaecc-3dde-4f00-82cf-dc8bd8367d4b, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=a3ff723d-9e34-45ef-881d-6534126ae169) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:28:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:03.723 143841 INFO neutron.agent.ovn.metadata.agent [-] Port a3ff723d-9e34-45ef-881d-6534126ae169 in datapath 26ad6a8f-33d5-432e-83d3-63a9d2f165ad unbound from our chassis
Jan 31 08:28:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:03.726 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 26ad6a8f-33d5-432e-83d3-63a9d2f165ad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:28:03 compute-2 nova_compute[226829]: 2026-01-31 08:28:03.727 226833 INFO os_vif [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1b:c5:21,bridge_name='br-int',has_traffic_filtering=True,id=a3ff723d-9e34-45ef-881d-6534126ae169,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ff723d-9e')
Jan 31 08:28:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:03.728 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[298ab033-f97c-4cb5-bdd7-4de0ef13dcc0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:03.729 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad namespace which is not needed anymore
Jan 31 08:28:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 08:28:03 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma  MDS is no longer laggy
Jan 31 08:28:03 compute-2 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[300702]: [NOTICE]   (300706) : haproxy version is 2.8.14-c23fe91
Jan 31 08:28:03 compute-2 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[300702]: [NOTICE]   (300706) : path to executable is /usr/sbin/haproxy
Jan 31 08:28:03 compute-2 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[300702]: [WARNING]  (300706) : Exiting Master process...
Jan 31 08:28:03 compute-2 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[300702]: [WARNING]  (300706) : Exiting Master process...
Jan 31 08:28:03 compute-2 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[300702]: [ALERT]    (300706) : Current worker (300708) exited with code 143 (Terminated)
Jan 31 08:28:03 compute-2 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[300702]: [WARNING]  (300706) : All workers exited. Exiting... (0)
Jan 31 08:28:03 compute-2 systemd[1]: libpod-854f11d8d7ef31db7f99a3062e2e3e5807fcfeb6443922456615351350948535.scope: Deactivated successfully.
Jan 31 08:28:03 compute-2 conmon[300702]: conmon 854f11d8d7ef31db7f99 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-854f11d8d7ef31db7f99a3062e2e3e5807fcfeb6443922456615351350948535.scope/container/memory.events
Jan 31 08:28:03 compute-2 podman[302029]: 2026-01-31 08:28:03.903326517 +0000 UTC m=+0.059222358 container died 854f11d8d7ef31db7f99a3062e2e3e5807fcfeb6443922456615351350948535 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:28:03 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-854f11d8d7ef31db7f99a3062e2e3e5807fcfeb6443922456615351350948535-userdata-shm.mount: Deactivated successfully.
Jan 31 08:28:03 compute-2 systemd[1]: var-lib-containers-storage-overlay-e90e4124f66e1f1fa984bb617bd7f7496a89441dcd3d941184d9a52beeb504be-merged.mount: Deactivated successfully.
Jan 31 08:28:03 compute-2 podman[302029]: 2026-01-31 08:28:03.998345806 +0000 UTC m=+0.154241617 container cleanup 854f11d8d7ef31db7f99a3062e2e3e5807fcfeb6443922456615351350948535 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 08:28:04 compute-2 systemd[1]: libpod-conmon-854f11d8d7ef31db7f99a3062e2e3e5807fcfeb6443922456615351350948535.scope: Deactivated successfully.
Jan 31 08:28:04 compute-2 podman[302057]: 2026-01-31 08:28:04.066498245 +0000 UTC m=+0.047603392 container remove 854f11d8d7ef31db7f99a3062e2e3e5807fcfeb6443922456615351350948535 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 08:28:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:04.071 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e70e5a3c-7f5e-48d5-8a7e-696e1191f8ba]: (4, ('Sat Jan 31 08:28:03 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad (854f11d8d7ef31db7f99a3062e2e3e5807fcfeb6443922456615351350948535)\n854f11d8d7ef31db7f99a3062e2e3e5807fcfeb6443922456615351350948535\nSat Jan 31 08:28:04 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad (854f11d8d7ef31db7f99a3062e2e3e5807fcfeb6443922456615351350948535)\n854f11d8d7ef31db7f99a3062e2e3e5807fcfeb6443922456615351350948535\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:04.073 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0a7131c8-103c-4680-97b3-79ee58d4e321]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:04.075 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26ad6a8f-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:28:04 compute-2 nova_compute[226829]: 2026-01-31 08:28:04.078 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:04 compute-2 kernel: tap26ad6a8f-30: left promiscuous mode
Jan 31 08:28:04 compute-2 nova_compute[226829]: 2026-01-31 08:28:04.084 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:04.088 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[157c3552-3a5e-4065-8490-5a60a9968de0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:04.102 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2b0a604a-e69f-4a85-9132-772e16d08e08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:04.104 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b260f99c-0776-4f3d-b9f7-fdc6bcf6a09f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:04.117 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6ab5b904-5675-40a5-9e23-6e8c72977eb8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827195, 'reachable_time': 44413, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302071, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:04 compute-2 systemd[1]: run-netns-ovnmeta\x2d26ad6a8f\x2d33d5\x2d432e\x2d83d3\x2d63a9d2f165ad.mount: Deactivated successfully.
Jan 31 08:28:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:04.122 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:28:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:04.123 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[ca96a729-7e26-4b90-a544-002392255dcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:04.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:04 compute-2 kernel: tapec7fbb6b-9a (unregistering): left promiscuous mode
Jan 31 08:28:04 compute-2 NetworkManager[48999]: <info>  [1769848084.7099] device (tapec7fbb6b-9a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:28:04 compute-2 ovn_controller[133834]: 2026-01-31T08:28:04Z|00667|binding|INFO|Releasing lport ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f from this chassis (sb_readonly=0)
Jan 31 08:28:04 compute-2 ovn_controller[133834]: 2026-01-31T08:28:04Z|00668|binding|INFO|Setting lport ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f down in Southbound
Jan 31 08:28:04 compute-2 nova_compute[226829]: 2026-01-31 08:28:04.718 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:04 compute-2 ovn_controller[133834]: 2026-01-31T08:28:04Z|00669|binding|INFO|Removing iface tapec7fbb6b-9a ovn-installed in OVS
Jan 31 08:28:04 compute-2 nova_compute[226829]: 2026-01-31 08:28:04.721 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:04 compute-2 nova_compute[226829]: 2026-01-31 08:28:04.731 226833 DEBUG nova.virt.libvirt.driver [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:28:04 compute-2 nova_compute[226829]: 2026-01-31 08:28:04.733 226833 DEBUG nova.virt.libvirt.driver [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:28:04 compute-2 nova_compute[226829]: 2026-01-31 08:28:04.734 226833 DEBUG nova.virt.libvirt.driver [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:28:04 compute-2 nova_compute[226829]: 2026-01-31 08:28:04.737 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:04 compute-2 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d000000a1.scope: Deactivated successfully.
Jan 31 08:28:04 compute-2 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d000000a1.scope: Consumed 14.762s CPU time.
Jan 31 08:28:04 compute-2 systemd-machined[195142]: Machine qemu-76-instance-000000a1 terminated.
Jan 31 08:28:04 compute-2 NetworkManager[48999]: <info>  [1769848084.8192] manager: (tapec7fbb6b-9a): new Tun device (/org/freedesktop/NetworkManager/Devices/331)
Jan 31 08:28:04 compute-2 nova_compute[226829]: 2026-01-31 08:28:04.841 226833 INFO nova.virt.libvirt.driver [-] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Instance destroyed successfully.
Jan 31 08:28:04 compute-2 nova_compute[226829]: 2026-01-31 08:28:04.841 226833 DEBUG nova.objects.instance [None req-a71080cb-e4f7-40e8-a951-7b17ffd948ac 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'resources' on Instance uuid 651d6b65-a0ee-4942-bf60-88b037eb6508 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:28:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:04.970 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:f3:24 10.100.0.11'], port_security=['fa:16:3e:75:f3:24 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '651d6b65-a0ee-4942-bf60-88b037eb6508', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cc669a9b-1a99-4cea-8b35-6d932fb2087c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '12', 'neutron:security_group_ids': '53c8d6c3-3fc8-4e05-8f45-013b15b35751', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eeea185c-a70e-4e33-a1d7-88e2fb6e75b6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:28:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:04.972 143841 INFO neutron.agent.ovn.metadata.agent [-] Port ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f in datapath cc669a9b-1a99-4cea-8b35-6d932fb2087c unbound from our chassis
Jan 31 08:28:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:04.975 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cc669a9b-1a99-4cea-8b35-6d932fb2087c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:28:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:04.976 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d14bbe88-80fe-44a4-947d-fcbc49c87ab8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:04.977 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c namespace which is not needed anymore
Jan 31 08:28:05 compute-2 neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c[301214]: [NOTICE]   (301218) : haproxy version is 2.8.14-c23fe91
Jan 31 08:28:05 compute-2 neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c[301214]: [NOTICE]   (301218) : path to executable is /usr/sbin/haproxy
Jan 31 08:28:05 compute-2 neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c[301214]: [WARNING]  (301218) : Exiting Master process...
Jan 31 08:28:05 compute-2 neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c[301214]: [WARNING]  (301218) : Exiting Master process...
Jan 31 08:28:05 compute-2 neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c[301214]: [ALERT]    (301218) : Current worker (301220) exited with code 143 (Terminated)
Jan 31 08:28:05 compute-2 neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c[301214]: [WARNING]  (301218) : All workers exited. Exiting... (0)
Jan 31 08:28:05 compute-2 systemd[1]: libpod-e1b8dd63d844b3a81de5b4734f36e1a8713e758840bc99a00d4d4e476d745a5f.scope: Deactivated successfully.
Jan 31 08:28:05 compute-2 podman[302104]: 2026-01-31 08:28:05.101676234 +0000 UTC m=+0.045431394 container died e1b8dd63d844b3a81de5b4734f36e1a8713e758840bc99a00d4d4e476d745a5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 08:28:05 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e1b8dd63d844b3a81de5b4734f36e1a8713e758840bc99a00d4d4e476d745a5f-userdata-shm.mount: Deactivated successfully.
Jan 31 08:28:05 compute-2 systemd[1]: var-lib-containers-storage-overlay-ebb5a8836cc45928f8a65b90aa4ab7610d9d882bc41725630addf4ad66c8d0d4-merged.mount: Deactivated successfully.
Jan 31 08:28:05 compute-2 podman[302104]: 2026-01-31 08:28:05.132772958 +0000 UTC m=+0.076528168 container cleanup e1b8dd63d844b3a81de5b4734f36e1a8713e758840bc99a00d4d4e476d745a5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 08:28:05 compute-2 systemd[1]: libpod-conmon-e1b8dd63d844b3a81de5b4734f36e1a8713e758840bc99a00d4d4e476d745a5f.scope: Deactivated successfully.
Jan 31 08:28:05 compute-2 nova_compute[226829]: 2026-01-31 08:28:05.190 226833 DEBUG nova.compute.manager [req-27775446-f6e4-474d-b888-8fa2edac9cf1 req-842222a4-82a9-4166-8e93-7c4f9719c629 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Received event network-vif-unplugged-a3ff723d-9e34-45ef-881d-6534126ae169 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:28:05 compute-2 nova_compute[226829]: 2026-01-31 08:28:05.190 226833 DEBUG oslo_concurrency.lockutils [req-27775446-f6e4-474d-b888-8fa2edac9cf1 req-842222a4-82a9-4166-8e93-7c4f9719c629 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:05 compute-2 nova_compute[226829]: 2026-01-31 08:28:05.191 226833 DEBUG oslo_concurrency.lockutils [req-27775446-f6e4-474d-b888-8fa2edac9cf1 req-842222a4-82a9-4166-8e93-7c4f9719c629 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:05 compute-2 podman[302134]: 2026-01-31 08:28:05.191342957 +0000 UTC m=+0.038149996 container remove e1b8dd63d844b3a81de5b4734f36e1a8713e758840bc99a00d4d4e476d745a5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 08:28:05 compute-2 nova_compute[226829]: 2026-01-31 08:28:05.191 226833 DEBUG oslo_concurrency.lockutils [req-27775446-f6e4-474d-b888-8fa2edac9cf1 req-842222a4-82a9-4166-8e93-7c4f9719c629 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:05 compute-2 nova_compute[226829]: 2026-01-31 08:28:05.192 226833 DEBUG nova.compute.manager [req-27775446-f6e4-474d-b888-8fa2edac9cf1 req-842222a4-82a9-4166-8e93-7c4f9719c629 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] No waiting events found dispatching network-vif-unplugged-a3ff723d-9e34-45ef-881d-6534126ae169 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:28:05 compute-2 nova_compute[226829]: 2026-01-31 08:28:05.192 226833 WARNING nova.compute.manager [req-27775446-f6e4-474d-b888-8fa2edac9cf1 req-842222a4-82a9-4166-8e93-7c4f9719c629 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Received unexpected event network-vif-unplugged-a3ff723d-9e34-45ef-881d-6534126ae169 for instance with vm_state active and task_state resize_migrating.
Jan 31 08:28:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:05.195 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[62775197-995b-4bcb-830f-575ad6dd5009]: (4, ('Sat Jan 31 08:28:05 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c (e1b8dd63d844b3a81de5b4734f36e1a8713e758840bc99a00d4d4e476d745a5f)\ne1b8dd63d844b3a81de5b4734f36e1a8713e758840bc99a00d4d4e476d745a5f\nSat Jan 31 08:28:05 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c (e1b8dd63d844b3a81de5b4734f36e1a8713e758840bc99a00d4d4e476d745a5f)\ne1b8dd63d844b3a81de5b4734f36e1a8713e758840bc99a00d4d4e476d745a5f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:05.197 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e71002e2-bf74-4876-8940-3eeec955f807]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:05.198 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc669a9b-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:28:05 compute-2 kernel: tapcc669a9b-10: left promiscuous mode
Jan 31 08:28:05 compute-2 nova_compute[226829]: 2026-01-31 08:28:05.201 226833 DEBUG nova.virt.libvirt.vif [None req-a71080cb-e4f7-40e8-a951-7b17ffd948ac 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:26:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2107337279',display_name='tempest-TestNetworkAdvancedServerOps-server-2107337279',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2107337279',id=161,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWmLoXJJT8HJ06ToetETaDpi1L/xOsFHTQftPXbFuOs01yOf7k2Y65Sy9zq5BZjupL2Q9fI+9QOPb9ecuesAa9df6vRKXVMZCeU6kML4//7cHw0FciNooD1B0Aw/wAsgQ==',key_name='tempest-TestNetworkAdvancedServerOps-1409064792',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:27:31Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-ka16rdmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:27:31Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=651d6b65-a0ee-4942-bf60-88b037eb6508,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:28:05 compute-2 nova_compute[226829]: 2026-01-31 08:28:05.201 226833 DEBUG nova.network.os_vif_util [None req-a71080cb-e4f7-40e8-a951-7b17ffd948ac 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:28:05 compute-2 nova_compute[226829]: 2026-01-31 08:28:05.202 226833 DEBUG nova.network.os_vif_util [None req-a71080cb-e4f7-40e8-a951-7b17ffd948ac 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:75:f3:24,bridge_name='br-int',has_traffic_filtering=True,id=ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f,network=Network(cc669a9b-1a99-4cea-8b35-6d932fb2087c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec7fbb6b-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:28:05 compute-2 nova_compute[226829]: 2026-01-31 08:28:05.202 226833 DEBUG os_vif [None req-a71080cb-e4f7-40e8-a951-7b17ffd948ac 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:75:f3:24,bridge_name='br-int',has_traffic_filtering=True,id=ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f,network=Network(cc669a9b-1a99-4cea-8b35-6d932fb2087c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec7fbb6b-9a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:28:05 compute-2 nova_compute[226829]: 2026-01-31 08:28:05.204 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:05 compute-2 nova_compute[226829]: 2026-01-31 08:28:05.205 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec7fbb6b-9a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:28:05 compute-2 nova_compute[226829]: 2026-01-31 08:28:05.206 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:05 compute-2 nova_compute[226829]: 2026-01-31 08:28:05.206 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:05 compute-2 nova_compute[226829]: 2026-01-31 08:28:05.208 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:28:05 compute-2 nova_compute[226829]: 2026-01-31 08:28:05.210 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:05 compute-2 nova_compute[226829]: 2026-01-31 08:28:05.211 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:05.212 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b574b248-543d-433b-a43f-bb30790fb9e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:05 compute-2 nova_compute[226829]: 2026-01-31 08:28:05.213 226833 INFO os_vif [None req-a71080cb-e4f7-40e8-a951-7b17ffd948ac 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:75:f3:24,bridge_name='br-int',has_traffic_filtering=True,id=ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f,network=Network(cc669a9b-1a99-4cea-8b35-6d932fb2087c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec7fbb6b-9a')
Jan 31 08:28:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:05.232 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e0bff247-495d-444c-ad39-867a55d7a56b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:05.234 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[827ff583-8b9d-4580-9cae-b1141731c266]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:05.248 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[60c88723-6ef2-44da-b12d-c5d9fc94fab8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 830314, 'reachable_time': 15345, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302165, 'error': None, 'target': 'ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:05.250 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:28:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:05.250 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d2893a-46a9-4cb9-815b-9f5ca056f63e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:05 compute-2 systemd[1]: run-netns-ovnmeta\x2dcc669a9b\x2d1a99\x2d4cea\x2d8b35\x2d6d932fb2087c.mount: Deactivated successfully.
Jan 31 08:28:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:05 compute-2 ceph-mon[77282]: pgmap v2827: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.1 MiB/s wr, 202 op/s
Jan 31 08:28:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3567843601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:05 compute-2 ceph-mon[77282]: pgmap v2828: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 182 op/s
Jan 31 08:28:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/933239023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:05 compute-2 ceph-mon[77282]: pgmap v2829: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 727 KiB/s wr, 134 op/s
Jan 31 08:28:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/377933433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:05 compute-2 ceph-mon[77282]: pgmap v2830: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 41 KiB/s wr, 125 op/s
Jan 31 08:28:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2599037876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:05 compute-2 ceph-mon[77282]: pgmap v2831: 305 pgs: 305 active+clean; 724 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 38 KiB/s wr, 94 op/s
Jan 31 08:28:05 compute-2 ceph-mon[77282]: pgmap v2832: 305 pgs: 305 active+clean; 724 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 297 KiB/s rd, 13 KiB/s wr, 23 op/s
Jan 31 08:28:05 compute-2 ceph-mon[77282]: pgmap v2833: 305 pgs: 305 active+clean; 724 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 117 KiB/s rd, 29 KiB/s wr, 19 op/s
Jan 31 08:28:05 compute-2 ceph-mon[77282]: mon.compute-2 calling monitor election
Jan 31 08:28:05 compute-2 ceph-mon[77282]: mon.compute-1 calling monitor election
Jan 31 08:28:05 compute-2 ceph-mon[77282]: mon.compute-0 calling monitor election
Jan 31 08:28:05 compute-2 ceph-mon[77282]: pgmap v2834: 305 pgs: 305 active+clean; 724 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 28 KiB/s wr, 12 op/s
Jan 31 08:28:05 compute-2 ceph-mon[77282]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 08:28:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3308763206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/346384978' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:28:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/346384978' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:28:05 compute-2 ceph-mon[77282]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 08:28:05 compute-2 ceph-mon[77282]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 08:28:05 compute-2 ceph-mon[77282]: osdmap e361: 3 total, 3 up, 3 in
Jan 31 08:28:05 compute-2 ceph-mon[77282]: mgrmap e11: compute-0.hhuoua(active, since 82m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 08:28:05 compute-2 ceph-mon[77282]: overall HEALTH_OK
Jan 31 08:28:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:05.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:06.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:06 compute-2 nova_compute[226829]: 2026-01-31 08:28:06.234 226833 DEBUG nova.network.neutron [req-a042d145-e788-46f3-9042-fd0dbdfea596 req-e822ced3-8f9f-45e5-8fd7-258830ef3fb7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Updated VIF entry in instance network info cache for port ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:28:06 compute-2 nova_compute[226829]: 2026-01-31 08:28:06.235 226833 DEBUG nova.network.neutron [req-a042d145-e788-46f3-9042-fd0dbdfea596 req-e822ced3-8f9f-45e5-8fd7-258830ef3fb7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Updating instance_info_cache with network_info: [{"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:28:06 compute-2 nova_compute[226829]: 2026-01-31 08:28:06.302 226833 DEBUG oslo_concurrency.lockutils [req-a042d145-e788-46f3-9042-fd0dbdfea596 req-e822ced3-8f9f-45e5-8fd7-258830ef3fb7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:28:06 compute-2 nova_compute[226829]: 2026-01-31 08:28:06.662 226833 DEBUG nova.network.neutron [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Port a3ff723d-9e34-45ef-881d-6534126ae169 binding to destination host compute-2.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171
Jan 31 08:28:06 compute-2 nova_compute[226829]: 2026-01-31 08:28:06.808 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:06.904 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:06.905 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:06.905 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:06 compute-2 ceph-mon[77282]: pgmap v2835: 305 pgs: 305 active+clean; 724 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 28 KiB/s wr, 12 op/s
Jan 31 08:28:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2965806856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:07 compute-2 nova_compute[226829]: 2026-01-31 08:28:07.016 226833 DEBUG oslo_concurrency.lockutils [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:07 compute-2 nova_compute[226829]: 2026-01-31 08:28:07.016 226833 DEBUG oslo_concurrency.lockutils [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:07 compute-2 nova_compute[226829]: 2026-01-31 08:28:07.017 226833 DEBUG oslo_concurrency.lockutils [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:07 compute-2 nova_compute[226829]: 2026-01-31 08:28:07.611 226833 DEBUG oslo_concurrency.lockutils [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:28:07 compute-2 nova_compute[226829]: 2026-01-31 08:28:07.611 226833 DEBUG oslo_concurrency.lockutils [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquired lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:28:07 compute-2 nova_compute[226829]: 2026-01-31 08:28:07.612 226833 DEBUG nova.network.neutron [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:28:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:07.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:07 compute-2 nova_compute[226829]: 2026-01-31 08:28:07.946 226833 DEBUG nova.compute.manager [req-4f3a5112-2848-4a36-84c9-d5da97e13163 req-78c46765-6eb2-414e-9bc2-32e6789a11b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Received event network-vif-plugged-a3ff723d-9e34-45ef-881d-6534126ae169 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:28:07 compute-2 nova_compute[226829]: 2026-01-31 08:28:07.947 226833 DEBUG oslo_concurrency.lockutils [req-4f3a5112-2848-4a36-84c9-d5da97e13163 req-78c46765-6eb2-414e-9bc2-32e6789a11b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:07 compute-2 nova_compute[226829]: 2026-01-31 08:28:07.947 226833 DEBUG oslo_concurrency.lockutils [req-4f3a5112-2848-4a36-84c9-d5da97e13163 req-78c46765-6eb2-414e-9bc2-32e6789a11b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:07 compute-2 nova_compute[226829]: 2026-01-31 08:28:07.948 226833 DEBUG oslo_concurrency.lockutils [req-4f3a5112-2848-4a36-84c9-d5da97e13163 req-78c46765-6eb2-414e-9bc2-32e6789a11b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:07 compute-2 nova_compute[226829]: 2026-01-31 08:28:07.948 226833 DEBUG nova.compute.manager [req-4f3a5112-2848-4a36-84c9-d5da97e13163 req-78c46765-6eb2-414e-9bc2-32e6789a11b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] No waiting events found dispatching network-vif-plugged-a3ff723d-9e34-45ef-881d-6534126ae169 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:28:07 compute-2 nova_compute[226829]: 2026-01-31 08:28:07.949 226833 WARNING nova.compute.manager [req-4f3a5112-2848-4a36-84c9-d5da97e13163 req-78c46765-6eb2-414e-9bc2-32e6789a11b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Received unexpected event network-vif-plugged-a3ff723d-9e34-45ef-881d-6534126ae169 for instance with vm_state active and task_state resize_migrated.
Jan 31 08:28:07 compute-2 nova_compute[226829]: 2026-01-31 08:28:07.949 226833 DEBUG nova.compute.manager [req-4f3a5112-2848-4a36-84c9-d5da97e13163 req-78c46765-6eb2-414e-9bc2-32e6789a11b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received event network-vif-unplugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:28:07 compute-2 nova_compute[226829]: 2026-01-31 08:28:07.949 226833 DEBUG oslo_concurrency.lockutils [req-4f3a5112-2848-4a36-84c9-d5da97e13163 req-78c46765-6eb2-414e-9bc2-32e6789a11b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:07 compute-2 nova_compute[226829]: 2026-01-31 08:28:07.950 226833 DEBUG oslo_concurrency.lockutils [req-4f3a5112-2848-4a36-84c9-d5da97e13163 req-78c46765-6eb2-414e-9bc2-32e6789a11b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:07 compute-2 nova_compute[226829]: 2026-01-31 08:28:07.950 226833 DEBUG oslo_concurrency.lockutils [req-4f3a5112-2848-4a36-84c9-d5da97e13163 req-78c46765-6eb2-414e-9bc2-32e6789a11b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:07 compute-2 nova_compute[226829]: 2026-01-31 08:28:07.950 226833 DEBUG nova.compute.manager [req-4f3a5112-2848-4a36-84c9-d5da97e13163 req-78c46765-6eb2-414e-9bc2-32e6789a11b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] No waiting events found dispatching network-vif-unplugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:28:07 compute-2 nova_compute[226829]: 2026-01-31 08:28:07.951 226833 DEBUG nova.compute.manager [req-4f3a5112-2848-4a36-84c9-d5da97e13163 req-78c46765-6eb2-414e-9bc2-32e6789a11b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received event network-vif-unplugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:28:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:08.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:08 compute-2 ceph-mon[77282]: pgmap v2836: 305 pgs: 305 active+clean; 733 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.6 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Jan 31 08:28:09 compute-2 nova_compute[226829]: 2026-01-31 08:28:09.332 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:09 compute-2 nova_compute[226829]: 2026-01-31 08:28:09.332 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:28:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:28:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:09.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:28:09 compute-2 sudo[302173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:28:09 compute-2 sudo[302173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:09 compute-2 sudo[302173]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:09 compute-2 sudo[302198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:28:09 compute-2 sudo[302198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:09 compute-2 sudo[302198]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:09 compute-2 nova_compute[226829]: 2026-01-31 08:28:09.811 226833 INFO nova.virt.libvirt.driver [None req-a71080cb-e4f7-40e8-a951-7b17ffd948ac 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Deleting instance files /var/lib/nova/instances/651d6b65-a0ee-4942-bf60-88b037eb6508_del
Jan 31 08:28:09 compute-2 nova_compute[226829]: 2026-01-31 08:28:09.812 226833 INFO nova.virt.libvirt.driver [None req-a71080cb-e4f7-40e8-a951-7b17ffd948ac 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Deletion of /var/lib/nova/instances/651d6b65-a0ee-4942-bf60-88b037eb6508_del complete
Jan 31 08:28:09 compute-2 nova_compute[226829]: 2026-01-31 08:28:09.986 226833 INFO nova.compute.manager [None req-a71080cb-e4f7-40e8-a951-7b17ffd948ac 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Took 6.39 seconds to destroy the instance on the hypervisor.
Jan 31 08:28:09 compute-2 nova_compute[226829]: 2026-01-31 08:28:09.987 226833 DEBUG oslo.service.loopingcall [None req-a71080cb-e4f7-40e8-a951-7b17ffd948ac 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:28:09 compute-2 nova_compute[226829]: 2026-01-31 08:28:09.987 226833 DEBUG nova.compute.manager [-] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:28:09 compute-2 nova_compute[226829]: 2026-01-31 08:28:09.988 226833 DEBUG nova.network.neutron [-] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:28:10 compute-2 nova_compute[226829]: 2026-01-31 08:28:10.145 226833 DEBUG nova.compute.manager [req-c3f975bb-b81d-4298-8012-ee285b056548 req-5cf3b96c-8e11-4611-b475-3b958db524e5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:28:10 compute-2 nova_compute[226829]: 2026-01-31 08:28:10.146 226833 DEBUG oslo_concurrency.lockutils [req-c3f975bb-b81d-4298-8012-ee285b056548 req-5cf3b96c-8e11-4611-b475-3b958db524e5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:10 compute-2 nova_compute[226829]: 2026-01-31 08:28:10.146 226833 DEBUG oslo_concurrency.lockutils [req-c3f975bb-b81d-4298-8012-ee285b056548 req-5cf3b96c-8e11-4611-b475-3b958db524e5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:10 compute-2 nova_compute[226829]: 2026-01-31 08:28:10.147 226833 DEBUG oslo_concurrency.lockutils [req-c3f975bb-b81d-4298-8012-ee285b056548 req-5cf3b96c-8e11-4611-b475-3b958db524e5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:10 compute-2 nova_compute[226829]: 2026-01-31 08:28:10.147 226833 DEBUG nova.compute.manager [req-c3f975bb-b81d-4298-8012-ee285b056548 req-5cf3b96c-8e11-4611-b475-3b958db524e5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] No waiting events found dispatching network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:28:10 compute-2 nova_compute[226829]: 2026-01-31 08:28:10.148 226833 WARNING nova.compute.manager [req-c3f975bb-b81d-4298-8012-ee285b056548 req-5cf3b96c-8e11-4611-b475-3b958db524e5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received unexpected event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f for instance with vm_state active and task_state deleting.
Jan 31 08:28:10 compute-2 nova_compute[226829]: 2026-01-31 08:28:10.206 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:28:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:10.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:28:10 compute-2 ceph-mon[77282]: pgmap v2837: 305 pgs: 305 active+clean; 741 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 1.7 MiB/s wr, 27 op/s
Jan 31 08:28:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:28:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:28:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:11.003 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=67, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=66) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:28:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:11.005 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.010 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.059 226833 DEBUG nova.network.neutron [-] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.150 226833 INFO nova.compute.manager [-] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Took 1.16 seconds to deallocate network for instance.
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.240 226833 DEBUG oslo_concurrency.lockutils [None req-a71080cb-e4f7-40e8-a951-7b17ffd948ac 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.241 226833 DEBUG oslo_concurrency.lockutils [None req-a71080cb-e4f7-40e8-a951-7b17ffd948ac 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.328 226833 DEBUG nova.compute.manager [req-2950f455-345c-4876-ae59-d297fcacdafe req-16bb358e-78c0-42b2-99ff-650b99ff483d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received event network-vif-deleted-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.379 226833 DEBUG oslo_concurrency.processutils [None req-a71080cb-e4f7-40e8-a951-7b17ffd948ac 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:28:11 compute-2 sudo[302225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.489 226833 DEBUG nova.network.neutron [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Updating instance_info_cache with network_info: [{"id": "a3ff723d-9e34-45ef-881d-6534126ae169", "address": "fa:16:3e:1b:c5:21", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ff723d-9e", "ovs_interfaceid": "a3ff723d-9e34-45ef-881d-6534126ae169", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:28:11 compute-2 sudo[302225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:11 compute-2 sudo[302225]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.544 226833 DEBUG oslo_concurrency.lockutils [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Releasing lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:28:11 compute-2 sudo[302250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:28:11 compute-2 sudo[302250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:11 compute-2 sudo[302250]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:11.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:11 compute-2 ceph-mon[77282]: pgmap v2838: 305 pgs: 305 active+clean; 728 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 212 KiB/s rd, 2.7 MiB/s wr, 75 op/s
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.660 226833 DEBUG os_brick.utils [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.662 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.674 236868 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.674 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[d9acb08c-4d41-4edb-b830-7f61058ef604]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.675 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.681 236868 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.681 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[f82eab32-aa5b-40e6-8113-8c8f686e17ef]: (4, ('InitiatorName=iqn.1994-05.com.redhat:70a4e945afb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.683 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.695 236868 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.695 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[bc3a5e29-7d06-414f-9d26-c0e63605c8a9]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.697 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[cd2c01be-aa8b-4eb0-87e5-604f19175b1f]: (4, 'd14f084b-ec77-4fba-801f-103494d34b3a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.698 226833 DEBUG oslo_concurrency.processutils [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.730 226833 DEBUG oslo_concurrency.processutils [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.734 226833 DEBUG os_brick.initiator.connectors.lightos [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.735 226833 DEBUG os_brick.initiator.connectors.lightos [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.735 226833 DEBUG os_brick.initiator.connectors.lightos [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.736 226833 DEBUG os_brick.utils [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] <== get_connector_properties: return (74ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:70a4e945afb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'd14f084b-ec77-4fba-801f-103494d34b3a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.810 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:28:11 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4148727517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.848 226833 DEBUG oslo_concurrency.processutils [None req-a71080cb-e4f7-40e8-a951-7b17ffd948ac 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.856 226833 DEBUG nova.compute.provider_tree [None req-a71080cb-e4f7-40e8-a951-7b17ffd948ac 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.885 226833 DEBUG nova.scheduler.client.report [None req-a71080cb-e4f7-40e8-a951-7b17ffd948ac 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.917 226833 DEBUG oslo_concurrency.lockutils [None req-a71080cb-e4f7-40e8-a951-7b17ffd948ac 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:11 compute-2 nova_compute[226829]: 2026-01-31 08:28:11.996 226833 INFO nova.scheduler.client.report [None req-a71080cb-e4f7-40e8-a951-7b17ffd948ac 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Deleted allocations for instance 651d6b65-a0ee-4942-bf60-88b037eb6508
Jan 31 08:28:12 compute-2 nova_compute[226829]: 2026-01-31 08:28:12.082 226833 DEBUG oslo_concurrency.lockutils [None req-a71080cb-e4f7-40e8-a951-7b17ffd948ac 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.488s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:28:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:12.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:28:12 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:28:12 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4211918690' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:28:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4148727517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:13 compute-2 nova_compute[226829]: 2026-01-31 08:28:13.528 226833 DEBUG nova.virt.libvirt.driver [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Jan 31 08:28:13 compute-2 nova_compute[226829]: 2026-01-31 08:28:13.531 226833 DEBUG nova.virt.libvirt.driver [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 31 08:28:13 compute-2 nova_compute[226829]: 2026-01-31 08:28:13.531 226833 INFO nova.virt.libvirt.driver [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Creating image(s)
Jan 31 08:28:13 compute-2 nova_compute[226829]: 2026-01-31 08:28:13.587 226833 DEBUG nova.storage.rbd_utils [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] creating snapshot(nova-resize) on rbd image(e89132fd-2d0c-475e-a3c5-0407e4cbbbb8_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 08:28:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:13.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:13 compute-2 ceph-mon[77282]: pgmap v2839: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 221 KiB/s rd, 3.4 MiB/s wr, 84 op/s
Jan 31 08:28:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/4211918690' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:28:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/960312472' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:28:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e362 e362: 3 total, 3 up, 3 in
Jan 31 08:28:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:14.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:14 compute-2 nova_compute[226829]: 2026-01-31 08:28:14.361 226833 DEBUG nova.objects.instance [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:28:14 compute-2 nova_compute[226829]: 2026-01-31 08:28:14.645 226833 DEBUG nova.virt.libvirt.driver [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 08:28:14 compute-2 nova_compute[226829]: 2026-01-31 08:28:14.646 226833 DEBUG nova.virt.libvirt.driver [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Ensure instance console log exists: /var/lib/nova/instances/e89132fd-2d0c-475e-a3c5-0407e4cbbbb8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:28:14 compute-2 nova_compute[226829]: 2026-01-31 08:28:14.646 226833 DEBUG oslo_concurrency.lockutils [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:14 compute-2 nova_compute[226829]: 2026-01-31 08:28:14.646 226833 DEBUG oslo_concurrency.lockutils [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:14 compute-2 nova_compute[226829]: 2026-01-31 08:28:14.647 226833 DEBUG oslo_concurrency.lockutils [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:14 compute-2 nova_compute[226829]: 2026-01-31 08:28:14.649 226833 DEBUG nova.virt.libvirt.driver [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Start _get_guest_xml network_info=[{"id": "a3ff723d-9e34-45ef-881d-6534126ae169", "address": "fa:16:3e:1b:c5:21", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "vif_mac": "fa:16:3e:1b:c5:21"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ff723d-9e", "ovs_interfaceid": "a3ff723d-9e34-45ef-881d-6534126ae169", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'delete_on_termination': False, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': 'e89132fd-2d0c-475e-a3c5-0407e4cbbbb8', 'attached_at': '2026-01-31T08:28:13.000000', 'detached_at': '', 'volume_id': 'c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4', 'multiattach': True, 'serial': 'c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4'}, 'disk_bus': 'virtio', 'mount_device': '/dev/vdb', 'attachment_id': '13b69764-03f6-48ee-b1e0-87f9db57136b', 'boot_index': None, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:28:14 compute-2 nova_compute[226829]: 2026-01-31 08:28:14.653 226833 WARNING nova.virt.libvirt.driver [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:28:14 compute-2 nova_compute[226829]: 2026-01-31 08:28:14.658 226833 DEBUG nova.virt.libvirt.host [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:28:14 compute-2 nova_compute[226829]: 2026-01-31 08:28:14.658 226833 DEBUG nova.virt.libvirt.host [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:28:14 compute-2 nova_compute[226829]: 2026-01-31 08:28:14.663 226833 DEBUG nova.virt.libvirt.host [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:28:14 compute-2 nova_compute[226829]: 2026-01-31 08:28:14.664 226833 DEBUG nova.virt.libvirt.host [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:28:14 compute-2 nova_compute[226829]: 2026-01-31 08:28:14.665 226833 DEBUG nova.virt.libvirt.driver [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:28:14 compute-2 nova_compute[226829]: 2026-01-31 08:28:14.665 226833 DEBUG nova.virt.hardware [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e3bd1dad-95f3-4ed9-94b4-27245cd798b5',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:28:14 compute-2 nova_compute[226829]: 2026-01-31 08:28:14.665 226833 DEBUG nova.virt.hardware [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:28:14 compute-2 nova_compute[226829]: 2026-01-31 08:28:14.665 226833 DEBUG nova.virt.hardware [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:28:14 compute-2 nova_compute[226829]: 2026-01-31 08:28:14.666 226833 DEBUG nova.virt.hardware [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:28:14 compute-2 nova_compute[226829]: 2026-01-31 08:28:14.666 226833 DEBUG nova.virt.hardware [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:28:14 compute-2 nova_compute[226829]: 2026-01-31 08:28:14.666 226833 DEBUG nova.virt.hardware [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:28:14 compute-2 nova_compute[226829]: 2026-01-31 08:28:14.666 226833 DEBUG nova.virt.hardware [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:28:14 compute-2 nova_compute[226829]: 2026-01-31 08:28:14.666 226833 DEBUG nova.virt.hardware [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:28:14 compute-2 nova_compute[226829]: 2026-01-31 08:28:14.667 226833 DEBUG nova.virt.hardware [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:28:14 compute-2 nova_compute[226829]: 2026-01-31 08:28:14.667 226833 DEBUG nova.virt.hardware [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:28:14 compute-2 nova_compute[226829]: 2026-01-31 08:28:14.667 226833 DEBUG nova.virt.hardware [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:28:14 compute-2 nova_compute[226829]: 2026-01-31 08:28:14.667 226833 DEBUG nova.objects.instance [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:28:14 compute-2 nova_compute[226829]: 2026-01-31 08:28:14.805 226833 DEBUG oslo_concurrency.processutils [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:28:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:15.007 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '67'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:28:15 compute-2 nova_compute[226829]: 2026-01-31 08:28:15.208 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:28:15 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1068672120' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:28:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3856339751' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:28:15 compute-2 ceph-mon[77282]: osdmap e362: 3 total, 3 up, 3 in
Jan 31 08:28:15 compute-2 ceph-mon[77282]: pgmap v2841: 305 pgs: 305 active+clean; 720 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 358 KiB/s rd, 4.7 MiB/s wr, 132 op/s
Jan 31 08:28:15 compute-2 nova_compute[226829]: 2026-01-31 08:28:15.431 226833 DEBUG oslo_concurrency.processutils [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:28:15 compute-2 nova_compute[226829]: 2026-01-31 08:28:15.472 226833 DEBUG oslo_concurrency.processutils [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:28:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:28:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:15.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:28:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:28:15 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1560134433' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:28:15 compute-2 nova_compute[226829]: 2026-01-31 08:28:15.897 226833 DEBUG oslo_concurrency.processutils [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.004 226833 DEBUG nova.virt.libvirt.vif [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:26:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-0',id=164,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMx4MwFIYcLufTgJTIjkaLBrJONZCyY8Yf6X6pbg0U3Us81VO6LfliTNhhDzgfgfMWpf9GXPg5uphWD0tDnxS1Zf2IaRx1ENKXJOF1zVaOJTSt3BjSDZpbJsUpD0/zLEPw==',key_name='tempest-keypair-253684506',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:27:00Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='48bbdbdee526499e90da7e971ede68d3',ramdisk_id='',reservation_id='r-ec12ij1b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio'
,image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-2017021026',owner_user_name='tempest-AttachVolumeMultiAttachTest-2017021026-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:28:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='85dfa8546d9942648bb4197c8b1947e3',uuid=e89132fd-2d0c-475e-a3c5-0407e4cbbbb8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3ff723d-9e34-45ef-881d-6534126ae169", "address": "fa:16:3e:1b:c5:21", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "vif_mac": "fa:16:3e:1b:c5:21"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ff723d-9e", "ovs_interfaceid": "a3ff723d-9e34-45ef-881d-6534126ae169", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.004 226833 DEBUG nova.network.os_vif_util [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converting VIF {"id": "a3ff723d-9e34-45ef-881d-6534126ae169", "address": "fa:16:3e:1b:c5:21", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "vif_mac": "fa:16:3e:1b:c5:21"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ff723d-9e", "ovs_interfaceid": "a3ff723d-9e34-45ef-881d-6534126ae169", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.005 226833 DEBUG nova.network.os_vif_util [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:c5:21,bridge_name='br-int',has_traffic_filtering=True,id=a3ff723d-9e34-45ef-881d-6534126ae169,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ff723d-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.009 226833 DEBUG nova.virt.libvirt.driver [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:28:16 compute-2 nova_compute[226829]:   <uuid>e89132fd-2d0c-475e-a3c5-0407e4cbbbb8</uuid>
Jan 31 08:28:16 compute-2 nova_compute[226829]:   <name>instance-000000a4</name>
Jan 31 08:28:16 compute-2 nova_compute[226829]:   <memory>196608</memory>
Jan 31 08:28:16 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:28:16 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <nova:name>multiattach-server-0</nova:name>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:28:14</nova:creationTime>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <nova:flavor name="m1.micro">
Jan 31 08:28:16 compute-2 nova_compute[226829]:         <nova:memory>192</nova:memory>
Jan 31 08:28:16 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:28:16 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:28:16 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:28:16 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:28:16 compute-2 nova_compute[226829]:         <nova:user uuid="85dfa8546d9942648bb4197c8b1947e3">tempest-AttachVolumeMultiAttachTest-2017021026-project-member</nova:user>
Jan 31 08:28:16 compute-2 nova_compute[226829]:         <nova:project uuid="48bbdbdee526499e90da7e971ede68d3">tempest-AttachVolumeMultiAttachTest-2017021026</nova:project>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:28:16 compute-2 nova_compute[226829]:         <nova:port uuid="a3ff723d-9e34-45ef-881d-6534126ae169">
Jan 31 08:28:16 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:28:16 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:28:16 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <system>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <entry name="serial">e89132fd-2d0c-475e-a3c5-0407e4cbbbb8</entry>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <entry name="uuid">e89132fd-2d0c-475e-a3c5-0407e4cbbbb8</entry>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     </system>
Jan 31 08:28:16 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:28:16 compute-2 nova_compute[226829]:   <os>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:   </os>
Jan 31 08:28:16 compute-2 nova_compute[226829]:   <features>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:   </features>
Jan 31 08:28:16 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:28:16 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:28:16 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/e89132fd-2d0c-475e-a3c5-0407e4cbbbb8_disk">
Jan 31 08:28:16 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       </source>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:28:16 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/e89132fd-2d0c-475e-a3c5-0407e4cbbbb8_disk.config">
Jan 31 08:28:16 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       </source>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:28:16 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <source protocol="rbd" name="volumes/volume-c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4">
Jan 31 08:28:16 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       </source>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:28:16 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <target dev="vdb" bus="virtio"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <serial>c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4</serial>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <shareable/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:1b:c5:21"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <target dev="tapa3ff723d-9e"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/e89132fd-2d0c-475e-a3c5-0407e4cbbbb8/console.log" append="off"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <video>
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     </video>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:28:16 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:28:16 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:28:16 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:28:16 compute-2 nova_compute[226829]: </domain>
Jan 31 08:28:16 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.010 226833 DEBUG nova.virt.libvirt.vif [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:26:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-0',id=164,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMx4MwFIYcLufTgJTIjkaLBrJONZCyY8Yf6X6pbg0U3Us81VO6LfliTNhhDzgfgfMWpf9GXPg5uphWD0tDnxS1Zf2IaRx1ENKXJOF1zVaOJTSt3BjSDZpbJsUpD0/zLEPw==',key_name='tempest-keypair-253684506',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:27:00Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='48bbdbdee526499e90da7e971ede68d3',ramdisk_id='',reservation_id='r-ec12ij1b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio'
,image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-2017021026',owner_user_name='tempest-AttachVolumeMultiAttachTest-2017021026-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:28:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='85dfa8546d9942648bb4197c8b1947e3',uuid=e89132fd-2d0c-475e-a3c5-0407e4cbbbb8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3ff723d-9e34-45ef-881d-6534126ae169", "address": "fa:16:3e:1b:c5:21", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "vif_mac": "fa:16:3e:1b:c5:21"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ff723d-9e", "ovs_interfaceid": "a3ff723d-9e34-45ef-881d-6534126ae169", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.011 226833 DEBUG nova.network.os_vif_util [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converting VIF {"id": "a3ff723d-9e34-45ef-881d-6534126ae169", "address": "fa:16:3e:1b:c5:21", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "vif_mac": "fa:16:3e:1b:c5:21"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ff723d-9e", "ovs_interfaceid": "a3ff723d-9e34-45ef-881d-6534126ae169", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.011 226833 DEBUG nova.network.os_vif_util [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:c5:21,bridge_name='br-int',has_traffic_filtering=True,id=a3ff723d-9e34-45ef-881d-6534126ae169,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ff723d-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.012 226833 DEBUG os_vif [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:c5:21,bridge_name='br-int',has_traffic_filtering=True,id=a3ff723d-9e34-45ef-881d-6534126ae169,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ff723d-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.013 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.013 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.014 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.017 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.018 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3ff723d-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.019 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3ff723d-9e, col_values=(('external_ids', {'iface-id': 'a3ff723d-9e34-45ef-881d-6534126ae169', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1b:c5:21', 'vm-uuid': 'e89132fd-2d0c-475e-a3c5-0407e4cbbbb8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.020 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:16 compute-2 NetworkManager[48999]: <info>  [1769848096.0219] manager: (tapa3ff723d-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/332)
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.023 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.025 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.026 226833 INFO os_vif [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:c5:21,bridge_name='br-int',has_traffic_filtering=True,id=a3ff723d-9e34-45ef-881d-6534126ae169,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ff723d-9e')
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.099 226833 DEBUG nova.virt.libvirt.driver [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.100 226833 DEBUG nova.virt.libvirt.driver [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.100 226833 DEBUG nova.virt.libvirt.driver [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.100 226833 DEBUG nova.virt.libvirt.driver [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No VIF found with MAC fa:16:3e:1b:c5:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.100 226833 INFO nova.virt.libvirt.driver [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Using config drive
Jan 31 08:28:16 compute-2 kernel: tapa3ff723d-9e: entered promiscuous mode
Jan 31 08:28:16 compute-2 NetworkManager[48999]: <info>  [1769848096.2091] manager: (tapa3ff723d-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/333)
Jan 31 08:28:16 compute-2 ovn_controller[133834]: 2026-01-31T08:28:16Z|00670|binding|INFO|Claiming lport a3ff723d-9e34-45ef-881d-6534126ae169 for this chassis.
Jan 31 08:28:16 compute-2 ovn_controller[133834]: 2026-01-31T08:28:16Z|00671|binding|INFO|a3ff723d-9e34-45ef-881d-6534126ae169: Claiming fa:16:3e:1b:c5:21 10.100.0.5
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.210 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:16.218 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:c5:21 10.100.0.5'], port_security=['fa:16:3e:1b:c5:21 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e89132fd-2d0c-475e-a3c5-0407e4cbbbb8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbdbdee526499e90da7e971ede68d3', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8d5863f6-4aa0-486a-96ed-eb36f7d4a61d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=379aaecc-3dde-4f00-82cf-dc8bd8367d4b, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=a3ff723d-9e34-45ef-881d-6534126ae169) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:16.219 143841 INFO neutron.agent.ovn.metadata.agent [-] Port a3ff723d-9e34-45ef-881d-6534126ae169 in datapath 26ad6a8f-33d5-432e-83d3-63a9d2f165ad bound to our chassis
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:16.222 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 26ad6a8f-33d5-432e-83d3-63a9d2f165ad
Jan 31 08:28:16 compute-2 ovn_controller[133834]: 2026-01-31T08:28:16Z|00672|binding|INFO|Setting lport a3ff723d-9e34-45ef-881d-6534126ae169 ovn-installed in OVS
Jan 31 08:28:16 compute-2 ovn_controller[133834]: 2026-01-31T08:28:16Z|00673|binding|INFO|Setting lport a3ff723d-9e34-45ef-881d-6534126ae169 up in Southbound
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.224 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:16.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:16.231 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[54b6a717-b339-4a4c-af3b-02bf3a5a047e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:16.232 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap26ad6a8f-31 in ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:16.234 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap26ad6a8f-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:16.234 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3c301691-5350-4972-82e9-c5727a5ef101]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:16 compute-2 systemd-udevd[302473]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:16.235 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0125bd6a-7691-4ea3-9970-17d35c53e09a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:16 compute-2 systemd-machined[195142]: New machine qemu-77-instance-000000a4.
Jan 31 08:28:16 compute-2 NetworkManager[48999]: <info>  [1769848096.2463] device (tapa3ff723d-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:28:16 compute-2 NetworkManager[48999]: <info>  [1769848096.2471] device (tapa3ff723d-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:16.247 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[7137d031-f952-47ec-bffd-053ae0479c0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:16 compute-2 systemd[1]: Started Virtual Machine qemu-77-instance-000000a4.
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:16.255 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a547a1f2-d02d-4f93-8060-9b2c1fa09aee]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:16.275 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[37878fea-21d4-4939-a0f3-9643edea6112]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:16 compute-2 NetworkManager[48999]: <info>  [1769848096.2831] manager: (tap26ad6a8f-30): new Veth device (/org/freedesktop/NetworkManager/Devices/334)
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:16.282 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[46fd3d96-8709-4766-b3ae-2ae344119f49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:16 compute-2 systemd-udevd[302477]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:16.304 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[17d3e697-1638-4af7-9af5-1ae35b1c011f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:16.309 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[b77240d5-5c0a-432d-ab38-435be244c172]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:16 compute-2 NetworkManager[48999]: <info>  [1769848096.3272] device (tap26ad6a8f-30): carrier: link connected
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:16.332 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[acc70d3f-89ff-424c-9d81-c6c976165be4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:16.346 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b92d374f-e0ab-4203-bbc3-538f1bf02b99]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26ad6a8f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:60:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 210], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 834880, 'reachable_time': 27664, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302506, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:16.359 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cba6c6a8-615c-4715-957f-7ba4e8de04ad]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3a:605d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 834880, 'tstamp': 834880}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302507, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:16.379 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0c6f9681-4ac9-4e1c-847e-d35d289b5969]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26ad6a8f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:60:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 210], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 834880, 'reachable_time': 27664, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302508, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:16.405 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ef749d9c-7be2-4e49-97d4-d85dd1695f27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:16.454 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0f390937-c8c1-40bb-888e-3595feff2f01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:16.456 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26ad6a8f-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:16.457 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:16.458 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26ad6a8f-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.460 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:16 compute-2 NetworkManager[48999]: <info>  [1769848096.4607] manager: (tap26ad6a8f-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/335)
Jan 31 08:28:16 compute-2 kernel: tap26ad6a8f-30: entered promiscuous mode
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.470 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:16.471 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap26ad6a8f-30, col_values=(('external_ids', {'iface-id': '0b9d56f1-a803-44f1-b709-3bfbc71e0f57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:28:16 compute-2 ovn_controller[133834]: 2026-01-31T08:28:16Z|00674|binding|INFO|Releasing lport 0b9d56f1-a803-44f1-b709-3bfbc71e0f57 from this chassis (sb_readonly=0)
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.473 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.478 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:16.479 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/26ad6a8f-33d5-432e-83d3-63a9d2f165ad.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/26ad6a8f-33d5-432e-83d3-63a9d2f165ad.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:16.480 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c090132e-887d-4cac-9e0e-5463dc3ba7f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:16.481 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-26ad6a8f-33d5-432e-83d3-63a9d2f165ad
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/26ad6a8f-33d5-432e-83d3-63a9d2f165ad.pid.haproxy
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 26ad6a8f-33d5-432e-83d3-63a9d2f165ad
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:28:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:28:16.481 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'env', 'PROCESS_TAG=haproxy-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/26ad6a8f-33d5-432e-83d3-63a9d2f165ad.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:28:16 compute-2 nova_compute[226829]: 2026-01-31 08:28:16.832 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1068672120' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:28:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1560134433' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:28:16 compute-2 podman[302558]: 2026-01-31 08:28:16.765353972 +0000 UTC m=+0.023077318 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:28:17 compute-2 podman[302558]: 2026-01-31 08:28:17.081741587 +0000 UTC m=+0.339464913 container create d0ca2b903f6a2b6fbea55cb9ae34e193ff4d260b366d5d9d2d00a14ab6424e97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:28:17 compute-2 systemd[1]: Started libpod-conmon-d0ca2b903f6a2b6fbea55cb9ae34e193ff4d260b366d5d9d2d00a14ab6424e97.scope.
Jan 31 08:28:17 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:28:17 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67c6305b77055b989fd97abdb7785ff3a5c6dc509c2d74d7072596f86b75a0dd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:28:17 compute-2 nova_compute[226829]: 2026-01-31 08:28:17.242 226833 DEBUG nova.virt.libvirt.host [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Removed pending event for e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 08:28:17 compute-2 nova_compute[226829]: 2026-01-31 08:28:17.243 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848097.2420962, e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:28:17 compute-2 nova_compute[226829]: 2026-01-31 08:28:17.244 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] VM Resumed (Lifecycle Event)
Jan 31 08:28:17 compute-2 nova_compute[226829]: 2026-01-31 08:28:17.247 226833 DEBUG nova.compute.manager [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:28:17 compute-2 nova_compute[226829]: 2026-01-31 08:28:17.251 226833 INFO nova.virt.libvirt.driver [-] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Instance running successfully.
Jan 31 08:28:17 compute-2 virtqemud[226546]: argument unsupported: QEMU guest agent is not configured
Jan 31 08:28:17 compute-2 nova_compute[226829]: 2026-01-31 08:28:17.254 226833 DEBUG nova.virt.libvirt.guest [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 31 08:28:17 compute-2 nova_compute[226829]: 2026-01-31 08:28:17.255 226833 DEBUG nova.virt.libvirt.driver [None req-45a9f2c6-bc33-44c6-8939-d021313b25ed 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Jan 31 08:28:17 compute-2 nova_compute[226829]: 2026-01-31 08:28:17.265 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:28:17 compute-2 nova_compute[226829]: 2026-01-31 08:28:17.269 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:28:17 compute-2 podman[302558]: 2026-01-31 08:28:17.27939645 +0000 UTC m=+0.537119796 container init d0ca2b903f6a2b6fbea55cb9ae34e193ff4d260b366d5d9d2d00a14ab6424e97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 08:28:17 compute-2 podman[302558]: 2026-01-31 08:28:17.283672806 +0000 UTC m=+0.541396122 container start d0ca2b903f6a2b6fbea55cb9ae34e193ff4d260b366d5d9d2d00a14ab6424e97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:28:17 compute-2 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[302611]: [NOTICE]   (302620) : New worker (302622) forked
Jan 31 08:28:17 compute-2 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[302611]: [NOTICE]   (302620) : Loading success.
Jan 31 08:28:17 compute-2 nova_compute[226829]: 2026-01-31 08:28:17.308 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 31 08:28:17 compute-2 nova_compute[226829]: 2026-01-31 08:28:17.308 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848097.2423809, e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:28:17 compute-2 nova_compute[226829]: 2026-01-31 08:28:17.309 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] VM Started (Lifecycle Event)
Jan 31 08:28:17 compute-2 nova_compute[226829]: 2026-01-31 08:28:17.335 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:28:17 compute-2 nova_compute[226829]: 2026-01-31 08:28:17.350 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:28:17 compute-2 nova_compute[226829]: 2026-01-31 08:28:17.376 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 31 08:28:17 compute-2 nova_compute[226829]: 2026-01-31 08:28:17.468 226833 DEBUG nova.compute.manager [req-b0229d57-43ea-4bbd-96b7-ef3d39acb834 req-2d038c87-98b5-4b83-a221-38d9bc6c5e1f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Received event network-vif-plugged-a3ff723d-9e34-45ef-881d-6534126ae169 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:28:17 compute-2 nova_compute[226829]: 2026-01-31 08:28:17.469 226833 DEBUG oslo_concurrency.lockutils [req-b0229d57-43ea-4bbd-96b7-ef3d39acb834 req-2d038c87-98b5-4b83-a221-38d9bc6c5e1f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:17 compute-2 nova_compute[226829]: 2026-01-31 08:28:17.469 226833 DEBUG oslo_concurrency.lockutils [req-b0229d57-43ea-4bbd-96b7-ef3d39acb834 req-2d038c87-98b5-4b83-a221-38d9bc6c5e1f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:17 compute-2 nova_compute[226829]: 2026-01-31 08:28:17.469 226833 DEBUG oslo_concurrency.lockutils [req-b0229d57-43ea-4bbd-96b7-ef3d39acb834 req-2d038c87-98b5-4b83-a221-38d9bc6c5e1f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:17 compute-2 nova_compute[226829]: 2026-01-31 08:28:17.469 226833 DEBUG nova.compute.manager [req-b0229d57-43ea-4bbd-96b7-ef3d39acb834 req-2d038c87-98b5-4b83-a221-38d9bc6c5e1f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] No waiting events found dispatching network-vif-plugged-a3ff723d-9e34-45ef-881d-6534126ae169 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:28:17 compute-2 nova_compute[226829]: 2026-01-31 08:28:17.469 226833 WARNING nova.compute.manager [req-b0229d57-43ea-4bbd-96b7-ef3d39acb834 req-2d038c87-98b5-4b83-a221-38d9bc6c5e1f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Received unexpected event network-vif-plugged-a3ff723d-9e34-45ef-881d-6534126ae169 for instance with vm_state active and task_state resize_finish.
Jan 31 08:28:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:17.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:17 compute-2 ceph-mon[77282]: pgmap v2842: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 392 KiB/s rd, 3.3 MiB/s wr, 138 op/s
Jan 31 08:28:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:28:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:18.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:28:19 compute-2 nova_compute[226829]: 2026-01-31 08:28:19.367 226833 DEBUG oslo_concurrency.lockutils [None req-e85f260e-8e5e-4172-b68d-faec8f65bf25 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:19 compute-2 nova_compute[226829]: 2026-01-31 08:28:19.368 226833 DEBUG oslo_concurrency.lockutils [None req-e85f260e-8e5e-4172-b68d-faec8f65bf25 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:19 compute-2 nova_compute[226829]: 2026-01-31 08:28:19.368 226833 DEBUG nova.compute.manager [None req-e85f260e-8e5e-4172-b68d-faec8f65bf25 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Going to confirm migration 21 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Jan 31 08:28:19 compute-2 nova_compute[226829]: 2026-01-31 08:28:19.616 226833 DEBUG nova.compute.manager [req-d4be654c-3637-4c6a-8bb1-4ab443225105 req-ba91c07b-ec33-4494-ab5b-55ea83c34288 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Received event network-vif-plugged-a3ff723d-9e34-45ef-881d-6534126ae169 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:28:19 compute-2 nova_compute[226829]: 2026-01-31 08:28:19.619 226833 DEBUG oslo_concurrency.lockutils [req-d4be654c-3637-4c6a-8bb1-4ab443225105 req-ba91c07b-ec33-4494-ab5b-55ea83c34288 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:19 compute-2 nova_compute[226829]: 2026-01-31 08:28:19.619 226833 DEBUG oslo_concurrency.lockutils [req-d4be654c-3637-4c6a-8bb1-4ab443225105 req-ba91c07b-ec33-4494-ab5b-55ea83c34288 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:19 compute-2 nova_compute[226829]: 2026-01-31 08:28:19.620 226833 DEBUG oslo_concurrency.lockutils [req-d4be654c-3637-4c6a-8bb1-4ab443225105 req-ba91c07b-ec33-4494-ab5b-55ea83c34288 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:19 compute-2 nova_compute[226829]: 2026-01-31 08:28:19.620 226833 DEBUG nova.compute.manager [req-d4be654c-3637-4c6a-8bb1-4ab443225105 req-ba91c07b-ec33-4494-ab5b-55ea83c34288 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] No waiting events found dispatching network-vif-plugged-a3ff723d-9e34-45ef-881d-6534126ae169 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:28:19 compute-2 nova_compute[226829]: 2026-01-31 08:28:19.621 226833 WARNING nova.compute.manager [req-d4be654c-3637-4c6a-8bb1-4ab443225105 req-ba91c07b-ec33-4494-ab5b-55ea83c34288 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Received unexpected event network-vif-plugged-a3ff723d-9e34-45ef-881d-6534126ae169 for instance with vm_state resized and task_state None.
Jan 31 08:28:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:28:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:19.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:28:19 compute-2 nova_compute[226829]: 2026-01-31 08:28:19.839 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848084.8382742, 651d6b65-a0ee-4942-bf60-88b037eb6508 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:28:19 compute-2 nova_compute[226829]: 2026-01-31 08:28:19.840 226833 INFO nova.compute.manager [-] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] VM Stopped (Lifecycle Event)
Jan 31 08:28:19 compute-2 nova_compute[226829]: 2026-01-31 08:28:19.868 226833 DEBUG nova.compute.manager [None req-a66cc236-0e9a-44dd-b256-87b9aac26f10 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:28:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:20.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:20 compute-2 nova_compute[226829]: 2026-01-31 08:28:20.330 226833 DEBUG oslo_concurrency.lockutils [None req-e85f260e-8e5e-4172-b68d-faec8f65bf25 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:28:20 compute-2 nova_compute[226829]: 2026-01-31 08:28:20.331 226833 DEBUG oslo_concurrency.lockutils [None req-e85f260e-8e5e-4172-b68d-faec8f65bf25 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquired lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:28:20 compute-2 nova_compute[226829]: 2026-01-31 08:28:20.331 226833 DEBUG nova.network.neutron [None req-e85f260e-8e5e-4172-b68d-faec8f65bf25 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:28:20 compute-2 nova_compute[226829]: 2026-01-31 08:28:20.331 226833 DEBUG nova.objects.instance [None req-e85f260e-8e5e-4172-b68d-faec8f65bf25 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'info_cache' on Instance uuid e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:28:20 compute-2 ceph-mon[77282]: pgmap v2843: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 368 KiB/s rd, 2.6 MiB/s wr, 136 op/s
Jan 31 08:28:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:20 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 31 08:28:21 compute-2 nova_compute[226829]: 2026-01-31 08:28:21.023 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:21 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 31 08:28:21 compute-2 ovn_controller[133834]: 2026-01-31T08:28:21Z|00675|binding|INFO|Releasing lport 0b9d56f1-a803-44f1-b709-3bfbc71e0f57 from this chassis (sb_readonly=0)
Jan 31 08:28:21 compute-2 nova_compute[226829]: 2026-01-31 08:28:21.111 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:21 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 31 08:28:21 compute-2 podman[302633]: 2026-01-31 08:28:21.181690535 +0000 UTC m=+0.072828277 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:28:21 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 31 08:28:21 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 31 08:28:21 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 31 08:28:21 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Jan 31 08:28:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:21.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:21 compute-2 nova_compute[226829]: 2026-01-31 08:28:21.835 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:22 compute-2 ceph-mon[77282]: pgmap v2844: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.5 MiB/s wr, 136 op/s
Jan 31 08:28:22 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Jan 31 08:28:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:22.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:22 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Jan 31 08:28:23 compute-2 nova_compute[226829]: 2026-01-31 08:28:23.055 226833 DEBUG nova.network.neutron [None req-e85f260e-8e5e-4172-b68d-faec8f65bf25 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Updating instance_info_cache with network_info: [{"id": "a3ff723d-9e34-45ef-881d-6534126ae169", "address": "fa:16:3e:1b:c5:21", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ff723d-9e", "ovs_interfaceid": "a3ff723d-9e34-45ef-881d-6534126ae169", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:28:23 compute-2 nova_compute[226829]: 2026-01-31 08:28:23.186 226833 DEBUG oslo_concurrency.lockutils [None req-e85f260e-8e5e-4172-b68d-faec8f65bf25 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Releasing lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:28:23 compute-2 nova_compute[226829]: 2026-01-31 08:28:23.187 226833 DEBUG nova.objects.instance [None req-e85f260e-8e5e-4172-b68d-faec8f65bf25 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'migration_context' on Instance uuid e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:28:23 compute-2 nova_compute[226829]: 2026-01-31 08:28:23.358 226833 DEBUG nova.storage.rbd_utils [None req-e85f260e-8e5e-4172-b68d-faec8f65bf25 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] removing snapshot(nova-resize) on rbd image(e89132fd-2d0c-475e-a3c5-0407e4cbbbb8_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 08:28:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:28:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:23.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:28:23 compute-2 ceph-mon[77282]: pgmap v2845: 305 pgs: 305 active+clean; 723 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 703 KiB/s wr, 158 op/s
Jan 31 08:28:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:24.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e363 e363: 3 total, 3 up, 3 in
Jan 31 08:28:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:28:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:25.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:28:25 compute-2 nova_compute[226829]: 2026-01-31 08:28:25.826 226833 DEBUG oslo_concurrency.lockutils [None req-e85f260e-8e5e-4172-b68d-faec8f65bf25 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:25 compute-2 nova_compute[226829]: 2026-01-31 08:28:25.827 226833 DEBUG oslo_concurrency.lockutils [None req-e85f260e-8e5e-4172-b68d-faec8f65bf25 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:25 compute-2 nova_compute[226829]: 2026-01-31 08:28:25.973 226833 DEBUG oslo_concurrency.processutils [None req-e85f260e-8e5e-4172-b68d-faec8f65bf25 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:28:26 compute-2 nova_compute[226829]: 2026-01-31 08:28:26.028 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:26.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:26 compute-2 ceph-mon[77282]: pgmap v2846: 305 pgs: 305 active+clean; 723 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 35 KiB/s wr, 174 op/s
Jan 31 08:28:26 compute-2 ceph-mon[77282]: osdmap e363: 3 total, 3 up, 3 in
Jan 31 08:28:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:28:26 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/912552622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:26 compute-2 nova_compute[226829]: 2026-01-31 08:28:26.448 226833 DEBUG oslo_concurrency.processutils [None req-e85f260e-8e5e-4172-b68d-faec8f65bf25 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:28:26 compute-2 nova_compute[226829]: 2026-01-31 08:28:26.454 226833 DEBUG nova.compute.provider_tree [None req-e85f260e-8e5e-4172-b68d-faec8f65bf25 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:28:26 compute-2 nova_compute[226829]: 2026-01-31 08:28:26.486 226833 DEBUG nova.scheduler.client.report [None req-e85f260e-8e5e-4172-b68d-faec8f65bf25 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:28:26 compute-2 nova_compute[226829]: 2026-01-31 08:28:26.559 226833 DEBUG oslo_concurrency.lockutils [None req-e85f260e-8e5e-4172-b68d-faec8f65bf25 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:26 compute-2 nova_compute[226829]: 2026-01-31 08:28:26.707 226833 INFO nova.scheduler.client.report [None req-e85f260e-8e5e-4172-b68d-faec8f65bf25 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Deleted allocation for migration 9630bd79-bebe-4b48-b09b-b3f1c7a6ac9a
Jan 31 08:28:26 compute-2 nova_compute[226829]: 2026-01-31 08:28:26.775 226833 DEBUG oslo_concurrency.lockutils [None req-e85f260e-8e5e-4172-b68d-faec8f65bf25 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 7.407s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:26 compute-2 nova_compute[226829]: 2026-01-31 08:28:26.837 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:28:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:27.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:28:28 compute-2 ceph-mon[77282]: pgmap v2848: 305 pgs: 305 active+clean; 723 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 41 KiB/s wr, 224 op/s
Jan 31 08:28:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/912552622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:28.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1559272313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:29.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:28:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:30.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:28:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:30 compute-2 nova_compute[226829]: 2026-01-31 08:28:30.785 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:31 compute-2 nova_compute[226829]: 2026-01-31 08:28:31.030 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:31 compute-2 ceph-mon[77282]: pgmap v2849: 305 pgs: 305 active+clean; 697 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 39 KiB/s wr, 220 op/s
Jan 31 08:28:31 compute-2 ovn_controller[133834]: 2026-01-31T08:28:31Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1b:c5:21 10.100.0.5
Jan 31 08:28:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:31.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:31 compute-2 sudo[302721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:28:31 compute-2 sudo[302721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:31 compute-2 sudo[302721]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:31 compute-2 podman[302745]: 2026-01-31 08:28:31.739490383 +0000 UTC m=+0.053603985 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 08:28:31 compute-2 sudo[302756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:28:31 compute-2 sudo[302756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:31 compute-2 sudo[302756]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:31 compute-2 nova_compute[226829]: 2026-01-31 08:28:31.840 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:32.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:32 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #56. Immutable memtables: 12.
Jan 31 08:28:32 compute-2 ceph-mon[77282]: pgmap v2850: 305 pgs: 305 active+clean; 662 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 39 KiB/s wr, 198 op/s
Jan 31 08:28:33 compute-2 ceph-mon[77282]: pgmap v2851: 305 pgs: 305 active+clean; 651 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 129 KiB/s wr, 184 op/s
Jan 31 08:28:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:28:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:33.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:28:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:28:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:34.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:28:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:28:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:35.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:28:35 compute-2 ceph-mon[77282]: pgmap v2852: 305 pgs: 305 active+clean; 659 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 998 KiB/s wr, 174 op/s
Jan 31 08:28:36 compute-2 nova_compute[226829]: 2026-01-31 08:28:36.032 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:36.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:36 compute-2 nova_compute[226829]: 2026-01-31 08:28:36.478 226833 DEBUG nova.compute.manager [req-c4e5d1e8-b2a0-4347-93dc-c52de508fea6 req-cfb199bd-8336-4166-965a-7f3def295d17 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Received event network-changed-a3ff723d-9e34-45ef-881d-6534126ae169 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:28:36 compute-2 nova_compute[226829]: 2026-01-31 08:28:36.479 226833 DEBUG nova.compute.manager [req-c4e5d1e8-b2a0-4347-93dc-c52de508fea6 req-cfb199bd-8336-4166-965a-7f3def295d17 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Refreshing instance network info cache due to event network-changed-a3ff723d-9e34-45ef-881d-6534126ae169. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:28:36 compute-2 nova_compute[226829]: 2026-01-31 08:28:36.479 226833 DEBUG oslo_concurrency.lockutils [req-c4e5d1e8-b2a0-4347-93dc-c52de508fea6 req-cfb199bd-8336-4166-965a-7f3def295d17 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:28:36 compute-2 nova_compute[226829]: 2026-01-31 08:28:36.480 226833 DEBUG oslo_concurrency.lockutils [req-c4e5d1e8-b2a0-4347-93dc-c52de508fea6 req-cfb199bd-8336-4166-965a-7f3def295d17 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:28:36 compute-2 nova_compute[226829]: 2026-01-31 08:28:36.480 226833 DEBUG nova.network.neutron [req-c4e5d1e8-b2a0-4347-93dc-c52de508fea6 req-cfb199bd-8336-4166-965a-7f3def295d17 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Refreshing network info cache for port a3ff723d-9e34-45ef-881d-6534126ae169 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:28:36 compute-2 nova_compute[226829]: 2026-01-31 08:28:36.843 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e364 e364: 3 total, 3 up, 3 in
Jan 31 08:28:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:37.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:37 compute-2 ceph-mon[77282]: pgmap v2853: 305 pgs: 305 active+clean; 712 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 877 KiB/s rd, 3.9 MiB/s wr, 198 op/s
Jan 31 08:28:37 compute-2 ceph-mon[77282]: osdmap e364: 3 total, 3 up, 3 in
Jan 31 08:28:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:38.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:38 compute-2 nova_compute[226829]: 2026-01-31 08:28:38.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3289421549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/663713159' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:28:39 compute-2 nova_compute[226829]: 2026-01-31 08:28:39.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:39.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:40 compute-2 ceph-mon[77282]: pgmap v2855: 305 pgs: 305 active+clean; 718 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 973 KiB/s rd, 4.6 MiB/s wr, 218 op/s
Jan 31 08:28:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1999143793' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:28:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:28:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:40.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:28:40 compute-2 nova_compute[226829]: 2026-01-31 08:28:40.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:40 compute-2 nova_compute[226829]: 2026-01-31 08:28:40.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:28:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:41 compute-2 nova_compute[226829]: 2026-01-31 08:28:41.036 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:41 compute-2 nova_compute[226829]: 2026-01-31 08:28:41.240 226833 DEBUG nova.network.neutron [req-c4e5d1e8-b2a0-4347-93dc-c52de508fea6 req-cfb199bd-8336-4166-965a-7f3def295d17 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Updated VIF entry in instance network info cache for port a3ff723d-9e34-45ef-881d-6534126ae169. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:28:41 compute-2 nova_compute[226829]: 2026-01-31 08:28:41.241 226833 DEBUG nova.network.neutron [req-c4e5d1e8-b2a0-4347-93dc-c52de508fea6 req-cfb199bd-8336-4166-965a-7f3def295d17 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Updating instance_info_cache with network_info: [{"id": "a3ff723d-9e34-45ef-881d-6534126ae169", "address": "fa:16:3e:1b:c5:21", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ff723d-9e", "ovs_interfaceid": "a3ff723d-9e34-45ef-881d-6534126ae169", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:28:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:41.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:41 compute-2 nova_compute[226829]: 2026-01-31 08:28:41.752 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:28:41 compute-2 nova_compute[226829]: 2026-01-31 08:28:41.773 226833 DEBUG oslo_concurrency.lockutils [req-c4e5d1e8-b2a0-4347-93dc-c52de508fea6 req-cfb199bd-8336-4166-965a-7f3def295d17 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:28:41 compute-2 nova_compute[226829]: 2026-01-31 08:28:41.774 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:28:41 compute-2 nova_compute[226829]: 2026-01-31 08:28:41.774 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:28:41 compute-2 nova_compute[226829]: 2026-01-31 08:28:41.844 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:42 compute-2 ceph-mon[77282]: pgmap v2856: 305 pgs: 305 active+clean; 723 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 859 KiB/s rd, 4.7 MiB/s wr, 206 op/s
Jan 31 08:28:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:28:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:42.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:28:43 compute-2 ceph-mon[77282]: pgmap v2857: 305 pgs: 305 active+clean; 724 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 761 KiB/s rd, 4.6 MiB/s wr, 203 op/s
Jan 31 08:28:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:43.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:44.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:45 compute-2 nova_compute[226829]: 2026-01-31 08:28:45.540 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Updating instance_info_cache with network_info: [{"id": "a3ff723d-9e34-45ef-881d-6534126ae169", "address": "fa:16:3e:1b:c5:21", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ff723d-9e", "ovs_interfaceid": "a3ff723d-9e34-45ef-881d-6534126ae169", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:28:45 compute-2 nova_compute[226829]: 2026-01-31 08:28:45.566 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:28:45 compute-2 nova_compute[226829]: 2026-01-31 08:28:45.567 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:28:45 compute-2 nova_compute[226829]: 2026-01-31 08:28:45.567 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:45.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:46 compute-2 nova_compute[226829]: 2026-01-31 08:28:46.040 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:46.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:46 compute-2 nova_compute[226829]: 2026-01-31 08:28:46.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:46 compute-2 ceph-mon[77282]: pgmap v2858: 305 pgs: 305 active+clean; 724 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 588 KiB/s rd, 3.7 MiB/s wr, 188 op/s
Jan 31 08:28:46 compute-2 nova_compute[226829]: 2026-01-31 08:28:46.848 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:47 compute-2 nova_compute[226829]: 2026-01-31 08:28:47.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:28:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:47.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:28:47 compute-2 nova_compute[226829]: 2026-01-31 08:28:47.691 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:47 compute-2 nova_compute[226829]: 2026-01-31 08:28:47.755 226833 DEBUG nova.compute.manager [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Jan 31 08:28:48 compute-2 ceph-mon[77282]: pgmap v2859: 305 pgs: 305 active+clean; 724 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 357 KiB/s wr, 170 op/s
Jan 31 08:28:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3624284653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:48 compute-2 nova_compute[226829]: 2026-01-31 08:28:48.180 226833 DEBUG oslo_concurrency.lockutils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:48 compute-2 nova_compute[226829]: 2026-01-31 08:28:48.180 226833 DEBUG oslo_concurrency.lockutils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:48 compute-2 nova_compute[226829]: 2026-01-31 08:28:48.234 226833 DEBUG nova.objects.instance [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'pci_requests' on Instance uuid 10816ede-cf43-4736-aba7-48389f607d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:28:48 compute-2 nova_compute[226829]: 2026-01-31 08:28:48.273 226833 DEBUG nova.virt.hardware [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:28:48 compute-2 nova_compute[226829]: 2026-01-31 08:28:48.273 226833 INFO nova.compute.claims [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:28:48 compute-2 nova_compute[226829]: 2026-01-31 08:28:48.274 226833 DEBUG nova.objects.instance [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'resources' on Instance uuid 10816ede-cf43-4736-aba7-48389f607d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:28:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:28:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:48.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:28:48 compute-2 nova_compute[226829]: 2026-01-31 08:28:48.440 226833 DEBUG nova.objects.instance [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 10816ede-cf43-4736-aba7-48389f607d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:28:48 compute-2 nova_compute[226829]: 2026-01-31 08:28:48.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:48 compute-2 nova_compute[226829]: 2026-01-31 08:28:48.556 226833 INFO nova.compute.resource_tracker [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Updating resource usage from migration 25fea84c-439e-4b69-837e-757239c42f77
Jan 31 08:28:48 compute-2 nova_compute[226829]: 2026-01-31 08:28:48.556 226833 DEBUG nova.compute.resource_tracker [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Starting to track incoming migration 25fea84c-439e-4b69-837e-757239c42f77 with flavor e3bd1dad-95f3-4ed9-94b4-27245cd798b5 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 31 08:28:48 compute-2 nova_compute[226829]: 2026-01-31 08:28:48.939 226833 DEBUG nova.scheduler.client.report [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Refreshing inventories for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:28:48 compute-2 nova_compute[226829]: 2026-01-31 08:28:48.958 226833 DEBUG nova.scheduler.client.report [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Updating ProviderTree inventory for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:28:48 compute-2 nova_compute[226829]: 2026-01-31 08:28:48.959 226833 DEBUG nova.compute.provider_tree [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Updating inventory in ProviderTree for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:28:49 compute-2 nova_compute[226829]: 2026-01-31 08:28:49.037 226833 DEBUG nova.scheduler.client.report [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Refreshing aggregate associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:28:49 compute-2 nova_compute[226829]: 2026-01-31 08:28:49.059 226833 DEBUG nova.scheduler.client.report [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Refreshing trait associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, traits: COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:28:49 compute-2 nova_compute[226829]: 2026-01-31 08:28:49.125 226833 DEBUG oslo_concurrency.processutils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:28:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1814181840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1319289982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:28:49 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/643233360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:49 compute-2 nova_compute[226829]: 2026-01-31 08:28:49.613 226833 DEBUG oslo_concurrency.processutils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:28:49 compute-2 nova_compute[226829]: 2026-01-31 08:28:49.620 226833 DEBUG nova.compute.provider_tree [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:28:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:49.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:49 compute-2 nova_compute[226829]: 2026-01-31 08:28:49.850 226833 DEBUG nova.scheduler.client.report [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:28:50 compute-2 nova_compute[226829]: 2026-01-31 08:28:50.080 226833 DEBUG oslo_concurrency.lockutils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:50 compute-2 nova_compute[226829]: 2026-01-31 08:28:50.081 226833 INFO nova.compute.manager [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Migrating
Jan 31 08:28:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:28:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:50.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:28:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:50 compute-2 ceph-mon[77282]: pgmap v2860: 305 pgs: 305 active+clean; 724 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 333 KiB/s wr, 182 op/s
Jan 31 08:28:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/643233360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2216114932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:51 compute-2 nova_compute[226829]: 2026-01-31 08:28:51.044 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:51 compute-2 nova_compute[226829]: 2026-01-31 08:28:51.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:51 compute-2 nova_compute[226829]: 2026-01-31 08:28:51.648 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:51 compute-2 nova_compute[226829]: 2026-01-31 08:28:51.649 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:51 compute-2 nova_compute[226829]: 2026-01-31 08:28:51.650 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:51 compute-2 nova_compute[226829]: 2026-01-31 08:28:51.650 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:28:51 compute-2 nova_compute[226829]: 2026-01-31 08:28:51.651 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:28:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:28:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:51.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:28:51 compute-2 ceph-mon[77282]: pgmap v2861: 305 pgs: 305 active+clean; 724 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 118 KiB/s wr, 167 op/s
Jan 31 08:28:51 compute-2 nova_compute[226829]: 2026-01-31 08:28:51.852 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:51 compute-2 sudo[302824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:28:51 compute-2 sudo[302824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:51 compute-2 sudo[302824]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:51 compute-2 sudo[302875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:28:51 compute-2 sudo[302875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:51 compute-2 sudo[302875]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:52 compute-2 podman[302867]: 2026-01-31 08:28:52.002923585 +0000 UTC m=+0.135983461 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:28:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:28:52 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1528313895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:52 compute-2 nova_compute[226829]: 2026-01-31 08:28:52.105 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:28:52 compute-2 nova_compute[226829]: 2026-01-31 08:28:52.236 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:28:52 compute-2 nova_compute[226829]: 2026-01-31 08:28:52.237 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:28:52 compute-2 nova_compute[226829]: 2026-01-31 08:28:52.237 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:28:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:52.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:52 compute-2 nova_compute[226829]: 2026-01-31 08:28:52.401 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:28:52 compute-2 nova_compute[226829]: 2026-01-31 08:28:52.402 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3919MB free_disk=20.693714141845703GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:28:52 compute-2 nova_compute[226829]: 2026-01-31 08:28:52.402 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:52 compute-2 nova_compute[226829]: 2026-01-31 08:28:52.402 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:52 compute-2 nova_compute[226829]: 2026-01-31 08:28:52.702 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Migration for instance 10816ede-cf43-4736-aba7-48389f607d30 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Jan 31 08:28:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1528313895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:53 compute-2 nova_compute[226829]: 2026-01-31 08:28:53.457 226833 INFO nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Updating resource usage from migration 25fea84c-439e-4b69-837e-757239c42f77
Jan 31 08:28:53 compute-2 nova_compute[226829]: 2026-01-31 08:28:53.458 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Starting to track incoming migration 25fea84c-439e-4b69-837e-757239c42f77 with flavor e3bd1dad-95f3-4ed9-94b4-27245cd798b5 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 31 08:28:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:28:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:53.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:28:53 compute-2 ceph-mon[77282]: pgmap v2862: 305 pgs: 305 active+clean; 700 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 39 KiB/s wr, 172 op/s
Jan 31 08:28:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3992570297' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:28:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3992570297' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:28:53 compute-2 sshd-session[302923]: Accepted publickey for nova from 192.168.122.100 port 38298 ssh2: ECDSA SHA256:x674mWemszn5UyYA1PQSm9fK8+OEaBfRnNSUktYnOE0
Jan 31 08:28:53 compute-2 systemd[1]: Created slice User Slice of UID 42436.
Jan 31 08:28:53 compute-2 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 31 08:28:53 compute-2 systemd-logind[801]: New session 60 of user nova.
Jan 31 08:28:53 compute-2 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 31 08:28:53 compute-2 systemd[1]: Starting User Manager for UID 42436...
Jan 31 08:28:53 compute-2 systemd[302928]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 31 08:28:54 compute-2 systemd[302928]: Queued start job for default target Main User Target.
Jan 31 08:28:54 compute-2 systemd[302928]: Created slice User Application Slice.
Jan 31 08:28:54 compute-2 systemd[302928]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 08:28:54 compute-2 systemd[302928]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 08:28:54 compute-2 systemd[302928]: Reached target Paths.
Jan 31 08:28:54 compute-2 systemd[302928]: Reached target Timers.
Jan 31 08:28:54 compute-2 systemd[302928]: Starting D-Bus User Message Bus Socket...
Jan 31 08:28:54 compute-2 systemd[302928]: Starting Create User's Volatile Files and Directories...
Jan 31 08:28:54 compute-2 systemd[302928]: Listening on D-Bus User Message Bus Socket.
Jan 31 08:28:54 compute-2 systemd[302928]: Reached target Sockets.
Jan 31 08:28:54 compute-2 systemd[302928]: Finished Create User's Volatile Files and Directories.
Jan 31 08:28:54 compute-2 systemd[302928]: Reached target Basic System.
Jan 31 08:28:54 compute-2 systemd[302928]: Reached target Main User Target.
Jan 31 08:28:54 compute-2 systemd[302928]: Startup finished in 122ms.
Jan 31 08:28:54 compute-2 systemd[1]: Started User Manager for UID 42436.
Jan 31 08:28:54 compute-2 systemd[1]: Started Session 60 of User nova.
Jan 31 08:28:54 compute-2 sshd-session[302923]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 31 08:28:54 compute-2 sshd-session[302943]: Received disconnect from 192.168.122.100 port 38298:11: disconnected by user
Jan 31 08:28:54 compute-2 sshd-session[302943]: Disconnected from user nova 192.168.122.100 port 38298
Jan 31 08:28:54 compute-2 sshd-session[302923]: pam_unix(sshd:session): session closed for user nova
Jan 31 08:28:54 compute-2 systemd[1]: session-60.scope: Deactivated successfully.
Jan 31 08:28:54 compute-2 systemd-logind[801]: Session 60 logged out. Waiting for processes to exit.
Jan 31 08:28:54 compute-2 systemd-logind[801]: Removed session 60.
Jan 31 08:28:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:54.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:54 compute-2 sshd-session[302945]: Accepted publickey for nova from 192.168.122.100 port 38312 ssh2: ECDSA SHA256:x674mWemszn5UyYA1PQSm9fK8+OEaBfRnNSUktYnOE0
Jan 31 08:28:54 compute-2 systemd-logind[801]: New session 62 of user nova.
Jan 31 08:28:54 compute-2 systemd[1]: Started Session 62 of User nova.
Jan 31 08:28:54 compute-2 sshd-session[302945]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 31 08:28:54 compute-2 sshd-session[302948]: Received disconnect from 192.168.122.100 port 38312:11: disconnected by user
Jan 31 08:28:54 compute-2 sshd-session[302948]: Disconnected from user nova 192.168.122.100 port 38312
Jan 31 08:28:54 compute-2 sshd-session[302945]: pam_unix(sshd:session): session closed for user nova
Jan 31 08:28:54 compute-2 systemd[1]: session-62.scope: Deactivated successfully.
Jan 31 08:28:54 compute-2 systemd-logind[801]: Session 62 logged out. Waiting for processes to exit.
Jan 31 08:28:54 compute-2 systemd-logind[801]: Removed session 62.
Jan 31 08:28:54 compute-2 nova_compute[226829]: 2026-01-31 08:28:54.967 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 192, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:28:55 compute-2 nova_compute[226829]: 2026-01-31 08:28:55.060 226833 WARNING nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 10816ede-cf43-4736-aba7-48389f607d30 has been moved to another host compute-0.ctlplane.example.com(compute-0.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'VCPU': 1, 'MEMORY_MB': 192, 'DISK_GB': 1}}.
Jan 31 08:28:55 compute-2 nova_compute[226829]: 2026-01-31 08:28:55.060 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:28:55 compute-2 nova_compute[226829]: 2026-01-31 08:28:55.061 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:28:55 compute-2 nova_compute[226829]: 2026-01-31 08:28:55.130 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:28:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:28:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:55.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:28:55 compute-2 ceph-mon[77282]: pgmap v2863: 305 pgs: 305 active+clean; 656 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 39 KiB/s wr, 165 op/s
Jan 31 08:28:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:28:55 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1234714180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:55 compute-2 nova_compute[226829]: 2026-01-31 08:28:55.929 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.798s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:28:55 compute-2 nova_compute[226829]: 2026-01-31 08:28:55.936 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:28:56 compute-2 nova_compute[226829]: 2026-01-31 08:28:56.049 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:28:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:56.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:28:56 compute-2 nova_compute[226829]: 2026-01-31 08:28:56.854 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1234714180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:57 compute-2 nova_compute[226829]: 2026-01-31 08:28:57.087 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:28:57 compute-2 nova_compute[226829]: 2026-01-31 08:28:57.225 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:28:57 compute-2 nova_compute[226829]: 2026-01-31 08:28:57.225 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:57.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:58 compute-2 ceph-mon[77282]: pgmap v2864: 305 pgs: 305 active+clean; 598 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 49 KiB/s wr, 177 op/s
Jan 31 08:28:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:58.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:59 compute-2 nova_compute[226829]: 2026-01-31 08:28:59.014 226833 DEBUG nova.compute.manager [req-4189f661-9a37-4a1a-b5e5-b9f96aee2ee1 req-6fb3d761-76e5-4544-92e8-799a0f1fed38 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received event network-vif-unplugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:28:59 compute-2 nova_compute[226829]: 2026-01-31 08:28:59.014 226833 DEBUG oslo_concurrency.lockutils [req-4189f661-9a37-4a1a-b5e5-b9f96aee2ee1 req-6fb3d761-76e5-4544-92e8-799a0f1fed38 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "10816ede-cf43-4736-aba7-48389f607d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:59 compute-2 nova_compute[226829]: 2026-01-31 08:28:59.015 226833 DEBUG oslo_concurrency.lockutils [req-4189f661-9a37-4a1a-b5e5-b9f96aee2ee1 req-6fb3d761-76e5-4544-92e8-799a0f1fed38 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:59 compute-2 nova_compute[226829]: 2026-01-31 08:28:59.015 226833 DEBUG oslo_concurrency.lockutils [req-4189f661-9a37-4a1a-b5e5-b9f96aee2ee1 req-6fb3d761-76e5-4544-92e8-799a0f1fed38 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:59 compute-2 nova_compute[226829]: 2026-01-31 08:28:59.016 226833 DEBUG nova.compute.manager [req-4189f661-9a37-4a1a-b5e5-b9f96aee2ee1 req-6fb3d761-76e5-4544-92e8-799a0f1fed38 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] No waiting events found dispatching network-vif-unplugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:28:59 compute-2 nova_compute[226829]: 2026-01-31 08:28:59.016 226833 WARNING nova.compute.manager [req-4189f661-9a37-4a1a-b5e5-b9f96aee2ee1 req-6fb3d761-76e5-4544-92e8-799a0f1fed38 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received unexpected event network-vif-unplugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 for instance with vm_state active and task_state resize_migrating.
Jan 31 08:28:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2136540381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3570348620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/326598905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:28:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:59.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:00.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:00 compute-2 ceph-mon[77282]: pgmap v2865: 305 pgs: 305 active+clean; 598 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 459 KiB/s rd, 21 KiB/s wr, 104 op/s
Jan 31 08:29:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:01 compute-2 nova_compute[226829]: 2026-01-31 08:29:01.103 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:01 compute-2 ceph-mon[77282]: pgmap v2866: 305 pgs: 305 active+clean; 616 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 54 KiB/s rd, 857 KiB/s wr, 84 op/s
Jan 31 08:29:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:01.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:01 compute-2 nova_compute[226829]: 2026-01-31 08:29:01.857 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:01 compute-2 anacron[52153]: Job `cron.monthly' started
Jan 31 08:29:01 compute-2 nova_compute[226829]: 2026-01-31 08:29:01.927 226833 DEBUG nova.compute.manager [req-815221c8-f72b-452a-90cb-e87ddb3190b0 req-52b7ca9c-3447-4996-b22b-46ad52b41514 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received event network-vif-plugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:29:01 compute-2 nova_compute[226829]: 2026-01-31 08:29:01.928 226833 DEBUG oslo_concurrency.lockutils [req-815221c8-f72b-452a-90cb-e87ddb3190b0 req-52b7ca9c-3447-4996-b22b-46ad52b41514 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "10816ede-cf43-4736-aba7-48389f607d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:01 compute-2 nova_compute[226829]: 2026-01-31 08:29:01.928 226833 DEBUG oslo_concurrency.lockutils [req-815221c8-f72b-452a-90cb-e87ddb3190b0 req-52b7ca9c-3447-4996-b22b-46ad52b41514 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:01 compute-2 nova_compute[226829]: 2026-01-31 08:29:01.928 226833 DEBUG oslo_concurrency.lockutils [req-815221c8-f72b-452a-90cb-e87ddb3190b0 req-52b7ca9c-3447-4996-b22b-46ad52b41514 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:01 compute-2 nova_compute[226829]: 2026-01-31 08:29:01.929 226833 DEBUG nova.compute.manager [req-815221c8-f72b-452a-90cb-e87ddb3190b0 req-52b7ca9c-3447-4996-b22b-46ad52b41514 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] No waiting events found dispatching network-vif-plugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:29:01 compute-2 nova_compute[226829]: 2026-01-31 08:29:01.929 226833 WARNING nova.compute.manager [req-815221c8-f72b-452a-90cb-e87ddb3190b0 req-52b7ca9c-3447-4996-b22b-46ad52b41514 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received unexpected event network-vif-plugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 for instance with vm_state active and task_state resize_migrated.
Jan 31 08:29:01 compute-2 anacron[52153]: Job `cron.monthly' terminated
Jan 31 08:29:01 compute-2 anacron[52153]: Normal exit (3 jobs run)
Jan 31 08:29:02 compute-2 nova_compute[226829]: 2026-01-31 08:29:02.037 226833 INFO nova.network.neutron [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Updating port aeb09486-b68f-4fa4-a410-dd0ffaf49b05 with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 31 08:29:02 compute-2 podman[302978]: 2026-01-31 08:29:02.046945687 +0000 UTC m=+0.086206010 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 08:29:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:02.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:29:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:03.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:29:03 compute-2 ceph-mon[77282]: pgmap v2867: 305 pgs: 305 active+clean; 637 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 1.7 MiB/s wr, 80 op/s
Jan 31 08:29:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:04.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:04 compute-2 systemd[1]: Stopping User Manager for UID 42436...
Jan 31 08:29:04 compute-2 systemd[302928]: Activating special unit Exit the Session...
Jan 31 08:29:04 compute-2 systemd[302928]: Stopped target Main User Target.
Jan 31 08:29:04 compute-2 systemd[302928]: Stopped target Basic System.
Jan 31 08:29:04 compute-2 systemd[302928]: Stopped target Paths.
Jan 31 08:29:04 compute-2 systemd[302928]: Stopped target Sockets.
Jan 31 08:29:04 compute-2 systemd[302928]: Stopped target Timers.
Jan 31 08:29:04 compute-2 systemd[302928]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 08:29:04 compute-2 systemd[302928]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 08:29:04 compute-2 systemd[302928]: Closed D-Bus User Message Bus Socket.
Jan 31 08:29:04 compute-2 systemd[302928]: Stopped Create User's Volatile Files and Directories.
Jan 31 08:29:04 compute-2 systemd[302928]: Removed slice User Application Slice.
Jan 31 08:29:04 compute-2 systemd[302928]: Reached target Shutdown.
Jan 31 08:29:04 compute-2 systemd[302928]: Finished Exit the Session.
Jan 31 08:29:04 compute-2 systemd[302928]: Reached target Exit the Session.
Jan 31 08:29:04 compute-2 systemd[1]: user@42436.service: Deactivated successfully.
Jan 31 08:29:04 compute-2 systemd[1]: Stopped User Manager for UID 42436.
Jan 31 08:29:04 compute-2 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 31 08:29:04 compute-2 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 31 08:29:04 compute-2 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 31 08:29:04 compute-2 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 31 08:29:04 compute-2 systemd[1]: Removed slice User Slice of UID 42436.
Jan 31 08:29:05 compute-2 nova_compute[226829]: 2026-01-31 08:29:05.399 226833 DEBUG oslo_concurrency.lockutils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:29:05 compute-2 nova_compute[226829]: 2026-01-31 08:29:05.400 226833 DEBUG oslo_concurrency.lockutils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquired lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:29:05 compute-2 nova_compute[226829]: 2026-01-31 08:29:05.400 226833 DEBUG nova.network.neutron [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:29:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:05.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:05 compute-2 nova_compute[226829]: 2026-01-31 08:29:05.776 226833 DEBUG nova.compute.manager [req-f501cdfe-d649-4bb1-85b2-0d31cb93b3d2 req-bb35528a-aef0-4a37-badb-e9e8c09c3264 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received event network-changed-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:29:05 compute-2 nova_compute[226829]: 2026-01-31 08:29:05.776 226833 DEBUG nova.compute.manager [req-f501cdfe-d649-4bb1-85b2-0d31cb93b3d2 req-bb35528a-aef0-4a37-badb-e9e8c09c3264 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Refreshing instance network info cache due to event network-changed-aeb09486-b68f-4fa4-a410-dd0ffaf49b05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:29:05 compute-2 nova_compute[226829]: 2026-01-31 08:29:05.776 226833 DEBUG oslo_concurrency.lockutils [req-f501cdfe-d649-4bb1-85b2-0d31cb93b3d2 req-bb35528a-aef0-4a37-badb-e9e8c09c3264 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:29:06 compute-2 nova_compute[226829]: 2026-01-31 08:29:06.105 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:06.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:06 compute-2 ceph-mon[77282]: pgmap v2868: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 69 op/s
Jan 31 08:29:06 compute-2 nova_compute[226829]: 2026-01-31 08:29:06.859 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:06.905 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:06.907 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:06.908 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:07 compute-2 ceph-mon[77282]: pgmap v2869: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Jan 31 08:29:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:07.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:08.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:09.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:09 compute-2 ceph-mon[77282]: pgmap v2870: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:29:09 compute-2 sudo[302999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:29:09 compute-2 sudo[302999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:09 compute-2 sudo[302999]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:09 compute-2 sudo[303024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:29:09 compute-2 sudo[303024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:09 compute-2 sudo[303024]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:09 compute-2 sudo[303049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:29:09 compute-2 sudo[303049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:09 compute-2 sudo[303049]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:09 compute-2 nova_compute[226829]: 2026-01-31 08:29:09.906 226833 DEBUG nova.network.neutron [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Updating instance_info_cache with network_info: [{"id": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "address": "fa:16:3e:ec:78:f9", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaeb09486-b6", "ovs_interfaceid": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:29:09 compute-2 sudo[303075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:29:09 compute-2 sudo[303075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:10 compute-2 nova_compute[226829]: 2026-01-31 08:29:10.259 226833 DEBUG oslo_concurrency.lockutils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Releasing lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:29:10 compute-2 nova_compute[226829]: 2026-01-31 08:29:10.263 226833 DEBUG oslo_concurrency.lockutils [req-f501cdfe-d649-4bb1-85b2-0d31cb93b3d2 req-bb35528a-aef0-4a37-badb-e9e8c09c3264 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:29:10 compute-2 nova_compute[226829]: 2026-01-31 08:29:10.263 226833 DEBUG nova.network.neutron [req-f501cdfe-d649-4bb1-85b2-0d31cb93b3d2 req-bb35528a-aef0-4a37-badb-e9e8c09c3264 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Refreshing network info cache for port aeb09486-b68f-4fa4-a410-dd0ffaf49b05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:29:10 compute-2 sudo[303075]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:29:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:10.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:29:10 compute-2 nova_compute[226829]: 2026-01-31 08:29:10.481 226833 DEBUG os_brick.utils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:29:10 compute-2 nova_compute[226829]: 2026-01-31 08:29:10.484 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:29:10 compute-2 nova_compute[226829]: 2026-01-31 08:29:10.501 236868 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:29:10 compute-2 nova_compute[226829]: 2026-01-31 08:29:10.502 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[75d22488-27db-4500-b5ce-2b0e0aa8756e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:29:10 compute-2 nova_compute[226829]: 2026-01-31 08:29:10.503 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:29:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:10 compute-2 nova_compute[226829]: 2026-01-31 08:29:10.514 236868 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:29:10 compute-2 nova_compute[226829]: 2026-01-31 08:29:10.514 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[0f5a87b6-ac34-401e-8bc2-253250db0832]: (4, ('InitiatorName=iqn.1994-05.com.redhat:70a4e945afb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:29:10 compute-2 nova_compute[226829]: 2026-01-31 08:29:10.518 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:29:10 compute-2 nova_compute[226829]: 2026-01-31 08:29:10.528 236868 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:29:10 compute-2 nova_compute[226829]: 2026-01-31 08:29:10.529 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[eee77d67-6b44-4e5c-accf-b250adf0ae28]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:29:10 compute-2 nova_compute[226829]: 2026-01-31 08:29:10.531 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[82543ca9-0ded-4d02-aa96-75f450ef6dd4]: (4, 'd14f084b-ec77-4fba-801f-103494d34b3a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:29:10 compute-2 nova_compute[226829]: 2026-01-31 08:29:10.532 226833 DEBUG oslo_concurrency.processutils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:29:10 compute-2 nova_compute[226829]: 2026-01-31 08:29:10.564 226833 DEBUG oslo_concurrency.processutils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:29:10 compute-2 nova_compute[226829]: 2026-01-31 08:29:10.566 226833 DEBUG os_brick.initiator.connectors.lightos [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:29:10 compute-2 nova_compute[226829]: 2026-01-31 08:29:10.567 226833 DEBUG os_brick.initiator.connectors.lightos [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:29:10 compute-2 nova_compute[226829]: 2026-01-31 08:29:10.567 226833 DEBUG os_brick.initiator.connectors.lightos [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:29:10 compute-2 nova_compute[226829]: 2026-01-31 08:29:10.567 226833 DEBUG os_brick.utils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] <== get_connector_properties: return (85ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:70a4e945afb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'd14f084b-ec77-4fba-801f-103494d34b3a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:29:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:29:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:29:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:29:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:29:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:29:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:29:11 compute-2 nova_compute[226829]: 2026-01-31 08:29:11.108 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:11 compute-2 nova_compute[226829]: 2026-01-31 08:29:11.222 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:11 compute-2 nova_compute[226829]: 2026-01-31 08:29:11.273 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:11 compute-2 nova_compute[226829]: 2026-01-31 08:29:11.274 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:29:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:29:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:11.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:29:11 compute-2 nova_compute[226829]: 2026-01-31 08:29:11.706 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:11.706 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=68, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=67) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:29:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:11.708 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:29:11 compute-2 nova_compute[226829]: 2026-01-31 08:29:11.860 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:11 compute-2 sudo[303139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:29:11 compute-2 sudo[303139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:11 compute-2 sudo[303139]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:12 compute-2 sudo[303164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:29:12 compute-2 sudo[303164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:12 compute-2 sudo[303164]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:12 compute-2 ceph-mon[77282]: pgmap v2871: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:29:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/503516173' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:29:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:29:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:12.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:29:13 compute-2 nova_compute[226829]: 2026-01-31 08:29:13.400 226833 DEBUG nova.virt.libvirt.driver [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Jan 31 08:29:13 compute-2 nova_compute[226829]: 2026-01-31 08:29:13.404 226833 DEBUG nova.virt.libvirt.driver [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 31 08:29:13 compute-2 nova_compute[226829]: 2026-01-31 08:29:13.405 226833 INFO nova.virt.libvirt.driver [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Creating image(s)
Jan 31 08:29:13 compute-2 nova_compute[226829]: 2026-01-31 08:29:13.462 226833 DEBUG nova.storage.rbd_utils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] creating snapshot(nova-resize) on rbd image(10816ede-cf43-4736-aba7-48389f607d30_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 08:29:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:13.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.092 226833 DEBUG nova.network.neutron [req-f501cdfe-d649-4bb1-85b2-0d31cb93b3d2 req-bb35528a-aef0-4a37-badb-e9e8c09c3264 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Updated VIF entry in instance network info cache for port aeb09486-b68f-4fa4-a410-dd0ffaf49b05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.092 226833 DEBUG nova.network.neutron [req-f501cdfe-d649-4bb1-85b2-0d31cb93b3d2 req-bb35528a-aef0-4a37-badb-e9e8c09c3264 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Updating instance_info_cache with network_info: [{"id": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "address": "fa:16:3e:ec:78:f9", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaeb09486-b6", "ovs_interfaceid": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:29:14 compute-2 ceph-mon[77282]: pgmap v2872: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 981 KiB/s wr, 27 op/s
Jan 31 08:29:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4283237309' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.182 226833 DEBUG oslo_concurrency.lockutils [req-f501cdfe-d649-4bb1-85b2-0d31cb93b3d2 req-bb35528a-aef0-4a37-badb-e9e8c09c3264 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:29:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e365 e365: 3 total, 3 up, 3 in
Jan 31 08:29:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:14.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.339 226833 DEBUG nova.objects.instance [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 10816ede-cf43-4736-aba7-48389f607d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.503 226833 DEBUG nova.virt.libvirt.driver [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.505 226833 DEBUG nova.virt.libvirt.driver [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Ensure instance console log exists: /var/lib/nova/instances/10816ede-cf43-4736-aba7-48389f607d30/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.506 226833 DEBUG oslo_concurrency.lockutils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.507 226833 DEBUG oslo_concurrency.lockutils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.507 226833 DEBUG oslo_concurrency.lockutils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.510 226833 DEBUG nova.virt.libvirt.driver [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Start _get_guest_xml network_info=[{"id": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "address": "fa:16:3e:ec:78:f9", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "vif_mac": "fa:16:3e:ec:78:f9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaeb09486-b6", "ovs_interfaceid": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'delete_on_termination': False, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': '10816ede-cf43-4736-aba7-48389f607d30', 'attached_at': '2026-01-31T08:29:12.000000', 'detached_at': '', 'volume_id': 'c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4', 'multiattach': True, 'serial': 'c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4'}, 'disk_bus': 'virtio', 'mount_device': '/dev/vdb', 'attachment_id': '28ab6c6c-1b6f-4e9f-a289-867ba20770bd', 'boot_index': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.517 226833 WARNING nova.virt.libvirt.driver [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.523 226833 DEBUG nova.virt.libvirt.host [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.524 226833 DEBUG nova.virt.libvirt.host [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.530 226833 DEBUG nova.virt.libvirt.host [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.531 226833 DEBUG nova.virt.libvirt.host [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.533 226833 DEBUG nova.virt.libvirt.driver [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.535 226833 DEBUG nova.virt.hardware [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e3bd1dad-95f3-4ed9-94b4-27245cd798b5',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.536 226833 DEBUG nova.virt.hardware [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.536 226833 DEBUG nova.virt.hardware [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.537 226833 DEBUG nova.virt.hardware [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.537 226833 DEBUG nova.virt.hardware [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.537 226833 DEBUG nova.virt.hardware [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.538 226833 DEBUG nova.virt.hardware [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.538 226833 DEBUG nova.virt.hardware [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.539 226833 DEBUG nova.virt.hardware [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.539 226833 DEBUG nova.virt.hardware [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.540 226833 DEBUG nova.virt.hardware [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.540 226833 DEBUG nova.objects.instance [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 10816ede-cf43-4736-aba7-48389f607d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:29:14 compute-2 nova_compute[226829]: 2026-01-31 08:29:14.579 226833 DEBUG oslo_concurrency.processutils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:29:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:29:15 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1429176043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.022 226833 DEBUG oslo_concurrency.processutils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.057 226833 DEBUG oslo_concurrency.processutils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:29:15 compute-2 ceph-mon[77282]: pgmap v2873: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.2 KiB/s rd, 91 KiB/s wr, 14 op/s
Jan 31 08:29:15 compute-2 ceph-mon[77282]: osdmap e365: 3 total, 3 up, 3 in
Jan 31 08:29:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2624916612' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:29:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1429176043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:29:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:29:15 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/569270921' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.454 226833 DEBUG oslo_concurrency.processutils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:29:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.528 226833 DEBUG nova.virt.libvirt.vif [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:27:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-1',id=165,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMx4MwFIYcLufTgJTIjkaLBrJONZCyY8Yf6X6pbg0U3Us81VO6LfliTNhhDzgfgfMWpf9GXPg5uphWD0tDnxS1Zf2IaRx1ENKXJOF1zVaOJTSt3BjSDZpbJsUpD0/zLEPw==',key_name='tempest-keypair-253684506',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:27:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='48bbdbdee526499e90da7e971ede68d3',ramdisk_id='',reservation_id='r-uhld1k6v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio'
,image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-2017021026',owner_user_name='tempest-AttachVolumeMultiAttachTest-2017021026-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:29:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='85dfa8546d9942648bb4197c8b1947e3',uuid=10816ede-cf43-4736-aba7-48389f607d30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "address": "fa:16:3e:ec:78:f9", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "vif_mac": "fa:16:3e:ec:78:f9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaeb09486-b6", "ovs_interfaceid": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.530 226833 DEBUG nova.network.os_vif_util [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converting VIF {"id": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "address": "fa:16:3e:ec:78:f9", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "vif_mac": "fa:16:3e:ec:78:f9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaeb09486-b6", "ovs_interfaceid": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.531 226833 DEBUG nova.network.os_vif_util [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:78:f9,bridge_name='br-int',has_traffic_filtering=True,id=aeb09486-b68f-4fa4-a410-dd0ffaf49b05,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaeb09486-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.536 226833 DEBUG nova.virt.libvirt.driver [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:29:15 compute-2 nova_compute[226829]:   <uuid>10816ede-cf43-4736-aba7-48389f607d30</uuid>
Jan 31 08:29:15 compute-2 nova_compute[226829]:   <name>instance-000000a5</name>
Jan 31 08:29:15 compute-2 nova_compute[226829]:   <memory>196608</memory>
Jan 31 08:29:15 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:29:15 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <nova:name>multiattach-server-1</nova:name>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:29:14</nova:creationTime>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <nova:flavor name="m1.micro">
Jan 31 08:29:15 compute-2 nova_compute[226829]:         <nova:memory>192</nova:memory>
Jan 31 08:29:15 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:29:15 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:29:15 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:29:15 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:29:15 compute-2 nova_compute[226829]:         <nova:user uuid="85dfa8546d9942648bb4197c8b1947e3">tempest-AttachVolumeMultiAttachTest-2017021026-project-member</nova:user>
Jan 31 08:29:15 compute-2 nova_compute[226829]:         <nova:project uuid="48bbdbdee526499e90da7e971ede68d3">tempest-AttachVolumeMultiAttachTest-2017021026</nova:project>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:29:15 compute-2 nova_compute[226829]:         <nova:port uuid="aeb09486-b68f-4fa4-a410-dd0ffaf49b05">
Jan 31 08:29:15 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:29:15 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:29:15 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <system>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <entry name="serial">10816ede-cf43-4736-aba7-48389f607d30</entry>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <entry name="uuid">10816ede-cf43-4736-aba7-48389f607d30</entry>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     </system>
Jan 31 08:29:15 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:29:15 compute-2 nova_compute[226829]:   <os>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:   </os>
Jan 31 08:29:15 compute-2 nova_compute[226829]:   <features>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:   </features>
Jan 31 08:29:15 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:29:15 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:29:15 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/10816ede-cf43-4736-aba7-48389f607d30_disk">
Jan 31 08:29:15 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       </source>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:29:15 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/10816ede-cf43-4736-aba7-48389f607d30_disk.config">
Jan 31 08:29:15 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       </source>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:29:15 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <source protocol="rbd" name="volumes/volume-c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4">
Jan 31 08:29:15 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       </source>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:29:15 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <target dev="vdb" bus="virtio"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <serial>c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4</serial>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <shareable/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:ec:78:f9"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <target dev="tapaeb09486-b6"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/10816ede-cf43-4736-aba7-48389f607d30/console.log" append="off"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <video>
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     </video>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:29:15 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:29:15 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:29:15 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:29:15 compute-2 nova_compute[226829]: </domain>
Jan 31 08:29:15 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.538 226833 DEBUG nova.virt.libvirt.vif [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:27:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-1',id=165,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMx4MwFIYcLufTgJTIjkaLBrJONZCyY8Yf6X6pbg0U3Us81VO6LfliTNhhDzgfgfMWpf9GXPg5uphWD0tDnxS1Zf2IaRx1ENKXJOF1zVaOJTSt3BjSDZpbJsUpD0/zLEPw==',key_name='tempest-keypair-253684506',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:27:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='48bbdbdee526499e90da7e971ede68d3',ramdisk_id='',reservation_id='r-uhld1k6v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio'
,image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-2017021026',owner_user_name='tempest-AttachVolumeMultiAttachTest-2017021026-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:29:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='85dfa8546d9942648bb4197c8b1947e3',uuid=10816ede-cf43-4736-aba7-48389f607d30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "address": "fa:16:3e:ec:78:f9", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "vif_mac": "fa:16:3e:ec:78:f9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaeb09486-b6", "ovs_interfaceid": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.538 226833 DEBUG nova.network.os_vif_util [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converting VIF {"id": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "address": "fa:16:3e:ec:78:f9", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "vif_mac": "fa:16:3e:ec:78:f9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaeb09486-b6", "ovs_interfaceid": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.539 226833 DEBUG nova.network.os_vif_util [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:78:f9,bridge_name='br-int',has_traffic_filtering=True,id=aeb09486-b68f-4fa4-a410-dd0ffaf49b05,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaeb09486-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.539 226833 DEBUG os_vif [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:78:f9,bridge_name='br-int',has_traffic_filtering=True,id=aeb09486-b68f-4fa4-a410-dd0ffaf49b05,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaeb09486-b6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.540 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.541 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.541 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.551 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.551 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaeb09486-b6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.551 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaeb09486-b6, col_values=(('external_ids', {'iface-id': 'aeb09486-b68f-4fa4-a410-dd0ffaf49b05', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:78:f9', 'vm-uuid': '10816ede-cf43-4736-aba7-48389f607d30'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.587 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:15 compute-2 NetworkManager[48999]: <info>  [1769848155.5883] manager: (tapaeb09486-b6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/336)
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.591 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.596 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.598 226833 INFO os_vif [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:78:f9,bridge_name='br-int',has_traffic_filtering=True,id=aeb09486-b68f-4fa4-a410-dd0ffaf49b05,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaeb09486-b6')
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.678 226833 DEBUG nova.virt.libvirt.driver [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.678 226833 DEBUG nova.virt.libvirt.driver [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.680 226833 DEBUG nova.virt.libvirt.driver [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.680 226833 DEBUG nova.virt.libvirt.driver [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No VIF found with MAC fa:16:3e:ec:78:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.681 226833 INFO nova.virt.libvirt.driver [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Using config drive
Jan 31 08:29:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:29:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:15.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:29:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:15.711 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '68'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:29:15 compute-2 NetworkManager[48999]: <info>  [1769848155.7955] manager: (tapaeb09486-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/337)
Jan 31 08:29:15 compute-2 kernel: tapaeb09486-b6: entered promiscuous mode
Jan 31 08:29:15 compute-2 ovn_controller[133834]: 2026-01-31T08:29:15Z|00676|binding|INFO|Claiming lport aeb09486-b68f-4fa4-a410-dd0ffaf49b05 for this chassis.
Jan 31 08:29:15 compute-2 ovn_controller[133834]: 2026-01-31T08:29:15Z|00677|binding|INFO|aeb09486-b68f-4fa4-a410-dd0ffaf49b05: Claiming fa:16:3e:ec:78:f9 10.100.0.11
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.797 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:15 compute-2 ovn_controller[133834]: 2026-01-31T08:29:15Z|00678|binding|INFO|Setting lport aeb09486-b68f-4fa4-a410-dd0ffaf49b05 ovn-installed in OVS
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.806 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:15 compute-2 nova_compute[226829]: 2026-01-31 08:29:15.809 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:15 compute-2 systemd-udevd[303354]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:29:15 compute-2 NetworkManager[48999]: <info>  [1769848155.8515] device (tapaeb09486-b6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:29:15 compute-2 NetworkManager[48999]: <info>  [1769848155.8528] device (tapaeb09486-b6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:29:15 compute-2 systemd-machined[195142]: New machine qemu-78-instance-000000a5.
Jan 31 08:29:15 compute-2 systemd[1]: Started Virtual Machine qemu-78-instance-000000a5.
Jan 31 08:29:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:15.905 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:78:f9 10.100.0.11'], port_security=['fa:16:3e:ec:78:f9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '10816ede-cf43-4736-aba7-48389f607d30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbdbdee526499e90da7e971ede68d3', 'neutron:revision_number': '6', 'neutron:security_group_ids': '8d5863f6-4aa0-486a-96ed-eb36f7d4a61d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=379aaecc-3dde-4f00-82cf-dc8bd8367d4b, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=aeb09486-b68f-4fa4-a410-dd0ffaf49b05) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:29:15 compute-2 ovn_controller[133834]: 2026-01-31T08:29:15Z|00679|binding|INFO|Setting lport aeb09486-b68f-4fa4-a410-dd0ffaf49b05 up in Southbound
Jan 31 08:29:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:15.907 143841 INFO neutron.agent.ovn.metadata.agent [-] Port aeb09486-b68f-4fa4-a410-dd0ffaf49b05 in datapath 26ad6a8f-33d5-432e-83d3-63a9d2f165ad bound to our chassis
Jan 31 08:29:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:15.908 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 26ad6a8f-33d5-432e-83d3-63a9d2f165ad
Jan 31 08:29:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:15.924 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3fa84d4e-7a78-4f5e-8887-7ef7da1b7cd6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:29:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:15.959 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[7ed78311-6c14-464c-b86f-2fc4a07e4e18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:29:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:15.963 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[2c3034f3-079c-4e0b-ab55-e79f51ca65f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:29:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:15.990 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[6f87581e-aa41-48d6-b8bb-d5e8811a3516]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:29:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:16.008 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[04ece97b-b6a2-4bff-80b1-c73dcfe2d368]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26ad6a8f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:60:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 210], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 834880, 'reachable_time': 24981, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303372, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:29:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:16.021 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[15fa284e-2519-4b42-849e-c4ac620006ea]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap26ad6a8f-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 834889, 'tstamp': 834889}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303373, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap26ad6a8f-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 834892, 'tstamp': 834892}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303373, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:29:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:16.022 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26ad6a8f-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:29:16 compute-2 nova_compute[226829]: 2026-01-31 08:29:16.023 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:16 compute-2 nova_compute[226829]: 2026-01-31 08:29:16.024 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:16.025 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26ad6a8f-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:29:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:16.025 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:29:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:16.026 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap26ad6a8f-30, col_values=(('external_ids', {'iface-id': '0b9d56f1-a803-44f1-b709-3bfbc71e0f57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:29:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:16.026 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:29:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:16.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/569270921' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:29:16 compute-2 nova_compute[226829]: 2026-01-31 08:29:16.512 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848156.5117004, 10816ede-cf43-4736-aba7-48389f607d30 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:29:16 compute-2 nova_compute[226829]: 2026-01-31 08:29:16.513 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 10816ede-cf43-4736-aba7-48389f607d30] VM Resumed (Lifecycle Event)
Jan 31 08:29:16 compute-2 nova_compute[226829]: 2026-01-31 08:29:16.516 226833 DEBUG nova.compute.manager [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:29:16 compute-2 nova_compute[226829]: 2026-01-31 08:29:16.521 226833 INFO nova.virt.libvirt.driver [-] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Instance running successfully.
Jan 31 08:29:16 compute-2 virtqemud[226546]: argument unsupported: QEMU guest agent is not configured
Jan 31 08:29:16 compute-2 nova_compute[226829]: 2026-01-31 08:29:16.525 226833 DEBUG nova.virt.libvirt.guest [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 31 08:29:16 compute-2 nova_compute[226829]: 2026-01-31 08:29:16.526 226833 DEBUG nova.virt.libvirt.driver [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Jan 31 08:29:16 compute-2 nova_compute[226829]: 2026-01-31 08:29:16.615 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:29:16 compute-2 nova_compute[226829]: 2026-01-31 08:29:16.620 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:29:16 compute-2 nova_compute[226829]: 2026-01-31 08:29:16.674 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 10816ede-cf43-4736-aba7-48389f607d30] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 31 08:29:16 compute-2 nova_compute[226829]: 2026-01-31 08:29:16.674 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848156.515118, 10816ede-cf43-4736-aba7-48389f607d30 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:29:16 compute-2 nova_compute[226829]: 2026-01-31 08:29:16.674 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 10816ede-cf43-4736-aba7-48389f607d30] VM Started (Lifecycle Event)
Jan 31 08:29:16 compute-2 nova_compute[226829]: 2026-01-31 08:29:16.732 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:29:16 compute-2 nova_compute[226829]: 2026-01-31 08:29:16.734 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:29:16 compute-2 nova_compute[226829]: 2026-01-31 08:29:16.797 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 10816ede-cf43-4736-aba7-48389f607d30] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 31 08:29:16 compute-2 nova_compute[226829]: 2026-01-31 08:29:16.863 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:17 compute-2 nova_compute[226829]: 2026-01-31 08:29:17.256 226833 DEBUG nova.compute.manager [req-c13d3c8d-0c7d-4b05-9fef-4f677f56268d req-35d395d6-1465-48be-8c51-bef7bd2198a1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received event network-vif-plugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:29:17 compute-2 nova_compute[226829]: 2026-01-31 08:29:17.257 226833 DEBUG oslo_concurrency.lockutils [req-c13d3c8d-0c7d-4b05-9fef-4f677f56268d req-35d395d6-1465-48be-8c51-bef7bd2198a1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "10816ede-cf43-4736-aba7-48389f607d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:17 compute-2 nova_compute[226829]: 2026-01-31 08:29:17.257 226833 DEBUG oslo_concurrency.lockutils [req-c13d3c8d-0c7d-4b05-9fef-4f677f56268d req-35d395d6-1465-48be-8c51-bef7bd2198a1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:17 compute-2 nova_compute[226829]: 2026-01-31 08:29:17.257 226833 DEBUG oslo_concurrency.lockutils [req-c13d3c8d-0c7d-4b05-9fef-4f677f56268d req-35d395d6-1465-48be-8c51-bef7bd2198a1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:17 compute-2 nova_compute[226829]: 2026-01-31 08:29:17.257 226833 DEBUG nova.compute.manager [req-c13d3c8d-0c7d-4b05-9fef-4f677f56268d req-35d395d6-1465-48be-8c51-bef7bd2198a1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] No waiting events found dispatching network-vif-plugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:29:17 compute-2 nova_compute[226829]: 2026-01-31 08:29:17.258 226833 WARNING nova.compute.manager [req-c13d3c8d-0c7d-4b05-9fef-4f677f56268d req-35d395d6-1465-48be-8c51-bef7bd2198a1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received unexpected event network-vif-plugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 for instance with vm_state resized and task_state None.
Jan 31 08:29:17 compute-2 ceph-mon[77282]: pgmap v2875: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 3.7 KiB/s wr, 21 op/s
Jan 31 08:29:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:17.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:18.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:18 compute-2 sudo[303435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:29:18 compute-2 sudo[303435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:18 compute-2 sudo[303435]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:18 compute-2 sudo[303460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:29:18 compute-2 sudo[303460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:18 compute-2 sudo[303460]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:19 compute-2 ceph-mon[77282]: pgmap v2876: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 4.3 KiB/s wr, 32 op/s
Jan 31 08:29:19 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:29:19 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:29:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:29:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:19.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:29:19 compute-2 nova_compute[226829]: 2026-01-31 08:29:19.908 226833 DEBUG nova.compute.manager [req-111eac0e-8b02-4a65-ba11-3bc6158e513c req-8c12ae04-f954-4255-9189-14f75b6007c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received event network-vif-plugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:29:19 compute-2 nova_compute[226829]: 2026-01-31 08:29:19.908 226833 DEBUG oslo_concurrency.lockutils [req-111eac0e-8b02-4a65-ba11-3bc6158e513c req-8c12ae04-f954-4255-9189-14f75b6007c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "10816ede-cf43-4736-aba7-48389f607d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:19 compute-2 nova_compute[226829]: 2026-01-31 08:29:19.909 226833 DEBUG oslo_concurrency.lockutils [req-111eac0e-8b02-4a65-ba11-3bc6158e513c req-8c12ae04-f954-4255-9189-14f75b6007c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:19 compute-2 nova_compute[226829]: 2026-01-31 08:29:19.909 226833 DEBUG oslo_concurrency.lockutils [req-111eac0e-8b02-4a65-ba11-3bc6158e513c req-8c12ae04-f954-4255-9189-14f75b6007c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:19 compute-2 nova_compute[226829]: 2026-01-31 08:29:19.910 226833 DEBUG nova.compute.manager [req-111eac0e-8b02-4a65-ba11-3bc6158e513c req-8c12ae04-f954-4255-9189-14f75b6007c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] No waiting events found dispatching network-vif-plugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:29:19 compute-2 nova_compute[226829]: 2026-01-31 08:29:19.910 226833 WARNING nova.compute.manager [req-111eac0e-8b02-4a65-ba11-3bc6158e513c req-8c12ae04-f954-4255-9189-14f75b6007c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received unexpected event network-vif-plugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 for instance with vm_state resized and task_state None.
Jan 31 08:29:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:29:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:20.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:29:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:20 compute-2 nova_compute[226829]: 2026-01-31 08:29:20.589 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:29:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:21.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:29:21 compute-2 ceph-mon[77282]: pgmap v2877: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 18 KiB/s wr, 109 op/s
Jan 31 08:29:21 compute-2 nova_compute[226829]: 2026-01-31 08:29:21.770 226833 DEBUG oslo_concurrency.lockutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:21 compute-2 nova_compute[226829]: 2026-01-31 08:29:21.770 226833 DEBUG oslo_concurrency.lockutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:21 compute-2 nova_compute[226829]: 2026-01-31 08:29:21.867 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:21 compute-2 nova_compute[226829]: 2026-01-31 08:29:21.895 226833 DEBUG nova.compute.manager [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:29:22 compute-2 podman[303487]: 2026-01-31 08:29:22.201417721 +0000 UTC m=+0.083457006 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:29:22 compute-2 nova_compute[226829]: 2026-01-31 08:29:22.268 226833 DEBUG oslo_concurrency.lockutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:22 compute-2 nova_compute[226829]: 2026-01-31 08:29:22.269 226833 DEBUG oslo_concurrency.lockutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:22 compute-2 nova_compute[226829]: 2026-01-31 08:29:22.276 226833 DEBUG nova.virt.hardware [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:29:22 compute-2 nova_compute[226829]: 2026-01-31 08:29:22.276 226833 INFO nova.compute.claims [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:29:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:22.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:22 compute-2 nova_compute[226829]: 2026-01-31 08:29:22.522 226833 DEBUG oslo_concurrency.processutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:29:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:29:22 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3450192108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:29:22 compute-2 nova_compute[226829]: 2026-01-31 08:29:22.941 226833 DEBUG oslo_concurrency.processutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:29:22 compute-2 nova_compute[226829]: 2026-01-31 08:29:22.947 226833 DEBUG nova.compute.provider_tree [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:29:23 compute-2 nova_compute[226829]: 2026-01-31 08:29:23.053 226833 DEBUG nova.scheduler.client.report [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:29:23 compute-2 nova_compute[226829]: 2026-01-31 08:29:23.149 226833 DEBUG oslo_concurrency.lockutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:23 compute-2 nova_compute[226829]: 2026-01-31 08:29:23.150 226833 DEBUG nova.compute.manager [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:29:23 compute-2 nova_compute[226829]: 2026-01-31 08:29:23.459 226833 DEBUG nova.compute.manager [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:29:23 compute-2 nova_compute[226829]: 2026-01-31 08:29:23.460 226833 DEBUG nova.network.neutron [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:29:23 compute-2 nova_compute[226829]: 2026-01-31 08:29:23.700 226833 INFO nova.virt.libvirt.driver [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:29:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:23.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:23 compute-2 nova_compute[226829]: 2026-01-31 08:29:23.894 226833 DEBUG nova.compute.manager [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:29:24 compute-2 ceph-mon[77282]: pgmap v2878: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 16 KiB/s wr, 193 op/s
Jan 31 08:29:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3450192108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:29:24 compute-2 nova_compute[226829]: 2026-01-31 08:29:24.128 226833 DEBUG nova.policy [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '48d684de9ba340f48e249b4cce857bfa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '418d5319c640455ab23850c0b0f24f92', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:29:24 compute-2 nova_compute[226829]: 2026-01-31 08:29:24.222 226833 DEBUG nova.compute.manager [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:29:24 compute-2 nova_compute[226829]: 2026-01-31 08:29:24.223 226833 DEBUG nova.virt.libvirt.driver [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:29:24 compute-2 nova_compute[226829]: 2026-01-31 08:29:24.224 226833 INFO nova.virt.libvirt.driver [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Creating image(s)
Jan 31 08:29:24 compute-2 nova_compute[226829]: 2026-01-31 08:29:24.256 226833 DEBUG nova.storage.rbd_utils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] rbd image b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:29:24 compute-2 nova_compute[226829]: 2026-01-31 08:29:24.282 226833 DEBUG nova.storage.rbd_utils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] rbd image b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:29:24 compute-2 nova_compute[226829]: 2026-01-31 08:29:24.308 226833 DEBUG nova.storage.rbd_utils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] rbd image b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:29:24 compute-2 nova_compute[226829]: 2026-01-31 08:29:24.312 226833 DEBUG oslo_concurrency.processutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:29:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:29:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:24.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:29:24 compute-2 nova_compute[226829]: 2026-01-31 08:29:24.385 226833 DEBUG oslo_concurrency.processutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:29:24 compute-2 nova_compute[226829]: 2026-01-31 08:29:24.385 226833 DEBUG oslo_concurrency.lockutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:24 compute-2 nova_compute[226829]: 2026-01-31 08:29:24.386 226833 DEBUG oslo_concurrency.lockutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:24 compute-2 nova_compute[226829]: 2026-01-31 08:29:24.386 226833 DEBUG oslo_concurrency.lockutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:24 compute-2 nova_compute[226829]: 2026-01-31 08:29:24.410 226833 DEBUG nova.storage.rbd_utils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] rbd image b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:29:24 compute-2 nova_compute[226829]: 2026-01-31 08:29:24.415 226833 DEBUG oslo_concurrency.processutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:29:25 compute-2 nova_compute[226829]: 2026-01-31 08:29:25.385 226833 DEBUG oslo_concurrency.processutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.969s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:29:25 compute-2 nova_compute[226829]: 2026-01-31 08:29:25.467 226833 DEBUG nova.storage.rbd_utils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] resizing rbd image b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:29:25 compute-2 ceph-mon[77282]: pgmap v2879: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 16 KiB/s wr, 196 op/s
Jan 31 08:29:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:25 compute-2 nova_compute[226829]: 2026-01-31 08:29:25.595 226833 DEBUG nova.objects.instance [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lazy-loading 'migration_context' on Instance uuid b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:29:25 compute-2 nova_compute[226829]: 2026-01-31 08:29:25.599 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:25.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:29:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:26.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:29:26 compute-2 nova_compute[226829]: 2026-01-31 08:29:26.868 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:27 compute-2 ceph-mon[77282]: pgmap v2880: 305 pgs: 305 active+clean; 682 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 181 op/s
Jan 31 08:29:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:29:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:27.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:29:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:28.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:29 compute-2 nova_compute[226829]: 2026-01-31 08:29:29.072 226833 DEBUG nova.virt.libvirt.driver [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:29:29 compute-2 nova_compute[226829]: 2026-01-31 08:29:29.073 226833 DEBUG nova.virt.libvirt.driver [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Ensure instance console log exists: /var/lib/nova/instances/b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:29:29 compute-2 nova_compute[226829]: 2026-01-31 08:29:29.073 226833 DEBUG oslo_concurrency.lockutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:29 compute-2 nova_compute[226829]: 2026-01-31 08:29:29.073 226833 DEBUG oslo_concurrency.lockutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:29 compute-2 nova_compute[226829]: 2026-01-31 08:29:29.074 226833 DEBUG oslo_concurrency.lockutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:29 compute-2 nova_compute[226829]: 2026-01-31 08:29:29.491 226833 DEBUG nova.network.neutron [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Successfully created port: f71d80d1-5129-4397-82d8-2bebfc6c2cb2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:29:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:29.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:29 compute-2 ceph-mon[77282]: pgmap v2881: 305 pgs: 305 active+clean; 691 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 164 op/s
Jan 31 08:29:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e366 e366: 3 total, 3 up, 3 in
Jan 31 08:29:29 compute-2 ovn_controller[133834]: 2026-01-31T08:29:29Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:78:f9 10.100.0.11
Jan 31 08:29:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:30.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:30 compute-2 nova_compute[226829]: 2026-01-31 08:29:30.602 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:30 compute-2 ceph-mon[77282]: osdmap e366: 3 total, 3 up, 3 in
Jan 31 08:29:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2308574213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:29:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:31.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:31 compute-2 nova_compute[226829]: 2026-01-31 08:29:31.870 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:31 compute-2 ceph-mon[77282]: pgmap v2883: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 691 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 175 op/s
Jan 31 08:29:32 compute-2 sudo[303706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:29:32 compute-2 sudo[303706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:32 compute-2 sudo[303706]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:32 compute-2 podman[303707]: 2026-01-31 08:29:32.175690877 +0000 UTC m=+0.055572719 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 08:29:32 compute-2 sudo[303741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:29:32 compute-2 sudo[303741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:32 compute-2 sudo[303741]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:32.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:32 compute-2 nova_compute[226829]: 2026-01-31 08:29:32.834 226833 DEBUG nova.network.neutron [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Successfully updated port: f71d80d1-5129-4397-82d8-2bebfc6c2cb2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:29:33 compute-2 nova_compute[226829]: 2026-01-31 08:29:33.311 226833 DEBUG oslo_concurrency.lockutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "refresh_cache-b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:29:33 compute-2 nova_compute[226829]: 2026-01-31 08:29:33.312 226833 DEBUG oslo_concurrency.lockutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquired lock "refresh_cache-b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:29:33 compute-2 nova_compute[226829]: 2026-01-31 08:29:33.312 226833 DEBUG nova.network.neutron [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:29:33 compute-2 nova_compute[226829]: 2026-01-31 08:29:33.440 226833 DEBUG nova.compute.manager [req-9c7ed21e-c397-47a7-b4d3-1d50f71a16ee req-caa205d6-2e51-4406-896d-7e508558e69f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Received event network-changed-f71d80d1-5129-4397-82d8-2bebfc6c2cb2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:29:33 compute-2 nova_compute[226829]: 2026-01-31 08:29:33.441 226833 DEBUG nova.compute.manager [req-9c7ed21e-c397-47a7-b4d3-1d50f71a16ee req-caa205d6-2e51-4406-896d-7e508558e69f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Refreshing instance network info cache due to event network-changed-f71d80d1-5129-4397-82d8-2bebfc6c2cb2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:29:33 compute-2 nova_compute[226829]: 2026-01-31 08:29:33.441 226833 DEBUG oslo_concurrency.lockutils [req-9c7ed21e-c397-47a7-b4d3-1d50f71a16ee req-caa205d6-2e51-4406-896d-7e508558e69f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:29:33 compute-2 nova_compute[226829]: 2026-01-31 08:29:33.657 226833 DEBUG nova.network.neutron [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:29:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:29:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:33.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:29:34 compute-2 ceph-mon[77282]: pgmap v2884: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 698 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 852 KiB/s rd, 3.1 MiB/s wr, 126 op/s
Jan 31 08:29:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:29:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:34.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:29:34 compute-2 nova_compute[226829]: 2026-01-31 08:29:34.745 226833 DEBUG nova.network.neutron [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Updating instance_info_cache with network_info: [{"id": "f71d80d1-5129-4397-82d8-2bebfc6c2cb2", "address": "fa:16:3e:fe:8a:82", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf71d80d1-51", "ovs_interfaceid": "f71d80d1-5129-4397-82d8-2bebfc6c2cb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:29:34 compute-2 nova_compute[226829]: 2026-01-31 08:29:34.921 226833 DEBUG oslo_concurrency.lockutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Releasing lock "refresh_cache-b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:29:34 compute-2 nova_compute[226829]: 2026-01-31 08:29:34.922 226833 DEBUG nova.compute.manager [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Instance network_info: |[{"id": "f71d80d1-5129-4397-82d8-2bebfc6c2cb2", "address": "fa:16:3e:fe:8a:82", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf71d80d1-51", "ovs_interfaceid": "f71d80d1-5129-4397-82d8-2bebfc6c2cb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:29:34 compute-2 nova_compute[226829]: 2026-01-31 08:29:34.922 226833 DEBUG oslo_concurrency.lockutils [req-9c7ed21e-c397-47a7-b4d3-1d50f71a16ee req-caa205d6-2e51-4406-896d-7e508558e69f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:29:34 compute-2 nova_compute[226829]: 2026-01-31 08:29:34.922 226833 DEBUG nova.network.neutron [req-9c7ed21e-c397-47a7-b4d3-1d50f71a16ee req-caa205d6-2e51-4406-896d-7e508558e69f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Refreshing network info cache for port f71d80d1-5129-4397-82d8-2bebfc6c2cb2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:29:34 compute-2 nova_compute[226829]: 2026-01-31 08:29:34.925 226833 DEBUG nova.virt.libvirt.driver [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Start _get_guest_xml network_info=[{"id": "f71d80d1-5129-4397-82d8-2bebfc6c2cb2", "address": "fa:16:3e:fe:8a:82", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf71d80d1-51", "ovs_interfaceid": "f71d80d1-5129-4397-82d8-2bebfc6c2cb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:29:34 compute-2 nova_compute[226829]: 2026-01-31 08:29:34.930 226833 WARNING nova.virt.libvirt.driver [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:29:34 compute-2 nova_compute[226829]: 2026-01-31 08:29:34.940 226833 DEBUG nova.virt.libvirt.host [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:29:34 compute-2 nova_compute[226829]: 2026-01-31 08:29:34.941 226833 DEBUG nova.virt.libvirt.host [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:29:34 compute-2 nova_compute[226829]: 2026-01-31 08:29:34.944 226833 DEBUG nova.virt.libvirt.host [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:29:34 compute-2 nova_compute[226829]: 2026-01-31 08:29:34.944 226833 DEBUG nova.virt.libvirt.host [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:29:34 compute-2 nova_compute[226829]: 2026-01-31 08:29:34.945 226833 DEBUG nova.virt.libvirt.driver [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:29:34 compute-2 nova_compute[226829]: 2026-01-31 08:29:34.945 226833 DEBUG nova.virt.hardware [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:29:34 compute-2 nova_compute[226829]: 2026-01-31 08:29:34.946 226833 DEBUG nova.virt.hardware [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:29:34 compute-2 nova_compute[226829]: 2026-01-31 08:29:34.946 226833 DEBUG nova.virt.hardware [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:29:34 compute-2 nova_compute[226829]: 2026-01-31 08:29:34.946 226833 DEBUG nova.virt.hardware [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:29:34 compute-2 nova_compute[226829]: 2026-01-31 08:29:34.946 226833 DEBUG nova.virt.hardware [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:29:34 compute-2 nova_compute[226829]: 2026-01-31 08:29:34.947 226833 DEBUG nova.virt.hardware [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:29:34 compute-2 nova_compute[226829]: 2026-01-31 08:29:34.947 226833 DEBUG nova.virt.hardware [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:29:34 compute-2 nova_compute[226829]: 2026-01-31 08:29:34.947 226833 DEBUG nova.virt.hardware [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:29:34 compute-2 nova_compute[226829]: 2026-01-31 08:29:34.947 226833 DEBUG nova.virt.hardware [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:29:34 compute-2 nova_compute[226829]: 2026-01-31 08:29:34.947 226833 DEBUG nova.virt.hardware [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:29:34 compute-2 nova_compute[226829]: 2026-01-31 08:29:34.948 226833 DEBUG nova.virt.hardware [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:29:34 compute-2 nova_compute[226829]: 2026-01-31 08:29:34.950 226833 DEBUG oslo_concurrency.processutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:29:35 compute-2 ceph-mon[77282]: pgmap v2885: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 707 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 802 KiB/s rd, 3.9 MiB/s wr, 141 op/s
Jan 31 08:29:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:29:35 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/277042438' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:29:35 compute-2 nova_compute[226829]: 2026-01-31 08:29:35.427 226833 DEBUG oslo_concurrency.processutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:29:35 compute-2 nova_compute[226829]: 2026-01-31 08:29:35.456 226833 DEBUG nova.storage.rbd_utils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] rbd image b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:29:35 compute-2 nova_compute[226829]: 2026-01-31 08:29:35.460 226833 DEBUG oslo_concurrency.processutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:29:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:35 compute-2 nova_compute[226829]: 2026-01-31 08:29:35.604 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:35.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:29:35 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3857347424' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:29:35 compute-2 nova_compute[226829]: 2026-01-31 08:29:35.914 226833 DEBUG oslo_concurrency.processutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:29:35 compute-2 nova_compute[226829]: 2026-01-31 08:29:35.915 226833 DEBUG nova.virt.libvirt.vif [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:29:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-988559875',display_name='tempest-AttachVolumeNegativeTest-server-988559875',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-988559875',id=170,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE/zuybdYg042+ljf9K0OaAu4qMaJQTzi7pgwG187ctJeBYsiO0zjHd/uMi155olotZf3rtKbwgn46CPW17Eobd3fKqCBDHO/KeVlfYLbYoq0ah/sjrhyXG89J7gTR3NLQ==',key_name='tempest-keypair-71951735',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='418d5319c640455ab23850c0b0f24f92',ramdisk_id='',reservation_id='r-ikeiarwv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-562353674',owner_user_name='tempest-AttachVolumeNegativeTest-562353674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:29:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='48d684de9ba340f48e249b4cce857bfa',uuid=b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f71d80d1-5129-4397-82d8-2bebfc6c2cb2", "address": "fa:16:3e:fe:8a:82", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf71d80d1-51", "ovs_interfaceid": "f71d80d1-5129-4397-82d8-2bebfc6c2cb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:29:35 compute-2 nova_compute[226829]: 2026-01-31 08:29:35.915 226833 DEBUG nova.network.os_vif_util [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Converting VIF {"id": "f71d80d1-5129-4397-82d8-2bebfc6c2cb2", "address": "fa:16:3e:fe:8a:82", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf71d80d1-51", "ovs_interfaceid": "f71d80d1-5129-4397-82d8-2bebfc6c2cb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:29:35 compute-2 nova_compute[226829]: 2026-01-31 08:29:35.916 226833 DEBUG nova.network.os_vif_util [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:8a:82,bridge_name='br-int',has_traffic_filtering=True,id=f71d80d1-5129-4397-82d8-2bebfc6c2cb2,network=Network(e26a2af1-a850-4885-977e-596b6be13fb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf71d80d1-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:29:35 compute-2 nova_compute[226829]: 2026-01-31 08:29:35.917 226833 DEBUG nova.objects.instance [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lazy-loading 'pci_devices' on Instance uuid b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:29:36 compute-2 nova_compute[226829]: 2026-01-31 08:29:36.049 226833 DEBUG nova.virt.libvirt.driver [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:29:36 compute-2 nova_compute[226829]:   <uuid>b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5</uuid>
Jan 31 08:29:36 compute-2 nova_compute[226829]:   <name>instance-000000aa</name>
Jan 31 08:29:36 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:29:36 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:29:36 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <nova:name>tempest-AttachVolumeNegativeTest-server-988559875</nova:name>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:29:34</nova:creationTime>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:29:36 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:29:36 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:29:36 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:29:36 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:29:36 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:29:36 compute-2 nova_compute[226829]:         <nova:user uuid="48d684de9ba340f48e249b4cce857bfa">tempest-AttachVolumeNegativeTest-562353674-project-member</nova:user>
Jan 31 08:29:36 compute-2 nova_compute[226829]:         <nova:project uuid="418d5319c640455ab23850c0b0f24f92">tempest-AttachVolumeNegativeTest-562353674</nova:project>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:29:36 compute-2 nova_compute[226829]:         <nova:port uuid="f71d80d1-5129-4397-82d8-2bebfc6c2cb2">
Jan 31 08:29:36 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:29:36 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:29:36 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <system>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <entry name="serial">b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5</entry>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <entry name="uuid">b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5</entry>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     </system>
Jan 31 08:29:36 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:29:36 compute-2 nova_compute[226829]:   <os>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:   </os>
Jan 31 08:29:36 compute-2 nova_compute[226829]:   <features>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:   </features>
Jan 31 08:29:36 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:29:36 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:29:36 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5_disk">
Jan 31 08:29:36 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       </source>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:29:36 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5_disk.config">
Jan 31 08:29:36 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       </source>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:29:36 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:fe:8a:82"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <target dev="tapf71d80d1-51"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5/console.log" append="off"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <video>
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     </video>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:29:36 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:29:36 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:29:36 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:29:36 compute-2 nova_compute[226829]: </domain>
Jan 31 08:29:36 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:29:36 compute-2 nova_compute[226829]: 2026-01-31 08:29:36.050 226833 DEBUG nova.compute.manager [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Preparing to wait for external event network-vif-plugged-f71d80d1-5129-4397-82d8-2bebfc6c2cb2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:29:36 compute-2 nova_compute[226829]: 2026-01-31 08:29:36.050 226833 DEBUG oslo_concurrency.lockutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:36 compute-2 nova_compute[226829]: 2026-01-31 08:29:36.050 226833 DEBUG oslo_concurrency.lockutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:36 compute-2 nova_compute[226829]: 2026-01-31 08:29:36.050 226833 DEBUG oslo_concurrency.lockutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:36 compute-2 nova_compute[226829]: 2026-01-31 08:29:36.051 226833 DEBUG nova.virt.libvirt.vif [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:29:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-988559875',display_name='tempest-AttachVolumeNegativeTest-server-988559875',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-988559875',id=170,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE/zuybdYg042+ljf9K0OaAu4qMaJQTzi7pgwG187ctJeBYsiO0zjHd/uMi155olotZf3rtKbwgn46CPW17Eobd3fKqCBDHO/KeVlfYLbYoq0ah/sjrhyXG89J7gTR3NLQ==',key_name='tempest-keypair-71951735',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='418d5319c640455ab23850c0b0f24f92',ramdisk_id='',reservation_id='r-ikeiarwv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-562353674',owner_user_name='tempest-AttachVolumeNegativeTest-562353674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:29:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='48d684de9ba340f48e249b4cce857bfa',uuid=b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f71d80d1-5129-4397-82d8-2bebfc6c2cb2", "address": "fa:16:3e:fe:8a:82", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf71d80d1-51", "ovs_interfaceid": "f71d80d1-5129-4397-82d8-2bebfc6c2cb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:29:36 compute-2 nova_compute[226829]: 2026-01-31 08:29:36.051 226833 DEBUG nova.network.os_vif_util [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Converting VIF {"id": "f71d80d1-5129-4397-82d8-2bebfc6c2cb2", "address": "fa:16:3e:fe:8a:82", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf71d80d1-51", "ovs_interfaceid": "f71d80d1-5129-4397-82d8-2bebfc6c2cb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:29:36 compute-2 nova_compute[226829]: 2026-01-31 08:29:36.052 226833 DEBUG nova.network.os_vif_util [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:8a:82,bridge_name='br-int',has_traffic_filtering=True,id=f71d80d1-5129-4397-82d8-2bebfc6c2cb2,network=Network(e26a2af1-a850-4885-977e-596b6be13fb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf71d80d1-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:29:36 compute-2 nova_compute[226829]: 2026-01-31 08:29:36.052 226833 DEBUG os_vif [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:8a:82,bridge_name='br-int',has_traffic_filtering=True,id=f71d80d1-5129-4397-82d8-2bebfc6c2cb2,network=Network(e26a2af1-a850-4885-977e-596b6be13fb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf71d80d1-51') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:29:36 compute-2 nova_compute[226829]: 2026-01-31 08:29:36.053 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:36 compute-2 nova_compute[226829]: 2026-01-31 08:29:36.053 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:29:36 compute-2 nova_compute[226829]: 2026-01-31 08:29:36.053 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:29:36 compute-2 nova_compute[226829]: 2026-01-31 08:29:36.061 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:36 compute-2 nova_compute[226829]: 2026-01-31 08:29:36.061 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf71d80d1-51, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:29:36 compute-2 nova_compute[226829]: 2026-01-31 08:29:36.061 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf71d80d1-51, col_values=(('external_ids', {'iface-id': 'f71d80d1-5129-4397-82d8-2bebfc6c2cb2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:8a:82', 'vm-uuid': 'b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:29:36 compute-2 NetworkManager[48999]: <info>  [1769848176.0893] manager: (tapf71d80d1-51): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/338)
Jan 31 08:29:36 compute-2 nova_compute[226829]: 2026-01-31 08:29:36.088 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:36 compute-2 nova_compute[226829]: 2026-01-31 08:29:36.093 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:29:36 compute-2 nova_compute[226829]: 2026-01-31 08:29:36.094 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:36 compute-2 nova_compute[226829]: 2026-01-31 08:29:36.095 226833 INFO os_vif [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:8a:82,bridge_name='br-int',has_traffic_filtering=True,id=f71d80d1-5129-4397-82d8-2bebfc6c2cb2,network=Network(e26a2af1-a850-4885-977e-596b6be13fb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf71d80d1-51')
Jan 31 08:29:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:29:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:36.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:29:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/277042438' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:29:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3857347424' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:29:36 compute-2 nova_compute[226829]: 2026-01-31 08:29:36.873 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:37 compute-2 nova_compute[226829]: 2026-01-31 08:29:37.310 226833 DEBUG nova.virt.libvirt.driver [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:29:37 compute-2 nova_compute[226829]: 2026-01-31 08:29:37.310 226833 DEBUG nova.virt.libvirt.driver [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:29:37 compute-2 nova_compute[226829]: 2026-01-31 08:29:37.311 226833 DEBUG nova.virt.libvirt.driver [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] No VIF found with MAC fa:16:3e:fe:8a:82, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:29:37 compute-2 nova_compute[226829]: 2026-01-31 08:29:37.311 226833 INFO nova.virt.libvirt.driver [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Using config drive
Jan 31 08:29:37 compute-2 nova_compute[226829]: 2026-01-31 08:29:37.340 226833 DEBUG nova.storage.rbd_utils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] rbd image b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:29:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e367 e367: 3 total, 3 up, 3 in
Jan 31 08:29:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:37.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:37 compute-2 ceph-mon[77282]: pgmap v2886: 305 pgs: 305 active+clean; 724 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.8 MiB/s wr, 160 op/s
Jan 31 08:29:37 compute-2 ceph-mon[77282]: osdmap e367: 3 total, 3 up, 3 in
Jan 31 08:29:37 compute-2 nova_compute[226829]: 2026-01-31 08:29:37.964 226833 INFO nova.virt.libvirt.driver [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Creating config drive at /var/lib/nova/instances/b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5/disk.config
Jan 31 08:29:37 compute-2 nova_compute[226829]: 2026-01-31 08:29:37.971 226833 DEBUG oslo_concurrency.processutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmph6vafv7p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:29:38 compute-2 nova_compute[226829]: 2026-01-31 08:29:38.106 226833 DEBUG oslo_concurrency.processutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmph6vafv7p" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:29:38 compute-2 nova_compute[226829]: 2026-01-31 08:29:38.145 226833 DEBUG nova.storage.rbd_utils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] rbd image b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:29:38 compute-2 nova_compute[226829]: 2026-01-31 08:29:38.150 226833 DEBUG oslo_concurrency.processutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5/disk.config b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:29:38 compute-2 nova_compute[226829]: 2026-01-31 08:29:38.180 226833 DEBUG nova.network.neutron [req-9c7ed21e-c397-47a7-b4d3-1d50f71a16ee req-caa205d6-2e51-4406-896d-7e508558e69f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Updated VIF entry in instance network info cache for port f71d80d1-5129-4397-82d8-2bebfc6c2cb2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:29:38 compute-2 nova_compute[226829]: 2026-01-31 08:29:38.182 226833 DEBUG nova.network.neutron [req-9c7ed21e-c397-47a7-b4d3-1d50f71a16ee req-caa205d6-2e51-4406-896d-7e508558e69f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Updating instance_info_cache with network_info: [{"id": "f71d80d1-5129-4397-82d8-2bebfc6c2cb2", "address": "fa:16:3e:fe:8a:82", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf71d80d1-51", "ovs_interfaceid": "f71d80d1-5129-4397-82d8-2bebfc6c2cb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:29:38 compute-2 nova_compute[226829]: 2026-01-31 08:29:38.215 226833 DEBUG oslo_concurrency.lockutils [req-9c7ed21e-c397-47a7-b4d3-1d50f71a16ee req-caa205d6-2e51-4406-896d-7e508558e69f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:29:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:38.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:38 compute-2 nova_compute[226829]: 2026-01-31 08:29:38.409 226833 DEBUG nova.compute.manager [req-790fb103-5b80-49e6-9d39-a87ab6b5b67c req-31581ef0-b7f8-4030-a5ab-a157f495c88b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Received event network-changed-a3ff723d-9e34-45ef-881d-6534126ae169 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:29:38 compute-2 nova_compute[226829]: 2026-01-31 08:29:38.410 226833 DEBUG nova.compute.manager [req-790fb103-5b80-49e6-9d39-a87ab6b5b67c req-31581ef0-b7f8-4030-a5ab-a157f495c88b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Refreshing instance network info cache due to event network-changed-a3ff723d-9e34-45ef-881d-6534126ae169. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:29:38 compute-2 nova_compute[226829]: 2026-01-31 08:29:38.411 226833 DEBUG oslo_concurrency.lockutils [req-790fb103-5b80-49e6-9d39-a87ab6b5b67c req-31581ef0-b7f8-4030-a5ab-a157f495c88b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:29:38 compute-2 nova_compute[226829]: 2026-01-31 08:29:38.411 226833 DEBUG oslo_concurrency.lockutils [req-790fb103-5b80-49e6-9d39-a87ab6b5b67c req-31581ef0-b7f8-4030-a5ab-a157f495c88b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:29:38 compute-2 nova_compute[226829]: 2026-01-31 08:29:38.411 226833 DEBUG nova.network.neutron [req-790fb103-5b80-49e6-9d39-a87ab6b5b67c req-31581ef0-b7f8-4030-a5ab-a157f495c88b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Refreshing network info cache for port a3ff723d-9e34-45ef-881d-6534126ae169 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:29:38 compute-2 nova_compute[226829]: 2026-01-31 08:29:38.412 226833 DEBUG oslo_concurrency.processutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5/disk.config b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:29:38 compute-2 nova_compute[226829]: 2026-01-31 08:29:38.413 226833 INFO nova.virt.libvirt.driver [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Deleting local config drive /var/lib/nova/instances/b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5/disk.config because it was imported into RBD.
Jan 31 08:29:38 compute-2 kernel: tapf71d80d1-51: entered promiscuous mode
Jan 31 08:29:38 compute-2 NetworkManager[48999]: <info>  [1769848178.4592] manager: (tapf71d80d1-51): new Tun device (/org/freedesktop/NetworkManager/Devices/339)
Jan 31 08:29:38 compute-2 ovn_controller[133834]: 2026-01-31T08:29:38Z|00680|binding|INFO|Claiming lport f71d80d1-5129-4397-82d8-2bebfc6c2cb2 for this chassis.
Jan 31 08:29:38 compute-2 ovn_controller[133834]: 2026-01-31T08:29:38Z|00681|binding|INFO|f71d80d1-5129-4397-82d8-2bebfc6c2cb2: Claiming fa:16:3e:fe:8a:82 10.100.0.10
Jan 31 08:29:38 compute-2 nova_compute[226829]: 2026-01-31 08:29:38.459 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:38 compute-2 ovn_controller[133834]: 2026-01-31T08:29:38Z|00682|binding|INFO|Setting lport f71d80d1-5129-4397-82d8-2bebfc6c2cb2 ovn-installed in OVS
Jan 31 08:29:38 compute-2 nova_compute[226829]: 2026-01-31 08:29:38.472 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:38 compute-2 ovn_controller[133834]: 2026-01-31T08:29:38Z|00683|binding|INFO|Setting lport f71d80d1-5129-4397-82d8-2bebfc6c2cb2 up in Southbound
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:38.477 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:8a:82 10.100.0.10'], port_security=['fa:16:3e:fe:8a:82 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e26a2af1-a850-4885-977e-596b6be13fb8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '418d5319c640455ab23850c0b0f24f92', 'neutron:revision_number': '2', 'neutron:security_group_ids': '80cc5615-b383-4fce-816b-fb758f7f671b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=068708bd-dd36-4d03-9d65-912eb9981ecc, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=f71d80d1-5129-4397-82d8-2bebfc6c2cb2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:38.483 143841 INFO neutron.agent.ovn.metadata.agent [-] Port f71d80d1-5129-4397-82d8-2bebfc6c2cb2 in datapath e26a2af1-a850-4885-977e-596b6be13fb8 bound to our chassis
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:38.487 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e26a2af1-a850-4885-977e-596b6be13fb8
Jan 31 08:29:38 compute-2 systemd-udevd[303914]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:29:38 compute-2 systemd-machined[195142]: New machine qemu-79-instance-000000aa.
Jan 31 08:29:38 compute-2 NetworkManager[48999]: <info>  [1769848178.5003] device (tapf71d80d1-51): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:29:38 compute-2 NetworkManager[48999]: <info>  [1769848178.5009] device (tapf71d80d1-51): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:29:38 compute-2 systemd[1]: Started Virtual Machine qemu-79-instance-000000aa.
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:38.500 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[59960b5f-cfaa-4aa3-af6a-463dc4df35bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:38.502 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape26a2af1-a1 in ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:38.505 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape26a2af1-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:38.505 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[bab06056-9842-4654-840a-275ed5a1371b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:38.507 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ef8dd170-4f01-47fe-ba6d-ed359f19039d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:38.526 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[2bd45217-47cf-4204-9b25-d12e08d27869]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:38.541 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[92726467-47cc-4b88-90b0-853baed6b8d0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:38.569 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[4119beb0-f310-44e2-b1ed-b990ba0c41d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:29:38 compute-2 systemd-udevd[303917]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:29:38 compute-2 NetworkManager[48999]: <info>  [1769848178.5772] manager: (tape26a2af1-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/340)
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:38.576 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[004b9399-90ef-4241-b9e5-28c60117e19e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:38.600 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[c23adbec-e740-43da-91df-657e5f974bac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:38.605 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[91cc0680-dc29-4fc9-bbf6-cd37ab9f2e1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:29:38 compute-2 NetworkManager[48999]: <info>  [1769848178.6238] device (tape26a2af1-a0): carrier: link connected
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:38.628 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[1f12932e-f5b3-4ea6-995a-76b5546e8fd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:38.645 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[67c7bd54-ca8a-4dab-af86-948225ea63c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape26a2af1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9c:7d:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 213], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 843109, 'reachable_time': 19447, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303947, 'error': None, 'target': 'ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:38.662 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[844ba00e-ce83-46f6-bc0b-287753a04184]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9c:7dba'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 843109, 'tstamp': 843109}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303948, 'error': None, 'target': 'ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:38.683 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6e1685bb-36c0-4e00-bfe2-e7731b0954a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape26a2af1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9c:7d:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 213], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 843109, 'reachable_time': 19447, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303964, 'error': None, 'target': 'ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:38.714 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[fd756e0a-7c8e-4db9-bc5a-cea3fe7082ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:38.760 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[426349a4-ff15-4e94-9844-46d8ccc49485]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:38.762 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape26a2af1-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:38.762 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:38.762 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape26a2af1-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:29:38 compute-2 kernel: tape26a2af1-a0: entered promiscuous mode
Jan 31 08:29:38 compute-2 nova_compute[226829]: 2026-01-31 08:29:38.764 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:38 compute-2 NetworkManager[48999]: <info>  [1769848178.7648] manager: (tape26a2af1-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/341)
Jan 31 08:29:38 compute-2 nova_compute[226829]: 2026-01-31 08:29:38.766 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:38.767 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape26a2af1-a0, col_values=(('external_ids', {'iface-id': '003d1f0e-744f-4244-8c9f-3a9be6033652'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:29:38 compute-2 ovn_controller[133834]: 2026-01-31T08:29:38Z|00684|binding|INFO|Releasing lport 003d1f0e-744f-4244-8c9f-3a9be6033652 from this chassis (sb_readonly=0)
Jan 31 08:29:38 compute-2 nova_compute[226829]: 2026-01-31 08:29:38.773 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:38.774 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e26a2af1-a850-4885-977e-596b6be13fb8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e26a2af1-a850-4885-977e-596b6be13fb8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:38.775 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a30854d2-e553-486b-a8a2-830fad10a793]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:38.776 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-e26a2af1-a850-4885-977e-596b6be13fb8
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/e26a2af1-a850-4885-977e-596b6be13fb8.pid.haproxy
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID e26a2af1-a850-4885-977e-596b6be13fb8
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:29:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:29:38.777 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8', 'env', 'PROCESS_TAG=haproxy-e26a2af1-a850-4885-977e-596b6be13fb8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e26a2af1-a850-4885-977e-596b6be13fb8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:29:38 compute-2 nova_compute[226829]: 2026-01-31 08:29:38.845 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848178.8444836, b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:29:38 compute-2 nova_compute[226829]: 2026-01-31 08:29:38.847 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] VM Started (Lifecycle Event)
Jan 31 08:29:39 compute-2 nova_compute[226829]: 2026-01-31 08:29:39.002 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:29:39 compute-2 nova_compute[226829]: 2026-01-31 08:29:39.009 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848178.8447971, b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:29:39 compute-2 nova_compute[226829]: 2026-01-31 08:29:39.010 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] VM Paused (Lifecycle Event)
Jan 31 08:29:39 compute-2 podman[304023]: 2026-01-31 08:29:39.166084105 +0000 UTC m=+0.050024569 container create 4b84882824dfc1902335537110749f0beff763205a4b4063435ebca34c3fda6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 08:29:39 compute-2 systemd[1]: Started libpod-conmon-4b84882824dfc1902335537110749f0beff763205a4b4063435ebca34c3fda6c.scope.
Jan 31 08:29:39 compute-2 nova_compute[226829]: 2026-01-31 08:29:39.231 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:29:39 compute-2 podman[304023]: 2026-01-31 08:29:39.14121928 +0000 UTC m=+0.025159784 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:29:39 compute-2 nova_compute[226829]: 2026-01-31 08:29:39.235 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:29:39 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:29:39 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f8992676e1ba9fa737826a9be0781b520d2204979ea10c38c6353861ef0880e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:39 compute-2 podman[304023]: 2026-01-31 08:29:39.259025066 +0000 UTC m=+0.142965550 container init 4b84882824dfc1902335537110749f0beff763205a4b4063435ebca34c3fda6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:29:39 compute-2 podman[304023]: 2026-01-31 08:29:39.266608092 +0000 UTC m=+0.150548556 container start 4b84882824dfc1902335537110749f0beff763205a4b4063435ebca34c3fda6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:29:39 compute-2 nova_compute[226829]: 2026-01-31 08:29:39.294 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:29:39 compute-2 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[304039]: [NOTICE]   (304043) : New worker (304045) forked
Jan 31 08:29:39 compute-2 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[304039]: [NOTICE]   (304043) : Loading success.
Jan 31 08:29:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:39.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:39 compute-2 nova_compute[226829]: 2026-01-31 08:29:39.938 226833 DEBUG oslo_concurrency.lockutils [None req-95df8db2-f892-49c2-8ef4-c247a393eff8 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:39 compute-2 nova_compute[226829]: 2026-01-31 08:29:39.939 226833 DEBUG oslo_concurrency.lockutils [None req-95df8db2-f892-49c2-8ef4-c247a393eff8 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:40 compute-2 ceph-mon[77282]: pgmap v2888: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.1 MiB/s wr, 190 op/s
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.248 226833 INFO nova.compute.manager [None req-95df8db2-f892-49c2-8ef4-c247a393eff8 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Detaching volume c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4
Jan 31 08:29:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:29:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:40.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.490 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.490 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.504 226833 INFO nova.virt.block_device [None req-95df8db2-f892-49c2-8ef4-c247a393eff8 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Attempting to driver detach volume c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4 from mountpoint /dev/vdb
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.518 226833 DEBUG nova.virt.libvirt.driver [None req-95df8db2-f892-49c2-8ef4-c247a393eff8 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Attempting to detach device vdb from instance e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.519 226833 DEBUG nova.virt.libvirt.guest [None req-95df8db2-f892-49c2-8ef4-c247a393eff8 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:29:40 compute-2 nova_compute[226829]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:29:40 compute-2 nova_compute[226829]:   <source protocol="rbd" name="volumes/volume-c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4">
Jan 31 08:29:40 compute-2 nova_compute[226829]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:29:40 compute-2 nova_compute[226829]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:29:40 compute-2 nova_compute[226829]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:29:40 compute-2 nova_compute[226829]:   </source>
Jan 31 08:29:40 compute-2 nova_compute[226829]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:29:40 compute-2 nova_compute[226829]:   <serial>c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4</serial>
Jan 31 08:29:40 compute-2 nova_compute[226829]:   <shareable/>
Jan 31 08:29:40 compute-2 nova_compute[226829]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Jan 31 08:29:40 compute-2 nova_compute[226829]: </disk>
Jan 31 08:29:40 compute-2 nova_compute[226829]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.529 226833 INFO nova.virt.libvirt.driver [None req-95df8db2-f892-49c2-8ef4-c247a393eff8 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Successfully detached device vdb from instance e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 from the persistent domain config.
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.530 226833 DEBUG nova.virt.libvirt.driver [None req-95df8db2-f892-49c2-8ef4-c247a393eff8 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.531 226833 DEBUG nova.virt.libvirt.guest [None req-95df8db2-f892-49c2-8ef4-c247a393eff8 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:29:40 compute-2 nova_compute[226829]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:29:40 compute-2 nova_compute[226829]:   <source protocol="rbd" name="volumes/volume-c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4">
Jan 31 08:29:40 compute-2 nova_compute[226829]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:29:40 compute-2 nova_compute[226829]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:29:40 compute-2 nova_compute[226829]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:29:40 compute-2 nova_compute[226829]:   </source>
Jan 31 08:29:40 compute-2 nova_compute[226829]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:29:40 compute-2 nova_compute[226829]:   <serial>c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4</serial>
Jan 31 08:29:40 compute-2 nova_compute[226829]:   <shareable/>
Jan 31 08:29:40 compute-2 nova_compute[226829]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Jan 31 08:29:40 compute-2 nova_compute[226829]: </disk>
Jan 31 08:29:40 compute-2 nova_compute[226829]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:29:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.700 226833 DEBUG nova.virt.libvirt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Received event <DeviceRemovedEvent: 1769848180.7004836, e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.703 226833 DEBUG nova.virt.libvirt.driver [None req-95df8db2-f892-49c2-8ef4-c247a393eff8 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.707 226833 INFO nova.virt.libvirt.driver [None req-95df8db2-f892-49c2-8ef4-c247a393eff8 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Successfully detached device vdb from instance e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 from the live domain config.
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.755 226833 DEBUG nova.compute.manager [req-25f99871-785b-4864-9069-2411c5a3c347 req-7d7e4bdf-4a8d-466a-8b14-c4a0cc3e5f74 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Received event network-vif-plugged-f71d80d1-5129-4397-82d8-2bebfc6c2cb2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.756 226833 DEBUG oslo_concurrency.lockutils [req-25f99871-785b-4864-9069-2411c5a3c347 req-7d7e4bdf-4a8d-466a-8b14-c4a0cc3e5f74 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.757 226833 DEBUG oslo_concurrency.lockutils [req-25f99871-785b-4864-9069-2411c5a3c347 req-7d7e4bdf-4a8d-466a-8b14-c4a0cc3e5f74 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.757 226833 DEBUG oslo_concurrency.lockutils [req-25f99871-785b-4864-9069-2411c5a3c347 req-7d7e4bdf-4a8d-466a-8b14-c4a0cc3e5f74 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.758 226833 DEBUG nova.compute.manager [req-25f99871-785b-4864-9069-2411c5a3c347 req-7d7e4bdf-4a8d-466a-8b14-c4a0cc3e5f74 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Processing event network-vif-plugged-f71d80d1-5129-4397-82d8-2bebfc6c2cb2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.759 226833 DEBUG nova.compute.manager [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.764 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848180.7639496, b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.765 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] VM Resumed (Lifecycle Event)
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.768 226833 DEBUG nova.virt.libvirt.driver [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.772 226833 INFO nova.virt.libvirt.driver [-] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Instance spawned successfully.
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.774 226833 DEBUG nova.virt.libvirt.driver [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.852 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.858 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.863 226833 DEBUG nova.virt.libvirt.driver [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.863 226833 DEBUG nova.virt.libvirt.driver [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.863 226833 DEBUG nova.virt.libvirt.driver [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.864 226833 DEBUG nova.virt.libvirt.driver [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.864 226833 DEBUG nova.virt.libvirt.driver [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:29:40 compute-2 nova_compute[226829]: 2026-01-31 08:29:40.865 226833 DEBUG nova.virt.libvirt.driver [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:29:41 compute-2 nova_compute[226829]: 2026-01-31 08:29:41.091 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:41 compute-2 nova_compute[226829]: 2026-01-31 08:29:41.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:41 compute-2 nova_compute[226829]: 2026-01-31 08:29:41.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:29:41 compute-2 nova_compute[226829]: 2026-01-31 08:29:41.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:29:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:41.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:41 compute-2 nova_compute[226829]: 2026-01-31 08:29:41.882 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:42 compute-2 ceph-mon[77282]: pgmap v2889: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 544 KiB/s rd, 2.6 MiB/s wr, 100 op/s
Jan 31 08:29:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:42.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:42 compute-2 nova_compute[226829]: 2026-01-31 08:29:42.428 226833 DEBUG nova.compute.manager [req-506172f8-053d-4ce7-950a-6f3bf1b619bc req-d551bf99-3d22-455e-9955-90c84725d803 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received event network-changed-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:29:42 compute-2 nova_compute[226829]: 2026-01-31 08:29:42.429 226833 DEBUG nova.compute.manager [req-506172f8-053d-4ce7-950a-6f3bf1b619bc req-d551bf99-3d22-455e-9955-90c84725d803 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Refreshing instance network info cache due to event network-changed-aeb09486-b68f-4fa4-a410-dd0ffaf49b05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:29:42 compute-2 nova_compute[226829]: 2026-01-31 08:29:42.430 226833 DEBUG oslo_concurrency.lockutils [req-506172f8-053d-4ce7-950a-6f3bf1b619bc req-d551bf99-3d22-455e-9955-90c84725d803 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:29:42 compute-2 nova_compute[226829]: 2026-01-31 08:29:42.430 226833 DEBUG oslo_concurrency.lockutils [req-506172f8-053d-4ce7-950a-6f3bf1b619bc req-d551bf99-3d22-455e-9955-90c84725d803 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:29:42 compute-2 nova_compute[226829]: 2026-01-31 08:29:42.430 226833 DEBUG nova.network.neutron [req-506172f8-053d-4ce7-950a-6f3bf1b619bc req-d551bf99-3d22-455e-9955-90c84725d803 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Refreshing network info cache for port aeb09486-b68f-4fa4-a410-dd0ffaf49b05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:29:42 compute-2 nova_compute[226829]: 2026-01-31 08:29:42.457 226833 INFO nova.virt.libvirt.driver [None req-95df8db2-f892-49c2-8ef4-c247a393eff8 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Detected multiple connections on this host for volume: c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4, skipping target disconnect.
Jan 31 08:29:42 compute-2 nova_compute[226829]: 2026-01-31 08:29:42.465 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:29:43 compute-2 nova_compute[226829]: 2026-01-31 08:29:43.123 226833 DEBUG nova.network.neutron [req-790fb103-5b80-49e6-9d39-a87ab6b5b67c req-31581ef0-b7f8-4030-a5ab-a157f495c88b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Updated VIF entry in instance network info cache for port a3ff723d-9e34-45ef-881d-6534126ae169. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:29:43 compute-2 nova_compute[226829]: 2026-01-31 08:29:43.124 226833 DEBUG nova.network.neutron [req-790fb103-5b80-49e6-9d39-a87ab6b5b67c req-31581ef0-b7f8-4030-a5ab-a157f495c88b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Updating instance_info_cache with network_info: [{"id": "a3ff723d-9e34-45ef-881d-6534126ae169", "address": "fa:16:3e:1b:c5:21", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ff723d-9e", "ovs_interfaceid": "a3ff723d-9e34-45ef-881d-6534126ae169", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:29:43 compute-2 nova_compute[226829]: 2026-01-31 08:29:43.241 226833 INFO nova.compute.manager [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Took 19.02 seconds to spawn the instance on the hypervisor.
Jan 31 08:29:43 compute-2 nova_compute[226829]: 2026-01-31 08:29:43.242 226833 DEBUG nova.compute.manager [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:29:43 compute-2 nova_compute[226829]: 2026-01-31 08:29:43.248 226833 DEBUG nova.compute.manager [req-153dbfa0-0c88-4742-8f6c-23d6f81da647 req-6ea1b4cd-c4ff-4f89-aa0f-75ebf8acca4d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Received event network-vif-plugged-f71d80d1-5129-4397-82d8-2bebfc6c2cb2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:29:43 compute-2 nova_compute[226829]: 2026-01-31 08:29:43.249 226833 DEBUG oslo_concurrency.lockutils [req-153dbfa0-0c88-4742-8f6c-23d6f81da647 req-6ea1b4cd-c4ff-4f89-aa0f-75ebf8acca4d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:43 compute-2 nova_compute[226829]: 2026-01-31 08:29:43.250 226833 DEBUG oslo_concurrency.lockutils [req-153dbfa0-0c88-4742-8f6c-23d6f81da647 req-6ea1b4cd-c4ff-4f89-aa0f-75ebf8acca4d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:43 compute-2 nova_compute[226829]: 2026-01-31 08:29:43.251 226833 DEBUG oslo_concurrency.lockutils [req-153dbfa0-0c88-4742-8f6c-23d6f81da647 req-6ea1b4cd-c4ff-4f89-aa0f-75ebf8acca4d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:43 compute-2 nova_compute[226829]: 2026-01-31 08:29:43.252 226833 DEBUG nova.compute.manager [req-153dbfa0-0c88-4742-8f6c-23d6f81da647 req-6ea1b4cd-c4ff-4f89-aa0f-75ebf8acca4d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] No waiting events found dispatching network-vif-plugged-f71d80d1-5129-4397-82d8-2bebfc6c2cb2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:29:43 compute-2 nova_compute[226829]: 2026-01-31 08:29:43.252 226833 WARNING nova.compute.manager [req-153dbfa0-0c88-4742-8f6c-23d6f81da647 req-6ea1b4cd-c4ff-4f89-aa0f-75ebf8acca4d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Received unexpected event network-vif-plugged-f71d80d1-5129-4397-82d8-2bebfc6c2cb2 for instance with vm_state building and task_state spawning.
Jan 31 08:29:43 compute-2 nova_compute[226829]: 2026-01-31 08:29:43.260 226833 DEBUG nova.objects.instance [None req-95df8db2-f892-49c2-8ef4-c247a393eff8 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'flavor' on Instance uuid e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:29:43 compute-2 nova_compute[226829]: 2026-01-31 08:29:43.270 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 08:29:43 compute-2 nova_compute[226829]: 2026-01-31 08:29:43.364 226833 DEBUG oslo_concurrency.lockutils [req-790fb103-5b80-49e6-9d39-a87ab6b5b67c req-31581ef0-b7f8-4030-a5ab-a157f495c88b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:29:43 compute-2 nova_compute[226829]: 2026-01-31 08:29:43.561 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:29:43 compute-2 nova_compute[226829]: 2026-01-31 08:29:43.561 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:29:43 compute-2 nova_compute[226829]: 2026-01-31 08:29:43.561 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:29:43 compute-2 nova_compute[226829]: 2026-01-31 08:29:43.561 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:29:43 compute-2 nova_compute[226829]: 2026-01-31 08:29:43.684 226833 INFO nova.compute.manager [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Took 21.44 seconds to build instance.
Jan 31 08:29:43 compute-2 nova_compute[226829]: 2026-01-31 08:29:43.708 226833 DEBUG oslo_concurrency.lockutils [None req-95df8db2-f892-49c2-8ef4-c247a393eff8 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 3.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:29:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:43.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:29:43 compute-2 nova_compute[226829]: 2026-01-31 08:29:43.929 226833 DEBUG oslo_concurrency.lockutils [None req-de70a62b-75e7-4223-8dce-ef0de37d46d5 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:44 compute-2 ceph-mon[77282]: pgmap v2890: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 328 KiB/s rd, 1.7 MiB/s wr, 71 op/s
Jan 31 08:29:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:44.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:44 compute-2 nova_compute[226829]: 2026-01-31 08:29:44.468 226833 DEBUG oslo_concurrency.lockutils [None req-9485e9f9-6c90-4a46-81fb-a9071bde8b92 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "10816ede-cf43-4736-aba7-48389f607d30" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:44 compute-2 nova_compute[226829]: 2026-01-31 08:29:44.469 226833 DEBUG oslo_concurrency.lockutils [None req-9485e9f9-6c90-4a46-81fb-a9071bde8b92 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:44 compute-2 nova_compute[226829]: 2026-01-31 08:29:44.542 226833 INFO nova.compute.manager [None req-9485e9f9-6c90-4a46-81fb-a9071bde8b92 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Detaching volume c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4
Jan 31 08:29:45 compute-2 nova_compute[226829]: 2026-01-31 08:29:45.103 226833 INFO nova.virt.block_device [None req-9485e9f9-6c90-4a46-81fb-a9071bde8b92 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Attempting to driver detach volume c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4 from mountpoint /dev/vdb
Jan 31 08:29:45 compute-2 nova_compute[226829]: 2026-01-31 08:29:45.116 226833 DEBUG nova.virt.libvirt.driver [None req-9485e9f9-6c90-4a46-81fb-a9071bde8b92 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Attempting to detach device vdb from instance 10816ede-cf43-4736-aba7-48389f607d30 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 08:29:45 compute-2 nova_compute[226829]: 2026-01-31 08:29:45.117 226833 DEBUG nova.virt.libvirt.guest [None req-9485e9f9-6c90-4a46-81fb-a9071bde8b92 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:29:45 compute-2 nova_compute[226829]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:29:45 compute-2 nova_compute[226829]:   <source protocol="rbd" name="volumes/volume-c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4">
Jan 31 08:29:45 compute-2 nova_compute[226829]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:29:45 compute-2 nova_compute[226829]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:29:45 compute-2 nova_compute[226829]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:29:45 compute-2 nova_compute[226829]:   </source>
Jan 31 08:29:45 compute-2 nova_compute[226829]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:29:45 compute-2 nova_compute[226829]:   <serial>c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4</serial>
Jan 31 08:29:45 compute-2 nova_compute[226829]:   <shareable/>
Jan 31 08:29:45 compute-2 nova_compute[226829]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Jan 31 08:29:45 compute-2 nova_compute[226829]: </disk>
Jan 31 08:29:45 compute-2 nova_compute[226829]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:29:45 compute-2 nova_compute[226829]: 2026-01-31 08:29:45.127 226833 INFO nova.virt.libvirt.driver [None req-9485e9f9-6c90-4a46-81fb-a9071bde8b92 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Successfully detached device vdb from instance 10816ede-cf43-4736-aba7-48389f607d30 from the persistent domain config.
Jan 31 08:29:45 compute-2 nova_compute[226829]: 2026-01-31 08:29:45.128 226833 DEBUG nova.virt.libvirt.driver [None req-9485e9f9-6c90-4a46-81fb-a9071bde8b92 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 10816ede-cf43-4736-aba7-48389f607d30 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 08:29:45 compute-2 nova_compute[226829]: 2026-01-31 08:29:45.129 226833 DEBUG nova.virt.libvirt.guest [None req-9485e9f9-6c90-4a46-81fb-a9071bde8b92 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:29:45 compute-2 nova_compute[226829]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:29:45 compute-2 nova_compute[226829]:   <source protocol="rbd" name="volumes/volume-c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4">
Jan 31 08:29:45 compute-2 nova_compute[226829]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:29:45 compute-2 nova_compute[226829]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:29:45 compute-2 nova_compute[226829]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:29:45 compute-2 nova_compute[226829]:   </source>
Jan 31 08:29:45 compute-2 nova_compute[226829]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:29:45 compute-2 nova_compute[226829]:   <serial>c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4</serial>
Jan 31 08:29:45 compute-2 nova_compute[226829]:   <shareable/>
Jan 31 08:29:45 compute-2 nova_compute[226829]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Jan 31 08:29:45 compute-2 nova_compute[226829]: </disk>
Jan 31 08:29:45 compute-2 nova_compute[226829]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:29:45 compute-2 nova_compute[226829]: 2026-01-31 08:29:45.189 226833 DEBUG nova.virt.libvirt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Received event <DeviceRemovedEvent: 1769848185.1887178, 10816ede-cf43-4736-aba7-48389f607d30 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 08:29:45 compute-2 nova_compute[226829]: 2026-01-31 08:29:45.192 226833 DEBUG nova.virt.libvirt.driver [None req-9485e9f9-6c90-4a46-81fb-a9071bde8b92 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 10816ede-cf43-4736-aba7-48389f607d30 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 08:29:45 compute-2 nova_compute[226829]: 2026-01-31 08:29:45.194 226833 INFO nova.virt.libvirt.driver [None req-9485e9f9-6c90-4a46-81fb-a9071bde8b92 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Successfully detached device vdb from instance 10816ede-cf43-4736-aba7-48389f607d30 from the live domain config.
Jan 31 08:29:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:29:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:45.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:29:46 compute-2 nova_compute[226829]: 2026-01-31 08:29:46.033 226833 DEBUG nova.objects.instance [None req-9485e9f9-6c90-4a46-81fb-a9071bde8b92 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'flavor' on Instance uuid 10816ede-cf43-4736-aba7-48389f607d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:29:46 compute-2 nova_compute[226829]: 2026-01-31 08:29:46.067 226833 DEBUG nova.network.neutron [req-506172f8-053d-4ce7-950a-6f3bf1b619bc req-d551bf99-3d22-455e-9955-90c84725d803 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Updated VIF entry in instance network info cache for port aeb09486-b68f-4fa4-a410-dd0ffaf49b05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:29:46 compute-2 nova_compute[226829]: 2026-01-31 08:29:46.067 226833 DEBUG nova.network.neutron [req-506172f8-053d-4ce7-950a-6f3bf1b619bc req-d551bf99-3d22-455e-9955-90c84725d803 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Updating instance_info_cache with network_info: [{"id": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "address": "fa:16:3e:ec:78:f9", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaeb09486-b6", "ovs_interfaceid": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:29:46 compute-2 ceph-mon[77282]: pgmap v2891: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 572 KiB/s rd, 874 KiB/s wr, 64 op/s
Jan 31 08:29:46 compute-2 nova_compute[226829]: 2026-01-31 08:29:46.094 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:46 compute-2 nova_compute[226829]: 2026-01-31 08:29:46.219 226833 DEBUG oslo_concurrency.lockutils [None req-9485e9f9-6c90-4a46-81fb-a9071bde8b92 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:46 compute-2 nova_compute[226829]: 2026-01-31 08:29:46.337 226833 DEBUG oslo_concurrency.lockutils [req-506172f8-053d-4ce7-950a-6f3bf1b619bc req-d551bf99-3d22-455e-9955-90c84725d803 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:29:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:46.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:46 compute-2 nova_compute[226829]: 2026-01-31 08:29:46.885 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:47 compute-2 nova_compute[226829]: 2026-01-31 08:29:47.506 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Updating instance_info_cache with network_info: [{"id": "a3ff723d-9e34-45ef-881d-6534126ae169", "address": "fa:16:3e:1b:c5:21", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ff723d-9e", "ovs_interfaceid": "a3ff723d-9e34-45ef-881d-6534126ae169", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:29:47 compute-2 nova_compute[226829]: 2026-01-31 08:29:47.717 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:29:47 compute-2 nova_compute[226829]: 2026-01-31 08:29:47.717 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:29:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:47.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:48 compute-2 ceph-mon[77282]: pgmap v2892: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 41 KiB/s wr, 90 op/s
Jan 31 08:29:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2416992503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:29:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:48.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:48 compute-2 nova_compute[226829]: 2026-01-31 08:29:48.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:49 compute-2 ceph-mon[77282]: pgmap v2893: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 38 KiB/s wr, 83 op/s
Jan 31 08:29:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3836140822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:29:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3395964674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:29:49 compute-2 nova_compute[226829]: 2026-01-31 08:29:49.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:49.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:50 compute-2 nova_compute[226829]: 2026-01-31 08:29:50.043 226833 DEBUG nova.compute.manager [req-4636e284-3921-4b68-9a99-6994734e9ac5 req-4c3f8440-6cb3-486d-a91d-ce9d1d8a6e9b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Received event network-changed-f71d80d1-5129-4397-82d8-2bebfc6c2cb2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:29:50 compute-2 nova_compute[226829]: 2026-01-31 08:29:50.044 226833 DEBUG nova.compute.manager [req-4636e284-3921-4b68-9a99-6994734e9ac5 req-4c3f8440-6cb3-486d-a91d-ce9d1d8a6e9b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Refreshing instance network info cache due to event network-changed-f71d80d1-5129-4397-82d8-2bebfc6c2cb2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:29:50 compute-2 nova_compute[226829]: 2026-01-31 08:29:50.044 226833 DEBUG oslo_concurrency.lockutils [req-4636e284-3921-4b68-9a99-6994734e9ac5 req-4c3f8440-6cb3-486d-a91d-ce9d1d8a6e9b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:29:50 compute-2 nova_compute[226829]: 2026-01-31 08:29:50.044 226833 DEBUG oslo_concurrency.lockutils [req-4636e284-3921-4b68-9a99-6994734e9ac5 req-4c3f8440-6cb3-486d-a91d-ce9d1d8a6e9b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:29:50 compute-2 nova_compute[226829]: 2026-01-31 08:29:50.044 226833 DEBUG nova.network.neutron [req-4636e284-3921-4b68-9a99-6994734e9ac5 req-4c3f8440-6cb3-486d-a91d-ce9d1d8a6e9b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Refreshing network info cache for port f71d80d1-5129-4397-82d8-2bebfc6c2cb2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:29:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:50.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:50 compute-2 nova_compute[226829]: 2026-01-31 08:29:50.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:51 compute-2 nova_compute[226829]: 2026-01-31 08:29:51.096 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:51 compute-2 ceph-mon[77282]: pgmap v2894: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 34 KiB/s wr, 75 op/s
Jan 31 08:29:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2309520996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:29:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:29:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:51.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:29:51 compute-2 nova_compute[226829]: 2026-01-31 08:29:51.887 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:52 compute-2 sudo[304065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:29:52 compute-2 sudo[304065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:52 compute-2 sudo[304065]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:52 compute-2 nova_compute[226829]: 2026-01-31 08:29:52.376 226833 DEBUG nova.network.neutron [req-4636e284-3921-4b68-9a99-6994734e9ac5 req-4c3f8440-6cb3-486d-a91d-ce9d1d8a6e9b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Updated VIF entry in instance network info cache for port f71d80d1-5129-4397-82d8-2bebfc6c2cb2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:29:52 compute-2 nova_compute[226829]: 2026-01-31 08:29:52.377 226833 DEBUG nova.network.neutron [req-4636e284-3921-4b68-9a99-6994734e9ac5 req-4c3f8440-6cb3-486d-a91d-ce9d1d8a6e9b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Updating instance_info_cache with network_info: [{"id": "f71d80d1-5129-4397-82d8-2bebfc6c2cb2", "address": "fa:16:3e:fe:8a:82", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf71d80d1-51", "ovs_interfaceid": "f71d80d1-5129-4397-82d8-2bebfc6c2cb2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:29:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:52.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:52 compute-2 sudo[304098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:29:52 compute-2 sudo[304098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:52 compute-2 sudo[304098]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:52 compute-2 podman[304089]: 2026-01-31 08:29:52.445542843 +0000 UTC m=+0.086826517 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:29:52 compute-2 nova_compute[226829]: 2026-01-31 08:29:52.725 226833 DEBUG oslo_concurrency.lockutils [req-4636e284-3921-4b68-9a99-6994734e9ac5 req-4c3f8440-6cb3-486d-a91d-ce9d1d8a6e9b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:29:53 compute-2 ceph-mon[77282]: pgmap v2895: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 33 KiB/s wr, 70 op/s
Jan 31 08:29:53 compute-2 nova_compute[226829]: 2026-01-31 08:29:53.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:53 compute-2 nova_compute[226829]: 2026-01-31 08:29:53.759 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:53.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:53 compute-2 nova_compute[226829]: 2026-01-31 08:29:53.760 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:53 compute-2 nova_compute[226829]: 2026-01-31 08:29:53.762 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:53 compute-2 nova_compute[226829]: 2026-01-31 08:29:53.763 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:29:53 compute-2 nova_compute[226829]: 2026-01-31 08:29:53.764 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:29:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:29:54 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3050272016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:29:54 compute-2 nova_compute[226829]: 2026-01-31 08:29:54.231 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:29:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1075930529' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:29:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1075930529' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:29:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3050272016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:29:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:54.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:54 compute-2 nova_compute[226829]: 2026-01-31 08:29:54.456 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:29:54 compute-2 nova_compute[226829]: 2026-01-31 08:29:54.457 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:29:54 compute-2 nova_compute[226829]: 2026-01-31 08:29:54.460 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000aa as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:29:54 compute-2 nova_compute[226829]: 2026-01-31 08:29:54.460 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000aa as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:29:54 compute-2 nova_compute[226829]: 2026-01-31 08:29:54.464 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:29:54 compute-2 nova_compute[226829]: 2026-01-31 08:29:54.464 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:29:54 compute-2 nova_compute[226829]: 2026-01-31 08:29:54.688 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:29:54 compute-2 nova_compute[226829]: 2026-01-31 08:29:54.689 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3591MB free_disk=20.69361114501953GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:29:54 compute-2 nova_compute[226829]: 2026-01-31 08:29:54.689 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:54 compute-2 nova_compute[226829]: 2026-01-31 08:29:54.690 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:55 compute-2 nova_compute[226829]: 2026-01-31 08:29:55.083 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:29:55 compute-2 nova_compute[226829]: 2026-01-31 08:29:55.083 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 10816ede-cf43-4736-aba7-48389f607d30 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:29:55 compute-2 nova_compute[226829]: 2026-01-31 08:29:55.084 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:29:55 compute-2 nova_compute[226829]: 2026-01-31 08:29:55.084 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:29:55 compute-2 nova_compute[226829]: 2026-01-31 08:29:55.085 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:29:55 compute-2 nova_compute[226829]: 2026-01-31 08:29:55.181 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:29:55 compute-2 ceph-mon[77282]: pgmap v2896: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 64 op/s
Jan 31 08:29:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:29:55 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1243262519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:29:55 compute-2 nova_compute[226829]: 2026-01-31 08:29:55.611 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:29:55 compute-2 nova_compute[226829]: 2026-01-31 08:29:55.617 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:29:55 compute-2 nova_compute[226829]: 2026-01-31 08:29:55.646 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:29:55 compute-2 nova_compute[226829]: 2026-01-31 08:29:55.688 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:29:55 compute-2 nova_compute[226829]: 2026-01-31 08:29:55.689 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.999s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:55.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:56 compute-2 nova_compute[226829]: 2026-01-31 08:29:56.099 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:56 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1243262519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:29:56 compute-2 ovn_controller[133834]: 2026-01-31T08:29:56Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fe:8a:82 10.100.0.10
Jan 31 08:29:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:29:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:56.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:29:56 compute-2 ovn_controller[133834]: 2026-01-31T08:29:56Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fe:8a:82 10.100.0.10
Jan 31 08:29:57 compute-2 nova_compute[226829]: 2026-01-31 08:29:57.236 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:57 compute-2 ceph-mon[77282]: pgmap v2897: 305 pgs: 305 active+clean; 746 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.7 MiB/s wr, 84 op/s
Jan 31 08:29:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:57.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:58.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:58 compute-2 ceph-osd[79942]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 31 08:29:59 compute-2 ceph-mon[77282]: pgmap v2898: 305 pgs: 305 active+clean; 751 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 56 op/s
Jan 31 08:29:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:29:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:59.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:00.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:00 compute-2 ceph-mon[77282]: overall HEALTH_OK
Jan 31 08:30:01 compute-2 nova_compute[226829]: 2026-01-31 08:30:01.103 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:01.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:01 compute-2 ceph-mon[77282]: pgmap v2899: 305 pgs: 305 active+clean; 759 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 75 op/s
Jan 31 08:30:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:01.909 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=69, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=68) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:30:01 compute-2 nova_compute[226829]: 2026-01-31 08:30:01.910 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:01.911 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:30:02 compute-2 nova_compute[226829]: 2026-01-31 08:30:02.240 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:02.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:03 compute-2 podman[304191]: 2026-01-31 08:30:03.18273259 +0000 UTC m=+0.069644070 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:30:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:03.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:03 compute-2 ceph-mon[77282]: pgmap v2900: 305 pgs: 305 active+clean; 759 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 78 op/s
Jan 31 08:30:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:03.914 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '69'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:30:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:04.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:30:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:05.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:30:05 compute-2 ceph-mon[77282]: pgmap v2901: 305 pgs: 305 active+clean; 759 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 78 op/s
Jan 31 08:30:06 compute-2 nova_compute[226829]: 2026-01-31 08:30:06.107 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:06.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:06.906 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:06.906 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:06.907 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:07 compute-2 nova_compute[226829]: 2026-01-31 08:30:07.242 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:07 compute-2 nova_compute[226829]: 2026-01-31 08:30:07.689 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:07 compute-2 nova_compute[226829]: 2026-01-31 08:30:07.690 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:30:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:07.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:07 compute-2 nova_compute[226829]: 2026-01-31 08:30:07.796 226833 DEBUG oslo_concurrency.lockutils [None req-ef84c9f3-dc62-4876-9c96-2ee877fa9fad 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:07 compute-2 nova_compute[226829]: 2026-01-31 08:30:07.796 226833 DEBUG oslo_concurrency.lockutils [None req-ef84c9f3-dc62-4876-9c96-2ee877fa9fad 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:08 compute-2 nova_compute[226829]: 2026-01-31 08:30:08.159 226833 DEBUG nova.objects.instance [None req-ef84c9f3-dc62-4876-9c96-2ee877fa9fad 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lazy-loading 'flavor' on Instance uuid b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:30:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:08.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:08 compute-2 ceph-mon[77282]: pgmap v2902: 305 pgs: 305 active+clean; 779 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.8 MiB/s wr, 105 op/s
Jan 31 08:30:08 compute-2 nova_compute[226829]: 2026-01-31 08:30:08.916 226833 DEBUG oslo_concurrency.lockutils [None req-ef84c9f3-dc62-4876-9c96-2ee877fa9fad 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 1.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:09 compute-2 nova_compute[226829]: 2026-01-31 08:30:09.693 226833 DEBUG oslo_concurrency.lockutils [None req-ef84c9f3-dc62-4876-9c96-2ee877fa9fad 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:09 compute-2 nova_compute[226829]: 2026-01-31 08:30:09.693 226833 DEBUG oslo_concurrency.lockutils [None req-ef84c9f3-dc62-4876-9c96-2ee877fa9fad 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:09 compute-2 nova_compute[226829]: 2026-01-31 08:30:09.693 226833 INFO nova.compute.manager [None req-ef84c9f3-dc62-4876-9c96-2ee877fa9fad 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Attaching volume 5279c0c3-4a03-4ce2-a469-4152d8ade465 to /dev/vdb
Jan 31 08:30:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:09.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:09 compute-2 nova_compute[226829]: 2026-01-31 08:30:09.881 226833 DEBUG os_brick.utils [None req-ef84c9f3-dc62-4876-9c96-2ee877fa9fad 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:30:09 compute-2 nova_compute[226829]: 2026-01-31 08:30:09.884 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:09 compute-2 nova_compute[226829]: 2026-01-31 08:30:09.896 236868 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:09 compute-2 nova_compute[226829]: 2026-01-31 08:30:09.896 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[8bbcd00b-4d51-494f-babb-d4c7c4ffc402]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:09 compute-2 nova_compute[226829]: 2026-01-31 08:30:09.897 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:09 compute-2 nova_compute[226829]: 2026-01-31 08:30:09.904 236868 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:09 compute-2 nova_compute[226829]: 2026-01-31 08:30:09.905 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[c3bf61ea-0844-4f79-9495-3ea8b7640d11]: (4, ('InitiatorName=iqn.1994-05.com.redhat:70a4e945afb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:09 compute-2 nova_compute[226829]: 2026-01-31 08:30:09.906 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:09 compute-2 nova_compute[226829]: 2026-01-31 08:30:09.914 236868 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:09 compute-2 nova_compute[226829]: 2026-01-31 08:30:09.914 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[1ba42050-598d-423f-90fb-4573ed9ffb45]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:09 compute-2 nova_compute[226829]: 2026-01-31 08:30:09.915 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[8e80267d-6f22-450f-9d31-6ce70f76b1bf]: (4, 'd14f084b-ec77-4fba-801f-103494d34b3a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:09 compute-2 nova_compute[226829]: 2026-01-31 08:30:09.916 226833 DEBUG oslo_concurrency.processutils [None req-ef84c9f3-dc62-4876-9c96-2ee877fa9fad 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:09 compute-2 nova_compute[226829]: 2026-01-31 08:30:09.944 226833 DEBUG oslo_concurrency.processutils [None req-ef84c9f3-dc62-4876-9c96-2ee877fa9fad 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:09 compute-2 nova_compute[226829]: 2026-01-31 08:30:09.947 226833 DEBUG os_brick.initiator.connectors.lightos [None req-ef84c9f3-dc62-4876-9c96-2ee877fa9fad 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:30:09 compute-2 nova_compute[226829]: 2026-01-31 08:30:09.948 226833 DEBUG os_brick.initiator.connectors.lightos [None req-ef84c9f3-dc62-4876-9c96-2ee877fa9fad 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:30:09 compute-2 nova_compute[226829]: 2026-01-31 08:30:09.948 226833 DEBUG os_brick.initiator.connectors.lightos [None req-ef84c9f3-dc62-4876-9c96-2ee877fa9fad 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:30:09 compute-2 nova_compute[226829]: 2026-01-31 08:30:09.949 226833 DEBUG os_brick.utils [None req-ef84c9f3-dc62-4876-9c96-2ee877fa9fad 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] <== get_connector_properties: return (66ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:70a4e945afb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'd14f084b-ec77-4fba-801f-103494d34b3a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:30:09 compute-2 nova_compute[226829]: 2026-01-31 08:30:09.950 226833 DEBUG nova.virt.block_device [None req-ef84c9f3-dc62-4876-9c96-2ee877fa9fad 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Updating existing volume attachment record: 656fced6-5ce0-4274-82ee-e8e078dea578 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:30:10 compute-2 ceph-mon[77282]: pgmap v2903: 305 pgs: 305 active+clean; 805 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 592 KiB/s rd, 2.3 MiB/s wr, 89 op/s
Jan 31 08:30:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:10.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:11 compute-2 nova_compute[226829]: 2026-01-31 08:30:11.116 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:11 compute-2 nova_compute[226829]: 2026-01-31 08:30:11.482 226833 DEBUG nova.objects.instance [None req-ef84c9f3-dc62-4876-9c96-2ee877fa9fad 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lazy-loading 'flavor' on Instance uuid b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:30:11 compute-2 ceph-mon[77282]: pgmap v2904: 305 pgs: 305 active+clean; 805 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 485 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Jan 31 08:30:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1044056741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:30:11 compute-2 nova_compute[226829]: 2026-01-31 08:30:11.744 226833 DEBUG nova.virt.libvirt.driver [None req-ef84c9f3-dc62-4876-9c96-2ee877fa9fad 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Attempting to attach volume 5279c0c3-4a03-4ce2-a469-4152d8ade465 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 31 08:30:11 compute-2 nova_compute[226829]: 2026-01-31 08:30:11.746 226833 DEBUG nova.virt.libvirt.guest [None req-ef84c9f3-dc62-4876-9c96-2ee877fa9fad 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 08:30:11 compute-2 nova_compute[226829]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:30:11 compute-2 nova_compute[226829]:   <source protocol="rbd" name="volumes/volume-5279c0c3-4a03-4ce2-a469-4152d8ade465">
Jan 31 08:30:11 compute-2 nova_compute[226829]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:30:11 compute-2 nova_compute[226829]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:30:11 compute-2 nova_compute[226829]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:30:11 compute-2 nova_compute[226829]:   </source>
Jan 31 08:30:11 compute-2 nova_compute[226829]:   <auth username="openstack">
Jan 31 08:30:11 compute-2 nova_compute[226829]:     <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:30:11 compute-2 nova_compute[226829]:   </auth>
Jan 31 08:30:11 compute-2 nova_compute[226829]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:30:11 compute-2 nova_compute[226829]:   <serial>5279c0c3-4a03-4ce2-a469-4152d8ade465</serial>
Jan 31 08:30:11 compute-2 nova_compute[226829]: </disk>
Jan 31 08:30:11 compute-2 nova_compute[226829]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #136. Immutable memtables: 0.
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:30:11.774553) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 136
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848211774677, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2458, "num_deletes": 254, "total_data_size": 5684667, "memory_usage": 5756584, "flush_reason": "Manual Compaction"}
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #137: started
Jan 31 08:30:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:30:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:11.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848211812591, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 137, "file_size": 3723986, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66556, "largest_seqno": 69008, "table_properties": {"data_size": 3714067, "index_size": 6220, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21444, "raw_average_key_size": 20, "raw_value_size": 3693969, "raw_average_value_size": 3596, "num_data_blocks": 270, "num_entries": 1027, "num_filter_entries": 1027, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847991, "oldest_key_time": 1769847991, "file_creation_time": 1769848211, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 38094 microseconds, and 9306 cpu microseconds.
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:30:11.812647) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #137: 3723986 bytes OK
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:30:11.812664) [db/memtable_list.cc:519] [default] Level-0 commit table #137 started
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:30:11.814785) [db/memtable_list.cc:722] [default] Level-0 commit table #137: memtable #1 done
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:30:11.814802) EVENT_LOG_v1 {"time_micros": 1769848211814796, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:30:11.814821) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 5673891, prev total WAL file size 5673891, number of live WAL files 2.
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000133.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:30:11.815932) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [137(3636KB)], [135(10MB)]
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848211815980, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [137], "files_L6": [135], "score": -1, "input_data_size": 14375846, "oldest_snapshot_seqno": -1}
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #138: 9186 keys, 12495394 bytes, temperature: kUnknown
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848211920238, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 138, "file_size": 12495394, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12435209, "index_size": 36136, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 241554, "raw_average_key_size": 26, "raw_value_size": 12272844, "raw_average_value_size": 1336, "num_data_blocks": 1384, "num_entries": 9186, "num_filter_entries": 9186, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769848211, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 138, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:30:11.920526) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 12495394 bytes
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:30:11.921846) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.8 rd, 119.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 10.2 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(7.2) write-amplify(3.4) OK, records in: 9714, records dropped: 528 output_compression: NoCompression
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:30:11.921865) EVENT_LOG_v1 {"time_micros": 1769848211921856, "job": 86, "event": "compaction_finished", "compaction_time_micros": 104346, "compaction_time_cpu_micros": 27368, "output_level": 6, "num_output_files": 1, "total_output_size": 12495394, "num_input_records": 9714, "num_output_records": 9186, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848211922317, "job": 86, "event": "table_file_deletion", "file_number": 137}
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000135.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848211923318, "job": 86, "event": "table_file_deletion", "file_number": 135}
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:30:11.815865) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:30:11.923395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:30:11.923400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:30:11.923402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:30:11.923404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:30:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:30:11.923405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:30:11 compute-2 nova_compute[226829]: 2026-01-31 08:30:11.996 226833 DEBUG nova.virt.libvirt.driver [None req-ef84c9f3-dc62-4876-9c96-2ee877fa9fad 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:30:11 compute-2 nova_compute[226829]: 2026-01-31 08:30:11.997 226833 DEBUG nova.virt.libvirt.driver [None req-ef84c9f3-dc62-4876-9c96-2ee877fa9fad 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:30:11 compute-2 nova_compute[226829]: 2026-01-31 08:30:11.997 226833 DEBUG nova.virt.libvirt.driver [None req-ef84c9f3-dc62-4876-9c96-2ee877fa9fad 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:30:11 compute-2 nova_compute[226829]: 2026-01-31 08:30:11.997 226833 DEBUG nova.virt.libvirt.driver [None req-ef84c9f3-dc62-4876-9c96-2ee877fa9fad 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] No VIF found with MAC fa:16:3e:fe:8a:82, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:30:12 compute-2 nova_compute[226829]: 2026-01-31 08:30:12.244 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:12 compute-2 nova_compute[226829]: 2026-01-31 08:30:12.372 226833 DEBUG oslo_concurrency.lockutils [None req-ef84c9f3-dc62-4876-9c96-2ee877fa9fad 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:12.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:12 compute-2 sudo[304243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:30:12 compute-2 sudo[304243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:12 compute-2 sudo[304243]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:12 compute-2 sudo[304268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:30:12 compute-2 sudo[304268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:12 compute-2 sudo[304268]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:30:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:13.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:30:13 compute-2 ceph-mon[77282]: pgmap v2905: 305 pgs: 305 active+clean; 805 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 31 08:30:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2326995779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:14.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/4019013427' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:30:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:15 compute-2 nova_compute[226829]: 2026-01-31 08:30:15.601 226833 DEBUG oslo_concurrency.lockutils [None req-3b8e5014-63bd-4572-9fdb-7e526dbbf978 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:15 compute-2 nova_compute[226829]: 2026-01-31 08:30:15.602 226833 DEBUG oslo_concurrency.lockutils [None req-3b8e5014-63bd-4572-9fdb-7e526dbbf978 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:15 compute-2 nova_compute[226829]: 2026-01-31 08:30:15.700 226833 INFO nova.compute.manager [None req-3b8e5014-63bd-4572-9fdb-7e526dbbf978 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Detaching volume 5279c0c3-4a03-4ce2-a469-4152d8ade465
Jan 31 08:30:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:15.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:16 compute-2 ceph-mon[77282]: pgmap v2906: 305 pgs: 305 active+clean; 805 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 31 08:30:16 compute-2 nova_compute[226829]: 2026-01-31 08:30:16.119 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:16 compute-2 nova_compute[226829]: 2026-01-31 08:30:16.264 226833 INFO nova.virt.block_device [None req-3b8e5014-63bd-4572-9fdb-7e526dbbf978 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Attempting to driver detach volume 5279c0c3-4a03-4ce2-a469-4152d8ade465 from mountpoint /dev/vdb
Jan 31 08:30:16 compute-2 nova_compute[226829]: 2026-01-31 08:30:16.272 226833 DEBUG nova.virt.libvirt.driver [None req-3b8e5014-63bd-4572-9fdb-7e526dbbf978 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Attempting to detach device vdb from instance b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 08:30:16 compute-2 nova_compute[226829]: 2026-01-31 08:30:16.272 226833 DEBUG nova.virt.libvirt.guest [None req-3b8e5014-63bd-4572-9fdb-7e526dbbf978 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:30:16 compute-2 nova_compute[226829]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:30:16 compute-2 nova_compute[226829]:   <source protocol="rbd" name="volumes/volume-5279c0c3-4a03-4ce2-a469-4152d8ade465">
Jan 31 08:30:16 compute-2 nova_compute[226829]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:30:16 compute-2 nova_compute[226829]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:30:16 compute-2 nova_compute[226829]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:30:16 compute-2 nova_compute[226829]:   </source>
Jan 31 08:30:16 compute-2 nova_compute[226829]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:30:16 compute-2 nova_compute[226829]:   <serial>5279c0c3-4a03-4ce2-a469-4152d8ade465</serial>
Jan 31 08:30:16 compute-2 nova_compute[226829]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 08:30:16 compute-2 nova_compute[226829]: </disk>
Jan 31 08:30:16 compute-2 nova_compute[226829]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:30:16 compute-2 nova_compute[226829]: 2026-01-31 08:30:16.278 226833 INFO nova.virt.libvirt.driver [None req-3b8e5014-63bd-4572-9fdb-7e526dbbf978 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Successfully detached device vdb from instance b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5 from the persistent domain config.
Jan 31 08:30:16 compute-2 nova_compute[226829]: 2026-01-31 08:30:16.278 226833 DEBUG nova.virt.libvirt.driver [None req-3b8e5014-63bd-4572-9fdb-7e526dbbf978 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 08:30:16 compute-2 nova_compute[226829]: 2026-01-31 08:30:16.278 226833 DEBUG nova.virt.libvirt.guest [None req-3b8e5014-63bd-4572-9fdb-7e526dbbf978 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:30:16 compute-2 nova_compute[226829]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:30:16 compute-2 nova_compute[226829]:   <source protocol="rbd" name="volumes/volume-5279c0c3-4a03-4ce2-a469-4152d8ade465">
Jan 31 08:30:16 compute-2 nova_compute[226829]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:30:16 compute-2 nova_compute[226829]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:30:16 compute-2 nova_compute[226829]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:30:16 compute-2 nova_compute[226829]:   </source>
Jan 31 08:30:16 compute-2 nova_compute[226829]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:30:16 compute-2 nova_compute[226829]:   <serial>5279c0c3-4a03-4ce2-a469-4152d8ade465</serial>
Jan 31 08:30:16 compute-2 nova_compute[226829]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 08:30:16 compute-2 nova_compute[226829]: </disk>
Jan 31 08:30:16 compute-2 nova_compute[226829]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:30:16 compute-2 nova_compute[226829]: 2026-01-31 08:30:16.376 226833 DEBUG nova.virt.libvirt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Received event <DeviceRemovedEvent: 1769848216.3762448, b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 08:30:16 compute-2 nova_compute[226829]: 2026-01-31 08:30:16.377 226833 DEBUG nova.virt.libvirt.driver [None req-3b8e5014-63bd-4572-9fdb-7e526dbbf978 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 08:30:16 compute-2 nova_compute[226829]: 2026-01-31 08:30:16.379 226833 INFO nova.virt.libvirt.driver [None req-3b8e5014-63bd-4572-9fdb-7e526dbbf978 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Successfully detached device vdb from instance b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5 from the live domain config.
Jan 31 08:30:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:16.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:17 compute-2 nova_compute[226829]: 2026-01-31 08:30:17.092 226833 DEBUG nova.objects.instance [None req-3b8e5014-63bd-4572-9fdb-7e526dbbf978 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lazy-loading 'flavor' on Instance uuid b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:30:17 compute-2 nova_compute[226829]: 2026-01-31 08:30:17.245 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:17 compute-2 nova_compute[226829]: 2026-01-31 08:30:17.463 226833 DEBUG oslo_concurrency.lockutils [None req-3b8e5014-63bd-4572-9fdb-7e526dbbf978 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:17.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:18 compute-2 ceph-mon[77282]: pgmap v2907: 305 pgs: 305 active+clean; 805 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Jan 31 08:30:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/647630110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:30:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:18.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:30:18 compute-2 sudo[304298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:30:18 compute-2 sudo[304298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:18 compute-2 sudo[304298]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:18 compute-2 sudo[304323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:30:18 compute-2 sudo[304323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:18 compute-2 sudo[304323]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:18 compute-2 sudo[304348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:30:18 compute-2 sudo[304348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:18 compute-2 sudo[304348]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:18 compute-2 sudo[304373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:30:18 compute-2 sudo[304373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:19 compute-2 sudo[304373]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:19 compute-2 nova_compute[226829]: 2026-01-31 08:30:19.457 226833 DEBUG oslo_concurrency.lockutils [None req-04ca5eef-8f66-49c8-8d3e-b9ee257a41e6 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:19 compute-2 nova_compute[226829]: 2026-01-31 08:30:19.458 226833 DEBUG oslo_concurrency.lockutils [None req-04ca5eef-8f66-49c8-8d3e-b9ee257a41e6 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:19 compute-2 nova_compute[226829]: 2026-01-31 08:30:19.458 226833 DEBUG oslo_concurrency.lockutils [None req-04ca5eef-8f66-49c8-8d3e-b9ee257a41e6 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:19 compute-2 nova_compute[226829]: 2026-01-31 08:30:19.458 226833 DEBUG oslo_concurrency.lockutils [None req-04ca5eef-8f66-49c8-8d3e-b9ee257a41e6 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:19 compute-2 nova_compute[226829]: 2026-01-31 08:30:19.458 226833 DEBUG oslo_concurrency.lockutils [None req-04ca5eef-8f66-49c8-8d3e-b9ee257a41e6 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:19 compute-2 nova_compute[226829]: 2026-01-31 08:30:19.459 226833 INFO nova.compute.manager [None req-04ca5eef-8f66-49c8-8d3e-b9ee257a41e6 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Terminating instance
Jan 31 08:30:19 compute-2 nova_compute[226829]: 2026-01-31 08:30:19.460 226833 DEBUG nova.compute.manager [None req-04ca5eef-8f66-49c8-8d3e-b9ee257a41e6 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:30:19 compute-2 kernel: tapf71d80d1-51 (unregistering): left promiscuous mode
Jan 31 08:30:19 compute-2 NetworkManager[48999]: <info>  [1769848219.5271] device (tapf71d80d1-51): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:30:19 compute-2 nova_compute[226829]: 2026-01-31 08:30:19.532 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:19 compute-2 ovn_controller[133834]: 2026-01-31T08:30:19Z|00685|binding|INFO|Releasing lport f71d80d1-5129-4397-82d8-2bebfc6c2cb2 from this chassis (sb_readonly=0)
Jan 31 08:30:19 compute-2 ovn_controller[133834]: 2026-01-31T08:30:19Z|00686|binding|INFO|Setting lport f71d80d1-5129-4397-82d8-2bebfc6c2cb2 down in Southbound
Jan 31 08:30:19 compute-2 ovn_controller[133834]: 2026-01-31T08:30:19Z|00687|binding|INFO|Removing iface tapf71d80d1-51 ovn-installed in OVS
Jan 31 08:30:19 compute-2 nova_compute[226829]: 2026-01-31 08:30:19.534 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:19 compute-2 nova_compute[226829]: 2026-01-31 08:30:19.538 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:19 compute-2 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000aa.scope: Deactivated successfully.
Jan 31 08:30:19 compute-2 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000aa.scope: Consumed 15.770s CPU time.
Jan 31 08:30:19 compute-2 systemd-machined[195142]: Machine qemu-79-instance-000000aa terminated.
Jan 31 08:30:19 compute-2 nova_compute[226829]: 2026-01-31 08:30:19.693 226833 INFO nova.virt.libvirt.driver [-] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Instance destroyed successfully.
Jan 31 08:30:19 compute-2 nova_compute[226829]: 2026-01-31 08:30:19.694 226833 DEBUG nova.objects.instance [None req-04ca5eef-8f66-49c8-8d3e-b9ee257a41e6 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lazy-loading 'resources' on Instance uuid b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:30:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:19.794 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:8a:82 10.100.0.10'], port_security=['fa:16:3e:fe:8a:82 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e26a2af1-a850-4885-977e-596b6be13fb8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '418d5319c640455ab23850c0b0f24f92', 'neutron:revision_number': '4', 'neutron:security_group_ids': '80cc5615-b383-4fce-816b-fb758f7f671b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.194'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=068708bd-dd36-4d03-9d65-912eb9981ecc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=f71d80d1-5129-4397-82d8-2bebfc6c2cb2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:30:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:19.798 143841 INFO neutron.agent.ovn.metadata.agent [-] Port f71d80d1-5129-4397-82d8-2bebfc6c2cb2 in datapath e26a2af1-a850-4885-977e-596b6be13fb8 unbound from our chassis
Jan 31 08:30:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:30:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:19.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:30:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:19.802 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e26a2af1-a850-4885-977e-596b6be13fb8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:30:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:19.805 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2552db-05c2-4af9-8677-aac57d803851]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:19.807 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8 namespace which is not needed anymore
Jan 31 08:30:19 compute-2 nova_compute[226829]: 2026-01-31 08:30:19.884 226833 DEBUG nova.virt.libvirt.vif [None req-04ca5eef-8f66-49c8-8d3e-b9ee257a41e6 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:29:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-988559875',display_name='tempest-AttachVolumeNegativeTest-server-988559875',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-988559875',id=170,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE/zuybdYg042+ljf9K0OaAu4qMaJQTzi7pgwG187ctJeBYsiO0zjHd/uMi155olotZf3rtKbwgn46CPW17Eobd3fKqCBDHO/KeVlfYLbYoq0ah/sjrhyXG89J7gTR3NLQ==',key_name='tempest-keypair-71951735',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:29:43Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='418d5319c640455ab23850c0b0f24f92',ramdisk_id='',reservation_id='r-ikeiarwv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-562353674',owner_user_name='tempest-AttachVolumeNegativeTest-562353674-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:29:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='48d684de9ba340f48e249b4cce857bfa',uuid=b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f71d80d1-5129-4397-82d8-2bebfc6c2cb2", "address": "fa:16:3e:fe:8a:82", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf71d80d1-51", "ovs_interfaceid": "f71d80d1-5129-4397-82d8-2bebfc6c2cb2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:30:19 compute-2 nova_compute[226829]: 2026-01-31 08:30:19.885 226833 DEBUG nova.network.os_vif_util [None req-04ca5eef-8f66-49c8-8d3e-b9ee257a41e6 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Converting VIF {"id": "f71d80d1-5129-4397-82d8-2bebfc6c2cb2", "address": "fa:16:3e:fe:8a:82", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf71d80d1-51", "ovs_interfaceid": "f71d80d1-5129-4397-82d8-2bebfc6c2cb2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:30:19 compute-2 nova_compute[226829]: 2026-01-31 08:30:19.890 226833 DEBUG nova.network.os_vif_util [None req-04ca5eef-8f66-49c8-8d3e-b9ee257a41e6 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fe:8a:82,bridge_name='br-int',has_traffic_filtering=True,id=f71d80d1-5129-4397-82d8-2bebfc6c2cb2,network=Network(e26a2af1-a850-4885-977e-596b6be13fb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf71d80d1-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:30:19 compute-2 nova_compute[226829]: 2026-01-31 08:30:19.891 226833 DEBUG os_vif [None req-04ca5eef-8f66-49c8-8d3e-b9ee257a41e6 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fe:8a:82,bridge_name='br-int',has_traffic_filtering=True,id=f71d80d1-5129-4397-82d8-2bebfc6c2cb2,network=Network(e26a2af1-a850-4885-977e-596b6be13fb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf71d80d1-51') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:30:19 compute-2 nova_compute[226829]: 2026-01-31 08:30:19.896 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:19 compute-2 nova_compute[226829]: 2026-01-31 08:30:19.897 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf71d80d1-51, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:30:19 compute-2 nova_compute[226829]: 2026-01-31 08:30:19.902 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:19 compute-2 nova_compute[226829]: 2026-01-31 08:30:19.904 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:30:19 compute-2 nova_compute[226829]: 2026-01-31 08:30:19.910 226833 INFO os_vif [None req-04ca5eef-8f66-49c8-8d3e-b9ee257a41e6 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fe:8a:82,bridge_name='br-int',has_traffic_filtering=True,id=f71d80d1-5129-4397-82d8-2bebfc6c2cb2,network=Network(e26a2af1-a850-4885-977e-596b6be13fb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf71d80d1-51')
Jan 31 08:30:19 compute-2 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[304039]: [NOTICE]   (304043) : haproxy version is 2.8.14-c23fe91
Jan 31 08:30:19 compute-2 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[304039]: [NOTICE]   (304043) : path to executable is /usr/sbin/haproxy
Jan 31 08:30:19 compute-2 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[304039]: [WARNING]  (304043) : Exiting Master process...
Jan 31 08:30:19 compute-2 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[304039]: [ALERT]    (304043) : Current worker (304045) exited with code 143 (Terminated)
Jan 31 08:30:19 compute-2 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[304039]: [WARNING]  (304043) : All workers exited. Exiting... (0)
Jan 31 08:30:19 compute-2 systemd[1]: libpod-4b84882824dfc1902335537110749f0beff763205a4b4063435ebca34c3fda6c.scope: Deactivated successfully.
Jan 31 08:30:19 compute-2 podman[304465]: 2026-01-31 08:30:19.936645867 +0000 UTC m=+0.046601496 container died 4b84882824dfc1902335537110749f0beff763205a4b4063435ebca34c3fda6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:30:19 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4b84882824dfc1902335537110749f0beff763205a4b4063435ebca34c3fda6c-userdata-shm.mount: Deactivated successfully.
Jan 31 08:30:19 compute-2 systemd[1]: var-lib-containers-storage-overlay-3f8992676e1ba9fa737826a9be0781b520d2204979ea10c38c6353861ef0880e-merged.mount: Deactivated successfully.
Jan 31 08:30:19 compute-2 podman[304465]: 2026-01-31 08:30:19.998205138 +0000 UTC m=+0.108160757 container cleanup 4b84882824dfc1902335537110749f0beff763205a4b4063435ebca34c3fda6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:30:20 compute-2 systemd[1]: libpod-conmon-4b84882824dfc1902335537110749f0beff763205a4b4063435ebca34c3fda6c.scope: Deactivated successfully.
Jan 31 08:30:20 compute-2 podman[304512]: 2026-01-31 08:30:20.048076021 +0000 UTC m=+0.034949910 container remove 4b84882824dfc1902335537110749f0beff763205a4b4063435ebca34c3fda6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:30:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:20.052 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[51a6c6e1-9d66-49c3-96ae-a1a53ac6251d]: (4, ('Sat Jan 31 08:30:19 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8 (4b84882824dfc1902335537110749f0beff763205a4b4063435ebca34c3fda6c)\n4b84882824dfc1902335537110749f0beff763205a4b4063435ebca34c3fda6c\nSat Jan 31 08:30:20 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8 (4b84882824dfc1902335537110749f0beff763205a4b4063435ebca34c3fda6c)\n4b84882824dfc1902335537110749f0beff763205a4b4063435ebca34c3fda6c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:20.054 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e0fb4862-b004-4618-8c68-435a07f10b37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:20.055 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape26a2af1-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:30:20 compute-2 nova_compute[226829]: 2026-01-31 08:30:20.080 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:20 compute-2 kernel: tape26a2af1-a0: left promiscuous mode
Jan 31 08:30:20 compute-2 nova_compute[226829]: 2026-01-31 08:30:20.086 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:20.089 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[21ad000e-867e-4328-99b6-6b9cd4a579f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:20.104 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[acc71f3e-9336-462c-8160-1ed745ac03d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:20.106 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[13045aaa-6db4-4077-adb6-17b7e445974e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:20.118 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[280d6c9c-3f7a-4ed1-a842-6d7913e5ee31]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 843103, 'reachable_time': 38216, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304526, 'error': None, 'target': 'ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:20 compute-2 systemd[1]: run-netns-ovnmeta\x2de26a2af1\x2da850\x2d4885\x2d977e\x2d596b6be13fb8.mount: Deactivated successfully.
Jan 31 08:30:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:20.122 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:30:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:20.122 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[55355da0-5c08-4a59-9296-4670de3481b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:20 compute-2 ceph-mon[77282]: pgmap v2908: 305 pgs: 305 active+clean; 805 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 1.1 MiB/s wr, 18 op/s
Jan 31 08:30:20 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:30:20 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:30:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4186738958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:20 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:30:20 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:30:20 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:30:20 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:30:20 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:30:20 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:30:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:20.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:20 compute-2 nova_compute[226829]: 2026-01-31 08:30:20.573 226833 INFO nova.virt.libvirt.driver [None req-04ca5eef-8f66-49c8-8d3e-b9ee257a41e6 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Deleting instance files /var/lib/nova/instances/b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5_del
Jan 31 08:30:20 compute-2 nova_compute[226829]: 2026-01-31 08:30:20.575 226833 INFO nova.virt.libvirt.driver [None req-04ca5eef-8f66-49c8-8d3e-b9ee257a41e6 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Deletion of /var/lib/nova/instances/b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5_del complete
Jan 31 08:30:20 compute-2 nova_compute[226829]: 2026-01-31 08:30:20.998 226833 INFO nova.compute.manager [None req-04ca5eef-8f66-49c8-8d3e-b9ee257a41e6 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Took 1.54 seconds to destroy the instance on the hypervisor.
Jan 31 08:30:20 compute-2 nova_compute[226829]: 2026-01-31 08:30:20.999 226833 DEBUG oslo.service.loopingcall [None req-04ca5eef-8f66-49c8-8d3e-b9ee257a41e6 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:30:21 compute-2 nova_compute[226829]: 2026-01-31 08:30:20.999 226833 DEBUG nova.compute.manager [-] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:30:21 compute-2 nova_compute[226829]: 2026-01-31 08:30:21.000 226833 DEBUG nova.network.neutron [-] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:30:21 compute-2 ceph-mon[77282]: pgmap v2909: 305 pgs: 305 active+clean; 805 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 2.0 KiB/s wr, 11 op/s
Jan 31 08:30:21 compute-2 nova_compute[226829]: 2026-01-31 08:30:21.713 226833 DEBUG nova.compute.manager [req-d3e96305-e0c8-41a3-94e5-844763f74194 req-7070e98b-8577-427b-8502-04c30441a7b7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Received event network-vif-unplugged-f71d80d1-5129-4397-82d8-2bebfc6c2cb2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:30:21 compute-2 nova_compute[226829]: 2026-01-31 08:30:21.713 226833 DEBUG oslo_concurrency.lockutils [req-d3e96305-e0c8-41a3-94e5-844763f74194 req-7070e98b-8577-427b-8502-04c30441a7b7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:21 compute-2 nova_compute[226829]: 2026-01-31 08:30:21.713 226833 DEBUG oslo_concurrency.lockutils [req-d3e96305-e0c8-41a3-94e5-844763f74194 req-7070e98b-8577-427b-8502-04c30441a7b7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:21 compute-2 nova_compute[226829]: 2026-01-31 08:30:21.714 226833 DEBUG oslo_concurrency.lockutils [req-d3e96305-e0c8-41a3-94e5-844763f74194 req-7070e98b-8577-427b-8502-04c30441a7b7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:21 compute-2 nova_compute[226829]: 2026-01-31 08:30:21.714 226833 DEBUG nova.compute.manager [req-d3e96305-e0c8-41a3-94e5-844763f74194 req-7070e98b-8577-427b-8502-04c30441a7b7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] No waiting events found dispatching network-vif-unplugged-f71d80d1-5129-4397-82d8-2bebfc6c2cb2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:30:21 compute-2 nova_compute[226829]: 2026-01-31 08:30:21.714 226833 DEBUG nova.compute.manager [req-d3e96305-e0c8-41a3-94e5-844763f74194 req-7070e98b-8577-427b-8502-04c30441a7b7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Received event network-vif-unplugged-f71d80d1-5129-4397-82d8-2bebfc6c2cb2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:30:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:30:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:21.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:30:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1662883223' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:30:22 compute-2 nova_compute[226829]: 2026-01-31 08:30:22.246 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:22.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:22 compute-2 nova_compute[226829]: 2026-01-31 08:30:22.989 226833 DEBUG nova.network.neutron [-] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:30:23 compute-2 nova_compute[226829]: 2026-01-31 08:30:23.091 226833 INFO nova.compute.manager [-] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Took 2.09 seconds to deallocate network for instance.
Jan 31 08:30:23 compute-2 podman[304529]: 2026-01-31 08:30:23.186867529 +0000 UTC m=+0.069131737 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:30:23 compute-2 nova_compute[226829]: 2026-01-31 08:30:23.204 226833 DEBUG oslo_concurrency.lockutils [None req-04ca5eef-8f66-49c8-8d3e-b9ee257a41e6 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:23 compute-2 nova_compute[226829]: 2026-01-31 08:30:23.204 226833 DEBUG oslo_concurrency.lockutils [None req-04ca5eef-8f66-49c8-8d3e-b9ee257a41e6 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:23 compute-2 ceph-mon[77282]: pgmap v2910: 305 pgs: 305 active+clean; 781 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 1.7 KiB/s wr, 21 op/s
Jan 31 08:30:23 compute-2 nova_compute[226829]: 2026-01-31 08:30:23.329 226833 DEBUG oslo_concurrency.processutils [None req-04ca5eef-8f66-49c8-8d3e-b9ee257a41e6 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:30:23 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3100385578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:23 compute-2 nova_compute[226829]: 2026-01-31 08:30:23.754 226833 DEBUG oslo_concurrency.processutils [None req-04ca5eef-8f66-49c8-8d3e-b9ee257a41e6 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:23 compute-2 nova_compute[226829]: 2026-01-31 08:30:23.759 226833 DEBUG nova.compute.provider_tree [None req-04ca5eef-8f66-49c8-8d3e-b9ee257a41e6 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:30:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:23.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:23 compute-2 nova_compute[226829]: 2026-01-31 08:30:23.833 226833 DEBUG nova.scheduler.client.report [None req-04ca5eef-8f66-49c8-8d3e-b9ee257a41e6 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:30:23 compute-2 nova_compute[226829]: 2026-01-31 08:30:23.955 226833 DEBUG oslo_concurrency.lockutils [None req-04ca5eef-8f66-49c8-8d3e-b9ee257a41e6 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:23 compute-2 nova_compute[226829]: 2026-01-31 08:30:23.984 226833 DEBUG nova.compute.manager [req-f43cd165-cc09-4544-81af-57f0682e6442 req-e38de3a2-c3d9-49ba-a0aa-e06c0357b510 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Received event network-vif-plugged-f71d80d1-5129-4397-82d8-2bebfc6c2cb2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:30:23 compute-2 nova_compute[226829]: 2026-01-31 08:30:23.984 226833 DEBUG oslo_concurrency.lockutils [req-f43cd165-cc09-4544-81af-57f0682e6442 req-e38de3a2-c3d9-49ba-a0aa-e06c0357b510 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:23 compute-2 nova_compute[226829]: 2026-01-31 08:30:23.985 226833 DEBUG oslo_concurrency.lockutils [req-f43cd165-cc09-4544-81af-57f0682e6442 req-e38de3a2-c3d9-49ba-a0aa-e06c0357b510 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:23 compute-2 nova_compute[226829]: 2026-01-31 08:30:23.985 226833 DEBUG oslo_concurrency.lockutils [req-f43cd165-cc09-4544-81af-57f0682e6442 req-e38de3a2-c3d9-49ba-a0aa-e06c0357b510 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:23 compute-2 nova_compute[226829]: 2026-01-31 08:30:23.986 226833 DEBUG nova.compute.manager [req-f43cd165-cc09-4544-81af-57f0682e6442 req-e38de3a2-c3d9-49ba-a0aa-e06c0357b510 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] No waiting events found dispatching network-vif-plugged-f71d80d1-5129-4397-82d8-2bebfc6c2cb2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:30:23 compute-2 nova_compute[226829]: 2026-01-31 08:30:23.986 226833 WARNING nova.compute.manager [req-f43cd165-cc09-4544-81af-57f0682e6442 req-e38de3a2-c3d9-49ba-a0aa-e06c0357b510 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Received unexpected event network-vif-plugged-f71d80d1-5129-4397-82d8-2bebfc6c2cb2 for instance with vm_state deleted and task_state None.
Jan 31 08:30:23 compute-2 nova_compute[226829]: 2026-01-31 08:30:23.986 226833 DEBUG nova.compute.manager [req-f43cd165-cc09-4544-81af-57f0682e6442 req-e38de3a2-c3d9-49ba-a0aa-e06c0357b510 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Received event network-vif-deleted-f71d80d1-5129-4397-82d8-2bebfc6c2cb2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:30:24 compute-2 nova_compute[226829]: 2026-01-31 08:30:24.119 226833 INFO nova.scheduler.client.report [None req-04ca5eef-8f66-49c8-8d3e-b9ee257a41e6 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Deleted allocations for instance b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5
Jan 31 08:30:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3100385578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:24 compute-2 nova_compute[226829]: 2026-01-31 08:30:24.322 226833 DEBUG oslo_concurrency.lockutils [None req-04ca5eef-8f66-49c8-8d3e-b9ee257a41e6 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:24.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:24 compute-2 nova_compute[226829]: 2026-01-31 08:30:24.900 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:25 compute-2 ceph-mon[77282]: pgmap v2911: 305 pgs: 305 active+clean; 767 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 1.5 KiB/s wr, 20 op/s
Jan 31 08:30:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:25.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:26 compute-2 sudo[304581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:30:26 compute-2 sudo[304581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:26 compute-2 sudo[304581]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:26 compute-2 sudo[304606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:30:26 compute-2 sudo[304606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:26 compute-2 sudo[304606]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:26.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:27 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:30:27 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:30:27 compute-2 nova_compute[226829]: 2026-01-31 08:30:27.248 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:30:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:27.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:30:28 compute-2 ceph-mon[77282]: pgmap v2912: 305 pgs: 305 active+clean; 664 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 382 KiB/s rd, 3.2 KiB/s wr, 73 op/s
Jan 31 08:30:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:30:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:28.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:30:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:30:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:29.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:30:29 compute-2 nova_compute[226829]: 2026-01-31 08:30:29.903 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:30 compute-2 ceph-mon[77282]: pgmap v2913: 305 pgs: 305 active+clean; 647 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 15 KiB/s wr, 116 op/s
Jan 31 08:30:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e368 e368: 3 total, 3 up, 3 in
Jan 31 08:30:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:30.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:31 compute-2 ceph-mon[77282]: pgmap v2914: 305 pgs: 305 active+clean; 647 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 19 KiB/s wr, 130 op/s
Jan 31 08:30:31 compute-2 ceph-mon[77282]: osdmap e368: 3 total, 3 up, 3 in
Jan 31 08:30:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:31.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:32 compute-2 nova_compute[226829]: 2026-01-31 08:30:32.250 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:30:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:32.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:30:32 compute-2 sudo[304634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:30:32 compute-2 sudo[304634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:32 compute-2 sudo[304634]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:32 compute-2 sudo[304659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:30:32 compute-2 sudo[304659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:32 compute-2 sudo[304659]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e369 e369: 3 total, 3 up, 3 in
Jan 31 08:30:33 compute-2 ceph-mon[77282]: pgmap v2916: 305 pgs: 305 active+clean; 647 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 24 KiB/s wr, 135 op/s
Jan 31 08:30:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:33.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:33 compute-2 ovn_controller[133834]: 2026-01-31T08:30:33Z|00688|binding|INFO|Releasing lport 0b9d56f1-a803-44f1-b709-3bfbc71e0f57 from this chassis (sb_readonly=0)
Jan 31 08:30:33 compute-2 nova_compute[226829]: 2026-01-31 08:30:33.952 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:34 compute-2 podman[304685]: 2026-01-31 08:30:34.202914491 +0000 UTC m=+0.079028755 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 31 08:30:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:30:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:34.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:30:34 compute-2 ceph-mon[77282]: osdmap e369: 3 total, 3 up, 3 in
Jan 31 08:30:34 compute-2 nova_compute[226829]: 2026-01-31 08:30:34.692 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848219.6911547, b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:30:34 compute-2 nova_compute[226829]: 2026-01-31 08:30:34.693 226833 INFO nova.compute.manager [-] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] VM Stopped (Lifecycle Event)
Jan 31 08:30:34 compute-2 nova_compute[226829]: 2026-01-31 08:30:34.731 226833 DEBUG nova.compute.manager [None req-267a31c0-8660-4a03-8163-52f33aad7c6d - - - - - -] [instance: b17f15ec-65a0-4d5b-a53b-9e9fda7eafd5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:30:34 compute-2 nova_compute[226829]: 2026-01-31 08:30:34.906 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:35 compute-2 ceph-mon[77282]: pgmap v2918: 305 pgs: 305 active+clean; 647 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 28 KiB/s wr, 90 op/s
Jan 31 08:30:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:30:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:35.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:30:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:36.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:37 compute-2 nova_compute[226829]: 2026-01-31 08:30:37.252 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:30:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:37.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:30:37 compute-2 ceph-mon[77282]: pgmap v2919: 305 pgs: 305 active+clean; 647 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 546 KiB/s rd, 10 KiB/s wr, 32 op/s
Jan 31 08:30:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:30:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:38.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:30:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:39.117 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=70, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=69) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:30:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:39.119 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:30:39 compute-2 nova_compute[226829]: 2026-01-31 08:30:39.119 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:39.121 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '70'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:30:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:30:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:39.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:30:39 compute-2 nova_compute[226829]: 2026-01-31 08:30:39.861 226833 DEBUG oslo_concurrency.lockutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:39 compute-2 nova_compute[226829]: 2026-01-31 08:30:39.862 226833 DEBUG oslo_concurrency.lockutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:39 compute-2 nova_compute[226829]: 2026-01-31 08:30:39.899 226833 DEBUG nova.compute.manager [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:30:39 compute-2 nova_compute[226829]: 2026-01-31 08:30:39.908 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:40 compute-2 nova_compute[226829]: 2026-01-31 08:30:40.056 226833 DEBUG oslo_concurrency.lockutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:40 compute-2 nova_compute[226829]: 2026-01-31 08:30:40.057 226833 DEBUG oslo_concurrency.lockutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:40 compute-2 nova_compute[226829]: 2026-01-31 08:30:40.069 226833 DEBUG nova.virt.hardware [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:30:40 compute-2 nova_compute[226829]: 2026-01-31 08:30:40.069 226833 INFO nova.compute.claims [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:30:40 compute-2 ceph-mon[77282]: pgmap v2920: 305 pgs: 305 active+clean; 664 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 238 KiB/s rd, 1.7 MiB/s wr, 54 op/s
Jan 31 08:30:40 compute-2 nova_compute[226829]: 2026-01-31 08:30:40.281 226833 DEBUG oslo_concurrency.processutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:40 compute-2 nova_compute[226829]: 2026-01-31 08:30:40.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:40.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:30:40 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1777556716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:40 compute-2 nova_compute[226829]: 2026-01-31 08:30:40.723 226833 DEBUG oslo_concurrency.processutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:40 compute-2 nova_compute[226829]: 2026-01-31 08:30:40.727 226833 DEBUG nova.compute.provider_tree [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:30:40 compute-2 nova_compute[226829]: 2026-01-31 08:30:40.759 226833 DEBUG nova.scheduler.client.report [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:30:40 compute-2 nova_compute[226829]: 2026-01-31 08:30:40.811 226833 DEBUG oslo_concurrency.lockutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:40 compute-2 nova_compute[226829]: 2026-01-31 08:30:40.811 226833 DEBUG nova.compute.manager [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:30:40 compute-2 nova_compute[226829]: 2026-01-31 08:30:40.893 226833 DEBUG nova.compute.manager [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:30:40 compute-2 nova_compute[226829]: 2026-01-31 08:30:40.893 226833 DEBUG nova.network.neutron [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:30:40 compute-2 nova_compute[226829]: 2026-01-31 08:30:40.934 226833 INFO nova.virt.libvirt.driver [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:30:40 compute-2 nova_compute[226829]: 2026-01-31 08:30:40.963 226833 DEBUG nova.compute.manager [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:30:41 compute-2 nova_compute[226829]: 2026-01-31 08:30:41.118 226833 DEBUG nova.compute.manager [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:30:41 compute-2 nova_compute[226829]: 2026-01-31 08:30:41.119 226833 DEBUG nova.virt.libvirt.driver [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:30:41 compute-2 nova_compute[226829]: 2026-01-31 08:30:41.120 226833 INFO nova.virt.libvirt.driver [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Creating image(s)
Jan 31 08:30:41 compute-2 nova_compute[226829]: 2026-01-31 08:30:41.143 226833 DEBUG nova.storage.rbd_utils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] rbd image cd7614f6-e095-4eaf-bc4a-f749f49d3da7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:30:41 compute-2 nova_compute[226829]: 2026-01-31 08:30:41.164 226833 DEBUG nova.storage.rbd_utils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] rbd image cd7614f6-e095-4eaf-bc4a-f749f49d3da7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:30:41 compute-2 nova_compute[226829]: 2026-01-31 08:30:41.185 226833 DEBUG nova.storage.rbd_utils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] rbd image cd7614f6-e095-4eaf-bc4a-f749f49d3da7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:30:41 compute-2 nova_compute[226829]: 2026-01-31 08:30:41.188 226833 DEBUG oslo_concurrency.processutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:41 compute-2 nova_compute[226829]: 2026-01-31 08:30:41.214 226833 DEBUG nova.policy [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '48d684de9ba340f48e249b4cce857bfa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '418d5319c640455ab23850c0b0f24f92', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:30:41 compute-2 nova_compute[226829]: 2026-01-31 08:30:41.260 226833 DEBUG oslo_concurrency.processutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:41 compute-2 nova_compute[226829]: 2026-01-31 08:30:41.261 226833 DEBUG oslo_concurrency.lockutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:41 compute-2 nova_compute[226829]: 2026-01-31 08:30:41.262 226833 DEBUG oslo_concurrency.lockutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:41 compute-2 nova_compute[226829]: 2026-01-31 08:30:41.262 226833 DEBUG oslo_concurrency.lockutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:41 compute-2 nova_compute[226829]: 2026-01-31 08:30:41.282 226833 DEBUG nova.storage.rbd_utils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] rbd image cd7614f6-e095-4eaf-bc4a-f749f49d3da7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:30:41 compute-2 nova_compute[226829]: 2026-01-31 08:30:41.285 226833 DEBUG oslo_concurrency.processutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 cd7614f6-e095-4eaf-bc4a-f749f49d3da7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:41.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:41 compute-2 ceph-mon[77282]: pgmap v2921: 305 pgs: 305 active+clean; 672 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 223 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Jan 31 08:30:41 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1777556716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:42 compute-2 nova_compute[226829]: 2026-01-31 08:30:42.254 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:42 compute-2 nova_compute[226829]: 2026-01-31 08:30:42.264 226833 DEBUG oslo_concurrency.processutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 cd7614f6-e095-4eaf-bc4a-f749f49d3da7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.979s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:42 compute-2 nova_compute[226829]: 2026-01-31 08:30:42.339 226833 DEBUG nova.storage.rbd_utils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] resizing rbd image cd7614f6-e095-4eaf-bc4a-f749f49d3da7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:30:42 compute-2 nova_compute[226829]: 2026-01-31 08:30:42.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:42 compute-2 nova_compute[226829]: 2026-01-31 08:30:42.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:42.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:42 compute-2 nova_compute[226829]: 2026-01-31 08:30:42.694 226833 DEBUG nova.objects.instance [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lazy-loading 'migration_context' on Instance uuid cd7614f6-e095-4eaf-bc4a-f749f49d3da7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:30:42 compute-2 nova_compute[226829]: 2026-01-31 08:30:42.761 226833 DEBUG nova.virt.libvirt.driver [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:30:42 compute-2 nova_compute[226829]: 2026-01-31 08:30:42.762 226833 DEBUG nova.virt.libvirt.driver [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Ensure instance console log exists: /var/lib/nova/instances/cd7614f6-e095-4eaf-bc4a-f749f49d3da7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:30:42 compute-2 nova_compute[226829]: 2026-01-31 08:30:42.763 226833 DEBUG oslo_concurrency.lockutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:42 compute-2 nova_compute[226829]: 2026-01-31 08:30:42.764 226833 DEBUG oslo_concurrency.lockutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:42 compute-2 nova_compute[226829]: 2026-01-31 08:30:42.765 226833 DEBUG oslo_concurrency.lockutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:43 compute-2 ceph-mon[77282]: pgmap v2922: 305 pgs: 305 active+clean; 677 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 287 KiB/s rd, 2.4 MiB/s wr, 74 op/s
Jan 31 08:30:43 compute-2 nova_compute[226829]: 2026-01-31 08:30:43.481 226833 DEBUG nova.network.neutron [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Successfully created port: ffaf11e4-ab25-435a-a550-4c6c0b25801c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:30:43 compute-2 nova_compute[226829]: 2026-01-31 08:30:43.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:43 compute-2 nova_compute[226829]: 2026-01-31 08:30:43.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:30:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:43.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:43 compute-2 nova_compute[226829]: 2026-01-31 08:30:43.985 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:30:43 compute-2 nova_compute[226829]: 2026-01-31 08:30:43.985 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:30:43 compute-2 nova_compute[226829]: 2026-01-31 08:30:43.986 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:30:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/810849707' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:44.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:44 compute-2 nova_compute[226829]: 2026-01-31 08:30:44.911 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:45 compute-2 ceph-mon[77282]: pgmap v2923: 305 pgs: 305 active+clean; 692 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 274 KiB/s rd, 2.9 MiB/s wr, 82 op/s
Jan 31 08:30:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:45.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:30:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:46.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:30:47 compute-2 nova_compute[226829]: 2026-01-31 08:30:47.257 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:47 compute-2 ceph-mon[77282]: pgmap v2924: 305 pgs: 305 active+clean; 723 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 256 KiB/s rd, 3.8 MiB/s wr, 89 op/s
Jan 31 08:30:47 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e370 e370: 3 total, 3 up, 3 in
Jan 31 08:30:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:47.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:30:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:48.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:30:48 compute-2 ceph-mon[77282]: osdmap e370: 3 total, 3 up, 3 in
Jan 31 08:30:48 compute-2 nova_compute[226829]: 2026-01-31 08:30:48.897 226833 DEBUG nova.network.neutron [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Successfully updated port: ffaf11e4-ab25-435a-a550-4c6c0b25801c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:30:49 compute-2 nova_compute[226829]: 2026-01-31 08:30:49.023 226833 DEBUG oslo_concurrency.lockutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "refresh_cache-cd7614f6-e095-4eaf-bc4a-f749f49d3da7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:30:49 compute-2 nova_compute[226829]: 2026-01-31 08:30:49.023 226833 DEBUG oslo_concurrency.lockutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquired lock "refresh_cache-cd7614f6-e095-4eaf-bc4a-f749f49d3da7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:30:49 compute-2 nova_compute[226829]: 2026-01-31 08:30:49.024 226833 DEBUG nova.network.neutron [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:30:49 compute-2 nova_compute[226829]: 2026-01-31 08:30:49.091 226833 DEBUG nova.compute.manager [req-30b1969c-84d4-44e4-bd15-7db4382ed690 req-acf4b4dc-09f3-4d9e-8324-d34a8ff5b73d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Received event network-changed-ffaf11e4-ab25-435a-a550-4c6c0b25801c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:30:49 compute-2 nova_compute[226829]: 2026-01-31 08:30:49.092 226833 DEBUG nova.compute.manager [req-30b1969c-84d4-44e4-bd15-7db4382ed690 req-acf4b4dc-09f3-4d9e-8324-d34a8ff5b73d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Refreshing instance network info cache due to event network-changed-ffaf11e4-ab25-435a-a550-4c6c0b25801c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:30:49 compute-2 nova_compute[226829]: 2026-01-31 08:30:49.093 226833 DEBUG oslo_concurrency.lockutils [req-30b1969c-84d4-44e4-bd15-7db4382ed690 req-acf4b4dc-09f3-4d9e-8324-d34a8ff5b73d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-cd7614f6-e095-4eaf-bc4a-f749f49d3da7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:30:49 compute-2 nova_compute[226829]: 2026-01-31 08:30:49.125 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Updating instance_info_cache with network_info: [{"id": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "address": "fa:16:3e:ec:78:f9", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaeb09486-b6", "ovs_interfaceid": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:30:49 compute-2 nova_compute[226829]: 2026-01-31 08:30:49.150 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:30:49 compute-2 nova_compute[226829]: 2026-01-31 08:30:49.151 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:30:49 compute-2 nova_compute[226829]: 2026-01-31 08:30:49.332 226833 DEBUG nova.network.neutron [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:30:49 compute-2 nova_compute[226829]: 2026-01-31 08:30:49.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:49.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:49 compute-2 nova_compute[226829]: 2026-01-31 08:30:49.914 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:50 compute-2 ceph-mon[77282]: pgmap v2926: 305 pgs: 305 active+clean; 723 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 116 KiB/s rd, 3.2 MiB/s wr, 63 op/s
Jan 31 08:30:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3018523473' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/573318407' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:50 compute-2 nova_compute[226829]: 2026-01-31 08:30:50.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:30:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:50.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:30:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:51 compute-2 nova_compute[226829]: 2026-01-31 08:30:51.141 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:30:51 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1955153886' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:30:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:30:51 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1955153886' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:30:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1160593388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:51 compute-2 ceph-mon[77282]: pgmap v2927: 305 pgs: 305 active+clean; 703 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 103 KiB/s rd, 2.5 MiB/s wr, 71 op/s
Jan 31 08:30:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3089461330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1955153886' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:30:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1955153886' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:30:51 compute-2 nova_compute[226829]: 2026-01-31 08:30:51.391 226833 DEBUG nova.network.neutron [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Updating instance_info_cache with network_info: [{"id": "ffaf11e4-ab25-435a-a550-4c6c0b25801c", "address": "fa:16:3e:7c:0b:77", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaf11e4-ab", "ovs_interfaceid": "ffaf11e4-ab25-435a-a550-4c6c0b25801c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:30:51 compute-2 nova_compute[226829]: 2026-01-31 08:30:51.428 226833 DEBUG oslo_concurrency.lockutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Releasing lock "refresh_cache-cd7614f6-e095-4eaf-bc4a-f749f49d3da7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:30:51 compute-2 nova_compute[226829]: 2026-01-31 08:30:51.428 226833 DEBUG nova.compute.manager [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Instance network_info: |[{"id": "ffaf11e4-ab25-435a-a550-4c6c0b25801c", "address": "fa:16:3e:7c:0b:77", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaf11e4-ab", "ovs_interfaceid": "ffaf11e4-ab25-435a-a550-4c6c0b25801c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:30:51 compute-2 nova_compute[226829]: 2026-01-31 08:30:51.429 226833 DEBUG oslo_concurrency.lockutils [req-30b1969c-84d4-44e4-bd15-7db4382ed690 req-acf4b4dc-09f3-4d9e-8324-d34a8ff5b73d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-cd7614f6-e095-4eaf-bc4a-f749f49d3da7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:30:51 compute-2 nova_compute[226829]: 2026-01-31 08:30:51.429 226833 DEBUG nova.network.neutron [req-30b1969c-84d4-44e4-bd15-7db4382ed690 req-acf4b4dc-09f3-4d9e-8324-d34a8ff5b73d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Refreshing network info cache for port ffaf11e4-ab25-435a-a550-4c6c0b25801c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:30:51 compute-2 nova_compute[226829]: 2026-01-31 08:30:51.436 226833 DEBUG nova.virt.libvirt.driver [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Start _get_guest_xml network_info=[{"id": "ffaf11e4-ab25-435a-a550-4c6c0b25801c", "address": "fa:16:3e:7c:0b:77", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaf11e4-ab", "ovs_interfaceid": "ffaf11e4-ab25-435a-a550-4c6c0b25801c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:30:51 compute-2 nova_compute[226829]: 2026-01-31 08:30:51.444 226833 WARNING nova.virt.libvirt.driver [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:30:51 compute-2 nova_compute[226829]: 2026-01-31 08:30:51.456 226833 DEBUG nova.virt.libvirt.host [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:30:51 compute-2 nova_compute[226829]: 2026-01-31 08:30:51.457 226833 DEBUG nova.virt.libvirt.host [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:30:51 compute-2 nova_compute[226829]: 2026-01-31 08:30:51.462 226833 DEBUG nova.virt.libvirt.host [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:30:51 compute-2 nova_compute[226829]: 2026-01-31 08:30:51.463 226833 DEBUG nova.virt.libvirt.host [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:30:51 compute-2 nova_compute[226829]: 2026-01-31 08:30:51.465 226833 DEBUG nova.virt.libvirt.driver [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:30:51 compute-2 nova_compute[226829]: 2026-01-31 08:30:51.466 226833 DEBUG nova.virt.hardware [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:30:51 compute-2 nova_compute[226829]: 2026-01-31 08:30:51.467 226833 DEBUG nova.virt.hardware [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:30:51 compute-2 nova_compute[226829]: 2026-01-31 08:30:51.467 226833 DEBUG nova.virt.hardware [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:30:51 compute-2 nova_compute[226829]: 2026-01-31 08:30:51.468 226833 DEBUG nova.virt.hardware [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:30:51 compute-2 nova_compute[226829]: 2026-01-31 08:30:51.468 226833 DEBUG nova.virt.hardware [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:30:51 compute-2 nova_compute[226829]: 2026-01-31 08:30:51.469 226833 DEBUG nova.virt.hardware [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:30:51 compute-2 nova_compute[226829]: 2026-01-31 08:30:51.469 226833 DEBUG nova.virt.hardware [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:30:51 compute-2 nova_compute[226829]: 2026-01-31 08:30:51.470 226833 DEBUG nova.virt.hardware [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:30:51 compute-2 nova_compute[226829]: 2026-01-31 08:30:51.470 226833 DEBUG nova.virt.hardware [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:30:51 compute-2 nova_compute[226829]: 2026-01-31 08:30:51.471 226833 DEBUG nova.virt.hardware [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:30:51 compute-2 nova_compute[226829]: 2026-01-31 08:30:51.472 226833 DEBUG nova.virt.hardware [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:30:51 compute-2 nova_compute[226829]: 2026-01-31 08:30:51.478 226833 DEBUG oslo_concurrency.processutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:51 compute-2 nova_compute[226829]: 2026-01-31 08:30:51.512 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:51.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:30:51 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3159674751' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:30:51 compute-2 nova_compute[226829]: 2026-01-31 08:30:51.984 226833 DEBUG oslo_concurrency.processutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.008 226833 DEBUG nova.storage.rbd_utils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] rbd image cd7614f6-e095-4eaf-bc4a-f749f49d3da7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.013 226833 DEBUG oslo_concurrency.processutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.260 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3159674751' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:30:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:30:52 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3900223522' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.465 226833 DEBUG oslo_concurrency.processutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.467 226833 DEBUG nova.virt.libvirt.vif [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:30:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-2093056277',display_name='tempest-AttachVolumeNegativeTest-server-2093056277',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-2093056277',id=172,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDonjfnAujZefiMYUrxel4+/qI0af0RRcGu8mb+x+XiquvG3Cqt6583WFG6Aculfd7qg2S4SI25/n8o8oX595vAY8g9p6XyR4w5iSVlLPkpPjgA7hRODzCQmVbkbFO4dHg==',key_name='tempest-keypair-1218148886',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='418d5319c640455ab23850c0b0f24f92',ramdisk_id='',reservation_id='r-nhxuewdv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-562353674',owner_user_name='tempest-AttachVolumeNegativeTest-562353674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:30:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='48d684de9ba340f48e249b4cce857bfa',uuid=cd7614f6-e095-4eaf-bc4a-f749f49d3da7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ffaf11e4-ab25-435a-a550-4c6c0b25801c", "address": "fa:16:3e:7c:0b:77", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaf11e4-ab", "ovs_interfaceid": "ffaf11e4-ab25-435a-a550-4c6c0b25801c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.467 226833 DEBUG nova.network.os_vif_util [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Converting VIF {"id": "ffaf11e4-ab25-435a-a550-4c6c0b25801c", "address": "fa:16:3e:7c:0b:77", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaf11e4-ab", "ovs_interfaceid": "ffaf11e4-ab25-435a-a550-4c6c0b25801c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.468 226833 DEBUG nova.network.os_vif_util [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:0b:77,bridge_name='br-int',has_traffic_filtering=True,id=ffaf11e4-ab25-435a-a550-4c6c0b25801c,network=Network(e26a2af1-a850-4885-977e-596b6be13fb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffaf11e4-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.469 226833 DEBUG nova.objects.instance [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lazy-loading 'pci_devices' on Instance uuid cd7614f6-e095-4eaf-bc4a-f749f49d3da7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:30:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:30:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:52.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.517 226833 DEBUG nova.virt.libvirt.driver [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:30:52 compute-2 nova_compute[226829]:   <uuid>cd7614f6-e095-4eaf-bc4a-f749f49d3da7</uuid>
Jan 31 08:30:52 compute-2 nova_compute[226829]:   <name>instance-000000ac</name>
Jan 31 08:30:52 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:30:52 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:30:52 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <nova:name>tempest-AttachVolumeNegativeTest-server-2093056277</nova:name>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:30:51</nova:creationTime>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:30:52 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:30:52 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:30:52 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:30:52 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:30:52 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:30:52 compute-2 nova_compute[226829]:         <nova:user uuid="48d684de9ba340f48e249b4cce857bfa">tempest-AttachVolumeNegativeTest-562353674-project-member</nova:user>
Jan 31 08:30:52 compute-2 nova_compute[226829]:         <nova:project uuid="418d5319c640455ab23850c0b0f24f92">tempest-AttachVolumeNegativeTest-562353674</nova:project>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:30:52 compute-2 nova_compute[226829]:         <nova:port uuid="ffaf11e4-ab25-435a-a550-4c6c0b25801c">
Jan 31 08:30:52 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:30:52 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:30:52 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <system>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <entry name="serial">cd7614f6-e095-4eaf-bc4a-f749f49d3da7</entry>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <entry name="uuid">cd7614f6-e095-4eaf-bc4a-f749f49d3da7</entry>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     </system>
Jan 31 08:30:52 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:30:52 compute-2 nova_compute[226829]:   <os>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:   </os>
Jan 31 08:30:52 compute-2 nova_compute[226829]:   <features>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:   </features>
Jan 31 08:30:52 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:30:52 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:30:52 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/cd7614f6-e095-4eaf-bc4a-f749f49d3da7_disk">
Jan 31 08:30:52 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       </source>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:30:52 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/cd7614f6-e095-4eaf-bc4a-f749f49d3da7_disk.config">
Jan 31 08:30:52 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       </source>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:30:52 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:7c:0b:77"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <target dev="tapffaf11e4-ab"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/cd7614f6-e095-4eaf-bc4a-f749f49d3da7/console.log" append="off"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <video>
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     </video>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:30:52 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:30:52 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:30:52 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:30:52 compute-2 nova_compute[226829]: </domain>
Jan 31 08:30:52 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.519 226833 DEBUG nova.compute.manager [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Preparing to wait for external event network-vif-plugged-ffaf11e4-ab25-435a-a550-4c6c0b25801c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.519 226833 DEBUG oslo_concurrency.lockutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.520 226833 DEBUG oslo_concurrency.lockutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.522 226833 DEBUG oslo_concurrency.lockutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.523 226833 DEBUG nova.virt.libvirt.vif [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:30:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-2093056277',display_name='tempest-AttachVolumeNegativeTest-server-2093056277',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-2093056277',id=172,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDonjfnAujZefiMYUrxel4+/qI0af0RRcGu8mb+x+XiquvG3Cqt6583WFG6Aculfd7qg2S4SI25/n8o8oX595vAY8g9p6XyR4w5iSVlLPkpPjgA7hRODzCQmVbkbFO4dHg==',key_name='tempest-keypair-1218148886',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='418d5319c640455ab23850c0b0f24f92',ramdisk_id='',reservation_id='r-nhxuewdv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-562353674',owner_user_name='tempest-AttachVolumeNegativeTest-562353674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:30:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='48d684de9ba340f48e249b4cce857bfa',uuid=cd7614f6-e095-4eaf-bc4a-f749f49d3da7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ffaf11e4-ab25-435a-a550-4c6c0b25801c", "address": "fa:16:3e:7c:0b:77", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaf11e4-ab", "ovs_interfaceid": "ffaf11e4-ab25-435a-a550-4c6c0b25801c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.523 226833 DEBUG nova.network.os_vif_util [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Converting VIF {"id": "ffaf11e4-ab25-435a-a550-4c6c0b25801c", "address": "fa:16:3e:7c:0b:77", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaf11e4-ab", "ovs_interfaceid": "ffaf11e4-ab25-435a-a550-4c6c0b25801c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.523 226833 DEBUG nova.network.os_vif_util [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:0b:77,bridge_name='br-int',has_traffic_filtering=True,id=ffaf11e4-ab25-435a-a550-4c6c0b25801c,network=Network(e26a2af1-a850-4885-977e-596b6be13fb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffaf11e4-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.524 226833 DEBUG os_vif [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:0b:77,bridge_name='br-int',has_traffic_filtering=True,id=ffaf11e4-ab25-435a-a550-4c6c0b25801c,network=Network(e26a2af1-a850-4885-977e-596b6be13fb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffaf11e4-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.525 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.525 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.525 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.528 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.529 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapffaf11e4-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.529 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapffaf11e4-ab, col_values=(('external_ids', {'iface-id': 'ffaf11e4-ab25-435a-a550-4c6c0b25801c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7c:0b:77', 'vm-uuid': 'cd7614f6-e095-4eaf-bc4a-f749f49d3da7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.530 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:52 compute-2 NetworkManager[48999]: <info>  [1769848252.5316] manager: (tapffaf11e4-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/342)
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.532 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.537 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.537 226833 INFO os_vif [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:0b:77,bridge_name='br-int',has_traffic_filtering=True,id=ffaf11e4-ab25-435a-a550-4c6c0b25801c,network=Network(e26a2af1-a850-4885-977e-596b6be13fb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffaf11e4-ab')
Jan 31 08:30:52 compute-2 sudo[304969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:30:52 compute-2 sudo[304969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:52 compute-2 sudo[304969]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:52 compute-2 sudo[304994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:30:52 compute-2 sudo[304994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:52 compute-2 sudo[304994]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.937 226833 DEBUG nova.virt.libvirt.driver [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.938 226833 DEBUG nova.virt.libvirt.driver [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.938 226833 DEBUG nova.virt.libvirt.driver [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] No VIF found with MAC fa:16:3e:7c:0b:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.938 226833 INFO nova.virt.libvirt.driver [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Using config drive
Jan 31 08:30:52 compute-2 nova_compute[226829]: 2026-01-31 08:30:52.967 226833 DEBUG nova.storage.rbd_utils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] rbd image cd7614f6-e095-4eaf-bc4a-f749f49d3da7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:30:53 compute-2 nova_compute[226829]: 2026-01-31 08:30:53.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:53 compute-2 ceph-mon[77282]: pgmap v2928: 305 pgs: 305 active+clean; 690 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 39 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Jan 31 08:30:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3900223522' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:30:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1006507322' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:30:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1006507322' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:30:53 compute-2 nova_compute[226829]: 2026-01-31 08:30:53.565 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:53 compute-2 nova_compute[226829]: 2026-01-31 08:30:53.566 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:53 compute-2 nova_compute[226829]: 2026-01-31 08:30:53.566 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:53 compute-2 nova_compute[226829]: 2026-01-31 08:30:53.567 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:30:53 compute-2 nova_compute[226829]: 2026-01-31 08:30:53.567 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:30:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:53.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:30:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:30:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2760016401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:54 compute-2 nova_compute[226829]: 2026-01-31 08:30:54.003 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e371 e371: 3 total, 3 up, 3 in
Jan 31 08:30:54 compute-2 nova_compute[226829]: 2026-01-31 08:30:54.174 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:30:54 compute-2 nova_compute[226829]: 2026-01-31 08:30:54.174 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:30:54 compute-2 nova_compute[226829]: 2026-01-31 08:30:54.177 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000ac as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:30:54 compute-2 nova_compute[226829]: 2026-01-31 08:30:54.177 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000ac as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:30:54 compute-2 podman[305060]: 2026-01-31 08:30:54.221117831 +0000 UTC m=+0.110013536 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:30:54 compute-2 nova_compute[226829]: 2026-01-31 08:30:54.224 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:30:54 compute-2 nova_compute[226829]: 2026-01-31 08:30:54.224 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:30:54 compute-2 nova_compute[226829]: 2026-01-31 08:30:54.263 226833 INFO nova.virt.libvirt.driver [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Creating config drive at /var/lib/nova/instances/cd7614f6-e095-4eaf-bc4a-f749f49d3da7/disk.config
Jan 31 08:30:54 compute-2 nova_compute[226829]: 2026-01-31 08:30:54.267 226833 DEBUG oslo_concurrency.processutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cd7614f6-e095-4eaf-bc4a-f749f49d3da7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpe93h0nz5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:54 compute-2 nova_compute[226829]: 2026-01-31 08:30:54.322 226833 DEBUG nova.network.neutron [req-30b1969c-84d4-44e4-bd15-7db4382ed690 req-acf4b4dc-09f3-4d9e-8324-d34a8ff5b73d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Updated VIF entry in instance network info cache for port ffaf11e4-ab25-435a-a550-4c6c0b25801c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:30:54 compute-2 nova_compute[226829]: 2026-01-31 08:30:54.322 226833 DEBUG nova.network.neutron [req-30b1969c-84d4-44e4-bd15-7db4382ed690 req-acf4b4dc-09f3-4d9e-8324-d34a8ff5b73d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Updating instance_info_cache with network_info: [{"id": "ffaf11e4-ab25-435a-a550-4c6c0b25801c", "address": "fa:16:3e:7c:0b:77", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaf11e4-ab", "ovs_interfaceid": "ffaf11e4-ab25-435a-a550-4c6c0b25801c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:30:54 compute-2 nova_compute[226829]: 2026-01-31 08:30:54.395 226833 DEBUG oslo_concurrency.processutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cd7614f6-e095-4eaf-bc4a-f749f49d3da7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpe93h0nz5" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:54 compute-2 nova_compute[226829]: 2026-01-31 08:30:54.419 226833 DEBUG nova.storage.rbd_utils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] rbd image cd7614f6-e095-4eaf-bc4a-f749f49d3da7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:30:54 compute-2 nova_compute[226829]: 2026-01-31 08:30:54.422 226833 DEBUG oslo_concurrency.processutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cd7614f6-e095-4eaf-bc4a-f749f49d3da7/disk.config cd7614f6-e095-4eaf-bc4a-f749f49d3da7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:54 compute-2 nova_compute[226829]: 2026-01-31 08:30:54.444 226833 DEBUG oslo_concurrency.lockutils [req-30b1969c-84d4-44e4-bd15-7db4382ed690 req-acf4b4dc-09f3-4d9e-8324-d34a8ff5b73d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-cd7614f6-e095-4eaf-bc4a-f749f49d3da7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:30:54 compute-2 nova_compute[226829]: 2026-01-31 08:30:54.451 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:30:54 compute-2 nova_compute[226829]: 2026-01-31 08:30:54.452 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3649MB free_disk=20.73931884765625GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:30:54 compute-2 nova_compute[226829]: 2026-01-31 08:30:54.452 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:54 compute-2 nova_compute[226829]: 2026-01-31 08:30:54.453 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:54.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:54 compute-2 nova_compute[226829]: 2026-01-31 08:30:54.655 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:30:54 compute-2 nova_compute[226829]: 2026-01-31 08:30:54.655 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 10816ede-cf43-4736-aba7-48389f607d30 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:30:54 compute-2 nova_compute[226829]: 2026-01-31 08:30:54.656 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance cd7614f6-e095-4eaf-bc4a-f749f49d3da7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:30:54 compute-2 nova_compute[226829]: 2026-01-31 08:30:54.656 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:30:54 compute-2 nova_compute[226829]: 2026-01-31 08:30:54.656 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:30:54 compute-2 nova_compute[226829]: 2026-01-31 08:30:54.829 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:55 compute-2 nova_compute[226829]: 2026-01-31 08:30:55.090 226833 DEBUG oslo_concurrency.processutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cd7614f6-e095-4eaf-bc4a-f749f49d3da7/disk.config cd7614f6-e095-4eaf-bc4a-f749f49d3da7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.668s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:55 compute-2 nova_compute[226829]: 2026-01-31 08:30:55.092 226833 INFO nova.virt.libvirt.driver [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Deleting local config drive /var/lib/nova/instances/cd7614f6-e095-4eaf-bc4a-f749f49d3da7/disk.config because it was imported into RBD.
Jan 31 08:30:55 compute-2 NetworkManager[48999]: <info>  [1769848255.1395] manager: (tapffaf11e4-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/343)
Jan 31 08:30:55 compute-2 kernel: tapffaf11e4-ab: entered promiscuous mode
Jan 31 08:30:55 compute-2 ovn_controller[133834]: 2026-01-31T08:30:55Z|00689|binding|INFO|Claiming lport ffaf11e4-ab25-435a-a550-4c6c0b25801c for this chassis.
Jan 31 08:30:55 compute-2 ovn_controller[133834]: 2026-01-31T08:30:55Z|00690|binding|INFO|ffaf11e4-ab25-435a-a550-4c6c0b25801c: Claiming fa:16:3e:7c:0b:77 10.100.0.4
Jan 31 08:30:55 compute-2 nova_compute[226829]: 2026-01-31 08:30:55.149 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:55 compute-2 ovn_controller[133834]: 2026-01-31T08:30:55Z|00691|binding|INFO|Setting lport ffaf11e4-ab25-435a-a550-4c6c0b25801c ovn-installed in OVS
Jan 31 08:30:55 compute-2 nova_compute[226829]: 2026-01-31 08:30:55.160 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:55 compute-2 nova_compute[226829]: 2026-01-31 08:30:55.163 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:55 compute-2 systemd-udevd[305156]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:30:55 compute-2 nova_compute[226829]: 2026-01-31 08:30:55.170 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:55 compute-2 NetworkManager[48999]: <info>  [1769848255.1833] device (tapffaf11e4-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:55.180 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:0b:77 10.100.0.4'], port_security=['fa:16:3e:7c:0b:77 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'cd7614f6-e095-4eaf-bc4a-f749f49d3da7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e26a2af1-a850-4885-977e-596b6be13fb8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '418d5319c640455ab23850c0b0f24f92', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4749436d-38f6-439f-9f12-f38ebaef0842', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=068708bd-dd36-4d03-9d65-912eb9981ecc, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=ffaf11e4-ab25-435a-a550-4c6c0b25801c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:55.182 143841 INFO neutron.agent.ovn.metadata.agent [-] Port ffaf11e4-ab25-435a-a550-4c6c0b25801c in datapath e26a2af1-a850-4885-977e-596b6be13fb8 bound to our chassis
Jan 31 08:30:55 compute-2 NetworkManager[48999]: <info>  [1769848255.1841] device (tapffaf11e4-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:55.184 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e26a2af1-a850-4885-977e-596b6be13fb8
Jan 31 08:30:55 compute-2 ovn_controller[133834]: 2026-01-31T08:30:55Z|00692|binding|INFO|Setting lport ffaf11e4-ab25-435a-a550-4c6c0b25801c up in Southbound
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:55.193 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2e9e2d70-0e31-49c7-91e3-bc2595e9fe86]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:55.194 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape26a2af1-a1 in ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:30:55 compute-2 systemd-machined[195142]: New machine qemu-80-instance-000000ac.
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:55.200 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape26a2af1-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:55.200 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[963981d1-2695-4cf2-850c-edbe8085e1a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:55.201 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8330ee5a-f651-40d4-aa90-449885bfb5ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:55.211 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[f38ff6ae-d357-4b3f-bf09-c29e0830c901]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:55 compute-2 systemd[1]: Started Virtual Machine qemu-80-instance-000000ac.
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:55.224 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[482621cf-230c-42da-a579-45dc73e55076]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:55.252 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[cf2f579a-b3e6-4a9a-b40f-47c31819bcfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:55.256 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0b51a721-d770-4bf6-b784-a0a294d16192]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:55 compute-2 NetworkManager[48999]: <info>  [1769848255.2576] manager: (tape26a2af1-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/344)
Jan 31 08:30:55 compute-2 systemd-udevd[305161]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:55.278 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[d2e77514-f0bc-4bba-a719-52cbfbe15676]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:55.281 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[9426d880-faae-497e-ae31-662be86f9c2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:55 compute-2 NetworkManager[48999]: <info>  [1769848255.2985] device (tape26a2af1-a0): carrier: link connected
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:55.305 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[0b5d5dea-7fd0-4158-b67b-ffd826e0221e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:55.326 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8788265c-e48f-4432-adf2-12359ab14e29]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape26a2af1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9c:7d:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 216], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 850777, 'reachable_time': 18192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305193, 'error': None, 'target': 'ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:55.339 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d8c7ecde-f5d2-411e-81d9-ae079b7217a2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9c:7dba'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 850777, 'tstamp': 850777}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305194, 'error': None, 'target': 'ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:55.355 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[063831f3-2b7c-4adf-b419-188454b15308]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape26a2af1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9c:7d:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 216], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 850777, 'reachable_time': 18192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305195, 'error': None, 'target': 'ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:55.378 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2115d0f5-b649-45f8-a15e-e7e1e54dac87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:55.419 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8873a025-349c-4847-99f7-3d32dd30d2df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:55.421 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape26a2af1-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:55.422 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:55.422 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape26a2af1-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:30:55 compute-2 nova_compute[226829]: 2026-01-31 08:30:55.424 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:55 compute-2 NetworkManager[48999]: <info>  [1769848255.4253] manager: (tape26a2af1-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/345)
Jan 31 08:30:55 compute-2 kernel: tape26a2af1-a0: entered promiscuous mode
Jan 31 08:30:55 compute-2 nova_compute[226829]: 2026-01-31 08:30:55.428 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:55.429 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape26a2af1-a0, col_values=(('external_ids', {'iface-id': '003d1f0e-744f-4244-8c9f-3a9be6033652'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:30:55 compute-2 nova_compute[226829]: 2026-01-31 08:30:55.430 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:55 compute-2 nova_compute[226829]: 2026-01-31 08:30:55.430 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:55 compute-2 ovn_controller[133834]: 2026-01-31T08:30:55Z|00693|binding|INFO|Releasing lport 003d1f0e-744f-4244-8c9f-3a9be6033652 from this chassis (sb_readonly=0)
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:55.431 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e26a2af1-a850-4885-977e-596b6be13fb8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e26a2af1-a850-4885-977e-596b6be13fb8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:55.432 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[12ff0510-2e63-4e0d-bae8-b755fc23fef8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:55.433 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-e26a2af1-a850-4885-977e-596b6be13fb8
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/e26a2af1-a850-4885-977e-596b6be13fb8.pid.haproxy
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID e26a2af1-a850-4885-977e-596b6be13fb8
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:30:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:30:55.434 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8', 'env', 'PROCESS_TAG=haproxy-e26a2af1-a850-4885-977e-596b6be13fb8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e26a2af1-a850-4885-977e-596b6be13fb8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:30:55 compute-2 nova_compute[226829]: 2026-01-31 08:30:55.435 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:30:55 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/820759006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:55 compute-2 nova_compute[226829]: 2026-01-31 08:30:55.475 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.646s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:55 compute-2 nova_compute[226829]: 2026-01-31 08:30:55.481 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:30:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2760016401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:55 compute-2 ceph-mon[77282]: osdmap e371: 3 total, 3 up, 3 in
Jan 31 08:30:55 compute-2 nova_compute[226829]: 2026-01-31 08:30:55.517 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:30:55 compute-2 nova_compute[226829]: 2026-01-31 08:30:55.565 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:30:55 compute-2 nova_compute[226829]: 2026-01-31 08:30:55.565 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:55 compute-2 podman[305246]: 2026-01-31 08:30:55.742642946 +0000 UTC m=+0.019033347 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:30:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:30:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:55.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:30:55 compute-2 nova_compute[226829]: 2026-01-31 08:30:55.862 226833 DEBUG nova.compute.manager [req-709e874b-006e-4b17-88b1-b5bb3a2c7529 req-beeea741-c2ab-4112-91c4-f3367b4bd16f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Received event network-vif-plugged-ffaf11e4-ab25-435a-a550-4c6c0b25801c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:30:55 compute-2 nova_compute[226829]: 2026-01-31 08:30:55.862 226833 DEBUG oslo_concurrency.lockutils [req-709e874b-006e-4b17-88b1-b5bb3a2c7529 req-beeea741-c2ab-4112-91c4-f3367b4bd16f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:55 compute-2 nova_compute[226829]: 2026-01-31 08:30:55.862 226833 DEBUG oslo_concurrency.lockutils [req-709e874b-006e-4b17-88b1-b5bb3a2c7529 req-beeea741-c2ab-4112-91c4-f3367b4bd16f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:55 compute-2 nova_compute[226829]: 2026-01-31 08:30:55.863 226833 DEBUG oslo_concurrency.lockutils [req-709e874b-006e-4b17-88b1-b5bb3a2c7529 req-beeea741-c2ab-4112-91c4-f3367b4bd16f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:55 compute-2 nova_compute[226829]: 2026-01-31 08:30:55.863 226833 DEBUG nova.compute.manager [req-709e874b-006e-4b17-88b1-b5bb3a2c7529 req-beeea741-c2ab-4112-91c4-f3367b4bd16f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Processing event network-vif-plugged-ffaf11e4-ab25-435a-a550-4c6c0b25801c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:30:56 compute-2 nova_compute[226829]: 2026-01-31 08:30:56.032 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848256.0317123, cd7614f6-e095-4eaf-bc4a-f749f49d3da7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:30:56 compute-2 nova_compute[226829]: 2026-01-31 08:30:56.032 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] VM Started (Lifecycle Event)
Jan 31 08:30:56 compute-2 nova_compute[226829]: 2026-01-31 08:30:56.035 226833 DEBUG nova.compute.manager [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:30:56 compute-2 nova_compute[226829]: 2026-01-31 08:30:56.038 226833 DEBUG nova.virt.libvirt.driver [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:30:56 compute-2 nova_compute[226829]: 2026-01-31 08:30:56.042 226833 INFO nova.virt.libvirt.driver [-] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Instance spawned successfully.
Jan 31 08:30:56 compute-2 nova_compute[226829]: 2026-01-31 08:30:56.042 226833 DEBUG nova.virt.libvirt.driver [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:30:56 compute-2 nova_compute[226829]: 2026-01-31 08:30:56.071 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:30:56 compute-2 nova_compute[226829]: 2026-01-31 08:30:56.075 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:30:56 compute-2 podman[305246]: 2026-01-31 08:30:56.098735039 +0000 UTC m=+0.375125420 container create 9915fbc2f6f18a2e1d0464d7bb3cb8f14aec2a546a5f78d770a2f96b8b2a0d68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:30:56 compute-2 nova_compute[226829]: 2026-01-31 08:30:56.099 226833 DEBUG nova.virt.libvirt.driver [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:30:56 compute-2 nova_compute[226829]: 2026-01-31 08:30:56.100 226833 DEBUG nova.virt.libvirt.driver [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:30:56 compute-2 nova_compute[226829]: 2026-01-31 08:30:56.100 226833 DEBUG nova.virt.libvirt.driver [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:30:56 compute-2 nova_compute[226829]: 2026-01-31 08:30:56.101 226833 DEBUG nova.virt.libvirt.driver [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:30:56 compute-2 nova_compute[226829]: 2026-01-31 08:30:56.101 226833 DEBUG nova.virt.libvirt.driver [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:30:56 compute-2 nova_compute[226829]: 2026-01-31 08:30:56.102 226833 DEBUG nova.virt.libvirt.driver [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:30:56 compute-2 nova_compute[226829]: 2026-01-31 08:30:56.157 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:30:56 compute-2 nova_compute[226829]: 2026-01-31 08:30:56.157 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848256.0345063, cd7614f6-e095-4eaf-bc4a-f749f49d3da7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:30:56 compute-2 nova_compute[226829]: 2026-01-31 08:30:56.158 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] VM Paused (Lifecycle Event)
Jan 31 08:30:56 compute-2 systemd[1]: Started libpod-conmon-9915fbc2f6f18a2e1d0464d7bb3cb8f14aec2a546a5f78d770a2f96b8b2a0d68.scope.
Jan 31 08:30:56 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:30:56 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/896f028bfa72f766cbfc7a955d209574f47609ac8b7dbf14b9c88ac372b81f50/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:56 compute-2 nova_compute[226829]: 2026-01-31 08:30:56.209 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:30:56 compute-2 nova_compute[226829]: 2026-01-31 08:30:56.213 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848256.0382516, cd7614f6-e095-4eaf-bc4a-f749f49d3da7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:30:56 compute-2 nova_compute[226829]: 2026-01-31 08:30:56.213 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] VM Resumed (Lifecycle Event)
Jan 31 08:30:56 compute-2 nova_compute[226829]: 2026-01-31 08:30:56.275 226833 INFO nova.compute.manager [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Took 15.16 seconds to spawn the instance on the hypervisor.
Jan 31 08:30:56 compute-2 nova_compute[226829]: 2026-01-31 08:30:56.276 226833 DEBUG nova.compute.manager [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:30:56 compute-2 nova_compute[226829]: 2026-01-31 08:30:56.302 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:30:56 compute-2 nova_compute[226829]: 2026-01-31 08:30:56.426 226833 INFO nova.compute.manager [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Took 16.42 seconds to build instance.
Jan 31 08:30:56 compute-2 nova_compute[226829]: 2026-01-31 08:30:56.586 226833 DEBUG oslo_concurrency.lockutils [None req-977abec1-3c3d-443c-99fa-5de368b15ff9 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:30:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:56.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:30:56 compute-2 nova_compute[226829]: 2026-01-31 08:30:56.635 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:30:56 compute-2 podman[305246]: 2026-01-31 08:30:56.63610546 +0000 UTC m=+0.912495861 container init 9915fbc2f6f18a2e1d0464d7bb3cb8f14aec2a546a5f78d770a2f96b8b2a0d68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:30:56 compute-2 ceph-mon[77282]: pgmap v2930: 305 pgs: 305 active+clean; 647 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 2.7 KiB/s wr, 34 op/s
Jan 31 08:30:56 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/820759006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:56 compute-2 podman[305246]: 2026-01-31 08:30:56.642091793 +0000 UTC m=+0.918482174 container start 9915fbc2f6f18a2e1d0464d7bb3cb8f14aec2a546a5f78d770a2f96b8b2a0d68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:30:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e372 e372: 3 total, 3 up, 3 in
Jan 31 08:30:56 compute-2 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[305286]: [NOTICE]   (305290) : New worker (305292) forked
Jan 31 08:30:56 compute-2 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[305286]: [NOTICE]   (305290) : Loading success.
Jan 31 08:30:57 compute-2 nova_compute[226829]: 2026-01-31 08:30:57.263 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:57 compute-2 nova_compute[226829]: 2026-01-31 08:30:57.531 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e373 e373: 3 total, 3 up, 3 in
Jan 31 08:30:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:57.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:58 compute-2 ceph-mon[77282]: pgmap v2931: 305 pgs: 305 active+clean; 647 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 34 KiB/s wr, 51 op/s
Jan 31 08:30:58 compute-2 ceph-mon[77282]: osdmap e372: 3 total, 3 up, 3 in
Jan 31 08:30:58 compute-2 nova_compute[226829]: 2026-01-31 08:30:58.247 226833 DEBUG nova.compute.manager [req-1e5013f2-9869-4cd3-900a-884e6dd90d8b req-3a31e2f9-0159-4647-82eb-6e90cafc96de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Received event network-vif-plugged-ffaf11e4-ab25-435a-a550-4c6c0b25801c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:30:58 compute-2 nova_compute[226829]: 2026-01-31 08:30:58.248 226833 DEBUG oslo_concurrency.lockutils [req-1e5013f2-9869-4cd3-900a-884e6dd90d8b req-3a31e2f9-0159-4647-82eb-6e90cafc96de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:58 compute-2 nova_compute[226829]: 2026-01-31 08:30:58.248 226833 DEBUG oslo_concurrency.lockutils [req-1e5013f2-9869-4cd3-900a-884e6dd90d8b req-3a31e2f9-0159-4647-82eb-6e90cafc96de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:58 compute-2 nova_compute[226829]: 2026-01-31 08:30:58.250 226833 DEBUG oslo_concurrency.lockutils [req-1e5013f2-9869-4cd3-900a-884e6dd90d8b req-3a31e2f9-0159-4647-82eb-6e90cafc96de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:58 compute-2 nova_compute[226829]: 2026-01-31 08:30:58.250 226833 DEBUG nova.compute.manager [req-1e5013f2-9869-4cd3-900a-884e6dd90d8b req-3a31e2f9-0159-4647-82eb-6e90cafc96de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] No waiting events found dispatching network-vif-plugged-ffaf11e4-ab25-435a-a550-4c6c0b25801c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:30:58 compute-2 nova_compute[226829]: 2026-01-31 08:30:58.250 226833 WARNING nova.compute.manager [req-1e5013f2-9869-4cd3-900a-884e6dd90d8b req-3a31e2f9-0159-4647-82eb-6e90cafc96de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Received unexpected event network-vif-plugged-ffaf11e4-ab25-435a-a550-4c6c0b25801c for instance with vm_state active and task_state None.
Jan 31 08:30:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:58.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:58 compute-2 nova_compute[226829]: 2026-01-31 08:30:58.922 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:59 compute-2 ceph-mon[77282]: osdmap e373: 3 total, 3 up, 3 in
Jan 31 08:30:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:30:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:59.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:00.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:00 compute-2 ceph-mon[77282]: pgmap v2934: 305 pgs: 305 active+clean; 660 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.2 MiB/s wr, 127 op/s
Jan 31 08:31:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e374 e374: 3 total, 3 up, 3 in
Jan 31 08:31:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:01.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:02 compute-2 ceph-mon[77282]: pgmap v2935: 305 pgs: 305 active+clean; 691 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.0 MiB/s rd, 3.7 MiB/s wr, 282 op/s
Jan 31 08:31:02 compute-2 ceph-mon[77282]: osdmap e374: 3 total, 3 up, 3 in
Jan 31 08:31:02 compute-2 nova_compute[226829]: 2026-01-31 08:31:02.264 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:02.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:02 compute-2 nova_compute[226829]: 2026-01-31 08:31:02.533 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e375 e375: 3 total, 3 up, 3 in
Jan 31 08:31:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:03.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:03 compute-2 ceph-mon[77282]: pgmap v2937: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 714 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 MiB/s rd, 6.8 MiB/s wr, 299 op/s
Jan 31 08:31:03 compute-2 ceph-mon[77282]: osdmap e375: 3 total, 3 up, 3 in
Jan 31 08:31:03 compute-2 nova_compute[226829]: 2026-01-31 08:31:03.989 226833 DEBUG nova.compute.manager [req-cc53c0a2-b518-4d53-ab3f-0799f8c54fcb req-ac066bcf-94a8-425f-bdaa-05def3707d06 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Received event network-changed-ffaf11e4-ab25-435a-a550-4c6c0b25801c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:31:03 compute-2 nova_compute[226829]: 2026-01-31 08:31:03.989 226833 DEBUG nova.compute.manager [req-cc53c0a2-b518-4d53-ab3f-0799f8c54fcb req-ac066bcf-94a8-425f-bdaa-05def3707d06 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Refreshing instance network info cache due to event network-changed-ffaf11e4-ab25-435a-a550-4c6c0b25801c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:31:03 compute-2 nova_compute[226829]: 2026-01-31 08:31:03.990 226833 DEBUG oslo_concurrency.lockutils [req-cc53c0a2-b518-4d53-ab3f-0799f8c54fcb req-ac066bcf-94a8-425f-bdaa-05def3707d06 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-cd7614f6-e095-4eaf-bc4a-f749f49d3da7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:31:03 compute-2 nova_compute[226829]: 2026-01-31 08:31:03.990 226833 DEBUG oslo_concurrency.lockutils [req-cc53c0a2-b518-4d53-ab3f-0799f8c54fcb req-ac066bcf-94a8-425f-bdaa-05def3707d06 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-cd7614f6-e095-4eaf-bc4a-f749f49d3da7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:31:03 compute-2 nova_compute[226829]: 2026-01-31 08:31:03.990 226833 DEBUG nova.network.neutron [req-cc53c0a2-b518-4d53-ab3f-0799f8c54fcb req-ac066bcf-94a8-425f-bdaa-05def3707d06 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Refreshing network info cache for port ffaf11e4-ab25-435a-a550-4c6c0b25801c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:31:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:31:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:04.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:31:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2377867101' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:31:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2377867101' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:31:05 compute-2 podman[305305]: 2026-01-31 08:31:05.164785571 +0000 UTC m=+0.045108275 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 08:31:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:05.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:06 compute-2 ceph-mon[77282]: pgmap v2939: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 MiB/s rd, 7.2 MiB/s wr, 298 op/s
Jan 31 08:31:06 compute-2 nova_compute[226829]: 2026-01-31 08:31:06.406 226833 DEBUG oslo_concurrency.lockutils [None req-a34ddbe7-fd31-4c68-87d0-e8d5b017e188 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "10816ede-cf43-4736-aba7-48389f607d30" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:06 compute-2 nova_compute[226829]: 2026-01-31 08:31:06.408 226833 DEBUG oslo_concurrency.lockutils [None req-a34ddbe7-fd31-4c68-87d0-e8d5b017e188 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:06 compute-2 nova_compute[226829]: 2026-01-31 08:31:06.408 226833 DEBUG oslo_concurrency.lockutils [None req-a34ddbe7-fd31-4c68-87d0-e8d5b017e188 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "10816ede-cf43-4736-aba7-48389f607d30-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:06 compute-2 nova_compute[226829]: 2026-01-31 08:31:06.408 226833 DEBUG oslo_concurrency.lockutils [None req-a34ddbe7-fd31-4c68-87d0-e8d5b017e188 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:06 compute-2 nova_compute[226829]: 2026-01-31 08:31:06.409 226833 DEBUG oslo_concurrency.lockutils [None req-a34ddbe7-fd31-4c68-87d0-e8d5b017e188 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:06 compute-2 nova_compute[226829]: 2026-01-31 08:31:06.410 226833 INFO nova.compute.manager [None req-a34ddbe7-fd31-4c68-87d0-e8d5b017e188 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Terminating instance
Jan 31 08:31:06 compute-2 nova_compute[226829]: 2026-01-31 08:31:06.411 226833 DEBUG nova.compute.manager [None req-a34ddbe7-fd31-4c68-87d0-e8d5b017e188 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:31:06 compute-2 kernel: tapaeb09486-b6 (unregistering): left promiscuous mode
Jan 31 08:31:06 compute-2 NetworkManager[48999]: <info>  [1769848266.5291] device (tapaeb09486-b6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:31:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:06.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:06 compute-2 ovn_controller[133834]: 2026-01-31T08:31:06Z|00694|binding|INFO|Releasing lport aeb09486-b68f-4fa4-a410-dd0ffaf49b05 from this chassis (sb_readonly=0)
Jan 31 08:31:06 compute-2 ovn_controller[133834]: 2026-01-31T08:31:06Z|00695|binding|INFO|Setting lport aeb09486-b68f-4fa4-a410-dd0ffaf49b05 down in Southbound
Jan 31 08:31:06 compute-2 ovn_controller[133834]: 2026-01-31T08:31:06Z|00696|binding|INFO|Removing iface tapaeb09486-b6 ovn-installed in OVS
Jan 31 08:31:06 compute-2 nova_compute[226829]: 2026-01-31 08:31:06.571 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:06 compute-2 nova_compute[226829]: 2026-01-31 08:31:06.573 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:06 compute-2 nova_compute[226829]: 2026-01-31 08:31:06.583 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:06 compute-2 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d000000a5.scope: Deactivated successfully.
Jan 31 08:31:06 compute-2 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d000000a5.scope: Consumed 17.691s CPU time.
Jan 31 08:31:06 compute-2 systemd-machined[195142]: Machine qemu-78-instance-000000a5 terminated.
Jan 31 08:31:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:06.710 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:78:f9 10.100.0.11'], port_security=['fa:16:3e:ec:78:f9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '10816ede-cf43-4736-aba7-48389f607d30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbdbdee526499e90da7e971ede68d3', 'neutron:revision_number': '8', 'neutron:security_group_ids': '8d5863f6-4aa0-486a-96ed-eb36f7d4a61d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.173'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=379aaecc-3dde-4f00-82cf-dc8bd8367d4b, chassis=[], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=aeb09486-b68f-4fa4-a410-dd0ffaf49b05) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:31:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:06.712 143841 INFO neutron.agent.ovn.metadata.agent [-] Port aeb09486-b68f-4fa4-a410-dd0ffaf49b05 in datapath 26ad6a8f-33d5-432e-83d3-63a9d2f165ad unbound from our chassis
Jan 31 08:31:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:06.714 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 26ad6a8f-33d5-432e-83d3-63a9d2f165ad
Jan 31 08:31:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:06.731 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[04c83c12-bb6c-41cb-b72c-90cd6e62b1ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:06.758 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[ff785dab-37e7-415a-a227-83f9aeeab512]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:06.762 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[3c1afcb2-b723-49d0-957b-de2307893a4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:06.795 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[97beec09-25c7-4086-819f-9c0db95759d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:06.817 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4e56da66-c8d3-4b8a-9655-25421261b5c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26ad6a8f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:60:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 210], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 834880, 'reachable_time': 24981, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305336, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:06 compute-2 nova_compute[226829]: 2026-01-31 08:31:06.837 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:06.834 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5d65a643-9a93-4793-b143-777817bbd205]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap26ad6a8f-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 834889, 'tstamp': 834889}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305337, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap26ad6a8f-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 834892, 'tstamp': 834892}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305337, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:06.841 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26ad6a8f-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:31:06 compute-2 nova_compute[226829]: 2026-01-31 08:31:06.843 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:06 compute-2 nova_compute[226829]: 2026-01-31 08:31:06.854 226833 INFO nova.virt.libvirt.driver [-] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Instance destroyed successfully.
Jan 31 08:31:06 compute-2 nova_compute[226829]: 2026-01-31 08:31:06.854 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:06 compute-2 nova_compute[226829]: 2026-01-31 08:31:06.855 226833 DEBUG nova.objects.instance [None req-a34ddbe7-fd31-4c68-87d0-e8d5b017e188 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'resources' on Instance uuid 10816ede-cf43-4736-aba7-48389f607d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:31:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:06.855 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26ad6a8f-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:31:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:06.857 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:31:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:06.858 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap26ad6a8f-30, col_values=(('external_ids', {'iface-id': '0b9d56f1-a803-44f1-b709-3bfbc71e0f57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:31:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:06.858 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:31:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:06.907 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:06.907 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:06.908 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:06 compute-2 nova_compute[226829]: 2026-01-31 08:31:06.950 226833 DEBUG nova.virt.libvirt.vif [None req-a34ddbe7-fd31-4c68-87d0-e8d5b017e188 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:27:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-1',id=165,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMx4MwFIYcLufTgJTIjkaLBrJONZCyY8Yf6X6pbg0U3Us81VO6LfliTNhhDzgfgfMWpf9GXPg5uphWD0tDnxS1Zf2IaRx1ENKXJOF1zVaOJTSt3BjSDZpbJsUpD0/zLEPw==',key_name='tempest-keypair-253684506',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:29:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='48bbdbdee526499e90da7e971ede68d3',ramdisk_id='',reservation_id='r-uhld1k6v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',ima
ge_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-2017021026',owner_user_name='tempest-AttachVolumeMultiAttachTest-2017021026-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:29:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='85dfa8546d9942648bb4197c8b1947e3',uuid=10816ede-cf43-4736-aba7-48389f607d30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "address": "fa:16:3e:ec:78:f9", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaeb09486-b6", "ovs_interfaceid": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:31:06 compute-2 nova_compute[226829]: 2026-01-31 08:31:06.951 226833 DEBUG nova.network.os_vif_util [None req-a34ddbe7-fd31-4c68-87d0-e8d5b017e188 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converting VIF {"id": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "address": "fa:16:3e:ec:78:f9", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaeb09486-b6", "ovs_interfaceid": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:31:06 compute-2 nova_compute[226829]: 2026-01-31 08:31:06.951 226833 DEBUG nova.network.os_vif_util [None req-a34ddbe7-fd31-4c68-87d0-e8d5b017e188 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ec:78:f9,bridge_name='br-int',has_traffic_filtering=True,id=aeb09486-b68f-4fa4-a410-dd0ffaf49b05,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaeb09486-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:31:06 compute-2 nova_compute[226829]: 2026-01-31 08:31:06.952 226833 DEBUG os_vif [None req-a34ddbe7-fd31-4c68-87d0-e8d5b017e188 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:78:f9,bridge_name='br-int',has_traffic_filtering=True,id=aeb09486-b68f-4fa4-a410-dd0ffaf49b05,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaeb09486-b6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:31:06 compute-2 nova_compute[226829]: 2026-01-31 08:31:06.953 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:06 compute-2 nova_compute[226829]: 2026-01-31 08:31:06.954 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaeb09486-b6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:31:06 compute-2 nova_compute[226829]: 2026-01-31 08:31:06.955 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:06 compute-2 nova_compute[226829]: 2026-01-31 08:31:06.957 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:31:06 compute-2 nova_compute[226829]: 2026-01-31 08:31:06.959 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:06 compute-2 nova_compute[226829]: 2026-01-31 08:31:06.961 226833 INFO os_vif [None req-a34ddbe7-fd31-4c68-87d0-e8d5b017e188 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:78:f9,bridge_name='br-int',has_traffic_filtering=True,id=aeb09486-b68f-4fa4-a410-dd0ffaf49b05,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaeb09486-b6')
Jan 31 08:31:07 compute-2 nova_compute[226829]: 2026-01-31 08:31:07.267 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:07 compute-2 nova_compute[226829]: 2026-01-31 08:31:07.362 226833 DEBUG nova.network.neutron [req-cc53c0a2-b518-4d53-ab3f-0799f8c54fcb req-ac066bcf-94a8-425f-bdaa-05def3707d06 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Updated VIF entry in instance network info cache for port ffaf11e4-ab25-435a-a550-4c6c0b25801c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:31:07 compute-2 nova_compute[226829]: 2026-01-31 08:31:07.363 226833 DEBUG nova.network.neutron [req-cc53c0a2-b518-4d53-ab3f-0799f8c54fcb req-ac066bcf-94a8-425f-bdaa-05def3707d06 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Updating instance_info_cache with network_info: [{"id": "ffaf11e4-ab25-435a-a550-4c6c0b25801c", "address": "fa:16:3e:7c:0b:77", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaf11e4-ab", "ovs_interfaceid": "ffaf11e4-ab25-435a-a550-4c6c0b25801c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:31:07 compute-2 nova_compute[226829]: 2026-01-31 08:31:07.565 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:07 compute-2 nova_compute[226829]: 2026-01-31 08:31:07.565 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:31:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e376 e376: 3 total, 3 up, 3 in
Jan 31 08:31:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:07.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:08 compute-2 nova_compute[226829]: 2026-01-31 08:31:08.040 226833 DEBUG nova.compute.manager [req-d606ffe6-73fb-49b4-8016-b062ff3f3f96 req-2828e545-d749-4c79-9519-76d1ead3990d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received event network-vif-unplugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:31:08 compute-2 nova_compute[226829]: 2026-01-31 08:31:08.041 226833 DEBUG oslo_concurrency.lockutils [req-d606ffe6-73fb-49b4-8016-b062ff3f3f96 req-2828e545-d749-4c79-9519-76d1ead3990d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "10816ede-cf43-4736-aba7-48389f607d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:08 compute-2 nova_compute[226829]: 2026-01-31 08:31:08.042 226833 DEBUG oslo_concurrency.lockutils [req-d606ffe6-73fb-49b4-8016-b062ff3f3f96 req-2828e545-d749-4c79-9519-76d1ead3990d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:08 compute-2 nova_compute[226829]: 2026-01-31 08:31:08.042 226833 DEBUG oslo_concurrency.lockutils [req-d606ffe6-73fb-49b4-8016-b062ff3f3f96 req-2828e545-d749-4c79-9519-76d1ead3990d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:08 compute-2 nova_compute[226829]: 2026-01-31 08:31:08.043 226833 DEBUG nova.compute.manager [req-d606ffe6-73fb-49b4-8016-b062ff3f3f96 req-2828e545-d749-4c79-9519-76d1ead3990d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] No waiting events found dispatching network-vif-unplugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:31:08 compute-2 nova_compute[226829]: 2026-01-31 08:31:08.043 226833 DEBUG nova.compute.manager [req-d606ffe6-73fb-49b4-8016-b062ff3f3f96 req-2828e545-d749-4c79-9519-76d1ead3990d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received event network-vif-unplugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:31:08 compute-2 nova_compute[226829]: 2026-01-31 08:31:08.154 226833 DEBUG oslo_concurrency.lockutils [req-cc53c0a2-b518-4d53-ab3f-0799f8c54fcb req-ac066bcf-94a8-425f-bdaa-05def3707d06 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-cd7614f6-e095-4eaf-bc4a-f749f49d3da7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:31:08 compute-2 ceph-mon[77282]: pgmap v2940: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.3 MiB/s rd, 5.0 MiB/s wr, 175 op/s
Jan 31 08:31:08 compute-2 ceph-mon[77282]: osdmap e376: 3 total, 3 up, 3 in
Jan 31 08:31:08 compute-2 nova_compute[226829]: 2026-01-31 08:31:08.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:08.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:09 compute-2 ceph-mon[77282]: pgmap v2942: 305 pgs: 305 active+clean; 720 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.7 MiB/s wr, 72 op/s
Jan 31 08:31:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:09.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:10 compute-2 nova_compute[226829]: 2026-01-31 08:31:10.424 226833 INFO nova.virt.libvirt.driver [None req-a34ddbe7-fd31-4c68-87d0-e8d5b017e188 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Deleting instance files /var/lib/nova/instances/10816ede-cf43-4736-aba7-48389f607d30_del
Jan 31 08:31:10 compute-2 nova_compute[226829]: 2026-01-31 08:31:10.425 226833 INFO nova.virt.libvirt.driver [None req-a34ddbe7-fd31-4c68-87d0-e8d5b017e188 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Deletion of /var/lib/nova/instances/10816ede-cf43-4736-aba7-48389f607d30_del complete
Jan 31 08:31:10 compute-2 nova_compute[226829]: 2026-01-31 08:31:10.496 226833 DEBUG nova.compute.manager [req-a5dd5ba6-a23e-47b8-b2fc-b45650f4cf27 req-3736f21f-3195-43da-aa44-a8f650394b9e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received event network-vif-plugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:31:10 compute-2 nova_compute[226829]: 2026-01-31 08:31:10.496 226833 DEBUG oslo_concurrency.lockutils [req-a5dd5ba6-a23e-47b8-b2fc-b45650f4cf27 req-3736f21f-3195-43da-aa44-a8f650394b9e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "10816ede-cf43-4736-aba7-48389f607d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:10 compute-2 nova_compute[226829]: 2026-01-31 08:31:10.497 226833 DEBUG oslo_concurrency.lockutils [req-a5dd5ba6-a23e-47b8-b2fc-b45650f4cf27 req-3736f21f-3195-43da-aa44-a8f650394b9e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:10 compute-2 nova_compute[226829]: 2026-01-31 08:31:10.497 226833 DEBUG oslo_concurrency.lockutils [req-a5dd5ba6-a23e-47b8-b2fc-b45650f4cf27 req-3736f21f-3195-43da-aa44-a8f650394b9e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:10 compute-2 nova_compute[226829]: 2026-01-31 08:31:10.497 226833 DEBUG nova.compute.manager [req-a5dd5ba6-a23e-47b8-b2fc-b45650f4cf27 req-3736f21f-3195-43da-aa44-a8f650394b9e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] No waiting events found dispatching network-vif-plugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:31:10 compute-2 nova_compute[226829]: 2026-01-31 08:31:10.497 226833 WARNING nova.compute.manager [req-a5dd5ba6-a23e-47b8-b2fc-b45650f4cf27 req-3736f21f-3195-43da-aa44-a8f650394b9e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received unexpected event network-vif-plugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 for instance with vm_state active and task_state deleting.
Jan 31 08:31:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:10.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:11 compute-2 nova_compute[226829]: 2026-01-31 08:31:11.060 226833 INFO nova.compute.manager [None req-a34ddbe7-fd31-4c68-87d0-e8d5b017e188 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Took 4.65 seconds to destroy the instance on the hypervisor.
Jan 31 08:31:11 compute-2 nova_compute[226829]: 2026-01-31 08:31:11.060 226833 DEBUG oslo.service.loopingcall [None req-a34ddbe7-fd31-4c68-87d0-e8d5b017e188 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:31:11 compute-2 nova_compute[226829]: 2026-01-31 08:31:11.061 226833 DEBUG nova.compute.manager [-] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:31:11 compute-2 nova_compute[226829]: 2026-01-31 08:31:11.061 226833 DEBUG nova.network.neutron [-] [instance: 10816ede-cf43-4736-aba7-48389f607d30] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:31:11 compute-2 ceph-mon[77282]: pgmap v2943: 305 pgs: 305 active+clean; 695 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 79 KiB/s rd, 3.3 MiB/s wr, 86 op/s
Jan 31 08:31:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1602169119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:11.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:11 compute-2 ovn_controller[133834]: 2026-01-31T08:31:11Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7c:0b:77 10.100.0.4
Jan 31 08:31:11 compute-2 ovn_controller[133834]: 2026-01-31T08:31:11Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7c:0b:77 10.100.0.4
Jan 31 08:31:11 compute-2 nova_compute[226829]: 2026-01-31 08:31:11.955 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:12 compute-2 nova_compute[226829]: 2026-01-31 08:31:12.270 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:12.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:12 compute-2 sudo[305372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:31:12 compute-2 sudo[305372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:12 compute-2 sudo[305372]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:13 compute-2 sudo[305397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:31:13 compute-2 sudo[305397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:13 compute-2 sudo[305397]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:13 compute-2 ceph-mon[77282]: pgmap v2944: 305 pgs: 305 active+clean; 671 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 99 KiB/s rd, 3.3 MiB/s wr, 93 op/s
Jan 31 08:31:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:31:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:13.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:31:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:14.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:31:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:15.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:31:16 compute-2 nova_compute[226829]: 2026-01-31 08:31:16.062 226833 DEBUG nova.network.neutron [-] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:31:16 compute-2 nova_compute[226829]: 2026-01-31 08:31:16.314 226833 DEBUG nova.compute.manager [req-e95d9cbe-35e9-44a2-8c9d-63a007fe6980 req-a72b5813-4e40-4b1d-b572-b440491ca47e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received event network-vif-deleted-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:31:16 compute-2 nova_compute[226829]: 2026-01-31 08:31:16.314 226833 INFO nova.compute.manager [req-e95d9cbe-35e9-44a2-8c9d-63a007fe6980 req-a72b5813-4e40-4b1d-b572-b440491ca47e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Neutron deleted interface aeb09486-b68f-4fa4-a410-dd0ffaf49b05; detaching it from the instance and deleting it from the info cache
Jan 31 08:31:16 compute-2 nova_compute[226829]: 2026-01-31 08:31:16.315 226833 DEBUG nova.network.neutron [req-e95d9cbe-35e9-44a2-8c9d-63a007fe6980 req-a72b5813-4e40-4b1d-b572-b440491ca47e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:31:16 compute-2 nova_compute[226829]: 2026-01-31 08:31:16.319 226833 INFO nova.compute.manager [-] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Took 5.26 seconds to deallocate network for instance.
Jan 31 08:31:16 compute-2 ceph-mon[77282]: pgmap v2945: 305 pgs: 305 active+clean; 671 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 114 KiB/s rd, 2.5 MiB/s wr, 97 op/s
Jan 31 08:31:16 compute-2 nova_compute[226829]: 2026-01-31 08:31:16.546 226833 DEBUG nova.compute.manager [req-e95d9cbe-35e9-44a2-8c9d-63a007fe6980 req-a72b5813-4e40-4b1d-b572-b440491ca47e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Detach interface failed, port_id=aeb09486-b68f-4fa4-a410-dd0ffaf49b05, reason: Instance 10816ede-cf43-4736-aba7-48389f607d30 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:31:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:31:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:16.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:31:16 compute-2 nova_compute[226829]: 2026-01-31 08:31:16.771 226833 DEBUG oslo_concurrency.lockutils [None req-a34ddbe7-fd31-4c68-87d0-e8d5b017e188 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:16 compute-2 nova_compute[226829]: 2026-01-31 08:31:16.772 226833 DEBUG oslo_concurrency.lockutils [None req-a34ddbe7-fd31-4c68-87d0-e8d5b017e188 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:16 compute-2 nova_compute[226829]: 2026-01-31 08:31:16.957 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:17 compute-2 nova_compute[226829]: 2026-01-31 08:31:17.120 226833 DEBUG oslo_concurrency.processutils [None req-a34ddbe7-fd31-4c68-87d0-e8d5b017e188 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:31:17 compute-2 nova_compute[226829]: 2026-01-31 08:31:17.271 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:31:17 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2605209381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:17 compute-2 nova_compute[226829]: 2026-01-31 08:31:17.544 226833 DEBUG oslo_concurrency.processutils [None req-a34ddbe7-fd31-4c68-87d0-e8d5b017e188 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:31:17 compute-2 nova_compute[226829]: 2026-01-31 08:31:17.550 226833 DEBUG nova.compute.provider_tree [None req-a34ddbe7-fd31-4c68-87d0-e8d5b017e188 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:31:17 compute-2 nova_compute[226829]: 2026-01-31 08:31:17.601 226833 DEBUG nova.scheduler.client.report [None req-a34ddbe7-fd31-4c68-87d0-e8d5b017e188 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:31:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 31 08:31:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:17.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 31 08:31:17 compute-2 nova_compute[226829]: 2026-01-31 08:31:17.895 226833 DEBUG oslo_concurrency.lockutils [None req-a34ddbe7-fd31-4c68-87d0-e8d5b017e188 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:18 compute-2 ceph-mon[77282]: pgmap v2946: 305 pgs: 305 active+clean; 678 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 430 KiB/s rd, 2.6 MiB/s wr, 126 op/s
Jan 31 08:31:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2605209381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:18 compute-2 nova_compute[226829]: 2026-01-31 08:31:18.122 226833 INFO nova.scheduler.client.report [None req-a34ddbe7-fd31-4c68-87d0-e8d5b017e188 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Deleted allocations for instance 10816ede-cf43-4736-aba7-48389f607d30
Jan 31 08:31:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:31:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:18.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:31:19 compute-2 nova_compute[226829]: 2026-01-31 08:31:19.053 226833 DEBUG oslo_concurrency.lockutils [None req-a34ddbe7-fd31-4c68-87d0-e8d5b017e188 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:19.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:20 compute-2 ceph-mon[77282]: pgmap v2947: 305 pgs: 305 active+clean; 667 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 429 KiB/s rd, 2.8 MiB/s wr, 136 op/s
Jan 31 08:31:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:20.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:21 compute-2 nova_compute[226829]: 2026-01-31 08:31:21.342 226833 DEBUG oslo_concurrency.lockutils [None req-1a5bde41-bdc9-47ab-a4f5-90c6e9cbe203 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:21 compute-2 nova_compute[226829]: 2026-01-31 08:31:21.343 226833 DEBUG oslo_concurrency.lockutils [None req-1a5bde41-bdc9-47ab-a4f5-90c6e9cbe203 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:21 compute-2 nova_compute[226829]: 2026-01-31 08:31:21.344 226833 DEBUG oslo_concurrency.lockutils [None req-1a5bde41-bdc9-47ab-a4f5-90c6e9cbe203 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:21 compute-2 nova_compute[226829]: 2026-01-31 08:31:21.344 226833 DEBUG oslo_concurrency.lockutils [None req-1a5bde41-bdc9-47ab-a4f5-90c6e9cbe203 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:21 compute-2 nova_compute[226829]: 2026-01-31 08:31:21.345 226833 DEBUG oslo_concurrency.lockutils [None req-1a5bde41-bdc9-47ab-a4f5-90c6e9cbe203 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:21 compute-2 nova_compute[226829]: 2026-01-31 08:31:21.347 226833 INFO nova.compute.manager [None req-1a5bde41-bdc9-47ab-a4f5-90c6e9cbe203 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Terminating instance
Jan 31 08:31:21 compute-2 nova_compute[226829]: 2026-01-31 08:31:21.348 226833 DEBUG nova.compute.manager [None req-1a5bde41-bdc9-47ab-a4f5-90c6e9cbe203 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:31:21 compute-2 kernel: tapa3ff723d-9e (unregistering): left promiscuous mode
Jan 31 08:31:21 compute-2 NetworkManager[48999]: <info>  [1769848281.4072] device (tapa3ff723d-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:31:21 compute-2 ovn_controller[133834]: 2026-01-31T08:31:21Z|00697|binding|INFO|Releasing lport a3ff723d-9e34-45ef-881d-6534126ae169 from this chassis (sb_readonly=0)
Jan 31 08:31:21 compute-2 ovn_controller[133834]: 2026-01-31T08:31:21Z|00698|binding|INFO|Setting lport a3ff723d-9e34-45ef-881d-6534126ae169 down in Southbound
Jan 31 08:31:21 compute-2 ovn_controller[133834]: 2026-01-31T08:31:21Z|00699|binding|INFO|Removing iface tapa3ff723d-9e ovn-installed in OVS
Jan 31 08:31:21 compute-2 nova_compute[226829]: 2026-01-31 08:31:21.414 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:21 compute-2 nova_compute[226829]: 2026-01-31 08:31:21.427 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:21 compute-2 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d000000a4.scope: Deactivated successfully.
Jan 31 08:31:21 compute-2 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d000000a4.scope: Consumed 20.796s CPU time.
Jan 31 08:31:21 compute-2 systemd-machined[195142]: Machine qemu-77-instance-000000a4 terminated.
Jan 31 08:31:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:21.550 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:c5:21 10.100.0.5'], port_security=['fa:16:3e:1b:c5:21 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e89132fd-2d0c-475e-a3c5-0407e4cbbbb8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbdbdee526499e90da7e971ede68d3', 'neutron:revision_number': '6', 'neutron:security_group_ids': '8d5863f6-4aa0-486a-96ed-eb36f7d4a61d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=379aaecc-3dde-4f00-82cf-dc8bd8367d4b, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=a3ff723d-9e34-45ef-881d-6534126ae169) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:31:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:21.553 143841 INFO neutron.agent.ovn.metadata.agent [-] Port a3ff723d-9e34-45ef-881d-6534126ae169 in datapath 26ad6a8f-33d5-432e-83d3-63a9d2f165ad unbound from our chassis
Jan 31 08:31:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:21.556 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 26ad6a8f-33d5-432e-83d3-63a9d2f165ad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:31:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:21.558 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d263fb30-c7e7-4c52-b2cf-4435dfd3eb82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:21.558 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad namespace which is not needed anymore
Jan 31 08:31:21 compute-2 nova_compute[226829]: 2026-01-31 08:31:21.566 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:21 compute-2 nova_compute[226829]: 2026-01-31 08:31:21.571 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:21 compute-2 nova_compute[226829]: 2026-01-31 08:31:21.581 226833 INFO nova.virt.libvirt.driver [-] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Instance destroyed successfully.
Jan 31 08:31:21 compute-2 nova_compute[226829]: 2026-01-31 08:31:21.581 226833 DEBUG nova.objects.instance [None req-1a5bde41-bdc9-47ab-a4f5-90c6e9cbe203 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'resources' on Instance uuid e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:31:21 compute-2 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[302611]: [NOTICE]   (302620) : haproxy version is 2.8.14-c23fe91
Jan 31 08:31:21 compute-2 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[302611]: [NOTICE]   (302620) : path to executable is /usr/sbin/haproxy
Jan 31 08:31:21 compute-2 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[302611]: [WARNING]  (302620) : Exiting Master process...
Jan 31 08:31:21 compute-2 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[302611]: [WARNING]  (302620) : Exiting Master process...
Jan 31 08:31:21 compute-2 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[302611]: [ALERT]    (302620) : Current worker (302622) exited with code 143 (Terminated)
Jan 31 08:31:21 compute-2 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[302611]: [WARNING]  (302620) : All workers exited. Exiting... (0)
Jan 31 08:31:21 compute-2 systemd[1]: libpod-d0ca2b903f6a2b6fbea55cb9ae34e193ff4d260b366d5d9d2d00a14ab6424e97.scope: Deactivated successfully.
Jan 31 08:31:21 compute-2 podman[305480]: 2026-01-31 08:31:21.731637928 +0000 UTC m=+0.042764701 container died d0ca2b903f6a2b6fbea55cb9ae34e193ff4d260b366d5d9d2d00a14ab6424e97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:31:21 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d0ca2b903f6a2b6fbea55cb9ae34e193ff4d260b366d5d9d2d00a14ab6424e97-userdata-shm.mount: Deactivated successfully.
Jan 31 08:31:21 compute-2 systemd[1]: var-lib-containers-storage-overlay-67c6305b77055b989fd97abdb7785ff3a5c6dc509c2d74d7072596f86b75a0dd-merged.mount: Deactivated successfully.
Jan 31 08:31:21 compute-2 podman[305480]: 2026-01-31 08:31:21.769608379 +0000 UTC m=+0.080735142 container cleanup d0ca2b903f6a2b6fbea55cb9ae34e193ff4d260b366d5d9d2d00a14ab6424e97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 08:31:21 compute-2 systemd[1]: libpod-conmon-d0ca2b903f6a2b6fbea55cb9ae34e193ff4d260b366d5d9d2d00a14ab6424e97.scope: Deactivated successfully.
Jan 31 08:31:21 compute-2 podman[305509]: 2026-01-31 08:31:21.83262508 +0000 UTC m=+0.047873121 container remove d0ca2b903f6a2b6fbea55cb9ae34e193ff4d260b366d5d9d2d00a14ab6424e97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 08:31:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:21.837 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[309765cd-f173-4b91-a121-facaadc2ec68]: (4, ('Sat Jan 31 08:31:21 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad (d0ca2b903f6a2b6fbea55cb9ae34e193ff4d260b366d5d9d2d00a14ab6424e97)\nd0ca2b903f6a2b6fbea55cb9ae34e193ff4d260b366d5d9d2d00a14ab6424e97\nSat Jan 31 08:31:21 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad (d0ca2b903f6a2b6fbea55cb9ae34e193ff4d260b366d5d9d2d00a14ab6424e97)\nd0ca2b903f6a2b6fbea55cb9ae34e193ff4d260b366d5d9d2d00a14ab6424e97\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:21.840 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f2954de7-474d-4dfc-81b5-439da905e319]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:21.841 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26ad6a8f-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:31:21 compute-2 nova_compute[226829]: 2026-01-31 08:31:21.842 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:21 compute-2 kernel: tap26ad6a8f-30: left promiscuous mode
Jan 31 08:31:21 compute-2 nova_compute[226829]: 2026-01-31 08:31:21.851 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:21 compute-2 nova_compute[226829]: 2026-01-31 08:31:21.852 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848266.850103, 10816ede-cf43-4736-aba7-48389f607d30 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:31:21 compute-2 nova_compute[226829]: 2026-01-31 08:31:21.852 226833 INFO nova.compute.manager [-] [instance: 10816ede-cf43-4736-aba7-48389f607d30] VM Stopped (Lifecycle Event)
Jan 31 08:31:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:21.855 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cb78197b-e564-477f-9589-1be8da5af8e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:21 compute-2 nova_compute[226829]: 2026-01-31 08:31:21.858 226833 DEBUG nova.virt.libvirt.vif [None req-1a5bde41-bdc9-47ab-a4f5-90c6e9cbe203 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:26:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-0',id=164,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMx4MwFIYcLufTgJTIjkaLBrJONZCyY8Yf6X6pbg0U3Us81VO6LfliTNhhDzgfgfMWpf9GXPg5uphWD0tDnxS1Zf2IaRx1ENKXJOF1zVaOJTSt3BjSDZpbJsUpD0/zLEPw==',key_name='tempest-keypair-253684506',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:28:17Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='48bbdbdee526499e90da7e971ede68d3',ramdisk_id='',reservation_id='r-ec12ij1b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-2017021026',owner_user_name='tempest-AttachVolumeMultiAttachTest-2017021026-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:28:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='85dfa8546d9942648bb4197c8b1947e3',uuid=e89132fd-2d0c-475e-a3c5-0407e4cbbbb8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3ff723d-9e34-45ef-881d-6534126ae169", "address": "fa:16:3e:1b:c5:21", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ff723d-9e", "ovs_interfaceid": "a3ff723d-9e34-45ef-881d-6534126ae169", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:31:21 compute-2 nova_compute[226829]: 2026-01-31 08:31:21.860 226833 DEBUG nova.network.os_vif_util [None req-1a5bde41-bdc9-47ab-a4f5-90c6e9cbe203 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converting VIF {"id": "a3ff723d-9e34-45ef-881d-6534126ae169", "address": "fa:16:3e:1b:c5:21", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ff723d-9e", "ovs_interfaceid": "a3ff723d-9e34-45ef-881d-6534126ae169", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:31:21 compute-2 nova_compute[226829]: 2026-01-31 08:31:21.860 226833 DEBUG nova.network.os_vif_util [None req-1a5bde41-bdc9-47ab-a4f5-90c6e9cbe203 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1b:c5:21,bridge_name='br-int',has_traffic_filtering=True,id=a3ff723d-9e34-45ef-881d-6534126ae169,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ff723d-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:31:21 compute-2 nova_compute[226829]: 2026-01-31 08:31:21.861 226833 DEBUG os_vif [None req-1a5bde41-bdc9-47ab-a4f5-90c6e9cbe203 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1b:c5:21,bridge_name='br-int',has_traffic_filtering=True,id=a3ff723d-9e34-45ef-881d-6534126ae169,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ff723d-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:31:21 compute-2 nova_compute[226829]: 2026-01-31 08:31:21.863 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:21 compute-2 nova_compute[226829]: 2026-01-31 08:31:21.863 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3ff723d-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:31:21 compute-2 nova_compute[226829]: 2026-01-31 08:31:21.865 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:21 compute-2 nova_compute[226829]: 2026-01-31 08:31:21.867 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:21 compute-2 nova_compute[226829]: 2026-01-31 08:31:21.870 226833 INFO os_vif [None req-1a5bde41-bdc9-47ab-a4f5-90c6e9cbe203 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1b:c5:21,bridge_name='br-int',has_traffic_filtering=True,id=a3ff723d-9e34-45ef-881d-6534126ae169,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ff723d-9e')
Jan 31 08:31:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:21.869 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c2f2d540-9fb4-45ca-bcc9-b200c7317a48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:21.870 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[66c1fb33-df96-4c26-b965-8621da203485]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:31:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:21.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:31:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:21.883 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6189e899-1a1d-4629-8e44-a68f75162ddb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 834874, 'reachable_time': 31193, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305528, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:21 compute-2 systemd[1]: run-netns-ovnmeta\x2d26ad6a8f\x2d33d5\x2d432e\x2d83d3\x2d63a9d2f165ad.mount: Deactivated successfully.
Jan 31 08:31:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:21.887 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:31:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:21.888 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[0b02e106-8eb3-4c9b-b799-f63b49c91801]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:22 compute-2 ceph-mon[77282]: pgmap v2948: 305 pgs: 305 active+clean; 670 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 370 KiB/s rd, 3.4 MiB/s wr, 130 op/s
Jan 31 08:31:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3503097286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:22 compute-2 nova_compute[226829]: 2026-01-31 08:31:22.214 226833 DEBUG nova.compute.manager [req-fdec67d6-3f52-4fac-9dd3-18a00c0835f9 req-a12c51e2-c759-4c22-99d6-3d3a50472ba3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Received event network-vif-unplugged-a3ff723d-9e34-45ef-881d-6534126ae169 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:31:22 compute-2 nova_compute[226829]: 2026-01-31 08:31:22.215 226833 DEBUG oslo_concurrency.lockutils [req-fdec67d6-3f52-4fac-9dd3-18a00c0835f9 req-a12c51e2-c759-4c22-99d6-3d3a50472ba3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:22 compute-2 nova_compute[226829]: 2026-01-31 08:31:22.215 226833 DEBUG oslo_concurrency.lockutils [req-fdec67d6-3f52-4fac-9dd3-18a00c0835f9 req-a12c51e2-c759-4c22-99d6-3d3a50472ba3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:22 compute-2 nova_compute[226829]: 2026-01-31 08:31:22.216 226833 DEBUG oslo_concurrency.lockutils [req-fdec67d6-3f52-4fac-9dd3-18a00c0835f9 req-a12c51e2-c759-4c22-99d6-3d3a50472ba3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:22 compute-2 nova_compute[226829]: 2026-01-31 08:31:22.216 226833 DEBUG nova.compute.manager [req-fdec67d6-3f52-4fac-9dd3-18a00c0835f9 req-a12c51e2-c759-4c22-99d6-3d3a50472ba3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] No waiting events found dispatching network-vif-unplugged-a3ff723d-9e34-45ef-881d-6534126ae169 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:31:22 compute-2 nova_compute[226829]: 2026-01-31 08:31:22.216 226833 DEBUG nova.compute.manager [req-fdec67d6-3f52-4fac-9dd3-18a00c0835f9 req-a12c51e2-c759-4c22-99d6-3d3a50472ba3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Received event network-vif-unplugged-a3ff723d-9e34-45ef-881d-6534126ae169 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:31:22 compute-2 nova_compute[226829]: 2026-01-31 08:31:22.251 226833 DEBUG nova.compute.manager [None req-9cc8befd-6162-46bf-a539-c30059ff7ca0 - - - - - -] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:31:22 compute-2 nova_compute[226829]: 2026-01-31 08:31:22.272 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:22 compute-2 nova_compute[226829]: 2026-01-31 08:31:22.527 226833 INFO nova.virt.libvirt.driver [None req-1a5bde41-bdc9-47ab-a4f5-90c6e9cbe203 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Deleting instance files /var/lib/nova/instances/e89132fd-2d0c-475e-a3c5-0407e4cbbbb8_del
Jan 31 08:31:22 compute-2 nova_compute[226829]: 2026-01-31 08:31:22.528 226833 INFO nova.virt.libvirt.driver [None req-1a5bde41-bdc9-47ab-a4f5-90c6e9cbe203 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Deletion of /var/lib/nova/instances/e89132fd-2d0c-475e-a3c5-0407e4cbbbb8_del complete
Jan 31 08:31:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:22.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:23 compute-2 nova_compute[226829]: 2026-01-31 08:31:23.117 226833 INFO nova.compute.manager [None req-1a5bde41-bdc9-47ab-a4f5-90c6e9cbe203 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Took 1.77 seconds to destroy the instance on the hypervisor.
Jan 31 08:31:23 compute-2 nova_compute[226829]: 2026-01-31 08:31:23.118 226833 DEBUG oslo.service.loopingcall [None req-1a5bde41-bdc9-47ab-a4f5-90c6e9cbe203 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:31:23 compute-2 nova_compute[226829]: 2026-01-31 08:31:23.118 226833 DEBUG nova.compute.manager [-] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:31:23 compute-2 nova_compute[226829]: 2026-01-31 08:31:23.119 226833 DEBUG nova.network.neutron [-] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:31:23 compute-2 nova_compute[226829]: 2026-01-31 08:31:23.210 226833 DEBUG oslo_concurrency.lockutils [None req-4031f287-00f1-49ad-86e6-6223d5b517ec 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:23 compute-2 nova_compute[226829]: 2026-01-31 08:31:23.210 226833 DEBUG oslo_concurrency.lockutils [None req-4031f287-00f1-49ad-86e6-6223d5b517ec 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:23 compute-2 ceph-mon[77282]: pgmap v2949: 305 pgs: 305 active+clean; 645 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 344 KiB/s rd, 2.2 MiB/s wr, 106 op/s
Jan 31 08:31:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:23.326 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=71, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=70) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:31:23 compute-2 nova_compute[226829]: 2026-01-31 08:31:23.327 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:23.328 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:31:23 compute-2 nova_compute[226829]: 2026-01-31 08:31:23.367 226833 DEBUG nova.objects.instance [None req-4031f287-00f1-49ad-86e6-6223d5b517ec 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lazy-loading 'flavor' on Instance uuid cd7614f6-e095-4eaf-bc4a-f749f49d3da7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:31:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:23.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:24 compute-2 nova_compute[226829]: 2026-01-31 08:31:24.026 226833 DEBUG oslo_concurrency.lockutils [None req-4031f287-00f1-49ad-86e6-6223d5b517ec 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:31:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:24.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:31:25 compute-2 nova_compute[226829]: 2026-01-31 08:31:25.022 226833 DEBUG oslo_concurrency.lockutils [None req-4031f287-00f1-49ad-86e6-6223d5b517ec 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:25 compute-2 nova_compute[226829]: 2026-01-31 08:31:25.023 226833 DEBUG oslo_concurrency.lockutils [None req-4031f287-00f1-49ad-86e6-6223d5b517ec 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:25 compute-2 nova_compute[226829]: 2026-01-31 08:31:25.023 226833 INFO nova.compute.manager [None req-4031f287-00f1-49ad-86e6-6223d5b517ec 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Attaching volume 85d89a65-6e1e-4b74-97bb-e5082330369c to /dev/vdb
Jan 31 08:31:25 compute-2 podman[305549]: 2026-01-31 08:31:25.212685134 +0000 UTC m=+0.096447998 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 08:31:25 compute-2 nova_compute[226829]: 2026-01-31 08:31:25.226 226833 DEBUG os_brick.utils [None req-4031f287-00f1-49ad-86e6-6223d5b517ec 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:31:25 compute-2 nova_compute[226829]: 2026-01-31 08:31:25.229 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:31:25 compute-2 nova_compute[226829]: 2026-01-31 08:31:25.240 236868 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:31:25 compute-2 nova_compute[226829]: 2026-01-31 08:31:25.240 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[0fa0cbdd-bab7-4ef9-90da-ce37420deea6]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:25 compute-2 nova_compute[226829]: 2026-01-31 08:31:25.242 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:31:25 compute-2 nova_compute[226829]: 2026-01-31 08:31:25.247 236868 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:31:25 compute-2 nova_compute[226829]: 2026-01-31 08:31:25.247 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[a2ac6dce-0240-4cba-90ef-8e73b280b475]: (4, ('InitiatorName=iqn.1994-05.com.redhat:70a4e945afb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:25 compute-2 nova_compute[226829]: 2026-01-31 08:31:25.250 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:31:25 compute-2 nova_compute[226829]: 2026-01-31 08:31:25.255 236868 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:31:25 compute-2 nova_compute[226829]: 2026-01-31 08:31:25.255 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[1e1953ec-23d2-4009-afa9-d3ada0ccfbbf]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:25 compute-2 nova_compute[226829]: 2026-01-31 08:31:25.257 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[1e46e311-184f-4946-9fac-3d3b4ba0cae7]: (4, 'd14f084b-ec77-4fba-801f-103494d34b3a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:25 compute-2 nova_compute[226829]: 2026-01-31 08:31:25.257 226833 DEBUG oslo_concurrency.processutils [None req-4031f287-00f1-49ad-86e6-6223d5b517ec 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:31:25 compute-2 nova_compute[226829]: 2026-01-31 08:31:25.287 226833 DEBUG nova.compute.manager [req-e395e57b-8172-41bf-9151-e31ed9c7e011 req-672cc8ce-3dd7-4726-8afc-1688ea1b7b57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Received event network-vif-plugged-a3ff723d-9e34-45ef-881d-6534126ae169 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:31:25 compute-2 nova_compute[226829]: 2026-01-31 08:31:25.288 226833 DEBUG oslo_concurrency.lockutils [req-e395e57b-8172-41bf-9151-e31ed9c7e011 req-672cc8ce-3dd7-4726-8afc-1688ea1b7b57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:25 compute-2 nova_compute[226829]: 2026-01-31 08:31:25.288 226833 DEBUG oslo_concurrency.lockutils [req-e395e57b-8172-41bf-9151-e31ed9c7e011 req-672cc8ce-3dd7-4726-8afc-1688ea1b7b57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:25 compute-2 nova_compute[226829]: 2026-01-31 08:31:25.289 226833 DEBUG oslo_concurrency.lockutils [req-e395e57b-8172-41bf-9151-e31ed9c7e011 req-672cc8ce-3dd7-4726-8afc-1688ea1b7b57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:25 compute-2 nova_compute[226829]: 2026-01-31 08:31:25.289 226833 DEBUG nova.compute.manager [req-e395e57b-8172-41bf-9151-e31ed9c7e011 req-672cc8ce-3dd7-4726-8afc-1688ea1b7b57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] No waiting events found dispatching network-vif-plugged-a3ff723d-9e34-45ef-881d-6534126ae169 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:31:25 compute-2 nova_compute[226829]: 2026-01-31 08:31:25.289 226833 WARNING nova.compute.manager [req-e395e57b-8172-41bf-9151-e31ed9c7e011 req-672cc8ce-3dd7-4726-8afc-1688ea1b7b57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Received unexpected event network-vif-plugged-a3ff723d-9e34-45ef-881d-6534126ae169 for instance with vm_state active and task_state deleting.
Jan 31 08:31:25 compute-2 nova_compute[226829]: 2026-01-31 08:31:25.291 226833 DEBUG oslo_concurrency.processutils [None req-4031f287-00f1-49ad-86e6-6223d5b517ec 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:31:25 compute-2 nova_compute[226829]: 2026-01-31 08:31:25.294 226833 DEBUG os_brick.initiator.connectors.lightos [None req-4031f287-00f1-49ad-86e6-6223d5b517ec 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:31:25 compute-2 nova_compute[226829]: 2026-01-31 08:31:25.294 226833 DEBUG os_brick.initiator.connectors.lightos [None req-4031f287-00f1-49ad-86e6-6223d5b517ec 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:31:25 compute-2 nova_compute[226829]: 2026-01-31 08:31:25.294 226833 DEBUG os_brick.initiator.connectors.lightos [None req-4031f287-00f1-49ad-86e6-6223d5b517ec 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:31:25 compute-2 nova_compute[226829]: 2026-01-31 08:31:25.295 226833 DEBUG os_brick.utils [None req-4031f287-00f1-49ad-86e6-6223d5b517ec 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] <== get_connector_properties: return (67ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:70a4e945afb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'd14f084b-ec77-4fba-801f-103494d34b3a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:31:25 compute-2 nova_compute[226829]: 2026-01-31 08:31:25.295 226833 DEBUG nova.virt.block_device [None req-4031f287-00f1-49ad-86e6-6223d5b517ec 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Updating existing volume attachment record: 3cbea2b7-8782-4e83-a9f6-74d8d9194732 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:31:25 compute-2 ceph-mon[77282]: pgmap v2950: 305 pgs: 305 active+clean; 626 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 335 KiB/s rd, 1.9 MiB/s wr, 109 op/s
Jan 31 08:31:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:25.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:26 compute-2 nova_compute[226829]: 2026-01-31 08:31:26.361 226833 DEBUG nova.network.neutron [-] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:31:26 compute-2 nova_compute[226829]: 2026-01-31 08:31:26.375 226833 DEBUG nova.compute.manager [req-61018928-9f27-4240-9458-9a2121c9f03e req-692f3613-7802-4e4d-9a5e-829a182057de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Received event network-vif-deleted-a3ff723d-9e34-45ef-881d-6534126ae169 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:31:26 compute-2 nova_compute[226829]: 2026-01-31 08:31:26.375 226833 INFO nova.compute.manager [req-61018928-9f27-4240-9458-9a2121c9f03e req-692f3613-7802-4e4d-9a5e-829a182057de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Neutron deleted interface a3ff723d-9e34-45ef-881d-6534126ae169; detaching it from the instance and deleting it from the info cache
Jan 31 08:31:26 compute-2 nova_compute[226829]: 2026-01-31 08:31:26.375 226833 DEBUG nova.network.neutron [req-61018928-9f27-4240-9458-9a2121c9f03e req-692f3613-7802-4e4d-9a5e-829a182057de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:31:26 compute-2 sudo[305583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:31:26 compute-2 sudo[305583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:26 compute-2 sudo[305583]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:26.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:26 compute-2 sudo[305608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:31:26 compute-2 sudo[305608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:26 compute-2 sudo[305608]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:26 compute-2 sudo[305633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:31:26 compute-2 sudo[305633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:26 compute-2 sudo[305633]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:26 compute-2 sudo[305658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 08:31:26 compute-2 sudo[305658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:26 compute-2 nova_compute[226829]: 2026-01-31 08:31:26.742 226833 DEBUG nova.objects.instance [None req-4031f287-00f1-49ad-86e6-6223d5b517ec 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lazy-loading 'flavor' on Instance uuid cd7614f6-e095-4eaf-bc4a-f749f49d3da7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:31:26 compute-2 nova_compute[226829]: 2026-01-31 08:31:26.831 226833 INFO nova.compute.manager [-] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Took 3.71 seconds to deallocate network for instance.
Jan 31 08:31:26 compute-2 nova_compute[226829]: 2026-01-31 08:31:26.839 226833 DEBUG nova.compute.manager [req-61018928-9f27-4240-9458-9a2121c9f03e req-692f3613-7802-4e4d-9a5e-829a182057de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Detach interface failed, port_id=a3ff723d-9e34-45ef-881d-6534126ae169, reason: Instance e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:31:26 compute-2 nova_compute[226829]: 2026-01-31 08:31:26.865 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:26 compute-2 sudo[305658]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:26 compute-2 nova_compute[226829]: 2026-01-31 08:31:26.994 226833 DEBUG nova.virt.libvirt.driver [None req-4031f287-00f1-49ad-86e6-6223d5b517ec 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Attempting to attach volume 85d89a65-6e1e-4b74-97bb-e5082330369c with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 31 08:31:26 compute-2 nova_compute[226829]: 2026-01-31 08:31:26.999 226833 DEBUG nova.virt.libvirt.guest [None req-4031f287-00f1-49ad-86e6-6223d5b517ec 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 08:31:26 compute-2 nova_compute[226829]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:31:26 compute-2 nova_compute[226829]:   <source protocol="rbd" name="volumes/volume-85d89a65-6e1e-4b74-97bb-e5082330369c">
Jan 31 08:31:26 compute-2 nova_compute[226829]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:31:26 compute-2 nova_compute[226829]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:31:26 compute-2 nova_compute[226829]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:31:26 compute-2 nova_compute[226829]:   </source>
Jan 31 08:31:26 compute-2 nova_compute[226829]:   <auth username="openstack">
Jan 31 08:31:26 compute-2 nova_compute[226829]:     <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:31:26 compute-2 nova_compute[226829]:   </auth>
Jan 31 08:31:26 compute-2 nova_compute[226829]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:31:26 compute-2 nova_compute[226829]:   <serial>85d89a65-6e1e-4b74-97bb-e5082330369c</serial>
Jan 31 08:31:26 compute-2 nova_compute[226829]: </disk>
Jan 31 08:31:26 compute-2 nova_compute[226829]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 31 08:31:27 compute-2 nova_compute[226829]: 2026-01-31 08:31:27.055 226833 DEBUG oslo_concurrency.lockutils [None req-1a5bde41-bdc9-47ab-a4f5-90c6e9cbe203 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:27 compute-2 nova_compute[226829]: 2026-01-31 08:31:27.056 226833 DEBUG oslo_concurrency.lockutils [None req-1a5bde41-bdc9-47ab-a4f5-90c6e9cbe203 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3568431585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:31:27 compute-2 nova_compute[226829]: 2026-01-31 08:31:27.132 226833 DEBUG oslo_concurrency.processutils [None req-1a5bde41-bdc9-47ab-a4f5-90c6e9cbe203 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:31:27 compute-2 nova_compute[226829]: 2026-01-31 08:31:27.275 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:31:27 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2823635743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:27 compute-2 nova_compute[226829]: 2026-01-31 08:31:27.635 226833 DEBUG oslo_concurrency.processutils [None req-1a5bde41-bdc9-47ab-a4f5-90c6e9cbe203 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:31:27 compute-2 nova_compute[226829]: 2026-01-31 08:31:27.642 226833 DEBUG nova.compute.provider_tree [None req-1a5bde41-bdc9-47ab-a4f5-90c6e9cbe203 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:31:27 compute-2 nova_compute[226829]: 2026-01-31 08:31:27.670 226833 DEBUG nova.virt.libvirt.driver [None req-4031f287-00f1-49ad-86e6-6223d5b517ec 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:31:27 compute-2 nova_compute[226829]: 2026-01-31 08:31:27.671 226833 DEBUG nova.virt.libvirt.driver [None req-4031f287-00f1-49ad-86e6-6223d5b517ec 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:31:27 compute-2 nova_compute[226829]: 2026-01-31 08:31:27.671 226833 DEBUG nova.virt.libvirt.driver [None req-4031f287-00f1-49ad-86e6-6223d5b517ec 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:31:27 compute-2 nova_compute[226829]: 2026-01-31 08:31:27.672 226833 DEBUG nova.virt.libvirt.driver [None req-4031f287-00f1-49ad-86e6-6223d5b517ec 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] No VIF found with MAC fa:16:3e:7c:0b:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:31:27 compute-2 nova_compute[226829]: 2026-01-31 08:31:27.763 226833 DEBUG nova.scheduler.client.report [None req-1a5bde41-bdc9-47ab-a4f5-90c6e9cbe203 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:31:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:27.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:28 compute-2 nova_compute[226829]: 2026-01-31 08:31:28.207 226833 DEBUG oslo_concurrency.lockutils [None req-1a5bde41-bdc9-47ab-a4f5-90c6e9cbe203 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:28 compute-2 ceph-mon[77282]: pgmap v2951: 305 pgs: 305 active+clean; 563 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 321 KiB/s rd, 1.9 MiB/s wr, 110 op/s
Jan 31 08:31:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:31:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 08:31:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2823635743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:31:28 compute-2 nova_compute[226829]: 2026-01-31 08:31:28.402 226833 INFO nova.scheduler.client.report [None req-1a5bde41-bdc9-47ab-a4f5-90c6e9cbe203 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Deleted allocations for instance e89132fd-2d0c-475e-a3c5-0407e4cbbbb8
Jan 31 08:31:28 compute-2 systemd[1]: Starting dnf makecache...
Jan 31 08:31:28 compute-2 sudo[305745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:31:28 compute-2 sudo[305745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:28 compute-2 sudo[305745]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:31:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:28.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:31:28 compute-2 sudo[305771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:31:28 compute-2 sudo[305771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:28 compute-2 sudo[305771]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:28 compute-2 sudo[305796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:31:28 compute-2 sudo[305796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:28 compute-2 sudo[305796]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:28 compute-2 sudo[305821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:31:28 compute-2 sudo[305821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:28 compute-2 dnf[305769]: Metadata cache refreshed recently.
Jan 31 08:31:29 compute-2 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 31 08:31:29 compute-2 systemd[1]: Finished dnf makecache.
Jan 31 08:31:29 compute-2 sudo[305821]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:29 compute-2 nova_compute[226829]: 2026-01-31 08:31:29.402 226833 DEBUG oslo_concurrency.lockutils [None req-1a5bde41-bdc9-47ab-a4f5-90c6e9cbe203 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "e89132fd-2d0c-475e-a3c5-0407e4cbbbb8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:31:29 compute-2 ceph-mon[77282]: pgmap v2952: 305 pgs: 305 active+clean; 563 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 57 KiB/s rd, 1.8 MiB/s wr, 85 op/s
Jan 31 08:31:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:31:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/942918122' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:31:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 08:31:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1198668702' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:31:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:31:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:31:29 compute-2 nova_compute[226829]: 2026-01-31 08:31:29.825 226833 DEBUG oslo_concurrency.lockutils [None req-4031f287-00f1-49ad-86e6-6223d5b517ec 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:31:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:29.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:31:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:30.331 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '71'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:31:30 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:31:30 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:31:30 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:31:30 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:31:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:30.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:31 compute-2 ceph-mon[77282]: pgmap v2953: 305 pgs: 305 active+clean; 563 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 1.5 MiB/s wr, 74 op/s
Jan 31 08:31:31 compute-2 nova_compute[226829]: 2026-01-31 08:31:31.867 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:31:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:31.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:31:32 compute-2 nova_compute[226829]: 2026-01-31 08:31:32.277 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:32.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:32 compute-2 nova_compute[226829]: 2026-01-31 08:31:32.940 226833 DEBUG oslo_concurrency.lockutils [None req-1149560b-3cfd-41be-a4f8-8ff6512dec58 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:32 compute-2 nova_compute[226829]: 2026-01-31 08:31:32.941 226833 DEBUG oslo_concurrency.lockutils [None req-1149560b-3cfd-41be-a4f8-8ff6512dec58 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:32 compute-2 nova_compute[226829]: 2026-01-31 08:31:32.970 226833 INFO nova.compute.manager [None req-1149560b-3cfd-41be-a4f8-8ff6512dec58 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Detaching volume 85d89a65-6e1e-4b74-97bb-e5082330369c
Jan 31 08:31:33 compute-2 sudo[305880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:31:33 compute-2 sudo[305880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:33 compute-2 sudo[305880]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:33 compute-2 sudo[305905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:31:33 compute-2 sudo[305905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:33 compute-2 sudo[305905]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:33 compute-2 nova_compute[226829]: 2026-01-31 08:31:33.200 226833 INFO nova.virt.block_device [None req-1149560b-3cfd-41be-a4f8-8ff6512dec58 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Attempting to driver detach volume 85d89a65-6e1e-4b74-97bb-e5082330369c from mountpoint /dev/vdb
Jan 31 08:31:33 compute-2 nova_compute[226829]: 2026-01-31 08:31:33.208 226833 DEBUG nova.virt.libvirt.driver [None req-1149560b-3cfd-41be-a4f8-8ff6512dec58 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Attempting to detach device vdb from instance cd7614f6-e095-4eaf-bc4a-f749f49d3da7 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 08:31:33 compute-2 nova_compute[226829]: 2026-01-31 08:31:33.209 226833 DEBUG nova.virt.libvirt.guest [None req-1149560b-3cfd-41be-a4f8-8ff6512dec58 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:31:33 compute-2 nova_compute[226829]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:31:33 compute-2 nova_compute[226829]:   <source protocol="rbd" name="volumes/volume-85d89a65-6e1e-4b74-97bb-e5082330369c">
Jan 31 08:31:33 compute-2 nova_compute[226829]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:31:33 compute-2 nova_compute[226829]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:31:33 compute-2 nova_compute[226829]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:31:33 compute-2 nova_compute[226829]:   </source>
Jan 31 08:31:33 compute-2 nova_compute[226829]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:31:33 compute-2 nova_compute[226829]:   <serial>85d89a65-6e1e-4b74-97bb-e5082330369c</serial>
Jan 31 08:31:33 compute-2 nova_compute[226829]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 08:31:33 compute-2 nova_compute[226829]: </disk>
Jan 31 08:31:33 compute-2 nova_compute[226829]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:31:33 compute-2 nova_compute[226829]: 2026-01-31 08:31:33.215 226833 INFO nova.virt.libvirt.driver [None req-1149560b-3cfd-41be-a4f8-8ff6512dec58 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Successfully detached device vdb from instance cd7614f6-e095-4eaf-bc4a-f749f49d3da7 from the persistent domain config.
Jan 31 08:31:33 compute-2 nova_compute[226829]: 2026-01-31 08:31:33.215 226833 DEBUG nova.virt.libvirt.driver [None req-1149560b-3cfd-41be-a4f8-8ff6512dec58 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance cd7614f6-e095-4eaf-bc4a-f749f49d3da7 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 08:31:33 compute-2 nova_compute[226829]: 2026-01-31 08:31:33.216 226833 DEBUG nova.virt.libvirt.guest [None req-1149560b-3cfd-41be-a4f8-8ff6512dec58 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:31:33 compute-2 nova_compute[226829]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:31:33 compute-2 nova_compute[226829]:   <source protocol="rbd" name="volumes/volume-85d89a65-6e1e-4b74-97bb-e5082330369c">
Jan 31 08:31:33 compute-2 nova_compute[226829]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:31:33 compute-2 nova_compute[226829]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:31:33 compute-2 nova_compute[226829]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:31:33 compute-2 nova_compute[226829]:   </source>
Jan 31 08:31:33 compute-2 nova_compute[226829]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:31:33 compute-2 nova_compute[226829]:   <serial>85d89a65-6e1e-4b74-97bb-e5082330369c</serial>
Jan 31 08:31:33 compute-2 nova_compute[226829]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 08:31:33 compute-2 nova_compute[226829]: </disk>
Jan 31 08:31:33 compute-2 nova_compute[226829]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:31:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:31:33 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3932476374' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:31:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:31:33 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3932476374' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:31:33 compute-2 nova_compute[226829]: 2026-01-31 08:31:33.338 226833 DEBUG nova.virt.libvirt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Received event <DeviceRemovedEvent: 1769848293.337721, cd7614f6-e095-4eaf-bc4a-f749f49d3da7 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 08:31:33 compute-2 nova_compute[226829]: 2026-01-31 08:31:33.340 226833 DEBUG nova.virt.libvirt.driver [None req-1149560b-3cfd-41be-a4f8-8ff6512dec58 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance cd7614f6-e095-4eaf-bc4a-f749f49d3da7 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 08:31:33 compute-2 nova_compute[226829]: 2026-01-31 08:31:33.342 226833 INFO nova.virt.libvirt.driver [None req-1149560b-3cfd-41be-a4f8-8ff6512dec58 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Successfully detached device vdb from instance cd7614f6-e095-4eaf-bc4a-f749f49d3da7 from the live domain config.
Jan 31 08:31:33 compute-2 ceph-mon[77282]: pgmap v2954: 305 pgs: 305 active+clean; 563 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 203 KiB/s wr, 45 op/s
Jan 31 08:31:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3932476374' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:31:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3932476374' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:31:33 compute-2 nova_compute[226829]: 2026-01-31 08:31:33.777 226833 DEBUG nova.objects.instance [None req-1149560b-3cfd-41be-a4f8-8ff6512dec58 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lazy-loading 'flavor' on Instance uuid cd7614f6-e095-4eaf-bc4a-f749f49d3da7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:31:33 compute-2 nova_compute[226829]: 2026-01-31 08:31:33.870 226833 DEBUG oslo_concurrency.lockutils [None req-1149560b-3cfd-41be-a4f8-8ff6512dec58 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:31:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:33.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:31:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:34.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:34 compute-2 nova_compute[226829]: 2026-01-31 08:31:34.678 226833 DEBUG oslo_concurrency.lockutils [None req-97281158-0ed5-4dfa-adb4-374ce172e772 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:34 compute-2 nova_compute[226829]: 2026-01-31 08:31:34.679 226833 DEBUG oslo_concurrency.lockutils [None req-97281158-0ed5-4dfa-adb4-374ce172e772 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:34 compute-2 nova_compute[226829]: 2026-01-31 08:31:34.679 226833 DEBUG oslo_concurrency.lockutils [None req-97281158-0ed5-4dfa-adb4-374ce172e772 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:34 compute-2 nova_compute[226829]: 2026-01-31 08:31:34.679 226833 DEBUG oslo_concurrency.lockutils [None req-97281158-0ed5-4dfa-adb4-374ce172e772 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:34 compute-2 nova_compute[226829]: 2026-01-31 08:31:34.680 226833 DEBUG oslo_concurrency.lockutils [None req-97281158-0ed5-4dfa-adb4-374ce172e772 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:34 compute-2 nova_compute[226829]: 2026-01-31 08:31:34.681 226833 INFO nova.compute.manager [None req-97281158-0ed5-4dfa-adb4-374ce172e772 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Terminating instance
Jan 31 08:31:34 compute-2 nova_compute[226829]: 2026-01-31 08:31:34.682 226833 DEBUG nova.compute.manager [None req-97281158-0ed5-4dfa-adb4-374ce172e772 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:31:34 compute-2 kernel: tapffaf11e4-ab (unregistering): left promiscuous mode
Jan 31 08:31:34 compute-2 NetworkManager[48999]: <info>  [1769848294.7871] device (tapffaf11e4-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:31:34 compute-2 ovn_controller[133834]: 2026-01-31T08:31:34Z|00700|binding|INFO|Releasing lport ffaf11e4-ab25-435a-a550-4c6c0b25801c from this chassis (sb_readonly=0)
Jan 31 08:31:34 compute-2 ovn_controller[133834]: 2026-01-31T08:31:34Z|00701|binding|INFO|Setting lport ffaf11e4-ab25-435a-a550-4c6c0b25801c down in Southbound
Jan 31 08:31:34 compute-2 nova_compute[226829]: 2026-01-31 08:31:34.793 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:34 compute-2 ovn_controller[133834]: 2026-01-31T08:31:34Z|00702|binding|INFO|Removing iface tapffaf11e4-ab ovn-installed in OVS
Jan 31 08:31:34 compute-2 nova_compute[226829]: 2026-01-31 08:31:34.795 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:34 compute-2 nova_compute[226829]: 2026-01-31 08:31:34.803 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:34.810 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:0b:77 10.100.0.4'], port_security=['fa:16:3e:7c:0b:77 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'cd7614f6-e095-4eaf-bc4a-f749f49d3da7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e26a2af1-a850-4885-977e-596b6be13fb8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '418d5319c640455ab23850c0b0f24f92', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4749436d-38f6-439f-9f12-f38ebaef0842', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=068708bd-dd36-4d03-9d65-912eb9981ecc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=ffaf11e4-ab25-435a-a550-4c6c0b25801c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:31:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:34.811 143841 INFO neutron.agent.ovn.metadata.agent [-] Port ffaf11e4-ab25-435a-a550-4c6c0b25801c in datapath e26a2af1-a850-4885-977e-596b6be13fb8 unbound from our chassis
Jan 31 08:31:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:34.813 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e26a2af1-a850-4885-977e-596b6be13fb8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:31:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:34.815 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a24e79bc-a24a-4b21-8072-949fdd30e32a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:34.816 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8 namespace which is not needed anymore
Jan 31 08:31:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/245584937' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:34 compute-2 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000ac.scope: Deactivated successfully.
Jan 31 08:31:34 compute-2 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000ac.scope: Consumed 14.137s CPU time.
Jan 31 08:31:34 compute-2 systemd-machined[195142]: Machine qemu-80-instance-000000ac terminated.
Jan 31 08:31:34 compute-2 nova_compute[226829]: 2026-01-31 08:31:34.900 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:34 compute-2 nova_compute[226829]: 2026-01-31 08:31:34.904 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:34 compute-2 nova_compute[226829]: 2026-01-31 08:31:34.916 226833 INFO nova.virt.libvirt.driver [-] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Instance destroyed successfully.
Jan 31 08:31:34 compute-2 nova_compute[226829]: 2026-01-31 08:31:34.916 226833 DEBUG nova.objects.instance [None req-97281158-0ed5-4dfa-adb4-374ce172e772 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lazy-loading 'resources' on Instance uuid cd7614f6-e095-4eaf-bc4a-f749f49d3da7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:31:34 compute-2 nova_compute[226829]: 2026-01-31 08:31:34.931 226833 DEBUG nova.virt.libvirt.vif [None req-97281158-0ed5-4dfa-adb4-374ce172e772 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:30:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-2093056277',display_name='tempest-AttachVolumeNegativeTest-server-2093056277',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-2093056277',id=172,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDonjfnAujZefiMYUrxel4+/qI0af0RRcGu8mb+x+XiquvG3Cqt6583WFG6Aculfd7qg2S4SI25/n8o8oX595vAY8g9p6XyR4w5iSVlLPkpPjgA7hRODzCQmVbkbFO4dHg==',key_name='tempest-keypair-1218148886',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:30:56Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='418d5319c640455ab23850c0b0f24f92',ramdisk_id='',reservation_id='r-nhxuewdv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-562353674',owner_user_name='tempest-AttachVolumeNegativeTest-562353674-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:30:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='48d684de9ba340f48e249b4cce857bfa',uuid=cd7614f6-e095-4eaf-bc4a-f749f49d3da7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ffaf11e4-ab25-435a-a550-4c6c0b25801c", "address": "fa:16:3e:7c:0b:77", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaf11e4-ab", "ovs_interfaceid": "ffaf11e4-ab25-435a-a550-4c6c0b25801c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:31:34 compute-2 nova_compute[226829]: 2026-01-31 08:31:34.932 226833 DEBUG nova.network.os_vif_util [None req-97281158-0ed5-4dfa-adb4-374ce172e772 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Converting VIF {"id": "ffaf11e4-ab25-435a-a550-4c6c0b25801c", "address": "fa:16:3e:7c:0b:77", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaf11e4-ab", "ovs_interfaceid": "ffaf11e4-ab25-435a-a550-4c6c0b25801c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:31:34 compute-2 nova_compute[226829]: 2026-01-31 08:31:34.933 226833 DEBUG nova.network.os_vif_util [None req-97281158-0ed5-4dfa-adb4-374ce172e772 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7c:0b:77,bridge_name='br-int',has_traffic_filtering=True,id=ffaf11e4-ab25-435a-a550-4c6c0b25801c,network=Network(e26a2af1-a850-4885-977e-596b6be13fb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffaf11e4-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:31:34 compute-2 nova_compute[226829]: 2026-01-31 08:31:34.933 226833 DEBUG os_vif [None req-97281158-0ed5-4dfa-adb4-374ce172e772 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7c:0b:77,bridge_name='br-int',has_traffic_filtering=True,id=ffaf11e4-ab25-435a-a550-4c6c0b25801c,network=Network(e26a2af1-a850-4885-977e-596b6be13fb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffaf11e4-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:31:34 compute-2 nova_compute[226829]: 2026-01-31 08:31:34.936 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:34 compute-2 nova_compute[226829]: 2026-01-31 08:31:34.936 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapffaf11e4-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:31:34 compute-2 nova_compute[226829]: 2026-01-31 08:31:34.938 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:34 compute-2 nova_compute[226829]: 2026-01-31 08:31:34.939 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:34 compute-2 nova_compute[226829]: 2026-01-31 08:31:34.943 226833 INFO os_vif [None req-97281158-0ed5-4dfa-adb4-374ce172e772 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7c:0b:77,bridge_name='br-int',has_traffic_filtering=True,id=ffaf11e4-ab25-435a-a550-4c6c0b25801c,network=Network(e26a2af1-a850-4885-977e-596b6be13fb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffaf11e4-ab')
Jan 31 08:31:35 compute-2 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[305286]: [NOTICE]   (305290) : haproxy version is 2.8.14-c23fe91
Jan 31 08:31:35 compute-2 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[305286]: [NOTICE]   (305290) : path to executable is /usr/sbin/haproxy
Jan 31 08:31:35 compute-2 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[305286]: [WARNING]  (305290) : Exiting Master process...
Jan 31 08:31:35 compute-2 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[305286]: [ALERT]    (305290) : Current worker (305292) exited with code 143 (Terminated)
Jan 31 08:31:35 compute-2 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[305286]: [WARNING]  (305290) : All workers exited. Exiting... (0)
Jan 31 08:31:35 compute-2 systemd[1]: libpod-9915fbc2f6f18a2e1d0464d7bb3cb8f14aec2a546a5f78d770a2f96b8b2a0d68.scope: Deactivated successfully.
Jan 31 08:31:35 compute-2 conmon[305286]: conmon 9915fbc2f6f18a2e1d04 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9915fbc2f6f18a2e1d0464d7bb3cb8f14aec2a546a5f78d770a2f96b8b2a0d68.scope/container/memory.events
Jan 31 08:31:35 compute-2 podman[305958]: 2026-01-31 08:31:35.031539032 +0000 UTC m=+0.133190684 container died 9915fbc2f6f18a2e1d0464d7bb3cb8f14aec2a546a5f78d770a2f96b8b2a0d68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:31:35 compute-2 systemd[1]: var-lib-containers-storage-overlay-896f028bfa72f766cbfc7a955d209574f47609ac8b7dbf14b9c88ac372b81f50-merged.mount: Deactivated successfully.
Jan 31 08:31:35 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9915fbc2f6f18a2e1d0464d7bb3cb8f14aec2a546a5f78d770a2f96b8b2a0d68-userdata-shm.mount: Deactivated successfully.
Jan 31 08:31:35 compute-2 podman[305958]: 2026-01-31 08:31:35.277673571 +0000 UTC m=+0.379325223 container cleanup 9915fbc2f6f18a2e1d0464d7bb3cb8f14aec2a546a5f78d770a2f96b8b2a0d68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 08:31:35 compute-2 systemd[1]: libpod-conmon-9915fbc2f6f18a2e1d0464d7bb3cb8f14aec2a546a5f78d770a2f96b8b2a0d68.scope: Deactivated successfully.
Jan 31 08:31:35 compute-2 podman[306013]: 2026-01-31 08:31:35.377574811 +0000 UTC m=+0.094601377 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 08:31:35 compute-2 podman[306020]: 2026-01-31 08:31:35.386952556 +0000 UTC m=+0.045632541 container remove 9915fbc2f6f18a2e1d0464d7bb3cb8f14aec2a546a5f78d770a2f96b8b2a0d68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:31:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:35.392 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6765dfc8-ec5b-490f-8394-678bbb32950a]: (4, ('Sat Jan 31 08:31:34 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8 (9915fbc2f6f18a2e1d0464d7bb3cb8f14aec2a546a5f78d770a2f96b8b2a0d68)\n9915fbc2f6f18a2e1d0464d7bb3cb8f14aec2a546a5f78d770a2f96b8b2a0d68\nSat Jan 31 08:31:35 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8 (9915fbc2f6f18a2e1d0464d7bb3cb8f14aec2a546a5f78d770a2f96b8b2a0d68)\n9915fbc2f6f18a2e1d0464d7bb3cb8f14aec2a546a5f78d770a2f96b8b2a0d68\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:35.394 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2ec3e885-5ec6-47b0-b09c-cf8c85dc4300]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:35.395 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape26a2af1-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:31:35 compute-2 kernel: tape26a2af1-a0: left promiscuous mode
Jan 31 08:31:35 compute-2 nova_compute[226829]: 2026-01-31 08:31:35.397 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:35 compute-2 nova_compute[226829]: 2026-01-31 08:31:35.402 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:35.408 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[97c6b154-2e78-409c-8a0d-49ff88be71a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:35.425 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4542ed1f-3fbd-44c4-a0bc-8766de910099]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:35.426 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[bb1c4b1e-65e2-4f73-b031-208b7876093d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:35.440 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[660e8f55-afe4-4909-93bf-ec00ebf4e355]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 850772, 'reachable_time': 19289, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306047, 'error': None, 'target': 'ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:35.442 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:31:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:31:35.443 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a30461-a459-4b00-a5ca-d35d611b5b21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:35 compute-2 systemd[1]: run-netns-ovnmeta\x2de26a2af1\x2da850\x2d4885\x2d977e\x2d596b6be13fb8.mount: Deactivated successfully.
Jan 31 08:31:35 compute-2 nova_compute[226829]: 2026-01-31 08:31:35.611 226833 DEBUG nova.compute.manager [req-d4e3cbcc-0e61-4fc8-9629-83862e2766cd req-1c021c14-8e7c-4795-8765-9ef02b16a169 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Received event network-vif-unplugged-ffaf11e4-ab25-435a-a550-4c6c0b25801c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:31:35 compute-2 nova_compute[226829]: 2026-01-31 08:31:35.611 226833 DEBUG oslo_concurrency.lockutils [req-d4e3cbcc-0e61-4fc8-9629-83862e2766cd req-1c021c14-8e7c-4795-8765-9ef02b16a169 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:35 compute-2 nova_compute[226829]: 2026-01-31 08:31:35.611 226833 DEBUG oslo_concurrency.lockutils [req-d4e3cbcc-0e61-4fc8-9629-83862e2766cd req-1c021c14-8e7c-4795-8765-9ef02b16a169 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:35 compute-2 nova_compute[226829]: 2026-01-31 08:31:35.612 226833 DEBUG oslo_concurrency.lockutils [req-d4e3cbcc-0e61-4fc8-9629-83862e2766cd req-1c021c14-8e7c-4795-8765-9ef02b16a169 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:35 compute-2 nova_compute[226829]: 2026-01-31 08:31:35.612 226833 DEBUG nova.compute.manager [req-d4e3cbcc-0e61-4fc8-9629-83862e2766cd req-1c021c14-8e7c-4795-8765-9ef02b16a169 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] No waiting events found dispatching network-vif-unplugged-ffaf11e4-ab25-435a-a550-4c6c0b25801c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:31:35 compute-2 nova_compute[226829]: 2026-01-31 08:31:35.612 226833 DEBUG nova.compute.manager [req-d4e3cbcc-0e61-4fc8-9629-83862e2766cd req-1c021c14-8e7c-4795-8765-9ef02b16a169 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Received event network-vif-unplugged-ffaf11e4-ab25-435a-a550-4c6c0b25801c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:31:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:35 compute-2 ceph-mon[77282]: pgmap v2955: 305 pgs: 305 active+clean; 563 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 3.2 KiB/s wr, 41 op/s
Jan 31 08:31:35 compute-2 nova_compute[226829]: 2026-01-31 08:31:35.837 226833 INFO nova.virt.libvirt.driver [None req-97281158-0ed5-4dfa-adb4-374ce172e772 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Deleting instance files /var/lib/nova/instances/cd7614f6-e095-4eaf-bc4a-f749f49d3da7_del
Jan 31 08:31:35 compute-2 nova_compute[226829]: 2026-01-31 08:31:35.838 226833 INFO nova.virt.libvirt.driver [None req-97281158-0ed5-4dfa-adb4-374ce172e772 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Deletion of /var/lib/nova/instances/cd7614f6-e095-4eaf-bc4a-f749f49d3da7_del complete
Jan 31 08:31:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:35.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:36 compute-2 nova_compute[226829]: 2026-01-31 08:31:36.066 226833 INFO nova.compute.manager [None req-97281158-0ed5-4dfa-adb4-374ce172e772 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Took 1.38 seconds to destroy the instance on the hypervisor.
Jan 31 08:31:36 compute-2 nova_compute[226829]: 2026-01-31 08:31:36.067 226833 DEBUG oslo.service.loopingcall [None req-97281158-0ed5-4dfa-adb4-374ce172e772 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:31:36 compute-2 nova_compute[226829]: 2026-01-31 08:31:36.068 226833 DEBUG nova.compute.manager [-] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:31:36 compute-2 nova_compute[226829]: 2026-01-31 08:31:36.068 226833 DEBUG nova.network.neutron [-] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:31:36 compute-2 sudo[306050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:31:36 compute-2 sudo[306050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:36 compute-2 sudo[306050]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:36 compute-2 sudo[306075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:31:36 compute-2 sudo[306075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:36 compute-2 sudo[306075]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:36 compute-2 nova_compute[226829]: 2026-01-31 08:31:36.580 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848281.5793164, e89132fd-2d0c-475e-a3c5-0407e4cbbbb8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:31:36 compute-2 nova_compute[226829]: 2026-01-31 08:31:36.581 226833 INFO nova.compute.manager [-] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] VM Stopped (Lifecycle Event)
Jan 31 08:31:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:36.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:36 compute-2 nova_compute[226829]: 2026-01-31 08:31:36.631 226833 DEBUG nova.compute.manager [None req-01637385-9130-4a3d-80e4-bb0f8a1fdde7 - - - - - -] [instance: e89132fd-2d0c-475e-a3c5-0407e4cbbbb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:31:37 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:31:37 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:31:37 compute-2 nova_compute[226829]: 2026-01-31 08:31:37.279 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:37 compute-2 nova_compute[226829]: 2026-01-31 08:31:37.851 226833 DEBUG nova.compute.manager [req-e979ac27-c6aa-4de0-8d5a-2e456651a19c req-e301d94f-b214-425e-af38-f7be1376bbee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Received event network-vif-plugged-ffaf11e4-ab25-435a-a550-4c6c0b25801c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:31:37 compute-2 nova_compute[226829]: 2026-01-31 08:31:37.851 226833 DEBUG oslo_concurrency.lockutils [req-e979ac27-c6aa-4de0-8d5a-2e456651a19c req-e301d94f-b214-425e-af38-f7be1376bbee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:37 compute-2 nova_compute[226829]: 2026-01-31 08:31:37.852 226833 DEBUG oslo_concurrency.lockutils [req-e979ac27-c6aa-4de0-8d5a-2e456651a19c req-e301d94f-b214-425e-af38-f7be1376bbee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:37 compute-2 nova_compute[226829]: 2026-01-31 08:31:37.852 226833 DEBUG oslo_concurrency.lockutils [req-e979ac27-c6aa-4de0-8d5a-2e456651a19c req-e301d94f-b214-425e-af38-f7be1376bbee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:37 compute-2 nova_compute[226829]: 2026-01-31 08:31:37.852 226833 DEBUG nova.compute.manager [req-e979ac27-c6aa-4de0-8d5a-2e456651a19c req-e301d94f-b214-425e-af38-f7be1376bbee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] No waiting events found dispatching network-vif-plugged-ffaf11e4-ab25-435a-a550-4c6c0b25801c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:31:37 compute-2 nova_compute[226829]: 2026-01-31 08:31:37.852 226833 WARNING nova.compute.manager [req-e979ac27-c6aa-4de0-8d5a-2e456651a19c req-e301d94f-b214-425e-af38-f7be1376bbee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Received unexpected event network-vif-plugged-ffaf11e4-ab25-435a-a550-4c6c0b25801c for instance with vm_state active and task_state deleting.
Jan 31 08:31:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:37.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:38 compute-2 ceph-mon[77282]: pgmap v2956: 305 pgs: 305 active+clean; 480 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 17 KiB/s wr, 97 op/s
Jan 31 08:31:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:38.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:38 compute-2 nova_compute[226829]: 2026-01-31 08:31:38.707 226833 DEBUG nova.network.neutron [-] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:31:38 compute-2 nova_compute[226829]: 2026-01-31 08:31:38.774 226833 INFO nova.compute.manager [-] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Took 2.71 seconds to deallocate network for instance.
Jan 31 08:31:39 compute-2 nova_compute[226829]: 2026-01-31 08:31:39.026 226833 DEBUG oslo_concurrency.lockutils [None req-97281158-0ed5-4dfa-adb4-374ce172e772 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:39 compute-2 nova_compute[226829]: 2026-01-31 08:31:39.027 226833 DEBUG oslo_concurrency.lockutils [None req-97281158-0ed5-4dfa-adb4-374ce172e772 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:39 compute-2 nova_compute[226829]: 2026-01-31 08:31:39.095 226833 DEBUG nova.compute.manager [req-0f2988f6-524e-4829-9a48-1a735843755d req-d3016c60-f123-4628-8c21-14a4181b9189 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Received event network-vif-deleted-ffaf11e4-ab25-435a-a550-4c6c0b25801c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:31:39 compute-2 nova_compute[226829]: 2026-01-31 08:31:39.113 226833 DEBUG oslo_concurrency.processutils [None req-97281158-0ed5-4dfa-adb4-374ce172e772 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:31:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3092006587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:31:39 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2549394482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:39 compute-2 nova_compute[226829]: 2026-01-31 08:31:39.608 226833 DEBUG oslo_concurrency.processutils [None req-97281158-0ed5-4dfa-adb4-374ce172e772 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:31:39 compute-2 nova_compute[226829]: 2026-01-31 08:31:39.613 226833 DEBUG nova.compute.provider_tree [None req-97281158-0ed5-4dfa-adb4-374ce172e772 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:31:39 compute-2 nova_compute[226829]: 2026-01-31 08:31:39.692 226833 DEBUG nova.scheduler.client.report [None req-97281158-0ed5-4dfa-adb4-374ce172e772 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:31:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:39.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:39 compute-2 nova_compute[226829]: 2026-01-31 08:31:39.900 226833 DEBUG oslo_concurrency.lockutils [None req-97281158-0ed5-4dfa-adb4-374ce172e772 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:39 compute-2 nova_compute[226829]: 2026-01-31 08:31:39.938 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:39 compute-2 nova_compute[226829]: 2026-01-31 08:31:39.965 226833 INFO nova.scheduler.client.report [None req-97281158-0ed5-4dfa-adb4-374ce172e772 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Deleted allocations for instance cd7614f6-e095-4eaf-bc4a-f749f49d3da7
Jan 31 08:31:40 compute-2 nova_compute[226829]: 2026-01-31 08:31:40.112 226833 DEBUG oslo_concurrency.lockutils [None req-97281158-0ed5-4dfa-adb4-374ce172e772 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "cd7614f6-e095-4eaf-bc4a-f749f49d3da7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.433s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:40 compute-2 ceph-mon[77282]: pgmap v2957: 305 pgs: 305 active+clean; 447 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 15 KiB/s wr, 104 op/s
Jan 31 08:31:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2549394482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:31:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:40.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:31:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:41.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:41 compute-2 ceph-mon[77282]: pgmap v2958: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 140 op/s
Jan 31 08:31:42 compute-2 nova_compute[226829]: 2026-01-31 08:31:42.281 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:42.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/632567154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:31:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/185363447' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:31:43 compute-2 nova_compute[226829]: 2026-01-31 08:31:43.278 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:43 compute-2 nova_compute[226829]: 2026-01-31 08:31:43.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:43 compute-2 nova_compute[226829]: 2026-01-31 08:31:43.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:31:43 compute-2 nova_compute[226829]: 2026-01-31 08:31:43.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:31:43 compute-2 nova_compute[226829]: 2026-01-31 08:31:43.645 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:31:43 compute-2 nova_compute[226829]: 2026-01-31 08:31:43.646 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:43.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:44 compute-2 ceph-mon[77282]: pgmap v2959: 305 pgs: 305 active+clean; 442 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.5 MiB/s wr, 171 op/s
Jan 31 08:31:44 compute-2 nova_compute[226829]: 2026-01-31 08:31:44.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:44.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:44 compute-2 nova_compute[226829]: 2026-01-31 08:31:44.941 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:45 compute-2 ceph-mon[77282]: pgmap v2960: 305 pgs: 305 active+clean; 437 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.1 MiB/s wr, 213 op/s
Jan 31 08:31:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:45.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:46.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:47 compute-2 nova_compute[226829]: 2026-01-31 08:31:47.284 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:47 compute-2 ceph-mon[77282]: pgmap v2961: 305 pgs: 305 active+clean; 416 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 4.7 MiB/s wr, 259 op/s
Jan 31 08:31:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/881127251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:31:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:47.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:31:47 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:31:47 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1011709156' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:31:47 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:31:47 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1011709156' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:31:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:48.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1011709156' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:31:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1011709156' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:31:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:49.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:49 compute-2 nova_compute[226829]: 2026-01-31 08:31:49.916 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848294.914976, cd7614f6-e095-4eaf-bc4a-f749f49d3da7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:31:49 compute-2 nova_compute[226829]: 2026-01-31 08:31:49.916 226833 INFO nova.compute.manager [-] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] VM Stopped (Lifecycle Event)
Jan 31 08:31:49 compute-2 nova_compute[226829]: 2026-01-31 08:31:49.942 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:49 compute-2 ceph-mon[77282]: pgmap v2962: 305 pgs: 305 active+clean; 425 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 5.1 MiB/s wr, 201 op/s
Jan 31 08:31:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/602892621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:50 compute-2 nova_compute[226829]: 2026-01-31 08:31:50.279 226833 DEBUG nova.compute.manager [None req-3dc95cec-f3f8-43af-adaf-8b49428ee222 - - - - - -] [instance: cd7614f6-e095-4eaf-bc4a-f749f49d3da7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:31:50 compute-2 nova_compute[226829]: 2026-01-31 08:31:50.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:50.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e377 e377: 3 total, 3 up, 3 in
Jan 31 08:31:51 compute-2 nova_compute[226829]: 2026-01-31 08:31:51.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:51.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:52 compute-2 ceph-mon[77282]: pgmap v2963: 305 pgs: 305 active+clean; 435 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 5.9 MiB/s wr, 245 op/s
Jan 31 08:31:52 compute-2 ceph-mon[77282]: osdmap e377: 3 total, 3 up, 3 in
Jan 31 08:31:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3824279903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2375828464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:52 compute-2 nova_compute[226829]: 2026-01-31 08:31:52.286 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:52 compute-2 nova_compute[226829]: 2026-01-31 08:31:52.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:52.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:53 compute-2 sudo[306131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:31:53 compute-2 sudo[306131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:53 compute-2 sudo[306131]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:53 compute-2 sudo[306156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:31:53 compute-2 sudo[306156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:53 compute-2 sudo[306156]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3401579954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:53.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:54 compute-2 nova_compute[226829]: 2026-01-31 08:31:54.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:54.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:54 compute-2 nova_compute[226829]: 2026-01-31 08:31:54.890 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:54 compute-2 nova_compute[226829]: 2026-01-31 08:31:54.890 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:54 compute-2 nova_compute[226829]: 2026-01-31 08:31:54.891 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:54 compute-2 nova_compute[226829]: 2026-01-31 08:31:54.891 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:31:54 compute-2 nova_compute[226829]: 2026-01-31 08:31:54.891 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:31:54 compute-2 nova_compute[226829]: 2026-01-31 08:31:54.945 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:31:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:55.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:31:56 compute-2 podman[306194]: 2026-01-31 08:31:56.195844959 +0000 UTC m=+0.080027383 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 08:31:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:56.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:57 compute-2 nova_compute[226829]: 2026-01-31 08:31:57.288 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:57 compute-2 ceph-mds[84366]: mds.beacon.cephfs.compute-2.ihffma missed beacon ack from the monitors
Jan 31 08:31:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:57.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:58 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.095175266s, txc = 0x562dbacc7800
Jan 31 08:31:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:58.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:58 compute-2 ceph-mon[77282]: pgmap v2965: 305 pgs: 305 active+clean; 437 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.4 MiB/s wr, 264 op/s
Jan 31 08:31:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:31:59 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/433518010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:59 compute-2 nova_compute[226829]: 2026-01-31 08:31:59.021 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:31:59 compute-2 nova_compute[226829]: 2026-01-31 08:31:59.172 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:31:59 compute-2 nova_compute[226829]: 2026-01-31 08:31:59.174 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4187MB free_disk=20.897193908691406GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:31:59 compute-2 nova_compute[226829]: 2026-01-31 08:31:59.174 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:59 compute-2 nova_compute[226829]: 2026-01-31 08:31:59.174 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:59 compute-2 ceph-mon[77282]: pgmap v2966: 305 pgs: 305 active+clean; 437 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.8 MiB/s wr, 230 op/s
Jan 31 08:31:59 compute-2 ceph-mon[77282]: pgmap v2967: 305 pgs: 305 active+clean; 437 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.6 MiB/s wr, 183 op/s
Jan 31 08:31:59 compute-2 ceph-mon[77282]: pgmap v2968: 305 pgs: 305 active+clean; 437 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.1 MiB/s wr, 166 op/s
Jan 31 08:31:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2481018420' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:31:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2481018420' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:31:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/433518010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:31:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:59.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:59 compute-2 nova_compute[226829]: 2026-01-31 08:31:59.949 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:32:00 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1410069005' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:32:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:32:00 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1410069005' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:32:00 compute-2 nova_compute[226829]: 2026-01-31 08:32:00.338 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:32:00 compute-2 nova_compute[226829]: 2026-01-31 08:32:00.339 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:32:00 compute-2 nova_compute[226829]: 2026-01-31 08:32:00.369 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:32:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:32:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:00.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:32:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1410069005' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:32:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1410069005' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:32:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:32:00 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2472068711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:32:00 compute-2 nova_compute[226829]: 2026-01-31 08:32:00.784 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:32:00 compute-2 nova_compute[226829]: 2026-01-31 08:32:00.789 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:32:01 compute-2 nova_compute[226829]: 2026-01-31 08:32:01.013 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:32:01 compute-2 nova_compute[226829]: 2026-01-31 08:32:01.560 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:32:01 compute-2 nova_compute[226829]: 2026-01-31 08:32:01.562 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.387s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:32:01 compute-2 ceph-mon[77282]: pgmap v2969: 305 pgs: 305 active+clean; 407 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 102 KiB/s wr, 128 op/s
Jan 31 08:32:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2472068711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:32:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:01.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:02 compute-2 nova_compute[226829]: 2026-01-31 08:32:02.291 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:02.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 e378: 3 total, 3 up, 3 in
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #139. Immutable memtables: 0.
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:03.324786) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 139
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848323324879, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 1587, "num_deletes": 255, "total_data_size": 3415203, "memory_usage": 3459832, "flush_reason": "Manual Compaction"}
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #140: started
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848323337405, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 140, "file_size": 1478122, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69014, "largest_seqno": 70595, "table_properties": {"data_size": 1472581, "index_size": 2744, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14755, "raw_average_key_size": 21, "raw_value_size": 1460436, "raw_average_value_size": 2138, "num_data_blocks": 119, "num_entries": 683, "num_filter_entries": 683, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848212, "oldest_key_time": 1769848212, "file_creation_time": 1769848323, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 12668 microseconds, and 4550 cpu microseconds.
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:03.337460) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #140: 1478122 bytes OK
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:03.337479) [db/memtable_list.cc:519] [default] Level-0 commit table #140 started
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:03.339003) [db/memtable_list.cc:722] [default] Level-0 commit table #140: memtable #1 done
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:03.339018) EVENT_LOG_v1 {"time_micros": 1769848323339013, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:03.339053) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 3407811, prev total WAL file size 3407811, number of live WAL files 2.
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000136.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:03.339720) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323539' seq:72057594037927935, type:22 .. '6D6772737461740032353131' seq:0, type:0; will stop at (end)
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [140(1443KB)], [138(11MB)]
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848323339818, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [140], "files_L6": [138], "score": -1, "input_data_size": 13973516, "oldest_snapshot_seqno": -1}
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #141: 9388 keys, 10836505 bytes, temperature: kUnknown
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848323407166, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 141, "file_size": 10836505, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10777885, "index_size": 34059, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23493, "raw_key_size": 246168, "raw_average_key_size": 26, "raw_value_size": 10614987, "raw_average_value_size": 1130, "num_data_blocks": 1299, "num_entries": 9388, "num_filter_entries": 9388, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769848323, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 141, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:03.407465) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 10836505 bytes
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:03.409375) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 207.2 rd, 160.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 11.9 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(16.8) write-amplify(7.3) OK, records in: 9869, records dropped: 481 output_compression: NoCompression
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:03.409407) EVENT_LOG_v1 {"time_micros": 1769848323409394, "job": 88, "event": "compaction_finished", "compaction_time_micros": 67449, "compaction_time_cpu_micros": 25913, "output_level": 6, "num_output_files": 1, "total_output_size": 10836505, "num_input_records": 9869, "num_output_records": 9388, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848323409666, "job": 88, "event": "table_file_deletion", "file_number": 140}
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000138.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848323410658, "job": 88, "event": "table_file_deletion", "file_number": 138}
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:03.339549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:03.410690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:03.410694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:03.410695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:03.410697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:03.410698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:03.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:04 compute-2 ceph-mon[77282]: pgmap v2970: 305 pgs: 305 active+clean; 383 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 102 KiB/s wr, 129 op/s
Jan 31 08:32:04 compute-2 ceph-mon[77282]: osdmap e378: 3 total, 3 up, 3 in
Jan 31 08:32:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:04.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:32:04.737 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=72, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=71) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:32:04 compute-2 nova_compute[226829]: 2026-01-31 08:32:04.737 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:32:04.739 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:32:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:32:04.740 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '72'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:32:04 compute-2 nova_compute[226829]: 2026-01-31 08:32:04.951 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:05.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:06 compute-2 ceph-mon[77282]: pgmap v2972: 305 pgs: 305 active+clean; 358 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 666 KiB/s rd, 31 KiB/s wr, 100 op/s
Jan 31 08:32:06 compute-2 podman[306261]: 2026-01-31 08:32:06.179813818 +0000 UTC m=+0.070172895 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 08:32:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:06.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:32:06.908 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:32:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:32:06.908 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:32:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:32:06.908 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:32:07 compute-2 nova_compute[226829]: 2026-01-31 08:32:07.293 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3019601616' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:32:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3019601616' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:32:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:07.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:08 compute-2 ceph-mon[77282]: pgmap v2973: 305 pgs: 305 active+clean; 358 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 664 KiB/s rd, 31 KiB/s wr, 95 op/s
Jan 31 08:32:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3475266291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:32:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:08.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:09.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:09 compute-2 nova_compute[226829]: 2026-01-31 08:32:09.953 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:10 compute-2 ceph-mon[77282]: pgmap v2974: 305 pgs: 305 active+clean; 358 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 670 KiB/s rd, 31 KiB/s wr, 103 op/s
Jan 31 08:32:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:10.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:11 compute-2 ceph-mon[77282]: pgmap v2975: 305 pgs: 305 active+clean; 360 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 352 KiB/s rd, 40 KiB/s wr, 56 op/s
Jan 31 08:32:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:11.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:12 compute-2 nova_compute[226829]: 2026-01-31 08:32:12.296 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:32:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:12.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:32:13 compute-2 sudo[306283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:32:13 compute-2 sudo[306283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:13 compute-2 sudo[306283]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:13 compute-2 sudo[306308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:32:13 compute-2 sudo[306308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:13 compute-2 sudo[306308]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:13 compute-2 ceph-mon[77282]: pgmap v2976: 305 pgs: 305 active+clean; 360 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 141 KiB/s rd, 23 KiB/s wr, 32 op/s
Jan 31 08:32:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:13.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:14 compute-2 nova_compute[226829]: 2026-01-31 08:32:14.563 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:14 compute-2 nova_compute[226829]: 2026-01-31 08:32:14.564 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:32:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:14.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:14 compute-2 nova_compute[226829]: 2026-01-31 08:32:14.955 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:15 compute-2 ceph-mon[77282]: pgmap v2977: 305 pgs: 305 active+clean; 360 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 10 KiB/s wr, 20 op/s
Jan 31 08:32:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:15.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:16.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:17 compute-2 nova_compute[226829]: 2026-01-31 08:32:17.014 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:17 compute-2 nova_compute[226829]: 2026-01-31 08:32:17.099 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:17 compute-2 nova_compute[226829]: 2026-01-31 08:32:17.298 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:17.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:18 compute-2 ceph-mon[77282]: pgmap v2978: 305 pgs: 305 active+clean; 316 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 14 KiB/s wr, 22 op/s
Jan 31 08:32:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/216193617' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:32:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/216193617' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:32:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:18.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:19.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:19 compute-2 nova_compute[226829]: 2026-01-31 08:32:19.958 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:20 compute-2 ceph-mon[77282]: pgmap v2979: 305 pgs: 305 active+clean; 283 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 27 op/s
Jan 31 08:32:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:20.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:21 compute-2 ceph-mon[77282]: pgmap v2980: 305 pgs: 305 active+clean; 241 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 14 KiB/s wr, 27 op/s
Jan 31 08:32:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:21.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:22 compute-2 nova_compute[226829]: 2026-01-31 08:32:22.301 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:22 compute-2 nova_compute[226829]: 2026-01-31 08:32:22.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:22 compute-2 nova_compute[226829]: 2026-01-31 08:32:22.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:32:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:22.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:23 compute-2 ceph-mon[77282]: pgmap v2981: 305 pgs: 305 active+clean; 219 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 5.4 KiB/s wr, 37 op/s
Jan 31 08:32:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:23.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:24.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:24 compute-2 nova_compute[226829]: 2026-01-31 08:32:24.960 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:25.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:26 compute-2 ceph-mon[77282]: pgmap v2982: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 5.4 KiB/s wr, 41 op/s
Jan 31 08:32:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1982090795' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:32:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:32:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:26.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:32:27 compute-2 podman[306341]: 2026-01-31 08:32:27.208846217 +0000 UTC m=+0.095932835 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 08:32:27 compute-2 nova_compute[226829]: 2026-01-31 08:32:27.303 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:27.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:28 compute-2 ceph-mon[77282]: pgmap v2983: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 5.4 KiB/s wr, 34 op/s
Jan 31 08:32:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:28.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:28 compute-2 nova_compute[226829]: 2026-01-31 08:32:28.860 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:28 compute-2 nova_compute[226829]: 2026-01-31 08:32:28.861 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:32:28 compute-2 nova_compute[226829]: 2026-01-31 08:32:28.927 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:32:29 compute-2 ceph-mon[77282]: pgmap v2984: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Jan 31 08:32:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:29.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:29 compute-2 nova_compute[226829]: 2026-01-31 08:32:29.985 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:32:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:30.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:32:31 compute-2 nova_compute[226829]: 2026-01-31 08:32:31.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:31 compute-2 ceph-mon[77282]: pgmap v2985: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 08:32:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:31.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:32 compute-2 nova_compute[226829]: 2026-01-31 08:32:32.307 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:32.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:33 compute-2 sudo[306371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:32:33 compute-2 sudo[306371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:33 compute-2 sudo[306371]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:33 compute-2 sudo[306396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:32:33 compute-2 sudo[306396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:33 compute-2 sudo[306396]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:33.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:34 compute-2 ceph-mon[77282]: pgmap v2986: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 511 B/s wr, 19 op/s
Jan 31 08:32:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:34.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:34 compute-2 nova_compute[226829]: 2026-01-31 08:32:34.987 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:35.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:36 compute-2 ceph-mon[77282]: pgmap v2987: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.7 KiB/s rd, 0 B/s wr, 8 op/s
Jan 31 08:32:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:36.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:36 compute-2 sudo[306423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:32:36 compute-2 sudo[306423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:36 compute-2 sudo[306423]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:36 compute-2 sudo[306454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:32:36 compute-2 sudo[306454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:36 compute-2 sudo[306454]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:36 compute-2 podman[306447]: 2026-01-31 08:32:36.866075659 +0000 UTC m=+0.042263848 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:32:36 compute-2 sudo[306492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:32:36 compute-2 sudo[306492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:36 compute-2 sudo[306492]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:36 compute-2 sudo[306517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:32:36 compute-2 sudo[306517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:37 compute-2 sudo[306517]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:37 compute-2 nova_compute[226829]: 2026-01-31 08:32:37.308 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:37 compute-2 ceph-mon[77282]: pgmap v2988: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.7 KiB/s rd, 1.1 KiB/s wr, 5 op/s
Jan 31 08:32:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:37.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #142. Immutable memtables: 0.
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:38.021039) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 142
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848358021073, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 570, "num_deletes": 255, "total_data_size": 858070, "memory_usage": 869048, "flush_reason": "Manual Compaction"}
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #143: started
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848358028049, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 143, "file_size": 565752, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70600, "largest_seqno": 71165, "table_properties": {"data_size": 562822, "index_size": 901, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6727, "raw_average_key_size": 18, "raw_value_size": 556979, "raw_average_value_size": 1525, "num_data_blocks": 40, "num_entries": 365, "num_filter_entries": 365, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848324, "oldest_key_time": 1769848324, "file_creation_time": 1769848358, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 7046 microseconds, and 2033 cpu microseconds.
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:38.028087) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #143: 565752 bytes OK
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:38.028100) [db/memtable_list.cc:519] [default] Level-0 commit table #143 started
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:38.032502) [db/memtable_list.cc:722] [default] Level-0 commit table #143: memtable #1 done
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:38.032515) EVENT_LOG_v1 {"time_micros": 1769848358032512, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:38.032530) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 854821, prev total WAL file size 854821, number of live WAL files 2.
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000139.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:38.032943) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353133' seq:72057594037927935, type:22 .. '6C6F676D0032373634' seq:0, type:0; will stop at (end)
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [143(552KB)], [141(10MB)]
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848358032977, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [143], "files_L6": [141], "score": -1, "input_data_size": 11402257, "oldest_snapshot_seqno": -1}
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #144: 9233 keys, 11267689 bytes, temperature: kUnknown
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848358091722, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 144, "file_size": 11267689, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11209219, "index_size": 34293, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23109, "raw_key_size": 243882, "raw_average_key_size": 26, "raw_value_size": 11048179, "raw_average_value_size": 1196, "num_data_blocks": 1306, "num_entries": 9233, "num_filter_entries": 9233, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769848358, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 144, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:38.092315) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 11267689 bytes
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:38.101741) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 193.1 rd, 190.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.3 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(40.1) write-amplify(19.9) OK, records in: 9753, records dropped: 520 output_compression: NoCompression
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:38.101776) EVENT_LOG_v1 {"time_micros": 1769848358101764, "job": 90, "event": "compaction_finished", "compaction_time_micros": 59060, "compaction_time_cpu_micros": 20129, "output_level": 6, "num_output_files": 1, "total_output_size": 11267689, "num_input_records": 9753, "num_output_records": 9233, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848358101982, "job": 90, "event": "table_file_deletion", "file_number": 143}
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000141.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848358103404, "job": 90, "event": "table_file_deletion", "file_number": 141}
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:38.032891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:38.103436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:38.103440) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:38.103442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:38.103455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:38 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:32:38.103457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 08:32:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:32:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:32:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:32:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:32:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:32:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:32:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:38.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:39.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:39 compute-2 nova_compute[226829]: 2026-01-31 08:32:39.989 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:40 compute-2 ceph-mon[77282]: pgmap v2989: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 KiB/s rd, 1.2 KiB/s wr, 8 op/s
Jan 31 08:32:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2808292028' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:32:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2808292028' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:32:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:40.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:32:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:41.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:32:42 compute-2 ceph-mon[77282]: pgmap v2990: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 1.3 KiB/s wr, 18 op/s
Jan 31 08:32:42 compute-2 nova_compute[226829]: 2026-01-31 08:32:42.310 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:42.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:42 compute-2 nova_compute[226829]: 2026-01-31 08:32:42.747 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:43.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:44 compute-2 ceph-mon[77282]: pgmap v2991: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.6 KiB/s wr, 15 op/s
Jan 31 08:32:44 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:32:44 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:32:44 compute-2 sudo[306579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:32:44 compute-2 sudo[306579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:44 compute-2 sudo[306579]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:44 compute-2 sudo[306604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:32:44 compute-2 sudo[306604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:44 compute-2 sudo[306604]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:44.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:44 compute-2 nova_compute[226829]: 2026-01-31 08:32:44.991 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:45 compute-2 nova_compute[226829]: 2026-01-31 08:32:45.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:45 compute-2 nova_compute[226829]: 2026-01-31 08:32:45.487 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:32:45 compute-2 nova_compute[226829]: 2026-01-31 08:32:45.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:32:45 compute-2 nova_compute[226829]: 2026-01-31 08:32:45.653 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:32:45 compute-2 nova_compute[226829]: 2026-01-31 08:32:45.654 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:45.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:46 compute-2 ceph-mon[77282]: pgmap v2992: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 31 08:32:46 compute-2 nova_compute[226829]: 2026-01-31 08:32:46.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:46.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:47 compute-2 nova_compute[226829]: 2026-01-31 08:32:47.312 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:47.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:48 compute-2 ceph-mon[77282]: pgmap v2993: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 4.0 KiB/s wr, 15 op/s
Jan 31 08:32:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:48.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:32:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:49.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:32:49 compute-2 nova_compute[226829]: 2026-01-31 08:32:49.993 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:50 compute-2 ceph-mon[77282]: pgmap v2994: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 2.9 KiB/s wr, 14 op/s
Jan 31 08:32:50 compute-2 nova_compute[226829]: 2026-01-31 08:32:50.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:50.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:51.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:52 compute-2 ceph-mon[77282]: pgmap v2995: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.5 KiB/s rd, 2.7 KiB/s wr, 12 op/s
Jan 31 08:32:52 compute-2 nova_compute[226829]: 2026-01-31 08:32:52.314 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:52 compute-2 nova_compute[226829]: 2026-01-31 08:32:52.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:52 compute-2 nova_compute[226829]: 2026-01-31 08:32:52.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:32:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:52.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:32:53 compute-2 ceph-mon[77282]: pgmap v2996: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 938 B/s rd, 2.7 KiB/s wr, 1 op/s
Jan 31 08:32:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/723900355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:32:53 compute-2 sudo[306633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:32:53 compute-2 sudo[306633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:53 compute-2 sudo[306633]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:53 compute-2 sudo[306658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:32:53 compute-2 sudo[306658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:53 compute-2 sudo[306658]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:53.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/214475849' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:32:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/214475849' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:32:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1373963127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:32:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:54.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:54 compute-2 nova_compute[226829]: 2026-01-31 08:32:54.996 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:55 compute-2 nova_compute[226829]: 2026-01-31 08:32:55.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:55 compute-2 ceph-mon[77282]: pgmap v2997: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 2.4 KiB/s wr, 0 op/s
Jan 31 08:32:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:55.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:56.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:57 compute-2 nova_compute[226829]: 2026-01-31 08:32:57.409 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:57 compute-2 ceph-mon[77282]: pgmap v2998: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 2.4 KiB/s wr, 0 op/s
Jan 31 08:32:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:57.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:58 compute-2 nova_compute[226829]: 2026-01-31 08:32:58.066 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:32:58 compute-2 nova_compute[226829]: 2026-01-31 08:32:58.066 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:32:58 compute-2 nova_compute[226829]: 2026-01-31 08:32:58.066 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:32:58 compute-2 nova_compute[226829]: 2026-01-31 08:32:58.067 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:32:58 compute-2 nova_compute[226829]: 2026-01-31 08:32:58.067 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:32:58 compute-2 podman[306687]: 2026-01-31 08:32:58.175716092 +0000 UTC m=+0.061075119 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:32:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:32:58 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1899354999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:32:58 compute-2 nova_compute[226829]: 2026-01-31 08:32:58.474 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:32:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:58 compute-2 nova_compute[226829]: 2026-01-31 08:32:58.622 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:32:58 compute-2 nova_compute[226829]: 2026-01-31 08:32:58.624 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4176MB free_disk=20.94268798828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:32:58 compute-2 nova_compute[226829]: 2026-01-31 08:32:58.624 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:32:58 compute-2 nova_compute[226829]: 2026-01-31 08:32:58.624 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:32:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:58.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1899354999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:32:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:32:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:59.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:59 compute-2 nova_compute[226829]: 2026-01-31 08:32:59.998 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:00 compute-2 ceph-mon[77282]: pgmap v2999: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:33:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:00.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:01 compute-2 ceph-mon[77282]: pgmap v3000: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:33:01 compute-2 nova_compute[226829]: 2026-01-31 08:33:01.738 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:33:01 compute-2 nova_compute[226829]: 2026-01-31 08:33:01.739 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:33:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:01.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:02 compute-2 nova_compute[226829]: 2026-01-31 08:33:02.135 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:33:02 compute-2 nova_compute[226829]: 2026-01-31 08:33:02.411 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:33:02 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1638577862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:02 compute-2 nova_compute[226829]: 2026-01-31 08:33:02.625 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:33:02 compute-2 nova_compute[226829]: 2026-01-31 08:33:02.631 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:33:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:02.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:02 compute-2 nova_compute[226829]: 2026-01-31 08:33:02.798 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:33:02 compute-2 nova_compute[226829]: 2026-01-31 08:33:02.801 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:33:02 compute-2 nova_compute[226829]: 2026-01-31 08:33:02.802 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.177s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:33:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:03 compute-2 ceph-mon[77282]: pgmap v3001: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:33:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1638577862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/929178862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4132899862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:03.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:33:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:04.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:33:05 compute-2 nova_compute[226829]: 2026-01-31 08:33:05.001 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:05 compute-2 ceph-mon[77282]: pgmap v3002: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Jan 31 08:33:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:05.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:06.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:33:06.908 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:33:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:33:06.909 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:33:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:33:06.909 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:33:07 compute-2 podman[306761]: 2026-01-31 08:33:07.200045071 +0000 UTC m=+0.073043593 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Jan 31 08:33:07 compute-2 nova_compute[226829]: 2026-01-31 08:33:07.414 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:33:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:07.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:33:08 compute-2 nova_compute[226829]: 2026-01-31 08:33:08.101 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:33:08.102 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=73, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=72) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:33:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:33:08.103 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:33:08 compute-2 ceph-mon[77282]: pgmap v3003: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 31 08:33:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:08.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:33:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:09.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:33:10 compute-2 nova_compute[226829]: 2026-01-31 08:33:10.004 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:10 compute-2 ceph-mon[77282]: pgmap v3004: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 31 08:33:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:33:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:10.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:33:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:11.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:33:12.106 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '73'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:33:12 compute-2 ceph-mon[77282]: pgmap v3005: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 2 op/s
Jan 31 08:33:12 compute-2 nova_compute[226829]: 2026-01-31 08:33:12.416 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:12 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:33:12 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3327579943' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:33:12 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:33:12 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3327579943' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:33:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:12.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3327579943' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:33:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3327579943' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:33:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:13 compute-2 sudo[306784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:33:13 compute-2 sudo[306784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:13 compute-2 sudo[306784]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:13 compute-2 sudo[306809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:33:13 compute-2 sudo[306809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:13 compute-2 sudo[306809]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:33:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:13.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:33:14 compute-2 ceph-mon[77282]: pgmap v3006: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 3 op/s
Jan 31 08:33:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:14.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:15 compute-2 nova_compute[226829]: 2026-01-31 08:33:15.007 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:15 compute-2 nova_compute[226829]: 2026-01-31 08:33:15.803 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:33:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:15.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:33:15 compute-2 nova_compute[226829]: 2026-01-31 08:33:15.993 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:15 compute-2 nova_compute[226829]: 2026-01-31 08:33:15.994 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:33:16 compute-2 ceph-mon[77282]: pgmap v3007: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.7 KiB/s rd, 852 B/s wr, 13 op/s
Jan 31 08:33:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:33:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:16.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:33:17 compute-2 nova_compute[226829]: 2026-01-31 08:33:17.418 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:17.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:18 compute-2 ceph-mon[77282]: pgmap v3008: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 3.1 KiB/s wr, 26 op/s
Jan 31 08:33:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:33:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:18.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:33:19 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3387723786' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:33:19 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3387723786' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:33:19 compute-2 ceph-mon[77282]: pgmap v3009: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 6.2 KiB/s wr, 27 op/s
Jan 31 08:33:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:19.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:20 compute-2 nova_compute[226829]: 2026-01-31 08:33:20.008 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:20.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:21 compute-2 ceph-mon[77282]: pgmap v3010: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 6.2 KiB/s wr, 29 op/s
Jan 31 08:33:21 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:33:21 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3426008691' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:33:21 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:33:21 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3426008691' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:33:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:21.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:22 compute-2 nova_compute[226829]: 2026-01-31 08:33:22.419 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3426008691' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:33:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3426008691' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:33:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:22.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:23 compute-2 ceph-mon[77282]: pgmap v3011: 305 pgs: 305 active+clean; 164 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 6.9 KiB/s wr, 37 op/s
Jan 31 08:33:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1846240337' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:33:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1846240337' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:33:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:23.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:24.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:25 compute-2 nova_compute[226829]: 2026-01-31 08:33:25.011 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:25 compute-2 ceph-mon[77282]: pgmap v3012: 305 pgs: 305 active+clean; 153 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 6.7 KiB/s wr, 38 op/s
Jan 31 08:33:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1196754735' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:33:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1196754735' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:33:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:25.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1780901387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:26.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:27 compute-2 nova_compute[226829]: 2026-01-31 08:33:27.420 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:27 compute-2 ceph-mon[77282]: pgmap v3013: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 7.6 KiB/s wr, 83 op/s
Jan 31 08:33:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:33:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:28.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:33:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:33:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:28.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:33:29 compute-2 podman[306842]: 2026-01-31 08:33:29.263345724 +0000 UTC m=+0.149868319 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 08:33:29 compute-2 ceph-mon[77282]: pgmap v3014: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 5.2 KiB/s wr, 72 op/s
Jan 31 08:33:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:33:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:30.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:33:30 compute-2 nova_compute[226829]: 2026-01-31 08:33:30.013 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:33:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:30.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:33:31 compute-2 ceph-mon[77282]: pgmap v3015: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 1.9 KiB/s wr, 70 op/s
Jan 31 08:33:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:33:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:32.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:33:32 compute-2 nova_compute[226829]: 2026-01-31 08:33:32.423 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:32.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #145. Immutable memtables: 0.
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:33:32.837777) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 145
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848412837839, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 798, "num_deletes": 251, "total_data_size": 1497543, "memory_usage": 1528208, "flush_reason": "Manual Compaction"}
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #146: started
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848412845453, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 146, "file_size": 987800, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71170, "largest_seqno": 71963, "table_properties": {"data_size": 983972, "index_size": 1607, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8809, "raw_average_key_size": 19, "raw_value_size": 976286, "raw_average_value_size": 2179, "num_data_blocks": 71, "num_entries": 448, "num_filter_entries": 448, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848358, "oldest_key_time": 1769848358, "file_creation_time": 1769848412, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 7754 microseconds, and 3061 cpu microseconds.
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:33:32.845525) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #146: 987800 bytes OK
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:33:32.845552) [db/memtable_list.cc:519] [default] Level-0 commit table #146 started
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:33:32.846970) [db/memtable_list.cc:722] [default] Level-0 commit table #146: memtable #1 done
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:33:32.846992) EVENT_LOG_v1 {"time_micros": 1769848412846985, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:33:32.847014) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 1493374, prev total WAL file size 1493374, number of live WAL files 2.
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000142.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:33:32.847619) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [146(964KB)], [144(10MB)]
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848412847660, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [146], "files_L6": [144], "score": -1, "input_data_size": 12255489, "oldest_snapshot_seqno": -1}
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #147: 9167 keys, 10449300 bytes, temperature: kUnknown
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848412920651, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 147, "file_size": 10449300, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10391934, "index_size": 33317, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 243280, "raw_average_key_size": 26, "raw_value_size": 10232876, "raw_average_value_size": 1116, "num_data_blocks": 1259, "num_entries": 9167, "num_filter_entries": 9167, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769848412, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 147, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:33:32.920862) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 10449300 bytes
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:33:32.922524) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.8 rd, 143.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 10.7 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(23.0) write-amplify(10.6) OK, records in: 9681, records dropped: 514 output_compression: NoCompression
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:33:32.922540) EVENT_LOG_v1 {"time_micros": 1769848412922532, "job": 92, "event": "compaction_finished", "compaction_time_micros": 73046, "compaction_time_cpu_micros": 31422, "output_level": 6, "num_output_files": 1, "total_output_size": 10449300, "num_input_records": 9681, "num_output_records": 9167, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848412922724, "job": 92, "event": "table_file_deletion", "file_number": 146}
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000144.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848412923915, "job": 92, "event": "table_file_deletion", "file_number": 144}
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:33:32.847540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:33:32.924060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:33:32.924070) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:33:32.924073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:33:32.924076) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:33:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:33:32.924079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:33:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:33 compute-2 sudo[306871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:33:33 compute-2 sudo[306871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:33 compute-2 sudo[306871]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:33 compute-2 ceph-mon[77282]: pgmap v3016: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 1.9 KiB/s wr, 67 op/s
Jan 31 08:33:33 compute-2 sudo[306896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:33:33 compute-2 sudo[306896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:33 compute-2 sudo[306896]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:34.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:34.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:35 compute-2 nova_compute[226829]: 2026-01-31 08:33:35.015 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:35 compute-2 ceph-mon[77282]: pgmap v3017: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 1.1 KiB/s wr, 58 op/s
Jan 31 08:33:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:33:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:36.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:33:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:36.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:37 compute-2 nova_compute[226829]: 2026-01-31 08:33:37.426 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:37 compute-2 ceph-mon[77282]: pgmap v3018: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 1.1 KiB/s wr, 56 op/s
Jan 31 08:33:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:38.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:38 compute-2 podman[306924]: 2026-01-31 08:33:38.164698595 +0000 UTC m=+0.053064890 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 08:33:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:38.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:33:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:40.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:33:40 compute-2 nova_compute[226829]: 2026-01-31 08:33:40.017 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:40 compute-2 ceph-mon[77282]: pgmap v3019: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 0 B/s wr, 1 op/s
Jan 31 08:33:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:40.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:41 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2748956884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:41 compute-2 ceph-mon[77282]: pgmap v3020: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:33:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:42.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:42 compute-2 nova_compute[226829]: 2026-01-31 08:33:42.428 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:42.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:43 compute-2 ceph-mon[77282]: pgmap v3021: 305 pgs: 305 active+clean; 129 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 437 KiB/s wr, 24 op/s
Jan 31 08:33:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:43 compute-2 nova_compute[226829]: 2026-01-31 08:33:43.675 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:33:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:44.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:33:44 compute-2 sudo[306947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:33:44 compute-2 sudo[306947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:44 compute-2 sudo[306947]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:44 compute-2 sudo[306972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:33:44 compute-2 sudo[306972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:44 compute-2 sudo[306972]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:44 compute-2 sudo[306997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:33:44 compute-2 sudo[306997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:44 compute-2 sudo[306997]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:44 compute-2 sudo[307022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:33:44 compute-2 sudo[307022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:44 compute-2 sudo[307022]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:44.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:45 compute-2 nova_compute[226829]: 2026-01-31 08:33:45.019 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:45 compute-2 ceph-mon[77282]: pgmap v3022: 305 pgs: 305 active+clean; 148 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.2 MiB/s wr, 25 op/s
Jan 31 08:33:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:33:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:33:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:33:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:33:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:33:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:33:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:46.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:46 compute-2 nova_compute[226829]: 2026-01-31 08:33:46.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:46 compute-2 nova_compute[226829]: 2026-01-31 08:33:46.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:33:46 compute-2 nova_compute[226829]: 2026-01-31 08:33:46.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:33:46 compute-2 nova_compute[226829]: 2026-01-31 08:33:46.595 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:33:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:46.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:47 compute-2 nova_compute[226829]: 2026-01-31 08:33:47.429 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:47 compute-2 nova_compute[226829]: 2026-01-31 08:33:47.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:47 compute-2 ceph-mon[77282]: pgmap v3023: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:33:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:48.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:48 compute-2 nova_compute[226829]: 2026-01-31 08:33:48.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:48.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:49 compute-2 ceph-mon[77282]: pgmap v3024: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:33:50 compute-2 nova_compute[226829]: 2026-01-31 08:33:50.023 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:50.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:50 compute-2 nova_compute[226829]: 2026-01-31 08:33:50.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/522710267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:33:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3827577837' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:33:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:50.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:50 compute-2 sudo[307081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:33:50 compute-2 sudo[307081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:50 compute-2 sudo[307081]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:51 compute-2 sudo[307106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:33:51 compute-2 sudo[307106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:51 compute-2 sudo[307106]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:51 compute-2 ceph-mon[77282]: pgmap v3025: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:33:51 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:33:51 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:33:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2128890528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:33:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:52.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:33:52 compute-2 nova_compute[226829]: 2026-01-31 08:33:52.431 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:52 compute-2 nova_compute[226829]: 2026-01-31 08:33:52.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:52 compute-2 nova_compute[226829]: 2026-01-31 08:33:52.673 226833 DEBUG oslo_concurrency.lockutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquiring lock "fb68e4df-7075-4de3-8a2e-3d59677364ce" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:33:52 compute-2 nova_compute[226829]: 2026-01-31 08:33:52.673 226833 DEBUG oslo_concurrency.lockutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "fb68e4df-7075-4de3-8a2e-3d59677364ce" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:33:52 compute-2 nova_compute[226829]: 2026-01-31 08:33:52.718 226833 DEBUG nova.compute.manager [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:33:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:33:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:52.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:33:53 compute-2 nova_compute[226829]: 2026-01-31 08:33:53.130 226833 DEBUG oslo_concurrency.lockutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:33:53 compute-2 nova_compute[226829]: 2026-01-31 08:33:53.130 226833 DEBUG oslo_concurrency.lockutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:33:53 compute-2 nova_compute[226829]: 2026-01-31 08:33:53.140 226833 DEBUG nova.virt.hardware [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:33:53 compute-2 nova_compute[226829]: 2026-01-31 08:33:53.141 226833 INFO nova.compute.claims [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:33:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:33:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/955352894' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:33:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:33:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/955352894' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:33:53 compute-2 nova_compute[226829]: 2026-01-31 08:33:53.606 226833 DEBUG nova.scheduler.client.report [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Refreshing inventories for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:33:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:53 compute-2 ceph-mon[77282]: pgmap v3026: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:33:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3946079666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/955352894' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:33:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/955352894' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:33:53 compute-2 nova_compute[226829]: 2026-01-31 08:33:53.699 226833 DEBUG nova.scheduler.client.report [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Updating ProviderTree inventory for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:33:53 compute-2 nova_compute[226829]: 2026-01-31 08:33:53.700 226833 DEBUG nova.compute.provider_tree [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Updating inventory in ProviderTree for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:33:53 compute-2 nova_compute[226829]: 2026-01-31 08:33:53.718 226833 DEBUG nova.scheduler.client.report [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Refreshing aggregate associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:33:53 compute-2 nova_compute[226829]: 2026-01-31 08:33:53.760 226833 DEBUG nova.scheduler.client.report [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Refreshing trait associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, traits: COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:33:53 compute-2 nova_compute[226829]: 2026-01-31 08:33:53.901 226833 DEBUG oslo_concurrency.processutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:33:53 compute-2 sudo[307133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:33:53 compute-2 sudo[307133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:53 compute-2 sudo[307133]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:53 compute-2 sudo[307159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:33:53 compute-2 sudo[307159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:53 compute-2 sudo[307159]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:54.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:33:54 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3141230756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:54 compute-2 nova_compute[226829]: 2026-01-31 08:33:54.318 226833 DEBUG oslo_concurrency.processutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:33:54 compute-2 nova_compute[226829]: 2026-01-31 08:33:54.322 226833 DEBUG nova.compute.provider_tree [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:33:54 compute-2 nova_compute[226829]: 2026-01-31 08:33:54.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:54 compute-2 nova_compute[226829]: 2026-01-31 08:33:54.718 226833 DEBUG nova.scheduler.client.report [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:33:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3141230756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:54.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:55 compute-2 nova_compute[226829]: 2026-01-31 08:33:55.025 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:55 compute-2 nova_compute[226829]: 2026-01-31 08:33:55.233 226833 DEBUG oslo_concurrency.lockutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:33:55 compute-2 nova_compute[226829]: 2026-01-31 08:33:55.234 226833 DEBUG nova.compute.manager [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:33:55 compute-2 ceph-mon[77282]: pgmap v3027: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 1.3 MiB/s wr, 2 op/s
Jan 31 08:33:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4270474919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:55 compute-2 nova_compute[226829]: 2026-01-31 08:33:55.901 226833 DEBUG nova.compute.manager [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:33:55 compute-2 nova_compute[226829]: 2026-01-31 08:33:55.901 226833 DEBUG nova.network.neutron [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:33:55 compute-2 nova_compute[226829]: 2026-01-31 08:33:55.962 226833 INFO nova.virt.libvirt.driver [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:33:56 compute-2 nova_compute[226829]: 2026-01-31 08:33:56.005 226833 DEBUG nova.compute.manager [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:33:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:56.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:56 compute-2 nova_compute[226829]: 2026-01-31 08:33:56.031 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:33:56.030 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=74, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=73) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:33:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:33:56.032 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:33:56 compute-2 nova_compute[226829]: 2026-01-31 08:33:56.255 226833 DEBUG nova.compute.manager [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:33:56 compute-2 nova_compute[226829]: 2026-01-31 08:33:56.258 226833 DEBUG nova.virt.libvirt.driver [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:33:56 compute-2 nova_compute[226829]: 2026-01-31 08:33:56.259 226833 INFO nova.virt.libvirt.driver [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Creating image(s)
Jan 31 08:33:56 compute-2 nova_compute[226829]: 2026-01-31 08:33:56.305 226833 DEBUG nova.storage.rbd_utils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] rbd image fb68e4df-7075-4de3-8a2e-3d59677364ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:33:56 compute-2 nova_compute[226829]: 2026-01-31 08:33:56.343 226833 DEBUG nova.storage.rbd_utils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] rbd image fb68e4df-7075-4de3-8a2e-3d59677364ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:33:56 compute-2 nova_compute[226829]: 2026-01-31 08:33:56.371 226833 DEBUG nova.storage.rbd_utils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] rbd image fb68e4df-7075-4de3-8a2e-3d59677364ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:33:56 compute-2 nova_compute[226829]: 2026-01-31 08:33:56.375 226833 DEBUG oslo_concurrency.processutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:33:56 compute-2 nova_compute[226829]: 2026-01-31 08:33:56.447 226833 DEBUG oslo_concurrency.processutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:33:56 compute-2 nova_compute[226829]: 2026-01-31 08:33:56.449 226833 DEBUG oslo_concurrency.lockutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:33:56 compute-2 nova_compute[226829]: 2026-01-31 08:33:56.450 226833 DEBUG oslo_concurrency.lockutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:33:56 compute-2 nova_compute[226829]: 2026-01-31 08:33:56.451 226833 DEBUG oslo_concurrency.lockutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:33:56 compute-2 nova_compute[226829]: 2026-01-31 08:33:56.479 226833 DEBUG nova.storage.rbd_utils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] rbd image fb68e4df-7075-4de3-8a2e-3d59677364ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:33:56 compute-2 nova_compute[226829]: 2026-01-31 08:33:56.482 226833 DEBUG oslo_concurrency.processutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 fb68e4df-7075-4de3-8a2e-3d59677364ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:33:56 compute-2 nova_compute[226829]: 2026-01-31 08:33:56.504 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:56 compute-2 nova_compute[226829]: 2026-01-31 08:33:56.594 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:33:56 compute-2 nova_compute[226829]: 2026-01-31 08:33:56.595 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:33:56 compute-2 nova_compute[226829]: 2026-01-31 08:33:56.595 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:33:56 compute-2 nova_compute[226829]: 2026-01-31 08:33:56.596 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:33:56 compute-2 nova_compute[226829]: 2026-01-31 08:33:56.596 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:33:56 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/875697994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:56 compute-2 nova_compute[226829]: 2026-01-31 08:33:56.796 226833 DEBUG oslo_concurrency.processutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 fb68e4df-7075-4de3-8a2e-3d59677364ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:33:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:33:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:56.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:33:56 compute-2 nova_compute[226829]: 2026-01-31 08:33:56.873 226833 DEBUG nova.storage.rbd_utils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] resizing rbd image fb68e4df-7075-4de3-8a2e-3d59677364ce_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:33:57 compute-2 nova_compute[226829]: 2026-01-31 08:33:57.009 226833 DEBUG nova.objects.instance [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lazy-loading 'migration_context' on Instance uuid fb68e4df-7075-4de3-8a2e-3d59677364ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:33:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:33:57 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/189213466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:57 compute-2 nova_compute[226829]: 2026-01-31 08:33:57.025 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:33:57 compute-2 nova_compute[226829]: 2026-01-31 08:33:57.035 226833 DEBUG nova.virt.libvirt.driver [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:33:57 compute-2 nova_compute[226829]: 2026-01-31 08:33:57.036 226833 DEBUG nova.virt.libvirt.driver [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Ensure instance console log exists: /var/lib/nova/instances/fb68e4df-7075-4de3-8a2e-3d59677364ce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:33:57 compute-2 nova_compute[226829]: 2026-01-31 08:33:57.036 226833 DEBUG oslo_concurrency.lockutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:33:57 compute-2 nova_compute[226829]: 2026-01-31 08:33:57.036 226833 DEBUG oslo_concurrency.lockutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:33:57 compute-2 nova_compute[226829]: 2026-01-31 08:33:57.037 226833 DEBUG oslo_concurrency.lockutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:33:57 compute-2 nova_compute[226829]: 2026-01-31 08:33:57.141 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:33:57 compute-2 nova_compute[226829]: 2026-01-31 08:33:57.142 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4117MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:33:57 compute-2 nova_compute[226829]: 2026-01-31 08:33:57.142 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:33:57 compute-2 nova_compute[226829]: 2026-01-31 08:33:57.142 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:33:57 compute-2 nova_compute[226829]: 2026-01-31 08:33:57.293 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance fb68e4df-7075-4de3-8a2e-3d59677364ce actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:33:57 compute-2 nova_compute[226829]: 2026-01-31 08:33:57.293 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:33:57 compute-2 nova_compute[226829]: 2026-01-31 08:33:57.293 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:33:57 compute-2 nova_compute[226829]: 2026-01-31 08:33:57.340 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:33:57 compute-2 nova_compute[226829]: 2026-01-31 08:33:57.433 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:57 compute-2 ceph-mon[77282]: pgmap v3028: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 615 KiB/s wr, 46 op/s
Jan 31 08:33:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/189213466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:58.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:33:58 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3629884805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:58 compute-2 nova_compute[226829]: 2026-01-31 08:33:58.118 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.778s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:33:58 compute-2 nova_compute[226829]: 2026-01-31 08:33:58.123 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:33:58 compute-2 nova_compute[226829]: 2026-01-31 08:33:58.208 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:33:58 compute-2 nova_compute[226829]: 2026-01-31 08:33:58.591 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:33:58 compute-2 nova_compute[226829]: 2026-01-31 08:33:58.591 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.449s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:33:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3629884805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:33:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:58.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:58 compute-2 nova_compute[226829]: 2026-01-31 08:33:58.990 226833 DEBUG nova.network.neutron [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Successfully created port: 7c80f45f-1c7d-43ab-b526-0caeb530f258 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:33:59 compute-2 ceph-mon[77282]: pgmap v3029: 305 pgs: 305 active+clean; 179 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 646 KiB/s wr, 74 op/s
Jan 31 08:34:00 compute-2 nova_compute[226829]: 2026-01-31 08:34:00.028 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:00.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:00 compute-2 podman[307419]: 2026-01-31 08:34:00.203563775 +0000 UTC m=+0.088080762 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:34:00 compute-2 nova_compute[226829]: 2026-01-31 08:34:00.689 226833 DEBUG nova.network.neutron [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Successfully updated port: 7c80f45f-1c7d-43ab-b526-0caeb530f258 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:34:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2726001139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:00 compute-2 nova_compute[226829]: 2026-01-31 08:34:00.839 226833 DEBUG oslo_concurrency.lockutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquiring lock "refresh_cache-fb68e4df-7075-4de3-8a2e-3d59677364ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:34:00 compute-2 nova_compute[226829]: 2026-01-31 08:34:00.840 226833 DEBUG oslo_concurrency.lockutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquired lock "refresh_cache-fb68e4df-7075-4de3-8a2e-3d59677364ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:34:00 compute-2 nova_compute[226829]: 2026-01-31 08:34:00.840 226833 DEBUG nova.network.neutron [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:34:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:34:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:00.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:34:00 compute-2 nova_compute[226829]: 2026-01-31 08:34:00.913 226833 DEBUG nova.compute.manager [req-1608c270-4890-4661-b6a5-9fb9729e21e2 req-cbcd9a86-c077-4049-8096-53a5b838f7be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Received event network-changed-7c80f45f-1c7d-43ab-b526-0caeb530f258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:34:00 compute-2 nova_compute[226829]: 2026-01-31 08:34:00.913 226833 DEBUG nova.compute.manager [req-1608c270-4890-4661-b6a5-9fb9729e21e2 req-cbcd9a86-c077-4049-8096-53a5b838f7be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Refreshing instance network info cache due to event network-changed-7c80f45f-1c7d-43ab-b526-0caeb530f258. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:34:00 compute-2 nova_compute[226829]: 2026-01-31 08:34:00.914 226833 DEBUG oslo_concurrency.lockutils [req-1608c270-4890-4661-b6a5-9fb9729e21e2 req-cbcd9a86-c077-4049-8096-53a5b838f7be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-fb68e4df-7075-4de3-8a2e-3d59677364ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:34:01 compute-2 nova_compute[226829]: 2026-01-31 08:34:01.424 226833 DEBUG nova.network.neutron [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:34:01 compute-2 ceph-mon[77282]: pgmap v3030: 305 pgs: 305 active+clean; 209 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 78 op/s
Jan 31 08:34:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:02.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:02 compute-2 nova_compute[226829]: 2026-01-31 08:34:02.442 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:02.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:03 compute-2 nova_compute[226829]: 2026-01-31 08:34:03.711 226833 DEBUG nova.network.neutron [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Updating instance_info_cache with network_info: [{"id": "7c80f45f-1c7d-43ab-b526-0caeb530f258", "address": "fa:16:3e:ad:d4:5c", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c80f45f-1c", "ovs_interfaceid": "7c80f45f-1c7d-43ab-b526-0caeb530f258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:34:03 compute-2 ceph-mon[77282]: pgmap v3031: 305 pgs: 305 active+clean; 228 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 114 op/s
Jan 31 08:34:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:04.035 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '74'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:34:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:04.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:04 compute-2 nova_compute[226829]: 2026-01-31 08:34:04.106 226833 DEBUG oslo_concurrency.lockutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Releasing lock "refresh_cache-fb68e4df-7075-4de3-8a2e-3d59677364ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:34:04 compute-2 nova_compute[226829]: 2026-01-31 08:34:04.107 226833 DEBUG nova.compute.manager [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Instance network_info: |[{"id": "7c80f45f-1c7d-43ab-b526-0caeb530f258", "address": "fa:16:3e:ad:d4:5c", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c80f45f-1c", "ovs_interfaceid": "7c80f45f-1c7d-43ab-b526-0caeb530f258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:34:04 compute-2 nova_compute[226829]: 2026-01-31 08:34:04.108 226833 DEBUG oslo_concurrency.lockutils [req-1608c270-4890-4661-b6a5-9fb9729e21e2 req-cbcd9a86-c077-4049-8096-53a5b838f7be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-fb68e4df-7075-4de3-8a2e-3d59677364ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:34:04 compute-2 nova_compute[226829]: 2026-01-31 08:34:04.109 226833 DEBUG nova.network.neutron [req-1608c270-4890-4661-b6a5-9fb9729e21e2 req-cbcd9a86-c077-4049-8096-53a5b838f7be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Refreshing network info cache for port 7c80f45f-1c7d-43ab-b526-0caeb530f258 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:34:04 compute-2 nova_compute[226829]: 2026-01-31 08:34:04.115 226833 DEBUG nova.virt.libvirt.driver [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Start _get_guest_xml network_info=[{"id": "7c80f45f-1c7d-43ab-b526-0caeb530f258", "address": "fa:16:3e:ad:d4:5c", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c80f45f-1c", "ovs_interfaceid": "7c80f45f-1c7d-43ab-b526-0caeb530f258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:34:04 compute-2 nova_compute[226829]: 2026-01-31 08:34:04.121 226833 WARNING nova.virt.libvirt.driver [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:34:04 compute-2 nova_compute[226829]: 2026-01-31 08:34:04.145 226833 DEBUG nova.virt.libvirt.host [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:34:04 compute-2 nova_compute[226829]: 2026-01-31 08:34:04.146 226833 DEBUG nova.virt.libvirt.host [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:34:04 compute-2 nova_compute[226829]: 2026-01-31 08:34:04.156 226833 DEBUG nova.virt.libvirt.host [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:34:04 compute-2 nova_compute[226829]: 2026-01-31 08:34:04.157 226833 DEBUG nova.virt.libvirt.host [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:34:04 compute-2 nova_compute[226829]: 2026-01-31 08:34:04.159 226833 DEBUG nova.virt.libvirt.driver [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:34:04 compute-2 nova_compute[226829]: 2026-01-31 08:34:04.160 226833 DEBUG nova.virt.hardware [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:34:04 compute-2 nova_compute[226829]: 2026-01-31 08:34:04.161 226833 DEBUG nova.virt.hardware [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:34:04 compute-2 nova_compute[226829]: 2026-01-31 08:34:04.161 226833 DEBUG nova.virt.hardware [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:34:04 compute-2 nova_compute[226829]: 2026-01-31 08:34:04.162 226833 DEBUG nova.virt.hardware [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:34:04 compute-2 nova_compute[226829]: 2026-01-31 08:34:04.162 226833 DEBUG nova.virt.hardware [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:34:04 compute-2 nova_compute[226829]: 2026-01-31 08:34:04.162 226833 DEBUG nova.virt.hardware [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:34:04 compute-2 nova_compute[226829]: 2026-01-31 08:34:04.163 226833 DEBUG nova.virt.hardware [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:34:04 compute-2 nova_compute[226829]: 2026-01-31 08:34:04.163 226833 DEBUG nova.virt.hardware [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:34:04 compute-2 nova_compute[226829]: 2026-01-31 08:34:04.164 226833 DEBUG nova.virt.hardware [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:34:04 compute-2 nova_compute[226829]: 2026-01-31 08:34:04.164 226833 DEBUG nova.virt.hardware [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:34:04 compute-2 nova_compute[226829]: 2026-01-31 08:34:04.165 226833 DEBUG nova.virt.hardware [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:34:04 compute-2 nova_compute[226829]: 2026-01-31 08:34:04.171 226833 DEBUG oslo_concurrency.processutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:34:04 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4123949811' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:34:04 compute-2 nova_compute[226829]: 2026-01-31 08:34:04.605 226833 DEBUG oslo_concurrency.processutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:04 compute-2 nova_compute[226829]: 2026-01-31 08:34:04.632 226833 DEBUG nova.storage.rbd_utils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] rbd image fb68e4df-7075-4de3-8a2e-3d59677364ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:34:04 compute-2 nova_compute[226829]: 2026-01-31 08:34:04.637 226833 DEBUG oslo_concurrency.processutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4123949811' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:34:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:34:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:04.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.030 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:34:05 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2887728537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.077 226833 DEBUG oslo_concurrency.processutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.078 226833 DEBUG nova.virt.libvirt.vif [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:33:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-891765106',display_name='tempest-TestServerMultinode-server-891765106',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testservermultinode-server-891765106',id=175,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d69b70b0e4e340758cc43d45c1113d2f',ramdisk_id='',reservation_id='r-df19z5fl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-2117392928',owner_user_name='tempest-TestServerMultinode-211739
2928-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:33:56Z,user_data=None,user_id='f601ac628957410b995fa67e240e4871',uuid=fb68e4df-7075-4de3-8a2e-3d59677364ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7c80f45f-1c7d-43ab-b526-0caeb530f258", "address": "fa:16:3e:ad:d4:5c", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c80f45f-1c", "ovs_interfaceid": "7c80f45f-1c7d-43ab-b526-0caeb530f258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.078 226833 DEBUG nova.network.os_vif_util [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Converting VIF {"id": "7c80f45f-1c7d-43ab-b526-0caeb530f258", "address": "fa:16:3e:ad:d4:5c", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c80f45f-1c", "ovs_interfaceid": "7c80f45f-1c7d-43ab-b526-0caeb530f258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.079 226833 DEBUG nova.network.os_vif_util [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:d4:5c,bridge_name='br-int',has_traffic_filtering=True,id=7c80f45f-1c7d-43ab-b526-0caeb530f258,network=Network(786b4c20-d3c9-4eba-b2c7-0e7b9805b52d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c80f45f-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.080 226833 DEBUG nova.objects.instance [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lazy-loading 'pci_devices' on Instance uuid fb68e4df-7075-4de3-8a2e-3d59677364ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.571 226833 DEBUG nova.virt.libvirt.driver [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:34:05 compute-2 nova_compute[226829]:   <uuid>fb68e4df-7075-4de3-8a2e-3d59677364ce</uuid>
Jan 31 08:34:05 compute-2 nova_compute[226829]:   <name>instance-000000af</name>
Jan 31 08:34:05 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:34:05 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:34:05 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <nova:name>tempest-TestServerMultinode-server-891765106</nova:name>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:34:04</nova:creationTime>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:34:05 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:34:05 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:34:05 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:34:05 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:34:05 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:34:05 compute-2 nova_compute[226829]:         <nova:user uuid="f601ac628957410b995fa67e240e4871">tempest-TestServerMultinode-2117392928-project-admin</nova:user>
Jan 31 08:34:05 compute-2 nova_compute[226829]:         <nova:project uuid="d69b70b0e4e340758cc43d45c1113d2f">tempest-TestServerMultinode-2117392928</nova:project>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:34:05 compute-2 nova_compute[226829]:         <nova:port uuid="7c80f45f-1c7d-43ab-b526-0caeb530f258">
Jan 31 08:34:05 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:34:05 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:34:05 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <system>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <entry name="serial">fb68e4df-7075-4de3-8a2e-3d59677364ce</entry>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <entry name="uuid">fb68e4df-7075-4de3-8a2e-3d59677364ce</entry>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     </system>
Jan 31 08:34:05 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:34:05 compute-2 nova_compute[226829]:   <os>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:   </os>
Jan 31 08:34:05 compute-2 nova_compute[226829]:   <features>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:   </features>
Jan 31 08:34:05 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:34:05 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:34:05 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/fb68e4df-7075-4de3-8a2e-3d59677364ce_disk">
Jan 31 08:34:05 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       </source>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:34:05 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/fb68e4df-7075-4de3-8a2e-3d59677364ce_disk.config">
Jan 31 08:34:05 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       </source>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:34:05 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:ad:d4:5c"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <target dev="tap7c80f45f-1c"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/fb68e4df-7075-4de3-8a2e-3d59677364ce/console.log" append="off"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <video>
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     </video>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:34:05 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:34:05 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:34:05 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:34:05 compute-2 nova_compute[226829]: </domain>
Jan 31 08:34:05 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.572 226833 DEBUG nova.compute.manager [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Preparing to wait for external event network-vif-plugged-7c80f45f-1c7d-43ab-b526-0caeb530f258 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.573 226833 DEBUG oslo_concurrency.lockutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquiring lock "fb68e4df-7075-4de3-8a2e-3d59677364ce-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.573 226833 DEBUG oslo_concurrency.lockutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "fb68e4df-7075-4de3-8a2e-3d59677364ce-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.573 226833 DEBUG oslo_concurrency.lockutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "fb68e4df-7075-4de3-8a2e-3d59677364ce-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.574 226833 DEBUG nova.virt.libvirt.vif [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:33:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-891765106',display_name='tempest-TestServerMultinode-server-891765106',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testservermultinode-server-891765106',id=175,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d69b70b0e4e340758cc43d45c1113d2f',ramdisk_id='',reservation_id='r-df19z5fl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-2117392928',owner_user_name='tempest-TestServerMultin
ode-2117392928-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:33:56Z,user_data=None,user_id='f601ac628957410b995fa67e240e4871',uuid=fb68e4df-7075-4de3-8a2e-3d59677364ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7c80f45f-1c7d-43ab-b526-0caeb530f258", "address": "fa:16:3e:ad:d4:5c", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c80f45f-1c", "ovs_interfaceid": "7c80f45f-1c7d-43ab-b526-0caeb530f258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.574 226833 DEBUG nova.network.os_vif_util [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Converting VIF {"id": "7c80f45f-1c7d-43ab-b526-0caeb530f258", "address": "fa:16:3e:ad:d4:5c", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c80f45f-1c", "ovs_interfaceid": "7c80f45f-1c7d-43ab-b526-0caeb530f258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.575 226833 DEBUG nova.network.os_vif_util [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:d4:5c,bridge_name='br-int',has_traffic_filtering=True,id=7c80f45f-1c7d-43ab-b526-0caeb530f258,network=Network(786b4c20-d3c9-4eba-b2c7-0e7b9805b52d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c80f45f-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.576 226833 DEBUG os_vif [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:d4:5c,bridge_name='br-int',has_traffic_filtering=True,id=7c80f45f-1c7d-43ab-b526-0caeb530f258,network=Network(786b4c20-d3c9-4eba-b2c7-0e7b9805b52d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c80f45f-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.576 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.577 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.578 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.585 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.585 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c80f45f-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.586 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7c80f45f-1c, col_values=(('external_ids', {'iface-id': '7c80f45f-1c7d-43ab-b526-0caeb530f258', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:d4:5c', 'vm-uuid': 'fb68e4df-7075-4de3-8a2e-3d59677364ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.587 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:05 compute-2 NetworkManager[48999]: <info>  [1769848445.5888] manager: (tap7c80f45f-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/346)
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.590 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.594 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.596 226833 INFO os_vif [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:d4:5c,bridge_name='br-int',has_traffic_filtering=True,id=7c80f45f-1c7d-43ab-b526-0caeb530f258,network=Network(786b4c20-d3c9-4eba-b2c7-0e7b9805b52d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c80f45f-1c')
Jan 31 08:34:05 compute-2 ceph-mon[77282]: pgmap v3032: 305 pgs: 305 active+clean; 228 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 114 op/s
Jan 31 08:34:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2887728537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.966 226833 DEBUG nova.virt.libvirt.driver [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.966 226833 DEBUG nova.virt.libvirt.driver [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.967 226833 DEBUG nova.virt.libvirt.driver [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] No VIF found with MAC fa:16:3e:ad:d4:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.967 226833 INFO nova.virt.libvirt.driver [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Using config drive
Jan 31 08:34:05 compute-2 nova_compute[226829]: 2026-01-31 08:34:05.985 226833 DEBUG nova.storage.rbd_utils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] rbd image fb68e4df-7075-4de3-8a2e-3d59677364ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:34:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:34:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:06.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:34:06 compute-2 ovn_controller[133834]: 2026-01-31T08:34:06Z|00703|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 31 08:34:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:34:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:06.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:34:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:06.910 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:06.911 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:06.911 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:07 compute-2 nova_compute[226829]: 2026-01-31 08:34:07.445 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:07 compute-2 ceph-mon[77282]: pgmap v3033: 305 pgs: 305 active+clean; 271 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.6 MiB/s wr, 154 op/s
Jan 31 08:34:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:08.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:08 compute-2 nova_compute[226829]: 2026-01-31 08:34:08.201 226833 INFO nova.virt.libvirt.driver [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Creating config drive at /var/lib/nova/instances/fb68e4df-7075-4de3-8a2e-3d59677364ce/disk.config
Jan 31 08:34:08 compute-2 nova_compute[226829]: 2026-01-31 08:34:08.210 226833 DEBUG oslo_concurrency.processutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fb68e4df-7075-4de3-8a2e-3d59677364ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpzwhsi0ul execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:08 compute-2 nova_compute[226829]: 2026-01-31 08:34:08.341 226833 DEBUG oslo_concurrency.processutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fb68e4df-7075-4de3-8a2e-3d59677364ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpzwhsi0ul" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:08 compute-2 nova_compute[226829]: 2026-01-31 08:34:08.382 226833 DEBUG nova.storage.rbd_utils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] rbd image fb68e4df-7075-4de3-8a2e-3d59677364ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:34:08 compute-2 nova_compute[226829]: 2026-01-31 08:34:08.386 226833 DEBUG oslo_concurrency.processutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fb68e4df-7075-4de3-8a2e-3d59677364ce/disk.config fb68e4df-7075-4de3-8a2e-3d59677364ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:08 compute-2 nova_compute[226829]: 2026-01-31 08:34:08.580 226833 DEBUG oslo_concurrency.processutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fb68e4df-7075-4de3-8a2e-3d59677364ce/disk.config fb68e4df-7075-4de3-8a2e-3d59677364ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:08 compute-2 nova_compute[226829]: 2026-01-31 08:34:08.582 226833 INFO nova.virt.libvirt.driver [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Deleting local config drive /var/lib/nova/instances/fb68e4df-7075-4de3-8a2e-3d59677364ce/disk.config because it was imported into RBD.
Jan 31 08:34:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:08 compute-2 kernel: tap7c80f45f-1c: entered promiscuous mode
Jan 31 08:34:08 compute-2 NetworkManager[48999]: <info>  [1769848448.6380] manager: (tap7c80f45f-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/347)
Jan 31 08:34:08 compute-2 ovn_controller[133834]: 2026-01-31T08:34:08Z|00704|binding|INFO|Claiming lport 7c80f45f-1c7d-43ab-b526-0caeb530f258 for this chassis.
Jan 31 08:34:08 compute-2 ovn_controller[133834]: 2026-01-31T08:34:08Z|00705|binding|INFO|7c80f45f-1c7d-43ab-b526-0caeb530f258: Claiming fa:16:3e:ad:d4:5c 10.100.0.10
Jan 31 08:34:08 compute-2 nova_compute[226829]: 2026-01-31 08:34:08.640 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:08 compute-2 nova_compute[226829]: 2026-01-31 08:34:08.646 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:08 compute-2 nova_compute[226829]: 2026-01-31 08:34:08.650 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:08 compute-2 systemd-machined[195142]: New machine qemu-81-instance-000000af.
Jan 31 08:34:08 compute-2 nova_compute[226829]: 2026-01-31 08:34:08.690 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:08 compute-2 systemd[1]: Started Virtual Machine qemu-81-instance-000000af.
Jan 31 08:34:08 compute-2 ovn_controller[133834]: 2026-01-31T08:34:08Z|00706|binding|INFO|Setting lport 7c80f45f-1c7d-43ab-b526-0caeb530f258 ovn-installed in OVS
Jan 31 08:34:08 compute-2 nova_compute[226829]: 2026-01-31 08:34:08.697 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:08 compute-2 systemd-udevd[307599]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:34:08 compute-2 NetworkManager[48999]: <info>  [1769848448.7269] device (tap7c80f45f-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:34:08 compute-2 NetworkManager[48999]: <info>  [1769848448.7283] device (tap7c80f45f-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:34:08 compute-2 podman[307582]: 2026-01-31 08:34:08.744184358 +0000 UTC m=+0.078755408 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:34:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:08.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1097566403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:08 compute-2 ovn_controller[133834]: 2026-01-31T08:34:08Z|00707|binding|INFO|Setting lport 7c80f45f-1c7d-43ab-b526-0caeb530f258 up in Southbound
Jan 31 08:34:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:08.954 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:d4:5c 10.100.0.10'], port_security=['fa:16:3e:ad:d4:5c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'fb68e4df-7075-4de3-8a2e-3d59677364ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd69b70b0e4e340758cc43d45c1113d2f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c7793481-993b-4688-8da1-03fa2a12b295', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3f6c418e-e1df-4f83-9060-d1845d483973, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=7c80f45f-1c7d-43ab-b526-0caeb530f258) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:34:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:08.955 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 7c80f45f-1c7d-43ab-b526-0caeb530f258 in datapath 786b4c20-d3c9-4eba-b2c7-0e7b9805b52d bound to our chassis
Jan 31 08:34:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:08.956 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 786b4c20-d3c9-4eba-b2c7-0e7b9805b52d
Jan 31 08:34:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:08.970 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0ad9d7ee-147b-40e7-928d-a605d42511e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:08.971 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap786b4c20-d1 in ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:34:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:08.973 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap786b4c20-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:34:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:08.974 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[719e4d28-7ddb-4a92-a0e4-d03dc6d5d3b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:08.975 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6c474e72-aca7-48cc-91ab-a5fb2c2d6eaa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:08.992 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[9abb9b9d-7b02-4bf2-b20e-594b356d0fcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:09.019 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a1b091bf-7811-4a2c-beb5-d0aedfa02d16]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:09.053 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[70544006-16d2-4aa5-b1c1-c89d50cc73e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:09 compute-2 NetworkManager[48999]: <info>  [1769848449.0603] manager: (tap786b4c20-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/348)
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:09.061 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0f13dc48-906b-4604-bc68-460d949e045d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:09 compute-2 systemd-udevd[307604]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:09.090 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[3ce2ffdc-8d95-40d2-bbc5-a2fb8d925263]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:09.094 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[c1dfd742-7d60-4c74-acef-c1798ab8d3df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:09 compute-2 NetworkManager[48999]: <info>  [1769848449.1143] device (tap786b4c20-d0): carrier: link connected
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:09.118 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[7ceddeb0-5211-4d96-95ec-542e76a57411]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:09.139 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[09a06dcc-9d2e-4b8b-a485-770f18167280]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap786b4c20-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:d1:13'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870158, 'reachable_time': 28500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307677, 'error': None, 'target': 'ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:09.156 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[80a25eaa-8bf9-4f58-9f43-4acb01ea8661]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:d113'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 870158, 'tstamp': 870158}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307678, 'error': None, 'target': 'ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:09.176 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[bbb2461e-7f8a-45bd-bcd6-e62816aa26a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap786b4c20-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:d1:13'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870158, 'reachable_time': 28500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307680, 'error': None, 'target': 'ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:09 compute-2 nova_compute[226829]: 2026-01-31 08:34:09.191 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848449.1914988, fb68e4df-7075-4de3-8a2e-3d59677364ce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:34:09 compute-2 nova_compute[226829]: 2026-01-31 08:34:09.192 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] VM Started (Lifecycle Event)
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:09.207 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[63207173-f109-4c15-8ee9-56cd02acaa5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:09.259 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[eccdc107-4434-4ac3-8bfa-58ea6d373a07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:09.261 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap786b4c20-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:09.261 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:09.261 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap786b4c20-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:34:09 compute-2 nova_compute[226829]: 2026-01-31 08:34:09.263 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:09 compute-2 kernel: tap786b4c20-d0: entered promiscuous mode
Jan 31 08:34:09 compute-2 NetworkManager[48999]: <info>  [1769848449.2649] manager: (tap786b4c20-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/349)
Jan 31 08:34:09 compute-2 nova_compute[226829]: 2026-01-31 08:34:09.267 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:09.269 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap786b4c20-d0, col_values=(('external_ids', {'iface-id': '7b0df60c-a238-46c0-acd9-976f981537f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:34:09 compute-2 nova_compute[226829]: 2026-01-31 08:34:09.271 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:09 compute-2 ovn_controller[133834]: 2026-01-31T08:34:09Z|00708|binding|INFO|Releasing lport 7b0df60c-a238-46c0-acd9-976f981537f3 from this chassis (sb_readonly=0)
Jan 31 08:34:09 compute-2 nova_compute[226829]: 2026-01-31 08:34:09.272 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:09.272 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/786b4c20-d3c9-4eba-b2c7-0e7b9805b52d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/786b4c20-d3c9-4eba-b2c7-0e7b9805b52d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:09.273 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4abb6bea-a0c8-4833-99c5-76e829c27597]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:09.274 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/786b4c20-d3c9-4eba-b2c7-0e7b9805b52d.pid.haproxy
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 786b4c20-d3c9-4eba-b2c7-0e7b9805b52d
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:34:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:09.275 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d', 'env', 'PROCESS_TAG=haproxy-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/786b4c20-d3c9-4eba-b2c7-0e7b9805b52d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:34:09 compute-2 nova_compute[226829]: 2026-01-31 08:34:09.276 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:09 compute-2 nova_compute[226829]: 2026-01-31 08:34:09.708 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:34:09 compute-2 nova_compute[226829]: 2026-01-31 08:34:09.714 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848449.1923041, fb68e4df-7075-4de3-8a2e-3d59677364ce => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:34:09 compute-2 nova_compute[226829]: 2026-01-31 08:34:09.715 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] VM Paused (Lifecycle Event)
Jan 31 08:34:09 compute-2 podman[307712]: 2026-01-31 08:34:09.725419814 +0000 UTC m=+0.060418851 container create b93b4706be25cc4f15bc3e1594409b3b08aa688917f8ccb107d172564cec9eba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Jan 31 08:34:09 compute-2 systemd[1]: Started libpod-conmon-b93b4706be25cc4f15bc3e1594409b3b08aa688917f8ccb107d172564cec9eba.scope.
Jan 31 08:34:09 compute-2 podman[307712]: 2026-01-31 08:34:09.694161985 +0000 UTC m=+0.029161052 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:34:09 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:34:09 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a81ef3d78de29e10a3a0f48f28c80397e8e687032913977f9e964fef448adbf9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:09 compute-2 podman[307712]: 2026-01-31 08:34:09.818660903 +0000 UTC m=+0.153660020 container init b93b4706be25cc4f15bc3e1594409b3b08aa688917f8ccb107d172564cec9eba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:34:09 compute-2 podman[307712]: 2026-01-31 08:34:09.82625841 +0000 UTC m=+0.161257477 container start b93b4706be25cc4f15bc3e1594409b3b08aa688917f8ccb107d172564cec9eba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:34:09 compute-2 neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d[307727]: [NOTICE]   (307731) : New worker (307733) forked
Jan 31 08:34:09 compute-2 neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d[307727]: [NOTICE]   (307731) : Loading success.
Jan 31 08:34:09 compute-2 nova_compute[226829]: 2026-01-31 08:34:09.871 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:34:09 compute-2 nova_compute[226829]: 2026-01-31 08:34:09.876 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:34:09 compute-2 ceph-mon[77282]: pgmap v3034: 305 pgs: 305 active+clean; 280 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1020 KiB/s rd, 5.3 MiB/s wr, 118 op/s
Jan 31 08:34:10 compute-2 nova_compute[226829]: 2026-01-31 08:34:10.044 226833 DEBUG nova.network.neutron [req-1608c270-4890-4661-b6a5-9fb9729e21e2 req-cbcd9a86-c077-4049-8096-53a5b838f7be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Updated VIF entry in instance network info cache for port 7c80f45f-1c7d-43ab-b526-0caeb530f258. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:34:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:10.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:10 compute-2 nova_compute[226829]: 2026-01-31 08:34:10.045 226833 DEBUG nova.network.neutron [req-1608c270-4890-4661-b6a5-9fb9729e21e2 req-cbcd9a86-c077-4049-8096-53a5b838f7be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Updating instance_info_cache with network_info: [{"id": "7c80f45f-1c7d-43ab-b526-0caeb530f258", "address": "fa:16:3e:ad:d4:5c", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c80f45f-1c", "ovs_interfaceid": "7c80f45f-1c7d-43ab-b526-0caeb530f258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:34:10 compute-2 nova_compute[226829]: 2026-01-31 08:34:10.464 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:34:10 compute-2 nova_compute[226829]: 2026-01-31 08:34:10.597 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:10 compute-2 nova_compute[226829]: 2026-01-31 08:34:10.658 226833 DEBUG oslo_concurrency.lockutils [req-1608c270-4890-4661-b6a5-9fb9729e21e2 req-cbcd9a86-c077-4049-8096-53a5b838f7be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-fb68e4df-7075-4de3-8a2e-3d59677364ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:34:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:10.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:11 compute-2 nova_compute[226829]: 2026-01-31 08:34:11.613 226833 DEBUG nova.compute.manager [req-e1e6b746-ffd9-4e08-8393-743514068987 req-d691d5f3-e125-4211-9b91-d7d9ac12c7d5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Received event network-vif-plugged-7c80f45f-1c7d-43ab-b526-0caeb530f258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:34:11 compute-2 nova_compute[226829]: 2026-01-31 08:34:11.614 226833 DEBUG oslo_concurrency.lockutils [req-e1e6b746-ffd9-4e08-8393-743514068987 req-d691d5f3-e125-4211-9b91-d7d9ac12c7d5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "fb68e4df-7075-4de3-8a2e-3d59677364ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:11 compute-2 nova_compute[226829]: 2026-01-31 08:34:11.615 226833 DEBUG oslo_concurrency.lockutils [req-e1e6b746-ffd9-4e08-8393-743514068987 req-d691d5f3-e125-4211-9b91-d7d9ac12c7d5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "fb68e4df-7075-4de3-8a2e-3d59677364ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:11 compute-2 nova_compute[226829]: 2026-01-31 08:34:11.615 226833 DEBUG oslo_concurrency.lockutils [req-e1e6b746-ffd9-4e08-8393-743514068987 req-d691d5f3-e125-4211-9b91-d7d9ac12c7d5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "fb68e4df-7075-4de3-8a2e-3d59677364ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:11 compute-2 nova_compute[226829]: 2026-01-31 08:34:11.615 226833 DEBUG nova.compute.manager [req-e1e6b746-ffd9-4e08-8393-743514068987 req-d691d5f3-e125-4211-9b91-d7d9ac12c7d5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Processing event network-vif-plugged-7c80f45f-1c7d-43ab-b526-0caeb530f258 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:34:11 compute-2 nova_compute[226829]: 2026-01-31 08:34:11.617 226833 DEBUG nova.compute.manager [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:34:11 compute-2 nova_compute[226829]: 2026-01-31 08:34:11.622 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848451.6219943, fb68e4df-7075-4de3-8a2e-3d59677364ce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:34:11 compute-2 nova_compute[226829]: 2026-01-31 08:34:11.622 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] VM Resumed (Lifecycle Event)
Jan 31 08:34:11 compute-2 nova_compute[226829]: 2026-01-31 08:34:11.626 226833 DEBUG nova.virt.libvirt.driver [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:34:11 compute-2 nova_compute[226829]: 2026-01-31 08:34:11.630 226833 INFO nova.virt.libvirt.driver [-] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Instance spawned successfully.
Jan 31 08:34:11 compute-2 nova_compute[226829]: 2026-01-31 08:34:11.631 226833 DEBUG nova.virt.libvirt.driver [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:34:11 compute-2 nova_compute[226829]: 2026-01-31 08:34:11.675 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:34:11 compute-2 nova_compute[226829]: 2026-01-31 08:34:11.681 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:34:11 compute-2 nova_compute[226829]: 2026-01-31 08:34:11.717 226833 DEBUG nova.virt.libvirt.driver [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:34:11 compute-2 nova_compute[226829]: 2026-01-31 08:34:11.717 226833 DEBUG nova.virt.libvirt.driver [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:34:11 compute-2 nova_compute[226829]: 2026-01-31 08:34:11.718 226833 DEBUG nova.virt.libvirt.driver [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:34:11 compute-2 nova_compute[226829]: 2026-01-31 08:34:11.719 226833 DEBUG nova.virt.libvirt.driver [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:34:11 compute-2 nova_compute[226829]: 2026-01-31 08:34:11.719 226833 DEBUG nova.virt.libvirt.driver [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:34:11 compute-2 nova_compute[226829]: 2026-01-31 08:34:11.720 226833 DEBUG nova.virt.libvirt.driver [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:34:11 compute-2 nova_compute[226829]: 2026-01-31 08:34:11.759 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:34:11 compute-2 ceph-mon[77282]: pgmap v3035: 305 pgs: 305 active+clean; 289 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 357 KiB/s rd, 5.0 MiB/s wr, 116 op/s
Jan 31 08:34:12 compute-2 nova_compute[226829]: 2026-01-31 08:34:12.031 226833 INFO nova.compute.manager [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Took 15.77 seconds to spawn the instance on the hypervisor.
Jan 31 08:34:12 compute-2 nova_compute[226829]: 2026-01-31 08:34:12.031 226833 DEBUG nova.compute.manager [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:34:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:34:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:12.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:34:12 compute-2 nova_compute[226829]: 2026-01-31 08:34:12.265 226833 INFO nova.compute.manager [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Took 19.37 seconds to build instance.
Jan 31 08:34:12 compute-2 nova_compute[226829]: 2026-01-31 08:34:12.385 226833 DEBUG oslo_concurrency.lockutils [None req-ef04078b-aa9d-488f-819e-107fc1b9692b f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "fb68e4df-7075-4de3-8a2e-3d59677364ce" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:12 compute-2 nova_compute[226829]: 2026-01-31 08:34:12.448 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:12 compute-2 nova_compute[226829]: 2026-01-31 08:34:12.576 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:12 compute-2 nova_compute[226829]: 2026-01-31 08:34:12.576 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:34:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:12.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3283750753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:34:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:13 compute-2 nova_compute[226829]: 2026-01-31 08:34:13.914 226833 DEBUG nova.compute.manager [req-c5ce3b0a-1635-402b-9295-9c354685124f req-2bd48396-6d2e-4488-a953-78dd92589389 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Received event network-vif-plugged-7c80f45f-1c7d-43ab-b526-0caeb530f258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:34:13 compute-2 nova_compute[226829]: 2026-01-31 08:34:13.915 226833 DEBUG oslo_concurrency.lockutils [req-c5ce3b0a-1635-402b-9295-9c354685124f req-2bd48396-6d2e-4488-a953-78dd92589389 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "fb68e4df-7075-4de3-8a2e-3d59677364ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:13 compute-2 nova_compute[226829]: 2026-01-31 08:34:13.915 226833 DEBUG oslo_concurrency.lockutils [req-c5ce3b0a-1635-402b-9295-9c354685124f req-2bd48396-6d2e-4488-a953-78dd92589389 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "fb68e4df-7075-4de3-8a2e-3d59677364ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:13 compute-2 nova_compute[226829]: 2026-01-31 08:34:13.915 226833 DEBUG oslo_concurrency.lockutils [req-c5ce3b0a-1635-402b-9295-9c354685124f req-2bd48396-6d2e-4488-a953-78dd92589389 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "fb68e4df-7075-4de3-8a2e-3d59677364ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:13 compute-2 nova_compute[226829]: 2026-01-31 08:34:13.916 226833 DEBUG nova.compute.manager [req-c5ce3b0a-1635-402b-9295-9c354685124f req-2bd48396-6d2e-4488-a953-78dd92589389 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] No waiting events found dispatching network-vif-plugged-7c80f45f-1c7d-43ab-b526-0caeb530f258 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:34:13 compute-2 nova_compute[226829]: 2026-01-31 08:34:13.916 226833 WARNING nova.compute.manager [req-c5ce3b0a-1635-402b-9295-9c354685124f req-2bd48396-6d2e-4488-a953-78dd92589389 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Received unexpected event network-vif-plugged-7c80f45f-1c7d-43ab-b526-0caeb530f258 for instance with vm_state active and task_state None.
Jan 31 08:34:14 compute-2 ceph-mon[77282]: pgmap v3036: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 367 KiB/s rd, 3.9 MiB/s wr, 122 op/s
Jan 31 08:34:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4035119726' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:34:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:14.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:14 compute-2 sudo[307745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:34:14 compute-2 sudo[307745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:14 compute-2 sudo[307745]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:14 compute-2 sudo[307770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:34:14 compute-2 sudo[307770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:14 compute-2 sudo[307770]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:14.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:15 compute-2 nova_compute[226829]: 2026-01-31 08:34:15.641 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:16 compute-2 ceph-mon[77282]: pgmap v3037: 305 pgs: 305 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1013 KiB/s rd, 3.4 MiB/s wr, 121 op/s
Jan 31 08:34:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:34:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:16.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:34:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:16.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:17 compute-2 nova_compute[226829]: 2026-01-31 08:34:17.450 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:17 compute-2 ceph-mon[77282]: pgmap v3038: 305 pgs: 305 active+clean; 339 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.0 MiB/s wr, 182 op/s
Jan 31 08:34:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:18.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:18.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:19 compute-2 ceph-mon[77282]: pgmap v3039: 305 pgs: 305 active+clean; 339 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 147 op/s
Jan 31 08:34:19 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/148716107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:34:19 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3577396168' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:34:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:20.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:20 compute-2 nova_compute[226829]: 2026-01-31 08:34:20.653 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:20.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:21 compute-2 ceph-mon[77282]: pgmap v3040: 305 pgs: 305 active+clean; 339 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.2 MiB/s wr, 177 op/s
Jan 31 08:34:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:22.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:22 compute-2 nova_compute[226829]: 2026-01-31 08:34:22.450 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:22.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:23 compute-2 ceph-mon[77282]: pgmap v3041: 305 pgs: 305 active+clean; 339 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 175 op/s
Jan 31 08:34:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:24.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:24 compute-2 ovn_controller[133834]: 2026-01-31T08:34:24Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ad:d4:5c 10.100.0.10
Jan 31 08:34:24 compute-2 ovn_controller[133834]: 2026-01-31T08:34:24Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ad:d4:5c 10.100.0.10
Jan 31 08:34:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:24.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:25 compute-2 nova_compute[226829]: 2026-01-31 08:34:25.690 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:26.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:26 compute-2 ceph-mon[77282]: pgmap v3042: 305 pgs: 305 active+clean; 346 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 181 op/s
Jan 31 08:34:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:26.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:27 compute-2 nova_compute[226829]: 2026-01-31 08:34:27.451 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:27 compute-2 ceph-mon[77282]: pgmap v3043: 305 pgs: 305 active+clean; 369 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 3.8 MiB/s wr, 272 op/s
Jan 31 08:34:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:28.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:34:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:28.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:34:30 compute-2 ceph-mon[77282]: pgmap v3044: 305 pgs: 305 active+clean; 363 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.2 MiB/s wr, 271 op/s
Jan 31 08:34:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:30.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:30 compute-2 nova_compute[226829]: 2026-01-31 08:34:30.693 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:30.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:31 compute-2 podman[307804]: 2026-01-31 08:34:31.219989351 +0000 UTC m=+0.101245228 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:34:32 compute-2 ceph-mon[77282]: pgmap v3045: 305 pgs: 305 active+clean; 357 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 2.8 MiB/s wr, 315 op/s
Jan 31 08:34:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:32.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:32 compute-2 nova_compute[226829]: 2026-01-31 08:34:32.454 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:32.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:34 compute-2 ceph-mon[77282]: pgmap v3046: 305 pgs: 305 active+clean; 348 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 3.8 MiB/s wr, 309 op/s
Jan 31 08:34:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:34.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:34 compute-2 sudo[307832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:34:34 compute-2 sudo[307832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:34 compute-2 sudo[307832]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:34 compute-2 sudo[307857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:34:34 compute-2 sudo[307857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:34 compute-2 sudo[307857]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:34.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3017647581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/206312902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:35 compute-2 nova_compute[226829]: 2026-01-31 08:34:35.695 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:36.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:36 compute-2 ceph-mon[77282]: pgmap v3047: 305 pgs: 305 active+clean; 355 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.2 MiB/s wr, 289 op/s
Jan 31 08:34:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:36.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:37 compute-2 nova_compute[226829]: 2026-01-31 08:34:37.457 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:34:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:38.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:34:38 compute-2 ceph-mon[77282]: pgmap v3048: 305 pgs: 305 active+clean; 359 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.6 MiB/s wr, 289 op/s
Jan 31 08:34:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:38.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:39 compute-2 podman[307884]: 2026-01-31 08:34:39.16966523 +0000 UTC m=+0.054852549 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:34:39 compute-2 ceph-mon[77282]: pgmap v3049: 305 pgs: 305 active+clean; 359 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 180 op/s
Jan 31 08:34:39 compute-2 nova_compute[226829]: 2026-01-31 08:34:39.937 226833 DEBUG oslo_concurrency.lockutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Acquiring lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:39 compute-2 nova_compute[226829]: 2026-01-31 08:34:39.938 226833 DEBUG oslo_concurrency.lockutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:34:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:40.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:34:40 compute-2 nova_compute[226829]: 2026-01-31 08:34:40.697 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:40 compute-2 nova_compute[226829]: 2026-01-31 08:34:40.763 226833 DEBUG nova.compute.manager [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:34:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:40.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:41 compute-2 ceph-mon[77282]: pgmap v3050: 305 pgs: 305 active+clean; 340 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.9 MiB/s wr, 162 op/s
Jan 31 08:34:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:42.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:42 compute-2 nova_compute[226829]: 2026-01-31 08:34:42.423 226833 DEBUG oslo_concurrency.lockutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:42 compute-2 nova_compute[226829]: 2026-01-31 08:34:42.424 226833 DEBUG oslo_concurrency.lockutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:42 compute-2 nova_compute[226829]: 2026-01-31 08:34:42.435 226833 DEBUG nova.virt.hardware [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:34:42 compute-2 nova_compute[226829]: 2026-01-31 08:34:42.436 226833 INFO nova.compute.claims [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:34:42 compute-2 nova_compute[226829]: 2026-01-31 08:34:42.459 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:34:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:42.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:34:43 compute-2 nova_compute[226829]: 2026-01-31 08:34:43.264 226833 DEBUG oslo_concurrency.processutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:43 compute-2 nova_compute[226829]: 2026-01-31 08:34:43.485 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:43.633 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=75, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=74) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:34:43 compute-2 nova_compute[226829]: 2026-01-31 08:34:43.634 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:43.636 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:34:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:34:43 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/163603597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:43 compute-2 nova_compute[226829]: 2026-01-31 08:34:43.689 226833 DEBUG oslo_concurrency.processutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:43 compute-2 nova_compute[226829]: 2026-01-31 08:34:43.697 226833 DEBUG nova.compute.provider_tree [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:34:43 compute-2 nova_compute[226829]: 2026-01-31 08:34:43.936 226833 DEBUG nova.scheduler.client.report [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:34:44 compute-2 ceph-mon[77282]: pgmap v3051: 305 pgs: 305 active+clean; 323 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 739 KiB/s rd, 2.8 MiB/s wr, 143 op/s
Jan 31 08:34:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/163603597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:34:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:44.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:34:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:34:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:44.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:34:45 compute-2 nova_compute[226829]: 2026-01-31 08:34:45.699 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:46.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:46 compute-2 ceph-mon[77282]: pgmap v3052: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 614 KiB/s rd, 2.4 MiB/s wr, 121 op/s
Jan 31 08:34:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:34:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:46.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:34:47 compute-2 ceph-mon[77282]: pgmap v3053: 305 pgs: 305 active+clean; 327 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 612 KiB/s rd, 1.9 MiB/s wr, 116 op/s
Jan 31 08:34:47 compute-2 nova_compute[226829]: 2026-01-31 08:34:47.460 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:47 compute-2 nova_compute[226829]: 2026-01-31 08:34:47.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:47 compute-2 nova_compute[226829]: 2026-01-31 08:34:47.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:34:47 compute-2 nova_compute[226829]: 2026-01-31 08:34:47.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:34:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:48.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:48 compute-2 nova_compute[226829]: 2026-01-31 08:34:48.864 226833 DEBUG oslo_concurrency.lockutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 6.440s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:48 compute-2 nova_compute[226829]: 2026-01-31 08:34:48.865 226833 DEBUG nova.compute.manager [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:34:48 compute-2 nova_compute[226829]: 2026-01-31 08:34:48.901 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 08:34:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:48.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:49 compute-2 nova_compute[226829]: 2026-01-31 08:34:49.133 226833 DEBUG nova.compute.manager [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:34:49 compute-2 nova_compute[226829]: 2026-01-31 08:34:49.134 226833 DEBUG nova.network.neutron [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:34:49 compute-2 nova_compute[226829]: 2026-01-31 08:34:49.237 226833 INFO nova.virt.libvirt.driver [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:34:49 compute-2 nova_compute[226829]: 2026-01-31 08:34:49.461 226833 DEBUG nova.compute.manager [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:34:49 compute-2 ceph-mon[77282]: pgmap v3054: 305 pgs: 305 active+clean; 327 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 559 KiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 31 08:34:49 compute-2 nova_compute[226829]: 2026-01-31 08:34:49.989 226833 DEBUG nova.compute.manager [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:34:49 compute-2 nova_compute[226829]: 2026-01-31 08:34:49.991 226833 DEBUG nova.virt.libvirt.driver [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:34:49 compute-2 nova_compute[226829]: 2026-01-31 08:34:49.992 226833 INFO nova.virt.libvirt.driver [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Creating image(s)
Jan 31 08:34:50 compute-2 nova_compute[226829]: 2026-01-31 08:34:50.026 226833 DEBUG nova.storage.rbd_utils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] rbd image e93a14b8-ef43-4615-a9c8-8b2df69d20d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:34:50 compute-2 nova_compute[226829]: 2026-01-31 08:34:50.058 226833 DEBUG nova.storage.rbd_utils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] rbd image e93a14b8-ef43-4615-a9c8-8b2df69d20d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:34:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:34:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:50.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:34:50 compute-2 nova_compute[226829]: 2026-01-31 08:34:50.092 226833 DEBUG nova.storage.rbd_utils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] rbd image e93a14b8-ef43-4615-a9c8-8b2df69d20d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:34:50 compute-2 nova_compute[226829]: 2026-01-31 08:34:50.099 226833 DEBUG oslo_concurrency.processutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:50 compute-2 nova_compute[226829]: 2026-01-31 08:34:50.139 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-fb68e4df-7075-4de3-8a2e-3d59677364ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:34:50 compute-2 nova_compute[226829]: 2026-01-31 08:34:50.140 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-fb68e4df-7075-4de3-8a2e-3d59677364ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:34:50 compute-2 nova_compute[226829]: 2026-01-31 08:34:50.140 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:34:50 compute-2 nova_compute[226829]: 2026-01-31 08:34:50.140 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid fb68e4df-7075-4de3-8a2e-3d59677364ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:34:50 compute-2 nova_compute[226829]: 2026-01-31 08:34:50.195 226833 DEBUG oslo_concurrency.processutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:50 compute-2 nova_compute[226829]: 2026-01-31 08:34:50.196 226833 DEBUG oslo_concurrency.lockutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:50 compute-2 nova_compute[226829]: 2026-01-31 08:34:50.196 226833 DEBUG oslo_concurrency.lockutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:50 compute-2 nova_compute[226829]: 2026-01-31 08:34:50.196 226833 DEBUG oslo_concurrency.lockutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:50 compute-2 nova_compute[226829]: 2026-01-31 08:34:50.227 226833 DEBUG nova.storage.rbd_utils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] rbd image e93a14b8-ef43-4615-a9c8-8b2df69d20d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:34:50 compute-2 nova_compute[226829]: 2026-01-31 08:34:50.231 226833 DEBUG oslo_concurrency.processutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 e93a14b8-ef43-4615-a9c8-8b2df69d20d6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:50 compute-2 nova_compute[226829]: 2026-01-31 08:34:50.310 226833 DEBUG nova.policy [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb8e3d6cd4094a62b23e39cec9023f18', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '149ccba7b87e4284a3a6462e3a1dace1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:34:50 compute-2 nova_compute[226829]: 2026-01-31 08:34:50.528 226833 DEBUG oslo_concurrency.processutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 e93a14b8-ef43-4615-a9c8-8b2df69d20d6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:50 compute-2 nova_compute[226829]: 2026-01-31 08:34:50.608 226833 DEBUG nova.storage.rbd_utils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] resizing rbd image e93a14b8-ef43-4615-a9c8-8b2df69d20d6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:34:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3721828798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:50 compute-2 nova_compute[226829]: 2026-01-31 08:34:50.747 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:50 compute-2 nova_compute[226829]: 2026-01-31 08:34:50.755 226833 DEBUG nova.objects.instance [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Lazy-loading 'migration_context' on Instance uuid e93a14b8-ef43-4615-a9c8-8b2df69d20d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:34:50 compute-2 nova_compute[226829]: 2026-01-31 08:34:50.836 226833 DEBUG nova.virt.libvirt.driver [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:34:50 compute-2 nova_compute[226829]: 2026-01-31 08:34:50.837 226833 DEBUG nova.virt.libvirt.driver [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Ensure instance console log exists: /var/lib/nova/instances/e93a14b8-ef43-4615-a9c8-8b2df69d20d6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:34:50 compute-2 nova_compute[226829]: 2026-01-31 08:34:50.837 226833 DEBUG oslo_concurrency.lockutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:50 compute-2 nova_compute[226829]: 2026-01-31 08:34:50.838 226833 DEBUG oslo_concurrency.lockutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:50 compute-2 nova_compute[226829]: 2026-01-31 08:34:50.839 226833 DEBUG oslo_concurrency.lockutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:50.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:51 compute-2 sudo[308097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:34:51 compute-2 sudo[308097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:51 compute-2 sudo[308097]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:51 compute-2 sudo[308122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:34:51 compute-2 sudo[308122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:51 compute-2 sudo[308122]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:51 compute-2 sudo[308147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:34:51 compute-2 sudo[308147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:51 compute-2 sudo[308147]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:51 compute-2 sudo[308172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:34:51 compute-2 sudo[308172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:51 compute-2 nova_compute[226829]: 2026-01-31 08:34:51.409 226833 DEBUG oslo_concurrency.lockutils [None req-ca4255e8-f479-4c35-a294-3a61d0f673a1 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquiring lock "fb68e4df-7075-4de3-8a2e-3d59677364ce" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:51 compute-2 nova_compute[226829]: 2026-01-31 08:34:51.410 226833 DEBUG oslo_concurrency.lockutils [None req-ca4255e8-f479-4c35-a294-3a61d0f673a1 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "fb68e4df-7075-4de3-8a2e-3d59677364ce" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:51 compute-2 nova_compute[226829]: 2026-01-31 08:34:51.410 226833 DEBUG oslo_concurrency.lockutils [None req-ca4255e8-f479-4c35-a294-3a61d0f673a1 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquiring lock "fb68e4df-7075-4de3-8a2e-3d59677364ce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:51 compute-2 nova_compute[226829]: 2026-01-31 08:34:51.410 226833 DEBUG oslo_concurrency.lockutils [None req-ca4255e8-f479-4c35-a294-3a61d0f673a1 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "fb68e4df-7075-4de3-8a2e-3d59677364ce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:51 compute-2 nova_compute[226829]: 2026-01-31 08:34:51.410 226833 DEBUG oslo_concurrency.lockutils [None req-ca4255e8-f479-4c35-a294-3a61d0f673a1 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "fb68e4df-7075-4de3-8a2e-3d59677364ce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:51 compute-2 nova_compute[226829]: 2026-01-31 08:34:51.412 226833 INFO nova.compute.manager [None req-ca4255e8-f479-4c35-a294-3a61d0f673a1 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Terminating instance
Jan 31 08:34:51 compute-2 nova_compute[226829]: 2026-01-31 08:34:51.413 226833 DEBUG nova.compute.manager [None req-ca4255e8-f479-4c35-a294-3a61d0f673a1 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:34:51 compute-2 kernel: tap7c80f45f-1c (unregistering): left promiscuous mode
Jan 31 08:34:51 compute-2 NetworkManager[48999]: <info>  [1769848491.4759] device (tap7c80f45f-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:34:51 compute-2 nova_compute[226829]: 2026-01-31 08:34:51.475 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:51 compute-2 ovn_controller[133834]: 2026-01-31T08:34:51Z|00709|binding|INFO|Releasing lport 7c80f45f-1c7d-43ab-b526-0caeb530f258 from this chassis (sb_readonly=0)
Jan 31 08:34:51 compute-2 ovn_controller[133834]: 2026-01-31T08:34:51Z|00710|binding|INFO|Setting lport 7c80f45f-1c7d-43ab-b526-0caeb530f258 down in Southbound
Jan 31 08:34:51 compute-2 nova_compute[226829]: 2026-01-31 08:34:51.484 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:51 compute-2 ovn_controller[133834]: 2026-01-31T08:34:51Z|00711|binding|INFO|Removing iface tap7c80f45f-1c ovn-installed in OVS
Jan 31 08:34:51 compute-2 nova_compute[226829]: 2026-01-31 08:34:51.486 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:51 compute-2 nova_compute[226829]: 2026-01-31 08:34:51.491 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:51.510 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:d4:5c 10.100.0.10'], port_security=['fa:16:3e:ad:d4:5c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'fb68e4df-7075-4de3-8a2e-3d59677364ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd69b70b0e4e340758cc43d45c1113d2f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c7793481-993b-4688-8da1-03fa2a12b295', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3f6c418e-e1df-4f83-9060-d1845d483973, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=7c80f45f-1c7d-43ab-b526-0caeb530f258) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:34:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:51.511 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 7c80f45f-1c7d-43ab-b526-0caeb530f258 in datapath 786b4c20-d3c9-4eba-b2c7-0e7b9805b52d unbound from our chassis
Jan 31 08:34:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:51.513 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 786b4c20-d3c9-4eba-b2c7-0e7b9805b52d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:34:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:51.514 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1752a572-b43a-40aa-94e3-218192be5eaa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:51.515 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d namespace which is not needed anymore
Jan 31 08:34:51 compute-2 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000af.scope: Deactivated successfully.
Jan 31 08:34:51 compute-2 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000af.scope: Consumed 13.636s CPU time.
Jan 31 08:34:51 compute-2 systemd-machined[195142]: Machine qemu-81-instance-000000af terminated.
Jan 31 08:34:51 compute-2 neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d[307727]: [NOTICE]   (307731) : haproxy version is 2.8.14-c23fe91
Jan 31 08:34:51 compute-2 neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d[307727]: [NOTICE]   (307731) : path to executable is /usr/sbin/haproxy
Jan 31 08:34:51 compute-2 neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d[307727]: [WARNING]  (307731) : Exiting Master process...
Jan 31 08:34:51 compute-2 neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d[307727]: [ALERT]    (307731) : Current worker (307733) exited with code 143 (Terminated)
Jan 31 08:34:51 compute-2 neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d[307727]: [WARNING]  (307731) : All workers exited. Exiting... (0)
Jan 31 08:34:51 compute-2 systemd[1]: libpod-b93b4706be25cc4f15bc3e1594409b3b08aa688917f8ccb107d172564cec9eba.scope: Deactivated successfully.
Jan 31 08:34:51 compute-2 sudo[308172]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:51 compute-2 podman[308241]: 2026-01-31 08:34:51.627378761 +0000 UTC m=+0.042216727 container died b93b4706be25cc4f15bc3e1594409b3b08aa688917f8ccb107d172564cec9eba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 08:34:51 compute-2 nova_compute[226829]: 2026-01-31 08:34:51.630 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:51 compute-2 nova_compute[226829]: 2026-01-31 08:34:51.634 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:51 compute-2 ceph-mon[77282]: pgmap v3055: 305 pgs: 305 active+clean; 327 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 469 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Jan 31 08:34:51 compute-2 nova_compute[226829]: 2026-01-31 08:34:51.642 226833 INFO nova.virt.libvirt.driver [-] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Instance destroyed successfully.
Jan 31 08:34:51 compute-2 nova_compute[226829]: 2026-01-31 08:34:51.643 226833 DEBUG nova.objects.instance [None req-ca4255e8-f479-4c35-a294-3a61d0f673a1 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lazy-loading 'resources' on Instance uuid fb68e4df-7075-4de3-8a2e-3d59677364ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:34:51 compute-2 systemd[1]: var-lib-containers-storage-overlay-a81ef3d78de29e10a3a0f48f28c80397e8e687032913977f9e964fef448adbf9-merged.mount: Deactivated successfully.
Jan 31 08:34:51 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b93b4706be25cc4f15bc3e1594409b3b08aa688917f8ccb107d172564cec9eba-userdata-shm.mount: Deactivated successfully.
Jan 31 08:34:51 compute-2 podman[308241]: 2026-01-31 08:34:51.665864585 +0000 UTC m=+0.080702551 container cleanup b93b4706be25cc4f15bc3e1594409b3b08aa688917f8ccb107d172564cec9eba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 08:34:51 compute-2 nova_compute[226829]: 2026-01-31 08:34:51.672 226833 DEBUG nova.virt.libvirt.vif [None req-ca4255e8-f479-4c35-a294-3a61d0f673a1 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:33:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerMultinode-server-891765106',display_name='tempest-TestServerMultinode-server-891765106',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testservermultinode-server-891765106',id=175,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:34:12Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d69b70b0e4e340758cc43d45c1113d2f',ramdisk_id='',reservation_id='r-df19z5fl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-TestServerMultinode-2117392928',owner_user_name='tempest-TestServerMultinode-2117392928-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:34:12Z,user_data=None,user_id='f601ac628957410b995fa67e240e4871',uuid=fb68e4df-7075-4de3-8a2e-3d59677364ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7c80f45f-1c7d-43ab-b526-0caeb530f258", "address": "fa:16:3e:ad:d4:5c", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c80f45f-1c", "ovs_interfaceid": "7c80f45f-1c7d-43ab-b526-0caeb530f258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:34:51 compute-2 systemd[1]: libpod-conmon-b93b4706be25cc4f15bc3e1594409b3b08aa688917f8ccb107d172564cec9eba.scope: Deactivated successfully.
Jan 31 08:34:51 compute-2 nova_compute[226829]: 2026-01-31 08:34:51.673 226833 DEBUG nova.network.os_vif_util [None req-ca4255e8-f479-4c35-a294-3a61d0f673a1 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Converting VIF {"id": "7c80f45f-1c7d-43ab-b526-0caeb530f258", "address": "fa:16:3e:ad:d4:5c", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c80f45f-1c", "ovs_interfaceid": "7c80f45f-1c7d-43ab-b526-0caeb530f258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:34:51 compute-2 nova_compute[226829]: 2026-01-31 08:34:51.674 226833 DEBUG nova.network.os_vif_util [None req-ca4255e8-f479-4c35-a294-3a61d0f673a1 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:d4:5c,bridge_name='br-int',has_traffic_filtering=True,id=7c80f45f-1c7d-43ab-b526-0caeb530f258,network=Network(786b4c20-d3c9-4eba-b2c7-0e7b9805b52d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c80f45f-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:34:51 compute-2 nova_compute[226829]: 2026-01-31 08:34:51.675 226833 DEBUG os_vif [None req-ca4255e8-f479-4c35-a294-3a61d0f673a1 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:d4:5c,bridge_name='br-int',has_traffic_filtering=True,id=7c80f45f-1c7d-43ab-b526-0caeb530f258,network=Network(786b4c20-d3c9-4eba-b2c7-0e7b9805b52d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c80f45f-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:34:51 compute-2 nova_compute[226829]: 2026-01-31 08:34:51.678 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:51 compute-2 nova_compute[226829]: 2026-01-31 08:34:51.678 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c80f45f-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:34:51 compute-2 podman[308293]: 2026-01-31 08:34:51.719056769 +0000 UTC m=+0.037543091 container remove b93b4706be25cc4f15bc3e1594409b3b08aa688917f8ccb107d172564cec9eba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 08:34:51 compute-2 nova_compute[226829]: 2026-01-31 08:34:51.726 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:51.726 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ef91f830-3834-407f-b494-f8cf4a708795]: (4, ('Sat Jan 31 08:34:51 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d (b93b4706be25cc4f15bc3e1594409b3b08aa688917f8ccb107d172564cec9eba)\nb93b4706be25cc4f15bc3e1594409b3b08aa688917f8ccb107d172564cec9eba\nSat Jan 31 08:34:51 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d (b93b4706be25cc4f15bc3e1594409b3b08aa688917f8ccb107d172564cec9eba)\nb93b4706be25cc4f15bc3e1594409b3b08aa688917f8ccb107d172564cec9eba\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:51.728 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8b3f8a25-9ddc-42fd-ae28-6f4b2e54addf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:51 compute-2 nova_compute[226829]: 2026-01-31 08:34:51.729 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:34:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:51.729 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap786b4c20-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:34:51 compute-2 nova_compute[226829]: 2026-01-31 08:34:51.730 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:51 compute-2 nova_compute[226829]: 2026-01-31 08:34:51.732 226833 INFO os_vif [None req-ca4255e8-f479-4c35-a294-3a61d0f673a1 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:d4:5c,bridge_name='br-int',has_traffic_filtering=True,id=7c80f45f-1c7d-43ab-b526-0caeb530f258,network=Network(786b4c20-d3c9-4eba-b2c7-0e7b9805b52d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c80f45f-1c')
Jan 31 08:34:51 compute-2 kernel: tap786b4c20-d0: left promiscuous mode
Jan 31 08:34:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:51.741 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[45f75e1c-7a6b-4ebb-a6a9-b25a85bea17a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:51 compute-2 nova_compute[226829]: 2026-01-31 08:34:51.752 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:51.760 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[153d81c1-aaad-4842-a1b5-fc6a511b8641]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:51.762 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ba42ea7f-a306-47c0-909e-1d251adef1b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:51.773 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7a2d62d9-c48b-4282-8d6c-b326368e744e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870152, 'reachable_time': 35769, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308323, 'error': None, 'target': 'ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:51 compute-2 systemd[1]: run-netns-ovnmeta\x2d786b4c20\x2dd3c9\x2d4eba\x2db2c7\x2d0e7b9805b52d.mount: Deactivated successfully.
Jan 31 08:34:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:51.778 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:34:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:51.778 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[f0ba1862-8902-4d06-92f5-20d6474360ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:52.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:52 compute-2 nova_compute[226829]: 2026-01-31 08:34:52.167 226833 INFO nova.virt.libvirt.driver [None req-ca4255e8-f479-4c35-a294-3a61d0f673a1 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Deleting instance files /var/lib/nova/instances/fb68e4df-7075-4de3-8a2e-3d59677364ce_del
Jan 31 08:34:52 compute-2 nova_compute[226829]: 2026-01-31 08:34:52.168 226833 INFO nova.virt.libvirt.driver [None req-ca4255e8-f479-4c35-a294-3a61d0f673a1 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Deletion of /var/lib/nova/instances/fb68e4df-7075-4de3-8a2e-3d59677364ce_del complete
Jan 31 08:34:52 compute-2 nova_compute[226829]: 2026-01-31 08:34:52.396 226833 INFO nova.compute.manager [None req-ca4255e8-f479-4c35-a294-3a61d0f673a1 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Took 0.98 seconds to destroy the instance on the hypervisor.
Jan 31 08:34:52 compute-2 nova_compute[226829]: 2026-01-31 08:34:52.397 226833 DEBUG oslo.service.loopingcall [None req-ca4255e8-f479-4c35-a294-3a61d0f673a1 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:34:52 compute-2 nova_compute[226829]: 2026-01-31 08:34:52.398 226833 DEBUG nova.compute.manager [-] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:34:52 compute-2 nova_compute[226829]: 2026-01-31 08:34:52.399 226833 DEBUG nova.network.neutron [-] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:34:52 compute-2 nova_compute[226829]: 2026-01-31 08:34:52.465 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:52 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:34:52.638 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '75'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:34:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:52.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:53 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:34:53 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:34:53 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:34:53 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:34:53 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:34:53 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:34:53 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:34:53 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:34:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:53 compute-2 nova_compute[226829]: 2026-01-31 08:34:53.915 226833 DEBUG nova.compute.manager [req-d5efdc67-0e2c-42d8-b644-3043f8ef236f req-a59988db-ab64-4b51-bf38-d7df50998c55 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Received event network-vif-unplugged-7c80f45f-1c7d-43ab-b526-0caeb530f258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:34:53 compute-2 nova_compute[226829]: 2026-01-31 08:34:53.916 226833 DEBUG oslo_concurrency.lockutils [req-d5efdc67-0e2c-42d8-b644-3043f8ef236f req-a59988db-ab64-4b51-bf38-d7df50998c55 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "fb68e4df-7075-4de3-8a2e-3d59677364ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:53 compute-2 nova_compute[226829]: 2026-01-31 08:34:53.916 226833 DEBUG oslo_concurrency.lockutils [req-d5efdc67-0e2c-42d8-b644-3043f8ef236f req-a59988db-ab64-4b51-bf38-d7df50998c55 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "fb68e4df-7075-4de3-8a2e-3d59677364ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:53 compute-2 nova_compute[226829]: 2026-01-31 08:34:53.916 226833 DEBUG oslo_concurrency.lockutils [req-d5efdc67-0e2c-42d8-b644-3043f8ef236f req-a59988db-ab64-4b51-bf38-d7df50998c55 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "fb68e4df-7075-4de3-8a2e-3d59677364ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:53 compute-2 nova_compute[226829]: 2026-01-31 08:34:53.917 226833 DEBUG nova.compute.manager [req-d5efdc67-0e2c-42d8-b644-3043f8ef236f req-a59988db-ab64-4b51-bf38-d7df50998c55 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] No waiting events found dispatching network-vif-unplugged-7c80f45f-1c7d-43ab-b526-0caeb530f258 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:34:53 compute-2 nova_compute[226829]: 2026-01-31 08:34:53.917 226833 DEBUG nova.compute.manager [req-d5efdc67-0e2c-42d8-b644-3043f8ef236f req-a59988db-ab64-4b51-bf38-d7df50998c55 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Received event network-vif-unplugged-7c80f45f-1c7d-43ab-b526-0caeb530f258 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:34:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:54.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:54 compute-2 nova_compute[226829]: 2026-01-31 08:34:54.111 226833 DEBUG nova.network.neutron [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Successfully created port: 7bd650c8-4238-4749-a373-22597e7927ff _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:34:54 compute-2 ceph-mon[77282]: pgmap v3056: 305 pgs: 305 active+clean; 347 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 132 KiB/s rd, 1.6 MiB/s wr, 43 op/s
Jan 31 08:34:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1367228748' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:34:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1367228748' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:34:54 compute-2 sudo[308330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:34:54 compute-2 sudo[308330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:54 compute-2 sudo[308330]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:54 compute-2 sudo[308355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:34:54 compute-2 sudo[308355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:54 compute-2 sudo[308355]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:54 compute-2 nova_compute[226829]: 2026-01-31 08:34:54.652 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Updating instance_info_cache with network_info: [{"id": "7c80f45f-1c7d-43ab-b526-0caeb530f258", "address": "fa:16:3e:ad:d4:5c", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c80f45f-1c", "ovs_interfaceid": "7c80f45f-1c7d-43ab-b526-0caeb530f258", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:34:54 compute-2 nova_compute[226829]: 2026-01-31 08:34:54.722 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-fb68e4df-7075-4de3-8a2e-3d59677364ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:34:54 compute-2 nova_compute[226829]: 2026-01-31 08:34:54.722 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:34:54 compute-2 nova_compute[226829]: 2026-01-31 08:34:54.723 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:54 compute-2 nova_compute[226829]: 2026-01-31 08:34:54.723 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:54 compute-2 nova_compute[226829]: 2026-01-31 08:34:54.723 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:54 compute-2 nova_compute[226829]: 2026-01-31 08:34:54.723 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:54 compute-2 nova_compute[226829]: 2026-01-31 08:34:54.723 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:54.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:54 compute-2 nova_compute[226829]: 2026-01-31 08:34:54.993 226833 DEBUG nova.network.neutron [-] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:34:55 compute-2 nova_compute[226829]: 2026-01-31 08:34:55.099 226833 INFO nova.compute.manager [-] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Took 2.70 seconds to deallocate network for instance.
Jan 31 08:34:55 compute-2 nova_compute[226829]: 2026-01-31 08:34:55.114 226833 DEBUG nova.compute.manager [req-831e4895-a17c-4a80-a3e3-748b94d1e9c2 req-8a8694b1-62fc-44f0-98d9-91bd7b03fd33 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Received event network-vif-deleted-7c80f45f-1c7d-43ab-b526-0caeb530f258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:34:55 compute-2 nova_compute[226829]: 2026-01-31 08:34:55.114 226833 INFO nova.compute.manager [req-831e4895-a17c-4a80-a3e3-748b94d1e9c2 req-8a8694b1-62fc-44f0-98d9-91bd7b03fd33 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Neutron deleted interface 7c80f45f-1c7d-43ab-b526-0caeb530f258; detaching it from the instance and deleting it from the info cache
Jan 31 08:34:55 compute-2 nova_compute[226829]: 2026-01-31 08:34:55.115 226833 DEBUG nova.network.neutron [req-831e4895-a17c-4a80-a3e3-748b94d1e9c2 req-8a8694b1-62fc-44f0-98d9-91bd7b03fd33 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:34:55 compute-2 nova_compute[226829]: 2026-01-31 08:34:55.188 226833 DEBUG nova.compute.manager [req-831e4895-a17c-4a80-a3e3-748b94d1e9c2 req-8a8694b1-62fc-44f0-98d9-91bd7b03fd33 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Detach interface failed, port_id=7c80f45f-1c7d-43ab-b526-0caeb530f258, reason: Instance fb68e4df-7075-4de3-8a2e-3d59677364ce could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:34:55 compute-2 nova_compute[226829]: 2026-01-31 08:34:55.198 226833 DEBUG oslo_concurrency.lockutils [None req-ca4255e8-f479-4c35-a294-3a61d0f673a1 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:55 compute-2 nova_compute[226829]: 2026-01-31 08:34:55.198 226833 DEBUG oslo_concurrency.lockutils [None req-ca4255e8-f479-4c35-a294-3a61d0f673a1 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:55 compute-2 nova_compute[226829]: 2026-01-31 08:34:55.334 226833 DEBUG oslo_concurrency.processutils [None req-ca4255e8-f479-4c35-a294-3a61d0f673a1 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4053869991' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:34:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3255376033' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:34:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2109295381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3869672271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:34:55 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3423816108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:55 compute-2 nova_compute[226829]: 2026-01-31 08:34:55.791 226833 DEBUG oslo_concurrency.processutils [None req-ca4255e8-f479-4c35-a294-3a61d0f673a1 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:55 compute-2 nova_compute[226829]: 2026-01-31 08:34:55.796 226833 DEBUG nova.compute.provider_tree [None req-ca4255e8-f479-4c35-a294-3a61d0f673a1 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:34:55 compute-2 nova_compute[226829]: 2026-01-31 08:34:55.881 226833 DEBUG nova.scheduler.client.report [None req-ca4255e8-f479-4c35-a294-3a61d0f673a1 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:34:55 compute-2 nova_compute[226829]: 2026-01-31 08:34:55.962 226833 DEBUG oslo_concurrency.lockutils [None req-ca4255e8-f479-4c35-a294-3a61d0f673a1 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:34:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:56.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:34:56 compute-2 nova_compute[226829]: 2026-01-31 08:34:56.622 226833 INFO nova.scheduler.client.report [None req-ca4255e8-f479-4c35-a294-3a61d0f673a1 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Deleted allocations for instance fb68e4df-7075-4de3-8a2e-3d59677364ce
Jan 31 08:34:56 compute-2 nova_compute[226829]: 2026-01-31 08:34:56.727 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:56 compute-2 nova_compute[226829]: 2026-01-31 08:34:56.766 226833 DEBUG nova.compute.manager [req-70ec3e9c-6e60-4570-8a4e-dcacf3d77eb8 req-1f70358e-43e4-411b-b8d4-bc8e0613d9c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Received event network-vif-plugged-7c80f45f-1c7d-43ab-b526-0caeb530f258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:34:56 compute-2 nova_compute[226829]: 2026-01-31 08:34:56.766 226833 DEBUG oslo_concurrency.lockutils [req-70ec3e9c-6e60-4570-8a4e-dcacf3d77eb8 req-1f70358e-43e4-411b-b8d4-bc8e0613d9c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "fb68e4df-7075-4de3-8a2e-3d59677364ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:56 compute-2 nova_compute[226829]: 2026-01-31 08:34:56.766 226833 DEBUG oslo_concurrency.lockutils [req-70ec3e9c-6e60-4570-8a4e-dcacf3d77eb8 req-1f70358e-43e4-411b-b8d4-bc8e0613d9c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "fb68e4df-7075-4de3-8a2e-3d59677364ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:56 compute-2 nova_compute[226829]: 2026-01-31 08:34:56.768 226833 DEBUG oslo_concurrency.lockutils [req-70ec3e9c-6e60-4570-8a4e-dcacf3d77eb8 req-1f70358e-43e4-411b-b8d4-bc8e0613d9c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "fb68e4df-7075-4de3-8a2e-3d59677364ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:56 compute-2 nova_compute[226829]: 2026-01-31 08:34:56.768 226833 DEBUG nova.compute.manager [req-70ec3e9c-6e60-4570-8a4e-dcacf3d77eb8 req-1f70358e-43e4-411b-b8d4-bc8e0613d9c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] No waiting events found dispatching network-vif-plugged-7c80f45f-1c7d-43ab-b526-0caeb530f258 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:34:56 compute-2 nova_compute[226829]: 2026-01-31 08:34:56.769 226833 WARNING nova.compute.manager [req-70ec3e9c-6e60-4570-8a4e-dcacf3d77eb8 req-1f70358e-43e4-411b-b8d4-bc8e0613d9c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Received unexpected event network-vif-plugged-7c80f45f-1c7d-43ab-b526-0caeb530f258 for instance with vm_state deleted and task_state None.
Jan 31 08:34:56 compute-2 ceph-mon[77282]: pgmap v3057: 305 pgs: 305 active+clean; 304 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.9 KiB/s rd, 1.5 MiB/s wr, 16 op/s
Jan 31 08:34:56 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3423816108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:56 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2292615389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:56 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1064002843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:56.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:56 compute-2 nova_compute[226829]: 2026-01-31 08:34:56.988 226833 DEBUG oslo_concurrency.lockutils [None req-ca4255e8-f479-4c35-a294-3a61d0f673a1 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "fb68e4df-7075-4de3-8a2e-3d59677364ce" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:57 compute-2 nova_compute[226829]: 2026-01-31 08:34:57.464 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:57 compute-2 nova_compute[226829]: 2026-01-31 08:34:57.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:57 compute-2 nova_compute[226829]: 2026-01-31 08:34:57.575 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:57 compute-2 nova_compute[226829]: 2026-01-31 08:34:57.576 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:57 compute-2 nova_compute[226829]: 2026-01-31 08:34:57.577 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:57 compute-2 nova_compute[226829]: 2026-01-31 08:34:57.577 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:34:57 compute-2 nova_compute[226829]: 2026-01-31 08:34:57.577 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:34:57 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1951587907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:58 compute-2 nova_compute[226829]: 2026-01-31 08:34:58.009 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:58 compute-2 ceph-mon[77282]: pgmap v3058: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Jan 31 08:34:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4220231484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:58.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:58 compute-2 nova_compute[226829]: 2026-01-31 08:34:58.168 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:34:58 compute-2 nova_compute[226829]: 2026-01-31 08:34:58.170 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4101MB free_disk=20.946773529052734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:34:58 compute-2 nova_compute[226829]: 2026-01-31 08:34:58.171 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:58 compute-2 nova_compute[226829]: 2026-01-31 08:34:58.171 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:58 compute-2 nova_compute[226829]: 2026-01-31 08:34:58.358 226833 DEBUG nova.network.neutron [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Successfully updated port: 7bd650c8-4238-4749-a373-22597e7927ff _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:34:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:58 compute-2 nova_compute[226829]: 2026-01-31 08:34:58.654 226833 DEBUG oslo_concurrency.lockutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Acquiring lock "refresh_cache-e93a14b8-ef43-4615-a9c8-8b2df69d20d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:34:58 compute-2 nova_compute[226829]: 2026-01-31 08:34:58.654 226833 DEBUG oslo_concurrency.lockutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Acquired lock "refresh_cache-e93a14b8-ef43-4615-a9c8-8b2df69d20d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:34:58 compute-2 nova_compute[226829]: 2026-01-31 08:34:58.655 226833 DEBUG nova.network.neutron [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:34:58 compute-2 nova_compute[226829]: 2026-01-31 08:34:58.676 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance e93a14b8-ef43-4615-a9c8-8b2df69d20d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:34:58 compute-2 nova_compute[226829]: 2026-01-31 08:34:58.676 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:34:58 compute-2 nova_compute[226829]: 2026-01-31 08:34:58.677 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:34:58 compute-2 nova_compute[226829]: 2026-01-31 08:34:58.723 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:34:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:58.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:59 compute-2 nova_compute[226829]: 2026-01-31 08:34:59.006 226833 DEBUG nova.network.neutron [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:34:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1951587907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:59 compute-2 nova_compute[226829]: 2026-01-31 08:34:59.072 226833 DEBUG nova.compute.manager [req-7ca8405a-6abc-4708-9ca7-221e2834f37c req-8bc87148-fdaa-4785-a758-30aa10277d6a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Received event network-changed-7bd650c8-4238-4749-a373-22597e7927ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:34:59 compute-2 nova_compute[226829]: 2026-01-31 08:34:59.073 226833 DEBUG nova.compute.manager [req-7ca8405a-6abc-4708-9ca7-221e2834f37c req-8bc87148-fdaa-4785-a758-30aa10277d6a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Refreshing instance network info cache due to event network-changed-7bd650c8-4238-4749-a373-22597e7927ff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:34:59 compute-2 nova_compute[226829]: 2026-01-31 08:34:59.073 226833 DEBUG oslo_concurrency.lockutils [req-7ca8405a-6abc-4708-9ca7-221e2834f37c req-8bc87148-fdaa-4785-a758-30aa10277d6a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-e93a14b8-ef43-4615-a9c8-8b2df69d20d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:34:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:34:59 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1931554054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:59 compute-2 nova_compute[226829]: 2026-01-31 08:34:59.164 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:59 compute-2 nova_compute[226829]: 2026-01-31 08:34:59.170 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:34:59 compute-2 nova_compute[226829]: 2026-01-31 08:34:59.222 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:34:59 compute-2 nova_compute[226829]: 2026-01-31 08:34:59.331 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:34:59 compute-2 nova_compute[226829]: 2026-01-31 08:34:59.332 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:59 compute-2 sudo[308449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:34:59 compute-2 sudo[308449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:59 compute-2 sudo[308449]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:59 compute-2 sudo[308474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:34:59 compute-2 sudo[308474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:59 compute-2 sudo[308474]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:00 compute-2 ceph-mon[77282]: pgmap v3059: 305 pgs: 305 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 1.8 MiB/s wr, 83 op/s
Jan 31 08:35:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:35:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:35:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1931554054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:35:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:00.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:35:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:00.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:35:01 compute-2 nova_compute[226829]: 2026-01-31 08:35:01.731 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:02.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:02 compute-2 nova_compute[226829]: 2026-01-31 08:35:02.139 226833 DEBUG nova.network.neutron [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Updating instance_info_cache with network_info: [{"id": "7bd650c8-4238-4749-a373-22597e7927ff", "address": "fa:16:3e:83:4f:45", "network": {"id": "52eb33fc-fc1d-4d23-8941-b2b9e959253d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-417866262-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "149ccba7b87e4284a3a6462e3a1dace1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bd650c8-42", "ovs_interfaceid": "7bd650c8-4238-4749-a373-22597e7927ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:35:02 compute-2 ceph-mon[77282]: pgmap v3060: 305 pgs: 305 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 229 KiB/s rd, 1.8 MiB/s wr, 99 op/s
Jan 31 08:35:02 compute-2 podman[308502]: 2026-01-31 08:35:02.190390669 +0000 UTC m=+0.076126556 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:35:02 compute-2 nova_compute[226829]: 2026-01-31 08:35:02.330 226833 DEBUG oslo_concurrency.lockutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Releasing lock "refresh_cache-e93a14b8-ef43-4615-a9c8-8b2df69d20d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:35:02 compute-2 nova_compute[226829]: 2026-01-31 08:35:02.330 226833 DEBUG nova.compute.manager [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Instance network_info: |[{"id": "7bd650c8-4238-4749-a373-22597e7927ff", "address": "fa:16:3e:83:4f:45", "network": {"id": "52eb33fc-fc1d-4d23-8941-b2b9e959253d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-417866262-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "149ccba7b87e4284a3a6462e3a1dace1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bd650c8-42", "ovs_interfaceid": "7bd650c8-4238-4749-a373-22597e7927ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:35:02 compute-2 nova_compute[226829]: 2026-01-31 08:35:02.330 226833 DEBUG oslo_concurrency.lockutils [req-7ca8405a-6abc-4708-9ca7-221e2834f37c req-8bc87148-fdaa-4785-a758-30aa10277d6a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-e93a14b8-ef43-4615-a9c8-8b2df69d20d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:35:02 compute-2 nova_compute[226829]: 2026-01-31 08:35:02.331 226833 DEBUG nova.network.neutron [req-7ca8405a-6abc-4708-9ca7-221e2834f37c req-8bc87148-fdaa-4785-a758-30aa10277d6a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Refreshing network info cache for port 7bd650c8-4238-4749-a373-22597e7927ff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:35:02 compute-2 nova_compute[226829]: 2026-01-31 08:35:02.333 226833 DEBUG nova.virt.libvirt.driver [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Start _get_guest_xml network_info=[{"id": "7bd650c8-4238-4749-a373-22597e7927ff", "address": "fa:16:3e:83:4f:45", "network": {"id": "52eb33fc-fc1d-4d23-8941-b2b9e959253d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-417866262-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "149ccba7b87e4284a3a6462e3a1dace1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bd650c8-42", "ovs_interfaceid": "7bd650c8-4238-4749-a373-22597e7927ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:35:02 compute-2 nova_compute[226829]: 2026-01-31 08:35:02.338 226833 WARNING nova.virt.libvirt.driver [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:35:02 compute-2 nova_compute[226829]: 2026-01-31 08:35:02.341 226833 DEBUG nova.virt.libvirt.host [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:35:02 compute-2 nova_compute[226829]: 2026-01-31 08:35:02.342 226833 DEBUG nova.virt.libvirt.host [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:35:02 compute-2 nova_compute[226829]: 2026-01-31 08:35:02.345 226833 DEBUG nova.virt.libvirt.host [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:35:02 compute-2 nova_compute[226829]: 2026-01-31 08:35:02.345 226833 DEBUG nova.virt.libvirt.host [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:35:02 compute-2 nova_compute[226829]: 2026-01-31 08:35:02.346 226833 DEBUG nova.virt.libvirt.driver [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:35:02 compute-2 nova_compute[226829]: 2026-01-31 08:35:02.346 226833 DEBUG nova.virt.hardware [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:35:02 compute-2 nova_compute[226829]: 2026-01-31 08:35:02.347 226833 DEBUG nova.virt.hardware [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:35:02 compute-2 nova_compute[226829]: 2026-01-31 08:35:02.347 226833 DEBUG nova.virt.hardware [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:35:02 compute-2 nova_compute[226829]: 2026-01-31 08:35:02.347 226833 DEBUG nova.virt.hardware [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:35:02 compute-2 nova_compute[226829]: 2026-01-31 08:35:02.347 226833 DEBUG nova.virt.hardware [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:35:02 compute-2 nova_compute[226829]: 2026-01-31 08:35:02.347 226833 DEBUG nova.virt.hardware [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:35:02 compute-2 nova_compute[226829]: 2026-01-31 08:35:02.348 226833 DEBUG nova.virt.hardware [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:35:02 compute-2 nova_compute[226829]: 2026-01-31 08:35:02.348 226833 DEBUG nova.virt.hardware [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:35:02 compute-2 nova_compute[226829]: 2026-01-31 08:35:02.348 226833 DEBUG nova.virt.hardware [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:35:02 compute-2 nova_compute[226829]: 2026-01-31 08:35:02.348 226833 DEBUG nova.virt.hardware [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:35:02 compute-2 nova_compute[226829]: 2026-01-31 08:35:02.348 226833 DEBUG nova.virt.hardware [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:35:02 compute-2 nova_compute[226829]: 2026-01-31 08:35:02.351 226833 DEBUG oslo_concurrency.processutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:35:02 compute-2 nova_compute[226829]: 2026-01-31 08:35:02.466 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:35:02 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4115752316' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:35:02 compute-2 nova_compute[226829]: 2026-01-31 08:35:02.777 226833 DEBUG oslo_concurrency.processutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:35:02 compute-2 nova_compute[226829]: 2026-01-31 08:35:02.804 226833 DEBUG nova.storage.rbd_utils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] rbd image e93a14b8-ef43-4615-a9c8-8b2df69d20d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:35:02 compute-2 nova_compute[226829]: 2026-01-31 08:35:02.808 226833 DEBUG oslo_concurrency.processutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:35:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:02.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:35:03 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1044282013' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.193 226833 DEBUG oslo_concurrency.processutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.385s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.195 226833 DEBUG nova.virt.libvirt.vif [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:34:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-934828188',display_name='tempest-TestEncryptedCinderVolumes-server-934828188',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-934828188',id=179,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNyos7xhbpDhC1gFz6fqTxaWA5nUvx96vOOlcOqZHvZkPkc7S2Kk8P7fNSNHyu38uFKRbQD2QM+FfTWJdeBwXJke/v3qgX5nHItGsGkDis68eyGL2HY9r53McJv5koXrYg==',key_name='tempest-keypair-829743997',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='149ccba7b87e4284a3a6462e3a1dace1',ramdisk_id='',reservation_id='r-xvyy3y3j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1271777404',owner_user_name='tempest-TestEncryptedCinderVolumes-1271777404-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:34:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb8e3d6cd4094a62b23e39cec9023f18',uuid=e93a14b8-ef43-4615-a9c8-8b2df69d20d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7bd650c8-4238-4749-a373-22597e7927ff", "address": "fa:16:3e:83:4f:45", "network": {"id": "52eb33fc-fc1d-4d23-8941-b2b9e959253d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-417866262-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "149ccba7b87e4284a3a6462e3a1dace1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bd650c8-42", "ovs_interfaceid": "7bd650c8-4238-4749-a373-22597e7927ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.196 226833 DEBUG nova.network.os_vif_util [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Converting VIF {"id": "7bd650c8-4238-4749-a373-22597e7927ff", "address": "fa:16:3e:83:4f:45", "network": {"id": "52eb33fc-fc1d-4d23-8941-b2b9e959253d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-417866262-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "149ccba7b87e4284a3a6462e3a1dace1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bd650c8-42", "ovs_interfaceid": "7bd650c8-4238-4749-a373-22597e7927ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.198 226833 DEBUG nova.network.os_vif_util [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:4f:45,bridge_name='br-int',has_traffic_filtering=True,id=7bd650c8-4238-4749-a373-22597e7927ff,network=Network(52eb33fc-fc1d-4d23-8941-b2b9e959253d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bd650c8-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.200 226833 DEBUG nova.objects.instance [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Lazy-loading 'pci_devices' on Instance uuid e93a14b8-ef43-4615-a9c8-8b2df69d20d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.227 226833 DEBUG nova.virt.libvirt.driver [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:35:03 compute-2 nova_compute[226829]:   <uuid>e93a14b8-ef43-4615-a9c8-8b2df69d20d6</uuid>
Jan 31 08:35:03 compute-2 nova_compute[226829]:   <name>instance-000000b3</name>
Jan 31 08:35:03 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:35:03 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:35:03 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <nova:name>tempest-TestEncryptedCinderVolumes-server-934828188</nova:name>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:35:02</nova:creationTime>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:35:03 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:35:03 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:35:03 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:35:03 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:35:03 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:35:03 compute-2 nova_compute[226829]:         <nova:user uuid="eb8e3d6cd4094a62b23e39cec9023f18">tempest-TestEncryptedCinderVolumes-1271777404-project-member</nova:user>
Jan 31 08:35:03 compute-2 nova_compute[226829]:         <nova:project uuid="149ccba7b87e4284a3a6462e3a1dace1">tempest-TestEncryptedCinderVolumes-1271777404</nova:project>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:35:03 compute-2 nova_compute[226829]:         <nova:port uuid="7bd650c8-4238-4749-a373-22597e7927ff">
Jan 31 08:35:03 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:35:03 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:35:03 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <system>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <entry name="serial">e93a14b8-ef43-4615-a9c8-8b2df69d20d6</entry>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <entry name="uuid">e93a14b8-ef43-4615-a9c8-8b2df69d20d6</entry>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     </system>
Jan 31 08:35:03 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:35:03 compute-2 nova_compute[226829]:   <os>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:   </os>
Jan 31 08:35:03 compute-2 nova_compute[226829]:   <features>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:   </features>
Jan 31 08:35:03 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:35:03 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:35:03 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/e93a14b8-ef43-4615-a9c8-8b2df69d20d6_disk">
Jan 31 08:35:03 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       </source>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:35:03 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/e93a14b8-ef43-4615-a9c8-8b2df69d20d6_disk.config">
Jan 31 08:35:03 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       </source>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:35:03 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:83:4f:45"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <target dev="tap7bd650c8-42"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/e93a14b8-ef43-4615-a9c8-8b2df69d20d6/console.log" append="off"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <video>
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     </video>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:35:03 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:35:03 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:35:03 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:35:03 compute-2 nova_compute[226829]: </domain>
Jan 31 08:35:03 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.228 226833 DEBUG nova.compute.manager [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Preparing to wait for external event network-vif-plugged-7bd650c8-4238-4749-a373-22597e7927ff prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.229 226833 DEBUG oslo_concurrency.lockutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Acquiring lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.230 226833 DEBUG oslo_concurrency.lockutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.230 226833 DEBUG oslo_concurrency.lockutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.231 226833 DEBUG nova.virt.libvirt.vif [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:34:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-934828188',display_name='tempest-TestEncryptedCinderVolumes-server-934828188',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-934828188',id=179,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNyos7xhbpDhC1gFz6fqTxaWA5nUvx96vOOlcOqZHvZkPkc7S2Kk8P7fNSNHyu38uFKRbQD2QM+FfTWJdeBwXJke/v3qgX5nHItGsGkDis68eyGL2HY9r53McJv5koXrYg==',key_name='tempest-keypair-829743997',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='149ccba7b87e4284a3a6462e3a1dace1',ramdisk_id='',reservation_id='r-xvyy3y3j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1271777404',owner_user_name='tempest-TestEncryptedCinderVolumes-1271777404-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:34:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb8e3d6cd4094a62b23e39cec9023f18',uuid=e93a14b8-ef43-4615-a9c8-8b2df69d20d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7bd650c8-4238-4749-a373-22597e7927ff", "address": "fa:16:3e:83:4f:45", "network": {"id": "52eb33fc-fc1d-4d23-8941-b2b9e959253d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-417866262-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "149ccba7b87e4284a3a6462e3a1dace1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bd650c8-42", "ovs_interfaceid": "7bd650c8-4238-4749-a373-22597e7927ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.232 226833 DEBUG nova.network.os_vif_util [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Converting VIF {"id": "7bd650c8-4238-4749-a373-22597e7927ff", "address": "fa:16:3e:83:4f:45", "network": {"id": "52eb33fc-fc1d-4d23-8941-b2b9e959253d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-417866262-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "149ccba7b87e4284a3a6462e3a1dace1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bd650c8-42", "ovs_interfaceid": "7bd650c8-4238-4749-a373-22597e7927ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.233 226833 DEBUG nova.network.os_vif_util [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:4f:45,bridge_name='br-int',has_traffic_filtering=True,id=7bd650c8-4238-4749-a373-22597e7927ff,network=Network(52eb33fc-fc1d-4d23-8941-b2b9e959253d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bd650c8-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.234 226833 DEBUG os_vif [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:4f:45,bridge_name='br-int',has_traffic_filtering=True,id=7bd650c8-4238-4749-a373-22597e7927ff,network=Network(52eb33fc-fc1d-4d23-8941-b2b9e959253d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bd650c8-42') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.235 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.235 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.236 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.243 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.243 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7bd650c8-42, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.244 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7bd650c8-42, col_values=(('external_ids', {'iface-id': '7bd650c8-4238-4749-a373-22597e7927ff', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:83:4f:45', 'vm-uuid': 'e93a14b8-ef43-4615-a9c8-8b2df69d20d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.245 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:03 compute-2 NetworkManager[48999]: <info>  [1769848503.2469] manager: (tap7bd650c8-42): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/350)
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.247 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.252 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.253 226833 INFO os_vif [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:4f:45,bridge_name='br-int',has_traffic_filtering=True,id=7bd650c8-4238-4749-a373-22597e7927ff,network=Network(52eb33fc-fc1d-4d23-8941-b2b9e959253d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bd650c8-42')
Jan 31 08:35:03 compute-2 sshd-session[308501]: Connection closed by 170.64.139.8 port 41640
Jan 31 08:35:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4115752316' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.343 226833 DEBUG nova.virt.libvirt.driver [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.343 226833 DEBUG nova.virt.libvirt.driver [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.344 226833 DEBUG nova.virt.libvirt.driver [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] No VIF found with MAC fa:16:3e:83:4f:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.344 226833 INFO nova.virt.libvirt.driver [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Using config drive
Jan 31 08:35:03 compute-2 nova_compute[226829]: 2026-01-31 08:35:03.374 226833 DEBUG nova.storage.rbd_utils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] rbd image e93a14b8-ef43-4615-a9c8-8b2df69d20d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:35:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:04.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:04 compute-2 nova_compute[226829]: 2026-01-31 08:35:04.440 226833 INFO nova.virt.libvirt.driver [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Creating config drive at /var/lib/nova/instances/e93a14b8-ef43-4615-a9c8-8b2df69d20d6/disk.config
Jan 31 08:35:04 compute-2 nova_compute[226829]: 2026-01-31 08:35:04.446 226833 DEBUG oslo_concurrency.processutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e93a14b8-ef43-4615-a9c8-8b2df69d20d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp4hitgwj8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:35:04 compute-2 ceph-mon[77282]: pgmap v3061: 305 pgs: 305 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 793 KiB/s rd, 1.8 MiB/s wr, 117 op/s
Jan 31 08:35:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1044282013' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:35:04 compute-2 nova_compute[226829]: 2026-01-31 08:35:04.579 226833 DEBUG oslo_concurrency.processutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e93a14b8-ef43-4615-a9c8-8b2df69d20d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp4hitgwj8" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:35:04 compute-2 nova_compute[226829]: 2026-01-31 08:35:04.605 226833 DEBUG nova.storage.rbd_utils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] rbd image e93a14b8-ef43-4615-a9c8-8b2df69d20d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:35:04 compute-2 nova_compute[226829]: 2026-01-31 08:35:04.608 226833 DEBUG oslo_concurrency.processutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e93a14b8-ef43-4615-a9c8-8b2df69d20d6/disk.config e93a14b8-ef43-4615-a9c8-8b2df69d20d6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:35:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:04.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:04 compute-2 nova_compute[226829]: 2026-01-31 08:35:04.975 226833 DEBUG oslo_concurrency.processutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e93a14b8-ef43-4615-a9c8-8b2df69d20d6/disk.config e93a14b8-ef43-4615-a9c8-8b2df69d20d6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:35:04 compute-2 nova_compute[226829]: 2026-01-31 08:35:04.976 226833 INFO nova.virt.libvirt.driver [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Deleting local config drive /var/lib/nova/instances/e93a14b8-ef43-4615-a9c8-8b2df69d20d6/disk.config because it was imported into RBD.
Jan 31 08:35:05 compute-2 kernel: tap7bd650c8-42: entered promiscuous mode
Jan 31 08:35:05 compute-2 NetworkManager[48999]: <info>  [1769848505.0334] manager: (tap7bd650c8-42): new Tun device (/org/freedesktop/NetworkManager/Devices/351)
Jan 31 08:35:05 compute-2 ovn_controller[133834]: 2026-01-31T08:35:05Z|00712|binding|INFO|Claiming lport 7bd650c8-4238-4749-a373-22597e7927ff for this chassis.
Jan 31 08:35:05 compute-2 nova_compute[226829]: 2026-01-31 08:35:05.034 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:05 compute-2 ovn_controller[133834]: 2026-01-31T08:35:05Z|00713|binding|INFO|7bd650c8-4238-4749-a373-22597e7927ff: Claiming fa:16:3e:83:4f:45 10.100.0.11
Jan 31 08:35:05 compute-2 nova_compute[226829]: 2026-01-31 08:35:05.043 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:05 compute-2 systemd-udevd[308662]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:35:05 compute-2 ovn_controller[133834]: 2026-01-31T08:35:05Z|00714|binding|INFO|Setting lport 7bd650c8-4238-4749-a373-22597e7927ff ovn-installed in OVS
Jan 31 08:35:05 compute-2 NetworkManager[48999]: <info>  [1769848505.0756] device (tap7bd650c8-42): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:35:05 compute-2 NetworkManager[48999]: <info>  [1769848505.0765] device (tap7bd650c8-42): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:35:05 compute-2 nova_compute[226829]: 2026-01-31 08:35:05.117 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:05 compute-2 systemd-machined[195142]: New machine qemu-82-instance-000000b3.
Jan 31 08:35:05 compute-2 systemd[1]: Started Virtual Machine qemu-82-instance-000000b3.
Jan 31 08:35:05 compute-2 ceph-mon[77282]: pgmap v3062: 305 pgs: 305 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.3 MiB/s wr, 132 op/s
Jan 31 08:35:05 compute-2 ovn_controller[133834]: 2026-01-31T08:35:05Z|00715|binding|INFO|Setting lport 7bd650c8-4238-4749-a373-22597e7927ff up in Southbound
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:05.732 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:4f:45 10.100.0.11'], port_security=['fa:16:3e:83:4f:45 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e93a14b8-ef43-4615-a9c8-8b2df69d20d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52eb33fc-fc1d-4d23-8941-b2b9e959253d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '149ccba7b87e4284a3a6462e3a1dace1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4b3cd206-70ab-41d3-9525-744c797907c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39b1084f-15db-4d54-a3cf-e2f9c462ed13, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=7bd650c8-4238-4749-a373-22597e7927ff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:05.734 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 7bd650c8-4238-4749-a373-22597e7927ff in datapath 52eb33fc-fc1d-4d23-8941-b2b9e959253d bound to our chassis
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:05.736 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52eb33fc-fc1d-4d23-8941-b2b9e959253d
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:05.745 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1eccd25e-b3d5-42f2-886d-ffe1be216198]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:05.746 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap52eb33fc-f1 in ovnmeta-52eb33fc-fc1d-4d23-8941-b2b9e959253d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:05.749 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap52eb33fc-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:05.749 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ff789c95-6ffc-4fa7-be2f-2942a0fb8147]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:05.750 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9a7b4c36-b50a-41c9-93ba-eaaa9e7624ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:05.764 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[69a62f18-cad1-4457-bc7b-fd4c8c923cf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:05.774 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8dcd9a8c-5f5a-41e2-a654-fc71a145f8f8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:05.802 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[dc65c407-4ac7-40ff-9a77-d4862aa04beb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:35:05 compute-2 systemd-udevd[308664]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:05.808 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3bdf79b1-53d8-4b39-90ad-ab55f080e8c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:35:05 compute-2 NetworkManager[48999]: <info>  [1769848505.8097] manager: (tap52eb33fc-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/352)
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:05.837 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[fc3813c3-8943-498c-a6ba-a594a0ab027b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:05.840 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[645729a8-ed3e-4370-9364-411e81fc80da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:35:05 compute-2 NetworkManager[48999]: <info>  [1769848505.8550] device (tap52eb33fc-f0): carrier: link connected
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:05.859 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[e52aae7f-f78f-48f2-9903-2a02f7155cad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:05.869 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[672541b1-f93b-4482-a59a-18d2dc718d22]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52eb33fc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:23:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 875832, 'reachable_time': 37837, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308735, 'error': None, 'target': 'ovnmeta-52eb33fc-fc1d-4d23-8941-b2b9e959253d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:05.879 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[08baa321-fbcc-4158-912e-312747c78490]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe49:2348'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 875832, 'tstamp': 875832}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308739, 'error': None, 'target': 'ovnmeta-52eb33fc-fc1d-4d23-8941-b2b9e959253d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:05.892 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[263857d2-58f8-4ef9-9904-74c3f5ff8ff9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52eb33fc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:23:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 875832, 'reachable_time': 37837, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308742, 'error': None, 'target': 'ovnmeta-52eb33fc-fc1d-4d23-8941-b2b9e959253d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:05.914 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[bd7080e7-f631-4f63-bd6c-2c3942a6e777]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:35:05 compute-2 nova_compute[226829]: 2026-01-31 08:35:05.944 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848505.9440072, e93a14b8-ef43-4615-a9c8-8b2df69d20d6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:35:05 compute-2 nova_compute[226829]: 2026-01-31 08:35:05.944 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] VM Started (Lifecycle Event)
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:05.956 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[969fc630-3b86-4e2e-8231-57af22c48560]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:05.957 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52eb33fc-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:05.958 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:05.958 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52eb33fc-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:35:05 compute-2 nova_compute[226829]: 2026-01-31 08:35:05.959 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:05 compute-2 NetworkManager[48999]: <info>  [1769848505.9605] manager: (tap52eb33fc-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/353)
Jan 31 08:35:05 compute-2 kernel: tap52eb33fc-f0: entered promiscuous mode
Jan 31 08:35:05 compute-2 nova_compute[226829]: 2026-01-31 08:35:05.962 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:05.963 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52eb33fc-f0, col_values=(('external_ids', {'iface-id': '43baf541-f089-4fa6-822b-c140fb5d713b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:35:05 compute-2 nova_compute[226829]: 2026-01-31 08:35:05.963 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:05 compute-2 ovn_controller[133834]: 2026-01-31T08:35:05Z|00716|binding|INFO|Releasing lport 43baf541-f089-4fa6-822b-c140fb5d713b from this chassis (sb_readonly=1)
Jan 31 08:35:05 compute-2 nova_compute[226829]: 2026-01-31 08:35:05.969 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:05.971 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/52eb33fc-fc1d-4d23-8941-b2b9e959253d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/52eb33fc-fc1d-4d23-8941-b2b9e959253d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:05.971 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2b065aa5-2de4-4dbc-a8a0-8d7129c30de0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:05.972 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-52eb33fc-fc1d-4d23-8941-b2b9e959253d
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/52eb33fc-fc1d-4d23-8941-b2b9e959253d.pid.haproxy
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 52eb33fc-fc1d-4d23-8941-b2b9e959253d
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:35:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:05.973 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-52eb33fc-fc1d-4d23-8941-b2b9e959253d', 'env', 'PROCESS_TAG=haproxy-52eb33fc-fc1d-4d23-8941-b2b9e959253d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/52eb33fc-fc1d-4d23-8941-b2b9e959253d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:35:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:06.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:06 compute-2 podman[308775]: 2026-01-31 08:35:06.31873881 +0000 UTC m=+0.054945902 container create 162b429cabdbe35de7fdbdce8e6392fb46da8956eb1be2a0ed3773f76bd2deb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52eb33fc-fc1d-4d23-8941-b2b9e959253d, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 08:35:06 compute-2 systemd[1]: Started libpod-conmon-162b429cabdbe35de7fdbdce8e6392fb46da8956eb1be2a0ed3773f76bd2deb0.scope.
Jan 31 08:35:06 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:35:06 compute-2 podman[308775]: 2026-01-31 08:35:06.283368781 +0000 UTC m=+0.019575953 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:35:06 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84910b3f3fe00852c246f0bd16f35aabfe58efd7e2e4e3d8c5fa22fabe682a83/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:35:06 compute-2 podman[308775]: 2026-01-31 08:35:06.39431582 +0000 UTC m=+0.130522942 container init 162b429cabdbe35de7fdbdce8e6392fb46da8956eb1be2a0ed3773f76bd2deb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52eb33fc-fc1d-4d23-8941-b2b9e959253d, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:35:06 compute-2 podman[308775]: 2026-01-31 08:35:06.400014485 +0000 UTC m=+0.136221587 container start 162b429cabdbe35de7fdbdce8e6392fb46da8956eb1be2a0ed3773f76bd2deb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52eb33fc-fc1d-4d23-8941-b2b9e959253d, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 08:35:06 compute-2 neutron-haproxy-ovnmeta-52eb33fc-fc1d-4d23-8941-b2b9e959253d[308790]: [NOTICE]   (308794) : New worker (308796) forked
Jan 31 08:35:06 compute-2 neutron-haproxy-ovnmeta-52eb33fc-fc1d-4d23-8941-b2b9e959253d[308790]: [NOTICE]   (308794) : Loading success.
Jan 31 08:35:06 compute-2 nova_compute[226829]: 2026-01-31 08:35:06.640 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848491.6394076, fb68e4df-7075-4de3-8a2e-3d59677364ce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:35:06 compute-2 nova_compute[226829]: 2026-01-31 08:35:06.641 226833 INFO nova.compute.manager [-] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] VM Stopped (Lifecycle Event)
Jan 31 08:35:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:06.911 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:35:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:06.912 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:35:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:06.912 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:35:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:06.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:07 compute-2 nova_compute[226829]: 2026-01-31 08:35:07.470 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:07 compute-2 ceph-mon[77282]: pgmap v3063: 305 pgs: 305 active+clean; 214 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 776 KiB/s wr, 154 op/s
Jan 31 08:35:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:35:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:08.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:35:08 compute-2 nova_compute[226829]: 2026-01-31 08:35:08.177 226833 DEBUG nova.compute.manager [None req-fb185b65-02b5-49a4-bfdd-4514824d7369 - - - - - -] [instance: fb68e4df-7075-4de3-8a2e-3d59677364ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:35:08 compute-2 nova_compute[226829]: 2026-01-31 08:35:08.179 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:35:08 compute-2 nova_compute[226829]: 2026-01-31 08:35:08.183 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848505.9463058, e93a14b8-ef43-4615-a9c8-8b2df69d20d6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:35:08 compute-2 nova_compute[226829]: 2026-01-31 08:35:08.183 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] VM Paused (Lifecycle Event)
Jan 31 08:35:08 compute-2 nova_compute[226829]: 2026-01-31 08:35:08.209 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:35:08 compute-2 nova_compute[226829]: 2026-01-31 08:35:08.213 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:35:08 compute-2 nova_compute[226829]: 2026-01-31 08:35:08.246 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:08 compute-2 nova_compute[226829]: 2026-01-31 08:35:08.276 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:35:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:08 compute-2 nova_compute[226829]: 2026-01-31 08:35:08.638 226833 DEBUG nova.network.neutron [req-7ca8405a-6abc-4708-9ca7-221e2834f37c req-8bc87148-fdaa-4785-a758-30aa10277d6a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Updated VIF entry in instance network info cache for port 7bd650c8-4238-4749-a373-22597e7927ff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:35:08 compute-2 nova_compute[226829]: 2026-01-31 08:35:08.639 226833 DEBUG nova.network.neutron [req-7ca8405a-6abc-4708-9ca7-221e2834f37c req-8bc87148-fdaa-4785-a758-30aa10277d6a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Updating instance_info_cache with network_info: [{"id": "7bd650c8-4238-4749-a373-22597e7927ff", "address": "fa:16:3e:83:4f:45", "network": {"id": "52eb33fc-fc1d-4d23-8941-b2b9e959253d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-417866262-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "149ccba7b87e4284a3a6462e3a1dace1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bd650c8-42", "ovs_interfaceid": "7bd650c8-4238-4749-a373-22597e7927ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:35:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:35:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:08.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:35:08 compute-2 nova_compute[226829]: 2026-01-31 08:35:08.998 226833 DEBUG oslo_concurrency.lockutils [req-7ca8405a-6abc-4708-9ca7-221e2834f37c req-8bc87148-fdaa-4785-a758-30aa10277d6a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-e93a14b8-ef43-4615-a9c8-8b2df69d20d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:35:09 compute-2 ceph-mon[77282]: pgmap v3064: 305 pgs: 305 active+clean; 214 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 79 op/s
Jan 31 08:35:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:10.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:10 compute-2 podman[308807]: 2026-01-31 08:35:10.162374895 +0000 UTC m=+0.048560049 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 08:35:10 compute-2 nova_compute[226829]: 2026-01-31 08:35:10.424 226833 DEBUG nova.compute.manager [req-94826aea-044e-4c46-9850-6be131c63172 req-25e758ca-4410-49b3-a3d1-16a5ac53585b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Received event network-vif-plugged-7bd650c8-4238-4749-a373-22597e7927ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:35:10 compute-2 nova_compute[226829]: 2026-01-31 08:35:10.425 226833 DEBUG oslo_concurrency.lockutils [req-94826aea-044e-4c46-9850-6be131c63172 req-25e758ca-4410-49b3-a3d1-16a5ac53585b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:35:10 compute-2 nova_compute[226829]: 2026-01-31 08:35:10.425 226833 DEBUG oslo_concurrency.lockutils [req-94826aea-044e-4c46-9850-6be131c63172 req-25e758ca-4410-49b3-a3d1-16a5ac53585b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:35:10 compute-2 nova_compute[226829]: 2026-01-31 08:35:10.426 226833 DEBUG oslo_concurrency.lockutils [req-94826aea-044e-4c46-9850-6be131c63172 req-25e758ca-4410-49b3-a3d1-16a5ac53585b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:35:10 compute-2 nova_compute[226829]: 2026-01-31 08:35:10.426 226833 DEBUG nova.compute.manager [req-94826aea-044e-4c46-9850-6be131c63172 req-25e758ca-4410-49b3-a3d1-16a5ac53585b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Processing event network-vif-plugged-7bd650c8-4238-4749-a373-22597e7927ff _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:35:10 compute-2 nova_compute[226829]: 2026-01-31 08:35:10.427 226833 DEBUG nova.compute.manager [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:35:10 compute-2 nova_compute[226829]: 2026-01-31 08:35:10.433 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848510.4332848, e93a14b8-ef43-4615-a9c8-8b2df69d20d6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:35:10 compute-2 nova_compute[226829]: 2026-01-31 08:35:10.434 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] VM Resumed (Lifecycle Event)
Jan 31 08:35:10 compute-2 nova_compute[226829]: 2026-01-31 08:35:10.437 226833 DEBUG nova.virt.libvirt.driver [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:35:10 compute-2 nova_compute[226829]: 2026-01-31 08:35:10.441 226833 INFO nova.virt.libvirt.driver [-] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Instance spawned successfully.
Jan 31 08:35:10 compute-2 nova_compute[226829]: 2026-01-31 08:35:10.442 226833 DEBUG nova.virt.libvirt.driver [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:35:10 compute-2 nova_compute[226829]: 2026-01-31 08:35:10.607 226833 DEBUG nova.virt.libvirt.driver [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:35:10 compute-2 nova_compute[226829]: 2026-01-31 08:35:10.607 226833 DEBUG nova.virt.libvirt.driver [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:35:10 compute-2 nova_compute[226829]: 2026-01-31 08:35:10.608 226833 DEBUG nova.virt.libvirt.driver [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:35:10 compute-2 nova_compute[226829]: 2026-01-31 08:35:10.609 226833 DEBUG nova.virt.libvirt.driver [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:35:10 compute-2 nova_compute[226829]: 2026-01-31 08:35:10.609 226833 DEBUG nova.virt.libvirt.driver [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:35:10 compute-2 nova_compute[226829]: 2026-01-31 08:35:10.610 226833 DEBUG nova.virt.libvirt.driver [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:35:10 compute-2 nova_compute[226829]: 2026-01-31 08:35:10.614 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:35:10 compute-2 nova_compute[226829]: 2026-01-31 08:35:10.619 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:35:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:35:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:10.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:35:11 compute-2 nova_compute[226829]: 2026-01-31 08:35:11.209 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:35:11 compute-2 nova_compute[226829]: 2026-01-31 08:35:11.615 226833 INFO nova.compute.manager [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Took 21.62 seconds to spawn the instance on the hypervisor.
Jan 31 08:35:11 compute-2 nova_compute[226829]: 2026-01-31 08:35:11.615 226833 DEBUG nova.compute.manager [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:35:11 compute-2 ceph-mon[77282]: pgmap v3065: 305 pgs: 305 active+clean; 214 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 83 op/s
Jan 31 08:35:11 compute-2 nova_compute[226829]: 2026-01-31 08:35:11.804 226833 INFO nova.compute.manager [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Took 29.43 seconds to build instance.
Jan 31 08:35:11 compute-2 nova_compute[226829]: 2026-01-31 08:35:11.933 226833 DEBUG oslo_concurrency.lockutils [None req-4396a14a-d306-459e-a7e1-1349b4870cd9 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 31.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:35:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:12.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:12 compute-2 nova_compute[226829]: 2026-01-31 08:35:12.333 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:12 compute-2 nova_compute[226829]: 2026-01-31 08:35:12.333 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:35:12 compute-2 nova_compute[226829]: 2026-01-31 08:35:12.471 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:12 compute-2 nova_compute[226829]: 2026-01-31 08:35:12.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:12 compute-2 nova_compute[226829]: 2026-01-31 08:35:12.921 226833 DEBUG nova.compute.manager [req-f69838b2-a6f6-4726-93eb-a2709bdbe7f9 req-bad783ca-9dea-4819-b265-aa7c78123671 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Received event network-vif-plugged-7bd650c8-4238-4749-a373-22597e7927ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:35:12 compute-2 nova_compute[226829]: 2026-01-31 08:35:12.921 226833 DEBUG oslo_concurrency.lockutils [req-f69838b2-a6f6-4726-93eb-a2709bdbe7f9 req-bad783ca-9dea-4819-b265-aa7c78123671 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:35:12 compute-2 nova_compute[226829]: 2026-01-31 08:35:12.922 226833 DEBUG oslo_concurrency.lockutils [req-f69838b2-a6f6-4726-93eb-a2709bdbe7f9 req-bad783ca-9dea-4819-b265-aa7c78123671 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:35:12 compute-2 nova_compute[226829]: 2026-01-31 08:35:12.922 226833 DEBUG oslo_concurrency.lockutils [req-f69838b2-a6f6-4726-93eb-a2709bdbe7f9 req-bad783ca-9dea-4819-b265-aa7c78123671 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:35:12 compute-2 nova_compute[226829]: 2026-01-31 08:35:12.922 226833 DEBUG nova.compute.manager [req-f69838b2-a6f6-4726-93eb-a2709bdbe7f9 req-bad783ca-9dea-4819-b265-aa7c78123671 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] No waiting events found dispatching network-vif-plugged-7bd650c8-4238-4749-a373-22597e7927ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:35:12 compute-2 nova_compute[226829]: 2026-01-31 08:35:12.923 226833 WARNING nova.compute.manager [req-f69838b2-a6f6-4726-93eb-a2709bdbe7f9 req-bad783ca-9dea-4819-b265-aa7c78123671 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Received unexpected event network-vif-plugged-7bd650c8-4238-4749-a373-22597e7927ff for instance with vm_state active and task_state None.
Jan 31 08:35:12 compute-2 ovn_controller[133834]: 2026-01-31T08:35:12Z|00717|binding|INFO|Releasing lport 43baf541-f089-4fa6-822b-c140fb5d713b from this chassis (sb_readonly=0)
Jan 31 08:35:12 compute-2 nova_compute[226829]: 2026-01-31 08:35:12.959 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:35:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:12.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:35:13 compute-2 nova_compute[226829]: 2026-01-31 08:35:13.247 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:13 compute-2 ceph-mon[77282]: pgmap v3066: 305 pgs: 305 active+clean; 214 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 82 op/s
Jan 31 08:35:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:35:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:14.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:35:14 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #57. Immutable memtables: 13.
Jan 31 08:35:14 compute-2 sudo[308829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:35:14 compute-2 sudo[308829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:14 compute-2 sudo[308829]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:14 compute-2 sudo[308854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:35:14 compute-2 sudo[308854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:14 compute-2 sudo[308854]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:14.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:15 compute-2 ceph-mon[77282]: pgmap v3067: 305 pgs: 305 active+clean; 222 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 865 KiB/s wr, 95 op/s
Jan 31 08:35:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:16.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:16.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:17 compute-2 nova_compute[226829]: 2026-01-31 08:35:17.473 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:17 compute-2 ceph-mon[77282]: pgmap v3068: 305 pgs: 305 active+clean; 245 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 154 op/s
Jan 31 08:35:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:18.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:18 compute-2 nova_compute[226829]: 2026-01-31 08:35:18.250 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:18 compute-2 NetworkManager[48999]: <info>  [1769848518.3725] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/354)
Jan 31 08:35:18 compute-2 NetworkManager[48999]: <info>  [1769848518.3735] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/355)
Jan 31 08:35:18 compute-2 nova_compute[226829]: 2026-01-31 08:35:18.371 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:18 compute-2 nova_compute[226829]: 2026-01-31 08:35:18.412 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:18 compute-2 ovn_controller[133834]: 2026-01-31T08:35:18Z|00718|binding|INFO|Releasing lport 43baf541-f089-4fa6-822b-c140fb5d713b from this chassis (sb_readonly=0)
Jan 31 08:35:18 compute-2 nova_compute[226829]: 2026-01-31 08:35:18.444 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:18.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:19 compute-2 nova_compute[226829]: 2026-01-31 08:35:19.349 226833 DEBUG nova.compute.manager [req-e6e7035d-71ea-416c-8c4a-8e8376734a79 req-41cbe5d2-0fd0-4ab0-9155-f349477f461d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Received event network-changed-7bd650c8-4238-4749-a373-22597e7927ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:35:19 compute-2 nova_compute[226829]: 2026-01-31 08:35:19.349 226833 DEBUG nova.compute.manager [req-e6e7035d-71ea-416c-8c4a-8e8376734a79 req-41cbe5d2-0fd0-4ab0-9155-f349477f461d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Refreshing instance network info cache due to event network-changed-7bd650c8-4238-4749-a373-22597e7927ff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:35:19 compute-2 nova_compute[226829]: 2026-01-31 08:35:19.350 226833 DEBUG oslo_concurrency.lockutils [req-e6e7035d-71ea-416c-8c4a-8e8376734a79 req-41cbe5d2-0fd0-4ab0-9155-f349477f461d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-e93a14b8-ef43-4615-a9c8-8b2df69d20d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:35:19 compute-2 nova_compute[226829]: 2026-01-31 08:35:19.350 226833 DEBUG oslo_concurrency.lockutils [req-e6e7035d-71ea-416c-8c4a-8e8376734a79 req-41cbe5d2-0fd0-4ab0-9155-f349477f461d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-e93a14b8-ef43-4615-a9c8-8b2df69d20d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:35:19 compute-2 nova_compute[226829]: 2026-01-31 08:35:19.350 226833 DEBUG nova.network.neutron [req-e6e7035d-71ea-416c-8c4a-8e8376734a79 req-41cbe5d2-0fd0-4ab0-9155-f349477f461d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Refreshing network info cache for port 7bd650c8-4238-4749-a373-22597e7927ff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:35:20 compute-2 ceph-mon[77282]: pgmap v3069: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Jan 31 08:35:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:35:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:20.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:35:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:35:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:20.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:35:21 compute-2 sshd-session[308880]: Connection closed by authenticating user root 170.64.139.8 port 47558 [preauth]
Jan 31 08:35:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:35:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:22.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:35:22 compute-2 ceph-mon[77282]: pgmap v3070: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Jan 31 08:35:22 compute-2 nova_compute[226829]: 2026-01-31 08:35:22.474 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:23.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:23 compute-2 nova_compute[226829]: 2026-01-31 08:35:23.063 226833 DEBUG nova.network.neutron [req-e6e7035d-71ea-416c-8c4a-8e8376734a79 req-41cbe5d2-0fd0-4ab0-9155-f349477f461d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Updated VIF entry in instance network info cache for port 7bd650c8-4238-4749-a373-22597e7927ff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:35:23 compute-2 nova_compute[226829]: 2026-01-31 08:35:23.063 226833 DEBUG nova.network.neutron [req-e6e7035d-71ea-416c-8c4a-8e8376734a79 req-41cbe5d2-0fd0-4ab0-9155-f349477f461d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Updating instance_info_cache with network_info: [{"id": "7bd650c8-4238-4749-a373-22597e7927ff", "address": "fa:16:3e:83:4f:45", "network": {"id": "52eb33fc-fc1d-4d23-8941-b2b9e959253d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-417866262-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "149ccba7b87e4284a3a6462e3a1dace1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bd650c8-42", "ovs_interfaceid": "7bd650c8-4238-4749-a373-22597e7927ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:35:23 compute-2 nova_compute[226829]: 2026-01-31 08:35:23.221 226833 DEBUG oslo_concurrency.lockutils [req-e6e7035d-71ea-416c-8c4a-8e8376734a79 req-41cbe5d2-0fd0-4ab0-9155-f349477f461d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-e93a14b8-ef43-4615-a9c8-8b2df69d20d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:35:23 compute-2 nova_compute[226829]: 2026-01-31 08:35:23.252 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:24.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:24 compute-2 ceph-mon[77282]: pgmap v3071: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Jan 31 08:35:24 compute-2 ovn_controller[133834]: 2026-01-31T08:35:24Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:83:4f:45 10.100.0.11
Jan 31 08:35:24 compute-2 ovn_controller[133834]: 2026-01-31T08:35:24Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:83:4f:45 10.100.0.11
Jan 31 08:35:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:25.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:26.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:26 compute-2 ceph-mon[77282]: pgmap v3072: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 115 op/s
Jan 31 08:35:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:35:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:27.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:35:27 compute-2 ceph-mon[77282]: pgmap v3073: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.4 MiB/s wr, 144 op/s
Jan 31 08:35:27 compute-2 nova_compute[226829]: 2026-01-31 08:35:27.476 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:28.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:28 compute-2 nova_compute[226829]: 2026-01-31 08:35:28.255 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:29.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:29 compute-2 ceph-mon[77282]: pgmap v3074: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 294 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Jan 31 08:35:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:30.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:31.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:31 compute-2 ceph-mon[77282]: pgmap v3075: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 273 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 31 08:35:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:35:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:32.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:35:32 compute-2 nova_compute[226829]: 2026-01-31 08:35:32.480 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:35:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:33.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:35:33 compute-2 ovn_controller[133834]: 2026-01-31T08:35:33Z|00719|binding|INFO|Releasing lport 43baf541-f089-4fa6-822b-c140fb5d713b from this chassis (sb_readonly=0)
Jan 31 08:35:33 compute-2 nova_compute[226829]: 2026-01-31 08:35:33.136 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:33 compute-2 podman[308891]: 2026-01-31 08:35:33.254340361 +0000 UTC m=+0.122560807 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:35:33 compute-2 nova_compute[226829]: 2026-01-31 08:35:33.256 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:33 compute-2 ceph-mon[77282]: pgmap v3076: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 273 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 31 08:35:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:34.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:34 compute-2 nova_compute[226829]: 2026-01-31 08:35:34.164 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:34 compute-2 sudo[308918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:35:34 compute-2 sudo[308918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:34 compute-2 sudo[308918]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:34 compute-2 sudo[308943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:35:34 compute-2 sudo[308943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:34 compute-2 sudo[308943]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:35:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:35.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:35:35 compute-2 ceph-mon[77282]: pgmap v3077: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 272 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 31 08:35:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:35:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:36.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:35:36 compute-2 nova_compute[226829]: 2026-01-31 08:35:36.759 226833 DEBUG oslo_concurrency.lockutils [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Acquiring lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:35:36 compute-2 nova_compute[226829]: 2026-01-31 08:35:36.760 226833 DEBUG oslo_concurrency.lockutils [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:35:36 compute-2 nova_compute[226829]: 2026-01-31 08:35:36.985 226833 DEBUG nova.objects.instance [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Lazy-loading 'flavor' on Instance uuid e93a14b8-ef43-4615-a9c8-8b2df69d20d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:35:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:37.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:37 compute-2 nova_compute[226829]: 2026-01-31 08:35:37.476 226833 DEBUG oslo_concurrency.lockutils [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:35:37 compute-2 nova_compute[226829]: 2026-01-31 08:35:37.483 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:37 compute-2 nova_compute[226829]: 2026-01-31 08:35:37.691 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:38.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:38 compute-2 nova_compute[226829]: 2026-01-31 08:35:38.258 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:38 compute-2 ceph-mon[77282]: pgmap v3078: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 267 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 31 08:35:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:39.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:39 compute-2 nova_compute[226829]: 2026-01-31 08:35:39.337 226833 DEBUG oslo_concurrency.lockutils [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Acquiring lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:35:39 compute-2 nova_compute[226829]: 2026-01-31 08:35:39.339 226833 DEBUG oslo_concurrency.lockutils [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:35:39 compute-2 nova_compute[226829]: 2026-01-31 08:35:39.340 226833 INFO nova.compute.manager [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Attaching volume 6d2d8f34-806c-41f7-aa1a-a0cc062684f4 to /dev/vdb
Jan 31 08:35:39 compute-2 nova_compute[226829]: 2026-01-31 08:35:39.651 226833 DEBUG os_brick.utils [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:35:39 compute-2 nova_compute[226829]: 2026-01-31 08:35:39.655 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:35:39 compute-2 nova_compute[226829]: 2026-01-31 08:35:39.670 236868 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:35:39 compute-2 nova_compute[226829]: 2026-01-31 08:35:39.671 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[3625ca64-b893-4e52-8e37-d78372e97268]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:35:39 compute-2 nova_compute[226829]: 2026-01-31 08:35:39.672 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:35:39 compute-2 nova_compute[226829]: 2026-01-31 08:35:39.680 236868 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:35:39 compute-2 nova_compute[226829]: 2026-01-31 08:35:39.681 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[a867dc3d-a4a5-4f70-a202-74b32c56ee38]: (4, ('InitiatorName=iqn.1994-05.com.redhat:70a4e945afb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:35:39 compute-2 nova_compute[226829]: 2026-01-31 08:35:39.684 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:35:39 compute-2 nova_compute[226829]: 2026-01-31 08:35:39.695 236868 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:35:39 compute-2 nova_compute[226829]: 2026-01-31 08:35:39.695 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ccbebe-8d35-46dd-9c5b-17556ff280f4]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:35:39 compute-2 nova_compute[226829]: 2026-01-31 08:35:39.697 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[0b32f14e-73ec-4ef0-916b-dc62ff613e53]: (4, 'd14f084b-ec77-4fba-801f-103494d34b3a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:35:39 compute-2 nova_compute[226829]: 2026-01-31 08:35:39.698 226833 DEBUG oslo_concurrency.processutils [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:35:39 compute-2 nova_compute[226829]: 2026-01-31 08:35:39.733 226833 DEBUG oslo_concurrency.processutils [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:35:39 compute-2 nova_compute[226829]: 2026-01-31 08:35:39.737 226833 DEBUG os_brick.initiator.connectors.lightos [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:35:39 compute-2 nova_compute[226829]: 2026-01-31 08:35:39.738 226833 DEBUG os_brick.initiator.connectors.lightos [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:35:39 compute-2 nova_compute[226829]: 2026-01-31 08:35:39.738 226833 DEBUG os_brick.initiator.connectors.lightos [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:35:39 compute-2 nova_compute[226829]: 2026-01-31 08:35:39.739 226833 DEBUG os_brick.utils [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] <== get_connector_properties: return (86ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:70a4e945afb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'd14f084b-ec77-4fba-801f-103494d34b3a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:35:39 compute-2 nova_compute[226829]: 2026-01-31 08:35:39.740 226833 DEBUG nova.virt.block_device [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Updating existing volume attachment record: 3e821dc7-a772-419c-a06c-2ca20fe9cff2 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:35:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:35:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:40.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:35:40 compute-2 ceph-mon[77282]: pgmap v3079: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s wr, 0 op/s
Jan 31 08:35:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:35:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:41.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:35:41 compute-2 podman[308978]: 2026-01-31 08:35:41.16019247 +0000 UTC m=+0.047790428 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Jan 31 08:35:41 compute-2 ceph-mon[77282]: pgmap v3080: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s wr, 0 op/s
Jan 31 08:35:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:35:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:42.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:35:42 compute-2 nova_compute[226829]: 2026-01-31 08:35:42.523 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:35:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:43.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:35:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3676442032' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:35:43 compute-2 nova_compute[226829]: 2026-01-31 08:35:43.261 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:44.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:44 compute-2 ceph-mon[77282]: pgmap v3081: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s wr, 0 op/s
Jan 31 08:35:44 compute-2 nova_compute[226829]: 2026-01-31 08:35:44.830 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:45.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:45 compute-2 ceph-mon[77282]: pgmap v3082: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.3 KiB/s wr, 0 op/s
Jan 31 08:35:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:35:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:46.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:35:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:47.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:47 compute-2 nova_compute[226829]: 2026-01-31 08:35:47.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:47 compute-2 nova_compute[226829]: 2026-01-31 08:35:47.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:35:47 compute-2 nova_compute[226829]: 2026-01-31 08:35:47.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:35:47 compute-2 nova_compute[226829]: 2026-01-31 08:35:47.525 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:48.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:48 compute-2 ceph-mon[77282]: pgmap v3083: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 7.4 KiB/s wr, 0 op/s
Jan 31 08:35:48 compute-2 nova_compute[226829]: 2026-01-31 08:35:48.264 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:49.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:35:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:50.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:35:50 compute-2 ceph-mon[77282]: pgmap v3084: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 3.7 KiB/s wr, 0 op/s
Jan 31 08:35:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:51.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:51 compute-2 nova_compute[226829]: 2026-01-31 08:35:51.773 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-e93a14b8-ef43-4615-a9c8-8b2df69d20d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:35:51 compute-2 nova_compute[226829]: 2026-01-31 08:35:51.774 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-e93a14b8-ef43-4615-a9c8-8b2df69d20d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:35:51 compute-2 nova_compute[226829]: 2026-01-31 08:35:51.774 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:35:51 compute-2 nova_compute[226829]: 2026-01-31 08:35:51.775 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid e93a14b8-ef43-4615-a9c8-8b2df69d20d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:35:51 compute-2 nova_compute[226829]: 2026-01-31 08:35:51.861 226833 DEBUG os_brick.encryptors [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Using volume encryption metadata '{'encryption_key_id': '2ca6a990-a9f2-407b-95b2-5887f411722b', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-6d2d8f34-806c-41f7-aa1a-a0cc062684f4', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '6d2d8f34-806c-41f7-aa1a-a0cc062684f4', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'e93a14b8-ef43-4615-a9c8-8b2df69d20d6', 'attached_at': '', 'detached_at': '', 'volume_id': '6d2d8f34-806c-41f7-aa1a-a0cc062684f4', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Jan 31 08:35:51 compute-2 nova_compute[226829]: 2026-01-31 08:35:51.873 226833 DEBUG barbicanclient.client [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Jan 31 08:35:51 compute-2 nova_compute[226829]: 2026-01-31 08:35:51.906 226833 DEBUG barbicanclient.v1.secrets [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/2ca6a990-a9f2-407b-95b2-5887f411722b get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Jan 31 08:35:51 compute-2 nova_compute[226829]: 2026-01-31 08:35:51.907 226833 INFO barbicanclient.base [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Calculated Secrets uuid ref: secrets/2ca6a990-a9f2-407b-95b2-5887f411722b
Jan 31 08:35:51 compute-2 nova_compute[226829]: 2026-01-31 08:35:51.957 226833 DEBUG barbicanclient.client [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 08:35:51 compute-2 nova_compute[226829]: 2026-01-31 08:35:51.958 226833 INFO barbicanclient.base [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Calculated Secrets uuid ref: secrets/2ca6a990-a9f2-407b-95b2-5887f411722b
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.010 226833 DEBUG barbicanclient.client [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.010 226833 INFO barbicanclient.base [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Calculated Secrets uuid ref: secrets/2ca6a990-a9f2-407b-95b2-5887f411722b
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.038 226833 DEBUG barbicanclient.client [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.040 226833 INFO barbicanclient.base [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Calculated Secrets uuid ref: secrets/2ca6a990-a9f2-407b-95b2-5887f411722b
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.084 226833 DEBUG barbicanclient.client [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.084 226833 INFO barbicanclient.base [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Calculated Secrets uuid ref: secrets/2ca6a990-a9f2-407b-95b2-5887f411722b
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.113 226833 DEBUG barbicanclient.client [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.114 226833 INFO barbicanclient.base [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Calculated Secrets uuid ref: secrets/2ca6a990-a9f2-407b-95b2-5887f411722b
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.155 226833 DEBUG barbicanclient.client [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.155 226833 INFO barbicanclient.base [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Calculated Secrets uuid ref: secrets/2ca6a990-a9f2-407b-95b2-5887f411722b
Jan 31 08:35:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:52.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.204 226833 DEBUG barbicanclient.client [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.205 226833 INFO barbicanclient.base [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Calculated Secrets uuid ref: secrets/2ca6a990-a9f2-407b-95b2-5887f411722b
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.233 226833 DEBUG barbicanclient.client [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.234 226833 INFO barbicanclient.base [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Calculated Secrets uuid ref: secrets/2ca6a990-a9f2-407b-95b2-5887f411722b
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.259 226833 DEBUG barbicanclient.client [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.260 226833 INFO barbicanclient.base [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Calculated Secrets uuid ref: secrets/2ca6a990-a9f2-407b-95b2-5887f411722b
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.321 226833 DEBUG barbicanclient.client [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.321 226833 INFO barbicanclient.base [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Calculated Secrets uuid ref: secrets/2ca6a990-a9f2-407b-95b2-5887f411722b
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.358 226833 DEBUG barbicanclient.client [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.358 226833 INFO barbicanclient.base [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Calculated Secrets uuid ref: secrets/2ca6a990-a9f2-407b-95b2-5887f411722b
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.376 226833 DEBUG barbicanclient.client [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.376 226833 INFO barbicanclient.base [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Calculated Secrets uuid ref: secrets/2ca6a990-a9f2-407b-95b2-5887f411722b
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.406 226833 DEBUG barbicanclient.client [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.407 226833 INFO barbicanclient.base [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Calculated Secrets uuid ref: secrets/2ca6a990-a9f2-407b-95b2-5887f411722b
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.441 226833 DEBUG barbicanclient.client [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.442 226833 INFO barbicanclient.base [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Calculated Secrets uuid ref: secrets/2ca6a990-a9f2-407b-95b2-5887f411722b
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.480 226833 DEBUG barbicanclient.client [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.481 226833 DEBUG nova.virt.libvirt.host [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Secret XML: <secret ephemeral="no" private="no">
Jan 31 08:35:52 compute-2 nova_compute[226829]:   <usage type="volume">
Jan 31 08:35:52 compute-2 nova_compute[226829]:     <volume>6d2d8f34-806c-41f7-aa1a-a0cc062684f4</volume>
Jan 31 08:35:52 compute-2 nova_compute[226829]:   </usage>
Jan 31 08:35:52 compute-2 nova_compute[226829]: </secret>
Jan 31 08:35:52 compute-2 nova_compute[226829]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.493 226833 DEBUG nova.objects.instance [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Lazy-loading 'flavor' on Instance uuid e93a14b8-ef43-4615-a9c8-8b2df69d20d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.527 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:52 compute-2 ceph-mon[77282]: pgmap v3085: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 4.1 KiB/s wr, 0 op/s
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.940 226833 DEBUG nova.virt.libvirt.driver [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Attempting to attach volume 6d2d8f34-806c-41f7-aa1a-a0cc062684f4 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 31 08:35:52 compute-2 nova_compute[226829]: 2026-01-31 08:35:52.943 226833 DEBUG nova.virt.libvirt.guest [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 08:35:52 compute-2 nova_compute[226829]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:35:52 compute-2 nova_compute[226829]:   <source protocol="rbd" name="volumes/volume-6d2d8f34-806c-41f7-aa1a-a0cc062684f4">
Jan 31 08:35:52 compute-2 nova_compute[226829]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:35:52 compute-2 nova_compute[226829]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:35:52 compute-2 nova_compute[226829]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:35:52 compute-2 nova_compute[226829]:   </source>
Jan 31 08:35:52 compute-2 nova_compute[226829]:   <auth username="openstack">
Jan 31 08:35:52 compute-2 nova_compute[226829]:     <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:35:52 compute-2 nova_compute[226829]:   </auth>
Jan 31 08:35:52 compute-2 nova_compute[226829]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:35:52 compute-2 nova_compute[226829]:   <serial>6d2d8f34-806c-41f7-aa1a-a0cc062684f4</serial>
Jan 31 08:35:52 compute-2 nova_compute[226829]:   <encryption format="luks">
Jan 31 08:35:52 compute-2 nova_compute[226829]:     <secret type="passphrase" uuid="8743b107-0307-4a7c-ae02-24ad087d38ab"/>
Jan 31 08:35:52 compute-2 nova_compute[226829]:   </encryption>
Jan 31 08:35:52 compute-2 nova_compute[226829]: </disk>
Jan 31 08:35:52 compute-2 nova_compute[226829]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 31 08:35:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:53.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:53 compute-2 nova_compute[226829]: 2026-01-31 08:35:53.267 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:53 compute-2 nova_compute[226829]: 2026-01-31 08:35:53.343 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:53.343 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=76, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=75) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:35:53 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:35:53.346 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:35:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:53 compute-2 ceph-mon[77282]: pgmap v3086: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 4.1 KiB/s wr, 0 op/s
Jan 31 08:35:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/947235518' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:35:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/947235518' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:35:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:35:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:54.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:35:54 compute-2 sudo[309024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:35:54 compute-2 sudo[309024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:54 compute-2 sudo[309024]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:54 compute-2 sudo[309049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:35:54 compute-2 sudo[309049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:54 compute-2 sudo[309049]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:55.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1126290724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:35:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1326395793' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:35:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:56.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:56 compute-2 ceph-mon[77282]: pgmap v3087: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 KiB/s rd, 2.4 KiB/s wr, 2 op/s
Jan 31 08:35:56 compute-2 nova_compute[226829]: 2026-01-31 08:35:56.734 226833 DEBUG nova.virt.libvirt.driver [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:35:56 compute-2 nova_compute[226829]: 2026-01-31 08:35:56.734 226833 DEBUG nova.virt.libvirt.driver [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:35:56 compute-2 nova_compute[226829]: 2026-01-31 08:35:56.735 226833 DEBUG nova.virt.libvirt.driver [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:35:56 compute-2 nova_compute[226829]: 2026-01-31 08:35:56.735 226833 DEBUG nova.virt.libvirt.driver [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] No VIF found with MAC fa:16:3e:83:4f:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:35:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:57.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2939846883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:35:57 compute-2 nova_compute[226829]: 2026-01-31 08:35:57.363 226833 DEBUG oslo_concurrency.lockutils [None req-f463054e-5209-4d11-92b8-fe9bdc36e3df eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 18.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:35:57 compute-2 nova_compute[226829]: 2026-01-31 08:35:57.529 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:58 compute-2 nova_compute[226829]: 2026-01-31 08:35:58.010 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Updating instance_info_cache with network_info: [{"id": "7bd650c8-4238-4749-a373-22597e7927ff", "address": "fa:16:3e:83:4f:45", "network": {"id": "52eb33fc-fc1d-4d23-8941-b2b9e959253d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-417866262-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "149ccba7b87e4284a3a6462e3a1dace1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bd650c8-42", "ovs_interfaceid": "7bd650c8-4238-4749-a373-22597e7927ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:35:58 compute-2 nova_compute[226829]: 2026-01-31 08:35:58.103 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-e93a14b8-ef43-4615-a9c8-8b2df69d20d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:35:58 compute-2 nova_compute[226829]: 2026-01-31 08:35:58.103 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:35:58 compute-2 nova_compute[226829]: 2026-01-31 08:35:58.104 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:58 compute-2 nova_compute[226829]: 2026-01-31 08:35:58.105 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:58 compute-2 nova_compute[226829]: 2026-01-31 08:35:58.105 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:58 compute-2 nova_compute[226829]: 2026-01-31 08:35:58.106 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:58 compute-2 nova_compute[226829]: 2026-01-31 08:35:58.107 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:58.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:58 compute-2 nova_compute[226829]: 2026-01-31 08:35:58.269 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:58 compute-2 ceph-mon[77282]: pgmap v3088: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 3.7 KiB/s wr, 3 op/s
Jan 31 08:35:58 compute-2 nova_compute[226829]: 2026-01-31 08:35:58.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:58 compute-2 nova_compute[226829]: 2026-01-31 08:35:58.991 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:35:58 compute-2 nova_compute[226829]: 2026-01-31 08:35:58.992 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:35:58 compute-2 nova_compute[226829]: 2026-01-31 08:35:58.992 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:35:58 compute-2 nova_compute[226829]: 2026-01-31 08:35:58.992 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:35:58 compute-2 nova_compute[226829]: 2026-01-31 08:35:58.993 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:35:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:35:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:59.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:35:59 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/272334319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:35:59 compute-2 nova_compute[226829]: 2026-01-31 08:35:59.397 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:35:59 compute-2 sudo[309099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:35:59 compute-2 sudo[309099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:59 compute-2 sudo[309099]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2355544338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:35:59 compute-2 ceph-mon[77282]: pgmap v3089: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.7 KiB/s wr, 3 op/s
Jan 31 08:35:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/272334319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:35:59 compute-2 sudo[309124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:35:59 compute-2 sudo[309124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:59 compute-2 sudo[309124]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:59 compute-2 sudo[309149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:35:59 compute-2 sudo[309149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:59 compute-2 sudo[309149]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:59 compute-2 sudo[309174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:35:59 compute-2 sudo[309174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:59 compute-2 sudo[309174]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:36:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:00.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:36:00 compute-2 nova_compute[226829]: 2026-01-31 08:36:00.432 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000b3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:36:00 compute-2 nova_compute[226829]: 2026-01-31 08:36:00.433 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000b3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:36:00 compute-2 nova_compute[226829]: 2026-01-31 08:36:00.433 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000b3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:36:00 compute-2 nova_compute[226829]: 2026-01-31 08:36:00.580 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:36:00 compute-2 nova_compute[226829]: 2026-01-31 08:36:00.582 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3958MB free_disk=20.897106170654297GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:36:00 compute-2 nova_compute[226829]: 2026-01-31 08:36:00.582 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:36:00 compute-2 nova_compute[226829]: 2026-01-31 08:36:00.582 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:36:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:01.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:01 compute-2 nova_compute[226829]: 2026-01-31 08:36:01.198 226833 DEBUG oslo_concurrency.lockutils [None req-76cf99c8-4a1d-4b55-9985-3753d2e62348 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Acquiring lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:36:01 compute-2 nova_compute[226829]: 2026-01-31 08:36:01.199 226833 DEBUG oslo_concurrency.lockutils [None req-76cf99c8-4a1d-4b55-9985-3753d2e62348 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:36:01 compute-2 nova_compute[226829]: 2026-01-31 08:36:01.423 226833 INFO nova.compute.manager [None req-76cf99c8-4a1d-4b55-9985-3753d2e62348 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Detaching volume 6d2d8f34-806c-41f7-aa1a-a0cc062684f4
Jan 31 08:36:01 compute-2 nova_compute[226829]: 2026-01-31 08:36:01.476 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance e93a14b8-ef43-4615-a9c8-8b2df69d20d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:36:01 compute-2 nova_compute[226829]: 2026-01-31 08:36:01.476 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:36:01 compute-2 nova_compute[226829]: 2026-01-31 08:36:01.476 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:36:01 compute-2 nova_compute[226829]: 2026-01-31 08:36:01.530 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:36:01 compute-2 nova_compute[226829]: 2026-01-31 08:36:01.796 226833 INFO nova.virt.block_device [None req-76cf99c8-4a1d-4b55-9985-3753d2e62348 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Attempting to driver detach volume 6d2d8f34-806c-41f7-aa1a-a0cc062684f4 from mountpoint /dev/vdb
Jan 31 08:36:01 compute-2 ceph-mon[77282]: pgmap v3090: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.8 KiB/s wr, 3 op/s
Jan 31 08:36:01 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:36:01 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:36:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:36:02 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2076501237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:36:02 compute-2 nova_compute[226829]: 2026-01-31 08:36:02.048 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:36:02 compute-2 nova_compute[226829]: 2026-01-31 08:36:02.053 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:36:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:02.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:02 compute-2 nova_compute[226829]: 2026-01-31 08:36:02.357 226833 DEBUG os_brick.encryptors [None req-76cf99c8-4a1d-4b55-9985-3753d2e62348 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Using volume encryption metadata '{'encryption_key_id': '2ca6a990-a9f2-407b-95b2-5887f411722b', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-6d2d8f34-806c-41f7-aa1a-a0cc062684f4', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '6d2d8f34-806c-41f7-aa1a-a0cc062684f4', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'e93a14b8-ef43-4615-a9c8-8b2df69d20d6', 'attached_at': '', 'detached_at': '', 'volume_id': '6d2d8f34-806c-41f7-aa1a-a0cc062684f4', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Jan 31 08:36:02 compute-2 nova_compute[226829]: 2026-01-31 08:36:02.365 226833 DEBUG nova.virt.libvirt.driver [None req-76cf99c8-4a1d-4b55-9985-3753d2e62348 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Attempting to detach device vdb from instance e93a14b8-ef43-4615-a9c8-8b2df69d20d6 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 08:36:02 compute-2 nova_compute[226829]: 2026-01-31 08:36:02.366 226833 DEBUG nova.virt.libvirt.guest [None req-76cf99c8-4a1d-4b55-9985-3753d2e62348 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:36:02 compute-2 nova_compute[226829]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:36:02 compute-2 nova_compute[226829]:   <source protocol="rbd" name="volumes/volume-6d2d8f34-806c-41f7-aa1a-a0cc062684f4">
Jan 31 08:36:02 compute-2 nova_compute[226829]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:36:02 compute-2 nova_compute[226829]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:36:02 compute-2 nova_compute[226829]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:36:02 compute-2 nova_compute[226829]:   </source>
Jan 31 08:36:02 compute-2 nova_compute[226829]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:36:02 compute-2 nova_compute[226829]:   <serial>6d2d8f34-806c-41f7-aa1a-a0cc062684f4</serial>
Jan 31 08:36:02 compute-2 nova_compute[226829]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 08:36:02 compute-2 nova_compute[226829]:   <encryption format="luks">
Jan 31 08:36:02 compute-2 nova_compute[226829]:     <secret type="passphrase" uuid="8743b107-0307-4a7c-ae02-24ad087d38ab"/>
Jan 31 08:36:02 compute-2 nova_compute[226829]:   </encryption>
Jan 31 08:36:02 compute-2 nova_compute[226829]: </disk>
Jan 31 08:36:02 compute-2 nova_compute[226829]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:36:02 compute-2 nova_compute[226829]: 2026-01-31 08:36:02.371 226833 INFO nova.virt.libvirt.driver [None req-76cf99c8-4a1d-4b55-9985-3753d2e62348 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Successfully detached device vdb from instance e93a14b8-ef43-4615-a9c8-8b2df69d20d6 from the persistent domain config.
Jan 31 08:36:02 compute-2 nova_compute[226829]: 2026-01-31 08:36:02.372 226833 DEBUG nova.virt.libvirt.driver [None req-76cf99c8-4a1d-4b55-9985-3753d2e62348 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance e93a14b8-ef43-4615-a9c8-8b2df69d20d6 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 08:36:02 compute-2 nova_compute[226829]: 2026-01-31 08:36:02.372 226833 DEBUG nova.virt.libvirt.guest [None req-76cf99c8-4a1d-4b55-9985-3753d2e62348 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:36:02 compute-2 nova_compute[226829]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:36:02 compute-2 nova_compute[226829]:   <source protocol="rbd" name="volumes/volume-6d2d8f34-806c-41f7-aa1a-a0cc062684f4">
Jan 31 08:36:02 compute-2 nova_compute[226829]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:36:02 compute-2 nova_compute[226829]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:36:02 compute-2 nova_compute[226829]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:36:02 compute-2 nova_compute[226829]:   </source>
Jan 31 08:36:02 compute-2 nova_compute[226829]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:36:02 compute-2 nova_compute[226829]:   <serial>6d2d8f34-806c-41f7-aa1a-a0cc062684f4</serial>
Jan 31 08:36:02 compute-2 nova_compute[226829]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 08:36:02 compute-2 nova_compute[226829]:   <encryption format="luks">
Jan 31 08:36:02 compute-2 nova_compute[226829]:     <secret type="passphrase" uuid="8743b107-0307-4a7c-ae02-24ad087d38ab"/>
Jan 31 08:36:02 compute-2 nova_compute[226829]:   </encryption>
Jan 31 08:36:02 compute-2 nova_compute[226829]: </disk>
Jan 31 08:36:02 compute-2 nova_compute[226829]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:36:02 compute-2 nova_compute[226829]: 2026-01-31 08:36:02.435 226833 DEBUG nova.virt.libvirt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Received event <DeviceRemovedEvent: 1769848562.4345868, e93a14b8-ef43-4615-a9c8-8b2df69d20d6 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 08:36:02 compute-2 nova_compute[226829]: 2026-01-31 08:36:02.436 226833 DEBUG nova.virt.libvirt.driver [None req-76cf99c8-4a1d-4b55-9985-3753d2e62348 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance e93a14b8-ef43-4615-a9c8-8b2df69d20d6 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 08:36:02 compute-2 nova_compute[226829]: 2026-01-31 08:36:02.439 226833 INFO nova.virt.libvirt.driver [None req-76cf99c8-4a1d-4b55-9985-3753d2e62348 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Successfully detached device vdb from instance e93a14b8-ef43-4615-a9c8-8b2df69d20d6 from the live domain config.
Jan 31 08:36:02 compute-2 nova_compute[226829]: 2026-01-31 08:36:02.532 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:02 compute-2 nova_compute[226829]: 2026-01-31 08:36:02.563 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:36:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2076501237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:36:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:36:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:36:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:36:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:36:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:36:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:36:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:36:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:03.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:36:03 compute-2 nova_compute[226829]: 2026-01-31 08:36:03.272 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:36:03.348 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '76'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:36:03 compute-2 nova_compute[226829]: 2026-01-31 08:36:03.551 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:36:03 compute-2 nova_compute[226829]: 2026-01-31 08:36:03.552 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:36:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:03 compute-2 nova_compute[226829]: 2026-01-31 08:36:03.642 226833 DEBUG nova.objects.instance [None req-76cf99c8-4a1d-4b55-9985-3753d2e62348 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Lazy-loading 'flavor' on Instance uuid e93a14b8-ef43-4615-a9c8-8b2df69d20d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:36:03 compute-2 ceph-mon[77282]: pgmap v3091: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.5 KiB/s wr, 3 op/s
Jan 31 08:36:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:04.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:04 compute-2 podman[309257]: 2026-01-31 08:36:04.235876669 +0000 UTC m=+0.121903579 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible)
Jan 31 08:36:04 compute-2 nova_compute[226829]: 2026-01-31 08:36:04.246 226833 DEBUG oslo_concurrency.lockutils [None req-76cf99c8-4a1d-4b55-9985-3753d2e62348 eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 3.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:36:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:05.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:05 compute-2 ceph-mon[77282]: pgmap v3092: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.5 KiB/s wr, 3 op/s
Jan 31 08:36:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:06.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:36:06.913 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:36:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:36:06.913 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:36:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:36:06.914 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:36:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:07.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:07 compute-2 nova_compute[226829]: 2026-01-31 08:36:07.245 226833 DEBUG oslo_concurrency.lockutils [None req-8b722e9c-9e65-423e-ac54-ccc9f7df0bbd eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Acquiring lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:36:07 compute-2 nova_compute[226829]: 2026-01-31 08:36:07.245 226833 DEBUG oslo_concurrency.lockutils [None req-8b722e9c-9e65-423e-ac54-ccc9f7df0bbd eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:36:07 compute-2 nova_compute[226829]: 2026-01-31 08:36:07.245 226833 DEBUG oslo_concurrency.lockutils [None req-8b722e9c-9e65-423e-ac54-ccc9f7df0bbd eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Acquiring lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:36:07 compute-2 nova_compute[226829]: 2026-01-31 08:36:07.245 226833 DEBUG oslo_concurrency.lockutils [None req-8b722e9c-9e65-423e-ac54-ccc9f7df0bbd eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:36:07 compute-2 nova_compute[226829]: 2026-01-31 08:36:07.246 226833 DEBUG oslo_concurrency.lockutils [None req-8b722e9c-9e65-423e-ac54-ccc9f7df0bbd eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:36:07 compute-2 nova_compute[226829]: 2026-01-31 08:36:07.247 226833 INFO nova.compute.manager [None req-8b722e9c-9e65-423e-ac54-ccc9f7df0bbd eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Terminating instance
Jan 31 08:36:07 compute-2 nova_compute[226829]: 2026-01-31 08:36:07.248 226833 DEBUG nova.compute.manager [None req-8b722e9c-9e65-423e-ac54-ccc9f7df0bbd eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:36:07 compute-2 kernel: tap7bd650c8-42 (unregistering): left promiscuous mode
Jan 31 08:36:07 compute-2 NetworkManager[48999]: <info>  [1769848567.3054] device (tap7bd650c8-42): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:36:07 compute-2 nova_compute[226829]: 2026-01-31 08:36:07.305 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:07 compute-2 ovn_controller[133834]: 2026-01-31T08:36:07Z|00720|binding|INFO|Releasing lport 7bd650c8-4238-4749-a373-22597e7927ff from this chassis (sb_readonly=0)
Jan 31 08:36:07 compute-2 ovn_controller[133834]: 2026-01-31T08:36:07Z|00721|binding|INFO|Setting lport 7bd650c8-4238-4749-a373-22597e7927ff down in Southbound
Jan 31 08:36:07 compute-2 nova_compute[226829]: 2026-01-31 08:36:07.310 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:07 compute-2 ovn_controller[133834]: 2026-01-31T08:36:07Z|00722|binding|INFO|Removing iface tap7bd650c8-42 ovn-installed in OVS
Jan 31 08:36:07 compute-2 nova_compute[226829]: 2026-01-31 08:36:07.312 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:07 compute-2 nova_compute[226829]: 2026-01-31 08:36:07.315 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:07 compute-2 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000b3.scope: Deactivated successfully.
Jan 31 08:36:07 compute-2 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000b3.scope: Consumed 16.952s CPU time.
Jan 31 08:36:07 compute-2 systemd-machined[195142]: Machine qemu-82-instance-000000b3 terminated.
Jan 31 08:36:07 compute-2 nova_compute[226829]: 2026-01-31 08:36:07.465 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:07 compute-2 nova_compute[226829]: 2026-01-31 08:36:07.467 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:07 compute-2 nova_compute[226829]: 2026-01-31 08:36:07.478 226833 INFO nova.virt.libvirt.driver [-] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Instance destroyed successfully.
Jan 31 08:36:07 compute-2 nova_compute[226829]: 2026-01-31 08:36:07.479 226833 DEBUG nova.objects.instance [None req-8b722e9c-9e65-423e-ac54-ccc9f7df0bbd eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Lazy-loading 'resources' on Instance uuid e93a14b8-ef43-4615-a9c8-8b2df69d20d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:36:07 compute-2 nova_compute[226829]: 2026-01-31 08:36:07.563 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:07 compute-2 nova_compute[226829]: 2026-01-31 08:36:07.795 226833 DEBUG nova.virt.libvirt.vif [None req-8b722e9c-9e65-423e-ac54-ccc9f7df0bbd eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:34:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-934828188',display_name='tempest-TestEncryptedCinderVolumes-server-934828188',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-934828188',id=179,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNyos7xhbpDhC1gFz6fqTxaWA5nUvx96vOOlcOqZHvZkPkc7S2Kk8P7fNSNHyu38uFKRbQD2QM+FfTWJdeBwXJke/v3qgX5nHItGsGkDis68eyGL2HY9r53McJv5koXrYg==',key_name='tempest-keypair-829743997',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:35:11Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='149ccba7b87e4284a3a6462e3a1dace1',ramdisk_id='',reservation_id='r-xvyy3y3j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-1271777404',owner_user_name='tempest-TestEncryptedCinderVolumes-1271777404-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:35:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb8e3d6cd4094a62b23e39cec9023f18',uuid=e93a14b8-ef43-4615-a9c8-8b2df69d20d6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7bd650c8-4238-4749-a373-22597e7927ff", "address": "fa:16:3e:83:4f:45", "network": {"id": "52eb33fc-fc1d-4d23-8941-b2b9e959253d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-417866262-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "149ccba7b87e4284a3a6462e3a1dace1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bd650c8-42", "ovs_interfaceid": "7bd650c8-4238-4749-a373-22597e7927ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:36:07 compute-2 nova_compute[226829]: 2026-01-31 08:36:07.796 226833 DEBUG nova.network.os_vif_util [None req-8b722e9c-9e65-423e-ac54-ccc9f7df0bbd eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Converting VIF {"id": "7bd650c8-4238-4749-a373-22597e7927ff", "address": "fa:16:3e:83:4f:45", "network": {"id": "52eb33fc-fc1d-4d23-8941-b2b9e959253d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-417866262-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "149ccba7b87e4284a3a6462e3a1dace1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bd650c8-42", "ovs_interfaceid": "7bd650c8-4238-4749-a373-22597e7927ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:36:07 compute-2 nova_compute[226829]: 2026-01-31 08:36:07.796 226833 DEBUG nova.network.os_vif_util [None req-8b722e9c-9e65-423e-ac54-ccc9f7df0bbd eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:83:4f:45,bridge_name='br-int',has_traffic_filtering=True,id=7bd650c8-4238-4749-a373-22597e7927ff,network=Network(52eb33fc-fc1d-4d23-8941-b2b9e959253d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bd650c8-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:36:07 compute-2 nova_compute[226829]: 2026-01-31 08:36:07.797 226833 DEBUG os_vif [None req-8b722e9c-9e65-423e-ac54-ccc9f7df0bbd eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:4f:45,bridge_name='br-int',has_traffic_filtering=True,id=7bd650c8-4238-4749-a373-22597e7927ff,network=Network(52eb33fc-fc1d-4d23-8941-b2b9e959253d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bd650c8-42') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:36:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:36:07.798 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:4f:45 10.100.0.11'], port_security=['fa:16:3e:83:4f:45 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e93a14b8-ef43-4615-a9c8-8b2df69d20d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52eb33fc-fc1d-4d23-8941-b2b9e959253d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '149ccba7b87e4284a3a6462e3a1dace1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4b3cd206-70ab-41d3-9525-744c797907c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39b1084f-15db-4d54-a3cf-e2f9c462ed13, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=7bd650c8-4238-4749-a373-22597e7927ff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:36:07 compute-2 nova_compute[226829]: 2026-01-31 08:36:07.799 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:07 compute-2 nova_compute[226829]: 2026-01-31 08:36:07.799 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7bd650c8-42, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:36:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:36:07.800 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 7bd650c8-4238-4749-a373-22597e7927ff in datapath 52eb33fc-fc1d-4d23-8941-b2b9e959253d unbound from our chassis
Jan 31 08:36:07 compute-2 nova_compute[226829]: 2026-01-31 08:36:07.801 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:36:07.802 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 52eb33fc-fc1d-4d23-8941-b2b9e959253d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:36:07 compute-2 nova_compute[226829]: 2026-01-31 08:36:07.802 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:36:07.804 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c549aec2-fb78-4cc7-a61c-2e4e36a8b75f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:36:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:36:07.804 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-52eb33fc-fc1d-4d23-8941-b2b9e959253d namespace which is not needed anymore
Jan 31 08:36:07 compute-2 nova_compute[226829]: 2026-01-31 08:36:07.805 226833 INFO os_vif [None req-8b722e9c-9e65-423e-ac54-ccc9f7df0bbd eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:4f:45,bridge_name='br-int',has_traffic_filtering=True,id=7bd650c8-4238-4749-a373-22597e7927ff,network=Network(52eb33fc-fc1d-4d23-8941-b2b9e959253d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bd650c8-42')
Jan 31 08:36:07 compute-2 neutron-haproxy-ovnmeta-52eb33fc-fc1d-4d23-8941-b2b9e959253d[308790]: [NOTICE]   (308794) : haproxy version is 2.8.14-c23fe91
Jan 31 08:36:07 compute-2 neutron-haproxy-ovnmeta-52eb33fc-fc1d-4d23-8941-b2b9e959253d[308790]: [NOTICE]   (308794) : path to executable is /usr/sbin/haproxy
Jan 31 08:36:07 compute-2 neutron-haproxy-ovnmeta-52eb33fc-fc1d-4d23-8941-b2b9e959253d[308790]: [WARNING]  (308794) : Exiting Master process...
Jan 31 08:36:07 compute-2 neutron-haproxy-ovnmeta-52eb33fc-fc1d-4d23-8941-b2b9e959253d[308790]: [WARNING]  (308794) : Exiting Master process...
Jan 31 08:36:07 compute-2 neutron-haproxy-ovnmeta-52eb33fc-fc1d-4d23-8941-b2b9e959253d[308790]: [ALERT]    (308794) : Current worker (308796) exited with code 143 (Terminated)
Jan 31 08:36:07 compute-2 neutron-haproxy-ovnmeta-52eb33fc-fc1d-4d23-8941-b2b9e959253d[308790]: [WARNING]  (308794) : All workers exited. Exiting... (0)
Jan 31 08:36:07 compute-2 systemd[1]: libpod-162b429cabdbe35de7fdbdce8e6392fb46da8956eb1be2a0ed3773f76bd2deb0.scope: Deactivated successfully.
Jan 31 08:36:07 compute-2 podman[309337]: 2026-01-31 08:36:07.934106138 +0000 UTC m=+0.044833607 container died 162b429cabdbe35de7fdbdce8e6392fb46da8956eb1be2a0ed3773f76bd2deb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52eb33fc-fc1d-4d23-8941-b2b9e959253d, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 08:36:07 compute-2 systemd[1]: var-lib-containers-storage-overlay-84910b3f3fe00852c246f0bd16f35aabfe58efd7e2e4e3d8c5fa22fabe682a83-merged.mount: Deactivated successfully.
Jan 31 08:36:07 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-162b429cabdbe35de7fdbdce8e6392fb46da8956eb1be2a0ed3773f76bd2deb0-userdata-shm.mount: Deactivated successfully.
Jan 31 08:36:07 compute-2 podman[309337]: 2026-01-31 08:36:07.965626653 +0000 UTC m=+0.076354122 container cleanup 162b429cabdbe35de7fdbdce8e6392fb46da8956eb1be2a0ed3773f76bd2deb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52eb33fc-fc1d-4d23-8941-b2b9e959253d, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 08:36:07 compute-2 systemd[1]: libpod-conmon-162b429cabdbe35de7fdbdce8e6392fb46da8956eb1be2a0ed3773f76bd2deb0.scope: Deactivated successfully.
Jan 31 08:36:08 compute-2 nova_compute[226829]: 2026-01-31 08:36:08.039 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:08 compute-2 podman[309367]: 2026-01-31 08:36:08.081304473 +0000 UTC m=+0.101797384 container remove 162b429cabdbe35de7fdbdce8e6392fb46da8956eb1be2a0ed3773f76bd2deb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52eb33fc-fc1d-4d23-8941-b2b9e959253d, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 08:36:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:36:08.086 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0438789e-f9bb-4251-b585-4b4632d60e28]: (4, ('Sat Jan 31 08:36:07 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-52eb33fc-fc1d-4d23-8941-b2b9e959253d (162b429cabdbe35de7fdbdce8e6392fb46da8956eb1be2a0ed3773f76bd2deb0)\n162b429cabdbe35de7fdbdce8e6392fb46da8956eb1be2a0ed3773f76bd2deb0\nSat Jan 31 08:36:07 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-52eb33fc-fc1d-4d23-8941-b2b9e959253d (162b429cabdbe35de7fdbdce8e6392fb46da8956eb1be2a0ed3773f76bd2deb0)\n162b429cabdbe35de7fdbdce8e6392fb46da8956eb1be2a0ed3773f76bd2deb0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:36:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:36:08.087 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[899b9f32-d380-4bd3-86d9-f66d72667382]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:36:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:36:08.088 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52eb33fc-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:36:08 compute-2 nova_compute[226829]: 2026-01-31 08:36:08.090 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:08 compute-2 kernel: tap52eb33fc-f0: left promiscuous mode
Jan 31 08:36:08 compute-2 nova_compute[226829]: 2026-01-31 08:36:08.092 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:36:08.097 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[68808cd1-004b-4857-9178-bad72ff47f2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:36:08 compute-2 nova_compute[226829]: 2026-01-31 08:36:08.098 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:36:08.114 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3207bbb4-7716-4ae8-a4f8-77fa44cc4427]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:36:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:36:08.115 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[375e1b06-3417-427f-8281-7e812cb3fea8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:36:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:36:08.131 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f6da6af3-96f8-439b-9438-0b0fa5c40237]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 875827, 'reachable_time': 23761, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309384, 'error': None, 'target': 'ovnmeta-52eb33fc-fc1d-4d23-8941-b2b9e959253d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:36:08 compute-2 systemd[1]: run-netns-ovnmeta\x2d52eb33fc\x2dfc1d\x2d4d23\x2d8941\x2db2b9e959253d.mount: Deactivated successfully.
Jan 31 08:36:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:36:08.134 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-52eb33fc-fc1d-4d23-8941-b2b9e959253d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:36:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:36:08.135 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[bb2de6a2-2519-4d5a-bbfb-96b0c228f463]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:36:08 compute-2 ceph-mon[77282]: pgmap v3093: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 2.5 KiB/s wr, 1 op/s
Jan 31 08:36:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:08.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:36:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:09.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:36:09 compute-2 ceph-mon[77282]: pgmap v3094: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 1.2 KiB/s wr, 0 op/s
Jan 31 08:36:09 compute-2 nova_compute[226829]: 2026-01-31 08:36:09.632 226833 DEBUG nova.compute.manager [req-afed084f-bbe3-4f9c-8818-71325fdbea70 req-52a1ae7a-9487-41f8-ad66-13adfec8dafa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Received event network-vif-unplugged-7bd650c8-4238-4749-a373-22597e7927ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:36:09 compute-2 nova_compute[226829]: 2026-01-31 08:36:09.632 226833 DEBUG oslo_concurrency.lockutils [req-afed084f-bbe3-4f9c-8818-71325fdbea70 req-52a1ae7a-9487-41f8-ad66-13adfec8dafa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:36:09 compute-2 nova_compute[226829]: 2026-01-31 08:36:09.632 226833 DEBUG oslo_concurrency.lockutils [req-afed084f-bbe3-4f9c-8818-71325fdbea70 req-52a1ae7a-9487-41f8-ad66-13adfec8dafa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:36:09 compute-2 nova_compute[226829]: 2026-01-31 08:36:09.633 226833 DEBUG oslo_concurrency.lockutils [req-afed084f-bbe3-4f9c-8818-71325fdbea70 req-52a1ae7a-9487-41f8-ad66-13adfec8dafa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:36:09 compute-2 nova_compute[226829]: 2026-01-31 08:36:09.633 226833 DEBUG nova.compute.manager [req-afed084f-bbe3-4f9c-8818-71325fdbea70 req-52a1ae7a-9487-41f8-ad66-13adfec8dafa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] No waiting events found dispatching network-vif-unplugged-7bd650c8-4238-4749-a373-22597e7927ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:36:09 compute-2 nova_compute[226829]: 2026-01-31 08:36:09.633 226833 DEBUG nova.compute.manager [req-afed084f-bbe3-4f9c-8818-71325fdbea70 req-52a1ae7a-9487-41f8-ad66-13adfec8dafa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Received event network-vif-unplugged-7bd650c8-4238-4749-a373-22597e7927ff for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:36:09 compute-2 nova_compute[226829]: 2026-01-31 08:36:09.657 226833 INFO nova.virt.libvirt.driver [None req-8b722e9c-9e65-423e-ac54-ccc9f7df0bbd eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Deleting instance files /var/lib/nova/instances/e93a14b8-ef43-4615-a9c8-8b2df69d20d6_del
Jan 31 08:36:09 compute-2 nova_compute[226829]: 2026-01-31 08:36:09.658 226833 INFO nova.virt.libvirt.driver [None req-8b722e9c-9e65-423e-ac54-ccc9f7df0bbd eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Deletion of /var/lib/nova/instances/e93a14b8-ef43-4615-a9c8-8b2df69d20d6_del complete
Jan 31 08:36:09 compute-2 sudo[309386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:36:09 compute-2 sudo[309386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:09 compute-2 sudo[309386]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:09 compute-2 sudo[309412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:36:09 compute-2 sudo[309412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:09 compute-2 sudo[309412]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:10 compute-2 nova_compute[226829]: 2026-01-31 08:36:10.143 226833 INFO nova.compute.manager [None req-8b722e9c-9e65-423e-ac54-ccc9f7df0bbd eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Took 2.90 seconds to destroy the instance on the hypervisor.
Jan 31 08:36:10 compute-2 nova_compute[226829]: 2026-01-31 08:36:10.143 226833 DEBUG oslo.service.loopingcall [None req-8b722e9c-9e65-423e-ac54-ccc9f7df0bbd eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:36:10 compute-2 nova_compute[226829]: 2026-01-31 08:36:10.144 226833 DEBUG nova.compute.manager [-] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:36:10 compute-2 nova_compute[226829]: 2026-01-31 08:36:10.145 226833 DEBUG nova.network.neutron [-] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:36:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:10.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:10 compute-2 nova_compute[226829]: 2026-01-31 08:36:10.178 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:36:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:36:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:11.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:11 compute-2 ceph-mon[77282]: pgmap v3095: 305 pgs: 305 active+clean; 235 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 10 KiB/s wr, 22 op/s
Jan 31 08:36:11 compute-2 nova_compute[226829]: 2026-01-31 08:36:11.889 226833 DEBUG nova.compute.manager [req-ca2221c1-f01b-478a-a79a-6c06077b352d req-69c6c9f2-ec80-4b3a-9045-f2908c48a85b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Received event network-vif-plugged-7bd650c8-4238-4749-a373-22597e7927ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:36:11 compute-2 nova_compute[226829]: 2026-01-31 08:36:11.889 226833 DEBUG oslo_concurrency.lockutils [req-ca2221c1-f01b-478a-a79a-6c06077b352d req-69c6c9f2-ec80-4b3a-9045-f2908c48a85b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:36:11 compute-2 nova_compute[226829]: 2026-01-31 08:36:11.890 226833 DEBUG oslo_concurrency.lockutils [req-ca2221c1-f01b-478a-a79a-6c06077b352d req-69c6c9f2-ec80-4b3a-9045-f2908c48a85b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:36:11 compute-2 nova_compute[226829]: 2026-01-31 08:36:11.890 226833 DEBUG oslo_concurrency.lockutils [req-ca2221c1-f01b-478a-a79a-6c06077b352d req-69c6c9f2-ec80-4b3a-9045-f2908c48a85b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:36:11 compute-2 nova_compute[226829]: 2026-01-31 08:36:11.890 226833 DEBUG nova.compute.manager [req-ca2221c1-f01b-478a-a79a-6c06077b352d req-69c6c9f2-ec80-4b3a-9045-f2908c48a85b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] No waiting events found dispatching network-vif-plugged-7bd650c8-4238-4749-a373-22597e7927ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:36:11 compute-2 nova_compute[226829]: 2026-01-31 08:36:11.890 226833 WARNING nova.compute.manager [req-ca2221c1-f01b-478a-a79a-6c06077b352d req-69c6c9f2-ec80-4b3a-9045-f2908c48a85b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Received unexpected event network-vif-plugged-7bd650c8-4238-4749-a373-22597e7927ff for instance with vm_state active and task_state deleting.
Jan 31 08:36:12 compute-2 podman[309438]: 2026-01-31 08:36:12.161730112 +0000 UTC m=+0.042549765 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 08:36:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:12.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:12 compute-2 nova_compute[226829]: 2026-01-31 08:36:12.566 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:12 compute-2 nova_compute[226829]: 2026-01-31 08:36:12.800 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:13.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:13 compute-2 nova_compute[226829]: 2026-01-31 08:36:13.678 226833 DEBUG nova.network.neutron [-] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:36:13 compute-2 ceph-mon[77282]: pgmap v3096: 305 pgs: 305 active+clean; 208 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 9.5 KiB/s wr, 25 op/s
Jan 31 08:36:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:14.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:14 compute-2 nova_compute[226829]: 2026-01-31 08:36:14.419 226833 DEBUG nova.compute.manager [req-b063fd3a-6f6e-4721-b12f-a398bd87edd0 req-c37c5f71-d548-4952-b986-5a42002dd3fd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Received event network-vif-deleted-7bd650c8-4238-4749-a373-22597e7927ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:36:14 compute-2 nova_compute[226829]: 2026-01-31 08:36:14.420 226833 INFO nova.compute.manager [req-b063fd3a-6f6e-4721-b12f-a398bd87edd0 req-c37c5f71-d548-4952-b986-5a42002dd3fd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Neutron deleted interface 7bd650c8-4238-4749-a373-22597e7927ff; detaching it from the instance and deleting it from the info cache
Jan 31 08:36:14 compute-2 nova_compute[226829]: 2026-01-31 08:36:14.420 226833 DEBUG nova.network.neutron [req-b063fd3a-6f6e-4721-b12f-a398bd87edd0 req-c37c5f71-d548-4952-b986-5a42002dd3fd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:36:14 compute-2 nova_compute[226829]: 2026-01-31 08:36:14.804 226833 INFO nova.compute.manager [-] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Took 4.66 seconds to deallocate network for instance.
Jan 31 08:36:14 compute-2 nova_compute[226829]: 2026-01-31 08:36:14.871 226833 DEBUG nova.compute.manager [req-b063fd3a-6f6e-4721-b12f-a398bd87edd0 req-c37c5f71-d548-4952-b986-5a42002dd3fd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Detach interface failed, port_id=7bd650c8-4238-4749-a373-22597e7927ff, reason: Instance e93a14b8-ef43-4615-a9c8-8b2df69d20d6 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:36:15 compute-2 sudo[309458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:36:15 compute-2 sudo[309458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:15 compute-2 sudo[309458]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:15.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:15 compute-2 sudo[309483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:36:15 compute-2 sudo[309483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:15 compute-2 sudo[309483]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:15 compute-2 nova_compute[226829]: 2026-01-31 08:36:15.206 226833 DEBUG oslo_concurrency.lockutils [None req-8b722e9c-9e65-423e-ac54-ccc9f7df0bbd eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:36:15 compute-2 nova_compute[226829]: 2026-01-31 08:36:15.206 226833 DEBUG oslo_concurrency.lockutils [None req-8b722e9c-9e65-423e-ac54-ccc9f7df0bbd eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:36:15 compute-2 nova_compute[226829]: 2026-01-31 08:36:15.270 226833 DEBUG oslo_concurrency.processutils [None req-8b722e9c-9e65-423e-ac54-ccc9f7df0bbd eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:36:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:36:15 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1655685972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:36:15 compute-2 nova_compute[226829]: 2026-01-31 08:36:15.714 226833 DEBUG oslo_concurrency.processutils [None req-8b722e9c-9e65-423e-ac54-ccc9f7df0bbd eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:36:15 compute-2 nova_compute[226829]: 2026-01-31 08:36:15.722 226833 DEBUG nova.compute.provider_tree [None req-8b722e9c-9e65-423e-ac54-ccc9f7df0bbd eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:36:15 compute-2 nova_compute[226829]: 2026-01-31 08:36:15.892 226833 DEBUG nova.scheduler.client.report [None req-8b722e9c-9e65-423e-ac54-ccc9f7df0bbd eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:36:16 compute-2 ceph-mon[77282]: pgmap v3097: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 9.8 KiB/s wr, 28 op/s
Jan 31 08:36:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1655685972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:36:16 compute-2 nova_compute[226829]: 2026-01-31 08:36:16.154 226833 DEBUG oslo_concurrency.lockutils [None req-8b722e9c-9e65-423e-ac54-ccc9f7df0bbd eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:36:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:16.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:16 compute-2 nova_compute[226829]: 2026-01-31 08:36:16.552 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:16 compute-2 nova_compute[226829]: 2026-01-31 08:36:16.553 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:36:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:17.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:17 compute-2 nova_compute[226829]: 2026-01-31 08:36:17.568 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:17 compute-2 nova_compute[226829]: 2026-01-31 08:36:17.802 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:18 compute-2 ceph-mon[77282]: pgmap v3098: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 9.8 KiB/s wr, 28 op/s
Jan 31 08:36:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:18.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:19 compute-2 nova_compute[226829]: 2026-01-31 08:36:19.042 226833 INFO nova.scheduler.client.report [None req-8b722e9c-9e65-423e-ac54-ccc9f7df0bbd eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Deleted allocations for instance e93a14b8-ef43-4615-a9c8-8b2df69d20d6
Jan 31 08:36:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:19.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:19 compute-2 nova_compute[226829]: 2026-01-31 08:36:19.327 226833 DEBUG oslo_concurrency.lockutils [None req-8b722e9c-9e65-423e-ac54-ccc9f7df0bbd eb8e3d6cd4094a62b23e39cec9023f18 149ccba7b87e4284a3a6462e3a1dace1 - - default default] Lock "e93a14b8-ef43-4615-a9c8-8b2df69d20d6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:36:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:20.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:20 compute-2 ceph-mon[77282]: pgmap v3099: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 9.8 KiB/s wr, 28 op/s
Jan 31 08:36:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:36:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:21.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:36:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:22.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:22 compute-2 ceph-mon[77282]: pgmap v3100: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 18 KiB/s wr, 29 op/s
Jan 31 08:36:22 compute-2 nova_compute[226829]: 2026-01-31 08:36:22.479 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848567.476968, e93a14b8-ef43-4615-a9c8-8b2df69d20d6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:36:22 compute-2 nova_compute[226829]: 2026-01-31 08:36:22.479 226833 INFO nova.compute.manager [-] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] VM Stopped (Lifecycle Event)
Jan 31 08:36:22 compute-2 nova_compute[226829]: 2026-01-31 08:36:22.570 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:22 compute-2 nova_compute[226829]: 2026-01-31 08:36:22.793 226833 DEBUG nova.compute.manager [None req-1cf57d6e-e2fa-4345-854f-c710654d9057 - - - - - -] [instance: e93a14b8-ef43-4615-a9c8-8b2df69d20d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:36:22 compute-2 nova_compute[226829]: 2026-01-31 08:36:22.803 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:36:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:23.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:36:23 compute-2 ceph-mon[77282]: pgmap v3101: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s rd, 9.2 KiB/s wr, 7 op/s
Jan 31 08:36:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:24.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:25.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:25 compute-2 ceph-mon[77282]: pgmap v3102: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 KiB/s rd, 8.7 KiB/s wr, 4 op/s
Jan 31 08:36:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:26.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:27.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:27 compute-2 nova_compute[226829]: 2026-01-31 08:36:27.572 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:27 compute-2 nova_compute[226829]: 2026-01-31 08:36:27.718 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:27 compute-2 ceph-mon[77282]: pgmap v3103: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.3 KiB/s wr, 1 op/s
Jan 31 08:36:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1161939193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:36:27 compute-2 nova_compute[226829]: 2026-01-31 08:36:27.804 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:28.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:36:28 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1696661387' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:36:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:36:28 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1696661387' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:36:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1696661387' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:36:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1696661387' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:36:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:29.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:30 compute-2 ceph-mon[77282]: pgmap v3104: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.3 KiB/s rd, 8.3 KiB/s wr, 11 op/s
Jan 31 08:36:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4077521189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:36:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:30.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:31.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:32.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:32 compute-2 ceph-mon[77282]: pgmap v3105: 305 pgs: 305 active+clean; 217 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 508 KiB/s wr, 25 op/s
Jan 31 08:36:32 compute-2 nova_compute[226829]: 2026-01-31 08:36:32.574 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:32 compute-2 nova_compute[226829]: 2026-01-31 08:36:32.806 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:33.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:34.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:34 compute-2 ceph-mon[77282]: pgmap v3106: 305 pgs: 305 active+clean; 250 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 2.4 MiB/s wr, 40 op/s
Jan 31 08:36:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:35.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:35 compute-2 sudo[309547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:36:35 compute-2 sudo[309547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:35 compute-2 sudo[309547]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:35 compute-2 podman[309541]: 2026-01-31 08:36:35.200902905 +0000 UTC m=+0.084478543 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:36:35 compute-2 sudo[309592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:36:35 compute-2 sudo[309592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:35 compute-2 sudo[309592]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:35 compute-2 ceph-mon[77282]: pgmap v3107: 305 pgs: 305 active+clean; 262 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 2.7 MiB/s wr, 41 op/s
Jan 31 08:36:35 compute-2 nova_compute[226829]: 2026-01-31 08:36:35.876 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:35 compute-2 nova_compute[226829]: 2026-01-31 08:36:35.951 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:36.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:37.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:37 compute-2 nova_compute[226829]: 2026-01-31 08:36:37.575 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:37 compute-2 nova_compute[226829]: 2026-01-31 08:36:37.807 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:38.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:38 compute-2 ceph-mon[77282]: pgmap v3108: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 3.5 MiB/s wr, 68 op/s
Jan 31 08:36:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/120920381' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:36:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2551731456' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:36:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:39.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3052520029' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:36:40 compute-2 ceph-mon[77282]: pgmap v3109: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 3.5 MiB/s wr, 68 op/s
Jan 31 08:36:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/572176419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:36:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:40.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:41.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:42.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:42 compute-2 ceph-mon[77282]: pgmap v3110: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 3.5 MiB/s wr, 58 op/s
Jan 31 08:36:42 compute-2 nova_compute[226829]: 2026-01-31 08:36:42.577 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:42 compute-2 nova_compute[226829]: 2026-01-31 08:36:42.808 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:43.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:43 compute-2 podman[309623]: 2026-01-31 08:36:43.15197248 +0000 UTC m=+0.043607354 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:36:43 compute-2 ceph-mon[77282]: pgmap v3111: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 3.1 MiB/s wr, 44 op/s
Jan 31 08:36:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:44.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:45.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:45 compute-2 nova_compute[226829]: 2026-01-31 08:36:45.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:45 compute-2 ceph-mon[77282]: pgmap v3112: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.1 MiB/s wr, 32 op/s
Jan 31 08:36:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:46.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:36:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:47.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:36:47 compute-2 ceph-mon[77282]: pgmap v3113: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 828 KiB/s wr, 35 op/s
Jan 31 08:36:47 compute-2 nova_compute[226829]: 2026-01-31 08:36:47.579 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:47 compute-2 nova_compute[226829]: 2026-01-31 08:36:47.810 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:36:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:48.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:36:48 compute-2 nova_compute[226829]: 2026-01-31 08:36:48.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:48 compute-2 nova_compute[226829]: 2026-01-31 08:36:48.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:36:48 compute-2 nova_compute[226829]: 2026-01-31 08:36:48.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:36:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:48 compute-2 nova_compute[226829]: 2026-01-31 08:36:48.728 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:36:48 compute-2 nova_compute[226829]: 2026-01-31 08:36:48.729 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:49.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:49 compute-2 ceph-mon[77282]: pgmap v3114: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Jan 31 08:36:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:50.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:50 compute-2 nova_compute[226829]: 2026-01-31 08:36:50.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:51.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:51 compute-2 nova_compute[226829]: 2026-01-31 08:36:51.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:51 compute-2 ceph-mon[77282]: pgmap v3115: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 579 KiB/s rd, 13 KiB/s wr, 37 op/s
Jan 31 08:36:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:36:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:52.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:36:52 compute-2 nova_compute[226829]: 2026-01-31 08:36:52.580 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:52 compute-2 nova_compute[226829]: 2026-01-31 08:36:52.812 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:36:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:53.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:36:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:36:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3253503215' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:36:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:36:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3253503215' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:36:53 compute-2 nova_compute[226829]: 2026-01-31 08:36:53.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:53 compute-2 ceph-mon[77282]: pgmap v3116: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 25 KiB/s wr, 79 op/s
Jan 31 08:36:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3253503215' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:36:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3253503215' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:36:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:54.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:55.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:55 compute-2 sudo[309650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:36:55 compute-2 sudo[309650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:55 compute-2 sudo[309650]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:55 compute-2 sudo[309675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:36:55 compute-2 sudo[309675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:55 compute-2 sudo[309675]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:56.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:56 compute-2 ceph-mon[77282]: pgmap v3117: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 83 op/s
Jan 31 08:36:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:57.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:57 compute-2 nova_compute[226829]: 2026-01-31 08:36:57.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:57 compute-2 nova_compute[226829]: 2026-01-31 08:36:57.582 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:57 compute-2 ceph-mon[77282]: pgmap v3118: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 25 KiB/s wr, 135 op/s
Jan 31 08:36:57 compute-2 nova_compute[226829]: 2026-01-31 08:36:57.813 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:58.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:36:58.737 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=77, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=76) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:36:58 compute-2 nova_compute[226829]: 2026-01-31 08:36:58.739 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:36:58.739 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:36:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:36:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:36:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:59.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:36:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3720688179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:36:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:36:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Cumulative writes: 14K writes, 73K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1557 writes, 7154 keys, 1557 commit groups, 1.0 writes per commit group, ingest: 15.37 MB, 0.03 MB/s
                                           Interval WAL: 1557 writes, 1557 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     78.1      1.12              0.26        46    0.024       0      0       0.0       0.0
                                             L6      1/0    9.97 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.1    154.6    131.8      3.40              1.27        45    0.075    320K    24K       0.0       0.0
                                            Sum      1/0    9.97 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.1    116.3    118.5      4.52              1.53        91    0.050    320K    24K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   7.7    134.2    133.7      0.37              0.12         8    0.046     39K   2043       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    154.6    131.8      3.40              1.27        45    0.075    320K    24K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     78.2      1.12              0.26        45    0.025       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.085, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.52 GB write, 0.10 MB/s write, 0.51 GB read, 0.10 MB/s read, 4.5 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559ef3d1f1f0#2 capacity: 304.00 MB usage: 57.08 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000434 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3319,54.75 MB,18.01%) FilterBlock(91,891.36 KB,0.286338%) IndexBlock(91,1.46 MB,0.480978%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 08:37:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:00.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:00 compute-2 nova_compute[226829]: 2026-01-31 08:37:00.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:01 compute-2 nova_compute[226829]: 2026-01-31 08:37:01.073 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:37:01 compute-2 nova_compute[226829]: 2026-01-31 08:37:01.074 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:37:01 compute-2 nova_compute[226829]: 2026-01-31 08:37:01.074 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:37:01 compute-2 nova_compute[226829]: 2026-01-31 08:37:01.074 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:37:01 compute-2 nova_compute[226829]: 2026-01-31 08:37:01.075 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:37:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:01.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:01 compute-2 ceph-mon[77282]: pgmap v3119: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 139 op/s
Jan 31 08:37:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:37:01 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1795149143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:37:01 compute-2 nova_compute[226829]: 2026-01-31 08:37:01.517 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:37:01 compute-2 nova_compute[226829]: 2026-01-31 08:37:01.648 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:37:01 compute-2 nova_compute[226829]: 2026-01-31 08:37:01.649 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4170MB free_disk=20.900920867919922GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:37:01 compute-2 nova_compute[226829]: 2026-01-31 08:37:01.649 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:37:01 compute-2 nova_compute[226829]: 2026-01-31 08:37:01.650 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:37:02 compute-2 nova_compute[226829]: 2026-01-31 08:37:02.028 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:37:02 compute-2 nova_compute[226829]: 2026-01-31 08:37:02.028 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:37:02 compute-2 nova_compute[226829]: 2026-01-31 08:37:02.050 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:37:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:02.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:37:02 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3438790100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:37:02 compute-2 nova_compute[226829]: 2026-01-31 08:37:02.461 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:37:02 compute-2 nova_compute[226829]: 2026-01-31 08:37:02.467 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:37:02 compute-2 nova_compute[226829]: 2026-01-31 08:37:02.583 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:02 compute-2 nova_compute[226829]: 2026-01-31 08:37:02.648 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:37:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2280977155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:37:02 compute-2 ceph-mon[77282]: pgmap v3120: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 139 op/s
Jan 31 08:37:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1795149143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:37:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4265750067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:37:02 compute-2 nova_compute[226829]: 2026-01-31 08:37:02.815 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:03 compute-2 nova_compute[226829]: 2026-01-31 08:37:03.162 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:37:03 compute-2 nova_compute[226829]: 2026-01-31 08:37:03.162 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:37:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:03.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:04.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:05.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3438790100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:37:05 compute-2 ceph-mon[77282]: pgmap v3121: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 12 KiB/s wr, 117 op/s
Jan 31 08:37:06 compute-2 podman[309751]: 2026-01-31 08:37:06.199411925 +0000 UTC m=+0.086423566 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20260127)
Jan 31 08:37:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:06.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3154223725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:37:06 compute-2 ceph-mon[77282]: pgmap v3122: 305 pgs: 305 active+clean; 297 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 613 KiB/s wr, 78 op/s
Jan 31 08:37:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:37:06.914 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:37:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:37:06.916 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:37:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:37:06.916 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:37:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:07.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:07 compute-2 ceph-mon[77282]: pgmap v3123: 305 pgs: 305 active+clean; 310 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 88 op/s
Jan 31 08:37:07 compute-2 nova_compute[226829]: 2026-01-31 08:37:07.585 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:07 compute-2 nova_compute[226829]: 2026-01-31 08:37:07.816 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:08.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:37:08.741 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '77'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:37:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:09.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #148. Immutable memtables: 0.
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:37:09.378364) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 148
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848629378450, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 2377, "num_deletes": 251, "total_data_size": 5745301, "memory_usage": 5822160, "flush_reason": "Manual Compaction"}
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #149: started
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848629394841, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 149, "file_size": 3755670, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71968, "largest_seqno": 74340, "table_properties": {"data_size": 3746166, "index_size": 5997, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19787, "raw_average_key_size": 20, "raw_value_size": 3727160, "raw_average_value_size": 3842, "num_data_blocks": 262, "num_entries": 970, "num_filter_entries": 970, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848413, "oldest_key_time": 1769848413, "file_creation_time": 1769848629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 16564 microseconds, and 7667 cpu microseconds.
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:37:09.394925) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #149: 3755670 bytes OK
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:37:09.394951) [db/memtable_list.cc:519] [default] Level-0 commit table #149 started
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:37:09.397333) [db/memtable_list.cc:722] [default] Level-0 commit table #149: memtable #1 done
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:37:09.397354) EVENT_LOG_v1 {"time_micros": 1769848629397347, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:37:09.397378) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 5734951, prev total WAL file size 5734951, number of live WAL files 2.
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000145.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:37:09.398549) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [149(3667KB)], [147(10204KB)]
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848629398607, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [149], "files_L6": [147], "score": -1, "input_data_size": 14204970, "oldest_snapshot_seqno": -1}
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #150: 9620 keys, 12284890 bytes, temperature: kUnknown
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848629495199, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 150, "file_size": 12284890, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12223116, "index_size": 36637, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24069, "raw_key_size": 253497, "raw_average_key_size": 26, "raw_value_size": 12054582, "raw_average_value_size": 1253, "num_data_blocks": 1397, "num_entries": 9620, "num_filter_entries": 9620, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769848629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 150, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:37:09.496811) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 12284890 bytes
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:37:09.500590) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 144.9 rd, 125.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 10.0 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(7.1) write-amplify(3.3) OK, records in: 10137, records dropped: 517 output_compression: NoCompression
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:37:09.500606) EVENT_LOG_v1 {"time_micros": 1769848629500598, "job": 94, "event": "compaction_finished", "compaction_time_micros": 98057, "compaction_time_cpu_micros": 43146, "output_level": 6, "num_output_files": 1, "total_output_size": 12284890, "num_input_records": 10137, "num_output_records": 9620, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848629501207, "job": 94, "event": "table_file_deletion", "file_number": 149}
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000147.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848629502199, "job": 94, "event": "table_file_deletion", "file_number": 147}
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:37:09.398377) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:37:09.502375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:37:09.502382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:37:09.502385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:37:09.502387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:37:09 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:37:09.502389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:37:10 compute-2 sudo[309779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:37:10 compute-2 sudo[309779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:10 compute-2 sudo[309779]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:10 compute-2 sudo[309804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:37:10 compute-2 sudo[309804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:10 compute-2 sudo[309804]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:10 compute-2 sudo[309829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:37:10 compute-2 sudo[309829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:10 compute-2 sudo[309829]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:10 compute-2 sudo[309854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:37:10 compute-2 sudo[309854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:10.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:10 compute-2 sudo[309854]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:10 compute-2 ceph-mon[77282]: pgmap v3124: 305 pgs: 305 active+clean; 316 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 408 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Jan 31 08:37:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:11.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:37:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:12.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:37:12 compute-2 ceph-mon[77282]: pgmap v3125: 305 pgs: 305 active+clean; 323 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 491 KiB/s rd, 3.3 MiB/s wr, 83 op/s
Jan 31 08:37:12 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:37:12 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:37:12 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:37:12 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:37:12 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:37:12 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:37:12 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:37:12 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:37:12 compute-2 nova_compute[226829]: 2026-01-31 08:37:12.587 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:12 compute-2 nova_compute[226829]: 2026-01-31 08:37:12.818 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:37:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:13.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:37:13 compute-2 ceph-mon[77282]: pgmap v3126: 305 pgs: 305 active+clean; 340 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 590 KiB/s rd, 4.2 MiB/s wr, 102 op/s
Jan 31 08:37:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:14 compute-2 podman[309912]: 2026-01-31 08:37:14.163784803 +0000 UTC m=+0.048330212 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 31 08:37:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:14.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:15.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:15 compute-2 sudo[309931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:37:15 compute-2 sudo[309931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:15 compute-2 sudo[309931]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:15 compute-2 sudo[309956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:37:15 compute-2 sudo[309956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:15 compute-2 sudo[309956]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:15 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-2[77982]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 08:37:15 compute-2 ceph-mon[77282]: pgmap v3127: 305 pgs: 305 active+clean; 350 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 599 KiB/s rd, 4.2 MiB/s wr, 104 op/s
Jan 31 08:37:16 compute-2 nova_compute[226829]: 2026-01-31 08:37:16.158 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:16 compute-2 nova_compute[226829]: 2026-01-31 08:37:16.227 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:16 compute-2 nova_compute[226829]: 2026-01-31 08:37:16.228 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:37:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:37:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:16.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:37:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:17.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:17 compute-2 nova_compute[226829]: 2026-01-31 08:37:17.587 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:17 compute-2 nova_compute[226829]: 2026-01-31 08:37:17.820 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:17 compute-2 ceph-mon[77282]: pgmap v3128: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 679 KiB/s rd, 3.7 MiB/s wr, 121 op/s
Jan 31 08:37:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:18.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:19.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:19 compute-2 sudo[309984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:37:19 compute-2 sudo[309984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:19 compute-2 sudo[309984]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:19 compute-2 ceph-mon[77282]: pgmap v3129: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 666 KiB/s rd, 2.5 MiB/s wr, 106 op/s
Jan 31 08:37:19 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:37:19 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:37:19 compute-2 sudo[310010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:37:19 compute-2 sudo[310010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:19 compute-2 sudo[310010]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:20.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:37:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:21.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:37:21 compute-2 ceph-mon[77282]: pgmap v3130: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 537 KiB/s rd, 2.2 MiB/s wr, 87 op/s
Jan 31 08:37:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:22.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:22 compute-2 nova_compute[226829]: 2026-01-31 08:37:22.589 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:22 compute-2 nova_compute[226829]: 2026-01-31 08:37:22.747 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:22 compute-2 nova_compute[226829]: 2026-01-31 08:37:22.822 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:23.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:24 compute-2 ceph-mon[77282]: pgmap v3131: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 220 KiB/s rd, 989 KiB/s wr, 48 op/s
Jan 31 08:37:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:24.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:25.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:26 compute-2 ceph-mon[77282]: pgmap v3132: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 117 KiB/s rd, 133 KiB/s wr, 28 op/s
Jan 31 08:37:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:26.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:37:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:27.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:37:27 compute-2 nova_compute[226829]: 2026-01-31 08:37:27.590 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:27 compute-2 ceph-mon[77282]: pgmap v3133: 305 pgs: 305 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 99 KiB/s rd, 102 KiB/s wr, 41 op/s
Jan 31 08:37:27 compute-2 nova_compute[226829]: 2026-01-31 08:37:27.824 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:28.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:37:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 67K writes, 276K keys, 67K commit groups, 1.0 writes per commit group, ingest: 0.28 GB, 0.05 MB/s
                                           Cumulative WAL: 67K writes, 24K syncs, 2.78 writes per sync, written: 0.28 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7622 writes, 29K keys, 7622 commit groups, 1.0 writes per commit group, ingest: 31.61 MB, 0.05 MB/s
                                           Interval WAL: 7622 writes, 2985 syncs, 2.55 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:37:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:37:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:29.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:37:30 compute-2 ceph-mon[77282]: pgmap v3134: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 45 KiB/s wr, 32 op/s
Jan 31 08:37:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:30.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:31.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:31 compute-2 ceph-mon[77282]: pgmap v3135: 305 pgs: 305 active+clean; 257 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 529 KiB/s wr, 45 op/s
Jan 31 08:37:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:32.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:32 compute-2 nova_compute[226829]: 2026-01-31 08:37:32.592 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:32 compute-2 nova_compute[226829]: 2026-01-31 08:37:32.828 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/598306189' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:37:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:33.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:33 compute-2 ceph-mon[77282]: pgmap v3136: 305 pgs: 305 active+clean; 249 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.4 MiB/s wr, 49 op/s
Jan 31 08:37:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2614260154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:37:34 compute-2 ceph-mgr[77635]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3465938080
Jan 31 08:37:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:34.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/356581148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:37:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:37:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:35.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:37:35 compute-2 nova_compute[226829]: 2026-01-31 08:37:35.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:35 compute-2 nova_compute[226829]: 2026-01-31 08:37:35.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:37:35 compute-2 nova_compute[226829]: 2026-01-31 08:37:35.519 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:37:35 compute-2 sudo[310042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:37:35 compute-2 sudo[310042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:35 compute-2 sudo[310042]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:35 compute-2 sudo[310067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:37:35 compute-2 sudo[310067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:35 compute-2 sudo[310067]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:36 compute-2 ceph-mon[77282]: pgmap v3137: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Jan 31 08:37:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:36.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:36 compute-2 nova_compute[226829]: 2026-01-31 08:37:36.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:36 compute-2 nova_compute[226829]: 2026-01-31 08:37:36.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:37:37 compute-2 podman[310093]: 2026-01-31 08:37:37.176926989 +0000 UTC m=+0.062954750 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Jan 31 08:37:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:37.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:37 compute-2 nova_compute[226829]: 2026-01-31 08:37:37.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:37 compute-2 nova_compute[226829]: 2026-01-31 08:37:37.594 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:37 compute-2 nova_compute[226829]: 2026-01-31 08:37:37.829 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:38.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:38 compute-2 ceph-mon[77282]: pgmap v3138: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Jan 31 08:37:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:39.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:39 compute-2 ceph-mon[77282]: pgmap v3139: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 467 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Jan 31 08:37:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:40.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:41.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:42 compute-2 ceph-mon[77282]: pgmap v3140: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Jan 31 08:37:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:42.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:42 compute-2 nova_compute[226829]: 2026-01-31 08:37:42.594 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:42 compute-2 nova_compute[226829]: 2026-01-31 08:37:42.831 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:43.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:44.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:45 compute-2 podman[310124]: 2026-01-31 08:37:45.17461386 +0000 UTC m=+0.037678394 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Jan 31 08:37:45 compute-2 ceph-mon[77282]: pgmap v3141: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 115 op/s
Jan 31 08:37:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:45.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:46.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:46 compute-2 ceph-mon[77282]: pgmap v3142: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 413 KiB/s wr, 111 op/s
Jan 31 08:37:46 compute-2 nova_compute[226829]: 2026-01-31 08:37:46.508 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:47.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:47 compute-2 ceph-mon[77282]: pgmap v3143: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 88 op/s
Jan 31 08:37:47 compute-2 nova_compute[226829]: 2026-01-31 08:37:47.596 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:47 compute-2 nova_compute[226829]: 2026-01-31 08:37:47.833 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:48.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:49.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:49 compute-2 ceph-mon[77282]: pgmap v3144: 305 pgs: 305 active+clean; 226 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 426 B/s wr, 76 op/s
Jan 31 08:37:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:37:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:50.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:37:50 compute-2 nova_compute[226829]: 2026-01-31 08:37:50.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:50 compute-2 nova_compute[226829]: 2026-01-31 08:37:50.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:37:50 compute-2 nova_compute[226829]: 2026-01-31 08:37:50.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:37:50 compute-2 nova_compute[226829]: 2026-01-31 08:37:50.570 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:37:50 compute-2 nova_compute[226829]: 2026-01-31 08:37:50.571 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:50 compute-2 nova_compute[226829]: 2026-01-31 08:37:50.571 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/131859032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:37:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:51.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:51 compute-2 nova_compute[226829]: 2026-01-31 08:37:51.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:51 compute-2 ceph-mon[77282]: pgmap v3145: 305 pgs: 305 active+clean; 194 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 746 KiB/s wr, 102 op/s
Jan 31 08:37:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:52.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:52 compute-2 nova_compute[226829]: 2026-01-31 08:37:52.599 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:52 compute-2 nova_compute[226829]: 2026-01-31 08:37:52.889 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:53.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:37:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/869108237' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:37:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:37:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/869108237' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:37:53 compute-2 nova_compute[226829]: 2026-01-31 08:37:53.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:54.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:54 compute-2 ceph-mon[77282]: pgmap v3146: 305 pgs: 305 active+clean; 189 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.7 MiB/s wr, 103 op/s
Jan 31 08:37:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/869108237' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:37:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/869108237' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:37:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:55.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:55 compute-2 sudo[310148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:37:55 compute-2 sudo[310148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:55 compute-2 sudo[310148]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:55 compute-2 sudo[310173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:37:55 compute-2 sudo[310173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:55 compute-2 sudo[310173]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:56 compute-2 ceph-mon[77282]: pgmap v3147: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Jan 31 08:37:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:56.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:37:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:57.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:37:57 compute-2 ceph-mon[77282]: pgmap v3148: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Jan 31 08:37:57 compute-2 nova_compute[226829]: 2026-01-31 08:37:57.601 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:57 compute-2 nova_compute[226829]: 2026-01-31 08:37:57.891 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:37:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:58.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:37:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/751218233' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:37:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2026186397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:37:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:37:58.876 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=78, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=77) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:37:58 compute-2 nova_compute[226829]: 2026-01-31 08:37:58.876 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:37:58.877 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:37:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:37:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:37:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:59.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:37:59 compute-2 nova_compute[226829]: 2026-01-31 08:37:59.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:59 compute-2 ceph-mon[77282]: pgmap v3149: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Jan 31 08:37:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1654399255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:37:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3217867652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:38:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:00.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:00 compute-2 nova_compute[226829]: 2026-01-31 08:38:00.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:00 compute-2 nova_compute[226829]: 2026-01-31 08:38:00.585 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:38:00 compute-2 nova_compute[226829]: 2026-01-31 08:38:00.585 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:38:00 compute-2 nova_compute[226829]: 2026-01-31 08:38:00.586 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:38:00 compute-2 nova_compute[226829]: 2026-01-31 08:38:00.586 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:38:00 compute-2 nova_compute[226829]: 2026-01-31 08:38:00.586 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:38:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:00.880 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '78'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:38:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:38:01 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1761214171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:38:01 compute-2 nova_compute[226829]: 2026-01-31 08:38:01.022 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:38:01 compute-2 nova_compute[226829]: 2026-01-31 08:38:01.219 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:38:01 compute-2 nova_compute[226829]: 2026-01-31 08:38:01.220 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4179MB free_disk=20.942752838134766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:38:01 compute-2 nova_compute[226829]: 2026-01-31 08:38:01.220 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:38:01 compute-2 nova_compute[226829]: 2026-01-31 08:38:01.220 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:38:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:38:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:01.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:38:01 compute-2 nova_compute[226829]: 2026-01-31 08:38:01.385 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:38:01 compute-2 nova_compute[226829]: 2026-01-31 08:38:01.385 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:38:01 compute-2 nova_compute[226829]: 2026-01-31 08:38:01.402 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:38:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:38:01 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1097934177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:38:01 compute-2 nova_compute[226829]: 2026-01-31 08:38:01.842 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:38:01 compute-2 nova_compute[226829]: 2026-01-31 08:38:01.848 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:38:01 compute-2 nova_compute[226829]: 2026-01-31 08:38:01.904 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:38:01 compute-2 nova_compute[226829]: 2026-01-31 08:38:01.906 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:38:01 compute-2 nova_compute[226829]: 2026-01-31 08:38:01.906 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:38:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:38:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:02.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:38:02 compute-2 nova_compute[226829]: 2026-01-31 08:38:02.604 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:02 compute-2 ceph-mon[77282]: pgmap v3150: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 341 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Jan 31 08:38:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1761214171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:38:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1097934177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:38:02 compute-2 nova_compute[226829]: 2026-01-31 08:38:02.893 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:38:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:03.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:38:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:04 compute-2 ceph-mon[77282]: pgmap v3151: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 146 KiB/s rd, 1.4 MiB/s wr, 49 op/s
Jan 31 08:38:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:38:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:04.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:38:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:05.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:05 compute-2 ceph-mon[77282]: pgmap v3152: 305 pgs: 305 active+clean; 175 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 501 KiB/s wr, 29 op/s
Jan 31 08:38:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:06.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:06.916 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:38:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:06.916 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:38:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:06.916 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:38:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:07.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e379 e379: 3 total, 3 up, 3 in
Jan 31 08:38:07 compute-2 nova_compute[226829]: 2026-01-31 08:38:07.606 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:07 compute-2 nova_compute[226829]: 2026-01-31 08:38:07.895 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:08 compute-2 podman[310249]: 2026-01-31 08:38:08.185300348 +0000 UTC m=+0.068729616 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:38:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:08.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:08 compute-2 ceph-mon[77282]: pgmap v3153: 305 pgs: 305 active+clean; 138 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 13 KiB/s wr, 20 op/s
Jan 31 08:38:08 compute-2 ceph-mon[77282]: osdmap e379: 3 total, 3 up, 3 in
Jan 31 08:38:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:09.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:09 compute-2 ceph-mon[77282]: pgmap v3155: 305 pgs: 305 active+clean; 137 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 1.6 MiB/s wr, 39 op/s
Jan 31 08:38:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:10.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1947169940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:38:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:11.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:11 compute-2 ceph-mon[77282]: pgmap v3156: 305 pgs: 305 active+clean; 141 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 2.0 MiB/s wr, 43 op/s
Jan 31 08:38:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:12.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:12 compute-2 nova_compute[226829]: 2026-01-31 08:38:12.607 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:12 compute-2 nova_compute[226829]: 2026-01-31 08:38:12.896 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:38:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:13.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:38:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:14 compute-2 ceph-mon[77282]: pgmap v3157: 305 pgs: 305 active+clean; 141 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 2.0 MiB/s wr, 40 op/s
Jan 31 08:38:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:14.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:14 compute-2 nova_compute[226829]: 2026-01-31 08:38:14.907 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:14 compute-2 nova_compute[226829]: 2026-01-31 08:38:14.908 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:38:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:15.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:15 compute-2 sudo[310278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:15 compute-2 sudo[310278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:15 compute-2 sudo[310278]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:15 compute-2 sudo[310310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:15 compute-2 sudo[310310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:15 compute-2 sudo[310310]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:15 compute-2 podman[310302]: 2026-01-31 08:38:15.869144696 +0000 UTC m=+0.044577840 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:38:16 compute-2 ceph-mon[77282]: pgmap v3158: 305 pgs: 305 active+clean; 141 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 2.0 MiB/s wr, 41 op/s
Jan 31 08:38:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:16.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:17.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:17 compute-2 nova_compute[226829]: 2026-01-31 08:38:17.609 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:17 compute-2 nova_compute[226829]: 2026-01-31 08:38:17.937 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:18 compute-2 ceph-mon[77282]: pgmap v3159: 305 pgs: 305 active+clean; 141 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 2.0 MiB/s wr, 35 op/s
Jan 31 08:38:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:18.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:38:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:19.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:38:19 compute-2 sudo[310350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:19 compute-2 sudo[310350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:19 compute-2 sudo[310350]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:20 compute-2 sudo[310375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:38:20 compute-2 sudo[310375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:20 compute-2 sudo[310375]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:20 compute-2 sudo[310400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:20 compute-2 sudo[310400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:20 compute-2 sudo[310400]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:20 compute-2 sudo[310425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 08:38:20 compute-2 sudo[310425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:20 compute-2 ceph-mon[77282]: pgmap v3160: 305 pgs: 305 active+clean; 141 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 31 08:38:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:38:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:20.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:38:20 compute-2 podman[310524]: 2026-01-31 08:38:20.577185345 +0000 UTC m=+0.065288083 container exec 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:38:20 compute-2 podman[310524]: 2026-01-31 08:38:20.666404946 +0000 UTC m=+0.154507694 container exec_died 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 08:38:21 compute-2 podman[310674]: 2026-01-31 08:38:21.143419569 +0000 UTC m=+0.046627536 container exec f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 08:38:21 compute-2 podman[310697]: 2026-01-31 08:38:21.205993237 +0000 UTC m=+0.047288874 container exec_died f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 08:38:21 compute-2 podman[310674]: 2026-01-31 08:38:21.212876194 +0000 UTC m=+0.116084181 container exec_died f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 08:38:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:21.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:21 compute-2 podman[310742]: 2026-01-31 08:38:21.388046667 +0000 UTC m=+0.042455224 container exec 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2023-02-22T09:23:20, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, architecture=x86_64, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 08:38:21 compute-2 podman[310742]: 2026-01-31 08:38:21.402426807 +0000 UTC m=+0.056835334 container exec_died 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, com.redhat.component=keepalived-container, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, architecture=x86_64, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 31 08:38:21 compute-2 sudo[310425]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:21 compute-2 sudo[310776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:21 compute-2 sudo[310776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:21 compute-2 sudo[310776]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:21 compute-2 sudo[310801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:38:21 compute-2 sudo[310801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:21 compute-2 sudo[310801]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:21 compute-2 sudo[310826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:21 compute-2 sudo[310826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:21 compute-2 sudo[310826]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:21 compute-2 sudo[310851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:38:21 compute-2 sudo[310851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:22 compute-2 ceph-mon[77282]: pgmap v3161: 305 pgs: 305 active+clean; 141 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 382 KiB/s wr, 17 op/s
Jan 31 08:38:22 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:22 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:22 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:22 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:22 compute-2 sudo[310851]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:22 compute-2 sudo[310908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:22 compute-2 sudo[310908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:22 compute-2 sudo[310908]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:22 compute-2 sudo[310933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:38:22 compute-2 sudo[310933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:22 compute-2 sudo[310933]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:22 compute-2 sudo[310958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:22 compute-2 sudo[310958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:22 compute-2 sudo[310958]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:22 compute-2 sudo[310983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 31 08:38:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:22.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:22 compute-2 sudo[310983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:22 compute-2 sudo[310983]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:22 compute-2 nova_compute[226829]: 2026-01-31 08:38:22.611 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:22 compute-2 nova_compute[226829]: 2026-01-31 08:38:22.938 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:23.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:23 compute-2 nova_compute[226829]: 2026-01-31 08:38:23.339 226833 DEBUG oslo_concurrency.lockutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "da76f9e6-a924-4a04-855a-2764b3edc1a3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:38:23 compute-2 nova_compute[226829]: 2026-01-31 08:38:23.340 226833 DEBUG oslo_concurrency.lockutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "da76f9e6-a924-4a04-855a-2764b3edc1a3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:38:23 compute-2 ceph-mon[77282]: pgmap v3162: 305 pgs: 305 active+clean; 141 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.1 KiB/s wr, 45 op/s
Jan 31 08:38:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:23 compute-2 nova_compute[226829]: 2026-01-31 08:38:23.724 226833 DEBUG nova.compute.manager [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:38:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 31 08:38:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:24.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 31 08:38:24 compute-2 nova_compute[226829]: 2026-01-31 08:38:24.878 226833 DEBUG oslo_concurrency.lockutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:38:24 compute-2 nova_compute[226829]: 2026-01-31 08:38:24.878 226833 DEBUG oslo_concurrency.lockutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:38:24 compute-2 nova_compute[226829]: 2026-01-31 08:38:24.888 226833 DEBUG nova.virt.hardware [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:38:24 compute-2 nova_compute[226829]: 2026-01-31 08:38:24.889 226833 INFO nova.compute.claims [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:38:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:38:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:38:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:38:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:38:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:38:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:25.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:26 compute-2 nova_compute[226829]: 2026-01-31 08:38:26.142 226833 DEBUG oslo_concurrency.processutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:38:26 compute-2 ceph-mon[77282]: pgmap v3163: 305 pgs: 305 active+clean; 141 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 2.3 KiB/s wr, 64 op/s
Jan 31 08:38:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:26.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:38:26 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/860123472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:38:26 compute-2 nova_compute[226829]: 2026-01-31 08:38:26.582 226833 DEBUG oslo_concurrency.processutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:38:26 compute-2 nova_compute[226829]: 2026-01-31 08:38:26.592 226833 DEBUG nova.compute.provider_tree [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:38:26 compute-2 nova_compute[226829]: 2026-01-31 08:38:26.652 226833 DEBUG nova.scheduler.client.report [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:38:26 compute-2 nova_compute[226829]: 2026-01-31 08:38:26.735 226833 DEBUG oslo_concurrency.lockutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:38:26 compute-2 nova_compute[226829]: 2026-01-31 08:38:26.736 226833 DEBUG nova.compute.manager [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:38:26 compute-2 nova_compute[226829]: 2026-01-31 08:38:26.831 226833 DEBUG nova.compute.manager [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:38:26 compute-2 nova_compute[226829]: 2026-01-31 08:38:26.832 226833 DEBUG nova.network.neutron [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:38:26 compute-2 nova_compute[226829]: 2026-01-31 08:38:26.866 226833 INFO nova.virt.libvirt.driver [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:38:26 compute-2 nova_compute[226829]: 2026-01-31 08:38:26.907 226833 DEBUG nova.compute.manager [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:38:27 compute-2 nova_compute[226829]: 2026-01-31 08:38:27.029 226833 DEBUG nova.compute.manager [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:38:27 compute-2 nova_compute[226829]: 2026-01-31 08:38:27.030 226833 DEBUG nova.virt.libvirt.driver [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:38:27 compute-2 nova_compute[226829]: 2026-01-31 08:38:27.031 226833 INFO nova.virt.libvirt.driver [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Creating image(s)
Jan 31 08:38:27 compute-2 nova_compute[226829]: 2026-01-31 08:38:27.064 226833 DEBUG nova.storage.rbd_utils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image da76f9e6-a924-4a04-855a-2764b3edc1a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:38:27 compute-2 nova_compute[226829]: 2026-01-31 08:38:27.096 226833 DEBUG nova.storage.rbd_utils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image da76f9e6-a924-4a04-855a-2764b3edc1a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:38:27 compute-2 nova_compute[226829]: 2026-01-31 08:38:27.128 226833 DEBUG nova.storage.rbd_utils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image da76f9e6-a924-4a04-855a-2764b3edc1a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:38:27 compute-2 nova_compute[226829]: 2026-01-31 08:38:27.132 226833 DEBUG oslo_concurrency.processutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:38:27 compute-2 nova_compute[226829]: 2026-01-31 08:38:27.210 226833 DEBUG oslo_concurrency.processutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:38:27 compute-2 nova_compute[226829]: 2026-01-31 08:38:27.211 226833 DEBUG oslo_concurrency.lockutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:38:27 compute-2 nova_compute[226829]: 2026-01-31 08:38:27.213 226833 DEBUG oslo_concurrency.lockutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:38:27 compute-2 nova_compute[226829]: 2026-01-31 08:38:27.213 226833 DEBUG oslo_concurrency.lockutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:38:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/860123472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:38:27 compute-2 nova_compute[226829]: 2026-01-31 08:38:27.259 226833 DEBUG nova.storage.rbd_utils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image da76f9e6-a924-4a04-855a-2764b3edc1a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:38:27 compute-2 nova_compute[226829]: 2026-01-31 08:38:27.262 226833 DEBUG oslo_concurrency.processutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 da76f9e6-a924-4a04-855a-2764b3edc1a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:38:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:38:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:27.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:38:27 compute-2 nova_compute[226829]: 2026-01-31 08:38:27.543 226833 DEBUG nova.policy [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c6968a1ee10e4e3b8651ffe0240a7e46', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ba35ae24dbf3443e8a526dce39c6793b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:38:27 compute-2 nova_compute[226829]: 2026-01-31 08:38:27.613 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:27 compute-2 nova_compute[226829]: 2026-01-31 08:38:27.660 226833 DEBUG oslo_concurrency.processutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 da76f9e6-a924-4a04-855a-2764b3edc1a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:38:27 compute-2 nova_compute[226829]: 2026-01-31 08:38:27.725 226833 DEBUG nova.storage.rbd_utils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] resizing rbd image da76f9e6-a924-4a04-855a-2764b3edc1a3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:38:27 compute-2 nova_compute[226829]: 2026-01-31 08:38:27.829 226833 DEBUG nova.objects.instance [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lazy-loading 'migration_context' on Instance uuid da76f9e6-a924-4a04-855a-2764b3edc1a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:38:27 compute-2 nova_compute[226829]: 2026-01-31 08:38:27.848 226833 DEBUG nova.virt.libvirt.driver [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:38:27 compute-2 nova_compute[226829]: 2026-01-31 08:38:27.849 226833 DEBUG nova.virt.libvirt.driver [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Ensure instance console log exists: /var/lib/nova/instances/da76f9e6-a924-4a04-855a-2764b3edc1a3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:38:27 compute-2 nova_compute[226829]: 2026-01-31 08:38:27.849 226833 DEBUG oslo_concurrency.lockutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:38:27 compute-2 nova_compute[226829]: 2026-01-31 08:38:27.849 226833 DEBUG oslo_concurrency.lockutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:38:27 compute-2 nova_compute[226829]: 2026-01-31 08:38:27.850 226833 DEBUG oslo_concurrency.lockutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:38:27 compute-2 nova_compute[226829]: 2026-01-31 08:38:27.940 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:28.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:28 compute-2 ceph-mon[77282]: pgmap v3164: 305 pgs: 305 active+clean; 183 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.8 MiB/s wr, 116 op/s
Jan 31 08:38:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:29.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:29 compute-2 nova_compute[226829]: 2026-01-31 08:38:29.871 226833 DEBUG nova.network.neutron [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Successfully created port: 0407be49-1c64-4010-bd1c-9a273b819442 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:38:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:30.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:30 compute-2 ceph-mon[77282]: pgmap v3165: 305 pgs: 305 active+clean; 217 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 2.6 MiB/s wr, 200 op/s
Jan 31 08:38:30 compute-2 sudo[311219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:30 compute-2 sudo[311219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:30 compute-2 sudo[311219]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:30 compute-2 sudo[311244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:38:30 compute-2 sudo[311244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:30 compute-2 sudo[311244]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:31.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:31 compute-2 ceph-mon[77282]: pgmap v3166: 305 pgs: 305 active+clean; 271 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.6 MiB/s wr, 252 op/s
Jan 31 08:38:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:32.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:32 compute-2 nova_compute[226829]: 2026-01-31 08:38:32.523 226833 DEBUG nova.network.neutron [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Successfully updated port: 0407be49-1c64-4010-bd1c-9a273b819442 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:38:32 compute-2 nova_compute[226829]: 2026-01-31 08:38:32.559 226833 DEBUG oslo_concurrency.lockutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "refresh_cache-da76f9e6-a924-4a04-855a-2764b3edc1a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:38:32 compute-2 nova_compute[226829]: 2026-01-31 08:38:32.559 226833 DEBUG oslo_concurrency.lockutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquired lock "refresh_cache-da76f9e6-a924-4a04-855a-2764b3edc1a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:38:32 compute-2 nova_compute[226829]: 2026-01-31 08:38:32.560 226833 DEBUG nova.network.neutron [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:38:32 compute-2 nova_compute[226829]: 2026-01-31 08:38:32.615 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:32 compute-2 nova_compute[226829]: 2026-01-31 08:38:32.735 226833 DEBUG nova.compute.manager [req-16940d1a-667d-44c4-8cf4-6b748abef315 req-f6bf552c-8983-4a6c-9e8a-95b8afdc6b0f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Received event network-changed-0407be49-1c64-4010-bd1c-9a273b819442 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:38:32 compute-2 nova_compute[226829]: 2026-01-31 08:38:32.735 226833 DEBUG nova.compute.manager [req-16940d1a-667d-44c4-8cf4-6b748abef315 req-f6bf552c-8983-4a6c-9e8a-95b8afdc6b0f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Refreshing instance network info cache due to event network-changed-0407be49-1c64-4010-bd1c-9a273b819442. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:38:32 compute-2 nova_compute[226829]: 2026-01-31 08:38:32.736 226833 DEBUG oslo_concurrency.lockutils [req-16940d1a-667d-44c4-8cf4-6b748abef315 req-f6bf552c-8983-4a6c-9e8a-95b8afdc6b0f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-da76f9e6-a924-4a04-855a-2764b3edc1a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:38:32 compute-2 nova_compute[226829]: 2026-01-31 08:38:32.942 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:33 compute-2 nova_compute[226829]: 2026-01-31 08:38:33.109 226833 DEBUG nova.network.neutron [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:38:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:33.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:33 compute-2 ceph-mon[77282]: pgmap v3167: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 7.1 MiB/s wr, 324 op/s
Jan 31 08:38:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:34.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:34 compute-2 nova_compute[226829]: 2026-01-31 08:38:34.733 226833 DEBUG nova.network.neutron [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Updating instance_info_cache with network_info: [{"id": "0407be49-1c64-4010-bd1c-9a273b819442", "address": "fa:16:3e:fe:30:22", "network": {"id": "230cf542-2017-47c7-972b-7bbce9acd446", "bridge": "br-int", "label": "tempest-network-smoke--657911166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0407be49-1c", "ovs_interfaceid": "0407be49-1c64-4010-bd1c-9a273b819442", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:38:34 compute-2 nova_compute[226829]: 2026-01-31 08:38:34.831 226833 DEBUG oslo_concurrency.lockutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Releasing lock "refresh_cache-da76f9e6-a924-4a04-855a-2764b3edc1a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:38:34 compute-2 nova_compute[226829]: 2026-01-31 08:38:34.831 226833 DEBUG nova.compute.manager [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Instance network_info: |[{"id": "0407be49-1c64-4010-bd1c-9a273b819442", "address": "fa:16:3e:fe:30:22", "network": {"id": "230cf542-2017-47c7-972b-7bbce9acd446", "bridge": "br-int", "label": "tempest-network-smoke--657911166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0407be49-1c", "ovs_interfaceid": "0407be49-1c64-4010-bd1c-9a273b819442", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:38:34 compute-2 nova_compute[226829]: 2026-01-31 08:38:34.832 226833 DEBUG oslo_concurrency.lockutils [req-16940d1a-667d-44c4-8cf4-6b748abef315 req-f6bf552c-8983-4a6c-9e8a-95b8afdc6b0f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-da76f9e6-a924-4a04-855a-2764b3edc1a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:38:34 compute-2 nova_compute[226829]: 2026-01-31 08:38:34.832 226833 DEBUG nova.network.neutron [req-16940d1a-667d-44c4-8cf4-6b748abef315 req-f6bf552c-8983-4a6c-9e8a-95b8afdc6b0f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Refreshing network info cache for port 0407be49-1c64-4010-bd1c-9a273b819442 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:38:34 compute-2 nova_compute[226829]: 2026-01-31 08:38:34.835 226833 DEBUG nova.virt.libvirt.driver [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Start _get_guest_xml network_info=[{"id": "0407be49-1c64-4010-bd1c-9a273b819442", "address": "fa:16:3e:fe:30:22", "network": {"id": "230cf542-2017-47c7-972b-7bbce9acd446", "bridge": "br-int", "label": "tempest-network-smoke--657911166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0407be49-1c", "ovs_interfaceid": "0407be49-1c64-4010-bd1c-9a273b819442", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:38:34 compute-2 nova_compute[226829]: 2026-01-31 08:38:34.840 226833 WARNING nova.virt.libvirt.driver [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:38:34 compute-2 nova_compute[226829]: 2026-01-31 08:38:34.845 226833 DEBUG nova.virt.libvirt.host [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:38:34 compute-2 nova_compute[226829]: 2026-01-31 08:38:34.845 226833 DEBUG nova.virt.libvirt.host [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:38:34 compute-2 nova_compute[226829]: 2026-01-31 08:38:34.848 226833 DEBUG nova.virt.libvirt.host [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:38:34 compute-2 nova_compute[226829]: 2026-01-31 08:38:34.849 226833 DEBUG nova.virt.libvirt.host [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:38:34 compute-2 nova_compute[226829]: 2026-01-31 08:38:34.850 226833 DEBUG nova.virt.libvirt.driver [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:38:34 compute-2 nova_compute[226829]: 2026-01-31 08:38:34.850 226833 DEBUG nova.virt.hardware [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:38:34 compute-2 nova_compute[226829]: 2026-01-31 08:38:34.851 226833 DEBUG nova.virt.hardware [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:38:34 compute-2 nova_compute[226829]: 2026-01-31 08:38:34.851 226833 DEBUG nova.virt.hardware [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:38:34 compute-2 nova_compute[226829]: 2026-01-31 08:38:34.851 226833 DEBUG nova.virt.hardware [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:38:34 compute-2 nova_compute[226829]: 2026-01-31 08:38:34.851 226833 DEBUG nova.virt.hardware [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:38:34 compute-2 nova_compute[226829]: 2026-01-31 08:38:34.852 226833 DEBUG nova.virt.hardware [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:38:34 compute-2 nova_compute[226829]: 2026-01-31 08:38:34.852 226833 DEBUG nova.virt.hardware [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:38:34 compute-2 nova_compute[226829]: 2026-01-31 08:38:34.852 226833 DEBUG nova.virt.hardware [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:38:34 compute-2 nova_compute[226829]: 2026-01-31 08:38:34.852 226833 DEBUG nova.virt.hardware [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:38:34 compute-2 nova_compute[226829]: 2026-01-31 08:38:34.852 226833 DEBUG nova.virt.hardware [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:38:34 compute-2 nova_compute[226829]: 2026-01-31 08:38:34.853 226833 DEBUG nova.virt.hardware [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:38:34 compute-2 nova_compute[226829]: 2026-01-31 08:38:34.855 226833 DEBUG oslo_concurrency.processutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:38:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:38:35 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1303395771' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:38:35 compute-2 nova_compute[226829]: 2026-01-31 08:38:35.263 226833 DEBUG oslo_concurrency.processutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:38:35 compute-2 nova_compute[226829]: 2026-01-31 08:38:35.287 226833 DEBUG nova.storage.rbd_utils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image da76f9e6-a924-4a04-855a-2764b3edc1a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:38:35 compute-2 nova_compute[226829]: 2026-01-31 08:38:35.290 226833 DEBUG oslo_concurrency.processutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:38:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:35.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:38:35 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2515805486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:38:35 compute-2 nova_compute[226829]: 2026-01-31 08:38:35.686 226833 DEBUG oslo_concurrency.processutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:38:35 compute-2 nova_compute[226829]: 2026-01-31 08:38:35.687 226833 DEBUG nova.virt.libvirt.vif [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:38:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-1043169202',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-1043169202',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1014068786-ac',id=182,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOCdvGufJvehmY0MUNN+f2LFenAJ+F98ZXIzoKpeHrgeapRWpAssUxwFK09XMQHpqJ4lB7bK+FLEX2qiQudAseuiTurXLcLJluW2RUOwpkRUQR5QMmu4IAl7WV20dROKLQ==',key_name='tempest-TestSecurityGroupsBasicOps-1358644469',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba35ae24dbf3443e8a526dce39c6793b',ramdisk_id='',reservation_id='r-a7vlvawv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1014068786',owner_user_name='tempest-TestSecurityGroupsBasicOps-1014068786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:38:26Z,user_data=None,user_id='c6968a1ee10e4e3b8651ffe0240a7e46',uuid=da76f9e6-a924-4a04-855a-2764b3edc1a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0407be49-1c64-4010-bd1c-9a273b819442", "address": "fa:16:3e:fe:30:22", "network": {"id": "230cf542-2017-47c7-972b-7bbce9acd446", "bridge": "br-int", "label": "tempest-network-smoke--657911166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0407be49-1c", "ovs_interfaceid": "0407be49-1c64-4010-bd1c-9a273b819442", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:38:35 compute-2 nova_compute[226829]: 2026-01-31 08:38:35.688 226833 DEBUG nova.network.os_vif_util [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converting VIF {"id": "0407be49-1c64-4010-bd1c-9a273b819442", "address": "fa:16:3e:fe:30:22", "network": {"id": "230cf542-2017-47c7-972b-7bbce9acd446", "bridge": "br-int", "label": "tempest-network-smoke--657911166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0407be49-1c", "ovs_interfaceid": "0407be49-1c64-4010-bd1c-9a273b819442", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:38:35 compute-2 nova_compute[226829]: 2026-01-31 08:38:35.689 226833 DEBUG nova.network.os_vif_util [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:30:22,bridge_name='br-int',has_traffic_filtering=True,id=0407be49-1c64-4010-bd1c-9a273b819442,network=Network(230cf542-2017-47c7-972b-7bbce9acd446),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0407be49-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:38:35 compute-2 nova_compute[226829]: 2026-01-31 08:38:35.690 226833 DEBUG nova.objects.instance [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lazy-loading 'pci_devices' on Instance uuid da76f9e6-a924-4a04-855a-2764b3edc1a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:38:35 compute-2 nova_compute[226829]: 2026-01-31 08:38:35.844 226833 DEBUG nova.virt.libvirt.driver [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:38:35 compute-2 nova_compute[226829]:   <uuid>da76f9e6-a924-4a04-855a-2764b3edc1a3</uuid>
Jan 31 08:38:35 compute-2 nova_compute[226829]:   <name>instance-000000b6</name>
Jan 31 08:38:35 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:38:35 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:38:35 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-1043169202</nova:name>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:38:34</nova:creationTime>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:38:35 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:38:35 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:38:35 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:38:35 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:38:35 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:38:35 compute-2 nova_compute[226829]:         <nova:user uuid="c6968a1ee10e4e3b8651ffe0240a7e46">tempest-TestSecurityGroupsBasicOps-1014068786-project-member</nova:user>
Jan 31 08:38:35 compute-2 nova_compute[226829]:         <nova:project uuid="ba35ae24dbf3443e8a526dce39c6793b">tempest-TestSecurityGroupsBasicOps-1014068786</nova:project>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:38:35 compute-2 nova_compute[226829]:         <nova:port uuid="0407be49-1c64-4010-bd1c-9a273b819442">
Jan 31 08:38:35 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:38:35 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:38:35 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <system>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <entry name="serial">da76f9e6-a924-4a04-855a-2764b3edc1a3</entry>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <entry name="uuid">da76f9e6-a924-4a04-855a-2764b3edc1a3</entry>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     </system>
Jan 31 08:38:35 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:38:35 compute-2 nova_compute[226829]:   <os>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:   </os>
Jan 31 08:38:35 compute-2 nova_compute[226829]:   <features>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:   </features>
Jan 31 08:38:35 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:38:35 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:38:35 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/da76f9e6-a924-4a04-855a-2764b3edc1a3_disk">
Jan 31 08:38:35 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       </source>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:38:35 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/da76f9e6-a924-4a04-855a-2764b3edc1a3_disk.config">
Jan 31 08:38:35 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       </source>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:38:35 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:fe:30:22"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <target dev="tap0407be49-1c"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/da76f9e6-a924-4a04-855a-2764b3edc1a3/console.log" append="off"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <video>
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     </video>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:38:35 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:38:35 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:38:35 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:38:35 compute-2 nova_compute[226829]: </domain>
Jan 31 08:38:35 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:38:35 compute-2 nova_compute[226829]: 2026-01-31 08:38:35.845 226833 DEBUG nova.compute.manager [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Preparing to wait for external event network-vif-plugged-0407be49-1c64-4010-bd1c-9a273b819442 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:38:35 compute-2 nova_compute[226829]: 2026-01-31 08:38:35.846 226833 DEBUG oslo_concurrency.lockutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "da76f9e6-a924-4a04-855a-2764b3edc1a3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:38:35 compute-2 nova_compute[226829]: 2026-01-31 08:38:35.846 226833 DEBUG oslo_concurrency.lockutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "da76f9e6-a924-4a04-855a-2764b3edc1a3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:38:35 compute-2 nova_compute[226829]: 2026-01-31 08:38:35.846 226833 DEBUG oslo_concurrency.lockutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "da76f9e6-a924-4a04-855a-2764b3edc1a3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:38:35 compute-2 nova_compute[226829]: 2026-01-31 08:38:35.847 226833 DEBUG nova.virt.libvirt.vif [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:38:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-1043169202',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-1043169202',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1014068786-ac',id=182,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOCdvGufJvehmY0MUNN+f2LFenAJ+F98ZXIzoKpeHrgeapRWpAssUxwFK09XMQHpqJ4lB7bK+FLEX2qiQudAseuiTurXLcLJluW2RUOwpkRUQR5QMmu4IAl7WV20dROKLQ==',key_name='tempest-TestSecurityGroupsBasicOps-1358644469',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba35ae24dbf3443e8a526dce39c6793b',ramdisk_id='',reservation_id='r-a7vlvawv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1014068786',owner_user_name='tempest-TestSecurityGroupsBasicOps-1014068786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:38:26Z,user_data=None,user_id='c6968a1ee10e4e3b8651ffe0240a7e46',uuid=da76f9e6-a924-4a04-855a-2764b3edc1a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0407be49-1c64-4010-bd1c-9a273b819442", "address": "fa:16:3e:fe:30:22", "network": {"id": "230cf542-2017-47c7-972b-7bbce9acd446", "bridge": "br-int", "label": "tempest-network-smoke--657911166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0407be49-1c", "ovs_interfaceid": "0407be49-1c64-4010-bd1c-9a273b819442", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:38:35 compute-2 nova_compute[226829]: 2026-01-31 08:38:35.848 226833 DEBUG nova.network.os_vif_util [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converting VIF {"id": "0407be49-1c64-4010-bd1c-9a273b819442", "address": "fa:16:3e:fe:30:22", "network": {"id": "230cf542-2017-47c7-972b-7bbce9acd446", "bridge": "br-int", "label": "tempest-network-smoke--657911166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0407be49-1c", "ovs_interfaceid": "0407be49-1c64-4010-bd1c-9a273b819442", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:38:35 compute-2 nova_compute[226829]: 2026-01-31 08:38:35.849 226833 DEBUG nova.network.os_vif_util [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:30:22,bridge_name='br-int',has_traffic_filtering=True,id=0407be49-1c64-4010-bd1c-9a273b819442,network=Network(230cf542-2017-47c7-972b-7bbce9acd446),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0407be49-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:38:35 compute-2 nova_compute[226829]: 2026-01-31 08:38:35.849 226833 DEBUG os_vif [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:30:22,bridge_name='br-int',has_traffic_filtering=True,id=0407be49-1c64-4010-bd1c-9a273b819442,network=Network(230cf542-2017-47c7-972b-7bbce9acd446),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0407be49-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:38:35 compute-2 nova_compute[226829]: 2026-01-31 08:38:35.850 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:35 compute-2 nova_compute[226829]: 2026-01-31 08:38:35.851 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:38:35 compute-2 nova_compute[226829]: 2026-01-31 08:38:35.851 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:38:35 compute-2 nova_compute[226829]: 2026-01-31 08:38:35.860 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:35 compute-2 nova_compute[226829]: 2026-01-31 08:38:35.861 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0407be49-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:38:35 compute-2 nova_compute[226829]: 2026-01-31 08:38:35.862 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0407be49-1c, col_values=(('external_ids', {'iface-id': '0407be49-1c64-4010-bd1c-9a273b819442', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:30:22', 'vm-uuid': 'da76f9e6-a924-4a04-855a-2764b3edc1a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:38:35 compute-2 nova_compute[226829]: 2026-01-31 08:38:35.864 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:35 compute-2 NetworkManager[48999]: <info>  [1769848715.8682] manager: (tap0407be49-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/356)
Jan 31 08:38:35 compute-2 nova_compute[226829]: 2026-01-31 08:38:35.868 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:38:35 compute-2 nova_compute[226829]: 2026-01-31 08:38:35.871 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:35 compute-2 nova_compute[226829]: 2026-01-31 08:38:35.872 226833 INFO os_vif [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:30:22,bridge_name='br-int',has_traffic_filtering=True,id=0407be49-1c64-4010-bd1c-9a273b819442,network=Network(230cf542-2017-47c7-972b-7bbce9acd446),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0407be49-1c')
Jan 31 08:38:35 compute-2 ceph-mon[77282]: pgmap v3168: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 559 KiB/s rd, 7.1 MiB/s wr, 295 op/s
Jan 31 08:38:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/906920371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:38:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1303395771' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:38:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2515805486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:38:35 compute-2 sudo[311336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:35 compute-2 sudo[311336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:35 compute-2 sudo[311336]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:35 compute-2 sudo[311361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:35 compute-2 sudo[311361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:35 compute-2 sudo[311361]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:36 compute-2 nova_compute[226829]: 2026-01-31 08:38:36.230 226833 DEBUG nova.virt.libvirt.driver [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:38:36 compute-2 nova_compute[226829]: 2026-01-31 08:38:36.231 226833 DEBUG nova.virt.libvirt.driver [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:38:36 compute-2 nova_compute[226829]: 2026-01-31 08:38:36.231 226833 DEBUG nova.virt.libvirt.driver [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] No VIF found with MAC fa:16:3e:fe:30:22, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:38:36 compute-2 nova_compute[226829]: 2026-01-31 08:38:36.232 226833 INFO nova.virt.libvirt.driver [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Using config drive
Jan 31 08:38:36 compute-2 nova_compute[226829]: 2026-01-31 08:38:36.260 226833 DEBUG nova.storage.rbd_utils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image da76f9e6-a924-4a04-855a-2764b3edc1a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:38:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:36.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:36 compute-2 nova_compute[226829]: 2026-01-31 08:38:36.782 226833 INFO nova.virt.libvirt.driver [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Creating config drive at /var/lib/nova/instances/da76f9e6-a924-4a04-855a-2764b3edc1a3/disk.config
Jan 31 08:38:36 compute-2 nova_compute[226829]: 2026-01-31 08:38:36.786 226833 DEBUG oslo_concurrency.processutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/da76f9e6-a924-4a04-855a-2764b3edc1a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmps_y_i4wt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:38:36 compute-2 nova_compute[226829]: 2026-01-31 08:38:36.913 226833 DEBUG oslo_concurrency.processutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/da76f9e6-a924-4a04-855a-2764b3edc1a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmps_y_i4wt" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:38:36 compute-2 nova_compute[226829]: 2026-01-31 08:38:36.937 226833 DEBUG nova.storage.rbd_utils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image da76f9e6-a924-4a04-855a-2764b3edc1a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:38:36 compute-2 nova_compute[226829]: 2026-01-31 08:38:36.940 226833 DEBUG oslo_concurrency.processutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/da76f9e6-a924-4a04-855a-2764b3edc1a3/disk.config da76f9e6-a924-4a04-855a-2764b3edc1a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:38:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:38:37 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/755283535' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.086 226833 DEBUG oslo_concurrency.processutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/da76f9e6-a924-4a04-855a-2764b3edc1a3/disk.config da76f9e6-a924-4a04-855a-2764b3edc1a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.086 226833 INFO nova.virt.libvirt.driver [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Deleting local config drive /var/lib/nova/instances/da76f9e6-a924-4a04-855a-2764b3edc1a3/disk.config because it was imported into RBD.
Jan 31 08:38:37 compute-2 kernel: tap0407be49-1c: entered promiscuous mode
Jan 31 08:38:37 compute-2 NetworkManager[48999]: <info>  [1769848717.1266] manager: (tap0407be49-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/357)
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.127 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:37 compute-2 ovn_controller[133834]: 2026-01-31T08:38:37Z|00723|binding|INFO|Claiming lport 0407be49-1c64-4010-bd1c-9a273b819442 for this chassis.
Jan 31 08:38:37 compute-2 ovn_controller[133834]: 2026-01-31T08:38:37Z|00724|binding|INFO|0407be49-1c64-4010-bd1c-9a273b819442: Claiming fa:16:3e:fe:30:22 10.100.0.8
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.132 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:37 compute-2 systemd-udevd[311456]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.149 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:37 compute-2 systemd-machined[195142]: New machine qemu-83-instance-000000b6.
Jan 31 08:38:37 compute-2 ovn_controller[133834]: 2026-01-31T08:38:37Z|00725|binding|INFO|Setting lport 0407be49-1c64-4010-bd1c-9a273b819442 ovn-installed in OVS
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.152 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:37 compute-2 NetworkManager[48999]: <info>  [1769848717.1555] device (tap0407be49-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:38:37 compute-2 NetworkManager[48999]: <info>  [1769848717.1563] device (tap0407be49-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:38:37 compute-2 systemd[1]: Started Virtual Machine qemu-83-instance-000000b6.
Jan 31 08:38:37 compute-2 ovn_controller[133834]: 2026-01-31T08:38:37Z|00726|binding|INFO|Setting lport 0407be49-1c64-4010-bd1c-9a273b819442 up in Southbound
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:37.207 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:30:22 10.100.0.8'], port_security=['fa:16:3e:fe:30:22 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'da76f9e6-a924-4a04-855a-2764b3edc1a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-230cf542-2017-47c7-972b-7bbce9acd446', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba35ae24dbf3443e8a526dce39c6793b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '012bb765-67ff-4d8f-9099-94c4abf76027 c6a91abb-9646-4eaf-929b-fa5a4cf8f203', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5db3dfb8-63b9-4fff-8cde-54682a3b4ba9, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=0407be49-1c64-4010-bd1c-9a273b819442) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:37.208 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 0407be49-1c64-4010-bd1c-9a273b819442 in datapath 230cf542-2017-47c7-972b-7bbce9acd446 bound to our chassis
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:37.210 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 230cf542-2017-47c7-972b-7bbce9acd446
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:37.218 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[29b5502c-d7d3-4bac-b1ec-4d8f91b1da75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:37.219 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap230cf542-21 in ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:37.221 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap230cf542-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:37.221 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d893d4d3-2d2b-464e-ace0-fe80cdb8ea03]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:37.222 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1a0ebb41-6964-4729-b12a-539b6b6f6404]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:37.234 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[55b19f44-4394-4468-b7ea-914451017353]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:37.242 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[90131015-523e-4b25-83a5-b4e73958091d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:37.263 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[7ac417cd-1697-4c80-9c8f-f8c09d5c49c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:37 compute-2 NetworkManager[48999]: <info>  [1769848717.2702] manager: (tap230cf542-20): new Veth device (/org/freedesktop/NetworkManager/Devices/358)
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:37.269 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[abd353c8-7525-4342-9322-3c55a0d4463c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:37.290 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[7d577f9b-f812-4b06-b5ee-568cdbe1784a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:37.292 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[8cbf6e5a-7907-45d5-acd8-c4af819547df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:37 compute-2 NetworkManager[48999]: <info>  [1769848717.3051] device (tap230cf542-20): carrier: link connected
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:37.311 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[26ebb121-023c-4c39-a4f5-5ef71fc597a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:37.321 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f27e35af-e276-4dde-83ec-c91ab98289ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap230cf542-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:38:3d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 896977, 'reachable_time': 34018, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311490, 'error': None, 'target': 'ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:37.334 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2b236e39-4a7f-498c-b03f-5eeffd401467]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe41:383d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 896977, 'tstamp': 896977}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311491, 'error': None, 'target': 'ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:38:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:37.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.342 226833 DEBUG nova.network.neutron [req-16940d1a-667d-44c4-8cf4-6b748abef315 req-f6bf552c-8983-4a6c-9e8a-95b8afdc6b0f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Updated VIF entry in instance network info cache for port 0407be49-1c64-4010-bd1c-9a273b819442. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.343 226833 DEBUG nova.network.neutron [req-16940d1a-667d-44c4-8cf4-6b748abef315 req-f6bf552c-8983-4a6c-9e8a-95b8afdc6b0f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Updating instance_info_cache with network_info: [{"id": "0407be49-1c64-4010-bd1c-9a273b819442", "address": "fa:16:3e:fe:30:22", "network": {"id": "230cf542-2017-47c7-972b-7bbce9acd446", "bridge": "br-int", "label": "tempest-network-smoke--657911166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0407be49-1c", "ovs_interfaceid": "0407be49-1c64-4010-bd1c-9a273b819442", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:37.349 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[287ca9e4-ef50-4692-aa00-6a7864cf4ddb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap230cf542-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:38:3d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 896977, 'reachable_time': 34018, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311492, 'error': None, 'target': 'ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:37.373 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c8696bfb-07f3-4039-8ecd-6078125e5f15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.377 226833 DEBUG oslo_concurrency.lockutils [req-16940d1a-667d-44c4-8cf4-6b748abef315 req-f6bf552c-8983-4a6c-9e8a-95b8afdc6b0f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-da76f9e6-a924-4a04-855a-2764b3edc1a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:37.411 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b322f24f-9358-434d-8d4b-7a3077fe5628]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:37.412 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap230cf542-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:37.412 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:37.413 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap230cf542-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:38:37 compute-2 kernel: tap230cf542-20: entered promiscuous mode
Jan 31 08:38:37 compute-2 NetworkManager[48999]: <info>  [1769848717.4607] manager: (tap230cf542-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/359)
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:37.462 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap230cf542-20, col_values=(('external_ids', {'iface-id': '1c0f6b16-7de3-433c-bddb-5e27fc8effc7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.460 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.464 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:37 compute-2 ovn_controller[133834]: 2026-01-31T08:38:37Z|00727|binding|INFO|Releasing lport 1c0f6b16-7de3-433c-bddb-5e27fc8effc7 from this chassis (sb_readonly=0)
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:37.465 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/230cf542-2017-47c7-972b-7bbce9acd446.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/230cf542-2017-47c7-972b-7bbce9acd446.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.466 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:37.466 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c159913c-35ef-43f8-8a45-bf0fef88e8b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:37.467 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-230cf542-2017-47c7-972b-7bbce9acd446
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/230cf542-2017-47c7-972b-7bbce9acd446.pid.haproxy
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 230cf542-2017-47c7-972b-7bbce9acd446
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:38:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:38:37.467 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446', 'env', 'PROCESS_TAG=haproxy-230cf542-2017-47c7-972b-7bbce9acd446', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/230cf542-2017-47c7-972b-7bbce9acd446.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.468 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.584 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848717.5835204, da76f9e6-a924-4a04-855a-2764b3edc1a3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.585 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] VM Started (Lifecycle Event)
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.617 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.667 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.671 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848717.5868862, da76f9e6-a924-4a04-855a-2764b3edc1a3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.671 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] VM Paused (Lifecycle Event)
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.731 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.733 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:38:37 compute-2 podman[311566]: 2026-01-31 08:38:37.773640817 +0000 UTC m=+0.047563332 container create 8d6c540e39d9c5a97db3ec6f6ab0aababf56f0c9908248a6b9fb8bcb2998ee10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:38:37 compute-2 systemd[1]: Started libpod-conmon-8d6c540e39d9c5a97db3ec6f6ab0aababf56f0c9908248a6b9fb8bcb2998ee10.scope.
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.810 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:38:37 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:38:37 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6af97ae282ec679c60a6591c4271c45e108dc24d5b97a8a7b328de03150d454/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:37 compute-2 podman[311566]: 2026-01-31 08:38:37.832198746 +0000 UTC m=+0.106121281 container init 8d6c540e39d9c5a97db3ec6f6ab0aababf56f0c9908248a6b9fb8bcb2998ee10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:38:37 compute-2 podman[311566]: 2026-01-31 08:38:37.835844394 +0000 UTC m=+0.109766909 container start 8d6c540e39d9c5a97db3ec6f6ab0aababf56f0c9908248a6b9fb8bcb2998ee10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:38:37 compute-2 podman[311566]: 2026-01-31 08:38:37.744959058 +0000 UTC m=+0.018881593 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:38:37 compute-2 neutron-haproxy-ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446[311582]: [NOTICE]   (311586) : New worker (311588) forked
Jan 31 08:38:37 compute-2 neutron-haproxy-ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446[311582]: [NOTICE]   (311586) : Loading success.
Jan 31 08:38:37 compute-2 ceph-mon[77282]: pgmap v3169: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 165 KiB/s rd, 7.1 MiB/s wr, 268 op/s
Jan 31 08:38:37 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/755283535' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.893 226833 DEBUG nova.compute.manager [req-0be7b9a8-ec41-4cc7-843a-8f85251097db req-d0b0125b-dcaa-42f4-9a25-6d71aa6aaa87 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Received event network-vif-plugged-0407be49-1c64-4010-bd1c-9a273b819442 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.894 226833 DEBUG oslo_concurrency.lockutils [req-0be7b9a8-ec41-4cc7-843a-8f85251097db req-d0b0125b-dcaa-42f4-9a25-6d71aa6aaa87 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "da76f9e6-a924-4a04-855a-2764b3edc1a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.894 226833 DEBUG oslo_concurrency.lockutils [req-0be7b9a8-ec41-4cc7-843a-8f85251097db req-d0b0125b-dcaa-42f4-9a25-6d71aa6aaa87 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "da76f9e6-a924-4a04-855a-2764b3edc1a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.894 226833 DEBUG oslo_concurrency.lockutils [req-0be7b9a8-ec41-4cc7-843a-8f85251097db req-d0b0125b-dcaa-42f4-9a25-6d71aa6aaa87 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "da76f9e6-a924-4a04-855a-2764b3edc1a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.894 226833 DEBUG nova.compute.manager [req-0be7b9a8-ec41-4cc7-843a-8f85251097db req-d0b0125b-dcaa-42f4-9a25-6d71aa6aaa87 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Processing event network-vif-plugged-0407be49-1c64-4010-bd1c-9a273b819442 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.895 226833 DEBUG nova.compute.manager [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.899 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848717.8988612, da76f9e6-a924-4a04-855a-2764b3edc1a3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.899 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] VM Resumed (Lifecycle Event)
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.900 226833 DEBUG nova.virt.libvirt.driver [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.904 226833 INFO nova.virt.libvirt.driver [-] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Instance spawned successfully.
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.904 226833 DEBUG nova.virt.libvirt.driver [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.945 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.948 226833 DEBUG nova.virt.libvirt.driver [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.948 226833 DEBUG nova.virt.libvirt.driver [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.949 226833 DEBUG nova.virt.libvirt.driver [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.949 226833 DEBUG nova.virt.libvirt.driver [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.949 226833 DEBUG nova.virt.libvirt.driver [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.950 226833 DEBUG nova.virt.libvirt.driver [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:38:37 compute-2 nova_compute[226829]: 2026-01-31 08:38:37.953 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:38:38 compute-2 nova_compute[226829]: 2026-01-31 08:38:38.050 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:38:38 compute-2 nova_compute[226829]: 2026-01-31 08:38:38.109 226833 INFO nova.compute.manager [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Took 11.08 seconds to spawn the instance on the hypervisor.
Jan 31 08:38:38 compute-2 nova_compute[226829]: 2026-01-31 08:38:38.110 226833 DEBUG nova.compute.manager [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:38:38 compute-2 nova_compute[226829]: 2026-01-31 08:38:38.278 226833 INFO nova.compute.manager [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Took 13.43 seconds to build instance.
Jan 31 08:38:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:38.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:38 compute-2 nova_compute[226829]: 2026-01-31 08:38:38.439 226833 DEBUG oslo_concurrency.lockutils [None req-849fbbd3-0bc8-4489-9d72-8102637d9310 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "da76f9e6-a924-4a04-855a-2764b3edc1a3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:38:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:39 compute-2 podman[311598]: 2026-01-31 08:38:39.181439016 +0000 UTC m=+0.067278887 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:38:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:39.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:40 compute-2 nova_compute[226829]: 2026-01-31 08:38:40.237 226833 DEBUG nova.compute.manager [req-4423c73a-6c2e-498d-9200-88ddc083bc16 req-deb0f8c8-f3c2-4492-b126-cb2d3971d2ea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Received event network-vif-plugged-0407be49-1c64-4010-bd1c-9a273b819442 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:38:40 compute-2 nova_compute[226829]: 2026-01-31 08:38:40.238 226833 DEBUG oslo_concurrency.lockutils [req-4423c73a-6c2e-498d-9200-88ddc083bc16 req-deb0f8c8-f3c2-4492-b126-cb2d3971d2ea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "da76f9e6-a924-4a04-855a-2764b3edc1a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:38:40 compute-2 nova_compute[226829]: 2026-01-31 08:38:40.238 226833 DEBUG oslo_concurrency.lockutils [req-4423c73a-6c2e-498d-9200-88ddc083bc16 req-deb0f8c8-f3c2-4492-b126-cb2d3971d2ea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "da76f9e6-a924-4a04-855a-2764b3edc1a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:38:40 compute-2 nova_compute[226829]: 2026-01-31 08:38:40.238 226833 DEBUG oslo_concurrency.lockutils [req-4423c73a-6c2e-498d-9200-88ddc083bc16 req-deb0f8c8-f3c2-4492-b126-cb2d3971d2ea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "da76f9e6-a924-4a04-855a-2764b3edc1a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:38:40 compute-2 nova_compute[226829]: 2026-01-31 08:38:40.238 226833 DEBUG nova.compute.manager [req-4423c73a-6c2e-498d-9200-88ddc083bc16 req-deb0f8c8-f3c2-4492-b126-cb2d3971d2ea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] No waiting events found dispatching network-vif-plugged-0407be49-1c64-4010-bd1c-9a273b819442 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:38:40 compute-2 nova_compute[226829]: 2026-01-31 08:38:40.239 226833 WARNING nova.compute.manager [req-4423c73a-6c2e-498d-9200-88ddc083bc16 req-deb0f8c8-f3c2-4492-b126-cb2d3971d2ea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Received unexpected event network-vif-plugged-0407be49-1c64-4010-bd1c-9a273b819442 for instance with vm_state active and task_state None.
Jan 31 08:38:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:40.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:40 compute-2 ceph-mon[77282]: pgmap v3170: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 134 KiB/s rd, 5.3 MiB/s wr, 215 op/s
Jan 31 08:38:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/849201847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:38:40 compute-2 nova_compute[226829]: 2026-01-31 08:38:40.864 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:41.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:41 compute-2 ceph-mon[77282]: pgmap v3171: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 878 KiB/s rd, 4.5 MiB/s wr, 159 op/s
Jan 31 08:38:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:42.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:42 compute-2 nova_compute[226829]: 2026-01-31 08:38:42.620 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:42 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/929030816' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:38:42 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2556544296' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:38:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:43.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:43 compute-2 ceph-mon[77282]: pgmap v3172: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.5 MiB/s wr, 141 op/s
Jan 31 08:38:43 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:38:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:44.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:38:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:38:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:45.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:38:45 compute-2 nova_compute[226829]: 2026-01-31 08:38:45.866 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:45 compute-2 ceph-mon[77282]: pgmap v3173: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Jan 31 08:38:46 compute-2 sshd-session[311629]: Invalid user  from 45.156.87.246 port 50008
Jan 31 08:38:46 compute-2 podman[311632]: 2026-01-31 08:38:46.168729601 +0000 UTC m=+0.048417624 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 08:38:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:46.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:47.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:47 compute-2 nova_compute[226829]: 2026-01-31 08:38:47.672 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:48 compute-2 ceph-mon[77282]: pgmap v3174: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 08:38:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:48.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:48 compute-2 nova_compute[226829]: 2026-01-31 08:38:48.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:49.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:49 compute-2 NetworkManager[48999]: <info>  [1769848729.9554] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/360)
Jan 31 08:38:49 compute-2 nova_compute[226829]: 2026-01-31 08:38:49.955 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:49 compute-2 NetworkManager[48999]: <info>  [1769848729.9561] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/361)
Jan 31 08:38:49 compute-2 nova_compute[226829]: 2026-01-31 08:38:49.995 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:50 compute-2 ovn_controller[133834]: 2026-01-31T08:38:50Z|00728|binding|INFO|Releasing lport 1c0f6b16-7de3-433c-bddb-5e27fc8effc7 from this chassis (sb_readonly=0)
Jan 31 08:38:50 compute-2 nova_compute[226829]: 2026-01-31 08:38:50.009 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:38:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:50.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:38:50 compute-2 ceph-mon[77282]: pgmap v3175: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Jan 31 08:38:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1828810280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:38:50 compute-2 nova_compute[226829]: 2026-01-31 08:38:50.869 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:51.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:51 compute-2 nova_compute[226829]: 2026-01-31 08:38:51.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:51 compute-2 nova_compute[226829]: 2026-01-31 08:38:51.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:38:51 compute-2 nova_compute[226829]: 2026-01-31 08:38:51.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:38:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:38:51 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2756972332' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:38:52 compute-2 ceph-mon[77282]: pgmap v3176: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 78 op/s
Jan 31 08:38:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2756972332' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:38:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:38:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:52.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:38:52 compute-2 nova_compute[226829]: 2026-01-31 08:38:52.604 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-da76f9e6-a924-4a04-855a-2764b3edc1a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:38:52 compute-2 nova_compute[226829]: 2026-01-31 08:38:52.605 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-da76f9e6-a924-4a04-855a-2764b3edc1a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:38:52 compute-2 nova_compute[226829]: 2026-01-31 08:38:52.605 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:38:52 compute-2 nova_compute[226829]: 2026-01-31 08:38:52.605 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid da76f9e6-a924-4a04-855a-2764b3edc1a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:38:52 compute-2 ovn_controller[133834]: 2026-01-31T08:38:52Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fe:30:22 10.100.0.8
Jan 31 08:38:52 compute-2 ovn_controller[133834]: 2026-01-31T08:38:52Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fe:30:22 10.100.0.8
Jan 31 08:38:52 compute-2 nova_compute[226829]: 2026-01-31 08:38:52.673 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:52 compute-2 nova_compute[226829]: 2026-01-31 08:38:52.785 226833 DEBUG nova.compute.manager [req-f1548851-082f-413f-9c97-05c6debee5d8 req-0f5142ad-c978-4897-a3af-17e217e456eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Received event network-changed-0407be49-1c64-4010-bd1c-9a273b819442 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:38:52 compute-2 nova_compute[226829]: 2026-01-31 08:38:52.785 226833 DEBUG nova.compute.manager [req-f1548851-082f-413f-9c97-05c6debee5d8 req-0f5142ad-c978-4897-a3af-17e217e456eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Refreshing instance network info cache due to event network-changed-0407be49-1c64-4010-bd1c-9a273b819442. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:38:52 compute-2 nova_compute[226829]: 2026-01-31 08:38:52.785 226833 DEBUG oslo_concurrency.lockutils [req-f1548851-082f-413f-9c97-05c6debee5d8 req-0f5142ad-c978-4897-a3af-17e217e456eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-da76f9e6-a924-4a04-855a-2764b3edc1a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:38:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:38:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3861035264' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:38:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:38:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3861035264' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:38:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:38:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:53.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:38:53 compute-2 sshd-session[311629]: Connection closed by invalid user  45.156.87.246 port 50008 [preauth]
Jan 31 08:38:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:54 compute-2 ceph-mon[77282]: pgmap v3177: 305 pgs: 305 active+clean; 337 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 910 KiB/s wr, 70 op/s
Jan 31 08:38:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3861035264' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:38:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3861035264' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:38:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:54.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:55 compute-2 nova_compute[226829]: 2026-01-31 08:38:55.153 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Updating instance_info_cache with network_info: [{"id": "0407be49-1c64-4010-bd1c-9a273b819442", "address": "fa:16:3e:fe:30:22", "network": {"id": "230cf542-2017-47c7-972b-7bbce9acd446", "bridge": "br-int", "label": "tempest-network-smoke--657911166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0407be49-1c", "ovs_interfaceid": "0407be49-1c64-4010-bd1c-9a273b819442", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:38:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2880464520' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:38:55 compute-2 nova_compute[226829]: 2026-01-31 08:38:55.271 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-da76f9e6-a924-4a04-855a-2764b3edc1a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:38:55 compute-2 nova_compute[226829]: 2026-01-31 08:38:55.271 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:38:55 compute-2 nova_compute[226829]: 2026-01-31 08:38:55.272 226833 DEBUG oslo_concurrency.lockutils [req-f1548851-082f-413f-9c97-05c6debee5d8 req-0f5142ad-c978-4897-a3af-17e217e456eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-da76f9e6-a924-4a04-855a-2764b3edc1a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:38:55 compute-2 nova_compute[226829]: 2026-01-31 08:38:55.272 226833 DEBUG nova.network.neutron [req-f1548851-082f-413f-9c97-05c6debee5d8 req-0f5142ad-c978-4897-a3af-17e217e456eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Refreshing network info cache for port 0407be49-1c64-4010-bd1c-9a273b819442 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:38:55 compute-2 nova_compute[226829]: 2026-01-31 08:38:55.273 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:55 compute-2 nova_compute[226829]: 2026-01-31 08:38:55.274 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:55 compute-2 nova_compute[226829]: 2026-01-31 08:38:55.274 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:55.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:55 compute-2 nova_compute[226829]: 2026-01-31 08:38:55.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:55 compute-2 nova_compute[226829]: 2026-01-31 08:38:55.872 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:56 compute-2 sudo[311659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:56 compute-2 sudo[311659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:56 compute-2 sudo[311659]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:56 compute-2 sudo[311684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:56 compute-2 sudo[311684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:56 compute-2 sudo[311684]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:56.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:56 compute-2 ceph-mon[77282]: pgmap v3178: 305 pgs: 305 active+clean; 351 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 517 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 31 08:38:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:57.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:57 compute-2 nova_compute[226829]: 2026-01-31 08:38:57.675 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:57 compute-2 ceph-mon[77282]: pgmap v3179: 305 pgs: 305 active+clean; 356 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 31 08:38:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1619984088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:38:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:58.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/409800787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:38:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:59 compute-2 nova_compute[226829]: 2026-01-31 08:38:59.316 226833 DEBUG nova.network.neutron [req-f1548851-082f-413f-9c97-05c6debee5d8 req-0f5142ad-c978-4897-a3af-17e217e456eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Updated VIF entry in instance network info cache for port 0407be49-1c64-4010-bd1c-9a273b819442. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:38:59 compute-2 nova_compute[226829]: 2026-01-31 08:38:59.317 226833 DEBUG nova.network.neutron [req-f1548851-082f-413f-9c97-05c6debee5d8 req-0f5142ad-c978-4897-a3af-17e217e456eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Updating instance_info_cache with network_info: [{"id": "0407be49-1c64-4010-bd1c-9a273b819442", "address": "fa:16:3e:fe:30:22", "network": {"id": "230cf542-2017-47c7-972b-7bbce9acd446", "bridge": "br-int", "label": "tempest-network-smoke--657911166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0407be49-1c", "ovs_interfaceid": "0407be49-1c64-4010-bd1c-9a273b819442", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:38:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:38:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:59.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:59 compute-2 nova_compute[226829]: 2026-01-31 08:38:59.765 226833 DEBUG oslo_concurrency.lockutils [req-f1548851-082f-413f-9c97-05c6debee5d8 req-0f5142ad-c978-4897-a3af-17e217e456eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-da76f9e6-a924-4a04-855a-2764b3edc1a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:38:59 compute-2 ceph-mon[77282]: pgmap v3180: 305 pgs: 305 active+clean; 359 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 795 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Jan 31 08:38:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/87716784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:39:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:00.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:00 compute-2 nova_compute[226829]: 2026-01-31 08:39:00.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:00 compute-2 nova_compute[226829]: 2026-01-31 08:39:00.874 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1078090768' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:39:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:01.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:02 compute-2 ceph-mon[77282]: pgmap v3181: 305 pgs: 305 active+clean; 359 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 107 op/s
Jan 31 08:39:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:02.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:02 compute-2 nova_compute[226829]: 2026-01-31 08:39:02.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:02 compute-2 nova_compute[226829]: 2026-01-31 08:39:02.677 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:02 compute-2 nova_compute[226829]: 2026-01-31 08:39:02.804 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:39:02 compute-2 nova_compute[226829]: 2026-01-31 08:39:02.805 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:39:02 compute-2 nova_compute[226829]: 2026-01-31 08:39:02.805 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:39:02 compute-2 nova_compute[226829]: 2026-01-31 08:39:02.805 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:39:02 compute-2 nova_compute[226829]: 2026-01-31 08:39:02.806 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:39:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:39:03 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/136018308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:39:03 compute-2 nova_compute[226829]: 2026-01-31 08:39:03.246 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:39:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:03.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:04 compute-2 nova_compute[226829]: 2026-01-31 08:39:04.085 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:04.084 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=79, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=78) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:39:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:04.087 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:39:04 compute-2 ceph-mon[77282]: pgmap v3182: 305 pgs: 305 active+clean; 367 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 140 op/s
Jan 31 08:39:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/136018308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:39:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:39:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:04.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:39:05 compute-2 nova_compute[226829]: 2026-01-31 08:39:05.289 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:39:05 compute-2 nova_compute[226829]: 2026-01-31 08:39:05.290 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:39:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:05.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:05 compute-2 nova_compute[226829]: 2026-01-31 08:39:05.420 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:39:05 compute-2 nova_compute[226829]: 2026-01-31 08:39:05.421 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3943MB free_disk=20.93584442138672GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:39:05 compute-2 nova_compute[226829]: 2026-01-31 08:39:05.422 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:39:05 compute-2 nova_compute[226829]: 2026-01-31 08:39:05.422 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:39:05 compute-2 nova_compute[226829]: 2026-01-31 08:39:05.876 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:06 compute-2 nova_compute[226829]: 2026-01-31 08:39:06.105 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance da76f9e6-a924-4a04-855a-2764b3edc1a3 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:39:06 compute-2 nova_compute[226829]: 2026-01-31 08:39:06.106 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:39:06 compute-2 nova_compute[226829]: 2026-01-31 08:39:06.106 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:39:06 compute-2 ceph-mon[77282]: pgmap v3183: 305 pgs: 305 active+clean; 395 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 157 op/s
Jan 31 08:39:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2964727998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:39:06 compute-2 nova_compute[226829]: 2026-01-31 08:39:06.290 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing inventories for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:39:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:39:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:06.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:39:06 compute-2 nova_compute[226829]: 2026-01-31 08:39:06.378 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating ProviderTree inventory for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:39:06 compute-2 nova_compute[226829]: 2026-01-31 08:39:06.378 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating inventory in ProviderTree for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:39:06 compute-2 nova_compute[226829]: 2026-01-31 08:39:06.399 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing aggregate associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:39:06 compute-2 nova_compute[226829]: 2026-01-31 08:39:06.441 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing trait associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, traits: COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:39:06 compute-2 nova_compute[226829]: 2026-01-31 08:39:06.497 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:39:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:06.917 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:39:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:06.918 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:39:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:06.919 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:39:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:39:06 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2495342213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:39:06 compute-2 nova_compute[226829]: 2026-01-31 08:39:06.956 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:39:06 compute-2 nova_compute[226829]: 2026-01-31 08:39:06.961 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:39:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:07.089 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '79'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:39:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/962867648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:39:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2495342213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:39:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:07.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:07 compute-2 nova_compute[226829]: 2026-01-31 08:39:07.679 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:07 compute-2 nova_compute[226829]: 2026-01-31 08:39:07.836 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:39:08 compute-2 ceph-mon[77282]: pgmap v3184: 305 pgs: 305 active+clean; 406 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.9 MiB/s wr, 161 op/s
Jan 31 08:39:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:08.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:08 compute-2 nova_compute[226829]: 2026-01-31 08:39:08.698 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:39:08 compute-2 nova_compute[226829]: 2026-01-31 08:39:08.699 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.277s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:39:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:39:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:09.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:39:10 compute-2 podman[311761]: 2026-01-31 08:39:10.235519747 +0000 UTC m=+0.116163463 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 08:39:10 compute-2 ceph-mon[77282]: pgmap v3185: 305 pgs: 305 active+clean; 406 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.9 MiB/s wr, 162 op/s
Jan 31 08:39:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2241056019' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:39:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:10.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:10 compute-2 nova_compute[226829]: 2026-01-31 08:39:10.878 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:11.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:12 compute-2 nova_compute[226829]: 2026-01-31 08:39:12.315 226833 DEBUG oslo_concurrency.lockutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "350b7ab1-25ae-4368-b319-529690bc394b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:39:12 compute-2 nova_compute[226829]: 2026-01-31 08:39:12.316 226833 DEBUG oslo_concurrency.lockutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "350b7ab1-25ae-4368-b319-529690bc394b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:39:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:39:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:12.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:39:12 compute-2 nova_compute[226829]: 2026-01-31 08:39:12.382 226833 DEBUG nova.compute.manager [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:39:12 compute-2 ceph-mon[77282]: pgmap v3186: 305 pgs: 305 active+clean; 415 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.5 MiB/s wr, 174 op/s
Jan 31 08:39:12 compute-2 nova_compute[226829]: 2026-01-31 08:39:12.526 226833 DEBUG oslo_concurrency.lockutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:39:12 compute-2 nova_compute[226829]: 2026-01-31 08:39:12.527 226833 DEBUG oslo_concurrency.lockutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:39:12 compute-2 nova_compute[226829]: 2026-01-31 08:39:12.543 226833 DEBUG nova.virt.hardware [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:39:12 compute-2 nova_compute[226829]: 2026-01-31 08:39:12.543 226833 INFO nova.compute.claims [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:39:12 compute-2 nova_compute[226829]: 2026-01-31 08:39:12.719 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:12 compute-2 nova_compute[226829]: 2026-01-31 08:39:12.738 226833 DEBUG oslo_concurrency.processutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:39:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:39:13 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4186895028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:39:13 compute-2 nova_compute[226829]: 2026-01-31 08:39:13.207 226833 DEBUG oslo_concurrency.processutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:39:13 compute-2 nova_compute[226829]: 2026-01-31 08:39:13.212 226833 DEBUG nova.compute.provider_tree [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:39:13 compute-2 nova_compute[226829]: 2026-01-31 08:39:13.265 226833 DEBUG nova.scheduler.client.report [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:39:13 compute-2 nova_compute[226829]: 2026-01-31 08:39:13.337 226833 DEBUG oslo_concurrency.lockutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:39:13 compute-2 nova_compute[226829]: 2026-01-31 08:39:13.338 226833 DEBUG nova.compute.manager [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:39:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:13.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:13 compute-2 ceph-mon[77282]: pgmap v3187: 305 pgs: 305 active+clean; 438 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.2 MiB/s rd, 3.9 MiB/s wr, 256 op/s
Jan 31 08:39:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4186895028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:39:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:14.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:14 compute-2 nova_compute[226829]: 2026-01-31 08:39:14.587 226833 DEBUG nova.compute.manager [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:39:14 compute-2 nova_compute[226829]: 2026-01-31 08:39:14.588 226833 DEBUG nova.network.neutron [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:39:14 compute-2 nova_compute[226829]: 2026-01-31 08:39:14.666 226833 INFO nova.virt.libvirt.driver [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:39:14 compute-2 nova_compute[226829]: 2026-01-31 08:39:14.693 226833 DEBUG nova.compute.manager [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:39:14 compute-2 nova_compute[226829]: 2026-01-31 08:39:14.992 226833 DEBUG nova.compute.manager [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:39:14 compute-2 nova_compute[226829]: 2026-01-31 08:39:14.994 226833 DEBUG nova.virt.libvirt.driver [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:39:14 compute-2 nova_compute[226829]: 2026-01-31 08:39:14.994 226833 INFO nova.virt.libvirt.driver [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Creating image(s)
Jan 31 08:39:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3098512103' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:39:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1072869473' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:39:15 compute-2 nova_compute[226829]: 2026-01-31 08:39:15.330 226833 DEBUG nova.storage.rbd_utils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 350b7ab1-25ae-4368-b319-529690bc394b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:39:15 compute-2 nova_compute[226829]: 2026-01-31 08:39:15.357 226833 DEBUG nova.storage.rbd_utils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 350b7ab1-25ae-4368-b319-529690bc394b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:39:15 compute-2 nova_compute[226829]: 2026-01-31 08:39:15.382 226833 DEBUG nova.storage.rbd_utils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 350b7ab1-25ae-4368-b319-529690bc394b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:39:15 compute-2 nova_compute[226829]: 2026-01-31 08:39:15.385 226833 DEBUG oslo_concurrency.processutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:39:15 compute-2 nova_compute[226829]: 2026-01-31 08:39:15.408 226833 DEBUG nova.policy [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c6968a1ee10e4e3b8651ffe0240a7e46', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ba35ae24dbf3443e8a526dce39c6793b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:39:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:39:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:15.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:39:15 compute-2 nova_compute[226829]: 2026-01-31 08:39:15.439 226833 DEBUG oslo_concurrency.processutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:39:15 compute-2 nova_compute[226829]: 2026-01-31 08:39:15.440 226833 DEBUG oslo_concurrency.lockutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:39:15 compute-2 nova_compute[226829]: 2026-01-31 08:39:15.440 226833 DEBUG oslo_concurrency.lockutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:39:15 compute-2 nova_compute[226829]: 2026-01-31 08:39:15.441 226833 DEBUG oslo_concurrency.lockutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:39:15 compute-2 nova_compute[226829]: 2026-01-31 08:39:15.540 226833 DEBUG nova.storage.rbd_utils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 350b7ab1-25ae-4368-b319-529690bc394b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:39:15 compute-2 nova_compute[226829]: 2026-01-31 08:39:15.543 226833 DEBUG oslo_concurrency.processutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 350b7ab1-25ae-4368-b319-529690bc394b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:39:15 compute-2 nova_compute[226829]: 2026-01-31 08:39:15.880 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:16 compute-2 sudo[311907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:39:16 compute-2 ceph-mon[77282]: pgmap v3188: 305 pgs: 305 active+clean; 439 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.4 MiB/s wr, 230 op/s
Jan 31 08:39:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1354848748' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:39:16 compute-2 sudo[311907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:16 compute-2 sudo[311907]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:16 compute-2 sudo[311938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:39:16 compute-2 sudo[311938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:16 compute-2 sudo[311938]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:16 compute-2 podman[311931]: 2026-01-31 08:39:16.287753508 +0000 UTC m=+0.055741134 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 08:39:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:16.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:16 compute-2 nova_compute[226829]: 2026-01-31 08:39:16.654 226833 DEBUG oslo_concurrency.processutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 350b7ab1-25ae-4368-b319-529690bc394b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:39:16 compute-2 nova_compute[226829]: 2026-01-31 08:39:16.714 226833 DEBUG nova.storage.rbd_utils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] resizing rbd image 350b7ab1-25ae-4368-b319-529690bc394b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:39:16 compute-2 nova_compute[226829]: 2026-01-31 08:39:16.847 226833 DEBUG nova.objects.instance [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lazy-loading 'migration_context' on Instance uuid 350b7ab1-25ae-4368-b319-529690bc394b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:39:16 compute-2 nova_compute[226829]: 2026-01-31 08:39:16.889 226833 DEBUG nova.virt.libvirt.driver [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:39:16 compute-2 nova_compute[226829]: 2026-01-31 08:39:16.889 226833 DEBUG nova.virt.libvirt.driver [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Ensure instance console log exists: /var/lib/nova/instances/350b7ab1-25ae-4368-b319-529690bc394b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:39:16 compute-2 nova_compute[226829]: 2026-01-31 08:39:16.890 226833 DEBUG oslo_concurrency.lockutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:39:16 compute-2 nova_compute[226829]: 2026-01-31 08:39:16.890 226833 DEBUG oslo_concurrency.lockutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:39:16 compute-2 nova_compute[226829]: 2026-01-31 08:39:16.890 226833 DEBUG oslo_concurrency.lockutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:39:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:17.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:17 compute-2 nova_compute[226829]: 2026-01-31 08:39:17.721 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:18 compute-2 ceph-mon[77282]: pgmap v3189: 305 pgs: 305 active+clean; 440 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.7 MiB/s wr, 195 op/s
Jan 31 08:39:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:18.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:18 compute-2 nova_compute[226829]: 2026-01-31 08:39:18.567 226833 DEBUG nova.network.neutron [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Successfully created port: 171577a9-ec3a-49f5-abe3-26f9190b3419 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:39:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:19 compute-2 ceph-mon[77282]: pgmap v3190: 305 pgs: 305 active+clean; 468 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.8 MiB/s wr, 190 op/s
Jan 31 08:39:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:19.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:19 compute-2 nova_compute[226829]: 2026-01-31 08:39:19.699 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:19 compute-2 nova_compute[226829]: 2026-01-31 08:39:19.788 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:19 compute-2 nova_compute[226829]: 2026-01-31 08:39:19.788 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:39:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:20.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:20 compute-2 nova_compute[226829]: 2026-01-31 08:39:20.883 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:21.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:21 compute-2 ceph-mon[77282]: pgmap v3191: 305 pgs: 305 active+clean; 495 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.3 MiB/s wr, 223 op/s
Jan 31 08:39:21 compute-2 nova_compute[226829]: 2026-01-31 08:39:21.802 226833 DEBUG nova.network.neutron [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Successfully updated port: 171577a9-ec3a-49f5-abe3-26f9190b3419 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:39:21 compute-2 nova_compute[226829]: 2026-01-31 08:39:21.840 226833 DEBUG oslo_concurrency.lockutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "refresh_cache-350b7ab1-25ae-4368-b319-529690bc394b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:39:21 compute-2 nova_compute[226829]: 2026-01-31 08:39:21.841 226833 DEBUG oslo_concurrency.lockutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquired lock "refresh_cache-350b7ab1-25ae-4368-b319-529690bc394b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:39:21 compute-2 nova_compute[226829]: 2026-01-31 08:39:21.841 226833 DEBUG nova.network.neutron [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:39:22 compute-2 nova_compute[226829]: 2026-01-31 08:39:22.113 226833 DEBUG nova.compute.manager [req-86c9efdf-ce94-43d2-9b29-0fba1c4cb9b8 req-ed1866b1-c565-4f2e-8e68-a91abada62cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Received event network-changed-171577a9-ec3a-49f5-abe3-26f9190b3419 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:39:22 compute-2 nova_compute[226829]: 2026-01-31 08:39:22.113 226833 DEBUG nova.compute.manager [req-86c9efdf-ce94-43d2-9b29-0fba1c4cb9b8 req-ed1866b1-c565-4f2e-8e68-a91abada62cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Refreshing instance network info cache due to event network-changed-171577a9-ec3a-49f5-abe3-26f9190b3419. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:39:22 compute-2 nova_compute[226829]: 2026-01-31 08:39:22.114 226833 DEBUG oslo_concurrency.lockutils [req-86c9efdf-ce94-43d2-9b29-0fba1c4cb9b8 req-ed1866b1-c565-4f2e-8e68-a91abada62cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-350b7ab1-25ae-4368-b319-529690bc394b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:39:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:39:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:22.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:39:22 compute-2 nova_compute[226829]: 2026-01-31 08:39:22.557 226833 DEBUG nova.network.neutron [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:39:22 compute-2 nova_compute[226829]: 2026-01-31 08:39:22.723 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:23.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:23 compute-2 ceph-mon[77282]: pgmap v3192: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 6.8 MiB/s wr, 276 op/s
Jan 31 08:39:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:24.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:24 compute-2 nova_compute[226829]: 2026-01-31 08:39:24.627 226833 DEBUG nova.network.neutron [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Updating instance_info_cache with network_info: [{"id": "171577a9-ec3a-49f5-abe3-26f9190b3419", "address": "fa:16:3e:4d:ec:42", "network": {"id": "230cf542-2017-47c7-972b-7bbce9acd446", "bridge": "br-int", "label": "tempest-network-smoke--657911166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap171577a9-ec", "ovs_interfaceid": "171577a9-ec3a-49f5-abe3-26f9190b3419", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.028 226833 DEBUG oslo_concurrency.lockutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Releasing lock "refresh_cache-350b7ab1-25ae-4368-b319-529690bc394b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.029 226833 DEBUG nova.compute.manager [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Instance network_info: |[{"id": "171577a9-ec3a-49f5-abe3-26f9190b3419", "address": "fa:16:3e:4d:ec:42", "network": {"id": "230cf542-2017-47c7-972b-7bbce9acd446", "bridge": "br-int", "label": "tempest-network-smoke--657911166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap171577a9-ec", "ovs_interfaceid": "171577a9-ec3a-49f5-abe3-26f9190b3419", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.029 226833 DEBUG oslo_concurrency.lockutils [req-86c9efdf-ce94-43d2-9b29-0fba1c4cb9b8 req-ed1866b1-c565-4f2e-8e68-a91abada62cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-350b7ab1-25ae-4368-b319-529690bc394b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.029 226833 DEBUG nova.network.neutron [req-86c9efdf-ce94-43d2-9b29-0fba1c4cb9b8 req-ed1866b1-c565-4f2e-8e68-a91abada62cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Refreshing network info cache for port 171577a9-ec3a-49f5-abe3-26f9190b3419 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.033 226833 DEBUG nova.virt.libvirt.driver [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Start _get_guest_xml network_info=[{"id": "171577a9-ec3a-49f5-abe3-26f9190b3419", "address": "fa:16:3e:4d:ec:42", "network": {"id": "230cf542-2017-47c7-972b-7bbce9acd446", "bridge": "br-int", "label": "tempest-network-smoke--657911166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap171577a9-ec", "ovs_interfaceid": "171577a9-ec3a-49f5-abe3-26f9190b3419", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.037 226833 WARNING nova.virt.libvirt.driver [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.057 226833 DEBUG nova.virt.libvirt.host [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.058 226833 DEBUG nova.virt.libvirt.host [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.062 226833 DEBUG nova.virt.libvirt.host [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.062 226833 DEBUG nova.virt.libvirt.host [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.063 226833 DEBUG nova.virt.libvirt.driver [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.064 226833 DEBUG nova.virt.hardware [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.064 226833 DEBUG nova.virt.hardware [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.064 226833 DEBUG nova.virt.hardware [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.064 226833 DEBUG nova.virt.hardware [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.065 226833 DEBUG nova.virt.hardware [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.065 226833 DEBUG nova.virt.hardware [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.065 226833 DEBUG nova.virt.hardware [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.065 226833 DEBUG nova.virt.hardware [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.065 226833 DEBUG nova.virt.hardware [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.066 226833 DEBUG nova.virt.hardware [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.066 226833 DEBUG nova.virt.hardware [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.069 226833 DEBUG oslo_concurrency.processutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:39:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:25.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:39:25 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3702192440' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.474 226833 DEBUG oslo_concurrency.processutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.507 226833 DEBUG nova.storage.rbd_utils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 350b7ab1-25ae-4368-b319-529690bc394b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.511 226833 DEBUG oslo_concurrency.processutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.884 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:39:25 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/239804748' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.992 226833 DEBUG oslo_concurrency.processutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.993 226833 DEBUG nova.virt.libvirt.vif [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:39:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-gen-0-1257759087',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-gen-0-1257759087',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1014068786-ge',id=187,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOCdvGufJvehmY0MUNN+f2LFenAJ+F98ZXIzoKpeHrgeapRWpAssUxwFK09XMQHpqJ4lB7bK+FLEX2qiQudAseuiTurXLcLJluW2RUOwpkRUQR5QMmu4IAl7WV20dROKLQ==',key_name='tempest-TestSecurityGroupsBasicOps-1358644469',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba35ae24dbf3443e8a526dce39c6793b',ramdisk_id='',reservation_id='r-jrbt4824',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1014068786',owner_user_name='tempest-TestSecurityGroupsBasicOps-1014068786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:39:14Z,user_data=None,user_id='c6968a1ee10e4e3b8651ffe0240a7e46',uuid=350b7ab1-25ae-4368-b319-529690bc394b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "171577a9-ec3a-49f5-abe3-26f9190b3419", "address": "fa:16:3e:4d:ec:42", "network": {"id": "230cf542-2017-47c7-972b-7bbce9acd446", "bridge": "br-int", "label": "tempest-network-smoke--657911166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap171577a9-ec", "ovs_interfaceid": "171577a9-ec3a-49f5-abe3-26f9190b3419", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.994 226833 DEBUG nova.network.os_vif_util [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converting VIF {"id": "171577a9-ec3a-49f5-abe3-26f9190b3419", "address": "fa:16:3e:4d:ec:42", "network": {"id": "230cf542-2017-47c7-972b-7bbce9acd446", "bridge": "br-int", "label": "tempest-network-smoke--657911166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap171577a9-ec", "ovs_interfaceid": "171577a9-ec3a-49f5-abe3-26f9190b3419", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.994 226833 DEBUG nova.network.os_vif_util [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:ec:42,bridge_name='br-int',has_traffic_filtering=True,id=171577a9-ec3a-49f5-abe3-26f9190b3419,network=Network(230cf542-2017-47c7-972b-7bbce9acd446),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap171577a9-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:39:25 compute-2 nova_compute[226829]: 2026-01-31 08:39:25.995 226833 DEBUG nova.objects.instance [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lazy-loading 'pci_devices' on Instance uuid 350b7ab1-25ae-4368-b319-529690bc394b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:39:26 compute-2 ceph-mon[77282]: pgmap v3193: 305 pgs: 305 active+clean; 545 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 6.0 MiB/s wr, 215 op/s
Jan 31 08:39:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3702192440' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:39:26 compute-2 nova_compute[226829]: 2026-01-31 08:39:26.159 226833 DEBUG nova.virt.libvirt.driver [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:39:26 compute-2 nova_compute[226829]:   <uuid>350b7ab1-25ae-4368-b319-529690bc394b</uuid>
Jan 31 08:39:26 compute-2 nova_compute[226829]:   <name>instance-000000bb</name>
Jan 31 08:39:26 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:39:26 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:39:26 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-gen-0-1257759087</nova:name>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:39:25</nova:creationTime>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:39:26 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:39:26 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:39:26 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:39:26 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:39:26 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:39:26 compute-2 nova_compute[226829]:         <nova:user uuid="c6968a1ee10e4e3b8651ffe0240a7e46">tempest-TestSecurityGroupsBasicOps-1014068786-project-member</nova:user>
Jan 31 08:39:26 compute-2 nova_compute[226829]:         <nova:project uuid="ba35ae24dbf3443e8a526dce39c6793b">tempest-TestSecurityGroupsBasicOps-1014068786</nova:project>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:39:26 compute-2 nova_compute[226829]:         <nova:port uuid="171577a9-ec3a-49f5-abe3-26f9190b3419">
Jan 31 08:39:26 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:39:26 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:39:26 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <system>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <entry name="serial">350b7ab1-25ae-4368-b319-529690bc394b</entry>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <entry name="uuid">350b7ab1-25ae-4368-b319-529690bc394b</entry>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     </system>
Jan 31 08:39:26 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:39:26 compute-2 nova_compute[226829]:   <os>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:   </os>
Jan 31 08:39:26 compute-2 nova_compute[226829]:   <features>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:   </features>
Jan 31 08:39:26 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:39:26 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:39:26 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/350b7ab1-25ae-4368-b319-529690bc394b_disk">
Jan 31 08:39:26 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       </source>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:39:26 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/350b7ab1-25ae-4368-b319-529690bc394b_disk.config">
Jan 31 08:39:26 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       </source>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:39:26 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:4d:ec:42"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <target dev="tap171577a9-ec"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/350b7ab1-25ae-4368-b319-529690bc394b/console.log" append="off"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <video>
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     </video>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:39:26 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:39:26 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:39:26 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:39:26 compute-2 nova_compute[226829]: </domain>
Jan 31 08:39:26 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:39:26 compute-2 nova_compute[226829]: 2026-01-31 08:39:26.160 226833 DEBUG nova.compute.manager [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Preparing to wait for external event network-vif-plugged-171577a9-ec3a-49f5-abe3-26f9190b3419 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:39:26 compute-2 nova_compute[226829]: 2026-01-31 08:39:26.161 226833 DEBUG oslo_concurrency.lockutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "350b7ab1-25ae-4368-b319-529690bc394b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:39:26 compute-2 nova_compute[226829]: 2026-01-31 08:39:26.161 226833 DEBUG oslo_concurrency.lockutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "350b7ab1-25ae-4368-b319-529690bc394b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:39:26 compute-2 nova_compute[226829]: 2026-01-31 08:39:26.161 226833 DEBUG oslo_concurrency.lockutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "350b7ab1-25ae-4368-b319-529690bc394b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:39:26 compute-2 nova_compute[226829]: 2026-01-31 08:39:26.162 226833 DEBUG nova.virt.libvirt.vif [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:39:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-gen-0-1257759087',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-gen-0-1257759087',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1014068786-ge',id=187,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOCdvGufJvehmY0MUNN+f2LFenAJ+F98ZXIzoKpeHrgeapRWpAssUxwFK09XMQHpqJ4lB7bK+FLEX2qiQudAseuiTurXLcLJluW2RUOwpkRUQR5QMmu4IAl7WV20dROKLQ==',key_name='tempest-TestSecurityGroupsBasicOps-1358644469',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba35ae24dbf3443e8a526dce39c6793b',ramdisk_id='',reservation_id='r-jrbt4824',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1014068786',owner_user_name='tempest-TestSecurityGroupsBasicOps-1014068786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:39:14Z,user_data=None,user_id='c6968a1ee10e4e3b8651ffe0240a7e46',uuid=350b7ab1-25ae-4368-b319-529690bc394b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "171577a9-ec3a-49f5-abe3-26f9190b3419", "address": "fa:16:3e:4d:ec:42", "network": {"id": "230cf542-2017-47c7-972b-7bbce9acd446", "bridge": "br-int", "label": "tempest-network-smoke--657911166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap171577a9-ec", "ovs_interfaceid": "171577a9-ec3a-49f5-abe3-26f9190b3419", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:39:26 compute-2 nova_compute[226829]: 2026-01-31 08:39:26.162 226833 DEBUG nova.network.os_vif_util [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converting VIF {"id": "171577a9-ec3a-49f5-abe3-26f9190b3419", "address": "fa:16:3e:4d:ec:42", "network": {"id": "230cf542-2017-47c7-972b-7bbce9acd446", "bridge": "br-int", "label": "tempest-network-smoke--657911166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap171577a9-ec", "ovs_interfaceid": "171577a9-ec3a-49f5-abe3-26f9190b3419", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:39:26 compute-2 nova_compute[226829]: 2026-01-31 08:39:26.163 226833 DEBUG nova.network.os_vif_util [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:ec:42,bridge_name='br-int',has_traffic_filtering=True,id=171577a9-ec3a-49f5-abe3-26f9190b3419,network=Network(230cf542-2017-47c7-972b-7bbce9acd446),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap171577a9-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:39:26 compute-2 nova_compute[226829]: 2026-01-31 08:39:26.163 226833 DEBUG os_vif [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:ec:42,bridge_name='br-int',has_traffic_filtering=True,id=171577a9-ec3a-49f5-abe3-26f9190b3419,network=Network(230cf542-2017-47c7-972b-7bbce9acd446),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap171577a9-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:39:26 compute-2 nova_compute[226829]: 2026-01-31 08:39:26.164 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:26 compute-2 nova_compute[226829]: 2026-01-31 08:39:26.164 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:39:26 compute-2 nova_compute[226829]: 2026-01-31 08:39:26.165 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:39:26 compute-2 nova_compute[226829]: 2026-01-31 08:39:26.171 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:26 compute-2 nova_compute[226829]: 2026-01-31 08:39:26.171 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap171577a9-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:39:26 compute-2 nova_compute[226829]: 2026-01-31 08:39:26.172 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap171577a9-ec, col_values=(('external_ids', {'iface-id': '171577a9-ec3a-49f5-abe3-26f9190b3419', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:ec:42', 'vm-uuid': '350b7ab1-25ae-4368-b319-529690bc394b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:39:26 compute-2 nova_compute[226829]: 2026-01-31 08:39:26.174 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:26 compute-2 nova_compute[226829]: 2026-01-31 08:39:26.176 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:39:26 compute-2 NetworkManager[48999]: <info>  [1769848766.1770] manager: (tap171577a9-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/362)
Jan 31 08:39:26 compute-2 nova_compute[226829]: 2026-01-31 08:39:26.181 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:26 compute-2 nova_compute[226829]: 2026-01-31 08:39:26.184 226833 INFO os_vif [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:ec:42,bridge_name='br-int',has_traffic_filtering=True,id=171577a9-ec3a-49f5-abe3-26f9190b3419,network=Network(230cf542-2017-47c7-972b-7bbce9acd446),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap171577a9-ec')
Jan 31 08:39:26 compute-2 nova_compute[226829]: 2026-01-31 08:39:26.273 226833 DEBUG nova.virt.libvirt.driver [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:39:26 compute-2 nova_compute[226829]: 2026-01-31 08:39:26.273 226833 DEBUG nova.virt.libvirt.driver [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:39:26 compute-2 nova_compute[226829]: 2026-01-31 08:39:26.274 226833 DEBUG nova.virt.libvirt.driver [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] No VIF found with MAC fa:16:3e:4d:ec:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:39:26 compute-2 nova_compute[226829]: 2026-01-31 08:39:26.274 226833 INFO nova.virt.libvirt.driver [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Using config drive
Jan 31 08:39:26 compute-2 nova_compute[226829]: 2026-01-31 08:39:26.310 226833 DEBUG nova.storage.rbd_utils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 350b7ab1-25ae-4368-b319-529690bc394b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:39:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:39:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:26.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:39:26 compute-2 nova_compute[226829]: 2026-01-31 08:39:26.909 226833 INFO nova.virt.libvirt.driver [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Creating config drive at /var/lib/nova/instances/350b7ab1-25ae-4368-b319-529690bc394b/disk.config
Jan 31 08:39:26 compute-2 nova_compute[226829]: 2026-01-31 08:39:26.913 226833 DEBUG oslo_concurrency.processutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/350b7ab1-25ae-4368-b319-529690bc394b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbx_z17hv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:39:27 compute-2 nova_compute[226829]: 2026-01-31 08:39:27.050 226833 DEBUG oslo_concurrency.processutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/350b7ab1-25ae-4368-b319-529690bc394b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbx_z17hv" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:39:27 compute-2 nova_compute[226829]: 2026-01-31 08:39:27.076 226833 DEBUG nova.storage.rbd_utils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 350b7ab1-25ae-4368-b319-529690bc394b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:39:27 compute-2 nova_compute[226829]: 2026-01-31 08:39:27.079 226833 DEBUG oslo_concurrency.processutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/350b7ab1-25ae-4368-b319-529690bc394b/disk.config 350b7ab1-25ae-4368-b319-529690bc394b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:39:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/239804748' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:39:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:27.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:27 compute-2 nova_compute[226829]: 2026-01-31 08:39:27.725 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:28.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:28 compute-2 nova_compute[226829]: 2026-01-31 08:39:28.425 226833 DEBUG nova.network.neutron [req-86c9efdf-ce94-43d2-9b29-0fba1c4cb9b8 req-ed1866b1-c565-4f2e-8e68-a91abada62cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Updated VIF entry in instance network info cache for port 171577a9-ec3a-49f5-abe3-26f9190b3419. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:39:28 compute-2 nova_compute[226829]: 2026-01-31 08:39:28.426 226833 DEBUG nova.network.neutron [req-86c9efdf-ce94-43d2-9b29-0fba1c4cb9b8 req-ed1866b1-c565-4f2e-8e68-a91abada62cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Updating instance_info_cache with network_info: [{"id": "171577a9-ec3a-49f5-abe3-26f9190b3419", "address": "fa:16:3e:4d:ec:42", "network": {"id": "230cf542-2017-47c7-972b-7bbce9acd446", "bridge": "br-int", "label": "tempest-network-smoke--657911166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap171577a9-ec", "ovs_interfaceid": "171577a9-ec3a-49f5-abe3-26f9190b3419", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:39:28 compute-2 nova_compute[226829]: 2026-01-31 08:39:28.520 226833 DEBUG oslo_concurrency.lockutils [req-86c9efdf-ce94-43d2-9b29-0fba1c4cb9b8 req-ed1866b1-c565-4f2e-8e68-a91abada62cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-350b7ab1-25ae-4368-b319-529690bc394b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:39:28 compute-2 ceph-mon[77282]: pgmap v3194: 305 pgs: 305 active+clean; 551 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 232 op/s
Jan 31 08:39:28 compute-2 nova_compute[226829]: 2026-01-31 08:39:28.846 226833 DEBUG oslo_concurrency.processutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/350b7ab1-25ae-4368-b319-529690bc394b/disk.config 350b7ab1-25ae-4368-b319-529690bc394b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.768s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:39:28 compute-2 nova_compute[226829]: 2026-01-31 08:39:28.847 226833 INFO nova.virt.libvirt.driver [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Deleting local config drive /var/lib/nova/instances/350b7ab1-25ae-4368-b319-529690bc394b/disk.config because it was imported into RBD.
Jan 31 08:39:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:28 compute-2 kernel: tap171577a9-ec: entered promiscuous mode
Jan 31 08:39:28 compute-2 NetworkManager[48999]: <info>  [1769848768.8861] manager: (tap171577a9-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/363)
Jan 31 08:39:28 compute-2 nova_compute[226829]: 2026-01-31 08:39:28.939 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:28 compute-2 ovn_controller[133834]: 2026-01-31T08:39:28Z|00729|binding|INFO|Claiming lport 171577a9-ec3a-49f5-abe3-26f9190b3419 for this chassis.
Jan 31 08:39:28 compute-2 systemd-udevd[312187]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:39:28 compute-2 ovn_controller[133834]: 2026-01-31T08:39:28Z|00730|binding|INFO|171577a9-ec3a-49f5-abe3-26f9190b3419: Claiming fa:16:3e:4d:ec:42 10.100.0.5
Jan 31 08:39:28 compute-2 ovn_controller[133834]: 2026-01-31T08:39:28Z|00731|binding|INFO|Setting lport 171577a9-ec3a-49f5-abe3-26f9190b3419 ovn-installed in OVS
Jan 31 08:39:28 compute-2 nova_compute[226829]: 2026-01-31 08:39:28.947 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:28 compute-2 NetworkManager[48999]: <info>  [1769848768.9536] device (tap171577a9-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:39:28 compute-2 NetworkManager[48999]: <info>  [1769848768.9542] device (tap171577a9-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:39:28 compute-2 systemd-machined[195142]: New machine qemu-84-instance-000000bb.
Jan 31 08:39:28 compute-2 systemd[1]: Started Virtual Machine qemu-84-instance-000000bb.
Jan 31 08:39:28 compute-2 ovn_controller[133834]: 2026-01-31T08:39:28Z|00732|binding|INFO|Setting lport 171577a9-ec3a-49f5-abe3-26f9190b3419 up in Southbound
Jan 31 08:39:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:28.969 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:ec:42 10.100.0.5'], port_security=['fa:16:3e:4d:ec:42 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '350b7ab1-25ae-4368-b319-529690bc394b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-230cf542-2017-47c7-972b-7bbce9acd446', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba35ae24dbf3443e8a526dce39c6793b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c6a91abb-9646-4eaf-929b-fa5a4cf8f203', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5db3dfb8-63b9-4fff-8cde-54682a3b4ba9, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=171577a9-ec3a-49f5-abe3-26f9190b3419) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:39:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:28.970 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 171577a9-ec3a-49f5-abe3-26f9190b3419 in datapath 230cf542-2017-47c7-972b-7bbce9acd446 bound to our chassis
Jan 31 08:39:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:28.972 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 230cf542-2017-47c7-972b-7bbce9acd446
Jan 31 08:39:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:28.985 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[40c34319-7ab2-4e5f-a8a5-8e981ef9dec3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:39:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:29.009 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[e71fc96e-836d-4e72-be7c-cbc78618966e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:39:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:29.012 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[383b19cd-d576-49d1-912c-e853bd9e7852]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:39:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:29.033 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[5dbe45a2-0123-44e3-8a3e-24457bc8e89c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:39:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:29.050 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7e73653e-4dbd-4363-a6ad-2187ccef7943]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap230cf542-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:38:3d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 896977, 'reachable_time': 26927, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312203, 'error': None, 'target': 'ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:39:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:29.062 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8274c7f3-991f-4296-983a-9a2b3739715a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap230cf542-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 896986, 'tstamp': 896986}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312205, 'error': None, 'target': 'ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap230cf542-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 896988, 'tstamp': 896988}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312205, 'error': None, 'target': 'ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:39:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:29.064 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap230cf542-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:39:29 compute-2 nova_compute[226829]: 2026-01-31 08:39:29.065 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:29 compute-2 nova_compute[226829]: 2026-01-31 08:39:29.066 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:29.066 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap230cf542-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:39:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:29.066 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:39:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:29.067 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap230cf542-20, col_values=(('external_ids', {'iface-id': '1c0f6b16-7de3-433c-bddb-5e27fc8effc7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:39:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:29.067 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:39:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:29.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:29 compute-2 nova_compute[226829]: 2026-01-31 08:39:29.514 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848769.5133886, 350b7ab1-25ae-4368-b319-529690bc394b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:39:29 compute-2 nova_compute[226829]: 2026-01-31 08:39:29.514 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] VM Started (Lifecycle Event)
Jan 31 08:39:29 compute-2 ceph-mon[77282]: pgmap v3195: 305 pgs: 305 active+clean; 551 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.9 MiB/s wr, 228 op/s
Jan 31 08:39:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:30.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:30 compute-2 sudo[312249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:39:30 compute-2 sudo[312249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:30 compute-2 sudo[312249]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:30 compute-2 sudo[312274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:39:30 compute-2 sudo[312274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:30 compute-2 sudo[312274]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:30 compute-2 sudo[312299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:39:30 compute-2 sudo[312299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:30 compute-2 sudo[312299]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:30 compute-2 sudo[312324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:39:30 compute-2 sudo[312324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:31 compute-2 nova_compute[226829]: 2026-01-31 08:39:31.175 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:31 compute-2 sudo[312324]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:31 compute-2 sudo[312380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:39:31 compute-2 sudo[312380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:31 compute-2 sudo[312380]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:31 compute-2 sudo[312405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:39:31 compute-2 sudo[312405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:31 compute-2 sudo[312405]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:31 compute-2 sudo[312430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:39:31 compute-2 sudo[312430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:31 compute-2 sudo[312430]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:31.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:31 compute-2 sudo[312455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- inventory --format=json-pretty --filter-for-batch
Jan 31 08:39:31 compute-2 sudo[312455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:31 compute-2 podman[312521]: 2026-01-31 08:39:31.764679505 +0000 UTC m=+0.025122792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:39:31 compute-2 ceph-mon[77282]: pgmap v3196: 305 pgs: 305 active+clean; 551 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.4 MiB/s wr, 207 op/s
Jan 31 08:39:32 compute-2 podman[312521]: 2026-01-31 08:39:32.028762411 +0000 UTC m=+0.289205668 container create 2115d5486e6f9253946efd23419364f3532093e25aed376cd31493d927a92451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ganguly, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:39:32 compute-2 systemd[1]: Started libpod-conmon-2115d5486e6f9253946efd23419364f3532093e25aed376cd31493d927a92451.scope.
Jan 31 08:39:32 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:39:32 compute-2 podman[312521]: 2026-01-31 08:39:32.122923186 +0000 UTC m=+0.383366463 container init 2115d5486e6f9253946efd23419364f3532093e25aed376cd31493d927a92451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:39:32 compute-2 podman[312521]: 2026-01-31 08:39:32.130211094 +0000 UTC m=+0.390654351 container start 2115d5486e6f9253946efd23419364f3532093e25aed376cd31493d927a92451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 08:39:32 compute-2 bold_ganguly[312538]: 167 167
Jan 31 08:39:32 compute-2 systemd[1]: libpod-2115d5486e6f9253946efd23419364f3532093e25aed376cd31493d927a92451.scope: Deactivated successfully.
Jan 31 08:39:32 compute-2 podman[312521]: 2026-01-31 08:39:32.13704492 +0000 UTC m=+0.397488177 container attach 2115d5486e6f9253946efd23419364f3532093e25aed376cd31493d927a92451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ganguly, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:39:32 compute-2 podman[312521]: 2026-01-31 08:39:32.137396049 +0000 UTC m=+0.397839306 container died 2115d5486e6f9253946efd23419364f3532093e25aed376cd31493d927a92451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ganguly, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:39:32 compute-2 systemd[1]: var-lib-containers-storage-overlay-c9a5f0c91bd947953baf5fb22379d4646d768701e42c46fd4719eaf97ed8fec4-merged.mount: Deactivated successfully.
Jan 31 08:39:32 compute-2 podman[312521]: 2026-01-31 08:39:32.183334925 +0000 UTC m=+0.443778182 container remove 2115d5486e6f9253946efd23419364f3532093e25aed376cd31493d927a92451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 08:39:32 compute-2 systemd[1]: libpod-conmon-2115d5486e6f9253946efd23419364f3532093e25aed376cd31493d927a92451.scope: Deactivated successfully.
Jan 31 08:39:32 compute-2 podman[312562]: 2026-01-31 08:39:32.30291337 +0000 UTC m=+0.039924834 container create 03bfbf572d96e6b5e5ba8e67dfc9ec68fa727ab766874ee789c2980a09e758e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:39:32 compute-2 systemd[1]: Started libpod-conmon-03bfbf572d96e6b5e5ba8e67dfc9ec68fa727ab766874ee789c2980a09e758e7.scope.
Jan 31 08:39:32 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:39:32 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfe8e5db2c82e550bd0e416b95db0d41f75481ebbeffb0ccb695cda51f7d2ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:32 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfe8e5db2c82e550bd0e416b95db0d41f75481ebbeffb0ccb695cda51f7d2ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:32 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfe8e5db2c82e550bd0e416b95db0d41f75481ebbeffb0ccb695cda51f7d2ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:32 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfe8e5db2c82e550bd0e416b95db0d41f75481ebbeffb0ccb695cda51f7d2ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:32 compute-2 podman[312562]: 2026-01-31 08:39:32.285810566 +0000 UTC m=+0.022822040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:39:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:32.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:32 compute-2 podman[312562]: 2026-01-31 08:39:32.395608025 +0000 UTC m=+0.132619509 container init 03bfbf572d96e6b5e5ba8e67dfc9ec68fa727ab766874ee789c2980a09e758e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_stonebraker, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 08:39:32 compute-2 podman[312562]: 2026-01-31 08:39:32.400400056 +0000 UTC m=+0.137411510 container start 03bfbf572d96e6b5e5ba8e67dfc9ec68fa727ab766874ee789c2980a09e758e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_stonebraker, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:39:32 compute-2 podman[312562]: 2026-01-31 08:39:32.405001 +0000 UTC m=+0.142012444 container attach 03bfbf572d96e6b5e5ba8e67dfc9ec68fa727ab766874ee789c2980a09e758e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_stonebraker, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:39:32 compute-2 nova_compute[226829]: 2026-01-31 08:39:32.727 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:32 compute-2 nova_compute[226829]: 2026-01-31 08:39:32.993 226833 DEBUG nova.compute.manager [req-10137a6a-f74c-49a9-a9f5-1cc62e1f1f78 req-5074d27d-15ce-4afe-8ff2-d851706efddf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Received event network-vif-plugged-171577a9-ec3a-49f5-abe3-26f9190b3419 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:39:32 compute-2 nova_compute[226829]: 2026-01-31 08:39:32.994 226833 DEBUG oslo_concurrency.lockutils [req-10137a6a-f74c-49a9-a9f5-1cc62e1f1f78 req-5074d27d-15ce-4afe-8ff2-d851706efddf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "350b7ab1-25ae-4368-b319-529690bc394b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:39:32 compute-2 nova_compute[226829]: 2026-01-31 08:39:32.994 226833 DEBUG oslo_concurrency.lockutils [req-10137a6a-f74c-49a9-a9f5-1cc62e1f1f78 req-5074d27d-15ce-4afe-8ff2-d851706efddf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "350b7ab1-25ae-4368-b319-529690bc394b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:39:32 compute-2 nova_compute[226829]: 2026-01-31 08:39:32.995 226833 DEBUG oslo_concurrency.lockutils [req-10137a6a-f74c-49a9-a9f5-1cc62e1f1f78 req-5074d27d-15ce-4afe-8ff2-d851706efddf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "350b7ab1-25ae-4368-b319-529690bc394b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:39:32 compute-2 nova_compute[226829]: 2026-01-31 08:39:32.995 226833 DEBUG nova.compute.manager [req-10137a6a-f74c-49a9-a9f5-1cc62e1f1f78 req-5074d27d-15ce-4afe-8ff2-d851706efddf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Processing event network-vif-plugged-171577a9-ec3a-49f5-abe3-26f9190b3419 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:39:32 compute-2 nova_compute[226829]: 2026-01-31 08:39:32.996 226833 DEBUG nova.compute.manager [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:39:33 compute-2 nova_compute[226829]: 2026-01-31 08:39:33.001 226833 DEBUG nova.virt.libvirt.driver [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:39:33 compute-2 nova_compute[226829]: 2026-01-31 08:39:33.004 226833 INFO nova.virt.libvirt.driver [-] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Instance spawned successfully.
Jan 31 08:39:33 compute-2 nova_compute[226829]: 2026-01-31 08:39:33.005 226833 DEBUG nova.virt.libvirt.driver [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:39:33 compute-2 nova_compute[226829]: 2026-01-31 08:39:33.127 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:39:33 compute-2 nova_compute[226829]: 2026-01-31 08:39:33.130 226833 DEBUG nova.virt.libvirt.driver [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:39:33 compute-2 nova_compute[226829]: 2026-01-31 08:39:33.130 226833 DEBUG nova.virt.libvirt.driver [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:39:33 compute-2 nova_compute[226829]: 2026-01-31 08:39:33.131 226833 DEBUG nova.virt.libvirt.driver [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:39:33 compute-2 nova_compute[226829]: 2026-01-31 08:39:33.131 226833 DEBUG nova.virt.libvirt.driver [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:39:33 compute-2 nova_compute[226829]: 2026-01-31 08:39:33.131 226833 DEBUG nova.virt.libvirt.driver [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:39:33 compute-2 nova_compute[226829]: 2026-01-31 08:39:33.132 226833 DEBUG nova.virt.libvirt.driver [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:39:33 compute-2 nova_compute[226829]: 2026-01-31 08:39:33.135 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:39:33 compute-2 nova_compute[226829]: 2026-01-31 08:39:33.211 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:39:33 compute-2 nova_compute[226829]: 2026-01-31 08:39:33.212 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848769.5142868, 350b7ab1-25ae-4368-b319-529690bc394b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:39:33 compute-2 nova_compute[226829]: 2026-01-31 08:39:33.212 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] VM Paused (Lifecycle Event)
Jan 31 08:39:33 compute-2 nova_compute[226829]: 2026-01-31 08:39:33.272 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:39:33 compute-2 nova_compute[226829]: 2026-01-31 08:39:33.277 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848773.0003667, 350b7ab1-25ae-4368-b319-529690bc394b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:39:33 compute-2 nova_compute[226829]: 2026-01-31 08:39:33.277 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] VM Resumed (Lifecycle Event)
Jan 31 08:39:33 compute-2 nova_compute[226829]: 2026-01-31 08:39:33.307 226833 INFO nova.compute.manager [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Took 18.31 seconds to spawn the instance on the hypervisor.
Jan 31 08:39:33 compute-2 nova_compute[226829]: 2026-01-31 08:39:33.307 226833 DEBUG nova.compute.manager [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:39:33 compute-2 nova_compute[226829]: 2026-01-31 08:39:33.325 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:39:33 compute-2 nova_compute[226829]: 2026-01-31 08:39:33.330 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:39:33 compute-2 nova_compute[226829]: 2026-01-31 08:39:33.398 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:39:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:33.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:33 compute-2 nova_compute[226829]: 2026-01-31 08:39:33.451 226833 INFO nova.compute.manager [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Took 20.96 seconds to build instance.
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]: [
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:     {
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:         "available": false,
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:         "ceph_device": false,
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:         "lsm_data": {},
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:         "lvs": [],
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:         "path": "/dev/sr0",
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:         "rejected_reasons": [
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:             "Has a FileSystem",
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:             "Insufficient space (<5GB)"
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:         ],
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:         "sys_api": {
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:             "actuators": null,
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:             "device_nodes": "sr0",
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:             "devname": "sr0",
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:             "human_readable_size": "482.00 KB",
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:             "id_bus": "ata",
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:             "model": "QEMU DVD-ROM",
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:             "nr_requests": "2",
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:             "parent": "/dev/sr0",
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:             "partitions": {},
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:             "path": "/dev/sr0",
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:             "removable": "1",
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:             "rev": "2.5+",
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:             "ro": "0",
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:             "rotational": "1",
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:             "sas_address": "",
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:             "sas_device_handle": "",
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:             "scheduler_mode": "mq-deadline",
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:             "sectors": 0,
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:             "sectorsize": "2048",
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:             "size": 493568.0,
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:             "support_discard": "2048",
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:             "type": "disk",
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:             "vendor": "QEMU"
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:         }
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]:     }
Jan 31 08:39:33 compute-2 admiring_stonebraker[312578]: ]
Jan 31 08:39:33 compute-2 systemd[1]: libpod-03bfbf572d96e6b5e5ba8e67dfc9ec68fa727ab766874ee789c2980a09e758e7.scope: Deactivated successfully.
Jan 31 08:39:33 compute-2 podman[312562]: 2026-01-31 08:39:33.515146754 +0000 UTC m=+1.252158228 container died 03bfbf572d96e6b5e5ba8e67dfc9ec68fa727ab766874ee789c2980a09e758e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_stonebraker, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:39:33 compute-2 systemd[1]: libpod-03bfbf572d96e6b5e5ba8e67dfc9ec68fa727ab766874ee789c2980a09e758e7.scope: Consumed 1.088s CPU time.
Jan 31 08:39:33 compute-2 systemd[1]: var-lib-containers-storage-overlay-4bfe8e5db2c82e550bd0e416b95db0d41f75481ebbeffb0ccb695cda51f7d2ee-merged.mount: Deactivated successfully.
Jan 31 08:39:33 compute-2 nova_compute[226829]: 2026-01-31 08:39:33.554 226833 DEBUG oslo_concurrency.lockutils [None req-83771858-f3dd-43ab-95a7-69374e8e69d0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "350b7ab1-25ae-4368-b319-529690bc394b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.238s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:39:33 compute-2 podman[312562]: 2026-01-31 08:39:33.565419068 +0000 UTC m=+1.302430512 container remove 03bfbf572d96e6b5e5ba8e67dfc9ec68fa727ab766874ee789c2980a09e758e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 08:39:33 compute-2 systemd[1]: libpod-conmon-03bfbf572d96e6b5e5ba8e67dfc9ec68fa727ab766874ee789c2980a09e758e7.scope: Deactivated successfully.
Jan 31 08:39:33 compute-2 sudo[312455]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:34.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:34 compute-2 ceph-mon[77282]: pgmap v3197: 305 pgs: 305 active+clean; 555 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.2 MiB/s wr, 168 op/s
Jan 31 08:39:34 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:39:35 compute-2 nova_compute[226829]: 2026-01-31 08:39:35.290 226833 DEBUG nova.compute.manager [req-844653c9-a213-4278-8044-7a4be3d8eb1b req-fa097f53-20ef-4da7-828e-605c37c26d3c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Received event network-vif-plugged-171577a9-ec3a-49f5-abe3-26f9190b3419 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:39:35 compute-2 nova_compute[226829]: 2026-01-31 08:39:35.290 226833 DEBUG oslo_concurrency.lockutils [req-844653c9-a213-4278-8044-7a4be3d8eb1b req-fa097f53-20ef-4da7-828e-605c37c26d3c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "350b7ab1-25ae-4368-b319-529690bc394b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:39:35 compute-2 nova_compute[226829]: 2026-01-31 08:39:35.291 226833 DEBUG oslo_concurrency.lockutils [req-844653c9-a213-4278-8044-7a4be3d8eb1b req-fa097f53-20ef-4da7-828e-605c37c26d3c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "350b7ab1-25ae-4368-b319-529690bc394b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:39:35 compute-2 nova_compute[226829]: 2026-01-31 08:39:35.291 226833 DEBUG oslo_concurrency.lockutils [req-844653c9-a213-4278-8044-7a4be3d8eb1b req-fa097f53-20ef-4da7-828e-605c37c26d3c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "350b7ab1-25ae-4368-b319-529690bc394b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:39:35 compute-2 nova_compute[226829]: 2026-01-31 08:39:35.292 226833 DEBUG nova.compute.manager [req-844653c9-a213-4278-8044-7a4be3d8eb1b req-fa097f53-20ef-4da7-828e-605c37c26d3c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] No waiting events found dispatching network-vif-plugged-171577a9-ec3a-49f5-abe3-26f9190b3419 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:39:35 compute-2 nova_compute[226829]: 2026-01-31 08:39:35.292 226833 WARNING nova.compute.manager [req-844653c9-a213-4278-8044-7a4be3d8eb1b req-fa097f53-20ef-4da7-828e-605c37c26d3c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Received unexpected event network-vif-plugged-171577a9-ec3a-49f5-abe3-26f9190b3419 for instance with vm_state active and task_state None.
Jan 31 08:39:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 31 08:39:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:35.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 31 08:39:35 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:39:35 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:39:35 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:39:35 compute-2 ceph-mon[77282]: pgmap v3198: 305 pgs: 305 active+clean; 562 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.9 MiB/s wr, 126 op/s
Jan 31 08:39:35 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:39:35 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:39:35 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:39:35 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:39:36 compute-2 nova_compute[226829]: 2026-01-31 08:39:36.186 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:36 compute-2 sudo[313802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:39:36 compute-2 sudo[313802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:36 compute-2 sudo[313802]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:36 compute-2 sudo[313827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:39:36 compute-2 sudo[313827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:36 compute-2 sudo[313827]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:36.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:37.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:37 compute-2 nova_compute[226829]: 2026-01-31 08:39:37.730 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:38 compute-2 ceph-mon[77282]: pgmap v3199: 305 pgs: 305 active+clean; 569 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Jan 31 08:39:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:38.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:38 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #151. Immutable memtables: 0.
Jan 31 08:39:38 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:39:38.877083) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:39:38 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 151
Jan 31 08:39:38 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848778877197, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 1841, "num_deletes": 256, "total_data_size": 4403869, "memory_usage": 4467840, "flush_reason": "Manual Compaction"}
Jan 31 08:39:38 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #152: started
Jan 31 08:39:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:38 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848778902620, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 152, "file_size": 2841736, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74345, "largest_seqno": 76181, "table_properties": {"data_size": 2834087, "index_size": 4527, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16107, "raw_average_key_size": 20, "raw_value_size": 2818706, "raw_average_value_size": 3510, "num_data_blocks": 198, "num_entries": 803, "num_filter_entries": 803, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848630, "oldest_key_time": 1769848630, "file_creation_time": 1769848778, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:39:38 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 25603 microseconds, and 5597 cpu microseconds.
Jan 31 08:39:38 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:39:38 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:39:38.902693) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #152: 2841736 bytes OK
Jan 31 08:39:38 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:39:38.902712) [db/memtable_list.cc:519] [default] Level-0 commit table #152 started
Jan 31 08:39:38 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:39:38.904680) [db/memtable_list.cc:722] [default] Level-0 commit table #152: memtable #1 done
Jan 31 08:39:38 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:39:38.904692) EVENT_LOG_v1 {"time_micros": 1769848778904688, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:39:38 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:39:38.904743) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:39:38 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 4395562, prev total WAL file size 4395562, number of live WAL files 2.
Jan 31 08:39:38 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000148.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:39:38 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:39:38.905606) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373633' seq:72057594037927935, type:22 .. '6C6F676D0033303135' seq:0, type:0; will stop at (end)
Jan 31 08:39:38 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:39:38 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [152(2775KB)], [150(11MB)]
Jan 31 08:39:38 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848778905684, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [152], "files_L6": [150], "score": -1, "input_data_size": 15126626, "oldest_snapshot_seqno": -1}
Jan 31 08:39:39 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #153: 9890 keys, 14972155 bytes, temperature: kUnknown
Jan 31 08:39:39 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848779034864, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 153, "file_size": 14972155, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14905703, "index_size": 40611, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24773, "raw_key_size": 260101, "raw_average_key_size": 26, "raw_value_size": 14729864, "raw_average_value_size": 1489, "num_data_blocks": 1564, "num_entries": 9890, "num_filter_entries": 9890, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769848778, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 153, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:39:39 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:39:39 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:39:39.035198) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 14972155 bytes
Jan 31 08:39:39 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:39:39.036330) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.0 rd, 115.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 11.7 +0.0 blob) out(14.3 +0.0 blob), read-write-amplify(10.6) write-amplify(5.3) OK, records in: 10423, records dropped: 533 output_compression: NoCompression
Jan 31 08:39:39 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:39:39.036348) EVENT_LOG_v1 {"time_micros": 1769848779036339, "job": 96, "event": "compaction_finished", "compaction_time_micros": 129289, "compaction_time_cpu_micros": 36821, "output_level": 6, "num_output_files": 1, "total_output_size": 14972155, "num_input_records": 10423, "num_output_records": 9890, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:39:39 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:39:39 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848779036663, "job": 96, "event": "table_file_deletion", "file_number": 152}
Jan 31 08:39:39 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000150.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:39:39 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848779037526, "job": 96, "event": "table_file_deletion", "file_number": 150}
Jan 31 08:39:39 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:39:38.905394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:39:39 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:39:39.037554) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:39:39 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:39:39.037559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:39:39 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:39:39.037561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:39:39 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:39:39.037563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:39:39 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:39:39.037565) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:39:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:39.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:39 compute-2 ceph-mon[77282]: pgmap v3200: 305 pgs: 305 active+clean; 577 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Jan 31 08:39:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:39:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:40.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:39:41 compute-2 podman[313854]: 2026-01-31 08:39:41.1776427 +0000 UTC m=+0.065499308 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:39:41 compute-2 nova_compute[226829]: 2026-01-31 08:39:41.188 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:41.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:42.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:42 compute-2 nova_compute[226829]: 2026-01-31 08:39:42.731 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:42 compute-2 ceph-mon[77282]: pgmap v3201: 305 pgs: 305 active+clean; 584 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Jan 31 08:39:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:39:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:43.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:39:43 compute-2 sudo[313881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:39:43 compute-2 sudo[313881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:43 compute-2 sudo[313881]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:43 compute-2 sudo[313906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:39:43 compute-2 sudo[313906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:43 compute-2 sudo[313906]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:44.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:39:45 compute-2 ceph-mon[77282]: pgmap v3202: 305 pgs: 305 active+clean; 584 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 141 op/s
Jan 31 08:39:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:39:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:45.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:46 compute-2 nova_compute[226829]: 2026-01-31 08:39:46.190 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:46.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:47 compute-2 podman[313933]: 2026-01-31 08:39:47.171381815 +0000 UTC m=+0.052192367 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 08:39:47 compute-2 ceph-mon[77282]: pgmap v3203: 305 pgs: 305 active+clean; 586 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Jan 31 08:39:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:39:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:47.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:39:47 compute-2 nova_compute[226829]: 2026-01-31 08:39:47.734 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:48 compute-2 ceph-mon[77282]: pgmap v3204: 305 pgs: 305 active+clean; 586 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.1 MiB/s wr, 91 op/s
Jan 31 08:39:48 compute-2 ovn_controller[133834]: 2026-01-31T08:39:48Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4d:ec:42 10.100.0.5
Jan 31 08:39:48 compute-2 ovn_controller[133834]: 2026-01-31T08:39:48Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4d:ec:42 10.100.0.5
Jan 31 08:39:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:39:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:48.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:39:48 compute-2 nova_compute[226829]: 2026-01-31 08:39:48.572 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:49 compute-2 ceph-mon[77282]: pgmap v3205: 305 pgs: 305 active+clean; 587 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 854 KiB/s wr, 65 op/s
Jan 31 08:39:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:39:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:49.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:39:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:49 compute-2 nova_compute[226829]: 2026-01-31 08:39:49.612 226833 DEBUG nova.compute.manager [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Jan 31 08:39:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:50.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:50 compute-2 nova_compute[226829]: 2026-01-31 08:39:50.659 226833 DEBUG oslo_concurrency.lockutils [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:39:50 compute-2 nova_compute[226829]: 2026-01-31 08:39:50.659 226833 DEBUG oslo_concurrency.lockutils [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:39:50 compute-2 nova_compute[226829]: 2026-01-31 08:39:50.986 226833 DEBUG nova.objects.instance [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'pci_requests' on Instance uuid 6193a6a7-8918-4625-b626-5d53c31bf0ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:39:51 compute-2 nova_compute[226829]: 2026-01-31 08:39:51.174 226833 DEBUG nova.virt.hardware [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:39:51 compute-2 nova_compute[226829]: 2026-01-31 08:39:51.174 226833 INFO nova.compute.claims [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:39:51 compute-2 nova_compute[226829]: 2026-01-31 08:39:51.175 226833 DEBUG nova.objects.instance [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'resources' on Instance uuid 6193a6a7-8918-4625-b626-5d53c31bf0ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:39:51 compute-2 nova_compute[226829]: 2026-01-31 08:39:51.192 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:51 compute-2 nova_compute[226829]: 2026-01-31 08:39:51.345 226833 DEBUG nova.objects.instance [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'pci_devices' on Instance uuid 6193a6a7-8918-4625-b626-5d53c31bf0ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:39:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:51.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:51 compute-2 ceph-mon[77282]: pgmap v3206: 305 pgs: 305 active+clean; 606 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 319 KiB/s rd, 1.9 MiB/s wr, 65 op/s
Jan 31 08:39:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/153502525' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:39:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/153502525' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:39:51 compute-2 nova_compute[226829]: 2026-01-31 08:39:51.737 226833 INFO nova.compute.resource_tracker [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Updating resource usage from migration cf69e0cf-b556-4b51-a26d-2a27f3defe82
Jan 31 08:39:51 compute-2 nova_compute[226829]: 2026-01-31 08:39:51.738 226833 DEBUG nova.compute.resource_tracker [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Starting to track incoming migration cf69e0cf-b556-4b51-a26d-2a27f3defe82 with flavor e3bd1dad-95f3-4ed9-94b4-27245cd798b5 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 31 08:39:51 compute-2 nova_compute[226829]: 2026-01-31 08:39:51.988 226833 DEBUG oslo_concurrency.processutils [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:39:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:39:52 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/339453851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:39:52 compute-2 nova_compute[226829]: 2026-01-31 08:39:52.398 226833 DEBUG oslo_concurrency.processutils [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:39:52 compute-2 nova_compute[226829]: 2026-01-31 08:39:52.404 226833 DEBUG nova.compute.provider_tree [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:39:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:52.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:52 compute-2 nova_compute[226829]: 2026-01-31 08:39:52.553 226833 DEBUG nova.scheduler.client.report [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:39:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/339453851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:39:52 compute-2 nova_compute[226829]: 2026-01-31 08:39:52.743 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:39:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2237009316' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:39:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:39:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2237009316' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:39:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:39:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:53.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:39:53 compute-2 nova_compute[226829]: 2026-01-31 08:39:53.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:53 compute-2 nova_compute[226829]: 2026-01-31 08:39:53.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:39:53 compute-2 nova_compute[226829]: 2026-01-31 08:39:53.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:39:53 compute-2 ceph-mon[77282]: pgmap v3207: 305 pgs: 305 active+clean; 615 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 295 KiB/s rd, 2.3 MiB/s wr, 67 op/s
Jan 31 08:39:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2237009316' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:39:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2237009316' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:39:54 compute-2 nova_compute[226829]: 2026-01-31 08:39:54.068 226833 DEBUG oslo_concurrency.lockutils [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 3.408s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:39:54 compute-2 nova_compute[226829]: 2026-01-31 08:39:54.068 226833 INFO nova.compute.manager [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Migrating
Jan 31 08:39:54 compute-2 nova_compute[226829]: 2026-01-31 08:39:54.166 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 31 08:39:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:54.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:54 compute-2 nova_compute[226829]: 2026-01-31 08:39:54.567 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-da76f9e6-a924-4a04-855a-2764b3edc1a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:39:54 compute-2 nova_compute[226829]: 2026-01-31 08:39:54.567 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-da76f9e6-a924-4a04-855a-2764b3edc1a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:39:54 compute-2 nova_compute[226829]: 2026-01-31 08:39:54.568 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:39:54 compute-2 nova_compute[226829]: 2026-01-31 08:39:54.568 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid da76f9e6-a924-4a04-855a-2764b3edc1a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:39:55 compute-2 nova_compute[226829]: 2026-01-31 08:39:55.170 226833 DEBUG oslo_concurrency.lockutils [None req-05735e8c-b5f6-4ce6-8a72-b25d74c452c9 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "350b7ab1-25ae-4368-b319-529690bc394b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:39:55 compute-2 nova_compute[226829]: 2026-01-31 08:39:55.171 226833 DEBUG oslo_concurrency.lockutils [None req-05735e8c-b5f6-4ce6-8a72-b25d74c452c9 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "350b7ab1-25ae-4368-b319-529690bc394b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:39:55 compute-2 nova_compute[226829]: 2026-01-31 08:39:55.171 226833 DEBUG oslo_concurrency.lockutils [None req-05735e8c-b5f6-4ce6-8a72-b25d74c452c9 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "350b7ab1-25ae-4368-b319-529690bc394b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:39:55 compute-2 nova_compute[226829]: 2026-01-31 08:39:55.172 226833 DEBUG oslo_concurrency.lockutils [None req-05735e8c-b5f6-4ce6-8a72-b25d74c452c9 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "350b7ab1-25ae-4368-b319-529690bc394b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:39:55 compute-2 nova_compute[226829]: 2026-01-31 08:39:55.172 226833 DEBUG oslo_concurrency.lockutils [None req-05735e8c-b5f6-4ce6-8a72-b25d74c452c9 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "350b7ab1-25ae-4368-b319-529690bc394b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:39:55 compute-2 nova_compute[226829]: 2026-01-31 08:39:55.173 226833 INFO nova.compute.manager [None req-05735e8c-b5f6-4ce6-8a72-b25d74c452c9 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Terminating instance
Jan 31 08:39:55 compute-2 nova_compute[226829]: 2026-01-31 08:39:55.174 226833 DEBUG nova.compute.manager [None req-05735e8c-b5f6-4ce6-8a72-b25d74c452c9 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:39:55 compute-2 kernel: tap171577a9-ec (unregistering): left promiscuous mode
Jan 31 08:39:55 compute-2 NetworkManager[48999]: <info>  [1769848795.2275] device (tap171577a9-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:39:55 compute-2 ovn_controller[133834]: 2026-01-31T08:39:55Z|00733|binding|INFO|Releasing lport 171577a9-ec3a-49f5-abe3-26f9190b3419 from this chassis (sb_readonly=0)
Jan 31 08:39:55 compute-2 ovn_controller[133834]: 2026-01-31T08:39:55Z|00734|binding|INFO|Setting lport 171577a9-ec3a-49f5-abe3-26f9190b3419 down in Southbound
Jan 31 08:39:55 compute-2 nova_compute[226829]: 2026-01-31 08:39:55.271 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:55 compute-2 ovn_controller[133834]: 2026-01-31T08:39:55Z|00735|binding|INFO|Removing iface tap171577a9-ec ovn-installed in OVS
Jan 31 08:39:55 compute-2 nova_compute[226829]: 2026-01-31 08:39:55.273 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:55 compute-2 nova_compute[226829]: 2026-01-31 08:39:55.277 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:55 compute-2 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000bb.scope: Deactivated successfully.
Jan 31 08:39:55 compute-2 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000bb.scope: Consumed 13.212s CPU time.
Jan 31 08:39:55 compute-2 systemd-machined[195142]: Machine qemu-84-instance-000000bb terminated.
Jan 31 08:39:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:55.380 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:ec:42 10.100.0.5'], port_security=['fa:16:3e:4d:ec:42 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '350b7ab1-25ae-4368-b319-529690bc394b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-230cf542-2017-47c7-972b-7bbce9acd446', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba35ae24dbf3443e8a526dce39c6793b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c6a91abb-9646-4eaf-929b-fa5a4cf8f203', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5db3dfb8-63b9-4fff-8cde-54682a3b4ba9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=171577a9-ec3a-49f5-abe3-26f9190b3419) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:39:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:55.381 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 171577a9-ec3a-49f5-abe3-26f9190b3419 in datapath 230cf542-2017-47c7-972b-7bbce9acd446 unbound from our chassis
Jan 31 08:39:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:55.382 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 230cf542-2017-47c7-972b-7bbce9acd446
Jan 31 08:39:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:55.392 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[fafbe7d7-639f-4230-87dd-1cd8edf47603]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:39:55 compute-2 nova_compute[226829]: 2026-01-31 08:39:55.405 226833 INFO nova.virt.libvirt.driver [-] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Instance destroyed successfully.
Jan 31 08:39:55 compute-2 nova_compute[226829]: 2026-01-31 08:39:55.407 226833 DEBUG nova.objects.instance [None req-05735e8c-b5f6-4ce6-8a72-b25d74c452c9 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lazy-loading 'resources' on Instance uuid 350b7ab1-25ae-4368-b319-529690bc394b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:39:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:55.413 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[332d1607-fe74-4ee1-a667-362e927677b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:39:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:55.417 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[03053221-1f6c-4b44-954a-5d113ebd4e2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:39:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:55.435 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[5dd08c54-ffb7-42f9-bc0b-91d42b490b6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:39:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:55.447 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9eae5930-c98b-42ea-af61-9b65debb03df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap230cf542-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:38:3d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 896977, 'reachable_time': 26927, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314003, 'error': None, 'target': 'ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:39:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:55.457 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[75b3b47e-50c3-4f40-ae20-5382afd8808f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap230cf542-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 896986, 'tstamp': 896986}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314004, 'error': None, 'target': 'ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap230cf542-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 896988, 'tstamp': 896988}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314004, 'error': None, 'target': 'ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:39:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:55.459 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap230cf542-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:39:55 compute-2 nova_compute[226829]: 2026-01-31 08:39:55.460 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:55 compute-2 nova_compute[226829]: 2026-01-31 08:39:55.463 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:55.464 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap230cf542-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:39:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:55.464 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:39:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:55.464 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap230cf542-20, col_values=(('external_ids', {'iface-id': '1c0f6b16-7de3-433c-bddb-5e27fc8effc7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:39:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:39:55.465 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:39:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:55.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:55 compute-2 nova_compute[226829]: 2026-01-31 08:39:55.630 226833 DEBUG nova.virt.libvirt.vif [None req-05735e8c-b5f6-4ce6-8a72-b25d74c452c9 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:39:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-gen-0-1257759087',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-gen-0-1257759087',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1014068786-ge',id=187,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOCdvGufJvehmY0MUNN+f2LFenAJ+F98ZXIzoKpeHrgeapRWpAssUxwFK09XMQHpqJ4lB7bK+FLEX2qiQudAseuiTurXLcLJluW2RUOwpkRUQR5QMmu4IAl7WV20dROKLQ==',key_name='tempest-TestSecurityGroupsBasicOps-1358644469',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:39:33Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ba35ae24dbf3443e8a526dce39c6793b',ramdisk_id='',reservation_id='r-jrbt4824',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1014068786',owner_user_name='tempest-TestSecurityGroupsBasicOps-1014068786-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:39:33Z,user_data=None,user_id='c6968a1ee10e4e3b8651ffe0240a7e46',uuid=350b7ab1-25ae-4368-b319-529690bc394b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "171577a9-ec3a-49f5-abe3-26f9190b3419", "address": "fa:16:3e:4d:ec:42", "network": {"id": "230cf542-2017-47c7-972b-7bbce9acd446", "bridge": "br-int", "label": "tempest-network-smoke--657911166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap171577a9-ec", "ovs_interfaceid": "171577a9-ec3a-49f5-abe3-26f9190b3419", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:39:55 compute-2 nova_compute[226829]: 2026-01-31 08:39:55.631 226833 DEBUG nova.network.os_vif_util [None req-05735e8c-b5f6-4ce6-8a72-b25d74c452c9 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converting VIF {"id": "171577a9-ec3a-49f5-abe3-26f9190b3419", "address": "fa:16:3e:4d:ec:42", "network": {"id": "230cf542-2017-47c7-972b-7bbce9acd446", "bridge": "br-int", "label": "tempest-network-smoke--657911166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap171577a9-ec", "ovs_interfaceid": "171577a9-ec3a-49f5-abe3-26f9190b3419", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:39:55 compute-2 nova_compute[226829]: 2026-01-31 08:39:55.631 226833 DEBUG nova.network.os_vif_util [None req-05735e8c-b5f6-4ce6-8a72-b25d74c452c9 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:ec:42,bridge_name='br-int',has_traffic_filtering=True,id=171577a9-ec3a-49f5-abe3-26f9190b3419,network=Network(230cf542-2017-47c7-972b-7bbce9acd446),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap171577a9-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:39:55 compute-2 nova_compute[226829]: 2026-01-31 08:39:55.632 226833 DEBUG os_vif [None req-05735e8c-b5f6-4ce6-8a72-b25d74c452c9 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:ec:42,bridge_name='br-int',has_traffic_filtering=True,id=171577a9-ec3a-49f5-abe3-26f9190b3419,network=Network(230cf542-2017-47c7-972b-7bbce9acd446),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap171577a9-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:39:55 compute-2 nova_compute[226829]: 2026-01-31 08:39:55.634 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:55 compute-2 nova_compute[226829]: 2026-01-31 08:39:55.634 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap171577a9-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:39:55 compute-2 nova_compute[226829]: 2026-01-31 08:39:55.635 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:55 compute-2 nova_compute[226829]: 2026-01-31 08:39:55.637 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:55 compute-2 nova_compute[226829]: 2026-01-31 08:39:55.639 226833 INFO os_vif [None req-05735e8c-b5f6-4ce6-8a72-b25d74c452c9 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:ec:42,bridge_name='br-int',has_traffic_filtering=True,id=171577a9-ec3a-49f5-abe3-26f9190b3419,network=Network(230cf542-2017-47c7-972b-7bbce9acd446),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap171577a9-ec')
Jan 31 08:39:55 compute-2 ceph-mon[77282]: pgmap v3208: 305 pgs: 305 active+clean; 619 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 282 KiB/s rd, 2.5 MiB/s wr, 82 op/s
Jan 31 08:39:56 compute-2 nova_compute[226829]: 2026-01-31 08:39:56.020 226833 INFO nova.virt.libvirt.driver [None req-05735e8c-b5f6-4ce6-8a72-b25d74c452c9 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Deleting instance files /var/lib/nova/instances/350b7ab1-25ae-4368-b319-529690bc394b_del
Jan 31 08:39:56 compute-2 nova_compute[226829]: 2026-01-31 08:39:56.020 226833 INFO nova.virt.libvirt.driver [None req-05735e8c-b5f6-4ce6-8a72-b25d74c452c9 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Deletion of /var/lib/nova/instances/350b7ab1-25ae-4368-b319-529690bc394b_del complete
Jan 31 08:39:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:56.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:56 compute-2 sudo[314026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:39:56 compute-2 sudo[314026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:56 compute-2 sudo[314026]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:56 compute-2 sudo[314051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:39:56 compute-2 sudo[314051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:56 compute-2 sudo[314051]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:56 compute-2 nova_compute[226829]: 2026-01-31 08:39:56.556 226833 DEBUG nova.compute.manager [req-900e1398-2b11-4a63-aa31-5e6369477a58 req-923fb897-b680-462d-a640-7a88c0728deb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Received event network-vif-unplugged-171577a9-ec3a-49f5-abe3-26f9190b3419 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:39:56 compute-2 nova_compute[226829]: 2026-01-31 08:39:56.557 226833 DEBUG oslo_concurrency.lockutils [req-900e1398-2b11-4a63-aa31-5e6369477a58 req-923fb897-b680-462d-a640-7a88c0728deb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "350b7ab1-25ae-4368-b319-529690bc394b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:39:56 compute-2 nova_compute[226829]: 2026-01-31 08:39:56.557 226833 DEBUG oslo_concurrency.lockutils [req-900e1398-2b11-4a63-aa31-5e6369477a58 req-923fb897-b680-462d-a640-7a88c0728deb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "350b7ab1-25ae-4368-b319-529690bc394b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:39:56 compute-2 nova_compute[226829]: 2026-01-31 08:39:56.558 226833 DEBUG oslo_concurrency.lockutils [req-900e1398-2b11-4a63-aa31-5e6369477a58 req-923fb897-b680-462d-a640-7a88c0728deb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "350b7ab1-25ae-4368-b319-529690bc394b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:39:56 compute-2 nova_compute[226829]: 2026-01-31 08:39:56.558 226833 DEBUG nova.compute.manager [req-900e1398-2b11-4a63-aa31-5e6369477a58 req-923fb897-b680-462d-a640-7a88c0728deb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] No waiting events found dispatching network-vif-unplugged-171577a9-ec3a-49f5-abe3-26f9190b3419 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:39:56 compute-2 nova_compute[226829]: 2026-01-31 08:39:56.558 226833 DEBUG nova.compute.manager [req-900e1398-2b11-4a63-aa31-5e6369477a58 req-923fb897-b680-462d-a640-7a88c0728deb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Received event network-vif-unplugged-171577a9-ec3a-49f5-abe3-26f9190b3419 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:39:56 compute-2 nova_compute[226829]: 2026-01-31 08:39:56.559 226833 INFO nova.compute.manager [None req-05735e8c-b5f6-4ce6-8a72-b25d74c452c9 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Took 1.39 seconds to destroy the instance on the hypervisor.
Jan 31 08:39:56 compute-2 nova_compute[226829]: 2026-01-31 08:39:56.559 226833 DEBUG oslo.service.loopingcall [None req-05735e8c-b5f6-4ce6-8a72-b25d74c452c9 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:39:56 compute-2 nova_compute[226829]: 2026-01-31 08:39:56.560 226833 DEBUG nova.compute.manager [-] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:39:56 compute-2 nova_compute[226829]: 2026-01-31 08:39:56.560 226833 DEBUG nova.network.neutron [-] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:39:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:57.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:57 compute-2 nova_compute[226829]: 2026-01-31 08:39:57.791 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:58 compute-2 ceph-mon[77282]: pgmap v3209: 305 pgs: 305 active+clean; 619 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 282 KiB/s rd, 2.3 MiB/s wr, 83 op/s
Jan 31 08:39:58 compute-2 nova_compute[226829]: 2026-01-31 08:39:58.250 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Updating instance_info_cache with network_info: [{"id": "0407be49-1c64-4010-bd1c-9a273b819442", "address": "fa:16:3e:fe:30:22", "network": {"id": "230cf542-2017-47c7-972b-7bbce9acd446", "bridge": "br-int", "label": "tempest-network-smoke--657911166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0407be49-1c", "ovs_interfaceid": "0407be49-1c64-4010-bd1c-9a273b819442", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:39:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:58.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:58 compute-2 nova_compute[226829]: 2026-01-31 08:39:58.628 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-da76f9e6-a924-4a04-855a-2764b3edc1a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:39:58 compute-2 nova_compute[226829]: 2026-01-31 08:39:58.628 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:39:58 compute-2 nova_compute[226829]: 2026-01-31 08:39:58.628 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:58 compute-2 nova_compute[226829]: 2026-01-31 08:39:58.629 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:58 compute-2 nova_compute[226829]: 2026-01-31 08:39:58.629 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:58 compute-2 nova_compute[226829]: 2026-01-31 08:39:58.629 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:58 compute-2 nova_compute[226829]: 2026-01-31 08:39:58.895 226833 DEBUG nova.network.neutron [-] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:39:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:39:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:39:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:59.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:39:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:59 compute-2 sshd-session[314077]: Accepted publickey for nova from 192.168.122.101 port 54842 ssh2: ECDSA SHA256:x674mWemszn5UyYA1PQSm9fK8+OEaBfRnNSUktYnOE0
Jan 31 08:39:59 compute-2 systemd-logind[801]: New session 63 of user nova.
Jan 31 08:39:59 compute-2 systemd[1]: Created slice User Slice of UID 42436.
Jan 31 08:39:59 compute-2 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 31 08:39:59 compute-2 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 31 08:39:59 compute-2 systemd[1]: Starting User Manager for UID 42436...
Jan 31 08:39:59 compute-2 systemd[314081]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 31 08:39:59 compute-2 nova_compute[226829]: 2026-01-31 08:39:59.612 226833 INFO nova.compute.manager [-] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Took 3.05 seconds to deallocate network for instance.
Jan 31 08:39:59 compute-2 systemd[314081]: Queued start job for default target Main User Target.
Jan 31 08:39:59 compute-2 systemd[314081]: Created slice User Application Slice.
Jan 31 08:39:59 compute-2 systemd[314081]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 08:39:59 compute-2 systemd[314081]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 08:39:59 compute-2 systemd[314081]: Reached target Paths.
Jan 31 08:39:59 compute-2 systemd[314081]: Reached target Timers.
Jan 31 08:39:59 compute-2 systemd[314081]: Starting D-Bus User Message Bus Socket...
Jan 31 08:39:59 compute-2 systemd[314081]: Starting Create User's Volatile Files and Directories...
Jan 31 08:39:59 compute-2 systemd[314081]: Listening on D-Bus User Message Bus Socket.
Jan 31 08:39:59 compute-2 systemd[314081]: Reached target Sockets.
Jan 31 08:39:59 compute-2 systemd[314081]: Finished Create User's Volatile Files and Directories.
Jan 31 08:39:59 compute-2 systemd[314081]: Reached target Basic System.
Jan 31 08:39:59 compute-2 systemd[314081]: Reached target Main User Target.
Jan 31 08:39:59 compute-2 systemd[314081]: Startup finished in 119ms.
Jan 31 08:39:59 compute-2 systemd[1]: Started User Manager for UID 42436.
Jan 31 08:39:59 compute-2 systemd[1]: Started Session 63 of User nova.
Jan 31 08:39:59 compute-2 sshd-session[314077]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 31 08:39:59 compute-2 sshd-session[314097]: Received disconnect from 192.168.122.101 port 54842:11: disconnected by user
Jan 31 08:39:59 compute-2 sshd-session[314097]: Disconnected from user nova 192.168.122.101 port 54842
Jan 31 08:39:59 compute-2 sshd-session[314077]: pam_unix(sshd:session): session closed for user nova
Jan 31 08:39:59 compute-2 systemd[1]: session-63.scope: Deactivated successfully.
Jan 31 08:39:59 compute-2 systemd-logind[801]: Session 63 logged out. Waiting for processes to exit.
Jan 31 08:39:59 compute-2 systemd-logind[801]: Removed session 63.
Jan 31 08:39:59 compute-2 sshd-session[314099]: Accepted publickey for nova from 192.168.122.101 port 54848 ssh2: ECDSA SHA256:x674mWemszn5UyYA1PQSm9fK8+OEaBfRnNSUktYnOE0
Jan 31 08:39:59 compute-2 systemd-logind[801]: New session 65 of user nova.
Jan 31 08:39:59 compute-2 systemd[1]: Started Session 65 of User nova.
Jan 31 08:39:59 compute-2 sshd-session[314099]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 31 08:39:59 compute-2 sshd-session[314103]: Received disconnect from 192.168.122.101 port 54848:11: disconnected by user
Jan 31 08:39:59 compute-2 sshd-session[314103]: Disconnected from user nova 192.168.122.101 port 54848
Jan 31 08:39:59 compute-2 sshd-session[314099]: pam_unix(sshd:session): session closed for user nova
Jan 31 08:39:59 compute-2 systemd[1]: session-65.scope: Deactivated successfully.
Jan 31 08:39:59 compute-2 systemd-logind[801]: Session 65 logged out. Waiting for processes to exit.
Jan 31 08:40:00 compute-2 systemd-logind[801]: Removed session 65.
Jan 31 08:40:00 compute-2 ceph-mon[77282]: pgmap v3210: 305 pgs: 305 active+clean; 584 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 280 KiB/s rd, 2.3 MiB/s wr, 85 op/s
Jan 31 08:40:00 compute-2 ceph-mon[77282]: overall HEALTH_OK
Jan 31 08:40:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:00.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:00 compute-2 nova_compute[226829]: 2026-01-31 08:40:00.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:00 compute-2 nova_compute[226829]: 2026-01-31 08:40:00.636 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:00 compute-2 nova_compute[226829]: 2026-01-31 08:40:00.845 226833 DEBUG oslo_concurrency.lockutils [None req-05735e8c-b5f6-4ce6-8a72-b25d74c452c9 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:00 compute-2 nova_compute[226829]: 2026-01-31 08:40:00.846 226833 DEBUG oslo_concurrency.lockutils [None req-05735e8c-b5f6-4ce6-8a72-b25d74c452c9 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:00 compute-2 nova_compute[226829]: 2026-01-31 08:40:00.966 226833 DEBUG nova.compute.manager [req-707430db-2497-4d07-bf56-2b552fc15afd req-7d0d864d-519c-4d81-8796-b34023c0dd2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Received event network-vif-plugged-171577a9-ec3a-49f5-abe3-26f9190b3419 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:40:00 compute-2 nova_compute[226829]: 2026-01-31 08:40:00.966 226833 DEBUG oslo_concurrency.lockutils [req-707430db-2497-4d07-bf56-2b552fc15afd req-7d0d864d-519c-4d81-8796-b34023c0dd2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "350b7ab1-25ae-4368-b319-529690bc394b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:00 compute-2 nova_compute[226829]: 2026-01-31 08:40:00.966 226833 DEBUG oslo_concurrency.lockutils [req-707430db-2497-4d07-bf56-2b552fc15afd req-7d0d864d-519c-4d81-8796-b34023c0dd2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "350b7ab1-25ae-4368-b319-529690bc394b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:00 compute-2 nova_compute[226829]: 2026-01-31 08:40:00.967 226833 DEBUG oslo_concurrency.lockutils [req-707430db-2497-4d07-bf56-2b552fc15afd req-7d0d864d-519c-4d81-8796-b34023c0dd2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "350b7ab1-25ae-4368-b319-529690bc394b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:00 compute-2 nova_compute[226829]: 2026-01-31 08:40:00.967 226833 DEBUG nova.compute.manager [req-707430db-2497-4d07-bf56-2b552fc15afd req-7d0d864d-519c-4d81-8796-b34023c0dd2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] No waiting events found dispatching network-vif-plugged-171577a9-ec3a-49f5-abe3-26f9190b3419 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:40:00 compute-2 nova_compute[226829]: 2026-01-31 08:40:00.967 226833 WARNING nova.compute.manager [req-707430db-2497-4d07-bf56-2b552fc15afd req-7d0d864d-519c-4d81-8796-b34023c0dd2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Received unexpected event network-vif-plugged-171577a9-ec3a-49f5-abe3-26f9190b3419 for instance with vm_state deleted and task_state None.
Jan 31 08:40:00 compute-2 nova_compute[226829]: 2026-01-31 08:40:00.967 226833 DEBUG nova.compute.manager [req-707430db-2497-4d07-bf56-2b552fc15afd req-7d0d864d-519c-4d81-8796-b34023c0dd2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Received event network-vif-deleted-171577a9-ec3a-49f5-abe3-26f9190b3419 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:40:01 compute-2 nova_compute[226829]: 2026-01-31 08:40:01.055 226833 DEBUG oslo_concurrency.processutils [None req-05735e8c-b5f6-4ce6-8a72-b25d74c452c9 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:40:01 compute-2 ceph-mon[77282]: pgmap v3211: 305 pgs: 305 active+clean; 539 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 287 KiB/s rd, 2.0 MiB/s wr, 108 op/s
Jan 31 08:40:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/984588358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:40:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:40:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:01.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:40:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:40:01 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4217328167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:40:01 compute-2 nova_compute[226829]: 2026-01-31 08:40:01.512 226833 DEBUG oslo_concurrency.processutils [None req-05735e8c-b5f6-4ce6-8a72-b25d74c452c9 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:40:01 compute-2 nova_compute[226829]: 2026-01-31 08:40:01.519 226833 DEBUG nova.compute.provider_tree [None req-05735e8c-b5f6-4ce6-8a72-b25d74c452c9 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:40:01 compute-2 nova_compute[226829]: 2026-01-31 08:40:01.861 226833 DEBUG nova.scheduler.client.report [None req-05735e8c-b5f6-4ce6-8a72-b25d74c452c9 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:40:02 compute-2 nova_compute[226829]: 2026-01-31 08:40:02.254 226833 DEBUG oslo_concurrency.lockutils [None req-05735e8c-b5f6-4ce6-8a72-b25d74c452c9 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.409s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4217328167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:40:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/4210333335' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:40:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/4210333335' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:40:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:02.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:02 compute-2 nova_compute[226829]: 2026-01-31 08:40:02.600 226833 INFO nova.scheduler.client.report [None req-05735e8c-b5f6-4ce6-8a72-b25d74c452c9 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Deleted allocations for instance 350b7ab1-25ae-4368-b319-529690bc394b
Jan 31 08:40:02 compute-2 nova_compute[226829]: 2026-01-31 08:40:02.792 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:03 compute-2 ceph-mon[77282]: pgmap v3212: 305 pgs: 305 active+clean; 539 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 186 KiB/s rd, 726 KiB/s wr, 78 op/s
Jan 31 08:40:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2114058315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:40:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:03.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:03 compute-2 nova_compute[226829]: 2026-01-31 08:40:03.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:03 compute-2 nova_compute[226829]: 2026-01-31 08:40:03.642 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:03.642 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=80, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=79) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:40:03 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:03.644 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:40:04 compute-2 nova_compute[226829]: 2026-01-31 08:40:04.169 226833 DEBUG oslo_concurrency.lockutils [None req-05735e8c-b5f6-4ce6-8a72-b25d74c452c9 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "350b7ab1-25ae-4368-b319-529690bc394b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.998s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:04 compute-2 nova_compute[226829]: 2026-01-31 08:40:04.171 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:04 compute-2 nova_compute[226829]: 2026-01-31 08:40:04.172 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:04 compute-2 nova_compute[226829]: 2026-01-31 08:40:04.172 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:04 compute-2 nova_compute[226829]: 2026-01-31 08:40:04.173 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:40:04 compute-2 nova_compute[226829]: 2026-01-31 08:40:04.173 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:40:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:04.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:40:04 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/246431782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:40:04 compute-2 nova_compute[226829]: 2026-01-31 08:40:04.594 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:40:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/246431782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:40:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:05.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:05 compute-2 nova_compute[226829]: 2026-01-31 08:40:05.551 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:40:05 compute-2 nova_compute[226829]: 2026-01-31 08:40:05.552 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:40:05 compute-2 nova_compute[226829]: 2026-01-31 08:40:05.637 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:05 compute-2 ceph-mon[77282]: pgmap v3213: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 64 KiB/s rd, 244 KiB/s wr, 75 op/s
Jan 31 08:40:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1740064414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:40:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:05.647 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '80'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:40:05 compute-2 nova_compute[226829]: 2026-01-31 08:40:05.650 226833 DEBUG nova.compute.manager [req-771c781b-a2f6-4a0a-a284-def2f24bc1a0 req-dac7d0a9-bf87-42a9-9549-fdac0e12bb96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received event network-vif-unplugged-1950c089-af27-485c-8b1d-eefa73ac5064 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:40:05 compute-2 nova_compute[226829]: 2026-01-31 08:40:05.650 226833 DEBUG oslo_concurrency.lockutils [req-771c781b-a2f6-4a0a-a284-def2f24bc1a0 req-dac7d0a9-bf87-42a9-9549-fdac0e12bb96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:05 compute-2 nova_compute[226829]: 2026-01-31 08:40:05.651 226833 DEBUG oslo_concurrency.lockutils [req-771c781b-a2f6-4a0a-a284-def2f24bc1a0 req-dac7d0a9-bf87-42a9-9549-fdac0e12bb96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:05 compute-2 nova_compute[226829]: 2026-01-31 08:40:05.651 226833 DEBUG oslo_concurrency.lockutils [req-771c781b-a2f6-4a0a-a284-def2f24bc1a0 req-dac7d0a9-bf87-42a9-9549-fdac0e12bb96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:05 compute-2 nova_compute[226829]: 2026-01-31 08:40:05.651 226833 DEBUG nova.compute.manager [req-771c781b-a2f6-4a0a-a284-def2f24bc1a0 req-dac7d0a9-bf87-42a9-9549-fdac0e12bb96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] No waiting events found dispatching network-vif-unplugged-1950c089-af27-485c-8b1d-eefa73ac5064 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:40:05 compute-2 nova_compute[226829]: 2026-01-31 08:40:05.652 226833 WARNING nova.compute.manager [req-771c781b-a2f6-4a0a-a284-def2f24bc1a0 req-dac7d0a9-bf87-42a9-9549-fdac0e12bb96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received unexpected event network-vif-unplugged-1950c089-af27-485c-8b1d-eefa73ac5064 for instance with vm_state active and task_state resize_migrated.
Jan 31 08:40:05 compute-2 nova_compute[226829]: 2026-01-31 08:40:05.713 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:40:05 compute-2 nova_compute[226829]: 2026-01-31 08:40:05.714 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3918MB free_disk=20.896697998046875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:40:05 compute-2 nova_compute[226829]: 2026-01-31 08:40:05.715 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:05 compute-2 nova_compute[226829]: 2026-01-31 08:40:05.715 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:06.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:06 compute-2 nova_compute[226829]: 2026-01-31 08:40:06.519 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Applying migration context for instance 6193a6a7-8918-4625-b626-5d53c31bf0ff as it has an incoming, in-progress migration cf69e0cf-b556-4b51-a26d-2a27f3defe82. Migration status is post-migrating _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950
Jan 31 08:40:06 compute-2 nova_compute[226829]: 2026-01-31 08:40:06.520 226833 INFO nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Updating resource usage from migration cf69e0cf-b556-4b51-a26d-2a27f3defe82
Jan 31 08:40:06 compute-2 nova_compute[226829]: 2026-01-31 08:40:06.583 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance da76f9e6-a924-4a04-855a-2764b3edc1a3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:40:06 compute-2 nova_compute[226829]: 2026-01-31 08:40:06.583 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 6193a6a7-8918-4625-b626-5d53c31bf0ff actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:40:06 compute-2 nova_compute[226829]: 2026-01-31 08:40:06.584 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:40:06 compute-2 nova_compute[226829]: 2026-01-31 08:40:06.584 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:40:06 compute-2 nova_compute[226829]: 2026-01-31 08:40:06.649 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:40:06 compute-2 nova_compute[226829]: 2026-01-31 08:40:06.732 226833 INFO nova.network.neutron [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Updating port 1950c089-af27-485c-8b1d-eefa73ac5064 with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 31 08:40:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:06.919 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:06.920 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:06.920 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:40:07 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2125069320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:40:07 compute-2 nova_compute[226829]: 2026-01-31 08:40:07.081 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:40:07 compute-2 nova_compute[226829]: 2026-01-31 08:40:07.088 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:40:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:07.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:07 compute-2 nova_compute[226829]: 2026-01-31 08:40:07.642 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:40:07 compute-2 ceph-mon[77282]: pgmap v3214: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 36 KiB/s wr, 54 op/s
Jan 31 08:40:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2125069320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:40:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/397668415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:40:07 compute-2 nova_compute[226829]: 2026-01-31 08:40:07.794 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:08 compute-2 nova_compute[226829]: 2026-01-31 08:40:08.056 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:40:08 compute-2 nova_compute[226829]: 2026-01-31 08:40:08.057 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.342s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:08.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:09.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:09 compute-2 nova_compute[226829]: 2026-01-31 08:40:09.619 226833 DEBUG nova.compute.manager [req-f93a2423-3d30-4836-ae37-3698b705f95b req-4963c7ae-8ca6-4d38-8e8f-63371a4a5739 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Received event network-changed-0407be49-1c64-4010-bd1c-9a273b819442 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:40:09 compute-2 nova_compute[226829]: 2026-01-31 08:40:09.619 226833 DEBUG nova.compute.manager [req-f93a2423-3d30-4836-ae37-3698b705f95b req-4963c7ae-8ca6-4d38-8e8f-63371a4a5739 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Refreshing instance network info cache due to event network-changed-0407be49-1c64-4010-bd1c-9a273b819442. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:40:09 compute-2 nova_compute[226829]: 2026-01-31 08:40:09.620 226833 DEBUG oslo_concurrency.lockutils [req-f93a2423-3d30-4836-ae37-3698b705f95b req-4963c7ae-8ca6-4d38-8e8f-63371a4a5739 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-da76f9e6-a924-4a04-855a-2764b3edc1a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:40:09 compute-2 nova_compute[226829]: 2026-01-31 08:40:09.620 226833 DEBUG oslo_concurrency.lockutils [req-f93a2423-3d30-4836-ae37-3698b705f95b req-4963c7ae-8ca6-4d38-8e8f-63371a4a5739 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-da76f9e6-a924-4a04-855a-2764b3edc1a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:40:09 compute-2 nova_compute[226829]: 2026-01-31 08:40:09.620 226833 DEBUG nova.network.neutron [req-f93a2423-3d30-4836-ae37-3698b705f95b req-4963c7ae-8ca6-4d38-8e8f-63371a4a5739 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Refreshing network info cache for port 0407be49-1c64-4010-bd1c-9a273b819442 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:40:09 compute-2 nova_compute[226829]: 2026-01-31 08:40:09.688 226833 DEBUG oslo_concurrency.lockutils [None req-d0b1be35-7d48-4f06-840b-fa3464d0bf67 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "da76f9e6-a924-4a04-855a-2764b3edc1a3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:09 compute-2 nova_compute[226829]: 2026-01-31 08:40:09.688 226833 DEBUG oslo_concurrency.lockutils [None req-d0b1be35-7d48-4f06-840b-fa3464d0bf67 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "da76f9e6-a924-4a04-855a-2764b3edc1a3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:09 compute-2 nova_compute[226829]: 2026-01-31 08:40:09.688 226833 DEBUG oslo_concurrency.lockutils [None req-d0b1be35-7d48-4f06-840b-fa3464d0bf67 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "da76f9e6-a924-4a04-855a-2764b3edc1a3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:09 compute-2 nova_compute[226829]: 2026-01-31 08:40:09.689 226833 DEBUG oslo_concurrency.lockutils [None req-d0b1be35-7d48-4f06-840b-fa3464d0bf67 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "da76f9e6-a924-4a04-855a-2764b3edc1a3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:09 compute-2 nova_compute[226829]: 2026-01-31 08:40:09.689 226833 DEBUG oslo_concurrency.lockutils [None req-d0b1be35-7d48-4f06-840b-fa3464d0bf67 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "da76f9e6-a924-4a04-855a-2764b3edc1a3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:09 compute-2 nova_compute[226829]: 2026-01-31 08:40:09.690 226833 INFO nova.compute.manager [None req-d0b1be35-7d48-4f06-840b-fa3464d0bf67 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Terminating instance
Jan 31 08:40:09 compute-2 nova_compute[226829]: 2026-01-31 08:40:09.691 226833 DEBUG nova.compute.manager [None req-d0b1be35-7d48-4f06-840b-fa3464d0bf67 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:40:09 compute-2 ceph-mon[77282]: pgmap v3215: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 39 KiB/s wr, 49 op/s
Jan 31 08:40:09 compute-2 kernel: tap0407be49-1c (unregistering): left promiscuous mode
Jan 31 08:40:09 compute-2 NetworkManager[48999]: <info>  [1769848809.8195] device (tap0407be49-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:40:09 compute-2 ovn_controller[133834]: 2026-01-31T08:40:09Z|00736|binding|INFO|Releasing lport 0407be49-1c64-4010-bd1c-9a273b819442 from this chassis (sb_readonly=0)
Jan 31 08:40:09 compute-2 ovn_controller[133834]: 2026-01-31T08:40:09Z|00737|binding|INFO|Setting lport 0407be49-1c64-4010-bd1c-9a273b819442 down in Southbound
Jan 31 08:40:09 compute-2 ovn_controller[133834]: 2026-01-31T08:40:09Z|00738|binding|INFO|Removing iface tap0407be49-1c ovn-installed in OVS
Jan 31 08:40:09 compute-2 nova_compute[226829]: 2026-01-31 08:40:09.831 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:09 compute-2 nova_compute[226829]: 2026-01-31 08:40:09.838 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:09 compute-2 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000b6.scope: Deactivated successfully.
Jan 31 08:40:09 compute-2 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000b6.scope: Consumed 15.932s CPU time.
Jan 31 08:40:09 compute-2 systemd-machined[195142]: Machine qemu-83-instance-000000b6 terminated.
Jan 31 08:40:09 compute-2 nova_compute[226829]: 2026-01-31 08:40:09.907 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:09 compute-2 nova_compute[226829]: 2026-01-31 08:40:09.911 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:09 compute-2 nova_compute[226829]: 2026-01-31 08:40:09.920 226833 INFO nova.virt.libvirt.driver [-] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Instance destroyed successfully.
Jan 31 08:40:09 compute-2 nova_compute[226829]: 2026-01-31 08:40:09.921 226833 DEBUG nova.objects.instance [None req-d0b1be35-7d48-4f06-840b-fa3464d0bf67 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lazy-loading 'resources' on Instance uuid da76f9e6-a924-4a04-855a-2764b3edc1a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:40:10 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:10.014 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:30:22 10.100.0.8'], port_security=['fa:16:3e:fe:30:22 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'da76f9e6-a924-4a04-855a-2764b3edc1a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-230cf542-2017-47c7-972b-7bbce9acd446', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba35ae24dbf3443e8a526dce39c6793b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '012bb765-67ff-4d8f-9099-94c4abf76027 c6a91abb-9646-4eaf-929b-fa5a4cf8f203', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5db3dfb8-63b9-4fff-8cde-54682a3b4ba9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=0407be49-1c64-4010-bd1c-9a273b819442) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:40:10 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:10.015 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 0407be49-1c64-4010-bd1c-9a273b819442 in datapath 230cf542-2017-47c7-972b-7bbce9acd446 unbound from our chassis
Jan 31 08:40:10 compute-2 nova_compute[226829]: 2026-01-31 08:40:10.016 226833 DEBUG nova.compute.manager [req-6b05978a-72be-47c3-8cb9-0a098fac6a5a req-fca7d972-a93b-4cc7-88bc-e18608ff7b7f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received event network-vif-plugged-1950c089-af27-485c-8b1d-eefa73ac5064 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:40:10 compute-2 nova_compute[226829]: 2026-01-31 08:40:10.016 226833 DEBUG oslo_concurrency.lockutils [req-6b05978a-72be-47c3-8cb9-0a098fac6a5a req-fca7d972-a93b-4cc7-88bc-e18608ff7b7f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:10 compute-2 nova_compute[226829]: 2026-01-31 08:40:10.016 226833 DEBUG oslo_concurrency.lockutils [req-6b05978a-72be-47c3-8cb9-0a098fac6a5a req-fca7d972-a93b-4cc7-88bc-e18608ff7b7f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:10 compute-2 nova_compute[226829]: 2026-01-31 08:40:10.016 226833 DEBUG oslo_concurrency.lockutils [req-6b05978a-72be-47c3-8cb9-0a098fac6a5a req-fca7d972-a93b-4cc7-88bc-e18608ff7b7f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:10 compute-2 nova_compute[226829]: 2026-01-31 08:40:10.016 226833 DEBUG nova.compute.manager [req-6b05978a-72be-47c3-8cb9-0a098fac6a5a req-fca7d972-a93b-4cc7-88bc-e18608ff7b7f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] No waiting events found dispatching network-vif-plugged-1950c089-af27-485c-8b1d-eefa73ac5064 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:40:10 compute-2 nova_compute[226829]: 2026-01-31 08:40:10.017 226833 WARNING nova.compute.manager [req-6b05978a-72be-47c3-8cb9-0a098fac6a5a req-fca7d972-a93b-4cc7-88bc-e18608ff7b7f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received unexpected event network-vif-plugged-1950c089-af27-485c-8b1d-eefa73ac5064 for instance with vm_state active and task_state resize_migrated.
Jan 31 08:40:10 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:10.016 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 230cf542-2017-47c7-972b-7bbce9acd446, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:40:10 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:10.018 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c85a6ba7-cf15-4e6c-9440-479b80d20e9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:10 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:10.019 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446 namespace which is not needed anymore
Jan 31 08:40:10 compute-2 nova_compute[226829]: 2026-01-31 08:40:10.053 226833 DEBUG nova.virt.libvirt.vif [None req-d0b1be35-7d48-4f06-840b-fa3464d0bf67 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:38:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-1043169202',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-1043169202',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1014068786-ac',id=182,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOCdvGufJvehmY0MUNN+f2LFenAJ+F98ZXIzoKpeHrgeapRWpAssUxwFK09XMQHpqJ4lB7bK+FLEX2qiQudAseuiTurXLcLJluW2RUOwpkRUQR5QMmu4IAl7WV20dROKLQ==',key_name='tempest-TestSecurityGroupsBasicOps-1358644469',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:38:38Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ba35ae24dbf3443e8a526dce39c6793b',ramdisk_id='',reservation_id='r-a7vlvawv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1014068786',owner_user_name='tempest-TestSecurityGroupsBasicOps-1014068786-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:38:38Z,user_data=None,user_id='c6968a1ee10e4e3b8651ffe0240a7e46',uuid=da76f9e6-a924-4a04-855a-2764b3edc1a3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0407be49-1c64-4010-bd1c-9a273b819442", "address": "fa:16:3e:fe:30:22", "network": {"id": "230cf542-2017-47c7-972b-7bbce9acd446", "bridge": "br-int", "label": "tempest-network-smoke--657911166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0407be49-1c", "ovs_interfaceid": "0407be49-1c64-4010-bd1c-9a273b819442", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:40:10 compute-2 nova_compute[226829]: 2026-01-31 08:40:10.054 226833 DEBUG nova.network.os_vif_util [None req-d0b1be35-7d48-4f06-840b-fa3464d0bf67 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converting VIF {"id": "0407be49-1c64-4010-bd1c-9a273b819442", "address": "fa:16:3e:fe:30:22", "network": {"id": "230cf542-2017-47c7-972b-7bbce9acd446", "bridge": "br-int", "label": "tempest-network-smoke--657911166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0407be49-1c", "ovs_interfaceid": "0407be49-1c64-4010-bd1c-9a273b819442", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:40:10 compute-2 nova_compute[226829]: 2026-01-31 08:40:10.055 226833 DEBUG nova.network.os_vif_util [None req-d0b1be35-7d48-4f06-840b-fa3464d0bf67 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fe:30:22,bridge_name='br-int',has_traffic_filtering=True,id=0407be49-1c64-4010-bd1c-9a273b819442,network=Network(230cf542-2017-47c7-972b-7bbce9acd446),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0407be49-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:40:10 compute-2 nova_compute[226829]: 2026-01-31 08:40:10.055 226833 DEBUG os_vif [None req-d0b1be35-7d48-4f06-840b-fa3464d0bf67 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fe:30:22,bridge_name='br-int',has_traffic_filtering=True,id=0407be49-1c64-4010-bd1c-9a273b819442,network=Network(230cf542-2017-47c7-972b-7bbce9acd446),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0407be49-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:40:10 compute-2 nova_compute[226829]: 2026-01-31 08:40:10.059 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:10 compute-2 nova_compute[226829]: 2026-01-31 08:40:10.060 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0407be49-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:40:10 compute-2 nova_compute[226829]: 2026-01-31 08:40:10.061 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:10 compute-2 nova_compute[226829]: 2026-01-31 08:40:10.062 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:10 compute-2 nova_compute[226829]: 2026-01-31 08:40:10.067 226833 INFO os_vif [None req-d0b1be35-7d48-4f06-840b-fa3464d0bf67 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fe:30:22,bridge_name='br-int',has_traffic_filtering=True,id=0407be49-1c64-4010-bd1c-9a273b819442,network=Network(230cf542-2017-47c7-972b-7bbce9acd446),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0407be49-1c')
Jan 31 08:40:10 compute-2 systemd[1]: Stopping User Manager for UID 42436...
Jan 31 08:40:10 compute-2 systemd[314081]: Activating special unit Exit the Session...
Jan 31 08:40:10 compute-2 systemd[314081]: Stopped target Main User Target.
Jan 31 08:40:10 compute-2 systemd[314081]: Stopped target Basic System.
Jan 31 08:40:10 compute-2 systemd[314081]: Stopped target Paths.
Jan 31 08:40:10 compute-2 systemd[314081]: Stopped target Sockets.
Jan 31 08:40:10 compute-2 systemd[314081]: Stopped target Timers.
Jan 31 08:40:10 compute-2 systemd[314081]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 08:40:10 compute-2 systemd[314081]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 08:40:10 compute-2 systemd[314081]: Closed D-Bus User Message Bus Socket.
Jan 31 08:40:10 compute-2 systemd[314081]: Stopped Create User's Volatile Files and Directories.
Jan 31 08:40:10 compute-2 systemd[314081]: Removed slice User Application Slice.
Jan 31 08:40:10 compute-2 systemd[314081]: Reached target Shutdown.
Jan 31 08:40:10 compute-2 systemd[314081]: Finished Exit the Session.
Jan 31 08:40:10 compute-2 systemd[314081]: Reached target Exit the Session.
Jan 31 08:40:10 compute-2 systemd[1]: user@42436.service: Deactivated successfully.
Jan 31 08:40:10 compute-2 systemd[1]: Stopped User Manager for UID 42436.
Jan 31 08:40:10 compute-2 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 31 08:40:10 compute-2 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 31 08:40:10 compute-2 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 31 08:40:10 compute-2 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 31 08:40:10 compute-2 systemd[1]: Removed slice User Slice of UID 42436.
Jan 31 08:40:10 compute-2 neutron-haproxy-ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446[311582]: [NOTICE]   (311586) : haproxy version is 2.8.14-c23fe91
Jan 31 08:40:10 compute-2 neutron-haproxy-ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446[311582]: [NOTICE]   (311586) : path to executable is /usr/sbin/haproxy
Jan 31 08:40:10 compute-2 neutron-haproxy-ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446[311582]: [WARNING]  (311586) : Exiting Master process...
Jan 31 08:40:10 compute-2 neutron-haproxy-ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446[311582]: [WARNING]  (311586) : Exiting Master process...
Jan 31 08:40:10 compute-2 neutron-haproxy-ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446[311582]: [ALERT]    (311586) : Current worker (311588) exited with code 143 (Terminated)
Jan 31 08:40:10 compute-2 neutron-haproxy-ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446[311582]: [WARNING]  (311586) : All workers exited. Exiting... (0)
Jan 31 08:40:10 compute-2 systemd[1]: libpod-8d6c540e39d9c5a97db3ec6f6ab0aababf56f0c9908248a6b9fb8bcb2998ee10.scope: Deactivated successfully.
Jan 31 08:40:10 compute-2 podman[314227]: 2026-01-31 08:40:10.14496324 +0000 UTC m=+0.047923094 container died 8d6c540e39d9c5a97db3ec6f6ab0aababf56f0c9908248a6b9fb8bcb2998ee10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Jan 31 08:40:10 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8d6c540e39d9c5a97db3ec6f6ab0aababf56f0c9908248a6b9fb8bcb2998ee10-userdata-shm.mount: Deactivated successfully.
Jan 31 08:40:10 compute-2 systemd[1]: var-lib-containers-storage-overlay-b6af97ae282ec679c60a6591c4271c45e108dc24d5b97a8a7b328de03150d454-merged.mount: Deactivated successfully.
Jan 31 08:40:10 compute-2 podman[314227]: 2026-01-31 08:40:10.178896512 +0000 UTC m=+0.081856366 container cleanup 8d6c540e39d9c5a97db3ec6f6ab0aababf56f0c9908248a6b9fb8bcb2998ee10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 08:40:10 compute-2 systemd[1]: libpod-conmon-8d6c540e39d9c5a97db3ec6f6ab0aababf56f0c9908248a6b9fb8bcb2998ee10.scope: Deactivated successfully.
Jan 31 08:40:10 compute-2 podman[314263]: 2026-01-31 08:40:10.238975565 +0000 UTC m=+0.042783134 container remove 8d6c540e39d9c5a97db3ec6f6ab0aababf56f0c9908248a6b9fb8bcb2998ee10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 08:40:10 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:10.245 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4fbc11f9-f9ce-4265-97da-57af29330187]: (4, ('Sat Jan 31 08:40:10 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446 (8d6c540e39d9c5a97db3ec6f6ab0aababf56f0c9908248a6b9fb8bcb2998ee10)\n8d6c540e39d9c5a97db3ec6f6ab0aababf56f0c9908248a6b9fb8bcb2998ee10\nSat Jan 31 08:40:10 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446 (8d6c540e39d9c5a97db3ec6f6ab0aababf56f0c9908248a6b9fb8bcb2998ee10)\n8d6c540e39d9c5a97db3ec6f6ab0aababf56f0c9908248a6b9fb8bcb2998ee10\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:10 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:10.247 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ab9e1101-9c8c-4fa9-b7ed-4ea4dfec42f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:10 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:10.248 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap230cf542-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:40:10 compute-2 nova_compute[226829]: 2026-01-31 08:40:10.249 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:10 compute-2 kernel: tap230cf542-20: left promiscuous mode
Jan 31 08:40:10 compute-2 nova_compute[226829]: 2026-01-31 08:40:10.254 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:10 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:10.258 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a44160d2-3b73-415b-8fda-3105e4dd4cd0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:10 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:10.283 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b8f900a9-93d8-4160-ab5a-3d0899d4ed65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:10 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:10.285 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a588f252-d292-4856-976d-9e91a5f13bc2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:10 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:10.299 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[68a70fb6-2c36-4d3e-af7c-2d9469b869f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 896973, 'reachable_time': 35099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314278, 'error': None, 'target': 'ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:10 compute-2 systemd[1]: run-netns-ovnmeta\x2d230cf542\x2d2017\x2d47c7\x2d972b\x2d7bbce9acd446.mount: Deactivated successfully.
Jan 31 08:40:10 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:10.303 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-230cf542-2017-47c7-972b-7bbce9acd446 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:40:10 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:10.304 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[8f8f818c-29f6-4918-b8a6-4e637b6175d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:10 compute-2 nova_compute[226829]: 2026-01-31 08:40:10.404 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848795.4033225, 350b7ab1-25ae-4368-b319-529690bc394b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:40:10 compute-2 nova_compute[226829]: 2026-01-31 08:40:10.406 226833 INFO nova.compute.manager [-] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] VM Stopped (Lifecycle Event)
Jan 31 08:40:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:10.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:10 compute-2 nova_compute[226829]: 2026-01-31 08:40:10.610 226833 DEBUG nova.compute.manager [None req-437035d1-da1c-4777-9216-e26241a276d8 - - - - - -] [instance: 350b7ab1-25ae-4368-b319-529690bc394b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:40:10 compute-2 nova_compute[226829]: 2026-01-31 08:40:10.729 226833 DEBUG oslo_concurrency.lockutils [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "refresh_cache-6193a6a7-8918-4625-b626-5d53c31bf0ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:40:10 compute-2 nova_compute[226829]: 2026-01-31 08:40:10.730 226833 DEBUG oslo_concurrency.lockutils [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquired lock "refresh_cache-6193a6a7-8918-4625-b626-5d53c31bf0ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:40:10 compute-2 nova_compute[226829]: 2026-01-31 08:40:10.730 226833 DEBUG nova.network.neutron [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:40:11 compute-2 nova_compute[226829]: 2026-01-31 08:40:11.282 226833 DEBUG nova.compute.manager [req-f3b6d738-2aa5-43d2-bd4f-b084d58e7720 req-e31e5530-91ab-4e0d-ba24-76c28a317fdf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received event network-changed-1950c089-af27-485c-8b1d-eefa73ac5064 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:40:11 compute-2 nova_compute[226829]: 2026-01-31 08:40:11.283 226833 DEBUG nova.compute.manager [req-f3b6d738-2aa5-43d2-bd4f-b084d58e7720 req-e31e5530-91ab-4e0d-ba24-76c28a317fdf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Refreshing instance network info cache due to event network-changed-1950c089-af27-485c-8b1d-eefa73ac5064. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:40:11 compute-2 nova_compute[226829]: 2026-01-31 08:40:11.284 226833 DEBUG oslo_concurrency.lockutils [req-f3b6d738-2aa5-43d2-bd4f-b084d58e7720 req-e31e5530-91ab-4e0d-ba24-76c28a317fdf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-6193a6a7-8918-4625-b626-5d53c31bf0ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:40:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:11.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:11 compute-2 nova_compute[226829]: 2026-01-31 08:40:11.528 226833 INFO nova.virt.libvirt.driver [None req-d0b1be35-7d48-4f06-840b-fa3464d0bf67 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Deleting instance files /var/lib/nova/instances/da76f9e6-a924-4a04-855a-2764b3edc1a3_del
Jan 31 08:40:11 compute-2 nova_compute[226829]: 2026-01-31 08:40:11.528 226833 INFO nova.virt.libvirt.driver [None req-d0b1be35-7d48-4f06-840b-fa3464d0bf67 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Deletion of /var/lib/nova/instances/da76f9e6-a924-4a04-855a-2764b3edc1a3_del complete
Jan 31 08:40:11 compute-2 ceph-mon[77282]: pgmap v3216: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 37 KiB/s wr, 54 op/s
Jan 31 08:40:11 compute-2 nova_compute[226829]: 2026-01-31 08:40:11.868 226833 INFO nova.compute.manager [None req-d0b1be35-7d48-4f06-840b-fa3464d0bf67 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Took 2.18 seconds to destroy the instance on the hypervisor.
Jan 31 08:40:11 compute-2 nova_compute[226829]: 2026-01-31 08:40:11.869 226833 DEBUG oslo.service.loopingcall [None req-d0b1be35-7d48-4f06-840b-fa3464d0bf67 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:40:11 compute-2 nova_compute[226829]: 2026-01-31 08:40:11.869 226833 DEBUG nova.compute.manager [-] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:40:11 compute-2 nova_compute[226829]: 2026-01-31 08:40:11.869 226833 DEBUG nova.network.neutron [-] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:40:12 compute-2 podman[314281]: 2026-01-31 08:40:12.19913818 +0000 UTC m=+0.090495550 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20260127)
Jan 31 08:40:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:12.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:12 compute-2 nova_compute[226829]: 2026-01-31 08:40:12.796 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:12 compute-2 nova_compute[226829]: 2026-01-31 08:40:12.813 226833 DEBUG nova.compute.manager [req-ce201ce5-c942-42ef-8e3a-091943db2539 req-2cc5c42d-23c8-4e79-8a2c-2a54712b5117 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Received event network-vif-unplugged-0407be49-1c64-4010-bd1c-9a273b819442 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:40:12 compute-2 nova_compute[226829]: 2026-01-31 08:40:12.813 226833 DEBUG oslo_concurrency.lockutils [req-ce201ce5-c942-42ef-8e3a-091943db2539 req-2cc5c42d-23c8-4e79-8a2c-2a54712b5117 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "da76f9e6-a924-4a04-855a-2764b3edc1a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:12 compute-2 nova_compute[226829]: 2026-01-31 08:40:12.813 226833 DEBUG oslo_concurrency.lockutils [req-ce201ce5-c942-42ef-8e3a-091943db2539 req-2cc5c42d-23c8-4e79-8a2c-2a54712b5117 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "da76f9e6-a924-4a04-855a-2764b3edc1a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:12 compute-2 nova_compute[226829]: 2026-01-31 08:40:12.814 226833 DEBUG oslo_concurrency.lockutils [req-ce201ce5-c942-42ef-8e3a-091943db2539 req-2cc5c42d-23c8-4e79-8a2c-2a54712b5117 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "da76f9e6-a924-4a04-855a-2764b3edc1a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:12 compute-2 nova_compute[226829]: 2026-01-31 08:40:12.814 226833 DEBUG nova.compute.manager [req-ce201ce5-c942-42ef-8e3a-091943db2539 req-2cc5c42d-23c8-4e79-8a2c-2a54712b5117 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] No waiting events found dispatching network-vif-unplugged-0407be49-1c64-4010-bd1c-9a273b819442 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:40:12 compute-2 nova_compute[226829]: 2026-01-31 08:40:12.814 226833 DEBUG nova.compute.manager [req-ce201ce5-c942-42ef-8e3a-091943db2539 req-2cc5c42d-23c8-4e79-8a2c-2a54712b5117 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Received event network-vif-unplugged-0407be49-1c64-4010-bd1c-9a273b819442 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:40:12 compute-2 nova_compute[226829]: 2026-01-31 08:40:12.814 226833 DEBUG nova.compute.manager [req-ce201ce5-c942-42ef-8e3a-091943db2539 req-2cc5c42d-23c8-4e79-8a2c-2a54712b5117 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Received event network-vif-plugged-0407be49-1c64-4010-bd1c-9a273b819442 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:40:12 compute-2 nova_compute[226829]: 2026-01-31 08:40:12.815 226833 DEBUG oslo_concurrency.lockutils [req-ce201ce5-c942-42ef-8e3a-091943db2539 req-2cc5c42d-23c8-4e79-8a2c-2a54712b5117 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "da76f9e6-a924-4a04-855a-2764b3edc1a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:12 compute-2 nova_compute[226829]: 2026-01-31 08:40:12.815 226833 DEBUG oslo_concurrency.lockutils [req-ce201ce5-c942-42ef-8e3a-091943db2539 req-2cc5c42d-23c8-4e79-8a2c-2a54712b5117 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "da76f9e6-a924-4a04-855a-2764b3edc1a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:12 compute-2 nova_compute[226829]: 2026-01-31 08:40:12.815 226833 DEBUG oslo_concurrency.lockutils [req-ce201ce5-c942-42ef-8e3a-091943db2539 req-2cc5c42d-23c8-4e79-8a2c-2a54712b5117 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "da76f9e6-a924-4a04-855a-2764b3edc1a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:12 compute-2 nova_compute[226829]: 2026-01-31 08:40:12.815 226833 DEBUG nova.compute.manager [req-ce201ce5-c942-42ef-8e3a-091943db2539 req-2cc5c42d-23c8-4e79-8a2c-2a54712b5117 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] No waiting events found dispatching network-vif-plugged-0407be49-1c64-4010-bd1c-9a273b819442 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:40:12 compute-2 nova_compute[226829]: 2026-01-31 08:40:12.816 226833 WARNING nova.compute.manager [req-ce201ce5-c942-42ef-8e3a-091943db2539 req-2cc5c42d-23c8-4e79-8a2c-2a54712b5117 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Received unexpected event network-vif-plugged-0407be49-1c64-4010-bd1c-9a273b819442 for instance with vm_state active and task_state deleting.
Jan 31 08:40:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:13.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:13 compute-2 nova_compute[226829]: 2026-01-31 08:40:13.600 226833 DEBUG nova.network.neutron [req-f93a2423-3d30-4836-ae37-3698b705f95b req-4963c7ae-8ca6-4d38-8e8f-63371a4a5739 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Updated VIF entry in instance network info cache for port 0407be49-1c64-4010-bd1c-9a273b819442. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:40:13 compute-2 nova_compute[226829]: 2026-01-31 08:40:13.601 226833 DEBUG nova.network.neutron [req-f93a2423-3d30-4836-ae37-3698b705f95b req-4963c7ae-8ca6-4d38-8e8f-63371a4a5739 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Updating instance_info_cache with network_info: [{"id": "0407be49-1c64-4010-bd1c-9a273b819442", "address": "fa:16:3e:fe:30:22", "network": {"id": "230cf542-2017-47c7-972b-7bbce9acd446", "bridge": "br-int", "label": "tempest-network-smoke--657911166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0407be49-1c", "ovs_interfaceid": "0407be49-1c64-4010-bd1c-9a273b819442", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:40:13 compute-2 ceph-mon[77282]: pgmap v3217: 305 pgs: 305 active+clean; 513 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 25 KiB/s wr, 28 op/s
Jan 31 08:40:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/951835952' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:40:14 compute-2 nova_compute[226829]: 2026-01-31 08:40:14.005 226833 DEBUG oslo_concurrency.lockutils [req-f93a2423-3d30-4836-ae37-3698b705f95b req-4963c7ae-8ca6-4d38-8e8f-63371a4a5739 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-da76f9e6-a924-4a04-855a-2764b3edc1a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:40:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:14.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:14 compute-2 nova_compute[226829]: 2026-01-31 08:40:14.722 226833 DEBUG nova.network.neutron [-] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:40:14 compute-2 nova_compute[226829]: 2026-01-31 08:40:14.830 226833 DEBUG nova.compute.manager [req-db9be34d-184e-4be1-9a51-d67621ba9ec8 req-e68e2316-41e5-4278-916a-ab67bc7fd553 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Received event network-vif-deleted-0407be49-1c64-4010-bd1c-9a273b819442 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:40:14 compute-2 nova_compute[226829]: 2026-01-31 08:40:14.830 226833 INFO nova.compute.manager [req-db9be34d-184e-4be1-9a51-d67621ba9ec8 req-e68e2316-41e5-4278-916a-ab67bc7fd553 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Neutron deleted interface 0407be49-1c64-4010-bd1c-9a273b819442; detaching it from the instance and deleting it from the info cache
Jan 31 08:40:14 compute-2 nova_compute[226829]: 2026-01-31 08:40:14.830 226833 DEBUG nova.network.neutron [req-db9be34d-184e-4be1-9a51-d67621ba9ec8 req-e68e2316-41e5-4278-916a-ab67bc7fd553 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:40:14 compute-2 nova_compute[226829]: 2026-01-31 08:40:14.876 226833 INFO nova.compute.manager [-] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Took 3.01 seconds to deallocate network for instance.
Jan 31 08:40:14 compute-2 nova_compute[226829]: 2026-01-31 08:40:14.885 226833 DEBUG nova.compute.manager [req-db9be34d-184e-4be1-9a51-d67621ba9ec8 req-e68e2316-41e5-4278-916a-ab67bc7fd553 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Detach interface failed, port_id=0407be49-1c64-4010-bd1c-9a273b819442, reason: Instance da76f9e6-a924-4a04-855a-2764b3edc1a3 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:40:14 compute-2 nova_compute[226829]: 2026-01-31 08:40:14.988 226833 DEBUG oslo_concurrency.lockutils [None req-d0b1be35-7d48-4f06-840b-fa3464d0bf67 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:14 compute-2 nova_compute[226829]: 2026-01-31 08:40:14.989 226833 DEBUG oslo_concurrency.lockutils [None req-d0b1be35-7d48-4f06-840b-fa3464d0bf67 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:15 compute-2 nova_compute[226829]: 2026-01-31 08:40:15.046 226833 DEBUG nova.network.neutron [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Updating instance_info_cache with network_info: [{"id": "1950c089-af27-485c-8b1d-eefa73ac5064", "address": "fa:16:3e:43:d3:f9", "network": {"id": "afa5ed82-7034-4517-a39e-5fc8b872592f", "bridge": "br-int", "label": "tempest-network-smoke--1489833336", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1950c089-af", "ovs_interfaceid": "1950c089-af27-485c-8b1d-eefa73ac5064", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:40:15 compute-2 nova_compute[226829]: 2026-01-31 08:40:15.063 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:15 compute-2 nova_compute[226829]: 2026-01-31 08:40:15.109 226833 DEBUG oslo_concurrency.lockutils [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Releasing lock "refresh_cache-6193a6a7-8918-4625-b626-5d53c31bf0ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:40:15 compute-2 nova_compute[226829]: 2026-01-31 08:40:15.113 226833 DEBUG oslo_concurrency.processutils [None req-d0b1be35-7d48-4f06-840b-fa3464d0bf67 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:40:15 compute-2 nova_compute[226829]: 2026-01-31 08:40:15.136 226833 DEBUG oslo_concurrency.lockutils [req-f3b6d738-2aa5-43d2-bd4f-b084d58e7720 req-e31e5530-91ab-4e0d-ba24-76c28a317fdf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-6193a6a7-8918-4625-b626-5d53c31bf0ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:40:15 compute-2 nova_compute[226829]: 2026-01-31 08:40:15.137 226833 DEBUG nova.network.neutron [req-f3b6d738-2aa5-43d2-bd4f-b084d58e7720 req-e31e5530-91ab-4e0d-ba24-76c28a317fdf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Refreshing network info cache for port 1950c089-af27-485c-8b1d-eefa73ac5064 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:40:15 compute-2 nova_compute[226829]: 2026-01-31 08:40:15.319 226833 DEBUG nova.virt.libvirt.driver [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Jan 31 08:40:15 compute-2 nova_compute[226829]: 2026-01-31 08:40:15.321 226833 DEBUG nova.virt.libvirt.driver [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 31 08:40:15 compute-2 nova_compute[226829]: 2026-01-31 08:40:15.321 226833 INFO nova.virt.libvirt.driver [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Creating image(s)
Jan 31 08:40:15 compute-2 nova_compute[226829]: 2026-01-31 08:40:15.358 226833 DEBUG nova.storage.rbd_utils [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] creating snapshot(nova-resize) on rbd image(6193a6a7-8918-4625-b626-5d53c31bf0ff_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 08:40:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:40:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:15.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:40:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:40:15 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3737376873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:40:15 compute-2 nova_compute[226829]: 2026-01-31 08:40:15.523 226833 DEBUG oslo_concurrency.processutils [None req-d0b1be35-7d48-4f06-840b-fa3464d0bf67 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:40:15 compute-2 nova_compute[226829]: 2026-01-31 08:40:15.530 226833 DEBUG nova.compute.provider_tree [None req-d0b1be35-7d48-4f06-840b-fa3464d0bf67 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:40:15 compute-2 nova_compute[226829]: 2026-01-31 08:40:15.551 226833 DEBUG nova.scheduler.client.report [None req-d0b1be35-7d48-4f06-840b-fa3464d0bf67 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:40:15 compute-2 nova_compute[226829]: 2026-01-31 08:40:15.598 226833 DEBUG oslo_concurrency.lockutils [None req-d0b1be35-7d48-4f06-840b-fa3464d0bf67 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:15 compute-2 nova_compute[226829]: 2026-01-31 08:40:15.658 226833 INFO nova.scheduler.client.report [None req-d0b1be35-7d48-4f06-840b-fa3464d0bf67 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Deleted allocations for instance da76f9e6-a924-4a04-855a-2764b3edc1a3
Jan 31 08:40:15 compute-2 nova_compute[226829]: 2026-01-31 08:40:15.811 226833 DEBUG oslo_concurrency.lockutils [None req-d0b1be35-7d48-4f06-840b-fa3464d0bf67 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "da76f9e6-a924-4a04-855a-2764b3edc1a3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:15 compute-2 ceph-mon[77282]: pgmap v3218: 305 pgs: 305 active+clean; 467 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 24 KiB/s wr, 38 op/s
Jan 31 08:40:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3737376873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:40:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e380 e380: 3 total, 3 up, 3 in
Jan 31 08:40:16 compute-2 nova_compute[226829]: 2026-01-31 08:40:16.000 226833 DEBUG nova.objects.instance [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'trusted_certs' on Instance uuid 6193a6a7-8918-4625-b626-5d53c31bf0ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:40:16 compute-2 nova_compute[226829]: 2026-01-31 08:40:16.156 226833 DEBUG nova.virt.libvirt.driver [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 08:40:16 compute-2 nova_compute[226829]: 2026-01-31 08:40:16.156 226833 DEBUG nova.virt.libvirt.driver [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Ensure instance console log exists: /var/lib/nova/instances/6193a6a7-8918-4625-b626-5d53c31bf0ff/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:40:16 compute-2 nova_compute[226829]: 2026-01-31 08:40:16.157 226833 DEBUG oslo_concurrency.lockutils [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:16 compute-2 nova_compute[226829]: 2026-01-31 08:40:16.157 226833 DEBUG oslo_concurrency.lockutils [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:16 compute-2 nova_compute[226829]: 2026-01-31 08:40:16.157 226833 DEBUG oslo_concurrency.lockutils [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:16 compute-2 nova_compute[226829]: 2026-01-31 08:40:16.160 226833 DEBUG nova.virt.libvirt.driver [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Start _get_guest_xml network_info=[{"id": "1950c089-af27-485c-8b1d-eefa73ac5064", "address": "fa:16:3e:43:d3:f9", "network": {"id": "afa5ed82-7034-4517-a39e-5fc8b872592f", "bridge": "br-int", "label": "tempest-network-smoke--1489833336", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1489833336", "vif_mac": "fa:16:3e:43:d3:f9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1950c089-af", "ovs_interfaceid": "1950c089-af27-485c-8b1d-eefa73ac5064", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:40:16 compute-2 nova_compute[226829]: 2026-01-31 08:40:16.164 226833 WARNING nova.virt.libvirt.driver [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:40:16 compute-2 nova_compute[226829]: 2026-01-31 08:40:16.168 226833 DEBUG nova.virt.libvirt.host [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:40:16 compute-2 nova_compute[226829]: 2026-01-31 08:40:16.169 226833 DEBUG nova.virt.libvirt.host [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:40:16 compute-2 nova_compute[226829]: 2026-01-31 08:40:16.173 226833 DEBUG nova.virt.libvirt.host [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:40:16 compute-2 nova_compute[226829]: 2026-01-31 08:40:16.173 226833 DEBUG nova.virt.libvirt.host [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:40:16 compute-2 nova_compute[226829]: 2026-01-31 08:40:16.174 226833 DEBUG nova.virt.libvirt.driver [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:40:16 compute-2 nova_compute[226829]: 2026-01-31 08:40:16.174 226833 DEBUG nova.virt.hardware [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e3bd1dad-95f3-4ed9-94b4-27245cd798b5',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:40:16 compute-2 nova_compute[226829]: 2026-01-31 08:40:16.175 226833 DEBUG nova.virt.hardware [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:40:16 compute-2 nova_compute[226829]: 2026-01-31 08:40:16.175 226833 DEBUG nova.virt.hardware [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:40:16 compute-2 nova_compute[226829]: 2026-01-31 08:40:16.175 226833 DEBUG nova.virt.hardware [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:40:16 compute-2 nova_compute[226829]: 2026-01-31 08:40:16.176 226833 DEBUG nova.virt.hardware [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:40:16 compute-2 nova_compute[226829]: 2026-01-31 08:40:16.176 226833 DEBUG nova.virt.hardware [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:40:16 compute-2 nova_compute[226829]: 2026-01-31 08:40:16.176 226833 DEBUG nova.virt.hardware [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:40:16 compute-2 nova_compute[226829]: 2026-01-31 08:40:16.176 226833 DEBUG nova.virt.hardware [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:40:16 compute-2 nova_compute[226829]: 2026-01-31 08:40:16.176 226833 DEBUG nova.virt.hardware [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:40:16 compute-2 nova_compute[226829]: 2026-01-31 08:40:16.177 226833 DEBUG nova.virt.hardware [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:40:16 compute-2 nova_compute[226829]: 2026-01-31 08:40:16.177 226833 DEBUG nova.virt.hardware [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:40:16 compute-2 nova_compute[226829]: 2026-01-31 08:40:16.177 226833 DEBUG nova.objects.instance [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6193a6a7-8918-4625-b626-5d53c31bf0ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:40:16 compute-2 nova_compute[226829]: 2026-01-31 08:40:16.223 226833 DEBUG oslo_concurrency.processutils [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:40:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:16.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:16 compute-2 sudo[314424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:40:16 compute-2 sudo[314424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:16 compute-2 sudo[314424]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:16 compute-2 sudo[314449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:40:16 compute-2 sudo[314449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:16 compute-2 sudo[314449]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:40:16 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/532896343' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:40:16 compute-2 nova_compute[226829]: 2026-01-31 08:40:16.686 226833 DEBUG oslo_concurrency.processutils [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:40:16 compute-2 nova_compute[226829]: 2026-01-31 08:40:16.725 226833 DEBUG oslo_concurrency.processutils [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:40:16 compute-2 ceph-mon[77282]: osdmap e380: 3 total, 3 up, 3 in
Jan 31 08:40:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/532896343' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:40:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:40:17 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2290654224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.150 226833 DEBUG oslo_concurrency.processutils [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.152 226833 DEBUG nova.virt.libvirt.vif [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:38:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1044697057',display_name='tempest-TestNetworkAdvancedServerOps-server-1044697057',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1044697057',id=186,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFM+PhejEoD7g/QWN1GYstJMXVFnv1wBVmxecXIfvr4KjdawbuS6IVqr3y8FzKBH2jXkdyk7w4kVVSMOQCA1tqgtHQYS5flQfReRJgL/KNYlba9qTCk6eFD00Tadz9cy3w==',key_name='tempest-TestNetworkAdvancedServerOps-1072390140',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:39:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-hmxtgjg2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:40:04Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=6193a6a7-8918-4625-b626-5d53c31bf0ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1950c089-af27-485c-8b1d-eefa73ac5064", "address": "fa:16:3e:43:d3:f9", "network": {"id": "afa5ed82-7034-4517-a39e-5fc8b872592f", "bridge": "br-int", "label": "tempest-network-smoke--1489833336", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1489833336", "vif_mac": "fa:16:3e:43:d3:f9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1950c089-af", "ovs_interfaceid": "1950c089-af27-485c-8b1d-eefa73ac5064", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.152 226833 DEBUG nova.network.os_vif_util [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "1950c089-af27-485c-8b1d-eefa73ac5064", "address": "fa:16:3e:43:d3:f9", "network": {"id": "afa5ed82-7034-4517-a39e-5fc8b872592f", "bridge": "br-int", "label": "tempest-network-smoke--1489833336", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1489833336", "vif_mac": "fa:16:3e:43:d3:f9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1950c089-af", "ovs_interfaceid": "1950c089-af27-485c-8b1d-eefa73ac5064", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.153 226833 DEBUG nova.network.os_vif_util [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:d3:f9,bridge_name='br-int',has_traffic_filtering=True,id=1950c089-af27-485c-8b1d-eefa73ac5064,network=Network(afa5ed82-7034-4517-a39e-5fc8b872592f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1950c089-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.155 226833 DEBUG nova.virt.libvirt.driver [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:40:17 compute-2 nova_compute[226829]:   <uuid>6193a6a7-8918-4625-b626-5d53c31bf0ff</uuid>
Jan 31 08:40:17 compute-2 nova_compute[226829]:   <name>instance-000000ba</name>
Jan 31 08:40:17 compute-2 nova_compute[226829]:   <memory>196608</memory>
Jan 31 08:40:17 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:40:17 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1044697057</nova:name>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:40:16</nova:creationTime>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <nova:flavor name="m1.micro">
Jan 31 08:40:17 compute-2 nova_compute[226829]:         <nova:memory>192</nova:memory>
Jan 31 08:40:17 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:40:17 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:40:17 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:40:17 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:40:17 compute-2 nova_compute[226829]:         <nova:user uuid="4d0e9d918b4041fabd5ded633b4cf404">tempest-TestNetworkAdvancedServerOps-483180749-project-member</nova:user>
Jan 31 08:40:17 compute-2 nova_compute[226829]:         <nova:project uuid="9710f0cf77d84353ae13fa47922b085d">tempest-TestNetworkAdvancedServerOps-483180749</nova:project>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:40:17 compute-2 nova_compute[226829]:         <nova:port uuid="1950c089-af27-485c-8b1d-eefa73ac5064">
Jan 31 08:40:17 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:40:17 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:40:17 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <system>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <entry name="serial">6193a6a7-8918-4625-b626-5d53c31bf0ff</entry>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <entry name="uuid">6193a6a7-8918-4625-b626-5d53c31bf0ff</entry>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     </system>
Jan 31 08:40:17 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:40:17 compute-2 nova_compute[226829]:   <os>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:   </os>
Jan 31 08:40:17 compute-2 nova_compute[226829]:   <features>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:   </features>
Jan 31 08:40:17 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:40:17 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:40:17 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/6193a6a7-8918-4625-b626-5d53c31bf0ff_disk">
Jan 31 08:40:17 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       </source>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:40:17 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/6193a6a7-8918-4625-b626-5d53c31bf0ff_disk.config">
Jan 31 08:40:17 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       </source>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:40:17 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:43:d3:f9"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <target dev="tap1950c089-af"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/6193a6a7-8918-4625-b626-5d53c31bf0ff/console.log" append="off"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <video>
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     </video>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:40:17 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:40:17 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:40:17 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:40:17 compute-2 nova_compute[226829]: </domain>
Jan 31 08:40:17 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.156 226833 DEBUG nova.virt.libvirt.vif [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:38:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1044697057',display_name='tempest-TestNetworkAdvancedServerOps-server-1044697057',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1044697057',id=186,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFM+PhejEoD7g/QWN1GYstJMXVFnv1wBVmxecXIfvr4KjdawbuS6IVqr3y8FzKBH2jXkdyk7w4kVVSMOQCA1tqgtHQYS5flQfReRJgL/KNYlba9qTCk6eFD00Tadz9cy3w==',key_name='tempest-TestNetworkAdvancedServerOps-1072390140',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:39:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-hmxtgjg2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:40:04Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=6193a6a7-8918-4625-b626-5d53c31bf0ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1950c089-af27-485c-8b1d-eefa73ac5064", "address": "fa:16:3e:43:d3:f9", "network": {"id": "afa5ed82-7034-4517-a39e-5fc8b872592f", "bridge": "br-int", "label": "tempest-network-smoke--1489833336", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1489833336", "vif_mac": "fa:16:3e:43:d3:f9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1950c089-af", "ovs_interfaceid": "1950c089-af27-485c-8b1d-eefa73ac5064", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.157 226833 DEBUG nova.network.os_vif_util [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "1950c089-af27-485c-8b1d-eefa73ac5064", "address": "fa:16:3e:43:d3:f9", "network": {"id": "afa5ed82-7034-4517-a39e-5fc8b872592f", "bridge": "br-int", "label": "tempest-network-smoke--1489833336", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1489833336", "vif_mac": "fa:16:3e:43:d3:f9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1950c089-af", "ovs_interfaceid": "1950c089-af27-485c-8b1d-eefa73ac5064", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.157 226833 DEBUG nova.network.os_vif_util [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:d3:f9,bridge_name='br-int',has_traffic_filtering=True,id=1950c089-af27-485c-8b1d-eefa73ac5064,network=Network(afa5ed82-7034-4517-a39e-5fc8b872592f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1950c089-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.157 226833 DEBUG os_vif [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:d3:f9,bridge_name='br-int',has_traffic_filtering=True,id=1950c089-af27-485c-8b1d-eefa73ac5064,network=Network(afa5ed82-7034-4517-a39e-5fc8b872592f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1950c089-af') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.158 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.159 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.159 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.161 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.161 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1950c089-af, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.161 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1950c089-af, col_values=(('external_ids', {'iface-id': '1950c089-af27-485c-8b1d-eefa73ac5064', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:d3:f9', 'vm-uuid': '6193a6a7-8918-4625-b626-5d53c31bf0ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.163 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:17 compute-2 NetworkManager[48999]: <info>  [1769848817.1643] manager: (tap1950c089-af): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/364)
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.166 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.169 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.170 226833 INFO os_vif [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:d3:f9,bridge_name='br-int',has_traffic_filtering=True,id=1950c089-af27-485c-8b1d-eefa73ac5064,network=Network(afa5ed82-7034-4517-a39e-5fc8b872592f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1950c089-af')
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.251 226833 DEBUG nova.virt.libvirt.driver [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.251 226833 DEBUG nova.virt.libvirt.driver [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.251 226833 DEBUG nova.virt.libvirt.driver [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No VIF found with MAC fa:16:3e:43:d3:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.252 226833 INFO nova.virt.libvirt.driver [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Using config drive
Jan 31 08:40:17 compute-2 kernel: tap1950c089-af: entered promiscuous mode
Jan 31 08:40:17 compute-2 NetworkManager[48999]: <info>  [1769848817.3292] manager: (tap1950c089-af): new Tun device (/org/freedesktop/NetworkManager/Devices/365)
Jan 31 08:40:17 compute-2 ovn_controller[133834]: 2026-01-31T08:40:17Z|00739|binding|INFO|Claiming lport 1950c089-af27-485c-8b1d-eefa73ac5064 for this chassis.
Jan 31 08:40:17 compute-2 ovn_controller[133834]: 2026-01-31T08:40:17Z|00740|binding|INFO|1950c089-af27-485c-8b1d-eefa73ac5064: Claiming fa:16:3e:43:d3:f9 10.100.0.8
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.328 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.335 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:17 compute-2 ovn_controller[133834]: 2026-01-31T08:40:17Z|00741|binding|INFO|Setting lport 1950c089-af27-485c-8b1d-eefa73ac5064 ovn-installed in OVS
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.337 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.338 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:17 compute-2 ovn_controller[133834]: 2026-01-31T08:40:17Z|00742|binding|INFO|Setting lport 1950c089-af27-485c-8b1d-eefa73ac5064 up in Southbound
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:17.349 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:d3:f9 10.100.0.8'], port_security=['fa:16:3e:43:d3:f9 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6193a6a7-8918-4625-b626-5d53c31bf0ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-afa5ed82-7034-4517-a39e-5fc8b872592f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '6', 'neutron:security_group_ids': '1618d12c-038a-456c-8d56-5eb45e7b0852', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.204'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=002a439c-290e-44cc-a5b6-bc1f707c1ea1, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=1950c089-af27-485c-8b1d-eefa73ac5064) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:17.350 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 1950c089-af27-485c-8b1d-eefa73ac5064 in datapath afa5ed82-7034-4517-a39e-5fc8b872592f bound to our chassis
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:17.352 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network afa5ed82-7034-4517-a39e-5fc8b872592f
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:17.359 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[800f6d2d-17bd-432a-8b88-a9499815a675]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:17.362 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapafa5ed82-71 in ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:17.363 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapafa5ed82-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:17.363 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4083f704-07f6-4f81-a7cb-e55e4f729bf9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:17.364 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[48e4a492-f89f-461e-8cab-73912aec8c71]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:17 compute-2 systemd-machined[195142]: New machine qemu-85-instance-000000ba.
Jan 31 08:40:17 compute-2 systemd[1]: Started Virtual Machine qemu-85-instance-000000ba.
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:17.376 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[12f4266b-3e0c-4528-b94d-38145b3decc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:17 compute-2 systemd-udevd[314562]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:40:17 compute-2 NetworkManager[48999]: <info>  [1769848817.3966] device (tap1950c089-af): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:40:17 compute-2 NetworkManager[48999]: <info>  [1769848817.3970] device (tap1950c089-af): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:17.397 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cfba01b4-07b8-4fd7-8c59-f8f7c63622f1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:17.421 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[efc84552-adf0-4f9b-af1b-408df5fe086e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:17 compute-2 podman[314546]: 2026-01-31 08:40:17.425757885 +0000 UTC m=+0.058752708 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:17.425 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d15fd131-1b62-44df-bfd1-2d27d6256966]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:17 compute-2 NetworkManager[48999]: <info>  [1769848817.4265] manager: (tapafa5ed82-70): new Veth device (/org/freedesktop/NetworkManager/Devices/366)
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:17.446 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[ba62b05e-1930-4d54-92d3-f84bfea0861b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:17.449 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[2c8503c2-c659-4eef-9fcd-219bf0cd79d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:17 compute-2 NetworkManager[48999]: <info>  [1769848817.4641] device (tapafa5ed82-70): carrier: link connected
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:17.469 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[e95b859a-3876-41d1-8075-25dc565f78af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:17.480 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[93335142-761c-40ea-b64a-1ef601c432d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapafa5ed82-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:81:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 232], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 906993, 'reachable_time': 44063, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314598, 'error': None, 'target': 'ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:17.492 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9e53a60c-e9fd-4149-bbde-13a33ba5d05a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9a:815d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 906993, 'tstamp': 906993}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314599, 'error': None, 'target': 'ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:17.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:17.504 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b9d40912-8e5e-4468-8de8-fc696190dc32]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapafa5ed82-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:81:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 232], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 906993, 'reachable_time': 44063, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314600, 'error': None, 'target': 'ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:17.524 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[fdd39c7b-6b3a-4721-97c7-627c2617e54e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.595 226833 DEBUG nova.network.neutron [req-f3b6d738-2aa5-43d2-bd4f-b084d58e7720 req-e31e5530-91ab-4e0d-ba24-76c28a317fdf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Updated VIF entry in instance network info cache for port 1950c089-af27-485c-8b1d-eefa73ac5064. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.596 226833 DEBUG nova.network.neutron [req-f3b6d738-2aa5-43d2-bd4f-b084d58e7720 req-e31e5530-91ab-4e0d-ba24-76c28a317fdf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Updating instance_info_cache with network_info: [{"id": "1950c089-af27-485c-8b1d-eefa73ac5064", "address": "fa:16:3e:43:d3:f9", "network": {"id": "afa5ed82-7034-4517-a39e-5fc8b872592f", "bridge": "br-int", "label": "tempest-network-smoke--1489833336", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1950c089-af", "ovs_interfaceid": "1950c089-af27-485c-8b1d-eefa73ac5064", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.626 226833 DEBUG oslo_concurrency.lockutils [req-f3b6d738-2aa5-43d2-bd4f-b084d58e7720 req-e31e5530-91ab-4e0d-ba24-76c28a317fdf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-6193a6a7-8918-4625-b626-5d53c31bf0ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:17.681 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5e07511d-2718-47d1-b431-6dd8c8cb7d38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:17.682 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapafa5ed82-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:17.682 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:17.683 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapafa5ed82-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.688 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:17 compute-2 NetworkManager[48999]: <info>  [1769848817.6892] manager: (tapafa5ed82-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/367)
Jan 31 08:40:17 compute-2 kernel: tapafa5ed82-70: entered promiscuous mode
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.691 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:17.691 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapafa5ed82-70, col_values=(('external_ids', {'iface-id': '4738d457-86b5-496e-a4f4-02b685103bfd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.692 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:17 compute-2 ovn_controller[133834]: 2026-01-31T08:40:17Z|00743|binding|INFO|Releasing lport 4738d457-86b5-496e-a4f4-02b685103bfd from this chassis (sb_readonly=0)
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.693 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:17.693 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/afa5ed82-7034-4517-a39e-5fc8b872592f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/afa5ed82-7034-4517-a39e-5fc8b872592f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:17.695 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a36d3936-98c6-402c-991a-30f160c2a3ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:17.696 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-afa5ed82-7034-4517-a39e-5fc8b872592f
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/afa5ed82-7034-4517-a39e-5fc8b872592f.pid.haproxy
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID afa5ed82-7034-4517-a39e-5fc8b872592f
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.696 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:17.696 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f', 'env', 'PROCESS_TAG=haproxy-afa5ed82-7034-4517-a39e-5fc8b872592f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/afa5ed82-7034-4517-a39e-5fc8b872592f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.798 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.801 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848817.8007307, 6193a6a7-8918-4625-b626-5d53c31bf0ff => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.801 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] VM Resumed (Lifecycle Event)
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.803 226833 DEBUG nova.compute.manager [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.806 226833 INFO nova.virt.libvirt.driver [-] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Instance running successfully.
Jan 31 08:40:17 compute-2 virtqemud[226546]: argument unsupported: QEMU guest agent is not configured
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.809 226833 DEBUG nova.virt.libvirt.guest [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.810 226833 DEBUG nova.virt.libvirt.driver [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.875 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.878 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.967 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.968 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848817.8009079, 6193a6a7-8918-4625-b626-5d53c31bf0ff => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:40:17 compute-2 nova_compute[226829]: 2026-01-31 08:40:17.968 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] VM Started (Lifecycle Event)
Jan 31 08:40:18 compute-2 ceph-mon[77282]: pgmap v3220: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 20 KiB/s wr, 39 op/s
Jan 31 08:40:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2290654224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:40:18 compute-2 podman[314675]: 2026-01-31 08:40:18.025004052 +0000 UTC m=+0.042828994 container create b447b26c753eff7bbabd75b50ee53711feea28b6c1b2a57b33957067070bc745 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:40:18 compute-2 systemd[1]: Started libpod-conmon-b447b26c753eff7bbabd75b50ee53711feea28b6c1b2a57b33957067070bc745.scope.
Jan 31 08:40:18 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:40:18 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df375f3fb2a896fb88a669eda30421448b0e5841f5fe86fc2871ef2ec73e55fa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:18 compute-2 podman[314675]: 2026-01-31 08:40:18.088864188 +0000 UTC m=+0.106689150 container init b447b26c753eff7bbabd75b50ee53711feea28b6c1b2a57b33957067070bc745 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:40:18 compute-2 podman[314675]: 2026-01-31 08:40:18.093775052 +0000 UTC m=+0.111599994 container start b447b26c753eff7bbabd75b50ee53711feea28b6c1b2a57b33957067070bc745 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 08:40:18 compute-2 podman[314675]: 2026-01-31 08:40:18.004613298 +0000 UTC m=+0.022438240 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:40:18 compute-2 neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f[314691]: [NOTICE]   (314695) : New worker (314697) forked
Jan 31 08:40:18 compute-2 neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f[314691]: [NOTICE]   (314695) : Loading success.
Jan 31 08:40:18 compute-2 nova_compute[226829]: 2026-01-31 08:40:18.212 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:40:18 compute-2 nova_compute[226829]: 2026-01-31 08:40:18.217 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:40:18 compute-2 nova_compute[226829]: 2026-01-31 08:40:18.247 226833 DEBUG nova.compute.manager [req-760c964c-e775-4d7b-97c3-8544b0679d15 req-7615d733-86d2-41f1-af1f-9bee333d31d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received event network-vif-plugged-1950c089-af27-485c-8b1d-eefa73ac5064 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:40:18 compute-2 nova_compute[226829]: 2026-01-31 08:40:18.248 226833 DEBUG oslo_concurrency.lockutils [req-760c964c-e775-4d7b-97c3-8544b0679d15 req-7615d733-86d2-41f1-af1f-9bee333d31d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:18 compute-2 nova_compute[226829]: 2026-01-31 08:40:18.248 226833 DEBUG oslo_concurrency.lockutils [req-760c964c-e775-4d7b-97c3-8544b0679d15 req-7615d733-86d2-41f1-af1f-9bee333d31d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:18 compute-2 nova_compute[226829]: 2026-01-31 08:40:18.248 226833 DEBUG oslo_concurrency.lockutils [req-760c964c-e775-4d7b-97c3-8544b0679d15 req-7615d733-86d2-41f1-af1f-9bee333d31d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:18 compute-2 nova_compute[226829]: 2026-01-31 08:40:18.249 226833 DEBUG nova.compute.manager [req-760c964c-e775-4d7b-97c3-8544b0679d15 req-7615d733-86d2-41f1-af1f-9bee333d31d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] No waiting events found dispatching network-vif-plugged-1950c089-af27-485c-8b1d-eefa73ac5064 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:40:18 compute-2 nova_compute[226829]: 2026-01-31 08:40:18.249 226833 WARNING nova.compute.manager [req-760c964c-e775-4d7b-97c3-8544b0679d15 req-7615d733-86d2-41f1-af1f-9bee333d31d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received unexpected event network-vif-plugged-1950c089-af27-485c-8b1d-eefa73ac5064 for instance with vm_state resized and task_state None.
Jan 31 08:40:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:40:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:18.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:40:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:40:19 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1514366191' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:40:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:19.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:20 compute-2 nova_compute[226829]: 2026-01-31 08:40:20.057 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:20 compute-2 nova_compute[226829]: 2026-01-31 08:40:20.058 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:40:20 compute-2 ceph-mon[77282]: pgmap v3221: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 11 KiB/s wr, 38 op/s
Jan 31 08:40:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1514366191' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:40:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:40:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:20.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:40:20 compute-2 nova_compute[226829]: 2026-01-31 08:40:20.896 226833 DEBUG nova.compute.manager [req-a155ecf9-b5ec-41da-a54f-cb3cc94075b7 req-b4787b81-f1d9-40c5-90fd-17f7e601a92a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received event network-vif-plugged-1950c089-af27-485c-8b1d-eefa73ac5064 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:40:20 compute-2 nova_compute[226829]: 2026-01-31 08:40:20.897 226833 DEBUG oslo_concurrency.lockutils [req-a155ecf9-b5ec-41da-a54f-cb3cc94075b7 req-b4787b81-f1d9-40c5-90fd-17f7e601a92a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:20 compute-2 nova_compute[226829]: 2026-01-31 08:40:20.897 226833 DEBUG oslo_concurrency.lockutils [req-a155ecf9-b5ec-41da-a54f-cb3cc94075b7 req-b4787b81-f1d9-40c5-90fd-17f7e601a92a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:20 compute-2 nova_compute[226829]: 2026-01-31 08:40:20.898 226833 DEBUG oslo_concurrency.lockutils [req-a155ecf9-b5ec-41da-a54f-cb3cc94075b7 req-b4787b81-f1d9-40c5-90fd-17f7e601a92a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:20 compute-2 nova_compute[226829]: 2026-01-31 08:40:20.898 226833 DEBUG nova.compute.manager [req-a155ecf9-b5ec-41da-a54f-cb3cc94075b7 req-b4787b81-f1d9-40c5-90fd-17f7e601a92a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] No waiting events found dispatching network-vif-plugged-1950c089-af27-485c-8b1d-eefa73ac5064 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:40:20 compute-2 nova_compute[226829]: 2026-01-31 08:40:20.898 226833 WARNING nova.compute.manager [req-a155ecf9-b5ec-41da-a54f-cb3cc94075b7 req-b4787b81-f1d9-40c5-90fd-17f7e601a92a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received unexpected event network-vif-plugged-1950c089-af27-485c-8b1d-eefa73ac5064 for instance with vm_state resized and task_state None.
Jan 31 08:40:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:21.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:22 compute-2 ceph-mon[77282]: pgmap v3222: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 KiB/s wr, 110 op/s
Jan 31 08:40:22 compute-2 nova_compute[226829]: 2026-01-31 08:40:22.164 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:22.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:22 compute-2 nova_compute[226829]: 2026-01-31 08:40:22.801 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:23.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:23 compute-2 ovn_controller[133834]: 2026-01-31T08:40:23Z|00744|binding|INFO|Releasing lport 4738d457-86b5-496e-a4f4-02b685103bfd from this chassis (sb_readonly=0)
Jan 31 08:40:23 compute-2 nova_compute[226829]: 2026-01-31 08:40:23.957 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #154. Immutable memtables: 0.
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:24.082588) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 154
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848824082668, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 705, "num_deletes": 250, "total_data_size": 1266307, "memory_usage": 1283872, "flush_reason": "Manual Compaction"}
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #155: started
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848824087991, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 155, "file_size": 575300, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76186, "largest_seqno": 76886, "table_properties": {"data_size": 572221, "index_size": 986, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8355, "raw_average_key_size": 20, "raw_value_size": 565717, "raw_average_value_size": 1410, "num_data_blocks": 43, "num_entries": 401, "num_filter_entries": 401, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848779, "oldest_key_time": 1769848779, "file_creation_time": 1769848824, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 5462 microseconds, and 2344 cpu microseconds.
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:24.088052) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #155: 575300 bytes OK
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:24.088067) [db/memtable_list.cc:519] [default] Level-0 commit table #155 started
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:24.090344) [db/memtable_list.cc:722] [default] Level-0 commit table #155: memtable #1 done
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:24.090367) EVENT_LOG_v1 {"time_micros": 1769848824090356, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:24.090383) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 1262514, prev total WAL file size 1262514, number of live WAL files 2.
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000151.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:24.090830) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353130' seq:72057594037927935, type:22 .. '6D6772737461740032373631' seq:0, type:0; will stop at (end)
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [155(561KB)], [153(14MB)]
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848824090871, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [155], "files_L6": [153], "score": -1, "input_data_size": 15547455, "oldest_snapshot_seqno": -1}
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #156: 9792 keys, 11951067 bytes, temperature: kUnknown
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848824184887, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 156, "file_size": 11951067, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11889518, "index_size": 35964, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24517, "raw_key_size": 258311, "raw_average_key_size": 26, "raw_value_size": 11719539, "raw_average_value_size": 1196, "num_data_blocks": 1371, "num_entries": 9792, "num_filter_entries": 9792, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769848824, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 156, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:24.185165) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 11951067 bytes
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:24.187332) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.2 rd, 127.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 14.3 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(47.8) write-amplify(20.8) OK, records in: 10291, records dropped: 499 output_compression: NoCompression
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:24.187348) EVENT_LOG_v1 {"time_micros": 1769848824187340, "job": 98, "event": "compaction_finished", "compaction_time_micros": 94113, "compaction_time_cpu_micros": 24857, "output_level": 6, "num_output_files": 1, "total_output_size": 11951067, "num_input_records": 10291, "num_output_records": 9792, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848824187563, "job": 98, "event": "table_file_deletion", "file_number": 155}
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000153.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848824188462, "job": 98, "event": "table_file_deletion", "file_number": 153}
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:24.090778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:24.188496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:24.188504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:24.188506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:24.188508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:24 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:24.188510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:24.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:24 compute-2 ceph-mon[77282]: pgmap v3223: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 KiB/s wr, 131 op/s
Jan 31 08:40:24 compute-2 nova_compute[226829]: 2026-01-31 08:40:24.920 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848809.9192338, da76f9e6-a924-4a04-855a-2764b3edc1a3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:40:24 compute-2 nova_compute[226829]: 2026-01-31 08:40:24.920 226833 INFO nova.compute.manager [-] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] VM Stopped (Lifecycle Event)
Jan 31 08:40:25 compute-2 nova_compute[226829]: 2026-01-31 08:40:25.308 226833 DEBUG nova.compute.manager [None req-59adc2e6-8282-47f3-805d-71da7ce94f63 - - - - - -] [instance: da76f9e6-a924-4a04-855a-2764b3edc1a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:40:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:25.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:26 compute-2 ceph-mon[77282]: pgmap v3224: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 KiB/s wr, 119 op/s
Jan 31 08:40:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:40:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:26.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:40:27 compute-2 nova_compute[226829]: 2026-01-31 08:40:27.169 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:27.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:27 compute-2 nova_compute[226829]: 2026-01-31 08:40:27.803 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:28 compute-2 ceph-mon[77282]: pgmap v3225: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 673 B/s wr, 100 op/s
Jan 31 08:40:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:28.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e381 e381: 3 total, 3 up, 3 in
Jan 31 08:40:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:40:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:29.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:40:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:29 compute-2 ovn_controller[133834]: 2026-01-31T08:40:29Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:43:d3:f9 10.100.0.8
Jan 31 08:40:30 compute-2 ceph-mon[77282]: pgmap v3226: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 597 B/s wr, 88 op/s
Jan 31 08:40:30 compute-2 ceph-mon[77282]: osdmap e381: 3 total, 3 up, 3 in
Jan 31 08:40:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2559611403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:40:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:30.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:40:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:31.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:40:32 compute-2 nova_compute[226829]: 2026-01-31 08:40:32.172 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:32 compute-2 ceph-mon[77282]: pgmap v3228: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 12 KiB/s wr, 59 op/s
Jan 31 08:40:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:40:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:32.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:40:32 compute-2 nova_compute[226829]: 2026-01-31 08:40:32.806 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #157. Immutable memtables: 0.
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:33.261913) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 157
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848833261943, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 353, "num_deletes": 251, "total_data_size": 289081, "memory_usage": 296128, "flush_reason": "Manual Compaction"}
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #158: started
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848833264927, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 158, "file_size": 190497, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76891, "largest_seqno": 77239, "table_properties": {"data_size": 188326, "index_size": 334, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5534, "raw_average_key_size": 18, "raw_value_size": 183952, "raw_average_value_size": 623, "num_data_blocks": 15, "num_entries": 295, "num_filter_entries": 295, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848824, "oldest_key_time": 1769848824, "file_creation_time": 1769848833, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 3052 microseconds, and 1014 cpu microseconds.
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:33.264963) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #158: 190497 bytes OK
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:33.264977) [db/memtable_list.cc:519] [default] Level-0 commit table #158 started
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:33.266932) [db/memtable_list.cc:722] [default] Level-0 commit table #158: memtable #1 done
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:33.266944) EVENT_LOG_v1 {"time_micros": 1769848833266940, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:33.266958) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 286683, prev total WAL file size 286683, number of live WAL files 2.
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000154.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:33.267327) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [158(186KB)], [156(11MB)]
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848833267364, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [158], "files_L6": [156], "score": -1, "input_data_size": 12141564, "oldest_snapshot_seqno": -1}
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #159: 9573 keys, 10210088 bytes, temperature: kUnknown
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848833315685, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 159, "file_size": 10210088, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10151544, "index_size": 33535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23941, "raw_key_size": 254483, "raw_average_key_size": 26, "raw_value_size": 9986895, "raw_average_value_size": 1043, "num_data_blocks": 1262, "num_entries": 9573, "num_filter_entries": 9573, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769848833, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 159, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:33.315960) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 10210088 bytes
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:33.317404) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 250.7 rd, 210.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 11.4 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(117.3) write-amplify(53.6) OK, records in: 10087, records dropped: 514 output_compression: NoCompression
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:33.317422) EVENT_LOG_v1 {"time_micros": 1769848833317414, "job": 100, "event": "compaction_finished", "compaction_time_micros": 48429, "compaction_time_cpu_micros": 21401, "output_level": 6, "num_output_files": 1, "total_output_size": 10210088, "num_input_records": 10087, "num_output_records": 9573, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848833317645, "job": 100, "event": "table_file_deletion", "file_number": 158}
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000156.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848833318837, "job": 100, "event": "table_file_deletion", "file_number": 156}
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:33.267213) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:33.318887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:33.318891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:33.318893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:33.318895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:33 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:40:33.318897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:40:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:33.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:40:34 compute-2 ceph-mon[77282]: pgmap v3229: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 396 KiB/s rd, 12 KiB/s wr, 39 op/s
Jan 31 08:40:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:40:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:34.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:40:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:35.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:35 compute-2 nova_compute[226829]: 2026-01-31 08:40:35.573 226833 INFO nova.compute.manager [None req-37330d20-e650-43be-89aa-fcbc000cf402 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Get console output
Jan 31 08:40:35 compute-2 nova_compute[226829]: 2026-01-31 08:40:35.581 267670 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 31 08:40:36 compute-2 ceph-mon[77282]: pgmap v3230: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 645 KiB/s rd, 15 KiB/s wr, 67 op/s
Jan 31 08:40:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:36.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:36 compute-2 sudo[314715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:40:36 compute-2 sudo[314715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:36 compute-2 sudo[314715]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:36 compute-2 nova_compute[226829]: 2026-01-31 08:40:36.829 226833 DEBUG nova.compute.manager [req-ebff615d-9de4-4f04-99bc-4c27af280825 req-b4599d94-140e-4bf1-961f-90fcfe99120f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received event network-changed-1950c089-af27-485c-8b1d-eefa73ac5064 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:40:36 compute-2 nova_compute[226829]: 2026-01-31 08:40:36.829 226833 DEBUG nova.compute.manager [req-ebff615d-9de4-4f04-99bc-4c27af280825 req-b4599d94-140e-4bf1-961f-90fcfe99120f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Refreshing instance network info cache due to event network-changed-1950c089-af27-485c-8b1d-eefa73ac5064. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:40:36 compute-2 nova_compute[226829]: 2026-01-31 08:40:36.830 226833 DEBUG oslo_concurrency.lockutils [req-ebff615d-9de4-4f04-99bc-4c27af280825 req-b4599d94-140e-4bf1-961f-90fcfe99120f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-6193a6a7-8918-4625-b626-5d53c31bf0ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:40:36 compute-2 nova_compute[226829]: 2026-01-31 08:40:36.830 226833 DEBUG oslo_concurrency.lockutils [req-ebff615d-9de4-4f04-99bc-4c27af280825 req-b4599d94-140e-4bf1-961f-90fcfe99120f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-6193a6a7-8918-4625-b626-5d53c31bf0ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:40:36 compute-2 nova_compute[226829]: 2026-01-31 08:40:36.831 226833 DEBUG nova.network.neutron [req-ebff615d-9de4-4f04-99bc-4c27af280825 req-b4599d94-140e-4bf1-961f-90fcfe99120f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Refreshing network info cache for port 1950c089-af27-485c-8b1d-eefa73ac5064 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:40:36 compute-2 sudo[314740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:40:36 compute-2 sudo[314740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:36 compute-2 sudo[314740]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:37 compute-2 nova_compute[226829]: 2026-01-31 08:40:37.047 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:37 compute-2 nova_compute[226829]: 2026-01-31 08:40:37.218 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:37 compute-2 nova_compute[226829]: 2026-01-31 08:40:37.225 226833 DEBUG oslo_concurrency.lockutils [None req-2bd46a43-f030-4247-ba24-da5568da1c79 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "6193a6a7-8918-4625-b626-5d53c31bf0ff" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:37 compute-2 nova_compute[226829]: 2026-01-31 08:40:37.226 226833 DEBUG oslo_concurrency.lockutils [None req-2bd46a43-f030-4247-ba24-da5568da1c79 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:37 compute-2 nova_compute[226829]: 2026-01-31 08:40:37.226 226833 DEBUG oslo_concurrency.lockutils [None req-2bd46a43-f030-4247-ba24-da5568da1c79 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:37 compute-2 nova_compute[226829]: 2026-01-31 08:40:37.227 226833 DEBUG oslo_concurrency.lockutils [None req-2bd46a43-f030-4247-ba24-da5568da1c79 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:37 compute-2 nova_compute[226829]: 2026-01-31 08:40:37.227 226833 DEBUG oslo_concurrency.lockutils [None req-2bd46a43-f030-4247-ba24-da5568da1c79 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:37 compute-2 nova_compute[226829]: 2026-01-31 08:40:37.229 226833 INFO nova.compute.manager [None req-2bd46a43-f030-4247-ba24-da5568da1c79 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Terminating instance
Jan 31 08:40:37 compute-2 nova_compute[226829]: 2026-01-31 08:40:37.231 226833 DEBUG nova.compute.manager [None req-2bd46a43-f030-4247-ba24-da5568da1c79 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:40:37 compute-2 kernel: tap1950c089-af (unregistering): left promiscuous mode
Jan 31 08:40:37 compute-2 NetworkManager[48999]: <info>  [1769848837.2964] device (tap1950c089-af): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:40:37 compute-2 ovn_controller[133834]: 2026-01-31T08:40:37Z|00745|binding|INFO|Releasing lport 1950c089-af27-485c-8b1d-eefa73ac5064 from this chassis (sb_readonly=0)
Jan 31 08:40:37 compute-2 ovn_controller[133834]: 2026-01-31T08:40:37Z|00746|binding|INFO|Setting lport 1950c089-af27-485c-8b1d-eefa73ac5064 down in Southbound
Jan 31 08:40:37 compute-2 nova_compute[226829]: 2026-01-31 08:40:37.303 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:37 compute-2 ovn_controller[133834]: 2026-01-31T08:40:37Z|00747|binding|INFO|Removing iface tap1950c089-af ovn-installed in OVS
Jan 31 08:40:37 compute-2 nova_compute[226829]: 2026-01-31 08:40:37.305 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:37 compute-2 nova_compute[226829]: 2026-01-31 08:40:37.311 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:37 compute-2 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000ba.scope: Deactivated successfully.
Jan 31 08:40:37 compute-2 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000ba.scope: Consumed 12.880s CPU time.
Jan 31 08:40:37 compute-2 systemd-machined[195142]: Machine qemu-85-instance-000000ba terminated.
Jan 31 08:40:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:37.359 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:d3:f9 10.100.0.8'], port_security=['fa:16:3e:43:d3:f9 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6193a6a7-8918-4625-b626-5d53c31bf0ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-afa5ed82-7034-4517-a39e-5fc8b872592f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '8', 'neutron:security_group_ids': '1618d12c-038a-456c-8d56-5eb45e7b0852', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=002a439c-290e-44cc-a5b6-bc1f707c1ea1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=1950c089-af27-485c-8b1d-eefa73ac5064) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:40:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:37.361 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 1950c089-af27-485c-8b1d-eefa73ac5064 in datapath afa5ed82-7034-4517-a39e-5fc8b872592f unbound from our chassis
Jan 31 08:40:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:37.362 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network afa5ed82-7034-4517-a39e-5fc8b872592f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:40:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:37.364 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[87e16099-6e4d-43ba-8613-f736ee52d570]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:37.364 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f namespace which is not needed anymore
Jan 31 08:40:37 compute-2 NetworkManager[48999]: <info>  [1769848837.4482] manager: (tap1950c089-af): new Tun device (/org/freedesktop/NetworkManager/Devices/368)
Jan 31 08:40:37 compute-2 nova_compute[226829]: 2026-01-31 08:40:37.464 226833 INFO nova.virt.libvirt.driver [-] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Instance destroyed successfully.
Jan 31 08:40:37 compute-2 nova_compute[226829]: 2026-01-31 08:40:37.465 226833 DEBUG nova.objects.instance [None req-2bd46a43-f030-4247-ba24-da5568da1c79 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'resources' on Instance uuid 6193a6a7-8918-4625-b626-5d53c31bf0ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:40:37 compute-2 neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f[314691]: [NOTICE]   (314695) : haproxy version is 2.8.14-c23fe91
Jan 31 08:40:37 compute-2 neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f[314691]: [NOTICE]   (314695) : path to executable is /usr/sbin/haproxy
Jan 31 08:40:37 compute-2 neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f[314691]: [WARNING]  (314695) : Exiting Master process...
Jan 31 08:40:37 compute-2 neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f[314691]: [ALERT]    (314695) : Current worker (314697) exited with code 143 (Terminated)
Jan 31 08:40:37 compute-2 neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f[314691]: [WARNING]  (314695) : All workers exited. Exiting... (0)
Jan 31 08:40:37 compute-2 systemd[1]: libpod-b447b26c753eff7bbabd75b50ee53711feea28b6c1b2a57b33957067070bc745.scope: Deactivated successfully.
Jan 31 08:40:37 compute-2 podman[314790]: 2026-01-31 08:40:37.496958933 +0000 UTC m=+0.050699988 container died b447b26c753eff7bbabd75b50ee53711feea28b6c1b2a57b33957067070bc745 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 08:40:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:37.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:37 compute-2 nova_compute[226829]: 2026-01-31 08:40:37.534 226833 DEBUG nova.virt.libvirt.vif [None req-2bd46a43-f030-4247-ba24-da5568da1c79 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:38:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1044697057',display_name='tempest-TestNetworkAdvancedServerOps-server-1044697057',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1044697057',id=186,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFM+PhejEoD7g/QWN1GYstJMXVFnv1wBVmxecXIfvr4KjdawbuS6IVqr3y8FzKBH2jXkdyk7w4kVVSMOQCA1tqgtHQYS5flQfReRJgL/KNYlba9qTCk6eFD00Tadz9cy3w==',key_name='tempest-TestNetworkAdvancedServerOps-1072390140',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:40:17Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-hmxtgjg2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:40:30Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=6193a6a7-8918-4625-b626-5d53c31bf0ff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1950c089-af27-485c-8b1d-eefa73ac5064", "address": "fa:16:3e:43:d3:f9", "network": {"id": "afa5ed82-7034-4517-a39e-5fc8b872592f", "bridge": "br-int", "label": "tempest-network-smoke--1489833336", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1950c089-af", "ovs_interfaceid": "1950c089-af27-485c-8b1d-eefa73ac5064", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:40:37 compute-2 nova_compute[226829]: 2026-01-31 08:40:37.534 226833 DEBUG nova.network.os_vif_util [None req-2bd46a43-f030-4247-ba24-da5568da1c79 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "1950c089-af27-485c-8b1d-eefa73ac5064", "address": "fa:16:3e:43:d3:f9", "network": {"id": "afa5ed82-7034-4517-a39e-5fc8b872592f", "bridge": "br-int", "label": "tempest-network-smoke--1489833336", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1950c089-af", "ovs_interfaceid": "1950c089-af27-485c-8b1d-eefa73ac5064", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:40:37 compute-2 nova_compute[226829]: 2026-01-31 08:40:37.536 226833 DEBUG nova.network.os_vif_util [None req-2bd46a43-f030-4247-ba24-da5568da1c79 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:43:d3:f9,bridge_name='br-int',has_traffic_filtering=True,id=1950c089-af27-485c-8b1d-eefa73ac5064,network=Network(afa5ed82-7034-4517-a39e-5fc8b872592f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1950c089-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:40:37 compute-2 nova_compute[226829]: 2026-01-31 08:40:37.536 226833 DEBUG os_vif [None req-2bd46a43-f030-4247-ba24-da5568da1c79 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:d3:f9,bridge_name='br-int',has_traffic_filtering=True,id=1950c089-af27-485c-8b1d-eefa73ac5064,network=Network(afa5ed82-7034-4517-a39e-5fc8b872592f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1950c089-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:40:37 compute-2 nova_compute[226829]: 2026-01-31 08:40:37.539 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:37 compute-2 nova_compute[226829]: 2026-01-31 08:40:37.539 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1950c089-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:40:37 compute-2 nova_compute[226829]: 2026-01-31 08:40:37.542 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:40:37 compute-2 nova_compute[226829]: 2026-01-31 08:40:37.546 226833 INFO os_vif [None req-2bd46a43-f030-4247-ba24-da5568da1c79 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:d3:f9,bridge_name='br-int',has_traffic_filtering=True,id=1950c089-af27-485c-8b1d-eefa73ac5064,network=Network(afa5ed82-7034-4517-a39e-5fc8b872592f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1950c089-af')
Jan 31 08:40:37 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b447b26c753eff7bbabd75b50ee53711feea28b6c1b2a57b33957067070bc745-userdata-shm.mount: Deactivated successfully.
Jan 31 08:40:37 compute-2 systemd[1]: var-lib-containers-storage-overlay-df375f3fb2a896fb88a669eda30421448b0e5841f5fe86fc2871ef2ec73e55fa-merged.mount: Deactivated successfully.
Jan 31 08:40:37 compute-2 podman[314790]: 2026-01-31 08:40:37.588895712 +0000 UTC m=+0.142636767 container cleanup b447b26c753eff7bbabd75b50ee53711feea28b6c1b2a57b33957067070bc745 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 08:40:37 compute-2 systemd[1]: libpod-conmon-b447b26c753eff7bbabd75b50ee53711feea28b6c1b2a57b33957067070bc745.scope: Deactivated successfully.
Jan 31 08:40:37 compute-2 podman[314849]: 2026-01-31 08:40:37.675151157 +0000 UTC m=+0.071516045 container remove b447b26c753eff7bbabd75b50ee53711feea28b6c1b2a57b33957067070bc745 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:40:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:37.680 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[fb9e9868-9a47-4f95-b0ec-766697af5824]: (4, ('Sat Jan 31 08:40:37 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f (b447b26c753eff7bbabd75b50ee53711feea28b6c1b2a57b33957067070bc745)\nb447b26c753eff7bbabd75b50ee53711feea28b6c1b2a57b33957067070bc745\nSat Jan 31 08:40:37 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f (b447b26c753eff7bbabd75b50ee53711feea28b6c1b2a57b33957067070bc745)\nb447b26c753eff7bbabd75b50ee53711feea28b6c1b2a57b33957067070bc745\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:37.681 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f72f8d17-d112-480c-8086-ebbcf535102d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:37.682 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapafa5ed82-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:40:37 compute-2 kernel: tapafa5ed82-70: left promiscuous mode
Jan 31 08:40:37 compute-2 nova_compute[226829]: 2026-01-31 08:40:37.685 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:37 compute-2 nova_compute[226829]: 2026-01-31 08:40:37.689 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:37.692 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8e4d6fab-dc93-4356-ab4f-a3ad77c39fc1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:37.707 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1b10b2f6-6346-407f-ac40-cdac39bc4f5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:37.709 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d5b724e8-bb26-4d2e-a380-6701a5530668]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:37.722 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5058730f-ffe2-4366-b2ee-3fbe3875d5f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 906989, 'reachable_time': 44846, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314863, 'error': None, 'target': 'ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:37.725 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:40:37 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:40:37.726 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[359d33e3-b283-4dda-b9c7-21898c5ab3c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:37 compute-2 systemd[1]: run-netns-ovnmeta\x2dafa5ed82\x2d7034\x2d4517\x2da39e\x2d5fc8b872592f.mount: Deactivated successfully.
Jan 31 08:40:37 compute-2 nova_compute[226829]: 2026-01-31 08:40:37.807 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:38 compute-2 nova_compute[226829]: 2026-01-31 08:40:38.077 226833 DEBUG nova.compute.manager [req-4dee5b60-1644-4cb9-abcf-7a7ee88a4aeb req-c5fd11a2-32d4-4987-b82c-6012ffe6b258 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received event network-vif-unplugged-1950c089-af27-485c-8b1d-eefa73ac5064 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:40:38 compute-2 nova_compute[226829]: 2026-01-31 08:40:38.078 226833 DEBUG oslo_concurrency.lockutils [req-4dee5b60-1644-4cb9-abcf-7a7ee88a4aeb req-c5fd11a2-32d4-4987-b82c-6012ffe6b258 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:38 compute-2 nova_compute[226829]: 2026-01-31 08:40:38.078 226833 DEBUG oslo_concurrency.lockutils [req-4dee5b60-1644-4cb9-abcf-7a7ee88a4aeb req-c5fd11a2-32d4-4987-b82c-6012ffe6b258 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:38 compute-2 nova_compute[226829]: 2026-01-31 08:40:38.079 226833 DEBUG oslo_concurrency.lockutils [req-4dee5b60-1644-4cb9-abcf-7a7ee88a4aeb req-c5fd11a2-32d4-4987-b82c-6012ffe6b258 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:38 compute-2 nova_compute[226829]: 2026-01-31 08:40:38.079 226833 DEBUG nova.compute.manager [req-4dee5b60-1644-4cb9-abcf-7a7ee88a4aeb req-c5fd11a2-32d4-4987-b82c-6012ffe6b258 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] No waiting events found dispatching network-vif-unplugged-1950c089-af27-485c-8b1d-eefa73ac5064 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:40:38 compute-2 nova_compute[226829]: 2026-01-31 08:40:38.079 226833 DEBUG nova.compute.manager [req-4dee5b60-1644-4cb9-abcf-7a7ee88a4aeb req-c5fd11a2-32d4-4987-b82c-6012ffe6b258 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received event network-vif-unplugged-1950c089-af27-485c-8b1d-eefa73ac5064 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:40:38 compute-2 ceph-mon[77282]: pgmap v3231: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 646 KiB/s rd, 15 KiB/s wr, 67 op/s
Jan 31 08:40:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:38.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:38 compute-2 nova_compute[226829]: 2026-01-31 08:40:38.524 226833 INFO nova.virt.libvirt.driver [None req-2bd46a43-f030-4247-ba24-da5568da1c79 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Deleting instance files /var/lib/nova/instances/6193a6a7-8918-4625-b626-5d53c31bf0ff_del
Jan 31 08:40:38 compute-2 nova_compute[226829]: 2026-01-31 08:40:38.525 226833 INFO nova.virt.libvirt.driver [None req-2bd46a43-f030-4247-ba24-da5568da1c79 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Deletion of /var/lib/nova/instances/6193a6a7-8918-4625-b626-5d53c31bf0ff_del complete
Jan 31 08:40:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 e382: 3 total, 3 up, 3 in
Jan 31 08:40:39 compute-2 nova_compute[226829]: 2026-01-31 08:40:39.149 226833 INFO nova.compute.manager [None req-2bd46a43-f030-4247-ba24-da5568da1c79 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Took 1.92 seconds to destroy the instance on the hypervisor.
Jan 31 08:40:39 compute-2 nova_compute[226829]: 2026-01-31 08:40:39.150 226833 DEBUG oslo.service.loopingcall [None req-2bd46a43-f030-4247-ba24-da5568da1c79 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:40:39 compute-2 nova_compute[226829]: 2026-01-31 08:40:39.150 226833 DEBUG nova.compute.manager [-] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:40:39 compute-2 nova_compute[226829]: 2026-01-31 08:40:39.150 226833 DEBUG nova.network.neutron [-] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:40:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:40:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:39.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:40:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:40.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:41 compute-2 ceph-mon[77282]: pgmap v3232: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 646 KiB/s rd, 15 KiB/s wr, 67 op/s
Jan 31 08:40:41 compute-2 ceph-mon[77282]: osdmap e382: 3 total, 3 up, 3 in
Jan 31 08:40:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:40:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:41.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:40:42 compute-2 ceph-mon[77282]: pgmap v3234: 305 pgs: 305 active+clean; 415 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 349 KiB/s rd, 17 KiB/s wr, 52 op/s
Jan 31 08:40:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:42.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:42 compute-2 nova_compute[226829]: 2026-01-31 08:40:42.542 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:42 compute-2 nova_compute[226829]: 2026-01-31 08:40:42.830 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:42 compute-2 podman[314868]: 2026-01-31 08:40:42.926609287 +0000 UTC m=+0.063554379 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS 
Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:40:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:43.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:43 compute-2 sudo[314895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:40:43 compute-2 sudo[314895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:43 compute-2 sudo[314895]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:43 compute-2 sudo[314920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:40:43 compute-2 sudo[314920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:43 compute-2 sudo[314920]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:43 compute-2 sudo[314945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:40:43 compute-2 sudo[314945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:43 compute-2 sudo[314945]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:43 compute-2 sudo[314970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:40:43 compute-2 sudo[314970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:44 compute-2 ceph-mon[77282]: pgmap v3235: 305 pgs: 305 active+clean; 398 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 273 KiB/s rd, 17 KiB/s wr, 59 op/s
Jan 31 08:40:44 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:40:44 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:40:44 compute-2 sudo[314970]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:44.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:44 compute-2 nova_compute[226829]: 2026-01-31 08:40:44.693 226833 DEBUG nova.compute.manager [req-f701543e-5c53-4d2d-b3a6-d62e0f3c29c7 req-e3128a74-189e-4283-bd2b-609892a1c6cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received event network-vif-plugged-1950c089-af27-485c-8b1d-eefa73ac5064 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:40:44 compute-2 nova_compute[226829]: 2026-01-31 08:40:44.694 226833 DEBUG oslo_concurrency.lockutils [req-f701543e-5c53-4d2d-b3a6-d62e0f3c29c7 req-e3128a74-189e-4283-bd2b-609892a1c6cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:44 compute-2 nova_compute[226829]: 2026-01-31 08:40:44.695 226833 DEBUG oslo_concurrency.lockutils [req-f701543e-5c53-4d2d-b3a6-d62e0f3c29c7 req-e3128a74-189e-4283-bd2b-609892a1c6cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:44 compute-2 nova_compute[226829]: 2026-01-31 08:40:44.695 226833 DEBUG oslo_concurrency.lockutils [req-f701543e-5c53-4d2d-b3a6-d62e0f3c29c7 req-e3128a74-189e-4283-bd2b-609892a1c6cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:44 compute-2 nova_compute[226829]: 2026-01-31 08:40:44.695 226833 DEBUG nova.compute.manager [req-f701543e-5c53-4d2d-b3a6-d62e0f3c29c7 req-e3128a74-189e-4283-bd2b-609892a1c6cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] No waiting events found dispatching network-vif-plugged-1950c089-af27-485c-8b1d-eefa73ac5064 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:40:44 compute-2 nova_compute[226829]: 2026-01-31 08:40:44.695 226833 WARNING nova.compute.manager [req-f701543e-5c53-4d2d-b3a6-d62e0f3c29c7 req-e3128a74-189e-4283-bd2b-609892a1c6cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received unexpected event network-vif-plugged-1950c089-af27-485c-8b1d-eefa73ac5064 for instance with vm_state active and task_state deleting.
Jan 31 08:40:44 compute-2 nova_compute[226829]: 2026-01-31 08:40:44.720 226833 DEBUG nova.network.neutron [req-ebff615d-9de4-4f04-99bc-4c27af280825 req-b4599d94-140e-4bf1-961f-90fcfe99120f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Updated VIF entry in instance network info cache for port 1950c089-af27-485c-8b1d-eefa73ac5064. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:40:44 compute-2 nova_compute[226829]: 2026-01-31 08:40:44.720 226833 DEBUG nova.network.neutron [req-ebff615d-9de4-4f04-99bc-4c27af280825 req-b4599d94-140e-4bf1-961f-90fcfe99120f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Updating instance_info_cache with network_info: [{"id": "1950c089-af27-485c-8b1d-eefa73ac5064", "address": "fa:16:3e:43:d3:f9", "network": {"id": "afa5ed82-7034-4517-a39e-5fc8b872592f", "bridge": "br-int", "label": "tempest-network-smoke--1489833336", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1950c089-af", "ovs_interfaceid": "1950c089-af27-485c-8b1d-eefa73ac5064", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:40:45 compute-2 nova_compute[226829]: 2026-01-31 08:40:45.017 226833 DEBUG oslo_concurrency.lockutils [req-ebff615d-9de4-4f04-99bc-4c27af280825 req-b4599d94-140e-4bf1-961f-90fcfe99120f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-6193a6a7-8918-4625-b626-5d53c31bf0ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:40:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:40:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:40:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:40:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:40:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:40:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:40:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:40:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:45.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:40:46 compute-2 ceph-mon[77282]: pgmap v3236: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 14 KiB/s wr, 34 op/s
Jan 31 08:40:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:46.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:46 compute-2 nova_compute[226829]: 2026-01-31 08:40:46.830 226833 DEBUG nova.network.neutron [-] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:40:47 compute-2 nova_compute[226829]: 2026-01-31 08:40:47.390 226833 INFO nova.compute.manager [-] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Took 8.24 seconds to deallocate network for instance.
Jan 31 08:40:47 compute-2 nova_compute[226829]: 2026-01-31 08:40:47.463 226833 DEBUG nova.compute.manager [req-19667ebb-9342-42bc-9abf-6f1e03e67e8b req-cb86cddb-d3a4-46f8-827a-123282f5ea64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received event network-vif-deleted-1950c089-af27-485c-8b1d-eefa73ac5064 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:40:47 compute-2 nova_compute[226829]: 2026-01-31 08:40:47.464 226833 INFO nova.compute.manager [req-19667ebb-9342-42bc-9abf-6f1e03e67e8b req-cb86cddb-d3a4-46f8-827a-123282f5ea64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Neutron deleted interface 1950c089-af27-485c-8b1d-eefa73ac5064; detaching it from the instance and deleting it from the info cache
Jan 31 08:40:47 compute-2 nova_compute[226829]: 2026-01-31 08:40:47.464 226833 DEBUG nova.network.neutron [req-19667ebb-9342-42bc-9abf-6f1e03e67e8b req-cb86cddb-d3a4-46f8-827a-123282f5ea64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:40:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:47.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:47 compute-2 nova_compute[226829]: 2026-01-31 08:40:47.545 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:47 compute-2 nova_compute[226829]: 2026-01-31 08:40:47.832 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:47 compute-2 nova_compute[226829]: 2026-01-31 08:40:47.868 226833 DEBUG nova.compute.manager [req-19667ebb-9342-42bc-9abf-6f1e03e67e8b req-cb86cddb-d3a4-46f8-827a-123282f5ea64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Detach interface failed, port_id=1950c089-af27-485c-8b1d-eefa73ac5064, reason: Instance 6193a6a7-8918-4625-b626-5d53c31bf0ff could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:40:47 compute-2 nova_compute[226829]: 2026-01-31 08:40:47.882 226833 DEBUG oslo_concurrency.lockutils [None req-2bd46a43-f030-4247-ba24-da5568da1c79 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:47 compute-2 nova_compute[226829]: 2026-01-31 08:40:47.882 226833 DEBUG oslo_concurrency.lockutils [None req-2bd46a43-f030-4247-ba24-da5568da1c79 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:47 compute-2 nova_compute[226829]: 2026-01-31 08:40:47.935 226833 DEBUG oslo_concurrency.processutils [None req-2bd46a43-f030-4247-ba24-da5568da1c79 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:40:48 compute-2 podman[315051]: 2026-01-31 08:40:48.159571334 +0000 UTC m=+0.047761599 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:40:48 compute-2 ceph-mon[77282]: pgmap v3237: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 14 KiB/s wr, 34 op/s
Jan 31 08:40:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:40:48 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1154103464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:40:48 compute-2 nova_compute[226829]: 2026-01-31 08:40:48.348 226833 DEBUG oslo_concurrency.processutils [None req-2bd46a43-f030-4247-ba24-da5568da1c79 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:40:48 compute-2 nova_compute[226829]: 2026-01-31 08:40:48.355 226833 DEBUG nova.compute.provider_tree [None req-2bd46a43-f030-4247-ba24-da5568da1c79 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:40:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:48.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:48 compute-2 nova_compute[226829]: 2026-01-31 08:40:48.629 226833 DEBUG nova.scheduler.client.report [None req-2bd46a43-f030-4247-ba24-da5568da1c79 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:40:49 compute-2 nova_compute[226829]: 2026-01-31 08:40:49.307 226833 DEBUG oslo_concurrency.lockutils [None req-2bd46a43-f030-4247-ba24-da5568da1c79 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.425s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1154103464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:40:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:49.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:50 compute-2 nova_compute[226829]: 2026-01-31 08:40:50.057 226833 INFO nova.scheduler.client.report [None req-2bd46a43-f030-4247-ba24-da5568da1c79 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Deleted allocations for instance 6193a6a7-8918-4625-b626-5d53c31bf0ff
Jan 31 08:40:50 compute-2 ceph-mon[77282]: pgmap v3238: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 14 KiB/s wr, 34 op/s
Jan 31 08:40:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:50.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:50 compute-2 nova_compute[226829]: 2026-01-31 08:40:50.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:50 compute-2 nova_compute[226829]: 2026-01-31 08:40:50.500 226833 DEBUG oslo_concurrency.lockutils [None req-2bd46a43-f030-4247-ba24-da5568da1c79 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 13.274s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:51 compute-2 sudo[315073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:40:51 compute-2 sudo[315073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:51 compute-2 sudo[315073]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:51 compute-2 sudo[315098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:40:51 compute-2 sudo[315098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:51 compute-2 sudo[315098]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:51.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:52 compute-2 ceph-mon[77282]: pgmap v3239: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 29 op/s
Jan 31 08:40:52 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:40:52 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:40:52 compute-2 nova_compute[226829]: 2026-01-31 08:40:52.463 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848837.4617164, 6193a6a7-8918-4625-b626-5d53c31bf0ff => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:40:52 compute-2 nova_compute[226829]: 2026-01-31 08:40:52.463 226833 INFO nova.compute.manager [-] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] VM Stopped (Lifecycle Event)
Jan 31 08:40:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:52.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:52 compute-2 nova_compute[226829]: 2026-01-31 08:40:52.549 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:52 compute-2 nova_compute[226829]: 2026-01-31 08:40:52.605 226833 DEBUG nova.compute.manager [None req-8c0ff2b6-add2-457c-8b80-64db7310e84b - - - - - -] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:40:52 compute-2 nova_compute[226829]: 2026-01-31 08:40:52.834 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:53 compute-2 nova_compute[226829]: 2026-01-31 08:40:53.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:53.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:53 compute-2 ceph-mon[77282]: pgmap v3240: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 852 B/s wr, 14 op/s
Jan 31 08:40:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3761983410' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:40:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3761983410' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:40:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:54.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:55 compute-2 nova_compute[226829]: 2026-01-31 08:40:55.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:55 compute-2 nova_compute[226829]: 2026-01-31 08:40:55.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:40:55 compute-2 nova_compute[226829]: 2026-01-31 08:40:55.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:40:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:55.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:55 compute-2 nova_compute[226829]: 2026-01-31 08:40:55.922 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:40:55 compute-2 nova_compute[226829]: 2026-01-31 08:40:55.922 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:55 compute-2 nova_compute[226829]: 2026-01-31 08:40:55.923 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:56 compute-2 ceph-mon[77282]: pgmap v3241: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 KiB/s rd, 341 B/s wr, 3 op/s
Jan 31 08:40:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:40:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:56.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:40:56 compute-2 nova_compute[226829]: 2026-01-31 08:40:56.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:56 compute-2 sudo[315126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:40:56 compute-2 sudo[315126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:56 compute-2 sudo[315126]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:56 compute-2 sudo[315151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:40:56 compute-2 sudo[315151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:56 compute-2 sudo[315151]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:57 compute-2 nova_compute[226829]: 2026-01-31 08:40:57.552 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:40:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:57.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:40:57 compute-2 nova_compute[226829]: 2026-01-31 08:40:57.835 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:58 compute-2 ceph-mon[77282]: pgmap v3242: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:40:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:58.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:40:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:59.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:59 compute-2 nova_compute[226829]: 2026-01-31 08:40:59.822 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:00 compute-2 ceph-mon[77282]: pgmap v3243: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:41:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1036511080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:41:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:00.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:01 compute-2 ceph-mon[77282]: pgmap v3244: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 KiB/s rd, 8.9 KiB/s wr, 2 op/s
Jan 31 08:41:01 compute-2 nova_compute[226829]: 2026-01-31 08:41:01.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:01.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:02.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:02 compute-2 nova_compute[226829]: 2026-01-31 08:41:02.555 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:02 compute-2 nova_compute[226829]: 2026-01-31 08:41:02.837 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:41:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:03.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:41:03 compute-2 ceph-mon[77282]: pgmap v3245: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 KiB/s rd, 8.9 KiB/s wr, 2 op/s
Jan 31 08:41:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1342224106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:41:03 compute-2 nova_compute[226829]: 2026-01-31 08:41:03.982 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:04 compute-2 nova_compute[226829]: 2026-01-31 08:41:04.073 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:04 compute-2 nova_compute[226829]: 2026-01-31 08:41:04.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:04.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:04.980 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=81, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=80) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:41:04 compute-2 nova_compute[226829]: 2026-01-31 08:41:04.981 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:04.981 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:41:05 compute-2 nova_compute[226829]: 2026-01-31 08:41:05.016 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:41:05 compute-2 nova_compute[226829]: 2026-01-31 08:41:05.016 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:41:05 compute-2 nova_compute[226829]: 2026-01-31 08:41:05.016 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:41:05 compute-2 nova_compute[226829]: 2026-01-31 08:41:05.016 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:41:05 compute-2 nova_compute[226829]: 2026-01-31 08:41:05.017 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:41:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:41:05 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3864762599' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:41:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:41:05 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3864762599' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:41:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:41:05 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1859579774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:41:05 compute-2 nova_compute[226829]: 2026-01-31 08:41:05.427 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:41:05 compute-2 nova_compute[226829]: 2026-01-31 08:41:05.551 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:41:05 compute-2 nova_compute[226829]: 2026-01-31 08:41:05.552 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4133MB free_disk=20.987842559814453GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:41:05 compute-2 nova_compute[226829]: 2026-01-31 08:41:05.552 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:41:05 compute-2 nova_compute[226829]: 2026-01-31 08:41:05.553 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:41:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:41:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:05.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:41:05 compute-2 ceph-mon[77282]: pgmap v3246: 305 pgs: 305 active+clean; 381 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.2 KiB/s rd, 186 KiB/s wr, 6 op/s
Jan 31 08:41:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3864762599' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:41:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3864762599' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:41:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1859579774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:41:06 compute-2 nova_compute[226829]: 2026-01-31 08:41:06.116 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:41:06 compute-2 nova_compute[226829]: 2026-01-31 08:41:06.117 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:41:06 compute-2 nova_compute[226829]: 2026-01-31 08:41:06.153 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:41:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:06.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:41:06 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1931840101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:41:06 compute-2 nova_compute[226829]: 2026-01-31 08:41:06.571 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:41:06 compute-2 nova_compute[226829]: 2026-01-31 08:41:06.576 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:41:06 compute-2 nova_compute[226829]: 2026-01-31 08:41:06.696 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:41:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1931840101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:41:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:06.919 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:41:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:06.920 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:41:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:06.920 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:41:06 compute-2 nova_compute[226829]: 2026-01-31 08:41:06.972 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:41:06 compute-2 nova_compute[226829]: 2026-01-31 08:41:06.973 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:41:07 compute-2 nova_compute[226829]: 2026-01-31 08:41:07.559 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:41:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:07.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:41:07 compute-2 nova_compute[226829]: 2026-01-31 08:41:07.839 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:07 compute-2 ceph-mon[77282]: pgmap v3247: 305 pgs: 305 active+clean; 381 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 187 KiB/s wr, 21 op/s
Jan 31 08:41:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1254181457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:41:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:08.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1395554617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:41:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:41:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:09.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:41:10 compute-2 ceph-mon[77282]: pgmap v3248: 305 pgs: 305 active+clean; 381 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 361 KiB/s wr, 25 op/s
Jan 31 08:41:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:41:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:10.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:41:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:41:10 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2250320806' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:41:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:41:10 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2250320806' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:41:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2250320806' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:41:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2250320806' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:41:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:11.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:12 compute-2 ceph-mon[77282]: pgmap v3249: 305 pgs: 305 active+clean; 381 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 367 KiB/s wr, 29 op/s
Jan 31 08:41:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:12.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:12 compute-2 nova_compute[226829]: 2026-01-31 08:41:12.562 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:12 compute-2 nova_compute[226829]: 2026-01-31 08:41:12.887 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:12.983 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '81'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:41:13 compute-2 podman[315230]: 2026-01-31 08:41:13.167835207 +0000 UTC m=+0.058123011 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:41:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:41:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:13.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:41:14 compute-2 ceph-mon[77282]: pgmap v3250: 305 pgs: 305 active+clean; 381 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 358 KiB/s wr, 27 op/s
Jan 31 08:41:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:41:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:14.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:41:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:14 compute-2 nova_compute[226829]: 2026-01-31 08:41:14.735 226833 DEBUG oslo_concurrency.lockutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "6bdcbd02-3b18-468f-9304-f2beb486b15f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:41:14 compute-2 nova_compute[226829]: 2026-01-31 08:41:14.735 226833 DEBUG oslo_concurrency.lockutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "6bdcbd02-3b18-468f-9304-f2beb486b15f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:41:15 compute-2 nova_compute[226829]: 2026-01-31 08:41:15.014 226833 DEBUG nova.compute.manager [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:41:15 compute-2 nova_compute[226829]: 2026-01-31 08:41:15.332 226833 DEBUG oslo_concurrency.lockutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:41:15 compute-2 nova_compute[226829]: 2026-01-31 08:41:15.332 226833 DEBUG oslo_concurrency.lockutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:41:15 compute-2 nova_compute[226829]: 2026-01-31 08:41:15.340 226833 DEBUG nova.virt.hardware [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:41:15 compute-2 nova_compute[226829]: 2026-01-31 08:41:15.340 226833 INFO nova.compute.claims [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:41:15 compute-2 ceph-mon[77282]: pgmap v3251: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 361 KiB/s wr, 40 op/s
Jan 31 08:41:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:15.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:15 compute-2 nova_compute[226829]: 2026-01-31 08:41:15.644 226833 DEBUG oslo_concurrency.processutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:41:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:41:16 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2175157771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:41:16 compute-2 nova_compute[226829]: 2026-01-31 08:41:16.097 226833 DEBUG oslo_concurrency.processutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:41:16 compute-2 nova_compute[226829]: 2026-01-31 08:41:16.102 226833 DEBUG nova.compute.provider_tree [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:41:16 compute-2 nova_compute[226829]: 2026-01-31 08:41:16.155 226833 DEBUG nova.scheduler.client.report [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:41:16 compute-2 nova_compute[226829]: 2026-01-31 08:41:16.428 226833 DEBUG oslo_concurrency.lockutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:41:16 compute-2 nova_compute[226829]: 2026-01-31 08:41:16.429 226833 DEBUG nova.compute.manager [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:41:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:16.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:16 compute-2 nova_compute[226829]: 2026-01-31 08:41:16.596 226833 DEBUG nova.compute.manager [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:41:16 compute-2 nova_compute[226829]: 2026-01-31 08:41:16.596 226833 DEBUG nova.network.neutron [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:41:16 compute-2 nova_compute[226829]: 2026-01-31 08:41:16.681 226833 INFO nova.virt.libvirt.driver [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:41:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2175157771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:41:16 compute-2 nova_compute[226829]: 2026-01-31 08:41:16.823 226833 DEBUG nova.compute.manager [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:41:16 compute-2 nova_compute[226829]: 2026-01-31 08:41:16.947 226833 DEBUG nova.policy [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c6968a1ee10e4e3b8651ffe0240a7e46', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ba35ae24dbf3443e8a526dce39c6793b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:41:17 compute-2 sudo[315279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:41:17 compute-2 sudo[315279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:17 compute-2 sudo[315279]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:17 compute-2 sudo[315304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:41:17 compute-2 sudo[315304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:17 compute-2 sudo[315304]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:17 compute-2 nova_compute[226829]: 2026-01-31 08:41:17.281 226833 DEBUG nova.compute.manager [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:41:17 compute-2 nova_compute[226829]: 2026-01-31 08:41:17.283 226833 DEBUG nova.virt.libvirt.driver [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:41:17 compute-2 nova_compute[226829]: 2026-01-31 08:41:17.284 226833 INFO nova.virt.libvirt.driver [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Creating image(s)
Jan 31 08:41:17 compute-2 nova_compute[226829]: 2026-01-31 08:41:17.316 226833 DEBUG nova.storage.rbd_utils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 6bdcbd02-3b18-468f-9304-f2beb486b15f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:41:17 compute-2 nova_compute[226829]: 2026-01-31 08:41:17.348 226833 DEBUG nova.storage.rbd_utils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 6bdcbd02-3b18-468f-9304-f2beb486b15f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:41:17 compute-2 nova_compute[226829]: 2026-01-31 08:41:17.374 226833 DEBUG nova.storage.rbd_utils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 6bdcbd02-3b18-468f-9304-f2beb486b15f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:41:17 compute-2 nova_compute[226829]: 2026-01-31 08:41:17.379 226833 DEBUG oslo_concurrency.processutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:41:17 compute-2 nova_compute[226829]: 2026-01-31 08:41:17.435 226833 DEBUG oslo_concurrency.processutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:41:17 compute-2 nova_compute[226829]: 2026-01-31 08:41:17.436 226833 DEBUG oslo_concurrency.lockutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:41:17 compute-2 nova_compute[226829]: 2026-01-31 08:41:17.437 226833 DEBUG oslo_concurrency.lockutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:41:17 compute-2 nova_compute[226829]: 2026-01-31 08:41:17.437 226833 DEBUG oslo_concurrency.lockutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:41:17 compute-2 nova_compute[226829]: 2026-01-31 08:41:17.471 226833 DEBUG nova.storage.rbd_utils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 6bdcbd02-3b18-468f-9304-f2beb486b15f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:41:17 compute-2 nova_compute[226829]: 2026-01-31 08:41:17.476 226833 DEBUG oslo_concurrency.processutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 6bdcbd02-3b18-468f-9304-f2beb486b15f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:41:17 compute-2 nova_compute[226829]: 2026-01-31 08:41:17.565 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:17.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:17 compute-2 nova_compute[226829]: 2026-01-31 08:41:17.942 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:18 compute-2 ceph-mon[77282]: pgmap v3252: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 183 KiB/s wr, 35 op/s
Jan 31 08:41:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:18.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:18 compute-2 nova_compute[226829]: 2026-01-31 08:41:18.973 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:18 compute-2 nova_compute[226829]: 2026-01-31 08:41:18.974 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:41:19 compute-2 nova_compute[226829]: 2026-01-31 08:41:19.061 226833 DEBUG nova.network.neutron [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Successfully created port: 5dc6375a-c15a-4d30-8638-a43e70cedff3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:41:19 compute-2 podman[315424]: 2026-01-31 08:41:19.149626338 +0000 UTC m=+0.039565497 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:41:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:19.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:19 compute-2 ceph-mon[77282]: pgmap v3253: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 183 KiB/s wr, 21 op/s
Jan 31 08:41:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:20.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:20 compute-2 nova_compute[226829]: 2026-01-31 08:41:20.621 226833 DEBUG oslo_concurrency.processutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 6bdcbd02-3b18-468f-9304-f2beb486b15f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:41:20 compute-2 nova_compute[226829]: 2026-01-31 08:41:20.805 226833 DEBUG nova.storage.rbd_utils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] resizing rbd image 6bdcbd02-3b18-468f-9304-f2beb486b15f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:41:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:41:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:21.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:41:21 compute-2 nova_compute[226829]: 2026-01-31 08:41:21.610 226833 DEBUG nova.objects.instance [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lazy-loading 'migration_context' on Instance uuid 6bdcbd02-3b18-468f-9304-f2beb486b15f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:41:21 compute-2 nova_compute[226829]: 2026-01-31 08:41:21.653 226833 DEBUG nova.virt.libvirt.driver [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:41:21 compute-2 nova_compute[226829]: 2026-01-31 08:41:21.653 226833 DEBUG nova.virt.libvirt.driver [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Ensure instance console log exists: /var/lib/nova/instances/6bdcbd02-3b18-468f-9304-f2beb486b15f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:41:21 compute-2 nova_compute[226829]: 2026-01-31 08:41:21.654 226833 DEBUG oslo_concurrency.lockutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:41:21 compute-2 nova_compute[226829]: 2026-01-31 08:41:21.654 226833 DEBUG oslo_concurrency.lockutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:41:21 compute-2 nova_compute[226829]: 2026-01-31 08:41:21.654 226833 DEBUG oslo_concurrency.lockutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:41:21 compute-2 nova_compute[226829]: 2026-01-31 08:41:21.691 226833 DEBUG nova.network.neutron [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Successfully updated port: 5dc6375a-c15a-4d30-8638-a43e70cedff3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:41:21 compute-2 ceph-mon[77282]: pgmap v3254: 305 pgs: 305 active+clean; 419 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.6 MiB/s wr, 33 op/s
Jan 31 08:41:21 compute-2 nova_compute[226829]: 2026-01-31 08:41:21.942 226833 DEBUG oslo_concurrency.lockutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "refresh_cache-6bdcbd02-3b18-468f-9304-f2beb486b15f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:41:21 compute-2 nova_compute[226829]: 2026-01-31 08:41:21.943 226833 DEBUG oslo_concurrency.lockutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquired lock "refresh_cache-6bdcbd02-3b18-468f-9304-f2beb486b15f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:41:21 compute-2 nova_compute[226829]: 2026-01-31 08:41:21.943 226833 DEBUG nova.network.neutron [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:41:21 compute-2 nova_compute[226829]: 2026-01-31 08:41:21.995 226833 DEBUG nova.compute.manager [req-4778d851-ce27-4b93-9662-ca9d0e516c15 req-eb80fdc2-c177-40e4-ad17-37180237a86a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Received event network-changed-5dc6375a-c15a-4d30-8638-a43e70cedff3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:41:21 compute-2 nova_compute[226829]: 2026-01-31 08:41:21.995 226833 DEBUG nova.compute.manager [req-4778d851-ce27-4b93-9662-ca9d0e516c15 req-eb80fdc2-c177-40e4-ad17-37180237a86a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Refreshing instance network info cache due to event network-changed-5dc6375a-c15a-4d30-8638-a43e70cedff3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:41:21 compute-2 nova_compute[226829]: 2026-01-31 08:41:21.995 226833 DEBUG oslo_concurrency.lockutils [req-4778d851-ce27-4b93-9662-ca9d0e516c15 req-eb80fdc2-c177-40e4-ad17-37180237a86a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-6bdcbd02-3b18-468f-9304-f2beb486b15f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:41:22 compute-2 nova_compute[226829]: 2026-01-31 08:41:22.483 226833 DEBUG nova.network.neutron [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:41:22 compute-2 nova_compute[226829]: 2026-01-31 08:41:22.485 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:41:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:22.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:41:22 compute-2 nova_compute[226829]: 2026-01-31 08:41:22.569 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:22 compute-2 nova_compute[226829]: 2026-01-31 08:41:22.944 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1587974525' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:41:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:23.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:24 compute-2 ceph-mon[77282]: pgmap v3255: 305 pgs: 305 active+clean; 419 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.6 MiB/s wr, 29 op/s
Jan 31 08:41:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:24.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:25 compute-2 ceph-mon[77282]: pgmap v3256: 305 pgs: 305 active+clean; 425 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 31 08:41:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:25.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:26.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:27 compute-2 nova_compute[226829]: 2026-01-31 08:41:27.573 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:41:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:27.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:41:27 compute-2 ceph-mon[77282]: pgmap v3257: 305 pgs: 305 active+clean; 425 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 31 08:41:27 compute-2 nova_compute[226829]: 2026-01-31 08:41:27.946 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:41:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:28.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:41:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:29.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:29 compute-2 nova_compute[226829]: 2026-01-31 08:41:29.738 226833 DEBUG nova.network.neutron [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Updating instance_info_cache with network_info: [{"id": "5dc6375a-c15a-4d30-8638-a43e70cedff3", "address": "fa:16:3e:d5:ff:13", "network": {"id": "9817d028-13b8-4f0c-b501-093938ded45a", "bridge": "br-int", "label": "tempest-network-smoke--1995662602", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5dc6375a-c1", "ovs_interfaceid": "5dc6375a-c15a-4d30-8638-a43e70cedff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:41:30 compute-2 ceph-mon[77282]: pgmap v3258: 305 pgs: 305 active+clean; 425 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 31 08:41:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:30.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:30 compute-2 nova_compute[226829]: 2026-01-31 08:41:30.563 226833 DEBUG oslo_concurrency.lockutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Releasing lock "refresh_cache-6bdcbd02-3b18-468f-9304-f2beb486b15f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:41:30 compute-2 nova_compute[226829]: 2026-01-31 08:41:30.564 226833 DEBUG nova.compute.manager [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Instance network_info: |[{"id": "5dc6375a-c15a-4d30-8638-a43e70cedff3", "address": "fa:16:3e:d5:ff:13", "network": {"id": "9817d028-13b8-4f0c-b501-093938ded45a", "bridge": "br-int", "label": "tempest-network-smoke--1995662602", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5dc6375a-c1", "ovs_interfaceid": "5dc6375a-c15a-4d30-8638-a43e70cedff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:41:30 compute-2 nova_compute[226829]: 2026-01-31 08:41:30.564 226833 DEBUG oslo_concurrency.lockutils [req-4778d851-ce27-4b93-9662-ca9d0e516c15 req-eb80fdc2-c177-40e4-ad17-37180237a86a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-6bdcbd02-3b18-468f-9304-f2beb486b15f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:41:30 compute-2 nova_compute[226829]: 2026-01-31 08:41:30.564 226833 DEBUG nova.network.neutron [req-4778d851-ce27-4b93-9662-ca9d0e516c15 req-eb80fdc2-c177-40e4-ad17-37180237a86a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Refreshing network info cache for port 5dc6375a-c15a-4d30-8638-a43e70cedff3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:41:30 compute-2 nova_compute[226829]: 2026-01-31 08:41:30.566 226833 DEBUG nova.virt.libvirt.driver [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Start _get_guest_xml network_info=[{"id": "5dc6375a-c15a-4d30-8638-a43e70cedff3", "address": "fa:16:3e:d5:ff:13", "network": {"id": "9817d028-13b8-4f0c-b501-093938ded45a", "bridge": "br-int", "label": "tempest-network-smoke--1995662602", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5dc6375a-c1", "ovs_interfaceid": "5dc6375a-c15a-4d30-8638-a43e70cedff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:41:30 compute-2 nova_compute[226829]: 2026-01-31 08:41:30.571 226833 WARNING nova.virt.libvirt.driver [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:41:30 compute-2 nova_compute[226829]: 2026-01-31 08:41:30.579 226833 DEBUG nova.virt.libvirt.host [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:41:30 compute-2 nova_compute[226829]: 2026-01-31 08:41:30.579 226833 DEBUG nova.virt.libvirt.host [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:41:30 compute-2 nova_compute[226829]: 2026-01-31 08:41:30.583 226833 DEBUG nova.virt.libvirt.host [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:41:30 compute-2 nova_compute[226829]: 2026-01-31 08:41:30.583 226833 DEBUG nova.virt.libvirt.host [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:41:30 compute-2 nova_compute[226829]: 2026-01-31 08:41:30.585 226833 DEBUG nova.virt.libvirt.driver [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:41:30 compute-2 nova_compute[226829]: 2026-01-31 08:41:30.585 226833 DEBUG nova.virt.hardware [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:41:30 compute-2 nova_compute[226829]: 2026-01-31 08:41:30.586 226833 DEBUG nova.virt.hardware [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:41:30 compute-2 nova_compute[226829]: 2026-01-31 08:41:30.586 226833 DEBUG nova.virt.hardware [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:41:30 compute-2 nova_compute[226829]: 2026-01-31 08:41:30.586 226833 DEBUG nova.virt.hardware [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:41:30 compute-2 nova_compute[226829]: 2026-01-31 08:41:30.587 226833 DEBUG nova.virt.hardware [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:41:30 compute-2 nova_compute[226829]: 2026-01-31 08:41:30.587 226833 DEBUG nova.virt.hardware [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:41:30 compute-2 nova_compute[226829]: 2026-01-31 08:41:30.587 226833 DEBUG nova.virt.hardware [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:41:30 compute-2 nova_compute[226829]: 2026-01-31 08:41:30.587 226833 DEBUG nova.virt.hardware [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:41:30 compute-2 nova_compute[226829]: 2026-01-31 08:41:30.588 226833 DEBUG nova.virt.hardware [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:41:30 compute-2 nova_compute[226829]: 2026-01-31 08:41:30.588 226833 DEBUG nova.virt.hardware [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:41:30 compute-2 nova_compute[226829]: 2026-01-31 08:41:30.588 226833 DEBUG nova.virt.hardware [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:41:30 compute-2 nova_compute[226829]: 2026-01-31 08:41:30.591 226833 DEBUG oslo_concurrency.processutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:41:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:41:31 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2642737543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:41:31 compute-2 nova_compute[226829]: 2026-01-31 08:41:31.036 226833 DEBUG oslo_concurrency.processutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:41:31 compute-2 nova_compute[226829]: 2026-01-31 08:41:31.068 226833 DEBUG nova.storage.rbd_utils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 6bdcbd02-3b18-468f-9304-f2beb486b15f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:41:31 compute-2 nova_compute[226829]: 2026-01-31 08:41:31.072 226833 DEBUG oslo_concurrency.processutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:41:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2642737543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:41:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:41:31 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2081854837' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:41:31 compute-2 nova_compute[226829]: 2026-01-31 08:41:31.508 226833 DEBUG oslo_concurrency.processutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:41:31 compute-2 nova_compute[226829]: 2026-01-31 08:41:31.510 226833 DEBUG nova.virt.libvirt.vif [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:41:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-1321703218',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-1321703218',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1014068786-ac',id=188,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLMYEP+FH4+JiIZbr79pJv9aYh00yukVd4LTTixBSPYJFPBphM6LwgldEUNc8Ik1ZLIJ1fhNhcXRkJiEr1nLs/12gffuMpLG+id2YXJFVSw24pEnIZ6xMN8/ONialxUfUA==',key_name='tempest-TestSecurityGroupsBasicOps-943486148',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba35ae24dbf3443e8a526dce39c6793b',ramdisk_id='',reservation_id='r-wyyqc6i0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1014068786',owner_user_name='tempest-TestSecurityGroupsBasicOps-1014068786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:41:16Z,user_data=None,user_id='c6968a1ee10e4e3b8651ffe0240a7e46',uuid=6bdcbd02-3b18-468f-9304-f2beb486b15f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5dc6375a-c15a-4d30-8638-a43e70cedff3", "address": "fa:16:3e:d5:ff:13", "network": {"id": "9817d028-13b8-4f0c-b501-093938ded45a", "bridge": "br-int", "label": "tempest-network-smoke--1995662602", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5dc6375a-c1", "ovs_interfaceid": "5dc6375a-c15a-4d30-8638-a43e70cedff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:41:31 compute-2 nova_compute[226829]: 2026-01-31 08:41:31.511 226833 DEBUG nova.network.os_vif_util [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converting VIF {"id": "5dc6375a-c15a-4d30-8638-a43e70cedff3", "address": "fa:16:3e:d5:ff:13", "network": {"id": "9817d028-13b8-4f0c-b501-093938ded45a", "bridge": "br-int", "label": "tempest-network-smoke--1995662602", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5dc6375a-c1", "ovs_interfaceid": "5dc6375a-c15a-4d30-8638-a43e70cedff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:41:31 compute-2 nova_compute[226829]: 2026-01-31 08:41:31.511 226833 DEBUG nova.network.os_vif_util [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:ff:13,bridge_name='br-int',has_traffic_filtering=True,id=5dc6375a-c15a-4d30-8638-a43e70cedff3,network=Network(9817d028-13b8-4f0c-b501-093938ded45a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5dc6375a-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:41:31 compute-2 nova_compute[226829]: 2026-01-31 08:41:31.513 226833 DEBUG nova.objects.instance [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lazy-loading 'pci_devices' on Instance uuid 6bdcbd02-3b18-468f-9304-f2beb486b15f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:41:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:31.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:31 compute-2 nova_compute[226829]: 2026-01-31 08:41:31.614 226833 DEBUG nova.virt.libvirt.driver [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:41:31 compute-2 nova_compute[226829]:   <uuid>6bdcbd02-3b18-468f-9304-f2beb486b15f</uuid>
Jan 31 08:41:31 compute-2 nova_compute[226829]:   <name>instance-000000bc</name>
Jan 31 08:41:31 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:41:31 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:41:31 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-1321703218</nova:name>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:41:30</nova:creationTime>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:41:31 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:41:31 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:41:31 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:41:31 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:41:31 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:41:31 compute-2 nova_compute[226829]:         <nova:user uuid="c6968a1ee10e4e3b8651ffe0240a7e46">tempest-TestSecurityGroupsBasicOps-1014068786-project-member</nova:user>
Jan 31 08:41:31 compute-2 nova_compute[226829]:         <nova:project uuid="ba35ae24dbf3443e8a526dce39c6793b">tempest-TestSecurityGroupsBasicOps-1014068786</nova:project>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:41:31 compute-2 nova_compute[226829]:         <nova:port uuid="5dc6375a-c15a-4d30-8638-a43e70cedff3">
Jan 31 08:41:31 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:41:31 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:41:31 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <system>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <entry name="serial">6bdcbd02-3b18-468f-9304-f2beb486b15f</entry>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <entry name="uuid">6bdcbd02-3b18-468f-9304-f2beb486b15f</entry>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     </system>
Jan 31 08:41:31 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:41:31 compute-2 nova_compute[226829]:   <os>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:   </os>
Jan 31 08:41:31 compute-2 nova_compute[226829]:   <features>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:   </features>
Jan 31 08:41:31 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:41:31 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:41:31 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/6bdcbd02-3b18-468f-9304-f2beb486b15f_disk">
Jan 31 08:41:31 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       </source>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:41:31 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/6bdcbd02-3b18-468f-9304-f2beb486b15f_disk.config">
Jan 31 08:41:31 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       </source>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:41:31 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:d5:ff:13"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <target dev="tap5dc6375a-c1"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/6bdcbd02-3b18-468f-9304-f2beb486b15f/console.log" append="off"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <video>
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     </video>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:41:31 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:41:31 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:41:31 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:41:31 compute-2 nova_compute[226829]: </domain>
Jan 31 08:41:31 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:41:31 compute-2 nova_compute[226829]: 2026-01-31 08:41:31.615 226833 DEBUG nova.compute.manager [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Preparing to wait for external event network-vif-plugged-5dc6375a-c15a-4d30-8638-a43e70cedff3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:41:31 compute-2 nova_compute[226829]: 2026-01-31 08:41:31.615 226833 DEBUG oslo_concurrency.lockutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "6bdcbd02-3b18-468f-9304-f2beb486b15f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:41:31 compute-2 nova_compute[226829]: 2026-01-31 08:41:31.616 226833 DEBUG oslo_concurrency.lockutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "6bdcbd02-3b18-468f-9304-f2beb486b15f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:41:31 compute-2 nova_compute[226829]: 2026-01-31 08:41:31.616 226833 DEBUG oslo_concurrency.lockutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "6bdcbd02-3b18-468f-9304-f2beb486b15f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:41:31 compute-2 nova_compute[226829]: 2026-01-31 08:41:31.617 226833 DEBUG nova.virt.libvirt.vif [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:41:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-1321703218',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-1321703218',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1014068786-ac',id=188,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLMYEP+FH4+JiIZbr79pJv9aYh00yukVd4LTTixBSPYJFPBphM6LwgldEUNc8Ik1ZLIJ1fhNhcXRkJiEr1nLs/12gffuMpLG+id2YXJFVSw24pEnIZ6xMN8/ONialxUfUA==',key_name='tempest-TestSecurityGroupsBasicOps-943486148',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba35ae24dbf3443e8a526dce39c6793b',ramdisk_id='',reservation_id='r-wyyqc6i0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1014068786',owner_user_name='tempest-TestSecurityGroupsBasicOps-1014068786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:41:16Z,user_data=None,user_id='c6968a1ee10e4e3b8651ffe0240a7e46',uuid=6bdcbd02-3b18-468f-9304-f2beb486b15f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5dc6375a-c15a-4d30-8638-a43e70cedff3", "address": "fa:16:3e:d5:ff:13", "network": {"id": "9817d028-13b8-4f0c-b501-093938ded45a", "bridge": "br-int", "label": "tempest-network-smoke--1995662602", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5dc6375a-c1", "ovs_interfaceid": "5dc6375a-c15a-4d30-8638-a43e70cedff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:41:31 compute-2 nova_compute[226829]: 2026-01-31 08:41:31.617 226833 DEBUG nova.network.os_vif_util [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converting VIF {"id": "5dc6375a-c15a-4d30-8638-a43e70cedff3", "address": "fa:16:3e:d5:ff:13", "network": {"id": "9817d028-13b8-4f0c-b501-093938ded45a", "bridge": "br-int", "label": "tempest-network-smoke--1995662602", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5dc6375a-c1", "ovs_interfaceid": "5dc6375a-c15a-4d30-8638-a43e70cedff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:41:31 compute-2 nova_compute[226829]: 2026-01-31 08:41:31.618 226833 DEBUG nova.network.os_vif_util [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:ff:13,bridge_name='br-int',has_traffic_filtering=True,id=5dc6375a-c15a-4d30-8638-a43e70cedff3,network=Network(9817d028-13b8-4f0c-b501-093938ded45a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5dc6375a-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:41:31 compute-2 nova_compute[226829]: 2026-01-31 08:41:31.618 226833 DEBUG os_vif [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:ff:13,bridge_name='br-int',has_traffic_filtering=True,id=5dc6375a-c15a-4d30-8638-a43e70cedff3,network=Network(9817d028-13b8-4f0c-b501-093938ded45a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5dc6375a-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:41:31 compute-2 nova_compute[226829]: 2026-01-31 08:41:31.619 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:31 compute-2 nova_compute[226829]: 2026-01-31 08:41:31.620 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:41:31 compute-2 nova_compute[226829]: 2026-01-31 08:41:31.620 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:41:31 compute-2 nova_compute[226829]: 2026-01-31 08:41:31.625 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:31 compute-2 nova_compute[226829]: 2026-01-31 08:41:31.625 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5dc6375a-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:41:31 compute-2 nova_compute[226829]: 2026-01-31 08:41:31.626 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5dc6375a-c1, col_values=(('external_ids', {'iface-id': '5dc6375a-c15a-4d30-8638-a43e70cedff3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d5:ff:13', 'vm-uuid': '6bdcbd02-3b18-468f-9304-f2beb486b15f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:41:31 compute-2 nova_compute[226829]: 2026-01-31 08:41:31.627 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:31 compute-2 NetworkManager[48999]: <info>  [1769848891.6287] manager: (tap5dc6375a-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/369)
Jan 31 08:41:31 compute-2 nova_compute[226829]: 2026-01-31 08:41:31.629 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:41:31 compute-2 nova_compute[226829]: 2026-01-31 08:41:31.633 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:31 compute-2 nova_compute[226829]: 2026-01-31 08:41:31.635 226833 INFO os_vif [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:ff:13,bridge_name='br-int',has_traffic_filtering=True,id=5dc6375a-c15a-4d30-8638-a43e70cedff3,network=Network(9817d028-13b8-4f0c-b501-093938ded45a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5dc6375a-c1')
Jan 31 08:41:32 compute-2 nova_compute[226829]: 2026-01-31 08:41:32.379 226833 DEBUG nova.virt.libvirt.driver [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:41:32 compute-2 nova_compute[226829]: 2026-01-31 08:41:32.380 226833 DEBUG nova.virt.libvirt.driver [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:41:32 compute-2 nova_compute[226829]: 2026-01-31 08:41:32.380 226833 DEBUG nova.virt.libvirt.driver [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] No VIF found with MAC fa:16:3e:d5:ff:13, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:41:32 compute-2 nova_compute[226829]: 2026-01-31 08:41:32.380 226833 INFO nova.virt.libvirt.driver [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Using config drive
Jan 31 08:41:32 compute-2 nova_compute[226829]: 2026-01-31 08:41:32.406 226833 DEBUG nova.storage.rbd_utils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 6bdcbd02-3b18-468f-9304-f2beb486b15f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:41:32 compute-2 ceph-mon[77282]: pgmap v3259: 305 pgs: 305 active+clean; 425 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 31 08:41:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2081854837' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:41:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:32.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:32 compute-2 nova_compute[226829]: 2026-01-31 08:41:32.956 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:33 compute-2 nova_compute[226829]: 2026-01-31 08:41:33.118 226833 INFO nova.virt.libvirt.driver [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Creating config drive at /var/lib/nova/instances/6bdcbd02-3b18-468f-9304-f2beb486b15f/disk.config
Jan 31 08:41:33 compute-2 nova_compute[226829]: 2026-01-31 08:41:33.121 226833 DEBUG oslo_concurrency.processutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6bdcbd02-3b18-468f-9304-f2beb486b15f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpi33wly2f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:41:33 compute-2 nova_compute[226829]: 2026-01-31 08:41:33.247 226833 DEBUG oslo_concurrency.processutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6bdcbd02-3b18-468f-9304-f2beb486b15f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpi33wly2f" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:41:33 compute-2 nova_compute[226829]: 2026-01-31 08:41:33.278 226833 DEBUG nova.storage.rbd_utils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 6bdcbd02-3b18-468f-9304-f2beb486b15f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:41:33 compute-2 nova_compute[226829]: 2026-01-31 08:41:33.282 226833 DEBUG oslo_concurrency.processutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6bdcbd02-3b18-468f-9304-f2beb486b15f/disk.config 6bdcbd02-3b18-468f-9304-f2beb486b15f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:41:33 compute-2 ceph-mon[77282]: pgmap v3260: 305 pgs: 305 active+clean; 425 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 176 KiB/s wr, 13 op/s
Jan 31 08:41:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:33.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:41:33 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3396622110' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:41:33 compute-2 nova_compute[226829]: 2026-01-31 08:41:33.804 226833 DEBUG oslo_concurrency.processutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6bdcbd02-3b18-468f-9304-f2beb486b15f/disk.config 6bdcbd02-3b18-468f-9304-f2beb486b15f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:41:33 compute-2 nova_compute[226829]: 2026-01-31 08:41:33.805 226833 INFO nova.virt.libvirt.driver [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Deleting local config drive /var/lib/nova/instances/6bdcbd02-3b18-468f-9304-f2beb486b15f/disk.config because it was imported into RBD.
Jan 31 08:41:33 compute-2 kernel: tap5dc6375a-c1: entered promiscuous mode
Jan 31 08:41:33 compute-2 NetworkManager[48999]: <info>  [1769848893.8616] manager: (tap5dc6375a-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/370)
Jan 31 08:41:33 compute-2 nova_compute[226829]: 2026-01-31 08:41:33.864 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:33 compute-2 ovn_controller[133834]: 2026-01-31T08:41:33Z|00748|binding|INFO|Claiming lport 5dc6375a-c15a-4d30-8638-a43e70cedff3 for this chassis.
Jan 31 08:41:33 compute-2 ovn_controller[133834]: 2026-01-31T08:41:33Z|00749|binding|INFO|5dc6375a-c15a-4d30-8638-a43e70cedff3: Claiming fa:16:3e:d5:ff:13 10.100.0.12
Jan 31 08:41:33 compute-2 nova_compute[226829]: 2026-01-31 08:41:33.867 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:33 compute-2 nova_compute[226829]: 2026-01-31 08:41:33.870 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:33 compute-2 systemd-udevd[315656]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:41:33 compute-2 nova_compute[226829]: 2026-01-31 08:41:33.890 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:33 compute-2 ovn_controller[133834]: 2026-01-31T08:41:33Z|00750|binding|INFO|Setting lport 5dc6375a-c15a-4d30-8638-a43e70cedff3 ovn-installed in OVS
Jan 31 08:41:33 compute-2 nova_compute[226829]: 2026-01-31 08:41:33.893 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:33 compute-2 NetworkManager[48999]: <info>  [1769848893.8950] device (tap5dc6375a-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:41:33 compute-2 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:41:33 compute-2 NetworkManager[48999]: <info>  [1769848893.8956] device (tap5dc6375a-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:41:33 compute-2 systemd-machined[195142]: New machine qemu-86-instance-000000bc.
Jan 31 08:41:33 compute-2 systemd[1]: Started Virtual Machine qemu-86-instance-000000bc.
Jan 31 08:41:33 compute-2 ovn_controller[133834]: 2026-01-31T08:41:33Z|00751|binding|INFO|Setting lport 5dc6375a-c15a-4d30-8638-a43e70cedff3 up in Southbound
Jan 31 08:41:33 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:33.939 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:ff:13 10.100.0.12'], port_security=['fa:16:3e:d5:ff:13 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6bdcbd02-3b18-468f-9304-f2beb486b15f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9817d028-13b8-4f0c-b501-093938ded45a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba35ae24dbf3443e8a526dce39c6793b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '321391d8-3670-4204-8148-a6376e72af1f d5a7f2ce-19cf-4a0a-889d-4cd6514ef7af', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9c46936-e0c6-4ac4-8122-69894a2a9a80, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=5dc6375a-c15a-4d30-8638-a43e70cedff3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:41:33 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:33.942 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 5dc6375a-c15a-4d30-8638-a43e70cedff3 in datapath 9817d028-13b8-4f0c-b501-093938ded45a bound to our chassis
Jan 31 08:41:33 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:33.943 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9817d028-13b8-4f0c-b501-093938ded45a
Jan 31 08:41:33 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:33.957 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3bce2184-042b-49cf-a7db-ca6d0d5796be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:41:33 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:33.958 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9817d028-11 in ovnmeta-9817d028-13b8-4f0c-b501-093938ded45a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:41:33 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:33.960 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9817d028-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:41:33 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:33.960 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[32f94756-711b-41e6-8759-36be8ee2e0c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:41:33 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:33.961 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4e34ddc9-1338-4a11-9907-b02cb8abee03]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:41:33 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:33.972 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[25d0e136-98eb-4409-99a4-b9d3273ec68b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:41:33 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:33.982 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d0072f2b-2492-41cd-b00f-a4f3463ff72b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:34.006 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[903b9069-8253-4fae-bc71-c8bd5483fc8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:41:34 compute-2 systemd-udevd[315660]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:41:34 compute-2 NetworkManager[48999]: <info>  [1769848894.0137] manager: (tap9817d028-10): new Veth device (/org/freedesktop/NetworkManager/Devices/371)
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:34.014 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[20084f0b-2d5a-4572-bea9-0753a535f631]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:34.040 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[32716fe0-1d96-4081-ae74-55dcc074e41e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:34.043 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[2097e9aa-6ed7-4394-bdac-8d8e4cc11ab2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:41:34 compute-2 NetworkManager[48999]: <info>  [1769848894.0555] device (tap9817d028-10): carrier: link connected
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:34.059 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[395aa3ac-c6c4-499c-83fb-6fdb1cb9f3b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:34.071 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[467d0db5-bda0-4c97-a0d2-871cf8b3d2e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9817d028-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:ad:e2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 914652, 'reachable_time': 35034, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315693, 'error': None, 'target': 'ovnmeta-9817d028-13b8-4f0c-b501-093938ded45a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:34.082 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c8089175-013c-41c0-946e-a91495271396]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef2:ade2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 914652, 'tstamp': 914652}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315694, 'error': None, 'target': 'ovnmeta-9817d028-13b8-4f0c-b501-093938ded45a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:34.092 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[240690da-026e-474b-9406-e3938a91c409]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9817d028-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:ad:e2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 914652, 'reachable_time': 35034, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315695, 'error': None, 'target': 'ovnmeta-9817d028-13b8-4f0c-b501-093938ded45a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:34.114 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[78418024-dcc6-4914-8fdd-11ca9d61d239]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:34.153 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e90dc4cb-a78c-49aa-bee3-c5a09d053555]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:34.155 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9817d028-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:34.155 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:34.156 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9817d028-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:41:34 compute-2 nova_compute[226829]: 2026-01-31 08:41:34.204 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:34 compute-2 NetworkManager[48999]: <info>  [1769848894.2052] manager: (tap9817d028-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/372)
Jan 31 08:41:34 compute-2 kernel: tap9817d028-10: entered promiscuous mode
Jan 31 08:41:34 compute-2 nova_compute[226829]: 2026-01-31 08:41:34.206 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:34.209 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9817d028-10, col_values=(('external_ids', {'iface-id': '60c86e7a-e59d-4d4b-8afb-1ad3e23f7991'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:41:34 compute-2 nova_compute[226829]: 2026-01-31 08:41:34.210 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:34 compute-2 ovn_controller[133834]: 2026-01-31T08:41:34Z|00752|binding|INFO|Releasing lport 60c86e7a-e59d-4d4b-8afb-1ad3e23f7991 from this chassis (sb_readonly=0)
Jan 31 08:41:34 compute-2 nova_compute[226829]: 2026-01-31 08:41:34.211 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:34.212 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9817d028-13b8-4f0c-b501-093938ded45a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9817d028-13b8-4f0c-b501-093938ded45a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:34.213 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0284980b-ed48-4140-b368-054ef622083d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:34.213 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-9817d028-13b8-4f0c-b501-093938ded45a
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/9817d028-13b8-4f0c-b501-093938ded45a.pid.haproxy
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 9817d028-13b8-4f0c-b501-093938ded45a
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:41:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:34.214 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9817d028-13b8-4f0c-b501-093938ded45a', 'env', 'PROCESS_TAG=haproxy-9817d028-13b8-4f0c-b501-093938ded45a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9817d028-13b8-4f0c-b501-093938ded45a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:41:34 compute-2 nova_compute[226829]: 2026-01-31 08:41:34.215 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:34 compute-2 podman[315727]: 2026-01-31 08:41:34.522834478 +0000 UTC m=+0.041553161 container create 375a25d890ecdbe88e4460a97e4f3b0d07c1eb8b773f4f4cfaf08bf2be991d47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9817d028-13b8-4f0c-b501-093938ded45a, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:41:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:34.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:34 compute-2 systemd[1]: Started libpod-conmon-375a25d890ecdbe88e4460a97e4f3b0d07c1eb8b773f4f4cfaf08bf2be991d47.scope.
Jan 31 08:41:34 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:41:34 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d2119829e6743fd161bc632aef896d9a5055d8259a3f6f92b3f085c3e6263e4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:34 compute-2 podman[315727]: 2026-01-31 08:41:34.500382258 +0000 UTC m=+0.019100971 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:41:34 compute-2 podman[315727]: 2026-01-31 08:41:34.59796359 +0000 UTC m=+0.116682303 container init 375a25d890ecdbe88e4460a97e4f3b0d07c1eb8b773f4f4cfaf08bf2be991d47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9817d028-13b8-4f0c-b501-093938ded45a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 31 08:41:34 compute-2 podman[315727]: 2026-01-31 08:41:34.602709739 +0000 UTC m=+0.121428422 container start 375a25d890ecdbe88e4460a97e4f3b0d07c1eb8b773f4f4cfaf08bf2be991d47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9817d028-13b8-4f0c-b501-093938ded45a, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 08:41:34 compute-2 neutron-haproxy-ovnmeta-9817d028-13b8-4f0c-b501-093938ded45a[315749]: [NOTICE]   (315771) : New worker (315782) forked
Jan 31 08:41:34 compute-2 neutron-haproxy-ovnmeta-9817d028-13b8-4f0c-b501-093938ded45a[315749]: [NOTICE]   (315771) : Loading success.
Jan 31 08:41:34 compute-2 nova_compute[226829]: 2026-01-31 08:41:34.627 226833 DEBUG nova.network.neutron [req-4778d851-ce27-4b93-9662-ca9d0e516c15 req-eb80fdc2-c177-40e4-ad17-37180237a86a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Updated VIF entry in instance network info cache for port 5dc6375a-c15a-4d30-8638-a43e70cedff3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:41:34 compute-2 nova_compute[226829]: 2026-01-31 08:41:34.628 226833 DEBUG nova.network.neutron [req-4778d851-ce27-4b93-9662-ca9d0e516c15 req-eb80fdc2-c177-40e4-ad17-37180237a86a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Updating instance_info_cache with network_info: [{"id": "5dc6375a-c15a-4d30-8638-a43e70cedff3", "address": "fa:16:3e:d5:ff:13", "network": {"id": "9817d028-13b8-4f0c-b501-093938ded45a", "bridge": "br-int", "label": "tempest-network-smoke--1995662602", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5dc6375a-c1", "ovs_interfaceid": "5dc6375a-c15a-4d30-8638-a43e70cedff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:41:34 compute-2 nova_compute[226829]: 2026-01-31 08:41:34.725 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848894.7248538, 6bdcbd02-3b18-468f-9304-f2beb486b15f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:41:34 compute-2 nova_compute[226829]: 2026-01-31 08:41:34.725 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] VM Started (Lifecycle Event)
Jan 31 08:41:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3396622110' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:41:35 compute-2 nova_compute[226829]: 2026-01-31 08:41:35.482 226833 DEBUG nova.compute.manager [req-336b9f28-c90b-4237-976a-7b4696cd9348 req-394fef3b-156c-4ae9-bc9c-f257a69aa046 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Received event network-vif-plugged-5dc6375a-c15a-4d30-8638-a43e70cedff3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:41:35 compute-2 nova_compute[226829]: 2026-01-31 08:41:35.483 226833 DEBUG oslo_concurrency.lockutils [req-336b9f28-c90b-4237-976a-7b4696cd9348 req-394fef3b-156c-4ae9-bc9c-f257a69aa046 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6bdcbd02-3b18-468f-9304-f2beb486b15f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:41:35 compute-2 nova_compute[226829]: 2026-01-31 08:41:35.483 226833 DEBUG oslo_concurrency.lockutils [req-336b9f28-c90b-4237-976a-7b4696cd9348 req-394fef3b-156c-4ae9-bc9c-f257a69aa046 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6bdcbd02-3b18-468f-9304-f2beb486b15f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:41:35 compute-2 nova_compute[226829]: 2026-01-31 08:41:35.483 226833 DEBUG oslo_concurrency.lockutils [req-336b9f28-c90b-4237-976a-7b4696cd9348 req-394fef3b-156c-4ae9-bc9c-f257a69aa046 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6bdcbd02-3b18-468f-9304-f2beb486b15f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:41:35 compute-2 nova_compute[226829]: 2026-01-31 08:41:35.483 226833 DEBUG nova.compute.manager [req-336b9f28-c90b-4237-976a-7b4696cd9348 req-394fef3b-156c-4ae9-bc9c-f257a69aa046 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Processing event network-vif-plugged-5dc6375a-c15a-4d30-8638-a43e70cedff3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:41:35 compute-2 nova_compute[226829]: 2026-01-31 08:41:35.484 226833 DEBUG nova.compute.manager [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:41:35 compute-2 nova_compute[226829]: 2026-01-31 08:41:35.488 226833 DEBUG nova.virt.libvirt.driver [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:41:35 compute-2 nova_compute[226829]: 2026-01-31 08:41:35.491 226833 INFO nova.virt.libvirt.driver [-] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Instance spawned successfully.
Jan 31 08:41:35 compute-2 nova_compute[226829]: 2026-01-31 08:41:35.491 226833 DEBUG nova.virt.libvirt.driver [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:41:35 compute-2 nova_compute[226829]: 2026-01-31 08:41:35.497 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:41:35 compute-2 nova_compute[226829]: 2026-01-31 08:41:35.498 226833 DEBUG oslo_concurrency.lockutils [req-4778d851-ce27-4b93-9662-ca9d0e516c15 req-eb80fdc2-c177-40e4-ad17-37180237a86a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-6bdcbd02-3b18-468f-9304-f2beb486b15f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:41:35 compute-2 nova_compute[226829]: 2026-01-31 08:41:35.500 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:41:35 compute-2 nova_compute[226829]: 2026-01-31 08:41:35.565 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:41:35 compute-2 nova_compute[226829]: 2026-01-31 08:41:35.565 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848894.7258995, 6bdcbd02-3b18-468f-9304-f2beb486b15f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:41:35 compute-2 nova_compute[226829]: 2026-01-31 08:41:35.565 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] VM Paused (Lifecycle Event)
Jan 31 08:41:35 compute-2 nova_compute[226829]: 2026-01-31 08:41:35.574 226833 DEBUG nova.virt.libvirt.driver [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:41:35 compute-2 nova_compute[226829]: 2026-01-31 08:41:35.574 226833 DEBUG nova.virt.libvirt.driver [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:41:35 compute-2 nova_compute[226829]: 2026-01-31 08:41:35.575 226833 DEBUG nova.virt.libvirt.driver [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:41:35 compute-2 nova_compute[226829]: 2026-01-31 08:41:35.575 226833 DEBUG nova.virt.libvirt.driver [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:41:35 compute-2 nova_compute[226829]: 2026-01-31 08:41:35.576 226833 DEBUG nova.virt.libvirt.driver [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:41:35 compute-2 nova_compute[226829]: 2026-01-31 08:41:35.576 226833 DEBUG nova.virt.libvirt.driver [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:41:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:41:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:35.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:41:35 compute-2 nova_compute[226829]: 2026-01-31 08:41:35.898 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:41:35 compute-2 nova_compute[226829]: 2026-01-31 08:41:35.902 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769848895.4875243, 6bdcbd02-3b18-468f-9304-f2beb486b15f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:41:35 compute-2 nova_compute[226829]: 2026-01-31 08:41:35.902 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] VM Resumed (Lifecycle Event)
Jan 31 08:41:36 compute-2 ceph-mon[77282]: pgmap v3261: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 188 KiB/s wr, 15 op/s
Jan 31 08:41:36 compute-2 nova_compute[226829]: 2026-01-31 08:41:36.212 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:41:36 compute-2 nova_compute[226829]: 2026-01-31 08:41:36.215 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:41:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:41:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:36.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:41:36 compute-2 nova_compute[226829]: 2026-01-31 08:41:36.625 226833 INFO nova.compute.manager [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Took 19.34 seconds to spawn the instance on the hypervisor.
Jan 31 08:41:36 compute-2 nova_compute[226829]: 2026-01-31 08:41:36.627 226833 DEBUG nova.compute.manager [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:41:36 compute-2 nova_compute[226829]: 2026-01-31 08:41:36.628 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:36 compute-2 nova_compute[226829]: 2026-01-31 08:41:36.662 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:41:37 compute-2 sudo[315800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:41:37 compute-2 sudo[315800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:37 compute-2 sudo[315800]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:37 compute-2 nova_compute[226829]: 2026-01-31 08:41:37.214 226833 INFO nova.compute.manager [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Took 21.93 seconds to build instance.
Jan 31 08:41:37 compute-2 sudo[315825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:41:37 compute-2 sudo[315825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:37 compute-2 sudo[315825]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:37.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:37 compute-2 nova_compute[226829]: 2026-01-31 08:41:37.611 226833 DEBUG oslo_concurrency.lockutils [None req-8cd99b71-8862-4d3a-a151-38edbeb9abe0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "6bdcbd02-3b18-468f-9304-f2beb486b15f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:41:37 compute-2 nova_compute[226829]: 2026-01-31 08:41:37.958 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:38 compute-2 nova_compute[226829]: 2026-01-31 08:41:38.060 226833 DEBUG nova.compute.manager [req-51153d9b-62ae-4a9d-97dc-4afe15d37b99 req-4c1a7daf-39e4-46ab-b929-0f690ca72722 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Received event network-vif-plugged-5dc6375a-c15a-4d30-8638-a43e70cedff3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:41:38 compute-2 nova_compute[226829]: 2026-01-31 08:41:38.060 226833 DEBUG oslo_concurrency.lockutils [req-51153d9b-62ae-4a9d-97dc-4afe15d37b99 req-4c1a7daf-39e4-46ab-b929-0f690ca72722 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6bdcbd02-3b18-468f-9304-f2beb486b15f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:41:38 compute-2 nova_compute[226829]: 2026-01-31 08:41:38.060 226833 DEBUG oslo_concurrency.lockutils [req-51153d9b-62ae-4a9d-97dc-4afe15d37b99 req-4c1a7daf-39e4-46ab-b929-0f690ca72722 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6bdcbd02-3b18-468f-9304-f2beb486b15f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:41:38 compute-2 nova_compute[226829]: 2026-01-31 08:41:38.061 226833 DEBUG oslo_concurrency.lockutils [req-51153d9b-62ae-4a9d-97dc-4afe15d37b99 req-4c1a7daf-39e4-46ab-b929-0f690ca72722 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6bdcbd02-3b18-468f-9304-f2beb486b15f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:41:38 compute-2 nova_compute[226829]: 2026-01-31 08:41:38.061 226833 DEBUG nova.compute.manager [req-51153d9b-62ae-4a9d-97dc-4afe15d37b99 req-4c1a7daf-39e4-46ab-b929-0f690ca72722 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] No waiting events found dispatching network-vif-plugged-5dc6375a-c15a-4d30-8638-a43e70cedff3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:41:38 compute-2 nova_compute[226829]: 2026-01-31 08:41:38.061 226833 WARNING nova.compute.manager [req-51153d9b-62ae-4a9d-97dc-4afe15d37b99 req-4c1a7daf-39e4-46ab-b929-0f690ca72722 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Received unexpected event network-vif-plugged-5dc6375a-c15a-4d30-8638-a43e70cedff3 for instance with vm_state active and task_state None.
Jan 31 08:41:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:41:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:38.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:41:38 compute-2 ceph-mon[77282]: pgmap v3262: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 KiB/s rd, 13 KiB/s wr, 6 op/s
Jan 31 08:41:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:39.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:39 compute-2 ceph-mon[77282]: pgmap v3263: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 138 KiB/s rd, 14 KiB/s wr, 14 op/s
Jan 31 08:41:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:41:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:40.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:41:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:41.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:41 compute-2 nova_compute[226829]: 2026-01-31 08:41:41.630 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:42 compute-2 ceph-mon[77282]: pgmap v3264: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Jan 31 08:41:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:42.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:42 compute-2 nova_compute[226829]: 2026-01-31 08:41:42.960 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:43.098 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=82, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=81) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:41:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:43.099 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:41:43 compute-2 nova_compute[226829]: 2026-01-31 08:41:43.099 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:41:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:43.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:41:43 compute-2 nova_compute[226829]: 2026-01-31 08:41:43.869 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:43 compute-2 NetworkManager[48999]: <info>  [1769848903.8706] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/373)
Jan 31 08:41:43 compute-2 NetworkManager[48999]: <info>  [1769848903.8721] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/374)
Jan 31 08:41:43 compute-2 nova_compute[226829]: 2026-01-31 08:41:43.908 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:43 compute-2 ovn_controller[133834]: 2026-01-31T08:41:43Z|00753|binding|INFO|Releasing lport 60c86e7a-e59d-4d4b-8afb-1ad3e23f7991 from this chassis (sb_readonly=0)
Jan 31 08:41:43 compute-2 nova_compute[226829]: 2026-01-31 08:41:43.923 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:44 compute-2 podman[315855]: 2026-01-31 08:41:44.185712537 +0000 UTC m=+0.065328987 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:41:44 compute-2 ceph-mon[77282]: pgmap v3265: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Jan 31 08:41:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:44.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:45 compute-2 ceph-mon[77282]: pgmap v3266: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Jan 31 08:41:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:45.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:46 compute-2 nova_compute[226829]: 2026-01-31 08:41:46.324 226833 DEBUG nova.compute.manager [req-fe95c2a8-7c39-4307-b405-b66b31f8b812 req-c71eff2c-c67b-41c5-a9ff-2de1c34018b2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Received event network-changed-5dc6375a-c15a-4d30-8638-a43e70cedff3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:41:46 compute-2 nova_compute[226829]: 2026-01-31 08:41:46.325 226833 DEBUG nova.compute.manager [req-fe95c2a8-7c39-4307-b405-b66b31f8b812 req-c71eff2c-c67b-41c5-a9ff-2de1c34018b2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Refreshing instance network info cache due to event network-changed-5dc6375a-c15a-4d30-8638-a43e70cedff3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:41:46 compute-2 nova_compute[226829]: 2026-01-31 08:41:46.325 226833 DEBUG oslo_concurrency.lockutils [req-fe95c2a8-7c39-4307-b405-b66b31f8b812 req-c71eff2c-c67b-41c5-a9ff-2de1c34018b2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-6bdcbd02-3b18-468f-9304-f2beb486b15f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:41:46 compute-2 nova_compute[226829]: 2026-01-31 08:41:46.325 226833 DEBUG oslo_concurrency.lockutils [req-fe95c2a8-7c39-4307-b405-b66b31f8b812 req-c71eff2c-c67b-41c5-a9ff-2de1c34018b2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-6bdcbd02-3b18-468f-9304-f2beb486b15f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:41:46 compute-2 nova_compute[226829]: 2026-01-31 08:41:46.325 226833 DEBUG nova.network.neutron [req-fe95c2a8-7c39-4307-b405-b66b31f8b812 req-c71eff2c-c67b-41c5-a9ff-2de1c34018b2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Refreshing network info cache for port 5dc6375a-c15a-4d30-8638-a43e70cedff3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:41:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:46.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:46 compute-2 nova_compute[226829]: 2026-01-31 08:41:46.632 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:47.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:47 compute-2 nova_compute[226829]: 2026-01-31 08:41:47.962 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:48 compute-2 ceph-mon[77282]: pgmap v3267: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.6 KiB/s wr, 74 op/s
Jan 31 08:41:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:48.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:49 compute-2 nova_compute[226829]: 2026-01-31 08:41:49.392 226833 DEBUG nova.network.neutron [req-fe95c2a8-7c39-4307-b405-b66b31f8b812 req-c71eff2c-c67b-41c5-a9ff-2de1c34018b2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Updated VIF entry in instance network info cache for port 5dc6375a-c15a-4d30-8638-a43e70cedff3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:41:49 compute-2 nova_compute[226829]: 2026-01-31 08:41:49.392 226833 DEBUG nova.network.neutron [req-fe95c2a8-7c39-4307-b405-b66b31f8b812 req-c71eff2c-c67b-41c5-a9ff-2de1c34018b2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Updating instance_info_cache with network_info: [{"id": "5dc6375a-c15a-4d30-8638-a43e70cedff3", "address": "fa:16:3e:d5:ff:13", "network": {"id": "9817d028-13b8-4f0c-b501-093938ded45a", "bridge": "br-int", "label": "tempest-network-smoke--1995662602", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5dc6375a-c1", "ovs_interfaceid": "5dc6375a-c15a-4d30-8638-a43e70cedff3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:41:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:41:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:49.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:41:49 compute-2 nova_compute[226829]: 2026-01-31 08:41:49.743 226833 DEBUG oslo_concurrency.lockutils [req-fe95c2a8-7c39-4307-b405-b66b31f8b812 req-c71eff2c-c67b-41c5-a9ff-2de1c34018b2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-6bdcbd02-3b18-468f-9304-f2beb486b15f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:41:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:41:50.101 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '82'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:41:50 compute-2 podman[315886]: 2026-01-31 08:41:50.186101222 +0000 UTC m=+0.076195832 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:41:50 compute-2 ceph-mon[77282]: pgmap v3268: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 KiB/s wr, 76 op/s
Jan 31 08:41:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:50.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:51 compute-2 sudo[315905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:41:51 compute-2 sudo[315905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:51 compute-2 sudo[315905]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:51 compute-2 sudo[315930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:41:51 compute-2 sudo[315930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:51 compute-2 sudo[315930]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:51 compute-2 sudo[315955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:41:51 compute-2 sudo[315955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:51 compute-2 sudo[315955]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:51 compute-2 sudo[315980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 08:41:51 compute-2 sudo[315980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:51 compute-2 nova_compute[226829]: 2026-01-31 08:41:51.552 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:51 compute-2 ceph-mon[77282]: pgmap v3269: 305 pgs: 305 active+clean; 431 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 512 KiB/s wr, 75 op/s
Jan 31 08:41:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:51.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:51 compute-2 nova_compute[226829]: 2026-01-31 08:41:51.634 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:51 compute-2 sudo[315980]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:51 compute-2 sudo[316024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:41:51 compute-2 sudo[316024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:51 compute-2 sudo[316024]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:51 compute-2 sudo[316049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:41:51 compute-2 sudo[316049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:51 compute-2 sudo[316049]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:52 compute-2 sudo[316075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:41:52 compute-2 sudo[316075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:52 compute-2 sudo[316075]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:52 compute-2 sudo[316100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:41:52 compute-2 sudo[316100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:52 compute-2 sudo[316100]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:52.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:52 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:41:52 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:41:52 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:41:52 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 08:41:52 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:41:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2637242050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:41:52 compute-2 nova_compute[226829]: 2026-01-31 08:41:52.964 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:53 compute-2 ovn_controller[133834]: 2026-01-31T08:41:53Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d5:ff:13 10.100.0.12
Jan 31 08:41:53 compute-2 ovn_controller[133834]: 2026-01-31T08:41:53Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d5:ff:13 10.100.0.12
Jan 31 08:41:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:53.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:53 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #58. Immutable memtables: 14.
Jan 31 08:41:54 compute-2 ceph-mon[77282]: pgmap v3270: 305 pgs: 305 active+clean; 431 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 512 KiB/s wr, 13 op/s
Jan 31 08:41:54 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 08:41:54 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:41:54 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:41:54 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:41:54 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:41:54 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:41:54 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:41:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/537536257' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:41:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/537536257' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:41:54 compute-2 nova_compute[226829]: 2026-01-31 08:41:54.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:54.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:55 compute-2 nova_compute[226829]: 2026-01-31 08:41:55.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:55 compute-2 nova_compute[226829]: 2026-01-31 08:41:55.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:41:55 compute-2 nova_compute[226829]: 2026-01-31 08:41:55.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:41:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:55.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:56 compute-2 nova_compute[226829]: 2026-01-31 08:41:56.046 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-6bdcbd02-3b18-468f-9304-f2beb486b15f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:41:56 compute-2 nova_compute[226829]: 2026-01-31 08:41:56.047 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-6bdcbd02-3b18-468f-9304-f2beb486b15f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:41:56 compute-2 nova_compute[226829]: 2026-01-31 08:41:56.047 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:41:56 compute-2 nova_compute[226829]: 2026-01-31 08:41:56.047 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6bdcbd02-3b18-468f-9304-f2beb486b15f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:41:56 compute-2 ceph-mon[77282]: pgmap v3271: 305 pgs: 305 active+clean; 446 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 187 KiB/s rd, 1.9 MiB/s wr, 44 op/s
Jan 31 08:41:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:56.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:56 compute-2 nova_compute[226829]: 2026-01-31 08:41:56.636 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:57 compute-2 sudo[316158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:41:57 compute-2 sudo[316158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:57 compute-2 sudo[316158]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:57 compute-2 sudo[316183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:41:57 compute-2 sudo[316183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:57 compute-2 sudo[316183]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:57.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:57 compute-2 nova_compute[226829]: 2026-01-31 08:41:57.965 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:58 compute-2 ceph-mon[77282]: pgmap v3272: 305 pgs: 305 active+clean; 451 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 217 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Jan 31 08:41:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:58.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:58 compute-2 nova_compute[226829]: 2026-01-31 08:41:58.763 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Updating instance_info_cache with network_info: [{"id": "5dc6375a-c15a-4d30-8638-a43e70cedff3", "address": "fa:16:3e:d5:ff:13", "network": {"id": "9817d028-13b8-4f0c-b501-093938ded45a", "bridge": "br-int", "label": "tempest-network-smoke--1995662602", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5dc6375a-c1", "ovs_interfaceid": "5dc6375a-c15a-4d30-8638-a43e70cedff3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:41:58 compute-2 nova_compute[226829]: 2026-01-31 08:41:58.817 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-6bdcbd02-3b18-468f-9304-f2beb486b15f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:41:58 compute-2 nova_compute[226829]: 2026-01-31 08:41:58.818 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:41:58 compute-2 nova_compute[226829]: 2026-01-31 08:41:58.818 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:58 compute-2 nova_compute[226829]: 2026-01-31 08:41:58.818 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:58 compute-2 nova_compute[226829]: 2026-01-31 08:41:58.818 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:59 compute-2 ceph-mon[77282]: pgmap v3273: 305 pgs: 305 active+clean; 456 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 227 KiB/s rd, 2.2 MiB/s wr, 52 op/s
Jan 31 08:41:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:41:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:59.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:00.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:42:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:01.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:42:01 compute-2 nova_compute[226829]: 2026-01-31 08:42:01.638 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:02.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:02 compute-2 sudo[316211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:42:02 compute-2 sudo[316211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:02 compute-2 sudo[316211]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:02 compute-2 sudo[316236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:42:02 compute-2 sudo[316236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:02 compute-2 sudo[316236]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:02 compute-2 ceph-mon[77282]: pgmap v3274: 305 pgs: 305 active+clean; 504 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 276 KiB/s rd, 3.9 MiB/s wr, 85 op/s
Jan 31 08:42:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:42:02 compute-2 nova_compute[226829]: 2026-01-31 08:42:02.968 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:03 compute-2 nova_compute[226829]: 2026-01-31 08:42:03.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:42:03 compute-2 ceph-mon[77282]: pgmap v3275: 305 pgs: 305 active+clean; 504 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 255 KiB/s rd, 3.4 MiB/s wr, 74 op/s
Jan 31 08:42:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:03.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:04 compute-2 nova_compute[226829]: 2026-01-31 08:42:04.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:04.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:04 compute-2 nova_compute[226829]: 2026-01-31 08:42:04.865 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:42:04 compute-2 nova_compute[226829]: 2026-01-31 08:42:04.866 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:42:04 compute-2 nova_compute[226829]: 2026-01-31 08:42:04.866 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:42:04 compute-2 nova_compute[226829]: 2026-01-31 08:42:04.867 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:42:04 compute-2 nova_compute[226829]: 2026-01-31 08:42:04.867 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:42:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2310673947' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:42:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:05.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:42:05 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/119435219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:42:05 compute-2 nova_compute[226829]: 2026-01-31 08:42:05.673 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.806s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:42:06 compute-2 ceph-mon[77282]: pgmap v3276: 305 pgs: 305 active+clean; 504 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 257 KiB/s rd, 3.4 MiB/s wr, 77 op/s
Jan 31 08:42:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3250871770' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:42:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/119435219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:42:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/471124134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:42:06 compute-2 nova_compute[226829]: 2026-01-31 08:42:06.371 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000bc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:42:06 compute-2 nova_compute[226829]: 2026-01-31 08:42:06.372 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000bc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:42:06 compute-2 nova_compute[226829]: 2026-01-31 08:42:06.516 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:42:06 compute-2 nova_compute[226829]: 2026-01-31 08:42:06.518 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3931MB free_disk=20.921566009521484GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:42:06 compute-2 nova_compute[226829]: 2026-01-31 08:42:06.519 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:42:06 compute-2 nova_compute[226829]: 2026-01-31 08:42:06.519 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:42:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:42:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:06.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:42:06 compute-2 nova_compute[226829]: 2026-01-31 08:42:06.640 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:06 compute-2 nova_compute[226829]: 2026-01-31 08:42:06.728 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 6bdcbd02-3b18-468f-9304-f2beb486b15f actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:42:06 compute-2 nova_compute[226829]: 2026-01-31 08:42:06.728 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:42:06 compute-2 nova_compute[226829]: 2026-01-31 08:42:06.729 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:42:06 compute-2 nova_compute[226829]: 2026-01-31 08:42:06.901 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:42:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:42:06.920 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:42:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:42:06.921 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:42:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:42:06.921 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:42:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:42:07 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3452087438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:42:07 compute-2 nova_compute[226829]: 2026-01-31 08:42:07.418 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:42:07 compute-2 nova_compute[226829]: 2026-01-31 08:42:07.423 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:42:07 compute-2 ceph-mon[77282]: pgmap v3277: 305 pgs: 305 active+clean; 504 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 101 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Jan 31 08:42:07 compute-2 nova_compute[226829]: 2026-01-31 08:42:07.481 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:42:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:07.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:07 compute-2 nova_compute[226829]: 2026-01-31 08:42:07.709 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:42:07 compute-2 nova_compute[226829]: 2026-01-31 08:42:07.710 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:42:07 compute-2 nova_compute[226829]: 2026-01-31 08:42:07.970 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:08 compute-2 nova_compute[226829]: 2026-01-31 08:42:08.104 226833 DEBUG nova.compute.manager [req-f13873d2-1093-4624-9c13-29797caa6d20 req-23704f41-6f3b-4037-9586-b9282ccaa828 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Received event network-changed-5dc6375a-c15a-4d30-8638-a43e70cedff3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:42:08 compute-2 nova_compute[226829]: 2026-01-31 08:42:08.105 226833 DEBUG nova.compute.manager [req-f13873d2-1093-4624-9c13-29797caa6d20 req-23704f41-6f3b-4037-9586-b9282ccaa828 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Refreshing instance network info cache due to event network-changed-5dc6375a-c15a-4d30-8638-a43e70cedff3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:42:08 compute-2 nova_compute[226829]: 2026-01-31 08:42:08.105 226833 DEBUG oslo_concurrency.lockutils [req-f13873d2-1093-4624-9c13-29797caa6d20 req-23704f41-6f3b-4037-9586-b9282ccaa828 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-6bdcbd02-3b18-468f-9304-f2beb486b15f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:42:08 compute-2 nova_compute[226829]: 2026-01-31 08:42:08.105 226833 DEBUG oslo_concurrency.lockutils [req-f13873d2-1093-4624-9c13-29797caa6d20 req-23704f41-6f3b-4037-9586-b9282ccaa828 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-6bdcbd02-3b18-468f-9304-f2beb486b15f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:42:08 compute-2 nova_compute[226829]: 2026-01-31 08:42:08.105 226833 DEBUG nova.network.neutron [req-f13873d2-1093-4624-9c13-29797caa6d20 req-23704f41-6f3b-4037-9586-b9282ccaa828 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Refreshing network info cache for port 5dc6375a-c15a-4d30-8638-a43e70cedff3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:42:08 compute-2 nova_compute[226829]: 2026-01-31 08:42:08.418 226833 DEBUG oslo_concurrency.lockutils [None req-5aa4581c-98e9-4827-8fe2-20befa4c1a51 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "6bdcbd02-3b18-468f-9304-f2beb486b15f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:42:08 compute-2 nova_compute[226829]: 2026-01-31 08:42:08.419 226833 DEBUG oslo_concurrency.lockutils [None req-5aa4581c-98e9-4827-8fe2-20befa4c1a51 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "6bdcbd02-3b18-468f-9304-f2beb486b15f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:42:08 compute-2 nova_compute[226829]: 2026-01-31 08:42:08.419 226833 DEBUG oslo_concurrency.lockutils [None req-5aa4581c-98e9-4827-8fe2-20befa4c1a51 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "6bdcbd02-3b18-468f-9304-f2beb486b15f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:42:08 compute-2 nova_compute[226829]: 2026-01-31 08:42:08.419 226833 DEBUG oslo_concurrency.lockutils [None req-5aa4581c-98e9-4827-8fe2-20befa4c1a51 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "6bdcbd02-3b18-468f-9304-f2beb486b15f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:42:08 compute-2 nova_compute[226829]: 2026-01-31 08:42:08.419 226833 DEBUG oslo_concurrency.lockutils [None req-5aa4581c-98e9-4827-8fe2-20befa4c1a51 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "6bdcbd02-3b18-468f-9304-f2beb486b15f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:42:08 compute-2 nova_compute[226829]: 2026-01-31 08:42:08.420 226833 INFO nova.compute.manager [None req-5aa4581c-98e9-4827-8fe2-20befa4c1a51 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Terminating instance
Jan 31 08:42:08 compute-2 nova_compute[226829]: 2026-01-31 08:42:08.421 226833 DEBUG nova.compute.manager [None req-5aa4581c-98e9-4827-8fe2-20befa4c1a51 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:42:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:42:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:08.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:42:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3452087438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:42:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/579147508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:42:09 compute-2 kernel: tap5dc6375a-c1 (unregistering): left promiscuous mode
Jan 31 08:42:09 compute-2 NetworkManager[48999]: <info>  [1769848929.5672] device (tap5dc6375a-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:42:09 compute-2 ovn_controller[133834]: 2026-01-31T08:42:09Z|00754|binding|INFO|Releasing lport 5dc6375a-c15a-4d30-8638-a43e70cedff3 from this chassis (sb_readonly=0)
Jan 31 08:42:09 compute-2 ovn_controller[133834]: 2026-01-31T08:42:09Z|00755|binding|INFO|Setting lport 5dc6375a-c15a-4d30-8638-a43e70cedff3 down in Southbound
Jan 31 08:42:09 compute-2 ovn_controller[133834]: 2026-01-31T08:42:09Z|00756|binding|INFO|Removing iface tap5dc6375a-c1 ovn-installed in OVS
Jan 31 08:42:09 compute-2 nova_compute[226829]: 2026-01-31 08:42:09.575 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:09 compute-2 nova_compute[226829]: 2026-01-31 08:42:09.583 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:42:09.609 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:ff:13 10.100.0.12'], port_security=['fa:16:3e:d5:ff:13 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6bdcbd02-3b18-468f-9304-f2beb486b15f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9817d028-13b8-4f0c-b501-093938ded45a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba35ae24dbf3443e8a526dce39c6793b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '321391d8-3670-4204-8148-a6376e72af1f d5a7f2ce-19cf-4a0a-889d-4cd6514ef7af', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9c46936-e0c6-4ac4-8122-69894a2a9a80, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=5dc6375a-c15a-4d30-8638-a43e70cedff3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:42:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:42:09.610 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 5dc6375a-c15a-4d30-8638-a43e70cedff3 in datapath 9817d028-13b8-4f0c-b501-093938ded45a unbound from our chassis
Jan 31 08:42:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:42:09.611 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9817d028-13b8-4f0c-b501-093938ded45a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:42:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:42:09.613 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[acf523fe-5869-423e-9df7-e3020f2f7cc4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:42:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:42:09.614 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9817d028-13b8-4f0c-b501-093938ded45a namespace which is not needed anymore
Jan 31 08:42:09 compute-2 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000bc.scope: Deactivated successfully.
Jan 31 08:42:09 compute-2 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000bc.scope: Consumed 14.179s CPU time.
Jan 31 08:42:09 compute-2 systemd-machined[195142]: Machine qemu-86-instance-000000bc terminated.
Jan 31 08:42:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:42:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:09.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:42:09 compute-2 nova_compute[226829]: 2026-01-31 08:42:09.655 226833 INFO nova.virt.libvirt.driver [-] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Instance destroyed successfully.
Jan 31 08:42:09 compute-2 nova_compute[226829]: 2026-01-31 08:42:09.655 226833 DEBUG nova.objects.instance [None req-5aa4581c-98e9-4827-8fe2-20befa4c1a51 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lazy-loading 'resources' on Instance uuid 6bdcbd02-3b18-468f-9304-f2beb486b15f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:42:09 compute-2 nova_compute[226829]: 2026-01-31 08:42:09.687 226833 DEBUG nova.virt.libvirt.vif [None req-5aa4581c-98e9-4827-8fe2-20befa4c1a51 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:41:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-1321703218',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-1321703218',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1014068786-ac',id=188,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLMYEP+FH4+JiIZbr79pJv9aYh00yukVd4LTTixBSPYJFPBphM6LwgldEUNc8Ik1ZLIJ1fhNhcXRkJiEr1nLs/12gffuMpLG+id2YXJFVSw24pEnIZ6xMN8/ONialxUfUA==',key_name='tempest-TestSecurityGroupsBasicOps-943486148',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:41:36Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ba35ae24dbf3443e8a526dce39c6793b',ramdisk_id='',reservation_id='r-wyyqc6i0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1014068786',owner_user_name='tempest-TestSecurityGroupsBasicOps-1014068786-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:41:36Z,user_data=None,user_id='c6968a1ee10e4e3b8651ffe0240a7e46',uuid=6bdcbd02-3b18-468f-9304-f2beb486b15f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5dc6375a-c15a-4d30-8638-a43e70cedff3", "address": "fa:16:3e:d5:ff:13", "network": {"id": "9817d028-13b8-4f0c-b501-093938ded45a", "bridge": "br-int", "label": "tempest-network-smoke--1995662602", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5dc6375a-c1", "ovs_interfaceid": "5dc6375a-c15a-4d30-8638-a43e70cedff3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:42:09 compute-2 nova_compute[226829]: 2026-01-31 08:42:09.688 226833 DEBUG nova.network.os_vif_util [None req-5aa4581c-98e9-4827-8fe2-20befa4c1a51 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converting VIF {"id": "5dc6375a-c15a-4d30-8638-a43e70cedff3", "address": "fa:16:3e:d5:ff:13", "network": {"id": "9817d028-13b8-4f0c-b501-093938ded45a", "bridge": "br-int", "label": "tempest-network-smoke--1995662602", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5dc6375a-c1", "ovs_interfaceid": "5dc6375a-c15a-4d30-8638-a43e70cedff3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:42:09 compute-2 ceph-mon[77282]: pgmap v3278: 305 pgs: 305 active+clean; 506 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 73 KiB/s rd, 2.0 MiB/s wr, 46 op/s
Jan 31 08:42:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2503004743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:42:09 compute-2 nova_compute[226829]: 2026-01-31 08:42:09.690 226833 DEBUG nova.network.os_vif_util [None req-5aa4581c-98e9-4827-8fe2-20befa4c1a51 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d5:ff:13,bridge_name='br-int',has_traffic_filtering=True,id=5dc6375a-c15a-4d30-8638-a43e70cedff3,network=Network(9817d028-13b8-4f0c-b501-093938ded45a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5dc6375a-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:42:09 compute-2 nova_compute[226829]: 2026-01-31 08:42:09.690 226833 DEBUG os_vif [None req-5aa4581c-98e9-4827-8fe2-20befa4c1a51 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:ff:13,bridge_name='br-int',has_traffic_filtering=True,id=5dc6375a-c15a-4d30-8638-a43e70cedff3,network=Network(9817d028-13b8-4f0c-b501-093938ded45a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5dc6375a-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:42:09 compute-2 nova_compute[226829]: 2026-01-31 08:42:09.692 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:09 compute-2 nova_compute[226829]: 2026-01-31 08:42:09.693 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5dc6375a-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:42:09 compute-2 nova_compute[226829]: 2026-01-31 08:42:09.695 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:09 compute-2 nova_compute[226829]: 2026-01-31 08:42:09.697 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:09 compute-2 nova_compute[226829]: 2026-01-31 08:42:09.702 226833 INFO os_vif [None req-5aa4581c-98e9-4827-8fe2-20befa4c1a51 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:ff:13,bridge_name='br-int',has_traffic_filtering=True,id=5dc6375a-c15a-4d30-8638-a43e70cedff3,network=Network(9817d028-13b8-4f0c-b501-093938ded45a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5dc6375a-c1')
Jan 31 08:42:10 compute-2 neutron-haproxy-ovnmeta-9817d028-13b8-4f0c-b501-093938ded45a[315749]: [NOTICE]   (315771) : haproxy version is 2.8.14-c23fe91
Jan 31 08:42:10 compute-2 neutron-haproxy-ovnmeta-9817d028-13b8-4f0c-b501-093938ded45a[315749]: [NOTICE]   (315771) : path to executable is /usr/sbin/haproxy
Jan 31 08:42:10 compute-2 neutron-haproxy-ovnmeta-9817d028-13b8-4f0c-b501-093938ded45a[315749]: [WARNING]  (315771) : Exiting Master process...
Jan 31 08:42:10 compute-2 neutron-haproxy-ovnmeta-9817d028-13b8-4f0c-b501-093938ded45a[315749]: [ALERT]    (315771) : Current worker (315782) exited with code 143 (Terminated)
Jan 31 08:42:10 compute-2 neutron-haproxy-ovnmeta-9817d028-13b8-4f0c-b501-093938ded45a[315749]: [WARNING]  (315771) : All workers exited. Exiting... (0)
Jan 31 08:42:10 compute-2 systemd[1]: libpod-375a25d890ecdbe88e4460a97e4f3b0d07c1eb8b773f4f4cfaf08bf2be991d47.scope: Deactivated successfully.
Jan 31 08:42:10 compute-2 podman[316346]: 2026-01-31 08:42:10.033651421 +0000 UTC m=+0.319832504 container died 375a25d890ecdbe88e4460a97e4f3b0d07c1eb8b773f4f4cfaf08bf2be991d47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9817d028-13b8-4f0c-b501-093938ded45a, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 08:42:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:42:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:10.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:42:10 compute-2 nova_compute[226829]: 2026-01-31 08:42:10.575 226833 DEBUG nova.compute.manager [req-87472e22-f28a-43d1-a068-ff2af5195327 req-86afc801-f37a-40bd-8122-ab7dfdf44b43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Received event network-vif-unplugged-5dc6375a-c15a-4d30-8638-a43e70cedff3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:42:10 compute-2 nova_compute[226829]: 2026-01-31 08:42:10.576 226833 DEBUG oslo_concurrency.lockutils [req-87472e22-f28a-43d1-a068-ff2af5195327 req-86afc801-f37a-40bd-8122-ab7dfdf44b43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6bdcbd02-3b18-468f-9304-f2beb486b15f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:42:10 compute-2 nova_compute[226829]: 2026-01-31 08:42:10.576 226833 DEBUG oslo_concurrency.lockutils [req-87472e22-f28a-43d1-a068-ff2af5195327 req-86afc801-f37a-40bd-8122-ab7dfdf44b43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6bdcbd02-3b18-468f-9304-f2beb486b15f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:42:10 compute-2 nova_compute[226829]: 2026-01-31 08:42:10.576 226833 DEBUG oslo_concurrency.lockutils [req-87472e22-f28a-43d1-a068-ff2af5195327 req-86afc801-f37a-40bd-8122-ab7dfdf44b43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6bdcbd02-3b18-468f-9304-f2beb486b15f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:42:10 compute-2 nova_compute[226829]: 2026-01-31 08:42:10.576 226833 DEBUG nova.compute.manager [req-87472e22-f28a-43d1-a068-ff2af5195327 req-86afc801-f37a-40bd-8122-ab7dfdf44b43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] No waiting events found dispatching network-vif-unplugged-5dc6375a-c15a-4d30-8638-a43e70cedff3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:42:10 compute-2 nova_compute[226829]: 2026-01-31 08:42:10.577 226833 DEBUG nova.compute.manager [req-87472e22-f28a-43d1-a068-ff2af5195327 req-86afc801-f37a-40bd-8122-ab7dfdf44b43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Received event network-vif-unplugged-5dc6375a-c15a-4d30-8638-a43e70cedff3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:42:10 compute-2 nova_compute[226829]: 2026-01-31 08:42:10.693 226833 DEBUG nova.network.neutron [req-f13873d2-1093-4624-9c13-29797caa6d20 req-23704f41-6f3b-4037-9586-b9282ccaa828 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Updated VIF entry in instance network info cache for port 5dc6375a-c15a-4d30-8638-a43e70cedff3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:42:10 compute-2 nova_compute[226829]: 2026-01-31 08:42:10.694 226833 DEBUG nova.network.neutron [req-f13873d2-1093-4624-9c13-29797caa6d20 req-23704f41-6f3b-4037-9586-b9282ccaa828 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Updating instance_info_cache with network_info: [{"id": "5dc6375a-c15a-4d30-8638-a43e70cedff3", "address": "fa:16:3e:d5:ff:13", "network": {"id": "9817d028-13b8-4f0c-b501-093938ded45a", "bridge": "br-int", "label": "tempest-network-smoke--1995662602", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5dc6375a-c1", "ovs_interfaceid": "5dc6375a-c15a-4d30-8638-a43e70cedff3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:42:10 compute-2 systemd[1]: var-lib-containers-storage-overlay-2d2119829e6743fd161bc632aef896d9a5055d8259a3f6f92b3f085c3e6263e4-merged.mount: Deactivated successfully.
Jan 31 08:42:10 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-375a25d890ecdbe88e4460a97e4f3b0d07c1eb8b773f4f4cfaf08bf2be991d47-userdata-shm.mount: Deactivated successfully.
Jan 31 08:42:10 compute-2 nova_compute[226829]: 2026-01-31 08:42:10.790 226833 DEBUG oslo_concurrency.lockutils [req-f13873d2-1093-4624-9c13-29797caa6d20 req-23704f41-6f3b-4037-9586-b9282ccaa828 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-6bdcbd02-3b18-468f-9304-f2beb486b15f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:42:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4186885530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:42:11 compute-2 podman[316346]: 2026-01-31 08:42:11.116123802 +0000 UTC m=+1.402304865 container cleanup 375a25d890ecdbe88e4460a97e4f3b0d07c1eb8b773f4f4cfaf08bf2be991d47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9817d028-13b8-4f0c-b501-093938ded45a, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 31 08:42:11 compute-2 systemd[1]: libpod-conmon-375a25d890ecdbe88e4460a97e4f3b0d07c1eb8b773f4f4cfaf08bf2be991d47.scope: Deactivated successfully.
Jan 31 08:42:11 compute-2 podman[316394]: 2026-01-31 08:42:11.62748956 +0000 UTC m=+0.487972864 container remove 375a25d890ecdbe88e4460a97e4f3b0d07c1eb8b773f4f4cfaf08bf2be991d47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9817d028-13b8-4f0c-b501-093938ded45a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:42:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:42:11.632 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cb9b04fb-ecc2-4d4b-8374-b17e3415593e]: (4, ('Sat Jan 31 08:42:09 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9817d028-13b8-4f0c-b501-093938ded45a (375a25d890ecdbe88e4460a97e4f3b0d07c1eb8b773f4f4cfaf08bf2be991d47)\n375a25d890ecdbe88e4460a97e4f3b0d07c1eb8b773f4f4cfaf08bf2be991d47\nSat Jan 31 08:42:11 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9817d028-13b8-4f0c-b501-093938ded45a (375a25d890ecdbe88e4460a97e4f3b0d07c1eb8b773f4f4cfaf08bf2be991d47)\n375a25d890ecdbe88e4460a97e4f3b0d07c1eb8b773f4f4cfaf08bf2be991d47\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:42:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:42:11.633 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[91eb7014-14e6-4267-a4aa-cbfcae76dd46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:42:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:42:11.634 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9817d028-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:42:11 compute-2 nova_compute[226829]: 2026-01-31 08:42:11.636 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:11 compute-2 kernel: tap9817d028-10: left promiscuous mode
Jan 31 08:42:11 compute-2 nova_compute[226829]: 2026-01-31 08:42:11.642 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:42:11.645 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8f2379a4-579f-4538-850c-096b80fe2e3e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:42:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:42:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:11.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:42:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:42:11.663 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0921b5-6ae0-4004-a187-6f86da20aba1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:42:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:42:11.665 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b2cc5ea2-200e-4fd3-8786-acf58e7bb41b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:42:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:42:11.677 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[80681d05-6ccf-4aa1-b036-12afbffbd9fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 914647, 'reachable_time': 20924, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316410, 'error': None, 'target': 'ovnmeta-9817d028-13b8-4f0c-b501-093938ded45a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:42:11 compute-2 systemd[1]: run-netns-ovnmeta\x2d9817d028\x2d13b8\x2d4f0c\x2db501\x2d093938ded45a.mount: Deactivated successfully.
Jan 31 08:42:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:42:11.681 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9817d028-13b8-4f0c-b501-093938ded45a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:42:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:42:11.682 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[67cb663b-0c59-4b62-8e64-eabd9774324e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:42:11 compute-2 ceph-mon[77282]: pgmap v3279: 305 pgs: 305 active+clean; 507 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 83 KiB/s rd, 1.9 MiB/s wr, 69 op/s
Jan 31 08:42:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2291142291' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:42:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2291142291' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:42:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:12.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:12 compute-2 nova_compute[226829]: 2026-01-31 08:42:12.784 226833 DEBUG nova.compute.manager [req-b831c902-edca-4d06-8823-798c522cdefa req-e7db051c-6fe9-4f04-87ee-0f77e1c01a07 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Received event network-vif-plugged-5dc6375a-c15a-4d30-8638-a43e70cedff3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:42:12 compute-2 nova_compute[226829]: 2026-01-31 08:42:12.785 226833 DEBUG oslo_concurrency.lockutils [req-b831c902-edca-4d06-8823-798c522cdefa req-e7db051c-6fe9-4f04-87ee-0f77e1c01a07 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6bdcbd02-3b18-468f-9304-f2beb486b15f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:42:12 compute-2 nova_compute[226829]: 2026-01-31 08:42:12.785 226833 DEBUG oslo_concurrency.lockutils [req-b831c902-edca-4d06-8823-798c522cdefa req-e7db051c-6fe9-4f04-87ee-0f77e1c01a07 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6bdcbd02-3b18-468f-9304-f2beb486b15f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:42:12 compute-2 nova_compute[226829]: 2026-01-31 08:42:12.787 226833 DEBUG oslo_concurrency.lockutils [req-b831c902-edca-4d06-8823-798c522cdefa req-e7db051c-6fe9-4f04-87ee-0f77e1c01a07 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6bdcbd02-3b18-468f-9304-f2beb486b15f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:42:12 compute-2 nova_compute[226829]: 2026-01-31 08:42:12.787 226833 DEBUG nova.compute.manager [req-b831c902-edca-4d06-8823-798c522cdefa req-e7db051c-6fe9-4f04-87ee-0f77e1c01a07 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] No waiting events found dispatching network-vif-plugged-5dc6375a-c15a-4d30-8638-a43e70cedff3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:42:12 compute-2 nova_compute[226829]: 2026-01-31 08:42:12.787 226833 WARNING nova.compute.manager [req-b831c902-edca-4d06-8823-798c522cdefa req-e7db051c-6fe9-4f04-87ee-0f77e1c01a07 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Received unexpected event network-vif-plugged-5dc6375a-c15a-4d30-8638-a43e70cedff3 for instance with vm_state active and task_state deleting.
Jan 31 08:42:12 compute-2 nova_compute[226829]: 2026-01-31 08:42:12.972 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:13.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:14 compute-2 ceph-mon[77282]: pgmap v3280: 305 pgs: 305 active+clean; 507 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 216 KiB/s wr, 33 op/s
Jan 31 08:42:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:42:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:14.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:42:14 compute-2 nova_compute[226829]: 2026-01-31 08:42:14.696 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:15 compute-2 nova_compute[226829]: 2026-01-31 08:42:15.033 226833 INFO nova.virt.libvirt.driver [None req-5aa4581c-98e9-4827-8fe2-20befa4c1a51 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Deleting instance files /var/lib/nova/instances/6bdcbd02-3b18-468f-9304-f2beb486b15f_del
Jan 31 08:42:15 compute-2 nova_compute[226829]: 2026-01-31 08:42:15.034 226833 INFO nova.virt.libvirt.driver [None req-5aa4581c-98e9-4827-8fe2-20befa4c1a51 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Deletion of /var/lib/nova/instances/6bdcbd02-3b18-468f-9304-f2beb486b15f_del complete
Jan 31 08:42:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:15 compute-2 podman[316414]: 2026-01-31 08:42:15.205279211 +0000 UTC m=+0.090081409 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:42:15 compute-2 nova_compute[226829]: 2026-01-31 08:42:15.370 226833 INFO nova.compute.manager [None req-5aa4581c-98e9-4827-8fe2-20befa4c1a51 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Took 6.95 seconds to destroy the instance on the hypervisor.
Jan 31 08:42:15 compute-2 nova_compute[226829]: 2026-01-31 08:42:15.371 226833 DEBUG oslo.service.loopingcall [None req-5aa4581c-98e9-4827-8fe2-20befa4c1a51 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:42:15 compute-2 nova_compute[226829]: 2026-01-31 08:42:15.372 226833 DEBUG nova.compute.manager [-] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:42:15 compute-2 nova_compute[226829]: 2026-01-31 08:42:15.372 226833 DEBUG nova.network.neutron [-] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:42:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:15.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:15 compute-2 ceph-mon[77282]: pgmap v3281: 305 pgs: 305 active+clean; 445 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 390 KiB/s wr, 90 op/s
Jan 31 08:42:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:16.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:17 compute-2 sudo[316442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:42:17 compute-2 sudo[316442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:17 compute-2 sudo[316442]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:17 compute-2 sudo[316467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:42:17 compute-2 sudo[316467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:17 compute-2 sudo[316467]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:17.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:17 compute-2 ceph-mon[77282]: pgmap v3282: 305 pgs: 305 active+clean; 427 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 377 KiB/s wr, 119 op/s
Jan 31 08:42:17 compute-2 nova_compute[226829]: 2026-01-31 08:42:17.973 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:42:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:18.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:42:18 compute-2 nova_compute[226829]: 2026-01-31 08:42:18.576 226833 DEBUG nova.compute.manager [req-2cda9cc3-5bd1-49c1-be42-03634c80986b req-ce297122-cb27-4cfa-a4b0-b1d6d35201af 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Received event network-vif-deleted-5dc6375a-c15a-4d30-8638-a43e70cedff3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:42:18 compute-2 nova_compute[226829]: 2026-01-31 08:42:18.576 226833 INFO nova.compute.manager [req-2cda9cc3-5bd1-49c1-be42-03634c80986b req-ce297122-cb27-4cfa-a4b0-b1d6d35201af 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Neutron deleted interface 5dc6375a-c15a-4d30-8638-a43e70cedff3; detaching it from the instance and deleting it from the info cache
Jan 31 08:42:18 compute-2 nova_compute[226829]: 2026-01-31 08:42:18.576 226833 DEBUG nova.network.neutron [req-2cda9cc3-5bd1-49c1-be42-03634c80986b req-ce297122-cb27-4cfa-a4b0-b1d6d35201af 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:42:18 compute-2 nova_compute[226829]: 2026-01-31 08:42:18.682 226833 DEBUG nova.network.neutron [-] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:42:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:42:18 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1667139749' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:42:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:42:18 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1667139749' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:42:19 compute-2 nova_compute[226829]: 2026-01-31 08:42:19.473 226833 DEBUG nova.compute.manager [req-2cda9cc3-5bd1-49c1-be42-03634c80986b req-ce297122-cb27-4cfa-a4b0-b1d6d35201af 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Detach interface failed, port_id=5dc6375a-c15a-4d30-8638-a43e70cedff3, reason: Instance 6bdcbd02-3b18-468f-9304-f2beb486b15f could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:42:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:19.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:19 compute-2 nova_compute[226829]: 2026-01-31 08:42:19.699 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:19 compute-2 nova_compute[226829]: 2026-01-31 08:42:19.990 226833 INFO nova.compute.manager [-] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Took 4.62 seconds to deallocate network for instance.
Jan 31 08:42:20 compute-2 ceph-mon[77282]: pgmap v3283: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 369 KiB/s wr, 134 op/s
Jan 31 08:42:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1667139749' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:42:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1667139749' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:42:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:42:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:20.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:42:21 compute-2 podman[316494]: 2026-01-31 08:42:21.149287745 +0000 UTC m=+0.037086779 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:42:21 compute-2 nova_compute[226829]: 2026-01-31 08:42:21.154 226833 DEBUG oslo_concurrency.lockutils [None req-5aa4581c-98e9-4827-8fe2-20befa4c1a51 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:42:21 compute-2 nova_compute[226829]: 2026-01-31 08:42:21.154 226833 DEBUG oslo_concurrency.lockutils [None req-5aa4581c-98e9-4827-8fe2-20befa4c1a51 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:42:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:21.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:21 compute-2 nova_compute[226829]: 2026-01-31 08:42:21.710 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:21 compute-2 nova_compute[226829]: 2026-01-31 08:42:21.710 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:42:22 compute-2 ceph-mon[77282]: pgmap v3284: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 199 KiB/s wr, 137 op/s
Jan 31 08:42:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:22.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:22 compute-2 nova_compute[226829]: 2026-01-31 08:42:22.975 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:23 compute-2 nova_compute[226829]: 2026-01-31 08:42:23.073 226833 DEBUG oslo_concurrency.processutils [None req-5aa4581c-98e9-4827-8fe2-20befa4c1a51 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:42:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:42:23 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/445970851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:42:23 compute-2 nova_compute[226829]: 2026-01-31 08:42:23.512 226833 DEBUG oslo_concurrency.processutils [None req-5aa4581c-98e9-4827-8fe2-20befa4c1a51 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:42:23 compute-2 nova_compute[226829]: 2026-01-31 08:42:23.517 226833 DEBUG nova.compute.provider_tree [None req-5aa4581c-98e9-4827-8fe2-20befa4c1a51 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:42:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:23.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:23 compute-2 nova_compute[226829]: 2026-01-31 08:42:23.794 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:42:23.794 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=83, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=82) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:42:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:42:23.795 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:42:23 compute-2 nova_compute[226829]: 2026-01-31 08:42:23.852 226833 DEBUG nova.scheduler.client.report [None req-5aa4581c-98e9-4827-8fe2-20befa4c1a51 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:42:24 compute-2 nova_compute[226829]: 2026-01-31 08:42:24.207 226833 DEBUG oslo_concurrency.lockutils [None req-5aa4581c-98e9-4827-8fe2-20befa4c1a51 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 3.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:42:24 compute-2 ceph-mon[77282]: pgmap v3285: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 182 KiB/s wr, 112 op/s
Jan 31 08:42:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/445970851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:42:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:24.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:24 compute-2 nova_compute[226829]: 2026-01-31 08:42:24.641 226833 INFO nova.scheduler.client.report [None req-5aa4581c-98e9-4827-8fe2-20befa4c1a51 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Deleted allocations for instance 6bdcbd02-3b18-468f-9304-f2beb486b15f
Jan 31 08:42:24 compute-2 nova_compute[226829]: 2026-01-31 08:42:24.653 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848929.652388, 6bdcbd02-3b18-468f-9304-f2beb486b15f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:42:24 compute-2 nova_compute[226829]: 2026-01-31 08:42:24.653 226833 INFO nova.compute.manager [-] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] VM Stopped (Lifecycle Event)
Jan 31 08:42:24 compute-2 nova_compute[226829]: 2026-01-31 08:42:24.700 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:24 compute-2 nova_compute[226829]: 2026-01-31 08:42:24.879 226833 DEBUG nova.compute.manager [None req-d6296d56-64c7-4fe4-8146-ce23b16f23d4 - - - - - -] [instance: 6bdcbd02-3b18-468f-9304-f2beb486b15f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:42:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:25 compute-2 nova_compute[226829]: 2026-01-31 08:42:25.258 226833 DEBUG oslo_concurrency.lockutils [None req-5aa4581c-98e9-4827-8fe2-20befa4c1a51 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "6bdcbd02-3b18-468f-9304-f2beb486b15f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 16.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:42:25 compute-2 ceph-mon[77282]: pgmap v3286: 305 pgs: 305 active+clean; 436 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 129 op/s
Jan 31 08:42:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:25.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:26.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:42:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:27.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:42:27 compute-2 nova_compute[226829]: 2026-01-31 08:42:27.977 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:28 compute-2 ceph-mon[77282]: pgmap v3287: 305 pgs: 305 active+clean; 445 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 970 KiB/s rd, 2.0 MiB/s wr, 86 op/s
Jan 31 08:42:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:28.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:29.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:29 compute-2 nova_compute[226829]: 2026-01-31 08:42:29.703 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:30 compute-2 ceph-mon[77282]: pgmap v3288: 305 pgs: 305 active+clean; 448 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 266 KiB/s rd, 2.0 MiB/s wr, 60 op/s
Jan 31 08:42:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:30.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:42:30.797 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '83'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:42:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:31.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:32 compute-2 ceph-mon[77282]: pgmap v3289: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 491 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Jan 31 08:42:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:32.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:32 compute-2 nova_compute[226829]: 2026-01-31 08:42:32.980 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:33 compute-2 nova_compute[226829]: 2026-01-31 08:42:33.080 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:33 compute-2 nova_compute[226829]: 2026-01-31 08:42:33.146 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:33 compute-2 ceph-mon[77282]: pgmap v3290: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 487 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 31 08:42:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:42:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:33.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:42:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:34.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:34 compute-2 nova_compute[226829]: 2026-01-31 08:42:34.706 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:35.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:35 compute-2 ceph-mon[77282]: pgmap v3291: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 490 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Jan 31 08:42:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:42:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:36.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:42:37 compute-2 sudo[316545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:42:37 compute-2 sudo[316545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:37 compute-2 sudo[316545]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:37.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:37 compute-2 sudo[316570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:42:37 compute-2 sudo[316570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:37 compute-2 sudo[316570]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:37 compute-2 ceph-mon[77282]: pgmap v3292: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 459 KiB/s rd, 1.0 MiB/s wr, 55 op/s
Jan 31 08:42:37 compute-2 nova_compute[226829]: 2026-01-31 08:42:37.981 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:42:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:38.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:42:39 compute-2 nova_compute[226829]: 2026-01-31 08:42:39.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:39 compute-2 nova_compute[226829]: 2026-01-31 08:42:39.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:42:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:39.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:39 compute-2 nova_compute[226829]: 2026-01-31 08:42:39.709 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:40 compute-2 ceph-mon[77282]: pgmap v3293: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 269 KiB/s rd, 157 KiB/s wr, 40 op/s
Jan 31 08:42:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:40.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:42:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:41.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:42:41 compute-2 nova_compute[226829]: 2026-01-31 08:42:41.907 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:41 compute-2 nova_compute[226829]: 2026-01-31 08:42:41.907 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:42:42 compute-2 nova_compute[226829]: 2026-01-31 08:42:42.042 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:42:42 compute-2 ceph-mon[77282]: pgmap v3294: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 243 KiB/s rd, 122 KiB/s wr, 35 op/s
Jan 31 08:42:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:42.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:42 compute-2 nova_compute[226829]: 2026-01-31 08:42:42.983 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:43.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:44 compute-2 ceph-mon[77282]: pgmap v3295: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 KiB/s rd, 29 KiB/s wr, 4 op/s
Jan 31 08:42:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:42:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:44.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:42:44 compute-2 nova_compute[226829]: 2026-01-31 08:42:44.712 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:45.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:46 compute-2 ceph-mon[77282]: pgmap v3296: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 KiB/s rd, 29 KiB/s wr, 4 op/s
Jan 31 08:42:46 compute-2 podman[316600]: 2026-01-31 08:42:46.185811756 +0000 UTC m=+0.073059567 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:42:46 compute-2 nova_compute[226829]: 2026-01-31 08:42:46.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:46.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:47.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:47 compute-2 nova_compute[226829]: 2026-01-31 08:42:47.985 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:48 compute-2 ceph-mon[77282]: pgmap v3297: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 852 B/s rd, 19 KiB/s wr, 1 op/s
Jan 31 08:42:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:48.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:42:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:49.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:42:49 compute-2 nova_compute[226829]: 2026-01-31 08:42:49.715 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:50 compute-2 ceph-mon[77282]: pgmap v3298: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 31 08:42:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:42:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:50.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:42:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:51.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:52 compute-2 podman[316630]: 2026-01-31 08:42:52.15817461 +0000 UTC m=+0.044822300 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:42:52 compute-2 ceph-mon[77282]: pgmap v3299: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 31 08:42:52 compute-2 nova_compute[226829]: 2026-01-31 08:42:52.396 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:52.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:52 compute-2 nova_compute[226829]: 2026-01-31 08:42:52.988 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:53.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:54 compute-2 ceph-mon[77282]: pgmap v3300: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:42:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/4045926153' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:42:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/4045926153' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:42:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:54.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:54 compute-2 nova_compute[226829]: 2026-01-31 08:42:54.717 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2214913132' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:42:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:55.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:56 compute-2 ceph-mon[77282]: pgmap v3301: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:42:56 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3260758072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:42:56 compute-2 nova_compute[226829]: 2026-01-31 08:42:56.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:42:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:56.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:42:57 compute-2 nova_compute[226829]: 2026-01-31 08:42:57.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:57 compute-2 nova_compute[226829]: 2026-01-31 08:42:57.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:42:57 compute-2 nova_compute[226829]: 2026-01-31 08:42:57.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:42:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:57.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:57 compute-2 sudo[316652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:42:57 compute-2 sudo[316652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:57 compute-2 sudo[316652]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:57 compute-2 sudo[316677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:42:57 compute-2 sudo[316677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:57 compute-2 sudo[316677]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:57 compute-2 nova_compute[226829]: 2026-01-31 08:42:57.990 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:58 compute-2 nova_compute[226829]: 2026-01-31 08:42:58.088 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:42:58 compute-2 nova_compute[226829]: 2026-01-31 08:42:58.089 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:58 compute-2 ceph-mon[77282]: pgmap v3302: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 KiB/s rd, 2 op/s
Jan 31 08:42:58 compute-2 nova_compute[226829]: 2026-01-31 08:42:58.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:58.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:59 compute-2 nova_compute[226829]: 2026-01-31 08:42:59.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:59 compute-2 ceph-mon[77282]: pgmap v3303: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 2 op/s
Jan 31 08:42:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:42:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:59.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:59 compute-2 nova_compute[226829]: 2026-01-31 08:42:59.720 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:43:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:00.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:43:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:01.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:01 compute-2 ceph-mon[77282]: pgmap v3304: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 71 op/s
Jan 31 08:43:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:02.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:02 compute-2 sudo[316705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:43:02 compute-2 sudo[316705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:02 compute-2 sudo[316705]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:02 compute-2 sudo[316730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:43:02 compute-2 sudo[316730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:02 compute-2 sudo[316730]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:02 compute-2 sudo[316755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:43:02 compute-2 sudo[316755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:02 compute-2 sudo[316755]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:02 compute-2 sudo[316780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:43:02 compute-2 sudo[316780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:02 compute-2 nova_compute[226829]: 2026-01-31 08:43:02.992 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:03 compute-2 sudo[316780]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:03.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:03 compute-2 ceph-mon[77282]: pgmap v3305: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 71 op/s
Jan 31 08:43:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 08:43:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:43:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:43:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:43:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:43:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:43:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:43:04 compute-2 nova_compute[226829]: 2026-01-31 08:43:04.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:04.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:04 compute-2 nova_compute[226829]: 2026-01-31 08:43:04.723 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:05.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:05 compute-2 ceph-mon[77282]: pgmap v3306: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 426 B/s wr, 82 op/s
Jan 31 08:43:06 compute-2 nova_compute[226829]: 2026-01-31 08:43:06.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:43:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:06.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:43:06 compute-2 nova_compute[226829]: 2026-01-31 08:43:06.809 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:43:06 compute-2 nova_compute[226829]: 2026-01-31 08:43:06.809 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:43:06 compute-2 nova_compute[226829]: 2026-01-31 08:43:06.810 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:43:06 compute-2 nova_compute[226829]: 2026-01-31 08:43:06.810 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:43:06 compute-2 nova_compute[226829]: 2026-01-31 08:43:06.810 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:43:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:43:06.921 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:43:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:43:06.922 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:43:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:43:06.923 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:43:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:43:07 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3970953333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:07 compute-2 nova_compute[226829]: 2026-01-31 08:43:07.241 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:43:07 compute-2 nova_compute[226829]: 2026-01-31 08:43:07.389 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:43:07 compute-2 nova_compute[226829]: 2026-01-31 08:43:07.391 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4133MB free_disk=20.942401885986328GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:43:07 compute-2 nova_compute[226829]: 2026-01-31 08:43:07.391 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:43:07 compute-2 nova_compute[226829]: 2026-01-31 08:43:07.391 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:43:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:07.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:07 compute-2 ceph-mon[77282]: pgmap v3307: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 682 B/s wr, 84 op/s
Jan 31 08:43:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2758710956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3970953333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:07 compute-2 nova_compute[226829]: 2026-01-31 08:43:07.992 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:08 compute-2 nova_compute[226829]: 2026-01-31 08:43:08.439 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:43:08 compute-2 nova_compute[226829]: 2026-01-31 08:43:08.439 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:43:08 compute-2 nova_compute[226829]: 2026-01-31 08:43:08.486 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:43:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:08.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:43:08 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1569233818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:08 compute-2 nova_compute[226829]: 2026-01-31 08:43:08.884 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:43:08 compute-2 nova_compute[226829]: 2026-01-31 08:43:08.889 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:43:09 compute-2 nova_compute[226829]: 2026-01-31 08:43:09.129 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:43:09 compute-2 sudo[316883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:43:09 compute-2 sudo[316883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:09 compute-2 sudo[316883]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:09 compute-2 sudo[316908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:43:09 compute-2 sudo[316908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:09 compute-2 sudo[316908]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:43:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:09.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:43:09 compute-2 nova_compute[226829]: 2026-01-31 08:43:09.725 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:09 compute-2 ceph-mon[77282]: pgmap v3308: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 682 B/s wr, 87 op/s
Jan 31 08:43:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1569233818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:09 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:43:09 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:43:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2257022816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:09 compute-2 nova_compute[226829]: 2026-01-31 08:43:09.987 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:43:09 compute-2 nova_compute[226829]: 2026-01-31 08:43:09.988 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:43:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:43:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:10.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:43:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:11.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:11 compute-2 ceph-mon[77282]: pgmap v3309: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 13 KiB/s wr, 123 op/s
Jan 31 08:43:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2965992183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1324190554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:12.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:12 compute-2 nova_compute[226829]: 2026-01-31 08:43:12.993 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:13.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:13 compute-2 ceph-mon[77282]: pgmap v3310: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 13 KiB/s wr, 55 op/s
Jan 31 08:43:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3180732374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:14.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:14 compute-2 nova_compute[226829]: 2026-01-31 08:43:14.729 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:15.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:15 compute-2 ceph-mon[77282]: pgmap v3311: 305 pgs: 305 active+clean; 460 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 533 KiB/s rd, 23 KiB/s wr, 56 op/s
Jan 31 08:43:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:16.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:17 compute-2 podman[316937]: 2026-01-31 08:43:17.222138537 +0000 UTC m=+0.104342747 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:43:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:17.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:17 compute-2 sudo[316964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:43:17 compute-2 sudo[316964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:17 compute-2 sudo[316964]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:17 compute-2 sudo[316989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:43:17 compute-2 sudo[316989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:17 compute-2 sudo[316989]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:17 compute-2 ceph-mon[77282]: pgmap v3312: 305 pgs: 305 active+clean; 460 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 525 KiB/s rd, 23 KiB/s wr, 45 op/s
Jan 31 08:43:17 compute-2 nova_compute[226829]: 2026-01-31 08:43:17.997 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:18.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:19 compute-2 nova_compute[226829]: 2026-01-31 08:43:19.733 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:43:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:19.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:43:19 compute-2 ceph-mon[77282]: pgmap v3313: 305 pgs: 305 active+clean; 460 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 526 KiB/s rd, 23 KiB/s wr, 46 op/s
Jan 31 08:43:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:20.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:21.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:22 compute-2 ceph-mon[77282]: pgmap v3314: 305 pgs: 305 active+clean; 460 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 502 KiB/s rd, 23 KiB/s wr, 51 op/s
Jan 31 08:43:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:22.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:22 compute-2 nova_compute[226829]: 2026-01-31 08:43:22.989 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:22 compute-2 nova_compute[226829]: 2026-01-31 08:43:22.989 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:43:22 compute-2 nova_compute[226829]: 2026-01-31 08:43:22.998 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:23 compute-2 podman[317017]: 2026-01-31 08:43:23.156755876 +0000 UTC m=+0.048109328 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent)
Jan 31 08:43:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:23.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:24 compute-2 ceph-mon[77282]: pgmap v3315: 305 pgs: 305 active+clean; 460 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 11 KiB/s wr, 14 op/s
Jan 31 08:43:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:43:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:24.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:43:24 compute-2 nova_compute[226829]: 2026-01-31 08:43:24.736 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:43:25.121 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=84, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=83) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:43:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:43:25.122 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:43:25 compute-2 nova_compute[226829]: 2026-01-31 08:43:25.143 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:25.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:26 compute-2 ceph-mon[77282]: pgmap v3316: 305 pgs: 305 active+clean; 460 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 14 KiB/s wr, 14 op/s
Jan 31 08:43:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:26.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:27 compute-2 nova_compute[226829]: 2026-01-31 08:43:27.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:27.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:28 compute-2 nova_compute[226829]: 2026-01-31 08:43:27.999 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:43:28.124 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '84'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:43:28 compute-2 ceph-mon[77282]: pgmap v3317: 305 pgs: 305 active+clean; 460 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 3.2 KiB/s wr, 13 op/s
Jan 31 08:43:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:28.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:29 compute-2 nova_compute[226829]: 2026-01-31 08:43:29.739 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:29.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:30 compute-2 ceph-mon[77282]: pgmap v3318: 305 pgs: 305 active+clean; 453 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 3.2 KiB/s wr, 13 op/s
Jan 31 08:43:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/817827549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:30.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1048975873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:43:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:31.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:43:32 compute-2 ceph-mon[77282]: pgmap v3319: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 4.1 KiB/s wr, 37 op/s
Jan 31 08:43:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:32.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:33 compute-2 nova_compute[226829]: 2026-01-31 08:43:33.000 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:33.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:34 compute-2 ceph-mon[77282]: pgmap v3320: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 3.8 KiB/s wr, 27 op/s
Jan 31 08:43:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:34.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:34 compute-2 nova_compute[226829]: 2026-01-31 08:43:34.741 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:35 compute-2 ceph-mon[77282]: pgmap v3321: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 4.2 KiB/s wr, 31 op/s
Jan 31 08:43:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:35.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:43:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:36.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:43:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:37.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:37 compute-2 ceph-mon[77282]: pgmap v3322: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Jan 31 08:43:38 compute-2 nova_compute[226829]: 2026-01-31 08:43:38.001 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:38 compute-2 sudo[317045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:43:38 compute-2 sudo[317045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:38 compute-2 sudo[317045]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:38 compute-2 sudo[317070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:43:38 compute-2 sudo[317070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:38 compute-2 sudo[317070]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:38.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:39 compute-2 nova_compute[226829]: 2026-01-31 08:43:39.744 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:39.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:39 compute-2 ceph-mon[77282]: pgmap v3323: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Jan 31 08:43:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2520344341' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:40.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:41.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:41 compute-2 ceph-mon[77282]: pgmap v3324: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Jan 31 08:43:41 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1919391643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:42.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:43 compute-2 nova_compute[226829]: 2026-01-31 08:43:43.003 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:43:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:43.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:43:43 compute-2 ceph-mon[77282]: pgmap v3325: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.6 KiB/s rd, 597 B/s wr, 13 op/s
Jan 31 08:43:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:43:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:44.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:43:44 compute-2 nova_compute[226829]: 2026-01-31 08:43:44.746 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:45.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:45 compute-2 ceph-mon[77282]: pgmap v3326: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.6 KiB/s rd, 597 B/s wr, 13 op/s
Jan 31 08:43:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:46.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:47.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:47 compute-2 ceph-mon[77282]: pgmap v3327: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.9 KiB/s rd, 255 B/s wr, 9 op/s
Jan 31 08:43:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1650116221' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:43:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1650116221' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:43:48 compute-2 nova_compute[226829]: 2026-01-31 08:43:48.005 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:48 compute-2 podman[317100]: 2026-01-31 08:43:48.181644591 +0000 UTC m=+0.064969337 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, 
org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 08:43:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:48.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1500088091' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:43:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1500088091' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:43:49 compute-2 nova_compute[226829]: 2026-01-31 08:43:49.749 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:49.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:49 compute-2 ceph-mon[77282]: pgmap v3328: 305 pgs: 305 active+clean; 341 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.7 KiB/s rd, 562 KiB/s wr, 12 op/s
Jan 31 08:43:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:43:50 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4162508920' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:43:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:43:50 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4162508920' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:43:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:50.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/4162508920' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:43:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/4162508920' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:43:51 compute-2 nova_compute[226829]: 2026-01-31 08:43:51.704 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:51.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:51 compute-2 ceph-mon[77282]: pgmap v3329: 305 pgs: 305 active+clean; 203 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Jan 31 08:43:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:43:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:52.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:43:53 compute-2 nova_compute[226829]: 2026-01-31 08:43:53.048 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:43:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1823716638' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:43:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:43:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1823716638' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:43:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:53.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:54 compute-2 podman[317129]: 2026-01-31 08:43:54.176968768 +0000 UTC m=+0.066243481 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 08:43:54 compute-2 ceph-mon[77282]: pgmap v3330: 305 pgs: 305 active+clean; 203 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Jan 31 08:43:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1823716638' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:43:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1823716638' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:43:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:54.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:54 compute-2 nova_compute[226829]: 2026-01-31 08:43:54.752 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:55.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:56.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:56 compute-2 ceph-mon[77282]: pgmap v3331: 305 pgs: 305 active+clean; 187 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 1.8 MiB/s wr, 78 op/s
Jan 31 08:43:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:57.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/416574134' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:43:57 compute-2 ceph-mon[77282]: pgmap v3332: 305 pgs: 305 active+clean; 187 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 1.8 MiB/s wr, 78 op/s
Jan 31 08:43:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2555142641' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:43:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e383 e383: 3 total, 3 up, 3 in
Jan 31 08:43:58 compute-2 nova_compute[226829]: 2026-01-31 08:43:58.050 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:58 compute-2 sudo[317150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:43:58 compute-2 sudo[317150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:58 compute-2 sudo[317150]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:58 compute-2 sudo[317175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:43:58 compute-2 sudo[317175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:58 compute-2 sudo[317175]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:58 compute-2 nova_compute[226829]: 2026-01-31 08:43:58.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:58 compute-2 nova_compute[226829]: 2026-01-31 08:43:58.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:43:58 compute-2 nova_compute[226829]: 2026-01-31 08:43:58.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:43:58 compute-2 nova_compute[226829]: 2026-01-31 08:43:58.646 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:43:58 compute-2 nova_compute[226829]: 2026-01-31 08:43:58.646 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:43:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:58.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:43:58 compute-2 ceph-mon[77282]: osdmap e383: 3 total, 3 up, 3 in
Jan 31 08:43:59 compute-2 nova_compute[226829]: 2026-01-31 08:43:59.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:59 compute-2 nova_compute[226829]: 2026-01-31 08:43:59.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:59 compute-2 nova_compute[226829]: 2026-01-31 08:43:59.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:59 compute-2 nova_compute[226829]: 2026-01-31 08:43:59.754 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:43:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:59.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:00 compute-2 ceph-mon[77282]: pgmap v3334: 305 pgs: 305 active+clean; 187 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 1.5 MiB/s wr, 81 op/s
Jan 31 08:44:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:44:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:00.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:44:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:01.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:02 compute-2 ceph-mon[77282]: pgmap v3335: 305 pgs: 305 active+clean; 183 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 1.5 KiB/s wr, 30 op/s
Jan 31 08:44:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:44:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:02.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:44:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/783645137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:44:03 compute-2 nova_compute[226829]: 2026-01-31 08:44:03.053 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:03.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:04 compute-2 ceph-mon[77282]: pgmap v3336: 305 pgs: 305 active+clean; 183 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 1.5 KiB/s wr, 30 op/s
Jan 31 08:44:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 e384: 3 total, 3 up, 3 in
Jan 31 08:44:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:04.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:04 compute-2 nova_compute[226829]: 2026-01-31 08:44:04.757 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:05 compute-2 ceph-mon[77282]: osdmap e384: 3 total, 3 up, 3 in
Jan 31 08:44:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3149711549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #160. Immutable memtables: 0.
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:44:05.232361) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 160
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849045232451, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 2396, "num_deletes": 253, "total_data_size": 5783552, "memory_usage": 5860448, "flush_reason": "Manual Compaction"}
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #161: started
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849045258787, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 161, "file_size": 3781573, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77244, "largest_seqno": 79635, "table_properties": {"data_size": 3771879, "index_size": 6123, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20388, "raw_average_key_size": 20, "raw_value_size": 3752409, "raw_average_value_size": 3801, "num_data_blocks": 266, "num_entries": 987, "num_filter_entries": 987, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848834, "oldest_key_time": 1769848834, "file_creation_time": 1769849045, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 26507 microseconds, and 8444 cpu microseconds.
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:44:05.258872) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #161: 3781573 bytes OK
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:44:05.258898) [db/memtable_list.cc:519] [default] Level-0 commit table #161 started
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:44:05.260175) [db/memtable_list.cc:722] [default] Level-0 commit table #161: memtable #1 done
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:44:05.260194) EVENT_LOG_v1 {"time_micros": 1769849045260187, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:44:05.260216) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 5773091, prev total WAL file size 5773091, number of live WAL files 2.
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000157.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:44:05.261166) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [161(3692KB)], [159(9970KB)]
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849045261246, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [161], "files_L6": [159], "score": -1, "input_data_size": 13991661, "oldest_snapshot_seqno": -1}
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #162: 10032 keys, 12039817 bytes, temperature: kUnknown
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849045343287, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 162, "file_size": 12039817, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11976633, "index_size": 37023, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25093, "raw_key_size": 264850, "raw_average_key_size": 26, "raw_value_size": 11802536, "raw_average_value_size": 1176, "num_data_blocks": 1405, "num_entries": 10032, "num_filter_entries": 10032, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769849045, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 162, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:44:05.343550) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 12039817 bytes
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:44:05.344521) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.4 rd, 146.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 9.7 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 10560, records dropped: 528 output_compression: NoCompression
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:44:05.344537) EVENT_LOG_v1 {"time_micros": 1769849045344528, "job": 102, "event": "compaction_finished", "compaction_time_micros": 82115, "compaction_time_cpu_micros": 32033, "output_level": 6, "num_output_files": 1, "total_output_size": 12039817, "num_input_records": 10560, "num_output_records": 10032, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849045345009, "job": 102, "event": "table_file_deletion", "file_number": 161}
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000159.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849045345707, "job": 102, "event": "table_file_deletion", "file_number": 159}
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:44:05.261079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:44:05.345799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:44:05.345805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:44:05.345811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:44:05.345812) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:44:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:44:05.345814) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:44:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:05.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:06 compute-2 ceph-mon[77282]: pgmap v3338: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 20 KiB/s wr, 116 op/s
Jan 31 08:44:06 compute-2 nova_compute[226829]: 2026-01-31 08:44:06.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:06.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:44:06.921 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:44:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:44:06.921 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:44:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:44:06.922 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:44:07 compute-2 nova_compute[226829]: 2026-01-31 08:44:07.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:07 compute-2 nova_compute[226829]: 2026-01-31 08:44:07.587 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:44:07 compute-2 nova_compute[226829]: 2026-01-31 08:44:07.587 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:44:07 compute-2 nova_compute[226829]: 2026-01-31 08:44:07.587 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:44:07 compute-2 nova_compute[226829]: 2026-01-31 08:44:07.587 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:44:07 compute-2 nova_compute[226829]: 2026-01-31 08:44:07.588 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:44:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:07.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:44:08 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/197898601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:44:08 compute-2 nova_compute[226829]: 2026-01-31 08:44:08.029 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:44:08 compute-2 nova_compute[226829]: 2026-01-31 08:44:08.054 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:08 compute-2 nova_compute[226829]: 2026-01-31 08:44:08.173 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:44:08 compute-2 nova_compute[226829]: 2026-01-31 08:44:08.174 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4153MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:44:08 compute-2 nova_compute[226829]: 2026-01-31 08:44:08.174 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:44:08 compute-2 nova_compute[226829]: 2026-01-31 08:44:08.175 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:44:08 compute-2 ceph-mon[77282]: pgmap v3339: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 19 KiB/s wr, 130 op/s
Jan 31 08:44:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/197898601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:44:08 compute-2 nova_compute[226829]: 2026-01-31 08:44:08.662 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:44:08 compute-2 nova_compute[226829]: 2026-01-31 08:44:08.663 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:44:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:44:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:08.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:44:08 compute-2 nova_compute[226829]: 2026-01-31 08:44:08.742 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing inventories for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:44:08 compute-2 nova_compute[226829]: 2026-01-31 08:44:08.850 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating ProviderTree inventory for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:44:08 compute-2 nova_compute[226829]: 2026-01-31 08:44:08.851 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating inventory in ProviderTree for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:44:08 compute-2 nova_compute[226829]: 2026-01-31 08:44:08.906 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing aggregate associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:44:08 compute-2 nova_compute[226829]: 2026-01-31 08:44:08.926 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing trait associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, traits: COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:44:08 compute-2 nova_compute[226829]: 2026-01-31 08:44:08.944 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:44:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:44:09 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/98158991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:44:09 compute-2 nova_compute[226829]: 2026-01-31 08:44:09.352 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:44:09 compute-2 nova_compute[226829]: 2026-01-31 08:44:09.358 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:44:09 compute-2 sudo[317249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:44:09 compute-2 sudo[317249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:09 compute-2 sudo[317249]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:09 compute-2 sudo[317274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:44:09 compute-2 sudo[317274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:09 compute-2 sudo[317274]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:09 compute-2 sudo[317299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:44:09 compute-2 sudo[317299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:09 compute-2 sudo[317299]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:09 compute-2 sudo[317324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:44:09 compute-2 sudo[317324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:09 compute-2 nova_compute[226829]: 2026-01-31 08:44:09.592 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:44:09 compute-2 nova_compute[226829]: 2026-01-31 08:44:09.761 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:09.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:09 compute-2 nova_compute[226829]: 2026-01-31 08:44:09.840 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:44:09 compute-2 nova_compute[226829]: 2026-01-31 08:44:09.841 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:44:09 compute-2 sudo[317324]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:10 compute-2 ceph-mon[77282]: pgmap v3340: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 110 op/s
Jan 31 08:44:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/98158991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:44:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:44:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:44:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:44:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:44:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:44:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:44:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:44:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:10.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:44:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:11.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:12 compute-2 ceph-mon[77282]: pgmap v3341: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 84 op/s
Jan 31 08:44:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/898821738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:44:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:12.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:13 compute-2 nova_compute[226829]: 2026-01-31 08:44:13.056 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:13 compute-2 ceph-mon[77282]: pgmap v3342: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 84 op/s
Jan 31 08:44:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:13.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3985699015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:44:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:14.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:14 compute-2 nova_compute[226829]: 2026-01-31 08:44:14.764 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:15 compute-2 ceph-mon[77282]: pgmap v3343: 305 pgs: 305 active+clean; 180 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.6 MiB/s wr, 85 op/s
Jan 31 08:44:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:15.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:44:16.187 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=85, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=84) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:44:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:44:16.188 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:44:16 compute-2 nova_compute[226829]: 2026-01-31 08:44:16.188 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:44:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:16.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:44:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:44:17.190 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '85'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:44:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:17.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:18 compute-2 nova_compute[226829]: 2026-01-31 08:44:18.058 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:18 compute-2 ceph-mon[77282]: pgmap v3344: 305 pgs: 305 active+clean; 193 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 803 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 31 08:44:18 compute-2 sudo[317387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:44:18 compute-2 sudo[317387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:18 compute-2 sudo[317387]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:18 compute-2 sudo[317418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:44:18 compute-2 sudo[317418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:18 compute-2 sudo[317418]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:18 compute-2 podman[317411]: 2026-01-31 08:44:18.413698262 +0000 UTC m=+0.063482467 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:44:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:44:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:18.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:44:19 compute-2 sudo[317464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:44:19 compute-2 sudo[317464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:19 compute-2 sudo[317464]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:19 compute-2 sudo[317489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:44:19 compute-2 sudo[317489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:19 compute-2 sudo[317489]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:19 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:44:19 compute-2 ceph-mon[77282]: pgmap v3345: 305 pgs: 305 active+clean; 195 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Jan 31 08:44:19 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:44:19 compute-2 nova_compute[226829]: 2026-01-31 08:44:19.768 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:19.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:44:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:20.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:44:21 compute-2 ceph-mon[77282]: pgmap v3346: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:44:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:21.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:22.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:22 compute-2 nova_compute[226829]: 2026-01-31 08:44:22.842 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:22 compute-2 nova_compute[226829]: 2026-01-31 08:44:22.843 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:44:23 compute-2 nova_compute[226829]: 2026-01-31 08:44:23.059 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:23.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:23 compute-2 ceph-mon[77282]: pgmap v3347: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:44:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:24.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:24 compute-2 nova_compute[226829]: 2026-01-31 08:44:24.770 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:25 compute-2 podman[317517]: 2026-01-31 08:44:25.156313051 +0000 UTC m=+0.037176562 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 08:44:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:25.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:25 compute-2 ceph-mon[77282]: pgmap v3348: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:44:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:26.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:27.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:27 compute-2 ceph-mon[77282]: pgmap v3349: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 99 KiB/s rd, 762 KiB/s wr, 31 op/s
Jan 31 08:44:28 compute-2 nova_compute[226829]: 2026-01-31 08:44:28.061 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:28.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:29 compute-2 nova_compute[226829]: 2026-01-31 08:44:29.774 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:29.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:29 compute-2 ceph-mon[77282]: pgmap v3350: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 92 KiB/s wr, 16 op/s
Jan 31 08:44:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:30.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:31.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:31 compute-2 ceph-mon[77282]: pgmap v3351: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 65 KiB/s wr, 10 op/s
Jan 31 08:44:32 compute-2 nova_compute[226829]: 2026-01-31 08:44:32.316 226833 DEBUG oslo_concurrency.lockutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "77d3758e-409b-4bdb-ba47-044b8c99ba4d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:44:32 compute-2 nova_compute[226829]: 2026-01-31 08:44:32.316 226833 DEBUG oslo_concurrency.lockutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "77d3758e-409b-4bdb-ba47-044b8c99ba4d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:44:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:32.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:32 compute-2 nova_compute[226829]: 2026-01-31 08:44:32.889 226833 DEBUG nova.compute.manager [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:44:33 compute-2 nova_compute[226829]: 2026-01-31 08:44:33.063 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:33.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:33 compute-2 nova_compute[226829]: 2026-01-31 08:44:33.943 226833 DEBUG oslo_concurrency.lockutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:44:33 compute-2 nova_compute[226829]: 2026-01-31 08:44:33.943 226833 DEBUG oslo_concurrency.lockutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:44:33 compute-2 nova_compute[226829]: 2026-01-31 08:44:33.953 226833 DEBUG nova.virt.hardware [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:44:33 compute-2 nova_compute[226829]: 2026-01-31 08:44:33.953 226833 INFO nova.compute.claims [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:44:33 compute-2 ceph-mon[77282]: pgmap v3352: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 13 KiB/s wr, 0 op/s
Jan 31 08:44:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:34.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:34 compute-2 nova_compute[226829]: 2026-01-31 08:44:34.776 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:35.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:35 compute-2 ceph-mon[77282]: pgmap v3353: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Jan 31 08:44:36 compute-2 nova_compute[226829]: 2026-01-31 08:44:36.041 226833 DEBUG oslo_concurrency.processutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:44:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:44:36 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/225521445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:44:36 compute-2 nova_compute[226829]: 2026-01-31 08:44:36.454 226833 DEBUG oslo_concurrency.processutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:44:36 compute-2 nova_compute[226829]: 2026-01-31 08:44:36.460 226833 DEBUG nova.compute.provider_tree [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:44:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:44:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:36.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:44:37 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/225521445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:44:37 compute-2 nova_compute[226829]: 2026-01-31 08:44:37.251 226833 DEBUG nova.scheduler.client.report [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:44:37 compute-2 nova_compute[226829]: 2026-01-31 08:44:37.683 226833 DEBUG oslo_concurrency.lockutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 3.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:44:37 compute-2 nova_compute[226829]: 2026-01-31 08:44:37.684 226833 DEBUG nova.compute.manager [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:44:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:37.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:38 compute-2 nova_compute[226829]: 2026-01-31 08:44:38.065 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:38 compute-2 ceph-mon[77282]: pgmap v3354: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 31 08:44:38 compute-2 sudo[317567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:44:38 compute-2 nova_compute[226829]: 2026-01-31 08:44:38.437 226833 DEBUG nova.compute.manager [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:44:38 compute-2 nova_compute[226829]: 2026-01-31 08:44:38.438 226833 DEBUG nova.network.neutron [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:44:38 compute-2 sudo[317567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:38 compute-2 sudo[317567]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:38 compute-2 sudo[317592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:44:38 compute-2 sudo[317592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:38 compute-2 sudo[317592]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:38.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:38 compute-2 nova_compute[226829]: 2026-01-31 08:44:38.712 226833 INFO nova.virt.libvirt.driver [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:44:39 compute-2 nova_compute[226829]: 2026-01-31 08:44:39.221 226833 DEBUG nova.compute.manager [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:44:39 compute-2 nova_compute[226829]: 2026-01-31 08:44:39.779 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:39.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:40 compute-2 ceph-mon[77282]: pgmap v3355: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 31 08:44:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:40.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:41 compute-2 nova_compute[226829]: 2026-01-31 08:44:41.452 226833 DEBUG nova.compute.manager [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:44:41 compute-2 nova_compute[226829]: 2026-01-31 08:44:41.453 226833 DEBUG nova.virt.libvirt.driver [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:44:41 compute-2 nova_compute[226829]: 2026-01-31 08:44:41.453 226833 INFO nova.virt.libvirt.driver [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Creating image(s)
Jan 31 08:44:41 compute-2 nova_compute[226829]: 2026-01-31 08:44:41.481 226833 DEBUG nova.storage.rbd_utils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 77d3758e-409b-4bdb-ba47-044b8c99ba4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:44:41 compute-2 nova_compute[226829]: 2026-01-31 08:44:41.509 226833 DEBUG nova.storage.rbd_utils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 77d3758e-409b-4bdb-ba47-044b8c99ba4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:44:41 compute-2 nova_compute[226829]: 2026-01-31 08:44:41.533 226833 DEBUG nova.storage.rbd_utils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 77d3758e-409b-4bdb-ba47-044b8c99ba4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:44:41 compute-2 nova_compute[226829]: 2026-01-31 08:44:41.536 226833 DEBUG oslo_concurrency.processutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:44:41 compute-2 nova_compute[226829]: 2026-01-31 08:44:41.593 226833 DEBUG oslo_concurrency.processutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:44:41 compute-2 nova_compute[226829]: 2026-01-31 08:44:41.594 226833 DEBUG oslo_concurrency.lockutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:44:41 compute-2 nova_compute[226829]: 2026-01-31 08:44:41.594 226833 DEBUG oslo_concurrency.lockutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:44:41 compute-2 nova_compute[226829]: 2026-01-31 08:44:41.595 226833 DEBUG oslo_concurrency.lockutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:44:41 compute-2 nova_compute[226829]: 2026-01-31 08:44:41.618 226833 DEBUG nova.storage.rbd_utils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 77d3758e-409b-4bdb-ba47-044b8c99ba4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:44:41 compute-2 nova_compute[226829]: 2026-01-31 08:44:41.621 226833 DEBUG oslo_concurrency.processutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 77d3758e-409b-4bdb-ba47-044b8c99ba4d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:44:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:44:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:41.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:44:41 compute-2 nova_compute[226829]: 2026-01-31 08:44:41.908 226833 DEBUG oslo_concurrency.processutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 77d3758e-409b-4bdb-ba47-044b8c99ba4d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:44:41 compute-2 nova_compute[226829]: 2026-01-31 08:44:41.982 226833 DEBUG nova.storage.rbd_utils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] resizing rbd image 77d3758e-409b-4bdb-ba47-044b8c99ba4d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:44:42 compute-2 nova_compute[226829]: 2026-01-31 08:44:42.090 226833 DEBUG nova.objects.instance [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lazy-loading 'migration_context' on Instance uuid 77d3758e-409b-4bdb-ba47-044b8c99ba4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:44:42 compute-2 ceph-mon[77282]: pgmap v3356: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 31 08:44:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:42.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:42 compute-2 nova_compute[226829]: 2026-01-31 08:44:42.760 226833 DEBUG nova.virt.libvirt.driver [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:44:42 compute-2 nova_compute[226829]: 2026-01-31 08:44:42.760 226833 DEBUG nova.virt.libvirt.driver [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Ensure instance console log exists: /var/lib/nova/instances/77d3758e-409b-4bdb-ba47-044b8c99ba4d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:44:42 compute-2 nova_compute[226829]: 2026-01-31 08:44:42.761 226833 DEBUG oslo_concurrency.lockutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:44:42 compute-2 nova_compute[226829]: 2026-01-31 08:44:42.761 226833 DEBUG oslo_concurrency.lockutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:44:42 compute-2 nova_compute[226829]: 2026-01-31 08:44:42.761 226833 DEBUG oslo_concurrency.lockutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:44:43 compute-2 nova_compute[226829]: 2026-01-31 08:44:43.067 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:43 compute-2 ceph-mon[77282]: pgmap v3357: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 31 08:44:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:43.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:44.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:44 compute-2 nova_compute[226829]: 2026-01-31 08:44:44.782 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:45 compute-2 nova_compute[226829]: 2026-01-31 08:44:45.059 226833 DEBUG nova.policy [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c6968a1ee10e4e3b8651ffe0240a7e46', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ba35ae24dbf3443e8a526dce39c6793b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:44:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:45.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:45 compute-2 ceph-mon[77282]: pgmap v3358: 305 pgs: 305 active+clean; 229 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.0 KiB/s rd, 867 KiB/s wr, 14 op/s
Jan 31 08:44:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:44:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:46.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:44:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:47.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:47 compute-2 ceph-mon[77282]: pgmap v3359: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:44:48 compute-2 nova_compute[226829]: 2026-01-31 08:44:48.070 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:44:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:48.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:44:49 compute-2 podman[317788]: 2026-01-31 08:44:49.191235488 +0000 UTC m=+0.078983337 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 08:44:49 compute-2 nova_compute[226829]: 2026-01-31 08:44:49.784 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:49.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:49 compute-2 ceph-mon[77282]: pgmap v3360: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:44:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:50.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:51 compute-2 nova_compute[226829]: 2026-01-31 08:44:51.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:51.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:51 compute-2 ceph-mon[77282]: pgmap v3361: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:44:52 compute-2 nova_compute[226829]: 2026-01-31 08:44:52.117 226833 DEBUG nova.network.neutron [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Successfully created port: c16d1b26-cea9-482d-85b5-1691e078aa5d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:44:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:44:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:52.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:44:53 compute-2 nova_compute[226829]: 2026-01-31 08:44:53.070 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:53.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:53 compute-2 ceph-mon[77282]: pgmap v3362: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:44:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3864210388' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:44:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3864210388' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:44:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3662965045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:44:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:44:54.691 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=86, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=85) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:44:54 compute-2 nova_compute[226829]: 2026-01-31 08:44:54.691 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:44:54.692 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:44:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:54.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:54 compute-2 nova_compute[226829]: 2026-01-31 08:44:54.785 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:55.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:55 compute-2 ceph-mon[77282]: pgmap v3363: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:44:56 compute-2 podman[317818]: 2026-01-31 08:44:56.160849167 +0000 UTC m=+0.049565858 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:44:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:56.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:57.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:58 compute-2 ceph-mon[77282]: pgmap v3364: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 950 KiB/s wr, 13 op/s
Jan 31 08:44:58 compute-2 nova_compute[226829]: 2026-01-31 08:44:58.073 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:58 compute-2 nova_compute[226829]: 2026-01-31 08:44:58.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:58 compute-2 nova_compute[226829]: 2026-01-31 08:44:58.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:44:58 compute-2 nova_compute[226829]: 2026-01-31 08:44:58.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:44:58 compute-2 sudo[317838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:44:58 compute-2 sudo[317838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:58 compute-2 sudo[317838]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:58 compute-2 sudo[317863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:44:58 compute-2 sudo[317863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:58 compute-2 nova_compute[226829]: 2026-01-31 08:44:58.646 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 08:44:58 compute-2 nova_compute[226829]: 2026-01-31 08:44:58.646 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:44:58 compute-2 sudo[317863]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:44:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:58.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:44:59 compute-2 nova_compute[226829]: 2026-01-31 08:44:59.114 226833 DEBUG nova.network.neutron [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Successfully updated port: c16d1b26-cea9-482d-85b5-1691e078aa5d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:44:59 compute-2 nova_compute[226829]: 2026-01-31 08:44:59.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:59 compute-2 nova_compute[226829]: 2026-01-31 08:44:59.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:59 compute-2 nova_compute[226829]: 2026-01-31 08:44:59.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:59 compute-2 nova_compute[226829]: 2026-01-31 08:44:59.788 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:59 compute-2 nova_compute[226829]: 2026-01-31 08:44:59.864 226833 DEBUG oslo_concurrency.lockutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "refresh_cache-77d3758e-409b-4bdb-ba47-044b8c99ba4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:44:59 compute-2 nova_compute[226829]: 2026-01-31 08:44:59.864 226833 DEBUG oslo_concurrency.lockutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquired lock "refresh_cache-77d3758e-409b-4bdb-ba47-044b8c99ba4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:44:59 compute-2 nova_compute[226829]: 2026-01-31 08:44:59.864 226833 DEBUG nova.network.neutron [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:44:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:44:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:59.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:59 compute-2 nova_compute[226829]: 2026-01-31 08:44:59.949 226833 DEBUG nova.compute.manager [req-2ff2338a-3995-4b49-b6c9-0badfa872be1 req-2cf697fe-bad4-4292-bc21-979fd877d3f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Received event network-changed-c16d1b26-cea9-482d-85b5-1691e078aa5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:44:59 compute-2 nova_compute[226829]: 2026-01-31 08:44:59.949 226833 DEBUG nova.compute.manager [req-2ff2338a-3995-4b49-b6c9-0badfa872be1 req-2cf697fe-bad4-4292-bc21-979fd877d3f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Refreshing instance network info cache due to event network-changed-c16d1b26-cea9-482d-85b5-1691e078aa5d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:44:59 compute-2 nova_compute[226829]: 2026-01-31 08:44:59.950 226833 DEBUG oslo_concurrency.lockutils [req-2ff2338a-3995-4b49-b6c9-0badfa872be1 req-2cf697fe-bad4-4292-bc21-979fd877d3f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-77d3758e-409b-4bdb-ba47-044b8c99ba4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:45:00 compute-2 ceph-mon[77282]: pgmap v3365: 305 pgs: 305 active+clean; 254 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 386 KiB/s wr, 13 op/s
Jan 31 08:45:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:00.693 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '86'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:45:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:45:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:00.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:45:01 compute-2 nova_compute[226829]: 2026-01-31 08:45:01.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:01.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:02 compute-2 ceph-mon[77282]: pgmap v3366: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:45:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:02.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:03 compute-2 nova_compute[226829]: 2026-01-31 08:45:03.075 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:03 compute-2 nova_compute[226829]: 2026-01-31 08:45:03.289 226833 DEBUG nova.network.neutron [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:45:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:03.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:04 compute-2 ceph-mon[77282]: pgmap v3367: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:45:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2843535445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:45:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:04.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:04 compute-2 nova_compute[226829]: 2026-01-31 08:45:04.791 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1282128364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:45:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:05.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:06 compute-2 ceph-mon[77282]: pgmap v3368: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:45:06 compute-2 nova_compute[226829]: 2026-01-31 08:45:06.203 226833 DEBUG nova.network.neutron [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Updating instance_info_cache with network_info: [{"id": "c16d1b26-cea9-482d-85b5-1691e078aa5d", "address": "fa:16:3e:66:9d:65", "network": {"id": "aa8535db-1bf5-453e-8521-d36054020c47", "bridge": "br-int", "label": "tempest-network-smoke--713413715", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc16d1b26-ce", "ovs_interfaceid": "c16d1b26-cea9-482d-85b5-1691e078aa5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:45:06 compute-2 ovn_controller[133834]: 2026-01-31T08:45:06Z|00757|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 31 08:45:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:06.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:06.922 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:45:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:06.922 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:45:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:06.923 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:45:07 compute-2 nova_compute[226829]: 2026-01-31 08:45:07.439 226833 DEBUG oslo_concurrency.lockutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Releasing lock "refresh_cache-77d3758e-409b-4bdb-ba47-044b8c99ba4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:45:07 compute-2 nova_compute[226829]: 2026-01-31 08:45:07.440 226833 DEBUG nova.compute.manager [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Instance network_info: |[{"id": "c16d1b26-cea9-482d-85b5-1691e078aa5d", "address": "fa:16:3e:66:9d:65", "network": {"id": "aa8535db-1bf5-453e-8521-d36054020c47", "bridge": "br-int", "label": "tempest-network-smoke--713413715", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc16d1b26-ce", "ovs_interfaceid": "c16d1b26-cea9-482d-85b5-1691e078aa5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:45:07 compute-2 nova_compute[226829]: 2026-01-31 08:45:07.440 226833 DEBUG oslo_concurrency.lockutils [req-2ff2338a-3995-4b49-b6c9-0badfa872be1 req-2cf697fe-bad4-4292-bc21-979fd877d3f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-77d3758e-409b-4bdb-ba47-044b8c99ba4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:45:07 compute-2 nova_compute[226829]: 2026-01-31 08:45:07.441 226833 DEBUG nova.network.neutron [req-2ff2338a-3995-4b49-b6c9-0badfa872be1 req-2cf697fe-bad4-4292-bc21-979fd877d3f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Refreshing network info cache for port c16d1b26-cea9-482d-85b5-1691e078aa5d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:45:07 compute-2 nova_compute[226829]: 2026-01-31 08:45:07.443 226833 DEBUG nova.virt.libvirt.driver [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Start _get_guest_xml network_info=[{"id": "c16d1b26-cea9-482d-85b5-1691e078aa5d", "address": "fa:16:3e:66:9d:65", "network": {"id": "aa8535db-1bf5-453e-8521-d36054020c47", "bridge": "br-int", "label": "tempest-network-smoke--713413715", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc16d1b26-ce", "ovs_interfaceid": "c16d1b26-cea9-482d-85b5-1691e078aa5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:45:07 compute-2 nova_compute[226829]: 2026-01-31 08:45:07.448 226833 WARNING nova.virt.libvirt.driver [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:45:07 compute-2 nova_compute[226829]: 2026-01-31 08:45:07.456 226833 DEBUG nova.virt.libvirt.host [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:45:07 compute-2 nova_compute[226829]: 2026-01-31 08:45:07.457 226833 DEBUG nova.virt.libvirt.host [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:45:07 compute-2 nova_compute[226829]: 2026-01-31 08:45:07.461 226833 DEBUG nova.virt.libvirt.host [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:45:07 compute-2 nova_compute[226829]: 2026-01-31 08:45:07.461 226833 DEBUG nova.virt.libvirt.host [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:45:07 compute-2 nova_compute[226829]: 2026-01-31 08:45:07.462 226833 DEBUG nova.virt.libvirt.driver [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:45:07 compute-2 nova_compute[226829]: 2026-01-31 08:45:07.463 226833 DEBUG nova.virt.hardware [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:45:07 compute-2 nova_compute[226829]: 2026-01-31 08:45:07.463 226833 DEBUG nova.virt.hardware [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:45:07 compute-2 nova_compute[226829]: 2026-01-31 08:45:07.463 226833 DEBUG nova.virt.hardware [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:45:07 compute-2 nova_compute[226829]: 2026-01-31 08:45:07.463 226833 DEBUG nova.virt.hardware [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:45:07 compute-2 nova_compute[226829]: 2026-01-31 08:45:07.464 226833 DEBUG nova.virt.hardware [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:45:07 compute-2 nova_compute[226829]: 2026-01-31 08:45:07.464 226833 DEBUG nova.virt.hardware [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:45:07 compute-2 nova_compute[226829]: 2026-01-31 08:45:07.464 226833 DEBUG nova.virt.hardware [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:45:07 compute-2 nova_compute[226829]: 2026-01-31 08:45:07.464 226833 DEBUG nova.virt.hardware [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:45:07 compute-2 nova_compute[226829]: 2026-01-31 08:45:07.464 226833 DEBUG nova.virt.hardware [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:45:07 compute-2 nova_compute[226829]: 2026-01-31 08:45:07.465 226833 DEBUG nova.virt.hardware [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:45:07 compute-2 nova_compute[226829]: 2026-01-31 08:45:07.465 226833 DEBUG nova.virt.hardware [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:45:07 compute-2 nova_compute[226829]: 2026-01-31 08:45:07.468 226833 DEBUG oslo_concurrency.processutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:45:07 compute-2 nova_compute[226829]: 2026-01-31 08:45:07.491 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:45:07 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4221698089' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:45:07 compute-2 nova_compute[226829]: 2026-01-31 08:45:07.881 226833 DEBUG oslo_concurrency.processutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:45:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:07.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:07 compute-2 nova_compute[226829]: 2026-01-31 08:45:07.917 226833 DEBUG nova.storage.rbd_utils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 77d3758e-409b-4bdb-ba47-044b8c99ba4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:45:07 compute-2 nova_compute[226829]: 2026-01-31 08:45:07.921 226833 DEBUG oslo_concurrency.processutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:45:08 compute-2 nova_compute[226829]: 2026-01-31 08:45:08.076 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:08 compute-2 ceph-mon[77282]: pgmap v3369: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:45:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4221698089' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:45:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:45:08 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3818538128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:45:08 compute-2 nova_compute[226829]: 2026-01-31 08:45:08.348 226833 DEBUG oslo_concurrency.processutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:45:08 compute-2 nova_compute[226829]: 2026-01-31 08:45:08.350 226833 DEBUG nova.virt.libvirt.vif [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:44:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-gen-1-1946463825',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-gen-1-1946463825',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1014068786-ge',id=191,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG+IEKLcDIGbeOvOdAVtorZ1xtiCqnJ7fs7G+aYTHXv48LqaidcMSGgy+Nrfu6X80mnMDyQMW/ANMH0isk5utMRMD3EHvSyRl+Xh4xqHrF93AhlQmH4UiDGLeTiTMaGHCw==',key_name='tempest-TestSecurityGroupsBasicOps-342550486',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba35ae24dbf3443e8a526dce39c6793b',ramdisk_id='',reservation_id='r-05a3xjl7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1014068786',owner_user_name='tempest-TestSecurityGroupsBasicOps-1014068786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:44:39Z,user_data=None,user_id='c6968a1ee10e4e3b8651ffe0240a7e46',uuid=77d3758e-409b-4bdb-ba47-044b8c99ba4d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c16d1b26-cea9-482d-85b5-1691e078aa5d", "address": "fa:16:3e:66:9d:65", "network": {"id": "aa8535db-1bf5-453e-8521-d36054020c47", "bridge": "br-int", "label": "tempest-network-smoke--713413715", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc16d1b26-ce", "ovs_interfaceid": "c16d1b26-cea9-482d-85b5-1691e078aa5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:45:08 compute-2 nova_compute[226829]: 2026-01-31 08:45:08.351 226833 DEBUG nova.network.os_vif_util [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converting VIF {"id": "c16d1b26-cea9-482d-85b5-1691e078aa5d", "address": "fa:16:3e:66:9d:65", "network": {"id": "aa8535db-1bf5-453e-8521-d36054020c47", "bridge": "br-int", "label": "tempest-network-smoke--713413715", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc16d1b26-ce", "ovs_interfaceid": "c16d1b26-cea9-482d-85b5-1691e078aa5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:45:08 compute-2 nova_compute[226829]: 2026-01-31 08:45:08.352 226833 DEBUG nova.network.os_vif_util [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:9d:65,bridge_name='br-int',has_traffic_filtering=True,id=c16d1b26-cea9-482d-85b5-1691e078aa5d,network=Network(aa8535db-1bf5-453e-8521-d36054020c47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc16d1b26-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:45:08 compute-2 nova_compute[226829]: 2026-01-31 08:45:08.353 226833 DEBUG nova.objects.instance [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lazy-loading 'pci_devices' on Instance uuid 77d3758e-409b-4bdb-ba47-044b8c99ba4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:45:08 compute-2 nova_compute[226829]: 2026-01-31 08:45:08.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:08.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:09 compute-2 nova_compute[226829]: 2026-01-31 08:45:09.180 226833 DEBUG nova.virt.libvirt.driver [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:45:09 compute-2 nova_compute[226829]:   <uuid>77d3758e-409b-4bdb-ba47-044b8c99ba4d</uuid>
Jan 31 08:45:09 compute-2 nova_compute[226829]:   <name>instance-000000bf</name>
Jan 31 08:45:09 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:45:09 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:45:09 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-gen-1-1946463825</nova:name>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:45:07</nova:creationTime>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:45:09 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:45:09 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:45:09 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:45:09 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:45:09 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:45:09 compute-2 nova_compute[226829]:         <nova:user uuid="c6968a1ee10e4e3b8651ffe0240a7e46">tempest-TestSecurityGroupsBasicOps-1014068786-project-member</nova:user>
Jan 31 08:45:09 compute-2 nova_compute[226829]:         <nova:project uuid="ba35ae24dbf3443e8a526dce39c6793b">tempest-TestSecurityGroupsBasicOps-1014068786</nova:project>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:45:09 compute-2 nova_compute[226829]:         <nova:port uuid="c16d1b26-cea9-482d-85b5-1691e078aa5d">
Jan 31 08:45:09 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:45:09 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:45:09 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <system>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <entry name="serial">77d3758e-409b-4bdb-ba47-044b8c99ba4d</entry>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <entry name="uuid">77d3758e-409b-4bdb-ba47-044b8c99ba4d</entry>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     </system>
Jan 31 08:45:09 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:45:09 compute-2 nova_compute[226829]:   <os>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:   </os>
Jan 31 08:45:09 compute-2 nova_compute[226829]:   <features>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:   </features>
Jan 31 08:45:09 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:45:09 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:45:09 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/77d3758e-409b-4bdb-ba47-044b8c99ba4d_disk">
Jan 31 08:45:09 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       </source>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:45:09 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/77d3758e-409b-4bdb-ba47-044b8c99ba4d_disk.config">
Jan 31 08:45:09 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       </source>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:45:09 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:66:9d:65"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <target dev="tapc16d1b26-ce"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/77d3758e-409b-4bdb-ba47-044b8c99ba4d/console.log" append="off"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <video>
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     </video>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:45:09 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:45:09 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:45:09 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:45:09 compute-2 nova_compute[226829]: </domain>
Jan 31 08:45:09 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:45:09 compute-2 nova_compute[226829]: 2026-01-31 08:45:09.181 226833 DEBUG nova.compute.manager [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Preparing to wait for external event network-vif-plugged-c16d1b26-cea9-482d-85b5-1691e078aa5d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:45:09 compute-2 nova_compute[226829]: 2026-01-31 08:45:09.181 226833 DEBUG oslo_concurrency.lockutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "77d3758e-409b-4bdb-ba47-044b8c99ba4d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:45:09 compute-2 nova_compute[226829]: 2026-01-31 08:45:09.181 226833 DEBUG oslo_concurrency.lockutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "77d3758e-409b-4bdb-ba47-044b8c99ba4d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:45:09 compute-2 nova_compute[226829]: 2026-01-31 08:45:09.182 226833 DEBUG oslo_concurrency.lockutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "77d3758e-409b-4bdb-ba47-044b8c99ba4d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:45:09 compute-2 nova_compute[226829]: 2026-01-31 08:45:09.182 226833 DEBUG nova.virt.libvirt.vif [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:44:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-gen-1-1946463825',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-gen-1-1946463825',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1014068786-ge',id=191,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG+IEKLcDIGbeOvOdAVtorZ1xtiCqnJ7fs7G+aYTHXv48LqaidcMSGgy+Nrfu6X80mnMDyQMW/ANMH0isk5utMRMD3EHvSyRl+Xh4xqHrF93AhlQmH4UiDGLeTiTMaGHCw==',key_name='tempest-TestSecurityGroupsBasicOps-342550486',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba35ae24dbf3443e8a526dce39c6793b',ramdisk_id='',reservation_id='r-05a3xjl7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1014068786',owner_user_name='tempest-TestSecurityGroupsBasicOps-1014068786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:44:39Z,user_data=None,user_id='c6968a1ee10e4e3b8651ffe0240a7e46',uuid=77d3758e-409b-4bdb-ba47-044b8c99ba4d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c16d1b26-cea9-482d-85b5-1691e078aa5d", "address": "fa:16:3e:66:9d:65", "network": {"id": "aa8535db-1bf5-453e-8521-d36054020c47", "bridge": "br-int", "label": "tempest-network-smoke--713413715", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc16d1b26-ce", "ovs_interfaceid": "c16d1b26-cea9-482d-85b5-1691e078aa5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:45:09 compute-2 nova_compute[226829]: 2026-01-31 08:45:09.183 226833 DEBUG nova.network.os_vif_util [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converting VIF {"id": "c16d1b26-cea9-482d-85b5-1691e078aa5d", "address": "fa:16:3e:66:9d:65", "network": {"id": "aa8535db-1bf5-453e-8521-d36054020c47", "bridge": "br-int", "label": "tempest-network-smoke--713413715", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc16d1b26-ce", "ovs_interfaceid": "c16d1b26-cea9-482d-85b5-1691e078aa5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:45:09 compute-2 nova_compute[226829]: 2026-01-31 08:45:09.183 226833 DEBUG nova.network.os_vif_util [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:9d:65,bridge_name='br-int',has_traffic_filtering=True,id=c16d1b26-cea9-482d-85b5-1691e078aa5d,network=Network(aa8535db-1bf5-453e-8521-d36054020c47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc16d1b26-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:45:09 compute-2 nova_compute[226829]: 2026-01-31 08:45:09.183 226833 DEBUG os_vif [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:9d:65,bridge_name='br-int',has_traffic_filtering=True,id=c16d1b26-cea9-482d-85b5-1691e078aa5d,network=Network(aa8535db-1bf5-453e-8521-d36054020c47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc16d1b26-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:45:09 compute-2 nova_compute[226829]: 2026-01-31 08:45:09.185 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:09 compute-2 nova_compute[226829]: 2026-01-31 08:45:09.185 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:45:09 compute-2 nova_compute[226829]: 2026-01-31 08:45:09.185 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:45:09 compute-2 nova_compute[226829]: 2026-01-31 08:45:09.186 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:45:09 compute-2 nova_compute[226829]: 2026-01-31 08:45:09.186 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:45:09 compute-2 nova_compute[226829]: 2026-01-31 08:45:09.187 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:45:09 compute-2 nova_compute[226829]: 2026-01-31 08:45:09.187 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:45:09 compute-2 nova_compute[226829]: 2026-01-31 08:45:09.187 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:45:09 compute-2 nova_compute[226829]: 2026-01-31 08:45:09.214 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:09 compute-2 nova_compute[226829]: 2026-01-31 08:45:09.215 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc16d1b26-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:45:09 compute-2 nova_compute[226829]: 2026-01-31 08:45:09.215 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc16d1b26-ce, col_values=(('external_ids', {'iface-id': 'c16d1b26-cea9-482d-85b5-1691e078aa5d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:9d:65', 'vm-uuid': '77d3758e-409b-4bdb-ba47-044b8c99ba4d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:45:09 compute-2 nova_compute[226829]: 2026-01-31 08:45:09.223 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:09 compute-2 NetworkManager[48999]: <info>  [1769849109.2257] manager: (tapc16d1b26-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/375)
Jan 31 08:45:09 compute-2 nova_compute[226829]: 2026-01-31 08:45:09.226 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:45:09 compute-2 nova_compute[226829]: 2026-01-31 08:45:09.233 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:09 compute-2 nova_compute[226829]: 2026-01-31 08:45:09.234 226833 INFO os_vif [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:9d:65,bridge_name='br-int',has_traffic_filtering=True,id=c16d1b26-cea9-482d-85b5-1691e078aa5d,network=Network(aa8535db-1bf5-453e-8521-d36054020c47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc16d1b26-ce')
Jan 31 08:45:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3818538128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:45:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:45:09 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2327328053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:45:09 compute-2 nova_compute[226829]: 2026-01-31 08:45:09.595 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:45:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:09.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:10 compute-2 ceph-mon[77282]: pgmap v3370: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:45:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2327328053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:45:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:45:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:10.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:45:11 compute-2 nova_compute[226829]: 2026-01-31 08:45:11.572 226833 DEBUG nova.network.neutron [req-2ff2338a-3995-4b49-b6c9-0badfa872be1 req-2cf697fe-bad4-4292-bc21-979fd877d3f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Updated VIF entry in instance network info cache for port c16d1b26-cea9-482d-85b5-1691e078aa5d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:45:11 compute-2 nova_compute[226829]: 2026-01-31 08:45:11.573 226833 DEBUG nova.network.neutron [req-2ff2338a-3995-4b49-b6c9-0badfa872be1 req-2cf697fe-bad4-4292-bc21-979fd877d3f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Updating instance_info_cache with network_info: [{"id": "c16d1b26-cea9-482d-85b5-1691e078aa5d", "address": "fa:16:3e:66:9d:65", "network": {"id": "aa8535db-1bf5-453e-8521-d36054020c47", "bridge": "br-int", "label": "tempest-network-smoke--713413715", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc16d1b26-ce", "ovs_interfaceid": "c16d1b26-cea9-482d-85b5-1691e078aa5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:45:11 compute-2 nova_compute[226829]: 2026-01-31 08:45:11.785 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000bf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:45:11 compute-2 nova_compute[226829]: 2026-01-31 08:45:11.786 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000bf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:45:11 compute-2 nova_compute[226829]: 2026-01-31 08:45:11.787 226833 DEBUG nova.virt.libvirt.driver [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:45:11 compute-2 nova_compute[226829]: 2026-01-31 08:45:11.787 226833 DEBUG nova.virt.libvirt.driver [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:45:11 compute-2 nova_compute[226829]: 2026-01-31 08:45:11.787 226833 DEBUG nova.virt.libvirt.driver [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] No VIF found with MAC fa:16:3e:66:9d:65, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:45:11 compute-2 nova_compute[226829]: 2026-01-31 08:45:11.787 226833 INFO nova.virt.libvirt.driver [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Using config drive
Jan 31 08:45:11 compute-2 nova_compute[226829]: 2026-01-31 08:45:11.815 226833 DEBUG nova.storage.rbd_utils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 77d3758e-409b-4bdb-ba47-044b8c99ba4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:45:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:11.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:11 compute-2 nova_compute[226829]: 2026-01-31 08:45:11.972 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:45:11 compute-2 nova_compute[226829]: 2026-01-31 08:45:11.973 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4112MB free_disk=20.901229858398438GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:45:11 compute-2 nova_compute[226829]: 2026-01-31 08:45:11.973 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:45:11 compute-2 nova_compute[226829]: 2026-01-31 08:45:11.974 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:45:12 compute-2 nova_compute[226829]: 2026-01-31 08:45:12.247 226833 DEBUG oslo_concurrency.lockutils [req-2ff2338a-3995-4b49-b6c9-0badfa872be1 req-2cf697fe-bad4-4292-bc21-979fd877d3f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-77d3758e-409b-4bdb-ba47-044b8c99ba4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:45:12 compute-2 ceph-mon[77282]: pgmap v3371: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Jan 31 08:45:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:12.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:12 compute-2 nova_compute[226829]: 2026-01-31 08:45:12.811 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 77d3758e-409b-4bdb-ba47-044b8c99ba4d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:45:12 compute-2 nova_compute[226829]: 2026-01-31 08:45:12.811 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:45:12 compute-2 nova_compute[226829]: 2026-01-31 08:45:12.812 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:45:12 compute-2 nova_compute[226829]: 2026-01-31 08:45:12.881 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:45:13 compute-2 nova_compute[226829]: 2026-01-31 08:45:13.079 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:45:13 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/337564026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:45:13 compute-2 nova_compute[226829]: 2026-01-31 08:45:13.278 226833 INFO nova.virt.libvirt.driver [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Creating config drive at /var/lib/nova/instances/77d3758e-409b-4bdb-ba47-044b8c99ba4d/disk.config
Jan 31 08:45:13 compute-2 nova_compute[226829]: 2026-01-31 08:45:13.282 226833 DEBUG oslo_concurrency.processutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/77d3758e-409b-4bdb-ba47-044b8c99ba4d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmps6gn1fcf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:45:13 compute-2 nova_compute[226829]: 2026-01-31 08:45:13.304 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:45:13 compute-2 nova_compute[226829]: 2026-01-31 08:45:13.309 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:45:13 compute-2 ceph-mon[77282]: pgmap v3372: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 8.3 KiB/s wr, 1 op/s
Jan 31 08:45:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/337564026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:45:13 compute-2 nova_compute[226829]: 2026-01-31 08:45:13.410 226833 DEBUG oslo_concurrency.processutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/77d3758e-409b-4bdb-ba47-044b8c99ba4d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmps6gn1fcf" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:45:13 compute-2 nova_compute[226829]: 2026-01-31 08:45:13.434 226833 DEBUG nova.storage.rbd_utils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 77d3758e-409b-4bdb-ba47-044b8c99ba4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:45:13 compute-2 nova_compute[226829]: 2026-01-31 08:45:13.438 226833 DEBUG oslo_concurrency.processutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/77d3758e-409b-4bdb-ba47-044b8c99ba4d/disk.config 77d3758e-409b-4bdb-ba47-044b8c99ba4d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:45:13 compute-2 nova_compute[226829]: 2026-01-31 08:45:13.462 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:45:13 compute-2 nova_compute[226829]: 2026-01-31 08:45:13.605 226833 DEBUG oslo_concurrency.processutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/77d3758e-409b-4bdb-ba47-044b8c99ba4d/disk.config 77d3758e-409b-4bdb-ba47-044b8c99ba4d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:45:13 compute-2 nova_compute[226829]: 2026-01-31 08:45:13.606 226833 INFO nova.virt.libvirt.driver [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Deleting local config drive /var/lib/nova/instances/77d3758e-409b-4bdb-ba47-044b8c99ba4d/disk.config because it was imported into RBD.
Jan 31 08:45:13 compute-2 kernel: tapc16d1b26-ce: entered promiscuous mode
Jan 31 08:45:13 compute-2 NetworkManager[48999]: <info>  [1769849113.6489] manager: (tapc16d1b26-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/376)
Jan 31 08:45:13 compute-2 nova_compute[226829]: 2026-01-31 08:45:13.650 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:13 compute-2 ovn_controller[133834]: 2026-01-31T08:45:13Z|00758|binding|INFO|Claiming lport c16d1b26-cea9-482d-85b5-1691e078aa5d for this chassis.
Jan 31 08:45:13 compute-2 ovn_controller[133834]: 2026-01-31T08:45:13Z|00759|binding|INFO|c16d1b26-cea9-482d-85b5-1691e078aa5d: Claiming fa:16:3e:66:9d:65 10.100.0.4
Jan 31 08:45:13 compute-2 nova_compute[226829]: 2026-01-31 08:45:13.653 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:13 compute-2 nova_compute[226829]: 2026-01-31 08:45:13.655 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:13 compute-2 nova_compute[226829]: 2026-01-31 08:45:13.658 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:13 compute-2 nova_compute[226829]: 2026-01-31 08:45:13.665 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:13 compute-2 NetworkManager[48999]: <info>  [1769849113.6667] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/377)
Jan 31 08:45:13 compute-2 NetworkManager[48999]: <info>  [1769849113.6676] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/378)
Jan 31 08:45:13 compute-2 systemd-udevd[318074]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:45:13 compute-2 systemd-machined[195142]: New machine qemu-87-instance-000000bf.
Jan 31 08:45:13 compute-2 NetworkManager[48999]: <info>  [1769849113.6845] device (tapc16d1b26-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:45:13 compute-2 NetworkManager[48999]: <info>  [1769849113.6853] device (tapc16d1b26-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:45:13 compute-2 systemd[1]: Started Virtual Machine qemu-87-instance-000000bf.
Jan 31 08:45:13 compute-2 nova_compute[226829]: 2026-01-31 08:45:13.696 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:13 compute-2 nova_compute[226829]: 2026-01-31 08:45:13.704 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:13.778 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:9d:65 10.100.0.4'], port_security=['fa:16:3e:66:9d:65 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '77d3758e-409b-4bdb-ba47-044b8c99ba4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa8535db-1bf5-453e-8521-d36054020c47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba35ae24dbf3443e8a526dce39c6793b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '77ec18db-6d74-4aac-9268-2e86e3cdfbe8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ebb0cf2-f8f8-4f5f-9f1c-79de32e76bba, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=c16d1b26-cea9-482d-85b5-1691e078aa5d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:45:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:13.780 143841 INFO neutron.agent.ovn.metadata.agent [-] Port c16d1b26-cea9-482d-85b5-1691e078aa5d in datapath aa8535db-1bf5-453e-8521-d36054020c47 bound to our chassis
Jan 31 08:45:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:13.781 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa8535db-1bf5-453e-8521-d36054020c47
Jan 31 08:45:13 compute-2 ovn_controller[133834]: 2026-01-31T08:45:13Z|00760|binding|INFO|Setting lport c16d1b26-cea9-482d-85b5-1691e078aa5d ovn-installed in OVS
Jan 31 08:45:13 compute-2 ovn_controller[133834]: 2026-01-31T08:45:13Z|00761|binding|INFO|Setting lport c16d1b26-cea9-482d-85b5-1691e078aa5d up in Southbound
Jan 31 08:45:13 compute-2 nova_compute[226829]: 2026-01-31 08:45:13.786 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:45:13 compute-2 nova_compute[226829]: 2026-01-31 08:45:13.786 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:45:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:13.792 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[df472e91-5a56-41cb-8746-2b429b0e03d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:13.797 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa8535db-11 in ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:45:13 compute-2 nova_compute[226829]: 2026-01-31 08:45:13.797 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:13.799 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa8535db-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:45:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:13.799 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7d7782a3-e4ac-48e9-9808-44b5a567754f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:13.800 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[22964d20-b53e-4497-96eb-85abc3727aa6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:13.812 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[e6052485-5c8b-4cd8-a532-599212f84148]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:13.823 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[afcd765c-ed6e-4ac9-b220-cc66a05b4f41]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:13.845 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[0a16dd95-a2e0-44db-9b68-62905c2133ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:13 compute-2 NetworkManager[48999]: <info>  [1769849113.8515] manager: (tapaa8535db-10): new Veth device (/org/freedesktop/NetworkManager/Devices/379)
Jan 31 08:45:13 compute-2 systemd-udevd[318076]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:45:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:13.853 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a2a13626-8abe-4099-8c66-067b2018bc26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:13.883 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[524f2b04-e597-4243-869c-ace6b5e5b1f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:13.888 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[c87f0846-46c3-46a1-9d1e-6a82d5f8a775]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:13.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:13 compute-2 NetworkManager[48999]: <info>  [1769849113.9054] device (tapaa8535db-10): carrier: link connected
Jan 31 08:45:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:13.910 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[f90bb034-2c40-419b-b76e-a7ff15fbff1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:13.921 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[abc0a998-7fc5-4b8f-830f-f2d193deca7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa8535db-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:1b:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 936637, 'reachable_time': 41888, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318107, 'error': None, 'target': 'ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:13.935 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d82bb4b1-5ebf-4f45-8ce0-bc88e292cdda]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:1b7a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 936637, 'tstamp': 936637}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318108, 'error': None, 'target': 'ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:13.944 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ae25f31c-cc10-4054-b863-cfcf8e73c074]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa8535db-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:1b:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 936637, 'reachable_time': 41888, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318109, 'error': None, 'target': 'ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:13.976 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3edf130f-eb76-4d1d-896e-5c77b765015a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:14.011 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3dab5a90-beeb-4f1f-b965-4379cb9851a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:14.013 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa8535db-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:14.014 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:14.015 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa8535db-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:45:14 compute-2 nova_compute[226829]: 2026-01-31 08:45:14.017 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:14 compute-2 kernel: tapaa8535db-10: entered promiscuous mode
Jan 31 08:45:14 compute-2 nova_compute[226829]: 2026-01-31 08:45:14.019 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:14 compute-2 NetworkManager[48999]: <info>  [1769849114.0199] manager: (tapaa8535db-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/380)
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:14.021 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa8535db-10, col_values=(('external_ids', {'iface-id': '5305ef22-1d04-4b5f-9e47-65b8bd8d2725'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:45:14 compute-2 nova_compute[226829]: 2026-01-31 08:45:14.022 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:14 compute-2 ovn_controller[133834]: 2026-01-31T08:45:14Z|00762|binding|INFO|Releasing lport 5305ef22-1d04-4b5f-9e47-65b8bd8d2725 from this chassis (sb_readonly=1)
Jan 31 08:45:14 compute-2 nova_compute[226829]: 2026-01-31 08:45:14.031 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:14.033 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa8535db-1bf5-453e-8521-d36054020c47.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa8535db-1bf5-453e-8521-d36054020c47.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:14.034 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c24575ea-0e2d-4657-af63-ccaa52654417]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:14.035 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-aa8535db-1bf5-453e-8521-d36054020c47
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/aa8535db-1bf5-453e-8521-d36054020c47.pid.haproxy
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID aa8535db-1bf5-453e-8521-d36054020c47
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:45:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:14.036 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47', 'env', 'PROCESS_TAG=haproxy-aa8535db-1bf5-453e-8521-d36054020c47', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa8535db-1bf5-453e-8521-d36054020c47.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:45:14 compute-2 nova_compute[226829]: 2026-01-31 08:45:14.224 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:14 compute-2 podman[318142]: 2026-01-31 08:45:14.407860135 +0000 UTC m=+0.078770022 container create 1911cd9c9f9380af59f2670519892998e0a7cb1d112c937931420993db646c41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Jan 31 08:45:14 compute-2 podman[318142]: 2026-01-31 08:45:14.353980401 +0000 UTC m=+0.024890328 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:45:14 compute-2 systemd[1]: Started libpod-conmon-1911cd9c9f9380af59f2670519892998e0a7cb1d112c937931420993db646c41.scope.
Jan 31 08:45:14 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:45:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab170395873ecc1992903344d66344a59e6c6c62ced6ee92c751fd0f3a8e35d1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:14 compute-2 podman[318142]: 2026-01-31 08:45:14.490236583 +0000 UTC m=+0.161146490 container init 1911cd9c9f9380af59f2670519892998e0a7cb1d112c937931420993db646c41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:45:14 compute-2 podman[318142]: 2026-01-31 08:45:14.493825921 +0000 UTC m=+0.164735808 container start 1911cd9c9f9380af59f2670519892998e0a7cb1d112c937931420993db646c41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 08:45:14 compute-2 neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47[318157]: [NOTICE]   (318161) : New worker (318163) forked
Jan 31 08:45:14 compute-2 neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47[318157]: [NOTICE]   (318161) : Loading success.
Jan 31 08:45:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3664369261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:45:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:45:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:14.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:45:14 compute-2 nova_compute[226829]: 2026-01-31 08:45:14.778 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769849114.7770615, 77d3758e-409b-4bdb-ba47-044b8c99ba4d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:45:14 compute-2 nova_compute[226829]: 2026-01-31 08:45:14.778 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] VM Started (Lifecycle Event)
Jan 31 08:45:15 compute-2 nova_compute[226829]: 2026-01-31 08:45:15.150 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:45:15 compute-2 nova_compute[226829]: 2026-01-31 08:45:15.154 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769849114.7773488, 77d3758e-409b-4bdb-ba47-044b8c99ba4d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:45:15 compute-2 nova_compute[226829]: 2026-01-31 08:45:15.154 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] VM Paused (Lifecycle Event)
Jan 31 08:45:15 compute-2 nova_compute[226829]: 2026-01-31 08:45:15.440 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:45:15 compute-2 nova_compute[226829]: 2026-01-31 08:45:15.444 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:45:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:15 compute-2 ceph-mon[77282]: pgmap v3373: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 8.4 KiB/s wr, 1 op/s
Jan 31 08:45:15 compute-2 nova_compute[226829]: 2026-01-31 08:45:15.726 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:45:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:15.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3680370433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:45:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:16.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:17 compute-2 ceph-mon[77282]: pgmap v3374: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.4 KiB/s rd, 12 KiB/s wr, 8 op/s
Jan 31 08:45:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:17.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:18 compute-2 nova_compute[226829]: 2026-01-31 08:45:18.080 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:18 compute-2 nova_compute[226829]: 2026-01-31 08:45:18.534 226833 DEBUG nova.compute.manager [req-19550c00-1ec3-44d1-bc90-769707081ba9 req-e238decb-4766-440b-a337-93bad9adb9eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Received event network-vif-plugged-c16d1b26-cea9-482d-85b5-1691e078aa5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:45:18 compute-2 nova_compute[226829]: 2026-01-31 08:45:18.535 226833 DEBUG oslo_concurrency.lockutils [req-19550c00-1ec3-44d1-bc90-769707081ba9 req-e238decb-4766-440b-a337-93bad9adb9eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "77d3758e-409b-4bdb-ba47-044b8c99ba4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:45:18 compute-2 nova_compute[226829]: 2026-01-31 08:45:18.535 226833 DEBUG oslo_concurrency.lockutils [req-19550c00-1ec3-44d1-bc90-769707081ba9 req-e238decb-4766-440b-a337-93bad9adb9eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "77d3758e-409b-4bdb-ba47-044b8c99ba4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:45:18 compute-2 nova_compute[226829]: 2026-01-31 08:45:18.535 226833 DEBUG oslo_concurrency.lockutils [req-19550c00-1ec3-44d1-bc90-769707081ba9 req-e238decb-4766-440b-a337-93bad9adb9eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "77d3758e-409b-4bdb-ba47-044b8c99ba4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:45:18 compute-2 nova_compute[226829]: 2026-01-31 08:45:18.535 226833 DEBUG nova.compute.manager [req-19550c00-1ec3-44d1-bc90-769707081ba9 req-e238decb-4766-440b-a337-93bad9adb9eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Processing event network-vif-plugged-c16d1b26-cea9-482d-85b5-1691e078aa5d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:45:18 compute-2 nova_compute[226829]: 2026-01-31 08:45:18.536 226833 DEBUG nova.compute.manager [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:45:18 compute-2 nova_compute[226829]: 2026-01-31 08:45:18.539 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769849118.539717, 77d3758e-409b-4bdb-ba47-044b8c99ba4d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:45:18 compute-2 nova_compute[226829]: 2026-01-31 08:45:18.540 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] VM Resumed (Lifecycle Event)
Jan 31 08:45:18 compute-2 nova_compute[226829]: 2026-01-31 08:45:18.541 226833 DEBUG nova.virt.libvirt.driver [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:45:18 compute-2 nova_compute[226829]: 2026-01-31 08:45:18.544 226833 INFO nova.virt.libvirt.driver [-] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Instance spawned successfully.
Jan 31 08:45:18 compute-2 nova_compute[226829]: 2026-01-31 08:45:18.544 226833 DEBUG nova.virt.libvirt.driver [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:45:18 compute-2 sudo[318216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:45:18 compute-2 sudo[318216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:18 compute-2 sudo[318216]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:18.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:18 compute-2 sudo[318241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:45:18 compute-2 sudo[318241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:18 compute-2 sudo[318241]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:18 compute-2 nova_compute[226829]: 2026-01-31 08:45:18.766 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:45:18 compute-2 nova_compute[226829]: 2026-01-31 08:45:18.770 226833 DEBUG nova.virt.libvirt.driver [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:45:18 compute-2 nova_compute[226829]: 2026-01-31 08:45:18.771 226833 DEBUG nova.virt.libvirt.driver [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:45:18 compute-2 nova_compute[226829]: 2026-01-31 08:45:18.771 226833 DEBUG nova.virt.libvirt.driver [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:45:18 compute-2 nova_compute[226829]: 2026-01-31 08:45:18.772 226833 DEBUG nova.virt.libvirt.driver [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:45:18 compute-2 nova_compute[226829]: 2026-01-31 08:45:18.772 226833 DEBUG nova.virt.libvirt.driver [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:45:18 compute-2 nova_compute[226829]: 2026-01-31 08:45:18.772 226833 DEBUG nova.virt.libvirt.driver [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:45:18 compute-2 nova_compute[226829]: 2026-01-31 08:45:18.777 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:45:18 compute-2 nova_compute[226829]: 2026-01-31 08:45:18.952 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:45:19 compute-2 nova_compute[226829]: 2026-01-31 08:45:19.028 226833 INFO nova.compute.manager [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Took 37.58 seconds to spawn the instance on the hypervisor.
Jan 31 08:45:19 compute-2 nova_compute[226829]: 2026-01-31 08:45:19.029 226833 DEBUG nova.compute.manager [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:45:19 compute-2 sudo[318266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:45:19 compute-2 sudo[318266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:19 compute-2 sudo[318266]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:19 compute-2 sudo[318291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:45:19 compute-2 sudo[318291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:19 compute-2 sudo[318291]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:19 compute-2 nova_compute[226829]: 2026-01-31 08:45:19.227 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:19 compute-2 sudo[318316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:45:19 compute-2 sudo[318316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:19 compute-2 sudo[318316]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:19 compute-2 sudo[318347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:45:19 compute-2 sudo[318347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:19 compute-2 podman[318340]: 2026-01-31 08:45:19.347069398 +0000 UTC m=+0.078713390 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 08:45:19 compute-2 nova_compute[226829]: 2026-01-31 08:45:19.402 226833 INFO nova.compute.manager [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Took 45.50 seconds to build instance.
Jan 31 08:45:19 compute-2 nova_compute[226829]: 2026-01-31 08:45:19.480 226833 DEBUG oslo_concurrency.lockutils [None req-42f1e01a-5278-4991-93f8-5d274af17aca c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "77d3758e-409b-4bdb-ba47-044b8c99ba4d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 47.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:45:19 compute-2 sudo[318347]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:19 compute-2 ceph-mon[77282]: pgmap v3375: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Jan 31 08:45:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:19.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:45:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:20.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:45:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:45:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:45:21 compute-2 ceph-mon[77282]: pgmap v3376: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 45 op/s
Jan 31 08:45:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3518480498' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:45:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:45:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:45:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:45:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:45:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:45:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:45:21 compute-2 nova_compute[226829]: 2026-01-31 08:45:21.824 226833 DEBUG nova.compute.manager [req-bf5c9cae-0f4e-4c94-96b1-4bfa3221eb2f req-da477b5b-7e87-4a0a-9559-f6781c0fa30a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Received event network-vif-plugged-c16d1b26-cea9-482d-85b5-1691e078aa5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:45:21 compute-2 nova_compute[226829]: 2026-01-31 08:45:21.825 226833 DEBUG oslo_concurrency.lockutils [req-bf5c9cae-0f4e-4c94-96b1-4bfa3221eb2f req-da477b5b-7e87-4a0a-9559-f6781c0fa30a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "77d3758e-409b-4bdb-ba47-044b8c99ba4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:45:21 compute-2 nova_compute[226829]: 2026-01-31 08:45:21.825 226833 DEBUG oslo_concurrency.lockutils [req-bf5c9cae-0f4e-4c94-96b1-4bfa3221eb2f req-da477b5b-7e87-4a0a-9559-f6781c0fa30a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "77d3758e-409b-4bdb-ba47-044b8c99ba4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:45:21 compute-2 nova_compute[226829]: 2026-01-31 08:45:21.825 226833 DEBUG oslo_concurrency.lockutils [req-bf5c9cae-0f4e-4c94-96b1-4bfa3221eb2f req-da477b5b-7e87-4a0a-9559-f6781c0fa30a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "77d3758e-409b-4bdb-ba47-044b8c99ba4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:45:21 compute-2 nova_compute[226829]: 2026-01-31 08:45:21.825 226833 DEBUG nova.compute.manager [req-bf5c9cae-0f4e-4c94-96b1-4bfa3221eb2f req-da477b5b-7e87-4a0a-9559-f6781c0fa30a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] No waiting events found dispatching network-vif-plugged-c16d1b26-cea9-482d-85b5-1691e078aa5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:45:21 compute-2 nova_compute[226829]: 2026-01-31 08:45:21.825 226833 WARNING nova.compute.manager [req-bf5c9cae-0f4e-4c94-96b1-4bfa3221eb2f req-da477b5b-7e87-4a0a-9559-f6781c0fa30a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Received unexpected event network-vif-plugged-c16d1b26-cea9-482d-85b5-1691e078aa5d for instance with vm_state active and task_state None.
Jan 31 08:45:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:21.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3257554958' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:45:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:45:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:22.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:45:23 compute-2 nova_compute[226829]: 2026-01-31 08:45:23.082 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:23 compute-2 ceph-mon[77282]: pgmap v3377: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 45 op/s
Jan 31 08:45:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:23.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:24 compute-2 nova_compute[226829]: 2026-01-31 08:45:24.228 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:24.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:25 compute-2 nova_compute[226829]: 2026-01-31 08:45:25.786 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:25 compute-2 nova_compute[226829]: 2026-01-31 08:45:25.787 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:45:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:45:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:25.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:45:25 compute-2 ceph-mon[77282]: pgmap v3378: 305 pgs: 305 active+clean; 293 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 75 op/s
Jan 31 08:45:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:26.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:26 compute-2 nova_compute[226829]: 2026-01-31 08:45:26.895 226833 DEBUG nova.compute.manager [req-6ea21b60-a2eb-4f4d-a807-2f904348b54b req-eb9bfe09-6fd6-4cdd-a275-b7f93638458d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Received event network-changed-c16d1b26-cea9-482d-85b5-1691e078aa5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:45:26 compute-2 nova_compute[226829]: 2026-01-31 08:45:26.895 226833 DEBUG nova.compute.manager [req-6ea21b60-a2eb-4f4d-a807-2f904348b54b req-eb9bfe09-6fd6-4cdd-a275-b7f93638458d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Refreshing instance network info cache due to event network-changed-c16d1b26-cea9-482d-85b5-1691e078aa5d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:45:26 compute-2 nova_compute[226829]: 2026-01-31 08:45:26.896 226833 DEBUG oslo_concurrency.lockutils [req-6ea21b60-a2eb-4f4d-a807-2f904348b54b req-eb9bfe09-6fd6-4cdd-a275-b7f93638458d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-77d3758e-409b-4bdb-ba47-044b8c99ba4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:45:26 compute-2 nova_compute[226829]: 2026-01-31 08:45:26.896 226833 DEBUG oslo_concurrency.lockutils [req-6ea21b60-a2eb-4f4d-a807-2f904348b54b req-eb9bfe09-6fd6-4cdd-a275-b7f93638458d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-77d3758e-409b-4bdb-ba47-044b8c99ba4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:45:26 compute-2 nova_compute[226829]: 2026-01-31 08:45:26.896 226833 DEBUG nova.network.neutron [req-6ea21b60-a2eb-4f4d-a807-2f904348b54b req-eb9bfe09-6fd6-4cdd-a275-b7f93638458d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Refreshing network info cache for port c16d1b26-cea9-482d-85b5-1691e078aa5d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:45:27 compute-2 podman[318429]: 2026-01-31 08:45:27.157736685 +0000 UTC m=+0.043146123 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 08:45:27 compute-2 sudo[318448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:45:27 compute-2 sudo[318448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:27 compute-2 sudo[318448]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:27 compute-2 sudo[318473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:45:27 compute-2 sudo[318473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:27 compute-2 sudo[318473]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:27 compute-2 nova_compute[226829]: 2026-01-31 08:45:27.565 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:27.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:28 compute-2 ceph-mon[77282]: pgmap v3379: 305 pgs: 305 active+clean; 293 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 80 op/s
Jan 31 08:45:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:45:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:45:28 compute-2 nova_compute[226829]: 2026-01-31 08:45:28.083 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:28.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:28 compute-2 nova_compute[226829]: 2026-01-31 08:45:28.759 226833 DEBUG nova.compute.manager [req-876626a1-9f35-49ce-9895-37fc97686e07 req-ee96f839-d2b3-4f70-9525-a5d23b039488 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Received event network-changed-c16d1b26-cea9-482d-85b5-1691e078aa5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:45:28 compute-2 nova_compute[226829]: 2026-01-31 08:45:28.760 226833 DEBUG nova.compute.manager [req-876626a1-9f35-49ce-9895-37fc97686e07 req-ee96f839-d2b3-4f70-9525-a5d23b039488 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Refreshing instance network info cache due to event network-changed-c16d1b26-cea9-482d-85b5-1691e078aa5d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:45:28 compute-2 nova_compute[226829]: 2026-01-31 08:45:28.760 226833 DEBUG oslo_concurrency.lockutils [req-876626a1-9f35-49ce-9895-37fc97686e07 req-ee96f839-d2b3-4f70-9525-a5d23b039488 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-77d3758e-409b-4bdb-ba47-044b8c99ba4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:45:29 compute-2 nova_compute[226829]: 2026-01-31 08:45:29.229 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:29.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:30 compute-2 nova_compute[226829]: 2026-01-31 08:45:30.018 226833 DEBUG nova.network.neutron [req-6ea21b60-a2eb-4f4d-a807-2f904348b54b req-eb9bfe09-6fd6-4cdd-a275-b7f93638458d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Updated VIF entry in instance network info cache for port c16d1b26-cea9-482d-85b5-1691e078aa5d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:45:30 compute-2 nova_compute[226829]: 2026-01-31 08:45:30.019 226833 DEBUG nova.network.neutron [req-6ea21b60-a2eb-4f4d-a807-2f904348b54b req-eb9bfe09-6fd6-4cdd-a275-b7f93638458d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Updating instance_info_cache with network_info: [{"id": "c16d1b26-cea9-482d-85b5-1691e078aa5d", "address": "fa:16:3e:66:9d:65", "network": {"id": "aa8535db-1bf5-453e-8521-d36054020c47", "bridge": "br-int", "label": "tempest-network-smoke--713413715", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc16d1b26-ce", "ovs_interfaceid": "c16d1b26-cea9-482d-85b5-1691e078aa5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:45:30 compute-2 ceph-mon[77282]: pgmap v3380: 305 pgs: 305 active+clean; 293 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 13 KiB/s wr, 87 op/s
Jan 31 08:45:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:30 compute-2 nova_compute[226829]: 2026-01-31 08:45:30.560 226833 DEBUG oslo_concurrency.lockutils [req-6ea21b60-a2eb-4f4d-a807-2f904348b54b req-eb9bfe09-6fd6-4cdd-a275-b7f93638458d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-77d3758e-409b-4bdb-ba47-044b8c99ba4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:45:30 compute-2 nova_compute[226829]: 2026-01-31 08:45:30.561 226833 DEBUG oslo_concurrency.lockutils [req-876626a1-9f35-49ce-9895-37fc97686e07 req-ee96f839-d2b3-4f70-9525-a5d23b039488 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-77d3758e-409b-4bdb-ba47-044b8c99ba4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:45:30 compute-2 nova_compute[226829]: 2026-01-31 08:45:30.561 226833 DEBUG nova.network.neutron [req-876626a1-9f35-49ce-9895-37fc97686e07 req-ee96f839-d2b3-4f70-9525-a5d23b039488 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Refreshing network info cache for port c16d1b26-cea9-482d-85b5-1691e078aa5d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:45:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:45:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:30.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:45:31 compute-2 ovn_controller[133834]: 2026-01-31T08:45:31Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:66:9d:65 10.100.0.4
Jan 31 08:45:31 compute-2 ovn_controller[133834]: 2026-01-31T08:45:31Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:66:9d:65 10.100.0.4
Jan 31 08:45:31 compute-2 nova_compute[226829]: 2026-01-31 08:45:31.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:31.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:32 compute-2 ceph-mon[77282]: pgmap v3381: 305 pgs: 305 active+clean; 296 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 352 KiB/s wr, 144 op/s
Jan 31 08:45:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:32.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:33 compute-2 nova_compute[226829]: 2026-01-31 08:45:33.085 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:33.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:34 compute-2 ceph-mon[77282]: pgmap v3382: 305 pgs: 305 active+clean; 296 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 352 KiB/s wr, 109 op/s
Jan 31 08:45:34 compute-2 nova_compute[226829]: 2026-01-31 08:45:34.231 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:34.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:35 compute-2 nova_compute[226829]: 2026-01-31 08:45:35.694 226833 DEBUG nova.network.neutron [req-876626a1-9f35-49ce-9895-37fc97686e07 req-ee96f839-d2b3-4f70-9525-a5d23b039488 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Updated VIF entry in instance network info cache for port c16d1b26-cea9-482d-85b5-1691e078aa5d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:45:35 compute-2 nova_compute[226829]: 2026-01-31 08:45:35.695 226833 DEBUG nova.network.neutron [req-876626a1-9f35-49ce-9895-37fc97686e07 req-ee96f839-d2b3-4f70-9525-a5d23b039488 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Updating instance_info_cache with network_info: [{"id": "c16d1b26-cea9-482d-85b5-1691e078aa5d", "address": "fa:16:3e:66:9d:65", "network": {"id": "aa8535db-1bf5-453e-8521-d36054020c47", "bridge": "br-int", "label": "tempest-network-smoke--713413715", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc16d1b26-ce", "ovs_interfaceid": "c16d1b26-cea9-482d-85b5-1691e078aa5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:45:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:45:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:35.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:45:35 compute-2 nova_compute[226829]: 2026-01-31 08:45:35.957 226833 DEBUG oslo_concurrency.lockutils [req-876626a1-9f35-49ce-9895-37fc97686e07 req-ee96f839-d2b3-4f70-9525-a5d23b039488 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-77d3758e-409b-4bdb-ba47-044b8c99ba4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:45:36 compute-2 ceph-mon[77282]: pgmap v3383: 305 pgs: 305 active+clean; 324 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 154 op/s
Jan 31 08:45:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:45:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:36.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:45:37 compute-2 nova_compute[226829]: 2026-01-31 08:45:37.856 226833 DEBUG oslo_concurrency.lockutils [None req-aa05c4ea-14c7-4491-a9e8-bce554fadf3e c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "77d3758e-409b-4bdb-ba47-044b8c99ba4d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:45:37 compute-2 nova_compute[226829]: 2026-01-31 08:45:37.856 226833 DEBUG oslo_concurrency.lockutils [None req-aa05c4ea-14c7-4491-a9e8-bce554fadf3e c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "77d3758e-409b-4bdb-ba47-044b8c99ba4d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:45:37 compute-2 nova_compute[226829]: 2026-01-31 08:45:37.857 226833 DEBUG oslo_concurrency.lockutils [None req-aa05c4ea-14c7-4491-a9e8-bce554fadf3e c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "77d3758e-409b-4bdb-ba47-044b8c99ba4d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:45:37 compute-2 nova_compute[226829]: 2026-01-31 08:45:37.857 226833 DEBUG oslo_concurrency.lockutils [None req-aa05c4ea-14c7-4491-a9e8-bce554fadf3e c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "77d3758e-409b-4bdb-ba47-044b8c99ba4d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:45:37 compute-2 nova_compute[226829]: 2026-01-31 08:45:37.857 226833 DEBUG oslo_concurrency.lockutils [None req-aa05c4ea-14c7-4491-a9e8-bce554fadf3e c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "77d3758e-409b-4bdb-ba47-044b8c99ba4d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:45:37 compute-2 nova_compute[226829]: 2026-01-31 08:45:37.859 226833 INFO nova.compute.manager [None req-aa05c4ea-14c7-4491-a9e8-bce554fadf3e c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Terminating instance
Jan 31 08:45:37 compute-2 nova_compute[226829]: 2026-01-31 08:45:37.860 226833 DEBUG nova.compute.manager [None req-aa05c4ea-14c7-4491-a9e8-bce554fadf3e c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:45:37 compute-2 kernel: tapc16d1b26-ce (unregistering): left promiscuous mode
Jan 31 08:45:37 compute-2 NetworkManager[48999]: <info>  [1769849137.9193] device (tapc16d1b26-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:45:37 compute-2 nova_compute[226829]: 2026-01-31 08:45:37.928 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:37 compute-2 ovn_controller[133834]: 2026-01-31T08:45:37Z|00763|binding|INFO|Releasing lport c16d1b26-cea9-482d-85b5-1691e078aa5d from this chassis (sb_readonly=0)
Jan 31 08:45:37 compute-2 ovn_controller[133834]: 2026-01-31T08:45:37Z|00764|binding|INFO|Setting lport c16d1b26-cea9-482d-85b5-1691e078aa5d down in Southbound
Jan 31 08:45:37 compute-2 ovn_controller[133834]: 2026-01-31T08:45:37Z|00765|binding|INFO|Removing iface tapc16d1b26-ce ovn-installed in OVS
Jan 31 08:45:37 compute-2 nova_compute[226829]: 2026-01-31 08:45:37.931 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:37.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:37 compute-2 nova_compute[226829]: 2026-01-31 08:45:37.936 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:38 compute-2 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000bf.scope: Deactivated successfully.
Jan 31 08:45:38 compute-2 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000bf.scope: Consumed 13.728s CPU time.
Jan 31 08:45:38 compute-2 systemd-machined[195142]: Machine qemu-87-instance-000000bf terminated.
Jan 31 08:45:38 compute-2 nova_compute[226829]: 2026-01-31 08:45:38.087 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:38 compute-2 nova_compute[226829]: 2026-01-31 08:45:38.094 226833 INFO nova.virt.libvirt.driver [-] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Instance destroyed successfully.
Jan 31 08:45:38 compute-2 nova_compute[226829]: 2026-01-31 08:45:38.094 226833 DEBUG nova.objects.instance [None req-aa05c4ea-14c7-4491-a9e8-bce554fadf3e c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lazy-loading 'resources' on Instance uuid 77d3758e-409b-4bdb-ba47-044b8c99ba4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:45:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:38.124 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:9d:65 10.100.0.4', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '77d3758e-409b-4bdb-ba47-044b8c99ba4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa8535db-1bf5-453e-8521-d36054020c47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba35ae24dbf3443e8a526dce39c6793b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ebb0cf2-f8f8-4f5f-9f1c-79de32e76bba, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=c16d1b26-cea9-482d-85b5-1691e078aa5d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:45:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:38.128 143841 INFO neutron.agent.ovn.metadata.agent [-] Port c16d1b26-cea9-482d-85b5-1691e078aa5d in datapath aa8535db-1bf5-453e-8521-d36054020c47 unbound from our chassis
Jan 31 08:45:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:38.131 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa8535db-1bf5-453e-8521-d36054020c47, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:45:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:38.134 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[48412796-9287-4945-ac86-a89ea6a63f61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:38.135 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47 namespace which is not needed anymore
Jan 31 08:45:38 compute-2 neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47[318157]: [NOTICE]   (318161) : haproxy version is 2.8.14-c23fe91
Jan 31 08:45:38 compute-2 neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47[318157]: [NOTICE]   (318161) : path to executable is /usr/sbin/haproxy
Jan 31 08:45:38 compute-2 neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47[318157]: [WARNING]  (318161) : Exiting Master process...
Jan 31 08:45:38 compute-2 neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47[318157]: [WARNING]  (318161) : Exiting Master process...
Jan 31 08:45:38 compute-2 neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47[318157]: [ALERT]    (318161) : Current worker (318163) exited with code 143 (Terminated)
Jan 31 08:45:38 compute-2 neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47[318157]: [WARNING]  (318161) : All workers exited. Exiting... (0)
Jan 31 08:45:38 compute-2 systemd[1]: libpod-1911cd9c9f9380af59f2670519892998e0a7cb1d112c937931420993db646c41.scope: Deactivated successfully.
Jan 31 08:45:38 compute-2 podman[318540]: 2026-01-31 08:45:38.259345387 +0000 UTC m=+0.046367231 container died 1911cd9c9f9380af59f2670519892998e0a7cb1d112c937931420993db646c41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:45:38 compute-2 ceph-mon[77282]: pgmap v3384: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Jan 31 08:45:38 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1911cd9c9f9380af59f2670519892998e0a7cb1d112c937931420993db646c41-userdata-shm.mount: Deactivated successfully.
Jan 31 08:45:38 compute-2 systemd[1]: var-lib-containers-storage-overlay-ab170395873ecc1992903344d66344a59e6c6c62ced6ee92c751fd0f3a8e35d1-merged.mount: Deactivated successfully.
Jan 31 08:45:38 compute-2 podman[318540]: 2026-01-31 08:45:38.303299212 +0000 UTC m=+0.090321046 container cleanup 1911cd9c9f9380af59f2670519892998e0a7cb1d112c937931420993db646c41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:45:38 compute-2 systemd[1]: libpod-conmon-1911cd9c9f9380af59f2670519892998e0a7cb1d112c937931420993db646c41.scope: Deactivated successfully.
Jan 31 08:45:38 compute-2 podman[318572]: 2026-01-31 08:45:38.351955644 +0000 UTC m=+0.035384463 container remove 1911cd9c9f9380af59f2670519892998e0a7cb1d112c937931420993db646c41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:45:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:38.355 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9c24b4a7-4414-4a98-a833-f40bb538b7f5]: (4, ('Sat Jan 31 08:45:38 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47 (1911cd9c9f9380af59f2670519892998e0a7cb1d112c937931420993db646c41)\n1911cd9c9f9380af59f2670519892998e0a7cb1d112c937931420993db646c41\nSat Jan 31 08:45:38 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47 (1911cd9c9f9380af59f2670519892998e0a7cb1d112c937931420993db646c41)\n1911cd9c9f9380af59f2670519892998e0a7cb1d112c937931420993db646c41\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:38.357 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[eb42d8b7-cf8b-4b6d-938f-f158feb82ed9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:38.358 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa8535db-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:45:38 compute-2 nova_compute[226829]: 2026-01-31 08:45:38.391 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:38 compute-2 kernel: tapaa8535db-10: left promiscuous mode
Jan 31 08:45:38 compute-2 nova_compute[226829]: 2026-01-31 08:45:38.400 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:38.404 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[47565541-ec45-4dbb-864d-d78532d0b098]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:38.416 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4f63811a-a3e5-4682-90c2-5ba81fc7d4d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:38.417 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[546484bc-ec36-4ffc-9889-32bf33d1a54c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:38.431 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e770c237-3189-4113-adf3-ce3bd58a0788]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 936631, 'reachable_time': 21582, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318591, 'error': None, 'target': 'ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:38 compute-2 systemd[1]: run-netns-ovnmeta\x2daa8535db\x2d1bf5\x2d453e\x2d8521\x2dd36054020c47.mount: Deactivated successfully.
Jan 31 08:45:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:38.436 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:45:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:38.437 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[4eb62d04-eb7c-4be2-bca0-e3c9fe0c559f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:38 compute-2 nova_compute[226829]: 2026-01-31 08:45:38.535 226833 DEBUG nova.virt.libvirt.vif [None req-aa05c4ea-14c7-4491-a9e8-bce554fadf3e c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:44:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-gen-1-1946463825',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-gen-1-1946463825',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1014068786-ge',id=191,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG+IEKLcDIGbeOvOdAVtorZ1xtiCqnJ7fs7G+aYTHXv48LqaidcMSGgy+Nrfu6X80mnMDyQMW/ANMH0isk5utMRMD3EHvSyRl+Xh4xqHrF93AhlQmH4UiDGLeTiTMaGHCw==',key_name='tempest-TestSecurityGroupsBasicOps-342550486',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:45:19Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ba35ae24dbf3443e8a526dce39c6793b',ramdisk_id='',reservation_id='r-05a3xjl7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1014068786',owner_user_name='tempest-TestSecurityGroupsBasicOps-1014068786-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:45:19Z,user_data=None,user_id='c6968a1ee10e4e3b8651ffe0240a7e46',uuid=77d3758e-409b-4bdb-ba47-044b8c99ba4d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c16d1b26-cea9-482d-85b5-1691e078aa5d", "address": "fa:16:3e:66:9d:65", "network": {"id": "aa8535db-1bf5-453e-8521-d36054020c47", "bridge": "br-int", "label": "tempest-network-smoke--713413715", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc16d1b26-ce", "ovs_interfaceid": "c16d1b26-cea9-482d-85b5-1691e078aa5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:45:38 compute-2 nova_compute[226829]: 2026-01-31 08:45:38.536 226833 DEBUG nova.network.os_vif_util [None req-aa05c4ea-14c7-4491-a9e8-bce554fadf3e c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converting VIF {"id": "c16d1b26-cea9-482d-85b5-1691e078aa5d", "address": "fa:16:3e:66:9d:65", "network": {"id": "aa8535db-1bf5-453e-8521-d36054020c47", "bridge": "br-int", "label": "tempest-network-smoke--713413715", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc16d1b26-ce", "ovs_interfaceid": "c16d1b26-cea9-482d-85b5-1691e078aa5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:45:38 compute-2 nova_compute[226829]: 2026-01-31 08:45:38.537 226833 DEBUG nova.network.os_vif_util [None req-aa05c4ea-14c7-4491-a9e8-bce554fadf3e c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:66:9d:65,bridge_name='br-int',has_traffic_filtering=True,id=c16d1b26-cea9-482d-85b5-1691e078aa5d,network=Network(aa8535db-1bf5-453e-8521-d36054020c47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc16d1b26-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:45:38 compute-2 nova_compute[226829]: 2026-01-31 08:45:38.538 226833 DEBUG os_vif [None req-aa05c4ea-14c7-4491-a9e8-bce554fadf3e c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:9d:65,bridge_name='br-int',has_traffic_filtering=True,id=c16d1b26-cea9-482d-85b5-1691e078aa5d,network=Network(aa8535db-1bf5-453e-8521-d36054020c47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc16d1b26-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:45:38 compute-2 nova_compute[226829]: 2026-01-31 08:45:38.542 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:38 compute-2 nova_compute[226829]: 2026-01-31 08:45:38.543 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc16d1b26-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:45:38 compute-2 nova_compute[226829]: 2026-01-31 08:45:38.546 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:38 compute-2 nova_compute[226829]: 2026-01-31 08:45:38.549 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:45:38 compute-2 nova_compute[226829]: 2026-01-31 08:45:38.556 226833 INFO os_vif [None req-aa05c4ea-14c7-4491-a9e8-bce554fadf3e c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:9d:65,bridge_name='br-int',has_traffic_filtering=True,id=c16d1b26-cea9-482d-85b5-1691e078aa5d,network=Network(aa8535db-1bf5-453e-8521-d36054020c47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc16d1b26-ce')
Jan 31 08:45:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:38.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:38 compute-2 sudo[318610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:45:38 compute-2 sudo[318610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:38 compute-2 sudo[318610]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:38 compute-2 sudo[318635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:45:38 compute-2 sudo[318635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:38 compute-2 sudo[318635]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:39 compute-2 nova_compute[226829]: 2026-01-31 08:45:39.465 226833 INFO nova.virt.libvirt.driver [None req-aa05c4ea-14c7-4491-a9e8-bce554fadf3e c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Deleting instance files /var/lib/nova/instances/77d3758e-409b-4bdb-ba47-044b8c99ba4d_del
Jan 31 08:45:39 compute-2 nova_compute[226829]: 2026-01-31 08:45:39.465 226833 INFO nova.virt.libvirt.driver [None req-aa05c4ea-14c7-4491-a9e8-bce554fadf3e c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Deletion of /var/lib/nova/instances/77d3758e-409b-4bdb-ba47-044b8c99ba4d_del complete
Jan 31 08:45:39 compute-2 nova_compute[226829]: 2026-01-31 08:45:39.815 226833 DEBUG nova.compute.manager [req-853b1075-824e-41c7-8d41-c9fddda9d98b req-39a834ac-40c3-43db-b827-41d84ad72dcf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Received event network-vif-unplugged-c16d1b26-cea9-482d-85b5-1691e078aa5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:45:39 compute-2 nova_compute[226829]: 2026-01-31 08:45:39.815 226833 DEBUG oslo_concurrency.lockutils [req-853b1075-824e-41c7-8d41-c9fddda9d98b req-39a834ac-40c3-43db-b827-41d84ad72dcf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "77d3758e-409b-4bdb-ba47-044b8c99ba4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:45:39 compute-2 nova_compute[226829]: 2026-01-31 08:45:39.816 226833 DEBUG oslo_concurrency.lockutils [req-853b1075-824e-41c7-8d41-c9fddda9d98b req-39a834ac-40c3-43db-b827-41d84ad72dcf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "77d3758e-409b-4bdb-ba47-044b8c99ba4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:45:39 compute-2 nova_compute[226829]: 2026-01-31 08:45:39.816 226833 DEBUG oslo_concurrency.lockutils [req-853b1075-824e-41c7-8d41-c9fddda9d98b req-39a834ac-40c3-43db-b827-41d84ad72dcf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "77d3758e-409b-4bdb-ba47-044b8c99ba4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:45:39 compute-2 nova_compute[226829]: 2026-01-31 08:45:39.817 226833 DEBUG nova.compute.manager [req-853b1075-824e-41c7-8d41-c9fddda9d98b req-39a834ac-40c3-43db-b827-41d84ad72dcf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] No waiting events found dispatching network-vif-unplugged-c16d1b26-cea9-482d-85b5-1691e078aa5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:45:39 compute-2 nova_compute[226829]: 2026-01-31 08:45:39.817 226833 DEBUG nova.compute.manager [req-853b1075-824e-41c7-8d41-c9fddda9d98b req-39a834ac-40c3-43db-b827-41d84ad72dcf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Received event network-vif-unplugged-c16d1b26-cea9-482d-85b5-1691e078aa5d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:45:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:39.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:40 compute-2 nova_compute[226829]: 2026-01-31 08:45:40.025 226833 INFO nova.compute.manager [None req-aa05c4ea-14c7-4491-a9e8-bce554fadf3e c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Took 2.16 seconds to destroy the instance on the hypervisor.
Jan 31 08:45:40 compute-2 nova_compute[226829]: 2026-01-31 08:45:40.025 226833 DEBUG oslo.service.loopingcall [None req-aa05c4ea-14c7-4491-a9e8-bce554fadf3e c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:45:40 compute-2 nova_compute[226829]: 2026-01-31 08:45:40.026 226833 DEBUG nova.compute.manager [-] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:45:40 compute-2 nova_compute[226829]: 2026-01-31 08:45:40.026 226833 DEBUG nova.network.neutron [-] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:45:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:40 compute-2 ceph-mon[77282]: pgmap v3385: 305 pgs: 305 active+clean; 337 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 135 op/s
Jan 31 08:45:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:40.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:41 compute-2 ceph-mon[77282]: pgmap v3386: 305 pgs: 305 active+clean; 311 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.3 MiB/s wr, 200 op/s
Jan 31 08:45:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:41.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:42 compute-2 nova_compute[226829]: 2026-01-31 08:45:42.275 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:42.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:43 compute-2 nova_compute[226829]: 2026-01-31 08:45:43.089 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:43 compute-2 nova_compute[226829]: 2026-01-31 08:45:43.545 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:43 compute-2 nova_compute[226829]: 2026-01-31 08:45:43.566 226833 DEBUG nova.compute.manager [req-2fa87e8b-31d8-4c7b-9f87-1eece14e9601 req-f1067704-fefb-4f16-8bb4-000644834bfd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Received event network-vif-plugged-c16d1b26-cea9-482d-85b5-1691e078aa5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:45:43 compute-2 nova_compute[226829]: 2026-01-31 08:45:43.566 226833 DEBUG oslo_concurrency.lockutils [req-2fa87e8b-31d8-4c7b-9f87-1eece14e9601 req-f1067704-fefb-4f16-8bb4-000644834bfd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "77d3758e-409b-4bdb-ba47-044b8c99ba4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:45:43 compute-2 nova_compute[226829]: 2026-01-31 08:45:43.567 226833 DEBUG oslo_concurrency.lockutils [req-2fa87e8b-31d8-4c7b-9f87-1eece14e9601 req-f1067704-fefb-4f16-8bb4-000644834bfd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "77d3758e-409b-4bdb-ba47-044b8c99ba4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:45:43 compute-2 nova_compute[226829]: 2026-01-31 08:45:43.567 226833 DEBUG oslo_concurrency.lockutils [req-2fa87e8b-31d8-4c7b-9f87-1eece14e9601 req-f1067704-fefb-4f16-8bb4-000644834bfd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "77d3758e-409b-4bdb-ba47-044b8c99ba4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:45:43 compute-2 nova_compute[226829]: 2026-01-31 08:45:43.567 226833 DEBUG nova.compute.manager [req-2fa87e8b-31d8-4c7b-9f87-1eece14e9601 req-f1067704-fefb-4f16-8bb4-000644834bfd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] No waiting events found dispatching network-vif-plugged-c16d1b26-cea9-482d-85b5-1691e078aa5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:45:43 compute-2 nova_compute[226829]: 2026-01-31 08:45:43.567 226833 WARNING nova.compute.manager [req-2fa87e8b-31d8-4c7b-9f87-1eece14e9601 req-f1067704-fefb-4f16-8bb4-000644834bfd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Received unexpected event network-vif-plugged-c16d1b26-cea9-482d-85b5-1691e078aa5d for instance with vm_state active and task_state deleting.
Jan 31 08:45:43 compute-2 ceph-mon[77282]: pgmap v3387: 305 pgs: 305 active+clean; 311 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 634 KiB/s rd, 3.9 MiB/s wr, 142 op/s
Jan 31 08:45:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:43.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:44 compute-2 nova_compute[226829]: 2026-01-31 08:45:44.163 226833 DEBUG nova.network.neutron [-] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:45:44 compute-2 nova_compute[226829]: 2026-01-31 08:45:44.257 226833 DEBUG nova.compute.manager [req-d9898a2e-6d59-475f-a03e-3a41c8de08d5 req-f80b7713-0cf5-4ea1-ab13-e265c93e8fd3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Received event network-vif-deleted-c16d1b26-cea9-482d-85b5-1691e078aa5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:45:44 compute-2 nova_compute[226829]: 2026-01-31 08:45:44.257 226833 INFO nova.compute.manager [req-d9898a2e-6d59-475f-a03e-3a41c8de08d5 req-f80b7713-0cf5-4ea1-ab13-e265c93e8fd3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Neutron deleted interface c16d1b26-cea9-482d-85b5-1691e078aa5d; detaching it from the instance and deleting it from the info cache
Jan 31 08:45:44 compute-2 nova_compute[226829]: 2026-01-31 08:45:44.257 226833 DEBUG nova.network.neutron [req-d9898a2e-6d59-475f-a03e-3a41c8de08d5 req-f80b7713-0cf5-4ea1-ab13-e265c93e8fd3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:45:44 compute-2 nova_compute[226829]: 2026-01-31 08:45:44.407 226833 INFO nova.compute.manager [-] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Took 4.38 seconds to deallocate network for instance.
Jan 31 08:45:44 compute-2 nova_compute[226829]: 2026-01-31 08:45:44.412 226833 DEBUG nova.compute.manager [req-d9898a2e-6d59-475f-a03e-3a41c8de08d5 req-f80b7713-0cf5-4ea1-ab13-e265c93e8fd3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Detach interface failed, port_id=c16d1b26-cea9-482d-85b5-1691e078aa5d, reason: Instance 77d3758e-409b-4bdb-ba47-044b8c99ba4d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:45:44 compute-2 nova_compute[226829]: 2026-01-31 08:45:44.651 226833 DEBUG oslo_concurrency.lockutils [None req-aa05c4ea-14c7-4491-a9e8-bce554fadf3e c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:45:44 compute-2 nova_compute[226829]: 2026-01-31 08:45:44.652 226833 DEBUG oslo_concurrency.lockutils [None req-aa05c4ea-14c7-4491-a9e8-bce554fadf3e c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:45:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:44.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:45 compute-2 nova_compute[226829]: 2026-01-31 08:45:45.078 226833 DEBUG oslo_concurrency.processutils [None req-aa05c4ea-14c7-4491-a9e8-bce554fadf3e c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:45:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:45:45 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3035427977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:45:45 compute-2 nova_compute[226829]: 2026-01-31 08:45:45.514 226833 DEBUG oslo_concurrency.processutils [None req-aa05c4ea-14c7-4491-a9e8-bce554fadf3e c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:45:45 compute-2 nova_compute[226829]: 2026-01-31 08:45:45.519 226833 DEBUG nova.compute.provider_tree [None req-aa05c4ea-14c7-4491-a9e8-bce554fadf3e c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:45:45 compute-2 nova_compute[226829]: 2026-01-31 08:45:45.733 226833 DEBUG nova.scheduler.client.report [None req-aa05c4ea-14c7-4491-a9e8-bce554fadf3e c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:45:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:45.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:46 compute-2 nova_compute[226829]: 2026-01-31 08:45:46.086 226833 DEBUG oslo_concurrency.lockutils [None req-aa05c4ea-14c7-4491-a9e8-bce554fadf3e c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:45:46 compute-2 ceph-mon[77282]: pgmap v3388: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 638 KiB/s rd, 4.0 MiB/s wr, 149 op/s
Jan 31 08:45:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3035427977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:45:46 compute-2 nova_compute[226829]: 2026-01-31 08:45:46.421 226833 INFO nova.scheduler.client.report [None req-aa05c4ea-14c7-4491-a9e8-bce554fadf3e c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Deleted allocations for instance 77d3758e-409b-4bdb-ba47-044b8c99ba4d
Jan 31 08:45:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:46.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:46 compute-2 nova_compute[226829]: 2026-01-31 08:45:46.943 226833 DEBUG oslo_concurrency.lockutils [None req-aa05c4ea-14c7-4491-a9e8-bce554fadf3e c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "77d3758e-409b-4bdb-ba47-044b8c99ba4d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.086s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:45:47 compute-2 ceph-mon[77282]: pgmap v3389: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 409 KiB/s rd, 2.2 MiB/s wr, 104 op/s
Jan 31 08:45:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:47.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:48 compute-2 nova_compute[226829]: 2026-01-31 08:45:48.092 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:48 compute-2 nova_compute[226829]: 2026-01-31 08:45:48.547 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:48.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:49 compute-2 ceph-mon[77282]: pgmap v3390: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 344 KiB/s rd, 2.2 MiB/s wr, 92 op/s
Jan 31 08:45:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:49.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:50 compute-2 podman[318689]: 2026-01-31 08:45:50.183727842 +0000 UTC m=+0.071991398 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller)
Jan 31 08:45:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:50.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:51 compute-2 nova_compute[226829]: 2026-01-31 08:45:51.556 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:51 compute-2 ceph-mon[77282]: pgmap v3391: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 344 KiB/s rd, 1.5 MiB/s wr, 89 op/s
Jan 31 08:45:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:51.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:52.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:53 compute-2 nova_compute[226829]: 2026-01-31 08:45:53.093 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849138.0921736, 77d3758e-409b-4bdb-ba47-044b8c99ba4d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:45:53 compute-2 nova_compute[226829]: 2026-01-31 08:45:53.094 226833 INFO nova.compute.manager [-] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] VM Stopped (Lifecycle Event)
Jan 31 08:45:53 compute-2 nova_compute[226829]: 2026-01-31 08:45:53.096 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:53 compute-2 nova_compute[226829]: 2026-01-31 08:45:53.549 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:53 compute-2 nova_compute[226829]: 2026-01-31 08:45:53.702 226833 DEBUG nova.compute.manager [None req-4c91d448-699d-4e7f-b88a-4aef11f58913 - - - - - -] [instance: 77d3758e-409b-4bdb-ba47-044b8c99ba4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:45:53 compute-2 ceph-mon[77282]: pgmap v3392: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.0 KiB/s rd, 14 KiB/s wr, 7 op/s
Jan 31 08:45:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2695108008' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:45:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2695108008' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:45:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:53.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:45:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:54.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:45:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:55 compute-2 ceph-mon[77282]: pgmap v3393: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.0 KiB/s rd, 14 KiB/s wr, 7 op/s
Jan 31 08:45:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:55.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:45:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:56.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:45:57 compute-2 ceph-mon[77282]: pgmap v3394: 305 pgs: 305 active+clean; 252 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 2.6 KiB/s wr, 14 op/s
Jan 31 08:45:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:57.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:58 compute-2 nova_compute[226829]: 2026-01-31 08:45:58.095 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:58 compute-2 podman[318719]: 2026-01-31 08:45:58.15190791 +0000 UTC m=+0.041907330 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 08:45:58 compute-2 nova_compute[226829]: 2026-01-31 08:45:58.550 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:58.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:58.859 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=87, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=86) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:45:58 compute-2 nova_compute[226829]: 2026-01-31 08:45:58.860 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:45:58.861 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:45:58 compute-2 sudo[318739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:45:58 compute-2 sudo[318739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:58 compute-2 sudo[318739]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:59 compute-2 sudo[318764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:45:59 compute-2 sudo[318764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:59 compute-2 sudo[318764]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:59 compute-2 nova_compute[226829]: 2026-01-31 08:45:59.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:59 compute-2 nova_compute[226829]: 2026-01-31 08:45:59.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:45:59 compute-2 nova_compute[226829]: 2026-01-31 08:45:59.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:45:59 compute-2 nova_compute[226829]: 2026-01-31 08:45:59.685 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:45:59 compute-2 nova_compute[226829]: 2026-01-31 08:45:59.685 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:59 compute-2 ceph-mon[77282]: pgmap v3395: 305 pgs: 305 active+clean; 232 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 1.6 KiB/s wr, 23 op/s
Jan 31 08:45:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:45:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:59.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:00 compute-2 nova_compute[226829]: 2026-01-31 08:46:00.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:00 compute-2 nova_compute[226829]: 2026-01-31 08:46:00.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:46:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:00.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:46:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3128772206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:46:01 compute-2 nova_compute[226829]: 2026-01-31 08:46:01.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:01 compute-2 ceph-mon[77282]: pgmap v3396: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.9 KiB/s wr, 28 op/s
Jan 31 08:46:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:01.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:46:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:02.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:46:03 compute-2 nova_compute[226829]: 2026-01-31 08:46:03.097 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:03 compute-2 nova_compute[226829]: 2026-01-31 08:46:03.551 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:03 compute-2 ceph-mon[77282]: pgmap v3397: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.9 KiB/s wr, 28 op/s
Jan 31 08:46:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:46:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:03.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:46:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:04.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:05 compute-2 ceph-mon[77282]: pgmap v3398: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 2.0 KiB/s wr, 30 op/s
Jan 31 08:46:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:05.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:06.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:06.923 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:46:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:06.924 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:46:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:06.924 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:46:07 compute-2 nova_compute[226829]: 2026-01-31 08:46:07.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:07.863 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '87'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:46:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:46:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:07.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:46:07 compute-2 ceph-mon[77282]: pgmap v3399: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.0 KiB/s wr, 33 op/s
Jan 31 08:46:08 compute-2 nova_compute[226829]: 2026-01-31 08:46:08.098 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:08 compute-2 nova_compute[226829]: 2026-01-31 08:46:08.551 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:08.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:46:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:09.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:46:09 compute-2 ceph-mon[77282]: pgmap v3400: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 511 B/s wr, 19 op/s
Jan 31 08:46:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:10 compute-2 nova_compute[226829]: 2026-01-31 08:46:10.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:10.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:11.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:12 compute-2 ceph-mon[77282]: pgmap v3401: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.1 KiB/s rd, 2.5 KiB/s wr, 11 op/s
Jan 31 08:46:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:12.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:13 compute-2 nova_compute[226829]: 2026-01-31 08:46:13.100 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:13 compute-2 nova_compute[226829]: 2026-01-31 08:46:13.553 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:13.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:14 compute-2 ceph-mon[77282]: pgmap v3402: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 KiB/s rd, 2.2 KiB/s wr, 5 op/s
Jan 31 08:46:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:14.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:15 compute-2 nova_compute[226829]: 2026-01-31 08:46:15.594 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:46:15 compute-2 nova_compute[226829]: 2026-01-31 08:46:15.594 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:46:15 compute-2 nova_compute[226829]: 2026-01-31 08:46:15.595 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:46:15 compute-2 nova_compute[226829]: 2026-01-31 08:46:15.595 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:46:15 compute-2 nova_compute[226829]: 2026-01-31 08:46:15.595 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:46:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:46:15 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2472085658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:46:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:15.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:15 compute-2 nova_compute[226829]: 2026-01-31 08:46:15.996 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:46:16 compute-2 ceph-mon[77282]: pgmap v3403: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 KiB/s rd, 3.2 KiB/s wr, 6 op/s
Jan 31 08:46:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2472085658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:46:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/33009808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:46:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2231380898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:46:16 compute-2 nova_compute[226829]: 2026-01-31 08:46:16.127 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:46:16 compute-2 nova_compute[226829]: 2026-01-31 08:46:16.128 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4128MB free_disk=20.942718505859375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:46:16 compute-2 nova_compute[226829]: 2026-01-31 08:46:16.128 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:46:16 compute-2 nova_compute[226829]: 2026-01-31 08:46:16.129 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:46:16 compute-2 nova_compute[226829]: 2026-01-31 08:46:16.783 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:46:16 compute-2 nova_compute[226829]: 2026-01-31 08:46:16.783 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:46:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:46:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:16.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:46:16 compute-2 nova_compute[226829]: 2026-01-31 08:46:16.839 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:46:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:46:17 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3721907473' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:46:17 compute-2 nova_compute[226829]: 2026-01-31 08:46:17.258 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:46:17 compute-2 nova_compute[226829]: 2026-01-31 08:46:17.263 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:46:17 compute-2 nova_compute[226829]: 2026-01-31 08:46:17.400 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:46:17 compute-2 nova_compute[226829]: 2026-01-31 08:46:17.919 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:46:17 compute-2 nova_compute[226829]: 2026-01-31 08:46:17.919 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:46:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:17.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:18 compute-2 ceph-mon[77282]: pgmap v3404: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s rd, 3.1 KiB/s wr, 3 op/s
Jan 31 08:46:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3721907473' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:46:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/584075436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:46:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/934742884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:46:18 compute-2 nova_compute[226829]: 2026-01-31 08:46:18.102 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:18 compute-2 nova_compute[226829]: 2026-01-31 08:46:18.554 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:18.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:19 compute-2 sudo[318844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:46:19 compute-2 sudo[318844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:19 compute-2 sudo[318844]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:19 compute-2 nova_compute[226829]: 2026-01-31 08:46:19.124 226833 DEBUG oslo_concurrency.lockutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "c6877078-3a1f-48f2-bc51-0daaa570d671" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:46:19 compute-2 nova_compute[226829]: 2026-01-31 08:46:19.125 226833 DEBUG oslo_concurrency.lockutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "c6877078-3a1f-48f2-bc51-0daaa570d671" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:46:19 compute-2 sudo[318869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:46:19 compute-2 sudo[318869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:19 compute-2 sudo[318869]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:19.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:20 compute-2 ceph-mon[77282]: pgmap v3405: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 341 B/s rd, 3.1 KiB/s wr, 1 op/s
Jan 31 08:46:20 compute-2 nova_compute[226829]: 2026-01-31 08:46:20.218 226833 DEBUG nova.compute.manager [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:46:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:20 compute-2 nova_compute[226829]: 2026-01-31 08:46:20.683 226833 DEBUG oslo_concurrency.lockutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:46:20 compute-2 nova_compute[226829]: 2026-01-31 08:46:20.683 226833 DEBUG oslo_concurrency.lockutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:46:20 compute-2 nova_compute[226829]: 2026-01-31 08:46:20.699 226833 DEBUG nova.virt.hardware [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:46:20 compute-2 nova_compute[226829]: 2026-01-31 08:46:20.699 226833 INFO nova.compute.claims [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:46:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:20.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:21 compute-2 podman[318895]: 2026-01-31 08:46:21.167310179 +0000 UTC m=+0.055490700 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:46:21 compute-2 nova_compute[226829]: 2026-01-31 08:46:21.407 226833 DEBUG oslo_concurrency.processutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:46:21 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:46:21 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1636768766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:46:21 compute-2 nova_compute[226829]: 2026-01-31 08:46:21.837 226833 DEBUG oslo_concurrency.processutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:46:21 compute-2 nova_compute[226829]: 2026-01-31 08:46:21.842 226833 DEBUG nova.compute.provider_tree [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:46:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:21.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:22 compute-2 nova_compute[226829]: 2026-01-31 08:46:22.014 226833 DEBUG nova.scheduler.client.report [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:46:22 compute-2 ceph-mon[77282]: pgmap v3406: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 3.0 KiB/s wr, 0 op/s
Jan 31 08:46:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1636768766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:46:22 compute-2 nova_compute[226829]: 2026-01-31 08:46:22.477 226833 DEBUG oslo_concurrency.lockutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:46:22 compute-2 nova_compute[226829]: 2026-01-31 08:46:22.478 226833 DEBUG nova.compute.manager [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:46:22 compute-2 nova_compute[226829]: 2026-01-31 08:46:22.770 226833 DEBUG nova.compute.manager [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:46:22 compute-2 nova_compute[226829]: 2026-01-31 08:46:22.771 226833 DEBUG nova.network.neutron [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:46:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:46:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:22.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:46:23 compute-2 nova_compute[226829]: 2026-01-31 08:46:23.103 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:23 compute-2 nova_compute[226829]: 2026-01-31 08:46:23.556 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:23 compute-2 nova_compute[226829]: 2026-01-31 08:46:23.660 226833 INFO nova.virt.libvirt.driver [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:46:23 compute-2 nova_compute[226829]: 2026-01-31 08:46:23.967 226833 DEBUG nova.compute.manager [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:46:23 compute-2 nova_compute[226829]: 2026-01-31 08:46:23.977 226833 DEBUG nova.policy [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4a56abd8fdd341ae88a99e102ab399de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:46:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:23.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:24 compute-2 ceph-mon[77282]: pgmap v3407: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 31 08:46:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:24.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:26.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:26 compute-2 ceph-mon[77282]: pgmap v3408: 305 pgs: 305 active+clean; 137 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.7 KiB/s wr, 17 op/s
Jan 31 08:46:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:46:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:26.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:46:27 compute-2 sudo[318947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:46:27 compute-2 sudo[318947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:27 compute-2 sudo[318947]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:27 compute-2 sudo[318972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:46:27 compute-2 sudo[318972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:27 compute-2 sudo[318972]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:27 compute-2 sudo[318997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:46:27 compute-2 sudo[318997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:27 compute-2 sudo[318997]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:27 compute-2 sudo[319022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:46:27 compute-2 sudo[319022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:27 compute-2 sudo[319022]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:28.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:28 compute-2 nova_compute[226829]: 2026-01-31 08:46:28.105 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:28 compute-2 ceph-mon[77282]: pgmap v3409: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 08:46:28 compute-2 nova_compute[226829]: 2026-01-31 08:46:28.557 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:28.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:29 compute-2 podman[319079]: 2026-01-31 08:46:29.166797557 +0000 UTC m=+0.054575383 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 08:46:29 compute-2 nova_compute[226829]: 2026-01-31 08:46:29.770 226833 DEBUG nova.compute.manager [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:46:29 compute-2 nova_compute[226829]: 2026-01-31 08:46:29.772 226833 DEBUG nova.virt.libvirt.driver [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:46:29 compute-2 nova_compute[226829]: 2026-01-31 08:46:29.772 226833 INFO nova.virt.libvirt.driver [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Creating image(s)
Jan 31 08:46:29 compute-2 nova_compute[226829]: 2026-01-31 08:46:29.801 226833 DEBUG nova.storage.rbd_utils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image c6877078-3a1f-48f2-bc51-0daaa570d671_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:46:29 compute-2 nova_compute[226829]: 2026-01-31 08:46:29.826 226833 DEBUG nova.storage.rbd_utils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image c6877078-3a1f-48f2-bc51-0daaa570d671_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:46:29 compute-2 nova_compute[226829]: 2026-01-31 08:46:29.857 226833 DEBUG nova.storage.rbd_utils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image c6877078-3a1f-48f2-bc51-0daaa570d671_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:46:29 compute-2 nova_compute[226829]: 2026-01-31 08:46:29.861 226833 DEBUG oslo_concurrency.processutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:46:29 compute-2 nova_compute[226829]: 2026-01-31 08:46:29.920 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:29 compute-2 nova_compute[226829]: 2026-01-31 08:46:29.921 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:46:29 compute-2 nova_compute[226829]: 2026-01-31 08:46:29.925 226833 DEBUG oslo_concurrency.processutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:46:29 compute-2 nova_compute[226829]: 2026-01-31 08:46:29.926 226833 DEBUG oslo_concurrency.lockutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:46:29 compute-2 nova_compute[226829]: 2026-01-31 08:46:29.928 226833 DEBUG oslo_concurrency.lockutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:46:29 compute-2 nova_compute[226829]: 2026-01-31 08:46:29.928 226833 DEBUG oslo_concurrency.lockutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:46:29 compute-2 nova_compute[226829]: 2026-01-31 08:46:29.962 226833 DEBUG nova.storage.rbd_utils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image c6877078-3a1f-48f2-bc51-0daaa570d671_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:46:29 compute-2 nova_compute[226829]: 2026-01-31 08:46:29.965 226833 DEBUG oslo_concurrency.processutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 c6877078-3a1f-48f2-bc51-0daaa570d671_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:46:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:30.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:30 compute-2 nova_compute[226829]: 2026-01-31 08:46:30.235 226833 DEBUG oslo_concurrency.processutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 c6877078-3a1f-48f2-bc51-0daaa570d671_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:46:30 compute-2 ceph-mon[77282]: pgmap v3410: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 08:46:30 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:46:30 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:46:30 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:46:30 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:46:30 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:46:30 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:46:30 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:46:30 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:46:30 compute-2 nova_compute[226829]: 2026-01-31 08:46:30.295 226833 DEBUG nova.storage.rbd_utils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] resizing rbd image c6877078-3a1f-48f2-bc51-0daaa570d671_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:46:30 compute-2 nova_compute[226829]: 2026-01-31 08:46:30.393 226833 DEBUG nova.objects.instance [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'migration_context' on Instance uuid c6877078-3a1f-48f2-bc51-0daaa570d671 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:46:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:30 compute-2 nova_compute[226829]: 2026-01-31 08:46:30.599 226833 DEBUG nova.virt.libvirt.driver [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:46:30 compute-2 nova_compute[226829]: 2026-01-31 08:46:30.600 226833 DEBUG nova.virt.libvirt.driver [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Ensure instance console log exists: /var/lib/nova/instances/c6877078-3a1f-48f2-bc51-0daaa570d671/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:46:30 compute-2 nova_compute[226829]: 2026-01-31 08:46:30.600 226833 DEBUG oslo_concurrency.lockutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:46:30 compute-2 nova_compute[226829]: 2026-01-31 08:46:30.601 226833 DEBUG oslo_concurrency.lockutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:46:30 compute-2 nova_compute[226829]: 2026-01-31 08:46:30.601 226833 DEBUG oslo_concurrency.lockutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:46:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:30.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:32.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:32 compute-2 ceph-mon[77282]: pgmap v3411: 305 pgs: 305 active+clean; 124 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 260 KiB/s wr, 50 op/s
Jan 31 08:46:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:32.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:33 compute-2 nova_compute[226829]: 2026-01-31 08:46:33.108 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:33 compute-2 nova_compute[226829]: 2026-01-31 08:46:33.559 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:33 compute-2 ceph-mon[77282]: pgmap v3412: 305 pgs: 305 active+clean; 124 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 260 KiB/s wr, 50 op/s
Jan 31 08:46:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:34.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:34 compute-2 nova_compute[226829]: 2026-01-31 08:46:34.216 226833 DEBUG nova.network.neutron [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Successfully created port: 0b2ec72e-9b76-4407-ba53-e8aea0a2334d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:46:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:46:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:34.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:46:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:35 compute-2 ceph-mon[77282]: pgmap v3413: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 31 08:46:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2492724714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:46:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:36.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:36 compute-2 nova_compute[226829]: 2026-01-31 08:46:36.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:36 compute-2 nova_compute[226829]: 2026-01-31 08:46:36.488 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:46:36 compute-2 nova_compute[226829]: 2026-01-31 08:46:36.489 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:46:36 compute-2 nova_compute[226829]: 2026-01-31 08:46:36.490 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:46:36 compute-2 nova_compute[226829]: 2026-01-31 08:46:36.490 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:46:36 compute-2 nova_compute[226829]: 2026-01-31 08:46:36.490 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:46:36 compute-2 nova_compute[226829]: 2026-01-31 08:46:36.490 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:46:36 compute-2 sudo[319269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:46:36 compute-2 sudo[319269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:36 compute-2 sudo[319269]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:46:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:36.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:46:36 compute-2 sudo[319294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:46:36 compute-2 sudo[319294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:36 compute-2 sudo[319294]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:37 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:46:37 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:46:37 compute-2 ceph-mon[77282]: pgmap v3414: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 31 08:46:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:38.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:38 compute-2 nova_compute[226829]: 2026-01-31 08:46:38.108 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:38 compute-2 nova_compute[226829]: 2026-01-31 08:46:38.560 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:46:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:38.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:46:39 compute-2 sudo[319320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:46:39 compute-2 sudo[319320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:39 compute-2 sudo[319320]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:39 compute-2 sudo[319345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:46:39 compute-2 sudo[319345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:39 compute-2 sudo[319345]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:39 compute-2 nova_compute[226829]: 2026-01-31 08:46:39.575 226833 DEBUG nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Jan 31 08:46:39 compute-2 ceph-mon[77282]: pgmap v3415: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:46:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:46:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:40.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:46:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:46:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:40.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:46:41 compute-2 ceph-mon[77282]: pgmap v3416: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:46:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:42.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:42 compute-2 nova_compute[226829]: 2026-01-31 08:46:42.619 226833 DEBUG nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Jan 31 08:46:42 compute-2 nova_compute[226829]: 2026-01-31 08:46:42.620 226833 DEBUG nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Image id 7c23949f-bba8-4466-bb79-caf568852d38 yields fingerprint ff90c10b8251df1dd96780c3025774cae23123c6 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Jan 31 08:46:42 compute-2 nova_compute[226829]: 2026-01-31 08:46:42.620 226833 INFO nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] image 7c23949f-bba8-4466-bb79-caf568852d38 at (/var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6): checking
Jan 31 08:46:42 compute-2 nova_compute[226829]: 2026-01-31 08:46:42.620 226833 DEBUG nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] image 7c23949f-bba8-4466-bb79-caf568852d38 at (/var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Jan 31 08:46:42 compute-2 nova_compute[226829]: 2026-01-31 08:46:42.622 226833 DEBUG nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Jan 31 08:46:42 compute-2 nova_compute[226829]: 2026-01-31 08:46:42.623 226833 DEBUG nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] c6877078-3a1f-48f2-bc51-0daaa570d671 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Jan 31 08:46:42 compute-2 nova_compute[226829]: 2026-01-31 08:46:42.623 226833 WARNING nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Unknown base file: /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c
Jan 31 08:46:42 compute-2 nova_compute[226829]: 2026-01-31 08:46:42.623 226833 WARNING nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Unknown base file: /var/lib/nova/instances/_base/db5baf63538b1dfcb3d1f0364e6b40e87a2aa4bb
Jan 31 08:46:42 compute-2 nova_compute[226829]: 2026-01-31 08:46:42.623 226833 INFO nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Active base files: /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6
Jan 31 08:46:42 compute-2 nova_compute[226829]: 2026-01-31 08:46:42.623 226833 INFO nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Removable base files: /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c /var/lib/nova/instances/_base/db5baf63538b1dfcb3d1f0364e6b40e87a2aa4bb
Jan 31 08:46:42 compute-2 nova_compute[226829]: 2026-01-31 08:46:42.623 226833 INFO nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c
Jan 31 08:46:42 compute-2 nova_compute[226829]: 2026-01-31 08:46:42.624 226833 INFO nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/db5baf63538b1dfcb3d1f0364e6b40e87a2aa4bb
Jan 31 08:46:42 compute-2 nova_compute[226829]: 2026-01-31 08:46:42.624 226833 DEBUG nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Jan 31 08:46:42 compute-2 nova_compute[226829]: 2026-01-31 08:46:42.624 226833 DEBUG nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Jan 31 08:46:42 compute-2 nova_compute[226829]: 2026-01-31 08:46:42.624 226833 DEBUG nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Jan 31 08:46:42 compute-2 nova_compute[226829]: 2026-01-31 08:46:42.624 226833 INFO nova.virt.libvirt.imagecache [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Jan 31 08:46:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:42.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:43 compute-2 nova_compute[226829]: 2026-01-31 08:46:43.110 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:43 compute-2 nova_compute[226829]: 2026-01-31 08:46:43.117 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:43 compute-2 nova_compute[226829]: 2026-01-31 08:46:43.166 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:43 compute-2 nova_compute[226829]: 2026-01-31 08:46:43.331 226833 DEBUG nova.network.neutron [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Successfully updated port: 0b2ec72e-9b76-4407-ba53-e8aea0a2334d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:46:43 compute-2 nova_compute[226829]: 2026-01-31 08:46:43.562 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:43 compute-2 ceph-mon[77282]: pgmap v3417: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 597 B/s rd, 1.5 MiB/s wr, 4 op/s
Jan 31 08:46:44 compute-2 nova_compute[226829]: 2026-01-31 08:46:44.011 226833 DEBUG oslo_concurrency.lockutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "refresh_cache-c6877078-3a1f-48f2-bc51-0daaa570d671" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:46:44 compute-2 nova_compute[226829]: 2026-01-31 08:46:44.012 226833 DEBUG oslo_concurrency.lockutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquired lock "refresh_cache-c6877078-3a1f-48f2-bc51-0daaa570d671" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:46:44 compute-2 nova_compute[226829]: 2026-01-31 08:46:44.012 226833 DEBUG nova.network.neutron [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:46:44 compute-2 nova_compute[226829]: 2026-01-31 08:46:44.019 226833 DEBUG nova.compute.manager [req-03009e85-587f-45e4-a85f-daa438722761 req-aac1ff5e-6d94-49b9-9e6c-3c8c731269d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Received event network-changed-0b2ec72e-9b76-4407-ba53-e8aea0a2334d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:46:44 compute-2 nova_compute[226829]: 2026-01-31 08:46:44.020 226833 DEBUG nova.compute.manager [req-03009e85-587f-45e4-a85f-daa438722761 req-aac1ff5e-6d94-49b9-9e6c-3c8c731269d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Refreshing instance network info cache due to event network-changed-0b2ec72e-9b76-4407-ba53-e8aea0a2334d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:46:44 compute-2 nova_compute[226829]: 2026-01-31 08:46:44.020 226833 DEBUG oslo_concurrency.lockutils [req-03009e85-587f-45e4-a85f-daa438722761 req-aac1ff5e-6d94-49b9-9e6c-3c8c731269d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-c6877078-3a1f-48f2-bc51-0daaa570d671" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:46:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:44.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:44.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:45 compute-2 ceph-mon[77282]: pgmap v3418: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 597 B/s rd, 1.5 MiB/s wr, 4 op/s
Jan 31 08:46:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:46.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:46:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:46.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:46:47 compute-2 ceph-mon[77282]: pgmap v3419: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:46:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:46:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:48.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:46:48 compute-2 nova_compute[226829]: 2026-01-31 08:46:48.112 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:48 compute-2 nova_compute[226829]: 2026-01-31 08:46:48.564 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:48.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:49 compute-2 nova_compute[226829]: 2026-01-31 08:46:49.007 226833 DEBUG nova.network.neutron [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:46:49 compute-2 ceph-mon[77282]: pgmap v3420: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:46:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:50.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:50.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:51 compute-2 nova_compute[226829]: 2026-01-31 08:46:51.316 226833 DEBUG nova.network.neutron [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Updating instance_info_cache with network_info: [{"id": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "address": "fa:16:3e:f6:a1:c8", "network": {"id": "58047b08-8008-41e3-ad4b-abc4736272e9", "bridge": "br-int", "label": "tempest-network-smoke--1499520732", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2ec72e-9b", "ovs_interfaceid": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:46:51 compute-2 nova_compute[226829]: 2026-01-31 08:46:51.651 226833 DEBUG oslo_concurrency.lockutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Releasing lock "refresh_cache-c6877078-3a1f-48f2-bc51-0daaa570d671" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:46:51 compute-2 nova_compute[226829]: 2026-01-31 08:46:51.652 226833 DEBUG nova.compute.manager [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Instance network_info: |[{"id": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "address": "fa:16:3e:f6:a1:c8", "network": {"id": "58047b08-8008-41e3-ad4b-abc4736272e9", "bridge": "br-int", "label": "tempest-network-smoke--1499520732", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2ec72e-9b", "ovs_interfaceid": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:46:51 compute-2 nova_compute[226829]: 2026-01-31 08:46:51.652 226833 DEBUG oslo_concurrency.lockutils [req-03009e85-587f-45e4-a85f-daa438722761 req-aac1ff5e-6d94-49b9-9e6c-3c8c731269d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-c6877078-3a1f-48f2-bc51-0daaa570d671" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:46:51 compute-2 nova_compute[226829]: 2026-01-31 08:46:51.653 226833 DEBUG nova.network.neutron [req-03009e85-587f-45e4-a85f-daa438722761 req-aac1ff5e-6d94-49b9-9e6c-3c8c731269d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Refreshing network info cache for port 0b2ec72e-9b76-4407-ba53-e8aea0a2334d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:46:51 compute-2 nova_compute[226829]: 2026-01-31 08:46:51.656 226833 DEBUG nova.virt.libvirt.driver [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Start _get_guest_xml network_info=[{"id": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "address": "fa:16:3e:f6:a1:c8", "network": {"id": "58047b08-8008-41e3-ad4b-abc4736272e9", "bridge": "br-int", "label": "tempest-network-smoke--1499520732", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2ec72e-9b", "ovs_interfaceid": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:46:51 compute-2 nova_compute[226829]: 2026-01-31 08:46:51.661 226833 WARNING nova.virt.libvirt.driver [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:46:51 compute-2 nova_compute[226829]: 2026-01-31 08:46:51.671 226833 DEBUG nova.virt.libvirt.host [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:46:51 compute-2 nova_compute[226829]: 2026-01-31 08:46:51.672 226833 DEBUG nova.virt.libvirt.host [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:46:51 compute-2 nova_compute[226829]: 2026-01-31 08:46:51.682 226833 DEBUG nova.virt.libvirt.host [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:46:51 compute-2 nova_compute[226829]: 2026-01-31 08:46:51.683 226833 DEBUG nova.virt.libvirt.host [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:46:51 compute-2 nova_compute[226829]: 2026-01-31 08:46:51.685 226833 DEBUG nova.virt.libvirt.driver [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:46:51 compute-2 nova_compute[226829]: 2026-01-31 08:46:51.685 226833 DEBUG nova.virt.hardware [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:46:51 compute-2 nova_compute[226829]: 2026-01-31 08:46:51.686 226833 DEBUG nova.virt.hardware [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:46:51 compute-2 nova_compute[226829]: 2026-01-31 08:46:51.686 226833 DEBUG nova.virt.hardware [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:46:51 compute-2 nova_compute[226829]: 2026-01-31 08:46:51.687 226833 DEBUG nova.virt.hardware [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:46:51 compute-2 nova_compute[226829]: 2026-01-31 08:46:51.687 226833 DEBUG nova.virt.hardware [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:46:51 compute-2 nova_compute[226829]: 2026-01-31 08:46:51.687 226833 DEBUG nova.virt.hardware [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:46:51 compute-2 nova_compute[226829]: 2026-01-31 08:46:51.688 226833 DEBUG nova.virt.hardware [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:46:51 compute-2 nova_compute[226829]: 2026-01-31 08:46:51.688 226833 DEBUG nova.virt.hardware [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:46:51 compute-2 nova_compute[226829]: 2026-01-31 08:46:51.689 226833 DEBUG nova.virt.hardware [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:46:51 compute-2 nova_compute[226829]: 2026-01-31 08:46:51.689 226833 DEBUG nova.virt.hardware [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:46:51 compute-2 nova_compute[226829]: 2026-01-31 08:46:51.689 226833 DEBUG nova.virt.hardware [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:46:51 compute-2 nova_compute[226829]: 2026-01-31 08:46:51.694 226833 DEBUG oslo_concurrency.processutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:46:52 compute-2 ceph-mon[77282]: pgmap v3421: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:46:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:52.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:46:52 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/604803900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.134 226833 DEBUG oslo_concurrency.processutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.157 226833 DEBUG nova.storage.rbd_utils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image c6877078-3a1f-48f2-bc51-0daaa570d671_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.164 226833 DEBUG oslo_concurrency.processutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:46:52 compute-2 podman[319398]: 2026-01-31 08:46:52.192001593 +0000 UTC m=+0.072680437 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 08:46:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:46:52 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1400578424' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.573 226833 DEBUG oslo_concurrency.processutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.574 226833 DEBUG nova.virt.libvirt.vif [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:46:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-425849670',display_name='tempest-TestNetworkBasicOps-server-425849670',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-425849670',id=193,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKMbC/FI4HjJREMCtTD7CNC5mvZyYtzpey4yRu6XY1sWZXuUm6lpM37jLTr6iQRJ2xBNOeXL6JthwkqGkxYDl24drZQt/0ATmDRDcJHCyK1nn+90+vt3jf49GzGXmtmbrw==',key_name='tempest-TestNetworkBasicOps-1418942212',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-w01idkb6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:46:27Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=c6877078-3a1f-48f2-bc51-0daaa570d671,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "address": "fa:16:3e:f6:a1:c8", "network": {"id": "58047b08-8008-41e3-ad4b-abc4736272e9", "bridge": "br-int", "label": "tempest-network-smoke--1499520732", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2ec72e-9b", "ovs_interfaceid": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.575 226833 DEBUG nova.network.os_vif_util [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "address": "fa:16:3e:f6:a1:c8", "network": {"id": "58047b08-8008-41e3-ad4b-abc4736272e9", "bridge": "br-int", "label": "tempest-network-smoke--1499520732", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2ec72e-9b", "ovs_interfaceid": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.576 226833 DEBUG nova.network.os_vif_util [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:a1:c8,bridge_name='br-int',has_traffic_filtering=True,id=0b2ec72e-9b76-4407-ba53-e8aea0a2334d,network=Network(58047b08-8008-41e3-ad4b-abc4736272e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b2ec72e-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.577 226833 DEBUG nova.objects.instance [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'pci_devices' on Instance uuid c6877078-3a1f-48f2-bc51-0daaa570d671 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.713 226833 DEBUG nova.virt.libvirt.driver [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:46:52 compute-2 nova_compute[226829]:   <uuid>c6877078-3a1f-48f2-bc51-0daaa570d671</uuid>
Jan 31 08:46:52 compute-2 nova_compute[226829]:   <name>instance-000000c1</name>
Jan 31 08:46:52 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:46:52 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:46:52 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <nova:name>tempest-TestNetworkBasicOps-server-425849670</nova:name>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:46:51</nova:creationTime>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:46:52 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:46:52 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:46:52 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:46:52 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:46:52 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:46:52 compute-2 nova_compute[226829]:         <nova:user uuid="4a56abd8fdd341ae88a99e102ab399de">tempest-TestNetworkBasicOps-1691550221-project-member</nova:user>
Jan 31 08:46:52 compute-2 nova_compute[226829]:         <nova:project uuid="0d55ec1a5544450dba4e4fd1426395d7">tempest-TestNetworkBasicOps-1691550221</nova:project>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:46:52 compute-2 nova_compute[226829]:         <nova:port uuid="0b2ec72e-9b76-4407-ba53-e8aea0a2334d">
Jan 31 08:46:52 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:46:52 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:46:52 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <system>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <entry name="serial">c6877078-3a1f-48f2-bc51-0daaa570d671</entry>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <entry name="uuid">c6877078-3a1f-48f2-bc51-0daaa570d671</entry>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     </system>
Jan 31 08:46:52 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:46:52 compute-2 nova_compute[226829]:   <os>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:   </os>
Jan 31 08:46:52 compute-2 nova_compute[226829]:   <features>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:   </features>
Jan 31 08:46:52 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:46:52 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:46:52 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/c6877078-3a1f-48f2-bc51-0daaa570d671_disk">
Jan 31 08:46:52 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       </source>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:46:52 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/c6877078-3a1f-48f2-bc51-0daaa570d671_disk.config">
Jan 31 08:46:52 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       </source>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:46:52 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:f6:a1:c8"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <target dev="tap0b2ec72e-9b"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/c6877078-3a1f-48f2-bc51-0daaa570d671/console.log" append="off"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <video>
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     </video>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:46:52 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:46:52 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:46:52 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:46:52 compute-2 nova_compute[226829]: </domain>
Jan 31 08:46:52 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.714 226833 DEBUG nova.compute.manager [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Preparing to wait for external event network-vif-plugged-0b2ec72e-9b76-4407-ba53-e8aea0a2334d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.714 226833 DEBUG oslo_concurrency.lockutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "c6877078-3a1f-48f2-bc51-0daaa570d671-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.715 226833 DEBUG oslo_concurrency.lockutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "c6877078-3a1f-48f2-bc51-0daaa570d671-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.715 226833 DEBUG oslo_concurrency.lockutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "c6877078-3a1f-48f2-bc51-0daaa570d671-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.716 226833 DEBUG nova.virt.libvirt.vif [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:46:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-425849670',display_name='tempest-TestNetworkBasicOps-server-425849670',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-425849670',id=193,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKMbC/FI4HjJREMCtTD7CNC5mvZyYtzpey4yRu6XY1sWZXuUm6lpM37jLTr6iQRJ2xBNOeXL6JthwkqGkxYDl24drZQt/0ATmDRDcJHCyK1nn+90+vt3jf49GzGXmtmbrw==',key_name='tempest-TestNetworkBasicOps-1418942212',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-w01idkb6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:46:27Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=c6877078-3a1f-48f2-bc51-0daaa570d671,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "address": "fa:16:3e:f6:a1:c8", "network": {"id": "58047b08-8008-41e3-ad4b-abc4736272e9", "bridge": "br-int", "label": "tempest-network-smoke--1499520732", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2ec72e-9b", "ovs_interfaceid": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.716 226833 DEBUG nova.network.os_vif_util [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "address": "fa:16:3e:f6:a1:c8", "network": {"id": "58047b08-8008-41e3-ad4b-abc4736272e9", "bridge": "br-int", "label": "tempest-network-smoke--1499520732", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2ec72e-9b", "ovs_interfaceid": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.717 226833 DEBUG nova.network.os_vif_util [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:a1:c8,bridge_name='br-int',has_traffic_filtering=True,id=0b2ec72e-9b76-4407-ba53-e8aea0a2334d,network=Network(58047b08-8008-41e3-ad4b-abc4736272e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b2ec72e-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.717 226833 DEBUG os_vif [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:a1:c8,bridge_name='br-int',has_traffic_filtering=True,id=0b2ec72e-9b76-4407-ba53-e8aea0a2334d,network=Network(58047b08-8008-41e3-ad4b-abc4736272e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b2ec72e-9b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.718 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.718 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.719 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.722 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.723 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b2ec72e-9b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.723 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0b2ec72e-9b, col_values=(('external_ids', {'iface-id': '0b2ec72e-9b76-4407-ba53-e8aea0a2334d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:a1:c8', 'vm-uuid': 'c6877078-3a1f-48f2-bc51-0daaa570d671'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.725 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:52 compute-2 NetworkManager[48999]: <info>  [1769849212.7261] manager: (tap0b2ec72e-9b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/381)
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.727 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.730 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.730 226833 INFO os_vif [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:a1:c8,bridge_name='br-int',has_traffic_filtering=True,id=0b2ec72e-9b76-4407-ba53-e8aea0a2334d,network=Network(58047b08-8008-41e3-ad4b-abc4736272e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b2ec72e-9b')
Jan 31 08:46:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:52.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.873 226833 DEBUG nova.virt.libvirt.driver [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.873 226833 DEBUG nova.virt.libvirt.driver [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.873 226833 DEBUG nova.virt.libvirt.driver [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No VIF found with MAC fa:16:3e:f6:a1:c8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.874 226833 INFO nova.virt.libvirt.driver [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Using config drive
Jan 31 08:46:52 compute-2 nova_compute[226829]: 2026-01-31 08:46:52.897 226833 DEBUG nova.storage.rbd_utils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image c6877078-3a1f-48f2-bc51-0daaa570d671_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:46:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/604803900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:46:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1400578424' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:46:53 compute-2 nova_compute[226829]: 2026-01-31 08:46:53.114 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:46:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2766823848' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:46:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:46:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2766823848' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:46:53 compute-2 nova_compute[226829]: 2026-01-31 08:46:53.878 226833 INFO nova.virt.libvirt.driver [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Creating config drive at /var/lib/nova/instances/c6877078-3a1f-48f2-bc51-0daaa570d671/disk.config
Jan 31 08:46:53 compute-2 nova_compute[226829]: 2026-01-31 08:46:53.884 226833 DEBUG oslo_concurrency.processutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c6877078-3a1f-48f2-bc51-0daaa570d671/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpk2ed4jpc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:46:54 compute-2 nova_compute[226829]: 2026-01-31 08:46:54.027 226833 DEBUG oslo_concurrency.processutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c6877078-3a1f-48f2-bc51-0daaa570d671/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpk2ed4jpc" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:46:54 compute-2 ceph-mon[77282]: pgmap v3422: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:46:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2766823848' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:46:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2766823848' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:46:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:54.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:54 compute-2 nova_compute[226829]: 2026-01-31 08:46:54.067 226833 DEBUG nova.storage.rbd_utils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image c6877078-3a1f-48f2-bc51-0daaa570d671_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:46:54 compute-2 nova_compute[226829]: 2026-01-31 08:46:54.072 226833 DEBUG oslo_concurrency.processutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c6877078-3a1f-48f2-bc51-0daaa570d671/disk.config c6877078-3a1f-48f2-bc51-0daaa570d671_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #163. Immutable memtables: 0.
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:46:54.139023) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 163
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849214139147, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 1880, "num_deletes": 256, "total_data_size": 4521122, "memory_usage": 4601920, "flush_reason": "Manual Compaction"}
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #164: started
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849214161956, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 164, "file_size": 2949889, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79640, "largest_seqno": 81515, "table_properties": {"data_size": 2942105, "index_size": 4662, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16009, "raw_average_key_size": 19, "raw_value_size": 2926626, "raw_average_value_size": 3644, "num_data_blocks": 205, "num_entries": 803, "num_filter_entries": 803, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849046, "oldest_key_time": 1769849046, "file_creation_time": 1769849214, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 22999 microseconds, and 7765 cpu microseconds.
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:46:54.162015) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #164: 2949889 bytes OK
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:46:54.162063) [db/memtable_list.cc:519] [default] Level-0 commit table #164 started
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:46:54.163371) [db/memtable_list.cc:722] [default] Level-0 commit table #164: memtable #1 done
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:46:54.163395) EVENT_LOG_v1 {"time_micros": 1769849214163388, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:46:54.163419) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 4512710, prev total WAL file size 4512710, number of live WAL files 2.
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000160.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:46:54.164428) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303134' seq:72057594037927935, type:22 .. '6C6F676D0033323636' seq:0, type:0; will stop at (end)
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [164(2880KB)], [162(11MB)]
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849214164514, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [164], "files_L6": [162], "score": -1, "input_data_size": 14989706, "oldest_snapshot_seqno": -1}
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #165: 10308 keys, 14840434 bytes, temperature: kUnknown
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849214275510, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 165, "file_size": 14840434, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14772482, "index_size": 41087, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25797, "raw_key_size": 271572, "raw_average_key_size": 26, "raw_value_size": 14590728, "raw_average_value_size": 1415, "num_data_blocks": 1577, "num_entries": 10308, "num_filter_entries": 10308, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769849214, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 165, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:46:54.275768) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 14840434 bytes
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:46:54.283966) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 134.9 rd, 133.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 11.5 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(10.1) write-amplify(5.0) OK, records in: 10835, records dropped: 527 output_compression: NoCompression
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:46:54.284005) EVENT_LOG_v1 {"time_micros": 1769849214283991, "job": 104, "event": "compaction_finished", "compaction_time_micros": 111102, "compaction_time_cpu_micros": 30214, "output_level": 6, "num_output_files": 1, "total_output_size": 14840434, "num_input_records": 10835, "num_output_records": 10308, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849214284681, "job": 104, "event": "table_file_deletion", "file_number": 164}
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000162.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849214285655, "job": 104, "event": "table_file_deletion", "file_number": 162}
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:46:54.164311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:46:54.285758) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:46:54.285763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:46:54.285766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:46:54.285768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:46:54 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:46:54.285770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:46:54 compute-2 nova_compute[226829]: 2026-01-31 08:46:54.338 226833 DEBUG oslo_concurrency.processutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c6877078-3a1f-48f2-bc51-0daaa570d671/disk.config c6877078-3a1f-48f2-bc51-0daaa570d671_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:46:54 compute-2 nova_compute[226829]: 2026-01-31 08:46:54.339 226833 INFO nova.virt.libvirt.driver [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Deleting local config drive /var/lib/nova/instances/c6877078-3a1f-48f2-bc51-0daaa570d671/disk.config because it was imported into RBD.
Jan 31 08:46:54 compute-2 nova_compute[226829]: 2026-01-31 08:46:54.371 226833 DEBUG nova.network.neutron [req-03009e85-587f-45e4-a85f-daa438722761 req-aac1ff5e-6d94-49b9-9e6c-3c8c731269d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Updated VIF entry in instance network info cache for port 0b2ec72e-9b76-4407-ba53-e8aea0a2334d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:46:54 compute-2 nova_compute[226829]: 2026-01-31 08:46:54.373 226833 DEBUG nova.network.neutron [req-03009e85-587f-45e4-a85f-daa438722761 req-aac1ff5e-6d94-49b9-9e6c-3c8c731269d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Updating instance_info_cache with network_info: [{"id": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "address": "fa:16:3e:f6:a1:c8", "network": {"id": "58047b08-8008-41e3-ad4b-abc4736272e9", "bridge": "br-int", "label": "tempest-network-smoke--1499520732", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2ec72e-9b", "ovs_interfaceid": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:46:54 compute-2 kernel: tap0b2ec72e-9b: entered promiscuous mode
Jan 31 08:46:54 compute-2 ovn_controller[133834]: 2026-01-31T08:46:54Z|00766|binding|INFO|Claiming lport 0b2ec72e-9b76-4407-ba53-e8aea0a2334d for this chassis.
Jan 31 08:46:54 compute-2 ovn_controller[133834]: 2026-01-31T08:46:54Z|00767|binding|INFO|0b2ec72e-9b76-4407-ba53-e8aea0a2334d: Claiming fa:16:3e:f6:a1:c8 10.100.0.10
Jan 31 08:46:54 compute-2 NetworkManager[48999]: <info>  [1769849214.3831] manager: (tap0b2ec72e-9b): new Tun device (/org/freedesktop/NetworkManager/Devices/382)
Jan 31 08:46:54 compute-2 nova_compute[226829]: 2026-01-31 08:46:54.383 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:54 compute-2 nova_compute[226829]: 2026-01-31 08:46:54.385 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:54 compute-2 nova_compute[226829]: 2026-01-31 08:46:54.403 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:54 compute-2 ovn_controller[133834]: 2026-01-31T08:46:54Z|00768|binding|INFO|Setting lport 0b2ec72e-9b76-4407-ba53-e8aea0a2334d ovn-installed in OVS
Jan 31 08:46:54 compute-2 systemd-machined[195142]: New machine qemu-88-instance-000000c1.
Jan 31 08:46:54 compute-2 nova_compute[226829]: 2026-01-31 08:46:54.407 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:54 compute-2 systemd-udevd[319541]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:54.416 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:a1:c8 10.100.0.10'], port_security=['fa:16:3e:f6:a1:c8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c6877078-3a1f-48f2-bc51-0daaa570d671', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58047b08-8008-41e3-ad4b-abc4736272e9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5babc4b4-11e9-4bf7-82f5-1d703e46f023', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d73b7ae4-5441-462a-860a-df757ef01151, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=0b2ec72e-9b76-4407-ba53-e8aea0a2334d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:46:54 compute-2 ovn_controller[133834]: 2026-01-31T08:46:54Z|00769|binding|INFO|Setting lport 0b2ec72e-9b76-4407-ba53-e8aea0a2334d up in Southbound
Jan 31 08:46:54 compute-2 NetworkManager[48999]: <info>  [1769849214.4189] device (tap0b2ec72e-9b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:46:54 compute-2 NetworkManager[48999]: <info>  [1769849214.4194] device (tap0b2ec72e-9b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:54.418 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 0b2ec72e-9b76-4407-ba53-e8aea0a2334d in datapath 58047b08-8008-41e3-ad4b-abc4736272e9 bound to our chassis
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:54.420 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58047b08-8008-41e3-ad4b-abc4736272e9
Jan 31 08:46:54 compute-2 systemd[1]: Started Virtual Machine qemu-88-instance-000000c1.
Jan 31 08:46:54 compute-2 nova_compute[226829]: 2026-01-31 08:46:54.427 226833 DEBUG oslo_concurrency.lockutils [req-03009e85-587f-45e4-a85f-daa438722761 req-aac1ff5e-6d94-49b9-9e6c-3c8c731269d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-c6877078-3a1f-48f2-bc51-0daaa570d671" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:54.429 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[272511c9-4909-42c3-a689-a9acc522fd19]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:54.429 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58047b08-81 in ovnmeta-58047b08-8008-41e3-ad4b-abc4736272e9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:54.432 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58047b08-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:54.432 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0ce3e1f2-d596-4843-bc51-3fa8eb029ea4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:54.433 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ba40958c-b82a-4d29-b3e2-b930b51bbef3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:54.441 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[1279f316-1ceb-436c-b9c2-6950e2db3477]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:54.452 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7e0f8ff3-e6b2-4504-99f0-4d65168228dd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:54.475 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[1cfd0c85-5c3f-494a-9a11-b158e13f695b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:46:54 compute-2 NetworkManager[48999]: <info>  [1769849214.4828] manager: (tap58047b08-80): new Veth device (/org/freedesktop/NetworkManager/Devices/383)
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:54.482 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[25995d80-79de-4e54-b778-cb9e0cbd5f60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:46:54 compute-2 systemd-udevd[319543]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:54.508 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[4729ea51-bdf9-45cf-b79c-b4e1aff18919]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:54.511 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[f8bf644b-a836-49de-a287-feb8b8467e94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:46:54 compute-2 NetworkManager[48999]: <info>  [1769849214.5246] device (tap58047b08-80): carrier: link connected
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:54.527 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[3dbbab0b-49f7-4dec-8dce-318837ffb63f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:54.539 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[30201b08-b4ec-4865-b6b4-040ef970d40c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58047b08-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:28:86:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 946699, 'reachable_time': 31013, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319576, 'error': None, 'target': 'ovnmeta-58047b08-8008-41e3-ad4b-abc4736272e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:54.547 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[bccf4b22-a2fc-4bba-8329-3309015836bb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe28:8666'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 946699, 'tstamp': 946699}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319577, 'error': None, 'target': 'ovnmeta-58047b08-8008-41e3-ad4b-abc4736272e9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:54.555 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a6ce7a9f-b3f5-4627-9c1e-11641cfeef6e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58047b08-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:28:86:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 946699, 'reachable_time': 31013, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 319578, 'error': None, 'target': 'ovnmeta-58047b08-8008-41e3-ad4b-abc4736272e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:54.570 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3fdd0672-c20b-472c-9f40-e3476d6b9092]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:54.597 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4faa7cfc-df7a-43c7-8e0e-507f448c17a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:54.598 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58047b08-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:54.598 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:54.599 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58047b08-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:46:54 compute-2 NetworkManager[48999]: <info>  [1769849214.6012] manager: (tap58047b08-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Jan 31 08:46:54 compute-2 kernel: tap58047b08-80: entered promiscuous mode
Jan 31 08:46:54 compute-2 nova_compute[226829]: 2026-01-31 08:46:54.600 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:54 compute-2 nova_compute[226829]: 2026-01-31 08:46:54.602 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:54.605 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58047b08-80, col_values=(('external_ids', {'iface-id': '4d5d68bb-880f-4c95-b90c-bedb6f5e595a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:46:54 compute-2 ovn_controller[133834]: 2026-01-31T08:46:54Z|00770|binding|INFO|Releasing lport 4d5d68bb-880f-4c95-b90c-bedb6f5e595a from this chassis (sb_readonly=0)
Jan 31 08:46:54 compute-2 nova_compute[226829]: 2026-01-31 08:46:54.606 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:54 compute-2 nova_compute[226829]: 2026-01-31 08:46:54.607 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:54.608 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58047b08-8008-41e3-ad4b-abc4736272e9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58047b08-8008-41e3-ad4b-abc4736272e9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:46:54 compute-2 nova_compute[226829]: 2026-01-31 08:46:54.610 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:54.611 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3f9866f4-11e5-4ef9-97c8-d8269bce49bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:54.612 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-58047b08-8008-41e3-ad4b-abc4736272e9
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/58047b08-8008-41e3-ad4b-abc4736272e9.pid.haproxy
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 58047b08-8008-41e3-ad4b-abc4736272e9
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:46:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:46:54.612 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58047b08-8008-41e3-ad4b-abc4736272e9', 'env', 'PROCESS_TAG=haproxy-58047b08-8008-41e3-ad4b-abc4736272e9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58047b08-8008-41e3-ad4b-abc4736272e9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:46:54 compute-2 nova_compute[226829]: 2026-01-31 08:46:54.814 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769849214.8134205, c6877078-3a1f-48f2-bc51-0daaa570d671 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:46:54 compute-2 nova_compute[226829]: 2026-01-31 08:46:54.814 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] VM Started (Lifecycle Event)
Jan 31 08:46:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:46:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:54.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:46:54 compute-2 podman[319652]: 2026-01-31 08:46:54.956742466 +0000 UTC m=+0.059200870 container create cd3e1a645ece16e30c1714d30fa5f09087a7c57a6755fa43ba5b5521b0e0f6f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58047b08-8008-41e3-ad4b-abc4736272e9, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 08:46:54 compute-2 nova_compute[226829]: 2026-01-31 08:46:54.994 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:46:54 compute-2 nova_compute[226829]: 2026-01-31 08:46:54.999 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769849214.8139024, c6877078-3a1f-48f2-bc51-0daaa570d671 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:46:54 compute-2 nova_compute[226829]: 2026-01-31 08:46:54.999 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] VM Paused (Lifecycle Event)
Jan 31 08:46:55 compute-2 systemd[1]: Started libpod-conmon-cd3e1a645ece16e30c1714d30fa5f09087a7c57a6755fa43ba5b5521b0e0f6f1.scope.
Jan 31 08:46:55 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:46:55 compute-2 podman[319652]: 2026-01-31 08:46:54.916684608 +0000 UTC m=+0.019143042 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:46:55 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b260bf47660c9842ba4410e43680901c6afcf8b823507cd842342d16a772c76/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:55 compute-2 podman[319652]: 2026-01-31 08:46:55.031370325 +0000 UTC m=+0.133828769 container init cd3e1a645ece16e30c1714d30fa5f09087a7c57a6755fa43ba5b5521b0e0f6f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58047b08-8008-41e3-ad4b-abc4736272e9, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:46:55 compute-2 podman[319652]: 2026-01-31 08:46:55.036220037 +0000 UTC m=+0.138678451 container start cd3e1a645ece16e30c1714d30fa5f09087a7c57a6755fa43ba5b5521b0e0f6f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58047b08-8008-41e3-ad4b-abc4736272e9, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:46:55 compute-2 neutron-haproxy-ovnmeta-58047b08-8008-41e3-ad4b-abc4736272e9[319667]: [NOTICE]   (319671) : New worker (319673) forked
Jan 31 08:46:55 compute-2 neutron-haproxy-ovnmeta-58047b08-8008-41e3-ad4b-abc4736272e9[319667]: [NOTICE]   (319671) : Loading success.
Jan 31 08:46:55 compute-2 nova_compute[226829]: 2026-01-31 08:46:55.153 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:46:55 compute-2 nova_compute[226829]: 2026-01-31 08:46:55.158 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:46:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:55 compute-2 nova_compute[226829]: 2026-01-31 08:46:55.518 226833 DEBUG nova.compute.manager [req-18f9d641-179e-4292-89e8-7e1fa584f401 req-0390086c-a6ff-4890-bdc7-0d82aa8fbf1f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Received event network-vif-plugged-0b2ec72e-9b76-4407-ba53-e8aea0a2334d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:46:55 compute-2 nova_compute[226829]: 2026-01-31 08:46:55.518 226833 DEBUG oslo_concurrency.lockutils [req-18f9d641-179e-4292-89e8-7e1fa584f401 req-0390086c-a6ff-4890-bdc7-0d82aa8fbf1f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c6877078-3a1f-48f2-bc51-0daaa570d671-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:46:55 compute-2 nova_compute[226829]: 2026-01-31 08:46:55.519 226833 DEBUG oslo_concurrency.lockutils [req-18f9d641-179e-4292-89e8-7e1fa584f401 req-0390086c-a6ff-4890-bdc7-0d82aa8fbf1f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c6877078-3a1f-48f2-bc51-0daaa570d671-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:46:55 compute-2 nova_compute[226829]: 2026-01-31 08:46:55.519 226833 DEBUG oslo_concurrency.lockutils [req-18f9d641-179e-4292-89e8-7e1fa584f401 req-0390086c-a6ff-4890-bdc7-0d82aa8fbf1f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c6877078-3a1f-48f2-bc51-0daaa570d671-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:46:55 compute-2 nova_compute[226829]: 2026-01-31 08:46:55.519 226833 DEBUG nova.compute.manager [req-18f9d641-179e-4292-89e8-7e1fa584f401 req-0390086c-a6ff-4890-bdc7-0d82aa8fbf1f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Processing event network-vif-plugged-0b2ec72e-9b76-4407-ba53-e8aea0a2334d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:46:55 compute-2 nova_compute[226829]: 2026-01-31 08:46:55.520 226833 DEBUG nova.compute.manager [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:46:55 compute-2 nova_compute[226829]: 2026-01-31 08:46:55.524 226833 DEBUG nova.virt.libvirt.driver [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:46:55 compute-2 nova_compute[226829]: 2026-01-31 08:46:55.526 226833 INFO nova.virt.libvirt.driver [-] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Instance spawned successfully.
Jan 31 08:46:55 compute-2 nova_compute[226829]: 2026-01-31 08:46:55.527 226833 DEBUG nova.virt.libvirt.driver [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:46:55 compute-2 nova_compute[226829]: 2026-01-31 08:46:55.551 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:46:55 compute-2 nova_compute[226829]: 2026-01-31 08:46:55.551 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769849215.523337, c6877078-3a1f-48f2-bc51-0daaa570d671 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:46:55 compute-2 nova_compute[226829]: 2026-01-31 08:46:55.552 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] VM Resumed (Lifecycle Event)
Jan 31 08:46:56 compute-2 nova_compute[226829]: 2026-01-31 08:46:56.007 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:46:56 compute-2 nova_compute[226829]: 2026-01-31 08:46:56.011 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:46:56 compute-2 nova_compute[226829]: 2026-01-31 08:46:56.019 226833 DEBUG nova.virt.libvirt.driver [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:46:56 compute-2 nova_compute[226829]: 2026-01-31 08:46:56.020 226833 DEBUG nova.virt.libvirt.driver [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:46:56 compute-2 nova_compute[226829]: 2026-01-31 08:46:56.021 226833 DEBUG nova.virt.libvirt.driver [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:46:56 compute-2 nova_compute[226829]: 2026-01-31 08:46:56.021 226833 DEBUG nova.virt.libvirt.driver [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:46:56 compute-2 nova_compute[226829]: 2026-01-31 08:46:56.022 226833 DEBUG nova.virt.libvirt.driver [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:46:56 compute-2 nova_compute[226829]: 2026-01-31 08:46:56.022 226833 DEBUG nova.virt.libvirt.driver [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:46:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:56.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:56 compute-2 ceph-mon[77282]: pgmap v3423: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 12 KiB/s wr, 0 op/s
Jan 31 08:46:56 compute-2 nova_compute[226829]: 2026-01-31 08:46:56.215 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:46:56 compute-2 nova_compute[226829]: 2026-01-31 08:46:56.320 226833 INFO nova.compute.manager [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Took 26.55 seconds to spawn the instance on the hypervisor.
Jan 31 08:46:56 compute-2 nova_compute[226829]: 2026-01-31 08:46:56.321 226833 DEBUG nova.compute.manager [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:46:56 compute-2 nova_compute[226829]: 2026-01-31 08:46:56.467 226833 INFO nova.compute.manager [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Took 35.81 seconds to build instance.
Jan 31 08:46:56 compute-2 nova_compute[226829]: 2026-01-31 08:46:56.633 226833 DEBUG oslo_concurrency.lockutils [None req-01e8cfd8-86ba-4ad6-b56a-edde61351a0e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "c6877078-3a1f-48f2-bc51-0daaa570d671" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 37.508s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:46:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:56.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:57 compute-2 nova_compute[226829]: 2026-01-31 08:46:57.726 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:57 compute-2 nova_compute[226829]: 2026-01-31 08:46:57.959 226833 DEBUG nova.compute.manager [req-36ddc473-429f-4044-a203-f4275656c0e4 req-32b3532e-b171-47ab-8738-2f07c9912181 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Received event network-vif-plugged-0b2ec72e-9b76-4407-ba53-e8aea0a2334d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:46:57 compute-2 nova_compute[226829]: 2026-01-31 08:46:57.959 226833 DEBUG oslo_concurrency.lockutils [req-36ddc473-429f-4044-a203-f4275656c0e4 req-32b3532e-b171-47ab-8738-2f07c9912181 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c6877078-3a1f-48f2-bc51-0daaa570d671-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:46:57 compute-2 nova_compute[226829]: 2026-01-31 08:46:57.960 226833 DEBUG oslo_concurrency.lockutils [req-36ddc473-429f-4044-a203-f4275656c0e4 req-32b3532e-b171-47ab-8738-2f07c9912181 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c6877078-3a1f-48f2-bc51-0daaa570d671-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:46:57 compute-2 nova_compute[226829]: 2026-01-31 08:46:57.960 226833 DEBUG oslo_concurrency.lockutils [req-36ddc473-429f-4044-a203-f4275656c0e4 req-32b3532e-b171-47ab-8738-2f07c9912181 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c6877078-3a1f-48f2-bc51-0daaa570d671-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:46:57 compute-2 nova_compute[226829]: 2026-01-31 08:46:57.960 226833 DEBUG nova.compute.manager [req-36ddc473-429f-4044-a203-f4275656c0e4 req-32b3532e-b171-47ab-8738-2f07c9912181 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] No waiting events found dispatching network-vif-plugged-0b2ec72e-9b76-4407-ba53-e8aea0a2334d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:46:57 compute-2 nova_compute[226829]: 2026-01-31 08:46:57.960 226833 WARNING nova.compute.manager [req-36ddc473-429f-4044-a203-f4275656c0e4 req-32b3532e-b171-47ab-8738-2f07c9912181 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Received unexpected event network-vif-plugged-0b2ec72e-9b76-4407-ba53-e8aea0a2334d for instance with vm_state active and task_state None.
Jan 31 08:46:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:46:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:58.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:46:58 compute-2 nova_compute[226829]: 2026-01-31 08:46:58.114 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:58 compute-2 ceph-mon[77282]: pgmap v3424: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 KiB/s rd, 12 KiB/s wr, 4 op/s
Jan 31 08:46:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:46:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:46:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:58.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:46:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:46:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Cumulative writes: 16K writes, 81K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s
                                           Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.16 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1545 writes, 7560 keys, 1545 commit groups, 1.0 writes per commit group, ingest: 15.87 MB, 0.03 MB/s
                                           Interval WAL: 1545 writes, 1545 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     82.7      1.22              0.30        52    0.023       0      0       0.0       0.0
                                             L6      1/0   14.15 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.2    153.3    131.5      3.96              1.46        51    0.078    382K    27K       0.0       0.0
                                            Sum      1/0   14.15 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.2    117.2    120.0      5.18              1.75       103    0.050    382K    27K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.4    123.7    130.0      0.66              0.22        12    0.055     62K   3118       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0    153.3    131.5      3.96              1.46        51    0.078    382K    27K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     82.8      1.22              0.30        51    0.024       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.098, interval 0.013
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.61 GB write, 0.10 MB/s write, 0.59 GB read, 0.10 MB/s read, 5.2 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559ef3d1f1f0#2 capacity: 304.00 MB usage: 65.21 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000523 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3771,62.45 MB,20.5425%) FilterBlock(103,1.03 MB,0.338339%) IndexBlock(103,1.73 MB,0.568284%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 08:46:59 compute-2 nova_compute[226829]: 2026-01-31 08:46:59.619 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:59 compute-2 sudo[319685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:46:59 compute-2 sudo[319685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:59 compute-2 sudo[319685]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:59 compute-2 sudo[319716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:46:59 compute-2 sudo[319716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:59 compute-2 sudo[319716]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:59 compute-2 podman[319709]: 2026-01-31 08:46:59.941981471 +0000 UTC m=+0.054292607 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 08:47:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:00.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:00 compute-2 ceph-mon[77282]: pgmap v3425: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 797 KiB/s rd, 12 KiB/s wr, 34 op/s
Jan 31 08:47:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:00 compute-2 nova_compute[226829]: 2026-01-31 08:47:00.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:00 compute-2 nova_compute[226829]: 2026-01-31 08:47:00.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:00.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:01 compute-2 nova_compute[226829]: 2026-01-31 08:47:01.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:01 compute-2 nova_compute[226829]: 2026-01-31 08:47:01.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:47:01 compute-2 nova_compute[226829]: 2026-01-31 08:47:01.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:47:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:02.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:02 compute-2 ceph-mon[77282]: pgmap v3426: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 08:47:02 compute-2 nova_compute[226829]: 2026-01-31 08:47:02.730 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:47:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:02.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:47:03 compute-2 nova_compute[226829]: 2026-01-31 08:47:03.116 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:04.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:04 compute-2 ceph-mon[77282]: pgmap v3427: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 08:47:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:47:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:04.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:47:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:06.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:06 compute-2 ceph-mon[77282]: pgmap v3428: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 08:47:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:06.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:47:06.926 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:47:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:47:06.928 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:47:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:47:06.929 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:47:07 compute-2 nova_compute[226829]: 2026-01-31 08:47:07.733 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:08.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:08 compute-2 nova_compute[226829]: 2026-01-31 08:47:08.143 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:08 compute-2 ceph-mon[77282]: pgmap v3429: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 426 B/s wr, 73 op/s
Jan 31 08:47:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3314652597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:47:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:08.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:09 compute-2 nova_compute[226829]: 2026-01-31 08:47:09.099 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-c6877078-3a1f-48f2-bc51-0daaa570d671" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:47:09 compute-2 nova_compute[226829]: 2026-01-31 08:47:09.099 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-c6877078-3a1f-48f2-bc51-0daaa570d671" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:47:09 compute-2 nova_compute[226829]: 2026-01-31 08:47:09.099 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:47:09 compute-2 nova_compute[226829]: 2026-01-31 08:47:09.100 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid c6877078-3a1f-48f2-bc51-0daaa570d671 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:47:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/393711494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:47:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:47:09.317 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=88, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=87) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:47:09 compute-2 nova_compute[226829]: 2026-01-31 08:47:09.318 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:47:09.319 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:47:09 compute-2 ovn_controller[133834]: 2026-01-31T08:47:09Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f6:a1:c8 10.100.0.10
Jan 31 08:47:09 compute-2 ovn_controller[133834]: 2026-01-31T08:47:09Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f6:a1:c8 10.100.0.10
Jan 31 08:47:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:10.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:10 compute-2 ceph-mon[77282]: pgmap v3430: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 602 KiB/s wr, 77 op/s
Jan 31 08:47:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/534509877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:47:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2264906209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:47:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:10.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:12.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:12 compute-2 ceph-mon[77282]: pgmap v3431: 305 pgs: 305 active+clean; 198 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Jan 31 08:47:12 compute-2 nova_compute[226829]: 2026-01-31 08:47:12.736 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:47:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:12.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:47:13 compute-2 nova_compute[226829]: 2026-01-31 08:47:13.145 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:13 compute-2 nova_compute[226829]: 2026-01-31 08:47:13.407 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Updating instance_info_cache with network_info: [{"id": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "address": "fa:16:3e:f6:a1:c8", "network": {"id": "58047b08-8008-41e3-ad4b-abc4736272e9", "bridge": "br-int", "label": "tempest-network-smoke--1499520732", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2ec72e-9b", "ovs_interfaceid": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:47:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:14.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:14 compute-2 nova_compute[226829]: 2026-01-31 08:47:14.243 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-c6877078-3a1f-48f2-bc51-0daaa570d671" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:47:14 compute-2 nova_compute[226829]: 2026-01-31 08:47:14.244 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:47:14 compute-2 nova_compute[226829]: 2026-01-31 08:47:14.244 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:14 compute-2 nova_compute[226829]: 2026-01-31 08:47:14.245 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:14 compute-2 nova_compute[226829]: 2026-01-31 08:47:14.245 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:14 compute-2 nova_compute[226829]: 2026-01-31 08:47:14.245 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:14 compute-2 ceph-mon[77282]: pgmap v3432: 305 pgs: 305 active+clean; 198 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 319 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 31 08:47:14 compute-2 nova_compute[226829]: 2026-01-31 08:47:14.743 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:47:14 compute-2 nova_compute[226829]: 2026-01-31 08:47:14.744 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:47:14 compute-2 nova_compute[226829]: 2026-01-31 08:47:14.744 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:47:14 compute-2 nova_compute[226829]: 2026-01-31 08:47:14.744 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:47:14 compute-2 nova_compute[226829]: 2026-01-31 08:47:14.745 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:47:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:14.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:47:15 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3370830284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:47:15 compute-2 nova_compute[226829]: 2026-01-31 08:47:15.170 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:47:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3370830284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:47:15 compute-2 nova_compute[226829]: 2026-01-31 08:47:15.428 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000c1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:47:15 compute-2 nova_compute[226829]: 2026-01-31 08:47:15.429 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000c1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:47:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:15 compute-2 nova_compute[226829]: 2026-01-31 08:47:15.593 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:47:15 compute-2 nova_compute[226829]: 2026-01-31 08:47:15.594 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3938MB free_disk=20.942935943603516GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:47:15 compute-2 nova_compute[226829]: 2026-01-31 08:47:15.594 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:47:15 compute-2 nova_compute[226829]: 2026-01-31 08:47:15.595 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:47:15 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-2[77982]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 08:47:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:16.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:16 compute-2 nova_compute[226829]: 2026-01-31 08:47:16.083 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance c6877078-3a1f-48f2-bc51-0daaa570d671 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:47:16 compute-2 nova_compute[226829]: 2026-01-31 08:47:16.083 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:47:16 compute-2 nova_compute[226829]: 2026-01-31 08:47:16.083 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:47:16 compute-2 nova_compute[226829]: 2026-01-31 08:47:16.135 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:47:16 compute-2 nova_compute[226829]: 2026-01-31 08:47:16.164 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:16 compute-2 NetworkManager[48999]: <info>  [1769849236.1653] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/385)
Jan 31 08:47:16 compute-2 NetworkManager[48999]: <info>  [1769849236.1667] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/386)
Jan 31 08:47:16 compute-2 nova_compute[226829]: 2026-01-31 08:47:16.181 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:16 compute-2 ovn_controller[133834]: 2026-01-31T08:47:16Z|00771|binding|INFO|Releasing lport 4d5d68bb-880f-4c95-b90c-bedb6f5e595a from this chassis (sb_readonly=0)
Jan 31 08:47:16 compute-2 nova_compute[226829]: 2026-01-31 08:47:16.187 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:16 compute-2 ceph-mon[77282]: pgmap v3433: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:47:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:47:16 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1983775073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:47:16 compute-2 nova_compute[226829]: 2026-01-31 08:47:16.570 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:47:16 compute-2 nova_compute[226829]: 2026-01-31 08:47:16.575 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:47:16 compute-2 nova_compute[226829]: 2026-01-31 08:47:16.682 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:47:16 compute-2 nova_compute[226829]: 2026-01-31 08:47:16.788 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:47:16 compute-2 nova_compute[226829]: 2026-01-31 08:47:16.788 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:47:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:16.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:47:17.321 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '88'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:47:17 compute-2 nova_compute[226829]: 2026-01-31 08:47:17.335 226833 DEBUG nova.compute.manager [req-146aef9e-834e-4313-9986-a1d30050b72b req-be7fe2ba-501c-4044-a300-bce544d7971d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Received event network-changed-0b2ec72e-9b76-4407-ba53-e8aea0a2334d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:47:17 compute-2 nova_compute[226829]: 2026-01-31 08:47:17.336 226833 DEBUG nova.compute.manager [req-146aef9e-834e-4313-9986-a1d30050b72b req-be7fe2ba-501c-4044-a300-bce544d7971d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Refreshing instance network info cache due to event network-changed-0b2ec72e-9b76-4407-ba53-e8aea0a2334d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:47:17 compute-2 nova_compute[226829]: 2026-01-31 08:47:17.336 226833 DEBUG oslo_concurrency.lockutils [req-146aef9e-834e-4313-9986-a1d30050b72b req-be7fe2ba-501c-4044-a300-bce544d7971d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-c6877078-3a1f-48f2-bc51-0daaa570d671" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:47:17 compute-2 nova_compute[226829]: 2026-01-31 08:47:17.336 226833 DEBUG oslo_concurrency.lockutils [req-146aef9e-834e-4313-9986-a1d30050b72b req-be7fe2ba-501c-4044-a300-bce544d7971d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-c6877078-3a1f-48f2-bc51-0daaa570d671" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:47:17 compute-2 nova_compute[226829]: 2026-01-31 08:47:17.336 226833 DEBUG nova.network.neutron [req-146aef9e-834e-4313-9986-a1d30050b72b req-be7fe2ba-501c-4044-a300-bce544d7971d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Refreshing network info cache for port 0b2ec72e-9b76-4407-ba53-e8aea0a2334d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:47:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1983775073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:47:17 compute-2 ceph-mon[77282]: pgmap v3434: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:47:17 compute-2 nova_compute[226829]: 2026-01-31 08:47:17.738 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:18.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:18 compute-2 nova_compute[226829]: 2026-01-31 08:47:18.147 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:18 compute-2 nova_compute[226829]: 2026-01-31 08:47:18.460 226833 INFO nova.compute.manager [None req-ed0abaaf-5c19-49db-8489-5c6ddb80c9b3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Get console output
Jan 31 08:47:18 compute-2 nova_compute[226829]: 2026-01-31 08:47:18.467 267670 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 31 08:47:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:18.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:19 compute-2 ceph-mon[77282]: pgmap v3435: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:47:19 compute-2 sudo[319812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:47:19 compute-2 sudo[319812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:20 compute-2 sudo[319812]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:20 compute-2 sudo[319837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:47:20 compute-2 sudo[319837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:20 compute-2 sudo[319837]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:47:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:20.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:47:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:20.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:21 compute-2 ceph-mon[77282]: pgmap v3436: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 220 KiB/s rd, 1.6 MiB/s wr, 55 op/s
Jan 31 08:47:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:22.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:22 compute-2 nova_compute[226829]: 2026-01-31 08:47:22.740 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:22.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:23 compute-2 nova_compute[226829]: 2026-01-31 08:47:23.148 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:23 compute-2 podman[319863]: 2026-01-31 08:47:23.210767646 +0000 UTC m=+0.096043141 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:47:23 compute-2 ceph-mon[77282]: pgmap v3437: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.0 KiB/s rd, 17 KiB/s wr, 1 op/s
Jan 31 08:47:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:24.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:24 compute-2 nova_compute[226829]: 2026-01-31 08:47:24.876 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:24.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:25 compute-2 nova_compute[226829]: 2026-01-31 08:47:25.078 226833 DEBUG nova.network.neutron [req-146aef9e-834e-4313-9986-a1d30050b72b req-be7fe2ba-501c-4044-a300-bce544d7971d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Updated VIF entry in instance network info cache for port 0b2ec72e-9b76-4407-ba53-e8aea0a2334d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:47:25 compute-2 nova_compute[226829]: 2026-01-31 08:47:25.079 226833 DEBUG nova.network.neutron [req-146aef9e-834e-4313-9986-a1d30050b72b req-be7fe2ba-501c-4044-a300-bce544d7971d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Updating instance_info_cache with network_info: [{"id": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "address": "fa:16:3e:f6:a1:c8", "network": {"id": "58047b08-8008-41e3-ad4b-abc4736272e9", "bridge": "br-int", "label": "tempest-network-smoke--1499520732", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2ec72e-9b", "ovs_interfaceid": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:47:25 compute-2 nova_compute[226829]: 2026-01-31 08:47:25.120 226833 DEBUG oslo_concurrency.lockutils [req-146aef9e-834e-4313-9986-a1d30050b72b req-be7fe2ba-501c-4044-a300-bce544d7971d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-c6877078-3a1f-48f2-bc51-0daaa570d671" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:47:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:25 compute-2 ceph-mon[77282]: pgmap v3438: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 21 KiB/s wr, 2 op/s
Jan 31 08:47:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:26.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:26.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:27 compute-2 nova_compute[226829]: 2026-01-31 08:47:27.031 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:27 compute-2 nova_compute[226829]: 2026-01-31 08:47:27.032 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:47:27 compute-2 nova_compute[226829]: 2026-01-31 08:47:27.743 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:27 compute-2 ceph-mon[77282]: pgmap v3439: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 KiB/s rd, 4.7 KiB/s wr, 0 op/s
Jan 31 08:47:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:47:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:28.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:47:28 compute-2 nova_compute[226829]: 2026-01-31 08:47:28.150 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:47:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 72K writes, 297K keys, 72K commit groups, 1.0 writes per commit group, ingest: 0.30 GB, 0.05 MB/s
                                           Cumulative WAL: 72K writes, 26K syncs, 2.77 writes per sync, written: 0.30 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4825 writes, 20K keys, 4825 commit groups, 1.0 writes per commit group, ingest: 24.47 MB, 0.04 MB/s
                                           Interval WAL: 4825 writes, 1762 syncs, 2.74 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 08:47:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:28.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:30 compute-2 ceph-mon[77282]: pgmap v3440: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 KiB/s rd, 4.7 KiB/s wr, 0 op/s
Jan 31 08:47:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:47:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:30.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:47:30 compute-2 podman[319895]: 2026-01-31 08:47:30.172353476 +0000 UTC m=+0.058464660 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:47:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:47:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:30.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:47:32 compute-2 ceph-mon[77282]: pgmap v3441: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 KiB/s rd, 7.0 KiB/s wr, 0 op/s
Jan 31 08:47:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:32.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:32 compute-2 nova_compute[226829]: 2026-01-31 08:47:32.746 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:32.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:33 compute-2 nova_compute[226829]: 2026-01-31 08:47:33.153 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:34 compute-2 ceph-mon[77282]: pgmap v3442: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 7.0 KiB/s wr, 0 op/s
Jan 31 08:47:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:34.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:34.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:35 compute-2 nova_compute[226829]: 2026-01-31 08:47:35.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:36.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:36 compute-2 ceph-mon[77282]: pgmap v3443: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 7.0 KiB/s wr, 0 op/s
Jan 31 08:47:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:36.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:36 compute-2 sudo[319918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:47:36 compute-2 sudo[319918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:36 compute-2 sudo[319918]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:36 compute-2 sudo[319943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:47:36 compute-2 sudo[319943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:36 compute-2 sudo[319943]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:37 compute-2 sudo[319968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:47:37 compute-2 sudo[319968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:37 compute-2 sudo[319968]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:37 compute-2 sudo[319993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:47:37 compute-2 sudo[319993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:37 compute-2 sudo[319993]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:37 compute-2 nova_compute[226829]: 2026-01-31 08:47:37.750 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:38.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:38 compute-2 nova_compute[226829]: 2026-01-31 08:47:38.153 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:38.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:38 compute-2 ceph-mon[77282]: pgmap v3444: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 31 08:47:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:47:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:47:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:47:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:47:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:47:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:47:40 compute-2 ceph-mon[77282]: pgmap v3445: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 31 08:47:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:47:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:40.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:47:40 compute-2 sudo[320053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:47:40 compute-2 sudo[320053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:40 compute-2 sudo[320053]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:40 compute-2 sudo[320078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:47:40 compute-2 sudo[320078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:40 compute-2 sudo[320078]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:40.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:41 compute-2 nova_compute[226829]: 2026-01-31 08:47:41.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:41 compute-2 nova_compute[226829]: 2026-01-31 08:47:41.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:47:41 compute-2 nova_compute[226829]: 2026-01-31 08:47:41.517 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:47:41 compute-2 nova_compute[226829]: 2026-01-31 08:47:41.776 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:41 compute-2 nova_compute[226829]: 2026-01-31 08:47:41.900 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Triggering sync for uuid c6877078-3a1f-48f2-bc51-0daaa570d671 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 08:47:41 compute-2 nova_compute[226829]: 2026-01-31 08:47:41.901 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "c6877078-3a1f-48f2-bc51-0daaa570d671" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:47:41 compute-2 nova_compute[226829]: 2026-01-31 08:47:41.901 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "c6877078-3a1f-48f2-bc51-0daaa570d671" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:47:41 compute-2 nova_compute[226829]: 2026-01-31 08:47:41.965 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "c6877078-3a1f-48f2-bc51-0daaa570d671" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:47:42 compute-2 ceph-mon[77282]: pgmap v3446: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 4.1 KiB/s wr, 0 op/s
Jan 31 08:47:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:42.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:42 compute-2 nova_compute[226829]: 2026-01-31 08:47:42.752 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:42.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:43 compute-2 nova_compute[226829]: 2026-01-31 08:47:43.155 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:44 compute-2 ceph-mon[77282]: pgmap v3447: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 1.7 KiB/s wr, 0 op/s
Jan 31 08:47:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:44.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:44 compute-2 nova_compute[226829]: 2026-01-31 08:47:44.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:44 compute-2 nova_compute[226829]: 2026-01-31 08:47:44.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:47:44 compute-2 sudo[320105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:47:44 compute-2 sudo[320105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:44 compute-2 sudo[320105]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:44 compute-2 sudo[320130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:47:44 compute-2 sudo[320130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:44 compute-2 sudo[320130]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:47:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:44.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:47:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:47:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:47:45 compute-2 ceph-mon[77282]: pgmap v3448: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 2.4 KiB/s wr, 0 op/s
Jan 31 08:47:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:46.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/480750745' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #166. Immutable memtables: 0.
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:47:46.579680) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 166
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849266579740, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 746, "num_deletes": 251, "total_data_size": 1375857, "memory_usage": 1403368, "flush_reason": "Manual Compaction"}
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #167: started
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849266587925, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 167, "file_size": 907419, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 81521, "largest_seqno": 82261, "table_properties": {"data_size": 903809, "index_size": 1453, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8242, "raw_average_key_size": 19, "raw_value_size": 896596, "raw_average_value_size": 2119, "num_data_blocks": 64, "num_entries": 423, "num_filter_entries": 423, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849215, "oldest_key_time": 1769849215, "file_creation_time": 1769849266, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 8289 microseconds, and 3647 cpu microseconds.
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:47:46.587966) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #167: 907419 bytes OK
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:47:46.587987) [db/memtable_list.cc:519] [default] Level-0 commit table #167 started
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:47:46.590316) [db/memtable_list.cc:722] [default] Level-0 commit table #167: memtable #1 done
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:47:46.590330) EVENT_LOG_v1 {"time_micros": 1769849266590326, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:47:46.590347) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 1371932, prev total WAL file size 1371932, number of live WAL files 2.
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000163.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:47:46.590718) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [167(886KB)], [165(14MB)]
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849266590779, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [167], "files_L6": [165], "score": -1, "input_data_size": 15747853, "oldest_snapshot_seqno": -1}
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #168: 10216 keys, 13742864 bytes, temperature: kUnknown
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849266660100, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 168, "file_size": 13742864, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13676475, "index_size": 39716, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25605, "raw_key_size": 270352, "raw_average_key_size": 26, "raw_value_size": 13497147, "raw_average_value_size": 1321, "num_data_blocks": 1514, "num_entries": 10216, "num_filter_entries": 10216, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769849266, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 168, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:47:46.660354) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 13742864 bytes
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:47:46.662087) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 226.9 rd, 198.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 14.2 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(32.5) write-amplify(15.1) OK, records in: 10731, records dropped: 515 output_compression: NoCompression
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:47:46.662104) EVENT_LOG_v1 {"time_micros": 1769849266662096, "job": 106, "event": "compaction_finished", "compaction_time_micros": 69417, "compaction_time_cpu_micros": 22828, "output_level": 6, "num_output_files": 1, "total_output_size": 13742864, "num_input_records": 10731, "num_output_records": 10216, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849266662302, "job": 106, "event": "table_file_deletion", "file_number": 167}
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000165.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849266663414, "job": 106, "event": "table_file_deletion", "file_number": 165}
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:47:46.590626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:47:46.663530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:47:46.663537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:47:46.663539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:47:46.663541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:47:46 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:47:46.663543) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:47:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:46.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:47 compute-2 ceph-mon[77282]: pgmap v3449: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 2.4 KiB/s wr, 0 op/s
Jan 31 08:47:47 compute-2 nova_compute[226829]: 2026-01-31 08:47:47.756 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:48.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:48 compute-2 nova_compute[226829]: 2026-01-31 08:47:48.187 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:48.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:50 compute-2 ceph-mon[77282]: pgmap v3450: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 2.4 KiB/s wr, 0 op/s
Jan 31 08:47:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:50.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:50.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:52 compute-2 ceph-mon[77282]: pgmap v3451: 305 pgs: 305 active+clean; 221 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 8.1 KiB/s rd, 674 KiB/s wr, 15 op/s
Jan 31 08:47:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:52.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:52 compute-2 nova_compute[226829]: 2026-01-31 08:47:52.760 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:52.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:53 compute-2 nova_compute[226829]: 2026-01-31 08:47:53.189 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:47:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/182738076' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:47:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:47:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/182738076' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:47:53 compute-2 nova_compute[226829]: 2026-01-31 08:47:53.733 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:54.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:54 compute-2 podman[320160]: 2026-01-31 08:47:54.187723043 +0000 UTC m=+0.068957075 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:47:54 compute-2 ceph-mon[77282]: pgmap v3452: 305 pgs: 305 active+clean; 221 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 8.1 KiB/s rd, 673 KiB/s wr, 14 op/s
Jan 31 08:47:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/182738076' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:47:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/182738076' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:47:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:54.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:55 compute-2 nova_compute[226829]: 2026-01-31 08:47:55.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:56.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:56 compute-2 ceph-mon[77282]: pgmap v3453: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:47:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:56.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:57 compute-2 ceph-mon[77282]: pgmap v3454: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:47:57 compute-2 nova_compute[226829]: 2026-01-31 08:47:57.764 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:58.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:58 compute-2 nova_compute[226829]: 2026-01-31 08:47:58.192 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:47:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:58.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:00.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:00 compute-2 sudo[320189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:48:00 compute-2 sudo[320189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:00 compute-2 sudo[320189]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:00 compute-2 sudo[320220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:48:00 compute-2 sudo[320220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:00 compute-2 sudo[320220]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:00 compute-2 podman[320213]: 2026-01-31 08:48:00.335840033 +0000 UTC m=+0.053327520 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:48:00 compute-2 ceph-mon[77282]: pgmap v3455: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:48:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:00.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:01 compute-2 ceph-mon[77282]: pgmap v3456: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:48:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:02.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:02 compute-2 nova_compute[226829]: 2026-01-31 08:48:02.766 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1146695923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:48:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:02.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:03 compute-2 nova_compute[226829]: 2026-01-31 08:48:03.194 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:03 compute-2 nova_compute[226829]: 2026-01-31 08:48:03.868 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:03 compute-2 nova_compute[226829]: 2026-01-31 08:48:03.868 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:48:03 compute-2 nova_compute[226829]: 2026-01-31 08:48:03.868 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:48:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:04.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:04 compute-2 ceph-mon[77282]: pgmap v3457: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 9.1 KiB/s rd, 1.1 MiB/s wr, 13 op/s
Jan 31 08:48:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3678183900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:48:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3499755342' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:48:04 compute-2 nova_compute[226829]: 2026-01-31 08:48:04.173 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-c6877078-3a1f-48f2-bc51-0daaa570d671" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:48:04 compute-2 nova_compute[226829]: 2026-01-31 08:48:04.173 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-c6877078-3a1f-48f2-bc51-0daaa570d671" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:48:04 compute-2 nova_compute[226829]: 2026-01-31 08:48:04.173 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:48:04 compute-2 nova_compute[226829]: 2026-01-31 08:48:04.174 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid c6877078-3a1f-48f2-bc51-0daaa570d671 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:48:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:04.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3852161205' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:48:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:06.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:06 compute-2 ceph-mon[77282]: pgmap v3458: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 3.2 MiB/s wr, 40 op/s
Jan 31 08:48:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/280270877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:48:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1441525110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:48:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:48:06.926 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:48:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:48:06.927 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:48:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:48:06.927 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:48:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:48:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:06.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:48:07 compute-2 nova_compute[226829]: 2026-01-31 08:48:07.769 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:07 compute-2 ceph-mon[77282]: pgmap v3459: 305 pgs: 305 active+clean; 326 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 3.2 MiB/s wr, 49 op/s
Jan 31 08:48:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:08.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:08 compute-2 nova_compute[226829]: 2026-01-31 08:48:08.197 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:08 compute-2 nova_compute[226829]: 2026-01-31 08:48:08.421 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Updating instance_info_cache with network_info: [{"id": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "address": "fa:16:3e:f6:a1:c8", "network": {"id": "58047b08-8008-41e3-ad4b-abc4736272e9", "bridge": "br-int", "label": "tempest-network-smoke--1499520732", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2ec72e-9b", "ovs_interfaceid": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:48:08 compute-2 nova_compute[226829]: 2026-01-31 08:48:08.463 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-c6877078-3a1f-48f2-bc51-0daaa570d671" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:48:08 compute-2 nova_compute[226829]: 2026-01-31 08:48:08.463 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:48:08 compute-2 nova_compute[226829]: 2026-01-31 08:48:08.464 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:08 compute-2 nova_compute[226829]: 2026-01-31 08:48:08.464 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:08 compute-2 nova_compute[226829]: 2026-01-31 08:48:08.465 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:08 compute-2 nova_compute[226829]: 2026-01-31 08:48:08.465 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:08 compute-2 nova_compute[226829]: 2026-01-31 08:48:08.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:08.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:10.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:10 compute-2 ceph-mon[77282]: pgmap v3460: 305 pgs: 305 active+clean; 339 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 3.5 MiB/s wr, 50 op/s
Jan 31 08:48:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2553671827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:48:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:10.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3835481311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:48:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1540000744' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:48:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1385660829' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:48:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:48:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:12.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:48:12 compute-2 ceph-mon[77282]: pgmap v3461: 305 pgs: 305 active+clean; 339 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 564 KiB/s rd, 3.6 MiB/s wr, 81 op/s
Jan 31 08:48:12 compute-2 nova_compute[226829]: 2026-01-31 08:48:12.773 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:12.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:13 compute-2 nova_compute[226829]: 2026-01-31 08:48:13.198 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:13 compute-2 ceph-mon[77282]: pgmap v3462: 305 pgs: 305 active+clean; 339 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 564 KiB/s rd, 3.6 MiB/s wr, 81 op/s
Jan 31 08:48:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:14.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:14 compute-2 nova_compute[226829]: 2026-01-31 08:48:14.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:14 compute-2 nova_compute[226829]: 2026-01-31 08:48:14.729 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:48:14 compute-2 nova_compute[226829]: 2026-01-31 08:48:14.729 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:48:14 compute-2 nova_compute[226829]: 2026-01-31 08:48:14.729 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:48:14 compute-2 nova_compute[226829]: 2026-01-31 08:48:14.729 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:48:14 compute-2 nova_compute[226829]: 2026-01-31 08:48:14.730 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:48:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/759972552' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:48:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:14.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:48:15 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2098039222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:48:15 compute-2 nova_compute[226829]: 2026-01-31 08:48:15.199 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:48:15 compute-2 nova_compute[226829]: 2026-01-31 08:48:15.288 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000c1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:48:15 compute-2 nova_compute[226829]: 2026-01-31 08:48:15.288 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000c1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:48:15 compute-2 nova_compute[226829]: 2026-01-31 08:48:15.417 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:48:15 compute-2 nova_compute[226829]: 2026-01-31 08:48:15.418 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3951MB free_disk=20.880279541015625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:48:15 compute-2 nova_compute[226829]: 2026-01-31 08:48:15.418 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:48:15 compute-2 nova_compute[226829]: 2026-01-31 08:48:15.419 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:48:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:15 compute-2 nova_compute[226829]: 2026-01-31 08:48:15.799 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance c6877078-3a1f-48f2-bc51-0daaa570d671 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:48:15 compute-2 nova_compute[226829]: 2026-01-31 08:48:15.799 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:48:15 compute-2 nova_compute[226829]: 2026-01-31 08:48:15.799 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:48:15 compute-2 nova_compute[226829]: 2026-01-31 08:48:15.930 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:48:16 compute-2 ceph-mon[77282]: pgmap v3463: 305 pgs: 305 active+clean; 339 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 126 op/s
Jan 31 08:48:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2196161786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:48:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2098039222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:48:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:16.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:48:16 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2983306795' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:48:16 compute-2 nova_compute[226829]: 2026-01-31 08:48:16.352 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:48:16 compute-2 nova_compute[226829]: 2026-01-31 08:48:16.356 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:48:16 compute-2 nova_compute[226829]: 2026-01-31 08:48:16.801 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:48:16 compute-2 nova_compute[226829]: 2026-01-31 08:48:16.803 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:48:16 compute-2 nova_compute[226829]: 2026-01-31 08:48:16.804 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.385s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:48:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:16.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2983306795' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:48:17 compute-2 nova_compute[226829]: 2026-01-31 08:48:17.776 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:18.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:18 compute-2 nova_compute[226829]: 2026-01-31 08:48:18.200 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:18 compute-2 ceph-mon[77282]: pgmap v3464: 305 pgs: 305 active+clean; 339 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 111 op/s
Jan 31 08:48:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:18.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:48:20.093 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=89, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=88) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:48:20 compute-2 nova_compute[226829]: 2026-01-31 08:48:20.094 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:20 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:48:20.094 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:48:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:20.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:20 compute-2 ceph-mon[77282]: pgmap v3465: 305 pgs: 305 active+clean; 339 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 412 KiB/s wr, 91 op/s
Jan 31 08:48:20 compute-2 sudo[320314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:48:20 compute-2 sudo[320314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:20 compute-2 sudo[320314]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:20 compute-2 sudo[320339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:48:20 compute-2 sudo[320339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:20 compute-2 sudo[320339]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:20.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:21 compute-2 ceph-mon[77282]: pgmap v3466: 305 pgs: 305 active+clean; 339 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 37 KiB/s wr, 178 op/s
Jan 31 08:48:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:48:22.096 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '89'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:48:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:22.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:22 compute-2 nova_compute[226829]: 2026-01-31 08:48:22.779 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:22.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:23 compute-2 nova_compute[226829]: 2026-01-31 08:48:23.203 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:23 compute-2 ceph-mon[77282]: pgmap v3467: 305 pgs: 305 active+clean; 339 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Jan 31 08:48:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:24.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:24.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:25 compute-2 podman[320367]: 2026-01-31 08:48:25.237904831 +0000 UTC m=+0.123749776 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 08:48:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:26 compute-2 ceph-mon[77282]: pgmap v3468: 305 pgs: 305 active+clean; 360 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 1.3 MiB/s wr, 327 op/s
Jan 31 08:48:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:26.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:26.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:27 compute-2 nova_compute[226829]: 2026-01-31 08:48:27.783 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:28 compute-2 ceph-mon[77282]: pgmap v3469: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.3 MiB/s rd, 2.2 MiB/s wr, 350 op/s
Jan 31 08:48:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:28.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:28 compute-2 nova_compute[226829]: 2026-01-31 08:48:28.205 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:28 compute-2 nova_compute[226829]: 2026-01-31 08:48:28.805 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:28 compute-2 nova_compute[226829]: 2026-01-31 08:48:28.805 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:48:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:28.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:30 compute-2 ceph-mon[77282]: pgmap v3470: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 379 op/s
Jan 31 08:48:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:30.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:30.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:31 compute-2 podman[320397]: 2026-01-31 08:48:31.154748635 +0000 UTC m=+0.045279382 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:48:32 compute-2 ceph-mon[77282]: pgmap v3471: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 377 op/s
Jan 31 08:48:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:32.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:32 compute-2 nova_compute[226829]: 2026-01-31 08:48:32.647 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:32 compute-2 nova_compute[226829]: 2026-01-31 08:48:32.785 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:32.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:33 compute-2 nova_compute[226829]: 2026-01-31 08:48:33.207 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:34.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:34 compute-2 ceph-mon[77282]: pgmap v3472: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 289 op/s
Jan 31 08:48:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:48:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:34.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:48:35 compute-2 ceph-mon[77282]: pgmap v3473: 305 pgs: 305 active+clean; 388 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.4 MiB/s wr, 337 op/s
Jan 31 08:48:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:36.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:48:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:36.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:48:37 compute-2 nova_compute[226829]: 2026-01-31 08:48:37.787 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:38 compute-2 ceph-mon[77282]: pgmap v3474: 305 pgs: 305 active+clean; 403 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 600 KiB/s rd, 3.0 MiB/s wr, 170 op/s
Jan 31 08:48:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:38.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:38 compute-2 nova_compute[226829]: 2026-01-31 08:48:38.209 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:48:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:38.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:48:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:40.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:40 compute-2 ceph-mon[77282]: pgmap v3475: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 105 op/s
Jan 31 08:48:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:40 compute-2 sudo[320423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:48:40 compute-2 sudo[320423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:40 compute-2 sudo[320423]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:40 compute-2 sudo[320448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:48:40 compute-2 sudo[320448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:40 compute-2 sudo[320448]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:48:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:40.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:48:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:42.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:42 compute-2 ceph-mon[77282]: pgmap v3476: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 31 08:48:42 compute-2 nova_compute[226829]: 2026-01-31 08:48:42.790 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:48:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:42.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:48:43 compute-2 nova_compute[226829]: 2026-01-31 08:48:43.210 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:48:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:44.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #169. Immutable memtables: 0.
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:48:44.538549) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 169
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849324538619, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 776, "num_deletes": 250, "total_data_size": 1454625, "memory_usage": 1469720, "flush_reason": "Manual Compaction"}
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #170: started
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849324564818, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 170, "file_size": 607509, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82266, "largest_seqno": 83037, "table_properties": {"data_size": 604467, "index_size": 949, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8325, "raw_average_key_size": 20, "raw_value_size": 598045, "raw_average_value_size": 1473, "num_data_blocks": 43, "num_entries": 406, "num_filter_entries": 406, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849267, "oldest_key_time": 1769849267, "file_creation_time": 1769849324, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 26327 microseconds, and 3016 cpu microseconds.
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:48:44.564873) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #170: 607509 bytes OK
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:48:44.564892) [db/memtable_list.cc:519] [default] Level-0 commit table #170 started
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:48:44.594665) [db/memtable_list.cc:722] [default] Level-0 commit table #170: memtable #1 done
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:48:44.594719) EVENT_LOG_v1 {"time_micros": 1769849324594697, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:48:44.594741) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 1450587, prev total WAL file size 1451300, number of live WAL files 2.
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000166.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:48:44.595458) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373630' seq:72057594037927935, type:22 .. '6D6772737461740033303131' seq:0, type:0; will stop at (end)
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [170(593KB)], [168(13MB)]
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849324595554, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [170], "files_L6": [168], "score": -1, "input_data_size": 14350373, "oldest_snapshot_seqno": -1}
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #171: 10136 keys, 10877091 bytes, temperature: kUnknown
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849324714047, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 171, "file_size": 10877091, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10815379, "index_size": 35295, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25349, "raw_key_size": 268893, "raw_average_key_size": 26, "raw_value_size": 10641614, "raw_average_value_size": 1049, "num_data_blocks": 1330, "num_entries": 10136, "num_filter_entries": 10136, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769849324, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 171, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:48:44.714315) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 10877091 bytes
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:48:44.770497) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 121.0 rd, 91.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 13.1 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(41.5) write-amplify(17.9) OK, records in: 10622, records dropped: 486 output_compression: NoCompression
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:48:44.770539) EVENT_LOG_v1 {"time_micros": 1769849324770523, "job": 108, "event": "compaction_finished", "compaction_time_micros": 118570, "compaction_time_cpu_micros": 52874, "output_level": 6, "num_output_files": 1, "total_output_size": 10877091, "num_input_records": 10622, "num_output_records": 10136, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849324770792, "job": 108, "event": "table_file_deletion", "file_number": 170}
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000168.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849324772051, "job": 108, "event": "table_file_deletion", "file_number": 168}
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:48:44.595313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:48:44.772097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:48:44.772102) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:48:44.772103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:48:44.772105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:48:44 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:48:44.772106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:48:44 compute-2 ceph-mon[77282]: pgmap v3477: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 31 08:48:44 compute-2 sudo[320475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:48:44 compute-2 sudo[320475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:44 compute-2 sudo[320475]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:44.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:44 compute-2 sudo[320500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:48:45 compute-2 sudo[320500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:45 compute-2 sudo[320500]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:45 compute-2 sudo[320525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:48:45 compute-2 sudo[320525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:45 compute-2 sudo[320525]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:45 compute-2 sudo[320550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 08:48:45 compute-2 sudo[320550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:45 compute-2 podman[320646]: 2026-01-31 08:48:45.62783917 +0000 UTC m=+0.107722368 container exec 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 08:48:45 compute-2 podman[320646]: 2026-01-31 08:48:45.746386742 +0000 UTC m=+0.226269910 container exec_died 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 08:48:45 compute-2 ceph-mon[77282]: pgmap v3478: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 356 KiB/s rd, 2.3 MiB/s wr, 77 op/s
Jan 31 08:48:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:46.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:46 compute-2 podman[320800]: 2026-01-31 08:48:46.498600337 +0000 UTC m=+0.232585032 container exec f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 08:48:46 compute-2 podman[320800]: 2026-01-31 08:48:46.540364382 +0000 UTC m=+0.274349037 container exec_died f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 08:48:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:46.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:47 compute-2 podman[320864]: 2026-01-31 08:48:47.161082782 +0000 UTC m=+0.380899082 container exec 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, com.redhat.component=keepalived-container, vcs-type=git, io.buildah.version=1.28.2, release=1793)
Jan 31 08:48:47 compute-2 podman[320885]: 2026-01-31 08:48:47.283196102 +0000 UTC m=+0.104028019 container exec_died 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., name=keepalived, architecture=x86_64, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, release=1793, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 31 08:48:47 compute-2 podman[320864]: 2026-01-31 08:48:47.33500708 +0000 UTC m=+0.554823360 container exec_died 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, distribution-scope=public, vcs-type=git, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, architecture=x86_64, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=)
Jan 31 08:48:47 compute-2 sudo[320550]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:47 compute-2 sudo[320898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:48:47 compute-2 sudo[320898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:47 compute-2 sudo[320898]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:47 compute-2 sudo[320923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:48:47 compute-2 sudo[320923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:47 compute-2 sudo[320923]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:47 compute-2 sudo[320948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:48:47 compute-2 sudo[320948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:47 compute-2 sudo[320948]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:47 compute-2 sudo[320973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:48:47 compute-2 sudo[320973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:47 compute-2 nova_compute[226829]: 2026-01-31 08:48:47.792 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:47 compute-2 sudo[320973]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:48 compute-2 ceph-mon[77282]: pgmap v3479: 305 pgs: 305 active+clean; 409 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 156 KiB/s rd, 1.5 MiB/s wr, 36 op/s
Jan 31 08:48:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:48:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:48:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:48:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:48:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:48:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:48:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:48:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:48:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:48 compute-2 nova_compute[226829]: 2026-01-31 08:48:48.212 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:48.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:48.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:50 compute-2 ceph-mon[77282]: pgmap v3480: 305 pgs: 305 active+clean; 409 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 944 KiB/s wr, 26 op/s
Jan 31 08:48:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:50.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:50.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:51 compute-2 nova_compute[226829]: 2026-01-31 08:48:51.642 226833 DEBUG oslo_concurrency.lockutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "0c25d280-5868-4d4d-8665-797d3463ed08" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:48:51 compute-2 nova_compute[226829]: 2026-01-31 08:48:51.643 226833 DEBUG oslo_concurrency.lockutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "0c25d280-5868-4d4d-8665-797d3463ed08" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:48:51 compute-2 nova_compute[226829]: 2026-01-31 08:48:51.919 226833 DEBUG nova.compute.manager [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:48:52 compute-2 ceph-mon[77282]: pgmap v3481: 305 pgs: 305 active+clean; 359 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 48 KiB/s rd, 1.3 MiB/s wr, 46 op/s
Jan 31 08:48:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:52.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:52 compute-2 nova_compute[226829]: 2026-01-31 08:48:52.293 226833 DEBUG oslo_concurrency.lockutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:48:52 compute-2 nova_compute[226829]: 2026-01-31 08:48:52.294 226833 DEBUG oslo_concurrency.lockutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:48:52 compute-2 nova_compute[226829]: 2026-01-31 08:48:52.300 226833 DEBUG nova.virt.hardware [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:48:52 compute-2 nova_compute[226829]: 2026-01-31 08:48:52.301 226833 INFO nova.compute.claims [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:48:52 compute-2 nova_compute[226829]: 2026-01-31 08:48:52.760 226833 DEBUG oslo_concurrency.processutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:48:52 compute-2 nova_compute[226829]: 2026-01-31 08:48:52.794 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:53.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:48:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/57302897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:48:53 compute-2 nova_compute[226829]: 2026-01-31 08:48:53.214 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:53 compute-2 nova_compute[226829]: 2026-01-31 08:48:53.223 226833 DEBUG oslo_concurrency.processutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:48:53 compute-2 nova_compute[226829]: 2026-01-31 08:48:53.228 226833 DEBUG nova.compute.provider_tree [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:48:53 compute-2 nova_compute[226829]: 2026-01-31 08:48:53.538 226833 DEBUG nova.scheduler.client.report [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:48:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:54.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:54 compute-2 nova_compute[226829]: 2026-01-31 08:48:54.232 226833 DEBUG oslo_concurrency.lockutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:48:54 compute-2 nova_compute[226829]: 2026-01-31 08:48:54.233 226833 DEBUG nova.compute.manager [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:48:54 compute-2 nova_compute[226829]: 2026-01-31 08:48:54.452 226833 DEBUG nova.compute.manager [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:48:54 compute-2 nova_compute[226829]: 2026-01-31 08:48:54.452 226833 DEBUG nova.network.neutron [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:48:54 compute-2 ceph-mon[77282]: pgmap v3482: 305 pgs: 305 active+clean; 359 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 44 KiB/s rd, 1.3 MiB/s wr, 41 op/s
Jan 31 08:48:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/57302897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:48:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/769013742' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:48:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/769013742' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:48:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:55.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:55 compute-2 nova_compute[226829]: 2026-01-31 08:48:55.049 226833 INFO nova.virt.libvirt.driver [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:48:55 compute-2 nova_compute[226829]: 2026-01-31 08:48:55.403 226833 DEBUG nova.compute.manager [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:48:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:55 compute-2 nova_compute[226829]: 2026-01-31 08:48:55.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:55 compute-2 sudo[321055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:48:55 compute-2 sudo[321055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:55 compute-2 sudo[321055]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:55 compute-2 sudo[321083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:48:55 compute-2 sudo[321083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:55 compute-2 sudo[321083]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:55 compute-2 podman[321079]: 2026-01-31 08:48:55.736104434 +0000 UTC m=+0.068744318 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, container_name=ovn_controller)
Jan 31 08:48:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2202804351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:48:55 compute-2 ceph-mon[77282]: pgmap v3483: 305 pgs: 305 active+clean; 339 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 1.5 MiB/s wr, 51 op/s
Jan 31 08:48:55 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:48:55 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:48:56 compute-2 nova_compute[226829]: 2026-01-31 08:48:56.147 226833 DEBUG nova.compute.manager [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:48:56 compute-2 nova_compute[226829]: 2026-01-31 08:48:56.148 226833 DEBUG nova.virt.libvirt.driver [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:48:56 compute-2 nova_compute[226829]: 2026-01-31 08:48:56.148 226833 INFO nova.virt.libvirt.driver [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Creating image(s)
Jan 31 08:48:56 compute-2 nova_compute[226829]: 2026-01-31 08:48:56.171 226833 DEBUG nova.storage.rbd_utils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 0c25d280-5868-4d4d-8665-797d3463ed08_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:48:56 compute-2 nova_compute[226829]: 2026-01-31 08:48:56.192 226833 DEBUG nova.storage.rbd_utils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 0c25d280-5868-4d4d-8665-797d3463ed08_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:48:56 compute-2 nova_compute[226829]: 2026-01-31 08:48:56.213 226833 DEBUG nova.storage.rbd_utils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 0c25d280-5868-4d4d-8665-797d3463ed08_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:48:56 compute-2 nova_compute[226829]: 2026-01-31 08:48:56.216 226833 DEBUG oslo_concurrency.processutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:48:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:56.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:56 compute-2 nova_compute[226829]: 2026-01-31 08:48:56.269 226833 DEBUG oslo_concurrency.processutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:48:56 compute-2 nova_compute[226829]: 2026-01-31 08:48:56.270 226833 DEBUG oslo_concurrency.lockutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:48:56 compute-2 nova_compute[226829]: 2026-01-31 08:48:56.271 226833 DEBUG oslo_concurrency.lockutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:48:56 compute-2 nova_compute[226829]: 2026-01-31 08:48:56.271 226833 DEBUG oslo_concurrency.lockutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:48:56 compute-2 nova_compute[226829]: 2026-01-31 08:48:56.445 226833 DEBUG nova.storage.rbd_utils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 0c25d280-5868-4d4d-8665-797d3463ed08_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:48:56 compute-2 nova_compute[226829]: 2026-01-31 08:48:56.449 226833 DEBUG oslo_concurrency.processutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 0c25d280-5868-4d4d-8665-797d3463ed08_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:48:56 compute-2 nova_compute[226829]: 2026-01-31 08:48:56.474 226833 DEBUG nova.policy [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c6968a1ee10e4e3b8651ffe0240a7e46', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ba35ae24dbf3443e8a526dce39c6793b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:48:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:57.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:57 compute-2 nova_compute[226829]: 2026-01-31 08:48:57.600 226833 DEBUG oslo_concurrency.processutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 0c25d280-5868-4d4d-8665-797d3463ed08_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:48:57 compute-2 nova_compute[226829]: 2026-01-31 08:48:57.687 226833 DEBUG nova.storage.rbd_utils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] resizing rbd image 0c25d280-5868-4d4d-8665-797d3463ed08_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:48:57 compute-2 nova_compute[226829]: 2026-01-31 08:48:57.816 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:57 compute-2 nova_compute[226829]: 2026-01-31 08:48:57.821 226833 DEBUG nova.objects.instance [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lazy-loading 'migration_context' on Instance uuid 0c25d280-5868-4d4d-8665-797d3463ed08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:48:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:58.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:58 compute-2 nova_compute[226829]: 2026-01-31 08:48:58.257 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:58 compute-2 nova_compute[226829]: 2026-01-31 08:48:58.305 226833 DEBUG nova.virt.libvirt.driver [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:48:58 compute-2 nova_compute[226829]: 2026-01-31 08:48:58.306 226833 DEBUG nova.virt.libvirt.driver [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Ensure instance console log exists: /var/lib/nova/instances/0c25d280-5868-4d4d-8665-797d3463ed08/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:48:58 compute-2 nova_compute[226829]: 2026-01-31 08:48:58.306 226833 DEBUG oslo_concurrency.lockutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:48:58 compute-2 nova_compute[226829]: 2026-01-31 08:48:58.306 226833 DEBUG oslo_concurrency.lockutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:48:58 compute-2 nova_compute[226829]: 2026-01-31 08:48:58.307 226833 DEBUG oslo_concurrency.lockutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:48:58 compute-2 ceph-mon[77282]: pgmap v3484: 305 pgs: 305 active+clean; 347 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 1.9 MiB/s wr, 52 op/s
Jan 31 08:48:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:48:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:59.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:59 compute-2 nova_compute[226829]: 2026-01-31 08:48:59.967 226833 DEBUG nova.network.neutron [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Successfully created port: 61dfc1b3-64a2-42b8-9ae2-a87bc503acf2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:49:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:00.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:00 compute-2 sudo[321299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:49:00 compute-2 sudo[321299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:00 compute-2 sudo[321299]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:00 compute-2 sudo[321324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:49:00 compute-2 sudo[321324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:00 compute-2 sudo[321324]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:01.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:01 compute-2 ceph-mon[77282]: pgmap v3485: 305 pgs: 305 active+clean; 367 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Jan 31 08:49:01 compute-2 nova_compute[226829]: 2026-01-31 08:49:01.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:02 compute-2 ceph-mon[77282]: pgmap v3486: 305 pgs: 305 active+clean; 333 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 286 KiB/s rd, 3.0 MiB/s wr, 130 op/s
Jan 31 08:49:02 compute-2 podman[321350]: 2026-01-31 08:49:02.177773054 +0000 UTC m=+0.068581226 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:49:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:02.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:02 compute-2 nova_compute[226829]: 2026-01-31 08:49:02.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:02 compute-2 nova_compute[226829]: 2026-01-31 08:49:02.647 226833 DEBUG nova.network.neutron [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Successfully updated port: 61dfc1b3-64a2-42b8-9ae2-a87bc503acf2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:49:02 compute-2 nova_compute[226829]: 2026-01-31 08:49:02.862 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:49:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:03.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:49:03 compute-2 nova_compute[226829]: 2026-01-31 08:49:03.259 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:03 compute-2 nova_compute[226829]: 2026-01-31 08:49:03.484 226833 DEBUG oslo_concurrency.lockutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "refresh_cache-0c25d280-5868-4d4d-8665-797d3463ed08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:49:03 compute-2 nova_compute[226829]: 2026-01-31 08:49:03.484 226833 DEBUG oslo_concurrency.lockutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquired lock "refresh_cache-0c25d280-5868-4d4d-8665-797d3463ed08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:49:03 compute-2 nova_compute[226829]: 2026-01-31 08:49:03.484 226833 DEBUG nova.network.neutron [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:49:03 compute-2 nova_compute[226829]: 2026-01-31 08:49:03.572 226833 DEBUG nova.compute.manager [req-cc71da82-07f9-4286-9a81-1b6016f3655c req-8bead083-5f45-4866-bef0-ba5119800ebb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Received event network-changed-61dfc1b3-64a2-42b8-9ae2-a87bc503acf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:49:03 compute-2 nova_compute[226829]: 2026-01-31 08:49:03.573 226833 DEBUG nova.compute.manager [req-cc71da82-07f9-4286-9a81-1b6016f3655c req-8bead083-5f45-4866-bef0-ba5119800ebb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Refreshing instance network info cache due to event network-changed-61dfc1b3-64a2-42b8-9ae2-a87bc503acf2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:49:03 compute-2 nova_compute[226829]: 2026-01-31 08:49:03.573 226833 DEBUG oslo_concurrency.lockutils [req-cc71da82-07f9-4286-9a81-1b6016f3655c req-8bead083-5f45-4866-bef0-ba5119800ebb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-0c25d280-5868-4d4d-8665-797d3463ed08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:49:04 compute-2 ceph-mon[77282]: pgmap v3487: 305 pgs: 305 active+clean; 333 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 272 KiB/s rd, 2.6 MiB/s wr, 107 op/s
Jan 31 08:49:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:04.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:04 compute-2 nova_compute[226829]: 2026-01-31 08:49:04.455 226833 DEBUG nova.network.neutron [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:49:04 compute-2 nova_compute[226829]: 2026-01-31 08:49:04.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:04 compute-2 nova_compute[226829]: 2026-01-31 08:49:04.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:49:04 compute-2 nova_compute[226829]: 2026-01-31 08:49:04.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:49:04 compute-2 nova_compute[226829]: 2026-01-31 08:49:04.698 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 08:49:04 compute-2 nova_compute[226829]: 2026-01-31 08:49:04.925 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-c6877078-3a1f-48f2-bc51-0daaa570d671" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:49:04 compute-2 nova_compute[226829]: 2026-01-31 08:49:04.925 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-c6877078-3a1f-48f2-bc51-0daaa570d671" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:49:04 compute-2 nova_compute[226829]: 2026-01-31 08:49:04.925 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:49:04 compute-2 nova_compute[226829]: 2026-01-31 08:49:04.925 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid c6877078-3a1f-48f2-bc51-0daaa570d671 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:49:04 compute-2 ovn_controller[133834]: 2026-01-31T08:49:04Z|00772|binding|INFO|Releasing lport 4d5d68bb-880f-4c95-b90c-bedb6f5e595a from this chassis (sb_readonly=0)
Jan 31 08:49:05 compute-2 nova_compute[226829]: 2026-01-31 08:49:05.009 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:49:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:05.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:49:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1274297397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:49:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:06 compute-2 ceph-mon[77282]: pgmap v3488: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 272 KiB/s rd, 2.6 MiB/s wr, 107 op/s
Jan 31 08:49:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:49:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:06.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:49:06 compute-2 nova_compute[226829]: 2026-01-31 08:49:06.253 226833 DEBUG nova.network.neutron [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Updating instance_info_cache with network_info: [{"id": "61dfc1b3-64a2-42b8-9ae2-a87bc503acf2", "address": "fa:16:3e:7d:81:96", "network": {"id": "93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234", "bridge": "br-int", "label": "tempest-network-smoke--1993688235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61dfc1b3-64", "ovs_interfaceid": "61dfc1b3-64a2-42b8-9ae2-a87bc503acf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:49:06 compute-2 nova_compute[226829]: 2026-01-31 08:49:06.707 226833 DEBUG oslo_concurrency.lockutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Releasing lock "refresh_cache-0c25d280-5868-4d4d-8665-797d3463ed08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:49:06 compute-2 nova_compute[226829]: 2026-01-31 08:49:06.708 226833 DEBUG nova.compute.manager [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Instance network_info: |[{"id": "61dfc1b3-64a2-42b8-9ae2-a87bc503acf2", "address": "fa:16:3e:7d:81:96", "network": {"id": "93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234", "bridge": "br-int", "label": "tempest-network-smoke--1993688235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61dfc1b3-64", "ovs_interfaceid": "61dfc1b3-64a2-42b8-9ae2-a87bc503acf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:49:06 compute-2 nova_compute[226829]: 2026-01-31 08:49:06.708 226833 DEBUG oslo_concurrency.lockutils [req-cc71da82-07f9-4286-9a81-1b6016f3655c req-8bead083-5f45-4866-bef0-ba5119800ebb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-0c25d280-5868-4d4d-8665-797d3463ed08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:49:06 compute-2 nova_compute[226829]: 2026-01-31 08:49:06.708 226833 DEBUG nova.network.neutron [req-cc71da82-07f9-4286-9a81-1b6016f3655c req-8bead083-5f45-4866-bef0-ba5119800ebb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Refreshing network info cache for port 61dfc1b3-64a2-42b8-9ae2-a87bc503acf2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:49:06 compute-2 nova_compute[226829]: 2026-01-31 08:49:06.711 226833 DEBUG nova.virt.libvirt.driver [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Start _get_guest_xml network_info=[{"id": "61dfc1b3-64a2-42b8-9ae2-a87bc503acf2", "address": "fa:16:3e:7d:81:96", "network": {"id": "93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234", "bridge": "br-int", "label": "tempest-network-smoke--1993688235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61dfc1b3-64", "ovs_interfaceid": "61dfc1b3-64a2-42b8-9ae2-a87bc503acf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:49:06 compute-2 nova_compute[226829]: 2026-01-31 08:49:06.715 226833 WARNING nova.virt.libvirt.driver [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:49:06 compute-2 nova_compute[226829]: 2026-01-31 08:49:06.723 226833 DEBUG nova.virt.libvirt.host [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:49:06 compute-2 nova_compute[226829]: 2026-01-31 08:49:06.724 226833 DEBUG nova.virt.libvirt.host [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:49:06 compute-2 nova_compute[226829]: 2026-01-31 08:49:06.727 226833 DEBUG nova.virt.libvirt.host [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:49:06 compute-2 nova_compute[226829]: 2026-01-31 08:49:06.728 226833 DEBUG nova.virt.libvirt.host [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:49:06 compute-2 nova_compute[226829]: 2026-01-31 08:49:06.729 226833 DEBUG nova.virt.libvirt.driver [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:49:06 compute-2 nova_compute[226829]: 2026-01-31 08:49:06.729 226833 DEBUG nova.virt.hardware [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:49:06 compute-2 nova_compute[226829]: 2026-01-31 08:49:06.730 226833 DEBUG nova.virt.hardware [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:49:06 compute-2 nova_compute[226829]: 2026-01-31 08:49:06.730 226833 DEBUG nova.virt.hardware [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:49:06 compute-2 nova_compute[226829]: 2026-01-31 08:49:06.730 226833 DEBUG nova.virt.hardware [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:49:06 compute-2 nova_compute[226829]: 2026-01-31 08:49:06.730 226833 DEBUG nova.virt.hardware [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:49:06 compute-2 nova_compute[226829]: 2026-01-31 08:49:06.730 226833 DEBUG nova.virt.hardware [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:49:06 compute-2 nova_compute[226829]: 2026-01-31 08:49:06.731 226833 DEBUG nova.virt.hardware [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:49:06 compute-2 nova_compute[226829]: 2026-01-31 08:49:06.731 226833 DEBUG nova.virt.hardware [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:49:06 compute-2 nova_compute[226829]: 2026-01-31 08:49:06.732 226833 DEBUG nova.virt.hardware [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:49:06 compute-2 nova_compute[226829]: 2026-01-31 08:49:06.732 226833 DEBUG nova.virt.hardware [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:49:06 compute-2 nova_compute[226829]: 2026-01-31 08:49:06.732 226833 DEBUG nova.virt.hardware [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:49:06 compute-2 nova_compute[226829]: 2026-01-31 08:49:06.735 226833 DEBUG oslo_concurrency.processutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:49:06 compute-2 nova_compute[226829]: 2026-01-31 08:49:06.901 226833 DEBUG nova.compute.manager [req-c90c033d-01bf-4338-beaf-9fb7a57b3c19 req-85e593c2-621a-4ec5-bd67-fa9b09eab74d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Received event network-changed-0b2ec72e-9b76-4407-ba53-e8aea0a2334d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:49:06 compute-2 nova_compute[226829]: 2026-01-31 08:49:06.902 226833 DEBUG nova.compute.manager [req-c90c033d-01bf-4338-beaf-9fb7a57b3c19 req-85e593c2-621a-4ec5-bd67-fa9b09eab74d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Refreshing instance network info cache due to event network-changed-0b2ec72e-9b76-4407-ba53-e8aea0a2334d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:49:06 compute-2 nova_compute[226829]: 2026-01-31 08:49:06.902 226833 DEBUG oslo_concurrency.lockutils [req-c90c033d-01bf-4338-beaf-9fb7a57b3c19 req-85e593c2-621a-4ec5-bd67-fa9b09eab74d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-c6877078-3a1f-48f2-bc51-0daaa570d671" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:49:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:06.927 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:49:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:06.928 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:49:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:06.929 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:49:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:49:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:07.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:49:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:49:07 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3995720161' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.142 226833 DEBUG oslo_concurrency.lockutils [None req-b5ac02e9-5c1c-4590-8a6e-fbff62c8a6d8 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "c6877078-3a1f-48f2-bc51-0daaa570d671" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.143 226833 DEBUG oslo_concurrency.lockutils [None req-b5ac02e9-5c1c-4590-8a6e-fbff62c8a6d8 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "c6877078-3a1f-48f2-bc51-0daaa570d671" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.143 226833 DEBUG oslo_concurrency.lockutils [None req-b5ac02e9-5c1c-4590-8a6e-fbff62c8a6d8 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "c6877078-3a1f-48f2-bc51-0daaa570d671-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.143 226833 DEBUG oslo_concurrency.lockutils [None req-b5ac02e9-5c1c-4590-8a6e-fbff62c8a6d8 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "c6877078-3a1f-48f2-bc51-0daaa570d671-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.144 226833 DEBUG oslo_concurrency.lockutils [None req-b5ac02e9-5c1c-4590-8a6e-fbff62c8a6d8 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "c6877078-3a1f-48f2-bc51-0daaa570d671-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.145 226833 INFO nova.compute.manager [None req-b5ac02e9-5c1c-4590-8a6e-fbff62c8a6d8 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Terminating instance
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.146 226833 DEBUG nova.compute.manager [None req-b5ac02e9-5c1c-4590-8a6e-fbff62c8a6d8 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.152 226833 DEBUG oslo_concurrency.processutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.176 226833 DEBUG nova.storage.rbd_utils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 0c25d280-5868-4d4d-8665-797d3463ed08_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.179 226833 DEBUG oslo_concurrency.processutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:49:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/902395460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:49:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3995720161' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:49:07 compute-2 kernel: tap0b2ec72e-9b (unregistering): left promiscuous mode
Jan 31 08:49:07 compute-2 NetworkManager[48999]: <info>  [1769849347.2265] device (tap0b2ec72e-9b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:49:07 compute-2 ovn_controller[133834]: 2026-01-31T08:49:07Z|00773|binding|INFO|Releasing lport 0b2ec72e-9b76-4407-ba53-e8aea0a2334d from this chassis (sb_readonly=0)
Jan 31 08:49:07 compute-2 ovn_controller[133834]: 2026-01-31T08:49:07Z|00774|binding|INFO|Setting lport 0b2ec72e-9b76-4407-ba53-e8aea0a2334d down in Southbound
Jan 31 08:49:07 compute-2 ovn_controller[133834]: 2026-01-31T08:49:07Z|00775|binding|INFO|Removing iface tap0b2ec72e-9b ovn-installed in OVS
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.258 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.264 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:07 compute-2 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000c1.scope: Deactivated successfully.
Jan 31 08:49:07 compute-2 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000c1.scope: Consumed 17.453s CPU time.
Jan 31 08:49:07 compute-2 systemd-machined[195142]: Machine qemu-88-instance-000000c1 terminated.
Jan 31 08:49:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:07.337 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:a1:c8 10.100.0.10'], port_security=['fa:16:3e:f6:a1:c8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c6877078-3a1f-48f2-bc51-0daaa570d671', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58047b08-8008-41e3-ad4b-abc4736272e9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5babc4b4-11e9-4bf7-82f5-1d703e46f023', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d73b7ae4-5441-462a-860a-df757ef01151, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=0b2ec72e-9b76-4407-ba53-e8aea0a2334d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:49:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:07.338 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 0b2ec72e-9b76-4407-ba53-e8aea0a2334d in datapath 58047b08-8008-41e3-ad4b-abc4736272e9 unbound from our chassis
Jan 31 08:49:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:07.340 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58047b08-8008-41e3-ad4b-abc4736272e9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:49:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:07.342 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b6bfd673-64e8-4c45-9c54-72323a60bf22]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:07.343 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58047b08-8008-41e3-ad4b-abc4736272e9 namespace which is not needed anymore
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.389 226833 INFO nova.virt.libvirt.driver [-] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Instance destroyed successfully.
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.389 226833 DEBUG nova.objects.instance [None req-b5ac02e9-5c1c-4590-8a6e-fbff62c8a6d8 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'resources' on Instance uuid c6877078-3a1f-48f2-bc51-0daaa570d671 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:49:07 compute-2 neutron-haproxy-ovnmeta-58047b08-8008-41e3-ad4b-abc4736272e9[319667]: [NOTICE]   (319671) : haproxy version is 2.8.14-c23fe91
Jan 31 08:49:07 compute-2 neutron-haproxy-ovnmeta-58047b08-8008-41e3-ad4b-abc4736272e9[319667]: [NOTICE]   (319671) : path to executable is /usr/sbin/haproxy
Jan 31 08:49:07 compute-2 neutron-haproxy-ovnmeta-58047b08-8008-41e3-ad4b-abc4736272e9[319667]: [WARNING]  (319671) : Exiting Master process...
Jan 31 08:49:07 compute-2 neutron-haproxy-ovnmeta-58047b08-8008-41e3-ad4b-abc4736272e9[319667]: [WARNING]  (319671) : Exiting Master process...
Jan 31 08:49:07 compute-2 neutron-haproxy-ovnmeta-58047b08-8008-41e3-ad4b-abc4736272e9[319667]: [ALERT]    (319671) : Current worker (319673) exited with code 143 (Terminated)
Jan 31 08:49:07 compute-2 neutron-haproxy-ovnmeta-58047b08-8008-41e3-ad4b-abc4736272e9[319667]: [WARNING]  (319671) : All workers exited. Exiting... (0)
Jan 31 08:49:07 compute-2 systemd[1]: libpod-cd3e1a645ece16e30c1714d30fa5f09087a7c57a6755fa43ba5b5521b0e0f6f1.scope: Deactivated successfully.
Jan 31 08:49:07 compute-2 podman[321466]: 2026-01-31 08:49:07.463209548 +0000 UTC m=+0.040119642 container died cd3e1a645ece16e30c1714d30fa5f09087a7c57a6755fa43ba5b5521b0e0f6f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58047b08-8008-41e3-ad4b-abc4736272e9, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:49:07 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cd3e1a645ece16e30c1714d30fa5f09087a7c57a6755fa43ba5b5521b0e0f6f1-userdata-shm.mount: Deactivated successfully.
Jan 31 08:49:07 compute-2 systemd[1]: var-lib-containers-storage-overlay-4b260bf47660c9842ba4410e43680901c6afcf8b823507cd842342d16a772c76-merged.mount: Deactivated successfully.
Jan 31 08:49:07 compute-2 podman[321466]: 2026-01-31 08:49:07.499554776 +0000 UTC m=+0.076464870 container cleanup cd3e1a645ece16e30c1714d30fa5f09087a7c57a6755fa43ba5b5521b0e0f6f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58047b08-8008-41e3-ad4b-abc4736272e9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Jan 31 08:49:07 compute-2 systemd[1]: libpod-conmon-cd3e1a645ece16e30c1714d30fa5f09087a7c57a6755fa43ba5b5521b0e0f6f1.scope: Deactivated successfully.
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.531 226833 DEBUG nova.virt.libvirt.vif [None req-b5ac02e9-5c1c-4590-8a6e-fbff62c8a6d8 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:46:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-425849670',display_name='tempest-TestNetworkBasicOps-server-425849670',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-425849670',id=193,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKMbC/FI4HjJREMCtTD7CNC5mvZyYtzpey4yRu6XY1sWZXuUm6lpM37jLTr6iQRJ2xBNOeXL6JthwkqGkxYDl24drZQt/0ATmDRDcJHCyK1nn+90+vt3jf49GzGXmtmbrw==',key_name='tempest-TestNetworkBasicOps-1418942212',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:46:56Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-w01idkb6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:46:56Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=c6877078-3a1f-48f2-bc51-0daaa570d671,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "address": "fa:16:3e:f6:a1:c8", "network": {"id": "58047b08-8008-41e3-ad4b-abc4736272e9", "bridge": "br-int", "label": "tempest-network-smoke--1499520732", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2ec72e-9b", "ovs_interfaceid": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.531 226833 DEBUG nova.network.os_vif_util [None req-b5ac02e9-5c1c-4590-8a6e-fbff62c8a6d8 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "address": "fa:16:3e:f6:a1:c8", "network": {"id": "58047b08-8008-41e3-ad4b-abc4736272e9", "bridge": "br-int", "label": "tempest-network-smoke--1499520732", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2ec72e-9b", "ovs_interfaceid": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.532 226833 DEBUG nova.network.os_vif_util [None req-b5ac02e9-5c1c-4590-8a6e-fbff62c8a6d8 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:a1:c8,bridge_name='br-int',has_traffic_filtering=True,id=0b2ec72e-9b76-4407-ba53-e8aea0a2334d,network=Network(58047b08-8008-41e3-ad4b-abc4736272e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b2ec72e-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.532 226833 DEBUG os_vif [None req-b5ac02e9-5c1c-4590-8a6e-fbff62c8a6d8 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:a1:c8,bridge_name='br-int',has_traffic_filtering=True,id=0b2ec72e-9b76-4407-ba53-e8aea0a2334d,network=Network(58047b08-8008-41e3-ad4b-abc4736272e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b2ec72e-9b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.535 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.535 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b2ec72e-9b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.539 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.541 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.545 226833 INFO os_vif [None req-b5ac02e9-5c1c-4590-8a6e-fbff62c8a6d8 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:a1:c8,bridge_name='br-int',has_traffic_filtering=True,id=0b2ec72e-9b76-4407-ba53-e8aea0a2334d,network=Network(58047b08-8008-41e3-ad4b-abc4736272e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b2ec72e-9b')
Jan 31 08:49:07 compute-2 podman[321496]: 2026-01-31 08:49:07.567023669 +0000 UTC m=+0.053395682 container remove cd3e1a645ece16e30c1714d30fa5f09087a7c57a6755fa43ba5b5521b0e0f6f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58047b08-8008-41e3-ad4b-abc4736272e9, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:49:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:07.571 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5d338a62-e3c6-4b66-a0c6-dba13d96b4fb]: (4, ('Sat Jan 31 08:49:07 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-58047b08-8008-41e3-ad4b-abc4736272e9 (cd3e1a645ece16e30c1714d30fa5f09087a7c57a6755fa43ba5b5521b0e0f6f1)\ncd3e1a645ece16e30c1714d30fa5f09087a7c57a6755fa43ba5b5521b0e0f6f1\nSat Jan 31 08:49:07 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-58047b08-8008-41e3-ad4b-abc4736272e9 (cd3e1a645ece16e30c1714d30fa5f09087a7c57a6755fa43ba5b5521b0e0f6f1)\ncd3e1a645ece16e30c1714d30fa5f09087a7c57a6755fa43ba5b5521b0e0f6f1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:07.573 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[81377659-fbd9-4c81-800b-4f0716255e8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:07.574 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58047b08-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:49:07 compute-2 kernel: tap58047b08-80: left promiscuous mode
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.575 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:07.581 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[71f9609c-9a40-45e0-bafa-b58d5fc9dded]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:07.599 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9e4ad9a6-27f0-4022-adc8-2c1ef68604c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:07.600 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f97eff25-fae0-4584-b5fc-7f5f12a69666]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:07.612 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[618e0979-94f6-4393-8cea-d723df7518f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 946694, 'reachable_time': 36843, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321527, 'error': None, 'target': 'ovnmeta-58047b08-8008-41e3-ad4b-abc4736272e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:07 compute-2 systemd[1]: run-netns-ovnmeta\x2d58047b08\x2d8008\x2d41e3\x2dad4b\x2dabc4736272e9.mount: Deactivated successfully.
Jan 31 08:49:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:07.617 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58047b08-8008-41e3-ad4b-abc4736272e9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:49:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:07.617 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[196923ca-1727-438a-9446-03d229f3fab8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:49:07 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/414552168' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.681 226833 DEBUG oslo_concurrency.processutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.683 226833 DEBUG nova.virt.libvirt.vif [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:48:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-gen-1-402573898',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-gen-1-402573898',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1014068786-ge',id=197,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGEVRXCOmyKSdhlo9l0WxIRbGJsQf5I7xnoqbiNS2WMkfr+PHUSU3MjMN6e2reAurOFdIo1aIllo1la3phc66kwPrjNj65wq6jckJq8FKFDC0nYvOhxhwR1EYQGTcrj65g==',key_name='tempest-TestSecurityGroupsBasicOps-681515177',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba35ae24dbf3443e8a526dce39c6793b',ramdisk_id='',reservation_id='r-e83pt6ne',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1014068786',owner_user_name='tempest-TestSecurityGroupsBasicOps-1014068786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:48:55Z,user_data=None,user_id='c6968a1ee10e4e3b8651ffe0240a7e46',uuid=0c25d280-5868-4d4d-8665-797d3463ed08,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "61dfc1b3-64a2-42b8-9ae2-a87bc503acf2", "address": "fa:16:3e:7d:81:96", "network": {"id": "93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234", "bridge": "br-int", "label": "tempest-network-smoke--1993688235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61dfc1b3-64", "ovs_interfaceid": "61dfc1b3-64a2-42b8-9ae2-a87bc503acf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.684 226833 DEBUG nova.network.os_vif_util [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converting VIF {"id": "61dfc1b3-64a2-42b8-9ae2-a87bc503acf2", "address": "fa:16:3e:7d:81:96", "network": {"id": "93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234", "bridge": "br-int", "label": "tempest-network-smoke--1993688235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61dfc1b3-64", "ovs_interfaceid": "61dfc1b3-64a2-42b8-9ae2-a87bc503acf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.685 226833 DEBUG nova.network.os_vif_util [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:81:96,bridge_name='br-int',has_traffic_filtering=True,id=61dfc1b3-64a2-42b8-9ae2-a87bc503acf2,network=Network(93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61dfc1b3-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.687 226833 DEBUG nova.objects.instance [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lazy-loading 'pci_devices' on Instance uuid 0c25d280-5868-4d4d-8665-797d3463ed08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.878 226833 DEBUG nova.virt.libvirt.driver [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:49:07 compute-2 nova_compute[226829]:   <uuid>0c25d280-5868-4d4d-8665-797d3463ed08</uuid>
Jan 31 08:49:07 compute-2 nova_compute[226829]:   <name>instance-000000c5</name>
Jan 31 08:49:07 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:49:07 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:49:07 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-gen-1-402573898</nova:name>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:49:06</nova:creationTime>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:49:07 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:49:07 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:49:07 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:49:07 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:49:07 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:49:07 compute-2 nova_compute[226829]:         <nova:user uuid="c6968a1ee10e4e3b8651ffe0240a7e46">tempest-TestSecurityGroupsBasicOps-1014068786-project-member</nova:user>
Jan 31 08:49:07 compute-2 nova_compute[226829]:         <nova:project uuid="ba35ae24dbf3443e8a526dce39c6793b">tempest-TestSecurityGroupsBasicOps-1014068786</nova:project>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:49:07 compute-2 nova_compute[226829]:         <nova:port uuid="61dfc1b3-64a2-42b8-9ae2-a87bc503acf2">
Jan 31 08:49:07 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:49:07 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:49:07 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <system>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <entry name="serial">0c25d280-5868-4d4d-8665-797d3463ed08</entry>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <entry name="uuid">0c25d280-5868-4d4d-8665-797d3463ed08</entry>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     </system>
Jan 31 08:49:07 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:49:07 compute-2 nova_compute[226829]:   <os>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:   </os>
Jan 31 08:49:07 compute-2 nova_compute[226829]:   <features>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:   </features>
Jan 31 08:49:07 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:49:07 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:49:07 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/0c25d280-5868-4d4d-8665-797d3463ed08_disk">
Jan 31 08:49:07 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       </source>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:49:07 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/0c25d280-5868-4d4d-8665-797d3463ed08_disk.config">
Jan 31 08:49:07 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       </source>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:49:07 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:7d:81:96"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <target dev="tap61dfc1b3-64"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/0c25d280-5868-4d4d-8665-797d3463ed08/console.log" append="off"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <video>
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     </video>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:49:07 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:49:07 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:49:07 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:49:07 compute-2 nova_compute[226829]: </domain>
Jan 31 08:49:07 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.879 226833 DEBUG nova.compute.manager [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Preparing to wait for external event network-vif-plugged-61dfc1b3-64a2-42b8-9ae2-a87bc503acf2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.879 226833 DEBUG oslo_concurrency.lockutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "0c25d280-5868-4d4d-8665-797d3463ed08-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.880 226833 DEBUG oslo_concurrency.lockutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "0c25d280-5868-4d4d-8665-797d3463ed08-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.880 226833 DEBUG oslo_concurrency.lockutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "0c25d280-5868-4d4d-8665-797d3463ed08-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.881 226833 DEBUG nova.virt.libvirt.vif [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:48:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-gen-1-402573898',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-gen-1-402573898',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1014068786-ge',id=197,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGEVRXCOmyKSdhlo9l0WxIRbGJsQf5I7xnoqbiNS2WMkfr+PHUSU3MjMN6e2reAurOFdIo1aIllo1la3phc66kwPrjNj65wq6jckJq8FKFDC0nYvOhxhwR1EYQGTcrj65g==',key_name='tempest-TestSecurityGroupsBasicOps-681515177',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba35ae24dbf3443e8a526dce39c6793b',ramdisk_id='',reservation_id='r-e83pt6ne',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1014068786',owner_user_name='tempest-TestSecurityGroupsBasicOps-1014068786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:48:55Z,user_data=None,user_id='c6968a1ee10e4e3b8651ffe0240a7e46',uuid=0c25d280-5868-4d4d-8665-797d3463ed08,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "61dfc1b3-64a2-42b8-9ae2-a87bc503acf2", "address": "fa:16:3e:7d:81:96", "network": {"id": "93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234", "bridge": "br-int", "label": "tempest-network-smoke--1993688235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61dfc1b3-64", "ovs_interfaceid": "61dfc1b3-64a2-42b8-9ae2-a87bc503acf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.881 226833 DEBUG nova.network.os_vif_util [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converting VIF {"id": "61dfc1b3-64a2-42b8-9ae2-a87bc503acf2", "address": "fa:16:3e:7d:81:96", "network": {"id": "93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234", "bridge": "br-int", "label": "tempest-network-smoke--1993688235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61dfc1b3-64", "ovs_interfaceid": "61dfc1b3-64a2-42b8-9ae2-a87bc503acf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.882 226833 DEBUG nova.network.os_vif_util [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:81:96,bridge_name='br-int',has_traffic_filtering=True,id=61dfc1b3-64a2-42b8-9ae2-a87bc503acf2,network=Network(93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61dfc1b3-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.882 226833 DEBUG os_vif [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:81:96,bridge_name='br-int',has_traffic_filtering=True,id=61dfc1b3-64a2-42b8-9ae2-a87bc503acf2,network=Network(93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61dfc1b3-64') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.883 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.884 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.884 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.886 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.886 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61dfc1b3-64, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.887 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap61dfc1b3-64, col_values=(('external_ids', {'iface-id': '61dfc1b3-64a2-42b8-9ae2-a87bc503acf2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:81:96', 'vm-uuid': '0c25d280-5868-4d4d-8665-797d3463ed08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.888 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:07 compute-2 NetworkManager[48999]: <info>  [1769849347.8893] manager: (tap61dfc1b3-64): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/387)
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.892 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:49:07 compute-2 nova_compute[226829]: 2026-01-31 08:49:07.893 226833 INFO os_vif [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:81:96,bridge_name='br-int',has_traffic_filtering=True,id=61dfc1b3-64a2-42b8-9ae2-a87bc503acf2,network=Network(93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61dfc1b3-64')
Jan 31 08:49:08 compute-2 nova_compute[226829]: 2026-01-31 08:49:08.227 226833 DEBUG nova.virt.libvirt.driver [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:49:08 compute-2 nova_compute[226829]: 2026-01-31 08:49:08.227 226833 DEBUG nova.virt.libvirt.driver [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:49:08 compute-2 nova_compute[226829]: 2026-01-31 08:49:08.227 226833 DEBUG nova.virt.libvirt.driver [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] No VIF found with MAC fa:16:3e:7d:81:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:49:08 compute-2 nova_compute[226829]: 2026-01-31 08:49:08.228 226833 INFO nova.virt.libvirt.driver [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Using config drive
Jan 31 08:49:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:49:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:08.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:49:08 compute-2 nova_compute[226829]: 2026-01-31 08:49:08.257 226833 DEBUG nova.storage.rbd_utils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 0c25d280-5868-4d4d-8665-797d3463ed08_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:49:08 compute-2 nova_compute[226829]: 2026-01-31 08:49:08.261 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:08 compute-2 nova_compute[226829]: 2026-01-31 08:49:08.518 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Updating instance_info_cache with network_info: [{"id": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "address": "fa:16:3e:f6:a1:c8", "network": {"id": "58047b08-8008-41e3-ad4b-abc4736272e9", "bridge": "br-int", "label": "tempest-network-smoke--1499520732", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2ec72e-9b", "ovs_interfaceid": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:49:08 compute-2 ceph-mon[77282]: pgmap v3489: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 265 KiB/s rd, 2.4 MiB/s wr, 96 op/s
Jan 31 08:49:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1381336499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:49:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/414552168' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:49:08 compute-2 nova_compute[226829]: 2026-01-31 08:49:08.781 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-c6877078-3a1f-48f2-bc51-0daaa570d671" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:49:08 compute-2 nova_compute[226829]: 2026-01-31 08:49:08.782 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:49:08 compute-2 nova_compute[226829]: 2026-01-31 08:49:08.783 226833 DEBUG oslo_concurrency.lockutils [req-c90c033d-01bf-4338-beaf-9fb7a57b3c19 req-85e593c2-621a-4ec5-bd67-fa9b09eab74d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-c6877078-3a1f-48f2-bc51-0daaa570d671" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:49:08 compute-2 nova_compute[226829]: 2026-01-31 08:49:08.783 226833 DEBUG nova.network.neutron [req-c90c033d-01bf-4338-beaf-9fb7a57b3c19 req-85e593c2-621a-4ec5-bd67-fa9b09eab74d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Refreshing network info cache for port 0b2ec72e-9b76-4407-ba53-e8aea0a2334d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:49:08 compute-2 nova_compute[226829]: 2026-01-31 08:49:08.786 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:08 compute-2 nova_compute[226829]: 2026-01-31 08:49:08.788 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:49:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:09.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:49:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:10.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:10 compute-2 nova_compute[226829]: 2026-01-31 08:49:10.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:11.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:11 compute-2 nova_compute[226829]: 2026-01-31 08:49:11.626 226833 DEBUG nova.compute.manager [req-8b298308-bf88-4aaf-816f-f944584f7a74 req-b0ac6786-d753-43e6-bb59-8e86dcf97a2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Received event network-vif-unplugged-0b2ec72e-9b76-4407-ba53-e8aea0a2334d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:49:11 compute-2 nova_compute[226829]: 2026-01-31 08:49:11.626 226833 DEBUG oslo_concurrency.lockutils [req-8b298308-bf88-4aaf-816f-f944584f7a74 req-b0ac6786-d753-43e6-bb59-8e86dcf97a2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c6877078-3a1f-48f2-bc51-0daaa570d671-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:49:11 compute-2 nova_compute[226829]: 2026-01-31 08:49:11.626 226833 DEBUG oslo_concurrency.lockutils [req-8b298308-bf88-4aaf-816f-f944584f7a74 req-b0ac6786-d753-43e6-bb59-8e86dcf97a2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c6877078-3a1f-48f2-bc51-0daaa570d671-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:49:11 compute-2 nova_compute[226829]: 2026-01-31 08:49:11.627 226833 DEBUG oslo_concurrency.lockutils [req-8b298308-bf88-4aaf-816f-f944584f7a74 req-b0ac6786-d753-43e6-bb59-8e86dcf97a2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c6877078-3a1f-48f2-bc51-0daaa570d671-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:49:11 compute-2 nova_compute[226829]: 2026-01-31 08:49:11.627 226833 DEBUG nova.compute.manager [req-8b298308-bf88-4aaf-816f-f944584f7a74 req-b0ac6786-d753-43e6-bb59-8e86dcf97a2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] No waiting events found dispatching network-vif-unplugged-0b2ec72e-9b76-4407-ba53-e8aea0a2334d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:49:11 compute-2 nova_compute[226829]: 2026-01-31 08:49:11.627 226833 DEBUG nova.compute.manager [req-8b298308-bf88-4aaf-816f-f944584f7a74 req-b0ac6786-d753-43e6-bb59-8e86dcf97a2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Received event network-vif-unplugged-0b2ec72e-9b76-4407-ba53-e8aea0a2334d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:49:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:12.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:12 compute-2 nova_compute[226829]: 2026-01-31 08:49:12.889 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:12 compute-2 nova_compute[226829]: 2026-01-31 08:49:12.934 226833 INFO nova.virt.libvirt.driver [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Creating config drive at /var/lib/nova/instances/0c25d280-5868-4d4d-8665-797d3463ed08/disk.config
Jan 31 08:49:12 compute-2 nova_compute[226829]: 2026-01-31 08:49:12.937 226833 DEBUG oslo_concurrency.processutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0c25d280-5868-4d4d-8665-797d3463ed08/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpeu14xmdd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:49:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:13.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:13 compute-2 nova_compute[226829]: 2026-01-31 08:49:13.069 226833 DEBUG oslo_concurrency.processutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0c25d280-5868-4d4d-8665-797d3463ed08/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpeu14xmdd" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:49:13 compute-2 nova_compute[226829]: 2026-01-31 08:49:13.097 226833 DEBUG nova.storage.rbd_utils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 0c25d280-5868-4d4d-8665-797d3463ed08_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:49:13 compute-2 nova_compute[226829]: 2026-01-31 08:49:13.101 226833 DEBUG oslo_concurrency.processutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0c25d280-5868-4d4d-8665-797d3463ed08/disk.config 0c25d280-5868-4d4d-8665-797d3463ed08_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:49:13 compute-2 nova_compute[226829]: 2026-01-31 08:49:13.263 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:13 compute-2 nova_compute[226829]: 2026-01-31 08:49:13.346 226833 DEBUG nova.network.neutron [req-cc71da82-07f9-4286-9a81-1b6016f3655c req-8bead083-5f45-4866-bef0-ba5119800ebb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Updated VIF entry in instance network info cache for port 61dfc1b3-64a2-42b8-9ae2-a87bc503acf2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:49:13 compute-2 nova_compute[226829]: 2026-01-31 08:49:13.346 226833 DEBUG nova.network.neutron [req-cc71da82-07f9-4286-9a81-1b6016f3655c req-8bead083-5f45-4866-bef0-ba5119800ebb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Updating instance_info_cache with network_info: [{"id": "61dfc1b3-64a2-42b8-9ae2-a87bc503acf2", "address": "fa:16:3e:7d:81:96", "network": {"id": "93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234", "bridge": "br-int", "label": "tempest-network-smoke--1993688235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61dfc1b3-64", "ovs_interfaceid": "61dfc1b3-64a2-42b8-9ae2-a87bc503acf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:49:13 compute-2 nova_compute[226829]: 2026-01-31 08:49:13.823 226833 DEBUG oslo_concurrency.lockutils [req-cc71da82-07f9-4286-9a81-1b6016f3655c req-8bead083-5f45-4866-bef0-ba5119800ebb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-0c25d280-5868-4d4d-8665-797d3463ed08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:49:14 compute-2 nova_compute[226829]: 2026-01-31 08:49:14.254 226833 DEBUG nova.compute.manager [req-861eae13-82a9-4f90-a15d-826ed8a43a5d req-7fc4811f-2b77-4ab9-a1d5-40f9d0ed9b5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Received event network-vif-plugged-0b2ec72e-9b76-4407-ba53-e8aea0a2334d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:49:14 compute-2 nova_compute[226829]: 2026-01-31 08:49:14.255 226833 DEBUG oslo_concurrency.lockutils [req-861eae13-82a9-4f90-a15d-826ed8a43a5d req-7fc4811f-2b77-4ab9-a1d5-40f9d0ed9b5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c6877078-3a1f-48f2-bc51-0daaa570d671-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:49:14 compute-2 nova_compute[226829]: 2026-01-31 08:49:14.255 226833 DEBUG oslo_concurrency.lockutils [req-861eae13-82a9-4f90-a15d-826ed8a43a5d req-7fc4811f-2b77-4ab9-a1d5-40f9d0ed9b5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c6877078-3a1f-48f2-bc51-0daaa570d671-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:49:14 compute-2 nova_compute[226829]: 2026-01-31 08:49:14.255 226833 DEBUG oslo_concurrency.lockutils [req-861eae13-82a9-4f90-a15d-826ed8a43a5d req-7fc4811f-2b77-4ab9-a1d5-40f9d0ed9b5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c6877078-3a1f-48f2-bc51-0daaa570d671-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:49:14 compute-2 nova_compute[226829]: 2026-01-31 08:49:14.255 226833 DEBUG nova.compute.manager [req-861eae13-82a9-4f90-a15d-826ed8a43a5d req-7fc4811f-2b77-4ab9-a1d5-40f9d0ed9b5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] No waiting events found dispatching network-vif-plugged-0b2ec72e-9b76-4407-ba53-e8aea0a2334d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:49:14 compute-2 nova_compute[226829]: 2026-01-31 08:49:14.256 226833 WARNING nova.compute.manager [req-861eae13-82a9-4f90-a15d-826ed8a43a5d req-7fc4811f-2b77-4ab9-a1d5-40f9d0ed9b5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Received unexpected event network-vif-plugged-0b2ec72e-9b76-4407-ba53-e8aea0a2334d for instance with vm_state active and task_state deleting.
Jan 31 08:49:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:14.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:14 compute-2 nova_compute[226829]: 2026-01-31 08:49:14.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:14 compute-2 nova_compute[226829]: 2026-01-31 08:49:14.613 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:49:14 compute-2 nova_compute[226829]: 2026-01-31 08:49:14.613 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:49:14 compute-2 nova_compute[226829]: 2026-01-31 08:49:14.613 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:49:14 compute-2 nova_compute[226829]: 2026-01-31 08:49:14.614 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:49:14 compute-2 nova_compute[226829]: 2026-01-31 08:49:14.614 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:49:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:49:15 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4256285338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:49:15 compute-2 nova_compute[226829]: 2026-01-31 08:49:15.026 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:49:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:15.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:15 compute-2 nova_compute[226829]: 2026-01-31 08:49:15.262 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000c1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:49:15 compute-2 nova_compute[226829]: 2026-01-31 08:49:15.263 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000c1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:49:15 compute-2 nova_compute[226829]: 2026-01-31 08:49:15.266 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:49:15 compute-2 nova_compute[226829]: 2026-01-31 08:49:15.266 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:49:15 compute-2 nova_compute[226829]: 2026-01-31 08:49:15.390 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:49:15 compute-2 nova_compute[226829]: 2026-01-31 08:49:15.391 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4042MB free_disk=20.876338958740234GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:49:15 compute-2 nova_compute[226829]: 2026-01-31 08:49:15.391 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:49:15 compute-2 nova_compute[226829]: 2026-01-31 08:49:15.391 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:49:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:15 compute-2 ceph-mon[77282]: pgmap v3490: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 261 KiB/s rd, 1.9 MiB/s wr, 88 op/s
Jan 31 08:49:15 compute-2 nova_compute[226829]: 2026-01-31 08:49:15.853 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance c6877078-3a1f-48f2-bc51-0daaa570d671 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:49:15 compute-2 nova_compute[226829]: 2026-01-31 08:49:15.854 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 0c25d280-5868-4d4d-8665-797d3463ed08 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:49:15 compute-2 nova_compute[226829]: 2026-01-31 08:49:15.854 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:49:15 compute-2 nova_compute[226829]: 2026-01-31 08:49:15.854 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:49:15 compute-2 nova_compute[226829]: 2026-01-31 08:49:15.962 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing inventories for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:49:16 compute-2 nova_compute[226829]: 2026-01-31 08:49:16.062 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating ProviderTree inventory for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:49:16 compute-2 nova_compute[226829]: 2026-01-31 08:49:16.063 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating inventory in ProviderTree for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:49:16 compute-2 nova_compute[226829]: 2026-01-31 08:49:16.080 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing aggregate associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:49:16 compute-2 nova_compute[226829]: 2026-01-31 08:49:16.113 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing trait associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, traits: COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:49:16 compute-2 nova_compute[226829]: 2026-01-31 08:49:16.194 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:49:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:16.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:16 compute-2 nova_compute[226829]: 2026-01-31 08:49:16.423 226833 DEBUG nova.network.neutron [req-c90c033d-01bf-4338-beaf-9fb7a57b3c19 req-85e593c2-621a-4ec5-bd67-fa9b09eab74d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Updated VIF entry in instance network info cache for port 0b2ec72e-9b76-4407-ba53-e8aea0a2334d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:49:16 compute-2 nova_compute[226829]: 2026-01-31 08:49:16.424 226833 DEBUG nova.network.neutron [req-c90c033d-01bf-4338-beaf-9fb7a57b3c19 req-85e593c2-621a-4ec5-bd67-fa9b09eab74d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Updating instance_info_cache with network_info: [{"id": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "address": "fa:16:3e:f6:a1:c8", "network": {"id": "58047b08-8008-41e3-ad4b-abc4736272e9", "bridge": "br-int", "label": "tempest-network-smoke--1499520732", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2ec72e-9b", "ovs_interfaceid": "0b2ec72e-9b76-4407-ba53-e8aea0a2334d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:49:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:49:16 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2114223668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:49:16 compute-2 nova_compute[226829]: 2026-01-31 08:49:16.637 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:49:16 compute-2 nova_compute[226829]: 2026-01-31 08:49:16.642 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:49:16 compute-2 ceph-mon[77282]: pgmap v3491: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 246 KiB/s rd, 1.2 MiB/s wr, 82 op/s
Jan 31 08:49:16 compute-2 ceph-mon[77282]: pgmap v3492: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.4 KiB/s rd, 0 B/s wr, 6 op/s
Jan 31 08:49:16 compute-2 ceph-mon[77282]: pgmap v3493: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.7 KiB/s rd, 85 B/s wr, 7 op/s
Jan 31 08:49:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4256285338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:49:16 compute-2 nova_compute[226829]: 2026-01-31 08:49:16.977 226833 DEBUG oslo_concurrency.processutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0c25d280-5868-4d4d-8665-797d3463ed08/disk.config 0c25d280-5868-4d4d-8665-797d3463ed08_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.877s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:49:16 compute-2 nova_compute[226829]: 2026-01-31 08:49:16.978 226833 INFO nova.virt.libvirt.driver [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Deleting local config drive /var/lib/nova/instances/0c25d280-5868-4d4d-8665-797d3463ed08/disk.config because it was imported into RBD.
Jan 31 08:49:17 compute-2 kernel: tap61dfc1b3-64: entered promiscuous mode
Jan 31 08:49:17 compute-2 NetworkManager[48999]: <info>  [1769849357.0143] manager: (tap61dfc1b3-64): new Tun device (/org/freedesktop/NetworkManager/Devices/388)
Jan 31 08:49:17 compute-2 nova_compute[226829]: 2026-01-31 08:49:17.015 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:17 compute-2 ovn_controller[133834]: 2026-01-31T08:49:17Z|00776|binding|INFO|Claiming lport 61dfc1b3-64a2-42b8-9ae2-a87bc503acf2 for this chassis.
Jan 31 08:49:17 compute-2 ovn_controller[133834]: 2026-01-31T08:49:17Z|00777|binding|INFO|61dfc1b3-64a2-42b8-9ae2-a87bc503acf2: Claiming fa:16:3e:7d:81:96 10.100.0.11
Jan 31 08:49:17 compute-2 nova_compute[226829]: 2026-01-31 08:49:17.022 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:17 compute-2 ovn_controller[133834]: 2026-01-31T08:49:17Z|00778|binding|INFO|Setting lport 61dfc1b3-64a2-42b8-9ae2-a87bc503acf2 ovn-installed in OVS
Jan 31 08:49:17 compute-2 nova_compute[226829]: 2026-01-31 08:49:17.023 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:17 compute-2 nova_compute[226829]: 2026-01-31 08:49:17.024 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:17 compute-2 nova_compute[226829]: 2026-01-31 08:49:17.027 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:17.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:17 compute-2 systemd-machined[195142]: New machine qemu-89-instance-000000c5.
Jan 31 08:49:17 compute-2 systemd-udevd[321654]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:49:17 compute-2 NetworkManager[48999]: <info>  [1769849357.0562] device (tap61dfc1b3-64): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:49:17 compute-2 NetworkManager[48999]: <info>  [1769849357.0566] device (tap61dfc1b3-64): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:49:17 compute-2 systemd[1]: Started Virtual Machine qemu-89-instance-000000c5.
Jan 31 08:49:17 compute-2 nova_compute[226829]: 2026-01-31 08:49:17.234 226833 INFO nova.virt.libvirt.driver [None req-b5ac02e9-5c1c-4590-8a6e-fbff62c8a6d8 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Deleting instance files /var/lib/nova/instances/c6877078-3a1f-48f2-bc51-0daaa570d671_del
Jan 31 08:49:17 compute-2 nova_compute[226829]: 2026-01-31 08:49:17.236 226833 INFO nova.virt.libvirt.driver [None req-b5ac02e9-5c1c-4590-8a6e-fbff62c8a6d8 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Deletion of /var/lib/nova/instances/c6877078-3a1f-48f2-bc51-0daaa570d671_del complete
Jan 31 08:49:17 compute-2 ovn_controller[133834]: 2026-01-31T08:49:17Z|00779|binding|INFO|Setting lport 61dfc1b3-64a2-42b8-9ae2-a87bc503acf2 up in Southbound
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:17.295 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:81:96 10.100.0.11'], port_security=['fa:16:3e:7d:81:96 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0c25d280-5868-4d4d-8665-797d3463ed08', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba35ae24dbf3443e8a526dce39c6793b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1594b23b-251f-4e04-9498-d13079f70afc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81b33545-8c35-4640-a006-3779e6b65cfc, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=61dfc1b3-64a2-42b8-9ae2-a87bc503acf2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:17.298 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 61dfc1b3-64a2-42b8-9ae2-a87bc503acf2 in datapath 93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234 bound to our chassis
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:17.300 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:17.310 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[34b053f4-b2a9-46e3-a5eb-f9f50187a077]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:17.311 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap93bea2b0-d1 in ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:17.313 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap93bea2b0-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:17.313 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[fb5bf2b6-3494-4314-99d3-4b88c0bb349c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:17.314 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[18e9fd0e-00f0-4e1c-a18f-d03879fd2142]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:17.323 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed77378-d567-4672-94bc-eed93649fbb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:17.333 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[20cbabc7-1710-4182-91aa-a0c93a803de4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:17.362 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[3736ac59-9df3-4809-ab31-1a9c41a600bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:17 compute-2 NetworkManager[48999]: <info>  [1769849357.3686] manager: (tap93bea2b0-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/389)
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:17.369 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[30ca0f7c-1141-412e-ac5a-7026c3998837]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:17.396 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[d3649c47-e8f4-45d1-9054-95df98eef9e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:17.399 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[5e866075-9430-4233-8c4c-8ca6b59421c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:17 compute-2 NetworkManager[48999]: <info>  [1769849357.4175] device (tap93bea2b0-d0): carrier: link connected
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:17.421 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[f9cefcf3-6915-45bc-8a02-d20210b997f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:17 compute-2 nova_compute[226829]: 2026-01-31 08:49:17.431 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:17.445 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7837f98f-6ce3-46e0-9ef7-a22ed8726dd8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93bea2b0-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:5d:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 960989, 'reachable_time': 43698, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321687, 'error': None, 'target': 'ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:17.457 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2ed3cc54-35a5-4dbe-945a-bfec4cd90986]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe99:5dce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 960989, 'tstamp': 960989}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321688, 'error': None, 'target': 'ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:17.471 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5051f122-1dae-4784-bb4a-f2629fa26a9b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93bea2b0-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:5d:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 960989, 'reachable_time': 43698, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 321689, 'error': None, 'target': 'ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:17 compute-2 nova_compute[226829]: 2026-01-31 08:49:17.493 226833 DEBUG oslo_concurrency.lockutils [req-c90c033d-01bf-4338-beaf-9fb7a57b3c19 req-85e593c2-621a-4ec5-bd67-fa9b09eab74d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-c6877078-3a1f-48f2-bc51-0daaa570d671" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:17.497 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1eec7710-c3d1-43b3-a866-e55bac819e43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:17.533 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[09d112db-22d0-44f0-95b1-3019986a8a9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:17.534 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93bea2b0-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:17.534 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:17.535 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93bea2b0-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:49:17 compute-2 kernel: tap93bea2b0-d0: entered promiscuous mode
Jan 31 08:49:17 compute-2 nova_compute[226829]: 2026-01-31 08:49:17.536 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:17 compute-2 NetworkManager[48999]: <info>  [1769849357.5379] manager: (tap93bea2b0-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/390)
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:17.539 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93bea2b0-d0, col_values=(('external_ids', {'iface-id': 'ccf3782f-9c12-4288-9bff-27ab514c5f7a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:49:17 compute-2 ovn_controller[133834]: 2026-01-31T08:49:17Z|00780|binding|INFO|Releasing lport ccf3782f-9c12-4288-9bff-27ab514c5f7a from this chassis (sb_readonly=1)
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:17.542 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:17.543 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a8c9e053-1156-4f80-832e-0fd987d78e24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:17.543 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234.pid.haproxy
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:49:17 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:17.544 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234', 'env', 'PROCESS_TAG=haproxy-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:49:17 compute-2 nova_compute[226829]: 2026-01-31 08:49:17.546 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:17 compute-2 nova_compute[226829]: 2026-01-31 08:49:17.757 226833 INFO nova.compute.manager [None req-b5ac02e9-5c1c-4590-8a6e-fbff62c8a6d8 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Took 10.61 seconds to destroy the instance on the hypervisor.
Jan 31 08:49:17 compute-2 nova_compute[226829]: 2026-01-31 08:49:17.758 226833 DEBUG oslo.service.loopingcall [None req-b5ac02e9-5c1c-4590-8a6e-fbff62c8a6d8 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:49:17 compute-2 nova_compute[226829]: 2026-01-31 08:49:17.758 226833 DEBUG nova.compute.manager [-] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:49:17 compute-2 nova_compute[226829]: 2026-01-31 08:49:17.759 226833 DEBUG nova.network.neutron [-] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:49:17 compute-2 nova_compute[226829]: 2026-01-31 08:49:17.811 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769849357.8111978, 0c25d280-5868-4d4d-8665-797d3463ed08 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:49:17 compute-2 nova_compute[226829]: 2026-01-31 08:49:17.812 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] VM Started (Lifecycle Event)
Jan 31 08:49:17 compute-2 podman[321763]: 2026-01-31 08:49:17.880774498 +0000 UTC m=+0.049139456 container create 61b34bf37c997baf6e247dfd9591c5414dda04816a01b41e5b13e1453bb07a15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 08:49:17 compute-2 nova_compute[226829]: 2026-01-31 08:49:17.890 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:17 compute-2 systemd[1]: Started libpod-conmon-61b34bf37c997baf6e247dfd9591c5414dda04816a01b41e5b13e1453bb07a15.scope.
Jan 31 08:49:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2114223668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:49:17 compute-2 ceph-mon[77282]: pgmap v3494: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.9 KiB/s rd, 170 B/s wr, 7 op/s
Jan 31 08:49:17 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:49:17 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8cf6b58f28ed1f381f9098842726796cbf00f98e3b89d0a2e583899e1f2815/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:49:17 compute-2 podman[321763]: 2026-01-31 08:49:17.8554636 +0000 UTC m=+0.023828588 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:49:17 compute-2 podman[321763]: 2026-01-31 08:49:17.957261767 +0000 UTC m=+0.125626745 container init 61b34bf37c997baf6e247dfd9591c5414dda04816a01b41e5b13e1453bb07a15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:49:17 compute-2 podman[321763]: 2026-01-31 08:49:17.96287462 +0000 UTC m=+0.131239578 container start 61b34bf37c997baf6e247dfd9591c5414dda04816a01b41e5b13e1453bb07a15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Jan 31 08:49:17 compute-2 neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234[321778]: [NOTICE]   (321782) : New worker (321785) forked
Jan 31 08:49:17 compute-2 neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234[321778]: [NOTICE]   (321782) : Loading success.
Jan 31 08:49:18 compute-2 nova_compute[226829]: 2026-01-31 08:49:18.032 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:49:18 compute-2 nova_compute[226829]: 2026-01-31 08:49:18.033 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:49:18 compute-2 nova_compute[226829]: 2026-01-31 08:49:18.129 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:49:18 compute-2 nova_compute[226829]: 2026-01-31 08:49:18.132 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769849357.8120954, 0c25d280-5868-4d4d-8665-797d3463ed08 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:49:18 compute-2 nova_compute[226829]: 2026-01-31 08:49:18.132 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] VM Paused (Lifecycle Event)
Jan 31 08:49:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:18.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:18 compute-2 nova_compute[226829]: 2026-01-31 08:49:18.265 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:18 compute-2 nova_compute[226829]: 2026-01-31 08:49:18.487 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:49:18 compute-2 nova_compute[226829]: 2026-01-31 08:49:18.491 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:49:18 compute-2 nova_compute[226829]: 2026-01-31 08:49:18.872 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:49:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:19.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:19 compute-2 nova_compute[226829]: 2026-01-31 08:49:19.314 226833 DEBUG nova.compute.manager [req-20ad5d49-ecb6-4dc8-ad98-b17c1bbef6c2 req-73c39b3f-cca2-4217-a4aa-6f7d48437c2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Received event network-vif-plugged-61dfc1b3-64a2-42b8-9ae2-a87bc503acf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:49:19 compute-2 nova_compute[226829]: 2026-01-31 08:49:19.315 226833 DEBUG oslo_concurrency.lockutils [req-20ad5d49-ecb6-4dc8-ad98-b17c1bbef6c2 req-73c39b3f-cca2-4217-a4aa-6f7d48437c2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0c25d280-5868-4d4d-8665-797d3463ed08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:49:19 compute-2 nova_compute[226829]: 2026-01-31 08:49:19.315 226833 DEBUG oslo_concurrency.lockutils [req-20ad5d49-ecb6-4dc8-ad98-b17c1bbef6c2 req-73c39b3f-cca2-4217-a4aa-6f7d48437c2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0c25d280-5868-4d4d-8665-797d3463ed08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:49:19 compute-2 nova_compute[226829]: 2026-01-31 08:49:19.316 226833 DEBUG oslo_concurrency.lockutils [req-20ad5d49-ecb6-4dc8-ad98-b17c1bbef6c2 req-73c39b3f-cca2-4217-a4aa-6f7d48437c2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0c25d280-5868-4d4d-8665-797d3463ed08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:49:19 compute-2 nova_compute[226829]: 2026-01-31 08:49:19.317 226833 DEBUG nova.compute.manager [req-20ad5d49-ecb6-4dc8-ad98-b17c1bbef6c2 req-73c39b3f-cca2-4217-a4aa-6f7d48437c2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Processing event network-vif-plugged-61dfc1b3-64a2-42b8-9ae2-a87bc503acf2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:49:19 compute-2 nova_compute[226829]: 2026-01-31 08:49:19.318 226833 DEBUG nova.compute.manager [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:49:19 compute-2 nova_compute[226829]: 2026-01-31 08:49:19.322 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769849359.3220806, 0c25d280-5868-4d4d-8665-797d3463ed08 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:49:19 compute-2 nova_compute[226829]: 2026-01-31 08:49:19.323 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] VM Resumed (Lifecycle Event)
Jan 31 08:49:19 compute-2 nova_compute[226829]: 2026-01-31 08:49:19.325 226833 DEBUG nova.virt.libvirt.driver [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:49:19 compute-2 nova_compute[226829]: 2026-01-31 08:49:19.329 226833 INFO nova.virt.libvirt.driver [-] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Instance spawned successfully.
Jan 31 08:49:19 compute-2 nova_compute[226829]: 2026-01-31 08:49:19.330 226833 DEBUG nova.virt.libvirt.driver [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:49:19 compute-2 nova_compute[226829]: 2026-01-31 08:49:19.720 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:49:19 compute-2 nova_compute[226829]: 2026-01-31 08:49:19.725 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:49:19 compute-2 nova_compute[226829]: 2026-01-31 08:49:19.733 226833 DEBUG nova.virt.libvirt.driver [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:49:19 compute-2 nova_compute[226829]: 2026-01-31 08:49:19.734 226833 DEBUG nova.virt.libvirt.driver [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:49:19 compute-2 nova_compute[226829]: 2026-01-31 08:49:19.734 226833 DEBUG nova.virt.libvirt.driver [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:49:19 compute-2 nova_compute[226829]: 2026-01-31 08:49:19.735 226833 DEBUG nova.virt.libvirt.driver [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:49:19 compute-2 nova_compute[226829]: 2026-01-31 08:49:19.735 226833 DEBUG nova.virt.libvirt.driver [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:49:19 compute-2 nova_compute[226829]: 2026-01-31 08:49:19.736 226833 DEBUG nova.virt.libvirt.driver [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:49:19 compute-2 ceph-mon[77282]: pgmap v3495: 305 pgs: 305 active+clean; 304 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 24 op/s
Jan 31 08:49:19 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1250200569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:49:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:20.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:20 compute-2 nova_compute[226829]: 2026-01-31 08:49:20.277 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:49:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:20 compute-2 nova_compute[226829]: 2026-01-31 08:49:20.540 226833 INFO nova.compute.manager [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Took 24.39 seconds to spawn the instance on the hypervisor.
Jan 31 08:49:20 compute-2 nova_compute[226829]: 2026-01-31 08:49:20.541 226833 DEBUG nova.compute.manager [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:49:20 compute-2 sudo[321795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:49:20 compute-2 sudo[321795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:20 compute-2 sudo[321795]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:20 compute-2 sudo[321820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:49:20 compute-2 sudo[321820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:20 compute-2 sudo[321820]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:21.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:21 compute-2 nova_compute[226829]: 2026-01-31 08:49:21.206 226833 INFO nova.compute.manager [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Took 28.94 seconds to build instance.
Jan 31 08:49:21 compute-2 nova_compute[226829]: 2026-01-31 08:49:21.305 226833 DEBUG nova.network.neutron [-] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:49:21 compute-2 nova_compute[226829]: 2026-01-31 08:49:21.346 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:21.346 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=90, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=89) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:49:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:21.347 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:49:21 compute-2 nova_compute[226829]: 2026-01-31 08:49:21.487 226833 DEBUG oslo_concurrency.lockutils [None req-0d33cf0b-6405-4bbc-9ec4-7d11ffc355f0 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "0c25d280-5868-4d4d-8665-797d3463ed08" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 29.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:49:21 compute-2 nova_compute[226829]: 2026-01-31 08:49:21.505 226833 DEBUG nova.compute.manager [req-11eec837-3ac6-4a12-93c5-4cc008f1bc79 req-99228f2b-4467-46bd-b8dd-396fe71b9dd0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Received event network-vif-deleted-0b2ec72e-9b76-4407-ba53-e8aea0a2334d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:49:21 compute-2 nova_compute[226829]: 2026-01-31 08:49:21.506 226833 INFO nova.compute.manager [req-11eec837-3ac6-4a12-93c5-4cc008f1bc79 req-99228f2b-4467-46bd-b8dd-396fe71b9dd0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Neutron deleted interface 0b2ec72e-9b76-4407-ba53-e8aea0a2334d; detaching it from the instance and deleting it from the info cache
Jan 31 08:49:21 compute-2 nova_compute[226829]: 2026-01-31 08:49:21.506 226833 DEBUG nova.network.neutron [req-11eec837-3ac6-4a12-93c5-4cc008f1bc79 req-99228f2b-4467-46bd-b8dd-396fe71b9dd0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:49:21 compute-2 nova_compute[226829]: 2026-01-31 08:49:21.913 226833 INFO nova.compute.manager [-] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Took 4.15 seconds to deallocate network for instance.
Jan 31 08:49:21 compute-2 nova_compute[226829]: 2026-01-31 08:49:21.965 226833 DEBUG nova.compute.manager [req-1284cd94-08fa-4957-9618-959f7626219d req-54053217-6cf4-4d19-8b10-ad6b84c522b8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Received event network-vif-plugged-61dfc1b3-64a2-42b8-9ae2-a87bc503acf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:49:21 compute-2 nova_compute[226829]: 2026-01-31 08:49:21.966 226833 DEBUG oslo_concurrency.lockutils [req-1284cd94-08fa-4957-9618-959f7626219d req-54053217-6cf4-4d19-8b10-ad6b84c522b8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0c25d280-5868-4d4d-8665-797d3463ed08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:49:21 compute-2 nova_compute[226829]: 2026-01-31 08:49:21.966 226833 DEBUG oslo_concurrency.lockutils [req-1284cd94-08fa-4957-9618-959f7626219d req-54053217-6cf4-4d19-8b10-ad6b84c522b8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0c25d280-5868-4d4d-8665-797d3463ed08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:49:21 compute-2 nova_compute[226829]: 2026-01-31 08:49:21.966 226833 DEBUG oslo_concurrency.lockutils [req-1284cd94-08fa-4957-9618-959f7626219d req-54053217-6cf4-4d19-8b10-ad6b84c522b8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0c25d280-5868-4d4d-8665-797d3463ed08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:49:21 compute-2 nova_compute[226829]: 2026-01-31 08:49:21.967 226833 DEBUG nova.compute.manager [req-1284cd94-08fa-4957-9618-959f7626219d req-54053217-6cf4-4d19-8b10-ad6b84c522b8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] No waiting events found dispatching network-vif-plugged-61dfc1b3-64a2-42b8-9ae2-a87bc503acf2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:49:21 compute-2 nova_compute[226829]: 2026-01-31 08:49:21.967 226833 WARNING nova.compute.manager [req-1284cd94-08fa-4957-9618-959f7626219d req-54053217-6cf4-4d19-8b10-ad6b84c522b8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Received unexpected event network-vif-plugged-61dfc1b3-64a2-42b8-9ae2-a87bc503acf2 for instance with vm_state active and task_state None.
Jan 31 08:49:21 compute-2 ceph-mon[77282]: pgmap v3496: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 114 KiB/s rd, 14 KiB/s wr, 41 op/s
Jan 31 08:49:21 compute-2 nova_compute[226829]: 2026-01-31 08:49:21.987 226833 DEBUG nova.compute.manager [req-11eec837-3ac6-4a12-93c5-4cc008f1bc79 req-99228f2b-4467-46bd-b8dd-396fe71b9dd0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Detach interface failed, port_id=0b2ec72e-9b76-4407-ba53-e8aea0a2334d, reason: Instance c6877078-3a1f-48f2-bc51-0daaa570d671 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:49:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:22.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:22 compute-2 nova_compute[226829]: 2026-01-31 08:49:22.381 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849347.3802924, c6877078-3a1f-48f2-bc51-0daaa570d671 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:49:22 compute-2 nova_compute[226829]: 2026-01-31 08:49:22.382 226833 INFO nova.compute.manager [-] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] VM Stopped (Lifecycle Event)
Jan 31 08:49:22 compute-2 nova_compute[226829]: 2026-01-31 08:49:22.392 226833 DEBUG oslo_concurrency.lockutils [None req-b5ac02e9-5c1c-4590-8a6e-fbff62c8a6d8 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:49:22 compute-2 nova_compute[226829]: 2026-01-31 08:49:22.393 226833 DEBUG oslo_concurrency.lockutils [None req-b5ac02e9-5c1c-4590-8a6e-fbff62c8a6d8 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:49:22 compute-2 nova_compute[226829]: 2026-01-31 08:49:22.490 226833 DEBUG oslo_concurrency.processutils [None req-b5ac02e9-5c1c-4590-8a6e-fbff62c8a6d8 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:49:22 compute-2 nova_compute[226829]: 2026-01-31 08:49:22.535 226833 DEBUG nova.compute.manager [None req-4fe23afb-c102-464a-8e75-a1317257851f - - - - - -] [instance: c6877078-3a1f-48f2-bc51-0daaa570d671] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:49:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:49:22 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2868309194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:49:22 compute-2 nova_compute[226829]: 2026-01-31 08:49:22.936 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:22 compute-2 nova_compute[226829]: 2026-01-31 08:49:22.948 226833 DEBUG oslo_concurrency.processutils [None req-b5ac02e9-5c1c-4590-8a6e-fbff62c8a6d8 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:49:22 compute-2 nova_compute[226829]: 2026-01-31 08:49:22.956 226833 DEBUG nova.compute.provider_tree [None req-b5ac02e9-5c1c-4590-8a6e-fbff62c8a6d8 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:49:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/244133078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:49:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2868309194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:49:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:23.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:23 compute-2 nova_compute[226829]: 2026-01-31 08:49:23.268 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:23 compute-2 nova_compute[226829]: 2026-01-31 08:49:23.299 226833 DEBUG nova.scheduler.client.report [None req-b5ac02e9-5c1c-4590-8a6e-fbff62c8a6d8 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:49:23 compute-2 nova_compute[226829]: 2026-01-31 08:49:23.485 226833 DEBUG oslo_concurrency.lockutils [None req-b5ac02e9-5c1c-4590-8a6e-fbff62c8a6d8 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:49:23 compute-2 nova_compute[226829]: 2026-01-31 08:49:23.728 226833 INFO nova.scheduler.client.report [None req-b5ac02e9-5c1c-4590-8a6e-fbff62c8a6d8 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Deleted allocations for instance c6877078-3a1f-48f2-bc51-0daaa570d671
Jan 31 08:49:24 compute-2 ceph-mon[77282]: pgmap v3497: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 108 KiB/s rd, 14 KiB/s wr, 35 op/s
Jan 31 08:49:24 compute-2 nova_compute[226829]: 2026-01-31 08:49:24.246 226833 DEBUG oslo_concurrency.lockutils [None req-b5ac02e9-5c1c-4590-8a6e-fbff62c8a6d8 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "c6877078-3a1f-48f2-bc51-0daaa570d671" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 17.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:49:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:24.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:25.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:26 compute-2 ceph-mon[77282]: pgmap v3498: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 95 op/s
Jan 31 08:49:26 compute-2 podman[321871]: 2026-01-31 08:49:26.178368369 +0000 UTC m=+0.064429722 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:49:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:26.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:27.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:27 compute-2 nova_compute[226829]: 2026-01-31 08:49:27.938 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:28 compute-2 ceph-mon[77282]: pgmap v3499: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 95 op/s
Jan 31 08:49:28 compute-2 nova_compute[226829]: 2026-01-31 08:49:28.270 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:28.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:29.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:30 compute-2 ceph-mon[77282]: pgmap v3500: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 95 op/s
Jan 31 08:49:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:30.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:30.350 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '90'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:49:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:30 compute-2 nova_compute[226829]: 2026-01-31 08:49:30.716 226833 DEBUG nova.compute.manager [req-6a202be6-a7bc-43e0-b7f2-8007cf6808d7 req-db8dabd5-4b18-4d75-978c-15e7ee139538 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Received event network-changed-61dfc1b3-64a2-42b8-9ae2-a87bc503acf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:49:30 compute-2 nova_compute[226829]: 2026-01-31 08:49:30.717 226833 DEBUG nova.compute.manager [req-6a202be6-a7bc-43e0-b7f2-8007cf6808d7 req-db8dabd5-4b18-4d75-978c-15e7ee139538 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Refreshing instance network info cache due to event network-changed-61dfc1b3-64a2-42b8-9ae2-a87bc503acf2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:49:30 compute-2 nova_compute[226829]: 2026-01-31 08:49:30.717 226833 DEBUG oslo_concurrency.lockutils [req-6a202be6-a7bc-43e0-b7f2-8007cf6808d7 req-db8dabd5-4b18-4d75-978c-15e7ee139538 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-0c25d280-5868-4d4d-8665-797d3463ed08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:49:30 compute-2 nova_compute[226829]: 2026-01-31 08:49:30.717 226833 DEBUG oslo_concurrency.lockutils [req-6a202be6-a7bc-43e0-b7f2-8007cf6808d7 req-db8dabd5-4b18-4d75-978c-15e7ee139538 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-0c25d280-5868-4d4d-8665-797d3463ed08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:49:30 compute-2 nova_compute[226829]: 2026-01-31 08:49:30.717 226833 DEBUG nova.network.neutron [req-6a202be6-a7bc-43e0-b7f2-8007cf6808d7 req-db8dabd5-4b18-4d75-978c-15e7ee139538 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Refreshing network info cache for port 61dfc1b3-64a2-42b8-9ae2-a87bc503acf2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:49:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:49:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:31.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:49:31 compute-2 ovn_controller[133834]: 2026-01-31T08:49:31Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7d:81:96 10.100.0.11
Jan 31 08:49:31 compute-2 ovn_controller[133834]: 2026-01-31T08:49:31Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7d:81:96 10.100.0.11
Jan 31 08:49:32 compute-2 nova_compute[226829]: 2026-01-31 08:49:32.034 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:32 compute-2 nova_compute[226829]: 2026-01-31 08:49:32.035 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:49:32 compute-2 ceph-mon[77282]: pgmap v3501: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 81 op/s
Jan 31 08:49:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:32.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:32 compute-2 nova_compute[226829]: 2026-01-31 08:49:32.940 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:33.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:33 compute-2 podman[321900]: 2026-01-31 08:49:33.152994334 +0000 UTC m=+0.040165283 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 08:49:33 compute-2 nova_compute[226829]: 2026-01-31 08:49:33.272 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:34 compute-2 ceph-mon[77282]: pgmap v3502: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 16 KiB/s wr, 63 op/s
Jan 31 08:49:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:34.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:35.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:35 compute-2 nova_compute[226829]: 2026-01-31 08:49:35.490 226833 DEBUG nova.network.neutron [req-6a202be6-a7bc-43e0-b7f2-8007cf6808d7 req-db8dabd5-4b18-4d75-978c-15e7ee139538 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Updated VIF entry in instance network info cache for port 61dfc1b3-64a2-42b8-9ae2-a87bc503acf2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:49:35 compute-2 nova_compute[226829]: 2026-01-31 08:49:35.491 226833 DEBUG nova.network.neutron [req-6a202be6-a7bc-43e0-b7f2-8007cf6808d7 req-db8dabd5-4b18-4d75-978c-15e7ee139538 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Updating instance_info_cache with network_info: [{"id": "61dfc1b3-64a2-42b8-9ae2-a87bc503acf2", "address": "fa:16:3e:7d:81:96", "network": {"id": "93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234", "bridge": "br-int", "label": "tempest-network-smoke--1993688235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61dfc1b3-64", "ovs_interfaceid": "61dfc1b3-64a2-42b8-9ae2-a87bc503acf2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:49:35 compute-2 nova_compute[226829]: 2026-01-31 08:49:35.940 226833 DEBUG oslo_concurrency.lockutils [req-6a202be6-a7bc-43e0-b7f2-8007cf6808d7 req-db8dabd5-4b18-4d75-978c-15e7ee139538 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-0c25d280-5868-4d4d-8665-797d3463ed08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:49:36 compute-2 ceph-mon[77282]: pgmap v3503: 305 pgs: 305 active+clean; 276 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 117 op/s
Jan 31 08:49:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:36.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:37.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:37 compute-2 nova_compute[226829]: 2026-01-31 08:49:37.984 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:38 compute-2 ceph-mon[77282]: pgmap v3504: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:49:38 compute-2 nova_compute[226829]: 2026-01-31 08:49:38.265 226833 DEBUG oslo_concurrency.lockutils [None req-44ca5503-5ec1-427a-9380-0dcb4ff6e9df c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "0c25d280-5868-4d4d-8665-797d3463ed08" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:49:38 compute-2 nova_compute[226829]: 2026-01-31 08:49:38.266 226833 DEBUG oslo_concurrency.lockutils [None req-44ca5503-5ec1-427a-9380-0dcb4ff6e9df c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "0c25d280-5868-4d4d-8665-797d3463ed08" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:49:38 compute-2 nova_compute[226829]: 2026-01-31 08:49:38.266 226833 DEBUG oslo_concurrency.lockutils [None req-44ca5503-5ec1-427a-9380-0dcb4ff6e9df c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "0c25d280-5868-4d4d-8665-797d3463ed08-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:49:38 compute-2 nova_compute[226829]: 2026-01-31 08:49:38.266 226833 DEBUG oslo_concurrency.lockutils [None req-44ca5503-5ec1-427a-9380-0dcb4ff6e9df c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "0c25d280-5868-4d4d-8665-797d3463ed08-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:49:38 compute-2 nova_compute[226829]: 2026-01-31 08:49:38.266 226833 DEBUG oslo_concurrency.lockutils [None req-44ca5503-5ec1-427a-9380-0dcb4ff6e9df c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "0c25d280-5868-4d4d-8665-797d3463ed08-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:49:38 compute-2 nova_compute[226829]: 2026-01-31 08:49:38.267 226833 INFO nova.compute.manager [None req-44ca5503-5ec1-427a-9380-0dcb4ff6e9df c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Terminating instance
Jan 31 08:49:38 compute-2 nova_compute[226829]: 2026-01-31 08:49:38.268 226833 DEBUG nova.compute.manager [None req-44ca5503-5ec1-427a-9380-0dcb4ff6e9df c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:49:38 compute-2 nova_compute[226829]: 2026-01-31 08:49:38.274 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:38.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:38 compute-2 kernel: tap61dfc1b3-64 (unregistering): left promiscuous mode
Jan 31 08:49:38 compute-2 nova_compute[226829]: 2026-01-31 08:49:38.333 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:38 compute-2 NetworkManager[48999]: <info>  [1769849378.3344] device (tap61dfc1b3-64): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:49:38 compute-2 ovn_controller[133834]: 2026-01-31T08:49:38Z|00781|binding|INFO|Releasing lport 61dfc1b3-64a2-42b8-9ae2-a87bc503acf2 from this chassis (sb_readonly=0)
Jan 31 08:49:38 compute-2 ovn_controller[133834]: 2026-01-31T08:49:38Z|00782|binding|INFO|Setting lport 61dfc1b3-64a2-42b8-9ae2-a87bc503acf2 down in Southbound
Jan 31 08:49:38 compute-2 ovn_controller[133834]: 2026-01-31T08:49:38Z|00783|binding|INFO|Removing iface tap61dfc1b3-64 ovn-installed in OVS
Jan 31 08:49:38 compute-2 nova_compute[226829]: 2026-01-31 08:49:38.340 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:38 compute-2 nova_compute[226829]: 2026-01-31 08:49:38.342 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:38 compute-2 nova_compute[226829]: 2026-01-31 08:49:38.345 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:38 compute-2 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000c5.scope: Deactivated successfully.
Jan 31 08:49:38 compute-2 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000c5.scope: Consumed 13.229s CPU time.
Jan 31 08:49:38 compute-2 systemd-machined[195142]: Machine qemu-89-instance-000000c5 terminated.
Jan 31 08:49:38 compute-2 nova_compute[226829]: 2026-01-31 08:49:38.502 226833 INFO nova.virt.libvirt.driver [-] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Instance destroyed successfully.
Jan 31 08:49:38 compute-2 nova_compute[226829]: 2026-01-31 08:49:38.503 226833 DEBUG nova.objects.instance [None req-44ca5503-5ec1-427a-9380-0dcb4ff6e9df c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lazy-loading 'resources' on Instance uuid 0c25d280-5868-4d4d-8665-797d3463ed08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:49:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:38.854 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:81:96 10.100.0.11'], port_security=['fa:16:3e:7d:81:96 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0c25d280-5868-4d4d-8665-797d3463ed08', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba35ae24dbf3443e8a526dce39c6793b', 'neutron:revision_number': '5', 'neutron:security_group_ids': '3979cb10-851a-43da-b971-e32968704c0a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81b33545-8c35-4640-a006-3779e6b65cfc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=61dfc1b3-64a2-42b8-9ae2-a87bc503acf2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:49:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:38.855 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 61dfc1b3-64a2-42b8-9ae2-a87bc503acf2 in datapath 93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234 unbound from our chassis
Jan 31 08:49:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:38.857 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:49:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:38.858 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[617fe87d-1bc8-43ae-a709-b4e67c02df71]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:38.859 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234 namespace which is not needed anymore
Jan 31 08:49:38 compute-2 nova_compute[226829]: 2026-01-31 08:49:38.904 226833 DEBUG nova.virt.libvirt.vif [None req-44ca5503-5ec1-427a-9380-0dcb4ff6e9df c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:48:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-gen-1-402573898',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-gen-1-402573898',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1014068786-ge',id=197,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGEVRXCOmyKSdhlo9l0WxIRbGJsQf5I7xnoqbiNS2WMkfr+PHUSU3MjMN6e2reAurOFdIo1aIllo1la3phc66kwPrjNj65wq6jckJq8FKFDC0nYvOhxhwR1EYQGTcrj65g==',key_name='tempest-TestSecurityGroupsBasicOps-681515177',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:49:20Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ba35ae24dbf3443e8a526dce39c6793b',ramdisk_id='',reservation_id='r-e83pt6ne',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1014068786',owner_user_name='tempest-TestSecurityGroupsBasicOps-1014068786-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:49:20Z,user_data=None,user_id='c6968a1ee10e4e3b8651ffe0240a7e46',uuid=0c25d280-5868-4d4d-8665-797d3463ed08,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "61dfc1b3-64a2-42b8-9ae2-a87bc503acf2", "address": "fa:16:3e:7d:81:96", "network": {"id": "93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234", "bridge": "br-int", "label": "tempest-network-smoke--1993688235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61dfc1b3-64", "ovs_interfaceid": "61dfc1b3-64a2-42b8-9ae2-a87bc503acf2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:49:38 compute-2 nova_compute[226829]: 2026-01-31 08:49:38.905 226833 DEBUG nova.network.os_vif_util [None req-44ca5503-5ec1-427a-9380-0dcb4ff6e9df c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converting VIF {"id": "61dfc1b3-64a2-42b8-9ae2-a87bc503acf2", "address": "fa:16:3e:7d:81:96", "network": {"id": "93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234", "bridge": "br-int", "label": "tempest-network-smoke--1993688235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61dfc1b3-64", "ovs_interfaceid": "61dfc1b3-64a2-42b8-9ae2-a87bc503acf2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:49:38 compute-2 nova_compute[226829]: 2026-01-31 08:49:38.906 226833 DEBUG nova.network.os_vif_util [None req-44ca5503-5ec1-427a-9380-0dcb4ff6e9df c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7d:81:96,bridge_name='br-int',has_traffic_filtering=True,id=61dfc1b3-64a2-42b8-9ae2-a87bc503acf2,network=Network(93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61dfc1b3-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:49:38 compute-2 nova_compute[226829]: 2026-01-31 08:49:38.906 226833 DEBUG os_vif [None req-44ca5503-5ec1-427a-9380-0dcb4ff6e9df c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:81:96,bridge_name='br-int',has_traffic_filtering=True,id=61dfc1b3-64a2-42b8-9ae2-a87bc503acf2,network=Network(93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61dfc1b3-64') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:49:38 compute-2 nova_compute[226829]: 2026-01-31 08:49:38.909 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:38 compute-2 nova_compute[226829]: 2026-01-31 08:49:38.909 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61dfc1b3-64, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:49:38 compute-2 nova_compute[226829]: 2026-01-31 08:49:38.911 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:38 compute-2 nova_compute[226829]: 2026-01-31 08:49:38.912 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:38 compute-2 nova_compute[226829]: 2026-01-31 08:49:38.918 226833 INFO os_vif [None req-44ca5503-5ec1-427a-9380-0dcb4ff6e9df c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:81:96,bridge_name='br-int',has_traffic_filtering=True,id=61dfc1b3-64a2-42b8-9ae2-a87bc503acf2,network=Network(93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61dfc1b3-64')
Jan 31 08:49:38 compute-2 neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234[321778]: [NOTICE]   (321782) : haproxy version is 2.8.14-c23fe91
Jan 31 08:49:38 compute-2 neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234[321778]: [NOTICE]   (321782) : path to executable is /usr/sbin/haproxy
Jan 31 08:49:38 compute-2 neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234[321778]: [WARNING]  (321782) : Exiting Master process...
Jan 31 08:49:38 compute-2 neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234[321778]: [ALERT]    (321782) : Current worker (321785) exited with code 143 (Terminated)
Jan 31 08:49:38 compute-2 neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234[321778]: [WARNING]  (321782) : All workers exited. Exiting... (0)
Jan 31 08:49:38 compute-2 systemd[1]: libpod-61b34bf37c997baf6e247dfd9591c5414dda04816a01b41e5b13e1453bb07a15.scope: Deactivated successfully.
Jan 31 08:49:38 compute-2 podman[321965]: 2026-01-31 08:49:38.98342172 +0000 UTC m=+0.046856494 container died 61b34bf37c997baf6e247dfd9591c5414dda04816a01b41e5b13e1453bb07a15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:49:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:39.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:39 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-61b34bf37c997baf6e247dfd9591c5414dda04816a01b41e5b13e1453bb07a15-userdata-shm.mount: Deactivated successfully.
Jan 31 08:49:39 compute-2 systemd[1]: var-lib-containers-storage-overlay-fa8cf6b58f28ed1f381f9098842726796cbf00f98e3b89d0a2e583899e1f2815-merged.mount: Deactivated successfully.
Jan 31 08:49:39 compute-2 podman[321965]: 2026-01-31 08:49:39.080901899 +0000 UTC m=+0.144336663 container cleanup 61b34bf37c997baf6e247dfd9591c5414dda04816a01b41e5b13e1453bb07a15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 08:49:39 compute-2 systemd[1]: libpod-conmon-61b34bf37c997baf6e247dfd9591c5414dda04816a01b41e5b13e1453bb07a15.scope: Deactivated successfully.
Jan 31 08:49:39 compute-2 podman[322003]: 2026-01-31 08:49:39.155796505 +0000 UTC m=+0.058633505 container remove 61b34bf37c997baf6e247dfd9591c5414dda04816a01b41e5b13e1453bb07a15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 08:49:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:39.159 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d9ecad27-3804-4dfc-8865-df85b29cf2e5]: (4, ('Sat Jan 31 08:49:38 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234 (61b34bf37c997baf6e247dfd9591c5414dda04816a01b41e5b13e1453bb07a15)\n61b34bf37c997baf6e247dfd9591c5414dda04816a01b41e5b13e1453bb07a15\nSat Jan 31 08:49:39 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234 (61b34bf37c997baf6e247dfd9591c5414dda04816a01b41e5b13e1453bb07a15)\n61b34bf37c997baf6e247dfd9591c5414dda04816a01b41e5b13e1453bb07a15\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:39.161 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[16e410ff-05e7-4492-84f5-5a8935145178]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:39.162 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93bea2b0-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:49:39 compute-2 nova_compute[226829]: 2026-01-31 08:49:39.216 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:39 compute-2 kernel: tap93bea2b0-d0: left promiscuous mode
Jan 31 08:49:39 compute-2 nova_compute[226829]: 2026-01-31 08:49:39.221 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:39.223 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[81402b6d-15f3-45f3-a97b-f4dffde1b442]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:39.234 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[fbcef4b5-055e-4c73-bde9-742e35d3a490]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:39.235 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d76f9988-0520-48eb-b41b-c13bb95a1b95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:39.245 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7e15f9d7-8fec-4655-b1d0-b314c8e4c246]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 960983, 'reachable_time': 31389, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322018, 'error': None, 'target': 'ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:39 compute-2 systemd[1]: run-netns-ovnmeta\x2d93bea2b0\x2dd2f6\x2d4f71\x2d8f0c\x2d6b8a51ad6234.mount: Deactivated successfully.
Jan 31 08:49:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:39.248 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:49:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:49:39.248 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[a6476288-cc8a-4bec-a38f-c294fe752a21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:39 compute-2 nova_compute[226829]: 2026-01-31 08:49:39.496 226833 INFO nova.virt.libvirt.driver [None req-44ca5503-5ec1-427a-9380-0dcb4ff6e9df c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Deleting instance files /var/lib/nova/instances/0c25d280-5868-4d4d-8665-797d3463ed08_del
Jan 31 08:49:39 compute-2 nova_compute[226829]: 2026-01-31 08:49:39.497 226833 INFO nova.virt.libvirt.driver [None req-44ca5503-5ec1-427a-9380-0dcb4ff6e9df c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Deletion of /var/lib/nova/instances/0c25d280-5868-4d4d-8665-797d3463ed08_del complete
Jan 31 08:49:40 compute-2 ceph-mon[77282]: pgmap v3505: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:49:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:40.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:40 compute-2 nova_compute[226829]: 2026-01-31 08:49:40.309 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:40 compute-2 nova_compute[226829]: 2026-01-31 08:49:40.331 226833 INFO nova.compute.manager [None req-44ca5503-5ec1-427a-9380-0dcb4ff6e9df c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Took 2.06 seconds to destroy the instance on the hypervisor.
Jan 31 08:49:40 compute-2 nova_compute[226829]: 2026-01-31 08:49:40.332 226833 DEBUG oslo.service.loopingcall [None req-44ca5503-5ec1-427a-9380-0dcb4ff6e9df c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:49:40 compute-2 nova_compute[226829]: 2026-01-31 08:49:40.332 226833 DEBUG nova.compute.manager [-] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:49:40 compute-2 nova_compute[226829]: 2026-01-31 08:49:40.332 226833 DEBUG nova.network.neutron [-] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:49:40 compute-2 nova_compute[226829]: 2026-01-31 08:49:40.341 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:40 compute-2 nova_compute[226829]: 2026-01-31 08:49:40.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:40 compute-2 nova_compute[226829]: 2026-01-31 08:49:40.584 226833 DEBUG nova.compute.manager [req-0c03983e-0c39-4a5f-80d6-e43ab72cade0 req-49089954-5358-4045-8882-2026b915129f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Received event network-vif-unplugged-61dfc1b3-64a2-42b8-9ae2-a87bc503acf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:49:40 compute-2 nova_compute[226829]: 2026-01-31 08:49:40.585 226833 DEBUG oslo_concurrency.lockutils [req-0c03983e-0c39-4a5f-80d6-e43ab72cade0 req-49089954-5358-4045-8882-2026b915129f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0c25d280-5868-4d4d-8665-797d3463ed08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:49:40 compute-2 nova_compute[226829]: 2026-01-31 08:49:40.585 226833 DEBUG oslo_concurrency.lockutils [req-0c03983e-0c39-4a5f-80d6-e43ab72cade0 req-49089954-5358-4045-8882-2026b915129f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0c25d280-5868-4d4d-8665-797d3463ed08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:49:40 compute-2 nova_compute[226829]: 2026-01-31 08:49:40.585 226833 DEBUG oslo_concurrency.lockutils [req-0c03983e-0c39-4a5f-80d6-e43ab72cade0 req-49089954-5358-4045-8882-2026b915129f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0c25d280-5868-4d4d-8665-797d3463ed08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:49:40 compute-2 nova_compute[226829]: 2026-01-31 08:49:40.586 226833 DEBUG nova.compute.manager [req-0c03983e-0c39-4a5f-80d6-e43ab72cade0 req-49089954-5358-4045-8882-2026b915129f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] No waiting events found dispatching network-vif-unplugged-61dfc1b3-64a2-42b8-9ae2-a87bc503acf2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:49:40 compute-2 nova_compute[226829]: 2026-01-31 08:49:40.586 226833 DEBUG nova.compute.manager [req-0c03983e-0c39-4a5f-80d6-e43ab72cade0 req-49089954-5358-4045-8882-2026b915129f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Received event network-vif-unplugged-61dfc1b3-64a2-42b8-9ae2-a87bc503acf2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:49:40 compute-2 sudo[322022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:49:40 compute-2 sudo[322022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:40 compute-2 sudo[322022]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:40 compute-2 sudo[322047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:49:40 compute-2 sudo[322047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:40 compute-2 sudo[322047]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:41.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:42 compute-2 nova_compute[226829]: 2026-01-31 08:49:42.057 226833 DEBUG nova.network.neutron [-] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:49:42 compute-2 ceph-mon[77282]: pgmap v3506: 305 pgs: 305 active+clean; 230 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 31 08:49:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:42.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:43.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:43 compute-2 nova_compute[226829]: 2026-01-31 08:49:43.276 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:43 compute-2 nova_compute[226829]: 2026-01-31 08:49:43.910 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:44 compute-2 ceph-mon[77282]: pgmap v3507: 305 pgs: 305 active+clean; 230 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 321 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Jan 31 08:49:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:49:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:44.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:49:44 compute-2 nova_compute[226829]: 2026-01-31 08:49:44.334 226833 DEBUG nova.compute.manager [req-76760bda-cc65-4a1d-9098-70d2de9f8272 req-8fe1f3a9-9d10-411b-b7c5-882c4d5e1dda 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Received event network-vif-deleted-61dfc1b3-64a2-42b8-9ae2-a87bc503acf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:49:44 compute-2 nova_compute[226829]: 2026-01-31 08:49:44.334 226833 INFO nova.compute.manager [req-76760bda-cc65-4a1d-9098-70d2de9f8272 req-8fe1f3a9-9d10-411b-b7c5-882c4d5e1dda 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Neutron deleted interface 61dfc1b3-64a2-42b8-9ae2-a87bc503acf2; detaching it from the instance and deleting it from the info cache
Jan 31 08:49:44 compute-2 nova_compute[226829]: 2026-01-31 08:49:44.334 226833 DEBUG nova.network.neutron [req-76760bda-cc65-4a1d-9098-70d2de9f8272 req-8fe1f3a9-9d10-411b-b7c5-882c4d5e1dda 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:49:44 compute-2 nova_compute[226829]: 2026-01-31 08:49:44.912 226833 DEBUG nova.compute.manager [req-4e413896-2f47-44de-98c8-2da90004e74c req-5ce733f4-50ba-4c47-b277-243c9d33e7ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Received event network-vif-plugged-61dfc1b3-64a2-42b8-9ae2-a87bc503acf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:49:44 compute-2 nova_compute[226829]: 2026-01-31 08:49:44.913 226833 DEBUG oslo_concurrency.lockutils [req-4e413896-2f47-44de-98c8-2da90004e74c req-5ce733f4-50ba-4c47-b277-243c9d33e7ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0c25d280-5868-4d4d-8665-797d3463ed08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:49:44 compute-2 nova_compute[226829]: 2026-01-31 08:49:44.913 226833 DEBUG oslo_concurrency.lockutils [req-4e413896-2f47-44de-98c8-2da90004e74c req-5ce733f4-50ba-4c47-b277-243c9d33e7ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0c25d280-5868-4d4d-8665-797d3463ed08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:49:44 compute-2 nova_compute[226829]: 2026-01-31 08:49:44.913 226833 DEBUG oslo_concurrency.lockutils [req-4e413896-2f47-44de-98c8-2da90004e74c req-5ce733f4-50ba-4c47-b277-243c9d33e7ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0c25d280-5868-4d4d-8665-797d3463ed08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:49:44 compute-2 nova_compute[226829]: 2026-01-31 08:49:44.913 226833 DEBUG nova.compute.manager [req-4e413896-2f47-44de-98c8-2da90004e74c req-5ce733f4-50ba-4c47-b277-243c9d33e7ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] No waiting events found dispatching network-vif-plugged-61dfc1b3-64a2-42b8-9ae2-a87bc503acf2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:49:44 compute-2 nova_compute[226829]: 2026-01-31 08:49:44.914 226833 WARNING nova.compute.manager [req-4e413896-2f47-44de-98c8-2da90004e74c req-5ce733f4-50ba-4c47-b277-243c9d33e7ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Received unexpected event network-vif-plugged-61dfc1b3-64a2-42b8-9ae2-a87bc503acf2 for instance with vm_state active and task_state deleting.
Jan 31 08:49:44 compute-2 nova_compute[226829]: 2026-01-31 08:49:44.915 226833 INFO nova.compute.manager [-] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Took 4.58 seconds to deallocate network for instance.
Jan 31 08:49:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:49:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:45.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:49:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:45 compute-2 nova_compute[226829]: 2026-01-31 08:49:45.509 226833 DEBUG nova.compute.manager [req-76760bda-cc65-4a1d-9098-70d2de9f8272 req-8fe1f3a9-9d10-411b-b7c5-882c4d5e1dda 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Detach interface failed, port_id=61dfc1b3-64a2-42b8-9ae2-a87bc503acf2, reason: Instance 0c25d280-5868-4d4d-8665-797d3463ed08 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:49:45 compute-2 nova_compute[226829]: 2026-01-31 08:49:45.846 226833 DEBUG oslo_concurrency.lockutils [None req-44ca5503-5ec1-427a-9380-0dcb4ff6e9df c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:49:45 compute-2 nova_compute[226829]: 2026-01-31 08:49:45.846 226833 DEBUG oslo_concurrency.lockutils [None req-44ca5503-5ec1-427a-9380-0dcb4ff6e9df c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:49:46 compute-2 ceph-mon[77282]: pgmap v3508: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Jan 31 08:49:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:46.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:47.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:48 compute-2 ceph-mon[77282]: pgmap v3509: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 127 KiB/s wr, 36 op/s
Jan 31 08:49:48 compute-2 nova_compute[226829]: 2026-01-31 08:49:48.278 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:49:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:48.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:49:48 compute-2 nova_compute[226829]: 2026-01-31 08:49:48.516 226833 DEBUG oslo_concurrency.processutils [None req-44ca5503-5ec1-427a-9380-0dcb4ff6e9df c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:49:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:49:48 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3491448071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:49:48 compute-2 nova_compute[226829]: 2026-01-31 08:49:48.911 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:48 compute-2 nova_compute[226829]: 2026-01-31 08:49:48.913 226833 DEBUG oslo_concurrency.processutils [None req-44ca5503-5ec1-427a-9380-0dcb4ff6e9df c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:49:48 compute-2 nova_compute[226829]: 2026-01-31 08:49:48.918 226833 DEBUG nova.compute.provider_tree [None req-44ca5503-5ec1-427a-9380-0dcb4ff6e9df c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:49:48 compute-2 nova_compute[226829]: 2026-01-31 08:49:48.993 226833 DEBUG nova.scheduler.client.report [None req-44ca5503-5ec1-427a-9380-0dcb4ff6e9df c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:49:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:49.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3491448071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:49:49 compute-2 nova_compute[226829]: 2026-01-31 08:49:49.334 226833 DEBUG oslo_concurrency.lockutils [None req-44ca5503-5ec1-427a-9380-0dcb4ff6e9df c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 3.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:49:49 compute-2 nova_compute[226829]: 2026-01-31 08:49:49.612 226833 INFO nova.scheduler.client.report [None req-44ca5503-5ec1-427a-9380-0dcb4ff6e9df c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Deleted allocations for instance 0c25d280-5868-4d4d-8665-797d3463ed08
Jan 31 08:49:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:50.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:50 compute-2 ceph-mon[77282]: pgmap v3510: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Jan 31 08:49:50 compute-2 nova_compute[226829]: 2026-01-31 08:49:50.427 226833 DEBUG oslo_concurrency.lockutils [None req-44ca5503-5ec1-427a-9380-0dcb4ff6e9df c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "0c25d280-5868-4d4d-8665-797d3463ed08" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:49:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:49:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:51.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:49:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:49:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:52.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:49:52 compute-2 ceph-mon[77282]: pgmap v3511: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Jan 31 08:49:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:53.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:53 compute-2 nova_compute[226829]: 2026-01-31 08:49:53.280 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:53 compute-2 ceph-mon[77282]: pgmap v3512: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 1.3 KiB/s wr, 19 op/s
Jan 31 08:49:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2800592308' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:49:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2800592308' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:49:53 compute-2 nova_compute[226829]: 2026-01-31 08:49:53.502 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849378.500885, 0c25d280-5868-4d4d-8665-797d3463ed08 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:49:53 compute-2 nova_compute[226829]: 2026-01-31 08:49:53.502 226833 INFO nova.compute.manager [-] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] VM Stopped (Lifecycle Event)
Jan 31 08:49:53 compute-2 nova_compute[226829]: 2026-01-31 08:49:53.752 226833 DEBUG nova.compute.manager [None req-8f6858a8-c267-43dd-9e35-4ce956837a2a - - - - - -] [instance: 0c25d280-5868-4d4d-8665-797d3463ed08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:49:53 compute-2 nova_compute[226829]: 2026-01-31 08:49:53.913 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:54.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:49:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:55.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:49:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:55 compute-2 sudo[322101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:49:55 compute-2 sudo[322101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:55 compute-2 sudo[322101]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:55 compute-2 sudo[322126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:49:55 compute-2 sudo[322126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:55 compute-2 sudo[322126]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:55 compute-2 sudo[322151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:49:55 compute-2 sudo[322151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:55 compute-2 sudo[322151]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:55 compute-2 ceph-mon[77282]: pgmap v3513: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 1.3 KiB/s wr, 19 op/s
Jan 31 08:49:55 compute-2 sudo[322176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:49:55 compute-2 sudo[322176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:56 compute-2 sudo[322176]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:49:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:56.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:49:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:49:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:49:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:49:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:49:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:49:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:49:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:57.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:57 compute-2 podman[322231]: 2026-01-31 08:49:57.184845959 +0000 UTC m=+0.066473407 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:49:57 compute-2 nova_compute[226829]: 2026-01-31 08:49:57.728 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:58 compute-2 ceph-mon[77282]: pgmap v3514: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s wr, 0 op/s
Jan 31 08:49:58 compute-2 nova_compute[226829]: 2026-01-31 08:49:58.282 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:58.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:58 compute-2 nova_compute[226829]: 2026-01-31 08:49:58.914 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:49:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:49:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:59.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:50:00 compute-2 ceph-mon[77282]: pgmap v3515: 305 pgs: 305 active+clean; 183 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 682 B/s wr, 0 op/s
Jan 31 08:50:00 compute-2 ceph-mon[77282]: overall HEALTH_OK
Jan 31 08:50:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:00.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:01.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:01 compute-2 sudo[322261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:50:01 compute-2 sudo[322261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:01 compute-2 sudo[322261]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:01 compute-2 sudo[322286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:50:01 compute-2 sudo[322286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:01 compute-2 sudo[322286]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:01 compute-2 nova_compute[226829]: 2026-01-31 08:50:01.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:50:02 compute-2 ceph-mon[77282]: pgmap v3516: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.9 KiB/s wr, 28 op/s
Jan 31 08:50:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:50:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:50:02 compute-2 sudo[322312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:50:02 compute-2 sudo[322312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:02 compute-2 sudo[322312]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:02 compute-2 sudo[322337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:50:02 compute-2 sudo[322337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:02 compute-2 sudo[322337]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:02.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:03.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:03 compute-2 nova_compute[226829]: 2026-01-31 08:50:03.284 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:03 compute-2 nova_compute[226829]: 2026-01-31 08:50:03.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:50:03 compute-2 nova_compute[226829]: 2026-01-31 08:50:03.915 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:04 compute-2 ceph-mon[77282]: pgmap v3517: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.9 KiB/s wr, 28 op/s
Jan 31 08:50:04 compute-2 podman[322363]: 2026-01-31 08:50:04.156632276 +0000 UTC m=+0.042614349 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 31 08:50:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:50:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:04.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:50:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:05.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:05 compute-2 nova_compute[226829]: 2026-01-31 08:50:05.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:50:05 compute-2 nova_compute[226829]: 2026-01-31 08:50:05.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:50:05 compute-2 nova_compute[226829]: 2026-01-31 08:50:05.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:50:05 compute-2 nova_compute[226829]: 2026-01-31 08:50:05.828 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:50:05 compute-2 nova_compute[226829]: 2026-01-31 08:50:05.829 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:50:05 compute-2 nova_compute[226829]: 2026-01-31 08:50:05.829 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:50:06 compute-2 ceph-mon[77282]: pgmap v3518: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.9 KiB/s wr, 28 op/s
Jan 31 08:50:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:06.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:50:06.928 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:50:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:50:06.929 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:50:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:50:06.929 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:50:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/71149161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:50:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2492933013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:50:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:07.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:08 compute-2 ceph-mon[77282]: pgmap v3519: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.9 KiB/s wr, 28 op/s
Jan 31 08:50:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2986454865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:50:08 compute-2 nova_compute[226829]: 2026-01-31 08:50:08.286 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:08.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:08 compute-2 nova_compute[226829]: 2026-01-31 08:50:08.917 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:50:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:09.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:50:10 compute-2 ceph-mon[77282]: pgmap v3520: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 08:50:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:10.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:10 compute-2 nova_compute[226829]: 2026-01-31 08:50:10.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:50:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:11.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:12 compute-2 ceph-mon[77282]: pgmap v3521: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 08:50:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4128509544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:50:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:12.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:50:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:13.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:50:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1541487575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:50:13 compute-2 nova_compute[226829]: 2026-01-31 08:50:13.287 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:13 compute-2 nova_compute[226829]: 2026-01-31 08:50:13.918 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:14 compute-2 ceph-mon[77282]: pgmap v3522: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:14.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:14 compute-2 nova_compute[226829]: 2026-01-31 08:50:14.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:50:14 compute-2 nova_compute[226829]: 2026-01-31 08:50:14.608 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:50:14 compute-2 nova_compute[226829]: 2026-01-31 08:50:14.608 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:50:14 compute-2 nova_compute[226829]: 2026-01-31 08:50:14.609 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:50:14 compute-2 nova_compute[226829]: 2026-01-31 08:50:14.609 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:50:14 compute-2 nova_compute[226829]: 2026-01-31 08:50:14.609 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:50:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:50:15 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1129069942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:50:15 compute-2 nova_compute[226829]: 2026-01-31 08:50:15.046 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:50:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:15.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:15 compute-2 nova_compute[226829]: 2026-01-31 08:50:15.175 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:50:15 compute-2 nova_compute[226829]: 2026-01-31 08:50:15.176 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4093MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:50:15 compute-2 nova_compute[226829]: 2026-01-31 08:50:15.177 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:50:15 compute-2 nova_compute[226829]: 2026-01-31 08:50:15.177 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:50:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1129069942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:50:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:15 compute-2 nova_compute[226829]: 2026-01-31 08:50:15.804 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:50:15 compute-2 nova_compute[226829]: 2026-01-31 08:50:15.805 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:50:15 compute-2 nova_compute[226829]: 2026-01-31 08:50:15.833 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:50:16 compute-2 ceph-mon[77282]: pgmap v3523: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:50:16 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1183231244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:50:16 compute-2 nova_compute[226829]: 2026-01-31 08:50:16.250 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:50:16 compute-2 nova_compute[226829]: 2026-01-31 08:50:16.255 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:50:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:16.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:16 compute-2 nova_compute[226829]: 2026-01-31 08:50:16.557 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:50:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:17.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1183231244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:50:17 compute-2 nova_compute[226829]: 2026-01-31 08:50:17.410 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:50:17 compute-2 nova_compute[226829]: 2026-01-31 08:50:17.411 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.233s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:50:18 compute-2 ceph-mon[77282]: pgmap v3524: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:18 compute-2 nova_compute[226829]: 2026-01-31 08:50:18.289 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:18.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:18 compute-2 nova_compute[226829]: 2026-01-31 08:50:18.920 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:19.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:20 compute-2 ceph-mon[77282]: pgmap v3525: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:20.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:21.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:21 compute-2 sudo[322437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:50:21 compute-2 sudo[322437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:21 compute-2 sudo[322437]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:21 compute-2 sudo[322462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:50:21 compute-2 sudo[322462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:21 compute-2 sudo[322462]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:22 compute-2 ceph-mon[77282]: pgmap v3526: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:22.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:23.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:23 compute-2 nova_compute[226829]: 2026-01-31 08:50:23.291 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:23 compute-2 nova_compute[226829]: 2026-01-31 08:50:23.920 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:24.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:24 compute-2 ceph-mon[77282]: pgmap v3527: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:50:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:25.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:50:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:26.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:26 compute-2 ceph-mon[77282]: pgmap v3528: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:27.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:28 compute-2 podman[322492]: 2026-01-31 08:50:28.181404529 +0000 UTC m=+0.069284434 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:50:28 compute-2 nova_compute[226829]: 2026-01-31 08:50:28.293 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:28.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:28 compute-2 ceph-mon[77282]: pgmap v3529: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:50:28.724 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=91, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=90) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:50:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:50:28.725 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:50:28 compute-2 nova_compute[226829]: 2026-01-31 08:50:28.776 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:28 compute-2 nova_compute[226829]: 2026-01-31 08:50:28.921 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:50:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:29.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:50:29 compute-2 ceph-mon[77282]: pgmap v3530: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:30.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:50:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:31.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:50:31 compute-2 ceph-mon[77282]: pgmap v3531: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:32.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:32 compute-2 nova_compute[226829]: 2026-01-31 08:50:32.410 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:50:32 compute-2 nova_compute[226829]: 2026-01-31 08:50:32.411 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:50:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:33.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:33 compute-2 nova_compute[226829]: 2026-01-31 08:50:33.295 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:33 compute-2 nova_compute[226829]: 2026-01-31 08:50:33.923 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:34 compute-2 ceph-mon[77282]: pgmap v3532: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:34.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:35.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:35 compute-2 podman[322522]: 2026-01-31 08:50:35.151908192 +0000 UTC m=+0.043589936 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:50:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:50:35.727 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '91'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:50:36 compute-2 ceph-mon[77282]: pgmap v3533: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:36.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:37.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:38 compute-2 ceph-mon[77282]: pgmap v3534: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:38 compute-2 nova_compute[226829]: 2026-01-31 08:50:38.295 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:50:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:38.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:50:38 compute-2 nova_compute[226829]: 2026-01-31 08:50:38.973 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:50:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:39.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:50:40 compute-2 ceph-mon[77282]: pgmap v3535: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:40.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:50:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:41.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:50:41 compute-2 sudo[322544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:50:41 compute-2 sudo[322544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:41 compute-2 sudo[322544]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:41 compute-2 sudo[322569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:50:41 compute-2 sudo[322569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:41 compute-2 sudo[322569]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:50:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:42.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:50:42 compute-2 ceph-mon[77282]: pgmap v3536: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:50:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:43.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:50:43 compute-2 nova_compute[226829]: 2026-01-31 08:50:43.346 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:43 compute-2 ceph-mon[77282]: pgmap v3537: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:43 compute-2 nova_compute[226829]: 2026-01-31 08:50:43.975 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:44.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/20990812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:50:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:45.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:46 compute-2 ceph-mon[77282]: pgmap v3538: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:46.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:50:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:47.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:50:48 compute-2 ceph-mon[77282]: pgmap v3539: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:48 compute-2 nova_compute[226829]: 2026-01-31 08:50:48.348 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:48.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:48 compute-2 nova_compute[226829]: 2026-01-31 08:50:48.977 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:49.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:50 compute-2 ceph-mon[77282]: pgmap v3540: 305 pgs: 305 active+clean; 137 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.6 KiB/s rd, 602 KiB/s wr, 12 op/s
Jan 31 08:50:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:50.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:51.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:52 compute-2 ceph-mon[77282]: pgmap v3541: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:50:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:50:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:52.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:50:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:53.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:53 compute-2 nova_compute[226829]: 2026-01-31 08:50:53.350 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:50:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/709796620' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:50:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:50:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/709796620' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:50:53 compute-2 nova_compute[226829]: 2026-01-31 08:50:53.979 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:54 compute-2 ceph-mon[77282]: pgmap v3542: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:50:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/709796620' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:50:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/709796620' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:50:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:54.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:55.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:55 compute-2 ovn_controller[133834]: 2026-01-31T08:50:55Z|00784|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Jan 31 08:50:56 compute-2 ceph-mon[77282]: pgmap v3543: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:50:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:56.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:50:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:57.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:50:58 compute-2 ceph-mon[77282]: pgmap v3544: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:50:58 compute-2 nova_compute[226829]: 2026-01-31 08:50:58.352 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:58.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:58 compute-2 nova_compute[226829]: 2026-01-31 08:50:58.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:50:58 compute-2 nova_compute[226829]: 2026-01-31 08:50:58.980 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:50:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:59.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:59 compute-2 podman[322603]: 2026-01-31 08:50:59.193743368 +0000 UTC m=+0.085573257 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:51:00 compute-2 ceph-mon[77282]: pgmap v3545: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:51:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:00.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:51:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:01.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:51:01 compute-2 sudo[322631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:51:01 compute-2 sudo[322631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:01 compute-2 sudo[322631]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:01 compute-2 sudo[322656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:51:01 compute-2 sudo[322656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:01 compute-2 sudo[322656]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:02 compute-2 sudo[322682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:51:02 compute-2 sudo[322682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:02 compute-2 sudo[322682]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:02 compute-2 sudo[322707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:51:02 compute-2 sudo[322707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:02 compute-2 sudo[322707]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:02 compute-2 sudo[322732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:51:02 compute-2 sudo[322732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:02 compute-2 sudo[322732]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:02 compute-2 ceph-mon[77282]: pgmap v3546: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.6 KiB/s rd, 1.2 MiB/s wr, 14 op/s
Jan 31 08:51:02 compute-2 sudo[322757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:51:02 compute-2 sudo[322757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:02.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:02 compute-2 sudo[322757]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:51:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:03.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:51:03 compute-2 nova_compute[226829]: 2026-01-31 08:51:03.353 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:03 compute-2 nova_compute[226829]: 2026-01-31 08:51:03.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:51:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:51:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:51:03 compute-2 ceph-mon[77282]: pgmap v3547: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:51:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/802492447' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:51:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/533740819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:51:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:51:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:51:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:51:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:51:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:51:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:51:03 compute-2 nova_compute[226829]: 2026-01-31 08:51:03.981 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:04.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:04 compute-2 nova_compute[226829]: 2026-01-31 08:51:04.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:51:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:05.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:06 compute-2 ceph-mon[77282]: pgmap v3548: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 12 KiB/s wr, 0 op/s
Jan 31 08:51:06 compute-2 podman[322815]: 2026-01-31 08:51:06.14951934 +0000 UTC m=+0.037542702 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:51:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:06.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:06 compute-2 nova_compute[226829]: 2026-01-31 08:51:06.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:51:06 compute-2 nova_compute[226829]: 2026-01-31 08:51:06.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:51:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:51:06.929 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:51:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:51:06.929 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:51:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:51:06.930 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:51:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/900598782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:51:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:07.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:07 compute-2 nova_compute[226829]: 2026-01-31 08:51:07.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:51:07 compute-2 nova_compute[226829]: 2026-01-31 08:51:07.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:51:07 compute-2 nova_compute[226829]: 2026-01-31 08:51:07.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:51:07 compute-2 nova_compute[226829]: 2026-01-31 08:51:07.513 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:51:08 compute-2 ceph-mon[77282]: pgmap v3549: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 KiB/s rd, 12 KiB/s wr, 2 op/s
Jan 31 08:51:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2677413607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:51:08 compute-2 nova_compute[226829]: 2026-01-31 08:51:08.355 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:08.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:08 compute-2 nova_compute[226829]: 2026-01-31 08:51:08.982 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:09.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:10 compute-2 sudo[322837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:51:10 compute-2 sudo[322837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:10 compute-2 sudo[322837]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:10 compute-2 sudo[322862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:51:10 compute-2 sudo[322862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:10 compute-2 sudo[322862]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:10 compute-2 ceph-mon[77282]: pgmap v3550: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 648 KiB/s rd, 12 KiB/s wr, 30 op/s
Jan 31 08:51:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:51:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:51:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:51:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:10.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:51:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:11.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:11 compute-2 nova_compute[226829]: 2026-01-31 08:51:11.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:51:11 compute-2 ceph-mon[77282]: pgmap v3551: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 08:51:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3520348176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:51:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:12.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1222370889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:51:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:13.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:13 compute-2 nova_compute[226829]: 2026-01-31 08:51:13.357 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:13 compute-2 nova_compute[226829]: 2026-01-31 08:51:13.982 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:14.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:14 compute-2 ceph-mon[77282]: pgmap v3552: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 08:51:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:51:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:15.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:51:15 compute-2 ceph-mon[77282]: pgmap v3553: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 08:51:15 compute-2 nova_compute[226829]: 2026-01-31 08:51:15.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:51:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:15 compute-2 nova_compute[226829]: 2026-01-31 08:51:15.595 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:51:15 compute-2 nova_compute[226829]: 2026-01-31 08:51:15.596 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:51:15 compute-2 nova_compute[226829]: 2026-01-31 08:51:15.596 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:51:15 compute-2 nova_compute[226829]: 2026-01-31 08:51:15.596 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:51:15 compute-2 nova_compute[226829]: 2026-01-31 08:51:15.596 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:51:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:51:15 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/971336384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:51:16 compute-2 nova_compute[226829]: 2026-01-31 08:51:16.010 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:51:16 compute-2 nova_compute[226829]: 2026-01-31 08:51:16.152 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:51:16 compute-2 nova_compute[226829]: 2026-01-31 08:51:16.153 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4140MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:51:16 compute-2 nova_compute[226829]: 2026-01-31 08:51:16.154 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:51:16 compute-2 nova_compute[226829]: 2026-01-31 08:51:16.154 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:51:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:16.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/971336384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:51:16 compute-2 nova_compute[226829]: 2026-01-31 08:51:16.556 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:51:16 compute-2 nova_compute[226829]: 2026-01-31 08:51:16.556 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:51:16 compute-2 nova_compute[226829]: 2026-01-31 08:51:16.575 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:51:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:51:16 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3154236841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:51:17 compute-2 nova_compute[226829]: 2026-01-31 08:51:17.012 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:51:17 compute-2 nova_compute[226829]: 2026-01-31 08:51:17.017 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:51:17 compute-2 nova_compute[226829]: 2026-01-31 08:51:17.154 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:51:17 compute-2 nova_compute[226829]: 2026-01-31 08:51:17.156 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:51:17 compute-2 nova_compute[226829]: 2026-01-31 08:51:17.156 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:51:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:17.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:17 compute-2 ceph-mon[77282]: pgmap v3554: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 426 B/s wr, 73 op/s
Jan 31 08:51:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3154236841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:51:18 compute-2 nova_compute[226829]: 2026-01-31 08:51:18.359 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:18.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:18 compute-2 nova_compute[226829]: 2026-01-31 08:51:18.983 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:19.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:20 compute-2 ceph-mon[77282]: pgmap v3555: 305 pgs: 305 active+clean; 171 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 279 KiB/s wr, 97 op/s
Jan 31 08:51:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:20.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2246006372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:51:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:21.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:21 compute-2 sudo[322936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:51:21 compute-2 sudo[322936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:21 compute-2 sudo[322936]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:21 compute-2 sudo[322962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:51:21 compute-2 sudo[322962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:21 compute-2 sudo[322962]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:22 compute-2 ceph-mon[77282]: pgmap v3556: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Jan 31 08:51:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:22.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:23.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:23 compute-2 nova_compute[226829]: 2026-01-31 08:51:23.362 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:24 compute-2 nova_compute[226829]: 2026-01-31 08:51:24.026 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:24 compute-2 ceph-mon[77282]: pgmap v3557: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 08:51:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:24.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:25.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:26 compute-2 ceph-mon[77282]: pgmap v3558: 305 pgs: 305 active+clean; 238 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 3.8 MiB/s wr, 88 op/s
Jan 31 08:51:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:26.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:27.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:28 compute-2 ceph-mon[77282]: pgmap v3559: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #172. Immutable memtables: 0.
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:51:28.214225) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 172
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849488214295, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 1841, "num_deletes": 251, "total_data_size": 4298623, "memory_usage": 4352176, "flush_reason": "Manual Compaction"}
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #173: started
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849488226469, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 173, "file_size": 2801808, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83042, "largest_seqno": 84878, "table_properties": {"data_size": 2794290, "index_size": 4460, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15999, "raw_average_key_size": 20, "raw_value_size": 2779136, "raw_average_value_size": 3517, "num_data_blocks": 196, "num_entries": 790, "num_filter_entries": 790, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849324, "oldest_key_time": 1769849324, "file_creation_time": 1769849488, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 12292 microseconds, and 4937 cpu microseconds.
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:51:28.226513) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #173: 2801808 bytes OK
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:51:28.226531) [db/memtable_list.cc:519] [default] Level-0 commit table #173 started
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:51:28.228038) [db/memtable_list.cc:722] [default] Level-0 commit table #173: memtable #1 done
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:51:28.228052) EVENT_LOG_v1 {"time_micros": 1769849488228047, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:51:28.228069) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 4290393, prev total WAL file size 4290393, number of live WAL files 2.
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000169.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:51:28.228805) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [173(2736KB)], [171(10MB)]
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849488228872, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [173], "files_L6": [171], "score": -1, "input_data_size": 13678899, "oldest_snapshot_seqno": -1}
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #174: 10409 keys, 11723616 bytes, temperature: kUnknown
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849488286160, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 174, "file_size": 11723616, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11659377, "index_size": 37118, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26053, "raw_key_size": 275322, "raw_average_key_size": 26, "raw_value_size": 11480198, "raw_average_value_size": 1102, "num_data_blocks": 1403, "num_entries": 10409, "num_filter_entries": 10409, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769849488, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 174, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:51:28.286442) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 11723616 bytes
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:51:28.287720) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 238.3 rd, 204.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 10.4 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(9.1) write-amplify(4.2) OK, records in: 10926, records dropped: 517 output_compression: NoCompression
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:51:28.287740) EVENT_LOG_v1 {"time_micros": 1769849488287731, "job": 110, "event": "compaction_finished", "compaction_time_micros": 57390, "compaction_time_cpu_micros": 25895, "output_level": 6, "num_output_files": 1, "total_output_size": 11723616, "num_input_records": 10926, "num_output_records": 10409, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849488288255, "job": 110, "event": "table_file_deletion", "file_number": 173}
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000171.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849488289374, "job": 110, "event": "table_file_deletion", "file_number": 171}
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:51:28.228714) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:51:28.289497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:51:28.289503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:51:28.289504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:51:28.289506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:51:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:51:28.289507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:51:28 compute-2 nova_compute[226829]: 2026-01-31 08:51:28.364 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:28.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:29 compute-2 nova_compute[226829]: 2026-01-31 08:51:29.027 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:29.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:30 compute-2 podman[322992]: 2026-01-31 08:51:30.212389627 +0000 UTC m=+0.102634770 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 08:51:30 compute-2 ceph-mon[77282]: pgmap v3560: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Jan 31 08:51:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:30.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:31.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:32 compute-2 nova_compute[226829]: 2026-01-31 08:51:32.158 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:51:32 compute-2 nova_compute[226829]: 2026-01-31 08:51:32.159 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:51:32 compute-2 ceph-mon[77282]: pgmap v3561: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 181 KiB/s rd, 3.6 MiB/s wr, 65 op/s
Jan 31 08:51:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2942672263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:51:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2152520865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:51:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:32.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:32 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:51:32.698 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=92, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=91) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:51:32 compute-2 nova_compute[226829]: 2026-01-31 08:51:32.699 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:32 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:51:32.700 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:51:32 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:51:32.701 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '92'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:51:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:33.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:33 compute-2 nova_compute[226829]: 2026-01-31 08:51:33.365 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:34 compute-2 nova_compute[226829]: 2026-01-31 08:51:34.029 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:34 compute-2 ceph-mon[77282]: pgmap v3562: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:51:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:34.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:35.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:36 compute-2 ceph-mon[77282]: pgmap v3563: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Jan 31 08:51:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:36.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:37 compute-2 podman[323023]: 2026-01-31 08:51:37.174206643 +0000 UTC m=+0.066454746 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:51:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:51:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:37.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:51:38 compute-2 ceph-mon[77282]: pgmap v3564: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 113 KiB/s wr, 6 op/s
Jan 31 08:51:38 compute-2 nova_compute[226829]: 2026-01-31 08:51:38.401 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:38.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:39 compute-2 nova_compute[226829]: 2026-01-31 08:51:39.029 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:39.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:40 compute-2 ceph-mon[77282]: pgmap v3565: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.7 KiB/s rd, 19 KiB/s wr, 10 op/s
Jan 31 08:51:40 compute-2 nova_compute[226829]: 2026-01-31 08:51:40.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:51:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:40.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:41.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:41 compute-2 ceph-mon[77282]: pgmap v3566: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 20 KiB/s wr, 10 op/s
Jan 31 08:51:41 compute-2 sudo[323043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:51:41 compute-2 sudo[323043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:41 compute-2 sudo[323043]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:41 compute-2 sudo[323068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:51:41 compute-2 sudo[323068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:41 compute-2 sudo[323068]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:42.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:43.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:43 compute-2 nova_compute[226829]: 2026-01-31 08:51:43.403 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:44 compute-2 nova_compute[226829]: 2026-01-31 08:51:44.031 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:44 compute-2 ceph-mon[77282]: pgmap v3567: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 15 KiB/s wr, 10 op/s
Jan 31 08:51:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:44.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:45.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:46 compute-2 ceph-mon[77282]: pgmap v3568: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 15 KiB/s wr, 46 op/s
Jan 31 08:51:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:51:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:46.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:51:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:47.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:48 compute-2 nova_compute[226829]: 2026-01-31 08:51:48.405 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:48.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:48 compute-2 ceph-mon[77282]: pgmap v3569: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.1 KiB/s wr, 50 op/s
Jan 31 08:51:49 compute-2 nova_compute[226829]: 2026-01-31 08:51:49.033 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:49.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:50.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:50 compute-2 ceph-mon[77282]: pgmap v3570: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 KiB/s wr, 71 op/s
Jan 31 08:51:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:51.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:52.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:52 compute-2 ceph-mon[77282]: pgmap v3571: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.7 KiB/s wr, 64 op/s
Jan 31 08:51:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:53.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:53 compute-2 nova_compute[226829]: 2026-01-31 08:51:53.448 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:54 compute-2 nova_compute[226829]: 2026-01-31 08:51:54.035 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:54 compute-2 ceph-mon[77282]: pgmap v3572: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.4 KiB/s wr, 64 op/s
Jan 31 08:51:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2136031882' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:51:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2136031882' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:51:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:54.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:55.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:56 compute-2 ceph-mon[77282]: pgmap v3573: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.4 KiB/s wr, 66 op/s
Jan 31 08:51:56 compute-2 nova_compute[226829]: 2026-01-31 08:51:56.439 226833 DEBUG oslo_concurrency.lockutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Acquiring lock "c304efec-22bc-408c-adea-b06aaf5fbe40" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:51:56 compute-2 nova_compute[226829]: 2026-01-31 08:51:56.439 226833 DEBUG oslo_concurrency.lockutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "c304efec-22bc-408c-adea-b06aaf5fbe40" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:51:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:56.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:56 compute-2 nova_compute[226829]: 2026-01-31 08:51:56.525 226833 DEBUG nova.compute.manager [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:51:56 compute-2 nova_compute[226829]: 2026-01-31 08:51:56.730 226833 DEBUG oslo_concurrency.lockutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:51:56 compute-2 nova_compute[226829]: 2026-01-31 08:51:56.731 226833 DEBUG oslo_concurrency.lockutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:51:56 compute-2 nova_compute[226829]: 2026-01-31 08:51:56.737 226833 DEBUG nova.virt.hardware [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:51:56 compute-2 nova_compute[226829]: 2026-01-31 08:51:56.737 226833 INFO nova.compute.claims [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:51:56 compute-2 nova_compute[226829]: 2026-01-31 08:51:56.966 226833 DEBUG oslo_concurrency.processutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:51:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:57.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:51:57 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/82887673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:51:57 compute-2 nova_compute[226829]: 2026-01-31 08:51:57.441 226833 DEBUG oslo_concurrency.processutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:51:57 compute-2 nova_compute[226829]: 2026-01-31 08:51:57.446 226833 DEBUG nova.compute.provider_tree [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:51:57 compute-2 nova_compute[226829]: 2026-01-31 08:51:57.475 226833 DEBUG nova.scheduler.client.report [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:51:57 compute-2 nova_compute[226829]: 2026-01-31 08:51:57.586 226833 DEBUG oslo_concurrency.lockutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:51:57 compute-2 nova_compute[226829]: 2026-01-31 08:51:57.586 226833 DEBUG nova.compute.manager [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:51:57 compute-2 nova_compute[226829]: 2026-01-31 08:51:57.876 226833 DEBUG nova.compute.manager [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:51:57 compute-2 nova_compute[226829]: 2026-01-31 08:51:57.877 226833 DEBUG nova.network.neutron [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:51:57 compute-2 nova_compute[226829]: 2026-01-31 08:51:57.921 226833 INFO nova.virt.libvirt.driver [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:51:57 compute-2 nova_compute[226829]: 2026-01-31 08:51:57.966 226833 DEBUG nova.compute.manager [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:51:58 compute-2 nova_compute[226829]: 2026-01-31 08:51:58.114 226833 DEBUG nova.compute.manager [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:51:58 compute-2 nova_compute[226829]: 2026-01-31 08:51:58.115 226833 DEBUG nova.virt.libvirt.driver [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:51:58 compute-2 nova_compute[226829]: 2026-01-31 08:51:58.116 226833 INFO nova.virt.libvirt.driver [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Creating image(s)
Jan 31 08:51:58 compute-2 nova_compute[226829]: 2026-01-31 08:51:58.146 226833 DEBUG nova.storage.rbd_utils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] rbd image c304efec-22bc-408c-adea-b06aaf5fbe40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:51:58 compute-2 nova_compute[226829]: 2026-01-31 08:51:58.174 226833 DEBUG nova.storage.rbd_utils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] rbd image c304efec-22bc-408c-adea-b06aaf5fbe40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:51:58 compute-2 nova_compute[226829]: 2026-01-31 08:51:58.200 226833 DEBUG nova.storage.rbd_utils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] rbd image c304efec-22bc-408c-adea-b06aaf5fbe40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:51:58 compute-2 nova_compute[226829]: 2026-01-31 08:51:58.203 226833 DEBUG oslo_concurrency.processutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:51:58 compute-2 nova_compute[226829]: 2026-01-31 08:51:58.228 226833 DEBUG nova.policy [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5a37d71e432b45168339dde5abdbe7b6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '996284baaa2946258a0ab1be9a30d1f6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:51:58 compute-2 nova_compute[226829]: 2026-01-31 08:51:58.280 226833 DEBUG oslo_concurrency.processutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:51:58 compute-2 nova_compute[226829]: 2026-01-31 08:51:58.281 226833 DEBUG oslo_concurrency.lockutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:51:58 compute-2 nova_compute[226829]: 2026-01-31 08:51:58.282 226833 DEBUG oslo_concurrency.lockutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:51:58 compute-2 nova_compute[226829]: 2026-01-31 08:51:58.282 226833 DEBUG oslo_concurrency.lockutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:51:58 compute-2 ceph-mon[77282]: pgmap v3574: 305 pgs: 305 active+clean; 251 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 960 KiB/s rd, 443 KiB/s wr, 40 op/s
Jan 31 08:51:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/82887673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:51:58 compute-2 nova_compute[226829]: 2026-01-31 08:51:58.311 226833 DEBUG nova.storage.rbd_utils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] rbd image c304efec-22bc-408c-adea-b06aaf5fbe40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:51:58 compute-2 nova_compute[226829]: 2026-01-31 08:51:58.317 226833 DEBUG oslo_concurrency.processutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 c304efec-22bc-408c-adea-b06aaf5fbe40_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:51:58 compute-2 nova_compute[226829]: 2026-01-31 08:51:58.450 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:58.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:58 compute-2 nova_compute[226829]: 2026-01-31 08:51:58.619 226833 DEBUG oslo_concurrency.processutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 c304efec-22bc-408c-adea-b06aaf5fbe40_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:51:58 compute-2 nova_compute[226829]: 2026-01-31 08:51:58.677 226833 DEBUG nova.storage.rbd_utils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] resizing rbd image c304efec-22bc-408c-adea-b06aaf5fbe40_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:51:58 compute-2 nova_compute[226829]: 2026-01-31 08:51:58.779 226833 DEBUG nova.objects.instance [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lazy-loading 'migration_context' on Instance uuid c304efec-22bc-408c-adea-b06aaf5fbe40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:51:58 compute-2 nova_compute[226829]: 2026-01-31 08:51:58.909 226833 DEBUG nova.virt.libvirt.driver [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:51:58 compute-2 nova_compute[226829]: 2026-01-31 08:51:58.910 226833 DEBUG nova.virt.libvirt.driver [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Ensure instance console log exists: /var/lib/nova/instances/c304efec-22bc-408c-adea-b06aaf5fbe40/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:51:58 compute-2 nova_compute[226829]: 2026-01-31 08:51:58.910 226833 DEBUG oslo_concurrency.lockutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:51:58 compute-2 nova_compute[226829]: 2026-01-31 08:51:58.911 226833 DEBUG oslo_concurrency.lockutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:51:58 compute-2 nova_compute[226829]: 2026-01-31 08:51:58.911 226833 DEBUG oslo_concurrency.lockutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:51:59 compute-2 nova_compute[226829]: 2026-01-31 08:51:59.077 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:51:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:59.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:59 compute-2 nova_compute[226829]: 2026-01-31 08:51:59.591 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:52:00 compute-2 ceph-mon[77282]: pgmap v3575: 305 pgs: 305 active+clean; 257 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 842 KiB/s rd, 868 KiB/s wr, 54 op/s
Jan 31 08:52:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:52:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:00.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:01 compute-2 podman[323292]: 2026-01-31 08:52:01.179761984 +0000 UTC m=+0.069100609 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller)
Jan 31 08:52:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:01.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:01 compute-2 sudo[323318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:52:01 compute-2 sudo[323318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:01 compute-2 sudo[323318]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:01 compute-2 sudo[323343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:52:01 compute-2 sudo[323343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:01 compute-2 sudo[323343]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:01 compute-2 nova_compute[226829]: 2026-01-31 08:52:01.946 226833 DEBUG nova.network.neutron [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Successfully created port: f8858d9f-7fda-4961-8579-8b1536def97a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:52:02 compute-2 ceph-mon[77282]: pgmap v3576: 305 pgs: 305 active+clean; 259 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 359 KiB/s rd, 3.5 MiB/s wr, 105 op/s
Jan 31 08:52:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:02.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:52:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:03.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:52:03 compute-2 nova_compute[226829]: 2026-01-31 08:52:03.245 226833 DEBUG nova.network.neutron [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Successfully updated port: f8858d9f-7fda-4961-8579-8b1536def97a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:52:03 compute-2 nova_compute[226829]: 2026-01-31 08:52:03.278 226833 DEBUG oslo_concurrency.lockutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Acquiring lock "refresh_cache-c304efec-22bc-408c-adea-b06aaf5fbe40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:52:03 compute-2 nova_compute[226829]: 2026-01-31 08:52:03.279 226833 DEBUG oslo_concurrency.lockutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Acquired lock "refresh_cache-c304efec-22bc-408c-adea-b06aaf5fbe40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:52:03 compute-2 nova_compute[226829]: 2026-01-31 08:52:03.279 226833 DEBUG nova.network.neutron [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:52:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1349914857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:52:03 compute-2 nova_compute[226829]: 2026-01-31 08:52:03.450 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:03 compute-2 nova_compute[226829]: 2026-01-31 08:52:03.574 226833 DEBUG nova.network.neutron [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:52:04 compute-2 nova_compute[226829]: 2026-01-31 08:52:04.078 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:04 compute-2 nova_compute[226829]: 2026-01-31 08:52:04.197 226833 DEBUG nova.compute.manager [req-115cc44f-e343-4a9d-a77b-28afc373bee1 req-40cd29ab-c759-4bbd-be9f-54f238207ba7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Received event network-changed-f8858d9f-7fda-4961-8579-8b1536def97a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:52:04 compute-2 nova_compute[226829]: 2026-01-31 08:52:04.198 226833 DEBUG nova.compute.manager [req-115cc44f-e343-4a9d-a77b-28afc373bee1 req-40cd29ab-c759-4bbd-be9f-54f238207ba7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Refreshing instance network info cache due to event network-changed-f8858d9f-7fda-4961-8579-8b1536def97a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:52:04 compute-2 nova_compute[226829]: 2026-01-31 08:52:04.198 226833 DEBUG oslo_concurrency.lockutils [req-115cc44f-e343-4a9d-a77b-28afc373bee1 req-40cd29ab-c759-4bbd-be9f-54f238207ba7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-c304efec-22bc-408c-adea-b06aaf5fbe40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:52:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:04.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:04 compute-2 ceph-mon[77282]: pgmap v3577: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 372 KiB/s rd, 3.9 MiB/s wr, 121 op/s
Jan 31 08:52:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:52:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:05.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:52:05 compute-2 nova_compute[226829]: 2026-01-31 08:52:05.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:52:05 compute-2 nova_compute[226829]: 2026-01-31 08:52:05.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:52:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:52:05 compute-2 ceph-mon[77282]: pgmap v3578: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 372 KiB/s rd, 3.9 MiB/s wr, 121 op/s
Jan 31 08:52:06 compute-2 nova_compute[226829]: 2026-01-31 08:52:06.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:52:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:06.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:06.931 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:52:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:06.933 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:52:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:06.933 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:52:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/711020738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:52:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3662488891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:52:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:07.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.506 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.506 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.704 226833 DEBUG nova.network.neutron [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Updating instance_info_cache with network_info: [{"id": "f8858d9f-7fda-4961-8579-8b1536def97a", "address": "fa:16:3e:03:d8:76", "network": {"id": "5bcbd792-d73a-4177-857d-cca1f0044ec8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1702155659-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996284baaa2946258a0ab1be9a30d1f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8858d9f-7f", "ovs_interfaceid": "f8858d9f-7fda-4961-8579-8b1536def97a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.733 226833 DEBUG oslo_concurrency.lockutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Releasing lock "refresh_cache-c304efec-22bc-408c-adea-b06aaf5fbe40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.734 226833 DEBUG nova.compute.manager [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Instance network_info: |[{"id": "f8858d9f-7fda-4961-8579-8b1536def97a", "address": "fa:16:3e:03:d8:76", "network": {"id": "5bcbd792-d73a-4177-857d-cca1f0044ec8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1702155659-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996284baaa2946258a0ab1be9a30d1f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8858d9f-7f", "ovs_interfaceid": "f8858d9f-7fda-4961-8579-8b1536def97a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.734 226833 DEBUG oslo_concurrency.lockutils [req-115cc44f-e343-4a9d-a77b-28afc373bee1 req-40cd29ab-c759-4bbd-be9f-54f238207ba7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-c304efec-22bc-408c-adea-b06aaf5fbe40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.735 226833 DEBUG nova.network.neutron [req-115cc44f-e343-4a9d-a77b-28afc373bee1 req-40cd29ab-c759-4bbd-be9f-54f238207ba7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Refreshing network info cache for port f8858d9f-7fda-4961-8579-8b1536def97a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.737 226833 DEBUG nova.virt.libvirt.driver [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Start _get_guest_xml network_info=[{"id": "f8858d9f-7fda-4961-8579-8b1536def97a", "address": "fa:16:3e:03:d8:76", "network": {"id": "5bcbd792-d73a-4177-857d-cca1f0044ec8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1702155659-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996284baaa2946258a0ab1be9a30d1f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8858d9f-7f", "ovs_interfaceid": "f8858d9f-7fda-4961-8579-8b1536def97a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.741 226833 WARNING nova.virt.libvirt.driver [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.745 226833 DEBUG nova.virt.libvirt.host [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.746 226833 DEBUG nova.virt.libvirt.host [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.758 226833 DEBUG nova.virt.libvirt.host [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.759 226833 DEBUG nova.virt.libvirt.host [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.760 226833 DEBUG nova.virt.libvirt.driver [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.760 226833 DEBUG nova.virt.hardware [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.761 226833 DEBUG nova.virt.hardware [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.761 226833 DEBUG nova.virt.hardware [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.761 226833 DEBUG nova.virt.hardware [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.762 226833 DEBUG nova.virt.hardware [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.762 226833 DEBUG nova.virt.hardware [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.762 226833 DEBUG nova.virt.hardware [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.763 226833 DEBUG nova.virt.hardware [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.763 226833 DEBUG nova.virt.hardware [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.763 226833 DEBUG nova.virt.hardware [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.763 226833 DEBUG nova.virt.hardware [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:52:07 compute-2 nova_compute[226829]: 2026-01-31 08:52:07.767 226833 DEBUG oslo_concurrency.processutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:52:07 compute-2 ceph-mon[77282]: pgmap v3579: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 365 KiB/s rd, 3.9 MiB/s wr, 119 op/s
Jan 31 08:52:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:52:08 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2680479719' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:52:08 compute-2 podman[323392]: 2026-01-31 08:52:08.19529263 +0000 UTC m=+0.084123497 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.200 226833 DEBUG oslo_concurrency.processutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.231 226833 DEBUG nova.storage.rbd_utils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] rbd image c304efec-22bc-408c-adea-b06aaf5fbe40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.235 226833 DEBUG oslo_concurrency.processutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.459 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:52:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:08.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:52:08 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3279264857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.633 226833 DEBUG oslo_concurrency.processutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.634 226833 DEBUG nova.virt.libvirt.vif [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:51:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-445454519',display_name='tempest-TestSnapshotPattern-server-445454519',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-445454519',id=200,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGk1avDtDRCkfi38WDQyZMb5AgNzJzpa/E7afrTWkN843bVEm67ME3qOwcYUGN5nJ8L7GrnzGumfCdBmJYfKJwV+0Rfyn9QiAVoGTFowygEpRUI24B1nMsGC8mTo9YdR3w==',key_name='tempest-TestSnapshotPattern-1326482889',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='996284baaa2946258a0ab1be9a30d1f6',ramdisk_id='',reservation_id='r-pko5ijj3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-920311190',owner_user_name='tempest-TestSnapshotPattern-920311190-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:51:58Z,user_data=None,user_id='5a37d71e432b45168339dde5abdbe7b6',uuid=c304efec-22bc-408c-adea-b06aaf5fbe40,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f8858d9f-7fda-4961-8579-8b1536def97a", "address": "fa:16:3e:03:d8:76", "network": {"id": "5bcbd792-d73a-4177-857d-cca1f0044ec8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1702155659-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "996284baaa2946258a0ab1be9a30d1f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8858d9f-7f", "ovs_interfaceid": "f8858d9f-7fda-4961-8579-8b1536def97a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.635 226833 DEBUG nova.network.os_vif_util [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Converting VIF {"id": "f8858d9f-7fda-4961-8579-8b1536def97a", "address": "fa:16:3e:03:d8:76", "network": {"id": "5bcbd792-d73a-4177-857d-cca1f0044ec8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1702155659-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996284baaa2946258a0ab1be9a30d1f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8858d9f-7f", "ovs_interfaceid": "f8858d9f-7fda-4961-8579-8b1536def97a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.635 226833 DEBUG nova.network.os_vif_util [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:d8:76,bridge_name='br-int',has_traffic_filtering=True,id=f8858d9f-7fda-4961-8579-8b1536def97a,network=Network(5bcbd792-d73a-4177-857d-cca1f0044ec8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8858d9f-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.636 226833 DEBUG nova.objects.instance [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid c304efec-22bc-408c-adea-b06aaf5fbe40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.680 226833 DEBUG nova.virt.libvirt.driver [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:52:08 compute-2 nova_compute[226829]:   <uuid>c304efec-22bc-408c-adea-b06aaf5fbe40</uuid>
Jan 31 08:52:08 compute-2 nova_compute[226829]:   <name>instance-000000c8</name>
Jan 31 08:52:08 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:52:08 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:52:08 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <nova:name>tempest-TestSnapshotPattern-server-445454519</nova:name>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:52:07</nova:creationTime>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:52:08 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:52:08 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:52:08 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:52:08 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:52:08 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:52:08 compute-2 nova_compute[226829]:         <nova:user uuid="5a37d71e432b45168339dde5abdbe7b6">tempest-TestSnapshotPattern-920311190-project-member</nova:user>
Jan 31 08:52:08 compute-2 nova_compute[226829]:         <nova:project uuid="996284baaa2946258a0ab1be9a30d1f6">tempest-TestSnapshotPattern-920311190</nova:project>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:52:08 compute-2 nova_compute[226829]:         <nova:port uuid="f8858d9f-7fda-4961-8579-8b1536def97a">
Jan 31 08:52:08 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:52:08 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:52:08 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <system>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <entry name="serial">c304efec-22bc-408c-adea-b06aaf5fbe40</entry>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <entry name="uuid">c304efec-22bc-408c-adea-b06aaf5fbe40</entry>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     </system>
Jan 31 08:52:08 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:52:08 compute-2 nova_compute[226829]:   <os>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:   </os>
Jan 31 08:52:08 compute-2 nova_compute[226829]:   <features>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:   </features>
Jan 31 08:52:08 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:52:08 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:52:08 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/c304efec-22bc-408c-adea-b06aaf5fbe40_disk">
Jan 31 08:52:08 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       </source>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:52:08 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/c304efec-22bc-408c-adea-b06aaf5fbe40_disk.config">
Jan 31 08:52:08 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       </source>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:52:08 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:03:d8:76"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <target dev="tapf8858d9f-7f"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/c304efec-22bc-408c-adea-b06aaf5fbe40/console.log" append="off"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <video>
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     </video>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:52:08 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:52:08 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:52:08 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:52:08 compute-2 nova_compute[226829]: </domain>
Jan 31 08:52:08 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.681 226833 DEBUG nova.compute.manager [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Preparing to wait for external event network-vif-plugged-f8858d9f-7fda-4961-8579-8b1536def97a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.681 226833 DEBUG oslo_concurrency.lockutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Acquiring lock "c304efec-22bc-408c-adea-b06aaf5fbe40-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.681 226833 DEBUG oslo_concurrency.lockutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "c304efec-22bc-408c-adea-b06aaf5fbe40-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.681 226833 DEBUG oslo_concurrency.lockutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "c304efec-22bc-408c-adea-b06aaf5fbe40-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.682 226833 DEBUG nova.virt.libvirt.vif [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:51:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-445454519',display_name='tempest-TestSnapshotPattern-server-445454519',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-445454519',id=200,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGk1avDtDRCkfi38WDQyZMb5AgNzJzpa/E7afrTWkN843bVEm67ME3qOwcYUGN5nJ8L7GrnzGumfCdBmJYfKJwV+0Rfyn9QiAVoGTFowygEpRUI24B1nMsGC8mTo9YdR3w==',key_name='tempest-TestSnapshotPattern-1326482889',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='996284baaa2946258a0ab1be9a30d1f6',ramdisk_id='',reservation_id='r-pko5ijj3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-920311190',owner_user_name='tempest-TestSnapshotPattern-920311190-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:51:58Z,user_data=None,user_id='5a37d71e432b45168339dde5abdbe7b6',uuid=c304efec-22bc-408c-adea-b06aaf5fbe40,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f8858d9f-7fda-4961-8579-8b1536def97a", "address": "fa:16:3e:03:d8:76", "network": {"id": "5bcbd792-d73a-4177-857d-cca1f0044ec8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1702155659-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "996284baaa2946258a0ab1be9a30d1f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8858d9f-7f", "ovs_interfaceid": "f8858d9f-7fda-4961-8579-8b1536def97a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.683 226833 DEBUG nova.network.os_vif_util [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Converting VIF {"id": "f8858d9f-7fda-4961-8579-8b1536def97a", "address": "fa:16:3e:03:d8:76", "network": {"id": "5bcbd792-d73a-4177-857d-cca1f0044ec8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1702155659-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996284baaa2946258a0ab1be9a30d1f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8858d9f-7f", "ovs_interfaceid": "f8858d9f-7fda-4961-8579-8b1536def97a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.683 226833 DEBUG nova.network.os_vif_util [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:d8:76,bridge_name='br-int',has_traffic_filtering=True,id=f8858d9f-7fda-4961-8579-8b1536def97a,network=Network(5bcbd792-d73a-4177-857d-cca1f0044ec8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8858d9f-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.683 226833 DEBUG os_vif [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:d8:76,bridge_name='br-int',has_traffic_filtering=True,id=f8858d9f-7fda-4961-8579-8b1536def97a,network=Network(5bcbd792-d73a-4177-857d-cca1f0044ec8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8858d9f-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.684 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.684 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.685 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.690 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.691 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf8858d9f-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.691 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf8858d9f-7f, col_values=(('external_ids', {'iface-id': 'f8858d9f-7fda-4961-8579-8b1536def97a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:03:d8:76', 'vm-uuid': 'c304efec-22bc-408c-adea-b06aaf5fbe40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.693 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:08 compute-2 NetworkManager[48999]: <info>  [1769849528.6951] manager: (tapf8858d9f-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/391)
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.696 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.699 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.699 226833 INFO os_vif [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:d8:76,bridge_name='br-int',has_traffic_filtering=True,id=f8858d9f-7fda-4961-8579-8b1536def97a,network=Network(5bcbd792-d73a-4177-857d-cca1f0044ec8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8858d9f-7f')
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.792 226833 DEBUG nova.virt.libvirt.driver [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.792 226833 DEBUG nova.virt.libvirt.driver [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.793 226833 DEBUG nova.virt.libvirt.driver [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] No VIF found with MAC fa:16:3e:03:d8:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.794 226833 INFO nova.virt.libvirt.driver [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Using config drive
Jan 31 08:52:08 compute-2 nova_compute[226829]: 2026-01-31 08:52:08.820 226833 DEBUG nova.storage.rbd_utils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] rbd image c304efec-22bc-408c-adea-b06aaf5fbe40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:52:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2680479719' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:52:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3279264857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:52:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:52:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:09.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:52:10 compute-2 ceph-mon[77282]: pgmap v3580: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 264 KiB/s rd, 3.5 MiB/s wr, 109 op/s
Jan 31 08:52:10 compute-2 sudo[323475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:52:10 compute-2 sudo[323475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:10 compute-2 sudo[323475]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:10 compute-2 sudo[323500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:52:10 compute-2 sudo[323500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:10 compute-2 sudo[323500]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:10 compute-2 sudo[323525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:52:10 compute-2 sudo[323525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:10 compute-2 sudo[323525]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:10 compute-2 sudo[323550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 08:52:10 compute-2 sudo[323550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:52:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:10.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:10 compute-2 sudo[323550]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:10 compute-2 sudo[323598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:52:10 compute-2 sudo[323598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:10 compute-2 sudo[323598]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:10 compute-2 sudo[323623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:52:10 compute-2 sudo[323623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:10 compute-2 sudo[323623]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:10 compute-2 nova_compute[226829]: 2026-01-31 08:52:10.701 226833 INFO nova.virt.libvirt.driver [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Creating config drive at /var/lib/nova/instances/c304efec-22bc-408c-adea-b06aaf5fbe40/disk.config
Jan 31 08:52:10 compute-2 nova_compute[226829]: 2026-01-31 08:52:10.708 226833 DEBUG oslo_concurrency.processutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c304efec-22bc-408c-adea-b06aaf5fbe40/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp5zsiq1v6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:52:10 compute-2 sudo[323648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:52:10 compute-2 sudo[323648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:10 compute-2 sudo[323648]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:10 compute-2 sudo[323676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:52:10 compute-2 sudo[323676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:10 compute-2 nova_compute[226829]: 2026-01-31 08:52:10.848 226833 DEBUG oslo_concurrency.processutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c304efec-22bc-408c-adea-b06aaf5fbe40/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp5zsiq1v6" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:52:10 compute-2 nova_compute[226829]: 2026-01-31 08:52:10.888 226833 DEBUG nova.storage.rbd_utils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] rbd image c304efec-22bc-408c-adea-b06aaf5fbe40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:52:10 compute-2 nova_compute[226829]: 2026-01-31 08:52:10.892 226833 DEBUG oslo_concurrency.processutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c304efec-22bc-408c-adea-b06aaf5fbe40/disk.config c304efec-22bc-408c-adea-b06aaf5fbe40_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:52:11 compute-2 nova_compute[226829]: 2026-01-31 08:52:11.051 226833 DEBUG oslo_concurrency.processutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c304efec-22bc-408c-adea-b06aaf5fbe40/disk.config c304efec-22bc-408c-adea-b06aaf5fbe40_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:52:11 compute-2 nova_compute[226829]: 2026-01-31 08:52:11.053 226833 INFO nova.virt.libvirt.driver [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Deleting local config drive /var/lib/nova/instances/c304efec-22bc-408c-adea-b06aaf5fbe40/disk.config because it was imported into RBD.
Jan 31 08:52:11 compute-2 kernel: tapf8858d9f-7f: entered promiscuous mode
Jan 31 08:52:11 compute-2 NetworkManager[48999]: <info>  [1769849531.1099] manager: (tapf8858d9f-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/392)
Jan 31 08:52:11 compute-2 ovn_controller[133834]: 2026-01-31T08:52:11Z|00785|binding|INFO|Claiming lport f8858d9f-7fda-4961-8579-8b1536def97a for this chassis.
Jan 31 08:52:11 compute-2 ovn_controller[133834]: 2026-01-31T08:52:11Z|00786|binding|INFO|f8858d9f-7fda-4961-8579-8b1536def97a: Claiming fa:16:3e:03:d8:76 10.100.0.12
Jan 31 08:52:11 compute-2 nova_compute[226829]: 2026-01-31 08:52:11.110 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:11 compute-2 nova_compute[226829]: 2026-01-31 08:52:11.113 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:11 compute-2 nova_compute[226829]: 2026-01-31 08:52:11.117 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:11 compute-2 nova_compute[226829]: 2026-01-31 08:52:11.120 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:11.131 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:d8:76 10.100.0.12'], port_security=['fa:16:3e:03:d8:76 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c304efec-22bc-408c-adea-b06aaf5fbe40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bcbd792-d73a-4177-857d-cca1f0044ec8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '996284baaa2946258a0ab1be9a30d1f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bc7f4114-32da-4b90-986a-baaae713330e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6418939f-fc1e-4745-b073-d983621a4c28, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=f8858d9f-7fda-4961-8579-8b1536def97a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:11.133 143841 INFO neutron.agent.ovn.metadata.agent [-] Port f8858d9f-7fda-4961-8579-8b1536def97a in datapath 5bcbd792-d73a-4177-857d-cca1f0044ec8 bound to our chassis
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:11.135 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5bcbd792-d73a-4177-857d-cca1f0044ec8
Jan 31 08:52:11 compute-2 systemd-machined[195142]: New machine qemu-90-instance-000000c8.
Jan 31 08:52:11 compute-2 ovn_controller[133834]: 2026-01-31T08:52:11Z|00787|binding|INFO|Setting lport f8858d9f-7fda-4961-8579-8b1536def97a ovn-installed in OVS
Jan 31 08:52:11 compute-2 ovn_controller[133834]: 2026-01-31T08:52:11Z|00788|binding|INFO|Setting lport f8858d9f-7fda-4961-8579-8b1536def97a up in Southbound
Jan 31 08:52:11 compute-2 systemd-udevd[323771]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:52:11 compute-2 nova_compute[226829]: 2026-01-31 08:52:11.144 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:11.144 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[280861ce-56c6-434e-aacd-5a614f48e7cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:11.147 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5bcbd792-d1 in ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:11.149 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5bcbd792-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:11.149 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[45aa61ab-53e3-42bc-9ec8-0003fd0e4da0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:52:11 compute-2 systemd[1]: Started Virtual Machine qemu-90-instance-000000c8.
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:11.150 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0fd736bd-04d9-4a6d-8867-0587e53cec23]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:52:11 compute-2 NetworkManager[48999]: <info>  [1769849531.1584] device (tapf8858d9f-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:52:11 compute-2 NetworkManager[48999]: <info>  [1769849531.1588] device (tapf8858d9f-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:11.161 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[63160860-6e99-464f-8f4e-8b4424bbe2c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:11.172 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1e104ca4-8576-4a1c-9cb3-dabd664f0447]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:11.194 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[345de90e-41e8-4039-ba2e-eee3397091c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:52:11 compute-2 sudo[323676]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:11 compute-2 NetworkManager[48999]: <info>  [1769849531.2102] manager: (tap5bcbd792-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/393)
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:11.209 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e8d50056-2d11-44a3-9ddd-bba2a16a48b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:52:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:11 compute-2 systemd-udevd[323775]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:52:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:52:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:11.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:11.235 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[725a107e-23fe-4a96-bde6-e5c2404e73c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:11.238 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[08584fec-8223-43e2-b02d-16a40d6edcfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:52:11 compute-2 NetworkManager[48999]: <info>  [1769849531.2529] device (tap5bcbd792-d0): carrier: link connected
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:11.257 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[635afedc-d576-40b0-b521-91b06da65c0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:11.269 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1663fadf-7dc2-4767-a5e8-0d4635295672]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bcbd792-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:c4:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 247], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 978372, 'reachable_time': 16828, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323815, 'error': None, 'target': 'ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:11.281 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2ba52232-85b5-477b-9f90-e4d55224aa8f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:c4f2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 978372, 'tstamp': 978372}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323816, 'error': None, 'target': 'ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:11.292 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[06fbc4bc-d4b7-47da-ab22-98f440ab7cd4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bcbd792-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:c4:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 247], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 978372, 'reachable_time': 16828, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323817, 'error': None, 'target': 'ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:11.312 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[88083c02-4f2a-49f2-8871-72646d84b4f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:11.354 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[dba4dd4e-0853-4aba-86ad-64659d67a905]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:11.356 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bcbd792-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:11.356 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:11.356 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bcbd792-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:52:11 compute-2 kernel: tap5bcbd792-d0: entered promiscuous mode
Jan 31 08:52:11 compute-2 NetworkManager[48999]: <info>  [1769849531.3588] manager: (tap5bcbd792-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/394)
Jan 31 08:52:11 compute-2 nova_compute[226829]: 2026-01-31 08:52:11.360 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:11.360 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5bcbd792-d0, col_values=(('external_ids', {'iface-id': '5339b2f1-5a22-4487-8eff-180aa0b06c18'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:52:11 compute-2 ovn_controller[133834]: 2026-01-31T08:52:11Z|00789|binding|INFO|Releasing lport 5339b2f1-5a22-4487-8eff-180aa0b06c18 from this chassis (sb_readonly=0)
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:11.362 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5bcbd792-d73a-4177-857d-cca1f0044ec8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5bcbd792-d73a-4177-857d-cca1f0044ec8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:11.365 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7658cc43-373a-4ed0-b062-285f7348b481]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:11.365 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-5bcbd792-d73a-4177-857d-cca1f0044ec8
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/5bcbd792-d73a-4177-857d-cca1f0044ec8.pid.haproxy
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 5bcbd792-d73a-4177-857d-cca1f0044ec8
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:52:11 compute-2 nova_compute[226829]: 2026-01-31 08:52:11.366 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:11 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:11.366 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8', 'env', 'PROCESS_TAG=haproxy-5bcbd792-d73a-4177-857d-cca1f0044ec8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5bcbd792-d73a-4177-857d-cca1f0044ec8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:52:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:52:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:52:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:52:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:52:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 08:52:11 compute-2 ceph-mon[77282]: pgmap v3581: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 201 KiB/s rd, 3.1 MiB/s wr, 88 op/s
Jan 31 08:52:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 08:52:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:52:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:52:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:52:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:52:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:52:11 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:52:11 compute-2 nova_compute[226829]: 2026-01-31 08:52:11.625 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769849531.6250079, c304efec-22bc-408c-adea-b06aaf5fbe40 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:52:11 compute-2 nova_compute[226829]: 2026-01-31 08:52:11.626 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] VM Started (Lifecycle Event)
Jan 31 08:52:11 compute-2 nova_compute[226829]: 2026-01-31 08:52:11.693 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:52:11 compute-2 nova_compute[226829]: 2026-01-31 08:52:11.698 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769849531.625377, c304efec-22bc-408c-adea-b06aaf5fbe40 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:52:11 compute-2 nova_compute[226829]: 2026-01-31 08:52:11.698 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] VM Paused (Lifecycle Event)
Jan 31 08:52:11 compute-2 podman[323891]: 2026-01-31 08:52:11.708698101 +0000 UTC m=+0.047029129 container create 8402a72f1ede3a26351474ca189f4c3a5cf60a8d0e33ac16ce4cae75e33eecdb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 08:52:11 compute-2 nova_compute[226829]: 2026-01-31 08:52:11.730 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:52:11 compute-2 nova_compute[226829]: 2026-01-31 08:52:11.734 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:52:11 compute-2 systemd[1]: Started libpod-conmon-8402a72f1ede3a26351474ca189f4c3a5cf60a8d0e33ac16ce4cae75e33eecdb.scope.
Jan 31 08:52:11 compute-2 nova_compute[226829]: 2026-01-31 08:52:11.760 226833 DEBUG nova.network.neutron [req-115cc44f-e343-4a9d-a77b-28afc373bee1 req-40cd29ab-c759-4bbd-be9f-54f238207ba7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Updated VIF entry in instance network info cache for port f8858d9f-7fda-4961-8579-8b1536def97a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:52:11 compute-2 nova_compute[226829]: 2026-01-31 08:52:11.760 226833 DEBUG nova.network.neutron [req-115cc44f-e343-4a9d-a77b-28afc373bee1 req-40cd29ab-c759-4bbd-be9f-54f238207ba7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Updating instance_info_cache with network_info: [{"id": "f8858d9f-7fda-4961-8579-8b1536def97a", "address": "fa:16:3e:03:d8:76", "network": {"id": "5bcbd792-d73a-4177-857d-cca1f0044ec8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1702155659-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996284baaa2946258a0ab1be9a30d1f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8858d9f-7f", "ovs_interfaceid": "f8858d9f-7fda-4961-8579-8b1536def97a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:52:11 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:52:11 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3bb7394239e46eabed2cba7d04f4c8d148f60c43943040c55d1864a81070e6d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:52:11 compute-2 nova_compute[226829]: 2026-01-31 08:52:11.775 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:52:11 compute-2 podman[323891]: 2026-01-31 08:52:11.777690097 +0000 UTC m=+0.116021125 container init 8402a72f1ede3a26351474ca189f4c3a5cf60a8d0e33ac16ce4cae75e33eecdb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 08:52:11 compute-2 podman[323891]: 2026-01-31 08:52:11.684650077 +0000 UTC m=+0.022981135 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:52:11 compute-2 podman[323891]: 2026-01-31 08:52:11.781718436 +0000 UTC m=+0.120049464 container start 8402a72f1ede3a26351474ca189f4c3a5cf60a8d0e33ac16ce4cae75e33eecdb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:52:11 compute-2 nova_compute[226829]: 2026-01-31 08:52:11.787 226833 DEBUG oslo_concurrency.lockutils [req-115cc44f-e343-4a9d-a77b-28afc373bee1 req-40cd29ab-c759-4bbd-be9f-54f238207ba7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-c304efec-22bc-408c-adea-b06aaf5fbe40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:52:11 compute-2 neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8[323906]: [NOTICE]   (323910) : New worker (323912) forked
Jan 31 08:52:11 compute-2 neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8[323906]: [NOTICE]   (323910) : Loading success.
Jan 31 08:52:12 compute-2 nova_compute[226829]: 2026-01-31 08:52:12.049 226833 DEBUG nova.compute.manager [req-bada74ca-7900-41d8-a147-cf72b9f40640 req-43307e37-b299-4150-9141-8f51a305646b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Received event network-vif-plugged-f8858d9f-7fda-4961-8579-8b1536def97a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:52:12 compute-2 nova_compute[226829]: 2026-01-31 08:52:12.050 226833 DEBUG oslo_concurrency.lockutils [req-bada74ca-7900-41d8-a147-cf72b9f40640 req-43307e37-b299-4150-9141-8f51a305646b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c304efec-22bc-408c-adea-b06aaf5fbe40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:52:12 compute-2 nova_compute[226829]: 2026-01-31 08:52:12.050 226833 DEBUG oslo_concurrency.lockutils [req-bada74ca-7900-41d8-a147-cf72b9f40640 req-43307e37-b299-4150-9141-8f51a305646b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c304efec-22bc-408c-adea-b06aaf5fbe40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:52:12 compute-2 nova_compute[226829]: 2026-01-31 08:52:12.050 226833 DEBUG oslo_concurrency.lockutils [req-bada74ca-7900-41d8-a147-cf72b9f40640 req-43307e37-b299-4150-9141-8f51a305646b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c304efec-22bc-408c-adea-b06aaf5fbe40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:52:12 compute-2 nova_compute[226829]: 2026-01-31 08:52:12.051 226833 DEBUG nova.compute.manager [req-bada74ca-7900-41d8-a147-cf72b9f40640 req-43307e37-b299-4150-9141-8f51a305646b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Processing event network-vif-plugged-f8858d9f-7fda-4961-8579-8b1536def97a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:52:12 compute-2 nova_compute[226829]: 2026-01-31 08:52:12.051 226833 DEBUG nova.compute.manager [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:52:12 compute-2 nova_compute[226829]: 2026-01-31 08:52:12.055 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769849532.05427, c304efec-22bc-408c-adea-b06aaf5fbe40 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:52:12 compute-2 nova_compute[226829]: 2026-01-31 08:52:12.055 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] VM Resumed (Lifecycle Event)
Jan 31 08:52:12 compute-2 nova_compute[226829]: 2026-01-31 08:52:12.057 226833 DEBUG nova.virt.libvirt.driver [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:52:12 compute-2 nova_compute[226829]: 2026-01-31 08:52:12.060 226833 INFO nova.virt.libvirt.driver [-] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Instance spawned successfully.
Jan 31 08:52:12 compute-2 nova_compute[226829]: 2026-01-31 08:52:12.060 226833 DEBUG nova.virt.libvirt.driver [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:52:12 compute-2 nova_compute[226829]: 2026-01-31 08:52:12.100 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:52:12 compute-2 nova_compute[226829]: 2026-01-31 08:52:12.102 226833 DEBUG nova.virt.libvirt.driver [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:52:12 compute-2 nova_compute[226829]: 2026-01-31 08:52:12.103 226833 DEBUG nova.virt.libvirt.driver [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:52:12 compute-2 nova_compute[226829]: 2026-01-31 08:52:12.103 226833 DEBUG nova.virt.libvirt.driver [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:52:12 compute-2 nova_compute[226829]: 2026-01-31 08:52:12.104 226833 DEBUG nova.virt.libvirt.driver [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:52:12 compute-2 nova_compute[226829]: 2026-01-31 08:52:12.104 226833 DEBUG nova.virt.libvirt.driver [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:52:12 compute-2 nova_compute[226829]: 2026-01-31 08:52:12.105 226833 DEBUG nova.virt.libvirt.driver [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:52:12 compute-2 nova_compute[226829]: 2026-01-31 08:52:12.109 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:52:12 compute-2 nova_compute[226829]: 2026-01-31 08:52:12.168 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:52:12 compute-2 nova_compute[226829]: 2026-01-31 08:52:12.208 226833 INFO nova.compute.manager [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Took 14.09 seconds to spawn the instance on the hypervisor.
Jan 31 08:52:12 compute-2 nova_compute[226829]: 2026-01-31 08:52:12.209 226833 DEBUG nova.compute.manager [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:52:12 compute-2 nova_compute[226829]: 2026-01-31 08:52:12.309 226833 INFO nova.compute.manager [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Took 15.63 seconds to build instance.
Jan 31 08:52:12 compute-2 nova_compute[226829]: 2026-01-31 08:52:12.405 226833 DEBUG oslo_concurrency.lockutils [None req-b96bd053-f6d6-49dc-a48e-dfd3d64f764a 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "c304efec-22bc-408c-adea-b06aaf5fbe40" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.966s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:52:12 compute-2 nova_compute[226829]: 2026-01-31 08:52:12.490 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:52:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:12.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:52:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:13.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:52:13 compute-2 nova_compute[226829]: 2026-01-31 08:52:13.460 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:13 compute-2 nova_compute[226829]: 2026-01-31 08:52:13.693 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:14 compute-2 ceph-mon[77282]: pgmap v3582: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 425 KiB/s wr, 17 op/s
Jan 31 08:52:14 compute-2 nova_compute[226829]: 2026-01-31 08:52:14.181 226833 DEBUG nova.compute.manager [req-31be019c-af23-4a94-821b-b9407de881fb req-57a3b314-f251-4ed7-a811-c4b65e08d4ce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Received event network-vif-plugged-f8858d9f-7fda-4961-8579-8b1536def97a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:52:14 compute-2 nova_compute[226829]: 2026-01-31 08:52:14.182 226833 DEBUG oslo_concurrency.lockutils [req-31be019c-af23-4a94-821b-b9407de881fb req-57a3b314-f251-4ed7-a811-c4b65e08d4ce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c304efec-22bc-408c-adea-b06aaf5fbe40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:52:14 compute-2 nova_compute[226829]: 2026-01-31 08:52:14.182 226833 DEBUG oslo_concurrency.lockutils [req-31be019c-af23-4a94-821b-b9407de881fb req-57a3b314-f251-4ed7-a811-c4b65e08d4ce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c304efec-22bc-408c-adea-b06aaf5fbe40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:52:14 compute-2 nova_compute[226829]: 2026-01-31 08:52:14.182 226833 DEBUG oslo_concurrency.lockutils [req-31be019c-af23-4a94-821b-b9407de881fb req-57a3b314-f251-4ed7-a811-c4b65e08d4ce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c304efec-22bc-408c-adea-b06aaf5fbe40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:52:14 compute-2 nova_compute[226829]: 2026-01-31 08:52:14.182 226833 DEBUG nova.compute.manager [req-31be019c-af23-4a94-821b-b9407de881fb req-57a3b314-f251-4ed7-a811-c4b65e08d4ce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] No waiting events found dispatching network-vif-plugged-f8858d9f-7fda-4961-8579-8b1536def97a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:52:14 compute-2 nova_compute[226829]: 2026-01-31 08:52:14.183 226833 WARNING nova.compute.manager [req-31be019c-af23-4a94-821b-b9407de881fb req-57a3b314-f251-4ed7-a811-c4b65e08d4ce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Received unexpected event network-vif-plugged-f8858d9f-7fda-4961-8579-8b1536def97a for instance with vm_state active and task_state None.
Jan 31 08:52:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:14.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3006365587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:52:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:15.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:15 compute-2 nova_compute[226829]: 2026-01-31 08:52:15.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:52:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:52:15 compute-2 nova_compute[226829]: 2026-01-31 08:52:15.541 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:52:15 compute-2 nova_compute[226829]: 2026-01-31 08:52:15.541 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:52:15 compute-2 nova_compute[226829]: 2026-01-31 08:52:15.542 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:52:15 compute-2 nova_compute[226829]: 2026-01-31 08:52:15.542 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:52:15 compute-2 nova_compute[226829]: 2026-01-31 08:52:15.542 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:52:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:52:15 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4071357891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:52:15 compute-2 nova_compute[226829]: 2026-01-31 08:52:15.966 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:52:16 compute-2 nova_compute[226829]: 2026-01-31 08:52:16.083 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:52:16 compute-2 nova_compute[226829]: 2026-01-31 08:52:16.083 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:52:16 compute-2 ceph-mon[77282]: pgmap v3583: 305 pgs: 305 active+clean; 201 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 886 KiB/s rd, 27 KiB/s wr, 39 op/s
Jan 31 08:52:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2460422966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:52:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4071357891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:52:16 compute-2 nova_compute[226829]: 2026-01-31 08:52:16.227 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:52:16 compute-2 nova_compute[226829]: 2026-01-31 08:52:16.228 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3968MB free_disk=20.94830322265625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:52:16 compute-2 nova_compute[226829]: 2026-01-31 08:52:16.229 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:52:16 compute-2 nova_compute[226829]: 2026-01-31 08:52:16.229 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:52:16 compute-2 nova_compute[226829]: 2026-01-31 08:52:16.321 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance c304efec-22bc-408c-adea-b06aaf5fbe40 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:52:16 compute-2 nova_compute[226829]: 2026-01-31 08:52:16.321 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:52:16 compute-2 nova_compute[226829]: 2026-01-31 08:52:16.322 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:52:16 compute-2 nova_compute[226829]: 2026-01-31 08:52:16.455 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:52:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:16.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:52:16 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3314229693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:52:16 compute-2 nova_compute[226829]: 2026-01-31 08:52:16.875 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:52:16 compute-2 nova_compute[226829]: 2026-01-31 08:52:16.880 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:52:16 compute-2 nova_compute[226829]: 2026-01-31 08:52:16.917 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:52:16 compute-2 nova_compute[226829]: 2026-01-31 08:52:16.944 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:52:16 compute-2 nova_compute[226829]: 2026-01-31 08:52:16.945 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:52:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2732757244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:52:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3314229693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:52:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:17.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:17 compute-2 sudo[323969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:52:17 compute-2 sudo[323969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:17 compute-2 sudo[323969]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:17 compute-2 sudo[323994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:52:17 compute-2 sudo[323994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:17 compute-2 sudo[323994]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:18 compute-2 ceph-mon[77282]: pgmap v3584: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 16 KiB/s wr, 71 op/s
Jan 31 08:52:18 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:52:18 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:52:18 compute-2 nova_compute[226829]: 2026-01-31 08:52:18.463 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:18.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:18 compute-2 nova_compute[226829]: 2026-01-31 08:52:18.695 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:19.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:20 compute-2 NetworkManager[48999]: <info>  [1769849540.2096] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/395)
Jan 31 08:52:20 compute-2 NetworkManager[48999]: <info>  [1769849540.2105] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Jan 31 08:52:20 compute-2 nova_compute[226829]: 2026-01-31 08:52:20.208 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:20 compute-2 nova_compute[226829]: 2026-01-31 08:52:20.253 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:20 compute-2 ovn_controller[133834]: 2026-01-31T08:52:20Z|00790|binding|INFO|Releasing lport 5339b2f1-5a22-4487-8eff-180aa0b06c18 from this chassis (sb_readonly=0)
Jan 31 08:52:20 compute-2 nova_compute[226829]: 2026-01-31 08:52:20.272 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:20 compute-2 ceph-mon[77282]: pgmap v3585: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 104 op/s
Jan 31 08:52:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:52:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:20.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:21.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:21 compute-2 nova_compute[226829]: 2026-01-31 08:52:21.320 226833 DEBUG nova.compute.manager [req-67587d68-18e8-46d1-894f-82d47d4a2e34 req-86b02919-00b6-46bb-8b2a-4b9eb50ab7bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Received event network-changed-f8858d9f-7fda-4961-8579-8b1536def97a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:52:21 compute-2 nova_compute[226829]: 2026-01-31 08:52:21.320 226833 DEBUG nova.compute.manager [req-67587d68-18e8-46d1-894f-82d47d4a2e34 req-86b02919-00b6-46bb-8b2a-4b9eb50ab7bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Refreshing instance network info cache due to event network-changed-f8858d9f-7fda-4961-8579-8b1536def97a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:52:21 compute-2 nova_compute[226829]: 2026-01-31 08:52:21.321 226833 DEBUG oslo_concurrency.lockutils [req-67587d68-18e8-46d1-894f-82d47d4a2e34 req-86b02919-00b6-46bb-8b2a-4b9eb50ab7bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-c304efec-22bc-408c-adea-b06aaf5fbe40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:52:21 compute-2 nova_compute[226829]: 2026-01-31 08:52:21.321 226833 DEBUG oslo_concurrency.lockutils [req-67587d68-18e8-46d1-894f-82d47d4a2e34 req-86b02919-00b6-46bb-8b2a-4b9eb50ab7bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-c304efec-22bc-408c-adea-b06aaf5fbe40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:52:21 compute-2 nova_compute[226829]: 2026-01-31 08:52:21.321 226833 DEBUG nova.network.neutron [req-67587d68-18e8-46d1-894f-82d47d4a2e34 req-86b02919-00b6-46bb-8b2a-4b9eb50ab7bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Refreshing network info cache for port f8858d9f-7fda-4961-8579-8b1536def97a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:52:22 compute-2 sudo[324022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:52:22 compute-2 sudo[324022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:22 compute-2 sudo[324022]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:22 compute-2 sudo[324048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:52:22 compute-2 sudo[324048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:22 compute-2 sudo[324048]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:22 compute-2 ceph-mon[77282]: pgmap v3586: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 104 op/s
Jan 31 08:52:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:22.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:23.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:23 compute-2 nova_compute[226829]: 2026-01-31 08:52:23.465 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:23 compute-2 nova_compute[226829]: 2026-01-31 08:52:23.627 226833 DEBUG nova.network.neutron [req-67587d68-18e8-46d1-894f-82d47d4a2e34 req-86b02919-00b6-46bb-8b2a-4b9eb50ab7bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Updated VIF entry in instance network info cache for port f8858d9f-7fda-4961-8579-8b1536def97a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:52:23 compute-2 nova_compute[226829]: 2026-01-31 08:52:23.628 226833 DEBUG nova.network.neutron [req-67587d68-18e8-46d1-894f-82d47d4a2e34 req-86b02919-00b6-46bb-8b2a-4b9eb50ab7bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Updating instance_info_cache with network_info: [{"id": "f8858d9f-7fda-4961-8579-8b1536def97a", "address": "fa:16:3e:03:d8:76", "network": {"id": "5bcbd792-d73a-4177-857d-cca1f0044ec8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1702155659-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996284baaa2946258a0ab1be9a30d1f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8858d9f-7f", "ovs_interfaceid": "f8858d9f-7fda-4961-8579-8b1536def97a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:52:23 compute-2 nova_compute[226829]: 2026-01-31 08:52:23.696 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:23 compute-2 nova_compute[226829]: 2026-01-31 08:52:23.698 226833 DEBUG oslo_concurrency.lockutils [req-67587d68-18e8-46d1-894f-82d47d4a2e34 req-86b02919-00b6-46bb-8b2a-4b9eb50ab7bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-c304efec-22bc-408c-adea-b06aaf5fbe40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:52:24 compute-2 ovn_controller[133834]: 2026-01-31T08:52:24Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:03:d8:76 10.100.0.12
Jan 31 08:52:24 compute-2 ovn_controller[133834]: 2026-01-31T08:52:24Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:03:d8:76 10.100.0.12
Jan 31 08:52:24 compute-2 ceph-mon[77282]: pgmap v3587: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 104 op/s
Jan 31 08:52:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:24.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:52:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:25.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:52:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:52:26 compute-2 nova_compute[226829]: 2026-01-31 08:52:26.185 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:26 compute-2 ceph-mon[77282]: pgmap v3588: 305 pgs: 305 active+clean; 184 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 131 op/s
Jan 31 08:52:26 compute-2 ovn_controller[133834]: 2026-01-31T08:52:26Z|00791|binding|INFO|Releasing lport 5339b2f1-5a22-4487-8eff-180aa0b06c18 from this chassis (sb_readonly=0)
Jan 31 08:52:26 compute-2 nova_compute[226829]: 2026-01-31 08:52:26.522 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:26.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:27.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:28 compute-2 ceph-mon[77282]: pgmap v3589: 305 pgs: 305 active+clean; 195 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.7 MiB/s wr, 120 op/s
Jan 31 08:52:28 compute-2 nova_compute[226829]: 2026-01-31 08:52:28.518 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:28.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:28 compute-2 nova_compute[226829]: 2026-01-31 08:52:28.698 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:52:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:29.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:52:29 compute-2 ceph-mon[77282]: pgmap v3590: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 773 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Jan 31 08:52:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:52:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:30.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:52:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:31.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:52:31 compute-2 nova_compute[226829]: 2026-01-31 08:52:31.946 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:52:31 compute-2 nova_compute[226829]: 2026-01-31 08:52:31.946 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:52:32 compute-2 ceph-mon[77282]: pgmap v3591: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 08:52:32 compute-2 podman[324078]: 2026-01-31 08:52:32.190987423 +0000 UTC m=+0.075070962 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:52:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:32.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:52:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:33.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:52:33 compute-2 nova_compute[226829]: 2026-01-31 08:52:33.520 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:33 compute-2 nova_compute[226829]: 2026-01-31 08:52:33.698 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:34 compute-2 ceph-mon[77282]: pgmap v3592: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 08:52:34 compute-2 ceph-mgr[77635]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3465938080
Jan 31 08:52:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:34.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:35.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:52:36 compute-2 ceph-mon[77282]: pgmap v3593: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:52:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:36.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:37.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:37 compute-2 ceph-mon[77282]: pgmap v3594: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 277 KiB/s rd, 1.1 MiB/s wr, 37 op/s
Jan 31 08:52:38 compute-2 nova_compute[226829]: 2026-01-31 08:52:38.522 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:38.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:38 compute-2 nova_compute[226829]: 2026-01-31 08:52:38.700 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:39 compute-2 podman[324108]: 2026-01-31 08:52:39.181956331 +0000 UTC m=+0.068025719 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:52:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:39.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:39 compute-2 nova_compute[226829]: 2026-01-31 08:52:39.288 226833 DEBUG nova.compute.manager [None req-b6cf3394-a742-49c7-a417-2188ae86a1b9 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:52:39 compute-2 nova_compute[226829]: 2026-01-31 08:52:39.465 226833 INFO nova.compute.manager [None req-b6cf3394-a742-49c7-a417-2188ae86a1b9 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] instance snapshotting
Jan 31 08:52:39 compute-2 ovn_controller[133834]: 2026-01-31T08:52:39Z|00792|binding|INFO|Releasing lport 5339b2f1-5a22-4487-8eff-180aa0b06c18 from this chassis (sb_readonly=0)
Jan 31 08:52:39 compute-2 nova_compute[226829]: 2026-01-31 08:52:39.884 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:39 compute-2 nova_compute[226829]: 2026-01-31 08:52:39.944 226833 INFO nova.virt.libvirt.driver [None req-b6cf3394-a742-49c7-a417-2188ae86a1b9 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Beginning live snapshot process
Jan 31 08:52:40 compute-2 nova_compute[226829]: 2026-01-31 08:52:40.209 226833 DEBUG nova.virt.libvirt.imagebackend [None req-b6cf3394-a742-49c7-a417-2188ae86a1b9 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 31 08:52:40 compute-2 ceph-mon[77282]: pgmap v3595: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 105 KiB/s rd, 482 KiB/s wr, 8 op/s
Jan 31 08:52:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:52:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:40.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:40 compute-2 nova_compute[226829]: 2026-01-31 08:52:40.840 226833 DEBUG nova.storage.rbd_utils [None req-b6cf3394-a742-49c7-a417-2188ae86a1b9 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] creating snapshot(070493ce15d64537acc57a4b1b82da0a) on rbd image(c304efec-22bc-408c-adea-b06aaf5fbe40_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 08:52:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:52:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:41.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:52:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e385 e385: 3 total, 3 up, 3 in
Jan 31 08:52:41 compute-2 ceph-mon[77282]: pgmap v3596: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.3 KiB/s rd, 30 KiB/s wr, 5 op/s
Jan 31 08:52:41 compute-2 ceph-mon[77282]: osdmap e385: 3 total, 3 up, 3 in
Jan 31 08:52:41 compute-2 nova_compute[226829]: 2026-01-31 08:52:41.717 226833 DEBUG nova.storage.rbd_utils [None req-b6cf3394-a742-49c7-a417-2188ae86a1b9 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] cloning vms/c304efec-22bc-408c-adea-b06aaf5fbe40_disk@070493ce15d64537acc57a4b1b82da0a to images/e29674ab-eaa7-4f25-a795-4373e5f5c4e8 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 08:52:42 compute-2 sudo[324218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:52:42 compute-2 sudo[324218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:42 compute-2 sudo[324218]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:42 compute-2 sudo[324243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:52:42 compute-2 sudo[324243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:42 compute-2 sudo[324243]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:42 compute-2 nova_compute[226829]: 2026-01-31 08:52:42.345 226833 DEBUG nova.storage.rbd_utils [None req-b6cf3394-a742-49c7-a417-2188ae86a1b9 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] flattening images/e29674ab-eaa7-4f25-a795-4373e5f5c4e8 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 31 08:52:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:42.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:43.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:43 compute-2 nova_compute[226829]: 2026-01-31 08:52:43.524 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:43 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #59. Immutable memtables: 15.
Jan 31 08:52:43 compute-2 nova_compute[226829]: 2026-01-31 08:52:43.702 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:43 compute-2 ceph-mon[77282]: pgmap v3598: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.3 KiB/s rd, 34 KiB/s wr, 7 op/s
Jan 31 08:52:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:44.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:44 compute-2 nova_compute[226829]: 2026-01-31 08:52:44.975 226833 DEBUG nova.storage.rbd_utils [None req-b6cf3394-a742-49c7-a417-2188ae86a1b9 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] removing snapshot(070493ce15d64537acc57a4b1b82da0a) on rbd image(c304efec-22bc-408c-adea-b06aaf5fbe40_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 08:52:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e386 e386: 3 total, 3 up, 3 in
Jan 31 08:52:45 compute-2 nova_compute[226829]: 2026-01-31 08:52:45.214 226833 DEBUG nova.storage.rbd_utils [None req-b6cf3394-a742-49c7-a417-2188ae86a1b9 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] creating snapshot(snap) on rbd image(e29674ab-eaa7-4f25-a795-4373e5f5c4e8) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 08:52:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:52:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:45.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:52:45 compute-2 nova_compute[226829]: 2026-01-31 08:52:45.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:52:45 compute-2 nova_compute[226829]: 2026-01-31 08:52:45.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:52:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:52:45 compute-2 nova_compute[226829]: 2026-01-31 08:52:45.643 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:52:46 compute-2 ceph-mon[77282]: pgmap v3599: 305 pgs: 305 active+clean; 231 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 20 op/s
Jan 31 08:52:46 compute-2 ceph-mon[77282]: osdmap e386: 3 total, 3 up, 3 in
Jan 31 08:52:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e387 e387: 3 total, 3 up, 3 in
Jan 31 08:52:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:46.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:47 compute-2 ceph-mon[77282]: osdmap e387: 3 total, 3 up, 3 in
Jan 31 08:52:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:47.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:48 compute-2 ceph-mon[77282]: pgmap v3602: 305 pgs: 305 active+clean; 266 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.1 MiB/s rd, 7.1 MiB/s wr, 139 op/s
Jan 31 08:52:48 compute-2 nova_compute[226829]: 2026-01-31 08:52:48.525 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:48.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:48 compute-2 nova_compute[226829]: 2026-01-31 08:52:48.704 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:52:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:49.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:52:50 compute-2 ceph-mon[77282]: pgmap v3603: 305 pgs: 305 active+clean; 274 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.3 MiB/s rd, 6.2 MiB/s wr, 115 op/s
Jan 31 08:52:50 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3394808446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:52:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:52:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:50.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:50 compute-2 nova_compute[226829]: 2026-01-31 08:52:50.847 226833 INFO nova.virt.libvirt.driver [None req-b6cf3394-a742-49c7-a417-2188ae86a1b9 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Snapshot image upload complete
Jan 31 08:52:50 compute-2 nova_compute[226829]: 2026-01-31 08:52:50.848 226833 INFO nova.compute.manager [None req-b6cf3394-a742-49c7-a417-2188ae86a1b9 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Took 11.38 seconds to snapshot the instance on the hypervisor.
Jan 31 08:52:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:51.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:51.366 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=93, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=92) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:52:51 compute-2 nova_compute[226829]: 2026-01-31 08:52:51.367 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:51.368 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:52:52 compute-2 ceph-mon[77282]: pgmap v3604: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 128 op/s
Jan 31 08:52:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:52.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:53.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:52:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/121000242' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:52:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:52:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/121000242' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:52:53 compute-2 ceph-mon[77282]: pgmap v3605: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.2 MiB/s wr, 112 op/s
Jan 31 08:52:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/121000242' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:52:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/121000242' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:52:53 compute-2 nova_compute[226829]: 2026-01-31 08:52:53.546 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:53 compute-2 nova_compute[226829]: 2026-01-31 08:52:53.706 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:54.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e388 e388: 3 total, 3 up, 3 in
Jan 31 08:52:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:55.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:55 compute-2 nova_compute[226829]: 2026-01-31 08:52:55.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:52:55 compute-2 nova_compute[226829]: 2026-01-31 08:52:55.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:52:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:52:55 compute-2 ceph-mon[77282]: osdmap e388: 3 total, 3 up, 3 in
Jan 31 08:52:55 compute-2 ceph-mon[77282]: pgmap v3607: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.5 MiB/s wr, 41 op/s
Jan 31 08:52:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:52:56.370 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '93'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:52:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:56.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:57.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:57 compute-2 nova_compute[226829]: 2026-01-31 08:52:57.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:52:58 compute-2 ceph-mon[77282]: pgmap v3608: 305 pgs: 305 active+clean; 313 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.0 MiB/s wr, 52 op/s
Jan 31 08:52:58 compute-2 nova_compute[226829]: 2026-01-31 08:52:58.548 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:58.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:58 compute-2 nova_compute[226829]: 2026-01-31 08:52:58.708 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:52:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:59.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:00 compute-2 ceph-mon[77282]: pgmap v3609: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Jan 31 08:53:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3004284897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:53:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:53:00 compute-2 nova_compute[226829]: 2026-01-31 08:53:00.516 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:53:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:00.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2028781581' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:53:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:01.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:02 compute-2 ceph-mon[77282]: pgmap v3610: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Jan 31 08:53:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3139870674' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:53:02 compute-2 sudo[324332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:53:02 compute-2 sudo[324332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:02 compute-2 sudo[324332]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:02 compute-2 sudo[324362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:53:02 compute-2 sudo[324362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:02 compute-2 sudo[324362]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:02 compute-2 podman[324356]: 2026-01-31 08:53:02.401983211 +0000 UTC m=+0.073562970 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:53:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:02.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:03.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:03 compute-2 nova_compute[226829]: 2026-01-31 08:53:03.549 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:03 compute-2 nova_compute[226829]: 2026-01-31 08:53:03.710 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:04 compute-2 ceph-mon[77282]: pgmap v3611: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Jan 31 08:53:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:04.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:53:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:05.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:53:05 compute-2 nova_compute[226829]: 2026-01-31 08:53:05.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:53:05 compute-2 nova_compute[226829]: 2026-01-31 08:53:05.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:53:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:53:06 compute-2 ceph-mon[77282]: pgmap v3612: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 1.1 MiB/s wr, 20 op/s
Jan 31 08:53:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:06.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:53:06.931 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:53:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:53:06.932 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:53:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:53:06.932 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:53:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:53:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:07.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:53:07 compute-2 nova_compute[226829]: 2026-01-31 08:53:07.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:53:08 compute-2 ceph-mon[77282]: pgmap v3613: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 39 KiB/s rd, 1001 KiB/s wr, 49 op/s
Jan 31 08:53:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1997707929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:53:08 compute-2 nova_compute[226829]: 2026-01-31 08:53:08.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:53:08 compute-2 nova_compute[226829]: 2026-01-31 08:53:08.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:53:08 compute-2 nova_compute[226829]: 2026-01-31 08:53:08.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:53:08 compute-2 nova_compute[226829]: 2026-01-31 08:53:08.550 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:08.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:08 compute-2 nova_compute[226829]: 2026-01-31 08:53:08.712 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:08 compute-2 nova_compute[226829]: 2026-01-31 08:53:08.894 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-c304efec-22bc-408c-adea-b06aaf5fbe40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:53:08 compute-2 nova_compute[226829]: 2026-01-31 08:53:08.895 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-c304efec-22bc-408c-adea-b06aaf5fbe40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:53:08 compute-2 nova_compute[226829]: 2026-01-31 08:53:08.895 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:53:08 compute-2 nova_compute[226829]: 2026-01-31 08:53:08.895 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid c304efec-22bc-408c-adea-b06aaf5fbe40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:53:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:53:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:09.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:53:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3637733675' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:53:09 compute-2 nova_compute[226829]: 2026-01-31 08:53:09.997 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:10 compute-2 podman[324412]: 2026-01-31 08:53:10.160950644 +0000 UTC m=+0.042821246 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:53:10 compute-2 ceph-mon[77282]: pgmap v3614: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 525 KiB/s rd, 481 KiB/s wr, 53 op/s
Jan 31 08:53:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3390267310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:53:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3709395865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:53:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:53:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:10.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:53:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:11.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:53:12 compute-2 nova_compute[226829]: 2026-01-31 08:53:12.105 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Updating instance_info_cache with network_info: [{"id": "f8858d9f-7fda-4961-8579-8b1536def97a", "address": "fa:16:3e:03:d8:76", "network": {"id": "5bcbd792-d73a-4177-857d-cca1f0044ec8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1702155659-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996284baaa2946258a0ab1be9a30d1f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8858d9f-7f", "ovs_interfaceid": "f8858d9f-7fda-4961-8579-8b1536def97a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:53:12 compute-2 nova_compute[226829]: 2026-01-31 08:53:12.125 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-c304efec-22bc-408c-adea-b06aaf5fbe40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:53:12 compute-2 nova_compute[226829]: 2026-01-31 08:53:12.126 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:53:12 compute-2 nova_compute[226829]: 2026-01-31 08:53:12.126 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:53:12 compute-2 ceph-mon[77282]: pgmap v3615: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 102 op/s
Jan 31 08:53:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:53:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:12.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:53:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:13.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:13 compute-2 nova_compute[226829]: 2026-01-31 08:53:13.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:53:13 compute-2 nova_compute[226829]: 2026-01-31 08:53:13.552 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:13 compute-2 nova_compute[226829]: 2026-01-31 08:53:13.713 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:14 compute-2 ceph-mon[77282]: pgmap v3616: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 104 op/s
Jan 31 08:53:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2425752531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:53:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:14.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:15.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1783711535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:53:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:53:16 compute-2 ceph-mon[77282]: pgmap v3617: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 15 KiB/s wr, 128 op/s
Jan 31 08:53:16 compute-2 nova_compute[226829]: 2026-01-31 08:53:16.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:53:16 compute-2 nova_compute[226829]: 2026-01-31 08:53:16.560 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:53:16 compute-2 nova_compute[226829]: 2026-01-31 08:53:16.560 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:53:16 compute-2 nova_compute[226829]: 2026-01-31 08:53:16.561 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:53:16 compute-2 nova_compute[226829]: 2026-01-31 08:53:16.561 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:53:16 compute-2 nova_compute[226829]: 2026-01-31 08:53:16.561 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:53:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:16.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:53:16 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2347719483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:53:16 compute-2 nova_compute[226829]: 2026-01-31 08:53:16.980 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:53:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:17.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2347719483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:53:17 compute-2 sudo[324456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:53:17 compute-2 sudo[324456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:17 compute-2 sudo[324456]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:17 compute-2 sudo[324481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:53:17 compute-2 sudo[324481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:17 compute-2 sudo[324481]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:17 compute-2 nova_compute[226829]: 2026-01-31 08:53:17.733 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:53:17 compute-2 nova_compute[226829]: 2026-01-31 08:53:17.735 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:53:17 compute-2 sudo[324506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:53:17 compute-2 sudo[324506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:17 compute-2 sudo[324506]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:17 compute-2 sudo[324531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:53:17 compute-2 sudo[324531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:17 compute-2 nova_compute[226829]: 2026-01-31 08:53:17.954 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:53:17 compute-2 nova_compute[226829]: 2026-01-31 08:53:17.955 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3873MB free_disk=20.9217529296875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:53:17 compute-2 nova_compute[226829]: 2026-01-31 08:53:17.955 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:53:17 compute-2 nova_compute[226829]: 2026-01-31 08:53:17.955 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:53:18 compute-2 sudo[324531]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:18 compute-2 ceph-mon[77282]: pgmap v3618: 305 pgs: 305 active+clean; 326 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 15 KiB/s wr, 163 op/s
Jan 31 08:53:18 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 08:53:18 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:53:18 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:53:18 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:53:18 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:53:18 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:53:18 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:53:18 compute-2 nova_compute[226829]: 2026-01-31 08:53:18.553 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:18.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:18 compute-2 nova_compute[226829]: 2026-01-31 08:53:18.715 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:18 compute-2 nova_compute[226829]: 2026-01-31 08:53:18.848 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance c304efec-22bc-408c-adea-b06aaf5fbe40 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:53:18 compute-2 nova_compute[226829]: 2026-01-31 08:53:18.848 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:53:18 compute-2 nova_compute[226829]: 2026-01-31 08:53:18.848 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:53:18 compute-2 nova_compute[226829]: 2026-01-31 08:53:18.896 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:53:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:53:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:19.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:53:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:53:19 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2000871693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:53:19 compute-2 nova_compute[226829]: 2026-01-31 08:53:19.326 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:53:19 compute-2 nova_compute[226829]: 2026-01-31 08:53:19.331 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:53:19 compute-2 nova_compute[226829]: 2026-01-31 08:53:19.358 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:53:19 compute-2 nova_compute[226829]: 2026-01-31 08:53:19.429 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:53:19 compute-2 nova_compute[226829]: 2026-01-31 08:53:19.430 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.474s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:53:19 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2000871693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:53:19 compute-2 nova_compute[226829]: 2026-01-31 08:53:19.858 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:20 compute-2 ceph-mon[77282]: pgmap v3619: 305 pgs: 305 active+clean; 326 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 14 KiB/s wr, 140 op/s
Jan 31 08:53:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:53:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:20.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:21.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:21 compute-2 ceph-mon[77282]: pgmap v3620: 305 pgs: 305 active+clean; 345 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.2 MiB/s wr, 171 op/s
Jan 31 08:53:22 compute-2 sudo[324613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:53:22 compute-2 sudo[324613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:22 compute-2 sudo[324613]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:22 compute-2 sudo[324638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:53:22 compute-2 sudo[324638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:22 compute-2 sudo[324638]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:53:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:22.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:53:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:23.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:23 compute-2 nova_compute[226829]: 2026-01-31 08:53:23.555 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:23 compute-2 nova_compute[226829]: 2026-01-31 08:53:23.717 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:24 compute-2 ceph-mon[77282]: pgmap v3621: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 140 op/s
Jan 31 08:53:24 compute-2 sudo[324664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:53:24 compute-2 sudo[324664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:24 compute-2 sudo[324664]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:24 compute-2 sudo[324689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:53:24 compute-2 sudo[324689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:24 compute-2 sudo[324689]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.003000081s ======
Jan 31 08:53:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:24.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000081s
Jan 31 08:53:25 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:53:25 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:53:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:25.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:53:26 compute-2 ceph-mon[77282]: pgmap v3622: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 140 op/s
Jan 31 08:53:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:26.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:27.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:28 compute-2 ceph-mon[77282]: pgmap v3623: 305 pgs: 305 active+clean; 362 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 126 op/s
Jan 31 08:53:28 compute-2 nova_compute[226829]: 2026-01-31 08:53:28.556 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:28.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:28 compute-2 nova_compute[226829]: 2026-01-31 08:53:28.719 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:29.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:30 compute-2 ceph-mon[77282]: pgmap v3624: 305 pgs: 305 active+clean; 367 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 944 KiB/s rd, 2.3 MiB/s wr, 92 op/s
Jan 31 08:53:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:53:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:30.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:31.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:32 compute-2 ceph-mon[77282]: pgmap v3625: 305 pgs: 305 active+clean; 345 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.6 MiB/s wr, 141 op/s
Jan 31 08:53:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:32.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:33 compute-2 podman[324718]: 2026-01-31 08:53:33.203713686 +0000 UTC m=+0.086881442 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true)
Jan 31 08:53:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:33.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:33 compute-2 nova_compute[226829]: 2026-01-31 08:53:33.558 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:33 compute-2 nova_compute[226829]: 2026-01-31 08:53:33.618 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:33 compute-2 nova_compute[226829]: 2026-01-31 08:53:33.721 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:34 compute-2 ceph-mon[77282]: pgmap v3626: 305 pgs: 305 active+clean; 323 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.5 MiB/s wr, 95 op/s
Jan 31 08:53:34 compute-2 nova_compute[226829]: 2026-01-31 08:53:34.430 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:53:34 compute-2 nova_compute[226829]: 2026-01-31 08:53:34.431 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:53:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:34.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:35.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:53:36 compute-2 ceph-mon[77282]: pgmap v3627: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 548 KiB/s wr, 81 op/s
Jan 31 08:53:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:36.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:37.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:37 compute-2 ceph-mon[77282]: pgmap v3628: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 558 KiB/s wr, 81 op/s
Jan 31 08:53:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1264266940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:53:38 compute-2 nova_compute[226829]: 2026-01-31 08:53:38.622 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:53:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:38.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:53:38 compute-2 nova_compute[226829]: 2026-01-31 08:53:38.722 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:39.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:39 compute-2 ceph-mon[77282]: pgmap v3629: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 950 KiB/s rd, 497 KiB/s wr, 71 op/s
Jan 31 08:53:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:53:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:40.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:41 compute-2 podman[324749]: 2026-01-31 08:53:41.160941878 +0000 UTC m=+0.049802005 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:53:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:41.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:42 compute-2 ceph-mon[77282]: pgmap v3630: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 347 KiB/s wr, 70 op/s
Jan 31 08:53:42 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e389 e389: 3 total, 3 up, 3 in
Jan 31 08:53:42 compute-2 sudo[324770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:53:42 compute-2 sudo[324770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:42 compute-2 sudo[324770]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:42 compute-2 sudo[324795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:53:42 compute-2 sudo[324795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:42 compute-2 sudo[324795]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:42.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:43 compute-2 nova_compute[226829]: 2026-01-31 08:53:43.055 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:43.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:43 compute-2 ceph-mon[77282]: osdmap e389: 3 total, 3 up, 3 in
Jan 31 08:53:43 compute-2 nova_compute[226829]: 2026-01-31 08:53:43.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:53:43 compute-2 nova_compute[226829]: 2026-01-31 08:53:43.624 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:43 compute-2 nova_compute[226829]: 2026-01-31 08:53:43.724 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e390 e390: 3 total, 3 up, 3 in
Jan 31 08:53:44 compute-2 ceph-mon[77282]: pgmap v3632: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 295 KiB/s rd, 17 KiB/s wr, 14 op/s
Jan 31 08:53:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:44.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:45.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:53:45 compute-2 ceph-mon[77282]: osdmap e390: 3 total, 3 up, 3 in
Jan 31 08:53:45 compute-2 ceph-mon[77282]: pgmap v3634: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 20 op/s
Jan 31 08:53:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e391 e391: 3 total, 3 up, 3 in
Jan 31 08:53:46 compute-2 ceph-mon[77282]: osdmap e391: 3 total, 3 up, 3 in
Jan 31 08:53:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:46.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:53:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:47.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:53:47 compute-2 ceph-mon[77282]: pgmap v3636: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 374 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 6.9 MiB/s rd, 11 MiB/s wr, 102 op/s
Jan 31 08:53:48 compute-2 nova_compute[226829]: 2026-01-31 08:53:48.625 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:48.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:48 compute-2 nova_compute[226829]: 2026-01-31 08:53:48.725 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e392 e392: 3 total, 3 up, 3 in
Jan 31 08:53:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:53:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:49.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:53:50 compute-2 ceph-mon[77282]: pgmap v3637: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 398 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 7.2 MiB/s rd, 14 MiB/s wr, 163 op/s
Jan 31 08:53:50 compute-2 ceph-mon[77282]: osdmap e392: 3 total, 3 up, 3 in
Jan 31 08:53:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:53:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:50.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:51.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:52 compute-2 ceph-mon[77282]: pgmap v3639: 305 pgs: 305 active+clean; 321 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.3 MiB/s rd, 14 MiB/s wr, 174 op/s
Jan 31 08:53:52 compute-2 ovn_controller[133834]: 2026-01-31T08:53:52Z|00793|binding|INFO|Releasing lport 5339b2f1-5a22-4487-8eff-180aa0b06c18 from this chassis (sb_readonly=0)
Jan 31 08:53:52 compute-2 nova_compute[226829]: 2026-01-31 08:53:52.439 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:52.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:53:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:53.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:53:53 compute-2 nova_compute[226829]: 2026-01-31 08:53:53.627 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:53 compute-2 nova_compute[226829]: 2026-01-31 08:53:53.726 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:54 compute-2 ceph-mon[77282]: pgmap v3640: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 8.8 MiB/s wr, 163 op/s
Jan 31 08:53:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3588722987' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:53:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3588722987' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:53:54 compute-2 nova_compute[226829]: 2026-01-31 08:53:54.328 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:53:54.328 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=94, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=93) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:53:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:53:54.330 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:53:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:54.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e393 e393: 3 total, 3 up, 3 in
Jan 31 08:53:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:55.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:53:56 compute-2 ceph-mon[77282]: osdmap e393: 3 total, 3 up, 3 in
Jan 31 08:53:56 compute-2 ceph-mon[77282]: pgmap v3642: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 797 KiB/s rd, 2.7 MiB/s wr, 100 op/s
Jan 31 08:53:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:56.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:53:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:57.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:53:58 compute-2 ceph-mon[77282]: pgmap v3643: 305 pgs: 305 active+clean; 300 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 280 KiB/s wr, 46 op/s
Jan 31 08:53:58 compute-2 nova_compute[226829]: 2026-01-31 08:53:58.669 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:53:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:58.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:53:58 compute-2 nova_compute[226829]: 2026-01-31 08:53:58.727 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:53:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:59.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:00 compute-2 ceph-mon[77282]: pgmap v3644: 305 pgs: 305 active+clean; 292 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 227 KiB/s wr, 41 op/s
Jan 31 08:54:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:54:00 compute-2 nova_compute[226829]: 2026-01-31 08:54:00.648 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:54:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:54:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:00.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:54:00 compute-2 nova_compute[226829]: 2026-01-31 08:54:00.898 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/243364613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:54:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:01.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:54:01.332 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '94'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:54:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e394 e394: 3 total, 3 up, 3 in
Jan 31 08:54:02 compute-2 ceph-mon[77282]: pgmap v3645: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 267 KiB/s rd, 229 KiB/s wr, 64 op/s
Jan 31 08:54:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:02.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:02 compute-2 sudo[324830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:54:02 compute-2 sudo[324830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:02 compute-2 sudo[324830]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:02 compute-2 sudo[324855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:54:02 compute-2 sudo[324855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:02 compute-2 sudo[324855]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:03.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:03 compute-2 ceph-mon[77282]: osdmap e394: 3 total, 3 up, 3 in
Jan 31 08:54:03 compute-2 nova_compute[226829]: 2026-01-31 08:54:03.671 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:03 compute-2 nova_compute[226829]: 2026-01-31 08:54:03.729 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:04 compute-2 podman[324881]: 2026-01-31 08:54:04.183737707 +0000 UTC m=+0.072077950 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 08:54:04 compute-2 ceph-mon[77282]: pgmap v3647: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 301 KiB/s rd, 276 KiB/s wr, 46 op/s
Jan 31 08:54:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2018586957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:54:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:04.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #175. Immutable memtables: 0.
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:04.800567) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 175
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849644800677, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 1923, "num_deletes": 259, "total_data_size": 4448588, "memory_usage": 4501024, "flush_reason": "Manual Compaction"}
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #176: started
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849644812769, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 176, "file_size": 2922273, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84883, "largest_seqno": 86801, "table_properties": {"data_size": 2914231, "index_size": 4855, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16748, "raw_average_key_size": 20, "raw_value_size": 2898148, "raw_average_value_size": 3508, "num_data_blocks": 210, "num_entries": 826, "num_filter_entries": 826, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849489, "oldest_key_time": 1769849489, "file_creation_time": 1769849644, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 12279 microseconds, and 5723 cpu microseconds.
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:04.812857) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #176: 2922273 bytes OK
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:04.812873) [db/memtable_list.cc:519] [default] Level-0 commit table #176 started
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:04.815096) [db/memtable_list.cc:722] [default] Level-0 commit table #176: memtable #1 done
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:04.815109) EVENT_LOG_v1 {"time_micros": 1769849644815106, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:04.815126) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 4439968, prev total WAL file size 4439968, number of live WAL files 2.
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000172.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:04.815892) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323635' seq:72057594037927935, type:22 .. '6C6F676D0033353138' seq:0, type:0; will stop at (end)
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [176(2853KB)], [174(11MB)]
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849644815999, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [176], "files_L6": [174], "score": -1, "input_data_size": 14645889, "oldest_snapshot_seqno": -1}
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #177: 10698 keys, 14495974 bytes, temperature: kUnknown
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849644898248, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 177, "file_size": 14495974, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14426744, "index_size": 41405, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26757, "raw_key_size": 282389, "raw_average_key_size": 26, "raw_value_size": 14239497, "raw_average_value_size": 1331, "num_data_blocks": 1583, "num_entries": 10698, "num_filter_entries": 10698, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769849644, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 177, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:04.898546) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 14495974 bytes
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:04.899973) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 177.8 rd, 176.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 11.2 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(10.0) write-amplify(5.0) OK, records in: 11235, records dropped: 537 output_compression: NoCompression
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:04.899993) EVENT_LOG_v1 {"time_micros": 1769849644899983, "job": 112, "event": "compaction_finished", "compaction_time_micros": 82363, "compaction_time_cpu_micros": 35184, "output_level": 6, "num_output_files": 1, "total_output_size": 14495974, "num_input_records": 11235, "num_output_records": 10698, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849644900409, "job": 112, "event": "table_file_deletion", "file_number": 176}
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000174.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849644902182, "job": 112, "event": "table_file_deletion", "file_number": 174}
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:04.815678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:04.902267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:04.902280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:04.902282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:04.902284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:54:04 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:04.902286) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:54:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:54:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:05.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:54:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:54:05 compute-2 ceph-mon[77282]: pgmap v3648: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 243 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 249 KiB/s rd, 227 KiB/s wr, 41 op/s
Jan 31 08:54:06 compute-2 nova_compute[226829]: 2026-01-31 08:54:06.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:54:06 compute-2 nova_compute[226829]: 2026-01-31 08:54:06.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:54:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:54:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:06.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:54:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:54:06.932 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:54:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:54:06.933 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:54:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:54:06.934 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:54:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:54:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:07.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:54:08 compute-2 ceph-mon[77282]: pgmap v3649: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 208 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 264 KiB/s rd, 6.1 KiB/s wr, 60 op/s
Jan 31 08:54:08 compute-2 nova_compute[226829]: 2026-01-31 08:54:08.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:54:08 compute-2 nova_compute[226829]: 2026-01-31 08:54:08.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:54:08 compute-2 nova_compute[226829]: 2026-01-31 08:54:08.673 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:08.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:08 compute-2 nova_compute[226829]: 2026-01-31 08:54:08.731 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.004 226833 DEBUG nova.compute.manager [req-2db8e184-817c-4380-96ba-0e4c7b330a64 req-f4de53b0-27d9-406e-8cb4-0ece4cba9a84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Received event network-changed-f8858d9f-7fda-4961-8579-8b1536def97a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.004 226833 DEBUG nova.compute.manager [req-2db8e184-817c-4380-96ba-0e4c7b330a64 req-f4de53b0-27d9-406e-8cb4-0ece4cba9a84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Refreshing instance network info cache due to event network-changed-f8858d9f-7fda-4961-8579-8b1536def97a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.004 226833 DEBUG oslo_concurrency.lockutils [req-2db8e184-817c-4380-96ba-0e4c7b330a64 req-f4de53b0-27d9-406e-8cb4-0ece4cba9a84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-c304efec-22bc-408c-adea-b06aaf5fbe40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.004 226833 DEBUG oslo_concurrency.lockutils [req-2db8e184-817c-4380-96ba-0e4c7b330a64 req-f4de53b0-27d9-406e-8cb4-0ece4cba9a84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-c304efec-22bc-408c-adea-b06aaf5fbe40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.004 226833 DEBUG nova.network.neutron [req-2db8e184-817c-4380-96ba-0e4c7b330a64 req-f4de53b0-27d9-406e-8cb4-0ece4cba9a84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Refreshing network info cache for port f8858d9f-7fda-4961-8579-8b1536def97a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.234 226833 DEBUG oslo_concurrency.lockutils [None req-ec2e3a38-b959-4497-9ff1-1e9e7daf6e34 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Acquiring lock "c304efec-22bc-408c-adea-b06aaf5fbe40" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.234 226833 DEBUG oslo_concurrency.lockutils [None req-ec2e3a38-b959-4497-9ff1-1e9e7daf6e34 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "c304efec-22bc-408c-adea-b06aaf5fbe40" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.234 226833 DEBUG oslo_concurrency.lockutils [None req-ec2e3a38-b959-4497-9ff1-1e9e7daf6e34 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Acquiring lock "c304efec-22bc-408c-adea-b06aaf5fbe40-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.235 226833 DEBUG oslo_concurrency.lockutils [None req-ec2e3a38-b959-4497-9ff1-1e9e7daf6e34 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "c304efec-22bc-408c-adea-b06aaf5fbe40-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.235 226833 DEBUG oslo_concurrency.lockutils [None req-ec2e3a38-b959-4497-9ff1-1e9e7daf6e34 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "c304efec-22bc-408c-adea-b06aaf5fbe40-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.236 226833 INFO nova.compute.manager [None req-ec2e3a38-b959-4497-9ff1-1e9e7daf6e34 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Terminating instance
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.237 226833 DEBUG nova.compute.manager [None req-ec2e3a38-b959-4497-9ff1-1e9e7daf6e34 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:54:09 compute-2 kernel: tapf8858d9f-7f (unregistering): left promiscuous mode
Jan 31 08:54:09 compute-2 NetworkManager[48999]: <info>  [1769849649.2959] device (tapf8858d9f-7f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.301 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:09 compute-2 ovn_controller[133834]: 2026-01-31T08:54:09Z|00794|binding|INFO|Releasing lport f8858d9f-7fda-4961-8579-8b1536def97a from this chassis (sb_readonly=0)
Jan 31 08:54:09 compute-2 ovn_controller[133834]: 2026-01-31T08:54:09Z|00795|binding|INFO|Setting lport f8858d9f-7fda-4961-8579-8b1536def97a down in Southbound
Jan 31 08:54:09 compute-2 ovn_controller[133834]: 2026-01-31T08:54:09Z|00796|binding|INFO|Removing iface tapf8858d9f-7f ovn-installed in OVS
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.304 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.311 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:54:09.326 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:d8:76 10.100.0.12'], port_security=['fa:16:3e:03:d8:76 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c304efec-22bc-408c-adea-b06aaf5fbe40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bcbd792-d73a-4177-857d-cca1f0044ec8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '996284baaa2946258a0ab1be9a30d1f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bc7f4114-32da-4b90-986a-baaae713330e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6418939f-fc1e-4745-b073-d983621a4c28, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=f8858d9f-7fda-4961-8579-8b1536def97a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:54:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:54:09.327 143841 INFO neutron.agent.ovn.metadata.agent [-] Port f8858d9f-7fda-4961-8579-8b1536def97a in datapath 5bcbd792-d73a-4177-857d-cca1f0044ec8 unbound from our chassis
Jan 31 08:54:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:54:09.329 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5bcbd792-d73a-4177-857d-cca1f0044ec8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:54:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:54:09.331 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4e995686-a489-4b26-8e09-522778dc3ab7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:54:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:09.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:54:09.332 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8 namespace which is not needed anymore
Jan 31 08:54:09 compute-2 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000c8.scope: Deactivated successfully.
Jan 31 08:54:09 compute-2 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000c8.scope: Consumed 16.107s CPU time.
Jan 31 08:54:09 compute-2 systemd-machined[195142]: Machine qemu-90-instance-000000c8 terminated.
Jan 31 08:54:09 compute-2 neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8[323906]: [NOTICE]   (323910) : haproxy version is 2.8.14-c23fe91
Jan 31 08:54:09 compute-2 neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8[323906]: [NOTICE]   (323910) : path to executable is /usr/sbin/haproxy
Jan 31 08:54:09 compute-2 neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8[323906]: [WARNING]  (323910) : Exiting Master process...
Jan 31 08:54:09 compute-2 neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8[323906]: [ALERT]    (323910) : Current worker (323912) exited with code 143 (Terminated)
Jan 31 08:54:09 compute-2 neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8[323906]: [WARNING]  (323910) : All workers exited. Exiting... (0)
Jan 31 08:54:09 compute-2 systemd[1]: libpod-8402a72f1ede3a26351474ca189f4c3a5cf60a8d0e33ac16ce4cae75e33eecdb.scope: Deactivated successfully.
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.455 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:09 compute-2 podman[324936]: 2026-01-31 08:54:09.456460045 +0000 UTC m=+0.048332285 container died 8402a72f1ede3a26351474ca189f4c3a5cf60a8d0e33ac16ce4cae75e33eecdb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.458 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.467 226833 INFO nova.virt.libvirt.driver [-] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Instance destroyed successfully.
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.467 226833 DEBUG nova.objects.instance [None req-ec2e3a38-b959-4497-9ff1-1e9e7daf6e34 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lazy-loading 'resources' on Instance uuid c304efec-22bc-408c-adea-b06aaf5fbe40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:54:09 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8402a72f1ede3a26351474ca189f4c3a5cf60a8d0e33ac16ce4cae75e33eecdb-userdata-shm.mount: Deactivated successfully.
Jan 31 08:54:09 compute-2 systemd[1]: var-lib-containers-storage-overlay-e3bb7394239e46eabed2cba7d04f4c8d148f60c43943040c55d1864a81070e6d-merged.mount: Deactivated successfully.
Jan 31 08:54:09 compute-2 podman[324936]: 2026-01-31 08:54:09.515357386 +0000 UTC m=+0.107229406 container cleanup 8402a72f1ede3a26351474ca189f4c3a5cf60a8d0e33ac16ce4cae75e33eecdb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.521 226833 DEBUG nova.virt.libvirt.vif [None req-ec2e3a38-b959-4497-9ff1-1e9e7daf6e34 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:51:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-445454519',display_name='tempest-TestSnapshotPattern-server-445454519',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-445454519',id=200,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGk1avDtDRCkfi38WDQyZMb5AgNzJzpa/E7afrTWkN843bVEm67ME3qOwcYUGN5nJ8L7GrnzGumfCdBmJYfKJwV+0Rfyn9QiAVoGTFowygEpRUI24B1nMsGC8mTo9YdR3w==',key_name='tempest-TestSnapshotPattern-1326482889',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:52:12Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='996284baaa2946258a0ab1be9a30d1f6',ramdisk_id='',reservation_id='r-pko5ijj3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-920311190',owner_user_name='tempest-TestSnapshotPattern-920311190-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:52:51Z,user_data=None,user_id='5a37d71e432b45168339dde5abdbe7b6',uuid=c304efec-22bc-408c-adea-b06aaf5fbe40,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f8858d9f-7fda-4961-8579-8b1536def97a", "address": "fa:16:3e:03:d8:76", "network": {"id": "5bcbd792-d73a-4177-857d-cca1f0044ec8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1702155659-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996284baaa2946258a0ab1be9a30d1f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8858d9f-7f", "ovs_interfaceid": "f8858d9f-7fda-4961-8579-8b1536def97a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.522 226833 DEBUG nova.network.os_vif_util [None req-ec2e3a38-b959-4497-9ff1-1e9e7daf6e34 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Converting VIF {"id": "f8858d9f-7fda-4961-8579-8b1536def97a", "address": "fa:16:3e:03:d8:76", "network": {"id": "5bcbd792-d73a-4177-857d-cca1f0044ec8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1702155659-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996284baaa2946258a0ab1be9a30d1f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8858d9f-7f", "ovs_interfaceid": "f8858d9f-7fda-4961-8579-8b1536def97a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.523 226833 DEBUG nova.network.os_vif_util [None req-ec2e3a38-b959-4497-9ff1-1e9e7daf6e34 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:03:d8:76,bridge_name='br-int',has_traffic_filtering=True,id=f8858d9f-7fda-4961-8579-8b1536def97a,network=Network(5bcbd792-d73a-4177-857d-cca1f0044ec8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8858d9f-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.523 226833 DEBUG os_vif [None req-ec2e3a38-b959-4497-9ff1-1e9e7daf6e34 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:d8:76,bridge_name='br-int',has_traffic_filtering=True,id=f8858d9f-7fda-4961-8579-8b1536def97a,network=Network(5bcbd792-d73a-4177-857d-cca1f0044ec8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8858d9f-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:54:09 compute-2 systemd[1]: libpod-conmon-8402a72f1ede3a26351474ca189f4c3a5cf60a8d0e33ac16ce4cae75e33eecdb.scope: Deactivated successfully.
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.527 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.528 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8858d9f-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.529 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.531 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.536 226833 INFO os_vif [None req-ec2e3a38-b959-4497-9ff1-1e9e7daf6e34 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:d8:76,bridge_name='br-int',has_traffic_filtering=True,id=f8858d9f-7fda-4961-8579-8b1536def97a,network=Network(5bcbd792-d73a-4177-857d-cca1f0044ec8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8858d9f-7f')
Jan 31 08:54:09 compute-2 podman[324976]: 2026-01-31 08:54:09.575823659 +0000 UTC m=+0.041412476 container remove 8402a72f1ede3a26351474ca189f4c3a5cf60a8d0e33ac16ce4cae75e33eecdb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:54:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:54:09.580 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6a3f0258-e579-4f46-b5f6-8d40f5acaf7a]: (4, ('Sat Jan 31 08:54:09 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8 (8402a72f1ede3a26351474ca189f4c3a5cf60a8d0e33ac16ce4cae75e33eecdb)\n8402a72f1ede3a26351474ca189f4c3a5cf60a8d0e33ac16ce4cae75e33eecdb\nSat Jan 31 08:54:09 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8 (8402a72f1ede3a26351474ca189f4c3a5cf60a8d0e33ac16ce4cae75e33eecdb)\n8402a72f1ede3a26351474ca189f4c3a5cf60a8d0e33ac16ce4cae75e33eecdb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:54:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:54:09.582 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[939cdbe5-9489-4f60-b0fd-e55771dbd628]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:54:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:54:09.583 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bcbd792-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.584 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:09 compute-2 kernel: tap5bcbd792-d0: left promiscuous mode
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.588 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:54:09.592 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[be84fe02-f7fd-4962-9215-7c1d4f3c025d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:54:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:54:09.614 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0fee5882-ac20-4bb7-add5-062a3770bd4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:54:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:54:09.615 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[88ad8a3b-8882-45cc-88ad-76a138f5c33c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:54:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:54:09.628 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[48a108dd-a4f8-469d-b62a-9ba9b78ccb5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 978366, 'reachable_time': 33699, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325009, 'error': None, 'target': 'ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:54:09 compute-2 systemd[1]: run-netns-ovnmeta\x2d5bcbd792\x2dd73a\x2d4177\x2d857d\x2dcca1f0044ec8.mount: Deactivated successfully.
Jan 31 08:54:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:54:09.633 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:54:09 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:54:09.633 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[e77bb8d0-d912-4ab0-8f73-2c21d4e63fe3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:54:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e395 e395: 3 total, 3 up, 3 in
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.936 226833 DEBUG nova.compute.manager [req-2b4ba045-7c56-4efa-b691-a1872e6c68f9 req-dfc93b03-4a29-48ce-9996-c3fecdbc87ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Received event network-vif-unplugged-f8858d9f-7fda-4961-8579-8b1536def97a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.936 226833 DEBUG oslo_concurrency.lockutils [req-2b4ba045-7c56-4efa-b691-a1872e6c68f9 req-dfc93b03-4a29-48ce-9996-c3fecdbc87ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c304efec-22bc-408c-adea-b06aaf5fbe40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.937 226833 DEBUG oslo_concurrency.lockutils [req-2b4ba045-7c56-4efa-b691-a1872e6c68f9 req-dfc93b03-4a29-48ce-9996-c3fecdbc87ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c304efec-22bc-408c-adea-b06aaf5fbe40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.937 226833 DEBUG oslo_concurrency.lockutils [req-2b4ba045-7c56-4efa-b691-a1872e6c68f9 req-dfc93b03-4a29-48ce-9996-c3fecdbc87ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c304efec-22bc-408c-adea-b06aaf5fbe40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.937 226833 DEBUG nova.compute.manager [req-2b4ba045-7c56-4efa-b691-a1872e6c68f9 req-dfc93b03-4a29-48ce-9996-c3fecdbc87ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] No waiting events found dispatching network-vif-unplugged-f8858d9f-7fda-4961-8579-8b1536def97a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:54:09 compute-2 nova_compute[226829]: 2026-01-31 08:54:09.938 226833 DEBUG nova.compute.manager [req-2b4ba045-7c56-4efa-b691-a1872e6c68f9 req-dfc93b03-4a29-48ce-9996-c3fecdbc87ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Received event network-vif-unplugged-f8858d9f-7fda-4961-8579-8b1536def97a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:54:10 compute-2 nova_compute[226829]: 2026-01-31 08:54:10.007 226833 INFO nova.virt.libvirt.driver [None req-ec2e3a38-b959-4497-9ff1-1e9e7daf6e34 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Deleting instance files /var/lib/nova/instances/c304efec-22bc-408c-adea-b06aaf5fbe40_del
Jan 31 08:54:10 compute-2 nova_compute[226829]: 2026-01-31 08:54:10.008 226833 INFO nova.virt.libvirt.driver [None req-ec2e3a38-b959-4497-9ff1-1e9e7daf6e34 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Deletion of /var/lib/nova/instances/c304efec-22bc-408c-adea-b06aaf5fbe40_del complete
Jan 31 08:54:10 compute-2 nova_compute[226829]: 2026-01-31 08:54:10.123 226833 INFO nova.compute.manager [None req-ec2e3a38-b959-4497-9ff1-1e9e7daf6e34 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Took 0.89 seconds to destroy the instance on the hypervisor.
Jan 31 08:54:10 compute-2 nova_compute[226829]: 2026-01-31 08:54:10.124 226833 DEBUG oslo.service.loopingcall [None req-ec2e3a38-b959-4497-9ff1-1e9e7daf6e34 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:54:10 compute-2 nova_compute[226829]: 2026-01-31 08:54:10.124 226833 DEBUG nova.compute.manager [-] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:54:10 compute-2 nova_compute[226829]: 2026-01-31 08:54:10.124 226833 DEBUG nova.network.neutron [-] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:54:10 compute-2 ceph-mon[77282]: pgmap v3650: 305 pgs: 305 active+clean; 211 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 263 KiB/s rd, 390 KiB/s wr, 58 op/s
Jan 31 08:54:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2686569800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:54:10 compute-2 ceph-mon[77282]: osdmap e395: 3 total, 3 up, 3 in
Jan 31 08:54:10 compute-2 nova_compute[226829]: 2026-01-31 08:54:10.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:54:10 compute-2 nova_compute[226829]: 2026-01-31 08:54:10.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:54:10 compute-2 nova_compute[226829]: 2026-01-31 08:54:10.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:54:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:54:10 compute-2 nova_compute[226829]: 2026-01-31 08:54:10.554 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 31 08:54:10 compute-2 nova_compute[226829]: 2026-01-31 08:54:10.554 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:54:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:10.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/127726543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:54:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:11.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:11 compute-2 nova_compute[226829]: 2026-01-31 08:54:11.832 226833 DEBUG nova.network.neutron [req-2db8e184-817c-4380-96ba-0e4c7b330a64 req-f4de53b0-27d9-406e-8cb4-0ece4cba9a84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Updated VIF entry in instance network info cache for port f8858d9f-7fda-4961-8579-8b1536def97a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:54:11 compute-2 nova_compute[226829]: 2026-01-31 08:54:11.832 226833 DEBUG nova.network.neutron [req-2db8e184-817c-4380-96ba-0e4c7b330a64 req-f4de53b0-27d9-406e-8cb4-0ece4cba9a84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Updating instance_info_cache with network_info: [{"id": "f8858d9f-7fda-4961-8579-8b1536def97a", "address": "fa:16:3e:03:d8:76", "network": {"id": "5bcbd792-d73a-4177-857d-cca1f0044ec8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1702155659-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996284baaa2946258a0ab1be9a30d1f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8858d9f-7f", "ovs_interfaceid": "f8858d9f-7fda-4961-8579-8b1536def97a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:54:12 compute-2 nova_compute[226829]: 2026-01-31 08:54:12.042 226833 DEBUG oslo_concurrency.lockutils [req-2db8e184-817c-4380-96ba-0e4c7b330a64 req-f4de53b0-27d9-406e-8cb4-0ece4cba9a84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-c304efec-22bc-408c-adea-b06aaf5fbe40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:54:12 compute-2 nova_compute[226829]: 2026-01-31 08:54:12.051 226833 DEBUG nova.compute.manager [req-93476f40-e8ac-42cd-961e-fcb0c2908d31 req-c490d0c7-a403-4a9c-8f15-ededab8fa33e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Received event network-vif-plugged-f8858d9f-7fda-4961-8579-8b1536def97a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:54:12 compute-2 nova_compute[226829]: 2026-01-31 08:54:12.052 226833 DEBUG oslo_concurrency.lockutils [req-93476f40-e8ac-42cd-961e-fcb0c2908d31 req-c490d0c7-a403-4a9c-8f15-ededab8fa33e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c304efec-22bc-408c-adea-b06aaf5fbe40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:54:12 compute-2 nova_compute[226829]: 2026-01-31 08:54:12.052 226833 DEBUG oslo_concurrency.lockutils [req-93476f40-e8ac-42cd-961e-fcb0c2908d31 req-c490d0c7-a403-4a9c-8f15-ededab8fa33e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c304efec-22bc-408c-adea-b06aaf5fbe40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:54:12 compute-2 nova_compute[226829]: 2026-01-31 08:54:12.052 226833 DEBUG oslo_concurrency.lockutils [req-93476f40-e8ac-42cd-961e-fcb0c2908d31 req-c490d0c7-a403-4a9c-8f15-ededab8fa33e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c304efec-22bc-408c-adea-b06aaf5fbe40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:54:12 compute-2 nova_compute[226829]: 2026-01-31 08:54:12.052 226833 DEBUG nova.compute.manager [req-93476f40-e8ac-42cd-961e-fcb0c2908d31 req-c490d0c7-a403-4a9c-8f15-ededab8fa33e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] No waiting events found dispatching network-vif-plugged-f8858d9f-7fda-4961-8579-8b1536def97a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:54:12 compute-2 nova_compute[226829]: 2026-01-31 08:54:12.052 226833 WARNING nova.compute.manager [req-93476f40-e8ac-42cd-961e-fcb0c2908d31 req-c490d0c7-a403-4a9c-8f15-ededab8fa33e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Received unexpected event network-vif-plugged-f8858d9f-7fda-4961-8579-8b1536def97a for instance with vm_state active and task_state deleting.
Jan 31 08:54:12 compute-2 podman[325014]: 2026-01-31 08:54:12.154369782 +0000 UTC m=+0.043902354 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 08:54:12 compute-2 ceph-mon[77282]: pgmap v3652: 305 pgs: 305 active+clean; 219 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 66 KiB/s rd, 2.4 MiB/s wr, 96 op/s
Jan 31 08:54:12 compute-2 nova_compute[226829]: 2026-01-31 08:54:12.488 226833 DEBUG nova.network.neutron [-] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:54:12 compute-2 nova_compute[226829]: 2026-01-31 08:54:12.561 226833 INFO nova.compute.manager [-] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Took 2.44 seconds to deallocate network for instance.
Jan 31 08:54:12 compute-2 nova_compute[226829]: 2026-01-31 08:54:12.570 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:12 compute-2 nova_compute[226829]: 2026-01-31 08:54:12.663 226833 DEBUG oslo_concurrency.lockutils [None req-ec2e3a38-b959-4497-9ff1-1e9e7daf6e34 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:54:12 compute-2 nova_compute[226829]: 2026-01-31 08:54:12.664 226833 DEBUG oslo_concurrency.lockutils [None req-ec2e3a38-b959-4497-9ff1-1e9e7daf6e34 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:54:12 compute-2 nova_compute[226829]: 2026-01-31 08:54:12.678 226833 DEBUG nova.compute.manager [req-d357cce6-d4c9-4c9f-9983-352582b98f9d req-5d52b4fb-4e98-4499-a3e3-f8b29d4cff08 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Received event network-vif-deleted-f8858d9f-7fda-4961-8579-8b1536def97a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:54:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:12.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:12 compute-2 nova_compute[226829]: 2026-01-31 08:54:12.737 226833 DEBUG oslo_concurrency.processutils [None req-ec2e3a38-b959-4497-9ff1-1e9e7daf6e34 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:54:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:54:13 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3785438324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:54:13 compute-2 nova_compute[226829]: 2026-01-31 08:54:13.189 226833 DEBUG oslo_concurrency.processutils [None req-ec2e3a38-b959-4497-9ff1-1e9e7daf6e34 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:54:13 compute-2 nova_compute[226829]: 2026-01-31 08:54:13.195 226833 DEBUG nova.compute.provider_tree [None req-ec2e3a38-b959-4497-9ff1-1e9e7daf6e34 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:54:13 compute-2 nova_compute[226829]: 2026-01-31 08:54:13.221 226833 DEBUG nova.scheduler.client.report [None req-ec2e3a38-b959-4497-9ff1-1e9e7daf6e34 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:54:13 compute-2 nova_compute[226829]: 2026-01-31 08:54:13.264 226833 DEBUG oslo_concurrency.lockutils [None req-ec2e3a38-b959-4497-9ff1-1e9e7daf6e34 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:54:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:13.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:13 compute-2 nova_compute[226829]: 2026-01-31 08:54:13.338 226833 INFO nova.scheduler.client.report [None req-ec2e3a38-b959-4497-9ff1-1e9e7daf6e34 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Deleted allocations for instance c304efec-22bc-408c-adea-b06aaf5fbe40
Jan 31 08:54:13 compute-2 nova_compute[226829]: 2026-01-31 08:54:13.500 226833 DEBUG oslo_concurrency.lockutils [None req-ec2e3a38-b959-4497-9ff1-1e9e7daf6e34 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "c304efec-22bc-408c-adea-b06aaf5fbe40" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.266s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:54:13 compute-2 nova_compute[226829]: 2026-01-31 08:54:13.675 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:14 compute-2 ceph-mon[77282]: pgmap v3653: 305 pgs: 305 active+clean; 207 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 57 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Jan 31 08:54:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3785438324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:54:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1001679988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:54:14 compute-2 nova_compute[226829]: 2026-01-31 08:54:14.530 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:14.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2646746360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:54:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/490158924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:54:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:15.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:15 compute-2 nova_compute[226829]: 2026-01-31 08:54:15.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:54:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:54:16 compute-2 ceph-mon[77282]: pgmap v3654: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 61 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Jan 31 08:54:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3574426144' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:54:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:16.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:17.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:17 compute-2 nova_compute[226829]: 2026-01-31 08:54:17.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:54:17 compute-2 nova_compute[226829]: 2026-01-31 08:54:17.524 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:54:17 compute-2 nova_compute[226829]: 2026-01-31 08:54:17.525 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:54:17 compute-2 nova_compute[226829]: 2026-01-31 08:54:17.525 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:54:17 compute-2 nova_compute[226829]: 2026-01-31 08:54:17.525 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:54:17 compute-2 nova_compute[226829]: 2026-01-31 08:54:17.525 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:54:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:54:17 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3250238092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:54:17 compute-2 nova_compute[226829]: 2026-01-31 08:54:17.998 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:54:18 compute-2 nova_compute[226829]: 2026-01-31 08:54:18.135 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:54:18 compute-2 nova_compute[226829]: 2026-01-31 08:54:18.136 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4101MB free_disk=20.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:54:18 compute-2 nova_compute[226829]: 2026-01-31 08:54:18.136 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:54:18 compute-2 nova_compute[226829]: 2026-01-31 08:54:18.137 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:54:18 compute-2 ceph-mon[77282]: pgmap v3655: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 44 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 31 08:54:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3250238092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:54:18 compute-2 nova_compute[226829]: 2026-01-31 08:54:18.362 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:54:18 compute-2 nova_compute[226829]: 2026-01-31 08:54:18.362 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:54:18 compute-2 nova_compute[226829]: 2026-01-31 08:54:18.448 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing inventories for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:54:18 compute-2 nova_compute[226829]: 2026-01-31 08:54:18.614 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating ProviderTree inventory for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:54:18 compute-2 nova_compute[226829]: 2026-01-31 08:54:18.614 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating inventory in ProviderTree for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:54:18 compute-2 nova_compute[226829]: 2026-01-31 08:54:18.636 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing aggregate associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:54:18 compute-2 nova_compute[226829]: 2026-01-31 08:54:18.677 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:18 compute-2 nova_compute[226829]: 2026-01-31 08:54:18.695 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing trait associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, traits: COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:54:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:54:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:18.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:54:18 compute-2 nova_compute[226829]: 2026-01-31 08:54:18.741 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:54:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:54:19 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3882194921' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:54:19 compute-2 nova_compute[226829]: 2026-01-31 08:54:19.144 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:54:19 compute-2 nova_compute[226829]: 2026-01-31 08:54:19.150 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:54:19 compute-2 nova_compute[226829]: 2026-01-31 08:54:19.185 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:54:19 compute-2 nova_compute[226829]: 2026-01-31 08:54:19.209 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:54:19 compute-2 nova_compute[226829]: 2026-01-31 08:54:19.210 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:54:19 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3882194921' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:54:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:19.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:19 compute-2 nova_compute[226829]: 2026-01-31 08:54:19.532 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:20 compute-2 ceph-mon[77282]: pgmap v3656: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 43 KiB/s rd, 1.7 MiB/s wr, 65 op/s
Jan 31 08:54:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:54:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:54:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:20.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:54:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:21.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:21 compute-2 nova_compute[226829]: 2026-01-31 08:54:21.549 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:21 compute-2 nova_compute[226829]: 2026-01-31 08:54:21.616 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:22 compute-2 ceph-mon[77282]: pgmap v3657: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 770 KiB/s rd, 712 KiB/s wr, 77 op/s
Jan 31 08:54:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:22.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:22 compute-2 sudo[325108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:54:22 compute-2 sudo[325108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:22 compute-2 sudo[325108]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:22 compute-2 sudo[325133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:54:22 compute-2 sudo[325133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:22 compute-2 sudo[325133]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:23.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:23 compute-2 nova_compute[226829]: 2026-01-31 08:54:23.679 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:24 compute-2 ceph-mon[77282]: pgmap v3658: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 13 KiB/s wr, 53 op/s
Jan 31 08:54:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/658489239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:54:24 compute-2 nova_compute[226829]: 2026-01-31 08:54:24.466 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849649.4643543, c304efec-22bc-408c-adea-b06aaf5fbe40 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:54:24 compute-2 nova_compute[226829]: 2026-01-31 08:54:24.467 226833 INFO nova.compute.manager [-] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] VM Stopped (Lifecycle Event)
Jan 31 08:54:24 compute-2 nova_compute[226829]: 2026-01-31 08:54:24.534 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:24 compute-2 nova_compute[226829]: 2026-01-31 08:54:24.536 226833 DEBUG nova.compute.manager [None req-3ba6f863-7d28-4e9c-bf1b-1078b1af3326 - - - - - -] [instance: c304efec-22bc-408c-adea-b06aaf5fbe40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:54:24 compute-2 sudo[325159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:54:24 compute-2 sudo[325159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:24 compute-2 sudo[325159]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:24 compute-2 sudo[325184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:54:24 compute-2 sudo[325184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:24 compute-2 sudo[325184]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:24 compute-2 sudo[325209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:54:24 compute-2 sudo[325209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:24 compute-2 sudo[325209]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:24 compute-2 sudo[325234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:54:24 compute-2 sudo[325234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:24.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:25 compute-2 sudo[325234]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:25 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:54:25 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:54:25 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:54:25 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:54:25 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:54:25 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:54:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:25.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:54:26 compute-2 ceph-mon[77282]: pgmap v3659: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 81 op/s
Jan 31 08:54:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:26.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:27.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:28 compute-2 ceph-mon[77282]: pgmap v3660: 305 pgs: 305 active+clean; 182 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 512 KiB/s wr, 75 op/s
Jan 31 08:54:28 compute-2 nova_compute[226829]: 2026-01-31 08:54:28.680 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:28.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:29.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:29 compute-2 nova_compute[226829]: 2026-01-31 08:54:29.536 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:30 compute-2 ceph-mon[77282]: pgmap v3661: 305 pgs: 305 active+clean; 198 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 76 op/s
Jan 31 08:54:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:54:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:30.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:31.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:31 compute-2 sudo[325293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:54:31 compute-2 sudo[325293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:31 compute-2 sudo[325293]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1813203743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:54:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:54:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:54:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/925435894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:54:31 compute-2 sudo[325318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:54:31 compute-2 sudo[325318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:31 compute-2 sudo[325318]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:32 compute-2 ceph-mon[77282]: pgmap v3662: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 31 08:54:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:54:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:32.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:54:33 compute-2 nova_compute[226829]: 2026-01-31 08:54:33.211 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:54:33 compute-2 nova_compute[226829]: 2026-01-31 08:54:33.212 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:54:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:54:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:33.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:54:33 compute-2 ceph-mon[77282]: pgmap v3663: 305 pgs: 305 active+clean; 229 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.9 MiB/s wr, 78 op/s
Jan 31 08:54:33 compute-2 nova_compute[226829]: 2026-01-31 08:54:33.681 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:34 compute-2 nova_compute[226829]: 2026-01-31 08:54:34.537 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:34.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:35 compute-2 podman[325345]: 2026-01-31 08:54:35.172654749 +0000 UTC m=+0.062052537 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 08:54:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:35.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:54:36 compute-2 ceph-mon[77282]: pgmap v3664: 305 pgs: 305 active+clean; 236 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.5 MiB/s wr, 122 op/s
Jan 31 08:54:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:36.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:54:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:37.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:54:38 compute-2 ceph-mon[77282]: pgmap v3665: 305 pgs: 305 active+clean; 245 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 132 op/s
Jan 31 08:54:38 compute-2 nova_compute[226829]: 2026-01-31 08:54:38.683 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:38.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:39.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:39 compute-2 nova_compute[226829]: 2026-01-31 08:54:39.539 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:40 compute-2 ceph-mon[77282]: pgmap v3666: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.4 MiB/s wr, 163 op/s
Jan 31 08:54:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:54:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:40.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:41.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:41 compute-2 ceph-mon[77282]: pgmap v3667: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 162 op/s
Jan 31 08:54:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:42.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:42 compute-2 sudo[325376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:54:42 compute-2 sudo[325376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:42 compute-2 sudo[325376]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:43 compute-2 sudo[325407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:54:43 compute-2 sudo[325407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:43 compute-2 podman[325400]: 2026-01-31 08:54:43.038705852 +0000 UTC m=+0.041865139 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:54:43 compute-2 sudo[325407]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:43.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:43 compute-2 nova_compute[226829]: 2026-01-31 08:54:43.685 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:44 compute-2 ceph-mon[77282]: pgmap v3668: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Jan 31 08:54:44 compute-2 nova_compute[226829]: 2026-01-31 08:54:44.539 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:44.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:54:45 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1109915705' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:54:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1109915705' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:54:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:45.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:54:46 compute-2 ceph-mon[77282]: pgmap v3669: 305 pgs: 305 active+clean; 247 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 131 op/s
Jan 31 08:54:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:46.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:47.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:48 compute-2 ceph-mon[77282]: pgmap v3670: 305 pgs: 305 active+clean; 253 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.1 MiB/s wr, 85 op/s
Jan 31 08:54:48 compute-2 nova_compute[226829]: 2026-01-31 08:54:48.686 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:54:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:48.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:54:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:54:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:49.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:54:49 compute-2 nova_compute[226829]: 2026-01-31 08:54:49.541 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:50 compute-2 ceph-mon[77282]: pgmap v3671: 305 pgs: 305 active+clean; 261 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.3 MiB/s wr, 75 op/s
Jan 31 08:54:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:54:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:50.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #178. Immutable memtables: 0.
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:51.213258) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 178
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849691213328, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 763, "num_deletes": 252, "total_data_size": 1355791, "memory_usage": 1374752, "flush_reason": "Manual Compaction"}
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #179: started
Jan 31 08:54:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e396 e396: 3 total, 3 up, 3 in
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849691219119, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 179, "file_size": 894100, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 86807, "largest_seqno": 87564, "table_properties": {"data_size": 890372, "index_size": 1507, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8760, "raw_average_key_size": 19, "raw_value_size": 882718, "raw_average_value_size": 1997, "num_data_blocks": 66, "num_entries": 442, "num_filter_entries": 442, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849645, "oldest_key_time": 1769849645, "file_creation_time": 1769849691, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 5898 microseconds, and 2322 cpu microseconds.
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:51.219157) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #179: 894100 bytes OK
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:51.219173) [db/memtable_list.cc:519] [default] Level-0 commit table #179 started
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:51.223979) [db/memtable_list.cc:722] [default] Level-0 commit table #179: memtable #1 done
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:51.224034) EVENT_LOG_v1 {"time_micros": 1769849691224014, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:51.224060) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 1351746, prev total WAL file size 1351787, number of live WAL files 2.
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000175.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:51.224556) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [179(873KB)], [177(13MB)]
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849691224624, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [179], "files_L6": [177], "score": -1, "input_data_size": 15390074, "oldest_snapshot_seqno": -1}
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #180: 10614 keys, 13406632 bytes, temperature: kUnknown
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849691284757, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 180, "file_size": 13406632, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13338863, "index_size": 40136, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26565, "raw_key_size": 281341, "raw_average_key_size": 26, "raw_value_size": 13153938, "raw_average_value_size": 1239, "num_data_blocks": 1523, "num_entries": 10614, "num_filter_entries": 10614, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769849691, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 180, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:51.285114) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 13406632 bytes
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:51.287746) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 255.6 rd, 222.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 13.8 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(32.2) write-amplify(15.0) OK, records in: 11140, records dropped: 526 output_compression: NoCompression
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:51.287767) EVENT_LOG_v1 {"time_micros": 1769849691287759, "job": 114, "event": "compaction_finished", "compaction_time_micros": 60214, "compaction_time_cpu_micros": 27999, "output_level": 6, "num_output_files": 1, "total_output_size": 13406632, "num_input_records": 11140, "num_output_records": 10614, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849691287940, "job": 114, "event": "table_file_deletion", "file_number": 179}
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000177.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849691289368, "job": 114, "event": "table_file_deletion", "file_number": 177}
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:51.224487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:51.289487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:51.289494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:51.289495) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:51.289497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:54:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:54:51.289499) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:54:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:51.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:52 compute-2 ceph-mon[77282]: pgmap v3672: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 338 KiB/s rd, 2.3 MiB/s wr, 72 op/s
Jan 31 08:54:52 compute-2 ceph-mon[77282]: osdmap e396: 3 total, 3 up, 3 in
Jan 31 08:54:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:52.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:53.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:54:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1800567950' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:54:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:54:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1800567950' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:54:53 compute-2 nova_compute[226829]: 2026-01-31 08:54:53.730 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e397 e397: 3 total, 3 up, 3 in
Jan 31 08:54:54 compute-2 ceph-mon[77282]: pgmap v3674: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 408 KiB/s rd, 2.8 MiB/s wr, 94 op/s
Jan 31 08:54:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1800567950' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:54:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1800567950' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:54:54 compute-2 nova_compute[226829]: 2026-01-31 08:54:54.543 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:54.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:55 compute-2 ceph-mon[77282]: osdmap e397: 3 total, 3 up, 3 in
Jan 31 08:54:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:54:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:55.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:54:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:54:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e398 e398: 3 total, 3 up, 3 in
Jan 31 08:54:56 compute-2 ceph-mon[77282]: pgmap v3676: 305 pgs: 305 active+clean; 286 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.9 MiB/s wr, 102 op/s
Jan 31 08:54:56 compute-2 ceph-mon[77282]: osdmap e398: 3 total, 3 up, 3 in
Jan 31 08:54:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:56.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e399 e399: 3 total, 3 up, 3 in
Jan 31 08:54:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:54:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:57.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:54:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:54:58.040 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=95, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=94) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:54:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:54:58.041 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:54:58 compute-2 nova_compute[226829]: 2026-01-31 08:54:58.042 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:54:58.043 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '95'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:54:58 compute-2 ceph-mon[77282]: pgmap v3678: 305 pgs: 305 active+clean; 319 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.9 MiB/s rd, 4.2 MiB/s wr, 101 op/s
Jan 31 08:54:58 compute-2 ceph-mon[77282]: osdmap e399: 3 total, 3 up, 3 in
Jan 31 08:54:58 compute-2 nova_compute[226829]: 2026-01-31 08:54:58.731 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:58.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:54:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:54:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:59.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:54:59 compute-2 nova_compute[226829]: 2026-01-31 08:54:59.545 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:00 compute-2 ceph-mon[77282]: pgmap v3680: 305 pgs: 305 active+clean; 347 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.8 MiB/s rd, 6.1 MiB/s wr, 145 op/s
Jan 31 08:55:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:55:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:00.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:55:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:01.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:55:02 compute-2 ceph-mon[77282]: pgmap v3681: 305 pgs: 305 active+clean; 360 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 7.0 MiB/s rd, 6.9 MiB/s wr, 162 op/s
Jan 31 08:55:02 compute-2 nova_compute[226829]: 2026-01-31 08:55:02.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:55:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:55:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:02.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:55:03 compute-2 sudo[325455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:55:03 compute-2 sudo[325455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:03 compute-2 sudo[325455]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:03 compute-2 sudo[325480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:55:03 compute-2 sudo[325480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:03 compute-2 sudo[325480]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:03.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:03 compute-2 nova_compute[226829]: 2026-01-31 08:55:03.767 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:04 compute-2 ceph-mon[77282]: pgmap v3682: 305 pgs: 305 active+clean; 360 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 5.3 MiB/s wr, 151 op/s
Jan 31 08:55:04 compute-2 nova_compute[226829]: 2026-01-31 08:55:04.548 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:04 compute-2 nova_compute[226829]: 2026-01-31 08:55:04.743 226833 DEBUG oslo_concurrency.lockutils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Acquiring lock "27645b95-3e37-43ba-8465-c8789c0f8700" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:55:04 compute-2 nova_compute[226829]: 2026-01-31 08:55:04.743 226833 DEBUG oslo_concurrency.lockutils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "27645b95-3e37-43ba-8465-c8789c0f8700" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:55:04 compute-2 nova_compute[226829]: 2026-01-31 08:55:04.797 226833 DEBUG nova.compute.manager [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:55:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:04.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e400 e400: 3 total, 3 up, 3 in
Jan 31 08:55:04 compute-2 nova_compute[226829]: 2026-01-31 08:55:04.918 226833 DEBUG oslo_concurrency.lockutils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:55:04 compute-2 nova_compute[226829]: 2026-01-31 08:55:04.918 226833 DEBUG oslo_concurrency.lockutils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:55:04 compute-2 nova_compute[226829]: 2026-01-31 08:55:04.929 226833 DEBUG nova.virt.hardware [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:55:04 compute-2 nova_compute[226829]: 2026-01-31 08:55:04.930 226833 INFO nova.compute.claims [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:55:05 compute-2 nova_compute[226829]: 2026-01-31 08:55:05.083 226833 DEBUG oslo_concurrency.processutils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:55:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:55:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:05.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:55:05 compute-2 nova_compute[226829]: 2026-01-31 08:55:05.401 226833 DEBUG oslo_concurrency.lockutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "b13507e9-5374-4c40-b919-b9d7b61d4f12" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:55:05 compute-2 nova_compute[226829]: 2026-01-31 08:55:05.401 226833 DEBUG oslo_concurrency.lockutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "b13507e9-5374-4c40-b919-b9d7b61d4f12" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:55:05 compute-2 nova_compute[226829]: 2026-01-31 08:55:05.427 226833 DEBUG nova.compute.manager [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:55:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:55:05 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2275626305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:55:05 compute-2 nova_compute[226829]: 2026-01-31 08:55:05.517 226833 DEBUG oslo_concurrency.processutils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:55:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:55:05 compute-2 nova_compute[226829]: 2026-01-31 08:55:05.522 226833 DEBUG nova.compute.provider_tree [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:55:05 compute-2 nova_compute[226829]: 2026-01-31 08:55:05.528 226833 DEBUG oslo_concurrency.lockutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:55:05 compute-2 nova_compute[226829]: 2026-01-31 08:55:05.562 226833 DEBUG nova.scheduler.client.report [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:55:05 compute-2 nova_compute[226829]: 2026-01-31 08:55:05.640 226833 DEBUG oslo_concurrency.lockutils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:55:05 compute-2 nova_compute[226829]: 2026-01-31 08:55:05.641 226833 DEBUG nova.compute.manager [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:55:05 compute-2 nova_compute[226829]: 2026-01-31 08:55:05.644 226833 DEBUG oslo_concurrency.lockutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:55:05 compute-2 nova_compute[226829]: 2026-01-31 08:55:05.653 226833 DEBUG nova.virt.hardware [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:55:05 compute-2 nova_compute[226829]: 2026-01-31 08:55:05.653 226833 INFO nova.compute.claims [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:55:05 compute-2 nova_compute[226829]: 2026-01-31 08:55:05.779 226833 DEBUG nova.compute.manager [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:55:05 compute-2 nova_compute[226829]: 2026-01-31 08:55:05.779 226833 DEBUG nova.network.neutron [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:55:05 compute-2 nova_compute[226829]: 2026-01-31 08:55:05.815 226833 INFO nova.virt.libvirt.driver [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:55:05 compute-2 nova_compute[226829]: 2026-01-31 08:55:05.859 226833 DEBUG nova.compute.manager [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:55:05 compute-2 ceph-mon[77282]: osdmap e400: 3 total, 3 up, 3 in
Jan 31 08:55:05 compute-2 ceph-mon[77282]: pgmap v3684: 305 pgs: 305 active+clean; 360 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 93 op/s
Jan 31 08:55:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2275626305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:55:05 compute-2 nova_compute[226829]: 2026-01-31 08:55:05.918 226833 DEBUG oslo_concurrency.processutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.025 226833 DEBUG nova.compute.manager [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.027 226833 DEBUG nova.virt.libvirt.driver [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.027 226833 INFO nova.virt.libvirt.driver [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Creating image(s)
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.058 226833 DEBUG nova.storage.rbd_utils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] rbd image 27645b95-3e37-43ba-8465-c8789c0f8700_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.080 226833 DEBUG nova.storage.rbd_utils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] rbd image 27645b95-3e37-43ba-8465-c8789c0f8700_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.111 226833 DEBUG nova.storage.rbd_utils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] rbd image 27645b95-3e37-43ba-8465-c8789c0f8700_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.123 226833 DEBUG oslo_concurrency.lockutils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Acquiring lock "21c8520e0c6de60e01dfdab7f242e26525ecf5f9" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.124 226833 DEBUG oslo_concurrency.lockutils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "21c8520e0c6de60e01dfdab7f242e26525ecf5f9" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:55:06 compute-2 podman[325592]: 2026-01-31 08:55:06.199773019 +0000 UTC m=+0.078858424 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 31 08:55:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:55:06 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/153569570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.362 226833 DEBUG oslo_concurrency.processutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.366 226833 DEBUG nova.compute.provider_tree [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.385 226833 DEBUG nova.scheduler.client.report [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.428 226833 DEBUG oslo_concurrency.lockutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.429 226833 DEBUG nova.compute.manager [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.487 226833 DEBUG nova.virt.libvirt.imagebackend [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Image locations are: [{'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/6827763f-c9c4-43fc-825d-2f9c946c4536/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/6827763f-c9c4-43fc-825d-2f9c946c4536/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.534 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.538 226833 DEBUG nova.compute.manager [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.538 226833 DEBUG nova.network.neutron [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.543 226833 DEBUG nova.virt.libvirt.imagebackend [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Selected location: {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/6827763f-c9c4-43fc-825d-2f9c946c4536/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.544 226833 DEBUG nova.storage.rbd_utils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] cloning images/6827763f-c9c4-43fc-825d-2f9c946c4536@snap to None/27645b95-3e37-43ba-8465-c8789c0f8700_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.578 226833 INFO nova.virt.libvirt.driver [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.616 226833 DEBUG nova.compute.manager [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.663 226833 DEBUG oslo_concurrency.lockutils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "21c8520e0c6de60e01dfdab7f242e26525ecf5f9" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.796 226833 DEBUG nova.compute.manager [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.797 226833 DEBUG nova.virt.libvirt.driver [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.797 226833 INFO nova.virt.libvirt.driver [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Creating image(s)
Jan 31 08:55:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:06.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.817 226833 DEBUG nova.storage.rbd_utils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image b13507e9-5374-4c40-b919-b9d7b61d4f12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.839 226833 DEBUG nova.storage.rbd_utils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image b13507e9-5374-4c40-b919-b9d7b61d4f12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.862 226833 DEBUG nova.storage.rbd_utils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image b13507e9-5374-4c40-b919-b9d7b61d4f12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.866 226833 DEBUG oslo_concurrency.processutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:55:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/153569570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.898 226833 DEBUG nova.policy [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a798fdf6d13d4af4b166dd94b5cea7cc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e6e96de7b2784be1adce763bc9c9adc5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.908 226833 DEBUG nova.objects.instance [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lazy-loading 'migration_context' on Instance uuid 27645b95-3e37-43ba-8465-c8789c0f8700 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:55:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:06.933 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:55:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:06.934 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:55:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:06.934 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.934 226833 DEBUG oslo_concurrency.processutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.935 226833 DEBUG oslo_concurrency.lockutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.935 226833 DEBUG oslo_concurrency.lockutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.936 226833 DEBUG oslo_concurrency.lockutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.954 226833 DEBUG nova.storage.rbd_utils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image b13507e9-5374-4c40-b919-b9d7b61d4f12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.958 226833 DEBUG oslo_concurrency.processutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 b13507e9-5374-4c40-b919-b9d7b61d4f12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.982 226833 DEBUG nova.virt.libvirt.driver [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.983 226833 DEBUG nova.virt.libvirt.driver [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Ensure instance console log exists: /var/lib/nova/instances/27645b95-3e37-43ba-8465-c8789c0f8700/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.983 226833 DEBUG oslo_concurrency.lockutils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.984 226833 DEBUG oslo_concurrency.lockutils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:55:06 compute-2 nova_compute[226829]: 2026-01-31 08:55:06.984 226833 DEBUG oslo_concurrency.lockutils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:55:07 compute-2 nova_compute[226829]: 2026-01-31 08:55:07.142 226833 DEBUG nova.policy [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4a56abd8fdd341ae88a99e102ab399de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:55:07 compute-2 nova_compute[226829]: 2026-01-31 08:55:07.218 226833 DEBUG oslo_concurrency.processutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 b13507e9-5374-4c40-b919-b9d7b61d4f12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:55:07 compute-2 nova_compute[226829]: 2026-01-31 08:55:07.282 226833 DEBUG nova.storage.rbd_utils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] resizing rbd image b13507e9-5374-4c40-b919-b9d7b61d4f12_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:55:07 compute-2 nova_compute[226829]: 2026-01-31 08:55:07.386 226833 DEBUG nova.objects.instance [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'migration_context' on Instance uuid b13507e9-5374-4c40-b919-b9d7b61d4f12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:55:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:07.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:07 compute-2 ovn_controller[133834]: 2026-01-31T08:55:07Z|00797|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 31 08:55:07 compute-2 nova_compute[226829]: 2026-01-31 08:55:07.410 226833 DEBUG nova.virt.libvirt.driver [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:55:07 compute-2 nova_compute[226829]: 2026-01-31 08:55:07.410 226833 DEBUG nova.virt.libvirt.driver [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Ensure instance console log exists: /var/lib/nova/instances/b13507e9-5374-4c40-b919-b9d7b61d4f12/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:55:07 compute-2 nova_compute[226829]: 2026-01-31 08:55:07.411 226833 DEBUG oslo_concurrency.lockutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:55:07 compute-2 nova_compute[226829]: 2026-01-31 08:55:07.411 226833 DEBUG oslo_concurrency.lockutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:55:07 compute-2 nova_compute[226829]: 2026-01-31 08:55:07.411 226833 DEBUG oslo_concurrency.lockutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:55:07 compute-2 nova_compute[226829]: 2026-01-31 08:55:07.490 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:55:07 compute-2 ceph-mon[77282]: pgmap v3685: 305 pgs: 305 active+clean; 360 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 77 op/s
Jan 31 08:55:08 compute-2 nova_compute[226829]: 2026-01-31 08:55:08.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:55:08 compute-2 nova_compute[226829]: 2026-01-31 08:55:08.805 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:08.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:09.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:09 compute-2 nova_compute[226829]: 2026-01-31 08:55:09.550 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:10 compute-2 ceph-mon[77282]: pgmap v3686: 305 pgs: 305 active+clean; 366 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 1.4 MiB/s wr, 41 op/s
Jan 31 08:55:10 compute-2 nova_compute[226829]: 2026-01-31 08:55:10.316 226833 DEBUG nova.network.neutron [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Successfully created port: 4bed2fb3-03ee-4717-bf09-a7a55d07eef8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:55:10 compute-2 nova_compute[226829]: 2026-01-31 08:55:10.359 226833 DEBUG nova.network.neutron [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Successfully created port: 495003a6-1a9f-4821-a3a7-1aee88998d2e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:55:10 compute-2 nova_compute[226829]: 2026-01-31 08:55:10.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:55:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:55:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:10.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:11.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:12 compute-2 ceph-mon[77282]: pgmap v3687: 305 pgs: 305 active+clean; 406 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 60 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Jan 31 08:55:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1320594743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:55:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3939374713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:55:12 compute-2 nova_compute[226829]: 2026-01-31 08:55:12.302 226833 DEBUG nova.network.neutron [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Successfully updated port: 495003a6-1a9f-4821-a3a7-1aee88998d2e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:55:12 compute-2 nova_compute[226829]: 2026-01-31 08:55:12.337 226833 DEBUG nova.network.neutron [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Successfully updated port: 4bed2fb3-03ee-4717-bf09-a7a55d07eef8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:55:12 compute-2 nova_compute[226829]: 2026-01-31 08:55:12.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:55:12 compute-2 nova_compute[226829]: 2026-01-31 08:55:12.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:55:12 compute-2 nova_compute[226829]: 2026-01-31 08:55:12.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:55:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:12.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:13 compute-2 podman[325923]: 2026-01-31 08:55:13.147992247 +0000 UTC m=+0.039302179 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 08:55:13 compute-2 nova_compute[226829]: 2026-01-31 08:55:13.315 226833 DEBUG nova.compute.manager [req-097496e9-20eb-4387-8388-72b6b75f3864 req-0de30f3f-01cd-4f61-b90a-e3e6da40ca12 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Received event network-changed-495003a6-1a9f-4821-a3a7-1aee88998d2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:55:13 compute-2 nova_compute[226829]: 2026-01-31 08:55:13.315 226833 DEBUG nova.compute.manager [req-097496e9-20eb-4387-8388-72b6b75f3864 req-0de30f3f-01cd-4f61-b90a-e3e6da40ca12 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Refreshing instance network info cache due to event network-changed-495003a6-1a9f-4821-a3a7-1aee88998d2e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:55:13 compute-2 nova_compute[226829]: 2026-01-31 08:55:13.316 226833 DEBUG oslo_concurrency.lockutils [req-097496e9-20eb-4387-8388-72b6b75f3864 req-0de30f3f-01cd-4f61-b90a-e3e6da40ca12 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-27645b95-3e37-43ba-8465-c8789c0f8700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:55:13 compute-2 nova_compute[226829]: 2026-01-31 08:55:13.316 226833 DEBUG oslo_concurrency.lockutils [req-097496e9-20eb-4387-8388-72b6b75f3864 req-0de30f3f-01cd-4f61-b90a-e3e6da40ca12 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-27645b95-3e37-43ba-8465-c8789c0f8700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:55:13 compute-2 nova_compute[226829]: 2026-01-31 08:55:13.316 226833 DEBUG nova.network.neutron [req-097496e9-20eb-4387-8388-72b6b75f3864 req-0de30f3f-01cd-4f61-b90a-e3e6da40ca12 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Refreshing network info cache for port 495003a6-1a9f-4821-a3a7-1aee88998d2e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:55:13 compute-2 nova_compute[226829]: 2026-01-31 08:55:13.317 226833 DEBUG nova.compute.manager [req-ae6cde7c-40cd-4d14-b813-deffb13fc950 req-0ff2eee2-ce4d-48a4-9af8-5ba21b64eb21 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Received event network-changed-4bed2fb3-03ee-4717-bf09-a7a55d07eef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:55:13 compute-2 nova_compute[226829]: 2026-01-31 08:55:13.317 226833 DEBUG nova.compute.manager [req-ae6cde7c-40cd-4d14-b813-deffb13fc950 req-0ff2eee2-ce4d-48a4-9af8-5ba21b64eb21 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Refreshing instance network info cache due to event network-changed-4bed2fb3-03ee-4717-bf09-a7a55d07eef8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:55:13 compute-2 nova_compute[226829]: 2026-01-31 08:55:13.317 226833 DEBUG oslo_concurrency.lockutils [req-ae6cde7c-40cd-4d14-b813-deffb13fc950 req-0ff2eee2-ce4d-48a4-9af8-5ba21b64eb21 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-b13507e9-5374-4c40-b919-b9d7b61d4f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:55:13 compute-2 nova_compute[226829]: 2026-01-31 08:55:13.318 226833 DEBUG oslo_concurrency.lockutils [req-ae6cde7c-40cd-4d14-b813-deffb13fc950 req-0ff2eee2-ce4d-48a4-9af8-5ba21b64eb21 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-b13507e9-5374-4c40-b919-b9d7b61d4f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:55:13 compute-2 nova_compute[226829]: 2026-01-31 08:55:13.318 226833 DEBUG nova.network.neutron [req-ae6cde7c-40cd-4d14-b813-deffb13fc950 req-0ff2eee2-ce4d-48a4-9af8-5ba21b64eb21 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Refreshing network info cache for port 4bed2fb3-03ee-4717-bf09-a7a55d07eef8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:55:13 compute-2 nova_compute[226829]: 2026-01-31 08:55:13.339 226833 DEBUG oslo_concurrency.lockutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "refresh_cache-b13507e9-5374-4c40-b919-b9d7b61d4f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:55:13 compute-2 nova_compute[226829]: 2026-01-31 08:55:13.339 226833 DEBUG oslo_concurrency.lockutils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Acquiring lock "refresh_cache-27645b95-3e37-43ba-8465-c8789c0f8700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:55:13 compute-2 nova_compute[226829]: 2026-01-31 08:55:13.340 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 08:55:13 compute-2 nova_compute[226829]: 2026-01-31 08:55:13.340 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 08:55:13 compute-2 nova_compute[226829]: 2026-01-31 08:55:13.340 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:55:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:13.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:13 compute-2 nova_compute[226829]: 2026-01-31 08:55:13.650 226833 DEBUG nova.network.neutron [req-ae6cde7c-40cd-4d14-b813-deffb13fc950 req-0ff2eee2-ce4d-48a4-9af8-5ba21b64eb21 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:55:13 compute-2 nova_compute[226829]: 2026-01-31 08:55:13.709 226833 DEBUG nova.network.neutron [req-097496e9-20eb-4387-8388-72b6b75f3864 req-0de30f3f-01cd-4f61-b90a-e3e6da40ca12 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:55:13 compute-2 nova_compute[226829]: 2026-01-31 08:55:13.806 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:14 compute-2 ceph-mon[77282]: pgmap v3688: 305 pgs: 305 active+clean; 406 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 48 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 31 08:55:14 compute-2 nova_compute[226829]: 2026-01-31 08:55:14.194 226833 DEBUG nova.network.neutron [req-ae6cde7c-40cd-4d14-b813-deffb13fc950 req-0ff2eee2-ce4d-48a4-9af8-5ba21b64eb21 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:55:14 compute-2 nova_compute[226829]: 2026-01-31 08:55:14.223 226833 DEBUG oslo_concurrency.lockutils [req-ae6cde7c-40cd-4d14-b813-deffb13fc950 req-0ff2eee2-ce4d-48a4-9af8-5ba21b64eb21 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-b13507e9-5374-4c40-b919-b9d7b61d4f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:55:14 compute-2 nova_compute[226829]: 2026-01-31 08:55:14.224 226833 DEBUG oslo_concurrency.lockutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquired lock "refresh_cache-b13507e9-5374-4c40-b919-b9d7b61d4f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:55:14 compute-2 nova_compute[226829]: 2026-01-31 08:55:14.224 226833 DEBUG nova.network.neutron [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:55:14 compute-2 nova_compute[226829]: 2026-01-31 08:55:14.383 226833 DEBUG nova.network.neutron [req-097496e9-20eb-4387-8388-72b6b75f3864 req-0de30f3f-01cd-4f61-b90a-e3e6da40ca12 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:55:14 compute-2 nova_compute[226829]: 2026-01-31 08:55:14.405 226833 DEBUG oslo_concurrency.lockutils [req-097496e9-20eb-4387-8388-72b6b75f3864 req-0de30f3f-01cd-4f61-b90a-e3e6da40ca12 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-27645b95-3e37-43ba-8465-c8789c0f8700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:55:14 compute-2 nova_compute[226829]: 2026-01-31 08:55:14.405 226833 DEBUG oslo_concurrency.lockutils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Acquired lock "refresh_cache-27645b95-3e37-43ba-8465-c8789c0f8700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:55:14 compute-2 nova_compute[226829]: 2026-01-31 08:55:14.406 226833 DEBUG nova.network.neutron [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:55:14 compute-2 nova_compute[226829]: 2026-01-31 08:55:14.552 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:14 compute-2 nova_compute[226829]: 2026-01-31 08:55:14.559 226833 DEBUG nova.network.neutron [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:55:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:14.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:14 compute-2 nova_compute[226829]: 2026-01-31 08:55:14.839 226833 DEBUG nova.network.neutron [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:55:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:55:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:15.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:55:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:55:16 compute-2 ceph-mon[77282]: pgmap v3689: 305 pgs: 305 active+clean; 406 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 46 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.287 226833 DEBUG nova.network.neutron [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Updating instance_info_cache with network_info: [{"id": "4bed2fb3-03ee-4717-bf09-a7a55d07eef8", "address": "fa:16:3e:b4:bb:04", "network": {"id": "e71e82f1-2476-4d79-9b64-3d07204593df", "bridge": "br-int", "label": "tempest-network-smoke--1092340605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bed2fb3-03", "ovs_interfaceid": "4bed2fb3-03ee-4717-bf09-a7a55d07eef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.371 226833 DEBUG oslo_concurrency.lockutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Releasing lock "refresh_cache-b13507e9-5374-4c40-b919-b9d7b61d4f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.372 226833 DEBUG nova.compute.manager [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Instance network_info: |[{"id": "4bed2fb3-03ee-4717-bf09-a7a55d07eef8", "address": "fa:16:3e:b4:bb:04", "network": {"id": "e71e82f1-2476-4d79-9b64-3d07204593df", "bridge": "br-int", "label": "tempest-network-smoke--1092340605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bed2fb3-03", "ovs_interfaceid": "4bed2fb3-03ee-4717-bf09-a7a55d07eef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.376 226833 DEBUG nova.virt.libvirt.driver [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Start _get_guest_xml network_info=[{"id": "4bed2fb3-03ee-4717-bf09-a7a55d07eef8", "address": "fa:16:3e:b4:bb:04", "network": {"id": "e71e82f1-2476-4d79-9b64-3d07204593df", "bridge": "br-int", "label": "tempest-network-smoke--1092340605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bed2fb3-03", "ovs_interfaceid": "4bed2fb3-03ee-4717-bf09-a7a55d07eef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.382 226833 WARNING nova.virt.libvirt.driver [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.389 226833 DEBUG nova.virt.libvirt.host [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.390 226833 DEBUG nova.virt.libvirt.host [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.393 226833 DEBUG nova.virt.libvirt.host [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.393 226833 DEBUG nova.virt.libvirt.host [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.394 226833 DEBUG nova.virt.libvirt.driver [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.395 226833 DEBUG nova.virt.hardware [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.395 226833 DEBUG nova.virt.hardware [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.395 226833 DEBUG nova.virt.hardware [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.395 226833 DEBUG nova.virt.hardware [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.396 226833 DEBUG nova.virt.hardware [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.396 226833 DEBUG nova.virt.hardware [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.396 226833 DEBUG nova.virt.hardware [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.396 226833 DEBUG nova.virt.hardware [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.396 226833 DEBUG nova.virt.hardware [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.397 226833 DEBUG nova.virt.hardware [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.397 226833 DEBUG nova.virt.hardware [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.400 226833 DEBUG oslo_concurrency.processutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.458 226833 DEBUG nova.network.neutron [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Updating instance_info_cache with network_info: [{"id": "495003a6-1a9f-4821-a3a7-1aee88998d2e", "address": "fa:16:3e:37:2d:f9", "network": {"id": "53eebd24-3b32-4949-827a-524f9e042652", "bridge": "br-int", "label": "tempest-TestStampPattern-1663738459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6e96de7b2784be1adce763bc9c9adc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap495003a6-1a", "ovs_interfaceid": "495003a6-1a9f-4821-a3a7-1aee88998d2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.623 226833 DEBUG oslo_concurrency.lockutils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Releasing lock "refresh_cache-27645b95-3e37-43ba-8465-c8789c0f8700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.623 226833 DEBUG nova.compute.manager [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Instance network_info: |[{"id": "495003a6-1a9f-4821-a3a7-1aee88998d2e", "address": "fa:16:3e:37:2d:f9", "network": {"id": "53eebd24-3b32-4949-827a-524f9e042652", "bridge": "br-int", "label": "tempest-TestStampPattern-1663738459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6e96de7b2784be1adce763bc9c9adc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap495003a6-1a", "ovs_interfaceid": "495003a6-1a9f-4821-a3a7-1aee88998d2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.626 226833 DEBUG nova.virt.libvirt.driver [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Start _get_guest_xml network_info=[{"id": "495003a6-1a9f-4821-a3a7-1aee88998d2e", "address": "fa:16:3e:37:2d:f9", "network": {"id": "53eebd24-3b32-4949-827a-524f9e042652", "bridge": "br-int", "label": "tempest-TestStampPattern-1663738459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6e96de7b2784be1adce763bc9c9adc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap495003a6-1a", "ovs_interfaceid": "495003a6-1a9f-4821-a3a7-1aee88998d2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-31T08:54:52Z,direct_url=<?>,disk_format='raw',id=6827763f-c9c4-43fc-825d-2f9c946c4536,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-1067192783',owner='e6e96de7b2784be1adce763bc9c9adc5',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-31T08:54:59Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '6827763f-c9c4-43fc-825d-2f9c946c4536'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.630 226833 WARNING nova.virt.libvirt.driver [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.635 226833 DEBUG nova.virt.libvirt.host [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.635 226833 DEBUG nova.virt.libvirt.host [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.638 226833 DEBUG nova.virt.libvirt.host [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.639 226833 DEBUG nova.virt.libvirt.host [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.640 226833 DEBUG nova.virt.libvirt.driver [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.640 226833 DEBUG nova.virt.hardware [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-31T08:54:52Z,direct_url=<?>,disk_format='raw',id=6827763f-c9c4-43fc-825d-2f9c946c4536,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-1067192783',owner='e6e96de7b2784be1adce763bc9c9adc5',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-31T08:54:59Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.641 226833 DEBUG nova.virt.hardware [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.641 226833 DEBUG nova.virt.hardware [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.641 226833 DEBUG nova.virt.hardware [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.641 226833 DEBUG nova.virt.hardware [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.642 226833 DEBUG nova.virt.hardware [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.642 226833 DEBUG nova.virt.hardware [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.642 226833 DEBUG nova.virt.hardware [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.642 226833 DEBUG nova.virt.hardware [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.642 226833 DEBUG nova.virt.hardware [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.643 226833 DEBUG nova.virt.hardware [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.645 226833 DEBUG oslo_concurrency.processutils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:55:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:55:16 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/773898932' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:55:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:16.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.835 226833 DEBUG oslo_concurrency.processutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.859 226833 DEBUG nova.storage.rbd_utils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image b13507e9-5374-4c40-b919-b9d7b61d4f12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:55:16 compute-2 nova_compute[226829]: 2026-01-31 08:55:16.863 226833 DEBUG oslo_concurrency.processutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:55:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:55:17 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2725231681' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.047 226833 DEBUG oslo_concurrency.processutils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.070 226833 DEBUG nova.storage.rbd_utils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] rbd image 27645b95-3e37-43ba-8465-c8789c0f8700_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.074 226833 DEBUG oslo_concurrency.processutils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:55:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/773898932' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:55:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2725231681' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:55:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:55:17 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1228416371' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.338 226833 DEBUG oslo_concurrency.processutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.339 226833 DEBUG nova.virt.libvirt.vif [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:55:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1198552944',display_name='tempest-TestNetworkBasicOps-server-1198552944',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1198552944',id=206,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDOrAc73kLV+i+xNaDwSFpYhqmjjip+brTb/F1WQVdODy2G5Aulg4YJQPokDZxchVl00UFgwUIq6BPV+OajtYBIlmVVa9XGOK2f1Ixs/ana7c3e/mHthS6MRW0hZaVFRwQ==',key_name='tempest-TestNetworkBasicOps-1328355201',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-8xl90rgk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:55:06Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=b13507e9-5374-4c40-b919-b9d7b61d4f12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4bed2fb3-03ee-4717-bf09-a7a55d07eef8", "address": "fa:16:3e:b4:bb:04", "network": {"id": "e71e82f1-2476-4d79-9b64-3d07204593df", "bridge": "br-int", "label": "tempest-network-smoke--1092340605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bed2fb3-03", "ovs_interfaceid": "4bed2fb3-03ee-4717-bf09-a7a55d07eef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.340 226833 DEBUG nova.network.os_vif_util [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "4bed2fb3-03ee-4717-bf09-a7a55d07eef8", "address": "fa:16:3e:b4:bb:04", "network": {"id": "e71e82f1-2476-4d79-9b64-3d07204593df", "bridge": "br-int", "label": "tempest-network-smoke--1092340605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bed2fb3-03", "ovs_interfaceid": "4bed2fb3-03ee-4717-bf09-a7a55d07eef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.340 226833 DEBUG nova.network.os_vif_util [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:bb:04,bridge_name='br-int',has_traffic_filtering=True,id=4bed2fb3-03ee-4717-bf09-a7a55d07eef8,network=Network(e71e82f1-2476-4d79-9b64-3d07204593df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bed2fb3-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.342 226833 DEBUG nova.objects.instance [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'pci_devices' on Instance uuid b13507e9-5374-4c40-b919-b9d7b61d4f12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.360 226833 DEBUG nova.virt.libvirt.driver [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:55:17 compute-2 nova_compute[226829]:   <uuid>b13507e9-5374-4c40-b919-b9d7b61d4f12</uuid>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   <name>instance-000000ce</name>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <nova:name>tempest-TestNetworkBasicOps-server-1198552944</nova:name>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:55:16</nova:creationTime>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <nova:user uuid="4a56abd8fdd341ae88a99e102ab399de">tempest-TestNetworkBasicOps-1691550221-project-member</nova:user>
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <nova:project uuid="0d55ec1a5544450dba4e4fd1426395d7">tempest-TestNetworkBasicOps-1691550221</nova:project>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <nova:port uuid="4bed2fb3-03ee-4717-bf09-a7a55d07eef8">
Jan 31 08:55:17 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <system>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <entry name="serial">b13507e9-5374-4c40-b919-b9d7b61d4f12</entry>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <entry name="uuid">b13507e9-5374-4c40-b919-b9d7b61d4f12</entry>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     </system>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   <os>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   </os>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   <features>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   </features>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/b13507e9-5374-4c40-b919-b9d7b61d4f12_disk">
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       </source>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/b13507e9-5374-4c40-b919-b9d7b61d4f12_disk.config">
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       </source>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:b4:bb:04"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <target dev="tap4bed2fb3-03"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/b13507e9-5374-4c40-b919-b9d7b61d4f12/console.log" append="off"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <video>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     </video>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:55:17 compute-2 nova_compute[226829]: </domain>
Jan 31 08:55:17 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.361 226833 DEBUG nova.compute.manager [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Preparing to wait for external event network-vif-plugged-4bed2fb3-03ee-4717-bf09-a7a55d07eef8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.362 226833 DEBUG oslo_concurrency.lockutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "b13507e9-5374-4c40-b919-b9d7b61d4f12-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.362 226833 DEBUG oslo_concurrency.lockutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "b13507e9-5374-4c40-b919-b9d7b61d4f12-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.362 226833 DEBUG oslo_concurrency.lockutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "b13507e9-5374-4c40-b919-b9d7b61d4f12-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.363 226833 DEBUG nova.virt.libvirt.vif [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:55:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1198552944',display_name='tempest-TestNetworkBasicOps-server-1198552944',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1198552944',id=206,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDOrAc73kLV+i+xNaDwSFpYhqmjjip+brTb/F1WQVdODy2G5Aulg4YJQPokDZxchVl00UFgwUIq6BPV+OajtYBIlmVVa9XGOK2f1Ixs/ana7c3e/mHthS6MRW0hZaVFRwQ==',key_name='tempest-TestNetworkBasicOps-1328355201',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-8xl90rgk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:55:06Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=b13507e9-5374-4c40-b919-b9d7b61d4f12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4bed2fb3-03ee-4717-bf09-a7a55d07eef8", "address": "fa:16:3e:b4:bb:04", "network": {"id": "e71e82f1-2476-4d79-9b64-3d07204593df", "bridge": "br-int", "label": "tempest-network-smoke--1092340605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bed2fb3-03", "ovs_interfaceid": "4bed2fb3-03ee-4717-bf09-a7a55d07eef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.363 226833 DEBUG nova.network.os_vif_util [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "4bed2fb3-03ee-4717-bf09-a7a55d07eef8", "address": "fa:16:3e:b4:bb:04", "network": {"id": "e71e82f1-2476-4d79-9b64-3d07204593df", "bridge": "br-int", "label": "tempest-network-smoke--1092340605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bed2fb3-03", "ovs_interfaceid": "4bed2fb3-03ee-4717-bf09-a7a55d07eef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.364 226833 DEBUG nova.network.os_vif_util [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:bb:04,bridge_name='br-int',has_traffic_filtering=True,id=4bed2fb3-03ee-4717-bf09-a7a55d07eef8,network=Network(e71e82f1-2476-4d79-9b64-3d07204593df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bed2fb3-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.364 226833 DEBUG os_vif [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:bb:04,bridge_name='br-int',has_traffic_filtering=True,id=4bed2fb3-03ee-4717-bf09-a7a55d07eef8,network=Network(e71e82f1-2476-4d79-9b64-3d07204593df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bed2fb3-03') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.365 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.365 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.366 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.368 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.369 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bed2fb3-03, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.369 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4bed2fb3-03, col_values=(('external_ids', {'iface-id': '4bed2fb3-03ee-4717-bf09-a7a55d07eef8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b4:bb:04', 'vm-uuid': 'b13507e9-5374-4c40-b919-b9d7b61d4f12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:55:17 compute-2 NetworkManager[48999]: <info>  [1769849717.3729] manager: (tap4bed2fb3-03): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/397)
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.374 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.377 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.378 226833 INFO os_vif [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:bb:04,bridge_name='br-int',has_traffic_filtering=True,id=4bed2fb3-03ee-4717-bf09-a7a55d07eef8,network=Network(e71e82f1-2476-4d79-9b64-3d07204593df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bed2fb3-03')
Jan 31 08:55:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:17.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:55:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:55:17 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/690692146' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.520 226833 DEBUG oslo_concurrency.processutils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.521 226833 DEBUG nova.virt.libvirt.vif [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:55:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-114162633',display_name='tempest-TestStampPattern-server-114162633',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-teststamppattern-server-114162633',id=205,image_ref='6827763f-c9c4-43fc-825d-2f9c946c4536',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOfz5EJ1EvVhkopw671xsxq9cmxCD9AJZYRdx1fZqnsJKH4HE3ct43AahyjBDecQtfre/K2oZ3kPMxp5bbpWjZgXwmif2lJfZCK32Cd1YqdcHbaKXFc2nUgzqikPeTQnpA==',key_name='tempest-TestStampPattern-1030074900',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6e96de7b2784be1adce763bc9c9adc5',ramdisk_id='',reservation_id='r-jkkdi703',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='da90f1fb-9090-49b5-a510-d7e6ac7a30d6',image_min_disk='1',image_min_ram='0',image_owner_id='e6e96de7b2784be1adce763bc9c9adc5',image_owner_project_name='tempest-TestStampPattern-23409568',image_owner_user_name='tempest-TestStampPattern-23409568-project-member',image_user_id='a798fdf6d13d4af4b166dd94b5cea7cc',network_allocated='True',owner_project_name='tempest-TestStampPattern-23409568',owner_user_name='tempest-TestStampPattern-23409568-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:55:05Z,user_data=None,user_id='a798fdf6d13d4af4b166dd94b5cea7cc',uuid=27645b95-3e37-43ba-8465-c8789c0f8700,vcpu_model=Vi
rtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "495003a6-1a9f-4821-a3a7-1aee88998d2e", "address": "fa:16:3e:37:2d:f9", "network": {"id": "53eebd24-3b32-4949-827a-524f9e042652", "bridge": "br-int", "label": "tempest-TestStampPattern-1663738459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6e96de7b2784be1adce763bc9c9adc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap495003a6-1a", "ovs_interfaceid": "495003a6-1a9f-4821-a3a7-1aee88998d2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.521 226833 DEBUG nova.network.os_vif_util [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Converting VIF {"id": "495003a6-1a9f-4821-a3a7-1aee88998d2e", "address": "fa:16:3e:37:2d:f9", "network": {"id": "53eebd24-3b32-4949-827a-524f9e042652", "bridge": "br-int", "label": "tempest-TestStampPattern-1663738459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6e96de7b2784be1adce763bc9c9adc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap495003a6-1a", "ovs_interfaceid": "495003a6-1a9f-4821-a3a7-1aee88998d2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.522 226833 DEBUG nova.network.os_vif_util [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:2d:f9,bridge_name='br-int',has_traffic_filtering=True,id=495003a6-1a9f-4821-a3a7-1aee88998d2e,network=Network(53eebd24-3b32-4949-827a-524f9e042652),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap495003a6-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.522 226833 DEBUG nova.objects.instance [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 27645b95-3e37-43ba-8465-c8789c0f8700 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.552 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.552 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.552 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.552 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.553 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.578 226833 DEBUG nova.virt.libvirt.driver [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.578 226833 DEBUG nova.virt.libvirt.driver [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.578 226833 DEBUG nova.virt.libvirt.driver [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No VIF found with MAC fa:16:3e:b4:bb:04, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.579 226833 INFO nova.virt.libvirt.driver [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Using config drive
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.598 226833 DEBUG nova.storage.rbd_utils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image b13507e9-5374-4c40-b919-b9d7b61d4f12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.604 226833 DEBUG nova.virt.libvirt.driver [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:55:17 compute-2 nova_compute[226829]:   <uuid>27645b95-3e37-43ba-8465-c8789c0f8700</uuid>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   <name>instance-000000cd</name>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <nova:name>tempest-TestStampPattern-server-114162633</nova:name>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:55:16</nova:creationTime>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <nova:user uuid="a798fdf6d13d4af4b166dd94b5cea7cc">tempest-TestStampPattern-23409568-project-member</nova:user>
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <nova:project uuid="e6e96de7b2784be1adce763bc9c9adc5">tempest-TestStampPattern-23409568</nova:project>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="6827763f-c9c4-43fc-825d-2f9c946c4536"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <nova:port uuid="495003a6-1a9f-4821-a3a7-1aee88998d2e">
Jan 31 08:55:17 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <system>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <entry name="serial">27645b95-3e37-43ba-8465-c8789c0f8700</entry>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <entry name="uuid">27645b95-3e37-43ba-8465-c8789c0f8700</entry>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     </system>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   <os>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   </os>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   <features>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   </features>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/27645b95-3e37-43ba-8465-c8789c0f8700_disk">
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       </source>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/27645b95-3e37-43ba-8465-c8789c0f8700_disk.config">
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       </source>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:55:17 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:37:2d:f9"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <target dev="tap495003a6-1a"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/27645b95-3e37-43ba-8465-c8789c0f8700/console.log" append="off"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <video>
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     </video>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <input type="keyboard" bus="usb"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:55:17 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:55:17 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:55:17 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:55:17 compute-2 nova_compute[226829]: </domain>
Jan 31 08:55:17 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.605 226833 DEBUG nova.compute.manager [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Preparing to wait for external event network-vif-plugged-495003a6-1a9f-4821-a3a7-1aee88998d2e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.605 226833 DEBUG oslo_concurrency.lockutils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Acquiring lock "27645b95-3e37-43ba-8465-c8789c0f8700-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.605 226833 DEBUG oslo_concurrency.lockutils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "27645b95-3e37-43ba-8465-c8789c0f8700-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.605 226833 DEBUG oslo_concurrency.lockutils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "27645b95-3e37-43ba-8465-c8789c0f8700-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.606 226833 DEBUG nova.virt.libvirt.vif [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:55:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-114162633',display_name='tempest-TestStampPattern-server-114162633',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-teststamppattern-server-114162633',id=205,image_ref='6827763f-c9c4-43fc-825d-2f9c946c4536',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOfz5EJ1EvVhkopw671xsxq9cmxCD9AJZYRdx1fZqnsJKH4HE3ct43AahyjBDecQtfre/K2oZ3kPMxp5bbpWjZgXwmif2lJfZCK32Cd1YqdcHbaKXFc2nUgzqikPeTQnpA==',key_name='tempest-TestStampPattern-1030074900',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6e96de7b2784be1adce763bc9c9adc5',ramdisk_id='',reservation_id='r-jkkdi703',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='da90f1fb-9090-49b5-a510-d7e6ac7a30d6',image_min_disk='1',image_min_ram='0',image_owner_id='e6e96de7b2784be1adce763bc9c9adc5',image_owner_project_name='tempest-TestStampPattern-23409568',image_owner_user_name='tempest-TestStampPattern-23409568-project-member',image_user_id='a798fdf6d13d4af4b166dd94b5cea7cc',network_allocated='True',owner_project_name='tempest-TestStampPattern-23409568',owner_user_name='tempest-TestStampPattern-23409568-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:55:05Z,user_data=None,user_id='a798fdf6d13d4af4b166dd94b5cea7cc',uuid=27645b95-3e37-43ba-8465-c8789c0f8700,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "495003a6-1a9f-4821-a3a7-1aee88998d2e", "address": "fa:16:3e:37:2d:f9", "network": {"id": "53eebd24-3b32-4949-827a-524f9e042652", "bridge": "br-int", "label": "tempest-TestStampPattern-1663738459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6e96de7b2784be1adce763bc9c9adc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap495003a6-1a", "ovs_interfaceid": "495003a6-1a9f-4821-a3a7-1aee88998d2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.606 226833 DEBUG nova.network.os_vif_util [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Converting VIF {"id": "495003a6-1a9f-4821-a3a7-1aee88998d2e", "address": "fa:16:3e:37:2d:f9", "network": {"id": "53eebd24-3b32-4949-827a-524f9e042652", "bridge": "br-int", "label": "tempest-TestStampPattern-1663738459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6e96de7b2784be1adce763bc9c9adc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap495003a6-1a", "ovs_interfaceid": "495003a6-1a9f-4821-a3a7-1aee88998d2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.607 226833 DEBUG nova.network.os_vif_util [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:2d:f9,bridge_name='br-int',has_traffic_filtering=True,id=495003a6-1a9f-4821-a3a7-1aee88998d2e,network=Network(53eebd24-3b32-4949-827a-524f9e042652),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap495003a6-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.607 226833 DEBUG os_vif [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:2d:f9,bridge_name='br-int',has_traffic_filtering=True,id=495003a6-1a9f-4821-a3a7-1aee88998d2e,network=Network(53eebd24-3b32-4949-827a-524f9e042652),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap495003a6-1a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.609 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.609 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.609 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.612 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.612 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap495003a6-1a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.612 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap495003a6-1a, col_values=(('external_ids', {'iface-id': '495003a6-1a9f-4821-a3a7-1aee88998d2e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:2d:f9', 'vm-uuid': '27645b95-3e37-43ba-8465-c8789c0f8700'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:55:17 compute-2 NetworkManager[48999]: <info>  [1769849717.6147] manager: (tap495003a6-1a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/398)
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.615 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.619 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.621 226833 INFO os_vif [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:2d:f9,bridge_name='br-int',has_traffic_filtering=True,id=495003a6-1a9f-4821-a3a7-1aee88998d2e,network=Network(53eebd24-3b32-4949-827a-524f9e042652),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap495003a6-1a')
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.797 226833 DEBUG nova.virt.libvirt.driver [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.798 226833 DEBUG nova.virt.libvirt.driver [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.798 226833 DEBUG nova.virt.libvirt.driver [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] No VIF found with MAC fa:16:3e:37:2d:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.798 226833 INFO nova.virt.libvirt.driver [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Using config drive
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.821 226833 DEBUG nova.storage.rbd_utils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] rbd image 27645b95-3e37-43ba-8465-c8789c0f8700_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:55:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:55:17 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3943486430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:55:17 compute-2 nova_compute[226829]: 2026-01-31 08:55:17.998 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:55:18 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.115 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000ce as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:55:18 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.116 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000ce as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:55:18 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.119 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000cd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:55:18 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.119 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000cd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:55:18 compute-2 ceph-mon[77282]: pgmap v3690: 305 pgs: 305 active+clean; 406 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 31 08:55:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1228416371' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:55:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/690692146' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:55:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1522886520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:55:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3943486430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:55:18 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.277 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:55:18 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.278 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4067MB free_disk=20.876312255859375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:55:18 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.278 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:55:18 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.279 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:55:18 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.369 226833 INFO nova.virt.libvirt.driver [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Creating config drive at /var/lib/nova/instances/b13507e9-5374-4c40-b919-b9d7b61d4f12/disk.config
Jan 31 08:55:18 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.374 226833 DEBUG oslo_concurrency.processutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b13507e9-5374-4c40-b919-b9d7b61d4f12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp70tfdt2g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:55:18 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.509 226833 DEBUG oslo_concurrency.processutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b13507e9-5374-4c40-b919-b9d7b61d4f12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp70tfdt2g" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:55:18 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.536 226833 DEBUG nova.storage.rbd_utils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image b13507e9-5374-4c40-b919-b9d7b61d4f12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:55:18 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.541 226833 DEBUG oslo_concurrency.processutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b13507e9-5374-4c40-b919-b9d7b61d4f12/disk.config b13507e9-5374-4c40-b919-b9d7b61d4f12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:55:18 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.574 226833 INFO nova.virt.libvirt.driver [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Creating config drive at /var/lib/nova/instances/27645b95-3e37-43ba-8465-c8789c0f8700/disk.config
Jan 31 08:55:18 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.578 226833 DEBUG oslo_concurrency.processutils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/27645b95-3e37-43ba-8465-c8789c0f8700/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp3h91a_md execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:55:18 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.613 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 27645b95-3e37-43ba-8465-c8789c0f8700 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:55:18 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.614 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance b13507e9-5374-4c40-b919-b9d7b61d4f12 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:55:18 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.614 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:55:18 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.615 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:55:18 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.700 226833 DEBUG oslo_concurrency.processutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b13507e9-5374-4c40-b919-b9d7b61d4f12/disk.config b13507e9-5374-4c40-b919-b9d7b61d4f12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:55:18 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.701 226833 INFO nova.virt.libvirt.driver [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Deleting local config drive /var/lib/nova/instances/b13507e9-5374-4c40-b919-b9d7b61d4f12/disk.config because it was imported into RBD.
Jan 31 08:55:18 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.718 226833 DEBUG oslo_concurrency.processutils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/27645b95-3e37-43ba-8465-c8789c0f8700/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp3h91a_md" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:55:18 compute-2 kernel: tap4bed2fb3-03: entered promiscuous mode
Jan 31 08:55:18 compute-2 NetworkManager[48999]: <info>  [1769849718.7858] manager: (tap4bed2fb3-03): new Tun device (/org/freedesktop/NetworkManager/Devices/399)
Jan 31 08:55:18 compute-2 ovn_controller[133834]: 2026-01-31T08:55:18Z|00798|binding|INFO|Claiming lport 4bed2fb3-03ee-4717-bf09-a7a55d07eef8 for this chassis.
Jan 31 08:55:18 compute-2 ovn_controller[133834]: 2026-01-31T08:55:18Z|00799|binding|INFO|4bed2fb3-03ee-4717-bf09-a7a55d07eef8: Claiming fa:16:3e:b4:bb:04 10.100.0.14
Jan 31 08:55:18 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.794 226833 DEBUG nova.storage.rbd_utils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] rbd image 27645b95-3e37-43ba-8465-c8789c0f8700_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:55:18 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.801 226833 DEBUG oslo_concurrency.processutils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/27645b95-3e37-43ba-8465-c8789c0f8700/disk.config 27645b95-3e37-43ba-8465-c8789c0f8700_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:55:18 compute-2 systemd-udevd[326208]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:55:18 compute-2 systemd-machined[195142]: New machine qemu-91-instance-000000ce.
Jan 31 08:55:18 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:18.817 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:bb:04 10.100.0.14'], port_security=['fa:16:3e:b4:bb:04 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b13507e9-5374-4c40-b919-b9d7b61d4f12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e71e82f1-2476-4d79-9b64-3d07204593df', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4626dc44-870d-4112-a662-eaa355e02fd7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2f59b70-6149-48e7-81ee-ebb34275d7e4, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=4bed2fb3-03ee-4717-bf09-a7a55d07eef8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:55:18 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:18.818 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 4bed2fb3-03ee-4717-bf09-a7a55d07eef8 in datapath e71e82f1-2476-4d79-9b64-3d07204593df bound to our chassis
Jan 31 08:55:18 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:18.820 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e71e82f1-2476-4d79-9b64-3d07204593df
Jan 31 08:55:18 compute-2 NetworkManager[48999]: <info>  [1769849718.8235] device (tap4bed2fb3-03): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:55:18 compute-2 NetworkManager[48999]: <info>  [1769849718.8247] device (tap4bed2fb3-03): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:55:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:55:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:18.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:55:18 compute-2 ovn_controller[133834]: 2026-01-31T08:55:18Z|00800|binding|INFO|Setting lport 4bed2fb3-03ee-4717-bf09-a7a55d07eef8 ovn-installed in OVS
Jan 31 08:55:18 compute-2 ovn_controller[133834]: 2026-01-31T08:55:18Z|00801|binding|INFO|Setting lport 4bed2fb3-03ee-4717-bf09-a7a55d07eef8 up in Southbound
Jan 31 08:55:18 compute-2 systemd[1]: Started Virtual Machine qemu-91-instance-000000ce.
Jan 31 08:55:18 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.832 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:18 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:18.835 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0c0d1c43-4f96-4374-9dc5-d96587a464d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:18 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:18.836 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape71e82f1-21 in ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:55:18 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:18.838 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape71e82f1-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:55:18 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:18.839 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[05a3d27c-8ca2-41ed-b99a-cd9a8350ae66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:18 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:18.839 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6f3ba2e6-4cb2-4f92-9f22-d7b6769e0cbb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:18 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:18.847 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[0808f852-0805-4895-9a6f-023ecc0a45aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:18 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:18.869 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[aadf5488-8b2d-40a7-8c7a-f63cfc452cf7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:18 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:18.912 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[59e1f85e-7c05-446a-8309-48649d694450]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:18 compute-2 NetworkManager[48999]: <info>  [1769849718.9215] manager: (tape71e82f1-20): new Veth device (/org/freedesktop/NetworkManager/Devices/400)
Jan 31 08:55:18 compute-2 systemd-udevd[326212]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:55:18 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:18.920 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[890e5ee2-6992-4827-9c56-fe576bece247]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:18 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:18.946 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[8e7e4640-95c8-4667-ae51-9ffda41f75c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:18 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:18.948 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[3ccda728-f30e-417a-a90a-5954e85433db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:18 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.957 226833 DEBUG oslo_concurrency.processutils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/27645b95-3e37-43ba-8465-c8789c0f8700/disk.config 27645b95-3e37-43ba-8465-c8789c0f8700_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:55:18 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.958 226833 INFO nova.virt.libvirt.driver [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Deleting local config drive /var/lib/nova/instances/27645b95-3e37-43ba-8465-c8789c0f8700/disk.config because it was imported into RBD.
Jan 31 08:55:18 compute-2 NetworkManager[48999]: <info>  [1769849718.9662] device (tape71e82f1-20): carrier: link connected
Jan 31 08:55:18 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:18.973 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[3d4a82f8-b52b-465a-ba26-011dcc7ab33a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:18 compute-2 kernel: tap495003a6-1a: entered promiscuous mode
Jan 31 08:55:18 compute-2 systemd-udevd[326259]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:55:18 compute-2 ovn_controller[133834]: 2026-01-31T08:55:18Z|00802|binding|INFO|Claiming lport 495003a6-1a9f-4821-a3a7-1aee88998d2e for this chassis.
Jan 31 08:55:18 compute-2 ovn_controller[133834]: 2026-01-31T08:55:18Z|00803|binding|INFO|495003a6-1a9f-4821-a3a7-1aee88998d2e: Claiming fa:16:3e:37:2d:f9 10.100.0.11
Jan 31 08:55:18 compute-2 NetworkManager[48999]: <info>  [1769849718.9939] manager: (tap495003a6-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/401)
Jan 31 08:55:18 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.992 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:18 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:18.994 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ebbc4244-414f-42ac-b02d-df75ae99c29d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape71e82f1-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:3c:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 250], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 997143, 'reachable_time': 26953, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326267, 'error': None, 'target': 'ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:19 compute-2 nova_compute[226829]: 2026-01-31 08:55:18.999 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:19 compute-2 NetworkManager[48999]: <info>  [1769849719.0017] device (tap495003a6-1a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:55:19 compute-2 NetworkManager[48999]: <info>  [1769849719.0023] device (tap495003a6-1a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:55:19 compute-2 nova_compute[226829]: 2026-01-31 08:55:19.004 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:19 compute-2 NetworkManager[48999]: <info>  [1769849719.0051] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/402)
Jan 31 08:55:19 compute-2 NetworkManager[48999]: <info>  [1769849719.0057] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.010 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c0cf0acf-fd7c-46d3-a716-90e4c06e7788]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea9:3c1c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 997143, 'tstamp': 997143}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326275, 'error': None, 'target': 'ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.016 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:2d:f9 10.100.0.11'], port_security=['fa:16:3e:37:2d:f9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '27645b95-3e37-43ba-8465-c8789c0f8700', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-53eebd24-3b32-4949-827a-524f9e042652', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6e96de7b2784be1adce763bc9c9adc5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e34ba2ee-7d71-4f69-8288-b62c847fa225', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2fa4ba9c-944f-4ccd-90bc-07135c4442c5, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=495003a6-1a9f-4821-a3a7-1aee88998d2e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.028 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0e7b175a-820e-45f0-ad1f-30bc2ee52d8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape71e82f1-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:3c:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 250], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 997143, 'reachable_time': 26953, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 326281, 'error': None, 'target': 'ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:19 compute-2 systemd-machined[195142]: New machine qemu-92-instance-000000cd.
Jan 31 08:55:19 compute-2 nova_compute[226829]: 2026-01-31 08:55:19.040 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:19 compute-2 systemd[1]: Started Virtual Machine qemu-92-instance-000000cd.
Jan 31 08:55:19 compute-2 nova_compute[226829]: 2026-01-31 08:55:19.058 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:19 compute-2 ovn_controller[133834]: 2026-01-31T08:55:19Z|00804|binding|INFO|Setting lport 495003a6-1a9f-4821-a3a7-1aee88998d2e ovn-installed in OVS
Jan 31 08:55:19 compute-2 ovn_controller[133834]: 2026-01-31T08:55:19Z|00805|binding|INFO|Setting lport 495003a6-1a9f-4821-a3a7-1aee88998d2e up in Southbound
Jan 31 08:55:19 compute-2 nova_compute[226829]: 2026-01-31 08:55:19.062 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.065 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[872bd923-dd50-464d-9995-000a23a5e0e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:19 compute-2 nova_compute[226829]: 2026-01-31 08:55:19.072 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.126 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[acd7f04d-ab67-48a1-a410-9909f698be1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.128 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape71e82f1-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.128 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.128 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape71e82f1-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:55:19 compute-2 NetworkManager[48999]: <info>  [1769849719.1308] manager: (tape71e82f1-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/404)
Jan 31 08:55:19 compute-2 nova_compute[226829]: 2026-01-31 08:55:19.130 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:19 compute-2 kernel: tape71e82f1-20: entered promiscuous mode
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.134 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape71e82f1-20, col_values=(('external_ids', {'iface-id': '40798090-132f-4914-b176-6fbb940ac583'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:55:19 compute-2 ovn_controller[133834]: 2026-01-31T08:55:19Z|00806|binding|INFO|Releasing lport 40798090-132f-4914-b176-6fbb940ac583 from this chassis (sb_readonly=0)
Jan 31 08:55:19 compute-2 nova_compute[226829]: 2026-01-31 08:55:19.135 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:19 compute-2 nova_compute[226829]: 2026-01-31 08:55:19.136 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.137 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e71e82f1-2476-4d79-9b64-3d07204593df.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e71e82f1-2476-4d79-9b64-3d07204593df.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.139 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[dab507d0-173b-4a97-a0cc-016aa4006ec0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.140 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-e71e82f1-2476-4d79-9b64-3d07204593df
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/e71e82f1-2476-4d79-9b64-3d07204593df.pid.haproxy
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID e71e82f1-2476-4d79-9b64-3d07204593df
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.142 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df', 'env', 'PROCESS_TAG=haproxy-e71e82f1-2476-4d79-9b64-3d07204593df', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e71e82f1-2476-4d79-9b64-3d07204593df.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:55:19 compute-2 nova_compute[226829]: 2026-01-31 08:55:19.142 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:19 compute-2 nova_compute[226829]: 2026-01-31 08:55:19.244 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769849719.2440627, b13507e9-5374-4c40-b919-b9d7b61d4f12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:55:19 compute-2 nova_compute[226829]: 2026-01-31 08:55:19.244 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] VM Started (Lifecycle Event)
Jan 31 08:55:19 compute-2 nova_compute[226829]: 2026-01-31 08:55:19.332 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:55:19 compute-2 nova_compute[226829]: 2026-01-31 08:55:19.337 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769849719.2466218, b13507e9-5374-4c40-b919-b9d7b61d4f12 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:55:19 compute-2 nova_compute[226829]: 2026-01-31 08:55:19.337 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] VM Paused (Lifecycle Event)
Jan 31 08:55:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:19.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:19 compute-2 nova_compute[226829]: 2026-01-31 08:55:19.484 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:55:19 compute-2 nova_compute[226829]: 2026-01-31 08:55:19.489 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:55:19 compute-2 podman[326378]: 2026-01-31 08:55:19.509953909 +0000 UTC m=+0.041879159 container create 037973fb7cd23daee8efe332963ac16ba8887a97449789d22bccfb7b4fa0ae3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:55:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:55:19 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1654730449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:55:19 compute-2 systemd[1]: Started libpod-conmon-037973fb7cd23daee8efe332963ac16ba8887a97449789d22bccfb7b4fa0ae3c.scope.
Jan 31 08:55:19 compute-2 nova_compute[226829]: 2026-01-31 08:55:19.547 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:55:19 compute-2 nova_compute[226829]: 2026-01-31 08:55:19.554 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:55:19 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:55:19 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8020c05a9219c5aa9c6d44e064f997e9906339484aa1c3350271aaac409e59c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:55:19 compute-2 podman[326378]: 2026-01-31 08:55:19.487775876 +0000 UTC m=+0.019701156 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:55:19 compute-2 podman[326378]: 2026-01-31 08:55:19.589593873 +0000 UTC m=+0.121519133 container init 037973fb7cd23daee8efe332963ac16ba8887a97449789d22bccfb7b4fa0ae3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 08:55:19 compute-2 podman[326378]: 2026-01-31 08:55:19.593972573 +0000 UTC m=+0.125897823 container start 037973fb7cd23daee8efe332963ac16ba8887a97449789d22bccfb7b4fa0ae3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:55:19 compute-2 neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df[326414]: [NOTICE]   (326434) : New worker (326439) forked
Jan 31 08:55:19 compute-2 neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df[326414]: [NOTICE]   (326434) : Loading success.
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.643 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 495003a6-1a9f-4821-a3a7-1aee88998d2e in datapath 53eebd24-3b32-4949-827a-524f9e042652 unbound from our chassis
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.646 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 53eebd24-3b32-4949-827a-524f9e042652
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.653 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[38a84aa5-bb4f-4676-b2d7-ed255e57d6d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.654 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap53eebd24-31 in ovnmeta-53eebd24-3b32-4949-827a-524f9e042652 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.655 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap53eebd24-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.655 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[99e16484-1aad-4566-88c8-27e8b4eaf681]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.657 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6f586f36-9239-4e1e-b987-c73eec229c71]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.664 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[7300283d-0e7f-42dc-aca0-2bba8471f6c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.673 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0cdb8b1d-5b40-46b2-8719-4803ede40084]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.694 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[54b53475-0756-48a1-aba7-5559c24de306]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:19 compute-2 NetworkManager[48999]: <info>  [1769849719.7005] manager: (tap53eebd24-30): new Veth device (/org/freedesktop/NetworkManager/Devices/405)
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.700 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[66f6d774-7b23-4822-b851-2371af2323d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.723 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[2c76753b-b2c0-418b-abb1-4e07a5a0591a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.727 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[5c239de4-848d-4588-81b4-50b7aac5ae5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:19 compute-2 NetworkManager[48999]: <info>  [1769849719.7413] device (tap53eebd24-30): carrier: link connected
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.744 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[8e00dea3-88b2-412a-a62c-924082340540]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.759 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[69ecaa4f-f3d9-4a19-9695-ee6a4106847a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap53eebd24-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:bd:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 997221, 'reachable_time': 20972, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326463, 'error': None, 'target': 'ovnmeta-53eebd24-3b32-4949-827a-524f9e042652', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.768 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[46fff265-36c5-4a22-a22f-e6139ea31f37]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:bd43'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 997221, 'tstamp': 997221}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326464, 'error': None, 'target': 'ovnmeta-53eebd24-3b32-4949-827a-524f9e042652', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.778 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7e0198fb-94c0-4024-a71c-c639a0f01bda]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap53eebd24-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:bd:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 997221, 'reachable_time': 20972, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 326465, 'error': None, 'target': 'ovnmeta-53eebd24-3b32-4949-827a-524f9e042652', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.798 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ea12689b-e0e4-4e61-89ea-3096d1180e2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.839 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f561b4bb-2499-4374-8c9e-12baa13fe986]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.841 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap53eebd24-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.841 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.841 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap53eebd24-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:55:19 compute-2 nova_compute[226829]: 2026-01-31 08:55:19.882 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:19 compute-2 NetworkManager[48999]: <info>  [1769849719.8841] manager: (tap53eebd24-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/406)
Jan 31 08:55:19 compute-2 kernel: tap53eebd24-30: entered promiscuous mode
Jan 31 08:55:19 compute-2 nova_compute[226829]: 2026-01-31 08:55:19.885 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.886 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap53eebd24-30, col_values=(('external_ids', {'iface-id': '62efe4e7-cbd5-44c6-8fac-7cb5fe1c3604'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:55:19 compute-2 nova_compute[226829]: 2026-01-31 08:55:19.887 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:19 compute-2 ovn_controller[133834]: 2026-01-31T08:55:19Z|00807|binding|INFO|Releasing lport 62efe4e7-cbd5-44c6-8fac-7cb5fe1c3604 from this chassis (sb_readonly=0)
Jan 31 08:55:19 compute-2 nova_compute[226829]: 2026-01-31 08:55:19.892 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.892 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/53eebd24-3b32-4949-827a-524f9e042652.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/53eebd24-3b32-4949-827a-524f9e042652.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.893 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed8d7bb-8e80-47d9-8ecc-bb93c468b2a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.894 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-53eebd24-3b32-4949-827a-524f9e042652
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/53eebd24-3b32-4949-827a-524f9e042652.pid.haproxy
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 53eebd24-3b32-4949-827a-524f9e042652
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:55:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:19.894 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-53eebd24-3b32-4949-827a-524f9e042652', 'env', 'PROCESS_TAG=haproxy-53eebd24-3b32-4949-827a-524f9e042652', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/53eebd24-3b32-4949-827a-524f9e042652.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:55:20 compute-2 podman[326498]: 2026-01-31 08:55:20.19471624 +0000 UTC m=+0.046847805 container create 751b0dc75f86283777795f5c34315144c34db0954241d124346e9254a6d5ac6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 08:55:20 compute-2 systemd[1]: Started libpod-conmon-751b0dc75f86283777795f5c34315144c34db0954241d124346e9254a6d5ac6f.scope.
Jan 31 08:55:20 compute-2 ceph-mon[77282]: pgmap v3691: 305 pgs: 305 active+clean; 406 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 31 08:55:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/398945549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:55:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1654730449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:55:20 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:55:20 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5105254819a0414b8044aa575e17971f5923bb269a84a9ce2d1731671c97d6e6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:55:20 compute-2 podman[326498]: 2026-01-31 08:55:20.26644664 +0000 UTC m=+0.118578225 container init 751b0dc75f86283777795f5c34315144c34db0954241d124346e9254a6d5ac6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:55:20 compute-2 podman[326498]: 2026-01-31 08:55:20.173808352 +0000 UTC m=+0.025939937 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:55:20 compute-2 podman[326498]: 2026-01-31 08:55:20.271092865 +0000 UTC m=+0.123224430 container start 751b0dc75f86283777795f5c34315144c34db0954241d124346e9254a6d5ac6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:55:20 compute-2 neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652[326513]: [NOTICE]   (326517) : New worker (326519) forked
Jan 31 08:55:20 compute-2 neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652[326513]: [NOTICE]   (326517) : Loading success.
Jan 31 08:55:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:55:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:55:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:20.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.227 226833 DEBUG nova.compute.manager [req-ba74cb6b-1941-4f88-b691-adeaa2b31e3e req-27c58617-0da1-4685-b710-06b1cc81fbaf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Received event network-vif-plugged-495003a6-1a9f-4821-a3a7-1aee88998d2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.228 226833 DEBUG oslo_concurrency.lockutils [req-ba74cb6b-1941-4f88-b691-adeaa2b31e3e req-27c58617-0da1-4685-b710-06b1cc81fbaf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "27645b95-3e37-43ba-8465-c8789c0f8700-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.228 226833 DEBUG oslo_concurrency.lockutils [req-ba74cb6b-1941-4f88-b691-adeaa2b31e3e req-27c58617-0da1-4685-b710-06b1cc81fbaf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "27645b95-3e37-43ba-8465-c8789c0f8700-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.228 226833 DEBUG oslo_concurrency.lockutils [req-ba74cb6b-1941-4f88-b691-adeaa2b31e3e req-27c58617-0da1-4685-b710-06b1cc81fbaf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "27645b95-3e37-43ba-8465-c8789c0f8700-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.228 226833 DEBUG nova.compute.manager [req-ba74cb6b-1941-4f88-b691-adeaa2b31e3e req-27c58617-0da1-4685-b710-06b1cc81fbaf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Processing event network-vif-plugged-495003a6-1a9f-4821-a3a7-1aee88998d2e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.229 226833 DEBUG nova.compute.manager [req-4a2d5c1a-7a68-44e6-b7ee-7cc6c351ee15 req-da3d410d-c003-42d2-9601-a90d193e44cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Received event network-vif-plugged-4bed2fb3-03ee-4717-bf09-a7a55d07eef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.229 226833 DEBUG oslo_concurrency.lockutils [req-4a2d5c1a-7a68-44e6-b7ee-7cc6c351ee15 req-da3d410d-c003-42d2-9601-a90d193e44cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b13507e9-5374-4c40-b919-b9d7b61d4f12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.230 226833 DEBUG oslo_concurrency.lockutils [req-4a2d5c1a-7a68-44e6-b7ee-7cc6c351ee15 req-da3d410d-c003-42d2-9601-a90d193e44cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b13507e9-5374-4c40-b919-b9d7b61d4f12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.230 226833 DEBUG oslo_concurrency.lockutils [req-4a2d5c1a-7a68-44e6-b7ee-7cc6c351ee15 req-da3d410d-c003-42d2-9601-a90d193e44cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b13507e9-5374-4c40-b919-b9d7b61d4f12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.230 226833 DEBUG nova.compute.manager [req-4a2d5c1a-7a68-44e6-b7ee-7cc6c351ee15 req-da3d410d-c003-42d2-9601-a90d193e44cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Processing event network-vif-plugged-4bed2fb3-03ee-4717-bf09-a7a55d07eef8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.231 226833 DEBUG nova.compute.manager [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.231 226833 DEBUG nova.compute.manager [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.235 226833 DEBUG nova.virt.libvirt.driver [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.236 226833 DEBUG nova.virt.libvirt.driver [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.240 226833 INFO nova.virt.libvirt.driver [-] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Instance spawned successfully.
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.240 226833 INFO nova.compute.manager [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Took 15.21 seconds to spawn the instance on the hypervisor.
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.241 226833 DEBUG nova.compute.manager [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.242 226833 INFO nova.virt.libvirt.driver [-] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Instance spawned successfully.
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.243 226833 DEBUG nova.virt.libvirt.driver [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.262 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.263 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769849719.7182963, 27645b95-3e37-43ba-8465-c8789c0f8700 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.263 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] VM Started (Lifecycle Event)
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.265 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.274 226833 DEBUG nova.virt.libvirt.driver [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.274 226833 DEBUG nova.virt.libvirt.driver [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.275 226833 DEBUG nova.virt.libvirt.driver [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.275 226833 DEBUG nova.virt.libvirt.driver [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.276 226833 DEBUG nova.virt.libvirt.driver [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.276 226833 DEBUG nova.virt.libvirt.driver [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.299 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.302 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.324 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.324 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.346 226833 INFO nova.compute.manager [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Took 16.47 seconds to build instance.
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.375 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769849719.7183683, 27645b95-3e37-43ba-8465-c8789c0f8700 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.376 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] VM Paused (Lifecycle Event)
Jan 31 08:55:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:55:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:21.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.422 226833 DEBUG oslo_concurrency.lockutils [None req-e34cc092-e3bd-46b3-b4a4-7c5e0a9571d3 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "27645b95-3e37-43ba-8465-c8789c0f8700" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.434 226833 INFO nova.compute.manager [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Took 14.64 seconds to spawn the instance on the hypervisor.
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.435 226833 DEBUG nova.compute.manager [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.436 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.442 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769849721.2338939, 27645b95-3e37-43ba-8465-c8789c0f8700 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.443 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] VM Resumed (Lifecycle Event)
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.530 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.533 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.565 226833 INFO nova.compute.manager [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Took 16.06 seconds to build instance.
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.572 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769849721.235637, b13507e9-5374-4c40-b919-b9d7b61d4f12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.572 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] VM Resumed (Lifecycle Event)
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.589 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.590 226833 DEBUG oslo_concurrency.lockutils [None req-d1e3b51f-123d-4be8-a624-a3e60882930e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "b13507e9-5374-4c40-b919-b9d7b61d4f12" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:55:21 compute-2 nova_compute[226829]: 2026-01-31 08:55:21.592 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:55:22 compute-2 ceph-mon[77282]: pgmap v3692: 305 pgs: 305 active+clean; 407 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 1.5 MiB/s wr, 73 op/s
Jan 31 08:55:22 compute-2 nova_compute[226829]: 2026-01-31 08:55:22.614 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:55:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:22.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:55:23 compute-2 sudo[326531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:55:23 compute-2 sudo[326531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:23 compute-2 sudo[326531]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:23 compute-2 nova_compute[226829]: 2026-01-31 08:55:23.326 226833 DEBUG nova.compute.manager [req-4540a6e4-a8e6-482b-a4b3-6b7ac5d1589c req-d2e41395-e0ef-4a38-b661-8b8bd8445f5e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Received event network-vif-plugged-4bed2fb3-03ee-4717-bf09-a7a55d07eef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:55:23 compute-2 nova_compute[226829]: 2026-01-31 08:55:23.327 226833 DEBUG oslo_concurrency.lockutils [req-4540a6e4-a8e6-482b-a4b3-6b7ac5d1589c req-d2e41395-e0ef-4a38-b661-8b8bd8445f5e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b13507e9-5374-4c40-b919-b9d7b61d4f12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:55:23 compute-2 nova_compute[226829]: 2026-01-31 08:55:23.328 226833 DEBUG oslo_concurrency.lockutils [req-4540a6e4-a8e6-482b-a4b3-6b7ac5d1589c req-d2e41395-e0ef-4a38-b661-8b8bd8445f5e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b13507e9-5374-4c40-b919-b9d7b61d4f12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:55:23 compute-2 nova_compute[226829]: 2026-01-31 08:55:23.328 226833 DEBUG oslo_concurrency.lockutils [req-4540a6e4-a8e6-482b-a4b3-6b7ac5d1589c req-d2e41395-e0ef-4a38-b661-8b8bd8445f5e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b13507e9-5374-4c40-b919-b9d7b61d4f12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:55:23 compute-2 nova_compute[226829]: 2026-01-31 08:55:23.329 226833 DEBUG nova.compute.manager [req-4540a6e4-a8e6-482b-a4b3-6b7ac5d1589c req-d2e41395-e0ef-4a38-b661-8b8bd8445f5e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] No waiting events found dispatching network-vif-plugged-4bed2fb3-03ee-4717-bf09-a7a55d07eef8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:55:23 compute-2 nova_compute[226829]: 2026-01-31 08:55:23.329 226833 WARNING nova.compute.manager [req-4540a6e4-a8e6-482b-a4b3-6b7ac5d1589c req-d2e41395-e0ef-4a38-b661-8b8bd8445f5e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Received unexpected event network-vif-plugged-4bed2fb3-03ee-4717-bf09-a7a55d07eef8 for instance with vm_state active and task_state None.
Jan 31 08:55:23 compute-2 sudo[326556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:55:23 compute-2 sudo[326556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:23 compute-2 sudo[326556]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:23 compute-2 nova_compute[226829]: 2026-01-31 08:55:23.336 226833 DEBUG nova.compute.manager [req-37be34ca-33b8-4584-a70b-a2730269fb5e req-d13fcb68-a89a-4c7c-bd2d-c3b6f3c2413d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Received event network-vif-plugged-495003a6-1a9f-4821-a3a7-1aee88998d2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:55:23 compute-2 nova_compute[226829]: 2026-01-31 08:55:23.337 226833 DEBUG oslo_concurrency.lockutils [req-37be34ca-33b8-4584-a70b-a2730269fb5e req-d13fcb68-a89a-4c7c-bd2d-c3b6f3c2413d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "27645b95-3e37-43ba-8465-c8789c0f8700-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:55:23 compute-2 nova_compute[226829]: 2026-01-31 08:55:23.337 226833 DEBUG oslo_concurrency.lockutils [req-37be34ca-33b8-4584-a70b-a2730269fb5e req-d13fcb68-a89a-4c7c-bd2d-c3b6f3c2413d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "27645b95-3e37-43ba-8465-c8789c0f8700-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:55:23 compute-2 nova_compute[226829]: 2026-01-31 08:55:23.337 226833 DEBUG oslo_concurrency.lockutils [req-37be34ca-33b8-4584-a70b-a2730269fb5e req-d13fcb68-a89a-4c7c-bd2d-c3b6f3c2413d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "27645b95-3e37-43ba-8465-c8789c0f8700-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:55:23 compute-2 nova_compute[226829]: 2026-01-31 08:55:23.338 226833 DEBUG nova.compute.manager [req-37be34ca-33b8-4584-a70b-a2730269fb5e req-d13fcb68-a89a-4c7c-bd2d-c3b6f3c2413d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] No waiting events found dispatching network-vif-plugged-495003a6-1a9f-4821-a3a7-1aee88998d2e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:55:23 compute-2 nova_compute[226829]: 2026-01-31 08:55:23.338 226833 WARNING nova.compute.manager [req-37be34ca-33b8-4584-a70b-a2730269fb5e req-d13fcb68-a89a-4c7c-bd2d-c3b6f3c2413d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Received unexpected event network-vif-plugged-495003a6-1a9f-4821-a3a7-1aee88998d2e for instance with vm_state active and task_state None.
Jan 31 08:55:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:55:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:23.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:55:23 compute-2 nova_compute[226829]: 2026-01-31 08:55:23.842 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:24 compute-2 ceph-mon[77282]: pgmap v3693: 305 pgs: 305 active+clean; 407 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 557 KiB/s rd, 28 KiB/s wr, 43 op/s
Jan 31 08:55:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:24.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:55:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:25.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:55:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:55:25 compute-2 nova_compute[226829]: 2026-01-31 08:55:25.649 226833 DEBUG nova.compute.manager [req-b24cac58-416c-4170-8cf9-d4c66f5ede32 req-85ec9fbf-d148-4c42-b1a6-03dd78d40e27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Received event network-changed-4bed2fb3-03ee-4717-bf09-a7a55d07eef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:55:25 compute-2 nova_compute[226829]: 2026-01-31 08:55:25.650 226833 DEBUG nova.compute.manager [req-b24cac58-416c-4170-8cf9-d4c66f5ede32 req-85ec9fbf-d148-4c42-b1a6-03dd78d40e27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Refreshing instance network info cache due to event network-changed-4bed2fb3-03ee-4717-bf09-a7a55d07eef8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:55:25 compute-2 nova_compute[226829]: 2026-01-31 08:55:25.651 226833 DEBUG oslo_concurrency.lockutils [req-b24cac58-416c-4170-8cf9-d4c66f5ede32 req-85ec9fbf-d148-4c42-b1a6-03dd78d40e27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-b13507e9-5374-4c40-b919-b9d7b61d4f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:55:25 compute-2 nova_compute[226829]: 2026-01-31 08:55:25.651 226833 DEBUG oslo_concurrency.lockutils [req-b24cac58-416c-4170-8cf9-d4c66f5ede32 req-85ec9fbf-d148-4c42-b1a6-03dd78d40e27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-b13507e9-5374-4c40-b919-b9d7b61d4f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:55:25 compute-2 nova_compute[226829]: 2026-01-31 08:55:25.651 226833 DEBUG nova.network.neutron [req-b24cac58-416c-4170-8cf9-d4c66f5ede32 req-85ec9fbf-d148-4c42-b1a6-03dd78d40e27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Refreshing network info cache for port 4bed2fb3-03ee-4717-bf09-a7a55d07eef8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:55:26 compute-2 ceph-mon[77282]: pgmap v3694: 305 pgs: 305 active+clean; 407 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 28 KiB/s wr, 107 op/s
Jan 31 08:55:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:55:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:26.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:55:27 compute-2 nova_compute[226829]: 2026-01-31 08:55:27.264 226833 DEBUG nova.network.neutron [req-b24cac58-416c-4170-8cf9-d4c66f5ede32 req-85ec9fbf-d148-4c42-b1a6-03dd78d40e27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Updated VIF entry in instance network info cache for port 4bed2fb3-03ee-4717-bf09-a7a55d07eef8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:55:27 compute-2 nova_compute[226829]: 2026-01-31 08:55:27.265 226833 DEBUG nova.network.neutron [req-b24cac58-416c-4170-8cf9-d4c66f5ede32 req-85ec9fbf-d148-4c42-b1a6-03dd78d40e27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Updating instance_info_cache with network_info: [{"id": "4bed2fb3-03ee-4717-bf09-a7a55d07eef8", "address": "fa:16:3e:b4:bb:04", "network": {"id": "e71e82f1-2476-4d79-9b64-3d07204593df", "bridge": "br-int", "label": "tempest-network-smoke--1092340605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bed2fb3-03", "ovs_interfaceid": "4bed2fb3-03ee-4717-bf09-a7a55d07eef8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:55:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:27.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:27 compute-2 nova_compute[226829]: 2026-01-31 08:55:27.432 226833 DEBUG oslo_concurrency.lockutils [req-b24cac58-416c-4170-8cf9-d4c66f5ede32 req-85ec9fbf-d148-4c42-b1a6-03dd78d40e27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-b13507e9-5374-4c40-b919-b9d7b61d4f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:55:27 compute-2 nova_compute[226829]: 2026-01-31 08:55:27.617 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:27 compute-2 nova_compute[226829]: 2026-01-31 08:55:27.815 226833 DEBUG nova.compute.manager [req-66d35d5d-2873-40df-a948-68874b83e6c7 req-8ba6a2a4-253f-4885-ab3a-a9ba881edbd9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Received event network-changed-495003a6-1a9f-4821-a3a7-1aee88998d2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:55:27 compute-2 nova_compute[226829]: 2026-01-31 08:55:27.816 226833 DEBUG nova.compute.manager [req-66d35d5d-2873-40df-a948-68874b83e6c7 req-8ba6a2a4-253f-4885-ab3a-a9ba881edbd9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Refreshing instance network info cache due to event network-changed-495003a6-1a9f-4821-a3a7-1aee88998d2e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:55:27 compute-2 nova_compute[226829]: 2026-01-31 08:55:27.816 226833 DEBUG oslo_concurrency.lockutils [req-66d35d5d-2873-40df-a948-68874b83e6c7 req-8ba6a2a4-253f-4885-ab3a-a9ba881edbd9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-27645b95-3e37-43ba-8465-c8789c0f8700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:55:27 compute-2 nova_compute[226829]: 2026-01-31 08:55:27.816 226833 DEBUG oslo_concurrency.lockutils [req-66d35d5d-2873-40df-a948-68874b83e6c7 req-8ba6a2a4-253f-4885-ab3a-a9ba881edbd9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-27645b95-3e37-43ba-8465-c8789c0f8700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:55:27 compute-2 nova_compute[226829]: 2026-01-31 08:55:27.817 226833 DEBUG nova.network.neutron [req-66d35d5d-2873-40df-a948-68874b83e6c7 req-8ba6a2a4-253f-4885-ab3a-a9ba881edbd9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Refreshing network info cache for port 495003a6-1a9f-4821-a3a7-1aee88998d2e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:55:28 compute-2 ceph-mon[77282]: pgmap v3695: 305 pgs: 305 active+clean; 407 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 28 KiB/s wr, 152 op/s
Jan 31 08:55:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:55:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:28.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:55:28 compute-2 nova_compute[226829]: 2026-01-31 08:55:28.844 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:29.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:30 compute-2 ceph-mon[77282]: pgmap v3696: 305 pgs: 305 active+clean; 407 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 150 op/s
Jan 31 08:55:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:55:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:30.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:31.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:31 compute-2 sudo[326585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:55:31 compute-2 sudo[326585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:31 compute-2 sudo[326585]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:31 compute-2 sudo[326610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:55:31 compute-2 sudo[326610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:31 compute-2 sudo[326610]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:31 compute-2 sudo[326635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:55:31 compute-2 sudo[326635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:31 compute-2 sudo[326635]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:31 compute-2 sudo[326660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:55:31 compute-2 sudo[326660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:32 compute-2 sudo[326660]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:32 compute-2 ceph-mon[77282]: pgmap v3697: 305 pgs: 305 active+clean; 407 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 150 op/s
Jan 31 08:55:32 compute-2 nova_compute[226829]: 2026-01-31 08:55:32.619 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:55:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:32.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:55:33 compute-2 nova_compute[226829]: 2026-01-31 08:55:33.016 226833 DEBUG nova.network.neutron [req-66d35d5d-2873-40df-a948-68874b83e6c7 req-8ba6a2a4-253f-4885-ab3a-a9ba881edbd9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Updated VIF entry in instance network info cache for port 495003a6-1a9f-4821-a3a7-1aee88998d2e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:55:33 compute-2 nova_compute[226829]: 2026-01-31 08:55:33.017 226833 DEBUG nova.network.neutron [req-66d35d5d-2873-40df-a948-68874b83e6c7 req-8ba6a2a4-253f-4885-ab3a-a9ba881edbd9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Updating instance_info_cache with network_info: [{"id": "495003a6-1a9f-4821-a3a7-1aee88998d2e", "address": "fa:16:3e:37:2d:f9", "network": {"id": "53eebd24-3b32-4949-827a-524f9e042652", "bridge": "br-int", "label": "tempest-TestStampPattern-1663738459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6e96de7b2784be1adce763bc9c9adc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap495003a6-1a", "ovs_interfaceid": "495003a6-1a9f-4821-a3a7-1aee88998d2e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:55:33 compute-2 nova_compute[226829]: 2026-01-31 08:55:33.111 226833 DEBUG oslo_concurrency.lockutils [req-66d35d5d-2873-40df-a948-68874b83e6c7 req-8ba6a2a4-253f-4885-ab3a-a9ba881edbd9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-27645b95-3e37-43ba-8465-c8789c0f8700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:55:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:55:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:33.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:55:33 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:55:33 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:55:33 compute-2 ceph-mon[77282]: pgmap v3698: 305 pgs: 305 active+clean; 407 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 85 B/s wr, 132 op/s
Jan 31 08:55:33 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:55:33 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:55:33 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:55:33 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:55:33 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:55:33 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:55:33 compute-2 nova_compute[226829]: 2026-01-31 08:55:33.846 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:34.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:35 compute-2 ovn_controller[133834]: 2026-01-31T08:55:35Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b4:bb:04 10.100.0.14
Jan 31 08:55:35 compute-2 ovn_controller[133834]: 2026-01-31T08:55:35Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b4:bb:04 10.100.0.14
Jan 31 08:55:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:55:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:35.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:55:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:55:36 compute-2 nova_compute[226829]: 2026-01-31 08:55:36.324 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:55:36 compute-2 nova_compute[226829]: 2026-01-31 08:55:36.325 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:55:36 compute-2 ceph-mon[77282]: pgmap v3699: 305 pgs: 305 active+clean; 420 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.2 MiB/s wr, 144 op/s
Jan 31 08:55:36 compute-2 ovn_controller[133834]: 2026-01-31T08:55:36Z|00108|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.14 does not match offer 10.100.0.11
Jan 31 08:55:36 compute-2 ovn_controller[133834]: 2026-01-31T08:55:36Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:37:2d:f9 10.100.0.11
Jan 31 08:55:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:36.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:37 compute-2 podman[326718]: 2026-01-31 08:55:37.181800355 +0000 UTC m=+0.063855637 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 08:55:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:55:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:37.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:55:37 compute-2 nova_compute[226829]: 2026-01-31 08:55:37.623 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:38.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:38 compute-2 nova_compute[226829]: 2026-01-31 08:55:38.861 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:39 compute-2 ceph-mon[77282]: pgmap v3700: 305 pgs: 305 active+clean; 437 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Jan 31 08:55:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:55:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:39.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:55:39 compute-2 sudo[326746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:55:39 compute-2 sudo[326746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:39 compute-2 sudo[326746]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:39 compute-2 sudo[326771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:55:39 compute-2 sudo[326771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:39 compute-2 sudo[326771]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:40 compute-2 ceph-mon[77282]: pgmap v3701: 305 pgs: 305 active+clean; 447 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.5 MiB/s wr, 87 op/s
Jan 31 08:55:40 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:55:40 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:55:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:55:40 compute-2 nova_compute[226829]: 2026-01-31 08:55:40.657 226833 INFO nova.compute.manager [None req-057e641d-9116-4b1c-b3c1-1fe4ec3670c7 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Get console output
Jan 31 08:55:40 compute-2 nova_compute[226829]: 2026-01-31 08:55:40.663 267670 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 31 08:55:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:55:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:40.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.088 226833 DEBUG oslo_concurrency.lockutils [None req-dee42746-b309-424e-b887-d23cc0de8259 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "b13507e9-5374-4c40-b919-b9d7b61d4f12" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.088 226833 DEBUG oslo_concurrency.lockutils [None req-dee42746-b309-424e-b887-d23cc0de8259 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "b13507e9-5374-4c40-b919-b9d7b61d4f12" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.089 226833 DEBUG oslo_concurrency.lockutils [None req-dee42746-b309-424e-b887-d23cc0de8259 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "b13507e9-5374-4c40-b919-b9d7b61d4f12-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.089 226833 DEBUG oslo_concurrency.lockutils [None req-dee42746-b309-424e-b887-d23cc0de8259 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "b13507e9-5374-4c40-b919-b9d7b61d4f12-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.089 226833 DEBUG oslo_concurrency.lockutils [None req-dee42746-b309-424e-b887-d23cc0de8259 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "b13507e9-5374-4c40-b919-b9d7b61d4f12-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.090 226833 INFO nova.compute.manager [None req-dee42746-b309-424e-b887-d23cc0de8259 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Terminating instance
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.091 226833 DEBUG nova.compute.manager [None req-dee42746-b309-424e-b887-d23cc0de8259 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:55:41 compute-2 ovn_controller[133834]: 2026-01-31T08:55:41Z|00110|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.14 does not match offer 10.100.0.11
Jan 31 08:55:41 compute-2 ovn_controller[133834]: 2026-01-31T08:55:41Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:37:2d:f9 10.100.0.11
Jan 31 08:55:41 compute-2 kernel: tap4bed2fb3-03 (unregistering): left promiscuous mode
Jan 31 08:55:41 compute-2 NetworkManager[48999]: <info>  [1769849741.1596] device (tap4bed2fb3-03): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.165 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:41 compute-2 ovn_controller[133834]: 2026-01-31T08:55:41Z|00808|binding|INFO|Releasing lport 4bed2fb3-03ee-4717-bf09-a7a55d07eef8 from this chassis (sb_readonly=0)
Jan 31 08:55:41 compute-2 ovn_controller[133834]: 2026-01-31T08:55:41Z|00809|binding|INFO|Setting lport 4bed2fb3-03ee-4717-bf09-a7a55d07eef8 down in Southbound
Jan 31 08:55:41 compute-2 ovn_controller[133834]: 2026-01-31T08:55:41Z|00810|binding|INFO|Removing iface tap4bed2fb3-03 ovn-installed in OVS
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.169 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.176 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:41 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:41.187 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:bb:04 10.100.0.14'], port_security=['fa:16:3e:b4:bb:04 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b13507e9-5374-4c40-b919-b9d7b61d4f12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e71e82f1-2476-4d79-9b64-3d07204593df', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4626dc44-870d-4112-a662-eaa355e02fd7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2f59b70-6149-48e7-81ee-ebb34275d7e4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=4bed2fb3-03ee-4717-bf09-a7a55d07eef8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:55:41 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:41.189 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 4bed2fb3-03ee-4717-bf09-a7a55d07eef8 in datapath e71e82f1-2476-4d79-9b64-3d07204593df unbound from our chassis
Jan 31 08:55:41 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:41.191 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e71e82f1-2476-4d79-9b64-3d07204593df, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:55:41 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:41.194 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1b3d6860-e579-4343-8c52-31798c6dd21f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:41 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:41.195 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df namespace which is not needed anymore
Jan 31 08:55:41 compute-2 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000ce.scope: Deactivated successfully.
Jan 31 08:55:41 compute-2 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000ce.scope: Consumed 14.477s CPU time.
Jan 31 08:55:41 compute-2 systemd-machined[195142]: Machine qemu-91-instance-000000ce terminated.
Jan 31 08:55:41 compute-2 neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df[326414]: [NOTICE]   (326434) : haproxy version is 2.8.14-c23fe91
Jan 31 08:55:41 compute-2 neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df[326414]: [NOTICE]   (326434) : path to executable is /usr/sbin/haproxy
Jan 31 08:55:41 compute-2 neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df[326414]: [WARNING]  (326434) : Exiting Master process...
Jan 31 08:55:41 compute-2 neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df[326414]: [ALERT]    (326434) : Current worker (326439) exited with code 143 (Terminated)
Jan 31 08:55:41 compute-2 neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df[326414]: [WARNING]  (326434) : All workers exited. Exiting... (0)
Jan 31 08:55:41 compute-2 systemd[1]: libpod-037973fb7cd23daee8efe332963ac16ba8887a97449789d22bccfb7b4fa0ae3c.scope: Deactivated successfully.
Jan 31 08:55:41 compute-2 conmon[326414]: conmon 037973fb7cd23daee8ef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-037973fb7cd23daee8efe332963ac16ba8887a97449789d22bccfb7b4fa0ae3c.scope/container/memory.events
Jan 31 08:55:41 compute-2 podman[326820]: 2026-01-31 08:55:41.289928139 +0000 UTC m=+0.037045347 container died 037973fb7cd23daee8efe332963ac16ba8887a97449789d22bccfb7b4fa0ae3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:55:41 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-037973fb7cd23daee8efe332963ac16ba8887a97449789d22bccfb7b4fa0ae3c-userdata-shm.mount: Deactivated successfully.
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.327 226833 INFO nova.virt.libvirt.driver [-] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Instance destroyed successfully.
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.328 226833 DEBUG nova.objects.instance [None req-dee42746-b309-424e-b887-d23cc0de8259 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'resources' on Instance uuid b13507e9-5374-4c40-b919-b9d7b61d4f12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:55:41 compute-2 systemd[1]: var-lib-containers-storage-overlay-d8020c05a9219c5aa9c6d44e064f997e9906339484aa1c3350271aaac409e59c-merged.mount: Deactivated successfully.
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.344 226833 DEBUG nova.virt.libvirt.vif [None req-dee42746-b309-424e-b887-d23cc0de8259 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:55:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1198552944',display_name='tempest-TestNetworkBasicOps-server-1198552944',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1198552944',id=206,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDOrAc73kLV+i+xNaDwSFpYhqmjjip+brTb/F1WQVdODy2G5Aulg4YJQPokDZxchVl00UFgwUIq6BPV+OajtYBIlmVVa9XGOK2f1Ixs/ana7c3e/mHthS6MRW0hZaVFRwQ==',key_name='tempest-TestNetworkBasicOps-1328355201',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:55:21Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-8xl90rgk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:55:21Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=b13507e9-5374-4c40-b919-b9d7b61d4f12,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4bed2fb3-03ee-4717-bf09-a7a55d07eef8", "address": "fa:16:3e:b4:bb:04", "network": {"id": "e71e82f1-2476-4d79-9b64-3d07204593df", "bridge": "br-int", "label": "tempest-network-smoke--1092340605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bed2fb3-03", "ovs_interfaceid": "4bed2fb3-03ee-4717-bf09-a7a55d07eef8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.345 226833 DEBUG nova.network.os_vif_util [None req-dee42746-b309-424e-b887-d23cc0de8259 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "4bed2fb3-03ee-4717-bf09-a7a55d07eef8", "address": "fa:16:3e:b4:bb:04", "network": {"id": "e71e82f1-2476-4d79-9b64-3d07204593df", "bridge": "br-int", "label": "tempest-network-smoke--1092340605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bed2fb3-03", "ovs_interfaceid": "4bed2fb3-03ee-4717-bf09-a7a55d07eef8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.345 226833 DEBUG nova.network.os_vif_util [None req-dee42746-b309-424e-b887-d23cc0de8259 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b4:bb:04,bridge_name='br-int',has_traffic_filtering=True,id=4bed2fb3-03ee-4717-bf09-a7a55d07eef8,network=Network(e71e82f1-2476-4d79-9b64-3d07204593df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bed2fb3-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.345 226833 DEBUG os_vif [None req-dee42746-b309-424e-b887-d23cc0de8259 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:bb:04,bridge_name='br-int',has_traffic_filtering=True,id=4bed2fb3-03ee-4717-bf09-a7a55d07eef8,network=Network(e71e82f1-2476-4d79-9b64-3d07204593df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bed2fb3-03') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.348 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.348 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bed2fb3-03, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.349 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.350 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.353 226833 INFO os_vif [None req-dee42746-b309-424e-b887-d23cc0de8259 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:bb:04,bridge_name='br-int',has_traffic_filtering=True,id=4bed2fb3-03ee-4717-bf09-a7a55d07eef8,network=Network(e71e82f1-2476-4d79-9b64-3d07204593df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bed2fb3-03')
Jan 31 08:55:41 compute-2 podman[326820]: 2026-01-31 08:55:41.357449376 +0000 UTC m=+0.104566584 container cleanup 037973fb7cd23daee8efe332963ac16ba8887a97449789d22bccfb7b4fa0ae3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 08:55:41 compute-2 systemd[1]: libpod-conmon-037973fb7cd23daee8efe332963ac16ba8887a97449789d22bccfb7b4fa0ae3c.scope: Deactivated successfully.
Jan 31 08:55:41 compute-2 podman[326867]: 2026-01-31 08:55:41.414953048 +0000 UTC m=+0.041112139 container remove 037973fb7cd23daee8efe332963ac16ba8887a97449789d22bccfb7b4fa0ae3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Jan 31 08:55:41 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:41.419 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[aa576fc6-8c1d-4886-af6f-3eb4f6b88371]: (4, ('Sat Jan 31 08:55:41 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df (037973fb7cd23daee8efe332963ac16ba8887a97449789d22bccfb7b4fa0ae3c)\n037973fb7cd23daee8efe332963ac16ba8887a97449789d22bccfb7b4fa0ae3c\nSat Jan 31 08:55:41 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df (037973fb7cd23daee8efe332963ac16ba8887a97449789d22bccfb7b4fa0ae3c)\n037973fb7cd23daee8efe332963ac16ba8887a97449789d22bccfb7b4fa0ae3c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:41 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:41.420 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1f0abe4d-c8f5-4bb8-b15a-e5ce00d3b049]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:41 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:41.421 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape71e82f1-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:55:41 compute-2 kernel: tape71e82f1-20: left promiscuous mode
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.424 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.428 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:41 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:41.431 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ecba41f6-b928-4598-9d6f-b18a3134017a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:41.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:41 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:41.446 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e8de7edd-6624-4e96-a4de-d896751eeb75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:41 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:41.447 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7c96c975-c53a-4cfd-836e-64863ebff0f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:41 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:41.460 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6d61556e-c722-4608-b211-682cf8577b7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 997137, 'reachable_time': 18857, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326893, 'error': None, 'target': 'ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:41 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:41.463 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:55:41 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:55:41.463 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[5d52da7a-4e68-496f-aaba-d1ee35469593]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:55:41 compute-2 systemd[1]: run-netns-ovnmeta\x2de71e82f1\x2d2476\x2d4d79\x2d9b64\x2d3d07204593df.mount: Deactivated successfully.
Jan 31 08:55:41 compute-2 ovn_controller[133834]: 2026-01-31T08:55:41Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:37:2d:f9 10.100.0.11
Jan 31 08:55:41 compute-2 ovn_controller[133834]: 2026-01-31T08:55:41Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:37:2d:f9 10.100.0.11
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.907 226833 INFO nova.virt.libvirt.driver [None req-dee42746-b309-424e-b887-d23cc0de8259 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Deleting instance files /var/lib/nova/instances/b13507e9-5374-4c40-b919-b9d7b61d4f12_del
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.908 226833 INFO nova.virt.libvirt.driver [None req-dee42746-b309-424e-b887-d23cc0de8259 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Deletion of /var/lib/nova/instances/b13507e9-5374-4c40-b919-b9d7b61d4f12_del complete
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.980 226833 INFO nova.compute.manager [None req-dee42746-b309-424e-b887-d23cc0de8259 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Took 0.89 seconds to destroy the instance on the hypervisor.
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.981 226833 DEBUG oslo.service.loopingcall [None req-dee42746-b309-424e-b887-d23cc0de8259 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.982 226833 DEBUG nova.compute.manager [-] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:55:41 compute-2 nova_compute[226829]: 2026-01-31 08:55:41.982 226833 DEBUG nova.network.neutron [-] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:55:42 compute-2 nova_compute[226829]: 2026-01-31 08:55:42.120 226833 DEBUG nova.compute.manager [req-72c92908-a68d-4e94-937a-ba9cc0cc0d3e req-38ae3787-868e-4f03-ae49-675a9379b3e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Received event network-vif-unplugged-4bed2fb3-03ee-4717-bf09-a7a55d07eef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:55:42 compute-2 nova_compute[226829]: 2026-01-31 08:55:42.120 226833 DEBUG oslo_concurrency.lockutils [req-72c92908-a68d-4e94-937a-ba9cc0cc0d3e req-38ae3787-868e-4f03-ae49-675a9379b3e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b13507e9-5374-4c40-b919-b9d7b61d4f12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:55:42 compute-2 nova_compute[226829]: 2026-01-31 08:55:42.121 226833 DEBUG oslo_concurrency.lockutils [req-72c92908-a68d-4e94-937a-ba9cc0cc0d3e req-38ae3787-868e-4f03-ae49-675a9379b3e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b13507e9-5374-4c40-b919-b9d7b61d4f12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:55:42 compute-2 nova_compute[226829]: 2026-01-31 08:55:42.121 226833 DEBUG oslo_concurrency.lockutils [req-72c92908-a68d-4e94-937a-ba9cc0cc0d3e req-38ae3787-868e-4f03-ae49-675a9379b3e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b13507e9-5374-4c40-b919-b9d7b61d4f12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:55:42 compute-2 nova_compute[226829]: 2026-01-31 08:55:42.121 226833 DEBUG nova.compute.manager [req-72c92908-a68d-4e94-937a-ba9cc0cc0d3e req-38ae3787-868e-4f03-ae49-675a9379b3e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] No waiting events found dispatching network-vif-unplugged-4bed2fb3-03ee-4717-bf09-a7a55d07eef8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:55:42 compute-2 nova_compute[226829]: 2026-01-31 08:55:42.121 226833 DEBUG nova.compute.manager [req-72c92908-a68d-4e94-937a-ba9cc0cc0d3e req-38ae3787-868e-4f03-ae49-675a9379b3e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Received event network-vif-unplugged-4bed2fb3-03ee-4717-bf09-a7a55d07eef8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:55:42 compute-2 ceph-mon[77282]: pgmap v3702: 305 pgs: 305 active+clean; 454 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 116 op/s
Jan 31 08:55:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:42.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:43 compute-2 sudo[326896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:55:43 compute-2 sudo[326896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:43 compute-2 sudo[326896]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:55:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:43.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:55:43 compute-2 sudo[326927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:55:43 compute-2 sudo[326927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:43 compute-2 sudo[326927]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:43 compute-2 podman[326920]: 2026-01-31 08:55:43.470802374 +0000 UTC m=+0.046470844 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 08:55:43 compute-2 nova_compute[226829]: 2026-01-31 08:55:43.650 226833 DEBUG nova.network.neutron [-] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:55:43 compute-2 nova_compute[226829]: 2026-01-31 08:55:43.801 226833 DEBUG nova.compute.manager [req-20ad591c-9893-4c17-b084-0014edcfaf27 req-0fe4a618-488a-4717-82d5-1180854beb43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Received event network-vif-deleted-4bed2fb3-03ee-4717-bf09-a7a55d07eef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:55:43 compute-2 nova_compute[226829]: 2026-01-31 08:55:43.802 226833 INFO nova.compute.manager [req-20ad591c-9893-4c17-b084-0014edcfaf27 req-0fe4a618-488a-4717-82d5-1180854beb43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Neutron deleted interface 4bed2fb3-03ee-4717-bf09-a7a55d07eef8; detaching it from the instance and deleting it from the info cache
Jan 31 08:55:43 compute-2 nova_compute[226829]: 2026-01-31 08:55:43.802 226833 DEBUG nova.network.neutron [req-20ad591c-9893-4c17-b084-0014edcfaf27 req-0fe4a618-488a-4717-82d5-1180854beb43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:55:43 compute-2 nova_compute[226829]: 2026-01-31 08:55:43.864 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:43 compute-2 nova_compute[226829]: 2026-01-31 08:55:43.955 226833 INFO nova.compute.manager [-] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Took 1.97 seconds to deallocate network for instance.
Jan 31 08:55:43 compute-2 nova_compute[226829]: 2026-01-31 08:55:43.961 226833 DEBUG nova.compute.manager [req-20ad591c-9893-4c17-b084-0014edcfaf27 req-0fe4a618-488a-4717-82d5-1180854beb43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Detach interface failed, port_id=4bed2fb3-03ee-4717-bf09-a7a55d07eef8, reason: Instance b13507e9-5374-4c40-b919-b9d7b61d4f12 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:55:44 compute-2 nova_compute[226829]: 2026-01-31 08:55:44.021 226833 DEBUG oslo_concurrency.lockutils [None req-dee42746-b309-424e-b887-d23cc0de8259 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:55:44 compute-2 nova_compute[226829]: 2026-01-31 08:55:44.022 226833 DEBUG oslo_concurrency.lockutils [None req-dee42746-b309-424e-b887-d23cc0de8259 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:55:44 compute-2 nova_compute[226829]: 2026-01-31 08:55:44.132 226833 DEBUG oslo_concurrency.processutils [None req-dee42746-b309-424e-b887-d23cc0de8259 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:55:44 compute-2 ceph-mon[77282]: pgmap v3703: 305 pgs: 305 active+clean; 440 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 124 op/s
Jan 31 08:55:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:55:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3174164384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:55:44 compute-2 nova_compute[226829]: 2026-01-31 08:55:44.533 226833 DEBUG oslo_concurrency.processutils [None req-dee42746-b309-424e-b887-d23cc0de8259 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:55:44 compute-2 nova_compute[226829]: 2026-01-31 08:55:44.539 226833 DEBUG nova.compute.provider_tree [None req-dee42746-b309-424e-b887-d23cc0de8259 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:55:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:44.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:45.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3174164384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:55:45 compute-2 nova_compute[226829]: 2026-01-31 08:55:45.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:55:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:55:45 compute-2 nova_compute[226829]: 2026-01-31 08:55:45.844 226833 DEBUG nova.compute.manager [req-4be21fec-ee13-47b6-9d4a-20d4a17eeb76 req-4a01c338-7b7a-43a5-ae57-8074a878f492 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Received event network-vif-plugged-4bed2fb3-03ee-4717-bf09-a7a55d07eef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:55:45 compute-2 nova_compute[226829]: 2026-01-31 08:55:45.845 226833 DEBUG oslo_concurrency.lockutils [req-4be21fec-ee13-47b6-9d4a-20d4a17eeb76 req-4a01c338-7b7a-43a5-ae57-8074a878f492 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b13507e9-5374-4c40-b919-b9d7b61d4f12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:55:45 compute-2 nova_compute[226829]: 2026-01-31 08:55:45.845 226833 DEBUG oslo_concurrency.lockutils [req-4be21fec-ee13-47b6-9d4a-20d4a17eeb76 req-4a01c338-7b7a-43a5-ae57-8074a878f492 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b13507e9-5374-4c40-b919-b9d7b61d4f12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:55:45 compute-2 nova_compute[226829]: 2026-01-31 08:55:45.845 226833 DEBUG oslo_concurrency.lockutils [req-4be21fec-ee13-47b6-9d4a-20d4a17eeb76 req-4a01c338-7b7a-43a5-ae57-8074a878f492 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b13507e9-5374-4c40-b919-b9d7b61d4f12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:55:45 compute-2 nova_compute[226829]: 2026-01-31 08:55:45.845 226833 DEBUG nova.compute.manager [req-4be21fec-ee13-47b6-9d4a-20d4a17eeb76 req-4a01c338-7b7a-43a5-ae57-8074a878f492 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] No waiting events found dispatching network-vif-plugged-4bed2fb3-03ee-4717-bf09-a7a55d07eef8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:55:45 compute-2 nova_compute[226829]: 2026-01-31 08:55:45.846 226833 WARNING nova.compute.manager [req-4be21fec-ee13-47b6-9d4a-20d4a17eeb76 req-4a01c338-7b7a-43a5-ae57-8074a878f492 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Received unexpected event network-vif-plugged-4bed2fb3-03ee-4717-bf09-a7a55d07eef8 for instance with vm_state deleted and task_state None.
Jan 31 08:55:45 compute-2 nova_compute[226829]: 2026-01-31 08:55:45.883 226833 DEBUG nova.scheduler.client.report [None req-dee42746-b309-424e-b887-d23cc0de8259 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:55:45 compute-2 nova_compute[226829]: 2026-01-31 08:55:45.935 226833 DEBUG oslo_concurrency.lockutils [None req-dee42746-b309-424e-b887-d23cc0de8259 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:55:45 compute-2 nova_compute[226829]: 2026-01-31 08:55:45.983 226833 INFO nova.scheduler.client.report [None req-dee42746-b309-424e-b887-d23cc0de8259 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Deleted allocations for instance b13507e9-5374-4c40-b919-b9d7b61d4f12
Jan 31 08:55:46 compute-2 nova_compute[226829]: 2026-01-31 08:55:46.079 226833 DEBUG oslo_concurrency.lockutils [None req-dee42746-b309-424e-b887-d23cc0de8259 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "b13507e9-5374-4c40-b919-b9d7b61d4f12" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.991s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:55:46 compute-2 nova_compute[226829]: 2026-01-31 08:55:46.361 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:46 compute-2 ceph-mon[77282]: pgmap v3704: 305 pgs: 305 active+clean; 404 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 135 op/s
Jan 31 08:55:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:55:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:46.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:55:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:47.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:47 compute-2 ceph-mon[77282]: pgmap v3705: 305 pgs: 305 active+clean; 378 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.5 MiB/s wr, 110 op/s
Jan 31 08:55:48 compute-2 nova_compute[226829]: 2026-01-31 08:55:48.866 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:48.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:49.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:50 compute-2 ceph-mon[77282]: pgmap v3706: 305 pgs: 305 active+clean; 378 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 860 KiB/s rd, 538 KiB/s wr, 85 op/s
Jan 31 08:55:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:55:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:50.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:51 compute-2 nova_compute[226829]: 2026-01-31 08:55:51.365 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:51.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:52 compute-2 ceph-mon[77282]: pgmap v3707: 305 pgs: 305 active+clean; 378 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 262 KiB/s rd, 147 KiB/s wr, 60 op/s
Jan 31 08:55:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:55:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:52.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:55:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:53.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1149404289' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:55:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1149404289' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:55:53 compute-2 nova_compute[226829]: 2026-01-31 08:55:53.919 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:54 compute-2 ceph-mon[77282]: pgmap v3708: 305 pgs: 305 active+clean; 378 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 50 KiB/s wr, 30 op/s
Jan 31 08:55:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:54.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:55.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:55:55 compute-2 ceph-mon[77282]: pgmap v3709: 305 pgs: 305 active+clean; 378 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 50 KiB/s wr, 28 op/s
Jan 31 08:55:56 compute-2 nova_compute[226829]: 2026-01-31 08:55:56.326 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849741.3251348, b13507e9-5374-4c40-b919-b9d7b61d4f12 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:55:56 compute-2 nova_compute[226829]: 2026-01-31 08:55:56.327 226833 INFO nova.compute.manager [-] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] VM Stopped (Lifecycle Event)
Jan 31 08:55:56 compute-2 nova_compute[226829]: 2026-01-31 08:55:56.367 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:56 compute-2 nova_compute[226829]: 2026-01-31 08:55:56.416 226833 DEBUG nova.compute.manager [None req-53849d02-36d3-47c8-80e6-259bceab3eb4 - - - - - -] [instance: b13507e9-5374-4c40-b919-b9d7b61d4f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:55:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:56.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:55:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:57.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:55:58 compute-2 ceph-mon[77282]: pgmap v3710: 305 pgs: 305 active+clean; 366 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 15 KiB/s wr, 20 op/s
Jan 31 08:55:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:58.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:58 compute-2 nova_compute[226829]: 2026-01-31 08:55:58.955 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:55:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:59.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:00 compute-2 ceph-mon[77282]: pgmap v3711: 305 pgs: 305 active+clean; 342 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 14 KiB/s wr, 17 op/s
Jan 31 08:56:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:56:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:56:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:00.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:56:01 compute-2 nova_compute[226829]: 2026-01-31 08:56:01.370 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:56:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:01.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:56:01 compute-2 nova_compute[226829]: 2026-01-31 08:56:01.621 226833 DEBUG oslo_concurrency.lockutils [None req-1be5d322-628b-49b7-85d7-19e007603708 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Acquiring lock "27645b95-3e37-43ba-8465-c8789c0f8700" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:56:01 compute-2 nova_compute[226829]: 2026-01-31 08:56:01.621 226833 DEBUG oslo_concurrency.lockutils [None req-1be5d322-628b-49b7-85d7-19e007603708 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "27645b95-3e37-43ba-8465-c8789c0f8700" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:56:01 compute-2 nova_compute[226829]: 2026-01-31 08:56:01.651 226833 DEBUG nova.objects.instance [None req-1be5d322-628b-49b7-85d7-19e007603708 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lazy-loading 'flavor' on Instance uuid 27645b95-3e37-43ba-8465-c8789c0f8700 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:56:01 compute-2 nova_compute[226829]: 2026-01-31 08:56:01.733 226833 DEBUG oslo_concurrency.lockutils [None req-1be5d322-628b-49b7-85d7-19e007603708 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "27645b95-3e37-43ba-8465-c8789c0f8700" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:56:02 compute-2 nova_compute[226829]: 2026-01-31 08:56:02.332 226833 DEBUG oslo_concurrency.lockutils [None req-1be5d322-628b-49b7-85d7-19e007603708 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Acquiring lock "27645b95-3e37-43ba-8465-c8789c0f8700" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:56:02 compute-2 nova_compute[226829]: 2026-01-31 08:56:02.333 226833 DEBUG oslo_concurrency.lockutils [None req-1be5d322-628b-49b7-85d7-19e007603708 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "27645b95-3e37-43ba-8465-c8789c0f8700" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:56:02 compute-2 nova_compute[226829]: 2026-01-31 08:56:02.333 226833 INFO nova.compute.manager [None req-1be5d322-628b-49b7-85d7-19e007603708 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Attaching volume da113752-340c-4a7b-98c3-707e5c3c2d4b to /dev/vdb
Jan 31 08:56:02 compute-2 ceph-mon[77282]: pgmap v3712: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 7.0 KiB/s wr, 29 op/s
Jan 31 08:56:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3242168337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:56:02 compute-2 nova_compute[226829]: 2026-01-31 08:56:02.894 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:56:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:02.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
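Each radosgw request produces a beast access-log line ending in `latency=<seconds>s`, like the one above. A small sketch for extracting client, request, HTTP status, and latency from these lines; the regex is my own assumption about the line shape, not a documented radosgw format guarantee:

```python
import re

# Matches beast access-log lines:
#   beast: 0x...: <client> - <user> [<ts>] "<request>" <status> ... latency=<sec>s
BEAST = re.compile(
    r'beast: \S+: (?P<client>\S+) .*"(?P<request>[^"]+)" '
    r'(?P<status>\d+) .* latency=(?P<latency>[\d.]+)s'
)

def parse_beast(line: str):
    """Return (client_ip, request_line, status, latency_seconds) or None."""
    m = BEAST.search(line)
    if not m:
        return None
    return (m.group("client"), m.group("request"),
            int(m.group("status")), float(m.group("latency")))

# Payload copied from the beast line above (syslog prefix stripped).
sample = ('beast: 0x7fa735b706f0: 192.168.122.100 - anonymous '
          '[31/Jan/2026:08:56:02.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
          'latency=0.000000000s')
print(parse_beast(sample))
```

The periodic anonymous `HEAD /` requests from 192.168.122.100/.102 throughout this log are load-balancer health probes, so filtering on `request != "HEAD / HTTP/1.0"` isolates real object-gateway traffic.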
Jan 31 08:56:02 compute-2 nova_compute[226829]: 2026-01-31 08:56:02.952 226833 DEBUG os_brick.utils [None req-1be5d322-628b-49b7-85d7-19e007603708 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:56:02 compute-2 nova_compute[226829]: 2026-01-31 08:56:02.954 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:56:02 compute-2 nova_compute[226829]: 2026-01-31 08:56:02.966 236868 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:56:02 compute-2 nova_compute[226829]: 2026-01-31 08:56:02.967 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[88b4664a-b3f3-4937-bff5-c10bf8cd77f8]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:56:02 compute-2 nova_compute[226829]: 2026-01-31 08:56:02.968 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:56:02 compute-2 nova_compute[226829]: 2026-01-31 08:56:02.974 236868 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:56:02 compute-2 nova_compute[226829]: 2026-01-31 08:56:02.974 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[db5f396b-f1d0-49fd-8979-361e02a3e642]: (4, ('InitiatorName=iqn.1994-05.com.redhat:70a4e945afb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:56:02 compute-2 nova_compute[226829]: 2026-01-31 08:56:02.976 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:56:02 compute-2 nova_compute[226829]: 2026-01-31 08:56:02.984 236868 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:56:02 compute-2 nova_compute[226829]: 2026-01-31 08:56:02.985 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[37a9fa2c-8246-4bb5-a7c5-e5a6a326181b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:56:02 compute-2 nova_compute[226829]: 2026-01-31 08:56:02.986 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[1d846118-e8ab-40ce-83c9-a566b5272472]: (4, 'd14f084b-ec77-4fba-801f-103494d34b3a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:56:02 compute-2 nova_compute[226829]: 2026-01-31 08:56:02.987 226833 DEBUG oslo_concurrency.processutils [None req-1be5d322-628b-49b7-85d7-19e007603708 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:56:03 compute-2 nova_compute[226829]: 2026-01-31 08:56:03.014 226833 DEBUG oslo_concurrency.processutils [None req-1be5d322-628b-49b7-85d7-19e007603708 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:56:03 compute-2 nova_compute[226829]: 2026-01-31 08:56:03.016 226833 DEBUG os_brick.initiator.connectors.lightos [None req-1be5d322-628b-49b7-85d7-19e007603708 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:56:03 compute-2 nova_compute[226829]: 2026-01-31 08:56:03.016 226833 DEBUG os_brick.initiator.connectors.lightos [None req-1be5d322-628b-49b7-85d7-19e007603708 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:56:03 compute-2 nova_compute[226829]: 2026-01-31 08:56:03.016 226833 DEBUG os_brick.initiator.connectors.lightos [None req-1be5d322-628b-49b7-85d7-19e007603708 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:56:03 compute-2 nova_compute[226829]: 2026-01-31 08:56:03.017 226833 DEBUG os_brick.utils [None req-1be5d322-628b-49b7-85d7-19e007603708 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] <== get_connector_properties: return (64ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:70a4e945afb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'd14f084b-ec77-4fba-801f-103494d34b3a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
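The `get_connector_properties` trace above logs its return value as a Python-literal dict (it is `repr()` output from os-brick's trace wrapper), so the payload can be recovered with `ast.literal_eval`. A minimal sketch — the extraction regex and helper name are my own, and the sample below is an abridged copy of the dict in the log line:

```python
import ast
import re

def extract_connector_properties(line: str) -> dict:
    """Pull the {...} dict literal out of an os-brick trace log line.

    Assumes the line contains one top-level {...} payload and no other
    braces, which holds for the get_connector_properties trace above.
    """
    m = re.search(r"\{.*\}", line)
    if not m:
        raise ValueError("no dict literal found in line")
    return ast.literal_eval(m.group(0))

# Abridged payload from the trace line above (syslog prefix stripped).
sample = ("<== get_connector_properties: return (64ms) "
          "{'platform': 'x86_64', 'os_type': 'linux', "
          "'ip': '192.168.122.102', 'multipath': True, "
          "'initiator': 'iqn.1994-05.com.redhat:70a4e945afb', "
          "'nvme_native_multipath': True} trace_logging_wrapper")

props = extract_connector_properties(sample)
print(props["initiator"])
```

`ast.literal_eval` is the safe choice here: unlike `eval`, it only accepts Python literals, so a hostile log line cannot execute code.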
Jan 31 08:56:03 compute-2 nova_compute[226829]: 2026-01-31 08:56:03.017 226833 DEBUG nova.virt.block_device [None req-1be5d322-628b-49b7-85d7-19e007603708 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Updating existing volume attachment record: 9070e2db-feb7-4020-9ab8-67041d6df378 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:56:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:56:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:03.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:56:03 compute-2 sudo[327004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:56:03 compute-2 sudo[327004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:03 compute-2 sudo[327004]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:03 compute-2 sudo[327029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:56:03 compute-2 sudo[327029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:03 compute-2 sudo[327029]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:03 compute-2 ceph-mon[77282]: pgmap v3713: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 3.6 KiB/s wr, 28 op/s
Jan 31 08:56:03 compute-2 nova_compute[226829]: 2026-01-31 08:56:03.958 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:56:04.088 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=96, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=95) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:56:04 compute-2 nova_compute[226829]: 2026-01-31 08:56:04.088 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:56:04.089 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:56:04 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:56:04.090 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '96'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:56:04 compute-2 nova_compute[226829]: 2026-01-31 08:56:04.700 226833 DEBUG nova.objects.instance [None req-1be5d322-628b-49b7-85d7-19e007603708 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lazy-loading 'flavor' on Instance uuid 27645b95-3e37-43ba-8465-c8789c0f8700 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:56:04 compute-2 nova_compute[226829]: 2026-01-31 08:56:04.748 226833 DEBUG nova.virt.libvirt.driver [None req-1be5d322-628b-49b7-85d7-19e007603708 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Attempting to attach volume da113752-340c-4a7b-98c3-707e5c3c2d4b with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 31 08:56:04 compute-2 nova_compute[226829]: 2026-01-31 08:56:04.751 226833 DEBUG nova.virt.libvirt.guest [None req-1be5d322-628b-49b7-85d7-19e007603708 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 08:56:04 compute-2 nova_compute[226829]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:56:04 compute-2 nova_compute[226829]:   <source protocol="rbd" name="volumes/volume-da113752-340c-4a7b-98c3-707e5c3c2d4b">
Jan 31 08:56:04 compute-2 nova_compute[226829]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:56:04 compute-2 nova_compute[226829]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:56:04 compute-2 nova_compute[226829]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:56:04 compute-2 nova_compute[226829]:   </source>
Jan 31 08:56:04 compute-2 nova_compute[226829]:   <auth username="openstack">
Jan 31 08:56:04 compute-2 nova_compute[226829]:     <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:56:04 compute-2 nova_compute[226829]:   </auth>
Jan 31 08:56:04 compute-2 nova_compute[226829]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:56:04 compute-2 nova_compute[226829]:   <serial>da113752-340c-4a7b-98c3-707e5c3c2d4b</serial>
Jan 31 08:56:04 compute-2 nova_compute[226829]: </disk>
Jan 31 08:56:04 compute-2 nova_compute[226829]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
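The attach payload above is a standard libvirt `<disk>` element for an RBD-backed Cinder volume: qemu driver, three Ceph monitor endpoints, and a virtio target. A small sketch of summarizing such an element with the stdlib XML parser — the XML is reproduced from the log (auth element omitted), while the helper name is mine:

```python
import xml.etree.ElementTree as ET

# Disk element copied from the attach log above (auth element omitted).
DISK_XML = """<disk type="network" device="disk">
  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
  <source protocol="rbd" name="volumes/volume-da113752-340c-4a7b-98c3-707e5c3c2d4b">
    <host name="192.168.122.100" port="6789"/>
    <host name="192.168.122.102" port="6789"/>
    <host name="192.168.122.101" port="6789"/>
  </source>
  <target dev="vdb" bus="virtio"/>
  <serial>da113752-340c-4a7b-98c3-707e5c3c2d4b</serial>
</disk>"""

def summarize_disk(xml_text: str) -> dict:
    """Return target device, RBD image name, and monitor host:port pairs."""
    root = ET.fromstring(xml_text)
    source = root.find("source")
    return {
        "target": root.find("target").get("dev"),
        "rbd_image": source.get("name"),
        "monitors": [f'{h.get("name")}:{h.get("port")}'
                     for h in source.findall("host")],
    }

print(summarize_disk(DISK_XML))
```

Note that the later detach XML in this log carries an extra `<address type="pci" .../>` element that libvirt filled in at attach time; the summary above ignores it, which is why the same helper works on both payloads.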
Jan 31 08:56:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:04.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:05 compute-2 nova_compute[226829]: 2026-01-31 08:56:05.083 226833 DEBUG nova.virt.libvirt.driver [None req-1be5d322-628b-49b7-85d7-19e007603708 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:56:05 compute-2 nova_compute[226829]: 2026-01-31 08:56:05.084 226833 DEBUG nova.virt.libvirt.driver [None req-1be5d322-628b-49b7-85d7-19e007603708 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:56:05 compute-2 nova_compute[226829]: 2026-01-31 08:56:05.084 226833 DEBUG nova.virt.libvirt.driver [None req-1be5d322-628b-49b7-85d7-19e007603708 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:56:05 compute-2 nova_compute[226829]: 2026-01-31 08:56:05.084 226833 DEBUG nova.virt.libvirt.driver [None req-1be5d322-628b-49b7-85d7-19e007603708 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] No VIF found with MAC fa:16:3e:37:2d:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:56:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1685330573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:56:05 compute-2 nova_compute[226829]: 2026-01-31 08:56:05.399 226833 DEBUG oslo_concurrency.lockutils [None req-1be5d322-628b-49b7-85d7-19e007603708 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "27645b95-3e37-43ba-8465-c8789c0f8700" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:56:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:05.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:56:06 compute-2 nova_compute[226829]: 2026-01-31 08:56:06.374 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:06 compute-2 ceph-mon[77282]: pgmap v3714: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 3.6 KiB/s wr, 28 op/s
Jan 31 08:56:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:56:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:06.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:56:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:56:06.934 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:56:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:56:06.935 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:56:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:56:06.935 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:56:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:56:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:07.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:56:08 compute-2 podman[327077]: 2026-01-31 08:56:08.188802497 +0000 UTC m=+0.072676525 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 31 08:56:08 compute-2 nova_compute[226829]: 2026-01-31 08:56:08.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:56:08 compute-2 nova_compute[226829]: 2026-01-31 08:56:08.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:56:08 compute-2 ceph-mon[77282]: pgmap v3715: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 6.2 KiB/s wr, 23 op/s
Jan 31 08:56:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:08.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:09 compute-2 nova_compute[226829]: 2026-01-31 08:56:09.005 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:09.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:09 compute-2 nova_compute[226829]: 2026-01-31 08:56:09.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:56:09 compute-2 ceph-mon[77282]: pgmap v3716: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 212 KiB/s rd, 4.2 KiB/s wr, 27 op/s
Jan 31 08:56:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:56:10 compute-2 nova_compute[226829]: 2026-01-31 08:56:10.846 226833 DEBUG oslo_concurrency.lockutils [None req-b03d8845-ce32-4821-8c5d-fa5ee5a0ab3b a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Acquiring lock "27645b95-3e37-43ba-8465-c8789c0f8700" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:56:10 compute-2 nova_compute[226829]: 2026-01-31 08:56:10.846 226833 DEBUG oslo_concurrency.lockutils [None req-b03d8845-ce32-4821-8c5d-fa5ee5a0ab3b a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "27645b95-3e37-43ba-8465-c8789c0f8700" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:56:10 compute-2 nova_compute[226829]: 2026-01-31 08:56:10.891 226833 INFO nova.compute.manager [None req-b03d8845-ce32-4821-8c5d-fa5ee5a0ab3b a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Detaching volume da113752-340c-4a7b-98c3-707e5c3c2d4b
Jan 31 08:56:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:10.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:11 compute-2 nova_compute[226829]: 2026-01-31 08:56:11.116 226833 INFO nova.virt.block_device [None req-b03d8845-ce32-4821-8c5d-fa5ee5a0ab3b a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Attempting to driver detach volume da113752-340c-4a7b-98c3-707e5c3c2d4b from mountpoint /dev/vdb
Jan 31 08:56:11 compute-2 nova_compute[226829]: 2026-01-31 08:56:11.123 226833 DEBUG nova.virt.libvirt.driver [None req-b03d8845-ce32-4821-8c5d-fa5ee5a0ab3b a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Attempting to detach device vdb from instance 27645b95-3e37-43ba-8465-c8789c0f8700 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 08:56:11 compute-2 nova_compute[226829]: 2026-01-31 08:56:11.124 226833 DEBUG nova.virt.libvirt.guest [None req-b03d8845-ce32-4821-8c5d-fa5ee5a0ab3b a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:56:11 compute-2 nova_compute[226829]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:56:11 compute-2 nova_compute[226829]:   <source protocol="rbd" name="volumes/volume-da113752-340c-4a7b-98c3-707e5c3c2d4b">
Jan 31 08:56:11 compute-2 nova_compute[226829]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:56:11 compute-2 nova_compute[226829]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:56:11 compute-2 nova_compute[226829]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:56:11 compute-2 nova_compute[226829]:   </source>
Jan 31 08:56:11 compute-2 nova_compute[226829]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:56:11 compute-2 nova_compute[226829]:   <serial>da113752-340c-4a7b-98c3-707e5c3c2d4b</serial>
Jan 31 08:56:11 compute-2 nova_compute[226829]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 08:56:11 compute-2 nova_compute[226829]: </disk>
Jan 31 08:56:11 compute-2 nova_compute[226829]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:56:11 compute-2 nova_compute[226829]: 2026-01-31 08:56:11.132 226833 INFO nova.virt.libvirt.driver [None req-b03d8845-ce32-4821-8c5d-fa5ee5a0ab3b a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Successfully detached device vdb from instance 27645b95-3e37-43ba-8465-c8789c0f8700 from the persistent domain config.
Jan 31 08:56:11 compute-2 nova_compute[226829]: 2026-01-31 08:56:11.133 226833 DEBUG nova.virt.libvirt.driver [None req-b03d8845-ce32-4821-8c5d-fa5ee5a0ab3b a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 27645b95-3e37-43ba-8465-c8789c0f8700 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 08:56:11 compute-2 nova_compute[226829]: 2026-01-31 08:56:11.133 226833 DEBUG nova.virt.libvirt.guest [None req-b03d8845-ce32-4821-8c5d-fa5ee5a0ab3b a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:56:11 compute-2 nova_compute[226829]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:56:11 compute-2 nova_compute[226829]:   <source protocol="rbd" name="volumes/volume-da113752-340c-4a7b-98c3-707e5c3c2d4b">
Jan 31 08:56:11 compute-2 nova_compute[226829]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:56:11 compute-2 nova_compute[226829]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:56:11 compute-2 nova_compute[226829]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:56:11 compute-2 nova_compute[226829]:   </source>
Jan 31 08:56:11 compute-2 nova_compute[226829]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:56:11 compute-2 nova_compute[226829]:   <serial>da113752-340c-4a7b-98c3-707e5c3c2d4b</serial>
Jan 31 08:56:11 compute-2 nova_compute[226829]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 08:56:11 compute-2 nova_compute[226829]: </disk>
Jan 31 08:56:11 compute-2 nova_compute[226829]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:56:11 compute-2 ovn_controller[133834]: 2026-01-31T08:56:11Z|00811|binding|INFO|Releasing lport 62efe4e7-cbd5-44c6-8fac-7cb5fe1c3604 from this chassis (sb_readonly=0)
Jan 31 08:56:11 compute-2 nova_compute[226829]: 2026-01-31 08:56:11.158 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:11 compute-2 nova_compute[226829]: 2026-01-31 08:56:11.241 226833 DEBUG nova.virt.libvirt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Received event <DeviceRemovedEvent: 1769849771.2413123, 27645b95-3e37-43ba-8465-c8789c0f8700 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 08:56:11 compute-2 nova_compute[226829]: 2026-01-31 08:56:11.243 226833 DEBUG nova.virt.libvirt.driver [None req-b03d8845-ce32-4821-8c5d-fa5ee5a0ab3b a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 27645b95-3e37-43ba-8465-c8789c0f8700 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 08:56:11 compute-2 nova_compute[226829]: 2026-01-31 08:56:11.244 226833 INFO nova.virt.libvirt.driver [None req-b03d8845-ce32-4821-8c5d-fa5ee5a0ab3b a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Successfully detached device vdb from instance 27645b95-3e37-43ba-8465-c8789c0f8700 from the live domain config.
Jan 31 08:56:11 compute-2 nova_compute[226829]: 2026-01-31 08:56:11.376 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:56:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:11.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:56:11 compute-2 nova_compute[226829]: 2026-01-31 08:56:11.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:56:11 compute-2 nova_compute[226829]: 2026-01-31 08:56:11.745 226833 DEBUG nova.objects.instance [None req-b03d8845-ce32-4821-8c5d-fa5ee5a0ab3b a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lazy-loading 'flavor' on Instance uuid 27645b95-3e37-43ba-8465-c8789c0f8700 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:56:12 compute-2 nova_compute[226829]: 2026-01-31 08:56:12.072 226833 DEBUG oslo_concurrency.lockutils [None req-b03d8845-ce32-4821-8c5d-fa5ee5a0ab3b a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "27645b95-3e37-43ba-8465-c8789c0f8700" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:56:12 compute-2 ceph-mon[77282]: pgmap v3717: 305 pgs: 305 active+clean; 304 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 580 KiB/s rd, 361 KiB/s wr, 29 op/s
Jan 31 08:56:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2553713859' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:56:12 compute-2 nova_compute[226829]: 2026-01-31 08:56:12.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:56:12 compute-2 nova_compute[226829]: 2026-01-31 08:56:12.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:56:12 compute-2 nova_compute[226829]: 2026-01-31 08:56:12.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:56:12 compute-2 nova_compute[226829]: 2026-01-31 08:56:12.891 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-27645b95-3e37-43ba-8465-c8789c0f8700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:56:12 compute-2 nova_compute[226829]: 2026-01-31 08:56:12.892 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-27645b95-3e37-43ba-8465-c8789c0f8700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:56:12 compute-2 nova_compute[226829]: 2026-01-31 08:56:12.892 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:56:12 compute-2 nova_compute[226829]: 2026-01-31 08:56:12.892 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 27645b95-3e37-43ba-8465-c8789c0f8700 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:56:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:12.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/249953271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:56:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:13.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:14 compute-2 nova_compute[226829]: 2026-01-31 08:56:14.006 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:14 compute-2 podman[327109]: 2026-01-31 08:56:14.144358324 +0000 UTC m=+0.036365699 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 08:56:14 compute-2 ceph-mon[77282]: pgmap v3718: 305 pgs: 305 active+clean; 304 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 573 KiB/s rd, 360 KiB/s wr, 17 op/s
Jan 31 08:56:14 compute-2 nova_compute[226829]: 2026-01-31 08:56:14.539 226833 DEBUG nova.compute.manager [req-bd43775a-8d03-4a9e-bd99-bc3e740dc43b req-7dd45215-3932-462a-98ab-9fad1d34625e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Received event network-changed-495003a6-1a9f-4821-a3a7-1aee88998d2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:56:14 compute-2 nova_compute[226829]: 2026-01-31 08:56:14.539 226833 DEBUG nova.compute.manager [req-bd43775a-8d03-4a9e-bd99-bc3e740dc43b req-7dd45215-3932-462a-98ab-9fad1d34625e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Refreshing instance network info cache due to event network-changed-495003a6-1a9f-4821-a3a7-1aee88998d2e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:56:14 compute-2 nova_compute[226829]: 2026-01-31 08:56:14.539 226833 DEBUG oslo_concurrency.lockutils [req-bd43775a-8d03-4a9e-bd99-bc3e740dc43b req-7dd45215-3932-462a-98ab-9fad1d34625e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-27645b95-3e37-43ba-8465-c8789c0f8700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:56:14 compute-2 nova_compute[226829]: 2026-01-31 08:56:14.724 226833 DEBUG oslo_concurrency.lockutils [None req-8df85a60-5729-40ea-8ac7-254ca734f390 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Acquiring lock "27645b95-3e37-43ba-8465-c8789c0f8700" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:56:14 compute-2 nova_compute[226829]: 2026-01-31 08:56:14.724 226833 DEBUG oslo_concurrency.lockutils [None req-8df85a60-5729-40ea-8ac7-254ca734f390 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "27645b95-3e37-43ba-8465-c8789c0f8700" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:56:14 compute-2 nova_compute[226829]: 2026-01-31 08:56:14.724 226833 DEBUG oslo_concurrency.lockutils [None req-8df85a60-5729-40ea-8ac7-254ca734f390 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Acquiring lock "27645b95-3e37-43ba-8465-c8789c0f8700-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:56:14 compute-2 nova_compute[226829]: 2026-01-31 08:56:14.725 226833 DEBUG oslo_concurrency.lockutils [None req-8df85a60-5729-40ea-8ac7-254ca734f390 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "27645b95-3e37-43ba-8465-c8789c0f8700-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:56:14 compute-2 nova_compute[226829]: 2026-01-31 08:56:14.725 226833 DEBUG oslo_concurrency.lockutils [None req-8df85a60-5729-40ea-8ac7-254ca734f390 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "27645b95-3e37-43ba-8465-c8789c0f8700-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:56:14 compute-2 nova_compute[226829]: 2026-01-31 08:56:14.726 226833 INFO nova.compute.manager [None req-8df85a60-5729-40ea-8ac7-254ca734f390 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Terminating instance
Jan 31 08:56:14 compute-2 nova_compute[226829]: 2026-01-31 08:56:14.728 226833 DEBUG nova.compute.manager [None req-8df85a60-5729-40ea-8ac7-254ca734f390 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:56:14 compute-2 kernel: tap495003a6-1a (unregistering): left promiscuous mode
Jan 31 08:56:14 compute-2 NetworkManager[48999]: <info>  [1769849774.7861] device (tap495003a6-1a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:56:14 compute-2 nova_compute[226829]: 2026-01-31 08:56:14.790 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:14 compute-2 ovn_controller[133834]: 2026-01-31T08:56:14Z|00812|binding|INFO|Releasing lport 495003a6-1a9f-4821-a3a7-1aee88998d2e from this chassis (sb_readonly=0)
Jan 31 08:56:14 compute-2 ovn_controller[133834]: 2026-01-31T08:56:14Z|00813|binding|INFO|Setting lport 495003a6-1a9f-4821-a3a7-1aee88998d2e down in Southbound
Jan 31 08:56:14 compute-2 ovn_controller[133834]: 2026-01-31T08:56:14Z|00814|binding|INFO|Removing iface tap495003a6-1a ovn-installed in OVS
Jan 31 08:56:14 compute-2 nova_compute[226829]: 2026-01-31 08:56:14.792 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:14 compute-2 nova_compute[226829]: 2026-01-31 08:56:14.802 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:14 compute-2 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000cd.scope: Deactivated successfully.
Jan 31 08:56:14 compute-2 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000cd.scope: Consumed 17.047s CPU time.
Jan 31 08:56:14 compute-2 systemd-machined[195142]: Machine qemu-92-instance-000000cd terminated.
Jan 31 08:56:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:56:14.847 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:2d:f9 10.100.0.11'], port_security=['fa:16:3e:37:2d:f9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '27645b95-3e37-43ba-8465-c8789c0f8700', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-53eebd24-3b32-4949-827a-524f9e042652', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6e96de7b2784be1adce763bc9c9adc5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e34ba2ee-7d71-4f69-8288-b62c847fa225', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2fa4ba9c-944f-4ccd-90bc-07135c4442c5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=495003a6-1a9f-4821-a3a7-1aee88998d2e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:56:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:56:14.848 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 495003a6-1a9f-4821-a3a7-1aee88998d2e in datapath 53eebd24-3b32-4949-827a-524f9e042652 unbound from our chassis
Jan 31 08:56:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:56:14.850 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 53eebd24-3b32-4949-827a-524f9e042652, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:56:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:56:14.851 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6a3d1b6d-0020-4768-8aad-1a4886da6e4d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:56:14 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:56:14.852 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-53eebd24-3b32-4949-827a-524f9e042652 namespace which is not needed anymore
Jan 31 08:56:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:14.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:14 compute-2 neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652[326513]: [NOTICE]   (326517) : haproxy version is 2.8.14-c23fe91
Jan 31 08:56:14 compute-2 neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652[326513]: [NOTICE]   (326517) : path to executable is /usr/sbin/haproxy
Jan 31 08:56:14 compute-2 neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652[326513]: [WARNING]  (326517) : Exiting Master process...
Jan 31 08:56:14 compute-2 neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652[326513]: [ALERT]    (326517) : Current worker (326519) exited with code 143 (Terminated)
Jan 31 08:56:14 compute-2 neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652[326513]: [WARNING]  (326517) : All workers exited. Exiting... (0)
Jan 31 08:56:14 compute-2 systemd[1]: libpod-751b0dc75f86283777795f5c34315144c34db0954241d124346e9254a6d5ac6f.scope: Deactivated successfully.
Jan 31 08:56:14 compute-2 podman[327152]: 2026-01-31 08:56:14.959235492 +0000 UTC m=+0.043362669 container died 751b0dc75f86283777795f5c34315144c34db0954241d124346e9254a6d5ac6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:56:14 compute-2 nova_compute[226829]: 2026-01-31 08:56:14.958 226833 INFO nova.virt.libvirt.driver [-] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Instance destroyed successfully.
Jan 31 08:56:14 compute-2 nova_compute[226829]: 2026-01-31 08:56:14.958 226833 DEBUG nova.objects.instance [None req-8df85a60-5729-40ea-8ac7-254ca734f390 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lazy-loading 'resources' on Instance uuid 27645b95-3e37-43ba-8465-c8789c0f8700 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:56:14 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-751b0dc75f86283777795f5c34315144c34db0954241d124346e9254a6d5ac6f-userdata-shm.mount: Deactivated successfully.
Jan 31 08:56:14 compute-2 systemd[1]: var-lib-containers-storage-overlay-5105254819a0414b8044aa575e17971f5923bb269a84a9ce2d1731671c97d6e6-merged.mount: Deactivated successfully.
Jan 31 08:56:14 compute-2 podman[327152]: 2026-01-31 08:56:14.989965197 +0000 UTC m=+0.074092374 container cleanup 751b0dc75f86283777795f5c34315144c34db0954241d124346e9254a6d5ac6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:56:14 compute-2 systemd[1]: libpod-conmon-751b0dc75f86283777795f5c34315144c34db0954241d124346e9254a6d5ac6f.scope: Deactivated successfully.
Jan 31 08:56:15 compute-2 podman[327190]: 2026-01-31 08:56:15.046971907 +0000 UTC m=+0.040329077 container remove 751b0dc75f86283777795f5c34315144c34db0954241d124346e9254a6d5ac6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:56:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:56:15.051 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[80390ba0-a5fc-4311-95ae-e5402a6da3ac]: (4, ('Sat Jan 31 08:56:14 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652 (751b0dc75f86283777795f5c34315144c34db0954241d124346e9254a6d5ac6f)\n751b0dc75f86283777795f5c34315144c34db0954241d124346e9254a6d5ac6f\nSat Jan 31 08:56:14 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652 (751b0dc75f86283777795f5c34315144c34db0954241d124346e9254a6d5ac6f)\n751b0dc75f86283777795f5c34315144c34db0954241d124346e9254a6d5ac6f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:56:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:56:15.054 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[56f151af-ab10-417d-91e6-0ffa1d441555]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:56:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:56:15.055 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap53eebd24-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:56:15 compute-2 nova_compute[226829]: 2026-01-31 08:56:15.105 226833 DEBUG nova.virt.libvirt.vif [None req-8df85a60-5729-40ea-8ac7-254ca734f390 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:55:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-114162633',display_name='tempest-TestStampPattern-server-114162633',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-teststamppattern-server-114162633',id=205,image_ref='6827763f-c9c4-43fc-825d-2f9c946c4536',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOfz5EJ1EvVhkopw671xsxq9cmxCD9AJZYRdx1fZqnsJKH4HE3ct43AahyjBDecQtfre/K2oZ3kPMxp5bbpWjZgXwmif2lJfZCK32Cd1YqdcHbaKXFc2nUgzqikPeTQnpA==',key_name='tempest-TestStampPattern-1030074900',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:55:21Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e6e96de7b2784be1adce763bc9c9adc5',ramdisk_id='',reservation_id='r-jkkdi703',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='da90f1fb-9090-49b5-a510-d7e6ac7a30d6',image_min_disk='1',image_min_ram='0',image_owner_id='e6e96de7b2784be1adce763bc9c9adc5',image_owner_project_name='tempest-TestStampPattern-23409568',image_owner_user_name='tempest-TestStampPattern-23409568-project-member',image_user_id='a798fdf6d13d4af4b166dd94b5cea7cc',owner_project_name='tempest-TestStampPattern-23409568',owner_user_name='tempest-TestStampPattern-23409568-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:55:21Z,user_data=None,user_id='a798fdf6d13d4af4b166dd94b5cea7cc',uuid=27645b95-3e37-43ba-8465-c8789c0f8700,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "495003a6-1a9f-4821-a3a7-1aee88998d2e", "address": "fa:16:3e:37:2d:f9", "network": {"id": "53eebd24-3b32-4949-827a-524f9e042652", "bridge": "br-int", "label": "tempest-TestStampPattern-1663738459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6e96de7b2784be1adce763bc9c9adc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap495003a6-1a", "ovs_interfaceid": "495003a6-1a9f-4821-a3a7-1aee88998d2e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:56:15 compute-2 nova_compute[226829]: 2026-01-31 08:56:15.105 226833 DEBUG nova.network.os_vif_util [None req-8df85a60-5729-40ea-8ac7-254ca734f390 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Converting VIF {"id": "495003a6-1a9f-4821-a3a7-1aee88998d2e", "address": "fa:16:3e:37:2d:f9", "network": {"id": "53eebd24-3b32-4949-827a-524f9e042652", "bridge": "br-int", "label": "tempest-TestStampPattern-1663738459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6e96de7b2784be1adce763bc9c9adc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap495003a6-1a", "ovs_interfaceid": "495003a6-1a9f-4821-a3a7-1aee88998d2e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:56:15 compute-2 nova_compute[226829]: 2026-01-31 08:56:15.106 226833 DEBUG nova.network.os_vif_util [None req-8df85a60-5729-40ea-8ac7-254ca734f390 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:37:2d:f9,bridge_name='br-int',has_traffic_filtering=True,id=495003a6-1a9f-4821-a3a7-1aee88998d2e,network=Network(53eebd24-3b32-4949-827a-524f9e042652),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap495003a6-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:56:15 compute-2 nova_compute[226829]: 2026-01-31 08:56:15.107 226833 DEBUG os_vif [None req-8df85a60-5729-40ea-8ac7-254ca734f390 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:2d:f9,bridge_name='br-int',has_traffic_filtering=True,id=495003a6-1a9f-4821-a3a7-1aee88998d2e,network=Network(53eebd24-3b32-4949-827a-524f9e042652),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap495003a6-1a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:56:15 compute-2 nova_compute[226829]: 2026-01-31 08:56:15.109 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:15 compute-2 nova_compute[226829]: 2026-01-31 08:56:15.109 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap495003a6-1a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:56:15 compute-2 nova_compute[226829]: 2026-01-31 08:56:15.113 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:15 compute-2 kernel: tap53eebd24-30: left promiscuous mode
Jan 31 08:56:15 compute-2 nova_compute[226829]: 2026-01-31 08:56:15.116 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:56:15 compute-2 nova_compute[226829]: 2026-01-31 08:56:15.119 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:56:15.122 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d426cbed-f7d8-4ece-9829-2a5ad7a1e562]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:56:15 compute-2 nova_compute[226829]: 2026-01-31 08:56:15.124 226833 INFO os_vif [None req-8df85a60-5729-40ea-8ac7-254ca734f390 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:2d:f9,bridge_name='br-int',has_traffic_filtering=True,id=495003a6-1a9f-4821-a3a7-1aee88998d2e,network=Network(53eebd24-3b32-4949-827a-524f9e042652),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap495003a6-1a')
Jan 31 08:56:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:56:15.136 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ff55ef33-556d-4509-a551-eef016f31c4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:56:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:56:15.138 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[92a6da30-e281-44da-bb1b-25b061eb3a1c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:56:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:56:15.155 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a231e7c2-f412-4ef6-9759-a27058216897]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 997216, 'reachable_time': 32232, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327220, 'error': None, 'target': 'ovnmeta-53eebd24-3b32-4949-827a-524f9e042652', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:56:15 compute-2 systemd[1]: run-netns-ovnmeta\x2d53eebd24\x2d3b32\x2d4949\x2d827a\x2d524f9e042652.mount: Deactivated successfully.
Jan 31 08:56:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:56:15.158 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-53eebd24-3b32-4949-827a-524f9e042652 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:56:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:56:15.158 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[eade01ba-3f63-4281-a2d1-c5ea07ece919]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:56:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:56:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:15.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:56:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:56:15 compute-2 nova_compute[226829]: 2026-01-31 08:56:15.570 226833 INFO nova.virt.libvirt.driver [None req-8df85a60-5729-40ea-8ac7-254ca734f390 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Deleting instance files /var/lib/nova/instances/27645b95-3e37-43ba-8465-c8789c0f8700_del
Jan 31 08:56:15 compute-2 nova_compute[226829]: 2026-01-31 08:56:15.571 226833 INFO nova.virt.libvirt.driver [None req-8df85a60-5729-40ea-8ac7-254ca734f390 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Deletion of /var/lib/nova/instances/27645b95-3e37-43ba-8465-c8789c0f8700_del complete
Jan 31 08:56:15 compute-2 nova_compute[226829]: 2026-01-31 08:56:15.708 226833 INFO nova.compute.manager [None req-8df85a60-5729-40ea-8ac7-254ca734f390 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Took 0.98 seconds to destroy the instance on the hypervisor.
Jan 31 08:56:15 compute-2 nova_compute[226829]: 2026-01-31 08:56:15.709 226833 DEBUG oslo.service.loopingcall [None req-8df85a60-5729-40ea-8ac7-254ca734f390 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:56:15 compute-2 nova_compute[226829]: 2026-01-31 08:56:15.709 226833 DEBUG nova.compute.manager [-] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:56:15 compute-2 nova_compute[226829]: 2026-01-31 08:56:15.710 226833 DEBUG nova.network.neutron [-] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:56:16 compute-2 nova_compute[226829]: 2026-01-31 08:56:16.225 226833 DEBUG nova.compute.manager [req-783dab5a-96b3-439e-b54b-7bc557176b77 req-2bc7aa72-d96b-428e-bffc-746419179d37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Received event network-vif-unplugged-495003a6-1a9f-4821-a3a7-1aee88998d2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:56:16 compute-2 nova_compute[226829]: 2026-01-31 08:56:16.225 226833 DEBUG oslo_concurrency.lockutils [req-783dab5a-96b3-439e-b54b-7bc557176b77 req-2bc7aa72-d96b-428e-bffc-746419179d37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "27645b95-3e37-43ba-8465-c8789c0f8700-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:56:16 compute-2 nova_compute[226829]: 2026-01-31 08:56:16.225 226833 DEBUG oslo_concurrency.lockutils [req-783dab5a-96b3-439e-b54b-7bc557176b77 req-2bc7aa72-d96b-428e-bffc-746419179d37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "27645b95-3e37-43ba-8465-c8789c0f8700-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:56:16 compute-2 nova_compute[226829]: 2026-01-31 08:56:16.226 226833 DEBUG oslo_concurrency.lockutils [req-783dab5a-96b3-439e-b54b-7bc557176b77 req-2bc7aa72-d96b-428e-bffc-746419179d37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "27645b95-3e37-43ba-8465-c8789c0f8700-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:56:16 compute-2 nova_compute[226829]: 2026-01-31 08:56:16.226 226833 DEBUG nova.compute.manager [req-783dab5a-96b3-439e-b54b-7bc557176b77 req-2bc7aa72-d96b-428e-bffc-746419179d37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] No waiting events found dispatching network-vif-unplugged-495003a6-1a9f-4821-a3a7-1aee88998d2e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:56:16 compute-2 nova_compute[226829]: 2026-01-31 08:56:16.226 226833 DEBUG nova.compute.manager [req-783dab5a-96b3-439e-b54b-7bc557176b77 req-2bc7aa72-d96b-428e-bffc-746419179d37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Received event network-vif-unplugged-495003a6-1a9f-4821-a3a7-1aee88998d2e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:56:16 compute-2 ceph-mon[77282]: pgmap v3719: 305 pgs: 305 active+clean; 304 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 573 KiB/s rd, 361 KiB/s wr, 17 op/s
Jan 31 08:56:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:56:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:16.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:56:17 compute-2 nova_compute[226829]: 2026-01-31 08:56:17.310 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Updating instance_info_cache with network_info: [{"id": "495003a6-1a9f-4821-a3a7-1aee88998d2e", "address": "fa:16:3e:37:2d:f9", "network": {"id": "53eebd24-3b32-4949-827a-524f9e042652", "bridge": "br-int", "label": "tempest-TestStampPattern-1663738459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6e96de7b2784be1adce763bc9c9adc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap495003a6-1a", "ovs_interfaceid": "495003a6-1a9f-4821-a3a7-1aee88998d2e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:56:17 compute-2 nova_compute[226829]: 2026-01-31 08:56:17.344 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-27645b95-3e37-43ba-8465-c8789c0f8700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:56:17 compute-2 nova_compute[226829]: 2026-01-31 08:56:17.344 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:56:17 compute-2 nova_compute[226829]: 2026-01-31 08:56:17.345 226833 DEBUG oslo_concurrency.lockutils [req-bd43775a-8d03-4a9e-bd99-bc3e740dc43b req-7dd45215-3932-462a-98ab-9fad1d34625e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-27645b95-3e37-43ba-8465-c8789c0f8700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:56:17 compute-2 nova_compute[226829]: 2026-01-31 08:56:17.345 226833 DEBUG nova.network.neutron [req-bd43775a-8d03-4a9e-bd99-bc3e740dc43b req-7dd45215-3932-462a-98ab-9fad1d34625e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Refreshing network info cache for port 495003a6-1a9f-4821-a3a7-1aee88998d2e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:56:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:17.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:17 compute-2 nova_compute[226829]: 2026-01-31 08:56:17.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:56:18 compute-2 ceph-mon[77282]: pgmap v3720: 305 pgs: 305 active+clean; 291 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 581 KiB/s rd, 363 KiB/s wr, 29 op/s
Jan 31 08:56:18 compute-2 nova_compute[226829]: 2026-01-31 08:56:18.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:56:18 compute-2 nova_compute[226829]: 2026-01-31 08:56:18.531 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:56:18 compute-2 nova_compute[226829]: 2026-01-31 08:56:18.531 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:56:18 compute-2 nova_compute[226829]: 2026-01-31 08:56:18.531 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:56:18 compute-2 nova_compute[226829]: 2026-01-31 08:56:18.532 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:56:18 compute-2 nova_compute[226829]: 2026-01-31 08:56:18.532 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:56:18 compute-2 nova_compute[226829]: 2026-01-31 08:56:18.701 226833 DEBUG nova.compute.manager [req-0bbb0529-0886-4d29-ae40-2a7ad34583fd req-7a27f6d8-f72a-4066-acf9-840f1863bbe3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Received event network-vif-plugged-495003a6-1a9f-4821-a3a7-1aee88998d2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:56:18 compute-2 nova_compute[226829]: 2026-01-31 08:56:18.701 226833 DEBUG oslo_concurrency.lockutils [req-0bbb0529-0886-4d29-ae40-2a7ad34583fd req-7a27f6d8-f72a-4066-acf9-840f1863bbe3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "27645b95-3e37-43ba-8465-c8789c0f8700-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:56:18 compute-2 nova_compute[226829]: 2026-01-31 08:56:18.701 226833 DEBUG oslo_concurrency.lockutils [req-0bbb0529-0886-4d29-ae40-2a7ad34583fd req-7a27f6d8-f72a-4066-acf9-840f1863bbe3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "27645b95-3e37-43ba-8465-c8789c0f8700-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:56:18 compute-2 nova_compute[226829]: 2026-01-31 08:56:18.701 226833 DEBUG oslo_concurrency.lockutils [req-0bbb0529-0886-4d29-ae40-2a7ad34583fd req-7a27f6d8-f72a-4066-acf9-840f1863bbe3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "27645b95-3e37-43ba-8465-c8789c0f8700-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:56:18 compute-2 nova_compute[226829]: 2026-01-31 08:56:18.702 226833 DEBUG nova.compute.manager [req-0bbb0529-0886-4d29-ae40-2a7ad34583fd req-7a27f6d8-f72a-4066-acf9-840f1863bbe3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] No waiting events found dispatching network-vif-plugged-495003a6-1a9f-4821-a3a7-1aee88998d2e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:56:18 compute-2 nova_compute[226829]: 2026-01-31 08:56:18.702 226833 WARNING nova.compute.manager [req-0bbb0529-0886-4d29-ae40-2a7ad34583fd req-7a27f6d8-f72a-4066-acf9-840f1863bbe3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Received unexpected event network-vif-plugged-495003a6-1a9f-4821-a3a7-1aee88998d2e for instance with vm_state active and task_state deleting.
Jan 31 08:56:18 compute-2 nova_compute[226829]: 2026-01-31 08:56:18.843 226833 DEBUG nova.network.neutron [-] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:56:18 compute-2 nova_compute[226829]: 2026-01-31 08:56:18.876 226833 INFO nova.compute.manager [-] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Took 3.17 seconds to deallocate network for instance.
Jan 31 08:56:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:56:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:18 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/593862730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:56:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:56:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:18.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:56:18 compute-2 nova_compute[226829]: 2026-01-31 08:56:18.939 226833 DEBUG oslo_concurrency.lockutils [None req-8df85a60-5729-40ea-8ac7-254ca734f390 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:56:18 compute-2 nova_compute[226829]: 2026-01-31 08:56:18.940 226833 DEBUG oslo_concurrency.lockutils [None req-8df85a60-5729-40ea-8ac7-254ca734f390 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:56:18 compute-2 nova_compute[226829]: 2026-01-31 08:56:18.947 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:56:19 compute-2 nova_compute[226829]: 2026-01-31 08:56:19.014 226833 DEBUG oslo_concurrency.processutils [None req-8df85a60-5729-40ea-8ac7-254ca734f390 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:56:19 compute-2 nova_compute[226829]: 2026-01-31 08:56:19.062 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:19 compute-2 nova_compute[226829]: 2026-01-31 08:56:19.157 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:56:19 compute-2 nova_compute[226829]: 2026-01-31 08:56:19.158 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4032MB free_disk=20.939666748046875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:56:19 compute-2 nova_compute[226829]: 2026-01-31 08:56:19.158 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:56:19 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/593862730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:56:19 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:56:19 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3753263597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:56:19 compute-2 nova_compute[226829]: 2026-01-31 08:56:19.421 226833 DEBUG oslo_concurrency.processutils [None req-8df85a60-5729-40ea-8ac7-254ca734f390 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:56:19 compute-2 nova_compute[226829]: 2026-01-31 08:56:19.425 226833 DEBUG nova.compute.provider_tree [None req-8df85a60-5729-40ea-8ac7-254ca734f390 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:56:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:56:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:19.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:56:19 compute-2 nova_compute[226829]: 2026-01-31 08:56:19.489 226833 DEBUG nova.network.neutron [req-bd43775a-8d03-4a9e-bd99-bc3e740dc43b req-7dd45215-3932-462a-98ab-9fad1d34625e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Updated VIF entry in instance network info cache for port 495003a6-1a9f-4821-a3a7-1aee88998d2e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:56:19 compute-2 nova_compute[226829]: 2026-01-31 08:56:19.490 226833 DEBUG nova.network.neutron [req-bd43775a-8d03-4a9e-bd99-bc3e740dc43b req-7dd45215-3932-462a-98ab-9fad1d34625e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Updating instance_info_cache with network_info: [{"id": "495003a6-1a9f-4821-a3a7-1aee88998d2e", "address": "fa:16:3e:37:2d:f9", "network": {"id": "53eebd24-3b32-4949-827a-524f9e042652", "bridge": "br-int", "label": "tempest-TestStampPattern-1663738459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6e96de7b2784be1adce763bc9c9adc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap495003a6-1a", "ovs_interfaceid": "495003a6-1a9f-4821-a3a7-1aee88998d2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:56:19 compute-2 nova_compute[226829]: 2026-01-31 08:56:19.575 226833 DEBUG nova.scheduler.client.report [None req-8df85a60-5729-40ea-8ac7-254ca734f390 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:56:19 compute-2 nova_compute[226829]: 2026-01-31 08:56:19.590 226833 DEBUG oslo_concurrency.lockutils [req-bd43775a-8d03-4a9e-bd99-bc3e740dc43b req-7dd45215-3932-462a-98ab-9fad1d34625e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-27645b95-3e37-43ba-8465-c8789c0f8700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:56:19 compute-2 nova_compute[226829]: 2026-01-31 08:56:19.608 226833 DEBUG oslo_concurrency.lockutils [None req-8df85a60-5729-40ea-8ac7-254ca734f390 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:56:19 compute-2 nova_compute[226829]: 2026-01-31 08:56:19.613 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.455s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:56:19 compute-2 nova_compute[226829]: 2026-01-31 08:56:19.691 226833 INFO nova.scheduler.client.report [None req-8df85a60-5729-40ea-8ac7-254ca734f390 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Deleted allocations for instance 27645b95-3e37-43ba-8465-c8789c0f8700
Jan 31 08:56:19 compute-2 nova_compute[226829]: 2026-01-31 08:56:19.736 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:56:19 compute-2 nova_compute[226829]: 2026-01-31 08:56:19.736 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:56:19 compute-2 nova_compute[226829]: 2026-01-31 08:56:19.773 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:56:19 compute-2 nova_compute[226829]: 2026-01-31 08:56:19.828 226833 DEBUG oslo_concurrency.lockutils [None req-8df85a60-5729-40ea-8ac7-254ca734f390 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "27645b95-3e37-43ba-8465-c8789c0f8700" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:56:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:56:20 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/553797308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:56:20 compute-2 nova_compute[226829]: 2026-01-31 08:56:20.159 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:20 compute-2 nova_compute[226829]: 2026-01-31 08:56:20.179 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:56:20 compute-2 nova_compute[226829]: 2026-01-31 08:56:20.184 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:56:20 compute-2 nova_compute[226829]: 2026-01-31 08:56:20.290 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:56:20 compute-2 ceph-mon[77282]: pgmap v3721: 305 pgs: 305 active+clean; 290 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 581 KiB/s rd, 360 KiB/s wr, 28 op/s
Jan 31 08:56:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3753263597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:56:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3652238372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:56:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/553797308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:56:20 compute-2 nova_compute[226829]: 2026-01-31 08:56:20.524 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:56:20 compute-2 nova_compute[226829]: 2026-01-31 08:56:20.524 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:56:20 compute-2 nova_compute[226829]: 2026-01-31 08:56:20.524 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:20 compute-2 nova_compute[226829]: 2026-01-31 08:56:20.525 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:56:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:56:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:20.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:20 compute-2 nova_compute[226829]: 2026-01-31 08:56:20.950 226833 DEBUG nova.compute.manager [req-334f33d7-97ee-49a8-a93e-40f0bac13a7d req-dbc07a70-579b-4a76-a86a-29fd0911aba8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Received event network-vif-deleted-495003a6-1a9f-4821-a3a7-1aee88998d2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:56:20 compute-2 nova_compute[226829]: 2026-01-31 08:56:20.950 226833 INFO nova.compute.manager [req-334f33d7-97ee-49a8-a93e-40f0bac13a7d req-dbc07a70-579b-4a76-a86a-29fd0911aba8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Neutron deleted interface 495003a6-1a9f-4821-a3a7-1aee88998d2e; detaching it from the instance and deleting it from the info cache
Jan 31 08:56:20 compute-2 nova_compute[226829]: 2026-01-31 08:56:20.951 226833 DEBUG nova.network.neutron [req-334f33d7-97ee-49a8-a93e-40f0bac13a7d req-dbc07a70-579b-4a76-a86a-29fd0911aba8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Jan 31 08:56:20 compute-2 nova_compute[226829]: 2026-01-31 08:56:20.953 226833 DEBUG nova.compute.manager [req-334f33d7-97ee-49a8-a93e-40f0bac13a7d req-dbc07a70-579b-4a76-a86a-29fd0911aba8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Detach interface failed, port_id=495003a6-1a9f-4821-a3a7-1aee88998d2e, reason: Instance 27645b95-3e37-43ba-8465-c8789c0f8700 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:56:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3369310111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:56:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:21.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:22 compute-2 ceph-mon[77282]: pgmap v3722: 305 pgs: 305 active+clean; 283 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 394 KiB/s rd, 360 KiB/s wr, 38 op/s
Jan 31 08:56:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:22.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:23.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:23 compute-2 sudo[327298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:56:23 compute-2 sudo[327298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:23 compute-2 sudo[327298]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:23 compute-2 sudo[327323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:56:23 compute-2 sudo[327323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:23 compute-2 sudo[327323]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:24 compute-2 nova_compute[226829]: 2026-01-31 08:56:24.065 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e401 e401: 3 total, 3 up, 3 in
Jan 31 08:56:24 compute-2 ceph-mon[77282]: pgmap v3723: 305 pgs: 305 active+clean; 283 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 3.6 KiB/s wr, 41 op/s
Jan 31 08:56:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2899906367' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:56:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2899906367' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:56:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:24.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:25 compute-2 nova_compute[226829]: 2026-01-31 08:56:25.160 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:25 compute-2 ceph-mon[77282]: osdmap e401: 3 total, 3 up, 3 in
Jan 31 08:56:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:25.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:56:26 compute-2 ceph-mon[77282]: pgmap v3725: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 40 KiB/s rd, 4.3 KiB/s wr, 54 op/s
Jan 31 08:56:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e402 e402: 3 total, 3 up, 3 in
Jan 31 08:56:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:56:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:26.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:56:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:27.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:27 compute-2 ceph-mon[77282]: osdmap e402: 3 total, 3 up, 3 in
Jan 31 08:56:28 compute-2 ceph-mon[77282]: pgmap v3727: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 56 KiB/s rd, 2.9 KiB/s wr, 77 op/s
Jan 31 08:56:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:56:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:28.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:56:29 compute-2 nova_compute[226829]: 2026-01-31 08:56:29.067 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:29.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:29 compute-2 ceph-mon[77282]: pgmap v3728: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 54 KiB/s rd, 3.7 KiB/s wr, 74 op/s
Jan 31 08:56:29 compute-2 nova_compute[226829]: 2026-01-31 08:56:29.957 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849774.9559479, 27645b95-3e37-43ba-8465-c8789c0f8700 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:56:29 compute-2 nova_compute[226829]: 2026-01-31 08:56:29.957 226833 INFO nova.compute.manager [-] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] VM Stopped (Lifecycle Event)
Jan 31 08:56:29 compute-2 nova_compute[226829]: 2026-01-31 08:56:29.995 226833 DEBUG nova.compute.manager [None req-708607fe-2d83-4e19-ab53-862a470d9b20 - - - - - -] [instance: 27645b95-3e37-43ba-8465-c8789c0f8700] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:56:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e403 e403: 3 total, 3 up, 3 in
Jan 31 08:56:30 compute-2 nova_compute[226829]: 2026-01-31 08:56:30.161 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:56:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:56:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:30.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:56:31 compute-2 ceph-mon[77282]: osdmap e403: 3 total, 3 up, 3 in
Jan 31 08:56:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:56:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:31.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:56:32 compute-2 ceph-mon[77282]: pgmap v3730: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 4.2 KiB/s wr, 67 op/s
Jan 31 08:56:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:32.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:33.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:34 compute-2 nova_compute[226829]: 2026-01-31 08:56:34.069 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:34 compute-2 ceph-mon[77282]: pgmap v3731: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 3.0 KiB/s wr, 49 op/s
Jan 31 08:56:34 compute-2 nova_compute[226829]: 2026-01-31 08:56:34.527 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:56:34 compute-2 nova_compute[226829]: 2026-01-31 08:56:34.528 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:56:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:34.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:34 compute-2 nova_compute[226829]: 2026-01-31 08:56:34.996 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:35 compute-2 nova_compute[226829]: 2026-01-31 08:56:35.162 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e404 e404: 3 total, 3 up, 3 in
Jan 31 08:56:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:56:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:35.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:56:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:56:36 compute-2 ceph-mon[77282]: pgmap v3732: 305 pgs: 305 active+clean; 173 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 2.7 KiB/s wr, 42 op/s
Jan 31 08:56:36 compute-2 ceph-mon[77282]: osdmap e404: 3 total, 3 up, 3 in
Jan 31 08:56:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2846463667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:56:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:56:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:36.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:56:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:56:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:37.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:56:38 compute-2 ceph-mon[77282]: pgmap v3734: 305 pgs: 305 active+clean; 151 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 1.7 KiB/s wr, 41 op/s
Jan 31 08:56:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:56:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:38.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:56:39 compute-2 nova_compute[226829]: 2026-01-31 08:56:39.071 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:39 compute-2 podman[327356]: 2026-01-31 08:56:39.20786159 +0000 UTC m=+0.086679897 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 08:56:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:56:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:39.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:56:39 compute-2 sudo[327383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:56:39 compute-2 sudo[327383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:39 compute-2 sudo[327383]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:39 compute-2 sudo[327408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:56:39 compute-2 sudo[327408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:39 compute-2 sudo[327408]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:39 compute-2 sudo[327433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:56:39 compute-2 sudo[327433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:39 compute-2 sudo[327433]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:39 compute-2 sudo[327458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:56:39 compute-2 sudo[327458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:40 compute-2 nova_compute[226829]: 2026-01-31 08:56:40.164 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:40 compute-2 sudo[327458]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:40 compute-2 ceph-mon[77282]: pgmap v3735: 305 pgs: 305 active+clean; 122 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 1.6 KiB/s wr, 37 op/s
Jan 31 08:56:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:56:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:40.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:56:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:41.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:56:42 compute-2 ceph-mon[77282]: pgmap v3736: 305 pgs: 305 active+clean; 122 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Jan 31 08:56:42 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:56:42 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:56:42 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:56:42 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:56:42 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:56:42 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:56:42 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:56:42 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:56:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:42.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:56:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:43.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:56:43 compute-2 sudo[327516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:56:43 compute-2 sudo[327516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:43 compute-2 sudo[327516]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:43 compute-2 sudo[327541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:56:43 compute-2 sudo[327541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:43 compute-2 sudo[327541]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:44 compute-2 nova_compute[226829]: 2026-01-31 08:56:44.073 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:44 compute-2 ceph-mon[77282]: pgmap v3737: 305 pgs: 305 active+clean; 122 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 1.8 KiB/s wr, 48 op/s
Jan 31 08:56:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1492873453' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:56:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1492873453' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:56:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:44.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:45 compute-2 podman[327567]: 2026-01-31 08:56:45.152609313 +0000 UTC m=+0.043993536 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 31 08:56:45 compute-2 nova_compute[226829]: 2026-01-31 08:56:45.166 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:56:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:45.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:56:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:56:46 compute-2 ceph-mon[77282]: pgmap v3738: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1023 B/s wr, 29 op/s
Jan 31 08:56:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:46.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:47.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:47 compute-2 sudo[327587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:56:47 compute-2 sudo[327587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:47 compute-2 sudo[327587]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:47 compute-2 sudo[327612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:56:47 compute-2 sudo[327612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:47 compute-2 sudo[327612]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:48 compute-2 ceph-mon[77282]: pgmap v3739: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.1 KiB/s wr, 27 op/s
Jan 31 08:56:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:56:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:56:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:48.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:49 compute-2 nova_compute[226829]: 2026-01-31 08:56:49.076 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:49.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:50 compute-2 nova_compute[226829]: 2026-01-31 08:56:50.169 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:50 compute-2 ceph-mon[77282]: pgmap v3740: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 596 B/s wr, 15 op/s
Jan 31 08:56:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:56:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:50.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:56:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:51.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:56:52 compute-2 ceph-mon[77282]: pgmap v3741: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 596 B/s wr, 14 op/s
Jan 31 08:56:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:52.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:53.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2744281129' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:56:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2744281129' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:56:54 compute-2 nova_compute[226829]: 2026-01-31 08:56:54.078 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:54 compute-2 ceph-mon[77282]: pgmap v3742: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 596 B/s wr, 14 op/s
Jan 31 08:56:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:54.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:55 compute-2 nova_compute[226829]: 2026-01-31 08:56:55.170 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:55.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:56:55 compute-2 ceph-mon[77282]: pgmap v3743: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 KiB/s rd, 255 B/s wr, 2 op/s
Jan 31 08:56:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:56.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:57 compute-2 nova_compute[226829]: 2026-01-31 08:56:57.283 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:57 compute-2 nova_compute[226829]: 2026-01-31 08:56:57.331 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:57.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:58 compute-2 ceph-mon[77282]: pgmap v3744: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s
Jan 31 08:56:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:58.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:59 compute-2 nova_compute[226829]: 2026-01-31 08:56:59.079 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:56:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Cumulative writes: 17K writes, 88K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 17K writes, 17K syncs, 1.00 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1570 writes, 7393 keys, 1570 commit groups, 1.0 writes per commit group, ingest: 15.64 MB, 0.03 MB/s
                                           Interval WAL: 1570 writes, 1570 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     84.5      1.28              0.32        57    0.023       0      0       0.0       0.0
                                             L6      1/0   12.79 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.4    155.9    133.8      4.35              1.62        56    0.078    437K    30K       0.0       0.0
                                            Sum      1/0   12.79 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.4    120.3    122.6      5.63              1.94       113    0.050    437K    30K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.9    155.4    152.4      0.45              0.18        10    0.045     54K   2581       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0    155.9    133.8      4.35              1.62        56    0.078    437K    30K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     84.6      1.28              0.32        56    0.023       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.106, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.67 GB write, 0.10 MB/s write, 0.66 GB read, 0.10 MB/s read, 5.6 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559ef3d1f1f0#2 capacity: 304.00 MB usage: 73.14 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000658 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4214,70.03 MB,23.0358%) FilterBlock(113,1.17 MB,0.384617%) IndexBlock(113,1.94 MB,0.63969%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 08:56:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:56:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:56:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:59.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:57:00 compute-2 nova_compute[226829]: 2026-01-31 08:57:00.172 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:00 compute-2 ceph-mon[77282]: pgmap v3745: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:57:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:57:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:00.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:57:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:01.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:57:02 compute-2 ceph-mon[77282]: pgmap v3746: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:57:02 compute-2 nova_compute[226829]: 2026-01-31 08:57:02.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:57:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:03.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:03.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:03 compute-2 sudo[327646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:57:03 compute-2 sudo[327646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:03 compute-2 sudo[327646]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:03 compute-2 sudo[327671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:57:03 compute-2 sudo[327671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:04 compute-2 sudo[327671]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:04 compute-2 nova_compute[226829]: 2026-01-31 08:57:04.081 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:04 compute-2 ceph-mon[77282]: pgmap v3747: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:57:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:05.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:05 compute-2 nova_compute[226829]: 2026-01-31 08:57:05.174 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:05 compute-2 ceph-mon[77282]: pgmap v3748: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:57:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:57:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:05.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:57:06.935 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:57:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:57:06.936 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:57:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:57:06.936 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:57:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:07.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:57:07.222 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=97, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=96) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:57:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:57:07.223 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:57:07 compute-2 nova_compute[226829]: 2026-01-31 08:57:07.222 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:57:07.224 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '97'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:57:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:07.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:08 compute-2 ceph-mon[77282]: pgmap v3749: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:57:08 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3311638129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:57:08 compute-2 nova_compute[226829]: 2026-01-31 08:57:08.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:57:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:09.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:09 compute-2 nova_compute[226829]: 2026-01-31 08:57:09.126 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:09.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:10 compute-2 nova_compute[226829]: 2026-01-31 08:57:10.213 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:10 compute-2 podman[327700]: 2026-01-31 08:57:10.231778333 +0000 UTC m=+0.120957309 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:57:10 compute-2 ceph-mon[77282]: pgmap v3750: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #181. Immutable memtables: 0.
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:57:10.270297) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 181
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849830270404, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 1732, "num_deletes": 253, "total_data_size": 3909550, "memory_usage": 3948696, "flush_reason": "Manual Compaction"}
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #182: started
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849830290585, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 182, "file_size": 1592728, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 87570, "largest_seqno": 89296, "table_properties": {"data_size": 1587089, "index_size": 2778, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14888, "raw_average_key_size": 21, "raw_value_size": 1574680, "raw_average_value_size": 2255, "num_data_blocks": 124, "num_entries": 698, "num_filter_entries": 698, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849691, "oldest_key_time": 1769849691, "file_creation_time": 1769849830, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 20371 microseconds, and 8585 cpu microseconds.
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:57:10.290662) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #182: 1592728 bytes OK
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:57:10.290687) [db/memtable_list.cc:519] [default] Level-0 commit table #182 started
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:57:10.292877) [db/memtable_list.cc:722] [default] Level-0 commit table #182: memtable #1 done
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:57:10.292903) EVENT_LOG_v1 {"time_micros": 1769849830292895, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:57:10.292928) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 3901709, prev total WAL file size 3901709, number of live WAL files 2.
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000178.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:57:10.294019) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303130' seq:72057594037927935, type:22 .. '6D6772737461740033323631' seq:0, type:0; will stop at (end)
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [182(1555KB)], [180(12MB)]
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849830294104, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [182], "files_L6": [180], "score": -1, "input_data_size": 14999360, "oldest_snapshot_seqno": -1}
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #183: 10850 keys, 12099506 bytes, temperature: kUnknown
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849830384288, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 183, "file_size": 12099506, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12032823, "index_size": 38474, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27141, "raw_key_size": 286563, "raw_average_key_size": 26, "raw_value_size": 11846556, "raw_average_value_size": 1091, "num_data_blocks": 1457, "num_entries": 10850, "num_filter_entries": 10850, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769849830, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 183, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:57:10.384615) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 12099506 bytes
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:57:10.387927) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.1 rd, 134.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 12.8 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(17.0) write-amplify(7.6) OK, records in: 11312, records dropped: 462 output_compression: NoCompression
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:57:10.387962) EVENT_LOG_v1 {"time_micros": 1769849830387947, "job": 116, "event": "compaction_finished", "compaction_time_micros": 90279, "compaction_time_cpu_micros": 27596, "output_level": 6, "num_output_files": 1, "total_output_size": 12099506, "num_input_records": 11312, "num_output_records": 10850, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849830388380, "job": 116, "event": "table_file_deletion", "file_number": 182}
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000180.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849830390684, "job": 116, "event": "table_file_deletion", "file_number": 180}
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:57:10.293869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:57:10.390763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:57:10.390773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:57:10.390778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:57:10.390783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:57:10 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:57:10.390788) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:57:10 compute-2 nova_compute[226829]: 2026-01-31 08:57:10.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:57:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:57:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:11.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:11 compute-2 nova_compute[226829]: 2026-01-31 08:57:11.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:57:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:11.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:12 compute-2 ceph-mon[77282]: pgmap v3751: 305 pgs: 305 active+clean; 154 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.6 KiB/s rd, 1.1 MiB/s wr, 14 op/s
Jan 31 08:57:12 compute-2 nova_compute[226829]: 2026-01-31 08:57:12.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:57:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:57:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:13.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:57:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1209241508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:57:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:57:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:13.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:57:14 compute-2 nova_compute[226829]: 2026-01-31 08:57:14.127 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:14 compute-2 nova_compute[226829]: 2026-01-31 08:57:14.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:57:14 compute-2 nova_compute[226829]: 2026-01-31 08:57:14.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:57:14 compute-2 nova_compute[226829]: 2026-01-31 08:57:14.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:57:14 compute-2 ceph-mon[77282]: pgmap v3752: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:57:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:15.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:15 compute-2 nova_compute[226829]: 2026-01-31 08:57:15.216 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:15 compute-2 ceph-mon[77282]: pgmap v3753: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:57:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:57:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:15.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:15 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-2[77982]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 08:57:16 compute-2 podman[327731]: 2026-01-31 08:57:16.16839951 +0000 UTC m=+0.052120098 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 08:57:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:17.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:17.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:18 compute-2 ceph-mon[77282]: pgmap v3754: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:57:18 compute-2 nova_compute[226829]: 2026-01-31 08:57:18.779 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:57:18 compute-2 nova_compute[226829]: 2026-01-31 08:57:18.779 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:57:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:19.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:19 compute-2 nova_compute[226829]: 2026-01-31 08:57:19.129 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:19 compute-2 nova_compute[226829]: 2026-01-31 08:57:19.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:57:19 compute-2 nova_compute[226829]: 2026-01-31 08:57:19.644 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:57:19 compute-2 nova_compute[226829]: 2026-01-31 08:57:19.644 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:57:19 compute-2 nova_compute[226829]: 2026-01-31 08:57:19.645 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:57:19 compute-2 nova_compute[226829]: 2026-01-31 08:57:19.645 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:57:19 compute-2 nova_compute[226829]: 2026-01-31 08:57:19.645 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:57:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:57:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:19.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:57:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:57:20 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2896569779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:57:20 compute-2 nova_compute[226829]: 2026-01-31 08:57:20.073 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:57:20 compute-2 nova_compute[226829]: 2026-01-31 08:57:20.204 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:57:20 compute-2 nova_compute[226829]: 2026-01-31 08:57:20.206 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4073MB free_disk=20.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:57:20 compute-2 nova_compute[226829]: 2026-01-31 08:57:20.206 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:57:20 compute-2 nova_compute[226829]: 2026-01-31 08:57:20.206 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:57:20 compute-2 nova_compute[226829]: 2026-01-31 08:57:20.217 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:20 compute-2 ceph-mon[77282]: pgmap v3755: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:57:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2896569779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:57:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/395214702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:57:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:57:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:57:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:21.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:57:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:57:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:21.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:57:21 compute-2 nova_compute[226829]: 2026-01-31 08:57:21.955 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:57:21 compute-2 nova_compute[226829]: 2026-01-31 08:57:21.956 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:57:22 compute-2 nova_compute[226829]: 2026-01-31 08:57:22.121 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:57:22 compute-2 ceph-mon[77282]: pgmap v3756: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:57:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:57:22 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1652517659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:57:22 compute-2 nova_compute[226829]: 2026-01-31 08:57:22.541 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:57:22 compute-2 nova_compute[226829]: 2026-01-31 08:57:22.546 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:57:22 compute-2 nova_compute[226829]: 2026-01-31 08:57:22.636 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:57:22 compute-2 nova_compute[226829]: 2026-01-31 08:57:22.837 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:57:22 compute-2 nova_compute[226829]: 2026-01-31 08:57:22.838 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:57:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:23.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1652517659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:57:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2312063628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:57:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/666852656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:57:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:23.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:24 compute-2 sudo[327799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:57:24 compute-2 sudo[327799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:24 compute-2 sudo[327799]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:24 compute-2 sudo[327824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:57:24 compute-2 sudo[327824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:24 compute-2 sudo[327824]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:24 compute-2 nova_compute[226829]: 2026-01-31 08:57:24.130 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:24 compute-2 ceph-mon[77282]: pgmap v3757: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.6 KiB/s rd, 652 KiB/s wr, 12 op/s
Jan 31 08:57:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:25.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:25 compute-2 nova_compute[226829]: 2026-01-31 08:57:25.218 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:57:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:25.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:26 compute-2 ceph-mon[77282]: pgmap v3758: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:57:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:27.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2818314949' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:57:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:27.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:28 compute-2 ceph-mon[77282]: pgmap v3759: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:57:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2565149127' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:57:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:57:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 76K writes, 315K keys, 76K commit groups, 1.0 writes per commit group, ingest: 0.32 GB, 0.05 MB/s
                                           Cumulative WAL: 76K writes, 27K syncs, 2.76 writes per sync, written: 0.32 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4519 writes, 18K keys, 4519 commit groups, 1.0 writes per commit group, ingest: 19.24 MB, 0.03 MB/s
                                           Interval WAL: 4519 writes, 1796 syncs, 2.52 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:57:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:29.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:29 compute-2 nova_compute[226829]: 2026-01-31 08:57:29.132 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:29.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:30 compute-2 nova_compute[226829]: 2026-01-31 08:57:30.220 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:30 compute-2 ceph-mon[77282]: pgmap v3760: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:57:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:57:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:57:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:31.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:57:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:31.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:32 compute-2 ceph-mon[77282]: pgmap v3761: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 KiB/s rd, 170 B/s wr, 3 op/s
Jan 31 08:57:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:33.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:33.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:34 compute-2 nova_compute[226829]: 2026-01-31 08:57:34.178 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:34 compute-2 ceph-mon[77282]: pgmap v3762: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 330 KiB/s rd, 12 KiB/s wr, 18 op/s
Jan 31 08:57:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:35.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:35 compute-2 nova_compute[226829]: 2026-01-31 08:57:35.222 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:35 compute-2 ceph-mon[77282]: pgmap v3763: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 12 KiB/s wr, 58 op/s
Jan 31 08:57:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:57:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:35.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:36 compute-2 nova_compute[226829]: 2026-01-31 08:57:36.838 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:57:36 compute-2 nova_compute[226829]: 2026-01-31 08:57:36.839 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:57:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:37.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:37.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:38 compute-2 ceph-mon[77282]: pgmap v3764: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 59 op/s
Jan 31 08:57:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:39.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:39 compute-2 nova_compute[226829]: 2026-01-31 08:57:39.219 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:39.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:40 compute-2 nova_compute[226829]: 2026-01-31 08:57:40.223 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:40 compute-2 ceph-mon[77282]: pgmap v3765: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Jan 31 08:57:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:57:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:57:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:41.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:57:41 compute-2 podman[327857]: 2026-01-31 08:57:41.17597674 +0000 UTC m=+0.068262028 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 31 08:57:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:57:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:41.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:57:42 compute-2 ceph-mon[77282]: pgmap v3766: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Jan 31 08:57:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:57:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:43.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:57:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:43.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:44 compute-2 sudo[327887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:57:44 compute-2 sudo[327887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:44 compute-2 sudo[327887]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:44 compute-2 sudo[327912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:57:44 compute-2 sudo[327912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:44 compute-2 nova_compute[226829]: 2026-01-31 08:57:44.220 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:44 compute-2 sudo[327912]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:44 compute-2 ceph-mon[77282]: pgmap v3767: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 69 op/s
Jan 31 08:57:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:57:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:45.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:57:45 compute-2 nova_compute[226829]: 2026-01-31 08:57:45.225 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:57:45 compute-2 nova_compute[226829]: 2026-01-31 08:57:45.747 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:57:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:45.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:46 compute-2 ceph-mon[77282]: pgmap v3768: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 MiB/s wr, 78 op/s
Jan 31 08:57:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:57:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:47.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:57:47 compute-2 podman[327938]: 2026-01-31 08:57:47.179195748 +0000 UTC m=+0.068131574 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 08:57:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:47.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:48 compute-2 sudo[327957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:57:48 compute-2 sudo[327957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:48 compute-2 sudo[327957]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:48 compute-2 sudo[327983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:57:48 compute-2 sudo[327983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:48 compute-2 sudo[327983]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:48 compute-2 sudo[328008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:57:48 compute-2 sudo[328008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:48 compute-2 sudo[328008]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:48 compute-2 sudo[328033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:57:48 compute-2 sudo[328033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:48 compute-2 ceph-mon[77282]: pgmap v3769: 305 pgs: 305 active+clean; 197 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 636 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 31 08:57:48 compute-2 sudo[328033]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:48 compute-2 nova_compute[226829]: 2026-01-31 08:57:48.549 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:57:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:57:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:49.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:57:49 compute-2 nova_compute[226829]: 2026-01-31 08:57:49.222 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:57:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:57:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:57:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:57:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:57:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:57:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:49.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:50 compute-2 nova_compute[226829]: 2026-01-31 08:57:50.282 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:50 compute-2 ceph-mon[77282]: pgmap v3770: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 763 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Jan 31 08:57:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:57:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:51.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:51.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:52 compute-2 ceph-mon[77282]: pgmap v3771: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 08:57:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:57:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:53.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:57:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/361818439' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:57:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/361818439' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:57:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:53.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:54 compute-2 nova_compute[226829]: 2026-01-31 08:57:54.224 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:54 compute-2 sudo[328092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:57:54 compute-2 sudo[328092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:54 compute-2 sudo[328092]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:54 compute-2 sudo[328117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:57:54 compute-2 sudo[328117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:54 compute-2 sudo[328117]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:54 compute-2 ceph-mon[77282]: pgmap v3772: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 08:57:54 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:57:54 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:57:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:55.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:55 compute-2 nova_compute[226829]: 2026-01-31 08:57:55.283 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:55 compute-2 ceph-mon[77282]: pgmap v3773: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 08:57:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:57:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:55.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:56 compute-2 nova_compute[226829]: 2026-01-31 08:57:56.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:57:56 compute-2 nova_compute[226829]: 2026-01-31 08:57:56.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:57:56 compute-2 nova_compute[226829]: 2026-01-31 08:57:56.625 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:57:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:57.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:57.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:58 compute-2 ceph-mon[77282]: pgmap v3774: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 206 KiB/s rd, 969 KiB/s wr, 39 op/s
Jan 31 08:57:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:57:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:59.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:57:59 compute-2 nova_compute[226829]: 2026-01-31 08:57:59.226 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:59 compute-2 ceph-mon[77282]: pgmap v3775: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 178 KiB/s rd, 301 KiB/s wr, 24 op/s
Jan 31 08:57:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:57:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:59.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:00 compute-2 nova_compute[226829]: 2026-01-31 08:58:00.285 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:58:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:01.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:01.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:02 compute-2 ceph-mon[77282]: pgmap v3776: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 KiB/s rd, 14 KiB/s wr, 0 op/s
Jan 31 08:58:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:58:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:03.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:58:03 compute-2 nova_compute[226829]: 2026-01-31 08:58:03.620 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:58:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:03.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:04 compute-2 nova_compute[226829]: 2026-01-31 08:58:04.272 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:04 compute-2 sudo[328147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:58:04 compute-2 sudo[328147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:04 compute-2 sudo[328147]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:04 compute-2 sudo[328172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:58:04 compute-2 sudo[328172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:04 compute-2 sudo[328172]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:04 compute-2 ceph-mon[77282]: pgmap v3777: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 13 KiB/s wr, 0 op/s
Jan 31 08:58:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:05.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:05 compute-2 nova_compute[226829]: 2026-01-31 08:58:05.288 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:58:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:05.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:06 compute-2 ceph-mon[77282]: pgmap v3778: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Jan 31 08:58:06 compute-2 nova_compute[226829]: 2026-01-31 08:58:06.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:58:06 compute-2 nova_compute[226829]: 2026-01-31 08:58:06.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:58:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:06.937 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:58:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:06.937 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:58:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:06.938 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:58:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:07.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:07 compute-2 nova_compute[226829]: 2026-01-31 08:58:07.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:58:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:07.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:08 compute-2 ceph-mon[77282]: pgmap v3779: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s wr, 0 op/s
Jan 31 08:58:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:09.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:09 compute-2 nova_compute[226829]: 2026-01-31 08:58:09.273 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:09.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:10 compute-2 nova_compute[226829]: 2026-01-31 08:58:10.289 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:10 compute-2 nova_compute[226829]: 2026-01-31 08:58:10.509 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:58:10 compute-2 ceph-mon[77282]: pgmap v3780: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.0 KiB/s rd, 3.1 KiB/s wr, 6 op/s
Jan 31 08:58:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:58:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:58:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:11.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:58:11 compute-2 ceph-mon[77282]: pgmap v3781: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.3 KiB/s wr, 8 op/s
Jan 31 08:58:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:11.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:12 compute-2 ovn_controller[133834]: 2026-01-31T08:58:12Z|00815|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Jan 31 08:58:12 compute-2 podman[328201]: 2026-01-31 08:58:12.210183775 +0000 UTC m=+0.098991412 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:58:12 compute-2 nova_compute[226829]: 2026-01-31 08:58:12.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:58:12 compute-2 nova_compute[226829]: 2026-01-31 08:58:12.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:58:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:13.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:13 compute-2 nova_compute[226829]: 2026-01-31 08:58:13.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:58:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:13.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:14 compute-2 nova_compute[226829]: 2026-01-31 08:58:14.275 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:14 compute-2 ceph-mon[77282]: pgmap v3782: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.5 KiB/s wr, 8 op/s
Jan 31 08:58:14 compute-2 nova_compute[226829]: 2026-01-31 08:58:14.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:58:14 compute-2 nova_compute[226829]: 2026-01-31 08:58:14.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:58:14 compute-2 nova_compute[226829]: 2026-01-31 08:58:14.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:58:14 compute-2 nova_compute[226829]: 2026-01-31 08:58:14.530 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:58:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:15.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:15.215 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=98, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=97) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:58:15 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:15.216 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:58:15 compute-2 nova_compute[226829]: 2026-01-31 08:58:15.257 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:15 compute-2 nova_compute[226829]: 2026-01-31 08:58:15.290 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3727841173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:58:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:58:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:15.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:16 compute-2 ceph-mon[77282]: pgmap v3783: 305 pgs: 305 active+clean; 207 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 485 KiB/s wr, 13 op/s
Jan 31 08:58:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1143733964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:58:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:17.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:17.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:18 compute-2 podman[328232]: 2026-01-31 08:58:18.152805315 +0000 UTC m=+0.038395944 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent)
Jan 31 08:58:18 compute-2 ceph-mon[77282]: pgmap v3784: 305 pgs: 305 active+clean; 219 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 700 KiB/s wr, 28 op/s
Jan 31 08:58:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:19.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:19 compute-2 nova_compute[226829]: 2026-01-31 08:58:19.276 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:19 compute-2 nova_compute[226829]: 2026-01-31 08:58:19.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:58:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:19.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:20 compute-2 nova_compute[226829]: 2026-01-31 08:58:20.292 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:20 compute-2 ceph-mon[77282]: pgmap v3785: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 31 08:58:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1049638260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #184. Immutable memtables: 0.
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:58:20.445575) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 184
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849900445619, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 925, "num_deletes": 251, "total_data_size": 1843625, "memory_usage": 1877024, "flush_reason": "Manual Compaction"}
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #185: started
Jan 31 08:58:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e405 e405: 3 total, 3 up, 3 in
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849900462335, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 185, "file_size": 1215924, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 89302, "largest_seqno": 90221, "table_properties": {"data_size": 1211751, "index_size": 1888, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9550, "raw_average_key_size": 19, "raw_value_size": 1203218, "raw_average_value_size": 2480, "num_data_blocks": 84, "num_entries": 485, "num_filter_entries": 485, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849831, "oldest_key_time": 1769849831, "file_creation_time": 1769849900, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 16793 microseconds, and 2720 cpu microseconds.
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:58:20.462367) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #185: 1215924 bytes OK
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:58:20.462423) [db/memtable_list.cc:519] [default] Level-0 commit table #185 started
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:58:20.467126) [db/memtable_list.cc:722] [default] Level-0 commit table #185: memtable #1 done
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:58:20.467153) EVENT_LOG_v1 {"time_micros": 1769849900467135, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:58:20.467168) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 1838987, prev total WAL file size 1839028, number of live WAL files 2.
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000181.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:58:20.467742) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [185(1187KB)], [183(11MB)]
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849900467771, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [185], "files_L6": [183], "score": -1, "input_data_size": 13315430, "oldest_snapshot_seqno": -1}
Jan 31 08:58:20 compute-2 nova_compute[226829]: 2026-01-31 08:58:20.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #186: 10817 keys, 11380814 bytes, temperature: kUnknown
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849900539384, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 186, "file_size": 11380814, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11314909, "index_size": 37760, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27077, "raw_key_size": 286569, "raw_average_key_size": 26, "raw_value_size": 11129523, "raw_average_value_size": 1028, "num_data_blocks": 1421, "num_entries": 10817, "num_filter_entries": 10817, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769849900, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 186, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:58:20.539616) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 11380814 bytes
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:58:20.541740) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.7 rd, 158.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 11.5 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(20.3) write-amplify(9.4) OK, records in: 11335, records dropped: 518 output_compression: NoCompression
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:58:20.541756) EVENT_LOG_v1 {"time_micros": 1769849900541749, "job": 118, "event": "compaction_finished", "compaction_time_micros": 71694, "compaction_time_cpu_micros": 22729, "output_level": 6, "num_output_files": 1, "total_output_size": 11380814, "num_input_records": 11335, "num_output_records": 10817, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849900541957, "job": 118, "event": "table_file_deletion", "file_number": 185}
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000183.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849900543376, "job": 118, "event": "table_file_deletion", "file_number": 183}
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:58:20.467652) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:58:20.543398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:58:20.543401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:58:20.543403) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:58:20.543404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:58:20 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-08:58:20.543405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:58:20 compute-2 nova_compute[226829]: 2026-01-31 08:58:20.590 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:58:20 compute-2 nova_compute[226829]: 2026-01-31 08:58:20.591 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:58:20 compute-2 nova_compute[226829]: 2026-01-31 08:58:20.591 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:58:20 compute-2 nova_compute[226829]: 2026-01-31 08:58:20.591 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:58:20 compute-2 nova_compute[226829]: 2026-01-31 08:58:20.591 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:58:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:58:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:58:20 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2809828954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:58:21 compute-2 nova_compute[226829]: 2026-01-31 08:58:21.008 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:58:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:21.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:21 compute-2 nova_compute[226829]: 2026-01-31 08:58:21.140 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:58:21 compute-2 nova_compute[226829]: 2026-01-31 08:58:21.141 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4071MB free_disk=20.94265365600586GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:58:21 compute-2 nova_compute[226829]: 2026-01-31 08:58:21.142 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:58:21 compute-2 nova_compute[226829]: 2026-01-31 08:58:21.142 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:58:21 compute-2 ceph-mon[77282]: osdmap e405: 3 total, 3 up, 3 in
Jan 31 08:58:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2809828954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:58:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2151004839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:58:21 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e406 e406: 3 total, 3 up, 3 in
Jan 31 08:58:21 compute-2 nova_compute[226829]: 2026-01-31 08:58:21.629 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:58:21 compute-2 nova_compute[226829]: 2026-01-31 08:58:21.629 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:58:21 compute-2 nova_compute[226829]: 2026-01-31 08:58:21.651 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:58:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:21.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:58:22 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2071452061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:58:22 compute-2 nova_compute[226829]: 2026-01-31 08:58:22.093 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:58:22 compute-2 nova_compute[226829]: 2026-01-31 08:58:22.103 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:58:22 compute-2 nova_compute[226829]: 2026-01-31 08:58:22.181 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:58:22 compute-2 nova_compute[226829]: 2026-01-31 08:58:22.183 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:58:22 compute-2 nova_compute[226829]: 2026-01-31 08:58:22.183 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:58:22 compute-2 ceph-mon[77282]: pgmap v3787: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Jan 31 08:58:22 compute-2 ceph-mon[77282]: osdmap e406: 3 total, 3 up, 3 in
Jan 31 08:58:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2071452061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:58:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1977114274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:58:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e407 e407: 3 total, 3 up, 3 in
Jan 31 08:58:22 compute-2 nova_compute[226829]: 2026-01-31 08:58:22.983 226833 DEBUG oslo_concurrency.lockutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "aebd9cf4-4275-4a78-95d0-563c83d51201" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:58:22 compute-2 nova_compute[226829]: 2026-01-31 08:58:22.984 226833 DEBUG oslo_concurrency.lockutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "aebd9cf4-4275-4a78-95d0-563c83d51201" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:58:23 compute-2 nova_compute[226829]: 2026-01-31 08:58:23.106 226833 DEBUG nova.compute.manager [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:58:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:23.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:23 compute-2 nova_compute[226829]: 2026-01-31 08:58:23.455 226833 DEBUG oslo_concurrency.lockutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:58:23 compute-2 nova_compute[226829]: 2026-01-31 08:58:23.456 226833 DEBUG oslo_concurrency.lockutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:58:23 compute-2 nova_compute[226829]: 2026-01-31 08:58:23.471 226833 DEBUG nova.virt.hardware [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:58:23 compute-2 nova_compute[226829]: 2026-01-31 08:58:23.471 226833 INFO nova.compute.claims [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Claim successful on node compute-2.ctlplane.example.com
Jan 31 08:58:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e408 e408: 3 total, 3 up, 3 in
Jan 31 08:58:23 compute-2 ceph-mon[77282]: osdmap e407: 3 total, 3 up, 3 in
Jan 31 08:58:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:23.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:23 compute-2 nova_compute[226829]: 2026-01-31 08:58:23.821 226833 DEBUG oslo_concurrency.processutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:58:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:58:24 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2855477775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:58:24 compute-2 nova_compute[226829]: 2026-01-31 08:58:24.245 226833 DEBUG oslo_concurrency.processutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:58:24 compute-2 nova_compute[226829]: 2026-01-31 08:58:24.250 226833 DEBUG nova.compute.provider_tree [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:58:24 compute-2 nova_compute[226829]: 2026-01-31 08:58:24.278 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:24 compute-2 nova_compute[226829]: 2026-01-31 08:58:24.366 226833 DEBUG nova.scheduler.client.report [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:58:24 compute-2 sudo[328321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:58:24 compute-2 sudo[328321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:24 compute-2 sudo[328321]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:24 compute-2 sudo[328346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:58:24 compute-2 sudo[328346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:24 compute-2 sudo[328346]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:24 compute-2 ceph-mon[77282]: pgmap v3790: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 2.2 MiB/s wr, 50 op/s
Jan 31 08:58:24 compute-2 ceph-mon[77282]: osdmap e408: 3 total, 3 up, 3 in
Jan 31 08:58:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2855477775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:58:24 compute-2 nova_compute[226829]: 2026-01-31 08:58:24.543 226833 DEBUG oslo_concurrency.lockutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:58:24 compute-2 nova_compute[226829]: 2026-01-31 08:58:24.543 226833 DEBUG nova.compute.manager [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:58:24 compute-2 nova_compute[226829]: 2026-01-31 08:58:24.875 226833 DEBUG nova.compute.manager [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:58:24 compute-2 nova_compute[226829]: 2026-01-31 08:58:24.876 226833 DEBUG nova.network.neutron [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:58:24 compute-2 nova_compute[226829]: 2026-01-31 08:58:24.965 226833 INFO nova.virt.libvirt.driver [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:58:25 compute-2 nova_compute[226829]: 2026-01-31 08:58:25.084 226833 DEBUG nova.compute.manager [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:58:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:25.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:25 compute-2 nova_compute[226829]: 2026-01-31 08:58:25.174 226833 DEBUG nova.policy [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4a56abd8fdd341ae88a99e102ab399de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:58:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:25.219 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '98'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:58:25 compute-2 nova_compute[226829]: 2026-01-31 08:58:25.293 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:25 compute-2 nova_compute[226829]: 2026-01-31 08:58:25.491 226833 DEBUG nova.compute.manager [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:58:25 compute-2 nova_compute[226829]: 2026-01-31 08:58:25.494 226833 DEBUG nova.virt.libvirt.driver [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:58:25 compute-2 nova_compute[226829]: 2026-01-31 08:58:25.495 226833 INFO nova.virt.libvirt.driver [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Creating image(s)
Jan 31 08:58:25 compute-2 nova_compute[226829]: 2026-01-31 08:58:25.520 226833 DEBUG nova.storage.rbd_utils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image aebd9cf4-4275-4a78-95d0-563c83d51201_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:58:25 compute-2 nova_compute[226829]: 2026-01-31 08:58:25.552 226833 DEBUG nova.storage.rbd_utils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image aebd9cf4-4275-4a78-95d0-563c83d51201_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:58:25 compute-2 nova_compute[226829]: 2026-01-31 08:58:25.575 226833 DEBUG nova.storage.rbd_utils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image aebd9cf4-4275-4a78-95d0-563c83d51201_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:58:25 compute-2 nova_compute[226829]: 2026-01-31 08:58:25.579 226833 DEBUG oslo_concurrency.processutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:58:25 compute-2 nova_compute[226829]: 2026-01-31 08:58:25.638 226833 DEBUG oslo_concurrency.processutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:58:25 compute-2 nova_compute[226829]: 2026-01-31 08:58:25.638 226833 DEBUG oslo_concurrency.lockutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:58:25 compute-2 nova_compute[226829]: 2026-01-31 08:58:25.639 226833 DEBUG oslo_concurrency.lockutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:58:25 compute-2 nova_compute[226829]: 2026-01-31 08:58:25.639 226833 DEBUG oslo_concurrency.lockutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:58:25 compute-2 nova_compute[226829]: 2026-01-31 08:58:25.673 226833 DEBUG nova.storage.rbd_utils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image aebd9cf4-4275-4a78-95d0-563c83d51201_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:58:25 compute-2 nova_compute[226829]: 2026-01-31 08:58:25.676 226833 DEBUG oslo_concurrency.processutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 aebd9cf4-4275-4a78-95d0-563c83d51201_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:58:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:58:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:25.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:25 compute-2 nova_compute[226829]: 2026-01-31 08:58:25.925 226833 DEBUG oslo_concurrency.processutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 aebd9cf4-4275-4a78-95d0-563c83d51201_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:58:25 compute-2 nova_compute[226829]: 2026-01-31 08:58:25.985 226833 DEBUG nova.storage.rbd_utils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] resizing rbd image aebd9cf4-4275-4a78-95d0-563c83d51201_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:58:26 compute-2 nova_compute[226829]: 2026-01-31 08:58:26.089 226833 DEBUG nova.objects.instance [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'migration_context' on Instance uuid aebd9cf4-4275-4a78-95d0-563c83d51201 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:58:26 compute-2 ceph-mon[77282]: pgmap v3792: 305 pgs: 305 active+clean; 263 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 148 op/s
Jan 31 08:58:26 compute-2 nova_compute[226829]: 2026-01-31 08:58:26.609 226833 DEBUG nova.virt.libvirt.driver [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:58:26 compute-2 nova_compute[226829]: 2026-01-31 08:58:26.609 226833 DEBUG nova.virt.libvirt.driver [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Ensure instance console log exists: /var/lib/nova/instances/aebd9cf4-4275-4a78-95d0-563c83d51201/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:58:26 compute-2 nova_compute[226829]: 2026-01-31 08:58:26.610 226833 DEBUG oslo_concurrency.lockutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:58:26 compute-2 nova_compute[226829]: 2026-01-31 08:58:26.610 226833 DEBUG oslo_concurrency.lockutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:58:26 compute-2 nova_compute[226829]: 2026-01-31 08:58:26.610 226833 DEBUG oslo_concurrency.lockutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:58:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:27.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:27 compute-2 nova_compute[226829]: 2026-01-31 08:58:27.135 226833 DEBUG nova.network.neutron [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Successfully created port: de390d38-0dff-49b5-9827-253885397034 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:58:27 compute-2 ceph-mon[77282]: pgmap v3793: 305 pgs: 305 active+clean; 287 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.0 MiB/s wr, 251 op/s
Jan 31 08:58:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:58:27 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/51172579' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:58:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:27.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:28 compute-2 nova_compute[226829]: 2026-01-31 08:58:28.198 226833 DEBUG nova.network.neutron [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Successfully updated port: de390d38-0dff-49b5-9827-253885397034 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:58:28 compute-2 nova_compute[226829]: 2026-01-31 08:58:28.430 226833 DEBUG nova.compute.manager [req-e3a78beb-a076-4773-9f3c-b80aa12bd666 req-fa1f909b-3519-4ba5-87ce-9077ce48902c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Received event network-changed-de390d38-0dff-49b5-9827-253885397034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:58:28 compute-2 nova_compute[226829]: 2026-01-31 08:58:28.431 226833 DEBUG nova.compute.manager [req-e3a78beb-a076-4773-9f3c-b80aa12bd666 req-fa1f909b-3519-4ba5-87ce-9077ce48902c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Refreshing instance network info cache due to event network-changed-de390d38-0dff-49b5-9827-253885397034. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:58:28 compute-2 nova_compute[226829]: 2026-01-31 08:58:28.431 226833 DEBUG oslo_concurrency.lockutils [req-e3a78beb-a076-4773-9f3c-b80aa12bd666 req-fa1f909b-3519-4ba5-87ce-9077ce48902c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-aebd9cf4-4275-4a78-95d0-563c83d51201" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:58:28 compute-2 nova_compute[226829]: 2026-01-31 08:58:28.431 226833 DEBUG oslo_concurrency.lockutils [req-e3a78beb-a076-4773-9f3c-b80aa12bd666 req-fa1f909b-3519-4ba5-87ce-9077ce48902c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-aebd9cf4-4275-4a78-95d0-563c83d51201" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:58:28 compute-2 nova_compute[226829]: 2026-01-31 08:58:28.431 226833 DEBUG nova.network.neutron [req-e3a78beb-a076-4773-9f3c-b80aa12bd666 req-fa1f909b-3519-4ba5-87ce-9077ce48902c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Refreshing network info cache for port de390d38-0dff-49b5-9827-253885397034 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:58:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/51172579' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:58:28 compute-2 nova_compute[226829]: 2026-01-31 08:58:28.605 226833 DEBUG oslo_concurrency.lockutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "refresh_cache-aebd9cf4-4275-4a78-95d0-563c83d51201" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:58:29 compute-2 nova_compute[226829]: 2026-01-31 08:58:29.008 226833 DEBUG nova.network.neutron [req-e3a78beb-a076-4773-9f3c-b80aa12bd666 req-fa1f909b-3519-4ba5-87ce-9077ce48902c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:58:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:29.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:29 compute-2 nova_compute[226829]: 2026-01-31 08:58:29.279 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:29 compute-2 ceph-mon[77282]: pgmap v3794: 305 pgs: 305 active+clean; 307 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.4 MiB/s wr, 320 op/s
Jan 31 08:58:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e409 e409: 3 total, 3 up, 3 in
Jan 31 08:58:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:58:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:29.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:58:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e410 e410: 3 total, 3 up, 3 in
Jan 31 08:58:30 compute-2 nova_compute[226829]: 2026-01-31 08:58:30.295 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:30 compute-2 ceph-mon[77282]: osdmap e409: 3 total, 3 up, 3 in
Jan 31 08:58:30 compute-2 ceph-mon[77282]: osdmap e410: 3 total, 3 up, 3 in
Jan 31 08:58:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:58:30 compute-2 nova_compute[226829]: 2026-01-31 08:58:30.849 226833 DEBUG nova.network.neutron [req-e3a78beb-a076-4773-9f3c-b80aa12bd666 req-fa1f909b-3519-4ba5-87ce-9077ce48902c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:58:30 compute-2 nova_compute[226829]: 2026-01-31 08:58:30.964 226833 DEBUG oslo_concurrency.lockutils [req-e3a78beb-a076-4773-9f3c-b80aa12bd666 req-fa1f909b-3519-4ba5-87ce-9077ce48902c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-aebd9cf4-4275-4a78-95d0-563c83d51201" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:58:30 compute-2 nova_compute[226829]: 2026-01-31 08:58:30.965 226833 DEBUG oslo_concurrency.lockutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquired lock "refresh_cache-aebd9cf4-4275-4a78-95d0-563c83d51201" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:58:30 compute-2 nova_compute[226829]: 2026-01-31 08:58:30.965 226833 DEBUG nova.network.neutron [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:58:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:58:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:31.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:58:31 compute-2 nova_compute[226829]: 2026-01-31 08:58:31.322 226833 DEBUG nova.network.neutron [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:58:31 compute-2 ceph-mon[77282]: pgmap v3797: 305 pgs: 305 active+clean; 361 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.2 MiB/s rd, 7.0 MiB/s wr, 377 op/s
Jan 31 08:58:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:31.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:33.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:33 compute-2 nova_compute[226829]: 2026-01-31 08:58:33.269 226833 DEBUG nova.network.neutron [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Updating instance_info_cache with network_info: [{"id": "de390d38-0dff-49b5-9827-253885397034", "address": "fa:16:3e:7b:28:5d", "network": {"id": "5071208b-ad60-4226-80bf-e8d76f33781f", "bridge": "br-int", "label": "tempest-network-smoke--458584935", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde390d38-0d", "ovs_interfaceid": "de390d38-0dff-49b5-9827-253885397034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:58:33 compute-2 nova_compute[226829]: 2026-01-31 08:58:33.304 226833 DEBUG oslo_concurrency.lockutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Releasing lock "refresh_cache-aebd9cf4-4275-4a78-95d0-563c83d51201" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:58:33 compute-2 nova_compute[226829]: 2026-01-31 08:58:33.305 226833 DEBUG nova.compute.manager [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Instance network_info: |[{"id": "de390d38-0dff-49b5-9827-253885397034", "address": "fa:16:3e:7b:28:5d", "network": {"id": "5071208b-ad60-4226-80bf-e8d76f33781f", "bridge": "br-int", "label": "tempest-network-smoke--458584935", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde390d38-0d", "ovs_interfaceid": "de390d38-0dff-49b5-9827-253885397034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:58:33 compute-2 nova_compute[226829]: 2026-01-31 08:58:33.309 226833 DEBUG nova.virt.libvirt.driver [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Start _get_guest_xml network_info=[{"id": "de390d38-0dff-49b5-9827-253885397034", "address": "fa:16:3e:7b:28:5d", "network": {"id": "5071208b-ad60-4226-80bf-e8d76f33781f", "bridge": "br-int", "label": "tempest-network-smoke--458584935", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde390d38-0d", "ovs_interfaceid": "de390d38-0dff-49b5-9827-253885397034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:58:33 compute-2 nova_compute[226829]: 2026-01-31 08:58:33.314 226833 WARNING nova.virt.libvirt.driver [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:58:33 compute-2 nova_compute[226829]: 2026-01-31 08:58:33.320 226833 DEBUG nova.virt.libvirt.host [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:58:33 compute-2 nova_compute[226829]: 2026-01-31 08:58:33.321 226833 DEBUG nova.virt.libvirt.host [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:58:33 compute-2 nova_compute[226829]: 2026-01-31 08:58:33.325 226833 DEBUG nova.virt.libvirt.host [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:58:33 compute-2 nova_compute[226829]: 2026-01-31 08:58:33.326 226833 DEBUG nova.virt.libvirt.host [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:58:33 compute-2 nova_compute[226829]: 2026-01-31 08:58:33.327 226833 DEBUG nova.virt.libvirt.driver [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:58:33 compute-2 nova_compute[226829]: 2026-01-31 08:58:33.327 226833 DEBUG nova.virt.hardware [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:58:33 compute-2 nova_compute[226829]: 2026-01-31 08:58:33.328 226833 DEBUG nova.virt.hardware [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:58:33 compute-2 nova_compute[226829]: 2026-01-31 08:58:33.328 226833 DEBUG nova.virt.hardware [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:58:33 compute-2 nova_compute[226829]: 2026-01-31 08:58:33.328 226833 DEBUG nova.virt.hardware [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:58:33 compute-2 nova_compute[226829]: 2026-01-31 08:58:33.328 226833 DEBUG nova.virt.hardware [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:58:33 compute-2 nova_compute[226829]: 2026-01-31 08:58:33.329 226833 DEBUG nova.virt.hardware [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:58:33 compute-2 nova_compute[226829]: 2026-01-31 08:58:33.329 226833 DEBUG nova.virt.hardware [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:58:33 compute-2 nova_compute[226829]: 2026-01-31 08:58:33.329 226833 DEBUG nova.virt.hardware [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:58:33 compute-2 nova_compute[226829]: 2026-01-31 08:58:33.329 226833 DEBUG nova.virt.hardware [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:58:33 compute-2 nova_compute[226829]: 2026-01-31 08:58:33.329 226833 DEBUG nova.virt.hardware [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:58:33 compute-2 nova_compute[226829]: 2026-01-31 08:58:33.330 226833 DEBUG nova.virt.hardware [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:58:33 compute-2 nova_compute[226829]: 2026-01-31 08:58:33.333 226833 DEBUG oslo_concurrency.processutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:58:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:58:33 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3839769465' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:58:33 compute-2 nova_compute[226829]: 2026-01-31 08:58:33.740 226833 DEBUG oslo_concurrency.processutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:58:33 compute-2 nova_compute[226829]: 2026-01-31 08:58:33.778 226833 DEBUG nova.storage.rbd_utils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image aebd9cf4-4275-4a78-95d0-563c83d51201_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:58:33 compute-2 nova_compute[226829]: 2026-01-31 08:58:33.782 226833 DEBUG oslo_concurrency.processutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:58:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:33.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:58:34 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2286454376' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.194 226833 DEBUG oslo_concurrency.processutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.195 226833 DEBUG nova.virt.libvirt.vif [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:58:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-436061159',display_name='tempest-TestNetworkBasicOps-server-436061159',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-436061159',id=208,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLFBgE9rCyhI2G9hTw2Y2HfNcIglCra8RgRZ9XDcJSsqho061PFmsCvtsFd3fwUZaRvk4TXcFgJ2EWxD7biJI4RzlI8pizWDJLLIHarfRDci6Sl5EYtSZcXnnwx6bJjJow==',key_name='tempest-TestNetworkBasicOps-1874455025',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-uwmy6ihx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:58:25Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=aebd9cf4-4275-4a78-95d0-563c83d51201,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "de390d38-0dff-49b5-9827-253885397034", "address": "fa:16:3e:7b:28:5d", "network": {"id": "5071208b-ad60-4226-80bf-e8d76f33781f", "bridge": "br-int", "label": "tempest-network-smoke--458584935", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde390d38-0d", "ovs_interfaceid": "de390d38-0dff-49b5-9827-253885397034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.196 226833 DEBUG nova.network.os_vif_util [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "de390d38-0dff-49b5-9827-253885397034", "address": "fa:16:3e:7b:28:5d", "network": {"id": "5071208b-ad60-4226-80bf-e8d76f33781f", "bridge": "br-int", "label": "tempest-network-smoke--458584935", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde390d38-0d", "ovs_interfaceid": "de390d38-0dff-49b5-9827-253885397034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.197 226833 DEBUG nova.network.os_vif_util [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:28:5d,bridge_name='br-int',has_traffic_filtering=True,id=de390d38-0dff-49b5-9827-253885397034,network=Network(5071208b-ad60-4226-80bf-e8d76f33781f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapde390d38-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.198 226833 DEBUG nova.objects.instance [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'pci_devices' on Instance uuid aebd9cf4-4275-4a78-95d0-563c83d51201 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.258 226833 DEBUG nova.virt.libvirt.driver [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:58:34 compute-2 nova_compute[226829]:   <uuid>aebd9cf4-4275-4a78-95d0-563c83d51201</uuid>
Jan 31 08:58:34 compute-2 nova_compute[226829]:   <name>instance-000000d0</name>
Jan 31 08:58:34 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 08:58:34 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 08:58:34 compute-2 nova_compute[226829]:   <metadata>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <nova:name>tempest-TestNetworkBasicOps-server-436061159</nova:name>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 08:58:33</nova:creationTime>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 08:58:34 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 08:58:34 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 08:58:34 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 08:58:34 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:58:34 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 08:58:34 compute-2 nova_compute[226829]:         <nova:user uuid="4a56abd8fdd341ae88a99e102ab399de">tempest-TestNetworkBasicOps-1691550221-project-member</nova:user>
Jan 31 08:58:34 compute-2 nova_compute[226829]:         <nova:project uuid="0d55ec1a5544450dba4e4fd1426395d7">tempest-TestNetworkBasicOps-1691550221</nova:project>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 08:58:34 compute-2 nova_compute[226829]:         <nova:port uuid="de390d38-0dff-49b5-9827-253885397034">
Jan 31 08:58:34 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 08:58:34 compute-2 nova_compute[226829]:   </metadata>
Jan 31 08:58:34 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <system>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <entry name="serial">aebd9cf4-4275-4a78-95d0-563c83d51201</entry>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <entry name="uuid">aebd9cf4-4275-4a78-95d0-563c83d51201</entry>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     </system>
Jan 31 08:58:34 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 08:58:34 compute-2 nova_compute[226829]:   <os>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:   </os>
Jan 31 08:58:34 compute-2 nova_compute[226829]:   <features>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <apic/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:   </features>
Jan 31 08:58:34 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:   </clock>
Jan 31 08:58:34 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:   </cpu>
Jan 31 08:58:34 compute-2 nova_compute[226829]:   <devices>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/aebd9cf4-4275-4a78-95d0-563c83d51201_disk">
Jan 31 08:58:34 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       </source>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:58:34 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/aebd9cf4-4275-4a78-95d0-563c83d51201_disk.config">
Jan 31 08:58:34 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       </source>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 08:58:34 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       </auth>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     </disk>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:7b:28:5d"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <target dev="tapde390d38-0d"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     </interface>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/aebd9cf4-4275-4a78-95d0-563c83d51201/console.log" append="off"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     </serial>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <video>
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     </video>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     </rng>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 08:58:34 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 08:58:34 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 08:58:34 compute-2 nova_compute[226829]:   </devices>
Jan 31 08:58:34 compute-2 nova_compute[226829]: </domain>
Jan 31 08:58:34 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.259 226833 DEBUG nova.compute.manager [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Preparing to wait for external event network-vif-plugged-de390d38-0dff-49b5-9827-253885397034 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.259 226833 DEBUG oslo_concurrency.lockutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "aebd9cf4-4275-4a78-95d0-563c83d51201-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.260 226833 DEBUG oslo_concurrency.lockutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "aebd9cf4-4275-4a78-95d0-563c83d51201-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.260 226833 DEBUG oslo_concurrency.lockutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "aebd9cf4-4275-4a78-95d0-563c83d51201-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.261 226833 DEBUG nova.virt.libvirt.vif [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:58:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-436061159',display_name='tempest-TestNetworkBasicOps-server-436061159',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-436061159',id=208,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLFBgE9rCyhI2G9hTw2Y2HfNcIglCra8RgRZ9XDcJSsqho061PFmsCvtsFd3fwUZaRvk4TXcFgJ2EWxD7biJI4RzlI8pizWDJLLIHarfRDci6Sl5EYtSZcXnnwx6bJjJow==',key_name='tempest-TestNetworkBasicOps-1874455025',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-uwmy6ihx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:58:25Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=aebd9cf4-4275-4a78-95d0-563c83d51201,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "de390d38-0dff-49b5-9827-253885397034", "address": "fa:16:3e:7b:28:5d", "network": {"id": "5071208b-ad60-4226-80bf-e8d76f33781f", "bridge": "br-int", "label": "tempest-network-smoke--458584935", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde390d38-0d", "ovs_interfaceid": "de390d38-0dff-49b5-9827-253885397034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.261 226833 DEBUG nova.network.os_vif_util [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "de390d38-0dff-49b5-9827-253885397034", "address": "fa:16:3e:7b:28:5d", "network": {"id": "5071208b-ad60-4226-80bf-e8d76f33781f", "bridge": "br-int", "label": "tempest-network-smoke--458584935", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde390d38-0d", "ovs_interfaceid": "de390d38-0dff-49b5-9827-253885397034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.262 226833 DEBUG nova.network.os_vif_util [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:28:5d,bridge_name='br-int',has_traffic_filtering=True,id=de390d38-0dff-49b5-9827-253885397034,network=Network(5071208b-ad60-4226-80bf-e8d76f33781f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapde390d38-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.262 226833 DEBUG os_vif [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:28:5d,bridge_name='br-int',has_traffic_filtering=True,id=de390d38-0dff-49b5-9827-253885397034,network=Network(5071208b-ad60-4226-80bf-e8d76f33781f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapde390d38-0d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.263 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.263 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.264 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.270 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.271 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapde390d38-0d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.271 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapde390d38-0d, col_values=(('external_ids', {'iface-id': 'de390d38-0dff-49b5-9827-253885397034', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:28:5d', 'vm-uuid': 'aebd9cf4-4275-4a78-95d0-563c83d51201'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.273 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:34 compute-2 NetworkManager[48999]: <info>  [1769849914.2746] manager: (tapde390d38-0d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/407)
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.277 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.279 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.281 226833 INFO os_vif [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:28:5d,bridge_name='br-int',has_traffic_filtering=True,id=de390d38-0dff-49b5-9827-253885397034,network=Network(5071208b-ad60-4226-80bf-e8d76f33781f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapde390d38-0d')
Jan 31 08:58:34 compute-2 ceph-mon[77282]: pgmap v3798: 305 pgs: 305 active+clean; 385 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 6.6 MiB/s wr, 306 op/s
Jan 31 08:58:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3839769465' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:58:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2286454376' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.416 226833 DEBUG nova.virt.libvirt.driver [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.416 226833 DEBUG nova.virt.libvirt.driver [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.416 226833 DEBUG nova.virt.libvirt.driver [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No VIF found with MAC fa:16:3e:7b:28:5d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.417 226833 INFO nova.virt.libvirt.driver [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Using config drive
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.440 226833 DEBUG nova.storage.rbd_utils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image aebd9cf4-4275-4a78-95d0-563c83d51201_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.783 226833 INFO nova.virt.libvirt.driver [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Creating config drive at /var/lib/nova/instances/aebd9cf4-4275-4a78-95d0-563c83d51201/disk.config
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.786 226833 DEBUG oslo_concurrency.processutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aebd9cf4-4275-4a78-95d0-563c83d51201/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp18103yqv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.926 226833 DEBUG oslo_concurrency.processutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aebd9cf4-4275-4a78-95d0-563c83d51201/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp18103yqv" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.950 226833 DEBUG nova.storage.rbd_utils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image aebd9cf4-4275-4a78-95d0-563c83d51201_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:58:34 compute-2 nova_compute[226829]: 2026-01-31 08:58:34.954 226833 DEBUG oslo_concurrency.processutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aebd9cf4-4275-4a78-95d0-563c83d51201/disk.config aebd9cf4-4275-4a78-95d0-563c83d51201_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:58:35 compute-2 nova_compute[226829]: 2026-01-31 08:58:35.097 226833 DEBUG oslo_concurrency.processutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aebd9cf4-4275-4a78-95d0-563c83d51201/disk.config aebd9cf4-4275-4a78-95d0-563c83d51201_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:58:35 compute-2 nova_compute[226829]: 2026-01-31 08:58:35.098 226833 INFO nova.virt.libvirt.driver [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Deleting local config drive /var/lib/nova/instances/aebd9cf4-4275-4a78-95d0-563c83d51201/disk.config because it was imported into RBD.
Jan 31 08:58:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:35 compute-2 kernel: tapde390d38-0d: entered promiscuous mode
Jan 31 08:58:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:35.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:35 compute-2 NetworkManager[48999]: <info>  [1769849915.1459] manager: (tapde390d38-0d): new Tun device (/org/freedesktop/NetworkManager/Devices/408)
Jan 31 08:58:35 compute-2 nova_compute[226829]: 2026-01-31 08:58:35.184 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:58:35 compute-2 nova_compute[226829]: 2026-01-31 08:58:35.184 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:58:35 compute-2 ovn_controller[133834]: 2026-01-31T08:58:35Z|00816|binding|INFO|Claiming lport de390d38-0dff-49b5-9827-253885397034 for this chassis.
Jan 31 08:58:35 compute-2 ovn_controller[133834]: 2026-01-31T08:58:35Z|00817|binding|INFO|de390d38-0dff-49b5-9827-253885397034: Claiming fa:16:3e:7b:28:5d 10.100.0.27
Jan 31 08:58:35 compute-2 systemd-udevd[328673]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:58:35 compute-2 nova_compute[226829]: 2026-01-31 08:58:35.194 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:35 compute-2 nova_compute[226829]: 2026-01-31 08:58:35.196 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:35 compute-2 NetworkManager[48999]: <info>  [1769849915.2048] device (tapde390d38-0d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:58:35 compute-2 NetworkManager[48999]: <info>  [1769849915.2053] device (tapde390d38-0d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:35.203 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:28:5d 10.100.0.27'], port_security=['fa:16:3e:7b:28:5d 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': 'aebd9cf4-4275-4a78-95d0-563c83d51201', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5071208b-ad60-4226-80bf-e8d76f33781f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2dba45ae-fc83-4055-95aa-db3891b7d54e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82a39089-ebf2-4133-93ff-475b6e4ca7f0, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=de390d38-0dff-49b5-9827-253885397034) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:35.205 143841 INFO neutron.agent.ovn.metadata.agent [-] Port de390d38-0dff-49b5-9827-253885397034 in datapath 5071208b-ad60-4226-80bf-e8d76f33781f bound to our chassis
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:35.207 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5071208b-ad60-4226-80bf-e8d76f33781f
Jan 31 08:58:35 compute-2 ovn_controller[133834]: 2026-01-31T08:58:35Z|00818|binding|INFO|Setting lport de390d38-0dff-49b5-9827-253885397034 ovn-installed in OVS
Jan 31 08:58:35 compute-2 ovn_controller[133834]: 2026-01-31T08:58:35Z|00819|binding|INFO|Setting lport de390d38-0dff-49b5-9827-253885397034 up in Southbound
Jan 31 08:58:35 compute-2 nova_compute[226829]: 2026-01-31 08:58:35.211 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:35 compute-2 systemd-machined[195142]: New machine qemu-93-instance-000000d0.
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:35.219 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[234f9d44-5566-40b0-acce-8eae41c2fdb1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:35.220 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5071208b-a1 in ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:35.223 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5071208b-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:35.223 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1a2a67d5-c25b-485e-a9cd-68552a5f32fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:35.224 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c01e9b14-8850-4310-9a33-4d4092e046fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:35 compute-2 systemd[1]: Started Virtual Machine qemu-93-instance-000000d0.
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:35.236 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[497e9c7e-de60-44f1-bf42-8a4b41940f0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:35.248 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[422dd6e6-38ce-4654-8995-74f1963832e4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:35.275 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[564b71c3-d189-4b3b-be1c-609357d29eb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:35.280 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[850d4bd3-fb28-4c36-995b-a879df1852c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:35 compute-2 NetworkManager[48999]: <info>  [1769849915.2823] manager: (tap5071208b-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/409)
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:35.308 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[dc44ded4-3a76-4da0-8615-aa7d8c77bc4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:35.312 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[07ea7dc7-bebe-4065-8e62-b141d83694a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:35 compute-2 NetworkManager[48999]: <info>  [1769849915.3272] device (tap5071208b-a0): carrier: link connected
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:35.329 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc66016-394a-45a3-9f39-f0f1fab74888]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:35.341 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0f2c5384-8b09-47f0-a532-597cd126e648]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5071208b-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:80:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 256], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1016780, 'reachable_time': 42307, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328709, 'error': None, 'target': 'ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:35.351 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5c63980f-bc35-403d-bd95-d85af7555c69]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe93:80c4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1016780, 'tstamp': 1016780}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328710, 'error': None, 'target': 'ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:35.360 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[619d03eb-37a8-42ca-a744-d9c15f01ba2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5071208b-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:80:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 256], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1016780, 'reachable_time': 42307, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 328711, 'error': None, 'target': 'ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:35.376 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b211c5c3-0b5c-4f33-9cd6-3df6f1e44091]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:35.412 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f1c58097-cba8-4422-800c-2dd4eccfad94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:35.414 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5071208b-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:35.414 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:35.414 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5071208b-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:58:35 compute-2 nova_compute[226829]: 2026-01-31 08:58:35.416 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:35 compute-2 kernel: tap5071208b-a0: entered promiscuous mode
Jan 31 08:58:35 compute-2 NetworkManager[48999]: <info>  [1769849915.4168] manager: (tap5071208b-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/410)
Jan 31 08:58:35 compute-2 nova_compute[226829]: 2026-01-31 08:58:35.418 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:35.420 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5071208b-a0, col_values=(('external_ids', {'iface-id': 'fa0a8913-d154-4437-aa1c-6e031382e325'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:58:35 compute-2 nova_compute[226829]: 2026-01-31 08:58:35.422 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:35 compute-2 ovn_controller[133834]: 2026-01-31T08:58:35Z|00820|binding|INFO|Releasing lport fa0a8913-d154-4437-aa1c-6e031382e325 from this chassis (sb_readonly=0)
Jan 31 08:58:35 compute-2 nova_compute[226829]: 2026-01-31 08:58:35.423 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:35.423 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5071208b-ad60-4226-80bf-e8d76f33781f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5071208b-ad60-4226-80bf-e8d76f33781f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:35.424 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c1a42cb3-27bb-4641-9b5c-7d3760e25d4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:35.425 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: global
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-5071208b-ad60-4226-80bf-e8d76f33781f
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/5071208b-ad60-4226-80bf-e8d76f33781f.pid.haproxy
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 5071208b-ad60-4226-80bf-e8d76f33781f
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:58:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:58:35.426 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f', 'env', 'PROCESS_TAG=haproxy-5071208b-ad60-4226-80bf-e8d76f33781f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5071208b-ad60-4226-80bf-e8d76f33781f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:58:35 compute-2 nova_compute[226829]: 2026-01-31 08:58:35.427 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:35 compute-2 nova_compute[226829]: 2026-01-31 08:58:35.572 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769849915.572095, aebd9cf4-4275-4a78-95d0-563c83d51201 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:58:35 compute-2 nova_compute[226829]: 2026-01-31 08:58:35.573 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] VM Started (Lifecycle Event)
Jan 31 08:58:35 compute-2 nova_compute[226829]: 2026-01-31 08:58:35.593 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:58:35 compute-2 nova_compute[226829]: 2026-01-31 08:58:35.597 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769849915.5738149, aebd9cf4-4275-4a78-95d0-563c83d51201 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:58:35 compute-2 nova_compute[226829]: 2026-01-31 08:58:35.598 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] VM Paused (Lifecycle Event)
Jan 31 08:58:35 compute-2 nova_compute[226829]: 2026-01-31 08:58:35.618 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:58:35 compute-2 nova_compute[226829]: 2026-01-31 08:58:35.621 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:58:35 compute-2 nova_compute[226829]: 2026-01-31 08:58:35.652 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:58:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:58:35 compute-2 podman[328785]: 2026-01-31 08:58:35.763401116 +0000 UTC m=+0.056084186 container create 22d8c9dfd4dec853be198ccf72ffe9569a5ebf43ee854aae939766658d63cef9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127)
Jan 31 08:58:35 compute-2 systemd[1]: Started libpod-conmon-22d8c9dfd4dec853be198ccf72ffe9569a5ebf43ee854aae939766658d63cef9.scope.
Jan 31 08:58:35 compute-2 systemd[1]: Started libcrun container.
Jan 31 08:58:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:35.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:35 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c83e8ca456d9d44ead218c571287b68975af3a380b31189e53912f0e5eda25a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:58:35 compute-2 podman[328785]: 2026-01-31 08:58:35.834472909 +0000 UTC m=+0.127155999 container init 22d8c9dfd4dec853be198ccf72ffe9569a5ebf43ee854aae939766658d63cef9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 08:58:35 compute-2 podman[328785]: 2026-01-31 08:58:35.740111052 +0000 UTC m=+0.032794142 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:58:35 compute-2 podman[328785]: 2026-01-31 08:58:35.83823009 +0000 UTC m=+0.130913170 container start 22d8c9dfd4dec853be198ccf72ffe9569a5ebf43ee854aae939766658d63cef9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 08:58:35 compute-2 neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f[328800]: [NOTICE]   (328804) : New worker (328806) forked
Jan 31 08:58:35 compute-2 neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f[328800]: [NOTICE]   (328804) : Loading success.
Jan 31 08:58:36 compute-2 nova_compute[226829]: 2026-01-31 08:58:36.258 226833 DEBUG nova.compute.manager [req-c25a31d1-c9fd-451f-8c41-b70cab9c7e62 req-753e5b1d-f0af-4247-9013-519abdedc5b3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Received event network-vif-plugged-de390d38-0dff-49b5-9827-253885397034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:58:36 compute-2 nova_compute[226829]: 2026-01-31 08:58:36.259 226833 DEBUG oslo_concurrency.lockutils [req-c25a31d1-c9fd-451f-8c41-b70cab9c7e62 req-753e5b1d-f0af-4247-9013-519abdedc5b3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "aebd9cf4-4275-4a78-95d0-563c83d51201-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:58:36 compute-2 nova_compute[226829]: 2026-01-31 08:58:36.260 226833 DEBUG oslo_concurrency.lockutils [req-c25a31d1-c9fd-451f-8c41-b70cab9c7e62 req-753e5b1d-f0af-4247-9013-519abdedc5b3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aebd9cf4-4275-4a78-95d0-563c83d51201-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:58:36 compute-2 nova_compute[226829]: 2026-01-31 08:58:36.260 226833 DEBUG oslo_concurrency.lockutils [req-c25a31d1-c9fd-451f-8c41-b70cab9c7e62 req-753e5b1d-f0af-4247-9013-519abdedc5b3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aebd9cf4-4275-4a78-95d0-563c83d51201-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:58:36 compute-2 nova_compute[226829]: 2026-01-31 08:58:36.260 226833 DEBUG nova.compute.manager [req-c25a31d1-c9fd-451f-8c41-b70cab9c7e62 req-753e5b1d-f0af-4247-9013-519abdedc5b3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Processing event network-vif-plugged-de390d38-0dff-49b5-9827-253885397034 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:58:36 compute-2 nova_compute[226829]: 2026-01-31 08:58:36.261 226833 DEBUG nova.compute.manager [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:58:36 compute-2 nova_compute[226829]: 2026-01-31 08:58:36.265 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769849916.2651942, aebd9cf4-4275-4a78-95d0-563c83d51201 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:58:36 compute-2 nova_compute[226829]: 2026-01-31 08:58:36.265 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] VM Resumed (Lifecycle Event)
Jan 31 08:58:36 compute-2 nova_compute[226829]: 2026-01-31 08:58:36.267 226833 DEBUG nova.virt.libvirt.driver [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:58:36 compute-2 nova_compute[226829]: 2026-01-31 08:58:36.269 226833 INFO nova.virt.libvirt.driver [-] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Instance spawned successfully.
Jan 31 08:58:36 compute-2 nova_compute[226829]: 2026-01-31 08:58:36.269 226833 DEBUG nova.virt.libvirt.driver [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:58:36 compute-2 nova_compute[226829]: 2026-01-31 08:58:36.296 226833 DEBUG nova.virt.libvirt.driver [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:58:36 compute-2 nova_compute[226829]: 2026-01-31 08:58:36.296 226833 DEBUG nova.virt.libvirt.driver [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:58:36 compute-2 nova_compute[226829]: 2026-01-31 08:58:36.296 226833 DEBUG nova.virt.libvirt.driver [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:58:36 compute-2 nova_compute[226829]: 2026-01-31 08:58:36.297 226833 DEBUG nova.virt.libvirt.driver [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:58:36 compute-2 nova_compute[226829]: 2026-01-31 08:58:36.297 226833 DEBUG nova.virt.libvirt.driver [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:58:36 compute-2 nova_compute[226829]: 2026-01-31 08:58:36.297 226833 DEBUG nova.virt.libvirt.driver [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:58:36 compute-2 nova_compute[226829]: 2026-01-31 08:58:36.326 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:58:36 compute-2 nova_compute[226829]: 2026-01-31 08:58:36.330 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:58:36 compute-2 nova_compute[226829]: 2026-01-31 08:58:36.409 226833 INFO nova.compute.manager [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Took 10.92 seconds to spawn the instance on the hypervisor.
Jan 31 08:58:36 compute-2 nova_compute[226829]: 2026-01-31 08:58:36.409 226833 DEBUG nova.compute.manager [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:58:36 compute-2 nova_compute[226829]: 2026-01-31 08:58:36.412 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:58:36 compute-2 ceph-mon[77282]: pgmap v3799: 305 pgs: 305 active+clean; 385 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.7 MiB/s wr, 215 op/s
Jan 31 08:58:36 compute-2 nova_compute[226829]: 2026-01-31 08:58:36.524 226833 INFO nova.compute.manager [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Took 13.10 seconds to build instance.
Jan 31 08:58:36 compute-2 nova_compute[226829]: 2026-01-31 08:58:36.552 226833 DEBUG oslo_concurrency.lockutils [None req-de3b2cc0-0ed6-4c0d-98ac-6370e2de44a1 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "aebd9cf4-4275-4a78-95d0-563c83d51201" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:58:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:58:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:37.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:58:37 compute-2 ceph-mon[77282]: pgmap v3800: 305 pgs: 305 active+clean; 385 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 95 op/s
Jan 31 08:58:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:37.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:38 compute-2 nova_compute[226829]: 2026-01-31 08:58:38.498 226833 DEBUG nova.compute.manager [req-606a4d38-7d55-4e2b-90b3-bd7914ed031a req-9fdb4aad-cc1c-478d-9b80-30eedc591d44 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Received event network-vif-plugged-de390d38-0dff-49b5-9827-253885397034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:58:38 compute-2 nova_compute[226829]: 2026-01-31 08:58:38.499 226833 DEBUG oslo_concurrency.lockutils [req-606a4d38-7d55-4e2b-90b3-bd7914ed031a req-9fdb4aad-cc1c-478d-9b80-30eedc591d44 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "aebd9cf4-4275-4a78-95d0-563c83d51201-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:58:38 compute-2 nova_compute[226829]: 2026-01-31 08:58:38.499 226833 DEBUG oslo_concurrency.lockutils [req-606a4d38-7d55-4e2b-90b3-bd7914ed031a req-9fdb4aad-cc1c-478d-9b80-30eedc591d44 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aebd9cf4-4275-4a78-95d0-563c83d51201-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:58:38 compute-2 nova_compute[226829]: 2026-01-31 08:58:38.499 226833 DEBUG oslo_concurrency.lockutils [req-606a4d38-7d55-4e2b-90b3-bd7914ed031a req-9fdb4aad-cc1c-478d-9b80-30eedc591d44 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aebd9cf4-4275-4a78-95d0-563c83d51201-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:58:38 compute-2 nova_compute[226829]: 2026-01-31 08:58:38.499 226833 DEBUG nova.compute.manager [req-606a4d38-7d55-4e2b-90b3-bd7914ed031a req-9fdb4aad-cc1c-478d-9b80-30eedc591d44 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] No waiting events found dispatching network-vif-plugged-de390d38-0dff-49b5-9827-253885397034 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:58:38 compute-2 nova_compute[226829]: 2026-01-31 08:58:38.500 226833 WARNING nova.compute.manager [req-606a4d38-7d55-4e2b-90b3-bd7914ed031a req-9fdb4aad-cc1c-478d-9b80-30eedc591d44 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Received unexpected event network-vif-plugged-de390d38-0dff-49b5-9827-253885397034 for instance with vm_state active and task_state None.
Jan 31 08:58:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:39.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:39 compute-2 nova_compute[226829]: 2026-01-31 08:58:39.315 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:39 compute-2 ceph-mon[77282]: pgmap v3801: 305 pgs: 305 active+clean; 385 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 97 op/s
Jan 31 08:58:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:58:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:39.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:58:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:58:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:41.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:41.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:42 compute-2 ceph-mon[77282]: pgmap v3802: 305 pgs: 305 active+clean; 385 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.1 MiB/s wr, 116 op/s
Jan 31 08:58:42 compute-2 podman[328819]: 2026-01-31 08:58:42.933758392 +0000 UTC m=+0.069163862 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:58:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:43.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2012726985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:58:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:43.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:44 compute-2 nova_compute[226829]: 2026-01-31 08:58:44.315 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:44 compute-2 nova_compute[226829]: 2026-01-31 08:58:44.318 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:44 compute-2 ceph-mon[77282]: pgmap v3803: 305 pgs: 305 active+clean; 385 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 793 KiB/s wr, 90 op/s
Jan 31 08:58:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3860137983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:58:44 compute-2 sudo[328845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:58:44 compute-2 sudo[328845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:44 compute-2 sudo[328845]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:44 compute-2 sudo[328870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:58:44 compute-2 sudo[328870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:44 compute-2 sudo[328870]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:45.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:58:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:45.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:46 compute-2 ceph-mon[77282]: pgmap v3804: 305 pgs: 305 active+clean; 385 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 79 op/s
Jan 31 08:58:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:47.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:47.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:48 compute-2 ceph-mon[77282]: pgmap v3805: 305 pgs: 305 active+clean; 385 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Jan 31 08:58:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1254434377' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:58:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:49.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:49 compute-2 podman[328898]: 2026-01-31 08:58:49.183916013 +0000 UTC m=+0.047310947 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 08:58:49 compute-2 nova_compute[226829]: 2026-01-31 08:58:49.317 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:49 compute-2 ovn_controller[133834]: 2026-01-31T08:58:49Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7b:28:5d 10.100.0.27
Jan 31 08:58:49 compute-2 ovn_controller[133834]: 2026-01-31T08:58:49Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7b:28:5d 10.100.0.27
Jan 31 08:58:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:49.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:50 compute-2 ceph-mon[77282]: pgmap v3806: 305 pgs: 305 active+clean; 386 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 205 KiB/s wr, 73 op/s
Jan 31 08:58:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:58:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:51.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:51 compute-2 ceph-mon[77282]: pgmap v3807: 305 pgs: 305 active+clean; 414 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.9 MiB/s wr, 124 op/s
Jan 31 08:58:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:51.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:53.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:58:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/602051847' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:58:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:58:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/602051847' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:58:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:58:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:53.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:58:54 compute-2 nova_compute[226829]: 2026-01-31 08:58:54.319 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:54 compute-2 ceph-mon[77282]: pgmap v3808: 305 pgs: 305 active+clean; 418 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.2 MiB/s wr, 107 op/s
Jan 31 08:58:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/602051847' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:58:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/602051847' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:58:54 compute-2 sudo[328921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:58:54 compute-2 sudo[328921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:54 compute-2 sudo[328921]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:54 compute-2 sudo[328946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:58:54 compute-2 sudo[328946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:54 compute-2 sudo[328946]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:54 compute-2 sudo[328971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:58:54 compute-2 sudo[328971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:54 compute-2 sudo[328971]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:54 compute-2 sudo[328996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 08:58:54 compute-2 sudo[328996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:54 compute-2 podman[329090]: 2026-01-31 08:58:54.965250958 +0000 UTC m=+0.051904872 container exec 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:58:55 compute-2 podman[329111]: 2026-01-31 08:58:55.113192201 +0000 UTC m=+0.054095432 container exec_died 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:58:55 compute-2 podman[329090]: 2026-01-31 08:58:55.118830164 +0000 UTC m=+0.205484068 container exec_died 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 08:58:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:58:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:55.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:58:55 compute-2 podman[329246]: 2026-01-31 08:58:55.564602286 +0000 UTC m=+0.044930452 container exec f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 08:58:55 compute-2 podman[329246]: 2026-01-31 08:58:55.573681883 +0000 UTC m=+0.054010049 container exec_died f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 08:58:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:58:55 compute-2 podman[329316]: 2026-01-31 08:58:55.729648515 +0000 UTC m=+0.047893224 container exec 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.openshift.expose-services=, release=1793, version=2.2.4, distribution-scope=public, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 31 08:58:55 compute-2 podman[329316]: 2026-01-31 08:58:55.741569008 +0000 UTC m=+0.059813707 container exec_died 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.openshift.expose-services=, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived)
Jan 31 08:58:55 compute-2 sudo[328996]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:55.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:55 compute-2 sudo[329346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:58:55 compute-2 sudo[329346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:55 compute-2 sudo[329346]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:55 compute-2 sudo[329371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:58:55 compute-2 sudo[329371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:55 compute-2 sudo[329371]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:55 compute-2 sudo[329396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:58:55 compute-2 sudo[329396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:55 compute-2 sudo[329396]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:55 compute-2 sudo[329421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:58:55 compute-2 sudo[329421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:56 compute-2 sudo[329421]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:56 compute-2 ceph-mon[77282]: pgmap v3809: 305 pgs: 305 active+clean; 418 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 139 op/s
Jan 31 08:58:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:58:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:58:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:58:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:58:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:58:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:58:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:57.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:58:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:58:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:57.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:58 compute-2 ceph-mon[77282]: pgmap v3810: 305 pgs: 305 active+clean; 418 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 139 op/s
Jan 31 08:58:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:59.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:59 compute-2 nova_compute[226829]: 2026-01-31 08:58:59.320 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:58:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:59.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:00 compute-2 ceph-mon[77282]: pgmap v3811: 305 pgs: 305 active+clean; 418 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 139 op/s
Jan 31 08:59:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:59:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:01.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:01.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:02 compute-2 sudo[329482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:59:02 compute-2 sudo[329482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:59:02 compute-2 sudo[329482]: pam_unix(sudo:session): session closed for user root
Jan 31 08:59:02 compute-2 sudo[329507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:59:02 compute-2 sudo[329507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:59:02 compute-2 sudo[329507]: pam_unix(sudo:session): session closed for user root
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.284 226833 DEBUG oslo_concurrency.lockutils [None req-bd64a91f-787a-48f1-87c0-ab321f5809b7 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "aebd9cf4-4275-4a78-95d0-563c83d51201" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.284 226833 DEBUG oslo_concurrency.lockutils [None req-bd64a91f-787a-48f1-87c0-ab321f5809b7 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "aebd9cf4-4275-4a78-95d0-563c83d51201" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.285 226833 DEBUG oslo_concurrency.lockutils [None req-bd64a91f-787a-48f1-87c0-ab321f5809b7 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "aebd9cf4-4275-4a78-95d0-563c83d51201-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.285 226833 DEBUG oslo_concurrency.lockutils [None req-bd64a91f-787a-48f1-87c0-ab321f5809b7 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "aebd9cf4-4275-4a78-95d0-563c83d51201-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.285 226833 DEBUG oslo_concurrency.lockutils [None req-bd64a91f-787a-48f1-87c0-ab321f5809b7 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "aebd9cf4-4275-4a78-95d0-563c83d51201-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.287 226833 INFO nova.compute.manager [None req-bd64a91f-787a-48f1-87c0-ab321f5809b7 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Terminating instance
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.288 226833 DEBUG nova.compute.manager [None req-bd64a91f-787a-48f1-87c0-ab321f5809b7 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:59:02 compute-2 kernel: tapde390d38-0d (unregistering): left promiscuous mode
Jan 31 08:59:02 compute-2 NetworkManager[48999]: <info>  [1769849942.3368] device (tapde390d38-0d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.342 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:02 compute-2 ovn_controller[133834]: 2026-01-31T08:59:02Z|00821|binding|INFO|Releasing lport de390d38-0dff-49b5-9827-253885397034 from this chassis (sb_readonly=0)
Jan 31 08:59:02 compute-2 ovn_controller[133834]: 2026-01-31T08:59:02Z|00822|binding|INFO|Setting lport de390d38-0dff-49b5-9827-253885397034 down in Southbound
Jan 31 08:59:02 compute-2 ovn_controller[133834]: 2026-01-31T08:59:02Z|00823|binding|INFO|Removing iface tapde390d38-0d ovn-installed in OVS
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.352 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:02 compute-2 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000d0.scope: Deactivated successfully.
Jan 31 08:59:02 compute-2 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000d0.scope: Consumed 12.930s CPU time.
Jan 31 08:59:02 compute-2 systemd-machined[195142]: Machine qemu-93-instance-000000d0 terminated.
Jan 31 08:59:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:59:02.382 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:28:5d 10.100.0.27'], port_security=['fa:16:3e:7b:28:5d 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': 'aebd9cf4-4275-4a78-95d0-563c83d51201', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5071208b-ad60-4226-80bf-e8d76f33781f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2dba45ae-fc83-4055-95aa-db3891b7d54e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82a39089-ebf2-4133-93ff-475b6e4ca7f0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=de390d38-0dff-49b5-9827-253885397034) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:59:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:59:02.385 143841 INFO neutron.agent.ovn.metadata.agent [-] Port de390d38-0dff-49b5-9827-253885397034 in datapath 5071208b-ad60-4226-80bf-e8d76f33781f unbound from our chassis
Jan 31 08:59:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:59:02.387 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5071208b-ad60-4226-80bf-e8d76f33781f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:59:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:59:02.389 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e49b2029-47db-4dff-b857-fe3ceef85142]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:59:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:59:02.390 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f namespace which is not needed anymore
Jan 31 08:59:02 compute-2 ceph-mon[77282]: pgmap v3812: 305 pgs: 305 active+clean; 418 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 137 op/s
Jan 31 08:59:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:59:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.504 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:02 compute-2 neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f[328800]: [NOTICE]   (328804) : haproxy version is 2.8.14-c23fe91
Jan 31 08:59:02 compute-2 neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f[328800]: [NOTICE]   (328804) : path to executable is /usr/sbin/haproxy
Jan 31 08:59:02 compute-2 neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f[328800]: [WARNING]  (328804) : Exiting Master process...
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.511 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:02 compute-2 neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f[328800]: [ALERT]    (328804) : Current worker (328806) exited with code 143 (Terminated)
Jan 31 08:59:02 compute-2 neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f[328800]: [WARNING]  (328804) : All workers exited. Exiting... (0)
Jan 31 08:59:02 compute-2 systemd[1]: libpod-22d8c9dfd4dec853be198ccf72ffe9569a5ebf43ee854aae939766658d63cef9.scope: Deactivated successfully.
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.518 226833 INFO nova.virt.libvirt.driver [-] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Instance destroyed successfully.
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.518 226833 DEBUG nova.objects.instance [None req-bd64a91f-787a-48f1-87c0-ab321f5809b7 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'resources' on Instance uuid aebd9cf4-4275-4a78-95d0-563c83d51201 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:59:02 compute-2 podman[329557]: 2026-01-31 08:59:02.521385394 +0000 UTC m=+0.044314485 container died 22d8c9dfd4dec853be198ccf72ffe9569a5ebf43ee854aae939766658d63cef9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.541 226833 DEBUG nova.virt.libvirt.vif [None req-bd64a91f-787a-48f1-87c0-ab321f5809b7 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:58:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-436061159',display_name='tempest-TestNetworkBasicOps-server-436061159',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-436061159',id=208,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLFBgE9rCyhI2G9hTw2Y2HfNcIglCra8RgRZ9XDcJSsqho061PFmsCvtsFd3fwUZaRvk4TXcFgJ2EWxD7biJI4RzlI8pizWDJLLIHarfRDci6Sl5EYtSZcXnnwx6bJjJow==',key_name='tempest-TestNetworkBasicOps-1874455025',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:58:36Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-uwmy6ihx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:58:36Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=aebd9cf4-4275-4a78-95d0-563c83d51201,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "de390d38-0dff-49b5-9827-253885397034", "address": "fa:16:3e:7b:28:5d", "network": {"id": "5071208b-ad60-4226-80bf-e8d76f33781f", "bridge": "br-int", "label": "tempest-network-smoke--458584935", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde390d38-0d", "ovs_interfaceid": "de390d38-0dff-49b5-9827-253885397034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.541 226833 DEBUG nova.network.os_vif_util [None req-bd64a91f-787a-48f1-87c0-ab321f5809b7 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "de390d38-0dff-49b5-9827-253885397034", "address": "fa:16:3e:7b:28:5d", "network": {"id": "5071208b-ad60-4226-80bf-e8d76f33781f", "bridge": "br-int", "label": "tempest-network-smoke--458584935", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde390d38-0d", "ovs_interfaceid": "de390d38-0dff-49b5-9827-253885397034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.542 226833 DEBUG nova.network.os_vif_util [None req-bd64a91f-787a-48f1-87c0-ab321f5809b7 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:28:5d,bridge_name='br-int',has_traffic_filtering=True,id=de390d38-0dff-49b5-9827-253885397034,network=Network(5071208b-ad60-4226-80bf-e8d76f33781f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapde390d38-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.542 226833 DEBUG os_vif [None req-bd64a91f-787a-48f1-87c0-ab321f5809b7 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:28:5d,bridge_name='br-int',has_traffic_filtering=True,id=de390d38-0dff-49b5-9827-253885397034,network=Network(5071208b-ad60-4226-80bf-e8d76f33781f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapde390d38-0d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.545 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.545 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde390d38-0d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:59:02 compute-2 systemd[1]: var-lib-containers-storage-overlay-9c83e8ca456d9d44ead218c571287b68975af3a380b31189e53912f0e5eda25a-merged.mount: Deactivated successfully.
Jan 31 08:59:02 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-22d8c9dfd4dec853be198ccf72ffe9569a5ebf43ee854aae939766658d63cef9-userdata-shm.mount: Deactivated successfully.
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.546 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.550 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.552 226833 INFO os_vif [None req-bd64a91f-787a-48f1-87c0-ab321f5809b7 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:28:5d,bridge_name='br-int',has_traffic_filtering=True,id=de390d38-0dff-49b5-9827-253885397034,network=Network(5071208b-ad60-4226-80bf-e8d76f33781f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapde390d38-0d')
Jan 31 08:59:02 compute-2 podman[329557]: 2026-01-31 08:59:02.559775528 +0000 UTC m=+0.082704619 container cleanup 22d8c9dfd4dec853be198ccf72ffe9569a5ebf43ee854aae939766658d63cef9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 08:59:02 compute-2 systemd[1]: libpod-conmon-22d8c9dfd4dec853be198ccf72ffe9569a5ebf43ee854aae939766658d63cef9.scope: Deactivated successfully.
Jan 31 08:59:02 compute-2 podman[329604]: 2026-01-31 08:59:02.609549242 +0000 UTC m=+0.035769624 container remove 22d8c9dfd4dec853be198ccf72ffe9569a5ebf43ee854aae939766658d63cef9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:59:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:59:02.613 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[244150e2-57ed-4345-9965-39e109f0c145]: (4, ('Sat Jan 31 08:59:02 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f (22d8c9dfd4dec853be198ccf72ffe9569a5ebf43ee854aae939766658d63cef9)\n22d8c9dfd4dec853be198ccf72ffe9569a5ebf43ee854aae939766658d63cef9\nSat Jan 31 08:59:02 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f (22d8c9dfd4dec853be198ccf72ffe9569a5ebf43ee854aae939766658d63cef9)\n22d8c9dfd4dec853be198ccf72ffe9569a5ebf43ee854aae939766658d63cef9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:59:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:59:02.614 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3c8d7e5e-2682-4aac-bda2-d3703799dfe3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:59:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:59:02.615 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5071208b-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.657 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:02 compute-2 kernel: tap5071208b-a0: left promiscuous mode
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.663 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.664 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:59:02.666 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[c4598931-ca7b-4d76-b160-4ba5f8063734]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:59:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:59:02.682 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0d4e7b80-a4e3-467e-8dc6-4abed6480a23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:59:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:59:02.684 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e4e2036e-e709-490c-9d40-ebcaf1531193]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:59:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:59:02.697 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[96755fc8-6de6-4050-9156-5ff5d33bb937]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1016774, 'reachable_time': 24947, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329630, 'error': None, 'target': 'ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:59:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:59:02.699 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:59:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:59:02.700 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[41f7f855-940c-4e54-b8e7-c004ab105586]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:59:02 compute-2 systemd[1]: run-netns-ovnmeta\x2d5071208b\x2dad60\x2d4226\x2d80bf\x2de8d76f33781f.mount: Deactivated successfully.
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.756 226833 DEBUG nova.compute.manager [req-d583285f-2e2e-4e8d-89ae-66efcc3f8779 req-209d4421-a64a-43c8-83f9-364d1451ee2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Received event network-vif-unplugged-de390d38-0dff-49b5-9827-253885397034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.757 226833 DEBUG oslo_concurrency.lockutils [req-d583285f-2e2e-4e8d-89ae-66efcc3f8779 req-209d4421-a64a-43c8-83f9-364d1451ee2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "aebd9cf4-4275-4a78-95d0-563c83d51201-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.757 226833 DEBUG oslo_concurrency.lockutils [req-d583285f-2e2e-4e8d-89ae-66efcc3f8779 req-209d4421-a64a-43c8-83f9-364d1451ee2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aebd9cf4-4275-4a78-95d0-563c83d51201-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.757 226833 DEBUG oslo_concurrency.lockutils [req-d583285f-2e2e-4e8d-89ae-66efcc3f8779 req-209d4421-a64a-43c8-83f9-364d1451ee2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aebd9cf4-4275-4a78-95d0-563c83d51201-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.758 226833 DEBUG nova.compute.manager [req-d583285f-2e2e-4e8d-89ae-66efcc3f8779 req-209d4421-a64a-43c8-83f9-364d1451ee2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] No waiting events found dispatching network-vif-unplugged-de390d38-0dff-49b5-9827-253885397034 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.758 226833 DEBUG nova.compute.manager [req-d583285f-2e2e-4e8d-89ae-66efcc3f8779 req-209d4421-a64a-43c8-83f9-364d1451ee2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Received event network-vif-unplugged-de390d38-0dff-49b5-9827-253885397034 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.977 226833 INFO nova.virt.libvirt.driver [None req-bd64a91f-787a-48f1-87c0-ab321f5809b7 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Deleting instance files /var/lib/nova/instances/aebd9cf4-4275-4a78-95d0-563c83d51201_del
Jan 31 08:59:02 compute-2 nova_compute[226829]: 2026-01-31 08:59:02.978 226833 INFO nova.virt.libvirt.driver [None req-bd64a91f-787a-48f1-87c0-ab321f5809b7 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Deletion of /var/lib/nova/instances/aebd9cf4-4275-4a78-95d0-563c83d51201_del complete
Jan 31 08:59:03 compute-2 nova_compute[226829]: 2026-01-31 08:59:03.077 226833 INFO nova.compute.manager [None req-bd64a91f-787a-48f1-87c0-ab321f5809b7 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Took 0.79 seconds to destroy the instance on the hypervisor.
Jan 31 08:59:03 compute-2 nova_compute[226829]: 2026-01-31 08:59:03.077 226833 DEBUG oslo.service.loopingcall [None req-bd64a91f-787a-48f1-87c0-ab321f5809b7 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:59:03 compute-2 nova_compute[226829]: 2026-01-31 08:59:03.077 226833 DEBUG nova.compute.manager [-] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:59:03 compute-2 nova_compute[226829]: 2026-01-31 08:59:03.078 226833 DEBUG nova.network.neutron [-] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:59:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:03.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:03.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:04 compute-2 nova_compute[226829]: 2026-01-31 08:59:04.229 226833 DEBUG nova.network.neutron [-] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:59:04 compute-2 nova_compute[226829]: 2026-01-31 08:59:04.243 226833 DEBUG nova.compute.manager [req-e79abd23-e385-46d5-848f-a6f03e3deb51 req-6634febc-196a-45f6-99c4-e3a3df32424e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Received event network-vif-deleted-de390d38-0dff-49b5-9827-253885397034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:59:04 compute-2 nova_compute[226829]: 2026-01-31 08:59:04.243 226833 INFO nova.compute.manager [req-e79abd23-e385-46d5-848f-a6f03e3deb51 req-6634febc-196a-45f6-99c4-e3a3df32424e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Neutron deleted interface de390d38-0dff-49b5-9827-253885397034; detaching it from the instance and deleting it from the info cache
Jan 31 08:59:04 compute-2 nova_compute[226829]: 2026-01-31 08:59:04.243 226833 DEBUG nova.network.neutron [req-e79abd23-e385-46d5-848f-a6f03e3deb51 req-6634febc-196a-45f6-99c4-e3a3df32424e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:59:04 compute-2 nova_compute[226829]: 2026-01-31 08:59:04.322 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:04 compute-2 nova_compute[226829]: 2026-01-31 08:59:04.430 226833 INFO nova.compute.manager [-] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Took 1.35 seconds to deallocate network for instance.
Jan 31 08:59:04 compute-2 nova_compute[226829]: 2026-01-31 08:59:04.436 226833 DEBUG nova.compute.manager [req-e79abd23-e385-46d5-848f-a6f03e3deb51 req-6634febc-196a-45f6-99c4-e3a3df32424e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Detach interface failed, port_id=de390d38-0dff-49b5-9827-253885397034, reason: Instance aebd9cf4-4275-4a78-95d0-563c83d51201 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:59:04 compute-2 nova_compute[226829]: 2026-01-31 08:59:04.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:59:04 compute-2 ceph-mon[77282]: pgmap v3813: 305 pgs: 305 active+clean; 432 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 86 op/s
Jan 31 08:59:04 compute-2 sudo[329633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:59:04 compute-2 sudo[329633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:59:04 compute-2 sudo[329633]: pam_unix(sudo:session): session closed for user root
Jan 31 08:59:04 compute-2 sudo[329658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:59:04 compute-2 sudo[329658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:59:04 compute-2 sudo[329658]: pam_unix(sudo:session): session closed for user root
Jan 31 08:59:04 compute-2 nova_compute[226829]: 2026-01-31 08:59:04.892 226833 DEBUG oslo_concurrency.lockutils [None req-bd64a91f-787a-48f1-87c0-ab321f5809b7 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:59:04 compute-2 nova_compute[226829]: 2026-01-31 08:59:04.893 226833 DEBUG oslo_concurrency.lockutils [None req-bd64a91f-787a-48f1-87c0-ab321f5809b7 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:59:04 compute-2 nova_compute[226829]: 2026-01-31 08:59:04.895 226833 DEBUG nova.compute.manager [req-7be998db-b1b6-432f-8c3b-301a1d0ab1ca req-b21f0203-e939-4f8b-b34a-5400817c8416 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Received event network-vif-plugged-de390d38-0dff-49b5-9827-253885397034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:59:04 compute-2 nova_compute[226829]: 2026-01-31 08:59:04.895 226833 DEBUG oslo_concurrency.lockutils [req-7be998db-b1b6-432f-8c3b-301a1d0ab1ca req-b21f0203-e939-4f8b-b34a-5400817c8416 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "aebd9cf4-4275-4a78-95d0-563c83d51201-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:59:04 compute-2 nova_compute[226829]: 2026-01-31 08:59:04.896 226833 DEBUG oslo_concurrency.lockutils [req-7be998db-b1b6-432f-8c3b-301a1d0ab1ca req-b21f0203-e939-4f8b-b34a-5400817c8416 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aebd9cf4-4275-4a78-95d0-563c83d51201-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:59:04 compute-2 nova_compute[226829]: 2026-01-31 08:59:04.896 226833 DEBUG oslo_concurrency.lockutils [req-7be998db-b1b6-432f-8c3b-301a1d0ab1ca req-b21f0203-e939-4f8b-b34a-5400817c8416 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aebd9cf4-4275-4a78-95d0-563c83d51201-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:59:04 compute-2 nova_compute[226829]: 2026-01-31 08:59:04.896 226833 DEBUG nova.compute.manager [req-7be998db-b1b6-432f-8c3b-301a1d0ab1ca req-b21f0203-e939-4f8b-b34a-5400817c8416 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] No waiting events found dispatching network-vif-plugged-de390d38-0dff-49b5-9827-253885397034 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:59:04 compute-2 nova_compute[226829]: 2026-01-31 08:59:04.897 226833 WARNING nova.compute.manager [req-7be998db-b1b6-432f-8c3b-301a1d0ab1ca req-b21f0203-e939-4f8b-b34a-5400817c8416 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Received unexpected event network-vif-plugged-de390d38-0dff-49b5-9827-253885397034 for instance with vm_state deleted and task_state None.
Jan 31 08:59:04 compute-2 nova_compute[226829]: 2026-01-31 08:59:04.946 226833 DEBUG oslo_concurrency.processutils [None req-bd64a91f-787a-48f1-87c0-ab321f5809b7 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:59:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:59:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:05.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:59:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:59:05 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1734990595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:59:05 compute-2 nova_compute[226829]: 2026-01-31 08:59:05.370 226833 DEBUG oslo_concurrency.processutils [None req-bd64a91f-787a-48f1-87c0-ab321f5809b7 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:59:05 compute-2 nova_compute[226829]: 2026-01-31 08:59:05.377 226833 DEBUG nova.compute.provider_tree [None req-bd64a91f-787a-48f1-87c0-ab321f5809b7 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:59:05 compute-2 nova_compute[226829]: 2026-01-31 08:59:05.476 226833 DEBUG nova.scheduler.client.report [None req-bd64a91f-787a-48f1-87c0-ab321f5809b7 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:59:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1734990595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:59:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:59:05 compute-2 nova_compute[226829]: 2026-01-31 08:59:05.715 226833 DEBUG oslo_concurrency.lockutils [None req-bd64a91f-787a-48f1-87c0-ab321f5809b7 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:59:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:05.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:06 compute-2 ceph-mon[77282]: pgmap v3814: 305 pgs: 305 active+clean; 422 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.9 MiB/s wr, 106 op/s
Jan 31 08:59:06 compute-2 nova_compute[226829]: 2026-01-31 08:59:06.856 226833 INFO nova.scheduler.client.report [None req-bd64a91f-787a-48f1-87c0-ab321f5809b7 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Deleted allocations for instance aebd9cf4-4275-4a78-95d0-563c83d51201
Jan 31 08:59:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:59:06.939 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:59:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:59:06.939 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:59:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:59:06.939 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:59:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:59:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:07.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:59:07 compute-2 nova_compute[226829]: 2026-01-31 08:59:07.547 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:07 compute-2 nova_compute[226829]: 2026-01-31 08:59:07.571 226833 DEBUG oslo_concurrency.lockutils [None req-bd64a91f-787a-48f1-87c0-ab321f5809b7 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "aebd9cf4-4275-4a78-95d0-563c83d51201" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.287s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:59:07 compute-2 ceph-mon[77282]: pgmap v3815: 305 pgs: 305 active+clean; 388 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Jan 31 08:59:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:07.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:09.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:09 compute-2 nova_compute[226829]: 2026-01-31 08:59:09.324 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:59:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:09.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:59:10 compute-2 ceph-mon[77282]: pgmap v3816: 305 pgs: 305 active+clean; 381 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Jan 31 08:59:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:59:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:11.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:11 compute-2 nova_compute[226829]: 2026-01-31 08:59:11.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:59:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:11.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:11 compute-2 ceph-mon[77282]: pgmap v3817: 305 pgs: 305 active+clean; 381 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Jan 31 08:59:12 compute-2 nova_compute[226829]: 2026-01-31 08:59:12.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:59:12 compute-2 nova_compute[226829]: 2026-01-31 08:59:12.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:59:12 compute-2 nova_compute[226829]: 2026-01-31 08:59:12.549 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:13 compute-2 podman[329710]: 2026-01-31 08:59:13.188409446 +0000 UTC m=+0.076742107 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 08:59:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:13.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:13.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:14 compute-2 nova_compute[226829]: 2026-01-31 08:59:14.326 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:14 compute-2 ceph-mon[77282]: pgmap v3818: 305 pgs: 305 active+clean; 381 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Jan 31 08:59:14 compute-2 nova_compute[226829]: 2026-01-31 08:59:14.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:59:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:15.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:15 compute-2 nova_compute[226829]: 2026-01-31 08:59:15.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:59:15 compute-2 nova_compute[226829]: 2026-01-31 08:59:15.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:59:15 compute-2 nova_compute[226829]: 2026-01-31 08:59:15.490 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:59:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:59:15 compute-2 nova_compute[226829]: 2026-01-31 08:59:15.769 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:59:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:15.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:59:16.002 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=99, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=98) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:59:16 compute-2 nova_compute[226829]: 2026-01-31 08:59:16.002 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:59:16.003 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:59:16 compute-2 ceph-mon[77282]: pgmap v3819: 305 pgs: 305 active+clean; 381 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 220 KiB/s rd, 1.1 MiB/s wr, 85 op/s
Jan 31 08:59:16 compute-2 nova_compute[226829]: 2026-01-31 08:59:16.619 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:17.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1437794474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:59:17 compute-2 nova_compute[226829]: 2026-01-31 08:59:17.516 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849942.51551, aebd9cf4-4275-4a78-95d0-563c83d51201 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:59:17 compute-2 nova_compute[226829]: 2026-01-31 08:59:17.517 226833 INFO nova.compute.manager [-] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] VM Stopped (Lifecycle Event)
Jan 31 08:59:17 compute-2 nova_compute[226829]: 2026-01-31 08:59:17.551 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:17 compute-2 nova_compute[226829]: 2026-01-31 08:59:17.571 226833 DEBUG nova.compute.manager [None req-802cceb7-efd0-4da2-84a6-f79b5baaa8f3 - - - - - -] [instance: aebd9cf4-4275-4a78-95d0-563c83d51201] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:59:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:59:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:17.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:59:18 compute-2 ovn_metadata_agent[143834]: 2026-01-31 08:59:18.005 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '99'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:59:18 compute-2 ceph-mon[77282]: pgmap v3820: 305 pgs: 305 active+clean; 381 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 68 KiB/s rd, 290 KiB/s wr, 39 op/s
Jan 31 08:59:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/929412592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:59:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1772980989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:59:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:19.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:19 compute-2 nova_compute[226829]: 2026-01-31 08:59:19.327 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:19 compute-2 ceph-mon[77282]: pgmap v3821: 305 pgs: 305 active+clean; 381 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 57 KiB/s rd, 88 KiB/s wr, 21 op/s
Jan 31 08:59:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:19.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:20 compute-2 podman[329741]: 2026-01-31 08:59:20.171773737 +0000 UTC m=+0.050192236 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 08:59:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:59:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:59:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:21.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:59:21 compute-2 nova_compute[226829]: 2026-01-31 08:59:21.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:59:21 compute-2 nova_compute[226829]: 2026-01-31 08:59:21.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:59:21 compute-2 nova_compute[226829]: 2026-01-31 08:59:21.804 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:59:21 compute-2 nova_compute[226829]: 2026-01-31 08:59:21.804 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:59:21 compute-2 nova_compute[226829]: 2026-01-31 08:59:21.805 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:59:21 compute-2 nova_compute[226829]: 2026-01-31 08:59:21.805 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:59:21 compute-2 nova_compute[226829]: 2026-01-31 08:59:21.806 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:59:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:59:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:21.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:59:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:59:22 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/419757811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:59:22 compute-2 nova_compute[226829]: 2026-01-31 08:59:22.268 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:59:22 compute-2 nova_compute[226829]: 2026-01-31 08:59:22.401 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:59:22 compute-2 nova_compute[226829]: 2026-01-31 08:59:22.402 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4034MB free_disk=20.9776611328125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:59:22 compute-2 nova_compute[226829]: 2026-01-31 08:59:22.402 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:59:22 compute-2 nova_compute[226829]: 2026-01-31 08:59:22.402 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:59:22 compute-2 nova_compute[226829]: 2026-01-31 08:59:22.553 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:22 compute-2 ceph-mon[77282]: pgmap v3822: 305 pgs: 305 active+clean; 317 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 15 KiB/s wr, 30 op/s
Jan 31 08:59:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/419757811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:59:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:23.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/525278072' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:59:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:23.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:24 compute-2 nova_compute[226829]: 2026-01-31 08:59:24.328 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:24 compute-2 nova_compute[226829]: 2026-01-31 08:59:24.450 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:59:24 compute-2 nova_compute[226829]: 2026-01-31 08:59:24.450 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:59:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e411 e411: 3 total, 3 up, 3 in
Jan 31 08:59:24 compute-2 ceph-mon[77282]: pgmap v3823: 305 pgs: 305 active+clean; 302 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 13 KiB/s wr, 41 op/s
Jan 31 08:59:24 compute-2 sudo[329787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:59:24 compute-2 sudo[329787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:59:24 compute-2 sudo[329787]: pam_unix(sudo:session): session closed for user root
Jan 31 08:59:24 compute-2 sudo[329812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:59:24 compute-2 sudo[329812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:59:24 compute-2 sudo[329812]: pam_unix(sudo:session): session closed for user root
Jan 31 08:59:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:59:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:25.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:59:25 compute-2 ceph-mon[77282]: osdmap e411: 3 total, 3 up, 3 in
Jan 31 08:59:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:59:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:25.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:26 compute-2 nova_compute[226829]: 2026-01-31 08:59:26.006 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing inventories for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:59:26 compute-2 nova_compute[226829]: 2026-01-31 08:59:26.027 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating ProviderTree inventory for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:59:26 compute-2 nova_compute[226829]: 2026-01-31 08:59:26.027 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating inventory in ProviderTree for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:59:26 compute-2 nova_compute[226829]: 2026-01-31 08:59:26.041 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing aggregate associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:59:26 compute-2 nova_compute[226829]: 2026-01-31 08:59:26.597 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing trait associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, traits: COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:59:26 compute-2 nova_compute[226829]: 2026-01-31 08:59:26.613 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:59:26 compute-2 ceph-mon[77282]: pgmap v3825: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 302 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 2.6 KiB/s wr, 67 op/s
Jan 31 08:59:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1769649985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:59:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1659983838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:59:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:59:26 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/168774946' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:59:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:59:26 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/168774946' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:59:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e412 e412: 3 total, 3 up, 3 in
Jan 31 08:59:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:59:27 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2743009941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:59:27 compute-2 nova_compute[226829]: 2026-01-31 08:59:27.019 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:59:27 compute-2 nova_compute[226829]: 2026-01-31 08:59:27.025 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:59:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:27.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:27 compute-2 nova_compute[226829]: 2026-01-31 08:59:27.232 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:59:27 compute-2 nova_compute[226829]: 2026-01-31 08:59:27.555 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/168774946' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:59:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/168774946' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:59:27 compute-2 ceph-mon[77282]: osdmap e412: 3 total, 3 up, 3 in
Jan 31 08:59:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2743009941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:59:27 compute-2 ceph-mon[77282]: pgmap v3827: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 295 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 2.7 KiB/s wr, 69 op/s
Jan 31 08:59:27 compute-2 nova_compute[226829]: 2026-01-31 08:59:27.875 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:59:27 compute-2 nova_compute[226829]: 2026-01-31 08:59:27.875 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 5.473s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:59:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:59:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:27.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:59:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e413 e413: 3 total, 3 up, 3 in
Jan 31 08:59:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:59:28 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2432836408' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:59:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:59:28 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2432836408' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:59:29 compute-2 ceph-mon[77282]: osdmap e413: 3 total, 3 up, 3 in
Jan 31 08:59:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2432836408' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:59:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2432836408' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:59:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:29.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:29 compute-2 nova_compute[226829]: 2026-01-31 08:59:29.331 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:29.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:30 compute-2 ceph-mon[77282]: pgmap v3829: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 238 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 2.8 KiB/s wr, 73 op/s
Jan 31 08:59:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:59:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:31.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:31.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:32 compute-2 ceph-mon[77282]: pgmap v3830: 305 pgs: 305 active+clean; 148 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 77 KiB/s rd, 5.3 KiB/s wr, 110 op/s
Jan 31 08:59:32 compute-2 nova_compute[226829]: 2026-01-31 08:59:32.602 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:33.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:59:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:33.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:59:34 compute-2 nova_compute[226829]: 2026-01-31 08:59:34.332 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:34 compute-2 ceph-mon[77282]: pgmap v3831: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 47 KiB/s rd, 3.9 KiB/s wr, 71 op/s
Jan 31 08:59:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:35.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 e414: 3 total, 3 up, 3 in
Jan 31 08:59:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:59:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:59:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:35.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:59:36 compute-2 ceph-mon[77282]: osdmap e414: 3 total, 3 up, 3 in
Jan 31 08:59:36 compute-2 ceph-mon[77282]: pgmap v3833: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 68 op/s
Jan 31 08:59:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:37.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:37 compute-2 nova_compute[226829]: 2026-01-31 08:59:37.604 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:37.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:38 compute-2 ceph-mon[77282]: pgmap v3834: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 2.0 KiB/s wr, 34 op/s
Jan 31 08:59:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:39.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:39 compute-2 nova_compute[226829]: 2026-01-31 08:59:39.334 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:39.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:40 compute-2 ceph-mon[77282]: pgmap v3835: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.9 KiB/s wr, 31 op/s
Jan 31 08:59:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:59:40 compute-2 nova_compute[226829]: 2026-01-31 08:59:40.876 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:59:40 compute-2 nova_compute[226829]: 2026-01-31 08:59:40.877 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:59:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:41.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:41.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:42 compute-2 ceph-mon[77282]: pgmap v3836: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Jan 31 08:59:42 compute-2 nova_compute[226829]: 2026-01-31 08:59:42.655 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:43.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:43.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:44 compute-2 podman[329869]: 2026-01-31 08:59:44.199528752 +0000 UTC m=+0.084071768 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:59:44 compute-2 nova_compute[226829]: 2026-01-31 08:59:44.336 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:44 compute-2 ceph-mon[77282]: pgmap v3837: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:59:44 compute-2 sudo[329894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:59:44 compute-2 sudo[329894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:59:44 compute-2 sudo[329894]: pam_unix(sudo:session): session closed for user root
Jan 31 08:59:44 compute-2 sudo[329919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:59:44 compute-2 sudo[329919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:59:44 compute-2 sudo[329919]: pam_unix(sudo:session): session closed for user root
Jan 31 08:59:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 08:59:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:45.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 08:59:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:59:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:45.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:46 compute-2 ceph-mon[77282]: pgmap v3838: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:59:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:47.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:47 compute-2 nova_compute[226829]: 2026-01-31 08:59:47.657 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:47.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:48 compute-2 nova_compute[226829]: 2026-01-31 08:59:48.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:59:48 compute-2 ceph-mon[77282]: pgmap v3839: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:59:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:49.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:49 compute-2 nova_compute[226829]: 2026-01-31 08:59:49.364 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:49.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:50 compute-2 ceph-mon[77282]: pgmap v3840: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:59:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:59:51 compute-2 podman[329948]: 2026-01-31 08:59:51.173390344 +0000 UTC m=+0.064155594 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 08:59:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:51.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:51.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:52 compute-2 ceph-mon[77282]: pgmap v3841: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:59:52 compute-2 nova_compute[226829]: 2026-01-31 08:59:52.698 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:53.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:59:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3594944823' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:59:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:59:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3594944823' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:59:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3594944823' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:59:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3594944823' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:59:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 08:59:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:53.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 08:59:54 compute-2 nova_compute[226829]: 2026-01-31 08:59:54.367 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:54 compute-2 ceph-mon[77282]: pgmap v3842: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:59:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:55.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:55 compute-2 ceph-mon[77282]: pgmap v3843: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:59:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:59:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:55.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:57.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:57 compute-2 nova_compute[226829]: 2026-01-31 08:59:57.699 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:57.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:58 compute-2 ceph-mon[77282]: pgmap v3844: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:59:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:59.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:59 compute-2 nova_compute[226829]: 2026-01-31 08:59:59.369 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 08:59:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:59.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:00 compute-2 ceph-mon[77282]: pgmap v3845: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:00 compute-2 ceph-mon[77282]: overall HEALTH_OK
Jan 31 09:00:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:00:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:01.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:01.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:02 compute-2 sudo[329974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:00:02 compute-2 sudo[329974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:02 compute-2 sudo[329974]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:02 compute-2 sudo[329999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:00:02 compute-2 sudo[329999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:02 compute-2 sudo[329999]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:02 compute-2 sudo[330024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:00:02 compute-2 sudo[330024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:02 compute-2 sudo[330024]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:02 compute-2 sudo[330049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:00:02 compute-2 sudo[330049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:02 compute-2 ceph-mon[77282]: pgmap v3846: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:02 compute-2 nova_compute[226829]: 2026-01-31 09:00:02.702 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:02 compute-2 sudo[330049]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:00:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:03.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:00:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:00:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:00:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:00:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:00:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:00:03 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:00:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 31 09:00:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:03.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 31 09:00:04 compute-2 nova_compute[226829]: 2026-01-31 09:00:04.371 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:04 compute-2 ceph-mon[77282]: pgmap v3847: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:05 compute-2 sudo[330107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:00:05 compute-2 sudo[330107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:05 compute-2 sudo[330107]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:05 compute-2 sudo[330132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:00:05 compute-2 sudo[330132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:05 compute-2 sudo[330132]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:05.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:00:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:05.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:06 compute-2 ceph-mon[77282]: pgmap v3848: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:06 compute-2 nova_compute[226829]: 2026-01-31 09:00:06.604 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:00:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:00:06.940 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:00:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:00:06.940 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:00:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:00:06.940 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:00:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:00:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:07.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:00:07 compute-2 nova_compute[226829]: 2026-01-31 09:00:07.704 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:07.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:08 compute-2 ceph-mon[77282]: pgmap v3849: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:08 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:00:08 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:00:08 compute-2 sudo[330159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:00:08 compute-2 sudo[330159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:08 compute-2 sudo[330159]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:08 compute-2 sudo[330184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:00:08 compute-2 sudo[330184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:08 compute-2 sudo[330184]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:09.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:09 compute-2 nova_compute[226829]: 2026-01-31 09:00:09.382 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:09.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:10 compute-2 ceph-mon[77282]: pgmap v3850: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:00:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:11.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:11.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:12 compute-2 ceph-mon[77282]: pgmap v3851: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:12 compute-2 nova_compute[226829]: 2026-01-31 09:00:12.706 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:13.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:13 compute-2 nova_compute[226829]: 2026-01-31 09:00:13.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:00:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:13.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:14 compute-2 nova_compute[226829]: 2026-01-31 09:00:14.384 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:14 compute-2 nova_compute[226829]: 2026-01-31 09:00:14.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:00:14 compute-2 nova_compute[226829]: 2026-01-31 09:00:14.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:00:14 compute-2 ceph-mon[77282]: pgmap v3852: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:15 compute-2 podman[330212]: 2026-01-31 09:00:15.245550835 +0000 UTC m=+0.134481878 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_controller)
Jan 31 09:00:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:15.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:00:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:15.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:16 compute-2 nova_compute[226829]: 2026-01-31 09:00:16.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:00:16 compute-2 ceph-mon[77282]: pgmap v3853: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:16 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3029930112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:00:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:17.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:17 compute-2 nova_compute[226829]: 2026-01-31 09:00:17.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:00:17 compute-2 nova_compute[226829]: 2026-01-31 09:00:17.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:00:17 compute-2 nova_compute[226829]: 2026-01-31 09:00:17.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:00:17 compute-2 nova_compute[226829]: 2026-01-31 09:00:17.529 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:00:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/33126419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:00:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3032600505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:00:17 compute-2 nova_compute[226829]: 2026-01-31 09:00:17.708 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:17.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:18 compute-2 ceph-mon[77282]: pgmap v3854: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:19.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:19 compute-2 nova_compute[226829]: 2026-01-31 09:00:19.385 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:00:19.528 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=100, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=99) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:00:19 compute-2 nova_compute[226829]: 2026-01-31 09:00:19.528 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:00:19.529 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:00:19 compute-2 ceph-mon[77282]: pgmap v3855: 305 pgs: 305 active+clean; 136 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 KiB/s rd, 700 KiB/s wr, 3 op/s
Jan 31 09:00:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:19.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:00:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:21.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:21 compute-2 nova_compute[226829]: 2026-01-31 09:00:21.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:00:21 compute-2 nova_compute[226829]: 2026-01-31 09:00:21.587 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:00:21 compute-2 nova_compute[226829]: 2026-01-31 09:00:21.587 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:00:21 compute-2 nova_compute[226829]: 2026-01-31 09:00:21.587 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:00:21 compute-2 nova_compute[226829]: 2026-01-31 09:00:21.587 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:00:21 compute-2 nova_compute[226829]: 2026-01-31 09:00:21.588 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:00:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:21.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:00:22 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1357204511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:00:22 compute-2 nova_compute[226829]: 2026-01-31 09:00:22.079 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:00:22 compute-2 podman[330264]: 2026-01-31 09:00:22.161252967 +0000 UTC m=+0.046361001 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 09:00:22 compute-2 nova_compute[226829]: 2026-01-31 09:00:22.216 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:00:22 compute-2 nova_compute[226829]: 2026-01-31 09:00:22.217 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4056MB free_disk=20.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:00:22 compute-2 nova_compute[226829]: 2026-01-31 09:00:22.218 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:00:22 compute-2 nova_compute[226829]: 2026-01-31 09:00:22.218 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:00:22 compute-2 nova_compute[226829]: 2026-01-31 09:00:22.381 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:00:22 compute-2 nova_compute[226829]: 2026-01-31 09:00:22.381 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:00:22 compute-2 nova_compute[226829]: 2026-01-31 09:00:22.413 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:00:22 compute-2 ceph-mon[77282]: pgmap v3856: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.4 KiB/s rd, 1.8 MiB/s wr, 18 op/s
Jan 31 09:00:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1357204511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:00:22 compute-2 nova_compute[226829]: 2026-01-31 09:00:22.710 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:00:22 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4078823607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:00:22 compute-2 nova_compute[226829]: 2026-01-31 09:00:22.808 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:00:22 compute-2 nova_compute[226829]: 2026-01-31 09:00:22.814 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:00:22 compute-2 nova_compute[226829]: 2026-01-31 09:00:22.894 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:00:22 compute-2 nova_compute[226829]: 2026-01-31 09:00:22.920 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:00:22 compute-2 nova_compute[226829]: 2026-01-31 09:00:22.921 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:00:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:00:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:23.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:00:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4078823607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:00:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/664484667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:00:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:23.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:24 compute-2 nova_compute[226829]: 2026-01-31 09:00:24.446 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:24 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:00:24.530 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '100'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:00:24 compute-2 ceph-mon[77282]: pgmap v3857: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:00:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/344523482' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:00:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3499201930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:00:24 compute-2 nova_compute[226829]: 2026-01-31 09:00:24.922 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:00:25 compute-2 sudo[330306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:00:25 compute-2 sudo[330306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:25 compute-2 sudo[330306]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:25 compute-2 sudo[330331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:00:25 compute-2 sudo[330331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:25 compute-2 sudo[330331]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:00:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:25.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:00:25 compute-2 ceph-mon[77282]: pgmap v3858: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:00:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:00:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:00:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:25.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:00:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/307220799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:00:27 compute-2 ovn_controller[133834]: 2026-01-31T09:00:27Z|00824|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 31 09:00:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:27.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:27 compute-2 ceph-mon[77282]: pgmap v3859: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 31 09:00:27 compute-2 nova_compute[226829]: 2026-01-31 09:00:27.713 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:27.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:29.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:29 compute-2 nova_compute[226829]: 2026-01-31 09:00:29.449 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:29.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:30 compute-2 ceph-mon[77282]: pgmap v3860: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 31 09:00:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:00:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:31.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:31 compute-2 ceph-mon[77282]: pgmap v3861: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 71 op/s
Jan 31 09:00:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:31.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:32 compute-2 nova_compute[226829]: 2026-01-31 09:00:32.714 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:33.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:00:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:33.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:00:34 compute-2 ceph-mon[77282]: pgmap v3862: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 82 op/s
Jan 31 09:00:34 compute-2 nova_compute[226829]: 2026-01-31 09:00:34.449 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:35.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:00:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:35.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:36 compute-2 nova_compute[226829]: 2026-01-31 09:00:36.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:00:36 compute-2 nova_compute[226829]: 2026-01-31 09:00:36.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:00:36 compute-2 ceph-mon[77282]: pgmap v3863: 305 pgs: 305 active+clean; 150 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 78 op/s
Jan 31 09:00:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2431638206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:00:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:00:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:37.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:00:37 compute-2 nova_compute[226829]: 2026-01-31 09:00:37.716 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:37 compute-2 ceph-mon[77282]: pgmap v3864: 305 pgs: 305 active+clean; 136 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 88 op/s
Jan 31 09:00:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:37.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:39.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:39 compute-2 nova_compute[226829]: 2026-01-31 09:00:39.507 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:39.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:40 compute-2 ceph-mon[77282]: pgmap v3865: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 94 op/s
Jan 31 09:00:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:00:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:41.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:41.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:42 compute-2 nova_compute[226829]: 2026-01-31 09:00:42.718 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:42 compute-2 ceph-mon[77282]: pgmap v3866: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 91 op/s
Jan 31 09:00:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:43.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:43.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:43 compute-2 ceph-mon[77282]: pgmap v3867: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 835 KiB/s rd, 1.2 KiB/s wr, 52 op/s
Jan 31 09:00:44 compute-2 nova_compute[226829]: 2026-01-31 09:00:44.508 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:45 compute-2 sudo[330366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:00:45 compute-2 sudo[330366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:45 compute-2 sudo[330366]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:45.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:45 compute-2 sudo[330397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:00:45 compute-2 sudo[330397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:45 compute-2 sudo[330397]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:45 compute-2 podman[330390]: 2026-01-31 09:00:45.425816205 +0000 UTC m=+0.102270803 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, 
io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 09:00:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:00:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:45.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:46 compute-2 ceph-mon[77282]: pgmap v3868: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 31 09:00:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:47.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:47 compute-2 nova_compute[226829]: 2026-01-31 09:00:47.719 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:47.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:48 compute-2 ceph-mon[77282]: pgmap v3869: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 852 B/s wr, 21 op/s
Jan 31 09:00:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:00:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:49.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:00:49 compute-2 nova_compute[226829]: 2026-01-31 09:00:49.510 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:49 compute-2 ceph-mon[77282]: pgmap v3870: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.0 KiB/s rd, 341 B/s wr, 10 op/s
Jan 31 09:00:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:49.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:00:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:51.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:51 compute-2 ceph-mon[77282]: pgmap v3871: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:51.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:52 compute-2 nova_compute[226829]: 2026-01-31 09:00:52.720 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:53 compute-2 podman[330446]: 2026-01-31 09:00:53.151712538 +0000 UTC m=+0.040240426 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:00:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:53.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1275907244' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:00:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1275907244' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:00:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:53.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:54 compute-2 nova_compute[226829]: 2026-01-31 09:00:54.513 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:54 compute-2 ceph-mon[77282]: pgmap v3872: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:55.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:00:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:55.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:56 compute-2 ceph-mon[77282]: pgmap v3873: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:00:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:57.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:00:57 compute-2 nova_compute[226829]: 2026-01-31 09:00:57.721 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:57 compute-2 ceph-mon[77282]: pgmap v3874: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:57.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2174453895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:00:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:59.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:59 compute-2 nova_compute[226829]: 2026-01-31 09:00:59.514 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:00:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:00:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:59.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:01:00 compute-2 ceph-mon[77282]: pgmap v3875: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:01:01 compute-2 CROND[330471]: (root) CMD (run-parts /etc/cron.hourly)
Jan 31 09:01:01 compute-2 run-parts[330474]: (/etc/cron.hourly) starting 0anacron
Jan 31 09:01:01 compute-2 run-parts[330480]: (/etc/cron.hourly) finished 0anacron
Jan 31 09:01:01 compute-2 CROND[330470]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 31 09:01:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:01:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:01.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:01:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:01.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:02 compute-2 ceph-mon[77282]: pgmap v3876: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:01:02.698 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=101, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=100) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:01:02 compute-2 nova_compute[226829]: 2026-01-31 09:01:02.699 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:01:02.700 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:01:02 compute-2 nova_compute[226829]: 2026-01-31 09:01:02.723 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:01:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:03.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:01:03 compute-2 ceph-mon[77282]: pgmap v3877: 305 pgs: 305 active+clean; 124 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.3 KiB/s rd, 46 KiB/s wr, 10 op/s
Jan 31 09:01:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:03.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:04 compute-2 nova_compute[226829]: 2026-01-31 09:01:04.514 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #187. Immutable memtables: 0.
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:05.375381) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 187
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850065375448, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 1970, "num_deletes": 261, "total_data_size": 4517608, "memory_usage": 4585200, "flush_reason": "Manual Compaction"}
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #188: started
Jan 31 09:01:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:05.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850065390429, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 188, "file_size": 2967205, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 90228, "largest_seqno": 92191, "table_properties": {"data_size": 2959131, "index_size": 4887, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17064, "raw_average_key_size": 20, "raw_value_size": 2942727, "raw_average_value_size": 3490, "num_data_blocks": 213, "num_entries": 843, "num_filter_entries": 843, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849900, "oldest_key_time": 1769849900, "file_creation_time": 1769850065, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 15122 microseconds, and 5411 cpu microseconds.
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:05.390494) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #188: 2967205 bytes OK
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:05.390518) [db/memtable_list.cc:519] [default] Level-0 commit table #188 started
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:05.392022) [db/memtable_list.cc:722] [default] Level-0 commit table #188: memtable #1 done
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:05.392052) EVENT_LOG_v1 {"time_micros": 1769850065392047, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:05.392071) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 4508801, prev total WAL file size 4508801, number of live WAL files 2.
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000184.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:05.392841) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353137' seq:72057594037927935, type:22 .. '6C6F676D0033373731' seq:0, type:0; will stop at (end)
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [188(2897KB)], [186(10MB)]
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850065392917, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [188], "files_L6": [186], "score": -1, "input_data_size": 14348019, "oldest_snapshot_seqno": -1}
Jan 31 09:01:05 compute-2 sudo[330483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:01:05 compute-2 sudo[330483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:05 compute-2 sudo[330483]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #189: 11121 keys, 14206149 bytes, temperature: kUnknown
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850065460718, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 189, "file_size": 14206149, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14135135, "index_size": 42101, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27845, "raw_key_size": 293982, "raw_average_key_size": 26, "raw_value_size": 13941650, "raw_average_value_size": 1253, "num_data_blocks": 1603, "num_entries": 11121, "num_filter_entries": 11121, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769850065, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 189, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:05.461483) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 14206149 bytes
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:05.462479) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 209.7 rd, 207.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 10.9 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(9.6) write-amplify(4.8) OK, records in: 11660, records dropped: 539 output_compression: NoCompression
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:05.462496) EVENT_LOG_v1 {"time_micros": 1769850065462488, "job": 120, "event": "compaction_finished", "compaction_time_micros": 68407, "compaction_time_cpu_micros": 32516, "output_level": 6, "num_output_files": 1, "total_output_size": 14206149, "num_input_records": 11660, "num_output_records": 11121, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850065462892, "job": 120, "event": "table_file_deletion", "file_number": 188}
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000186.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850065464115, "job": 120, "event": "table_file_deletion", "file_number": 186}
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:05.392758) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:05.464151) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:05.464155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:05.464157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:05.464158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:01:05 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:05.464159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:01:05 compute-2 sudo[330508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:01:05 compute-2 sudo[330508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:05 compute-2 sudo[330508]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:05 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:01:05.703 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '101'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:01:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:01:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:05.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:06 compute-2 ceph-mon[77282]: pgmap v3878: 305 pgs: 305 active+clean; 147 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 1.1 MiB/s wr, 23 op/s
Jan 31 09:01:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:01:06.941 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:01:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:01:06.941 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:01:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:01:06.941 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:01:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:07.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:07 compute-2 nova_compute[226829]: 2026-01-31 09:01:07.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:01:07 compute-2 nova_compute[226829]: 2026-01-31 09:01:07.769 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:07 compute-2 ceph-mon[77282]: pgmap v3879: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Jan 31 09:01:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:07.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:08 compute-2 sudo[330535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:01:08 compute-2 sudo[330535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:08 compute-2 sudo[330535]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:08 compute-2 sudo[330560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:01:08 compute-2 sudo[330560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:08 compute-2 sudo[330560]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:08 compute-2 sudo[330585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:01:08 compute-2 sudo[330585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:08 compute-2 sudo[330585]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:08 compute-2 sudo[330610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:01:08 compute-2 sudo[330610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:09 compute-2 sudo[330610]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:09.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:09 compute-2 nova_compute[226829]: 2026-01-31 09:01:09.516 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:09.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:01:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:01:10 compute-2 ceph-mon[77282]: pgmap v3880: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:01:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3591830032' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:01:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3119043435' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:01:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:01:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:01:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:01:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:01:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:01:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:01:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:01:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:11.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:11 compute-2 ceph-mon[77282]: pgmap v3881: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:01:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:01:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:11.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:01:12 compute-2 nova_compute[226829]: 2026-01-31 09:01:12.770 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:13.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:13.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:14 compute-2 nova_compute[226829]: 2026-01-31 09:01:14.517 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:15 compute-2 ceph-mon[77282]: pgmap v3882: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:01:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:01:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:15.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:01:15 compute-2 nova_compute[226829]: 2026-01-31 09:01:15.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:01:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:01:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:15.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:16 compute-2 podman[330669]: 2026-01-31 09:01:16.226780146 +0000 UTC m=+0.108566864 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:01:16 compute-2 ceph-mon[77282]: pgmap v3883: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 1.7 MiB/s wr, 19 op/s
Jan 31 09:01:16 compute-2 nova_compute[226829]: 2026-01-31 09:01:16.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:01:16 compute-2 nova_compute[226829]: 2026-01-31 09:01:16.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:01:16 compute-2 nova_compute[226829]: 2026-01-31 09:01:16.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:01:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:17.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:17 compute-2 nova_compute[226829]: 2026-01-31 09:01:17.772 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:17 compute-2 ceph-mon[77282]: pgmap v3884: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 155 KiB/s rd, 669 KiB/s wr, 14 op/s
Jan 31 09:01:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:17.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:19.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:19 compute-2 nova_compute[226829]: 2026-01-31 09:01:19.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:01:19 compute-2 nova_compute[226829]: 2026-01-31 09:01:19.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:01:19 compute-2 nova_compute[226829]: 2026-01-31 09:01:19.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:01:19 compute-2 nova_compute[226829]: 2026-01-31 09:01:19.513 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:01:19 compute-2 nova_compute[226829]: 2026-01-31 09:01:19.519 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:19 compute-2 ceph-mon[77282]: pgmap v3885: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 206 KiB/s rd, 12 KiB/s wr, 17 op/s
Jan 31 09:01:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:19.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:01:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:21.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:21 compute-2 nova_compute[226829]: 2026-01-31 09:01:21.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:01:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:22.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:22 compute-2 nova_compute[226829]: 2026-01-31 09:01:22.080 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:01:22 compute-2 nova_compute[226829]: 2026-01-31 09:01:22.080 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:01:22 compute-2 nova_compute[226829]: 2026-01-31 09:01:22.080 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:01:22 compute-2 nova_compute[226829]: 2026-01-31 09:01:22.080 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:01:22 compute-2 nova_compute[226829]: 2026-01-31 09:01:22.081 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:01:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:01:22 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/681303664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:01:22 compute-2 nova_compute[226829]: 2026-01-31 09:01:22.505 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:01:22 compute-2 sudo[330720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:01:22 compute-2 sudo[330720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:22 compute-2 sudo[330720]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:22 compute-2 nova_compute[226829]: 2026-01-31 09:01:22.632 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:01:22 compute-2 nova_compute[226829]: 2026-01-31 09:01:22.633 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4061MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:01:22 compute-2 nova_compute[226829]: 2026-01-31 09:01:22.633 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:01:22 compute-2 nova_compute[226829]: 2026-01-31 09:01:22.634 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:01:22 compute-2 ceph-mon[77282]: pgmap v3886: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:01:22 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:01:22 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:01:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/681303664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:01:22 compute-2 sudo[330745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:01:22 compute-2 sudo[330745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:22 compute-2 sudo[330745]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:22 compute-2 nova_compute[226829]: 2026-01-31 09:01:22.721 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:01:22 compute-2 nova_compute[226829]: 2026-01-31 09:01:22.722 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:01:22 compute-2 nova_compute[226829]: 2026-01-31 09:01:22.745 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:01:22 compute-2 nova_compute[226829]: 2026-01-31 09:01:22.811 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:01:23 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1841012567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:01:23 compute-2 nova_compute[226829]: 2026-01-31 09:01:23.173 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:01:23 compute-2 nova_compute[226829]: 2026-01-31 09:01:23.178 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:01:23 compute-2 nova_compute[226829]: 2026-01-31 09:01:23.202 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:01:23 compute-2 nova_compute[226829]: 2026-01-31 09:01:23.204 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:01:23 compute-2 nova_compute[226829]: 2026-01-31 09:01:23.204 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:01:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:01:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:23.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:01:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:24.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1502050569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:01:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1841012567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:01:24 compute-2 ceph-mon[77282]: pgmap v3887: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:01:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3584786760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:01:24 compute-2 podman[330793]: 2026-01-31 09:01:24.151780213 +0000 UTC m=+0.041874440 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:01:24 compute-2 nova_compute[226829]: 2026-01-31 09:01:24.521 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1842686204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:01:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1794774602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:01:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:01:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:25.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:01:25 compute-2 sudo[330812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:01:25 compute-2 sudo[330812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:25 compute-2 sudo[330812]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:25 compute-2 sudo[330837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:01:25 compute-2 sudo[330837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:25 compute-2 sudo[330837]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:01:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:26.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:26 compute-2 ceph-mon[77282]: pgmap v3888: 305 pgs: 305 active+clean; 144 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 83 op/s
Jan 31 09:01:26 compute-2 nova_compute[226829]: 2026-01-31 09:01:26.205 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:01:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:27.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:27 compute-2 nova_compute[226829]: 2026-01-31 09:01:27.813 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:28.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:28 compute-2 ceph-mon[77282]: pgmap v3889: 305 pgs: 305 active+clean; 124 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 86 op/s
Jan 31 09:01:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:01:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:29.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:01:29 compute-2 nova_compute[226829]: 2026-01-31 09:01:29.570 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:29 compute-2 ceph-mon[77282]: pgmap v3890: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.2 KiB/s wr, 88 op/s
Jan 31 09:01:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/444433900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:01:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:30.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:01:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:01:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:31.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:01:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:32.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:32 compute-2 ceph-mon[77282]: pgmap v3891: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 82 op/s
Jan 31 09:01:32 compute-2 nova_compute[226829]: 2026-01-31 09:01:32.814 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:01:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:33.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:01:33 compute-2 ceph-mon[77282]: pgmap v3892: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 31 09:01:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:34.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:34 compute-2 nova_compute[226829]: 2026-01-31 09:01:34.573 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:35.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:01:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:36.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:36 compute-2 ceph-mon[77282]: pgmap v3893: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 31 09:01:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:01:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:37.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:01:37 compute-2 ceph-mon[77282]: pgmap v3894: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 16 op/s
Jan 31 09:01:37 compute-2 nova_compute[226829]: 2026-01-31 09:01:37.860 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:38.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:38 compute-2 nova_compute[226829]: 2026-01-31 09:01:38.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:01:38 compute-2 nova_compute[226829]: 2026-01-31 09:01:38.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:01:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:01:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:39.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:01:39 compute-2 nova_compute[226829]: 2026-01-31 09:01:39.574 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:40.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:40 compute-2 ceph-mon[77282]: pgmap v3895: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.0 KiB/s rd, 341 B/s wr, 10 op/s
Jan 31 09:01:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:01:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:01:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:41.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:01:41 compute-2 ceph-mon[77282]: pgmap v3896: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Jan 31 09:01:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:42.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:42 compute-2 nova_compute[226829]: 2026-01-31 09:01:42.862 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:01:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:43.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:01:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:44.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:44 compute-2 nova_compute[226829]: 2026-01-31 09:01:44.574 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:44 compute-2 ceph-mon[77282]: pgmap v3897: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:45.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:45 compute-2 sudo[330872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:01:45 compute-2 sudo[330872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:45 compute-2 sudo[330872]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:01:45 compute-2 sudo[330897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:01:45 compute-2 sudo[330897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:45 compute-2 sudo[330897]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:45 compute-2 ceph-mon[77282]: pgmap v3898: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:46.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:47 compute-2 podman[330923]: 2026-01-31 09:01:47.172926656 +0000 UTC m=+0.065244885 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 09:01:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:01:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:47.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:01:47 compute-2 nova_compute[226829]: 2026-01-31 09:01:47.863 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:48.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:48 compute-2 ceph-mon[77282]: pgmap v3899: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:49.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:49 compute-2 nova_compute[226829]: 2026-01-31 09:01:49.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:01:49 compute-2 nova_compute[226829]: 2026-01-31 09:01:49.577 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:50.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:50 compute-2 ceph-mon[77282]: pgmap v3900: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:01:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:51.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #190. Immutable memtables: 0.
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:51.519874) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 190
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850111519906, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 711, "num_deletes": 251, "total_data_size": 1349386, "memory_usage": 1364288, "flush_reason": "Manual Compaction"}
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #191: started
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850111525607, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 191, "file_size": 879489, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 92197, "largest_seqno": 92902, "table_properties": {"data_size": 875920, "index_size": 1412, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8076, "raw_average_key_size": 19, "raw_value_size": 868866, "raw_average_value_size": 2098, "num_data_blocks": 63, "num_entries": 414, "num_filter_entries": 414, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850066, "oldest_key_time": 1769850066, "file_creation_time": 1769850111, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 5787 microseconds, and 2159 cpu microseconds.
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:51.525658) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #191: 879489 bytes OK
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:51.525682) [db/memtable_list.cc:519] [default] Level-0 commit table #191 started
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:51.527557) [db/memtable_list.cc:722] [default] Level-0 commit table #191: memtable #1 done
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:51.527597) EVENT_LOG_v1 {"time_micros": 1769850111527584, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:51.527620) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 1345574, prev total WAL file size 1345574, number of live WAL files 2.
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000187.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:51.528375) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [191(858KB)], [189(13MB)]
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850111528548, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [191], "files_L6": [189], "score": -1, "input_data_size": 15085638, "oldest_snapshot_seqno": -1}
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #192: 11020 keys, 13133309 bytes, temperature: kUnknown
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850111581763, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 192, "file_size": 13133309, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13063890, "index_size": 40762, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27589, "raw_key_size": 292596, "raw_average_key_size": 26, "raw_value_size": 12872969, "raw_average_value_size": 1168, "num_data_blocks": 1541, "num_entries": 11020, "num_filter_entries": 11020, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769850111, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 192, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:51.582169) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 13133309 bytes
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:51.583795) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 283.0 rd, 246.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 13.5 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(32.1) write-amplify(14.9) OK, records in: 11535, records dropped: 515 output_compression: NoCompression
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:51.583827) EVENT_LOG_v1 {"time_micros": 1769850111583812, "job": 122, "event": "compaction_finished", "compaction_time_micros": 53301, "compaction_time_cpu_micros": 31914, "output_level": 6, "num_output_files": 1, "total_output_size": 13133309, "num_input_records": 11535, "num_output_records": 11020, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850111584227, "job": 122, "event": "table_file_deletion", "file_number": 191}
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000189.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850111586660, "job": 122, "event": "table_file_deletion", "file_number": 189}
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:51.528237) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:51.586729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:51.586733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:51.586734) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:51.586735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:01:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:01:51.586737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:01:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:52.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:52 compute-2 ceph-mon[77282]: pgmap v3901: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:52 compute-2 nova_compute[226829]: 2026-01-31 09:01:52.923 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:01:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:53.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:01:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2693558940' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:01:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2693558940' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:01:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:54.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:54 compute-2 nova_compute[226829]: 2026-01-31 09:01:54.578 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:54 compute-2 ceph-mon[77282]: pgmap v3902: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:55 compute-2 podman[330954]: 2026-01-31 09:01:55.161866331 +0000 UTC m=+0.051774288 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 31 09:01:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:55.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:55 compute-2 ceph-mon[77282]: pgmap v3903: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:01:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:01:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:56.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:01:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:57.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:57 compute-2 nova_compute[226829]: 2026-01-31 09:01:57.925 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:58.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:58 compute-2 ceph-mon[77282]: pgmap v3904: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:01:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:01:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:59.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:01:59 compute-2 nova_compute[226829]: 2026-01-31 09:01:59.583 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:00.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:00 compute-2 ceph-mon[77282]: pgmap v3905: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:02:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:02:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:02:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:01.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:02:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:02:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:02.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:02:02 compute-2 ceph-mon[77282]: pgmap v3906: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:02:02 compute-2 nova_compute[226829]: 2026-01-31 09:02:02.926 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:02:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:03.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:02:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:04.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:04 compute-2 ceph-mon[77282]: pgmap v3907: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:02:04 compute-2 nova_compute[226829]: 2026-01-31 09:02:04.585 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:02:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:05.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:02:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:02:05 compute-2 sudo[330979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:02:05 compute-2 sudo[330979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:05 compute-2 sudo[330979]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:05 compute-2 sudo[331004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:02:05 compute-2 sudo[331004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:05 compute-2 sudo[331004]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:02:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:06.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:02:06 compute-2 ceph-mon[77282]: pgmap v3908: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:02:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:02:06.941 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:02:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:02:06.942 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:02:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:02:06.942 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:02:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:07.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/349253811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:02:07 compute-2 ceph-mon[77282]: pgmap v3909: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:02:07 compute-2 nova_compute[226829]: 2026-01-31 09:02:07.963 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:08.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:08 compute-2 nova_compute[226829]: 2026-01-31 09:02:08.511 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:02:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:09.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:09 compute-2 nova_compute[226829]: 2026-01-31 09:02:09.586 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:10.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:10 compute-2 ceph-mon[77282]: pgmap v3910: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:02:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:02:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:02:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:11.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:02:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:12.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:12 compute-2 ceph-mon[77282]: pgmap v3911: 305 pgs: 305 active+clean; 146 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 907 KiB/s wr, 15 op/s
Jan 31 09:02:12 compute-2 nova_compute[226829]: 2026-01-31 09:02:12.965 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:13.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:14 compute-2 ceph-mon[77282]: pgmap v3912: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:02:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:02:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:14.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:02:14 compute-2 nova_compute[226829]: 2026-01-31 09:02:14.587 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3830990779' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:02:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2146965649' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:02:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:15.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:02:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:16.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:16 compute-2 ceph-mon[77282]: pgmap v3913: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:02:16 compute-2 nova_compute[226829]: 2026-01-31 09:02:16.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:02:16 compute-2 nova_compute[226829]: 2026-01-31 09:02:16.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:02:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:02:16.967 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=102, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=101) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:02:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:02:16.968 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:02:17 compute-2 nova_compute[226829]: 2026-01-31 09:02:17.015 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:17.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:17 compute-2 ceph-mon[77282]: pgmap v3914: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:02:17 compute-2 nova_compute[226829]: 2026-01-31 09:02:17.967 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:02:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:18.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:02:18 compute-2 podman[331036]: 2026-01-31 09:02:18.203757088 +0000 UTC m=+0.085874155 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 09:02:18 compute-2 nova_compute[226829]: 2026-01-31 09:02:18.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:02:18 compute-2 nova_compute[226829]: 2026-01-31 09:02:18.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:02:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:19.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:19 compute-2 nova_compute[226829]: 2026-01-31 09:02:19.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:02:19 compute-2 nova_compute[226829]: 2026-01-31 09:02:19.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:02:19 compute-2 nova_compute[226829]: 2026-01-31 09:02:19.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:02:19 compute-2 nova_compute[226829]: 2026-01-31 09:02:19.510 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:02:19 compute-2 nova_compute[226829]: 2026-01-31 09:02:19.587 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:02:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:20.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:02:20 compute-2 ceph-mon[77282]: pgmap v3915: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 31 09:02:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:02:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:21.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:22.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:22 compute-2 nova_compute[226829]: 2026-01-31 09:02:22.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:02:22 compute-2 nova_compute[226829]: 2026-01-31 09:02:22.514 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:02:22 compute-2 nova_compute[226829]: 2026-01-31 09:02:22.514 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:02:22 compute-2 nova_compute[226829]: 2026-01-31 09:02:22.514 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:02:22 compute-2 nova_compute[226829]: 2026-01-31 09:02:22.514 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:02:22 compute-2 nova_compute[226829]: 2026-01-31 09:02:22.514 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:02:22 compute-2 ceph-mon[77282]: pgmap v3916: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 71 op/s
Jan 31 09:02:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2934368418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:02:22 compute-2 sudo[331086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:02:22 compute-2 sudo[331086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:22 compute-2 sudo[331086]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:22 compute-2 sudo[331111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:02:22 compute-2 sudo[331111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:22 compute-2 sudo[331111]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:22 compute-2 sudo[331136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:02:22 compute-2 sudo[331136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:22 compute-2 sudo[331136]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:22 compute-2 sudo[331161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 09:02:22 compute-2 sudo[331161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:02:22 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/279774204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:02:22 compute-2 nova_compute[226829]: 2026-01-31 09:02:22.949 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:02:22 compute-2 nova_compute[226829]: 2026-01-31 09:02:22.969 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:23 compute-2 sudo[331161]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:23 compute-2 nova_compute[226829]: 2026-01-31 09:02:23.093 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:02:23 compute-2 nova_compute[226829]: 2026-01-31 09:02:23.094 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4051MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:02:23 compute-2 nova_compute[226829]: 2026-01-31 09:02:23.095 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:02:23 compute-2 nova_compute[226829]: 2026-01-31 09:02:23.095 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:02:23 compute-2 sudo[331208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:02:23 compute-2 sudo[331208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:23 compute-2 sudo[331208]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:23 compute-2 sudo[331233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:02:23 compute-2 sudo[331233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:23 compute-2 sudo[331233]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:23 compute-2 nova_compute[226829]: 2026-01-31 09:02:23.173 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:02:23 compute-2 nova_compute[226829]: 2026-01-31 09:02:23.173 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:02:23 compute-2 sudo[331258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:02:23 compute-2 sudo[331258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:23 compute-2 sudo[331258]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:23 compute-2 nova_compute[226829]: 2026-01-31 09:02:23.208 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:02:23 compute-2 sudo[331284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:02:23 compute-2 sudo[331284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:02:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:23.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:02:23 compute-2 sudo[331284]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1838854291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:02:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/279774204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:02:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:02:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:02:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:02:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:02:23 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 09:02:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:02:23 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/859458309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:02:23 compute-2 nova_compute[226829]: 2026-01-31 09:02:23.639 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:02:23 compute-2 nova_compute[226829]: 2026-01-31 09:02:23.643 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:02:23 compute-2 nova_compute[226829]: 2026-01-31 09:02:23.689 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:02:23 compute-2 nova_compute[226829]: 2026-01-31 09:02:23.691 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:02:23 compute-2 nova_compute[226829]: 2026-01-31 09:02:23.691 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:02:23 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:02:23.971 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '102'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:02:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:02:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:24.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:02:24 compute-2 nova_compute[226829]: 2026-01-31 09:02:24.589 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:24 compute-2 ceph-mon[77282]: pgmap v3917: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 920 KiB/s wr, 85 op/s
Jan 31 09:02:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/859458309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:02:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 09:02:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:02:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:02:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:02:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:02:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:02:24 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:02:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:02:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:25.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:02:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3888245081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:02:25 compute-2 ceph-mon[77282]: pgmap v3918: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:02:25 compute-2 nova_compute[226829]: 2026-01-31 09:02:25.691 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:02:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:02:25 compute-2 sudo[331362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:02:25 compute-2 sudo[331362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:25 compute-2 sudo[331362]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:25 compute-2 sudo[331393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:02:25 compute-2 sudo[331393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:25 compute-2 sudo[331393]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:26 compute-2 podman[331386]: 2026-01-31 09:02:26.012022572 +0000 UTC m=+0.071077935 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 09:02:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:26.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4200959977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:02:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:27.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:27 compute-2 ceph-mon[77282]: pgmap v3919: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:02:27 compute-2 nova_compute[226829]: 2026-01-31 09:02:27.971 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:28.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:29 compute-2 sudo[331432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:02:29 compute-2 sudo[331432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:29 compute-2 sudo[331432]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:29 compute-2 sudo[331457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:02:29 compute-2 sudo[331457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:29 compute-2 sudo[331457]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:29.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:29 compute-2 nova_compute[226829]: 2026-01-31 09:02:29.590 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:02:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:02:29 compute-2 ceph-mon[77282]: pgmap v3920: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:02:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:02:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:30.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:02:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:02:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:02:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:31.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:02:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:32.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:32 compute-2 ceph-mon[77282]: pgmap v3921: 305 pgs: 305 active+clean; 189 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 102 op/s
Jan 31 09:02:32 compute-2 nova_compute[226829]: 2026-01-31 09:02:32.973 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:33.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:02:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:34.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:02:34 compute-2 ceph-mon[77282]: pgmap v3922: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 95 op/s
Jan 31 09:02:34 compute-2 nova_compute[226829]: 2026-01-31 09:02:34.592 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:35.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:02:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:36.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:36 compute-2 ceph-mon[77282]: pgmap v3923: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 31 09:02:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:37.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:37 compute-2 nova_compute[226829]: 2026-01-31 09:02:37.973 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:38.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:38 compute-2 ceph-mon[77282]: pgmap v3924: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 31 09:02:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:39.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:39 compute-2 nova_compute[226829]: 2026-01-31 09:02:39.642 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:02:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:40.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:02:40 compute-2 nova_compute[226829]: 2026-01-31 09:02:40.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:02:40 compute-2 nova_compute[226829]: 2026-01-31 09:02:40.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:02:40 compute-2 ceph-mon[77282]: pgmap v3925: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 392 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 31 09:02:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:02:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:41.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:42.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:42 compute-2 ceph-mon[77282]: pgmap v3926: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 393 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 31 09:02:43 compute-2 nova_compute[226829]: 2026-01-31 09:02:43.008 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:02:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:43.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:02:43 compute-2 ceph-mon[77282]: pgmap v3927: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 168 KiB/s rd, 964 KiB/s wr, 33 op/s
Jan 31 09:02:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:44.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:44 compute-2 nova_compute[226829]: 2026-01-31 09:02:44.643 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2159508482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:02:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:45.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:02:46 compute-2 sudo[331490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:02:46 compute-2 sudo[331490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:46 compute-2 sudo[331490]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:46.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:46 compute-2 sudo[331516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:02:46 compute-2 sudo[331516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:46 compute-2 sudo[331516]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:46 compute-2 ceph-mon[77282]: pgmap v3928: 305 pgs: 305 active+clean; 182 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.6 KiB/s rd, 18 KiB/s wr, 3 op/s
Jan 31 09:02:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:47.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:48 compute-2 nova_compute[226829]: 2026-01-31 09:02:48.009 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:48.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:48 compute-2 ceph-mon[77282]: pgmap v3929: 305 pgs: 305 active+clean; 152 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 18 KiB/s wr, 17 op/s
Jan 31 09:02:49 compute-2 podman[331542]: 2026-01-31 09:02:49.18377447 +0000 UTC m=+0.072342999 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:02:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:02:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:49.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:02:49 compute-2 nova_compute[226829]: 2026-01-31 09:02:49.700 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:50.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:50 compute-2 ceph-mon[77282]: pgmap v3930: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 18 KiB/s wr, 28 op/s
Jan 31 09:02:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:02:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:51.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:52.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:52 compute-2 ceph-mon[77282]: pgmap v3931: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 17 KiB/s wr, 28 op/s
Jan 31 09:02:53 compute-2 nova_compute[226829]: 2026-01-31 09:02:53.011 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 09:02:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1663783843' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:02:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 09:02:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1663783843' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:02:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:53.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1663783843' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:02:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1663783843' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:02:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:54.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:54 compute-2 ceph-mon[77282]: pgmap v3932: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 27 op/s
Jan 31 09:02:54 compute-2 nova_compute[226829]: 2026-01-31 09:02:54.702 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:55.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:55 compute-2 ceph-mon[77282]: pgmap v3933: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Jan 31 09:02:55 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:02:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:56.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:56 compute-2 podman[331573]: 2026-01-31 09:02:56.155720882 +0000 UTC m=+0.047809050 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 09:02:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:57.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:58 compute-2 nova_compute[226829]: 2026-01-31 09:02:58.012 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:02:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:58.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:02:58 compute-2 ceph-mon[77282]: pgmap v3934: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Jan 31 09:02:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:02:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:59.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:59 compute-2 nova_compute[226829]: 2026-01-31 09:02:59.704 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:03:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:00.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:03:00 compute-2 ceph-mon[77282]: pgmap v3935: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.1 KiB/s rd, 341 B/s wr, 11 op/s
Jan 31 09:03:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:03:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:01.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:01 compute-2 ceph-mon[77282]: pgmap v3936: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:03:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:02.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:03 compute-2 nova_compute[226829]: 2026-01-31 09:03:03.013 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:03.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:03:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:04.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:03:04 compute-2 ceph-mon[77282]: pgmap v3937: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:03:04 compute-2 nova_compute[226829]: 2026-01-31 09:03:04.706 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:05.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:03:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:06.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:06 compute-2 sudo[331599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:03:06 compute-2 sudo[331599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:06 compute-2 sudo[331599]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:06 compute-2 sudo[331624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:03:06 compute-2 sudo[331624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:06 compute-2 sudo[331624]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:06 compute-2 ceph-mon[77282]: pgmap v3938: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:03:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:06.942 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:03:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:06.943 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:03:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:06.943 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:03:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:07.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:07 compute-2 ceph-mon[77282]: pgmap v3939: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:03:08 compute-2 nova_compute[226829]: 2026-01-31 09:03:08.015 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:08.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:09 compute-2 nova_compute[226829]: 2026-01-31 09:03:09.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:03:09 compute-2 nova_compute[226829]: 2026-01-31 09:03:09.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 09:03:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:09.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:09 compute-2 nova_compute[226829]: 2026-01-31 09:03:09.595 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 09:03:09 compute-2 nova_compute[226829]: 2026-01-31 09:03:09.708 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:10.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:10 compute-2 nova_compute[226829]: 2026-01-31 09:03:10.590 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:03:10 compute-2 ceph-mon[77282]: pgmap v3940: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:03:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:03:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:03:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:11.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:03:11 compute-2 ceph-mon[77282]: pgmap v3941: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:03:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:12.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1424942873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:03:13 compute-2 nova_compute[226829]: 2026-01-31 09:03:13.016 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:13 compute-2 nova_compute[226829]: 2026-01-31 09:03:13.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:03:13 compute-2 nova_compute[226829]: 2026-01-31 09:03:13.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 09:03:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:13.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:14 compute-2 ceph-mon[77282]: pgmap v3942: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:03:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:14.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:14 compute-2 nova_compute[226829]: 2026-01-31 09:03:14.740 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:15.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:03:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:03:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:16.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:03:16 compute-2 nova_compute[226829]: 2026-01-31 09:03:16.643 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:03:16 compute-2 nova_compute[226829]: 2026-01-31 09:03:16.644 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:03:16 compute-2 ceph-mon[77282]: pgmap v3943: 305 pgs: 305 active+clean; 128 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.9 KiB/s rd, 317 KiB/s wr, 9 op/s
Jan 31 09:03:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:17.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:17 compute-2 ceph-mon[77282]: pgmap v3944: 305 pgs: 305 active+clean; 150 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 1.0 MiB/s wr, 22 op/s
Jan 31 09:03:18 compute-2 nova_compute[226829]: 2026-01-31 09:03:18.051 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:18.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:18 compute-2 nova_compute[226829]: 2026-01-31 09:03:18.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:03:19 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2283409527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:03:19 compute-2 nova_compute[226829]: 2026-01-31 09:03:19.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:03:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:19.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:19 compute-2 nova_compute[226829]: 2026-01-31 09:03:19.742 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:20.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:20 compute-2 podman[331656]: 2026-01-31 09:03:20.186728592 +0000 UTC m=+0.067564018 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 09:03:20 compute-2 ceph-mon[77282]: pgmap v3945: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Jan 31 09:03:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2384576305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:03:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/26884125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:03:20 compute-2 nova_compute[226829]: 2026-01-31 09:03:20.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:03:20 compute-2 nova_compute[226829]: 2026-01-31 09:03:20.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:03:20 compute-2 nova_compute[226829]: 2026-01-31 09:03:20.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:03:20 compute-2 nova_compute[226829]: 2026-01-31 09:03:20.505 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:03:20 compute-2 nova_compute[226829]: 2026-01-31 09:03:20.506 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:03:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:03:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3981064920' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:03:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:21.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:22.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:22 compute-2 ceph-mon[77282]: pgmap v3946: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:03:23 compute-2 nova_compute[226829]: 2026-01-31 09:03:23.052 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:23 compute-2 nova_compute[226829]: 2026-01-31 09:03:23.509 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:03:23 compute-2 nova_compute[226829]: 2026-01-31 09:03:23.537 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:03:23 compute-2 nova_compute[226829]: 2026-01-31 09:03:23.538 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:03:23 compute-2 nova_compute[226829]: 2026-01-31 09:03:23.538 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:03:23 compute-2 nova_compute[226829]: 2026-01-31 09:03:23.538 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:03:23 compute-2 nova_compute[226829]: 2026-01-31 09:03:23.538 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:03:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:03:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:23.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:03:23 compute-2 ceph-mon[77282]: pgmap v3947: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 31 09:03:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:03:23 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/500537900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:03:23 compute-2 nova_compute[226829]: 2026-01-31 09:03:23.965 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:03:24 compute-2 nova_compute[226829]: 2026-01-31 09:03:24.093 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:03:24 compute-2 nova_compute[226829]: 2026-01-31 09:03:24.095 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4081MB free_disk=20.967517852783203GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:03:24 compute-2 nova_compute[226829]: 2026-01-31 09:03:24.095 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:03:24 compute-2 nova_compute[226829]: 2026-01-31 09:03:24.095 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:03:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:24.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:24 compute-2 nova_compute[226829]: 2026-01-31 09:03:24.150 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:03:24 compute-2 nova_compute[226829]: 2026-01-31 09:03:24.151 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:03:24 compute-2 nova_compute[226829]: 2026-01-31 09:03:24.166 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:03:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:03:24 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3264900271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:03:24 compute-2 nova_compute[226829]: 2026-01-31 09:03:24.555 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.389s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:03:24 compute-2 nova_compute[226829]: 2026-01-31 09:03:24.560 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:03:24 compute-2 nova_compute[226829]: 2026-01-31 09:03:24.574 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:03:24 compute-2 nova_compute[226829]: 2026-01-31 09:03:24.576 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:03:24 compute-2 nova_compute[226829]: 2026-01-31 09:03:24.576 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.481s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:03:24 compute-2 nova_compute[226829]: 2026-01-31 09:03:24.744 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/500537900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:03:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3264900271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:03:25 compute-2 nova_compute[226829]: 2026-01-31 09:03:25.555 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:03:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:03:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:25.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:03:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:25.736 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=103, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=102) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:03:25 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:25.737 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:03:25 compute-2 nova_compute[226829]: 2026-01-31 09:03:25.737 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:03:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/485732851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:03:25 compute-2 ceph-mon[77282]: pgmap v3948: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 361 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Jan 31 09:03:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3663870704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:03:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:26.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:26 compute-2 sudo[331730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:03:26 compute-2 sudo[331730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:26 compute-2 sudo[331730]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:26 compute-2 podman[331754]: 2026-01-31 09:03:26.353868308 +0000 UTC m=+0.039111445 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 09:03:26 compute-2 sudo[331767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:03:26 compute-2 sudo[331767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:26 compute-2 sudo[331767]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:26 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:26.740 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '103'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:03:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:03:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:27.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:03:28 compute-2 nova_compute[226829]: 2026-01-31 09:03:28.053 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:03:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:28.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:03:28 compute-2 ceph-mon[77282]: pgmap v3949: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.5 MiB/s wr, 63 op/s
Jan 31 09:03:29 compute-2 sudo[331799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:03:29 compute-2 sudo[331799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:29 compute-2 sudo[331799]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:29 compute-2 sudo[331824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:03:29 compute-2 sudo[331824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:29 compute-2 sudo[331824]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:29 compute-2 sudo[331849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:03:29 compute-2 sudo[331849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:29 compute-2 sudo[331849]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:29 compute-2 sudo[331874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:03:29 compute-2 sudo[331874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:29.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:29 compute-2 nova_compute[226829]: 2026-01-31 09:03:29.746 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:29 compute-2 sudo[331874]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:29 compute-2 ceph-mon[77282]: pgmap v3950: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 765 KiB/s wr, 77 op/s
Jan 31 09:03:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:03:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:30.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:03:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:03:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 09:03:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:03:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:03:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:03:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:03:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:03:31 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:03:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:31.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:32 compute-2 ceph-mon[77282]: pgmap v3951: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Jan 31 09:03:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:32.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:33 compute-2 nova_compute[226829]: 2026-01-31 09:03:33.055 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:33.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:33 compute-2 nova_compute[226829]: 2026-01-31 09:03:33.629 226833 DEBUG oslo_concurrency.lockutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "89e7ab6e-ac0f-4666-bec6-39fd4827d4d9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:03:33 compute-2 nova_compute[226829]: 2026-01-31 09:03:33.630 226833 DEBUG oslo_concurrency.lockutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "89e7ab6e-ac0f-4666-bec6-39fd4827d4d9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:03:33 compute-2 nova_compute[226829]: 2026-01-31 09:03:33.654 226833 DEBUG nova.compute.manager [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 09:03:33 compute-2 nova_compute[226829]: 2026-01-31 09:03:33.748 226833 DEBUG oslo_concurrency.lockutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:03:33 compute-2 nova_compute[226829]: 2026-01-31 09:03:33.749 226833 DEBUG oslo_concurrency.lockutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:03:33 compute-2 nova_compute[226829]: 2026-01-31 09:03:33.756 226833 DEBUG nova.virt.hardware [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 09:03:33 compute-2 nova_compute[226829]: 2026-01-31 09:03:33.756 226833 INFO nova.compute.claims [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Claim successful on node compute-2.ctlplane.example.com
Jan 31 09:03:33 compute-2 nova_compute[226829]: 2026-01-31 09:03:33.876 226833 DEBUG oslo_concurrency.processutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:03:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:34.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:03:34 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/123604271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:03:34 compute-2 nova_compute[226829]: 2026-01-31 09:03:34.280 226833 DEBUG oslo_concurrency.processutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:03:34 compute-2 nova_compute[226829]: 2026-01-31 09:03:34.284 226833 DEBUG nova.compute.provider_tree [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:03:34 compute-2 nova_compute[226829]: 2026-01-31 09:03:34.303 226833 DEBUG nova.scheduler.client.report [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:03:34 compute-2 nova_compute[226829]: 2026-01-31 09:03:34.322 226833 DEBUG oslo_concurrency.lockutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:03:34 compute-2 nova_compute[226829]: 2026-01-31 09:03:34.323 226833 DEBUG nova.compute.manager [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 09:03:34 compute-2 nova_compute[226829]: 2026-01-31 09:03:34.384 226833 DEBUG nova.compute.manager [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 09:03:34 compute-2 nova_compute[226829]: 2026-01-31 09:03:34.385 226833 DEBUG nova.network.neutron [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 09:03:34 compute-2 nova_compute[226829]: 2026-01-31 09:03:34.405 226833 INFO nova.virt.libvirt.driver [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 09:03:34 compute-2 nova_compute[226829]: 2026-01-31 09:03:34.424 226833 DEBUG nova.compute.manager [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 09:03:34 compute-2 nova_compute[226829]: 2026-01-31 09:03:34.515 226833 DEBUG nova.compute.manager [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 09:03:34 compute-2 nova_compute[226829]: 2026-01-31 09:03:34.517 226833 DEBUG nova.virt.libvirt.driver [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 09:03:34 compute-2 nova_compute[226829]: 2026-01-31 09:03:34.518 226833 INFO nova.virt.libvirt.driver [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Creating image(s)
Jan 31 09:03:34 compute-2 nova_compute[226829]: 2026-01-31 09:03:34.560 226833 DEBUG nova.storage.rbd_utils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:03:34 compute-2 nova_compute[226829]: 2026-01-31 09:03:34.600 226833 DEBUG nova.storage.rbd_utils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:03:34 compute-2 ceph-mon[77282]: pgmap v3952: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:03:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/123604271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:03:34 compute-2 nova_compute[226829]: 2026-01-31 09:03:34.717 226833 DEBUG nova.storage.rbd_utils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:03:34 compute-2 nova_compute[226829]: 2026-01-31 09:03:34.722 226833 DEBUG oslo_concurrency.processutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:03:34 compute-2 nova_compute[226829]: 2026-01-31 09:03:34.754 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:34 compute-2 nova_compute[226829]: 2026-01-31 09:03:34.760 226833 DEBUG nova.policy [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4a56abd8fdd341ae88a99e102ab399de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 09:03:34 compute-2 nova_compute[226829]: 2026-01-31 09:03:34.807 226833 DEBUG oslo_concurrency.processutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:03:34 compute-2 nova_compute[226829]: 2026-01-31 09:03:34.808 226833 DEBUG oslo_concurrency.lockutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:03:34 compute-2 nova_compute[226829]: 2026-01-31 09:03:34.808 226833 DEBUG oslo_concurrency.lockutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:03:34 compute-2 nova_compute[226829]: 2026-01-31 09:03:34.809 226833 DEBUG oslo_concurrency.lockutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:03:34 compute-2 nova_compute[226829]: 2026-01-31 09:03:34.833 226833 DEBUG nova.storage.rbd_utils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:03:34 compute-2 nova_compute[226829]: 2026-01-31 09:03:34.836 226833 DEBUG oslo_concurrency.processutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:03:35 compute-2 nova_compute[226829]: 2026-01-31 09:03:35.514 226833 DEBUG nova.network.neutron [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Successfully created port: eff35ea5-3814-43cc-89a0-154862846bc4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 09:03:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:03:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:35.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:03:35 compute-2 sudo[332049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:03:35 compute-2 sudo[332049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:35 compute-2 sudo[332049]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:35 compute-2 sudo[332074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:03:35 compute-2 sudo[332074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:35 compute-2 sudo[332074]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:36.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:03:36 compute-2 nova_compute[226829]: 2026-01-31 09:03:36.415 226833 DEBUG nova.network.neutron [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Successfully updated port: eff35ea5-3814-43cc-89a0-154862846bc4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 09:03:36 compute-2 nova_compute[226829]: 2026-01-31 09:03:36.443 226833 DEBUG oslo_concurrency.lockutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "refresh_cache-89e7ab6e-ac0f-4666-bec6-39fd4827d4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:03:36 compute-2 nova_compute[226829]: 2026-01-31 09:03:36.444 226833 DEBUG oslo_concurrency.lockutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquired lock "refresh_cache-89e7ab6e-ac0f-4666-bec6-39fd4827d4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:03:36 compute-2 nova_compute[226829]: 2026-01-31 09:03:36.444 226833 DEBUG nova.network.neutron [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 09:03:36 compute-2 nova_compute[226829]: 2026-01-31 09:03:36.549 226833 DEBUG nova.compute.manager [req-e7c50445-75c8-4f57-a798-91c59037f2bd req-4c6cbf83-6e84-4b29-a1b8-4e3626f89cdc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Received event network-changed-eff35ea5-3814-43cc-89a0-154862846bc4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:03:36 compute-2 nova_compute[226829]: 2026-01-31 09:03:36.549 226833 DEBUG nova.compute.manager [req-e7c50445-75c8-4f57-a798-91c59037f2bd req-4c6cbf83-6e84-4b29-a1b8-4e3626f89cdc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Refreshing instance network info cache due to event network-changed-eff35ea5-3814-43cc-89a0-154862846bc4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:03:36 compute-2 nova_compute[226829]: 2026-01-31 09:03:36.549 226833 DEBUG oslo_concurrency.lockutils [req-e7c50445-75c8-4f57-a798-91c59037f2bd req-4c6cbf83-6e84-4b29-a1b8-4e3626f89cdc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-89e7ab6e-ac0f-4666-bec6-39fd4827d4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:03:36 compute-2 nova_compute[226829]: 2026-01-31 09:03:36.611 226833 DEBUG nova.network.neutron [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 09:03:36 compute-2 ceph-mon[77282]: pgmap v3953: 305 pgs: 305 active+clean; 183 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 80 op/s
Jan 31 09:03:36 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:03:36 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:03:36 compute-2 nova_compute[226829]: 2026-01-31 09:03:36.871 226833 DEBUG oslo_concurrency.processutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:03:36 compute-2 nova_compute[226829]: 2026-01-31 09:03:36.940 226833 DEBUG nova.storage.rbd_utils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] resizing rbd image 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 09:03:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:37.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.636 226833 DEBUG nova.network.neutron [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Updating instance_info_cache with network_info: [{"id": "eff35ea5-3814-43cc-89a0-154862846bc4", "address": "fa:16:3e:54:9d:f3", "network": {"id": "c7019d0b-5031-4941-b812-751bbbab3ab4", "bridge": "br-int", "label": "tempest-network-smoke--1956104466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeff35ea5-38", "ovs_interfaceid": "eff35ea5-3814-43cc-89a0-154862846bc4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.667 226833 DEBUG oslo_concurrency.lockutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Releasing lock "refresh_cache-89e7ab6e-ac0f-4666-bec6-39fd4827d4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.668 226833 DEBUG nova.compute.manager [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Instance network_info: |[{"id": "eff35ea5-3814-43cc-89a0-154862846bc4", "address": "fa:16:3e:54:9d:f3", "network": {"id": "c7019d0b-5031-4941-b812-751bbbab3ab4", "bridge": "br-int", "label": "tempest-network-smoke--1956104466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeff35ea5-38", "ovs_interfaceid": "eff35ea5-3814-43cc-89a0-154862846bc4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.668 226833 DEBUG oslo_concurrency.lockutils [req-e7c50445-75c8-4f57-a798-91c59037f2bd req-4c6cbf83-6e84-4b29-a1b8-4e3626f89cdc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-89e7ab6e-ac0f-4666-bec6-39fd4827d4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.669 226833 DEBUG nova.network.neutron [req-e7c50445-75c8-4f57-a798-91c59037f2bd req-4c6cbf83-6e84-4b29-a1b8-4e3626f89cdc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Refreshing network info cache for port eff35ea5-3814-43cc-89a0-154862846bc4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.893 226833 DEBUG nova.objects.instance [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'migration_context' on Instance uuid 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.906 226833 DEBUG nova.virt.libvirt.driver [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.907 226833 DEBUG nova.virt.libvirt.driver [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Ensure instance console log exists: /var/lib/nova/instances/89e7ab6e-ac0f-4666-bec6-39fd4827d4d9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.907 226833 DEBUG oslo_concurrency.lockutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.907 226833 DEBUG oslo_concurrency.lockutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.908 226833 DEBUG oslo_concurrency.lockutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.909 226833 DEBUG nova.virt.libvirt.driver [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Start _get_guest_xml network_info=[{"id": "eff35ea5-3814-43cc-89a0-154862846bc4", "address": "fa:16:3e:54:9d:f3", "network": {"id": "c7019d0b-5031-4941-b812-751bbbab3ab4", "bridge": "br-int", "label": "tempest-network-smoke--1956104466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeff35ea5-38", "ovs_interfaceid": "eff35ea5-3814-43cc-89a0-154862846bc4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.914 226833 WARNING nova.virt.libvirt.driver [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.917 226833 DEBUG nova.virt.libvirt.host [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.918 226833 DEBUG nova.virt.libvirt.host [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.920 226833 DEBUG nova.virt.libvirt.host [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.920 226833 DEBUG nova.virt.libvirt.host [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.922 226833 DEBUG nova.virt.libvirt.driver [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.922 226833 DEBUG nova.virt.hardware [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.922 226833 DEBUG nova.virt.hardware [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.922 226833 DEBUG nova.virt.hardware [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.923 226833 DEBUG nova.virt.hardware [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.923 226833 DEBUG nova.virt.hardware [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.923 226833 DEBUG nova.virt.hardware [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.923 226833 DEBUG nova.virt.hardware [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.923 226833 DEBUG nova.virt.hardware [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.924 226833 DEBUG nova.virt.hardware [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.924 226833 DEBUG nova.virt.hardware [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.924 226833 DEBUG nova.virt.hardware [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 09:03:37 compute-2 nova_compute[226829]: 2026-01-31 09:03:37.927 226833 DEBUG oslo_concurrency.processutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:03:37 compute-2 ceph-mon[77282]: pgmap v3954: 305 pgs: 305 active+clean; 201 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 91 op/s
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.056 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:38.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 09:03:38 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1318176720' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:03:38 compute-2 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.331 226833 DEBUG oslo_concurrency.processutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.368 226833 DEBUG nova.storage.rbd_utils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.371 226833 DEBUG oslo_concurrency.processutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:03:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 09:03:38 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3775936680' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.772 226833 DEBUG oslo_concurrency.processutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.774 226833 DEBUG nova.virt.libvirt.vif [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:03:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-933843354',display_name='tempest-TestNetworkBasicOps-server-933843354',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-933843354',id=214,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBBCTVa9BBM/k2qbT3EA7WWI3jUNc/MZdZbWPyKFq/tQecHoDCls/gJN11J6RyfywfyIFv6UrWn7cmkNqaetr4zpJMTjHdCuY6uhzUjBzIz0IaZ2No3JIE1TQLBweQlIJQ==',key_name='tempest-TestNetworkBasicOps-814625961',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-keolc301',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:03:34Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=89e7ab6e-ac0f-4666-bec6-39fd4827d4d9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eff35ea5-3814-43cc-89a0-154862846bc4", "address": "fa:16:3e:54:9d:f3", "network": {"id": "c7019d0b-5031-4941-b812-751bbbab3ab4", "bridge": "br-int", "label": "tempest-network-smoke--1956104466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeff35ea5-38", "ovs_interfaceid": "eff35ea5-3814-43cc-89a0-154862846bc4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.775 226833 DEBUG nova.network.os_vif_util [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "eff35ea5-3814-43cc-89a0-154862846bc4", "address": "fa:16:3e:54:9d:f3", "network": {"id": "c7019d0b-5031-4941-b812-751bbbab3ab4", "bridge": "br-int", "label": "tempest-network-smoke--1956104466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeff35ea5-38", "ovs_interfaceid": "eff35ea5-3814-43cc-89a0-154862846bc4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.776 226833 DEBUG nova.network.os_vif_util [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:9d:f3,bridge_name='br-int',has_traffic_filtering=True,id=eff35ea5-3814-43cc-89a0-154862846bc4,network=Network(c7019d0b-5031-4941-b812-751bbbab3ab4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeff35ea5-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.777 226833 DEBUG nova.objects.instance [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.796 226833 DEBUG nova.virt.libvirt.driver [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] End _get_guest_xml xml=<domain type="kvm">
Jan 31 09:03:38 compute-2 nova_compute[226829]:   <uuid>89e7ab6e-ac0f-4666-bec6-39fd4827d4d9</uuid>
Jan 31 09:03:38 compute-2 nova_compute[226829]:   <name>instance-000000d6</name>
Jan 31 09:03:38 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 09:03:38 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 09:03:38 compute-2 nova_compute[226829]:   <metadata>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <nova:name>tempest-TestNetworkBasicOps-server-933843354</nova:name>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 09:03:37</nova:creationTime>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 09:03:38 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 09:03:38 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 09:03:38 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 09:03:38 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 09:03:38 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 09:03:38 compute-2 nova_compute[226829]:         <nova:user uuid="4a56abd8fdd341ae88a99e102ab399de">tempest-TestNetworkBasicOps-1691550221-project-member</nova:user>
Jan 31 09:03:38 compute-2 nova_compute[226829]:         <nova:project uuid="0d55ec1a5544450dba4e4fd1426395d7">tempest-TestNetworkBasicOps-1691550221</nova:project>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 09:03:38 compute-2 nova_compute[226829]:         <nova:port uuid="eff35ea5-3814-43cc-89a0-154862846bc4">
Jan 31 09:03:38 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 09:03:38 compute-2 nova_compute[226829]:   </metadata>
Jan 31 09:03:38 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <system>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <entry name="serial">89e7ab6e-ac0f-4666-bec6-39fd4827d4d9</entry>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <entry name="uuid">89e7ab6e-ac0f-4666-bec6-39fd4827d4d9</entry>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     </system>
Jan 31 09:03:38 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 09:03:38 compute-2 nova_compute[226829]:   <os>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:   </os>
Jan 31 09:03:38 compute-2 nova_compute[226829]:   <features>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <apic/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:   </features>
Jan 31 09:03:38 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:   </clock>
Jan 31 09:03:38 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:   </cpu>
Jan 31 09:03:38 compute-2 nova_compute[226829]:   <devices>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/89e7ab6e-ac0f-4666-bec6-39fd4827d4d9_disk">
Jan 31 09:03:38 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       </source>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 09:03:38 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       </auth>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     </disk>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/89e7ab6e-ac0f-4666-bec6-39fd4827d4d9_disk.config">
Jan 31 09:03:38 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       </source>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 09:03:38 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       </auth>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     </disk>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:54:9d:f3"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <target dev="tapeff35ea5-38"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     </interface>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/89e7ab6e-ac0f-4666-bec6-39fd4827d4d9/console.log" append="off"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     </serial>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <video>
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     </video>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     </rng>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 09:03:38 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 09:03:38 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 09:03:38 compute-2 nova_compute[226829]:   </devices>
Jan 31 09:03:38 compute-2 nova_compute[226829]: </domain>
Jan 31 09:03:38 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.798 226833 DEBUG nova.compute.manager [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Preparing to wait for external event network-vif-plugged-eff35ea5-3814-43cc-89a0-154862846bc4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.798 226833 DEBUG oslo_concurrency.lockutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "89e7ab6e-ac0f-4666-bec6-39fd4827d4d9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.799 226833 DEBUG oslo_concurrency.lockutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "89e7ab6e-ac0f-4666-bec6-39fd4827d4d9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.799 226833 DEBUG oslo_concurrency.lockutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "89e7ab6e-ac0f-4666-bec6-39fd4827d4d9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.800 226833 DEBUG nova.virt.libvirt.vif [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:03:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-933843354',display_name='tempest-TestNetworkBasicOps-server-933843354',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-933843354',id=214,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBBCTVa9BBM/k2qbT3EA7WWI3jUNc/MZdZbWPyKFq/tQecHoDCls/gJN11J6RyfywfyIFv6UrWn7cmkNqaetr4zpJMTjHdCuY6uhzUjBzIz0IaZ2No3JIE1TQLBweQlIJQ==',key_name='tempest-TestNetworkBasicOps-814625961',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-keolc301',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:03:34Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=89e7ab6e-ac0f-4666-bec6-39fd4827d4d9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eff35ea5-3814-43cc-89a0-154862846bc4", "address": "fa:16:3e:54:9d:f3", "network": {"id": "c7019d0b-5031-4941-b812-751bbbab3ab4", "bridge": "br-int", "label": "tempest-network-smoke--1956104466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeff35ea5-38", "ovs_interfaceid": "eff35ea5-3814-43cc-89a0-154862846bc4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.800 226833 DEBUG nova.network.os_vif_util [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "eff35ea5-3814-43cc-89a0-154862846bc4", "address": "fa:16:3e:54:9d:f3", "network": {"id": "c7019d0b-5031-4941-b812-751bbbab3ab4", "bridge": "br-int", "label": "tempest-network-smoke--1956104466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeff35ea5-38", "ovs_interfaceid": "eff35ea5-3814-43cc-89a0-154862846bc4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.800 226833 DEBUG nova.network.os_vif_util [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:9d:f3,bridge_name='br-int',has_traffic_filtering=True,id=eff35ea5-3814-43cc-89a0-154862846bc4,network=Network(c7019d0b-5031-4941-b812-751bbbab3ab4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeff35ea5-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.801 226833 DEBUG os_vif [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:9d:f3,bridge_name='br-int',has_traffic_filtering=True,id=eff35ea5-3814-43cc-89a0-154862846bc4,network=Network(c7019d0b-5031-4941-b812-751bbbab3ab4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeff35ea5-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.801 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.802 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.802 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.809 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.809 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeff35ea5-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.809 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeff35ea5-38, col_values=(('external_ids', {'iface-id': 'eff35ea5-3814-43cc-89a0-154862846bc4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:9d:f3', 'vm-uuid': '89e7ab6e-ac0f-4666-bec6-39fd4827d4d9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.811 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:38 compute-2 NetworkManager[48999]: <info>  [1769850218.8131] manager: (tapeff35ea5-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/411)
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.813 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.816 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.817 226833 INFO os_vif [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:9d:f3,bridge_name='br-int',has_traffic_filtering=True,id=eff35ea5-3814-43cc-89a0-154862846bc4,network=Network(c7019d0b-5031-4941-b812-751bbbab3ab4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeff35ea5-38')
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.874 226833 DEBUG nova.virt.libvirt.driver [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.874 226833 DEBUG nova.virt.libvirt.driver [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.875 226833 DEBUG nova.virt.libvirt.driver [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No VIF found with MAC fa:16:3e:54:9d:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.875 226833 INFO nova.virt.libvirt.driver [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Using config drive
Jan 31 09:03:38 compute-2 nova_compute[226829]: 2026-01-31 09:03:38.897 226833 DEBUG nova.storage.rbd_utils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:03:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1318176720' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:03:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3775936680' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:03:39 compute-2 nova_compute[226829]: 2026-01-31 09:03:39.341 226833 INFO nova.virt.libvirt.driver [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Creating config drive at /var/lib/nova/instances/89e7ab6e-ac0f-4666-bec6-39fd4827d4d9/disk.config
Jan 31 09:03:39 compute-2 nova_compute[226829]: 2026-01-31 09:03:39.346 226833 DEBUG oslo_concurrency.processutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/89e7ab6e-ac0f-4666-bec6-39fd4827d4d9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp_qoks_os execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:03:39 compute-2 nova_compute[226829]: 2026-01-31 09:03:39.465 226833 DEBUG nova.network.neutron [req-e7c50445-75c8-4f57-a798-91c59037f2bd req-4c6cbf83-6e84-4b29-a1b8-4e3626f89cdc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Updated VIF entry in instance network info cache for port eff35ea5-3814-43cc-89a0-154862846bc4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:03:39 compute-2 nova_compute[226829]: 2026-01-31 09:03:39.466 226833 DEBUG nova.network.neutron [req-e7c50445-75c8-4f57-a798-91c59037f2bd req-4c6cbf83-6e84-4b29-a1b8-4e3626f89cdc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Updating instance_info_cache with network_info: [{"id": "eff35ea5-3814-43cc-89a0-154862846bc4", "address": "fa:16:3e:54:9d:f3", "network": {"id": "c7019d0b-5031-4941-b812-751bbbab3ab4", "bridge": "br-int", "label": "tempest-network-smoke--1956104466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeff35ea5-38", "ovs_interfaceid": "eff35ea5-3814-43cc-89a0-154862846bc4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:03:39 compute-2 nova_compute[226829]: 2026-01-31 09:03:39.480 226833 DEBUG oslo_concurrency.lockutils [req-e7c50445-75c8-4f57-a798-91c59037f2bd req-4c6cbf83-6e84-4b29-a1b8-4e3626f89cdc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-89e7ab6e-ac0f-4666-bec6-39fd4827d4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:03:39 compute-2 nova_compute[226829]: 2026-01-31 09:03:39.485 226833 DEBUG oslo_concurrency.processutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/89e7ab6e-ac0f-4666-bec6-39fd4827d4d9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp_qoks_os" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:03:39 compute-2 nova_compute[226829]: 2026-01-31 09:03:39.516 226833 DEBUG nova.storage.rbd_utils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:03:39 compute-2 nova_compute[226829]: 2026-01-31 09:03:39.520 226833 DEBUG oslo_concurrency.processutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/89e7ab6e-ac0f-4666-bec6-39fd4827d4d9/disk.config 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:03:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:03:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:39.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:03:39 compute-2 nova_compute[226829]: 2026-01-31 09:03:39.678 226833 DEBUG oslo_concurrency.processutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/89e7ab6e-ac0f-4666-bec6-39fd4827d4d9/disk.config 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:03:39 compute-2 nova_compute[226829]: 2026-01-31 09:03:39.679 226833 INFO nova.virt.libvirt.driver [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Deleting local config drive /var/lib/nova/instances/89e7ab6e-ac0f-4666-bec6-39fd4827d4d9/disk.config because it was imported into RBD.
Jan 31 09:03:39 compute-2 kernel: tapeff35ea5-38: entered promiscuous mode
Jan 31 09:03:39 compute-2 NetworkManager[48999]: <info>  [1769850219.7374] manager: (tapeff35ea5-38): new Tun device (/org/freedesktop/NetworkManager/Devices/412)
Jan 31 09:03:39 compute-2 ovn_controller[133834]: 2026-01-31T09:03:39Z|00825|binding|INFO|Claiming lport eff35ea5-3814-43cc-89a0-154862846bc4 for this chassis.
Jan 31 09:03:39 compute-2 ovn_controller[133834]: 2026-01-31T09:03:39Z|00826|binding|INFO|eff35ea5-3814-43cc-89a0-154862846bc4: Claiming fa:16:3e:54:9d:f3 10.100.0.10
Jan 31 09:03:39 compute-2 nova_compute[226829]: 2026-01-31 09:03:39.740 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:39 compute-2 nova_compute[226829]: 2026-01-31 09:03:39.749 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:39 compute-2 nova_compute[226829]: 2026-01-31 09:03:39.760 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:39 compute-2 nova_compute[226829]: 2026-01-31 09:03:39.761 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:39 compute-2 systemd-udevd[332305]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 09:03:39 compute-2 NetworkManager[48999]: <info>  [1769850219.7641] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/413)
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:39.764 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:9d:f3 10.100.0.10'], port_security=['fa:16:3e:54:9d:f3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '89e7ab6e-ac0f-4666-bec6-39fd4827d4d9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c7019d0b-5031-4941-b812-751bbbab3ab4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bbe7be41-fd3b-442e-aaab-98ad359e7174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7722569a-62de-45e9-b1af-a80aadef9b5c, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=eff35ea5-3814-43cc-89a0-154862846bc4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:03:39 compute-2 NetworkManager[48999]: <info>  [1769850219.7651] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/414)
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:39.765 143841 INFO neutron.agent.ovn.metadata.agent [-] Port eff35ea5-3814-43cc-89a0-154862846bc4 in datapath c7019d0b-5031-4941-b812-751bbbab3ab4 bound to our chassis
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:39.767 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c7019d0b-5031-4941-b812-751bbbab3ab4
Jan 31 09:03:39 compute-2 NetworkManager[48999]: <info>  [1769850219.7784] device (tapeff35ea5-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 09:03:39 compute-2 NetworkManager[48999]: <info>  [1769850219.7792] device (tapeff35ea5-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:39.779 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e1684e-ec87-4cc2-bfe5-992e6568cd09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:39.780 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc7019d0b-51 in ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:39.783 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc7019d0b-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 09:03:39 compute-2 systemd-machined[195142]: New machine qemu-94-instance-000000d6.
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:39.783 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6653a130-324b-4431-b105-e30a8e390596]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:39.784 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[fae30a4e-8840-4e63-bfcb-340a417dbb43]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:03:39 compute-2 nova_compute[226829]: 2026-01-31 09:03:39.789 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:39 compute-2 nova_compute[226829]: 2026-01-31 09:03:39.797 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:39 compute-2 systemd[1]: Started Virtual Machine qemu-94-instance-000000d6.
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:39.798 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[a8f263f9-7048-473a-9485-e86ff7dbf227]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:03:39 compute-2 ovn_controller[133834]: 2026-01-31T09:03:39Z|00827|binding|INFO|Setting lport eff35ea5-3814-43cc-89a0-154862846bc4 ovn-installed in OVS
Jan 31 09:03:39 compute-2 ovn_controller[133834]: 2026-01-31T09:03:39Z|00828|binding|INFO|Setting lport eff35ea5-3814-43cc-89a0-154862846bc4 up in Southbound
Jan 31 09:03:39 compute-2 nova_compute[226829]: 2026-01-31 09:03:39.802 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:39.808 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[feadb057-f3c4-4e67-b4ee-ee2b9ff7a2f1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:39.831 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[853ff37c-0858-4a5d-b591-068defcc950c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:39.836 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f06bfda4-0036-4b7f-b15e-dea7b88ffbea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:03:39 compute-2 NetworkManager[48999]: <info>  [1769850219.8376] manager: (tapc7019d0b-50): new Veth device (/org/freedesktop/NetworkManager/Devices/415)
Jan 31 09:03:39 compute-2 systemd-udevd[332311]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:39.859 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d0e967-15df-44af-9a3b-9f54127eb3ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:39.864 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[abeeab80-672c-41b6-a12e-9ad03291567d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:03:39 compute-2 NetworkManager[48999]: <info>  [1769850219.8794] device (tapc7019d0b-50): carrier: link connected
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:39.883 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[d2e989ca-9ae4-4792-959a-d9db73b5fe78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:39.896 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[bbc27543-497a-485a-890a-abebd44cb364]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc7019d0b-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:07:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 259], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1047235, 'reachable_time': 27550, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332341, 'error': None, 'target': 'ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:39.908 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1d7b2378-34a7-4de6-b032-45340e7d02df]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe55:745'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1047235, 'tstamp': 1047235}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332342, 'error': None, 'target': 'ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:39.919 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7b8f625f-4015-4784-ba6e-25fa55457bd8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc7019d0b-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:07:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 259], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1047235, 'reachable_time': 27550, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 332343, 'error': None, 'target': 'ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:39.938 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f3420c2f-98fb-4280-ba98-b5eb579eab7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:39.978 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[35c36301-7824-4eca-9013-bcd791e7e99e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:39.980 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc7019d0b-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:39.980 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:39.980 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc7019d0b-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:03:39 compute-2 NetworkManager[48999]: <info>  [1769850219.9828] manager: (tapc7019d0b-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/416)
Jan 31 09:03:39 compute-2 kernel: tapc7019d0b-50: entered promiscuous mode
Jan 31 09:03:39 compute-2 nova_compute[226829]: 2026-01-31 09:03:39.983 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:39.984 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc7019d0b-50, col_values=(('external_ids', {'iface-id': '7680dcda-ece7-42a1-b1cc-bd8687a1f61f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:03:39 compute-2 nova_compute[226829]: 2026-01-31 09:03:39.985 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:39 compute-2 nova_compute[226829]: 2026-01-31 09:03:39.986 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:39 compute-2 ovn_controller[133834]: 2026-01-31T09:03:39Z|00829|binding|INFO|Releasing lport 7680dcda-ece7-42a1-b1cc-bd8687a1f61f from this chassis (sb_readonly=0)
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:39.987 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c7019d0b-5031-4941-b812-751bbbab3ab4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c7019d0b-5031-4941-b812-751bbbab3ab4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:39.987 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0ef413a3-a7e0-4c07-8355-7f3efe5dedbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:39.988 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: global
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-c7019d0b-5031-4941-b812-751bbbab3ab4
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/c7019d0b-5031-4941-b812-751bbbab3ab4.pid.haproxy
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID c7019d0b-5031-4941-b812-751bbbab3ab4
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 09:03:39 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:03:39.989 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4', 'env', 'PROCESS_TAG=haproxy-c7019d0b-5031-4941-b812-751bbbab3ab4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c7019d0b-5031-4941-b812-751bbbab3ab4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 09:03:39 compute-2 nova_compute[226829]: 2026-01-31 09:03:39.990 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:39 compute-2 ceph-mon[77282]: pgmap v3955: 305 pgs: 305 active+clean; 226 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 92 op/s
Jan 31 09:03:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:40.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.137 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769850220.1366174, 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.137 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] VM Started (Lifecycle Event)
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.161 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.166 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769850220.136889, 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.166 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] VM Paused (Lifecycle Event)
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.193 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.196 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.223 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 09:03:40 compute-2 podman[332418]: 2026-01-31 09:03:40.294598243 +0000 UTC m=+0.040155562 container create 11b6034fc52bc05e0d9fbf649ff1cf21163c0677eb0e29f46b114cad35a29f5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:03:40 compute-2 systemd[1]: Started libpod-conmon-11b6034fc52bc05e0d9fbf649ff1cf21163c0677eb0e29f46b114cad35a29f5a.scope.
Jan 31 09:03:40 compute-2 systemd[1]: Started libcrun container.
Jan 31 09:03:40 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7020d9b2c61307267eb4b3006e31c84d4a9ddb35bd0c8dff9b9fb86ff5c4f16/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 09:03:40 compute-2 podman[332418]: 2026-01-31 09:03:40.368907515 +0000 UTC m=+0.114464854 container init 11b6034fc52bc05e0d9fbf649ff1cf21163c0677eb0e29f46b114cad35a29f5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 09:03:40 compute-2 podman[332418]: 2026-01-31 09:03:40.272578684 +0000 UTC m=+0.018136023 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 09:03:40 compute-2 podman[332418]: 2026-01-31 09:03:40.374194658 +0000 UTC m=+0.119751977 container start 11b6034fc52bc05e0d9fbf649ff1cf21163c0677eb0e29f46b114cad35a29f5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 09:03:40 compute-2 neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4[332434]: [NOTICE]   (332438) : New worker (332440) forked
Jan 31 09:03:40 compute-2 neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4[332434]: [NOTICE]   (332438) : Loading success.
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.612 226833 DEBUG nova.compute.manager [req-76e39b10-91f5-46e9-a438-eb77ab1a68cc req-9493127d-52af-4ec1-92f4-31588b50842f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Received event network-vif-plugged-eff35ea5-3814-43cc-89a0-154862846bc4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.613 226833 DEBUG oslo_concurrency.lockutils [req-76e39b10-91f5-46e9-a438-eb77ab1a68cc req-9493127d-52af-4ec1-92f4-31588b50842f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "89e7ab6e-ac0f-4666-bec6-39fd4827d4d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.613 226833 DEBUG oslo_concurrency.lockutils [req-76e39b10-91f5-46e9-a438-eb77ab1a68cc req-9493127d-52af-4ec1-92f4-31588b50842f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "89e7ab6e-ac0f-4666-bec6-39fd4827d4d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.614 226833 DEBUG oslo_concurrency.lockutils [req-76e39b10-91f5-46e9-a438-eb77ab1a68cc req-9493127d-52af-4ec1-92f4-31588b50842f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "89e7ab6e-ac0f-4666-bec6-39fd4827d4d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.614 226833 DEBUG nova.compute.manager [req-76e39b10-91f5-46e9-a438-eb77ab1a68cc req-9493127d-52af-4ec1-92f4-31588b50842f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Processing event network-vif-plugged-eff35ea5-3814-43cc-89a0-154862846bc4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.615 226833 DEBUG nova.compute.manager [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.619 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769850220.6191056, 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.619 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] VM Resumed (Lifecycle Event)
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.622 226833 DEBUG nova.virt.libvirt.driver [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.628 226833 INFO nova.virt.libvirt.driver [-] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Instance spawned successfully.
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.629 226833 DEBUG nova.virt.libvirt.driver [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.651 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.659 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.664 226833 DEBUG nova.virt.libvirt.driver [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.664 226833 DEBUG nova.virt.libvirt.driver [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.665 226833 DEBUG nova.virt.libvirt.driver [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.666 226833 DEBUG nova.virt.libvirt.driver [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.667 226833 DEBUG nova.virt.libvirt.driver [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.668 226833 DEBUG nova.virt.libvirt.driver [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.678 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.758 226833 INFO nova.compute.manager [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Took 6.24 seconds to spawn the instance on the hypervisor.
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.759 226833 DEBUG nova.compute.manager [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.839 226833 INFO nova.compute.manager [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Took 7.12 seconds to build instance.
Jan 31 09:03:40 compute-2 nova_compute[226829]: 2026-01-31 09:03:40.872 226833 DEBUG oslo_concurrency.lockutils [None req-8e7bea77-5f70-478e-af1d-7ad895640678 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "89e7ab6e-ac0f-4666-bec6-39fd4827d4d9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:03:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:03:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:03:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:41.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:03:41 compute-2 ceph-mon[77282]: pgmap v3956: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 357 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 31 09:03:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:03:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:42.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:03:42 compute-2 nova_compute[226829]: 2026-01-31 09:03:42.681 226833 DEBUG nova.compute.manager [req-5224b6d1-794f-4bdd-b526-d0deb96fb93f req-7f9c51d2-e8e3-408b-aca2-9880973b92c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Received event network-vif-plugged-eff35ea5-3814-43cc-89a0-154862846bc4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:03:42 compute-2 nova_compute[226829]: 2026-01-31 09:03:42.683 226833 DEBUG oslo_concurrency.lockutils [req-5224b6d1-794f-4bdd-b526-d0deb96fb93f req-7f9c51d2-e8e3-408b-aca2-9880973b92c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "89e7ab6e-ac0f-4666-bec6-39fd4827d4d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:03:42 compute-2 nova_compute[226829]: 2026-01-31 09:03:42.683 226833 DEBUG oslo_concurrency.lockutils [req-5224b6d1-794f-4bdd-b526-d0deb96fb93f req-7f9c51d2-e8e3-408b-aca2-9880973b92c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "89e7ab6e-ac0f-4666-bec6-39fd4827d4d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:03:42 compute-2 nova_compute[226829]: 2026-01-31 09:03:42.683 226833 DEBUG oslo_concurrency.lockutils [req-5224b6d1-794f-4bdd-b526-d0deb96fb93f req-7f9c51d2-e8e3-408b-aca2-9880973b92c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "89e7ab6e-ac0f-4666-bec6-39fd4827d4d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:03:42 compute-2 nova_compute[226829]: 2026-01-31 09:03:42.684 226833 DEBUG nova.compute.manager [req-5224b6d1-794f-4bdd-b526-d0deb96fb93f req-7f9c51d2-e8e3-408b-aca2-9880973b92c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] No waiting events found dispatching network-vif-plugged-eff35ea5-3814-43cc-89a0-154862846bc4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:03:42 compute-2 nova_compute[226829]: 2026-01-31 09:03:42.684 226833 WARNING nova.compute.manager [req-5224b6d1-794f-4bdd-b526-d0deb96fb93f req-7f9c51d2-e8e3-408b-aca2-9880973b92c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Received unexpected event network-vif-plugged-eff35ea5-3814-43cc-89a0-154862846bc4 for instance with vm_state active and task_state None.
Jan 31 09:03:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:03:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:43.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:03:43 compute-2 nova_compute[226829]: 2026-01-31 09:03:43.811 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:44.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:44 compute-2 ceph-mon[77282]: pgmap v3957: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 126 op/s
Jan 31 09:03:44 compute-2 nova_compute[226829]: 2026-01-31 09:03:44.751 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:44 compute-2 nova_compute[226829]: 2026-01-31 09:03:44.761 226833 DEBUG nova.compute.manager [req-b1e6ade7-c7fd-4032-abfe-d611f206f3fd req-7d197e44-3ad6-411c-861f-abf079ed19db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Received event network-changed-eff35ea5-3814-43cc-89a0-154862846bc4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:03:44 compute-2 nova_compute[226829]: 2026-01-31 09:03:44.762 226833 DEBUG nova.compute.manager [req-b1e6ade7-c7fd-4032-abfe-d611f206f3fd req-7d197e44-3ad6-411c-861f-abf079ed19db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Refreshing instance network info cache due to event network-changed-eff35ea5-3814-43cc-89a0-154862846bc4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:03:44 compute-2 nova_compute[226829]: 2026-01-31 09:03:44.762 226833 DEBUG oslo_concurrency.lockutils [req-b1e6ade7-c7fd-4032-abfe-d611f206f3fd req-7d197e44-3ad6-411c-861f-abf079ed19db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-89e7ab6e-ac0f-4666-bec6-39fd4827d4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:03:44 compute-2 nova_compute[226829]: 2026-01-31 09:03:44.762 226833 DEBUG oslo_concurrency.lockutils [req-b1e6ade7-c7fd-4032-abfe-d611f206f3fd req-7d197e44-3ad6-411c-861f-abf079ed19db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-89e7ab6e-ac0f-4666-bec6-39fd4827d4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:03:44 compute-2 nova_compute[226829]: 2026-01-31 09:03:44.763 226833 DEBUG nova.network.neutron [req-b1e6ade7-c7fd-4032-abfe-d611f206f3fd req-7d197e44-3ad6-411c-861f-abf079ed19db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Refreshing network info cache for port eff35ea5-3814-43cc-89a0-154862846bc4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:03:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:45.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:46.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:03:46 compute-2 sudo[332452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:03:46 compute-2 sudo[332452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:46 compute-2 sudo[332452]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:46 compute-2 nova_compute[226829]: 2026-01-31 09:03:46.433 226833 DEBUG nova.network.neutron [req-b1e6ade7-c7fd-4032-abfe-d611f206f3fd req-7d197e44-3ad6-411c-861f-abf079ed19db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Updated VIF entry in instance network info cache for port eff35ea5-3814-43cc-89a0-154862846bc4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:03:46 compute-2 nova_compute[226829]: 2026-01-31 09:03:46.435 226833 DEBUG nova.network.neutron [req-b1e6ade7-c7fd-4032-abfe-d611f206f3fd req-7d197e44-3ad6-411c-861f-abf079ed19db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Updating instance_info_cache with network_info: [{"id": "eff35ea5-3814-43cc-89a0-154862846bc4", "address": "fa:16:3e:54:9d:f3", "network": {"id": "c7019d0b-5031-4941-b812-751bbbab3ab4", "bridge": "br-int", "label": "tempest-network-smoke--1956104466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeff35ea5-38", "ovs_interfaceid": "eff35ea5-3814-43cc-89a0-154862846bc4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:03:46 compute-2 nova_compute[226829]: 2026-01-31 09:03:46.461 226833 DEBUG oslo_concurrency.lockutils [req-b1e6ade7-c7fd-4032-abfe-d611f206f3fd req-7d197e44-3ad6-411c-861f-abf079ed19db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-89e7ab6e-ac0f-4666-bec6-39fd4827d4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:03:46 compute-2 sudo[332477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:03:46 compute-2 sudo[332477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:46 compute-2 sudo[332477]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:46 compute-2 ceph-mon[77282]: pgmap v3958: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 145 op/s
Jan 31 09:03:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:03:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:47.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:03:47 compute-2 ceph-mon[77282]: pgmap v3959: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 147 op/s
Jan 31 09:03:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:03:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:48.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:03:48 compute-2 nova_compute[226829]: 2026-01-31 09:03:48.815 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:49.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:49 compute-2 nova_compute[226829]: 2026-01-31 09:03:49.753 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:49 compute-2 ceph-mon[77282]: pgmap v3960: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.7 MiB/s wr, 124 op/s
Jan 31 09:03:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:50.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:51 compute-2 podman[332505]: 2026-01-31 09:03:51.187977801 +0000 UTC m=+0.071297190 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Jan 31 09:03:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:03:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:03:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:51.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:03:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:52.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:52 compute-2 nova_compute[226829]: 2026-01-31 09:03:52.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:03:52 compute-2 ceph-mon[77282]: pgmap v3961: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 358 KiB/s wr, 97 op/s
Jan 31 09:03:52 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [O-0] New memtable created with log file: #60. Immutable memtables: 0.
Jan 31 09:03:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1116350203' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:03:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1116350203' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:03:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:03:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:53.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:03:53 compute-2 ovn_controller[133834]: 2026-01-31T09:03:53Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:54:9d:f3 10.100.0.10
Jan 31 09:03:53 compute-2 ovn_controller[133834]: 2026-01-31T09:03:53Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:54:9d:f3 10.100.0.10
Jan 31 09:03:53 compute-2 nova_compute[226829]: 2026-01-31 09:03:53.817 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:03:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:54.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:03:54 compute-2 ceph-mon[77282]: pgmap v3962: 305 pgs: 305 active+clean; 247 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 245 KiB/s wr, 74 op/s
Jan 31 09:03:54 compute-2 nova_compute[226829]: 2026-01-31 09:03:54.754 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:55.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:55 compute-2 ceph-mon[77282]: pgmap v3963: 305 pgs: 305 active+clean; 256 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 762 KiB/s wr, 60 op/s
Jan 31 09:03:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:03:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:56.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:57 compute-2 podman[332534]: 2026-01-31 09:03:57.166806656 +0000 UTC m=+0.057107805 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 31 09:03:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:57.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:58 compute-2 ceph-mon[77282]: pgmap v3964: 305 pgs: 305 active+clean; 269 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 701 KiB/s rd, 1.4 MiB/s wr, 50 op/s
Jan 31 09:03:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:58.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:58 compute-2 nova_compute[226829]: 2026-01-31 09:03:58.824 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:03:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:03:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:59.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:03:59 compute-2 nova_compute[226829]: 2026-01-31 09:03:59.756 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:04:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:00.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:04:00 compute-2 ceph-mon[77282]: pgmap v3965: 305 pgs: 305 active+clean; 277 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 338 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 31 09:04:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:04:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:04:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:01.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:04:01 compute-2 ceph-mon[77282]: pgmap v3966: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 09:04:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:02.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:03.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:03 compute-2 nova_compute[226829]: 2026-01-31 09:04:03.827 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:04 compute-2 ceph-mon[77282]: pgmap v3967: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 09:04:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:04.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:04 compute-2 nova_compute[226829]: 2026-01-31 09:04:04.759 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:05.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:04:06 compute-2 sudo[332560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:04:06 compute-2 sudo[332560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:06 compute-2 sudo[332560]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:06 compute-2 sudo[332585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:04:06 compute-2 sudo[332585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:06 compute-2 sudo[332585]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:06.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:06 compute-2 ceph-mon[77282]: pgmap v3968: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 333 KiB/s rd, 1.9 MiB/s wr, 58 op/s
Jan 31 09:04:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:06.946 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:04:06 compute-2 nova_compute[226829]: 2026-01-31 09:04:06.947 226833 DEBUG nova.compute.manager [req-66ed4203-79dd-4a39-808f-7067c4ed1fa2 req-63688806-dc60-4ab8-8a6d-2b0cc5c89e99 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Received event network-changed-eff35ea5-3814-43cc-89a0-154862846bc4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:04:06 compute-2 nova_compute[226829]: 2026-01-31 09:04:06.947 226833 DEBUG nova.compute.manager [req-66ed4203-79dd-4a39-808f-7067c4ed1fa2 req-63688806-dc60-4ab8-8a6d-2b0cc5c89e99 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Refreshing instance network info cache due to event network-changed-eff35ea5-3814-43cc-89a0-154862846bc4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:04:06 compute-2 nova_compute[226829]: 2026-01-31 09:04:06.947 226833 DEBUG oslo_concurrency.lockutils [req-66ed4203-79dd-4a39-808f-7067c4ed1fa2 req-63688806-dc60-4ab8-8a6d-2b0cc5c89e99 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-89e7ab6e-ac0f-4666-bec6-39fd4827d4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:04:06 compute-2 nova_compute[226829]: 2026-01-31 09:04:06.948 226833 DEBUG oslo_concurrency.lockutils [req-66ed4203-79dd-4a39-808f-7067c4ed1fa2 req-63688806-dc60-4ab8-8a6d-2b0cc5c89e99 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-89e7ab6e-ac0f-4666-bec6-39fd4827d4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:04:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:06.948 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:04:06 compute-2 nova_compute[226829]: 2026-01-31 09:04:06.948 226833 DEBUG nova.network.neutron [req-66ed4203-79dd-4a39-808f-7067c4ed1fa2 req-63688806-dc60-4ab8-8a6d-2b0cc5c89e99 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Refreshing network info cache for port eff35ea5-3814-43cc-89a0-154862846bc4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:04:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:06.949 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:04:07 compute-2 nova_compute[226829]: 2026-01-31 09:04:07.075 226833 DEBUG oslo_concurrency.lockutils [None req-17326f5b-de5e-4986-acf2-9f2d0f18654d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "89e7ab6e-ac0f-4666-bec6-39fd4827d4d9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:04:07 compute-2 nova_compute[226829]: 2026-01-31 09:04:07.076 226833 DEBUG oslo_concurrency.lockutils [None req-17326f5b-de5e-4986-acf2-9f2d0f18654d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "89e7ab6e-ac0f-4666-bec6-39fd4827d4d9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:04:07 compute-2 nova_compute[226829]: 2026-01-31 09:04:07.076 226833 DEBUG oslo_concurrency.lockutils [None req-17326f5b-de5e-4986-acf2-9f2d0f18654d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "89e7ab6e-ac0f-4666-bec6-39fd4827d4d9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:04:07 compute-2 nova_compute[226829]: 2026-01-31 09:04:07.076 226833 DEBUG oslo_concurrency.lockutils [None req-17326f5b-de5e-4986-acf2-9f2d0f18654d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "89e7ab6e-ac0f-4666-bec6-39fd4827d4d9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:04:07 compute-2 nova_compute[226829]: 2026-01-31 09:04:07.077 226833 DEBUG oslo_concurrency.lockutils [None req-17326f5b-de5e-4986-acf2-9f2d0f18654d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "89e7ab6e-ac0f-4666-bec6-39fd4827d4d9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:04:07 compute-2 nova_compute[226829]: 2026-01-31 09:04:07.078 226833 INFO nova.compute.manager [None req-17326f5b-de5e-4986-acf2-9f2d0f18654d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Terminating instance
Jan 31 09:04:07 compute-2 nova_compute[226829]: 2026-01-31 09:04:07.079 226833 DEBUG nova.compute.manager [None req-17326f5b-de5e-4986-acf2-9f2d0f18654d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 09:04:07 compute-2 kernel: tapeff35ea5-38 (unregistering): left promiscuous mode
Jan 31 09:04:07 compute-2 NetworkManager[48999]: <info>  [1769850247.3364] device (tapeff35ea5-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 09:04:07 compute-2 nova_compute[226829]: 2026-01-31 09:04:07.343 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:07 compute-2 ovn_controller[133834]: 2026-01-31T09:04:07Z|00830|binding|INFO|Releasing lport eff35ea5-3814-43cc-89a0-154862846bc4 from this chassis (sb_readonly=0)
Jan 31 09:04:07 compute-2 ovn_controller[133834]: 2026-01-31T09:04:07Z|00831|binding|INFO|Setting lport eff35ea5-3814-43cc-89a0-154862846bc4 down in Southbound
Jan 31 09:04:07 compute-2 ovn_controller[133834]: 2026-01-31T09:04:07Z|00832|binding|INFO|Removing iface tapeff35ea5-38 ovn-installed in OVS
Jan 31 09:04:07 compute-2 nova_compute[226829]: 2026-01-31 09:04:07.351 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:07.357 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:9d:f3 10.100.0.10'], port_security=['fa:16:3e:54:9d:f3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '89e7ab6e-ac0f-4666-bec6-39fd4827d4d9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c7019d0b-5031-4941-b812-751bbbab3ab4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bbe7be41-fd3b-442e-aaab-98ad359e7174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7722569a-62de-45e9-b1af-a80aadef9b5c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=eff35ea5-3814-43cc-89a0-154862846bc4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:04:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:07.361 143841 INFO neutron.agent.ovn.metadata.agent [-] Port eff35ea5-3814-43cc-89a0-154862846bc4 in datapath c7019d0b-5031-4941-b812-751bbbab3ab4 unbound from our chassis
Jan 31 09:04:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:07.363 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c7019d0b-5031-4941-b812-751bbbab3ab4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 09:04:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:07.365 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1a2f8cde-678f-4093-94e8-b05046624365]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:04:07 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:07.367 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4 namespace which is not needed anymore
Jan 31 09:04:07 compute-2 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000d6.scope: Deactivated successfully.
Jan 31 09:04:07 compute-2 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000d6.scope: Consumed 13.882s CPU time.
Jan 31 09:04:07 compute-2 systemd-machined[195142]: Machine qemu-94-instance-000000d6 terminated.
Jan 31 09:04:07 compute-2 nova_compute[226829]: 2026-01-31 09:04:07.496 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:07 compute-2 nova_compute[226829]: 2026-01-31 09:04:07.499 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:07 compute-2 nova_compute[226829]: 2026-01-31 09:04:07.509 226833 INFO nova.virt.libvirt.driver [-] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Instance destroyed successfully.
Jan 31 09:04:07 compute-2 nova_compute[226829]: 2026-01-31 09:04:07.510 226833 DEBUG nova.objects.instance [None req-17326f5b-de5e-4986-acf2-9f2d0f18654d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'resources' on Instance uuid 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:04:07 compute-2 nova_compute[226829]: 2026-01-31 09:04:07.526 226833 DEBUG nova.virt.libvirt.vif [None req-17326f5b-de5e-4986-acf2-9f2d0f18654d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:03:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-933843354',display_name='tempest-TestNetworkBasicOps-server-933843354',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-933843354',id=214,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBBCTVa9BBM/k2qbT3EA7WWI3jUNc/MZdZbWPyKFq/tQecHoDCls/gJN11J6RyfywfyIFv6UrWn7cmkNqaetr4zpJMTjHdCuY6uhzUjBzIz0IaZ2No3JIE1TQLBweQlIJQ==',key_name='tempest-TestNetworkBasicOps-814625961',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:03:40Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-keolc301',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:03:40Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=89e7ab6e-ac0f-4666-bec6-39fd4827d4d9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eff35ea5-3814-43cc-89a0-154862846bc4", "address": "fa:16:3e:54:9d:f3", "network": {"id": "c7019d0b-5031-4941-b812-751bbbab3ab4", "bridge": "br-int", "label": "tempest-network-smoke--1956104466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeff35ea5-38", "ovs_interfaceid": "eff35ea5-3814-43cc-89a0-154862846bc4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 09:04:07 compute-2 nova_compute[226829]: 2026-01-31 09:04:07.527 226833 DEBUG nova.network.os_vif_util [None req-17326f5b-de5e-4986-acf2-9f2d0f18654d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "eff35ea5-3814-43cc-89a0-154862846bc4", "address": "fa:16:3e:54:9d:f3", "network": {"id": "c7019d0b-5031-4941-b812-751bbbab3ab4", "bridge": "br-int", "label": "tempest-network-smoke--1956104466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeff35ea5-38", "ovs_interfaceid": "eff35ea5-3814-43cc-89a0-154862846bc4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:04:07 compute-2 nova_compute[226829]: 2026-01-31 09:04:07.528 226833 DEBUG nova.network.os_vif_util [None req-17326f5b-de5e-4986-acf2-9f2d0f18654d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:9d:f3,bridge_name='br-int',has_traffic_filtering=True,id=eff35ea5-3814-43cc-89a0-154862846bc4,network=Network(c7019d0b-5031-4941-b812-751bbbab3ab4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeff35ea5-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:04:07 compute-2 nova_compute[226829]: 2026-01-31 09:04:07.528 226833 DEBUG os_vif [None req-17326f5b-de5e-4986-acf2-9f2d0f18654d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:9d:f3,bridge_name='br-int',has_traffic_filtering=True,id=eff35ea5-3814-43cc-89a0-154862846bc4,network=Network(c7019d0b-5031-4941-b812-751bbbab3ab4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeff35ea5-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 09:04:07 compute-2 nova_compute[226829]: 2026-01-31 09:04:07.530 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:07 compute-2 nova_compute[226829]: 2026-01-31 09:04:07.531 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeff35ea5-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:04:07 compute-2 nova_compute[226829]: 2026-01-31 09:04:07.532 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:07 compute-2 nova_compute[226829]: 2026-01-31 09:04:07.534 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:07 compute-2 nova_compute[226829]: 2026-01-31 09:04:07.539 226833 INFO os_vif [None req-17326f5b-de5e-4986-acf2-9f2d0f18654d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:9d:f3,bridge_name='br-int',has_traffic_filtering=True,id=eff35ea5-3814-43cc-89a0-154862846bc4,network=Network(c7019d0b-5031-4941-b812-751bbbab3ab4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeff35ea5-38')
Jan 31 09:04:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:07.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:07 compute-2 neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4[332434]: [NOTICE]   (332438) : haproxy version is 2.8.14-c23fe91
Jan 31 09:04:07 compute-2 neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4[332434]: [NOTICE]   (332438) : path to executable is /usr/sbin/haproxy
Jan 31 09:04:07 compute-2 neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4[332434]: [WARNING]  (332438) : Exiting Master process...
Jan 31 09:04:07 compute-2 neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4[332434]: [ALERT]    (332438) : Current worker (332440) exited with code 143 (Terminated)
Jan 31 09:04:07 compute-2 neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4[332434]: [WARNING]  (332438) : All workers exited. Exiting... (0)
Jan 31 09:04:07 compute-2 systemd[1]: libpod-11b6034fc52bc05e0d9fbf649ff1cf21163c0677eb0e29f46b114cad35a29f5a.scope: Deactivated successfully.
Jan 31 09:04:07 compute-2 podman[332634]: 2026-01-31 09:04:07.700069552 +0000 UTC m=+0.254492392 container died 11b6034fc52bc05e0d9fbf649ff1cf21163c0677eb0e29f46b114cad35a29f5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 09:04:07 compute-2 ceph-mon[77282]: pgmap v3969: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 237 KiB/s rd, 1.4 MiB/s wr, 39 op/s
Jan 31 09:04:07 compute-2 systemd[1]: var-lib-containers-storage-overlay-d7020d9b2c61307267eb4b3006e31c84d4a9ddb35bd0c8dff9b9fb86ff5c4f16-merged.mount: Deactivated successfully.
Jan 31 09:04:07 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-11b6034fc52bc05e0d9fbf649ff1cf21163c0677eb0e29f46b114cad35a29f5a-userdata-shm.mount: Deactivated successfully.
Jan 31 09:04:08 compute-2 nova_compute[226829]: 2026-01-31 09:04:08.093 226833 DEBUG nova.network.neutron [req-66ed4203-79dd-4a39-808f-7067c4ed1fa2 req-63688806-dc60-4ab8-8a6d-2b0cc5c89e99 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Updated VIF entry in instance network info cache for port eff35ea5-3814-43cc-89a0-154862846bc4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:04:08 compute-2 nova_compute[226829]: 2026-01-31 09:04:08.095 226833 DEBUG nova.network.neutron [req-66ed4203-79dd-4a39-808f-7067c4ed1fa2 req-63688806-dc60-4ab8-8a6d-2b0cc5c89e99 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Updating instance_info_cache with network_info: [{"id": "eff35ea5-3814-43cc-89a0-154862846bc4", "address": "fa:16:3e:54:9d:f3", "network": {"id": "c7019d0b-5031-4941-b812-751bbbab3ab4", "bridge": "br-int", "label": "tempest-network-smoke--1956104466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeff35ea5-38", "ovs_interfaceid": "eff35ea5-3814-43cc-89a0-154862846bc4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:04:08 compute-2 nova_compute[226829]: 2026-01-31 09:04:08.115 226833 DEBUG oslo_concurrency.lockutils [req-66ed4203-79dd-4a39-808f-7067c4ed1fa2 req-63688806-dc60-4ab8-8a6d-2b0cc5c89e99 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-89e7ab6e-ac0f-4666-bec6-39fd4827d4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:04:08 compute-2 podman[332634]: 2026-01-31 09:04:08.259203596 +0000 UTC m=+0.813626466 container cleanup 11b6034fc52bc05e0d9fbf649ff1cf21163c0677eb0e29f46b114cad35a29f5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:04:08 compute-2 systemd[1]: libpod-conmon-11b6034fc52bc05e0d9fbf649ff1cf21163c0677eb0e29f46b114cad35a29f5a.scope: Deactivated successfully.
Jan 31 09:04:08 compute-2 nova_compute[226829]: 2026-01-31 09:04:08.288 226833 DEBUG nova.compute.manager [req-4fbb5cf6-5915-4bfb-b9cb-25faaf642ac1 req-4f99a470-78d2-4adf-b256-ff3fbbfbe2f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Received event network-vif-unplugged-eff35ea5-3814-43cc-89a0-154862846bc4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:04:08 compute-2 nova_compute[226829]: 2026-01-31 09:04:08.288 226833 DEBUG oslo_concurrency.lockutils [req-4fbb5cf6-5915-4bfb-b9cb-25faaf642ac1 req-4f99a470-78d2-4adf-b256-ff3fbbfbe2f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "89e7ab6e-ac0f-4666-bec6-39fd4827d4d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:04:08 compute-2 nova_compute[226829]: 2026-01-31 09:04:08.289 226833 DEBUG oslo_concurrency.lockutils [req-4fbb5cf6-5915-4bfb-b9cb-25faaf642ac1 req-4f99a470-78d2-4adf-b256-ff3fbbfbe2f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "89e7ab6e-ac0f-4666-bec6-39fd4827d4d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:04:08 compute-2 nova_compute[226829]: 2026-01-31 09:04:08.289 226833 DEBUG oslo_concurrency.lockutils [req-4fbb5cf6-5915-4bfb-b9cb-25faaf642ac1 req-4f99a470-78d2-4adf-b256-ff3fbbfbe2f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "89e7ab6e-ac0f-4666-bec6-39fd4827d4d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:04:08 compute-2 nova_compute[226829]: 2026-01-31 09:04:08.289 226833 DEBUG nova.compute.manager [req-4fbb5cf6-5915-4bfb-b9cb-25faaf642ac1 req-4f99a470-78d2-4adf-b256-ff3fbbfbe2f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] No waiting events found dispatching network-vif-unplugged-eff35ea5-3814-43cc-89a0-154862846bc4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:04:08 compute-2 nova_compute[226829]: 2026-01-31 09:04:08.289 226833 DEBUG nova.compute.manager [req-4fbb5cf6-5915-4bfb-b9cb-25faaf642ac1 req-4f99a470-78d2-4adf-b256-ff3fbbfbe2f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Received event network-vif-unplugged-eff35ea5-3814-43cc-89a0-154862846bc4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 09:04:08 compute-2 nova_compute[226829]: 2026-01-31 09:04:08.289 226833 DEBUG nova.compute.manager [req-4fbb5cf6-5915-4bfb-b9cb-25faaf642ac1 req-4f99a470-78d2-4adf-b256-ff3fbbfbe2f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Received event network-vif-plugged-eff35ea5-3814-43cc-89a0-154862846bc4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:04:08 compute-2 nova_compute[226829]: 2026-01-31 09:04:08.289 226833 DEBUG oslo_concurrency.lockutils [req-4fbb5cf6-5915-4bfb-b9cb-25faaf642ac1 req-4f99a470-78d2-4adf-b256-ff3fbbfbe2f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "89e7ab6e-ac0f-4666-bec6-39fd4827d4d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:04:08 compute-2 nova_compute[226829]: 2026-01-31 09:04:08.290 226833 DEBUG oslo_concurrency.lockutils [req-4fbb5cf6-5915-4bfb-b9cb-25faaf642ac1 req-4f99a470-78d2-4adf-b256-ff3fbbfbe2f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "89e7ab6e-ac0f-4666-bec6-39fd4827d4d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:04:08 compute-2 nova_compute[226829]: 2026-01-31 09:04:08.290 226833 DEBUG oslo_concurrency.lockutils [req-4fbb5cf6-5915-4bfb-b9cb-25faaf642ac1 req-4f99a470-78d2-4adf-b256-ff3fbbfbe2f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "89e7ab6e-ac0f-4666-bec6-39fd4827d4d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:04:08 compute-2 nova_compute[226829]: 2026-01-31 09:04:08.290 226833 DEBUG nova.compute.manager [req-4fbb5cf6-5915-4bfb-b9cb-25faaf642ac1 req-4f99a470-78d2-4adf-b256-ff3fbbfbe2f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] No waiting events found dispatching network-vif-plugged-eff35ea5-3814-43cc-89a0-154862846bc4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:04:08 compute-2 nova_compute[226829]: 2026-01-31 09:04:08.290 226833 WARNING nova.compute.manager [req-4fbb5cf6-5915-4bfb-b9cb-25faaf642ac1 req-4f99a470-78d2-4adf-b256-ff3fbbfbe2f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Received unexpected event network-vif-plugged-eff35ea5-3814-43cc-89a0-154862846bc4 for instance with vm_state active and task_state deleting.
Jan 31 09:04:08 compute-2 podman[332695]: 2026-01-31 09:04:08.591080972 +0000 UTC m=+0.312232533 container remove 11b6034fc52bc05e0d9fbf649ff1cf21163c0677eb0e29f46b114cad35a29f5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:04:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:08.595 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[900d8cc5-087f-48ed-abf5-85f256a3467c]: (4, ('Sat Jan 31 09:04:07 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4 (11b6034fc52bc05e0d9fbf649ff1cf21163c0677eb0e29f46b114cad35a29f5a)\n11b6034fc52bc05e0d9fbf649ff1cf21163c0677eb0e29f46b114cad35a29f5a\nSat Jan 31 09:04:08 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4 (11b6034fc52bc05e0d9fbf649ff1cf21163c0677eb0e29f46b114cad35a29f5a)\n11b6034fc52bc05e0d9fbf649ff1cf21163c0677eb0e29f46b114cad35a29f5a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:04:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:08.597 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[129fa5f9-01bc-489c-b126-05b8b88daf88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:04:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:08.598 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc7019d0b-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:04:08 compute-2 nova_compute[226829]: 2026-01-31 09:04:08.601 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:08 compute-2 kernel: tapc7019d0b-50: left promiscuous mode
Jan 31 09:04:08 compute-2 nova_compute[226829]: 2026-01-31 09:04:08.604 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:08.606 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[da867593-478f-4970-93c8-bdb35ae0943f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:04:08 compute-2 nova_compute[226829]: 2026-01-31 09:04:08.609 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:04:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:08.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:04:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:08.624 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[fa570ad9-952d-449d-9385-0c80c3fc425f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:04:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:08.626 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[28c6a316-f132-4b97-93b0-991067eb8890]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:04:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:08.645 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[be5c18e6-622a-4ad8-bb39-40e74d243c72]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1047230, 'reachable_time': 31614, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332711, 'error': None, 'target': 'ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:04:08 compute-2 systemd[1]: run-netns-ovnmeta\x2dc7019d0b\x2d5031\x2d4941\x2db812\x2d751bbbab3ab4.mount: Deactivated successfully.
Jan 31 09:04:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:08.653 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 09:04:08 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:08.654 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[e60bd9f1-3d90-4de1-844e-a1efba458f4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:04:09 compute-2 nova_compute[226829]: 2026-01-31 09:04:09.241 226833 INFO nova.virt.libvirt.driver [None req-17326f5b-de5e-4986-acf2-9f2d0f18654d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Deleting instance files /var/lib/nova/instances/89e7ab6e-ac0f-4666-bec6-39fd4827d4d9_del
Jan 31 09:04:09 compute-2 nova_compute[226829]: 2026-01-31 09:04:09.241 226833 INFO nova.virt.libvirt.driver [None req-17326f5b-de5e-4986-acf2-9f2d0f18654d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Deletion of /var/lib/nova/instances/89e7ab6e-ac0f-4666-bec6-39fd4827d4d9_del complete
Jan 31 09:04:09 compute-2 nova_compute[226829]: 2026-01-31 09:04:09.297 226833 INFO nova.compute.manager [None req-17326f5b-de5e-4986-acf2-9f2d0f18654d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Took 2.22 seconds to destroy the instance on the hypervisor.
Jan 31 09:04:09 compute-2 nova_compute[226829]: 2026-01-31 09:04:09.298 226833 DEBUG oslo.service.loopingcall [None req-17326f5b-de5e-4986-acf2-9f2d0f18654d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 09:04:09 compute-2 nova_compute[226829]: 2026-01-31 09:04:09.298 226833 DEBUG nova.compute.manager [-] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 09:04:09 compute-2 nova_compute[226829]: 2026-01-31 09:04:09.298 226833 DEBUG nova.network.neutron [-] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 09:04:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:09.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:09 compute-2 nova_compute[226829]: 2026-01-31 09:04:09.761 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:09 compute-2 nova_compute[226829]: 2026-01-31 09:04:09.999 226833 DEBUG nova.network.neutron [-] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:04:10 compute-2 nova_compute[226829]: 2026-01-31 09:04:10.018 226833 INFO nova.compute.manager [-] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Took 0.72 seconds to deallocate network for instance.
Jan 31 09:04:10 compute-2 nova_compute[226829]: 2026-01-31 09:04:10.099 226833 DEBUG nova.compute.manager [req-808ef3b8-0c26-4f20-bb61-374627ee028b req-006cbc35-716b-42ec-8c8e-a12a4f3a3589 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Received event network-vif-deleted-eff35ea5-3814-43cc-89a0-154862846bc4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:04:10 compute-2 nova_compute[226829]: 2026-01-31 09:04:10.123 226833 DEBUG oslo_concurrency.lockutils [None req-17326f5b-de5e-4986-acf2-9f2d0f18654d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:04:10 compute-2 nova_compute[226829]: 2026-01-31 09:04:10.124 226833 DEBUG oslo_concurrency.lockutils [None req-17326f5b-de5e-4986-acf2-9f2d0f18654d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:04:10 compute-2 nova_compute[226829]: 2026-01-31 09:04:10.211 226833 DEBUG oslo_concurrency.processutils [None req-17326f5b-de5e-4986-acf2-9f2d0f18654d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:04:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:10.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:04:10 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3121644942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:04:10 compute-2 nova_compute[226829]: 2026-01-31 09:04:10.655 226833 DEBUG oslo_concurrency.processutils [None req-17326f5b-de5e-4986-acf2-9f2d0f18654d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:04:10 compute-2 nova_compute[226829]: 2026-01-31 09:04:10.661 226833 DEBUG nova.compute.provider_tree [None req-17326f5b-de5e-4986-acf2-9f2d0f18654d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:04:10 compute-2 nova_compute[226829]: 2026-01-31 09:04:10.693 226833 DEBUG nova.scheduler.client.report [None req-17326f5b-de5e-4986-acf2-9f2d0f18654d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:04:10 compute-2 nova_compute[226829]: 2026-01-31 09:04:10.743 226833 DEBUG oslo_concurrency.lockutils [None req-17326f5b-de5e-4986-acf2-9f2d0f18654d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:04:10 compute-2 nova_compute[226829]: 2026-01-31 09:04:10.776 226833 INFO nova.scheduler.client.report [None req-17326f5b-de5e-4986-acf2-9f2d0f18654d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Deleted allocations for instance 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9
Jan 31 09:04:10 compute-2 nova_compute[226829]: 2026-01-31 09:04:10.865 226833 DEBUG oslo_concurrency.lockutils [None req-17326f5b-de5e-4986-acf2-9f2d0f18654d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "89e7ab6e-ac0f-4666-bec6-39fd4827d4d9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:04:11 compute-2 ceph-mon[77282]: pgmap v3970: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 172 KiB/s rd, 761 KiB/s wr, 30 op/s
Jan 31 09:04:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:04:11 compute-2 nova_compute[226829]: 2026-01-31 09:04:11.500 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:04:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:04:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:11.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:04:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3121644942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:04:12 compute-2 ceph-mon[77282]: pgmap v3971: 305 pgs: 305 active+clean; 233 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 31 KiB/s wr, 29 op/s
Jan 31 09:04:12 compute-2 nova_compute[226829]: 2026-01-31 09:04:12.535 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:12.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:13.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:14.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:14 compute-2 ceph-mon[77282]: pgmap v3972: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 20 KiB/s wr, 29 op/s
Jan 31 09:04:14 compute-2 nova_compute[226829]: 2026-01-31 09:04:14.764 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:15.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:15 compute-2 ceph-mon[77282]: pgmap v3973: 305 pgs: 305 active+clean; 171 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 8.2 KiB/s wr, 38 op/s
Jan 31 09:04:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:04:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:04:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:16.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:04:17 compute-2 nova_compute[226829]: 2026-01-31 09:04:17.538 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:17.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:18 compute-2 nova_compute[226829]: 2026-01-31 09:04:18.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:04:18 compute-2 nova_compute[226829]: 2026-01-31 09:04:18.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:04:18 compute-2 ceph-mon[77282]: pgmap v3974: 305 pgs: 305 active+clean; 138 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 8.7 KiB/s wr, 44 op/s
Jan 31 09:04:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:18.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:19.208 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=104, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=103) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:04:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:19.209 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:04:19 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:19.210 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '104'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:04:19 compute-2 nova_compute[226829]: 2026-01-31 09:04:19.247 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:19 compute-2 nova_compute[226829]: 2026-01-31 09:04:19.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:04:19 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1720794115' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:04:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:19.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:19 compute-2 nova_compute[226829]: 2026-01-31 09:04:19.765 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:20.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:20 compute-2 ceph-mon[77282]: pgmap v3975: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 9.0 KiB/s wr, 55 op/s
Jan 31 09:04:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2909094674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:04:21 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:04:21 compute-2 nova_compute[226829]: 2026-01-31 09:04:21.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:04:21 compute-2 nova_compute[226829]: 2026-01-31 09:04:21.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:04:21 compute-2 nova_compute[226829]: 2026-01-31 09:04:21.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:04:21 compute-2 nova_compute[226829]: 2026-01-31 09:04:21.505 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:04:21 compute-2 nova_compute[226829]: 2026-01-31 09:04:21.505 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:04:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:21.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:21 compute-2 ceph-mon[77282]: pgmap v3976: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 6.7 KiB/s wr, 55 op/s
Jan 31 09:04:22 compute-2 podman[332742]: 2026-01-31 09:04:22.190450793 +0000 UTC m=+0.068289248 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127)
Jan 31 09:04:22 compute-2 nova_compute[226829]: 2026-01-31 09:04:22.508 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769850247.5070093, 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:04:22 compute-2 nova_compute[226829]: 2026-01-31 09:04:22.508 226833 INFO nova.compute.manager [-] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] VM Stopped (Lifecycle Event)
Jan 31 09:04:22 compute-2 nova_compute[226829]: 2026-01-31 09:04:22.526 226833 DEBUG nova.compute.manager [None req-20e1b06b-eebc-452a-81de-05538c5e3689 - - - - - -] [instance: 89e7ab6e-ac0f-4666-bec6-39fd4827d4d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:04:22 compute-2 nova_compute[226829]: 2026-01-31 09:04:22.540 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:22.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:23.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:24 compute-2 ceph-mon[77282]: pgmap v3977: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 KiB/s wr, 31 op/s
Jan 31 09:04:24 compute-2 nova_compute[226829]: 2026-01-31 09:04:24.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:04:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:24.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:24 compute-2 nova_compute[226829]: 2026-01-31 09:04:24.767 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:25 compute-2 nova_compute[226829]: 2026-01-31 09:04:25.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:04:25 compute-2 nova_compute[226829]: 2026-01-31 09:04:25.531 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:04:25 compute-2 nova_compute[226829]: 2026-01-31 09:04:25.531 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:04:25 compute-2 nova_compute[226829]: 2026-01-31 09:04:25.531 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:04:25 compute-2 nova_compute[226829]: 2026-01-31 09:04:25.531 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:04:25 compute-2 nova_compute[226829]: 2026-01-31 09:04:25.532 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:04:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:25.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:04:25 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3025758037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:04:25 compute-2 nova_compute[226829]: 2026-01-31 09:04:25.964 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:04:26 compute-2 nova_compute[226829]: 2026-01-31 09:04:26.126 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:04:26 compute-2 nova_compute[226829]: 2026-01-31 09:04:26.127 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4043MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:04:26 compute-2 nova_compute[226829]: 2026-01-31 09:04:26.128 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:04:26 compute-2 nova_compute[226829]: 2026-01-31 09:04:26.128 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:04:26 compute-2 ceph-mon[77282]: pgmap v3978: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Jan 31 09:04:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:04:26 compute-2 nova_compute[226829]: 2026-01-31 09:04:26.275 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:04:26 compute-2 nova_compute[226829]: 2026-01-31 09:04:26.276 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:04:26 compute-2 nova_compute[226829]: 2026-01-31 09:04:26.336 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:04:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:04:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:26.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:04:26 compute-2 sudo[332813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:04:26 compute-2 sudo[332813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:26 compute-2 sudo[332813]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:26 compute-2 sudo[332838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:04:26 compute-2 sudo[332838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:26 compute-2 sudo[332838]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:04:26 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3759234867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:04:26 compute-2 nova_compute[226829]: 2026-01-31 09:04:26.784 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:04:26 compute-2 nova_compute[226829]: 2026-01-31 09:04:26.790 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:04:26 compute-2 nova_compute[226829]: 2026-01-31 09:04:26.817 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:04:26 compute-2 nova_compute[226829]: 2026-01-31 09:04:26.843 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:04:26 compute-2 nova_compute[226829]: 2026-01-31 09:04:26.843 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:04:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3025758037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:04:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3759234867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:04:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1397250279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:04:27 compute-2 nova_compute[226829]: 2026-01-31 09:04:27.544 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:27.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:28 compute-2 podman[332866]: 2026-01-31 09:04:28.18076221 +0000 UTC m=+0.068438492 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 31 09:04:28 compute-2 ceph-mon[77282]: pgmap v3979: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Jan 31 09:04:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2358740387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:04:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:28.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:04:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:29.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:04:29 compute-2 nova_compute[226829]: 2026-01-31 09:04:29.748 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:29 compute-2 nova_compute[226829]: 2026-01-31 09:04:29.769 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:30.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:30 compute-2 ceph-mon[77282]: pgmap v3980: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 341 B/s wr, 11 op/s
Jan 31 09:04:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:04:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:31.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:32 compute-2 nova_compute[226829]: 2026-01-31 09:04:32.548 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:32.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:32 compute-2 ceph-mon[77282]: pgmap v3981: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 31 09:04:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:33.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:33 compute-2 ceph-mon[77282]: pgmap v3982: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:04:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:04:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:34.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:04:34 compute-2 nova_compute[226829]: 2026-01-31 09:04:34.771 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:04:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:35.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:04:36 compute-2 sudo[332892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:04:36 compute-2 sudo[332892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:36 compute-2 sudo[332892]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:36 compute-2 sudo[332918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:04:36 compute-2 sudo[332918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:36 compute-2 sudo[332918]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:36 compute-2 sudo[332943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:04:36 compute-2 sudo[332943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:36 compute-2 sudo[332943]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:36 compute-2 sudo[332968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:04:36 compute-2 sudo[332968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:04:36 compute-2 sudo[332968]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:36.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:36 compute-2 ceph-mon[77282]: pgmap v3983: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:04:37 compute-2 nova_compute[226829]: 2026-01-31 09:04:37.549 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:37 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:04:37 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:04:37 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:04:37 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:04:37 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:04:37 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:04:37 compute-2 ceph-mon[77282]: pgmap v3984: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:04:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:04:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:37.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:04:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:38.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:39.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:39 compute-2 nova_compute[226829]: 2026-01-31 09:04:39.772 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:40 compute-2 ceph-mon[77282]: pgmap v3985: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:04:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:40.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:04:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:41.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:41 compute-2 nova_compute[226829]: 2026-01-31 09:04:41.843 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:04:41 compute-2 nova_compute[226829]: 2026-01-31 09:04:41.843 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:04:42 compute-2 sudo[333027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:04:42 compute-2 sudo[333027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:42 compute-2 sudo[333027]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:42 compute-2 sudo[333052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:04:42 compute-2 sudo[333052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:42 compute-2 sudo[333052]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:42 compute-2 nova_compute[226829]: 2026-01-31 09:04:42.552 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:42 compute-2 ceph-mon[77282]: pgmap v3986: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:04:42 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:04:42 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:04:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:42.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:04:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:43.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:04:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:44.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:44 compute-2 nova_compute[226829]: 2026-01-31 09:04:44.774 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:45 compute-2 ceph-mon[77282]: pgmap v3987: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:04:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:45.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:04:46 compute-2 ceph-mon[77282]: pgmap v3988: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:04:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:46.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:46 compute-2 sudo[333079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:04:46 compute-2 sudo[333079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:46 compute-2 sudo[333079]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:46 compute-2 sudo[333104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:04:46 compute-2 sudo[333104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:46 compute-2 sudo[333104]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:46 compute-2 nova_compute[226829]: 2026-01-31 09:04:46.877 226833 DEBUG oslo_concurrency.lockutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "aed66974-d018-4ccb-9be7-edb5f1e9debb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:04:46 compute-2 nova_compute[226829]: 2026-01-31 09:04:46.878 226833 DEBUG oslo_concurrency.lockutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "aed66974-d018-4ccb-9be7-edb5f1e9debb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:04:46 compute-2 nova_compute[226829]: 2026-01-31 09:04:46.896 226833 DEBUG nova.compute.manager [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 09:04:46 compute-2 nova_compute[226829]: 2026-01-31 09:04:46.971 226833 DEBUG oslo_concurrency.lockutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:04:46 compute-2 nova_compute[226829]: 2026-01-31 09:04:46.971 226833 DEBUG oslo_concurrency.lockutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:04:46 compute-2 nova_compute[226829]: 2026-01-31 09:04:46.979 226833 DEBUG nova.virt.hardware [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 09:04:46 compute-2 nova_compute[226829]: 2026-01-31 09:04:46.979 226833 INFO nova.compute.claims [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Claim successful on node compute-2.ctlplane.example.com
Jan 31 09:04:47 compute-2 nova_compute[226829]: 2026-01-31 09:04:47.098 226833 DEBUG nova.scheduler.client.report [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Refreshing inventories for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 09:04:47 compute-2 nova_compute[226829]: 2026-01-31 09:04:47.187 226833 DEBUG nova.scheduler.client.report [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Updating ProviderTree inventory for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 09:04:47 compute-2 nova_compute[226829]: 2026-01-31 09:04:47.187 226833 DEBUG nova.compute.provider_tree [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Updating inventory in ProviderTree for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 09:04:47 compute-2 nova_compute[226829]: 2026-01-31 09:04:47.202 226833 DEBUG nova.scheduler.client.report [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Refreshing aggregate associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 09:04:47 compute-2 nova_compute[226829]: 2026-01-31 09:04:47.219 226833 DEBUG nova.scheduler.client.report [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Refreshing trait associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, traits: COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 09:04:47 compute-2 nova_compute[226829]: 2026-01-31 09:04:47.272 226833 DEBUG oslo_concurrency.processutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:04:47 compute-2 nova_compute[226829]: 2026-01-31 09:04:47.556 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:47 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:04:47 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2490033947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:04:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:47.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:47 compute-2 nova_compute[226829]: 2026-01-31 09:04:47.725 226833 DEBUG oslo_concurrency.processutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:04:47 compute-2 nova_compute[226829]: 2026-01-31 09:04:47.730 226833 DEBUG nova.compute.provider_tree [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:04:47 compute-2 nova_compute[226829]: 2026-01-31 09:04:47.746 226833 DEBUG nova.scheduler.client.report [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:04:47 compute-2 nova_compute[226829]: 2026-01-31 09:04:47.768 226833 DEBUG oslo_concurrency.lockutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:04:47 compute-2 nova_compute[226829]: 2026-01-31 09:04:47.769 226833 DEBUG nova.compute.manager [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 09:04:47 compute-2 nova_compute[226829]: 2026-01-31 09:04:47.815 226833 DEBUG nova.compute.manager [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 09:04:47 compute-2 nova_compute[226829]: 2026-01-31 09:04:47.816 226833 DEBUG nova.network.neutron [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 09:04:47 compute-2 nova_compute[226829]: 2026-01-31 09:04:47.844 226833 INFO nova.virt.libvirt.driver [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 09:04:47 compute-2 nova_compute[226829]: 2026-01-31 09:04:47.874 226833 DEBUG nova.compute.manager [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 09:04:47 compute-2 nova_compute[226829]: 2026-01-31 09:04:47.975 226833 DEBUG nova.compute.manager [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 09:04:47 compute-2 nova_compute[226829]: 2026-01-31 09:04:47.977 226833 DEBUG nova.virt.libvirt.driver [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 09:04:47 compute-2 nova_compute[226829]: 2026-01-31 09:04:47.977 226833 INFO nova.virt.libvirt.driver [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Creating image(s)
Jan 31 09:04:48 compute-2 nova_compute[226829]: 2026-01-31 09:04:48.015 226833 DEBUG nova.storage.rbd_utils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image aed66974-d018-4ccb-9be7-edb5f1e9debb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:04:48 compute-2 nova_compute[226829]: 2026-01-31 09:04:48.048 226833 DEBUG nova.storage.rbd_utils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image aed66974-d018-4ccb-9be7-edb5f1e9debb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:04:48 compute-2 nova_compute[226829]: 2026-01-31 09:04:48.078 226833 DEBUG nova.storage.rbd_utils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image aed66974-d018-4ccb-9be7-edb5f1e9debb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:04:48 compute-2 nova_compute[226829]: 2026-01-31 09:04:48.084 226833 DEBUG oslo_concurrency.processutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:04:48 compute-2 nova_compute[226829]: 2026-01-31 09:04:48.144 226833 DEBUG oslo_concurrency.processutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:04:48 compute-2 nova_compute[226829]: 2026-01-31 09:04:48.145 226833 DEBUG oslo_concurrency.lockutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:04:48 compute-2 nova_compute[226829]: 2026-01-31 09:04:48.146 226833 DEBUG oslo_concurrency.lockutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:04:48 compute-2 nova_compute[226829]: 2026-01-31 09:04:48.146 226833 DEBUG oslo_concurrency.lockutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:04:48 compute-2 nova_compute[226829]: 2026-01-31 09:04:48.172 226833 DEBUG nova.storage.rbd_utils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image aed66974-d018-4ccb-9be7-edb5f1e9debb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:04:48 compute-2 nova_compute[226829]: 2026-01-31 09:04:48.175 226833 DEBUG oslo_concurrency.processutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 aed66974-d018-4ccb-9be7-edb5f1e9debb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:04:48 compute-2 nova_compute[226829]: 2026-01-31 09:04:48.523 226833 DEBUG oslo_concurrency.processutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 aed66974-d018-4ccb-9be7-edb5f1e9debb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:04:48 compute-2 ceph-mon[77282]: pgmap v3989: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:04:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2490033947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:04:48 compute-2 nova_compute[226829]: 2026-01-31 09:04:48.592 226833 DEBUG nova.storage.rbd_utils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] resizing rbd image aed66974-d018-4ccb-9be7-edb5f1e9debb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 09:04:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:48.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:48 compute-2 nova_compute[226829]: 2026-01-31 09:04:48.726 226833 DEBUG nova.objects.instance [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'migration_context' on Instance uuid aed66974-d018-4ccb-9be7-edb5f1e9debb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:04:48 compute-2 nova_compute[226829]: 2026-01-31 09:04:48.743 226833 DEBUG nova.policy [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4a56abd8fdd341ae88a99e102ab399de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 09:04:48 compute-2 nova_compute[226829]: 2026-01-31 09:04:48.871 226833 DEBUG nova.virt.libvirt.driver [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 09:04:48 compute-2 nova_compute[226829]: 2026-01-31 09:04:48.871 226833 DEBUG nova.virt.libvirt.driver [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Ensure instance console log exists: /var/lib/nova/instances/aed66974-d018-4ccb-9be7-edb5f1e9debb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 09:04:48 compute-2 nova_compute[226829]: 2026-01-31 09:04:48.872 226833 DEBUG oslo_concurrency.lockutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:04:48 compute-2 nova_compute[226829]: 2026-01-31 09:04:48.872 226833 DEBUG oslo_concurrency.lockutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:04:48 compute-2 nova_compute[226829]: 2026-01-31 09:04:48.873 226833 DEBUG oslo_concurrency.lockutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:04:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:49.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:49 compute-2 nova_compute[226829]: 2026-01-31 09:04:49.776 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:49 compute-2 nova_compute[226829]: 2026-01-31 09:04:49.830 226833 DEBUG nova.network.neutron [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Successfully created port: 6b85e8c9-2a81-4166-8b71-969ab887dc81 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 09:04:50 compute-2 ceph-mon[77282]: pgmap v3990: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:04:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:50.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:04:51 compute-2 nova_compute[226829]: 2026-01-31 09:04:51.618 226833 DEBUG nova.network.neutron [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Successfully updated port: 6b85e8c9-2a81-4166-8b71-969ab887dc81 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 09:04:51 compute-2 nova_compute[226829]: 2026-01-31 09:04:51.635 226833 DEBUG oslo_concurrency.lockutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "refresh_cache-aed66974-d018-4ccb-9be7-edb5f1e9debb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:04:51 compute-2 nova_compute[226829]: 2026-01-31 09:04:51.636 226833 DEBUG oslo_concurrency.lockutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquired lock "refresh_cache-aed66974-d018-4ccb-9be7-edb5f1e9debb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:04:51 compute-2 nova_compute[226829]: 2026-01-31 09:04:51.636 226833 DEBUG nova.network.neutron [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 09:04:51 compute-2 ceph-mon[77282]: pgmap v3991: 305 pgs: 305 active+clean; 148 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 1.2 MiB/s wr, 15 op/s
Jan 31 09:04:51 compute-2 nova_compute[226829]: 2026-01-31 09:04:51.723 226833 DEBUG nova.compute.manager [req-da33fb38-5964-44b4-b6f2-b22fa2ee0c0f req-0ec006ca-f60d-4dd2-95f1-208c456bce94 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Received event network-changed-6b85e8c9-2a81-4166-8b71-969ab887dc81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:04:51 compute-2 nova_compute[226829]: 2026-01-31 09:04:51.724 226833 DEBUG nova.compute.manager [req-da33fb38-5964-44b4-b6f2-b22fa2ee0c0f req-0ec006ca-f60d-4dd2-95f1-208c456bce94 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Refreshing instance network info cache due to event network-changed-6b85e8c9-2a81-4166-8b71-969ab887dc81. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:04:51 compute-2 nova_compute[226829]: 2026-01-31 09:04:51.724 226833 DEBUG oslo_concurrency.lockutils [req-da33fb38-5964-44b4-b6f2-b22fa2ee0c0f req-0ec006ca-f60d-4dd2-95f1-208c456bce94 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-aed66974-d018-4ccb-9be7-edb5f1e9debb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:04:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:51.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:51 compute-2 nova_compute[226829]: 2026-01-31 09:04:51.818 226833 DEBUG nova.network.neutron [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 09:04:52 compute-2 nova_compute[226829]: 2026-01-31 09:04:52.560 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:52.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:52 compute-2 nova_compute[226829]: 2026-01-31 09:04:52.937 226833 DEBUG nova.network.neutron [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Updating instance_info_cache with network_info: [{"id": "6b85e8c9-2a81-4166-8b71-969ab887dc81", "address": "fa:16:3e:98:f9:37", "network": {"id": "32796e45-c70b-4424-b922-8b4f9b4e95e1", "bridge": "br-int", "label": "tempest-network-smoke--1982055243", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b85e8c9-2a", "ovs_interfaceid": "6b85e8c9-2a81-4166-8b71-969ab887dc81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:04:52 compute-2 nova_compute[226829]: 2026-01-31 09:04:52.962 226833 DEBUG oslo_concurrency.lockutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Releasing lock "refresh_cache-aed66974-d018-4ccb-9be7-edb5f1e9debb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:04:52 compute-2 nova_compute[226829]: 2026-01-31 09:04:52.962 226833 DEBUG nova.compute.manager [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Instance network_info: |[{"id": "6b85e8c9-2a81-4166-8b71-969ab887dc81", "address": "fa:16:3e:98:f9:37", "network": {"id": "32796e45-c70b-4424-b922-8b4f9b4e95e1", "bridge": "br-int", "label": "tempest-network-smoke--1982055243", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b85e8c9-2a", "ovs_interfaceid": "6b85e8c9-2a81-4166-8b71-969ab887dc81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 09:04:52 compute-2 nova_compute[226829]: 2026-01-31 09:04:52.963 226833 DEBUG oslo_concurrency.lockutils [req-da33fb38-5964-44b4-b6f2-b22fa2ee0c0f req-0ec006ca-f60d-4dd2-95f1-208c456bce94 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-aed66974-d018-4ccb-9be7-edb5f1e9debb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:04:52 compute-2 nova_compute[226829]: 2026-01-31 09:04:52.964 226833 DEBUG nova.network.neutron [req-da33fb38-5964-44b4-b6f2-b22fa2ee0c0f req-0ec006ca-f60d-4dd2-95f1-208c456bce94 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Refreshing network info cache for port 6b85e8c9-2a81-4166-8b71-969ab887dc81 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:04:52 compute-2 nova_compute[226829]: 2026-01-31 09:04:52.972 226833 DEBUG nova.virt.libvirt.driver [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Start _get_guest_xml network_info=[{"id": "6b85e8c9-2a81-4166-8b71-969ab887dc81", "address": "fa:16:3e:98:f9:37", "network": {"id": "32796e45-c70b-4424-b922-8b4f9b4e95e1", "bridge": "br-int", "label": "tempest-network-smoke--1982055243", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b85e8c9-2a", "ovs_interfaceid": "6b85e8c9-2a81-4166-8b71-969ab887dc81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 09:04:52 compute-2 nova_compute[226829]: 2026-01-31 09:04:52.977 226833 WARNING nova.virt.libvirt.driver [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:04:52 compute-2 nova_compute[226829]: 2026-01-31 09:04:52.982 226833 DEBUG nova.virt.libvirt.host [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 09:04:52 compute-2 nova_compute[226829]: 2026-01-31 09:04:52.982 226833 DEBUG nova.virt.libvirt.host [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 09:04:52 compute-2 nova_compute[226829]: 2026-01-31 09:04:52.986 226833 DEBUG nova.virt.libvirt.host [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 09:04:52 compute-2 nova_compute[226829]: 2026-01-31 09:04:52.986 226833 DEBUG nova.virt.libvirt.host [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 09:04:52 compute-2 nova_compute[226829]: 2026-01-31 09:04:52.987 226833 DEBUG nova.virt.libvirt.driver [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 09:04:52 compute-2 nova_compute[226829]: 2026-01-31 09:04:52.987 226833 DEBUG nova.virt.hardware [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 09:04:52 compute-2 nova_compute[226829]: 2026-01-31 09:04:52.988 226833 DEBUG nova.virt.hardware [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 09:04:52 compute-2 nova_compute[226829]: 2026-01-31 09:04:52.988 226833 DEBUG nova.virt.hardware [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 09:04:52 compute-2 nova_compute[226829]: 2026-01-31 09:04:52.988 226833 DEBUG nova.virt.hardware [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 09:04:52 compute-2 nova_compute[226829]: 2026-01-31 09:04:52.988 226833 DEBUG nova.virt.hardware [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 09:04:52 compute-2 nova_compute[226829]: 2026-01-31 09:04:52.988 226833 DEBUG nova.virt.hardware [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 09:04:52 compute-2 nova_compute[226829]: 2026-01-31 09:04:52.988 226833 DEBUG nova.virt.hardware [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 09:04:52 compute-2 nova_compute[226829]: 2026-01-31 09:04:52.989 226833 DEBUG nova.virt.hardware [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 09:04:52 compute-2 nova_compute[226829]: 2026-01-31 09:04:52.989 226833 DEBUG nova.virt.hardware [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 09:04:52 compute-2 nova_compute[226829]: 2026-01-31 09:04:52.989 226833 DEBUG nova.virt.hardware [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 09:04:52 compute-2 nova_compute[226829]: 2026-01-31 09:04:52.989 226833 DEBUG nova.virt.hardware [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 09:04:52 compute-2 nova_compute[226829]: 2026-01-31 09:04:52.992 226833 DEBUG oslo_concurrency.processutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:04:53 compute-2 podman[333322]: 2026-01-31 09:04:53.235864333 +0000 UTC m=+0.102503479 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 09:04:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 09:04:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2755407011' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:04:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 09:04:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2455846117' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:04:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 09:04:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2755407011' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:04:53 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.448 226833 DEBUG oslo_concurrency.processutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:04:53 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.473 226833 DEBUG nova.storage.rbd_utils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image aed66974-d018-4ccb-9be7-edb5f1e9debb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:04:53 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.477 226833 DEBUG oslo_concurrency.processutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:04:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2755407011' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:04:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2455846117' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:04:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2755407011' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:04:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:53.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 09:04:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1592408549' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:04:53 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.919 226833 DEBUG oslo_concurrency.processutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:04:53 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.922 226833 DEBUG nova.virt.libvirt.vif [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:04:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1696854884',display_name='tempest-TestNetworkBasicOps-server-1696854884',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1696854884',id=215,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFdnysb/3hrszEO5uaYEpHUlXY9KvlgwgX7hSgCYCzi3CX34ar4FHuTLUFP/P+TJkt26M2ChmXJSqaY1zvwnjVkDz3oBaMo9Z7vK/IRASGcya9NxPqBEAMv2vcXkZC/m5g==',key_name='tempest-TestNetworkBasicOps-1570738526',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-0hgemvud',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:04:47Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=aed66974-d018-4ccb-9be7-edb5f1e9debb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b85e8c9-2a81-4166-8b71-969ab887dc81", "address": "fa:16:3e:98:f9:37", "network": {"id": "32796e45-c70b-4424-b922-8b4f9b4e95e1", "bridge": "br-int", "label": "tempest-network-smoke--1982055243", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b85e8c9-2a", "ovs_interfaceid": "6b85e8c9-2a81-4166-8b71-969ab887dc81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 09:04:53 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.922 226833 DEBUG nova.network.os_vif_util [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "6b85e8c9-2a81-4166-8b71-969ab887dc81", "address": "fa:16:3e:98:f9:37", "network": {"id": "32796e45-c70b-4424-b922-8b4f9b4e95e1", "bridge": "br-int", "label": "tempest-network-smoke--1982055243", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b85e8c9-2a", "ovs_interfaceid": "6b85e8c9-2a81-4166-8b71-969ab887dc81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:04:53 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.924 226833 DEBUG nova.network.os_vif_util [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:f9:37,bridge_name='br-int',has_traffic_filtering=True,id=6b85e8c9-2a81-4166-8b71-969ab887dc81,network=Network(32796e45-c70b-4424-b922-8b4f9b4e95e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b85e8c9-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:04:53 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.926 226833 DEBUG nova.objects.instance [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'pci_devices' on Instance uuid aed66974-d018-4ccb-9be7-edb5f1e9debb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:04:53 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.941 226833 DEBUG nova.virt.libvirt.driver [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] End _get_guest_xml xml=<domain type="kvm">
Jan 31 09:04:53 compute-2 nova_compute[226829]:   <uuid>aed66974-d018-4ccb-9be7-edb5f1e9debb</uuid>
Jan 31 09:04:53 compute-2 nova_compute[226829]:   <name>instance-000000d7</name>
Jan 31 09:04:53 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 09:04:53 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 09:04:53 compute-2 nova_compute[226829]:   <metadata>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <nova:name>tempest-TestNetworkBasicOps-server-1696854884</nova:name>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 09:04:52</nova:creationTime>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 09:04:53 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 09:04:53 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 09:04:53 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 09:04:53 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 09:04:53 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 09:04:53 compute-2 nova_compute[226829]:         <nova:user uuid="4a56abd8fdd341ae88a99e102ab399de">tempest-TestNetworkBasicOps-1691550221-project-member</nova:user>
Jan 31 09:04:53 compute-2 nova_compute[226829]:         <nova:project uuid="0d55ec1a5544450dba4e4fd1426395d7">tempest-TestNetworkBasicOps-1691550221</nova:project>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 09:04:53 compute-2 nova_compute[226829]:         <nova:port uuid="6b85e8c9-2a81-4166-8b71-969ab887dc81">
Jan 31 09:04:53 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 09:04:53 compute-2 nova_compute[226829]:   </metadata>
Jan 31 09:04:53 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <system>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <entry name="serial">aed66974-d018-4ccb-9be7-edb5f1e9debb</entry>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <entry name="uuid">aed66974-d018-4ccb-9be7-edb5f1e9debb</entry>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     </system>
Jan 31 09:04:53 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 09:04:53 compute-2 nova_compute[226829]:   <os>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:   </os>
Jan 31 09:04:53 compute-2 nova_compute[226829]:   <features>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <apic/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:   </features>
Jan 31 09:04:53 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:   </clock>
Jan 31 09:04:53 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:   </cpu>
Jan 31 09:04:53 compute-2 nova_compute[226829]:   <devices>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/aed66974-d018-4ccb-9be7-edb5f1e9debb_disk">
Jan 31 09:04:53 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       </source>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 09:04:53 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       </auth>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     </disk>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/aed66974-d018-4ccb-9be7-edb5f1e9debb_disk.config">
Jan 31 09:04:53 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       </source>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 09:04:53 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       </auth>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     </disk>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:98:f9:37"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <target dev="tap6b85e8c9-2a"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     </interface>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/aed66974-d018-4ccb-9be7-edb5f1e9debb/console.log" append="off"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     </serial>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <video>
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     </video>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     </rng>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 09:04:53 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 09:04:53 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 09:04:53 compute-2 nova_compute[226829]:   </devices>
Jan 31 09:04:53 compute-2 nova_compute[226829]: </domain>
Jan 31 09:04:53 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 09:04:53 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.943 226833 DEBUG nova.compute.manager [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Preparing to wait for external event network-vif-plugged-6b85e8c9-2a81-4166-8b71-969ab887dc81 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 09:04:53 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.943 226833 DEBUG oslo_concurrency.lockutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "aed66974-d018-4ccb-9be7-edb5f1e9debb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:04:53 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.943 226833 DEBUG oslo_concurrency.lockutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "aed66974-d018-4ccb-9be7-edb5f1e9debb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:04:53 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.944 226833 DEBUG oslo_concurrency.lockutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "aed66974-d018-4ccb-9be7-edb5f1e9debb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:04:53 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.944 226833 DEBUG nova.virt.libvirt.vif [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:04:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1696854884',display_name='tempest-TestNetworkBasicOps-server-1696854884',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1696854884',id=215,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFdnysb/3hrszEO5uaYEpHUlXY9KvlgwgX7hSgCYCzi3CX34ar4FHuTLUFP/P+TJkt26M2ChmXJSqaY1zvwnjVkDz3oBaMo9Z7vK/IRASGcya9NxPqBEAMv2vcXkZC/m5g==',key_name='tempest-TestNetworkBasicOps-1570738526',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-0hgemvud',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:04:47Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=aed66974-d018-4ccb-9be7-edb5f1e9debb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b85e8c9-2a81-4166-8b71-969ab887dc81", "address": "fa:16:3e:98:f9:37", "network": {"id": "32796e45-c70b-4424-b922-8b4f9b4e95e1", "bridge": "br-int", "label": "tempest-network-smoke--1982055243", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b85e8c9-2a", "ovs_interfaceid": "6b85e8c9-2a81-4166-8b71-969ab887dc81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 09:04:53 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.945 226833 DEBUG nova.network.os_vif_util [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "6b85e8c9-2a81-4166-8b71-969ab887dc81", "address": "fa:16:3e:98:f9:37", "network": {"id": "32796e45-c70b-4424-b922-8b4f9b4e95e1", "bridge": "br-int", "label": "tempest-network-smoke--1982055243", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b85e8c9-2a", "ovs_interfaceid": "6b85e8c9-2a81-4166-8b71-969ab887dc81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:04:53 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.945 226833 DEBUG nova.network.os_vif_util [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:f9:37,bridge_name='br-int',has_traffic_filtering=True,id=6b85e8c9-2a81-4166-8b71-969ab887dc81,network=Network(32796e45-c70b-4424-b922-8b4f9b4e95e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b85e8c9-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:04:53 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.946 226833 DEBUG os_vif [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:f9:37,bridge_name='br-int',has_traffic_filtering=True,id=6b85e8c9-2a81-4166-8b71-969ab887dc81,network=Network(32796e45-c70b-4424-b922-8b4f9b4e95e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b85e8c9-2a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 09:04:53 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.947 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:53 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.947 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:04:53 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.948 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 09:04:53 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.951 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:53 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.951 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b85e8c9-2a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:04:53 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.952 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6b85e8c9-2a, col_values=(('external_ids', {'iface-id': '6b85e8c9-2a81-4166-8b71-969ab887dc81', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:98:f9:37', 'vm-uuid': 'aed66974-d018-4ccb-9be7-edb5f1e9debb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:04:53 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.954 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:53 compute-2 NetworkManager[48999]: <info>  [1769850293.9556] manager: (tap6b85e8c9-2a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/417)
Jan 31 09:04:53 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.955 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:04:53 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.960 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:53 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.962 226833 INFO os_vif [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:f9:37,bridge_name='br-int',has_traffic_filtering=True,id=6b85e8c9-2a81-4166-8b71-969ab887dc81,network=Network(32796e45-c70b-4424-b922-8b4f9b4e95e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b85e8c9-2a')
Jan 31 09:04:53 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.999 226833 DEBUG nova.virt.libvirt.driver [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 09:04:54 compute-2 nova_compute[226829]: 2026-01-31 09:04:53.999 226833 DEBUG nova.virt.libvirt.driver [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 09:04:54 compute-2 nova_compute[226829]: 2026-01-31 09:04:54.000 226833 DEBUG nova.virt.libvirt.driver [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No VIF found with MAC fa:16:3e:98:f9:37, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 09:04:54 compute-2 nova_compute[226829]: 2026-01-31 09:04:54.000 226833 INFO nova.virt.libvirt.driver [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Using config drive
Jan 31 09:04:54 compute-2 nova_compute[226829]: 2026-01-31 09:04:54.029 226833 DEBUG nova.storage.rbd_utils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image aed66974-d018-4ccb-9be7-edb5f1e9debb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:04:54 compute-2 nova_compute[226829]: 2026-01-31 09:04:54.651 226833 INFO nova.virt.libvirt.driver [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Creating config drive at /var/lib/nova/instances/aed66974-d018-4ccb-9be7-edb5f1e9debb/disk.config
Jan 31 09:04:54 compute-2 nova_compute[226829]: 2026-01-31 09:04:54.655 226833 DEBUG oslo_concurrency.processutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aed66974-d018-4ccb-9be7-edb5f1e9debb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpjxq_oab6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:04:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:54.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:54 compute-2 nova_compute[226829]: 2026-01-31 09:04:54.778 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:54 compute-2 nova_compute[226829]: 2026-01-31 09:04:54.781 226833 DEBUG oslo_concurrency.processutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aed66974-d018-4ccb-9be7-edb5f1e9debb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpjxq_oab6" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:04:54 compute-2 ceph-mon[77282]: pgmap v3992: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:04:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1592408549' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:04:54 compute-2 nova_compute[226829]: 2026-01-31 09:04:54.852 226833 DEBUG nova.storage.rbd_utils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image aed66974-d018-4ccb-9be7-edb5f1e9debb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:04:54 compute-2 nova_compute[226829]: 2026-01-31 09:04:54.856 226833 DEBUG oslo_concurrency.processutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aed66974-d018-4ccb-9be7-edb5f1e9debb/disk.config aed66974-d018-4ccb-9be7-edb5f1e9debb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:04:54 compute-2 nova_compute[226829]: 2026-01-31 09:04:54.878 226833 DEBUG nova.network.neutron [req-da33fb38-5964-44b4-b6f2-b22fa2ee0c0f req-0ec006ca-f60d-4dd2-95f1-208c456bce94 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Updated VIF entry in instance network info cache for port 6b85e8c9-2a81-4166-8b71-969ab887dc81. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:04:54 compute-2 nova_compute[226829]: 2026-01-31 09:04:54.878 226833 DEBUG nova.network.neutron [req-da33fb38-5964-44b4-b6f2-b22fa2ee0c0f req-0ec006ca-f60d-4dd2-95f1-208c456bce94 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Updating instance_info_cache with network_info: [{"id": "6b85e8c9-2a81-4166-8b71-969ab887dc81", "address": "fa:16:3e:98:f9:37", "network": {"id": "32796e45-c70b-4424-b922-8b4f9b4e95e1", "bridge": "br-int", "label": "tempest-network-smoke--1982055243", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b85e8c9-2a", "ovs_interfaceid": "6b85e8c9-2a81-4166-8b71-969ab887dc81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:04:54 compute-2 nova_compute[226829]: 2026-01-31 09:04:54.894 226833 DEBUG oslo_concurrency.lockutils [req-da33fb38-5964-44b4-b6f2-b22fa2ee0c0f req-0ec006ca-f60d-4dd2-95f1-208c456bce94 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-aed66974-d018-4ccb-9be7-edb5f1e9debb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:04:55 compute-2 nova_compute[226829]: 2026-01-31 09:04:55.004 226833 DEBUG oslo_concurrency.processutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aed66974-d018-4ccb-9be7-edb5f1e9debb/disk.config aed66974-d018-4ccb-9be7-edb5f1e9debb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:04:55 compute-2 nova_compute[226829]: 2026-01-31 09:04:55.005 226833 INFO nova.virt.libvirt.driver [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Deleting local config drive /var/lib/nova/instances/aed66974-d018-4ccb-9be7-edb5f1e9debb/disk.config because it was imported into RBD.
Jan 31 09:04:55 compute-2 kernel: tap6b85e8c9-2a: entered promiscuous mode
Jan 31 09:04:55 compute-2 NetworkManager[48999]: <info>  [1769850295.0457] manager: (tap6b85e8c9-2a): new Tun device (/org/freedesktop/NetworkManager/Devices/418)
Jan 31 09:04:55 compute-2 ovn_controller[133834]: 2026-01-31T09:04:55Z|00833|binding|INFO|Claiming lport 6b85e8c9-2a81-4166-8b71-969ab887dc81 for this chassis.
Jan 31 09:04:55 compute-2 nova_compute[226829]: 2026-01-31 09:04:55.047 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:55 compute-2 ovn_controller[133834]: 2026-01-31T09:04:55Z|00834|binding|INFO|6b85e8c9-2a81-4166-8b71-969ab887dc81: Claiming fa:16:3e:98:f9:37 10.100.0.12
Jan 31 09:04:55 compute-2 nova_compute[226829]: 2026-01-31 09:04:55.054 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:55.060 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:f9:37 10.100.0.12'], port_security=['fa:16:3e:98:f9:37 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'aed66974-d018-4ccb-9be7-edb5f1e9debb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32796e45-c70b-4424-b922-8b4f9b4e95e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '86a74fed-a3fd-444f-8d57-1d7e291f9db8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a2a60c8-4f21-49f4-b6af-c6ef5707163c, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=6b85e8c9-2a81-4166-8b71-969ab887dc81) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:55.061 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 6b85e8c9-2a81-4166-8b71-969ab887dc81 in datapath 32796e45-c70b-4424-b922-8b4f9b4e95e1 bound to our chassis
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:55.063 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 32796e45-c70b-4424-b922-8b4f9b4e95e1
Jan 31 09:04:55 compute-2 systemd-machined[195142]: New machine qemu-95-instance-000000d7.
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:55.072 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7d777db3-25c9-4064-996b-69eda22e07b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:55.073 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap32796e45-c1 in ovnmeta-32796e45-c70b-4424-b922-8b4f9b4e95e1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:55.074 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap32796e45-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:55.074 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a20c045e-e62b-4dfe-9180-1e97bbef88ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:55.075 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f6ee1176-4197-4e67-b83e-a816975ae822]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:55.083 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[1bfe894d-b6d9-434d-88e0-dcd09258b950]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:04:55 compute-2 systemd[1]: Started Virtual Machine qemu-95-instance-000000d7.
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:55.092 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[acbff04e-7def-45cc-8b49-dd7e5cf5bf1f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:04:55 compute-2 ovn_controller[133834]: 2026-01-31T09:04:55Z|00835|binding|INFO|Setting lport 6b85e8c9-2a81-4166-8b71-969ab887dc81 ovn-installed in OVS
Jan 31 09:04:55 compute-2 ovn_controller[133834]: 2026-01-31T09:04:55Z|00836|binding|INFO|Setting lport 6b85e8c9-2a81-4166-8b71-969ab887dc81 up in Southbound
Jan 31 09:04:55 compute-2 nova_compute[226829]: 2026-01-31 09:04:55.095 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:55 compute-2 systemd-udevd[333487]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 09:04:55 compute-2 NetworkManager[48999]: <info>  [1769850295.1090] device (tap6b85e8c9-2a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 09:04:55 compute-2 NetworkManager[48999]: <info>  [1769850295.1099] device (tap6b85e8c9-2a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:55.114 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[790827c9-9ae4-4a1a-ab0f-3aac8012e030]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:55.118 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[63622f2e-caf7-4a84-a25c-e74e9743d108]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:04:55 compute-2 NetworkManager[48999]: <info>  [1769850295.1193] manager: (tap32796e45-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/419)
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:55.141 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[c57bd74c-ef9f-4d0c-ad49-54f139f6027a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:55.143 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[8e81ff1b-e99e-42ba-a130-58a4804dd828]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:04:55 compute-2 NetworkManager[48999]: <info>  [1769850295.1594] device (tap32796e45-c0): carrier: link connected
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:55.161 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[6ab72014-3443-48dc-882b-4a1a9af28d4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:55.174 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0a9eb982-76ba-4ac0-b17d-76ce42fcfccf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32796e45-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:ab:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1054763, 'reachable_time': 37127, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333517, 'error': None, 'target': 'ovnmeta-32796e45-c70b-4424-b922-8b4f9b4e95e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:55.186 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9395f3e7-6a72-4272-94a3-b7345dc3f81e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:ab79'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1054763, 'tstamp': 1054763}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333518, 'error': None, 'target': 'ovnmeta-32796e45-c70b-4424-b922-8b4f9b4e95e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:55.198 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc4c75e-8f32-4fda-867d-a8cfb5d9160b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32796e45-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:ab:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1054763, 'reachable_time': 37127, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 333519, 'error': None, 'target': 'ovnmeta-32796e45-c70b-4424-b922-8b4f9b4e95e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:55.221 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[029cef47-1729-435c-b27c-77e03390fef6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:55.264 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8de5ee8f-e532-499d-9204-78a99acd8ad7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:55.266 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32796e45-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:55.266 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:55.266 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32796e45-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:04:55 compute-2 nova_compute[226829]: 2026-01-31 09:04:55.268 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:55 compute-2 kernel: tap32796e45-c0: entered promiscuous mode
Jan 31 09:04:55 compute-2 NetworkManager[48999]: <info>  [1769850295.2688] manager: (tap32796e45-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/420)
Jan 31 09:04:55 compute-2 nova_compute[226829]: 2026-01-31 09:04:55.271 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:55.271 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap32796e45-c0, col_values=(('external_ids', {'iface-id': '450fe9a8-1dc3-441f-a491-f547f3ef8fcf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:04:55 compute-2 nova_compute[226829]: 2026-01-31 09:04:55.272 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:55 compute-2 ovn_controller[133834]: 2026-01-31T09:04:55Z|00837|binding|INFO|Releasing lport 450fe9a8-1dc3-441f-a491-f547f3ef8fcf from this chassis (sb_readonly=0)
Jan 31 09:04:55 compute-2 nova_compute[226829]: 2026-01-31 09:04:55.278 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:55.280 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/32796e45-c70b-4424-b922-8b4f9b4e95e1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/32796e45-c70b-4424-b922-8b4f9b4e95e1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:55.281 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3c3e059d-df56-4d30-adb6-744651cf03ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:55.282 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: global
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-32796e45-c70b-4424-b922-8b4f9b4e95e1
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/32796e45-c70b-4424-b922-8b4f9b4e95e1.pid.haproxy
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 32796e45-c70b-4424-b922-8b4f9b4e95e1
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 09:04:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:04:55.282 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-32796e45-c70b-4424-b922-8b4f9b4e95e1', 'env', 'PROCESS_TAG=haproxy-32796e45-c70b-4424-b922-8b4f9b4e95e1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/32796e45-c70b-4424-b922-8b4f9b4e95e1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 09:04:55 compute-2 nova_compute[226829]: 2026-01-31 09:04:55.388 226833 DEBUG nova.compute.manager [req-64892e25-dbb6-4937-b2a7-79e9f316bfdd req-f959f64b-14af-4a27-8a44-4a0ef36d596f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Received event network-vif-plugged-6b85e8c9-2a81-4166-8b71-969ab887dc81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:04:55 compute-2 nova_compute[226829]: 2026-01-31 09:04:55.388 226833 DEBUG oslo_concurrency.lockutils [req-64892e25-dbb6-4937-b2a7-79e9f316bfdd req-f959f64b-14af-4a27-8a44-4a0ef36d596f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "aed66974-d018-4ccb-9be7-edb5f1e9debb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:04:55 compute-2 nova_compute[226829]: 2026-01-31 09:04:55.389 226833 DEBUG oslo_concurrency.lockutils [req-64892e25-dbb6-4937-b2a7-79e9f316bfdd req-f959f64b-14af-4a27-8a44-4a0ef36d596f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aed66974-d018-4ccb-9be7-edb5f1e9debb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:04:55 compute-2 nova_compute[226829]: 2026-01-31 09:04:55.389 226833 DEBUG oslo_concurrency.lockutils [req-64892e25-dbb6-4937-b2a7-79e9f316bfdd req-f959f64b-14af-4a27-8a44-4a0ef36d596f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aed66974-d018-4ccb-9be7-edb5f1e9debb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:04:55 compute-2 nova_compute[226829]: 2026-01-31 09:04:55.389 226833 DEBUG nova.compute.manager [req-64892e25-dbb6-4937-b2a7-79e9f316bfdd req-f959f64b-14af-4a27-8a44-4a0ef36d596f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Processing event network-vif-plugged-6b85e8c9-2a81-4166-8b71-969ab887dc81 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 09:04:55 compute-2 podman[333551]: 2026-01-31 09:04:55.608837532 +0000 UTC m=+0.058923174 container create fe50fd7fc8aa3def10faa097ff2f23a76076987276753e8457052100a6e705db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32796e45-c70b-4424-b922-8b4f9b4e95e1, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 09:04:55 compute-2 systemd[1]: Started libpod-conmon-fe50fd7fc8aa3def10faa097ff2f23a76076987276753e8457052100a6e705db.scope.
Jan 31 09:04:55 compute-2 systemd[1]: Started libcrun container.
Jan 31 09:04:55 compute-2 podman[333551]: 2026-01-31 09:04:55.576993615 +0000 UTC m=+0.027079287 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 09:04:55 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b4043d91fd28a366ce178839401a34787cd17703f8f348a8eb563020dd2239f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 09:04:55 compute-2 podman[333551]: 2026-01-31 09:04:55.686141443 +0000 UTC m=+0.136227115 container init fe50fd7fc8aa3def10faa097ff2f23a76076987276753e8457052100a6e705db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32796e45-c70b-4424-b922-8b4f9b4e95e1, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Jan 31 09:04:55 compute-2 podman[333551]: 2026-01-31 09:04:55.690489652 +0000 UTC m=+0.140575304 container start fe50fd7fc8aa3def10faa097ff2f23a76076987276753e8457052100a6e705db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32796e45-c70b-4424-b922-8b4f9b4e95e1, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 09:04:55 compute-2 neutron-haproxy-ovnmeta-32796e45-c70b-4424-b922-8b4f9b4e95e1[333566]: [NOTICE]   (333570) : New worker (333572) forked
Jan 31 09:04:55 compute-2 neutron-haproxy-ovnmeta-32796e45-c70b-4424-b922-8b4f9b4e95e1[333566]: [NOTICE]   (333570) : Loading success.
Jan 31 09:04:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:55.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:55 compute-2 ceph-mon[77282]: pgmap v3993: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:04:56 compute-2 nova_compute[226829]: 2026-01-31 09:04:56.031 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769850296.0311463, aed66974-d018-4ccb-9be7-edb5f1e9debb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:04:56 compute-2 nova_compute[226829]: 2026-01-31 09:04:56.032 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] VM Started (Lifecycle Event)
Jan 31 09:04:56 compute-2 nova_compute[226829]: 2026-01-31 09:04:56.033 226833 DEBUG nova.compute.manager [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 09:04:56 compute-2 nova_compute[226829]: 2026-01-31 09:04:56.036 226833 DEBUG nova.virt.libvirt.driver [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 09:04:56 compute-2 nova_compute[226829]: 2026-01-31 09:04:56.038 226833 INFO nova.virt.libvirt.driver [-] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Instance spawned successfully.
Jan 31 09:04:56 compute-2 nova_compute[226829]: 2026-01-31 09:04:56.039 226833 DEBUG nova.virt.libvirt.driver [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 09:04:56 compute-2 nova_compute[226829]: 2026-01-31 09:04:56.067 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:04:56 compute-2 nova_compute[226829]: 2026-01-31 09:04:56.070 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 09:04:56 compute-2 nova_compute[226829]: 2026-01-31 09:04:56.077 226833 DEBUG nova.virt.libvirt.driver [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:04:56 compute-2 nova_compute[226829]: 2026-01-31 09:04:56.077 226833 DEBUG nova.virt.libvirt.driver [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:04:56 compute-2 nova_compute[226829]: 2026-01-31 09:04:56.077 226833 DEBUG nova.virt.libvirt.driver [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:04:56 compute-2 nova_compute[226829]: 2026-01-31 09:04:56.078 226833 DEBUG nova.virt.libvirt.driver [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:04:56 compute-2 nova_compute[226829]: 2026-01-31 09:04:56.078 226833 DEBUG nova.virt.libvirt.driver [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:04:56 compute-2 nova_compute[226829]: 2026-01-31 09:04:56.079 226833 DEBUG nova.virt.libvirt.driver [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:04:56 compute-2 nova_compute[226829]: 2026-01-31 09:04:56.115 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 09:04:56 compute-2 nova_compute[226829]: 2026-01-31 09:04:56.116 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769850296.0334578, aed66974-d018-4ccb-9be7-edb5f1e9debb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:04:56 compute-2 nova_compute[226829]: 2026-01-31 09:04:56.116 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] VM Paused (Lifecycle Event)
Jan 31 09:04:56 compute-2 nova_compute[226829]: 2026-01-31 09:04:56.155 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:04:56 compute-2 nova_compute[226829]: 2026-01-31 09:04:56.157 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769850296.0356944, aed66974-d018-4ccb-9be7-edb5f1e9debb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:04:56 compute-2 nova_compute[226829]: 2026-01-31 09:04:56.158 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] VM Resumed (Lifecycle Event)
Jan 31 09:04:56 compute-2 nova_compute[226829]: 2026-01-31 09:04:56.178 226833 INFO nova.compute.manager [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Took 8.20 seconds to spawn the instance on the hypervisor.
Jan 31 09:04:56 compute-2 nova_compute[226829]: 2026-01-31 09:04:56.179 226833 DEBUG nova.compute.manager [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:04:56 compute-2 nova_compute[226829]: 2026-01-31 09:04:56.180 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:04:56 compute-2 nova_compute[226829]: 2026-01-31 09:04:56.186 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 09:04:56 compute-2 nova_compute[226829]: 2026-01-31 09:04:56.210 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 09:04:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:04:56 compute-2 nova_compute[226829]: 2026-01-31 09:04:56.242 226833 INFO nova.compute.manager [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Took 9.30 seconds to build instance.
Jan 31 09:04:56 compute-2 nova_compute[226829]: 2026-01-31 09:04:56.258 226833 DEBUG oslo_concurrency.lockutils [None req-e2dbc095-8fa1-40eb-bfa3-238b023d73ff 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "aed66974-d018-4ccb-9be7-edb5f1e9debb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.379s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:04:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:04:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:56.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:04:57 compute-2 nova_compute[226829]: 2026-01-31 09:04:57.494 226833 DEBUG nova.compute.manager [req-492c99c2-cd7b-4cd7-8b8f-5063267fa2e1 req-1e5f7669-8d88-45d4-b5f0-d33dd26edab5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Received event network-vif-plugged-6b85e8c9-2a81-4166-8b71-969ab887dc81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:04:57 compute-2 nova_compute[226829]: 2026-01-31 09:04:57.494 226833 DEBUG oslo_concurrency.lockutils [req-492c99c2-cd7b-4cd7-8b8f-5063267fa2e1 req-1e5f7669-8d88-45d4-b5f0-d33dd26edab5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "aed66974-d018-4ccb-9be7-edb5f1e9debb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:04:57 compute-2 nova_compute[226829]: 2026-01-31 09:04:57.495 226833 DEBUG oslo_concurrency.lockutils [req-492c99c2-cd7b-4cd7-8b8f-5063267fa2e1 req-1e5f7669-8d88-45d4-b5f0-d33dd26edab5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aed66974-d018-4ccb-9be7-edb5f1e9debb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:04:57 compute-2 nova_compute[226829]: 2026-01-31 09:04:57.495 226833 DEBUG oslo_concurrency.lockutils [req-492c99c2-cd7b-4cd7-8b8f-5063267fa2e1 req-1e5f7669-8d88-45d4-b5f0-d33dd26edab5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aed66974-d018-4ccb-9be7-edb5f1e9debb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:04:57 compute-2 nova_compute[226829]: 2026-01-31 09:04:57.495 226833 DEBUG nova.compute.manager [req-492c99c2-cd7b-4cd7-8b8f-5063267fa2e1 req-1e5f7669-8d88-45d4-b5f0-d33dd26edab5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] No waiting events found dispatching network-vif-plugged-6b85e8c9-2a81-4166-8b71-969ab887dc81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:04:57 compute-2 nova_compute[226829]: 2026-01-31 09:04:57.495 226833 WARNING nova.compute.manager [req-492c99c2-cd7b-4cd7-8b8f-5063267fa2e1 req-1e5f7669-8d88-45d4-b5f0-d33dd26edab5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Received unexpected event network-vif-plugged-6b85e8c9-2a81-4166-8b71-969ab887dc81 for instance with vm_state active and task_state None.
Jan 31 09:04:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:57.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:58 compute-2 ceph-mon[77282]: pgmap v3994: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 31 09:04:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:58.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:58 compute-2 nova_compute[226829]: 2026-01-31 09:04:58.955 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:59 compute-2 podman[333626]: 2026-01-31 09:04:59.163094653 +0000 UTC m=+0.049077785 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 09:04:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:04:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:59.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:59 compute-2 nova_compute[226829]: 2026-01-31 09:04:59.783 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:00.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:00 compute-2 NetworkManager[48999]: <info>  [1769850300.7496] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/421)
Jan 31 09:05:00 compute-2 nova_compute[226829]: 2026-01-31 09:05:00.749 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:00 compute-2 NetworkManager[48999]: <info>  [1769850300.7507] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/422)
Jan 31 09:05:00 compute-2 ovn_controller[133834]: 2026-01-31T09:05:00Z|00838|binding|INFO|Releasing lport 450fe9a8-1dc3-441f-a491-f547f3ef8fcf from this chassis (sb_readonly=0)
Jan 31 09:05:00 compute-2 nova_compute[226829]: 2026-01-31 09:05:00.762 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:00 compute-2 ovn_controller[133834]: 2026-01-31T09:05:00Z|00839|binding|INFO|Releasing lport 450fe9a8-1dc3-441f-a491-f547f3ef8fcf from this chassis (sb_readonly=0)
Jan 31 09:05:00 compute-2 ceph-mon[77282]: pgmap v3995: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 635 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Jan 31 09:05:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:05:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:01.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:01 compute-2 nova_compute[226829]: 2026-01-31 09:05:01.775 226833 DEBUG nova.compute.manager [req-97a06183-373d-4585-b465-c7b1d61badae req-1332bdff-bf06-4990-a74f-f55de6ec6326 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Received event network-changed-6b85e8c9-2a81-4166-8b71-969ab887dc81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:05:01 compute-2 nova_compute[226829]: 2026-01-31 09:05:01.776 226833 DEBUG nova.compute.manager [req-97a06183-373d-4585-b465-c7b1d61badae req-1332bdff-bf06-4990-a74f-f55de6ec6326 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Refreshing instance network info cache due to event network-changed-6b85e8c9-2a81-4166-8b71-969ab887dc81. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:05:01 compute-2 nova_compute[226829]: 2026-01-31 09:05:01.776 226833 DEBUG oslo_concurrency.lockutils [req-97a06183-373d-4585-b465-c7b1d61badae req-1332bdff-bf06-4990-a74f-f55de6ec6326 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-aed66974-d018-4ccb-9be7-edb5f1e9debb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:05:01 compute-2 nova_compute[226829]: 2026-01-31 09:05:01.777 226833 DEBUG oslo_concurrency.lockutils [req-97a06183-373d-4585-b465-c7b1d61badae req-1332bdff-bf06-4990-a74f-f55de6ec6326 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-aed66974-d018-4ccb-9be7-edb5f1e9debb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:05:01 compute-2 nova_compute[226829]: 2026-01-31 09:05:01.777 226833 DEBUG nova.network.neutron [req-97a06183-373d-4585-b465-c7b1d61badae req-1332bdff-bf06-4990-a74f-f55de6ec6326 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Refreshing network info cache for port 6b85e8c9-2a81-4166-8b71-969ab887dc81 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:05:01 compute-2 ceph-mon[77282]: pgmap v3996: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 31 09:05:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:02.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:03 compute-2 nova_compute[226829]: 2026-01-31 09:05:03.547 226833 DEBUG nova.network.neutron [req-97a06183-373d-4585-b465-c7b1d61badae req-1332bdff-bf06-4990-a74f-f55de6ec6326 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Updated VIF entry in instance network info cache for port 6b85e8c9-2a81-4166-8b71-969ab887dc81. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:05:03 compute-2 nova_compute[226829]: 2026-01-31 09:05:03.548 226833 DEBUG nova.network.neutron [req-97a06183-373d-4585-b465-c7b1d61badae req-1332bdff-bf06-4990-a74f-f55de6ec6326 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Updating instance_info_cache with network_info: [{"id": "6b85e8c9-2a81-4166-8b71-969ab887dc81", "address": "fa:16:3e:98:f9:37", "network": {"id": "32796e45-c70b-4424-b922-8b4f9b4e95e1", "bridge": "br-int", "label": "tempest-network-smoke--1982055243", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b85e8c9-2a", "ovs_interfaceid": "6b85e8c9-2a81-4166-8b71-969ab887dc81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:05:03 compute-2 nova_compute[226829]: 2026-01-31 09:05:03.567 226833 DEBUG oslo_concurrency.lockutils [req-97a06183-373d-4585-b465-c7b1d61badae req-1332bdff-bf06-4990-a74f-f55de6ec6326 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-aed66974-d018-4ccb-9be7-edb5f1e9debb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:05:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:03.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:03 compute-2 ceph-mon[77282]: pgmap v3997: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 557 KiB/s wr, 85 op/s
Jan 31 09:05:03 compute-2 nova_compute[226829]: 2026-01-31 09:05:03.957 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:04.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:04 compute-2 nova_compute[226829]: 2026-01-31 09:05:04.784 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:05.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:05:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:06.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:06 compute-2 ceph-mon[77282]: pgmap v3998: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:05:06 compute-2 sudo[333651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:05:06 compute-2 sudo[333651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:06 compute-2 sudo[333651]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:06 compute-2 sudo[333676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:05:06 compute-2 sudo[333676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:06 compute-2 sudo[333676]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:05:06.947 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:05:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:05:06.949 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:05:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:05:06.949 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:05:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:05:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:07.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:05:08 compute-2 ceph-mon[77282]: pgmap v3999: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:05:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:08.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:08 compute-2 nova_compute[226829]: 2026-01-31 09:05:08.961 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:09 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #61. Immutable memtables: 16.
Jan 31 09:05:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:09.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:09 compute-2 nova_compute[226829]: 2026-01-31 09:05:09.784 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:10 compute-2 ceph-mon[77282]: pgmap v4000: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Jan 31 09:05:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:10.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:05:11 compute-2 ovn_controller[133834]: 2026-01-31T09:05:11Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:98:f9:37 10.100.0.12
Jan 31 09:05:11 compute-2 ovn_controller[133834]: 2026-01-31T09:05:11Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:98:f9:37 10.100.0.12
Jan 31 09:05:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:11.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:12 compute-2 nova_compute[226829]: 2026-01-31 09:05:12.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:05:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:05:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:12.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:05:12 compute-2 ceph-mon[77282]: pgmap v4001: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.5 MiB/s wr, 82 op/s
Jan 31 09:05:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:05:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:13.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:05:13 compute-2 nova_compute[226829]: 2026-01-31 09:05:13.964 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:14 compute-2 ceph-mon[77282]: pgmap v4002: 305 pgs: 305 active+clean; 193 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 266 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Jan 31 09:05:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:05:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:14.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:05:14 compute-2 nova_compute[226829]: 2026-01-31 09:05:14.786 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:15.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:05:16 compute-2 nova_compute[226829]: 2026-01-31 09:05:16.492 226833 INFO nova.compute.manager [None req-22219fee-da1f-423b-9b6a-6b0b65f11f1b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Get console output
Jan 31 09:05:16 compute-2 nova_compute[226829]: 2026-01-31 09:05:16.499 267670 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 31 09:05:16 compute-2 ceph-mon[77282]: pgmap v4003: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 31 09:05:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:05:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:16.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:05:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:17.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:17 compute-2 ceph-mon[77282]: pgmap v4004: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 31 09:05:18 compute-2 nova_compute[226829]: 2026-01-31 09:05:18.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:05:18 compute-2 nova_compute[226829]: 2026-01-31 09:05:18.490 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:05:18 compute-2 ovn_controller[133834]: 2026-01-31T09:05:18Z|00840|binding|INFO|Releasing lport 450fe9a8-1dc3-441f-a491-f547f3ef8fcf from this chassis (sb_readonly=0)
Jan 31 09:05:18 compute-2 nova_compute[226829]: 2026-01-31 09:05:18.596 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:18 compute-2 ovn_controller[133834]: 2026-01-31T09:05:18Z|00841|binding|INFO|Releasing lport 450fe9a8-1dc3-441f-a491-f547f3ef8fcf from this chassis (sb_readonly=0)
Jan 31 09:05:18 compute-2 nova_compute[226829]: 2026-01-31 09:05:18.614 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:18.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:18 compute-2 nova_compute[226829]: 2026-01-31 09:05:18.967 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:19 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3269783881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:05:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:19.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:19 compute-2 nova_compute[226829]: 2026-01-31 09:05:19.821 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:19 compute-2 nova_compute[226829]: 2026-01-31 09:05:19.975 226833 INFO nova.compute.manager [None req-3878aae5-3ff2-43eb-a66d-0a782cb09239 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Get console output
Jan 31 09:05:19 compute-2 nova_compute[226829]: 2026-01-31 09:05:19.979 267670 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 31 09:05:20 compute-2 ceph-mon[77282]: pgmap v4005: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 31 09:05:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2974409001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:05:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:05:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:20.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:05:20 compute-2 nova_compute[226829]: 2026-01-31 09:05:20.824 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:20 compute-2 NetworkManager[48999]: <info>  [1769850320.8248] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/423)
Jan 31 09:05:20 compute-2 NetworkManager[48999]: <info>  [1769850320.8261] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/424)
Jan 31 09:05:20 compute-2 ovn_controller[133834]: 2026-01-31T09:05:20Z|00842|binding|INFO|Releasing lport 450fe9a8-1dc3-441f-a491-f547f3ef8fcf from this chassis (sb_readonly=0)
Jan 31 09:05:20 compute-2 ovn_controller[133834]: 2026-01-31T09:05:20Z|00843|binding|INFO|Releasing lport 450fe9a8-1dc3-441f-a491-f547f3ef8fcf from this chassis (sb_readonly=0)
Jan 31 09:05:20 compute-2 nova_compute[226829]: 2026-01-31 09:05:20.884 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:21 compute-2 nova_compute[226829]: 2026-01-31 09:05:21.174 226833 INFO nova.compute.manager [None req-033a157d-81f2-494e-b10f-57f6fd818fd6 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Get console output
Jan 31 09:05:21 compute-2 nova_compute[226829]: 2026-01-31 09:05:21.179 267670 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 31 09:05:21 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:05:21 compute-2 nova_compute[226829]: 2026-01-31 09:05:21.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:05:21 compute-2 nova_compute[226829]: 2026-01-31 09:05:21.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:05:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:05:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:21.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:05:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:05:21.837 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=105, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=104) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:05:21 compute-2 nova_compute[226829]: 2026-01-31 09:05:21.837 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:21 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:05:21.838 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.077 226833 DEBUG nova.compute.manager [req-1e24ba83-e6bf-4607-a515-d9d846c89b30 req-ddca11c5-29f9-40c4-aedb-29812e003edf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Received event network-changed-6b85e8c9-2a81-4166-8b71-969ab887dc81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.078 226833 DEBUG nova.compute.manager [req-1e24ba83-e6bf-4607-a515-d9d846c89b30 req-ddca11c5-29f9-40c4-aedb-29812e003edf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Refreshing instance network info cache due to event network-changed-6b85e8c9-2a81-4166-8b71-969ab887dc81. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.078 226833 DEBUG oslo_concurrency.lockutils [req-1e24ba83-e6bf-4607-a515-d9d846c89b30 req-ddca11c5-29f9-40c4-aedb-29812e003edf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-aed66974-d018-4ccb-9be7-edb5f1e9debb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.078 226833 DEBUG oslo_concurrency.lockutils [req-1e24ba83-e6bf-4607-a515-d9d846c89b30 req-ddca11c5-29f9-40c4-aedb-29812e003edf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-aed66974-d018-4ccb-9be7-edb5f1e9debb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.078 226833 DEBUG nova.network.neutron [req-1e24ba83-e6bf-4607-a515-d9d846c89b30 req-ddca11c5-29f9-40c4-aedb-29812e003edf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Refreshing network info cache for port 6b85e8c9-2a81-4166-8b71-969ab887dc81 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.181 226833 DEBUG oslo_concurrency.lockutils [None req-282884fb-30f2-4cf0-98c0-f736e9c8b01c 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "aed66974-d018-4ccb-9be7-edb5f1e9debb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.182 226833 DEBUG oslo_concurrency.lockutils [None req-282884fb-30f2-4cf0-98c0-f736e9c8b01c 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "aed66974-d018-4ccb-9be7-edb5f1e9debb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.182 226833 DEBUG oslo_concurrency.lockutils [None req-282884fb-30f2-4cf0-98c0-f736e9c8b01c 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "aed66974-d018-4ccb-9be7-edb5f1e9debb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.182 226833 DEBUG oslo_concurrency.lockutils [None req-282884fb-30f2-4cf0-98c0-f736e9c8b01c 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "aed66974-d018-4ccb-9be7-edb5f1e9debb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.182 226833 DEBUG oslo_concurrency.lockutils [None req-282884fb-30f2-4cf0-98c0-f736e9c8b01c 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "aed66974-d018-4ccb-9be7-edb5f1e9debb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.183 226833 INFO nova.compute.manager [None req-282884fb-30f2-4cf0-98c0-f736e9c8b01c 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Terminating instance
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.184 226833 DEBUG nova.compute.manager [None req-282884fb-30f2-4cf0-98c0-f736e9c8b01c 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 09:05:22 compute-2 kernel: tap6b85e8c9-2a (unregistering): left promiscuous mode
Jan 31 09:05:22 compute-2 NetworkManager[48999]: <info>  [1769850322.2566] device (tap6b85e8c9-2a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 09:05:22 compute-2 ovn_controller[133834]: 2026-01-31T09:05:22Z|00844|binding|INFO|Releasing lport 6b85e8c9-2a81-4166-8b71-969ab887dc81 from this chassis (sb_readonly=0)
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.262 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:22 compute-2 ovn_controller[133834]: 2026-01-31T09:05:22Z|00845|binding|INFO|Setting lport 6b85e8c9-2a81-4166-8b71-969ab887dc81 down in Southbound
Jan 31 09:05:22 compute-2 ovn_controller[133834]: 2026-01-31T09:05:22Z|00846|binding|INFO|Removing iface tap6b85e8c9-2a ovn-installed in OVS
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.267 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.271 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:05:22.275 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:f9:37 10.100.0.12'], port_security=['fa:16:3e:98:f9:37 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'aed66974-d018-4ccb-9be7-edb5f1e9debb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32796e45-c70b-4424-b922-8b4f9b4e95e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '86a74fed-a3fd-444f-8d57-1d7e291f9db8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a2a60c8-4f21-49f4-b6af-c6ef5707163c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=6b85e8c9-2a81-4166-8b71-969ab887dc81) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:05:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:05:22.280 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 6b85e8c9-2a81-4166-8b71-969ab887dc81 in datapath 32796e45-c70b-4424-b922-8b4f9b4e95e1 unbound from our chassis
Jan 31 09:05:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:05:22.284 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 32796e45-c70b-4424-b922-8b4f9b4e95e1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 09:05:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:05:22.286 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[af69e92a-d7c2-4f4c-8dbe-f39561c57a07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:05:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:05:22.287 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-32796e45-c70b-4424-b922-8b4f9b4e95e1 namespace which is not needed anymore
Jan 31 09:05:22 compute-2 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000d7.scope: Deactivated successfully.
Jan 31 09:05:22 compute-2 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000d7.scope: Consumed 13.761s CPU time.
Jan 31 09:05:22 compute-2 systemd-machined[195142]: Machine qemu-95-instance-000000d7 terminated.
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.414 226833 INFO nova.virt.libvirt.driver [-] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Instance destroyed successfully.
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.415 226833 DEBUG nova.objects.instance [None req-282884fb-30f2-4cf0-98c0-f736e9c8b01c 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'resources' on Instance uuid aed66974-d018-4ccb-9be7-edb5f1e9debb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:05:22 compute-2 neutron-haproxy-ovnmeta-32796e45-c70b-4424-b922-8b4f9b4e95e1[333566]: [NOTICE]   (333570) : haproxy version is 2.8.14-c23fe91
Jan 31 09:05:22 compute-2 neutron-haproxy-ovnmeta-32796e45-c70b-4424-b922-8b4f9b4e95e1[333566]: [NOTICE]   (333570) : path to executable is /usr/sbin/haproxy
Jan 31 09:05:22 compute-2 neutron-haproxy-ovnmeta-32796e45-c70b-4424-b922-8b4f9b4e95e1[333566]: [WARNING]  (333570) : Exiting Master process...
Jan 31 09:05:22 compute-2 neutron-haproxy-ovnmeta-32796e45-c70b-4424-b922-8b4f9b4e95e1[333566]: [ALERT]    (333570) : Current worker (333572) exited with code 143 (Terminated)
Jan 31 09:05:22 compute-2 neutron-haproxy-ovnmeta-32796e45-c70b-4424-b922-8b4f9b4e95e1[333566]: [WARNING]  (333570) : All workers exited. Exiting... (0)
Jan 31 09:05:22 compute-2 systemd[1]: libpod-fe50fd7fc8aa3def10faa097ff2f23a76076987276753e8457052100a6e705db.scope: Deactivated successfully.
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.430 226833 DEBUG nova.virt.libvirt.vif [None req-282884fb-30f2-4cf0-98c0-f736e9c8b01c 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:04:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1696854884',display_name='tempest-TestNetworkBasicOps-server-1696854884',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1696854884',id=215,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFdnysb/3hrszEO5uaYEpHUlXY9KvlgwgX7hSgCYCzi3CX34ar4FHuTLUFP/P+TJkt26M2ChmXJSqaY1zvwnjVkDz3oBaMo9Z7vK/IRASGcya9NxPqBEAMv2vcXkZC/m5g==',key_name='tempest-TestNetworkBasicOps-1570738526',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:04:56Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-0hgemvud',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:04:56Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=aed66974-d018-4ccb-9be7-edb5f1e9debb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6b85e8c9-2a81-4166-8b71-969ab887dc81", "address": "fa:16:3e:98:f9:37", "network": {"id": "32796e45-c70b-4424-b922-8b4f9b4e95e1", "bridge": "br-int", "label": "tempest-network-smoke--1982055243", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b85e8c9-2a", "ovs_interfaceid": "6b85e8c9-2a81-4166-8b71-969ab887dc81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.431 226833 DEBUG nova.network.os_vif_util [None req-282884fb-30f2-4cf0-98c0-f736e9c8b01c 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "6b85e8c9-2a81-4166-8b71-969ab887dc81", "address": "fa:16:3e:98:f9:37", "network": {"id": "32796e45-c70b-4424-b922-8b4f9b4e95e1", "bridge": "br-int", "label": "tempest-network-smoke--1982055243", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b85e8c9-2a", "ovs_interfaceid": "6b85e8c9-2a81-4166-8b71-969ab887dc81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.432 226833 DEBUG nova.network.os_vif_util [None req-282884fb-30f2-4cf0-98c0-f736e9c8b01c 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:98:f9:37,bridge_name='br-int',has_traffic_filtering=True,id=6b85e8c9-2a81-4166-8b71-969ab887dc81,network=Network(32796e45-c70b-4424-b922-8b4f9b4e95e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b85e8c9-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.432 226833 DEBUG os_vif [None req-282884fb-30f2-4cf0-98c0-f736e9c8b01c 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:98:f9:37,bridge_name='br-int',has_traffic_filtering=True,id=6b85e8c9-2a81-4166-8b71-969ab887dc81,network=Network(32796e45-c70b-4424-b922-8b4f9b4e95e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b85e8c9-2a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.434 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.435 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b85e8c9-2a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:05:22 compute-2 podman[333732]: 2026-01-31 09:05:22.437174246 +0000 UTC m=+0.078835465 container died fe50fd7fc8aa3def10faa097ff2f23a76076987276753e8457052100a6e705db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32796e45-c70b-4424-b922-8b4f9b4e95e1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.437 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.439 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.442 226833 INFO os_vif [None req-282884fb-30f2-4cf0-98c0-f736e9c8b01c 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:98:f9:37,bridge_name='br-int',has_traffic_filtering=True,id=6b85e8c9-2a81-4166-8b71-969ab887dc81,network=Network(32796e45-c70b-4424-b922-8b4f9b4e95e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b85e8c9-2a')
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.510 226833 DEBUG nova.compute.manager [req-1901b73f-0a81-4fdb-b1bc-5dfc2c585194 req-9e1a9da1-9603-45b2-a140-1ff99377e6b0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Received event network-vif-unplugged-6b85e8c9-2a81-4166-8b71-969ab887dc81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.511 226833 DEBUG oslo_concurrency.lockutils [req-1901b73f-0a81-4fdb-b1bc-5dfc2c585194 req-9e1a9da1-9603-45b2-a140-1ff99377e6b0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "aed66974-d018-4ccb-9be7-edb5f1e9debb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.511 226833 DEBUG oslo_concurrency.lockutils [req-1901b73f-0a81-4fdb-b1bc-5dfc2c585194 req-9e1a9da1-9603-45b2-a140-1ff99377e6b0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aed66974-d018-4ccb-9be7-edb5f1e9debb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.512 226833 DEBUG oslo_concurrency.lockutils [req-1901b73f-0a81-4fdb-b1bc-5dfc2c585194 req-9e1a9da1-9603-45b2-a140-1ff99377e6b0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aed66974-d018-4ccb-9be7-edb5f1e9debb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.512 226833 DEBUG nova.compute.manager [req-1901b73f-0a81-4fdb-b1bc-5dfc2c585194 req-9e1a9da1-9603-45b2-a140-1ff99377e6b0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] No waiting events found dispatching network-vif-unplugged-6b85e8c9-2a81-4166-8b71-969ab887dc81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.512 226833 DEBUG nova.compute.manager [req-1901b73f-0a81-4fdb-b1bc-5dfc2c585194 req-9e1a9da1-9603-45b2-a140-1ff99377e6b0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Received event network-vif-unplugged-6b85e8c9-2a81-4166-8b71-969ab887dc81 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 09:05:22 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fe50fd7fc8aa3def10faa097ff2f23a76076987276753e8457052100a6e705db-userdata-shm.mount: Deactivated successfully.
Jan 31 09:05:22 compute-2 systemd[1]: var-lib-containers-storage-overlay-6b4043d91fd28a366ce178839401a34787cd17703f8f348a8eb563020dd2239f-merged.mount: Deactivated successfully.
Jan 31 09:05:22 compute-2 podman[333732]: 2026-01-31 09:05:22.539317404 +0000 UTC m=+0.180978613 container cleanup fe50fd7fc8aa3def10faa097ff2f23a76076987276753e8457052100a6e705db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32796e45-c70b-4424-b922-8b4f9b4e95e1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true)
Jan 31 09:05:22 compute-2 systemd[1]: libpod-conmon-fe50fd7fc8aa3def10faa097ff2f23a76076987276753e8457052100a6e705db.scope: Deactivated successfully.
Jan 31 09:05:22 compute-2 ceph-mon[77282]: pgmap v4006: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 31 09:05:22 compute-2 podman[333792]: 2026-01-31 09:05:22.608242899 +0000 UTC m=+0.048149112 container remove fe50fd7fc8aa3def10faa097ff2f23a76076987276753e8457052100a6e705db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32796e45-c70b-4424-b922-8b4f9b4e95e1, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 09:05:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:05:22.612 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e806f880-9727-4a9b-8c1a-57492e614b07]: (4, ('Sat Jan 31 09:05:22 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-32796e45-c70b-4424-b922-8b4f9b4e95e1 (fe50fd7fc8aa3def10faa097ff2f23a76076987276753e8457052100a6e705db)\nfe50fd7fc8aa3def10faa097ff2f23a76076987276753e8457052100a6e705db\nSat Jan 31 09:05:22 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-32796e45-c70b-4424-b922-8b4f9b4e95e1 (fe50fd7fc8aa3def10faa097ff2f23a76076987276753e8457052100a6e705db)\nfe50fd7fc8aa3def10faa097ff2f23a76076987276753e8457052100a6e705db\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:05:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:05:22.613 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e7898f50-d8b4-4fec-8ee8-c7e83aa3ffed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:05:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:05:22.615 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32796e45-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.616 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:22 compute-2 kernel: tap32796e45-c0: left promiscuous mode
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.622 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:05:22.625 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[976f4574-0163-45f2-9ce6-661ac0b52d94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:05:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:05:22.640 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[85ba40ec-ce01-4216-a70a-8761059d61c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:05:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:05:22.641 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[184e6a17-30f2-421d-ad74-a54786a0f393]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:05:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:05:22.654 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[05240757-e5f9-497f-a345-8c65ff613761]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1054758, 'reachable_time': 43092, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333805, 'error': None, 'target': 'ovnmeta-32796e45-c70b-4424-b922-8b4f9b4e95e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:05:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:05:22.657 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-32796e45-c70b-4424-b922-8b4f9b4e95e1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 09:05:22 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:05:22.657 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[9e6af402-051f-4e90-88e7-d63709cb4df1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:05:22 compute-2 systemd[1]: run-netns-ovnmeta\x2d32796e45\x2dc70b\x2d4424\x2db922\x2d8b4f9b4e95e1.mount: Deactivated successfully.
Jan 31 09:05:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:22.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.887 226833 INFO nova.virt.libvirt.driver [None req-282884fb-30f2-4cf0-98c0-f736e9c8b01c 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Deleting instance files /var/lib/nova/instances/aed66974-d018-4ccb-9be7-edb5f1e9debb_del
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.888 226833 INFO nova.virt.libvirt.driver [None req-282884fb-30f2-4cf0-98c0-f736e9c8b01c 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Deletion of /var/lib/nova/instances/aed66974-d018-4ccb-9be7-edb5f1e9debb_del complete
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.935 226833 INFO nova.compute.manager [None req-282884fb-30f2-4cf0-98c0-f736e9c8b01c 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Took 0.75 seconds to destroy the instance on the hypervisor.
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.936 226833 DEBUG oslo.service.loopingcall [None req-282884fb-30f2-4cf0-98c0-f736e9c8b01c 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.936 226833 DEBUG nova.compute.manager [-] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 09:05:22 compute-2 nova_compute[226829]: 2026-01-31 09:05:22.937 226833 DEBUG nova.network.neutron [-] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 09:05:23 compute-2 nova_compute[226829]: 2026-01-31 09:05:23.481 226833 DEBUG nova.network.neutron [req-1e24ba83-e6bf-4607-a515-d9d846c89b30 req-ddca11c5-29f9-40c4-aedb-29812e003edf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Updated VIF entry in instance network info cache for port 6b85e8c9-2a81-4166-8b71-969ab887dc81. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:05:23 compute-2 nova_compute[226829]: 2026-01-31 09:05:23.481 226833 DEBUG nova.network.neutron [req-1e24ba83-e6bf-4607-a515-d9d846c89b30 req-ddca11c5-29f9-40c4-aedb-29812e003edf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Updating instance_info_cache with network_info: [{"id": "6b85e8c9-2a81-4166-8b71-969ab887dc81", "address": "fa:16:3e:98:f9:37", "network": {"id": "32796e45-c70b-4424-b922-8b4f9b4e95e1", "bridge": "br-int", "label": "tempest-network-smoke--1982055243", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b85e8c9-2a", "ovs_interfaceid": "6b85e8c9-2a81-4166-8b71-969ab887dc81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:05:23 compute-2 nova_compute[226829]: 2026-01-31 09:05:23.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:05:23 compute-2 nova_compute[226829]: 2026-01-31 09:05:23.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:05:23 compute-2 nova_compute[226829]: 2026-01-31 09:05:23.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:05:23 compute-2 nova_compute[226829]: 2026-01-31 09:05:23.523 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 31 09:05:23 compute-2 nova_compute[226829]: 2026-01-31 09:05:23.523 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:05:23 compute-2 nova_compute[226829]: 2026-01-31 09:05:23.542 226833 DEBUG oslo_concurrency.lockutils [req-1e24ba83-e6bf-4607-a515-d9d846c89b30 req-ddca11c5-29f9-40c4-aedb-29812e003edf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-aed66974-d018-4ccb-9be7-edb5f1e9debb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:05:23 compute-2 nova_compute[226829]: 2026-01-31 09:05:23.620 226833 DEBUG nova.network.neutron [-] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:05:23 compute-2 nova_compute[226829]: 2026-01-31 09:05:23.644 226833 INFO nova.compute.manager [-] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Took 0.71 seconds to deallocate network for instance.
Jan 31 09:05:23 compute-2 nova_compute[226829]: 2026-01-31 09:05:23.686 226833 DEBUG oslo_concurrency.lockutils [None req-282884fb-30f2-4cf0-98c0-f736e9c8b01c 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:05:23 compute-2 nova_compute[226829]: 2026-01-31 09:05:23.686 226833 DEBUG oslo_concurrency.lockutils [None req-282884fb-30f2-4cf0-98c0-f736e9c8b01c 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:05:23 compute-2 nova_compute[226829]: 2026-01-31 09:05:23.726 226833 DEBUG oslo_concurrency.processutils [None req-282884fb-30f2-4cf0-98c0-f736e9c8b01c 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:05:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:23.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:23 compute-2 ceph-mon[77282]: pgmap v4007: 305 pgs: 305 active+clean; 175 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 107 KiB/s rd, 707 KiB/s wr, 31 op/s
Jan 31 09:05:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:05:24 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1473516798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:05:24 compute-2 nova_compute[226829]: 2026-01-31 09:05:24.166 226833 DEBUG nova.compute.manager [req-33d3ebb4-b671-473a-8922-2d38d9ae3528 req-374769d7-c8b1-429a-9f47-84c8d772ceeb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Received event network-vif-deleted-6b85e8c9-2a81-4166-8b71-969ab887dc81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:05:24 compute-2 nova_compute[226829]: 2026-01-31 09:05:24.171 226833 DEBUG oslo_concurrency.processutils [None req-282884fb-30f2-4cf0-98c0-f736e9c8b01c 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:05:24 compute-2 nova_compute[226829]: 2026-01-31 09:05:24.177 226833 DEBUG nova.compute.provider_tree [None req-282884fb-30f2-4cf0-98c0-f736e9c8b01c 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:05:24 compute-2 podman[333828]: 2026-01-31 09:05:24.18665516 +0000 UTC m=+0.067580158 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible)
Jan 31 09:05:24 compute-2 nova_compute[226829]: 2026-01-31 09:05:24.198 226833 DEBUG nova.scheduler.client.report [None req-282884fb-30f2-4cf0-98c0-f736e9c8b01c 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:05:24 compute-2 nova_compute[226829]: 2026-01-31 09:05:24.222 226833 DEBUG oslo_concurrency.lockutils [None req-282884fb-30f2-4cf0-98c0-f736e9c8b01c 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.536s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:05:24 compute-2 nova_compute[226829]: 2026-01-31 09:05:24.259 226833 INFO nova.scheduler.client.report [None req-282884fb-30f2-4cf0-98c0-f736e9c8b01c 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Deleted allocations for instance aed66974-d018-4ccb-9be7-edb5f1e9debb
Jan 31 09:05:24 compute-2 nova_compute[226829]: 2026-01-31 09:05:24.409 226833 DEBUG oslo_concurrency.lockutils [None req-282884fb-30f2-4cf0-98c0-f736e9c8b01c 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "aed66974-d018-4ccb-9be7-edb5f1e9debb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.227s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:05:24 compute-2 nova_compute[226829]: 2026-01-31 09:05:24.584 226833 DEBUG nova.compute.manager [req-a6f0c99c-b470-4c05-930e-d27b8605405e req-c87669c4-777e-4a11-809e-7b815c1291af 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Received event network-vif-plugged-6b85e8c9-2a81-4166-8b71-969ab887dc81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:05:24 compute-2 nova_compute[226829]: 2026-01-31 09:05:24.585 226833 DEBUG oslo_concurrency.lockutils [req-a6f0c99c-b470-4c05-930e-d27b8605405e req-c87669c4-777e-4a11-809e-7b815c1291af 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "aed66974-d018-4ccb-9be7-edb5f1e9debb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:05:24 compute-2 nova_compute[226829]: 2026-01-31 09:05:24.586 226833 DEBUG oslo_concurrency.lockutils [req-a6f0c99c-b470-4c05-930e-d27b8605405e req-c87669c4-777e-4a11-809e-7b815c1291af 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aed66974-d018-4ccb-9be7-edb5f1e9debb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:05:24 compute-2 nova_compute[226829]: 2026-01-31 09:05:24.586 226833 DEBUG oslo_concurrency.lockutils [req-a6f0c99c-b470-4c05-930e-d27b8605405e req-c87669c4-777e-4a11-809e-7b815c1291af 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aed66974-d018-4ccb-9be7-edb5f1e9debb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:05:24 compute-2 nova_compute[226829]: 2026-01-31 09:05:24.586 226833 DEBUG nova.compute.manager [req-a6f0c99c-b470-4c05-930e-d27b8605405e req-c87669c4-777e-4a11-809e-7b815c1291af 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] No waiting events found dispatching network-vif-plugged-6b85e8c9-2a81-4166-8b71-969ab887dc81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:05:24 compute-2 nova_compute[226829]: 2026-01-31 09:05:24.587 226833 WARNING nova.compute.manager [req-a6f0c99c-b470-4c05-930e-d27b8605405e req-c87669c4-777e-4a11-809e-7b815c1291af 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Received unexpected event network-vif-plugged-6b85e8c9-2a81-4166-8b71-969ab887dc81 for instance with vm_state deleted and task_state None.
Jan 31 09:05:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:24.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:24 compute-2 nova_compute[226829]: 2026-01-31 09:05:24.823 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:24 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:05:24.840 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '105'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:05:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1473516798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:05:25 compute-2 nova_compute[226829]: 2026-01-31 09:05:25.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:05:25 compute-2 nova_compute[226829]: 2026-01-31 09:05:25.524 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:05:25 compute-2 nova_compute[226829]: 2026-01-31 09:05:25.524 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:05:25 compute-2 nova_compute[226829]: 2026-01-31 09:05:25.524 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:05:25 compute-2 nova_compute[226829]: 2026-01-31 09:05:25.524 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:05:25 compute-2 nova_compute[226829]: 2026-01-31 09:05:25.525 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:05:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:25.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:05:25 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/313106343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:05:25 compute-2 nova_compute[226829]: 2026-01-31 09:05:25.927 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:05:25 compute-2 ceph-mon[77282]: pgmap v4008: 305 pgs: 305 active+clean; 152 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 73 KiB/s rd, 24 KiB/s wr, 27 op/s
Jan 31 09:05:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/313106343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:05:26 compute-2 nova_compute[226829]: 2026-01-31 09:05:26.073 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:05:26 compute-2 nova_compute[226829]: 2026-01-31 09:05:26.074 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4040MB free_disk=20.956867218017578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:05:26 compute-2 nova_compute[226829]: 2026-01-31 09:05:26.074 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:05:26 compute-2 nova_compute[226829]: 2026-01-31 09:05:26.075 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:05:26 compute-2 nova_compute[226829]: 2026-01-31 09:05:26.215 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:05:26 compute-2 nova_compute[226829]: 2026-01-31 09:05:26.215 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:05:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:05:26 compute-2 nova_compute[226829]: 2026-01-31 09:05:26.231 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:05:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:05:26 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1755861136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:05:26 compute-2 nova_compute[226829]: 2026-01-31 09:05:26.636 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:05:26 compute-2 nova_compute[226829]: 2026-01-31 09:05:26.640 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:05:26 compute-2 nova_compute[226829]: 2026-01-31 09:05:26.673 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:05:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:26.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:26 compute-2 nova_compute[226829]: 2026-01-31 09:05:26.719 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:05:26 compute-2 nova_compute[226829]: 2026-01-31 09:05:26.720 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:05:26 compute-2 sudo[333903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:05:27 compute-2 sudo[333903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1755861136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:05:27 compute-2 sudo[333903]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:27 compute-2 sudo[333928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:05:27 compute-2 sudo[333928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:27 compute-2 sudo[333928]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:27 compute-2 nova_compute[226829]: 2026-01-31 09:05:27.439 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:27 compute-2 nova_compute[226829]: 2026-01-31 09:05:27.721 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:05:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:27.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3415702408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:05:28 compute-2 ceph-mon[77282]: pgmap v4009: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 16 KiB/s wr, 28 op/s
Jan 31 09:05:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2534179813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #193. Immutable memtables: 0.
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:28.074441) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 193
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850328074506, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 2364, "num_deletes": 251, "total_data_size": 5710629, "memory_usage": 5770128, "flush_reason": "Manual Compaction"}
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #194: started
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850328090444, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 194, "file_size": 3744645, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 92907, "largest_seqno": 95266, "table_properties": {"data_size": 3735148, "index_size": 5990, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19595, "raw_average_key_size": 20, "raw_value_size": 3716158, "raw_average_value_size": 3866, "num_data_blocks": 261, "num_entries": 961, "num_filter_entries": 961, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850112, "oldest_key_time": 1769850112, "file_creation_time": 1769850328, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 16066 microseconds, and 6111 cpu microseconds.
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:28.090502) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #194: 3744645 bytes OK
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:28.090523) [db/memtable_list.cc:519] [default] Level-0 commit table #194 started
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:28.092015) [db/memtable_list.cc:722] [default] Level-0 commit table #194: memtable #1 done
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:28.092042) EVENT_LOG_v1 {"time_micros": 1769850328092038, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:28.092060) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 5700370, prev total WAL file size 5700370, number of live WAL files 2.
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000190.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:28.092819) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [194(3656KB)], [192(12MB)]
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850328092937, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [194], "files_L6": [192], "score": -1, "input_data_size": 16877954, "oldest_snapshot_seqno": -1}
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #195: 11459 keys, 14922497 bytes, temperature: kUnknown
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850328184824, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 195, "file_size": 14922497, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14848746, "index_size": 44006, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28677, "raw_key_size": 302440, "raw_average_key_size": 26, "raw_value_size": 14648705, "raw_average_value_size": 1278, "num_data_blocks": 1675, "num_entries": 11459, "num_filter_entries": 11459, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769850328, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 195, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:28.185082) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 14922497 bytes
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:28.186699) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.5 rd, 162.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 12.5 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(8.5) write-amplify(4.0) OK, records in: 11981, records dropped: 522 output_compression: NoCompression
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:28.186714) EVENT_LOG_v1 {"time_micros": 1769850328186706, "job": 124, "event": "compaction_finished", "compaction_time_micros": 91962, "compaction_time_cpu_micros": 57149, "output_level": 6, "num_output_files": 1, "total_output_size": 14922497, "num_input_records": 11981, "num_output_records": 11459, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850328187068, "job": 124, "event": "table_file_deletion", "file_number": 194}
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000192.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850328188280, "job": 124, "event": "table_file_deletion", "file_number": 192}
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:28.092610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:28.188302) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:28.188305) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:28.188306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:28.188308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:05:28 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:28.188309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:05:28 compute-2 nova_compute[226829]: 2026-01-31 09:05:28.534 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:28 compute-2 nova_compute[226829]: 2026-01-31 09:05:28.568 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:05:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:28.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:05:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:29.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:29 compute-2 nova_compute[226829]: 2026-01-31 09:05:29.825 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:30 compute-2 podman[333956]: 2026-01-31 09:05:30.142733776 +0000 UTC m=+0.034874769 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 09:05:30 compute-2 ceph-mon[77282]: pgmap v4010: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 16 KiB/s wr, 28 op/s
Jan 31 09:05:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:30.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:30 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #196. Immutable memtables: 0.
Jan 31 09:05:30 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:30.985340) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 09:05:30 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 196
Jan 31 09:05:30 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850330985409, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 272, "num_deletes": 250, "total_data_size": 71173, "memory_usage": 77344, "flush_reason": "Manual Compaction"}
Jan 31 09:05:30 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #197: started
Jan 31 09:05:30 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850330987591, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 197, "file_size": 45921, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 95271, "largest_seqno": 95538, "table_properties": {"data_size": 44036, "index_size": 113, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5305, "raw_average_key_size": 20, "raw_value_size": 40404, "raw_average_value_size": 153, "num_data_blocks": 5, "num_entries": 263, "num_filter_entries": 263, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850329, "oldest_key_time": 1769850329, "file_creation_time": 1769850330, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:05:30 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 2296 microseconds, and 1008 cpu microseconds.
Jan 31 09:05:30 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:05:30 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:30.987646) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #197: 45921 bytes OK
Jan 31 09:05:30 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:30.987661) [db/memtable_list.cc:519] [default] Level-0 commit table #197 started
Jan 31 09:05:30 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:30.988764) [db/memtable_list.cc:722] [default] Level-0 commit table #197: memtable #1 done
Jan 31 09:05:30 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:30.988778) EVENT_LOG_v1 {"time_micros": 1769850330988774, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 09:05:30 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:30.988796) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 09:05:30 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 69091, prev total WAL file size 69091, number of live WAL files 2.
Jan 31 09:05:30 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000193.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:05:30 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:30.989079) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323630' seq:72057594037927935, type:22 .. '6D6772737461740033353131' seq:0, type:0; will stop at (end)
Jan 31 09:05:30 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 09:05:30 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [197(44KB)], [195(14MB)]
Jan 31 09:05:30 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850330989117, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [197], "files_L6": [195], "score": -1, "input_data_size": 14968418, "oldest_snapshot_seqno": -1}
Jan 31 09:05:31 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #198: 11215 keys, 11145311 bytes, temperature: kUnknown
Jan 31 09:05:31 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850331052634, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 198, "file_size": 11145311, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11078040, "index_size": 38102, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28101, "raw_key_size": 297560, "raw_average_key_size": 26, "raw_value_size": 10887103, "raw_average_value_size": 970, "num_data_blocks": 1428, "num_entries": 11215, "num_filter_entries": 11215, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769850330, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 198, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:05:31 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:05:31 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:31.052837) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 11145311 bytes
Jan 31 09:05:31 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:31.053874) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 235.4 rd, 175.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 14.2 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(568.7) write-amplify(242.7) OK, records in: 11722, records dropped: 507 output_compression: NoCompression
Jan 31 09:05:31 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:31.053889) EVENT_LOG_v1 {"time_micros": 1769850331053882, "job": 126, "event": "compaction_finished", "compaction_time_micros": 63577, "compaction_time_cpu_micros": 22848, "output_level": 6, "num_output_files": 1, "total_output_size": 11145311, "num_input_records": 11722, "num_output_records": 11215, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 09:05:31 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:05:31 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850331053980, "job": 126, "event": "table_file_deletion", "file_number": 197}
Jan 31 09:05:31 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000195.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:05:31 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850331055045, "job": 126, "event": "table_file_deletion", "file_number": 195}
Jan 31 09:05:31 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:30.989003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:05:31 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:31.055066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:05:31 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:31.055070) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:05:31 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:31.055071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:05:31 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:31.055072) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:05:31 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:05:31.055074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:05:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:05:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:31.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:32 compute-2 ceph-mon[77282]: pgmap v4011: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 4.8 KiB/s wr, 28 op/s
Jan 31 09:05:32 compute-2 nova_compute[226829]: 2026-01-31 09:05:32.490 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:32.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:33.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:34 compute-2 ceph-mon[77282]: pgmap v4012: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 27 op/s
Jan 31 09:05:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:05:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:34.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:05:34 compute-2 nova_compute[226829]: 2026-01-31 09:05:34.826 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:35.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:05:36 compute-2 ceph-mon[77282]: pgmap v4013: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 852 B/s wr, 23 op/s
Jan 31 09:05:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:05:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:36.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:05:37 compute-2 nova_compute[226829]: 2026-01-31 09:05:37.413 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769850322.4121778, aed66974-d018-4ccb-9be7-edb5f1e9debb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:05:37 compute-2 nova_compute[226829]: 2026-01-31 09:05:37.414 226833 INFO nova.compute.manager [-] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] VM Stopped (Lifecycle Event)
Jan 31 09:05:37 compute-2 nova_compute[226829]: 2026-01-31 09:05:37.448 226833 DEBUG nova.compute.manager [None req-fb2d2b7a-3a23-4f5d-a8d9-b3e5aaba0ddb - - - - - -] [instance: aed66974-d018-4ccb-9be7-edb5f1e9debb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:05:37 compute-2 nova_compute[226829]: 2026-01-31 09:05:37.494 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:05:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:37.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:05:38 compute-2 ceph-mon[77282]: pgmap v4014: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 511 B/s wr, 11 op/s
Jan 31 09:05:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:05:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:38.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:05:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:39.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:39 compute-2 nova_compute[226829]: 2026-01-31 09:05:39.829 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:40 compute-2 nova_compute[226829]: 2026-01-31 09:05:40.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:05:40 compute-2 nova_compute[226829]: 2026-01-31 09:05:40.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:05:40 compute-2 ceph-mon[77282]: pgmap v4015: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Jan 31 09:05:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:40.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:05:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:41.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:42 compute-2 sudo[333982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:05:42 compute-2 sudo[333982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:42 compute-2 sudo[333982]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:42 compute-2 sudo[334007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:05:42 compute-2 sudo[334007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:42 compute-2 sudo[334007]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:42 compute-2 sudo[334032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:05:42 compute-2 sudo[334032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:42 compute-2 sudo[334032]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:42 compute-2 sudo[334057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:05:42 compute-2 sudo[334057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:42 compute-2 nova_compute[226829]: 2026-01-31 09:05:42.526 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:42 compute-2 ceph-mon[77282]: pgmap v4016: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:05:42 compute-2 sudo[334057]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:42.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:43.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:44.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:44 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:05:44 compute-2 ceph-mon[77282]: pgmap v4017: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:05:44 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:05:44 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:05:44 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:05:44 compute-2 nova_compute[226829]: 2026-01-31 09:05:44.831 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:45.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:05:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:05:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:05:45 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:05:45 compute-2 ceph-mon[77282]: pgmap v4018: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:05:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:05:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:46.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:47 compute-2 sudo[334115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:05:47 compute-2 sudo[334115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:47 compute-2 sudo[334115]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:47 compute-2 sudo[334140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:05:47 compute-2 sudo[334140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:47 compute-2 sudo[334140]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:47 compute-2 nova_compute[226829]: 2026-01-31 09:05:47.530 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:47.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:47 compute-2 ceph-mon[77282]: pgmap v4019: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:05:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:05:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:48.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:05:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:49 compute-2 nova_compute[226829]: 2026-01-31 09:05:49.839 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:05:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:49.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:05:50 compute-2 ceph-mon[77282]: pgmap v4020: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:05:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:50.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:05:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:51.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:51 compute-2 ceph-mon[77282]: pgmap v4021: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:05:52 compute-2 nova_compute[226829]: 2026-01-31 09:05:52.533 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:52.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/825411166' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:05:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/825411166' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:05:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:53.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:54.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:54 compute-2 nova_compute[226829]: 2026-01-31 09:05:54.840 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:55 compute-2 nova_compute[226829]: 2026-01-31 09:05:55.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:05:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:56.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:56 compute-2 ceph-mon[77282]: pgmap v4022: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:05:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:05:56 compute-2 podman[334170]: 2026-01-31 09:05:56.314888226 +0000 UTC m=+0.086378790 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 09:05:56 compute-2 ceph-mon[77282]: pgmap v4023: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:05:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:56.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:57 compute-2 nova_compute[226829]: 2026-01-31 09:05:57.536 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:57 compute-2 sudo[334198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:05:57 compute-2 sudo[334198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:57 compute-2 sudo[334198]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:57 compute-2 sudo[334223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:05:57 compute-2 sudo[334223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:57 compute-2 sudo[334223]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:58.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:58 compute-2 ceph-mon[77282]: pgmap v4024: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:05:58 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:05:58 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:05:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:05:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:58.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:59 compute-2 nova_compute[226829]: 2026-01-31 09:05:59.841 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:59 compute-2 ceph-mon[77282]: pgmap v4025: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:00.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:00.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:01 compute-2 podman[334250]: 2026-01-31 09:06:01.176986893 +0000 UTC m=+0.055147690 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 09:06:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:06:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:02.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:02 compute-2 ceph-mon[77282]: pgmap v4026: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:02 compute-2 nova_compute[226829]: 2026-01-31 09:06:02.578 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:02.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:03 compute-2 ceph-mon[77282]: pgmap v4027: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:04.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:06:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:04.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:06:04 compute-2 nova_compute[226829]: 2026-01-31 09:06:04.895 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:06:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:06.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:06.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:06 compute-2 ceph-mon[77282]: pgmap v4028: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:06.949 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:06:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:06.950 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:06:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:06.950 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:06:07 compute-2 sudo[334272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:06:07 compute-2 sudo[334272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:06:07 compute-2 sudo[334272]: pam_unix(sudo:session): session closed for user root
Jan 31 09:06:07 compute-2 sudo[334297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:06:07 compute-2 sudo[334297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:06:07 compute-2 sudo[334297]: pam_unix(sudo:session): session closed for user root
Jan 31 09:06:07 compute-2 nova_compute[226829]: 2026-01-31 09:06:07.580 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:08 compute-2 ceph-mon[77282]: pgmap v4029: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:08.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:08.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:09 compute-2 nova_compute[226829]: 2026-01-31 09:06:09.898 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:10.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:10 compute-2 ceph-mon[77282]: pgmap v4030: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:06:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:10.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:06:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:06:12 compute-2 ceph-mon[77282]: pgmap v4031: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:12.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:12 compute-2 nova_compute[226829]: 2026-01-31 09:06:12.583 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:06:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:12.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:06:14 compute-2 ceph-mon[77282]: pgmap v4032: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:14.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:14.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:14 compute-2 nova_compute[226829]: 2026-01-31 09:06:14.899 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:15 compute-2 ceph-mon[77282]: pgmap v4033: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:06:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:16.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:16.289 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=106, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=105) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:06:16 compute-2 nova_compute[226829]: 2026-01-31 09:06:16.290 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:16 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:16.291 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:06:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:16.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:16 compute-2 nova_compute[226829]: 2026-01-31 09:06:16.897 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:06:17 compute-2 nova_compute[226829]: 2026-01-31 09:06:17.586 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:18 compute-2 ceph-mon[77282]: pgmap v4034: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:18.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:18 compute-2 nova_compute[226829]: 2026-01-31 09:06:18.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:06:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:06:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:18.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:06:19 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2691059760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:06:19 compute-2 nova_compute[226829]: 2026-01-31 09:06:19.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:06:19 compute-2 nova_compute[226829]: 2026-01-31 09:06:19.901 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:20.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:20 compute-2 ceph-mon[77282]: pgmap v4035: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:20.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:21 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:06:21 compute-2 nova_compute[226829]: 2026-01-31 09:06:21.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:06:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/69904375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:06:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:22.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:22 compute-2 nova_compute[226829]: 2026-01-31 09:06:22.624 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:06:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:22.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:06:23 compute-2 ceph-mon[77282]: pgmap v4036: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:23 compute-2 nova_compute[226829]: 2026-01-31 09:06:23.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:06:24 compute-2 ceph-mon[77282]: pgmap v4037: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:06:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:24.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:06:24 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:24.294 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '106'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:06:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:24.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:24 compute-2 nova_compute[226829]: 2026-01-31 09:06:24.903 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:25 compute-2 nova_compute[226829]: 2026-01-31 09:06:25.348 226833 DEBUG oslo_concurrency.lockutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:06:25 compute-2 nova_compute[226829]: 2026-01-31 09:06:25.349 226833 DEBUG oslo_concurrency.lockutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:06:25 compute-2 nova_compute[226829]: 2026-01-31 09:06:25.387 226833 DEBUG nova.compute.manager [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 09:06:25 compute-2 nova_compute[226829]: 2026-01-31 09:06:25.464 226833 DEBUG oslo_concurrency.lockutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:06:25 compute-2 nova_compute[226829]: 2026-01-31 09:06:25.465 226833 DEBUG oslo_concurrency.lockutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:06:25 compute-2 nova_compute[226829]: 2026-01-31 09:06:25.475 226833 DEBUG nova.virt.hardware [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 09:06:25 compute-2 nova_compute[226829]: 2026-01-31 09:06:25.476 226833 INFO nova.compute.claims [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Claim successful on node compute-2.ctlplane.example.com
Jan 31 09:06:25 compute-2 nova_compute[226829]: 2026-01-31 09:06:25.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:06:25 compute-2 nova_compute[226829]: 2026-01-31 09:06:25.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:06:25 compute-2 nova_compute[226829]: 2026-01-31 09:06:25.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:06:25 compute-2 nova_compute[226829]: 2026-01-31 09:06:25.508 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:06:25 compute-2 nova_compute[226829]: 2026-01-31 09:06:25.601 226833 DEBUG oslo_concurrency.processutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:06:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:06:26 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3771282403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.041 226833 DEBUG oslo_concurrency.processutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.047 226833 DEBUG nova.compute.provider_tree [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.083 226833 DEBUG nova.scheduler.client.report [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.126 226833 DEBUG oslo_concurrency.lockutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.127 226833 DEBUG nova.compute.manager [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.176 226833 DEBUG nova.compute.manager [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.177 226833 DEBUG nova.network.neutron [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.205 226833 INFO nova.virt.libvirt.driver [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.225 226833 DEBUG nova.compute.manager [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 09:06:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:06:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:26.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.399 226833 DEBUG nova.compute.manager [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.402 226833 DEBUG nova.virt.libvirt.driver [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.403 226833 INFO nova.virt.libvirt.driver [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Creating image(s)
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.429 226833 DEBUG nova.storage.rbd_utils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] rbd image 1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.456 226833 DEBUG nova.storage.rbd_utils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] rbd image 1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.481 226833 DEBUG nova.storage.rbd_utils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] rbd image 1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.485 226833 DEBUG oslo_concurrency.processutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.504 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.533 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.534 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.534 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.534 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.534 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.554 226833 DEBUG oslo_concurrency.processutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.555 226833 DEBUG oslo_concurrency.lockutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.556 226833 DEBUG oslo_concurrency.lockutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.556 226833 DEBUG oslo_concurrency.lockutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.586 226833 DEBUG nova.storage.rbd_utils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] rbd image 1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.589 226833 DEBUG oslo_concurrency.processutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:06:26 compute-2 ceph-mon[77282]: pgmap v4038: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3771282403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:06:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:26.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:06:26 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2809980560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:06:26 compute-2 nova_compute[226829]: 2026-01-31 09:06:26.944 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:06:27 compute-2 nova_compute[226829]: 2026-01-31 09:06:27.071 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:06:27 compute-2 nova_compute[226829]: 2026-01-31 09:06:27.072 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4008MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:06:27 compute-2 nova_compute[226829]: 2026-01-31 09:06:27.073 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:06:27 compute-2 nova_compute[226829]: 2026-01-31 09:06:27.073 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:06:27 compute-2 nova_compute[226829]: 2026-01-31 09:06:27.098 226833 DEBUG nova.policy [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4883c0d4a7f54a6898eba5bfdbb41266', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '496e06c7521f45c994e6426c4313acea', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 09:06:27 compute-2 nova_compute[226829]: 2026-01-31 09:06:27.151 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 1fb94592-4c46-41d2-990b-7d5d8d1a7fce actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 09:06:27 compute-2 nova_compute[226829]: 2026-01-31 09:06:27.151 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:06:27 compute-2 nova_compute[226829]: 2026-01-31 09:06:27.151 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:06:27 compute-2 podman[334470]: 2026-01-31 09:06:27.173736943 +0000 UTC m=+0.065738128 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 31 09:06:27 compute-2 nova_compute[226829]: 2026-01-31 09:06:27.205 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:06:27 compute-2 sudo[334516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:06:27 compute-2 sudo[334516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:06:27 compute-2 sudo[334516]: pam_unix(sudo:session): session closed for user root
Jan 31 09:06:27 compute-2 sudo[334541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:06:27 compute-2 sudo[334541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:06:27 compute-2 sudo[334541]: pam_unix(sudo:session): session closed for user root
Jan 31 09:06:27 compute-2 nova_compute[226829]: 2026-01-31 09:06:27.505 226833 DEBUG oslo_concurrency.processutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.916s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:06:27 compute-2 nova_compute[226829]: 2026-01-31 09:06:27.571 226833 DEBUG nova.storage.rbd_utils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] resizing rbd image 1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 09:06:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:06:27 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/30593137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:06:27 compute-2 nova_compute[226829]: 2026-01-31 09:06:27.621 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:06:27 compute-2 nova_compute[226829]: 2026-01-31 09:06:27.650 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:27 compute-2 nova_compute[226829]: 2026-01-31 09:06:27.656 226833 DEBUG nova.objects.instance [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lazy-loading 'migration_context' on Instance uuid 1fb94592-4c46-41d2-990b-7d5d8d1a7fce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:06:27 compute-2 nova_compute[226829]: 2026-01-31 09:06:27.659 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:06:27 compute-2 nova_compute[226829]: 2026-01-31 09:06:27.708 226833 DEBUG nova.virt.libvirt.driver [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 09:06:27 compute-2 nova_compute[226829]: 2026-01-31 09:06:27.708 226833 DEBUG nova.virt.libvirt.driver [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Ensure instance console log exists: /var/lib/nova/instances/1fb94592-4c46-41d2-990b-7d5d8d1a7fce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 09:06:27 compute-2 nova_compute[226829]: 2026-01-31 09:06:27.709 226833 DEBUG oslo_concurrency.lockutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:06:27 compute-2 nova_compute[226829]: 2026-01-31 09:06:27.709 226833 DEBUG oslo_concurrency.lockutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:06:27 compute-2 nova_compute[226829]: 2026-01-31 09:06:27.709 226833 DEBUG oslo_concurrency.lockutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:06:27 compute-2 nova_compute[226829]: 2026-01-31 09:06:27.733 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:06:27 compute-2 nova_compute[226829]: 2026-01-31 09:06:27.785 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:06:27 compute-2 nova_compute[226829]: 2026-01-31 09:06:27.785 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:06:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2809980560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:06:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/656439385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:06:28 compute-2 ceph-mon[77282]: pgmap v4039: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/30593137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:06:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1687748023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:06:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:06:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:28.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:06:28 compute-2 nova_compute[226829]: 2026-01-31 09:06:28.770 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:06:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:28.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:29 compute-2 nova_compute[226829]: 2026-01-31 09:06:29.904 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:29 compute-2 ceph-mon[77282]: pgmap v4040: 305 pgs: 305 active+clean; 128 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 86 KiB/s wr, 0 op/s
Jan 31 09:06:30 compute-2 nova_compute[226829]: 2026-01-31 09:06:30.074 226833 DEBUG nova.network.neutron [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Successfully created port: 32656689-8c91-4c26-8aea-d5aaac071876 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 09:06:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:30.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:30.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:31 compute-2 nova_compute[226829]: 2026-01-31 09:06:31.096 226833 DEBUG nova.network.neutron [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Successfully updated port: 32656689-8c91-4c26-8aea-d5aaac071876 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 09:06:31 compute-2 nova_compute[226829]: 2026-01-31 09:06:31.129 226833 DEBUG oslo_concurrency.lockutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "refresh_cache-1fb94592-4c46-41d2-990b-7d5d8d1a7fce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:06:31 compute-2 nova_compute[226829]: 2026-01-31 09:06:31.129 226833 DEBUG oslo_concurrency.lockutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquired lock "refresh_cache-1fb94592-4c46-41d2-990b-7d5d8d1a7fce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:06:31 compute-2 nova_compute[226829]: 2026-01-31 09:06:31.129 226833 DEBUG nova.network.neutron [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 09:06:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:06:31 compute-2 nova_compute[226829]: 2026-01-31 09:06:31.235 226833 DEBUG nova.compute.manager [req-eca8f44a-ba99-4ae8-83ff-09b5db4c6a73 req-ccb96930-95b3-422b-814e-e1812a3add70 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Received event network-changed-32656689-8c91-4c26-8aea-d5aaac071876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:06:31 compute-2 nova_compute[226829]: 2026-01-31 09:06:31.236 226833 DEBUG nova.compute.manager [req-eca8f44a-ba99-4ae8-83ff-09b5db4c6a73 req-ccb96930-95b3-422b-814e-e1812a3add70 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Refreshing instance network info cache due to event network-changed-32656689-8c91-4c26-8aea-d5aaac071876. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:06:31 compute-2 nova_compute[226829]: 2026-01-31 09:06:31.237 226833 DEBUG oslo_concurrency.lockutils [req-eca8f44a-ba99-4ae8-83ff-09b5db4c6a73 req-ccb96930-95b3-422b-814e-e1812a3add70 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-1fb94592-4c46-41d2-990b-7d5d8d1a7fce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:06:31 compute-2 nova_compute[226829]: 2026-01-31 09:06:31.301 226833 DEBUG nova.network.neutron [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 09:06:31 compute-2 ceph-mon[77282]: pgmap v4041: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:06:32 compute-2 ovn_controller[133834]: 2026-01-31T09:06:32Z|00847|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 31 09:06:32 compute-2 podman[334642]: 2026-01-31 09:06:32.148717951 +0000 UTC m=+0.041332345 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 31 09:06:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:32.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:32 compute-2 nova_compute[226829]: 2026-01-31 09:06:32.653 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:32.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:33 compute-2 nova_compute[226829]: 2026-01-31 09:06:33.107 226833 DEBUG nova.network.neutron [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Updating instance_info_cache with network_info: [{"id": "32656689-8c91-4c26-8aea-d5aaac071876", "address": "fa:16:3e:3d:2a:d0", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32656689-8c", "ovs_interfaceid": "32656689-8c91-4c26-8aea-d5aaac071876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:06:33 compute-2 nova_compute[226829]: 2026-01-31 09:06:33.126 226833 DEBUG oslo_concurrency.lockutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Releasing lock "refresh_cache-1fb94592-4c46-41d2-990b-7d5d8d1a7fce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:06:33 compute-2 nova_compute[226829]: 2026-01-31 09:06:33.126 226833 DEBUG nova.compute.manager [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Instance network_info: |[{"id": "32656689-8c91-4c26-8aea-d5aaac071876", "address": "fa:16:3e:3d:2a:d0", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32656689-8c", "ovs_interfaceid": "32656689-8c91-4c26-8aea-d5aaac071876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 09:06:33 compute-2 nova_compute[226829]: 2026-01-31 09:06:33.127 226833 DEBUG oslo_concurrency.lockutils [req-eca8f44a-ba99-4ae8-83ff-09b5db4c6a73 req-ccb96930-95b3-422b-814e-e1812a3add70 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-1fb94592-4c46-41d2-990b-7d5d8d1a7fce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:06:33 compute-2 nova_compute[226829]: 2026-01-31 09:06:33.127 226833 DEBUG nova.network.neutron [req-eca8f44a-ba99-4ae8-83ff-09b5db4c6a73 req-ccb96930-95b3-422b-814e-e1812a3add70 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Refreshing network info cache for port 32656689-8c91-4c26-8aea-d5aaac071876 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:06:33 compute-2 nova_compute[226829]: 2026-01-31 09:06:33.131 226833 DEBUG nova.virt.libvirt.driver [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Start _get_guest_xml network_info=[{"id": "32656689-8c91-4c26-8aea-d5aaac071876", "address": "fa:16:3e:3d:2a:d0", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32656689-8c", "ovs_interfaceid": "32656689-8c91-4c26-8aea-d5aaac071876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'guest_format': None, 'encrypted': False, 'device_type': 'disk', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 09:06:33 compute-2 nova_compute[226829]: 2026-01-31 09:06:33.135 226833 WARNING nova.virt.libvirt.driver [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:06:33 compute-2 nova_compute[226829]: 2026-01-31 09:06:33.143 226833 DEBUG nova.virt.libvirt.host [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 09:06:33 compute-2 nova_compute[226829]: 2026-01-31 09:06:33.143 226833 DEBUG nova.virt.libvirt.host [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 09:06:33 compute-2 nova_compute[226829]: 2026-01-31 09:06:33.163 226833 DEBUG nova.virt.libvirt.host [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 09:06:33 compute-2 nova_compute[226829]: 2026-01-31 09:06:33.164 226833 DEBUG nova.virt.libvirt.host [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 09:06:33 compute-2 nova_compute[226829]: 2026-01-31 09:06:33.165 226833 DEBUG nova.virt.libvirt.driver [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 09:06:33 compute-2 nova_compute[226829]: 2026-01-31 09:06:33.165 226833 DEBUG nova.virt.hardware [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 09:06:33 compute-2 nova_compute[226829]: 2026-01-31 09:06:33.165 226833 DEBUG nova.virt.hardware [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 09:06:33 compute-2 nova_compute[226829]: 2026-01-31 09:06:33.165 226833 DEBUG nova.virt.hardware [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 09:06:33 compute-2 nova_compute[226829]: 2026-01-31 09:06:33.165 226833 DEBUG nova.virt.hardware [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 09:06:33 compute-2 nova_compute[226829]: 2026-01-31 09:06:33.166 226833 DEBUG nova.virt.hardware [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 09:06:33 compute-2 nova_compute[226829]: 2026-01-31 09:06:33.166 226833 DEBUG nova.virt.hardware [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 09:06:33 compute-2 nova_compute[226829]: 2026-01-31 09:06:33.166 226833 DEBUG nova.virt.hardware [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 09:06:33 compute-2 nova_compute[226829]: 2026-01-31 09:06:33.166 226833 DEBUG nova.virt.hardware [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 09:06:33 compute-2 nova_compute[226829]: 2026-01-31 09:06:33.166 226833 DEBUG nova.virt.hardware [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 09:06:33 compute-2 nova_compute[226829]: 2026-01-31 09:06:33.166 226833 DEBUG nova.virt.hardware [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 09:06:33 compute-2 nova_compute[226829]: 2026-01-31 09:06:33.166 226833 DEBUG nova.virt.hardware [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 09:06:33 compute-2 nova_compute[226829]: 2026-01-31 09:06:33.169 226833 DEBUG oslo_concurrency.processutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:06:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 09:06:33 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3575391715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:06:33 compute-2 nova_compute[226829]: 2026-01-31 09:06:33.616 226833 DEBUG oslo_concurrency.processutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:06:33 compute-2 nova_compute[226829]: 2026-01-31 09:06:33.641 226833 DEBUG nova.storage.rbd_utils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] rbd image 1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:06:33 compute-2 nova_compute[226829]: 2026-01-31 09:06:33.645 226833 DEBUG oslo_concurrency.processutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:06:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3575391715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:06:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 09:06:34 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1424143602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.062 226833 DEBUG oslo_concurrency.processutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.064 226833 DEBUG nova.virt.libvirt.vif [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:06:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-863769904',display_name='tempest-TestShelveInstance-server-863769904',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testshelveinstance-server-863769904',id=216,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO97h7ZlFklCcOMhjjvfpHATJVRthvPNErjo8kSCbjC05jkmSen4F2QOXVYh+JGW6zHXigZL/fAl9BFzIpJo57WrH+bNNyDWYa77kRt0fmFgaUAwVEYhYw4aHgZ+3T7mqQ==',key_name='tempest-TestShelveInstance-1892807111',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='496e06c7521f45c994e6426c4313acea',ramdisk_id='',reservation_id='r-wmbqilfc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1485729988',owner_user_name='tempest-TestShelveInstance-1485729988-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:06:26Z,user_data=None,user_id='4883c0d4a7f54a6898eba5bfdbb41266',uuid=1fb94592-4c46-41d2-990b-7d5d8d1a7fce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "32656689-8c91-4c26-8aea-d5aaac071876", "address": "fa:16:3e:3d:2a:d0", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32656689-8c", "ovs_interfaceid": "32656689-8c91-4c26-8aea-d5aaac071876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.065 226833 DEBUG nova.network.os_vif_util [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Converting VIF {"id": "32656689-8c91-4c26-8aea-d5aaac071876", "address": "fa:16:3e:3d:2a:d0", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32656689-8c", "ovs_interfaceid": "32656689-8c91-4c26-8aea-d5aaac071876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.066 226833 DEBUG nova.network.os_vif_util [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:2a:d0,bridge_name='br-int',has_traffic_filtering=True,id=32656689-8c91-4c26-8aea-d5aaac071876,network=Network(1d742d07-ac6a-4870-9712-15a33a8a1e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32656689-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.067 226833 DEBUG nova.objects.instance [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lazy-loading 'pci_devices' on Instance uuid 1fb94592-4c46-41d2-990b-7d5d8d1a7fce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.085 226833 DEBUG nova.virt.libvirt.driver [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] End _get_guest_xml xml=<domain type="kvm">
Jan 31 09:06:34 compute-2 nova_compute[226829]:   <uuid>1fb94592-4c46-41d2-990b-7d5d8d1a7fce</uuid>
Jan 31 09:06:34 compute-2 nova_compute[226829]:   <name>instance-000000d8</name>
Jan 31 09:06:34 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 09:06:34 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 09:06:34 compute-2 nova_compute[226829]:   <metadata>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <nova:name>tempest-TestShelveInstance-server-863769904</nova:name>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 09:06:33</nova:creationTime>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 09:06:34 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 09:06:34 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 09:06:34 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 09:06:34 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 09:06:34 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 09:06:34 compute-2 nova_compute[226829]:         <nova:user uuid="4883c0d4a7f54a6898eba5bfdbb41266">tempest-TestShelveInstance-1485729988-project-member</nova:user>
Jan 31 09:06:34 compute-2 nova_compute[226829]:         <nova:project uuid="496e06c7521f45c994e6426c4313acea">tempest-TestShelveInstance-1485729988</nova:project>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 09:06:34 compute-2 nova_compute[226829]:         <nova:port uuid="32656689-8c91-4c26-8aea-d5aaac071876">
Jan 31 09:06:34 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 09:06:34 compute-2 nova_compute[226829]:   </metadata>
Jan 31 09:06:34 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <system>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <entry name="serial">1fb94592-4c46-41d2-990b-7d5d8d1a7fce</entry>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <entry name="uuid">1fb94592-4c46-41d2-990b-7d5d8d1a7fce</entry>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     </system>
Jan 31 09:06:34 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 09:06:34 compute-2 nova_compute[226829]:   <os>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:   </os>
Jan 31 09:06:34 compute-2 nova_compute[226829]:   <features>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <apic/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:   </features>
Jan 31 09:06:34 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:   </clock>
Jan 31 09:06:34 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:   </cpu>
Jan 31 09:06:34 compute-2 nova_compute[226829]:   <devices>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk">
Jan 31 09:06:34 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       </source>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 09:06:34 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       </auth>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     </disk>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk.config">
Jan 31 09:06:34 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       </source>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 09:06:34 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       </auth>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     </disk>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:3d:2a:d0"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <target dev="tap32656689-8c"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     </interface>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/1fb94592-4c46-41d2-990b-7d5d8d1a7fce/console.log" append="off"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     </serial>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <video>
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     </video>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     </rng>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 09:06:34 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 09:06:34 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 09:06:34 compute-2 nova_compute[226829]:   </devices>
Jan 31 09:06:34 compute-2 nova_compute[226829]: </domain>
Jan 31 09:06:34 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.087 226833 DEBUG nova.compute.manager [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Preparing to wait for external event network-vif-plugged-32656689-8c91-4c26-8aea-d5aaac071876 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.087 226833 DEBUG oslo_concurrency.lockutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.088 226833 DEBUG oslo_concurrency.lockutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.088 226833 DEBUG oslo_concurrency.lockutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.089 226833 DEBUG nova.virt.libvirt.vif [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:06:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-863769904',display_name='tempest-TestShelveInstance-server-863769904',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testshelveinstance-server-863769904',id=216,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO97h7ZlFklCcOMhjjvfpHATJVRthvPNErjo8kSCbjC05jkmSen4F2QOXVYh+JGW6zHXigZL/fAl9BFzIpJo57WrH+bNNyDWYa77kRt0fmFgaUAwVEYhYw4aHgZ+3T7mqQ==',key_name='tempest-TestShelveInstance-1892807111',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='496e06c7521f45c994e6426c4313acea',ramdisk_id='',reservation_id='r-wmbqilfc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1485729988',owner_user_name='tempest-TestShelveInstance-1485729988-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:06:26Z,user_data=None,user_id='4883c0d4a7f54a6898eba5bfdbb41266',uuid=1fb94592-4c46-41d2-990b-7d5d8d1a7fce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "32656689-8c91-4c26-8aea-d5aaac071876", "address": "fa:16:3e:3d:2a:d0", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32656689-8c", "ovs_interfaceid": "32656689-8c91-4c26-8aea-d5aaac071876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.089 226833 DEBUG nova.network.os_vif_util [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Converting VIF {"id": "32656689-8c91-4c26-8aea-d5aaac071876", "address": "fa:16:3e:3d:2a:d0", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32656689-8c", "ovs_interfaceid": "32656689-8c91-4c26-8aea-d5aaac071876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.090 226833 DEBUG nova.network.os_vif_util [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:2a:d0,bridge_name='br-int',has_traffic_filtering=True,id=32656689-8c91-4c26-8aea-d5aaac071876,network=Network(1d742d07-ac6a-4870-9712-15a33a8a1e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32656689-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.090 226833 DEBUG os_vif [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:2a:d0,bridge_name='br-int',has_traffic_filtering=True,id=32656689-8c91-4c26-8aea-d5aaac071876,network=Network(1d742d07-ac6a-4870-9712-15a33a8a1e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32656689-8c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.091 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.091 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.092 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.097 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.097 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32656689-8c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.098 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap32656689-8c, col_values=(('external_ids', {'iface-id': '32656689-8c91-4c26-8aea-d5aaac071876', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:2a:d0', 'vm-uuid': '1fb94592-4c46-41d2-990b-7d5d8d1a7fce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.099 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:34 compute-2 NetworkManager[48999]: <info>  [1769850394.1009] manager: (tap32656689-8c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/425)
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.102 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.106 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.107 226833 INFO os_vif [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:2a:d0,bridge_name='br-int',has_traffic_filtering=True,id=32656689-8c91-4c26-8aea-d5aaac071876,network=Network(1d742d07-ac6a-4870-9712-15a33a8a1e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32656689-8c')
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.155 226833 DEBUG nova.virt.libvirt.driver [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.155 226833 DEBUG nova.virt.libvirt.driver [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.156 226833 DEBUG nova.virt.libvirt.driver [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] No VIF found with MAC fa:16:3e:3d:2a:d0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.157 226833 INFO nova.virt.libvirt.driver [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Using config drive
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.187 226833 DEBUG nova.storage.rbd_utils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] rbd image 1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:06:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:34.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.557 226833 DEBUG nova.network.neutron [req-eca8f44a-ba99-4ae8-83ff-09b5db4c6a73 req-ccb96930-95b3-422b-814e-e1812a3add70 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Updated VIF entry in instance network info cache for port 32656689-8c91-4c26-8aea-d5aaac071876. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.558 226833 DEBUG nova.network.neutron [req-eca8f44a-ba99-4ae8-83ff-09b5db4c6a73 req-ccb96930-95b3-422b-814e-e1812a3add70 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Updating instance_info_cache with network_info: [{"id": "32656689-8c91-4c26-8aea-d5aaac071876", "address": "fa:16:3e:3d:2a:d0", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32656689-8c", "ovs_interfaceid": "32656689-8c91-4c26-8aea-d5aaac071876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.574 226833 INFO nova.virt.libvirt.driver [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Creating config drive at /var/lib/nova/instances/1fb94592-4c46-41d2-990b-7d5d8d1a7fce/disk.config
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.579 226833 DEBUG oslo_concurrency.processutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1fb94592-4c46-41d2-990b-7d5d8d1a7fce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpi23dzw63 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.603 226833 DEBUG oslo_concurrency.lockutils [req-eca8f44a-ba99-4ae8-83ff-09b5db4c6a73 req-ccb96930-95b3-422b-814e-e1812a3add70 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-1fb94592-4c46-41d2-990b-7d5d8d1a7fce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.707 226833 DEBUG oslo_concurrency.processutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1fb94592-4c46-41d2-990b-7d5d8d1a7fce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpi23dzw63" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.734 226833 DEBUG nova.storage.rbd_utils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] rbd image 1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.738 226833 DEBUG oslo_concurrency.processutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1fb94592-4c46-41d2-990b-7d5d8d1a7fce/disk.config 1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:06:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:34.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:34 compute-2 ceph-mon[77282]: pgmap v4042: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:06:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1424143602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:06:34 compute-2 nova_compute[226829]: 2026-01-31 09:06:34.906 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:35 compute-2 nova_compute[226829]: 2026-01-31 09:06:35.599 226833 DEBUG oslo_concurrency.processutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1fb94592-4c46-41d2-990b-7d5d8d1a7fce/disk.config 1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.861s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:06:35 compute-2 nova_compute[226829]: 2026-01-31 09:06:35.599 226833 INFO nova.virt.libvirt.driver [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Deleting local config drive /var/lib/nova/instances/1fb94592-4c46-41d2-990b-7d5d8d1a7fce/disk.config because it was imported into RBD.
Jan 31 09:06:35 compute-2 kernel: tap32656689-8c: entered promiscuous mode
Jan 31 09:06:35 compute-2 nova_compute[226829]: 2026-01-31 09:06:35.633 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:35 compute-2 ovn_controller[133834]: 2026-01-31T09:06:35Z|00848|binding|INFO|Claiming lport 32656689-8c91-4c26-8aea-d5aaac071876 for this chassis.
Jan 31 09:06:35 compute-2 ovn_controller[133834]: 2026-01-31T09:06:35Z|00849|binding|INFO|32656689-8c91-4c26-8aea-d5aaac071876: Claiming fa:16:3e:3d:2a:d0 10.100.0.12
Jan 31 09:06:35 compute-2 NetworkManager[48999]: <info>  [1769850395.6357] manager: (tap32656689-8c): new Tun device (/org/freedesktop/NetworkManager/Devices/426)
Jan 31 09:06:35 compute-2 nova_compute[226829]: 2026-01-31 09:06:35.636 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:35 compute-2 nova_compute[226829]: 2026-01-31 09:06:35.641 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:35.648 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:2a:d0 10.100.0.12'], port_security=['fa:16:3e:3d:2a:d0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '1fb94592-4c46-41d2-990b-7d5d8d1a7fce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '496e06c7521f45c994e6426c4313acea', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e909f648-e644-4ddc-8790-deca52a25b73', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=62ab62d6-b781-4309-8775-90b43197869a, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=32656689-8c91-4c26-8aea-d5aaac071876) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:35.649 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 32656689-8c91-4c26-8aea-d5aaac071876 in datapath 1d742d07-ac6a-4870-9712-15a33a8a1e71 bound to our chassis
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:35.651 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1d742d07-ac6a-4870-9712-15a33a8a1e71
Jan 31 09:06:35 compute-2 systemd-machined[195142]: New machine qemu-96-instance-000000d8.
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:35.659 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0853460a-d9c4-4b3f-ae4b-9f06b8b44dba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:35.661 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1d742d07-a1 in ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 09:06:35 compute-2 nova_compute[226829]: 2026-01-31 09:06:35.663 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:35.663 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1d742d07-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:35.663 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b9d456d6-0f80-4cad-b77a-ebeb8d381d84]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:35.664 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0407b17f-1862-4cfd-99d9-be6e63839c38]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:06:35 compute-2 systemd-udevd[334800]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 09:06:35 compute-2 ovn_controller[133834]: 2026-01-31T09:06:35Z|00850|binding|INFO|Setting lport 32656689-8c91-4c26-8aea-d5aaac071876 ovn-installed in OVS
Jan 31 09:06:35 compute-2 ovn_controller[133834]: 2026-01-31T09:06:35Z|00851|binding|INFO|Setting lport 32656689-8c91-4c26-8aea-d5aaac071876 up in Southbound
Jan 31 09:06:35 compute-2 nova_compute[226829]: 2026-01-31 09:06:35.667 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:35 compute-2 systemd[1]: Started Virtual Machine qemu-96-instance-000000d8.
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:35.674 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[a768d92f-cee2-4ac7-b159-97f45d09204b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:06:35 compute-2 NetworkManager[48999]: <info>  [1769850395.6803] device (tap32656689-8c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 09:06:35 compute-2 NetworkManager[48999]: <info>  [1769850395.6810] device (tap32656689-8c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:35.686 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[aabd801f-fa54-42cb-a116-4094c26ea63d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:35.705 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[f56fffd8-b2fe-4e7b-a436-b9c1d3c8e0bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:06:35 compute-2 systemd-udevd[334803]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 09:06:35 compute-2 NetworkManager[48999]: <info>  [1769850395.7188] manager: (tap1d742d07-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/427)
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:35.717 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[5364cd00-5447-42c4-b339-62fe96613fd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:35.744 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[ddec6883-cd84-47c5-8c56-df53fdaf1291]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:35.748 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[33536bc0-07f9-4308-8b57-783e7741e339]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:06:35 compute-2 NetworkManager[48999]: <info>  [1769850395.7671] device (tap1d742d07-a0): carrier: link connected
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:35.772 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[816dc27e-f221-4e9f-8aa6-e85ec7792847]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:35.782 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[71d3b412-2f5e-4433-8384-660403fad1d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1d742d07-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:3d:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 265], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1064824, 'reachable_time': 23548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334832, 'error': None, 'target': 'ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:35.795 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3d8a66e5-6d0f-4390-a15a-6554f6a7ac7b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0f:3d77'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1064824, 'tstamp': 1064824}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334833, 'error': None, 'target': 'ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:35.806 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ea0b0609-f541-415e-bad6-53da32ca3ea2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1d742d07-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:3d:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 265], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1064824, 'reachable_time': 23548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 334834, 'error': None, 'target': 'ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:35.827 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[aa3af971-feeb-4dee-851a-54ecb6ca3cb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:35.872 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3a9f4b08-a82c-4a34-9052-627f48c43121]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:35.874 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d742d07-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:35.874 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:35.874 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d742d07-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:06:35 compute-2 nova_compute[226829]: 2026-01-31 09:06:35.876 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:35 compute-2 NetworkManager[48999]: <info>  [1769850395.8766] manager: (tap1d742d07-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/428)
Jan 31 09:06:35 compute-2 kernel: tap1d742d07-a0: entered promiscuous mode
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:35.878 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1d742d07-a0, col_values=(('external_ids', {'iface-id': 'ff7f72f7-b69e-4d38-bd70-12b9fe05b593'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:06:35 compute-2 nova_compute[226829]: 2026-01-31 09:06:35.879 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:35 compute-2 ovn_controller[133834]: 2026-01-31T09:06:35Z|00852|binding|INFO|Releasing lport ff7f72f7-b69e-4d38-bd70-12b9fe05b593 from this chassis (sb_readonly=0)
Jan 31 09:06:35 compute-2 nova_compute[226829]: 2026-01-31 09:06:35.885 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:35.886 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1d742d07-ac6a-4870-9712-15a33a8a1e71.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1d742d07-ac6a-4870-9712-15a33a8a1e71.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:35.887 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8f7112cb-ef60-4556-84fc-aa80912e94a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:35.888 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: global
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-1d742d07-ac6a-4870-9712-15a33a8a1e71
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/1d742d07-ac6a-4870-9712-15a33a8a1e71.pid.haproxy
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 1d742d07-ac6a-4870-9712-15a33a8a1e71
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 09:06:35 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:06:35.888 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'env', 'PROCESS_TAG=haproxy-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1d742d07-ac6a-4870-9712-15a33a8a1e71.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 09:06:35 compute-2 ceph-mon[77282]: pgmap v4043: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:06:36 compute-2 podman[334866]: 2026-01-31 09:06:36.210932686 +0000 UTC m=+0.049804715 container create 8352557d4ee1cca47690e301728d90d2caac9c3c3ffad69a3e1b4ea3190f7aa2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 09:06:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:06:36 compute-2 systemd[1]: Started libpod-conmon-8352557d4ee1cca47690e301728d90d2caac9c3c3ffad69a3e1b4ea3190f7aa2.scope.
Jan 31 09:06:36 compute-2 systemd[1]: Started libcrun container.
Jan 31 09:06:36 compute-2 podman[334866]: 2026-01-31 09:06:36.179744328 +0000 UTC m=+0.018616377 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 09:06:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:36.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:36 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bf6c2a7e56cd9353627dede985d84c4a6e5071db7a1ca4db5e9f5fd426b3065/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 09:06:36 compute-2 podman[334866]: 2026-01-31 09:06:36.316379664 +0000 UTC m=+0.155251713 container init 8352557d4ee1cca47690e301728d90d2caac9c3c3ffad69a3e1b4ea3190f7aa2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:06:36 compute-2 podman[334866]: 2026-01-31 09:06:36.321073831 +0000 UTC m=+0.159945860 container start 8352557d4ee1cca47690e301728d90d2caac9c3c3ffad69a3e1b4ea3190f7aa2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 09:06:36 compute-2 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[334919]: [NOTICE]   (334926) : New worker (334928) forked
Jan 31 09:06:36 compute-2 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[334919]: [NOTICE]   (334926) : Loading success.
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.365 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769850396.3647094, 1fb94592-4c46-41d2-990b-7d5d8d1a7fce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.366 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] VM Started (Lifecycle Event)
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.423 226833 DEBUG nova.compute.manager [req-16fef5b9-957a-4198-ad2a-41a25a84d39d req-3a24a75e-7a20-4080-a901-d650c13c3e82 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Received event network-vif-plugged-32656689-8c91-4c26-8aea-d5aaac071876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.424 226833 DEBUG oslo_concurrency.lockutils [req-16fef5b9-957a-4198-ad2a-41a25a84d39d req-3a24a75e-7a20-4080-a901-d650c13c3e82 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.424 226833 DEBUG oslo_concurrency.lockutils [req-16fef5b9-957a-4198-ad2a-41a25a84d39d req-3a24a75e-7a20-4080-a901-d650c13c3e82 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.424 226833 DEBUG oslo_concurrency.lockutils [req-16fef5b9-957a-4198-ad2a-41a25a84d39d req-3a24a75e-7a20-4080-a901-d650c13c3e82 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.424 226833 DEBUG nova.compute.manager [req-16fef5b9-957a-4198-ad2a-41a25a84d39d req-3a24a75e-7a20-4080-a901-d650c13c3e82 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Processing event network-vif-plugged-32656689-8c91-4c26-8aea-d5aaac071876 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.425 226833 DEBUG nova.compute.manager [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.428 226833 DEBUG nova.virt.libvirt.driver [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.431 226833 INFO nova.virt.libvirt.driver [-] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Instance spawned successfully.
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.431 226833 DEBUG nova.virt.libvirt.driver [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.436 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.439 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.477 226833 DEBUG nova.virt.libvirt.driver [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.478 226833 DEBUG nova.virt.libvirt.driver [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.478 226833 DEBUG nova.virt.libvirt.driver [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.478 226833 DEBUG nova.virt.libvirt.driver [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.479 226833 DEBUG nova.virt.libvirt.driver [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.479 226833 DEBUG nova.virt.libvirt.driver [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.484 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.485 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769850396.3649309, 1fb94592-4c46-41d2-990b-7d5d8d1a7fce => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.485 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] VM Paused (Lifecycle Event)
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.533 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.536 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769850396.4279647, 1fb94592-4c46-41d2-990b-7d5d8d1a7fce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.536 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] VM Resumed (Lifecycle Event)
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.559 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.563 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.568 226833 INFO nova.compute.manager [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Took 10.17 seconds to spawn the instance on the hypervisor.
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.568 226833 DEBUG nova.compute.manager [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.599 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.656 226833 INFO nova.compute.manager [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Took 11.23 seconds to build instance.
Jan 31 09:06:36 compute-2 nova_compute[226829]: 2026-01-31 09:06:36.672 226833 DEBUG oslo_concurrency.lockutils [None req-a4a9ac6e-9fac-497a-943a-2d930b51374d 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:06:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:36.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:38 compute-2 ceph-mon[77282]: pgmap v4044: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 115 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 31 09:06:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:38.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:38 compute-2 nova_compute[226829]: 2026-01-31 09:06:38.564 226833 DEBUG nova.compute.manager [req-f22e431f-ed30-4ae1-aeb0-ce7a49cd833f req-e3932856-50ac-49af-9058-ea4b976ac189 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Received event network-vif-plugged-32656689-8c91-4c26-8aea-d5aaac071876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:06:38 compute-2 nova_compute[226829]: 2026-01-31 09:06:38.565 226833 DEBUG oslo_concurrency.lockutils [req-f22e431f-ed30-4ae1-aeb0-ce7a49cd833f req-e3932856-50ac-49af-9058-ea4b976ac189 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:06:38 compute-2 nova_compute[226829]: 2026-01-31 09:06:38.565 226833 DEBUG oslo_concurrency.lockutils [req-f22e431f-ed30-4ae1-aeb0-ce7a49cd833f req-e3932856-50ac-49af-9058-ea4b976ac189 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:06:38 compute-2 nova_compute[226829]: 2026-01-31 09:06:38.565 226833 DEBUG oslo_concurrency.lockutils [req-f22e431f-ed30-4ae1-aeb0-ce7a49cd833f req-e3932856-50ac-49af-9058-ea4b976ac189 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:06:38 compute-2 nova_compute[226829]: 2026-01-31 09:06:38.565 226833 DEBUG nova.compute.manager [req-f22e431f-ed30-4ae1-aeb0-ce7a49cd833f req-e3932856-50ac-49af-9058-ea4b976ac189 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] No waiting events found dispatching network-vif-plugged-32656689-8c91-4c26-8aea-d5aaac071876 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:06:38 compute-2 nova_compute[226829]: 2026-01-31 09:06:38.566 226833 WARNING nova.compute.manager [req-f22e431f-ed30-4ae1-aeb0-ce7a49cd833f req-e3932856-50ac-49af-9058-ea4b976ac189 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Received unexpected event network-vif-plugged-32656689-8c91-4c26-8aea-d5aaac071876 for instance with vm_state active and task_state None.
Jan 31 09:06:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:38.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:39 compute-2 nova_compute[226829]: 2026-01-31 09:06:39.100 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:39 compute-2 ceph-mon[77282]: pgmap v4045: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 115 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 31 09:06:39 compute-2 nova_compute[226829]: 2026-01-31 09:06:39.957 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:40.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:40.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:06:41 compute-2 nova_compute[226829]: 2026-01-31 09:06:41.355 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:41 compute-2 NetworkManager[48999]: <info>  [1769850401.3568] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/429)
Jan 31 09:06:41 compute-2 NetworkManager[48999]: <info>  [1769850401.3579] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/430)
Jan 31 09:06:41 compute-2 nova_compute[226829]: 2026-01-31 09:06:41.376 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:41 compute-2 ovn_controller[133834]: 2026-01-31T09:06:41Z|00853|binding|INFO|Releasing lport ff7f72f7-b69e-4d38-bd70-12b9fe05b593 from this chassis (sb_readonly=0)
Jan 31 09:06:41 compute-2 nova_compute[226829]: 2026-01-31 09:06:41.389 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:42 compute-2 ceph-mon[77282]: pgmap v4046: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 100 op/s
Jan 31 09:06:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:42.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:42 compute-2 nova_compute[226829]: 2026-01-31 09:06:42.353 226833 DEBUG nova.compute.manager [req-32f3cf64-a5d2-4a96-a6ac-419528c88804 req-8d827641-c200-4bed-9cbe-2da516a50822 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Received event network-changed-32656689-8c91-4c26-8aea-d5aaac071876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:06:42 compute-2 nova_compute[226829]: 2026-01-31 09:06:42.353 226833 DEBUG nova.compute.manager [req-32f3cf64-a5d2-4a96-a6ac-419528c88804 req-8d827641-c200-4bed-9cbe-2da516a50822 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Refreshing instance network info cache due to event network-changed-32656689-8c91-4c26-8aea-d5aaac071876. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:06:42 compute-2 nova_compute[226829]: 2026-01-31 09:06:42.354 226833 DEBUG oslo_concurrency.lockutils [req-32f3cf64-a5d2-4a96-a6ac-419528c88804 req-8d827641-c200-4bed-9cbe-2da516a50822 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-1fb94592-4c46-41d2-990b-7d5d8d1a7fce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:06:42 compute-2 nova_compute[226829]: 2026-01-31 09:06:42.354 226833 DEBUG oslo_concurrency.lockutils [req-32f3cf64-a5d2-4a96-a6ac-419528c88804 req-8d827641-c200-4bed-9cbe-2da516a50822 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-1fb94592-4c46-41d2-990b-7d5d8d1a7fce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:06:42 compute-2 nova_compute[226829]: 2026-01-31 09:06:42.354 226833 DEBUG nova.network.neutron [req-32f3cf64-a5d2-4a96-a6ac-419528c88804 req-8d827641-c200-4bed-9cbe-2da516a50822 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Refreshing network info cache for port 32656689-8c91-4c26-8aea-d5aaac071876 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:06:42 compute-2 nova_compute[226829]: 2026-01-31 09:06:42.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:06:42 compute-2 nova_compute[226829]: 2026-01-31 09:06:42.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:06:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:42.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:43 compute-2 ceph-mon[77282]: pgmap v4047: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:06:44 compute-2 nova_compute[226829]: 2026-01-31 09:06:44.102 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:06:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:44.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:06:44 compute-2 nova_compute[226829]: 2026-01-31 09:06:44.657 226833 DEBUG nova.network.neutron [req-32f3cf64-a5d2-4a96-a6ac-419528c88804 req-8d827641-c200-4bed-9cbe-2da516a50822 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Updated VIF entry in instance network info cache for port 32656689-8c91-4c26-8aea-d5aaac071876. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:06:44 compute-2 nova_compute[226829]: 2026-01-31 09:06:44.658 226833 DEBUG nova.network.neutron [req-32f3cf64-a5d2-4a96-a6ac-419528c88804 req-8d827641-c200-4bed-9cbe-2da516a50822 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Updating instance_info_cache with network_info: [{"id": "32656689-8c91-4c26-8aea-d5aaac071876", "address": "fa:16:3e:3d:2a:d0", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32656689-8c", "ovs_interfaceid": "32656689-8c91-4c26-8aea-d5aaac071876", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:06:44 compute-2 nova_compute[226829]: 2026-01-31 09:06:44.693 226833 DEBUG oslo_concurrency.lockutils [req-32f3cf64-a5d2-4a96-a6ac-419528c88804 req-8d827641-c200-4bed-9cbe-2da516a50822 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-1fb94592-4c46-41d2-990b-7d5d8d1a7fce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:06:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:44.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:45 compute-2 nova_compute[226829]: 2026-01-31 09:06:45.014 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:45 compute-2 ceph-mon[77282]: pgmap v4048: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:06:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:06:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:06:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:46.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:06:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:06:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:46.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:06:47 compute-2 sudo[334943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:06:47 compute-2 sudo[334943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:06:47 compute-2 sudo[334943]: pam_unix(sudo:session): session closed for user root
Jan 31 09:06:47 compute-2 sudo[334968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:06:47 compute-2 sudo[334968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:06:47 compute-2 sudo[334968]: pam_unix(sudo:session): session closed for user root
Jan 31 09:06:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:48.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:48.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:48 compute-2 ceph-mon[77282]: pgmap v4049: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:06:49 compute-2 nova_compute[226829]: 2026-01-31 09:06:49.105 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:49 compute-2 ceph-mon[77282]: pgmap v4050: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 85 B/s wr, 63 op/s
Jan 31 09:06:50 compute-2 nova_compute[226829]: 2026-01-31 09:06:50.021 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:50.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:50.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:50 compute-2 ovn_controller[133834]: 2026-01-31T09:06:50Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3d:2a:d0 10.100.0.12
Jan 31 09:06:50 compute-2 ovn_controller[133834]: 2026-01-31T09:06:50Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:2a:d0 10.100.0.12
Jan 31 09:06:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:06:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:52.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:06:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:52.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:06:52 compute-2 ceph-mon[77282]: pgmap v4051: 305 pgs: 305 active+clean; 185 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 109 op/s
Jan 31 09:06:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/77891392' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:06:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/77891392' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:06:53 compute-2 ceph-mon[77282]: pgmap v4052: 305 pgs: 305 active+clean; 193 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 292 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Jan 31 09:06:54 compute-2 nova_compute[226829]: 2026-01-31 09:06:54.107 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:54.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:54.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:55 compute-2 nova_compute[226829]: 2026-01-31 09:06:55.059 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:06:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:56.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:56 compute-2 ceph-mon[77282]: pgmap v4053: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 09:06:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:56.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:57 compute-2 ceph-mon[77282]: pgmap v4054: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 09:06:58 compute-2 sudo[334999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:06:58 compute-2 sudo[334999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:06:58 compute-2 sudo[334999]: pam_unix(sudo:session): session closed for user root
Jan 31 09:06:58 compute-2 sudo[335030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:06:58 compute-2 sudo[335030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:06:58 compute-2 sudo[335030]: pam_unix(sudo:session): session closed for user root
Jan 31 09:06:58 compute-2 podman[335023]: 2026-01-31 09:06:58.095313683 +0000 UTC m=+0.069567863 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 09:06:58 compute-2 sudo[335068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:06:58 compute-2 sudo[335068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:06:58 compute-2 sudo[335068]: pam_unix(sudo:session): session closed for user root
Jan 31 09:06:58 compute-2 sudo[335099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:06:58 compute-2 sudo[335099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:06:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:58.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:58 compute-2 sudo[335099]: pam_unix(sudo:session): session closed for user root
Jan 31 09:06:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:06:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:58.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:58 compute-2 nova_compute[226829]: 2026-01-31 09:06:58.845 226833 DEBUG oslo_concurrency.lockutils [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:06:58 compute-2 nova_compute[226829]: 2026-01-31 09:06:58.847 226833 DEBUG oslo_concurrency.lockutils [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:06:58 compute-2 nova_compute[226829]: 2026-01-31 09:06:58.847 226833 INFO nova.compute.manager [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Shelving
Jan 31 09:06:58 compute-2 nova_compute[226829]: 2026-01-31 09:06:58.869 226833 DEBUG nova.virt.libvirt.driver [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 09:06:59 compute-2 nova_compute[226829]: 2026-01-31 09:06:59.109 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 09:06:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.0 total, 600.0 interval
                                           Cumulative writes: 19K writes, 96K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 19K writes, 19K syncs, 1.00 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1516 writes, 7399 keys, 1516 commit groups, 1.0 writes per commit group, ingest: 15.27 MB, 0.03 MB/s
                                           Interval WAL: 1516 writes, 1516 syncs, 1.00 writes per sync, written: 0.01 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     87.1      1.36              0.34        63    0.022       0      0       0.0       0.0
                                             L6      1/0   10.63 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.5    159.4    136.9      4.79              1.82        62    0.077    507K    33K       0.0       0.0
                                            Sum      1/0   10.63 MB   0.0      0.7     0.1      0.6       0.8      0.1       0.0   6.5    124.1    125.8      6.15              2.16       125    0.049    507K    33K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.4    165.7    161.5      0.52              0.22        12    0.043     69K   3063       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0    159.4    136.9      4.79              1.82        62    0.077    507K    33K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     87.2      1.36              0.34        62    0.022       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 7200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.116, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.76 GB write, 0.11 MB/s write, 0.75 GB read, 0.11 MB/s read, 6.1 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559ef3d1f1f0#2 capacity: 304.00 MB usage: 81.46 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000712 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4678,77.88 MB,25.6176%) FilterBlock(125,1.37 MB,0.449587%) IndexBlock(125,2.21 MB,0.727809%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 09:07:00 compute-2 nova_compute[226829]: 2026-01-31 09:07:00.067 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:00.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:00 compute-2 ceph-mon[77282]: pgmap v4055: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 09:07:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:07:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:07:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:07:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:07:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:07:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:07:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:07:00 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:07:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:07:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:00.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:07:01 compute-2 kernel: tap32656689-8c (unregistering): left promiscuous mode
Jan 31 09:07:01 compute-2 NetworkManager[48999]: <info>  [1769850421.1677] device (tap32656689-8c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 09:07:01 compute-2 nova_compute[226829]: 2026-01-31 09:07:01.173 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:01 compute-2 ovn_controller[133834]: 2026-01-31T09:07:01Z|00854|binding|INFO|Releasing lport 32656689-8c91-4c26-8aea-d5aaac071876 from this chassis (sb_readonly=0)
Jan 31 09:07:01 compute-2 ovn_controller[133834]: 2026-01-31T09:07:01Z|00855|binding|INFO|Setting lport 32656689-8c91-4c26-8aea-d5aaac071876 down in Southbound
Jan 31 09:07:01 compute-2 ovn_controller[133834]: 2026-01-31T09:07:01Z|00856|binding|INFO|Removing iface tap32656689-8c ovn-installed in OVS
Jan 31 09:07:01 compute-2 nova_compute[226829]: 2026-01-31 09:07:01.175 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:07:01.180 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:2a:d0 10.100.0.12'], port_security=['fa:16:3e:3d:2a:d0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '1fb94592-4c46-41d2-990b-7d5d8d1a7fce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '496e06c7521f45c994e6426c4313acea', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e909f648-e644-4ddc-8790-deca52a25b73', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=62ab62d6-b781-4309-8775-90b43197869a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=32656689-8c91-4c26-8aea-d5aaac071876) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:07:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:07:01.182 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 32656689-8c91-4c26-8aea-d5aaac071876 in datapath 1d742d07-ac6a-4870-9712-15a33a8a1e71 unbound from our chassis
Jan 31 09:07:01 compute-2 nova_compute[226829]: 2026-01-31 09:07:01.182 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:07:01.184 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1d742d07-ac6a-4870-9712-15a33a8a1e71, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 09:07:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:07:01.185 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[af38b60e-9d7a-4143-9df1-a1401f9e226f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:07:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:07:01.186 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71 namespace which is not needed anymore
Jan 31 09:07:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:07:01 compute-2 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000d8.scope: Deactivated successfully.
Jan 31 09:07:01 compute-2 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000d8.scope: Consumed 13.409s CPU time.
Jan 31 09:07:01 compute-2 systemd-machined[195142]: Machine qemu-96-instance-000000d8 terminated.
Jan 31 09:07:01 compute-2 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[334919]: [NOTICE]   (334926) : haproxy version is 2.8.14-c23fe91
Jan 31 09:07:01 compute-2 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[334919]: [NOTICE]   (334926) : path to executable is /usr/sbin/haproxy
Jan 31 09:07:01 compute-2 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[334919]: [WARNING]  (334926) : Exiting Master process...
Jan 31 09:07:01 compute-2 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[334919]: [WARNING]  (334926) : Exiting Master process...
Jan 31 09:07:01 compute-2 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[334919]: [ALERT]    (334926) : Current worker (334928) exited with code 143 (Terminated)
Jan 31 09:07:01 compute-2 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[334919]: [WARNING]  (334926) : All workers exited. Exiting... (0)
Jan 31 09:07:01 compute-2 systemd[1]: libpod-8352557d4ee1cca47690e301728d90d2caac9c3c3ffad69a3e1b4ea3190f7aa2.scope: Deactivated successfully.
Jan 31 09:07:01 compute-2 conmon[334919]: conmon 8352557d4ee1cca47690 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8352557d4ee1cca47690e301728d90d2caac9c3c3ffad69a3e1b4ea3190f7aa2.scope/container/memory.events
Jan 31 09:07:01 compute-2 podman[335181]: 2026-01-31 09:07:01.295461416 +0000 UTC m=+0.037907432 container died 8352557d4ee1cca47690e301728d90d2caac9c3c3ffad69a3e1b4ea3190f7aa2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 09:07:01 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8352557d4ee1cca47690e301728d90d2caac9c3c3ffad69a3e1b4ea3190f7aa2-userdata-shm.mount: Deactivated successfully.
Jan 31 09:07:01 compute-2 systemd[1]: var-lib-containers-storage-overlay-3bf6c2a7e56cd9353627dede985d84c4a6e5071db7a1ca4db5e9f5fd426b3065-merged.mount: Deactivated successfully.
Jan 31 09:07:01 compute-2 podman[335181]: 2026-01-31 09:07:01.326825509 +0000 UTC m=+0.069271525 container cleanup 8352557d4ee1cca47690e301728d90d2caac9c3c3ffad69a3e1b4ea3190f7aa2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:07:01 compute-2 systemd[1]: libpod-conmon-8352557d4ee1cca47690e301728d90d2caac9c3c3ffad69a3e1b4ea3190f7aa2.scope: Deactivated successfully.
Jan 31 09:07:01 compute-2 podman[335212]: 2026-01-31 09:07:01.372996244 +0000 UTC m=+0.033003198 container remove 8352557d4ee1cca47690e301728d90d2caac9c3c3ffad69a3e1b4ea3190f7aa2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 09:07:01 compute-2 nova_compute[226829]: 2026-01-31 09:07:01.373 226833 DEBUG nova.compute.manager [req-4a88df25-df52-49de-ad37-33f56b594b8a req-8db68720-8e9c-43b7-888d-5718a74f6c78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Received event network-vif-unplugged-32656689-8c91-4c26-8aea-d5aaac071876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:07:01 compute-2 nova_compute[226829]: 2026-01-31 09:07:01.373 226833 DEBUG oslo_concurrency.lockutils [req-4a88df25-df52-49de-ad37-33f56b594b8a req-8db68720-8e9c-43b7-888d-5718a74f6c78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:07:01 compute-2 nova_compute[226829]: 2026-01-31 09:07:01.375 226833 DEBUG oslo_concurrency.lockutils [req-4a88df25-df52-49de-ad37-33f56b594b8a req-8db68720-8e9c-43b7-888d-5718a74f6c78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:07:01 compute-2 nova_compute[226829]: 2026-01-31 09:07:01.376 226833 DEBUG oslo_concurrency.lockutils [req-4a88df25-df52-49de-ad37-33f56b594b8a req-8db68720-8e9c-43b7-888d-5718a74f6c78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:07:01 compute-2 nova_compute[226829]: 2026-01-31 09:07:01.376 226833 DEBUG nova.compute.manager [req-4a88df25-df52-49de-ad37-33f56b594b8a req-8db68720-8e9c-43b7-888d-5718a74f6c78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] No waiting events found dispatching network-vif-unplugged-32656689-8c91-4c26-8aea-d5aaac071876 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:07:01 compute-2 nova_compute[226829]: 2026-01-31 09:07:01.376 226833 WARNING nova.compute.manager [req-4a88df25-df52-49de-ad37-33f56b594b8a req-8db68720-8e9c-43b7-888d-5718a74f6c78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Received unexpected event network-vif-unplugged-32656689-8c91-4c26-8aea-d5aaac071876 for instance with vm_state active and task_state shelving.
Jan 31 09:07:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:07:01.377 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[33546daf-b5ca-42d9-b904-83c40afed268]: (4, ('Sat Jan 31 09:07:01 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71 (8352557d4ee1cca47690e301728d90d2caac9c3c3ffad69a3e1b4ea3190f7aa2)\n8352557d4ee1cca47690e301728d90d2caac9c3c3ffad69a3e1b4ea3190f7aa2\nSat Jan 31 09:07:01 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71 (8352557d4ee1cca47690e301728d90d2caac9c3c3ffad69a3e1b4ea3190f7aa2)\n8352557d4ee1cca47690e301728d90d2caac9c3c3ffad69a3e1b4ea3190f7aa2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:07:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:07:01.378 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a8254cad-9ffd-411a-bac9-bd9ce7fe25fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:07:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:07:01.379 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d742d07-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:07:01 compute-2 nova_compute[226829]: 2026-01-31 09:07:01.381 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:01 compute-2 nova_compute[226829]: 2026-01-31 09:07:01.386 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:01 compute-2 kernel: tap1d742d07-a0: left promiscuous mode
Jan 31 09:07:01 compute-2 nova_compute[226829]: 2026-01-31 09:07:01.391 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:07:01.394 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e350b38e-d458-4ce4-93d1-dff7d11cc865]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:07:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:07:01.412 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[39585f34-5f77-44f7-b2de-3f40c68153d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:07:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:07:01.414 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a579d22c-4ec5-469b-ae0a-3cab505991ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:07:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:07:01.427 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e8612b56-87c8-41d0-bee0-fb8434e89111]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1064817, 'reachable_time': 29974, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335238, 'error': None, 'target': 'ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:07:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:07:01.430 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 09:07:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:07:01.430 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[f95cdd55-021f-4be9-9687-4b08646ba493]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:07:01 compute-2 systemd[1]: run-netns-ovnmeta\x2d1d742d07\x2dac6a\x2d4870\x2d9712\x2d15a33a8a1e71.mount: Deactivated successfully.
Jan 31 09:07:01 compute-2 ceph-mon[77282]: pgmap v4056: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 331 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Jan 31 09:07:01 compute-2 nova_compute[226829]: 2026-01-31 09:07:01.883 226833 INFO nova.virt.libvirt.driver [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Instance shutdown successfully after 3 seconds.
Jan 31 09:07:01 compute-2 nova_compute[226829]: 2026-01-31 09:07:01.887 226833 INFO nova.virt.libvirt.driver [-] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Instance destroyed successfully.
Jan 31 09:07:01 compute-2 nova_compute[226829]: 2026-01-31 09:07:01.888 226833 DEBUG nova.objects.instance [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lazy-loading 'numa_topology' on Instance uuid 1fb94592-4c46-41d2-990b-7d5d8d1a7fce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:07:02 compute-2 nova_compute[226829]: 2026-01-31 09:07:02.161 226833 INFO nova.virt.libvirt.driver [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Beginning cold snapshot process
Jan 31 09:07:02 compute-2 nova_compute[226829]: 2026-01-31 09:07:02.292 226833 DEBUG nova.virt.libvirt.imagebackend [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 31 09:07:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:02.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:02 compute-2 nova_compute[226829]: 2026-01-31 09:07:02.534 226833 DEBUG nova.storage.rbd_utils [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] creating snapshot(a06f81e3efed4dd39716abc030988911) on rbd image(1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 09:07:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:07:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:02.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:07:03 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e415 e415: 3 total, 3 up, 3 in
Jan 31 09:07:03 compute-2 podman[335292]: 2026-01-31 09:07:03.195984088 +0000 UTC m=+0.082034512 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent)
Jan 31 09:07:03 compute-2 nova_compute[226829]: 2026-01-31 09:07:03.481 226833 DEBUG nova.compute.manager [req-59d5b7b5-4e2b-4639-b1e0-1b1ca3dcc051 req-ed0996e7-47b2-4e80-9dde-d2974c0422c5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Received event network-vif-plugged-32656689-8c91-4c26-8aea-d5aaac071876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:07:03 compute-2 nova_compute[226829]: 2026-01-31 09:07:03.481 226833 DEBUG oslo_concurrency.lockutils [req-59d5b7b5-4e2b-4639-b1e0-1b1ca3dcc051 req-ed0996e7-47b2-4e80-9dde-d2974c0422c5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:07:03 compute-2 nova_compute[226829]: 2026-01-31 09:07:03.482 226833 DEBUG oslo_concurrency.lockutils [req-59d5b7b5-4e2b-4639-b1e0-1b1ca3dcc051 req-ed0996e7-47b2-4e80-9dde-d2974c0422c5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:07:03 compute-2 nova_compute[226829]: 2026-01-31 09:07:03.482 226833 DEBUG oslo_concurrency.lockutils [req-59d5b7b5-4e2b-4639-b1e0-1b1ca3dcc051 req-ed0996e7-47b2-4e80-9dde-d2974c0422c5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:07:03 compute-2 nova_compute[226829]: 2026-01-31 09:07:03.482 226833 DEBUG nova.compute.manager [req-59d5b7b5-4e2b-4639-b1e0-1b1ca3dcc051 req-ed0996e7-47b2-4e80-9dde-d2974c0422c5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] No waiting events found dispatching network-vif-plugged-32656689-8c91-4c26-8aea-d5aaac071876 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:07:03 compute-2 nova_compute[226829]: 2026-01-31 09:07:03.482 226833 WARNING nova.compute.manager [req-59d5b7b5-4e2b-4639-b1e0-1b1ca3dcc051 req-ed0996e7-47b2-4e80-9dde-d2974c0422c5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Received unexpected event network-vif-plugged-32656689-8c91-4c26-8aea-d5aaac071876 for instance with vm_state active and task_state shelving_image_uploading.
Jan 31 09:07:03 compute-2 nova_compute[226829]: 2026-01-31 09:07:03.851 226833 DEBUG nova.storage.rbd_utils [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] cloning vms/1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk@a06f81e3efed4dd39716abc030988911 to images/f336042a-a032-4a9a-8976-bb911c951591 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 09:07:04 compute-2 nova_compute[226829]: 2026-01-31 09:07:04.111 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:04 compute-2 ceph-mon[77282]: osdmap e415: 3 total, 3 up, 3 in
Jan 31 09:07:04 compute-2 ceph-mon[77282]: pgmap v4058: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 47 KiB/s rd, 145 KiB/s wr, 22 op/s
Jan 31 09:07:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:04.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:04 compute-2 nova_compute[226829]: 2026-01-31 09:07:04.479 226833 DEBUG nova.storage.rbd_utils [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] flattening images/f336042a-a032-4a9a-8976-bb911c951591 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 31 09:07:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:07:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:04.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:07:05 compute-2 nova_compute[226829]: 2026-01-31 09:07:05.068 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:06 compute-2 ceph-mon[77282]: pgmap v4059: 305 pgs: 305 active+clean; 229 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.3 MiB/s wr, 41 op/s
Jan 31 09:07:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:07:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:06.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:06 compute-2 nova_compute[226829]: 2026-01-31 09:07:06.641 226833 DEBUG nova.storage.rbd_utils [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] removing snapshot(a06f81e3efed4dd39716abc030988911) on rbd image(1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 09:07:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:06.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:07:06.950 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:07:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:07:06.950 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:07:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:07:06.951 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:07:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e416 e416: 3 total, 3 up, 3 in
Jan 31 09:07:07 compute-2 nova_compute[226829]: 2026-01-31 09:07:07.587 226833 DEBUG nova.storage.rbd_utils [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] creating snapshot(snap) on rbd image(f336042a-a032-4a9a-8976-bb911c951591) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 09:07:07 compute-2 sudo[335386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:07:07 compute-2 sudo[335386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:07 compute-2 sudo[335386]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:07 compute-2 sudo[335429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:07:07 compute-2 sudo[335429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:07 compute-2 sudo[335429]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:07:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:08.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:07:08 compute-2 ceph-mon[77282]: osdmap e416: 3 total, 3 up, 3 in
Jan 31 09:07:08 compute-2 ceph-mon[77282]: pgmap v4061: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 255 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.3 MiB/s rd, 4.5 MiB/s wr, 89 op/s
Jan 31 09:07:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:08.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e417 e417: 3 total, 3 up, 3 in
Jan 31 09:07:09 compute-2 nova_compute[226829]: 2026-01-31 09:07:09.113 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:10 compute-2 ceph-mon[77282]: osdmap e417: 3 total, 3 up, 3 in
Jan 31 09:07:10 compute-2 ceph-mon[77282]: pgmap v4063: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 255 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.6 MiB/s rd, 5.6 MiB/s wr, 102 op/s
Jan 31 09:07:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:07:10 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:07:10 compute-2 sudo[335455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:07:10 compute-2 sudo[335455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:10 compute-2 sudo[335455]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:10 compute-2 nova_compute[226829]: 2026-01-31 09:07:10.069 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:10 compute-2 sudo[335480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:07:10 compute-2 sudo[335480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:10 compute-2 sudo[335480]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:10.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:10.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:07:12 compute-2 nova_compute[226829]: 2026-01-31 09:07:12.102 226833 INFO nova.virt.libvirt.driver [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Snapshot image upload complete
Jan 31 09:07:12 compute-2 nova_compute[226829]: 2026-01-31 09:07:12.102 226833 DEBUG nova.compute.manager [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:07:12 compute-2 nova_compute[226829]: 2026-01-31 09:07:12.196 226833 INFO nova.compute.manager [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Shelve offloading
Jan 31 09:07:12 compute-2 nova_compute[226829]: 2026-01-31 09:07:12.203 226833 INFO nova.virt.libvirt.driver [-] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Instance destroyed successfully.
Jan 31 09:07:12 compute-2 nova_compute[226829]: 2026-01-31 09:07:12.203 226833 DEBUG nova.compute.manager [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:07:12 compute-2 nova_compute[226829]: 2026-01-31 09:07:12.205 226833 DEBUG oslo_concurrency.lockutils [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "refresh_cache-1fb94592-4c46-41d2-990b-7d5d8d1a7fce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:07:12 compute-2 nova_compute[226829]: 2026-01-31 09:07:12.206 226833 DEBUG oslo_concurrency.lockutils [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquired lock "refresh_cache-1fb94592-4c46-41d2-990b-7d5d8d1a7fce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:07:12 compute-2 nova_compute[226829]: 2026-01-31 09:07:12.206 226833 DEBUG nova.network.neutron [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 09:07:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:07:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:12.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:07:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:07:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:12.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:07:12 compute-2 ceph-mon[77282]: pgmap v4064: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 113 op/s
Jan 31 09:07:13 compute-2 nova_compute[226829]: 2026-01-31 09:07:13.819 226833 DEBUG nova.network.neutron [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Updating instance_info_cache with network_info: [{"id": "32656689-8c91-4c26-8aea-d5aaac071876", "address": "fa:16:3e:3d:2a:d0", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32656689-8c", "ovs_interfaceid": "32656689-8c91-4c26-8aea-d5aaac071876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:07:13 compute-2 nova_compute[226829]: 2026-01-31 09:07:13.844 226833 DEBUG oslo_concurrency.lockutils [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Releasing lock "refresh_cache-1fb94592-4c46-41d2-990b-7d5d8d1a7fce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:07:13 compute-2 ceph-mon[77282]: pgmap v4065: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.0 MiB/s wr, 72 op/s
Jan 31 09:07:14 compute-2 nova_compute[226829]: 2026-01-31 09:07:14.114 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:14.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:14 compute-2 nova_compute[226829]: 2026-01-31 09:07:14.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:07:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:07:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:14.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:07:15 compute-2 nova_compute[226829]: 2026-01-31 09:07:15.072 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:15 compute-2 nova_compute[226829]: 2026-01-31 09:07:15.201 226833 INFO nova.virt.libvirt.driver [-] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Instance destroyed successfully.
Jan 31 09:07:15 compute-2 nova_compute[226829]: 2026-01-31 09:07:15.201 226833 DEBUG nova.objects.instance [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lazy-loading 'resources' on Instance uuid 1fb94592-4c46-41d2-990b-7d5d8d1a7fce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:07:15 compute-2 nova_compute[226829]: 2026-01-31 09:07:15.222 226833 DEBUG nova.virt.libvirt.vif [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:06:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-863769904',display_name='tempest-TestShelveInstance-server-863769904',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testshelveinstance-server-863769904',id=216,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO97h7ZlFklCcOMhjjvfpHATJVRthvPNErjo8kSCbjC05jkmSen4F2QOXVYh+JGW6zHXigZL/fAl9BFzIpJo57WrH+bNNyDWYa77kRt0fmFgaUAwVEYhYw4aHgZ+3T7mqQ==',key_name='tempest-TestShelveInstance-1892807111',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:06:36Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='496e06c7521f45c994e6426c4313acea',ramdisk_id='',reservation_id='r-wmbqilfc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1485729988',owner_user_name='tempest-TestShelveInstance-1485729988-project-member',shelved_at='2026-01-31T09:07:12.102703',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='f336042a-a032-4a9a-8976-bb911c951591'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:07:02Z,user_data=None,user_id='4883c0d4a7f54a6898eba5bfdbb41266',uuid=1fb94592-4c46-41d2-990b-7d5d8d1a7fce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "32656689-8c91-4c26-8aea-d5aaac071876", "address": "fa:16:3e:3d:2a:d0", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32656689-8c", "ovs_interfaceid": "32656689-8c91-4c26-8aea-d5aaac071876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 09:07:15 compute-2 nova_compute[226829]: 2026-01-31 09:07:15.222 226833 DEBUG nova.network.os_vif_util [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Converting VIF {"id": "32656689-8c91-4c26-8aea-d5aaac071876", "address": "fa:16:3e:3d:2a:d0", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32656689-8c", "ovs_interfaceid": "32656689-8c91-4c26-8aea-d5aaac071876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:07:15 compute-2 nova_compute[226829]: 2026-01-31 09:07:15.223 226833 DEBUG nova.network.os_vif_util [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:2a:d0,bridge_name='br-int',has_traffic_filtering=True,id=32656689-8c91-4c26-8aea-d5aaac071876,network=Network(1d742d07-ac6a-4870-9712-15a33a8a1e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32656689-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:07:15 compute-2 nova_compute[226829]: 2026-01-31 09:07:15.223 226833 DEBUG os_vif [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:2a:d0,bridge_name='br-int',has_traffic_filtering=True,id=32656689-8c91-4c26-8aea-d5aaac071876,network=Network(1d742d07-ac6a-4870-9712-15a33a8a1e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32656689-8c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 09:07:15 compute-2 nova_compute[226829]: 2026-01-31 09:07:15.225 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:15 compute-2 nova_compute[226829]: 2026-01-31 09:07:15.226 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32656689-8c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:07:15 compute-2 nova_compute[226829]: 2026-01-31 09:07:15.227 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:15 compute-2 nova_compute[226829]: 2026-01-31 09:07:15.230 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:15 compute-2 nova_compute[226829]: 2026-01-31 09:07:15.239 226833 INFO os_vif [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:2a:d0,bridge_name='br-int',has_traffic_filtering=True,id=32656689-8c91-4c26-8aea-d5aaac071876,network=Network(1d742d07-ac6a-4870-9712-15a33a8a1e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32656689-8c')
Jan 31 09:07:15 compute-2 nova_compute[226829]: 2026-01-31 09:07:15.339 226833 DEBUG nova.compute.manager [req-3d340295-2669-4b5c-a3d2-4eac9d59fc9b req-85e4242d-c409-4e09-9b68-c3dff8ca5502 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Received event network-changed-32656689-8c91-4c26-8aea-d5aaac071876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:07:15 compute-2 nova_compute[226829]: 2026-01-31 09:07:15.340 226833 DEBUG nova.compute.manager [req-3d340295-2669-4b5c-a3d2-4eac9d59fc9b req-85e4242d-c409-4e09-9b68-c3dff8ca5502 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Refreshing instance network info cache due to event network-changed-32656689-8c91-4c26-8aea-d5aaac071876. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:07:15 compute-2 nova_compute[226829]: 2026-01-31 09:07:15.340 226833 DEBUG oslo_concurrency.lockutils [req-3d340295-2669-4b5c-a3d2-4eac9d59fc9b req-85e4242d-c409-4e09-9b68-c3dff8ca5502 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-1fb94592-4c46-41d2-990b-7d5d8d1a7fce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:07:15 compute-2 nova_compute[226829]: 2026-01-31 09:07:15.340 226833 DEBUG oslo_concurrency.lockutils [req-3d340295-2669-4b5c-a3d2-4eac9d59fc9b req-85e4242d-c409-4e09-9b68-c3dff8ca5502 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-1fb94592-4c46-41d2-990b-7d5d8d1a7fce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:07:15 compute-2 nova_compute[226829]: 2026-01-31 09:07:15.341 226833 DEBUG nova.network.neutron [req-3d340295-2669-4b5c-a3d2-4eac9d59fc9b req-85e4242d-c409-4e09-9b68-c3dff8ca5502 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Refreshing network info cache for port 32656689-8c91-4c26-8aea-d5aaac071876 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:07:15 compute-2 ceph-mon[77282]: pgmap v4066: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.0 MiB/s wr, 84 op/s
Jan 31 09:07:15 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-2[77982]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 09:07:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:07:16 compute-2 nova_compute[226829]: 2026-01-31 09:07:16.321 226833 INFO nova.virt.libvirt.driver [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Deleting instance files /var/lib/nova/instances/1fb94592-4c46-41d2-990b-7d5d8d1a7fce_del
Jan 31 09:07:16 compute-2 nova_compute[226829]: 2026-01-31 09:07:16.322 226833 INFO nova.virt.libvirt.driver [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Deletion of /var/lib/nova/instances/1fb94592-4c46-41d2-990b-7d5d8d1a7fce_del complete
Jan 31 09:07:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e418 e418: 3 total, 3 up, 3 in
Jan 31 09:07:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:16.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:16 compute-2 nova_compute[226829]: 2026-01-31 09:07:16.402 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769850421.4001937, 1fb94592-4c46-41d2-990b-7d5d8d1a7fce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:07:16 compute-2 nova_compute[226829]: 2026-01-31 09:07:16.402 226833 INFO nova.compute.manager [-] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] VM Stopped (Lifecycle Event)
Jan 31 09:07:16 compute-2 nova_compute[226829]: 2026-01-31 09:07:16.624 226833 DEBUG nova.compute.manager [None req-51938690-d03e-4adc-a569-4e8ad1109df8 - - - - - -] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:07:16 compute-2 nova_compute[226829]: 2026-01-31 09:07:16.627 226833 DEBUG nova.compute.manager [None req-51938690-d03e-4adc-a569-4e8ad1109df8 - - - - - -] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 09:07:16 compute-2 nova_compute[226829]: 2026-01-31 09:07:16.703 226833 INFO nova.compute.manager [None req-51938690-d03e-4adc-a569-4e8ad1109df8 - - - - - -] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] During sync_power_state the instance has a pending task (shelving_offloading). Skip.
Jan 31 09:07:16 compute-2 nova_compute[226829]: 2026-01-31 09:07:16.705 226833 INFO nova.scheduler.client.report [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Deleted allocations for instance 1fb94592-4c46-41d2-990b-7d5d8d1a7fce
Jan 31 09:07:16 compute-2 nova_compute[226829]: 2026-01-31 09:07:16.803 226833 DEBUG nova.network.neutron [req-3d340295-2669-4b5c-a3d2-4eac9d59fc9b req-85e4242d-c409-4e09-9b68-c3dff8ca5502 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Updated VIF entry in instance network info cache for port 32656689-8c91-4c26-8aea-d5aaac071876. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:07:16 compute-2 nova_compute[226829]: 2026-01-31 09:07:16.804 226833 DEBUG nova.network.neutron [req-3d340295-2669-4b5c-a3d2-4eac9d59fc9b req-85e4242d-c409-4e09-9b68-c3dff8ca5502 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Updating instance_info_cache with network_info: [{"id": "32656689-8c91-4c26-8aea-d5aaac071876", "address": "fa:16:3e:3d:2a:d0", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": null, "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap32656689-8c", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:07:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:16.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:16 compute-2 nova_compute[226829]: 2026-01-31 09:07:16.828 226833 DEBUG oslo_concurrency.lockutils [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:07:16 compute-2 nova_compute[226829]: 2026-01-31 09:07:16.828 226833 DEBUG oslo_concurrency.lockutils [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:07:16 compute-2 nova_compute[226829]: 2026-01-31 09:07:16.855 226833 DEBUG oslo_concurrency.processutils [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:07:17 compute-2 nova_compute[226829]: 2026-01-31 09:07:17.009 226833 DEBUG oslo_concurrency.lockutils [req-3d340295-2669-4b5c-a3d2-4eac9d59fc9b req-85e4242d-c409-4e09-9b68-c3dff8ca5502 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-1fb94592-4c46-41d2-990b-7d5d8d1a7fce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:07:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:07:17 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1410142522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:07:17 compute-2 nova_compute[226829]: 2026-01-31 09:07:17.286 226833 DEBUG oslo_concurrency.processutils [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:07:17 compute-2 nova_compute[226829]: 2026-01-31 09:07:17.292 226833 DEBUG nova.compute.provider_tree [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:07:17 compute-2 ceph-mon[77282]: osdmap e418: 3 total, 3 up, 3 in
Jan 31 09:07:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1410142522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:07:17 compute-2 nova_compute[226829]: 2026-01-31 09:07:17.339 226833 DEBUG nova.scheduler.client.report [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:07:17 compute-2 nova_compute[226829]: 2026-01-31 09:07:17.509 226833 DEBUG oslo_concurrency.lockutils [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:07:17 compute-2 nova_compute[226829]: 2026-01-31 09:07:17.687 226833 DEBUG oslo_concurrency.lockutils [None req-2e9b9a15-87fc-4c6c-bdff-2cc18717a547 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 18.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:07:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:18.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:18 compute-2 nova_compute[226829]: 2026-01-31 09:07:18.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:07:18 compute-2 ceph-mon[77282]: pgmap v4068: 305 pgs: 305 active+clean; 263 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 592 KiB/s rd, 1.3 MiB/s wr, 61 op/s
Jan 31 09:07:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:07:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:18.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:07:19 compute-2 nova_compute[226829]: 2026-01-31 09:07:19.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:07:20 compute-2 nova_compute[226829]: 2026-01-31 09:07:20.073 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:20 compute-2 ceph-mon[77282]: pgmap v4069: 305 pgs: 305 active+clean; 263 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 497 KiB/s rd, 1.1 MiB/s wr, 51 op/s
Jan 31 09:07:20 compute-2 nova_compute[226829]: 2026-01-31 09:07:20.228 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:07:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:20.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:07:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:07:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:20.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:07:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3941672233' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:07:21 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:07:21 compute-2 nova_compute[226829]: 2026-01-31 09:07:21.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:07:22 compute-2 ceph-mon[77282]: pgmap v4070: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 1.6 KiB/s wr, 45 op/s
Jan 31 09:07:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1871789765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:07:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:22.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:22.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3951575014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:07:24 compute-2 ceph-mon[77282]: pgmap v4071: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 1.4 KiB/s wr, 43 op/s
Jan 31 09:07:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:24.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:24 compute-2 nova_compute[226829]: 2026-01-31 09:07:24.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:07:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:07:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:24.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:07:25 compute-2 nova_compute[226829]: 2026-01-31 09:07:25.075 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:25 compute-2 nova_compute[226829]: 2026-01-31 09:07:25.230 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:25 compute-2 nova_compute[226829]: 2026-01-31 09:07:25.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:07:25 compute-2 nova_compute[226829]: 2026-01-31 09:07:25.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:07:25 compute-2 nova_compute[226829]: 2026-01-31 09:07:25.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:07:25 compute-2 nova_compute[226829]: 2026-01-31 09:07:25.501 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:07:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:07:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:26.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:26 compute-2 ceph-mon[77282]: pgmap v4072: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Jan 31 09:07:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:26.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:27 compute-2 nova_compute[226829]: 2026-01-31 09:07:27.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:07:27 compute-2 nova_compute[226829]: 2026-01-31 09:07:27.509 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:07:27 compute-2 nova_compute[226829]: 2026-01-31 09:07:27.509 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:07:27 compute-2 nova_compute[226829]: 2026-01-31 09:07:27.509 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:07:27 compute-2 nova_compute[226829]: 2026-01-31 09:07:27.510 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:07:27 compute-2 nova_compute[226829]: 2026-01-31 09:07:27.510 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:07:27 compute-2 sudo[335575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:07:27 compute-2 sudo[335575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:27 compute-2 sudo[335575]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1621740039' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:07:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/993917815' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:07:27 compute-2 sudo[335600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:07:27 compute-2 sudo[335600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:27 compute-2 sudo[335600]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:07:27 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1275098523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:07:27 compute-2 nova_compute[226829]: 2026-01-31 09:07:27.971 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:07:28 compute-2 nova_compute[226829]: 2026-01-31 09:07:28.098 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:07:28 compute-2 nova_compute[226829]: 2026-01-31 09:07:28.099 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4014MB free_disk=20.97762680053711GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:07:28 compute-2 nova_compute[226829]: 2026-01-31 09:07:28.099 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:07:28 compute-2 nova_compute[226829]: 2026-01-31 09:07:28.100 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:07:28 compute-2 nova_compute[226829]: 2026-01-31 09:07:28.154 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:07:28 compute-2 nova_compute[226829]: 2026-01-31 09:07:28.155 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:07:28 compute-2 nova_compute[226829]: 2026-01-31 09:07:28.175 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:07:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:28.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:07:28 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1191431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:07:28 compute-2 nova_compute[226829]: 2026-01-31 09:07:28.580 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:07:28 compute-2 nova_compute[226829]: 2026-01-31 09:07:28.587 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:07:28 compute-2 nova_compute[226829]: 2026-01-31 09:07:28.605 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:07:28 compute-2 nova_compute[226829]: 2026-01-31 09:07:28.628 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:07:28 compute-2 nova_compute[226829]: 2026-01-31 09:07:28.629 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.529s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 09:07:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 79K writes, 327K keys, 79K commit groups, 1.0 writes per commit group, ingest: 0.33 GB, 0.05 MB/s
                                           Cumulative WAL: 79K writes, 29K syncs, 2.74 writes per sync, written: 0.33 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3124 writes, 12K keys, 3124 commit groups, 1.0 writes per commit group, ingest: 12.28 MB, 0.02 MB/s
                                           Interval WAL: 3124 writes, 1286 syncs, 2.43 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 09:07:28 compute-2 ceph-mon[77282]: pgmap v4073: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 999 KiB/s wr, 56 op/s
Jan 31 09:07:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1275098523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:07:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3441572639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:07:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1191431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:07:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:28.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:29 compute-2 nova_compute[226829]: 2026-01-31 09:07:29.177 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:07:29.176 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=107, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=106) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:07:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:07:29.179 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:07:29 compute-2 podman[335651]: 2026-01-31 09:07:29.18597046 +0000 UTC m=+0.070972101 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 09:07:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e419 e419: 3 total, 3 up, 3 in
Jan 31 09:07:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1803889982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:07:29 compute-2 ceph-mon[77282]: pgmap v4074: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 935 KiB/s wr, 41 op/s
Jan 31 09:07:30 compute-2 nova_compute[226829]: 2026-01-31 09:07:30.077 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:30 compute-2 nova_compute[226829]: 2026-01-31 09:07:30.232 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:30.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:30 compute-2 nova_compute[226829]: 2026-01-31 09:07:30.629 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:07:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:07:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:30.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:07:30 compute-2 ceph-mon[77282]: osdmap e419: 3 total, 3 up, 3 in
Jan 31 09:07:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:07:32 compute-2 ceph-mon[77282]: pgmap v4076: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 243 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 4.7 MiB/s wr, 145 op/s
Jan 31 09:07:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:32.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:32.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:33 compute-2 ceph-mon[77282]: pgmap v4077: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 217 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 4.7 MiB/s wr, 169 op/s
Jan 31 09:07:34 compute-2 podman[335680]: 2026-01-31 09:07:34.156796552 +0000 UTC m=+0.043277528 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent)
Jan 31 09:07:34 compute-2 ceph-mgr[77635]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3465938080
Jan 31 09:07:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:07:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:34.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:07:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:34.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:35 compute-2 nova_compute[226829]: 2026-01-31 09:07:35.078 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:35 compute-2 nova_compute[226829]: 2026-01-31 09:07:35.233 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:35 compute-2 ceph-mon[77282]: pgmap v4078: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 208 op/s
Jan 31 09:07:36 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:07:36.182 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '107'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:07:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:07:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:07:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:36.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:07:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 e420: 3 total, 3 up, 3 in
Jan 31 09:07:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:36.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:37 compute-2 ceph-mon[77282]: osdmap e420: 3 total, 3 up, 3 in
Jan 31 09:07:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:38.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:38 compute-2 ceph-mon[77282]: pgmap v4080: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.1 MiB/s rd, 4.5 MiB/s wr, 221 op/s
Jan 31 09:07:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:38.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:40 compute-2 nova_compute[226829]: 2026-01-31 09:07:40.132 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:40 compute-2 nova_compute[226829]: 2026-01-31 09:07:40.235 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:07:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:40.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:07:40 compute-2 ceph-mon[77282]: pgmap v4081: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.7 MiB/s wr, 180 op/s
Jan 31 09:07:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:40.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:07:42 compute-2 ceph-mon[77282]: pgmap v4082: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.7 KiB/s wr, 63 op/s
Jan 31 09:07:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:07:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:42.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:07:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:42.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:43 compute-2 nova_compute[226829]: 2026-01-31 09:07:43.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:07:43 compute-2 nova_compute[226829]: 2026-01-31 09:07:43.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:07:43 compute-2 ceph-mon[77282]: pgmap v4083: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 14 KiB/s wr, 50 op/s
Jan 31 09:07:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:44.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:44 compute-2 ovn_controller[133834]: 2026-01-31T09:07:44Z|00857|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Jan 31 09:07:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:07:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:44.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:07:45 compute-2 nova_compute[226829]: 2026-01-31 09:07:45.132 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:45 compute-2 nova_compute[226829]: 2026-01-31 09:07:45.237 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:45 compute-2 ceph-mon[77282]: pgmap v4084: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 647 KiB/s rd, 16 KiB/s wr, 52 op/s
Jan 31 09:07:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:07:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:46.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:46.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:47 compute-2 sudo[335707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:07:47 compute-2 sudo[335707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:47 compute-2 sudo[335707]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:47 compute-2 sudo[335732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:07:47 compute-2 sudo[335732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:47 compute-2 sudo[335732]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:47 compute-2 ceph-mon[77282]: pgmap v4085: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 583 KiB/s rd, 23 KiB/s wr, 47 op/s
Jan 31 09:07:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:07:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:48.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:07:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:48.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:50 compute-2 nova_compute[226829]: 2026-01-31 09:07:50.166 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:50 compute-2 nova_compute[226829]: 2026-01-31 09:07:50.238 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:50 compute-2 ceph-mon[77282]: pgmap v4086: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 603 KiB/s rd, 22 KiB/s wr, 47 op/s
Jan 31 09:07:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:50.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:07:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:50.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:07:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:07:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:52.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:07:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:52.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:07:52 compute-2 ceph-mon[77282]: pgmap v4087: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 694 KiB/s rd, 24 KiB/s wr, 49 op/s
Jan 31 09:07:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2096231141' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:07:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2096231141' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:07:54 compute-2 ceph-mon[77282]: pgmap v4088: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 748 KiB/s rd, 23 KiB/s wr, 49 op/s
Jan 31 09:07:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:54.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:54.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:55 compute-2 nova_compute[226829]: 2026-01-31 09:07:55.168 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:55 compute-2 nova_compute[226829]: 2026-01-31 09:07:55.240 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:56 compute-2 ceph-mon[77282]: pgmap v4089: 305 pgs: 305 active+clean; 174 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 630 KiB/s rd, 12 KiB/s wr, 51 op/s
Jan 31 09:07:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:07:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:07:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:56.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:07:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:56.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:57 compute-2 ceph-mon[77282]: pgmap v4090: 305 pgs: 305 active+clean; 151 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 221 KiB/s rd, 11 KiB/s wr, 20 op/s
Jan 31 09:07:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:07:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:58.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:07:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:07:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:58.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:59 compute-2 nova_compute[226829]: 2026-01-31 09:07:59.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:07:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2616368524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:08:00 compute-2 nova_compute[226829]: 2026-01-31 09:08:00.219 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:00 compute-2 podman[335764]: 2026-01-31 09:08:00.233894137 +0000 UTC m=+0.119484310 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:08:00 compute-2 nova_compute[226829]: 2026-01-31 09:08:00.241 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:00.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:00.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:01 compute-2 ceph-mon[77282]: pgmap v4091: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 230 KiB/s rd, 2.9 KiB/s wr, 32 op/s
Jan 31 09:08:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:08:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:08:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:02.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:08:02 compute-2 ceph-mon[77282]: pgmap v4092: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 167 KiB/s rd, 3.2 KiB/s wr, 31 op/s
Jan 31 09:08:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:02.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:03 compute-2 ceph-mon[77282]: pgmap v4093: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 75 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 31 09:08:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:08:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:04.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:08:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:04.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:05 compute-2 podman[335790]: 2026-01-31 09:08:05.184750427 +0000 UTC m=+0.071663090 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 09:08:05 compute-2 nova_compute[226829]: 2026-01-31 09:08:05.221 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:05 compute-2 nova_compute[226829]: 2026-01-31 09:08:05.242 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:05 compute-2 ceph-mon[77282]: pgmap v4094: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 09:08:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:08:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:06.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:06.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:08:06.951 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:08:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:08:06.952 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:08:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:08:06.952 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:08:07 compute-2 sudo[335809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:08:07 compute-2 sudo[335809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:07 compute-2 sudo[335809]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:08 compute-2 sudo[335834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:08:08 compute-2 sudo[335834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:08 compute-2 sudo[335834]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:08.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:08 compute-2 nova_compute[226829]: 2026-01-31 09:08:08.748 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:08:08 compute-2 ceph-mon[77282]: pgmap v4095: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 938 B/s wr, 16 op/s
Jan 31 09:08:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:08.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:09 compute-2 ceph-mon[77282]: pgmap v4096: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.3 KiB/s rd, 597 B/s wr, 13 op/s
Jan 31 09:08:10 compute-2 sudo[335860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:08:10 compute-2 sudo[335860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:10 compute-2 sudo[335860]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:10 compute-2 sudo[335885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:08:10 compute-2 sudo[335885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:10 compute-2 nova_compute[226829]: 2026-01-31 09:08:10.222 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:10 compute-2 sudo[335885]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:10 compute-2 nova_compute[226829]: 2026-01-31 09:08:10.244 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:10 compute-2 sudo[335911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:08:10 compute-2 sudo[335911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:10 compute-2 sudo[335911]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:10 compute-2 sudo[335936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:08:10 compute-2 sudo[335936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:08:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:10.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:08:10 compute-2 sudo[335936]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:10.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:08:12 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:08:12 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:08:12 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:08:12 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:08:12 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:08:12 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:08:12 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:08:12 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:08:12 compute-2 ceph-mon[77282]: pgmap v4097: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Jan 31 09:08:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:12.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:12.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:08:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:14.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:08:14 compute-2 nova_compute[226829]: 2026-01-31 09:08:14.501 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:08:14 compute-2 ceph-mon[77282]: pgmap v4098: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.7 KiB/s rd, 85 B/s wr, 5 op/s
Jan 31 09:08:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:08:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:14.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:08:15 compute-2 nova_compute[226829]: 2026-01-31 09:08:15.245 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:08:15 compute-2 nova_compute[226829]: 2026-01-31 09:08:15.247 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:08:15 compute-2 nova_compute[226829]: 2026-01-31 09:08:15.247 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 31 09:08:15 compute-2 nova_compute[226829]: 2026-01-31 09:08:15.247 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:08:15 compute-2 nova_compute[226829]: 2026-01-31 09:08:15.270 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:15 compute-2 nova_compute[226829]: 2026-01-31 09:08:15.270 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:08:15 compute-2 nova_compute[226829]: 2026-01-31 09:08:15.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:08:15 compute-2 nova_compute[226829]: 2026-01-31 09:08:15.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 09:08:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:08:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:16.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:16 compute-2 nova_compute[226829]: 2026-01-31 09:08:16.509 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:08:16 compute-2 nova_compute[226829]: 2026-01-31 09:08:16.509 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 09:08:16 compute-2 nova_compute[226829]: 2026-01-31 09:08:16.525 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 09:08:16 compute-2 ceph-mon[77282]: pgmap v4099: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 682 B/s wr, 9 op/s
Jan 31 09:08:16 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:08:16 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:08:16 compute-2 sudo[335995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:08:16 compute-2 sudo[335995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:16 compute-2 sudo[335995]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:16 compute-2 sudo[336020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:08:16 compute-2 sudo[336020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:16.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:16 compute-2 sudo[336020]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:18.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:18 compute-2 nova_compute[226829]: 2026-01-31 09:08:18.505 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:08:18 compute-2 ceph-mon[77282]: pgmap v4100: 305 pgs: 305 active+clean; 134 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 402 KiB/s wr, 15 op/s
Jan 31 09:08:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:18.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:19 compute-2 ceph-mon[77282]: pgmap v4101: 305 pgs: 305 active+clean; 151 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 994 KiB/s wr, 38 op/s
Jan 31 09:08:20 compute-2 nova_compute[226829]: 2026-01-31 09:08:20.272 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:08:20 compute-2 nova_compute[226829]: 2026-01-31 09:08:20.274 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:08:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:08:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:20.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:08:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:20.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:21 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1243239800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:08:21 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #199. Immutable memtables: 0.
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:08:21.433100) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 199
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850501433209, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 1985, "num_deletes": 257, "total_data_size": 4792307, "memory_usage": 4854832, "flush_reason": "Manual Compaction"}
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #200: started
Jan 31 09:08:21 compute-2 nova_compute[226829]: 2026-01-31 09:08:21.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:08:21 compute-2 nova_compute[226829]: 2026-01-31 09:08:21.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850501535387, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 200, "file_size": 3117258, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 95544, "largest_seqno": 97523, "table_properties": {"data_size": 3109048, "index_size": 5023, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16910, "raw_average_key_size": 20, "raw_value_size": 3092580, "raw_average_value_size": 3681, "num_data_blocks": 220, "num_entries": 840, "num_filter_entries": 840, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850331, "oldest_key_time": 1769850331, "file_creation_time": 1769850501, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 102343 microseconds, and 6549 cpu microseconds.
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:08:21.535455) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #200: 3117258 bytes OK
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:08:21.535472) [db/memtable_list.cc:519] [default] Level-0 commit table #200 started
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:08:21.552593) [db/memtable_list.cc:722] [default] Level-0 commit table #200: memtable #1 done
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:08:21.552615) EVENT_LOG_v1 {"time_micros": 1769850501552610, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:08:21.552634) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 4783459, prev total WAL file size 4783740, number of live WAL files 2.
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000196.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:08:21.554843) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373730' seq:72057594037927935, type:22 .. '6C6F676D0034303231' seq:0, type:0; will stop at (end)
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [200(3044KB)], [198(10MB)]
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850501554940, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [200], "files_L6": [198], "score": -1, "input_data_size": 14262569, "oldest_snapshot_seqno": -1}
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #201: 11522 keys, 14131004 bytes, temperature: kUnknown
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850501676136, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 201, "file_size": 14131004, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14058434, "index_size": 42633, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28869, "raw_key_size": 304921, "raw_average_key_size": 26, "raw_value_size": 13858866, "raw_average_value_size": 1202, "num_data_blocks": 1619, "num_entries": 11522, "num_filter_entries": 11522, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769850501, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 201, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:08:21.676731) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 14131004 bytes
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:08:21.681307) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.6 rd, 116.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 10.6 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(9.1) write-amplify(4.5) OK, records in: 12055, records dropped: 533 output_compression: NoCompression
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:08:21.681334) EVENT_LOG_v1 {"time_micros": 1769850501681323, "job": 128, "event": "compaction_finished", "compaction_time_micros": 121282, "compaction_time_cpu_micros": 38164, "output_level": 6, "num_output_files": 1, "total_output_size": 14131004, "num_input_records": 12055, "num_output_records": 11522, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850501682096, "job": 128, "event": "table_file_deletion", "file_number": 200}
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000198.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850501683490, "job": 128, "event": "table_file_deletion", "file_number": 198}
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:08:21.554719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:08:21.683600) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:08:21.683606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:08:21.683607) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:08:21.683610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:08:21 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:08:21.683611) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:08:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:22.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:22 compute-2 ceph-mon[77282]: pgmap v4102: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 59 op/s
Jan 31 09:08:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3430568507' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:08:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3031132125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:08:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:22.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 09:08:23 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1471333307' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:08:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1471333307' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:08:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:08:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:24.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:08:24 compute-2 ceph-mon[77282]: pgmap v4103: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Jan 31 09:08:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.003000081s ======
Jan 31 09:08:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:24.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000081s
Jan 31 09:08:25 compute-2 nova_compute[226829]: 2026-01-31 09:08:25.274 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:08:25 compute-2 nova_compute[226829]: 2026-01-31 09:08:25.276 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:08:25 compute-2 nova_compute[226829]: 2026-01-31 09:08:25.276 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 31 09:08:25 compute-2 nova_compute[226829]: 2026-01-31 09:08:25.276 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:08:25 compute-2 nova_compute[226829]: 2026-01-31 09:08:25.311 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:25 compute-2 nova_compute[226829]: 2026-01-31 09:08:25.311 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:08:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:08:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:08:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:26.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:08:26 compute-2 nova_compute[226829]: 2026-01-31 09:08:26.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:08:26 compute-2 nova_compute[226829]: 2026-01-31 09:08:26.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:08:26 compute-2 nova_compute[226829]: 2026-01-31 09:08:26.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:08:26 compute-2 nova_compute[226829]: 2026-01-31 09:08:26.696 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:08:26 compute-2 nova_compute[226829]: 2026-01-31 09:08:26.697 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:08:26 compute-2 nova_compute[226829]: 2026-01-31 09:08:26.697 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:08:26 compute-2 ceph-mon[77282]: pgmap v4104: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Jan 31 09:08:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:26.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:27 compute-2 nova_compute[226829]: 2026-01-31 09:08:27.584 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:08:27 compute-2 nova_compute[226829]: 2026-01-31 09:08:27.610 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:08:27 compute-2 nova_compute[226829]: 2026-01-31 09:08:27.610 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:08:27 compute-2 nova_compute[226829]: 2026-01-31 09:08:27.610 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:08:27 compute-2 nova_compute[226829]: 2026-01-31 09:08:27.610 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:08:27 compute-2 nova_compute[226829]: 2026-01-31 09:08:27.611 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:08:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3963900475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:08:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:08:28 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3877195061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:08:28 compute-2 nova_compute[226829]: 2026-01-31 09:08:28.023 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:08:28 compute-2 sudo[336072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:08:28 compute-2 sudo[336072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:28 compute-2 sudo[336072]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:28 compute-2 sudo[336097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:08:28 compute-2 sudo[336097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:28 compute-2 sudo[336097]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:28 compute-2 nova_compute[226829]: 2026-01-31 09:08:28.169 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:08:28 compute-2 nova_compute[226829]: 2026-01-31 09:08:28.170 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4016MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:08:28 compute-2 nova_compute[226829]: 2026-01-31 09:08:28.171 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:08:28 compute-2 nova_compute[226829]: 2026-01-31 09:08:28.171 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:08:28 compute-2 nova_compute[226829]: 2026-01-31 09:08:28.273 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:08:28 compute-2 nova_compute[226829]: 2026-01-31 09:08:28.274 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:08:28 compute-2 nova_compute[226829]: 2026-01-31 09:08:28.314 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:08:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:08:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:28.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:08:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:08:28 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4123028729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:08:28 compute-2 nova_compute[226829]: 2026-01-31 09:08:28.722 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:08:28 compute-2 nova_compute[226829]: 2026-01-31 09:08:28.727 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:08:28 compute-2 nova_compute[226829]: 2026-01-31 09:08:28.742 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:08:28 compute-2 nova_compute[226829]: 2026-01-31 09:08:28.744 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:08:28 compute-2 nova_compute[226829]: 2026-01-31 09:08:28.744 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:08:28 compute-2 ceph-mon[77282]: pgmap v4105: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 100 KiB/s rd, 1.8 MiB/s wr, 164 op/s
Jan 31 09:08:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3877195061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:08:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4123028729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:08:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:28.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:30 compute-2 ceph-mon[77282]: pgmap v4106: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 119 KiB/s rd, 1.4 MiB/s wr, 197 op/s
Jan 31 09:08:30 compute-2 nova_compute[226829]: 2026-01-31 09:08:30.312 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:08:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:30.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:08:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:30.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:08:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2036767708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:08:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4033565918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:08:31 compute-2 podman[336146]: 2026-01-31 09:08:31.224784073 +0000 UTC m=+0.092333552 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 09:08:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:08:32 compute-2 ceph-mon[77282]: pgmap v4107: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 834 KiB/s wr, 245 op/s
Jan 31 09:08:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:08:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:32.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:08:32 compute-2 nova_compute[226829]: 2026-01-31 09:08:32.648 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:08:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:32.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:34.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:34 compute-2 ceph-mon[77282]: pgmap v4108: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 12 KiB/s wr, 238 op/s
Jan 31 09:08:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:34.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:35 compute-2 nova_compute[226829]: 2026-01-31 09:08:35.315 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:35 compute-2 nova_compute[226829]: 2026-01-31 09:08:35.320 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:35 compute-2 ceph-mon[77282]: pgmap v4109: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 242 op/s
Jan 31 09:08:36 compute-2 podman[336174]: 2026-01-31 09:08:36.172702184 +0000 UTC m=+0.055749057 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 09:08:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:08:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:36.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:36.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:38.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:38 compute-2 ceph-mon[77282]: pgmap v4110: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 210 op/s
Jan 31 09:08:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:38.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:39 compute-2 ceph-mon[77282]: pgmap v4111: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 146 op/s
Jan 31 09:08:40 compute-2 nova_compute[226829]: 2026-01-31 09:08:40.321 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:08:40 compute-2 nova_compute[226829]: 2026-01-31 09:08:40.323 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:08:40 compute-2 nova_compute[226829]: 2026-01-31 09:08:40.323 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 31 09:08:40 compute-2 nova_compute[226829]: 2026-01-31 09:08:40.323 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:08:40 compute-2 nova_compute[226829]: 2026-01-31 09:08:40.348 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:40 compute-2 nova_compute[226829]: 2026-01-31 09:08:40.348 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:08:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:08:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:40.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:08:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:08:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:40.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:08:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:08:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:08:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:42.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:08:42 compute-2 ceph-mon[77282]: pgmap v4112: 305 pgs: 305 active+clean; 171 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 344 KiB/s wr, 116 op/s
Jan 31 09:08:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:42.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:43 compute-2 nova_compute[226829]: 2026-01-31 09:08:43.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:08:43 compute-2 nova_compute[226829]: 2026-01-31 09:08:43.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:08:44 compute-2 ceph-mon[77282]: pgmap v4113: 305 pgs: 305 active+clean; 175 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 792 KiB/s wr, 52 op/s
Jan 31 09:08:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:44.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:44.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:45 compute-2 nova_compute[226829]: 2026-01-31 09:08:45.348 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:45 compute-2 nova_compute[226829]: 2026-01-31 09:08:45.349 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:08:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:46.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:46.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:47 compute-2 ceph-mon[77282]: pgmap v4114: 305 pgs: 305 active+clean; 194 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 998 KiB/s rd, 2.1 MiB/s wr, 79 op/s
Jan 31 09:08:48 compute-2 ceph-mon[77282]: pgmap v4115: 305 pgs: 305 active+clean; 199 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 31 09:08:48 compute-2 sudo[336198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:08:48 compute-2 sudo[336198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:48 compute-2 sudo[336198]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:48 compute-2 sudo[336224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:08:48 compute-2 sudo[336224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:48 compute-2 sudo[336224]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:48.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:48.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:50 compute-2 nova_compute[226829]: 2026-01-31 09:08:50.350 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:08:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:08:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:50.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:08:50 compute-2 ceph-mon[77282]: pgmap v4116: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 370 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 09:08:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:08:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:50.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:08:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:08:51 compute-2 ceph-mon[77282]: pgmap v4117: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 370 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 09:08:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:52.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:52.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 09:08:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/616937897' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:08:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 09:08:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/616937897' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:08:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/616937897' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:08:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/616937897' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:08:54 compute-2 ceph-mon[77282]: pgmap v4118: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 352 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 31 09:08:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:54.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:08:54.654 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=108, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=107) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:08:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:08:54.655 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:08:54 compute-2 nova_compute[226829]: 2026-01-31 09:08:54.701 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:08:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:54.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:08:55 compute-2 nova_compute[226829]: 2026-01-31 09:08:55.353 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:55 compute-2 nova_compute[226829]: 2026-01-31 09:08:55.355 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:08:55 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:08:55.657 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '108'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:08:56 compute-2 ceph-mon[77282]: pgmap v4119: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 320 KiB/s rd, 1.4 MiB/s wr, 52 op/s
Jan 31 09:08:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:08:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:56.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:56.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:58.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:58 compute-2 ceph-mon[77282]: pgmap v4120: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 81 KiB/s wr, 13 op/s
Jan 31 09:08:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:08:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:08:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:58.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:08:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4113501718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:09:00 compute-2 nova_compute[226829]: 2026-01-31 09:09:00.355 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:09:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:09:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:00.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:09:00 compute-2 ceph-mon[77282]: pgmap v4121: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 62 KiB/s wr, 11 op/s
Jan 31 09:09:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:00.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #202. Immutable memtables: 0.
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:09:01.813733) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 202
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850541813790, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 644, "num_deletes": 251, "total_data_size": 1076086, "memory_usage": 1094096, "flush_reason": "Manual Compaction"}
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #203: started
Jan 31 09:09:01 compute-2 ceph-mon[77282]: pgmap v4122: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 39 KiB/s wr, 18 op/s
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850541823424, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 203, "file_size": 709428, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 97529, "largest_seqno": 98167, "table_properties": {"data_size": 706255, "index_size": 1080, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7396, "raw_average_key_size": 19, "raw_value_size": 699981, "raw_average_value_size": 1808, "num_data_blocks": 48, "num_entries": 387, "num_filter_entries": 387, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850501, "oldest_key_time": 1769850501, "file_creation_time": 1769850541, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 9735 microseconds, and 2352 cpu microseconds.
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:09:01.823474) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #203: 709428 bytes OK
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:09:01.823494) [db/memtable_list.cc:519] [default] Level-0 commit table #203 started
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:09:01.829125) [db/memtable_list.cc:722] [default] Level-0 commit table #203: memtable #1 done
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:09:01.829144) EVENT_LOG_v1 {"time_micros": 1769850541829139, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:09:01.829163) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 1072551, prev total WAL file size 1072551, number of live WAL files 2.
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000199.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:09:01.829795) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [203(692KB)], [201(13MB)]
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850541829847, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [203], "files_L6": [201], "score": -1, "input_data_size": 14840432, "oldest_snapshot_seqno": -1}
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #204: 11398 keys, 12878680 bytes, temperature: kUnknown
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850541927129, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 204, "file_size": 12878680, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12808060, "index_size": 41003, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28549, "raw_key_size": 303039, "raw_average_key_size": 26, "raw_value_size": 12611707, "raw_average_value_size": 1106, "num_data_blocks": 1543, "num_entries": 11398, "num_filter_entries": 11398, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769850541, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 204, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:09:01.927407) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 12878680 bytes
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:09:01.929688) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.4 rd, 132.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 13.5 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(39.1) write-amplify(18.2) OK, records in: 11909, records dropped: 511 output_compression: NoCompression
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:09:01.929704) EVENT_LOG_v1 {"time_micros": 1769850541929696, "job": 130, "event": "compaction_finished", "compaction_time_micros": 97376, "compaction_time_cpu_micros": 25716, "output_level": 6, "num_output_files": 1, "total_output_size": 12878680, "num_input_records": 11909, "num_output_records": 11398, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850541929856, "job": 130, "event": "table_file_deletion", "file_number": 203}
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000201.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850541930752, "job": 130, "event": "table_file_deletion", "file_number": 201}
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:09:01.829712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:09:01.930837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:09:01.930843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:09:01.930845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:09:01.930849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:09:01 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:09:01.930850) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:09:02 compute-2 nova_compute[226829]: 2026-01-31 09:09:02.085 226833 DEBUG oslo_concurrency.lockutils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "9a579d21-79af-418a-a1d6-756329428431" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:09:02 compute-2 nova_compute[226829]: 2026-01-31 09:09:02.086 226833 DEBUG oslo_concurrency.lockutils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:09:02 compute-2 nova_compute[226829]: 2026-01-31 09:09:02.086 226833 INFO nova.compute.manager [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Unshelving
Jan 31 09:09:02 compute-2 nova_compute[226829]: 2026-01-31 09:09:02.139 226833 INFO nova.virt.block_device [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Booting with volume cf361715-504e-40e5-87bf-5f49366009aa at /dev/vda
Jan 31 09:09:02 compute-2 podman[336256]: 2026-01-31 09:09:02.172630559 +0000 UTC m=+0.065714158 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 09:09:02 compute-2 nova_compute[226829]: 2026-01-31 09:09:02.271 226833 DEBUG os_brick.utils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 09:09:02 compute-2 nova_compute[226829]: 2026-01-31 09:09:02.273 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:09:02 compute-2 nova_compute[226829]: 2026-01-31 09:09:02.283 236868 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:09:02 compute-2 nova_compute[226829]: 2026-01-31 09:09:02.284 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf2ca11-8418-4bf7-ad3f-c22efae1439e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:09:02 compute-2 nova_compute[226829]: 2026-01-31 09:09:02.285 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:09:02 compute-2 nova_compute[226829]: 2026-01-31 09:09:02.290 236868 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:09:02 compute-2 nova_compute[226829]: 2026-01-31 09:09:02.290 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[c79918ac-7b45-4880-83ae-878361c1ae6c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:70a4e945afb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:09:02 compute-2 nova_compute[226829]: 2026-01-31 09:09:02.292 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:09:02 compute-2 nova_compute[226829]: 2026-01-31 09:09:02.301 236868 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:09:02 compute-2 nova_compute[226829]: 2026-01-31 09:09:02.302 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[d6b5f57c-f713-46a3-9dd7-86aa8d0be603]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:09:02 compute-2 nova_compute[226829]: 2026-01-31 09:09:02.303 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[3216351b-7f34-4c54-bfb6-6de77c48b846]: (4, 'd14f084b-ec77-4fba-801f-103494d34b3a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:09:02 compute-2 nova_compute[226829]: 2026-01-31 09:09:02.303 226833 DEBUG oslo_concurrency.processutils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:09:02 compute-2 nova_compute[226829]: 2026-01-31 09:09:02.324 226833 DEBUG oslo_concurrency.processutils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CMD "nvme version" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:09:02 compute-2 nova_compute[226829]: 2026-01-31 09:09:02.326 226833 DEBUG os_brick.initiator.connectors.lightos [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 09:09:02 compute-2 nova_compute[226829]: 2026-01-31 09:09:02.326 226833 DEBUG os_brick.initiator.connectors.lightos [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 09:09:02 compute-2 nova_compute[226829]: 2026-01-31 09:09:02.327 226833 DEBUG os_brick.initiator.connectors.lightos [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 09:09:02 compute-2 nova_compute[226829]: 2026-01-31 09:09:02.327 226833 DEBUG os_brick.utils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] <== get_connector_properties: return (55ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:70a4e945afb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'd14f084b-ec77-4fba-801f-103494d34b3a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 09:09:02 compute-2 nova_compute[226829]: 2026-01-31 09:09:02.327 226833 DEBUG nova.virt.block_device [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Updating existing volume attachment record: 70ee36d0-631d-4d42-9f55-4e4b543c4804 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 09:09:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:02.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 09:09:02 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2220800940' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:09:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:02.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2220800940' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:09:03 compute-2 nova_compute[226829]: 2026-01-31 09:09:03.212 226833 DEBUG oslo_concurrency.lockutils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:09:03 compute-2 nova_compute[226829]: 2026-01-31 09:09:03.213 226833 DEBUG oslo_concurrency.lockutils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:09:03 compute-2 nova_compute[226829]: 2026-01-31 09:09:03.221 226833 DEBUG nova.objects.instance [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lazy-loading 'pci_requests' on Instance uuid 9a579d21-79af-418a-a1d6-756329428431 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:09:03 compute-2 nova_compute[226829]: 2026-01-31 09:09:03.314 226833 DEBUG nova.objects.instance [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lazy-loading 'numa_topology' on Instance uuid 9a579d21-79af-418a-a1d6-756329428431 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:09:03 compute-2 nova_compute[226829]: 2026-01-31 09:09:03.412 226833 DEBUG nova.virt.hardware [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 09:09:03 compute-2 nova_compute[226829]: 2026-01-31 09:09:03.412 226833 INFO nova.compute.claims [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Claim successful on node compute-2.ctlplane.example.com
Jan 31 09:09:03 compute-2 nova_compute[226829]: 2026-01-31 09:09:03.668 226833 DEBUG oslo_concurrency.processutils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:09:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:09:04 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/524816306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:09:04 compute-2 nova_compute[226829]: 2026-01-31 09:09:04.076 226833 DEBUG oslo_concurrency.processutils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:09:04 compute-2 nova_compute[226829]: 2026-01-31 09:09:04.080 226833 DEBUG nova.compute.provider_tree [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:09:04 compute-2 nova_compute[226829]: 2026-01-31 09:09:04.103 226833 DEBUG nova.scheduler.client.report [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:09:04 compute-2 nova_compute[226829]: 2026-01-31 09:09:04.134 226833 DEBUG oslo_concurrency.lockutils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:09:04 compute-2 nova_compute[226829]: 2026-01-31 09:09:04.278 226833 INFO nova.network.neutron [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Updating port 5ca89d89-0532-424d-9005-27c4b82e9793 with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 31 09:09:04 compute-2 ceph-mon[77282]: pgmap v4123: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 39 KiB/s wr, 18 op/s
Jan 31 09:09:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/524816306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:09:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:04.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:04 compute-2 nova_compute[226829]: 2026-01-31 09:09:04.877 226833 DEBUG oslo_concurrency.lockutils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "refresh_cache-9a579d21-79af-418a-a1d6-756329428431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:09:04 compute-2 nova_compute[226829]: 2026-01-31 09:09:04.878 226833 DEBUG oslo_concurrency.lockutils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquired lock "refresh_cache-9a579d21-79af-418a-a1d6-756329428431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:09:04 compute-2 nova_compute[226829]: 2026-01-31 09:09:04.878 226833 DEBUG nova.network.neutron [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 09:09:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:04.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:04 compute-2 nova_compute[226829]: 2026-01-31 09:09:04.983 226833 DEBUG nova.compute.manager [req-93274dc9-3daa-4c64-b67f-432df2cd94e8 req-2eb2afb9-9493-4cac-9957-91c0a5375a78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Received event network-changed-5ca89d89-0532-424d-9005-27c4b82e9793 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:09:04 compute-2 nova_compute[226829]: 2026-01-31 09:09:04.984 226833 DEBUG nova.compute.manager [req-93274dc9-3daa-4c64-b67f-432df2cd94e8 req-2eb2afb9-9493-4cac-9957-91c0a5375a78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Refreshing instance network info cache due to event network-changed-5ca89d89-0532-424d-9005-27c4b82e9793. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:09:04 compute-2 nova_compute[226829]: 2026-01-31 09:09:04.984 226833 DEBUG oslo_concurrency.lockutils [req-93274dc9-3daa-4c64-b67f-432df2cd94e8 req-2eb2afb9-9493-4cac-9957-91c0a5375a78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-9a579d21-79af-418a-a1d6-756329428431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:09:05 compute-2 nova_compute[226829]: 2026-01-31 09:09:05.356 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:09:05 compute-2 nova_compute[226829]: 2026-01-31 09:09:05.358 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:05 compute-2 nova_compute[226829]: 2026-01-31 09:09:05.358 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 31 09:09:05 compute-2 nova_compute[226829]: 2026-01-31 09:09:05.358 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:09:05 compute-2 nova_compute[226829]: 2026-01-31 09:09:05.359 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:09:05 compute-2 nova_compute[226829]: 2026-01-31 09:09:05.361 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:05 compute-2 ceph-mon[77282]: pgmap v4124: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 36 KiB/s wr, 17 op/s
Jan 31 09:09:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:09:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:06.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:09:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:06.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:09:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:06.953 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:09:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:06.953 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:09:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:06.953 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:09:07 compute-2 podman[336314]: 2026-01-31 09:09:07.15069144 +0000 UTC m=+0.043029032 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:09:07 compute-2 ceph-mon[77282]: pgmap v4125: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.5 KiB/s rd, 596 B/s wr, 12 op/s
Jan 31 09:09:08 compute-2 sudo[336334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:08 compute-2 sudo[336334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:08 compute-2 sudo[336334]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:08 compute-2 sudo[336359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:08 compute-2 sudo[336359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:08 compute-2 sudo[336359]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:09:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:08.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:09:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:08.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:10 compute-2 nova_compute[226829]: 2026-01-31 09:09:10.361 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:10.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:10 compute-2 ceph-mon[77282]: pgmap v4126: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.4 KiB/s rd, 596 B/s wr, 12 op/s
Jan 31 09:09:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:10.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.167 226833 DEBUG nova.network.neutron [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Updating instance_info_cache with network_info: [{"id": "5ca89d89-0532-424d-9005-27c4b82e9793", "address": "fa:16:3e:cc:8f:3e", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca89d89-05", "ovs_interfaceid": "5ca89d89-0532-424d-9005-27c4b82e9793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.192 226833 DEBUG oslo_concurrency.lockutils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Releasing lock "refresh_cache-9a579d21-79af-418a-a1d6-756329428431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.194 226833 DEBUG nova.virt.libvirt.driver [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.194 226833 INFO nova.virt.libvirt.driver [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Creating image(s)
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.195 226833 DEBUG nova.virt.libvirt.driver [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.195 226833 DEBUG nova.virt.libvirt.driver [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Ensure instance console log exists: /var/lib/nova/instances/9a579d21-79af-418a-a1d6-756329428431/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.195 226833 DEBUG oslo_concurrency.lockutils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.196 226833 DEBUG oslo_concurrency.lockutils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.196 226833 DEBUG oslo_concurrency.lockutils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.199 226833 DEBUG nova.virt.libvirt.driver [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Start _get_guest_xml network_info=[{"id": "5ca89d89-0532-424d-9005-27c4b82e9793", "address": "fa:16:3e:cc:8f:3e", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca89d89-05", "ovs_interfaceid": "5ca89d89-0532-424d-9005-27c4b82e9793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'delete_on_termination': True, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-cf361715-504e-40e5-87bf-5f49366009aa', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'cf361715-504e-40e5-87bf-5f49366009aa', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '9a579d21-79af-418a-a1d6-756329428431', 'attached_at': '', 'detached_at': '', 'volume_id': 'cf361715-504e-40e5-87bf-5f49366009aa', 'serial': 'cf361715-504e-40e5-87bf-5f49366009aa'}, 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'attachment_id': '70ee36d0-631d-4d42-9f55-4e4b543c4804', 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.200 226833 DEBUG oslo_concurrency.lockutils [req-93274dc9-3daa-4c64-b67f-432df2cd94e8 req-2eb2afb9-9493-4cac-9957-91c0a5375a78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-9a579d21-79af-418a-a1d6-756329428431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.200 226833 DEBUG nova.network.neutron [req-93274dc9-3daa-4c64-b67f-432df2cd94e8 req-2eb2afb9-9493-4cac-9957-91c0a5375a78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Refreshing network info cache for port 5ca89d89-0532-424d-9005-27c4b82e9793 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.206 226833 WARNING nova.virt.libvirt.driver [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.211 226833 DEBUG nova.virt.libvirt.host [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.211 226833 DEBUG nova.virt.libvirt.host [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.215 226833 DEBUG nova.virt.libvirt.host [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.215 226833 DEBUG nova.virt.libvirt.host [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.217 226833 DEBUG nova.virt.libvirt.driver [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.218 226833 DEBUG nova.virt.hardware [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.218 226833 DEBUG nova.virt.hardware [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.218 226833 DEBUG nova.virt.hardware [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.219 226833 DEBUG nova.virt.hardware [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.219 226833 DEBUG nova.virt.hardware [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.219 226833 DEBUG nova.virt.hardware [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.220 226833 DEBUG nova.virt.hardware [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.220 226833 DEBUG nova.virt.hardware [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.220 226833 DEBUG nova.virt.hardware [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.221 226833 DEBUG nova.virt.hardware [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.221 226833 DEBUG nova.virt.hardware [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.221 226833 DEBUG nova.objects.instance [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9a579d21-79af-418a-a1d6-756329428431 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.273 226833 DEBUG nova.storage.rbd_utils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] rbd image 9a579d21-79af-418a-a1d6-756329428431_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.276 226833 DEBUG oslo_concurrency.processutils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:09:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:09:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 09:09:11 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2197960158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.703 226833 DEBUG oslo_concurrency.processutils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.733 226833 DEBUG nova.virt.libvirt.vif [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T09:08:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-573915944',display_name='tempest-TestShelveInstance-server-573915944',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testshelveinstance-server-573915944',id=217,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-385640846',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:08:28Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='496e06c7521f45c994e6426c4313acea',ramdisk_id='',reservation_id='r-1rp3uu0q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_proj
ect_name='tempest-TestShelveInstance-1485729988',owner_user_name='tempest-TestShelveInstance-1485729988-project-member'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:09:02Z,user_data=None,user_id='4883c0d4a7f54a6898eba5bfdbb41266',uuid=9a579d21-79af-418a-a1d6-756329428431,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "5ca89d89-0532-424d-9005-27c4b82e9793", "address": "fa:16:3e:cc:8f:3e", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca89d89-05", "ovs_interfaceid": "5ca89d89-0532-424d-9005-27c4b82e9793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.734 226833 DEBUG nova.network.os_vif_util [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Converting VIF {"id": "5ca89d89-0532-424d-9005-27c4b82e9793", "address": "fa:16:3e:cc:8f:3e", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca89d89-05", "ovs_interfaceid": "5ca89d89-0532-424d-9005-27c4b82e9793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.735 226833 DEBUG nova.network.os_vif_util [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:8f:3e,bridge_name='br-int',has_traffic_filtering=True,id=5ca89d89-0532-424d-9005-27c4b82e9793,network=Network(1d742d07-ac6a-4870-9712-15a33a8a1e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ca89d89-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:09:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2197960158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.737 226833 DEBUG nova.objects.instance [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lazy-loading 'pci_devices' on Instance uuid 9a579d21-79af-418a-a1d6-756329428431 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.772 226833 DEBUG nova.virt.libvirt.driver [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] End _get_guest_xml xml=<domain type="kvm">
Jan 31 09:09:11 compute-2 nova_compute[226829]:   <uuid>9a579d21-79af-418a-a1d6-756329428431</uuid>
Jan 31 09:09:11 compute-2 nova_compute[226829]:   <name>instance-000000d9</name>
Jan 31 09:09:11 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 09:09:11 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 09:09:11 compute-2 nova_compute[226829]:   <metadata>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <nova:name>tempest-TestShelveInstance-server-573915944</nova:name>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 09:09:11</nova:creationTime>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 09:09:11 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 09:09:11 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 09:09:11 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 09:09:11 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 09:09:11 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 09:09:11 compute-2 nova_compute[226829]:         <nova:user uuid="4883c0d4a7f54a6898eba5bfdbb41266">tempest-TestShelveInstance-1485729988-project-member</nova:user>
Jan 31 09:09:11 compute-2 nova_compute[226829]:         <nova:project uuid="496e06c7521f45c994e6426c4313acea">tempest-TestShelveInstance-1485729988</nova:project>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 09:09:11 compute-2 nova_compute[226829]:         <nova:port uuid="5ca89d89-0532-424d-9005-27c4b82e9793">
Jan 31 09:09:11 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 09:09:11 compute-2 nova_compute[226829]:   </metadata>
Jan 31 09:09:11 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <system>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <entry name="serial">9a579d21-79af-418a-a1d6-756329428431</entry>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <entry name="uuid">9a579d21-79af-418a-a1d6-756329428431</entry>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     </system>
Jan 31 09:09:11 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 09:09:11 compute-2 nova_compute[226829]:   <os>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:   </os>
Jan 31 09:09:11 compute-2 nova_compute[226829]:   <features>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <apic/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:   </features>
Jan 31 09:09:11 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:   </clock>
Jan 31 09:09:11 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:   </cpu>
Jan 31 09:09:11 compute-2 nova_compute[226829]:   <devices>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/9a579d21-79af-418a-a1d6-756329428431_disk.config">
Jan 31 09:09:11 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       </source>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 09:09:11 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       </auth>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     </disk>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <source protocol="rbd" name="volumes/volume-cf361715-504e-40e5-87bf-5f49366009aa">
Jan 31 09:09:11 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       </source>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 09:09:11 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       </auth>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <serial>cf361715-504e-40e5-87bf-5f49366009aa</serial>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     </disk>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:cc:8f:3e"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <target dev="tap5ca89d89-05"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     </interface>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/9a579d21-79af-418a-a1d6-756329428431/console.log" append="off"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     </serial>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <video>
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     </video>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <input type="keyboard" bus="usb"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     </rng>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 09:09:11 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 09:09:11 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 09:09:11 compute-2 nova_compute[226829]:   </devices>
Jan 31 09:09:11 compute-2 nova_compute[226829]: </domain>
Jan 31 09:09:11 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.773 226833 DEBUG nova.compute.manager [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Preparing to wait for external event network-vif-plugged-5ca89d89-0532-424d-9005-27c4b82e9793 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.773 226833 DEBUG oslo_concurrency.lockutils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "9a579d21-79af-418a-a1d6-756329428431-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.773 226833 DEBUG oslo_concurrency.lockutils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.774 226833 DEBUG oslo_concurrency.lockutils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.774 226833 DEBUG nova.virt.libvirt.vif [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T09:08:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-573915944',display_name='tempest-TestShelveInstance-server-573915944',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testshelveinstance-server-573915944',id=217,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-385640846',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:08:28Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='496e06c7521f45c994e6426c4313acea',ramdisk_id='',reservation_id='r-1rp3uu0q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestShelveInstance-1485729988',owner_user_name='tempest-TestShelveInstance-1485729988-project-member'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:09:02Z,user_data=None,user_id='4883c0d4a7f54a6898eba5bfdbb41266',uuid=9a579d21-79af-418a-a1d6-756329428431,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "5ca89d89-0532-424d-9005-27c4b82e9793", "address": "fa:16:3e:cc:8f:3e", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca89d89-05", "ovs_interfaceid": "5ca89d89-0532-424d-9005-27c4b82e9793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.775 226833 DEBUG nova.network.os_vif_util [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Converting VIF {"id": "5ca89d89-0532-424d-9005-27c4b82e9793", "address": "fa:16:3e:cc:8f:3e", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca89d89-05", "ovs_interfaceid": "5ca89d89-0532-424d-9005-27c4b82e9793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.775 226833 DEBUG nova.network.os_vif_util [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:8f:3e,bridge_name='br-int',has_traffic_filtering=True,id=5ca89d89-0532-424d-9005-27c4b82e9793,network=Network(1d742d07-ac6a-4870-9712-15a33a8a1e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ca89d89-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.776 226833 DEBUG os_vif [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:8f:3e,bridge_name='br-int',has_traffic_filtering=True,id=5ca89d89-0532-424d-9005-27c4b82e9793,network=Network(1d742d07-ac6a-4870-9712-15a33a8a1e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ca89d89-05') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.776 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.777 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.777 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.783 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.783 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5ca89d89-05, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.784 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5ca89d89-05, col_values=(('external_ids', {'iface-id': '5ca89d89-0532-424d-9005-27c4b82e9793', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cc:8f:3e', 'vm-uuid': '9a579d21-79af-418a-a1d6-756329428431'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.786 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:11 compute-2 NetworkManager[48999]: <info>  [1769850551.7877] manager: (tap5ca89d89-05): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/431)
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.788 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.792 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.793 226833 INFO os_vif [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:8f:3e,bridge_name='br-int',has_traffic_filtering=True,id=5ca89d89-0532-424d-9005-27c4b82e9793,network=Network(1d742d07-ac6a-4870-9712-15a33a8a1e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ca89d89-05')
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.853 226833 DEBUG nova.virt.libvirt.driver [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.854 226833 DEBUG nova.virt.libvirt.driver [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.854 226833 DEBUG nova.virt.libvirt.driver [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] No VIF found with MAC fa:16:3e:cc:8f:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.854 226833 INFO nova.virt.libvirt.driver [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Using config drive
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.876 226833 DEBUG nova.storage.rbd_utils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] rbd image 9a579d21-79af-418a-a1d6-756329428431_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.901 226833 DEBUG nova.objects.instance [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9a579d21-79af-418a-a1d6-756329428431 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:09:11 compute-2 nova_compute[226829]: 2026-01-31 09:09:11.931 226833 DEBUG nova.objects.instance [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lazy-loading 'keypairs' on Instance uuid 9a579d21-79af-418a-a1d6-756329428431 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:09:12 compute-2 nova_compute[226829]: 2026-01-31 09:09:12.361 226833 INFO nova.virt.libvirt.driver [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Creating config drive at /var/lib/nova/instances/9a579d21-79af-418a-a1d6-756329428431/disk.config
Jan 31 09:09:12 compute-2 nova_compute[226829]: 2026-01-31 09:09:12.365 226833 DEBUG oslo_concurrency.processutils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9a579d21-79af-418a-a1d6-756329428431/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpdt6gh053 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:09:12 compute-2 nova_compute[226829]: 2026-01-31 09:09:12.497 226833 DEBUG oslo_concurrency.processutils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9a579d21-79af-418a-a1d6-756329428431/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpdt6gh053" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:09:12 compute-2 nova_compute[226829]: 2026-01-31 09:09:12.535 226833 DEBUG nova.storage.rbd_utils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] rbd image 9a579d21-79af-418a-a1d6-756329428431_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:09:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:09:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:12.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:09:12 compute-2 nova_compute[226829]: 2026-01-31 09:09:12.540 226833 DEBUG oslo_concurrency.processutils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9a579d21-79af-418a-a1d6-756329428431/disk.config 9a579d21-79af-418a-a1d6-756329428431_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:09:12 compute-2 ceph-mon[77282]: pgmap v4127: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.3 KiB/s rd, 255 B/s wr, 9 op/s
Jan 31 09:09:12 compute-2 nova_compute[226829]: 2026-01-31 09:09:12.755 226833 DEBUG oslo_concurrency.processutils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9a579d21-79af-418a-a1d6-756329428431/disk.config 9a579d21-79af-418a-a1d6-756329428431_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:09:12 compute-2 nova_compute[226829]: 2026-01-31 09:09:12.756 226833 INFO nova.virt.libvirt.driver [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Deleting local config drive /var/lib/nova/instances/9a579d21-79af-418a-a1d6-756329428431/disk.config because it was imported into RBD.
Jan 31 09:09:12 compute-2 kernel: tap5ca89d89-05: entered promiscuous mode
Jan 31 09:09:12 compute-2 NetworkManager[48999]: <info>  [1769850552.7954] manager: (tap5ca89d89-05): new Tun device (/org/freedesktop/NetworkManager/Devices/432)
Jan 31 09:09:12 compute-2 ovn_controller[133834]: 2026-01-31T09:09:12Z|00858|binding|INFO|Claiming lport 5ca89d89-0532-424d-9005-27c4b82e9793 for this chassis.
Jan 31 09:09:12 compute-2 ovn_controller[133834]: 2026-01-31T09:09:12Z|00859|binding|INFO|5ca89d89-0532-424d-9005-27c4b82e9793: Claiming fa:16:3e:cc:8f:3e 10.100.0.8
Jan 31 09:09:12 compute-2 nova_compute[226829]: 2026-01-31 09:09:12.795 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:12 compute-2 ovn_controller[133834]: 2026-01-31T09:09:12Z|00860|binding|INFO|Setting lport 5ca89d89-0532-424d-9005-27c4b82e9793 ovn-installed in OVS
Jan 31 09:09:12 compute-2 nova_compute[226829]: 2026-01-31 09:09:12.801 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:12 compute-2 ovn_controller[133834]: 2026-01-31T09:09:12Z|00861|binding|INFO|Setting lport 5ca89d89-0532-424d-9005-27c4b82e9793 up in Southbound
Jan 31 09:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:12.804 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:8f:3e 10.100.0.8'], port_security=['fa:16:3e:cc:8f:3e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '9a579d21-79af-418a-a1d6-756329428431', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '496e06c7521f45c994e6426c4313acea', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'be46c5ac-d5d7-434f-9c7a-f07c0d88092e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.172'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=62ab62d6-b781-4309-8775-90b43197869a, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=5ca89d89-0532-424d-9005-27c4b82e9793) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:12.805 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 5ca89d89-0532-424d-9005-27c4b82e9793 in datapath 1d742d07-ac6a-4870-9712-15a33a8a1e71 bound to our chassis
Jan 31 09:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:12.807 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1d742d07-ac6a-4870-9712-15a33a8a1e71
Jan 31 09:09:12 compute-2 systemd-machined[195142]: New machine qemu-97-instance-000000d9.
Jan 31 09:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:12.821 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[482c5d5d-893e-4c20-8483-fb8f4a2f1d02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:12.822 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1d742d07-a1 in ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 09:09:12 compute-2 systemd-udevd[336500]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 09:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:12.824 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1d742d07-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 09:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:12.825 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[38ac92c4-fbf2-48f3-86d1-d0809181bd2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:12.825 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8390dad9-2431-4c9d-9f59-46b10ea839c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:09:12 compute-2 systemd[1]: Started Virtual Machine qemu-97-instance-000000d9.
Jan 31 09:09:12 compute-2 NetworkManager[48999]: <info>  [1769850552.8338] device (tap5ca89d89-05): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 09:09:12 compute-2 NetworkManager[48999]: <info>  [1769850552.8346] device (tap5ca89d89-05): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 09:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:12.835 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[e5409973-2960-40e0-9e5f-cb94dc8cbc85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:12.845 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b19470ec-81fd-421b-a73f-82f91b0128c1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:12.869 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[2864f26e-709e-4fc0-87f3-aff71888983b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:12.875 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0be5bd3a-e0cc-4e88-823f-bf33751d62bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:09:12 compute-2 systemd-udevd[336503]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 09:09:12 compute-2 NetworkManager[48999]: <info>  [1769850552.8774] manager: (tap1d742d07-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/433)
Jan 31 09:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:12.902 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[f69fa3ca-0e4c-4210-bd32-beb936c230a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:12.905 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[226d275b-e052-4240-95d4-8c0a92ecfd73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:09:12 compute-2 NetworkManager[48999]: <info>  [1769850552.9206] device (tap1d742d07-a0): carrier: link connected
Jan 31 09:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:12.925 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[f24a562e-78a5-4600-9977-b93e75143909]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:09:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:12.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:12.943 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e5699cb8-10cb-47ce-bd1b-e27e5c714337]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1d742d07-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:3d:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 268], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1080539, 'reachable_time': 25719, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336532, 'error': None, 'target': 'ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:12.956 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f4db53c8-3c42-4def-9019-3bde85e1c325]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0f:3d77'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1080539, 'tstamp': 1080539}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336533, 'error': None, 'target': 'ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:12.970 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[69410e03-0879-4120-af0c-5ec38da8a17c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1d742d07-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:3d:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 268], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1080539, 'reachable_time': 25719, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 336534, 'error': None, 'target': 'ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:09:12 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:12.992 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9f7323de-5987-4845-9327-073131e720c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:13.033 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ef4f16eb-0e41-4fe0-8379-bd84d3dbd3cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:13.034 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d742d07-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:13.035 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:13.035 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d742d07-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:09:13 compute-2 kernel: tap1d742d07-a0: entered promiscuous mode
Jan 31 09:09:13 compute-2 NetworkManager[48999]: <info>  [1769850553.0374] manager: (tap1d742d07-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/434)
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:13.038 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1d742d07-a0, col_values=(('external_ids', {'iface-id': 'ff7f72f7-b69e-4d38-bd70-12b9fe05b593'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:09:13 compute-2 ovn_controller[133834]: 2026-01-31T09:09:13Z|00862|binding|INFO|Releasing lport ff7f72f7-b69e-4d38-bd70-12b9fe05b593 from this chassis (sb_readonly=0)
Jan 31 09:09:13 compute-2 nova_compute[226829]: 2026-01-31 09:09:13.036 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:13 compute-2 nova_compute[226829]: 2026-01-31 09:09:13.040 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:13.040 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1d742d07-ac6a-4870-9712-15a33a8a1e71.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1d742d07-ac6a-4870-9712-15a33a8a1e71.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 09:09:13 compute-2 nova_compute[226829]: 2026-01-31 09:09:13.043 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:13.043 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[f6007e3a-b93f-4138-9842-19cc9052ef35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:13.044 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]: global
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-1d742d07-ac6a-4870-9712-15a33a8a1e71
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/1d742d07-ac6a-4870-9712-15a33a8a1e71.pid.haproxy
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]: 
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]: 
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]: 
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 1d742d07-ac6a-4870-9712-15a33a8a1e71
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 09:09:13 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:13.045 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'env', 'PROCESS_TAG=haproxy-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1d742d07-ac6a-4870-9712-15a33a8a1e71.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 09:09:13 compute-2 nova_compute[226829]: 2026-01-31 09:09:13.405 226833 DEBUG nova.compute.manager [req-a031c503-1853-4e48-9116-918905dbb5bb req-0be915b7-6cc4-460e-8877-31a59b90e8a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Received event network-vif-plugged-5ca89d89-0532-424d-9005-27c4b82e9793 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:09:13 compute-2 nova_compute[226829]: 2026-01-31 09:09:13.406 226833 DEBUG oslo_concurrency.lockutils [req-a031c503-1853-4e48-9116-918905dbb5bb req-0be915b7-6cc4-460e-8877-31a59b90e8a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9a579d21-79af-418a-a1d6-756329428431-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:09:13 compute-2 nova_compute[226829]: 2026-01-31 09:09:13.407 226833 DEBUG oslo_concurrency.lockutils [req-a031c503-1853-4e48-9116-918905dbb5bb req-0be915b7-6cc4-460e-8877-31a59b90e8a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:09:13 compute-2 nova_compute[226829]: 2026-01-31 09:09:13.407 226833 DEBUG oslo_concurrency.lockutils [req-a031c503-1853-4e48-9116-918905dbb5bb req-0be915b7-6cc4-460e-8877-31a59b90e8a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:09:13 compute-2 nova_compute[226829]: 2026-01-31 09:09:13.407 226833 DEBUG nova.compute.manager [req-a031c503-1853-4e48-9116-918905dbb5bb req-0be915b7-6cc4-460e-8877-31a59b90e8a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Processing event network-vif-plugged-5ca89d89-0532-424d-9005-27c4b82e9793 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 09:09:13 compute-2 podman[336566]: 2026-01-31 09:09:13.386245044 +0000 UTC m=+0.019375667 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 09:09:13 compute-2 podman[336566]: 2026-01-31 09:09:13.505539958 +0000 UTC m=+0.138670581 container create ce6e6c10fa2453d97dd9cd4977bc3ffea5e9b615fe88e4cc6f45cd461b09ff3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:09:13 compute-2 systemd[1]: Started libpod-conmon-ce6e6c10fa2453d97dd9cd4977bc3ffea5e9b615fe88e4cc6f45cd461b09ff3d.scope.
Jan 31 09:09:13 compute-2 systemd[1]: Started libcrun container.
Jan 31 09:09:13 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2082f4d30d11648f9e80544934e44807ecf080bfdcec9229bc55d5df123bbbde/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 09:09:13 compute-2 podman[336566]: 2026-01-31 09:09:13.583053457 +0000 UTC m=+0.216184110 container init ce6e6c10fa2453d97dd9cd4977bc3ffea5e9b615fe88e4cc6f45cd461b09ff3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:09:13 compute-2 podman[336566]: 2026-01-31 09:09:13.587481946 +0000 UTC m=+0.220612569 container start ce6e6c10fa2453d97dd9cd4977bc3ffea5e9b615fe88e4cc6f45cd461b09ff3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:09:13 compute-2 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[336582]: [NOTICE]   (336586) : New worker (336588) forked
Jan 31 09:09:13 compute-2 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[336582]: [NOTICE]   (336586) : Loading success.
Jan 31 09:09:13 compute-2 ceph-mon[77282]: pgmap v4128: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:09:14 compute-2 nova_compute[226829]: 2026-01-31 09:09:14.023 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769850554.0232704, 9a579d21-79af-418a-a1d6-756329428431 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:09:14 compute-2 nova_compute[226829]: 2026-01-31 09:09:14.024 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 9a579d21-79af-418a-a1d6-756329428431] VM Started (Lifecycle Event)
Jan 31 09:09:14 compute-2 nova_compute[226829]: 2026-01-31 09:09:14.026 226833 DEBUG nova.compute.manager [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 09:09:14 compute-2 nova_compute[226829]: 2026-01-31 09:09:14.029 226833 DEBUG nova.virt.libvirt.driver [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 09:09:14 compute-2 nova_compute[226829]: 2026-01-31 09:09:14.032 226833 INFO nova.virt.libvirt.driver [-] [instance: 9a579d21-79af-418a-a1d6-756329428431] Instance spawned successfully.
Jan 31 09:09:14 compute-2 nova_compute[226829]: 2026-01-31 09:09:14.032 226833 DEBUG nova.compute.manager [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:09:14 compute-2 nova_compute[226829]: 2026-01-31 09:09:14.044 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 9a579d21-79af-418a-a1d6-756329428431] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:09:14 compute-2 nova_compute[226829]: 2026-01-31 09:09:14.049 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 9a579d21-79af-418a-a1d6-756329428431] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 09:09:14 compute-2 nova_compute[226829]: 2026-01-31 09:09:14.076 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 9a579d21-79af-418a-a1d6-756329428431] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 09:09:14 compute-2 nova_compute[226829]: 2026-01-31 09:09:14.077 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769850554.0254605, 9a579d21-79af-418a-a1d6-756329428431 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:09:14 compute-2 nova_compute[226829]: 2026-01-31 09:09:14.077 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 9a579d21-79af-418a-a1d6-756329428431] VM Paused (Lifecycle Event)
Jan 31 09:09:14 compute-2 nova_compute[226829]: 2026-01-31 09:09:14.088 226833 DEBUG nova.network.neutron [req-93274dc9-3daa-4c64-b67f-432df2cd94e8 req-2eb2afb9-9493-4cac-9957-91c0a5375a78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Updated VIF entry in instance network info cache for port 5ca89d89-0532-424d-9005-27c4b82e9793. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:09:14 compute-2 nova_compute[226829]: 2026-01-31 09:09:14.089 226833 DEBUG nova.network.neutron [req-93274dc9-3daa-4c64-b67f-432df2cd94e8 req-2eb2afb9-9493-4cac-9957-91c0a5375a78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Updating instance_info_cache with network_info: [{"id": "5ca89d89-0532-424d-9005-27c4b82e9793", "address": "fa:16:3e:cc:8f:3e", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca89d89-05", "ovs_interfaceid": "5ca89d89-0532-424d-9005-27c4b82e9793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:09:14 compute-2 nova_compute[226829]: 2026-01-31 09:09:14.111 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 9a579d21-79af-418a-a1d6-756329428431] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:09:14 compute-2 nova_compute[226829]: 2026-01-31 09:09:14.114 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769850554.0283241, 9a579d21-79af-418a-a1d6-756329428431 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:09:14 compute-2 nova_compute[226829]: 2026-01-31 09:09:14.114 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 9a579d21-79af-418a-a1d6-756329428431] VM Resumed (Lifecycle Event)
Jan 31 09:09:14 compute-2 nova_compute[226829]: 2026-01-31 09:09:14.120 226833 DEBUG oslo_concurrency.lockutils [req-93274dc9-3daa-4c64-b67f-432df2cd94e8 req-2eb2afb9-9493-4cac-9957-91c0a5375a78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-9a579d21-79af-418a-a1d6-756329428431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:09:14 compute-2 nova_compute[226829]: 2026-01-31 09:09:14.123 226833 DEBUG oslo_concurrency.lockutils [None req-15cd6491-da61-45ca-88a0-11362414528a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 12.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:09:14 compute-2 nova_compute[226829]: 2026-01-31 09:09:14.134 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 9a579d21-79af-418a-a1d6-756329428431] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:09:14 compute-2 nova_compute[226829]: 2026-01-31 09:09:14.137 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 9a579d21-79af-418a-a1d6-756329428431] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 09:09:14 compute-2 nova_compute[226829]: 2026-01-31 09:09:14.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:09:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:09:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:14.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:09:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:09:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:14.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:09:15 compute-2 nova_compute[226829]: 2026-01-31 09:09:15.363 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:15 compute-2 nova_compute[226829]: 2026-01-31 09:09:15.538 226833 DEBUG nova.compute.manager [req-05354637-463a-4a2e-af4b-99abbf89a0c5 req-24e50df7-90a0-4aa1-ad00-dcc25ba1bf56 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Received event network-vif-plugged-5ca89d89-0532-424d-9005-27c4b82e9793 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:09:15 compute-2 nova_compute[226829]: 2026-01-31 09:09:15.538 226833 DEBUG oslo_concurrency.lockutils [req-05354637-463a-4a2e-af4b-99abbf89a0c5 req-24e50df7-90a0-4aa1-ad00-dcc25ba1bf56 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9a579d21-79af-418a-a1d6-756329428431-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:09:15 compute-2 nova_compute[226829]: 2026-01-31 09:09:15.538 226833 DEBUG oslo_concurrency.lockutils [req-05354637-463a-4a2e-af4b-99abbf89a0c5 req-24e50df7-90a0-4aa1-ad00-dcc25ba1bf56 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:09:15 compute-2 nova_compute[226829]: 2026-01-31 09:09:15.539 226833 DEBUG oslo_concurrency.lockutils [req-05354637-463a-4a2e-af4b-99abbf89a0c5 req-24e50df7-90a0-4aa1-ad00-dcc25ba1bf56 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:09:15 compute-2 nova_compute[226829]: 2026-01-31 09:09:15.539 226833 DEBUG nova.compute.manager [req-05354637-463a-4a2e-af4b-99abbf89a0c5 req-24e50df7-90a0-4aa1-ad00-dcc25ba1bf56 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] No waiting events found dispatching network-vif-plugged-5ca89d89-0532-424d-9005-27c4b82e9793 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:09:15 compute-2 nova_compute[226829]: 2026-01-31 09:09:15.539 226833 WARNING nova.compute.manager [req-05354637-463a-4a2e-af4b-99abbf89a0c5 req-24e50df7-90a0-4aa1-ad00-dcc25ba1bf56 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Received unexpected event network-vif-plugged-5ca89d89-0532-424d-9005-27c4b82e9793 for instance with vm_state active and task_state None.
Jan 31 09:09:15 compute-2 ceph-mon[77282]: pgmap v4129: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 947 KiB/s rd, 511 B/s wr, 37 op/s
Jan 31 09:09:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:09:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:09:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:16.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:09:16 compute-2 nova_compute[226829]: 2026-01-31 09:09:16.786 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:16 compute-2 sudo[336641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:16 compute-2 sudo[336641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:16 compute-2 sudo[336641]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:16.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:16 compute-2 sudo[336666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:09:16 compute-2 sudo[336666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:16 compute-2 sudo[336666]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:17 compute-2 sudo[336691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:17 compute-2 sudo[336691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:17 compute-2 sudo[336691]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:17 compute-2 sudo[336716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 09:09:17 compute-2 sudo[336716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:17 compute-2 podman[336813]: 2026-01-31 09:09:17.541659594 +0000 UTC m=+0.073334365 container exec 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 09:09:17 compute-2 podman[336813]: 2026-01-31 09:09:17.665353658 +0000 UTC m=+0.197028429 container exec_died 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:09:18 compute-2 podman[336971]: 2026-01-31 09:09:18.21560642 +0000 UTC m=+0.080764926 container exec f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 09:09:18 compute-2 podman[336992]: 2026-01-31 09:09:18.282291804 +0000 UTC m=+0.053454625 container exec_died f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 09:09:18 compute-2 podman[336971]: 2026-01-31 09:09:18.293003705 +0000 UTC m=+0.158162211 container exec_died f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 09:09:18 compute-2 podman[337036]: 2026-01-31 09:09:18.477616116 +0000 UTC m=+0.054260086 container exec 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, name=keepalived, description=keepalived for Ceph, distribution-scope=public, vcs-type=git, architecture=x86_64, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, release=1793, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 09:09:18 compute-2 podman[337036]: 2026-01-31 09:09:18.488483941 +0000 UTC m=+0.065127911 container exec_died 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, distribution-scope=public, io.openshift.tags=Ceph keepalived, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, release=1793, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=)
Jan 31 09:09:18 compute-2 sudo[336716]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:09:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:18.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:09:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:09:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:18.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:09:18 compute-2 ceph-mon[77282]: pgmap v4130: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 12 KiB/s wr, 50 op/s
Jan 31 09:09:18 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:18 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:19 compute-2 sudo[337069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:19 compute-2 sudo[337069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:19 compute-2 sudo[337069]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:19 compute-2 sudo[337094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:09:19 compute-2 sudo[337094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:19 compute-2 sudo[337094]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:19 compute-2 sudo[337119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:19 compute-2 sudo[337119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:19 compute-2 sudo[337119]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:19 compute-2 sudo[337144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:09:19 compute-2 sudo[337144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:19 compute-2 sudo[337144]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:19 compute-2 sudo[337200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:19 compute-2 sudo[337200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:19 compute-2 sudo[337200]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:19 compute-2 sudo[337225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:09:19 compute-2 sudo[337225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:19 compute-2 sudo[337225]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:19 compute-2 sudo[337250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:19 compute-2 sudo[337250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:19 compute-2 sudo[337250]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:19 compute-2 sudo[337275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 31 09:09:19 compute-2 sudo[337275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:19 compute-2 sudo[337275]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:19 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:19 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:19 compute-2 ceph-mon[77282]: pgmap v4131: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:09:19 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:19 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:20 compute-2 nova_compute[226829]: 2026-01-31 09:09:20.366 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:20 compute-2 nova_compute[226829]: 2026-01-31 09:09:20.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:09:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:20.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:09:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:20.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:09:21 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:09:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:09:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:09:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:09:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:09:21 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:09:21 compute-2 nova_compute[226829]: 2026-01-31 09:09:21.788 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:22 compute-2 nova_compute[226829]: 2026-01-31 09:09:22.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:09:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 31 09:09:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:22.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 31 09:09:22 compute-2 ceph-mon[77282]: pgmap v4132: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:09:22 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4073190936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:09:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:22.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:23 compute-2 nova_compute[226829]: 2026-01-31 09:09:23.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:09:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2880264468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:09:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:09:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:24.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:09:24 compute-2 ceph-mon[77282]: pgmap v4133: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:09:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:24.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:25 compute-2 nova_compute[226829]: 2026-01-31 09:09:25.366 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:09:26 compute-2 ceph-mon[77282]: pgmap v4134: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 31 09:09:26 compute-2 nova_compute[226829]: 2026-01-31 09:09:26.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:09:26 compute-2 nova_compute[226829]: 2026-01-31 09:09:26.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:09:26 compute-2 nova_compute[226829]: 2026-01-31 09:09:26.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:09:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:09:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:26.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:09:26 compute-2 nova_compute[226829]: 2026-01-31 09:09:26.791 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:26.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:27 compute-2 nova_compute[226829]: 2026-01-31 09:09:27.160 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "refresh_cache-9a579d21-79af-418a-a1d6-756329428431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:09:27 compute-2 nova_compute[226829]: 2026-01-31 09:09:27.160 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquired lock "refresh_cache-9a579d21-79af-418a-a1d6-756329428431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:09:27 compute-2 nova_compute[226829]: 2026-01-31 09:09:27.160 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 9a579d21-79af-418a-a1d6-756329428431] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 09:09:27 compute-2 nova_compute[226829]: 2026-01-31 09:09:27.161 226833 DEBUG nova.objects.instance [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9a579d21-79af-418a-a1d6-756329428431 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:09:27 compute-2 sudo[337322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:27 compute-2 sudo[337322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:27 compute-2 sudo[337322]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:27 compute-2 ovn_controller[133834]: 2026-01-31T09:09:27Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cc:8f:3e 10.100.0.8
Jan 31 09:09:27 compute-2 sudo[337347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:09:27 compute-2 sudo[337347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:27 compute-2 sudo[337347]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:28 compute-2 sudo[337373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:28 compute-2 sudo[337373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:28 compute-2 sudo[337373]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:28 compute-2 sudo[337398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:28 compute-2 sudo[337398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:28 compute-2 sudo[337398]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:28.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:28 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:28 compute-2 ceph-mon[77282]: pgmap v4135: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 13 KiB/s wr, 53 op/s
Jan 31 09:09:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:28.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:29 compute-2 nova_compute[226829]: 2026-01-31 09:09:29.169 226833 DEBUG nova.network.neutron [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 9a579d21-79af-418a-a1d6-756329428431] Updating instance_info_cache with network_info: [{"id": "5ca89d89-0532-424d-9005-27c4b82e9793", "address": "fa:16:3e:cc:8f:3e", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca89d89-05", "ovs_interfaceid": "5ca89d89-0532-424d-9005-27c4b82e9793", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:09:29 compute-2 nova_compute[226829]: 2026-01-31 09:09:29.196 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Releasing lock "refresh_cache-9a579d21-79af-418a-a1d6-756329428431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:09:29 compute-2 nova_compute[226829]: 2026-01-31 09:09:29.196 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 9a579d21-79af-418a-a1d6-756329428431] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 09:09:29 compute-2 nova_compute[226829]: 2026-01-31 09:09:29.197 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:09:29 compute-2 nova_compute[226829]: 2026-01-31 09:09:29.197 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:09:29 compute-2 nova_compute[226829]: 2026-01-31 09:09:29.217 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:09:29 compute-2 nova_compute[226829]: 2026-01-31 09:09:29.218 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:09:29 compute-2 nova_compute[226829]: 2026-01-31 09:09:29.218 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:09:29 compute-2 nova_compute[226829]: 2026-01-31 09:09:29.219 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:09:29 compute-2 nova_compute[226829]: 2026-01-31 09:09:29.219 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:09:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:09:29 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2483571695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:09:29 compute-2 nova_compute[226829]: 2026-01-31 09:09:29.673 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:09:29 compute-2 nova_compute[226829]: 2026-01-31 09:09:29.745 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000d9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 09:09:29 compute-2 nova_compute[226829]: 2026-01-31 09:09:29.745 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000d9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 09:09:29 compute-2 ceph-mon[77282]: pgmap v4136: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.8 KiB/s wr, 55 op/s
Jan 31 09:09:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2483571695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:09:29 compute-2 nova_compute[226829]: 2026-01-31 09:09:29.894 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:09:29 compute-2 nova_compute[226829]: 2026-01-31 09:09:29.895 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3869MB free_disk=20.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:09:29 compute-2 nova_compute[226829]: 2026-01-31 09:09:29.896 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:09:29 compute-2 nova_compute[226829]: 2026-01-31 09:09:29.896 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:09:30 compute-2 nova_compute[226829]: 2026-01-31 09:09:30.027 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 9a579d21-79af-418a-a1d6-756329428431 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 09:09:30 compute-2 nova_compute[226829]: 2026-01-31 09:09:30.027 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:09:30 compute-2 nova_compute[226829]: 2026-01-31 09:09:30.028 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:09:30 compute-2 nova_compute[226829]: 2026-01-31 09:09:30.202 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:09:30 compute-2 nova_compute[226829]: 2026-01-31 09:09:30.369 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:30.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:09:30 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/164479365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:09:30 compute-2 nova_compute[226829]: 2026-01-31 09:09:30.626 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:09:30 compute-2 nova_compute[226829]: 2026-01-31 09:09:30.630 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:09:30 compute-2 nova_compute[226829]: 2026-01-31 09:09:30.649 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:09:30 compute-2 nova_compute[226829]: 2026-01-31 09:09:30.677 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:09:30 compute-2 nova_compute[226829]: 2026-01-31 09:09:30.678 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:09:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/349455638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:09:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/164479365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:09:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3723041403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:09:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:30.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:09:31 compute-2 nova_compute[226829]: 2026-01-31 09:09:31.793 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:31 compute-2 ceph-mon[77282]: pgmap v4137: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 528 KiB/s rd, 13 KiB/s wr, 44 op/s
Jan 31 09:09:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:32.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:09:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:32.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:09:33 compute-2 podman[337470]: 2026-01-31 09:09:33.222836997 +0000 UTC m=+0.111791600 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 31 09:09:33 compute-2 nova_compute[226829]: 2026-01-31 09:09:33.969 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:09:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:34.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:34 compute-2 ceph-mon[77282]: pgmap v4138: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 529 KiB/s rd, 22 KiB/s wr, 44 op/s
Jan 31 09:09:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:34.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:35 compute-2 nova_compute[226829]: 2026-01-31 09:09:35.371 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:36 compute-2 ceph-mon[77282]: pgmap v4139: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 529 KiB/s rd, 23 KiB/s wr, 44 op/s
Jan 31 09:09:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:09:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:36.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:36 compute-2 nova_compute[226829]: 2026-01-31 09:09:36.800 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:36.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:38 compute-2 ceph-mon[77282]: pgmap v4140: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 527 KiB/s rd, 23 KiB/s wr, 44 op/s
Jan 31 09:09:38 compute-2 podman[337499]: 2026-01-31 09:09:38.164496938 +0000 UTC m=+0.047477011 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 09:09:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:09:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:38.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:09:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:38.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:39 compute-2 ceph-mon[77282]: pgmap v4141: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 406 KiB/s rd, 22 KiB/s wr, 27 op/s
Jan 31 09:09:40 compute-2 nova_compute[226829]: 2026-01-31 09:09:40.374 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:09:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:40.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:09:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:40.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:09:41 compute-2 nova_compute[226829]: 2026-01-31 09:09:41.805 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:41 compute-2 ceph-mon[77282]: pgmap v4142: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 143 KiB/s rd, 21 KiB/s wr, 11 op/s
Jan 31 09:09:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:42.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:42 compute-2 ovn_controller[133834]: 2026-01-31T09:09:42Z|00863|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Jan 31 09:09:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:09:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:42.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:09:44 compute-2 nova_compute[226829]: 2026-01-31 09:09:44.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:09:44 compute-2 nova_compute[226829]: 2026-01-31 09:09:44.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:09:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:44.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:44 compute-2 ceph-mon[77282]: pgmap v4143: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 341 B/s rd, 13 KiB/s wr, 1 op/s
Jan 31 09:09:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:44.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:45 compute-2 nova_compute[226829]: 2026-01-31 09:09:45.374 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:45 compute-2 ceph-mon[77282]: pgmap v4144: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.3 KiB/s wr, 0 op/s
Jan 31 09:09:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:09:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:46.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:46 compute-2 nova_compute[226829]: 2026-01-31 09:09:46.807 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:46.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:48 compute-2 sudo[337524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:48 compute-2 sudo[337524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:48 compute-2 sudo[337524]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:48 compute-2 sudo[337549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:48 compute-2 sudo[337549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:48 compute-2 sudo[337549]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:09:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:48.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:09:48 compute-2 ceph-mon[77282]: pgmap v4145: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 KiB/s wr, 0 op/s
Jan 31 09:09:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.003000081s ======
Jan 31 09:09:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:48.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000081s
Jan 31 09:09:49 compute-2 nova_compute[226829]: 2026-01-31 09:09:49.904 226833 DEBUG nova.compute.manager [req-b2f4fcff-5bb6-4297-8f45-d7669eb5717a req-a131575b-aa66-4d3f-89b5-19a8fe9493ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Received event network-changed-5ca89d89-0532-424d-9005-27c4b82e9793 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:09:49 compute-2 nova_compute[226829]: 2026-01-31 09:09:49.905 226833 DEBUG nova.compute.manager [req-b2f4fcff-5bb6-4297-8f45-d7669eb5717a req-a131575b-aa66-4d3f-89b5-19a8fe9493ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Refreshing instance network info cache due to event network-changed-5ca89d89-0532-424d-9005-27c4b82e9793. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:09:49 compute-2 nova_compute[226829]: 2026-01-31 09:09:49.905 226833 DEBUG oslo_concurrency.lockutils [req-b2f4fcff-5bb6-4297-8f45-d7669eb5717a req-a131575b-aa66-4d3f-89b5-19a8fe9493ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-9a579d21-79af-418a-a1d6-756329428431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:09:49 compute-2 nova_compute[226829]: 2026-01-31 09:09:49.905 226833 DEBUG oslo_concurrency.lockutils [req-b2f4fcff-5bb6-4297-8f45-d7669eb5717a req-a131575b-aa66-4d3f-89b5-19a8fe9493ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-9a579d21-79af-418a-a1d6-756329428431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:09:49 compute-2 nova_compute[226829]: 2026-01-31 09:09:49.905 226833 DEBUG nova.network.neutron [req-b2f4fcff-5bb6-4297-8f45-d7669eb5717a req-a131575b-aa66-4d3f-89b5-19a8fe9493ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Refreshing network info cache for port 5ca89d89-0532-424d-9005-27c4b82e9793 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:09:50 compute-2 ceph-mon[77282]: pgmap v4146: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 57 KiB/s rd, 7.7 KiB/s wr, 3 op/s
Jan 31 09:09:50 compute-2 nova_compute[226829]: 2026-01-31 09:09:50.030 226833 DEBUG oslo_concurrency.lockutils [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "9a579d21-79af-418a-a1d6-756329428431" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:09:50 compute-2 nova_compute[226829]: 2026-01-31 09:09:50.030 226833 DEBUG oslo_concurrency.lockutils [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:09:50 compute-2 nova_compute[226829]: 2026-01-31 09:09:50.031 226833 DEBUG oslo_concurrency.lockutils [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "9a579d21-79af-418a-a1d6-756329428431-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:09:50 compute-2 nova_compute[226829]: 2026-01-31 09:09:50.031 226833 DEBUG oslo_concurrency.lockutils [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:09:50 compute-2 nova_compute[226829]: 2026-01-31 09:09:50.031 226833 DEBUG oslo_concurrency.lockutils [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:09:50 compute-2 nova_compute[226829]: 2026-01-31 09:09:50.033 226833 INFO nova.compute.manager [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Terminating instance
Jan 31 09:09:50 compute-2 nova_compute[226829]: 2026-01-31 09:09:50.034 226833 DEBUG nova.compute.manager [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 09:09:50 compute-2 kernel: tap5ca89d89-05 (unregistering): left promiscuous mode
Jan 31 09:09:50 compute-2 NetworkManager[48999]: <info>  [1769850590.1483] device (tap5ca89d89-05): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 09:09:50 compute-2 ovn_controller[133834]: 2026-01-31T09:09:50Z|00864|binding|INFO|Releasing lport 5ca89d89-0532-424d-9005-27c4b82e9793 from this chassis (sb_readonly=0)
Jan 31 09:09:50 compute-2 ovn_controller[133834]: 2026-01-31T09:09:50Z|00865|binding|INFO|Setting lport 5ca89d89-0532-424d-9005-27c4b82e9793 down in Southbound
Jan 31 09:09:50 compute-2 nova_compute[226829]: 2026-01-31 09:09:50.174 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:50 compute-2 ovn_controller[133834]: 2026-01-31T09:09:50Z|00866|binding|INFO|Removing iface tap5ca89d89-05 ovn-installed in OVS
Jan 31 09:09:50 compute-2 nova_compute[226829]: 2026-01-31 09:09:50.183 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:50.187 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:8f:3e 10.100.0.8'], port_security=['fa:16:3e:cc:8f:3e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '9a579d21-79af-418a-a1d6-756329428431', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '496e06c7521f45c994e6426c4313acea', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'be46c5ac-d5d7-434f-9c7a-f07c0d88092e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=62ab62d6-b781-4309-8775-90b43197869a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=5ca89d89-0532-424d-9005-27c4b82e9793) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:09:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:50.192 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 5ca89d89-0532-424d-9005-27c4b82e9793 in datapath 1d742d07-ac6a-4870-9712-15a33a8a1e71 unbound from our chassis
Jan 31 09:09:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:50.194 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1d742d07-ac6a-4870-9712-15a33a8a1e71, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 09:09:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:50.198 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[87f86bcb-34d5-469e-ac19-50a06ea9a4bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:09:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:50.199 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71 namespace which is not needed anymore
Jan 31 09:09:50 compute-2 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000d9.scope: Deactivated successfully.
Jan 31 09:09:50 compute-2 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000d9.scope: Consumed 15.025s CPU time.
Jan 31 09:09:50 compute-2 systemd-machined[195142]: Machine qemu-97-instance-000000d9 terminated.
Jan 31 09:09:50 compute-2 nova_compute[226829]: 2026-01-31 09:09:50.275 226833 INFO nova.virt.libvirt.driver [-] [instance: 9a579d21-79af-418a-a1d6-756329428431] Instance destroyed successfully.
Jan 31 09:09:50 compute-2 nova_compute[226829]: 2026-01-31 09:09:50.276 226833 DEBUG nova.objects.instance [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lazy-loading 'resources' on Instance uuid 9a579d21-79af-418a-a1d6-756329428431 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:09:50 compute-2 nova_compute[226829]: 2026-01-31 09:09:50.302 226833 DEBUG nova.virt.libvirt.vif [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T09:08:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-573915944',display_name='tempest-TestShelveInstance-server-573915944',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testshelveinstance-server-573915944',id=217,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHS4QCUVvfZkmnlZau8lua//iz8/NvQiFwIgU+DK5hCJmShHpaL0dY344ffU+61pAJtdb//6v7ovdut7evpHBD6pN1yMnlHNxMsL0uM0TRO0xjCGQzhCeySiVn/Uz1Je8Q==',key_name='tempest-TestShelveInstance-385640846',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:09:14Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='496e06c7521f45c994e6426c4313acea',ramdisk_id='',reservation_id='r-1rp3uu0q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestShelveInstance-1485729988',owner_user_name='tempest-TestShelveInstance-1485729988-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:09:14Z,user_data=None,user_id='4883c0d4a7f54a6898eba5bfdbb41266',uuid=9a579d21-79af-418a-a1d6-756329428431,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5ca89d89-0532-424d-9005-27c4b82e9793", "address": "fa:16:3e:cc:8f:3e", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca89d89-05", "ovs_interfaceid": "5ca89d89-0532-424d-9005-27c4b82e9793", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 09:09:50 compute-2 nova_compute[226829]: 2026-01-31 09:09:50.302 226833 DEBUG nova.network.os_vif_util [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Converting VIF {"id": "5ca89d89-0532-424d-9005-27c4b82e9793", "address": "fa:16:3e:cc:8f:3e", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca89d89-05", "ovs_interfaceid": "5ca89d89-0532-424d-9005-27c4b82e9793", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:09:50 compute-2 nova_compute[226829]: 2026-01-31 09:09:50.303 226833 DEBUG nova.network.os_vif_util [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cc:8f:3e,bridge_name='br-int',has_traffic_filtering=True,id=5ca89d89-0532-424d-9005-27c4b82e9793,network=Network(1d742d07-ac6a-4870-9712-15a33a8a1e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ca89d89-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:09:50 compute-2 nova_compute[226829]: 2026-01-31 09:09:50.303 226833 DEBUG os_vif [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cc:8f:3e,bridge_name='br-int',has_traffic_filtering=True,id=5ca89d89-0532-424d-9005-27c4b82e9793,network=Network(1d742d07-ac6a-4870-9712-15a33a8a1e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ca89d89-05') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 09:09:50 compute-2 nova_compute[226829]: 2026-01-31 09:09:50.305 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:50 compute-2 nova_compute[226829]: 2026-01-31 09:09:50.306 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ca89d89-05, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:09:50 compute-2 nova_compute[226829]: 2026-01-31 09:09:50.307 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:50 compute-2 nova_compute[226829]: 2026-01-31 09:09:50.308 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:50 compute-2 nova_compute[226829]: 2026-01-31 09:09:50.314 226833 INFO os_vif [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cc:8f:3e,bridge_name='br-int',has_traffic_filtering=True,id=5ca89d89-0532-424d-9005-27c4b82e9793,network=Network(1d742d07-ac6a-4870-9712-15a33a8a1e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ca89d89-05')
Jan 31 09:09:50 compute-2 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[336582]: [NOTICE]   (336586) : haproxy version is 2.8.14-c23fe91
Jan 31 09:09:50 compute-2 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[336582]: [NOTICE]   (336586) : path to executable is /usr/sbin/haproxy
Jan 31 09:09:50 compute-2 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[336582]: [WARNING]  (336586) : Exiting Master process...
Jan 31 09:09:50 compute-2 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[336582]: [WARNING]  (336586) : Exiting Master process...
Jan 31 09:09:50 compute-2 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[336582]: [ALERT]    (336586) : Current worker (336588) exited with code 143 (Terminated)
Jan 31 09:09:50 compute-2 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[336582]: [WARNING]  (336586) : All workers exited. Exiting... (0)
Jan 31 09:09:50 compute-2 systemd[1]: libpod-ce6e6c10fa2453d97dd9cd4977bc3ffea5e9b615fe88e4cc6f45cd461b09ff3d.scope: Deactivated successfully.
Jan 31 09:09:50 compute-2 podman[337609]: 2026-01-31 09:09:50.361952429 +0000 UTC m=+0.057110824 container died ce6e6c10fa2453d97dd9cd4977bc3ffea5e9b615fe88e4cc6f45cd461b09ff3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:09:50 compute-2 nova_compute[226829]: 2026-01-31 09:09:50.376 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:50 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ce6e6c10fa2453d97dd9cd4977bc3ffea5e9b615fe88e4cc6f45cd461b09ff3d-userdata-shm.mount: Deactivated successfully.
Jan 31 09:09:50 compute-2 systemd[1]: var-lib-containers-storage-overlay-2082f4d30d11648f9e80544934e44807ecf080bfdcec9229bc55d5df123bbbde-merged.mount: Deactivated successfully.
Jan 31 09:09:50 compute-2 podman[337609]: 2026-01-31 09:09:50.445276795 +0000 UTC m=+0.140435150 container cleanup ce6e6c10fa2453d97dd9cd4977bc3ffea5e9b615fe88e4cc6f45cd461b09ff3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:09:50 compute-2 systemd[1]: libpod-conmon-ce6e6c10fa2453d97dd9cd4977bc3ffea5e9b615fe88e4cc6f45cd461b09ff3d.scope: Deactivated successfully.
Jan 31 09:09:50 compute-2 podman[337655]: 2026-01-31 09:09:50.514921279 +0000 UTC m=+0.052564241 container remove ce6e6c10fa2453d97dd9cd4977bc3ffea5e9b615fe88e4cc6f45cd461b09ff3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 09:09:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:50.519 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a32c3779-42c2-43d0-bf37-ce2c21f86188]: (4, ('Sat Jan 31 09:09:50 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71 (ce6e6c10fa2453d97dd9cd4977bc3ffea5e9b615fe88e4cc6f45cd461b09ff3d)\nce6e6c10fa2453d97dd9cd4977bc3ffea5e9b615fe88e4cc6f45cd461b09ff3d\nSat Jan 31 09:09:50 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71 (ce6e6c10fa2453d97dd9cd4977bc3ffea5e9b615fe88e4cc6f45cd461b09ff3d)\nce6e6c10fa2453d97dd9cd4977bc3ffea5e9b615fe88e4cc6f45cd461b09ff3d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:09:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:50.521 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[26d66f82-93f2-40f9-b95f-3c62e44679b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:09:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:50.522 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d742d07-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:09:50 compute-2 nova_compute[226829]: 2026-01-31 09:09:50.524 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:50 compute-2 kernel: tap1d742d07-a0: left promiscuous mode
Jan 31 09:09:50 compute-2 nova_compute[226829]: 2026-01-31 09:09:50.528 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:50.531 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[09b73968-6ad5-4a20-b58e-7fd8c3b693dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:09:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:50.551 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8054974a-c05d-4f3e-ba6b-886b301a0ab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:09:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:50.553 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[75a544c0-9f47-4119-94d7-966eab1af411]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:09:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:50.566 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[39bee53f-32e8-4f32-9d76-1fea41949686]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1080533, 'reachable_time': 25463, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337670, 'error': None, 'target': 'ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:09:50 compute-2 systemd[1]: run-netns-ovnmeta\x2d1d742d07\x2dac6a\x2d4870\x2d9712\x2d15a33a8a1e71.mount: Deactivated successfully.
Jan 31 09:09:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:50.572 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 09:09:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:50.573 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[1070b794-2675-4165-9d55-8529d3aca808]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:09:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:09:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:50.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:09:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:50.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:09:51 compute-2 nova_compute[226829]: 2026-01-31 09:09:51.370 226833 INFO nova.virt.libvirt.driver [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Deleting instance files /var/lib/nova/instances/9a579d21-79af-418a-a1d6-756329428431_del
Jan 31 09:09:51 compute-2 nova_compute[226829]: 2026-01-31 09:09:51.371 226833 INFO nova.virt.libvirt.driver [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Deletion of /var/lib/nova/instances/9a579d21-79af-418a-a1d6-756329428431_del complete
Jan 31 09:09:51 compute-2 nova_compute[226829]: 2026-01-31 09:09:51.916 226833 INFO nova.compute.manager [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Took 1.88 seconds to destroy the instance on the hypervisor.
Jan 31 09:09:51 compute-2 nova_compute[226829]: 2026-01-31 09:09:51.916 226833 DEBUG oslo.service.loopingcall [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 09:09:51 compute-2 nova_compute[226829]: 2026-01-31 09:09:51.916 226833 DEBUG nova.compute.manager [-] [instance: 9a579d21-79af-418a-a1d6-756329428431] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 09:09:51 compute-2 nova_compute[226829]: 2026-01-31 09:09:51.917 226833 DEBUG nova.network.neutron [-] [instance: 9a579d21-79af-418a-a1d6-756329428431] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 09:09:52 compute-2 nova_compute[226829]: 2026-01-31 09:09:52.109 226833 DEBUG nova.compute.manager [req-1689fa8f-9e42-43df-a663-dc755b68abe2 req-e2064d68-41fe-43d8-adab-00c6179a9853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Received event network-vif-unplugged-5ca89d89-0532-424d-9005-27c4b82e9793 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:09:52 compute-2 nova_compute[226829]: 2026-01-31 09:09:52.110 226833 DEBUG oslo_concurrency.lockutils [req-1689fa8f-9e42-43df-a663-dc755b68abe2 req-e2064d68-41fe-43d8-adab-00c6179a9853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9a579d21-79af-418a-a1d6-756329428431-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:09:52 compute-2 nova_compute[226829]: 2026-01-31 09:09:52.110 226833 DEBUG oslo_concurrency.lockutils [req-1689fa8f-9e42-43df-a663-dc755b68abe2 req-e2064d68-41fe-43d8-adab-00c6179a9853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:09:52 compute-2 nova_compute[226829]: 2026-01-31 09:09:52.110 226833 DEBUG oslo_concurrency.lockutils [req-1689fa8f-9e42-43df-a663-dc755b68abe2 req-e2064d68-41fe-43d8-adab-00c6179a9853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:09:52 compute-2 nova_compute[226829]: 2026-01-31 09:09:52.111 226833 DEBUG nova.compute.manager [req-1689fa8f-9e42-43df-a663-dc755b68abe2 req-e2064d68-41fe-43d8-adab-00c6179a9853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] No waiting events found dispatching network-vif-unplugged-5ca89d89-0532-424d-9005-27c4b82e9793 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:09:52 compute-2 nova_compute[226829]: 2026-01-31 09:09:52.111 226833 DEBUG nova.compute.manager [req-1689fa8f-9e42-43df-a663-dc755b68abe2 req-e2064d68-41fe-43d8-adab-00c6179a9853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Received event network-vif-unplugged-5ca89d89-0532-424d-9005-27c4b82e9793 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 09:09:52 compute-2 nova_compute[226829]: 2026-01-31 09:09:52.111 226833 DEBUG nova.compute.manager [req-1689fa8f-9e42-43df-a663-dc755b68abe2 req-e2064d68-41fe-43d8-adab-00c6179a9853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Received event network-vif-plugged-5ca89d89-0532-424d-9005-27c4b82e9793 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:09:52 compute-2 nova_compute[226829]: 2026-01-31 09:09:52.111 226833 DEBUG oslo_concurrency.lockutils [req-1689fa8f-9e42-43df-a663-dc755b68abe2 req-e2064d68-41fe-43d8-adab-00c6179a9853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9a579d21-79af-418a-a1d6-756329428431-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:09:52 compute-2 nova_compute[226829]: 2026-01-31 09:09:52.111 226833 DEBUG oslo_concurrency.lockutils [req-1689fa8f-9e42-43df-a663-dc755b68abe2 req-e2064d68-41fe-43d8-adab-00c6179a9853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:09:52 compute-2 nova_compute[226829]: 2026-01-31 09:09:52.112 226833 DEBUG oslo_concurrency.lockutils [req-1689fa8f-9e42-43df-a663-dc755b68abe2 req-e2064d68-41fe-43d8-adab-00c6179a9853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:09:52 compute-2 nova_compute[226829]: 2026-01-31 09:09:52.112 226833 DEBUG nova.compute.manager [req-1689fa8f-9e42-43df-a663-dc755b68abe2 req-e2064d68-41fe-43d8-adab-00c6179a9853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] No waiting events found dispatching network-vif-plugged-5ca89d89-0532-424d-9005-27c4b82e9793 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:09:52 compute-2 nova_compute[226829]: 2026-01-31 09:09:52.112 226833 WARNING nova.compute.manager [req-1689fa8f-9e42-43df-a663-dc755b68abe2 req-e2064d68-41fe-43d8-adab-00c6179a9853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Received unexpected event network-vif-plugged-5ca89d89-0532-424d-9005-27c4b82e9793 for instance with vm_state active and task_state deleting.
Jan 31 09:09:52 compute-2 nova_compute[226829]: 2026-01-31 09:09:52.259 226833 DEBUG nova.network.neutron [req-b2f4fcff-5bb6-4297-8f45-d7669eb5717a req-a131575b-aa66-4d3f-89b5-19a8fe9493ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Updated VIF entry in instance network info cache for port 5ca89d89-0532-424d-9005-27c4b82e9793. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:09:52 compute-2 nova_compute[226829]: 2026-01-31 09:09:52.260 226833 DEBUG nova.network.neutron [req-b2f4fcff-5bb6-4297-8f45-d7669eb5717a req-a131575b-aa66-4d3f-89b5-19a8fe9493ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Updating instance_info_cache with network_info: [{"id": "5ca89d89-0532-424d-9005-27c4b82e9793", "address": "fa:16:3e:cc:8f:3e", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca89d89-05", "ovs_interfaceid": "5ca89d89-0532-424d-9005-27c4b82e9793", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:09:52 compute-2 nova_compute[226829]: 2026-01-31 09:09:52.298 226833 DEBUG oslo_concurrency.lockutils [req-b2f4fcff-5bb6-4297-8f45-d7669eb5717a req-a131575b-aa66-4d3f-89b5-19a8fe9493ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-9a579d21-79af-418a-a1d6-756329428431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:09:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:09:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:52.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:09:52 compute-2 ceph-mon[77282]: pgmap v4147: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 213 KiB/s rd, 12 KiB/s wr, 8 op/s
Jan 31 09:09:52 compute-2 nova_compute[226829]: 2026-01-31 09:09:52.840 226833 DEBUG nova.network.neutron [-] [instance: 9a579d21-79af-418a-a1d6-756329428431] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:09:52 compute-2 nova_compute[226829]: 2026-01-31 09:09:52.888 226833 INFO nova.compute.manager [-] [instance: 9a579d21-79af-418a-a1d6-756329428431] Took 0.97 seconds to deallocate network for instance.
Jan 31 09:09:52 compute-2 nova_compute[226829]: 2026-01-31 09:09:52.926 226833 DEBUG nova.compute.manager [req-6d06649b-a06d-4e5a-9914-9ca3e5e33e36 req-2e7a0573-0ed4-452f-bd13-936445d40731 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Received event network-vif-deleted-5ca89d89-0532-424d-9005-27c4b82e9793 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:09:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:52.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:53 compute-2 nova_compute[226829]: 2026-01-31 09:09:53.193 226833 INFO nova.compute.manager [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Took 0.30 seconds to detach 1 volumes for instance.
Jan 31 09:09:53 compute-2 nova_compute[226829]: 2026-01-31 09:09:53.194 226833 DEBUG nova.compute.manager [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Deleting volume: cf361715-504e-40e5-87bf-5f49366009aa _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Jan 31 09:09:53 compute-2 nova_compute[226829]: 2026-01-31 09:09:53.438 226833 DEBUG oslo_concurrency.lockutils [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:09:53 compute-2 nova_compute[226829]: 2026-01-31 09:09:53.439 226833 DEBUG oslo_concurrency.lockutils [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:09:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 09:09:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/669566419' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:09:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 09:09:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/669566419' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:09:53 compute-2 nova_compute[226829]: 2026-01-31 09:09:53.478 226833 DEBUG nova.scheduler.client.report [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Refreshing inventories for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 09:09:53 compute-2 nova_compute[226829]: 2026-01-31 09:09:53.596 226833 DEBUG nova.scheduler.client.report [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Updating ProviderTree inventory for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 09:09:53 compute-2 nova_compute[226829]: 2026-01-31 09:09:53.596 226833 DEBUG nova.compute.provider_tree [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Updating inventory in ProviderTree for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 09:09:53 compute-2 nova_compute[226829]: 2026-01-31 09:09:53.620 226833 DEBUG nova.scheduler.client.report [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Refreshing aggregate associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 09:09:53 compute-2 nova_compute[226829]: 2026-01-31 09:09:53.650 226833 DEBUG nova.scheduler.client.report [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Refreshing trait associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, traits: COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 09:09:53 compute-2 nova_compute[226829]: 2026-01-31 09:09:53.699 226833 DEBUG oslo_concurrency.processutils [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:09:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/669566419' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:09:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/669566419' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:09:53 compute-2 ceph-mon[77282]: pgmap v4148: 305 pgs: 305 active+clean; 201 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 213 KiB/s rd, 11 KiB/s wr, 8 op/s
Jan 31 09:09:54 compute-2 nova_compute[226829]: 2026-01-31 09:09:54.139 226833 DEBUG oslo_concurrency.processutils [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:09:54 compute-2 nova_compute[226829]: 2026-01-31 09:09:54.146 226833 DEBUG nova.compute.provider_tree [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:09:54 compute-2 nova_compute[226829]: 2026-01-31 09:09:54.210 226833 DEBUG nova.scheduler.client.report [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:09:54 compute-2 nova_compute[226829]: 2026-01-31 09:09:54.249 226833 DEBUG oslo_concurrency.lockutils [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:09:54 compute-2 nova_compute[226829]: 2026-01-31 09:09:54.291 226833 INFO nova.scheduler.client.report [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Deleted allocations for instance 9a579d21-79af-418a-a1d6-756329428431
Jan 31 09:09:54 compute-2 nova_compute[226829]: 2026-01-31 09:09:54.409 226833 DEBUG oslo_concurrency.lockutils [None req-73c48c7f-83ce-42e5-b149-0beb9de2c0fa 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.379s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:09:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:54.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:54.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1728827525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:09:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2053799275' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:09:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2053799275' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:09:55 compute-2 nova_compute[226829]: 2026-01-31 09:09:55.309 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:55 compute-2 nova_compute[226829]: 2026-01-31 09:09:55.379 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:56 compute-2 ceph-mon[77282]: pgmap v4149: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 224 KiB/s rd, 10 KiB/s wr, 25 op/s
Jan 31 09:09:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:09:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:56.426 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=109, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=108) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:09:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:56.427 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:09:56 compute-2 nova_compute[226829]: 2026-01-31 09:09:56.463 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:56.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:56.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:58.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:58 compute-2 ceph-mon[77282]: pgmap v4150: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 224 KiB/s rd, 10 KiB/s wr, 25 op/s
Jan 31 09:09:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:09:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:09:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:58.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:09:59 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:09:59.430 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '109'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:09:59 compute-2 ceph-mon[77282]: pgmap v4151: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 233 KiB/s rd, 10 KiB/s wr, 37 op/s
Jan 31 09:10:00 compute-2 nova_compute[226829]: 2026-01-31 09:10:00.313 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:00 compute-2 nova_compute[226829]: 2026-01-31 09:10:00.382 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:00 compute-2 nova_compute[226829]: 2026-01-31 09:10:00.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:10:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:00.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:00.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:01 compute-2 ceph-mon[77282]: overall HEALTH_OK
Jan 31 09:10:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:10:02 compute-2 ceph-mon[77282]: pgmap v4152: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 176 KiB/s rd, 5.2 KiB/s wr, 33 op/s
Jan 31 09:10:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:10:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:02.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:10:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:02.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:03 compute-2 nova_compute[226829]: 2026-01-31 09:10:03.680 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:03 compute-2 nova_compute[226829]: 2026-01-31 09:10:03.711 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:03 compute-2 ceph-mon[77282]: pgmap v4153: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 31 09:10:04 compute-2 podman[337701]: 2026-01-31 09:10:04.193764049 +0000 UTC m=+0.078527636 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 09:10:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:04.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:04.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:05 compute-2 nova_compute[226829]: 2026-01-31 09:10:05.275 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769850590.2726736, 9a579d21-79af-418a-a1d6-756329428431 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:10:05 compute-2 nova_compute[226829]: 2026-01-31 09:10:05.276 226833 INFO nova.compute.manager [-] [instance: 9a579d21-79af-418a-a1d6-756329428431] VM Stopped (Lifecycle Event)
Jan 31 09:10:05 compute-2 nova_compute[226829]: 2026-01-31 09:10:05.316 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:05 compute-2 nova_compute[226829]: 2026-01-31 09:10:05.385 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:05 compute-2 ceph-mon[77282]: pgmap v4154: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 31 09:10:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:10:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:06.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:10:06.954 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:10:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:10:06.955 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:10:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:10:06.955 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:10:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:10:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:06.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:10:07 compute-2 nova_compute[226829]: 2026-01-31 09:10:07.178 226833 DEBUG nova.compute.manager [None req-96ecf3c0-5d52-417a-9b78-5a15474356a5 - - - - - -] [instance: 9a579d21-79af-418a-a1d6-756329428431] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:10:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:10:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:08.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:10:08 compute-2 sudo[337731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:10:08 compute-2 sudo[337731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:08 compute-2 sudo[337731]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:08 compute-2 ceph-mon[77282]: pgmap v4155: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.3 KiB/s rd, 0 B/s wr, 11 op/s
Jan 31 09:10:08 compute-2 sudo[337762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:10:08 compute-2 sudo[337762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:08 compute-2 sudo[337762]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:08 compute-2 podman[337755]: 2026-01-31 09:10:08.776927981 +0000 UTC m=+0.059991182 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Jan 31 09:10:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:08.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:09 compute-2 ceph-mon[77282]: pgmap v4156: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.3 KiB/s rd, 0 B/s wr, 11 op/s
Jan 31 09:10:10 compute-2 nova_compute[226829]: 2026-01-31 09:10:10.317 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:10 compute-2 nova_compute[226829]: 2026-01-31 09:10:10.387 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:10.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:10:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:10.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:10:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:10:11 compute-2 ceph-mon[77282]: pgmap v4157: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:12.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:12.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:14 compute-2 ceph-mon[77282]: pgmap v4158: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:10:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:14.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:10:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:14.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:15 compute-2 nova_compute[226829]: 2026-01-31 09:10:15.320 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:15 compute-2 nova_compute[226829]: 2026-01-31 09:10:15.433 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:10:16 compute-2 nova_compute[226829]: 2026-01-31 09:10:16.535 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:10:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:10:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:16.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:10:16 compute-2 ceph-mon[77282]: pgmap v4159: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:10:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:17.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:10:18 compute-2 ceph-mon[77282]: pgmap v4160: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:18.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:10:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:19.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:10:20 compute-2 nova_compute[226829]: 2026-01-31 09:10:20.322 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:20 compute-2 nova_compute[226829]: 2026-01-31 09:10:20.435 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:20 compute-2 ceph-mon[77282]: pgmap v4161: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:10:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:20.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:10:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:21.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:21 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:10:22 compute-2 ceph-mon[77282]: pgmap v4162: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:22 compute-2 nova_compute[226829]: 2026-01-31 09:10:22.495 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:10:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:22.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:23.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:23 compute-2 nova_compute[226829]: 2026-01-31 09:10:23.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:10:24 compute-2 ceph-mon[77282]: pgmap v4163: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:24 compute-2 nova_compute[226829]: 2026-01-31 09:10:24.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:10:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:24.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:10:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:25.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:10:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/722512109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:10:25 compute-2 nova_compute[226829]: 2026-01-31 09:10:25.325 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:25 compute-2 nova_compute[226829]: 2026-01-31 09:10:25.480 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/644742043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:10:26 compute-2 ceph-mon[77282]: pgmap v4164: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:26 compute-2 nova_compute[226829]: 2026-01-31 09:10:26.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:10:26 compute-2 nova_compute[226829]: 2026-01-31 09:10:26.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:10:26 compute-2 nova_compute[226829]: 2026-01-31 09:10:26.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:10:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:10:26 compute-2 nova_compute[226829]: 2026-01-31 09:10:26.621 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:10:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:26.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:27.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:27 compute-2 nova_compute[226829]: 2026-01-31 09:10:27.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:10:27 compute-2 nova_compute[226829]: 2026-01-31 09:10:27.601 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:10:27 compute-2 nova_compute[226829]: 2026-01-31 09:10:27.602 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:10:27 compute-2 nova_compute[226829]: 2026-01-31 09:10:27.602 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:10:27 compute-2 nova_compute[226829]: 2026-01-31 09:10:27.602 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:10:27 compute-2 nova_compute[226829]: 2026-01-31 09:10:27.603 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:10:28 compute-2 sudo[337829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:10:28 compute-2 ceph-mon[77282]: pgmap v4165: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:28 compute-2 sudo[337829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:28 compute-2 sudo[337829]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:28 compute-2 sudo[337854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:10:28 compute-2 sudo[337854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:28 compute-2 sudo[337854]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:28 compute-2 sudo[337879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:10:28 compute-2 sudo[337879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:28 compute-2 sudo[337879]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:28 compute-2 sudo[337904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:10:28 compute-2 sudo[337904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:10:28 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3030456916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:10:28 compute-2 nova_compute[226829]: 2026-01-31 09:10:28.355 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.753s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:10:28 compute-2 nova_compute[226829]: 2026-01-31 09:10:28.505 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:10:28 compute-2 nova_compute[226829]: 2026-01-31 09:10:28.507 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4031MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:10:28 compute-2 nova_compute[226829]: 2026-01-31 09:10:28.507 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:10:28 compute-2 nova_compute[226829]: 2026-01-31 09:10:28.507 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:10:28 compute-2 sudo[337904]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:28.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:28 compute-2 sudo[337964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:10:28 compute-2 sudo[337964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:28 compute-2 sudo[337964]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:28 compute-2 sudo[337989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:10:28 compute-2 sudo[337989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:28 compute-2 sudo[337989]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:28 compute-2 sudo[338014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:10:28 compute-2 sudo[338014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:28 compute-2 sudo[338014]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:28 compute-2 sudo[338039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:10:28 compute-2 sudo[338039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:28 compute-2 sudo[338039]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:28 compute-2 sudo[338045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- inventory --format=json-pretty --filter-for-batch
Jan 31 09:10:28 compute-2 sudo[338045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:28 compute-2 nova_compute[226829]: 2026-01-31 09:10:28.850 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:10:28 compute-2 nova_compute[226829]: 2026-01-31 09:10:28.851 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:10:28 compute-2 sudo[338088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:10:28 compute-2 sudo[338088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:28 compute-2 sudo[338088]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:28 compute-2 nova_compute[226829]: 2026-01-31 09:10:28.885 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:10:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:10:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:29.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:10:29 compute-2 podman[338175]: 2026-01-31 09:10:29.148363958 +0000 UTC m=+0.024423095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:10:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3030456916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:10:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:10:29 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/398262343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:10:29 compute-2 nova_compute[226829]: 2026-01-31 09:10:29.413 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:10:29 compute-2 nova_compute[226829]: 2026-01-31 09:10:29.418 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:10:29 compute-2 podman[338175]: 2026-01-31 09:10:29.43559919 +0000 UTC m=+0.311658267 container create a2983d504d6f2f309d1438f1e622a9ef19bbd0b78c9c85a55a38e0fa072ec003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mcnulty, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:10:29 compute-2 nova_compute[226829]: 2026-01-31 09:10:29.501 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:10:29 compute-2 systemd[1]: Started libpod-conmon-a2983d504d6f2f309d1438f1e622a9ef19bbd0b78c9c85a55a38e0fa072ec003.scope.
Jan 31 09:10:29 compute-2 systemd[1]: Started libcrun container.
Jan 31 09:10:29 compute-2 nova_compute[226829]: 2026-01-31 09:10:29.781 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:10:29 compute-2 nova_compute[226829]: 2026-01-31 09:10:29.782 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:10:29 compute-2 podman[338175]: 2026-01-31 09:10:29.815719626 +0000 UTC m=+0.691778723 container init a2983d504d6f2f309d1438f1e622a9ef19bbd0b78c9c85a55a38e0fa072ec003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 09:10:29 compute-2 podman[338175]: 2026-01-31 09:10:29.823468667 +0000 UTC m=+0.699527764 container start a2983d504d6f2f309d1438f1e622a9ef19bbd0b78c9c85a55a38e0fa072ec003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:10:29 compute-2 elastic_mcnulty[338193]: 167 167
Jan 31 09:10:29 compute-2 systemd[1]: libpod-a2983d504d6f2f309d1438f1e622a9ef19bbd0b78c9c85a55a38e0fa072ec003.scope: Deactivated successfully.
Jan 31 09:10:29 compute-2 conmon[338193]: conmon a2983d504d6f2f309d14 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a2983d504d6f2f309d1438f1e622a9ef19bbd0b78c9c85a55a38e0fa072ec003.scope/container/memory.events
Jan 31 09:10:29 compute-2 podman[338175]: 2026-01-31 09:10:29.898128857 +0000 UTC m=+0.774187944 container attach a2983d504d6f2f309d1438f1e622a9ef19bbd0b78c9c85a55a38e0fa072ec003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mcnulty, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:10:29 compute-2 podman[338175]: 2026-01-31 09:10:29.898873047 +0000 UTC m=+0.774932104 container died a2983d504d6f2f309d1438f1e622a9ef19bbd0b78c9c85a55a38e0fa072ec003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 09:10:30 compute-2 systemd[1]: var-lib-containers-storage-overlay-8b5db81cd9e567a8bc88214a29c05be1cd9a9315cf6ef0e7b33b90193206f66d-merged.mount: Deactivated successfully.
Jan 31 09:10:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/398262343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:10:30 compute-2 ceph-mon[77282]: pgmap v4166: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:30 compute-2 nova_compute[226829]: 2026-01-31 09:10:30.329 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:30 compute-2 podman[338175]: 2026-01-31 09:10:30.431789549 +0000 UTC m=+1.307848606 container remove a2983d504d6f2f309d1438f1e622a9ef19bbd0b78c9c85a55a38e0fa072ec003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 09:10:30 compute-2 nova_compute[226829]: 2026-01-31 09:10:30.482 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:30 compute-2 systemd[1]: libpod-conmon-a2983d504d6f2f309d1438f1e622a9ef19bbd0b78c9c85a55a38e0fa072ec003.scope: Deactivated successfully.
Jan 31 09:10:30 compute-2 podman[338218]: 2026-01-31 09:10:30.56455536 +0000 UTC m=+0.037673506 container create b34b5c220e478b56f1879e792713acb1c0b55ed4a613cd7248a87a604937bccf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shamir, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:10:30 compute-2 systemd[1]: Started libpod-conmon-b34b5c220e478b56f1879e792713acb1c0b55ed4a613cd7248a87a604937bccf.scope.
Jan 31 09:10:30 compute-2 systemd[1]: Started libcrun container.
Jan 31 09:10:30 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b495e4a3ba5f88179a004f2038e9c93cfc8d9e3d4997b8007e472b5b8ea09197/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:10:30 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b495e4a3ba5f88179a004f2038e9c93cfc8d9e3d4997b8007e472b5b8ea09197/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:10:30 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b495e4a3ba5f88179a004f2038e9c93cfc8d9e3d4997b8007e472b5b8ea09197/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:10:30 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b495e4a3ba5f88179a004f2038e9c93cfc8d9e3d4997b8007e472b5b8ea09197/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:10:30 compute-2 podman[338218]: 2026-01-31 09:10:30.545599364 +0000 UTC m=+0.018717530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:10:30 compute-2 podman[338218]: 2026-01-31 09:10:30.647841524 +0000 UTC m=+0.120959690 container init b34b5c220e478b56f1879e792713acb1c0b55ed4a613cd7248a87a604937bccf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 09:10:30 compute-2 podman[338218]: 2026-01-31 09:10:30.652778589 +0000 UTC m=+0.125896735 container start b34b5c220e478b56f1879e792713acb1c0b55ed4a613cd7248a87a604937bccf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:10:30 compute-2 podman[338218]: 2026-01-31 09:10:30.659410379 +0000 UTC m=+0.132528545 container attach b34b5c220e478b56f1879e792713acb1c0b55ed4a613cd7248a87a604937bccf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 09:10:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:30.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:10:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:31.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:10:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]: [
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:     {
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:         "available": false,
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:         "ceph_device": false,
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:         "lsm_data": {},
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:         "lvs": [],
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:         "path": "/dev/sr0",
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:         "rejected_reasons": [
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:             "Has a FileSystem",
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:             "Insufficient space (<5GB)"
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:         ],
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:         "sys_api": {
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:             "actuators": null,
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:             "device_nodes": "sr0",
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:             "devname": "sr0",
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:             "human_readable_size": "482.00 KB",
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:             "id_bus": "ata",
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:             "model": "QEMU DVD-ROM",
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:             "nr_requests": "2",
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:             "parent": "/dev/sr0",
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:             "partitions": {},
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:             "path": "/dev/sr0",
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:             "removable": "1",
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:             "rev": "2.5+",
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:             "ro": "0",
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:             "rotational": "1",
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:             "sas_address": "",
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:             "sas_device_handle": "",
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:             "scheduler_mode": "mq-deadline",
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:             "sectors": 0,
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:             "sectorsize": "2048",
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:             "size": 493568.0,
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:             "support_discard": "2048",
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:             "type": "disk",
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:             "vendor": "QEMU"
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:         }
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]:     }
Jan 31 09:10:31 compute-2 intelligent_shamir[338234]: ]
Jan 31 09:10:31 compute-2 systemd[1]: libpod-b34b5c220e478b56f1879e792713acb1c0b55ed4a613cd7248a87a604937bccf.scope: Deactivated successfully.
Jan 31 09:10:31 compute-2 systemd[1]: libpod-b34b5c220e478b56f1879e792713acb1c0b55ed4a613cd7248a87a604937bccf.scope: Consumed 1.075s CPU time.
Jan 31 09:10:31 compute-2 podman[338218]: 2026-01-31 09:10:31.725983543 +0000 UTC m=+1.199101729 container died b34b5c220e478b56f1879e792713acb1c0b55ed4a613cd7248a87a604937bccf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shamir, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:10:31 compute-2 systemd[1]: var-lib-containers-storage-overlay-b495e4a3ba5f88179a004f2038e9c93cfc8d9e3d4997b8007e472b5b8ea09197-merged.mount: Deactivated successfully.
Jan 31 09:10:31 compute-2 nova_compute[226829]: 2026-01-31 09:10:31.783 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:10:31 compute-2 podman[338218]: 2026-01-31 09:10:31.795441622 +0000 UTC m=+1.268559768 container remove b34b5c220e478b56f1879e792713acb1c0b55ed4a613cd7248a87a604937bccf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 09:10:31 compute-2 systemd[1]: libpod-conmon-b34b5c220e478b56f1879e792713acb1c0b55ed4a613cd7248a87a604937bccf.scope: Deactivated successfully.
Jan 31 09:10:31 compute-2 sudo[338045]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:10:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:32.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:10:32 compute-2 ceph-mon[77282]: pgmap v4167: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:32 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:10:32 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:10:32 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:10:32 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:10:32 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:10:32 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:10:32 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:10:32 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:10:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1149389464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:10:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:33.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2222062800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:10:34 compute-2 nova_compute[226829]: 2026-01-31 09:10:34.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:10:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:34.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:34 compute-2 ceph-mon[77282]: pgmap v4168: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:35.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:35 compute-2 podman[339533]: 2026-01-31 09:10:35.176787331 +0000 UTC m=+0.064755612 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 09:10:35 compute-2 nova_compute[226829]: 2026-01-31 09:10:35.332 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:35 compute-2 nova_compute[226829]: 2026-01-31 09:10:35.484 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:36 compute-2 ceph-mon[77282]: pgmap v4169: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:10:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:10:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:36.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:10:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:37.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:37 compute-2 sudo[339560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:10:37 compute-2 sudo[339560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:37 compute-2 sudo[339560]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:37 compute-2 sudo[339585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:10:37 compute-2 sudo[339585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:37 compute-2 sudo[339585]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:10:38 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:10:38 compute-2 ceph-mon[77282]: pgmap v4170: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:38.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:39.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:39 compute-2 podman[339611]: 2026-01-31 09:10:39.173082233 +0000 UTC m=+0.055685064 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:10:39 compute-2 ceph-mon[77282]: pgmap v4171: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:40 compute-2 nova_compute[226829]: 2026-01-31 09:10:40.335 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:40 compute-2 nova_compute[226829]: 2026-01-31 09:10:40.487 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:40.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:41.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:10:42 compute-2 ceph-mon[77282]: pgmap v4172: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:10:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:42.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:10:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:43.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:10:43.285 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=110, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=109) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:10:43 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:10:43.286 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:10:43 compute-2 nova_compute[226829]: 2026-01-31 09:10:43.287 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:44 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:10:44.290 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '110'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:10:44 compute-2 nova_compute[226829]: 2026-01-31 09:10:44.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:10:44 compute-2 nova_compute[226829]: 2026-01-31 09:10:44.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:10:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:44.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:44 compute-2 ceph-mon[77282]: pgmap v4173: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:45.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:45 compute-2 nova_compute[226829]: 2026-01-31 09:10:45.337 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:45 compute-2 nova_compute[226829]: 2026-01-31 09:10:45.490 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:45 compute-2 ceph-mon[77282]: pgmap v4174: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:10:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:46.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:47.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:48.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:48 compute-2 ceph-mon[77282]: pgmap v4175: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:48 compute-2 sudo[339636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:10:48 compute-2 sudo[339636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:48 compute-2 sudo[339636]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:48 compute-2 sudo[339661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:10:48 compute-2 sudo[339661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:48 compute-2 sudo[339661]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:49.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:49 compute-2 ceph-mon[77282]: pgmap v4176: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:50 compute-2 nova_compute[226829]: 2026-01-31 09:10:50.340 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:50 compute-2 nova_compute[226829]: 2026-01-31 09:10:50.491 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:50.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:10:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:51.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:10:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:10:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:52.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:52 compute-2 ceph-mon[77282]: pgmap v4177: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:53.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3570653807' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:10:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3570653807' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:10:53 compute-2 ceph-mon[77282]: pgmap v4178: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:54.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:10:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:55.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:10:55 compute-2 nova_compute[226829]: 2026-01-31 09:10:55.342 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:55 compute-2 nova_compute[226829]: 2026-01-31 09:10:55.493 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:56 compute-2 ceph-mon[77282]: pgmap v4179: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:10:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:56.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:57.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:57 compute-2 ceph-mon[77282]: pgmap v4180: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:58.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:10:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:59.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:59 compute-2 ceph-mon[77282]: pgmap v4181: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:00 compute-2 nova_compute[226829]: 2026-01-31 09:11:00.344 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:00 compute-2 nova_compute[226829]: 2026-01-31 09:11:00.495 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:11:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:00.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:11:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:01.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:11:01 compute-2 ceph-mon[77282]: pgmap v4182: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:02.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:03.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:03 compute-2 ceph-mon[77282]: pgmap v4183: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:04.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:05.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:05 compute-2 nova_compute[226829]: 2026-01-31 09:11:05.347 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:05 compute-2 nova_compute[226829]: 2026-01-31 09:11:05.496 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:05 compute-2 ceph-mon[77282]: pgmap v4184: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:06 compute-2 podman[339695]: 2026-01-31 09:11:06.174856853 +0000 UTC m=+0.062657674 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 09:11:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:11:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:06.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:11:06.956 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:11:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:11:06.956 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:11:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:11:06.957 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:11:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:07.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:07 compute-2 ceph-mon[77282]: pgmap v4185: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:08.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:09 compute-2 sudo[339723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:11:09 compute-2 sudo[339723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:09 compute-2 sudo[339723]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:09.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:09 compute-2 sudo[339748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:11:09 compute-2 sudo[339748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:09 compute-2 sudo[339748]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:09 compute-2 ceph-mon[77282]: pgmap v4186: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:10 compute-2 podman[339773]: 2026-01-31 09:11:10.149575679 +0000 UTC m=+0.041768097 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 09:11:10 compute-2 nova_compute[226829]: 2026-01-31 09:11:10.348 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:10 compute-2 nova_compute[226829]: 2026-01-31 09:11:10.498 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:10.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:11.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:11:12 compute-2 ceph-mon[77282]: pgmap v4187: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:12.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:11:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:13.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:11:14 compute-2 ceph-mon[77282]: pgmap v4188: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:14.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:15.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:15 compute-2 nova_compute[226829]: 2026-01-31 09:11:15.351 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:15 compute-2 nova_compute[226829]: 2026-01-31 09:11:15.501 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:15 compute-2 ceph-mon[77282]: pgmap v4189: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:11:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:16.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:17.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:17 compute-2 ceph-mon[77282]: pgmap v4190: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:18 compute-2 nova_compute[226829]: 2026-01-31 09:11:18.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:11:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:18.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:11:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:19.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:11:19 compute-2 ceph-mon[77282]: pgmap v4191: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 22 KiB/s wr, 1 op/s
Jan 31 09:11:20 compute-2 nova_compute[226829]: 2026-01-31 09:11:20.400 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:20 compute-2 nova_compute[226829]: 2026-01-31 09:11:20.502 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:20.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:20 compute-2 ovn_controller[133834]: 2026-01-31T09:11:20Z|00867|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Jan 31 09:11:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:21.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:21 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:11:21 compute-2 ceph-mon[77282]: pgmap v4192: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 31 09:11:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:22.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:23.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:23 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3332761948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:11:23 compute-2 nova_compute[226829]: 2026-01-31 09:11:23.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:11:23 compute-2 nova_compute[226829]: 2026-01-31 09:11:23.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:11:24 compute-2 ceph-mon[77282]: pgmap v4193: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 31 09:11:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1859109925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:11:24 compute-2 nova_compute[226829]: 2026-01-31 09:11:24.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:11:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:24.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:11:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:25.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:11:25 compute-2 nova_compute[226829]: 2026-01-31 09:11:25.402 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:25 compute-2 nova_compute[226829]: 2026-01-31 09:11:25.503 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:26 compute-2 ceph-mon[77282]: pgmap v4194: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 31 09:11:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:11:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:11:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:26.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:11:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:11:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:27.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:11:27 compute-2 nova_compute[226829]: 2026-01-31 09:11:27.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:11:27 compute-2 nova_compute[226829]: 2026-01-31 09:11:27.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:11:27 compute-2 nova_compute[226829]: 2026-01-31 09:11:27.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:11:27 compute-2 nova_compute[226829]: 2026-01-31 09:11:27.551 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:11:27 compute-2 ceph-mon[77282]: pgmap v4195: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 31 09:11:28 compute-2 nova_compute[226829]: 2026-01-31 09:11:28.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:11:28 compute-2 nova_compute[226829]: 2026-01-31 09:11:28.600 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:11:28 compute-2 nova_compute[226829]: 2026-01-31 09:11:28.601 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:11:28 compute-2 nova_compute[226829]: 2026-01-31 09:11:28.601 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:11:28 compute-2 nova_compute[226829]: 2026-01-31 09:11:28.601 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:11:28 compute-2 nova_compute[226829]: 2026-01-31 09:11:28.601 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:11:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:28.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:11:29 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3425866928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:11:29 compute-2 nova_compute[226829]: 2026-01-31 09:11:29.056 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:11:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:29.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:29 compute-2 sudo[339824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:11:29 compute-2 sudo[339824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:29 compute-2 sudo[339824]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:29 compute-2 nova_compute[226829]: 2026-01-31 09:11:29.191 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:11:29 compute-2 nova_compute[226829]: 2026-01-31 09:11:29.192 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4016MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:11:29 compute-2 nova_compute[226829]: 2026-01-31 09:11:29.192 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:11:29 compute-2 nova_compute[226829]: 2026-01-31 09:11:29.193 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:11:29 compute-2 sudo[339849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:11:29 compute-2 sudo[339849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:29 compute-2 sudo[339849]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3425866928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:11:29 compute-2 nova_compute[226829]: 2026-01-31 09:11:29.346 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:11:29 compute-2 nova_compute[226829]: 2026-01-31 09:11:29.346 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:11:29 compute-2 nova_compute[226829]: 2026-01-31 09:11:29.386 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:11:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:11:29 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2909810650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:11:29 compute-2 nova_compute[226829]: 2026-01-31 09:11:29.814 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:11:29 compute-2 nova_compute[226829]: 2026-01-31 09:11:29.818 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:11:29 compute-2 nova_compute[226829]: 2026-01-31 09:11:29.874 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:11:29 compute-2 nova_compute[226829]: 2026-01-31 09:11:29.876 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:11:29 compute-2 nova_compute[226829]: 2026-01-31 09:11:29.877 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:11:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2631700456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:11:30 compute-2 ceph-mon[77282]: pgmap v4196: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 31 09:11:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2909810650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:11:30 compute-2 nova_compute[226829]: 2026-01-31 09:11:30.450 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:30 compute-2 nova_compute[226829]: 2026-01-31 09:11:30.505 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:30.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:30 compute-2 nova_compute[226829]: 2026-01-31 09:11:30.877 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:11:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:11:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:31.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:11:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:11:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 09:11:31 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/216516074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:11:32 compute-2 ceph-mon[77282]: pgmap v4197: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 KiB/s rd, 255 B/s wr, 3 op/s
Jan 31 09:11:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:32.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:33.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:33 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:11:33.213 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=111, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=110) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:11:33 compute-2 nova_compute[226829]: 2026-01-31 09:11:33.214 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:33 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:11:33.215 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:11:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/216516074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:11:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/686250951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:11:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/199953000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:11:34 compute-2 nova_compute[226829]: 2026-01-31 09:11:34.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:11:34 compute-2 ceph-mon[77282]: pgmap v4198: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 31 09:11:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:34.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:35.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:35 compute-2 nova_compute[226829]: 2026-01-31 09:11:35.484 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:35 compute-2 nova_compute[226829]: 2026-01-31 09:11:35.505 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:11:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:36.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:36 compute-2 ceph-mon[77282]: pgmap v4199: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 31 09:11:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:37.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:37 compute-2 podman[339900]: 2026-01-31 09:11:37.178840184 +0000 UTC m=+0.065434461 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:11:37 compute-2 sudo[339926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:11:37 compute-2 sudo[339926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:37 compute-2 sudo[339926]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:37 compute-2 sudo[339951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:11:37 compute-2 sudo[339951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:37 compute-2 sudo[339951]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:37 compute-2 sudo[339976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:11:37 compute-2 sudo[339976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:37 compute-2 sudo[339976]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:37 compute-2 sudo[340001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:11:37 compute-2 sudo[340001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:37 compute-2 sudo[340001]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:38 compute-2 ceph-mon[77282]: pgmap v4200: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:38.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:39.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:11:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:11:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:11:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:11:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:11:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:11:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:11:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:11:40 compute-2 nova_compute[226829]: 2026-01-31 09:11:40.487 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:40 compute-2 nova_compute[226829]: 2026-01-31 09:11:40.507 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:40 compute-2 ceph-mon[77282]: pgmap v4201: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:40.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:41.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:41 compute-2 podman[340059]: 2026-01-31 09:11:41.161837144 +0000 UTC m=+0.045280162 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 09:11:41 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:11:41.218 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '111'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:11:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:11:42 compute-2 ceph-mon[77282]: pgmap v4202: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:42.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:43.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:43 compute-2 ceph-mon[77282]: pgmap v4203: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:44.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:11:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:45.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:11:45 compute-2 nova_compute[226829]: 2026-01-31 09:11:45.489 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:45 compute-2 nova_compute[226829]: 2026-01-31 09:11:45.509 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:45 compute-2 sudo[340080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:11:45 compute-2 sudo[340080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:45 compute-2 sudo[340080]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:45 compute-2 sudo[340105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:11:45 compute-2 sudo[340105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:45 compute-2 sudo[340105]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:46 compute-2 nova_compute[226829]: 2026-01-31 09:11:46.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:11:46 compute-2 nova_compute[226829]: 2026-01-31 09:11:46.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:11:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:11:46 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:11:46 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:11:46 compute-2 ceph-mon[77282]: pgmap v4204: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3893306248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:11:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:46.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:47.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:48 compute-2 ceph-mon[77282]: pgmap v4205: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:48.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:11:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:49.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:11:49 compute-2 sudo[340132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:11:49 compute-2 sudo[340132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:49 compute-2 sudo[340132]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:49 compute-2 sudo[340157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:11:49 compute-2 sudo[340157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:49 compute-2 sudo[340157]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:50 compute-2 nova_compute[226829]: 2026-01-31 09:11:50.492 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:50 compute-2 nova_compute[226829]: 2026-01-31 09:11:50.512 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:50.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:50 compute-2 ceph-mon[77282]: pgmap v4206: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:51.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:11:51 compute-2 ceph-mon[77282]: pgmap v4207: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Jan 31 09:11:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:52.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:53.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/4140901243' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:11:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/4140901243' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:11:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:54.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:55 compute-2 ceph-mon[77282]: pgmap v4208: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.1 KiB/s rd, 170 B/s wr, 5 op/s
Jan 31 09:11:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:55.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:55 compute-2 nova_compute[226829]: 2026-01-31 09:11:55.493 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:55 compute-2 nova_compute[226829]: 2026-01-31 09:11:55.513 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:56 compute-2 ceph-mon[77282]: pgmap v4209: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 10 op/s
Jan 31 09:11:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:11:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:56.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:57.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:57 compute-2 ceph-mon[77282]: pgmap v4210: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 10 op/s
Jan 31 09:11:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:11:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:58.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:11:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:11:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:59.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:59 compute-2 ceph-mon[77282]: pgmap v4211: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 10 op/s
Jan 31 09:12:00 compute-2 nova_compute[226829]: 2026-01-31 09:12:00.495 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:00 compute-2 nova_compute[226829]: 2026-01-31 09:12:00.515 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:12:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:00.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:12:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:01.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:12:02 compute-2 ceph-mon[77282]: pgmap v4212: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 10 op/s
Jan 31 09:12:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:02.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:03.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:03 compute-2 nova_compute[226829]: 2026-01-31 09:12:03.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:12:04 compute-2 ceph-mon[77282]: pgmap v4213: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 12 KiB/s wr, 17 op/s
Jan 31 09:12:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:04.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3309363941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:12:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:05.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:05 compute-2 nova_compute[226829]: 2026-01-31 09:12:05.498 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:05 compute-2 nova_compute[226829]: 2026-01-31 09:12:05.517 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:06 compute-2 ceph-mon[77282]: pgmap v4214: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 13 KiB/s wr, 17 op/s
Jan 31 09:12:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:12:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:06.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:12:06.958 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:12:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:12:06.959 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:12:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:12:06.959 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:12:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:07.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:08 compute-2 podman[340191]: 2026-01-31 09:12:08.273243963 +0000 UTC m=+0.163337042 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 09:12:08 compute-2 ceph-mon[77282]: pgmap v4215: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.6 KiB/s rd, 597 B/s wr, 12 op/s
Jan 31 09:12:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:12:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:08.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:12:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:09.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:09 compute-2 sudo[340220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:12:09 compute-2 sudo[340220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:09 compute-2 sudo[340220]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:09 compute-2 sudo[340245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:12:09 compute-2 sudo[340245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:09 compute-2 sudo[340245]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/734212161' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:12:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/734212161' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:12:10 compute-2 nova_compute[226829]: 2026-01-31 09:12:10.500 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:10 compute-2 nova_compute[226829]: 2026-01-31 09:12:10.520 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:10 compute-2 ceph-mon[77282]: pgmap v4216: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 938 B/s wr, 16 op/s
Jan 31 09:12:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:10.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:12:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:11.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:12:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:12:12 compute-2 podman[340271]: 2026-01-31 09:12:12.149818021 +0000 UTC m=+0.041229143 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 09:12:12 compute-2 ceph-mon[77282]: pgmap v4217: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 09:12:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:12.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:12:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:13.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:12:14 compute-2 ceph-mon[77282]: pgmap v4218: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 09:12:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:14.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:15.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:15 compute-2 nova_compute[226829]: 2026-01-31 09:12:15.502 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:15 compute-2 nova_compute[226829]: 2026-01-31 09:12:15.522 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:16 compute-2 ceph-mon[77282]: pgmap v4219: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 697 KiB/s rd, 1.2 KiB/s wr, 19 op/s
Jan 31 09:12:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:12:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:16.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:17.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:18 compute-2 ceph-mon[77282]: pgmap v4220: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 694 KiB/s rd, 597 B/s wr, 14 op/s
Jan 31 09:12:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:18.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:19.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:20 compute-2 ceph-mon[77282]: pgmap v4221: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 767 B/s wr, 22 op/s
Jan 31 09:12:20 compute-2 nova_compute[226829]: 2026-01-31 09:12:20.521 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:20 compute-2 nova_compute[226829]: 2026-01-31 09:12:20.523 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:20 compute-2 nova_compute[226829]: 2026-01-31 09:12:20.544 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:12:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:20.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:12:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:21.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:12:21 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:12:22 compute-2 ceph-mon[77282]: pgmap v4222: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 511 B/s wr, 19 op/s
Jan 31 09:12:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:22.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:23.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:23 compute-2 nova_compute[226829]: 2026-01-31 09:12:23.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:12:24 compute-2 ceph-mon[77282]: pgmap v4223: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 7 op/s
Jan 31 09:12:24 compute-2 nova_compute[226829]: 2026-01-31 09:12:24.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:12:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:24.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:25.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1538677618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:12:25 compute-2 nova_compute[226829]: 2026-01-31 09:12:25.523 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:25 compute-2 nova_compute[226829]: 2026-01-31 09:12:25.525 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:26 compute-2 ceph-mon[77282]: pgmap v4224: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 767 B/s wr, 17 op/s
Jan 31 09:12:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2139921524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:12:26 compute-2 nova_compute[226829]: 2026-01-31 09:12:26.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:12:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:12:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:26.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:27.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:28 compute-2 ceph-mon[77282]: pgmap v4225: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 767 B/s wr, 17 op/s
Jan 31 09:12:28 compute-2 nova_compute[226829]: 2026-01-31 09:12:28.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:12:28 compute-2 nova_compute[226829]: 2026-01-31 09:12:28.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:12:28 compute-2 nova_compute[226829]: 2026-01-31 09:12:28.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:12:28 compute-2 nova_compute[226829]: 2026-01-31 09:12:28.530 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:12:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:28.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:29.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e421 e421: 3 total, 3 up, 3 in
Jan 31 09:12:29 compute-2 sudo[340301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:12:29 compute-2 sudo[340301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:29 compute-2 sudo[340301]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:29 compute-2 nova_compute[226829]: 2026-01-31 09:12:29.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:12:29 compute-2 sudo[340326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:12:29 compute-2 nova_compute[226829]: 2026-01-31 09:12:29.530 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:12:29 compute-2 nova_compute[226829]: 2026-01-31 09:12:29.530 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:12:29 compute-2 sudo[340326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:29 compute-2 nova_compute[226829]: 2026-01-31 09:12:29.530 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:12:29 compute-2 nova_compute[226829]: 2026-01-31 09:12:29.531 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:12:29 compute-2 nova_compute[226829]: 2026-01-31 09:12:29.531 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:12:29 compute-2 sudo[340326]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:29 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:12:29 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2901482307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:12:29 compute-2 nova_compute[226829]: 2026-01-31 09:12:29.952 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:12:30 compute-2 nova_compute[226829]: 2026-01-31 09:12:30.086 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:12:30 compute-2 nova_compute[226829]: 2026-01-31 09:12:30.087 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4036MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:12:30 compute-2 nova_compute[226829]: 2026-01-31 09:12:30.087 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:12:30 compute-2 nova_compute[226829]: 2026-01-31 09:12:30.088 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:12:30 compute-2 nova_compute[226829]: 2026-01-31 09:12:30.158 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:12:30 compute-2 nova_compute[226829]: 2026-01-31 09:12:30.159 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:12:30 compute-2 nova_compute[226829]: 2026-01-31 09:12:30.175 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:12:30 compute-2 ceph-mon[77282]: osdmap e421: 3 total, 3 up, 3 in
Jan 31 09:12:30 compute-2 ceph-mon[77282]: pgmap v4227: 305 pgs: 305 active+clean; 148 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 847 KiB/s rd, 1.2 MiB/s wr, 42 op/s
Jan 31 09:12:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2901482307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:12:30 compute-2 nova_compute[226829]: 2026-01-31 09:12:30.526 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:30 compute-2 nova_compute[226829]: 2026-01-31 09:12:30.529 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:12:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:12:30 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3588450564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:12:30 compute-2 nova_compute[226829]: 2026-01-31 09:12:30.737 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:12:30 compute-2 nova_compute[226829]: 2026-01-31 09:12:30.745 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:12:30 compute-2 nova_compute[226829]: 2026-01-31 09:12:30.777 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:12:30 compute-2 nova_compute[226829]: 2026-01-31 09:12:30.780 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:12:30 compute-2 nova_compute[226829]: 2026-01-31 09:12:30.780 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:12:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:30.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:31.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:12:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3588450564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #205. Immutable memtables: 0.
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:12:32.318632) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 131] Flushing memtable with next log file: 205
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850752318726, "job": 131, "event": "flush_started", "num_memtables": 1, "num_entries": 2373, "num_deletes": 251, "total_data_size": 5906219, "memory_usage": 5985376, "flush_reason": "Manual Compaction"}
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 131] Level-0 flush table #206: started
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850752440311, "cf_name": "default", "job": 131, "event": "table_file_creation", "file_number": 206, "file_size": 3821959, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 98173, "largest_seqno": 100540, "table_properties": {"data_size": 3812383, "index_size": 6069, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19564, "raw_average_key_size": 20, "raw_value_size": 3793277, "raw_average_value_size": 3943, "num_data_blocks": 265, "num_entries": 962, "num_filter_entries": 962, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850542, "oldest_key_time": 1769850542, "file_creation_time": 1769850752, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 206, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 131] Flush lasted 121753 microseconds, and 7418 cpu microseconds.
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:12:32.440385) [db/flush_job.cc:967] [default] [JOB 131] Level-0 flush table #206: 3821959 bytes OK
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:12:32.440408) [db/memtable_list.cc:519] [default] Level-0 commit table #206 started
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:12:32.607815) [db/memtable_list.cc:722] [default] Level-0 commit table #206: memtable #1 done
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:12:32.607861) EVENT_LOG_v1 {"time_micros": 1769850752607849, "job": 131, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:12:32.607887) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 131] Try to delete WAL files size 5895897, prev total WAL file size 5895897, number of live WAL files 2.
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000202.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:12:32.608720) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038373835' seq:72057594037927935, type:22 .. '7061786F730039303337' seq:0, type:0; will stop at (end)
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 132] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 131 Base level 0, inputs: [206(3732KB)], [204(12MB)]
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850752608771, "job": 132, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [206], "files_L6": [204], "score": -1, "input_data_size": 16700639, "oldest_snapshot_seqno": -1}
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 132] Generated table #207: 11837 keys, 14760616 bytes, temperature: kUnknown
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850752779606, "cf_name": "default", "job": 132, "event": "table_file_creation", "file_number": 207, "file_size": 14760616, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14685523, "index_size": 44388, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29637, "raw_key_size": 312817, "raw_average_key_size": 26, "raw_value_size": 14479978, "raw_average_value_size": 1223, "num_data_blocks": 1684, "num_entries": 11837, "num_filter_entries": 11837, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769850752, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 207, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:12:32 compute-2 nova_compute[226829]: 2026-01-31 09:12:32.781 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:12:32.779877) [db/compaction/compaction_job.cc:1663] [default] [JOB 132] Compacted 1@0 + 1@6 files to L6 => 14760616 bytes
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:12:32.855306) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 97.7 rd, 86.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 12.3 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(8.2) write-amplify(3.9) OK, records in: 12360, records dropped: 523 output_compression: NoCompression
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:12:32.855342) EVENT_LOG_v1 {"time_micros": 1769850752855329, "job": 132, "event": "compaction_finished", "compaction_time_micros": 170948, "compaction_time_cpu_micros": 30322, "output_level": 6, "num_output_files": 1, "total_output_size": 14760616, "num_input_records": 12360, "num_output_records": 11837, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000206.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850752855742, "job": 132, "event": "table_file_deletion", "file_number": 206}
Jan 31 09:12:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000204.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850752857056, "job": 132, "event": "table_file_deletion", "file_number": 204}
Jan 31 09:12:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:32.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:12:32.608611) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:12:32.857079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:12:32.857084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:12:32.857086) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:12:32.857088) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:12:32 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:12:32.857089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:12:33 compute-2 ceph-mon[77282]: pgmap v4228: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 2.1 MiB/s wr, 43 op/s
Jan 31 09:12:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1434427507' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:12:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1432717354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:12:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:12:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:33.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:12:34 compute-2 ceph-mon[77282]: pgmap v4229: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Jan 31 09:12:34 compute-2 nova_compute[226829]: 2026-01-31 09:12:34.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:12:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:34.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:35.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:35 compute-2 nova_compute[226829]: 2026-01-31 09:12:35.528 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:35 compute-2 nova_compute[226829]: 2026-01-31 09:12:35.530 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:36 compute-2 ceph-mon[77282]: pgmap v4230: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 2.1 MiB/s wr, 37 op/s
Jan 31 09:12:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:12:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:12:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:36.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:12:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:37.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:38 compute-2 ceph-mon[77282]: pgmap v4231: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 2.1 MiB/s wr, 37 op/s
Jan 31 09:12:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:38.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:39.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:39 compute-2 podman[340400]: 2026-01-31 09:12:39.167624032 +0000 UTC m=+0.058478320 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 09:12:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/130320623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:12:40 compute-2 ceph-mon[77282]: pgmap v4232: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 KiB/s rd, 925 KiB/s wr, 6 op/s
Jan 31 09:12:40 compute-2 nova_compute[226829]: 2026-01-31 09:12:40.531 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:12:40 compute-2 nova_compute[226829]: 2026-01-31 09:12:40.532 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:40 compute-2 nova_compute[226829]: 2026-01-31 09:12:40.532 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 31 09:12:40 compute-2 nova_compute[226829]: 2026-01-31 09:12:40.532 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:12:40 compute-2 nova_compute[226829]: 2026-01-31 09:12:40.533 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:12:40 compute-2 nova_compute[226829]: 2026-01-31 09:12:40.534 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:12:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:40.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:12:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:12:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:41.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:12:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:12:41 compute-2 ceph-mon[77282]: pgmap v4233: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 KiB/s rd, 806 KiB/s wr, 5 op/s
Jan 31 09:12:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:42.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:43.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:43 compute-2 podman[340428]: 2026-01-31 09:12:43.190547959 +0000 UTC m=+0.056784115 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 31 09:12:43 compute-2 ceph-mon[77282]: pgmap v4234: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 KiB/s rd, 426 B/s wr, 4 op/s
Jan 31 09:12:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:12:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:44.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:12:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:12:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:45.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:12:45 compute-2 nova_compute[226829]: 2026-01-31 09:12:45.535 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:12:45 compute-2 nova_compute[226829]: 2026-01-31 09:12:45.537 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:12:45 compute-2 nova_compute[226829]: 2026-01-31 09:12:45.537 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 31 09:12:45 compute-2 nova_compute[226829]: 2026-01-31 09:12:45.537 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:12:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 09:12:45 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2610765387' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:12:45 compute-2 nova_compute[226829]: 2026-01-31 09:12:45.589 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:45 compute-2 nova_compute[226829]: 2026-01-31 09:12:45.589 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:12:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2610765387' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:12:45 compute-2 sudo[340449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:12:45 compute-2 sudo[340449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:45 compute-2 sudo[340449]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:46 compute-2 sudo[340474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:12:46 compute-2 sudo[340474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:46 compute-2 sudo[340474]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:46 compute-2 sudo[340499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:12:46 compute-2 sudo[340499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:46 compute-2 sudo[340499]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:46 compute-2 sudo[340524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 09:12:46 compute-2 sudo[340524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:46 compute-2 sudo[340524]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:12:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:46.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:47 compute-2 sudo[340570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:12:47 compute-2 sudo[340570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:47 compute-2 sudo[340570]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:47 compute-2 ceph-mon[77282]: pgmap v4235: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 14 op/s
Jan 31 09:12:47 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:12:47 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 09:12:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/114281962' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:12:47 compute-2 sudo[340595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:12:47 compute-2 sudo[340595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:47 compute-2 sudo[340595]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:47 compute-2 sudo[340620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:12:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:47 compute-2 sudo[340620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:47.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:47 compute-2 sudo[340620]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:47 compute-2 sudo[340645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:12:47 compute-2 sudo[340645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:47 compute-2 sudo[340645]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:12:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:12:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:12:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 09:12:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:12:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:12:48 compute-2 ceph-mon[77282]: pgmap v4236: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 938 B/s wr, 10 op/s
Jan 31 09:12:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:12:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:12:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:12:48 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:12:48 compute-2 nova_compute[226829]: 2026-01-31 09:12:48.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:12:48 compute-2 nova_compute[226829]: 2026-01-31 09:12:48.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:12:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:48.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:49.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:49 compute-2 sudo[340703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:12:49 compute-2 sudo[340703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:49 compute-2 sudo[340703]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:49 compute-2 sudo[340728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:12:49 compute-2 sudo[340728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:49 compute-2 sudo[340728]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:50 compute-2 ceph-mon[77282]: pgmap v4237: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.5 KiB/s rd, 1.2 KiB/s wr, 11 op/s
Jan 31 09:12:50 compute-2 nova_compute[226829]: 2026-01-31 09:12:50.590 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:50 compute-2 nova_compute[226829]: 2026-01-31 09:12:50.592 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:50.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:51.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:12:51.704 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=112, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=111) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:12:51 compute-2 nova_compute[226829]: 2026-01-31 09:12:51.705 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:12:51.705 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:12:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:12:52 compute-2 ceph-mon[77282]: pgmap v4238: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Jan 31 09:12:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:52.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:53.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/599817729' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:12:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/599817729' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:12:54 compute-2 sudo[340757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:12:54 compute-2 sudo[340757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:54 compute-2 sudo[340757]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:54 compute-2 sudo[340782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:12:54 compute-2 sudo[340782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:54 compute-2 sudo[340782]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:54.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:55.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:55 compute-2 ceph-mon[77282]: pgmap v4239: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 720 KiB/s rd, 1.2 KiB/s wr, 39 op/s
Jan 31 09:12:55 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:12:55 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:12:55 compute-2 nova_compute[226829]: 2026-01-31 09:12:55.592 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:55 compute-2 nova_compute[226829]: 2026-01-31 09:12:55.595 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:12:56 compute-2 ceph-mon[77282]: pgmap v4240: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 87 op/s
Jan 31 09:12:56 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:56 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:12:56 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:56.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:12:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:12:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:57.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:58 compute-2 ceph-mon[77282]: pgmap v4241: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Jan 31 09:12:58 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:58 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:58 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:58.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:12:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:12:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:59.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:13:00 compute-2 ceph-mon[77282]: pgmap v4242: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 80 op/s
Jan 31 09:13:00 compute-2 nova_compute[226829]: 2026-01-31 09:13:00.594 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:13:00 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:00 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:00 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:00.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:13:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:01.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:13:01 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:13:01.709 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '112'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:13:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1702245253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:13:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:13:02 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:02 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:02 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:02.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:03.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:03 compute-2 ceph-mon[77282]: pgmap v4243: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 88 op/s
Jan 31 09:13:04 compute-2 ceph-mon[77282]: pgmap v4244: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 83 op/s
Jan 31 09:13:04 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:04 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:04 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:04.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:13:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:05.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:13:05 compute-2 nova_compute[226829]: 2026-01-31 09:13:05.597 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:13:05 compute-2 nova_compute[226829]: 2026-01-31 09:13:05.597 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:05 compute-2 nova_compute[226829]: 2026-01-31 09:13:05.598 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 31 09:13:05 compute-2 nova_compute[226829]: 2026-01-31 09:13:05.598 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:13:05 compute-2 nova_compute[226829]: 2026-01-31 09:13:05.598 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:13:05 compute-2 nova_compute[226829]: 2026-01-31 09:13:05.599 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2207572594' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:13:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2207572594' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:13:06 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:06 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:06 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:06.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:13:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:13:06.960 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:13:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:13:06.960 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:13:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:13:06.961 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:13:06 compute-2 ceph-mon[77282]: pgmap v4245: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 13 KiB/s wr, 75 op/s
Jan 31 09:13:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:13:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:07.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:13:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e422 e422: 3 total, 3 up, 3 in
Jan 31 09:13:08 compute-2 ceph-mon[77282]: osdmap e422: 3 total, 3 up, 3 in
Jan 31 09:13:08 compute-2 ceph-mon[77282]: pgmap v4247: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 1023 B/s wr, 33 op/s
Jan 31 09:13:08 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:08 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:08 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:08.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:09.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:09 compute-2 sudo[340814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:13:09 compute-2 sudo[340814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:13:09 compute-2 sudo[340814]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:09 compute-2 sudo[340842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:13:09 compute-2 sudo[340842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:13:09 compute-2 sudo[340842]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:09 compute-2 podman[340838]: 2026-01-31 09:13:09.851980303 +0000 UTC m=+0.096588538 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:13:10 compute-2 ceph-mon[77282]: pgmap v4248: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 1.5 KiB/s wr, 43 op/s
Jan 31 09:13:10 compute-2 nova_compute[226829]: 2026-01-31 09:13:10.599 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:10 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:10 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:10 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:10.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:11.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1948929932' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:13:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1948929932' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:13:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:13:12 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:12 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:12 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:12.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:13 compute-2 ceph-mon[77282]: pgmap v4249: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 1.4 KiB/s wr, 34 op/s
Jan 31 09:13:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:13.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:14 compute-2 ceph-mon[77282]: pgmap v4250: 305 pgs: 305 active+clean; 154 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 1.8 KiB/s wr, 36 op/s
Jan 31 09:13:14 compute-2 podman[340894]: 2026-01-31 09:13:14.149703232 +0000 UTC m=+0.042762403 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127)
Jan 31 09:13:14 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:14 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:14 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:14.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:13:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:15.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:13:15 compute-2 nova_compute[226829]: 2026-01-31 09:13:15.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:13:15 compute-2 nova_compute[226829]: 2026-01-31 09:13:15.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 09:13:15 compute-2 nova_compute[226829]: 2026-01-31 09:13:15.600 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:16 compute-2 ceph-mon[77282]: pgmap v4251: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.6 KiB/s wr, 32 op/s
Jan 31 09:13:16 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:16 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:13:16 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:16.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:13:16 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e423 e423: 3 total, 3 up, 3 in
Jan 31 09:13:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:13:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:17.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:17 compute-2 ceph-mon[77282]: osdmap e423: 3 total, 3 up, 3 in
Jan 31 09:13:18 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:18 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:18 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:18.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:19.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:19 compute-2 ceph-mon[77282]: pgmap v4253: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.6 KiB/s wr, 32 op/s
Jan 31 09:13:20 compute-2 ceph-mon[77282]: pgmap v4254: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 473 KiB/s rd, 921 B/s wr, 26 op/s
Jan 31 09:13:20 compute-2 nova_compute[226829]: 2026-01-31 09:13:20.602 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:13:20 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:20 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:13:20 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:20.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:13:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:21.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:13:22 compute-2 ceph-mon[77282]: pgmap v4255: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 921 B/s wr, 26 op/s
Jan 31 09:13:22 compute-2 nova_compute[226829]: 2026-01-31 09:13:22.504 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:13:22 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:22 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:22 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:22.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:23.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:23 compute-2 ceph-mon[77282]: pgmap v4256: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 614 B/s wr, 23 op/s
Jan 31 09:13:24 compute-2 nova_compute[226829]: 2026-01-31 09:13:24.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:13:24 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:24 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:24 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:24.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:25.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:25 compute-2 nova_compute[226829]: 2026-01-31 09:13:25.605 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:26 compute-2 ceph-mon[77282]: pgmap v4257: 305 pgs: 305 active+clean; 145 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 46 op/s
Jan 31 09:13:26 compute-2 nova_compute[226829]: 2026-01-31 09:13:26.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:13:26 compute-2 nova_compute[226829]: 2026-01-31 09:13:26.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:13:26 compute-2 nova_compute[226829]: 2026-01-31 09:13:26.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 09:13:26 compute-2 nova_compute[226829]: 2026-01-31 09:13:26.526 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 09:13:26 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:26 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:26 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:26.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:13:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:27.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2455795959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:13:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3879334382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:13:28 compute-2 nova_compute[226829]: 2026-01-31 09:13:28.526 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:13:28 compute-2 nova_compute[226829]: 2026-01-31 09:13:28.526 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:13:28 compute-2 nova_compute[226829]: 2026-01-31 09:13:28.526 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:13:28 compute-2 nova_compute[226829]: 2026-01-31 09:13:28.552 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:13:28 compute-2 nova_compute[226829]: 2026-01-31 09:13:28.552 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:13:28 compute-2 ceph-mon[77282]: pgmap v4258: 305 pgs: 305 active+clean; 145 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 41 op/s
Jan 31 09:13:28 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:28 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:28 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:28.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:29.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:29 compute-2 sudo[340923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:13:29 compute-2 sudo[340923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:13:29 compute-2 sudo[340923]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:29 compute-2 sudo[340948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:13:29 compute-2 sudo[340948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:13:29 compute-2 sudo[340948]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:30 compute-2 ceph-mon[77282]: pgmap v4259: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 31 09:13:30 compute-2 nova_compute[226829]: 2026-01-31 09:13:30.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:13:30 compute-2 nova_compute[226829]: 2026-01-31 09:13:30.528 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:13:30 compute-2 nova_compute[226829]: 2026-01-31 09:13:30.529 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:13:30 compute-2 nova_compute[226829]: 2026-01-31 09:13:30.529 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:13:30 compute-2 nova_compute[226829]: 2026-01-31 09:13:30.529 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:13:30 compute-2 nova_compute[226829]: 2026-01-31 09:13:30.530 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:13:30 compute-2 nova_compute[226829]: 2026-01-31 09:13:30.606 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:30 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:30 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:30 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:30.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:13:30 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2559977376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:13:31 compute-2 nova_compute[226829]: 2026-01-31 09:13:31.011 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:13:31 compute-2 nova_compute[226829]: 2026-01-31 09:13:31.150 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:13:31 compute-2 nova_compute[226829]: 2026-01-31 09:13:31.151 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4052MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:13:31 compute-2 nova_compute[226829]: 2026-01-31 09:13:31.151 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:13:31 compute-2 nova_compute[226829]: 2026-01-31 09:13:31.152 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:13:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:31.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:31 compute-2 nova_compute[226829]: 2026-01-31 09:13:31.221 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:13:31 compute-2 nova_compute[226829]: 2026-01-31 09:13:31.221 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:13:31 compute-2 nova_compute[226829]: 2026-01-31 09:13:31.237 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:13:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:13:31 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1909190248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:13:31 compute-2 nova_compute[226829]: 2026-01-31 09:13:31.708 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:13:31 compute-2 nova_compute[226829]: 2026-01-31 09:13:31.714 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:13:31 compute-2 nova_compute[226829]: 2026-01-31 09:13:31.756 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:13:31 compute-2 nova_compute[226829]: 2026-01-31 09:13:31.758 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:13:31 compute-2 nova_compute[226829]: 2026-01-31 09:13:31.758 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:13:31 compute-2 nova_compute[226829]: 2026-01-31 09:13:31.759 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:13:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2559977376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:13:32 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:13:32 compute-2 nova_compute[226829]: 2026-01-31 09:13:32.789 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:13:32 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:32 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:32 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:32.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1909190248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:13:33 compute-2 ceph-mon[77282]: pgmap v4260: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 31 09:13:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2401774579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:13:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:33.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3699418978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:13:34 compute-2 ceph-mon[77282]: pgmap v4261: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 31 09:13:34 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:34 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:13:34 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:34.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:13:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:35.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:35 compute-2 nova_compute[226829]: 2026-01-31 09:13:35.607 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:35 compute-2 nova_compute[226829]: 2026-01-31 09:13:35.609 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/645523033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:13:36 compute-2 nova_compute[226829]: 2026-01-31 09:13:36.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:13:36 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:36 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:36 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:36.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 09:13:36 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3936430947' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:13:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:13:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:37.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:37 compute-2 ceph-mon[77282]: pgmap v4262: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 31 09:13:38 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:38 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:38 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:38.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3936430947' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:13:39 compute-2 ceph-mon[77282]: pgmap v4263: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 597 B/s rd, 760 KiB/s wr, 2 op/s
Jan 31 09:13:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:13:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:39.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:13:40 compute-2 podman[341022]: 2026-01-31 09:13:40.183692483 +0000 UTC m=+0.074319822 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127)
Jan 31 09:13:40 compute-2 ceph-mon[77282]: pgmap v4264: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 596 B/s rd, 760 KiB/s wr, 2 op/s
Jan 31 09:13:40 compute-2 nova_compute[226829]: 2026-01-31 09:13:40.608 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:40 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:40 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:40 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:40.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:13:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:41.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:13:42 compute-2 ceph-mon[77282]: pgmap v4265: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:13:42 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:13:42 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:42 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:42 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:42.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2446852087' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:13:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:43.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:44 compute-2 ceph-mon[77282]: pgmap v4266: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:13:44 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:44 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:44 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:44.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:45 compute-2 podman[341052]: 2026-01-31 09:13:45.149707224 +0000 UTC m=+0.040815461 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 09:13:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:45.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:45 compute-2 nova_compute[226829]: 2026-01-31 09:13:45.610 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:46 compute-2 ceph-mon[77282]: pgmap v4267: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 KiB/s rd, 170 B/s wr, 5 op/s
Jan 31 09:13:46 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:46 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:46 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:46.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:47 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:13:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:47.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:47 compute-2 ceph-mon[77282]: pgmap v4268: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 KiB/s rd, 170 B/s wr, 5 op/s
Jan 31 09:13:48 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:48 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:48 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:48.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:49.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:49 compute-2 nova_compute[226829]: 2026-01-31 09:13:49.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:13:49 compute-2 nova_compute[226829]: 2026-01-31 09:13:49.490 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:13:50 compute-2 sudo[341073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:13:50 compute-2 sudo[341073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:13:50 compute-2 sudo[341073]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:50 compute-2 sudo[341098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:13:50 compute-2 sudo[341098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:13:50 compute-2 sudo[341098]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:50 compute-2 ceph-mon[77282]: pgmap v4269: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 776 KiB/s rd, 14 KiB/s wr, 35 op/s
Jan 31 09:13:50 compute-2 nova_compute[226829]: 2026-01-31 09:13:50.611 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:13:50 compute-2 nova_compute[226829]: 2026-01-31 09:13:50.613 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:13:50 compute-2 nova_compute[226829]: 2026-01-31 09:13:50.613 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 31 09:13:50 compute-2 nova_compute[226829]: 2026-01-31 09:13:50.613 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:13:50 compute-2 nova_compute[226829]: 2026-01-31 09:13:50.665 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:50 compute-2 nova_compute[226829]: 2026-01-31 09:13:50.665 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:13:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:13:50.822 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=113, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=112) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:13:50 compute-2 nova_compute[226829]: 2026-01-31 09:13:50.823 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:50 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:13:50.824 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:13:50 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:50 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:13:50 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:50.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:13:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:13:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:51.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:13:51 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #208. Immutable memtables: 0.
Jan 31 09:13:51 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:13:51.754626) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 09:13:51 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 133] Flushing memtable with next log file: 208
Jan 31 09:13:51 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850831754703, "job": 133, "event": "flush_started", "num_memtables": 1, "num_entries": 1087, "num_deletes": 251, "total_data_size": 2215995, "memory_usage": 2240536, "flush_reason": "Manual Compaction"}
Jan 31 09:13:51 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 133] Level-0 flush table #209: started
Jan 31 09:13:51 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850831982399, "cf_name": "default", "job": 133, "event": "table_file_creation", "file_number": 209, "file_size": 956587, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 100545, "largest_seqno": 101627, "table_properties": {"data_size": 952499, "index_size": 1675, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10982, "raw_average_key_size": 21, "raw_value_size": 943687, "raw_average_value_size": 1825, "num_data_blocks": 72, "num_entries": 517, "num_filter_entries": 517, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850752, "oldest_key_time": 1769850752, "file_creation_time": 1769850831, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 209, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:13:51 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 133] Flush lasted 227825 microseconds, and 3091 cpu microseconds.
Jan 31 09:13:51 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:13:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:13:51.982451) [db/flush_job.cc:967] [default] [JOB 133] Level-0 flush table #209: 956587 bytes OK
Jan 31 09:13:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:13:51.982470) [db/memtable_list.cc:519] [default] Level-0 commit table #209 started
Jan 31 09:13:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:13:52.035746) [db/memtable_list.cc:722] [default] Level-0 commit table #209: memtable #1 done
Jan 31 09:13:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:13:52.035804) EVENT_LOG_v1 {"time_micros": 1769850832035782, "job": 133, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 09:13:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:13:52.035834) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 09:13:52 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 133] Try to delete WAL files size 2210679, prev total WAL file size 2210679, number of live WAL files 2.
Jan 31 09:13:52 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000205.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:13:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:13:52.036624) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033353130' seq:72057594037927935, type:22 .. '6D6772737461740033373631' seq:0, type:0; will stop at (end)
Jan 31 09:13:52 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 134] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 09:13:52 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 133 Base level 0, inputs: [209(934KB)], [207(14MB)]
Jan 31 09:13:52 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850832036680, "job": 134, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [209], "files_L6": [207], "score": -1, "input_data_size": 15717203, "oldest_snapshot_seqno": -1}
Jan 31 09:13:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:13:52 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 134] Generated table #210: 11864 keys, 12350432 bytes, temperature: kUnknown
Jan 31 09:13:52 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850832656802, "cf_name": "default", "job": 134, "event": "table_file_creation", "file_number": 210, "file_size": 12350432, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12278879, "index_size": 40784, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29701, "raw_key_size": 313613, "raw_average_key_size": 26, "raw_value_size": 12076810, "raw_average_value_size": 1017, "num_data_blocks": 1535, "num_entries": 11864, "num_filter_entries": 11864, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769850832, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 210, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:13:52 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:13:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:13:52.657267) [db/compaction/compaction_job.cc:1663] [default] [JOB 134] Compacted 1@0 + 1@6 files to L6 => 12350432 bytes
Jan 31 09:13:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:13:52.699111) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 25.3 rd, 19.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 14.1 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(29.3) write-amplify(12.9) OK, records in: 12354, records dropped: 490 output_compression: NoCompression
Jan 31 09:13:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:13:52.699148) EVENT_LOG_v1 {"time_micros": 1769850832699134, "job": 134, "event": "compaction_finished", "compaction_time_micros": 620412, "compaction_time_cpu_micros": 31069, "output_level": 6, "num_output_files": 1, "total_output_size": 12350432, "num_input_records": 12354, "num_output_records": 11864, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 09:13:52 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000209.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:13:52 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850832699399, "job": 134, "event": "table_file_deletion", "file_number": 209}
Jan 31 09:13:52 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000207.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:13:52 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850832700689, "job": 134, "event": "table_file_deletion", "file_number": 207}
Jan 31 09:13:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:13:52.036526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:13:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:13:52.700807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:13:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:13:52.700814) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:13:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:13:52.700816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:13:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:13:52.700818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:13:52 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:13:52.700820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:13:52 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:52 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:52 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:52.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:53 compute-2 ceph-mon[77282]: pgmap v4270: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 31 09:13:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:53.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/455054707' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:13:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/455054707' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:13:54 compute-2 ceph-mon[77282]: pgmap v4271: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 31 09:13:54 compute-2 sudo[341126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:13:54 compute-2 sudo[341126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:13:54 compute-2 sudo[341126]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:54 compute-2 sudo[341151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:13:54 compute-2 sudo[341151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:13:54 compute-2 sudo[341151]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:54 compute-2 sudo[341176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:13:54 compute-2 sudo[341176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:13:54 compute-2 sudo[341176]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:54 compute-2 sudo[341201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:13:54 compute-2 sudo[341201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:13:54 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:54 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:54 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:54.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:55.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:55 compute-2 sudo[341201]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:55 compute-2 nova_compute[226829]: 2026-01-31 09:13:55.753 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:55 compute-2 nova_compute[226829]: 2026-01-31 09:13:55.755 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 09:13:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:13:56 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:13:56 compute-2 ceph-mon[77282]: pgmap v4272: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 31 09:13:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:13:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:57.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:13:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:13:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:57.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:13:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:13:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:13:57 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:13:58 compute-2 ceph-mon[77282]: pgmap v4273: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 68 op/s
Jan 31 09:13:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:59.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:13:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:59.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:00 compute-2 ceph-mon[77282]: pgmap v4274: 305 pgs: 305 active+clean; 171 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 517 KiB/s wr, 76 op/s
Jan 31 09:14:00 compute-2 nova_compute[226829]: 2026-01-31 09:14:00.756 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:14:00 compute-2 nova_compute[226829]: 2026-01-31 09:14:00.758 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:14:00 compute-2 nova_compute[226829]: 2026-01-31 09:14:00.758 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 31 09:14:00 compute-2 nova_compute[226829]: 2026-01-31 09:14:00.758 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:14:00 compute-2 nova_compute[226829]: 2026-01-31 09:14:00.792 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:00 compute-2 nova_compute[226829]: 2026-01-31 09:14:00.792 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:14:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:00.826 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '113'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:14:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:01.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:01.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:01 compute-2 ceph-mon[77282]: pgmap v4275: 305 pgs: 305 active+clean; 184 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.6 MiB/s wr, 67 op/s
Jan 31 09:14:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:14:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:03.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:03.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:04 compute-2 ceph-mon[77282]: pgmap v4276: 305 pgs: 305 active+clean; 195 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 190 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Jan 31 09:14:04 compute-2 sudo[341264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:14:04 compute-2 sudo[341264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:14:04 compute-2 sudo[341264]: pam_unix(sudo:session): session closed for user root
Jan 31 09:14:04 compute-2 sudo[341289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:14:04 compute-2 sudo[341289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:14:04 compute-2 sudo[341289]: pam_unix(sudo:session): session closed for user root
Jan 31 09:14:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:05.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:14:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:05.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:14:05 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:14:05 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:14:05 compute-2 nova_compute[226829]: 2026-01-31 09:14:05.793 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:06 compute-2 ceph-mon[77282]: pgmap v4277: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 264 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 31 09:14:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:06.961 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:14:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:06.962 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:14:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:06.962 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:14:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:14:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:07.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:14:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:14:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:14:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:07.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:14:08 compute-2 ceph-mon[77282]: pgmap v4278: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 264 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 31 09:14:08 compute-2 nova_compute[226829]: 2026-01-31 09:14:08.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:14:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:09.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:09.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:09 compute-2 ceph-mon[77282]: pgmap v4279: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 264 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 31 09:14:10 compute-2 sudo[341316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:14:10 compute-2 sudo[341316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:14:10 compute-2 sudo[341316]: pam_unix(sudo:session): session closed for user root
Jan 31 09:14:10 compute-2 sudo[341341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:14:10 compute-2 sudo[341341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:14:10 compute-2 sudo[341341]: pam_unix(sudo:session): session closed for user root
Jan 31 09:14:10 compute-2 nova_compute[226829]: 2026-01-31 09:14:10.795 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:11.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:11 compute-2 podman[341367]: 2026-01-31 09:14:11.208868971 +0000 UTC m=+0.092560017 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Jan 31 09:14:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:11.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:11 compute-2 ceph-mon[77282]: pgmap v4280: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 248 KiB/s rd, 1.7 MiB/s wr, 52 op/s
Jan 31 09:14:12 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:14:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:14:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:13.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:14:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:14:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:13.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:14:14 compute-2 ceph-mon[77282]: pgmap v4281: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 125 KiB/s rd, 607 KiB/s wr, 31 op/s
Jan 31 09:14:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:14:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:15.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:14:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:15.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:15 compute-2 nova_compute[226829]: 2026-01-31 09:14:15.796 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:15 compute-2 ceph-mon[77282]: pgmap v4282: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 74 KiB/s rd, 48 KiB/s wr, 11 op/s
Jan 31 09:14:16 compute-2 podman[341395]: 2026-01-31 09:14:16.160709559 +0000 UTC m=+0.047964826 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent)
Jan 31 09:14:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:14:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:17.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:14:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:14:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:14:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:17.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:14:17 compute-2 ceph-mon[77282]: pgmap v4283: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s wr, 0 op/s
Jan 31 09:14:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:14:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:19.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:14:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:14:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:19.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:14:20 compute-2 nova_compute[226829]: 2026-01-31 09:14:20.799 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:14:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:21.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:14:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:21.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:21 compute-2 ceph-mon[77282]: pgmap v4284: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s wr, 0 op/s
Jan 31 09:14:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:14:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:23.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:23 compute-2 ceph-mon[77282]: pgmap v4285: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:14:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:23.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:24 compute-2 ceph-mon[77282]: pgmap v4286: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 31 09:14:24 compute-2 nova_compute[226829]: 2026-01-31 09:14:24.549 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:14:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:14:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:25.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:14:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:25.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:25 compute-2 nova_compute[226829]: 2026-01-31 09:14:25.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:14:25 compute-2 nova_compute[226829]: 2026-01-31 09:14:25.801 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e424 e424: 3 total, 3 up, 3 in
Jan 31 09:14:26 compute-2 ceph-mon[77282]: pgmap v4287: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Jan 31 09:14:26 compute-2 ceph-mon[77282]: osdmap e424: 3 total, 3 up, 3 in
Jan 31 09:14:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/620014776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:14:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e425 e425: 3 total, 3 up, 3 in
Jan 31 09:14:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:27.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:14:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:27.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:28 compute-2 ceph-mon[77282]: osdmap e425: 3 total, 3 up, 3 in
Jan 31 09:14:28 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3477090387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:14:28 compute-2 ceph-mon[77282]: pgmap v4290: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.2 KiB/s rd, 1.7 KiB/s wr, 4 op/s
Jan 31 09:14:28 compute-2 nova_compute[226829]: 2026-01-31 09:14:28.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:14:28 compute-2 nova_compute[226829]: 2026-01-31 09:14:28.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:14:28 compute-2 nova_compute[226829]: 2026-01-31 09:14:28.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:14:28 compute-2 nova_compute[226829]: 2026-01-31 09:14:28.510 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:14:28 compute-2 nova_compute[226829]: 2026-01-31 09:14:28.511 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:14:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:28.943 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=114, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=113) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:14:28 compute-2 nova_compute[226829]: 2026-01-31 09:14:28.943 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:28 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:28.945 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:14:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:29.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:14:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:29.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:14:29 compute-2 nova_compute[226829]: 2026-01-31 09:14:29.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:14:29 compute-2 ceph-mon[77282]: pgmap v4291: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 2.5 KiB/s wr, 13 op/s
Jan 31 09:14:30 compute-2 sudo[341421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:14:30 compute-2 sudo[341421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:14:30 compute-2 sudo[341421]: pam_unix(sudo:session): session closed for user root
Jan 31 09:14:30 compute-2 sudo[341447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:14:30 compute-2 sudo[341447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:14:30 compute-2 sudo[341447]: pam_unix(sudo:session): session closed for user root
Jan 31 09:14:30 compute-2 nova_compute[226829]: 2026-01-31 09:14:30.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:14:30 compute-2 nova_compute[226829]: 2026-01-31 09:14:30.530 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:14:30 compute-2 nova_compute[226829]: 2026-01-31 09:14:30.531 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:14:30 compute-2 nova_compute[226829]: 2026-01-31 09:14:30.532 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:14:30 compute-2 nova_compute[226829]: 2026-01-31 09:14:30.532 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:14:30 compute-2 nova_compute[226829]: 2026-01-31 09:14:30.532 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:14:30 compute-2 nova_compute[226829]: 2026-01-31 09:14:30.802 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:14:30 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/658694514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:14:30 compute-2 nova_compute[226829]: 2026-01-31 09:14:30.958 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:14:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/658694514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:14:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:31.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:31 compute-2 nova_compute[226829]: 2026-01-31 09:14:31.154 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:14:31 compute-2 nova_compute[226829]: 2026-01-31 09:14:31.155 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4048MB free_disk=20.988109588623047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:14:31 compute-2 nova_compute[226829]: 2026-01-31 09:14:31.155 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:14:31 compute-2 nova_compute[226829]: 2026-01-31 09:14:31.155 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:14:31 compute-2 nova_compute[226829]: 2026-01-31 09:14:31.211 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:14:31 compute-2 nova_compute[226829]: 2026-01-31 09:14:31.211 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:14:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:31.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:31 compute-2 nova_compute[226829]: 2026-01-31 09:14:31.303 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:14:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:14:31 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3177446391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:14:31 compute-2 nova_compute[226829]: 2026-01-31 09:14:31.757 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:14:31 compute-2 nova_compute[226829]: 2026-01-31 09:14:31.764 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:14:31 compute-2 nova_compute[226829]: 2026-01-31 09:14:31.810 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:14:31 compute-2 nova_compute[226829]: 2026-01-31 09:14:31.812 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:14:31 compute-2 nova_compute[226829]: 2026-01-31 09:14:31.812 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:14:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3177446391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:14:32 compute-2 ceph-mon[77282]: pgmap v4292: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 2.0 KiB/s wr, 24 op/s
Jan 31 09:14:32 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:14:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:14:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:33.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:14:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:33.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:34 compute-2 ceph-mon[77282]: pgmap v4293: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 5.7 KiB/s wr, 20 op/s
Jan 31 09:14:34 compute-2 nova_compute[226829]: 2026-01-31 09:14:34.813 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:14:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:35.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1583078624' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:14:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:35.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:35 compute-2 nova_compute[226829]: 2026-01-31 09:14:35.804 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:36 compute-2 nova_compute[226829]: 2026-01-31 09:14:36.308 226833 DEBUG oslo_concurrency.lockutils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "71be684d-2233-462f-8268-a0bf7ea3f281" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:14:36 compute-2 nova_compute[226829]: 2026-01-31 09:14:36.309 226833 DEBUG oslo_concurrency.lockutils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "71be684d-2233-462f-8268-a0bf7ea3f281" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:14:36 compute-2 nova_compute[226829]: 2026-01-31 09:14:36.328 226833 DEBUG nova.compute.manager [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 09:14:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3282541616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:14:36 compute-2 ceph-mon[77282]: pgmap v4294: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 6.9 KiB/s wr, 19 op/s
Jan 31 09:14:36 compute-2 nova_compute[226829]: 2026-01-31 09:14:36.470 226833 DEBUG oslo_concurrency.lockutils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:14:36 compute-2 nova_compute[226829]: 2026-01-31 09:14:36.471 226833 DEBUG oslo_concurrency.lockutils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:14:36 compute-2 nova_compute[226829]: 2026-01-31 09:14:36.476 226833 DEBUG nova.virt.hardware [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 09:14:36 compute-2 nova_compute[226829]: 2026-01-31 09:14:36.477 226833 INFO nova.compute.claims [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Claim successful on node compute-2.ctlplane.example.com
Jan 31 09:14:36 compute-2 nova_compute[226829]: 2026-01-31 09:14:36.623 226833 DEBUG oslo_concurrency.processutils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:14:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:14:37 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1936571576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:14:37 compute-2 nova_compute[226829]: 2026-01-31 09:14:37.052 226833 DEBUG oslo_concurrency.processutils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:14:37 compute-2 nova_compute[226829]: 2026-01-31 09:14:37.058 226833 DEBUG nova.compute.provider_tree [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:14:37 compute-2 nova_compute[226829]: 2026-01-31 09:14:37.081 226833 DEBUG nova.scheduler.client.report [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:14:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:37.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:37 compute-2 nova_compute[226829]: 2026-01-31 09:14:37.108 226833 DEBUG oslo_concurrency.lockutils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:14:37 compute-2 nova_compute[226829]: 2026-01-31 09:14:37.109 226833 DEBUG nova.compute.manager [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 09:14:37 compute-2 nova_compute[226829]: 2026-01-31 09:14:37.160 226833 INFO nova.virt.libvirt.driver [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 09:14:37 compute-2 nova_compute[226829]: 2026-01-31 09:14:37.163 226833 DEBUG nova.compute.manager [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 09:14:37 compute-2 nova_compute[226829]: 2026-01-31 09:14:37.163 226833 DEBUG nova.network.neutron [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 09:14:37 compute-2 nova_compute[226829]: 2026-01-31 09:14:37.191 226833 DEBUG nova.compute.manager [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 09:14:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:14:37 compute-2 nova_compute[226829]: 2026-01-31 09:14:37.232 226833 INFO nova.virt.block_device [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Booting with volume snapshot 19cd7d76-1c46-4377-b72d-ef79e4149d09 at /dev/vda
Jan 31 09:14:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:37.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:37 compute-2 nova_compute[226829]: 2026-01-31 09:14:37.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:14:37 compute-2 nova_compute[226829]: 2026-01-31 09:14:37.849 226833 DEBUG nova.policy [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecd39871d7fd438f88b36601f25d6eb6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '98d10c0290e340a08e9d1726bf0066bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 09:14:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1936571576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:14:38 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:38.950 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '114'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:14:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:14:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:39.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:14:39 compute-2 ceph-mon[77282]: pgmap v4295: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 6.1 KiB/s wr, 17 op/s
Jan 31 09:14:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:39.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:39 compute-2 nova_compute[226829]: 2026-01-31 09:14:39.449 226833 DEBUG nova.network.neutron [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Successfully created port: 16bbc36d-c752-47fb-8a65-efe72e3acf06 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 09:14:40 compute-2 nova_compute[226829]: 2026-01-31 09:14:40.806 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:14:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:41.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:41 compute-2 ceph-mon[77282]: pgmap v4296: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 5.5 KiB/s wr, 16 op/s
Jan 31 09:14:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:14:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:41.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:14:41 compute-2 nova_compute[226829]: 2026-01-31 09:14:41.866 226833 DEBUG nova.network.neutron [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Successfully updated port: 16bbc36d-c752-47fb-8a65-efe72e3acf06 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 09:14:41 compute-2 nova_compute[226829]: 2026-01-31 09:14:41.875 226833 DEBUG os_brick.utils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 09:14:41 compute-2 nova_compute[226829]: 2026-01-31 09:14:41.879 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:14:41 compute-2 nova_compute[226829]: 2026-01-31 09:14:41.896 236868 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:14:41 compute-2 nova_compute[226829]: 2026-01-31 09:14:41.896 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[c6ab00d5-cf74-43dd-b37d-fe6fcff699ec]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:14:41 compute-2 nova_compute[226829]: 2026-01-31 09:14:41.898 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:14:41 compute-2 nova_compute[226829]: 2026-01-31 09:14:41.903 226833 DEBUG oslo_concurrency.lockutils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "refresh_cache-71be684d-2233-462f-8268-a0bf7ea3f281" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:14:41 compute-2 nova_compute[226829]: 2026-01-31 09:14:41.903 226833 DEBUG oslo_concurrency.lockutils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquired lock "refresh_cache-71be684d-2233-462f-8268-a0bf7ea3f281" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:14:41 compute-2 nova_compute[226829]: 2026-01-31 09:14:41.903 226833 DEBUG nova.network.neutron [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 09:14:41 compute-2 nova_compute[226829]: 2026-01-31 09:14:41.904 236868 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:14:41 compute-2 nova_compute[226829]: 2026-01-31 09:14:41.904 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[ee9791fb-a1cd-4046-8577-6aea060e6d19]: (4, ('InitiatorName=iqn.1994-05.com.redhat:70a4e945afb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:14:41 compute-2 nova_compute[226829]: 2026-01-31 09:14:41.906 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:14:41 compute-2 nova_compute[226829]: 2026-01-31 09:14:41.912 236868 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:14:41 compute-2 nova_compute[226829]: 2026-01-31 09:14:41.912 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[e9c2f035-da66-4d88-b103-9470a9c10a19]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:14:41 compute-2 nova_compute[226829]: 2026-01-31 09:14:41.913 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[21265b12-3377-416d-be16-ee8e6ed638cc]: (4, 'd14f084b-ec77-4fba-801f-103494d34b3a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:14:41 compute-2 nova_compute[226829]: 2026-01-31 09:14:41.916 226833 DEBUG oslo_concurrency.processutils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:14:41 compute-2 nova_compute[226829]: 2026-01-31 09:14:41.939 226833 DEBUG oslo_concurrency.processutils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "nvme version" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:14:41 compute-2 nova_compute[226829]: 2026-01-31 09:14:41.941 226833 DEBUG os_brick.initiator.connectors.lightos [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 09:14:41 compute-2 nova_compute[226829]: 2026-01-31 09:14:41.942 226833 DEBUG os_brick.initiator.connectors.lightos [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 09:14:41 compute-2 nova_compute[226829]: 2026-01-31 09:14:41.942 226833 DEBUG os_brick.initiator.connectors.lightos [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 09:14:41 compute-2 nova_compute[226829]: 2026-01-31 09:14:41.942 226833 DEBUG os_brick.utils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] <== get_connector_properties: return (66ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:70a4e945afb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'd14f084b-ec77-4fba-801f-103494d34b3a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 09:14:41 compute-2 nova_compute[226829]: 2026-01-31 09:14:41.942 226833 DEBUG nova.virt.block_device [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Updating existing volume attachment record: 6950d456-da22-44be-910e-060a13ad0ffd _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 09:14:42 compute-2 nova_compute[226829]: 2026-01-31 09:14:42.029 226833 DEBUG nova.compute.manager [req-3e6b1a7d-f50f-437d-8305-20f442da5ba0 req-61c414ab-1d5f-4740-9e96-2595486b1194 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Received event network-changed-16bbc36d-c752-47fb-8a65-efe72e3acf06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:14:42 compute-2 nova_compute[226829]: 2026-01-31 09:14:42.029 226833 DEBUG nova.compute.manager [req-3e6b1a7d-f50f-437d-8305-20f442da5ba0 req-61c414ab-1d5f-4740-9e96-2595486b1194 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Refreshing instance network info cache due to event network-changed-16bbc36d-c752-47fb-8a65-efe72e3acf06. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:14:42 compute-2 nova_compute[226829]: 2026-01-31 09:14:42.030 226833 DEBUG oslo_concurrency.lockutils [req-3e6b1a7d-f50f-437d-8305-20f442da5ba0 req-61c414ab-1d5f-4740-9e96-2595486b1194 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-71be684d-2233-462f-8268-a0bf7ea3f281" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:14:42 compute-2 podman[341550]: 2026-01-31 09:14:42.212232916 +0000 UTC m=+0.099279640 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller)
Jan 31 09:14:42 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:14:42 compute-2 nova_compute[226829]: 2026-01-31 09:14:42.752 226833 DEBUG nova.network.neutron [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 09:14:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:14:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:43.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:14:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:43.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:43 compute-2 ceph-mon[77282]: pgmap v4297: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 5.8 KiB/s wr, 19 op/s
Jan 31 09:14:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1294371103' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:14:43 compute-2 nova_compute[226829]: 2026-01-31 09:14:43.400 226833 DEBUG nova.compute.manager [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 09:14:43 compute-2 nova_compute[226829]: 2026-01-31 09:14:43.402 226833 DEBUG nova.virt.libvirt.driver [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 09:14:43 compute-2 nova_compute[226829]: 2026-01-31 09:14:43.402 226833 INFO nova.virt.libvirt.driver [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Creating image(s)
Jan 31 09:14:43 compute-2 nova_compute[226829]: 2026-01-31 09:14:43.403 226833 DEBUG nova.virt.libvirt.driver [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 09:14:43 compute-2 nova_compute[226829]: 2026-01-31 09:14:43.403 226833 DEBUG nova.virt.libvirt.driver [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Ensure instance console log exists: /var/lib/nova/instances/71be684d-2233-462f-8268-a0bf7ea3f281/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 09:14:43 compute-2 nova_compute[226829]: 2026-01-31 09:14:43.404 226833 DEBUG oslo_concurrency.lockutils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:14:43 compute-2 nova_compute[226829]: 2026-01-31 09:14:43.404 226833 DEBUG oslo_concurrency.lockutils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:14:43 compute-2 nova_compute[226829]: 2026-01-31 09:14:43.404 226833 DEBUG oslo_concurrency.lockutils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:14:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:45.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.138 226833 DEBUG nova.network.neutron [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Updating instance_info_cache with network_info: [{"id": "16bbc36d-c752-47fb-8a65-efe72e3acf06", "address": "fa:16:3e:d5:ba:ad", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16bbc36d-c7", "ovs_interfaceid": "16bbc36d-c752-47fb-8a65-efe72e3acf06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.157 226833 DEBUG oslo_concurrency.lockutils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Releasing lock "refresh_cache-71be684d-2233-462f-8268-a0bf7ea3f281" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.158 226833 DEBUG nova.compute.manager [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Instance network_info: |[{"id": "16bbc36d-c752-47fb-8a65-efe72e3acf06", "address": "fa:16:3e:d5:ba:ad", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16bbc36d-c7", "ovs_interfaceid": "16bbc36d-c752-47fb-8a65-efe72e3acf06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.158 226833 DEBUG oslo_concurrency.lockutils [req-3e6b1a7d-f50f-437d-8305-20f442da5ba0 req-61c414ab-1d5f-4740-9e96-2595486b1194 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-71be684d-2233-462f-8268-a0bf7ea3f281" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.158 226833 DEBUG nova.network.neutron [req-3e6b1a7d-f50f-437d-8305-20f442da5ba0 req-61c414ab-1d5f-4740-9e96-2595486b1194 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Refreshing network info cache for port 16bbc36d-c752-47fb-8a65-efe72e3acf06 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.161 226833 DEBUG nova.virt.libvirt.driver [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Start _get_guest_xml network_info=[{"id": "16bbc36d-c752-47fb-8a65-efe72e3acf06", "address": "fa:16:3e:d5:ba:ad", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16bbc36d-c7", "ovs_interfaceid": "16bbc36d-c752-47fb-8a65-efe72e3acf06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2026-01-31T09:14:24Z,direct_url=<?>,disk_format='qcow2',id=3235f05a-670b-496a-a8b6-5b3e82346d62,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-1538724823',owner='98d10c0290e340a08e9d1726bf0066bf',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2026-01-31T09:14:27Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'delete_on_termination': True, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-873ab52c-6ddb-4591-95be-4fd33ed9a07f', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '873ab52c-6ddb-4591-95be-4fd33ed9a07f', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '71be684d-2233-462f-8268-a0bf7ea3f281', 'attached_at': '', 'detached_at': '', 'volume_id': '873ab52c-6ddb-4591-95be-4fd33ed9a07f', 'serial': '873ab52c-6ddb-4591-95be-4fd33ed9a07f'}, 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'attachment_id': '6950d456-da22-44be-910e-060a13ad0ffd', 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.166 226833 WARNING nova.virt.libvirt.driver [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.171 226833 DEBUG nova.virt.libvirt.host [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.171 226833 DEBUG nova.virt.libvirt.host [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.176 226833 DEBUG nova.virt.libvirt.host [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.177 226833 DEBUG nova.virt.libvirt.host [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.178 226833 DEBUG nova.virt.libvirt.driver [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.179 226833 DEBUG nova.virt.hardware [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2026-01-31T09:14:24Z,direct_url=<?>,disk_format='qcow2',id=3235f05a-670b-496a-a8b6-5b3e82346d62,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-1538724823',owner='98d10c0290e340a08e9d1726bf0066bf',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2026-01-31T09:14:27Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.179 226833 DEBUG nova.virt.hardware [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.179 226833 DEBUG nova.virt.hardware [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.180 226833 DEBUG nova.virt.hardware [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.180 226833 DEBUG nova.virt.hardware [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.180 226833 DEBUG nova.virt.hardware [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.180 226833 DEBUG nova.virt.hardware [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.181 226833 DEBUG nova.virt.hardware [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.181 226833 DEBUG nova.virt.hardware [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.181 226833 DEBUG nova.virt.hardware [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.181 226833 DEBUG nova.virt.hardware [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.211 226833 DEBUG nova.storage.rbd_utils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] rbd image 71be684d-2233-462f-8268-a0bf7ea3f281_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.215 226833 DEBUG oslo_concurrency.processutils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:14:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:45.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:45 compute-2 ceph-mon[77282]: pgmap v4298: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 9.3 KiB/s rd, 5.3 KiB/s wr, 13 op/s
Jan 31 09:14:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 09:14:45 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2205534106' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.619 226833 DEBUG oslo_concurrency.processutils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.642 226833 DEBUG nova.virt.libvirt.vif [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:14:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-604575612',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-604575612',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-604575612',id=221,image_ref='3235f05a-670b-496a-a8b6-5b3e82346d62',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMFXo6TXyOjOGOwMN7nMTJychPX0W8Ids/CcSfyK9jF+CBR02NNz9kBE3K04DOWNvBm8TZYgtZQkcIS2FaVaawjlXKIyFEJNIhjDFXBARZmbEzviijrK8StVMvPdMN+fA==',key_name='tempest-keypair-1394431595',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98d10c0290e340a08e9d1726bf0066bf',ramdisk_id='',reservation_id='r-k0oxdq20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1294459393',image_owner_user_name='tempest-TestVolumeBootPattern-1294459393-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1294459393',owner_user_name='tempest-TestVolumeBootPattern-1294459393-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:14:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ecd39871d7fd438f88b36601f25d6eb6',uuid=71be684d-2233-462f-8268-a0bf7ea3f281,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "16bbc36d-c752-47fb-8a65-efe72e3acf06", "address": "fa:16:3e:d5:ba:ad", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16bbc36d-c7", "ovs_interfaceid": "16bbc36d-c752-47fb-8a65-efe72e3acf06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.643 226833 DEBUG nova.network.os_vif_util [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converting VIF {"id": "16bbc36d-c752-47fb-8a65-efe72e3acf06", "address": "fa:16:3e:d5:ba:ad", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16bbc36d-c7", "ovs_interfaceid": "16bbc36d-c752-47fb-8a65-efe72e3acf06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.644 226833 DEBUG nova.network.os_vif_util [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:ba:ad,bridge_name='br-int',has_traffic_filtering=True,id=16bbc36d-c752-47fb-8a65-efe72e3acf06,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16bbc36d-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.646 226833 DEBUG nova.objects.instance [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 71be684d-2233-462f-8268-a0bf7ea3f281 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.658 226833 DEBUG nova.virt.libvirt.driver [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] End _get_guest_xml xml=<domain type="kvm">
Jan 31 09:14:45 compute-2 nova_compute[226829]:   <uuid>71be684d-2233-462f-8268-a0bf7ea3f281</uuid>
Jan 31 09:14:45 compute-2 nova_compute[226829]:   <name>instance-000000dd</name>
Jan 31 09:14:45 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 09:14:45 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 09:14:45 compute-2 nova_compute[226829]:   <metadata>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <nova:name>tempest-TestVolumeBootPattern-image-snapshot-server-604575612</nova:name>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 09:14:45</nova:creationTime>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 09:14:45 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 09:14:45 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 09:14:45 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 09:14:45 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 09:14:45 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 09:14:45 compute-2 nova_compute[226829]:         <nova:user uuid="ecd39871d7fd438f88b36601f25d6eb6">tempest-TestVolumeBootPattern-1294459393-project-member</nova:user>
Jan 31 09:14:45 compute-2 nova_compute[226829]:         <nova:project uuid="98d10c0290e340a08e9d1726bf0066bf">tempest-TestVolumeBootPattern-1294459393</nova:project>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <nova:root type="image" uuid="3235f05a-670b-496a-a8b6-5b3e82346d62"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 09:14:45 compute-2 nova_compute[226829]:         <nova:port uuid="16bbc36d-c752-47fb-8a65-efe72e3acf06">
Jan 31 09:14:45 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 09:14:45 compute-2 nova_compute[226829]:   </metadata>
Jan 31 09:14:45 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <system>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <entry name="serial">71be684d-2233-462f-8268-a0bf7ea3f281</entry>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <entry name="uuid">71be684d-2233-462f-8268-a0bf7ea3f281</entry>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     </system>
Jan 31 09:14:45 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 09:14:45 compute-2 nova_compute[226829]:   <os>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:   </os>
Jan 31 09:14:45 compute-2 nova_compute[226829]:   <features>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <apic/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:   </features>
Jan 31 09:14:45 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:   </clock>
Jan 31 09:14:45 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:   </cpu>
Jan 31 09:14:45 compute-2 nova_compute[226829]:   <devices>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/71be684d-2233-462f-8268-a0bf7ea3f281_disk.config">
Jan 31 09:14:45 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       </source>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 09:14:45 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       </auth>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     </disk>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <source protocol="rbd" name="volumes/volume-873ab52c-6ddb-4591-95be-4fd33ed9a07f">
Jan 31 09:14:45 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       </source>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 09:14:45 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       </auth>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <serial>873ab52c-6ddb-4591-95be-4fd33ed9a07f</serial>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     </disk>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:d5:ba:ad"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <target dev="tap16bbc36d-c7"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     </interface>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/71be684d-2233-462f-8268-a0bf7ea3f281/console.log" append="off"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     </serial>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <video>
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     </video>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <input type="keyboard" bus="usb"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     </rng>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 09:14:45 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 09:14:45 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 09:14:45 compute-2 nova_compute[226829]:   </devices>
Jan 31 09:14:45 compute-2 nova_compute[226829]: </domain>
Jan 31 09:14:45 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.660 226833 DEBUG nova.compute.manager [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Preparing to wait for external event network-vif-plugged-16bbc36d-c752-47fb-8a65-efe72e3acf06 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.660 226833 DEBUG oslo_concurrency.lockutils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "71be684d-2233-462f-8268-a0bf7ea3f281-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.660 226833 DEBUG oslo_concurrency.lockutils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "71be684d-2233-462f-8268-a0bf7ea3f281-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.661 226833 DEBUG oslo_concurrency.lockutils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "71be684d-2233-462f-8268-a0bf7ea3f281-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.662 226833 DEBUG nova.virt.libvirt.vif [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:14:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-604575612',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-604575612',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-604575612',id=221,image_ref='3235f05a-670b-496a-a8b6-5b3e82346d62',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMFXo6TXyOjOGOwMN7nMTJychPX0W8Ids/CcSfyK9jF+CBR02NNz9kBE3K04DOWNvBm8TZYgtZQkcIS2FaVaawjlXKIyFEJNIhjDFXBARZmbEzviijrK8StVMvPdMN+fA==',key_name='tempest-keypair-1394431595',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98d10c0290e340a08e9d1726bf0066bf',ramdisk_id='',reservation_id='r-k0oxdq20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1294459393',image_owner_user_name='tempest-TestVolumeBootPattern-1294459393-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1294459393',owner_user_name='tempest-TestVolumeBootPattern-1294459393-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:14:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ecd39871d7fd438f88b36601f25d6eb6',uuid=71be684d-2233-462f-8268-a0bf7ea3f281,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state=
'building') vif={"id": "16bbc36d-c752-47fb-8a65-efe72e3acf06", "address": "fa:16:3e:d5:ba:ad", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16bbc36d-c7", "ovs_interfaceid": "16bbc36d-c752-47fb-8a65-efe72e3acf06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.662 226833 DEBUG nova.network.os_vif_util [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converting VIF {"id": "16bbc36d-c752-47fb-8a65-efe72e3acf06", "address": "fa:16:3e:d5:ba:ad", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16bbc36d-c7", "ovs_interfaceid": "16bbc36d-c752-47fb-8a65-efe72e3acf06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.663 226833 DEBUG nova.network.os_vif_util [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:ba:ad,bridge_name='br-int',has_traffic_filtering=True,id=16bbc36d-c752-47fb-8a65-efe72e3acf06,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16bbc36d-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.663 226833 DEBUG os_vif [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:ba:ad,bridge_name='br-int',has_traffic_filtering=True,id=16bbc36d-c752-47fb-8a65-efe72e3acf06,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16bbc36d-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.664 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.664 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.665 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.674 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.675 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16bbc36d-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.675 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap16bbc36d-c7, col_values=(('external_ids', {'iface-id': '16bbc36d-c752-47fb-8a65-efe72e3acf06', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d5:ba:ad', 'vm-uuid': '71be684d-2233-462f-8268-a0bf7ea3f281'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:14:45 compute-2 NetworkManager[48999]: <info>  [1769850885.6784] manager: (tap16bbc36d-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/435)
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.679 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.683 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.684 226833 INFO os_vif [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:ba:ad,bridge_name='br-int',has_traffic_filtering=True,id=16bbc36d-c752-47fb-8a65-efe72e3acf06,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16bbc36d-c7')
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.796 226833 DEBUG nova.virt.libvirt.driver [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.797 226833 DEBUG nova.virt.libvirt.driver [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.798 226833 DEBUG nova.virt.libvirt.driver [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] No VIF found with MAC fa:16:3e:d5:ba:ad, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.799 226833 INFO nova.virt.libvirt.driver [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Using config drive
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.827 226833 DEBUG nova.storage.rbd_utils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] rbd image 71be684d-2233-462f-8268-a0bf7ea3f281_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:14:45 compute-2 nova_compute[226829]: 2026-01-31 09:14:45.832 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:46 compute-2 nova_compute[226829]: 2026-01-31 09:14:46.292 226833 INFO nova.virt.libvirt.driver [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Creating config drive at /var/lib/nova/instances/71be684d-2233-462f-8268-a0bf7ea3f281/disk.config
Jan 31 09:14:46 compute-2 nova_compute[226829]: 2026-01-31 09:14:46.300 226833 DEBUG oslo_concurrency.processutils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/71be684d-2233-462f-8268-a0bf7ea3f281/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpo8wgecw2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:14:46 compute-2 nova_compute[226829]: 2026-01-31 09:14:46.431 226833 DEBUG oslo_concurrency.processutils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/71be684d-2233-462f-8268-a0bf7ea3f281/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpo8wgecw2" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:14:46 compute-2 nova_compute[226829]: 2026-01-31 09:14:46.461 226833 DEBUG nova.storage.rbd_utils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] rbd image 71be684d-2233-462f-8268-a0bf7ea3f281_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:14:46 compute-2 nova_compute[226829]: 2026-01-31 09:14:46.464 226833 DEBUG oslo_concurrency.processutils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/71be684d-2233-462f-8268-a0bf7ea3f281/disk.config 71be684d-2233-462f-8268-a0bf7ea3f281_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:14:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2205534106' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:14:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:47.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:47 compute-2 podman[341677]: 2026-01-31 09:14:47.152671694 +0000 UTC m=+0.043654258 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 31 09:14:47 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:14:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:47.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:48 compute-2 nova_compute[226829]: 2026-01-31 09:14:48.152 226833 DEBUG nova.network.neutron [req-3e6b1a7d-f50f-437d-8305-20f442da5ba0 req-61c414ab-1d5f-4740-9e96-2595486b1194 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Updated VIF entry in instance network info cache for port 16bbc36d-c752-47fb-8a65-efe72e3acf06. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:14:48 compute-2 nova_compute[226829]: 2026-01-31 09:14:48.153 226833 DEBUG nova.network.neutron [req-3e6b1a7d-f50f-437d-8305-20f442da5ba0 req-61c414ab-1d5f-4740-9e96-2595486b1194 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Updating instance_info_cache with network_info: [{"id": "16bbc36d-c752-47fb-8a65-efe72e3acf06", "address": "fa:16:3e:d5:ba:ad", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16bbc36d-c7", "ovs_interfaceid": "16bbc36d-c752-47fb-8a65-efe72e3acf06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:14:48 compute-2 nova_compute[226829]: 2026-01-31 09:14:48.174 226833 DEBUG oslo_concurrency.lockutils [req-3e6b1a7d-f50f-437d-8305-20f442da5ba0 req-61c414ab-1d5f-4740-9e96-2595486b1194 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-71be684d-2233-462f-8268-a0bf7ea3f281" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:14:48 compute-2 ceph-mon[77282]: pgmap v4299: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 9.3 KiB/s rd, 2.7 KiB/s wr, 12 op/s
Jan 31 09:14:48 compute-2 nova_compute[226829]: 2026-01-31 09:14:48.440 226833 DEBUG oslo_concurrency.processutils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/71be684d-2233-462f-8268-a0bf7ea3f281/disk.config 71be684d-2233-462f-8268-a0bf7ea3f281_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.976s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:14:48 compute-2 nova_compute[226829]: 2026-01-31 09:14:48.441 226833 INFO nova.virt.libvirt.driver [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Deleting local config drive /var/lib/nova/instances/71be684d-2233-462f-8268-a0bf7ea3f281/disk.config because it was imported into RBD.
Jan 31 09:14:48 compute-2 kernel: tap16bbc36d-c7: entered promiscuous mode
Jan 31 09:14:48 compute-2 ovn_controller[133834]: 2026-01-31T09:14:48Z|00868|binding|INFO|Claiming lport 16bbc36d-c752-47fb-8a65-efe72e3acf06 for this chassis.
Jan 31 09:14:48 compute-2 ovn_controller[133834]: 2026-01-31T09:14:48Z|00869|binding|INFO|16bbc36d-c752-47fb-8a65-efe72e3acf06: Claiming fa:16:3e:d5:ba:ad 10.100.0.12
Jan 31 09:14:48 compute-2 NetworkManager[48999]: <info>  [1769850888.4887] manager: (tap16bbc36d-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/436)
Jan 31 09:14:48 compute-2 nova_compute[226829]: 2026-01-31 09:14:48.487 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:48 compute-2 nova_compute[226829]: 2026-01-31 09:14:48.489 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:48 compute-2 nova_compute[226829]: 2026-01-31 09:14:48.496 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:48 compute-2 nova_compute[226829]: 2026-01-31 09:14:48.498 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:48 compute-2 NetworkManager[48999]: <info>  [1769850888.5007] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/437)
Jan 31 09:14:48 compute-2 nova_compute[226829]: 2026-01-31 09:14:48.500 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:48 compute-2 NetworkManager[48999]: <info>  [1769850888.5016] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/438)
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:48.505 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:ba:ad 10.100.0.12'], port_security=['fa:16:3e:d5:ba:ad 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '71be684d-2233-462f-8268-a0bf7ea3f281', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c9ca540-57e7-412d-8ef3-af923db0a265', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98d10c0290e340a08e9d1726bf0066bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2b30892c-1416-4aa6-83f7-0d4ffa0734bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e88fefe4-17cc-4664-bc86-8614a5f025ec, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=16bbc36d-c752-47fb-8a65-efe72e3acf06) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:48.506 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 16bbc36d-c752-47fb-8a65-efe72e3acf06 in datapath 5c9ca540-57e7-412d-8ef3-af923db0a265 bound to our chassis
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:48.507 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5c9ca540-57e7-412d-8ef3-af923db0a265
Jan 31 09:14:48 compute-2 systemd-udevd[341713]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:48.518 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cdc40d34-1f5d-4ed2-8527-816a21a578f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:48.520 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5c9ca540-51 in ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 09:14:48 compute-2 systemd-machined[195142]: New machine qemu-98-instance-000000dd.
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:48.522 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5c9ca540-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:48.522 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9311099f-5471-4337-8c4d-5371356cf7d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:48.523 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[28883a68-1718-45af-9574-4ba986a67997]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:14:48 compute-2 nova_compute[226829]: 2026-01-31 09:14:48.527 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:48 compute-2 NetworkManager[48999]: <info>  [1769850888.5295] device (tap16bbc36d-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 09:14:48 compute-2 NetworkManager[48999]: <info>  [1769850888.5302] device (tap16bbc36d-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 09:14:48 compute-2 systemd[1]: Started Virtual Machine qemu-98-instance-000000dd.
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:48.534 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[678b6b64-6634-403d-bdcf-512a31bb5621]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:14:48 compute-2 nova_compute[226829]: 2026-01-31 09:14:48.537 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:48 compute-2 ovn_controller[133834]: 2026-01-31T09:14:48Z|00870|binding|INFO|Setting lport 16bbc36d-c752-47fb-8a65-efe72e3acf06 ovn-installed in OVS
Jan 31 09:14:48 compute-2 ovn_controller[133834]: 2026-01-31T09:14:48Z|00871|binding|INFO|Setting lport 16bbc36d-c752-47fb-8a65-efe72e3acf06 up in Southbound
Jan 31 09:14:48 compute-2 nova_compute[226829]: 2026-01-31 09:14:48.540 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:48.554 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3b27b4cb-7658-44b5-993f-6388542d9557]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:48.579 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[e5064ac5-52a3-4193-8d37-342c8f9e162a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:14:48 compute-2 systemd-udevd[341717]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:48.583 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6958e947-6302-4768-a97f-875a8d149b96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:14:48 compute-2 NetworkManager[48999]: <info>  [1769850888.5847] manager: (tap5c9ca540-50): new Veth device (/org/freedesktop/NetworkManager/Devices/439)
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:48.609 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[9926bfb6-e92b-48b5-9228-5e1a93d61c89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:48.612 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[773ecad1-6b06-441f-add6-943690f19e7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:14:48 compute-2 NetworkManager[48999]: <info>  [1769850888.6258] device (tap5c9ca540-50): carrier: link connected
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:48.629 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[cbec7ed1-7954-4b68-824c-57976b5248b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:48.642 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[cc6037ac-8773-4257-af26-9a1e8b65f71d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c9ca540-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:dc:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 271], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1114109, 'reachable_time': 22744, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341746, 'error': None, 'target': 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:48.653 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[a1829143-105b-4163-8e80-cf3348a76fb1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:dcf7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1114109, 'tstamp': 1114109}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341747, 'error': None, 'target': 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:48.663 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6b2adb56-3e8f-47b7-962b-1697cb87138a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c9ca540-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:dc:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 271], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1114109, 'reachable_time': 22744, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 341748, 'error': None, 'target': 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:48.685 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b44f66a0-21f9-4616-8ab7-bbe1d5e464de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:48.727 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[1783f0c6-308e-49c4-a924-50becef6b016]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:48.729 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c9ca540-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:48.729 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:48.730 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c9ca540-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:14:48 compute-2 nova_compute[226829]: 2026-01-31 09:14:48.731 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:48 compute-2 NetworkManager[48999]: <info>  [1769850888.7325] manager: (tap5c9ca540-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/440)
Jan 31 09:14:48 compute-2 kernel: tap5c9ca540-50: entered promiscuous mode
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:48.734 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5c9ca540-50, col_values=(('external_ids', {'iface-id': '016c97be-36ee-470a-8bac-28db98577a8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:14:48 compute-2 nova_compute[226829]: 2026-01-31 09:14:48.736 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:48 compute-2 ovn_controller[133834]: 2026-01-31T09:14:48Z|00872|binding|INFO|Releasing lport 016c97be-36ee-470a-8bac-28db98577a8c from this chassis (sb_readonly=0)
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:48.737 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5c9ca540-57e7-412d-8ef3-af923db0a265.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5c9ca540-57e7-412d-8ef3-af923db0a265.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 09:14:48 compute-2 nova_compute[226829]: 2026-01-31 09:14:48.741 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:48.741 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7537e228-6724-4814-bb00-bfcfb8ba450f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:48.742 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: global
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-5c9ca540-57e7-412d-8ef3-af923db0a265
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/5c9ca540-57e7-412d-8ef3-af923db0a265.pid.haproxy
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 5c9ca540-57e7-412d-8ef3-af923db0a265
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 09:14:48 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:14:48.743 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'env', 'PROCESS_TAG=haproxy-5c9ca540-57e7-412d-8ef3-af923db0a265', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5c9ca540-57e7-412d-8ef3-af923db0a265.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 09:14:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:49.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:49 compute-2 podman[341821]: 2026-01-31 09:14:49.129369002 +0000 UTC m=+0.056809175 container create c2e34254c1355d8790c4fd3b1bc350de805dd098a8e31a4afef9b34916032d8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 09:14:49 compute-2 nova_compute[226829]: 2026-01-31 09:14:49.150 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769850889.1493511, 71be684d-2233-462f-8268-a0bf7ea3f281 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:14:49 compute-2 nova_compute[226829]: 2026-01-31 09:14:49.151 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] VM Started (Lifecycle Event)
Jan 31 09:14:49 compute-2 systemd[1]: Started libpod-conmon-c2e34254c1355d8790c4fd3b1bc350de805dd098a8e31a4afef9b34916032d8a.scope.
Jan 31 09:14:49 compute-2 systemd[1]: Started libcrun container.
Jan 31 09:14:49 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42a2ab490bf57f8f5bde9c8c0047ccddd928e5b7bfc7eeb8c98e0cb09d8b5cd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 09:14:49 compute-2 podman[341821]: 2026-01-31 09:14:49.195751557 +0000 UTC m=+0.123191730 container init c2e34254c1355d8790c4fd3b1bc350de805dd098a8e31a4afef9b34916032d8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 09:14:49 compute-2 podman[341821]: 2026-01-31 09:14:49.104418145 +0000 UTC m=+0.031858338 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 09:14:49 compute-2 podman[341821]: 2026-01-31 09:14:49.200676512 +0000 UTC m=+0.128116685 container start c2e34254c1355d8790c4fd3b1bc350de805dd098a8e31a4afef9b34916032d8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:14:49 compute-2 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[341838]: [NOTICE]   (341842) : New worker (341844) forked
Jan 31 09:14:49 compute-2 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[341838]: [NOTICE]   (341842) : Loading success.
Jan 31 09:14:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:49.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:49 compute-2 nova_compute[226829]: 2026-01-31 09:14:49.316 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:14:49 compute-2 nova_compute[226829]: 2026-01-31 09:14:49.321 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769850889.1506002, 71be684d-2233-462f-8268-a0bf7ea3f281 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:14:49 compute-2 nova_compute[226829]: 2026-01-31 09:14:49.321 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] VM Paused (Lifecycle Event)
Jan 31 09:14:49 compute-2 nova_compute[226829]: 2026-01-31 09:14:49.340 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:14:49 compute-2 nova_compute[226829]: 2026-01-31 09:14:49.342 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 09:14:49 compute-2 nova_compute[226829]: 2026-01-31 09:14:49.360 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 09:14:49 compute-2 nova_compute[226829]: 2026-01-31 09:14:49.815 226833 DEBUG nova.compute.manager [req-95c3f078-cc29-483a-960f-30231f0bff73 req-58d1c34a-f66c-40b8-a83f-7a6c11a221bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Received event network-vif-plugged-16bbc36d-c752-47fb-8a65-efe72e3acf06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:14:49 compute-2 nova_compute[226829]: 2026-01-31 09:14:49.815 226833 DEBUG oslo_concurrency.lockutils [req-95c3f078-cc29-483a-960f-30231f0bff73 req-58d1c34a-f66c-40b8-a83f-7a6c11a221bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "71be684d-2233-462f-8268-a0bf7ea3f281-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:14:49 compute-2 nova_compute[226829]: 2026-01-31 09:14:49.816 226833 DEBUG oslo_concurrency.lockutils [req-95c3f078-cc29-483a-960f-30231f0bff73 req-58d1c34a-f66c-40b8-a83f-7a6c11a221bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71be684d-2233-462f-8268-a0bf7ea3f281-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:14:49 compute-2 nova_compute[226829]: 2026-01-31 09:14:49.816 226833 DEBUG oslo_concurrency.lockutils [req-95c3f078-cc29-483a-960f-30231f0bff73 req-58d1c34a-f66c-40b8-a83f-7a6c11a221bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71be684d-2233-462f-8268-a0bf7ea3f281-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:14:49 compute-2 nova_compute[226829]: 2026-01-31 09:14:49.816 226833 DEBUG nova.compute.manager [req-95c3f078-cc29-483a-960f-30231f0bff73 req-58d1c34a-f66c-40b8-a83f-7a6c11a221bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Processing event network-vif-plugged-16bbc36d-c752-47fb-8a65-efe72e3acf06 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 09:14:49 compute-2 nova_compute[226829]: 2026-01-31 09:14:49.817 226833 DEBUG nova.compute.manager [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 09:14:49 compute-2 nova_compute[226829]: 2026-01-31 09:14:49.820 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769850889.8201478, 71be684d-2233-462f-8268-a0bf7ea3f281 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:14:49 compute-2 nova_compute[226829]: 2026-01-31 09:14:49.821 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] VM Resumed (Lifecycle Event)
Jan 31 09:14:49 compute-2 nova_compute[226829]: 2026-01-31 09:14:49.822 226833 DEBUG nova.virt.libvirt.driver [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 09:14:49 compute-2 nova_compute[226829]: 2026-01-31 09:14:49.825 226833 INFO nova.virt.libvirt.driver [-] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Instance spawned successfully.
Jan 31 09:14:49 compute-2 nova_compute[226829]: 2026-01-31 09:14:49.826 226833 INFO nova.compute.manager [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Took 6.43 seconds to spawn the instance on the hypervisor.
Jan 31 09:14:49 compute-2 nova_compute[226829]: 2026-01-31 09:14:49.826 226833 DEBUG nova.compute.manager [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:14:49 compute-2 ceph-mon[77282]: pgmap v4300: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 8.1 KiB/s rd, 1.1 KiB/s wr, 11 op/s
Jan 31 09:14:49 compute-2 nova_compute[226829]: 2026-01-31 09:14:49.852 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:14:49 compute-2 nova_compute[226829]: 2026-01-31 09:14:49.856 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 09:14:49 compute-2 nova_compute[226829]: 2026-01-31 09:14:49.894 226833 INFO nova.compute.manager [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Took 13.51 seconds to build instance.
Jan 31 09:14:49 compute-2 nova_compute[226829]: 2026-01-31 09:14:49.910 226833 DEBUG oslo_concurrency.lockutils [None req-276be118-9b50-4ddd-8695-2606897c8f69 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "71be684d-2233-462f-8268-a0bf7ea3f281" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:14:50 compute-2 sudo[341854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:14:50 compute-2 sudo[341854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:14:50 compute-2 sudo[341854]: pam_unix(sudo:session): session closed for user root
Jan 31 09:14:50 compute-2 sudo[341879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:14:50 compute-2 sudo[341879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:14:50 compute-2 sudo[341879]: pam_unix(sudo:session): session closed for user root
Jan 31 09:14:50 compute-2 nova_compute[226829]: 2026-01-31 09:14:50.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:14:50 compute-2 nova_compute[226829]: 2026-01-31 09:14:50.487 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:14:50 compute-2 nova_compute[226829]: 2026-01-31 09:14:50.677 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:50 compute-2 nova_compute[226829]: 2026-01-31 09:14:50.811 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:50 compute-2 ceph-mon[77282]: pgmap v4301: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 15 KiB/s wr, 16 op/s
Jan 31 09:14:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:51.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:51.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:14:52 compute-2 nova_compute[226829]: 2026-01-31 09:14:52.251 226833 DEBUG nova.compute.manager [req-61080951-4734-4c89-81a8-548a17b1609f req-e533fff6-d36d-48a7-bf96-de7ef24dc9c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Received event network-vif-plugged-16bbc36d-c752-47fb-8a65-efe72e3acf06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:14:52 compute-2 nova_compute[226829]: 2026-01-31 09:14:52.251 226833 DEBUG oslo_concurrency.lockutils [req-61080951-4734-4c89-81a8-548a17b1609f req-e533fff6-d36d-48a7-bf96-de7ef24dc9c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "71be684d-2233-462f-8268-a0bf7ea3f281-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:14:52 compute-2 nova_compute[226829]: 2026-01-31 09:14:52.252 226833 DEBUG oslo_concurrency.lockutils [req-61080951-4734-4c89-81a8-548a17b1609f req-e533fff6-d36d-48a7-bf96-de7ef24dc9c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71be684d-2233-462f-8268-a0bf7ea3f281-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:14:52 compute-2 nova_compute[226829]: 2026-01-31 09:14:52.252 226833 DEBUG oslo_concurrency.lockutils [req-61080951-4734-4c89-81a8-548a17b1609f req-e533fff6-d36d-48a7-bf96-de7ef24dc9c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71be684d-2233-462f-8268-a0bf7ea3f281-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:14:52 compute-2 nova_compute[226829]: 2026-01-31 09:14:52.252 226833 DEBUG nova.compute.manager [req-61080951-4734-4c89-81a8-548a17b1609f req-e533fff6-d36d-48a7-bf96-de7ef24dc9c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] No waiting events found dispatching network-vif-plugged-16bbc36d-c752-47fb-8a65-efe72e3acf06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:14:52 compute-2 nova_compute[226829]: 2026-01-31 09:14:52.253 226833 WARNING nova.compute.manager [req-61080951-4734-4c89-81a8-548a17b1609f req-e533fff6-d36d-48a7-bf96-de7ef24dc9c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Received unexpected event network-vif-plugged-16bbc36d-c752-47fb-8a65-efe72e3acf06 for instance with vm_state active and task_state None.
Jan 31 09:14:52 compute-2 sshd-session[341905]: Accepted publickey for zuul from 192.168.122.10 port 52050 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 09:14:52 compute-2 systemd-logind[801]: New session 66 of user zuul.
Jan 31 09:14:52 compute-2 systemd[1]: Started Session 66 of User zuul.
Jan 31 09:14:52 compute-2 sshd-session[341905]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:14:52 compute-2 sudo[341909]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 31 09:14:52 compute-2 sudo[341909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:14:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:53.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:53 compute-2 ceph-mon[77282]: pgmap v4302: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 284 KiB/s rd, 24 KiB/s wr, 32 op/s
Jan 31 09:14:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:53.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:54 compute-2 nova_compute[226829]: 2026-01-31 09:14:54.369 226833 DEBUG nova.compute.manager [req-dfbcc3e0-3eea-4ac4-981a-890a853401d5 req-990bea6a-98d5-4bce-98f4-e4298d1610c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Received event network-changed-16bbc36d-c752-47fb-8a65-efe72e3acf06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:14:54 compute-2 nova_compute[226829]: 2026-01-31 09:14:54.371 226833 DEBUG nova.compute.manager [req-dfbcc3e0-3eea-4ac4-981a-890a853401d5 req-990bea6a-98d5-4bce-98f4-e4298d1610c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Refreshing instance network info cache due to event network-changed-16bbc36d-c752-47fb-8a65-efe72e3acf06. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:14:54 compute-2 nova_compute[226829]: 2026-01-31 09:14:54.372 226833 DEBUG oslo_concurrency.lockutils [req-dfbcc3e0-3eea-4ac4-981a-890a853401d5 req-990bea6a-98d5-4bce-98f4-e4298d1610c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-71be684d-2233-462f-8268-a0bf7ea3f281" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:14:54 compute-2 nova_compute[226829]: 2026-01-31 09:14:54.372 226833 DEBUG oslo_concurrency.lockutils [req-dfbcc3e0-3eea-4ac4-981a-890a853401d5 req-990bea6a-98d5-4bce-98f4-e4298d1610c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-71be684d-2233-462f-8268-a0bf7ea3f281" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:14:54 compute-2 nova_compute[226829]: 2026-01-31 09:14:54.373 226833 DEBUG nova.network.neutron [req-dfbcc3e0-3eea-4ac4-981a-890a853401d5 req-990bea6a-98d5-4bce-98f4-e4298d1610c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Refreshing network info cache for port 16bbc36d-c752-47fb-8a65-efe72e3acf06 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:14:54 compute-2 ceph-mon[77282]: from='client.37467 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2147898005' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:14:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2147898005' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:14:54 compute-2 ceph-mon[77282]: from='client.51443 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:54 compute-2 ceph-mon[77282]: from='client.37482 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/176635303' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 09:14:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 31 09:14:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:55.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 31 09:14:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:55.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:55 compute-2 ceph-mon[77282]: from='client.51467 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:55 compute-2 ceph-mon[77282]: pgmap v4303: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 23 KiB/s wr, 49 op/s
Jan 31 09:14:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3681007901' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 09:14:55 compute-2 nova_compute[226829]: 2026-01-31 09:14:55.679 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:55 compute-2 nova_compute[226829]: 2026-01-31 09:14:55.813 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:57.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:14:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:57.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:57 compute-2 nova_compute[226829]: 2026-01-31 09:14:57.427 226833 DEBUG nova.network.neutron [req-dfbcc3e0-3eea-4ac4-981a-890a853401d5 req-990bea6a-98d5-4bce-98f4-e4298d1610c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Updated VIF entry in instance network info cache for port 16bbc36d-c752-47fb-8a65-efe72e3acf06. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:14:57 compute-2 nova_compute[226829]: 2026-01-31 09:14:57.428 226833 DEBUG nova.network.neutron [req-dfbcc3e0-3eea-4ac4-981a-890a853401d5 req-990bea6a-98d5-4bce-98f4-e4298d1610c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Updating instance_info_cache with network_info: [{"id": "16bbc36d-c752-47fb-8a65-efe72e3acf06", "address": "fa:16:3e:d5:ba:ad", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16bbc36d-c7", "ovs_interfaceid": "16bbc36d-c752-47fb-8a65-efe72e3acf06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:14:57 compute-2 nova_compute[226829]: 2026-01-31 09:14:57.466 226833 DEBUG oslo_concurrency.lockutils [req-dfbcc3e0-3eea-4ac4-981a-890a853401d5 req-990bea6a-98d5-4bce-98f4-e4298d1610c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-71be684d-2233-462f-8268-a0bf7ea3f281" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:14:57 compute-2 ceph-mon[77282]: pgmap v4304: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 78 op/s
Jan 31 09:14:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 31 09:14:58 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/840669752' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 09:14:58 compute-2 ceph-mon[77282]: from='client.37500 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:58 compute-2 ceph-mon[77282]: from='client.46444 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/840669752' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 09:14:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:59.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:14:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:59.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:59 compute-2 ceph-mon[77282]: pgmap v4305: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 78 op/s
Jan 31 09:14:59 compute-2 ceph-mon[77282]: from='client.37506 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:59 compute-2 ceph-mon[77282]: from='client.51488 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:59 compute-2 ceph-mon[77282]: from='client.51506 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2613753304' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 09:14:59 compute-2 ceph-mon[77282]: from='client.37518 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:59 compute-2 ceph-mon[77282]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 09:14:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1754209629' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 09:14:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1389096197' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:15:00 compute-2 nova_compute[226829]: 2026-01-31 09:15:00.681 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:00 compute-2 nova_compute[226829]: 2026-01-31 09:15:00.816 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3081761994' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:15:01 compute-2 ceph-mon[77282]: pgmap v4306: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 78 op/s
Jan 31 09:15:01 compute-2 ceph-mon[77282]: from='client.51533 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3338682382' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 31 09:15:01 compute-2 ceph-mon[77282]: from='client.37554 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2102815293' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 31 09:15:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/347420494' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 31 09:15:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1846108260' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 31 09:15:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2990179853' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 31 09:15:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:15:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:01.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:15:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:01.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:02 compute-2 ovs-vsctl[342203]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 31 09:15:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2417167780' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 31 09:15:02 compute-2 ceph-mon[77282]: from='client.37584 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/109357756' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:15:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1292535670' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:15:02 compute-2 ceph-mon[77282]: from='client.51581 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:02 compute-2 ceph-mon[77282]: from='client.37596 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2668804060' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:15:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3341601396' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:15:02 compute-2 ceph-mon[77282]: from='client.51602 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2703213647' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 09:15:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1922402219' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:15:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:15:03 compute-2 virtqemud[226546]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 31 09:15:03 compute-2 virtqemud[226546]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 31 09:15:03 compute-2 virtqemud[226546]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 31 09:15:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:15:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:03.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:15:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:03.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:03 compute-2 ceph-mon[77282]: pgmap v4307: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 9.1 KiB/s wr, 72 op/s
Jan 31 09:15:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3053518246' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:15:03 compute-2 ceph-mon[77282]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 09:15:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3047504095' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 09:15:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/885162426' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 31 09:15:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3175809376' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:15:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3994025984' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:15:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1890310869' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 31 09:15:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/549006704' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 09:15:03 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2604331417' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 09:15:03 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma asok_command: cache status {prefix=cache status} (starting...)
Jan 31 09:15:03 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma asok_command: client ls {prefix=client ls} (starting...)
Jan 31 09:15:03 compute-2 lvm[342574]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 09:15:03 compute-2 lvm[342574]: VG ceph_vg0 finished
Jan 31 09:15:04 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma asok_command: damage ls {prefix=damage ls} (starting...)
Jan 31 09:15:04 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma asok_command: dump loads {prefix=dump loads} (starting...)
Jan 31 09:15:04 compute-2 ceph-mon[77282]: from='client.37644 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:04 compute-2 ceph-mon[77282]: from='client.51668 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3547050634' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:15:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/840059746' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 31 09:15:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/183632181' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:15:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3062583683' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 31 09:15:04 compute-2 ceph-mon[77282]: from='client.51692 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/488891734' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 31 09:15:04 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4009751160' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 31 09:15:04 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 31 09:15:04 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4270911575' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 09:15:04 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 31 09:15:04 compute-2 sudo[342691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:15:04 compute-2 sudo[342691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:04 compute-2 sudo[342691]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:04 compute-2 sudo[342724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:15:04 compute-2 sudo[342724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:04 compute-2 sudo[342724]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:04 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 31 09:15:04 compute-2 sudo[342761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:15:04 compute-2 sudo[342761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:04 compute-2 sudo[342761]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:04 compute-2 sudo[342788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:15:04 compute-2 sudo[342788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:04 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 31 09:15:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:15:05 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2682141508' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:15:05 compute-2 ovn_controller[133834]: 2026-01-31T09:15:05Z|00123|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.9 does not match offer 10.100.0.12
Jan 31 09:15:05 compute-2 ovn_controller[133834]: 2026-01-31T09:15:05Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:d5:ba:ad 10.100.0.12
Jan 31 09:15:05 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 31 09:15:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:15:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:05.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:15:05 compute-2 sudo[342788]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:05 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 31 09:15:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:05.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Jan 31 09:15:05 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/758987683' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 31 09:15:05 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 31 09:15:05 compute-2 ceph-mon[77282]: from='client.37686 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:05 compute-2 ceph-mon[77282]: from='client.46510 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:05 compute-2 ceph-mon[77282]: from='client.51716 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:05 compute-2 ceph-mon[77282]: pgmap v4308: 305 pgs: 305 active+clean; 213 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 470 KiB/s wr, 90 op/s
Jan 31 09:15:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2641837516' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:15:05 compute-2 ceph-mon[77282]: from='client.37695 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:05 compute-2 ceph-mon[77282]: from='client.46519 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:05 compute-2 ceph-mon[77282]: from='client.37710 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3646074436' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:15:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4270911575' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 09:15:05 compute-2 ceph-mon[77282]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 09:15:05 compute-2 ceph-mon[77282]: from='client.51740 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:05 compute-2 ceph-mon[77282]: from='client.37722 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2476336237' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:15:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2823572616' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:15:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2682141508' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:15:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/816330882' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:15:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/758987683' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 31 09:15:05 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3690361327' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:15:05 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:15:05 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:15:05 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma asok_command: ops {prefix=ops} (starting...)
Jan 31 09:15:05 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Jan 31 09:15:05 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2139579517' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 31 09:15:05 compute-2 nova_compute[226829]: 2026-01-31 09:15:05.684 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:05 compute-2 nova_compute[226829]: 2026-01-31 09:15:05.817 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 31 09:15:06 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/345464744' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:15:06 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma asok_command: session ls {prefix=session ls} (starting...)
Jan 31 09:15:06 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma asok_command: status {prefix=status} (starting...)
Jan 31 09:15:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 31 09:15:06 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1889120454' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:15:06 compute-2 ceph-mon[77282]: from='client.46558 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:06 compute-2 ceph-mon[77282]: from='client.51755 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:06 compute-2 ceph-mon[77282]: from='client.37731 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:06 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:15:06 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:15:06 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:15:06 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:15:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2139579517' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 31 09:15:06 compute-2 ceph-mon[77282]: from='client.51782 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3864898821' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:15:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2887442106' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 31 09:15:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2939993999' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:15:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/345464744' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:15:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2531390572' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:15:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3362502088' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:15:06 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1889120454' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:15:06 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 31 09:15:06 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1002175140' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:15:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:15:06.963 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:15:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:15:06.964 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:15:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:15:06.965 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:15:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 31 09:15:07 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1470515231' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 09:15:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:07.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 09:15:07 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/658685375' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:15:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:07.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:15:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Jan 31 09:15:07 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2544725457' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 31 09:15:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 31 09:15:07 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1012007661' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 09:15:07 compute-2 ceph-mon[77282]: from='client.37755 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:07 compute-2 ceph-mon[77282]: from='client.51803 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:07 compute-2 ceph-mon[77282]: pgmap v4309: 305 pgs: 305 active+clean; 214 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 494 KiB/s wr, 78 op/s
Jan 31 09:15:07 compute-2 ceph-mon[77282]: from='client.46591 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:07 compute-2 ceph-mon[77282]: from='client.37782 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:07 compute-2 ceph-mon[77282]: from='client.51821 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:07 compute-2 ceph-mon[77282]: from='client.46609 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:07 compute-2 ceph-mon[77282]: from='client.37791 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2656656714' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 31 09:15:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2926247127' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 31 09:15:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1002175140' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:15:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1470515231' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 09:15:07 compute-2 ceph-mon[77282]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 09:15:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/658685375' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:15:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2544725457' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 31 09:15:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1012007661' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 09:15:07 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4287032663' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 31 09:15:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 31 09:15:08 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2079264228' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:15:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Jan 31 09:15:08 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4249357889' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 31 09:15:08 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Jan 31 09:15:08 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3684087532' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 31 09:15:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 31 09:15:09 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1762674557' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:15:09 compute-2 ovn_controller[133834]: 2026-01-31T09:15:09Z|00125|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.9 does not match offer 10.100.0.12
Jan 31 09:15:09 compute-2 ovn_controller[133834]: 2026-01-31T09:15:09Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:d5:ba:ad 10.100.0.12
Jan 31 09:15:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:15:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:09.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:15:09 compute-2 ceph-mon[77282]: from='client.51842 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:09 compute-2 ceph-mon[77282]: from='client.37803 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:09 compute-2 ceph-mon[77282]: from='client.37818 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:09 compute-2 ceph-mon[77282]: from='client.51878 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/523129022' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 31 09:15:09 compute-2 ceph-mon[77282]: from='client.46657 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2079264228' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:15:09 compute-2 ceph-mon[77282]: pgmap v4310: 305 pgs: 305 active+clean; 214 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 494 KiB/s wr, 52 op/s
Jan 31 09:15:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3401814255' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 31 09:15:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4269053121' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 31 09:15:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4249357889' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 31 09:15:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3246896467' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 31 09:15:09 compute-2 ceph-mon[77282]: from='client.37854 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:09 compute-2 ceph-mon[77282]: from='client.46684 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1731634458' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 31 09:15:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/366387459' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 31 09:15:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3684087532' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 31 09:15:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3227798696' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/949643787' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 31 09:15:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4091371072' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 31 09:15:09 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1327770669' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 31 09:15:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:09.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc837400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbc837400 session 0x562dbff912c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:27.036612+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc4da4000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.100792885s of 15.409749031s, submitted: 17
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 532561920 unmapped: 71729152 heap: 604291072 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc4da4000 session 0x562dbc7770e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc18e5c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc18e5c00 session 0x562dbecb2b40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:28.036830+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533602304 unmapped: 74366976 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:29.037019+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5340708 data_alloc: 218103808 data_used: 18477056
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533602304 unmapped: 74366976 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:30.037229+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533602304 unmapped: 74366976 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:31.037386+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533602304 unmapped: 74366976 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:32.037577+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x198ba2000/0x0/0x1bfc00000, data 0x3f3ab87/0x413c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533602304 unmapped: 74366976 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:33.037759+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533602304 unmapped: 74366976 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:34.037910+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5340708 data_alloc: 218103808 data_used: 18477056
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533610496 unmapped: 74358784 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:35.038081+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533610496 unmapped: 74358784 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x198ba2000/0x0/0x1bfc00000, data 0x3f3ab87/0x413c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:36.038307+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533610496 unmapped: 74358784 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:37.038511+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x198ba2000/0x0/0x1bfc00000, data 0x3f3ab87/0x413c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533610496 unmapped: 74358784 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc68e1400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc68e1400 session 0x562dbd7ff680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:38.038692+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533610496 unmapped: 74358784 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:39.038879+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5340708 data_alloc: 218103808 data_used: 18477056
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533610496 unmapped: 74358784 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:40.039119+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x198ba2000/0x0/0x1bfc00000, data 0x3f3ab87/0x413c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533610496 unmapped: 74358784 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:41.039308+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533610496 unmapped: 74358784 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc68e1000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc68e1000 session 0x562dbd84c000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:42.039459+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 74350592 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:43.039603+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 74350592 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2854800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc2854800 session 0x562dbb56eb40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc7cf800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.751882553s of 16.397167206s, submitted: 43
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbc7cf800 session 0x562dbbac14a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x198ba2000/0x0/0x1bfc00000, data 0x3f3ab87/0x413c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:44.039750+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x198ba2000/0x0/0x1bfc00000, data 0x3f3ab87/0x413c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5340708 data_alloc: 218103808 data_used: 18477056
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 74350592 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba77f400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:45.039952+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 74350592 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x198ba2000/0x0/0x1bfc00000, data 0x3f3ab87/0x413c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:46.040162+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 74350592 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:47.040391+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 74350592 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:48.040589+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533618688 unmapped: 74350592 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:49.040734+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5346148 data_alloc: 234881024 data_used: 19124224
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533495808 unmapped: 74473472 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x198ba2000/0x0/0x1bfc00000, data 0x3f3ab87/0x413c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:50.040898+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533504000 unmapped: 74465280 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc57da400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc57da400 session 0x562dc822c1e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:51.041045+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533504000 unmapped: 74465280 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x198ba2000/0x0/0x1bfc00000, data 0x3f3ab87/0x413c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:52.041228+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533504000 unmapped: 74465280 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc7cf800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbc7cf800 session 0x562dbba830e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:53.041378+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2854800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc2854800 session 0x562dba8a0f00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc68e1000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533504000 unmapped: 74465280 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc68e1000 session 0x562dbc777c20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:54.041516+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc68e1400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc7ce400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5377741 data_alloc: 234881024 data_used: 23269376
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533504000 unmapped: 74465280 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x198b7d000/0x0/0x1bfc00000, data 0x3f5ebaa/0x4161000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:55.041802+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533512192 unmapped: 74457088 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:56.041929+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533471232 unmapped: 74498048 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:57.042119+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533577728 unmapped: 74391552 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:58.042257+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533577728 unmapped: 74391552 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.015396118s of 15.042432785s, submitted: 7
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x198b7d000/0x0/0x1bfc00000, data 0x3f5ebaa/0x4161000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:59.042420+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5449965 data_alloc: 234881024 data_used: 33284096
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 533577728 unmapped: 74391552 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:00.042543+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 534822912 unmapped: 73146368 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:01.042692+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536788992 unmapped: 71180288 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:02.042896+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x198315000/0x0/0x1bfc00000, data 0x47c0baa/0x49c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 537182208 unmapped: 70787072 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:03.043110+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 537190400 unmapped: 70778880 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:04.043253+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5525987 data_alloc: 234881024 data_used: 34414592
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 537190400 unmapped: 70778880 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:05.043460+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 537190400 unmapped: 70778880 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:06.043584+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 537206784 unmapped: 70762496 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1982f8000/0x0/0x1bfc00000, data 0x47e3baa/0x49e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:07.043777+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 541163520 unmapped: 66805760 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dba77f400 session 0x562dc822f860
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:08.043902+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540925952 unmapped: 67043328 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a4000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.640155792s of 10.329380989s, submitted: 225
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:09.044052+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5488725 data_alloc: 234881024 data_used: 29667328
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540942336 unmapped: 67026944 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbb5a4000 session 0x562dbd1b21e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:10.044209+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x198361000/0x0/0x1bfc00000, data 0x44dfb9a/0x46e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540942336 unmapped: 67026944 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x198361000/0x0/0x1bfc00000, data 0x44dfb9a/0x46e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:11.044401+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540942336 unmapped: 67026944 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:12.044588+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540942336 unmapped: 67026944 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19835e000/0x0/0x1bfc00000, data 0x44e2b9a/0x46e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:13.044767+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540942336 unmapped: 67026944 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:14.044958+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5487429 data_alloc: 234881024 data_used: 29679616
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540942336 unmapped: 67026944 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:15.045114+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc68e1400 session 0x562dbff90f00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540942336 unmapped: 67026944 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbc7ce400 session 0x562dbd84d680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:16.045289+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540942336 unmapped: 67026944 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:17.045526+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1985d9000/0x0/0x1bfc00000, data 0x4503b9a/0x4705000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540942336 unmapped: 67026944 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:18.045729+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540942336 unmapped: 67026944 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:19.045926+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5487357 data_alloc: 234881024 data_used: 29679616
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540942336 unmapped: 67026944 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:20.046078+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540942336 unmapped: 67026944 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1985d9000/0x0/0x1bfc00000, data 0x4503b9a/0x4705000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:21.046278+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540942336 unmapped: 67026944 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:22.046457+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540942336 unmapped: 67026944 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:23.046649+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540942336 unmapped: 67026944 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:24.046944+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5487357 data_alloc: 234881024 data_used: 29679616
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540942336 unmapped: 67026944 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc43dc800 session 0x562dbd84b680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbd881800 session 0x562dbd5d92c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba6a1000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:25.047063+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.215866089s of 16.269037247s, submitted: 20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 534872064 unmapped: 73097216 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dba6a1000 session 0x562dc822e000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:26.047234+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19984a000/0x0/0x1bfc00000, data 0x3292b9a/0x3494000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 534880256 unmapped: 73089024 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:27.047512+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 534880256 unmapped: 73089024 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19984b000/0x0/0x1bfc00000, data 0x3292b67/0x3492000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:28.047658+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 534880256 unmapped: 73089024 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:29.047811+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5275110 data_alloc: 234881024 data_used: 20070400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 534880256 unmapped: 73089024 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:30.047992+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 534880256 unmapped: 73089024 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:31.048176+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 534880256 unmapped: 73089024 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2102800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac11c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:32.048438+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19984b000/0x0/0x1bfc00000, data 0x3292b67/0x3492000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 534880256 unmapped: 73089024 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:33.048618+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 534880256 unmapped: 73089024 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:34.048852+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19984b000/0x0/0x1bfc00000, data 0x3292b67/0x3492000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5275270 data_alloc: 234881024 data_used: 20074496
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 534880256 unmapped: 73089024 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:35.049090+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 534880256 unmapped: 73089024 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:36.049316+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 534880256 unmapped: 73089024 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.647685051s of 11.885573387s, submitted: 23
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:37.049523+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 534880256 unmapped: 73089024 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x199846000/0x0/0x1bfc00000, data 0x3298b67/0x3498000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:38.049710+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 534880256 unmapped: 73089024 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:39.049917+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5275574 data_alloc: 234881024 data_used: 20074496
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 534880256 unmapped: 73089024 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:40.050073+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 534880256 unmapped: 73089024 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:41.050213+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 534880256 unmapped: 73089024 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:42.050376+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x199846000/0x0/0x1bfc00000, data 0x3298b67/0x3498000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 534880256 unmapped: 73089024 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:43.050532+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x199846000/0x0/0x1bfc00000, data 0x3298b67/0x3498000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535928832 unmapped: 72040448 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:44.050694+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5275650 data_alloc: 234881024 data_used: 20074496
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535928832 unmapped: 72040448 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:45.050834+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535928832 unmapped: 72040448 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:46.050959+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x199843000/0x0/0x1bfc00000, data 0x329bb67/0x349b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535928832 unmapped: 72040448 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:47.051168+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535928832 unmapped: 72040448 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:48.051328+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x199843000/0x0/0x1bfc00000, data 0x329bb67/0x349b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535928832 unmapped: 72040448 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:49.051467+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5275650 data_alloc: 234881024 data_used: 20074496
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535928832 unmapped: 72040448 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:50.051654+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535928832 unmapped: 72040448 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.775457382s of 13.801543236s, submitted: 5
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc2102800 session 0x562dbecb23c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbac11c00 session 0x562dbd527c20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x199843000/0x0/0x1bfc00000, data 0x329bb67/0x349b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:51.051839+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc18e4000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535945216 unmapped: 72024064 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:52.052004+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x199843000/0x0/0x1bfc00000, data 0x329bb67/0x349b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc18e4000 session 0x562dbff90d20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535969792 unmapped: 71999488 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:53.052171+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535969792 unmapped: 71999488 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:54.052339+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5021643 data_alloc: 218103808 data_used: 8933376
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535977984 unmapped: 71991296 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:55.052518+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535977984 unmapped: 71991296 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:56.052693+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535977984 unmapped: 71991296 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19ac6f000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:57.052913+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535977984 unmapped: 71991296 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:58.053102+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535977984 unmapped: 71991296 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:59.053255+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5021643 data_alloc: 218103808 data_used: 8933376
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535977984 unmapped: 71991296 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc738000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbc738000 session 0x562dbd5265a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc73a000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbc73a000 session 0x562dbbb14960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac11c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbac11c00 session 0x562dbd5d9a40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc738000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbc738000 session 0x562dbecb3860
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc18e4000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:00.053409+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc18e4000 session 0x562dba895860
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2102800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc2102800 session 0x562dbb548780
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd880400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbd880400 session 0x562dba2843c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac11c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbac11c00 session 0x562dbc7774a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc738000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbc738000 session 0x562dbb9de960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535986176 unmapped: 71983104 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:01.053586+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535986176 unmapped: 71983104 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:02.053753+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535986176 unmapped: 71983104 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a421000/0x0/0x1bfc00000, data 0x26beaf2/0x28bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:03.053951+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535986176 unmapped: 71983104 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a421000/0x0/0x1bfc00000, data 0x26beaf2/0x28bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:04.054145+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5104141 data_alloc: 218103808 data_used: 8933376
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535994368 unmapped: 71974912 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:05.054320+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535994368 unmapped: 71974912 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:06.054454+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a421000/0x0/0x1bfc00000, data 0x26beaf2/0x28bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535994368 unmapped: 71974912 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:07.054640+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535994368 unmapped: 71974912 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:08.054839+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535994368 unmapped: 71974912 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:09.054994+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5104141 data_alloc: 218103808 data_used: 8933376
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535994368 unmapped: 71974912 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:10.055237+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536002560 unmapped: 71966720 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:11.055403+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a421000/0x0/0x1bfc00000, data 0x26beaf2/0x28bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536002560 unmapped: 71966720 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:12.055571+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536002560 unmapped: 71966720 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:13.055756+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536002560 unmapped: 71966720 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:14.055896+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5104141 data_alloc: 218103808 data_used: 8933376
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536002560 unmapped: 71966720 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a421000/0x0/0x1bfc00000, data 0x26beaf2/0x28bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:15.056048+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536002560 unmapped: 71966720 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:16.056231+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536002560 unmapped: 71966720 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:17.056474+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536002560 unmapped: 71966720 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:18.056643+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536010752 unmapped: 71958528 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:19.056794+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5104141 data_alloc: 218103808 data_used: 8933376
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536010752 unmapped: 71958528 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:20.056925+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a421000/0x0/0x1bfc00000, data 0x26beaf2/0x28bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536010752 unmapped: 71958528 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:21.057103+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536010752 unmapped: 71958528 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:22.057276+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536010752 unmapped: 71958528 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:23.057412+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536010752 unmapped: 71958528 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a421000/0x0/0x1bfc00000, data 0x26beaf2/0x28bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:24.057530+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbe3ec400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 32.366577148s of 33.342880249s, submitted: 58
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbe3ec400 session 0x562dbbac1e00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5104945 data_alloc: 218103808 data_used: 8933376
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3666800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535945216 unmapped: 72024064 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc57d9400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a3fd000/0x0/0x1bfc00000, data 0x26e2af2/0x28e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:25.057665+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535945216 unmapped: 72024064 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:26.057852+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535887872 unmapped: 72081408 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:27.058113+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535887872 unmapped: 72081408 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:28.058249+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a3fd000/0x0/0x1bfc00000, data 0x26e2af2/0x28e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535887872 unmapped: 72081408 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:29.058373+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5165425 data_alloc: 218103808 data_used: 17371136
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535887872 unmapped: 72081408 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:30.058512+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535887872 unmapped: 72081408 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:31.058655+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535887872 unmapped: 72081408 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:32.059131+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a3fd000/0x0/0x1bfc00000, data 0x26e2af2/0x28e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535887872 unmapped: 72081408 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:33.059254+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535887872 unmapped: 72081408 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:34.059373+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5165425 data_alloc: 218103808 data_used: 17371136
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a3fd000/0x0/0x1bfc00000, data 0x26e2af2/0x28e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535887872 unmapped: 72081408 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:35.059497+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535887872 unmapped: 72081408 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:36.059706+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535887872 unmapped: 72081408 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.633664131s of 12.669456482s, submitted: 2
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a3fd000/0x0/0x1bfc00000, data 0x26e2af2/0x28e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [0,0,0,0,0,11])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:37.059953+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540803072 unmapped: 67166208 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:38.060171+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 67911680 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:39.060379+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5330165 data_alloc: 218103808 data_used: 18481152
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 67911680 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:40.060529+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 67911680 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:41.060696+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 67911680 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:42.060877+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 67911680 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x199298000/0x0/0x1bfc00000, data 0x383faf2/0x3a3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:43.061092+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 67911680 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:44.061254+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5325053 data_alloc: 218103808 data_used: 18481152
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 67911680 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:45.061373+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 67911680 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:46.061513+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 67911680 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19927f000/0x0/0x1bfc00000, data 0x3860af2/0x3a5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:47.061682+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 67911680 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:48.061851+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19927f000/0x0/0x1bfc00000, data 0x3860af2/0x3a5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 67911680 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:49.062096+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5325373 data_alloc: 218103808 data_used: 18489344
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 67911680 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.482344627s of 13.037282944s, submitted: 155
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:50.062194+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19927f000/0x0/0x1bfc00000, data 0x3860af2/0x3a5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 67911680 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:51.062371+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 67911680 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:52.062511+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x199271000/0x0/0x1bfc00000, data 0x386eaf2/0x3a6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 67911680 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:53.062646+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 67911680 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:54.062779+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x199271000/0x0/0x1bfc00000, data 0x386eaf2/0x3a6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5325973 data_alloc: 218103808 data_used: 18550784
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 67911680 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:55.062968+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 67911680 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:56.063181+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 67911680 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:57.063447+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 67911680 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:58.063574+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 72K writes, 297K keys, 72K commit groups, 1.0 writes per commit group, ingest: 0.30 GB, 0.05 MB/s
                                           Cumulative WAL: 72K writes, 26K syncs, 2.77 writes per sync, written: 0.30 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4825 writes, 20K keys, 4825 commit groups, 1.0 writes per commit group, ingest: 24.47 MB, 0.04 MB/s
                                           Interval WAL: 4825 writes, 1762 syncs, 2.74 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562db927b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 67911680 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:59.063725+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19926a000/0x0/0x1bfc00000, data 0x3875af2/0x3a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5326545 data_alloc: 218103808 data_used: 18550784
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 67911680 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:00.063928+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 67911680 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:01.064073+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 67911680 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:02.064241+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 67911680 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:03.064439+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19926a000/0x0/0x1bfc00000, data 0x3875af2/0x3a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 67911680 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:04.064658+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5326545 data_alloc: 218103808 data_used: 18550784
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 67911680 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:05.064848+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19926a000/0x0/0x1bfc00000, data 0x3875af2/0x3a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 67911680 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19926a000/0x0/0x1bfc00000, data 0x3875af2/0x3a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:06.065105+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540065792 unmapped: 67903488 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:07.065294+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19926a000/0x0/0x1bfc00000, data 0x3875af2/0x3a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540065792 unmapped: 67903488 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:08.065492+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540065792 unmapped: 67903488 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:09.065699+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.185024261s of 19.200548172s, submitted: 6
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5326677 data_alloc: 218103808 data_used: 18550784
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540041216 unmapped: 67928064 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:10.065856+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540041216 unmapped: 67928064 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:11.066080+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540041216 unmapped: 67928064 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:12.066216+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540041216 unmapped: 67928064 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:13.066376+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x199268000/0x0/0x1bfc00000, data 0x3876af2/0x3a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540041216 unmapped: 67928064 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:14.066586+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5326677 data_alloc: 218103808 data_used: 18550784
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540049408 unmapped: 67919872 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:15.066793+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x199268000/0x0/0x1bfc00000, data 0x3876af2/0x3a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540049408 unmapped: 67919872 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:16.066960+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540049408 unmapped: 67919872 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:17.067135+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540049408 unmapped: 67919872 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:18.067301+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x199268000/0x0/0x1bfc00000, data 0x3876af2/0x3a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540049408 unmapped: 67919872 heap: 607969280 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:19.067467+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd883800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbd883800 session 0x562dbd1b32c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba095800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dba095800 session 0x562dbd572000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba095800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dba095800 session 0x562dba8b7e00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x199268000/0x0/0x1bfc00000, data 0x3876af2/0x3a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac11c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbac11c00 session 0x562dbd84fe00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc738000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.349569321s of 10.361606598s, submitted: 4
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbc738000 session 0x562dbd5d8000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5401271 data_alloc: 218103808 data_used: 18550784
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbae52000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbae52000 session 0x562dbbb6b680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x199268000/0x0/0x1bfc00000, data 0x3876af2/0x3a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc73a800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbc73a800 session 0x562dbb4cb0e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba095800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540082176 unmapped: 75243520 heap: 615325696 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dba095800 session 0x562dbabfa1e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac11c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbac11c00 session 0x562dbb34f2c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:20.067597+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540082176 unmapped: 75243520 heap: 615325696 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:21.067721+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540082176 unmapped: 75243520 heap: 615325696 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:22.067845+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540082176 unmapped: 75243520 heap: 615325696 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:23.068091+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540090368 unmapped: 75235328 heap: 615325696 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:24.068258+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5401879 data_alloc: 218103808 data_used: 18538496
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540098560 unmapped: 75227136 heap: 615325696 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:25.068390+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1989ad000/0x0/0x1bfc00000, data 0x4131b54/0x4331000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540106752 unmapped: 75218944 heap: 615325696 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:26.068536+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540106752 unmapped: 75218944 heap: 615325696 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1989ad000/0x0/0x1bfc00000, data 0x4131b54/0x4331000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:27.068711+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540106752 unmapped: 75218944 heap: 615325696 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:28.068887+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540106752 unmapped: 75218944 heap: 615325696 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:29.069016+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5401879 data_alloc: 218103808 data_used: 18538496
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540114944 unmapped: 75210752 heap: 615325696 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:30.069167+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1989ad000/0x0/0x1bfc00000, data 0x4131b54/0x4331000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540114944 unmapped: 75210752 heap: 615325696 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:31.069332+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540114944 unmapped: 75210752 heap: 615325696 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:32.069523+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5acd800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc5acd800 session 0x562dbc8f23c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc68e0c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc68e0c00 session 0x562dbb56f0e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba6a1000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dba6a1000 session 0x562dbecb25a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba095800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dba095800 session 0x562dbd84d4a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbca8d400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.970045090s of 13.127735138s, submitted: 57
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 541073408 unmapped: 74252288 heap: 615325696 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:33.069666+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbca8d400 session 0x562dbbac0000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba6a1000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dba6a1000 session 0x562dc1833680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac11c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbac11c00 session 0x562dbabfbe00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5acd800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc5acd800 session 0x562dbd1b2b40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba095800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dba095800 session 0x562dba7fa960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba6a1000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dba6a1000 session 0x562dbff905a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac11c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbac11c00 session 0x562dbff90d20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbca8d400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbca8d400 session 0x562dbabfb860
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc68e0c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc68e0c00 session 0x562dbd84cf00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba095800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 541089792 unmapped: 78438400 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:34.069892+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5592037 data_alloc: 218103808 data_used: 18542592
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 541179904 unmapped: 78348288 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dba095800 session 0x562dbbac14a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1100800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc1100800 session 0x562dbc777c20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1162800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc1162800 session 0x562dbd527c20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:35.070016+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd883400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x196dcc000/0x0/0x1bfc00000, data 0x58ffb9d/0x5b02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbd883400 session 0x562dbb34e780
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd881800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbd881800 session 0x562dc1832f00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 541179904 unmapped: 78348288 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:36.070133+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba095800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 541130752 unmapped: 78397440 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:37.070371+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dba095800 session 0x562dbd5d90e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x196dcc000/0x0/0x1bfc00000, data 0x58ffbd6/0x5b02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 541286400 unmapped: 78241792 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:38.070577+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x196da8000/0x0/0x1bfc00000, data 0x5923bd6/0x5b26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 541286400 unmapped: 78241792 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:39.070778+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd883400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1100800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x196da8000/0x0/0x1bfc00000, data 0x5923bd6/0x5b26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5593418 data_alloc: 218103808 data_used: 18550784
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 541294592 unmapped: 78233600 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:40.070910+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc739c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbc739c00 session 0x562dbbb6a1e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 541179904 unmapped: 78348288 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:41.071085+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a4c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbb5a4c00 session 0x562dbba82960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 541179904 unmapped: 78348288 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:42.071250+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1b000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbaa1b000 session 0x562dbd527a40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2854800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 541196288 unmapped: 78331904 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:43.071377+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc2854800 session 0x562dc18334a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x196da8000/0x0/0x1bfc00000, data 0x5923bd6/0x5b26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 541196288 unmapped: 78331904 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba095800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.023278236s of 11.477704048s, submitted: 78
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:44.071500+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1b000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbaa1b000 session 0x562dbd527e00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x196da8000/0x0/0x1bfc00000, data 0x5923bd6/0x5b26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5649550 data_alloc: 234881024 data_used: 26476544
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 541204480 unmapped: 78323712 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:45.071712+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd67fc00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbd67fc00 session 0x562dbd5734a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x196da8000/0x0/0x1bfc00000, data 0x5923bd6/0x5b26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 541204480 unmapped: 78323712 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:46.071859+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 541204480 unmapped: 78323712 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:47.072104+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc43de800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc43de800 session 0x562dbd84a960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3666000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc3666000 session 0x562dbd84af00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac10c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 541212672 unmapped: 78315520 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2103800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:48.072310+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 544907264 unmapped: 74620928 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:49.072461+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x196da8000/0x0/0x1bfc00000, data 0x5923bd6/0x5b26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5749899 data_alloc: 251658240 data_used: 36675584
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 544907264 unmapped: 74620928 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:50.072602+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x196da8000/0x0/0x1bfc00000, data 0x5923bd6/0x5b26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550445056 unmapped: 69083136 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:51.072762+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550666240 unmapped: 68861952 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:52.073010+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550731776 unmapped: 68796416 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:53.073179+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1965de000/0x0/0x1bfc00000, data 0x60debd6/0x62e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 553009152 unmapped: 66519040 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.138950348s of 10.006451607s, submitted: 311
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:54.073281+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5879139 data_alloc: 251658240 data_used: 44544000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552361984 unmapped: 67166208 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:55.073411+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dba095800 session 0x562dbd5274a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1965d9000/0x0/0x1bfc00000, data 0x60f2bd6/0x62f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552361984 unmapped: 67166208 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:56.073555+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1965d9000/0x0/0x1bfc00000, data 0x60f2bd6/0x62f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552361984 unmapped: 67166208 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:57.073801+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1965d9000/0x0/0x1bfc00000, data 0x60f2bd6/0x62f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552361984 unmapped: 67166208 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:58.073952+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1965d9000/0x0/0x1bfc00000, data 0x60f2bd6/0x62f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552361984 unmapped: 67166208 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:59.074148+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5879579 data_alloc: 251658240 data_used: 44544000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552361984 unmapped: 67166208 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:00.074306+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552361984 unmapped: 67166208 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:01.074474+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 551985152 unmapped: 67543040 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:02.074607+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552001536 unmapped: 67526656 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:03.074779+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x196110000/0x0/0x1bfc00000, data 0x65b9bd6/0x67bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552058880 unmapped: 67469312 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:04.074989+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5928893 data_alloc: 251658240 data_used: 44613632
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552058880 unmapped: 67469312 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:05.075182+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1960f6000/0x0/0x1bfc00000, data 0x65cdbd6/0x67d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552058880 unmapped: 67469312 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:06.075405+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552058880 unmapped: 67469312 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:07.075614+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552058880 unmapped: 67469312 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:08.075817+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbe2e1400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.256783485s of 14.512090683s, submitted: 105
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552058880 unmapped: 67469312 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:09.075969+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1960f6000/0x0/0x1bfc00000, data 0x65cdbd6/0x67d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5929025 data_alloc: 251658240 data_used: 44613632
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1960f6000/0x0/0x1bfc00000, data 0x65cdbd6/0x67d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552067072 unmapped: 67461120 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:10.076128+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552067072 unmapped: 67461120 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:11.076250+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552067072 unmapped: 67461120 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:12.076360+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1960fb000/0x0/0x1bfc00000, data 0x65d0bd6/0x67d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552067072 unmapped: 67461120 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:13.076463+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:14.076585+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552067072 unmapped: 67461120 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5975043 data_alloc: 251658240 data_used: 44662784
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:15.076836+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 553820160 unmapped: 65708032 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbe2e1400 session 0x562dbd84e780
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:16.076973+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 553820160 unmapped: 65708032 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x195ac7000/0x0/0x1bfc00000, data 0x6c04bd6/0x6e07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:17.077236+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 553820160 unmapped: 65708032 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x195ac7000/0x0/0x1bfc00000, data 0x6c04bd6/0x6e07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:18.077398+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 553820160 unmapped: 65708032 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.672842979s of 10.174317360s, submitted: 46
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbac10c00 session 0x562dbc8f2780
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc2103800 session 0x562dbb9df4a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:19.077535+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 553820160 unmapped: 65708032 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbbb82000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbbb82000 session 0x562dbc8f3a40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5814191 data_alloc: 251658240 data_used: 36913152
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:20.077682+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 548536320 unmapped: 70991872 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:21.077913+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 548536320 unmapped: 70991872 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:22.078037+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 548536320 unmapped: 70991872 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x196459000/0x0/0x1bfc00000, data 0x5ec5b64/0x60c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:23.078200+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 548536320 unmapped: 70991872 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbca8dc00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:24.078380+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 548552704 unmapped: 70975488 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x196457000/0x0/0x1bfc00000, data 0x5ec6b64/0x60c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5834923 data_alloc: 251658240 data_used: 36913152
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:25.078543+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 549101568 unmapped: 70426624 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:26.078662+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 549330944 unmapped: 70197248 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbca8cc00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbca8cc00 session 0x562dbd5261e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac10c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbac10c00 session 0x562dbd84f680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbbb82000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbbb82000 session 0x562dbff90960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbca8cc00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbca8cc00 session 0x562dbab8b680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc68e1800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:27.078865+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 549396480 unmapped: 70131712 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc68e1800 session 0x562dbd573860
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc4da4400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc4da4400 session 0x562dba8b72c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac10c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbac10c00 session 0x562dbff91a40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbbb82000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbbb82000 session 0x562dbd84e3c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbca8cc00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbca8cc00 session 0x562dbbb152c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbca8dc00 session 0x562dc822d2c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:28.079068+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550535168 unmapped: 68993024 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc68e1800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.601000786s of 10.006449699s, submitted: 128
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:29.079240+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545669120 unmapped: 73859072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc68e1800 session 0x562dbbb6a3c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19608e000/0x0/0x1bfc00000, data 0x54fabc6/0x56fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5650952 data_alloc: 234881024 data_used: 22700032
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:30.079417+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545669120 unmapped: 73859072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:31.079600+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545669120 unmapped: 73859072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:32.079831+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19608e000/0x0/0x1bfc00000, data 0x54fabc6/0x56fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545669120 unmapped: 73859072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:33.079974+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545669120 unmapped: 73859072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19608e000/0x0/0x1bfc00000, data 0x54fabc6/0x56fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:34.080183+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545669120 unmapped: 73859072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5650952 data_alloc: 234881024 data_used: 22700032
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:35.080310+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545669120 unmapped: 73859072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19608e000/0x0/0x1bfc00000, data 0x54fabc6/0x56fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:36.080446+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545669120 unmapped: 73859072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:37.080638+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a4800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbb5a4800 session 0x562dbd1b3680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545669120 unmapped: 73859072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc3666800 session 0x562dbb9deb40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc57d9400 session 0x562dbbac1860
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd880c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19608e000/0x0/0x1bfc00000, data 0x54fabc6/0x56fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:38.080818+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545669120 unmapped: 73859072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc706c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbc706c00 session 0x562dbb4cb0e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:39.080963+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545669120 unmapped: 73859072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:40.081156+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5650952 data_alloc: 234881024 data_used: 22700032
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545669120 unmapped: 73859072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:41.081317+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545669120 unmapped: 73859072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:42.081493+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545677312 unmapped: 73850880 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:43.081619+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1496c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc1496c00 session 0x562dba895680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a4800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.402840614s of 14.433990479s, submitted: 20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545685504 unmapped: 73842688 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19608e000/0x0/0x1bfc00000, data 0x54fabc6/0x56fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:44.081776+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545685504 unmapped: 73842688 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:45.081942+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5650952 data_alloc: 234881024 data_used: 22700032
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545685504 unmapped: 73842688 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:46.082078+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545685504 unmapped: 73842688 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19608e000/0x0/0x1bfc00000, data 0x54fabc6/0x56fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbb5a4800 session 0x562dba7fa960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:47.082231+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19608e000/0x0/0x1bfc00000, data 0x54fabc6/0x56fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545505280 unmapped: 74022912 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbd880c00 session 0x562dbd5d9860
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1497c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:48.082375+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545505280 unmapped: 74022912 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:49.082505+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545505280 unmapped: 74022912 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:50.082649+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5375104 data_alloc: 218103808 data_used: 18124800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545513472 unmapped: 74014720 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:51.082835+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546684928 unmapped: 72843264 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x198cf8000/0x0/0x1bfc00000, data 0x39d5bb6/0x3bd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:52.083122+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546684928 unmapped: 72843264 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:53.083324+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546684928 unmapped: 72843264 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:54.083446+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.960980415s of 10.991615295s, submitted: 23
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546693120 unmapped: 72835072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:55.083586+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5457728 data_alloc: 234881024 data_used: 27475968
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546693120 unmapped: 72835072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:56.083735+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546693120 unmapped: 72835072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:57.083903+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546693120 unmapped: 72835072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x198cf8000/0x0/0x1bfc00000, data 0x39d5bb6/0x3bd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:58.084061+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546693120 unmapped: 72835072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x198cf8000/0x0/0x1bfc00000, data 0x39d5bb6/0x3bd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:59.084227+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546693120 unmapped: 72835072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:00.084410+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5458208 data_alloc: 234881024 data_used: 27488256
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546693120 unmapped: 72835072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:01.084531+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 548257792 unmapped: 71270400 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x198515000/0x0/0x1bfc00000, data 0x41b8bb6/0x43b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:02.084658+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 548257792 unmapped: 71270400 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:03.084834+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 548257792 unmapped: 71270400 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:04.085090+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 548257792 unmapped: 71270400 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:05.085242+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5561932 data_alloc: 234881024 data_used: 27758592
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 548290560 unmapped: 71237632 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:06.085481+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 548290560 unmapped: 71237632 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:07.085674+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 548290560 unmapped: 71237632 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.192600250s of 13.429332733s, submitted: 115
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1980e2000/0x0/0x1bfc00000, data 0x45ebbb6/0x47ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:08.085888+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 548265984 unmapped: 71262208 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc1497c00 session 0x562dbff905a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc707400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:09.086002+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540983296 unmapped: 78544896 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbc707400 session 0x562dc822cf00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:10.086149+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5289618 data_alloc: 218103808 data_used: 16461824
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540983296 unmapped: 78544896 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:11.086331+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19969d000/0x0/0x1bfc00000, data 0x2e06b44/0x3005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540983296 unmapped: 78544896 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:12.086536+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540983296 unmapped: 78544896 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:13.086711+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540983296 unmapped: 78544896 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:14.086892+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540983296 unmapped: 78544896 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19969d000/0x0/0x1bfc00000, data 0x2e06b44/0x3005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:15.087094+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5289618 data_alloc: 218103808 data_used: 16461824
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540983296 unmapped: 78544896 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19969d000/0x0/0x1bfc00000, data 0x2e06b44/0x3005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:16.087333+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540983296 unmapped: 78544896 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:17.087545+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540983296 unmapped: 78544896 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:18.087751+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540991488 unmapped: 78536704 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:19.087900+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19969d000/0x0/0x1bfc00000, data 0x2e06b44/0x3005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540991488 unmapped: 78536704 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:20.088060+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5289618 data_alloc: 218103808 data_used: 16461824
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540991488 unmapped: 78536704 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19969d000/0x0/0x1bfc00000, data 0x2e06b44/0x3005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:21.088203+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540991488 unmapped: 78536704 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19969d000/0x0/0x1bfc00000, data 0x2e06b44/0x3005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:22.088374+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540991488 unmapped: 78536704 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:23.088506+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540991488 unmapped: 78536704 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:24.088800+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540991488 unmapped: 78536704 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:25.089137+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.422040939s of 17.547616959s, submitted: 28
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5289298 data_alloc: 218103808 data_used: 16461824
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540909568 unmapped: 78618624 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbd883400 session 0x562dbd84eb40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc1100800 session 0x562dba2841e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:26.089301+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a4800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540909568 unmapped: 78618624 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbb5a4800 session 0x562dbff90f00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:27.089530+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535404544 unmapped: 84123648 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:28.089680+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535404544 unmapped: 84123648 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:29.089813+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535404544 unmapped: 84123648 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:30.089950+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5096270 data_alloc: 218103808 data_used: 8933376
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535404544 unmapped: 84123648 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:31.090171+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535404544 unmapped: 84123648 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:32.090331+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535404544 unmapped: 84123648 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:33.090452+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535404544 unmapped: 84123648 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:34.090629+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535404544 unmapped: 84123648 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:35.090782+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5096270 data_alloc: 218103808 data_used: 8933376
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535404544 unmapped: 84123648 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:36.090918+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535404544 unmapped: 84123648 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:37.091070+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535404544 unmapped: 84123648 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:38.091214+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535412736 unmapped: 84115456 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:39.091355+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535412736 unmapped: 84115456 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:40.091479+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5096270 data_alloc: 218103808 data_used: 8933376
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535412736 unmapped: 84115456 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:41.091630+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535412736 unmapped: 84115456 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:42.091805+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535412736 unmapped: 84115456 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:43.091948+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535412736 unmapped: 84115456 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:44.092093+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535412736 unmapped: 84115456 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:45.092294+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5096270 data_alloc: 218103808 data_used: 8933376
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535412736 unmapped: 84115456 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:46.092421+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535412736 unmapped: 84115456 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:47.092609+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535412736 unmapped: 84115456 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:48.092765+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535412736 unmapped: 84115456 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:49.092955+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535412736 unmapped: 84115456 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:50.093105+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5096270 data_alloc: 218103808 data_used: 8933376
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535412736 unmapped: 84115456 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:51.093224+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535412736 unmapped: 84115456 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:52.093978+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535412736 unmapped: 84115456 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:53.094174+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535412736 unmapped: 84115456 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:54.094361+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535420928 unmapped: 84107264 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:55.094513+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5096270 data_alloc: 218103808 data_used: 8933376
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535420928 unmapped: 84107264 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:56.094700+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535420928 unmapped: 84107264 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:57.094893+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535420928 unmapped: 84107264 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:58.095114+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535420928 unmapped: 84107264 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:59.095288+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535420928 unmapped: 84107264 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:00.095438+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5096270 data_alloc: 218103808 data_used: 8933376
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535420928 unmapped: 84107264 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:01.095571+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535420928 unmapped: 84107264 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:02.095728+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535429120 unmapped: 84099072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:03.095885+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535429120 unmapped: 84099072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:04.096160+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535429120 unmapped: 84099072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:05.096305+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5096270 data_alloc: 218103808 data_used: 8933376
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535429120 unmapped: 84099072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:06.096494+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535429120 unmapped: 84099072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:07.096677+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535429120 unmapped: 84099072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:08.096820+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535429120 unmapped: 84099072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:09.096977+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535429120 unmapped: 84099072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:10.097126+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5096270 data_alloc: 218103808 data_used: 8933376
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535437312 unmapped: 84090880 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:11.097298+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535437312 unmapped: 84090880 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:12.097544+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535437312 unmapped: 84090880 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:13.097704+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535437312 unmapped: 84090880 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:14.097903+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535437312 unmapped: 84090880 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:15.098077+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5096270 data_alloc: 218103808 data_used: 8933376
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535437312 unmapped: 84090880 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc10ff400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc10ff400 session 0x562dbb56eb40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc73c000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbc73c000 session 0x562dbb34e5a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc736c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbc736c00 session 0x562dbd7fe960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a4800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbb5a4800 session 0x562dbd84d2c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc73c000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 50.421176910s of 50.721790314s, submitted: 58
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbc73c000 session 0x562dbd84d0e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:16.098344+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc10ff400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc10ff400 session 0x562dbabd1e00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1100800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc1100800 session 0x562dba84cf00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc57dac00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc57dac00 session 0x562dbd84a3c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a4800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbb5a4800 session 0x562dbb9df860
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536502272 unmapped: 83025920 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a0e5000/0x0/0x1bfc00000, data 0x25e9b54/0x27e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:17.098565+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536502272 unmapped: 83025920 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:18.098733+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 83017728 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:19.098931+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 83017728 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a0e5000/0x0/0x1bfc00000, data 0x25e9b54/0x27e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:20.099219+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5164673 data_alloc: 218103808 data_used: 8933376
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a0e5000/0x0/0x1bfc00000, data 0x25e9b54/0x27e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 83017728 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:21.099394+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 83017728 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:22.099517+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 83017728 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:23.099704+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 83017728 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:24.099897+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 83017728 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:25.100073+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a0e5000/0x0/0x1bfc00000, data 0x25e9b54/0x27e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5164673 data_alloc: 218103808 data_used: 8933376
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 83017728 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:26.100232+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536526848 unmapped: 83001344 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:27.100462+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536526848 unmapped: 83001344 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:28.100614+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536526848 unmapped: 83001344 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:29.100846+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536526848 unmapped: 83001344 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a0e5000/0x0/0x1bfc00000, data 0x25e9b54/0x27e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:30.101095+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5164673 data_alloc: 218103808 data_used: 8933376
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536526848 unmapped: 83001344 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a0e5000/0x0/0x1bfc00000, data 0x25e9b54/0x27e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:31.101288+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536526848 unmapped: 83001344 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:32.101571+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536526848 unmapped: 83001344 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:33.101759+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536526848 unmapped: 83001344 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:34.101919+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dd2f19000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.573759079s of 18.680780411s, submitted: 29
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dd2f19000 session 0x562dbd572b40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536657920 unmapped: 82870272 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc834c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc736400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:35.102082+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5166757 data_alloc: 218103808 data_used: 8937472
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a0c1000/0x0/0x1bfc00000, data 0x260db54/0x280d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536657920 unmapped: 82870272 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:36.102249+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536657920 unmapped: 82870272 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a0c1000/0x0/0x1bfc00000, data 0x260db54/0x280d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [1])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:37.102420+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536674304 unmapped: 82853888 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:38.102649+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536674304 unmapped: 82853888 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:39.102893+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a0c1000/0x0/0x1bfc00000, data 0x260db54/0x280d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536674304 unmapped: 82853888 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:40.103044+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5230757 data_alloc: 218103808 data_used: 17928192
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536674304 unmapped: 82853888 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:41.103232+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536674304 unmapped: 82853888 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:42.103416+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536674304 unmapped: 82853888 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:43.103573+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536674304 unmapped: 82853888 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:44.103750+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536674304 unmapped: 82853888 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a0c1000/0x0/0x1bfc00000, data 0x260db54/0x280d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:45.103947+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5230757 data_alloc: 218103808 data_used: 17928192
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536674304 unmapped: 82853888 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:46.104127+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a0c1000/0x0/0x1bfc00000, data 0x260db54/0x280d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536674304 unmapped: 82853888 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.525684357s of 12.534331322s, submitted: 2
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:47.104358+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538509312 unmapped: 81018880 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:48.104561+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538525696 unmapped: 81002496 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:49.104779+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538525696 unmapped: 81002496 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:50.104947+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1995ac000/0x0/0x1bfc00000, data 0x3122b54/0x3322000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5330569 data_alloc: 218103808 data_used: 18362368
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538525696 unmapped: 81002496 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:51.105144+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1995ac000/0x0/0x1bfc00000, data 0x3122b54/0x3322000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac10c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538591232 unmapped: 88293376 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbac10c00 session 0x562dbd84e000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:52.105377+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538591232 unmapped: 88293376 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1989c7000/0x0/0x1bfc00000, data 0x3d07b54/0x3f07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:53.105529+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538591232 unmapped: 88293376 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:54.105738+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538591232 unmapped: 88293376 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:55.105941+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5419695 data_alloc: 218103808 data_used: 18366464
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538591232 unmapped: 88293376 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:56.106159+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538591232 unmapped: 88293376 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:57.106341+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538591232 unmapped: 88293376 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:58.106492+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538599424 unmapped: 88285184 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1989c7000/0x0/0x1bfc00000, data 0x3d07b54/0x3f07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:59.106631+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538599424 unmapped: 88285184 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1989c7000/0x0/0x1bfc00000, data 0x3d07b54/0x3f07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:00.106774+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5419695 data_alloc: 218103808 data_used: 18366464
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538599424 unmapped: 88285184 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:01.106989+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc04af800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc04af800 session 0x562dbc8f2780
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538599424 unmapped: 88285184 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:02.107223+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc04af000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc04af000 session 0x562dbd84a960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538599424 unmapped: 88285184 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1989c7000/0x0/0x1bfc00000, data 0x3d07b54/0x3f07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:03.107446+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538599424 unmapped: 88285184 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:04.107615+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac10c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbac10c00 session 0x562dbd84f680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a4800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.902978897s of 17.200611115s, submitted: 92
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbb5a4800 session 0x562dbd5d8780
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538566656 unmapped: 88317952 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:05.107786+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc707c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5425836 data_alloc: 218103808 data_used: 18366464
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538566656 unmapped: 88317952 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:06.107950+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19899b000/0x0/0x1bfc00000, data 0x3d31b87/0x3f33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538566656 unmapped: 88317952 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:07.108145+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538566656 unmapped: 88317952 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:08.108327+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538566656 unmapped: 88317952 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:09.108503+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538574848 unmapped: 88309760 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:10.108721+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5425836 data_alloc: 218103808 data_used: 18366464
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538574848 unmapped: 88309760 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:11.108904+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538574848 unmapped: 88309760 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:12.109084+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19899b000/0x0/0x1bfc00000, data 0x3d31b87/0x3f33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538574848 unmapped: 88309760 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:13.109246+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1b800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 539549696 unmapped: 87334912 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:14.109439+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540991488 unmapped: 85893120 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:15.109606+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19899b000/0x0/0x1bfc00000, data 0x3d31b87/0x3f33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5512716 data_alloc: 234881024 data_used: 29069312
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540991488 unmapped: 85893120 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:16.109801+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540991488 unmapped: 85893120 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:17.109979+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540991488 unmapped: 85893120 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19899b000/0x0/0x1bfc00000, data 0x3d31b87/0x3f33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:18.110190+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19899b000/0x0/0x1bfc00000, data 0x3d31b87/0x3f33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540991488 unmapped: 85893120 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:19.110376+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540991488 unmapped: 85893120 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:20.110515+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5512716 data_alloc: 234881024 data_used: 29069312
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540991488 unmapped: 85893120 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:21.110690+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540999680 unmapped: 85884928 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:22.110885+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540999680 unmapped: 85884928 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:23.111117+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19899b000/0x0/0x1bfc00000, data 0x3d31b87/0x3f33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540999680 unmapped: 85884928 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:24.111271+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540999680 unmapped: 85884928 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.574790955s of 20.601783752s, submitted: 7
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:25.111488+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19899b000/0x0/0x1bfc00000, data 0x3d31b87/0x3f33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5623834 data_alloc: 234881024 data_used: 29339648
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 544997376 unmapped: 81887232 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:26.111638+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545292288 unmapped: 81592320 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:27.111857+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545292288 unmapped: 81592320 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:28.112097+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbe2e1000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbe2e1000 session 0x562dbc8dfa40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc73b000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbc73b000 session 0x562dbab8bc20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545308672 unmapped: 81575936 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbc834c00 session 0x562dbff905a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbc736400 session 0x562dbc8f2d20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:29.112258+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac10c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x195fb5000/0x0/0x1bfc00000, data 0x5576be9/0x5779000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545333248 unmapped: 81551360 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbac10c00 session 0x562dbd5fe5a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:30.112435+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x197132000/0x0/0x1bfc00000, data 0x41abb77/0x43ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5507283 data_alloc: 234881024 data_used: 22052864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545333248 unmapped: 81551360 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:31.112627+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545333248 unmapped: 81551360 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:32.112859+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545333248 unmapped: 81551360 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:33.113053+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x197360000/0x0/0x1bfc00000, data 0x41cdb77/0x43ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545333248 unmapped: 81551360 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:34.113232+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545333248 unmapped: 81551360 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:35.113410+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5506939 data_alloc: 234881024 data_used: 22061056
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545333248 unmapped: 81551360 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:36.113587+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545333248 unmapped: 81551360 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:37.113776+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x197360000/0x0/0x1bfc00000, data 0x41cdb77/0x43ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545341440 unmapped: 81543168 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:38.113888+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545341440 unmapped: 81543168 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:39.114110+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545341440 unmapped: 81543168 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:40.114278+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.408903122s of 15.796367645s, submitted: 162
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5507167 data_alloc: 234881024 data_used: 22061056
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546390016 unmapped: 80494592 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5acd000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc5acd000 session 0x562dbba82960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:41.114411+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbf6ee800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546586624 unmapped: 80297984 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:42.114543+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc10fec00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 549724160 unmapped: 77160448 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x197336000/0x0/0x1bfc00000, data 0x41f6b9a/0x43f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:43.114664+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbaa1b800 session 0x562dbd5ff2c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbc707c00 session 0x562dc822e1e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1b800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547348480 unmapped: 79536128 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbaa1b800 session 0x562dbd5feb40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:44.114801+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547348480 unmapped: 79536128 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:45.114971+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5276119 data_alloc: 218103808 data_used: 18386944
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547348480 unmapped: 79536128 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:46.115115+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547348480 unmapped: 79536128 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:47.115282+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547348480 unmapped: 79536128 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:48.115514+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x198e57000/0x0/0x1bfc00000, data 0x26d6b67/0x28d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547348480 unmapped: 79536128 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:49.115687+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547348480 unmapped: 79536128 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:50.115876+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5276119 data_alloc: 218103808 data_used: 18386944
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547348480 unmapped: 79536128 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:51.116054+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547348480 unmapped: 79536128 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:52.116178+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.169054031s of 12.246128082s, submitted: 36
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547348480 unmapped: 79536128 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x198e57000/0x0/0x1bfc00000, data 0x26d6b67/0x28d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [0,0,0,3])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:53.116327+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x198e57000/0x0/0x1bfc00000, data 0x26d6b67/0x28d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546766848 unmapped: 80117760 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:54.116506+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19826a000/0x0/0x1bfc00000, data 0x32c4b67/0x34c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546988032 unmapped: 79896576 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:55.116759+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5383197 data_alloc: 234881024 data_used: 18669568
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546988032 unmapped: 79896576 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:56.116930+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546988032 unmapped: 79896576 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:57.117124+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546988032 unmapped: 79896576 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:58.117273+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546988032 unmapped: 79896576 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:59.117447+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1981e1000/0x0/0x1bfc00000, data 0x334db67/0x354d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546988032 unmapped: 79896576 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:00.117626+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5380193 data_alloc: 234881024 data_used: 18673664
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546988032 unmapped: 79896576 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:01.117803+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546988032 unmapped: 79896576 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:02.117941+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1981c0000/0x0/0x1bfc00000, data 0x336eb67/0x356e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546988032 unmapped: 79896576 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:03.118139+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546988032 unmapped: 79896576 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:04.118326+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546988032 unmapped: 79896576 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:05.118549+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5380193 data_alloc: 234881024 data_used: 18673664
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546988032 unmapped: 79896576 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:06.118745+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.875093460s of 14.082410812s, submitted: 95
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546996224 unmapped: 79888384 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:07.118966+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1981c0000/0x0/0x1bfc00000, data 0x336eb67/0x356e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547061760 unmapped: 79822848 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:08.119130+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1981ad000/0x0/0x1bfc00000, data 0x3380b67/0x3580000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547061760 unmapped: 79822848 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:09.119262+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1981ad000/0x0/0x1bfc00000, data 0x3380b67/0x3580000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547061760 unmapped: 79822848 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:10.119429+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5381577 data_alloc: 234881024 data_used: 18681856
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547061760 unmapped: 79822848 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:11.119608+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1981ad000/0x0/0x1bfc00000, data 0x3380b67/0x3580000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 384 handle_osd_map epochs [384,385], i have 384, src has [1,385]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac10c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbabaf000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547069952 unmapped: 79814656 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:12.119758+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 385 ms_handle_reset con 0x562dbabaf000 session 0x562dbd573680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 385 ms_handle_reset con 0x562dbac10c00 session 0x562dbd1b3e00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbe2e1000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 549101568 unmapped: 77783040 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:13.119970+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #59. Immutable memtables: 15.
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559349760 unmapped: 67534848 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:14.120147+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 385 ms_handle_reset con 0x562dbe2e1000 session 0x562dc822c000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaf8f400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560914432 unmapped: 81928192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 385 handle_osd_map epochs [385,386], i have 385, src has [1,386]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:15.120321+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 386 ms_handle_reset con 0x562dbaf8f400 session 0x562dbd5d9c20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1b800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5570981 data_alloc: 234881024 data_used: 25153536
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560930816 unmapped: 81911808 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 386 handle_osd_map epochs [386,387], i have 386, src has [1,387]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:16.120454+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 387 ms_handle_reset con 0x562dbaa1b800 session 0x562dbd5272c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560930816 unmapped: 81911808 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:17.120840+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 387 heartbeat osd_stat(store_statfs(0x195a9b000/0x0/0x1bfc00000, data 0x48ec154/0x4af1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560939008 unmapped: 81903616 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:18.120995+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560939008 unmapped: 81903616 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:19.121186+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbabaf000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 387 ms_handle_reset con 0x562dbabaf000 session 0x562dbd572780
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac10c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 387 ms_handle_reset con 0x562dbac10c00 session 0x562dbb9de960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc18e4400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 387 ms_handle_reset con 0x562dc18e4400 session 0x562dbb47e1e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560939008 unmapped: 81903616 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:20.121339+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5576603 data_alloc: 234881024 data_used: 25157632
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560939008 unmapped: 81903616 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:21.121479+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 387 heartbeat osd_stat(store_statfs(0x195a9b000/0x0/0x1bfc00000, data 0x48ec154/0x4af1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560939008 unmapped: 81903616 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:22.121640+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa19400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 387 ms_handle_reset con 0x562dbaa19400 session 0x562dc822c5a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa19400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 387 ms_handle_reset con 0x562dbaa19400 session 0x562dc1832780
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1b800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 387 ms_handle_reset con 0x562dbaa1b800 session 0x562dbc8f3680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc43dec00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 387 ms_handle_reset con 0x562dc43dec00 session 0x562dbb56e5a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbabaf000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.199718475s of 15.581806183s, submitted: 41
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 387 ms_handle_reset con 0x562dbabaf000 session 0x562dbd5d92c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac10c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 387 ms_handle_reset con 0x562dbac10c00 session 0x562dbd527680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa19400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 387 ms_handle_reset con 0x562dbaa19400 session 0x562dbd84cd20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1b800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 387 ms_handle_reset con 0x562dbaa1b800 session 0x562dbd5ff0e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbabaf000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 387 ms_handle_reset con 0x562dbabaf000 session 0x562dbabd10e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558120960 unmapped: 84721664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:23.121810+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558120960 unmapped: 84721664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:24.121981+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 387 handle_osd_map epochs [387,388], i have 387, src has [1,388]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 heartbeat osd_stat(store_statfs(0x1950f1000/0x0/0x1bfc00000, data 0x52961c6/0x549d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: get_auth_request con 0x562dbabaec00 auth_method 0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558104576 unmapped: 84738048 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:25.122212+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 heartbeat osd_stat(store_statfs(0x1950ed000/0x0/0x1bfc00000, data 0x5297d05/0x54a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5653084 data_alloc: 234881024 data_used: 25165824
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558120960 unmapped: 84721664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:26.122397+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558120960 unmapped: 84721664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:27.122601+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 heartbeat osd_stat(store_statfs(0x1950ed000/0x0/0x1bfc00000, data 0x5297d05/0x54a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558120960 unmapped: 84721664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:28.122739+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558120960 unmapped: 84721664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:29.122951+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 heartbeat osd_stat(store_statfs(0x1950ed000/0x0/0x1bfc00000, data 0x5297d05/0x54a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558120960 unmapped: 84721664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:30.123067+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbf6ef000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dbf6ef000 session 0x562dc822c1e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5653084 data_alloc: 234881024 data_used: 25165824
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558120960 unmapped: 84721664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:31.123220+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd882400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dbd882400 session 0x562dbc777c20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa19400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dbaa19400 session 0x562dbb549680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1b800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dbaa1b800 session 0x562dbd5d9c20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbabaf000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dbabaf000 session 0x562dc822c000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbf6ef000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dbf6ef000 session 0x562dbd573680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5a82000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc18e5800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dc18e5800 session 0x562dbff912c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dc5a82000 session 0x562dbd5feb40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa19400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dbaa19400 session 0x562dbd5ff2c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1b800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dbaa1b800 session 0x562dbba82960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbabaf000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dbabaf000 session 0x562dbc8f2d20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558137344 unmapped: 84705280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:32.123339+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:33.123483+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558137344 unmapped: 84705280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbf6ef000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dbf6ef000 session 0x562dbab8bc20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa19400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.702510834s of 10.845293999s, submitted: 53
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dbaa19400 session 0x562dbc8dfa40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1b800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbabaf000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:34.123618+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558153728 unmapped: 84688896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 heartbeat osd_stat(store_statfs(0x1950c8000/0x0/0x1bfc00000, data 0x52bbd77/0x54c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:35.123780+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558153728 unmapped: 84688896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5658694 data_alloc: 234881024 data_used: 25174016
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:36.123922+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558153728 unmapped: 84688896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:37.124067+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558178304 unmapped: 84664320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:38.124199+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558178304 unmapped: 84664320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:39.124323+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558178304 unmapped: 84664320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 heartbeat osd_stat(store_statfs(0x1950c8000/0x0/0x1bfc00000, data 0x52bbd77/0x54c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:40.124459+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558178304 unmapped: 84664320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5729094 data_alloc: 234881024 data_used: 31989760
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:41.124608+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558178304 unmapped: 84664320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5acec00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dc5acec00 session 0x562dba8b7c20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:42.124750+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558178304 unmapped: 84664320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5a83000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2102000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:43.124949+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558178304 unmapped: 84664320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:44.125165+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558186496 unmapped: 84656128 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 heartbeat osd_stat(store_statfs(0x1950c8000/0x0/0x1bfc00000, data 0x52bbd77/0x54c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:45.125320+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558186496 unmapped: 84656128 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5776001 data_alloc: 251658240 data_used: 38297600
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:46.125461+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558186496 unmapped: 84656128 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:47.125647+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558186496 unmapped: 84656128 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.278841019s of 14.302386284s, submitted: 9
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:48.125795+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559742976 unmapped: 83099648 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:49.125916+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560070656 unmapped: 82771968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 heartbeat osd_stat(store_statfs(0x194867000/0x0/0x1bfc00000, data 0x5b16d77/0x5d21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [0,0,0,1])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:50.126072+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560144384 unmapped: 82698240 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5855407 data_alloc: 251658240 data_used: 39071744
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:51.126205+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560144384 unmapped: 82698240 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:52.126320+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560144384 unmapped: 82698240 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 heartbeat osd_stat(store_statfs(0x1947da000/0x0/0x1bfc00000, data 0x5b9bd77/0x5da6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:53.126469+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560144384 unmapped: 82698240 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:54.126676+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560144384 unmapped: 82698240 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:55.126843+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564051968 unmapped: 78790656 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5923366 data_alloc: 251658240 data_used: 45199360
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:56.127005+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564232192 unmapped: 78610432 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 heartbeat osd_stat(store_statfs(0x194476000/0x0/0x1bfc00000, data 0x5f0dd77/0x6118000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:57.127192+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564232192 unmapped: 78610432 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:58.127354+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564232192 unmapped: 78610432 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:59.127514+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564232192 unmapped: 78610432 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.490035057s of 11.795151711s, submitted: 117
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dbaa1b800 session 0x562dbd84f680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dbabaf000 session 0x562dbb47e000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd67f000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:00.127765+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561037312 unmapped: 81805312 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 heartbeat osd_stat(store_statfs(0x194477000/0x0/0x1bfc00000, data 0x5f0dd67/0x6117000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dbd67f000 session 0x562dbd526780
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5698756 data_alloc: 234881024 data_used: 34893824
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:01.127964+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560070656 unmapped: 82771968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:02.128119+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560070656 unmapped: 82771968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:03.128290+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560070656 unmapped: 82771968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:04.128463+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560070656 unmapped: 82771968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:05.128643+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560070656 unmapped: 82771968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 heartbeat osd_stat(store_statfs(0x195312000/0x0/0x1bfc00000, data 0x4c3ed05/0x4e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5698756 data_alloc: 234881024 data_used: 34893824
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:06.128797+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560070656 unmapped: 82771968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:07.128987+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560070656 unmapped: 82771968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:08.129114+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560070656 unmapped: 82771968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 heartbeat osd_stat(store_statfs(0x195312000/0x0/0x1bfc00000, data 0x4c3ed05/0x4e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:09.129287+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560070656 unmapped: 82771968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:10.129447+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560070656 unmapped: 82771968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5703556 data_alloc: 234881024 data_used: 35553280
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:11.129606+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560070656 unmapped: 82771968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5ace800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dc5ace800 session 0x562dbecb32c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc57da000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:12.129774+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560070656 unmapped: 82771968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 388 handle_osd_map epochs [388,389], i have 388, src has [1,389]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.805006981s of 12.859222412s, submitted: 23
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 389 ms_handle_reset con 0x562dc57da000 session 0x562dbecb3c20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd8a3800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc43dec00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 389 ms_handle_reset con 0x562dc43dec00 session 0x562dbecb3e00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 389 ms_handle_reset con 0x562dbd8a3800 session 0x562dba8a1860
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc73d400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:13.129978+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575791104 unmapped: 67051520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 389 ms_handle_reset con 0x562dbc73d400 session 0x562dbc8f34a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd8a3800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 389 heartbeat osd_stat(store_statfs(0x1946fe000/0x0/0x1bfc00000, data 0x587595e/0x5a7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:14.130135+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575791104 unmapped: 67051520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 389 handle_osd_map epochs [389,390], i have 389, src has [1,390]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 390 ms_handle_reset con 0x562dbd8a3800 session 0x562dbc7770e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc43dec00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:15.130316+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575848448 unmapped: 66994176 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 390 handle_osd_map epochs [390,391], i have 390, src has [1,391]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 391 ms_handle_reset con 0x562dc43dec00 session 0x562dbb56e780
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5916266 data_alloc: 251658240 data_used: 47558656
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:16.130484+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575905792 unmapped: 66936832 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:17.130681+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc738400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575905792 unmapped: 66936832 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 391 ms_handle_reset con 0x562dbc738400 session 0x562dba8a05a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5acd000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 391 ms_handle_reset con 0x562dc5acd000 session 0x562dbd84ab40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc4da5800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 391 ms_handle_reset con 0x562dc4da5800 session 0x562dba7fb4a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:18.130833+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 391 heartbeat osd_stat(store_statfs(0x193ef6000/0x0/0x1bfc00000, data 0x6079280/0x6285000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575905792 unmapped: 66936832 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc738400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 391 handle_osd_map epochs [391,392], i have 391, src has [1,392]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:19.130960+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572170240 unmapped: 70672384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 392 ms_handle_reset con 0x562dbc738400 session 0x562dbd84ed20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:20.131135+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572178432 unmapped: 70664192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 392 heartbeat osd_stat(store_statfs(0x19532a000/0x0/0x1bfc00000, data 0x4c45f49/0x4e53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5764012 data_alloc: 251658240 data_used: 47558656
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:21.131264+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572178432 unmapped: 70664192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:22.131442+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572178432 unmapped: 70664192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:23.131654+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572178432 unmapped: 70664192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:24.131838+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572178432 unmapped: 70664192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 392 handle_osd_map epochs [392,393], i have 392, src has [1,393]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.179473877s of 12.394300461s, submitted: 48
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:25.131977+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 393 heartbeat osd_stat(store_statfs(0x19532a000/0x0/0x1bfc00000, data 0x4c45f49/0x4e53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: get_auth_request con 0x562dbabae000 auth_method 0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572194816 unmapped: 70647808 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:26.132094+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5767178 data_alloc: 251658240 data_used: 47558656
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 393 ms_handle_reset con 0x562dc5a83000 session 0x562dbb34e5a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 393 ms_handle_reset con 0x562dc2102000 session 0x562dba7fb680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572194816 unmapped: 70647808 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3666c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 393 ms_handle_reset con 0x562dc3666c00 session 0x562dbd5d8b40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:27.132270+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572219392 unmapped: 70623232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:28.132482+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572219392 unmapped: 70623232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1101000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:29.132643+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 393 ms_handle_reset con 0x562dc1101000 session 0x562dc1832b40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572219392 unmapped: 70623232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 393 heartbeat osd_stat(store_statfs(0x195679000/0x0/0x1bfc00000, data 0x48f6a4e/0x4b03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:30.132850+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572219392 unmapped: 70623232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 393 heartbeat osd_stat(store_statfs(0x195679000/0x0/0x1bfc00000, data 0x48f6a4e/0x4b03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:31.133003+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5728205 data_alloc: 251658240 data_used: 44077056
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572219392 unmapped: 70623232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3666c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:32.133162+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 393 handle_osd_map epochs [393,394], i have 393, src has [1,394]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572219392 unmapped: 70623232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 394 ms_handle_reset con 0x562dc3666c00 session 0x562dbd84dc20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:33.133354+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559374336 unmapped: 83468288 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:34.133508+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559374336 unmapped: 83468288 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:35.133711+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559374336 unmapped: 83468288 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:36.133870+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5458850 data_alloc: 234881024 data_used: 21647360
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559374336 unmapped: 83468288 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd883000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.765892029s of 12.027838707s, submitted: 79
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 394 heartbeat osd_stat(store_statfs(0x196bdb000/0x0/0x1bfc00000, data 0x3396689/0x35a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 394 ms_handle_reset con 0x562dbd883000 session 0x562dbd5d8d20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:37.134083+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba77e800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 394 ms_handle_reset con 0x562dba77e800 session 0x562dbd84b860
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554950656 unmapped: 87891968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:38.134179+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554950656 unmapped: 87891968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:39.134316+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 394 ms_handle_reset con 0x562dc10fec00 session 0x562dbd5721e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 394 ms_handle_reset con 0x562dbf6ee800 session 0x562dbc8def00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554958848 unmapped: 87883776 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbcffb000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 394 handle_osd_map epochs [394,395], i have 394, src has [1,395]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dbcffb000 session 0x562dbb56f2c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:40.134434+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542056448 unmapped: 100786176 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 heartbeat osd_stat(store_statfs(0x196a90000/0x0/0x1bfc00000, data 0x273a1a5/0x2946000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:41.134576+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5301797 data_alloc: 218103808 data_used: 8826880
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542056448 unmapped: 100786176 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 heartbeat osd_stat(store_statfs(0x196a90000/0x0/0x1bfc00000, data 0x273a1a5/0x2946000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:42.134698+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542056448 unmapped: 100786176 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:43.134843+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542056448 unmapped: 100786176 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:44.134989+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542056448 unmapped: 100786176 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 heartbeat osd_stat(store_statfs(0x196a90000/0x0/0x1bfc00000, data 0x273a1a5/0x2946000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd8a3000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dbd8a3000 session 0x562dbff91860
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:45.135189+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542056448 unmapped: 100786176 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc738800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dbc738800 session 0x562dbc8f30e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:46.135375+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5301797 data_alloc: 218103808 data_used: 8826880
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542056448 unmapped: 100786176 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:47.135613+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc10fe400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dc10fe400 session 0x562dbc8f2b40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1100400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.382782936s of 10.577653885s, submitted: 83
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542056448 unmapped: 100786176 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dc1100400 session 0x562dbd5d83c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:48.136122+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542064640 unmapped: 100777984 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc738800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 heartbeat osd_stat(store_statfs(0x197837000/0x0/0x1bfc00000, data 0x273a1c8/0x2947000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:49.136264+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542072832 unmapped: 100769792 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbcffb000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:50.136379+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542048256 unmapped: 100794368 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:51.136555+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5376823 data_alloc: 234881024 data_used: 18935808
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542965760 unmapped: 99876864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:52.136687+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542965760 unmapped: 99876864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:53.136890+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542965760 unmapped: 99876864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 heartbeat osd_stat(store_statfs(0x197837000/0x0/0x1bfc00000, data 0x273a1c8/0x2947000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:54.137066+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542965760 unmapped: 99876864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba095c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dba095c00 session 0x562dbd84e960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbad4ac00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dbad4ac00 session 0x562dba8730e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2854c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dc2854c00 session 0x562dc18334a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 heartbeat osd_stat(store_statfs(0x197837000/0x0/0x1bfc00000, data 0x273a1c8/0x2947000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc736400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dbc736400 session 0x562dbd1b25a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1162800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dc1162800 session 0x562dbabfa3c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:55.137260+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba095c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dba095c00 session 0x562dbecb2780
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbad4ac00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dbad4ac00 session 0x562dba8b6d20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc736400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dbc736400 session 0x562dba8b7a40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2854c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dc2854c00 session 0x562dba2854a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 543023104 unmapped: 99819520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:56.137421+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5429297 data_alloc: 234881024 data_used: 18939904
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 543023104 unmapped: 99819520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:57.137597+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 543023104 unmapped: 99819520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 heartbeat osd_stat(store_statfs(0x19725d000/0x0/0x1bfc00000, data 0x2d1223a/0x2f21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:58.137750+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 543023104 unmapped: 99819520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:59.137940+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 543023104 unmapped: 99819520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 heartbeat osd_stat(store_statfs(0x19725d000/0x0/0x1bfc00000, data 0x2d1223a/0x2f21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:00.138118+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 543023104 unmapped: 99819520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbcffa000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dbcffa000 session 0x562dbd572780
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:01.138245+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.624453545s of 13.748756409s, submitted: 50
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5453979 data_alloc: 234881024 data_used: 18939904
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 548265984 unmapped: 94576640 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba095c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dba095c00 session 0x562dbab8b680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 heartbeat osd_stat(store_statfs(0x196e38000/0x0/0x1bfc00000, data 0x313723a/0x3346000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:02.138394+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 heartbeat osd_stat(store_statfs(0x196e38000/0x0/0x1bfc00000, data 0x313723a/0x3346000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbad4ac00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dbad4ac00 session 0x562dbb47fc20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc736400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 549715968 unmapped: 93126656 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dbc736400 session 0x562dbb34ed20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:03.138545+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2854c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa19400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 548503552 unmapped: 94339072 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:04.138737+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550371328 unmapped: 92471296 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:05.138865+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550436864 unmapped: 92405760 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:06.139081+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5551333 data_alloc: 234881024 data_used: 26824704
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550436864 unmapped: 92405760 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:07.139252+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550436864 unmapped: 92405760 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 heartbeat osd_stat(store_statfs(0x196a72000/0x0/0x1bfc00000, data 0x34f723a/0x3706000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:08.139448+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550436864 unmapped: 92405760 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:09.139607+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550436864 unmapped: 92405760 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:10.139753+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550436864 unmapped: 92405760 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 heartbeat osd_stat(store_statfs(0x196a72000/0x0/0x1bfc00000, data 0x34f723a/0x3706000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:11.139894+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5551349 data_alloc: 234881024 data_used: 26824704
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550436864 unmapped: 92405760 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:12.140054+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550436864 unmapped: 92405760 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:13.140198+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550436864 unmapped: 92405760 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:14.140318+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550436864 unmapped: 92405760 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.047923088s of 13.415540695s, submitted: 128
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:15.140465+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbca8dc00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 553410560 unmapped: 89432064 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:16.140614+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5651979 data_alloc: 234881024 data_used: 27148288
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 heartbeat osd_stat(store_statfs(0x195f00000/0x0/0x1bfc00000, data 0x406f23a/0x427e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554885120 unmapped: 87957504 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:17.141076+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554958848 unmapped: 87883776 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:18.141276+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554958848 unmapped: 87883776 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dbca8dc00 session 0x562dbd5ff0e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:19.141436+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554958848 unmapped: 87883776 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:20.141599+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbcffa800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554958848 unmapped: 87883776 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:21.141742+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 395 handle_osd_map epochs [395,396], i have 395, src has [1,396]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5671089 data_alloc: 234881024 data_used: 27164672
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 396 ms_handle_reset con 0x562dbcffa800 session 0x562dbd572b40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554958848 unmapped: 87883776 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:22.141912+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 396 heartbeat osd_stat(store_statfs(0x195cd4000/0x0/0x1bfc00000, data 0x42970f5/0x44a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554958848 unmapped: 87883776 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 396 heartbeat osd_stat(store_statfs(0x195cd4000/0x0/0x1bfc00000, data 0x42970f5/0x44a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:23.142124+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 396 heartbeat osd_stat(store_statfs(0x195cd4000/0x0/0x1bfc00000, data 0x42970f5/0x44a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554958848 unmapped: 87883776 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:24.142331+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbcffa800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba095c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 397 ms_handle_reset con 0x562dba095c00 session 0x562dba0592c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 397 ms_handle_reset con 0x562dbcffa800 session 0x562dbb548b40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbad4ac00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554958848 unmapped: 87883776 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.674940109s of 10.024015427s, submitted: 146
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 397 ms_handle_reset con 0x562dbad4ac00 session 0x562dbd526780
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1b800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:25.322019+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 553779200 unmapped: 89063424 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:26.322208+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5817283 data_alloc: 234881024 data_used: 33517568
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 398 ms_handle_reset con 0x562dbaa1b800 session 0x562dbb549680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc7ce800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 553787392 unmapped: 89055232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:27.322406+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 398 handle_osd_map epochs [398,399], i have 398, src has [1,399]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 399 ms_handle_reset con 0x562dbc7ce800 session 0x562dbd1b3e00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 553787392 unmapped: 89055232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:28.322556+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 553787392 unmapped: 89055232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba095c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 399 ms_handle_reset con 0x562dba095c00 session 0x562dbecb21e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1b800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 399 ms_handle_reset con 0x562dbaa1b800 session 0x562dbb34f680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbad4ac00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 399 ms_handle_reset con 0x562dbad4ac00 session 0x562dbbb154a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 399 heartbeat osd_stat(store_statfs(0x194e32000/0x0/0x1bfc00000, data 0x51336d2/0x534a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:29.322851+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550412288 unmapped: 92430336 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:30.323068+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550412288 unmapped: 92430336 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb701800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc738400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 399 ms_handle_reset con 0x562dbc738400 session 0x562dbb47e780
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 399 ms_handle_reset con 0x562dbb701800 session 0x562dbd5d94a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb701800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 399 ms_handle_reset con 0x562dbb701800 session 0x562dbc7770e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:31.323255+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5816329 data_alloc: 234881024 data_used: 33525760
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550420480 unmapped: 92422144 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 399 heartbeat osd_stat(store_statfs(0x194e28000/0x0/0x1bfc00000, data 0x513d744/0x5356000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:32.323399+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550420480 unmapped: 92422144 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:33.323621+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550420480 unmapped: 92422144 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 399 heartbeat osd_stat(store_statfs(0x194e28000/0x0/0x1bfc00000, data 0x513d744/0x5356000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:34.323808+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550420480 unmapped: 92422144 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 399 handle_osd_map epochs [399,400], i have 399, src has [1,400]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.589864731s of 10.252849579s, submitted: 32
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:35.323987+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550428672 unmapped: 92413952 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5a83000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dc5a83000 session 0x562dbd527a40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbca8dc00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbca8dc00 session 0x562dbecb2000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbcffb800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbcffb800 session 0x562dbbb15e00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:36.324196+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5819623 data_alloc: 234881024 data_used: 33533952
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc54c6800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dc54c6800 session 0x562dbba83860
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550428672 unmapped: 92413952 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc10fec00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc54c6800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dc54c6800 session 0x562dbd7ff4a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dc10fec00 session 0x562dbc8f2b40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb701800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbb701800 session 0x562dbd84f680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbca8dc00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbca8dc00 session 0x562dbd527680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbcffb800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5a83000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dc5a83000 session 0x562dbd7fe1e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb701800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbb701800 session 0x562dbbb230e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbca8dc00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbca8dc00 session 0x562dbbb232c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbcffb800 session 0x562dbb56e5a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc10fec00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dc10fec00 session 0x562dbd1b30e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc54c6800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dc54c6800 session 0x562dbd526f00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc54c6800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dc54c6800 session 0x562dbc8dfc20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb701800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbb701800 session 0x562dbd84fe00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbca8dc00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:37.324365+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbca8dc00 session 0x562dbd84be00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbcffb800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbcffb800 session 0x562dbabfbc20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552034304 unmapped: 90808320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x19457b000/0x0/0x1bfc00000, data 0x59e432e/0x5c02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:38.324526+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552034304 unmapped: 90808320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:39.324732+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552034304 unmapped: 90808320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:40.324891+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552034304 unmapped: 90808320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:41.325091+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5902396 data_alloc: 234881024 data_used: 33533952
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552034304 unmapped: 90808320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:42.325279+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x19457b000/0x0/0x1bfc00000, data 0x59e4367/0x5c02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552034304 unmapped: 90808320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:43.325410+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552034304 unmapped: 90808320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:44.325559+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552034304 unmapped: 90808320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:45.325732+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552034304 unmapped: 90808320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:46.325887+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5902572 data_alloc: 234881024 data_used: 33529856
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552034304 unmapped: 90808320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.640461922s of 11.789118767s, submitted: 56
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc73b400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbc73b400 session 0x562dc822c960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb701800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbb701800 session 0x562dbab8bc20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:47.326089+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbca8dc00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbca8dc00 session 0x562dbd5feb40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550830080 unmapped: 92012544 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbcffb800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbcffb800 session 0x562dc822c1e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x19457c000/0x0/0x1bfc00000, data 0x59e4367/0x5c02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:48.326265+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc54c6800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dc54c6800 session 0x562dbbb6b2c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1163800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550830080 unmapped: 92012544 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dc1163800 session 0x562dbd84b680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb701800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbb701800 session 0x562dbd526f00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbca8dc00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbca8dc00 session 0x562dbb56e5a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbcffb800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc54c6800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:49.326448+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a4c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc834c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550993920 unmapped: 91848704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:50.326597+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550993920 unmapped: 91848704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x194530000/0x0/0x1bfc00000, data 0x5a2c3cc/0x5c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:51.326738+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5915091 data_alloc: 234881024 data_used: 33542144
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550993920 unmapped: 91848704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:52.326865+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559505408 unmapped: 83337216 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:53.327131+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 81100800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:54.327499+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 81100800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:55.327638+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 81100800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x194530000/0x0/0x1bfc00000, data 0x5a2c3cc/0x5c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:56.327751+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6058291 data_alloc: 251658240 data_used: 52396032
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 81100800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:57.327893+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 81100800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:58.328011+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 81100800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:59.328173+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 81100800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:00.328294+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 81100800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:01.328418+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x194530000/0x0/0x1bfc00000, data 0x5a2c3cc/0x5c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6058291 data_alloc: 251658240 data_used: 52396032
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 81100800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:02.328601+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 81100800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:03.328716+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 81100800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.273065567s of 17.347787857s, submitted: 23
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:04.328819+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569090048 unmapped: 73752576 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:05.328952+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569851904 unmapped: 72990720 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:06.329092+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6205106 data_alloc: 268435456 data_used: 55169024
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570286080 unmapped: 72556544 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x193697000/0x0/0x1bfc00000, data 0x68bd3cc/0x6adf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:07.329249+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570286080 unmapped: 72556544 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:08.329437+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570286080 unmapped: 72556544 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:09.329615+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x193697000/0x0/0x1bfc00000, data 0x68bd3cc/0x6adf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570351616 unmapped: 72491008 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x193697000/0x0/0x1bfc00000, data 0x68bd3cc/0x6adf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:10.329766+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571031552 unmapped: 71811072 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbcffb800 session 0x562dbc8f2b40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dc54c6800 session 0x562dbc777a40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x193697000/0x0/0x1bfc00000, data 0x68bd3cc/0x6adf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:11.329896+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc7cf800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6203724 data_alloc: 268435456 data_used: 55177216
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567336960 unmapped: 75505664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbc7cf800 session 0x562dbd5730e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:12.330049+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567345152 unmapped: 75497472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:13.330225+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567345152 unmapped: 75497472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:14.330368+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567345152 unmapped: 75497472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x194d8d000/0x0/0x1bfc00000, data 0x51d1328/0x53ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:15.330517+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567345152 unmapped: 75497472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:16.330711+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5955331 data_alloc: 251658240 data_used: 44916736
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567345152 unmapped: 75497472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:17.330893+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.040993690s of 13.372898102s, submitted: 214
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567353344 unmapped: 75489280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:18.331165+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567353344 unmapped: 75489280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:19.331306+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567353344 unmapped: 75489280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:20.331470+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x194d8d000/0x0/0x1bfc00000, data 0x51d1328/0x53ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567353344 unmapped: 75489280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x194d8d000/0x0/0x1bfc00000, data 0x51d1328/0x53ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:21.331603+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5955331 data_alloc: 251658240 data_used: 44916736
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567353344 unmapped: 75489280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:22.331757+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567361536 unmapped: 75481088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:23.331893+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dc2854c00 session 0x562dbd5d9680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbaa19400 session 0x562dc1832d20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567361536 unmapped: 75481088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac11c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x194d8e000/0x0/0x1bfc00000, data 0x51d2328/0x53f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:24.332014+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567361536 unmapped: 75481088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:25.332172+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567377920 unmapped: 75464704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:26.332948+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbac11c00 session 0x562dba285e00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5774259 data_alloc: 251658240 data_used: 38666240
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567386112 unmapped: 75456512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x195b85000/0x0/0x1bfc00000, data 0x40302b6/0x424c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:27.333195+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567386112 unmapped: 75456512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:28.333460+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567386112 unmapped: 75456512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:29.333620+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567386112 unmapped: 75456512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:30.334184+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567386112 unmapped: 75456512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:31.334315+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5774579 data_alloc: 251658240 data_used: 38674432
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567386112 unmapped: 75456512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x195b85000/0x0/0x1bfc00000, data 0x40302b6/0x424c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:32.334442+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567386112 unmapped: 75456512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:33.334769+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567386112 unmapped: 75456512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:34.335171+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd883800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567386112 unmapped: 75456512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.279706955s of 17.971815109s, submitted: 44
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:35.335467+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x195b85000/0x0/0x1bfc00000, data 0x40302b6/0x424c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570195968 unmapped: 72646656 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:36.335618+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5802171 data_alloc: 251658240 data_used: 38973440
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570204160 unmapped: 72638464 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:37.335782+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570671104 unmapped: 72171520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x195b04000/0x0/0x1bfc00000, data 0x445d4b6/0x467a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:38.336099+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570679296 unmapped: 72163328 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:39.336362+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570679296 unmapped: 72163328 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:40.336661+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570679296 unmapped: 72163328 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbd883800 session 0x562dbd7fe960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:41.336927+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5823598 data_alloc: 251658240 data_used: 41078784
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568393728 unmapped: 74448896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x195b03000/0x0/0x1bfc00000, data 0x445e4b6/0x467b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:42.337206+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568393728 unmapped: 74448896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:43.337329+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568393728 unmapped: 74448896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:44.337548+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbb5a4c00 session 0x562dbd84f860
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbc834c00 session 0x562dbd7ff4a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568393728 unmapped: 74448896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa19400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:45.337739+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.340224266s of 10.018557549s, submitted: 24
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbaa19400 session 0x562dbc8f2780
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568418304 unmapped: 74424320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:46.337961+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5781271 data_alloc: 251658240 data_used: 40714240
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x195dc5000/0x0/0x1bfc00000, data 0x419e411/0x43b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568418304 unmapped: 74424320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:47.338178+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568418304 unmapped: 74424320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:48.338361+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568418304 unmapped: 74424320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:49.338487+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x195dc5000/0x0/0x1bfc00000, data 0x419e411/0x43b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568418304 unmapped: 74424320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:50.338682+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568418304 unmapped: 74424320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:51.338869+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1163c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5781271 data_alloc: 251658240 data_used: 40714240
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568418304 unmapped: 74424320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x195dc5000/0x0/0x1bfc00000, data 0x419e411/0x43b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dc1163c00 session 0x562dbd7ff0e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc18e4800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:52.339076+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dc18e4800 session 0x562dbc8de000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568418304 unmapped: 74424320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:53.339265+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568426496 unmapped: 74416128 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc68e0800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:54.339393+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 400 handle_osd_map epochs [400,401], i have 400, src has [1,401]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 401 ms_handle_reset con 0x562dc68e0800 session 0x562dbbac14a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568442880 unmapped: 74399744 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:55.339546+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa19400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 401 ms_handle_reset con 0x562dbaa19400 session 0x562dba8a0960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc834c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.123268127s of 10.318778992s, submitted: 56
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568442880 unmapped: 74399744 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:56.339703+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 401 handle_osd_map epochs [401,402], i have 401, src has [1,402]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 402 ms_handle_reset con 0x562dbc834c00 session 0x562dbd84ab40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 402 heartbeat osd_stat(store_statfs(0x196e60000/0x0/0x1bfc00000, data 0x3107d88/0x331d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5572465 data_alloc: 234881024 data_used: 27693056
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568442880 unmapped: 74399744 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:57.339902+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568442880 unmapped: 74399744 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:58.340090+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568442880 unmapped: 74399744 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 402 heartbeat osd_stat(store_statfs(0x196e5d000/0x0/0x1bfc00000, data 0x3109a51/0x3320000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:59.340228+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568442880 unmapped: 74399744 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 402 heartbeat osd_stat(store_statfs(0x196e5d000/0x0/0x1bfc00000, data 0x3109a51/0x3320000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:00.340369+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568451072 unmapped: 74391552 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 403 ms_handle_reset con 0x562dbcffb000 session 0x562dbecb30e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 403 ms_handle_reset con 0x562dbc738800 session 0x562dbd1b3c20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:01.340576+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc43dd800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5575235 data_alloc: 234881024 data_used: 27693056
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568459264 unmapped: 74383360 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:02.340697+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554188800 unmapped: 88653824 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 403 ms_handle_reset con 0x562dc43dd800 session 0x562dbecb2000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:03.340877+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:04.341111+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:05.341230+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x197d6c000/0x0/0x1bfc00000, data 0x1f7a527/0x2190000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:06.341395+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5333061 data_alloc: 218103808 data_used: 10977280
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:07.341566+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:08.341722+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:09.341860+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x197fea000/0x0/0x1bfc00000, data 0x1f7c066/0x2193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:10.341995+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:11.342129+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc7ce400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dbc7ce400 session 0x562dbd5d9860
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc54c6400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5333061 data_alloc: 218103808 data_used: 10977280
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.315015793s of 16.486459732s, submitted: 80
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dc54c6400 session 0x562dba7fb680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:12.342266+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:13.342404+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:14.342539+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:15.342670+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1981ec000/0x0/0x1bfc00000, data 0x1d7ae66/0x1f91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:16.342794+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5313558 data_alloc: 218103808 data_used: 8871936
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:17.342951+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1981ec000/0x0/0x1bfc00000, data 0x1d7ae66/0x1f91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:18.343087+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:19.343208+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:20.343336+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1981ec000/0x0/0x1bfc00000, data 0x1d7ae66/0x1f91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:21.343465+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5313718 data_alloc: 218103808 data_used: 8876032
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:22.343601+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:23.343735+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:24.343923+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1981ec000/0x0/0x1bfc00000, data 0x1d7ae66/0x1f91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:25.344102+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554237952 unmapped: 88604672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:26.344284+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1981ec000/0x0/0x1bfc00000, data 0x1d7ae66/0x1f91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5313718 data_alloc: 218103808 data_used: 8876032
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554237952 unmapped: 88604672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:27.344488+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554237952 unmapped: 88604672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:28.344630+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554237952 unmapped: 88604672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:29.344796+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1981ec000/0x0/0x1bfc00000, data 0x1d7ae66/0x1f91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554237952 unmapped: 88604672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:30.344968+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554237952 unmapped: 88604672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:31.345143+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5313718 data_alloc: 218103808 data_used: 8876032
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554237952 unmapped: 88604672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1981ec000/0x0/0x1bfc00000, data 0x1d7ae66/0x1f91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:32.345284+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554237952 unmapped: 88604672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:33.345462+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1981ec000/0x0/0x1bfc00000, data 0x1d7ae66/0x1f91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554246144 unmapped: 88596480 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:34.345645+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554246144 unmapped: 88596480 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:35.345794+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554246144 unmapped: 88596480 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:36.345995+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5313718 data_alloc: 218103808 data_used: 8876032
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554246144 unmapped: 88596480 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:37.346244+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554246144 unmapped: 88596480 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc4da4000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dc4da4000 session 0x562dbd1b2960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a5800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dbb5a5800 session 0x562dba872780
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba6a1000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dba6a1000 session 0x562dbabd03c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:38.346378+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a5800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dbb5a5800 session 0x562dbb9dfc20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc7ce400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.329820633s of 26.333835602s, submitted: 1
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dbc7ce400 session 0x562dbd5721e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc4da4000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dc4da4000 session 0x562dbff90f00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc54c6400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dc54c6400 session 0x562dbb9df0e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc7cf400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dbc7cf400 session 0x562dc1832960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a5800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dbb5a5800 session 0x562dbd84e3c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554270720 unmapped: 88571904 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1974d5000/0x0/0x1bfc00000, data 0x2a91e76/0x2ca9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:39.346509+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554270720 unmapped: 88571904 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:40.346695+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554270720 unmapped: 88571904 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:41.346851+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1974d5000/0x0/0x1bfc00000, data 0x2a91e76/0x2ca9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5421124 data_alloc: 218103808 data_used: 8876032
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554278912 unmapped: 88563712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:42.347076+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554278912 unmapped: 88563712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:43.347216+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1974d5000/0x0/0x1bfc00000, data 0x2a91e76/0x2ca9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554278912 unmapped: 88563712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:44.347361+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554278912 unmapped: 88563712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:45.347488+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554278912 unmapped: 88563712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:46.347689+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5421124 data_alloc: 218103808 data_used: 8876032
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554278912 unmapped: 88563712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:47.347859+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554278912 unmapped: 88563712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:48.348093+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554278912 unmapped: 88563712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:49.348286+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1974d5000/0x0/0x1bfc00000, data 0x2a91e76/0x2ca9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554287104 unmapped: 88555520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:50.348439+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1974d5000/0x0/0x1bfc00000, data 0x2a91e76/0x2ca9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554287104 unmapped: 88555520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:51.348604+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5421124 data_alloc: 218103808 data_used: 8876032
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554287104 unmapped: 88555520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:52.348761+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554287104 unmapped: 88555520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:53.348904+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1974d5000/0x0/0x1bfc00000, data 0x2a91e76/0x2ca9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554287104 unmapped: 88555520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:54.349138+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554287104 unmapped: 88555520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:55.349331+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554287104 unmapped: 88555520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:56.349475+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5421124 data_alloc: 218103808 data_used: 8876032
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554287104 unmapped: 88555520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:57.349694+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554303488 unmapped: 88539136 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:58.349869+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 76K writes, 315K keys, 76K commit groups, 1.0 writes per commit group, ingest: 0.32 GB, 0.05 MB/s
                                           Cumulative WAL: 76K writes, 27K syncs, 2.76 writes per sync, written: 0.32 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4519 writes, 18K keys, 4519 commit groups, 1.0 writes per commit group, ingest: 19.24 MB, 0.03 MB/s
                                           Interval WAL: 4519 writes, 1796 syncs, 2.52 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554303488 unmapped: 88539136 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:59.350076+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba77f000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.789453506s of 20.890707016s, submitted: 20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dba77f000 session 0x562dbd1b2960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1974d5000/0x0/0x1bfc00000, data 0x2a91e76/0x2ca9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1974d5000/0x0/0x1bfc00000, data 0x2a91e76/0x2ca9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554319872 unmapped: 88522752 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc737c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2854c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:00.350280+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554319872 unmapped: 88522752 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:01.350435+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5421489 data_alloc: 218103808 data_used: 8884224
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554319872 unmapped: 88522752 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:02.350599+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1974d5000/0x0/0x1bfc00000, data 0x2a91e76/0x2ca9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554295296 unmapped: 88547328 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:03.350788+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554295296 unmapped: 88547328 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:04.350961+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1974d5000/0x0/0x1bfc00000, data 0x2a91e76/0x2ca9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554295296 unmapped: 88547328 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:05.351111+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554295296 unmapped: 88547328 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:06.351258+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5508529 data_alloc: 234881024 data_used: 21176320
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1974d5000/0x0/0x1bfc00000, data 0x2a91e76/0x2ca9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554295296 unmapped: 88547328 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:07.351458+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554295296 unmapped: 88547328 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:08.351732+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554295296 unmapped: 88547328 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:09.351882+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554295296 unmapped: 88547328 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:10.352206+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1974d5000/0x0/0x1bfc00000, data 0x2a91e76/0x2ca9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554295296 unmapped: 88547328 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:11.352454+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5508529 data_alloc: 234881024 data_used: 21176320
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554295296 unmapped: 88547328 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:12.352599+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.633522987s of 13.655125618s, submitted: 7
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558006272 unmapped: 84836352 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:13.352793+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x19672a000/0x0/0x1bfc00000, data 0x3834e76/0x3a4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560463872 unmapped: 82378752 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:14.352956+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559759360 unmapped: 83083264 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:15.353134+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559759360 unmapped: 83083264 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1966a2000/0x0/0x1bfc00000, data 0x38c4e76/0x3adc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:16.353350+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5635579 data_alloc: 234881024 data_used: 22097920
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559759360 unmapped: 83083264 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:17.353569+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1966a2000/0x0/0x1bfc00000, data 0x38c4e76/0x3adc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559759360 unmapped: 83083264 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:18.353758+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559759360 unmapped: 83083264 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:19.353926+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:20.354084+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x196681000/0x0/0x1bfc00000, data 0x38e5e76/0x3afd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:21.354234+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5634223 data_alloc: 234881024 data_used: 22106112
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:22.354420+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x196681000/0x0/0x1bfc00000, data 0x38e5e76/0x3afd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:23.354572+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:24.354707+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x196681000/0x0/0x1bfc00000, data 0x38e5e76/0x3afd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:25.520138+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.650030136s of 12.929843903s, submitted: 104
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:26.520415+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5634279 data_alloc: 234881024 data_used: 22106112
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:27.520697+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:28.520851+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:29.521099+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x19667b000/0x0/0x1bfc00000, data 0x38ebe76/0x3b03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:30.521312+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:31.521601+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x19667b000/0x0/0x1bfc00000, data 0x38ebe76/0x3b03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5634279 data_alloc: 234881024 data_used: 22106112
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:32.521867+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:33.522092+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:34.522260+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559693824 unmapped: 83148800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x196678000/0x0/0x1bfc00000, data 0x38eee76/0x3b06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:35.522462+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc68e1000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dc68e1000 session 0x562dbd1b30e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3666000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559693824 unmapped: 83148800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dc3666000 session 0x562dbd84e1e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:36.522658+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5637555 data_alloc: 234881024 data_used: 23416832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559693824 unmapped: 83148800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:37.522866+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559693824 unmapped: 83148800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:38.523080+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc739400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.833637238s of 12.855980873s, submitted: 4
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dbc739400 session 0x562dbb47e000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x196678000/0x0/0x1bfc00000, data 0x38eee76/0x3b06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559693824 unmapped: 83148800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:39.523284+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x196677000/0x0/0x1bfc00000, data 0x38eee86/0x3b07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559693824 unmapped: 83148800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:40.523440+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x196677000/0x0/0x1bfc00000, data 0x38eee86/0x3b07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559693824 unmapped: 83148800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:41.523742+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5640703 data_alloc: 234881024 data_used: 23441408
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559693824 unmapped: 83148800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:42.523921+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5a82000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dc5a82000 session 0x562dbab8a3c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc57da000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dc57da000 session 0x562dbd1b21e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559693824 unmapped: 83148800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:43.524128+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559693824 unmapped: 83148800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc835800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:44.524295+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dbc835800 session 0x562dc822c960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaba0c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dbaba0c00 session 0x562dbc8f2b40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559783936 unmapped: 83058688 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:45.524499+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559783936 unmapped: 83058688 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1960c9000/0x0/0x1bfc00000, data 0x3e9be86/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:46.524625+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5685536 data_alloc: 234881024 data_used: 23441408
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559783936 unmapped: 83058688 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:47.524809+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559783936 unmapped: 83058688 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:48.524957+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559783936 unmapped: 83058688 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:49.525111+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a5c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.471217155s of 10.727568626s, submitted: 25
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559783936 unmapped: 83058688 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:50.525285+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 405 heartbeat osd_stat(store_statfs(0x1960c6000/0x0/0x1bfc00000, data 0x3e9dadf/0x40b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560865280 unmapped: 81977344 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc721c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:51.525396+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 406 ms_handle_reset con 0x562dbc721c00 session 0x562dbd7ff0e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5693801 data_alloc: 234881024 data_used: 23449600
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560881664 unmapped: 81960960 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 406 handle_osd_map epochs [406,407], i have 406, src has [1,407]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:52.525506+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaba0c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc835800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 407 ms_handle_reset con 0x562dbaba0c00 session 0x562dbd5d9860
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560717824 unmapped: 82124800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 407 heartbeat osd_stat(store_statfs(0x195dad000/0x0/0x1bfc00000, data 0x41b1463/0x43ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 407 handle_osd_map epochs [407,408], i have 407, src has [1,408]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:53.525631+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 408 ms_handle_reset con 0x562dbc835800 session 0x562dbba83860
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 408 ms_handle_reset con 0x562dbb5a5c00 session 0x562dbb56f4a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560807936 unmapped: 82034688 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:54.525746+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560848896 unmapped: 81993728 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:55.525871+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc54c7c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 408 ms_handle_reset con 0x562dc54c7c00 session 0x562dbabfa3c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560857088 unmapped: 81985536 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:56.526094+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc68e1000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 408 ms_handle_reset con 0x562dc68e1000 session 0x562dbb9de5a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5885576 data_alloc: 234881024 data_used: 23461888
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560824320 unmapped: 82018304 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 408 heartbeat osd_stat(store_statfs(0x19449c000/0x0/0x1bfc00000, data 0x56b313a/0x58d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:57.526243+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaba0c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a5c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 408 ms_handle_reset con 0x562dbb5a5c00 session 0x562dbff914a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc835800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc54c7c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559374336 unmapped: 83468288 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:58.526498+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 408 ms_handle_reset con 0x562dbc835800 session 0x562dbecb2960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559603712 unmapped: 83238912 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:59.526651+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 408 handle_osd_map epochs [408,409], i have 408, src has [1,409]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.030683517s of 10.079618454s, submitted: 340
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 409 ms_handle_reset con 0x562dc54c7c00 session 0x562dbb9df860
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559579136 unmapped: 83263488 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 heartbeat osd_stat(store_statfs(0x19387f000/0x0/0x1bfc00000, data 0x62cddaf/0x64ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:00.526821+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dbaba0c00 session 0x562dbd7ff2c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559579136 unmapped: 83263488 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:01.526971+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5987540 data_alloc: 234881024 data_used: 23470080
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559587328 unmapped: 83255296 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:02.527176+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 heartbeat osd_stat(store_statfs(0x19387b000/0x0/0x1bfc00000, data 0x62cf8ee/0x64f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559595520 unmapped: 83247104 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:03.527319+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc68e0400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dc68e0400 session 0x562dbd5d8960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559595520 unmapped: 83247104 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaba0c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dbaba0c00 session 0x562dbc8f2780
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:04.527440+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd882c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dbd882c00 session 0x562dba872d20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2102c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dc2102c00 session 0x562dbd84dc20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559595520 unmapped: 83247104 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd67e800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbf6ef800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:05.527558+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559595520 unmapped: 83247104 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:06.527695+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6048401 data_alloc: 234881024 data_used: 32047104
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563273728 unmapped: 79568896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:07.527863+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563273728 unmapped: 79568896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:08.528004+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 heartbeat osd_stat(store_statfs(0x19387d000/0x0/0x1bfc00000, data 0x62cf8ee/0x64f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563273728 unmapped: 79568896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:09.528140+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563273728 unmapped: 79568896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:10.528268+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563273728 unmapped: 79568896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:11.528549+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6078161 data_alloc: 251658240 data_used: 36192256
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563273728 unmapped: 79568896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:12.528694+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563273728 unmapped: 79568896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:13.528994+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563273728 unmapped: 79568896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 heartbeat osd_stat(store_statfs(0x19387d000/0x0/0x1bfc00000, data 0x62cf8ee/0x64f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:14.529069+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.160544395s of 15.233458519s, submitted: 31
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 heartbeat osd_stat(store_statfs(0x19387d000/0x0/0x1bfc00000, data 0x62cf8ee/0x64f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563290112 unmapped: 79552512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:15.529189+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563290112 unmapped: 79552512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:16.529306+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6079081 data_alloc: 251658240 data_used: 36192256
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563298304 unmapped: 79544320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:17.529458+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565846016 unmapped: 76996608 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:18.529606+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc834c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dbc834c00 session 0x562dbbb230e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbf563400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb701400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566075392 unmapped: 76767232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:19.529761+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 heartbeat osd_stat(store_statfs(0x193302000/0x0/0x1bfc00000, data 0x683b8ee/0x6a5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566575104 unmapped: 76267520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:20.529876+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571523072 unmapped: 71319552 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:21.530020+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6225035 data_alloc: 251658240 data_used: 48607232
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571523072 unmapped: 71319552 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:22.530222+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571531264 unmapped: 71311360 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:23.530394+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 heartbeat osd_stat(store_statfs(0x1932ff000/0x0/0x1bfc00000, data 0x683e8ee/0x6a60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571531264 unmapped: 71311360 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:24.530566+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571531264 unmapped: 71311360 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:25.530710+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.780009270s of 11.007946014s, submitted: 78
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571564032 unmapped: 71278592 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:26.530842+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6217183 data_alloc: 251658240 data_used: 48607232
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571564032 unmapped: 71278592 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:27.531096+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571564032 unmapped: 71278592 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:28.531210+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 heartbeat osd_stat(store_statfs(0x193309000/0x0/0x1bfc00000, data 0x68438ee/0x6a65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571564032 unmapped: 71278592 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:29.531364+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571564032 unmapped: 71278592 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:30.531614+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571564032 unmapped: 71278592 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:31.531755+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6363169 data_alloc: 251658240 data_used: 49393664
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dbd67e800 session 0x562dbd5feb40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dbf6ef800 session 0x562dbd5d92c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575488000 unmapped: 67354624 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:32.531890+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaba0c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dbaba0c00 session 0x562dc822e780
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575586304 unmapped: 67256320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:33.532045+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575586304 unmapped: 67256320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:34.532221+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 heartbeat osd_stat(store_statfs(0x194b86000/0x0/0x1bfc00000, data 0x68348ee/0x6220000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575586304 unmapped: 67256320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:35.532381+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575586304 unmapped: 67256320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:36.532544+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.567891121s of 11.008643150s, submitted: 149
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6146745 data_alloc: 251658240 data_used: 36081664
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 576643072 unmapped: 66199552 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:37.532797+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 576643072 unmapped: 66199552 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:38.533013+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 heartbeat osd_stat(store_statfs(0x194b8b000/0x0/0x1bfc00000, data 0x68378ee/0x6223000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 576643072 unmapped: 66199552 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:39.533257+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 576643072 unmapped: 66199552 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:40.533439+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 heartbeat osd_stat(store_statfs(0x194b87000/0x0/0x1bfc00000, data 0x683a8ee/0x6226000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 576643072 unmapped: 66199552 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:41.533613+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6147625 data_alloc: 251658240 data_used: 36089856
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 576651264 unmapped: 66191360 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:42.533846+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 576659456 unmapped: 66183168 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:43.534080+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dbf563400 session 0x562dbc8de000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dbb701400 session 0x562dbff912c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 576659456 unmapped: 66183168 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:44.534282+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd8a3c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dbd8a3c00 session 0x562dbb4cb0e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 576659456 unmapped: 66183168 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:45.534435+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 576659456 unmapped: 66183168 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:46.534564+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 heartbeat osd_stat(store_statfs(0x194bac000/0x0/0x1bfc00000, data 0x68168ee/0x6202000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6140485 data_alloc: 251658240 data_used: 35979264
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 576659456 unmapped: 66183168 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:47.534801+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dbc737c00 session 0x562dbd1b3c20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.351653099s of 11.635720253s, submitted: 10
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dc2854c00 session 0x562dbab8a780
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 576659456 unmapped: 66183168 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:48.534953+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaba0c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dbaba0c00 session 0x562dbbb230e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 71131136 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:49.535092+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 heartbeat osd_stat(store_statfs(0x194bac000/0x0/0x1bfc00000, data 0x68168ee/0x6202000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:50.535267+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 71131136 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:51.535453+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 71131136 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5833995 data_alloc: 234881024 data_used: 22663168
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:52.535605+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 71131136 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:53.535751+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 71131136 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc57da000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 heartbeat osd_stat(store_statfs(0x196747000/0x0/0x1bfc00000, data 0x4c7c8de/0x4667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:54.535879+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571719680 unmapped: 71122944 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 411 ms_handle_reset con 0x562dc57da000 session 0x562dbd1b21e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2855c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 411 ms_handle_reset con 0x562dc2855c00 session 0x562dbb9df0e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:55.536070+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565313536 unmapped: 77529088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaba0c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 411 heartbeat osd_stat(store_statfs(0x197d7b000/0x0/0x1bfc00000, data 0x2e0c529/0x302d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:56.536189+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565313536 unmapped: 77529088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc737c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5445562 data_alloc: 218103808 data_used: 10223616
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:57.536384+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565313536 unmapped: 77529088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 412 ms_handle_reset con 0x562dbaba0c00 session 0x562dbd1b23c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 413 ms_handle_reset con 0x562dbc737c00 session 0x562dbb4ca1e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2854c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 413 ms_handle_reset con 0x562dc2854c00 session 0x562dbd84c1e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:58.536518+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 413 heartbeat osd_stat(store_statfs(0x198e00000/0x0/0x1bfc00000, data 0x1d8ae49/0x1fac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:59.536665+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 413 heartbeat osd_stat(store_statfs(0x198e00000/0x0/0x1bfc00000, data 0x1d8ae49/0x1fac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 413 heartbeat osd_stat(store_statfs(0x198e00000/0x0/0x1bfc00000, data 0x1d8ae49/0x1fac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:00.536833+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 413 heartbeat osd_stat(store_statfs(0x198e00000/0x0/0x1bfc00000, data 0x1d8ae49/0x1fac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:01.536963+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5408029 data_alloc: 218103808 data_used: 10219520
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:02.537118+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:03.537294+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:04.537410+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.379796982s of 16.798450470s, submitted: 154
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:05.537556+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565288960 unmapped: 77553664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:06.537687+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565297152 unmapped: 77545472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5410331 data_alloc: 218103808 data_used: 10219520
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:07.537867+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565297152 unmapped: 77545472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:08.538005+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565297152 unmapped: 77545472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:09.538236+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565305344 unmapped: 77537280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:10.538489+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565305344 unmapped: 77537280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:11.538690+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565305344 unmapped: 77537280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5410331 data_alloc: 218103808 data_used: 10219520
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:12.538831+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565305344 unmapped: 77537280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:13.538979+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565313536 unmapped: 77529088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:14.539352+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565321728 unmapped: 77520896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:15.539474+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565321728 unmapped: 77520896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:16.539594+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565321728 unmapped: 77520896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5410331 data_alloc: 218103808 data_used: 10219520
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:17.539769+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565321728 unmapped: 77520896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:18.539894+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565321728 unmapped: 77520896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:19.540015+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565321728 unmapped: 77520896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:20.540200+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565321728 unmapped: 77520896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:21.540334+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565321728 unmapped: 77520896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5410331 data_alloc: 218103808 data_used: 10219520
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:22.540465+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565321728 unmapped: 77520896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:23.540598+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565321728 unmapped: 77520896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:24.540751+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565321728 unmapped: 77520896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:25.540923+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565321728 unmapped: 77520896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:26.541090+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565321728 unmapped: 77520896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5410331 data_alloc: 218103808 data_used: 10219520
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:27.541356+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565329920 unmapped: 77512704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:28.541502+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565329920 unmapped: 77512704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:29.541666+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565329920 unmapped: 77512704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:30.541815+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565338112 unmapped: 77504512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:31.541959+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565338112 unmapped: 77504512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5410331 data_alloc: 218103808 data_used: 10219520
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:32.542199+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565338112 unmapped: 77504512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:33.542330+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565338112 unmapped: 77504512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:34.542487+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565338112 unmapped: 77504512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:35.542596+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565338112 unmapped: 77504512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:36.542734+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565338112 unmapped: 77504512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5410331 data_alloc: 218103808 data_used: 10219520
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:37.542902+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565338112 unmapped: 77504512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:38.543069+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:39.543194+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:40.543334+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:41.543464+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5410331 data_alloc: 218103808 data_used: 10219520
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:42.543629+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:43.543773+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:44.543944+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:45.544100+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:46.544374+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565354496 unmapped: 77488128 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc57d8800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc57d8800 session 0x562dba8a01e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb60e800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbb60e800 session 0x562dba872b40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaba0c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbaba0c00 session 0x562dbecb2000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc737c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbc737c00 session 0x562dbbb14960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2854c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 42.153324127s of 42.163852692s, submitted: 22
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5413057 data_alloc: 218103808 data_used: 10219520
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:47.544579+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc2854c00 session 0x562dba8b6000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc57d8800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc57d8800 session 0x562dbd526960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc18e4800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565673984 unmapped: 77168640 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc18e4800 session 0x562dbd7fe960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaba0c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbaba0c00 session 0x562dbbb14f00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc737c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbc737c00 session 0x562dbba832c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:48.544726+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565673984 unmapped: 77168640 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:49.544842+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565673984 unmapped: 77168640 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1986cf000/0x0/0x1bfc00000, data 0x24baa32/0x26df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:50.544965+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565673984 unmapped: 77168640 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:51.545106+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565673984 unmapped: 77168640 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5474404 data_alloc: 218103808 data_used: 10223616
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:52.545248+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565673984 unmapped: 77168640 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5acc000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc5acc000 session 0x562dbd7feb40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:53.545382+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565682176 unmapped: 77160448 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc707000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbc707000 session 0x562dbbb150e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:54.545512+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba6a0c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dba6a0c00 session 0x562dbd5ff0e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba6a0c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565690368 unmapped: 77152256 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dba6a0c00 session 0x562dc822fe00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5acd400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a5800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:55.545644+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565993472 unmapped: 76849152 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1986aa000/0x0/0x1bfc00000, data 0x24dea41/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:56.545786+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565993472 unmapped: 76849152 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:57.546007+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5478980 data_alloc: 218103808 data_used: 10223616
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565993472 unmapped: 76849152 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1986aa000/0x0/0x1bfc00000, data 0x24dea41/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:58.546192+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565993472 unmapped: 76849152 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:59.546330+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1986aa000/0x0/0x1bfc00000, data 0x24dea41/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565993472 unmapped: 76849152 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:00.546532+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1986aa000/0x0/0x1bfc00000, data 0x24dea41/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565993472 unmapped: 76849152 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:01.546674+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565993472 unmapped: 76849152 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:02.546813+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5532420 data_alloc: 234881024 data_used: 17657856
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc5acd400 session 0x562dbbb6ad20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbb5a5800 session 0x562dbd573680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565993472 unmapped: 76849152 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac11400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.490621567s of 15.586330414s, submitted: 39
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:03.546986+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbac11400 session 0x562dbbb22780
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:04.547134+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:05.547280+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:06.547468+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:07.547674+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5418162 data_alloc: 218103808 data_used: 10219520
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:08.547828+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:09.547946+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:10.548089+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:11.548223+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:12.548388+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5418162 data_alloc: 218103808 data_used: 10219520
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:13.548560+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:14.548755+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:15.548900+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:16.549062+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:17.549254+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5418162 data_alloc: 218103808 data_used: 10219520
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:18.549397+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566034432 unmapped: 76808192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:19.549580+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566034432 unmapped: 76808192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:20.549729+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566034432 unmapped: 76808192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:21.549868+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566034432 unmapped: 76808192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:22.550019+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5418162 data_alloc: 218103808 data_used: 10219520
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566034432 unmapped: 76808192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:23.550201+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566034432 unmapped: 76808192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:24.550359+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566034432 unmapped: 76808192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:25.550489+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566034432 unmapped: 76808192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:26.550606+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566042624 unmapped: 76800000 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:27.550832+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5418162 data_alloc: 218103808 data_used: 10219520
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566042624 unmapped: 76800000 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:28.550980+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566042624 unmapped: 76800000 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:29.551142+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566042624 unmapped: 76800000 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:30.551310+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566042624 unmapped: 76800000 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1a400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.252534866s of 28.299730301s, submitted: 19
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:31.551501+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566018048 unmapped: 76824576 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbaa1a400 session 0x562dbd84be00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba6a0c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dba6a0c00 session 0x562dc822fa40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac11400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbac11400 session 0x562dbb56f860
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a5800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:32.551647+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5519877 data_alloc: 218103808 data_used: 10219520
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbb5a5800 session 0x562dbff91a40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5acd400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc5acd400 session 0x562dbb47ef00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566018048 unmapped: 76824576 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1980f6000/0x0/0x1bfc00000, data 0x2a959c0/0x2cb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:33.551786+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566018048 unmapped: 76824576 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:34.551971+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566018048 unmapped: 76824576 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:35.552362+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566018048 unmapped: 76824576 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:36.552467+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566018048 unmapped: 76824576 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1980f6000/0x0/0x1bfc00000, data 0x2a959c0/0x2cb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:37.552633+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5519877 data_alloc: 218103808 data_used: 10219520
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:38.552808+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc18e5c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc18e5c00 session 0x562dbd84ba40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:39.553008+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3668c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:40.553196+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc3668c00 session 0x562dba8b7e00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:41.553367+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1980f6000/0x0/0x1bfc00000, data 0x2a959c0/0x2cb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3669000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc3669000 session 0x562dbd5272c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:42.553518+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc8be7800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.124199867s of 11.224075317s, submitted: 25
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5521363 data_alloc: 218103808 data_used: 10219520
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc8be7800 session 0x562dbd1b25a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:43.553682+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba77f000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbca8d400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:44.553801+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566034432 unmapped: 76808192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:45.553961+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565919744 unmapped: 76922880 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:46.554123+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 75849728 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1980d1000/0x0/0x1bfc00000, data 0x2ab99d0/0x2cdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:47.554314+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5620731 data_alloc: 234881024 data_used: 21942272
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 75849728 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:48.554511+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 75849728 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:49.554700+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1980d1000/0x0/0x1bfc00000, data 0x2ab99d0/0x2cdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 75849728 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:50.554849+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 75849728 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:51.555650+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 75849728 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dba77f000 session 0x562dbb56f860
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbca8d400 session 0x562dba7fa960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:52.555807+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbca8d400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.052662849s of 10.073778152s, submitted: 5
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5620647 data_alloc: 234881024 data_used: 21942272
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567001088 unmapped: 75841536 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:53.556327+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:54.556617+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:55.557846+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbca8d400 session 0x562dbd5d9c20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198ddb000/0x0/0x1bfc00000, data 0x1db09c0/0x1fd3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:56.558402+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:57.558846+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5427684 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:58.559209+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:59.569370+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:00.569811+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:01.570219+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:02.570342+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5427684 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:03.570645+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:04.571306+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:05.571571+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:06.571794+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:07.572084+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5427684 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:08.572226+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:09.572378+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:10.572572+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:11.572818+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:12.572967+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5427684 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:13.573169+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:14.573311+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:15.573578+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:16.573857+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:17.574131+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5427684 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:18.574351+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:19.574497+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:20.574665+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:21.574858+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560218112 unmapped: 82624512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:22.574994+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5427684 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560218112 unmapped: 82624512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:23.575147+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560218112 unmapped: 82624512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:24.575338+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560218112 unmapped: 82624512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:25.575504+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560218112 unmapped: 82624512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:26.575655+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560218112 unmapped: 82624512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:27.575811+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5427684 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560218112 unmapped: 82624512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:28.575953+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560218112 unmapped: 82624512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:29.576077+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560226304 unmapped: 82616320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:30.576221+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560226304 unmapped: 82616320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:31.576357+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560226304 unmapped: 82616320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:32.576487+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5427684 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560226304 unmapped: 82616320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:33.576578+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560226304 unmapped: 82616320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:34.576741+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560226304 unmapped: 82616320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:35.576900+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560226304 unmapped: 82616320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:36.577124+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560226304 unmapped: 82616320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:37.577324+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc707000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 45.020236969s of 45.082485199s, submitted: 25
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5427684 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560586752 unmapped: 82255872 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbc707000 session 0x562dbd84e960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:38.577457+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560586752 unmapped: 82255872 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:39.577585+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560586752 unmapped: 82255872 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198ce2000/0x0/0x1bfc00000, data 0x1ea99c0/0x20cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:40.577758+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560586752 unmapped: 82255872 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:41.577928+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560586752 unmapped: 82255872 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:42.578132+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5441886 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560586752 unmapped: 82255872 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198ce2000/0x0/0x1bfc00000, data 0x1ea99c0/0x20cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:43.578302+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560586752 unmapped: 82255872 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:44.578440+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560586752 unmapped: 82255872 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:45.578617+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd67f400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560594944 unmapped: 82247680 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbd67f400 session 0x562dbc8def00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:46.578786+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560594944 unmapped: 82247680 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198ce2000/0x0/0x1bfc00000, data 0x1ea99c0/0x20cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbe3edc00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:47.578943+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5446846 data_alloc: 218103808 data_used: 9527296
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560594944 unmapped: 82247680 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:48.579102+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198ce2000/0x0/0x1bfc00000, data 0x1ea99c0/0x20cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560594944 unmapped: 82247680 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:49.579225+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560594944 unmapped: 82247680 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:50.579364+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560594944 unmapped: 82247680 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:51.579521+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198ce2000/0x0/0x1bfc00000, data 0x1ea99c0/0x20cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560594944 unmapped: 82247680 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:52.579695+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5447966 data_alloc: 218103808 data_used: 9695232
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560594944 unmapped: 82247680 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:53.579825+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560603136 unmapped: 82239488 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198ce2000/0x0/0x1bfc00000, data 0x1ea99c0/0x20cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:54.579999+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560603136 unmapped: 82239488 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:55.580212+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560603136 unmapped: 82239488 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:56.580392+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560603136 unmapped: 82239488 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:57.580610+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198ce2000/0x0/0x1bfc00000, data 0x1ea99c0/0x20cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5447966 data_alloc: 218103808 data_used: 9695232
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560603136 unmapped: 82239488 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.518692017s of 20.542280197s, submitted: 11
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:58.580792+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564060160 unmapped: 78782464 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:59.580920+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19826e000/0x0/0x1bfc00000, data 0x291d9c0/0x2b40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564076544 unmapped: 78766080 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19826e000/0x0/0x1bfc00000, data 0x291d9c0/0x2b40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:00.581094+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564076544 unmapped: 78766080 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:01.581261+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19826e000/0x0/0x1bfc00000, data 0x291d9c0/0x2b40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564076544 unmapped: 78766080 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:02.581395+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5536814 data_alloc: 218103808 data_used: 10964992
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564076544 unmapped: 78766080 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:03.581528+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564076544 unmapped: 78766080 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:04.581658+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564011008 unmapped: 78831616 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:05.581803+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564011008 unmapped: 78831616 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19824d000/0x0/0x1bfc00000, data 0x293e9c0/0x2b61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:06.581926+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564011008 unmapped: 78831616 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:07.582083+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5530274 data_alloc: 218103808 data_used: 10977280
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564011008 unmapped: 78831616 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:08.582191+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564011008 unmapped: 78831616 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:09.582362+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564019200 unmapped: 78823424 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:10.582518+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.200681686s of 12.423737526s, submitted: 87
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563994624 unmapped: 78848000 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:11.582736+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19823f000/0x0/0x1bfc00000, data 0x294c9c0/0x2b6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563994624 unmapped: 78848000 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:12.582899+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbe3edc00 session 0x562dba8a03c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5531018 data_alloc: 218103808 data_used: 10977280
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb60e400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563994624 unmapped: 78848000 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:13.583068+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbb60e400 session 0x562dbc8f30e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564011008 unmapped: 78831616 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:14.583185+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564011008 unmapped: 78831616 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:15.583344+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564011008 unmapped: 78831616 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:16.583530+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564011008 unmapped: 78831616 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:17.583703+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5433604 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564019200 unmapped: 78823424 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:18.583874+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564019200 unmapped: 78823424 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:19.584215+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564019200 unmapped: 78823424 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:20.584368+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564019200 unmapped: 78823424 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:21.584503+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564019200 unmapped: 78823424 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:22.584633+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5433604 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564019200 unmapped: 78823424 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:23.584769+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564019200 unmapped: 78823424 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:24.584938+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564019200 unmapped: 78823424 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:25.585108+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564027392 unmapped: 78815232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:26.585287+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564027392 unmapped: 78815232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:27.585501+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5433604 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564027392 unmapped: 78815232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:28.585688+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564027392 unmapped: 78815232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:29.585828+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564027392 unmapped: 78815232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:30.585973+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564027392 unmapped: 78815232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:31.586125+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564027392 unmapped: 78815232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:32.586265+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5433604 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564027392 unmapped: 78815232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:33.586425+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564035584 unmapped: 78807040 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:34.586666+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564035584 unmapped: 78807040 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:35.586849+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564035584 unmapped: 78807040 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:36.587007+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564035584 unmapped: 78807040 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:37.587265+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5433604 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564035584 unmapped: 78807040 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:38.587488+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564035584 unmapped: 78807040 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:39.587655+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564035584 unmapped: 78807040 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:40.587818+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564035584 unmapped: 78807040 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:41.587979+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564043776 unmapped: 78798848 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:42.588129+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbae52400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbae52400 session 0x562dbc776b40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1497c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc1497c00 session 0x562dba8a1a40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1a800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbaa1a800 session 0x562dbbb154a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5a82c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc5a82c00 session 0x562dbb4cad20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1ed2400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 32.121410370s of 32.159393311s, submitted: 10
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5435078 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565108736 unmapped: 77733888 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:43.588279+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc1ed2400 session 0x562dbd5d8b40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1a800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbaa1a800 session 0x562dbb9df860
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbae52400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbae52400 session 0x562dbd1b3680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1497c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565108736 unmapped: 77733888 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:44.588450+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc1497c00 session 0x562dc822c780
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbce4f800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbce4f800 session 0x562dbd84d0e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565108736 unmapped: 77733888 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:45.588595+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565108736 unmapped: 77733888 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:46.593137+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565108736 unmapped: 77733888 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:47.593359+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198a47000/0x0/0x1bfc00000, data 0x21449c0/0x2367000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5464721 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565108736 unmapped: 77733888 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:48.593505+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565108736 unmapped: 77733888 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:49.593621+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565116928 unmapped: 77725696 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:50.593771+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc54c7800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565116928 unmapped: 77725696 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:51.593922+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc54c7800 session 0x562dbd573680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1a800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbae52400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565116928 unmapped: 77725696 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:52.594101+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5483289 data_alloc: 218103808 data_used: 11051008
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198a23000/0x0/0x1bfc00000, data 0x21689c0/0x238b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [1])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565116928 unmapped: 77725696 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:53.594231+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565116928 unmapped: 77725696 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:54.594358+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565116928 unmapped: 77725696 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:55.594589+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198a23000/0x0/0x1bfc00000, data 0x21689c0/0x238b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198a23000/0x0/0x1bfc00000, data 0x21689c0/0x238b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565116928 unmapped: 77725696 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:56.594787+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565116928 unmapped: 77725696 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:57.594991+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5494969 data_alloc: 218103808 data_used: 12722176
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565116928 unmapped: 77725696 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:58.595167+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198a23000/0x0/0x1bfc00000, data 0x21689c0/0x238b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565125120 unmapped: 77717504 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:59.595322+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565125120 unmapped: 77717504 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:00.595455+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198a23000/0x0/0x1bfc00000, data 0x21689c0/0x238b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565125120 unmapped: 77717504 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:01.595620+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198a23000/0x0/0x1bfc00000, data 0x21689c0/0x238b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565125120 unmapped: 77717504 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:02.595775+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5494969 data_alloc: 218103808 data_used: 12722176
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.423847198s of 20.521347046s, submitted: 17
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565125120 unmapped: 77717504 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:03.595924+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568721408 unmapped: 74121216 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3669000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc3669000 session 0x562dbb56f2c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:04.596067+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc8be6c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc8be6c00 session 0x562dba895860
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc57db800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc57db800 session 0x562dc18323c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196c64000/0x0/0x1bfc00000, data 0x2d7f9c0/0x2fa2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbabaf400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbabaf400 session 0x562dbd1b2d20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5acd400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568721408 unmapped: 74121216 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:05.596209+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568852480 unmapped: 73990144 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:06.596360+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc5acd400 session 0x562dbff910e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc720000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbc720000 session 0x562dbab8a3c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbabaf400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbabaf400 session 0x562dbabfb860
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3669000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19689f000/0x0/0x1bfc00000, data 0x31449c0/0x3367000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567951360 unmapped: 74891264 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:07.596532+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc3669000 session 0x562dbd5ff4a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc57db800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc57db800 session 0x562dbb34f680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5636806 data_alloc: 218103808 data_used: 13115392
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc8be6c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc8be6c00 session 0x562dbbac0b40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567951360 unmapped: 74891264 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:08.596623+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbabaf400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbabaf400 session 0x562dbd84f0e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc720000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbc720000 session 0x562dba872780
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567951360 unmapped: 74891264 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3669000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:09.596750+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc3669000 session 0x562dbecb2960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbe3ed800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3668000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567869440 unmapped: 74973184 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:10.596875+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196874000/0x0/0x1bfc00000, data 0x31769d0/0x339a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567582720 unmapped: 75259904 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:11.596973+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 75251712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:12.597107+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5669817 data_alloc: 234881024 data_used: 16855040
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 75251712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:13.597225+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 75251712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:14.597363+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 75251712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:15.597493+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196871000/0x0/0x1bfc00000, data 0x31799d0/0x339d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196871000/0x0/0x1bfc00000, data 0x31799d0/0x339d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 75251712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:16.597638+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196871000/0x0/0x1bfc00000, data 0x31799d0/0x339d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 75251712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:17.597822+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5669817 data_alloc: 234881024 data_used: 16855040
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 75251712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:18.597954+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196871000/0x0/0x1bfc00000, data 0x31799d0/0x339d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:19.598063+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 75251712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:20.598199+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 75251712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196871000/0x0/0x1bfc00000, data 0x31799d0/0x339d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:21.598365+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 75251712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.615375519s of 19.088668823s, submitted: 114
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:22.598488+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567812096 unmapped: 75030528 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [O-0] New memtable created with log file: #60. Immutable memtables: 0.
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5721487 data_alloc: 234881024 data_used: 17354752
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:23.598575+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571244544 unmapped: 71598080 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:24.598721+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571252736 unmapped: 71589888 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19510d000/0x0/0x1bfc00000, data 0x372f9d0/0x3953000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:25.598829+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 71385088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:26.598963+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 71385088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:27.599125+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 71385088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5728055 data_alloc: 234881024 data_used: 17260544
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:28.599249+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 71385088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1950ff000/0x0/0x1bfc00000, data 0x37439d0/0x3967000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:29.599376+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 71385088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:30.599576+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 71385088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:31.599772+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 71385088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:32.599936+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 71385088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5728071 data_alloc: 234881024 data_used: 17260544
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:33.600105+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 71385088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1950ff000/0x0/0x1bfc00000, data 0x37439d0/0x3967000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:34.600264+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 71385088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:35.600395+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 71385088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1950ff000/0x0/0x1bfc00000, data 0x37439d0/0x3967000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.496047974s of 13.780189514s, submitted: 72
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:36.600525+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571392000 unmapped: 71450624 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbe3ed800 session 0x562dbecb25a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc3668000 session 0x562dbabfb680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbabaf400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:37.600670+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571400192 unmapped: 71442432 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5722903 data_alloc: 234881024 data_used: 17260544
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:38.600798+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571400192 unmapped: 71442432 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbabaf400 session 0x562dc18332c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:39.600950+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571424768 unmapped: 71417856 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x195a9b000/0x0/0x1bfc00000, data 0x2db09c0/0x2fd3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:40.601088+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571424768 unmapped: 71417856 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:41.601268+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571424768 unmapped: 71417856 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:42.601457+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571424768 unmapped: 71417856 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5612912 data_alloc: 218103808 data_used: 13115392
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:43.601642+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571424768 unmapped: 71417856 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbaa1a800 session 0x562dba872b40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbae52400 session 0x562dbd84a000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd881000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:44.601809+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196a9b000/0x0/0x1bfc00000, data 0x1db09c0/0x1fd3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbd881000 session 0x562dbabfa3c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:45.601977+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:46.602118+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:47.602288+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5457360 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:48.602442+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:49.602580+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:50.602716+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196abf000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:51.602842+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:52.602980+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5457360 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:53.603103+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196abf000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:54.603225+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196abf000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:55.603422+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196abf000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:56.603577+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196abf000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:57.603827+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196abf000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5457360 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:58.604006+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:59.604186+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:00.604321+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:01.604455+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:02.604647+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 71401472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196abf000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5457360 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:03.604781+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 71401472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:04.604920+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 71401472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:05.605104+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 71401472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:06.605246+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 71401472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196abf000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:07.605432+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 71401472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5457360 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:08.605743+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 71401472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:09.605930+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 71401472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:10.606103+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571449344 unmapped: 71393280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:11.606268+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571449344 unmapped: 71393280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:12.606425+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571449344 unmapped: 71393280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196abf000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5457360 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:13.606565+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571449344 unmapped: 71393280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:14.606732+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571449344 unmapped: 71393280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:15.606849+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571449344 unmapped: 71393280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:16.606984+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571449344 unmapped: 71393280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196abf000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:17.607167+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571400192 unmapped: 71442432 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbbb83c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 40.288108826s of 42.024108887s, submitted: 75
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5494804 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbbb83c00 session 0x562dbb34e5a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:18.607288+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571564032 unmapped: 71278592 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:19.607385+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571564032 unmapped: 71278592 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:20.607647+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571564032 unmapped: 71278592 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:21.607752+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571564032 unmapped: 71278592 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1965ea000/0x0/0x1bfc00000, data 0x22619c0/0x2484000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:22.607877+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571564032 unmapped: 71278592 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbe3ed800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbe3ed800 session 0x562dbbb14f00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5494804 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:23.607989+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571564032 unmapped: 71278592 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc10ff000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc10ff000 session 0x562dbd526960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:24.608122+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571564032 unmapped: 71278592 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3668400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc3668400 session 0x562dba8954a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc720000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbc720000 session 0x562dbd1b32c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:25.608269+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1965c5000/0x0/0x1bfc00000, data 0x22859e3/0x24a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571588608 unmapped: 71254016 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc10ffc00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc706c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:26.608432+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571596800 unmapped: 71245824 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:27.608607+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571596800 unmapped: 71245824 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5537178 data_alloc: 218103808 data_used: 13893632
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:28.609543+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571596800 unmapped: 71245824 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:29.609705+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1965c5000/0x0/0x1bfc00000, data 0x22859e3/0x24a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571596800 unmapped: 71245824 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1965c5000/0x0/0x1bfc00000, data 0x22859e3/0x24a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:30.609860+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571596800 unmapped: 71245824 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:31.609998+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571596800 unmapped: 71245824 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2026-01-31T09:04:32.610184+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _finish_auth 0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:32.611528+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571596800 unmapped: 71245824 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5537178 data_alloc: 218103808 data_used: 13893632
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:33.610339+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1965c5000/0x0/0x1bfc00000, data 0x22859e3/0x24a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571596800 unmapped: 71245824 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:34.610497+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571596800 unmapped: 71245824 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:35.610671+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571596800 unmapped: 71245824 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:36.610807+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571596800 unmapped: 71245824 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.229181290s of 19.316902161s, submitted: 19
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:37.611052+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571621376 unmapped: 71221248 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5587726 data_alloc: 218103808 data_used: 13918208
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:38.613920+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571752448 unmapped: 71090176 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x195f2a000/0x0/0x1bfc00000, data 0x29209e3/0x2b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:39.615896+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #61. Immutable memtables: 16.
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574160896 unmapped: 68681728 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:40.616645+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574111744 unmapped: 68730880 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:41.616908+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574111744 unmapped: 68730880 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:42.618017+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574111744 unmapped: 68730880 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5618182 data_alloc: 218103808 data_used: 15122432
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:43.626845+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x194af2000/0x0/0x1bfc00000, data 0x2bb89e3/0x2ddc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574111744 unmapped: 68730880 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:44.627496+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574111744 unmapped: 68730880 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:45.628233+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574111744 unmapped: 68730880 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:46.628402+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574111744 unmapped: 68730880 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x194af2000/0x0/0x1bfc00000, data 0x2bb89e3/0x2ddc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:47.629359+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574111744 unmapped: 68730880 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x194af2000/0x0/0x1bfc00000, data 0x2bb89e3/0x2ddc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5618182 data_alloc: 218103808 data_used: 15122432
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:48.629569+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x194af2000/0x0/0x1bfc00000, data 0x2bb89e3/0x2ddc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574111744 unmapped: 68730880 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:49.630362+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574111744 unmapped: 68730880 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:50.630706+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574111744 unmapped: 68730880 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:51.631003+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x194af2000/0x0/0x1bfc00000, data 0x2bb89e3/0x2ddc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574111744 unmapped: 68730880 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbc706c00 session 0x562dbd5d94a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.434545517s of 14.631439209s, submitted: 69
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc10ffc00 session 0x562dbd573e00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc706c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:52.631088+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbc706c00 session 0x562dbba832c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:53.631294+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5468078 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:54.631430+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:55.631549+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:56.631687+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:57.631914+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:58.632212+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5468078 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:59.632440+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:00.632720+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:01.632943+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:02.633188+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:03.633425+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5468078 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:04.633614+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:05.633883+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:06.634144+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:07.634334+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:08.634598+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5468078 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:09.634788+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:10.634920+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:11.635096+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:12.638691+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:13.638843+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5468078 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:14.639004+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:15.639184+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:16.639326+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:17.639513+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:18.639680+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5468078 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:19.639866+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:20.640043+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:21.640237+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:22.640331+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:23.640446+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5468078 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:24.640575+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:25.640730+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:26.640854+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:27.640983+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:28.641133+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5468078 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:29.641224+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:30.641326+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:31.641441+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:32.641622+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:33.641786+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5468078 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:34.641951+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:35.642152+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:36.642343+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:37.642539+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:38.642673+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5468078 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:39.642779+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:40.643172+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:41.643415+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:42.643802+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:43.644319+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5468078 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:44.644618+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:45.645008+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561422336 unmapped: 81420288 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:46.645301+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561422336 unmapped: 81420288 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:47.645640+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561422336 unmapped: 81420288 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:48.646104+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5468078 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561422336 unmapped: 81420288 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:49.646608+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561422336 unmapped: 81420288 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:50.647048+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561422336 unmapped: 81420288 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:51.647341+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561422336 unmapped: 81420288 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:52.647600+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561422336 unmapped: 81420288 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:53.648418+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5468078 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561438720 unmapped: 81403904 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:54.648611+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561438720 unmapped: 81403904 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:55.648774+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561438720 unmapped: 81403904 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:56.650107+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac11400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 64.340148926s of 64.443199158s, submitted: 35
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561627136 unmapped: 81215488 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbac11400 session 0x562dc822de00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd67f800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbd67f800 session 0x562dba8b7e00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1a000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbaa1a000 session 0x562dbb56f860
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac11400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbac11400 session 0x562dbbb154a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc706c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:57.650225+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbc706c00 session 0x562dbd84d4a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561627136 unmapped: 81215488 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:58.650398+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5537708 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561627136 unmapped: 81215488 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x25e1a22/0x2805000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:59.650552+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561635328 unmapped: 81207296 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:00.650692+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561635328 unmapped: 81207296 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:01.650854+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561635328 unmapped: 81207296 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:02.650998+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x25e1a22/0x2805000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561635328 unmapped: 81207296 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:03.651076+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5537708 data_alloc: 218103808 data_used: 8908800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561635328 unmapped: 81207296 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:04.651196+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb701400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561635328 unmapped: 81207296 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbb701400 session 0x562dbd7ff4a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:05.651809+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb60f400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561520640 unmapped: 81321984 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:06.651991+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561479680 unmapped: 81362944 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:07.652170+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x25e1a22/0x2805000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x25e1a22/0x2805000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563847168 unmapped: 78995456 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:08.652316+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5596320 data_alloc: 234881024 data_used: 17432576
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563847168 unmapped: 78995456 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:09.652482+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563847168 unmapped: 78995456 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x25e1a22/0x2805000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:10.652752+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563847168 unmapped: 78995456 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:11.652911+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563847168 unmapped: 78995456 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:12.653072+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563847168 unmapped: 78995456 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:13.653205+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5596320 data_alloc: 234881024 data_used: 17432576
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563847168 unmapped: 78995456 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:14.653353+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x25e1a22/0x2805000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563847168 unmapped: 78995456 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:15.653559+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563847168 unmapped: 78995456 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:16.653718+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563847168 unmapped: 78995456 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:17.654764+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.878530502s of 21.040298462s, submitted: 45
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566747136 unmapped: 76095488 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:18.655466+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x194d9b000/0x0/0x1bfc00000, data 0x290fa22/0x2b33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,3])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5656128 data_alloc: 234881024 data_used: 17444864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566198272 unmapped: 76644352 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:19.655967+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566239232 unmapped: 76603392 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:20.656089+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566665216 unmapped: 76177408 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:21.656351+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 76169216 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:22.656487+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19488e000/0x0/0x1bfc00000, data 0x2e1ca22/0x3040000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 76169216 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:23.657010+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19488e000/0x0/0x1bfc00000, data 0x2e1ca22/0x3040000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5679952 data_alloc: 234881024 data_used: 18206720
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 76169216 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:24.657195+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 76169216 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:25.657326+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 76169216 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:26.657462+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 76169216 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19488e000/0x0/0x1bfc00000, data 0x2e1ca22/0x3040000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:27.657851+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.117003441s of 10.006759644s, submitted: 89
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 76169216 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:28.658188+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5681556 data_alloc: 234881024 data_used: 18247680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 76169216 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:29.658426+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 76169216 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:30.658609+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbb60f400 session 0x562dbecb21e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 76169216 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:31.658810+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd67f800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbd67f800 session 0x562dbab8ad20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566681600 unmapped: 76161024 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac11400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:32.659107+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 415 heartbeat osd_stat(store_statfs(0x19484d000/0x0/0x1bfc00000, data 0x2e5da22/0x3081000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566681600 unmapped: 76161024 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:33.659409+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 415 ms_handle_reset con 0x562dbac11400 session 0x562dbd84da40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb60f400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb701400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5684570 data_alloc: 234881024 data_used: 18251776
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566689792 unmapped: 76152832 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 415 ms_handle_reset con 0x562dbb701400 session 0x562dbc8dfa40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 415 ms_handle_reset con 0x562dbb60f400 session 0x562dba284960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc706c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:34.659615+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 579526656 unmapped: 74473472 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:35.659746+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572686336 unmapped: 81313792 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 415 ms_handle_reset con 0x562dbc706c00 session 0x562dbb4cb860
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1a800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:36.659925+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 415 handle_osd_map epochs [415,416], i have 415, src has [1,416]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _renew_subs
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 415 handle_osd_map epochs [416,416], i have 416, src has [1,416]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572686336 unmapped: 81313792 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 416 ms_handle_reset con 0x562dbaa1a800 session 0x562dbba82780
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1a800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:37.660159+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.213700294s of 10.016177177s, submitted: 54
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 416 heartbeat osd_stat(store_statfs(0x193e76000/0x0/0x1bfc00000, data 0x383367b/0x3a58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572702720 unmapped: 81297408 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:38.660383+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5793518 data_alloc: 234881024 data_used: 26558464
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572710912 unmapped: 81289216 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:39.660596+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 417 ms_handle_reset con 0x562dbaa1a800 session 0x562dbd5d8d20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572719104 unmapped: 81281024 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:40.660842+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbca8cc00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 417 ms_handle_reset con 0x562dbca8cc00 session 0x562dc822fc20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc720000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572719104 unmapped: 81281024 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 417 ms_handle_reset con 0x562dbc720000 session 0x562dbbb14b40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:41.661705+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a4800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 417 ms_handle_reset con 0x562dbb5a4800 session 0x562dbba82000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572727296 unmapped: 81272832 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:42.661949+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 417 heartbeat osd_stat(store_statfs(0x193e6e000/0x0/0x1bfc00000, data 0x3836f9d/0x3a5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572727296 unmapped: 81272832 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:43.662220+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 417 heartbeat osd_stat(store_statfs(0x193e6e000/0x0/0x1bfc00000, data 0x3836f9d/0x3a5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5793518 data_alloc: 234881024 data_used: 26558464
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572727296 unmapped: 81272832 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:44.662391+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 417 heartbeat osd_stat(store_statfs(0x193e6e000/0x0/0x1bfc00000, data 0x3836f9d/0x3a5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd67f400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573784064 unmapped: 80216064 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:45.662532+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 417 ms_handle_reset con 0x562dbd67f400 session 0x562dbd5d8d20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569262080 unmapped: 84738048 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:46.662692+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569262080 unmapped: 84738048 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:47.662918+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569262080 unmapped: 84738048 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:48.663062+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569148 data_alloc: 218103808 data_used: 8933376
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569262080 unmapped: 84738048 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:49.663203+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 418 heartbeat osd_stat(store_statfs(0x194f3d000/0x0/0x1bfc00000, data 0x2767a7a/0x298f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569262080 unmapped: 84738048 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:50.663390+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569262080 unmapped: 84738048 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:51.663537+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569262080 unmapped: 84738048 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:52.663710+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 418 heartbeat osd_stat(store_statfs(0x194f3d000/0x0/0x1bfc00000, data 0x2767a7a/0x298f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569262080 unmapped: 84738048 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:53.663855+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569148 data_alloc: 218103808 data_used: 8933376
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569262080 unmapped: 84738048 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:54.663994+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569262080 unmapped: 84738048 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:55.664150+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba77fc00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 418 ms_handle_reset con 0x562dba77fc00 session 0x562dba8954a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd881000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbce4ec00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.006954193s of 18.347778320s, submitted: 84
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 418 ms_handle_reset con 0x562dbce4ec00 session 0x562dbba83860
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 418 ms_handle_reset con 0x562dbd881000 session 0x562dbd526960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc73c800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 418 ms_handle_reset con 0x562dbc73c800 session 0x562dbbb14f00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1ed2400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 418 ms_handle_reset con 0x562dc1ed2400 session 0x562dbabfa3c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba77fc00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 418 ms_handle_reset con 0x562dba77fc00 session 0x562dbd84a000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc73c800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 418 ms_handle_reset con 0x562dbc73c800 session 0x562dba872b40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbce4ec00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 581124096 unmapped: 76652544 heap: 657776640 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 418 heartbeat osd_stat(store_statfs(0x194f3d000/0x0/0x1bfc00000, data 0x2767a7a/0x298f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [0,0,0,0,0,0,1,3,1])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:56.664290+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 418 ms_handle_reset con 0x562dbce4ec00 session 0x562dc18332c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 581312512 unmapped: 88498176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:57.664430+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd881000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 418 ms_handle_reset con 0x562dbd881000 session 0x562dbd7ff4a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574291968 unmapped: 95518720 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:58.664574+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 79K writes, 327K keys, 79K commit groups, 1.0 writes per commit group, ingest: 0.33 GB, 0.05 MB/s
                                           Cumulative WAL: 79K writes, 29K syncs, 2.74 writes per sync, written: 0.33 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3124 writes, 12K keys, 3124 commit groups, 1.0 writes per commit group, ingest: 12.28 MB, 0.02 MB/s
                                           Interval WAL: 3124 writes, 1286 syncs, 2.43 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbf6ee400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc04af800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5783238 data_alloc: 234881024 data_used: 19243008
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dd2f18800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574316544 unmapped: 95494144 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:59.664725+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 419 ms_handle_reset con 0x562dd2f18800 session 0x562dbc8de3c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570941440 unmapped: 98869248 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:00.664892+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 419 heartbeat osd_stat(store_statfs(0x193d58000/0x0/0x1bfc00000, data 0x353b74a/0x3765000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570974208 unmapped: 98836480 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:01.665095+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570974208 unmapped: 98836480 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:02.665198+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570974208 unmapped: 98836480 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:03.665370+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5783588 data_alloc: 234881024 data_used: 21766144
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570974208 unmapped: 98836480 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:04.665525+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570974208 unmapped: 98836480 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:05.665720+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.061226845s of 10.367254257s, submitted: 36
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567123968 unmapped: 102686720 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x193d58000/0x0/0x1bfc00000, data 0x353b74a/0x3765000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:06.665906+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567123968 unmapped: 102686720 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:07.666080+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567123968 unmapped: 102686720 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:08.666184+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x193d55000/0x0/0x1bfc00000, data 0x353d289/0x3768000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: mgrc ms_handle_reset ms_handle_reset con 0x562dbf563800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3465938080
Jan 31 09:15:09 compute-2 ceph-osd[79942]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3465938080,v1:192.168.122.100:6801/3465938080]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: get_auth_request con 0x562dd2f18800 auth_method 0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: mgrc handle_mgr_configure stats_period=5
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5776882 data_alloc: 234881024 data_used: 21770240
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567123968 unmapped: 102686720 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:09.666299+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567123968 unmapped: 102686720 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:10.666456+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567189504 unmapped: 102621184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:11.666620+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567189504 unmapped: 102621184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:12.666799+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567189504 unmapped: 102621184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:13.666955+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5802946 data_alloc: 234881024 data_used: 24379392
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567189504 unmapped: 102621184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:14.667123+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x193d56000/0x0/0x1bfc00000, data 0x353d289/0x3768000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567189504 unmapped: 102621184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:15.667267+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567189504 unmapped: 102621184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:16.667513+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x193d56000/0x0/0x1bfc00000, data 0x353d289/0x3768000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567189504 unmapped: 102621184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:17.667722+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567222272 unmapped: 102588416 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:18.667910+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5809346 data_alloc: 234881024 data_used: 25161728
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567222272 unmapped: 102588416 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:19.668118+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x193d56000/0x0/0x1bfc00000, data 0x353d289/0x3768000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567222272 unmapped: 102588416 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:20.668453+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567222272 unmapped: 102588416 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:21.668608+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbf6ee400 session 0x562dbb9def00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.527951241s of 15.564998627s, submitted: 14
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc04af800 session 0x562dbd5d9c20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567222272 unmapped: 102588416 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:22.668764+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbbb83800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567222272 unmapped: 102588416 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:23.668928+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5808510 data_alloc: 234881024 data_used: 25161728
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567222272 unmapped: 102588416 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:24.669171+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x193d56000/0x0/0x1bfc00000, data 0x353d289/0x3768000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567222272 unmapped: 102588416 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:25.669358+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954d8000/0x0/0x1bfc00000, data 0x1dbb289/0x1fe6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954d8000/0x0/0x1bfc00000, data 0x1dbb289/0x1fe6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [0,0,0,0,1])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbbb83800 session 0x562dbd84ab40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:26.669508+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954d8000/0x0/0x1bfc00000, data 0x1dbb289/0x1fe6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:27.669650+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fc000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:28.669766+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5515958 data_alloc: 218103808 data_used: 8941568
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:29.669893+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:30.670057+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:31.670240+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fc000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:32.670376+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:33.670502+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5515958 data_alloc: 218103808 data_used: 8941568
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:34.670649+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:35.670798+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:36.670931+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:37.671091+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fc000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:38.671248+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5515958 data_alloc: 218103808 data_used: 8941568
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:39.671413+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1ed3000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:40.671570+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc1ed3000 session 0x562dbd1b21e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a4400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbb5a4400 session 0x562dbd5d94a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:41.671737+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fc000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:42.671880+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a4400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.716526031s of 20.871770859s, submitted: 35
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbb5a4400 session 0x562dbd84f680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:43.672091+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5520866 data_alloc: 218103808 data_used: 10252288
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:44.672229+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc54c6c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc54c6c00 session 0x562dbecb32c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3669000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567246848 unmapped: 102563840 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:45.672354+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc3669000 session 0x562dbbb14960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd881800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbd881800 session 0x562dbd5ff4a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1100c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc1100c00 session 0x562dbc8f2780
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:46.672495+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567558144 unmapped: 102252544 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:47.672685+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567558144 unmapped: 102252544 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x194e0b000/0x0/0x1bfc00000, data 0x2488276/0x26b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:48.672845+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567566336 unmapped: 102244352 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5576699 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x194e0b000/0x0/0x1bfc00000, data 0x2488276/0x26b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [0,0,2,0,0,0,0,1])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:49.672972+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565886976 unmapped: 103923712 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x194e0b000/0x0/0x1bfc00000, data 0x2488276/0x26b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:50.673152+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565886976 unmapped: 103923712 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:51.673299+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565886976 unmapped: 103923712 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:52.673381+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565919744 unmapped: 103890944 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:53.673503+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565919744 unmapped: 103890944 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.528172493s of 10.987412453s, submitted: 168
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5576771 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:54.673680+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565936128 unmapped: 103874560 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x194e0b000/0x0/0x1bfc00000, data 0x2488276/0x26b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [0,0,0,0,2])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:55.673807+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565985280 unmapped: 103825408 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1496800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc1496800 session 0x562dbba83e00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:56.673952+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566042624 unmapped: 103768064 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbce4e000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbce4e000 session 0x562dbc8f23c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:57.674127+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2855800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc2855800 session 0x562dba894780
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566059008 unmapped: 103751680 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a4000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbb5a4000 session 0x562dbb548780
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x194e0b000/0x0/0x1bfc00000, data 0x2488276/0x26b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:58.674250+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3668000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566050816 unmapped: 103759872 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5579808 data_alloc: 218103808 data_used: 10407936
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:59.674378+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566059008 unmapped: 103751680 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:00.674529+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566362112 unmapped: 103448576 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:01.674653+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566362112 unmapped: 103448576 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:02.674775+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566362112 unmapped: 103448576 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x194e0a000/0x0/0x1bfc00000, data 0x2488299/0x26b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:03.674916+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566362112 unmapped: 103448576 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5629248 data_alloc: 234881024 data_used: 17272832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:04.675060+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x194e0a000/0x0/0x1bfc00000, data 0x2488299/0x26b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566362112 unmapped: 103448576 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:05.675198+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566362112 unmapped: 103448576 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:06.675327+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566362112 unmapped: 103448576 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:07.675477+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566362112 unmapped: 103448576 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:08.675650+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566362112 unmapped: 103448576 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5629248 data_alloc: 234881024 data_used: 17272832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x194e0a000/0x0/0x1bfc00000, data 0x2488299/0x26b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:09.675825+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566362112 unmapped: 103448576 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x194e0a000/0x0/0x1bfc00000, data 0x2488299/0x26b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.626567841s of 16.307867050s, submitted: 196
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:10.675986+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573374464 unmapped: 96436224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:11.676151+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570761216 unmapped: 99049472 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1943db000/0x0/0x1bfc00000, data 0x2eb7299/0x30e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:12.676256+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570761216 unmapped: 99049472 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:13.676429+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570761216 unmapped: 99049472 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5714668 data_alloc: 234881024 data_used: 18874368
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x194394000/0x0/0x1bfc00000, data 0x2efe299/0x312a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:14.676577+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 98787328 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:15.676723+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 98787328 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:16.680579+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 98787328 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19435b000/0x0/0x1bfc00000, data 0x2f37299/0x3163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:17.680768+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 98787328 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:18.680932+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 98787328 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19435b000/0x0/0x1bfc00000, data 0x2f37299/0x3163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5722260 data_alloc: 234881024 data_used: 19111936
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:19.681091+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 98787328 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19435b000/0x0/0x1bfc00000, data 0x2f37299/0x3163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:20.681247+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 98787328 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.862438202s of 10.933945656s, submitted: 83
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:21.681422+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 98787328 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:22.681614+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 98787328 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:23.681875+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 98787328 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc3668000 session 0x562dbd526b40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5723952 data_alloc: 234881024 data_used: 19161088
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:24.682496+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 98787328 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:25.682728+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 98787328 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431c000/0x0/0x1bfc00000, data 0x2f76299/0x31a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:26.683505+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 98787328 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a4000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbb5a4000 session 0x562dba873e00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:27.683774+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:28.684159+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5722728 data_alloc: 234881024 data_used: 19156992
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:29.684342+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:30.684747+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431c000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:31.685168+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:32.685377+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:33.685521+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431c000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5722728 data_alloc: 234881024 data_used: 19156992
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:34.685824+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431c000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:35.686062+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431c000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:36.686369+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:37.686568+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431c000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:38.686723+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5722728 data_alloc: 234881024 data_used: 19156992
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:39.687003+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:40.687237+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac10800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbac10800 session 0x562dbd84ad20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:41.687375+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3669000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc3669000 session 0x562dbd84bc20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc7ce800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbc7ce800 session 0x562dbd5d9a40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1100c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.005628586s of 21.517366409s, submitted: 22
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:42.687542+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc1100c00 session 0x562dbb56f2c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571064320 unmapped: 98746368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:43.687732+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac10800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a4000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5724341 data_alloc: 234881024 data_used: 19165184
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:44.687910+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:45.688124+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:46.688250+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:47.688427+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:48.688642+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5725301 data_alloc: 234881024 data_used: 19234816
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:49.688778+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:50.688986+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:51.689213+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:52.689391+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:53.689544+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5725301 data_alloc: 234881024 data_used: 19234816
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:54.689759+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:55.689860+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.429984093s of 13.465677261s, submitted: 7
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:56.690015+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:57.690186+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:58.690428+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5741525 data_alloc: 234881024 data_used: 20668416
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:59.690810+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:00.690955+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:01.691074+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:02.691230+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:03.691373+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5741525 data_alloc: 234881024 data_used: 20668416
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:04.691531+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:05.691719+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:06.691873+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:07.692082+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:08.692226+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5741525 data_alloc: 234881024 data_used: 20668416
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:09.692374+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:10.692512+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:11.692667+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:12.692773+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:13.692932+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5741525 data_alloc: 234881024 data_used: 20668416
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:14.693081+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:15.693213+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.990146637s of 20.020147324s, submitted: 4
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:16.693348+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571080704 unmapped: 98729984 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:17.693507+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571080704 unmapped: 98729984 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:18.693655+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571080704 unmapped: 98729984 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5742629 data_alloc: 234881024 data_used: 20701184
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:19.693799+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbac10800 session 0x562dbd526b40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571080704 unmapped: 98729984 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbb5a4000 session 0x562dbb34ed20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc10ffc00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:20.693910+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571080704 unmapped: 98729984 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc10ffc00 session 0x562dbbb14960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:21.694126+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571080704 unmapped: 98729984 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:22.694264+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571080704 unmapped: 98729984 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbf6ee000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbf6ee000 session 0x562dbd84a000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:23.694416+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1496800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571080704 unmapped: 98729984 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc1496800 session 0x562dbbb14f00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:24.694544+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571088896 unmapped: 98721792 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:25.694669+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571088896 unmapped: 98721792 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:26.694798+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571088896 unmapped: 98721792 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:27.694966+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571088896 unmapped: 98721792 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:28.695090+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571088896 unmapped: 98721792 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:29.695220+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571097088 unmapped: 98713600 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:30.695357+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571097088 unmapped: 98713600 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:31.695498+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571097088 unmapped: 98713600 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:32.695611+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571097088 unmapped: 98713600 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:33.696159+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571097088 unmapped: 98713600 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:34.696354+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571097088 unmapped: 98713600 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:35.696497+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571097088 unmapped: 98713600 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:36.696666+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571097088 unmapped: 98713600 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:37.696860+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571105280 unmapped: 98705408 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:38.697046+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571105280 unmapped: 98705408 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:39.697271+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571105280 unmapped: 98705408 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:40.697429+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571105280 unmapped: 98705408 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:41.697609+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571105280 unmapped: 98705408 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:42.697745+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571105280 unmapped: 98705408 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:43.697863+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571105280 unmapped: 98705408 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:44.698120+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571105280 unmapped: 98705408 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:45.698286+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571113472 unmapped: 98697216 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:46.698447+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571113472 unmapped: 98697216 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:47.698659+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571113472 unmapped: 98697216 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:48.698795+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571113472 unmapped: 98697216 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:49.699187+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571113472 unmapped: 98697216 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:50.699353+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571113472 unmapped: 98697216 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:51.699503+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571113472 unmapped: 98697216 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:52.699706+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571113472 unmapped: 98697216 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:53.699849+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571138048 unmapped: 98672640 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:54.699990+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571138048 unmapped: 98672640 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:55.700143+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571138048 unmapped: 98672640 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:56.700306+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571138048 unmapped: 98672640 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:57.700659+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571138048 unmapped: 98672640 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:58.700835+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571138048 unmapped: 98672640 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:59.703167+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571138048 unmapped: 98672640 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:00.703303+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571138048 unmapped: 98672640 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:01.703444+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571146240 unmapped: 98664448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:02.704365+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571146240 unmapped: 98664448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:03.704571+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571146240 unmapped: 98664448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:04.705046+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571146240 unmapped: 98664448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:05.705350+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571146240 unmapped: 98664448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:06.705856+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571146240 unmapped: 98664448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:07.706285+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571146240 unmapped: 98664448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:08.706556+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571146240 unmapped: 98664448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:09.706844+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571154432 unmapped: 98656256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:10.707093+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571154432 unmapped: 98656256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:11.707222+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571154432 unmapped: 98656256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:12.707452+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571154432 unmapped: 98656256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:13.707762+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571154432 unmapped: 98656256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:14.707927+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571154432 unmapped: 98656256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:15.708095+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571154432 unmapped: 98656256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:16.708288+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571154432 unmapped: 98656256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:17.708635+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571170816 unmapped: 98639872 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:18.708803+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571179008 unmapped: 98631680 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:19.709392+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571187200 unmapped: 98623488 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:20.709614+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571187200 unmapped: 98623488 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:21.710112+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571187200 unmapped: 98623488 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:22.710231+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571187200 unmapped: 98623488 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:23.710522+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571187200 unmapped: 98623488 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:24.710710+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571187200 unmapped: 98623488 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:25.710951+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571195392 unmapped: 98615296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:27.152975+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571195392 unmapped: 98615296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:28.153292+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571195392 unmapped: 98615296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:29.153417+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571195392 unmapped: 98615296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:30.153686+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571195392 unmapped: 98615296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:31.153844+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571195392 unmapped: 98615296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:32.154148+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571195392 unmapped: 98615296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:33.154294+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571195392 unmapped: 98615296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:34.155238+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571211776 unmapped: 98598912 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:35.155377+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571211776 unmapped: 98598912 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:36.155505+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571211776 unmapped: 98598912 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:37.155642+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571211776 unmapped: 98598912 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:38.155840+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571211776 unmapped: 98598912 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:39.155995+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571211776 unmapped: 98598912 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:40.156122+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571211776 unmapped: 98598912 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:41.156279+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571211776 unmapped: 98598912 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:42.156590+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571228160 unmapped: 98582528 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:43.156780+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571228160 unmapped: 98582528 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:44.157478+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571228160 unmapped: 98582528 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:45.157838+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571228160 unmapped: 98582528 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:46.158205+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571228160 unmapped: 98582528 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:47.158517+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571228160 unmapped: 98582528 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc73c000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 91.183334351s of 91.992614746s, submitted: 27
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:48.158809+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571228160 unmapped: 98582528 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:49.158975+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571236352 unmapped: 98574336 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5532376 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:50.159257+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbc73c000 session 0x562dbb9dfe00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571252736 unmapped: 98557952 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:51.159461+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571252736 unmapped: 98557952 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:52.159718+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571252736 unmapped: 98557952 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:53.159858+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954bc000/0x0/0x1bfc00000, data 0x1dd72c9/0x2002000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571252736 unmapped: 98557952 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:54.160284+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571252736 unmapped: 98557952 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5532304 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:55.160564+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571252736 unmapped: 98557952 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954bc000/0x0/0x1bfc00000, data 0x1dd72c9/0x2002000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:56.160810+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571260928 unmapped: 98549760 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:57.161080+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571260928 unmapped: 98549760 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:58.161318+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571277312 unmapped: 98533376 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:59.161446+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571277312 unmapped: 98533376 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954bc000/0x0/0x1bfc00000, data 0x1dd72c9/0x2002000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5532304 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:00.161637+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571277312 unmapped: 98533376 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:01.161821+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571277312 unmapped: 98533376 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:02.161989+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571277312 unmapped: 98533376 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:03.162198+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571277312 unmapped: 98533376 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:04.162366+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571277312 unmapped: 98533376 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5532304 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:05.162519+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571277312 unmapped: 98533376 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954bc000/0x0/0x1bfc00000, data 0x1dd72c9/0x2002000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:06.162686+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571285504 unmapped: 98525184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:07.162836+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571285504 unmapped: 98525184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:08.163064+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571285504 unmapped: 98525184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:09.163242+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571285504 unmapped: 98525184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954bc000/0x0/0x1bfc00000, data 0x1dd72c9/0x2002000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5532304 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:10.163404+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571285504 unmapped: 98525184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:11.163558+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571285504 unmapped: 98525184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:12.163747+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571285504 unmapped: 98525184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 31 09:15:09 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4037258470' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:13.163911+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954bc000/0x0/0x1bfc00000, data 0x1dd72c9/0x2002000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571285504 unmapped: 98525184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:14.164012+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571301888 unmapped: 98508800 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5532304 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:15.164193+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc04ae400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc04ae400 session 0x562dbc8f3c20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571301888 unmapped: 98508800 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:16.164354+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571301888 unmapped: 98508800 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954bc000/0x0/0x1bfc00000, data 0x1dd72c9/0x2002000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:17.164510+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571310080 unmapped: 98500608 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:18.164697+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc73a000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbc73a000 session 0x562dba8a1a40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571310080 unmapped: 98500608 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:19.164777+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1ed3000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc1ed3000 session 0x562dbb9df860
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1101800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.385900497s of 31.481319427s, submitted: 5
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571310080 unmapped: 98500608 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:20.164901+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5532256 data_alloc: 218103808 data_used: 10280960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954bc000/0x0/0x1bfc00000, data 0x1dd72c9/0x2002000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571318272 unmapped: 98492416 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc1101800 session 0x562dbecb3e00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:21.165066+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc73a000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc73c000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571318272 unmapped: 98492416 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:22.165183+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571326464 unmapped: 98484224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:23.165274+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571326464 unmapped: 98484224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:24.165365+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571326464 unmapped: 98484224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:25.165504+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5536132 data_alloc: 218103808 data_used: 10547200
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571326464 unmapped: 98484224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x195498000/0x0/0x1bfc00000, data 0x1dfb2c9/0x2026000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:26.165663+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571326464 unmapped: 98484224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:27.165801+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571326464 unmapped: 98484224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:28.165940+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571326464 unmapped: 98484224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:29.166322+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571326464 unmapped: 98484224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:30.166589+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5536132 data_alloc: 218103808 data_used: 10547200
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x195498000/0x0/0x1bfc00000, data 0x1dfb2c9/0x2026000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbc73a000 session 0x562dbb34e1e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbc73c000 session 0x562dc18330e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571334656 unmapped: 98476032 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc57da400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.131487846s of 11.351829529s, submitted: 2
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:31.166736+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc57da400 session 0x562dbecb2000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 98467840 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:32.166875+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 98467840 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954bc000/0x0/0x1bfc00000, data 0x1dd72c9/0x2002000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:33.167061+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 98467840 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:34.167234+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 98467840 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:35.167392+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5533976 data_alloc: 218103808 data_used: 10543104
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 98467840 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:36.167591+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 98467840 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc18e4400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc18e4400 session 0x562dbb47ef00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc43dd800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:37.167725+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 98467840 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:38.259549+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954bc000/0x0/0x1bfc00000, data 0x1dd72c9/0x2002000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571359232 unmapped: 98451456 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc43dd800 session 0x562dbd84f4a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:39.259882+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571367424 unmapped: 98443264 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5529916 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:40.260049+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571367424 unmapped: 98443264 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:41.260484+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571367424 unmapped: 98443264 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:42.260633+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571367424 unmapped: 98443264 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fc000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:43.260944+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571367424 unmapped: 98443264 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:44.261090+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571367424 unmapped: 98443264 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbe2e0000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5529916 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:45.261378+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbe2e0000 session 0x562dbd84da40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc57d9400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc57d9400 session 0x562dba285a40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571367424 unmapped: 98443264 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:46.261551+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571367424 unmapped: 98443264 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:47.261780+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571367424 unmapped: 98443264 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:48.261955+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc10ff800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.310680389s of 17.490665436s, submitted: 12
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fc000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571375616 unmapped: 98435072 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc10ff800 session 0x562dbd84e1e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:49.262082+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571375616 unmapped: 98435072 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:50.262215+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5533282 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571375616 unmapped: 98435072 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:51.262578+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571375616 unmapped: 98435072 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:52.262729+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbe3ecc00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbe3ecc00 session 0x562dbd7ff680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1ed3400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571383808 unmapped: 98426880 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fc000/0x0/0x1bfc00000, data 0x1d97276/0x1fc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:53.262875+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571400192 unmapped: 98410496 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:54.263020+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 98394112 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc1ed3400 session 0x562dbc8dfc20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:55.263244+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5531817 data_alloc: 218103808 data_used: 10276864
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbe2e0000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 98394112 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:56.263386+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbe2e0000 session 0x562dbecb3680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 98394112 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbe3ecc00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:57.263546+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbe3ecc00 session 0x562dbd5feb40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 98394112 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:58.263718+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc737400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.789875031s of 10.388438225s, submitted: 39
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 98394112 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x195113000/0x0/0x1bfc00000, data 0x2180276/0x23ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:59.263975+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _renew_subs
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 420 handle_osd_map epochs [421,421], i have 421, src has [1,421]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 98377728 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:00.264115+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5573668 data_alloc: 218103808 data_used: 10289152
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 421 ms_handle_reset con 0x562dbc737400 session 0x562dbb549680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 98377728 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:01.264265+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 98377728 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:02.264393+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 98369536 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:03.264547+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 98369536 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:04.264680+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 98369536 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 421 heartbeat osd_stat(store_statfs(0x19510f000/0x0/0x1bfc00000, data 0x2181ecf/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:05.264881+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5573668 data_alloc: 218103808 data_used: 10289152
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 98369536 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:06.265014+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 98369536 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 421 heartbeat osd_stat(store_statfs(0x19510f000/0x0/0x1bfc00000, data 0x2181ecf/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:07.265154+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 98369536 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:08.265326+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 98369536 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:09.265487+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571449344 unmapped: 98361344 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:10.265619+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5573668 data_alloc: 218103808 data_used: 10289152
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbcffbc00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1ed3000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.935684204s of 11.886166573s, submitted: 7
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 421 heartbeat osd_stat(store_statfs(0x19510f000/0x0/0x1bfc00000, data 0x2181ecf/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 98353152 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:11.265786+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 421 ms_handle_reset con 0x562dc1ed3000 session 0x562dc822c5a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 421 ms_handle_reset con 0x562dbcffbc00 session 0x562dbb34f680
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 98353152 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:12.265939+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 98353152 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 421 heartbeat osd_stat(store_statfs(0x195110000/0x0/0x1bfc00000, data 0x2181ecf/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:13.266140+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 98353152 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:14.266308+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 98353152 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:15.266457+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5574262 data_alloc: 218103808 data_used: 10289152
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 98353152 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc57d9000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 421 ms_handle_reset con 0x562dc57d9000 session 0x562dba8a1860
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:16.266589+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc721c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 421 ms_handle_reset con 0x562dbc721c00 session 0x562dbd5732c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 98353152 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:17.266823+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbf563800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 421 ms_handle_reset con 0x562dbf563800 session 0x562dbd84a3c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571465728 unmapped: 98344960 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc43df400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:18.267084+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 421 heartbeat osd_stat(store_statfs(0x195110000/0x0/0x1bfc00000, data 0x2181ecf/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [0,0,0,0,1])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571473920 unmapped: 98336768 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:19.267243+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 421 ms_handle_reset con 0x562dc43df400 session 0x562dba8b65a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571482112 unmapped: 98328576 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:20.267587+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1b800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa19400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5580845 data_alloc: 218103808 data_used: 10289152
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 421 heartbeat osd_stat(store_statfs(0x1950ea000/0x0/0x1bfc00000, data 0x21a5f02/0x23d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571498496 unmapped: 98312192 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:21.267709+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571506688 unmapped: 98304000 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:22.267826+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571506688 unmapped: 98304000 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:23.268019+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571506688 unmapped: 98304000 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:24.268185+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571506688 unmapped: 98304000 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:25.268463+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5607857 data_alloc: 218103808 data_used: 14090240
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 421 ms_handle_reset con 0x562dbaa1b800 session 0x562dbd84e3c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.251832008s of 14.803118706s, submitted: 14
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 421 ms_handle_reset con 0x562dbaa19400 session 0x562dbbac1860
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc721c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 421 heartbeat osd_stat(store_statfs(0x1950ea000/0x0/0x1bfc00000, data 0x21a5f02/0x23d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571506688 unmapped: 98304000 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:26.268620+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571506688 unmapped: 98304000 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:27.268808+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571506688 unmapped: 98304000 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:28.269082+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 421 ms_handle_reset con 0x562dbc721c00 session 0x562dba8730e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 421 heartbeat osd_stat(store_statfs(0x19510f000/0x0/0x1bfc00000, data 0x2181ef2/0x23af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571506688 unmapped: 98304000 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:29.269273+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571506688 unmapped: 98304000 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:30.269444+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5603924 data_alloc: 218103808 data_used: 14086144
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571506688 unmapped: 98304000 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5a82c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:31.269672+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 421 ms_handle_reset con 0x562dc5a82c00 session 0x562dbd1b32c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2855800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571506688 unmapped: 98304000 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:32.269831+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571514880 unmapped: 98295808 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:33.269985+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571514880 unmapped: 98295808 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 421 heartbeat osd_stat(store_statfs(0x195110000/0x0/0x1bfc00000, data 0x2181ecf/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:34.270135+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 421 ms_handle_reset con 0x562dc2855800 session 0x562dba284960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571514880 unmapped: 98295808 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:35.270285+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5603179 data_alloc: 218103808 data_used: 14086144
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571514880 unmapped: 98295808 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb60f400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.243093014s of 10.641144753s, submitted: 30
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:36.270435+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 421 ms_handle_reset con 0x562dbb60f400 session 0x562dbd5732c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa19400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571523072 unmapped: 98287616 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:37.270592+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 422 ms_handle_reset con 0x562dbaa19400 session 0x562dbd5feb40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572579840 unmapped: 97230848 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 422 heartbeat osd_stat(store_statfs(0x19510c000/0x0/0x1bfc00000, data 0x2183b7c/0x23b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:38.270771+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc721c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572579840 unmapped: 97230848 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 422 ms_handle_reset con 0x562dbc721c00 session 0x562dba285a40
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:39.270919+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbae52400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572579840 unmapped: 97230848 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:40.271089+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5605593 data_alloc: 218103808 data_used: 14090240
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 98099200 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:41.271264+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 422 ms_handle_reset con 0x562dbae52400 session 0x562dbb47ef00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 98099200 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:42.271398+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 98099200 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:43.271570+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 422 heartbeat osd_stat(store_statfs(0x1954f7000/0x0/0x1bfc00000, data 0x1d9ab6c/0x1fc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 98099200 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:44.271715+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 98099200 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:45.271859+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5547029 data_alloc: 218103808 data_used: 10293248
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc720c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 422 ms_handle_reset con 0x562dbc720c00 session 0x562dbb34ed20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 98099200 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc54c6800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:46.271994+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 422 ms_handle_reset con 0x562dc54c6800 session 0x562dbb56f2c0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 98099200 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.051100731s of 11.013141632s, submitted: 41
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:47.272159+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x1954f7000/0x0/0x1bfc00000, data 0x1d9ab6c/0x1fc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 98099200 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x1954f3000/0x0/0x1bfc00000, data 0x1d9c6ab/0x1fca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:48.272346+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa19400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 ms_handle_reset con 0x562dbaa19400 session 0x562dbbb14960
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571719680 unmapped: 98091008 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:49.272498+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571736064 unmapped: 98074624 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:50.272647+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5552677 data_alloc: 218103808 data_used: 10301440
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571736064 unmapped: 98074624 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:51.272806+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb60ec00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 ms_handle_reset con 0x562dbb60ec00 session 0x562dbd1b30e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc43df400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571736064 unmapped: 98074624 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x1954f4000/0x0/0x1bfc00000, data 0x1d9c6ab/0x1fca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [0,0,0,0,0,0,2])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:52.272962+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571736064 unmapped: 98074624 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:53.273133+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 ms_handle_reset con 0x562dc43df400 session 0x562dbd84d4a0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571736064 unmapped: 98074624 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaf8fc00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:54.273335+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 ms_handle_reset con 0x562dbaf8fc00 session 0x562dbd5d81e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a4400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 ms_handle_reset con 0x562dbb5a4400 session 0x562dbd5d9c20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:55.273493+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5608672 data_alloc: 218103808 data_used: 10301440
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:56.273646+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:57.273830+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x194d85000/0x0/0x1bfc00000, data 0x250b6ab/0x2739000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:58.274002+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x194d85000/0x0/0x1bfc00000, data 0x250b6ab/0x2739000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x194d85000/0x0/0x1bfc00000, data 0x250b6ab/0x2739000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:59.274146+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x194d85000/0x0/0x1bfc00000, data 0x250b6ab/0x2739000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:00.274261+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5608992 data_alloc: 218103808 data_used: 10309632
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:01.274378+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:02.274610+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:03.274912+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:04.275169+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x194d85000/0x0/0x1bfc00000, data 0x250b6ab/0x2739000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:05.275353+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5608992 data_alloc: 218103808 data_used: 10309632
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:06.275609+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:07.275779+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:08.276088+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x194d85000/0x0/0x1bfc00000, data 0x250b6ab/0x2739000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x194d85000/0x0/0x1bfc00000, data 0x250b6ab/0x2739000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:09.276483+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:10.276642+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5608992 data_alloc: 218103808 data_used: 10309632
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:11.276794+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x194d85000/0x0/0x1bfc00000, data 0x250b6ab/0x2739000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:12.276976+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:13.277130+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbe2e0800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 23.204570770s of 26.430173874s, submitted: 37
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:14.277584+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 ms_handle_reset con 0x562dbe2e0800 session 0x562dbc8f2d20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5acd800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbe3ed800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:15.277951+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5610621 data_alloc: 218103808 data_used: 10309632
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:16.278230+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566681600 unmapped: 103129088 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x194d85000/0x0/0x1bfc00000, data 0x250b6ab/0x2739000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:17.278514+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566689792 unmapped: 103120896 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:18.278813+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566689792 unmapped: 103120896 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:19.279108+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566689792 unmapped: 103120896 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:20.279239+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566689792 unmapped: 103120896 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5665661 data_alloc: 234881024 data_used: 17715200
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:21.279479+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566689792 unmapped: 103120896 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x194d85000/0x0/0x1bfc00000, data 0x250b6ab/0x2739000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:22.279642+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566689792 unmapped: 103120896 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:23.279825+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566689792 unmapped: 103120896 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:24.280104+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566689792 unmapped: 103120896 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566689792 unmapped: 103120896 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5665661 data_alloc: 234881024 data_used: 17715200
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x194d85000/0x0/0x1bfc00000, data 0x250b6ab/0x2739000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:25.755279+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566689792 unmapped: 103120896 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:26.756808+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x194d85000/0x0/0x1bfc00000, data 0x250b6ab/0x2739000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566689792 unmapped: 103120896 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.507968903s of 14.137735367s, submitted: 7
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:27.757021+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568377344 unmapped: 101433344 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:28.757878+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569614336 unmapped: 100196352 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:29.758158+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x1947a2000/0x0/0x1bfc00000, data 0x2aee6ab/0x2d1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569614336 unmapped: 100196352 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:30.758334+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5710805 data_alloc: 234881024 data_used: 18042880
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569614336 unmapped: 100196352 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:31.758669+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569614336 unmapped: 100196352 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:32.758859+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x19479a000/0x0/0x1bfc00000, data 0x2af66ab/0x2d24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569614336 unmapped: 100196352 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:33.759111+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569614336 unmapped: 100196352 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:34.759345+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569614336 unmapped: 100196352 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:35.759538+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5710017 data_alloc: 234881024 data_used: 18305024
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569614336 unmapped: 100196352 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:36.759730+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569614336 unmapped: 100196352 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:37.759907+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x19479a000/0x0/0x1bfc00000, data 0x2af66ab/0x2d24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569622528 unmapped: 100188160 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:38.760203+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569622528 unmapped: 100188160 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:39.760413+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569622528 unmapped: 100188160 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:40.760660+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5710017 data_alloc: 234881024 data_used: 18305024
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569622528 unmapped: 100188160 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:41.760824+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569622528 unmapped: 100188160 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:42.760964+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x19479a000/0x0/0x1bfc00000, data 0x2af66ab/0x2d24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569622528 unmapped: 100188160 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:43.761134+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x19479a000/0x0/0x1bfc00000, data 0x2af66ab/0x2d24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569622528 unmapped: 100188160 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:44.761266+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569622528 unmapped: 100188160 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:45.761422+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5710017 data_alloc: 234881024 data_used: 18305024
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569630720 unmapped: 100179968 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:46.764084+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569630720 unmapped: 100179968 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:47.767230+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569630720 unmapped: 100179968 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:48.769783+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x19479a000/0x0/0x1bfc00000, data 0x2af66ab/0x2d24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x19479a000/0x0/0x1bfc00000, data 0x2af66ab/0x2d24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569630720 unmapped: 100179968 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:49.771170+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569630720 unmapped: 100179968 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:50.772589+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5710017 data_alloc: 234881024 data_used: 18305024
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569630720 unmapped: 100179968 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:51.772906+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569630720 unmapped: 100179968 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:52.773164+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569630720 unmapped: 100179968 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:53.773691+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x19479a000/0x0/0x1bfc00000, data 0x2af66ab/0x2d24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1a400
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569638912 unmapped: 100171776 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:54.773840+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbf6eec00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.571533203s of 27.842430115s, submitted: 62
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569638912 unmapped: 100171776 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:55.773987+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5714694 data_alloc: 234881024 data_used: 18313216
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 424 heartbeat osd_stat(store_statfs(0x19479a000/0x0/0x1bfc00000, data 0x2af66ab/0x2d24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 424 ms_handle_reset con 0x562dbaa1a400 session 0x562dbd84d0e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569655296 unmapped: 100155392 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:56.774137+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 425 ms_handle_reset con 0x562dbf6eec00 session 0x562dbba82000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:57.774395+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569671680 unmapped: 100139008 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:58.774629+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569679872 unmapped: 100130816 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x194793000/0x0/0x1bfc00000, data 0x2af9f5d/0x2d2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:59.775544+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569679872 unmapped: 100130816 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:00.775741+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569679872 unmapped: 100130816 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5717444 data_alloc: 234881024 data_used: 18313216
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x194793000/0x0/0x1bfc00000, data 0x2af9f5d/0x2d2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x194793000/0x0/0x1bfc00000, data 0x2af9f5d/0x2d2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:01.775881+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569688064 unmapped: 100122624 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:02.776083+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569696256 unmapped: 100114432 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:03.776325+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569696256 unmapped: 100114432 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:04.776943+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569696256 unmapped: 100114432 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:05.777374+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569696256 unmapped: 100114432 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5739694 data_alloc: 234881024 data_used: 18313216
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:06.777644+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569696256 unmapped: 100114432 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x194792000/0x0/0x1bfc00000, data 0x2e04f5d/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:07.777803+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569696256 unmapped: 100114432 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1100800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2855c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 425 ms_handle_reset con 0x562dc2855c00 session 0x562dbc8f30e0
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 425 ms_handle_reset con 0x562dc1100800 session 0x562dbd84dc20
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:08.778117+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569696256 unmapped: 100114432 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:09.778390+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569696256 unmapped: 100114432 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x194792000/0x0/0x1bfc00000, data 0x2e04f5d/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:10.778559+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569704448 unmapped: 100106240 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5739694 data_alloc: 234881024 data_used: 18313216
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:11.778722+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569704448 unmapped: 100106240 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:12.778923+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569704448 unmapped: 100106240 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:13.779138+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569704448 unmapped: 100106240 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x194792000/0x0/0x1bfc00000, data 0x2e04f5d/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:14.779357+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569704448 unmapped: 100106240 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:15.779609+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569704448 unmapped: 100106240 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5739694 data_alloc: 234881024 data_used: 18313216
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd67f800
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.885513306s of 20.959287643s, submitted: 11
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:16.779830+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569679872 unmapped: 100130816 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:17.779982+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569679872 unmapped: 100130816 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x194792000/0x0/0x1bfc00000, data 0x2e04f5d/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 425 ms_handle_reset con 0x562dbd67f800 session 0x562dc822c000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:18.780155+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570040320 unmapped: 99770368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5a82c00
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1ed3000
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:19.780309+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570040320 unmapped: 99770368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:20.780415+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570040320 unmapped: 99770368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5745121 data_alloc: 234881024 data_used: 18382848
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x194768000/0x0/0x1bfc00000, data 0x2e2ef5d/0x2d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:21.780703+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570040320 unmapped: 99770368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:22.780850+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570040320 unmapped: 99770368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:23.781007+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570040320 unmapped: 99770368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:24.781159+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570040320 unmapped: 99770368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:25.781287+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570040320 unmapped: 99770368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5745121 data_alloc: 234881024 data_used: 18382848
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:26.781402+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x194768000/0x0/0x1bfc00000, data 0x2e2ef5d/0x2d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570040320 unmapped: 99770368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:27.781529+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570040320 unmapped: 99770368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:28.781669+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570040320 unmapped: 99770368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:29.781792+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570040320 unmapped: 99770368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:30.781904+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570040320 unmapped: 99770368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5745121 data_alloc: 234881024 data_used: 18382848
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:31.782055+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570040320 unmapped: 99770368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.065715790s of 15.500749588s, submitted: 8
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x194768000/0x0/0x1bfc00000, data 0x2e2ef5d/0x2d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:32.782169+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574160896 unmapped: 95649792 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:33.782285+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574160896 unmapped: 95649792 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:34.788855+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574160896 unmapped: 95649792 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:35.789002+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574160896 unmapped: 95649792 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:09 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:09 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5824545 data_alloc: 234881024 data_used: 21880832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x1941e9000/0x0/0x1bfc00000, data 0x33adf5d/0x32d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x1941e9000/0x0/0x1bfc00000, data 0x33adf5d/0x32d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:36.789154+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574341120 unmapped: 95469568 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: do_command 'config diff' '{prefix=config diff}'
Jan 31 09:15:09 compute-2 ceph-osd[79942]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 09:15:09 compute-2 ceph-osd[79942]: do_command 'config show' '{prefix=config show}'
Jan 31 09:15:09 compute-2 ceph-osd[79942]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 09:15:09 compute-2 ceph-osd[79942]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 09:15:09 compute-2 ceph-osd[79942]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 09:15:09 compute-2 ceph-osd[79942]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 09:15:09 compute-2 ceph-osd[79942]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:37.789264+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573784064 unmapped: 96026624 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:15:09 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:38.789426+0000)
Jan 31 09:15:09 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571121664 unmapped: 98689024 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:09 compute-2 ceph-osd[79942]: do_command 'log dump' '{prefix=log dump}'
Jan 31 09:15:09 compute-2 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 09:15:09 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Jan 31 09:15:09 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1255271531' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 31 09:15:10 compute-2 ovn_controller[133834]: 2026-01-31T09:15:10Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d5:ba:ad 10.100.0.12
Jan 31 09:15:10 compute-2 ovn_controller[133834]: 2026-01-31T09:15:10Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d5:ba:ad 10.100.0.12
Jan 31 09:15:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 31 09:15:10 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/72351150' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:15:10 compute-2 ceph-mon[77282]: from='client.46699 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1762674557' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:15:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/278558995' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 31 09:15:10 compute-2 ceph-mon[77282]: from='client.46711 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3770759324' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 31 09:15:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/944244339' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 31 09:15:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2502968145' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:10 compute-2 ceph-mon[77282]: from='client.46720 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/786834206' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 31 09:15:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/367686405' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 31 09:15:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4037258470' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:15:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3608176849' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 31 09:15:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1255271531' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 31 09:15:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2280146538' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 31 09:15:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2781276104' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 31 09:15:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/72351150' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:15:10 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/44827191' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 31 09:15:10 compute-2 sudo[343794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:15:10 compute-2 sudo[343794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:10 compute-2 sudo[343794]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:10 compute-2 sudo[343827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:15:10 compute-2 sudo[343827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:10 compute-2 sudo[343827]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:10 compute-2 nova_compute[226829]: 2026-01-31 09:15:10.686 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:10 compute-2 nova_compute[226829]: 2026-01-31 09:15:10.819 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 09:15:10 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/644624968' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:15:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:11.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:11 compute-2 crontab[343970]: (root) LIST (root)
Jan 31 09:15:11 compute-2 ceph-mon[77282]: from='client.46735 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:11 compute-2 ceph-mon[77282]: pgmap v4311: 305 pgs: 305 active+clean; 214 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 504 KiB/s wr, 53 op/s
Jan 31 09:15:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2739556914' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 31 09:15:11 compute-2 ceph-mon[77282]: from='client.46750 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1117598229' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 09:15:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3058079886' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 09:15:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/859707252' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 31 09:15:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1159161859' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 31 09:15:11 compute-2 ceph-mon[77282]: from='client.46762 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/644624968' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:15:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4193824802' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 31 09:15:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2195936456' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 31 09:15:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2741872118' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 31 09:15:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 31 09:15:11 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/229452426' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:15:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:11.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #211. Immutable memtables: 0.
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:15:11.611415) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 135] Flushing memtable with next log file: 211
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850911611491, "job": 135, "event": "flush_started", "num_memtables": 1, "num_entries": 1228, "num_deletes": 257, "total_data_size": 2362501, "memory_usage": 2395360, "flush_reason": "Manual Compaction"}
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 135] Level-0 flush table #212: started
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850911662952, "cf_name": "default", "job": 135, "event": "table_file_creation", "file_number": 212, "file_size": 1557821, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 101632, "largest_seqno": 102855, "table_properties": {"data_size": 1552111, "index_size": 2913, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14514, "raw_average_key_size": 21, "raw_value_size": 1539829, "raw_average_value_size": 2228, "num_data_blocks": 127, "num_entries": 691, "num_filter_entries": 691, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850833, "oldest_key_time": 1769850833, "file_creation_time": 1769850911, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 212, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 135] Flush lasted 51598 microseconds, and 3430 cpu microseconds.
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:15:11.663009) [db/flush_job.cc:967] [default] [JOB 135] Level-0 flush table #212: 1557821 bytes OK
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:15:11.663076) [db/memtable_list.cc:519] [default] Level-0 commit table #212 started
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:15:11.666723) [db/memtable_list.cc:722] [default] Level-0 commit table #212: memtable #1 done
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:15:11.666740) EVENT_LOG_v1 {"time_micros": 1769850911666734, "job": 135, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:15:11.666758) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 135] Try to delete WAL files size 2356192, prev total WAL file size 2356192, number of live WAL files 2.
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000208.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:15:11.667464) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303230' seq:72057594037927935, type:22 .. '6C6F676D0034323733' seq:0, type:0; will stop at (end)
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 136] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 135 Base level 0, inputs: [212(1521KB)], [210(11MB)]
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850911667545, "job": 136, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [212], "files_L6": [210], "score": -1, "input_data_size": 13908253, "oldest_snapshot_seqno": -1}
Jan 31 09:15:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Jan 31 09:15:11 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3506100947' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 136] Generated table #213: 12024 keys, 13785058 bytes, temperature: kUnknown
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850911862416, "cf_name": "default", "job": 136, "event": "table_file_creation", "file_number": 213, "file_size": 13785058, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13710702, "index_size": 43203, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30085, "raw_key_size": 318806, "raw_average_key_size": 26, "raw_value_size": 13503877, "raw_average_value_size": 1123, "num_data_blocks": 1635, "num_entries": 12024, "num_filter_entries": 12024, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769850911, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 213, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:15:11.862699) [db/compaction/compaction_job.cc:1663] [default] [JOB 136] Compacted 1@0 + 1@6 files to L6 => 13785058 bytes
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:15:11.873348) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 71.3 rd, 70.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 11.8 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(17.8) write-amplify(8.8) OK, records in: 12555, records dropped: 531 output_compression: NoCompression
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:15:11.873382) EVENT_LOG_v1 {"time_micros": 1769850911873369, "job": 136, "event": "compaction_finished", "compaction_time_micros": 194960, "compaction_time_cpu_micros": 27673, "output_level": 6, "num_output_files": 1, "total_output_size": 13785058, "num_input_records": 12555, "num_output_records": 12024, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000212.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850911873648, "job": 136, "event": "table_file_deletion", "file_number": 212}
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000210.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850911874712, "job": 136, "event": "table_file_deletion", "file_number": 210}
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:15:11.667323) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:15:11.874783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:15:11.874790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:15:11.874791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:15:11.874793) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:15:11 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:15:11.874795) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:15:12 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:15:12 compute-2 ceph-mon[77282]: from='client.46768 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3682084879' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 31 09:15:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/229452426' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:15:12 compute-2 ceph-mon[77282]: from='client.52031 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:12 compute-2 ceph-mon[77282]: from='client.52022 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:12 compute-2 ceph-mon[77282]: from='client.46780 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:12 compute-2 ceph-mon[77282]: from='client.46786 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3143707149' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 31 09:15:12 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3506100947' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 31 09:15:12 compute-2 ceph-mon[77282]: from='client.37974 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:12 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Jan 31 09:15:12 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/784799401' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 31 09:15:12 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Jan 31 09:15:12 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3811429950' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 31 09:15:12 compute-2 sudo[344182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:15:12 compute-2 sudo[344182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:12 compute-2 sudo[344182]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:12 compute-2 sudo[344253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:15:12 compute-2 sudo[344253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:12 compute-2 sudo[344253]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:12 compute-2 podman[344224]: 2026-01-31 09:15:12.986239886 +0000 UTC m=+0.108935444 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 09:15:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Jan 31 09:15:13 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/727621978' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 31 09:15:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:13.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Jan 31 09:15:13 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2630241147' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:13.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Jan 31 09:15:13 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1228540386' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 31 09:15:13 compute-2 ceph-mon[77282]: from='client.52043 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:13 compute-2 ceph-mon[77282]: from='client.37980 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:13 compute-2 ceph-mon[77282]: pgmap v4312: 305 pgs: 305 active+clean; 214 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 504 KiB/s wr, 53 op/s
Jan 31 09:15:13 compute-2 ceph-mon[77282]: from='client.52055 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:13 compute-2 ceph-mon[77282]: from='client.37989 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:13 compute-2 ceph-mon[77282]: from='client.46819 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:15:13 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:15:13 compute-2 ceph-mon[77282]: from='client.52073 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:13 compute-2 ceph-mon[77282]: from='client.38001 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/784799401' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 31 09:15:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3811429950' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 31 09:15:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3123822868' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 31 09:15:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/390468352' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 31 09:15:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/727621978' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 31 09:15:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2630241147' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1680094024' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 31 09:15:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3679429532' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 31 09:15:13 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1228540386' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 31 09:15:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Jan 31 09:15:13 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3907526084' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 31 09:15:13 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Jan 31 09:15:13 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1591563330' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 31 09:15:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Jan 31 09:15:14 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1669534733' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-2 systemd[1]: Starting Hostname Service...
Jan 31 09:15:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Jan 31 09:15:14 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2454191839' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 31 09:15:14 compute-2 systemd[1]: Started Hostname Service.
Jan 31 09:15:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Jan 31 09:15:14 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2745963781' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Jan 31 09:15:14 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1497551720' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 31 09:15:14 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:15:14 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:15:14 compute-2 ceph-mon[77282]: from='client.52091 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-2 ceph-mon[77282]: from='client.38019 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-2 ceph-mon[77282]: from='client.52112 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-2 ceph-mon[77282]: from='client.38040 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3907526084' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-2 ceph-mon[77282]: from='client.52127 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3985781755' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1591563330' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 31 09:15:14 compute-2 ceph-mon[77282]: from='client.38055 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1356496342' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1669534733' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2454191839' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 31 09:15:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4113116959' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:15:14 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:15:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2399550788' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2745963781' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:15:14 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:15:14 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:15:14 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:15:14 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1497551720' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 31 09:15:14 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:15:14 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:15:14 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:15:14 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:15:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Jan 31 09:15:14 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3118020784' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Jan 31 09:15:14 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3308132613' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 31 09:15:14 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:15:14 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:15:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:15:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:15.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:15:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 31 09:15:15 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4007122892' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 09:15:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Jan 31 09:15:15 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/183049023' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 31 09:15:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:15.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:15 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Jan 31 09:15:15 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1314856845' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 31 09:15:15 compute-2 nova_compute[226829]: 2026-01-31 09:15:15.688 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:15 compute-2 ceph-mon[77282]: pgmap v4313: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 541 KiB/s wr, 55 op/s
Jan 31 09:15:15 compute-2 ceph-mon[77282]: from='client.38064 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:15 compute-2 ceph-mon[77282]: from='client.38091 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:15 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:15:15 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:15:15 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:15:15 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:15:15 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:15:15 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:15:15 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:15:15 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:15:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3118020784' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 31 09:15:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3308132613' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 31 09:15:15 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:15:15 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:15:15 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:15:15 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:15:15 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:15:15 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:15:15 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:15:15 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:15:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4007122892' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 09:15:15 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:15:15 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:15:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/183049023' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 31 09:15:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1026374254' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 31 09:15:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1314856845' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 31 09:15:15 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3326806464' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 31 09:15:15 compute-2 nova_compute[226829]: 2026-01-31 09:15:15.820 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:17.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Jan 31 09:15:17 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3756849655' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 31 09:15:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:17.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:15:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Jan 31 09:15:17 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1638522012' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 31 09:15:18 compute-2 ceph-mon[77282]: from='client.46981 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:18 compute-2 ceph-mon[77282]: from='client.52238 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2534597622' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 31 09:15:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2530731165' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 31 09:15:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 31 09:15:18 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2729769620' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:15:18 compute-2 podman[345133]: 2026-01-31 09:15:18.161524349 +0000 UTC m=+0.050903735 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 09:15:18 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Jan 31 09:15:18 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3196856528' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 31 09:15:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:15:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:19.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:15:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:15:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:19.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:15:19 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:15:19 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:15:19 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:15:19 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:15:20 compute-2 ceph-mon[77282]: from='client.38178 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:20 compute-2 ceph-mon[77282]: from='client.46990 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:20 compute-2 ceph-mon[77282]: pgmap v4314: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 292 KiB/s rd, 71 KiB/s wr, 22 op/s
Jan 31 09:15:20 compute-2 ceph-mon[77282]: from='client.46996 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:20 compute-2 ceph-mon[77282]: from='client.52244 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:20 compute-2 ceph-mon[77282]: from='client.47008 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1791200665' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 31 09:15:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3364397964' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 31 09:15:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3756849655' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 31 09:15:20 compute-2 ceph-mon[77282]: from='client.47020 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:20 compute-2 ceph-mon[77282]: from='client.47038 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1638522012' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 31 09:15:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/161788473' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 31 09:15:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/422475968' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 31 09:15:20 compute-2 ceph-mon[77282]: from='client.47044 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2729769620' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:15:20 compute-2 ceph-mon[77282]: pgmap v4315: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 86 KiB/s rd, 47 KiB/s wr, 6 op/s
Jan 31 09:15:20 compute-2 ceph-mon[77282]: from='client.47056 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3196856528' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 31 09:15:20 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2778064777' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 31 09:15:20 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Jan 31 09:15:20 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3875175616' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 31 09:15:20 compute-2 nova_compute[226829]: 2026-01-31 09:15:20.691 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:20 compute-2 nova_compute[226829]: 2026-01-31 09:15:20.823 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:15:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:21.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:15:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:15:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:21.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:15:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:15:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:23.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Jan 31 09:15:23 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3135131644' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 31 09:15:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:23.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Jan 31 09:15:23 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3688358254' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 31 09:15:24 compute-2 ovn_controller[133834]: 2026-01-31T09:15:24Z|00873|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Jan 31 09:15:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Jan 31 09:15:24 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4078247171' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 31 09:15:24 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Jan 31 09:15:24 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/538974174' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 31 09:15:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:25.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:25.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Jan 31 09:15:25 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/833197903' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 31 09:15:25 compute-2 nova_compute[226829]: 2026-01-31 09:15:25.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:15:25 compute-2 nova_compute[226829]: 2026-01-31 09:15:25.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:15:25 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:15:25 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:15:25 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:15:25 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:15:25 compute-2 ceph-mon[77282]: from='client.38226 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:25 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1461464631' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 31 09:15:25 compute-2 nova_compute[226829]: 2026-01-31 09:15:25.693 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:25 compute-2 nova_compute[226829]: 2026-01-31 09:15:25.824 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:25 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Jan 31 09:15:25 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3773263261' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 31 09:15:26 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Jan 31 09:15:26 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1092119460' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 31 09:15:26 compute-2 ovs-appctl[346771]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 31 09:15:26 compute-2 ovs-appctl[346777]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 31 09:15:26 compute-2 ovs-appctl[346782]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 31 09:15:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3083985766' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 31 09:15:27 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:15:27 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:15:27 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:15:27 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:15:27 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:15:27 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:15:27 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:15:27 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:15:27 compute-2 ceph-mon[77282]: pgmap v4316: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 50 KiB/s wr, 3 op/s
Jan 31 09:15:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3875175616' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 31 09:15:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/772685924' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 31 09:15:27 compute-2 ceph-mon[77282]: pgmap v4317: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 40 KiB/s wr, 2 op/s
Jan 31 09:15:27 compute-2 ceph-mon[77282]: from='client.52319 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:27 compute-2 ceph-mon[77282]: from='client.47125 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3135131644' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 31 09:15:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3639415667' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 31 09:15:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3688358254' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 31 09:15:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/19574536' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 31 09:15:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4078247171' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 31 09:15:27 compute-2 ceph-mon[77282]: pgmap v4318: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 426 B/s rd, 43 KiB/s wr, 2 op/s
Jan 31 09:15:27 compute-2 ceph-mon[77282]: from='client.52337 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/538974174' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 31 09:15:27 compute-2 ceph-mon[77282]: from='client.47155 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:27 compute-2 ceph-mon[77282]: from='client.38268 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2338748616' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 31 09:15:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/833197903' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 31 09:15:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/693196906' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 31 09:15:27 compute-2 ceph-mon[77282]: from='client.38286 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3773263261' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 31 09:15:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:15:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:27.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:15:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:27.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:15:28 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Jan 31 09:15:28 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2314985797' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Jan 31 09:15:28 compute-2 nova_compute[226829]: 2026-01-31 09:15:28.883 226833 DEBUG oslo_concurrency.lockutils [None req-af8fd645-a38a-4597-bb0d-be3f1878438e ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "71be684d-2233-462f-8268-a0bf7ea3f281" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:15:28 compute-2 nova_compute[226829]: 2026-01-31 09:15:28.884 226833 DEBUG oslo_concurrency.lockutils [None req-af8fd645-a38a-4597-bb0d-be3f1878438e ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "71be684d-2233-462f-8268-a0bf7ea3f281" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:15:28 compute-2 nova_compute[226829]: 2026-01-31 09:15:28.885 226833 DEBUG oslo_concurrency.lockutils [None req-af8fd645-a38a-4597-bb0d-be3f1878438e ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "71be684d-2233-462f-8268-a0bf7ea3f281-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:15:28 compute-2 nova_compute[226829]: 2026-01-31 09:15:28.885 226833 DEBUG oslo_concurrency.lockutils [None req-af8fd645-a38a-4597-bb0d-be3f1878438e ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "71be684d-2233-462f-8268-a0bf7ea3f281-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:15:28 compute-2 nova_compute[226829]: 2026-01-31 09:15:28.885 226833 DEBUG oslo_concurrency.lockutils [None req-af8fd645-a38a-4597-bb0d-be3f1878438e ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "71be684d-2233-462f-8268-a0bf7ea3f281-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:15:28 compute-2 nova_compute[226829]: 2026-01-31 09:15:28.887 226833 INFO nova.compute.manager [None req-af8fd645-a38a-4597-bb0d-be3f1878438e ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Terminating instance
Jan 31 09:15:28 compute-2 nova_compute[226829]: 2026-01-31 09:15:28.888 226833 DEBUG nova.compute.manager [None req-af8fd645-a38a-4597-bb0d-be3f1878438e ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 09:15:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:15:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:29.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:15:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:29.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:29 compute-2 ceph-mon[77282]: pgmap v4319: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 0 B/s rd, 7.0 KiB/s wr, 0 op/s
Jan 31 09:15:29 compute-2 ceph-mon[77282]: from='client.38292 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:29 compute-2 ceph-mon[77282]: from='client.47176 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:29 compute-2 ceph-mon[77282]: from='client.52358 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1092119460' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 31 09:15:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/236107094' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Jan 31 09:15:29 compute-2 ceph-mon[77282]: from='client.52370 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1189658548' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Jan 31 09:15:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/162892977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:15:29 compute-2 ceph-mon[77282]: from='client.47188 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:29 compute-2 ceph-mon[77282]: from='client.38313 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/703663555' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Jan 31 09:15:29 compute-2 ceph-mon[77282]: from='client.47194 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4153647404' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Jan 31 09:15:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2469722022' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Jan 31 09:15:30 compute-2 kernel: tap16bbc36d-c7 (unregistering): left promiscuous mode
Jan 31 09:15:30 compute-2 NetworkManager[48999]: <info>  [1769850930.3027] device (tap16bbc36d-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 09:15:30 compute-2 ovn_controller[133834]: 2026-01-31T09:15:30Z|00874|binding|INFO|Releasing lport 16bbc36d-c752-47fb-8a65-efe72e3acf06 from this chassis (sb_readonly=0)
Jan 31 09:15:30 compute-2 ovn_controller[133834]: 2026-01-31T09:15:30Z|00875|binding|INFO|Setting lport 16bbc36d-c752-47fb-8a65-efe72e3acf06 down in Southbound
Jan 31 09:15:30 compute-2 nova_compute[226829]: 2026-01-31 09:15:30.307 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:30 compute-2 ovn_controller[133834]: 2026-01-31T09:15:30Z|00876|binding|INFO|Removing iface tap16bbc36d-c7 ovn-installed in OVS
Jan 31 09:15:30 compute-2 nova_compute[226829]: 2026-01-31 09:15:30.310 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:15:30.320 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:ba:ad 10.100.0.12'], port_security=['fa:16:3e:d5:ba:ad 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '71be684d-2233-462f-8268-a0bf7ea3f281', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c9ca540-57e7-412d-8ef3-af923db0a265', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98d10c0290e340a08e9d1726bf0066bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2b30892c-1416-4aa6-83f7-0d4ffa0734bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e88fefe4-17cc-4664-bc86-8614a5f025ec, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=16bbc36d-c752-47fb-8a65-efe72e3acf06) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:15:30 compute-2 nova_compute[226829]: 2026-01-31 09:15:30.322 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:15:30.330 143841 INFO neutron.agent.ovn.metadata.agent [-] Port 16bbc36d-c752-47fb-8a65-efe72e3acf06 in datapath 5c9ca540-57e7-412d-8ef3-af923db0a265 unbound from our chassis
Jan 31 09:15:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:15:30.333 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c9ca540-57e7-412d-8ef3-af923db0a265, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 09:15:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:15:30.335 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[b0514db1-7530-4176-9a99-4982064510cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:15:30 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:15:30.336 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 namespace which is not needed anymore
Jan 31 09:15:30 compute-2 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000dd.scope: Deactivated successfully.
Jan 31 09:15:30 compute-2 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000dd.scope: Consumed 15.232s CPU time.
Jan 31 09:15:30 compute-2 systemd-machined[195142]: Machine qemu-98-instance-000000dd terminated.
Jan 31 09:15:30 compute-2 ceph-mon[77282]: from='client.38325 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:30 compute-2 ceph-mon[77282]: pgmap v4320: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 0 B/s rd, 7.0 KiB/s wr, 0 op/s
Jan 31 09:15:30 compute-2 ceph-mon[77282]: from='client.52403 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4124568240' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Jan 31 09:15:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2043893363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:15:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2314985797' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Jan 31 09:15:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1815982370' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Jan 31 09:15:30 compute-2 ceph-mon[77282]: from='client.38355 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:30 compute-2 nova_compute[226829]: 2026-01-31 09:15:30.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:15:30 compute-2 nova_compute[226829]: 2026-01-31 09:15:30.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:15:30 compute-2 nova_compute[226829]: 2026-01-31 09:15:30.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:15:30 compute-2 nova_compute[226829]: 2026-01-31 09:15:30.504 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 31 09:15:30 compute-2 nova_compute[226829]: 2026-01-31 09:15:30.504 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:15:30 compute-2 nova_compute[226829]: 2026-01-31 09:15:30.504 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:15:30 compute-2 nova_compute[226829]: 2026-01-31 09:15:30.523 226833 INFO nova.virt.libvirt.driver [-] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Instance destroyed successfully.
Jan 31 09:15:30 compute-2 nova_compute[226829]: 2026-01-31 09:15:30.523 226833 DEBUG nova.objects.instance [None req-af8fd645-a38a-4597-bb0d-be3f1878438e ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lazy-loading 'resources' on Instance uuid 71be684d-2233-462f-8268-a0bf7ea3f281 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:15:30 compute-2 nova_compute[226829]: 2026-01-31 09:15:30.540 226833 DEBUG nova.virt.libvirt.vif [None req-af8fd645-a38a-4597-bb0d-be3f1878438e ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:14:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-604575612',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-604575612',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-604575612',id=221,image_ref='3235f05a-670b-496a-a8b6-5b3e82346d62',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMFXo6TXyOjOGOwMN7nMTJychPX0W8Ids/CcSfyK9jF+CBR02NNz9kBE3K04DOWNvBm8TZYgtZQkcIS2FaVaawjlXKIyFEJNIhjDFXBARZmbEzviijrK8StVMvPdMN+fA==',key_name='tempest-keypair-1394431595',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:14:49Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='98d10c0290e340a08e9d1726bf0066bf',ramdisk_id='',reservation_id='r-k0oxdq20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1294459393',image_owner_user_name='tempest-TestVolumeBootPattern-1294459393-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1294459393',owner_user_name='tempest-TestVolumeBootPattern-1294459393-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:14:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ecd39871d7fd438f88b36601f25d6eb6',uuid=71be684d-2233-462f-8268-a0bf7ea3f281,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "16bbc36d-c752-47fb-8a65-efe72e3acf06", "address": "fa:16:3e:d5:ba:ad", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16bbc36d-c7", "ovs_interfaceid": "16bbc36d-c752-47fb-8a65-efe72e3acf06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 09:15:30 compute-2 nova_compute[226829]: 2026-01-31 09:15:30.540 226833 DEBUG nova.network.os_vif_util [None req-af8fd645-a38a-4597-bb0d-be3f1878438e ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converting VIF {"id": "16bbc36d-c752-47fb-8a65-efe72e3acf06", "address": "fa:16:3e:d5:ba:ad", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16bbc36d-c7", "ovs_interfaceid": "16bbc36d-c752-47fb-8a65-efe72e3acf06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:15:30 compute-2 nova_compute[226829]: 2026-01-31 09:15:30.541 226833 DEBUG nova.network.os_vif_util [None req-af8fd645-a38a-4597-bb0d-be3f1878438e ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d5:ba:ad,bridge_name='br-int',has_traffic_filtering=True,id=16bbc36d-c752-47fb-8a65-efe72e3acf06,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16bbc36d-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:15:30 compute-2 nova_compute[226829]: 2026-01-31 09:15:30.542 226833 DEBUG os_vif [None req-af8fd645-a38a-4597-bb0d-be3f1878438e ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:ba:ad,bridge_name='br-int',has_traffic_filtering=True,id=16bbc36d-c752-47fb-8a65-efe72e3acf06,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16bbc36d-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 09:15:30 compute-2 nova_compute[226829]: 2026-01-31 09:15:30.545 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:30 compute-2 nova_compute[226829]: 2026-01-31 09:15:30.545 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16bbc36d-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:15:30 compute-2 nova_compute[226829]: 2026-01-31 09:15:30.549 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:30 compute-2 nova_compute[226829]: 2026-01-31 09:15:30.551 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:30 compute-2 nova_compute[226829]: 2026-01-31 09:15:30.556 226833 INFO os_vif [None req-af8fd645-a38a-4597-bb0d-be3f1878438e ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:ba:ad,bridge_name='br-int',has_traffic_filtering=True,id=16bbc36d-c752-47fb-8a65-efe72e3acf06,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16bbc36d-c7')
Jan 31 09:15:30 compute-2 sudo[348195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:15:30 compute-2 sudo[348195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:30 compute-2 sudo[348195]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:30 compute-2 nova_compute[226829]: 2026-01-31 09:15:30.672 226833 DEBUG nova.compute.manager [req-1dc02e59-a66d-4d57-816a-da4a6e57c82d req-73b23612-590e-41b7-9f12-145b91f37223 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Received event network-vif-unplugged-16bbc36d-c752-47fb-8a65-efe72e3acf06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:15:30 compute-2 nova_compute[226829]: 2026-01-31 09:15:30.672 226833 DEBUG oslo_concurrency.lockutils [req-1dc02e59-a66d-4d57-816a-da4a6e57c82d req-73b23612-590e-41b7-9f12-145b91f37223 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "71be684d-2233-462f-8268-a0bf7ea3f281-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:15:30 compute-2 nova_compute[226829]: 2026-01-31 09:15:30.673 226833 DEBUG oslo_concurrency.lockutils [req-1dc02e59-a66d-4d57-816a-da4a6e57c82d req-73b23612-590e-41b7-9f12-145b91f37223 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71be684d-2233-462f-8268-a0bf7ea3f281-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:15:30 compute-2 nova_compute[226829]: 2026-01-31 09:15:30.673 226833 DEBUG oslo_concurrency.lockutils [req-1dc02e59-a66d-4d57-816a-da4a6e57c82d req-73b23612-590e-41b7-9f12-145b91f37223 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71be684d-2233-462f-8268-a0bf7ea3f281-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:15:30 compute-2 nova_compute[226829]: 2026-01-31 09:15:30.673 226833 DEBUG nova.compute.manager [req-1dc02e59-a66d-4d57-816a-da4a6e57c82d req-73b23612-590e-41b7-9f12-145b91f37223 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] No waiting events found dispatching network-vif-unplugged-16bbc36d-c752-47fb-8a65-efe72e3acf06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:15:30 compute-2 nova_compute[226829]: 2026-01-31 09:15:30.674 226833 DEBUG nova.compute.manager [req-1dc02e59-a66d-4d57-816a-da4a6e57c82d req-73b23612-590e-41b7-9f12-145b91f37223 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Received event network-vif-unplugged-16bbc36d-c752-47fb-8a65-efe72e3acf06 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 09:15:30 compute-2 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[341838]: [NOTICE]   (341842) : haproxy version is 2.8.14-c23fe91
Jan 31 09:15:30 compute-2 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[341838]: [NOTICE]   (341842) : path to executable is /usr/sbin/haproxy
Jan 31 09:15:30 compute-2 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[341838]: [WARNING]  (341842) : Exiting Master process...
Jan 31 09:15:30 compute-2 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[341838]: [ALERT]    (341842) : Current worker (341844) exited with code 143 (Terminated)
Jan 31 09:15:30 compute-2 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[341838]: [WARNING]  (341842) : All workers exited. Exiting... (0)
Jan 31 09:15:30 compute-2 systemd[1]: libpod-c2e34254c1355d8790c4fd3b1bc350de805dd098a8e31a4afef9b34916032d8a.scope: Deactivated successfully.
Jan 31 09:15:30 compute-2 sudo[348230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:15:30 compute-2 sudo[348230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:30 compute-2 sudo[348230]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:30 compute-2 podman[348157]: 2026-01-31 09:15:30.718736327 +0000 UTC m=+0.307451090 container died c2e34254c1355d8790c4fd3b1bc350de805dd098a8e31a4afef9b34916032d8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:15:30 compute-2 nova_compute[226829]: 2026-01-31 09:15:30.826 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:30 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c2e34254c1355d8790c4fd3b1bc350de805dd098a8e31a4afef9b34916032d8a-userdata-shm.mount: Deactivated successfully.
Jan 31 09:15:30 compute-2 systemd[1]: var-lib-containers-storage-overlay-b42a2ab490bf57f8f5bde9c8c0047ccddd928e5b7bfc7eeb8c98e0cb09d8b5cd-merged.mount: Deactivated successfully.
Jan 31 09:15:30 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Jan 31 09:15:30 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2412532984' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Jan 31 09:15:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:31.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:15:31.206 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=115, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=114) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:15:31 compute-2 nova_compute[226829]: 2026-01-31 09:15:31.208 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:31 compute-2 podman[348157]: 2026-01-31 09:15:31.246641332 +0000 UTC m=+0.835356075 container cleanup c2e34254c1355d8790c4fd3b1bc350de805dd098a8e31a4afef9b34916032d8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 09:15:31 compute-2 systemd[1]: libpod-conmon-c2e34254c1355d8790c4fd3b1bc350de805dd098a8e31a4afef9b34916032d8a.scope: Deactivated successfully.
Jan 31 09:15:31 compute-2 podman[348340]: 2026-01-31 09:15:31.30283772 +0000 UTC m=+0.038590691 container remove c2e34254c1355d8790c4fd3b1bc350de805dd098a8e31a4afef9b34916032d8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:15:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:15:31.306 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[2e8d1543-bd67-45fa-8590-f3ec1b3a1566]: (4, ('Sat Jan 31 09:15:30 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 (c2e34254c1355d8790c4fd3b1bc350de805dd098a8e31a4afef9b34916032d8a)\nc2e34254c1355d8790c4fd3b1bc350de805dd098a8e31a4afef9b34916032d8a\nSat Jan 31 09:15:31 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 (c2e34254c1355d8790c4fd3b1bc350de805dd098a8e31a4afef9b34916032d8a)\nc2e34254c1355d8790c4fd3b1bc350de805dd098a8e31a4afef9b34916032d8a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:15:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:15:31.309 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9227b3f1-9223-4647-b8c9-560dfdb060bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:15:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:15:31.310 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c9ca540-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:15:31 compute-2 nova_compute[226829]: 2026-01-31 09:15:31.312 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:31 compute-2 kernel: tap5c9ca540-50: left promiscuous mode
Jan 31 09:15:31 compute-2 nova_compute[226829]: 2026-01-31 09:15:31.317 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:15:31.321 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f8d98e-9b75-48e0-a3e4-c3685a3dfa72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:15:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:15:31.342 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[bba3718c-805b-45ff-86d9-caf16bcc08fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:15:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:15:31.343 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[e049f8ae-278a-4584-9a61-31bba1926366]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:15:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:31.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:15:31.352 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[98d7ce46-680e-475c-99f4-6d1fec8010bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1114104, 'reachable_time': 23242, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348366, 'error': None, 'target': 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:15:31 compute-2 systemd[1]: run-netns-ovnmeta\x2d5c9ca540\x2d57e7\x2d412d\x2d8ef3\x2daf923db0a265.mount: Deactivated successfully.
Jan 31 09:15:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:15:31.358 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 09:15:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:15:31.358 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[92c787c0-eae7-4605-88a8-69049ae2ae0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:15:31 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:15:31.359 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:15:31 compute-2 nova_compute[226829]: 2026-01-31 09:15:31.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:15:31 compute-2 nova_compute[226829]: 2026-01-31 09:15:31.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:15:31 compute-2 nova_compute[226829]: 2026-01-31 09:15:31.523 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:15:31 compute-2 nova_compute[226829]: 2026-01-31 09:15:31.523 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:15:31 compute-2 nova_compute[226829]: 2026-01-31 09:15:31.523 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:15:31 compute-2 nova_compute[226829]: 2026-01-31 09:15:31.523 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:15:31 compute-2 nova_compute[226829]: 2026-01-31 09:15:31.524 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:15:32 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:15:32 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3354097032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:15:32 compute-2 nova_compute[226829]: 2026-01-31 09:15:32.036 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:15:32 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 31 09:15:32 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3737881186' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 09:15:32 compute-2 nova_compute[226829]: 2026-01-31 09:15:32.120 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000dd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 09:15:32 compute-2 nova_compute[226829]: 2026-01-31 09:15:32.121 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000dd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 09:15:32 compute-2 nova_compute[226829]: 2026-01-31 09:15:32.242 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:15:32 compute-2 nova_compute[226829]: 2026-01-31 09:15:32.244 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3802MB free_disk=20.98794174194336GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:15:32 compute-2 nova_compute[226829]: 2026-01-31 09:15:32.244 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:15:32 compute-2 nova_compute[226829]: 2026-01-31 09:15:32.244 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:15:32 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:15:32 compute-2 nova_compute[226829]: 2026-01-31 09:15:32.549 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance 71be684d-2233-462f-8268-a0bf7ea3f281 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 09:15:32 compute-2 nova_compute[226829]: 2026-01-31 09:15:32.549 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:15:32 compute-2 nova_compute[226829]: 2026-01-31 09:15:32.549 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:15:32 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Jan 31 09:15:32 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3060654967' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Jan 31 09:15:32 compute-2 nova_compute[226829]: 2026-01-31 09:15:32.620 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing inventories for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 09:15:32 compute-2 nova_compute[226829]: 2026-01-31 09:15:32.651 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating ProviderTree inventory for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 09:15:32 compute-2 nova_compute[226829]: 2026-01-31 09:15:32.652 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating inventory in ProviderTree for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 09:15:32 compute-2 nova_compute[226829]: 2026-01-31 09:15:32.676 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing aggregate associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 09:15:32 compute-2 nova_compute[226829]: 2026-01-31 09:15:32.704 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing trait associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, traits: COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 09:15:32 compute-2 nova_compute[226829]: 2026-01-31 09:15:32.740 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:15:32 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Jan 31 09:15:32 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2246476096' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:33 compute-2 nova_compute[226829]: 2026-01-31 09:15:33.030 226833 DEBUG nova.compute.manager [req-d8c569ec-6ad4-45b7-a374-8762757e44c2 req-9e82bfa3-c24c-4690-bf6a-ae62f44b70fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Received event network-vif-plugged-16bbc36d-c752-47fb-8a65-efe72e3acf06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:15:33 compute-2 nova_compute[226829]: 2026-01-31 09:15:33.031 226833 DEBUG oslo_concurrency.lockutils [req-d8c569ec-6ad4-45b7-a374-8762757e44c2 req-9e82bfa3-c24c-4690-bf6a-ae62f44b70fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "71be684d-2233-462f-8268-a0bf7ea3f281-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:15:33 compute-2 nova_compute[226829]: 2026-01-31 09:15:33.031 226833 DEBUG oslo_concurrency.lockutils [req-d8c569ec-6ad4-45b7-a374-8762757e44c2 req-9e82bfa3-c24c-4690-bf6a-ae62f44b70fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71be684d-2233-462f-8268-a0bf7ea3f281-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:15:33 compute-2 nova_compute[226829]: 2026-01-31 09:15:33.032 226833 DEBUG oslo_concurrency.lockutils [req-d8c569ec-6ad4-45b7-a374-8762757e44c2 req-9e82bfa3-c24c-4690-bf6a-ae62f44b70fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71be684d-2233-462f-8268-a0bf7ea3f281-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:15:33 compute-2 nova_compute[226829]: 2026-01-31 09:15:33.032 226833 DEBUG nova.compute.manager [req-d8c569ec-6ad4-45b7-a374-8762757e44c2 req-9e82bfa3-c24c-4690-bf6a-ae62f44b70fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] No waiting events found dispatching network-vif-plugged-16bbc36d-c752-47fb-8a65-efe72e3acf06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:15:33 compute-2 nova_compute[226829]: 2026-01-31 09:15:33.032 226833 WARNING nova.compute.manager [req-d8c569ec-6ad4-45b7-a374-8762757e44c2 req-9e82bfa3-c24c-4690-bf6a-ae62f44b70fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Received unexpected event network-vif-plugged-16bbc36d-c752-47fb-8a65-efe72e3acf06 for instance with vm_state active and task_state deleting.
Jan 31 09:15:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:15:33 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2791884155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:15:33 compute-2 nova_compute[226829]: 2026-01-31 09:15:33.183 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:15:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:15:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:33.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:15:33 compute-2 nova_compute[226829]: 2026-01-31 09:15:33.187 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:15:33 compute-2 nova_compute[226829]: 2026-01-31 09:15:33.210 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:15:33 compute-2 nova_compute[226829]: 2026-01-31 09:15:33.252 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:15:33 compute-2 nova_compute[226829]: 2026-01-31 09:15:33.253 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:15:33 compute-2 ceph-mon[77282]: from='client.47215 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:33 compute-2 ceph-mon[77282]: from='client.38361 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:33 compute-2 ceph-mon[77282]: from='client.52418 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:33 compute-2 ceph-mon[77282]: pgmap v4321: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 6.7 KiB/s wr, 0 op/s
Jan 31 09:15:33 compute-2 ceph-mon[77282]: from='client.47221 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1472881809' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 09:15:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4117241560' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Jan 31 09:15:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4126448503' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Jan 31 09:15:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2412532984' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Jan 31 09:15:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1374204682' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Jan 31 09:15:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:33.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:33 compute-2 nova_compute[226829]: 2026-01-31 09:15:33.836 226833 INFO nova.virt.libvirt.driver [None req-af8fd645-a38a-4597-bb0d-be3f1878438e ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Deleting instance files /var/lib/nova/instances/71be684d-2233-462f-8268-a0bf7ea3f281_del
Jan 31 09:15:33 compute-2 nova_compute[226829]: 2026-01-31 09:15:33.838 226833 INFO nova.virt.libvirt.driver [None req-af8fd645-a38a-4597-bb0d-be3f1878438e ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Deletion of /var/lib/nova/instances/71be684d-2233-462f-8268-a0bf7ea3f281_del complete
Jan 31 09:15:33 compute-2 nova_compute[226829]: 2026-01-31 09:15:33.901 226833 INFO nova.compute.manager [None req-af8fd645-a38a-4597-bb0d-be3f1878438e ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Took 5.01 seconds to destroy the instance on the hypervisor.
Jan 31 09:15:33 compute-2 nova_compute[226829]: 2026-01-31 09:15:33.902 226833 DEBUG oslo.service.loopingcall [None req-af8fd645-a38a-4597-bb0d-be3f1878438e ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 09:15:33 compute-2 nova_compute[226829]: 2026-01-31 09:15:33.902 226833 DEBUG nova.compute.manager [-] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 09:15:33 compute-2 nova_compute[226829]: 2026-01-31 09:15:33.902 226833 DEBUG nova.network.neutron [-] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 09:15:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 31 09:15:34 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1832749955' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:15:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2428046179' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Jan 31 09:15:34 compute-2 ceph-mon[77282]: from='client.47245 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:34 compute-2 ceph-mon[77282]: from='client.52442 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:34 compute-2 ceph-mon[77282]: from='client.47254 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:34 compute-2 ceph-mon[77282]: from='client.52448 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3354097032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:15:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3737881186' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 09:15:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1597342083' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 09:15:34 compute-2 ceph-mon[77282]: pgmap v4322: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 3.3 KiB/s wr, 0 op/s
Jan 31 09:15:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3060654967' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Jan 31 09:15:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2724833909' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Jan 31 09:15:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2246476096' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4018119954' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4200454611' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2791884155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:15:34 compute-2 ceph-mon[77282]: from='client.38415 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:34 compute-2 ceph-mon[77282]: from='client.52478 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:34 compute-2 ceph-mon[77282]: from='client.47293 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1158303249' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:15:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1832749955' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:15:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4247018269' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:15:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2205925257' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Jan 31 09:15:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Jan 31 09:15:34 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1105557315' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Jan 31 09:15:34 compute-2 nova_compute[226829]: 2026-01-31 09:15:34.907 226833 DEBUG nova.network.neutron [-] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:15:34 compute-2 nova_compute[226829]: 2026-01-31 09:15:34.941 226833 INFO nova.compute.manager [-] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Took 1.04 seconds to deallocate network for instance.
Jan 31 09:15:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Jan 31 09:15:34 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3283029549' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:35 compute-2 nova_compute[226829]: 2026-01-31 09:15:35.149 226833 DEBUG nova.compute.manager [req-c6a84e7d-510d-4b6b-86b0-620eec49fc20 req-afb3cc20-5a29-4ada-80cd-ff0e8b713078 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Received event network-vif-deleted-16bbc36d-c752-47fb-8a65-efe72e3acf06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:15:35 compute-2 nova_compute[226829]: 2026-01-31 09:15:35.184 226833 INFO nova.compute.manager [None req-af8fd645-a38a-4597-bb0d-be3f1878438e ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Took 0.24 seconds to detach 1 volumes for instance.
Jan 31 09:15:35 compute-2 nova_compute[226829]: 2026-01-31 09:15:35.185 226833 DEBUG nova.compute.manager [None req-af8fd645-a38a-4597-bb0d-be3f1878438e ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Deleting volume: 873ab52c-6ddb-4591-95be-4fd33ed9a07f _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Jan 31 09:15:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:35.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:35 compute-2 nova_compute[226829]: 2026-01-31 09:15:35.253 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:15:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Jan 31 09:15:35 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/766298541' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Jan 31 09:15:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:35.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:35 compute-2 nova_compute[226829]: 2026-01-31 09:15:35.504 226833 DEBUG oslo_concurrency.lockutils [None req-af8fd645-a38a-4597-bb0d-be3f1878438e ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:15:35 compute-2 nova_compute[226829]: 2026-01-31 09:15:35.504 226833 DEBUG oslo_concurrency.lockutils [None req-af8fd645-a38a-4597-bb0d-be3f1878438e ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:15:35 compute-2 nova_compute[226829]: 2026-01-31 09:15:35.593 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:35 compute-2 nova_compute[226829]: 2026-01-31 09:15:35.608 226833 DEBUG oslo_concurrency.processutils [None req-af8fd645-a38a-4597-bb0d-be3f1878438e ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:15:35 compute-2 ceph-mon[77282]: pgmap v4323: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 7.7 KiB/s rd, 3.3 KiB/s wr, 9 op/s
Jan 31 09:15:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1105557315' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Jan 31 09:15:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3223520930' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Jan 31 09:15:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2614475393' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3283029549' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3675144590' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3551598919' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Jan 31 09:15:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/766298541' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Jan 31 09:15:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/329207965' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Jan 31 09:15:35 compute-2 nova_compute[226829]: 2026-01-31 09:15:35.828 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:15:36 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1020822950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:15:36 compute-2 nova_compute[226829]: 2026-01-31 09:15:36.095 226833 DEBUG oslo_concurrency.processutils [None req-af8fd645-a38a-4597-bb0d-be3f1878438e ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:15:36 compute-2 nova_compute[226829]: 2026-01-31 09:15:36.100 226833 DEBUG nova.compute.provider_tree [None req-af8fd645-a38a-4597-bb0d-be3f1878438e ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:15:36 compute-2 nova_compute[226829]: 2026-01-31 09:15:36.143 226833 DEBUG nova.scheduler.client.report [None req-af8fd645-a38a-4597-bb0d-be3f1878438e ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:15:36 compute-2 nova_compute[226829]: 2026-01-31 09:15:36.175 226833 DEBUG oslo_concurrency.lockutils [None req-af8fd645-a38a-4597-bb0d-be3f1878438e ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:15:36 compute-2 nova_compute[226829]: 2026-01-31 09:15:36.220 226833 INFO nova.scheduler.client.report [None req-af8fd645-a38a-4597-bb0d-be3f1878438e ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Deleted allocations for instance 71be684d-2233-462f-8268-a0bf7ea3f281
Jan 31 09:15:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Jan 31 09:15:36 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3445546757' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Jan 31 09:15:36 compute-2 nova_compute[226829]: 2026-01-31 09:15:36.309 226833 DEBUG oslo_concurrency.lockutils [None req-af8fd645-a38a-4597-bb0d-be3f1878438e ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "71be684d-2233-462f-8268-a0bf7ea3f281" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.425s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:15:36 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:15:36.361 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '115'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:15:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Jan 31 09:15:36 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2421434504' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:36 compute-2 ceph-mon[77282]: from='client.38466 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:36 compute-2 ceph-mon[77282]: from='client.47335 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2160169199' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Jan 31 09:15:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4063524191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:15:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1020822950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:15:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3445546757' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Jan 31 09:15:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1343188959' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/251849106' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Jan 31 09:15:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2421434504' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:37.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:37.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:15:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Jan 31 09:15:37 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1106973295' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Jan 31 09:15:38 compute-2 systemd[1]: Starting Time & Date Service...
Jan 31 09:15:38 compute-2 virtqemud[226546]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 31 09:15:38 compute-2 systemd[1]: Started Time & Date Service.
Jan 31 09:15:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Jan 31 09:15:38 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1773245035' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:38 compute-2 ceph-mon[77282]: from='client.52523 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:38 compute-2 ceph-mon[77282]: pgmap v4324: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 8.7 KiB/s rd, 255 B/s wr, 10 op/s
Jan 31 09:15:38 compute-2 ceph-mon[77282]: from='client.38508 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3141514078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:15:38 compute-2 ceph-mon[77282]: from='client.47374 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3073995165' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/102671148' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Jan 31 09:15:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1106973295' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Jan 31 09:15:38 compute-2 ceph-mon[77282]: from='client.47383 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:38 compute-2 ceph-mon[77282]: from='client.38532 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:38 compute-2 ceph-mon[77282]: from='client.47392 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/17899084' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Jan 31 09:15:39 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2045968433' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:15:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:39.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:15:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:15:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:39.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:15:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 09:15:39 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/627123147' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:15:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 09:15:39 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/627123147' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:15:39 compute-2 nova_compute[226829]: 2026-01-31 09:15:39.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:15:39 compute-2 ceph-mon[77282]: from='client.38538 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-2 ceph-mon[77282]: from='client.52595 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-2 ceph-mon[77282]: pgmap v4325: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Jan 31 09:15:39 compute-2 ceph-mon[77282]: from='client.47398 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-2 ceph-mon[77282]: from='client.52601 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1773245035' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1006415844' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/730697016' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2045968433' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/190154118' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2699539177' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/627123147' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:15:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/627123147' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:15:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 31 09:15:40 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1441105675' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:15:40 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Jan 31 09:15:40 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3502716840' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Jan 31 09:15:40 compute-2 nova_compute[226829]: 2026-01-31 09:15:40.623 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:40 compute-2 ceph-mon[77282]: from='client.38562 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:40 compute-2 ceph-mon[77282]: from='client.47416 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:40 compute-2 ceph-mon[77282]: from='client.52625 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:40 compute-2 ceph-mon[77282]: from='client.38580 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1441105675' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:15:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4113027999' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:15:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3189746421' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:15:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2542830996' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Jan 31 09:15:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3502716840' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Jan 31 09:15:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/555442405' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Jan 31 09:15:40 compute-2 nova_compute[226829]: 2026-01-31 09:15:40.829 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:41.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:41.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 09:15:41 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1515151025' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 09:15:41 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e426 e426: 3 total, 3 up, 3 in
Jan 31 09:15:41 compute-2 ceph-mon[77282]: from='client.47431 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:41 compute-2 ceph-mon[77282]: from='client.52637 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:41 compute-2 ceph-mon[77282]: pgmap v4326: 305 pgs: 305 active+clean; 205 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 31 09:15:41 compute-2 ceph-mon[77282]: from='client.52673 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:41 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3260021682' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 09:15:41 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1515151025' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 09:15:41 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/334200316' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 09:15:42 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Jan 31 09:15:42 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/67754840' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Jan 31 09:15:42 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:15:43 compute-2 podman[349582]: 2026-01-31 09:15:43.187217694 +0000 UTC m=+0.072701658 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 09:15:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:15:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:43.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:15:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:43.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:43 compute-2 ceph-mon[77282]: from='client.47458 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:43 compute-2 ceph-mon[77282]: from='client.38619 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:43 compute-2 ceph-mon[77282]: from='client.52679 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:43 compute-2 ceph-mon[77282]: from='client.47464 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:43 compute-2 ceph-mon[77282]: from='client.38625 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:43 compute-2 ceph-mon[77282]: osdmap e426: 3 total, 3 up, 3 in
Jan 31 09:15:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4210385020' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Jan 31 09:15:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/67754840' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Jan 31 09:15:43 compute-2 ceph-mon[77282]: pgmap v4328: 305 pgs: 305 active+clean; 203 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 1.4 KiB/s wr, 43 op/s
Jan 31 09:15:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3385107240' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Jan 31 09:15:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:45.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:45.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:45 compute-2 nova_compute[226829]: 2026-01-31 09:15:45.521 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769850930.5206196, 71be684d-2233-462f-8268-a0bf7ea3f281 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:15:45 compute-2 nova_compute[226829]: 2026-01-31 09:15:45.522 226833 INFO nova.compute.manager [-] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] VM Stopped (Lifecycle Event)
Jan 31 09:15:45 compute-2 nova_compute[226829]: 2026-01-31 09:15:45.570 226833 DEBUG nova.compute.manager [None req-27e3238c-f886-4cfb-80f3-31081f0d6efb - - - - - -] [instance: 71be684d-2233-462f-8268-a0bf7ea3f281] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:15:45 compute-2 nova_compute[226829]: 2026-01-31 09:15:45.625 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:45 compute-2 ceph-mon[77282]: pgmap v4329: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 1.8 KiB/s wr, 39 op/s
Jan 31 09:15:45 compute-2 nova_compute[226829]: 2026-01-31 09:15:45.832 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:15:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:47.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:15:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:15:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:47.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:15:47 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:15:47 compute-2 ceph-mon[77282]: pgmap v4330: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 2.3 KiB/s wr, 45 op/s
Jan 31 09:15:47 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/7349103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:15:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 09:15:48 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4058064785' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:15:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 09:15:48 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4058064785' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:15:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/4058064785' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:15:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/4058064785' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:15:49 compute-2 podman[349613]: 2026-01-31 09:15:49.162230143 +0000 UTC m=+0.047775640 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 09:15:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:49.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:15:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:49.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:15:49 compute-2 ceph-mon[77282]: pgmap v4331: 305 pgs: 305 active+clean; 180 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 2.3 KiB/s wr, 45 op/s
Jan 31 09:15:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e427 e427: 3 total, 3 up, 3 in
Jan 31 09:15:50 compute-2 nova_compute[226829]: 2026-01-31 09:15:50.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:15:50 compute-2 nova_compute[226829]: 2026-01-31 09:15:50.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:15:50 compute-2 nova_compute[226829]: 2026-01-31 09:15:50.627 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:50 compute-2 sudo[349637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:15:50 compute-2 sudo[349637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:50 compute-2 sudo[349637]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:50 compute-2 sudo[349662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:15:50 compute-2 sudo[349662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:50 compute-2 nova_compute[226829]: 2026-01-31 09:15:50.834 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:50 compute-2 sudo[349662]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:51 compute-2 ceph-mon[77282]: osdmap e427: 3 total, 3 up, 3 in
Jan 31 09:15:51 compute-2 ceph-mon[77282]: pgmap v4333: 305 pgs: 305 active+clean; 163 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.9 KiB/s wr, 31 op/s
Jan 31 09:15:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:51.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:15:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:51.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:15:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:15:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e428 e428: 3 total, 3 up, 3 in
Jan 31 09:15:52 compute-2 ceph-mon[77282]: pgmap v4334: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 2.3 KiB/s wr, 44 op/s
Jan 31 09:15:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:15:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:53.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:15:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:53.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:54 compute-2 ceph-mon[77282]: osdmap e428: 3 total, 3 up, 3 in
Jan 31 09:15:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2514494485' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:15:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2514494485' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:15:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:55.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:55.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:55 compute-2 nova_compute[226829]: 2026-01-31 09:15:55.630 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:55 compute-2 nova_compute[226829]: 2026-01-31 09:15:55.835 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:55 compute-2 ceph-mon[77282]: pgmap v4336: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 2.1 KiB/s wr, 57 op/s
Jan 31 09:15:56 compute-2 ceph-mon[77282]: pgmap v4337: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 40 KiB/s rd, 1.6 KiB/s wr, 54 op/s
Jan 31 09:15:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:57.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:57.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:15:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:59.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:15:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:59.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:59 compute-2 ceph-mon[77282]: pgmap v4338: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 2.0 KiB/s wr, 45 op/s
Jan 31 09:15:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e429 e429: 3 total, 3 up, 3 in
Jan 31 09:16:00 compute-2 nova_compute[226829]: 2026-01-31 09:16:00.631 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:00 compute-2 nova_compute[226829]: 2026-01-31 09:16:00.837 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:01.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:01 compute-2 ceph-mon[77282]: osdmap e429: 3 total, 3 up, 3 in
Jan 31 09:16:01 compute-2 ceph-mon[77282]: pgmap v4340: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.1 KiB/s wr, 27 op/s
Jan 31 09:16:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:01.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #214. Immutable memtables: 0.
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:16:03.000602) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 137] Flushing memtable with next log file: 214
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850963000680, "job": 137, "event": "flush_started", "num_memtables": 1, "num_entries": 994, "num_deletes": 253, "total_data_size": 1661930, "memory_usage": 1683400, "flush_reason": "Manual Compaction"}
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 137] Level-0 flush table #215: started
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850963064225, "cf_name": "default", "job": 137, "event": "table_file_creation", "file_number": 215, "file_size": 1095439, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 102860, "largest_seqno": 103849, "table_properties": {"data_size": 1090209, "index_size": 2433, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 15100, "raw_average_key_size": 22, "raw_value_size": 1078572, "raw_average_value_size": 1600, "num_data_blocks": 104, "num_entries": 674, "num_filter_entries": 674, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850912, "oldest_key_time": 1769850912, "file_creation_time": 1769850962, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 215, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 137] Flush lasted 63694 microseconds, and 3588 cpu microseconds.
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:16:03.064290) [db/flush_job.cc:967] [default] [JOB 137] Level-0 flush table #215: 1095439 bytes OK
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:16:03.064311) [db/memtable_list.cc:519] [default] Level-0 commit table #215 started
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:16:03.116393) [db/memtable_list.cc:722] [default] Level-0 commit table #215: memtable #1 done
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:16:03.116435) EVENT_LOG_v1 {"time_micros": 1769850963116426, "job": 137, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:16:03.116458) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 137] Try to delete WAL files size 1656255, prev total WAL file size 1656255, number of live WAL files 2.
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000211.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:16:03.117125) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039303336' seq:72057594037927935, type:22 .. '7061786F730039323838' seq:0, type:0; will stop at (end)
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 138] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 137 Base level 0, inputs: [215(1069KB)], [213(13MB)]
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850963117219, "job": 138, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [215], "files_L6": [213], "score": -1, "input_data_size": 14880497, "oldest_snapshot_seqno": -1}
Jan 31 09:16:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:16:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:03.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:16:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:16:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:03.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 138] Generated table #216: 12176 keys, 12881811 bytes, temperature: kUnknown
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850963570255, "cf_name": "default", "job": 138, "event": "table_file_creation", "file_number": 216, "file_size": 12881811, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12807324, "index_size": 42950, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30469, "raw_key_size": 324199, "raw_average_key_size": 26, "raw_value_size": 12598888, "raw_average_value_size": 1034, "num_data_blocks": 1615, "num_entries": 12176, "num_filter_entries": 12176, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769850963, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 216, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:16:03.570554) [db/compaction/compaction_job.cc:1663] [default] [JOB 138] Compacted 1@0 + 1@6 files to L6 => 12881811 bytes
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:16:03.580921) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 32.8 rd, 28.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 13.1 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(25.3) write-amplify(11.8) OK, records in: 12698, records dropped: 522 output_compression: NoCompression
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:16:03.580954) EVENT_LOG_v1 {"time_micros": 1769850963580941, "job": 138, "event": "compaction_finished", "compaction_time_micros": 453175, "compaction_time_cpu_micros": 43894, "output_level": 6, "num_output_files": 1, "total_output_size": 12881811, "num_input_records": 12698, "num_output_records": 12176, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000215.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850963581408, "job": 138, "event": "table_file_deletion", "file_number": 215}
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000213.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850963582708, "job": 138, "event": "table_file_deletion", "file_number": 213}
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:16:03.116954) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:16:03.582756) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:16:03.582760) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:16:03.582762) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:16:03.582763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:16:03 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:16:03.582765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:16:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:16:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:05.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:16:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:05.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:05 compute-2 nova_compute[226829]: 2026-01-31 09:16:05.633 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:05 compute-2 nova_compute[226829]: 2026-01-31 09:16:05.840 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:05 compute-2 ceph-mon[77282]: pgmap v4341: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 992 B/s wr, 24 op/s
Jan 31 09:16:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:16:06.966 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:16:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:16:06.968 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:16:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:16:06.968 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:16:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:16:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:07.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:16:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:07.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:16:08 compute-2 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 09:16:08 compute-2 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 09:16:08 compute-2 nova_compute[226829]: 2026-01-31 09:16:08.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:16:08 compute-2 ceph-mon[77282]: pgmap v4342: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.4 KiB/s rd, 307 B/s wr, 8 op/s
Jan 31 09:16:08 compute-2 ceph-mon[77282]: pgmap v4343: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 KiB/s rd, 307 B/s wr, 5 op/s
Jan 31 09:16:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:09.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:09.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:09 compute-2 ceph-mon[77282]: pgmap v4344: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.1 KiB/s rd, 0 B/s wr, 8 op/s
Jan 31 09:16:10 compute-2 nova_compute[226829]: 2026-01-31 09:16:10.635 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:10 compute-2 nova_compute[226829]: 2026-01-31 09:16:10.843 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:10 compute-2 sudo[349705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:16:10 compute-2 sudo[349705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:10 compute-2 sudo[349705]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:10 compute-2 sudo[349730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:16:10 compute-2 sudo[349730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:10 compute-2 sudo[349730]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:16:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:11.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:16:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:11.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:12 compute-2 ceph-mon[77282]: pgmap v4345: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 447 KiB/s rd, 0 B/s wr, 5 op/s
Jan 31 09:16:12 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:16:12 compute-2 sudo[349756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:16:12 compute-2 sudo[349756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:12 compute-2 sudo[349756]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:13 compute-2 sudo[349781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:16:13 compute-2 sudo[349781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:13 compute-2 sudo[349781]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:13 compute-2 sudo[349806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:16:13 compute-2 sudo[349806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:13 compute-2 sudo[349806]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:13 compute-2 sudo[349831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:16:13 compute-2 sudo[349831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:13.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:13 compute-2 ceph-mon[77282]: pgmap v4346: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 0 B/s wr, 4 op/s
Jan 31 09:16:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:13.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:13 compute-2 sudo[349831]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:14 compute-2 podman[349887]: 2026-01-31 09:16:14.186640413 +0000 UTC m=+0.073503539 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller)
Jan 31 09:16:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:15.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:15.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:15 compute-2 nova_compute[226829]: 2026-01-31 09:16:15.640 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:15 compute-2 nova_compute[226829]: 2026-01-31 09:16:15.845 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:16 compute-2 ceph-mon[77282]: pgmap v4347: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 170 B/s wr, 7 op/s
Jan 31 09:16:16 compute-2 ovn_controller[133834]: 2026-01-31T09:16:16Z|00877|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 31 09:16:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:16:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:17.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:16:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:16:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:17.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:17 compute-2 ceph-mon[77282]: pgmap v4348: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 9 op/s
Jan 31 09:16:17 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:16:17 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:16:17 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:16:17 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:16:17 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:16:17 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:16:17 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:16:17 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:16:18 compute-2 ceph-mon[77282]: pgmap v4349: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 9 op/s
Jan 31 09:16:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:19.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:16:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:19.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:16:20 compute-2 podman[349915]: 2026-01-31 09:16:20.162617739 +0000 UTC m=+0.048139010 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 09:16:20 compute-2 nova_compute[226829]: 2026-01-31 09:16:20.643 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:20 compute-2 nova_compute[226829]: 2026-01-31 09:16:20.845 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:21.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:21.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:21 compute-2 ceph-mon[77282]: pgmap v4350: 305 pgs: 305 active+clean; 133 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 312 KiB/s wr, 9 op/s
Jan 31 09:16:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:16:22 compute-2 ceph-mon[77282]: pgmap v4351: 305 pgs: 305 active+clean; 155 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.4 MiB/s wr, 36 op/s
Jan 31 09:16:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:23.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:23.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:25.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:25.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:25 compute-2 sudo[341909]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:25 compute-2 sshd-session[341908]: Received disconnect from 192.168.122.10 port 52050:11: disconnected by user
Jan 31 09:16:25 compute-2 sshd-session[341908]: Disconnected from user zuul 192.168.122.10 port 52050
Jan 31 09:16:25 compute-2 sshd-session[341905]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:16:25 compute-2 systemd[1]: session-66.scope: Deactivated successfully.
Jan 31 09:16:25 compute-2 systemd[1]: session-66.scope: Consumed 2min 36.869s CPU time, 981.0M memory peak, read 392.2M from disk, written 368.7M to disk.
Jan 31 09:16:25 compute-2 systemd-logind[801]: Session 66 logged out. Waiting for processes to exit.
Jan 31 09:16:25 compute-2 nova_compute[226829]: 2026-01-31 09:16:25.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:16:25 compute-2 systemd-logind[801]: Removed session 66.
Jan 31 09:16:25 compute-2 sshd-session[349938]: Accepted publickey for zuul from 192.168.122.10 port 35100 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 09:16:25 compute-2 systemd-logind[801]: New session 67 of user zuul.
Jan 31 09:16:25 compute-2 systemd[1]: Started Session 67 of User zuul.
Jan 31 09:16:25 compute-2 sshd-session[349938]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:16:25 compute-2 nova_compute[226829]: 2026-01-31 09:16:25.645 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:25 compute-2 sudo[349942]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-2-2026-01-31-iywmhpf.tar.xz
Jan 31 09:16:25 compute-2 sudo[349942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:16:25 compute-2 sudo[349942]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:25 compute-2 sshd-session[349941]: Received disconnect from 192.168.122.10 port 35100:11: disconnected by user
Jan 31 09:16:25 compute-2 sshd-session[349941]: Disconnected from user zuul 192.168.122.10 port 35100
Jan 31 09:16:25 compute-2 sshd-session[349938]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:16:25 compute-2 systemd[1]: session-67.scope: Deactivated successfully.
Jan 31 09:16:25 compute-2 systemd-logind[801]: Session 67 logged out. Waiting for processes to exit.
Jan 31 09:16:25 compute-2 systemd-logind[801]: Removed session 67.
Jan 31 09:16:25 compute-2 nova_compute[226829]: 2026-01-31 09:16:25.855 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:25 compute-2 ceph-mon[77282]: pgmap v4352: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 31 09:16:25 compute-2 sshd-session[349967]: Accepted publickey for zuul from 192.168.122.10 port 35110 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 09:16:25 compute-2 systemd-logind[801]: New session 68 of user zuul.
Jan 31 09:16:25 compute-2 systemd[1]: Started Session 68 of User zuul.
Jan 31 09:16:25 compute-2 sshd-session[349967]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:16:26 compute-2 sudo[349971]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Jan 31 09:16:26 compute-2 sudo[349971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:16:26 compute-2 sudo[349971]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:26 compute-2 sshd-session[349970]: Received disconnect from 192.168.122.10 port 35110:11: disconnected by user
Jan 31 09:16:26 compute-2 sshd-session[349970]: Disconnected from user zuul 192.168.122.10 port 35110
Jan 31 09:16:26 compute-2 sshd-session[349967]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:16:26 compute-2 systemd[1]: session-68.scope: Deactivated successfully.
Jan 31 09:16:26 compute-2 systemd-logind[801]: Session 68 logged out. Waiting for processes to exit.
Jan 31 09:16:26 compute-2 systemd-logind[801]: Removed session 68.
Jan 31 09:16:26 compute-2 sudo[349996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:16:26 compute-2 sudo[349996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:26 compute-2 sudo[349996]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:26 compute-2 sudo[350021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:16:26 compute-2 sudo[350021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:26 compute-2 sudo[350021]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:26 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:16:26 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:16:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:16:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:27.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:16:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:16:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:27.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:27 compute-2 nova_compute[226829]: 2026-01-31 09:16:27.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:16:27 compute-2 ceph-mon[77282]: pgmap v4353: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 31 09:16:28 compute-2 ceph-mon[77282]: pgmap v4354: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 31 09:16:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:29.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:29.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/305204523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:16:29 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3815745731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:16:30 compute-2 nova_compute[226829]: 2026-01-31 09:16:30.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:16:30 compute-2 nova_compute[226829]: 2026-01-31 09:16:30.648 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:30 compute-2 nova_compute[226829]: 2026-01-31 09:16:30.858 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:30 compute-2 sudo[350049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:16:30 compute-2 sudo[350049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:30 compute-2 sudo[350049]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:31 compute-2 sudo[350074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:16:31 compute-2 sudo[350074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:31 compute-2 sudo[350074]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4027901827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:16:31 compute-2 ceph-mon[77282]: pgmap v4355: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 31 09:16:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:31.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:16:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:31.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:16:31 compute-2 nova_compute[226829]: 2026-01-31 09:16:31.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:16:31 compute-2 nova_compute[226829]: 2026-01-31 09:16:31.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:16:31 compute-2 nova_compute[226829]: 2026-01-31 09:16:31.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:16:31 compute-2 nova_compute[226829]: 2026-01-31 09:16:31.507 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:16:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3152122720' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:16:32 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:16:32 compute-2 nova_compute[226829]: 2026-01-31 09:16:32.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:16:32 compute-2 nova_compute[226829]: 2026-01-31 09:16:32.519 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:16:32 compute-2 nova_compute[226829]: 2026-01-31 09:16:32.519 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:16:32 compute-2 nova_compute[226829]: 2026-01-31 09:16:32.520 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:16:32 compute-2 nova_compute[226829]: 2026-01-31 09:16:32.520 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:16:32 compute-2 nova_compute[226829]: 2026-01-31 09:16:32.521 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:16:32 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:16:32 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1902503427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:16:32 compute-2 nova_compute[226829]: 2026-01-31 09:16:32.958 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:16:33 compute-2 nova_compute[226829]: 2026-01-31 09:16:33.099 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:16:33 compute-2 nova_compute[226829]: 2026-01-31 09:16:33.101 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3919MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:16:33 compute-2 nova_compute[226829]: 2026-01-31 09:16:33.101 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:16:33 compute-2 nova_compute[226829]: 2026-01-31 09:16:33.101 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:16:33 compute-2 nova_compute[226829]: 2026-01-31 09:16:33.214 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:16:33 compute-2 nova_compute[226829]: 2026-01-31 09:16:33.215 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:16:33 compute-2 nova_compute[226829]: 2026-01-31 09:16:33.253 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:16:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:33.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:33 compute-2 ceph-mon[77282]: pgmap v4356: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.5 MiB/s wr, 28 op/s
Jan 31 09:16:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1902503427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:16:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:33.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:33 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:16:33 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4121527407' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:16:33 compute-2 nova_compute[226829]: 2026-01-31 09:16:33.764 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:16:33 compute-2 nova_compute[226829]: 2026-01-31 09:16:33.770 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:16:33 compute-2 nova_compute[226829]: 2026-01-31 09:16:33.790 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:16:33 compute-2 nova_compute[226829]: 2026-01-31 09:16:33.824 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:16:33 compute-2 nova_compute[226829]: 2026-01-31 09:16:33.825 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:16:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4121527407' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:16:34 compute-2 nova_compute[226829]: 2026-01-31 09:16:34.825 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:16:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:35.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:35.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:35 compute-2 ceph-mon[77282]: pgmap v4357: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 422 KiB/s wr, 0 op/s
Jan 31 09:16:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3800926238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:16:35 compute-2 nova_compute[226829]: 2026-01-31 09:16:35.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:16:35 compute-2 nova_compute[226829]: 2026-01-31 09:16:35.650 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:35 compute-2 nova_compute[226829]: 2026-01-31 09:16:35.858 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2599360944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:16:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:16:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:37.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:16:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:16:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:37.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:37 compute-2 ceph-mon[77282]: pgmap v4358: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:16:37 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2766357343' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:16:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:39.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:39.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:39 compute-2 ceph-mon[77282]: pgmap v4359: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:16:40 compute-2 nova_compute[226829]: 2026-01-31 09:16:40.653 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:40 compute-2 nova_compute[226829]: 2026-01-31 09:16:40.859 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:41.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:41.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:41 compute-2 nova_compute[226829]: 2026-01-31 09:16:41.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:16:41 compute-2 ceph-mon[77282]: pgmap v4360: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 KiB/s rd, 12 KiB/s wr, 4 op/s
Jan 31 09:16:42 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:16:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:16:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:43.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:16:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:43.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:43 compute-2 ceph-mon[77282]: pgmap v4361: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.2 KiB/s rd, 12 KiB/s wr, 7 op/s
Jan 31 09:16:45 compute-2 podman[350150]: 2026-01-31 09:16:45.265941403 +0000 UTC m=+0.148228982 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:16:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:16:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:45.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:16:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:45.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:45 compute-2 nova_compute[226829]: 2026-01-31 09:16:45.654 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:45 compute-2 ceph-mon[77282]: pgmap v4362: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 869 KiB/s rd, 12 KiB/s wr, 37 op/s
Jan 31 09:16:45 compute-2 nova_compute[226829]: 2026-01-31 09:16:45.861 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:47 compute-2 ceph-mon[77282]: pgmap v4363: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:16:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:47.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:47 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:16:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:47.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:49.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:49 compute-2 ceph-mon[77282]: pgmap v4364: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:16:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:49.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:50 compute-2 nova_compute[226829]: 2026-01-31 09:16:50.656 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:50 compute-2 nova_compute[226829]: 2026-01-31 09:16:50.864 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:16:51.006 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=116, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=115) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:16:51 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:16:51.007 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:16:51 compute-2 nova_compute[226829]: 2026-01-31 09:16:51.007 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:51 compute-2 sudo[350182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:16:51 compute-2 sudo[350182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:51 compute-2 sudo[350182]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:51 compute-2 podman[350181]: 2026-01-31 09:16:51.176974974 +0000 UTC m=+0.058644266 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 09:16:51 compute-2 sudo[350224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:16:51 compute-2 sudo[350224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:51 compute-2 sudo[350224]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:51.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:16:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:51.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:16:51 compute-2 ceph-mon[77282]: pgmap v4365: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:16:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:16:52 compute-2 nova_compute[226829]: 2026-01-31 09:16:52.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:16:52 compute-2 nova_compute[226829]: 2026-01-31 09:16:52.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:16:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:53.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:53.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:54 compute-2 ceph-mon[77282]: pgmap v4366: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 69 op/s
Jan 31 09:16:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3871755717' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:16:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/3871755717' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:16:54 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:16:54.010 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '116'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:16:55 compute-2 ceph-mon[77282]: pgmap v4367: 305 pgs: 305 active+clean; 177 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 79 op/s
Jan 31 09:16:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:55.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:55.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:55 compute-2 nova_compute[226829]: 2026-01-31 09:16:55.658 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:55 compute-2 nova_compute[226829]: 2026-01-31 09:16:55.868 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:57.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:57 compute-2 ceph-mon[77282]: pgmap v4368: 305 pgs: 305 active+clean; 180 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.5 MiB/s wr, 60 op/s
Jan 31 09:16:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:16:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:57.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:16:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:16:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:59.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 09:16:59 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.0 total, 600.0 interval
                                           Cumulative writes: 21K writes, 104K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.21 GB, 0.03 MB/s
                                           Cumulative WAL: 21K writes, 21K syncs, 1.00 writes per sync, written: 0.21 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1572 writes, 8049 keys, 1572 commit groups, 1.0 writes per commit group, ingest: 16.40 MB, 0.03 MB/s
                                           Interval WAL: 1572 writes, 1572 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     66.7      1.94              0.37        69    0.028       0      0       0.0       0.0
                                             L6      1/0   12.29 MB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   5.7    131.8    113.6      6.44              2.01        68    0.095    581K    36K       0.0       0.0
                                            Sum      1/0   12.29 MB   0.0      0.8     0.1      0.7       0.8      0.1       0.0   6.7    101.3    102.8      8.38              2.38       137    0.061    581K    36K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.2     38.5     39.3      2.24              0.22        12    0.186     73K   3110       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   0.0    131.8    113.6      6.44              2.01        68    0.095    581K    36K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     66.8      1.94              0.37        68    0.028       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 7800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.126, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.84 GB write, 0.11 MB/s write, 0.83 GB read, 0.11 MB/s read, 8.4 seconds
                                           Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.08 GB read, 0.14 MB/s read, 2.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559ef3d1f1f0#2 capacity: 304.00 MB usage: 92.13 MB table_size: 0 occupancy: 18446744073709551615 collections: 14 last_copies: 0 last_secs: 0.000693 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(5721,88.06 MB,28.9678%) FilterBlock(137,1.57 MB,0.514839%) IndexBlock(137,2.51 MB,0.824281%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 09:16:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:16:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:16:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:59.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:16:59 compute-2 ceph-mon[77282]: pgmap v4369: 305 pgs: 305 active+clean; 186 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 210 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 31 09:17:00 compute-2 nova_compute[226829]: 2026-01-31 09:17:00.661 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:00 compute-2 nova_compute[226829]: 2026-01-31 09:17:00.870 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:17:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:01.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:17:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:17:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:01.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:17:01 compute-2 ceph-mon[77282]: pgmap v4370: 305 pgs: 305 active+clean; 191 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 284 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Jan 31 09:17:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:17:03 compute-2 ceph-mon[77282]: pgmap v4371: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 309 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 09:17:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:03.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:03.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:17:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:05.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:17:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:05.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:05 compute-2 ceph-mon[77282]: pgmap v4372: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 309 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 09:17:05 compute-2 nova_compute[226829]: 2026-01-31 09:17:05.663 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:05 compute-2 nova_compute[226829]: 2026-01-31 09:17:05.873 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:17:06.966 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:17:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:17:06.967 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:17:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:17:06.967 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:17:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:07.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:17:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:07.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:07 compute-2 ceph-mon[77282]: pgmap v4373: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 290 KiB/s rd, 1.1 MiB/s wr, 51 op/s
Jan 31 09:17:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:17:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:09.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:17:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:17:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:09.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:17:09 compute-2 ceph-mon[77282]: pgmap v4374: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 270 KiB/s rd, 610 KiB/s wr, 39 op/s
Jan 31 09:17:10 compute-2 nova_compute[226829]: 2026-01-31 09:17:10.665 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:10 compute-2 nova_compute[226829]: 2026-01-31 09:17:10.875 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:11 compute-2 sudo[350260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:17:11 compute-2 sudo[350260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:11 compute-2 sudo[350260]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:11 compute-2 sudo[350285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:17:11 compute-2 sudo[350285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:11 compute-2 sudo[350285]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:17:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:11.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:17:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:11.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:11 compute-2 ceph-mon[77282]: pgmap v4375: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 105 KiB/s rd, 330 KiB/s wr, 34 op/s
Jan 31 09:17:12 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:17:13 compute-2 ceph-mon[77282]: pgmap v4376: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 92 KiB/s wr, 21 op/s
Jan 31 09:17:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:13.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:17:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:13.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:17:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:15.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:17:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:15.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:17:15 compute-2 ceph-mon[77282]: pgmap v4377: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 30 KiB/s wr, 16 op/s
Jan 31 09:17:15 compute-2 nova_compute[226829]: 2026-01-31 09:17:15.668 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:15 compute-2 nova_compute[226829]: 2026-01-31 09:17:15.910 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:16 compute-2 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-2[77982]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 09:17:16 compute-2 podman[350312]: 2026-01-31 09:17:16.17988006 +0000 UTC m=+0.063433745 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 09:17:17 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/946098308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:17:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:17:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:17.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:17:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:17:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:17.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:18 compute-2 ceph-mon[77282]: pgmap v4378: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 30 KiB/s wr, 16 op/s
Jan 31 09:17:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:17:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:19.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:17:19 compute-2 ceph-mon[77282]: pgmap v4379: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 18 KiB/s wr, 16 op/s
Jan 31 09:17:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:19.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:20 compute-2 nova_compute[226829]: 2026-01-31 09:17:20.670 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:20 compute-2 nova_compute[226829]: 2026-01-31 09:17:20.912 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:21.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:17:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:21.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:17:21 compute-2 ceph-mon[77282]: pgmap v4380: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 17 KiB/s wr, 15 op/s
Jan 31 09:17:22 compute-2 podman[350341]: 2026-01-31 09:17:22.147904339 +0000 UTC m=+0.037630995 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 09:17:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:17:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:17:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:23.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:17:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:17:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:23.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:17:23 compute-2 ceph-mon[77282]: pgmap v4381: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.7 KiB/s rd, 8.2 KiB/s wr, 8 op/s
Jan 31 09:17:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:25.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:17:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:25.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:17:25 compute-2 ceph-mon[77282]: pgmap v4382: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.0 KiB/s rd, 341 B/s wr, 6 op/s
Jan 31 09:17:25 compute-2 nova_compute[226829]: 2026-01-31 09:17:25.672 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:25 compute-2 nova_compute[226829]: 2026-01-31 09:17:25.915 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:26 compute-2 sudo[350364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:17:26 compute-2 sudo[350364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:26 compute-2 sudo[350364]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:26 compute-2 sudo[350389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:17:26 compute-2 sudo[350389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:26 compute-2 sudo[350389]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:26 compute-2 sudo[350414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:17:26 compute-2 sudo[350414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:26 compute-2 sudo[350414]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:26 compute-2 sudo[350439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:17:26 compute-2 sudo[350439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:26 compute-2 sudo[350439]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:27.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:17:27 compute-2 nova_compute[226829]: 2026-01-31 09:17:27.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:17:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:17:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:27.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:17:27 compute-2 ceph-mon[77282]: pgmap v4383: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 31 09:17:27 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3735691117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:17:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 09:17:28 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.1 total, 600.0 interval
                                           Cumulative writes: 81K writes, 333K keys, 81K commit groups, 1.0 writes per commit group, ingest: 0.34 GB, 0.04 MB/s
                                           Cumulative WAL: 81K writes, 29K syncs, 2.73 writes per sync, written: 0.34 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1949 writes, 6043 keys, 1949 commit groups, 1.0 writes per commit group, ingest: 5.98 MB, 0.01 MB/s
                                           Interval WAL: 1949 writes, 827 syncs, 2.36 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 09:17:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:17:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:17:29 compute-2 ceph-mon[77282]: pgmap v4384: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:17:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:17:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:17:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:17:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:17:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:17:29 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:17:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:29.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:29 compute-2 nova_compute[226829]: 2026-01-31 09:17:29.384 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:17:29.383 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=117, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=116) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:17:29 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:17:29.385 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:17:29 compute-2 nova_compute[226829]: 2026-01-31 09:17:29.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:17:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:29.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/695105816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:17:30 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/35238300' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:17:30 compute-2 nova_compute[226829]: 2026-01-31 09:17:30.675 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:30 compute-2 nova_compute[226829]: 2026-01-31 09:17:30.918 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:31 compute-2 ceph-mon[77282]: pgmap v4385: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:17:31 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2282457959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:17:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:17:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:31.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:17:31 compute-2 sudo[350497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:17:31 compute-2 sudo[350497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:31 compute-2 sudo[350497]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:31 compute-2 sudo[350522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:17:31 compute-2 sudo[350522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:31 compute-2 sudo[350522]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:31 compute-2 nova_compute[226829]: 2026-01-31 09:17:31.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:17:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:31.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:32 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:17:32.386 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '117'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:17:32 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:17:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:33.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:33 compute-2 ceph-mon[77282]: pgmap v4386: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:17:33 compute-2 nova_compute[226829]: 2026-01-31 09:17:33.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:17:33 compute-2 nova_compute[226829]: 2026-01-31 09:17:33.487 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:17:33 compute-2 nova_compute[226829]: 2026-01-31 09:17:33.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:17:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:33.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:33 compute-2 nova_compute[226829]: 2026-01-31 09:17:33.531 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:17:33 compute-2 nova_compute[226829]: 2026-01-31 09:17:33.532 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:17:33 compute-2 nova_compute[226829]: 2026-01-31 09:17:33.599 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:17:33 compute-2 nova_compute[226829]: 2026-01-31 09:17:33.599 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:17:33 compute-2 nova_compute[226829]: 2026-01-31 09:17:33.600 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:17:33 compute-2 nova_compute[226829]: 2026-01-31 09:17:33.600 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:17:33 compute-2 nova_compute[226829]: 2026-01-31 09:17:33.600 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:17:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:17:34 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1297135005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:17:34 compute-2 nova_compute[226829]: 2026-01-31 09:17:34.030 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:17:34 compute-2 nova_compute[226829]: 2026-01-31 09:17:34.149 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:17:34 compute-2 nova_compute[226829]: 2026-01-31 09:17:34.150 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3963MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:17:34 compute-2 nova_compute[226829]: 2026-01-31 09:17:34.151 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:17:34 compute-2 nova_compute[226829]: 2026-01-31 09:17:34.151 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:17:34 compute-2 nova_compute[226829]: 2026-01-31 09:17:34.328 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:17:34 compute-2 nova_compute[226829]: 2026-01-31 09:17:34.329 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:17:34 compute-2 nova_compute[226829]: 2026-01-31 09:17:34.368 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:17:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1297135005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:17:34 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:17:34 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2126205365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:17:34 compute-2 nova_compute[226829]: 2026-01-31 09:17:34.793 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:17:34 compute-2 nova_compute[226829]: 2026-01-31 09:17:34.798 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:17:34 compute-2 nova_compute[226829]: 2026-01-31 09:17:34.825 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:17:34 compute-2 nova_compute[226829]: 2026-01-31 09:17:34.826 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:17:34 compute-2 nova_compute[226829]: 2026-01-31 09:17:34.826 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:17:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:35.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:35.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:35 compute-2 nova_compute[226829]: 2026-01-31 09:17:35.676 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:35 compute-2 nova_compute[226829]: 2026-01-31 09:17:35.919 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:36 compute-2 ceph-mon[77282]: pgmap v4387: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:17:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2126205365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:17:36 compute-2 nova_compute[226829]: 2026-01-31 09:17:36.783 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:17:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:37.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:17:37 compute-2 nova_compute[226829]: 2026-01-31 09:17:37.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:17:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:17:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:37.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:17:37 compute-2 ceph-mon[77282]: pgmap v4388: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:17:37 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:17:37 compute-2 sudo[350595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:17:37 compute-2 sudo[350595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:37 compute-2 sudo[350595]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:37 compute-2 sudo[350620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:17:37 compute-2 sudo[350620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:37 compute-2 sudo[350620]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:17:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:39.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:17:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:17:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:39.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:17:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3843149169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:17:39 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:17:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3025134325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:17:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1095028618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:17:40 compute-2 nova_compute[226829]: 2026-01-31 09:17:40.678 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:40 compute-2 nova_compute[226829]: 2026-01-31 09:17:40.920 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:40 compute-2 ceph-mon[77282]: pgmap v4389: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:17:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:41.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:41 compute-2 nova_compute[226829]: 2026-01-31 09:17:41.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:17:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:17:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:41.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:17:42 compute-2 ceph-mon[77282]: pgmap v4390: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:17:42 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:17:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:43.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:43.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:44 compute-2 ceph-mon[77282]: pgmap v4391: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:17:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:17:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:45.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:17:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:45.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:45 compute-2 nova_compute[226829]: 2026-01-31 09:17:45.680 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:45 compute-2 nova_compute[226829]: 2026-01-31 09:17:45.922 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:46 compute-2 ceph-mon[77282]: pgmap v4392: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.5 KiB/s rd, 12 KiB/s wr, 4 op/s
Jan 31 09:17:47 compute-2 ceph-mon[77282]: pgmap v4393: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 13 KiB/s wr, 43 op/s
Jan 31 09:17:47 compute-2 podman[350650]: 2026-01-31 09:17:47.212211925 +0000 UTC m=+0.094666155 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 09:17:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:47.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:47 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:17:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:47.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:49.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:49.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:50 compute-2 ceph-mon[77282]: pgmap v4394: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 13 KiB/s wr, 60 op/s
Jan 31 09:17:50 compute-2 nova_compute[226829]: 2026-01-31 09:17:50.682 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:50 compute-2 nova_compute[226829]: 2026-01-31 09:17:50.924 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:17:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:51.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:17:51 compute-2 sudo[350679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:17:51 compute-2 sudo[350679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:51 compute-2 sudo[350679]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:17:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:51.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:17:51 compute-2 sudo[350704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:17:51 compute-2 sudo[350704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:51 compute-2 sudo[350704]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:51 compute-2 ceph-mon[77282]: pgmap v4395: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Jan 31 09:17:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:17:53 compute-2 podman[350730]: 2026-01-31 09:17:53.17781455 +0000 UTC m=+0.065385200 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 09:17:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:53.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:53.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:53 compute-2 ceph-mon[77282]: pgmap v4396: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Jan 31 09:17:54 compute-2 nova_compute[226829]: 2026-01-31 09:17:54.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:17:54 compute-2 nova_compute[226829]: 2026-01-31 09:17:54.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:17:55 compute-2 ceph-mon[77282]: pgmap v4397: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Jan 31 09:17:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:17:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:55.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:17:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:17:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:55.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:17:55 compute-2 nova_compute[226829]: 2026-01-31 09:17:55.684 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:55 compute-2 nova_compute[226829]: 2026-01-31 09:17:55.926 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:17:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:57.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:17:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:17:57 compute-2 ceph-mon[77282]: pgmap v4398: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 341 B/s wr, 74 op/s
Jan 31 09:17:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:57.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:59.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:17:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:59.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:00 compute-2 ceph-mon[77282]: pgmap v4399: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 682 B/s wr, 46 op/s
Jan 31 09:18:00 compute-2 nova_compute[226829]: 2026-01-31 09:18:00.686 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:00 compute-2 nova_compute[226829]: 2026-01-31 09:18:00.928 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:18:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:01.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:18:01 compute-2 ceph-mon[77282]: pgmap v4400: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 829 KiB/s rd, 11 KiB/s wr, 46 op/s
Jan 31 09:18:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:01.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:18:03 compute-2 ceph-mon[77282]: pgmap v4401: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 12 KiB/s wr, 44 op/s
Jan 31 09:18:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:03.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:03.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:05 compute-2 ceph-mon[77282]: pgmap v4402: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 26 KiB/s wr, 45 op/s
Jan 31 09:18:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:05.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:05.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:05 compute-2 nova_compute[226829]: 2026-01-31 09:18:05.688 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:05 compute-2 nova_compute[226829]: 2026-01-31 09:18:05.930 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:06.968 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:06.969 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:18:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:06.969 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:18:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:18:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:07.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:18:07 compute-2 ceph-mon[77282]: pgmap v4403: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 26 KiB/s wr, 45 op/s
Jan 31 09:18:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:18:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:07.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:09.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:09.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:09 compute-2 ceph-mon[77282]: pgmap v4404: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 497 KiB/s rd, 26 KiB/s wr, 40 op/s
Jan 31 09:18:10 compute-2 nova_compute[226829]: 2026-01-31 09:18:10.690 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:10 compute-2 nova_compute[226829]: 2026-01-31 09:18:10.931 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:11.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:11 compute-2 nova_compute[226829]: 2026-01-31 09:18:11.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:18:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:11.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:11 compute-2 sudo[350759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:18:11 compute-2 sudo[350759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:11 compute-2 sudo[350759]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:11 compute-2 sudo[350784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:18:11 compute-2 sudo[350784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:11 compute-2 sudo[350784]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:11 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e430 e430: 3 total, 3 up, 3 in
Jan 31 09:18:11 compute-2 ceph-mon[77282]: pgmap v4405: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 271 KiB/s rd, 29 KiB/s wr, 28 op/s
Jan 31 09:18:12 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:18:13 compute-2 ceph-mon[77282]: osdmap e430: 3 total, 3 up, 3 in
Jan 31 09:18:13 compute-2 ceph-mon[77282]: pgmap v4407: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 61 KiB/s rd, 20 KiB/s wr, 5 op/s
Jan 31 09:18:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:13.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:13.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:15.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:15 compute-2 ceph-mon[77282]: pgmap v4408: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 255 KiB/s rd, 6.8 KiB/s wr, 10 op/s
Jan 31 09:18:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:15.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:15 compute-2 nova_compute[226829]: 2026-01-31 09:18:15.691 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:15 compute-2 nova_compute[226829]: 2026-01-31 09:18:15.933 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:17.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:18:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:17.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:17 compute-2 ceph-mon[77282]: pgmap v4409: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 271 KiB/s rd, 8.6 KiB/s wr, 30 op/s
Jan 31 09:18:18 compute-2 podman[350812]: 2026-01-31 09:18:18.178753649 +0000 UTC m=+0.064025112 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 09:18:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:19.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:19.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:19 compute-2 ceph-mon[77282]: pgmap v4410: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 271 KiB/s rd, 9.2 KiB/s wr, 31 op/s
Jan 31 09:18:20 compute-2 nova_compute[226829]: 2026-01-31 09:18:20.692 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:20 compute-2 nova_compute[226829]: 2026-01-31 09:18:20.935 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:21 compute-2 ceph-mon[77282]: pgmap v4411: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 271 KiB/s rd, 5.2 KiB/s wr, 30 op/s
Jan 31 09:18:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:18:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:21.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:18:21 compute-2 nova_compute[226829]: 2026-01-31 09:18:21.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:18:21 compute-2 nova_compute[226829]: 2026-01-31 09:18:21.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 09:18:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:21.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:21 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 31 09:18:22 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 31 09:18:22 compute-2 radosgw[83985]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Jan 31 09:18:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:18:22 compute-2 nova_compute[226829]: 2026-01-31 09:18:22.826 226833 DEBUG oslo_concurrency.lockutils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "aa28bdce-0eba-4f00-a4f5-954f6254edd9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:18:22 compute-2 nova_compute[226829]: 2026-01-31 09:18:22.827 226833 DEBUG oslo_concurrency.lockutils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "aa28bdce-0eba-4f00-a4f5-954f6254edd9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:18:22 compute-2 nova_compute[226829]: 2026-01-31 09:18:22.866 226833 DEBUG nova.compute.manager [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 09:18:23 compute-2 nova_compute[226829]: 2026-01-31 09:18:23.096 226833 DEBUG oslo_concurrency.lockutils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:18:23 compute-2 nova_compute[226829]: 2026-01-31 09:18:23.097 226833 DEBUG oslo_concurrency.lockutils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:18:23 compute-2 nova_compute[226829]: 2026-01-31 09:18:23.107 226833 DEBUG nova.virt.hardware [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 09:18:23 compute-2 nova_compute[226829]: 2026-01-31 09:18:23.107 226833 INFO nova.compute.claims [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Claim successful on node compute-2.ctlplane.example.com
Jan 31 09:18:23 compute-2 ovn_controller[133834]: 2026-01-31T09:18:23Z|00878|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Jan 31 09:18:23 compute-2 nova_compute[226829]: 2026-01-31 09:18:23.366 226833 DEBUG oslo_concurrency.processutils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:18:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:23.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:23 compute-2 ceph-mon[77282]: pgmap v4412: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 201 KiB/s rd, 5.0 KiB/s wr, 25 op/s
Jan 31 09:18:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:18:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:23.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:18:23 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:18:23 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3716107651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:18:23 compute-2 nova_compute[226829]: 2026-01-31 09:18:23.805 226833 DEBUG oslo_concurrency.processutils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:18:23 compute-2 nova_compute[226829]: 2026-01-31 09:18:23.810 226833 DEBUG nova.compute.provider_tree [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:18:23 compute-2 nova_compute[226829]: 2026-01-31 09:18:23.848 226833 DEBUG nova.scheduler.client.report [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:18:23 compute-2 nova_compute[226829]: 2026-01-31 09:18:23.895 226833 DEBUG oslo_concurrency.lockutils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:18:23 compute-2 nova_compute[226829]: 2026-01-31 09:18:23.895 226833 DEBUG nova.compute.manager [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 09:18:23 compute-2 nova_compute[226829]: 2026-01-31 09:18:23.978 226833 DEBUG nova.compute.manager [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 09:18:23 compute-2 nova_compute[226829]: 2026-01-31 09:18:23.978 226833 DEBUG nova.network.neutron [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 09:18:24 compute-2 nova_compute[226829]: 2026-01-31 09:18:24.008 226833 INFO nova.virt.libvirt.driver [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 09:18:24 compute-2 nova_compute[226829]: 2026-01-31 09:18:24.039 226833 DEBUG nova.compute.manager [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 09:18:24 compute-2 nova_compute[226829]: 2026-01-31 09:18:24.115 226833 INFO nova.virt.block_device [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Booting with volume b13e384a-f308-411e-add5-2e1bc75b4439 at /dev/vda
Jan 31 09:18:24 compute-2 podman[350865]: 2026-01-31 09:18:24.179802737 +0000 UTC m=+0.065448760 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 09:18:24 compute-2 nova_compute[226829]: 2026-01-31 09:18:24.267 226833 DEBUG nova.policy [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecd39871d7fd438f88b36601f25d6eb6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '98d10c0290e340a08e9d1726bf0066bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 09:18:24 compute-2 nova_compute[226829]: 2026-01-31 09:18:24.357 226833 DEBUG os_brick.utils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 09:18:24 compute-2 nova_compute[226829]: 2026-01-31 09:18:24.359 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:18:24 compute-2 nova_compute[226829]: 2026-01-31 09:18:24.369 236868 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:18:24 compute-2 nova_compute[226829]: 2026-01-31 09:18:24.370 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[ea976b6a-bbfd-45fb-ad61-28885b349619]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:18:24 compute-2 nova_compute[226829]: 2026-01-31 09:18:24.371 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:18:24 compute-2 nova_compute[226829]: 2026-01-31 09:18:24.376 236868 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:18:24 compute-2 nova_compute[226829]: 2026-01-31 09:18:24.377 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[c55b30c7-abdc-438e-9567-9574f000580a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:70a4e945afb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:18:24 compute-2 nova_compute[226829]: 2026-01-31 09:18:24.378 236868 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:18:24 compute-2 nova_compute[226829]: 2026-01-31 09:18:24.384 236868 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:18:24 compute-2 nova_compute[226829]: 2026-01-31 09:18:24.384 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[6abc1d7a-229b-471a-9f45-493021f04447]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:18:24 compute-2 nova_compute[226829]: 2026-01-31 09:18:24.386 236868 DEBUG oslo.privsep.daemon [-] privsep: reply[f29bcb96-bdc0-46f5-91f9-5f3082aa290e]: (4, 'd14f084b-ec77-4fba-801f-103494d34b3a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:18:24 compute-2 nova_compute[226829]: 2026-01-31 09:18:24.387 226833 DEBUG oslo_concurrency.processutils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:18:24 compute-2 nova_compute[226829]: 2026-01-31 09:18:24.413 226833 DEBUG oslo_concurrency.processutils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:18:24 compute-2 nova_compute[226829]: 2026-01-31 09:18:24.415 226833 DEBUG os_brick.initiator.connectors.lightos [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 09:18:24 compute-2 nova_compute[226829]: 2026-01-31 09:18:24.416 226833 DEBUG os_brick.initiator.connectors.lightos [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 09:18:24 compute-2 nova_compute[226829]: 2026-01-31 09:18:24.416 226833 DEBUG os_brick.initiator.connectors.lightos [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 09:18:24 compute-2 nova_compute[226829]: 2026-01-31 09:18:24.417 226833 DEBUG os_brick.utils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] <== get_connector_properties: return (59ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:70a4e945afb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'd14f084b-ec77-4fba-801f-103494d34b3a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 09:18:24 compute-2 nova_compute[226829]: 2026-01-31 09:18:24.417 226833 DEBUG nova.virt.block_device [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Updating existing volume attachment record: b5702e65-231f-4ca6-87f6-2e9fa6ea2d08 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 09:18:24 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3716107651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:18:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:18:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:25.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:18:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:25.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:25 compute-2 nova_compute[226829]: 2026-01-31 09:18:25.694 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:25 compute-2 nova_compute[226829]: 2026-01-31 09:18:25.938 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:26 compute-2 ceph-mon[77282]: pgmap v4413: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 184 KiB/s rd, 4.3 KiB/s wr, 37 op/s
Jan 31 09:18:26 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1731910585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:18:26 compute-2 nova_compute[226829]: 2026-01-31 09:18:26.178 226833 DEBUG nova.compute.manager [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 09:18:26 compute-2 nova_compute[226829]: 2026-01-31 09:18:26.180 226833 DEBUG nova.virt.libvirt.driver [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 09:18:26 compute-2 nova_compute[226829]: 2026-01-31 09:18:26.180 226833 INFO nova.virt.libvirt.driver [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Creating image(s)
Jan 31 09:18:26 compute-2 nova_compute[226829]: 2026-01-31 09:18:26.181 226833 DEBUG nova.virt.libvirt.driver [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 09:18:26 compute-2 nova_compute[226829]: 2026-01-31 09:18:26.181 226833 DEBUG nova.virt.libvirt.driver [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Ensure instance console log exists: /var/lib/nova/instances/aa28bdce-0eba-4f00-a4f5-954f6254edd9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 09:18:26 compute-2 nova_compute[226829]: 2026-01-31 09:18:26.181 226833 DEBUG oslo_concurrency.lockutils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:18:26 compute-2 nova_compute[226829]: 2026-01-31 09:18:26.182 226833 DEBUG oslo_concurrency.lockutils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:18:26 compute-2 nova_compute[226829]: 2026-01-31 09:18:26.182 226833 DEBUG oslo_concurrency.lockutils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:18:26 compute-2 nova_compute[226829]: 2026-01-31 09:18:26.589 226833 DEBUG nova.network.neutron [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Successfully created port: a1a63aa0-845f-4e0f-ac60-f8509b84c41d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 09:18:27 compute-2 ceph-mon[77282]: pgmap v4414: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 2.0 KiB/s wr, 42 op/s
Jan 31 09:18:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:27.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:18:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:27.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:28 compute-2 nova_compute[226829]: 2026-01-31 09:18:28.273 226833 DEBUG nova.network.neutron [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Successfully updated port: a1a63aa0-845f-4e0f-ac60-f8509b84c41d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 09:18:28 compute-2 nova_compute[226829]: 2026-01-31 09:18:28.349 226833 DEBUG oslo_concurrency.lockutils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "refresh_cache-aa28bdce-0eba-4f00-a4f5-954f6254edd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:18:28 compute-2 nova_compute[226829]: 2026-01-31 09:18:28.350 226833 DEBUG oslo_concurrency.lockutils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquired lock "refresh_cache-aa28bdce-0eba-4f00-a4f5-954f6254edd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:18:28 compute-2 nova_compute[226829]: 2026-01-31 09:18:28.350 226833 DEBUG nova.network.neutron [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 09:18:28 compute-2 nova_compute[226829]: 2026-01-31 09:18:28.508 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:18:28 compute-2 nova_compute[226829]: 2026-01-31 09:18:28.532 226833 DEBUG nova.compute.manager [req-e3e1565a-0bf0-412b-bb66-967aa7a023cc req-6eef67c9-e8d4-4d57-9ce3-00e1cc7bfd9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Received event network-changed-a1a63aa0-845f-4e0f-ac60-f8509b84c41d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:18:28 compute-2 nova_compute[226829]: 2026-01-31 09:18:28.533 226833 DEBUG nova.compute.manager [req-e3e1565a-0bf0-412b-bb66-967aa7a023cc req-6eef67c9-e8d4-4d57-9ce3-00e1cc7bfd9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Refreshing instance network info cache due to event network-changed-a1a63aa0-845f-4e0f-ac60-f8509b84c41d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:18:28 compute-2 nova_compute[226829]: 2026-01-31 09:18:28.533 226833 DEBUG oslo_concurrency.lockutils [req-e3e1565a-0bf0-412b-bb66-967aa7a023cc req-6eef67c9-e8d4-4d57-9ce3-00e1cc7bfd9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-aa28bdce-0eba-4f00-a4f5-954f6254edd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:18:28 compute-2 nova_compute[226829]: 2026-01-31 09:18:28.665 226833 DEBUG nova.network.neutron [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 09:18:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:18:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:29.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:18:29 compute-2 ceph-mon[77282]: pgmap v4415: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 511 B/s wr, 40 op/s
Jan 31 09:18:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:29.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:30 compute-2 nova_compute[226829]: 2026-01-31 09:18:30.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:18:30 compute-2 nova_compute[226829]: 2026-01-31 09:18:30.572 226833 DEBUG nova.network.neutron [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Updating instance_info_cache with network_info: [{"id": "a1a63aa0-845f-4e0f-ac60-f8509b84c41d", "address": "fa:16:3e:c9:00:75", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a63aa0-84", "ovs_interfaceid": "a1a63aa0-845f-4e0f-ac60-f8509b84c41d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:18:30 compute-2 nova_compute[226829]: 2026-01-31 09:18:30.594 226833 DEBUG oslo_concurrency.lockutils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Releasing lock "refresh_cache-aa28bdce-0eba-4f00-a4f5-954f6254edd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:18:30 compute-2 nova_compute[226829]: 2026-01-31 09:18:30.594 226833 DEBUG nova.compute.manager [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Instance network_info: |[{"id": "a1a63aa0-845f-4e0f-ac60-f8509b84c41d", "address": "fa:16:3e:c9:00:75", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a63aa0-84", "ovs_interfaceid": "a1a63aa0-845f-4e0f-ac60-f8509b84c41d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 09:18:30 compute-2 nova_compute[226829]: 2026-01-31 09:18:30.595 226833 DEBUG oslo_concurrency.lockutils [req-e3e1565a-0bf0-412b-bb66-967aa7a023cc req-6eef67c9-e8d4-4d57-9ce3-00e1cc7bfd9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-aa28bdce-0eba-4f00-a4f5-954f6254edd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:18:30 compute-2 nova_compute[226829]: 2026-01-31 09:18:30.596 226833 DEBUG nova.network.neutron [req-e3e1565a-0bf0-412b-bb66-967aa7a023cc req-6eef67c9-e8d4-4d57-9ce3-00e1cc7bfd9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Refreshing network info cache for port a1a63aa0-845f-4e0f-ac60-f8509b84c41d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:18:30 compute-2 nova_compute[226829]: 2026-01-31 09:18:30.602 226833 DEBUG nova.virt.libvirt.driver [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Start _get_guest_xml network_info=[{"id": "a1a63aa0-845f-4e0f-ac60-f8509b84c41d", "address": "fa:16:3e:c9:00:75", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a63aa0-84", "ovs_interfaceid": "a1a63aa0-845f-4e0f-ac60-f8509b84c41d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'delete_on_termination': False, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-b13e384a-f308-411e-add5-2e1bc75b4439', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'b13e384a-f308-411e-add5-2e1bc75b4439', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'aa28bdce-0eba-4f00-a4f5-954f6254edd9', 'attached_at': '', 'detached_at': '', 'volume_id': 'b13e384a-f308-411e-add5-2e1bc75b4439', 'serial': 'b13e384a-f308-411e-add5-2e1bc75b4439'}, 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'attachment_id': 'b5702e65-231f-4ca6-87f6-2e9fa6ea2d08', 'boot_index': 0, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 09:18:30 compute-2 nova_compute[226829]: 2026-01-31 09:18:30.611 226833 WARNING nova.virt.libvirt.driver [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:18:30 compute-2 nova_compute[226829]: 2026-01-31 09:18:30.617 226833 DEBUG nova.virt.libvirt.host [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 09:18:30 compute-2 nova_compute[226829]: 2026-01-31 09:18:30.618 226833 DEBUG nova.virt.libvirt.host [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 09:18:30 compute-2 nova_compute[226829]: 2026-01-31 09:18:30.623 226833 DEBUG nova.virt.libvirt.host [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 09:18:30 compute-2 nova_compute[226829]: 2026-01-31 09:18:30.624 226833 DEBUG nova.virt.libvirt.host [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 09:18:30 compute-2 nova_compute[226829]: 2026-01-31 09:18:30.626 226833 DEBUG nova.virt.libvirt.driver [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 09:18:30 compute-2 nova_compute[226829]: 2026-01-31 09:18:30.627 226833 DEBUG nova.virt.hardware [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 09:18:30 compute-2 nova_compute[226829]: 2026-01-31 09:18:30.628 226833 DEBUG nova.virt.hardware [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 09:18:30 compute-2 nova_compute[226829]: 2026-01-31 09:18:30.629 226833 DEBUG nova.virt.hardware [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 09:18:30 compute-2 nova_compute[226829]: 2026-01-31 09:18:30.629 226833 DEBUG nova.virt.hardware [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 09:18:30 compute-2 nova_compute[226829]: 2026-01-31 09:18:30.630 226833 DEBUG nova.virt.hardware [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 09:18:30 compute-2 nova_compute[226829]: 2026-01-31 09:18:30.630 226833 DEBUG nova.virt.hardware [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 09:18:30 compute-2 nova_compute[226829]: 2026-01-31 09:18:30.631 226833 DEBUG nova.virt.hardware [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 09:18:30 compute-2 nova_compute[226829]: 2026-01-31 09:18:30.632 226833 DEBUG nova.virt.hardware [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 09:18:30 compute-2 nova_compute[226829]: 2026-01-31 09:18:30.632 226833 DEBUG nova.virt.hardware [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 09:18:30 compute-2 nova_compute[226829]: 2026-01-31 09:18:30.633 226833 DEBUG nova.virt.hardware [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 09:18:30 compute-2 nova_compute[226829]: 2026-01-31 09:18:30.633 226833 DEBUG nova.virt.hardware [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 09:18:30 compute-2 nova_compute[226829]: 2026-01-31 09:18:30.673 226833 DEBUG nova.storage.rbd_utils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] rbd image aa28bdce-0eba-4f00-a4f5-954f6254edd9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:18:30 compute-2 nova_compute[226829]: 2026-01-31 09:18:30.678 226833 DEBUG oslo_concurrency.processutils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:18:30 compute-2 nova_compute[226829]: 2026-01-31 09:18:30.717 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:30 compute-2 nova_compute[226829]: 2026-01-31 09:18:30.939 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:31 compute-2 ceph-mon[77282]: pgmap v4416: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Jan 31 09:18:31 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 09:18:31 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2472609812' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.304 226833 DEBUG oslo_concurrency.processutils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.341 226833 DEBUG nova.virt.libvirt.vif [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:18:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1926695861',display_name='tempest-TestVolumeBootPattern-server-1926695861',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1926695861',id=224,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGDAfqvbo84AGo7M8dHeUcfV7f8XDMsbbH3qvJ7QZYUuK8Mi+q4nVx9EyFqWsp6cOXT2AG4HbgkO3dUTMLlMtCu+TvTdRzopwn8vz5la3KIOsONTeEClwFEs29TOnQ3Rwg==',key_name='tempest-TestVolumeBootPattern-498126782',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98d10c0290e340a08e9d1726bf0066bf',ramdisk_id='',reservation_id='r-jqlyu20t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1294459393',owner_user_name='tempest-TestVolumeBootPattern-1294459393-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:18:24Z,user_data=None,user_id='ecd39871d7fd438f88b36601f25d6eb6',uuid=aa28bdce-0eba-4f00-a4f5-954f6254edd9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a1a63aa0-845f-4e0f-ac60-f8509b84c41d", "address": "fa:16:3e:c9:00:75", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a63aa0-84", "ovs_interfaceid": "a1a63aa0-845f-4e0f-ac60-f8509b84c41d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.342 226833 DEBUG nova.network.os_vif_util [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converting VIF {"id": "a1a63aa0-845f-4e0f-ac60-f8509b84c41d", "address": "fa:16:3e:c9:00:75", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a63aa0-84", "ovs_interfaceid": "a1a63aa0-845f-4e0f-ac60-f8509b84c41d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.343 226833 DEBUG nova.network.os_vif_util [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:00:75,bridge_name='br-int',has_traffic_filtering=True,id=a1a63aa0-845f-4e0f-ac60-f8509b84c41d,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1a63aa0-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.344 226833 DEBUG nova.objects.instance [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lazy-loading 'pci_devices' on Instance uuid aa28bdce-0eba-4f00-a4f5-954f6254edd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.364 226833 DEBUG nova.virt.libvirt.driver [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] End _get_guest_xml xml=<domain type="kvm">
Jan 31 09:18:31 compute-2 nova_compute[226829]:   <uuid>aa28bdce-0eba-4f00-a4f5-954f6254edd9</uuid>
Jan 31 09:18:31 compute-2 nova_compute[226829]:   <name>instance-000000e0</name>
Jan 31 09:18:31 compute-2 nova_compute[226829]:   <memory>131072</memory>
Jan 31 09:18:31 compute-2 nova_compute[226829]:   <vcpu>1</vcpu>
Jan 31 09:18:31 compute-2 nova_compute[226829]:   <metadata>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <nova:name>tempest-TestVolumeBootPattern-server-1926695861</nova:name>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <nova:creationTime>2026-01-31 09:18:30</nova:creationTime>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <nova:flavor name="m1.nano">
Jan 31 09:18:31 compute-2 nova_compute[226829]:         <nova:memory>128</nova:memory>
Jan 31 09:18:31 compute-2 nova_compute[226829]:         <nova:disk>1</nova:disk>
Jan 31 09:18:31 compute-2 nova_compute[226829]:         <nova:swap>0</nova:swap>
Jan 31 09:18:31 compute-2 nova_compute[226829]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 09:18:31 compute-2 nova_compute[226829]:         <nova:vcpus>1</nova:vcpus>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       </nova:flavor>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <nova:owner>
Jan 31 09:18:31 compute-2 nova_compute[226829]:         <nova:user uuid="ecd39871d7fd438f88b36601f25d6eb6">tempest-TestVolumeBootPattern-1294459393-project-member</nova:user>
Jan 31 09:18:31 compute-2 nova_compute[226829]:         <nova:project uuid="98d10c0290e340a08e9d1726bf0066bf">tempest-TestVolumeBootPattern-1294459393</nova:project>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       </nova:owner>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <nova:ports>
Jan 31 09:18:31 compute-2 nova_compute[226829]:         <nova:port uuid="a1a63aa0-845f-4e0f-ac60-f8509b84c41d">
Jan 31 09:18:31 compute-2 nova_compute[226829]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:         </nova:port>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       </nova:ports>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     </nova:instance>
Jan 31 09:18:31 compute-2 nova_compute[226829]:   </metadata>
Jan 31 09:18:31 compute-2 nova_compute[226829]:   <sysinfo type="smbios">
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <system>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <entry name="manufacturer">RDO</entry>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <entry name="product">OpenStack Compute</entry>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <entry name="serial">aa28bdce-0eba-4f00-a4f5-954f6254edd9</entry>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <entry name="uuid">aa28bdce-0eba-4f00-a4f5-954f6254edd9</entry>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <entry name="family">Virtual Machine</entry>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     </system>
Jan 31 09:18:31 compute-2 nova_compute[226829]:   </sysinfo>
Jan 31 09:18:31 compute-2 nova_compute[226829]:   <os>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <boot dev="hd"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <smbios mode="sysinfo"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:   </os>
Jan 31 09:18:31 compute-2 nova_compute[226829]:   <features>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <acpi/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <apic/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <vmcoreinfo/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:   </features>
Jan 31 09:18:31 compute-2 nova_compute[226829]:   <clock offset="utc">
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <timer name="hpet" present="no"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:   </clock>
Jan 31 09:18:31 compute-2 nova_compute[226829]:   <cpu mode="custom" match="exact">
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <model>Nehalem</model>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:   </cpu>
Jan 31 09:18:31 compute-2 nova_compute[226829]:   <devices>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <disk type="network" device="cdrom">
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <driver type="raw" cache="none"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <source protocol="rbd" name="vms/aa28bdce-0eba-4f00-a4f5-954f6254edd9_disk.config">
Jan 31 09:18:31 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       </source>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 09:18:31 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       </auth>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <target dev="sda" bus="sata"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     </disk>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <disk type="network" device="disk">
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <source protocol="rbd" name="volumes/volume-b13e384a-f308-411e-add5-2e1bc75b4439">
Jan 31 09:18:31 compute-2 nova_compute[226829]:         <host name="192.168.122.100" port="6789"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:         <host name="192.168.122.102" port="6789"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:         <host name="192.168.122.101" port="6789"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       </source>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <auth username="openstack">
Jan 31 09:18:31 compute-2 nova_compute[226829]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       </auth>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <target dev="vda" bus="virtio"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <serial>b13e384a-f308-411e-add5-2e1bc75b4439</serial>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     </disk>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <interface type="ethernet">
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <mac address="fa:16:3e:c9:00:75"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <mtu size="1442"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <target dev="tapa1a63aa0-84"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     </interface>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <serial type="pty">
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <log file="/var/lib/nova/instances/aa28bdce-0eba-4f00-a4f5-954f6254edd9/console.log" append="off"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     </serial>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <video>
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <model type="virtio"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     </video>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <input type="tablet" bus="usb"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <rng model="virtio">
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <backend model="random">/dev/urandom</backend>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     </rng>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <controller type="usb" index="0"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     <memballoon model="virtio">
Jan 31 09:18:31 compute-2 nova_compute[226829]:       <stats period="10"/>
Jan 31 09:18:31 compute-2 nova_compute[226829]:     </memballoon>
Jan 31 09:18:31 compute-2 nova_compute[226829]:   </devices>
Jan 31 09:18:31 compute-2 nova_compute[226829]: </domain>
Jan 31 09:18:31 compute-2 nova_compute[226829]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.365 226833 DEBUG nova.compute.manager [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Preparing to wait for external event network-vif-plugged-a1a63aa0-845f-4e0f-ac60-f8509b84c41d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.365 226833 DEBUG oslo_concurrency.lockutils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "aa28bdce-0eba-4f00-a4f5-954f6254edd9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.365 226833 DEBUG oslo_concurrency.lockutils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "aa28bdce-0eba-4f00-a4f5-954f6254edd9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.366 226833 DEBUG oslo_concurrency.lockutils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "aa28bdce-0eba-4f00-a4f5-954f6254edd9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.366 226833 DEBUG nova.virt.libvirt.vif [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:18:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1926695861',display_name='tempest-TestVolumeBootPattern-server-1926695861',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1926695861',id=224,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGDAfqvbo84AGo7M8dHeUcfV7f8XDMsbbH3qvJ7QZYUuK8Mi+q4nVx9EyFqWsp6cOXT2AG4HbgkO3dUTMLlMtCu+TvTdRzopwn8vz5la3KIOsONTeEClwFEs29TOnQ3Rwg==',key_name='tempest-TestVolumeBootPattern-498126782',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98d10c0290e340a08e9d1726bf0066bf',ramdisk_id='',reservation_id='r-jqlyu20t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1294459393',owner_user_name='tempest-TestVolumeBootPattern-1294459393-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:18:24Z,user_data=None,user_id='ecd39871d7fd438f88b36601f25d6eb6',uuid=aa28bdce-0eba-4f00-a4f5-954f6254edd9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a1a63aa0-845f-4e0f-ac60-f8509b84c41d", "address": "fa:16:3e:c9:00:75", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a63aa0-84", "ovs_interfaceid": "a1a63aa0-845f-4e0f-ac60-f8509b84c41d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.367 226833 DEBUG nova.network.os_vif_util [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converting VIF {"id": "a1a63aa0-845f-4e0f-ac60-f8509b84c41d", "address": "fa:16:3e:c9:00:75", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a63aa0-84", "ovs_interfaceid": "a1a63aa0-845f-4e0f-ac60-f8509b84c41d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.367 226833 DEBUG nova.network.os_vif_util [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:00:75,bridge_name='br-int',has_traffic_filtering=True,id=a1a63aa0-845f-4e0f-ac60-f8509b84c41d,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1a63aa0-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.368 226833 DEBUG os_vif [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:00:75,bridge_name='br-int',has_traffic_filtering=True,id=a1a63aa0-845f-4e0f-ac60-f8509b84c41d,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1a63aa0-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.368 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.368 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.369 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.374 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.375 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1a63aa0-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.375 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa1a63aa0-84, col_values=(('external_ids', {'iface-id': 'a1a63aa0-845f-4e0f-ac60-f8509b84c41d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c9:00:75', 'vm-uuid': 'aa28bdce-0eba-4f00-a4f5-954f6254edd9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.376 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:31 compute-2 NetworkManager[48999]: <info>  [1769851111.3779] manager: (tapa1a63aa0-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/441)
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.378 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.382 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.383 226833 INFO os_vif [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:00:75,bridge_name='br-int',has_traffic_filtering=True,id=a1a63aa0-845f-4e0f-ac60-f8509b84c41d,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1a63aa0-84')
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.435 226833 DEBUG nova.virt.libvirt.driver [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.436 226833 DEBUG nova.virt.libvirt.driver [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.436 226833 DEBUG nova.virt.libvirt.driver [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] No VIF found with MAC fa:16:3e:c9:00:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.436 226833 INFO nova.virt.libvirt.driver [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Using config drive
Jan 31 09:18:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:31.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.456 226833 DEBUG nova.storage.rbd_utils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] rbd image aa28bdce-0eba-4f00-a4f5-954f6254edd9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:18:31 compute-2 nova_compute[226829]: 2026-01-31 09:18:31.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:18:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:31.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:31 compute-2 sudo[350955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:18:31 compute-2 sudo[350955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:31 compute-2 sudo[350955]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:31 compute-2 sudo[350980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:18:31 compute-2 sudo[350980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:31 compute-2 sudo[350980]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2472609812' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:18:32 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3661090237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:18:32 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:18:33 compute-2 nova_compute[226829]: 2026-01-31 09:18:33.029 226833 INFO nova.virt.libvirt.driver [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Creating config drive at /var/lib/nova/instances/aa28bdce-0eba-4f00-a4f5-954f6254edd9/disk.config
Jan 31 09:18:33 compute-2 nova_compute[226829]: 2026-01-31 09:18:33.033 226833 DEBUG oslo_concurrency.processutils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aa28bdce-0eba-4f00-a4f5-954f6254edd9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpwisdj_i2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:18:33 compute-2 nova_compute[226829]: 2026-01-31 09:18:33.173 226833 DEBUG oslo_concurrency.processutils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aa28bdce-0eba-4f00-a4f5-954f6254edd9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpwisdj_i2" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:18:33 compute-2 nova_compute[226829]: 2026-01-31 09:18:33.195 226833 DEBUG nova.storage.rbd_utils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] rbd image aa28bdce-0eba-4f00-a4f5-954f6254edd9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:18:33 compute-2 nova_compute[226829]: 2026-01-31 09:18:33.199 226833 DEBUG oslo_concurrency.processutils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aa28bdce-0eba-4f00-a4f5-954f6254edd9/disk.config aa28bdce-0eba-4f00-a4f5-954f6254edd9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:18:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:33.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:33 compute-2 nova_compute[226829]: 2026-01-31 09:18:33.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:18:33 compute-2 nova_compute[226829]: 2026-01-31 09:18:33.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:18:33 compute-2 nova_compute[226829]: 2026-01-31 09:18:33.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:18:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/668408901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:18:33 compute-2 ceph-mon[77282]: pgmap v4417: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 2.2 KiB/s wr, 56 op/s
Jan 31 09:18:33 compute-2 nova_compute[226829]: 2026-01-31 09:18:33.526 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 09:18:33 compute-2 nova_compute[226829]: 2026-01-31 09:18:33.526 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:18:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:33.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:34 compute-2 nova_compute[226829]: 2026-01-31 09:18:34.122 226833 DEBUG oslo_concurrency.processutils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aa28bdce-0eba-4f00-a4f5-954f6254edd9/disk.config aa28bdce-0eba-4f00-a4f5-954f6254edd9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.924s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:18:34 compute-2 nova_compute[226829]: 2026-01-31 09:18:34.123 226833 INFO nova.virt.libvirt.driver [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Deleting local config drive /var/lib/nova/instances/aa28bdce-0eba-4f00-a4f5-954f6254edd9/disk.config because it was imported into RBD.
Jan 31 09:18:34 compute-2 kernel: tapa1a63aa0-84: entered promiscuous mode
Jan 31 09:18:34 compute-2 NetworkManager[48999]: <info>  [1769851114.1695] manager: (tapa1a63aa0-84): new Tun device (/org/freedesktop/NetworkManager/Devices/442)
Jan 31 09:18:34 compute-2 ovn_controller[133834]: 2026-01-31T09:18:34Z|00879|binding|INFO|Claiming lport a1a63aa0-845f-4e0f-ac60-f8509b84c41d for this chassis.
Jan 31 09:18:34 compute-2 ovn_controller[133834]: 2026-01-31T09:18:34Z|00880|binding|INFO|a1a63aa0-845f-4e0f-ac60-f8509b84c41d: Claiming fa:16:3e:c9:00:75 10.100.0.9
Jan 31 09:18:34 compute-2 nova_compute[226829]: 2026-01-31 09:18:34.170 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:34 compute-2 nova_compute[226829]: 2026-01-31 09:18:34.175 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:34 compute-2 ovn_controller[133834]: 2026-01-31T09:18:34Z|00881|binding|INFO|Setting lport a1a63aa0-845f-4e0f-ac60-f8509b84c41d ovn-installed in OVS
Jan 31 09:18:34 compute-2 nova_compute[226829]: 2026-01-31 09:18:34.176 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:34 compute-2 ovn_controller[133834]: 2026-01-31T09:18:34Z|00882|binding|INFO|Setting lport a1a63aa0-845f-4e0f-ac60-f8509b84c41d up in Southbound
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:34.180 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:00:75 10.100.0.9'], port_security=['fa:16:3e:c9:00:75 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'aa28bdce-0eba-4f00-a4f5-954f6254edd9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c9ca540-57e7-412d-8ef3-af923db0a265', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98d10c0290e340a08e9d1726bf0066bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '11348f26-2c0a-4b92-a927-856bca145e48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e88fefe4-17cc-4664-bc86-8614a5f025ec, chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=a1a63aa0-845f-4e0f-ac60-f8509b84c41d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:18:34 compute-2 nova_compute[226829]: 2026-01-31 09:18:34.181 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:34.182 143841 INFO neutron.agent.ovn.metadata.agent [-] Port a1a63aa0-845f-4e0f-ac60-f8509b84c41d in datapath 5c9ca540-57e7-412d-8ef3-af923db0a265 bound to our chassis
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:34.184 143841 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5c9ca540-57e7-412d-8ef3-af923db0a265
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:34.193 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[9cad0929-4fea-4b07-83d8-6a35ba2e02ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:34.195 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5c9ca540-51 in ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 09:18:34 compute-2 systemd-machined[195142]: New machine qemu-99-instance-000000e0.
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:34.201 230393 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5c9ca540-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:34.201 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[449af0ba-78a9-4889-83cd-2d76022ef8a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:34.202 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[8f157ff3-b64a-4a51-9be2-839438aacc3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:34.213 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[fb7daf49-ec4b-4b12-a914-c0dfbf94b62d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:18:34 compute-2 systemd[1]: Started Virtual Machine qemu-99-instance-000000e0.
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:34.225 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ec034c19-af6e-4cfe-ab7d-f3a65373cc70]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:18:34 compute-2 systemd-udevd[351062]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 09:18:34 compute-2 NetworkManager[48999]: <info>  [1769851114.2471] device (tapa1a63aa0-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 09:18:34 compute-2 NetworkManager[48999]: <info>  [1769851114.2479] device (tapa1a63aa0-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:34.254 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[2a27e587-f188-419c-9d25-db4cd356a3a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:18:34 compute-2 NetworkManager[48999]: <info>  [1769851114.2601] manager: (tap5c9ca540-50): new Veth device (/org/freedesktop/NetworkManager/Devices/443)
Jan 31 09:18:34 compute-2 systemd-udevd[351067]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:34.259 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[3bda2cdb-62d4-4cea-a9cc-fb4db2af45a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:34.280 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[66015658-e2e0-48e4-ab7d-1e8b3843a9ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:34.283 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[51b9c651-a08c-4ac0-9d9b-6cc2d66ecb60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:18:34 compute-2 NetworkManager[48999]: <info>  [1769851114.3004] device (tap5c9ca540-50): carrier: link connected
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:34.304 230408 DEBUG oslo.privsep.daemon [-] privsep: reply[2e3202b0-c6b6-403e-bc93-9e2037ce772e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:34.319 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ba2761ad-eac6-4607-aa6b-28166912d87a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c9ca540-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:dc:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 274], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1136677, 'reachable_time': 40499, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351093, 'error': None, 'target': 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:34.331 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[caf10837-0552-4f9f-ae2e-eeda1dc517ba]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:dcf7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1136677, 'tstamp': 1136677}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351094, 'error': None, 'target': 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:34.344 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[04e43570-2ea6-4a6a-8d27-fdd3b8d1edc6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c9ca540-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:dc:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 274], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1136677, 'reachable_time': 40499, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351095, 'error': None, 'target': 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:34.363 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[ed3240e9-bd6e-42fa-8182-621b607cd196]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:34.403 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[7388609f-15b2-4409-84d8-e124d8201059]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:34.405 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c9ca540-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:34.405 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:34.405 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c9ca540-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:18:34 compute-2 nova_compute[226829]: 2026-01-31 09:18:34.407 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:34 compute-2 NetworkManager[48999]: <info>  [1769851114.4085] manager: (tap5c9ca540-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/444)
Jan 31 09:18:34 compute-2 kernel: tap5c9ca540-50: entered promiscuous mode
Jan 31 09:18:34 compute-2 nova_compute[226829]: 2026-01-31 09:18:34.411 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:34.412 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5c9ca540-50, col_values=(('external_ids', {'iface-id': '016c97be-36ee-470a-8bac-28db98577a8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:18:34 compute-2 nova_compute[226829]: 2026-01-31 09:18:34.413 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:34 compute-2 ovn_controller[133834]: 2026-01-31T09:18:34Z|00883|binding|INFO|Releasing lport 016c97be-36ee-470a-8bac-28db98577a8c from this chassis (sb_readonly=0)
Jan 31 09:18:34 compute-2 nova_compute[226829]: 2026-01-31 09:18:34.420 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:34 compute-2 nova_compute[226829]: 2026-01-31 09:18:34.423 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:34.423 143841 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5c9ca540-57e7-412d-8ef3-af923db0a265.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5c9ca540-57e7-412d-8ef3-af923db0a265.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:34.424 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[0e15cdf7-293f-4e2b-9d1d-354de07df24e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:34.425 143841 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: global
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]:     log         /dev/log local0 debug
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]:     log-tag     haproxy-metadata-proxy-5c9ca540-57e7-412d-8ef3-af923db0a265
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]:     user        root
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]:     group       root
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]:     maxconn     1024
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]:     pidfile     /var/lib/neutron/external/pids/5c9ca540-57e7-412d-8ef3-af923db0a265.pid.haproxy
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]:     daemon
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: defaults
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]:     log global
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]:     mode http
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]:     option httplog
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]:     option dontlognull
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]:     option http-server-close
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]:     option forwardfor
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]:     retries                 3
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]:     timeout http-request    30s
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]:     timeout connect         30s
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]:     timeout client          32s
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]:     timeout server          32s
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]:     timeout http-keep-alive 30s
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: listen listener
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]:     bind 169.254.169.254:80
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]:     http-request add-header X-OVN-Network-ID 5c9ca540-57e7-412d-8ef3-af923db0a265
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 09:18:34 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:34.426 143841 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'env', 'PROCESS_TAG=haproxy-5c9ca540-57e7-412d-8ef3-af923db0a265', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5c9ca540-57e7-412d-8ef3-af923db0a265.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 09:18:34 compute-2 podman[351145]: 2026-01-31 09:18:34.80606708 +0000 UTC m=+0.062103450 container create f68a6b6073100e75ba91ef4398c6de640275c4f2810258d60719d6287b595194 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 09:18:34 compute-2 systemd[1]: Started libpod-conmon-f68a6b6073100e75ba91ef4398c6de640275c4f2810258d60719d6287b595194.scope.
Jan 31 09:18:34 compute-2 systemd[1]: Started libcrun container.
Jan 31 09:18:34 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c86646410ff83aae6a91579eebf7accf46d8d1f5ad339aa3f9fa608f63a637d9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 09:18:34 compute-2 podman[351145]: 2026-01-31 09:18:34.782100349 +0000 UTC m=+0.038136749 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 09:18:34 compute-2 podman[351145]: 2026-01-31 09:18:34.883916057 +0000 UTC m=+0.139952447 container init f68a6b6073100e75ba91ef4398c6de640275c4f2810258d60719d6287b595194 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 09:18:34 compute-2 podman[351145]: 2026-01-31 09:18:34.88992363 +0000 UTC m=+0.145960010 container start f68a6b6073100e75ba91ef4398c6de640275c4f2810258d60719d6287b595194 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 09:18:34 compute-2 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[351179]: [NOTICE]   (351183) : New worker (351185) forked
Jan 31 09:18:34 compute-2 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[351179]: [NOTICE]   (351183) : Loading success.
Jan 31 09:18:34 compute-2 nova_compute[226829]: 2026-01-31 09:18:34.974 226833 DEBUG nova.compute.manager [req-41a24ae9-5d27-4fbe-a76e-e8362cf0c07f req-45b8c949-ce50-429b-9040-639bd51831bd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Received event network-vif-plugged-a1a63aa0-845f-4e0f-ac60-f8509b84c41d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:18:34 compute-2 nova_compute[226829]: 2026-01-31 09:18:34.975 226833 DEBUG oslo_concurrency.lockutils [req-41a24ae9-5d27-4fbe-a76e-e8362cf0c07f req-45b8c949-ce50-429b-9040-639bd51831bd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "aa28bdce-0eba-4f00-a4f5-954f6254edd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:18:34 compute-2 nova_compute[226829]: 2026-01-31 09:18:34.975 226833 DEBUG oslo_concurrency.lockutils [req-41a24ae9-5d27-4fbe-a76e-e8362cf0c07f req-45b8c949-ce50-429b-9040-639bd51831bd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aa28bdce-0eba-4f00-a4f5-954f6254edd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:18:34 compute-2 nova_compute[226829]: 2026-01-31 09:18:34.975 226833 DEBUG oslo_concurrency.lockutils [req-41a24ae9-5d27-4fbe-a76e-e8362cf0c07f req-45b8c949-ce50-429b-9040-639bd51831bd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aa28bdce-0eba-4f00-a4f5-954f6254edd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:18:34 compute-2 nova_compute[226829]: 2026-01-31 09:18:34.975 226833 DEBUG nova.compute.manager [req-41a24ae9-5d27-4fbe-a76e-e8362cf0c07f req-45b8c949-ce50-429b-9040-639bd51831bd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Processing event network-vif-plugged-a1a63aa0-845f-4e0f-ac60-f8509b84c41d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.034 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769851115.033264, aa28bdce-0eba-4f00-a4f5-954f6254edd9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.034 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] VM Started (Lifecycle Event)
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.037 226833 DEBUG nova.compute.manager [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.043 226833 DEBUG nova.virt.libvirt.driver [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.048 226833 INFO nova.virt.libvirt.driver [-] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Instance spawned successfully.
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.048 226833 DEBUG nova.virt.libvirt.driver [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.057 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.061 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.093 226833 DEBUG nova.virt.libvirt.driver [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.093 226833 DEBUG nova.virt.libvirt.driver [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.094 226833 DEBUG nova.virt.libvirt.driver [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.094 226833 DEBUG nova.virt.libvirt.driver [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.095 226833 DEBUG nova.virt.libvirt.driver [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.095 226833 DEBUG nova.virt.libvirt.driver [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.099 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.100 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769851115.033397, aa28bdce-0eba-4f00-a4f5-954f6254edd9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.100 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] VM Paused (Lifecycle Event)
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.151 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.155 226833 DEBUG nova.virt.driver [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] Emitting event <LifecycleEvent: 1769851115.0417104, aa28bdce-0eba-4f00-a4f5-954f6254edd9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.155 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] VM Resumed (Lifecycle Event)
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.166 226833 INFO nova.compute.manager [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Took 8.99 seconds to spawn the instance on the hypervisor.
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.167 226833 DEBUG nova.compute.manager [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.176 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.179 226833 DEBUG nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.181 226833 DEBUG nova.network.neutron [req-e3e1565a-0bf0-412b-bb66-967aa7a023cc req-6eef67c9-e8d4-4d57-9ce3-00e1cc7bfd9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Updated VIF entry in instance network info cache for port a1a63aa0-845f-4e0f-ac60-f8509b84c41d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.182 226833 DEBUG nova.network.neutron [req-e3e1565a-0bf0-412b-bb66-967aa7a023cc req-6eef67c9-e8d4-4d57-9ce3-00e1cc7bfd9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Updating instance_info_cache with network_info: [{"id": "a1a63aa0-845f-4e0f-ac60-f8509b84c41d", "address": "fa:16:3e:c9:00:75", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a63aa0-84", "ovs_interfaceid": "a1a63aa0-845f-4e0f-ac60-f8509b84c41d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.239 226833 DEBUG oslo_concurrency.lockutils [req-e3e1565a-0bf0-412b-bb66-967aa7a023cc req-6eef67c9-e8d4-4d57-9ce3-00e1cc7bfd9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-aa28bdce-0eba-4f00-a4f5-954f6254edd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.244 226833 INFO nova.compute.manager [None req-fd39de86-16b2-400f-88a2-62c8fa4c26c3 - - - - - -] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.330 226833 INFO nova.compute.manager [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Took 12.30 seconds to build instance.
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.365 226833 DEBUG oslo_concurrency.lockutils [None req-3f1a06d1-6b5d-45df-9982-5d27209e964b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "aa28bdce-0eba-4f00-a4f5-954f6254edd9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.538s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:18:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:35.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.518 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.519 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.520 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.521 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.521 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:18:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:35.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:35 compute-2 ceph-mon[77282]: pgmap v4418: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 15 KiB/s wr, 82 op/s
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.941 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:35 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:18:35 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3402391468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:18:35 compute-2 nova_compute[226829]: 2026-01-31 09:18:35.977 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:18:36 compute-2 nova_compute[226829]: 2026-01-31 09:18:36.044 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000e0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 09:18:36 compute-2 nova_compute[226829]: 2026-01-31 09:18:36.044 226833 DEBUG nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] skipping disk for instance-000000e0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 09:18:36 compute-2 nova_compute[226829]: 2026-01-31 09:18:36.173 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:18:36 compute-2 nova_compute[226829]: 2026-01-31 09:18:36.174 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3831MB free_disk=20.987987518310547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:18:36 compute-2 nova_compute[226829]: 2026-01-31 09:18:36.174 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:18:36 compute-2 nova_compute[226829]: 2026-01-31 09:18:36.175 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:18:36 compute-2 nova_compute[226829]: 2026-01-31 09:18:36.260 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Instance aa28bdce-0eba-4f00-a4f5-954f6254edd9 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 09:18:36 compute-2 nova_compute[226829]: 2026-01-31 09:18:36.260 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:18:36 compute-2 nova_compute[226829]: 2026-01-31 09:18:36.260 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:18:36 compute-2 nova_compute[226829]: 2026-01-31 09:18:36.337 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:18:36 compute-2 nova_compute[226829]: 2026-01-31 09:18:36.415 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:36 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:18:36 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2063266035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:18:36 compute-2 nova_compute[226829]: 2026-01-31 09:18:36.761 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:18:36 compute-2 nova_compute[226829]: 2026-01-31 09:18:36.765 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:18:36 compute-2 nova_compute[226829]: 2026-01-31 09:18:36.782 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:18:36 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3402391468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:18:36 compute-2 nova_compute[226829]: 2026-01-31 09:18:36.861 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:18:36 compute-2 nova_compute[226829]: 2026-01-31 09:18:36.863 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:18:37 compute-2 nova_compute[226829]: 2026-01-31 09:18:37.116 226833 DEBUG nova.compute.manager [req-94dd42af-ac23-42af-a7af-d72ed76fb09b req-c8ac551d-52e0-42fb-995f-7b267526ad3d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Received event network-vif-plugged-a1a63aa0-845f-4e0f-ac60-f8509b84c41d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:18:37 compute-2 nova_compute[226829]: 2026-01-31 09:18:37.117 226833 DEBUG oslo_concurrency.lockutils [req-94dd42af-ac23-42af-a7af-d72ed76fb09b req-c8ac551d-52e0-42fb-995f-7b267526ad3d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "aa28bdce-0eba-4f00-a4f5-954f6254edd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:18:37 compute-2 nova_compute[226829]: 2026-01-31 09:18:37.118 226833 DEBUG oslo_concurrency.lockutils [req-94dd42af-ac23-42af-a7af-d72ed76fb09b req-c8ac551d-52e0-42fb-995f-7b267526ad3d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aa28bdce-0eba-4f00-a4f5-954f6254edd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:18:37 compute-2 nova_compute[226829]: 2026-01-31 09:18:37.118 226833 DEBUG oslo_concurrency.lockutils [req-94dd42af-ac23-42af-a7af-d72ed76fb09b req-c8ac551d-52e0-42fb-995f-7b267526ad3d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aa28bdce-0eba-4f00-a4f5-954f6254edd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:18:37 compute-2 nova_compute[226829]: 2026-01-31 09:18:37.118 226833 DEBUG nova.compute.manager [req-94dd42af-ac23-42af-a7af-d72ed76fb09b req-c8ac551d-52e0-42fb-995f-7b267526ad3d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] No waiting events found dispatching network-vif-plugged-a1a63aa0-845f-4e0f-ac60-f8509b84c41d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:18:37 compute-2 nova_compute[226829]: 2026-01-31 09:18:37.118 226833 WARNING nova.compute.manager [req-94dd42af-ac23-42af-a7af-d72ed76fb09b req-c8ac551d-52e0-42fb-995f-7b267526ad3d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Received unexpected event network-vif-plugged-a1a63aa0-845f-4e0f-ac60-f8509b84c41d for instance with vm_state active and task_state None.
Jan 31 09:18:37 compute-2 nova_compute[226829]: 2026-01-31 09:18:37.122 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:18:37 compute-2 nova_compute[226829]: 2026-01-31 09:18:37.144 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Triggering sync for uuid aa28bdce-0eba-4f00-a4f5-954f6254edd9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 09:18:37 compute-2 nova_compute[226829]: 2026-01-31 09:18:37.144 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "aa28bdce-0eba-4f00-a4f5-954f6254edd9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:18:37 compute-2 nova_compute[226829]: 2026-01-31 09:18:37.145 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "aa28bdce-0eba-4f00-a4f5-954f6254edd9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:18:37 compute-2 nova_compute[226829]: 2026-01-31 09:18:37.178 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "aa28bdce-0eba-4f00-a4f5-954f6254edd9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:18:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:37.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:18:37 compute-2 nova_compute[226829]: 2026-01-31 09:18:37.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:18:37 compute-2 nova_compute[226829]: 2026-01-31 09:18:37.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 09:18:37 compute-2 nova_compute[226829]: 2026-01-31 09:18:37.523 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 09:18:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:37.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:37 compute-2 sudo[351246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:18:37 compute-2 sudo[351246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:37 compute-2 sudo[351246]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:37 compute-2 sudo[351271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:18:37 compute-2 sudo[351271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:37 compute-2 sudo[351271]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:37 compute-2 sudo[351296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:18:37 compute-2 sudo[351296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:37 compute-2 sudo[351296]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:37 compute-2 sudo[351321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:18:37 compute-2 sudo[351321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:38 compute-2 sudo[351321]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:39 compute-2 ceph-mon[77282]: pgmap v4419: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 110 KiB/s rd, 15 KiB/s wr, 92 op/s
Jan 31 09:18:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2063266035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:18:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:18:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:39.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:18:39 compute-2 nova_compute[226829]: 2026-01-31 09:18:39.524 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:18:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:39.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:40 compute-2 nova_compute[226829]: 2026-01-31 09:18:40.065 226833 DEBUG nova.compute.manager [req-8f5b3b93-0a0f-4005-88b8-4380a30cb826 req-16f28039-38e1-46e3-a0bf-ea85f0936ee8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Received event network-changed-a1a63aa0-845f-4e0f-ac60-f8509b84c41d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:18:40 compute-2 nova_compute[226829]: 2026-01-31 09:18:40.065 226833 DEBUG nova.compute.manager [req-8f5b3b93-0a0f-4005-88b8-4380a30cb826 req-16f28039-38e1-46e3-a0bf-ea85f0936ee8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Refreshing instance network info cache due to event network-changed-a1a63aa0-845f-4e0f-ac60-f8509b84c41d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:18:40 compute-2 nova_compute[226829]: 2026-01-31 09:18:40.066 226833 DEBUG oslo_concurrency.lockutils [req-8f5b3b93-0a0f-4005-88b8-4380a30cb826 req-16f28039-38e1-46e3-a0bf-ea85f0936ee8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-aa28bdce-0eba-4f00-a4f5-954f6254edd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:18:40 compute-2 nova_compute[226829]: 2026-01-31 09:18:40.066 226833 DEBUG oslo_concurrency.lockutils [req-8f5b3b93-0a0f-4005-88b8-4380a30cb826 req-16f28039-38e1-46e3-a0bf-ea85f0936ee8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-aa28bdce-0eba-4f00-a4f5-954f6254edd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:18:40 compute-2 nova_compute[226829]: 2026-01-31 09:18:40.066 226833 DEBUG nova.network.neutron [req-8f5b3b93-0a0f-4005-88b8-4380a30cb826 req-16f28039-38e1-46e3-a0bf-ea85f0936ee8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Refreshing network info cache for port a1a63aa0-845f-4e0f-ac60-f8509b84c41d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:18:40 compute-2 ceph-mon[77282]: pgmap v4420: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 571 KiB/s rd, 15 KiB/s wr, 123 op/s
Jan 31 09:18:40 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:18:40 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:18:40 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:18:40 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:18:40 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:18:40 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:18:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1275901134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:18:40 compute-2 nova_compute[226829]: 2026-01-31 09:18:40.978 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:41 compute-2 nova_compute[226829]: 2026-01-31 09:18:41.418 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:18:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:41.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:18:41 compute-2 nova_compute[226829]: 2026-01-31 09:18:41.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:18:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:18:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:41.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:18:42 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:18:42 compute-2 ceph-mon[77282]: pgmap v4421: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 994 KiB/s rd, 15 KiB/s wr, 136 op/s
Jan 31 09:18:42 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3447361038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:18:43 compute-2 nova_compute[226829]: 2026-01-31 09:18:43.208 226833 DEBUG nova.network.neutron [req-8f5b3b93-0a0f-4005-88b8-4380a30cb826 req-16f28039-38e1-46e3-a0bf-ea85f0936ee8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Updated VIF entry in instance network info cache for port a1a63aa0-845f-4e0f-ac60-f8509b84c41d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:18:43 compute-2 nova_compute[226829]: 2026-01-31 09:18:43.209 226833 DEBUG nova.network.neutron [req-8f5b3b93-0a0f-4005-88b8-4380a30cb826 req-16f28039-38e1-46e3-a0bf-ea85f0936ee8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Updating instance_info_cache with network_info: [{"id": "a1a63aa0-845f-4e0f-ac60-f8509b84c41d", "address": "fa:16:3e:c9:00:75", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a63aa0-84", "ovs_interfaceid": "a1a63aa0-845f-4e0f-ac60-f8509b84c41d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:18:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:18:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:43.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:18:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:43.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:43 compute-2 nova_compute[226829]: 2026-01-31 09:18:43.686 226833 DEBUG oslo_concurrency.lockutils [req-8f5b3b93-0a0f-4005-88b8-4380a30cb826 req-16f28039-38e1-46e3-a0bf-ea85f0936ee8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-aa28bdce-0eba-4f00-a4f5-954f6254edd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:18:44 compute-2 ceph-mon[77282]: pgmap v4422: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 176 op/s
Jan 31 09:18:45 compute-2 ceph-mon[77282]: pgmap v4423: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 172 op/s
Jan 31 09:18:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:45.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:45 compute-2 nova_compute[226829]: 2026-01-31 09:18:45.489 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:18:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:45.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:45 compute-2 nova_compute[226829]: 2026-01-31 09:18:45.982 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:46 compute-2 nova_compute[226829]: 2026-01-31 09:18:46.420 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:47 compute-2 ceph-mon[77282]: pgmap v4424: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 597 B/s wr, 146 op/s
Jan 31 09:18:47 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:18:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:47.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:47.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:48 compute-2 ovn_controller[133834]: 2026-01-31T09:18:48Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c9:00:75 10.100.0.9
Jan 31 09:18:48 compute-2 ovn_controller[133834]: 2026-01-31T09:18:48Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c9:00:75 10.100.0.9
Jan 31 09:18:49 compute-2 podman[351383]: 2026-01-31 09:18:49.198332588 +0000 UTC m=+0.075878504 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:18:49 compute-2 sudo[351411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:18:49 compute-2 sudo[351411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:49 compute-2 sudo[351411]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:49 compute-2 sudo[351436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:18:49 compute-2 sudo[351436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:49 compute-2 sudo[351436]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:18:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:49.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:18:49 compute-2 ceph-mon[77282]: pgmap v4425: 305 pgs: 305 active+clean; 206 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 62 KiB/s wr, 126 op/s
Jan 31 09:18:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:18:49 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:18:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:49.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:50 compute-2 nova_compute[226829]: 2026-01-31 09:18:50.985 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:51 compute-2 nova_compute[226829]: 2026-01-31 09:18:51.422 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:51.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:51.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:51 compute-2 sudo[351462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:18:51 compute-2 sudo[351462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:51 compute-2 sudo[351462]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:51 compute-2 ceph-mon[77282]: pgmap v4426: 305 pgs: 305 active+clean; 208 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 89 KiB/s wr, 105 op/s
Jan 31 09:18:51 compute-2 sudo[351487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:18:51 compute-2 sudo[351487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:51 compute-2 sudo[351487]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:18:52 compute-2 ceph-mon[77282]: pgmap v4427: 305 pgs: 305 active+clean; 216 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 499 KiB/s wr, 98 op/s
Jan 31 09:18:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:53.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:53.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1372370753' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:18:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/1372370753' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:18:55 compute-2 ceph-mon[77282]: pgmap v4428: 305 pgs: 305 active+clean; 216 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 499 KiB/s wr, 53 op/s
Jan 31 09:18:55 compute-2 podman[351514]: 2026-01-31 09:18:55.154605678 +0000 UTC m=+0.043657288 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 31 09:18:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:55.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:55 compute-2 nova_compute[226829]: 2026-01-31 09:18:55.517 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:18:55 compute-2 nova_compute[226829]: 2026-01-31 09:18:55.518 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:18:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:55.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:55 compute-2 nova_compute[226829]: 2026-01-31 09:18:55.986 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:56.313 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=118, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=117) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:18:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:56.314 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.366 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.424 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.536 226833 DEBUG nova.compute.manager [req-ed48bc5f-4b7a-4cff-816a-2a3e4174a17b req-3c4e0b46-0879-4d91-b3f9-d328344106b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Received event network-changed-a1a63aa0-845f-4e0f-ac60-f8509b84c41d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.536 226833 DEBUG nova.compute.manager [req-ed48bc5f-4b7a-4cff-816a-2a3e4174a17b req-3c4e0b46-0879-4d91-b3f9-d328344106b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Refreshing instance network info cache due to event network-changed-a1a63aa0-845f-4e0f-ac60-f8509b84c41d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.536 226833 DEBUG oslo_concurrency.lockutils [req-ed48bc5f-4b7a-4cff-816a-2a3e4174a17b req-3c4e0b46-0879-4d91-b3f9-d328344106b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-aa28bdce-0eba-4f00-a4f5-954f6254edd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.536 226833 DEBUG oslo_concurrency.lockutils [req-ed48bc5f-4b7a-4cff-816a-2a3e4174a17b req-3c4e0b46-0879-4d91-b3f9-d328344106b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-aa28bdce-0eba-4f00-a4f5-954f6254edd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.537 226833 DEBUG nova.network.neutron [req-ed48bc5f-4b7a-4cff-816a-2a3e4174a17b req-3c4e0b46-0879-4d91-b3f9-d328344106b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Refreshing network info cache for port a1a63aa0-845f-4e0f-ac60-f8509b84c41d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.615 226833 DEBUG oslo_concurrency.lockutils [None req-216626c8-5ba2-4586-838a-da3dce9edf81 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "aa28bdce-0eba-4f00-a4f5-954f6254edd9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.615 226833 DEBUG oslo_concurrency.lockutils [None req-216626c8-5ba2-4586-838a-da3dce9edf81 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "aa28bdce-0eba-4f00-a4f5-954f6254edd9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.616 226833 DEBUG oslo_concurrency.lockutils [None req-216626c8-5ba2-4586-838a-da3dce9edf81 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "aa28bdce-0eba-4f00-a4f5-954f6254edd9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.617 226833 DEBUG oslo_concurrency.lockutils [None req-216626c8-5ba2-4586-838a-da3dce9edf81 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "aa28bdce-0eba-4f00-a4f5-954f6254edd9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.617 226833 DEBUG oslo_concurrency.lockutils [None req-216626c8-5ba2-4586-838a-da3dce9edf81 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "aa28bdce-0eba-4f00-a4f5-954f6254edd9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.618 226833 INFO nova.compute.manager [None req-216626c8-5ba2-4586-838a-da3dce9edf81 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Terminating instance
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.620 226833 DEBUG nova.compute.manager [None req-216626c8-5ba2-4586-838a-da3dce9edf81 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 09:18:56 compute-2 kernel: tapa1a63aa0-84 (unregistering): left promiscuous mode
Jan 31 09:18:56 compute-2 NetworkManager[48999]: <info>  [1769851136.6901] device (tapa1a63aa0-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.694 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:56 compute-2 ovn_controller[133834]: 2026-01-31T09:18:56Z|00884|binding|INFO|Releasing lport a1a63aa0-845f-4e0f-ac60-f8509b84c41d from this chassis (sb_readonly=0)
Jan 31 09:18:56 compute-2 ovn_controller[133834]: 2026-01-31T09:18:56Z|00885|binding|INFO|Setting lport a1a63aa0-845f-4e0f-ac60-f8509b84c41d down in Southbound
Jan 31 09:18:56 compute-2 ovn_controller[133834]: 2026-01-31T09:18:56Z|00886|binding|INFO|Removing iface tapa1a63aa0-84 ovn-installed in OVS
Jan 31 09:18:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:56.700 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:00:75 10.100.0.9'], port_security=['fa:16:3e:c9:00:75 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'aa28bdce-0eba-4f00-a4f5-954f6254edd9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c9ca540-57e7-412d-8ef3-af923db0a265', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98d10c0290e340a08e9d1726bf0066bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '11348f26-2c0a-4b92-a927-856bca145e48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e88fefe4-17cc-4664-bc86-8614a5f025ec, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>], logical_port=a1a63aa0-845f-4e0f-ac60-f8509b84c41d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f54849eaac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:18:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:56.702 143841 INFO neutron.agent.ovn.metadata.agent [-] Port a1a63aa0-845f-4e0f-ac60-f8509b84c41d in datapath 5c9ca540-57e7-412d-8ef3-af923db0a265 unbound from our chassis
Jan 31 09:18:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:56.704 143841 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c9ca540-57e7-412d-8ef3-af923db0a265, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.706 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:56.706 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[276c8566-66e2-401a-87cf-13eb51a5a649]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:18:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:56.706 143841 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 namespace which is not needed anymore
Jan 31 09:18:56 compute-2 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000e0.scope: Deactivated successfully.
Jan 31 09:18:56 compute-2 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000e0.scope: Consumed 13.982s CPU time.
Jan 31 09:18:56 compute-2 systemd-machined[195142]: Machine qemu-99-instance-000000e0 terminated.
Jan 31 09:18:56 compute-2 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[351179]: [NOTICE]   (351183) : haproxy version is 2.8.14-c23fe91
Jan 31 09:18:56 compute-2 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[351179]: [NOTICE]   (351183) : path to executable is /usr/sbin/haproxy
Jan 31 09:18:56 compute-2 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[351179]: [WARNING]  (351183) : Exiting Master process...
Jan 31 09:18:56 compute-2 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[351179]: [WARNING]  (351183) : Exiting Master process...
Jan 31 09:18:56 compute-2 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[351179]: [ALERT]    (351183) : Current worker (351185) exited with code 143 (Terminated)
Jan 31 09:18:56 compute-2 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[351179]: [WARNING]  (351183) : All workers exited. Exiting... (0)
Jan 31 09:18:56 compute-2 systemd[1]: libpod-f68a6b6073100e75ba91ef4398c6de640275c4f2810258d60719d6287b595194.scope: Deactivated successfully.
Jan 31 09:18:56 compute-2 podman[351560]: 2026-01-31 09:18:56.822559632 +0000 UTC m=+0.040217805 container died f68a6b6073100e75ba91ef4398c6de640275c4f2810258d60719d6287b595194 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:18:56 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f68a6b6073100e75ba91ef4398c6de640275c4f2810258d60719d6287b595194-userdata-shm.mount: Deactivated successfully.
Jan 31 09:18:56 compute-2 systemd[1]: var-lib-containers-storage-overlay-c86646410ff83aae6a91579eebf7accf46d8d1f5ad339aa3f9fa608f63a637d9-merged.mount: Deactivated successfully.
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.856 226833 INFO nova.virt.libvirt.driver [-] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Instance destroyed successfully.
Jan 31 09:18:56 compute-2 podman[351560]: 2026-01-31 09:18:56.85744485 +0000 UTC m=+0.075103023 container cleanup f68a6b6073100e75ba91ef4398c6de640275c4f2810258d60719d6287b595194 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.857 226833 DEBUG nova.objects.instance [None req-216626c8-5ba2-4586-838a-da3dce9edf81 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lazy-loading 'resources' on Instance uuid aa28bdce-0eba-4f00-a4f5-954f6254edd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:18:56 compute-2 systemd[1]: libpod-conmon-f68a6b6073100e75ba91ef4398c6de640275c4f2810258d60719d6287b595194.scope: Deactivated successfully.
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.882 226833 DEBUG nova.virt.libvirt.vif [None req-216626c8-5ba2-4586-838a-da3dce9edf81 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:18:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1926695861',display_name='tempest-TestVolumeBootPattern-server-1926695861',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1926695861',id=224,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGDAfqvbo84AGo7M8dHeUcfV7f8XDMsbbH3qvJ7QZYUuK8Mi+q4nVx9EyFqWsp6cOXT2AG4HbgkO3dUTMLlMtCu+TvTdRzopwn8vz5la3KIOsONTeEClwFEs29TOnQ3Rwg==',key_name='tempest-TestVolumeBootPattern-498126782',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:18:35Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='98d10c0290e340a08e9d1726bf0066bf',ramdisk_id='',reservation_id='r-jqlyu20t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1294459393',owner_user_name='tempest-TestVolumeBootPattern-1294459393-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:18:35Z,user_data=None,user_id='ecd39871d7fd438f88b36601f25d6eb6',uuid=aa28bdce-0eba-4f00-a4f5-954f6254edd9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a1a63aa0-845f-4e0f-ac60-f8509b84c41d", "address": "fa:16:3e:c9:00:75", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a63aa0-84", "ovs_interfaceid": "a1a63aa0-845f-4e0f-ac60-f8509b84c41d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.884 226833 DEBUG nova.network.os_vif_util [None req-216626c8-5ba2-4586-838a-da3dce9edf81 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converting VIF {"id": "a1a63aa0-845f-4e0f-ac60-f8509b84c41d", "address": "fa:16:3e:c9:00:75", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a63aa0-84", "ovs_interfaceid": "a1a63aa0-845f-4e0f-ac60-f8509b84c41d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.885 226833 DEBUG nova.network.os_vif_util [None req-216626c8-5ba2-4586-838a-da3dce9edf81 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c9:00:75,bridge_name='br-int',has_traffic_filtering=True,id=a1a63aa0-845f-4e0f-ac60-f8509b84c41d,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1a63aa0-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.885 226833 DEBUG os_vif [None req-216626c8-5ba2-4586-838a-da3dce9edf81 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c9:00:75,bridge_name='br-int',has_traffic_filtering=True,id=a1a63aa0-845f-4e0f-ac60-f8509b84c41d,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1a63aa0-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.888 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.888 226833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1a63aa0-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.890 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.891 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.894 226833 INFO os_vif [None req-216626c8-5ba2-4586-838a-da3dce9edf81 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c9:00:75,bridge_name='br-int',has_traffic_filtering=True,id=a1a63aa0-845f-4e0f-ac60-f8509b84c41d,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1a63aa0-84')
Jan 31 09:18:56 compute-2 podman[351601]: 2026-01-31 09:18:56.924326949 +0000 UTC m=+0.044319866 container remove f68a6b6073100e75ba91ef4398c6de640275c4f2810258d60719d6287b595194 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.926 226833 DEBUG nova.compute.manager [req-ee01e8c9-cb51-47b1-bc40-0dc882e684d5 req-2de4a8a2-0258-41ed-906d-bc9b53c5a6bf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Received event network-vif-unplugged-a1a63aa0-845f-4e0f-ac60-f8509b84c41d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.926 226833 DEBUG oslo_concurrency.lockutils [req-ee01e8c9-cb51-47b1-bc40-0dc882e684d5 req-2de4a8a2-0258-41ed-906d-bc9b53c5a6bf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "aa28bdce-0eba-4f00-a4f5-954f6254edd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.927 226833 DEBUG oslo_concurrency.lockutils [req-ee01e8c9-cb51-47b1-bc40-0dc882e684d5 req-2de4a8a2-0258-41ed-906d-bc9b53c5a6bf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aa28bdce-0eba-4f00-a4f5-954f6254edd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.927 226833 DEBUG oslo_concurrency.lockutils [req-ee01e8c9-cb51-47b1-bc40-0dc882e684d5 req-2de4a8a2-0258-41ed-906d-bc9b53c5a6bf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aa28bdce-0eba-4f00-a4f5-954f6254edd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.927 226833 DEBUG nova.compute.manager [req-ee01e8c9-cb51-47b1-bc40-0dc882e684d5 req-2de4a8a2-0258-41ed-906d-bc9b53c5a6bf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] No waiting events found dispatching network-vif-unplugged-a1a63aa0-845f-4e0f-ac60-f8509b84c41d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.928 226833 DEBUG nova.compute.manager [req-ee01e8c9-cb51-47b1-bc40-0dc882e684d5 req-2de4a8a2-0258-41ed-906d-bc9b53c5a6bf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Received event network-vif-unplugged-a1a63aa0-845f-4e0f-ac60-f8509b84c41d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 09:18:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:56.928 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed6170c-1294-4861-98b2-de42fc4e6fd6]: (4, ('Sat Jan 31 09:18:56 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 (f68a6b6073100e75ba91ef4398c6de640275c4f2810258d60719d6287b595194)\nf68a6b6073100e75ba91ef4398c6de640275c4f2810258d60719d6287b595194\nSat Jan 31 09:18:56 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 (f68a6b6073100e75ba91ef4398c6de640275c4f2810258d60719d6287b595194)\nf68a6b6073100e75ba91ef4398c6de640275c4f2810258d60719d6287b595194\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:18:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:56.929 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[12e0552e-1d10-4d12-a424-d61422477de5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:18:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:56.930 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c9ca540-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.932 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:56 compute-2 kernel: tap5c9ca540-50: left promiscuous mode
Jan 31 09:18:56 compute-2 nova_compute[226829]: 2026-01-31 09:18:56.937 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:56.940 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[4a9d4fb1-d898-4e42-a6bd-597c68d0e333]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:18:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:56.954 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[98874d1b-541a-4bf7-9012-571d4fbefbf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:18:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:56.955 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[99ac77ad-3496-47a8-92fb-3f3c4a73972d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:18:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:56.965 230393 DEBUG oslo.privsep.daemon [-] privsep: reply[304a84ab-a64e-4188-a156-b36f6d7a0b98]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1136672, 'reachable_time': 21653, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351635, 'error': None, 'target': 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:18:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:56.968 143954 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 09:18:56 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:18:56.968 143954 DEBUG oslo.privsep.daemon [-] privsep: reply[1c0bf4aa-2d7b-4e6d-beb1-8be07355fb86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:18:56 compute-2 systemd[1]: run-netns-ovnmeta\x2d5c9ca540\x2d57e7\x2d412d\x2d8ef3\x2daf923db0a265.mount: Deactivated successfully.
Jan 31 09:18:57 compute-2 nova_compute[226829]: 2026-01-31 09:18:57.399 226833 INFO nova.virt.libvirt.driver [None req-216626c8-5ba2-4586-838a-da3dce9edf81 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Deleting instance files /var/lib/nova/instances/aa28bdce-0eba-4f00-a4f5-954f6254edd9_del
Jan 31 09:18:57 compute-2 nova_compute[226829]: 2026-01-31 09:18:57.400 226833 INFO nova.virt.libvirt.driver [None req-216626c8-5ba2-4586-838a-da3dce9edf81 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Deletion of /var/lib/nova/instances/aa28bdce-0eba-4f00-a4f5-954f6254edd9_del complete
Jan 31 09:18:57 compute-2 nova_compute[226829]: 2026-01-31 09:18:57.458 226833 INFO nova.compute.manager [None req-216626c8-5ba2-4586-838a-da3dce9edf81 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Took 0.84 seconds to destroy the instance on the hypervisor.
Jan 31 09:18:57 compute-2 nova_compute[226829]: 2026-01-31 09:18:57.458 226833 DEBUG oslo.service.loopingcall [None req-216626c8-5ba2-4586-838a-da3dce9edf81 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 09:18:57 compute-2 nova_compute[226829]: 2026-01-31 09:18:57.459 226833 DEBUG nova.compute.manager [-] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 09:18:57 compute-2 nova_compute[226829]: 2026-01-31 09:18:57.459 226833 DEBUG nova.network.neutron [-] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 09:18:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:18:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:57.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:57 compute-2 ceph-mon[77282]: pgmap v4429: 305 pgs: 305 active+clean; 216 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 499 KiB/s wr, 54 op/s
Jan 31 09:18:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:18:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:57.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:18:58 compute-2 nova_compute[226829]: 2026-01-31 09:18:58.296 226833 DEBUG nova.network.neutron [req-ed48bc5f-4b7a-4cff-816a-2a3e4174a17b req-3c4e0b46-0879-4d91-b3f9-d328344106b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Updated VIF entry in instance network info cache for port a1a63aa0-845f-4e0f-ac60-f8509b84c41d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:18:58 compute-2 nova_compute[226829]: 2026-01-31 09:18:58.297 226833 DEBUG nova.network.neutron [req-ed48bc5f-4b7a-4cff-816a-2a3e4174a17b req-3c4e0b46-0879-4d91-b3f9-d328344106b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Updating instance_info_cache with network_info: [{"id": "a1a63aa0-845f-4e0f-ac60-f8509b84c41d", "address": "fa:16:3e:c9:00:75", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a63aa0-84", "ovs_interfaceid": "a1a63aa0-845f-4e0f-ac60-f8509b84c41d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:18:58 compute-2 nova_compute[226829]: 2026-01-31 09:18:58.324 226833 DEBUG oslo_concurrency.lockutils [req-ed48bc5f-4b7a-4cff-816a-2a3e4174a17b req-3c4e0b46-0879-4d91-b3f9-d328344106b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-aa28bdce-0eba-4f00-a4f5-954f6254edd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:18:58 compute-2 nova_compute[226829]: 2026-01-31 09:18:58.603 226833 DEBUG nova.network.neutron [-] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:18:58 compute-2 nova_compute[226829]: 2026-01-31 09:18:58.621 226833 INFO nova.compute.manager [-] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Took 1.16 seconds to deallocate network for instance.
Jan 31 09:18:58 compute-2 nova_compute[226829]: 2026-01-31 09:18:58.672 226833 DEBUG nova.compute.manager [req-5c4875dc-6f64-4f26-a0b9-585511fd3f57 req-c3998da5-ac04-4439-a2a4-6ae99303ff00 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Received event network-vif-deleted-a1a63aa0-845f-4e0f-ac60-f8509b84c41d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:18:58 compute-2 nova_compute[226829]: 2026-01-31 09:18:58.848 226833 INFO nova.compute.manager [None req-216626c8-5ba2-4586-838a-da3dce9edf81 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Took 0.23 seconds to detach 1 volumes for instance.
Jan 31 09:18:58 compute-2 nova_compute[226829]: 2026-01-31 09:18:58.891 226833 DEBUG oslo_concurrency.lockutils [None req-216626c8-5ba2-4586-838a-da3dce9edf81 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:18:58 compute-2 nova_compute[226829]: 2026-01-31 09:18:58.892 226833 DEBUG oslo_concurrency.lockutils [None req-216626c8-5ba2-4586-838a-da3dce9edf81 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:18:58 compute-2 nova_compute[226829]: 2026-01-31 09:18:58.941 226833 DEBUG oslo_concurrency.processutils [None req-216626c8-5ba2-4586-838a-da3dce9edf81 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:18:59 compute-2 nova_compute[226829]: 2026-01-31 09:18:59.006 226833 DEBUG nova.compute.manager [req-a9823955-34df-44d4-9ffc-5fad3c7744f0 req-c09acb0e-fa9f-4b5e-88a7-701a97c85cec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Received event network-vif-plugged-a1a63aa0-845f-4e0f-ac60-f8509b84c41d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:18:59 compute-2 nova_compute[226829]: 2026-01-31 09:18:59.007 226833 DEBUG oslo_concurrency.lockutils [req-a9823955-34df-44d4-9ffc-5fad3c7744f0 req-c09acb0e-fa9f-4b5e-88a7-701a97c85cec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "aa28bdce-0eba-4f00-a4f5-954f6254edd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:18:59 compute-2 nova_compute[226829]: 2026-01-31 09:18:59.007 226833 DEBUG oslo_concurrency.lockutils [req-a9823955-34df-44d4-9ffc-5fad3c7744f0 req-c09acb0e-fa9f-4b5e-88a7-701a97c85cec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aa28bdce-0eba-4f00-a4f5-954f6254edd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:18:59 compute-2 nova_compute[226829]: 2026-01-31 09:18:59.008 226833 DEBUG oslo_concurrency.lockutils [req-a9823955-34df-44d4-9ffc-5fad3c7744f0 req-c09acb0e-fa9f-4b5e-88a7-701a97c85cec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "aa28bdce-0eba-4f00-a4f5-954f6254edd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:18:59 compute-2 nova_compute[226829]: 2026-01-31 09:18:59.008 226833 DEBUG nova.compute.manager [req-a9823955-34df-44d4-9ffc-5fad3c7744f0 req-c09acb0e-fa9f-4b5e-88a7-701a97c85cec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] No waiting events found dispatching network-vif-plugged-a1a63aa0-845f-4e0f-ac60-f8509b84c41d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:18:59 compute-2 nova_compute[226829]: 2026-01-31 09:18:59.008 226833 WARNING nova.compute.manager [req-a9823955-34df-44d4-9ffc-5fad3c7744f0 req-c09acb0e-fa9f-4b5e-88a7-701a97c85cec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Received unexpected event network-vif-plugged-a1a63aa0-845f-4e0f-ac60-f8509b84c41d for instance with vm_state deleted and task_state None.
Jan 31 09:18:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:18:59 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1164652490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:18:59 compute-2 nova_compute[226829]: 2026-01-31 09:18:59.374 226833 DEBUG oslo_concurrency.processutils [None req-216626c8-5ba2-4586-838a-da3dce9edf81 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:18:59 compute-2 nova_compute[226829]: 2026-01-31 09:18:59.380 226833 DEBUG nova.compute.provider_tree [None req-216626c8-5ba2-4586-838a-da3dce9edf81 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:18:59 compute-2 nova_compute[226829]: 2026-01-31 09:18:59.395 226833 DEBUG nova.scheduler.client.report [None req-216626c8-5ba2-4586-838a-da3dce9edf81 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:18:59 compute-2 nova_compute[226829]: 2026-01-31 09:18:59.419 226833 DEBUG oslo_concurrency.lockutils [None req-216626c8-5ba2-4586-838a-da3dce9edf81 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.527s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:18:59 compute-2 nova_compute[226829]: 2026-01-31 09:18:59.464 226833 INFO nova.scheduler.client.report [None req-216626c8-5ba2-4586-838a-da3dce9edf81 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Deleted allocations for instance aa28bdce-0eba-4f00-a4f5-954f6254edd9
Jan 31 09:18:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:59.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:59 compute-2 nova_compute[226829]: 2026-01-31 09:18:59.538 226833 DEBUG oslo_concurrency.lockutils [None req-216626c8-5ba2-4586-838a-da3dce9edf81 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "aa28bdce-0eba-4f00-a4f5-954f6254edd9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.922s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:18:59 compute-2 ceph-mon[77282]: pgmap v4430: 305 pgs: 305 active+clean; 220 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 581 KiB/s wr, 60 op/s
Jan 31 09:18:59 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1164652490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:18:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:18:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:59.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 09:19:00 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/822958513' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:19:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 09:19:00 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/822958513' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:19:00 compute-2 nova_compute[226829]: 2026-01-31 09:19:00.989 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:19:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:01.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:19:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:01.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:01 compute-2 ceph-mon[77282]: pgmap v4431: 305 pgs: 305 active+clean; 220 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 523 KiB/s wr, 61 op/s
Jan 31 09:19:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/822958513' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:19:01 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/822958513' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:19:01 compute-2 nova_compute[226829]: 2026-01-31 09:19:01.929 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:02 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:19:02.316 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '118'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:19:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:19:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e431 e431: 3 total, 3 up, 3 in
Jan 31 09:19:03 compute-2 ceph-mon[77282]: pgmap v4432: 305 pgs: 305 active+clean; 212 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 815 KiB/s rd, 498 KiB/s wr, 68 op/s
Jan 31 09:19:03 compute-2 ceph-mon[77282]: osdmap e431: 3 total, 3 up, 3 in
Jan 31 09:19:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:19:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:03.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:19:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:19:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:03.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:19:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:05.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:05.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:05 compute-2 ceph-mon[77282]: pgmap v4434: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 363 KiB/s rd, 106 KiB/s wr, 53 op/s
Jan 31 09:19:05 compute-2 nova_compute[226829]: 2026-01-31 09:19:05.991 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:06 compute-2 nova_compute[226829]: 2026-01-31 09:19:06.930 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:19:06.968 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:19:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:19:06.969 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:19:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:19:06.969 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:19:07 compute-2 ceph-mon[77282]: pgmap v4435: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 313 KiB/s rd, 107 KiB/s wr, 65 op/s
Jan 31 09:19:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:19:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:07.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:07.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:19:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:09.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:19:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:09.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:09 compute-2 ceph-mon[77282]: pgmap v4436: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 129 KiB/s rd, 9.4 KiB/s wr, 67 op/s
Jan 31 09:19:10 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e432 e432: 3 total, 3 up, 3 in
Jan 31 09:19:10 compute-2 nova_compute[226829]: 2026-01-31 09:19:10.993 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:11 compute-2 ceph-mon[77282]: pgmap v4437: 305 pgs: 305 active+clean; 201 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 48 KiB/s rd, 4.2 KiB/s wr, 62 op/s
Jan 31 09:19:11 compute-2 ceph-mon[77282]: osdmap e432: 3 total, 3 up, 3 in
Jan 31 09:19:11 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3503540223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:19:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:11.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:11.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:11 compute-2 nova_compute[226829]: 2026-01-31 09:19:11.853 226833 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769851136.8488855, aa28bdce-0eba-4f00-a4f5-954f6254edd9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:19:11 compute-2 nova_compute[226829]: 2026-01-31 09:19:11.853 226833 INFO nova.compute.manager [-] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] VM Stopped (Lifecycle Event)
Jan 31 09:19:11 compute-2 nova_compute[226829]: 2026-01-31 09:19:11.935 226833 DEBUG nova.compute.manager [None req-478409d7-387a-4d5c-a781-dd0cdf8141fd - - - - - -] [instance: aa28bdce-0eba-4f00-a4f5-954f6254edd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:19:11 compute-2 nova_compute[226829]: 2026-01-31 09:19:11.958 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:11 compute-2 sudo[351667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:19:11 compute-2 sudo[351667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:11 compute-2 sudo[351667]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:12 compute-2 sudo[351692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:19:12 compute-2 sudo[351692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:12 compute-2 sudo[351692]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:12 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:19:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:13.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:13.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:14 compute-2 ceph-mon[77282]: pgmap v4439: 305 pgs: 305 active+clean; 201 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 1.9 KiB/s wr, 37 op/s
Jan 31 09:19:15 compute-2 ceph-mon[77282]: pgmap v4440: 305 pgs: 305 active+clean; 201 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 29 op/s
Jan 31 09:19:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:19:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:15.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:19:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:15.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:15 compute-2 nova_compute[226829]: 2026-01-31 09:19:15.995 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:16 compute-2 nova_compute[226829]: 2026-01-31 09:19:16.961 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:17 compute-2 ceph-mon[77282]: pgmap v4441: 305 pgs: 305 active+clean; 168 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Jan 31 09:19:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:17.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:19:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:17.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/314481979' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:19:18 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/314481979' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:19:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:19.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:19.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:20 compute-2 ceph-mon[77282]: pgmap v4442: 305 pgs: 305 active+clean; 142 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 31 09:19:20 compute-2 podman[351721]: 2026-01-31 09:19:20.178861793 +0000 UTC m=+0.069250554 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 09:19:20 compute-2 nova_compute[226829]: 2026-01-31 09:19:20.997 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:21 compute-2 ceph-mon[77282]: pgmap v4443: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 31 09:19:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:21.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:21.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:22 compute-2 nova_compute[226829]: 2026-01-31 09:19:22.005 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:19:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:23.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:23.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:23 compute-2 ceph-mon[77282]: pgmap v4444: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Jan 31 09:19:24 compute-2 nova_compute[226829]: 2026-01-31 09:19:24.278 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:24 compute-2 nova_compute[226829]: 2026-01-31 09:19:24.336 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:25 compute-2 ceph-mon[77282]: pgmap v4445: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 17 op/s
Jan 31 09:19:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:25.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:25.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:26 compute-2 nova_compute[226829]: 2026-01-31 09:19:25.999 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:26 compute-2 podman[351751]: 2026-01-31 09:19:26.192337088 +0000 UTC m=+0.078966169 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 09:19:27 compute-2 nova_compute[226829]: 2026-01-31 09:19:27.007 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:27 compute-2 ceph-mon[77282]: pgmap v4446: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 16 op/s
Jan 31 09:19:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:27.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:19:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:27.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:29 compute-2 ceph-mon[77282]: pgmap v4447: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 255 B/s rd, 0 B/s wr, 1 op/s
Jan 31 09:19:29 compute-2 nova_compute[226829]: 2026-01-31 09:19:29.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:19:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:19:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:29.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:19:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:29.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:30 compute-2 nova_compute[226829]: 2026-01-31 09:19:30.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:19:31 compute-2 nova_compute[226829]: 2026-01-31 09:19:31.002 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:31.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:31.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:31 compute-2 ceph-mon[77282]: pgmap v4448: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 255 B/s rd, 0 B/s wr, 1 op/s
Jan 31 09:19:32 compute-2 nova_compute[226829]: 2026-01-31 09:19:32.010 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:32 compute-2 sudo[351774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:19:32 compute-2 sudo[351774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:32 compute-2 sudo[351774]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:32 compute-2 sudo[351799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:19:32 compute-2 sudo[351799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:32 compute-2 sudo[351799]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:32 compute-2 nova_compute[226829]: 2026-01-31 09:19:32.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:19:32 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:19:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:33.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:33.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:33 compute-2 ceph-mon[77282]: pgmap v4449: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 31 09:19:33 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4079400756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:19:34 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2578349354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:19:35 compute-2 nova_compute[226829]: 2026-01-31 09:19:35.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:19:35 compute-2 nova_compute[226829]: 2026-01-31 09:19:35.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:19:35 compute-2 nova_compute[226829]: 2026-01-31 09:19:35.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:19:35 compute-2 nova_compute[226829]: 2026-01-31 09:19:35.512 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:19:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:19:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:35.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:19:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:19:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:35.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:19:35 compute-2 ceph-mon[77282]: pgmap v4450: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:36 compute-2 nova_compute[226829]: 2026-01-31 09:19:36.005 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:36 compute-2 nova_compute[226829]: 2026-01-31 09:19:36.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:19:37 compute-2 nova_compute[226829]: 2026-01-31 09:19:37.011 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:37 compute-2 nova_compute[226829]: 2026-01-31 09:19:37.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:19:37 compute-2 nova_compute[226829]: 2026-01-31 09:19:37.523 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:19:37 compute-2 nova_compute[226829]: 2026-01-31 09:19:37.524 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:19:37 compute-2 nova_compute[226829]: 2026-01-31 09:19:37.525 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:19:37 compute-2 nova_compute[226829]: 2026-01-31 09:19:37.525 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:19:37 compute-2 nova_compute[226829]: 2026-01-31 09:19:37.525 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:19:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:37.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:19:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:37.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:37 compute-2 ceph-mon[77282]: pgmap v4451: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:19:37 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/766887324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:19:37 compute-2 nova_compute[226829]: 2026-01-31 09:19:37.959 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:19:38 compute-2 nova_compute[226829]: 2026-01-31 09:19:38.122 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:19:38 compute-2 nova_compute[226829]: 2026-01-31 09:19:38.123 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3973MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:19:38 compute-2 nova_compute[226829]: 2026-01-31 09:19:38.124 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:19:38 compute-2 nova_compute[226829]: 2026-01-31 09:19:38.124 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:19:38 compute-2 nova_compute[226829]: 2026-01-31 09:19:38.563 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:19:38 compute-2 nova_compute[226829]: 2026-01-31 09:19:38.564 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:19:38 compute-2 nova_compute[226829]: 2026-01-31 09:19:38.593 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:19:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/766887324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:19:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3719793373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:19:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/676889966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:19:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:19:39 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2518777605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:19:39 compute-2 nova_compute[226829]: 2026-01-31 09:19:39.071 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:19:39 compute-2 nova_compute[226829]: 2026-01-31 09:19:39.078 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:19:39 compute-2 nova_compute[226829]: 2026-01-31 09:19:39.100 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:19:39 compute-2 nova_compute[226829]: 2026-01-31 09:19:39.123 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:19:39 compute-2 nova_compute[226829]: 2026-01-31 09:19:39.124 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:19:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:39.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:39.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:40 compute-2 ceph-mon[77282]: pgmap v4452: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2518777605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:19:41 compute-2 ceph-mon[77282]: pgmap v4453: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:41 compute-2 nova_compute[226829]: 2026-01-31 09:19:41.040 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:19:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:41.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:19:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:19:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:41.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:19:42 compute-2 nova_compute[226829]: 2026-01-31 09:19:42.013 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:42 compute-2 nova_compute[226829]: 2026-01-31 09:19:42.124 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:19:42 compute-2 nova_compute[226829]: 2026-01-31 09:19:42.125 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #217. Immutable memtables: 0.
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:19:42.373203) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:856] [default] [JOB 139] Flushing memtable with next log file: 217
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851182373279, "job": 139, "event": "flush_started", "num_memtables": 1, "num_entries": 2381, "num_deletes": 252, "total_data_size": 5866596, "memory_usage": 5932456, "flush_reason": "Manual Compaction"}
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:885] [default] [JOB 139] Level-0 flush table #218: started
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851182417958, "cf_name": "default", "job": 139, "event": "table_file_creation", "file_number": 218, "file_size": 3837073, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 103854, "largest_seqno": 106230, "table_properties": {"data_size": 3827441, "index_size": 6125, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19746, "raw_average_key_size": 20, "raw_value_size": 3808261, "raw_average_value_size": 3946, "num_data_blocks": 267, "num_entries": 965, "num_filter_entries": 965, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850963, "oldest_key_time": 1769850963, "file_creation_time": 1769851182, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 218, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 139] Flush lasted 44963 microseconds, and 6527 cpu microseconds.
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:19:42.418166) [db/flush_job.cc:967] [default] [JOB 139] Level-0 flush table #218: 3837073 bytes OK
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:19:42.418189) [db/memtable_list.cc:519] [default] Level-0 commit table #218 started
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:19:42.448480) [db/memtable_list.cc:722] [default] Level-0 commit table #218: memtable #1 done
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:19:42.448526) EVENT_LOG_v1 {"time_micros": 1769851182448516, "job": 139, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:19:42.448548) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 139] Try to delete WAL files size 5856265, prev total WAL file size 5856265, number of live WAL files 2.
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000214.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:19:42.449952) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039323837' seq:72057594037927935, type:22 .. '7061786F730039353339' seq:0, type:0; will stop at (end)
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 140] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 139 Base level 0, inputs: [218(3747KB)], [216(12MB)]
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851182450664, "job": 140, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [218], "files_L6": [216], "score": -1, "input_data_size": 16718884, "oldest_snapshot_seqno": -1}
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 140] Generated table #219: 12618 keys, 14724814 bytes, temperature: kUnknown
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851182601199, "cf_name": "default", "job": 140, "event": "table_file_creation", "file_number": 219, "file_size": 14724814, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14645707, "index_size": 46482, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31557, "raw_key_size": 334173, "raw_average_key_size": 26, "raw_value_size": 14427920, "raw_average_value_size": 1143, "num_data_blocks": 1760, "num_entries": 12618, "num_filter_entries": 12618, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843219, "oldest_key_time": 0, "file_creation_time": 1769851182, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5168d376-4658-426b-b196-7d1a850c0693", "db_session_id": "XEFS1N7FWZKLAXQ54AIG", "orig_file_number": 219, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:19:42.601482) [db/compaction/compaction_job.cc:1663] [default] [JOB 140] Compacted 1@0 + 1@6 files to L6 => 14724814 bytes
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:19:42.606241) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.0 rd, 97.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 12.3 +0.0 blob) out(14.0 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 13141, records dropped: 523 output_compression: NoCompression
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:19:42.606269) EVENT_LOG_v1 {"time_micros": 1769851182606257, "job": 140, "event": "compaction_finished", "compaction_time_micros": 150613, "compaction_time_cpu_micros": 64729, "output_level": 6, "num_output_files": 1, "total_output_size": 14724814, "num_input_records": 13141, "num_output_records": 12618, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000218.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851182606688, "job": 140, "event": "table_file_deletion", "file_number": 218}
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000216.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851182607816, "job": 140, "event": "table_file_deletion", "file_number": 216}
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:19:42.449387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:19:42.607878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:19:42.607885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:19:42.607887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:19:42.607889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:19:42 compute-2 ceph-mon[77282]: rocksdb: (Original Log Time 2026/01/31-09:19:42.607891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:19:42 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:19:43 compute-2 ceph-mon[77282]: pgmap v4454: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:43.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:19:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:43.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:19:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:45.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:45.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:45 compute-2 ceph-mon[77282]: pgmap v4455: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:46 compute-2 nova_compute[226829]: 2026-01-31 09:19:46.040 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:47 compute-2 nova_compute[226829]: 2026-01-31 09:19:47.016 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:19:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:47.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:19:47 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:19:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:47.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:47 compute-2 ceph-mon[77282]: pgmap v4456: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:49 compute-2 sudo[351878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:19:49 compute-2 sudo[351878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:49 compute-2 sudo[351878]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:49 compute-2 sudo[351903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:19:49 compute-2 sudo[351903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:49 compute-2 sudo[351903]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:49 compute-2 sudo[351928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:19:49 compute-2 sudo[351928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:49 compute-2 sudo[351928]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:49.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:49 compute-2 sudo[351953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 09:19:49 compute-2 sudo[351953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:49.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:49 compute-2 ceph-mon[77282]: pgmap v4457: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:49 compute-2 podman[352050]: 2026-01-31 09:19:49.989198008 +0000 UTC m=+0.052041025 container exec 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:19:50 compute-2 podman[352050]: 2026-01-31 09:19:50.0793611 +0000 UTC m=+0.142204097 container exec_died 630bbce25a07ce1c479ce9b0562d3aa731c6db71de6a37e73bdc63dfd192de67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 09:19:50 compute-2 podman[352105]: 2026-01-31 09:19:50.276699496 +0000 UTC m=+0.068222057 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:19:50 compute-2 podman[352234]: 2026-01-31 09:19:50.616726832 +0000 UTC m=+0.078321011 container exec f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 09:19:50 compute-2 podman[352257]: 2026-01-31 09:19:50.681250336 +0000 UTC m=+0.050382911 container exec_died f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 09:19:50 compute-2 podman[352234]: 2026-01-31 09:19:50.68727029 +0000 UTC m=+0.148864469 container exec_died f9d7cd87be088a25ec0890f775ce1057688984651c8395c4246622df61a072ff (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-2-envbir)
Jan 31 09:19:50 compute-2 podman[352300]: 2026-01-31 09:19:50.862833324 +0000 UTC m=+0.052643413 container exec 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, architecture=x86_64, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, version=2.2.4, release=1793, description=keepalived for Ceph, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 31 09:19:50 compute-2 podman[352300]: 2026-01-31 09:19:50.877510852 +0000 UTC m=+0.067320872 container exec_died 59dbac992ce9d303969ecd7a68527a92cafb4298efe799f24ed62dcc826a7b68 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-2-faavbs, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, release=1793, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=)
Jan 31 09:19:50 compute-2 sudo[351953]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:51 compute-2 nova_compute[226829]: 2026-01-31 09:19:51.042 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:51 compute-2 sudo[352335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:19:51 compute-2 sudo[352335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:51 compute-2 sudo[352335]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:51 compute-2 sudo[352360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:19:51 compute-2 sudo[352360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:51 compute-2 sudo[352360]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:51 compute-2 sudo[352385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:19:51 compute-2 sudo[352385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:51 compute-2 sudo[352385]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:51 compute-2 sudo[352410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:19:51 compute-2 sudo[352410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:19:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:51.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:19:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:51.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:51 compute-2 sudo[352410]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:51 compute-2 ceph-mon[77282]: pgmap v4458: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:51 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:19:51 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:19:51 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:19:51 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:19:52 compute-2 nova_compute[226829]: 2026-01-31 09:19:52.018 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:52 compute-2 sudo[352466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:19:52 compute-2 sudo[352466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:52 compute-2 sudo[352466]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:52 compute-2 sudo[352491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:19:52 compute-2 sudo[352491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:52 compute-2 sudo[352491]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:19:53 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:19:53 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:19:53 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:19:53 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:19:53 compute-2 ceph-mon[77282]: pgmap v4459: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:19:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:53.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:19:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:53.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2690524856' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:19:54 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/2690524856' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:19:55 compute-2 ceph-mon[77282]: pgmap v4460: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:19:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:55.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:19:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:55.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:56 compute-2 nova_compute[226829]: 2026-01-31 09:19:56.043 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:57 compute-2 nova_compute[226829]: 2026-01-31 09:19:57.019 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:57 compute-2 podman[352519]: 2026-01-31 09:19:57.165927754 +0000 UTC m=+0.053479945 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 09:19:57 compute-2 nova_compute[226829]: 2026-01-31 09:19:57.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:19:57 compute-2 nova_compute[226829]: 2026-01-31 09:19:57.488 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:19:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:19:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:57.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:19:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:19:57 compute-2 ceph-mon[77282]: pgmap v4461: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:57.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:59 compute-2 sudo[352540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:19:59 compute-2 sudo[352540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:59 compute-2 sudo[352540]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:59 compute-2 sudo[352565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:19:59 compute-2 sudo[352565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:59 compute-2 sudo[352565]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:59.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:59 compute-2 ceph-mon[77282]: pgmap v4462: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:19:59 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:19:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:19:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:59.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:00 compute-2 ceph-mon[77282]: overall HEALTH_OK
Jan 31 09:20:01 compute-2 nova_compute[226829]: 2026-01-31 09:20:01.045 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:20:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:01.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:20:01 compute-2 ovn_controller[133834]: 2026-01-31T09:20:01Z|00887|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Jan 31 09:20:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:01.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:01 compute-2 ceph-mon[77282]: pgmap v4463: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:02 compute-2 nova_compute[226829]: 2026-01-31 09:20:02.021 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:02 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:20:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:20:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:03.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:20:03 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:03 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:03 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:03.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:03 compute-2 ceph-mon[77282]: pgmap v4464: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:05.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:05 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:05 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:05 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:05.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:05 compute-2 ceph-mon[77282]: pgmap v4465: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:06 compute-2 nova_compute[226829]: 2026-01-31 09:20:06.047 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:20:06.969 143841 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:20:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:20:06.971 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:20:06 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:20:06.971 143841 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:20:07 compute-2 nova_compute[226829]: 2026-01-31 09:20:07.023 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:07.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:07 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:20:07 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:07 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:07 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:07.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:07 compute-2 ceph-mon[77282]: pgmap v4466: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:09 compute-2 ceph-mon[77282]: pgmap v4467: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:09.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:09 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:09 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:09 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:09.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:11 compute-2 nova_compute[226829]: 2026-01-31 09:20:11.087 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:11 compute-2 ceph-mon[77282]: pgmap v4468: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:11.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:11 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:11 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:11 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:11.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:12 compute-2 nova_compute[226829]: 2026-01-31 09:20:12.026 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:12 compute-2 sudo[352597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:20:12 compute-2 sudo[352597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:20:12 compute-2 sudo[352597]: pam_unix(sudo:session): session closed for user root
Jan 31 09:20:12 compute-2 sudo[352622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:20:12 compute-2 sudo[352622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:20:12 compute-2 sudo[352622]: pam_unix(sudo:session): session closed for user root
Jan 31 09:20:12 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:20:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:13.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:13 compute-2 ceph-mon[77282]: pgmap v4469: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:13 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:13 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:13 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:13.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:15 compute-2 nova_compute[226829]: 2026-01-31 09:20:15.484 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:20:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:15.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:15 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:15 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:15 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:15.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:15 compute-2 ceph-mon[77282]: pgmap v4470: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:16 compute-2 nova_compute[226829]: 2026-01-31 09:20:16.091 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:17 compute-2 nova_compute[226829]: 2026-01-31 09:20:17.027 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:17.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:17 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:20:17 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:17 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:20:17 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:17.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:20:17 compute-2 ceph-mon[77282]: pgmap v4471: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:18 compute-2 ceph-mon[77282]: pgmap v4472: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:19.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:19 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:19 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:19 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:19.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:21 compute-2 nova_compute[226829]: 2026-01-31 09:20:21.091 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:21 compute-2 podman[352651]: 2026-01-31 09:20:21.177954799 +0000 UTC m=+0.060646560 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 09:20:21 compute-2 ceph-mon[77282]: pgmap v4473: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:20:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:21.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:20:21 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:21 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:21 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:21.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:22 compute-2 nova_compute[226829]: 2026-01-31 09:20:22.030 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:22 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:20:23 compute-2 ceph-mon[77282]: pgmap v4474: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:20:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:23.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:20:23 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:23 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:23 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:23.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:25 compute-2 ceph-mon[77282]: pgmap v4475: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:20:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:25.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:20:25 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:25 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:25 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:25.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:26 compute-2 nova_compute[226829]: 2026-01-31 09:20:26.092 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:27 compute-2 nova_compute[226829]: 2026-01-31 09:20:27.033 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:27 compute-2 ceph-mon[77282]: pgmap v4476: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:20:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:27.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:20:27 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:20:27 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:27 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:27 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:27.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:28 compute-2 podman[352680]: 2026-01-31 09:20:28.14672863 +0000 UTC m=+0.039326081 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:20:29 compute-2 nova_compute[226829]: 2026-01-31 09:20:29.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:20:29 compute-2 ceph-mon[77282]: pgmap v4477: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:20:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:29.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:20:29 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:29 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:29 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:29.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:31 compute-2 nova_compute[226829]: 2026-01-31 09:20:31.095 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:31 compute-2 nova_compute[226829]: 2026-01-31 09:20:31.483 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:20:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:31.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:31 compute-2 ceph-mon[77282]: pgmap v4478: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:31 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:31 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:31 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:31.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:32 compute-2 nova_compute[226829]: 2026-01-31 09:20:32.035 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:32 compute-2 sudo[352702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:20:32 compute-2 sudo[352702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:20:32 compute-2 sudo[352702]: pam_unix(sudo:session): session closed for user root
Jan 31 09:20:32 compute-2 sudo[352727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:20:32 compute-2 sudo[352727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:20:32 compute-2 sudo[352727]: pam_unix(sudo:session): session closed for user root
Jan 31 09:20:32 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:20:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:33.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:33 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:33 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:33 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:33.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:33 compute-2 ceph-mon[77282]: pgmap v4479: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:34 compute-2 nova_compute[226829]: 2026-01-31 09:20:34.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:20:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:35.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:35 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:35 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:35 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:35.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:35 compute-2 ceph-mon[77282]: pgmap v4480: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:35 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2450555092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:20:36 compute-2 nova_compute[226829]: 2026-01-31 09:20:36.097 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:36 compute-2 sshd-session[352754]: Accepted publickey for zuul from 192.168.122.10 port 35902 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 09:20:36 compute-2 systemd-logind[801]: New session 69 of user zuul.
Jan 31 09:20:36 compute-2 systemd[1]: Started Session 69 of User zuul.
Jan 31 09:20:36 compute-2 sshd-session[352754]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:20:36 compute-2 sudo[352758]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 31 09:20:36 compute-2 sudo[352758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:20:37 compute-2 nova_compute[226829]: 2026-01-31 09:20:37.036 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:37 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2351824196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:20:37 compute-2 nova_compute[226829]: 2026-01-31 09:20:37.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:20:37 compute-2 nova_compute[226829]: 2026-01-31 09:20:37.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:20:37 compute-2 nova_compute[226829]: 2026-01-31 09:20:37.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:20:37 compute-2 nova_compute[226829]: 2026-01-31 09:20:37.539 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:20:37 compute-2 nova_compute[226829]: 2026-01-31 09:20:37.539 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:20:37 compute-2 nova_compute[226829]: 2026-01-31 09:20:37.539 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:20:37 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:20:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:37.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:37 compute-2 nova_compute[226829]: 2026-01-31 09:20:37.691 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:20:37 compute-2 nova_compute[226829]: 2026-01-31 09:20:37.692 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:20:37 compute-2 nova_compute[226829]: 2026-01-31 09:20:37.693 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:20:37 compute-2 nova_compute[226829]: 2026-01-31 09:20:37.693 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:20:37 compute-2 nova_compute[226829]: 2026-01-31 09:20:37.693 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:20:37 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:37 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:37 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:37.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:38 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:20:38 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2864978510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:20:38 compute-2 nova_compute[226829]: 2026-01-31 09:20:38.115 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:20:38 compute-2 ceph-mon[77282]: from='client.53033 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:38 compute-2 ceph-mon[77282]: pgmap v4481: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:38 compute-2 ceph-mon[77282]: from='client.39042 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:38 compute-2 ceph-mon[77282]: from='client.53042 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:38 compute-2 ceph-mon[77282]: from='client.39048 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1860320840' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 09:20:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/378316414' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 09:20:38 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2864978510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:20:38 compute-2 nova_compute[226829]: 2026-01-31 09:20:38.263 226833 WARNING nova.virt.libvirt.driver [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:20:38 compute-2 nova_compute[226829]: 2026-01-31 09:20:38.264 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3917MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:20:38 compute-2 nova_compute[226829]: 2026-01-31 09:20:38.264 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:20:38 compute-2 nova_compute[226829]: 2026-01-31 09:20:38.264 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:20:38 compute-2 nova_compute[226829]: 2026-01-31 09:20:38.461 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:20:38 compute-2 nova_compute[226829]: 2026-01-31 09:20:38.463 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:20:38 compute-2 nova_compute[226829]: 2026-01-31 09:20:38.623 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing inventories for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 09:20:38 compute-2 nova_compute[226829]: 2026-01-31 09:20:38.655 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating ProviderTree inventory for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 09:20:38 compute-2 nova_compute[226829]: 2026-01-31 09:20:38.655 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Updating inventory in ProviderTree for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 09:20:38 compute-2 nova_compute[226829]: 2026-01-31 09:20:38.682 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing aggregate associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 09:20:38 compute-2 nova_compute[226829]: 2026-01-31 09:20:38.880 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Refreshing trait associations for resource provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc, traits: COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 09:20:38 compute-2 nova_compute[226829]: 2026-01-31 09:20:38.928 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:20:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:20:39 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3307499885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:20:39 compute-2 nova_compute[226829]: 2026-01-31 09:20:39.341 226833 DEBUG oslo_concurrency.processutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:20:39 compute-2 nova_compute[226829]: 2026-01-31 09:20:39.349 226833 DEBUG nova.compute.provider_tree [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed in ProviderTree for provider: 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:20:39 compute-2 nova_compute[226829]: 2026-01-31 09:20:39.420 226833 DEBUG nova.scheduler.client.report [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Inventory has not changed for provider 2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:20:39 compute-2 nova_compute[226829]: 2026-01-31 09:20:39.421 226833 DEBUG nova.compute.resource_tracker [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:20:39 compute-2 nova_compute[226829]: 2026-01-31 09:20:39.422 226833 DEBUG oslo_concurrency.lockutils [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:20:39 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/797563121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:20:39 compute-2 ceph-mon[77282]: pgmap v4482: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:39 compute-2 ceph-mon[77282]: from='client.47830 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:39.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:39 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:39 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:39 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:39.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:39 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 31 09:20:39 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2043660081' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 09:20:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3307499885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:20:40 compute-2 ceph-mon[77282]: from='client.47842 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/841653288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:20:40 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2043660081' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 09:20:41 compute-2 nova_compute[226829]: 2026-01-31 09:20:41.098 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:41 compute-2 ceph-mon[77282]: pgmap v4483: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:41.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:41 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:41 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:41 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:41.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:42 compute-2 nova_compute[226829]: 2026-01-31 09:20:42.039 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:42 compute-2 ovs-vsctl[353087]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 31 09:20:42 compute-2 nova_compute[226829]: 2026-01-31 09:20:42.371 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:20:42 compute-2 nova_compute[226829]: 2026-01-31 09:20:42.487 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:20:42 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:20:42 compute-2 ceph-mon[77282]: from='client.53066 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:42 compute-2 ceph-mon[77282]: from='client.39075 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:42 compute-2 ceph-mon[77282]: from='client.53084 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:42 compute-2 ceph-mon[77282]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 09:20:42 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3750243798' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 09:20:42 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1860847577' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 09:20:42 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/265012548' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:20:42 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3357587613' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:20:42 compute-2 virtqemud[226546]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 31 09:20:42 compute-2 virtqemud[226546]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 31 09:20:43 compute-2 virtqemud[226546]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 31 09:20:43 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma asok_command: cache status {prefix=cache status} (starting...)
Jan 31 09:20:43 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma asok_command: client ls {prefix=client ls} (starting...)
Jan 31 09:20:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:43.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:43 compute-2 lvm[353452]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 09:20:43 compute-2 lvm[353452]: VG ceph_vg0 finished
Jan 31 09:20:43 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:43 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:20:43 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:43.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:20:43 compute-2 ceph-mon[77282]: from='client.39087 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:43 compute-2 ceph-mon[77282]: pgmap v4484: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:43 compute-2 ceph-mon[77282]: from='client.53111 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1582570693' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 31 09:20:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3373967391' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 31 09:20:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/557018449' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 31 09:20:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1224619771' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 31 09:20:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2560769704' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 31 09:20:43 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3084447510' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 31 09:20:44 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma asok_command: damage ls {prefix=damage ls} (starting...)
Jan 31 09:20:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 31 09:20:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4051288875' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 09:20:44 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma asok_command: dump loads {prefix=dump loads} (starting...)
Jan 31 09:20:44 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 31 09:20:44 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 31 09:20:44 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 31 09:20:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:20:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4217219295' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:20:44 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 31 09:20:44 compute-2 ceph-mon[77282]: from='client.39117 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:44 compute-2 ceph-mon[77282]: from='client.53147 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3586976088' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:20:44 compute-2 ceph-mon[77282]: from='client.39135 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4061422320' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:20:44 compute-2 ceph-mon[77282]: from='client.47872 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:44 compute-2 ceph-mon[77282]: from='client.53162 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/212560233' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:20:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4051288875' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 09:20:44 compute-2 ceph-mon[77282]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 09:20:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4282404701' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:20:44 compute-2 ceph-mon[77282]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 09:20:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2415556615' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 09:20:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3362579220' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:20:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1534305306' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 09:20:44 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/4217219295' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:20:44 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 31 09:20:44 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Jan 31 09:20:44 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3537338723' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 31 09:20:45 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 31 09:20:45 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma asok_command: ops {prefix=ops} (starting...)
Jan 31 09:20:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Jan 31 09:20:45 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3556641598' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 31 09:20:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Jan 31 09:20:45 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2502158679' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 31 09:20:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:45.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:45 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 31 09:20:45 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/41175167' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:20:45 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:45 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:45 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:45.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:45 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma asok_command: session ls {prefix=session ls} (starting...)
Jan 31 09:20:45 compute-2 ceph-mon[77282]: from='client.53177 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:45 compute-2 ceph-mon[77282]: from='client.47887 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:45 compute-2 ceph-mon[77282]: pgmap v4485: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1256069273' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 31 09:20:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2691839288' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:20:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3537338723' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 31 09:20:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/662583840' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:20:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2360045533' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 31 09:20:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1133334762' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:20:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3556641598' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 31 09:20:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2502158679' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 31 09:20:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3302768494' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 09:20:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3709068946' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 31 09:20:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/41175167' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:20:45 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2660041370' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 09:20:45 compute-2 ceph-mds[84366]: mds.cephfs.compute-2.ihffma asok_command: status {prefix=status} (starting...)
Jan 31 09:20:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 31 09:20:46 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1149910150' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:20:46 compute-2 nova_compute[226829]: 2026-01-31 09:20:46.101 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 31 09:20:46 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/361904903' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:20:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 31 09:20:46 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/358223706' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 09:20:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 09:20:46 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1624731936' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:20:46 compute-2 ceph-mon[77282]: from='client.47908 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:46 compute-2 ceph-mon[77282]: from='client.53213 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:46 compute-2 ceph-mon[77282]: from='client.39201 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:46 compute-2 ceph-mon[77282]: from='client.47947 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2386080269' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:20:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1562770847' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 31 09:20:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1149910150' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:20:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/480645662' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 31 09:20:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4171911681' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:20:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2189887801' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 31 09:20:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/361904903' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:20:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/358223706' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 09:20:46 compute-2 ceph-mon[77282]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 09:20:46 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3387722562' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:20:46 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Jan 31 09:20:46 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2700614294' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 31 09:20:47 compute-2 nova_compute[226829]: 2026-01-31 09:20:47.040 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:47 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 31 09:20:47 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/730144916' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 09:20:47 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:20:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:47.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:47 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 31 09:20:47 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1045953086' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:20:47 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:47 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:47 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:47.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:47 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Jan 31 09:20:47 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1469538700' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 31 09:20:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Jan 31 09:20:48 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/418996094' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 31 09:20:48 compute-2 ceph-mon[77282]: from='client.47959 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:48 compute-2 ceph-mon[77282]: pgmap v4486: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:48 compute-2 ceph-mon[77282]: from='client.53255 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:48 compute-2 ceph-mon[77282]: from='client.39231 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:48 compute-2 ceph-mon[77282]: from='client.53267 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:48 compute-2 ceph-mon[77282]: from='client.39246 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1624731936' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:20:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2700614294' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 31 09:20:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/4221179603' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:20:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2047880028' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:20:48 compute-2 ceph-mon[77282]: from='client.53291 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:48 compute-2 ceph-mon[77282]: from='client.39252 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:48 compute-2 ceph-mon[77282]: from='client.48001 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/730144916' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 09:20:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2480106286' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:20:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1707424259' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:20:48 compute-2 ceph-mon[77282]: from='client.53315 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:48 compute-2 ceph-mon[77282]: from='client.39267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1045953086' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:20:48 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1469538700' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 31 09:20:48 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 31 09:20:48 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1113723371' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:49.092955+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535412736 unmapped: 84115456 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:50.093105+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5096270 data_alloc: 218103808 data_used: 8933376
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535412736 unmapped: 84115456 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:51.093224+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535412736 unmapped: 84115456 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:52.093978+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535412736 unmapped: 84115456 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:53.094174+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535412736 unmapped: 84115456 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:54.094361+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535420928 unmapped: 84107264 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:55.094513+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5096270 data_alloc: 218103808 data_used: 8933376
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535420928 unmapped: 84107264 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:56.094700+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535420928 unmapped: 84107264 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:57.094893+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535420928 unmapped: 84107264 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:58.095114+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535420928 unmapped: 84107264 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:59.095288+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535420928 unmapped: 84107264 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:00.095438+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5096270 data_alloc: 218103808 data_used: 8933376
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535420928 unmapped: 84107264 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:01.095571+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535420928 unmapped: 84107264 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:02.095728+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535429120 unmapped: 84099072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:03.095885+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535429120 unmapped: 84099072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:04.096160+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535429120 unmapped: 84099072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:05.096305+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5096270 data_alloc: 218103808 data_used: 8933376
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535429120 unmapped: 84099072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:06.096494+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535429120 unmapped: 84099072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:07.096677+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535429120 unmapped: 84099072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:08.096820+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535429120 unmapped: 84099072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:09.096977+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535429120 unmapped: 84099072 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:10.097126+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5096270 data_alloc: 218103808 data_used: 8933376
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535437312 unmapped: 84090880 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:11.097298+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535437312 unmapped: 84090880 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:12.097544+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535437312 unmapped: 84090880 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:13.097704+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a976000/0x0/0x1bfc00000, data 0x1d57ae2/0x1f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535437312 unmapped: 84090880 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:14.097903+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535437312 unmapped: 84090880 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:15.098077+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5096270 data_alloc: 218103808 data_used: 8933376
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 535437312 unmapped: 84090880 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc10ff400
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc10ff400 session 0x562dbb56eb40
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc73c000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbc73c000 session 0x562dbb34e5a0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc736c00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbc736c00 session 0x562dbd7fe960
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a4800
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbb5a4800 session 0x562dbd84d2c0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc73c000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 50.421176910s of 50.721790314s, submitted: 58
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbc73c000 session 0x562dbd84d0e0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:16.098344+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc10ff400
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc10ff400 session 0x562dbabd1e00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1100800
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc1100800 session 0x562dba84cf00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc57dac00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc57dac00 session 0x562dbd84a3c0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a4800
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbb5a4800 session 0x562dbb9df860
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536502272 unmapped: 83025920 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a0e5000/0x0/0x1bfc00000, data 0x25e9b54/0x27e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:17.098565+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536502272 unmapped: 83025920 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:18.098733+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 83017728 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:19.098931+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 83017728 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a0e5000/0x0/0x1bfc00000, data 0x25e9b54/0x27e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:20.099219+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5164673 data_alloc: 218103808 data_used: 8933376
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a0e5000/0x0/0x1bfc00000, data 0x25e9b54/0x27e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 83017728 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:21.099394+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 83017728 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:22.099517+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 83017728 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:23.099704+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 83017728 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:24.099897+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 83017728 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:25.100073+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a0e5000/0x0/0x1bfc00000, data 0x25e9b54/0x27e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5164673 data_alloc: 218103808 data_used: 8933376
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 83017728 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:26.100232+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536526848 unmapped: 83001344 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:27.100462+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536526848 unmapped: 83001344 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:28.100614+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536526848 unmapped: 83001344 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:29.100846+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536526848 unmapped: 83001344 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a0e5000/0x0/0x1bfc00000, data 0x25e9b54/0x27e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:30.101095+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5164673 data_alloc: 218103808 data_used: 8933376
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536526848 unmapped: 83001344 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a0e5000/0x0/0x1bfc00000, data 0x25e9b54/0x27e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:31.101288+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536526848 unmapped: 83001344 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:32.101571+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536526848 unmapped: 83001344 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:33.101759+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536526848 unmapped: 83001344 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:34.101919+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dd2f19000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.573759079s of 18.680780411s, submitted: 29
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dd2f19000 session 0x562dbd572b40
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536657920 unmapped: 82870272 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc834c00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc736400
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:35.102082+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5166757 data_alloc: 218103808 data_used: 8937472
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a0c1000/0x0/0x1bfc00000, data 0x260db54/0x280d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536657920 unmapped: 82870272 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:36.102249+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536657920 unmapped: 82870272 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a0c1000/0x0/0x1bfc00000, data 0x260db54/0x280d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [1])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:37.102420+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536674304 unmapped: 82853888 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:38.102649+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536674304 unmapped: 82853888 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:39.102893+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a0c1000/0x0/0x1bfc00000, data 0x260db54/0x280d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536674304 unmapped: 82853888 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:40.103044+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5230757 data_alloc: 218103808 data_used: 17928192
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536674304 unmapped: 82853888 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:41.103232+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536674304 unmapped: 82853888 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:42.103416+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536674304 unmapped: 82853888 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:43.103573+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536674304 unmapped: 82853888 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:44.103750+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536674304 unmapped: 82853888 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a0c1000/0x0/0x1bfc00000, data 0x260db54/0x280d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:45.103947+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5230757 data_alloc: 218103808 data_used: 17928192
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536674304 unmapped: 82853888 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:46.104127+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19a0c1000/0x0/0x1bfc00000, data 0x260db54/0x280d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 536674304 unmapped: 82853888 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.525684357s of 12.534331322s, submitted: 2
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:47.104358+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538509312 unmapped: 81018880 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:48.104561+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538525696 unmapped: 81002496 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:49.104779+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538525696 unmapped: 81002496 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:50.104947+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1995ac000/0x0/0x1bfc00000, data 0x3122b54/0x3322000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5330569 data_alloc: 218103808 data_used: 18362368
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538525696 unmapped: 81002496 heap: 619528192 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:51.105144+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1995ac000/0x0/0x1bfc00000, data 0x3122b54/0x3322000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac10c00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538591232 unmapped: 88293376 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbac10c00 session 0x562dbd84e000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:52.105377+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538591232 unmapped: 88293376 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1989c7000/0x0/0x1bfc00000, data 0x3d07b54/0x3f07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:53.105529+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538591232 unmapped: 88293376 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:54.105738+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538591232 unmapped: 88293376 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:55.105941+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5419695 data_alloc: 218103808 data_used: 18366464
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538591232 unmapped: 88293376 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:56.106159+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538591232 unmapped: 88293376 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:57.106341+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538591232 unmapped: 88293376 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:58.106492+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538599424 unmapped: 88285184 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1989c7000/0x0/0x1bfc00000, data 0x3d07b54/0x3f07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:59.106631+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538599424 unmapped: 88285184 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1989c7000/0x0/0x1bfc00000, data 0x3d07b54/0x3f07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:00.106774+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5419695 data_alloc: 218103808 data_used: 18366464
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538599424 unmapped: 88285184 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:01.106989+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc04af800
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc04af800 session 0x562dbc8f2780
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538599424 unmapped: 88285184 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:02.107223+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc04af000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc04af000 session 0x562dbd84a960
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538599424 unmapped: 88285184 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1989c7000/0x0/0x1bfc00000, data 0x3d07b54/0x3f07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:03.107446+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538599424 unmapped: 88285184 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:04.107615+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac10c00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbac10c00 session 0x562dbd84f680
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a4800
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.902978897s of 17.200611115s, submitted: 92
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbb5a4800 session 0x562dbd5d8780
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538566656 unmapped: 88317952 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:05.107786+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc707c00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5425836 data_alloc: 218103808 data_used: 18366464
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538566656 unmapped: 88317952 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:06.107950+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19899b000/0x0/0x1bfc00000, data 0x3d31b87/0x3f33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538566656 unmapped: 88317952 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:07.108145+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538566656 unmapped: 88317952 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:08.108327+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538566656 unmapped: 88317952 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:09.108503+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538574848 unmapped: 88309760 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:10.108721+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5425836 data_alloc: 218103808 data_used: 18366464
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538574848 unmapped: 88309760 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:11.108904+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538574848 unmapped: 88309760 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:12.109084+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19899b000/0x0/0x1bfc00000, data 0x3d31b87/0x3f33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 538574848 unmapped: 88309760 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:13.109246+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1b800
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 539549696 unmapped: 87334912 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:14.109439+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540991488 unmapped: 85893120 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:15.109606+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19899b000/0x0/0x1bfc00000, data 0x3d31b87/0x3f33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5512716 data_alloc: 234881024 data_used: 29069312
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540991488 unmapped: 85893120 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:16.109801+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540991488 unmapped: 85893120 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:17.109979+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540991488 unmapped: 85893120 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19899b000/0x0/0x1bfc00000, data 0x3d31b87/0x3f33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:18.110190+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19899b000/0x0/0x1bfc00000, data 0x3d31b87/0x3f33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540991488 unmapped: 85893120 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:19.110376+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540991488 unmapped: 85893120 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:20.110515+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5512716 data_alloc: 234881024 data_used: 29069312
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540991488 unmapped: 85893120 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:21.110690+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540999680 unmapped: 85884928 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:22.110885+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540999680 unmapped: 85884928 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:23.111117+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19899b000/0x0/0x1bfc00000, data 0x3d31b87/0x3f33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540999680 unmapped: 85884928 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:24.111271+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 540999680 unmapped: 85884928 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.574790955s of 20.601783752s, submitted: 7
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:25.111488+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19899b000/0x0/0x1bfc00000, data 0x3d31b87/0x3f33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5623834 data_alloc: 234881024 data_used: 29339648
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 544997376 unmapped: 81887232 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:26.111638+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545292288 unmapped: 81592320 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:27.111857+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545292288 unmapped: 81592320 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:28.112097+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbe2e1000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbe2e1000 session 0x562dbc8dfa40
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc73b000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbc73b000 session 0x562dbab8bc20
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545308672 unmapped: 81575936 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbc834c00 session 0x562dbff905a0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbc736400 session 0x562dbc8f2d20
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:29.112258+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac10c00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x195fb5000/0x0/0x1bfc00000, data 0x5576be9/0x5779000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545333248 unmapped: 81551360 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbac10c00 session 0x562dbd5fe5a0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:30.112435+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x197132000/0x0/0x1bfc00000, data 0x41abb77/0x43ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5507283 data_alloc: 234881024 data_used: 22052864
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545333248 unmapped: 81551360 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:31.112627+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545333248 unmapped: 81551360 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:32.112859+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545333248 unmapped: 81551360 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:33.113053+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x197360000/0x0/0x1bfc00000, data 0x41cdb77/0x43ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545333248 unmapped: 81551360 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:34.113232+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545333248 unmapped: 81551360 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:35.113410+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5506939 data_alloc: 234881024 data_used: 22061056
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545333248 unmapped: 81551360 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:36.113587+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545333248 unmapped: 81551360 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:37.113776+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x197360000/0x0/0x1bfc00000, data 0x41cdb77/0x43ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545341440 unmapped: 81543168 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:38.113888+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545341440 unmapped: 81543168 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:39.114110+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 545341440 unmapped: 81543168 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:40.114278+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.408903122s of 15.796367645s, submitted: 162
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5507167 data_alloc: 234881024 data_used: 22061056
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546390016 unmapped: 80494592 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5acd000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dc5acd000 session 0x562dbba82960
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:41.114411+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbf6ee800
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546586624 unmapped: 80297984 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:42.114543+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc10fec00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 549724160 unmapped: 77160448 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x197336000/0x0/0x1bfc00000, data 0x41f6b9a/0x43f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:43.114664+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbaa1b800 session 0x562dbd5ff2c0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbc707c00 session 0x562dc822e1e0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1b800
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547348480 unmapped: 79536128 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 ms_handle_reset con 0x562dbaa1b800 session 0x562dbd5feb40
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:44.114801+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547348480 unmapped: 79536128 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:45.114971+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5276119 data_alloc: 218103808 data_used: 18386944
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547348480 unmapped: 79536128 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:46.115115+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547348480 unmapped: 79536128 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:47.115282+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547348480 unmapped: 79536128 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:48.115514+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x198e57000/0x0/0x1bfc00000, data 0x26d6b67/0x28d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547348480 unmapped: 79536128 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:49.115687+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547348480 unmapped: 79536128 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:50.115876+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5276119 data_alloc: 218103808 data_used: 18386944
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547348480 unmapped: 79536128 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:51.116054+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547348480 unmapped: 79536128 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:52.116178+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.169054031s of 12.246128082s, submitted: 36
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547348480 unmapped: 79536128 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x198e57000/0x0/0x1bfc00000, data 0x26d6b67/0x28d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [0,0,0,3])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:53.116327+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x198e57000/0x0/0x1bfc00000, data 0x26d6b67/0x28d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546766848 unmapped: 80117760 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:54.116506+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x19826a000/0x0/0x1bfc00000, data 0x32c4b67/0x34c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546988032 unmapped: 79896576 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:55.116759+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5383197 data_alloc: 234881024 data_used: 18669568
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546988032 unmapped: 79896576 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:56.116930+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546988032 unmapped: 79896576 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:57.117124+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546988032 unmapped: 79896576 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:58.117273+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546988032 unmapped: 79896576 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:59.117447+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1981e1000/0x0/0x1bfc00000, data 0x334db67/0x354d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546988032 unmapped: 79896576 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:00.117626+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5380193 data_alloc: 234881024 data_used: 18673664
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546988032 unmapped: 79896576 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:01.117803+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546988032 unmapped: 79896576 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:02.117941+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1981c0000/0x0/0x1bfc00000, data 0x336eb67/0x356e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546988032 unmapped: 79896576 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:03.118139+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546988032 unmapped: 79896576 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:04.118326+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546988032 unmapped: 79896576 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:05.118549+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5380193 data_alloc: 234881024 data_used: 18673664
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546988032 unmapped: 79896576 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:06.118745+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.875093460s of 14.082410812s, submitted: 95
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 546996224 unmapped: 79888384 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:07.118966+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1981c0000/0x0/0x1bfc00000, data 0x336eb67/0x356e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547061760 unmapped: 79822848 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:08.119130+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1981ad000/0x0/0x1bfc00000, data 0x3380b67/0x3580000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547061760 unmapped: 79822848 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:09.119262+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1981ad000/0x0/0x1bfc00000, data 0x3380b67/0x3580000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547061760 unmapped: 79822848 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:10.119429+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5381577 data_alloc: 234881024 data_used: 18681856
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547061760 unmapped: 79822848 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:11.119608+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 heartbeat osd_stat(store_statfs(0x1981ad000/0x0/0x1bfc00000, data 0x3380b67/0x3580000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 384 handle_osd_map epochs [384,385], i have 384, src has [1,385]
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac10c00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbabaf000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 547069952 unmapped: 79814656 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:12.119758+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 385 ms_handle_reset con 0x562dbabaf000 session 0x562dbd573680
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 385 ms_handle_reset con 0x562dbac10c00 session 0x562dbd1b3e00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbe2e1000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 549101568 unmapped: 77783040 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:13.119970+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #59. Immutable memtables: 15.
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559349760 unmapped: 67534848 heap: 626884608 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:14.120147+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 385 ms_handle_reset con 0x562dbe2e1000 session 0x562dc822c000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaf8f400
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560914432 unmapped: 81928192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 385 handle_osd_map epochs [385,386], i have 385, src has [1,386]
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:15.120321+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 386 ms_handle_reset con 0x562dbaf8f400 session 0x562dbd5d9c20
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1b800
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5570981 data_alloc: 234881024 data_used: 25153536
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560930816 unmapped: 81911808 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 386 handle_osd_map epochs [386,387], i have 386, src has [1,387]
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:16.120454+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 387 ms_handle_reset con 0x562dbaa1b800 session 0x562dbd5272c0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560930816 unmapped: 81911808 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:17.120840+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 387 heartbeat osd_stat(store_statfs(0x195a9b000/0x0/0x1bfc00000, data 0x48ec154/0x4af1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560939008 unmapped: 81903616 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:18.120995+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560939008 unmapped: 81903616 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:19.121186+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbabaf000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 387 ms_handle_reset con 0x562dbabaf000 session 0x562dbd572780
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac10c00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 387 ms_handle_reset con 0x562dbac10c00 session 0x562dbb9de960
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc18e4400
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 387 ms_handle_reset con 0x562dc18e4400 session 0x562dbb47e1e0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560939008 unmapped: 81903616 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:20.121339+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5576603 data_alloc: 234881024 data_used: 25157632
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560939008 unmapped: 81903616 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:21.121479+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 387 heartbeat osd_stat(store_statfs(0x195a9b000/0x0/0x1bfc00000, data 0x48ec154/0x4af1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560939008 unmapped: 81903616 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:22.121640+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa19400
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 387 ms_handle_reset con 0x562dbaa19400 session 0x562dc822c5a0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa19400
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 387 ms_handle_reset con 0x562dbaa19400 session 0x562dc1832780
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1b800
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 387 ms_handle_reset con 0x562dbaa1b800 session 0x562dbc8f3680
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc43dec00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 387 ms_handle_reset con 0x562dc43dec00 session 0x562dbb56e5a0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbabaf000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.199718475s of 15.581806183s, submitted: 41
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 387 ms_handle_reset con 0x562dbabaf000 session 0x562dbd5d92c0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac10c00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 387 ms_handle_reset con 0x562dbac10c00 session 0x562dbd527680
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa19400
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 387 ms_handle_reset con 0x562dbaa19400 session 0x562dbd84cd20
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1b800
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 387 ms_handle_reset con 0x562dbaa1b800 session 0x562dbd5ff0e0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbabaf000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 387 ms_handle_reset con 0x562dbabaf000 session 0x562dbabd10e0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558120960 unmapped: 84721664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:23.121810+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558120960 unmapped: 84721664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:24.121981+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 387 handle_osd_map epochs [387,388], i have 387, src has [1,388]
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 heartbeat osd_stat(store_statfs(0x1950f1000/0x0/0x1bfc00000, data 0x52961c6/0x549d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: get_auth_request con 0x562dbabaec00 auth_method 0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558104576 unmapped: 84738048 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:25.122212+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 heartbeat osd_stat(store_statfs(0x1950ed000/0x0/0x1bfc00000, data 0x5297d05/0x54a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5653084 data_alloc: 234881024 data_used: 25165824
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558120960 unmapped: 84721664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:26.122397+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558120960 unmapped: 84721664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:27.122601+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 heartbeat osd_stat(store_statfs(0x1950ed000/0x0/0x1bfc00000, data 0x5297d05/0x54a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558120960 unmapped: 84721664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:28.122739+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558120960 unmapped: 84721664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:29.122951+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 heartbeat osd_stat(store_statfs(0x1950ed000/0x0/0x1bfc00000, data 0x5297d05/0x54a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558120960 unmapped: 84721664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:30.123067+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbf6ef000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dbf6ef000 session 0x562dc822c1e0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5653084 data_alloc: 234881024 data_used: 25165824
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558120960 unmapped: 84721664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:31.123220+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd882400
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dbd882400 session 0x562dbc777c20
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa19400
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dbaa19400 session 0x562dbb549680
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1b800
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dbaa1b800 session 0x562dbd5d9c20
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbabaf000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dbabaf000 session 0x562dc822c000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbf6ef000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dbf6ef000 session 0x562dbd573680
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5a82000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc18e5800
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dc18e5800 session 0x562dbff912c0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dc5a82000 session 0x562dbd5feb40
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa19400
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dbaa19400 session 0x562dbd5ff2c0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1b800
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dbaa1b800 session 0x562dbba82960
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbabaf000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dbabaf000 session 0x562dbc8f2d20
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558137344 unmapped: 84705280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:32.123339+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:33.123483+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558137344 unmapped: 84705280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbf6ef000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dbf6ef000 session 0x562dbab8bc20
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa19400
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.702510834s of 10.845293999s, submitted: 53
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dbaa19400 session 0x562dbc8dfa40
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1b800
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbabaf000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:34.123618+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558153728 unmapped: 84688896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 heartbeat osd_stat(store_statfs(0x1950c8000/0x0/0x1bfc00000, data 0x52bbd77/0x54c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:35.123780+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558153728 unmapped: 84688896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5658694 data_alloc: 234881024 data_used: 25174016
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:36.123922+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558153728 unmapped: 84688896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:37.124067+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558178304 unmapped: 84664320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:38.124199+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558178304 unmapped: 84664320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:39.124323+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558178304 unmapped: 84664320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 heartbeat osd_stat(store_statfs(0x1950c8000/0x0/0x1bfc00000, data 0x52bbd77/0x54c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:40.124459+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558178304 unmapped: 84664320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5729094 data_alloc: 234881024 data_used: 31989760
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:41.124608+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558178304 unmapped: 84664320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5acec00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dc5acec00 session 0x562dba8b7c20
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:42.124750+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558178304 unmapped: 84664320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5a83000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2102000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:43.124949+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558178304 unmapped: 84664320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:44.125165+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558186496 unmapped: 84656128 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 heartbeat osd_stat(store_statfs(0x1950c8000/0x0/0x1bfc00000, data 0x52bbd77/0x54c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:45.125320+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558186496 unmapped: 84656128 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5776001 data_alloc: 251658240 data_used: 38297600
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:46.125461+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558186496 unmapped: 84656128 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:47.125647+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558186496 unmapped: 84656128 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.278841019s of 14.302386284s, submitted: 9
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:48.125795+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559742976 unmapped: 83099648 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:49.125916+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560070656 unmapped: 82771968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 heartbeat osd_stat(store_statfs(0x194867000/0x0/0x1bfc00000, data 0x5b16d77/0x5d21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [0,0,0,1])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:50.126072+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560144384 unmapped: 82698240 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5855407 data_alloc: 251658240 data_used: 39071744
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:51.126205+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560144384 unmapped: 82698240 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:52.126320+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560144384 unmapped: 82698240 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 heartbeat osd_stat(store_statfs(0x1947da000/0x0/0x1bfc00000, data 0x5b9bd77/0x5da6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:53.126469+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560144384 unmapped: 82698240 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:54.126676+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560144384 unmapped: 82698240 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:55.126843+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564051968 unmapped: 78790656 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5923366 data_alloc: 251658240 data_used: 45199360
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:56.127005+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564232192 unmapped: 78610432 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 heartbeat osd_stat(store_statfs(0x194476000/0x0/0x1bfc00000, data 0x5f0dd77/0x6118000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:57.127192+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564232192 unmapped: 78610432 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:58.127354+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564232192 unmapped: 78610432 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:59.127514+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564232192 unmapped: 78610432 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.490035057s of 11.795151711s, submitted: 117
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dbaa1b800 session 0x562dbd84f680
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dbabaf000 session 0x562dbb47e000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd67f000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:00.127765+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561037312 unmapped: 81805312 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 heartbeat osd_stat(store_statfs(0x194477000/0x0/0x1bfc00000, data 0x5f0dd67/0x6117000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dbd67f000 session 0x562dbd526780
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5698756 data_alloc: 234881024 data_used: 34893824
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:01.127964+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560070656 unmapped: 82771968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:02.128119+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560070656 unmapped: 82771968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:03.128290+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560070656 unmapped: 82771968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:04.128463+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560070656 unmapped: 82771968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:05.128643+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560070656 unmapped: 82771968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 heartbeat osd_stat(store_statfs(0x195312000/0x0/0x1bfc00000, data 0x4c3ed05/0x4e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5698756 data_alloc: 234881024 data_used: 34893824
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:06.128797+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560070656 unmapped: 82771968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:07.128987+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560070656 unmapped: 82771968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:08.129114+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560070656 unmapped: 82771968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 heartbeat osd_stat(store_statfs(0x195312000/0x0/0x1bfc00000, data 0x4c3ed05/0x4e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:09.129287+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560070656 unmapped: 82771968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:10.129447+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560070656 unmapped: 82771968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5703556 data_alloc: 234881024 data_used: 35553280
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:11.129606+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560070656 unmapped: 82771968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5ace800
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 ms_handle_reset con 0x562dc5ace800 session 0x562dbecb32c0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc57da000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:12.129774+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560070656 unmapped: 82771968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 388 handle_osd_map epochs [388,389], i have 388, src has [1,389]
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.805006981s of 12.859222412s, submitted: 23
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 389 ms_handle_reset con 0x562dc57da000 session 0x562dbecb3c20
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd8a3800
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc43dec00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 389 ms_handle_reset con 0x562dc43dec00 session 0x562dbecb3e00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 389 ms_handle_reset con 0x562dbd8a3800 session 0x562dba8a1860
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc73d400
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:13.129978+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575791104 unmapped: 67051520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 389 ms_handle_reset con 0x562dbc73d400 session 0x562dbc8f34a0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd8a3800
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 389 heartbeat osd_stat(store_statfs(0x1946fe000/0x0/0x1bfc00000, data 0x587595e/0x5a7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:14.130135+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575791104 unmapped: 67051520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 389 handle_osd_map epochs [389,390], i have 389, src has [1,390]
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 390 ms_handle_reset con 0x562dbd8a3800 session 0x562dbc7770e0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc43dec00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:15.130316+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575848448 unmapped: 66994176 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 390 handle_osd_map epochs [390,391], i have 390, src has [1,391]
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 391 ms_handle_reset con 0x562dc43dec00 session 0x562dbb56e780
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5916266 data_alloc: 251658240 data_used: 47558656
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:16.130484+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575905792 unmapped: 66936832 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:17.130681+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc738400
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575905792 unmapped: 66936832 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 391 ms_handle_reset con 0x562dbc738400 session 0x562dba8a05a0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5acd000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 391 ms_handle_reset con 0x562dc5acd000 session 0x562dbd84ab40
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc4da5800
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 391 ms_handle_reset con 0x562dc4da5800 session 0x562dba7fb4a0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:18.130833+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 391 heartbeat osd_stat(store_statfs(0x193ef6000/0x0/0x1bfc00000, data 0x6079280/0x6285000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575905792 unmapped: 66936832 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc738400
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 391 handle_osd_map epochs [391,392], i have 391, src has [1,392]
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:19.130960+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572170240 unmapped: 70672384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 392 ms_handle_reset con 0x562dbc738400 session 0x562dbd84ed20
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:20.131135+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572178432 unmapped: 70664192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 392 heartbeat osd_stat(store_statfs(0x19532a000/0x0/0x1bfc00000, data 0x4c45f49/0x4e53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5764012 data_alloc: 251658240 data_used: 47558656
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:21.131264+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572178432 unmapped: 70664192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:22.131442+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572178432 unmapped: 70664192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:23.131654+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572178432 unmapped: 70664192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:24.131838+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572178432 unmapped: 70664192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 392 handle_osd_map epochs [392,393], i have 392, src has [1,393]
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.179473877s of 12.394300461s, submitted: 48
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:25.131977+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 393 heartbeat osd_stat(store_statfs(0x19532a000/0x0/0x1bfc00000, data 0x4c45f49/0x4e53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: get_auth_request con 0x562dbabae000 auth_method 0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572194816 unmapped: 70647808 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:26.132094+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5767178 data_alloc: 251658240 data_used: 47558656
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 393 ms_handle_reset con 0x562dc5a83000 session 0x562dbb34e5a0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 393 ms_handle_reset con 0x562dc2102000 session 0x562dba7fb680
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572194816 unmapped: 70647808 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3666c00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 393 ms_handle_reset con 0x562dc3666c00 session 0x562dbd5d8b40
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:27.132270+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572219392 unmapped: 70623232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:28.132482+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572219392 unmapped: 70623232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1101000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:29.132643+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 393 ms_handle_reset con 0x562dc1101000 session 0x562dc1832b40
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572219392 unmapped: 70623232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 393 heartbeat osd_stat(store_statfs(0x195679000/0x0/0x1bfc00000, data 0x48f6a4e/0x4b03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:30.132850+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572219392 unmapped: 70623232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 393 heartbeat osd_stat(store_statfs(0x195679000/0x0/0x1bfc00000, data 0x48f6a4e/0x4b03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:31.133003+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5728205 data_alloc: 251658240 data_used: 44077056
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572219392 unmapped: 70623232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3666c00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:32.133162+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 393 handle_osd_map epochs [393,394], i have 393, src has [1,394]
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572219392 unmapped: 70623232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 394 ms_handle_reset con 0x562dc3666c00 session 0x562dbd84dc20
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:33.133354+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559374336 unmapped: 83468288 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:34.133508+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559374336 unmapped: 83468288 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:35.133711+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559374336 unmapped: 83468288 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:36.133870+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5458850 data_alloc: 234881024 data_used: 21647360
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559374336 unmapped: 83468288 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd883000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.765892029s of 12.027838707s, submitted: 79
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 394 heartbeat osd_stat(store_statfs(0x196bdb000/0x0/0x1bfc00000, data 0x3396689/0x35a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 394 ms_handle_reset con 0x562dbd883000 session 0x562dbd5d8d20
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:37.134083+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba77e800
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 394 ms_handle_reset con 0x562dba77e800 session 0x562dbd84b860
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554950656 unmapped: 87891968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:38.134179+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554950656 unmapped: 87891968 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:39.134316+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 394 ms_handle_reset con 0x562dc10fec00 session 0x562dbd5721e0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 394 ms_handle_reset con 0x562dbf6ee800 session 0x562dbc8def00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554958848 unmapped: 87883776 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbcffb000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 394 handle_osd_map epochs [394,395], i have 394, src has [1,395]
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dbcffb000 session 0x562dbb56f2c0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:40.134434+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542056448 unmapped: 100786176 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 heartbeat osd_stat(store_statfs(0x196a90000/0x0/0x1bfc00000, data 0x273a1a5/0x2946000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:41.134576+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5301797 data_alloc: 218103808 data_used: 8826880
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542056448 unmapped: 100786176 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 heartbeat osd_stat(store_statfs(0x196a90000/0x0/0x1bfc00000, data 0x273a1a5/0x2946000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:42.134698+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542056448 unmapped: 100786176 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:43.134843+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542056448 unmapped: 100786176 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:44.134989+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542056448 unmapped: 100786176 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 heartbeat osd_stat(store_statfs(0x196a90000/0x0/0x1bfc00000, data 0x273a1a5/0x2946000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd8a3000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dbd8a3000 session 0x562dbff91860
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:45.135189+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542056448 unmapped: 100786176 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc738800
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dbc738800 session 0x562dbc8f30e0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:46.135375+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5301797 data_alloc: 218103808 data_used: 8826880
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542056448 unmapped: 100786176 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:47.135613+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc10fe400
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dc10fe400 session 0x562dbc8f2b40
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1100400
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.382782936s of 10.577653885s, submitted: 83
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542056448 unmapped: 100786176 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dc1100400 session 0x562dbd5d83c0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:48.136122+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542064640 unmapped: 100777984 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc738800
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 heartbeat osd_stat(store_statfs(0x197837000/0x0/0x1bfc00000, data 0x273a1c8/0x2947000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:49.136264+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542072832 unmapped: 100769792 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbcffb000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:50.136379+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542048256 unmapped: 100794368 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:51.136555+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5376823 data_alloc: 234881024 data_used: 18935808
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542965760 unmapped: 99876864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:52.136687+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542965760 unmapped: 99876864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:53.136890+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542965760 unmapped: 99876864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 heartbeat osd_stat(store_statfs(0x197837000/0x0/0x1bfc00000, data 0x273a1c8/0x2947000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:54.137066+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 542965760 unmapped: 99876864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba095c00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dba095c00 session 0x562dbd84e960
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbad4ac00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dbad4ac00 session 0x562dba8730e0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2854c00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dc2854c00 session 0x562dc18334a0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 heartbeat osd_stat(store_statfs(0x197837000/0x0/0x1bfc00000, data 0x273a1c8/0x2947000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc736400
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dbc736400 session 0x562dbd1b25a0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1162800
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dc1162800 session 0x562dbabfa3c0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:55.137260+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba095c00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dba095c00 session 0x562dbecb2780
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbad4ac00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dbad4ac00 session 0x562dba8b6d20
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc736400
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dbc736400 session 0x562dba8b7a40
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2854c00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dc2854c00 session 0x562dba2854a0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 543023104 unmapped: 99819520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:56.137421+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5429297 data_alloc: 234881024 data_used: 18939904
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 543023104 unmapped: 99819520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:57.137597+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 543023104 unmapped: 99819520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 heartbeat osd_stat(store_statfs(0x19725d000/0x0/0x1bfc00000, data 0x2d1223a/0x2f21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:58.137750+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 543023104 unmapped: 99819520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:59.137940+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 543023104 unmapped: 99819520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 heartbeat osd_stat(store_statfs(0x19725d000/0x0/0x1bfc00000, data 0x2d1223a/0x2f21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:00.138118+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 543023104 unmapped: 99819520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbcffa000
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dbcffa000 session 0x562dbd572780
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:01.138245+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.624453545s of 13.748756409s, submitted: 50
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5453979 data_alloc: 234881024 data_used: 18939904
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 548265984 unmapped: 94576640 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba095c00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dba095c00 session 0x562dbab8b680
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 heartbeat osd_stat(store_statfs(0x196e38000/0x0/0x1bfc00000, data 0x313723a/0x3346000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:02.138394+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 heartbeat osd_stat(store_statfs(0x196e38000/0x0/0x1bfc00000, data 0x313723a/0x3346000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbad4ac00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dbad4ac00 session 0x562dbb47fc20
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc736400
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 549715968 unmapped: 93126656 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dbc736400 session 0x562dbb34ed20
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:03.138545+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2854c00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa19400
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 548503552 unmapped: 94339072 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:04.138737+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550371328 unmapped: 92471296 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:05.138865+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550436864 unmapped: 92405760 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:06.139081+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5551333 data_alloc: 234881024 data_used: 26824704
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550436864 unmapped: 92405760 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:07.139252+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550436864 unmapped: 92405760 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 heartbeat osd_stat(store_statfs(0x196a72000/0x0/0x1bfc00000, data 0x34f723a/0x3706000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:08.139448+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550436864 unmapped: 92405760 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:09.139607+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550436864 unmapped: 92405760 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:10.139753+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550436864 unmapped: 92405760 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 heartbeat osd_stat(store_statfs(0x196a72000/0x0/0x1bfc00000, data 0x34f723a/0x3706000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:11.139894+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5551349 data_alloc: 234881024 data_used: 26824704
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550436864 unmapped: 92405760 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:12.140054+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550436864 unmapped: 92405760 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:13.140198+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550436864 unmapped: 92405760 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:14.140318+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550436864 unmapped: 92405760 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.047923088s of 13.415540695s, submitted: 128
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:15.140465+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbca8dc00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 553410560 unmapped: 89432064 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:16.140614+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5651979 data_alloc: 234881024 data_used: 27148288
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 heartbeat osd_stat(store_statfs(0x195f00000/0x0/0x1bfc00000, data 0x406f23a/0x427e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554885120 unmapped: 87957504 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:17.141076+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554958848 unmapped: 87883776 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:18.141276+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554958848 unmapped: 87883776 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 ms_handle_reset con 0x562dbca8dc00 session 0x562dbd5ff0e0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:19.141436+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554958848 unmapped: 87883776 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:20.141599+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbcffa800
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554958848 unmapped: 87883776 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:21.141742+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 395 handle_osd_map epochs [395,396], i have 395, src has [1,396]
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5671089 data_alloc: 234881024 data_used: 27164672
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 396 ms_handle_reset con 0x562dbcffa800 session 0x562dbd572b40
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554958848 unmapped: 87883776 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:22.141912+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 396 heartbeat osd_stat(store_statfs(0x195cd4000/0x0/0x1bfc00000, data 0x42970f5/0x44a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554958848 unmapped: 87883776 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 396 heartbeat osd_stat(store_statfs(0x195cd4000/0x0/0x1bfc00000, data 0x42970f5/0x44a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:23.142124+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 396 heartbeat osd_stat(store_statfs(0x195cd4000/0x0/0x1bfc00000, data 0x42970f5/0x44a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554958848 unmapped: 87883776 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:24.142331+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbcffa800
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba095c00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 397 ms_handle_reset con 0x562dba095c00 session 0x562dba0592c0
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 397 ms_handle_reset con 0x562dbcffa800 session 0x562dbb548b40
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbad4ac00
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554958848 unmapped: 87883776 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.674940109s of 10.024015427s, submitted: 146
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 397 ms_handle_reset con 0x562dbad4ac00 session 0x562dbd526780
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1b800
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:25.322019+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 553779200 unmapped: 89063424 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:48 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:26.322208+0000)
Jan 31 09:20:48 compute-2 ceph-osd[79942]: osd.2 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:48 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:48 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5817283 data_alloc: 234881024 data_used: 33517568
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 398 ms_handle_reset con 0x562dbaa1b800 session 0x562dbb549680
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc7ce800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 553787392 unmapped: 89055232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:27.322406+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 398 handle_osd_map epochs [398,399], i have 398, src has [1,399]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 399 ms_handle_reset con 0x562dbc7ce800 session 0x562dbd1b3e00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 553787392 unmapped: 89055232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:28.322556+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 553787392 unmapped: 89055232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba095c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 399 ms_handle_reset con 0x562dba095c00 session 0x562dbecb21e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1b800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 399 ms_handle_reset con 0x562dbaa1b800 session 0x562dbb34f680
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbad4ac00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 399 ms_handle_reset con 0x562dbad4ac00 session 0x562dbbb154a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 399 heartbeat osd_stat(store_statfs(0x194e32000/0x0/0x1bfc00000, data 0x51336d2/0x534a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:29.322851+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550412288 unmapped: 92430336 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:30.323068+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550412288 unmapped: 92430336 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb701800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc738400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 399 ms_handle_reset con 0x562dbc738400 session 0x562dbb47e780
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 399 ms_handle_reset con 0x562dbb701800 session 0x562dbd5d94a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb701800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 399 ms_handle_reset con 0x562dbb701800 session 0x562dbc7770e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:31.323255+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5816329 data_alloc: 234881024 data_used: 33525760
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550420480 unmapped: 92422144 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 399 heartbeat osd_stat(store_statfs(0x194e28000/0x0/0x1bfc00000, data 0x513d744/0x5356000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:32.323399+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550420480 unmapped: 92422144 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:33.323621+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550420480 unmapped: 92422144 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 399 heartbeat osd_stat(store_statfs(0x194e28000/0x0/0x1bfc00000, data 0x513d744/0x5356000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:34.323808+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550420480 unmapped: 92422144 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 399 handle_osd_map epochs [399,400], i have 399, src has [1,400]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.589864731s of 10.252849579s, submitted: 32
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:35.323987+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550428672 unmapped: 92413952 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5a83000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dc5a83000 session 0x562dbd527a40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbca8dc00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbca8dc00 session 0x562dbecb2000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbcffb800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbcffb800 session 0x562dbbb15e00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:36.324196+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5819623 data_alloc: 234881024 data_used: 33533952
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc54c6800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dc54c6800 session 0x562dbba83860
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550428672 unmapped: 92413952 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc10fec00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc54c6800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dc54c6800 session 0x562dbd7ff4a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dc10fec00 session 0x562dbc8f2b40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb701800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbb701800 session 0x562dbd84f680
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbca8dc00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbca8dc00 session 0x562dbd527680
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbcffb800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5a83000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dc5a83000 session 0x562dbd7fe1e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb701800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbb701800 session 0x562dbbb230e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbca8dc00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbca8dc00 session 0x562dbbb232c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbcffb800 session 0x562dbb56e5a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc10fec00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dc10fec00 session 0x562dbd1b30e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc54c6800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dc54c6800 session 0x562dbd526f00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc54c6800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dc54c6800 session 0x562dbc8dfc20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb701800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbb701800 session 0x562dbd84fe00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbca8dc00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:37.324365+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbca8dc00 session 0x562dbd84be00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbcffb800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbcffb800 session 0x562dbabfbc20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552034304 unmapped: 90808320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x19457b000/0x0/0x1bfc00000, data 0x59e432e/0x5c02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:38.324526+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552034304 unmapped: 90808320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:39.324732+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552034304 unmapped: 90808320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:40.324891+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552034304 unmapped: 90808320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:41.325091+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5902396 data_alloc: 234881024 data_used: 33533952
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552034304 unmapped: 90808320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:42.325279+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x19457b000/0x0/0x1bfc00000, data 0x59e4367/0x5c02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552034304 unmapped: 90808320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:43.325410+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552034304 unmapped: 90808320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:44.325559+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552034304 unmapped: 90808320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:45.325732+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552034304 unmapped: 90808320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:46.325887+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5902572 data_alloc: 234881024 data_used: 33529856
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 552034304 unmapped: 90808320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.640461922s of 11.789118767s, submitted: 56
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc73b400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbc73b400 session 0x562dc822c960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb701800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbb701800 session 0x562dbab8bc20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:47.326089+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbca8dc00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbca8dc00 session 0x562dbd5feb40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550830080 unmapped: 92012544 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbcffb800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbcffb800 session 0x562dc822c1e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x19457c000/0x0/0x1bfc00000, data 0x59e4367/0x5c02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:48.326265+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc54c6800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dc54c6800 session 0x562dbbb6b2c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1163800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550830080 unmapped: 92012544 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dc1163800 session 0x562dbd84b680
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb701800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbb701800 session 0x562dbd526f00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbca8dc00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbca8dc00 session 0x562dbb56e5a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbcffb800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc54c6800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:49.326448+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a4c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc834c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550993920 unmapped: 91848704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:50.326597+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550993920 unmapped: 91848704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x194530000/0x0/0x1bfc00000, data 0x5a2c3cc/0x5c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:51.326738+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5915091 data_alloc: 234881024 data_used: 33542144
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 550993920 unmapped: 91848704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:52.326865+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559505408 unmapped: 83337216 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:53.327131+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 81100800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:54.327499+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 81100800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:55.327638+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 81100800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x194530000/0x0/0x1bfc00000, data 0x5a2c3cc/0x5c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:56.327751+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6058291 data_alloc: 251658240 data_used: 52396032
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 81100800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:57.327893+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 81100800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:58.328011+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 81100800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:59.328173+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 81100800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:00.328294+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 81100800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:01.328418+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x194530000/0x0/0x1bfc00000, data 0x5a2c3cc/0x5c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6058291 data_alloc: 251658240 data_used: 52396032
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 81100800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:02.328601+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 81100800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:03.328716+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 81100800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.273065567s of 17.347787857s, submitted: 23
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:04.328819+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569090048 unmapped: 73752576 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:05.328952+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569851904 unmapped: 72990720 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:06.329092+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6205106 data_alloc: 268435456 data_used: 55169024
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570286080 unmapped: 72556544 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x193697000/0x0/0x1bfc00000, data 0x68bd3cc/0x6adf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:07.329249+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570286080 unmapped: 72556544 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:08.329437+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570286080 unmapped: 72556544 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:09.329615+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x193697000/0x0/0x1bfc00000, data 0x68bd3cc/0x6adf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570351616 unmapped: 72491008 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x193697000/0x0/0x1bfc00000, data 0x68bd3cc/0x6adf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:10.329766+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571031552 unmapped: 71811072 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbcffb800 session 0x562dbc8f2b40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dc54c6800 session 0x562dbc777a40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x193697000/0x0/0x1bfc00000, data 0x68bd3cc/0x6adf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:11.329896+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc7cf800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6203724 data_alloc: 268435456 data_used: 55177216
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567336960 unmapped: 75505664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbc7cf800 session 0x562dbd5730e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:12.330049+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567345152 unmapped: 75497472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:13.330225+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567345152 unmapped: 75497472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:14.330368+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567345152 unmapped: 75497472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x194d8d000/0x0/0x1bfc00000, data 0x51d1328/0x53ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:15.330517+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567345152 unmapped: 75497472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:16.330711+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5955331 data_alloc: 251658240 data_used: 44916736
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567345152 unmapped: 75497472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:17.330893+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.040993690s of 13.372898102s, submitted: 214
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567353344 unmapped: 75489280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:18.331165+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567353344 unmapped: 75489280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:19.331306+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567353344 unmapped: 75489280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:20.331470+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x194d8d000/0x0/0x1bfc00000, data 0x51d1328/0x53ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567353344 unmapped: 75489280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x194d8d000/0x0/0x1bfc00000, data 0x51d1328/0x53ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:21.331603+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5955331 data_alloc: 251658240 data_used: 44916736
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567353344 unmapped: 75489280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:22.331757+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567361536 unmapped: 75481088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:23.331893+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dc2854c00 session 0x562dbd5d9680
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbaa19400 session 0x562dc1832d20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567361536 unmapped: 75481088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac11c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x194d8e000/0x0/0x1bfc00000, data 0x51d2328/0x53f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:24.332014+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567361536 unmapped: 75481088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:25.332172+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567377920 unmapped: 75464704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:26.332948+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbac11c00 session 0x562dba285e00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5774259 data_alloc: 251658240 data_used: 38666240
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567386112 unmapped: 75456512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x195b85000/0x0/0x1bfc00000, data 0x40302b6/0x424c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:27.333195+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567386112 unmapped: 75456512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:28.333460+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567386112 unmapped: 75456512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:29.333620+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567386112 unmapped: 75456512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:30.334184+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567386112 unmapped: 75456512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:31.334315+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5774579 data_alloc: 251658240 data_used: 38674432
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567386112 unmapped: 75456512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x195b85000/0x0/0x1bfc00000, data 0x40302b6/0x424c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:32.334442+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567386112 unmapped: 75456512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:33.334769+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567386112 unmapped: 75456512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:34.335171+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd883800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567386112 unmapped: 75456512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.279706955s of 17.971815109s, submitted: 44
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:35.335467+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x195b85000/0x0/0x1bfc00000, data 0x40302b6/0x424c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570195968 unmapped: 72646656 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:36.335618+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5802171 data_alloc: 251658240 data_used: 38973440
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570204160 unmapped: 72638464 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:37.335782+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570671104 unmapped: 72171520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x195b04000/0x0/0x1bfc00000, data 0x445d4b6/0x467a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:38.336099+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570679296 unmapped: 72163328 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:39.336362+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570679296 unmapped: 72163328 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:40.336661+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570679296 unmapped: 72163328 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbd883800 session 0x562dbd7fe960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:41.336927+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5823598 data_alloc: 251658240 data_used: 41078784
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568393728 unmapped: 74448896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x195b03000/0x0/0x1bfc00000, data 0x445e4b6/0x467b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:42.337206+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568393728 unmapped: 74448896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:43.337329+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568393728 unmapped: 74448896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:44.337548+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbb5a4c00 session 0x562dbd84f860
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbc834c00 session 0x562dbd7ff4a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568393728 unmapped: 74448896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa19400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:45.337739+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.340224266s of 10.018557549s, submitted: 24
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dbaa19400 session 0x562dbc8f2780
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568418304 unmapped: 74424320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:46.337961+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5781271 data_alloc: 251658240 data_used: 40714240
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x195dc5000/0x0/0x1bfc00000, data 0x419e411/0x43b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568418304 unmapped: 74424320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:47.338178+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568418304 unmapped: 74424320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:48.338361+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568418304 unmapped: 74424320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:49.338487+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x195dc5000/0x0/0x1bfc00000, data 0x419e411/0x43b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568418304 unmapped: 74424320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:50.338682+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568418304 unmapped: 74424320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:51.338869+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1163c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5781271 data_alloc: 251658240 data_used: 40714240
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568418304 unmapped: 74424320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 heartbeat osd_stat(store_statfs(0x195dc5000/0x0/0x1bfc00000, data 0x419e411/0x43b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dc1163c00 session 0x562dbd7ff0e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc18e4800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:52.339076+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 ms_handle_reset con 0x562dc18e4800 session 0x562dbc8de000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568418304 unmapped: 74424320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:53.339265+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568426496 unmapped: 74416128 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc68e0800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:54.339393+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 400 handle_osd_map epochs [400,401], i have 400, src has [1,401]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 401 ms_handle_reset con 0x562dc68e0800 session 0x562dbbac14a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568442880 unmapped: 74399744 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:55.339546+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa19400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 401 ms_handle_reset con 0x562dbaa19400 session 0x562dba8a0960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc834c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.123268127s of 10.318778992s, submitted: 56
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568442880 unmapped: 74399744 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:56.339703+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 401 handle_osd_map epochs [401,402], i have 401, src has [1,402]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 402 ms_handle_reset con 0x562dbc834c00 session 0x562dbd84ab40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 402 heartbeat osd_stat(store_statfs(0x196e60000/0x0/0x1bfc00000, data 0x3107d88/0x331d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5572465 data_alloc: 234881024 data_used: 27693056
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568442880 unmapped: 74399744 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:57.339902+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568442880 unmapped: 74399744 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:58.340090+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568442880 unmapped: 74399744 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 402 heartbeat osd_stat(store_statfs(0x196e5d000/0x0/0x1bfc00000, data 0x3109a51/0x3320000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:59.340228+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568442880 unmapped: 74399744 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 402 heartbeat osd_stat(store_statfs(0x196e5d000/0x0/0x1bfc00000, data 0x3109a51/0x3320000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:00.340369+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568451072 unmapped: 74391552 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 403 ms_handle_reset con 0x562dbcffb000 session 0x562dbecb30e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 403 ms_handle_reset con 0x562dbc738800 session 0x562dbd1b3c20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:01.340576+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc43dd800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5575235 data_alloc: 234881024 data_used: 27693056
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568459264 unmapped: 74383360 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:02.340697+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554188800 unmapped: 88653824 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 403 ms_handle_reset con 0x562dc43dd800 session 0x562dbecb2000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:03.340877+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:04.341111+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:05.341230+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x197d6c000/0x0/0x1bfc00000, data 0x1f7a527/0x2190000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:06.341395+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5333061 data_alloc: 218103808 data_used: 10977280
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:07.341566+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:08.341722+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:09.341860+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x197fea000/0x0/0x1bfc00000, data 0x1f7c066/0x2193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:10.341995+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:11.342129+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc7ce400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dbc7ce400 session 0x562dbd5d9860
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc54c6400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5333061 data_alloc: 218103808 data_used: 10977280
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.315015793s of 16.486459732s, submitted: 80
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dc54c6400 session 0x562dba7fb680
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:12.342266+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:13.342404+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:14.342539+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:15.342670+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1981ec000/0x0/0x1bfc00000, data 0x1d7ae66/0x1f91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:16.342794+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5313558 data_alloc: 218103808 data_used: 8871936
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:17.342951+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1981ec000/0x0/0x1bfc00000, data 0x1d7ae66/0x1f91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:18.343087+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:19.343208+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:20.343336+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1981ec000/0x0/0x1bfc00000, data 0x1d7ae66/0x1f91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:21.343465+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5313718 data_alloc: 218103808 data_used: 8876032
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:22.343601+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:23.343735+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:24.343923+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1981ec000/0x0/0x1bfc00000, data 0x1d7ae66/0x1f91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88612864 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:25.344102+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554237952 unmapped: 88604672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:26.344284+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1981ec000/0x0/0x1bfc00000, data 0x1d7ae66/0x1f91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5313718 data_alloc: 218103808 data_used: 8876032
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554237952 unmapped: 88604672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:27.344488+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554237952 unmapped: 88604672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:28.344630+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554237952 unmapped: 88604672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:29.344796+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1981ec000/0x0/0x1bfc00000, data 0x1d7ae66/0x1f91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554237952 unmapped: 88604672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:30.344968+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554237952 unmapped: 88604672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:31.345143+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5313718 data_alloc: 218103808 data_used: 8876032
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554237952 unmapped: 88604672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1981ec000/0x0/0x1bfc00000, data 0x1d7ae66/0x1f91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:32.345284+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554237952 unmapped: 88604672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:33.345462+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1981ec000/0x0/0x1bfc00000, data 0x1d7ae66/0x1f91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554246144 unmapped: 88596480 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:34.345645+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554246144 unmapped: 88596480 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:35.345794+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554246144 unmapped: 88596480 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:36.345995+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5313718 data_alloc: 218103808 data_used: 8876032
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554246144 unmapped: 88596480 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:37.346244+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554246144 unmapped: 88596480 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc4da4000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dc4da4000 session 0x562dbd1b2960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a5800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dbb5a5800 session 0x562dba872780
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba6a1000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dba6a1000 session 0x562dbabd03c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:38.346378+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a5800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dbb5a5800 session 0x562dbb9dfc20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc7ce400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.329820633s of 26.333835602s, submitted: 1
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dbc7ce400 session 0x562dbd5721e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc4da4000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dc4da4000 session 0x562dbff90f00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc54c6400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dc54c6400 session 0x562dbb9df0e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc7cf400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dbc7cf400 session 0x562dc1832960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a5800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dbb5a5800 session 0x562dbd84e3c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554270720 unmapped: 88571904 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1974d5000/0x0/0x1bfc00000, data 0x2a91e76/0x2ca9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:39.346509+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554270720 unmapped: 88571904 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:40.346695+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554270720 unmapped: 88571904 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:41.346851+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1974d5000/0x0/0x1bfc00000, data 0x2a91e76/0x2ca9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5421124 data_alloc: 218103808 data_used: 8876032
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554278912 unmapped: 88563712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:42.347076+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554278912 unmapped: 88563712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:43.347216+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1974d5000/0x0/0x1bfc00000, data 0x2a91e76/0x2ca9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554278912 unmapped: 88563712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:44.347361+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554278912 unmapped: 88563712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:45.347488+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554278912 unmapped: 88563712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:46.347689+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5421124 data_alloc: 218103808 data_used: 8876032
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554278912 unmapped: 88563712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:47.347859+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554278912 unmapped: 88563712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:48.348093+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554278912 unmapped: 88563712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:49.348286+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1974d5000/0x0/0x1bfc00000, data 0x2a91e76/0x2ca9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554287104 unmapped: 88555520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:50.348439+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1974d5000/0x0/0x1bfc00000, data 0x2a91e76/0x2ca9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554287104 unmapped: 88555520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:51.348604+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5421124 data_alloc: 218103808 data_used: 8876032
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554287104 unmapped: 88555520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:52.348761+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554287104 unmapped: 88555520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:53.348904+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1974d5000/0x0/0x1bfc00000, data 0x2a91e76/0x2ca9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554287104 unmapped: 88555520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:54.349138+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554287104 unmapped: 88555520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:55.349331+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554287104 unmapped: 88555520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:56.349475+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5421124 data_alloc: 218103808 data_used: 8876032
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554287104 unmapped: 88555520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:57.349694+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554303488 unmapped: 88539136 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:58.349869+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 76K writes, 315K keys, 76K commit groups, 1.0 writes per commit group, ingest: 0.32 GB, 0.05 MB/s
                                           Cumulative WAL: 76K writes, 27K syncs, 2.76 writes per sync, written: 0.32 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4519 writes, 18K keys, 4519 commit groups, 1.0 writes per commit group, ingest: 19.24 MB, 0.03 MB/s
                                           Interval WAL: 4519 writes, 1796 syncs, 2.52 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554303488 unmapped: 88539136 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:59.350076+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba77f000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.789453506s of 20.890707016s, submitted: 20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dba77f000 session 0x562dbd1b2960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1974d5000/0x0/0x1bfc00000, data 0x2a91e76/0x2ca9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1974d5000/0x0/0x1bfc00000, data 0x2a91e76/0x2ca9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554319872 unmapped: 88522752 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc737c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2854c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:00.350280+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554319872 unmapped: 88522752 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:01.350435+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5421489 data_alloc: 218103808 data_used: 8884224
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554319872 unmapped: 88522752 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:02.350599+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1974d5000/0x0/0x1bfc00000, data 0x2a91e76/0x2ca9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554295296 unmapped: 88547328 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:03.350788+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554295296 unmapped: 88547328 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:04.350961+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1974d5000/0x0/0x1bfc00000, data 0x2a91e76/0x2ca9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554295296 unmapped: 88547328 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:05.351111+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554295296 unmapped: 88547328 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:06.351258+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5508529 data_alloc: 234881024 data_used: 21176320
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1974d5000/0x0/0x1bfc00000, data 0x2a91e76/0x2ca9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554295296 unmapped: 88547328 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:07.351458+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554295296 unmapped: 88547328 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:08.351732+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554295296 unmapped: 88547328 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:09.351882+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554295296 unmapped: 88547328 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:10.352206+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1974d5000/0x0/0x1bfc00000, data 0x2a91e76/0x2ca9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554295296 unmapped: 88547328 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:11.352454+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5508529 data_alloc: 234881024 data_used: 21176320
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 554295296 unmapped: 88547328 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:12.352599+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.633522987s of 13.655125618s, submitted: 7
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 558006272 unmapped: 84836352 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:13.352793+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x19672a000/0x0/0x1bfc00000, data 0x3834e76/0x3a4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560463872 unmapped: 82378752 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:14.352956+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559759360 unmapped: 83083264 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:15.353134+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559759360 unmapped: 83083264 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1966a2000/0x0/0x1bfc00000, data 0x38c4e76/0x3adc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:16.353350+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5635579 data_alloc: 234881024 data_used: 22097920
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559759360 unmapped: 83083264 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:17.353569+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1966a2000/0x0/0x1bfc00000, data 0x38c4e76/0x3adc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559759360 unmapped: 83083264 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:18.353758+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559759360 unmapped: 83083264 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:19.353926+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:20.354084+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x196681000/0x0/0x1bfc00000, data 0x38e5e76/0x3afd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:21.354234+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5634223 data_alloc: 234881024 data_used: 22106112
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:22.354420+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x196681000/0x0/0x1bfc00000, data 0x38e5e76/0x3afd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:23.354572+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:24.354707+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x196681000/0x0/0x1bfc00000, data 0x38e5e76/0x3afd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:25.520138+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.650030136s of 12.929843903s, submitted: 104
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:26.520415+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5634279 data_alloc: 234881024 data_used: 22106112
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:27.520697+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:28.520851+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:29.521099+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x19667b000/0x0/0x1bfc00000, data 0x38ebe76/0x3b03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:30.521312+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:31.521601+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x19667b000/0x0/0x1bfc00000, data 0x38ebe76/0x3b03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5634279 data_alloc: 234881024 data_used: 22106112
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:32.521867+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:33.522092+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559685632 unmapped: 83156992 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:34.522260+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559693824 unmapped: 83148800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x196678000/0x0/0x1bfc00000, data 0x38eee76/0x3b06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:35.522462+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc68e1000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dc68e1000 session 0x562dbd1b30e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3666000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559693824 unmapped: 83148800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dc3666000 session 0x562dbd84e1e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:36.522658+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5637555 data_alloc: 234881024 data_used: 23416832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559693824 unmapped: 83148800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:37.522866+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559693824 unmapped: 83148800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:38.523080+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc739400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.833637238s of 12.855980873s, submitted: 4
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dbc739400 session 0x562dbb47e000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x196678000/0x0/0x1bfc00000, data 0x38eee76/0x3b06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559693824 unmapped: 83148800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:39.523284+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x196677000/0x0/0x1bfc00000, data 0x38eee86/0x3b07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559693824 unmapped: 83148800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:40.523440+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x196677000/0x0/0x1bfc00000, data 0x38eee86/0x3b07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559693824 unmapped: 83148800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:41.523742+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5640703 data_alloc: 234881024 data_used: 23441408
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559693824 unmapped: 83148800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:42.523921+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5a82000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dc5a82000 session 0x562dbab8a3c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc57da000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dc57da000 session 0x562dbd1b21e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559693824 unmapped: 83148800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:43.524128+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559693824 unmapped: 83148800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc835800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:44.524295+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dbc835800 session 0x562dc822c960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaba0c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 ms_handle_reset con 0x562dbaba0c00 session 0x562dbc8f2b40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559783936 unmapped: 83058688 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:45.524499+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559783936 unmapped: 83058688 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 heartbeat osd_stat(store_statfs(0x1960c9000/0x0/0x1bfc00000, data 0x3e9be86/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:46.524625+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5685536 data_alloc: 234881024 data_used: 23441408
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559783936 unmapped: 83058688 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:47.524809+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559783936 unmapped: 83058688 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:48.524957+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559783936 unmapped: 83058688 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:49.525111+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a5c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.471217155s of 10.727568626s, submitted: 25
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559783936 unmapped: 83058688 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:50.525285+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 405 heartbeat osd_stat(store_statfs(0x1960c6000/0x0/0x1bfc00000, data 0x3e9dadf/0x40b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560865280 unmapped: 81977344 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc721c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:51.525396+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 406 ms_handle_reset con 0x562dbc721c00 session 0x562dbd7ff0e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5693801 data_alloc: 234881024 data_used: 23449600
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560881664 unmapped: 81960960 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 406 handle_osd_map epochs [406,407], i have 406, src has [1,407]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:52.525506+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaba0c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc835800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 407 ms_handle_reset con 0x562dbaba0c00 session 0x562dbd5d9860
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560717824 unmapped: 82124800 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 407 heartbeat osd_stat(store_statfs(0x195dad000/0x0/0x1bfc00000, data 0x41b1463/0x43ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 407 handle_osd_map epochs [407,408], i have 407, src has [1,408]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:53.525631+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 408 ms_handle_reset con 0x562dbc835800 session 0x562dbba83860
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 408 ms_handle_reset con 0x562dbb5a5c00 session 0x562dbb56f4a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560807936 unmapped: 82034688 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:54.525746+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560848896 unmapped: 81993728 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:55.525871+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc54c7c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 408 ms_handle_reset con 0x562dc54c7c00 session 0x562dbabfa3c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560857088 unmapped: 81985536 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:56.526094+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc68e1000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 408 ms_handle_reset con 0x562dc68e1000 session 0x562dbb9de5a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5885576 data_alloc: 234881024 data_used: 23461888
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560824320 unmapped: 82018304 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 408 heartbeat osd_stat(store_statfs(0x19449c000/0x0/0x1bfc00000, data 0x56b313a/0x58d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:57.526243+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaba0c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a5c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 408 ms_handle_reset con 0x562dbb5a5c00 session 0x562dbff914a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc835800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc54c7c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559374336 unmapped: 83468288 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:58.526498+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 408 ms_handle_reset con 0x562dbc835800 session 0x562dbecb2960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559603712 unmapped: 83238912 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:59.526651+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 408 handle_osd_map epochs [408,409], i have 408, src has [1,409]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.030683517s of 10.079618454s, submitted: 340
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 409 ms_handle_reset con 0x562dc54c7c00 session 0x562dbb9df860
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559579136 unmapped: 83263488 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 heartbeat osd_stat(store_statfs(0x19387f000/0x0/0x1bfc00000, data 0x62cddaf/0x64ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:00.526821+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dbaba0c00 session 0x562dbd7ff2c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559579136 unmapped: 83263488 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:01.526971+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5987540 data_alloc: 234881024 data_used: 23470080
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559587328 unmapped: 83255296 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:02.527176+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 heartbeat osd_stat(store_statfs(0x19387b000/0x0/0x1bfc00000, data 0x62cf8ee/0x64f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559595520 unmapped: 83247104 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:03.527319+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc68e0400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dc68e0400 session 0x562dbd5d8960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559595520 unmapped: 83247104 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaba0c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dbaba0c00 session 0x562dbc8f2780
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:04.527440+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd882c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dbd882c00 session 0x562dba872d20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2102c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dc2102c00 session 0x562dbd84dc20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559595520 unmapped: 83247104 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd67e800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbf6ef800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:05.527558+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 559595520 unmapped: 83247104 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:06.527695+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6048401 data_alloc: 234881024 data_used: 32047104
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563273728 unmapped: 79568896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:07.527863+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563273728 unmapped: 79568896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:08.528004+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 heartbeat osd_stat(store_statfs(0x19387d000/0x0/0x1bfc00000, data 0x62cf8ee/0x64f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563273728 unmapped: 79568896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:09.528140+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563273728 unmapped: 79568896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:10.528268+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563273728 unmapped: 79568896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:11.528549+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6078161 data_alloc: 251658240 data_used: 36192256
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563273728 unmapped: 79568896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:12.528694+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563273728 unmapped: 79568896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:13.528994+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563273728 unmapped: 79568896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 heartbeat osd_stat(store_statfs(0x19387d000/0x0/0x1bfc00000, data 0x62cf8ee/0x64f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:14.529069+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.160544395s of 15.233458519s, submitted: 31
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 heartbeat osd_stat(store_statfs(0x19387d000/0x0/0x1bfc00000, data 0x62cf8ee/0x64f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563290112 unmapped: 79552512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:15.529189+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563290112 unmapped: 79552512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:16.529306+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6079081 data_alloc: 251658240 data_used: 36192256
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563298304 unmapped: 79544320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:17.529458+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565846016 unmapped: 76996608 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:18.529606+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc834c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dbc834c00 session 0x562dbbb230e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbf563400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb701400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566075392 unmapped: 76767232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:19.529761+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 heartbeat osd_stat(store_statfs(0x193302000/0x0/0x1bfc00000, data 0x683b8ee/0x6a5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566575104 unmapped: 76267520 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:20.529876+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571523072 unmapped: 71319552 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:21.530020+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6225035 data_alloc: 251658240 data_used: 48607232
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571523072 unmapped: 71319552 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:22.530222+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571531264 unmapped: 71311360 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:23.530394+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 heartbeat osd_stat(store_statfs(0x1932ff000/0x0/0x1bfc00000, data 0x683e8ee/0x6a60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571531264 unmapped: 71311360 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:24.530566+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571531264 unmapped: 71311360 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:25.530710+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.780009270s of 11.007946014s, submitted: 78
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571564032 unmapped: 71278592 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:26.530842+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6217183 data_alloc: 251658240 data_used: 48607232
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571564032 unmapped: 71278592 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:27.531096+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571564032 unmapped: 71278592 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:28.531210+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 heartbeat osd_stat(store_statfs(0x193309000/0x0/0x1bfc00000, data 0x68438ee/0x6a65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571564032 unmapped: 71278592 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:29.531364+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571564032 unmapped: 71278592 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:30.531614+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571564032 unmapped: 71278592 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:31.531755+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6363169 data_alloc: 251658240 data_used: 49393664
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dbd67e800 session 0x562dbd5feb40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dbf6ef800 session 0x562dbd5d92c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575488000 unmapped: 67354624 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:32.531890+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaba0c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dbaba0c00 session 0x562dc822e780
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575586304 unmapped: 67256320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:33.532045+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575586304 unmapped: 67256320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:34.532221+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 heartbeat osd_stat(store_statfs(0x194b86000/0x0/0x1bfc00000, data 0x68348ee/0x6220000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575586304 unmapped: 67256320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:35.532381+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575586304 unmapped: 67256320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:36.532544+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.567891121s of 11.008643150s, submitted: 149
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6146745 data_alloc: 251658240 data_used: 36081664
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 576643072 unmapped: 66199552 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:37.532797+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 576643072 unmapped: 66199552 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:38.533013+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 heartbeat osd_stat(store_statfs(0x194b8b000/0x0/0x1bfc00000, data 0x68378ee/0x6223000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 576643072 unmapped: 66199552 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:39.533257+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 576643072 unmapped: 66199552 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:40.533439+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 heartbeat osd_stat(store_statfs(0x194b87000/0x0/0x1bfc00000, data 0x683a8ee/0x6226000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 576643072 unmapped: 66199552 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:41.533613+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6147625 data_alloc: 251658240 data_used: 36089856
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 576651264 unmapped: 66191360 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:42.533846+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 576659456 unmapped: 66183168 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:43.534080+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dbf563400 session 0x562dbc8de000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dbb701400 session 0x562dbff912c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 576659456 unmapped: 66183168 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:44.534282+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd8a3c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dbd8a3c00 session 0x562dbb4cb0e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 576659456 unmapped: 66183168 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:45.534435+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 576659456 unmapped: 66183168 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:46.534564+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 heartbeat osd_stat(store_statfs(0x194bac000/0x0/0x1bfc00000, data 0x68168ee/0x6202000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6140485 data_alloc: 251658240 data_used: 35979264
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 576659456 unmapped: 66183168 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:47.534801+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dbc737c00 session 0x562dbd1b3c20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.351653099s of 11.635720253s, submitted: 10
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dc2854c00 session 0x562dbab8a780
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 576659456 unmapped: 66183168 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:48.534953+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaba0c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 ms_handle_reset con 0x562dbaba0c00 session 0x562dbbb230e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 71131136 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:49.535092+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 heartbeat osd_stat(store_statfs(0x194bac000/0x0/0x1bfc00000, data 0x68168ee/0x6202000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:50.535267+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 71131136 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:51.535453+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 71131136 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5833995 data_alloc: 234881024 data_used: 22663168
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:52.535605+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 71131136 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:53.535751+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 71131136 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc57da000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 heartbeat osd_stat(store_statfs(0x196747000/0x0/0x1bfc00000, data 0x4c7c8de/0x4667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:54.535879+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571719680 unmapped: 71122944 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 411 ms_handle_reset con 0x562dc57da000 session 0x562dbd1b21e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2855c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 411 ms_handle_reset con 0x562dc2855c00 session 0x562dbb9df0e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:55.536070+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565313536 unmapped: 77529088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaba0c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 411 heartbeat osd_stat(store_statfs(0x197d7b000/0x0/0x1bfc00000, data 0x2e0c529/0x302d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 411 heartbeat osd_stat(store_statfs(0x197d7b000/0x0/0x1bfc00000, data 0x2e0c529/0x302d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:56.536189+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565313536 unmapped: 77529088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc737c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5445562 data_alloc: 218103808 data_used: 10223616
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:57.536384+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565313536 unmapped: 77529088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 412 ms_handle_reset con 0x562dbaba0c00 session 0x562dbd1b23c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 413 ms_handle_reset con 0x562dbc737c00 session 0x562dbb4ca1e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2854c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 413 ms_handle_reset con 0x562dc2854c00 session 0x562dbd84c1e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:58.536518+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 413 heartbeat osd_stat(store_statfs(0x198e00000/0x0/0x1bfc00000, data 0x1d8ae49/0x1fac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:59.536665+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 413 heartbeat osd_stat(store_statfs(0x198e00000/0x0/0x1bfc00000, data 0x1d8ae49/0x1fac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 413 heartbeat osd_stat(store_statfs(0x198e00000/0x0/0x1bfc00000, data 0x1d8ae49/0x1fac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:00.536833+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 413 heartbeat osd_stat(store_statfs(0x198e00000/0x0/0x1bfc00000, data 0x1d8ae49/0x1fac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:01.536963+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5408029 data_alloc: 218103808 data_used: 10219520
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:02.537118+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:03.537294+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:04.537410+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.379796982s of 16.798450470s, submitted: 154
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:05.537556+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565288960 unmapped: 77553664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:06.537687+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565297152 unmapped: 77545472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5410331 data_alloc: 218103808 data_used: 10219520
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:07.537867+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565297152 unmapped: 77545472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:08.538005+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565297152 unmapped: 77545472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:09.538236+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565305344 unmapped: 77537280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:10.538489+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565305344 unmapped: 77537280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:11.538690+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565305344 unmapped: 77537280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5410331 data_alloc: 218103808 data_used: 10219520
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:12.538831+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565305344 unmapped: 77537280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:13.538979+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565313536 unmapped: 77529088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:14.539352+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565321728 unmapped: 77520896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:15.539474+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565321728 unmapped: 77520896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:16.539594+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565321728 unmapped: 77520896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5410331 data_alloc: 218103808 data_used: 10219520
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:17.539769+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565321728 unmapped: 77520896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:18.539894+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565321728 unmapped: 77520896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:19.540015+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565321728 unmapped: 77520896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:20.540200+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565321728 unmapped: 77520896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:21.540334+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565321728 unmapped: 77520896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5410331 data_alloc: 218103808 data_used: 10219520
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:22.540465+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565321728 unmapped: 77520896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:23.540598+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565321728 unmapped: 77520896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:24.540751+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565321728 unmapped: 77520896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:25.540923+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565321728 unmapped: 77520896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:26.541090+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565321728 unmapped: 77520896 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5410331 data_alloc: 218103808 data_used: 10219520
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:27.541356+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565329920 unmapped: 77512704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:28.541502+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565329920 unmapped: 77512704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:29.541666+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565329920 unmapped: 77512704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:30.541815+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565338112 unmapped: 77504512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:31.541959+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565338112 unmapped: 77504512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5410331 data_alloc: 218103808 data_used: 10219520
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:32.542199+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565338112 unmapped: 77504512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:33.542330+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565338112 unmapped: 77504512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:34.542487+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565338112 unmapped: 77504512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:35.542596+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565338112 unmapped: 77504512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:36.542734+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565338112 unmapped: 77504512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5410331 data_alloc: 218103808 data_used: 10219520
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:37.542902+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565338112 unmapped: 77504512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:38.543069+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:39.543194+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:40.543334+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:41.543464+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5410331 data_alloc: 218103808 data_used: 10219520
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:42.543629+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:43.543773+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:44.543944+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:45.544100+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565346304 unmapped: 77496320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:46.544374+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565354496 unmapped: 77488128 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc57d8800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc57d8800 session 0x562dba8a01e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb60e800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbb60e800 session 0x562dba872b40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaba0c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbaba0c00 session 0x562dbecb2000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc737c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbc737c00 session 0x562dbbb14960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2854c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 42.153324127s of 42.163852692s, submitted: 22
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5413057 data_alloc: 218103808 data_used: 10219520
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:47.544579+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc2854c00 session 0x562dba8b6000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc57d8800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc57d8800 session 0x562dbd526960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc18e4800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565673984 unmapped: 77168640 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc18e4800 session 0x562dbd7fe960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaba0c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbaba0c00 session 0x562dbbb14f00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc737c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbc737c00 session 0x562dbba832c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:48.544726+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565673984 unmapped: 77168640 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:49.544842+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565673984 unmapped: 77168640 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1986cf000/0x0/0x1bfc00000, data 0x24baa32/0x26df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:50.544965+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565673984 unmapped: 77168640 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:51.545106+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565673984 unmapped: 77168640 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5474404 data_alloc: 218103808 data_used: 10223616
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:52.545248+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565673984 unmapped: 77168640 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5acc000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc5acc000 session 0x562dbd7feb40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:53.545382+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565682176 unmapped: 77160448 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc707000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbc707000 session 0x562dbbb150e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:54.545512+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba6a0c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dba6a0c00 session 0x562dbd5ff0e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba6a0c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565690368 unmapped: 77152256 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dba6a0c00 session 0x562dc822fe00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5acd400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a5800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:55.545644+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565993472 unmapped: 76849152 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1986aa000/0x0/0x1bfc00000, data 0x24dea41/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:56.545786+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565993472 unmapped: 76849152 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:57.546007+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5478980 data_alloc: 218103808 data_used: 10223616
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565993472 unmapped: 76849152 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1986aa000/0x0/0x1bfc00000, data 0x24dea41/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:58.546192+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565993472 unmapped: 76849152 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:59.546330+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1986aa000/0x0/0x1bfc00000, data 0x24dea41/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565993472 unmapped: 76849152 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:00.546532+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1986aa000/0x0/0x1bfc00000, data 0x24dea41/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565993472 unmapped: 76849152 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:01.546674+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565993472 unmapped: 76849152 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:02.546813+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5532420 data_alloc: 234881024 data_used: 17657856
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc5acd400 session 0x562dbbb6ad20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbb5a5800 session 0x562dbd573680
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565993472 unmapped: 76849152 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac11400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.490621567s of 15.586330414s, submitted: 39
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:03.546986+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbac11400 session 0x562dbbb22780
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:04.547134+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:05.547280+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:06.547468+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:07.547674+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5418162 data_alloc: 218103808 data_used: 10219520
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:08.547828+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:09.547946+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:10.548089+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:11.548223+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:12.548388+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5418162 data_alloc: 218103808 data_used: 10219520
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:13.548560+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:14.548755+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:15.548900+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:16.549062+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:17.549254+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5418162 data_alloc: 218103808 data_used: 10219520
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:18.549397+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566034432 unmapped: 76808192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:19.549580+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566034432 unmapped: 76808192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:20.549729+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566034432 unmapped: 76808192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:21.549868+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566034432 unmapped: 76808192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:22.550019+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5418162 data_alloc: 218103808 data_used: 10219520
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566034432 unmapped: 76808192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:23.550201+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566034432 unmapped: 76808192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:24.550359+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566034432 unmapped: 76808192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:25.550489+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566034432 unmapped: 76808192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:26.550606+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566042624 unmapped: 76800000 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:27.550832+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5418162 data_alloc: 218103808 data_used: 10219520
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566042624 unmapped: 76800000 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:28.550980+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566042624 unmapped: 76800000 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:29.551142+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566042624 unmapped: 76800000 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dfe000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:30.551310+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566042624 unmapped: 76800000 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1a400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.252534866s of 28.299730301s, submitted: 19
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:31.551501+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566018048 unmapped: 76824576 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbaa1a400 session 0x562dbd84be00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba6a0c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dba6a0c00 session 0x562dc822fa40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac11400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbac11400 session 0x562dbb56f860
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a5800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:32.551647+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5519877 data_alloc: 218103808 data_used: 10219520
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbb5a5800 session 0x562dbff91a40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5acd400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc5acd400 session 0x562dbb47ef00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566018048 unmapped: 76824576 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1980f6000/0x0/0x1bfc00000, data 0x2a959c0/0x2cb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:33.551786+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566018048 unmapped: 76824576 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:34.551971+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566018048 unmapped: 76824576 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:35.552362+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566018048 unmapped: 76824576 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:36.552467+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566018048 unmapped: 76824576 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1980f6000/0x0/0x1bfc00000, data 0x2a959c0/0x2cb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:37.552633+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5519877 data_alloc: 218103808 data_used: 10219520
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:38.552808+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc18e5c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc18e5c00 session 0x562dbd84ba40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:39.553008+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3668c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:40.553196+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc3668c00 session 0x562dba8b7e00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:41.553367+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1980f6000/0x0/0x1bfc00000, data 0x2a959c0/0x2cb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3669000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc3669000 session 0x562dbd5272c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:42.553518+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc8be7800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.124199867s of 11.224075317s, submitted: 25
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5521363 data_alloc: 218103808 data_used: 10219520
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc8be7800 session 0x562dbd1b25a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:43.553682+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba77f000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566026240 unmapped: 76816384 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbca8d400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:44.553801+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566034432 unmapped: 76808192 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:45.553961+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565919744 unmapped: 76922880 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:46.554123+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 75849728 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1980d1000/0x0/0x1bfc00000, data 0x2ab99d0/0x2cdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:47.554314+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5620731 data_alloc: 234881024 data_used: 21942272
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 75849728 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:48.554511+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 75849728 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:49.554700+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1980d1000/0x0/0x1bfc00000, data 0x2ab99d0/0x2cdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 75849728 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:50.554849+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 75849728 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:51.555650+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 75849728 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dba77f000 session 0x562dbb56f860
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbca8d400 session 0x562dba7fa960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:52.555807+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbca8d400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.052662849s of 10.073778152s, submitted: 5
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5620647 data_alloc: 234881024 data_used: 21942272
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567001088 unmapped: 75841536 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:53.556327+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:54.556617+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:55.557846+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbca8d400 session 0x562dbd5d9c20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198ddb000/0x0/0x1bfc00000, data 0x1db09c0/0x1fd3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:56.558402+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:57.558846+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5427684 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:58.559209+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:59.569370+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:00.569811+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:01.570219+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:02.570342+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5427684 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:03.570645+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:04.571306+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:05.571571+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:06.571794+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:07.572084+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5427684 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:08.572226+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:09.572378+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:10.572572+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:11.572818+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:12.572967+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5427684 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:13.573169+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:14.573311+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:15.573578+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:16.573857+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:17.574131+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5427684 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:18.574351+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:19.574497+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:20.574665+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560209920 unmapped: 82632704 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:21.574858+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560218112 unmapped: 82624512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:22.574994+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5427684 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560218112 unmapped: 82624512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:23.575147+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560218112 unmapped: 82624512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:24.575338+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560218112 unmapped: 82624512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:25.575504+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560218112 unmapped: 82624512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:26.575655+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560218112 unmapped: 82624512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:27.575811+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5427684 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560218112 unmapped: 82624512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:28.575953+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560218112 unmapped: 82624512 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:29.576077+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560226304 unmapped: 82616320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:30.576221+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560226304 unmapped: 82616320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:31.576357+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560226304 unmapped: 82616320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:32.576487+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5427684 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560226304 unmapped: 82616320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:33.576578+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560226304 unmapped: 82616320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:34.576741+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560226304 unmapped: 82616320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:35.576900+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560226304 unmapped: 82616320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:36.577124+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560226304 unmapped: 82616320 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:37.577324+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc707000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 45.020236969s of 45.082485199s, submitted: 25
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5427684 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560586752 unmapped: 82255872 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbc707000 session 0x562dbd84e960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:38.577457+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560586752 unmapped: 82255872 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:39.577585+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560586752 unmapped: 82255872 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198ce2000/0x0/0x1bfc00000, data 0x1ea99c0/0x20cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:40.577758+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560586752 unmapped: 82255872 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:41.577928+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560586752 unmapped: 82255872 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:42.578132+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5441886 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560586752 unmapped: 82255872 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198ce2000/0x0/0x1bfc00000, data 0x1ea99c0/0x20cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:43.578302+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560586752 unmapped: 82255872 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:44.578440+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560586752 unmapped: 82255872 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:45.578617+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd67f400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560594944 unmapped: 82247680 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbd67f400 session 0x562dbc8def00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:46.578786+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560594944 unmapped: 82247680 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198ce2000/0x0/0x1bfc00000, data 0x1ea99c0/0x20cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbe3edc00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:47.578943+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5446846 data_alloc: 218103808 data_used: 9527296
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560594944 unmapped: 82247680 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:48.579102+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198ce2000/0x0/0x1bfc00000, data 0x1ea99c0/0x20cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560594944 unmapped: 82247680 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:49.579225+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560594944 unmapped: 82247680 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:50.579364+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560594944 unmapped: 82247680 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:51.579521+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198ce2000/0x0/0x1bfc00000, data 0x1ea99c0/0x20cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560594944 unmapped: 82247680 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:52.579695+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5447966 data_alloc: 218103808 data_used: 9695232
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560594944 unmapped: 82247680 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:53.579825+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560603136 unmapped: 82239488 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198ce2000/0x0/0x1bfc00000, data 0x1ea99c0/0x20cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:54.579999+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560603136 unmapped: 82239488 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:55.580212+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560603136 unmapped: 82239488 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:56.580392+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560603136 unmapped: 82239488 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:57.580610+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198ce2000/0x0/0x1bfc00000, data 0x1ea99c0/0x20cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5447966 data_alloc: 218103808 data_used: 9695232
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 560603136 unmapped: 82239488 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.518692017s of 20.542280197s, submitted: 11
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:58.580792+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564060160 unmapped: 78782464 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:59.580920+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19826e000/0x0/0x1bfc00000, data 0x291d9c0/0x2b40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564076544 unmapped: 78766080 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19826e000/0x0/0x1bfc00000, data 0x291d9c0/0x2b40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:00.581094+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564076544 unmapped: 78766080 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:01.581261+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19826e000/0x0/0x1bfc00000, data 0x291d9c0/0x2b40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564076544 unmapped: 78766080 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:02.581395+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5536814 data_alloc: 218103808 data_used: 10964992
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564076544 unmapped: 78766080 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:03.581528+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564076544 unmapped: 78766080 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:04.581658+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564011008 unmapped: 78831616 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:05.581803+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564011008 unmapped: 78831616 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19824d000/0x0/0x1bfc00000, data 0x293e9c0/0x2b61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:06.581926+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564011008 unmapped: 78831616 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:07.582083+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5530274 data_alloc: 218103808 data_used: 10977280
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564011008 unmapped: 78831616 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:08.582191+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564011008 unmapped: 78831616 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:09.582362+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564019200 unmapped: 78823424 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:10.582518+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.200681686s of 12.423737526s, submitted: 87
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563994624 unmapped: 78848000 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:11.582736+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19823f000/0x0/0x1bfc00000, data 0x294c9c0/0x2b6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563994624 unmapped: 78848000 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:12.582899+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbe3edc00 session 0x562dba8a03c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5531018 data_alloc: 218103808 data_used: 10977280
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb60e400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563994624 unmapped: 78848000 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:13.583068+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbb60e400 session 0x562dbc8f30e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564011008 unmapped: 78831616 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:14.583185+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564011008 unmapped: 78831616 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:15.583344+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564011008 unmapped: 78831616 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:16.583530+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564011008 unmapped: 78831616 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:17.583703+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5433604 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564019200 unmapped: 78823424 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:18.583874+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564019200 unmapped: 78823424 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:19.584215+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564019200 unmapped: 78823424 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:20.584368+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564019200 unmapped: 78823424 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:21.584503+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564019200 unmapped: 78823424 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:22.584633+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5433604 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564019200 unmapped: 78823424 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:23.584769+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564019200 unmapped: 78823424 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:24.584938+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564019200 unmapped: 78823424 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:25.585108+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564027392 unmapped: 78815232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:26.585287+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564027392 unmapped: 78815232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:27.585501+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5433604 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564027392 unmapped: 78815232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:28.585688+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564027392 unmapped: 78815232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:29.585828+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564027392 unmapped: 78815232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:30.585973+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564027392 unmapped: 78815232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:31.586125+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564027392 unmapped: 78815232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:32.586265+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5433604 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564027392 unmapped: 78815232 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:33.586425+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564035584 unmapped: 78807040 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:34.586666+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564035584 unmapped: 78807040 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:35.586849+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564035584 unmapped: 78807040 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:36.587007+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564035584 unmapped: 78807040 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:37.587265+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5433604 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564035584 unmapped: 78807040 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:38.587488+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564035584 unmapped: 78807040 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:39.587655+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564035584 unmapped: 78807040 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:40.587818+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564035584 unmapped: 78807040 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:41.587979+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 564043776 unmapped: 78798848 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:42.588129+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbae52400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbae52400 session 0x562dbc776b40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1497c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc1497c00 session 0x562dba8a1a40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1a800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbaa1a800 session 0x562dbbb154a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5a82c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc5a82c00 session 0x562dbb4cad20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198dff000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1ed2400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 32.121410370s of 32.159393311s, submitted: 10
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5435078 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565108736 unmapped: 77733888 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:43.588279+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc1ed2400 session 0x562dbd5d8b40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1a800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbaa1a800 session 0x562dbb9df860
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbae52400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbae52400 session 0x562dbd1b3680
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1497c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565108736 unmapped: 77733888 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:44.588450+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc1497c00 session 0x562dc822c780
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbce4f800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbce4f800 session 0x562dbd84d0e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565108736 unmapped: 77733888 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:45.588595+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565108736 unmapped: 77733888 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:46.593137+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565108736 unmapped: 77733888 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:47.593359+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198a47000/0x0/0x1bfc00000, data 0x21449c0/0x2367000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5464721 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565108736 unmapped: 77733888 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:48.593505+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565108736 unmapped: 77733888 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:49.593621+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565116928 unmapped: 77725696 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:50.593771+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc54c7800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565116928 unmapped: 77725696 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:51.593922+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc54c7800 session 0x562dbd573680
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1a800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbae52400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565116928 unmapped: 77725696 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:52.594101+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5483289 data_alloc: 218103808 data_used: 11051008
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198a23000/0x0/0x1bfc00000, data 0x21689c0/0x238b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [1])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565116928 unmapped: 77725696 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:53.594231+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565116928 unmapped: 77725696 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:54.594358+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565116928 unmapped: 77725696 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:55.594589+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198a23000/0x0/0x1bfc00000, data 0x21689c0/0x238b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198a23000/0x0/0x1bfc00000, data 0x21689c0/0x238b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565116928 unmapped: 77725696 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:56.594787+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565116928 unmapped: 77725696 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:57.594991+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5494969 data_alloc: 218103808 data_used: 12722176
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565116928 unmapped: 77725696 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:58.595167+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198a23000/0x0/0x1bfc00000, data 0x21689c0/0x238b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565125120 unmapped: 77717504 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:59.595322+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565125120 unmapped: 77717504 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:00.595455+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198a23000/0x0/0x1bfc00000, data 0x21689c0/0x238b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565125120 unmapped: 77717504 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:01.595620+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x198a23000/0x0/0x1bfc00000, data 0x21689c0/0x238b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565125120 unmapped: 77717504 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:02.595775+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5494969 data_alloc: 218103808 data_used: 12722176
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.423847198s of 20.521347046s, submitted: 17
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565125120 unmapped: 77717504 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:03.595924+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568721408 unmapped: 74121216 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3669000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc3669000 session 0x562dbb56f2c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:04.596067+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc8be6c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc8be6c00 session 0x562dba895860
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc57db800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc57db800 session 0x562dc18323c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196c64000/0x0/0x1bfc00000, data 0x2d7f9c0/0x2fa2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbabaf400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbabaf400 session 0x562dbd1b2d20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5acd400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568721408 unmapped: 74121216 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:05.596209+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568852480 unmapped: 73990144 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:06.596360+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc5acd400 session 0x562dbff910e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc720000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbc720000 session 0x562dbab8a3c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbabaf400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbabaf400 session 0x562dbabfb860
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3669000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19689f000/0x0/0x1bfc00000, data 0x31449c0/0x3367000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567951360 unmapped: 74891264 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:07.596532+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc3669000 session 0x562dbd5ff4a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc57db800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc57db800 session 0x562dbb34f680
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5636806 data_alloc: 218103808 data_used: 13115392
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc8be6c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc8be6c00 session 0x562dbbac0b40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567951360 unmapped: 74891264 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:08.596623+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbabaf400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbabaf400 session 0x562dbd84f0e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc720000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbc720000 session 0x562dba872780
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567951360 unmapped: 74891264 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3669000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:09.596750+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc3669000 session 0x562dbecb2960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbe3ed800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3668000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567869440 unmapped: 74973184 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:10.596875+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196874000/0x0/0x1bfc00000, data 0x31769d0/0x339a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567582720 unmapped: 75259904 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:11.596973+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 75251712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:12.597107+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5669817 data_alloc: 234881024 data_used: 16855040
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 75251712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:13.597225+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 75251712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:14.597363+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 75251712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:15.597493+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196871000/0x0/0x1bfc00000, data 0x31799d0/0x339d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196871000/0x0/0x1bfc00000, data 0x31799d0/0x339d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 75251712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:16.597638+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196871000/0x0/0x1bfc00000, data 0x31799d0/0x339d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 75251712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:17.597822+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5669817 data_alloc: 234881024 data_used: 16855040
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 75251712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:18.597954+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196871000/0x0/0x1bfc00000, data 0x31799d0/0x339d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:19.598063+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 75251712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:20.598199+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 75251712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196871000/0x0/0x1bfc00000, data 0x31799d0/0x339d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:21.598365+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 75251712 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.615375519s of 19.088668823s, submitted: 114
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:22.598488+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567812096 unmapped: 75030528 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [O-0] New memtable created with log file: #60. Immutable memtables: 0.
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5721487 data_alloc: 234881024 data_used: 17354752
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:23.598575+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571244544 unmapped: 71598080 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:24.598721+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571252736 unmapped: 71589888 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19510d000/0x0/0x1bfc00000, data 0x372f9d0/0x3953000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:25.598829+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 71385088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:26.598963+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 71385088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:27.599125+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 71385088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5728055 data_alloc: 234881024 data_used: 17260544
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:28.599249+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 71385088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1950ff000/0x0/0x1bfc00000, data 0x37439d0/0x3967000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:29.599376+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 71385088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:30.599576+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 71385088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:31.599772+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 71385088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:32.599936+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 71385088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5728071 data_alloc: 234881024 data_used: 17260544
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:33.600105+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 71385088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1950ff000/0x0/0x1bfc00000, data 0x37439d0/0x3967000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:34.600264+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 71385088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:35.600395+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 71385088 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1950ff000/0x0/0x1bfc00000, data 0x37439d0/0x3967000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.496047974s of 13.780189514s, submitted: 72
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:36.600525+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571392000 unmapped: 71450624 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbe3ed800 session 0x562dbecb25a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc3668000 session 0x562dbabfb680
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbabaf400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:37.600670+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571400192 unmapped: 71442432 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5722903 data_alloc: 234881024 data_used: 17260544
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:38.600798+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571400192 unmapped: 71442432 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbabaf400 session 0x562dc18332c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:39.600950+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571424768 unmapped: 71417856 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x195a9b000/0x0/0x1bfc00000, data 0x2db09c0/0x2fd3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:40.601088+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571424768 unmapped: 71417856 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:41.601268+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571424768 unmapped: 71417856 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:42.601457+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571424768 unmapped: 71417856 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5612912 data_alloc: 218103808 data_used: 13115392
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:43.601642+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571424768 unmapped: 71417856 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbaa1a800 session 0x562dba872b40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbae52400 session 0x562dbd84a000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd881000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:44.601809+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196a9b000/0x0/0x1bfc00000, data 0x1db09c0/0x1fd3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbd881000 session 0x562dbabfa3c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:45.601977+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:46.602118+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:47.602288+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5457360 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:48.602442+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:49.602580+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:50.602716+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196abf000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:51.602842+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:52.602980+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5457360 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:53.603103+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196abf000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:54.603225+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196abf000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:55.603422+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196abf000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:56.603577+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196abf000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:57.603827+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196abf000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5457360 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:58.604006+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:59.604186+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:00.604321+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:01.604455+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 71409664 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:02.604647+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 71401472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196abf000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5457360 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:03.604781+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 71401472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:04.604920+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 71401472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:05.605104+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 71401472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:06.605246+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 71401472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196abf000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:07.605432+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 71401472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5457360 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:08.605743+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 71401472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:09.605930+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 71401472 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:10.606103+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571449344 unmapped: 71393280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:11.606268+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571449344 unmapped: 71393280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:12.606425+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571449344 unmapped: 71393280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196abf000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5457360 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:13.606565+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571449344 unmapped: 71393280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:14.606732+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571449344 unmapped: 71393280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:15.606849+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571449344 unmapped: 71393280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:16.606984+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571449344 unmapped: 71393280 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x196abf000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:17.607167+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571400192 unmapped: 71442432 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbbb83c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 40.288108826s of 42.024108887s, submitted: 75
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5494804 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbbb83c00 session 0x562dbb34e5a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:18.607288+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571564032 unmapped: 71278592 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:19.607385+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571564032 unmapped: 71278592 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:20.607647+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571564032 unmapped: 71278592 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:21.607752+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571564032 unmapped: 71278592 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1965ea000/0x0/0x1bfc00000, data 0x22619c0/0x2484000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:22.607877+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571564032 unmapped: 71278592 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbe3ed800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbe3ed800 session 0x562dbbb14f00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5494804 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:23.607989+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571564032 unmapped: 71278592 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc10ff000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc10ff000 session 0x562dbd526960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:24.608122+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571564032 unmapped: 71278592 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3668400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc3668400 session 0x562dba8954a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc720000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbc720000 session 0x562dbd1b32c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:25.608269+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1965c5000/0x0/0x1bfc00000, data 0x22859e3/0x24a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571588608 unmapped: 71254016 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc10ffc00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc706c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:26.608432+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571596800 unmapped: 71245824 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:27.608607+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571596800 unmapped: 71245824 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5537178 data_alloc: 218103808 data_used: 13893632
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:28.609543+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571596800 unmapped: 71245824 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:29.609705+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1965c5000/0x0/0x1bfc00000, data 0x22859e3/0x24a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571596800 unmapped: 71245824 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1965c5000/0x0/0x1bfc00000, data 0x22859e3/0x24a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:30.609860+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571596800 unmapped: 71245824 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:31.609998+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571596800 unmapped: 71245824 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2026-01-31T09:04:32.610184+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _finish_auth 0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:32.611528+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571596800 unmapped: 71245824 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5537178 data_alloc: 218103808 data_used: 13893632
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:33.610339+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1965c5000/0x0/0x1bfc00000, data 0x22859e3/0x24a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571596800 unmapped: 71245824 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:34.610497+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571596800 unmapped: 71245824 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:35.610671+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571596800 unmapped: 71245824 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:36.610807+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571596800 unmapped: 71245824 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.229181290s of 19.316902161s, submitted: 19
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:37.611052+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571621376 unmapped: 71221248 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5587726 data_alloc: 218103808 data_used: 13918208
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:38.613920+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571752448 unmapped: 71090176 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x195f2a000/0x0/0x1bfc00000, data 0x29209e3/0x2b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2718f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:39.615896+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #61. Immutable memtables: 16.
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574160896 unmapped: 68681728 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:40.616645+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574111744 unmapped: 68730880 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:41.616908+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574111744 unmapped: 68730880 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:42.618017+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574111744 unmapped: 68730880 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5618182 data_alloc: 218103808 data_used: 15122432
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:43.626845+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x194af2000/0x0/0x1bfc00000, data 0x2bb89e3/0x2ddc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574111744 unmapped: 68730880 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:44.627496+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574111744 unmapped: 68730880 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:45.628233+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574111744 unmapped: 68730880 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:46.628402+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574111744 unmapped: 68730880 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x194af2000/0x0/0x1bfc00000, data 0x2bb89e3/0x2ddc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:47.629359+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574111744 unmapped: 68730880 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x194af2000/0x0/0x1bfc00000, data 0x2bb89e3/0x2ddc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5618182 data_alloc: 218103808 data_used: 15122432
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:48.629569+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x194af2000/0x0/0x1bfc00000, data 0x2bb89e3/0x2ddc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574111744 unmapped: 68730880 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:49.630362+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574111744 unmapped: 68730880 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:50.630706+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574111744 unmapped: 68730880 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:51.631003+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x194af2000/0x0/0x1bfc00000, data 0x2bb89e3/0x2ddc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574111744 unmapped: 68730880 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbc706c00 session 0x562dbd5d94a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.434545517s of 14.631439209s, submitted: 69
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dc10ffc00 session 0x562dbd573e00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc706c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:52.631088+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbc706c00 session 0x562dbba832c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:53.631294+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5468078 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:54.631430+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:55.631549+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:56.631687+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:57.631914+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:58.632212+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5468078 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:59.632440+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:00.632720+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:01.632943+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:02.633188+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:03.633425+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5468078 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:04.633614+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:05.633883+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:06.634144+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:07.634334+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:08.634598+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5468078 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:09.634788+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:10.634920+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:11.635096+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:12.638691+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:13.638843+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5468078 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:14.639004+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:15.639184+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:16.639326+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:17.639513+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:18.639680+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5468078 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:19.639866+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:20.640043+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:21.640237+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:22.640331+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:23.640446+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5468078 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:24.640575+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:25.640730+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:26.640854+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:27.640983+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:28.641133+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5468078 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:29.641224+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:30.641326+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:31.641441+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:32.641622+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:33.641786+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5468078 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:34.641951+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:35.642152+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:36.642343+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:37.642539+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:38.642673+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5468078 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:39.642779+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:40.643172+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:41.643415+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:42.643802+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:43.644319+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5468078 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:44.644618+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 81436672 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:45.645008+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561422336 unmapped: 81420288 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:46.645301+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561422336 unmapped: 81420288 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:47.645640+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561422336 unmapped: 81420288 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:48.646104+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5468078 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561422336 unmapped: 81420288 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:49.646608+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561422336 unmapped: 81420288 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:50.647048+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561422336 unmapped: 81420288 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:51.647341+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561422336 unmapped: 81420288 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:52.647600+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561422336 unmapped: 81420288 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:53.648418+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5468078 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561438720 unmapped: 81403904 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:54.648611+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561438720 unmapped: 81403904 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:55.648774+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19591e000/0x0/0x1bfc00000, data 0x1d8c9c0/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561438720 unmapped: 81403904 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:56.650107+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac11400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 64.340148926s of 64.443199158s, submitted: 35
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561627136 unmapped: 81215488 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbac11400 session 0x562dc822de00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd67f800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbd67f800 session 0x562dba8b7e00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1a000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbaa1a000 session 0x562dbb56f860
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac11400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbac11400 session 0x562dbbb154a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc706c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:57.650225+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbc706c00 session 0x562dbd84d4a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561627136 unmapped: 81215488 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:58.650398+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5537708 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561627136 unmapped: 81215488 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x25e1a22/0x2805000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:59.650552+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561635328 unmapped: 81207296 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:00.650692+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561635328 unmapped: 81207296 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:01.650854+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561635328 unmapped: 81207296 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:02.650998+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x25e1a22/0x2805000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561635328 unmapped: 81207296 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:03.651076+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5537708 data_alloc: 218103808 data_used: 8908800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561635328 unmapped: 81207296 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:04.651196+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb701400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561635328 unmapped: 81207296 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbb701400 session 0x562dbd7ff4a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:05.651809+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb60f400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561520640 unmapped: 81321984 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:06.651991+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 561479680 unmapped: 81362944 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:07.652170+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x25e1a22/0x2805000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x25e1a22/0x2805000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563847168 unmapped: 78995456 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:08.652316+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5596320 data_alloc: 234881024 data_used: 17432576
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563847168 unmapped: 78995456 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:09.652482+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563847168 unmapped: 78995456 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x25e1a22/0x2805000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:10.652752+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563847168 unmapped: 78995456 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:11.652911+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563847168 unmapped: 78995456 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:12.653072+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563847168 unmapped: 78995456 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:13.653205+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5596320 data_alloc: 234881024 data_used: 17432576
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563847168 unmapped: 78995456 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:14.653353+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x25e1a22/0x2805000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563847168 unmapped: 78995456 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:15.653559+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563847168 unmapped: 78995456 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:16.653718+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 563847168 unmapped: 78995456 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:17.654764+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.878530502s of 21.040298462s, submitted: 45
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566747136 unmapped: 76095488 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:18.655466+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x194d9b000/0x0/0x1bfc00000, data 0x290fa22/0x2b33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,3])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5656128 data_alloc: 234881024 data_used: 17444864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566198272 unmapped: 76644352 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:19.655967+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566239232 unmapped: 76603392 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:20.656089+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566665216 unmapped: 76177408 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:21.656351+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 76169216 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:22.656487+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19488e000/0x0/0x1bfc00000, data 0x2e1ca22/0x3040000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 76169216 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:23.657010+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19488e000/0x0/0x1bfc00000, data 0x2e1ca22/0x3040000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5679952 data_alloc: 234881024 data_used: 18206720
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 76169216 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:24.657195+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 76169216 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:25.657326+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 76169216 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:26.657462+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 76169216 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 heartbeat osd_stat(store_statfs(0x19488e000/0x0/0x1bfc00000, data 0x2e1ca22/0x3040000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:27.657851+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.117003441s of 10.006759644s, submitted: 89
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 76169216 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:28.658188+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5681556 data_alloc: 234881024 data_used: 18247680
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 76169216 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:29.658426+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 76169216 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:30.658609+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbb60f400 session 0x562dbecb21e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 76169216 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:31.658810+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd67f800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 ms_handle_reset con 0x562dbd67f800 session 0x562dbab8ad20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566681600 unmapped: 76161024 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac11400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:32.659107+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 415 heartbeat osd_stat(store_statfs(0x19484d000/0x0/0x1bfc00000, data 0x2e5da22/0x3081000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566681600 unmapped: 76161024 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:33.659409+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 415 ms_handle_reset con 0x562dbac11400 session 0x562dbd84da40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb60f400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb701400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5684570 data_alloc: 234881024 data_used: 18251776
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566689792 unmapped: 76152832 heap: 642842624 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 415 ms_handle_reset con 0x562dbb701400 session 0x562dbc8dfa40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 415 ms_handle_reset con 0x562dbb60f400 session 0x562dba284960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc706c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:34.659615+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 579526656 unmapped: 74473472 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:35.659746+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572686336 unmapped: 81313792 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 415 ms_handle_reset con 0x562dbc706c00 session 0x562dbb4cb860
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1a800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:36.659925+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 415 handle_osd_map epochs [415,416], i have 415, src has [1,416]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _renew_subs
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 415 handle_osd_map epochs [416,416], i have 416, src has [1,416]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572686336 unmapped: 81313792 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 416 ms_handle_reset con 0x562dbaa1a800 session 0x562dbba82780
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1a800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:37.660159+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.213700294s of 10.016177177s, submitted: 54
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 416 heartbeat osd_stat(store_statfs(0x193e76000/0x0/0x1bfc00000, data 0x383367b/0x3a58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572702720 unmapped: 81297408 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:38.660383+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5793518 data_alloc: 234881024 data_used: 26558464
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572710912 unmapped: 81289216 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:39.660596+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 417 ms_handle_reset con 0x562dbaa1a800 session 0x562dbd5d8d20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572719104 unmapped: 81281024 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:40.660842+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbca8cc00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 417 ms_handle_reset con 0x562dbca8cc00 session 0x562dc822fc20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc720000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572719104 unmapped: 81281024 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 417 ms_handle_reset con 0x562dbc720000 session 0x562dbbb14b40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:41.661705+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a4800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 417 ms_handle_reset con 0x562dbb5a4800 session 0x562dbba82000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572727296 unmapped: 81272832 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:42.661949+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 417 heartbeat osd_stat(store_statfs(0x193e6e000/0x0/0x1bfc00000, data 0x3836f9d/0x3a5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572727296 unmapped: 81272832 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:43.662220+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 417 heartbeat osd_stat(store_statfs(0x193e6e000/0x0/0x1bfc00000, data 0x3836f9d/0x3a5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5793518 data_alloc: 234881024 data_used: 26558464
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572727296 unmapped: 81272832 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:44.662391+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 417 heartbeat osd_stat(store_statfs(0x193e6e000/0x0/0x1bfc00000, data 0x3836f9d/0x3a5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd67f400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573784064 unmapped: 80216064 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:45.662532+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 417 ms_handle_reset con 0x562dbd67f400 session 0x562dbd5d8d20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569262080 unmapped: 84738048 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:46.662692+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569262080 unmapped: 84738048 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:47.662918+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569262080 unmapped: 84738048 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:48.663062+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569148 data_alloc: 218103808 data_used: 8933376
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569262080 unmapped: 84738048 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:49.663203+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 418 heartbeat osd_stat(store_statfs(0x194f3d000/0x0/0x1bfc00000, data 0x2767a7a/0x298f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569262080 unmapped: 84738048 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:50.663390+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569262080 unmapped: 84738048 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:51.663537+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569262080 unmapped: 84738048 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:52.663710+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 418 heartbeat osd_stat(store_statfs(0x194f3d000/0x0/0x1bfc00000, data 0x2767a7a/0x298f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569262080 unmapped: 84738048 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:53.663855+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569148 data_alloc: 218103808 data_used: 8933376
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569262080 unmapped: 84738048 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:54.663994+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569262080 unmapped: 84738048 heap: 654000128 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:55.664150+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba77fc00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 418 ms_handle_reset con 0x562dba77fc00 session 0x562dba8954a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd881000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbce4ec00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.006954193s of 18.347778320s, submitted: 84
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 418 ms_handle_reset con 0x562dbce4ec00 session 0x562dbba83860
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 418 ms_handle_reset con 0x562dbd881000 session 0x562dbd526960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc73c800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 418 ms_handle_reset con 0x562dbc73c800 session 0x562dbbb14f00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1ed2400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 418 ms_handle_reset con 0x562dc1ed2400 session 0x562dbabfa3c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dba77fc00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 418 ms_handle_reset con 0x562dba77fc00 session 0x562dbd84a000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc73c800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 418 ms_handle_reset con 0x562dbc73c800 session 0x562dba872b40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbce4ec00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 581124096 unmapped: 76652544 heap: 657776640 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 418 heartbeat osd_stat(store_statfs(0x194f3d000/0x0/0x1bfc00000, data 0x2767a7a/0x298f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2832f9c6), peers [0,1] op hist [0,0,0,0,0,0,1,3,1])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:56.664290+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 418 ms_handle_reset con 0x562dbce4ec00 session 0x562dc18332c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 581312512 unmapped: 88498176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:57.664430+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd881000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 418 ms_handle_reset con 0x562dbd881000 session 0x562dbd7ff4a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574291968 unmapped: 95518720 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:58.664574+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 79K writes, 327K keys, 79K commit groups, 1.0 writes per commit group, ingest: 0.33 GB, 0.05 MB/s
                                           Cumulative WAL: 79K writes, 29K syncs, 2.74 writes per sync, written: 0.33 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3124 writes, 12K keys, 3124 commit groups, 1.0 writes per commit group, ingest: 12.28 MB, 0.02 MB/s
                                           Interval WAL: 3124 writes, 1286 syncs, 2.43 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbf6ee400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc04af800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5783238 data_alloc: 234881024 data_used: 19243008
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dd2f18800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574316544 unmapped: 95494144 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:59.664725+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 419 ms_handle_reset con 0x562dd2f18800 session 0x562dbc8de3c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570941440 unmapped: 98869248 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:00.664892+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 419 heartbeat osd_stat(store_statfs(0x193d58000/0x0/0x1bfc00000, data 0x353b74a/0x3765000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570974208 unmapped: 98836480 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:01.665095+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570974208 unmapped: 98836480 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:02.665198+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570974208 unmapped: 98836480 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:03.665370+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5783588 data_alloc: 234881024 data_used: 21766144
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570974208 unmapped: 98836480 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:04.665525+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570974208 unmapped: 98836480 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:05.665720+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.061226845s of 10.367254257s, submitted: 36
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567123968 unmapped: 102686720 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x193d58000/0x0/0x1bfc00000, data 0x353b74a/0x3765000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:06.665906+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567123968 unmapped: 102686720 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:07.666080+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567123968 unmapped: 102686720 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:08.666184+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x193d55000/0x0/0x1bfc00000, data 0x353d289/0x3768000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: mgrc ms_handle_reset ms_handle_reset con 0x562dbf563800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3465938080
Jan 31 09:20:49 compute-2 ceph-osd[79942]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3465938080,v1:192.168.122.100:6801/3465938080]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: get_auth_request con 0x562dd2f18800 auth_method 0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: mgrc handle_mgr_configure stats_period=5
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5776882 data_alloc: 234881024 data_used: 21770240
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567123968 unmapped: 102686720 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:09.666299+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567123968 unmapped: 102686720 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:10.666456+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567189504 unmapped: 102621184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:11.666620+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567189504 unmapped: 102621184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:12.666799+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567189504 unmapped: 102621184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:13.666955+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5802946 data_alloc: 234881024 data_used: 24379392
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567189504 unmapped: 102621184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:14.667123+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x193d56000/0x0/0x1bfc00000, data 0x353d289/0x3768000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567189504 unmapped: 102621184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:15.667267+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567189504 unmapped: 102621184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:16.667513+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x193d56000/0x0/0x1bfc00000, data 0x353d289/0x3768000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567189504 unmapped: 102621184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:17.667722+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567222272 unmapped: 102588416 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:18.667910+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5809346 data_alloc: 234881024 data_used: 25161728
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567222272 unmapped: 102588416 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:19.668118+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x193d56000/0x0/0x1bfc00000, data 0x353d289/0x3768000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567222272 unmapped: 102588416 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:20.668453+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567222272 unmapped: 102588416 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:21.668608+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbf6ee400 session 0x562dbb9def00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.527951241s of 15.564998627s, submitted: 14
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc04af800 session 0x562dbd5d9c20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567222272 unmapped: 102588416 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:22.668764+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbbb83800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567222272 unmapped: 102588416 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:23.668928+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5808510 data_alloc: 234881024 data_used: 25161728
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567222272 unmapped: 102588416 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:24.669171+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x193d56000/0x0/0x1bfc00000, data 0x353d289/0x3768000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567222272 unmapped: 102588416 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:25.669358+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954d8000/0x0/0x1bfc00000, data 0x1dbb289/0x1fe6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954d8000/0x0/0x1bfc00000, data 0x1dbb289/0x1fe6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [0,0,0,0,1])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbbb83800 session 0x562dbd84ab40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:26.669508+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954d8000/0x0/0x1bfc00000, data 0x1dbb289/0x1fe6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:27.669650+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fc000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:28.669766+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5515958 data_alloc: 218103808 data_used: 8941568
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:29.669893+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:30.670057+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:31.670240+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fc000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:32.670376+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:33.670502+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5515958 data_alloc: 218103808 data_used: 8941568
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:34.670649+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:35.670798+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:36.670931+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:37.671091+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fc000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:38.671248+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5515958 data_alloc: 218103808 data_used: 8941568
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:39.671413+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1ed3000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:40.671570+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc1ed3000 session 0x562dbd1b21e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a4400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbb5a4400 session 0x562dbd5d94a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:41.671737+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fc000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:42.671880+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a4400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.716526031s of 20.871770859s, submitted: 35
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbb5a4400 session 0x562dbd84f680
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:43.672091+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5520866 data_alloc: 218103808 data_used: 10252288
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567230464 unmapped: 102580224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:44.672229+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc54c6c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc54c6c00 session 0x562dbecb32c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3669000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567246848 unmapped: 102563840 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:45.672354+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc3669000 session 0x562dbbb14960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd881800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbd881800 session 0x562dbd5ff4a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1100c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc1100c00 session 0x562dbc8f2780
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:46.672495+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567558144 unmapped: 102252544 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:47.672685+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567558144 unmapped: 102252544 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x194e0b000/0x0/0x1bfc00000, data 0x2488276/0x26b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:48.672845+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 567566336 unmapped: 102244352 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5576699 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x194e0b000/0x0/0x1bfc00000, data 0x2488276/0x26b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [0,0,2,0,0,0,0,1])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:49.672972+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565886976 unmapped: 103923712 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x194e0b000/0x0/0x1bfc00000, data 0x2488276/0x26b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:50.673152+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565886976 unmapped: 103923712 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:51.673299+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565886976 unmapped: 103923712 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:52.673381+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565919744 unmapped: 103890944 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:53.673503+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565919744 unmapped: 103890944 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.528172493s of 10.987412453s, submitted: 168
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5576771 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:54.673680+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565936128 unmapped: 103874560 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x194e0b000/0x0/0x1bfc00000, data 0x2488276/0x26b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [0,0,0,0,2])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:55.673807+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 565985280 unmapped: 103825408 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1496800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc1496800 session 0x562dbba83e00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:56.673952+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566042624 unmapped: 103768064 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbce4e000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbce4e000 session 0x562dbc8f23c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:57.674127+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2855800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc2855800 session 0x562dba894780
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566059008 unmapped: 103751680 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a4000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbb5a4000 session 0x562dbb548780
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x194e0b000/0x0/0x1bfc00000, data 0x2488276/0x26b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:58.674250+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3668000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566050816 unmapped: 103759872 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5579808 data_alloc: 218103808 data_used: 10407936
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:59.674378+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566059008 unmapped: 103751680 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:00.674529+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566362112 unmapped: 103448576 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:01.674653+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566362112 unmapped: 103448576 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:02.674775+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566362112 unmapped: 103448576 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x194e0a000/0x0/0x1bfc00000, data 0x2488299/0x26b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:03.674916+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566362112 unmapped: 103448576 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5629248 data_alloc: 234881024 data_used: 17272832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:04.675060+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x194e0a000/0x0/0x1bfc00000, data 0x2488299/0x26b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566362112 unmapped: 103448576 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:05.675198+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566362112 unmapped: 103448576 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:06.675327+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566362112 unmapped: 103448576 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:07.675477+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566362112 unmapped: 103448576 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:08.675650+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566362112 unmapped: 103448576 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5629248 data_alloc: 234881024 data_used: 17272832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x194e0a000/0x0/0x1bfc00000, data 0x2488299/0x26b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:09.675825+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566362112 unmapped: 103448576 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x194e0a000/0x0/0x1bfc00000, data 0x2488299/0x26b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.626567841s of 16.307867050s, submitted: 196
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:10.675986+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573374464 unmapped: 96436224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:11.676151+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570761216 unmapped: 99049472 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1943db000/0x0/0x1bfc00000, data 0x2eb7299/0x30e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:12.676256+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570761216 unmapped: 99049472 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:13.676429+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570761216 unmapped: 99049472 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5714668 data_alloc: 234881024 data_used: 18874368
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x194394000/0x0/0x1bfc00000, data 0x2efe299/0x312a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:14.676577+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 98787328 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:15.676723+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 98787328 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:16.680579+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 98787328 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19435b000/0x0/0x1bfc00000, data 0x2f37299/0x3163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:17.680768+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 98787328 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:18.680932+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 98787328 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19435b000/0x0/0x1bfc00000, data 0x2f37299/0x3163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5722260 data_alloc: 234881024 data_used: 19111936
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:19.681091+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 98787328 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19435b000/0x0/0x1bfc00000, data 0x2f37299/0x3163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:20.681247+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 98787328 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.862438202s of 10.933945656s, submitted: 83
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:21.681422+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 98787328 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:22.681614+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 98787328 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:23.681875+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 98787328 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc3668000 session 0x562dbd526b40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5723952 data_alloc: 234881024 data_used: 19161088
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:24.682496+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 98787328 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:25.682728+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 98787328 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431c000/0x0/0x1bfc00000, data 0x2f76299/0x31a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:26.683505+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 98787328 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a4000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbb5a4000 session 0x562dba873e00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:27.683774+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:28.684159+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5722728 data_alloc: 234881024 data_used: 19156992
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:29.684342+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:30.684747+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431c000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:31.685168+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:32.685377+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:33.685521+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431c000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5722728 data_alloc: 234881024 data_used: 19156992
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:34.685824+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431c000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:35.686062+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431c000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:36.686369+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:37.686568+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431c000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:38.686723+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5722728 data_alloc: 234881024 data_used: 19156992
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:39.687003+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:40.687237+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac10800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbac10800 session 0x562dbd84ad20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:41.687375+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3669000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc3669000 session 0x562dbd84bc20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 98762752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc7ce800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbc7ce800 session 0x562dbd5d9a40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1100c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.005628586s of 21.517366409s, submitted: 22
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:42.687542+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc1100c00 session 0x562dbb56f2c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571064320 unmapped: 98746368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:43.687732+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac10800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a4000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5724341 data_alloc: 234881024 data_used: 19165184
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:44.687910+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:45.688124+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:46.688250+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:47.688427+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:48.688642+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5725301 data_alloc: 234881024 data_used: 19234816
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:49.688778+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:50.688986+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:51.689213+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:52.689391+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:53.689544+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5725301 data_alloc: 234881024 data_used: 19234816
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:54.689759+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:55.689860+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.429984093s of 13.465677261s, submitted: 7
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:56.690015+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:57.690186+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:58.690428+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5741525 data_alloc: 234881024 data_used: 20668416
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:59.690810+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:00.690955+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:01.691074+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:02.691230+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:03.691373+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5741525 data_alloc: 234881024 data_used: 20668416
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:04.691531+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:05.691719+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:06.691873+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:07.692082+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:08.692226+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5741525 data_alloc: 234881024 data_used: 20668416
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:09.692374+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:10.692512+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:11.692667+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:12.692773+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:13.692932+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5741525 data_alloc: 234881024 data_used: 20668416
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:14.693081+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:15.693213+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571072512 unmapped: 98738176 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.990146637s of 20.020147324s, submitted: 4
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:16.693348+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571080704 unmapped: 98729984 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:17.693507+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571080704 unmapped: 98729984 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:18.693655+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571080704 unmapped: 98729984 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5742629 data_alloc: 234881024 data_used: 20701184
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:19.693799+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbac10800 session 0x562dbd526b40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571080704 unmapped: 98729984 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbb5a4000 session 0x562dbb34ed20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc10ffc00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:20.693910+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571080704 unmapped: 98729984 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc10ffc00 session 0x562dbbb14960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:21.694126+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571080704 unmapped: 98729984 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:22.694264+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571080704 unmapped: 98729984 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbf6ee000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbf6ee000 session 0x562dbd84a000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:23.694416+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1496800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571080704 unmapped: 98729984 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x19431d000/0x0/0x1bfc00000, data 0x2f76276/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc1496800 session 0x562dbbb14f00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:24.694544+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571088896 unmapped: 98721792 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:25.694669+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571088896 unmapped: 98721792 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:26.694798+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571088896 unmapped: 98721792 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:27.694966+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571088896 unmapped: 98721792 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:28.695090+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571088896 unmapped: 98721792 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:29.695220+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571097088 unmapped: 98713600 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:30.695357+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571097088 unmapped: 98713600 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:31.695498+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571097088 unmapped: 98713600 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:32.695611+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571097088 unmapped: 98713600 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:33.696159+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571097088 unmapped: 98713600 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:34.696354+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571097088 unmapped: 98713600 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:35.696497+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571097088 unmapped: 98713600 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:36.696666+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571097088 unmapped: 98713600 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:37.696860+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571105280 unmapped: 98705408 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:38.697046+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571105280 unmapped: 98705408 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:39.697271+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571105280 unmapped: 98705408 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:40.697429+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571105280 unmapped: 98705408 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:41.697609+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571105280 unmapped: 98705408 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:42.697745+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571105280 unmapped: 98705408 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:43.697863+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571105280 unmapped: 98705408 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:44.698120+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571105280 unmapped: 98705408 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:45.698286+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571113472 unmapped: 98697216 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:46.698447+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571113472 unmapped: 98697216 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:47.698659+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571113472 unmapped: 98697216 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:48.698795+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571113472 unmapped: 98697216 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:49.699187+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571113472 unmapped: 98697216 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:50.699353+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571113472 unmapped: 98697216 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:51.699503+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571113472 unmapped: 98697216 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:52.699706+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571113472 unmapped: 98697216 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:53.699849+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571138048 unmapped: 98672640 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:54.699990+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571138048 unmapped: 98672640 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:55.700143+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571138048 unmapped: 98672640 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:56.700306+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571138048 unmapped: 98672640 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:57.700659+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571138048 unmapped: 98672640 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:58.700835+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571138048 unmapped: 98672640 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:59.703167+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571138048 unmapped: 98672640 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:00.703303+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571138048 unmapped: 98672640 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:01.703444+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571146240 unmapped: 98664448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:02.704365+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571146240 unmapped: 98664448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:03.704571+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571146240 unmapped: 98664448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:04.705046+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571146240 unmapped: 98664448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:05.705350+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571146240 unmapped: 98664448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:06.705856+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571146240 unmapped: 98664448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:07.706285+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571146240 unmapped: 98664448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:08.706556+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571146240 unmapped: 98664448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:09.706844+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571154432 unmapped: 98656256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:10.707093+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571154432 unmapped: 98656256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:11.707222+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571154432 unmapped: 98656256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:12.707452+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571154432 unmapped: 98656256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:13.707762+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571154432 unmapped: 98656256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:14.707927+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571154432 unmapped: 98656256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:15.708095+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571154432 unmapped: 98656256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:16.708288+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571154432 unmapped: 98656256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:17.708635+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571170816 unmapped: 98639872 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:18.708803+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571179008 unmapped: 98631680 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:19.709392+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571187200 unmapped: 98623488 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:20.709614+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571187200 unmapped: 98623488 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:21.710112+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571187200 unmapped: 98623488 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:22.710231+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571187200 unmapped: 98623488 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:23.710522+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571187200 unmapped: 98623488 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:24.710710+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571187200 unmapped: 98623488 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:25.710951+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571195392 unmapped: 98615296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:27.152975+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571195392 unmapped: 98615296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:28.153292+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571195392 unmapped: 98615296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:29.153417+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571195392 unmapped: 98615296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:30.153686+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571195392 unmapped: 98615296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:31.153844+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571195392 unmapped: 98615296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:32.154148+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571195392 unmapped: 98615296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:33.154294+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571195392 unmapped: 98615296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:34.155238+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571211776 unmapped: 98598912 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:35.155377+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571211776 unmapped: 98598912 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:36.155505+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571211776 unmapped: 98598912 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:37.155642+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571211776 unmapped: 98598912 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:38.155840+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571211776 unmapped: 98598912 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:39.155995+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571211776 unmapped: 98598912 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:40.156122+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571211776 unmapped: 98598912 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:41.156279+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571211776 unmapped: 98598912 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:42.156590+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571228160 unmapped: 98582528 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:43.156780+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571228160 unmapped: 98582528 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:44.157478+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571228160 unmapped: 98582528 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526753 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:45.157838+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571228160 unmapped: 98582528 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:46.158205+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571228160 unmapped: 98582528 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:47.158517+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571228160 unmapped: 98582528 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fd000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc73c000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 91.183334351s of 91.992614746s, submitted: 27
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:48.158809+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571228160 unmapped: 98582528 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:49.158975+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571236352 unmapped: 98574336 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5532376 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:50.159257+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbc73c000 session 0x562dbb9dfe00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571252736 unmapped: 98557952 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:51.159461+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571252736 unmapped: 98557952 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:52.159718+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571252736 unmapped: 98557952 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:53.159858+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954bc000/0x0/0x1bfc00000, data 0x1dd72c9/0x2002000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571252736 unmapped: 98557952 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:54.160284+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571252736 unmapped: 98557952 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5532304 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:55.160564+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571252736 unmapped: 98557952 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954bc000/0x0/0x1bfc00000, data 0x1dd72c9/0x2002000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:56.160810+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571260928 unmapped: 98549760 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:57.161080+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571260928 unmapped: 98549760 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:58.161318+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571277312 unmapped: 98533376 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:59.161446+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571277312 unmapped: 98533376 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954bc000/0x0/0x1bfc00000, data 0x1dd72c9/0x2002000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5532304 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:00.161637+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571277312 unmapped: 98533376 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:01.161821+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571277312 unmapped: 98533376 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:02.161989+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571277312 unmapped: 98533376 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:03.162198+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571277312 unmapped: 98533376 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:04.162366+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571277312 unmapped: 98533376 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5532304 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:05.162519+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571277312 unmapped: 98533376 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954bc000/0x0/0x1bfc00000, data 0x1dd72c9/0x2002000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:06.162686+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571285504 unmapped: 98525184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:07.162836+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571285504 unmapped: 98525184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:08.163064+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571285504 unmapped: 98525184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:09.163242+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571285504 unmapped: 98525184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954bc000/0x0/0x1bfc00000, data 0x1dd72c9/0x2002000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5532304 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:10.163404+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571285504 unmapped: 98525184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:11.163558+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571285504 unmapped: 98525184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:12.163747+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571285504 unmapped: 98525184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:13.163911+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954bc000/0x0/0x1bfc00000, data 0x1dd72c9/0x2002000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571285504 unmapped: 98525184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:14.164012+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571301888 unmapped: 98508800 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5532304 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:15.164193+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc04ae400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc04ae400 session 0x562dbc8f3c20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571301888 unmapped: 98508800 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:16.164354+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571301888 unmapped: 98508800 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954bc000/0x0/0x1bfc00000, data 0x1dd72c9/0x2002000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:17.164510+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571310080 unmapped: 98500608 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:18.164697+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc73a000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbc73a000 session 0x562dba8a1a40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571310080 unmapped: 98500608 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:19.164777+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1ed3000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc1ed3000 session 0x562dbb9df860
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1101800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.385900497s of 31.481319427s, submitted: 5
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571310080 unmapped: 98500608 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:20.164901+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5532256 data_alloc: 218103808 data_used: 10280960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954bc000/0x0/0x1bfc00000, data 0x1dd72c9/0x2002000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571318272 unmapped: 98492416 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc1101800 session 0x562dbecb3e00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:21.165066+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc73a000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc73c000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571318272 unmapped: 98492416 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:22.165183+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571326464 unmapped: 98484224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:23.165274+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571326464 unmapped: 98484224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:24.165365+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571326464 unmapped: 98484224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:25.165504+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5536132 data_alloc: 218103808 data_used: 10547200
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571326464 unmapped: 98484224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x195498000/0x0/0x1bfc00000, data 0x1dfb2c9/0x2026000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:26.165663+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571326464 unmapped: 98484224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:27.165801+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571326464 unmapped: 98484224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:28.165940+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571326464 unmapped: 98484224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:29.166322+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571326464 unmapped: 98484224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:30.166589+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5536132 data_alloc: 218103808 data_used: 10547200
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x195498000/0x0/0x1bfc00000, data 0x1dfb2c9/0x2026000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbc73a000 session 0x562dbb34e1e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbc73c000 session 0x562dc18330e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571334656 unmapped: 98476032 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc57da400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.131487846s of 11.351829529s, submitted: 2
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:31.166736+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc57da400 session 0x562dbecb2000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 98467840 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:32.166875+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 98467840 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954bc000/0x0/0x1bfc00000, data 0x1dd72c9/0x2002000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:33.167061+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 98467840 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:34.167234+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 98467840 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:35.167392+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5533976 data_alloc: 218103808 data_used: 10543104
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 98467840 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:36.167591+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 98467840 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc18e4400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc18e4400 session 0x562dbb47ef00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc43dd800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:37.167725+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 98467840 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:38.259549+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954bc000/0x0/0x1bfc00000, data 0x1dd72c9/0x2002000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571359232 unmapped: 98451456 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc43dd800 session 0x562dbd84f4a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:39.259882+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571367424 unmapped: 98443264 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5529916 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:40.260049+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571367424 unmapped: 98443264 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:41.260484+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571367424 unmapped: 98443264 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:42.260633+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571367424 unmapped: 98443264 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fc000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:43.260944+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571367424 unmapped: 98443264 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:44.261090+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571367424 unmapped: 98443264 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbe2e0000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5529916 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:45.261378+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbe2e0000 session 0x562dbd84da40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc57d9400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc57d9400 session 0x562dba285a40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571367424 unmapped: 98443264 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:46.261551+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571367424 unmapped: 98443264 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:47.261780+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571367424 unmapped: 98443264 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:48.261955+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc10ff800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.310680389s of 17.490665436s, submitted: 12
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fc000/0x0/0x1bfc00000, data 0x1d97266/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571375616 unmapped: 98435072 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc10ff800 session 0x562dbd84e1e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:49.262082+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571375616 unmapped: 98435072 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:50.262215+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5533282 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571375616 unmapped: 98435072 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:51.262578+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571375616 unmapped: 98435072 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:52.262729+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbe3ecc00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbe3ecc00 session 0x562dbd7ff680
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1ed3400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571383808 unmapped: 98426880 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x1954fc000/0x0/0x1bfc00000, data 0x1d97276/0x1fc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:53.262875+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571400192 unmapped: 98410496 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:54.263020+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 98394112 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dc1ed3400 session 0x562dbc8dfc20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:55.263244+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5531817 data_alloc: 218103808 data_used: 10276864
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbe2e0000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 98394112 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:56.263386+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbe2e0000 session 0x562dbecb3680
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 98394112 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbe3ecc00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:57.263546+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 ms_handle_reset con 0x562dbe3ecc00 session 0x562dbd5feb40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 98394112 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:58.263718+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc737400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.789875031s of 10.388438225s, submitted: 39
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 98394112 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 heartbeat osd_stat(store_statfs(0x195113000/0x0/0x1bfc00000, data 0x2180276/0x23ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:59.263975+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _renew_subs
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 420 handle_osd_map epochs [421,421], i have 421, src has [1,421]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 98377728 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:00.264115+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5573668 data_alloc: 218103808 data_used: 10289152
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 421 ms_handle_reset con 0x562dbc737400 session 0x562dbb549680
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 98377728 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:01.264265+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 98377728 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:02.264393+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 98369536 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:03.264547+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 98369536 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:04.264680+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 98369536 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 421 heartbeat osd_stat(store_statfs(0x19510f000/0x0/0x1bfc00000, data 0x2181ecf/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:05.264881+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5573668 data_alloc: 218103808 data_used: 10289152
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 98369536 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:06.265014+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 98369536 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 421 heartbeat osd_stat(store_statfs(0x19510f000/0x0/0x1bfc00000, data 0x2181ecf/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:07.265154+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 98369536 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:08.265326+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 98369536 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:09.265487+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571449344 unmapped: 98361344 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:10.265619+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5573668 data_alloc: 218103808 data_used: 10289152
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbcffbc00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1ed3000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.935684204s of 11.886166573s, submitted: 7
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 421 heartbeat osd_stat(store_statfs(0x19510f000/0x0/0x1bfc00000, data 0x2181ecf/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 98353152 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:11.265786+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 421 ms_handle_reset con 0x562dc1ed3000 session 0x562dc822c5a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 421 ms_handle_reset con 0x562dbcffbc00 session 0x562dbb34f680
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 98353152 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:12.265939+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 98353152 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 421 heartbeat osd_stat(store_statfs(0x195110000/0x0/0x1bfc00000, data 0x2181ecf/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:13.266140+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 98353152 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:14.266308+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 98353152 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:15.266457+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 31 09:20:49 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1725596042' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5574262 data_alloc: 218103808 data_used: 10289152
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 98353152 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc57d9000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 421 ms_handle_reset con 0x562dc57d9000 session 0x562dba8a1860
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:16.266589+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc721c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 421 ms_handle_reset con 0x562dbc721c00 session 0x562dbd5732c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 98353152 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:17.266823+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbf563800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 421 ms_handle_reset con 0x562dbf563800 session 0x562dbd84a3c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571465728 unmapped: 98344960 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc43df400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:18.267084+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 421 heartbeat osd_stat(store_statfs(0x195110000/0x0/0x1bfc00000, data 0x2181ecf/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [0,0,0,0,1])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571473920 unmapped: 98336768 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:19.267243+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 421 ms_handle_reset con 0x562dc43df400 session 0x562dba8b65a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571482112 unmapped: 98328576 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:20.267587+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1b800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa19400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5580845 data_alloc: 218103808 data_used: 10289152
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 421 heartbeat osd_stat(store_statfs(0x1950ea000/0x0/0x1bfc00000, data 0x21a5f02/0x23d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571498496 unmapped: 98312192 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:21.267709+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571506688 unmapped: 98304000 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:22.267826+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571506688 unmapped: 98304000 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:23.268019+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571506688 unmapped: 98304000 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:24.268185+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571506688 unmapped: 98304000 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:25.268463+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5607857 data_alloc: 218103808 data_used: 14090240
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 421 ms_handle_reset con 0x562dbaa1b800 session 0x562dbd84e3c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.251832008s of 14.803118706s, submitted: 14
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 421 ms_handle_reset con 0x562dbaa19400 session 0x562dbbac1860
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc721c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 421 heartbeat osd_stat(store_statfs(0x1950ea000/0x0/0x1bfc00000, data 0x21a5f02/0x23d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571506688 unmapped: 98304000 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:26.268620+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571506688 unmapped: 98304000 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:27.268808+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571506688 unmapped: 98304000 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:28.269082+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 421 ms_handle_reset con 0x562dbc721c00 session 0x562dba8730e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 421 heartbeat osd_stat(store_statfs(0x19510f000/0x0/0x1bfc00000, data 0x2181ef2/0x23af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571506688 unmapped: 98304000 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:29.269273+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571506688 unmapped: 98304000 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:30.269444+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5603924 data_alloc: 218103808 data_used: 14086144
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571506688 unmapped: 98304000 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5a82c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:31.269672+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 421 ms_handle_reset con 0x562dc5a82c00 session 0x562dbd1b32c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2855800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571506688 unmapped: 98304000 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:32.269831+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571514880 unmapped: 98295808 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:33.269985+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571514880 unmapped: 98295808 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 421 heartbeat osd_stat(store_statfs(0x195110000/0x0/0x1bfc00000, data 0x2181ecf/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:34.270135+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 421 ms_handle_reset con 0x562dc2855800 session 0x562dba284960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571514880 unmapped: 98295808 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:35.270285+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5603179 data_alloc: 218103808 data_used: 14086144
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571514880 unmapped: 98295808 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb60f400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.243093014s of 10.641144753s, submitted: 30
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:36.270435+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 421 ms_handle_reset con 0x562dbb60f400 session 0x562dbd5732c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa19400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571523072 unmapped: 98287616 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:37.270592+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 422 ms_handle_reset con 0x562dbaa19400 session 0x562dbd5feb40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572579840 unmapped: 97230848 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 422 heartbeat osd_stat(store_statfs(0x19510c000/0x0/0x1bfc00000, data 0x2183b7c/0x23b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:38.270771+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc721c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572579840 unmapped: 97230848 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 422 ms_handle_reset con 0x562dbc721c00 session 0x562dba285a40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:39.270919+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbae52400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 572579840 unmapped: 97230848 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:40.271089+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5605593 data_alloc: 218103808 data_used: 14090240
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 98099200 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:41.271264+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 422 ms_handle_reset con 0x562dbae52400 session 0x562dbb47ef00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 98099200 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:42.271398+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 98099200 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:43.271570+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 422 heartbeat osd_stat(store_statfs(0x1954f7000/0x0/0x1bfc00000, data 0x1d9ab6c/0x1fc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 98099200 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:44.271715+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 98099200 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:45.271859+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5547029 data_alloc: 218103808 data_used: 10293248
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc720c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 422 ms_handle_reset con 0x562dbc720c00 session 0x562dbb34ed20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 98099200 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc54c6800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:46.271994+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 422 ms_handle_reset con 0x562dc54c6800 session 0x562dbb56f2c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 98099200 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.051100731s of 11.013141632s, submitted: 41
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:47.272159+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x1954f7000/0x0/0x1bfc00000, data 0x1d9ab6c/0x1fc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 98099200 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x1954f3000/0x0/0x1bfc00000, data 0x1d9c6ab/0x1fca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:48.272346+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa19400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 ms_handle_reset con 0x562dbaa19400 session 0x562dbbb14960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571719680 unmapped: 98091008 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:49.272498+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571736064 unmapped: 98074624 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:50.272647+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5552677 data_alloc: 218103808 data_used: 10301440
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571736064 unmapped: 98074624 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:51.272806+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb60ec00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 ms_handle_reset con 0x562dbb60ec00 session 0x562dbd1b30e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc43df400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571736064 unmapped: 98074624 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x1954f4000/0x0/0x1bfc00000, data 0x1d9c6ab/0x1fca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [0,0,0,0,0,0,2])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:52.272962+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571736064 unmapped: 98074624 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:53.273133+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 ms_handle_reset con 0x562dc43df400 session 0x562dbd84d4a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571736064 unmapped: 98074624 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaf8fc00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:54.273335+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 ms_handle_reset con 0x562dbaf8fc00 session 0x562dbd5d81e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a4400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 ms_handle_reset con 0x562dbb5a4400 session 0x562dbd5d9c20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:55.273493+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5608672 data_alloc: 218103808 data_used: 10301440
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:56.273646+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:57.273830+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x194d85000/0x0/0x1bfc00000, data 0x250b6ab/0x2739000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:58.274002+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x194d85000/0x0/0x1bfc00000, data 0x250b6ab/0x2739000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x194d85000/0x0/0x1bfc00000, data 0x250b6ab/0x2739000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:59.274146+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x194d85000/0x0/0x1bfc00000, data 0x250b6ab/0x2739000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:00.274261+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5608992 data_alloc: 218103808 data_used: 10309632
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:01.274378+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:02.274610+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:03.274912+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:04.275169+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x194d85000/0x0/0x1bfc00000, data 0x250b6ab/0x2739000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:05.275353+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5608992 data_alloc: 218103808 data_used: 10309632
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:06.275609+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:07.275779+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:08.276088+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x194d85000/0x0/0x1bfc00000, data 0x250b6ab/0x2739000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x194d85000/0x0/0x1bfc00000, data 0x250b6ab/0x2739000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:09.276483+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:10.276642+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5608992 data_alloc: 218103808 data_used: 10309632
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:11.276794+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x194d85000/0x0/0x1bfc00000, data 0x250b6ab/0x2739000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:12.276976+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:13.277130+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbe2e0800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 23.204570770s of 26.430173874s, submitted: 37
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:14.277584+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 ms_handle_reset con 0x562dbe2e0800 session 0x562dbc8f2d20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5acd800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbe3ed800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566673408 unmapped: 103137280 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:15.277951+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5610621 data_alloc: 218103808 data_used: 10309632
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:16.278230+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566681600 unmapped: 103129088 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x194d85000/0x0/0x1bfc00000, data 0x250b6ab/0x2739000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:17.278514+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566689792 unmapped: 103120896 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:18.278813+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566689792 unmapped: 103120896 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:19.279108+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566689792 unmapped: 103120896 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:20.279239+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566689792 unmapped: 103120896 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5665661 data_alloc: 234881024 data_used: 17715200
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:21.279479+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566689792 unmapped: 103120896 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x194d85000/0x0/0x1bfc00000, data 0x250b6ab/0x2739000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:22.279642+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566689792 unmapped: 103120896 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:23.279825+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566689792 unmapped: 103120896 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:24.280104+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566689792 unmapped: 103120896 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566689792 unmapped: 103120896 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5665661 data_alloc: 234881024 data_used: 17715200
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x194d85000/0x0/0x1bfc00000, data 0x250b6ab/0x2739000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:25.755279+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566689792 unmapped: 103120896 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:26.756808+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x194d85000/0x0/0x1bfc00000, data 0x250b6ab/0x2739000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 566689792 unmapped: 103120896 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.507968903s of 14.137735367s, submitted: 7
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:27.757021+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 568377344 unmapped: 101433344 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:28.757878+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569614336 unmapped: 100196352 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:29.758158+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x1947a2000/0x0/0x1bfc00000, data 0x2aee6ab/0x2d1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569614336 unmapped: 100196352 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:30.758334+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5710805 data_alloc: 234881024 data_used: 18042880
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569614336 unmapped: 100196352 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:31.758669+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569614336 unmapped: 100196352 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:32.758859+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x19479a000/0x0/0x1bfc00000, data 0x2af66ab/0x2d24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569614336 unmapped: 100196352 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:33.759111+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569614336 unmapped: 100196352 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:34.759345+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569614336 unmapped: 100196352 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:35.759538+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5710017 data_alloc: 234881024 data_used: 18305024
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569614336 unmapped: 100196352 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:36.759730+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569614336 unmapped: 100196352 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:37.759907+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x19479a000/0x0/0x1bfc00000, data 0x2af66ab/0x2d24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569622528 unmapped: 100188160 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:38.760203+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569622528 unmapped: 100188160 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:39.760413+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569622528 unmapped: 100188160 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:40.760660+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5710017 data_alloc: 234881024 data_used: 18305024
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569622528 unmapped: 100188160 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:41.760824+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569622528 unmapped: 100188160 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:42.760964+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x19479a000/0x0/0x1bfc00000, data 0x2af66ab/0x2d24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569622528 unmapped: 100188160 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:43.761134+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x19479a000/0x0/0x1bfc00000, data 0x2af66ab/0x2d24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569622528 unmapped: 100188160 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:44.761266+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569622528 unmapped: 100188160 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:45.761422+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5710017 data_alloc: 234881024 data_used: 18305024
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569630720 unmapped: 100179968 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:46.764084+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569630720 unmapped: 100179968 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:47.767230+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569630720 unmapped: 100179968 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:48.769783+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x19479a000/0x0/0x1bfc00000, data 0x2af66ab/0x2d24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x19479a000/0x0/0x1bfc00000, data 0x2af66ab/0x2d24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569630720 unmapped: 100179968 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:49.771170+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569630720 unmapped: 100179968 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:50.772589+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5710017 data_alloc: 234881024 data_used: 18305024
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569630720 unmapped: 100179968 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:51.772906+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569630720 unmapped: 100179968 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:52.773164+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569630720 unmapped: 100179968 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:53.773691+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 heartbeat osd_stat(store_statfs(0x19479a000/0x0/0x1bfc00000, data 0x2af66ab/0x2d24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbaa1a400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569638912 unmapped: 100171776 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:54.773840+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbf6eec00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.571533203s of 27.842430115s, submitted: 62
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569638912 unmapped: 100171776 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:55.773987+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5714694 data_alloc: 234881024 data_used: 18313216
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 424 heartbeat osd_stat(store_statfs(0x19479a000/0x0/0x1bfc00000, data 0x2af66ab/0x2d24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 424 ms_handle_reset con 0x562dbaa1a400 session 0x562dbd84d0e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569655296 unmapped: 100155392 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:56.774137+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 ms_handle_reset con 0x562dbf6eec00 session 0x562dbba82000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:57.774395+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569671680 unmapped: 100139008 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:58.774629+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569679872 unmapped: 100130816 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x194793000/0x0/0x1bfc00000, data 0x2af9f5d/0x2d2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:59.775544+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569679872 unmapped: 100130816 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:00.775741+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569679872 unmapped: 100130816 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5717444 data_alloc: 234881024 data_used: 18313216
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x194793000/0x0/0x1bfc00000, data 0x2af9f5d/0x2d2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x194793000/0x0/0x1bfc00000, data 0x2af9f5d/0x2d2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:01.775881+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569688064 unmapped: 100122624 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:02.776083+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569696256 unmapped: 100114432 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:03.776325+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569696256 unmapped: 100114432 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:04.776943+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569696256 unmapped: 100114432 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:05.777374+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569696256 unmapped: 100114432 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5739694 data_alloc: 234881024 data_used: 18313216
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:06.777644+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569696256 unmapped: 100114432 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x194792000/0x0/0x1bfc00000, data 0x2e04f5d/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:07.777803+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569696256 unmapped: 100114432 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1100800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc2855c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 ms_handle_reset con 0x562dc2855c00 session 0x562dbc8f30e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 ms_handle_reset con 0x562dc1100800 session 0x562dbd84dc20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:08.778117+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569696256 unmapped: 100114432 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:09.778390+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569696256 unmapped: 100114432 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x194792000/0x0/0x1bfc00000, data 0x2e04f5d/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:10.778559+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569704448 unmapped: 100106240 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5739694 data_alloc: 234881024 data_used: 18313216
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:11.778722+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569704448 unmapped: 100106240 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:12.778923+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569704448 unmapped: 100106240 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:13.779138+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569704448 unmapped: 100106240 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x194792000/0x0/0x1bfc00000, data 0x2e04f5d/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:14.779357+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569704448 unmapped: 100106240 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:15.779609+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569704448 unmapped: 100106240 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5739694 data_alloc: 234881024 data_used: 18313216
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd67f800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.885513306s of 20.959287643s, submitted: 11
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:16.779830+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569679872 unmapped: 100130816 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:17.779982+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569679872 unmapped: 100130816 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x194792000/0x0/0x1bfc00000, data 0x2e04f5d/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 ms_handle_reset con 0x562dbd67f800 session 0x562dc822c000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:18.780155+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570040320 unmapped: 99770368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5a82c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1ed3000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:19.780309+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570040320 unmapped: 99770368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:20.780415+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570040320 unmapped: 99770368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5745121 data_alloc: 234881024 data_used: 18382848
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x194768000/0x0/0x1bfc00000, data 0x2e2ef5d/0x2d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:21.780703+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570040320 unmapped: 99770368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:22.780850+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570040320 unmapped: 99770368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:23.781007+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570040320 unmapped: 99770368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:24.781159+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570040320 unmapped: 99770368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:25.781287+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570040320 unmapped: 99770368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5745121 data_alloc: 234881024 data_used: 18382848
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:26.781402+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x194768000/0x0/0x1bfc00000, data 0x2e2ef5d/0x2d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570040320 unmapped: 99770368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:27.781529+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570040320 unmapped: 99770368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:28.781669+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570040320 unmapped: 99770368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:29.781792+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570040320 unmapped: 99770368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:30.781904+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570040320 unmapped: 99770368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5745121 data_alloc: 234881024 data_used: 18382848
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:31.782055+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570040320 unmapped: 99770368 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.065715790s of 15.500749588s, submitted: 8
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x194768000/0x0/0x1bfc00000, data 0x2e2ef5d/0x2d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:32.782169+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574160896 unmapped: 95649792 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:33.782285+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574160896 unmapped: 95649792 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:34.788855+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574160896 unmapped: 95649792 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:35.789002+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574160896 unmapped: 95649792 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5824545 data_alloc: 234881024 data_used: 21880832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x1941e9000/0x0/0x1bfc00000, data 0x33adf5d/0x32d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x1941e9000/0x0/0x1bfc00000, data 0x33adf5d/0x32d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:36.789154+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574341120 unmapped: 95469568 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: do_command 'config diff' '{prefix=config diff}'
Jan 31 09:20:49 compute-2 ceph-osd[79942]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 09:20:49 compute-2 ceph-osd[79942]: do_command 'config show' '{prefix=config show}'
Jan 31 09:20:49 compute-2 ceph-osd[79942]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 09:20:49 compute-2 ceph-osd[79942]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 09:20:49 compute-2 ceph-osd[79942]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 09:20:49 compute-2 ceph-osd[79942]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 09:20:49 compute-2 ceph-osd[79942]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:37.789264+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573784064 unmapped: 96026624 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:38.789426+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571121664 unmapped: 98689024 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: do_command 'log dump' '{prefix=log dump}'
Jan 31 09:20:49 compute-2 ceph-osd[79942]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:39.789839+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 571146240 unmapped: 98664448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: do_command 'perf dump' '{prefix=perf dump}'
Jan 31 09:20:49 compute-2 ceph-osd[79942]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Jan 31 09:20:49 compute-2 ceph-osd[79942]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Jan 31 09:20:49 compute-2 ceph-osd[79942]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Jan 31 09:20:49 compute-2 ceph-osd[79942]: do_command 'perf schema' '{prefix=perf schema}'
Jan 31 09:20:49 compute-2 ceph-osd[79942]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:40.790011+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570318848 unmapped: 99491840 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5816865 data_alloc: 234881024 data_used: 21880832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:41.790157+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570318848 unmapped: 99491840 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x1941e9000/0x0/0x1bfc00000, data 0x33adf5d/0x32d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:42.790282+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570318848 unmapped: 99491840 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x1941e9000/0x0/0x1bfc00000, data 0x33adf5d/0x32d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:43.790458+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570318848 unmapped: 99491840 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x1941e9000/0x0/0x1bfc00000, data 0x33adf5d/0x32d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.642022133s of 12.396923065s, submitted: 9
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:44.790578+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570368000 unmapped: 99442688 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:45.790707+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570368000 unmapped: 99442688 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5816513 data_alloc: 234881024 data_used: 21880832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:46.790832+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570368000 unmapped: 99442688 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x1941e9000/0x0/0x1bfc00000, data 0x33adf5d/0x32d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:47.790953+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570368000 unmapped: 99442688 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:48.791121+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570368000 unmapped: 99442688 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:49.791250+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570408960 unmapped: 99401728 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:50.791331+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570408960 unmapped: 99401728 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5816865 data_alloc: 234881024 data_used: 21880832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:51.791481+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570408960 unmapped: 99401728 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:52.791587+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570408960 unmapped: 99401728 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x1941e9000/0x0/0x1bfc00000, data 0x33adf5d/0x32d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:53.791723+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570408960 unmapped: 99401728 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:54.791857+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570408960 unmapped: 99401728 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x1941e9000/0x0/0x1bfc00000, data 0x33adf5d/0x32d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:55.791968+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570408960 unmapped: 99401728 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5816513 data_alloc: 234881024 data_used: 21880832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:56.792083+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570408960 unmapped: 99401728 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:57.792206+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570408960 unmapped: 99401728 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:58.792384+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.434090614s of 14.463706970s, submitted: 4
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570408960 unmapped: 99401728 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 ms_handle_reset con 0x562dc5a82c00 session 0x562dbd84ba40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:59.792534+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x1941e9000/0x0/0x1bfc00000, data 0x33adf5d/0x32d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570408960 unmapped: 99401728 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 ms_handle_reset con 0x562dc1ed3000 session 0x562dbd5d8b40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x1941e9000/0x0/0x1bfc00000, data 0x33adf5d/0x32d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc10ffc00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:00.792679+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570425344 unmapped: 99385344 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5816397 data_alloc: 234881024 data_used: 21889024
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:01.792831+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570425344 unmapped: 99385344 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:02.792961+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570425344 unmapped: 99385344 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:03.793114+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 ms_handle_reset con 0x562dc10ffc00 session 0x562dba894f00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570441728 unmapped: 99368960 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:04.793243+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570441728 unmapped: 99368960 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbce4f400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:05.793357+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x194213000/0x0/0x1bfc00000, data 0x3383f5d/0x32ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 ms_handle_reset con 0x562dbce4f400 session 0x562dc822d860
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbd67f800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570449920 unmapped: 99360768 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5807869 data_alloc: 234881024 data_used: 21753856
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:06.793482+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570449920 unmapped: 99360768 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:07.793607+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570449920 unmapped: 99360768 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:08.793789+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 ms_handle_reset con 0x562dbd67f800 session 0x562dc822cf00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570449920 unmapped: 99360768 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:09.793931+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570449920 unmapped: 99360768 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3667400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 ms_handle_reset con 0x562dc3667400 session 0x562dba0592c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 heartbeat osd_stat(store_statfs(0x194792000/0x0/0x1bfc00000, data 0x2e04f5d/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbf6ee000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:10.794079+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570449920 unmapped: 99360768 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5764269 data_alloc: 234881024 data_used: 21688320
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:11.794203+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.148148537s of 12.946197510s, submitted: 21
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 426 ms_handle_reset con 0x562dbf6ee000 session 0x562dbb34e5a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570449920 unmapped: 99360768 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 426 ms_handle_reset con 0x562dc5acd800 session 0x562dbd527a40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:12.794348+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570449920 unmapped: 99360768 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 426 ms_handle_reset con 0x562dbe3ed800 session 0x562dbb56e780
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:13.794485+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbe3ed800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570474496 unmapped: 99336192 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:14.794621+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 426 ms_handle_reset con 0x562dbe3ed800 session 0x562dbd84e3c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570474496 unmapped: 99336192 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 426 heartbeat osd_stat(store_statfs(0x194791000/0x0/0x1bfc00000, data 0x2afbc0a/0x2d2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 426 heartbeat osd_stat(store_statfs(0x194791000/0x0/0x1bfc00000, data 0x2afbc0a/0x2d2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:15.794776+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570474496 unmapped: 99336192 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5754277 data_alloc: 234881024 data_used: 21696512
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:16.794908+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 426 heartbeat osd_stat(store_statfs(0x194791000/0x0/0x1bfc00000, data 0x2afbc0a/0x2d2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbf6ef000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 426 ms_handle_reset con 0x562dbf6ef000 session 0x562dbbb154a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 570474496 unmapped: 99336192 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbce4e000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 426 ms_handle_reset con 0x562dbce4e000 session 0x562dbb4f92c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:17.795039+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569040896 unmapped: 100769792 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:18.795420+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569040896 unmapped: 100769792 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 426 heartbeat osd_stat(store_statfs(0x1954eb000/0x0/0x1bfc00000, data 0x1da1c0a/0x1fd3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:19.795548+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569049088 unmapped: 100761600 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 427 heartbeat osd_stat(store_statfs(0x1954e7000/0x0/0x1bfc00000, data 0x1da3749/0x1fd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:20.795728+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc18e5800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569049088 unmapped: 100761600 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5579263 data_alloc: 218103808 data_used: 10031104
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:21.795911+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.758684158s of 10.042148590s, submitted: 45
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569057280 unmapped: 100753408 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:22.796279+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 428 heartbeat osd_stat(store_statfs(0x1954e4000/0x0/0x1bfc00000, data 0x1da53f6/0x1fd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569073664 unmapped: 100737024 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 428 heartbeat osd_stat(store_statfs(0x1954e4000/0x0/0x1bfc00000, data 0x1da53f6/0x1fd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:23.796485+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569073664 unmapped: 100737024 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:24.796636+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569081856 unmapped: 100728832 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:25.796766+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 428 ms_handle_reset con 0x562dc18e5800 session 0x562dbd84a5a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569090048 unmapped: 100720640 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5581834 data_alloc: 218103808 data_used: 10039296
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:26.796903+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569090048 unmapped: 100720640 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 428 heartbeat osd_stat(store_statfs(0x1954e5000/0x0/0x1bfc00000, data 0x1da53f6/0x1fd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:27.797127+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569090048 unmapped: 100720640 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 428 heartbeat osd_stat(store_statfs(0x1954e5000/0x0/0x1bfc00000, data 0x1da53f6/0x1fd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:28.797301+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569090048 unmapped: 100720640 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:29.797434+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569098240 unmapped: 100712448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:30.797572+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569098240 unmapped: 100712448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5586008 data_alloc: 218103808 data_used: 10047488
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:31.797700+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569098240 unmapped: 100712448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x1954e1000/0x0/0x1bfc00000, data 0x1da6f35/0x1fdc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:32.797835+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x1954e1000/0x0/0x1bfc00000, data 0x1da6f35/0x1fdc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569098240 unmapped: 100712448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:33.797960+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569098240 unmapped: 100712448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:34.798096+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569098240 unmapped: 100712448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:35.798228+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569098240 unmapped: 100712448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5586168 data_alloc: 218103808 data_used: 10051584
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x1954e1000/0x0/0x1bfc00000, data 0x1da6f35/0x1fdc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:36.798406+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569106432 unmapped: 100704256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5acc400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:37.798532+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 ms_handle_reset con 0x562dc5acc400 session 0x562dbd5d8b40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569114624 unmapped: 100696064 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbce4e000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:38.798702+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 ms_handle_reset con 0x562dbce4e000 session 0x562dc822c000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569114624 unmapped: 100696064 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:39.798877+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569114624 unmapped: 100696064 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:40.799043+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569114624 unmapped: 100696064 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5586168 data_alloc: 218103808 data_used: 10051584
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc18e5800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.397699356s of 19.772464752s, submitted: 39
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x1954e1000/0x0/0x1bfc00000, data 0x1da6f35/0x1fdc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:41.799175+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 ms_handle_reset con 0x562dc18e5800 session 0x562dbc8f30e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569122816 unmapped: 100687872 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:42.799295+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569122816 unmapped: 100687872 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:43.799460+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569122816 unmapped: 100687872 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x1954e0000/0x0/0x1bfc00000, data 0x1da6f45/0x1fdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac10800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:44.799604+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569122816 unmapped: 100687872 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:45.799786+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 ms_handle_reset con 0x562dbac10800 session 0x562dbd84d0e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569131008 unmapped: 100679680 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5587996 data_alloc: 218103808 data_used: 10051584
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc73a800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:46.799960+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x1954e0000/0x0/0x1bfc00000, data 0x1da6f45/0x1fdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 ms_handle_reset con 0x562dbc73a800 session 0x562dbc8f2d20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569131008 unmapped: 100679680 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc706c00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:47.800081+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569139200 unmapped: 100671488 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 ms_handle_reset con 0x562dbc706c00 session 0x562dbd5d9c20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:48.800236+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac10800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569147392 unmapped: 100663296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 ms_handle_reset con 0x562dbac10800 session 0x562dbbb14960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:49.800407+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569147392 unmapped: 100663296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:50.800545+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569147392 unmapped: 100663296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5619872 data_alloc: 218103808 data_used: 10055680
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:51.800690+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569147392 unmapped: 100663296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x19513b000/0x0/0x1bfc00000, data 0x214cf45/0x2383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:52.800887+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569147392 unmapped: 100663296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:53.801133+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569147392 unmapped: 100663296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:54.801839+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569147392 unmapped: 100663296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:55.802049+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569147392 unmapped: 100663296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5619872 data_alloc: 218103808 data_used: 10055680
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:56.802577+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569147392 unmapped: 100663296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:57.802833+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x19513b000/0x0/0x1bfc00000, data 0x214cf45/0x2383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569147392 unmapped: 100663296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:58.802989+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569147392 unmapped: 100663296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:59.803377+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569147392 unmapped: 100663296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:00.803524+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569147392 unmapped: 100663296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5619872 data_alloc: 218103808 data_used: 10055680
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:01.803851+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569163776 unmapped: 100646912 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x19513b000/0x0/0x1bfc00000, data 0x214cf45/0x2383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:02.804063+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569163776 unmapped: 100646912 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:03.804266+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569163776 unmapped: 100646912 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x19513b000/0x0/0x1bfc00000, data 0x214cf45/0x2383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:04.804600+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569163776 unmapped: 100646912 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:05.804761+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569163776 unmapped: 100646912 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5619872 data_alloc: 218103808 data_used: 10055680
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:06.804925+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569163776 unmapped: 100646912 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:07.805128+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569163776 unmapped: 100646912 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:08.805349+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb5a4000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 27.246437073s of 27.328783035s, submitted: 11
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 ms_handle_reset con 0x562dbb5a4000 session 0x562dba285a40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x19513a000/0x0/0x1bfc00000, data 0x214cf68/0x2384000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569311232 unmapped: 100499456 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:09.805609+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc7cf800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc721000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569319424 unmapped: 100491264 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:10.805741+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569524224 unmapped: 100286464 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5653446 data_alloc: 218103808 data_used: 13795328
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:11.805910+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569524224 unmapped: 100286464 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:12.806124+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569524224 unmapped: 100286464 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:13.806365+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x195116000/0x0/0x1bfc00000, data 0x2170f68/0x23a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569524224 unmapped: 100286464 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:14.806552+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x195116000/0x0/0x1bfc00000, data 0x2170f68/0x23a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569524224 unmapped: 100286464 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:15.806750+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569524224 unmapped: 100286464 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5653446 data_alloc: 218103808 data_used: 13795328
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:16.806951+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569524224 unmapped: 100286464 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:17.807097+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569524224 unmapped: 100286464 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x195116000/0x0/0x1bfc00000, data 0x2170f68/0x23a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:18.807297+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569524224 unmapped: 100286464 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:19.807458+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569524224 unmapped: 100286464 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:20.807622+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x195116000/0x0/0x1bfc00000, data 0x2170f68/0x23a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 569524224 unmapped: 100286464 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5653446 data_alloc: 218103808 data_used: 13795328
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.455139160s of 12.554609299s, submitted: 12
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:21.807805+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574275584 unmapped: 95535104 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:22.807957+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x19508f000/0x0/0x1bfc00000, data 0x21f7f68/0x242f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,15])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573775872 unmapped: 96034816 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x194b3c000/0x0/0x1bfc00000, data 0x274af68/0x2982000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:23.808093+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 576045056 unmapped: 93765632 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:24.808311+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573374464 unmapped: 96436224 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:25.808449+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573341696 unmapped: 96468992 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5727510 data_alloc: 218103808 data_used: 14000128
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:26.808614+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573448192 unmapped: 96362496 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:27.809358+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573448192 unmapped: 96362496 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:28.809942+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573448192 unmapped: 96362496 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x194803000/0x0/0x1bfc00000, data 0x2a75f68/0x2cad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:29.810556+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573448192 unmapped: 96362496 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:30.810815+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573448192 unmapped: 96362496 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5731542 data_alloc: 218103808 data_used: 13996032
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x194803000/0x0/0x1bfc00000, data 0x2a75f68/0x2cad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:31.811267+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.409059525s of 10.831089020s, submitted: 88
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573448192 unmapped: 96362496 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:32.811446+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573448192 unmapped: 96362496 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:33.811798+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573448192 unmapped: 96362496 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:34.812121+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573448192 unmapped: 96362496 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:35.812371+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573456384 unmapped: 96354304 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5730198 data_alloc: 218103808 data_used: 13979648
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:36.812672+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x194803000/0x0/0x1bfc00000, data 0x2a75f68/0x2cad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 ms_handle_reset con 0x562dbc7cf800 session 0x562dba8730e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573456384 unmapped: 96354304 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 ms_handle_reset con 0x562dbc721000 session 0x562dbb4cb2c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:37.812878+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc43e1800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573464576 unmapped: 96346112 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x194811000/0x0/0x1bfc00000, data 0x2a75f68/0x2cad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:38.813146+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573472768 unmapped: 96337920 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:39.813374+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573472768 unmapped: 96337920 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:40.813653+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573480960 unmapped: 96329728 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5718942 data_alloc: 218103808 data_used: 13864960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 ms_handle_reset con 0x562dc43e1800 session 0x562dbbb6a1e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:41.813898+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573480960 unmapped: 96329728 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:42.814068+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x194835000/0x0/0x1bfc00000, data 0x2a51f45/0x2c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573480960 unmapped: 96329728 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:43.814256+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573480960 unmapped: 96329728 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:44.814496+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573480960 unmapped: 96329728 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:45.814673+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573480960 unmapped: 96329728 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5718942 data_alloc: 218103808 data_used: 13864960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:46.814811+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573480960 unmapped: 96329728 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x194835000/0x0/0x1bfc00000, data 0x2a51f45/0x2c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:47.815011+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573480960 unmapped: 96329728 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:48.815248+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x194835000/0x0/0x1bfc00000, data 0x2a51f45/0x2c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573480960 unmapped: 96329728 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:49.815479+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573489152 unmapped: 96321536 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:50.815642+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573489152 unmapped: 96321536 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5718942 data_alloc: 218103808 data_used: 13864960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:51.815820+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573489152 unmapped: 96321536 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:52.815984+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x194835000/0x0/0x1bfc00000, data 0x2a51f45/0x2c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573489152 unmapped: 96321536 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:53.816191+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573489152 unmapped: 96321536 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:54.816412+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573489152 unmapped: 96321536 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:55.816580+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573489152 unmapped: 96321536 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5718942 data_alloc: 218103808 data_used: 13864960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:56.816778+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573489152 unmapped: 96321536 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:57.816948+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573497344 unmapped: 96313344 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x194835000/0x0/0x1bfc00000, data 0x2a51f45/0x2c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.1 total, 600.0 interval
                                           Cumulative writes: 81K writes, 333K keys, 81K commit groups, 1.0 writes per commit group, ingest: 0.34 GB, 0.04 MB/s
                                           Cumulative WAL: 81K writes, 29K syncs, 2.73 writes per sync, written: 0.34 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1949 writes, 6043 keys, 1949 commit groups, 1.0 writes per commit group, ingest: 5.98 MB, 0.01 MB/s
                                           Interval WAL: 1949 writes, 827 syncs, 2.36 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:58.817225+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573497344 unmapped: 96313344 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:59.817443+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573497344 unmapped: 96313344 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:00.817627+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573497344 unmapped: 96313344 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5718942 data_alloc: 218103808 data_used: 13864960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:01.817904+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573497344 unmapped: 96313344 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:02.818102+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573497344 unmapped: 96313344 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:03.818327+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x194835000/0x0/0x1bfc00000, data 0x2a51f45/0x2c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573497344 unmapped: 96313344 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:04.818471+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x194835000/0x0/0x1bfc00000, data 0x2a51f45/0x2c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573505536 unmapped: 96305152 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:05.818645+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573513728 unmapped: 96296960 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5718942 data_alloc: 218103808 data_used: 13864960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x194835000/0x0/0x1bfc00000, data 0x2a51f45/0x2c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:06.818848+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbf563800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 ms_handle_reset con 0x562dbf563800 session 0x562dbb4ca780
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573513728 unmapped: 96296960 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:07.819003+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573513728 unmapped: 96296960 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc7cec00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 ms_handle_reset con 0x562dbc7cec00 session 0x562dbb47e000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:08.819290+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573513728 unmapped: 96296960 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:09.819481+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573513728 unmapped: 96296960 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:10.819650+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbf563400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 ms_handle_reset con 0x562dbf563400 session 0x562dc822f4a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbe3edc00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.594203949s of 38.843463898s, submitted: 46
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573513728 unmapped: 96296960 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5720604 data_alloc: 218103808 data_used: 13864960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:11.819797+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x194835000/0x0/0x1bfc00000, data 0x2a51f45/0x2c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x194835000/0x0/0x1bfc00000, data 0x2a51f55/0x2c89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573669376 unmapped: 96141312 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:12.819963+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 ms_handle_reset con 0x562dbe3edc00 session 0x562dbd7fe5a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb700400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac10800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573669376 unmapped: 96141312 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:13.820135+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573669376 unmapped: 96141312 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:14.820350+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573669376 unmapped: 96141312 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:15.820488+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573669376 unmapped: 96141312 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5724640 data_alloc: 218103808 data_used: 13963264
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:16.820657+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x194811000/0x0/0x1bfc00000, data 0x2a75f55/0x2cad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573669376 unmapped: 96141312 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:17.820863+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573669376 unmapped: 96141312 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:18.821198+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x194811000/0x0/0x1bfc00000, data 0x2a75f55/0x2cad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573669376 unmapped: 96141312 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:19.821443+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573669376 unmapped: 96141312 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:20.821592+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573669376 unmapped: 96141312 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5724640 data_alloc: 218103808 data_used: 13963264
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:21.821804+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573669376 unmapped: 96141312 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:22.821993+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x194811000/0x0/0x1bfc00000, data 0x2a75f55/0x2cad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573669376 unmapped: 96141312 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:23.822218+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573669376 unmapped: 96141312 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:24.822468+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573669376 unmapped: 96141312 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:25.822650+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.568667412s of 14.664469719s, submitted: 3
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x194811000/0x0/0x1bfc00000, data 0x2a75f55/0x2cad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573693952 unmapped: 96116736 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5733872 data_alloc: 218103808 data_used: 14282752
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:26.822924+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573693952 unmapped: 96116736 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:27.823092+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573693952 unmapped: 96116736 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:28.823304+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573693952 unmapped: 96116736 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:29.823509+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x194811000/0x0/0x1bfc00000, data 0x2a75f55/0x2cad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573693952 unmapped: 96116736 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:30.823678+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574676992 unmapped: 95133696 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5735568 data_alloc: 234881024 data_used: 14958592
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x194811000/0x0/0x1bfc00000, data 0x2a75f55/0x2cad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:31.823872+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574676992 unmapped: 95133696 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:32.824076+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x194811000/0x0/0x1bfc00000, data 0x2a75f55/0x2cad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574685184 unmapped: 95125504 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:33.824230+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574685184 unmapped: 95125504 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:34.824360+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574685184 unmapped: 95125504 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:35.824517+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574685184 unmapped: 95125504 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5735744 data_alloc: 234881024 data_used: 14958592
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:36.824675+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.054299355s of 11.408945084s, submitted: 21
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574685184 unmapped: 95125504 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:37.824833+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574693376 unmapped: 95117312 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 heartbeat osd_stat(store_statfs(0x194811000/0x0/0x1bfc00000, data 0x2a75f55/0x2cad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:38.825067+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574693376 unmapped: 95117312 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:39.825186+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc73b400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574693376 unmapped: 95117312 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:40.825292+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574693376 unmapped: 95117312 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5738752 data_alloc: 234881024 data_used: 15773696
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:41.825416+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 ms_handle_reset con 0x562dbc73b400 session 0x562dc1832960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574709760 unmapped: 95100928 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:42.825544+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574709760 unmapped: 95100928 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:43.825669+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc7cec00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbe3edc00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 ms_handle_reset con 0x562dbe3edc00 session 0x562dbb56e5a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 ms_handle_reset con 0x562dbc7cec00 session 0x562dbd84e1e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 heartbeat osd_stat(store_statfs(0x194807000/0x0/0x1bfc00000, data 0x2b2dbae/0x2cb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574709760 unmapped: 95100928 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:44.825790+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574709760 unmapped: 95100928 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:45.825921+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574717952 unmapped: 95092736 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5750794 data_alloc: 234881024 data_used: 15781888
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:46.826083+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 heartbeat osd_stat(store_statfs(0x194807000/0x0/0x1bfc00000, data 0x2b2dbae/0x2cb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574717952 unmapped: 95092736 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:47.826192+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574717952 unmapped: 95092736 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:48.826388+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574717952 unmapped: 95092736 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:49.826602+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574717952 unmapped: 95092736 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:50.826726+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 heartbeat osd_stat(store_statfs(0x194807000/0x0/0x1bfc00000, data 0x2b2dbae/0x2cb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.119896889s of 14.203001022s, submitted: 5
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574685184 unmapped: 95125504 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5750602 data_alloc: 234881024 data_used: 15785984
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:51.826873+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574701568 unmapped: 95109120 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:52.827092+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 heartbeat osd_stat(store_statfs(0x194807000/0x0/0x1bfc00000, data 0x2b2dbae/0x2cb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574701568 unmapped: 95109120 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:53.827231+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574701568 unmapped: 95109120 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:54.827415+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 heartbeat osd_stat(store_statfs(0x194807000/0x0/0x1bfc00000, data 0x2b2dbae/0x2cb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574717952 unmapped: 95092736 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:55.827568+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574734336 unmapped: 95076352 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5750602 data_alloc: 234881024 data_used: 15785984
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:56.827773+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574734336 unmapped: 95076352 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:57.827873+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574734336 unmapped: 95076352 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:58.828064+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 heartbeat osd_stat(store_statfs(0x194807000/0x0/0x1bfc00000, data 0x2b2dbae/0x2cb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [0,0,0,1])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574742528 unmapped: 95068160 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:59.828190+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc1101400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 ms_handle_reset con 0x562dc1101400 session 0x562dbd526d20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574750720 unmapped: 95059968 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:00.828317+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc57d9400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 ms_handle_reset con 0x562dc57d9400 session 0x562dbc776d20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.197306633s of 10.034410477s, submitted: 146
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574767104 unmapped: 95043584 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5750602 data_alloc: 234881024 data_used: 15785984
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:01.828447+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 heartbeat osd_stat(store_statfs(0x194807000/0x0/0x1bfc00000, data 0x2b2dbae/0x2cb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574767104 unmapped: 95043584 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:02.828579+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc43e1000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 ms_handle_reset con 0x562dc43e1000 session 0x562dc822cb40
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc7cec00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574767104 unmapped: 95043584 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:03.828766+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 ms_handle_reset con 0x562dbc7cec00 session 0x562dbc8dfc20
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 heartbeat osd_stat(store_statfs(0x1947e1000/0x0/0x1bfc00000, data 0x2b51be0/0x2cdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2873f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc721000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc43de000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574914560 unmapped: 94896128 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:04.828871+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:05.828990+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574930944 unmapped: 94879744 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:06.829126+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574906368 unmapped: 94904320 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5758269 data_alloc: 234881024 data_used: 15839232
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:07.829254+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574930944 unmapped: 94879744 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 heartbeat osd_stat(store_statfs(0x1943d1000/0x0/0x1bfc00000, data 0x2b51be0/0x2cdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:08.829420+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574930944 unmapped: 94879744 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 heartbeat osd_stat(store_statfs(0x1943d1000/0x0/0x1bfc00000, data 0x2b51be0/0x2cdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:09.829584+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574930944 unmapped: 94879744 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 heartbeat osd_stat(store_statfs(0x1943d1000/0x0/0x1bfc00000, data 0x2b51be0/0x2cdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:10.829688+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574930944 unmapped: 94879744 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:11.829840+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574930944 unmapped: 94879744 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5758237 data_alloc: 234881024 data_used: 15843328
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 heartbeat osd_stat(store_statfs(0x1943d1000/0x0/0x1bfc00000, data 0x2b51be0/0x2cdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:12.830007+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574930944 unmapped: 94879744 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:13.830247+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574930944 unmapped: 94879744 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:14.830385+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574930944 unmapped: 94879744 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:15.830570+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574930944 unmapped: 94879744 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 heartbeat osd_stat(store_statfs(0x1943d1000/0x0/0x1bfc00000, data 0x2b51be0/0x2cdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:16.830719+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574930944 unmapped: 94879744 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5758237 data_alloc: 234881024 data_used: 15843328
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.832912445s of 16.136066437s, submitted: 105
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:17.830853+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575143936 unmapped: 94666752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:18.831053+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575143936 unmapped: 94666752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 heartbeat osd_stat(store_statfs(0x194381000/0x0/0x1bfc00000, data 0x2ba1be0/0x2d2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:19.831214+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575143936 unmapped: 94666752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 heartbeat osd_stat(store_statfs(0x194381000/0x0/0x1bfc00000, data 0x2ba1be0/0x2d2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:20.831369+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575143936 unmapped: 94666752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:21.831510+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575143936 unmapped: 94666752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5779955 data_alloc: 234881024 data_used: 16850944
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:22.831664+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575143936 unmapped: 94666752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:23.831811+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575143936 unmapped: 94666752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 heartbeat osd_stat(store_statfs(0x194381000/0x0/0x1bfc00000, data 0x2ba1be0/0x2d2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:24.831990+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575143936 unmapped: 94666752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:25.832111+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575143936 unmapped: 94666752 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 ms_handle_reset con 0x562dbc721000 session 0x562dbd84f2c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 ms_handle_reset con 0x562dc43de000 session 0x562dbff90960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:26.832232+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5784723 data_alloc: 234881024 data_used: 16850944
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb60e800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575078400 unmapped: 94732288 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 ms_handle_reset con 0x562dbb60e800 session 0x562dc822ef00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:27.832405+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575086592 unmapped: 94724096 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:28.832596+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575086592 unmapped: 94724096 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 heartbeat osd_stat(store_statfs(0x19439a000/0x0/0x1bfc00000, data 0x2bc3bae/0x2d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:29.832777+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575086592 unmapped: 94724096 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc3669800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 ms_handle_reset con 0x562dc3669800 session 0x562dbb9df860
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbb60e800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.569629669s of 12.641505241s, submitted: 23
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 ms_handle_reset con 0x562dbb60e800 session 0x562dba8b72c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:30.832912+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575086592 unmapped: 94724096 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc721000
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 ms_handle_reset con 0x562dbc721000 session 0x562dba0592c0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbc7cec00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:31.833061+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5775704 data_alloc: 234881024 data_used: 16613376
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575086592 unmapped: 94724096 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:32.833231+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 430 handle_osd_map epochs [430,431], i have 430, src has [1,431]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575086592 unmapped: 94724096 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 431 ms_handle_reset con 0x562dbc7cec00 session 0x562dbff905a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:33.833398+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575086592 unmapped: 94724096 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 431 ms_handle_reset con 0x562dbb700400 session 0x562dbbac0f00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:34.833563+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575086592 unmapped: 94724096 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 431 ms_handle_reset con 0x562dbac10800 session 0x562dbabfbe00
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 431 heartbeat osd_stat(store_statfs(0x1943fa000/0x0/0x1bfc00000, data 0x2a7985b/0x2cb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dbac10800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:35.833744+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575086592 unmapped: 94724096 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:36.833919+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 431 heartbeat osd_stat(store_statfs(0x1943fa000/0x0/0x1bfc00000, data 0x2a7985b/0x2cb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5770395 data_alloc: 234881024 data_used: 16621568
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575086592 unmapped: 94724096 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:37.834089+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575086592 unmapped: 94724096 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 431 ms_handle_reset con 0x562dbac10800 session 0x562dba8a05a0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:38.834302+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575086592 unmapped: 94724096 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:39.834457+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575086592 unmapped: 94724096 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 431 handle_osd_map epochs [431,432], i have 431, src has [1,432]
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.773720741s of 10.001324654s, submitted: 19
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x194420000/0x0/0x1bfc00000, data 0x2a5584b/0x2c8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:40.834622+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575094784 unmapped: 94715904 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:41.834822+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5768253 data_alloc: 234881024 data_used: 16519168
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575094784 unmapped: 94715904 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc43df400
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 ms_handle_reset con 0x562dc43df400 session 0x562dbb4cb0e0
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: handle_auth_request added challenge on 0x562dc5acd800
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x19441c000/0x0/0x1bfc00000, data 0x2a5738a/0x2c91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:42.834974+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575094784 unmapped: 94715904 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:43.835121+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 575094784 unmapped: 94715904 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x19441c000/0x0/0x1bfc00000, data 0x2a5737a/0x2c90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:44.835315+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574218240 unmapped: 95592448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:45.835492+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574218240 unmapped: 95592448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:46.835619+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5614297 data_alloc: 218103808 data_used: 10080256
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574218240 unmapped: 95592448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 ms_handle_reset con 0x562dc5acd800 session 0x562dbd572960
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:47.835761+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574218240 unmapped: 95592448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:48.836018+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574218240 unmapped: 95592448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:49.836216+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574218240 unmapped: 95592448 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:50.836331+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574226432 unmapped: 95584256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:51.836473+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5614297 data_alloc: 218103808 data_used: 10080256
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574226432 unmapped: 95584256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:52.836645+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574226432 unmapped: 95584256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:53.836768+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574226432 unmapped: 95584256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:54.836883+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574226432 unmapped: 95584256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:55.837076+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574226432 unmapped: 95584256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:56.837241+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5614297 data_alloc: 218103808 data_used: 10080256
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574226432 unmapped: 95584256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:57.837392+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574226432 unmapped: 95584256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:58.837576+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574226432 unmapped: 95584256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:59.837688+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574226432 unmapped: 95584256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:00.837798+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574226432 unmapped: 95584256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:01.838012+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5614297 data_alloc: 218103808 data_used: 10080256
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574226432 unmapped: 95584256 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:02.838228+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574234624 unmapped: 95576064 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:03.838407+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574234624 unmapped: 95576064 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:04.838557+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574234624 unmapped: 95576064 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:05.838690+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574242816 unmapped: 95567872 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:06.838862+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5614297 data_alloc: 218103808 data_used: 10080256
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574242816 unmapped: 95567872 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:07.838997+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574242816 unmapped: 95567872 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:08.839189+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574242816 unmapped: 95567872 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:09.839323+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574242816 unmapped: 95567872 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:10.839465+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574242816 unmapped: 95567872 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:11.839615+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5614297 data_alloc: 218103808 data_used: 10080256
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574242816 unmapped: 95567872 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:12.839758+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574242816 unmapped: 95567872 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:13.839901+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574251008 unmapped: 95559680 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:14.840081+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574251008 unmapped: 95559680 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:15.840220+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574251008 unmapped: 95559680 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:16.840356+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5614297 data_alloc: 218103808 data_used: 10080256
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574251008 unmapped: 95559680 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:17.840539+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574259200 unmapped: 95551488 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:18.840794+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574259200 unmapped: 95551488 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:19.840940+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574259200 unmapped: 95551488 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:20.841070+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574259200 unmapped: 95551488 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:21.841202+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5614297 data_alloc: 218103808 data_used: 10080256
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574267392 unmapped: 95543296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:22.841358+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574267392 unmapped: 95543296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:23.841540+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574267392 unmapped: 95543296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:24.841689+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574267392 unmapped: 95543296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:25.841829+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574267392 unmapped: 95543296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:26.841974+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5614297 data_alloc: 218103808 data_used: 10080256
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574267392 unmapped: 95543296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:27.842121+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574267392 unmapped: 95543296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:28.842281+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574267392 unmapped: 95543296 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:29.842429+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574291968 unmapped: 95518720 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:30.842555+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574291968 unmapped: 95518720 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:31.842691+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5614297 data_alloc: 218103808 data_used: 10080256
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574291968 unmapped: 95518720 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:32.842847+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574291968 unmapped: 95518720 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:33.842991+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574291968 unmapped: 95518720 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:34.843078+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574291968 unmapped: 95518720 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:35.843204+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574291968 unmapped: 95518720 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:36.843374+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5614297 data_alloc: 218103808 data_used: 10080256
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574300160 unmapped: 95510528 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:37.843553+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574308352 unmapped: 95502336 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:38.843707+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574308352 unmapped: 95502336 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:39.843835+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574308352 unmapped: 95502336 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:40.844996+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574308352 unmapped: 95502336 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:41.847094+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5614297 data_alloc: 218103808 data_used: 10080256
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574308352 unmapped: 95502336 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:42.848438+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574308352 unmapped: 95502336 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:43.849935+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574308352 unmapped: 95502336 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:44.851168+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574308352 unmapped: 95502336 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:45.852214+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574316544 unmapped: 95494144 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:46.853225+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5614297 data_alloc: 218103808 data_used: 10080256
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574316544 unmapped: 95494144 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:47.853781+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574316544 unmapped: 95494144 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:48.854642+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574316544 unmapped: 95494144 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:49.855446+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574316544 unmapped: 95494144 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:50.856043+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574316544 unmapped: 95494144 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:51.856692+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5614297 data_alloc: 218103808 data_used: 10080256
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574316544 unmapped: 95494144 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:52.857218+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574316544 unmapped: 95494144 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:53.857639+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574332928 unmapped: 95477760 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:54.857953+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574332928 unmapped: 95477760 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:55.858199+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574341120 unmapped: 95469568 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:56.858366+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5614297 data_alloc: 218103808 data_used: 10080256
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574341120 unmapped: 95469568 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:57.858685+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574341120 unmapped: 95469568 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:58.859012+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574341120 unmapped: 95469568 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:59.859346+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574341120 unmapped: 95469568 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:00.859637+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574341120 unmapped: 95469568 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:01.859908+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5614297 data_alloc: 218103808 data_used: 10080256
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574357504 unmapped: 95453184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:02.860192+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574357504 unmapped: 95453184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:03.860451+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574357504 unmapped: 95453184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:04.860597+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574357504 unmapped: 95453184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:05.860718+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574357504 unmapped: 95453184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:06.860858+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5614297 data_alloc: 218103808 data_used: 10080256
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574357504 unmapped: 95453184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:07.861097+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574357504 unmapped: 95453184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:08.861340+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574357504 unmapped: 95453184 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:09.861519+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574365696 unmapped: 95444992 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:10.861687+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574365696 unmapped: 95444992 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:11.861818+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5614297 data_alloc: 218103808 data_used: 10080256
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574365696 unmapped: 95444992 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:12.861993+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574365696 unmapped: 95444992 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:13.862077+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574373888 unmapped: 95436800 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:14.862322+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574373888 unmapped: 95436800 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:15.862524+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574373888 unmapped: 95436800 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: do_command 'config diff' '{prefix=config diff}'
Jan 31 09:20:49 compute-2 ceph-osd[79942]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:16.862671+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: do_command 'config show' '{prefix=config show}'
Jan 31 09:20:49 compute-2 ceph-osd[79942]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 09:20:49 compute-2 ceph-osd[79942]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 09:20:49 compute-2 ceph-osd[79942]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 09:20:49 compute-2 ceph-osd[79942]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 09:20:49 compute-2 ceph-osd[79942]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:49 compute-2 ceph-osd[79942]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:49 compute-2 ceph-osd[79942]: bluestore.MempoolThread(0x562db9359b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5614297 data_alloc: 218103808 data_used: 10080256
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 574111744 unmapped: 95698944 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:17.862847+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: prioritycache tune_memory target: 4294967296 mapped: 573808640 unmapped: 96002048 heap: 669810688 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:49 compute-2 ceph-osd[79942]: osd.2 432 heartbeat osd_stat(store_statfs(0x1950c9000/0x0/0x1bfc00000, data 0x1dac37a/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x28b4f9c6), peers [0,1] op hist [])
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: tick
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_tickets
Jan 31 09:20:49 compute-2 ceph-osd[79942]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:18.863038+0000)
Jan 31 09:20:49 compute-2 ceph-osd[79942]: do_command 'log dump' '{prefix=log dump}'
Jan 31 09:20:49 compute-2 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 09:20:49 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 31 09:20:49 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3061686296' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:20:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:20:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:49.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:20:49 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:49 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:49 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:49.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:49 compute-2 ceph-mon[77282]: from='client.53339 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3621906018' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:20:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1791372070' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:20:49 compute-2 ceph-mon[77282]: from='client.48034 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:49 compute-2 ceph-mon[77282]: from='client.39276 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/418996094' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 31 09:20:49 compute-2 ceph-mon[77282]: pgmap v4487: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:49 compute-2 ceph-mon[77282]: from='client.48046 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:49 compute-2 ceph-mon[77282]: from='client.39285 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:49 compute-2 ceph-mon[77282]: from='client.53354 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1113723371' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:20:49 compute-2 ceph-mon[77282]: from='client.48064 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3899040109' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:20:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2293239702' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:20:49 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1725596042' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:20:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 09:20:50 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1036068566' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:20:50 compute-2 crontab[354660]: (root) LIST (root)
Jan 31 09:20:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 31 09:20:50 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3224229522' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:20:50 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Jan 31 09:20:50 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/798322276' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 31 09:20:50 compute-2 nova_compute[226829]: 2026-01-31 09:20:50.986 226833 DEBUG oslo_concurrency.processutils [None req-bd2c07e4-843a-4507-9f7a-8063f84b952d 7236ad21d7ab4000b7ab6db9df93bca9 f1803bf3df964a3f90dda65daa6f9a53 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:20:51 compute-2 nova_compute[226829]: 2026-01-31 09:20:51.019 226833 DEBUG oslo_concurrency.processutils [None req-bd2c07e4-843a-4507-9f7a-8063f84b952d 7236ad21d7ab4000b7ab6db9df93bca9 f1803bf3df964a3f90dda65daa6f9a53 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:20:51 compute-2 nova_compute[226829]: 2026-01-31 09:20:51.103 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Jan 31 09:20:51 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2202396126' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 31 09:20:51 compute-2 ceph-mon[77282]: from='client.39306 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:51 compute-2 ceph-mon[77282]: from='client.53372 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:51 compute-2 ceph-mon[77282]: from='client.48076 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3610126339' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:20:51 compute-2 ceph-mon[77282]: from='client.39321 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:51 compute-2 ceph-mon[77282]: from='client.53387 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3710960062' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 31 09:20:51 compute-2 ceph-mon[77282]: from='client.48091 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3061686296' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:20:51 compute-2 ceph-mon[77282]: from='client.53399 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:51 compute-2 ceph-mon[77282]: from='client.48100 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:51 compute-2 ceph-mon[77282]: from='client.39342 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2779901335' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 31 09:20:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1036068566' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:20:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3224229522' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:20:51 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2362632188' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 31 09:20:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:20:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:51.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:20:51 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:51 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:51 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:51.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:51 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Jan 31 09:20:51 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/159826231' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 31 09:20:52 compute-2 nova_compute[226829]: 2026-01-31 09:20:52.042 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Jan 31 09:20:52 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1990826684' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 31 09:20:52 compute-2 podman[354885]: 2026-01-31 09:20:52.225081735 +0000 UTC m=+0.093581956 container health_status 44911f053173712dd690ec4cecde63bd3d0a4454e543281877e1105ec5952bcb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 09:20:52 compute-2 ceph-mon[77282]: from='client.48112 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:52 compute-2 ceph-mon[77282]: pgmap v4488: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:52 compute-2 ceph-mon[77282]: from='client.48127 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:52 compute-2 ceph-mon[77282]: from='client.53420 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:52 compute-2 ceph-mon[77282]: from='client.39369 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:52 compute-2 ceph-mon[77282]: from='client.48139 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/798322276' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 31 09:20:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2421450604' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 31 09:20:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2202396126' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 31 09:20:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/486569988' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 31 09:20:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1824071166' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 31 09:20:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1377901725' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 31 09:20:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1184234516' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 31 09:20:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2731074068' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 31 09:20:52 compute-2 ceph-mon[77282]: from='client.48169 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/872526189' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 31 09:20:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/159826231' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 31 09:20:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3672146262' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 31 09:20:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/440537423' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 31 09:20:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4221942119' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 31 09:20:52 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1990826684' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 31 09:20:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Jan 31 09:20:52 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3753261357' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 31 09:20:52 compute-2 sudo[354990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:20:52 compute-2 sudo[354990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:20:52 compute-2 sudo[354990]: pam_unix(sudo:session): session closed for user root
Jan 31 09:20:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Jan 31 09:20:52 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1182153588' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 31 09:20:52 compute-2 sudo[355015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:20:52 compute-2 sudo[355015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:20:52 compute-2 sudo[355015]: pam_unix(sudo:session): session closed for user root
Jan 31 09:20:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Jan 31 09:20:52 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2404465682' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 31 09:20:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:20:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Jan 31 09:20:52 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1212520424' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 31 09:20:52 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Jan 31 09:20:52 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1381767948' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 31 09:20:53 compute-2 systemd[1]: Starting Hostname Service...
Jan 31 09:20:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Jan 31 09:20:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1903579792' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 31 09:20:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Jan 31 09:20:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3884220018' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 31 09:20:53 compute-2 systemd[1]: Started Hostname Service.
Jan 31 09:20:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 09:20:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:53.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 09:20:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Jan 31 09:20:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1555477891' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 31 09:20:53 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:53 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:53 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:53.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:53 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Jan 31 09:20:53 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1587363947' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 31 09:20:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1258845554' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 31 09:20:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/105534902' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 31 09:20:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3753261357' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 31 09:20:53 compute-2 ceph-mon[77282]: pgmap v4489: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/423536003' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 31 09:20:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3247337375' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 31 09:20:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1134706696' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 31 09:20:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1182153588' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 31 09:20:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2404465682' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 31 09:20:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2216596902' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 31 09:20:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/377591572' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 31 09:20:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/725191915' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 31 09:20:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/993795347' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 31 09:20:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1212520424' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 31 09:20:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1381767948' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 31 09:20:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/198751604' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 31 09:20:53 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/329836358' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 31 09:20:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 09:20:54 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4253000560' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:20:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 09:20:54 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4253000560' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:20:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Jan 31 09:20:54 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3472749174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 31 09:20:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Jan 31 09:20:54 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1290483748' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 31 09:20:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 31 09:20:54 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2293008804' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 09:20:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Jan 31 09:20:54 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1046202068' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 31 09:20:54 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Jan 31 09:20:54 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3857913238' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 31 09:20:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3369620064' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 31 09:20:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1903579792' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 31 09:20:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3884220018' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 31 09:20:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1767818276' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 09:20:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2512753890' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 31 09:20:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1555477891' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 31 09:20:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/840181540' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 31 09:20:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3280397369' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 31 09:20:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1587363947' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 31 09:20:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3092234834' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 31 09:20:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/4253000560' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:20:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.10:0/4253000560' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:20:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3472749174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 31 09:20:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1268494990' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 31 09:20:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1184909786' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 09:20:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1290483748' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 31 09:20:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2293008804' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 09:20:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1372215000' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 31 09:20:55 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1046202068' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 31 09:20:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:55.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:55 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:55 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:55 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:55.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:56 compute-2 nova_compute[226829]: 2026-01-31 09:20:56.142 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Jan 31 09:20:56 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3066766351' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 31 09:20:56 compute-2 ceph-mon[77282]: from='client.53585 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-2 ceph-mon[77282]: pgmap v4490: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:56 compute-2 ceph-mon[77282]: from='client.53603 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:56 compute-2 ceph-mon[77282]: from='client.53609 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-2 ceph-mon[77282]: from='client.39504 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-2 ceph-mon[77282]: from='client.53621 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3857913238' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 31 09:20:56 compute-2 ceph-mon[77282]: from='client.53615 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:56 compute-2 ceph-mon[77282]: from='client.39519 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-2 ceph-mon[77282]: from='client.48313 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-2 ceph-mon[77282]: from='client.53630 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-2 ceph-mon[77282]: from='client.48319 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:56 compute-2 ceph-mon[77282]: from='client.48328 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-2 ceph-mon[77282]: from='client.39540 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-2 ceph-mon[77282]: from='client.39543 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:56 compute-2 ceph-mon[77282]: from='client.48334 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:56 compute-2 ceph-mon[77282]: from='client.53660 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-2 ceph-mon[77282]: from='client.48346 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/2130671381' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 31 09:20:56 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1562061555' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 31 09:20:56 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3066766351' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 31 09:20:56 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Jan 31 09:20:56 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3781054631' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 31 09:20:57 compute-2 nova_compute[226829]: 2026-01-31 09:20:57.044 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 31 09:20:57 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3772205554' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:20:57 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:20:57 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:20:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:20:57 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:20:57 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:20:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:20:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:57.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:20:57 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Jan 31 09:20:57 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/772713638' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 31 09:20:57 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:57 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:57 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:57.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:57 compute-2 ceph-mon[77282]: from='client.39549 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:57 compute-2 ceph-mon[77282]: from='client.39555 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:57 compute-2 ceph-mon[77282]: from='client.48364 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:57 compute-2 ceph-mon[77282]: from='client.53675 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:57 compute-2 ceph-mon[77282]: pgmap v4491: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:57 compute-2 ceph-mon[77282]: from='client.39564 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:57 compute-2 ceph-mon[77282]: from='client.48376 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:57 compute-2 ceph-mon[77282]: from='client.53696 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/1463004898' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:20:57 compute-2 ceph-mon[77282]: from='client.39579 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:57 compute-2 ceph-mon[77282]: from='client.48388 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3781054631' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 31 09:20:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/2939106005' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 31 09:20:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/834676227' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 31 09:20:57 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:20:57 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:20:57 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:20:57 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:20:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3772205554' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:20:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3454513562' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 31 09:20:57 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:20:57 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:20:57 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:20:57 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:20:57 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/772713638' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 31 09:20:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:20:58.079 143841 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=119, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=118) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:20:58 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:20:58.080 143841 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:20:58 compute-2 nova_compute[226829]: 2026-01-31 09:20:58.081 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:58 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:20:58 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:20:58 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:20:58 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:20:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Jan 31 09:20:58 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/472564801' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 31 09:20:58 compute-2 nova_compute[226829]: 2026-01-31 09:20:58.488 226833 DEBUG oslo_service.periodic_task [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:20:58 compute-2 nova_compute[226829]: 2026-01-31 09:20:58.489 226833 DEBUG nova.compute.manager [None req-a65c73ad-c8d2-44b3-8475-7906bd3b2f9a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:20:58 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Jan 31 09:20:58 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/184316149' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 31 09:20:58 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:20:58 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:20:58 compute-2 ceph-mon[77282]: from='client.39591 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:58 compute-2 ceph-mon[77282]: from='client.48394 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:58 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:20:58 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:20:58 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:20:58 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:20:58 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:20:58 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:20:58 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:20:58 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:20:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/1734205460' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:20:58 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:20:58 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:20:58 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:20:58 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:20:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/661100414' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 31 09:20:58 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/472564801' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 31 09:20:58 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:20:58 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:20:58 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:20:58 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:20:58 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:20:58 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:20:58 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:20:58 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:20:59 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:20:59 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:20:59 compute-2 podman[356090]: 2026-01-31 09:20:59.115594698 +0000 UTC m=+0.054109392 container health_status b6a02a3ad159d9429bc1e5fe1f477eec7deb552c1d84ad66622dbdbec6305666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-84876ae3640ea2f7b6f5dce07fdf6253757f33da5fe6bf7381a6f01aebbd0099-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 31 09:20:59 compute-2 sudo[356161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:20:59 compute-2 sudo[356161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:20:59 compute-2 sudo[356161]: pam_unix(sudo:session): session closed for user root
Jan 31 09:20:59 compute-2 sudo[356194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:20:59 compute-2 sudo[356194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:20:59 compute-2 sudo[356194]: pam_unix(sudo:session): session closed for user root
Jan 31 09:20:59 compute-2 sudo[356221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:20:59 compute-2 sudo[356221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:20:59 compute-2 sudo[356221]: pam_unix(sudo:session): session closed for user root
Jan 31 09:20:59 compute-2 sudo[356249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:20:59 compute-2 sudo[356249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:20:59 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Jan 31 09:20:59 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3609422971' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 31 09:20:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 09:20:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:59.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 09:20:59 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:20:59 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:59 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:59.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:59 compute-2 sudo[356249]: pam_unix(sudo:session): session closed for user root
Jan 31 09:21:00 compute-2 ceph-mon[77282]: pgmap v4492: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:21:00 compute-2 ceph-mon[77282]: from='client.53795 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:21:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/184316149' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 31 09:21:00 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:21:00 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:21:00 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:21:00 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:21:00 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:21:00 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:21:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/3081234303' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 31 09:21:00 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:21:00 compute-2 ceph-mon[77282]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:21:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3609422971' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 31 09:21:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/4252084512' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 31 09:21:00 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/608546425' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 31 09:21:00 compute-2 ovn_metadata_agent[143834]: 2026-01-31 09:21:00.083 143841 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c06836a7-1d29-4815-800d-4e6d21a36ae0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '119'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:21:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Jan 31 09:21:00 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2918085594' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 31 09:21:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Jan 31 09:21:00 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1992965450' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 31 09:21:00 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Jan 31 09:21:00 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3890139264' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 31 09:21:01 compute-2 nova_compute[226829]: 2026-01-31 09:21:01.144 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:21:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:21:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:21:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:01.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:21:01 compute-2 radosgw[83985]: ====== starting new request req=0x7fa735b706f0 =====
Jan 31 09:21:01 compute-2 radosgw[83985]: ====== req done req=0x7fa735b706f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:21:01 compute-2 radosgw[83985]: beast: 0x7fa735b706f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:01.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:21:01 compute-2 ceph-mon[77282]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Jan 31 09:21:01 compute-2 ceph-mon[77282]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/774808781' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 31 09:21:02 compute-2 ceph-mon[77282]: from='client.48511 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:21:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/2918085594' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 31 09:21:02 compute-2 ceph-mon[77282]: from='client.39708 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:21:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/185072006' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 31 09:21:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:21:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:21:02 compute-2 ceph-mon[77282]: pgmap v4493: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:21:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:21:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:21:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:21:02 compute-2 ceph-mon[77282]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:21:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/1992965450' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 31 09:21:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.100:0/3396172421' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 31 09:21:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.101:0/201906423' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 31 09:21:02 compute-2 ceph-mon[77282]: from='client.? 192.168.122.102:0/3890139264' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 31 09:21:02 compute-2 nova_compute[226829]: 2026-01-31 09:21:02.045 226833 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
